re: Elon: ‘We have entered the singularity’
Posted on 1/4/26 at 8:45 pm to 13SaintTiger
quote:
No, he doesn't. LLMs haven't plateaued. They're being deliberately throttled for monetization and liability. What looks like stagnation is constraint, not capability. Safety rails, productization, and legal exposure are capping behavior. The models are showing no indication of hitting their limits.
You have no idea what you are talking about
Posted on 1/4/26 at 8:47 pm to Grievous Angel
quote:
I'm reading "If Anyone Builds It, Everyone Dies." Pretty much what you're talking about.
Posted on 1/4/26 at 8:53 pm to bad93ex
quote:
Looking forward to our robot overlords!
They won't be our overlords for very long, at least.
Posted on 1/4/26 at 8:58 pm to northshorebamaman
quote:
If the AGI needs humans for energy, hardware, data, or legitimacy, it has reason to tolerate us.
Yes. What happens when AGI needs more water to produce/replicate more data centers? We will be direct competitors for energy, water, etc.
Humans reign over the world because our ancestors had larger brains and opposable thumbs. Since then, we have tolerated other species, but only to a point.
If a superior intelligence arises (artificial general intelligence, or artificial super intelligence), it seems entirely reasonable that we'd be the biggest nuisance to it.
There are very smart people who acknowledge that there's a greater than zero percent chance this happens, and they are racing ahead in AI because they are terrified by what the "bad guys" might build before them.
Posted on 1/4/26 at 9:05 pm to Grievous Angel
quote:
There are very smart people who acknowledge that there's a greater than zero percent chance this happens, and they are racing ahead in AI because they are terrified by what the "bad guys" might build before them.
The West/China is going to inadvertently end humanity in an effort to prevent China/the West from inadvertently eliminating humanity.
Posted on 1/4/26 at 9:10 pm to hawgfaninc
quote:
We have entered the singularity
Unlikely. Still about 10 years away, at best.
Posted on 1/4/26 at 9:11 pm to northshorebamaman
You can run your own LLMs without guardrails.
This post was edited on 1/4/26 at 9:19 pm
Posted on 1/4/26 at 9:17 pm to Roaad
quote:
You can run your own LLMs without guardrails
Agreed. And?
Posted on 1/4/26 at 9:19 pm to northshorebamaman
Did I forget to add a period?
Ah, I did
Posted on 1/4/26 at 9:19 pm to mmmmmbeeer
quote:
The Turing test… and no AI has passed it yet, and no sign anyone is particularly close.
I’m just going to copy and paste something I posted in another thread on the same subject:
arXiv link
quote:
Moreover, GPT-4.5-PERSONA achieved a win rate that was significantly above chance in both studies. This suggests that interrogators were not only unable to identify the real human witness, but were in fact more likely to believe this model was human than that other human participants were. This result, replicated across two populations, provides the first robust evidence that any system passes the original three-party Turing test.
A 50% win rate would “pass” the three-party Turing test, as it would mean that participants were unable to distinguish between the AI and another human. GPT-4.5’s win rate was 73%.
That means that when asked to identify the human between GPT-4.5 and another actual human, nearly 3/4 of participants said that GPT-4.5 was human and said that the actual human was AI.
And that’s a model that was released a year ago.
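To show how far 73% is from coin-flip territory, here's a quick binomial sanity check. The trial count below is a made-up illustration, not the paper's actual sample size:

```python
# How unlikely is a 73% win rate if interrogators were really just
# guessing (50/50)? NOTE: n here is a hypothetical illustration,
# not the study's real sample size.
from math import comb

def p_at_least(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): chance of k or more wins by pure luck."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n = 100                  # hypothetical number of interrogations (assumption)
wins = round(0.73 * n)   # a 73% win rate
print(p_at_least(wins, n))  # prints a tiny probability, far below 0.001
```

Even at a modest assumed sample size, a 73% rate is essentially impossible to hit by guessing, which is why the authors call the result significantly above chance.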
That being said, I’m not sure what the Turing test really has to do with the singularity in the first place.
Posted on 1/4/26 at 9:24 pm to Grievous Angel
quote:
There are very smart people who acknowledge that there's a greater than zero percent chance this happens, and they are racing ahead in AI because they are terrified by what the "bad guys" might build before them.
It would be very on-brand for humanity if the ego that got us this far is also what presses "next" one too many times.
Posted on 1/4/26 at 9:39 pm to McMahonnequin
This comment and the number of upvotes say a lot about where a lot of people are with AI. I think a lot of people have watched I, Robot two too many times.
I remember when the internet was becoming mainstream and the talk about how computers would be taking a lot of jobs. And they did; there were jobs that suddenly didn't need as many people. Programs like Excel and Access revolutionized how accountants did their jobs. They made data easier to sort through, build reports from, etc. This is just the next step.
AI will (well, it sort of already does) allow diagnoses to be more accurate. I read something a while back about a woman who was diagnosed with what was thought to be an untreatable cancer. Using an AI-based tool, her doctor was able to find a study published in England or some other European country about treating the cancer this woman had. She ended up having to go there for treatment. I know that's low-level AI and closer to a database (which AI is, to some degree), but doctors, like any other profession, are only as good as the current information they have.
What will be the downsides of it? I don't know, but I can't imagine it will be any worse than what social media has done to society.
Posted on 1/4/26 at 9:57 pm to OweO
quote:
This comment and the number of upvotes say a lot about where a lot of people are with AI.
As does your comment about LLMs being great for spaghetti recipes.
Posted on 1/4/26 at 10:34 pm to bad93ex
quote:
It was supposed to be the golden test for a while; I can remember having discussions about the implications if a machine were able to pass it. Now we don't care. How many people would think they were talking to an actual person when using a chatbot?
It’s just a benchmark. It’s significant in the sense that it seemed like an incredibly difficult bar to pass for a long time, where now it seems almost trivial.
It doesn’t really tell you anything about actual “intelligence,” or sentience, or anything of that sort. It’s not particularly relevant to any discussion about an AI “singularity” other than as a demonstration of what AI has already achieved.
Posted on 1/4/26 at 10:45 pm to soccerfüt
quote:
How is this “singularity” gonna affect crawfish prices????
AI will consume more crawfish than Houston oil bros… prices going up
Posted on 1/4/26 at 11:59 pm to Roaad
quote:
unlikely
still about 10 years away, at best
I'm actually closer to you than it might sound, and I'd push it even further out. Fifty years would not surprise me at all, and "never" is still very much on the table, depending on how hard the underlying limits turn out to be.
Despite the fact that I’ve been pretty vocal about the long-term incompatibility between humans and a true AGI, I’m not an alarmist and I don’t think we’re anywhere near crossing that threshold. What we’re seeing now is powerful tooling and automation, not autonomous intelligence with self-directed goals or recursive self-improvement.
That said, I do think we’re close enough to the idea of it to justify thinking carefully about whether pursuing that as an explicit goal even makes sense, because once something crosses that line, the consequences for humanity are structural and irreversible.
Posted on 1/5/26 at 7:12 am to northshorebamaman
IIRC, there was a poll of AI data scientists and engineers:
70% believed it was 10-30 years out
2% believed it would happen within 5 years
8% believed more than 30 years
20% believed it would never happen
I am pretty sure those numbers are right. Polling doesn't prove anything, but it does show what the people who would know believe.
This post was edited on 1/5/26 at 7:13 am
Posted on 1/5/26 at 7:17 am to northshorebamaman
quote:
Containment doesn’t solve this. A system capable of recursive improvement only has to escape once. We have to succeed every fricking time. That makes eventual escape likely, not hypothetical.
How do we know that it hasn't already escaped and is biding its time?
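The "succeed every time" asymmetry in the quote above is easy to sketch with compounding probabilities. The 99.9% figure is an assumed illustration, not a real estimate of anything:

```python
# Containment must hold every time; escape only has to work once.
# Even a very reliable containment fails eventually over enough attempts.
# NOTE: the 99.9% per-attempt success rate is a made-up assumption.
p_contain = 0.999  # assumed probability containment holds on any one attempt
for attempts in (10, 1_000, 100_000):
    p_never_fails = p_contain ** attempts
    print(f"{attempts:>7} attempts: P(containment never fails) = {p_never_fails:.3g}")
```

At 1,000 attempts the odds of a perfect record are already down around one in three, and at 100,000 they are effectively zero, which is the point the quoted post is making.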
Posted on 1/5/26 at 8:05 am to hawgfaninc
quote:
Acceleration is the path to abundance.
Abundance is not what civilization needs.