Elon: ‘We have entered the singularity’ | Page 5 | O-T Lounge

re: Elon: ‘We have entered the singularity’

Posted by northshorebamaman
Cochise County AZ
Member since Jul 2009
37799 posts
Posted on 1/4/26 at 8:45 pm to
quote:


You have no idea what you are talking about
No, he doesn't. LLMs haven’t plateaued. They’re being deliberately throttled for monetization and liability. What looks like stagnation is constraint, not capability. Safety rails, productization, and legal exposure are capping behavior. The models are showing no indication of hitting their limits.
Posted by northshorebamaman
Cochise County AZ
Member since Jul 2009
37799 posts
Posted on 1/4/26 at 8:47 pm to
quote:



I'm reading "If anyone builds it, everyone dies." Pretty much what you're talking about.
Posted by JohnnyKilroy
Cajun Navy Vice Admiral
Member since Oct 2012
40624 posts
Posted on 1/4/26 at 8:53 pm to
quote:

Looking forward to our robot overlords!


They won’t be our overlords for very long at least.
Posted by Grievous Angel
Tuscaloosa, AL
Member since Dec 2008
10798 posts
Posted on 1/4/26 at 8:58 pm to
quote:

If the AGI needs humans for energy, hardware, data, or legitimacy, it has reason to tolerate us.


Yes. What happens when AGI needs more water to produce/replicate more data centers? We will be direct competitors for energy, water, etc.

Humans reign over the world because our ancestors had larger brains and opposable thumbs. Since then we tolerate other species, but only to a point.

If a superior intelligence arises (artificial general intelligence, or artificial super intelligence), it seems entirely reasonable that we'd be the biggest nuisance to it.

There are very smart people who acknowledge that there's a greater than zero percent chance this happens, and they are racing ahead in AI because they are terrified by what the "bad guys" might build before them.


Posted by JohnnyKilroy
Cajun Navy Vice Admiral
Member since Oct 2012
40624 posts
Posted on 1/4/26 at 9:05 pm to
quote:

There are very smart people who acknowledge that there's a greater than zero percent chance this happens, and they are racing ahead in AI because they are terrified by what the "bad guys" might build before them.


The West/China is going to inadvertently end humanity in an effort to prevent China/the West from inadvertently eliminating humanity.
Posted by Roaad
White Privilege Broker
Member since Aug 2006
82695 posts
Posted on 1/4/26 at 9:10 pm to
quote:

We have entered the singularity
unlikely

still about 10 years away, at best
Posted by Roaad
White Privilege Broker
Member since Aug 2006
82695 posts
Posted on 1/4/26 at 9:11 pm to
You can run your own LLMs without guardrails.
This post was edited on 1/4/26 at 9:19 pm
Posted by northshorebamaman
Cochise County AZ
Member since Jul 2009
37799 posts
Posted on 1/4/26 at 9:17 pm to
quote:

You can run your own LLMs without guardrails
Agreed. And?
Posted by Roaad
White Privilege Broker
Member since Aug 2006
82695 posts
Posted on 1/4/26 at 9:19 pm to
Did I forget to add a period?

Ah, I did
Posted by lostinbr
Baton Rouge, LA
Member since Oct 2017
12719 posts
Posted on 1/4/26 at 9:19 pm to
quote:

The Turing test….and no AI has passed it yet and no sign anyone is particularly close.

I’m just going to copy and paste something I posted in another thread on the same subject:

arXiv link
quote:

Moreover, GPT-4.5-PERSONA achieved a win rate that was significantly above chance in both studies. This suggests that interrogators were not only unable to identify the real human witness, but were in fact more likely to believe this model was human than that other human participants were. This result, replicated across two populations, provides the first robust evidence that any system passes the original three-party Turing test.

A 50% win rate would “pass” the three-party Turing test, as it would mean that participants were unable to distinguish between the AI and another human. GPT-4.5’s win rate was 73%.

That means that when asked to identify the human between GPT-4.5 and another actual human, nearly 3/4 of participants said that GPT-4.5 was human and said that the actual human was AI.

And that’s a model that was released a year ago.

That being said, I’m not sure what the Turing test really has to do with the singularity in the first place.
Posted by Grievous Angel
Tuscaloosa, AL
Member since Dec 2008
10798 posts
Posted on 1/4/26 at 9:20 pm to
Posted by northshorebamaman
Cochise County AZ
Member since Jul 2009
37799 posts
Posted on 1/4/26 at 9:24 pm to
quote:

There are very smart people who acknowledge that there's a greater than zero percent chance this happens, and they are racing ahead in AI because they are terrified by what the "bad guys" might build before them.
It would be very on-brand for humanity if the ego that got us this far is also what presses “next” one too many times.
Posted by OweO
Plaquemine, La
Member since Sep 2009
121118 posts
Posted on 1/4/26 at 9:39 pm to
This comment and the number of upvotes say a lot about where a lot of people are with AI. I think a lot of people have watched I, Robot two too many times.

I remember when the internet was becoming mainstream and the talk about how computers would be taking a lot of jobs. And it did; there were jobs that all of a sudden didn't need as many people. Programs like Excel and Access revolutionized how accountants did their jobs. They made data easier to sort through, make reports with, etc. This is just the next step.

AI will (well, it sort of does now) allow diagnoses to be more accurate. I read something a while back about a woman who was diagnosed with what was thought to be an untreatable cancer. Using an AI-based tool, her doctor was able to find a study published in England or some other European country about treating the cancer this woman had. She ended up having to go there to get treatment. I know that's low-level AI and closer to a database (which AI is, to some degree), but doctors, like people in any other profession, are only as good as the current information they have.

What will be the downsides of it? I don't know, but I can't imagine it will be any worse than what social media has done to society.
Posted by northshorebamaman
Cochise County AZ
Member since Jul 2009
37799 posts
Posted on 1/4/26 at 9:57 pm to
quote:


This comment and the number of upvotes says a lot about where a lot of people are with AI.
As does your comment about LLMs being great for spaghetti recipes.
Posted by lostinbr
Baton Rouge, LA
Member since Oct 2017
12719 posts
Posted on 1/4/26 at 10:34 pm to
quote:

It was supposed to be the golden test for a while, because I can remember having discussions about the implications if a machine were able to pass it. Now we don't care; how many people would think they were talking to an actual person when using a chatbot?

It’s just a benchmark. It’s significant in the sense that it seemed like an incredibly difficult bar to pass for a long time, where now it seems almost trivial.

It doesn’t really tell you anything about actual “intelligence,” or sentience, or anything of that sort. It’s not particularly relevant to any discussion about an AI “singularity” other than as a demonstration of what AI has already achieved.
Posted by LSUFanHouston
NOLA
Member since Jul 2009
40506 posts
Posted on 1/4/26 at 10:45 pm to
quote:

How is this “singularity” gonna affect crawfish prices????


AI will consume more crawfish than Houston oil bros… prices going up
Posted by northshorebamaman
Cochise County AZ
Member since Jul 2009
37799 posts
Posted on 1/4/26 at 11:59 pm to
quote:

unlikely

still about 10 years away, at best
I’m actually closer to you than it might sound and I’d push it even further out. Fifty years would not surprise me at all, and “never” is still very much on the table depending on how hard the underlying limits turn out to be.

Despite the fact that I’ve been pretty vocal about the long-term incompatibility between humans and a true AGI, I’m not an alarmist and I don’t think we’re anywhere near crossing that threshold. What we’re seeing now is powerful tooling and automation, not autonomous intelligence with self-directed goals or recursive self-improvement.

That said, I do think we’re close enough to the idea of it to justify thinking carefully about whether pursuing that as an explicit goal even makes sense, because once something crosses that line, the consequences for humanity are structural and irreversible.
Posted by Roaad
White Privilege Broker
Member since Aug 2006
82695 posts
Posted on 1/5/26 at 7:12 am to
Iirc, there was a poll of AI data scientists and engineers. 70% believed it was 10-30 years out

2% believed it would happen within 5 years

8% believed more than 30 years

20% believed it will never happen

I am pretty sure those numbers are right. Now polling doesn't prove anything, but it does show what the people who would know believe
This post was edited on 1/5/26 at 7:13 am
Posted by bad93ex
Walnut Cove
Member since Sep 2018
35227 posts
Posted on 1/5/26 at 7:17 am to
quote:

Containment doesn’t solve this. A system capable of recursive improvement only has to escape once. We have to succeed every fricking time. That makes eventual escape likely, not hypothetical.


How do we know that it hasn't already escaped and is biding its time?
Posted by willeaux
Member since Jan 2006
2984 posts
Posted on 1/5/26 at 8:05 am to
quote:

Acceleration is the path to abundance.


Abundance is not what civilization needs.