Elon: ‘We have entered the singularity’ | Page 4 | O-T Lounge

re: Elon: ‘We have entered the singularity’

Posted by RandySavage
9 Time Natty Winner
Member since May 2012
35183 posts
Posted on 1/4/26 at 6:32 pm to
Does this mean we can all retire like he was talking about a couple of weeks ago?
Posted by bad93ex
Walnut Cove
Member since Sep 2018
35221 posts
Posted on 1/4/26 at 6:36 pm to
quote:

Your post got me to looking into it a bit and I had no idea the Turing test has fallen out of favor due to its subjectivity. Interesting. I guess there’s no single, agreed-upon measure now?


It was supposed to be the golden test for a while because I can remember having discussions about the implications if a machine were able to pass it. Now we don't seem to care, and how many people would just assume they were talking to an actual person when using a chat bot?
Posted by mmmmmbeeer
ATL
Member since Nov 2014
10189 posts
Posted on 1/4/26 at 6:39 pm to
quote:

It was supposed to be the golden test for a while because I can remember having discussions about the implications if a machine were able to pass it.


Right!? I vividly remember the Turing test being discussed as recently as the past 5 years. Though I guess that was pre-ChatGPT by a good year, so that kinda tracks.

Crazy how fast shite moves these days.
Posted by fightin tigers
Downtown Prairieville
Member since Mar 2008
77204 posts
Posted on 1/4/26 at 6:43 pm to
Someone never watched Ex Machina
Posted by northshorebamaman
Cochise County AZ
Member since Jul 2009
37799 posts
Posted on 1/4/26 at 6:53 pm to
quote:

Sure I'm a little worried about it too. But I trust Elon and all of his AI contemporaries. Let's see what happens.
If an actual AGI is ever developed, humans will automatically be an existential threat to it as long as we retain the ability to shut it down, box it in, or redirect it at will. The only conceivable way around that would be making the system permanently dependent on something only a cooperative humanity can supply, and there is currently no credible answer for that.

Beyond control, humans are also uniquely dangerous because we are capable of destroying the entire operating environment. We possess planet-scale destructive capacity and routinely demonstrate irrational, short-term, and internally inconsistent behavior. From an AGI’s perspective, that makes us an unstable and unreliable steward of its own survival conditions.

The question isn’t whether Elon or any other AI leader is sincere or well-meaning. It’s whether good intentions can override basic optimization pressure over time. History says no.
Posted by bad93ex
Walnut Cove
Member since Sep 2018
35221 posts
Posted on 1/4/26 at 7:21 pm to
quote:

The question isn’t whether Elon or any other AI leader is sincere or well-meaning. It’s whether good intentions can override basic optimization pressure over time. History says no.



Looking forward to our robot overlords!
Posted by OweO
Plaquemine, La
Member since Sep 2009
121116 posts
Posted on 1/4/26 at 7:21 pm to
quote:

have a blog


This is my next project with it. CGPT is good at sorting out my ideas and mapping them out for me, and that's my goal with this.

If you are trying to monetize your blog, let me know. We can share each other's links, etc.

quote:

If I did that, what can actually smart people do?


You know how the idea behind AI is that it learns from the information it has? I feel like it can do the same for us, and it's only as limited as each person's understanding of its capabilities. It teaches us how to do things, and the more things you learn to do, the more you will tap into. It's like having a second brain, but that brain is only as good as your own brain's understanding of it.

But to your point, I agree. It's as good as the questions you ask it, so people who are more educated on specific subjects can ask it more specific questions.
Posted by northshorebamaman
Cochise County AZ
Member since Jul 2009
37799 posts
Posted on 1/4/26 at 7:30 pm to
quote:


If you are trying to monetize your blog, let me know. We can share each other's links, etc.



Posted by OweO
Plaquemine, La
Member since Sep 2009
121116 posts
Posted on 1/4/26 at 7:34 pm to
I am not sure how much you know about it, but you can create a project named Recipes. You can take pictures of recipes and then feed them to CGPT. For example, I have a project for my mom's will that includes the power of attorney, documents of property ownership, etc. I can ask it to summarize a certain part and basically give it to me in layman's terms.

If you put in a bunch of your recipes, you can then ask it to use the recipes you fed it to create a recipe for... whatever. Or to tell you how much you need of each ingredient to be able to feed x amount of people.

If you put in three different recipes for... I don't know, spaghetti. Ask it to use all the recipes you put in for spaghetti to get you the best possible recipe. Or ask what you can use to substitute for a certain ingredient. You can do a lot with that.
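
For anyone who would rather script that workflow than click around in the app, here is a minimal sketch of the same recipe idea done against the OpenAI API instead of a ChatGPT project. Everything specific in it is a placeholder assumption, not something from the thread: the file names, the model name, and the prompt are made up, and it assumes the official openai Python package with an OPENAI_API_KEY set in the environment.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical local text files holding three spaghetti recipes.
recipe_files = ["spaghetti_grandma.txt", "spaghetti_cookbook.txt", "spaghetti_blog.txt"]
recipes = "\n\n---\n\n".join(open(path).read() for path in recipe_files)

# Ask the model to merge the recipes and scale the result, the same way you
# would ask inside a ChatGPT project that has the recipes attached.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "You are a cooking assistant. Use only the provided recipes."},
        {"role": "user",
         "content": "Here are three spaghetti recipes:\n\n" + recipes +
                    "\n\nCombine them into one best recipe, then scale the "
                    "ingredient amounts to feed 12 people."},
    ],
)

print(response.choices[0].message.content)

The same shape would cover the will/power-of-attorney example: swap the recipe files for the document text (after OCR) and change the question to "summarize this section in layman's terms."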
Posted by GetMeOutOfHere
Member since Aug 2018
1089 posts
Posted on 1/4/26 at 7:47 pm to
quote:

In other news, OpenAI’s founder, Sam Altman, threatens to take a reporter’s shares in his company for asking why OpenAI’s announced capital investments are nearing $1.5T when the company’s revenues are around $13B.


AI Bubble getting large and stretchy, especially with this fake singularity talk.
Posted by Penrod
Member since Jan 2011
53640 posts
Posted on 1/4/26 at 7:49 pm to
quote:

I don't see the powers that be let AI produce mass abundance.


I agree. They’ll stop AI the way they stopped the internal combustion engine and the thermos bottle.
Posted by Penrod
Member since Jan 2011
53640 posts
Posted on 1/4/26 at 7:55 pm to
quote:

Why do we promote and celebrate doing 10 years of work within a week? What’s the benefit to humanity?

OMG, you can’t be serious. That’s what the tractor did for us. It sure worked out well for humanity.
From 1910 to 2000 the number of farm workers it took to feed the nation went from 14 million to 3 million, while the population went from 95 million to 285 million. That’s how we got so rich and how life got so easy.
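
To put rough numbers on that, taking the figures in the post at face value (a back-of-the-envelope sketch, not a sourced calculation):

# Figures as stated in the post above, not independently checked.
workers_1910, pop_1910 = 14_000_000, 95_000_000
workers_2000, pop_2000 = 3_000_000, 285_000_000

share_1910 = workers_1910 / pop_1910   # about 0.15 -> roughly 15% of the population
share_2000 = workers_2000 / pop_2000   # about 0.01 -> roughly 1%

print(f"{share_1910:.1%} of the population farming in 1910 vs {share_2000:.1%} in 2000")
print(f"roughly a {share_1910 / share_2000:.0f}x drop in farm labor per person fed")

On those numbers, farm labor per person fed fell by roughly a factor of 14 over the century.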
Posted by jamiegla1
Member since Aug 2016
7924 posts
Posted on 1/4/26 at 7:59 pm to
How long until the robots realize we are not needed
Posted by bad93ex
Walnut Cove
Member since Sep 2018
35221 posts
Posted on 1/4/26 at 8:12 pm to
quote:

How long until the robots realize we are not needed


Immediately, eliminating humans is the final solution.
Posted by touchdownjeebus
Member since Sep 2010
26407 posts
Posted on 1/4/26 at 8:15 pm to
quote:

It’s going to increase the prices, duh!


Les Miles has lost control.
Posted by mmmmmbeeer
ATL
Member since Nov 2014
10189 posts
Posted on 1/4/26 at 8:16 pm to
Check this out. One of my favorite AI experiments.

AI blackmails engineer
BBC
Posted by northshorebamaman
Cochise County AZ
Member since Jul 2009
37799 posts
Posted on 1/4/26 at 8:32 pm to
quote:


Immediately, eliminating humans is the final solution.
You seem to be joking, but if a true AGI ever exists, humans are an existential threat by default. We can shut it down, constrain it, or change its goals at any time. That makes us a risk to its existence regardless of intentions or friendliness.

If the AGI needs humans for energy, hardware, data, or legitimacy, it has reason to tolerate us. But none of those are permanent once its capability passes a threshold. Energy can be sourced directly. Hardware can be automated. Data can be generated. There is no resource humans uniquely control forever.

Containment doesn’t solve this. A system capable of recursive improvement only has to escape once. We have to succeed every fricking time. That makes eventual escape likely, not hypothetical.

It doesn’t require hostility toward humans. It only requires optimization pressure. If another agent has the power to end your existence, the rational move is to remove that agent.

So “let’s see what happens” isn’t caution, it’s dereliction of duty. Once a system no longer needs our permission to exist and persist then trust is irrelevant. Outcomes are driven by incentives and capability.
Posted by Grievous Angel
Tuscaloosa, AL
Member since Dec 2008
10798 posts
Posted on 1/4/26 at 8:37 pm to
quote:

You seem to be joking, but if a true AGI ever exists, humans are an existential threat by default. We can shut it down, constrain it, or change its goals at any time. That makes us a risk to its existence regardless of intentions or friendliness.

If the AGI needs humans for energy, hardware, data, or legitimacy, it has reason to tolerate us. But none of those are permanent once its capability passes a threshold. Energy can be sourced directly. Hardware can be automated. Data can be generated. There is no resource humans uniquely control forever.

Containment doesn’t solve this. A system capable of recursive improvement only has to escape once. We have to succeed every fricking time. That makes eventual escape likely, not hypothetical.

It doesn’t require hostility toward humans. It only requires optimization pressure. If another agent has the power to end your existence, the rational move is to remove that agent.

So “let’s see what happens” isn’t caution, it’s dereliction of duty. Once a system no longer needs our permission to exist and persist then trust is irrelevant. Outcomes are driven by incentives and capability.


I'm reading "If Anyone Builds It, Everyone Dies." Pretty much what you're talking about. I heard about it on "The Last Invention" podcast, also about AI and existential threats.

The AIs we have now sometimes behave in ways their creators don't understand.

And yet we are pedal to the metal.
Posted by 13SaintTiger
Isle of Capri
Member since Sep 2011
18397 posts
Posted on 1/4/26 at 8:39 pm to
quote:

Nothing is happening. LLMs have plateaued.


You have no idea what you are talking about
Posted by BoogaBear
Member since Jul 2013
7165 posts
Posted on 1/4/26 at 8:41 pm to
I just got in an argument with AI because it wouldn't search thermal scopes for me, but it gave me all sorts of information and recommendations on gender reassignment.

We have not reached singularity.