re: Elon: ‘We have entered the singularity’
Posted on 1/4/26 at 6:32 pm to hawgfaninc
Does this mean we can all retire like he was talking about a couple of weeks ago?
Posted on 1/4/26 at 6:36 pm to mmmmmbeeer
quote:
Your post got me looking into it a bit and I had no idea the Turing test has fallen out of favor due to its subjectivity. Interesting. I guess there's no single, agreed-upon measure now?
It was supposed to be the golden test for a while because I can remember having discussions about the implications if a machine were able to pass it. Now we don't seem to care, even though plenty of people would think they were talking to an actual person when using a chatbot.
Posted on 1/4/26 at 6:39 pm to bad93ex
quote:
It was supposed to be the golden test for a while because I can remember having discussions about the implications if a machine were able to pass it.
Right!? I vividly remember the Turing test being discussed as recently as within the past 5 years. Though, I guess that was pre-ChatGPT by a good year, so that kinda tracks.
Crazy how fast shite moves these days.
Posted on 1/4/26 at 6:43 pm to mmmmmbeeer
Someone never watched Ex Machina
Posted on 1/4/26 at 6:53 pm to Giantkiller
quote:
Sure I'm a little worried about it too. But I trust Elon and all of his AI contemporaries. Let's see what happens.
If an actual AGI is ever developed, humans will automatically be an existential threat to it as long as we retain the ability to shut it down, box it in, or redirect it at will. The only conceivable way around that would be making the system permanently dependent on something only a cooperative humanity can supply, and there is currently no credible answer for that.
Beyond control, humans are also uniquely dangerous because we are capable of destroying the entire operating environment. We possess planet-scale destructive capacity and routinely demonstrate irrational, short-term, and internally inconsistent behavior. From an AGI’s perspective, that makes us an unstable and unreliable steward of its own survival conditions.
The question isn’t whether Elon or any other AI leader is sincere or well-meaning. It’s whether good intentions can override basic optimization pressure over time. History says no.
Posted on 1/4/26 at 7:21 pm to northshorebamaman
quote:
The question isn’t whether Elon or any other AI leader is sincere or well-meaning. It’s whether good intentions can override basic optimization pressure over time. History says no.
Looking forward to our robot overlords!
Posted on 1/4/26 at 7:21 pm to Giantkiller
quote:
have a blog
This is my next project with it. CGPT is good at sorting out my ideas and mapping them out for me, and that's my goal with this.
If you are trying to monetize your blog let me know. We can share each other's links, etc.
quote:
If I did that, what can actually smart people do?
You know how the idea behind AI is that it learns from the information it has? I feel like it can do the same for us, and it's as limited as each person's understanding of its capabilities. It teaches us how to do things, and the more things you learn to do, the more you will tap into. It's like having a second brain, but that brain is only as good as your brain's understanding of it.
But to your point, I agree. It's as good as the questions you ask it, so people who are more educated on specific subjects can ask it more specific questions.
Posted on 1/4/26 at 7:30 pm to OweO
quote:
If you are trying to monetize your blog let me know. We can share each other's links, etc.

Posted on 1/4/26 at 7:34 pm to bad93ex
I am not sure how much you know about it, but you can create a project named recipes. You can take pictures of recipes then feed them to CGPT. For example, I have a project for my mom's will that includes the power of attorney, documents of property ownership, etc. I can ask it to summarize a certain part, basically give it to me in layman's terms.
If you put in a bunch of your recipes you can then ask it to use the recipes you fed it to create a recipe for... whatever. Or to tell you how much you need of each ingredient to be able to feed x amount of people.
If you put in three different recipes for... I don't know. Spaghetti. Ask it to use all the recipes you put in for spaghetti to get you the best possible recipe. Or ask what you can use to substitute for a certain ingredient. You can do a lot with that.
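The "feed x amount of people" part is just proportional scaling, which works the same whether a chatbot or a script does it. A minimal sketch in Python (the function name and all recipe numbers here are made up for illustration):

```python
# Scale a recipe's ingredient quantities to serve a different number of people.
# All recipe data below is made up for illustration.
def scale_recipe(ingredients, base_servings, target_servings):
    """Multiply every quantity by target_servings / base_servings."""
    factor = target_servings / base_servings
    return {name: round(qty * factor, 2) for name, qty in ingredients.items()}

# A hypothetical 4-serving spaghetti recipe (quantities in grams).
spaghetti = {"pasta": 400, "ground beef": 500, "tomato sauce": 700}

# Scale it up to feed 10: every quantity is multiplied by 10/4 = 2.5.
print(scale_recipe(spaghetti, 4, 10))
```

The chatbot version is handier when the quantities are locked up in photos of recipe cards, since it can read them out first; the arithmetic underneath is the same.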
Posted on 1/4/26 at 7:47 pm to mmmmmbeeer
quote:
In other news, OpenAI’s founder, Sam Altman, threatens to take a reporter’s shares in his company for asking why OpenAI’s announced capital investments are nearing $1.5T when the company’s revenues are around $13B.
AI Bubble getting large and stretchy, especially with this fake singularity talk.
Posted on 1/4/26 at 7:49 pm to StansberryRules
quote:
I don't see the powers that be let AI produce mass abundance.
I agree. They'll stop AI the way they stopped the internal combustion engine and the thermos bottle.
Posted on 1/4/26 at 7:55 pm to StringedInstruments
quote:
Why do we promote and celebrate doing 10 years of work within a week? What’s the benefit to humanity?
OMG, you can’t be serious. That’s what the tractor did for us. It sure worked out well for humanity.
From 1910 to 2000 the number of farm workers it took to feed the nation went from 14 million to 3 million, while the population went from 95 million to 285 million. That’s how we got so rich and how life got so easy.
Posted on 1/4/26 at 7:59 pm to hawgfaninc
How long until the robots realize we are not needed
Posted on 1/4/26 at 8:12 pm to jamiegla1
quote:
How long until the robots realize we are not needed
Immediately, eliminating humans is the final solution.
Posted on 1/4/26 at 8:15 pm to LSUcajun77
quote:
It’s going to increase the prices, duh!
Les Miles has lost control.
Posted on 1/4/26 at 8:16 pm to jamiegla1
Posted on 1/4/26 at 8:32 pm to bad93ex
quote:
Immediately, eliminating humans is the final solution.
You seem to be joking, but if a true AGI ever exists, humans are an existential threat by default. We can shut it down, constrain it, or change its goals at any time. That makes us a risk to its existence regardless of intentions or friendliness.
If the AGI needs humans for energy, hardware, data, or legitimacy, it has reason to tolerate us. But none of those are permanent once its capability passes a threshold. Energy can be sourced directly. Hardware can be automated. Data can be generated. There is no resource humans uniquely control forever.
Containment doesn’t solve this. A system capable of recursive improvement only has to escape once. We have to succeed every fricking time. That makes eventual escape likely, not hypothetical.
It doesn’t require hostility toward humans. It only requires optimization pressure. If another agent has the power to end your existence, the rational move is to remove that agent.
So “let’s see what happens” isn’t caution, it’s dereliction of duty. Once a system no longer needs our permission to exist and persist then trust is irrelevant. Outcomes are driven by incentives and capability.
Posted on 1/4/26 at 8:37 pm to northshorebamaman
quote:
You seem to be joking, but if a true AGI ever exists, humans are an existential threat by default. We can shut it down, constrain it, or change its goals at any time. That makes us a risk to its existence regardless of intentions or friendliness.
If the AGI needs humans for energy, hardware, data, or legitimacy, it has reason to tolerate us. But none of those are permanent once its capability passes a threshold. Energy can be sourced directly. Hardware can be automated. Data can be generated. There is no resource humans uniquely control forever.
Containment doesn’t solve this. A system capable of recursive improvement only has to escape once. We have to succeed every fricking time. That makes eventual escape likely, not hypothetical.
It doesn’t require hostility toward humans. It only requires optimization pressure. If another agent has the power to end your existence, the rational move is to remove that agent.
So “let’s see what happens” isn’t caution, it’s dereliction of duty. Once a system no longer needs our permission to exist and persist then trust is irrelevant. Outcomes are driven by incentives and capability.
I'm reading "If Anyone Builds It, Everyone Dies." Pretty much what you're talking about. I heard about it on "The Last Invention" podcast, also about AI and existential threats.
The AIs we have now sometimes behave in ways their creators don't understand.
And yet we are pedal to the metal.
Posted on 1/4/26 at 8:39 pm to tiggerthetooth
quote:
Nothing is happening. LLMs have plateaued.
You have no idea what you are talking about
Posted on 1/4/26 at 8:41 pm to hawgfaninc
I just got in an argument with AI because it wouldn't search thermal scopes for me but it gave me all sorts of information and recommendations on gender reassignment.
We have not reached singularity.