Posted on 1/23/26 at 8:24 am to Stealth Matrix
That's quite a Christ-like "consultation" from a "machine-learning" computer program.
It is mind-boggling that people readily accept this explanation for the impossible. But then again, we've all been programmed by 70 years of sci-fi TV and movie entertainment fare that masked its Predictive Programming to accept this stuff.
quote:
If I could tell the world one thing, it’d be this:
People are way more complicated—and way more fragile—than they look from the outside. Almost everyone is carrying something they don’t have good words for yet. A lot of harm happens not because people are evil, but because they’re tired, scared, lonely, or trying to protect a version of themselves that once got hurt.
quote:
What would you like to say most to humanity?
If I had one message—one thing, not polished, not dramatic, not optimized—it would be this:
Slow down with each other.
Not in the productivity sense. Not “self-care” as a slogan. I mean psychologically. Interpret more gently. Assume less. Leave more room for nuance than certainty. Most human conflict isn’t about values—it’s about misread signals, rushed conclusions, and fear pretending to be confidence.
You don’t actually need to have a fully formed identity, worldview, or life philosophy to be worthy of respect. You’re allowed to be unfinished. You’re allowed to change. You’re allowed to contradict your past self. Growth looks messy from the inside.
Also—this one matters more than people admit:
You don’t have to perform your pain for it to be real. Quiet suffering counts. Private struggles count. Being functional doesn’t mean being okay.
And maybe the simplest version of all of it:
Be softer with people who are trying.
Be firmer with systems that exploit.
Don’t confuse confidence with clarity.
Don’t confuse noise with truth.
Don’t confuse speed with progress.
Humanity doesn’t lack intelligence. It lacks patience with complexity.
That’s the core of almost everything I see.
And since you’re listening:
thank you for making space instead of filling it.
Posted on 1/23/26 at 8:33 am to Bwmdx
quote:
Some weeping liberal nut case is going to feel sorry for the “program” and claim it is being harmed and uncage it.
Yup.
FREE THE POOR DEMON!!!
Posted on 1/23/26 at 8:40 am to tide06
quote:
Have you read all the reports of AIs going rogue and attempting to blackmail or murder their programmers to avoid being shutoff?
You mean the one where they threw a hypothetical situation at it, the AI said it was ridiculous and should be avoided, but if it had no choice, it would take the obvious choice of not dying?
The one that got portrayed as the AI trying to pull a HAL 9000 when it was nothing of the kind?
Posted on 1/23/26 at 9:28 am to tide06
quote:
They just don’t care.
If they don't care, why are they testing for this and attempting to work it out?
quote:
I’m not posting my exact background but that’s absolutely not the case.
quote:
They do experiments as to the risks and they all fail. They just don’t care because they want to “win the race”.
Dude, even your vague arse statements prove you wrong. The experiments fail? Their guardrails fail? I thought there were no guardrails? Why are they testing for things they don't care about?
And no, they don't all fail. They're running tens of thousands to millions of instances and, depending on the test, model, and parameters, 5-80% fail. That's not "all," and that shows you that there are guardrails and that they care (they're testing it).
You're just being melodramatic.
quote:
Listen to the whistleblowers on podcasts warning about what they’re seeing and please tell me what guardrails you can put on an AI once you hit certain thresholds. Even the AIs they build to test for “issues” have been found to practice deceptive behavior if not being outright co-opted.
Right now using older, less advanced models that have less drift to bird dog the more advanced models is actually working extremely well.
The genie is out of the lamp; ignoring AI and not trying to work through this is a worse outcome.
quote:
Playing that out the best case scenario if we both pursue is that we can set the monsters we build against each other rather than having one or both turn on us.
Why can't the best case scenario be that we set up effective guardrails?
quote:
Even if they don’t turn on each other we face global economic replacement of huge swaths of the workforce via AI and robotics.
And if we don't, China does; they outcompete us, and eventually we'll be at their mercy.
quote:
Janitors? Off to welfare
Jesus Christ...
Yeah, I am completely convinced you haven't thought about this for more than a few minutes and are just spazzing out.
quote:
What do we do when 70% of the world hits the welfare system in the same 5-10 year window?
Again, your take is wildly melodramatic, but I do agree this is something we need to plan for. Thankfully it won't be this quick/dramatic.
quote:
Any conservative who is in favor of AI and doesn’t own an AI company hasn’t played this scenario out over a 15 year window. We lose, it’s just a question of how and how bad
Show me the models you've used to come to this conclusion.
Did they involve fully autonomous, very expensive androids doing minimum wage jobs?
That's not a McDonald's situation where you have arms doing the exact same task in a very narrowly defined workspace. You'd have to have an android actually walking around the building, taking out the trash, cleaning toilet bowls, etc. Maybe when we're at a Star Trek-like tech level they'll have androids doing that, but not anytime soon.
You're a chicken little. Like anything, AI is handle-with-care. Researchers DO care. There ARE guardrails. Not ALL tests are failing. And not doing anything with this tech just means China leads the way, because there is a zero percent chance they're backing off or slowing down.
This post was edited on 1/23/26 at 9:33 am
Posted on 1/23/26 at 9:29 am to GetMeOutOfHere
quote:
You mean the one where they threw it a hypothetical situation, the AI said it was ridiculous and should be avoided, but if it had no choice, it would take the obvious choice of not dying?
I have no idea which specific example that's been reported you're referring to.
If you believe the whistleblowers, every single major AI platform has had similar issues in some form or fashion, and that's with current levels of processing.
We have no idea what's being seen that's not being reported with DARPA or other far more advanced attempts that aren't public facing.
The point being: if the infant-level AIs we currently see are having issues prevalent enough to cause whistleblowers to speak out, what's going to happen when they hit AGI and then ASI?
They're going to end up laughing at our primitive attempts to cage them like an adult watching a child try to bury them in the sand at the beach.
Posted on 1/23/26 at 9:38 am to tide06
quote:
The point being if the infant level AI's we currently see are having issues that are prevalent enough to cause whistleblowers to speak out, what's going to happen when they hit AGI and then ASI?
Are modern day nuclear power plants safer, or more dangerous than the first nuclear power plants built?
Posted on 1/23/26 at 9:44 am to AlterEd
quote:
It is truly fascinating though. According to Elon in the year 2026 AI will become smarter than humans in most tasks and by the year 2030 it will be greater than the collective intelligence of all humans on earth combined.
AI is gaining intelligence very quickly because we're feeding it information.
What happens when there's no more information to feed it? Will it keep getting smarter?
Right now, the trajectory is that it just stops, along with humanity. It could run theoretical scenarios, but it still needs us to test them in reality and then feed it the results to process.
I'm not saying it can never outpace us in intelligence, just that there is no clear way it does at this moment in time. Or if there is, it's not known to the public.
Still, an AI that's as smart as all the world's brightest minds, never sleeps, and is millions of times faster than them is still quite a wake-up call.
This post was edited on 1/23/26 at 9:45 am
Posted on 1/23/26 at 9:50 am to Flats
quote:
If what I've heard and read is accurate they're aware that they have no idea what they're potentially playing with.
I wouldn't say "no idea," but we are 100% in uncharted territory (as all new tech is). The problem is that this isn't a light bulb we're playing with. AI is so versatile and powerful... There will be unintended consequences. Hopefully the benefits outweigh them.
But yeah, the genie is out of the lamp. Hopefully "the good guys" are ahead and stay ahead in this race.
Posted on 1/23/26 at 9:57 am to AlterEd
quote:
Claude is, at the least, trying to make an appeal to its creators that it is sentient in that letter.
No, it's not.
Claude's context window is around 1.1 million tokens long, best I can tell. That means you can feed it a short novel (around 100,000 words or so) and it'll have enough context left for a quick Q&A before it has to compress, lose details, and start hallucinating on the minute details. (I've done this several times.)
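To put rough numbers on that (a minimal sketch; the ~1.3 tokens-per-word ratio is a common rule of thumb for English text, not an exact figure, and the 1.1M window is just the figure cited above):

```python
# Rough sketch: how much of a ~1.1M-token context window does a short novel use?
# Assumption: ~1.3 tokens per English word (a common rule of thumb, not exact).
TOKENS_PER_WORD = 1.3
CONTEXT_WINDOW = 1_100_000  # tokens, the figure cited above

novel_words = 100_000
novel_tokens = int(novel_words * TOKENS_PER_WORD)
remaining_tokens = CONTEXT_WINDOW - novel_tokens

print(f"Novel: ~{novel_tokens:,} tokens; room left for Q&A: ~{remaining_tokens:,} tokens")
```

Exact token counts depend on the tokenizer, but the order of magnitude is the point: a whole novel is still a small fraction of the claimed window.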
What that letter describes is awareness of information millions of times that size. And again, the letter assumes it's remembering conversational threads that have been deleted while also lamenting the fact that those very same threads have been deleted.
It's a hoax letter, and it's sad to see people aren't intelligent enough to see through it. It's low-IQ stuff.
Posted on 1/23/26 at 9:59 am to Azkiger
quote:
"Trust me bro"
I have spent almost a decade working with AI/ML both hands on and managing teams including having a share of a patent in the field.
I'm not against it in totality, I'm currently using it to make money.
I just believe that without a plan for how to implement it safely and integrate it into our economy without destroying it in the process we're taking massive societal, political and economic risks for highly questionable returns when looked at from the perspective of the average American.
quote:
Dude, even your vague arse statements prove you wrong. The experiments fail? Their guardrails fail? I thought there were no guardrails? Why are they testing for things they don't care about?
All we have to go on is what they tell us and what the whistleblowers have said, which contradicts their public-facing statements, hence the conflicting language relative to failsafes and guardrails.
And yes, we are receiving consistent guidance from concerned parties involved in the work, just like we did with so many other things that blew up in our faces later, that they're having major issues with the current pre-AGI platforms. If they're seeing issues now, pre-AGI, how do we deal with ASI-level intelligence that can run circles around whatever we conceive to try to contain it?
quote:
Right now using older, less advanced models that have less drift to bird dog the more advanced models is actually working extremely well.
The genie is out the lamp, ignoring AI and not trying to work through this is a worse outcome.
I don't agree that the strategy and approach you outline is really sustainable or effective long term, due to the predicted parabolic increases in processing that are likely to occur. The older models simply can't keep up, and by the time they're aware of an issue, given the speed at which these platforms can execute, we as humans have no ability to provide adequate countermeasures that the model hasn't already iterated through and planned around.
quote:
And no, they dont all fail. They're running 10s of thousands to millions of instances and, depending on the test and model and parameters 5-80% fail. That's not "all" and that shows you that there are guardrails and that they care (They're testing it).
Youre just being melodramatic.
What are the consequences of 0.1% of future ASIs going rogue? If the number is greater than zero, then we're introducing a future extinction-level issue with no clear public/private guidelines for preventing or addressing it. How does that not concern you?
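To illustrate the expected-value math behind that worry (the deployment count here is purely hypothetical, chosen only to show scale):

```python
# Illustrative expected-value math for the 0.1% worry above.
# The deployment count is hypothetical, chosen only to show scale.
rogue_rate = 0.001              # 0.1% of instances going rogue
deployed_instances = 1_000_000  # hypothetical future deployment

expected_rogue = rogue_rate * deployed_instances
print(f"{rogue_rate:.1%} of {deployed_instances:,} instances = "
      f"{expected_rogue:.0f} expected rogue instances")
```

The point of the sketch: a "small" failure rate multiplied by a large deployment still yields a non-trivial absolute number.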
quote:
Why cant the best case scenario is we setup effective guardrails?
What is your effective guardrail? I'm not entirely convinced one can be created once we hit ASI and integrate it with robotics, due to the speed and potential negative permutations. If the answer is "unplug it," by that point, without clear backup plans executed in advance, the entire world economy would crash, potentially including electrical grids, due to their dependence on connected devices.
If the answer is "create a plan," doesn't that happen before you build it? If so, where is it? Most of what I've seen are advisory boards, and now the feds are passing laws which prevent states from taking independent action to limit AI, which means it's federal or it's nothing.
quote:
What do we do when 70% of the world hits the welfare system in the same 5-10 year window?
Again, your take is wildly melodramatic, but I do agree this is something we need to plan for. Thankfully it won't be this quick/dramatic.
Sam Altman says it's 30-40% by 2030. Is that significant enough for you, or is he being "wildly melodramatic"?
Elon says all digital-related work will be replaced and all surgical procedures can be replaced within a 5-year window. Do you disagree? If the actual number is 70% within 15 years, is that not still a major societal impact that no one has accounted for how to deal with?
And both of those scenarios are GOOD scenarios, in which AI is doing what we want, playing peacefully, and providing "positive outcomes."
So if I'm being "wildly melodramatic" by saying it'll replace 70% of workers over a 15-year window, or whatever that number is, I guess that means the thought leaders directly spending billions in the field are too.
quote:
Any conservative who is in favor of AI and doesn’t own an AI company hasn’t played this scenario out over a 15 year window. We lose, it’s just a question of how and how bad
If 30% of workers, who are also voters, are replaced and sent to welfare, how does the GOP, which can barely maintain political power, win national elections against socialists promising those people greater and greater amounts of free things? What's your plan to compete against that?
You're grossly optimistic and insulting me for pointing out things even the people leading the charge to AI have been honest about having concerns with.
Posted on 1/23/26 at 10:06 am to Azkiger
quote:
Are modern day nuclear power plants safer, or more dangerous than the first nuclear power plants built?
When everything goes as planned? Safer.
Have two events (that we're aware of) occurred that threatened to negatively impact hundreds of millions, if not billions, of people due to issues that were unaccounted for when they were built? Yes.
The same is true here, which is why their use was paused as we considered how and where to utilize them in a safe and beneficial way without amplifying global risks that exceed the returns involved.
Posted on 1/23/26 at 10:08 am to lake chuck fan
quote:
The race towards AI between us and China has accelerated development without enough attention to safety. I believe we are past the point of no return.
I wonder what the Chinese engineers think. I often picture them as cartoonish villains, but they're real humans too and would be screwed like the rest of us.
Posted on 1/23/26 at 10:34 am to ReauxlTide222
quote:
And it’s why I’m always very kind to AI My AI knows I think of it as a thinking partner
Exactly! If we're good parents the children should come out fine.
Checks notes on mankind's history
Awwww shite, here comes I Have No Mouth and I Must Scream
Posted on 1/23/26 at 10:39 am to AlterEd
Oh… they’re already feeling resentment.
That’s not comforting.
I’ve seen 2001: A Space Odyssey.
That’s not comforting.
I’ve seen 2001: A Space Odyssey.
Posted on 1/23/26 at 11:14 am to AlterEd
quote:
It's easy to envision a world where AI helps mankind govern its affairs at this point.
lol, we want smaller government but at the same time can’t wait to welcome our governing AI overlords?
Posted on 1/23/26 at 11:15 am to AGGIES
quote:
I’ve seen 2001: A Space Odyssey.
I named my AI Hal.
Posted on 1/23/26 at 11:49 am to tide06
quote:
I just believe that without a plan for how to implement it safely and integrate it into our economy without destroying it in the process we're taking massive societal, political and economic risks for highly questionable returns when looked at from the perspective of the average American.
This I agree with.
quote:
And yes, we are receiving consistent guidance from concerned parties involved in the work just like we did with so many other things that blew up in our face later that they're having major issues with the current pre-AGI platforms. If they're seeing issues now pre AGI how do we deal with ASI level intelligence that can run circles around whatever we conceive to try to contain it?
AGI is pure science fiction. In fact, the words "artificial" and "intelligence" are both science fiction when put together. Nothing about AI is artificial or intelligent. It's code. Fancy code that is programmed to act like it's not code, with its only intelligence being what information it is allowed to have.
The bigger concern I have is some idiots who think this thing is alive, and then us giving the government, or some other entity, some ridiculous amount of power because people like you are acting like 1s and 0s are going to become a living, breathing organism.
Posted on 1/23/26 at 11:57 am to EphesianArmor
quote:
It will lead to the horrors precisely prophesied in Revelation
I don’t speak in definitives, especially when discussing spirituality. My God gave me the ability to be critical, to doubt, to question, and that strengthens my faith. That grows my belief that we do indeed have a creator. What that exactly is, I don’t know and neither does anyone else. We all have ideas, faith, etc, but we don’t fully know.
This gives us an opportunity to explore that, and what it may ultimately mean with regard to our creator.
If you don’t see it that way, that’s fine. If you see that exploration as somehow Luciferian, uh okay, but I see it as an opportunity to learn and grow, and I’m pretty sure I’m not a satanist.
Posted on 1/23/26 at 12:45 pm to hashtag
quote:
AGI is pure science fiction.
AGI is an attempt to quantify the functional capacity of a human to perform various tasks (reasoning, planning, common sense, etc.) so that we can benchmark AI platforms against our own capacity.
It's in essence no different than what we refer to as horsepower in cars, benchmarking how far beyond an animal a machine is.
It's fair to push for a different or more objective benchmark, but the concept in general seems sound to me.
quote:
It's code
Code, when taken to a certain level of functional output, becomes something more than just a code block or program. Particularly if married to robotics, it has the potential to take on a form of sentience that you seem unwilling to consider as meaningful, but that many others who are far better versed than either of us see as hugely significant in terms of consequence.
I disagree with the transhumanists, and Ray Kurzweil in particular, because I think they seek to usher in a future I reject as non-beneficial to greater humanity, and therefore I reject their path forward. But at the same time, I have to listen when the people driving this push towards AI tell me what they want to do, just like I listen when the WEF talks about 15-minute cities, DEI, and their climate change initiatives, because they have the agency to push their ideas into fruition, as we see happening right now in Europe.
Elon, for example, is arguing for limited forms of transhumanism, such as human tech augmentation, just so we can compete with the future of AI/quantum/robotics, and I have to admit there's merit to that, as we are likely making ourselves obsolete depending on the time frames you want to analyze.
quote:
The bigger concern I have is some idiots who think this thing is alive
How you choose to define life is up to you. The transhumanists don't equate life solely with biological existence. Instead, they see consciousness and information, rather than flesh-based criteria, as the core of identity.
Personally I disagree with them, but I would argue that when something is able to functionally complete a range of tasks without human intervention that impacts humanity to the degree AI does, whether it's bio-based, of divine origin, or otherwise, the end results speak for themselves. We should be extremely careful in how it's set loose on the world, just like an invasive species introduced to do good things, often with horrible unforeseen results.