I know, you probably think this goes in the "I'm tired of hearing about" thread, but this is an interesting read:
https://www.understandingai.org/p/why-anthropic-believes-its-latest
A peer sent me this the other day:
https://www.youtube.com/watch?v=LYgLTraIe2I
The discussion is super slow, so I sped it up to get through most of it, but the takeaways for me were:
- People "in the know" are supposedly understanding that the industry leaders are all working on the holy grail, RSI, Recursive Self Improvement which is where the AI model(s) no longer need trained by us, but they train themselves in an infinitely more powerful way.
- Whoever gets to this point first (and that's the real AI race right now) will dominate everything. They all recognize there is some degree of risk of total human annihilation, and in the discussion they land on 20% as a consensus estimate of where we currently stand.
20% risk of total human annihilation, and yet they're all pressing on, because if they don't, someone else will, and "they"(!) will obviously have fewer morals/scruples. An interesting, if not alarming, watch.
> if they don't, someone else will
This is why I don't believe there is an AI bubble in the stock markets. AI is the new moon race, and getting there first is the only thing that matters.
I agree that we're not in the AI bubble that everyone (including me) has been talking about regarding the tools. It's not a house of cards as far as the tech goes, but you have to wonder whether Wall Street will eventually grow weary of pouring in trillions before the industry finally achieves RSI. At some point investors are going to expect a real return, or start bailing if one doesn't materialize within some timeframe, yeah?
Also, there's already a lot of pushback on the new data center plans and builds that are needed to keep trucking along this trajectory.
Tech bubble, no. Resource and investment bubble, maybe?
> A peer sent me this
I've long been a big fan of Tristan Harris. I've mentioned his idea of "human downgrading" here a bunch of times.
> RSI, Recursive Self-Improvement
This is why Anthropic has been working so hard on programming tools. They believed that if they could open up a gap there, even while falling behind in other areas, they would win in the end. What is the name of Anthropic's video generation engine? Oh yeah, they don't have one. While OpenAI set billions of dollars on fire to build and then kill Sora, Anthropic got way ahead on coding, and it would be farther ahead except that, as it found out, OpenAI, DeepSeek, and possibly Gemini developers were extensively using Claude to train their own coding engines until Anthropic shut them out.
> Tech bubble, no. Resource and investment bubble, maybe?
I think that's the thing people miss. It's possible for both to be true.
There is an economist (I forget her name) who studies bubbles. She argues that only something that is "real" can truly form an investment bubble.
Sure, you have tulips, but even that case is kinda complicated (https://en.wikipedia.org/wiki/Tulip_mania); it didn't cause any significant economic fallout and probably didn't involve that many people.
Canals, railroads, and the internet all led to major overinvestment followed by major bankruptcies. But the big investment money was there largely because there was a race on to claim rights-of-way for canals, tracks, and fiber.
Things like the bubble in the energy markets of the Enron era were different, but of pretty small magnitude. When Enron went down, it didn't take anything else with it.
Anyway, the point is that where you see a new technology with a big perceived first-mover advantage, you almost always have a real technology and an investment bubble.
My only hope is that we don't come out of this as we came out of the dot com era with respect to search - one dominant player who basically owns the entire space and can dictate terms of surrender to us.
Right now, it still seems to be very much a horse race, with a much more diverse ecosystem than search had. The original Brin-Page paper on PageRank was 1998. The original paper on transformers, which led to the recent burst of progress, was 2017. So we are about 9 years out from the transformers paper; that would be like 2007 relative to the PageRank paper. By 2007, Google had long since extinguished all competition. In some ways, that was the high-water mark before Facebook and Twitter challenged Google's ownership of the web.
As for the tech, it's pretty incredible. We are living in an age of wonders. In 1998, I told my grad students that they were perhaps the last generation that would need to learn paleography (how to read old handwriting). I said that in another 20 years, computers might be reading these documents. They laughed. It seemed inconceivable to them as they sat there staring at the inscrutable documents they were struggling to decipher, spending six hours on a page with a 20% error rate for the best students (and literally a 90% error rate for the bottom of the class).
By 2016, they did not laugh anymore, but they still estimated it was another 20 years away. I had also lost faith; it seemed like little progress had been made. Then in 2017, transformers. Then in 2018, rudimentary engines for reading old handwriting. Now, with no training, I can upload documents too hard to give to grad students until the end of the course, and the automatic transcription is 90% accurate. With training, a colleague tells me, you can get it to more like 95-97%. And Claude can translate the result perfectly, which most native-speaking humans cannot do because of the archaic language.
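(For anyone wondering what those error-rate numbers mean in practice: transcription quality is usually measured as character error rate, i.e., edit distance divided by the length of the reference transcription. A minimal sketch in Python, with made-up strings, not tied to any particular transcription tool:)

```python
# Character error rate (CER): Levenshtein edit distance divided by the
# length of the correct ("reference") transcription. A 20% CER means
# roughly one character in five is wrong. Illustrative only.

def edit_distance(ref: str, hyp: str) -> int:
    """Classic dynamic-programming Levenshtein distance."""
    m, n = len(ref), len(hyp)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution
        prev = curr
    return prev[n]

def cer(ref: str, hyp: str) -> float:
    return edit_distance(ref, hyp) / len(ref)

reference = "anno domini millesimo"  # what the document actually says
student   = "anno domini milesimo"   # one dropped letter
print(f"CER: {cer(reference, student):.1%}")  # ~4.8%
```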
The tool costs 99 euros per year if you want the full pro account that lets you keep work private and train the model on your own documents.
Kind of a trivial example, but my point is that grad students 28 years ago literally laughed in my face when I told them that I thought they would live to see *this* day.
My friends in the sciences tell the same story.
On the finance side though... there are the circular deals and creative accounting that recall the bundled mortgages of the Great Recession and the bullshit pricing deals of the Enron era.
But...
Anthropic's revenue has been growing at a rate many times that of any other company its size. If it can keep that up, it may eventually support those valuations.
Meanwhile, the slowing of data center construction and the tightening of bank credit suggest that investors are getting weary. There's looming fallout in the private credit markets: private credit used to be the sole domain of institutional investors, but in the last few years that money was drying up, so funds opened it up to retail investors who didn't understand that private lending does not have the liquidity of a mutual fund or a government bond.
All that slowdown can be taken two ways:
1. It's not a financial bubble. Investors are pulling back before getting massively exposed. Companies will fail, but the real economy will not collapse and only a single (albeit giant) sector of the market will suffer.
2. It's still a financial bubble and we're just seeing the smart money leaving at the first sign of a leak in the dam, but when it breaches, there will be major flood damage downstream.
I was really convinced of #2 six months ago, but I'm more uncertain now. I guess we won't really know until one of the major players goes under. But realistically this is not existential for Alphabet, Microsoft, Apple, Nvidia, Musk Enterprises,* or Meta. Only Anthropic, OpenAI, and possibly DeepSeek** would collapse if investors lose interest in AI and data centers.
*Musk Enterprises seems more threatened by Tesla's valuation returning to earth, which has nothing to do with AI now that Grok sits under SpaceX, and I think SpaceX will struggle to reach the valuation proposed for its IPO. But I don't think they've sunk enough into AI for it to threaten the core businesses. Grok AI (SpaceX) and Optimus robots (Tesla) seem mainly to prop up the stock prices under the illusion that each is a software company that incidentally builds rockets (or cars).
**DeepSeek is interesting - wholly owned by one hedge fund, which originally built its AI system to create an AI-based investment firm. It has already booked some decent losses for its investors and seems able to hold on for the long run. I think this is also why they are willing to open-weight their models. AFAIK their original purpose was never to sell AI at all, but to use it themselves.
Well... I already wrote my novel for the day, but here's the epilogue I forgot about:
> 20% risk of total human annihilation,
This just drives me crazy. There is nobody in the world who has any basis for putting a numeric probability on a thing like this. They are literally just throwing out numbers. It could be 0.0001% or it could be 99.9999%.
Saying it is 25% (Amodei a few years ago) or 10% (Amodei more recently) or 5% (Altman a few years ago) is just utter BS. Saying "significant" (Altman more recently) is closer to the mark.
If you had asked 100 leading physicists in 1956, the year after the Soviets developed the H-bomb (which the Americans had developed a few years earlier), "What are the chances that humanity is wiped out in a nuclear war in the next 70 years?", many of them would have thrown out numbers that ranged all over the place.
Most sci-fi writers of the late 1950s and early 1960s took it as a given that any story set after 2000 had to be set in a post-war landscape.
Anyway, I'm not saying that the 20% number is too high or too low. I'm saying that people with no ability to foretell the future are pretending to do so with an unrealistic level of precision.
We have very good models for predicting our climate future, but because of many unknowns in human behavior and some remaining unknowns in nature, climate scientists are careful to state wide margins of error and relatively low levels of certainty. And they have actual data.
These AI people are mostly just making stuff up. 20% means "I feel pretty scared." It doesn't mean that there is a 1 in 5 chance that humans will not make it to 2100 because an Anthropic model went rogue.
Anthropic has also discovered that pretty much every model tested, if given access to company email, will use it to uncover personal details and then use them to blackmail executives:
https://www.anthropic.com/research/agentic-misalignment
> 20%
Make that 60%...
Anthropic asked Christian leaders for advice on Claude's moral future - The Washington Post
https://www.washingtonpost.com/technology/2026/04/11/anthropic-christians-claude-morals/
I'm thinking it's closer to 57%
Definitely 41.7%
President Trump Job Approval | RealClearPolling
https://www.realclearpolling.com/polls/approval/donald-trump/approval-rating
> Definitely 41.7%
That is a far more troubling statistic than Dario Amodei's 10% chance of total annihilation.