>> bubble... crash
I think this is worth a read
Cory Doctorow: What Kind of Bubble is AI?
https://locusmag.com/2023/12/commentary-cory-doctorow-what-kind-of-bubble-is-ai/
Tech bubbles come in two varieties: The ones that leave something behind, and the ones that leave nothing behind.
But the most important residue after the [dotcom] bubble popped was the millions of young people who’d been lured into dropping out of university in order to take dotcom jobs where they got all-expenses paid crash courses in HTML, Perl, and Python. This army of technologists was unique in that they were drawn from all sorts of backgrounds – art-school dropouts, humanities dropouts, dropouts from earth science and bioscience programs and other disciplines that had historically been consumers of technology, not producers of it.
Contrast that bubble with, say, cryptocurrency/NFTs, or the complex financial derivatives that led up to the 2008 financial crisis. These crises left behind very little reusable residue. The expensively retrained physicists whom the finance sector taught to generate wildly defective risk-hedging algorithms were not able to apply that knowledge to create successor algorithms that were useful. The fraud of the cryptocurrency bubble was far more pervasive than the fraud in the dotcom bubble, so much so that without the fraud, there’s almost nothing left.
The follow-up commentary covers most of the same ground, but it is also worth a read:
Pluralistic: What kind of bubble is AI? (19 Dec 2023)
https://pluralistic.net/2023/12/19/bubblenomics/
He says he doesn't know which way AI will go, but one big difference is that the server cost of each query is very high. So even if all the IP and skilled technicians are up for grabs after the bubble pops, as happened with the dotcom bubble, you still need lots of VC money just to get a service back up and running and serving basic queries, and eventually it has to turn a profit.
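The economics argument above can be made concrete with a back-of-envelope calculation. All the numbers below are hypothetical placeholders, not real figures for any AI service; the point is only the shape of the math: when each query costs more to serve than it earns, scale deepens the losses rather than curing them.

```python
def monthly_margin(queries_per_month: int,
                   cost_per_query: float,
                   revenue_per_query: float,
                   fixed_costs: float) -> float:
    """Net monthly margin for a query-serving business.

    Positive means profitable; negative means every month burns cash.
    """
    return queries_per_month * (revenue_per_query - cost_per_query) - fixed_costs


# Hypothetical numbers: 10M queries/month, $0.04 to serve each query,
# $0.01 earned per query, $2M/month in fixed costs (staff, training runs).
margin = monthly_margin(10_000_000,
                        cost_per_query=0.04,
                        revenue_per_query=0.01,
                        fixed_costs=2_000_000)
print(f"${margin:,.0f}")  # negative: growth makes the hole deeper
```

With these assumed numbers, doubling the user base doubles the per-query losses on top of the fixed costs, which is the opposite of the dotcom pattern where a cheap commodity server could keep a salvaged site alive.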
The challenge there is that AI is very good at risk-tolerant tasks (create cover art for my e-book), but those are very low-value. On the other hand, it is not good enough on its own at risk-intolerant tasks like driving cars and interpreting medical imaging. Those are high-value, but they require a "human in the loop" and have so far not delivered. An obvious use case would be to have a radiologist read the images and then corroborate with an AI, but that makes the process MORE expensive, not less.
So we'll see. But I think the big AI revolutions have already happened; we just haven't seen the effects yet. I do not believe the revolution is ChatGPT and LLMs in general, but everything that is going to come out of AI-driven proteomics, drug discovery, and materials science.
I'm not contending that other, bigger applications of AI won't come along. I'm just saying that all the talk about ChatGPT hallucinating and the Google AI being criminally woke is a distraction. Away from the public eye, AI is making huge advances, but those advances will take time to manifest, like Marie Curie's experiments that laid the groundwork for a nuclear revolution she did not live to see.