Paywall, but the gist of the article by Jason Furman (chair of the Council of Economic Advisers under Obama) is that he thought AI would cement the tech oligopoly, but intense competition seems to be doing the opposite. Seems highly debatable to me, though it is true that Chinese models seem to be doing well on far fewer resources.
That aside, this quote caught my eye:
QuoteOne recent analysis found that "GPT-4-equivalent performance now costs $0.40/million tokens versus $20 in late 2022." That is the equivalent of a 70 percent annual deflation rate.
https://www.nytimes.com/2026/02/25/opinion/ai-industry-competition-innovation.html
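The 70 percent figure checks out, at least roughly. A quick back-of-the-envelope calculation (my own arithmetic, not from the article; the ~3.25-year window from late 2022 to early 2026 is my assumption):

```python
# Sanity check on the quoted claim: $20 -> $0.40 per million tokens
# over roughly 3.25 years (late 2022 to early 2026, my assumption).
start_price = 20.00   # $/M tokens, late 2022 (figure quoted in the article)
end_price = 0.40      # $/M tokens, per the quoted analysis
years = 3.25          # approximate elapsed time; an assumption on my part

# Constant annual price multiplier that compounds to the observed drop
annual_factor = (end_price / start_price) ** (1 / years)
deflation_rate = 1 - annual_factor
print(f"{deflation_rate:.0%}")  # prints 70%
```

So a 98% total price drop, compounded over about three and a quarter years, does come out to roughly 70 percent per year.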
On the main thrust of the article, in addition to noting the jostling among the models and how new the companies are (without quite coming to grips with the fact that the "new" companies are deeply interwoven with the incumbent tech giants), he also says
QuoteUsers aren't the only ones switching. The people who work at these companies move from one to another, a sharp contrast to work in Silicon Valley during the era of do-not-poach agreements.
QuoteFor a while, Nvidia was the provider of the most desired chips, especially for the more processing-power-intensive model training runs. Late last year, however, Google's Gemini 3 model vaulted to the top of the leaderboards by relying on a new custom-designed chip. When Anthropic overtook Google for the No. 1 spot, it did so using chips from several companies. Older companies like AMD are re-emerging as formidable designers, as are lean new A.I.-first entrants like Cerebras that are specializing in the inference the A.I. systems use to answer specific queries.
Not sure about that. I didn't follow up to see which test suite they were benchmarking against, so the claim may only hold with respect to a particular set of benchmarks.