Three reasons we’re in an AI bubble (and four reasons we’re not)

AI critics are right about the industry’s challenges — but they risk missing the larger story

When tech stocks plunged on Monday, temporarily losing a collective $800 billion in value, a long-simmering debate about artificial intelligence boiled over. Almost two years after ChatGPT was released, tech giants are investing more than ever into generative AI and the promise of superintelligence. But with capital expenditures far outstripping AI profits and the pace of innovation seeming to slow, investors are beginning to wonder when Silicon Valley plans to recoup those expenses.

Are we living through an AI bubble? The prominent hedge fund Elliott Management raised eyebrows earlier this week when it told clients that big technology stocks, including Nvidia, are in “bubble land,” the Financial Times reported. Elliott expressed skepticism that demand for Nvidia’s powerful chips would persist, and described AI in general as “overhyped with many applications not ready for prime time.”

It’s possible that AI applications are “never going to be cost-efficient, are never going to actually work right, will take up too much energy, or will prove to be untrustworthy,” the hedge fund wrote.

Elliott has a point: many AI tools today absolutely are not “ready for prime time,” in the sense that you could trust them with your job or your life. AI needs close supervision, sometimes requiring more effort than it would take to simply do the task yourself.

At the same time, talk to most software engineers today and they’ll tell you they already can’t imagine doing their jobs without the coding assistance that AI provides. Are they the exception to the rule — or just early in realizing the kind of productivity gains that will come to the rest of the economy over time?

Whether we’re in an AI bubble depends largely on how you define it. When I use the term, I’m talking about a situation where public company stocks and private company valuations are inflated far beyond the profits they will ever deliver, creating the conditions for a huge crash when speculators pull out and prices decline. The dot-com bubble of the late 1990s, and the crash that followed, is the canonical example in Silicon Valley.

Are we setting ourselves up for an AI sequel? There’s a case for and against it. Let’s start off with some reasons why the AI bubble might be real.


Companies aren’t yet earning profits on their AI investments, and it’s not clear if or when they will.

Investors like Elliott want to see Silicon Valley pull back on its AI spending this year. But tech giants are doing the opposite: Microsoft, Alphabet, Amazon, and Meta all dramatically increased their AI expenditures in the first half of this year, to a collective $106 billion. Some analysts believe they will spend $1 trillion on AI over the next five years, the FT reported.

To recoup those costs, the giants will need to persuade vast numbers of consumers and businesses to buy their services. But for now enterprise spending on AI is mostly limited to small trials, and there’s little evidence so far that most businesses see a compelling reason to buy the tools that are now available. Plenty of people who tried ChatGPT once or twice never returned. As Elliott put it: “There are few real uses … [beyond] summarizing notes of meetings, generating reports and helping with computer coding.” 

Moreover, AI tools have much lower profit margins than other software, thanks to their intensive computing demands and energy needs. So the cost of providing AI services scales with usage in a way that just isn’t true of (for example) Google search or Facebook.

Some of the most prominent startups are throwing in the towel.

Character.AI raised $150 million to build AI chatbots. Adept raised $415 million to build AI agents. Inflection raised a whopping $1.525 billion to build an AI chatbot named Pi.

All of these companies still exist in some form. But they no longer have their founders, who agreed to go work for tech giants in a series of non-acquisition acquisitions that have become the norm for AI upstarts this year. In each case, the non-acquirers (Google for Character, Amazon for Adept, and Microsoft for Inflection) paid investors a modest premium. But venture capitalists are surely smarting that some of the most promising companies in the space failed to deliver the 10x returns their funds depend on.

What gives? Regulators are increasingly skeptical about mergers and acquisitions in the tech world, closing off the most popular path for startups to exit. Meanwhile, startups are learning that even billion-dollar fundraises can’t compete with what the giants are prepared to spend to win the AI race. That makes the prospect of taking a company public especially daunting.

“If the very best outcome for extremely well-funded AI companies are deals like these, then it signals that building a standalone, potentially profitable GenAI business is too hard or impossible to pull off,” Gergely Orosz, author of the Pragmatic Engineer newsletter, wrote this week. “We can assume founders are smart people, so throwing in the towel early by selling to Big Tech is likely to be the most optimal outcome.”

The rate of innovation seems to be slowing down.

One reason why ChatGPT created a sensation is that it improved dramatically on the output of the previous version of OpenAI’s large language model. The leap from GPT-2 to GPT-3 was remarkable: a tool that previously had hardly any utility to the average person was suddenly capable of helping a student cheat all the way through elementary school. 

GPT-4, and later GPT-4o, are even better: they’re faster, more efficient, and less likely to hallucinate than GPT-3. But we still use them just the same as we used ChatGPT, and they have few capabilities that their predecessor did not.

For the past several years, AI developers have been able to create much more powerful models simply by increasing the amount of data that their models are trained on. But there are signs that this approach is beginning to show diminishing returns. Until GPT-5 and other next-generation models arrive in the next year or so, whether scaling has truly hit a wall will remain an open question.


Taken together, these factors could reasonably lead a person to believe that the bubble talk is real. So what’s the case against? 

Tech companies are often unprofitable for very long stretches. 

Amazon didn’t turn a profit for the first nine years of its life. Uber reported its first full-year profit this year — 15 years after it was founded. The end of zero-interest rates has made it much more difficult for tech companies to operate this way. But particularly for the richest public companies, making long-term investments and ignoring investors’ complaints has long been the norm. 

Big Tech CEOs are all largely in agreement that AI represents their largest opportunity in at least a generation. In their view, to pull back on spending now would risk ceding the race to their competitors — which would be even worse for their long-term profits. “In tech, when you are going through transitions like this . . . the risk of underinvesting is dramatically higher than overinvesting,” Google CEO Sundar Pichai told investors on an earnings call last month.

Startups failing is a normal part of the venture capital life cycle.

Just because a handful of once-high-flying startups like Character.AI went looking for the exits doesn’t mean the entire industry is washing out. Just ask the image generator Midjourney, which was expected to generate $200 million in revenue last year and has reportedly been profitable since its earliest days. Or ask OpenAI, whose ChatGPT app just had its best month of revenue ever, according to estimates from an app intelligence firm. (That seems particularly notable given that most American children are not in school in July.)

Tech companies are still wringing plenty of innovation out of current generation models.

Whether GPT-5 and its peers can deliver a step-change in functionality from their predecessors is a real and important question. But focusing on that too much can obscure just how much innovation is left to be wrung out of the models we have today.

To get a sense of what’s still possible, keep an eye on Google DeepMind. Last month, the company used a pair of custom models to earn the equivalent of a silver medal at the International Mathematical Olympiad. (It was one point short of the gold.) Today, it showed off an amateur-level table tennis robot that beat every beginner who stepped up to face it. (And lost to every advanced player.) And those feats came just a couple of months after DeepMind unveiled AlphaFold 3, which predicts protein structures with amazing accuracy.

There’s no telling how long it will take Google to recoup its investment in DeepMind. But are there billions to be made in robotics, medicine, and health care? To me the answer seems obviously to be yes. 

Technology adoption takes a long time.

Credit to my podcast co-host, Kevin Roose, for pointing this one out to me. Microwave ovens were invented in 1947, and by 1971 were only in 1 percent of American homes. They didn’t reach 90 percent of American homes until 1997.

Other technologies proliferate more quickly. ChatGPT set a record for the fastest-growing consumer application ever. And yet it also remains true that, as with the microwave in the 1970s, most people have never tried it. That’s why I’m most interested in what natural early adopters, like software engineers, are doing with AI. The more success they have with Copilot and other AI assistance, the more others will seek out similar tools. But that’s not all going to happen this year.


So how do we square these two cases? 

Here’s my view. AI seems to be creating much bigger opportunities for our biggest companies than our smallest ones. The tech giants can easily afford to plow billions into building next-generation LLMs and subsidizing AI services while they work to bring costs down and scale user bases up. And even if the direst predictions of AI skeptics came true, and no one ever found a profitable use for generative AI, the tech giants would all still have their massive profitable businesses in e-commerce, advertising, and hardware to prevent a true collapse.

On the other hand, startups have a much harder road. The natural advantages they normally have over giants, such as being small and nimble, can’t make up for the high cost of training models and offering AI-powered services. VC portfolios are sufficiently diversified that even their outsized bets on AI won’t sink most of them, just as making outsized bets on crypto didn’t sink most of them. But if AI remains on its current trajectory — and regulators don’t allow more acquisitions — I suspect VCs will be really disappointed.  

But even if we are in a bubble, don’t expect that to be the last word on AI — any more than the dot-com bubble marked the end of the internet. Sometimes technologies that look too expensive or too unreliable in the moment turn out, in hindsight, to have only been too early.

Correction: This post originally said ChatGPT had been available for almost three years. In fact, it has been available for almost two.

Launches

For several years now I've found Oliver Darcy's Reliable Sources newsletter at CNN to be essential reading. Today Oliver announced he's striking out on his own with a new newsletter about the media industry called Status.

It was an instant annual subscription purchase for me. If you want to support high-quality, independent media reporting, check it out.

On the podcast this week: The Times' David McCabe joins us to sort through this week's ruling that Google has illegally maintained its monopoly in search. Then, Kevin and I talk through the AI bubble debate. And finally, all aboard for a new segment we call Hot Mess Express.

Apple | Spotify | Stitcher | Amazon | Google | YouTube

Those good posts

For more good posts every day, follow Casey’s Instagram stories.

Talk to us

Send us tips, comments, questions, and AI bubble opinions: casey@platformer.news.