The end of the AI bubble won't make the technology go away. As the financial mega-engineering unwinds (as always, first slowly and then all at once), many of those tools will not only remain but continue to improve.
The truly seismic change will take place on the plane of fear, belief, and expectations. Over the last few years, mainstream society has incorporated into its vocabulary of plausible scenarios ideas that used to be culturally science fiction: superintelligence, brain-computer interfaces, "the end of work," and so on. None of the (often indistinguishable) warnings and promises from AI companies was a new idea: what has changed is that they have shifted from fiction or far-future speculation to something that drives present-day educational choices, management decisions, and public policy.
This change is a side effect of the surprising speed of improvement in AI capabilities and, more than anything else, their visceral legibility. AI companies promise and threaten things that until recently were categorized as impossible (they have to, for reasons of financial survival), and it's hard for many people not to take them at face value.
This belief will end. Not just because the industry as currently configured is financially unsustainable, but also because the promises themselves can't pan out. For example, it's simply not the case that you can train a language model on "all the data" and "cure disease," not least because we don't have anywhere near enough data about most of the complexity of human biochemistry. And that's far from the only AI-company prediction that has more currency with media, politicians, and investors than with domain specialists.
But even after the path disappears, the dreams, good and bad, will remain. Companies now believe that key forms of cognition can be performed at a superhuman level. "The West" looks at China's manufacturing capabilities and sees radical transformation on incredibly short time scales. Medical breakthroughs, radical shifts in economic models, whole new approaches to the methods and goals of education: AI has shifted the window of plausibility, and the end of the bubble won't send any of those scenarios back to the folder of the impossible, at least not in the short term.
That's the Dream Gap: on one side, the here-and-now; on the other, expectations now routine in business and political discourse that would have been classified as "Anticipation" five years ago; between them, a blank where "ask ChatGPT" used to be, but still plenty of financial, political, and symbolic capital invested in getting quickly from here to there. It will be the primary window of opportunity of the next decade (dreams fade slowly, but they do fade; you have to hurry to catch them) and, because it's a race to achieve what seemed impossible, the winners will hold the cultural, political, and economic high ground for a long time afterward.
"There will be a gap, fill it" is straightforward advice. How, exactly? "Build more data centers, use more AI" had the advantage of being easy to grasp and execute if you could figure out the capital; in some senses this conceptual simplicity was as important as what it could or could not achieve. Actually filling the Dream Gap won't be so easy. Every person, organization, and government currently betting that AI will get them there wants different things and has different resources, constraints, and possibilities.
The two things the vast majority of them have in common are:
One, they did not actively pursue the impossible until the rise of AI shifted the mainstream narrative.
Two, they did not then, and do not now, use the full extent of the technological tools and knowledge at their disposal.
The limiting factor has always been symbolic, not practical: the "superintelligent organization" wasn't on the radar of most companies not because it wasn't possible (practically everyone, in every role, operates well below the limits of what the right software and domain knowledge make possible) but because it wasn't on the cultural radar of their leaders.
Now those dreams are part of the narrative. They are even framed as a necessity for professional and political survival. So the once impossible is now a goal and, as faith in a certain set of tools fails, everybody will look for others.
The opportunity, and the duty, is to build and offer better paths. Paths, plural, because pushing the frontier will always require a deep understanding of each domain (its tools, limitations, and wider context) together with an understanding of the frontier of computational tools that goes beyond knowledge of the latest AI models and prompting strategies. The Dream Gap is in reality a multitude of gaps, one for each dream, and it will take different organizations and forms of expertise to reach each of them, unified only by the pragmatic pursuit of nothing less than the once impossible. This makes the pitch harder but the outcome possible. The time between the end of the AI bubble and the abandonment of those dreams will be the best window of opportunity in a long time to put together the people and resources that will make them a reality.
We live at the tail end of a bubble predicated on the idea of a single possible path to dreams, and away from nightmares, once held impossible. But we also live in the prologue of a time when those dreams and nightmares will be very real possibilities, reachable through difficult but walkable paths. The transition between the two will be a time of fear and dismay, of broken promises and lingering dreams. Those who can offer possible ways to achieve them when the dominant narrative has crashed will find willing ears and ready hands.
(Originally posted on my blog.)
