The debate about human obsolescence has gotten it all wrong: by focusing myopically on individual activities that can be more or less automated, it ignores the larger picture of what it means to have vastly increased computational capabilities and what it takes to make the most of them.
A useful cheat code for spotting the future frontier of business practices is to ignore industry extrapolations — if your competitors are doing it, it's not an innovation — and work backwards from organizations in entirely different fields that are by necessity already grappling with the tools and problems that have yet to trickle down to the general economy. The technologies and issues of "big data," for example, were first visible in the activities and infrastructures of astrophysicists and particle physicists, who had to capture, store, process, and understand orders of magnitude more data than anybody else, much earlier than anybody else.
Today mathematicians are among the scouts of the AI frontier. Mathematical research is in many ways the perfect field of application for current AI tools, which are well suited to its organizational and intellectual complexities. Any structural improvement to the speed of mathematical research, however invisible to most people, would have a deep, transformative long-term impact on science and technology.
This article on the blog of the Xena project — part of a growing community of professional mathematicians leveraging cutting-edge specialized software to support and accelerate mathematical research — is a very useful look at the current situation, anchored by the question it opens with:
Let’s say that someone had a big pot of money, and wanted to use it to accelerate mathematical discovery. How might they go about doing this?
This isn't exactly "how do we use AI?" It's a much better question: "how do we use resources to accelerate thinking at the frontier?" The answer does involve AI (both LLMs and more specialized tools like the Lean interactive theorem prover), but the limiting resource is people with a deep understanding of both those tools and the relevant mathematics: people who are scarce and very difficult to train.
You can see the same dynamic, albeit implicitly, in the Integrated Explicit Analytic Number Theory network, a project building a formalized "hypertext" of theorems in analytic number theory. Going through the project logs, you can see the full AI toolset in play, sometimes saving time, sometimes providing formal validation, but always as a tool for, rather than a replacement of, highly trained, often world-class researchers. One gets the clear sense that more people would speed things up in a way that "more compute" or "more data" wouldn't.
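To make concrete what "formal validation" by an interactive theorem prover like Lean means, here is a minimal sketch (my own toy example, not taken from the Xena project or from the network's formalizations): a theorem is stated in the prover's logic, a proof is supplied, and the kernel mechanically checks every step. Research-level formalizations are enormously larger, but the workflow of stating, proving, and verifying is the same.

```lean
-- Toy illustration only: a machine-checked statement in Lean 4, using
-- nothing beyond the core library. The kernel verifies that the proof
-- term really establishes the stated theorem; nothing is taken on trust.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b  -- appeal to the core library's commutativity lemma
```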
Although this is happening first and most clearly in scientific research, every industry attempting to use AI transformationally — except those focused on content generation, which are currently facing their own proto-Götterdämmerung of a race to the bottom — is experiencing the same bottleneck, most often without full awareness of the problem. Tools are deployed, workflows are modified, there's training, enthusiasm, features... but as the months go by the company simply isn't operating at the orders-of-magnitude faster strategic and technological tempo promised by AI (if it were and the industry weren't, it would be taking over the market; if the whole industry were, it would be visible to everybody).
Dig deeper into those AI transformation initiatives and the explanation is clear. For all the emphasis on radical change and new opportunities, there's little or no change to the deeper cognitive architecture of the company:
What it thinks about.
The language it uses to think about it.
The disciplinary frameworks it uses to think about it.
The formal, institutional, and cultural constraints that shape its thinking.
The way an organization thinks is limited by this architecture. No matter how advanced its software or how much data it has, without changes to the fundamental architecture of how it thinks, most of this cognitive potential is frittered away as just performative signaling of innovation.
But changing this cognitive architecture and building AI into it is very, very hard. It takes the coordinated deployment of a deep understanding of the toolbox of "things to think with" — the software, process, cultural, and intellectual components of a cognitive architecture — and, simultaneously, deep knowledge of the industry. Without the former there's no new engine; without the latter the engine isn't connected to the right wheels.
Not every company has people with even one of those skill sets. Very few companies have both. Vanishingly few have the internal resources to set up and leverage the kind of tightly integrated small team that can pull off this sort of transformation. And without it, AI becomes a tool at best and a buzzword at worst.
So that kind of team is a valuable resource — well deployed, it's an AI investment multiplier — as well as a rare one. The main obstacle, for now, lies less in its cost than in its relative obscurity. There's no bidding war because few companies recognize the need, even as they feel the impact of its absence. But as every industry begins to see specific companies break away from the pack, it'll become clearer what transformative AI really looks like, and what it takes to get there.
Computational technologies going as far back as the earliest clay tablets have always been capable of sustaining radically new technologies, activities, and organizations. But neither hardware nor software has ever been enough. Organizations can't take advantage of new tools to think with without changing how they think, a challenge that has to be overcome differently not just by each industry but by each organization on its own, constrained as it is by the specialized knowledge of its members and the strength of its intention to change.
(Originally posted on my blog.)