How much AI do we need, really?
It’s been said that if all AI development paused right now, there would still be years of innovation, disruption and growth to come just from integrating what we already have into products. The way the frontier labs and their investors are spending implies that the payoff from developing AI to some notional AGI end state is far greater than whatever value they could capture in the shorter term by focussing more on product. Yes, I know ChatGPT is a product, and maybe it will become the operating system of everything in the future. But judging by where the labs are spending their money, that doesn’t seem to be the bet being made.
What I think is wrong about all of this AGI stuff is that most problems that actually need solving - and that someone will pay you to solve with a product - require a fixed amount of intelligence. Sometimes that’s low, sometimes it’s high. But there’s a point past which developing the AI further doesn’t matter: you can already do the thing automatically, and from then on it’s really about cost and convenience.
I don’t think this is just a lack of imagination on my part about how different the AGI future might turn out to be. Humans in 10 years will still need food, homes, recreation, financial products and access to information. All of these things currently operate in a clunky analogue world and will doubtless change a lot with AI, but none of them are 4D Vulcan chess problems either. There will be a point past which the benefit of further AI enhancement is mostly saturated.
The problem here is that the labs don’t really have any durable advantage in the technology. Yes, they’re well funded and make splashy breakthroughs, but typically an open-weight model will match their performance within 6-12 months. This is great for the product people building things: you just need to focus on building, and wait for the cost of the intelligence level you need to be competed away in the cash furnace.
If you agree with this thesis and are wondering why it isn’t being discussed more, I think the reason is simply that a lot of the next generation of products are still being built, and we don’t yet know what level of intelligence is really needed. It may be that for many of them we already have it; for others it’s maybe a year or two away. But the point is that these thresholds are real, and they make it harder to justify the investment case for continuing to scale AI training.
