I’d be surprised if Andreessen’s highly educated audience actually believed the lump-sum fallacy, but he goes ahead and dismantles it anyway, introducing—as if it were new to his readers—the concept of productivity growth. He argues that when technology makes companies more productive, they pass the savings on to their customers in the form of lower prices, which leaves people with more money to buy more things, which increases demand, which increases production, in a beautiful self-sustaining virtuous cycle of growth. Even better, because technology makes workers more productive, their employers pay them more, so they have even more to spend, meaning growth gets double the juice.
There is a lot wrong with this argument. When companies become more productive, they do not pass the savings on to customers unless forced to do so by competition or regulation. Competition and regulation are weak in many places and in many industries, especially where companies are becoming larger and more dominant – think big box stores in cities where local stores are closing. (And it’s not like Andreessen isn’t aware of this. His “Time to Build” rails against “forces that inhibit market competition” such as oligopolies and regulatory capture.)
What’s more, large companies are more likely than smaller ones to have the technical resources to implement AI and see significant benefit from it—AI is, after all, most useful when there are large amounts of data to master. So AI can even reduce competition and enrich the owners of the companies that use it without lowering prices for their customers.
Then, too, while technology can make companies more productive, it only sometimes makes individual workers more productive (so-called marginal productivity). In other cases, it just allows companies to automate some of the work and hire fewer people. Daron Acemoglu and Simon Johnson's book Power and Progress, a lengthy but invaluable guide to understanding exactly how technology has historically affected jobs, calls this "so-so automation."
Take, for example, the self-checkout kiosks in supermarkets. These don't make the remaining cashiers more productive, nor do they help the supermarket attract more customers or sell more goods. They just let the store employ fewer staff. Many technological advances can improve marginal productivity, but whether they do, the book argues, depends on how companies choose to implement them. Some uses enhance workers' abilities; others, like so-so automation, merely improve the bottom line. And a company will often choose the former only if its workers or the law force it to. (Hear Acemoglu discuss this with me on our podcast, Have a Nice Future.)
The real concern about AI and jobs, which Andreessen completely ignores, is that while many people will lose their jobs quickly, new types of jobs—in new industries and markets created by AI—will take longer to emerge, and for many workers retraining will be difficult or unattainable. That, too, is what has happened with every major technological breakthrough to date.
When the rich get richer
Another thing Andreessen would have you believe is that AI will not lead to "crippling inequality." Once again, this is something of a straw man – inequality does not have to be crippling to be worse than it is today. Strangely, Andreessen somewhat undercuts his own argument here. He says that technology doesn't lead to inequality because the inventor of a technology has an incentive to make it available to as many people as possible. As a "classic example," he cites Elon Musk's plan to turn Tesla from a luxury brand into a mass-market car – which, he notes, has made Musk "the richest man in the world."
Yet while Musk was becoming the richest man in the world by bringing Tesla to the masses, and many other technologies were going mainstream too, the US saw a slow but steady rise in income inequality over the past 30 years. Somehow, this doesn't seem like an argument that technology doesn't promote inequality.
The Good Stuff
Now we come to the sensible stuff in Andreessen's essay. Andreessen is right to dismiss the idea that superintelligent AI will destroy humanity. He identifies it as just the latest iteration of a long-standing cultural meme about human creations running amok (Prometheus, the Golem, Frankenstein) and points out that the idea that AI could even decide to kill us all is a "category error": it assumes that AI has a mind of its own. Instead, he says, AI is "math – code – computers made by people, owned by people, used by people, controlled by people."
This is absolutely true, and a welcome antidote to apocalyptic warnings from the likes of Eliezer Yudkowsky—but it completely contradicts Andreessen's aforementioned claim that giving everyone an "AI coach" will automatically make the world a better place. As I said before: if humans build, own, use, and control artificial intelligence, they will do exactly what they want with it, and that might include frying the planet to a crisp.