We hear a lot about how innovative Silicon Valley and the broader American tech industry are. They’ve been disruptive, moving society forward at an ever-accelerating pace. People now have amazingly complex computers in their pockets and the world’s knowledge at their fingertips. Apps drive entertainment, economic activity, and social interactions. And now we have the “AI” “revolution,” coming to disrupt all the jobs and dazzle tech journalists with all the amazing things its agents can create for us.
I wonder, though: is any of this really innovative? Is it better?
In tech, there’s a concept called Moore’s Law. Formally, this empirical law observes that the number of transistors the semiconductor industry can cram onto a chip doubles roughly every two years. Informally, people understand Moore’s Law as meaning that computers are always getting exponentially faster, and therefore more capable. But a less well-known observation, Wirth’s Law, points out that software bloats, grows more inefficient, and otherwise slows down at a pace that nearly cancels out the advancements of Moore’s Law. This is why a top-of-the-line Windows 11 desktop or laptop now, in 2026, feels about as snappy and responsive at basic user tasks like word processing, accounting, and web browsing as a Windows 95 desktop did thirty years ago. Our internet connections got faster — that was helpful. Data rates got fast enough that downloading and viewing photos no longer feels like waiting — that was a big change. Graphics got a lot better. But why does Microsoft Excel feel like it takes the same amount of time to start up, and why does it become unresponsive just as often? In other words — is it better?
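To put a rough number on that gap, here is a back-of-the-envelope sketch in Python. It assumes the commonly cited two-year doubling period and the 1995-to-2026 span discussed above; the figures are illustrative, not a measurement.

```python
# Rough estimate of how much the raw transistor budget grows between
# Windows 95 (1995) and a 2026 machine, assuming the commonly cited
# doubling period of about two years.
years = 2026 - 1995          # 31 years
doubling_period = 2          # years per doubling (informal assumption)
growth = 2 ** (years / doubling_period)
print(f"Roughly {growth:,.0f}x the transistors")  # ~46,000x
```

Tens of thousands of times the hardware, and yet the everyday experience of waiting on a spreadsheet feels much the same.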
I think this feeling extends to all sorts of “disruption” that we’ve seen from the tech industry over the past few decades.
In the 90s, people used to take taxis from place to place. Now we have Uber and Lyft. From an end-user perspective, they work exactly the same. You signal for a car, go to your destination, and pay for the ride based on distance traveled.[^1] Is it better?
In the 90s, people used to go to a video store or library to check out movies or box sets of released TV shows to watch at home. Now, we do this through streaming services. People have saved a small amount of time and eliminated a modicum of human interaction as a result.[^2] Is this really so much better?
In the 90s, when you wanted to have restaurant food at your house, you would collect takeout-and-delivery menus from local establishments or consult the White Pages and call the restaurant with your order. Now, we can do exactly the same thing from an app on our phone. Perhaps, nowadays, there are more options for delivery, rather than takeout only, than there were before.[^3] Does this deserve breathless talk of innovation and disruption?
And now we come to “AI.” The most powerful thing tech companies have done with “AI” is get the algorithms to “write code on their own,” thus displacing hundreds of thousands of workers, getting everyone to question the value of an education previously predicated on “learn to code” as the path to applicable skills, and stoking fear that your job will be next. Now, code is a highly structured language. As a result, we’ve had code generation, syntax checking, and integrated development environments (IDEs) that automate these tasks for decades. Other kinds of writing are much less structured — which is why the stereotypical quote from Clippy, another innovation of the 90s, was “It looks like you’re trying to write a letter” rather than “a persuasive essay,” “a fantasy novel,” or “a romantic poem.” The core innovation “AI” companies have come up with is automating the process of searching StackOverflow for code relevant to a prompt, then coupling that to all the syntax checking we already had. Is it better? Well, that really depends on “for whom,” doesn’t it? It’s not better for consumers, who don’t generally have everyday problems they need to solve by generating new code. It’s not better for coders, who now have to behave like middle managers — a different skill set — and lose practice at their craft. It’s not better for companies, which are going to lose their talent pipelines all while accruing technical debt from code they don’t understand. It’s certainly not better for anyone whose data has been stolen, or for anyone who shares a power grid, or a planet with a limited supply of rare-earth elements, with a data center. There are some helpful niche applications of machine learning. And I guess it’s better for the companies offering “AI” products, in that they have a thing to sell. Was all that a worthwhile trade?
This is now my go-to question about tech innovations. Is it better? Is it really better than the way we did things before?
The answer is in the footnotes. The innovations that have disrupted our lives over the tech boom were not, in general, making people’s lives better. They were creating ways for big corporations to act as middlemen and skim a percentage off of things people were already doing. And that’s why corporations are so excited about “AI”: they think these tools are a way to expose lots of other industries and workers to the same middleman-skimming, even though the tools don’t add value to what workers are doing.
If we don’t like that, then we ought to push our government for action: antitrust and antimonopoly enforcement would be a good place to start. Another would be taxes — on billionaires and corporations.
[^1]: Of course, there has been some corporate “innovation” in how to evade federal, state, and local regulations, as well as in how to extract more value from more vulnerable riders and drivers.

[^2]: Of course, now, instead of viewers paying by usage, they pay a subscription fee to a corporation even if they don’t watch anything.

[^3]: Of course, now, the app company can extract fees from both the diner and the restaurant, instead of the restaurant charging a delivery fee to the diner only.