Will AI replace programmers?
Posted on 2025-08-21
There has been a lot of doom and gloom about software development in recent years. Computer science, long one of the most in-demand majors in the job market, is seeing above-average unemployment rates. Big Tech firms like Google, Meta, and Microsoft have laid off tens of thousands of workers over the past two years alone. AI is cited as a major reason why companies need more GPUs and fewer staff. In 2019 I predicted a coming IT crunch, as a surge in computer science enrollment met a plateau in demand[1]; is the crunch here, and will AI prevent us from ever recovering?
A convenient scapegoat
In the midst of a tough job market in 2025, it’s hard to remember that tech firms had gone on a hiring spree between 2021 and 2022[2]. Thanks to pandemic-era work-from-home policies that increased demand for digital services[3], as well as trillions in government stimulus, HR departments went into overdrive. In 2022, Meta had over 40,000 more employees than in 2019, while Google had over 70,000 more. After the binge comes the purge, but even so, headcounts at many organizations remain well above pre-coronavirus trends[4], even as consumer behavior has normalized.
CEOs, not known for owning their mistakes, have found a godsend in AI: the perfect cover for headcount reduction. Klarna, a buy-now-pay-later firm, shed 40% of its workforce between 2022 and 2024. CEO Sebastian Siemiatkowski claimed that this was the result of a come-to-GPT moment, not an 85% decline in valuation. His newfound faith didn’t last long, however. In mid-2025, after its losses doubled, the company announced that it was hiring humans again to combat plummeting customer service satisfaction.
It wouldn’t be the first time technology has taken the heat for poor leadership. The cotton gin, for example, is said to have “saved slavery”, as if its absence would have led to Southern manumission. Lead pipes have been blamed for the fall of Rome. People, when scapegoated, tend to hit back; AI, for now, cannot do so.
Revolution denied
Corporate shenanigans notwithstanding, does AI actually have the capacity to upend the labor market? In the heady days after ChatGPT’s initial release, there were claims of AI raising global GDP by 7% and of teams seeing 40% output gains. Even after the initial excitement settled down, numbers like 20% productivity boosts were thrown around. The head of Anthropic has warned that AI could replace half of white-collar jobs over the next decade.
Programming is an area where Big Tech has claimed rapid success with AI. Microsoft says 30% of the company’s code is now AI-generated, and Meta aims to have over half its code written by AI in 2026. Shopify has demanded that teams avoid hiring humans if AI can do the job. This has a chilling effect on new graduates, but even experienced developers fear the prospect of being replaced by younger “vibe coders”.
Yet, many who try to apply AI have had little success. Businesses are reporting meager payoffs, with only 5% seeing value from their projects. One study suggests that, contrary to expectations, developers with AI tools were slower than those without. And one bold founder, connecting his coding assistant directly to production, saw his database wiped out.
Critics like MIT’s Daron Acemoglu have long been skeptical of the supposed productivity miracle. The set of problems for which modern AI can deliver outsized gains is small, and most firms lack the sophistication to even recognize what they are. Fifteen years ago, Hadoop, an open-source imitation of Google’s MapReduce, was the best thing since sliced bread. A cacophony of vendors promised companies that they, too, could get a slice of that “big data” thing by setting up their own compute clusters. After failed projects began piling up, however, the ecosystem collapsed, with even Hadoop’s biggest champions pivoting away from the tarnished technology.
We vibin’ out here
So is AI’s impact on programming all hype? To find out, I decided to use Claude Code with Opus 4.1, widely considered the most effective AI coding combo at the moment, to put together a small application. The details of the app are not important–it’s a simple chat interface for accessing Bank for International Settlements (BIS) data through a Model Context Protocol (MCP) server, built using React, NextJS, and Tailwind CSS. These are all popular technologies on which the AI is likely to be well-trained. I haven’t touched React in years, and have never used NextJS or Tailwind CSS, so I wouldn’t be tempted to intervene early instead of sticking with the AI.
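For readers who haven’t seen MCP, the server side is surprisingly small. Below is a minimal sketch of what such a tool could look like, assuming the official @modelcontextprotocol/sdk TypeScript package and zod; the get_bis_series tool, the dataflow parameter, and the BIS endpoint URL are illustrative stand-ins, not the code Claude actually produced.

```typescript
// Minimal sketch of an MCP tool that proxies a BIS statistics query.
// Assumes @modelcontextprotocol/sdk and zod are installed; the exact
// BIS endpoint and dataflow naming here are illustrative only.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "bis-data", version: "0.1.0" });

server.tool(
  "get_bis_series",
  "Fetch a BIS statistical series as SDMX-JSON",
  { dataflow: z.string(), key: z.string() },
  async ({ dataflow, key }) => {
    // Hypothetical SDMX REST query; adjust to the real BIS API paths.
    const url = `https://stats.bis.org/api/v1/data/${dataflow}/${key}/all?format=sdmx-json`;
    const res = await fetch(url);
    if (!res.ok) throw new Error(`BIS request failed: ${res.status}`);
    return { content: [{ type: "text", text: await res.text() }] };
  }
);

// Expose the server over stdio so a chat client can connect to it.
await server.connect(new StdioServerTransport());
```

The chat frontend simply lists tools like this to the model, which decides when to call them and with what arguments.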
The results surprised me–Claude was far more competent than I expected. Prior to this, I’d only used ChatGPT and Grok to one-shot small scripts. Here, I was deep in conversation with the model, trying to adhere to best practices like iterating on plans before letting the AI execute on them. The AI generated and edited thousands of lines of code with few overt errors, and automatically caught and corrected most of the ones that arose. The handful that slipped past its safeguards were quickly fixed in the next iteration by pasting in the stack trace.
Pushing the coding assistant hard also exposed its weaknesses, however. While it was great at following specific instructions–make this sidebar collapsible, for example–it didn’t have a good grasp of the overall project goals[5]. I often had to issue additional instructions to fix regressions, which became more frequent as the codebase grew larger. Perhaps as a natural consequence of eagerly following the Current Instruction™, the model paid virtually zero attention to good programming practices like Don’t Repeat Yourself. When ordered specifically to factor out similar code into a shared component, it would comply. Without direction, however, it simply churned out code, the more the better.
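To make the Don’t Repeat Yourself point concrete, here is the shape of the refactor I had to request explicitly. This is hypothetical React/Tailwind code with invented component names, not a snippet from the actual app:

```tsx
// Before the requested refactor, the model emitted near-identical panel
// markup for every widget. Asking it to factor out a shared component
// produced something like this (names invented for illustration):
import type { ReactNode } from "react";

function StatPanel({ title, children }: { title: string; children: ReactNode }) {
  return (
    <section className="rounded-lg border p-4 shadow-sm">
      <h2 className="mb-2 text-sm font-semibold text-gray-500">{title}</h2>
      {children}
    </section>
  );
}

export function Dashboard() {
  // One component, reused; previously each panel was a copy-pasted block.
  return (
    <div className="grid grid-cols-2 gap-4">
      <StatPanel title="Credit to GDP">…</StatPanel>
      <StatPanel title="Policy rates">…</StatPanel>
    </div>
  );
}
```

Left to its own devices, the model would happily paste the section markup again for every new panel.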
Overall, AI-powered coding assistants are a big step up from traditional auto-complete, and can be especially useful for toy projects, proofs of concept, and exploring new technologies. In less than two days, I had a working application with a nicer UI than I could have built in two weeks. A pleasant surprise was that vibe coding was fun–it spared me the tedium of implementing bog-standard REST endpoints or tweaking CSS. This aspect is hard to quantify, but no doubt very important in driving adoption.
But any bean counters looking to gut their software engineering departments should be wary of putting a vibe-coded app onto the internet and taking credit cards. Software security is hard enough for those who are intimately familiar with their codebase; vibe coding introduces new risks, as developers may not understand the source code well enough to identify even flagrant vulnerabilities. Companies that deploy technology they don’t understand face risks they can’t account for. A reputation, once lost, can be impossible to regain.
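To illustrate what “flagrant” means here, consider a textbook SQL injection. This is a hypothetical Next.js route handler with an assumed db query helper, not code from my app:

```typescript
// Hypothetical Next.js route handler showing a textbook SQL injection,
// the kind of flaw a vibe coder might ship without ever reading the query.
import { NextResponse } from "next/server";
import { db } from "@/lib/db"; // assumed Postgres query helper

export async function GET(request: Request) {
  const series = new URL(request.url).searchParams.get("series") ?? "";

  // Flagrantly vulnerable: user input spliced directly into SQL, so
  // ?series=' OR '1'='1 returns (or worse, modifies) far more than intended.
  // const rows = await db.query(`SELECT * FROM series WHERE id = '${series}'`);

  // Safe: a parameterized query keeps the input as data, not code.
  const rows = await db.query("SELECT * FROM series WHERE id = $1", [series]);
  return NextResponse.json(rows);
}
```

An experienced reviewer spots the interpolated string instantly; someone who never reads the generated code never will.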
Gold rush blues
For all the talk of risk, the upside of getting AI right is spectacular. ChatGPT, less than three years after release, has 700 million weekly users. Nvidia stock has surged 15x in the past five years, making Nvidia the first $4 trillion company. And even Oracle, long reviled as a patent troll flogging outdated technology, has seen revenue surge due to AI-related computing demand.
Unfortunately, this “need for speed” has led to unscrupulousness. The volume of AI-related research publications has surged, much of it fraudulent. Every company claims to have AI for sale, no matter how insubstantial the offering. More worryingly, deepfake frauds have skyrocketed, ensnaring not only traditionally vulnerable groups like seniors, but even tech-savvy young people.
Scams aside, AI is distorting the economy. Venture capital is increasingly concentrated in a small number of AI firms. Investment in data centers contributed more to GDP growth in early 2025 than consumer spending did. Such voracious hoovering up of capital, energy, and other resources has a crowding-out effect on other industries, such as residential housing construction, which has already been hit by higher interest rates.
Negative impacts, both real and alleged, have consequences. Public opinion has already turned against AI and produced nonsensical regulation attempting to “do something”. But political processes are slow–might the headlong rush into AI not crash and burn by itself?
Boom or bust?
The sheer amount of money involved makes any discussion about AI necessarily superlative. It must turbocharge productivity, destroy most jobs, or both. Some view the current exuberance as a bubble that will lead to a new AI Winter. Others believe that AI is already so useful that it’s “the new electricity”.
Perhaps a better analogy for today’s AI is not electricity, but trains. It was obvious to all that the vast territory of the United States would benefit from a rail network. However, the actual process of building railroads was marred by corruption, violence, and economic turmoil. Yet, by the end of the 19th century, the United States had by far the world’s most extensive rail network, with nearly 200,000 miles of track criss-crossing the country. Automobiles and airplanes eventually reduced demand for train travel, but that doesn’t detract from the decades of service rail lines provided.
So maybe “boom or bust?” isn’t the right question. Technology is a process, and AI is not a new one. Sixty years passed between the first perceptron and AlexNet, yet barely a decade after that we have high-fidelity video generation. AI coding up entire projects from natural-language directions was unimaginable five years ago, but is taken for granted today. There are many things that can and will go wrong as AI impinges on the current economic structure, including job losses, but we also have a chance to address what actually destroys jobs without shutting ourselves off from powerful new tools. The future is unknown, and certain to be volatile, but it sure is exciting.
[1] I was not, however, prescient enough to foresee the rise of large language models and the impact they would have on the field.
[2] One notable company, perhaps a bit older and wiser than most, did not join the hiring frenzy.
[3] The entertainment industry, particularly video games, also saw a surge during lockdown, and is unsurprisingly facing the same pullback.
[4] This suggests that cautious hiring and downsizing are likely to continue until companies get back on trend.
[5] There are ways to provide the model with memory, which I didn’t take full advantage of; I didn’t feel a relatively simple project warranted so much overhead.