AI deserves better critics
2026-04-10
As AI spending reaches a fever pitch, with $650 billion of projected capital expenditure in 2026 alone, criticism of the industry has gotten louder. The slings and arrows come from all directions, but can be broadly grouped into a few categories:
- AI’s capabilities are overblown, and talk of AGI is nonsense
- Whatever useful work AI does cannot be done economically
- If AI is able to do useful work economically, it will worsen income inequality by displacing workers
- Regardless of what AI can do, it consumes too much water and electricity
Two of the most prominent voices in this chorus are Ed Zitron, a PR executive turned host of the Better Offline podcast, and cognitive psychologist Gary Marcus, who writes a popular Substack. Their arguments are representative of the broader criticism–and instructive in their shortcomings. The problem is not that AI is being criticized too harshly, but that too much criticism treats a fast-moving industry as if today’s limitations, business models, and social effects were already settled facts. Let’s take a look at the validity of these complaints, and whether they’ll still be relevant in the future.
Can’t is a strong word
There’s no doubt that AI boosters, including the companies that develop and sell models, have overhyped its capabilities. Sam Altman of OpenAI boasted, a year ago, that AI could already rival PhDs. Dario Amodei of Anthropic claimed in 2024 that we’d have “geniuses in a datacenter”, though he gave a more generous 7 to 12 year timeline for feats such as doubling the human lifespan. Such “vision” is typical of Silicon Valley culture, and should be taken with a grain of salt.
It would be wrong, however, to ignore the progress that AI has made in a short time. ChatGPT, when first released, was little more than a curiosity. Within a year it had gone from funny sentence completer to capable research tool. When Claude Code was introduced in February 2025, coding agents struggled to edit a file and implement a single function. Before the year was out, they could orchestrate entire projects.
Marcus is perhaps the most intellectually serious critic in this camp. In the spirit of Yann LeCun, who argues that LLMs cannot truly reason or understand, Marcus maintains that these models are merely “stochastic parrots”–they can’t discover what isn’t in their training data. They won’t find the cure for cancer, he points out, because the cure for cancer hasn’t been written down yet. Other critics make a related claim: that “hallucinations” will always render LLMs untrustworthy.
There is real validity to these critiques, but they increasingly ignore what modern LLMs can actually do. They are no longer sealed vessels interpolating over training data. They use tools to interact with the world, generate new data for themselves, and learn in-context from the results. In many constrained domains, they can now participate in parts of the same workflow a human researcher would use–running experiments, querying databases, and iterating on hypotheses. There is growing evidence of LLMs exhibiting emergent capabilities beyond what exists in their training sets. New architectures facilitating memory, abstraction, and planning will probably be needed for genuine intelligence–LeCun’s own venture into world models may be a step in that direction–but today’s LLMs are already more than stochastic parrots.
As for reliability, the important question is not whether LLMs ever make mistakes, but what kinds of mistakes they make, how often they make them, and whether those errors can be bounded by tooling and supervision. Hallucinations are a real problem, especially in domains where factual precision matters. But many practical uses of AI do not require blind trust; they require a system that is fast, cheap, and good enough to draft, classify, retrieve, or suggest work that a human can then verify.
Good engineers understand that quality must be weighed against labor, cost, and other factors. This is obvious in physical products, where there’s usually a wide spectrum of goods across a range of prices. The question is not whether AI can be made perfect, but whether it’s a better trade-off than the alternatives. Coding agents are not competing against John Carmack[1]; they’re there to help programmers with limited time, energy, and familiarity with a constantly churning tech ecosystem get things done.
Show me the money
The enormous amount of money being poured into AI has raised alarm about a bubble. This is where Zitron is most vocal, painting the AI industry as a house of cards propped up by hype. He and others characterize the circular financing among a small number of firms–hardware manufacturers investing in service providers so that the latter can turn around and buy more hardware from them–as financial chicanery. Leading companies like OpenAI lose billions a year and will continue doing so for the foreseeable future. Anthropic’s rebuttal is that “each model is profitable”, which doesn’t exactly meet GAAP standards[2].
Zitron’s case, however, is weakened by his tendency to play fast and loose with the numbers. In March 2026, he claimed that Anthropic had generated only $5 billion in revenue “to date”, based on a court filing in which the company stated that all-time sales since 2023 exceeded $5 billion. By that point, however, Anthropic had already announced an annual revenue run rate of $14 billion in February, a figure that rose to $30 billion by April[3]. Anthropic is a private company and its actual financials won’t be known until it files for an IPO, but claiming $5 billion in total revenue while ignoring virtually all other data points is not serious analysis. Correcting that claim does not prove the business is sound; it simply means the bearish case should be argued with better numbers. The underlying question–whether these companies can ever turn a profit–is still fair.
AI defenders point out that many big companies, such as Amazon and Uber, lost money for years while investing in their products before becoming profitable. Tech investors can demonstrate a lot of patience, especially with private companies. Reddit was founded in 2005, and did not have its first profitable quarter until 2024, the same year it went public. AI is still in a land-grab phase, and investors currently favor companies that aggressively spend money rather than earn it.
This argument is predicated on AI becoming increasingly useful, and on bigger investments reaping disproportionate rewards, so that today’s losses can be recouped tomorrow via market dominance and price leverage. The big unknown is the shape of the demand curve. There is a threshold at which chatbots and coding agents go from “not useful” to “useful”, and another at which they go from “good enough” to “features most users don’t care about”. Shoveling money into optimizing current use cases may be the wrong move, when capital should instead be deployed into novel domains.
A second camp of defenders takes a different perspective: all technological revolutions are made possible by over-exuberance. Railroads, the telegraph, electricity, and the web all saw rapid adoption on the back of creative financing. Bubbles, panics, and recessions are mere inconveniences during the march of progress. Few today recall the Panic of 1873, but many continue to benefit from a continent-wide rail network.
All sides have their points, and they don’t even necessarily disagree on what is happening, or what the impact will be. They’re simply looking at the trade-offs differently. Those who are nonchalant about a bubble tend to be the ones who believe they won’t be severely impacted by it. Maybe they have sufficient financial buffer to coast through a bust; perhaps they even view a downturn as an opportunity to pick up assets on the cheap. What none of them dispute is that the money is being spent, and that it is reshaping who benefits and who gets left behind.
The rich get richer?
Perhaps the most emotionally charged criticism of AI is that it will worsen income inequality by wiping out white-collar jobs. Geoffrey Hinton, the “godfather of AI”, warns that middle-class professions like law, medicine, and accounting are on the chopping block. Jack Dorsey, CEO of Block, has blamed AI for deep job cuts the company has made for several years in a row.
These claims deserve scrutiny. The good professor Hinton, brilliant as he is in his domain, betrays a surface-level understanding of what professionals actually do. The technical aspects of most jobs–summarizing case law, reading medical scans, crunching numbers–are not what makes the humans necessary. When it comes to the professions in particular, legal and moral responsibility are key. A doctor can be sued for malpractice; a lawyer can be disbarred. Someone has to be on the hook, and “the AI did it” is not an acceptable defense[4]. There is also a vast amount of context that current AI, trapped inside a computer, struggles to gather. Who’s going to tell it what was discussed over coffee, or at the company Christmas party–the kind of data that no amount of Slack mining can fully capture?
That does not mean employment is safe. The stronger case is not that AI will fully replace doctors, lawyers, or accountants, but that it may let firms operate with fewer junior employees, paralegals, analysts, and support staff. That is a more plausible and more serious concern. Still, it is far from obvious how large the effect will be. As explored in a previous post, Nobel laureate Daron Acemoglu has found that actual productivity gains from AI remain meager, and many businesses report little payoff.
As for Block, it’s worth noting that the company quadrupled its headcount between 2019 and 2024, from roughly 3,300 to 13,000, while its stock price declined by 75% over the past five years. That’s not a company whose layoffs are driven by revolutionary productivity gains–it’s one correcting a hiring binge. There’s a lot of AI-washing going on, with executives eager to dress up belt-tightening as technological sophistication. What actually destroys jobs is a complex interplay of economic imbalances, over-hiring, and structural shifts–and blaming AI lets executives off the hook for decisions they made long before ChatGPT existed.
It is under-discussed, however, that the concentration of capital in a single sector has distortionary effects on the economy. Spending on AI accounted for the bulk of GDP growth in the first half of 2025. As demand for land, energy, and labor heats up, other sectors such as manufacturing, which requires many of the same inputs, will be squeezed. Many key inputs to AI, such as computer chips, are imported, inflating the trade deficit that the current administration has explicitly vowed to bring down. Much of the benefit of the boom flows to foreign firms like TSMC and SK Hynix, who have made only token investments in American manufacturing. And data centers, once built, generate little long-term employment.
Par for the course
Even setting aside the economic distortions, critics charge that the AI buildout is consuming unsustainable amounts of electricity and water. The per-unit numbers are striking: modern AI GPUs draw 700-1,200 watts per chip, compared to 150-200 watts for traditional CPUs, and AI racks consume 50-150 kilowatts versus 10-15 for conventional ones. An AI data center requires far more electricity than a conventional CPU-based facility for the same number of racks.
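The per-unit gap is easy to make concrete with a back-of-envelope calculation. This is a sketch using only the ranges quoted above–rough industry figures, not measurements:

```python
# Back-of-envelope power comparison using the ranges cited above.
ai_chip_w = (700, 1_200)     # modern AI GPU draw per chip, watts
cpu_chip_w = (150, 200)      # traditional CPU draw per chip, watts
ai_rack_kw = (50, 150)       # AI rack, kilowatts
conv_rack_kw = (10, 15)      # conventional rack, kilowatts

# Most conservative and most aggressive ratios implied by the ranges.
chip_ratio = (ai_chip_w[0] / cpu_chip_w[1], ai_chip_w[1] / cpu_chip_w[0])
rack_ratio = (ai_rack_kw[0] / conv_rack_kw[1], ai_rack_kw[1] / conv_rack_kw[0])

print(f"per-chip draw: {chip_ratio[0]:.1f}x to {chip_ratio[1]:.1f}x")  # 3.5x to 8.0x
print(f"per-rack draw: {rack_ratio[0]:.1f}x to {rack_ratio[1]:.1f}x")  # 3.3x to 15.0x
```

Even pairing the lowest AI figure with the highest conventional one, the per-rack gap is more than threefold.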
But aggregate numbers matter too, because critics often write as if AI data centers are consuming a civilization-scale share of national resources. They are not. Deloitte predicted data centers would account for about 2% of global electricity consumption, or 536 terawatt-hours, in 2025. In the US, Pew estimates that data centers consume 4% of electricity–and this includes all data centers, not just ones running AI workloads. This is roughly on the order of the aluminum and steel industries combined–substantial, but not exceptional for an economy-scale input.
As for water, golf courses consumed 1.63 million acre-feet (~531 billion gallons) of water in 2024. All data centers in the US, meanwhile, consumed about 66 billion liters (~17.4 billion gallons) in 2023. Even assuming that quantity has doubled since then due to AI buildouts, data centers still consume far less water than golf.
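The golf-versus-data-center comparison is easy to sanity-check. A minimal sketch, using the figures cited above and the standard conversion factors (1 acre-foot ≈ 325,851 US gallons; 1 US gallon ≈ 3.785 liters); the exact gallon total depends on rounding of the conversion:

```python
GALLONS_PER_ACRE_FOOT = 325_851   # standard US conversion
LITERS_PER_GALLON = 3.785_41

golf_acre_feet = 1.63e6           # US golf course water use, 2024
golf_gallons = golf_acre_feet * GALLONS_PER_ACRE_FOOT

dc_liters = 66e9                  # all US data centers, 2023
dc_gallons = dc_liters / LITERS_PER_GALLON

print(f"golf:         {golf_gallons / 1e9:.0f} billion gallons")  # ~531
print(f"data centers: {dc_gallons / 1e9:.1f} billion gallons")    # ~17.4
print(f"ratio:        {golf_gallons / dc_gallons:.0f}x")          # ~30x
print(f"even doubled: {golf_gallons / (2 * dc_gallons):.0f}x")    # ~15x
```

Even granting a doubling of data center consumption since 2023, golf still uses roughly fifteen times as much water.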
What is a serious concern, however, is that data center resource demands tend to be highly concentrated, especially in water-scarce regions such as Phoenix and Dallas. There are an estimated 4,000 data centers in the US, compared to 16,000 golf courses, so the impact of golf is more spread out. Policy could be used to encourage data center construction in areas where electricity and water are more readily available, such as the Pacific Northwest and Great Lakes region, reducing some of the pressure on existing markets.
Better critics, not fewer
Each of the criticisms leveled at AI contains a kernel of truth, but is also muddied by confounding factors and premature conclusions. LLMs have real limitations, but they are being extended with tools, memory, and new architectures faster than the critiques can keep up. The economics are genuinely uncertain, but sloppy accounting–in both directions–obscures more than it reveals. Job displacement fears are understandable, but they often conflate AI’s potential impact with layoffs driven by over-hiring and financial mismanagement. Resource consumption is real, but modest in aggregate and addressable through better policy.
The long and short of it is that AI is still a rapidly changing industry, and it is too early to draw definitive conclusions. Claims about capability, profitability, social impact, and resource usage can all shift dramatically in a short time–as they already have. Criticism that treats today’s snapshot as a final verdict will age poorly, just as the breathless predictions of the boosters inevitably do.
Better critics would ask harder questions without pretending the answers are already known. They would compare AI to realistic alternatives rather than idealized human performance, distinguish local harms from aggregate ones, and separate hype, fraud, and genuine uncertainty instead of mashing them together. That kind of criticism would do more than score points against boosters. It would help shape AI’s evolution rather than merely react to its impact.
1. Carmack is a big believer in AI, and has founded his own AI research firm. ↩
2. While being publicly indifferent about financial losses, Anthropic has been aggressively curbing costs via rate limits and, more controversially, allegedly “dumbing down” models after release. ↩
3. The discrepancy likely reflects the difference between cumulative recognized revenue and annualized run rate, but Zitron presented the lower figure without qualification as if it were the whole picture. ↩
4. Air Canada found this out the hard way when its chatbot fabricated a bereavement fare policy. A tribunal ruled the company liable for the chatbot’s claims–someone still had to answer for it. ↩