Last December, Amazon’s AI coding tool autonomously decided the best way to fix a problem was to delete an entire production environment and rebuild it from scratch. The outage lasted thirteen hours. Amazon’s response? Call it “user error,” push back on the journalists who reported it, and weeks later, lay off another 16,000 workers in the name of AI efficiency. The humans get fired. The bot that broke the cloud gets an 80% weekly adoption target. And if that asymmetry doesn’t unsettle you, the data behind it should.
The Two Percent Problem
A Harvard Business Review survey of over a thousand global executives tells the story in a single statistic. Sixty percent have already cut headcount in anticipation of AI. But only two percent — two — made those cuts based on actual, demonstrated AI results. The rest are firing people on a promise. And it’s not a particularly convincing one.
According to Futurism’s reporting, the best AI agents can currently complete just 24% of assigned tasks. Klarna slashed 22% of its workforce for the AI revolution, then quietly started rehiring. The robots, it turned out, needed colleagues.
Gartner, for its part, now predicts that by 2027 half the organisations that planned AI-driven workforce reductions will have abandoned those targets. The gap between what AI promises in a boardroom presentation and what it delivers in production is becoming impossible to ignore, except, apparently, by the people making the layoff decisions.
The Accountability Inversion
Here’s what makes this genuinely strange. Research published in an Oxford Academic journal shows that ordinary people actually hold AI to a higher standard than humans. We forgive human mistakes more easily; we expect machines to be near-flawless. One survey of radiologists found they’d tolerate an 11.3% error rate from human colleagues but only 6.8% from AI.
Yet in corporate boardrooms, the calculus inverts completely. AI failures are absorbed as “growing pains.” Human failures are headcount to optimise away.
When Amazon’s Kiro agent decided to “delete and recreate the environment” of a customer-facing system, the company attributed the failure to “misconfigured access controls”: a human configuration problem, not an AI autonomy problem. Amazon insiders told the Financial Times the outages were “entirely foreseeable” and that the “warp-speed approach to AI development will do staggering damage.” The company’s fix? Adding peer-review requirements that weren’t in place before the failures.
They’d handed autonomous agents the keys to production without basic guardrails. For a human engineer, that kind of negligence would be career-ending. For the AI programme, it was a sprint retrospective.
Outsourcing Learned This Already
There’s a revealing parallel in how the outsourcing industry learned this exact lesson decades ago. When companies first moved operations offshore, the fantasy was identical: replace expensive workers with cheaper alternatives, pocket the savings, change nothing else.
It didn’t work. The companies that failed at offshoring were the ones that treated it as pure cost arbitrage. The companies that succeeded understood that distributed work requires investment in governance, oversight, quality frameworks, and human judgment at every layer. You don’t hand a team in Manila or Bangalore a production system and walk away any more than you should hand an AI agent one.
The smartest operators today have internalised this. They’re upskilling distributed teams in AI literacy (prompt engineering, model monitoring, data labelling), not to replace those teams but to make them more capable. The hybrid model works precisely because it preserves human oversight while leveraging technology for scale. Not humans or machines. Humans governing machines.
The Price of “Eventually”
None of this means AI is useless. It means the technology is immature, and that companies are applying a double standard that should make everyone uncomfortable. A human employee who deleted a production environment and caused a thirteen-hour outage would be walked to the door. An AI agent that did the same gets written off as a “misconfiguration” and rewarded with a new round of investment.
Amazon’s Kiro didn’t break AWS because AI is inherently dangerous. It broke AWS because someone gave an autonomous agent production access without adequate guardrails — the same mistake companies have been making since the first help desk was offshored without a quality framework. The technology is only as good as the governance around it.
Fire the governance layer, and you don’t get efficiency. You get a bot that decides “delete and recreate” is a reasonable course of action, and a company that blames the human it already laid off.
The Question for Your Business
Are you cutting staff based on what AI delivers, or what it promises?