

Read, Heard & Seen

An AI economy promises adaptability over expertise, but beneath the innovation narrative lies accelerating inequality and degrading employment quality for workers.

In a recent episode of the podcast I’ve Got Questions, Sinead Bovell observes that companies making layoffs today are reacting less to AI's current capabilities than to the possibility that entire workflows may soon become obsolete. Her vision of an AI economy privileging adaptability, learning, and entrepreneurialism over traditional expertise sounds empowering - a meritocratic future where nimble generalists thrive. Yet this framing obscures a darker trajectory: the systematic degradation of employment quality and the acceleration of inequality that such an economy practically guarantees.

Bovell correctly identifies that skills will trump experience in an AI-mediated workplace. But what does this actually mean for workers? It means perpetual re-skilling - an endless treadmill of learning that places the burden of adaptation entirely on individuals rather than institutions. When workflows become obsolete overnight, workers don't just change jobs; they lose the accumulated value of their expertise. A decade of specialised knowledge becomes worthless, and the experienced professional finds themselves competing on equal footing with the recent graduate, except the graduate carries less financial obligation and undercuts them.

This constant devaluation of experience has a direct parallel in today's employment statistics, which tell a misleadingly optimistic story. Low unemployment figures mask the proliferation of precarious gig-work contracts, zero-hours arrangements, and perma-temp positions that offer neither security nor benefits. These aren't aberrations but structural features of an economy that treats labor as infinitely flexible and disposable. The AI economy will amplify this pattern exponentially.

Consider what "entrepreneurialism" means in this context. It's not the romantic vision of garage startups but rather the normalisation of workers functioning as atomised businesses-of-one, stripped of collective bargaining power, employment protections, and social safety nets. When everyone must be entrepreneurial just to survive, entrepreneurship ceases to be opportunity and becomes obligation - a tax levied on existence.

The quality of employment deteriorates because AI doesn't just automate tasks; it fragments work into micro-components that can be distributed, monitored, and compensated with algorithmic precision. The result is hyper-optimisation that squeezes every inefficiency from human labor while siphoning value upward to platform owners and capital holders. Workers become interchangeable modules in systems they neither control nor understand.

Bovell's analysis touches on this reality (regarding US-style employer-based health insurance) but doesn't fully reckon with its implications. If AI transformations prove as dramatic as predicted, we are not facing gradual adjustment, but potential catastrophe for working populations. The inequality already rising will become chasm-like. Those with capital to invest in AI systems will capture exponential returns. Those selling only their labour, however adaptable and entrepreneurial, will compete in ever-more-precarious conditions for a shrinking share of value.

The question isn't whether workers can learn to adapt to an AI economy, but whether that economy will provide conditions worth adapting to. An economy that demands constant reinvention while offering diminishing security and compensation isn't sustainable; it's extractive. Unless we fundamentally restructure how AI-generated productivity is distributed, the future won't be one of opportunity through adaptability but rather a broad collapse in quality of life, dressed in the language of innovation.

The real transformation required isn't in worker skills, but in economic architecture itself. Without addressing who owns AI systems, who captures their value, and what obligations exist to those whose labor they displace or devalue, we are simply describing inequality's acceleration in aspirational terms.

Strategic Reality (30sec)

Sam Altman promises AI will replace executives, but AI companies demand longer hours from growing workforces. Why the hype must continue at all costs.

Sam Altman's recent proclamation that AI will soon replace senior executives, allowing companies to operate with just two or three people, reveals more about OpenAI's precarious financial position than about any imminent technological revolution. While AI evangelists promise a future where artificial intelligence handles the complex work of running organisations, the reality on the ground tells a different story - one where AI companies themselves are demanding ever-longer hours from expanding human workforces.

Consider Coca-Cola's recent AI-generated Christmas advertisement, trumpeted as a breakthrough in creative automation. The company made much of the fact that artificial intelligence created their holiday campaign. What they didn't advertise was that producing this supposedly AI-generated content still required hundreds of human workers - prompt engineers, quality controllers, editors, brand managers, and creative directors, labouring to coax the technology into producing something commercially viable. The AI didn't replace the workforce; it added new layers of human intervention to an already complex production process.

This pattern repeats across the AI industry. OpenAI, Anthropic, and other leading firms aren't reducing their headcount; they are expanding rapidly, demanding gruelling hours from employees racing to justify astronomical valuations. These companies require armies of engineers, researchers, content moderators, trainers, and support staff. Far from the streamlined, automated operations they promise customers, AI companies themselves remain stubbornly human-intensive enterprises.

The disconnect isn't accidental; it's existential. Altman and his peers must maintain maximum hype because the infrastructure costs of AI development are crushing. Training cutting-edge models requires billions of dollars in compute power, energy consumption rivalling that of small nations, and massive data centre investments. OpenAI reportedly loses money on every ChatGPT conversation. These economics only work if investors believe that they are funding not just another tool, but a civilisation-transforming revolution.

If the market recognises AI for what it actually is - a powerful tool that has made a giant leap, but in the end just a tool - the valuations evaporate. OpenAI's reported half-trillion-dollar valuation depends entirely on the assumption that AI will fundamentally replace human labor at scale, not merely augment it. Admit that AI remains deeply dependent on human expertise, judgment, and intervention - not to mention energy, and social acceptance of its prioritisation over everyday concerns - and suddenly those numbers look absurd.

This explains Altman's increasingly grandiose predictions. Each new claim - AGI within reach, AI CEOs around the corner, massive job displacement imminent - serves to justify continued investment in infrastructure that has not yet demonstrated viable economics. The hype itself becomes the product, keeping capital flowing while the technology catches up to promises already made.

The irony is profound: companies built on the premise of replacing human workers are instead discovering how indispensable those workers remain. They are not building a post-human economy; they are constructing an elaborate theatre where AI plays the starring role while humans do the actual work backstage. And they need you to keep believing in the performance, because if you don't, the whole production shuts down.

Sam Altman Says That in a Few Years, a Whole Company Could Be Run by AI, Including the CEO
OpenAI CEO Sam Altman boldly predicts that an era of companies being run by AI models is right around the corner.
Kelly Joyce, a sociologist at the University of North Carolina who studies how cultural, political, and economic beliefs shape the way we think about and use technology, sees all these wild predictions about AGI as something more banal: part of a long-term pattern of overpromising from the tech industry. “What’s interesting to me is that we get sucked in every time,” she says. “There is a deep belief that technology is better than human beings.”
The fantasy of computers that can do almost anything a person can is seductive. But like many pervasive conspiracy theories, it has very real consequences. It has distorted the way we think about the stakes behind the current technology boom (and potential bust). It may have even derailed the industry, sucking resources away from more immediate, more practical application of the technology. More than anything else, it gives us a free pass to be lazy. It fools us into thinking we might be able to avoid the actual hard work needed to solve intractable, world-spanning problems—problems that will require international cooperation and compromise and expensive aid. Why bother with that when we’ll soon have machines to figure it all out for us?
How AGI became the most consequential conspiracy theory of our time
The idea that machines will be as smart as—or smarter than—humans has hijacked an entire industry. But look closely and you’ll see it’s a myth that persists for many of the same reasons conspiracies do.
For decades, economic insecurity was concentrated among blue-collar workers in manufacturing, logistics, and the retail sector. These were the people displaced first by offshoring, then by automation, and more recently by the gig economy and so-called platform economics.
The professional and managerial classes, by contrast, were told they were the winners of the system. They were sold the promise that digital skills, higher education, and corporate employment would insulate them from the volatility of the market. The myth was that the middle classes were immune to the challenges posed by the modern economy. That story is now unravelling.
… this is not a temporary correction. It is structural. AI is now being used to justify the redundancy of knowledge workers in exactly the way globalisation was once used to justify the redundancy of factory workers. Shopify's CEO has, for example, told staff they must prove why AI cannot do their work before requesting new resources. This is not innovation for the public good. It is cost-cutting dressed up as progress. When Microsoft, Intel, and BT are sacking staff while their profits rise, the logic is not technological advancement but shareholder extraction.
So what does this tell us? Very obviously, the implication is that the neoliberal model of growth through corporate concentration, financial engineering, and technological displacement has reached its limits. AI is not creating new markets or opportunities for human development. It is, instead, being deployed as a weapon of labour suppression. The supposed white-collar elite is discovering what blue-collar workers learned long ago, which is that under this form of capitalism, everyone is expendable.

The white collar world is falling apart

Turner’s primary intention was to shift the discussion of housing away from its material form and toward its social use — in other words, what housing does in people’s lives. The ways that housing enables or constrains personal fulfillment and economic opportunity are different for different households in different stages of their life. Thus, people should be able to decide what they need from their housing on their own terms …

Housing Agency


Spatial Agency: John Turner

"The foundation of this industry was rich men who didn't care if they were making money," Brooklyn-based architect and union organiser Andrew Daley tells me. He argues that the profession's upper-class history still affects how practitioners identify and which professional bodies they join today. "The demographics have shifted, but many architects still don't think of themselves as workers."

“Why do so many architects think they are more privileged than they really are?”
Architects should finally acknowledge that the profession is no longer a guaranteed route to prosperity and unionise, writes Phineas Harper.
Thirty years ago this week, the essay came out in Mute magazine and circulated on the early-adopter email list nettime. It described (and rhetorically, at least, demolished) the unspoken consensus that seemed ascendant in the US tech industry at the time. Its authors said the quiet part out loud: the industry had combined countercultural chic with a sort of authoritarian capitalism determined to root out collective power wherever it appeared, all while talking the talk of democracy.
In many areas of life now, technology has become a replacement for political debate and policymaking. It is enabled by what I call “innovation amnesia”—the tendency to forget past social arrangements when new tech comes along to undermine them.
Thirty Years On, the Californian Ideology is Alive and Well | TechPolicy.Press
In the seminal essay, Barbrook and Cameron insisted that there are other ways to build technology, and to do it democratically, writes Nathan Schneider.
1995 was the web’s single most important inflection point. A fact that becomes most apparent by simply looking at the numbers. At the end of 1994, there were around 2,500 web servers. 12 months later, there were almost 75,000. By the end of 1995, over 700 new servers were being added to the web every single day.
It was a year that, incidentally, acted in the same way across every major industry. There have been books written about it. The web got a mention in The New York Times. The OJ trial was widely reported on, and speculated about, on the web. The White House got a website even as the now-infamous meeting of Bill Clinton and Monica Lewinsky took place and the tragedy of the Oklahoma City bombing hung over the United States. Windows 95 was launched. The Palm Pilot was released. It was an incredible moment in pop culture, filled with some of the more iconic music, film, and art of the decade.
1995 Was the Most Important Year for the Web - The History of the Web
The world changed a lot in 1995. And for the web, it was a transformational year.
Bill Gates has issued a bold forecast for the future, warning that artificial intelligence (AI) may soon dominate the workplace as it becomes more powerful than ever. He said that this will also change the way people work, and they may only need to report to their jobs for two or three days a week.
Bill Gates Predicts AI Will Replace Jobs and Lead to a Two-Day Workweek
Bill Gates warns AI will dominate future jobs and reduce human work-week to two days.

Also predicted by one of the great economists of the 20th century ...

We shall do more things for ourselves than is usual with the rich to-day, only too glad to have small duties and tasks and routines. But beyond this, we shall endeavour to spread the bread thin on the butter - to make what work there is still to be done to be as widely shared as possible. Three-hour shifts or a fifteen-hour week may put off the problem for a great while. For three hours a day is quite enough to satisfy the old Adam in most of us!

John Maynard Keynes, Economic Possibilities for our Grandchildren (1930)

But, to us, the duo’s tension is captured best by an analogy used by University of Virginia psychologist Jonathan Haidt in his wonderful book The Happiness Hypothesis. Haidt says that our emotional side is an Elephant and our rational side is its Rider. Perched atop the Elephant, the Rider holds the reins and seems to be the leader. But the Rider’s control is precarious because the Rider is so small relative to the Elephant.
Switch - Heath Brothers
Some big changes, like getting married, come joyfully. Other small changes, like losing 10 pounds, can be excruciating. Why? And how can we make difficult changes a little bit easier?

Switch: How to change things when change is hard

"So I think what's going to happen is we're going to replace people who do jobs that matter with AI that don't do those jobs. And then the foundation models aren't going to be viable and they're going to go away and we're going to have nothing."

AI Job Replacement (40sec)

Cory Doctorow: Enshittification is Not Inevitable