In the first years of the 2020s, artificial intelligence transitioned from academic novelty to real economic force, with little serious thought given to an AI scare outside of fiction (“John Connor” of the Terminator films being the canonical reference). Generative models, adaptive agents, and powerful automation tools reshaped industries from code generation to financial services, and with that transformation came a question that has haunted technologists and workers alike: what happens to human labor when machines become faster, cheaper, and often better at cognitive tasks?
That question moved from abstract concern to tangible disruption in late February 2026. A constellation of events — a viral “doomsday” narrative, a sharp market reaction, and high-profile layoffs explicitly attributed to AI productivity gains — converged in what many are calling the week that the AI scare trade turned real.
Viral Essays and the “Global Intelligence Crisis”
Two pieces of speculative commentary — one by AI executive Matt Shumer and the other by Citrini Research — captured widespread public imagination in mid- to late February.
- Matt Shumer’s essay, widely circulated on social media and republished in part by Fortune, warned that AI had crossed a threshold threatening white-collar job security and urged workers to prepare.
- A more dramatic narrative came from Citrini Research’s “The 2028 Global Intelligence Crisis” memo. Framed as a future retrospective from 2028, the piece outlined a world where AI agents rapidly replaced software engineers, analysts, and managers, suppressing consumer incomes and triggering a deflationary economic spiral — a scenario dubbed a “human intelligence displacement spiral.”
Central to the Citrini thesis was the concept of “ghost GDP” — economic output that appears in national accounts because machines produce goods and services but never circulates through the human economy as wages do. Without consumer demand to match rising production, the economy could theoretically flatten or contract, undermining financial markets and labor markets alike.
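The "ghost GDP" mechanism can be sketched as a toy simulation. This is a hypothetical illustration of the concept as described above, not the Citrini model itself; the growth rate, wage share, and automation rate below are invented for demonstration:

```python
# Toy illustration of "ghost GDP": if automation erodes the wage share of
# output, headline GDP can keep rising even as the wage-funded consumer
# economy shrinks. All parameter values are illustrative assumptions.

def simulate(years=5, gdp=100.0, wage_share=0.6, automation_rate=0.1):
    """Each year, headline output grows 3%, but automation shifts a fraction
    of the wage share to machine output that never circulates as pay."""
    history = []
    for year in range(years):
        wages = gdp * wage_share
        ghost = gdp - wages  # output not matched by household income
        history.append((year, round(gdp, 1), round(wages, 1), round(ghost, 1)))
        gdp *= 1.03                          # measured output keeps growing
        wage_share *= (1 - automation_rate)  # but the wage share erodes
    return history

for year, gdp, wages, ghost in simulate():
    print(f"year {year}: GDP={gdp}, wages={wages}, ghost GDP={ghost}")
```

Under these assumptions, measured GDP rises every year while wage income falls, which is the demand-supply mismatch the memo's "displacement spiral" scenario hinges on.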
The memos were explicitly speculative, not forecasts. But their reach — with Shumer’s piece reportedly viewed tens of millions of times — was enough to move markets and spark deep discussion about long-term structural risks from AI adoption.
Markets React: The AI Scare Trade
On the first trading day after these narratives spread, major U.S. equity indexes suffered notable sell-offs. Software and technology stocks bore the brunt of the declines as investors grappled with the possibility that rising automation could erode demand for traditional enterprise services and software licensing.
This phenomenon has been dubbed an “AI scare trade.” The term captures a market dynamic in which fear and narrative traction, more than hard data, drive trading patterns as investors price in an uncertain future. The Fortune article linked this reaction to a separate piece in which Citadel Securities dismantled the doomsday narrative, arguing that fundamentals do not support such drastic outcomes.
Citadel’s critique focused on macroeconomic reasoning: productivity shocks have historically lowered costs, expanded output, and increased real income rather than triggered collapse. It emphasized that rapid technological diffusion generally follows an S-curve of slow initial adoption, acceleration, and eventual saturation — not instantaneous displacement of human labor.
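The S-curve that the critique invokes is conventionally modeled with a logistic function. The sketch below is a generic illustration of that diffusion shape; the midpoint and steepness values are arbitrary, not drawn from any cited analysis:

```python
import math

def adoption(t, midpoint=5.0, steepness=1.0):
    """Logistic S-curve: the share of firms that have adopted a technology
    by year t. Slow start, rapid mid-phase acceleration, then saturation."""
    return 1.0 / (1.0 + math.exp(-steepness * (t - midpoint)))

for t in range(0, 11, 2):
    print(f"year {t}: {adoption(t):.0%} adoption")
```

The point of the shape is the one Citadel's critique makes: even a transformative technology spends years in the flat early part of the curve, rather than displacing labor instantaneously.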
Still, the fact that markets responded so visibly to speculative essays demonstrates how deeply AI concerns have penetrated investor psychology. The key lesson of the week’s events is that narratives can shape price action even without solid empirical evidence.
Layoffs and Labor Market Anxiety
Perhaps the most concrete event linked to this “AI scare” was the announcement by Block Inc. (the fintech company behind Square and Cash App) that it would cut around 4,000 jobs — roughly 40% of its workforce — citing productivity gains from AI tools as a driving factor.
Block’s CEO, Jack Dorsey, openly embraced the role of AI in the decision, stating that intelligence tools had changed what it means to operate a company and that a smaller team could accomplish more using these technologies.
The layoffs shocked many employees — including staff who had already embraced internal AI tools — and have become a flashpoint in the broader debate about automation and labor. Investors responded positively in the short term, driving Block’s stock sharply higher following the announcement.
But the broader implications are more ambiguous:
- Some commentators see Block’s moves as simply an acceleration of a trend already under way, where corporations optimize headcount in light of new productivity tools.
- Others argue that using AI as a rationale for cost cutting — sometimes called “AI-washing” — obscures broader business decisions that would have happened regardless of technological change. OpenAI’s CEO has publicly raised this concern, suggesting that some layoffs are being inaccurately attributed to AI.
Regardless of intent, the Block example has reinforced labor market anxieties about job security — a psychological trend increasingly documented in social data and academic surveys. Workers across fields report heightened fear of obsolescence, often tied to AI capabilities.
This fear of becoming obsolete, sometimes labeled FOBO, has real consequences beyond economics. It affects career planning, wage negotiation, mental wellbeing, and long-term labor force participation decisions.
Economic Theory Versus Lived Experience
Economists and strategists remain divided on how to interpret these developments.
On one side are analysts who argue the disruptive narrative is overblown. They point to solid labor market data — for example, rising demand for software engineers and the fact that productivity gains from AI have yet to translate into measurable wage growth or job losses at scale — as evidence that the AI revolution is not here yet.
On the other side are voices warning about structural change: if AI displaces too many high-income workers without corresponding new roles, consumer demand could weaken, creating a self-reinforcing economic downturn. This echoes broader academic work suggesting that generative AI’s capacity to handle non-routine tasks places it in a different category than past automation tools.
The disconnect between macroeconomic theory and individual lived experience is stark. While economists debate abstract concepts like marginal cost and capital-labor substitution, some workers are already feeling dislocated, facing ghosting from recruiters and a dearth of opportunities at mid-career stages.
The Social and Psychological Dimensions
The discussion of AI and the labor market isn’t just economic — it’s deeply social and psychological.
Online forums reveal intense anxiety about career prospects, job displacement, and a sense of being left behind in an accelerating technology landscape. Stories range from recent graduates questioning career paths, to experienced professionals suddenly displaced and struggling to find new opportunities.
This anxiety may be amplified by the repetition of AI-doom narratives across media and social platforms. Some commenters argue that workplace technology fears have existed for years in similar form, and that cyclical panic tends to ebb with time.
But repeated exposure to “AI will take your job” stories — especially when tied to real layoffs — can have real psychological effects, including stress and identity loss. Economic transitions historically create winners and losers; the challenge lies in managing the transition in a way that preserves dignity and opportunity for as many people as possible.
Navigating the Transition
The week that the AI scare became real underscores the need for thoughtful public policy, corporate strategy, and individual planning.
For Policymakers
Governments may need to rethink labor market supports, unemployment insurance, retraining programs, and wage insurance frameworks that can help cushion transitions without discouraging innovation.
For Businesses
Firms should communicate transparently about how they use AI, invest in reskilling programs, and consider the broader social stability implications of rapid automation — not just short-term profit gains.
For Workers
Upskilling, particularly in areas where AI augments human capability instead of replacing it, remains critical. Roles that emphasize creative judgment, interpersonal skills, and cross-disciplinary expertise are likely to remain valuable. Continuous learning and adaptability will remain core labor market skills.
Conclusion
The Fortune piece and the broader events of the past week reveal much more than a market hiccup or a singular round of layoffs. They expose a deeper tension at the heart of the AI era: how do we integrate powerful automation into the economy without fracturing the social contract that has tied work, wages, and community together for generations?
AI’s potential for innovation and efficiency is undeniable. But so are the anxieties, economic uncertainties, and structural challenges ahead. Whether society navigates this transition smoothly or stumbles into deeper instability may depend on choices made not just by technologists, but by policymakers, business leaders, and workers themselves.
Source: This article reviews issues and topics covered on Fortune.com: https://fortune.com/2026/02/28/ai-scare-trade-mass-layoffs-white-collar-recession-citrini-shumer-viral-doomsday-essays/
