Field Guide to the Transition

The nine scenarios in section 2 are distinct paths through the possibility space. But certain questions recur across all of them, sometimes as the defining tension, sometimes as a background hum. These are the questions that any serious attempt to navigate the transition must confront, regardless of which scenario materialises. They are reorganised here into five clusters, because the underlying mechanics overlap and treating them in isolation would obscure the connections.

The five clusters

Each cluster contains multiple questions that any viable scenario must answer. The clusters are ordered so later ones build on earlier ones.

1. Meaning, status, and identity. What happens to human self-worth when the economic mechanism that anchored it for two centuries stops working?

2. What replaces the wage. The wage is dying as the primary distribution mechanism. What takes its place - public compute, citizen ownership, redistribution, compensated activity - and by when?

3. Democratic legitimacy. Can institutions built for slow deliberation survive technological change that arrives in quarters rather than decades?

4. Curing aging in a fragile transition. Longevity medicine arriving alongside a fracturing distribution system means timing matters as much as the technology itself.

5. The hidden ledger. What is going right that no one celebrates, what is going wrong that no one names, and which of these scenarios are actually mutually exclusive?

1. Meaning, status, and identity when humans are not the best at anything

Every scenario in section 2, including the most optimistic, assumes that AI systems will match or exceed human performance across a widening range of cognitive, creative, and eventually physical tasks. This is not a distant prospect. It is the trajectory visible in April 2026, where frontier models already outperform the median professional in legal analysis, financial modelling, medical diagnosis, software engineering, and strategic planning. The question is not whether this happens. It is what it does to the internal architecture of a human life.

For most of recorded history, human meaning has been constructed from some combination of usefulness, mastery, and social recognition. You were useful to your family, your community, your employer. You mastered a craft, a profession, a body of knowledge. Others recognised your contribution and granted you status in return. The wage was the economic expression of this arrangement, but the arrangement was always deeper than the wage. It was a story about why your existence mattered: you could do something that needed doing, and you did it well enough to earn your place.

What happens to that story when an AI system can do it better? Not in some theoretical future, but now, in the lived experience of a 25-year-old UX designer watching her portfolio rendered redundant by tools her own agency adopted, or a 23-year-old economics student realising that the career pipeline she is training for is being disassembled while she studies. The answer is not that meaning disappears. Humans are resilient and creative and will find new sources of purpose. The answer is that the old sources - the ones that were supported by economic structures, educational institutions, professional communities, and cultural narratives - are being dismantled faster than new ones are being constructed. The transition gap between "the old sources of meaning are breaking" and "new sources of meaning are established" is where the psychological damage happens. And that gap may last a generation.

The status problem specifically

Status is not a nice-to-have. It is a deep human need, visible in every society ever studied, encoded in our neurobiology, and more predictive of health and wellbeing than income above the poverty line. Status in modern economies has been primarily distributed through professional achievement: your job title, your employer, your income, your credentials. When these markers lose their signalling value - when a "Senior Consultant" title means you survived the last round of AI-driven layoffs, not that you are among the best at what you do - the status system cracks.

The replacement status systems are not obvious. In The Abundance Republic, status might migrate toward creative output, craft mastery, community contribution, or the quality of relationships you build. In The Comfortable Cage, status is manufactured by the platform. In The Hollowing, status collapses into a scramble for whatever economic position remains. In The New Estates, status becomes biological. None of these is a smooth or painless transition, and in most scenarios the transition period is characterised by a status vacuum: the old markers are dead and the new ones have not yet solidified. Status vacuums are dangerous. They breed resentment, radicalisation, and the appeal of authoritarian leaders who promise to restore a legible hierarchy.

Identity beyond capability

The deepest version of this question is not about status or meaning in the sociological sense. It is about identity. If what you do is no longer the primary answer to "who are you?", then what is? Relationships, values, experiences, community membership, aesthetic sensibility, spiritual practice, physical embodiment, the quality of attention you bring to the world? These are all available. They are also harder to construct, less socially legible, and less supported by existing institutions than the work-based identity they replace. Building an identity around "who I am" rather than "what I do" is possible. It is also the kind of deep psychological restructuring that happens over decades, not quarters, and it happens unevenly across populations. Some people will navigate it beautifully. Many will not.

2. What replaces the wage as the binding mechanism between people and the economy

The wage is not merely a method of payment. It is the mechanism through which most humans are connected to the productive economy, the primary channel for distributing purchasing power, the basis for most taxation, the foundation of most social insurance systems, and the dominant way that societies decide who gets what. When the wage mechanism weakens - not because work disappears entirely, but because the share of economic output attributable to human labour shrinks relative to the share attributable to capital and AI - every system built on top of it starts to fail.

The failure is already visible in miniature. The 61,000+ AI-driven layoffs in early 2026, the AI-washing phenomenon, the Fortune cover about the week the AI scare turned real - these are the early tremors of a structural shift in the labour share of income. If human labour accounts for a declining share of value creation, then wages - however they are restructured, supplemented, or renegotiated - cannot serve as the primary distribution mechanism for a society that produces more than ever.

The candidate replacements

Universal basic income (UBI) should not be treated as a solved mechanism, because it is not one. It is a placeholder, a label attached to a complex set of unsolved design problems: how much, funded how, conditional on what, indexed to what, politically sustained by whom, and calibrated to an economy whose output mix is changing rapidly. But the question "what replaces the wage?" does have a space of possible answers, and it is worth mapping them honestly.

Public compute and service provision. Instead of giving people money to buy things, you give them access to the tools that make things: AI agents, fabrication capacity, healthcare delivered by AI-plus-robotics, education delivered by personalised AI tutors. The distribution is in-kind rather than in-cash. This avoids some of the political toxicity of "paying people to do nothing" but creates dependency on whoever controls the infrastructure. It is the mechanism that most naturally leads toward The Comfortable Cage if not carefully governed.

Citizen ownership stakes. Equity in the AI infrastructure itself, distributed to citizens as a matter of right. This is conceptually clean - the AI systems are trained on collective human output, so collective ownership has a principled basis - but the implementation is fiendishly complex. Which entities? What governance rights? How do you prevent the ownership from being sold, concentrated, or diluted? How do you manage this across national borders when AI infrastructure is global?

Taxation of non-labour value creation. Tax the thing that is growing (returns to AI and capital) rather than the thing that is shrinking (wages). Robot taxes, compute taxes, data taxes, AI output taxes, windfall profit taxes. The logic is sound. The execution is a multi-decade political fight against the most powerful and mobile capital in human history, conducted by governments whose institutional capacity is already strained and whose tax base is already eroding.

New forms of compensated activity. Redefine what counts as "work" to include care, community contribution, artistic creation, environmental stewardship, and other activities that have social value but are not priced by markets. Pay people for these. This is appealing but faces the problem of measurement and allocation: who decides which contributions count, how much they are worth, and how gaming is prevented?

None of these is sufficient alone. Any real solution will be a combination, varying by country, evolving over time, and contested at every stage. The honest assessment is that no country has yet designed, let alone implemented, a distribution mechanism adequate to the scale of the transition. The clock is running.

3. Democratic legitimacy when the most competent decision-makers are neither human nor accountable

Democracy rests on a chain of assumptions: citizens can understand the issues, form opinions, choose representatives, and hold them accountable. Representatives can understand the policy options, make decisions, and implement them through institutions. Institutions can execute policy, monitor outcomes, and adapt. At every link in this chain, AI systems are becoming more competent than the humans they are meant to serve.

This is not a failure of democracy. It is a success of AI that creates a legitimacy problem democracy was not designed for. The question is: if an AI system can analyse a policy problem more thoroughly, model the outcomes more accurately, and implement the solution more efficiently than any human legislature or executive, what is the justification for human authority?

The available justifications

Consent. The governed have the right to choose who governs them, regardless of competence. This is the strongest argument, and it is sufficient in a world where the competence gap between AI governance and human governance is narrow. It becomes harder to sustain as the gap widens. When an AI system could prevent a pandemic, a financial crisis, or a climate disaster, and a human governance process fails to do so because it is slower, more distorted by special interests, and less able to process complexity, the consent argument starts to feel like a luxury that the governed cannot afford.

Accountability. Human decision-makers can be questioned, challenged, voted out, and prosecuted. AI systems cannot, at least not in any framework that currently exists. This is a genuine advantage of human governance, but it assumes that accountability mechanisms actually work, which in many democracies they increasingly do not. Accountability is meaningful when the governed understand what their representatives did and why. When the decisions are made by systems whose reasoning is opaque, accountability becomes performance rather than substance - an accountability theatre where officials are held responsible for outcomes they did not choose and cannot explain.

Representation. Human governance represents human interests because the governors are human. They share the experience, vulnerability, and mortality of the governed. AI systems do not. An AI that optimises for human welfare is not the same as a human whose own welfare is at stake. The difference matters in the cases where the optimisation target diverges from lived experience - precisely the dynamic described in The Gentle Slide (Scenario 6).

Dignity. Self-governance is intrinsically valuable regardless of outcomes. Being governed by a more competent non-human agent is still being governed, and the loss of self-determination diminishes the governed even if the outcomes improve. This is the argument most likely to resonate emotionally and least likely to prevail against the practical pressure to adopt systems that simply work better.

The likely trajectory

The most probable path is not a clean choice between human governance and AI governance. It is a gradual, unacknowledged transfer of substantive authority from human institutions to AI systems, while the forms of democratic governance are maintained. Legislatures still meet. Elections still happen. Ministers still give speeches. But the policy analysis, option generation, compliance monitoring, resource allocation, and enforcement are increasingly handled by systems that the human participants in the process do not fully understand and cannot meaningfully override. The Gentle Slide applied to governance specifically. The democratic substance evaporates while the democratic form is preserved, and most citizens do not notice because the outcomes are acceptable and the alternatives are invisible.

4. What "curing aging" means if the social and ecological substrate may not survive the transition

The longevity curve described in section 1 is real. Altos Labs is running human trials. AI-driven drug discovery is compressing timelines. The scientific community increasingly treats aging as a set of tractable biological problems rather than an immutable fact. The plausible range of outcomes by 2035-2040 includes meaningful deceleration of biological aging for at least part of the population.

This creates a paradox that cuts across every scenario. The people most interested in living longer are alive now. Their interest is urgent and personal. But the world they would be living longer in is the world shaped by the transition - a world that, depending on which scenario materialises, may be abundant or immiserating, stable or fractured, human-governed or machine-governed, ecologically intact or degrading. Curing aging in The Abundance Republic is a gift. Curing aging in The Hollowing is a sentence. Curing aging in The New Estates is a caste marker. Curing aging in a world where the biosphere is degrading is a bad joke.

The distribution question, again

If longevity interventions are expensive (as the first generation will be), they stratify access along wealth lines, feeding directly into The New Estates (Scenario 4). If they are cheap and broadly accessible (as subsequent generations may be), they create population, ecological, and resource challenges that no current governance framework is designed to handle. A world of 8 billion people who do not age is not the same as a world of 8 billion people who do. The pension systems, retirement economics, generational wealth transfer, career structures, and political succession patterns that assume a finite human lifespan all break - not in some dramatic way, but in the quiet accumulation of a thousand systems that were designed for organisms that die on schedule and no longer do.

The timing mismatch

The deepest problem is that longevity interventions may arrive before the social systems needed to accommodate them. If meaningful life extension is available by 2035 but the distribution problem is not solved until 2045, you get a decade in which longer life is available to the wealthy and denied to everyone else, during which the biological divergence described in The New Estates may become irreversible. If the ecological substrate degrades faster than longevity technology can protect individual bodies, you get people who are biologically younger but living in a materially poorer world. The timing of the longevity curve relative to the other curves on this map matters as much as the longevity curve itself.

5. The hidden ledger: what is going right and what is going wrong that no one is talking about

Every scenario and every cross-cutting question in this document emphasises disruption, displacement, and risk. This is appropriate given the framing - a foresight document should prioritise what could go wrong, because that is where preparation has the highest marginal value. But it would be dishonest to end section 3 without addressing the other side of the ledger: the things that are going right and the things that are going wrong that are not receiving adequate attention.

What is going right

The cost of solving problems is collapsing. This is the single most important positive development and it is chronically underweighted in public discourse. When intelligence is cheap and energy is cheap, problems that were previously intractable become solvable. Drug discovery, materials design, climate modelling, agricultural optimisation, infrastructure planning, educational personalisation - the list of problems that are becoming cheaper to attack is growing faster than the list of new problems being created. The production side of the transition is working. The tools are extraordinary. The question is not whether we can solve the problems. It is whether we choose to.

Open-source AI is holding. As of April 2026, open-source models remain competitive with proprietary ones. MCP is under Linux Foundation governance. The tools for building agents are not locked behind a single provider. This matters enormously for the distribution question, because it means the capability is not fully captured by a handful of companies. If this holds, it keeps The Abundance Republic on the table. If it does not - if the frontier moves decisively behind proprietary walls - the positive scenarios narrow.

Small teams are more powerful than ever. A competent individual with access to current AI tools can produce output that would have required a twenty-person team three years ago. This is a democratisation of capability that has no historical precedent. The implications for entrepreneurship, creative production, scientific research, and civic action are profound. The concern is not that the tools are not powerful. It is that the institutional and economic structures have not yet adapted to a world where individuals and small teams can do what only large organisations could do before.

The longevity science is real. Not hype, not wishful thinking. Clinical trials are running. The mechanisms are being understood. The timeline to meaningful intervention is compressed by AI-driven research. If even a fraction of the current programme delivers, the human healthspan extends significantly. This is, in the most literal sense, the best news in the history of the species, and it is being drowned out by the noise of the disruption.

What is going wrong

The narrative infrastructure is broken. The public discourse about AI is dominated by two equally useless stories: "AI will take all the jobs and we are doomed" and "AI is just a tool and everything will be fine." Neither engages with the mechanisms. Neither helps anyone make decisions. The absence of a realistic public narrative about the transition - one that acknowledges both the extraordinary potential and the genuine risks, with specificity rather than vibes - is itself a risk factor. People who cannot understand what is happening to them cannot adapt to it, demand appropriate policy responses, or make informed personal decisions. The narrative vacuum is being filled by hucksters, doomers, and deniers, and the people who need clear information the most are getting the least of it.

The educational system is not adapting. Universities and schools are still training students for careers that are being disassembled. Industriell Ekonomi at LiU still pipelines into management consulting. Medical programmes still assume a 40-year career arc in a stable clinical environment. Law schools still charge six-figure tuition for credentials whose value is eroding. The failure is not that educators are unaware. Many are. The failure is structural: educational institutions are slow-moving, credential-dependent, and politically constrained in ways that prevent rapid adaptation. The students entering these systems in 2026 are being prepared for a world that will not exist when they graduate.

The governance gap is widening, not narrowing. The EU AI Act is the most ambitious regulatory attempt, and it is already struggling with the agentic paradigm it was not designed for. No other jurisdiction has anything comparable. The competitive pressure between the US and China actively discourages regulation. The governance frameworks that would be needed to manage the distribution problem, the alignment problem, and the biological security problem are not being built at anywhere near the required speed. The gap between what governance can do and what governance needs to do is the single most dangerous feature of the current landscape.

Social trust is declining at the worst possible moment. The transition requires collective action: new distribution mechanisms, governance reform, international cooperation on alignment and biosecurity. Collective action requires trust. Trust in institutions, governments, media, science, and each other is declining across most OECD countries. This decline predates the current AI transition and has its own causes (polarisation, inequality, misinformation, institutional failure), but it makes the transition harder in every scenario. A high-trust society can negotiate the distribution problem. A low-trust society cannot, even if the economic resources are available.

Which scenarios are mutually exclusive and which run in parallel

This is addressed briefly here and developed fully in section 4, as directed. The short answer: very few scenarios are mutually exclusive at the global level. Most are mutually exclusive within a single society or governance jurisdiction but can run in parallel across regions. The Abundance Republic in Scandinavia and The Hollowing in the United States during the same period is entirely plausible. The Comfortable Cage in Western Europe and The Intelligence Wars in the Pacific Rim during the same period is entirely plausible. The Breach and The Brittleocene can interrupt any scenario anywhere. The Last Handoff, if it occurs, is universal. section 4 will map these combinations in more detail and state which I find most plausible.