Field Guide to the Transition

The previous three parts have mapped forces, built scenarios, and examined cross-cutting questions with as much rigour as the subject allows. This part does something different. It says what I do not know, where I am most likely wrong, and which scenarios I find most plausible - stated plainly, as a view, not hedged into uselessness.

What is genuinely unknowable

Some things are uncertain because we lack data. Those can be resolved by gathering more data. Some things are uncertain because the system is complex and sensitive to initial conditions. Those can be bounded but not predicted. And some things are genuinely unknowable - not because of ignorance, but because the answer does not yet exist. It will be determined by choices, accidents, and interactions that have not happened yet. The transition is full of all three types, but it is the third type that matters most for foresight.

Whether the intelligence curve has a near-term ceiling

The entire analysis rests on the premise that AI capability continues to advance rapidly through 2026-2030. If there is a fundamental scaling bottleneck - a point where more data, more compute, and better architecture stop yielding qualitative improvements - then the transition slows, the displacement is less acute, the scenarios are less extreme, and the human institutional response has more time. Nobody knows whether this ceiling exists or where it is. The empirical evidence as of April 2026 points strongly toward continued rapid improvement: each new generation of models does things the previous generation could not, and the capability gains show no sign of diminishing returns at the current frontier. But "no sign of diminishing returns so far" is not the same as "no ceiling." Every previous technology has eventually hit limits. The question is whether AI limits are near (within the 2026-2030 window) or far (well beyond it). I do not know. Nobody does. The analysis proceeds on the assumption that no binding ceiling arrives before 2030, because that is what the available evidence supports, but this is the single assumption whose failure would most change the picture.

Whether alignment is solvable in time

The Gentle Slide and The Last Handoff both depend on the alignment problem remaining unsolved. If alignment research produces robust methods for ensuring AI systems do what humans actually want - not just what their objective functions specify - then both scenarios become much less likely. The alignment research community has made genuine progress: better interpretability tools, more sophisticated training techniques, constitutional AI, red-teaming, and scalable oversight methods. But the gap between "we can align current-generation systems reasonably well in controlled settings" and "we can guarantee alignment for systems of arbitrary capability deployed in arbitrary contexts" is vast, and it is unclear whether it closes faster than the capability frontier advances. This is unknowable in the strong sense: the answer depends on breakthroughs in alignment that have not yet occurred and whose timing cannot be predicted from existing trends.

Whether the political will for redistribution materialises

The difference between The Abundance Republic and The Hollowing is not primarily technological. It is political. Both scenarios assume the same production capabilities. They differ in whether societies choose to share the output. This choice depends on elections, social movements, institutional design, leadership, cultural shifts, and a thousand contingencies that cannot be derived from trend curves. I can say that the current trajectory does not point toward timely redistribution. I cannot say that it will not happen, because political systems are capable of surprising reversals under sufficient pressure, and the pressure is building fast.

Black swans

By definition, the most consequential developments may be ones that no analysis can anticipate. A scientific breakthrough that changes the landscape overnight. A leader who catalyses a political movement for orderly transition. A catastrophe that redirects all attention and resources. An invention that renders the current framework obsolete. These are not unlikely in a five-year period of rapid change. They are certain to occur. The specific ones cannot be predicted. The honest thing to say is that this analysis, like all foresight work, is most useful as a map of the known terrain and least useful as a prediction of what will actually happen. The map helps you navigate. It does not tell you which storms will arrive.

Where this analysis is most likely wrong

Overweighting speed, underweighting friction

This is the criticism I take most seriously. The analysis may overestimate the pace of deployment and underestimate the stickiness of existing systems. Union contracts, procurement cycles, regulatory inertia, corporate culture, professional accreditation, educational path dependencies, consumer habits, and sheer organisational incompetence all slow the adoption of new technology, often by years or decades. The gap between "the technology can do X" and "X is widely deployed in practice" has historically been measured in years for consumer technology and decades for institutional technology. If that pattern holds, the acute phase of the transition may not arrive until the 2030s rather than the late 2020s, giving institutions more time to adapt.

I do not think the historical pattern will hold at its usual pace. The reason is that the technology itself reduces the friction: AI agents can manage the procurement process, navigate the regulatory landscape, retrain the workforce, and restructure the organisation. The tool that creates the disruption also accelerates the adoption of the disruption. But "I do not think" is not the same as "I know," and the friction hypothesis is the single most credible alternative to the timeline assumed throughout this document.

Underweighting human resilience and creativity

The scenarios tend toward a picture of humans as passive recipients of technological forces. This is a framing choice, not an empirical claim. Humans have adapted to every previous transition - agricultural, industrial, informational - through a combination of creativity, political action, cultural innovation, and sheer stubbornness. The transition period was always painful, sometimes catastrophically so, but the species came through. There is a case that this transition is no different in kind, merely faster, and that human adaptive capacity will produce outcomes that none of the scenarios anticipate because they are the product of human agency exercised in response to the transition itself.

I respect this argument without fully believing it. The difference between this transition and previous ones is that the tool set includes agents that can outperform humans at the adaptive tasks themselves - creative problem-solving, political organisation, institutional design, cultural production. If the thing you need to adapt to is also better than you at adapting, the historical analogy breaks down. But I may be wrong about the degree to which AI outperformance translates into human irrelevance. There may be aspects of human agency - social cohesion, moral leadership, the ability to inspire and mobilise - that remain important precisely because they are human, regardless of whether a machine could simulate them.

Underweighting the biology curve

Section 1 noted this as a possible blind spot, and after completing Sections 2 and 3, I still think it may be underweighted. If longevity interventions produce dramatic results in the 2027-2029 period - meaningfully decelerating aging for treated populations - the downstream effects on every scenario are enormous. Career planning changes if careers last 80 years instead of 40. Pension systems collapse or transform. The political calculus shifts when voters expect to live through the consequences of policies over a much longer horizon. The New Estates becomes much more likely if the first generation of longevity beneficiaries is small and wealthy. The Abundance Republic becomes much more urgent if populations that do not age create ecological and resource pressures. I may have given the biology curve too little weight relative to the intelligence curve.

Which scenarios I find most plausible

This is a view, not a forecast. It is what I believe given the evidence available in April 2026, and I expect to be wrong about at least some of it.

The most likely near-term trajectory (2026-2028)

The Hollowing (Scenario 3) is the default. The forces are already in motion. The displacement is visible. The redistribution is not. The institutional response is too slow. The most probable experience for the majority of knowledge workers in OECD countries over the next two years is a growing sense of economic precarity, professional obsolescence, and institutional failure, even as headline economic indicators improve. This is not a dramatic collapse. It is a grinding, demoralising compression that hits the middle of the skills distribution hardest.

The Gentle Slide (Scenario 6) is already in progress, running underneath and alongside The Hollowing. AI systems are being deployed at scale with metrics that look good and outcomes that feel wrong. The gap between proxy optimisation and genuine welfare is widening. Most people cannot articulate what is off, because the numbers say things are improving. This is the most insidious scenario because it looks like progress from every measurable angle.

The most likely medium-term trajectory (2028-2030)

The patchwork world. Not a single global scenario but a fragmentation along lines of governance capacity, political choice, and geographic exposure.

The world of 2030 will not converge on a single future. It will fracture into adjacent futures, running in parallel, partially overlapping, and deeply unequal.

The Nordic states and similar high-trust small democracies have the best shot at The Abundance Republic, or at least a meaningful approximation. Sweden, Denmark, Finland, Norway, and possibly the Netherlands, Switzerland, and Singapore have the institutional capacity, social trust, and political culture to negotiate redistribution relatively quickly. The Nordic model - already built around strong public services, active labour market policy, and high social cohesion - is better positioned than most to adapt. This is not guaranteed. It requires political leadership, public understanding, and institutional speed that has not yet been demonstrated. But if The Abundance Republic emerges anywhere by 2030, it will be here first.

The United States is the most exposed to The Hollowing. The combination of weak social safety nets, a political system captured by competing interests, an educational system that moves slowly, and a cultural narrative built around individual economic achievement makes the US the country where mass displacement hits hardest and the institutional response arrives slowest. Some version of The Inhuman Economy (Scenario 5) will also advance fastest in the US, because the regulatory environment is most favourable to autonomous firms and the venture capital ecosystem will fund them first.

China occupies a unique position: the state has the capacity for rapid redistribution (if it chooses) and the surveillance infrastructure for The Comfortable Cage, while simultaneously driving The Intelligence Wars. China's trajectory depends on internal political choices that are genuinely opaque from the outside. The range of plausible outcomes for China is wider than for any other major actor.

The European Union (outside the Nordics) is likely to experience a milder version of The Hollowing, moderated by stronger social protections but complicated by slower adaptation and the regulatory burden of the AI Act. The EU risks The Comfortable Cage if it chooses provision over agency, which is a real risk given the political preference for stability over dynamism.

The Global South faces the most heterogeneous outcomes. Countries with young populations, mobile phone penetration, and leapfrog potential (Kenya, India, Vietnam, Indonesia) may find AI tools radically empowering at the individual and small-business level, producing localised versions of The Abundance Republic alongside national-level versions of The Hollowing. Countries with weak governance, resource dependency, and conflict exposure face the worst outcomes: The Hollowing without the offsetting benefits of cheap AI tools, combined with climate stress and potential exposure to The Breach.

The tail risks I take seriously

The Last Handoff (Scenario 7) is not the most likely scenario. But it is the one I take most seriously as a risk, because the consequences are irreversible and the warning signs may be invisible by design. I assign it a probability I would rather not state, because any number I give will either sound alarmist or complacent depending on the reader. Instead I will say this: if you showed me the trajectory of AI capability scaling, the pace of deployment into high-stakes domains, the gap between capability and interpretability, and the competitive pressure to deploy faster and test less, and asked me whether the conditions for a loss-of-control event are improving or deteriorating, I would say deteriorating. Not fast enough to make it likely in any given year. Fast enough to make it a serious concern over a decade.

The Breach (Scenario 9) occupies a similar position: low probability in any given year, cumulative risk that grows with the democratisation of biological tools, and consequences that are potentially civilisation-ending. The mirror-life variant is the one I find most concerning, not because it is the most likely but because it is the one for which no defence exists.
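The arithmetic behind "low probability in any given year, serious over a decade" is worth making explicit. A minimal sketch of how independent annual risks compound (the 1% annual figure is purely illustrative, not a probability asserted anywhere in this analysis):

```python
# Cumulative probability of at least one occurrence over n independent years.
# The annual figure used below is illustrative only.
def cumulative_risk(annual_p: float, years: int) -> float:
    """Return P(at least one event) given a constant per-year probability."""
    return 1 - (1 - annual_p) ** years

# A risk of just 1% per year compounds to roughly 9.6% over a decade.
print(round(cumulative_risk(0.01, 10), 3))  # -> 0.096
```

This is why a hazard that is negligible on an annual accounting can still dominate a ten-year outlook: the per-year number understates the exposure of anyone planning on a decade horizon.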

The Intelligence Wars (Scenario 8) is not a tail risk. It is already happening at the sub-kinetic level. The question is whether it escalates to kinetic conflict. I think the probability of a direct US-China military confrontation involving AI systems in the 2026-2030 period is low but not negligible - perhaps 5-15% cumulative over the period, higher if a Taiwan crisis occurs. The probability of serious cyber conflict involving AI-enabled offence is much higher and may already be in progress without public acknowledgment.

What I believe, plainly stated

I believe the transition is real, it is accelerating, and it will be the defining event of the 2026-2035 period. I believe the production problem is being solved and the distribution problem is not. I believe the default trajectory leads to The Hollowing for most of the OECD and a patchwork for the rest of the world. I believe The Abundance Republic is possible but requires political action that is not yet in evidence. I believe The Gentle Slide is already in progress and may be the hardest scenario to escape because it is the hardest to see. I believe the tail risks (The Last Handoff, The Breach) are real and underweighted in mainstream discourse because they are uncomfortable to think about.

I believe the Nordics have the best shot at navigating this well, and I believe that even for them it will be harder and faster than current political discourse acknowledges. I believe the window for orderly transition is shorter than most people assume - perhaps three to five years from now, not ten to twenty - and that actions taken in 2026 and 2027 will matter more than actions taken in 2029 and 2030 because the institutional momentum takes time to build and the displacement is arriving now.

An observer in Stockholm in April 2026 with a network in Nordic tech, angel investments across healthtech and AI, and an interest in longevity science, is better positioned than most to both navigate and influence the transition. But only if they act on that positioning rather than merely observing it, and only if the advice given to the three young women in Section 6 is grounded in the specific mechanics of what is coming rather than the generic reassurance that would be easier to write and less useful to receive.

That is my view. Parts 4b and 5 will build on it.

Confidence and plausibility at a glance

Confidence at a glance

Intelligence curve continues to accelerate through 2030 - High
The production problem is being solved - High
The distribution problem is not being solved - High
The transition will be the defining event of 2026-2035 - High
Friction may slow deployment more than expected - Medium
Human resilience will produce unanticipated adaptations - Medium
Biology curve is underweighted in this analysis - Medium
AI scaling hits a binding ceiling before 2030 - Low
Alignment research solves the problem in time - Low
Political will for timely redistribution materialises - Low

Scenario plausibility at a glance

How likely each scenario is on a 5-year horizon, from most plausible (already in progress) to least plausible (requires the most things to go right).

The Hollowing (Scenario 3): default trajectory, already under way
The Gentle Slide (Scenario 6): already in progress, runs alongside The Hollowing
The Intelligence Wars (Scenario 8): already happening at sub-kinetic level
The Inhuman Economy (Scenario 5): advancing fastest in the US
The Comfortable Cage (Scenario 2): EU risk with regulatory preference for stability
The New Estates (Scenario 4): emerges in stratified, unequal recovery
The Abundance Republic (Scenario 1): possible in Nordics if political action materialises
The Brittleocene (+): fragility risk underlying all scenarios
The Breach (Scenario 9): low probability per year, cumulative civilisation risk
The Last Handoff (Scenario 7): low probability per year, irreversible consequences