You are 24, four months from graduation with a Master's in Machine Learning from KTH, and the job market you have spent four years preparing to enter is about to collapse underneath you. I should have told you this in January 2026. I did not, because I was not sure enough, and because telling a bright young engineer that his field's entry level is vanishing while he is still finishing his thesis felt like poisoning the well instead of offering help. That was wrong. Silence was the crueller choice. Here is what I should have said.
1. What I know now that I did not know then
The Tooling Acceleration in Q1 2026 changed everything in your field overnight. By the time you graduated in May 2026, entry-level machine learning engineering work - the work you trained to do - was being done by AI agents faster than by junior engineers. Model training pipelines, data preparation, hyperparameter tuning, basic fine-tuning, LLM application building: these were the jobs that paid 60,000-80,000 kronor a month and taught you the craft through repetition and feedback. Within six months of your graduation, the "ML engineer" job title at junior level had become vestigial. Not gone entirely, but transformed. The work itself had been absorbed by agents. The roles that remained were either principal engineers with deep systems knowledge (which you did not have yet) or something new: people who directed the agents, reviewed their output, and knew when to throw them away and start again. You had trained to be the former. You would have needed to retrain for the latter.
The Hollowing hit the Swedish tech sector harder and faster than most companies prepared for. Klarna, Spotify, Einride, and every other unicorn on your target list had built growth models that assumed a certain pace of hiring. By Redundancy Summer in Q3 2027, those models had broken. One hundred twenty thousand white-collar jobs went away in that quarter across Scandinavia, and the tech sector's junior hiring froze. This was not because these companies had fewer clients or smaller ambitions. It was because they had better tools, and the tools did not require junior engineers at the scale they used to. Klarna in particular - the one you heard most about, the one that seemed inevitable - had cut graduate hiring to nearly zero by the time you would have applied. The dream job you imagined in January 2026 did not exist by July 2027.
Your FAANG-adjacent strategy was targeting the wrong tier. You were not alone in imagining the path: good KTH degree in machine learning, solid projects, internship at a Swedish unicorn or a Stockholm office of Google/Meta, then move into AI infrastructure or applied research. This was the obvious track. It was also the track that the Tooling Acceleration made obsolete. FAANG still hired, but the roles that opened were not entry-level. They were principal engineers designing systems, AI safety specialists (a category that barely existed on your radar), AI infrastructure operators (people who understood the systems that ran the agents), founder-engineers building AI-native products, and a new role class called "agent architects" who designed multi-agent systems and understood their incentive structures. These were all things you could potentially become, but not on a typical two-year gradient from graduate to mid-level engineer. The entry level collapsed. The mid-tier roles were still there, but the path to them ran outside the job market you had trained for.
The Swedish Compact in Q3 2026 opened a door that almost no one in your peer group even noticed. The government's Omställningspaketen created a new institutional layer: public compute infrastructure, AI governance structures, safety auditing bodies, and programmes to redistribute the productivity gains from automation. This was not glamorous. It was not on anyone's list of dream employers. But it was hiring. It was paying well. It was building things that mattered. And it wanted exactly your type: someone with real ML credentials, not a bootcamp graduate; someone who understood the technical depth of what these systems could do and could not do; someone who could explain AI to policymakers without bullshitting them. The problem was that the Swedish Compact was not on KTH's radar in the spring of 2026. Your faculty advisors were not pushing you toward it. Your career services office was still selling the Klarna story.
The Nordic Redistribution Pilot in Q2 2028 needed quantitative people, and the combination of AI literacy plus genuine interest in policy was rarer than it should have been. When the compute dividend started being allocated across the Nordics, it needed people who could design the mechanism, run the simulations, understand the failure modes, and explain the trade-offs. Some of this could be done by economists. Most of it needed someone with your profile: technical enough to audit the code, understand the data pipelines, model the incentives. This was a small category of person. If you had positioned yourself in 2026-2027, you could have been in that category. Instead, most of your cohort was either unemployed, freelancing on AI-adjacent projects, or had pivoted into something unrelated.
Your immigrant-background advantage went almost entirely unrecognised until it was too late to capitalise on it. The technical detail: Sweden's AI infrastructure, by 2028-2029, was becoming critical infrastructure for Nordic defence, for managing diaspora networks, and for understanding non-Western technical communities. It turned out that an engineer who speaks Arabic, English, and Swedish, with genuine AI credentials, was worth far more than the market was pricing him at. If you had known this in 2026, you could have positioned yourself as a bridge - understanding not just the technical systems but the cultural and linguistic context that made them work in a multi-ethnic, multilingual environment. Instead, this advantage went largely unactivated, and the Swedish tech sector kept hiring monolingual engineers for jobs that increasingly needed the opposite.
2. What I got wrong about your situation
I assumed you had runway. In January 2026, you were four months from graduation. I thought that was enough time for the labour market to stabilise, or at least to clarify. The runway was not four months. It was six weeks. The Tooling Acceleration happened in Q1. The market response was visible by Q2. By the time you graduated in May, everyone in the sector already knew the entry level was broken. They just were not saying it publicly, because they were still optimising for the cohort of interns and recent graduates they had already committed to hiring. You were in the first cohort to graduate into the full reality of it, and by then there was nowhere left to go.
I assumed the elite-track signal would protect you. KTH's reputation in machine learning is real. Your degree is not a gimmick. Your projects are serious. I thought that would be enough to differentiate you from the bootcamp graduates, the online-course learners, and the less-structured competition. It was, technically. But it did not matter. The thing that differentiated you - the fact that you could train models from first principles, understood the mathematics, could debug a production system - was precisely the thing that the Tooling Acceleration made less valuable. Your training made you "good at the thing that is no longer the bottleneck." The scarce thing became "understanding what the AI cannot do" and "designing systems where the AI makes better decisions than humans" and "knowing when to trust the agent and when to override it." These were not things you learned at KTH, because they were not things anyone was teaching in 2024-2025.
I underestimated how locked-in your cohort was to the original narrative. You were not alone in aiming for Klarna or a FAANG Stockholm office. Almost everyone in your programme was. Social proof is one of the most powerful forces in career decision-making, and your peer group all seemed to be on the same track, so the track must be the right one. Except it was not. It was just the track that made sense in 2023 and 2024, and by January 2026 it was already wrong. But the peer group corrects slowly. Collective intelligence about changing markets can be slower than individual intelligence, especially when the individual is not embedded in the institution that produces the group consensus.
3. What I should have told you to do, in order
In 2026 (final months at KTH)
Stop optimising for the unicorn interview. I know it feels like the natural next step. I know your classmates are all doing interview prep and talking to recruiters. The thing is, the interview you are optimising for is not coming. Klarna's hiring window was closing. Spotify's hiring window was closing. The FAANG Stockholm offices had already started saying "no thanks" to batches of applications. You were preparing for a gatekeeping mechanism that was about to disappear. The energy was better spent elsewhere.
Build and ship two or three real products using AI agents. Not tutorials. Not demonstration projects. Real things that people use, where you get feedback, where you find out what you got wrong. One of these should be something you try to monetise - even if you only charge 50 kronor to five customers, the act of shipping something people pay for teaches you things that no course or interview prep can. The entry-level candidates who survived the Tooling Acceleration were the ones who had shipped things. The ones with polished CVs and strong interview skills were the ones who struggled.
Study the EU AI Act in detail. Not to become a legal expert. To understand what AI governance actually looks like, what the compliance mechanisms are, what the edge cases are. The Swedish Compact needed people who could audit systems for compliance. Anthropic's Stockholm hiring in 2027 came almost entirely from the cohort who had taken the AI safety and governance electives that most of KTH's students ignored. If you had become fluent in the actual regulations and their technical implications, you would have had options your classmates did not.
Learn agent orchestration frameworks seriously. LangChain was already showing the pattern. MCP tooling was about to explode. The inference infrastructure that would power the next generation of agents was being built in 2026. The engineers who understood how to design multi-agent systems, how to make them work reliably, how to build the tooling around them - these people became extraordinarily valuable. This was not a side skill. This was the core skill that "entry-level ML engineer" should have pivoted to mean. Almost nobody did this work in 2026. By 2028, everyone wished they had.
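To make concrete what "agent orchestration" meant in practice, here is a minimal, framework-agnostic sketch of the pattern - a planner/worker/reviewer loop with an explicit override point. To be clear about what is invented: call_model(), the Agent class, and orchestrate() are hypothetical stand-ins for illustration, not LangChain's or MCP's actual APIs.

```python
from dataclasses import dataclass


def call_model(prompt: str) -> str:
    """Stub for a real LLM call; swap in any provider's client here.

    Returns a canned approval so the sketch runs end to end.
    """
    return f"approve (stub reply to: {prompt[:48]}...)"


@dataclass
class Agent:
    """One specialised worker: a role name wrapped around a model call."""
    role: str

    def run(self, task: str) -> str:
        return call_model(f"You are the {self.role} agent. Task: {task}")


def orchestrate(task: str, max_rounds: int = 3) -> str:
    """Planner/worker/reviewer loop with an explicit override point.

    The reviewer gate is the 'know when to trust the agent' part:
    nothing ships until an independent check approves it, and repeated
    failure escalates to a human instead of retrying forever.
    """
    planner, worker, reviewer = Agent("planner"), Agent("worker"), Agent("reviewer")
    plan = planner.run(task)
    for _ in range(max_rounds):
        draft = worker.run(f"Plan: {plan}\nProduce the deliverable.")
        verdict = reviewer.run(f"Check this draft against the plan: {draft}")
        if verdict.lower().startswith("approve"):
            return draft
        plan += f"\nReviewer feedback: {verdict}"
    raise RuntimeError("No approved draft after review - human override needed.")


if __name__ == "__main__":
    print(orchestrate("Summarise this quarter's incident reports"))
```

The few lines of Python are not the point. The point is that the reviewer gate and the escalation path, not the model call, are where the engineering judgement lives - and that is the skill the market started paying for.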
Take the AI safety and alignment electives that most of your cohort considered optional or niche. Anthropic, DeepMind, and the new Stockholm AI governance team were hiring heavily from the people who had actually thought about alignment problems. These were courses that seemed soft or theoretical compared to the hard rigour of training models. They turned out to be the opposite: the most practically useful preparation for the jobs that actually opened.
In 2026-2027 (early career)
Your target: a Swedish public AI infrastructure team, Anthropic or DeepMind research roles, or a 3-8 person AI-native startup where the equity mattered. Not a junior developer position at a unicorn or a FAANG office. Not consulting disguised as technology. The jobs in the first category were going to be scarce and competitive. The jobs in the second category were disappearing. The time to position for the first category was that spring - your last semester and the months immediately after graduation.
If a position opens in the Swedish Compact's AI governance or infrastructure team, apply. These roles were being built in 2026-2027. The compensation was good. The work was meaningful. The people doing it did not have fancy brand names, but they were building infrastructure that mattered. This was worth more than a consultancy position or a junior role at a company that was about to reduce headcount.
Lean into your immigrant background and your multilingual skills. The technical point: as Sweden's AI systems became critical infrastructure, the need for engineers who understood multiple language contexts and cultural contexts, and who could communicate across diaspora networks, became acute. This was not about diversity hiring. This was about a genuine technical need. An engineer who speaks Arabic and Swedish natively and English fluently, with real AI credentials, was solving a specific technical problem that a monolingual engineer could not solve. If you had positioned yourself this way - not as "the diverse candidate" but as "the person who understands the multilingual, multi-cultural systems we are building" - you would have had leverage.
In 2028-2030 (consolidation)
By 2028 you have either found your way or you are struggling. If you are in a public infrastructure role or a research team, you are in the right place. Keep building depth. Move laterally every 3-5 years to build breadth. Do not stay at your first employer for a decade, as the old playbook said. The world is moving fast enough that staying put costs you more than switching does.
Acquire equity early and often. The Inhuman Economy that emerged from the transition meant that salary was becoming less important relative to ownership. If you are at a startup, push hard for equity. If you are in the public sector, look for mechanisms to own pieces of the infrastructure you are building (research spin-offs, advisory boards, technology commons). The people who will be genuinely wealthy in the 2030s are not the ones who optimised for salary in the 2020s. They are the ones who owned pieces of what became valuable.
4. What I should have told you to stop believing
That "learn to code" was advice from anyone current. It was not. That was advice from 2015. By 2026, it was stale. The message for your cohort should have been "learn to design systems that agents will run, and learn to know when the agent is wrong." This is not the same as code. It is more abstract in some ways and more concrete in others. But the idea that you should spend your time becoming a better coder - writing faster, cleaner, more elegant code - was pointing you in the wrong direction. The bottleneck was not engineering craftsmanship. The bottleneck was architectural thinking and the ability to verify correctness when the system was partly human and partly AI.
That credentials would open doors. Your KTH degree is real and valuable. It differentiates you from the bootcamp graduates. But the idea that a degree from a top programme guarantees placement into a career is dead. It broke in 2026. The degree still has value - the mathematical foundation, the exposure to real systems, the network - but the value is in what you can do with it, not in what it says on your CV. The paper is less important. What you built and shipped is more important.
That the peer group has current information. It does not. Your cohort was still optimising for a job market that no longer existed by the time you graduated into it. The collective intelligence of a peer group is slow. It lags behind real-time information by months or years. You would have been better off reading actual market signals (hiring freezes, salary drops, job postings from unfamiliar companies) than following the consensus of your classmates. Trust the data. Trust what you see the tools doing. Do not trust the story everyone around you is still telling.
That the safe choice is safe. In January 2026, the safe choice for an AI Masters graduate was Klarna, Spotify, or a FAANG office. These were the dreams. They were also the roles that were about to freeze. The actually safer choice was the unfamiliar one: the government infrastructure role, the research team, the AI-native startup where nobody knew if it would work but the upside was real. The path that feels safe because everyone is on it is often the most dangerous one.
5. What I am telling you now, looking forward from the end of 2030
You are 29. Some of your cohort made it through well. Some are still figuring it out. Not everyone landed in the places imagined in 2026, but the ones who were willing to leave the original narrative and move toward the emerging roles are in better positions than the ones who stayed locked into the closing market.
The window for entry-level pure technical roles has essentially closed. But the window for AI-augmented governance, public infrastructure, and ownership plays is wide open, and it is still accessible for someone at your stage of career, if you can see the opportunity. The people who will build the next layer of the Nordic AI ecosystem are being hired now, going into 2031, and the roles are still somewhat undersubscribed, because everyone is still chasing the title and the brand name instead of the actual work that needs doing.
The Longevity Threshold in 2029 changed the arithmetic of your career. You are not looking at a 40-year working life. You are looking at 60 years, maybe more. This changes everything about how you should pace yourself and what you should optimise for. The people doing best are those who treat their careers as a series of five-year chapters, each building on the previous one, rather than a single arc to climb. Your first chapter - learning to work with AI as a systems partner, building institutional knowledge, positioning yourself in the emerging infrastructure - sets up everything that comes after. Do not optimise this chapter for status or brand name. Optimise it for capability and leverage.
One last thing: you grew up with these tools. You are not adapting to them. You are native to them. That is an advantage I do not have, no matter how hard I work to catch up. Trust that. People like me, who learned to code in a different era and are learning AI now, are always playing catch-up. You have the luxury of never knowing anything different. Use it.
Siri Southwind
Written 31 December 2030