Soundtrack: Frédéric Chopin, Piano Sonata No. 2 (preferably the 1975 Martha Argerich recording)

I love software. It’s my calling, my passion and my obsession. I’ve been coding since the age of ten, and I’ve hardly stopped since then. Growing up, programming was a fascinating creative outlet—even more so for a scrawny boy with two left hands and a dad-shaped hole in his life. I too was suddenly able to “build” stuff. Not treehouses like my friends had, but other things. Virtual things. Cyber-things. Things that, to me at least, were even more exciting because there were no limits to what I could build. If I could imagine it, I could build it—provided I had enough time and dedication. And since I had plenty of both, I ended up, it turned out, being reasonably good at it, too. Later on, it wasn’t so much the coding itself that attracted me, but rather the entire industry surrounding it. The art and craft of creating something bigger than oneself. Diverse people joining forces, teaming up, collaborating, innovating, and gradually changing the world. And, let’s be honest, making good money while we’re at it, too.

Fast forward twenty years. Software has eaten the world, and the coders, the product managers, the designers, the investors, we all had a field day. These days, though, it seems as if AI is in turn in the midst of eating software—and all of us—alive; more so than any other profession. For context: Large Language Models (LLMs) have proven to be less effective replacements for knowledge workers than many AI optimists predicted. Paralegals—often thought of as the AI industry’s canaries in the coal mine—are still in demand. Journalists, though admittedly struggling, have not been wiped out in droves. Likewise, customer service agents have not been replaced by the millions (as I once estimated would be necessary for OpenAI to become profitable). The reason for this is simple: the sword wielded by these professions—natural language—is incredibly sharp, and it cuts both ways. While it is relatively straightforward to train an AI that can generate strings of words which sound plausible, it’s incredibly difficult to create one that consistently and reliably produces accurate statements about very broad intellectual domains.

But computer code? There’s a reason why ours are called “formal languages.” They’re not as messy or open to interpretation as natural human languages. They’re well structured, precise, and unambiguous. Furthermore, there’s tons of high-quality training data available online. Most importantly, the code generated by a model can be automatically verified from both a syntactic and a qualitative standpoint. Hence, generating “good enough” source code is a much easier problem than generating “good enough” legal documents or newspaper articles. And, of course, there are the economic incentives. In 2024, the median paralegal in the U.S. earned about $61,000 per year. A software engineer? $141,000. Which of those professions would you rather build an AI replacement for, if you had to choose?
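To make the verifiability point concrete, here is a minimal sketch (in Python; the function and the snippets are invented for illustration, not taken from any particular tool) of the cheapest mechanical gate that generated code can be pushed through, one that has no analogue for a legal brief or a news article:

```python
import ast

def syntactically_valid(source: str) -> bool:
    """First-pass gate: does the generated code even parse?"""
    try:
        ast.parse(source)
        return True
    except SyntaxError:
        return False

# In a real pipeline, a snippet that passes this gate would next be
# compiled, linted, type-checked, and run against a test suite in a
# sandbox; parsing is merely the cheapest layer of verification.
good = "def add(a, b):\n    return a + b\n"
bad = "def add(a, b)\n    return a + b\n"  # missing colon

print(syntactically_valid(good))  # True
print(syntactically_valid(bad))   # False
```

Natural language has nothing like this: there is no parser that rejects a subtly wrong legal argument, which is precisely why code generation matured faster.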

So, here’s the decisive question for everyone in my beloved industry: Quo vadis, Domine? And is the answer, as in the apocryphal precedent, Romam vado iterum crucifigi? Are we, too, about to be crucified?

The answer, of course, is messy and complicated. However, I see four interrelated dynamics that have emerged over the last 12 months or so which are worth examining before jumping to simple conclusions. Hear me out.

Dynamic #1: The eradication of busywork

One thing has always bothered me about software engineering: the seemingly endless tedium that accompanies even the simplest changes. Contributions to any reasonably sized codebase often get bogged down in hours of wiring, plumbing, scaffolding, and similar menial tasks. Then there are your deployment scripts, your automated tests, your infrastructure-as-code. How much time did I and my teams spend adding columns to database schemas, extending DTO classes, adding create/update/delete REST APIs, adding input fields to HTML forms, and writing code to move data from one architectural layer to another? How many copy-and-paste(-and-change-a-few-bits) tasks do you regularly tick off when you create a new microservice or add a new unit test? How often do you have to restart this and redeploy that, wasting minutes in the process that compound into hours and days that could otherwise be spent productively?
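For readers who haven’t lived this, a small illustration of what such wiring looks like. Everything here is hypothetical (a made-up `TaskDto` and serializer), but the pattern is real: one conceptual change, like adding a priority field, gets mechanically duplicated across the schema, the DTO, the API payload, and the form:

```python
from dataclasses import dataclass, asdict

# Hypothetical data-transfer object. Introducing the single "priority"
# field typically means touching the database schema, this class, the
# REST payload below, and an HTML form -- four edits, zero insight.
@dataclass
class TaskDto:
    title: str
    done: bool = False
    priority: int = 0  # the newly added column, repeated everywhere

def create_task_payload(dto: TaskDto) -> dict:
    """Serialize the DTO for a hypothetical create/update REST endpoint."""
    return asdict(dto)

payload = create_task_payload(TaskDto(title="Write report", priority=2))
print(payload)  # {'title': 'Write report', 'done': False, 'priority': 2}
```

None of these lines requires judgment; each is determined entirely by the one decision to add the field, which is exactly what makes this layer so automatable.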

In my opinion, this kind of work was never really worthy of a trained software engineer’s attention. Yet, for many—including some of the aforementioned $141,000-per-year earners—that’s what much of their day job looked like until around 2024. However, AI coding tools have proven very effective at taking over, for lack of a better term, busywork. For most codebases, this type of work can already be outsourced to AI tools without compromising quality, scalability or security. Poof! Gone. Replaced by AI. But what’s next?

Dynamic #2: Feature factories going into overdrive

Have you ever seen a toddler with a sugar rush? It’s the cutest thing in the world! They run around like they’re on cocaine for about half an hour, then either collapse into a slumber or throw a tantrum. Or both. That’s exactly what we’ll see in software companies that use their teams’ newfound efficiency gains without a solid product strategy in place.

Why? Because coding agents, even if you only use them to eliminate busywork, massively increase the efficiency of software teams—if you measure efficiency naively as “features shipped per unit of time.” This efficiency increase would, in principle, offer a choice: companies can either produce the same output with less effort (if you’ve eliminated all the busywork, you can lay off many of your developers and still expect the same results) or do more with the same input. However, human nature is such that we almost always choose the latter over the former in such cases. This is the Jevons paradox, which I’ve written about extensively, all over again. Only this time, the output isn’t coal dust and smoke, but more and more features.

Software companies can now ship many more features than before. Teams will be asked to build everything that has ever been thought of. Product managers will essentially point to their 500-item backlog and tell Claude Code, “Build this, build it all, and build it now.” Alas, the result won’t be better products for consumers or more profitable businesses for shareholders. Quite the contrary: my prediction is that we’ll see bloatware and enshittification on unprecedented levels.

Dynamic #3: Everything that can be built will be built

If you give a thousand monkeys a thousand typewriters—and plenty of time—eventually one of them will hammer out a Shakespeare sonnet. If you give a million people a million coding agents, … well, you get the idea. As the cost of “building things” plummets, ideas that were previously unfeasible will suddenly become appealing. Let’s pick a random example: Why not develop a task management tool tailored for self-employed yoga teachers in Norway? As an investor, would you raise the necessary funds to hire five developers, a designer, and a product manager to work on this project for six months? Probably not. But what about having one person in their basement vibe code it in two days, hunting down a handful of friendly customers, and seeing where it leads? No harm done if it doesn’t take off.

The point is: there are plenty of potential product opportunities out there that don’t require novel ideas or algorithmic breakthroughs. Instead, they’re about covering the “last mile” of value creation for a specific industry or a narrow target group. Of course, this requires getting deeply into the weeds of the chosen domain—understanding exactly what a Norwegian yoga teacher needs from a task management tool, for instance. Rest assured, though, that many people will go to these lengths in the hope of building small but profitable businesses in many different domains and for many types of problems. Especially, of course, if these people have been laid off by software companies that no longer find their services particularly valuable. But will the rise of such vibe-coded SaaS apps harm incumbent software companies and their one-size-fits-all products? I’m skeptical of that notion, but many investors seem to think it will happen.

Dynamic #4: Less venture capital, tighter margins, and a scramble for profit

The logic may be flawed, but the venture capitalists’ thinking these days goes something like this: Enterprises pay a lot of money for software—between $9,000 and $17,000 per employee per year, by some estimates. If enterprises can now vibe code some of their own solutions, the suppliers—the SaaS companies—will be in trouble. Similarly, small, AI-native competitors have a unique opportunity to unseat enterprise SaaS vendors by leveraging speed, agility, and the innovator’s dilemma. Consequently, SaaS stocks are declining, and these companies are finding it harder to raise new funding or refinance existing debt. At the same time, the massive buildout of AI infrastructure soaks up capital that might otherwise flow to them. Hence, there’s less money in the system, more competition, and higher pressure on profitability.

Inflection point

So, where does that leave us—the coders, designers, testers, and product managers? “Between a rock and a hard place,” the pessimists would say. But I believe the future is not set, one way or the other. I think we’re now at an inflection point in the history of this great industry, one from which several paths lead into very different futures. Here are three scenarios.

Scenario #1: Supernova in the supply chain

Throughout 2025, analysts have warned about the buildup of a massive AI bubble. Unprecedented amounts of capital are being spent to build AI data centers. Companies like OpenAI are extremely unprofitable and are still propped up by tens of billions of dollars in investment capital. The fundamentals, i.e., revenue versus expenditures, are worsening by the day.

If this bubble bursts, I predict we’ll see a major wave of consolidation in the resource-heavy segment of foundational AI model development, training, and operations. Of today’s eight major AI companies¹, at least half will have to fold because they can no longer find funding for their unprofitable business models. This would leave us with a tight oligopoly of three or four AI conglomerates that operate the largest models. As we learned in Economics 101, oligopolies lead to higher prices for consumers, which will inevitably ripple through the supply chain.

Currently, Claude Code costs around $100 to $200 per developer per month, and the efficiency gains compared to not using it are significant. It’s basically a no-brainer. But what if a subscription ends up costing five or ten times that much while, at the same time, wages for software engineers have steadily declined? In such a world, not every developer would use an AI coding agent all the time. Much as, ten years ago, not every developer on your team needed a top-of-the-line Visual Studio license, not everyone would use an agent for all their tasks. Coding agents would turn into specialized tools, used only in those cases where they add real value. Basically, we’d be back to square one.

Scenario #2: The white dwarf of quality

On the other hand, what if the AI bubble doesn’t burst? What if the price of AI coding tools stays the same or even decreases? What if everyone uses them all the time to build everything? One consequence, which we’re already seeing by some accounts, would be a visible drop in the quality, scalability, performance, and security of software. Coding tools excel at generating large amounts of code but are not so great at designing sustainable architectures that can be maintained over long periods of time. Additionally, the more features you ship, the greater the risk that one of them has undetected bugs or that they interact in ways that haven’t been tested. And, of course, every additional feature increases the attack surface your products expose to malicious actors.

In such a world, awash in low-quality enterprise applications and vibe-coded consumer apps proliferating through the app stores, all riddled with head-scratching bugs and crashes, I believe we’d see a renaissance of the notion of software engineering as a craft. Testing, documenting, architecting, securing, and scaling applications would become en vogue again, if only as a way for some software companies to differentiate their products from the sea of vibe-coded AI slop. Deciding what not to build would become a much more valuable skill than building. “Made by humans” would become a label like “organic” or “GMO-free.” Plus, people who trained for years to design sustainable systems would be in extra high demand. Why? Because, in the meantime, many of them would have left the field. We’re already seeing reports of software engineers refusing to take on the role of “AI overseer.” Engineers are burning out from coordinating subpar coding agents all day and cleaning up the buggy code those agents generate. It’s hard to say whether, once they have embarked on entirely different careers, they would happily return to the field.

Scenario #3: The heat death of the industry

What if that doesn’t happen either? What if there isn’t a major backlash against AI-coded apps, either because they’re so good that no backlash is needed, or because users silently lower their quality expectations? AI companies, of course, promise the former, but history provides many examples of the latter. We accept that factory-produced clothes have stitching errors and aren’t as durable as tailor-made shirts. There’s no reason to believe a similar lowering of the bar couldn’t happen with AI-generated versus professionally engineered software.

This might be the most pessimistic scenario from a self-interested point of view, but perhaps the most likely. In this case, the dynamics outlined above would culminate in a slow but steady decline in the profitability of software companies. These companies would thus forsake the Jevons paradox out of sheer economic pressure, laying off engineers and paying the remaining ones less than during the heyday of the 2010s. Back then, when interest rates were low and VC money was flooding the industry, demand for engineers far outstripped the available supply. By 2025, however, there was already significant pressure on the labor market for entry-level roles in software engineering, and we’ve seen waves of layoffs at the biggest enterprise software companies—although often for reasons unrelated to AI efficiency gains.

Where would that leave us? With the value of our work declining, a new equilibrium between the supply of and demand for software talent would emerge. It would be a world in which we, the engineers, are no longer the superstars of the labor market. Keep in mind that for a long time, we were in such high demand compared to other professions that companies outdid each other with higher wages, better benefits, more stock options, and countless other perks. In this scenario, the free lunches and on-site baristas, the weeklong annual offsites, the hoodies and expensive Christmas presents, and, alas, the $140,000+ annual pay would be a thing of the past. It’s impossible to predict exactly where the valuation of this “new” software engineering role would settle. Probably still above that of a paralegal. But probably significantly below that of our current peers in the income hierarchy: air traffic controllers.

Now what?

If you came here looking for certainty, I’m sorry to say that I can offer you no such thing. But rest assured: No one else can either, no matter how confidently they tout their opinions. There’s one thing I’m sure of, though. The little boy who discovered a new world of possibilities through coding didn’t enter this field thinking he’d have an easy time and make a fortune. He did it because it was fun and interesting, and it offered something new to learn every day. He did it because he loved designing, building, and tinkering, and later writing, speaking, and teaching as well. Pleasure is still to be found in these activities, regardless of whether the AI bubble bursts or the NASDAQ crumbles. Of course it’s a cliché that our industry is one of constant change, endless disruption, and nonstop learning; it’s also true. How can one begin to prepare for that?

One way is by drawing on timeless wisdom. Read Marcus Aurelius if you want to know what a stressful job really looks like. Read Epictetus to learn how to counter harsh and unfair treatment by cultivating inner peace and resilience. Read Seneca on finding tranquility amid chaotic times. Read Viktor Frankl on meaning. Read the Buddhists on mindfulness.

A more practical way is to reconnect with the enduring basics of our own craft. Learn how to break down large problems into smaller ones, train your mind to think algorithmically, and spend time empathizing with the actual users of your software. Finally, ask yourself: Why did I choose this field to begin with? Then lean into whatever it was that sparked your excitement about this beautiful craft.


  1. OpenAI, Google, Meta, Anthropic, xAI, Microsoft, Mistral, Alibaba ↩︎