Discipline Is Destiny: The Power of Self-Control by Ryan Holiday.

In his book Discipline Is Destiny, Ryan Holiday recounts an insightful story about the legendary basketball coach John Wooden. At the start of his first practice with a new team, Wooden did not begin by teaching zone defense or fast-break strategies. Instead—picture this—he gathered a group of world-class athletes in a smelly locker room and showed them how to straighten their socks and put on their shoes. Exactly as you would with a toddler.

To a seasoned player, this must have seemed like a joke. However, Wooden’s logic was irrefutable: a wrinkle in a sock leads to a blister; a blister leads to discomfort; discomfort leads to missed practice, which leads to a drop in performance. A hastily tied shoelace can come undone, which is the last thing you want at the decisive moment of a game.

As a runner myself, I can totally relate. You don’t want to ruin your streak because of an infected blister. Nor do you want to damage your posture due to an imbalance in your stride. And you certainly don’t want to break your neck because your laces came undone on the steepest part of the trail. It’s the basics—the diligence of attending to the mundane, everyday things with great care—that can make or break even the most sophisticated systems. “Brilliant basics,” as a friend of mine keeps reminding their teams at work.

The Magic Curtain of Abstraction

During my two decades in the software industry, I have seen a recurring obsession with abstractions. There is always a “next big thing” that promises to effortlessly and magically solve people’s problems. We are drawn to the allure of the new framework, another cloud provider, or a “zero-code” solution that claims to do all the hard work for us. And we want to believe that there is a wizard behind the curtain.

The truth, alas, is that there is no wizard. There is no magic. In the world of software, there is only more code. “Code all the way down,” to paraphrase Terry Pratchett. Of course, “brilliant basics” doesn’t mean that you review, understand, or even read all of that code. But new wizards are popping up every week or so. And what’s needed to demystify them is a solid understanding of the basic, underlying mechanics of how they work.

Take Retrieval-Augmented Generation (RAG) as just one of many examples. Throughout 2024 and 2025, the technique has been hyped as a nearly miraculous way to customize a general-purpose large language model (LLM) with your proprietary data. The model would be “grounded” in your business context, and thus neither hallucinate nor ramble off topic. However, when you look just one inch past the marketing, you realize that the technique is merely a fancy way of messing with the prompt that gets sent to the model. Why? Because, at the time, the prompt was the only way “in.” A basic understanding of how text-generation models work makes it clear that, unless you want to fine-tune (aka “train”) the model yourself, the only thing you can customize is the text you put in, in expectation of an answer.

Suppose you have a few thousand documents, and the point of your RAG toolchain is to enable users to ask questions about their contents. Following a run-of-the-mill RAG approach, you’ll end up with a lot of fancy tech:

  • A vector database in which the text content of your documents is stored and indexed.
  • A chunking mechanism that slices your documents into prompt-sized pieces.
  • An embedding model to compute how “similar” the user’s question is to any chunk of your documents.
  • A retrieval mechanism that returns those chunks of your documents, based on their semantic similarity to the user’s question.

But the final choke point? The bottleneck through which it all flows?

  • A textual prompt that splices together the user’s question, the “most promising” document chunks, and an instruction for the LLM such as “Answer this question using that data.”
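
Stripped of the tooling, that last mile is plain string assembly. Here is a minimal sketch in Python; the function names are mine, and the word-overlap “similarity” is a deliberately crude stand-in for what a real embedding model and vector database would compute:

```python
import re

def similarity(question: str, chunk: str) -> float:
    # Toy stand-in for embedding similarity: Jaccard overlap of words.
    # Real RAG stacks compare dense embedding vectors instead.
    q = set(re.findall(r"\w+", question.lower()))
    c = set(re.findall(r"\w+", chunk.lower()))
    return len(q & c) / (len(q | c) or 1)

def build_prompt(question: str, chunks: list[str], top_k: int = 2) -> str:
    # Retrieve the "most promising" chunks by similarity score ...
    best = sorted(chunks, key=lambda c: similarity(question, c), reverse=True)[:top_k]
    # ... and splice them into a single textual prompt for the LLM.
    context = "\n---\n".join(best)
    return (
        "Answer the question using only the data below.\n\n"
        f"Data:\n{context}\n\n"
        f"Question: {question}\n"
    )

chunks = [
    "Our refund policy allows returns within 30 days of purchase.",
    "The quarterly sales report shows growth in the EMEA region.",
    "Refunds are processed to the original payment method.",
]
print(build_prompt("What is the refund policy?", chunks))
```

Everything upstream—the vector database, the embedding model, the retriever—exists only to decide which snippets of text get pasted into that one string.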

Anyone debugging—or thinking through—how this works must conclude that the LLM will inevitably generate incoherent, unusable information in many situations. It “sees” only a fraction of your actual business data. It’s fed only splinters of the corpus of documents, which often start or end mid-sentence and are devoid of context and meaning. The LLM is, of course, also preprogrammed to come up with some sort of answer—rather than say “my information on this is inconclusive.” Needless to say, the model will hallucinate, invent, and spout half-truths. How could it do otherwise? 1
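
The “splinters” problem is easy to reproduce. A naive fixed-size chunker—a sketch here; real toolchains usually split on tokens and add some overlap, but the failure mode is the same—happily cuts straight through words and sentences:

```python
def chunk_fixed(text: str, size: int) -> list[str]:
    # Split every `size` characters, with no regard for sentence
    # or word boundaries.
    return [text[i:i + size] for i in range(0, len(text), size)]

doc = "The warranty covers parts for two years. Labor is not included."
for piece in chunk_fixed(doc, 22):
    print(repr(piece))
# → 'The warranty covers pa'
#   'rts for two years. Lab'
#   'or is not included.'
```

Retrieved in isolation, a fragment like “or is not included.” tells the model nothing about *what* is not included—yet that is exactly the kind of splinter that gets pasted into the prompt.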

Sadly, most people won’t think it through, try it out, or debug to see what’s actually going on behind the wizard’s curtain. Most people either ignore or are unaware of the basic mechanics at play. Instead, they peddle “magical” solutions—and often rake in unfathomable consulting fees in the process. The businesses that buy these solutions end up overpaying for something that doesn’t deliver, and they are understandably disappointed and frustrated with the technology.

The Cost of a Toaster

Nevertheless, “brilliant basics” doesn’t mean we have to do or build everything from scratch. In software engineering and in life, we come to rely on abstractions to accomplish meaningful tasks. They make us faster. They allow us to benefit from the work of others. “Standing on the shoulders of giants” enables us to create results that we could never attain on our own.

The Toaster Project: Or a Heroic Attempt to Build a Simple Electric Appliance from Scratch by Thomas Thwaites.

In the famous “Toaster Project,” designer Thomas Thwaites attempted to build something as simple as a $10 household appliance from scratch, including mining the raw ore for the steel and hand-carving the plastic. The project took months, cost a fortune, and resulted in a melted lump of metal that barely worked. Supply chains, specialization, and division of labor have their merit, not least in software engineering.

Abstractions, such as buying a pre-made heating element for a DIY toaster or building an enterprise application using extensive code libraries, are necessary. Modern engineers don’t concern themselves with writing machine code or how bits are translated into electromagnetic waves because we have massive, reliable software stacks that handle those tasks for us.

However, the risk is that we become untethered from reality and lost in abstractions. You don’t have to write machine code yourself or manually perform the Fourier transform calculations needed to modulate bits onto a carrier signal. But you should be mindful that such basic things do take place somewhere, somehow. And that they can fail and bite you in the butt.

When we rely on AI agents to take over the work of coding, an obsession over the basics doesn’t become obsolete—it becomes paramount.

  • How will you evaluate whether a coding agent’s proposed algorithm will meet your scaling needs if you’re unfamiliar with asymptotic runtime complexity?
  • How can you judge whether its architectural decisions are sustainable if you’ve never heard about the basic design patterns of distributed systems?
  • How can you expect your agent to design an empathetic UI if you haven’t done any user research?

Caring about these things is like a basketball pro properly tying their shoes. It’s boring. It’s not glamorous. But it’s unavoidable if you want to succeed in the long run.

It’s All HTML in the End

Having been in this industry for a while, I’ve seen several hype cycles, from the rise of service-oriented architectures, software as a service (SaaS), and hyperscalers, to microservices, and now the AI boom. This one feels “different” because it is different. The tools are more sophisticated, and the opportunities to hide behind this or that framework loom larger than ever. This industry, and many roles within it, is changing beyond recognition. But the underlying problem is the same: abstractions without understanding lead to fragility.

About ten years ago, when a new JavaScript framework seemed to be released every other day and developers could hardly keep up with them all, I attended a conference where I heard an insightful talk titled “In the end, it’s all HTML”. The speaker’s point was simple. Regardless of whether you use React, Vue, Angular, or anything else, the final output is still just tags and attributes rendered by a web browser. If you’re a software developer with a solid understanding of what this implies, you’ll be in a good position. But if you don’t grasp the basics? Around the same time, I had a perplexing run-in with an engineer who was unable to explain which parts of his app were running on the client side and which were running on the server side. That, of course, is a recipe for disaster.

Today, whether you are using an AI agent to scaffold a backend or a sophisticated LLM to write your entire app, the fundamentals of logic, state management, and resource allocation still apply.

Sweat the basics. Tie your shoes. Take apart and understand each component of your bike. Learn how your frameworks actually work under the hood. Check the details of what your AI agent is really doing. It may feel like a waste of time when “salvation” is just a prompt away, but when the game is on the line, you’ll be glad your laces are tight.


  1. In case you’re wondering: Nowadays, there are better ways to solve problems like this one. For example, use “tool calling” or a custom MCP server to let the LLM access a structured data source when it needs specific information. ↩︎