Control and AI
Holding Tight and Letting Go
Earlier, I wrote about determinism and control. I feel a need to return to these concepts because they represent the quiet shift happening beneath software, and they deserve greater attention.
The shift from traditional software to AI is a shift from deterministic systems (where a specific input leads to a specific output) to indeterministic systems (where outputs are probabilistic and fluid). Almost every magical capability of AI is downstream of this indeterminism. But crucially, so are its most frustrating limitations.
If there is one fatal misunderstanding of AI today, it’s that we are engaging with this shift inadequately. “Indeterminism” has entered the lexicon, but usually only at a surface level. And because we are stuck on the surface, the loudest debates about AI have become incredibly boring.
Why the Extremes are Boring
Let’s look at the three loudest factions in the AI debate.
First, the AI doubters. They look at the unpredictable, indeterministic nature of large language models and declare it a failure. To them, a system that hallucinates cannot be trusted, and therefore cannot be useful. This is a boringly misguided example of confirmation bias. Humans are highly indeterministic—we forget things, we make math errors, we have bad days—yet we’ve muddled along reasonably well. How? By inventing deterministic tools to anchor us: long multiplication, checklists, standard operating procedures, etc. The doubter assumes you can’t extract value from an unpredictable system when you need reliability. History proves otherwise.
Second, the AI doomers. They also view indeterminism as a critical failure, but in the opposite direction. They are painfully aware of the immense power of AI systems and assume that this power is inherently uncontrollable. While this makes for a more gripping narrative than the doubters’ view, it strips away human agency. We’d have only one option left: don’t create powerful AI. Setting aside whether it is even possible to perpetually prevent its creation, this fatalism leaves no room for a practical conversation about how to retain control.
Finally, the radical accelerationists. They acknowledge the wild nature of AI but fall prey to a blind optimism, assuming a purely indeterministic system will somehow self-regulate and perfectly align with our needs. This is just as boring. The need for control is not irrational, nor is control a given. If control is achievable, it will demand a deliberate, concerted effort, and an understanding of every available tool to engineer it.
If you want to find interesting conversations, look for the solution seekers.
The Solution Seekers: Layers and Workflows
The most compelling builders today are those who reject both absolute pessimism and absolute optimism. They recognize that solutions aren’t singular or total. The most promising path is layers and workflows that mix and join determinism and indeterminism.
Think about how we manage high-stakes reasoning in the physical world—like in an intensive care unit or the cockpit of a commercial jet. We don’t rely entirely on the raw, in-the-moment reasoning of a doctor or pilot; human reasoning is brilliant but fluid, prone to fatigue, distraction, and variance. But we also don’t rely entirely on rigid, unyielding flowcharts, because a flowchart cannot reason through a novel, complex anomaly.
Instead, we design workflows that rely on both. We build strict, deterministic protocols—mandatory checklists, hard limits on medication dosages, automated collision warnings—to create a safe, predictable framework. Inside that framework, we rely on the judgement of a doctor or pilot to handle context, nuance, and problem-solving. Protocols enforce absolute boundaries; experts provide reasoning. And the frameworks themselves are not static: doctors update protocols based on what they learn, through debate and review, inside yet another layered framework.
This is the architecture of the AI future. AI will dominate the next generation of software, but it will not render deterministic code obsolete. Instead, code is how protocols are encoded: deterministic code routes, authorizes, evaluates, and constrains indeterministic AI actors. Control points written in deterministic code will provide the necessary mechanisms to enforce rules, isolate agency, and supply safety. AI will be called upon within those specific boundaries to reason, interpret intent, and adapt to the messy reality of the user.
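To make the idea concrete, here is a minimal sketch of a deterministic control point wrapping an indeterministic AI actor. Everything here is hypothetical: `call_model` stands in for any LLM client, and the action names and refund limit are invented for illustration. The key point is that the hard rules live in plain, testable code, not in the prompt.

```python
# Deterministic boundary: these rules are enforced in code, every time.
ALLOWED_ACTIONS = {"refund", "escalate", "reply"}
MAX_REFUND = 50.0  # hard limit the model cannot talk its way past

def call_model(prompt: str) -> dict:
    # Placeholder for a real (indeterministic) model call that
    # proposes an action; in practice this would hit an LLM API.
    return {"action": "refund", "amount": 20.0}

def handle_request(prompt: str) -> dict:
    proposal = call_model(prompt)        # indeterministic reasoning
    action = proposal.get("action")
    if action not in ALLOWED_ACTIONS:    # deterministic routing
        return {"action": "escalate", "reason": "unknown action"}
    if action == "refund" and proposal.get("amount", 0) > MAX_REFUND:
        return {"action": "escalate", "reason": "refund over limit"}
    return proposal                      # within the safe boundary
```

The model is free to reason however it likes inside the boundary; the surrounding code decides what is actually allowed to happen.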
The Myth of the Developer’s Demise
This need for control has profound implications for how software is built. Recently, the term “vibe coding” has emerged to describe the practice of building software through natural language interactions with AI. A maximalist subgroup makes an extreme claim that with vibe coding, developers are obsolete and users will prompt their own custom software into existence on the fly.
This misses the fundamental purpose of a developer. A developer’s job is not to write code; a developer’s job is to remove effort for the user. Developing is ultimately not about producing code, but about producing reusable, accessible capabilities for users. An accessible capability is one that requires the least effort to access, and a reusable one is one that can be applied to multiple situations. Code is just the mechanism.
When developers create software, they establish guardrails, conventions, and reusable patterns. Sometimes, a user wants absolute flexibility, and a fluid AI companion is perfect. But often, a user wants rigid reliability. They want to press a button and know exactly what will happen. It’s easy to forget, amidst the explosion of AI capabilities, that rigidness has immense value.
It’s tempting to view recent advancements as a single evolutionary timeline—assuming we are moving from hand-written code, to AI-assisted code, to a future where code is entirely replaced by the just-in-time reasoning of AI agents. That is a mistake, over-extending a trend. Stable code—generated, reviewed, tested, and committed—will exist in abundance. Just-in-time generated code, executed in a protected sandbox, will also be used abundantly.
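The “protected sandbox” half of that claim can be sketched in a few lines. This is only a toy illustration, not a real sandbox: it runs just-in-time generated code in a separate interpreter with a hard timeout, while a production setup would also restrict filesystem, network, and resource access.

```python
import subprocess
import sys
import tempfile

def run_generated(code: str, timeout: float = 2.0) -> str:
    # Write the just-in-time generated code to a temp file and run it
    # in a fresh interpreter process. The timeout is a deterministic
    # limit imposed from outside the (indeterministic) generated code.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    result = subprocess.run(
        [sys.executable, path],
        capture_output=True, text=True, timeout=timeout,
    )
    return result.stdout
```

Committed code earns trust through review and tests; generated code earns a narrower kind of trust through the walls it runs inside.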
The use of models and instructions, reasoned upon just in time, shifts the balance point between flexibility and rigidity, but it won’t do away with code or with the developer.
A Shared Experience: Taming the Machine
For users, future software interfaces will be a mix of structured and natural. Learning to navigate the difference between them will be a vital modern skill.
Structured interfaces (buttons, menus, traditional apps) sit atop deterministic systems. You can trust them to follow a plan. However, that plan was written by a developer. If the developer didn’t anticipate your specific need, the software becomes frustrating. You are forced to learn its non-intuitive logic.
Natural interfaces (chatbots, voice agents) sit on top of indeterministic systems. They can do things developers never anticipated and can interpret your unique intent. But they make assumptions. Using an AI interface is like ordering from a waiter at a restaurant. You need to develop an instinct for how your communication might be misinterpreted. You need to know when the system will ask a clarifying follow-up question (“soup or salad?”), and when you need to be proactively rigid and structured in your commands (“hold the mustard”). Make a mistake here, and you end up with a mustard-covered sandwich. Everyone then has to start over from scratch, and someone has to pay for the waste.
Interestingly, the people building the software are going through the exact same transition.
Developers are increasingly using natural language to write code. For a brief moment, this felt like magic without rules—just type what you want, and the machine builds it. But developers are quickly realizing that an AI coding assistant is just as indeterministic as a chatbot. If they aren’t careful, they end up with the equivalent of a “mustard-covered sandwich” deep in their codebase.
Because of this, we are watching a new kind of structure reemerge in software development. Developers aren’t abandoning natural language, but they are scaffolding it. They are learning when to let the AI riff creatively, and when to enforce strict, deterministic tests to verify the AI’s output. The developer’s job is evolving from writing rigid rules by hand to managing the chaotic intelligence that writes them, locking its best outputs into place so they can be relied upon tomorrow.
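One way to picture that scaffolding: a fixed, deterministic gate that any AI-proposed implementation must pass before it gets locked into the codebase. The `slugify` spec and the candidate below are invented for illustration; the point is that the checks never change between runs, no matter how the candidate was generated.

```python
def accept(candidate) -> bool:
    # Deterministic gate: the same fixed cases are checked every time.
    cases = {
        "Hello World": "hello-world",
        "  spaced  ": "spaced",
        "already-slugged": "already-slugged",
    }
    try:
        return all(candidate(text) == want for text, want in cases.items())
    except Exception:
        return False  # a crashing candidate fails the gate

# A hypothetical AI-generated candidate implementation.
def ai_candidate(text: str) -> str:
    return "-".join(text.lower().split())
```

The AI is free to riff on *how* `slugify` works; the gate decides, identically every time, *whether* its output is allowed to stay.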
Conclusion
For decades, our relationship with computers was fundamentally one-sided: humans had to learn to speak like machines. We memorized menus, learned strict syntax, and clicked exact sequences of buttons. We were forced to be rigid operators of deterministic systems.
AI flips this dynamic, but it introduces a new burden. The era of the comprehensive user manual is over, because you cannot write a complete manual for a probabilistic system. Its capabilities are discovered through interaction, not documented in a spec sheet.
This is why understanding the architecture beneath your feet is no longer just a concern for software engineers. It is a vital literacy for everyone.
If you are an everyday user, recognizing whether you are interacting with a deterministic system or an AI agent changes how you engage. The caution you apply to inputs and outputs should shift. For deterministic systems, provide exactly what is required and nothing more. For AI systems, consider where elaboration yields better results and where vagueness invites guesswork; unless you actually want guesswork, avoid triggering it.
If you are trying to predict where the industry is going, looking for these architectural layers is the only way to cut through the boring extremes of blind hype and cynical doom.
And if you are a builder—whether you are writing thousands of lines of code or just stringing together a few tools to solve a daily problem—understanding this duality is your ultimate advantage. The future of technology isn’t about choosing between the rigid reliability of the past and the creative chaos of the future. It’s about learning to bolt them together.

