<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[norabble]]></title><description><![CDATA[Norabble investigates the complex systems of economics, technology, and global development, applying a pragmatic lens to reveal the hidden mechanics that shape our world.]]></description><link>https://substack.norabble.com</link><image><url>https://substackcdn.com/image/fetch/$s_!_1Oy!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97750d25-7e33-4ebe-87af-6f4b3d0e4138_348x348.png</url><title>norabble</title><link>https://substack.norabble.com</link></image><generator>Substack</generator><lastBuildDate>Wed, 29 Apr 2026 01:08:47 GMT</lastBuildDate><atom:link href="https://substack.norabble.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Ryan Baker]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[norabble@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[norabble@substack.com]]></itunes:email><itunes:name><![CDATA[Ryan Baker]]></itunes:name></itunes:owner><itunes:author><![CDATA[Ryan Baker]]></itunes:author><googleplay:owner><![CDATA[norabble@substack.com]]></googleplay:owner><googleplay:email><![CDATA[norabble@substack.com]]></googleplay:email><googleplay:author><![CDATA[Ryan Baker]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Control and AI]]></title><description><![CDATA[Holding Tight and Letting Go]]></description><link>https://substack.norabble.com/p/control-and-ai</link><guid isPermaLink="false">https://substack.norabble.com/p/control-and-ai</guid><dc:creator><![CDATA[Ryan Baker]]></dc:creator><pubDate>Tue, 
28 Apr 2026 11:03:54 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/9235ab95-b3ad-4254-9249-cb999931edfc_1731x909.png" length="0" type="image/png"/><content:encoded><![CDATA[<p>Earlier, I wrote about <a href="https://substack.norabble.com/p/ai-determinism-and-control-part-2">determinism and control</a>. I feel a need to return to these concepts because they are the quiet shift beneath software and deserve greater attention.</p><p>The shift from traditional software to AI is a shift from deterministic systems (where a specific input leads to a specific output) to indeterministic systems (where outputs are probabilistic and fluid). Almost every magical capability of AI is downstream of this indeterminism. But crucially, so are its most frustrating limitations.</p><p>If there is one fatal misunderstanding of AI today, it&#8217;s that we are engaging with this shift inadequately. &#8220;Indeterminism&#8221; has entered the lexicon, but usually only at a surface level. And because we are stuck on the surface, the loudest debates about AI have become incredibly boring.</p><h3><strong>Why the Extremes are Boring</strong></h3><p>Let&#8217;s look at the three loudest factions in the AI debate.</p><p>First, the <strong>AI doubters</strong>. They look at the unpredictable, indeterministic nature of large language models and declare it a failure. To them, a system that hallucinates cannot be trusted, and therefore cannot be useful. This is a boringly misguided example of confirmation bias. Humans are highly indeterministic&#8212;we forget things, we make math errors, we have bad days&#8212;yet we&#8217;ve muddled along reasonably well. How? By inventing deterministic tools to anchor us: long multiplication, checklists, standard operating procedures, etc. The doubter assumes you can&#8217;t extract value from an unpredictable system when you need reliability. History proves otherwise.</p><p>Second, the <strong>AI doomers</strong>. 
They also view indeterminism as a critical failure, but in the opposite direction. They are painfully aware of the immense power of AI systems and assume that this power is inherently uncontrollable. While this makes for a more gripping narrative than the doubters&#8217; view, it strips away human agency. We&#8217;d have only one option left: don&#8217;t create powerful AI. Setting aside whether it is even possible to perpetually prevent its creation, this fatalism leaves no room for a practical conversation about how to retain control.</p><p>Finally, the <strong>radical accelerationists</strong>. They acknowledge the wild nature of AI but fall prey to a blind optimism, assuming a purely indeterministic system will somehow self-regulate and perfectly align with our needs. This is just as boring. The need for control is not irrational, nor is control a given. If control is achievable, it will demand a deliberate, <em>concerted</em> effort, and an understanding of every tool available for engineering it.</p><p>If you want to find interesting conversations, look for the solution seekers.</p><h3><strong>The Solution Seekers: Layers and Workflows</strong></h3><p>The most compelling builders today are those who reject both absolute pessimism and absolute optimism. They recognize that solutions aren&#8217;t singular or total. The most promising path is layers and workflows that mix and join determinism and indeterminism.</p><p>Think about how we manage high-stakes reasoning in the physical world&#8212;like in an intensive care unit or the cockpit of a commercial jet. We don&#8217;t rely entirely on the raw, in-the-moment reasoning of a doctor or pilot; human reasoning is brilliant but fluid, prone to fatigue, distraction, and variance. But we also don&#8217;t rely entirely on rigid, unyielding flowcharts, because a flowchart cannot reason through a novel, complex anomaly.</p><p>Instead, we design workflows that rely on both. 
We build strict, deterministic protocols&#8212;mandatory checklists, hard limits on medication dosages, automated collision warnings&#8212;to create a safe, predictable framework. Inside that framework, we rely on the judgment of a doctor or pilot to handle context, nuance, and problem-solving. Protocols enforce absolute boundaries; experts provide reasoning. And the frameworks themselves evolve: doctors update protocols based on what they learn, through debate and review, inside yet another layered framework.</p><p>This is the architecture of the AI future. AI will dominate the next generation of software, but it will not render deterministic code obsolete. Instead, code is how protocols are encoded. Those protocols route, authorize, evaluate, and constrain indeterministic AI actors. Control points written in deterministic code will provide the necessary mechanisms to enforce rules, isolate agency, and supply safety. AI will be called upon within those specific boundaries to reason, interpret intent, and adapt to the messy reality of the user.</p><h3><strong>The Myth of the Developer&#8217;s Demise</strong></h3><p>This need for control has profound implications for how software is built. Recently, the term &#8220;vibe coding&#8221; has emerged to describe the practice of building software through natural language interactions with AI. A maximalist subgroup makes the extreme claim that, with vibe coding, developers are obsolete and users will prompt their own custom software into existence on the fly.</p><p>This misses the fundamental purpose of a developer. A developer&#8217;s job is not to write code; a developer&#8217;s job is to <em>remove effort for the user</em>. Developing is ultimately not about producing code, but about producing reusable, accessible capabilities for users. An accessible capability is one that requires the least effort to access, and a reusable capability is one that can be applied to multiple situations. 
Code is just the mechanism.</p><p>When developers create software, they establish guardrails, conventions, and reusable patterns. Sometimes, a user wants absolute flexibility, and a fluid AI companion is perfect. But often, a user wants rigid reliability. They want to press a button and know exactly what will happen. It&#8217;s easy to forget, amidst the explosion of AI capabilities, that rigidity has immense value.</p><p>It&#8217;s tempting to view recent advancements as a single evolutionary timeline&#8212;assuming we are moving from hand-written code, to AI-assisted code, to a future where code is entirely replaced by the just-in-time reasoning of AI agents. That is a mistake that over-extends a trend. Stable code&#8212;generated, reviewed, tested, and committed&#8212;will exist in abundance. Just-in-time generated code, executed in a protected sandbox, will also be used abundantly.</p><p>The use of models and instructions, reasoned upon just in time, shifts the balance point between flexibility and rigidity, but it won&#8217;t abandon code or the developer.</p><h3><strong>A Shared Experience: Taming the Machine</strong></h3><p>For users, future software interfaces will be a mix of structured and natural. Learning to navigate the difference between them will be a vital modern skill.</p><p>Structured interfaces (buttons, menus, traditional apps) sit atop deterministic systems. You can trust them to follow a plan. However, that plan was written by a developer. If the developer didn&#8217;t anticipate your specific need, the software becomes frustrating. You are forced to learn its non-intuitive logic.</p><p>Natural interfaces (chatbots, voice agents) sit on top of indeterministic systems. They can do things developers never anticipated and can interpret your unique intent. But they make assumptions. Using an AI interface is like ordering from a waiter at a restaurant. You need to develop an instinct for how your communication might be misinterpreted. 
You need to know when the system will ask a clarifying follow-up question (&#8220;soup or salad?&#8221;), and when you need to be proactively rigid and structured in your commands (&#8220;hold the mustard&#8221;). Make a mistake here, and you end up with a mustard-covered sandwich. Everyone then has to start over from scratch, and someone has to pay for the waste.</p><p>Interestingly, the people building the software are going through the exact same transition.</p><p>Developers are increasingly using natural language to write code. For a brief moment, this felt like magic without rules&#8212;just type what you want, and the machine builds it. But developers are quickly realizing that an AI coding assistant is just as indeterministic as a chatbot. If they aren&#8217;t careful, they end up with the equivalent of a &#8220;mustard-covered sandwich&#8221; deep in their codebase.</p><p>Because of this, we are watching a new kind of structure reemerge in software development. Developers aren&#8217;t abandoning natural language, but they are scaffolding it. They are learning when to let the AI riff creatively, and when to enforce strict, deterministic tests to verify the AI&#8217;s output. The developer&#8217;s job is evolving from writing rigid rules by hand to managing the chaotic intelligence that writes them, locking its best outputs into place so they can be relied upon tomorrow.</p><h3><strong>Conclusion</strong></h3><p>For decades, our relationship with computers was fundamentally one-sided: humans had to learn to speak like machines. We memorized menus, learned strict syntax, and clicked exact sequences of buttons. We were forced to be rigid operators of deterministic systems.</p><p>AI flips this dynamic, but it introduces a new burden. The era of the comprehensive user manual is over, because you cannot write a complete manual for a probabilistic system. 
Its capabilities are discovered through interaction, not documented in a spec sheet.</p><p>This is why understanding the architecture beneath your feet is no longer just a concern for software engineers. It is a vital literacy for everyone.</p><p>If you are an everyday user, recognizing whether you are interacting with a deterministic system or an AI agent changes how you engage. The caution you apply to inputs and outputs should shift. For deterministic systems, provide exactly what is required and nothing more. For AI systems, consider where elaboration yields better results and where vagueness leads to guesswork. Unless you need guesswork, avoid triggering that path.</p><p>If you are trying to predict where the industry is going, looking for these architectural layers is the only way to cut through the boring extremes of blind hype and cynical doom.</p><p>And if you are a builder&#8212;whether you are writing thousands of lines of code or just stringing together a few tools to solve a daily problem&#8212;understanding this duality is your ultimate advantage. The future of technology isn&#8217;t about choosing between the rigid reliability of the past and the creative chaos of the future. It&#8217;s about learning to bolt them together.</p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;d9b78ba2-b767-4a0b-8d6d-d69ac0d76516&quot;,&quot;caption&quot;:&quot;What do you think of when the topic of AI comes up? I think there are some common answers here. Most of those answers are incomplete. I hope I can provide a deeper understanding by looking at the concept of control, and patterns of application. 
This will be a two-part series: the first part describes a framework and the foundational layer of AI uses, and the second describes more advanced applications.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;lg&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;AI, Determinism and Control (Part 1)&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:61710810,&quot;name&quot;:&quot;Ryan Baker&quot;,&quot;bio&quot;:&quot;Software architect, with 30+ years of experience, ex-AWS. My professional history explains my expertise in software, cloud computing, and AI, my focus on economics and urban development stems from decades of personal interest and independent study.&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2376ff1a-8f8b-4e42-b164-1855d9e7999b_140x105.png&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2026-04-06T11:30:07.361Z&quot;,&quot;cover_image&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/cd951cd5-388a-4c05-b795-6a543c957ac1_1220x1422.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://substack.norabble.com/p/ai-determinism-and-control-part-1&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:193078429,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:0,&quot;comment_count&quot;:1,&quot;publication_id&quot;:1642290,&quot;publication_name&quot;:&quot;norabble&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!_1Oy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97750d25-7e33-4ebe-87af-6f4b3d0e4138_348x348.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div class="digest-post-embed" 
data-attrs="{&quot;nodeId&quot;:&quot;642fbe8a-98c2-4cf5-a15e-7407f33b1a92&quot;,&quot;caption&quot;:&quot;In Part 1 of this series, we explored how AI is fundamentally altering software control through the lenses of determinism and scope. We traced the journey from passive, strictly bounded chatbots to the threshold of active agents&#8212;AI systems capable of autonomous, multi-step planning. But what happens when these indeterminate systems are given broader scope and powerful tools? The consequences ripple outward, reshaping not just the security of our infrastructure, but the shape of our workflows and emotional relationship to work. To understand the recursive systems of tomorrow, we must dive into the agent ecosystem itself.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;lg&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;AI, Determinism and Control (Part 2)&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:61710810,&quot;name&quot;:&quot;Ryan Baker&quot;,&quot;bio&quot;:&quot;Software architect, with 30+ years of experience, ex-AWS. 
My professional history explains my expertise in software, cloud computing, and AI, my focus on economics and urban development stems from decades of personal interest and independent study.&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2376ff1a-8f8b-4e42-b164-1855d9e7999b_140x105.png&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2026-04-07T11:40:21.896Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!LkP2!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86f85cf0-af7b-4d8e-89db-0ae9ff30f041_1220x2632.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://substack.norabble.com/p/ai-determinism-and-control-part-2&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:193008931,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:0,&quot;comment_count&quot;:0,&quot;publication_id&quot;:1642290,&quot;publication_name&quot;:&quot;norabble&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!_1Oy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97750d25-7e33-4ebe-87af-6f4b3d0e4138_348x348.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div>]]></content:encoded></item><item><title><![CDATA[Update on AI CyberSecurity]]></title><description><![CDATA[I&#8217;m travelling this week, so this will be short, but I thought the reactions to Mythos have been interesting.]]></description><link>https://substack.norabble.com/p/update-on-ai-cybersecurity</link><guid isPermaLink="false">https://substack.norabble.com/p/update-on-ai-cybersecurity</guid><dc:creator><![CDATA[Ryan Baker]]></dc:creator><pubDate>Thu, 16 Apr 2026 16:38:53 GMT</pubDate><enclosure 
url="https://substack-post-media.s3.amazonaws.com/public/images/9a0519f6-56ad-44c1-bc50-2a933878d284_1408x768.png" length="0" type="image/png"/><content:encoded><![CDATA[<p>I&#8217;m travelling this week, so this will be short, but I think the reactions to Mythos have been interesting. The <a href="https://www.economist.com/science-and-technology/2026/04/15/how-ai-hackers-will-shake-up-cyber-security">core reaction</a>, after a little panic, has been consistent with the structure I outlined in <a href="https://substack.norabble.com/p/security-cant-wait">Security Can&#8217;t Wait</a> last month. Namely, the short term brings some risk, but the long term favors the defender.</p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;33bf76dd-8977-47e4-ad5c-d2184eaa48b3&quot;,&quot;caption&quot;:&quot;Right now, Artificial Intelligence is fundamentally rewriting the rules of cybersecurity&#8212;and we do not have the luxury of waiting before taking action.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;lg&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;Security Can&#8217;t Wait&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:61710810,&quot;name&quot;:&quot;Ryan Baker&quot;,&quot;bio&quot;:&quot;Software architect, with 30+ years of experience, ex-AWS. 
My professional history explains my expertise in software, cloud computing, and AI, my focus on economics and urban development stems from decades of personal interest and independent study.&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2376ff1a-8f8b-4e42-b164-1855d9e7999b_140x105.png&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2026-03-05T21:05:09.345Z&quot;,&quot;cover_image&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7b2a65ed-e701-4f36-8d82-2a665189419b_2816x1536.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://substack.norabble.com/p/security-cant-wait&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:190039490,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:4,&quot;comment_count&quot;:1,&quot;publication_id&quot;:1642290,&quot;publication_name&quot;:&quot;norabble&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!_1Oy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97750d25-7e33-4ebe-87af-6f4b3d0e4138_348x348.png&quot;,&quot;belowTheFold&quot;:false,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><p>One thing that is still being missed is why the long term favors the defenders. One reason is that fewer defects is an almost unqualified good for defenders. Another relates to costs and benefits. Read the final statements of the Economist article linked above, which suggest that defenders will have to pay dearly to discover defects.</p><p>Now reflect that this isn&#8217;t new: it has always been expensive to discover defects. The risk that products like Mythos bring is that they lower the cost of discovering defects to exploit. The solution is to raise that cost. 
That might tempt you to suggest rewinding the clock and never inventing Mythos. That&#8217;s not a solution, though: eventually attackers would invent something similar, and defenders would lose the control and advantage that come from being first with access.</p><p>Instead, the solution is to find as many easy defects as you can and fix them. The first 100 defects might cost $20,000 per defect to discover. The next 100 might cost $40,000 each, and so on. Along the way you end up with defensive layers that increasingly reinforce one another, and the cost for attackers to discover defects goes up, especially if they have less sophisticated tools and/or must first spend heavily to gain illicit access to tools. When Mythos is publicly released, you can generally assume providers will step up their efforts to find and ban users with ill intent. Those protections create costs for attackers, such that if a defender can find a defect for $20,000, an attacker might need $100,000. The attacker&#8217;s main advantage is that they need only one, but as unpatched defects become rarer and harder to find, that advantage tends to shift toward the larger aggregate budgets of defenders.</p><p>The defenders have a strong advantage in terms of money. Where they struggle is in organization, because they have a much harder organizational problem to solve. The hard part about being a defender is <a href="https://substack.norabble.com/p/deployments-cant-wait">getting changes deployed everywhere quickly</a>. Once attackers find a defect, they can try to use it everywhere. If they find it first, that works in a lot of places. If they find it second, it depends on how organized the deployment process is.</p><p>And this is why the long-term economics favor the defender. Statistically, most defects are found first by defenders, due to larger budgets. As the period between discoveries gets longer, the chance that attackers have good targets declines. 
That lowers their cost/benefit, which probably also lowers their actual budget. Criminals invest in things that make them money, not ones that lose it.</p>]]></content:encoded></item><item><title><![CDATA[AI, Determinism and Control (Part 2)]]></title><description><![CDATA[The Agent Ecosystem and the Human Hand-off]]></description><link>https://substack.norabble.com/p/ai-determinism-and-control-part-2</link><guid isPermaLink="false">https://substack.norabble.com/p/ai-determinism-and-control-part-2</guid><dc:creator><![CDATA[Ryan Baker]]></dc:creator><pubDate>Tue, 07 Apr 2026 11:40:21 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!LkP2!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86f85cf0-af7b-4d8e-89db-0ae9ff30f041_1220x2632.png" length="0" type="image/png"/><content:encoded><![CDATA[<div class="callout-block" data-callout="true"><p><em><a href="https://substack.norabble.com/p/ai-determinism-and-control-part-1">In Part 1</a> of this series, we explored how AI is fundamentally altering software control through the lenses of <strong>determinism</strong> and <strong>scope</strong>. We traced the journey from passive, strictly bounded chatbots to the threshold of active agents&#8212;AI systems capable of autonomous, multi-step planning. But what happens when these indeterminate systems are given broader scope and powerful tools? The consequences ripple outward, reshaping not just the security of our infrastructure, but the shape of our workflows and emotional relationship to work. To understand the recursive systems of tomorrow, we must dive into the agent ecosystem itself.</em></p></div><h2><strong>The Agent Ecosystem</strong></h2><p>Agents represent a significant shift in control, trading linear human prompting for continuous indeterministic planning.</p><p>To understand how these agents operate, we must briefly consider <strong>tools</strong>. 
Agents use tools to accomplish their plans. Tools can be anything, and which tools an agent is provided with define its constraints. You can provide an agent instructions, cautions, and directives through its prompt and context, but, as with everything in an agent, adherence to them is indeterminate.</p><p>A tool might be as basic and low-risk as retrieving a specific account balance, where the boundaries are tight and predictable. It might be as broad as searching gigantic repositories like the entire internet or an organization&#8217;s internal files, which escalates risk by exposing the agent to untrusted data or sensitive information.</p><p>Broader still is the ability to create and execute computer code, introducing severe risk if left unchecked. That might initially seem to abandon all constraints, allowing the agent to perform unanticipated or dangerous actions. However, code can be executed in a sandbox that limits how it communicates and what data it can access. Assuming the sandbox is secure&#8212;which requires careful planning, inspection, and testing&#8212;<a href="https://aws.amazon.com/blogs/machine-learning/control-which-domains-your-ai-agents-can-access/">restricting communication with untrusted sites</a> prevents data exfiltration or external control. Just as critical is controlling the credentials provided to the sandbox. Strictly limiting credentials restricts the agent&#8217;s ability to update records or access systems outside the purview of its current authorized activity. Together, these boundaries provide the necessary mechanism to constrain this high-risk capability.</p><p>Tool use isn&#8217;t restricted to retrieving information, either; it can allow <em>changing</em> information, which can trigger further actions. This is an area that requires much more caution, doubly so for writes and actions that are irreversible. Beyond that obvious observation, two other dimensions come into play. 
First, since an agent&#8217;s plan is indeterminate, removing the risk that it performs actions in unanticipated ways is a vastly more complex task for a designer than it is with a deterministic plan. Second, we must account for prompt injection&#8212;the risk that something an agent has read can influence its choices, resulting in actions desired by an attacker rather than the user or designer. There are protections against this type of attack, but it would be foolish to consider them foolproof.</p><p>With that foundational understanding of how agents act on the world, we can observe this frontier opening up across escalating levels of scope:</p><h3><strong>Standalone AI Agents</strong></h3><p>Unlike a chatbot that waits for a prompt, a standalone agent is given a high-level objective, allowed to indeterministically generate its own step-by-step plan, and execute it using available tools (like searching the web or scraping data). While the planning is continuous and autonomous, the agent still typically operates within a relatively bounded scope, restricted by specific APIs to prevent runaway consequences.</p><p>As with chatbots, there are standalone agents from OpenAI, Claude, Google, and others. In fact, most chatbots have silently become agents<em><a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></em>, though still with many constraints. </p><p>In time, these standalone agents may have more and more autonomy. 
But from the perspective of this framework, the fundamental aspect of taking user input, deriving a plan through an indeterministic process, and executing that plan won&#8217;t re-enter the realm of determinism until it invokes a tool<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a>.</p><h3><strong>Agents Embedded in Applications</strong></h3><p>Moving beyond the simple &#8220;embedded AI node&#8221; discussed in Part 1 involves agents operating continuously alongside users within a shared software environment. Consider a complex data analysis platform: the human user might explicitly invoke deterministic tools to filter data, while an embedded agent operates in the background, autonomously invoking its own set of analytical tools to highlight anomalies. The application becomes a hybrid ecosystem where human indeterminism and agent indeterminism collaborate in real time, bounded by the application&#8217;s guardrails.</p><p>Agents embedded in applications have an advantage over agents called by other agents: the input data is controlled by the calling application. Still, remember that the applications agents are embedded in may themselves be working with dynamic data. A data analysis platform has many data sources; are they all vetted and invulnerable to an injection attack?</p><p>Another common example today is agents embedded into development workflows. They can reason about code, look for security issues or defects, generate fixes, and submit them as pull requests for developers to review, effectively combining the code-generation function with the embedding function.</p><h3><strong>Agents Using Agents</strong></h3><p>An agent can become a tool used by other agents. To see why this is valuable, first understand that agents generally have a few components. At their core, they create plans via the GenAI model. 
They generally have some sort of instruction (or persona) file. They&#8217;ll also have access rights or boundaries associated with a tool list.</p><p>The instruction file is the interesting part here, as it provides the reason for a distinct agent. In the simplest version, it might describe a persona (&#8220;you are an insurance adjuster&#8221;), but this relies on the GenAI model to blindly decide how an adjuster behaves. If you&#8217;re building an agent, you want more control. A detailed instruction file can&#8217;t shift you to a fully deterministic world (if you want that, you should write code), but it can reduce variability.</p><p>Another aspect that can remove ambiguity is placing restrictions on inputs and input sources. If an agent expects to receive user input, it has to be fully prepared for anything. If it&#8217;s called from an application, those expectations are more constrained.</p><p>Agent input expectations fall into two categories: expectations about successfully fulfilling the agent&#8217;s goal under valid usage, and expectations about avoiding action on behalf of an attack. These two have some non-overlapping aspects. If an input is suspected of being part of an attack, there&#8217;s no need to do anything other than quit and refuse to act. But the consequences of allowing an attack are generally far more severe. On the other hand, failing to successfully complete an action is less severe, but it&#8217;s less acceptable to give up because of uncertainty.</p><p><strong>If we had wanted certainty, and accepted inaction for uncertainty, we should have written a traditional application, not an agent.</strong> In many ways, the creation of agents with sophisticated instruction files is a type of meta-programming that never coalesces into a deterministic form. 
While we could use vibe-coding to generate an application, creating an instruction file for an agent has a similar outcome, except designers never get the chance to validate the plan for each agent execution. We might restore some of that validation through a human-in-the-loop workflow, but the agent designer won&#8217;t be in the loop unless they are also the user.</p><p>Furthermore, when input comes from another agent, the expectations on input are not very clear. It&#8217;s not as unclear as coming from an untrusted user, but since the user of the <em>calling</em> agent might be less than fully trusted, we have to consider the possibility that a malicious user could cause the calling agent to pass dangerous inputs to the called agent. Depending on the design, that might be difficult, but proving it&#8217;s impossible is a high bar without some deterministic system in the path.</p><h3><strong>Agents Building Applications</strong></h3><p>At the apex of the framework we have agents building applications. Instead of a human using an AI tool to write code, an autonomous agent&#8212;or a multi-agent framework&#8212;is given the broad scope to architect, write, test, and deploy entire applications. Operating within bleeding-edge, emerging ecosystems like Steve Yegge&#8217;s concept of <a href="https://steve-yegge.medium.com/welcome-to-gas-town-4f25ee16dd04">Gas Town</a>, an overarching agent might autonomously spawn specialized &#8220;code worker&#8221; sub-agents to solve specific architectural problems. This introduces the reality of <strong>deep recursion</strong>: AI systems dynamically writing, testing, and deploying new AI systems at machine speed.</p><h3><strong>Agents Building Agents</strong></h3><p>An alternate apex is agents building other agents. While both scenarios rely on deep recursion where at least one level is indeterminate, the agents-building-agents path stores its recursive plans in natural language, rather than a programming language. 
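Because the recursive plan lives in natural language, the only hard guarantees left are the deterministic bounds enforced by the hosting code. A minimal sketch of one such bound, a depth guard on agent spawning (call_model and spawn_agent are hypothetical placeholders, not a real API):

```python
# Hypothetical sketch of agents spawning agents with a recursion guard.
# call_model() and spawn_agent() are placeholders, not a real API; the point
# is that once agents can create agents, an explicit deterministic bound
# (max_depth) is one of the few hard guarantees the designer can still make.

def call_model(instructions: str, task: str) -> dict:
    """Placeholder planner: delegates once, then finishes."""
    if "delegate" in task:
        return {"action": "spawn", "sub_task": "summarize findings"}
    return {"action": "finish", "result": f"done: {task}"}

def spawn_agent(instructions: str, task: str, depth: int = 0, max_depth: int = 3):
    if depth >= max_depth:
        # Without this deterministic bound, "Agent A calls Agent B to create
        # Agent C, which can call Agent B" can recurse indefinitely.
        return {"error": "max agent depth reached"}
    plan = call_model(instructions, task)
    if plan["action"] == "spawn":
        return spawn_agent(instructions, plan["sub_task"], depth + 1, max_depth)
    return plan
```

The depth counter is crude, but it is deterministic, which is exactly what the natural-language layers of the system cannot offer on their own.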
Tools like <a href="https://www.anthropic.com/product/claude-cowork">Claude Cowork</a> and <a href="https://www.anthropic.com/product/claude-code">Claude Code</a> are bordering on this. Technically, they&#8217;ve always been capable of it, as a developer can create recursion somewhat trivially.</p><p>The barrier here has generally been security. It&#8217;s rather easy to say, &#8220;Agent A calls Agent B to create Agent C, which can call Agent B&#8221; (look, I just did it!). The hard part is whether that&#8217;s a good idea. Projects like <a href="https://openclaw.ai/">OpenClaw</a> push this further. When agents build other agents&#8217; skills, or update them through tools like <a href="https://www.moltbook.com/">Moltbook</a>, they are acting at this highly complex, deeply recursive layer. OpenClaw has some security controls, but not enough to prevent many users from <a href="https://blog.barrack.ai/openclaw-security-vulnerabilities-2026/">making significant mistakes</a>.</p><p>Another example that illustrates the movement from applications to agents is <a href="https://steve-yegge.medium.com/vibe-maintainer-a2273a841040#:~:text=Gas%20Town%20is%20a%20%E2%80%9Cpack%E2%80%9D%20within%20Gas%20City">Gas Town in Gas City</a>. Gas Town, the original multi-agent orchestration system for Claude Code, GitHub Copilot, and other AI agents, was an application. 
When Yegge wrote Gas City, an &#8220;orchestration-builder SDK for multi-agent systems&#8221;, Gas Town became &#8220;code free&#8221;, turning into a bundle of prompts and skills.</p><div id="datawrapper-iframe" class="datawrapper-wrap outer" data-attrs="{&quot;url&quot;:&quot;https://datawrapper.dwcdn.net/KoYXv/3/&quot;,&quot;thumbnail_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/86f85cf0-af7b-4d8e-89db-0ae9ff30f041_1220x2632.png&quot;,&quot;thumbnail_url_full&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f6242f71-c3cb-497c-8e28-1dfa6635d4a3_1220x2702.png&quot;,&quot;height&quot;:1320,&quot;title&quot;:&quot;AI Use Cases&quot;,&quot;description&quot;:&quot;&quot;}" data-component-name="DatawrapperToDOM"><iframe id="iframe-datawrapper" class="datawrapper-iframe" src="https://datawrapper.dwcdn.net/KoYXv/3/" width="730" height="1320" frameborder="0" scrolling="no"></iframe><script type="text/javascript">!function(){"use strict";window.addEventListener("message",(function(e){if(void 0!==e.data["datawrapper-height"]){var t=document.querySelectorAll("iframe");for(var a in e.data["datawrapper-height"])for(var r=0;r<t.length;r++){if(t[r].contentWindow===e.source)t[r].style.height=e.data["datawrapper-height"][a]+"px"}}}))}();</script></div><h2><strong>Ripple Effects: Cybersecurity</strong></h2><p>One constant of more complex systems is the greater challenge of securing them. All else being equal, systems that are partly or wholly indeterministic are more complicated than fully deterministic ones. Security is traditionally about protecting deterministic plans from indeterminate actors (human hackers). Each layer adds complexity as well. Attackers will have new tools and will use them against the highest-value targets that have weak points.</p><p>Fortunately, all this does not come without some benefits, both inside and outside the realm of security. 
Inside the realm of security, access to dynamic systems like agents allows for faster responses. As I&#8217;ve explored previously, this means <a href="https://substack.norabble.com/p/security-cant-wait">security can&#8217;t wait</a>&#8212;it will force not just an overdue commitment to defense, but complete organizational changes.</p><p>The highest-value targets know this. They will rapidly adopt new defensive techniques, patching weak points faster than ever before. Where we will see more successful attacks is against moderate-value targets. <a href="https://substack.norabble.com/p/deployments-cant-wait">Some operate efficiently</a> and will adapt, but others will face an &#8220;adapt or fail&#8221; pressure cooker. They will suddenly find themselves defending against highly sophisticated, indeterminate automated attacks.</p><p>Overall, however, this is a narrative of optimism. As Dario Amodei notes, <em>&#8220;<a href="https://www.darioamodei.com/essay/the-adolescence-of-technology#:~:text=the%20offense%2Ddefense%20balance%20may%20be%20more%20tractable%20in%20cyber%2C%20where%20there%20is%20at%20least%20some%20hope%20that%20defense%20could%20keep%20up%20with%20(and%20even%20ideally%20outpace)%20AI%20attack%20if%20we%20invest%20in%20it%20properly.">the offense-defense balance may be more tractable in cyber, where there is at least some hope that defense could keep up with (and even ideally outpace) AI attack if we invest in it properly.</a>&#8221;</em> Security must simply shift toward robust bounding and sandboxing of environments, rather than assuming the predictability of the software operating within them.</p><h2><strong>Automation and Workflow Change</strong></h2><p>This shift in control&#8212;from human-driven applications to autonomous, recursive agents&#8212;isn&#8217;t happening just for the sake of technological novelty. Ultimately, the goal is, and has always been, automation. 
AI changes opportunities for automation by lowering automation costs that were previously prohibitive.</p><h3><strong>Classification and ML</strong></h3><p>Machine Learning (ML) is an AI technique that achieved broad use earlier than Generative AI. Classification and prediction tasks were the core use cases. It&#8217;s generally less well known than Generative AI because those use cases fit into embedded AI workflows that have less direct user interaction. But that doesn&#8217;t mean they haven&#8217;t been effective. Generative AI has some overlap, but it&#8217;s useful to note ML is not obsolete&#8212;it will continue to dominate specific classification tasks where the trade-offs favor highly optimized, low-compute execution.</p><p>But GenAI is shifting the math for automation&#8217;s long tail where engineering effort is the limiting factor. Traditional ML models require a significant engineering investment to train. While that engineering could theoretically be automated, doing so would bring you back to using GenAI to generate the code. When a general-purpose GenAI model can perform a task at equal quality without that upfront engineering time, it opens up a new option for countless use cases that were never practical to tackle with traditional ML. AI provides the structure to finally capture and automate the tacit knowledge we previously had to rely on humans to execute.</p><h3><strong>Workflow Change</strong></h3><p>This, and the other uses of Generative AI, allows a deeper decomposition of workflows. In prior methodologies, it was too expensive to capture the output of specific, granular steps. Those steps were done &#8220;in the head&#8221; of human workers, existing only as &#8220;tacit knowledge.&#8221; A workflow that may have produced better results might have been avoided because the human cost of data preparation or classification was too high.</p><p>While it&#8217;s possible to replicate the same workflows, trade-offs have shifted. 
Consider a nurse who has observed a new symptom in a patient. That nurse may lack the depth of medical knowledge or the patient&#8217;s full history, so may not be able to do more than record that information until the patient&#8217;s doctor can review it. But an AI system can reanalyze a patient&#8217;s information nearly instantly. It can recategorize data, make new recommendations, or provide the nurse with the relevant medical information and patient history. This could allow next steps that improve both efficiency and outcomes. Maybe it&#8217;s an extra test, or an extra question, or a life-saving reaction.</p><p>Because AI lowers the cost of executing small, indeterminate tasks, we can now decompose workflows further. What workflow is optimal depends heavily on hand-off costs. In the example of the nurse, the hand-off costs to the doctor were the impediment. Human-to-human or human-to-machine hand-offs are expensive compared to machine-to-machine. When tasks shift from human-dependent to machine-dependent, a reorganization of workflow makes sense. A particular flow that was used to avoid hand-offs may no longer be necessary. More importantly, those hand-offs that remain will have higher relevance than before, and optimizing for them, rather than those that are no longer needed, takes priority.</p><p>Initially, we should expect to see pilots, trials, and first iterations operate within existing workflows. Changing workflows requires planning and training, and is hard to reverse or do incrementally. As such, it follows in later iterations. But many of the largest gains are realized with those iterations.</p><h3><strong>The Human Side</strong></h3><p>Workflow change can also have a significant impact on satisfaction amongst workers. Hand-offs can be the most frustrating type of work, depending on your personality type. 
Human-to-machine hand-offs become frustrating when flexibility is lacking, and you feel like your task is to fit a round peg into a square hole. Human-to-human hand-offs can sometimes be enriched by the personal interaction, but they also expose you to misaligned goals, competing priorities, and personality conflicts. Personal interactions are a lot more reliably fun when you get to pick the individuals and circumstances.</p><p>Worst of all, machine-to-human hand-offs can create the impression that you&#8217;re serving the machine, not the inverse. All hand-offs can have that effect somewhat, but it&#8217;s especially hard if there&#8217;s an endless list of machine-generated work. It helps to detach from the &#8220;end&#8221; and focus on the progress here, but when an organization turns that into a metric, ruthlessly gamifies it, and fails to consider the impacts, it requires extreme stoicism to avoid burnout.</p><p>It&#8217;s important to remember the human side with workflow change. Ruthless metrics fail in the long term. Leaders should watch for that, and avoid allowing short-term goals to overwhelm long-term health. It&#8217;s not always clear that this serves the &#8220;bottom line&#8221;, if that means financial performance. It clearly serves goals broader than the financial ones, and even for financial goals, the benefits are likely there, though harder to see.</p><h2><strong>Conclusion</strong></h2><p>The era of software as purely rigid, deterministic planning is ending. In its place is the rapidly expanding Agent Ecosystem. By integrating indeterministic models into our autonomous systems, and allowing agents to build other agents, we are trading perfect predictability for unprecedented scale and capability.</p><p>As the scope of these systems increases, our primary job shifts from writing static instructions to managing boundaries. 
We must build robust technical sandboxes to protect our cybersecurity, and we must build equally robust organizational boundaries to protect human workers from the burnout of endless machine-to-human hand-offs. We have to design systems that serve humans, not the other way around.</p><p>The question is no longer just &#8220;What can the chatbot say?&#8221; The real questions are: How much scope are we willing to give to indeterminate plans? How will we effectively bound the recursive systems of tomorrow? And, ultimately, how do we bound ourselves?</p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;367ff460-5a37-4500-93b9-ed64fa4ab0cc&quot;,&quot;caption&quot;:&quot;What do you think of when the topic of AI comes up? I think there are some common answers here. Most of those answers are incomplete. I hope I can provide a deeper understanding by looking at the concept of control, and patterns of application. This will be a two-part series: the first part describes a framework and the foundational layer of AI uses, and the second describes more advanced applications.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;lg&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;AI, Determinism and Control (Part 1)&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:61710810,&quot;name&quot;:&quot;Ryan Baker&quot;,&quot;bio&quot;:&quot;Software architect, with 30+ years of experience, ex-AWS. 
My professional history explains my expertise in software, cloud computing, and AI, my focus on economics and urban development stems from decades of personal interest and independent study.&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2376ff1a-8f8b-4e42-b164-1855d9e7999b_140x105.png&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2026-04-06T11:30:07.361Z&quot;,&quot;cover_image&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/cd951cd5-388a-4c05-b795-6a543c957ac1_1220x1422.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://substack.norabble.com/p/ai-determinism-and-control-part-1&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:193078429,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:0,&quot;comment_count&quot;:1,&quot;publication_id&quot;:1642290,&quot;publication_name&quot;:&quot;norabble&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!_1Oy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97750d25-7e33-4ebe-87af-6f4b3d0e4138_348x348.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><p> </p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;6a827910-462d-465b-a9a1-43225804c239&quot;,&quot;caption&quot;:&quot;Right now, Artificial Intelligence is fundamentally rewriting the rules of cybersecurity&#8212;and we do not have the luxury of waiting before taking action.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;lg&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;Security Can&#8217;t Wait&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:61710810,&quot;name&quot;:&quot;Ryan 
Baker&quot;,&quot;bio&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2376ff1a-8f8b-4e42-b164-1855d9e7999b_140x105.png&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2026-03-05T21:05:09.345Z&quot;,&quot;cover_image&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7b2a65ed-e701-4f36-8d82-2a665189419b_2816x1536.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://substack.norabble.com/p/security-cant-wait&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:190039490,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:4,&quot;comment_count&quot;:1,&quot;publication_id&quot;:1642290,&quot;publication_name&quot;:&quot;norabble&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!_1Oy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97750d25-7e33-4ebe-87af-6f4b3d0e4138_348x348.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;158a6066-f235-433f-b873-b4a78079836a&quot;,&quot;caption&quot;:&quot;In the broader discourse on artificial intelligence, the sharpest minds in AI safety are currently looking to the horizon. 
They are focused on existential, cinematic threats: the potential for AI-generated bioweapons, nuclear command vulnerabilities, and autonomous warfare.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;lg&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;Deployments Can't Wait&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:61710810,&quot;name&quot;:&quot;Ryan Baker&quot;,&quot;bio&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2376ff1a-8f8b-4e42-b164-1855d9e7999b_140x105.png&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2026-03-23T11:50:42.029Z&quot;,&quot;cover_image&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2d95d729-a5d3-4f42-9d39-bf371396315c_2812x1536.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://substack.norabble.com/p/deployments-cant-wait&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:191818851,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:2,&quot;comment_count&quot;:0,&quot;publication_id&quot;:1642290,&quot;publication_name&quot;:&quot;norabble&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!_1Oy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97750d25-7e33-4ebe-87af-6f4b3d0e4138_348x348.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p><em>There is debate on what the minimum requirement is to be an &#8220;agent&#8221;. 
For this framework, we&#8217;ll use the looser form that does not require continual autonomy, but simply the ability to create and execute a plan, which may still involve supervision. Just two of many models of describing agency:</em> <a href="https://arxiv.org/abs/2405.06643">arXiv (Huang et al., May 2024)</a>; <a href="https://arxiv.org/html/2506.12469v1">arXiv (Feng et al., June/July 2025)</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p><em>AI chatbot providers may give users a way to define deterministic workflows, but you can think of these as user-built tools; a very simple version of application building.</em></p><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[AI, Determinism and Control (Part 1)]]></title><description><![CDATA[Taming the First Layers of Indeterminism]]></description><link>https://substack.norabble.com/p/ai-determinism-and-control-part-1</link><guid isPermaLink="false">https://substack.norabble.com/p/ai-determinism-and-control-part-1</guid><dc:creator><![CDATA[Ryan Baker]]></dc:creator><pubDate>Mon, 06 Apr 2026 11:30:07 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/cd951cd5-388a-4c05-b795-6a543c957ac1_1220x1422.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>What do you think of when the topic of AI comes up? I think there are some common answers here. Most of those answers are incomplete. I hope I can provide a deeper understanding by looking at the concept of control, and patterns of application. 
This will be a two-part series: the first part describes a framework and the foundational layer of AI uses, and the second describes more advanced applications.</p><p>To the earlier question, you wouldn&#8217;t be alone if your first answer was a user-facing chat application&#8212;ChatGPT, Gemini, or Claude. Hundreds of millions, maybe billions of users have tried one of these<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>. It&#8217;s a great starting point to understand the current state of AI. You can probe with questions, get answers, and evaluate the output. 30 minutes of that will demonstrate more than any other 30-minute investment. That said, forming your entire impression of AI based solely on chatbots misses deeper shifts.</p><p>A second common answer is a robot from a science fiction movie&#8212;a human-like, but fundamentally alien, being. While creative, this vision represents what AI <em>might</em> be, not what it is today. Thinking about Sci-Fi AI often does more to help us understand human nature than actual machine learning. Take any part of it too literally, and it will make you less informed.</p><p>Understanding AI&#8217;s true impact requires closing the gap between simple chatbots and science fiction robots. We can do that by looking at AI through the lens of software engineering and control. Specifically, we can treat the control mechanism of software as a system of <strong>planning</strong>, governed by two critical axes: <strong>Determinism</strong> and <strong>Scope</strong>.</p><h2><strong>The Mechanics of Control: Determinism and Scope</strong></h2><p>Historically, software has been built on predictability. When a human developer writes traditional code, they are creating a deterministic plan. Once it passes through quality checks, validations, and testing, it becomes a solid, rigid set of instructions. 
While bugs exist, the system is fundamentally designed to execute the same way every time.</p><p>Humans, on the other hand, are inherently indeterministic; you never know exactly how a user will approach a problem, what strategies they will employ, or how they might adapt their plans on the fly. <a href="https://www.coursera.org/articles/what-is-generative-ai">Generative AI models</a><a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a>&#8212;the underlying engines powering the familiar chat tools like ChatGPT and Gemini mentioned earlier&#8212;share this indeterminism. When a GenAI model creates a plan or a response, it is probabilistic. It will not necessarily produce the same output twice. That said, if there is a single correct answer that has been heavily reinforced during its training, the model will predominantly provide that specific answer. This consistency occurs because the statistical weight leans overwhelmingly in that direction, not because the system&#8217;s inherent flexibility has been mechanically removed.</p><p>The second axis of control is the execution environment, or its scope. An environment can be strictly bounded, meaning the actor (human or AI) has very limited tools and access. A user in a simple data-entry application cannot do much other than enter data. Conversely, an environment can have broad scope, featuring wide-ranging access to tools with compounding effects, such as command-line execution, file system access, or the ability to write and deploy new code outside of a sandbox.</p><p>By analyzing AI through these axes&#8212;how predictable a plan is, and how bounded its environment is&#8212;a clear narrative of evolution emerges.</p><p><strong>What is Indeterminism?</strong></p><p>What you identify as the &#8220;AI&#8221; in these applications is called a foundation model. 
This model generates responses to your inputs based on its &#8220;training.&#8221; Training is a process where the model is incrementally updated to make its responses match an ideal. You can think of the first training pass as an attempt to create a model that could rewrite the entire internet with as little inconsistency as possible. Initially, the model might predict a word because it looks like a piece of content it just saw. But as it processes more information, it encounters conflicts. By forcing the model to resolve millions of overlapping conflicts, it learns the underlying rules of how concepts connect. Later, a second training pass is applied to be much more specific about what is considered a &#8220;good&#8221; or &#8220;bad&#8221; way to respond to a human user.</p><p>Output from AI models is indeterminate. There are two factors that cause this. The first, unconquerable aspect is that the relationship between input and output is too complex to reason through. You actually <em>can </em>get a model to produce the exact same output for the exact same input if you adjust its &#8220;temperature&#8221; to zero. However, this doesn&#8217;t mean the model is truly predictable, because even minute changes in the input can put the model on completely different paths, even at zero temperature. The second factor is that when you see a model used in practice, the temperature is generally set greater than zero. Zero temperature tends to be boring, less creative, and less insightful without necessarily being more accurate&#8212;it just enforces a stronger consistency between input and output. But since the complexity of that relationship makes strict prediction impossible anyway, the value of zero temperature is limited, and the output remains, in all practical senses, indeterminate.</p><p>But indeterminate isn&#8217;t the same as random; it has a direction. With evaluation you can find a probability, and those probabilities can be high. 
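The effect of temperature described above can be illustrated with a toy softmax sampler. The vocabulary and scores are invented, but the mechanics mirror how temperature reshapes a model&#8217;s per-token output distribution:

```python
import math
import random

# Toy illustration of sampling temperature. The "model" here is just a fixed
# set of scores (logits) over a tiny made-up vocabulary; real models produce
# such scores for every token they emit.

LOGITS = {"plan": 2.0, "idea": 1.5, "scheme": 0.5}

def sample(logits: dict, temperature: float, rng: random.Random) -> str:
    if temperature == 0:
        # Temperature zero collapses to argmax: same input, same output,
        # though a tiny change to the logits can still flip the winner.
        return max(logits, key=logits.get)
    # Higher temperature flattens the distribution; lower sharpens it.
    weights = {w: math.exp(s / temperature) for w, s in logits.items()}
    total = sum(weights.values())
    r = rng.random() * total
    for word, weight in weights.items():
        r -= weight
        if r <= 0:
            return word
    return word

rng = random.Random(0)
assert all(sample(LOGITS, 0, rng) == "plan" for _ in range(5))  # deterministic
# At temperature > 0 the output is a weighted draw: usually "plan", sometimes not.
```

Even at temperature zero, the determinism is only input-to-output consistency; the complexity of the mapping itself is what keeps the system practically indeterminate.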
But indeterminate also entails the chance for novelty, including surprise. To some degree, we might say indeterminism reflects a limitation of the user or designer&#8217;s ability to predict outcomes. But it&#8217;s not a limitation reflective of a lazy user or designer; it reflects a level of complexity no amount of attention can fully address. You can fight it a bit, gain some control and understanding, but if your expectation is full control, you&#8217;re using the wrong toolbox, wasting your time, and will ultimately fail.</p><p>Indeterminism is something you tame, not control. A tame agent is something that works with you. Tame things have many benefits, but also bear caution. A tame horse has advantages a car does not. If you fall asleep on a horse (and don&#8217;t fall off), the horse is very unlikely to jump off a cliff. A traditional, non-autonomous car doesn&#8217;t behave that way&#8212;it&#8217;s very deterministic at driving straight, whether that straight path leads down the road, into a wall, or over a cliff.</p><p>But a tame horse can still kick you. It&#8217;s far less likely than a wild horse, but if you approach it wrong or scare it, there&#8217;s no horse so well-trained that a kick becomes impossible. That&#8217;s part of the tradeoff of working with something indeterminate. A car with its engine off is going to behave like any other 2,000+ lb. hunk of metal on wheels, governed entirely by physics. Even in motion, while there are a few exceptions like engine or brake failures, it&#8217;s all just physics in the end.</p><p>Most software applications are designed to be determinate. A developer reasoned out what output a particular input should create, and planned this carefully. 
The plan of these applications is encoded into a language and translated into machine code.</p><p>Understanding this shift to a probabilistic nature is crucial, because many choices we make will be founded on seeking a balance between dynamism and trust that intersects with that fundamental property of models.</p><h2><strong>The Thin Layer: AI as an Application</strong></h2><p>Most users have started to understand what GenAI is, and what its capabilities are, by using it as an application. What&#8217;s interesting about this is that these first applications started as very thin layers over the core internal generative AI model, so users have experienced the technology at nearly its most basic. It&#8217;s been a while since a novel computing technique has been exposed with so few extra layers.</p><p>In a formal sense, &#8220;AI as an application&#8221; means the primary interface is directly to a GenAI model. There are a few wrapper elements&#8212;identifying who you are, moving data back and forth, and providing some presentation of returned data&#8212;but mostly, it&#8217;s a wrapper. You send inputs, it sends outputs, and you directly converse back and forth.</p><p>In our framework, this is an <em>indeterminate system operating within a strictly bounded scope</em>. The user and the model are primarily in control. The text, images, or files you provide get fed to the AI, and its probabilistic responses are safely constrained by the application&#8217;s sandbox. Safeguards exist that neither can override, but within those boundaries, the direction of the interaction is controlled linearly by the human and the AI model.</p><h2><strong>Embedded Intelligence: AI within Applications</strong></h2><p>As software evolves, we are seeing a shift toward embedding AI directly into applications. 
While technically a chatbot is an example of this, it is highly useful to differentiate the two.</p><p>When a developer embeds GenAI into an existing application&#8212;for example, a <a href="https://aws.amazon.com/blogs/machine-learning/build-an-ai-powered-a-b-testing-engine-using-amazon-bedrock/">GenAI-powered A/B testing engine</a> that automatically generates and tests multiple variations of marketing copy to identify the best performer&#8212;the control dynamics shift. The overarching application remains a rigid, deterministic plan, but the AI represents a small, contained pocket of indeterminism.</p><p>With AI embedded in an application, the application is primarily still in control. When it uses an AI for a specific function, it cedes a small amount of control, but it has strict boundaries. The traditional software dictates exactly <em>when</em> the AI is called and <em>where</em> its output goes, strictly bounding its influence to specific micro-outcomes. There is immense potential left in this domain that the general public doesn&#8217;t recognize, simply because its precise use is entirely dependent on the creativity of application developers.</p><h3><strong>Prompt Engineering</strong></h3><p>An interesting detail about embedding is that <a href="https://aws.amazon.com/what-is/prompt-engineering/">prompt engineering</a> becomes a critical function. Embedded GenAI needs a goal. With chatbots, the user can provide the goal. With embedded GenAI, if the input is from the user, it&#8217;s through things like filling forms or uploading documents. The embedded function should reliably reach a result that allows the application workflow to proceed.</p><p>Prompt engineering is the process of creating inputs to a GenAI model that perform better at achieving a goal than other inputs. Some parts of this are intuitive to anyone fluent in a language, like if you were instructing an actual assistant. 
Some parts are more particular to GenAI models, or even particular to specific GenAI models.</p><p>Technically, you can use prompt engineering when you use a chatbot, and you&#8217;ll get better results if you do. But there is an overhead in doing so, as you&#8217;re no longer expressing your simple intent but working to make it fit a pattern. Model builders work to make prompt engineering less necessary for general user interactions, so some tricks matter less than they did in 2023. The most obvious parts will probably always be useful, like avoiding ambiguity when you have a clear intent in mind.</p><p>For embedding, the overhead of prompt engineering has a higher payback, so it makes sense to engage with it more deeply, and so developers do. Prompt engineering is also used when creating custom chatbots<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a>.</p><p>Another goal of prompt engineering is to constrain the output. Stronger, more consistent instructions produce more consistent outputs.</p><h3><strong>Embedding and Value</strong></h3><p>You might have noted that the A/B testing example earlier was a marketing example, and thus falls into the <a href="https://substack.norabble.com/p/ai-and-the-zero-sum-game">adversarial category</a> of use I&#8217;ve talked about before. Marketing is often an early adopter of these embedded systems because the industry is driven by adversarial motives&#8212;constantly competing against others for user attention and clicks. But the true potential is much more inspiring. Consider an adaptive educational platform. The overarching application rigidly tracks a student&#8217;s progress, curriculum, and test scores (the deterministic plan). 
However, when the system detects a student struggling with a specific concept, it calls upon an embedded GenAI model to instantly generate a custom, interactive story explaining that exact concept using the student&#8217;s favorite hobbies as an analogy. The application remains fully in control, but it uses the AI&#8217;s indeterminate flexibility to provide a deeply personalized learning experience that hardcoded software never could.</p><h2><strong>Building Systems with a Life of Their Own</strong></h2><p>The control dynamic fundamentally fractures with the next pattern: using AI as a tool to build applications. So far, experiences with this have revolved around coding assistants and &#8220;vibe coding.&#8221; In some cases, it&#8217;s immediately clear this is different from the chatbot model because the AI is embedded within complex Integrated Development Environments (IDEs).</p><p>But what truly distinguishes this pattern isn&#8217;t the interface. Rather, it&#8217;s the output and how it is used. Business-focused tools like <a href="https://www.anthropic.com/product/claude-cowork">Claude Cowork</a> or <a href="https://aws.amazon.com/quick/">Amazon Quick</a> are increasingly managing different inputs and outputs to help end-users pursue task-oriented goals, generating artifacts like documents, summaries, and presentations. But if that output is ephemeral&#8212;a static artifact used only to accomplish a quick, singular objective&#8212;it&#8217;s not building an application.</p><p>Building applications means building something that has a life of its own. It is the <em>indeterministic generation of deterministic plans</em>. The AI indeterministically generates a code script, which is then refined through human review, automated reasoning, and testing. Once committed, it becomes a static, deterministic plan.</p><p>The topic of control highlights the profound nature of this shift. When an application is built this way, there are two distinct phases of control. 
In the first phase, the developer is in control, sharing control with the generative AI model by delegating, authorizing, and reviewing. But the developer does not retain persistent control during the second phase. Once deployed, the built application itself takes control. If it runs on a server, the developer can turn it off or replace it, but that is supervisory control at best.</p><p>By building something with a life of its own, we take a foundational <strong>recursive step</strong> in software: using indeterminate AI to architect and generate the very deterministic logic that will govern computing moving forward.</p><div id="datawrapper-iframe" class="datawrapper-wrap outer" data-attrs="{&quot;url&quot;:&quot;https://datawrapper.dwcdn.net/nnjJ2/2/&quot;,&quot;thumbnail_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f1f47da1-709f-4eb8-b69c-426449346dec_1220x1352.png&quot;,&quot;thumbnail_url_full&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/dcc98ac4-d192-4641-8967-2b9444812a26_1220x1422.png&quot;,&quot;height&quot;:643,&quot;title&quot;:&quot;AI Use Cases (First Layer)&quot;,&quot;description&quot;:&quot;&quot;}" data-component-name="DatawrapperToDOM"><iframe id="iframe-datawrapper" class="datawrapper-iframe" src="https://datawrapper.dwcdn.net/nnjJ2/2/" width="730" height="643" frameborder="0" scrolling="no"></iframe><script type="text/javascript">!function(){"use strict";window.addEventListener("message",(function(e){if(void 0!==e.data["datawrapper-height"]){var t=document.querySelectorAll("iframe");for(var a in e.data["datawrapper-height"])for(var r=0;r<t.length;r++){if(t[r].contentWindow===e.source)t[r].style.height=e.data["datawrapper-height"][a]+"px"}}}))}();</script></div><h2><strong>From Chatbot to Agent</strong></h2><p>The user-facing AI chat application you engage with has already evolved to be more than a simple interface. 
Designers try to make this seamless for you, so you shouldn&#8217;t feel bad if you missed the change, but to understand the systems of tomorrow, we need to distinguish between a passive chatbot and an active agent.</p><p>When you interact with a standard chatbot, the control dynamic is strictly conversational and reactive. You provide a prompt, the underlying generative AI model probabilistically calculates a response, and then it stops. It relies entirely on you to drive the interaction forward step-by-step.</p><p>An <strong>agent</strong>, on the other hand, is an AI system designed to pursue a broader goal autonomously. Instead of just answering a single prompt, an agent takes an objective, indeterministically breaks it down into a multi-step plan, and uses available tools to execute that plan. It can observe the results of its own actions, correct its course, and continue working until the goal is met.</p><p>Understanding this shift from passive generation to active execution is crucial. By combining the probabilistic reasoning of foundation models with the ability to take independent action, we are actively moving away from simple chatbots and into the era of agents.</p><p>We have traced the evolution of AI from simple chat interfaces to embedded intelligence, and finally to the threshold of these autonomous agents. But recognizing this shift is only the first step. To truly understand where software is heading, we must examine the wild frontier of the agent ecosystem itself&#8212;how these agents use tools, how they interact with each other, and the cybersecurity implications of granting them broad scope. 
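The plan-act-observe loop just described can be sketched in a few lines; everything here (the step budget, the toy "tool") is illustrative rather than any real agent framework:

```python
# Illustrative agent loop (no real framework implied): plan a step, act on it,
# observe the result, and repeat until the goal is met or a budget runs out.
def run_agent(goal, plan, act, goal_met, max_steps: int = 10) -> list:
    """Pursue `goal` by repeatedly planning, executing, and observing.
    `plan`, `act`, and `goal_met` stand in for model calls and tool use."""
    history = []
    for _ in range(max_steps):
        step = plan(goal, history)         # model proposes the next action
        observation = act(step)            # a tool executes it
        history.append((step, observation))
        if goal_met(goal, history):        # agent checks its own progress
            break
    return history

# Toy run: "count to 3", where each step increments a counter and the
# "tool" simply echoes the step back as its observation.
trace = run_agent(
    goal=3,
    plan=lambda goal, h: len(h) + 1,
    act=lambda step: step,
    goal_met=lambda goal, h: h[-1][1] >= goal,
)
```

Real agents replace these lambdas with model calls and tool invocations, but the control structure is the same: the loop, not the user, drives each step forward.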
<a href="https://substack.norabble.com/p/ai-determinism-and-control-part-2">In Part 2 of this essay, we will dive into this ecosystem, exploring the deep recursion of agents building agents, and what this shift means for the ultimate goal of automation</a>.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p><a href="https://www.businessofapps.com/data/chatgpt-statistics/">900 million ChatGPT users</a> and <a href="https://www.businessofapps.com/data/google-gemini-statistics/">750 million Gemini users</a> globally.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p><em>There are two terms you might hear that are almost always used incorrectly. <strong>Large Language Model (LLM)</strong> originally described a model that took text inputs and produced text outputs, trained on a large amount of text with high complexity. Technically you rarely use an LLM anymore, as most models are multimodal (supporting text and graphics). Despite that change, the term LLM has enough weight that people use it anyway, even though it is technically incorrect. The term <strong>Foundation Model</strong>, meanwhile, has a broader scope. While this allows it to encompass large multimodal models, it technically also includes many earlier types of models not advanced enough to perform the actions associated with &#8220;AI&#8221;, more commonly described as Machine Learning (ML). If a better term were popular, I&#8217;d use it, but in general you should probably think of them as all the same, and if necessary use context to refine the intent. 
<strong>Generative AI Model</strong> is the best term, but if you see LLM or foundation model in anything other than an academic context, you can assume they are referring to generative AI models.</em></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p><em>I want to be clear though. When I talk about embedded GenAI, I&#8217;m not referring to custom chatbots, like the one that answers your company&#8217;s HR questions. Those were never going to be particularly transformative. They&#8217;ve been made fun of quite a lot, and for good reason. While of some utility, they were really just cheap upgrades to search capabilities, and often underperformed general-purpose chatbots. We don&#8217;t need a special category for those until they become full-fledged agents.</em></p><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[The AI Jobs Blind Spot]]></title><description><![CDATA[Why Job Creation is the Default]]></description><link>https://substack.norabble.com/p/the-ai-jobs-blind-spot</link><guid isPermaLink="false">https://substack.norabble.com/p/the-ai-jobs-blind-spot</guid><dc:creator><![CDATA[Ryan Baker]]></dc:creator><pubDate>Mon, 30 Mar 2026 12:15:42 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/fa01df32-94e2-4d3f-9e9a-1cf6816267bc_2816x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When discussing AI and the future of work, there is a glaring blind spot in the general discourse: the fundamental baseline state of an economy is to create new jobs.</p><p>It is common to hear people argue that &#8220;technology creates new jobs,&#8221; usually pointing out that despite centuries of technological advancement, nearly everyone is employed today. Therefore, they argue, the fear that technology destroys jobs must be wrong. 
While it is true that technologies can both create and eliminate specific roles, framing the debate entirely around the technology misses the underlying engine. The real topic is the economy itself, which naturally seeks to create new jobs from available resources&#8212;the most limited of which is labor. <a href="https://davidoks.blog/p/why-the-atm-didnt-kill-bank-teller">Whether ATMs create more or less demand for bank tellers</a> is simply not as important as we think.</p><div class="embedded-post-wrap" data-attrs="{&quot;id&quot;:190553382,&quot;url&quot;:&quot;https://davidoks.blog/p/why-the-atm-didnt-kill-bank-teller&quot;,&quot;publication_id&quot;:4554783,&quot;publication_name&quot;:&quot;David Oks&quot;,&quot;publication_logo_url&quot;:null,&quot;title&quot;:&quot;Why ATMs didn&#8217;t kill bank teller jobs, but the iPhone did&quot;,&quot;truncated_body_text&quot;:&quot;A few months ago, J. D. Vance, sitting vice president of the United States, gave an interview to Ross Douthat of the New York Times. 
During that interview, Vance and Douthat had an interesting exchange:&quot;,&quot;date&quot;:&quot;2026-03-10T22:29:42.275Z&quot;,&quot;like_count&quot;:1567,&quot;comment_count&quot;:110,&quot;bylines&quot;:[{&quot;id&quot;:2088240,&quot;name&quot;:&quot;David Oks&quot;,&quot;handle&quot;:&quot;doks&quot;,&quot;previous_name&quot;:&quot;Stylite&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/553a38f8-f363-424f-8648-742af2eacc8d_1024x1024.png&quot;,&quot;bio&quot;:&quot;Essays on economics, technology, history&quot;,&quot;profile_set_up_at&quot;:&quot;2021-04-25T15:01:09.752Z&quot;,&quot;reader_installed_at&quot;:&quot;2023-06-18T14:21:19.283Z&quot;,&quot;publicationUsers&quot;:[{&quot;id&quot;:4646174,&quot;user_id&quot;:2088240,&quot;publication_id&quot;:4554783,&quot;role&quot;:&quot;admin&quot;,&quot;public&quot;:true,&quot;is_primary&quot;:true,&quot;publication&quot;:{&quot;id&quot;:4554783,&quot;name&quot;:&quot;David Oks&quot;,&quot;subdomain&quot;:&quot;davidoks&quot;,&quot;custom_domain&quot;:&quot;davidoks.blog&quot;,&quot;custom_domain_optional&quot;:false,&quot;hero_text&quot;:&quot;The world is what it is.&quot;,&quot;logo_url&quot;:null,&quot;author_id&quot;:2088240,&quot;primary_user_id&quot;:2088240,&quot;theme_var_background_pop&quot;:&quot;#FF6719&quot;,&quot;created_at&quot;:&quot;2025-03-30T23:49:08.700Z&quot;,&quot;email_from_name&quot;:&quot;David Oks&quot;,&quot;copyright&quot;:&quot;doks&quot;,&quot;founding_plan_name&quot;:&quot;Founding 
Member&quot;,&quot;community_enabled&quot;:true,&quot;invite_only&quot;:false,&quot;payments_state&quot;:&quot;enabled&quot;,&quot;language&quot;:null,&quot;explicit&quot;:false,&quot;homepage_type&quot;:&quot;newspaper&quot;,&quot;is_personal_mode&quot;:false,&quot;logo_url_wide&quot;:null}}],&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null,&quot;status&quot;:{&quot;bestsellerTier&quot;:null,&quot;subscriberTier&quot;:1,&quot;leaderboard&quot;:null,&quot;vip&quot;:false,&quot;badge&quot;:{&quot;type&quot;:&quot;subscriber&quot;,&quot;tier&quot;:1,&quot;accent_colors&quot;:null},&quot;paidPublicationIds&quot;:[1198116,1071360,159185,1063960],&quot;subscriber&quot;:null}}],&quot;utm_campaign&quot;:null,&quot;belowTheFold&quot;:false,&quot;type&quot;:&quot;newsletter&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="EmbeddedPostToDOM"><a class="embedded-post" native="true" href="https://davidoks.blog/p/why-the-atm-didnt-kill-bank-teller?utm_source=substack&amp;utm_campaign=post_embed&amp;utm_medium=web"><div class="embedded-post-header"><span></span><span class="embedded-post-publication-name">David Oks</span></div><div class="embedded-post-title-wrapper"><div class="embedded-post-title">Why ATMs didn&#8217;t kill bank teller jobs, but the iPhone did</div></div><div class="embedded-post-body">A few months ago, J. D. Vance, sitting vice president of the United States, gave an interview to Ross Douthat of the New York Times. During that interview, Vance and Douthat had an interesting exchange&#8230;</div><div class="embedded-post-cta-wrapper"><span class="embedded-post-cta">Read more</span></div><div class="embedded-post-meta">2 months ago &#183; 1567 likes &#183; 110 comments &#183; David Oks</div></a></div><p>Debates about automation often get stuck here. One side correctly argues that economies have historically created new jobs, but incorrectly attempts to prove this by claiming <em>technologies</em> always create new jobs. 
These are two different arguments. This minor framing issue does a lot of heavy lifting in keeping both sides from understanding each other. Once you look past the technology and focus on the economic engine, you can discuss the future of jobs in far more effective ways than debating the fate of bank tellers.</p><h3><strong>Why Does an Economy Create New Jobs?</strong></h3><p>It is easy to misunderstand how economies work if you view them through a lens of strict limits rather than dynamic balance. A certain mindset approaches every economic issue as a zero-sum game of apportionment&#8212;assuming there is a fixed number of jobs in the world, and introducing a new technology either adds to or subtracts from that finite pool.</p><p>It is not hard to see why this mindset takes hold; in a moment-to-moment sense, it appears true. At any given second, there are a fixed number of jobs. Eliminate 500,000 of them instantly, and you have 500,000 unemployed people. But economies are not static moments; they are moving systems that perpetually seek equilibrium. While external limits exist, the internal machinery of an economy is entirely dedicated to finding a balance between those limits and the limitless preferences and desires of the people within it.</p><p>That is why, in their default state, economies always create new jobs out of available labor. If there is an unmet desire among the population, and there is labor available to fulfill it, the economy will generate an opportunity to put that labor to work. This balancing act isn&#8217;t instantaneous. There is a &#8220;seeking&#8221; process to find a new equilibrium. Sometimes this process stalls, the economy malfunctions, and we experience high unemployment. But high unemployment is not the result of an absolute limit on total possible jobs; it is a breakdown in how quickly the economy adjusts to new parameters.</p><h3><strong>Will AI Create New Jobs?</strong></h3><p>Yes, it will. 
Will it create more than it eliminates? Probably not. But the <em>economy</em> will still create new jobs, and it isn&#8217;t dependent on AI to do so.</p><p>Consider software engineering. The number of computer programmers necessary to write and maintain a specific piece of software will likely go down due to AI. However, that doesn&#8217;t extinguish the societal desire for <em>more</em> software or <em>better</em> software. AI didn&#8217;t create those desires, but those human desires will inevitably create new jobs focused on building that better software.</p><p>Economies do not constrain the matching of human desire and available labor to a specific job description. Often, one job type is entirely replaced by another. As farming became more efficient with the advent of tractors and fertilizers, freed-up labor initially went back into farming to manage more land. Eventually, agricultural limits were reached, a different source of balance was invoked, and that freed-up labor transitioned into industry and manufacturing.</p><p>The same principle applies to AI. Until every need and desire of the human population is met, there will be pressure on the economy&#8217;s balancing forces to create jobs to meet them. The only absolute barrier to meeting that pressure is a lack of available labor. If AI ever becomes so universally capable that humanity literally has no more unmet needs or desires, we will have reached an incredibly unprecedented state. <a href="https://substack.norabble.com/p/the-economic-future-from-and-of-ai">While I have previously explored how the economy might actually function if that AGI future arrives</a>, history tells us that this kind of post-scarcity utopia is always further away than we imagine. 
In the meantime, the world gets more efficient without flipping into a topsy-turvy reality where the fundamental economic force of putting available labor to work ceases to exist.</p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;b9016373-7f11-41c7-a6d6-caed0eb6518e&quot;,&quot;caption&quot;:&quot;This will be part one of a two part series. In the first part, I want to outline some of my views about how salient a set of what we might call existential concerns about AI should be. In part two, I want to discuss some more immediate interactions with today's economy&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;lg&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;The Economic Future from and of AI&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:61710810,&quot;name&quot;:&quot;Ryan Baker&quot;,&quot;bio&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2376ff1a-8f8b-4e42-b164-1855d9e7999b_140x105.png&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2025-09-07T14:08:35.292Z&quot;,&quot;cover_image&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d02180eb-af84-4846-b470-d641afa59da1_512x512.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://substack.norabble.com/p/the-economic-future-from-and-of-ai&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:173016480,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:1,&quot;comment_count&quot;:0,&quot;publication_id&quot;:1642290,&quot;publication_name&quot;:&quot;norabble&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!_1Oy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97750d25-7e33-4ebe-87af-6f4b3d0e4138_348x348.png&quot;,&quot;belowTheFold&qu
ot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><h3><strong>What About the Short Term?</strong></h3><p>These macroeconomic forces dictate our long-term expectations: the economy will eventually balance out. But we live in the present, making it entirely reasonable to ask how the job market is changing right now and what to expect in the short-to-medium term.</p><p>Currently, I find the evidence of broad job market changes <em>already</em> caused by AI to be very weak. Conversely, I find the probability of broad <em>future</em> changes to be very strong. Most informed observers without ulterior motives tend to agree with this assessment. However, the prevailing public narrative has settled on exactly the opposite pairing.</p><p>To the casual observer, the dominant narrative is that AI has already triggered widespread job losses and restructuring, but will ultimately fail to live up to its long-term hype due to inherent technical limitations. While this is just a prevailing vibe&#8212;and conversations often reveal more nuance&#8212;it is worth examining how illogical this combination of opinions is, and why it is so easily adopted.</p><h3><strong>Weighing the Evidence for Changes That Have Already Occurred</strong></h3><p>The belief that AI has already upended the job market is easy to support because countless articles have delivered it as a concrete conclusion.</p><p>One form of article takes real statistics about a weakening job market&#8212;or specific sectors like tech&#8212;and correlates them directly with the release of AI products like ChatGPT. As a hypothesis, this is fine; as a conclusion, it is incredibly poor. 
It fails for two main reasons: it ignores major alternative economic forces, and it assumes a timeline of corporate reaction that defies reality.</p><p>Because an economy is about balance, if other substantial forces can explain job market shifts, the &#8220;AI did it&#8221; correlation becomes incredibly weak. When looking at the period since ChatGPT&#8217;s release in late 2022, we are swimming in alternative economic forces.</p><p>First, the COVID-19 pandemic created a profound shock. In-person jobs vanished and slowly recovered, while tech firms over-hired to meet the surging demand for remote work, supply chain management, and digital education. Executives extrapolated that temporary surge into permanent future demand. While some of that expected permanent shift was indeed realized, the world largely returned to a physical &#8220;normal.&#8221; As the most extreme growth expectations evaporated, it triggered a sharp, ongoing correction in tech employment.</p><p>Second, we experienced a severe inflation surge. While the exact interplay is complex, the relationship between inflation, interest rate hikes, and employment cooling is a foundational and uncontroversial economic reality. Finally, we are running the radical experiment of applying 1930s-style tariffs to a modern, globalized economy. Disentangling these three major, structural forces from the data to pinpoint AI as the primary culprit for recent layoffs is nearly impossible.</p><p>Furthermore, the timeline required for these correlation theories is implausibly fast. ChatGPT is released, and supposedly, jobs immediately begin to decline. No economic theory predicts immediate structural decline from a new tool. At a minimum, users must adopt the tool and prove its efficiency. Then, managers must recognize this efficiency, rewrite staffing plans, get approval, and execute layoffs. This takes quarters, if not years. 
Yet, the main correlational narratives point to tech job losses that actually began <em>months before</em> ChatGPT was even released.</p><p>This exact flawed logic was perfectly encapsulated in a viral graph that circulated widely,  which<a href="https://www.derekthompson.org/p/is-this-the-new-scariest-chart-in"> Derek Thompson observed was often being shared with commentary declaring it the "scariest chart in the world"</a>. The chart accurately shows the S&amp;P 500 rising while total job openings fall, with an ominous vertical line marking ChatGPT&#8217;s release right at the inflection point.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!TXvb!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F684f5bd1-e1f0-4a70-a4df-45e5071bbac4_1161x850.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!TXvb!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F684f5bd1-e1f0-4a70-a4df-45e5071bbac4_1161x850.png 424w, https://substackcdn.com/image/fetch/$s_!TXvb!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F684f5bd1-e1f0-4a70-a4df-45e5071bbac4_1161x850.png 848w, https://substackcdn.com/image/fetch/$s_!TXvb!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F684f5bd1-e1f0-4a70-a4df-45e5071bbac4_1161x850.png 1272w, https://substackcdn.com/image/fetch/$s_!TXvb!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F684f5bd1-e1f0-4a70-a4df-45e5071bbac4_1161x850.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!TXvb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F684f5bd1-e1f0-4a70-a4df-45e5071bbac4_1161x850.png" width="1161" height="850" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/684f5bd1-e1f0-4a70-a4df-45e5071bbac4_1161x850.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:850,&quot;width&quot;:1161,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!TXvb!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F684f5bd1-e1f0-4a70-a4df-45e5071bbac4_1161x850.png 424w, https://substackcdn.com/image/fetch/$s_!TXvb!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F684f5bd1-e1f0-4a70-a4df-45e5071bbac4_1161x850.png 848w, https://substackcdn.com/image/fetch/$s_!TXvb!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F684f5bd1-e1f0-4a70-a4df-45e5071bbac4_1161x850.png 1272w, https://substackcdn.com/image/fetch/$s_!TXvb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F684f5bd1-e1f0-4a70-a4df-45e5071bbac4_1161x850.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft 
icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><div class="embedded-post-wrap" data-attrs="{&quot;id&quot;:176860342,&quot;url&quot;:&quot;https://www.derekthompson.org/p/is-this-the-new-scariest-chart-in&quot;,&quot;publication_id&quot;:2880588,&quot;publication_name&quot;:&quot;Derek Thompson&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!uPIO!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F38b0f850-caa7-417a-bc0b-5b7224dd1f25_888x888.png&quot;,&quot;title&quot;:&quot;Is This the New &#8216;Scariest Chart in the World&#8217;?&quot;,&quot;truncated_body_text&quot;:&quot;In the last few days, I&#8217;ve seen the following chart bounce around my corner of the Internet, 
often with some commentary declaring it the scariest chart in the world. The graph seems to show that the release of ChatGPT and the ensuing AI boom cracked the US economy in two, crushing the workforce while lifting the stock market.&quot;,&quot;date&quot;:&quot;2025-10-23T10:23:18.822Z&quot;,&quot;like_count&quot;:496,&quot;comment_count&quot;:22,&quot;bylines&quot;:[{&quot;id&quot;:157561,&quot;name&quot;:&quot;Derek Thompson&quot;,&quot;handle&quot;:&quot;derekthompson&quot;,&quot;previous_name&quot;:null,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!oFSS!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ed4fc85-9214-4460-a3e7-c80fca4a3c3d_872x872.png&quot;,&quot;bio&quot;:&quot;Abundance and other ideas to make the world a better place&quot;,&quot;profile_set_up_at&quot;:&quot;2021-10-25T17:19:21.553Z&quot;,&quot;reader_installed_at&quot;:&quot;2022-03-09T16:22:19.302Z&quot;,&quot;publicationUsers&quot;:[{&quot;id&quot;:2928158,&quot;user_id&quot;:157561,&quot;publication_id&quot;:2880588,&quot;role&quot;:&quot;admin&quot;,&quot;public&quot;:true,&quot;is_primary&quot;:true,&quot;publication&quot;:{&quot;id&quot;:2880588,&quot;name&quot;:&quot;Derek Thompson&quot;,&quot;subdomain&quot;:&quot;derekthompson&quot;,&quot;custom_domain&quot;:&quot;www.derekthompson.org&quot;,&quot;custom_domain_optional&quot;:false,&quot;hero_text&quot;:&quot;A newsletter about abundance and building a better world.&quot;,&quot;logo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/38b0f850-caa7-417a-bc0b-5b7224dd1f25_888x888.png&quot;,&quot;author_id&quot;:157561,&quot;primary_user_id&quot;:157561,&quot;theme_var_background_pop&quot;:&quot;#FF6719&quot;,&quot;created_at&quot;:&quot;2024-08-13T01:26:09.408Z&quot;,&quot;email_from_name&quot;:&quot;Derek Thompson&quot;,&quot;copyright&quot;:&quot;Derek Thompson&quot;,&quot;founding_plan_name&quot;:&quot;Superfan 
Tier&quot;,&quot;community_enabled&quot;:true,&quot;invite_only&quot;:false,&quot;payments_state&quot;:&quot;enabled&quot;,&quot;language&quot;:null,&quot;explicit&quot;:false,&quot;homepage_type&quot;:&quot;magaziney&quot;,&quot;is_personal_mode&quot;:false,&quot;logo_url_wide&quot;:null}}],&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:1000,&quot;status&quot;:{&quot;bestsellerTier&quot;:1000,&quot;subscriberTier&quot;:1,&quot;leaderboard&quot;:null,&quot;vip&quot;:false,&quot;badge&quot;:{&quot;type&quot;:&quot;bestseller&quot;,&quot;tier&quot;:1000},&quot;paidPublicationIds&quot;:[159185,656797],&quot;subscriber&quot;:null}}],&quot;utm_campaign&quot;:null,&quot;belowTheFold&quot;:true,&quot;type&quot;:&quot;newsletter&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="EmbeddedPostToDOM"><a class="embedded-post" native="true" href="https://www.derekthompson.org/p/is-this-the-new-scariest-chart-in?utm_source=substack&amp;utm_campaign=post_embed&amp;utm_medium=web"><div class="embedded-post-header"><img class="embedded-post-publication-logo" src="https://substackcdn.com/image/fetch/$s_!uPIO!,w_56,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F38b0f850-caa7-417a-bc0b-5b7224dd1f25_888x888.png" loading="lazy"><span class="embedded-post-publication-name">Derek Thompson</span></div><div class="embedded-post-title-wrapper"><div class="embedded-post-title">Is This the New &#8216;Scariest Chart in the World&#8217;?</div></div><div class="embedded-post-body">In the last few days, I&#8217;ve seen the following chart bounce around my corner of the Internet, often with some commentary declaring it the scariest chart in the world. 
The graph seems to show that the release of ChatGPT and the ensuing AI boom cracked the US economy in two, crushing the workforce while lifting the stock market&#8230;</div><div class="embedded-post-cta-wrapper"><span class="embedded-post-cta">Read more</span></div><div class="embedded-post-meta">6 months ago &#183; 496 likes &#183; 22 comments &#183; Derek Thompson</div></a></div><p>While the data points themselves are factually accurate, the suggested correlation is a classic case of confusing coincidence with causality. As a piece of media narrative, it is highly persuasive; as an economic argument, it is entirely unsupportable, especially considering the downward trend in job openings clearly begins <em>before</em> the AI tool was even available to the public.</p><p>It is entirely possible that some executives are acting on the &#8216;vibes&#8217; of AI, restructuring their companies in anticipation of future gains we can&#8217;t yet see in the data. But in the present moment, those &#8216;vibes&#8217; serve as the perfect smokescreen. Whether an executive is genuinely anticipating an AI revolution or simply needs to fix a bloated balance sheet, the public narrative sounds exactly the same. In reality, managers claiming AI-driven layoffs rarely have the data to back it up; rather, they have ulterior motives for cutting staff that they prefer to keep hidden.</p><p>Hyperscalers and major tech firms do not want to discuss how their cash flow is being squeezed by the massive capital expenditures required to build data centers and hoard Nvidia chips. Highlighting that reality invites investor scrutiny regarding the ultimate return on those investments. It is much easier to feed the market a narrative of &#8220;AI efficiency.&#8221;</p><p>Then you have executives who simply mismanaged their companies and need a convenient scapegoat for the necessary corrections. 
Add to this the universal motivation of any executive looking for a short-term stock boost: announcing headcount reductions under the guise of &#8220;doing more with less&#8221; is a brilliant Wall Street narrative. <a href="https://www.latimes.com/business/story/2026-03-02/ai-washing-how-companies-like-block-may-use-ai-as-layoff-excuse">This practice of &#8220;AI-washing&#8221; layoffs&#8212;as seen with Jack Dorsey&#8217;s recent cuts at Block</a>&#8212;avoids calamitous explanations like &#8220;we are losing customers&#8221; or &#8220;we are running out of money,&#8221; and actively excites investors in the short term, even if the cuts hollow out the company&#8217;s long-term capabilities.</p><p>Whether a company can actually do more with less will be tested in the future, not the present. The popular narrative assumes that newfound efficiency naturally dictates layoffs, but for a healthy company with opportunities to grow, turning efficiency into layoffs is hugely damaging. If a company suddenly needs fewer resources to maintain its current output, the logical move is to redeploy those resources to capture more market share or build new products. Layoffs generally only make sense if a company is correcting a past mistake (like rampant over-hiring) or if it has exhausted its growth options.</p><p>This reality exposes two fundamental flaws in the current public discourse. First, extrapolating the actions of these shrinking companies to the entire economy leaves no room for the story of companies that will use AI to expand. Second, it means these opportunistic, short-term cuts will eventually have to be reversed for companies that actually <em>do</em> have future potential. If they cut too deep today, service quality will degrade, feature releases will slow down, and competitors will steal market share. 
Eventually, they will be forced to reverse course and re-hire to regain their footing&#8212;having needlessly sacrificed their growth momentum for a temporary Wall Street bump. By then, however, the executive will likely have kept their job, exercised their stock options, and enjoyed the boost the AI narrative provided.</p><p>Another genre of article driving the public narrative simply repeats these executives&#8217; statements verbatim, without examining the underlying data or considering the obvious financial incentives for executives to spin bad news (over-hiring or cash flow issues) into a forward-looking story of AI-driven efficiency.</p><h3><strong>The Delusion of Immediate Efficiency</strong></h3><p>Accepting these narratives uncritically builds a dangerous delusion: that AI has already unlocked massive efficiency gains, that the best use of those gains is shrinking the labor force, and that companies failing to do so are falling behind.</p><p>In reality, while AI has added efficiencies in specific pockets, we are mostly still in the learning and adoption phase. Any hours saved are frequently counterbalanced by training, integration, and implementation costs. Where true, systemic efficiency has been achieved, it is very recent and far from pervasive.</p><p>Because this shift is so nascent, we don&#8217;t yet have stories of mature firms using AI to successfully expand. Aside from new startups or companies explicitly selling AI infrastructure, the narrative is entirely dominated by contractionary stories&#8212;which, as established, are largely misdirection. Accepting these false contractionary tales severely distorts our perception of what the technology is actually doing to the economy.</p><h3><strong>Non-Linear Transitions</strong></h3><p>Furthermore, we must remember that stories about the impact of a specific technology are not comprehensive stories about the entire economy. 
If employment in a field like insurance claims processing genuinely contracts due to AI, the compensating job expansion will not necessarily be AI-related at all.</p><p>Freed-up labor might allow housing construction to expand, making homes more accessible and lowering costs. While expanding the housing industry requires more than just available labor (like zoning reform or lower interest rates), if those external limits are removed, the economy will naturally funnel available labor toward that unmet demand. This kind of non-linear adjustment is exactly what dynamic economies do. It is rarely instantaneous, and it is never without friction, but in its default state, the economy makes the adjustment.</p><p>A common objection here is the &#8220;skills mismatch&#8221;&#8212;the idea that a laid-off insurance claims processor isn&#8217;t going to suddenly start swinging a hammer. But an economy does not rely on perfect one-to-one transitions. Labor markets are highly dynamic, and indirect shifts do most of the heavy lifting. While some claims processors might actually enjoy learning a trade or already have prior experience in one, it is far more likely that their existing skills shift adjacently. An insurance adjuster might transition to a construction firm as a project manager, which in turn frees up the multitasking owner to spend more time actually building.</p><p>Even if only a small proportion of the labor force makes these types of lateral moves, the cascading effect absorbs vast amounts of economic change when all the different pathways are added up.</p><h3><strong>Conclusion</strong></h3><p>Ultimately, the story about the future of jobs is incomplete without an understanding of this economic dynamism. 
AI is a profound technological shift, and when the really significant changes it promises do start happening, that dynamic adjustment process will become more obvious.</p><p>But right now, separating the noise of the current moment from the signal of long-term economic behavior is crucial. If we believe media narratives driven by ulterior corporate motives, we will confuse ourselves with expectations that are neither complete nor correct. We must remember that AI is just a tool; the economy is the engine. As long as human desires remain unmet, the economic engine will continue to do what it has always done: take available labor and put it to work.</p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;85cd96ec-ce86-4b23-ac4b-88e3a00200e0&quot;,&quot;caption&quot;:&quot;Beyond Observed AI Exposure&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;lg&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;AI Jobs: The Hidden Rules of Demand&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:61710810,&quot;name&quot;:&quot;Ryan 
Baker&quot;,&quot;bio&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2376ff1a-8f8b-4e42-b164-1855d9e7999b_140x105.png&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2026-03-16T12:03:39.491Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!RrL0!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F520fde3e-dde5-437e-aaf5-9d7f457179f6_2048x1118.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://substack.norabble.com/p/ai-jobs-the-hidden-rules-of-demand&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:190836245,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:2,&quot;comment_count&quot;:0,&quot;publication_id&quot;:1642290,&quot;publication_name&quot;:&quot;norabble&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!_1Oy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97750d25-7e33-4ebe-87af-6f4b3d0e4138_348x348.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://substack.norabble.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">norabble is a reader-supported publication. 
To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Deployments Can't Wait]]></title><description><![CDATA[Why AI Threats Demand a Deployment Revolution]]></description><link>https://substack.norabble.com/p/deployments-cant-wait</link><guid isPermaLink="false">https://substack.norabble.com/p/deployments-cant-wait</guid><dc:creator><![CDATA[Ryan Baker]]></dc:creator><pubDate>Mon, 23 Mar 2026 11:50:42 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/2d95d729-a5d3-4f42-9d39-bf371396315c_2812x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In the broader discourse on artificial intelligence, the sharpest minds in AI safety are currently looking to the horizon. They are focused on existential, cinematic threats: the potential for AI-generated bioweapons, nuclear command vulnerabilities, and autonomous warfare.</p><p>While these are undeniably critical issues, this focus has created a strategic void. The AI industry is aware of enterprise cybersecurity, and they are actively building tools to address it. However, the problem is being approached tactically, rather than strategically. 
Because the industry is not treating the defense of our digital infrastructure as a core, existential mission, a cohesive, industry-wide narrative has failed to materialize.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://substack.norabble.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">norabble is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>The hard truth for technology executives&#8212;CTOs, CISOs, and business leaders driving technology strategy&#8212;is this: the AI cavalry isn&#8217;t coming. At best, you can hope that the AI and security industries will sell you tools. But for gaps that aren&#8217;t tool-shaped, it&#8217;s up to IT organizations to make this a strategic priority.</p><p>As I argued in <em><a href="https://substack.norabble.com/p/security-cant-wait">Security Can&#8217;t Wait</a></em>, advances in AI are drastically accelerating the attacker-defender cycle. Threat actors are already utilizing AI to automate vulnerability discovery and weaponize exploits at unprecedented speeds. Without an equally aggressive response, the segments of our defense lifecycle that remain manual and sluggish will fall hopelessly behind, handing attackers a permanent, dangerous advantage.</p><p>And right now, the weakest, most sluggish point of the defense lifecycle isn&#8217;t vulnerability identification. 
It is deployment.</p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;cf7ed7fe-aaa2-436f-ab1e-285004973223&quot;,&quot;caption&quot;:&quot;Right now, Artificial Intelligence is fundamentally rewriting the rules of cybersecurity&#8212;and we do not have the luxury of waiting before taking action.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;lg&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;Security Can&#8217;t Wait&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:61710810,&quot;name&quot;:&quot;Ryan Baker&quot;,&quot;bio&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2376ff1a-8f8b-4e42-b164-1855d9e7999b_140x105.png&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2026-03-05T21:05:09.345Z&quot;,&quot;cover_image&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7b2a65ed-e701-4f36-8d82-2a665189419b_2816x1536.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://substack.norabble.com/p/security-cant-wait&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:190039490,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:4,&quot;comment_count&quot;:1,&quot;publication_id&quot;:1642290,&quot;publication_name&quot;:&quot;norabble&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!_1Oy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97750d25-7e33-4ebe-87af-6f4b3d0e4138_348x348.png&quot;,&quot;belowTheFold&quot;:false,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><h3><strong>Untangling the Past</strong></h3><p>The ability to deploy quickly is an element of great variance across the industry. 
Those variances have always mattered, but the acceleration of AI-driven threats makes them a crux point. It&#8217;s tempting to assume this variance is simply the fault of the organizations that lag behind. But not only is that unhelpful, it&#8217;s also untrue. History often offers a better explanation, with only a moderate amount of fault left to place on the organizations suffering the ill effects. In my experience leading organizations through such changes, I find it&#8217;s best to leave that be and move on.</p><p>You might be unwilling to do so without understanding that past, so it helps to examine it. Additionally, even if you are ready to move forward, your organization may not be able to unless you can explain it to them. Some members of an organization lived through previous efforts to change, bear scars, and understand the reality. Others may have joined more recently and fail to understand why things are the way they are. A shared understanding is critical for an organization to work as one.</p><p>The most recent crux point for deployment was the adoption of &#8220;DevOps,&#8221; &#8220;Continuous Integration&#8221; (CI), and &#8220;Continuous Deployment&#8221; (CD). These paradigms are real, and their value is immense. Understanding them, however, is often clouded by layers of marketing jargon that have saturated software development for the last decade.</p><p>Make no mistake: the advent of DevOps, CI, and CD has been incredibly important. Even half-implementations, aligned with marketing that sold success before completion, have moved the needle. And the organizations that implemented them fully are now industry leaders in far more than just technology.</p><p>To appreciate why these changes left scars&#8212;and why implementations varied so wildly&#8212;we must look at the mechanical baseline they aimed to improve. Historically, software development and IT operations were strictly isolated. 
Development teams created software, generally working independently for months or even years, before handing the code off to the operations team to support and run in production. Because these teams had opposing incentives&#8212;developers were measured by feature delivery (progress), while operations were measured by system stability&#8212;introducing change was treated as an inherent threat. As a result, deployments were often massive, infrequent, and high-risk events.</p><p>DevOps emerged as a pragmatic and cultural approach to resolve this dysfunction. At its core, DevOps isn&#8217;t just a set of tools; it is a commitment to teamwork, communication, and shared goals. In its full realization, it requires unifying leadership to keep the two disciplines from pulling apart and devolving into political, rather than technical, management.</p><h3><strong>The Mechanics of Modernization</strong></h3><p>To support this cultural shift, the industry developed specific pipeline tooling designed to automate away the friction and reduce the stress that leads to organizational divergence:</p><ul><li><p><strong>Automated Builds:</strong> In software development, code changes must be &#8220;packaged&#8221; into a build. Depending on the platform, this involves compiling human-readable code into machine-readable formats, resolving third-party dependencies, and packaging it into a deployable format.</p></li><li><p><strong>Validation and Testing:</strong> Beyond just compiling, a mature pipeline validates the code&#8217;s quality and executes automated tests. To make testing efficient, engineers test the smallest possible units of code (unit tests). This limits the scope of failures and uses less compute time. Inadequate testing can cause a pipeline that otherwise looks complete to produce poor results. 
Errors that reach production cause costly rollbacks, and the fear of repeating those errors slows everything else down.</p></li><li><p><strong>Continuous Integration (CI):</strong> Integration is the process of reconciling the simultaneous contributions of multiple developers into a cohesive system. CI extends the build process by making this integration a frequent, if not constant, event. By merging developers&#8217; working copies several times a day, the complexity and risk associated with a final, massive merge are dramatically reduced. In the context of security, CI serves as a crucial enforcement point for the unified system. It is here that dependencies from multiple contributors are brought together, making it the primary stage for running deep, automated scanning tools against the combined application.</p></li><li><p><strong>Automated Deployments (CD):</strong> Once integrated, software cannot simply be pushed to users; safety constraints require it to be deployed to isolated test environments first. A true pipeline requires test environments that accurately simulate production. However, creating and supporting these duplicate environments is highly complex and the costs often become prohibitive.</p></li></ul><p>Together, the premise of these mechanics was straightforward: mitigate risk by moving faster with tiny, highly automated, and easily reversible changes caught early by continuous feedback loops.</p><h3><strong>Deployment Divergence</strong></h3><p>However, as these concepts gained mainstream traction, a clear divergence emerged across the industry. It is tempting to think of organizations making the same technological choices simply by nature of being in the same industry&#8212;surely all banks are similarly modernized? In reality, there are significant deviations even within the same sectors. 
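</p><p>The gating premise described above (mitigate risk by moving faster with tiny, automated, easily reversible changes) lends itself to a small illustration. The sketch below is not any particular CI product; it is a toy Python model of the idea that every pipeline stage must pass before the next one runs:</p>

```python
# Toy model of a gated build -> test -> deploy pipeline.
# All names are illustrative; real CI systems add far more.

from dataclasses import dataclass, field

@dataclass
class Pipeline:
    log: list = field(default_factory=list)

    def build(self) -> bool:
        # compile, resolve dependencies, package a deployable artifact
        self.log.append("build")
        return True

    def test(self) -> bool:
        # run the unit-test suite; small scope keeps feedback fast
        self.log.append("test")
        return True

    def deploy(self) -> str:
        self.log.append("deploy")
        return "deployed"

    def run(self) -> str:
        # each stage gates the next; a failure stops the run early,
        # which is what makes tiny, frequent changes safe to ship
        if not self.build():
            return "build failed"
        if not self.test():
            return "tests failed"
        return self.deploy()

p = Pipeline()
result = p.run()
```

<p>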
These divergences are shaped heavily by a company&#8217;s specific history: when it was formed, or when it attempted a prior wave of modernization.</p><p>Generally, organizations followed one of three paths:</p><p><strong>1. True Adoption:</strong> Many organizations successfully navigated this transformation. They did the hard work of aligning incentives under unified leadership and invested in comprehensive test environments, proving that modern, automated deployment is a highly achievable goal when backed by genuine commitment.</p><p><strong>2. Watered-Down Adoption:</strong> Driven by vendor sales cycles and a management desire for painless wins, many organizations adopted the terminology without the substance. The genuinely far-reaching concepts were distorted to justify incremental tool purchases. Crucial but non-mandatory steps&#8212;like rigorous unit testing or maintaining accurate test environments&#8212;were skipped or done poorly in the name of expediency. Without true CI, integration remained sporadic. Teams bought the tools and declared victory, but failed to fundamentally change their deployment process or speed.</p><p><strong>3. Stalled Implementation:</strong> Other organizations simply struggled to gain momentum at all, weighed down by the sheer complexity and cost of entrenched legacy systems, such as monolithic applications and mainframes, which are notoriously difficult to integrate into modern CI/CD pipelines.</p><p>Why did so many organizations fall into the latter two camps? The root causes are deeply embedded in organizational dynamics. For years, technology teams have been caught in a tug-of-war between competing priorities. There is an unrelenting push to deliver short-term wins and new features, which inevitably drives the accumulation of technical debt. 
This is compounded by coordination issues between siloed teams, cost-cutting mandates, and general corporate politics.</p><p>The result of this divergence is that while excellent pipelines certainly exist, a significant portion of enterprises still grapple with brittle, sporadic deployment processes. They have automated the easy parts (like compiling) but left the hard parts (comprehensive testing and security scanning) as manual roadblocks. Without continuous, reliable feedback, deployments are batched, delayed, and risky.</p><p>This isn&#8217;t an indictment of current leadership; it is simply a realistic accounting of the accumulated friction of technical debt and conflicting priorities. But it is a reality we must acknowledge before we can move forward.</p><p><em><strong>Clouded Perceptions:</strong> Restarting from Stalled and Watered-Down Adoptions takes additional effort to rebuild momentum because of terminology drift. It&#8217;s too easy to assume a shared commitment, only to discover it masks different expectations. While you can&#8217;t erase the effect of the past, you can take the extra effort to communicate what is meant at each opportunity.</em></p><h3><strong>The Widening Gap and the Irony of Regulation</strong></h3><p>When an organization&#8217;s deployment pipeline is insufficient, the time it takes to patch a newly discovered vulnerability stretches from hours to weeks or months. Attackers face delays too, but counting on those delays&#8212;delays AI may soon erase&#8212;is a gamble.</p><p>We often look to regulation to force improvements in these areas, hoping compliance mandates will motivate continuous improvement. But here lies a painful irony: for the organization that has already fallen behind, regulation often creates <em>extra</em> friction. It introduces new audit gates and reporting requirements that further slow down the deployment process. 
Until the pressure is redirected toward a truly dramatic overhaul&#8212;with all the costs and commitment that entails&#8212;the effect of regulation is to slow defenders, leaving a wider gap attackers can exploit.</p><h3><strong>Assessing the Battlefield and Avoiding the Blame Game</strong></h3><p>If the mandate is to unblock these pipelines, technology executives must first assess their own relationship to the organization before demanding changes. Are you a new leader brought in with an explicit mandate to improve? Are you an established leader leveraging newly acquired influence? Or are you new to an organization where continuity, rather than disruption, was the stated goal?</p><p>Understanding this positioning is critical because diagnosing a lagging deployment pipeline often delivers bad news to teams who believe they are already doing their best. If delivered poorly, it forces the organization into a &#8220;fight or flight&#8221; response.</p><p>Crucially, executives must actively suppress the &#8220;blame game.&#8221; Blame is a destructive concept when fixing technical debt. Technical systems do not care who is at fault; they will succeed or fail independently. Seeking blame causes internal information sharing to become strategic and self-preserving, rather than solution-oriented. While identifying failures is necessary for strategic leadership changes, day-to-day technical modernization requires actively discouraging the blame game so teams can focus entirely on the fix.</p><h3><strong>Turning AI Inward</strong></h3><p>If current pipelines are too encumbered by historical debt to move at the speed of modern threats, they need priority. Yet, that priority is lacking and must be built. The AI and security industries are offering tools, but not implementation.</p><p>Technology-focused executives must take the driver&#8217;s seat. The DevOps playbook is well documented. But so are the impediments. New efforts and commitments are difficult. 
Past failures create inertia that must be overcome.</p><p>Tools can&#8217;t solve this alone. What they can do is modify the impediments that held back implementation in the past. Those modifications create a compelling narrative to overcome inertia, and to start new efforts and commitments to modernize deployment pipelines.</p><p>Consider the new opportunities AI creates to make modernization faster and more effective:</p><h4><strong>The Testing Burden</strong></h4><p>A robust deployment pipeline requires comprehensive automated testing, but developers notoriously loathe writing and maintaining tests. AI fundamentally changes this dynamic. If you have no tests, AI can scale up baseline coverage rapidly. If you have some tests, AI can identify and fill the gaps. More importantly, AI can monitor existing tests for brittleness, automatically suggesting refactoring or updates when underlying code changes. By removing the maintenance overhead, AI removes a primary excuse for failing pipelines.</p><h4><strong>Accelerating Legacy Transformation</strong></h4><p>Many deployment bottlenecks are rooted in legacy systems&#8212;like mainframes and monolithic applications&#8212;that were previously deemed too complex, expensive, or risky to modernize. AI transformation software is changing this calculus.</p><p>One methodology here is to reverse engineer specifications from an existing codebase. A significant challenge in modernizing any legacy system is understanding how that system should behave. There may be documentation, but it very likely has drifted and accumulated inaccuracies that would undermine a transformation. A reverse engineering process is unlikely to be hands-free, but AI and human operators complement each other, which should make it possible to reverse engineer any existing codebase sufficiently to perform a quality transformation.</p><p>Testing comes into focus here again. Tests can be generated and employed on both the old and new source. 
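</p><p>A sketch of that idea in Python: characterization tests run identical inputs through the old and the new source and flag any divergence. Both functions below are invented stand-ins for real legacy and modernized code:</p>

```python
# Hypothetical characterization test for a legacy transformation.
# legacy_premium is the behavior recovered from the old codebase;
# modern_premium is the rewrite that must reproduce it exactly.

def legacy_premium(age: int, claims: int) -> float:
    # decades-old business rule, treated as the source of truth
    rate = 500.0 + 25.0 * claims
    if age < 25:
        rate *= 1.5
    return round(rate, 2)

def modern_premium(age: int, claims: int) -> float:
    # modernized implementation under test
    surcharge = 1.5 if age < 25 else 1.0
    return round((500.0 + 25.0 * claims) * surcharge, 2)

# generated cases exercise both sources; any mismatch flags a
# transformation error before it ever reaches deployment
cases = [(22, 0), (22, 3), (40, 0), (40, 5), (25, 1)]
mismatches = [c for c in cases
              if legacy_premium(*c) != modern_premium(*c)]
```

<p>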
Specifications assist in test generation. Tests assist in mechanical code transformation for core functions and methods. Tests and specifications also assist in larger-scale structural changes. Transformation strategies have traditionally relied on both approaches, preferring small incremental updates when practical and resorting to larger-scale rewrites strategically.</p><p>AI-driven transformation tools not only reduce the effort of these steps, but also improve accuracy and the probability of success.</p><p>Resources:</p><ul><li><p><a href="https://www.gartner.com/reviews/market/ai-augmented-code-modernization-tools">Gartner AI Augmented Code Modernization Tool Peer Insights</a></p></li></ul><h4><strong>Shift-Left Security</strong></h4><p>AI-powered static analysis can be integrated directly into the developer workflow. This ensures that the code (and the AI-generated tests themselves) adhere to established security and quality standards <em>before</em> they ever reach the integration phase. Not only can this help avoid introducing new security issues, but it can also raise confidence in the process of deploying fixes for both known and newly discovered issues.</p><p>The quality of the tools you can integrate may be influenced by how modern and mainstream other parts of the stack are. COBOL and FORTRAN code won&#8217;t have the same level of support as Rust, Python, TypeScript, .NET, C or C++ code. 
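</p><p>As a caricature of the older baseline (simply flagging suspicious patterns), a pre-integration gate might look like the sketch below; the rules, messages, and sample lines are invented for illustration:</p>

```python
# Illustrative "shift-left" gate: scan changed lines for obviously
# dangerous patterns before they reach the integration phase.
# The rule set is a toy; real SAST tools ship thousands of checks.

import re

RULES = {
    r"\beval\(": "avoid eval() on untrusted input",
    r"password\s*=\s*[\"']": "hardcoded credential",
}

def scan(changed_lines):
    findings = []
    for lineno, line in enumerate(changed_lines, start=1):
        for pattern, message in RULES.items():
            if re.search(pattern, line):
                findings.append((lineno, message))
    return findings

findings = scan([
    'user = request.args["u"]',
    'password = "hunter2"',
])
```

<p>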
While static analysis tools have existed for some time, the most developed tools in this space have evolved past simply flagging potential errors; they now utilize AI to drastically reduce false positives, understand the context of the codebase, and suggest specific, workable auto-fixes.</p><p>Resources:</p><ul><li><p><a href="https://www.gartner.com/reviews/market/application-security-testing">Gartner Application Security Testing Peer Insights</a></p></li><li><p><a href="https://www.g2.com/categories/static-application-security-testing-sast">G2 Best Static Application Security Testing Software</a></p></li></ul><h4><strong>Securing the Pipeline</strong></h4><p>As pipelines become the engine of the enterprise, they become prime targets for attackers. Implementing highly effective but difficult security practices&#8212;such as least-privilege access for the pipeline itself&#8212;is complex to manage manually.</p><p>How this is done will depend on where your pipeline is implemented. AI tools can analyze code for access requirements, sparing administrators from guessing developers&#8217; requirements. AI and conventional tools can analyze deployment patterns to determine used and unused privileges, creating a signal for where to limit privileges.</p><p>Resources:</p><ul><li><p><a href="https://cloudsecurityalliance.org/blog/2025/09/22/do-your-ci-cd-pipelines-need-identities-yes">Do Your CI/CD Pipelines Need Identities? Yes.</a> (Cloud Security Alliance, 2025)</p></li></ul><h4><strong>Disrupting Active Exploitation: An Essential Stopgap</strong></h4><p>While modernizing the deployment pipeline is the ultimate cure, technology executives must manage the immediate reality: vulnerabilities <em>will</em> exist in production while fixes navigate a sluggish pipeline. It would be irresponsible to omit AI&#8217;s capability as an ameliorative control during this window. 
AI-driven behavioral analytics and dynamic anomaly detection can be deployed defensively to disrupt the control and exploitation phases of an attack in real time. By identifying and isolating threat actors attempting to leverage unpatched systems, these tools buy the organization the critical time needed for pipeline improvements to take effect.</p><h3><strong>Implementation</strong></h3><p>AI tooling isn&#8217;t enough in the same way that DevOps tooling wasn&#8217;t enough. A plan is necessary, and that plan must engage with the culture of your organization. What type of modernization is needed? Why hasn&#8217;t it happened already? Will it require a full-scale transformation (mainframes/monoliths)? Or is it a matter of completing a watered-down adoption?</p><p>There are good sources on DevOps adoption (e.g., <a href="https://www.oreilly.com/library/view/the-devops-handbook/9781457191381/toc.xhtml">The DevOps Handbook</a>), so I won&#8217;t try to repeat them in their entirety. Committing to completing adoption, and taking advantage of new opportunities that shorten or de-risk challenging aspects, is how to create your plan.</p><h3><strong>Conclusion: The Call to Arms</strong></h3><p>The acceleration of the cyber battlefield is a reality. The mandate for technology executives is clear: we must stop viewing AI solely as a threat to be mitigated or a product to be purchased, and start wielding it as an operational imperative. Accelerating our defenses requires accelerating our deployments. The tools are in our hands; it is time to use them.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://substack.norabble.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">norabble is a reader-supported publication. 
To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[AI Jobs: The Hidden Rules of Demand]]></title><description><![CDATA[Predicting the future of work using Bounded, Unbounded, and Adversarial demand]]></description><link>https://substack.norabble.com/p/ai-jobs-the-hidden-rules-of-demand</link><guid isPermaLink="false">https://substack.norabble.com/p/ai-jobs-the-hidden-rules-of-demand</guid><dc:creator><![CDATA[Ryan Baker]]></dc:creator><pubDate>Mon, 16 Mar 2026 12:03:39 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!RrL0!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F520fde3e-dde5-437e-aaf5-9d7f457179f6_2048x1118.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2><strong>Beyond Observed AI Exposure</strong></h2><p><a href="https://www.anthropic.com/research/labor-market-impacts">Anthropic&#8217;s recent labor market analysis</a> has improved understanding by <a href="https://www.anthropic.com/research/labor-market-impacts#a-new-measure-of-occupational-exposure-">analyzing &#8220;observed exposure&#8221;</a>&#8212;shifting from theoretical feasibility to measuring how AI is actually being used across different occupations. This is a crucial step in understanding AI&#8217;s real-world footprint. However, a core assumption remains: if a task can be done twice as fast by AI, the required human labor spent on that task will decrease.</p><p>I suggest a deeper framework that confronts that assumption. 
A significant reason why AI capabilities will not translate into reduced working hours is that observed exposure fails to account for the <em>dynamics of economic demand </em>for tasks. Demand for tasks is not static. As a task progresses from theoretical capability, to observed exposure, to full exposure, dynamic responses should be expected.</p><p>The impact of AI does not depend solely on whether a machine can be, and is, used effectively to do work, but on whether the demand for that work is <strong>Bounded</strong>, <strong>Unbounded</strong>, or <strong>Adversarial</strong>. 
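</p><p>As a toy illustration of the task-classification step that a quantitative extension of this framework would need, the sketch below hand-labels a few of the job examples used later in this piece. The labels are illustrative assignments, not the output of any real model.</p>

```python
# Toy sketch of the classification step a quantitative version of the
# framework would need. Labels are illustrative hand assignments drawn
# from the job examples discussed in the text.
from enum import Enum

class Demand(Enum):
    BOUNDED = "bounded"          # finite usefulness; enables no new outcomes
    UNBOUNDED = "unbounded"      # scaling completes new, positive-value outcomes
    ADVERSARIAL = "adversarial"  # scaling inflates the same zero-sum outcome

DEMAND_BY_JOB = {
    "payroll clerk": Demand.BOUNDED,
    "data entry operator": Demand.BOUNDED,
    "computer programmer": Demand.UNBOUNDED,
    "scientific researcher": Demand.UNBOUNDED,
    "lawyer": Demand.ADVERSARIAL,
    "cybersecurity analyst": Demand.ADVERSARIAL,
}

def classify(job: str) -> Demand:
    """Look up a job's demand category; unknown jobs need real analysis."""
    try:
        return DEMAND_BY_JOB[job.lower()]
    except KeyError:
        raise ValueError(f"no label for {job!r}; classify it manually")

print(classify("Lawyer").value)  # adversarial
```

<p>A real analysis would classify at the task level and weight by time spent, but even this lookup makes the framework&#8217;s categories machine-checkable.</p><p>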
How work is divided between those categories, how it&#8217;s packaged into jobs, and the dynamic interplay between them are critical to accurately predicting how AI adoption will change demand for work.</p><p><em>While I am only presenting the conceptual framework here, extending this to a quantitative analysis via task classification and applying it to datasets like the <a href="https://www.anthropic.com/economic-index">Anthropic Economic Index</a> is the logical next step.</em></p><h2><strong>Tasks, Jobs, Outcomes, and Demand Dynamics</strong></h2><p>To understand labor impacts, I must separate the elements of work into tasks, outcomes, and jobs:</p><ul><li><p><strong>Tasks</strong> are individual units of work executed to achieve a specific result.</p></li><li><p><strong>Outcomes</strong> are the overarching goals or results that a job seeks to achieve through the execution of tasks.</p></li><li><p><strong>Jobs</strong> are bundles of tasks organized and executed to deliver specific outcomes.</p></li></ul><p>To this, I also add three categories of demand:</p><ul><li><p><strong>Bounded Demand:</strong> Demand that has finite usefulness within related outcomes, and does not itself enable demand for new outcomes.</p></li><li><p><strong>Unbounded Demand:</strong> Demand with the potential for self-expansion by enabling demand for new outcomes. When scaled, efficiency completes entirely <em>new</em> outcomes with positive value. (Practically speaking, this does not require a <em>true</em> lack of boundaries, just incredibly distant ones).</p></li><li><p><strong>Adversarial Demand:</strong> A non-bounded state<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> driven by a zero-sum competition. 
When scaled, efficiency drives volume and complexity <em>within the same adversarial outcome</em>.</p></li></ul><p>These three categories of demand are most readily applied to classify tasks, but we&#8217;ll find we also think of jobs this way, based on the types of tasks they contain. Outcomes are also key here, differentiating unbounded from adversarial demand&#8212;separated by whether scaling generates new outcomes or inflates existing ones.</p><h3><strong>The Dynamics of Efficiency Reallocation</strong></h3><p>Jobs tend to have a primary outcome (e.g., producing a car, solving a medical issue, resolving legal disputes). When that overarching outcome is adversarial, the underlying adversarial tasks act as an &#8220;efficiency sink.&#8221;</p><p>When AI automates routine, bounded tasks within that job, the worker does not simply work fewer hours. Instead, the time saved is reallocated into the adversarial tasks to maximize the overarching outcome. This dynamic maintains stable human labor hours despite high overall observed AI exposure.</p><p><em>Crucially, if all adversarial tasks are fully automated, the efficiency sink effect moves from labor hours to automation (compute/API) costs. But as long as a single adversarial task is left unautomated, it retains a significant portion of the efficiency sink effect, and total labor hours remain mostly unchanged.</em></p><h2><strong>Three-Part Demand Framework</strong></h2><p>Occupational tasks can be categorized into three distinct economic demand buckets. Because jobs are organized to achieve specific outcomes, we observe task-level dynamics manifesting at the job level. When AI introduces efficiency gains, each bucket reacts differently:</p><h3><strong>Bounded (Satiated Demand)</strong></h3><ul><li><p><strong>Reallocation Dynamic:</strong> There is no new work to reallocate. Faster outcome completion means work is finished earlier. Fewer workers are needed to maintain the same pace. 
Unless an unbounded or adversarial outcome indirectly generates new work, available jobs attached to these bounded outcomes will decline.</p></li><li><p><strong>Job Examples:</strong> Payroll Clerks, Data Entry Operators, and Technical Writers. Demand for Payroll Clerks is bounded by the number of companies and workers and by the efficiency of the payroll process.</p></li></ul><h3><strong>Unbounded Utility (The Infinite Backlog)</strong></h3><ul><li><p><strong>Reallocation Dynamic:</strong> Faster production lowers costs. A backlog of demand absorbs the freed resources. Cost efficiencies sustain or expand fulfillment of potential demand. Time saved is used to produce more output, higher-quality outputs, or both.</p></li><li><p><strong>Reallocation Friction: </strong>Reallocation is not immediate. The demand backlog can become stuck on organizational, training, research, financing, or other coordination issues.</p></li><li><p><strong>Job Examples:</strong> Computer Programmers, Scientific Researchers, and Healthcare Professionals. The backlog of desirable software, scientific discoveries, and medical care is never fully satiated.</p></li></ul><h3><strong>True Adversarial (Zero-Sum Escalation)</strong></h3><ul><li><p><strong>Reallocation Dynamic:</strong> Efficiency gains are weaponized to win adversarial outcomes. Time saved is reinvested into performing the task at a higher volume or complexity to maintain an edge over an opponent, scaling effort <em>within the same outcome</em>.</p></li><li><p><strong>Reallocation Friction: </strong>Escalation between parties can take time to emerge, and can be delayed or deferred by agreement, law, or practical obstacles.</p></li><li><p><strong>Escalation Attrition:</strong> Adversarial escalation eventually hits diminishing marginal returns. If AI allows lawyers to draft 10x the claims and counterclaims, they will do so to maintain an advantage. These claims and counterclaims add little or no extra utility to the justice system. 
Whatever utility they do add comes in such fractional quantities that it would not have been pursued outside an adversarial system. Still, escalation is subject to its own attrition: eventually the next escalation not only fails to create social value, but also fails to yield <em>individual </em>value.</p></li><li><p><strong>Attrition and Friction: </strong>The combination of escalation attrition and friction is another factor in delayed reallocation. The last layers of escalation yield the least individual value, which lowers any incentive to bypass frictions created by time, happenstance, law, agreement, ethical standards, or other factors.</p></li><li><p><strong>Job Examples:</strong> Lawyers (maximizing legal strategy), Salespeople (maximizing competitive wins), Marketers (battling for attention), and Cybersecurity Analysts (offensive vs. defensive escalation).</p></li></ul><p><em>Note on Transitions:</em> Tasks and jobs can shift categories. Customer Service Representatives are currently <strong>Bounded</strong> (dealing with a finite number of human interactions). However, if AI agents drop the cost of interacting with customer service to near zero, these outcomes could transition into <strong>Adversarial</strong> territory. While a customer-obsessed company does not view legitimate customers as adversaries, an open, zero-friction channel inevitably attracts malicious actors, automated fraud rings, and algorithmic social engineering at scale. Companies will be forced to deploy defensive corporate AI to filter this malicious volume, reallocating human CSRs to investigate and manage these complex, escalated algorithmic attacks.</p><h2><strong>The AI Labor Impact Matrix: &#8220;Three Sextants and One Half&#8221;</strong></h2><p>By mapping AI Exposure (High vs. 
Low) against the 3-Part Demand Framework, I create a 3x2 matrix for describing expected labor market behavior.</p><h3><strong>The Bottom Half (The Control Group)</strong></h3><ul><li><p><strong>Low AI Exposure (across all demand types):</strong> Protected by physical friction, manual dexterity requirements, or strict regulatory roadblocks. This represents the status quo (e.g., physical trades, nursing).</p></li></ul><h3><strong>The Top Three Sextants (High AI Exposure)</strong></h3><p>The highly exposed segment of the economy splits into three distinct zones, driven by different adoption incentives:</p><p><strong>Sextant 1: The Efficiency Transition (High Exposure + Bounded)</strong></p><ul><li><p><strong>Early Influences:</strong> Early adoption is driven top-down by organizations seeking to realize the benefits of automation to reduce labor costs.</p></li><li><p><strong>Labor Impact:</strong> Measurable job displacement and hiring slowdowns. While this creates disruption for current workers, it represents an efficiency gain for the broader economy by freeing human capital from bounded tasks.</p></li><li><p><strong>Social Impact:</strong> Managing this shift requires robust social infrastructure. Social programs like unemployment insurance, retraining initiatives, and general social support must be central to navigating these impacts.</p></li></ul><p>While disruption occurs across all sextants to some degree, societal resources and attention must be most heavily directed toward transitioning workers. Without support you lose both pre-disruption stability and the productive use of freed human capital. 
There is no social value in structural optimization if it doesn&#8217;t lead to new, more productive employment.</p><p><strong>Sextant 2: The Infinite Frontier (High Exposure + Unbounded)</strong></p><ul><li><p><strong>Early Influences:</strong> Early adoption is driven by closeness to the technology industry.</p></li><li><p><strong>Labor Impact:</strong> Minimal displacement (subject to reallocation frictions), accompanied by productivity and objective output growth.</p></li></ul><p><strong>Sextant 3: The Arms Race (High Exposure + Adversarial)</strong></p><ul><li><p><strong>Early Influences:</strong> Early adoption is driven bottom-up by individuals with an aggressive, advantage-seeking demeanor, and others who are forced to adopt to survive a zero-sum game.</p></li><li><p><strong>Labor Impact:</strong> Minimal displacement (subject to reallocation frictions), task inflation, and potential for worker burnout.</p></li></ul><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!RrL0!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F520fde3e-dde5-437e-aaf5-9d7f457179f6_2048x1118.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!RrL0!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F520fde3e-dde5-437e-aaf5-9d7f457179f6_2048x1118.png 424w, https://substackcdn.com/image/fetch/$s_!RrL0!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F520fde3e-dde5-437e-aaf5-9d7f457179f6_2048x1118.png 848w, 
https://substackcdn.com/image/fetch/$s_!RrL0!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F520fde3e-dde5-437e-aaf5-9d7f457179f6_2048x1118.png 1272w, https://substackcdn.com/image/fetch/$s_!RrL0!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F520fde3e-dde5-437e-aaf5-9d7f457179f6_2048x1118.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!RrL0!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F520fde3e-dde5-437e-aaf5-9d7f457179f6_2048x1118.png" width="1456" height="795" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/520fde3e-dde5-437e-aaf5-9d7f457179f6_2048x1118.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:795,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!RrL0!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F520fde3e-dde5-437e-aaf5-9d7f457179f6_2048x1118.png 424w, https://substackcdn.com/image/fetch/$s_!RrL0!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F520fde3e-dde5-437e-aaf5-9d7f457179f6_2048x1118.png 848w, 
https://substackcdn.com/image/fetch/$s_!RrL0!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F520fde3e-dde5-437e-aaf5-9d7f457179f6_2048x1118.png 1272w, https://substackcdn.com/image/fetch/$s_!RrL0!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F520fde3e-dde5-437e-aaf5-9d7f457179f6_2048x1118.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h2><strong>Predicting Adoption Velocity: The Role of Worker Demeanor</strong></h2><p>While exposure metrics predict <em>where</em> AI 
can be used, analyzing worker demeanor helps predict <em>how fast</em> and <em>by whom</em> it will be adopted. Adoption is not solely a function of technological capability; it is deeply tied to the behavioral incentives and natural disposition of the workers in these roles.</p><ul><li><p><strong>Status Quo Bias and Corporate Mandates (Top-Down):</strong> In Bounded roles, workers have little incentive to adopt a tool that finishes their workload faster, as doing so threatens their job security. Consequently, organic worker adoption in this sextant is heavily muted. AI integration here is almost entirely <em>top-down</em>, driven by management seeking cost reductions.</p></li><li><p><strong>Curiosity and Tech-Affinity (Bottom-Up):</strong> In Unbounded roles, early adoption has been driven by domain proximity and natural curiosity. Workers in these fields inherently value systems that reduce friction to build, solve, and create more efficiently. <em>Top-down</em> influences are secondary, but combined with this bottom-up energy, they create the fastest adoption.</p></li><li><p><strong>Advantage-Seeking Demeanor (Bottom-Up):</strong> In Adversarial jobs, workers are structurally incentivized to seek an edge. They actively test and implement AI independently because failing to do so means losing a deal or a case. Adoption is organic and aggressive. <em>Top-down</em> influences are secondary, and more about approval than directives.</p></li></ul><p><strong>The Prediction:</strong> AI tooling will proliferate fastest and most smoothly in Unbounded and Adversarial jobs (Sextants 2 and 3), driven by eager, self-motivated workers. 
In contrast, Bounded jobs (Sextant 1) will experience a delayed adoption curve, followed by an abrupt, disruptive shock as corporate mandates are enforced.</p><h2><strong>Societal Impact: Disruption and Outcome Completion</strong></h2><p>From a macroeconomic and social perspective, society will primarily be concerned with two consequences of this framework: the friction of job disruption and the new value generated from outcome completion.</p><h3><strong>The Reality of Job Disruption and Reallocation</strong></h3><p>Job disruption will occur most acutely within highly exposed, bounded jobs. It is important to clarify that absolute &#8220;zero displacement&#8221; in Unbounded and Adversarial jobs is merely a theoretical equilibrium. In reality, frictions in reallocating time and learning new AI-augmented workflows cause <em>some</em> temporary displacement in those jobs as well. The key difference is that in Unbounded and Adversarial jobs, the final equilibrium is resistant to durable reduction.</p><p>Furthermore, as acute displacement occurs in the Bounded sextant, the freed human capital will likely reallocate toward Adversarial or Unbounded work, shifting the composition of the broader labor market.</p><h3><strong>Value Creation and Outcome Completion</strong></h3><p>For society to realize net-new value from AI, it must look primarily to the unbounded domain. This is where efficiency translates directly into outcome completion rather than zero-sum escalation. While early tech discourse heavily features Computer Programmers, the most profound societal benefits will emerge from fields like Science and Healthcare. The demand for scientific discovery, novel treatments, and improved patient care is effectively infinite. Scaling these tasks generates profound, concrete new outcomes for human well-being.</p><p>Conversely, AI usage in adversarial jobs, directed toward adversarial outcomes, is largely irrelevant to net-new social value. 
Because it consumes resources that might have been put to better use in another domain, <a href="https://substack.norabble.com/p/ai-and-the-zero-sum-game">adversarial adoption can sometimes be a net negative</a>, though it often balances out to neutral or slightly positive. For starters, the existing use of human labor was a drag of its own.</p><p>Additionally, deeper task completion within adversarial roles can result in slightly positive effects, but this is far from guaranteed. Adversarial roles are usually composed of a very valuable social good (justice, for example), with an adversarial layer on top. The most significant realization of that social good comes at the first layers of engagement, and at most incrementally improves with more engagement. Even this is not a guaranteed conclusion, though: there is no inherent reason additional investment cannot become extractive while failing to produce more social goods.</p><p>Critically, good planning around critical systems (like finance and law) can improve their social outputs and keep their adversarial aspects from amounting to net losses, but the opposite can be said of poor planning.</p><h3><strong>The Complex Social Dynamic of Art</strong></h3><p>Society will have a more complex relationship with certain adversarial domains, most notably the Arts. While the economic dynamic of art is highly adversarial&#8212;creators are engaged in a zero-sum competition for finite human attention&#8212;society does not view this escalation in the same way it views the &#8220;deadweight loss&#8221; of legal paperwork. The resulting explosion of media, storytelling, and design may be born from competitive escalation, but it yields cultural artifacts that society inherently values and consumes differently than pure corporate friction.</p><p>Crucially, art is often valued not simply for its outputs, but for its process. 
In a sense, you could attribute the same to any other field where the participants care about their own work, but society has always given a special place to art in this way.</p><p>For those reasons, it&#8217;s reasonable not to expect the domain of art to follow all the same dynamics as other adversarial domains. That said, it is clearly an early adopter, like other adversarial domains.</p><h3><strong>About Real-World Tests</strong></h3><p>The world is looking at real-world jobs data, trying to find confirmation of early predictions about AI and labor. It is not wrong to look, but the expectation of finding confirmation here is unrealistic. Jobs data itself is messy, takes time to become accurate, and has been subject to other recent large-scale influences.</p><p>Anthropic goes so far as to <a href="https://www.anthropic.com/research/labor-market-impacts#how-exposure-tracks-with-projected-job-growth-and-worker-characteristics">compare a prediction to another prediction</a>, in search of such confirmation. In their defense, they are clearly <a href="https://www.anthropic.com/research/labor-market-impacts#how-exposure-tracks-with-projected-job-growth-and-worker-characteristics#counterfactuals-">aware of the risks</a> there and don&#8217;t tout their results heavily.</p><p>I look forward to putting my predictions to the test, and seeing the results of other tests. But I would also continue to suggest that we should expect null results and should not force data that only supports a null result into a definitive conclusion about AI&#8217;s ultimate labor impact.</p><h2><strong>Conclusion</strong></h2><p>When macroeconomic studies aggregate these three top sextants into a single &#8220;Highly Exposed&#8221; bucket, the stable employment driven by the Arms Race and the Infinite Frontier completely masks the real, acute job losses occurring within the Efficiency Transition. By evaluating occupations through the primary lens of Bounded vs. 
Non-Bounded (Unbounded/Adversarial) demand, we can isolate the exact sectors where AI will cause job displacement versus where it will merely fuel outcome generation or task escalation.</p><p></p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;384151d5-cd94-4a77-b0c6-7dff01f30fa7&quot;,&quot;caption&quot;:&quot;AI is advancing quickly, and if there&#8217;s any one consensus about it, it is that it will have broad impacts on jobs. What impact, is an area of more debate, but it&#8217;s uncommon to view it as non-impactful. Some believe that jobs will disappear, and there would be large amounts of unemployment. Some draw on past periods of technological change, such as the Industrial Revolution or the advent of the internet, and believe that advances ultimately lead to new jobs that didn&#8217;t previously exist.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;lg&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;AI and the Zero-Sum Game&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:61710810,&quot;name&quot;:&quot;Ryan 
Baker&quot;,&quot;bio&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2376ff1a-8f8b-4e42-b164-1855d9e7999b_140x105.png&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2025-03-30T16:15:53.873Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!3lXS!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F75bdff8e-8e0a-461c-99ae-df41fd06ab63_1024x608.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://substack.norabble.com/p/ai-and-the-zero-sum-game&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:160183122,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:1,&quot;comment_count&quot;:2,&quot;publication_id&quot;:1642290,&quot;publication_name&quot;:&quot;norabble&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!_1Oy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97750d25-7e33-4ebe-87af-6f4b3d0e4138_348x348.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Like unbounded demand, adversarial activities might have true boundaries. In both cases, the important detail from the framework is if they are currently at their limits, or those are distant. That might make us worry about near term transitions, but practically speaking those are rare. 
Most tasks either have distant boundaries or are already maintaining an equilibrium against their boundaries.</p><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[Security Can’t Wait]]></title><description><![CDATA[The Mandatory AI Driven Security Upgrade for a Safer Future]]></description><link>https://substack.norabble.com/p/security-cant-wait</link><guid isPermaLink="false">https://substack.norabble.com/p/security-cant-wait</guid><dc:creator><![CDATA[Ryan Baker]]></dc:creator><pubDate>Thu, 05 Mar 2026 21:05:09 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/7b2a65ed-e701-4f36-8d82-2a665189419b_2816x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Right now, Artificial Intelligence is fundamentally rewriting the rules of cybersecurity&#8212;and we do not have the luxury of waiting before taking action.</p><p>However, the underlying mechanics of both fields can feel frustratingly inaccessible. By design, cybersecurity is meant to be an invisible shield. Unless you are deeply involved in computing, you usually only notice it when it fails, or when it creates daily friction&#8212;like remembering a complex password. The inner workings of how your data stays safe remain mostly opaque, exactly as the engineers intended.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://substack.norabble.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">norabble is a reader-supported publication. 
To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>A similar dynamic applies to Artificial Intelligence. Today, it&#8217;s easy to experience AI through chatbots. You can ask questions, spiral into deep conversations, or generate images in seconds. But as impressive as that is, talking to a chatbot is just the tip of the iceberg. Behind the scenes, by some estimates, <a href="https://openrouter.ai/state-of-ai#categories_-how-are-people-using-llms_">over half of all AI usage today is dedicated to a single task: writing computer code</a>. It is an invisible shift of significant scale.</p><p>To understand why<a href="https://www.blackduck.com/blog/2026-ai-security-appsec-predictions.html#1"> applying AI to cybersecurity is so critical right now</a>, we first have to confront a widespread misunderstanding about what software actually is, and why it breaks.</p><h2><strong>The Myth of Perfect Software</strong></h2><p>If you don&#8217;t have programming experience, it is natural to assume that building software is like publishing a newspaper: you plan the layout, write the articles, print the edition, and the final product is permanently finished. In reality, writing software is much more like writing and maintaining Wikipedia.</p><p>When a printed newspaper hits the stands, it cannot be changed; tomorrow brings an entirely new edition, sharing little other than a layout, typeface, and name. But Wikipedia is an ongoing, living document. A single event sparks the first version of an article, but editors will argue over, revise, and correct it for years. Software engineers do the same thing. 
They write code, users report that something doesn&#8217;t work the way they expected, and the engineers go back and revise it.</p><p>Because fixing one piece of software often accidentally breaks another, they don&#8217;t stop there. They write entirely separate scripts&#8212;automated tests&#8212;whose only job is to constantly check the original code and ensure that older features keep working as the software evolves.</p><p>Testing exists because programmers are human. We misunderstand what users want. We misunderstand the limits of our computer hardware. We mistakenly rely on flawed code written by someone else. Testing protects programmers from their own fallibility.</p><p>Historically, programmers wanted their software to be deterministic. That means for every specific action, there is one specific, predictable reaction. If you move $100 from your savings to your checking account, savings goes down exactly $100, and checking goes up exactly $100. It sounds simple. Simple rules like this allow simple tests.</p><p>But users are highly unpredictable. They click buttons in the wrong order, type words into boxes meant for numbers, and combine features in ways the engineers never imagined. Add to this the physical realities of computing&#8212;hardware inevitably degrades, and surges in user traffic can consume all available memory&#8212;and the environment becomes chaotic.</p><h3><strong>Resilience</strong></h3><p>This dynamic chaos is difficult enough to manage when users are innocently fumbling around. To manage it, engineers must add another layer of complexity to their work: resilience. They don&#8217;t just program what happens when things go right; they have to spend countless hours programming exactly what happens when things go wrong, trying to ensure small failures don&#8217;t add up to large failures. 
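The deterministic $100 transfer described earlier, together with the kind of automated test that guards it, can be sketched as follows (a minimal illustration; the Account class, amounts, and test name are hypothetical, not from any real banking system):

```python
class Account:
    """A toy bank account with a deterministic transfer operation."""

    def __init__(self, balance):
        self.balance = balance

    def transfer_to(self, other, amount):
        # Deterministic rule: one specific action, one specific reaction.
        # Savings goes down exactly `amount`; checking goes up exactly `amount`.
        if amount <= 0 or amount > self.balance:
            raise ValueError("invalid transfer amount")
        self.balance -= amount
        other.balance += amount


def test_transfer_moves_exactly_100():
    # An automated test: a separate script whose only job is to keep
    # re-checking this older behavior as the software evolves.
    savings, checking = Account(500), Account(0)
    savings.transfer_to(checking, 100)
    assert savings.balance == 400
    assert checking.balance == 100


test_transfer_moves_exactly_100()
```

The test is a separate piece of code from the feature itself; if a later change breaks the transfer rule, the assertion fails before users ever see the regression.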
This relentless pursuit of perfection makes building software exponentially harder.</p><h3><strong>Enter the Attacker</strong></h3><p>Attackers live in the gaps of a programmer&#8217;s incomplete plan. They look for the scenarios the engineer forgot to test. Sometimes, this looks like extreme user behavior: <em>What happens if I type 10,000 characters into a password field meant for 20? What if I send thousands of requests at the exact same millisecond?</em></p><p>Attackers don&#8217;t stop at &#8220;acting like a user&#8221;. Software has internal communication channels, invisible to users, and attackers will do their best to access and utilize these too.</p><p>An attacker is entirely happy with chaos as an outcome. They only need to find one weak spot, one forgotten variable, to force the software to do something it shouldn&#8217;t.</p><h2><strong>The AI Magnifying Glass</strong></h2><p>How does AI interact with this cat-and-mouse game? Fundamentally, AI is a magnifying glass. For attackers, it is a tool to scan for weak spots faster and more comprehensively than manual reviews allow.</p><p>The most obvious response is for defenders to use similar tools. If an attacker is using AI to find the cracks in your walls, you need AI to find&#8212;and patch&#8212;those cracks first. In the long run, the ability of AI to rapidly spot human errors in code will be a substantial advantage to defenders. But where this was merely useful before, it&#8217;s critical today: as the cat-and-mouse game accelerates, staying ahead matters more than ever.</p><p>But this brings us to another major misconception about cybersecurity: <em>finding</em> the vulnerability isn&#8217;t actually the hardest part. Neither is fixing the vulnerability.</p><p>To an outside observer, fixing a security flaw sounds highly complex. Often, it isn&#8217;t.
The majority of security vulnerabilities are born from tiny, simple mistakes: a list that is one item too short, a user granted one permission too many, or a line of code that says &#8220;and&#8221; when it should have said &#8220;or.&#8221; In a vacuum, a programmer could fix these errors in five minutes.</p><p>There is a legitimate worry that a small change might have a larger impact: other code may have quietly compensated for the mistake, and will break once it is fixed. This risk is always present, and automated testing is precisely the tool that minimizes it. So fixes aren&#8217;t always easy, but they still aren&#8217;t the core challenge.</p><p>The real challenge is <em>deploying</em> that fix. Modern software is woven into complex corporate environments. A simple five-minute fix might have to pass through multiple testing environments, bureaucratic approvals, and compliance checks before it ever reaches the user. The quality of different companies&#8217; deployment processes varies greatly. The best companies can deploy thousands of small fixes a day. Many other businesses struggle to deploy one update a month.</p><p>There are <a href="https://devops.com/patch-or-perish-the-brutal-truth-about-vulnerability-management-in-2025/">over 40,000 known vulnerabilities, with over 100 more discovered each day</a>. And those numbers only cover known software and libraries. Code a company develops for itself can introduce unique vulnerabilities that aren&#8217;t part of vulnerability databases. While these won&#8217;t all apply to any particular company&#8217;s environments, enough will that one update a month, or even one per day, is not sufficient.</p><p>Attackers act as relentless inspectors who will punish a company for any delay.
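As a hypothetical illustration of the "and"-versus-"or" class of mistake described above (the function names and the permission policy are invented for this sketch, not drawn from any real system):

```python
# Intended policy: only logged-in administrators may delete records.

def can_delete_buggy(is_authenticated, is_admin):
    # Bug: "or" where the policy meant "and" -- this quietly grants
    # deletion rights to every logged-in user, admin or not.
    return is_authenticated or is_admin


def can_delete_fixed(is_authenticated, is_admin):
    # The five-minute fix: a one-word change.
    return is_authenticated and is_admin


assert can_delete_buggy(True, False)       # ordinary user slips through
assert not can_delete_fixed(True, False)   # patched: ordinary user denied
assert can_delete_fixed(True, True)        # admins still authorized
```

Writing the fix takes seconds once the flaw is found; as argued below, the real cost lies in shepherding even a one-word change through testing and deployment.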
If AI helps an attacker find a flaw today, but your company&#8217;s approval process takes three weeks to deploy the fix, you are at a serious disadvantage.</p><h2><strong>The Economics of Cyber Warfare</strong></h2><p>This might sound like a losing battle, but the defenders actually have a distinct advantage: economics.</p><p>Cyber attackers generally fall into two categories: people who just want to cause random destruction (who are thankfully rare and usually lack the focus to execute complex plans), and people who want to make money.</p><p>That second group is large, but they are doing math. If the payoff is too low, or the effort required to break in is too high, they will give up and look for an easier target. You don&#8217;t need a perfectly impenetrable wall to stay safe. You just need a wall that is sufficiently expensive for a hacker to penetrate.</p><p>This is where AI will shift the landscape. High-value targets (like major banks or tech giants) are already rapidly adopting AI to patch their weak points faster than ever. Attackers will likely find these targets too expensive to hack, provided the targets themselves avoid the deployment trap. Ironically, the security organizations meant to protect a company are sometimes the impediment, creating the very delays that bring risk.</p><p>This varies significantly between organizations. All organizations realize the importance of security, but only some have been able to turn that knowledge into reality and bring about the changes that allow for rapid deployment.</p><p>Efficient deployment is part of the design of many organizations. Newer organizations with a tech focus usually started out this way, as the template has been demonstrated many times.
Some older organizations have since caught up, but many older or non-tech-focused organizations sit in an uncomfortable gap here.</p><p>For those that have failed to keep up, their fallback is often more layers of security&#8212;which carries high costs, but remains effective in raising the barrier to entry for attackers.</p><p>The real danger zone will be moderate-value targets&#8212;companies that have something worth stealing but may operate with slow, outdated security practices, and lack the justification for the most expensive layered capabilities. AI will turn a harsh lens on organizations that have managed to scrape by unnoticed in the past. These companies will face a strict ultimatum: modernize their security, or risk severe breaches.</p><p>Again, there is significant variability. Those with efficient deployment will stay ahead. Those that don&#8217;t are at risk, unable to match the expensive high-value protections, but also behind their peers.</p><p>Ironically, the lowest-value targets&#8212;everyday individuals and small businesses&#8212;might actually see an immediate benefit. Because their core reliance is on outsourced platforms (like cloud email providers), they will instantly inherit the new AI-driven spam and scam detection tools built by the tech giants, without having to lift a finger. While outsourcing has its weaknesses, when it comes to core functionality with a broad user base, it&#8217;s hard to beat.</p><h4><em><strong>A Brief Aside: Adversarial Revenue</strong></em></h4><p><em>This mandatory modernization creates an interesting, somewhat circular side-effect in the tech industry: a concept known as &#8220;<a href="https://substack.norabble.com/i/189221013/the-activity-value-matrix">adversarial revenue.</a>&#8221;</em></p><p><em>Because attackers are rapidly adopting AI, every potential target is forced to buy AI-driven defensive tools just to keep pace. Who sells those tools?
Often, it is the broader tech industry that is developing these AI capabilities in the first place. For the companies providing AI security platforms, the rising tide of empowered hackers guarantees a sustained, highly motivated market. The threat itself creates the demand for the cure, making AI defense a uniquely lucrative sector of the economy.</em></p><p><em>Security revenue is a bit different from other <a href="https://substack.norabble.com/p/ai-and-the-zero-sum-game">adversarial roles</a>. Here there&#8217;s a clear bad guy. In fields like law and finance, two sides exist, but neither is clearly creating the inefficiency. It&#8217;s theoretically possible we might improve the ratio between productive and adversarial revenue here by self-policing or regulatory efforts, though that requires convincing those industries to give up some potential revenue.</em></p><p><em>Setting aside the financial balance sheets and returning to the mechanics of the conflict, a much simpler question often arises about these adversarial dynamics: why not just prevent attackers from accessing AI to begin with?</em></p><h2><strong>Why Not Just Ban the Bad Guys?</strong></h2><p>Unfortunately, it&#8217;s a deeply complex challenge. The most effective measures trade away some privacy, but it would be naive to think that sacrificing privacy can provide a total solution.</p><p>There are two main ways people access AI. One approach is through &#8220;open-source&#8221; models, which are freely available for anyone to download and use privately on their own computers. Protections here are limited. Creators train them to refuse malicious requests, but determined attackers consistently figure out how to bypass those guardrails (&#8220;jailbreak&#8221; them).
The best protection here is that, so far, open-source models are less capable, and degrade a bit more after being jailbroken.</p><p>A more common method is through &#8220;controlled hosting&#8221;&#8212;the major platforms where you must log in to use the AI. Here, the AI companies actually <em>do</em> fight back every day. This isn&#8217;t just a theoretical threat; companies like Anthropic and OpenAI routinely detect and disrupt coordinated attackers attempting to use their networks.</p><p>But their expectation isn&#8217;t to build a flawless barrier. Instead, they use the mechanics of bureaucracy to drive up the attacker&#8217;s costs. They require an email to create an account. They monitor activity for suspicious patterns. When they see something shady, they issue a &#8220;soft-block,&#8221; refusing the prompt. When an attacker repeatedly tries to bypass that block, the company bans the account entirely, forcing the hacker to create a new account. And then they block account creation patterns that look shady.</p><p>Even with controlled hosting, stopping malicious actors entirely is difficult. &#8220;Shady&#8221; is a judgment call, and the other side has the option of changing tactics. They&#8217;ll try to look like regular users. They can&#8217;t hide forever in this way, but a provider that reacts too quickly risks blocking patterns that also describe legitimate users, harming them in the process.</p><h3><strong>The AI Apprentice</strong></h3><p>AI companies seem highly capable, so why can&#8217;t they stop this, difficult as it is? Surely, you might think, they could if they were motivated enough. If you doubt the AI companies are sufficiently motivated, consider the story of &#8220;distillation&#8221; attacks, which demonstrates the limits of their control.</p><p>To understand distillation, imagine a hacker who knows they will eventually get caught on the major, guarded platforms.
Instead of using the heavily guarded AI to find vulnerabilities directly, they use it as a master tutor. They feed the secure AI complex coding problems, record its brilliant answers, and use that data to train their own private, open-source AI models.</p><p>Think of it like sneaking a camera into a master locksmith&#8217;s workshop. You don&#8217;t need to steal the locksmith&#8217;s tools; you just record how they work, go home, and teach your own apprentice. Once the attacker&#8217;s private AI learns enough, they no longer need the major platforms. They have their own unrestricted hacking assistant, operating entirely under the radar.</p><p>Distillation attacks also come from rivals attempting to improve their own AI models using the outputs of a better model. It should be obvious that the leading AI companies want to retain their lead, and stopping distillation attacks would help. That said, all of the major providers have reported activity of this type, and while they&#8217;ve had some success in detecting it, that success is only partial. <a href="https://www.anthropic.com/news/detecting-and-preventing-distillation-attacks">Anthropic has reported millions of requests it believes were distillation attacks</a>. <a href="https://cloud.google.com/blog/topics/threat-intelligence/distillation-experimentation-integration-ai-adversarial-use">Google has reported hundreds of thousands of requests</a> too.</p><p>Major AI providers are highly motivated to prevent this. Distillation isn&#8217;t just a security threat; it&#8217;s the outright theft of their multi-billion-dollar intellectual property. The fact that tech giants actively try&#8212;and often struggle&#8212;to stop distillation proves that preventing misuse isn&#8217;t a matter of lacking the desire or financial motivation to build a flawless barrier.
They desperately want to build that barrier, but the technical reality makes absolute control nearly impossible.</p><h3><strong>Privacy and Security</strong></h3><p>Placing the best models in controlled environments provides real improvements, but it also places some of our privacy in the trust of those controlling the environments. These environments are designed to preserve privacy in a balanced way: your requests do go through automated review, yet internal guardrails govern how those records are maintained, and who sees flagged violations, and when. Still, this asks us to place trust in a third party, which is significantly different from the kind of validation we&#8217;d need regarding the privacy of a locally run model.</p><h2><strong>What&#8217;s Your Role?</strong></h2><p>Knowing that this massive contest is occurring behind the scenes, what can you, as an everyday user, actually do?</p><p>While the tech giants fight over deploying complex code fixes, attackers will still try to go after the easiest target: you. AI allows hackers to create highly personalized, perfectly spelled scam emails and incredibly realistic fake websites. To stay safe, a few standard pieces of advice are more important than ever:</p><p><strong>1. Enable Multi-Factor Authentication (MFA)</strong></p><p>With just a username and password, your security depends entirely on no one ever guessing or stealing your password. If you reuse a password, or accidentally type it into a fake &#8220;phishing&#8221; site created by AI, it&#8217;s compromised. MFA, while occasionally annoying, ties your access to something physical that you <em>have</em>&#8212;like a phone that receives a prompt or runs an authenticator app. Even if an attacker steals your password, they can&#8217;t get in without your phone.</p><p><strong>2. Learn to Read a Web Address (and Spot a Fake Browser)</strong></p><p>Attackers frequently build fake login pages designed to steal passwords.
Because AI makes it easy to perfectly clone the look of a legitimate site, attackers have escalated to a new trick: the &#8220;browser within a browser.&#8221;</p><p>A web browser displays content from the sites it loads. As a side effect, malicious sites can draw a fake window inside the webpage that looks exactly like your browser&#8217;s top bar, complete with a perfectly secure-looking&#8212;but entirely fake&#8212;web address. To protect yourself, you must be familiar with the normal layout of your browser. The real address bar is part of the secure surface of your browser at the very top of your screen, not nested down inside the web page&#8217;s content.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!TUFO!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd59b140b-c127-49d1-99ac-70bdfe12ca67_715x51.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!TUFO!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd59b140b-c127-49d1-99ac-70bdfe12ca67_715x51.png 424w, https://substackcdn.com/image/fetch/$s_!TUFO!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd59b140b-c127-49d1-99ac-70bdfe12ca67_715x51.png 848w, https://substackcdn.com/image/fetch/$s_!TUFO!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd59b140b-c127-49d1-99ac-70bdfe12ca67_715x51.png 1272w, https://substackcdn.com/image/fetch/$s_!TUFO!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd59b140b-c127-49d1-99ac-70bdfe12ca67_715x51.png 1456w" 
sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!TUFO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd59b140b-c127-49d1-99ac-70bdfe12ca67_715x51.png" width="715" height="51" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d59b140b-c127-49d1-99ac-70bdfe12ca67_715x51.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:51,&quot;width&quot;:715,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!TUFO!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd59b140b-c127-49d1-99ac-70bdfe12ca67_715x51.png 424w, https://substackcdn.com/image/fetch/$s_!TUFO!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd59b140b-c127-49d1-99ac-70bdfe12ca67_715x51.png 848w, https://substackcdn.com/image/fetch/$s_!TUFO!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd59b140b-c127-49d1-99ac-70bdfe12ca67_715x51.png 1272w, https://substackcdn.com/image/fetch/$s_!TUFO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd59b140b-c127-49d1-99ac-70bdfe12ca67_715x51.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>Once you are certain you are looking at the <em>real</em> address bar, the URL can look like a long string of gibberish, 
but there is a simple rule of thumb: find the very first single slash (/) after the https://. Then, look at the word immediately to the left of the .com, .gov, or .org.</p><ul><li><p>If the address is consumer.ftc.gov/articles/..., the controlling word is <strong>ftc</strong>. You are on a government site.</p></li><li><p>If an attacker tries to trick you with ftc.security-update.com/login, the controlling word is <strong>security-update</strong>. You are <em>not</em> on a government site; you are on an attacker&#8217;s site.</p></li></ul><p><strong>3. When in Doubt, Search</strong></p><p>If reading the URL feels confusing, use a search engine instead of clicking a link in an email. Type the company name into Google. It is incredibly difficult for an attacker to manipulate search algorithms enough to place their fake website higher than the real company&#8217;s official site. Just be sure to skip past the first few results if they are explicitly labeled as &#8220;Sponsored&#8221; or &#8220;Ad,&#8221; as attackers sometimes buy ad space.</p><p><strong>4. Be Wary of Voice Calls and Texts</strong></p><p>You should never give out your MFA codes or passwords by email or by phone call. However, verifying who is actually on the other end of the line has become much harder. AI makes it trivially easy for scammers to clone voices or generate convincing, conversational text messages. If you get a call from your bank&#8212;or even a panicked loved one&#8212;asking for money or a security code, hang up. Look up their official phone number yourself, and call them back.</p><p><strong>5. Keep Things Updated</strong></p><p>You should get to know your computer&#8217;s operating system and web browser. Both have built-in mechanisms to install updates automatically. Don&#8217;t delay or avoid these updates. As we discussed earlier, deploying fixes is the hardest part of cybersecurity. 
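The address-reading rule of thumb from step 2 can be sketched in code. This is a deliberate simplification of the prose rule, not a complete parser: real registrable-domain logic must consult the Public Suffix List to handle endings like .co.uk, and the function name here is invented for illustration:

```python
from urllib.parse import urlparse

def controlling_word(url):
    """Rule of thumb from step 2: take the hostname (everything before
    the first single slash after https://) and return the label
    immediately to the left of the final ending (.com, .gov, .org, ...).

    Simplified sketch -- multi-part endings like .co.uk need the
    Public Suffix List to be handled correctly.
    """
    host = urlparse(url).hostname   # e.g. "consumer.ftc.gov"
    labels = host.split(".")        # ["consumer", "ftc", "gov"]
    return labels[-2]               # the label left of the ending

assert controlling_word("https://consumer.ftc.gov/articles/privacy") == "ftc"
assert controlling_word("https://ftc.security-update.com/login") == "security-update"
```

Everything to the left of the controlling word is decoration an attacker can freely fake; only the controlling word and its ending determine who you are actually talking to.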
When you see an update ready to install on your phone or computer, you are often receiving the exact &#8220;five-minute fixes&#8221; software engineers just wrote to patch a vulnerability. Install them.</p><h3><strong>What are AI Companies Doing to Protect You?</strong></h3><p>While your personal vigilance is the last line of defense, the tech industry isn&#8217;t sitting idle. AI companies are actively deploying countermeasures:</p><ul><li><p><strong>Limiting Access by Attackers:</strong> When an attacker is identified, their accounts are deactivated. AI companies use complex pattern recognition&#8212;analyzing the content and origin of requests&#8212;to hunt down malicious users. It is a constant cat-and-mouse game. While it doesn&#8217;t stop all access, it raises the cost significantly. Every moment an attacker spends trying to defeat these protections is a moment they can&#8217;t spend conducting destructive attacks.</p></li><li><p><strong>Utilizing Guardrails and Training:</strong> AI models are trained with guardrails that inspect incoming and outgoing traffic, automatically refusing or modifying prompts that appear intended to facilitate harmful activity. Again, these techniques are not foolproof, but they disrupt access and diminish the utility of the AI for hackers.</p></li><li><p><strong>Scanning for Vulnerabilities and Orchestrating Remediation:</strong> Vulnerability scanning isn&#8217;t new, but AI enables broader and deeper results.
AI companies (<a href="https://deepmind.google/blog/introducing-codemender-an-ai-agent-for-code-security/">Google CodeMender</a>, <a href="https://www.anthropic.com/news/claude-code-security">Claude Code Security</a>, <a href="https://openai.com/index/introducing-aardvark/">OpenAI Aardvark</a>) are working directly with the cybersecurity industry to execute massive scans, instantly generate remediations, and orchestrate campaigns to deploy those fixes before attackers can act.</p></li></ul><h3><strong>Securing AI Itself</strong></h3><p>It is worth noting that protecting traditional software <em>from</em> AI-empowered attackers is only one slice of the overall security story. A complete view of AI security must also engage with other massive topics: how to deploy AI safely within an organization, how to manage how your private data is used by an AI model, and how to manage &#8220;agentic&#8221; systems (AI that can take actions on its own).</p><p>There are also vital, high-level theoretical debates about preventing AI from being used for massively destructive weapons, authoritarian surveillance, or sci-fi &#8220;AI overlord&#8221; scenarios.</p><p>But every conversation needs a focus, and right now, the most immediate, practical threat to the average user and business is the invisible arms race occurring in everyday software.</p><h3><strong>A Reason for Optimism</strong></h3><p>Ultimately, the integration of AI into cybersecurity is a narrative of optimism.</p><p>Yes, the equilibrium will shift. There will be chaotic periods as attackers test new AI tools. But relying <em>only</em> on defense means attackers get to choose the time and place of the next battle. By using AI to significantly speed up how we write, test, and fix software, we take the initiative away from the attackers. We make the cost of doing bad business too high.</p><p>As Dario Amodei, CEO of Anthropic, has noted, the balance between offense and defense is actually tractable in cybersecurity. 
There is real hope that defense can outpace attacks&#8212;but only if we actively invest in it. The tools are here. Someone must do the hard work of putting them to use for good, lest they only be put to use for harm.</p><h5>Related Articles</h5><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;6b17b6f2-ae9e-4ce4-98bc-cca1b2fd2980&quot;,&quot;caption&quot;:&quot;In the broader discourse on artificial intelligence, the sharpest minds in AI safety are currently looking to the horizon. They are focused on existential, cinematic threats: the potential for AI-generated bioweapons, nuclear command vulnerabilities, and autonomous warfare.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;lg&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;Deployments Can't Wait&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:61710810,&quot;name&quot;:&quot;Ryan Baker&quot;,&quot;bio&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2376ff1a-8f8b-4e42-b164-1855d9e7999b_140x105.png&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2026-03-23T11:50:42.029Z&quot;,&quot;cover_image&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2d95d729-a5d3-4f42-9d39-bf371396315c_2812x1536.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://substack.norabble.com/p/deployments-cant-wait&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:191818851,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:2,&quot;comment_count&quot;:0,&quot;publication_id&quot;:1642290,&quot;publication_name&quot;:&quot;norabble&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!_1Oy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97750d25-7e33-4ebe-87af-6f4b3d0
e4138_348x348.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;62741dd7-68ea-4ad9-80e4-303ab84be196&quot;,&quot;caption&quot;:&quot;AI is advancing quickly, and if there&#8217;s any one consensus about it, it is that it will have broad impacts on jobs. What impact, is an area of more debate, but it&#8217;s uncommon to view it as non-impactful. Some believe that jobs will disappear, and there would be large amounts of unemployment. Some draw on past periods of technological change, such as the Industrial Revolution or the advent of the internet, and believe that advances ultimately lead to new jobs that didn&#8217;t previously exist.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;lg&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;AI and the Zero-Sum Game&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:61710810,&quot;name&quot;:&quot;Ryan 
Baker&quot;,&quot;bio&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2376ff1a-8f8b-4e42-b164-1855d9e7999b_140x105.png&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2025-03-30T16:15:53.873Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!3lXS!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F75bdff8e-8e0a-461c-99ae-df41fd06ab63_1024x608.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://substack.norabble.com/p/ai-and-the-zero-sum-game&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:160183122,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:1,&quot;comment_count&quot;:2,&quot;publication_id&quot;:1642290,&quot;publication_name&quot;:&quot;norabble&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!_1Oy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97750d25-7e33-4ebe-87af-6f4b3d0e4138_348x348.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;a33e112a-8c78-4129-aee9-0e3adc1b080a&quot;,&quot;caption&quot;:&quot;Billions of dollars are currently pouring into AI data centers, chips, and foundational models, but the ultimate test of that massive investment happens in just one place: the Application Layer.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;lg&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;The AI Reality Check &quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:61710810,&quot;name&quot;:&quot;Ryan 
Baker&quot;,&quot;bio&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2376ff1a-8f8b-4e42-b164-1855d9e7999b_140x105.png&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2026-02-26T13:36:17.746Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!kuup!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffc9d6ac1-a0e2-450e-b5de-90eaaf315791_1024x559.jpeg&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://substack.norabble.com/p/ai-application-layer&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:189221013,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:1,&quot;comment_count&quot;:0,&quot;publication_id&quot;:1642290,&quot;publication_name&quot;:&quot;norabble&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!_1Oy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97750d25-7e33-4ebe-87af-6f4b3d0e4138_348x348.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><p></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://substack.norabble.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">norabble is a reader-supported publication. 
To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[The AI Reality Check ]]></title><description><![CDATA[Decoding the Application Layer]]></description><link>https://substack.norabble.com/p/ai-application-layer</link><guid isPermaLink="false">https://substack.norabble.com/p/ai-application-layer</guid><dc:creator><![CDATA[Ryan Baker]]></dc:creator><pubDate>Thu, 26 Feb 2026 13:36:17 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!kuup!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffc9d6ac1-a0e2-450e-b5de-90eaaf315791_1024x559.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Billions of dollars are currently pouring into AI data centers, chips, and foundational models, but the ultimate test of that massive investment happens in just one place: the Application Layer.</p><p><a href="https://substack.norabble.com/p/the-architecture-of-a-gamble">In my last post</a>, I mapped the architecture of the AI industry. 
Today, we are diving into the top of that stack to answer the industry&#8217;s most critical question: how do we separate the technological hype from real, sustainable economic value?</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!kuup!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffc9d6ac1-a0e2-450e-b5de-90eaaf315791_1024x559.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!kuup!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffc9d6ac1-a0e2-450e-b5de-90eaaf315791_1024x559.jpeg 424w, https://substackcdn.com/image/fetch/$s_!kuup!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffc9d6ac1-a0e2-450e-b5de-90eaaf315791_1024x559.jpeg 848w, https://substackcdn.com/image/fetch/$s_!kuup!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffc9d6ac1-a0e2-450e-b5de-90eaaf315791_1024x559.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!kuup!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffc9d6ac1-a0e2-450e-b5de-90eaaf315791_1024x559.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!kuup!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffc9d6ac1-a0e2-450e-b5de-90eaaf315791_1024x559.jpeg" width="1024" height="559" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/fc9d6ac1-a0e2-450e-b5de-90eaaf315791_1024x559.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:559,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!kuup!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffc9d6ac1-a0e2-450e-b5de-90eaaf315791_1024x559.jpeg 424w, https://substackcdn.com/image/fetch/$s_!kuup!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffc9d6ac1-a0e2-450e-b5de-90eaaf315791_1024x559.jpeg 848w, https://substackcdn.com/image/fetch/$s_!kuup!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffc9d6ac1-a0e2-450e-b5de-90eaaf315791_1024x559.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!kuup!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffc9d6ac1-a0e2-450e-b5de-90eaaf315791_1024x559.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>While all layers are important, I&#8217;d argue that at this point in time, the application layer is the most critical of all. The first reason is that this is where theoretical value becomes real value. If social value isn&#8217;t created, why should anyone care about AI?</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://substack.norabble.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">norabble is a reader-supported publication. 
To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>A second, related reason is that this is where financial value is realized. Social value and financial value can be unaligned. When they are unaligned, society can miss opportunities and experience waste. This cuts in both directions.</p><p>A third, related reason is that as individuals we care about our own financial positions, and this layer&#8217;s financial performance will set limits on the financials of every other layer. If revenues can&#8217;t be generated here, they cannot pay off the investments already made and planned for model training, nor the investments made and planned for data centers filled with chips capable of AI compute.</p><p>The goal of this post is to understand those aspects. Where does (or could) revenue come from? How large are these flows, and what motivates or justifies them? In the process, we&#8217;ll learn more about how AI is changing the world today and in the future.</p><h2><strong>Who are the providers in the Application Layer?</strong></h2><p>The way I define it, the application layer will seem like it includes almost the entire industry. I&#8217;ll explain a model to differentiate the application layers of these organizations from their contributions to other layers.</p><p>At the forefront, we have the foundational model builders like <strong>OpenAI</strong> (with ChatGPT), <strong>Anthropic</strong> (with Claude), and <strong>Google</strong> (with Gemini). While they provide the underlying &#8220;Intelligence Layer,&#8221; they also act as direct-to-user application providers. 
Everyone wants to&#8212;and needs to&#8212;be in the application layer. All the core providers want to enable diverse users within the application layer, but cannot entrust their future solely to a developing ecosystem. They are thus both enablers and active participants.</p><p>Alongside them are the massive cloud platforms&#8212;most notably <strong>Microsoft</strong> and <strong>AWS</strong>. While it&#8217;s tempting to look at their packaged applications (like Microsoft Copilot), their true gravity in the enterprise space lies in platforms like <strong>Azure AI Foundry</strong> and <strong>Amazon Bedrock</strong>. These platforms provide the crucial API infrastructure that allows other businesses to build their <em>own</em> custom AI applications. Because of their sheer scale and their role in hosting these APIs, their movements dictate much of the financial reality of this layer.</p><p>This reliance on API infrastructure is perhaps the most common blind spot in the current AI discourse. Media pundits, the general public, and basically anyone not actively involved in building AI-enabled applications consistently overlook it. It&#8217;s easy to fixate on what is visible&#8212;the chat interfaces and packaged consumer tools. The enterprise API layer, however, operates quietly behind the scenes, routing data and powering internal corporate workflows. This infrastructure isn&#8217;t a secret; the documentation for Bedrock and Azure AI is entirely public. But because the average person has no reason to read it or interact with it, this critical financial and operational engine remains largely hidden from view, and will likely remain so.</p><p>Finally, beyond these hyperscalers and hidden API layers, there is a rapidly expanding ecosystem of AI applications. 
While pure-play startups (like <strong>Cursor</strong>, <strong>Midjourney</strong>, or <strong>Harvey</strong>) provide early examples of entirely new workflows built from the ground up, they represent just a fraction of what is possible. Ultimately, this layer is about embedding AI into existing processes and tools, making it generally pervasive over time, with many of the most transformative use cases yet to be realized.</p><h2><strong>The Consumer vs. Enterprise Split</strong></h2><p>A key starting point for making sense of the application layer&#8212;and where its revenue comes from&#8212;is to categorize usage into two segments: <strong>consumer</strong> and <strong>enterprise</strong>.</p><p>The consumer segment encompasses personal usage, while the enterprise segment encompasses usage contracted for by a business. The lines can be blurry with self-employed businesses, gig work, and shadow enterprise usage. Imperfect as the distinction is, it still serves a purpose. The consumer segment shares dynamics regarding free usage and subscriptions that are important for understanding potential growth. The enterprise segment is very diverse; while it has leading workload types, those workloads share pressures to demonstrate return on investment (ROI) and are challenged by implementation and integration.</p><p>Revenues for consumer usage are generally subscription-based. Conversion from free to paid is not straightforward; while limits on free usage create pressure, they can also alienate users. In this segment, usage isn&#8217;t measured by ROI but by &#8220;vibes&#8221;&#8212;does the user feel the tool is worth the monthly fee?</p><p>Conversely, the enterprise segment defaults to metered API usage (pay-per-token). This is exactly why the API platforms mentioned above (Bedrock, Azure AI) are so critical&#8212;they form the backbone of this enterprise business model. This metered structure creates a more predictable future by correlating revenue and costs directly. 
However, forecasting enterprise revenue depends on how capable organizations are at closing implementation and integration gaps. In a fully rational market, metered usage would create significant profit, but if the market deploys more capacity than demand can absorb, we may see usage sold at a loss just to recoup sunk costs.</p><h2><strong>The Activity Value Matrix</strong></h2><p>To truly answer the core questions of this article regarding social value, financial value, and financial performance, we need another framing tool alongside the consumer/enterprise split. This is where the <strong>Activity Value Matrix</strong> comes in.</p><p>The Activity Value Matrix is crucial because it establishes a reasoning model for what type of AI activity actually brings net-positive value back to society, versus activity that merely burns compute to propel the industry forward without creating real benefit. Furthermore, it helps us identify precarious gaps: cases where AI <em>could</em> create immense social value, but because we lack a sustainable revenue model to support it, that value will fail to materialize. 
By analyzing the relationship between <strong>Value</strong> (economic/social utility), <strong>Revenue</strong> (monetization), and <strong>Usage</strong> (active consumption), we can map the true health of the application layer:</p><div id="datawrapper-iframe" class="datawrapper-wrap outer" data-attrs="{&quot;url&quot;:&quot;https://datawrapper.dwcdn.net/g79Lh/3/&quot;,&quot;thumbnail_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b7fabecb-90ec-4529-aca3-9baebaf951b4_1220x1412.png&quot;,&quot;thumbnail_url_full&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2dec22a0-210f-44b8-ae64-9b72febdb18f_1220x1482.png&quot;,&quot;height&quot;:757,&quot;title&quot;:&quot;Activity Value Matrix&quot;,&quot;description&quot;:&quot;&quot;}" data-component-name="DatawrapperToDOM"><iframe id="iframe-datawrapper" class="datawrapper-iframe" src="https://datawrapper.dwcdn.net/g79Lh/3/" width="730" height="757" frameborder="0" scrolling="no"></iframe><script type="text/javascript">!function(){"use strict";window.addEventListener("message",(function(e){if(void 0!==e.data["datawrapper-height"]){var t=document.querySelectorAll("iframe");for(var a in e.data["datawrapper-height"])for(var r=0;r<t.length;r++){if(t[r].contentWindow===e.source)t[r].style.height=e.data["datawrapper-height"][a]+"px"}}}))}();</script></div><p>From a social perspective, we have a few rows we prefer, but the most important is productive revenue. Free goods are enjoyable, but unsustainable, and so at a society level, we should expect to eventually pay. Public goods are likewise excellent, but the arrangements are complex and apply to fewer situations. While we might wish for an endless supply of free or public goods, it is via activities with productive revenue that we&#8217;ve moved forward.</p><p>Likewise, there are some rows we should fear. Waste is clear. The exploitative patent rent is clear. General patent rent is less clear. 
From a simple preference, we should dislike it. But patent rent is often a useful tool to promise as a privilege. When managed well, that promise encourages investment that subsidizes activity, creating growth that could not occur from productive revenue alone.</p><h2><strong>Moving Bottlenecks and Remnants</strong></h2><p>With these tools&#8212;the consumer/enterprise split and the Activity Value Matrix&#8212;we can start to examine individual use cases. Whether driven by consumer subscriptions or the pressure to recoup enterprise sunk costs, the operational reality is that actual AI adoption is heavily challenged by implementation and integration. As this adoption progresses, we will repeatedly encounter new bottlenecks.</p><p>Many bottlenecks are simply standard operational hurdles&#8212;like migrating legacy data, reorganizing teams, or passing compliance reviews&#8212;that get addressed one by one. While solvable, they still create significant <em>diffusion delays</em> that push the realization of productivity gains further into the future. Another type, &#8220;remnants,&#8221; comprises the most resistant bottlenecks. They are the stubborn areas left behind after the easy problems are solved.</p><p>One of the most stubborn types of remnants is <strong>adversarial</strong>. As the matrix highlights under &#8220;Compelled / Adversarial Revenue,&#8221; these actions create a continuous loop of demand for intermediate outputs just to maintain existing outcomes&#8212;like AI generating better spam, which requires better AI to filter that spam. This burns energy and resources, generating revenue and usage, without creating net-new social or financial value.</p><p>To see how these bottlenecks and remnants shape an industry, consider <strong>Coding</strong>&#8212;arguably the leading edge of the application layer. Software developers are culturally accustomed to disruption, which reduces the organizational friction and diffusion delays seen elsewhere. 
Yet, their use cases perfectly map to our frameworks:</p><ul><li><p><strong>Responding to the Adversarial First:</strong> A wave of security-related investment is underway, and I expect it will accelerate. This early adoption is driven by compelled security needs&#8212;using AI to find vulnerabilities and harden systems before attackers can use AI to exploit them.</p></li><li><p><strong>Paying Down Debt:</strong> AI is being deployed to clear existing bottlenecks by paying down technical debt&#8212;modernizing deprecated platforms and freeing up locked capital.</p></li><li><p><strong>An Engine for Change:</strong> Because software underlies modern business, increasing developer productivity acts as an engine for change across all other industries, eventually translating to new productive revenues.</p></li></ul><p>However, if development capacity isn&#8217;t solved first, consider how a bottleneck might manifest as an increasing &#8220;Proposal-to-Product&#8221; ratio. If AI halved the time needed to create software feature proposals of twice the quality, we might describe it as a 4x efficiency improvement. We might also see twice as many proposals. However, if the capacity to actually <em>write</em> the software hasn&#8217;t increased proportionally, that 4x internal productivity boost might only translate to a 10% increase in final value.</p><p>If we&#8217;re not careful, we might incorrectly diagnose the proposal process as the problem. We might even blame AI and suggest all of those new proposals were &#8220;AI slop&#8221;. While it&#8217;s possible for tools to be misused, it would be ironically sloppy to jump to that conclusion. <a href="https://substack.norabble.com/p/the-slop-scapegoat-ai">The overuse of the &#8220;AI slop&#8221; label is already a problem</a>, so we should not add to it. If users decrease their effort by 10x, and get half the quality, then yes, we can call their output slop. 
But that would have been true if they decreased their effort by half without tools.</p><p>The real story here is the bottleneck. The unused proposals aren&#8217;t necessarily low-quality slop; they are simply piling up behind a bottleneck further down the line.</p><h2><strong>Conclusion: The Reality Check</strong></h2><p>The Application Layer is <strong>the industry&#8217;s &#8220;Reality Check.&#8221;</strong> While the lower layers (Compute and Intelligence) have seen massive investment and technical breakthroughs, the Application Layer is where those breakthroughs are forced to justify their existence.</p><p>The distinction between the <strong>Consumer</strong> and <strong>Enterprise</strong> segments is vital to this justification, as it dictates the very survival of business models. In the consumer world, the value is personal and often ephemeral, driven by individual preferences and &#8220;vibes.&#8221; Here, adoption can be viral and instantaneous, but loyalty is fickle and valuations are tied to user engagement. In the enterprise world, value is rigorous and must survive the gauntlet of organizational change and ROI calculations. While adoption is slower due to integration hurdles, the resulting business models are often more durable and command higher valuations based on proven efficiency gains. While many use cases&#8212;like coding or creative generation&#8212;will exist in both segments, they will take on different &#8220;shapes&#8221; and adoption velocities based on these divergent preferences.</p><p>As we look forward, the specific tasks&#8212;the &#8220;Use Cases&#8221;&#8212;will define which segment wins and how much of that massive investment in lower layers can actually be recouped. 
In my next post, we will dive deeper into the <strong>Activity Value Matrix</strong> to see exactly where this &#8220;productive revenue&#8221; is hiding, how the shapes of use cases shift between consumer and enterprise, and where energy is simply being burned as &#8220;waste.&#8221;</p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;bb7ec0a3-d388-432d-9f7f-5ff804b6c545&quot;,&quot;caption&quot;:&quot;A while back I talked about producing an analysis of the AI industry. I&#8217;ve put together something pretty extensive, but on reflection, I&#8217;ve decided to put it out in multiple parts. This post today functions more as an outline, where the following posts will dive more into each layer of this stack and then finally look in more depth at the macro-economic aspects.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;lg&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;The Architecture of a Gamble&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:61710810,&quot;name&quot;:&quot;Ryan 
Baker&quot;,&quot;bio&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2376ff1a-8f8b-4e42-b164-1855d9e7999b_140x105.png&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2025-12-15T12:00:28.462Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!wNQ-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ac2d61c-4ddf-4020-a1f0-0349cbb0809e_1024x565.jpeg&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://substack.norabble.com/p/the-architecture-of-a-gamble&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:181559364,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:3,&quot;comment_count&quot;:1,&quot;publication_id&quot;:1642290,&quot;publication_name&quot;:&quot;norabble&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!_1Oy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97750d25-7e33-4ebe-87af-6f4b3d0e4138_348x348.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;cce7ce77-6910-4937-937f-bf31928e1d30&quot;,&quot;caption&quot;:&quot;I don&#8217;t like the term &#8220;AI slop&#8221;. As a term it&#8217;s used far too casually. The Internet has had copious amounts of slop for a while, if we describe slop as low-quality material created to grab eyeballs. For example, the article-spinning software of the 2000s, content farms churning out SEO-driven articles, or the rise of viral clickbait. 
Quantity over quality, you might say.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;lg&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;The Slop Scapegoat: AI&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:61710810,&quot;name&quot;:&quot;Ryan Baker&quot;,&quot;bio&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2376ff1a-8f8b-4e42-b164-1855d9e7999b_140x105.png&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2025-10-19T16:35:44.334Z&quot;,&quot;cover_image&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/30c58724-2ab9-4488-9cbf-1a0fad3363f4_1024x1024.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://substack.norabble.com/p/the-slop-scapegoat-ai&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:176573296,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:5,&quot;comment_count&quot;:2,&quot;publication_id&quot;:1642290,&quot;publication_name&quot;:&quot;norabble&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!_1Oy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97750d25-7e33-4ebe-87af-6f4b3d0e4138_348x348.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://substack.norabble.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">norabble is a reader-supported publication. 
To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[The Architecture of a Gamble]]></title><description><![CDATA[Mapping the AI Value Chain]]></description><link>https://substack.norabble.com/p/the-architecture-of-a-gamble</link><guid isPermaLink="false">https://substack.norabble.com/p/the-architecture-of-a-gamble</guid><dc:creator><![CDATA[Ryan Baker]]></dc:creator><pubDate>Mon, 15 Dec 2025 12:00:28 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!wNQ-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ac2d61c-4ddf-4020-a1f0-0349cbb0809e_1024x565.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>A while back I talked about producing an analysis of the AI industry. I&#8217;ve put together something pretty extensive, but on reflection, I&#8217;ve decided to put it out in multiple parts. This post today functions more as an outline, where the following posts will dive more into each layer of this stack and then finally look in more depth at the macro-economic aspects.</em></p><h3><strong>Introduction</strong></h3><p>If you look solely at the headlines in 2025, the artificial intelligence industry appears like a single coordinated effort with unprecedented wealth. We see trillions of dollars in valuation, feverish construction of data centers, and a frantic race for silicon. 
But to view AI as a single, unified industry is to miss the mechanics of how capital flows, creating stability in some places and precariousness in others.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://substack.norabble.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">norabble is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>The reality is a stack&#8212;four distinct layers of value, each operating under different laws of physics and economics. At the bottom is the concrete reality of energy and silicon. At the top is the promise of productivity through software. Between them lies a complex web of subsidies, speculative bets, and &#8220;soft dependencies&#8221; that are challenging the laws of traditional business.</p><p>At the base, capital is being aggressively spent on lower layers of the stack, to meet both current and expected future demands from higher layers. Higher layers are operating speculatively, bringing in capital through speculative investments and running at a loss, in expectation of unprecedented future revenue growth.</p><p>This creates a <strong>commercial stability gradient</strong> that decreases as you move up the stack, where revenue covers less of investment and depends more on future growth.</p><ul><li><p><strong>The Compute Supply Chain</strong> (Layer 1) is paid first. 
While future demand, and thus future revenue, still depends on the higher layers, these companies are also selling hard product today, and being paid well for it. As businesses, they are more stable, though you should be careful about translating that commercial stability into assumptions of stock-price stability; stock prices assume both high demand and high margins, neither of which is certain.</p></li><li><p><strong>The Operational Infrastructure</strong> (Layer 2) is mixing revenue from higher layers and cash flows from existing businesses to fund purchases from Layer 1. This layer was competitive even before AI, and AI&#8217;s influence may make it more commodity-like and more competitive still. Investment is speculative, but structured for stability.</p></li><li><p><strong>The Intelligence</strong> (Layer 3) is precarious. Only a fraction of spending is covered by revenue; the remainder is funded by investment capital and debt, driven by high optimism. While capital costs are not as high as in Layer 2, they are still high, and so are operational costs. Almost all of this spending flows to Layer 2 as current revenue. Cash flow is negative and dependent on ongoing investment, and the layer is also highly competitive.</p></li><li><p><strong>The Application</strong> (Layer 4) is diverse, and it is where value is realized. The other layers depend upon it to bring in the revenues that pay for capital and operational costs. The risk here is an <strong>Attribution Gap</strong>. Real value is created, but measuring it is difficult. This leaves it uncertain whether users will pay enough to justify the massive investments at the lower layers.</p></li></ul><p>This structure creates an unstable relationship. 
If spending in the Application layer doesn&#8217;t expand rapidly, the investment optimism funding the Intelligence layer could dry up, leaving a critical gap in the revenues needed to pay back investors and meet the ROI expectations of the infrastructure below.</p><p>An interesting exception to this structure is that application-layer companies can be more commercially stable than Layer 3 companies. If they demonstrate their own value to customers, and decouple from Layer 3 through negotiation, they could reach profitability before Layer 3 does.</p><p>Because this layer is diverse, many different outcomes should be expected, with some companies successfully navigating their own path and others failing. Also, while this shifts more every day, a reasonably significant part of this layer (25%) is direct OpenAI/Gemini/Anthropic user subscriptions, coupling that revenue to Layer 3.</p><h3><strong>Layer 1: The Compute Supply Chain</strong></h3><p>The foundation of the AI stack is not code, but the &#8220;hard&#8221; reality of atoms: the physical and logical inputs required to create intelligence. 
This layer is dominated by the high capital cost and expansion timelines of chip manufacturing, which feeds into the economics of the entire industry.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!_dfj!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F771d2e0d-137b-43e7-b830-2af1735ae168_1024x559.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!_dfj!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F771d2e0d-137b-43e7-b830-2af1735ae168_1024x559.jpeg 424w, https://substackcdn.com/image/fetch/$s_!_dfj!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F771d2e0d-137b-43e7-b830-2af1735ae168_1024x559.jpeg 848w, https://substackcdn.com/image/fetch/$s_!_dfj!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F771d2e0d-137b-43e7-b830-2af1735ae168_1024x559.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!_dfj!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F771d2e0d-137b-43e7-b830-2af1735ae168_1024x559.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!_dfj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F771d2e0d-137b-43e7-b830-2af1735ae168_1024x559.jpeg" width="1024" height="559" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/771d2e0d-137b-43e7-b830-2af1735ae168_1024x559.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:559,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!_dfj!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F771d2e0d-137b-43e7-b830-2af1735ae168_1024x559.jpeg 424w, https://substackcdn.com/image/fetch/$s_!_dfj!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F771d2e0d-137b-43e7-b830-2af1735ae168_1024x559.jpeg 848w, https://substackcdn.com/image/fetch/$s_!_dfj!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F771d2e0d-137b-43e7-b830-2af1735ae168_1024x559.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!_dfj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F771d2e0d-137b-43e7-b830-2af1735ae168_1024x559.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Foundries, such as TSMC, represent the hardest constraint and the largest capital commitment. <a href="https://finviz.com/news/244259/taiwan-semiconductor-trading-at-a-discount-how-to-play-the-stock">TSMC alone is projecting $40-42 billion in CapEx for 2025</a>. This creates a &#8220;pervasive price&#8221; because chip manufacturing costs factor into most other costs. Training needs chips, and inference needs chips.</p><p>While foundries deal with the brute force of fabrication, chip designers like Nvidia and AMD operate the <strong>Efficiency Lever</strong>. Their economic role is to maximize the usage of scarce foundry capacity. However, the lines here are blurring. Hyperscalers like Amazon and Google are increasingly designing their own silicon&#8212;Trainium and TPUs&#8212;to optimize their own costs, effectively internalizing the supply chain.</p><p>Surrounding this is the often-overlooked constraint of utility inputs: energy and real estate. 
All compute requires reliable power, making grid access a key determinant of land value.</p><p>One of the advantages for Nvidia and TSMC is that chips are being sold today. TSMC may have high CapEx, but for 2025 it also has $120 billion in revenue and $40 billion in net income. Nvidia has an even more appealing 2025 forecast: $115 billion in net income on $200 billion in revenue, with only $6 billion in capital expense.</p><p>$115 billion in income doesn&#8217;t by itself justify a $5 trillion market capitalization; earnings have to grow to support that. But as a business, there&#8217;s no lack of commercial stability. It&#8217;s also a mistake to look at Nvidia&#8217;s favorable position and infer stability for the entire AI industry. Nvidia&#8217;s results just mean the rest of the industry is paying Nvidia; <a href="https://www.planetearthandbeyond.co/p/did-nvidia-just-prove-there-is-no">it doesn&#8217;t mean the rest of the industry can afford to</a>.</p><p>If demand continues to grow, this layer&#8217;s hard constraints&#8212;manufacturing capacity and power availability&#8212;will give it pricing power. If demand flatlines, that pricing power is lost, but investments will still be repaid. Demand would have to pull back significantly before that translated into internalized losses. Stock prices could be volatile, and bad choices there could cause individual investors to experience losses, but that&#8217;s partly the nature of stock markets in general.</p><h3><strong>Layer 2: The Operational Infrastructure</strong></h3><h4><strong>Disclaimer</strong></h4><p><em>Now is a good time to remind you that while I work at AWS, this is a personal Substack. Opinions are my own.</em></p><p>Sitting directly above the supply chain is the Operational Infrastructure, defined by <strong>delivered compute</strong>. This layer efficiently deploys chips using existing cash flows, and keeps them operational. It is very CapEx-dependent but, so far, self-financing. The primary examples are the hyperscalers: AWS, GCP and Azure. 
These companies are absorbing the massive AI infrastructure costs using cash flows from mature, non-AI businesses like search and retail. A full accounting must consider smaller providers and private deployments.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!wNQ-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ac2d61c-4ddf-4020-a1f0-0349cbb0809e_1024x565.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!wNQ-!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ac2d61c-4ddf-4020-a1f0-0349cbb0809e_1024x565.jpeg 424w, https://substackcdn.com/image/fetch/$s_!wNQ-!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ac2d61c-4ddf-4020-a1f0-0349cbb0809e_1024x565.jpeg 848w, https://substackcdn.com/image/fetch/$s_!wNQ-!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ac2d61c-4ddf-4020-a1f0-0349cbb0809e_1024x565.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!wNQ-!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ac2d61c-4ddf-4020-a1f0-0349cbb0809e_1024x565.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!wNQ-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ac2d61c-4ddf-4020-a1f0-0349cbb0809e_1024x565.jpeg" width="1024" height="565" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6ac2d61c-4ddf-4020-a1f0-0349cbb0809e_1024x565.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:565,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!wNQ-!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ac2d61c-4ddf-4020-a1f0-0349cbb0809e_1024x565.jpeg 424w, https://substackcdn.com/image/fetch/$s_!wNQ-!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ac2d61c-4ddf-4020-a1f0-0349cbb0809e_1024x565.jpeg 848w, https://substackcdn.com/image/fetch/$s_!wNQ-!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ac2d61c-4ddf-4020-a1f0-0349cbb0809e_1024x565.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!wNQ-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ac2d61c-4ddf-4020-a1f0-0349cbb0809e_1024x565.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>We are witnessing a classic <strong>Installation Period</strong>: a rapid overbuilding of infrastructure driven by FOMO and strategic necessity. Similar to the fiber-optic boom of 1999-2001, physical assets are being deployed ahead of proven demand. While this typically leads to a capacity glut, such a glut is eventually beneficial for the economy, as excess capacity drives costs down and subsidizes the next wave of innovation.</p><p>In my follow-up, I&#8217;ll dive into:</p><ol><li><p>The CapEx numbers from each entity</p></li><li><p>The CapEx breakdown by type (chips, land, buildings, etc.)</p></li><li><p>The differences between projected CapEx, announced CapEx, commitments, and actual spending</p></li><li><p>The revenues from supporting training and inference</p></li></ol><h3><strong>Layer 3: The Intelligence</strong></h3><p>The third layer is the <strong>speculative core</strong> of <strong>model providers. 
</strong>The current core are those with frontier model programs, such as OpenAI, Anthropic, Google, Amazon, Meta. While this layer captures the public imagination, its economic footing is far more slippery than the infrastructure below it.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!TzfU!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2040e824-60bd-4c06-b97b-3584a7d78b0f_1024x565.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!TzfU!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2040e824-60bd-4c06-b97b-3584a7d78b0f_1024x565.jpeg 424w, https://substackcdn.com/image/fetch/$s_!TzfU!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2040e824-60bd-4c06-b97b-3584a7d78b0f_1024x565.jpeg 848w, https://substackcdn.com/image/fetch/$s_!TzfU!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2040e824-60bd-4c06-b97b-3584a7d78b0f_1024x565.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!TzfU!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2040e824-60bd-4c06-b97b-3584a7d78b0f_1024x565.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!TzfU!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2040e824-60bd-4c06-b97b-3584a7d78b0f_1024x565.jpeg" width="1024" height="565" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2040e824-60bd-4c06-b97b-3584a7d78b0f_1024x565.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:565,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!TzfU!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2040e824-60bd-4c06-b97b-3584a7d78b0f_1024x565.jpeg 424w, https://substackcdn.com/image/fetch/$s_!TzfU!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2040e824-60bd-4c06-b97b-3584a7d78b0f_1024x565.jpeg 848w, https://substackcdn.com/image/fetch/$s_!TzfU!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2040e824-60bd-4c06-b97b-3584a7d78b0f_1024x565.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!TzfU!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2040e824-60bd-4c06-b97b-3584a7d78b0f_1024x565.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>These companies face a brutal paradox: they provide the cognitive engine for the entire ecosystem, yet &#8220;intelligence&#8221; itself is trending toward a commodity.</p><p>In this high-velocity environment, legacy advantages are fleeting. A leader can be dethroned by a single release cycle from a competitor. Unlike the infrastructure layer, where assets have a useful life of years to decades, a frontier model can depreciate in months.</p><p>Changes in position happen often. At the moment of writing, December 2025, OpenAI is not the leader, and the momentary lead is open to debate between Google Gemini and Anthropic Claude Opus/Sonnet. You&#8217;re only as good as your latest model on the frontier. Niches are developing, on both a cost basis and a use-case basis. 
But even here, a niche can be swallowed by the next frontier release, or lost to a new release targeting the niche.</p><p>A long-term advantage here would have to be founded on execution: creating effective models cost-efficiently. The most optimistic view is that a recursive feedback loop emerges. If AI improves model architecture via an <strong>algorithm loop</strong>, or AI improves chip design, these loops create novel impacts.</p><p>A chip-design loop would be slow and shared, as a new chip design would still have to go through manufacturing, and would depend on chip-designer intellectual property. An algorithm loop could be faster, and self-contained. While this area is fairly speculative, it&#8217;s foundational to how many model providers think, at least at the fringes.</p><p>Financially, this layer is precarious. Revenues are not large enough to pay for past training costs, so companies are still unprofitable. Internally they face a <strong>revenue gap</strong>, where growth projections often presume exponential conversion of free users to paid users without a proven reason for that assumption. They are also dependent on the realization of many other revenue sources from the application layer.</p><p>In my follow-up, I&#8217;ll dive into the revenues and expenses at this layer.</p><h3><strong>Layer 4: The Application</strong></h3><p>The most critical disconnect in the modern AI economy sits at the very top: the Application Layer (Layer 4). 
This is where the rubber meets the road&#8212;where enterprises and consumers actually use AI to solve problems.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!aCRB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F774ff182-74c6-41f9-8f0d-2a5694aadc29_1024x559.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!aCRB!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F774ff182-74c6-41f9-8f0d-2a5694aadc29_1024x559.jpeg 424w, https://substackcdn.com/image/fetch/$s_!aCRB!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F774ff182-74c6-41f9-8f0d-2a5694aadc29_1024x559.jpeg 848w, https://substackcdn.com/image/fetch/$s_!aCRB!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F774ff182-74c6-41f9-8f0d-2a5694aadc29_1024x559.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!aCRB!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F774ff182-74c6-41f9-8f0d-2a5694aadc29_1024x559.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!aCRB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F774ff182-74c6-41f9-8f0d-2a5694aadc29_1024x559.jpeg" width="1024" height="559" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/774ff182-74c6-41f9-8f0d-2a5694aadc29_1024x559.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:559,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!aCRB!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F774ff182-74c6-41f9-8f0d-2a5694aadc29_1024x559.jpeg 424w, https://substackcdn.com/image/fetch/$s_!aCRB!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F774ff182-74c6-41f9-8f0d-2a5694aadc29_1024x559.jpeg 848w, https://substackcdn.com/image/fetch/$s_!aCRB!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F774ff182-74c6-41f9-8f0d-2a5694aadc29_1024x559.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!aCRB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F774ff182-74c6-41f9-8f0d-2a5694aadc29_1024x559.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>In a healthy market, the revenue from this top layer would cascade down, paying for the models, the servers, and the chips. Today, however, that flow is a trickle compared to the flood of investment rising from the bottom. We are facing an <strong>attribution gap</strong>. While AI creates real value&#8212;writing code, summarizing documents, speeding up research&#8212;measuring that value is notoriously difficult.</p><p>Besides the attribution gap, there are other frictions: an <strong>implementation gap</strong>, an <strong>integration gap</strong>, and an <strong>adoption gap</strong>.</p><p>Before users can make use of a model&#8217;s potential capability for a specific purpose, they often need an implementation. Consider coding and coding tools as an example. Before coding tools like Kiro supported spec-driven development, using models for this purpose was challenging. The same goes for earlier advancements, such as generating tests. 
Sure, you could in theory have done this with a chatbot and a lot of copy and paste, but that&#8217;s not good enough to be practical. The implementation takes theory into practice. The number of <strong>implementation gaps</strong>, between what the best models could in theory do and what users have good tools to do, is growing faster than it&#8217;s shrinking.</p><p>Even with an implementation, another gap remains: the <strong>integration gap</strong>. Intelligent decisions depend upon access to relevant and current data, and data does not automatically become integrated. For good reason, data has to go through the process of integration, where security, quality, and structure have to be resolved. And it&#8217;s not just data, but also actions, which quite reasonably have an even higher bar for proper integration.</p><p>Finally, there&#8217;s the <strong>adoption gap</strong>. Even when a tool or process is available, and integrated, skepticism, awareness, and habits often mean it&#8217;s not used. There&#8217;s a relation back to the attribution gap here too, as the inability to attribute value helps fuel skepticism.</p><p>The attribution gap and the other frictions slow down both actual implementation and the creation of revenue. This results in a <strong>lag effect</strong>. The industry is currently in a race to see if revenue can expand fast enough to justify the valuations before the <strong>Installation Period</strong> speculation cools.</p><p>In my first follow-up (<a href="https://open.substack.com/pub/norabble/p/ai-application-layer?r=10qod6&amp;utm_campaign=post&amp;utm_medium=web&amp;showWelcomeOnShare=true">The AI Application Layer</a>), I dive into:</p><ol><li><p>Two types of revenue, consumer and enterprise, and their different frictions. 
A preview here; this is changing rapidly and the enterprise is becoming more important.</p></li><li><p>Different types of value-revenue relationships, and their importance to delivering social value from usage. </p></li></ol><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;65efea1f-7782-4b92-9273-61d205559862&quot;,&quot;caption&quot;:&quot;In my last post, I discussed the architecture of the AI industry. Today, I&#8217;d like to dive into the top-layer, the application layer.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;lg&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;AI Application Layer&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:61710810,&quot;name&quot;:&quot;Ryan Baker&quot;,&quot;bio&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2376ff1a-8f8b-4e42-b164-1855d9e7999b_140x105.png&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2026-02-26T13:36:17.746Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!kuup!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffc9d6ac1-a0e2-450e-b5de-90eaaf315791_1024x559.jpeg&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://substack.norabble.com/p/ai-application-layer&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:189221013,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:0,&quot;comment_count&quot;:0,&quot;publication_id&quot;:1642290,&quot;publication_name&quot;:&quot;norabble&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!_1Oy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97750d25-7e33-4ebe-87af-6f4b3d0e4138_348x348.png&quot;,&quot;belowTheFold&quot;:true,&quot;yo
utube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><p>A future follow-up will dive into tasks that can create revenue and their relationship to value.</p><h3><strong>Macro-Economic Effects</strong></h3><p>I&#8217;ve written about this a bit in the past&#8230;</p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;145770ae-953b-4207-a48e-d517ced47ec2&quot;,&quot;caption&quot;:&quot;This will be part one of a two part series. In the first part, I want to outline some of my views about how salient a set of what we might call existential concerns about AI should be. In part two, I want to discuss some more immediate interactions with today's economy&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;lg&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;The Economic Future from and of AI&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:61710810,&quot;name&quot;:&quot;Ryan Baker&quot;,&quot;bio&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2376ff1a-8f8b-4e42-b164-1855d9e7999b_140x105.png&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2025-09-07T14:08:35.292Z&quot;,&quot;cover_image&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d02180eb-af84-4846-b470-d641afa59da1_512x512.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://substack.norabble.com/p/the-economic-future-from-and-of-ai&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:173016480,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:0,&quot;comment_count&quot;:0,&quot;publication_id&quot;:1642290,&quot;publication_name&quot;:&quot;norabble&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!_1Oy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media
.s3.amazonaws.com%2Fpublic%2Fimages%2F97750d25-7e33-4ebe-87af-6f4b3d0e4138_348x348.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;2791c40e-b3f0-4ffc-aa53-ae6c79ef0482&quot;,&quot;caption&quot;:&quot;In Part One, I discussed some of the existential economic concerns that Artificial Intelligence forces us to consider. In this second part, I&#8217;ll focus more directly on the practical, near-term landscape of familiar economic forces.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;lg&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;The Economic Future from and of AI&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:61710810,&quot;name&quot;:&quot;Ryan Baker&quot;,&quot;bio&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2376ff1a-8f8b-4e42-b164-1855d9e7999b_140x105.png&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2025-09-08T12:05:18.583Z&quot;,&quot;cover_image&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6e78a60e-800e-4adf-9223-6f4fd217c034_1024x1024.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://substack.norabble.com/p/the-economic-future-from-and-of-ai-cf1&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:173031411,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:0,&quot;comment_count&quot;:0,&quot;publication_id&quot;:1642290,&quot;publication_name&quot;:&quot;norabble&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!_1Oy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97750d25-7e33-4ebe-87af-6f4b3d0e4138_348x348.png&quot;,&quot;belowTheFo
ld&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;21efe8ab-226d-4b28-b3e3-d7dd15eb5dac&quot;,&quot;caption&quot;:&quot;AI is advancing quickly, and if there&#8217;s any one consensus about it, it is that it will have broad impacts on jobs. What impact, is an area of more debate, but it&#8217;s uncommon to view it as non-impactful. Some believe that jobs will disappear, and there would be large amounts of unemployment. Some draw on past periods of technological change, such as the Industrial Revolution or the advent of the internet, and believe that advances ultimately lead to new jobs that didn&#8217;t previously exist.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;lg&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;AI and the Zero-Sum Game&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:61710810,&quot;name&quot;:&quot;Ryan 
Baker&quot;,&quot;bio&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2376ff1a-8f8b-4e42-b164-1855d9e7999b_140x105.png&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2025-03-30T16:15:53.873Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!3lXS!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F75bdff8e-8e0a-461c-99ae-df41fd06ab63_1024x608.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://substack.norabble.com/p/ai-and-the-zero-sum-game&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:160183122,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:1,&quot;comment_count&quot;:2,&quot;publication_id&quot;:1642290,&quot;publication_name&quot;:&quot;norabble&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!_1Oy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97750d25-7e33-4ebe-87af-6f4b3d0e4138_348x348.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><p>In my follow-up I&#8217;ll dive into:</p><ol><li><p>How the current state of the AI industry feeds into the overall economy</p></li><li><p>How different types of revenue have different effects on the economy separate from the effects on the AI industry.</p></li></ol><h3><strong>Conclusion</strong></h3><p>The bottom of the stack is being paid for by upper layers. The largest contribution comes from the internal investment by the infrastructure layer. But this is reaching its limits. Some additional capital is being supplied by the intelligence layer, in the form of external investment spent on training and inference in excess of their own revenues. 
Finally, the application layer is bringing in revenues. Those revenues would be less than enough to pay back CapEx to date, and would be trivial in comparison to what would be needed to pay back projected CapEx.</p><p>Thus, the current internal and external investment is absorbing the costs of infrastructure in the belief that the application layer will eventually catch up. If projected CapEx is to occur, new sources of investment will be necessary, likely through debt and larger amounts of equity financing. A drop in optimism could cut these plans short and create market volatility.</p><p>This is not necessarily a disaster; it is a historical pattern. We are deep in what economists call an <strong>Installation Period.</strong> Much like the fiber-optic boom of the late 1990s, we are overbuilding infrastructure ahead of proven demand. This is a feature, not a bug, of technological revolutions. This overbuilding is inflationary and chaotic, driven by Fear of Missing Out (FOMO) and strategic necessity.</p><p>The eventual result is almost always a capacity glut. A glut is not a foregone conclusion, and pinpointing its timing is a much harder prediction than its simple occurrence. But if you forecast on potential value only, ignoring the attribution, implementation, integration, and adoption gaps, it&#8217;s almost certain to occur.</p><p>In theory, good forecasting can forestall or avoid a glut. In practice, there&#8217;s usually at least one market participant that&#8217;s constitutionally inclined to pursue the most optimistic interpretation, and FOMO pulls others behind this wave. Lagging too far behind when optimism prevails risks being left out, with a struggle to regain position. In other words, playing it safe isn&#8217;t always safe.</p><p>When optimism finds its failing point, stock valuations correct violently. 
But for the economy at large, this is the &#8220;turning point.&#8221; A crash in the cost of intelligence would reduce the barriers of the attribution gap, spawning the burst in adoption needed to unlock a <strong>Deployment Phase.</strong> In past revolutions, this is where the technology becomes cheap and reliable enough to be woven into the fabric of everyday life.</p><p>With AI, we might wonder if the model of the railroads, which experienced serial corrections, fits better, as one correction may not be enough to close all of the potential gaps.</p><p>For now, the AI economy is a structure supported by massive financial scaffolding. The heavy lifting is being done by legacy profits and speculative investment, all betting on a future where the friction of adoption disappears. The question is not whether AI adds value, but whether the application layer can expand quickly enough to catch the weight of the massive infrastructure being built to support it.</p><h3><em><strong>Next in this Series</strong></em></h3><p><em>In the coming posts, we will peel back the layers of this stack one by one, examining the specific data and market dynamics driving the Compute Supply Chain, the precarious economics of Model Providers, and the reality of Enterprise Adoption.</em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://substack.norabble.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">norabble is a reader-supported publication. 
To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Unlocking the Library: A Proposal for a Reader Pass]]></title><description><![CDATA[How to support independent writers without the pressure of a full subscription.]]></description><link>https://substack.norabble.com/p/unlocking-the-library-a-proposal</link><guid isPermaLink="false">https://substack.norabble.com/p/unlocking-the-library-a-proposal</guid><dc:creator><![CDATA[Ryan Baker]]></dc:creator><pubDate>Wed, 19 Nov 2025 12:45:32 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/cc4add73-fdfd-4498-a054-e8d6e1fcfc10_1024x1024.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Substack has done an incredible job creating a home for independent writing. It&#8217;s given thousands of authors a way to make a living directly from their readers. But as a user, I&#8217;m running into a problem: there are too many great writers I want to support, and even if I can afford it, I&#8217;m not inclined to carry a full $50/year subscription for every single one of them.</p><p>Currently, if I want to read just one specific essay from a writer I don&#8217;t subscribe to, I usually can&#8217;t. I hit the paywall and leave. 
That&#8217;s a loss for me, and it&#8217;s money left on the table for the writer.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://substack.norabble.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">norabble is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>I think there is a middle ground that would help readers explore more independent work while putting more money in writers&#8217; pockets: <strong>The Reader Pass</strong>.</p><h2><strong>How It Could Work</strong></h2><p>The idea is simple: a way to support writers individually without committing to a marriage. It acts like a pre-paid tab for the whole platform with volume discounts built in.</p><ol><li><p><strong>Set a Budget:</strong> I set a monthly limit I&#8217;m comfortable with&#8212;say, $10.</p></li><li><p><strong>Pay-Per-Post:</strong> When I want to read a paywalled post from a writer I don&#8217;t subscribe to, I can &#8220;unlock&#8221; it. This deducts from my budget.</p></li><li><p><strong>Bonus Tiers:</strong> Once I hit my limit (e.g. 10 articles for $10), I don&#8217;t just get cut off. I unlock a &#8220;bonus&#8221; set of free articles.</p></li><li><p><strong>Scaling Up:</strong> If I exhaust my bonus articles, I can approve the next tier. 
Each subsequent tier offers significantly better value (more articles for the same price), plus its own set of bonus free reads.</p></li></ol><p>Here is how the value could scale as a reader spends more:</p><div class="latex-rendered" data-attrs="{&quot;persistentExpression&quot;:&quot;\\begin{array}{l|c|c|c|c}\n\\textbf{Monthly Spend} &amp; \\textbf{Base Articles } &amp; \\textbf{Per Article} &amp; \\textbf{Bonus Articles} &amp; \\textbf{Total Access} \\\\\n\\hline\n\\$10 &amp; 10 &amp; \\$1.00 &amp; +5 &amp; 15 \\text{ Posts} \\\\\n\\$20 \\small{\\text{ (}+\\$10\\text{)}} &amp; 30 \\small{\\text{ (}15\\text{ }new\\text{)}} &amp; \\$0.66 &amp; +10 &amp; 40 \\text{ Posts} \\\\\n\\$30 \\small{\\text{ (}+\\$10\\text{)}} &amp; 60 \\small{\\text{ (}20\\text{ }new\\text{)}}&amp; \\$0.50 &amp; +15 &amp; 75 \\text{ Posts} \\\\\n\\$40 \\small{\\text{ (}+\\$10\\text{)}} &amp; 100 \\small{\\text{ (}25\\text{ }new\\text{)}}&amp; \\$0.40 &amp; +20 &amp; 120 \\text{ Posts} \\\\\n\\end{array}\n&quot;,&quot;id&quot;:&quot;VPNASGRPED&quot;}" data-component-name="LatexBlockToDOM"></div><p>This structure incentivizes heavier reading. The deeper you go into the ecosystem, the cheaper it becomes to explore, removing the hesitation of &#8220;is this one article worth it?&#8221;</p><h2><strong>Why This Helps Writers</strong></h2><p>The biggest worry writers might have is, &#8220;Will my subscribers downgrade to this?&#8221;</p><p>I honestly don&#8217;t think so. True fans want the community, the archives, and the direct connection of a full subscription. The Reader Pass is for everyone else&#8212;the casual readers who are currently paying $0.</p><ul><li><p><strong>Earn from Casuals:</strong> Instead of bouncing off the paywall, a casual reader pays $1. That adds up.</p></li><li><p><strong>Find New Subscribers:</strong> This is a great way for readers to &#8220;date&#8221; a writer before marrying them. 
If I find myself spending $5 a month unlocking a specific writer&#8217;s posts, it becomes an easy decision to just switch to a full subscription.</p></li></ul><h2><strong>Why It&#8217;s Good for the Community</strong></h2><p>Right now, Substack can feel a bit siloed&#8212;we stick to the writers we already know. A Reader Pass turns the platform into a library where we can wander the stacks. It encourages us to take a chance on a new voice or a niche topic we wouldn&#8217;t normally pay for.</p><h2><strong>Respecting Independence</strong></h2><p>Of course, independence is the whole point of Substack. This shouldn&#8217;t be forced on anyone. If a writer wants to keep their work exclusive to full subscribers, they should absolutely be able to opt out.</p><h2><strong>Conclusion</strong></h2><p>The subscription model is amazing for supporting our favorite writers, but it creates a wall for everyone else. A Reader Pass would smooth that out, helping curious readers support more independent voices without the pressure of a full annual commitment.</p><h2><strong>Let&#8217;s Make It Happen</strong></h2><p>If you agree that a Reader Pass would make Substack better for everyone, let&#8217;s make some noise. Share this idea with your favorite writers, or tag Substack to let them know we&#8217;re ready for a middle ground between &#8220;all or nothing.&#8221; We want to support more writers&#8212;give us the tools to do it.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://substack.norabble.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">norabble is a reader-supported publication. 
To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Shutdown Views]]></title><description><![CDATA[Two good reasons doesn't make a wrong]]></description><link>https://substack.norabble.com/p/shutdown-views</link><guid isPermaLink="false">https://substack.norabble.com/p/shutdown-views</guid><dc:creator><![CDATA[Ryan Baker]]></dc:creator><pubDate>Thu, 13 Nov 2025 12:49:34 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/b48ed2d7-875a-48ae-8836-606bd0351a21_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Following the recent government shutdown, which ended with a short-term deal, much of the media focus has turned to Democratic infighting over the strategy and its outcome.</p><p>Democratic leadership has been taking flak for the outcome of the shutdown standoff. While that&#8217;s not too surprising, I think the debate should reach a different conclusion. To frame my context here: I&#8217;ve been unsure about engaging in the shutdown from the start. I was a bit more supportive this time around, as there was a clear outcome the Democrats were seeking, and it did seem worth a stand. All that said, as recently as Sunday I was saying that if I were in Congress, I&#8217;m not sure what I&#8217;d do. My reason was that the shutdown was causing real impacts to people, and it was getting worse.</p><p>It&#8217;s one thing to make a stand and put your political reputation on the line; it&#8217;s another to do it when people you&#8217;re trying to protect are being hurt. 
I&#8217;d want to be sure it was going to succeed, and ideally, I&#8217;d want to know that those at risk were willing to take the risk themselves. If I had been a senator and broken ranks, that would have been the reason, and that is how I would have explained it.</p><p>When you look at the <a href="https://www.bbc.com/news/articles/c7974x7248go">explanations of those who broke ranks</a>, they aren&#8217;t very dissimilar.</p><blockquote><p>&#8220;We have airport controllers. And we were seeing lines to our food banks in northern Nevada. These were lines that I hadn&#8217;t seen since the pandemic.&#8221;</p></blockquote><p>I&#8217;m going to suggest that Democrats shouldn&#8217;t be spending their time attacking each other right now. Yes, there is a reason to want the group to act as one, and so I understand the impression that this is a failure. But on the other hand, giving in because you care is something you should consider forgiving. More than that, the real story here should be that the Republicans were willing to throw America under the bus to take away healthcare. The separation between Democrats who were willing to take a stand to protect people&#8217;s healthcare and those who were willing to take a drubbing from their own party to protect people harmed by the shutdown shouldn&#8217;t be a division you can&#8217;t get past. It&#8217;s two sets of good intentions facing a hard choice.</p><p>Besides being true, that statement is also one every member of the Democratic party should want to reinforce amongst the public, who were largely on the Democrats&#8217; side throughout the shutdown and so could reasonably be expected to understand how terrible the Republican position is. Additionally, if you value unity as a party, this is just the moment to take the initiative. 
Punitive statements and in-party fighting aren&#8217;t necessarily more useful for developing party discipline than recognizing the common cause and acting accordingly.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://substack.norabble.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">norabble is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[The Shape of Economic Risks ]]></title><description><![CDATA[How a Stagflationary Shock Could Start an Economic Correction]]></description><link>https://substack.norabble.com/p/the-shape-of-economic-risks</link><guid isPermaLink="false">https://substack.norabble.com/p/the-shape-of-economic-risks</guid><dc:creator><![CDATA[Ryan Baker]]></dc:creator><pubDate>Sat, 08 Nov 2025 19:00:05 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/d6257a7d-4511-4eda-8f13-294dee877ab5_1024x1024.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>There&#8217;s significant talk these days about tech bubbles and how that relates to today. This talk often predicts a market correction. While this is possible, even probable in a long enough term, the timing of such a correction is inconveniently inaccessible.</p><p>The bursting of bubbles based on sentiment is not very predictable. 
For one, when optimism gets into high gear, it&#8217;s usually tied to underlying dynamics that carry a lot of uncertainty. Who really knows what work AI will and won&#8217;t be able to do, and more importantly, when?</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://substack.norabble.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">norabble is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>Second, since bubbles are generally based on predictions of future performance, there&#8217;s little immediate forcing function. If the current quarter&#8217;s numbers come in low, but your target is 5 years away, it&#8217;s not that hard to maintain your 5-year optimism. Shrugging off short-term evidence as non-relevant to the long-term outcome has some sensibility.</p><p>Third, even someone who knows enough to be certain that optimism is too high will be tempted to hang on and try to time their exit to take best advantage of others&#8217; optimism. This will delay the bursting of a bubble. This dynamic can&#8217;t last forever, but it <a href="https://www.economist.com/finance-and-economics/2025/11/02/why-wall-street-wont-see-the-next-crash-coming">complicates timing significantly</a>.</p><p>With this in mind, too much weight may be placed on the idea that a correction would start within the realm of tech and AI investments. 
This type of correction isn&#8217;t the only one we should be worried about, though. A different, though highly complex, scenario is one in which a smooth connection from point A (now) to point B (the 5-year optimistic view) proves impossible, because the &#8220;main street&#8221; economy that supports it begins to buckle.</p><h2><strong>The Stagflationary Shock</strong></h2><p>This presents a concerning story: Tariffs create extra costs, and tariff policies create uncertainty that reduces &#8220;main street&#8221; business investment. Job growth slows as a result. At the same time, crackdowns on immigrants cause business closures and slower business formation in areas that drew workers from immigrant populations, such as farming, restaurants, and residential or informal construction (like renovations and single-family homes).</p><p>This scenario creates two powerful, opposing economic forces simultaneously:</p><p><strong>An Inflationary Supply Shock:</strong> Tariffs (a tax on goods) and labor shortages (raising wage costs) reduce the economy&#8217;s productive capacity, pushing prices up.</p><p><strong>A Recessionary Demand Shock:</strong> Business closures and slowing investment reduce aggregate demand, putting downward pressure on prices.</p><p>This aggregate balance is further complicated because these forces may not be uniform. The inflationary pressures from tariffs are broad, while the recessionary effects of workforce loss and business closures are localized to specific industries and regions. 
It&#8217;s possible for the economy to experience both shocks simultaneously without them canceling each other out on a national scale.</p><p>Individuals could absorb the pain from those effects by using savings or by taking on more debt, but they&#8217;ll also probably stop spending as freely.</p><p>When spending slows, it affects retailers, directly slowing online sales and indirectly slowing advertising revenue.</p><h2><strong>Risk One: The Disinflationary Bust</strong></h2><p>The first risk is that these forces don&#8217;t simply cancel. The inflation from tariffs itself acts as a tax, slashing consumer purchasing power and causing a recession. In this feedback loop, an initial inflationary spike is followed by a recession that becomes so deep that it overwhelms the inflationary pressure from tariffs. Demand is so thoroughly destroyed that businesses cannot pass on their higher costs and instead reduce production and employment. This would not lead to stagflation, but to a deep disinflationary bust&#8212;a severe recession where inflation rapidly slows, or even turns negative.</p><p>This specific risk&#8212;a disinflationary bust originating from &#8220;main street&#8221;&#8212;is one the U.S. economy has arguably been facing. It has, so far, been largely avoided or masked by a powerful counter-force: a massive wave of investment from tech companies and rising stock market optimism fueled by the promises of AI, which has provided a floor for the wider economy. This reliance, however, creates a fragile co-dependence that leads directly to the other major risks.</p><h2><strong>Risk Two: The Credit Competition Crisis</strong></h2><p>This fragile balance, propped up by AI, is still dependent on the &#8220;main street&#8221; consumer. While they have been absorbing the pain from inflation and job slowdowns by using savings or taking on more debt, this cannot last forever.</p><p>The second major risk begins when this consumer resilience is exhausted. 
As &#8220;main street&#8221; spending contracts significantly, it directly hits the revenue of the tech companies themselves&#8212;slowing advertising, e-commerce, and enterprise sales.</p><p>This creates a new, dangerous dynamic. Tech companies, which had been funding their massive AI investments from free cash flow, now see those cash flows dry up. They are forced to turn to debt markets to continue investing. Suddenly, these tech giants are in direct competition for a limited pool of credit with the very consumers and &#8220;main street&#8221; businesses that are also desperate to borrow to survive. This competition for credit would cause interest rates to spike across the board&#8212;even without government action&#8212;threatening to choke off both the AI boom and any hope of a &#8220;main street&#8221; recovery.</p><p>This is the point where the crisis becomes systemic and demands a government policy response, leading to the next set of risks.</p><h2><strong>Risk Three: The Crisis Apex and Policy Dilemma</strong></h2><p>Faced with the systemic credit crisis from Risk Two, the government is forced to intervene. At this apex, the economy is balanced on a knife-edge, with two new, opposing market dynamics coming into play, even as the government weighs its options.</p><p>On one hand, there is a slender thread of hope: AI itself might finally start returning tangible, widespread value to &#8220;main street,&#8221; allowing new business formation and employment to grow again. If AI-driven productivity gains were to suddenly make &#8220;main street&#8221; businesses profitable and credit-worthy, it could provide a market-based escape. While the dynamic is simple, the reality of this is unclear, as it&#8217;s not known what this value would look like or if it could arrive fast enough.</p><p>On the other hand, this is the exact moment that AI optimism itself might end. 
Faced with cratering &#8220;main street&#8221; demand and a real credit crunch, investors might finally decide the &#8220;5-year optimistic view&#8221; is no longer plausible. This wouldn&#8217;t be a solution, but a different, compounding dynamic: a full-blown tech correction on top of the &#8220;main street&#8221; recession, leading to an immediate crash (similar to Risk One, but far more severe).</p><p>Barring the slender thread of hope, and assuming AI optimism holds just long enough to influence policy, the government must still act. Its options are severely limited by the original stagflationary shock: inflation (from tariffs) is still high, even as all sectors are desperate for cheaper credit.</p><p>In this scenario, the government might still try to counteract the recession and credit crunch by lowering base interest rates. This choice, however, creates a new, highly unstable set of risks.</p><p>This policy creates a bifurcated economy. It doesn&#8217;t mean credit only benefits AI; rather, in this environment banks engage in risk-based credit rationing, steering cheap credit toward the booming AI sector while rationing it away from riskier &#8220;main street&#8221; borrowers.</p><p>This policy is highly unstable. It fails to help &#8220;main street&#8221; (which is too risky to lend to) while simultaneously fueling an asset bubble and adding inflationary pressure (by pumping cheap money into the &#8220;hot&#8221; AI sector).</p><h2><strong>Risk Four: Path A - The Politically-Driven Hard Landing</strong></h2><p>Faced with the unstable, inflationary, and politically toxic situation created by Risk Three, policymakers arrive at a critical fork in the road. The first, and most economically orthodox, path is the hard landing.</p><p>The policy in Risk Three is politically toxic. 
A government facing an electorate where everyone is moderately affected by inflation&#8212;a universal and highly visible political poison&#8212;is far less likely to tolerate it than unemployment, which affects a smaller group severely.</p><p>The political pressure would be immense for the Federal Reserve to do the opposite of Risk Three: keep interest rates high (or raise them further) to crush inflation, even at the cost of deepening the recession. This &#8220;Volcker-style&#8221; hard landing, driven by political necessity, would itself trigger the tech correction by making financing prohibitively expensive, leading to a bust in both tech and &#8220;main street.&#8221;</p><h2><strong>Risk Five: Path B - The Idiosyncratic Low-Rate Gamble</strong></h2><p>There is, however, a second, less orthodox path. An idiosyncratic political administration, one more concerned with short-term stock market performance and headline employment figures than long-term price stability, might attempt to force the Federal Reserve to keep interest rates low, despite the stagflationary pressures.</p><p>This path involves rejecting the painful (but historically validated) lesson of the 1970s and 80s, and instead gambling that the economy can &#8220;grow its way out&#8221; of inflation. In this scenario, the government essentially accepts the unstable dynamic of Risk Three. It continues to fuel the AI asset bubble with cheap credit while hoping for a productivity miracle, all while persistent inflation erodes the value of &#8220;main street&#8221; savings and wages.</p><h2><strong>A Choice Between Two Harmful Paths</strong></h2><p>Neither fork in the road leads to a happy state. Path A (Risk Four) is the orthodox, &#8220;Volcker-style&#8221; hard landing: a deliberate and deeply painful recession.
It is a harmful path that would cause significant unemployment and business failures, but it is a known strategy designed to crush inflation and prevent an uncontrolled spiral.</p><p>Path B (Risk Five) is an unorthodox gamble that likely leads to a worse fate. If President Trump chooses this second path&#8212;cutting and holding rates during stagflation&#8212;the scenario ends grimly. The policy would fail to stop the &#8220;main street&#8221; recession while runaway inflation, now unchecked by the central bank, takes hold.</p><p>This onset of runaway inflation would eventually force the government&#8217;s hand anyway, compelling it to aggressively increase base interest rates after all. This U-turn would force AI investment to finally slow, but only after significant economic damage has been done. In the end, the economy would be weaker, with more inflation, more unemployment, fewer productive businesses, and a complicated unwinding of debts.</p><h2><strong>How likely is this?</strong></h2><p>The likelihood of this scenario appears uncomfortably high, as the key premises are not hypothetical but are already visible in late 2025 data.</p><ol><li><p><strong>The Stagflationary Shock is Underway.</strong> The &#8220;Stagflationary Shock&#8221; is not a future risk; it is the current reality. Job growth has stalled, averaging just 29,000 per month over the last quarter (near-recessionary levels) and showing &#8220;little change since April&#8221; according to the Bureau of Labor Statistics. This is happening while inflation remains &#8220;sticky&#8221; at 3.0%, fueled by tariffs. This combination of weak growth and persistent inflation is the textbook definition of stagflation.</p></li><li><p><strong>Tech Vulnerability is Real.</strong> &#8220;Risk Two&#8221; is also proving plausible. The assumption that Big Tech would fund the AI boom from its own cash reserves is incorrect. Tech giants are <em>already</em> turning to debt markets.
Recent reports show AI capital expenditures are approaching 94% of operating cash flow, forcing massive bond sales, including Meta&#8217;s recent $30 billion offering. This confirms their new vulnerability to a &#8220;main street&#8221; downturn. That&#8217;s not to say that tech itself is at risk of collapse, but rather that these companies aren&#8217;t insulated from the rest of the economy.</p></li><li><p><strong>Fed Independence is Under Attack.</strong> &#8220;Risk Five,&#8221; the &#8220;idiosyncratic low-rate gamble,&#8221; has become a tangible political risk. The Federal Reserve is under &#8220;immense pressure&#8221; from the administration. The recent appointment of a political adviser, Stephen Miran, to the Fed&#8217;s board&#8212;who is already dissenting in favor of the larger rate cuts the White House desires&#8212;demonstrates that the Fed&#8217;s traditional independence can no longer be taken for granted.</p></li></ol><p>Because the three critical links in the chain&#8212;the stagflationary shock, the tech debt reliance, and the erosion of Fed independence&#8212;are already in place, this alternative path to a correction is not just a theory, but an unfolding reality.</p><h2><strong>What should we do?</strong></h2><p>Confronting the question of what would reduce these risks, the most direct path is to remove the initial stagflationary shocks&#8212;the tariff policy and the workforce disruptions from restrictive immigration policy&#8212;which are the root cause of the entire risk cascade. Tariff policy is easily undone, especially if it comes with a commitment to not make that mistake again. Immigration policy shifts should be more forward-leaning, creating an environment for orderly, legal immigration that meets workforce needs.</p><p>Reducing our risk by these means is the most obvious and sound policy.
The question that arises, though, is what could convince the Trump administration to follow through with measures that contradict the largely irrational policy it&#8217;s pursued so far?</p><h2><strong>Postscript: Quantifying</strong></h2><p>The scenario here is plausible, but not quantified. <a href="https://www.bls.gov/news.release/pdf/empsit.pdf">Job numbers</a>, cash flows, and <a href="https://apnews.com/article/federal-reserve-trump-dbd197ba501ac0b6ab0b47e079eedddb">attacks on Fed independence</a> do neutralize the strongest counterarguments. That still doesn&#8217;t make this a quantified scenario, and without quantification, the timeline is unknown. Anytime a timeline is unknown, you have to also consider the possibility that the timing is &#8220;too far away to matter&#8221;.</p><p>While there&#8217;s a solid story that main street weaknesses could overwhelm optimism, financial tricks, and tech&#8217;s contributions, it&#8217;s also possible these forces prop up main street and slow its erosion. In that time, policies could reverse. The delay might even be long enough to allow new elections and new policies.</p><p>It would be nice to have numbers. Numbers are not easy, though. Quantifying this scenario presents a significant challenge. I have no intention of attempting a full econometric model of the U.S. economy, which is <em>the minimum </em>that would be required to know the outcome of the interaction of these forces. I&#8217;d watch for others taking on that work, but do you really need to wait?</p><p>I do have the idea that I can take a deeper look at the tech economy, and think through how specific companies or layers would be impacted by changes in wider economic behavior. That&#8217;s an interesting question all of its own, but I also think it helps in understanding how powerful the feedbacks would be.
This is still a big topic, so it will be a future post.</p>]]></content:encoded></item><item><title><![CDATA[The Slop Scapegoat: AI]]></title><description><![CDATA[Blaming AI for low-quality content misses the real problem&#8212;and the real opportunity.]]></description><link>https://substack.norabble.com/p/the-slop-scapegoat-ai</link><guid isPermaLink="false">https://substack.norabble.com/p/the-slop-scapegoat-ai</guid><dc:creator><![CDATA[Ryan Baker]]></dc:creator><pubDate>Sun, 19 Oct 2025 16:35:44 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/30c58724-2ab9-4488-9cbf-1a0fad3363f4_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I don&#8217;t like the term &#8220;AI slop&#8221;. As a term it&#8217;s used far too casually. The Internet has had copious amounts of slop for a while, if we describe slop as low-quality material created to grab eyeballs. For example, the article-spinning software of the 2000s, content farms churning out SEO-driven articles, or the rise of viral clickbait. Quantity over quality, you might say.</p><p>The idea that AI has been necessary for slop is a lazy one of its own.
The more we use the term &#8220;AI slop&#8221;, the more I see two mistakes happening. First, we forget that non-AI slop hasn&#8217;t gone away; in fact, the same incentives that created it are now simply being supercharged by new tools. Second, there&#8217;s a lazy classification of everything AI-related as slop. Ultimately, these two mistakes lead to a failure to understand the problem, preventing us from developing effective solutions.</p><p>There are protections against slop already, though we can all attest that they haven&#8217;t fully protected us from slop; otherwise most of us wouldn&#8217;t even know what it was. But we do. Will the addition of AI radically change this? I think it&#8217;s not going to be all that dramatic. I know that&#8217;s in contrast to the common belief that it&#8217;s going to be an unstoppable tidal wave that creates the &#8220;end of the Internet&#8221;.</p><p>I believe the panic is overstated for three reasons. The first is that AI can improve the protections. While it&#8217;s also useful for evading the protections, it&#8217;s hard to say which is going to be the more powerful force.
The second is that those protections already set a status quo which is more stable than is generally acknowledged.</p><p>The third, and most powerful, reason is that slop has intent. You don&#8217;t have to protect against slop by focusing directly on the content; you can also focus on the intent. The motivation behind that intent isn&#8217;t endless either.</p><p>Let&#8217;s think about the protections we have. It&#8217;s gone largely unacknowledged that multiple platforms have thrived on slop: Facebook, TikTok, etc. What is the majority of the content there other than slop?</p><p>At this point, I have to take a minor tangent. Since we define slop as low-quality, we have to acknowledge that quality is in the eye of the beholder. You can identify factual details and use them in a debate about quality, but only when there&#8217;s an agreement on goals. It&#8217;s a generally acknowledged goal of most products that they don&#8217;t fall apart (though some, like toilet paper, must be designed to fall apart at the right time). But what is entertaining is far less agreed upon. What is informative is somewhat in between.</p><p>These platforms use algorithms to prioritize content, using user activity as a key input. However, their goal isn&#8217;t to find &#8216;entertaining&#8217; content; it&#8217;s to find &#8216;engaging&#8217; content&#8212;anything that keeps your eyes on the screen longer so they can show you more ads. While the two sometimes overlap, they are not the same thing.</p><p>It&#8217;s close enough that if you ask a representative of Facebook or TikTok, they&#8217;ll probably claim they are trying to prioritize entertaining content. However, their systems receive clearer signals about engagement than entertainment. The fact that this easier approximation is closer to their business objective is a coincidence. A coincidence that they&#8217;re happy to embrace, but a coincidence nonetheless.</p><p>The point is not to defend social media platforms.
I actually have little interest in them myself, and think their objectives, and thus their actions, have some problems. The point, though, is that they&#8217;ve somewhat tamed slop to their purposes. This gives a reasonable cause for hope that if you choose a different purpose, you can tame it there as well.</p><p>Recognizing slop can be done by looking at the quality, but that&#8217;s time-consuming compared to some shortcuts. Most slop will carry its intent with it, and can be recognized by looking for tell-tale markers of that intent. Is a page littered with 1,000 ads, such that if there were any value in the base material, it would be rather hard to find? Well, it&#8217;s probably slop. Is it repeating some lazy conspiracy theory? Slop.</p><p>The interesting thing about the anti-slop opportunities that modern AI tools promise is that they can listen to us. This does depend on us retaining control, which we mostly gave up in the last run of algorithm deployment.</p><p>When preference algorithms were first deployed, they used likes, dislikes, and correlational data. In Pandora, we indicated music we liked, and Pandora contributed data about musical attributes that helped find the patterns in the likes and dislikes. We didn&#8217;t have a lot of control here, but at least there wasn&#8217;t much in the loop other than our own goals.</p><p>Later, as platforms gathered more user data, they began assuming a correlation between your choices and those of similar users. But things got really messy when they started measuring subconscious actions. Suddenly, metrics like how long you spent scrolling or how your cursor lingered over a post became key inputs for the algorithms. After that, the algorithm optimizers started to optimize for these aspects. Initially this helped grow those platforms, but slowly users started to realize that more of their interactions with these platforms weren&#8217;t serving their own interests, at least not fully.
At some level it was satisfying something; otherwise, why would they keep coming back? But we&#8217;re not immune to doing things we regret, and regrets have started to pile up.</p><p>With this history, I can see why someone worries about another iteration of AI-assisted content selection. That said, there are some differences this time. Large language models are pretrained as general-purpose tools. Intents as specific as guiding you to particular content aren&#8217;t part of the training. This presents the opportunity to take more control. If you wanted to explain to Facebook what you were interested in, you could choose interests from a list, but how those would affect outcomes was in the control of the algorithm designer. In theory, we might have been offered the ability to write our own algorithms, but this was never in the business model for Facebook, and besides, most users would never have mastered it (though we might imagine a world in which enough did to share their algorithms with other users).</p><p>With GenAI tools, the genie is out of the bottle already. Preferences can be described in natural language. That&#8217;s less trivial than it sounds, but average users can, through trial and error, gain enough experience to get something useful. Facebook still may never offer you this, so the social world may have to wait for a transition to something more open.</p><p>While a solution for the walled gardens of social media remains elusive, there is opportunity in other environments: those where users are active participants, not passive consumers, and decentralized spaces with competitive services that prioritize their own discoverability. Examples are the world of news, opinion writing, and academic research. In these cases, the data will be easy to consume, allowing production of a feed that serves your interests.
Imagine telling your news aggregator, &#8216;Show me articles about urban planning, but filter out any that are just clickbait or designed to provoke outrage.&#8217;</p><p>While this leaves the problem of the passive user unsolved, the problem of matching our interests there is more complex. In one sense, providing entertaining content by algorithm is in the user&#8217;s interest. That is not the healthiest choice when overused, but it&#8217;s not one we&#8217;d typically deny a user. Providing better alternatives, which a better feed would do, is probably the most attractive, though not the easiest, way to draw users away from that pattern.</p><p>While the term &#8220;AI slop&#8221; does have a real meaning, and does describe something that will create annoyances, its casual overuse misses the point of describing slop in general. Yes, there will be more slop, and yes, we&#8217;ll need to be more active in filtering it out.</p><p>My intent here is not to apologize for AI-generated slop, or slop in general. Yes, we will give some credit based on intent. If someone tries to create something great, but fails, we don&#8217;t want to discourage future efforts. But if it&#8217;s commercially motivated, or criminally motivated, we shouldn&#8217;t think twice about asserting our interests.
The greater intent is this: let&#8217;s not panic, and let&#8217;s be as specific as we can.</p><p>But it&#8217;s not a hopeless battle, and lazy use of &#8220;AI slop&#8221; as a term for the purpose of creating antipathy to AI in general misses a real opportunity: to use these new tools to reduce the impact of slop&#8212;AI-generated or not&#8212;and at the same time reassert our own intentions, rather than a platform&#8217;s.</p>]]></content:encoded></item><item><title><![CDATA[What is needed to advance the use of AI?]]></title><description><![CDATA[When we think about AI, one of the topics we should be most concerned with is, how does this bring value to people?]]></description><link>https://substack.norabble.com/p/what-is-needed-to-advance-the-use</link><guid isPermaLink="false">https://substack.norabble.com/p/what-is-needed-to-advance-the-use</guid><dc:creator><![CDATA[Ryan Baker]]></dc:creator><pubDate>Mon, 22 Sep 2025 12:04:05 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/36d6f5b3-4952-4fbf-9c9b-124e15f74449_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When we think about AI, one of the topics we
should be most concerned with is, how does this bring value to people? My view is that the work needed to advance the practical use of AI is in securely connecting AI systems to the private, context-rich data they need to perform meaningful work. This has more practical value than model advancements alone. The two are reinforcing trends, not divergent ones, but so much attention is given to models that I think it&#8217;s necessary to call this out. I&#8217;d like to talk about that today, and explore why solving this data integration problem is the key to unlocking AI's value.</p><p>This is not a discussion of an AGI world. I explain why in &#8220;<a href="https://substack.norabble.com/i/174216621/why-should-we-sometimes-avoid-talking-about-agi">Why should we sometimes avoid talking about AGI?</a>&#8221; near the end.</p><p>I&#8217;m also not talking about what people are doing today. I&#8217;ll leave that to the journalists and researchers who compile such data. <a href="https://forklightning.substack.com/p/how-people-use-chatgpt">OpenAI has some good data here</a>, and <a href="https://www.anthropic.com/research/economic-index-geography">so does Anthropic</a>.
A lot is being done, and I&#8217;ll mention it when it&#8217;s relevant to what I&#8217;m writing about, but I don&#8217;t intend to present any new data of my own or do a comprehensive review of what&#8217;s available today.</p><p>The part I&#8217;m interested in sits between AGI and tracking usage. Using the high-level capabilities available today, what are the incremental steps that make the best use of them? I want to constrain myself to talk about practical things, but also be exploratory.</p><p>As I see it, this part tracks back to data. AI models, like any type of reasoning, thrive on data. The models in use today utilized a massive repository of data to enable what they do. This has raised questions, like how creators of data (which in this case includes creative outputs, not just data in a scientific form) are able to ask for and get compensation for the use of their creations. That&#8217;s a big topic all of its own, where there are both unresolved questions today and questions from the past that have taken on another level of relevance.</p><p>I&#8217;m not trying to focus on this use of data. Training of AI models will continue, but a lot of data isn&#8217;t going to be part of that. Sometimes it&#8217;s about privacy, but often it&#8217;s about recency. There&#8217;s also the question of relevance.</p><p>Privacy here has a lot of dimensions when we get past thinking of ourselves alone. It&#8217;s one decision how something I know about myself gets used; it&#8217;s another how something I know about you gets used. Add to that the organizations you or I are part of, that serve us, or that serve organizations we are part of, and it gets complex quickly.</p><p>If we think about how we&#8217;d want to interact with all of this, I think a common baseline might go something like this.
We&#8217;d like the people and organizations that serve us to do so intelligently, which does require them to be informed. We&#8217;d like to not repeat ourselves, or be burdened with the responsibility to inform every interaction. We&#8217;d like for people or organizations with our data to only do things that serve us.</p><p>That last point does lead to tension. We can&#8217;t honestly expect people or organizations to only serve us. In the extreme form: if someone does something bad to someone else, being unable to use a video of a public space to show the presence or actions of the first person would, to understate things, seem harmful. So there&#8217;s a clear need to not be absolute about people&#8217;s data only being used for them.</p><p>There&#8217;s some balance here, and that&#8217;s a great topic I don&#8217;t think I can adequately cover here. But one thing I can pull out of that topic is that the balance often incorporates how personal a set of data is. My appearance in a public space is personal, but it&#8217;s not as personal as my appearance in a private space.</p><h3><strong>AI and Private Data</strong></h3><p>I think I&#8217;m saying something non-controversial if I say that there are many tasks we could not expect to be done, either by AI or by human, without access to data. I cannot respond to a customer&#8217;s concerns without access to the email where those concerns were voiced.</p><p>One of the biggest challenges with AI adoption beyond what&#8217;s already been adopted is <a href="https://www.icf.com/insights/technology/data-ai-trends-federal-government-report-2025">getting data where it needs to be, when it needs to be there, while also ensuring it retains the privacy it should have</a>. Without this data, the tasks dependent upon it are not automatable. But this type of data has long been difficult to handle. Even without AI systems, ensuring that private data is kept private and used only where required has been a struggle.
This is true whether it represents a customer's personal data or a company's internal data.</p><p>As a direct user of AI platforms, you can handle some of this yourself. You can attach a document, or copy and paste it in. But for an AI system to be able to perform many operations, that&#8217;s impractical. What we generally want is for AI systems to automate the most boring parts of our jobs. Replacing those activities with some nearly mindless copy and paste isn&#8217;t going to help. More powerful AI systems perform longer chains of actions. Identifying what data is needed for those actions is a progressive, multi-step exploration. It&#8217;s both impractical and dumb to assume that having a human in the loop for every step is the solution here.</p><p>What is needed is workflows that put data where it needs to be. This isn&#8217;t a novel problem. For humans to fulfill the roles their jobs are composed of, they need practical ways to access data. Controls and safeguards around that data are often needed to prevent misuse. Studies have often shown that one of the greatest risks to information is insiders. That&#8217;s not the only motivation in building these systems. Even if it were safe to allow everyone in a company direct access to the database, most of them couldn&#8217;t use it, because they don&#8217;t have the skill set.</p><p>AI potentially would solve the second part, but not the first. If you allowed it unfettered access to a database, in theory it could learn how that data was organized and use it. But the first problem would remain. In some ways it&#8217;d be worse, as more people would know how to use it.</p><p>I think it&#8217;s very unlikely, then, that the current paradigm of building systems to manage access becomes less critical. In a sense, AI can help, by helping to build those systems. But building new ones, and updating existing ones to incorporate AI and the necessary new safeguards, has been a slow process.
If it wasn&#8217;t a slow process, we&#8217;d be seeing many more and broader benefits already.</p><p>The jobs that have seen the most impact from AI adoption&#8212;software development and call centers&#8212;perfectly illustrate this data access challenge. They represent opposite ends of the spectrum in data trust and system readiness.</p><p>The association of software developers with AI developers is a ready explanation for software development&#8217;s early adoption, but I think there&#8217;s more to the story than that alone. Software development turns out to be a high-trust activity. The average software developer has access to read a lot of code directly. Quite a lot of code is open-source, meaning anyone in the world can read it, and even within organizations, the effort to silo code visibility is relatively low. This is a little surprising considering how valuable most organizations consider their software, but there are a few good explanations here.</p><p>The influence of open-source conventions creates a cultural acceptance that pushes most controls to the output side. Software development does have a lot of controls about what goes into a code repository. There are generally one or two manual reviews, large numbers of automated checks, and a rock-solid history that tracks who did what. The solid focus on the output takes away some concerns about viewing. It also helps motivate liberal viewing, so that a wide group can be part of the review process.</p><p>Another enabling aspect is that software code, unlike other pieces of data a company manages, generally isn&#8217;t third-party. When third parties are engaged, they get binaries, not original code. The most fine-grained access controls in organizations revolve around managing data for third parties, either individuals like you and me, or organizations that are customers.
That makes sense in terms of retaining customers&#8217; trust, but is also further motivated by many compliance activities that legally require those types of controls.</p><p>With all this in mind, it might seem odd that call centers, which are so directly engaged in interacting with customers and thus customer data, would be another early implementer. I think a lot of people assume the reason here is that call center jobs are easy and so AI doesn&#8217;t have to be too smart to do them. I think that&#8217;s both an insufficient and a wrong explanation. The real reason is simpler: the call center industry, by necessity, had already invested heavily in the rigorous data workflows required to put exactly the right information in front of an employee at exactly the right time. It&#8217;s essentially the opposite end of the spectrum from the high-trust software developers. Call center staff operate in a low-trust environment.</p><p>Call center jobs aren&#8217;t easy. Partly it&#8217;s because of you. When people make a call, it&#8217;s often because they have a reason to be angry, or worried, or impatient. Of course, it doesn&#8217;t really make sense to take that anger out on call center staff, but we&#8217;re all human; talk to enough people, and one will let that reasoning slip. Dealing with emotionally charged customers isn't the only challenge involved in call center roles, though. Those roles also operate in a low-trust environment, which means lots of extra complications for anything they do.</p><p>When you experience bad service, it&#8217;s tempting to ascribe it to bad staff. While that&#8217;s also possible, it shouldn&#8217;t be the first assumption. Many times, bad service means bad systems. Systems are forced to be restrictive because they handle customer data, but also because call center staff often have an employment relationship that encourages low trust. Access to systems is often very constrained.
There may be a complex &#8220;runbook&#8221; to perform even seemingly simple activities. Permissions are micromanaged.</p><p>My point isn&#8217;t to suggest there aren&#8217;t good reasons for that, but to point out that these companies have already invested in making explicit all the questions about data and moving it about. They&#8217;ve realized that bad systems equal bad service, and so in many ways these systems were more ready for an AI layer than those elsewhere in companies, where general-purpose systems like email are the conduits for information.</p><p>The challenges of developing workflows and systems to manage the data are a key part of what AI engineers call context engineering. Many AI pilots don&#8217;t include that type of activity. Predictably, they produce fewer results. That&#8217;s not to say those pilots were mistakes; they often spread awareness of what to expect and of the challenges involved. But it does limit their ability to offer real productivity improvement. The mistake would be to keep repeating this over and over without moving on to the real challenges.</p><p>As with any technological change, there are problems of technical implementation and organizational implementation. Some organizations understand this more quickly than others. Some may never understand it fully, and find their competitive position eroded. It takes a surprisingly long time for bad organizations to fail. All of this slows actual adoption far beyond what the technical challenges alone do.</p><p>If you&#8217;re in an organization you&#8217;re trying to change, being aware of all of this helps a lot. It gives a consistent direction to head toward, a message to convey, and the ability to craft plans. That said, it doesn&#8217;t make it easy.</p><p>If you&#8217;re studying AI adoption from the outside and trying to understand differential rates of adoption, this is a key component. It&#8217;s also not all starting from ground zero here. 
The challenges of handling data existed before AI. Getting the right data to the right person at the right time, but never the inverse, has long been a hard problem. As the tale of software developers and call centers shows, the solutions haven&#8217;t always been the same. Sometimes the solutions are implemented as technical systems, as with call centers. Sometimes they depend more on human judgement, culture, or both.</p><p>Understanding this data bottleneck is one thing; acting on it is another. The divergence between high-trust and low-trust environments provides a clear roadmap for where investment, technological effort, and strategic planning should be focused.</p><h2><strong>So, What Now?</strong></h2><h3><strong>For Investors: Bet on the "Plumbers"</strong></h3><p>One takeaway for an investor should be that AI&#8217;s adoption curve is more complex than its supply curve. Consumer workflows have been able to grow on an awareness wave, coupled with capability waves that were more or less mediated by the supply of compute capacity. Future growth won&#8217;t come from compute power alone; it will be driven by companies that can navigate the messy, real-world challenges&#8212;both technical and organizational&#8212;of handling private data.</p><p>You should look beyond models and compute capacity to firms with deep capabilities in dealing with those impediments. Firms that have already done this for their own data will have the advantage of being able to adopt AI more deeply, sooner. Firms that can help others do this will be in demand.</p><p>Don&#8217;t assume that only firms founded with the purpose of addressing AI data challenges are capable of doing so. Many existing firms have been addressing data challenges for some time now. 
If they apply that experience in the new domain, that&#8217;s an advantage they start with, both when competing against traditional competitors and when maintaining relevance during a disruptive wave.</p><h3><strong>For Technology Professionals: Build the "Pipes"</strong></h3><p>As a technology professional, you should be thinking about the challenges of getting your private data where it needs to be, and how to reduce that friction. While those challenges could be tackled one by one, that is not the efficient route. There are repeating patterns here that are relevant. If you&#8217;re in the type of role that builds such patterns, you should start focusing on identifying what&#8217;s missing and building it. If you&#8217;re in a role that leverages platforms others build, you should do the research to know who has a platform for this and who has the commitment to keep building that platform because they understand the needs and opportunities here.</p><h3><strong>For Business Leaders: Evolve and Revolutionize</strong></h3><p>One lesson from prior technology revolutions is that while companies can gain by integrating a technology into an existing business, there&#8217;s always an even greater gain to be made by a bigger rethinking of business processes. Consider <a href="https://www.allaboutlean.com/automotive-assembly-line-evolution-1/">machine tools and static assembly vs. an assembly line</a>, for example.</p><p>The same will undoubtedly be true with AI and existing business processes. That insight can encourage companies to attempt to remake their business processes. On net, that&#8217;s a good thing, as there are so many forces at work encouraging companies not to do this, often with the result that the business process only changes when a new company introduces it and disrupts existing companies. Sometimes those companies fail to ever change; sometimes they are very slow and lose competitiveness in the process. 
That said, the change has to be the right change, and looking back at disruptive startups, we only remember the ones that succeeded. Some of those that failed did so because the disruption they attempted wasn&#8217;t well structured. An incumbent attempting to disrupt itself can make the same mistake.</p><p>As such, it makes sense to understand what can be done by evolution, alongside revolution. The insight here is that giving up on the modernization of the existing business is certainly premature. Don&#8217;t convince yourself to stop maintaining the &#8220;legacy&#8221; because a revolution is coming. You&#8217;re far more likely to make the leap to the other side if you find the narrowest point to cross.</p><h2><strong>Further Context &amp; Definitions</strong></h2><h3><strong>What about tacit knowledge?</strong></h3><p>Tacit knowledge is underrated in businesses because it&#8217;s so invisible. It&#8217;s not written down, it&#8217;s hard to measure, and you mostly know it&#8217;s there from its effects. Thus it&#8217;s correct to point out that general-purpose large language models don&#8217;t consume tacit knowledge in their training.</p><p>Where that thinking can go awry, though, is in equating tacit knowledge with private data, as if it were a subset. This leads to the thought that what&#8217;s needed is to extract that tacit knowledge and turn it into data. The reality is that tacit knowledge is more about experience than data. In my experience, public data is generally sufficient for its acquisition.</p><p>While private data might include trade secrets, mostly it&#8217;s about current facts that are simply critical to carrying out an action. Tacit knowledge has more to do with the ability to make good judgements about using data of any type to make decisions. 
While I wouldn&#8217;t suggest that LLMs are excellent at complex judgements, they are a lot better than they are commonly given credit for, and the potential that they may become better is one of the great uncertainties.</p><p>I don&#8217;t think the next frontier for AI is particularly dependent on extracting tacit knowledge. There are opportunities there for sure, but it&#8217;s also a difficult thing to do, so I&#8217;d be sure you have real experts in the domain you&#8217;re extracting from, in the process of extraction itself, and in the training necessary to turn extracted knowledge into real systems. A half-hearted effort will probably fail.</p><p>I think progress will be made faster via integration of private data, and that general-purpose models contain enough tacit knowledge to put that data to practical use in short order.</p><h3><strong>What&#8217;s AGI?</strong></h3><p>AGI, or artificial general intelligence, refers to the science-fiction-like possibilities where AI capabilities exceed human capabilities in a general, rather than a local, way. Those are capabilities that do not exist today.</p><h3><strong>Why should we sometimes avoid talking about AGI?</strong></h3><p>My honest take on AGI is that its future timeline is indeterminate. I think anyone who tells you otherwise either has access to some really interesting data I&#8217;d like to see, or they are making overconfident predictions. Now, that said, making predictions here isn&#8217;t entirely wrong. Someone might say they have an argument for 2040 as a reasonable timeline. Every prediction, though, must admit to being subject not just to miscalculation of the actual numbers input, but also to novel influences we don&#8217;t have a structure to reason about yet. This cuts a bit both ways. 
We could be surprised by something that emerges in 2 years as much as we could be disappointed by predictions for 2040 receding farther and farther as we approach.</p><p>It&#8217;s not that hard to use your imagination, or connect with someone else&#8217;s, to think about what could occur under the science fiction possibilities. But it&#8217;s less obvious how you put that into present-day planning. The investment in it can&#8217;t be too high if your timeline is uncertain, and the effects have wide divergences in possibility. A lot of the debate that happens here is closer to mood affiliation with optimistic or pessimistic outcomes than hard reasoning about probabilities, which makes it difficult to do much in terms of future planning other than more research.</p><p>It&#8217;s great that this is occurring, and I&#8217;ll talk about it at other times, but I think a great mistake in much AI debate is dismissing the need to think about short-term impacts under conventional assumptions. It&#8217;s no more reasonable to enter a room talking about short-term impacts and change the topic to &#8220;what about AGI?&#8221; than it is to walk into an AGI discussion and say &#8220;that&#8217;s impossible, who cares?&#8221;. Both conversations need enough space between them to progress and add nuance without collapsing into a debate between them.</p><p>To sum this all up: I think there are two discussions, &#8220;If AGI&#8221; and &#8220;If less than AGI&#8221;, which are both interesting. While there is an obvious connection point between them, that connection point keeps coming back with answers like &#8220;needs more information&#8221;. 
In light of that, putting the connection aside, and thinking about each independently makes a lot more sense than treating it as one big continuum.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://substack.norabble.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">norabble is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[The Economic Future from and of AI]]></title><description><![CDATA[Part 2: Risks in the Practical Horizon]]></description><link>https://substack.norabble.com/p/the-economic-future-from-and-of-ai-cf1</link><guid isPermaLink="false">https://substack.norabble.com/p/the-economic-future-from-and-of-ai-cf1</guid><dc:creator><![CDATA[Ryan Baker]]></dc:creator><pubDate>Mon, 08 Sep 2025 12:05:18 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/6e78a60e-800e-4adf-9223-6f4fd217c034_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In <a href="https://substack.norabble.com/p/the-economic-future-from-and-of-ai?r=10qod6">Part One</a>, I discussed some of the existential economic concerns that Artificial Intelligence forces us to consider. 
In this second part, I&#8217;ll focus more directly on the practical, near-term landscape of familiar economic forces.</p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;9fa516d5-c82e-416b-ad0a-f4c75b7c64da&quot;,&quot;caption&quot;:&quot;This will be part one of a two part series. In the first part, I want to outline some of my views about how salient a set of what we might call existential concerns about AI should be. In part two, I want to discuss some more immediate interactions with today's economy.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;lg&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;The Economic Future from and of AI&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:61710810,&quot;name&quot;:&quot;Ryan Baker&quot;,&quot;bio&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2376ff1a-8f8b-4e42-b164-1855d9e7999b_140x105.png&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2025-09-07T14:08:35.292Z&quot;,&quot;cover_image&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d02180eb-af84-4846-b470-d641afa59da1_512x512.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://substack.norabble.com/p/the-economic-future-from-and-of-ai&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:173016480,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:0,&quot;comment_count&quot;:0,&quot;publication_id&quot;:null,&quot;publication_name&quot;:&quot;norabble&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!_1Oy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97750d25-7e33-4ebe-87af-6f4b3d0e4138_348x348.png&quot;,&quot;belowTheFold&quot;:false,&quot;youtube_url&quot;:null,&quot;show_links&quot;:n
ull,&quot;feed_url&quot;:null}"></div><p>In the near term, AI&#8217;s integration into the economy will manifest through familiar channels, primarily impacting labor markets, financial investment, and industrial competition.</p><p>A more novel danger is what I call &#8220;The Great Masking&#8221;: the risk that intense investor optimism in AI is hiding <a href="https://norabble.substack.com/p/delayed-effects-and-dangerous-unpredictability">serious, slow-building threats to the wider economy</a>. Those threats already hide their effects, and AI investment serves as another layer of protection, masking the effects of tariffs or irresponsible fiscal policy.</p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;12163022-2a15-4739-b149-471b546c1b7e&quot;,&quot;caption&quot;:&quot;Economists have warned of tariffs' impact on the US economy. 
The stock market has dropped when policies were announced, recovered when policies partially undone, or delayed, and then reached new records, while continued new announcements and implementations occurred.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;lg&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;Delayed Effects and Dangerous Unpredictability&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:61710810,&quot;name&quot;:&quot;Ryan Baker&quot;,&quot;bio&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2376ff1a-8f8b-4e42-b164-1855d9e7999b_140x105.png&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2025-08-03T16:10:25.588Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!dQsg!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4882805d-8f7b-4998-9dd1-dec9d944b401_1024x462.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://substack.norabble.com/p/delayed-effects-and-dangerous-unpredictability&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:170009169,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:2,&quot;comment_count&quot;:0,&quot;publication_id&quot;:null,&quot;publication_name&quot;:&quot;norabble&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!_1Oy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97750d25-7e33-4ebe-87af-6f4b3d0e4138_348x348.png&quot;,&quot;belowTheFold&quot;:false,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><h3><strong>Labor Market Disruption: Beyond Simple Job Churn</strong></h3><p>Historically, technological advancements have caused job disruption, not a net 
reduction in jobs. Some roles, companies, and even industries shrink or fail, while others are created to meet new or previously unmet needs. We don&#8217;t have reasons to think that AI is breaking this pattern. While we have reasons to wonder about this in the long-term, those are the existential concerns we covered in part one. In the present day, we&#8217;d need more concrete data. This data is limited and not conclusive at this point.</p><p>For instance, <a href="https://open.substack.com/pub/derekthompson/p/the-evidence-that-ai-is-destroying">Derek Thompson supports the idea that AI is already shifting employment among recent grads</a>, but he is careful to note that overconfidence is unadvisable.</p><div class="embedded-post-wrap" data-attrs="{&quot;id&quot;:172039373,&quot;url&quot;:&quot;https://www.derekthompson.org/p/the-evidence-that-ai-is-destroying&quot;,&quot;publication_id&quot;:2880588,&quot;publication_name&quot;:&quot;Derek Thompson&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!uPIO!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F38b0f850-caa7-417a-bc0b-5b7224dd1f25_888x888.png&quot;,&quot;title&quot;:&quot;The Evidence That AI Is Destroying Jobs For Young People Just Got Stronger&quot;,&quot;truncated_body_text&quot;:&quot;In a moment with many important economic questions and fears, I continue to find this among the more interesting mysteries about the US economy in the long run: Is artificial intelligence already taking jobs from young people?&quot;,&quot;date&quot;:&quot;2025-08-27T10:03:12.967Z&quot;,&quot;like_count&quot;:344,&quot;comment_count&quot;:48,&quot;bylines&quot;:[{&quot;id&quot;:157561,&quot;name&quot;:&quot;Derek 
Thompson&quot;,&quot;handle&quot;:&quot;derekthompson&quot;,&quot;previous_name&quot;:null,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!oFSS!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ed4fc85-9214-4460-a3e7-c80fca4a3c3d_872x872.png&quot;,&quot;bio&quot;:&quot;Abundance and other ideas to make the world a better place&quot;,&quot;profile_set_up_at&quot;:&quot;2021-10-25T17:19:21.553Z&quot;,&quot;reader_installed_at&quot;:&quot;2022-03-09T16:22:19.302Z&quot;,&quot;publicationUsers&quot;:[{&quot;id&quot;:2928158,&quot;user_id&quot;:157561,&quot;publication_id&quot;:2880588,&quot;role&quot;:&quot;admin&quot;,&quot;public&quot;:true,&quot;is_primary&quot;:true,&quot;publication&quot;:{&quot;id&quot;:2880588,&quot;name&quot;:&quot;Derek Thompson&quot;,&quot;subdomain&quot;:&quot;derekthompson&quot;,&quot;custom_domain&quot;:&quot;www.derekthompson.org&quot;,&quot;custom_domain_optional&quot;:false,&quot;hero_text&quot;:&quot;A newsletter about abundance and building a better world.&quot;,&quot;logo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/38b0f850-caa7-417a-bc0b-5b7224dd1f25_888x888.png&quot;,&quot;author_id&quot;:157561,&quot;primary_user_id&quot;:157561,&quot;theme_var_background_pop&quot;:&quot;#FF6719&quot;,&quot;created_at&quot;:&quot;2024-08-13T01:26:09.408Z&quot;,&quot;email_from_name&quot;:&quot;Derek Thompson&quot;,&quot;copyright&quot;:&quot;Derek Thompson&quot;,&quot;founding_plan_name&quot;:&quot;Superfan 
Tier&quot;,&quot;community_enabled&quot;:true,&quot;invite_only&quot;:false,&quot;payments_state&quot;:&quot;enabled&quot;,&quot;language&quot;:null,&quot;explicit&quot;:false,&quot;homepage_type&quot;:&quot;magaziney&quot;,&quot;is_personal_mode&quot;:false}}],&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:1000,&quot;status&quot;:{&quot;bestsellerTier&quot;:1000,&quot;subscriberTier&quot;:1,&quot;leaderboard&quot;:{&quot;ranking&quot;:&quot;trending&quot;,&quot;rank&quot;:1,&quot;publicationName&quot;:&quot;Derek Thompson&quot;,&quot;label&quot;:&quot;Business&quot;,&quot;categoryId&quot;:62},&quot;vip&quot;:false,&quot;badge&quot;:{&quot;type&quot;:&quot;bestseller&quot;,&quot;tier&quot;:1000}}}],&quot;utm_campaign&quot;:null,&quot;belowTheFold&quot;:true,&quot;type&quot;:&quot;newsletter&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="EmbeddedPostToDOM"><a class="embedded-post" native="true" href="https://www.derekthompson.org/p/the-evidence-that-ai-is-destroying?utm_source=substack&amp;utm_campaign=post_embed&amp;utm_medium=web"><div class="embedded-post-header"><img class="embedded-post-publication-logo" src="https://substackcdn.com/image/fetch/$s_!uPIO!,w_56,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F38b0f850-caa7-417a-bc0b-5b7224dd1f25_888x888.png" loading="lazy"><span class="embedded-post-publication-name">Derek Thompson</span></div><div class="embedded-post-title-wrapper"><div class="embedded-post-title">The Evidence That AI Is Destroying Jobs For Young People Just Got Stronger</div></div><div class="embedded-post-body">In a moment with many important economic questions and fears, I continue to find this among the more interesting mysteries about the US economy in the long run: Is artificial intelligence already taking jobs from young people&#8230;</div><div class="embedded-post-cta-wrapper"><span class="embedded-post-cta">Read more</span></div><div 
class="embedded-post-meta">8 months ago &#183; 344 likes &#183; 48 comments &#183; Derek Thompson</div></a></div><p>When looking at recent data, it&#8217;s crucial to distinguish AI&#8217;s impact from other confounding factors. For instance, a perceived slowdown in hiring for software developers&#8212;a key data point in the discussion around recent grads&#8212;may be less a result of AI replacing jobs and more a consequence of the post-pandemic hiring rebalancing in the tech sector, which saw furious hiring in 2020-2021 followed by a correction. This highlights the difficulty in isolating AI&#8217;s specific effects from broader economic trends.</p><p>To make the case that AI is breaking the prior patterns, you&#8217;d need to make the case for <strong>structural unemployment</strong>. This is not merely the temporary friction of workers moving between jobs, but a fundamental, lasting mismatch between the skills held by the workforce and the skills demanded by an AI-driven economy. If AI automates a wide swath of cognitive and manual tasks, a large segment of the population may find their skills devalued, creating a challenge that standard economic churn cannot easily resolve. This practical concern, if it grows large enough, becomes the mechanism for a more existential crisis.</p><p>While that is conceptually sound, its applicability depends on data. Weak data suggesting shifting employment among recent grads doesn&#8217;t establish structural unemployment, much less attribute it to AI rather than to other economic conditions. </p><h3><strong>Investment, Bubbles, and Financial Contagion</strong></h3><p>The development of AI is fueled by staggering levels of investment, creating both immense opportunity and significant financial risk. 
<a href="https://www.economist.com/finance-and-economics/2025/09/07/what-if-the-ai-stockmarket-blows-up">The source of this capital is critical to understanding the potential fallout</a>.</p><ul><li><p><strong>Investment from Corporate Profits:</strong> When tech giants fund AI development from their vast cash reserves, the primary risk is an opportunity cost. If these investments fail to generate expected returns, shareholder value will fall, but the direct impact on the broader financial system is relatively contained.</p></li><li><p><strong>Investment from Bank Loans:</strong> When investments are funded by bank loans and other debt instruments, the risk of contagion is much higher. If AI companies fail to meet lofty profit expectations, they could default on these loans. A wave of defaults could destabilize the lending institutions, forcing them to tighten credit across the entire economy. This is how widespread unemployment can occur. A person laid off may seek a new job or try to start their own business, but if loans for new ventures aren't available, job creation stalls. It is this failure to create new jobs, not just the disruption of old ones, that leads to prolonged unemployment.</p></li></ul><p>This leads to the risk of an <strong>AI investment bubble</strong>. This doesn't require investors to be irrational, but rather for individually rational views to create a collectively irrational market. For example, one group of investors may rationally believe Company A will dominate the market, while another group rationally believes Company B will. If both groups invest heavily based on their beliefs, the aggregate market valuation can reflect a future where both companies win, an impossible outcome. The market as a whole may be pricing in a cumulative level of future profit that the industry cannot possibly deliver due to competition. 
The dot-com bubble of 2000 serves as a pertinent example of a financial collapse sparked by over-enthusiasm for a transformative technology, even one that ultimately delivered immense productivity gains.</p><h3><strong>Market Structure: Layers and Concentration</strong></h3><p>The AI industry is developing in distinct layers: hardware providers (e.g., AI chips), compute providers (cloud services), model providers (foundational models), and workflow integration (applying AI to specific industries). Competition within and between these layers will shape the distribution of profits. A key uncertainty is <a href="https://joincolossus.com/article/ai-will-not-make-you-rich/">who will capture the value at the workflow layer</a>.</p><ul><li><p><strong>Startups:</strong> New, specialized startups could focus on different niches, leading to a differentiated market with high potential profits for the successful disruptors.</p></li><li><p><strong>Cloud Providers:</strong> The large cloud computing companies could extend their offerings, creating products for each market. This would likely lead to less differentiation as providers feel pressure to match each other's offerings.</p></li><li><p><strong>Existing Companies:</strong> Incumbents could develop their own AI solutions internally. In this case, profits that might have gone to startups would remain within these established firms.</p></li></ul><p>While these layers are distinct, there will be pressure to break them down. Cloud providers, for instance, have a strong incentive to expand into the other layers. The economics of AI&#8212;with massive upfront R&amp;D costs and powerful network effects&#8212;create a tendency toward <strong>market concentration</strong>.</p><p>It&#8217;s useful not to overstate this point though. 
Multiple layers allow for multiple points where concentration, and the effects of concentration, can be weakened.</p><p>Model providers are only as good as their latest model, and so far current investments haven&#8217;t provided a clear moat. Mostly, AI companies with early success have stayed at the forefront through ever-increasing investment, not by resting on their laurels. New entrants have arrived and shown results that would have rendered early investments fully obsolete, at a fraction of those costs.</p><p>Amazon, Azure, and GCP may remain the primary compute providers, but three competitors are enough to retain a competitive market, and other avenues for compute will remain. In the extreme scenario that competitive forces were muted, the efficiency factor from that scale is still within a single order of magnitude, limiting the exploitative capacity here.</p><p>Nvidia has had great success, but stays in that place through continued investment. If it ever did hit a brick wall in its ability to improve performance, there&#8217;s every reason to expect competitors to reach that same wall quickly, and thus achieve complete competitive parity.</p><p>But above all, each lower layer can only charge something less than what higher layers can capture in terms of value. A higher layer cannot pay for services it has not gathered enough revenue to pay for. With workflow at the top, and the most diversified of all the layers, this is a strong force toward a broader distribution of economic surplus.</p><h3><strong>A Counterforce: Defensive Competition and Economic Surplus</strong></h3><p>For some areas, like cloud providers, forces toward concentration aren&#8217;t demonstrably stronger than existing competitive forces. Investing in AI is not simply an attempt to build a new business, but one to retain a competitive position in an existing one. 
This dynamic could ultimately lead to more economic surplus being captured by society generally through usage, rather than by companies through the various means of capturing revenue. This surplus can be:</p><ul><li><p><strong>Direct:</strong> Users may get free services or receive far more value than the price they are charged for those services.</p></li><li><p><strong>Indirect:</strong> Many smaller businesses, empowered by more accessible AI tools, become more efficient or capable and pass on those savings or enhanced capabilities to their own customers.</p></li></ul><p>While that is a great deal for consumers generally, it&#8217;s a risk to investors. If they assumed a different outcome, or followed others without considering the outcome, their paper wealth will decrease. To the degree that investors are individuals or entities that can survive that risk, it&#8217;s not something we need to worry about, but that&#8217;s not always the case. To some degree we are those investors, but even more, we interact with a system that could be disrupted by turmoil.</p><h3><strong>The Great Masking: How AI Optimism Obscures Deeper Economic Risks</strong></h3><p>While the financial risks within the AI sector are significant, a more novel and under-discussed danger is how the AI investment boom may be masking other serious, slow-building threats to the economy. 
The intense market optimism and capital flows generated by AI can create a "sugar high," temporarily hiding the <a href="https://open.substack.com/pub/pricetheory/p/6-reasons-why-tariffs-are-a-terrible">destructive effects of tariffs</a>, trade disputes, and irresponsible fiscal policy.</p><div class="embedded-post-wrap" data-attrs="{&quot;id&quot;:170300963,&quot;url&quot;:&quot;https://www.economicforces.xyz/p/6-reasons-why-tariffs-are-a-terrible&quot;,&quot;publication_id&quot;:86578,&quot;publication_name&quot;:&quot;Economic Forces&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!oSpe!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Faec57f84-07b0-4cbc-b1df-d29997f6fa2b_493x493.png&quot;,&quot;title&quot;:&quot;6 reasons why tariffs are a terrible way to raise revenue&quot;,&quot;truncated_body_text&quot;:&quot;Economists are not fans of tariffs. There, the newsletter is coming out the gate with some hot takes! Meanwhile, regular people think tariffs are good for jobs. At the very least, they may ask, &#8220;What&#8217;s the big deal? It&#8217;s just a tax on imports. 
We need taxes to fix the deficit.&#8221;&quot;,&quot;date&quot;:&quot;2025-08-07T17:08:10.663Z&quot;,&quot;like_count&quot;:116,&quot;comment_count&quot;:77,&quot;bylines&quot;:[{&quot;id&quot;:4279841,&quot;name&quot;:&quot;Brian Albrecht&quot;,&quot;handle&quot;:&quot;briancalbrecht&quot;,&quot;previous_name&quot;:null,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F245be494-e7c3-4d75-826b-0ec5096168e7_2048x2048.jpeg&quot;,&quot;bio&quot;:&quot;Using price theory to understand the world&quot;,&quot;profile_set_up_at&quot;:&quot;2021-04-29T17:34:53.522Z&quot;,&quot;reader_installed_at&quot;:&quot;2022-03-09T14:23:02.374Z&quot;,&quot;publicationUsers&quot;:[{&quot;id&quot;:8257,&quot;user_id&quot;:4279841,&quot;publication_id&quot;:86578,&quot;role&quot;:&quot;admin&quot;,&quot;public&quot;:true,&quot;is_primary&quot;:false,&quot;publication&quot;:{&quot;id&quot;:86578,&quot;name&quot;:&quot;Economic Forces&quot;,&quot;subdomain&quot;:&quot;pricetheory&quot;,&quot;custom_domain&quot;:&quot;www.economicforces.xyz&quot;,&quot;custom_domain_optional&quot;:false,&quot;hero_text&quot;:&quot;Pondering price theory, past and present. 
A weekly newsletter covering all things economics.&quot;,&quot;logo_url&quot;:&quot;https://bucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com/public/images/aec57f84-07b0-4cbc-b1df-d29997f6fa2b_493x493.png&quot;,&quot;author_id&quot;:13367528,&quot;primary_user_id&quot;:6926582,&quot;theme_var_background_pop&quot;:&quot;#FF6B00&quot;,&quot;created_at&quot;:&quot;2020-08-24T13:06:05.139Z&quot;,&quot;email_from_name&quot;:&quot;Economic Forces&quot;,&quot;copyright&quot;:&quot;Brian Albrecht and Josh Hendrickson&quot;,&quot;founding_plan_name&quot;:&quot;Price Theory Enthusiast &quot;,&quot;community_enabled&quot;:true,&quot;invite_only&quot;:false,&quot;payments_state&quot;:&quot;enabled&quot;,&quot;language&quot;:null,&quot;explicit&quot;:false,&quot;homepage_type&quot;:&quot;magaziney&quot;,&quot;is_personal_mode&quot;:false}}],&quot;twitter_screen_name&quot;:&quot;BrianCAlbrecht&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:100,&quot;status&quot;:{&quot;bestsellerTier&quot;:100,&quot;subscriberTier&quot;:null,&quot;leaderboard&quot;:null,&quot;vip&quot;:false,&quot;badge&quot;:{&quot;type&quot;:&quot;bestseller&quot;,&quot;tier&quot;:100}}}],&quot;utm_campaign&quot;:null,&quot;belowTheFold&quot;:true,&quot;type&quot;:&quot;newsletter&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="EmbeddedPostToDOM"><a class="embedded-post" native="true" href="https://www.economicforces.xyz/p/6-reasons-why-tariffs-are-a-terrible?utm_source=substack&amp;utm_campaign=post_embed&amp;utm_medium=web"><div class="embedded-post-header"><img class="embedded-post-publication-logo" src="https://substackcdn.com/image/fetch/$s_!oSpe!,w_56,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Faec57f84-07b0-4cbc-b1df-d29997f6fa2b_493x493.png" loading="lazy"><span class="embedded-post-publication-name">Economic Forces</span></div><div 
class="embedded-post-title-wrapper"><div class="embedded-post-title">6 reasons why tariffs are a terrible way to raise revenue</div></div><div class="embedded-post-body">Economists are not fans of tariffs. There, the newsletter is coming out the gate with some hot takes! Meanwhile, regular people think tariffs are good for jobs. At the very least, they may ask, &#8220;What&#8217;s the big deal? It&#8217;s just a tax on imports. We need taxes to fix the deficit&#8230;</div><div class="embedded-post-cta-wrapper"><span class="embedded-post-cta">Read more</span></div><div class="embedded-post-meta">9 months ago &#183; 116 likes &#183; 77 comments &#183; Brian Albrecht</div></a></div><p>The mechanism of this interaction is perilous. These external factors, such as tariffs breaking down global cooperation or budget deficits driving inflation, act as a persistent drag on the economy. They increase costs for businesses and reduce purchasing power for consumers. In a normal environment, the negative effects of these policies would be more visible in economic data. However, the powerful forward-looking optimism of the AI boom can overwhelm these signals, keeping investment sentiment high and stock market valuations buoyant.</p><p>This phenomenon aligns with <a href="https://open.substack.com/pub/paulkrugman/p/why-arent-markets-freaking-out">observations from economists like Paul Krugman</a>, who notes that markets are often poor at pricing in long-term policy risks, allowing them to build until a crisis becomes undeniable. 
An AI boom and tariffs both fit this narrative of market myopia, focusing attention on future technological gains while downplaying present-day policy costs.</p><div class="embedded-post-wrap" data-attrs="{&quot;id&quot;:172134133,&quot;url&quot;:&quot;https://paulkrugman.substack.com/p/why-arent-markets-freaking-out&quot;,&quot;publication_id&quot;:277517,&quot;publication_name&quot;:&quot;Paul Krugman&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!e1Ly!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7f7295f5-c1bd-4d62-b641-6dfbf34258f8_951x951.png&quot;,&quot;title&quot;:&quot;Why Aren&#8217;t Markets Freaking Out?&quot;,&quot;truncated_body_text&quot;:&quot;For those of us who follow economic policy in general and the Federal Reserve in particular, the past week has been shocking and terrifying. Donald Trump&#8217;s ongoing attempts to bully the Fed into large interest rate cuts have escalated into an attempt to fire Lisa Cook, a member of the Fed&#8217;s Board of Governors, over unsubstantiated claims that she commit&#8230;&quot;,&quot;date&quot;:&quot;2025-08-28T10:30:30.476Z&quot;,&quot;like_count&quot;:2766,&quot;comment_count&quot;:597,&quot;bylines&quot;:[{&quot;id&quot;:26817325,&quot;name&quot;:&quot;Paul Krugman&quot;,&quot;handle&quot;:&quot;paulkrugman&quot;,&quot;previous_name&quot;:null,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5cd097e5-2750-4a19-aaf3-6425407e9b6c_951x951.jpeg&quot;,&quot;bio&quot;:&quot;Professor, CUNY Grad Center, Nobel laureate and former columnist, NY Times. 
Also, according to Donald Trump, a &#8220;Deranged BUM.&#8221;&quot;,&quot;profile_set_up_at&quot;:&quot;2022-12-17T15:45:57.485Z&quot;,&quot;reader_installed_at&quot;:&quot;2024-12-11T21:28:06.827Z&quot;,&quot;publicationUsers&quot;:[{&quot;id&quot;:227323,&quot;user_id&quot;:26817325,&quot;publication_id&quot;:277517,&quot;role&quot;:&quot;admin&quot;,&quot;public&quot;:true,&quot;is_primary&quot;:true,&quot;publication&quot;:{&quot;id&quot;:277517,&quot;name&quot;:&quot;Paul Krugman&quot;,&quot;subdomain&quot;:&quot;paulkrugman&quot;,&quot;custom_domain&quot;:null,&quot;custom_domain_optional&quot;:false,&quot;hero_text&quot;:&quot;Notes on economics and more&quot;,&quot;logo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7f7295f5-c1bd-4d62-b641-6dfbf34258f8_951x951.png&quot;,&quot;author_id&quot;:26817325,&quot;primary_user_id&quot;:26817325,&quot;theme_var_background_pop&quot;:&quot;#E8B500&quot;,&quot;created_at&quot;:&quot;2021-02-03T15:49:15.992Z&quot;,&quot;email_from_name&quot;:null,&quot;copyright&quot;:&quot;Paul Krugman&quot;,&quot;founding_plan_name&quot;:&quot;Founding Member&quot;,&quot;community_enabled&quot;:true,&quot;invite_only&quot;:false,&quot;payments_state&quot;:&quot;enabled&quot;,&quot;language&quot;:null,&quot;explicit&quot;:false,&quot;homepage_type&quot;:&quot;magaziney&quot;,&quot;is_personal_mode&quot;:false}}],&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:10000,&quot;status&quot;:{&quot;bestsellerTier&quot;:10000,&quot;subscriberTier&quot;:10,&quot;leaderboard&quot;:{&quot;ranking&quot;:&quot;trending&quot;,&quot;rank&quot;:4,&quot;publicationName&quot;:&quot;Paul Krugman&quot;,&quot;label&quot;:&quot;U.S. 
Politics&quot;,&quot;categoryId&quot;:76739},&quot;vip&quot;:false,&quot;badge&quot;:{&quot;type&quot;:&quot;bestseller&quot;,&quot;tier&quot;:10000}}}],&quot;utm_campaign&quot;:null,&quot;belowTheFold&quot;:true,&quot;type&quot;:&quot;newsletter&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="EmbeddedPostToDOM"><a class="embedded-post" native="true" href="https://paulkrugman.substack.com/p/why-arent-markets-freaking-out?utm_source=substack&amp;utm_campaign=post_embed&amp;utm_medium=web"><div class="embedded-post-header"><img class="embedded-post-publication-logo" src="https://substackcdn.com/image/fetch/$s_!e1Ly!,w_56,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7f7295f5-c1bd-4d62-b641-6dfbf34258f8_951x951.png" loading="lazy"><span class="embedded-post-publication-name">Paul Krugman</span></div><div class="embedded-post-title-wrapper"><div class="embedded-post-title">Why Aren&#8217;t Markets Freaking Out?</div></div><div class="embedded-post-body">For those of us who follow economic policy in general and the Federal Reserve in particular, the past week has been shocking and terrifying. Donald Trump&#8217;s ongoing attempts to bully the Fed into large interest rate cuts have escalated into an attempt to fire Lisa Cook, a member of the Fed&#8217;s Board of Governors, over unsubstantiated claims that she commit&#8230;</div><div class="embedded-post-cta-wrapper"><span class="embedded-post-cta">Read more</span></div><div class="embedded-post-meta">8 months ago &#183; 2766 likes &#183; 597 comments &#183; Paul Krugman</div></a></div><p><strong><a href="https://open.substack.com/pub/matthewyglesias/p/how-the-stock-market-learned-to-love">Matthew Yglesias expands on this</a></strong>, explaining why this effect may be particularly severe now. The logic is that traders believe Trump is so sensitive to the stock market that if a sell-off occurs, Trump will back down. 
This creates a powerful incentive for investors to ignore the initial disruptive action and "buy the dip," confident that the policy will be reversed. This confidence, however, can prevent the very market crash needed to trigger the policy reversal. This dynamic, where the market's faith in its own influence creates a dangerous tolerance for risk, is amplified by the sheer optimism surrounding American AI companies. Since the major AI players are American, and are perceived as having the support of the political establishment, investors are willing to overlook institutional risks that would cause panic in other circumstances.</p><div class="embedded-post-wrap" data-attrs="{&quot;id&quot;:172176998,&quot;url&quot;:&quot;https://www.slowboring.com/p/how-the-stock-market-learned-to-love&quot;,&quot;publication_id&quot;:159185,&quot;publication_name&quot;:&quot;Slow Boring &quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!gzxV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fceeb681e-a14d-4bbb-a8fe-951c29603e3f_256x256.png&quot;,&quot;title&quot;:&quot;How the stock market learned to love Trump &quot;,&quot;truncated_body_text&quot;:&quot;Whenever Donald Trump does something that amounts to putting the institutional foundations of American prosperity through the wood chipper &#8212; whether that&#8217;s attacking Fed independence, turning his back on mRNA technology, attacking the integrity of federal economic statistics, or any of a dozen other things that happened this su&#8230;&quot;,&quot;date&quot;:&quot;2025-09-02T10:02:51.952Z&quot;,&quot;like_count&quot;:330,&quot;comment_count&quot;:577,&quot;bylines&quot;:[{&quot;id&quot;:580004,&quot;name&quot;:&quot;Matthew 
Yglesias&quot;,&quot;handle&quot;:&quot;matthewyglesias&quot;,&quot;previous_name&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/20964455-401a-494d-a8ef-9835b34e9809_3024x3024.png&quot;,&quot;bio&quot;:&quot;Blogger, journalist, podcaster, trying to get back to my roots. &quot;,&quot;profile_set_up_at&quot;:&quot;2021-04-21T11:11:05.347Z&quot;,&quot;reader_installed_at&quot;:&quot;2022-06-09T02:45:24.786Z&quot;,&quot;publicationUsers&quot;:[{&quot;id&quot;:18017,&quot;user_id&quot;:580004,&quot;publication_id&quot;:159185,&quot;role&quot;:&quot;admin&quot;,&quot;public&quot;:true,&quot;is_primary&quot;:true,&quot;publication&quot;:{&quot;id&quot;:159185,&quot;name&quot;:&quot;Slow Boring &quot;,&quot;subdomain&quot;:&quot;matthewyglesias&quot;,&quot;custom_domain&quot;:&quot;www.slowboring.com&quot;,&quot;custom_domain_optional&quot;:false,&quot;hero_text&quot;:&quot;Start your day with pragmatic takes on politics and public policy.&quot;,&quot;logo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ceeb681e-a14d-4bbb-a8fe-951c29603e3f_256x256.png&quot;,&quot;author_id&quot;:580004,&quot;primary_user_id&quot;:580004,&quot;theme_var_background_pop&quot;:&quot;#121BFA&quot;,&quot;created_at&quot;:&quot;2020-11-05T16:20:32.177Z&quot;,&quot;email_from_name&quot;:&quot;Matthew Yglesias&quot;,&quot;copyright&quot;:&quot;Matthew Yglesias&quot;,&quot;founding_plan_name&quot;:&quot;Avid Supporter&quot;,&quot;community_enabled&quot;:true,&quot;invite_only&quot;:false,&quot;payments_state&quot;:&quot;enabled&quot;,&quot;language&quot;:null,&quot;explicit&quot;:false,&quot;homepage_type&quot;:null,&quot;is_personal_mode&quot;:false}},{&quot;id&quot;:6156692,&quot;user_id&quot;:580004,&quot;publication_id&quot;:5247799,&quot;role&quot;:&quot;contributor&quot;,&quot;public&quot;:true,&quot;is_primary&quot;:false,&quot;publication&quot;:{&quot;id&quot;:5247799,&quot;name&quot;:&quot;The 
Argument&quot;,&quot;subdomain&quot;:&quot;theargument&quot;,&quot;custom_domain&quot;:&quot;www.theargumentmag.com&quot;,&quot;custom_domain_optional&quot;:false,&quot;hero_text&quot;:&quot;Join Us. We're Libbing Out.&quot;,&quot;logo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d6b65fcd-fe11-48ac-bfe4-6c0f746e1608_300x300.png&quot;,&quot;author_id&quot;:18091829,&quot;primary_user_id&quot;:null,&quot;theme_var_background_pop&quot;:&quot;#FF6719&quot;,&quot;created_at&quot;:&quot;2025-06-05T17:53:31.825Z&quot;,&quot;email_from_name&quot;:null,&quot;copyright&quot;:&quot;Jerusalem Demsas&quot;,&quot;founding_plan_name&quot;:&quot;Founding Member&quot;,&quot;community_enabled&quot;:true,&quot;invite_only&quot;:false,&quot;payments_state&quot;:&quot;enabled&quot;,&quot;language&quot;:null,&quot;explicit&quot;:false,&quot;homepage_type&quot;:&quot;newspaper&quot;,&quot;is_personal_mode&quot;:false}}],&quot;twitter_screen_name&quot;:&quot;mattyglesias&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:10000,&quot;status&quot;:{&quot;bestsellerTier&quot;:10000,&quot;subscriberTier&quot;:10,&quot;leaderboard&quot;:{&quot;ranking&quot;:&quot;paid&quot;,&quot;rank&quot;:15,&quot;publicationName&quot;:&quot;Slow Boring &quot;,&quot;label&quot;:&quot;U.S. 
Politics&quot;,&quot;categoryId&quot;:76739},&quot;vip&quot;:false,&quot;badge&quot;:{&quot;type&quot;:&quot;bestseller&quot;,&quot;tier&quot;:10000}}}],&quot;utm_campaign&quot;:null,&quot;belowTheFold&quot;:true,&quot;type&quot;:&quot;newsletter&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="EmbeddedPostToDOM"><a class="embedded-post" native="true" href="https://www.slowboring.com/p/how-the-stock-market-learned-to-love?utm_source=substack&amp;utm_campaign=post_embed&amp;utm_medium=web"><div class="embedded-post-header"><img class="embedded-post-publication-logo" src="https://substackcdn.com/image/fetch/$s_!gzxV!,w_56,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fceeb681e-a14d-4bbb-a8fe-951c29603e3f_256x256.png" loading="lazy"><span class="embedded-post-publication-name">Slow Boring </span></div><div class="embedded-post-title-wrapper"><div class="embedded-post-title">How the stock market learned to love Trump </div></div><div class="embedded-post-body">Whenever Donald Trump does something that amounts to putting the institutional foundations of American prosperity through the wood chipper &#8212; whether that&#8217;s attacking Fed independence, turning his back on mRNA technology, attacking the integrity of federal economic statistics, or any of a dozen other things that happened this su&#8230;</div><div class="embedded-post-cta-wrapper"><span class="embedded-post-cta">Read more</span></div><div class="embedded-post-meta">8 months ago &#183; 330 likes &#183; 577 comments &#183; Matthew Yglesias</div></a></div><p>The danger is that these problems don't disappear; they fester beneath the surface. This sets the stage for a potential cascade of <strong>concurrent failures</strong>. An eventual, and likely inevitable, correction in AI valuations would not happen in a vacuum. 
Instead, it would act as a trigger, suddenly exposing the underlying weaknesses that the boom had papered over. This could lead to a multi-pronged crisis:</p><ol><li><p><strong>A Financial Shock:</strong> The AI bubble deflates, erasing wealth and crushing investor confidence.</p></li><li><p><strong>A Corporate Shock:</strong> Businesses, already weakened by higher input costs from trade barriers, face a sudden drop in demand and tighter credit, leading to widespread insolvencies.</p></li><li><p><strong>A Consumer Shock:</strong> Households, whose purchasing power has already been eroded by inflation, face the added threat of mass layoffs.</p></li></ol><p>This "perfect storm" scenario, where a financial correction and a real-economy crisis hit simultaneously, is far more dangerous than either event occurring in isolation.</p><h3><strong>Too Many Triggers and Weakened Foundations</strong></h3><p>What&#8217;s more, either side can supply the precipitating event. Bad policy could induce stagflation or a recession, spooking AI investors and forcing them to accept that expected profits are more distant. AI would still succeed, but financing costs would still weigh on companies. Some may fail, and those that survive would be left with a weaker financial position that merits a valuation change. That weaker position would cause investment to realign and slow down, removing the lift it has been providing to the economy and exacerbating a policy-induced recession.</p><p>On the other hand, an adjustment to optimism about long-term effects can happen at any time. This could be a rational adjustment, or an irrational one. Normally an irrational adjustment would be temporary, but if it triggers a break in an unstable foundation, the normal recovery would not occur. Even a logical, healthy correction in AI stocks could trigger a catastrophe. 
Investors who thought they had safety nets would discover those protections were eroded by bad policy, causing a much wider economic collapse.</p><p>Finally, there is the question of how the Trump administration would react. TACO (Trump Always Chickens Out) logic presumes that Trump can undo the damage done. That works far better for a policy announced yesterday that has not been put into effect. It works far less well for one that&#8217;s been slowly eating away at the American economy for months, maybe even years. In that case, which policy do they reverse? All of them? While that would be my choice, it presumes a lot to assume it would be the Trump administration&#8217;s choice.</p><p>It might instead think it can placate markets as it has in the past, with partial reversals of the most recent policies. This of course would not work: something fundamental would have shifted, and only something equally fundamental could correct it. We also have signs that the Trump administration would take a page from China&#8217;s playbook and create the numbers it wants, hoping it can fool enough of the public to restore optimism. There&#8217;s a chance that might work once or twice, but ultimately, if the bad policies remain, it only allows the foundation to erode further, setting up a yet larger future crisis.</p><h2><strong>Conclusion: The Risk of a Great Unmasking</strong></h2><p>While the long-term, existential questions surrounding AI command attention, the most immediate and tangible threats to the economy are rooted in the complex interplay between the current investment boom and other festering economic problems. 
The path to any future, utopian or otherwise, must first pass through a period of significant short-term risk, where the greatest danger is not a single point of failure, but a cascade of them.</p><p>The central, under-analyzed threat is that the <strong>AI investment bubble is actively masking</strong> the slow-building damage from irresponsible fiscal policy and the breakdown of global trade. Should the bubble burst, it will trigger a "Great Unmasking." The initial financial shock from deflating tech valuations will be immediately compounded by the sudden exposure of a real economy already weakened by inflation and supply chain friction.</p><p>This creates the potential for a multi-faceted crisis where a credit crunch, a collapse in business investment, and a surge in unemployment all happen concurrently. This is not just a tech-sector correction; it is the risk of a systemic economic downturn sparked by the tech sector but amplified by pre-existing conditions the boom helped to hide.</p><p>Therefore, while navigating the long-term societal transition is crucial, the immediate priority for policymakers must be to recognize and address this dangerous interaction. It requires looking beyond tech-sector regulation to the interplay of fiscal, trade, and financial stability policies. Successfully managing the practical, interconnected turbulence of today is the absolute prerequisite for realizing the profound promise of tomorrow.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://substack.norabble.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">norabble is a reader-supported publication. 
To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[The Economic Future from and of AI]]></title><description><![CDATA[Part 1: The Existential Horizon]]></description><link>https://substack.norabble.com/p/the-economic-future-from-and-of-ai</link><guid isPermaLink="false">https://substack.norabble.com/p/the-economic-future-from-and-of-ai</guid><dc:creator><![CDATA[Ryan Baker]]></dc:creator><pubDate>Sun, 07 Sep 2025 14:08:35 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/d02180eb-af84-4846-b470-d641afa59da1_512x512.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This will be part one of a two part series. In the first part, I want to outline some of my views about how salient a set of what we might call existential concerns about AI should be. <a href="https://substack.norabble.com/p/the-economic-future-from-and-of-ai-cf1">In part two, I want to discuss some more immediate interactions with today's economy</a>.</p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;c3331dd8-9136-4986-91fc-0ba3b1282d44&quot;,&quot;caption&quot;:&quot;In Part One, I discussed some of the existential economic concerns that Artificial Intelligence forces us to consider. 
In this second part, I&#8217;ll focus more directly on the practical, near-term landscape of familiar economic forces.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;lg&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;The Economic Future from and of AI&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:61710810,&quot;name&quot;:&quot;Ryan Baker&quot;,&quot;bio&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2376ff1a-8f8b-4e42-b164-1855d9e7999b_140x105.png&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2025-09-08T12:05:18.583Z&quot;,&quot;cover_image&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6e78a60e-800e-4adf-9223-6f4fd217c034_1024x1024.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://substack.norabble.com/p/the-economic-future-from-and-of-ai-cf1&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:173031411,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:0,&quot;comment_count&quot;:0,&quot;publication_id&quot;:null,&quot;publication_name&quot;:&quot;norabble&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!_1Oy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97750d25-7e33-4ebe-87af-6f4b3d0e4138_348x348.png&quot;,&quot;belowTheFold&quot;:false,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><p>Both of these are important topics. It can be tempting to ignore one for the other. Existential concerns have such large impacts, that they can make present day concerns seem trivial. The flaw in that thinking is that we don&#8217;t know when, or even if, existential concerns will emerge. 
Even if we assume they will emerge, our management of the present day has a lot to do with how stable and strong a society we have to manage those concerns. And if they don&#8217;t emerge, today&#8217;s default concerns are more important.</p><p>That frame of reference can sometimes be used to ignore existential concerns, which would also be a mistake. While their certainty and timing are less knowable, the adjustments we make for them can be modest in the present, yet still have large impacts when carried across time. Sometimes this is nothing more than the understanding that would help us recognize early impacts. Sometimes it&#8217;s plans we&#8217;d enact once recognized.</p><p>It&#8217;s tempting to demand responses to existential risks that would be as total as the risk. For example, with nuclear weapons, it&#8217;s tempting to think, why have any? While that sounds reasonable, we should ask: if that was the only response we considered, would we be better or worse off than we are with the far less obvious set of non-proliferation treaties, monitoring, and other responses that have been employed? If we assume we would have gotten what we demanded, we can make a good, though not air-tight, case for that. But the biggest risk is that we might not have been able to achieve that demand, and ended up with nothing as a response.</p><p>So both are important, and both interact, but they are different enough that it&#8217;s unproductive to continually introduce one into the other domain. With that in mind, I&#8217;d like to spend some time on the existential concerns to put them aside before moving into the present-day concerns.</p><h2><strong>Two Horizons of AI&#8217;s Economic Impact</strong></h2><p>How Artificial Intelligence will affect the economy unfolds across two distinct but interconnected horizons. The first is a practical, near-term landscape of familiar economic forces: jobs will be created and destroyed, investments will be made, and markets will adapt. 
These are the everyday concerns of disruption and growth that have characterized technological shifts of the past. <a href="https://knightcolumbia.org/content/ai-as-normal-technology">AI </a><strong><a href="https://knightcolumbia.org/content/ai-as-normal-technology">is</a></strong><a href="https://knightcolumbia.org/content/ai-as-normal-technology"> normal technology</a>, at least in the short term.</p><p>The second horizon raises questions about the nature of work, value, and social organization. If AI is as transformative as predicted, it may force a fundamental restructuring of the mechanisms that distribute wealth and opportunity. Understanding AI&#8217;s total economic impact requires analyzing both of these horizons&#8212;the immediate disruption and the potential long-term transformation&#8212;and recognizing how the former may ultimately lead to the latter.</p><h2><strong>The Existential Horizon: Abundance, Agency, and the Social Contract</strong></h2><p>When we talk about an existential horizon, we&#8217;re assuming a radically changed environment. Not just incremental changes, but something fundamentally different. One fundamentally different environment would be one where there is no longer an incentive to utilize more human labor. Historically, productivity improvements have led to more labor specialization, more benefits to labor, and employment that remained essentially complete across society. While some jobs were reduced, and some disappeared, others were created.</p><p>If the historical pattern no longer holds, the result is not merely the temporary friction of workers moving between jobs, but a fundamental, lasting mismatch with the demands of an AI-driven economy. If AI automates a wide swath of cognitive and manual tasks, a large segment of the population may find their skills devalued, creating a challenge that standard economic churn cannot easily resolve. 
This practical concern, if it grows large enough, becomes the mechanism for a fundamental crisis in the structure of the economy.</p><p>It&#8217;s important to note that part of the context of that crisis is radical productivity gains. This implies an abundance of production, which, if distributed, could fulfill all current-day physical needs and desires.</p><h3><strong>Unstable Dystopia and Inevitable Transformation</strong></h3><p>The core issue remains the distinction between <strong>production</strong> and <strong>distribution</strong>. AI may solve the problem of production, but our primary distribution mechanism&#8212;employment income&#8212;could be broken by widespread structural unemployment.</p><p>An existential thinker considers a potential dystopian state where the owners of AI capture the gains, leaving a majority without income. However, such a system would be profoundly unstable. An economy where the vast majority of the population cannot afford to purchase the goods and services being produced is one that no longer functions for the majority, creating immense pressure for change. It is difficult to envision a scenario where a large, disenfranchised majority would peacefully accept deprivation while a paradise of abundance is technologically possible. The social contract would be broken, and the economic system&#8217;s legitimacy would evaporate.</p><p>Therefore, the more enduring pathway is one of transformation. Faced with systemic collapse, social and political systems would be forced to adapt. 
Through democratic pressure or mass social movements, new mechanisms for distributing the gains of productivity&#8212;such as a Universal Basic Income (UBI) or other forms of social wealth distribution&#8212;would likely emerge not as a matter of choice, but of necessity, to ensure social and economic stability.</p><h3><strong>The Inevitability of Change: Why a Dystopian State Cannot Last</strong></h3><p>The notion that a wealthy elite could maintain a dystopian system against the will of the majority overlooks the fundamental levers of power in society. A population that has nothing to lose has no reason to respect the existing economic or political order.</p><ul><li><p><strong>Democratic Power:</strong> In nations with functioning democratic processes, a disenfranchised majority would have the votes to enact radical change. The political imperative to ensure the well-being of the populace would eventually overwhelm the influence of a small, wealthy minority.</p></li><li><p><strong>The Power of Mass Action:</strong> If democratic channels were to fail, the risk of mass civil unrest and rebellion would become acute. A system that creates widespread deprivation alongside visible, immense wealth is inherently unstable and invites revolutionary change.</p></li><li><p><strong>The Appeal to Humanity:</strong> Beyond coercion and political maneuvering, there remains the appeal to the shared humanity and self-preservation of the powerful. A society in a state of constant, simmering revolt is not a desirable or stable one for anyone, including the elite.</p></li><li><p><strong>Alternate Economies: </strong>A final aspect is that if a formal economy is not functioning for the majority, alternate economies can be formed. This happens today in exploitative economies around the world. The difference here is that in those economies today, productive capacity is low, so they distribute little. 
The formal economy would have to successfully suppress or deny alternate economies access to productive capacity, both that of their own members and of AI.</p></li></ul><p>These corrective forces make a prolonged, technologically enforced dystopia an unlikely long-term outcome. The friction and conflict during the transition would be immense, but the ultimate direction would be toward a new social contract that aligns with the new economic reality.</p><h3><strong>The "Brave New World" Scenario: The Dystopia of Contentment</strong></h3><p>A more insidious, and perhaps more stable, dystopian outcome is not one of overt oppression but of sophisticated pacification. In this scenario, "bread and circuses" would pacify the population. An accurate view upgrades "bread and circuses" to far more than subsistence: a high standard of living for all, with material needs and entertainment amply provided for.</p><p>The trade-off would be a loss of genuine <strong>agency</strong>. The population would be consumers and spectators in a world run by a small elite, not active participants in shaping their society. The core conflict is not freedom versus suffering, but freedom versus comfort. This presents a more philosophical challenge: whether a comfortable, secure, and entertained population would still value the burdens and responsibilities of self-determination.</p><h2><strong>Navigating the Transition: A Pragmatic Approach</strong></h2><p>Given the scale of the potential disruption and the inevitability of the technological advance, the central policy debate should focus on how to manage the transition.</p><h3><strong>The Case Against Preemptive Policy and Prohibition</strong></h3><p>A key debate is the timing of major social and economic reforms. Some advocate for preemptively implementing policies like Universal Basic Income (UBI) to soften the blow of disruption. 
A system like UBI is a response to a fundamentally different economic reality, one where the link between labor and survival has been severed for a large part of the population. Insisting upon it as a condition of AI development is premature.</p><p>To the degree that UBI, perhaps in a more limited form, makes sense today, it can be advocated for on its own merits. But it only becomes absolutely necessary <em>after</em> a systemic change has occurred, not before. To insist upon it now, with an uncertain timeline, would be to enact a cure for a condition that has not yet manifested.</p><p>Similarly, arguments to halt or severely restrict AI development due to these risks are both overly cautious and impractical. The potential for AI to solve humanity's most pressing problems creates an enormous opportunity cost. Furthermore, AI development is a global geopolitical race. Any nation that unilaterally pauses its efforts risks falling catastrophically behind, making a global moratorium unenforceable. Progress is, for all practical purposes, inevitable.</p><p>The most viable path forward is not to stop progress or to preemptively re-engineer society, but to focus on managing the risks and guiding the technology's development.</p><h2><strong>Conclusion: The Risk of a Great Unmasking</strong></h2><p>While the long-term, existential questions surrounding AI command attention, the most immediate and tangible threats to the economy are rooted in the complex interplay between the current investment boom and other festering economic problems. 
The path to any future, utopian or otherwise, must first pass through a period of significant short-term risk, where the greatest danger is not a single point of failure, but a cascade of them.</p><p>Those near-term threats will be the <a href="https://substack.norabble.com/p/the-economic-future-from-and-of-ai-cf1">topic of part two</a>.</p>]]></content:encoded></item><item><title><![CDATA[The Technologies of Trust]]></title><description><![CDATA[How We Cooperate at Scale]]></description><link>https://substack.norabble.com/p/the-technologies-of-trust</link><guid isPermaLink="false">https://substack.norabble.com/p/the-technologies-of-trust</guid><dc:creator><![CDATA[Ryan Baker]]></dc:creator><pubDate>Mon, 25 Aug 2025 20:55:13 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/3a093180-ffd8-497f-92f7-737ff8d80e4e_2048x2048.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I&#8217;ve been writing about Trust recently. I started with the <a href="https://norabble.substack.com/p/money-more-than-just-stuff-its-trust">topic of money</a>, which I contend is widely misunderstood (or more generally, hasn&#8217;t been sufficiently reflected upon). I also talked about <a href="https://norabble.substack.com/p/money-trust-and-loans">loans </a>and <a href="https://norabble.substack.com/p/trust-money-and-companies">companies</a>.</p><p>The common thread of trust here makes me want to talk about another concept, that while fairly pervasive, has escaped labelling and deep discussion. I&#8217;m choosing to describe this concept as the &#8220;Technologies of Trust,&#8221; a set of formal, abstract systems that provide the foundation for large-scale cooperation. 
This bucks some commonplace uses of &#8220;technology&#8221; but is true to the term&#8217;s definition, and creates a space for a concept that currently lacks one.</p><p>The term "technology" is often associated with the physical sciences, but a broader definition is "the application of scientific knowledge for practical purposes." This is a useful lens. Just as categorizing physical tools under "technology" gives us a shared language to discuss them, we need similar categories for our abstract tools. We can think of a hierarchy of dependencies. Physical technologies allow us to manipulate the world, and organizational technologies allow us to coordinate groups to apply those physical tools. But for organizations to function at a large scale, beyond the limits of personal relationships, a separate category is needed: the technologies of trust. These are primarily applications of economic and social science, creating the formal frameworks that make broad, impersonal collaboration possible.</p><p>What are these technologies of trust? The list is long, but some of the most fundamental examples include:</p><ul><li><p><strong>Money:</strong> The primary form of depersonalized trust. 
It is a purely abstract system whose power comes from our collective confidence that it can be exchanged for goods and services in the future, allowing transactions between strangers who have no personal reason to trust each other.<br></p><div class="embedded-post-wrap" data-attrs="{&quot;id&quot;:159457670,&quot;url&quot;:&quot;https://substack.norabble.com/p/money-more-than-just-stuff-its-trust&quot;,&quot;publication_id&quot;:1642290,&quot;publication_name&quot;:&quot;norabble&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!_1Oy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97750d25-7e33-4ebe-87af-6f4b3d0e4138_348x348.png&quot;,&quot;title&quot;:&quot;Money: More Than Just Stuff, It's Trust&quot;,&quot;truncated_body_text&quot;:&quot;We all understand trust and how essential it is for people to live and work together. Civilization itself is built on it. Money, while newer in human history, is just as woven into the fabric of our lives.&quot;,&quot;date&quot;:&quot;2025-03-20T04:16:50.436Z&quot;,&quot;like_count&quot;:3,&quot;comment_count&quot;:0,&quot;bylines&quot;:[{&quot;id&quot;:61710810,&quot;name&quot;:&quot;Ryan 
Baker&quot;,&quot;handle&quot;:&quot;norabble&quot;,&quot;previous_name&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2376ff1a-8f8b-4e42-b164-1855d9e7999b_140x105.png&quot;,&quot;bio&quot;:null,&quot;profile_set_up_at&quot;:&quot;2022-06-06T00:50:16.432Z&quot;,&quot;reader_installed_at&quot;:&quot;2024-11-10T20:42:17.649Z&quot;,&quot;publicationUsers&quot;:[{&quot;id&quot;:1615971,&quot;user_id&quot;:61710810,&quot;publication_id&quot;:1642290,&quot;role&quot;:&quot;admin&quot;,&quot;public&quot;:true,&quot;is_primary&quot;:true,&quot;publication&quot;:{&quot;id&quot;:1642290,&quot;name&quot;:&quot;norabble&quot;,&quot;subdomain&quot;:&quot;norabble&quot;,&quot;custom_domain&quot;:&quot;substack.norabble.com&quot;,&quot;custom_domain_optional&quot;:false,&quot;hero_text&quot;:&quot;Complex, neglected, impersonal and challenging topics; Commonly economics, global development, cities and technology&quot;,&quot;logo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/97750d25-7e33-4ebe-87af-6f4b3d0e4138_348x348.png&quot;,&quot;author_id&quot;:61710810,&quot;primary_user_id&quot;:61710810,&quot;theme_var_background_pop&quot;:&quot;#00C2FF&quot;,&quot;created_at&quot;:&quot;2023-05-06T19:03:50.569Z&quot;,&quot;email_from_name&quot;:null,&quot;copyright&quot;:&quot;Ryan Baker&quot;,&quot;founding_plan_name&quot;:null,&quot;community_enabled&quot;:true,&quot;invite_only&quot;:false,&quot;payments_state&quot;:&quot;disabled&quot;,&quot;language&quot;:null,&quot;explicit&quot;:false,&quot;homepage_type&quot;:&quot;magaziney&quot;,&quot;is_personal_mode&quot;:false}}],&quot;twitter_screen_name&quot;:&quot;norabble&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;utm_campaign&quot;:null,&quot;belowTheFold&quot;:false,&quot;type&quot;:&quot;newsletter&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="EmbeddedPostToDOM"><a class="embedded-post" native="true" 
href="https://substack.norabble.com/p/money-more-than-just-stuff-its-trust?utm_source=substack&amp;utm_campaign=post_embed&amp;utm_medium=web"><div class="embedded-post-header"><img class="embedded-post-publication-logo" src="https://substackcdn.com/image/fetch/$s_!_1Oy!,w_56,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97750d25-7e33-4ebe-87af-6f4b3d0e4138_348x348.png"><span class="embedded-post-publication-name">norabble</span></div><div class="embedded-post-title-wrapper"><div class="embedded-post-title">Money: More Than Just Stuff, It's Trust</div></div><div class="embedded-post-body">We all understand trust and how essential it is for people to live and work together. Civilization itself is built on it. Money, while newer in human history, is just as woven into the fabric of our lives&#8230;</div><div class="embedded-post-cta-wrapper"><span class="embedded-post-cta">Read more</span></div><div class="embedded-post-meta">a year ago &#183; 3 likes &#183; Ryan Baker</div></a></div></li><li><p><strong>Loans:</strong> An instrument built on personal trust. Unlike money, a loan requires a direct assessment of a borrower's reliability, including their intent and capability to repay. 
This personal nature limits its scale compared to a depersonalized system.<br></p><div class="embedded-post-wrap" data-attrs="{&quot;id&quot;:160828968,&quot;url&quot;:&quot;https://substack.norabble.com/p/money-trust-and-loans&quot;,&quot;publication_id&quot;:1642290,&quot;publication_name&quot;:&quot;norabble&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!_1Oy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97750d25-7e33-4ebe-87af-6f4b3d0e4138_348x348.png&quot;,&quot;title&quot;:&quot;Money, Trust, and Loans&quot;,&quot;truncated_body_text&quot;:&quot;In a previous discussion, we established a framework for understanding money: it's a form of trust that has been depersonalized and made exchangeable. This transformation allows society to operate with a greater total amount of trust than would be possible through personal relationships alone, thereby fostering greater potential for prosperity.&quot;,&quot;date&quot;:&quot;2025-04-08T14:01:00.786Z&quot;,&quot;like_count&quot;:2,&quot;comment_count&quot;:0,&quot;bylines&quot;:[{&quot;id&quot;:61710810,&quot;name&quot;:&quot;Ryan 
Baker&quot;,&quot;handle&quot;:&quot;norabble&quot;,&quot;previous_name&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2376ff1a-8f8b-4e42-b164-1855d9e7999b_140x105.png&quot;,&quot;bio&quot;:null,&quot;profile_set_up_at&quot;:&quot;2022-06-06T00:50:16.432Z&quot;,&quot;reader_installed_at&quot;:&quot;2024-11-10T20:42:17.649Z&quot;,&quot;publicationUsers&quot;:[{&quot;id&quot;:1615971,&quot;user_id&quot;:61710810,&quot;publication_id&quot;:1642290,&quot;role&quot;:&quot;admin&quot;,&quot;public&quot;:true,&quot;is_primary&quot;:true,&quot;publication&quot;:{&quot;id&quot;:1642290,&quot;name&quot;:&quot;norabble&quot;,&quot;subdomain&quot;:&quot;norabble&quot;,&quot;custom_domain&quot;:&quot;substack.norabble.com&quot;,&quot;custom_domain_optional&quot;:false,&quot;hero_text&quot;:&quot;Complex, neglected, impersonal and challenging topics; Commonly economics, global development, cities and technology&quot;,&quot;logo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/97750d25-7e33-4ebe-87af-6f4b3d0e4138_348x348.png&quot;,&quot;author_id&quot;:61710810,&quot;primary_user_id&quot;:61710810,&quot;theme_var_background_pop&quot;:&quot;#00C2FF&quot;,&quot;created_at&quot;:&quot;2023-05-06T19:03:50.569Z&quot;,&quot;email_from_name&quot;:null,&quot;copyright&quot;:&quot;Ryan Baker&quot;,&quot;founding_plan_name&quot;:null,&quot;community_enabled&quot;:true,&quot;invite_only&quot;:false,&quot;payments_state&quot;:&quot;disabled&quot;,&quot;language&quot;:null,&quot;explicit&quot;:false,&quot;homepage_type&quot;:&quot;magaziney&quot;,&quot;is_personal_mode&quot;:false}}],&quot;twitter_screen_name&quot;:&quot;norabble&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;utm_campaign&quot;:null,&quot;belowTheFold&quot;:false,&quot;type&quot;:&quot;newsletter&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="EmbeddedPostToDOM"><a class="embedded-post" native="true" 
href="https://substack.norabble.com/p/money-trust-and-loans?utm_source=substack&amp;utm_campaign=post_embed&amp;utm_medium=web"><div class="embedded-post-header"><img class="embedded-post-publication-logo" src="https://substackcdn.com/image/fetch/$s_!_1Oy!,w_56,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97750d25-7e33-4ebe-87af-6f4b3d0e4138_348x348.png"><span class="embedded-post-publication-name">norabble</span></div><div class="embedded-post-title-wrapper"><div class="embedded-post-title">Money, Trust, and Loans</div></div><div class="embedded-post-body">In a previous discussion, we established a framework for understanding money: it's a form of trust that has been depersonalized and made exchangeable. This transformation allows society to operate with a greater total amount of trust than would be possible through personal relationships alone, thereby fostering greater potential for prosperity&#8230;</div><div class="embedded-post-cta-wrapper"><span class="embedded-post-cta">Read more</span></div><div class="embedded-post-meta">a year ago &#183; 2 likes &#183; Ryan Baker</div></a></div></li><li><p><strong>Contracts &amp; Law:</strong> Formal agreements and rules that create predictable, enforceable expectations between parties.</p></li><li><p><strong>Corporations:</strong> Legal structures that enable groups of people to act as a single entity, limiting individual liability and enabling large-scale investment and enterprise.<br></p><div class="embedded-post-wrap" 
data-attrs="{&quot;id&quot;:162225592,&quot;url&quot;:&quot;https://substack.norabble.com/p/trust-money-and-companies&quot;,&quot;publication_id&quot;:1642290,&quot;publication_name&quot;:&quot;norabble&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!_1Oy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97750d25-7e33-4ebe-87af-6f4b3d0e4138_348x348.png&quot;,&quot;title&quot;:&quot;Trust, Money, and Companies&quot;,&quot;truncated_body_text&quot;:&quot;While companies are ubiquitous, and most people have a general idea of what they are, a fully-formed understanding of the concept of a company is less prevalent. At the base level, it&#8217;s just an entity doing business. The extra nuance comes from all the structure needed to make that practical. They have to be mostly flexible, but in other ways predictabl&#8230;&quot;,&quot;date&quot;:&quot;2025-04-26T22:16:02.756Z&quot;,&quot;like_count&quot;:1,&quot;comment_count&quot;:0,&quot;bylines&quot;:[{&quot;id&quot;:61710810,&quot;name&quot;:&quot;Ryan Baker&quot;,&quot;handle&quot;:&quot;norabble&quot;,&quot;previous_name&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2376ff1a-8f8b-4e42-b164-1855d9e7999b_140x105.png&quot;,&quot;bio&quot;:null,&quot;profile_set_up_at&quot;:&quot;2022-06-06T00:50:16.432Z&quot;,&quot;reader_installed_at&quot;:&quot;2024-11-10T20:42:17.649Z&quot;,&quot;publicationUsers&quot;:[{&quot;id&quot;:1615971,&quot;user_id&quot;:61710810,&quot;publication_id&quot;:1642290,&quot;role&quot;:&quot;admin&quot;,&quot;public&quot;:true,&quot;is_primary&quot;:true,&quot;publication&quot;:{&quot;id&quot;:1642290,&quot;name&quot;:&quot;norabble&quot;,&quot;subdomain&quot;:&quot;norabble&quot;,&quot;custom_domain&quot;:&quot;substack.norabble.com&quot;,&quot;custom_domain_optional&quot;:false,&quot;hero_text&quot;:&quot;Complex, neglected, impersonal and 
challenging topics; Commonly economics, global development, cities and technology&quot;,&quot;logo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/97750d25-7e33-4ebe-87af-6f4b3d0e4138_348x348.png&quot;,&quot;author_id&quot;:61710810,&quot;primary_user_id&quot;:61710810,&quot;theme_var_background_pop&quot;:&quot;#00C2FF&quot;,&quot;created_at&quot;:&quot;2023-05-06T19:03:50.569Z&quot;,&quot;email_from_name&quot;:null,&quot;copyright&quot;:&quot;Ryan Baker&quot;,&quot;founding_plan_name&quot;:null,&quot;community_enabled&quot;:true,&quot;invite_only&quot;:false,&quot;payments_state&quot;:&quot;disabled&quot;,&quot;language&quot;:null,&quot;explicit&quot;:false,&quot;homepage_type&quot;:&quot;magaziney&quot;,&quot;is_personal_mode&quot;:false}}],&quot;twitter_screen_name&quot;:&quot;norabble&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;utm_campaign&quot;:null,&quot;belowTheFold&quot;:false,&quot;type&quot;:&quot;newsletter&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="EmbeddedPostToDOM"><a class="embedded-post" native="true" href="https://substack.norabble.com/p/trust-money-and-companies?utm_source=substack&amp;utm_campaign=post_embed&amp;utm_medium=web"><div class="embedded-post-header"><img class="embedded-post-publication-logo" src="https://substackcdn.com/image/fetch/$s_!_1Oy!,w_56,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97750d25-7e33-4ebe-87af-6f4b3d0e4138_348x348.png"><span class="embedded-post-publication-name">norabble</span></div><div class="embedded-post-title-wrapper"><div class="embedded-post-title">Trust, Money, and Companies</div></div><div class="embedded-post-body">While companies are ubiquitous, and most people have a general idea of what they are, a fully-formed understanding of the concept of a company is less prevalent. At the base level, it&#8217;s just an entity doing business. 
The extra nuance comes from all the structure needed to make that practical. They have to be mostly flexible, but in other ways predictabl&#8230;</div><div class="embedded-post-cta-wrapper"><span class="embedded-post-cta">Read more</span></div><div class="embedded-post-meta">a year ago &#183; 1 like &#183; Ryan Baker</div></a></div></li><li><p><strong>Finance:</strong> A complex system that bridges personal and depersonalized trust. Through banking, for example, personal trust (a loan agreement with an individual) is converted into depersonalized trust (newly created bank deposits that function as money in the broader economy). It uses tools like interest and collateral to bolster personal trust where it might be weak.</p></li><li><p><strong>Professional &amp; Academic Standards:</strong> Credentials, licenses, and peer review that create a baseline of trust in specialized knowledge and skills.</p></li><li><p><strong>Government:</strong> The overarching set of institutions that create, administer, and enforce the laws of a society, providing a framework of stability.</p></li><li><p><strong>Bureaucracy:</strong> The operational systems of rules, procedures, and hierarchies used to manage large organizations&#8212;public or private&#8212;in a consistent and predictable manner.</p></li><li><p><strong>Digital Trust Systems</strong>: Modern extensions of these frameworks, including blockchains, digital identity systems, and encryption, which allow trust to be distributed without central intermediaries. These highlight both the promise and peril of new, untested technologies of trust.</p></li></ul><h2>Formal vs. Informal Trust</h2><p>A common thread among these examples is that they are formal systems. Unlike the informal, intuitive trust we place in friends and family, these technologies are designed to be evaluated through reason. They create a framework of predictability that allows us to engage in complex collaborations with people we have never met. 
A formal system, however, is only as strong as the informal trust that individuals place in it. When confidence in these institutions erodes, so does their ability to facilitate cooperation.</p><p>Contemporary life shows how fragile this balance can be. For example, social media both erodes and reinforces institutional trust: on the one hand, disinformation campaigns weaken confidence in governments or media; on the other, platforms also serve as vehicles for new forms of verification and accountability.</p><p>Understanding these technologies is not merely an academic exercise; it is essential for a functioning society. When individuals lack a basic understanding of these systems, they are less likely to trust them, leading to reduced participation and a loss of potential collaboration. It can also lead to unproductive modes of reasoning and the embrace of radical ideologies that fail to appreciate the delicate balance required.</p><p>Another aspect to consider is how these technologies complement and balance against each other. This balance is one reason it&#8217;s useful to think of them as a group, as many flawed ideologies can be described as a failure to balance among these technologies.</p><h2>When Trust Systems Are Rejected</h2><p>Some ideologies are defined by their skepticism of certain technologies of trust. While this is sometimes expressed in stark, theoretical terms, the reality has usually been more nuanced.</p><ul><li><p><strong>Communism:</strong> In theory, many communist traditions reject markets, corporations, and money itself, relying instead on centralized planning and bureaucracy. In practice, however, even communist regimes such as the USSR or China never fully abolished money or markets&#8212;both sanctioned and unsanctioned markets continued to exist. This shows how difficult it is to eliminate these technologies of trust completely. 
What communism demonstrates most clearly is an overemphasis on government and bureaucracy at the expense of other balancing systems.</p></li><li><p><strong>Libertarianism</strong>: Many libertarian strands distrust government and bureaucracy, instead placing heavy faith in contracts, markets, and voluntary associations. But this perspective often underplays the fact that governments do far more than simply &#8220;enforce contracts.&#8221; They create the very conditions under which markets exist&#8212;defining property rights, establishing money, and ensuring stability. Beyond that, governments must intervene to correct for failures (like pollution or monopolies) and to unwind outdated rules that distort rather than enable economic life. Libertarian experiments such as charter cities or cryptocurrency-based communities illustrate both the appeal of market-centric trust and the difficulties of maintaining order, fairness, and adaptability without strong public institutions.</p></li><li><p><strong>Anarchism:</strong> Anarchist thought often rejects nearly all formal trust systems, seeking instead to build cooperation through informal trust, mutual aid, and federated local assemblies. Examples like the Rojava region of Syria show attempts to create alternative, decentralized institutions. These systems reveal both the strengths of informal trust in tight-knit groups and the difficulty of scaling them to larger societies.</p></li></ul><p>Recent populist movements also show how selective rejection of trust systems&#8212;such as undermining courts, electoral processes, or expert institutions&#8212;can destabilize societies. 
Even when not fully ideological, these attacks erode confidence in the shared frameworks that make cooperation possible.</p><p>The lesson across these cases is not that certain ideologies are simply wrong, but that any attempt to radically unbalance the ecosystem of trust technologies&#8212;by over-relying on one and dismissing others&#8212;creates fragility. Durable societies require a balance among these different tools of trust.</p><h2>The Price of Complexity</h2><p>The challenge is finding the right balance. While trust expands our potential for good, it is not without risk. On one level, trust can be exploited if we are not vigilant enough. On another, systems dependent on trust are forced to shrink when trust is lost. In combination, our capacity for successful vigilance limits the amount of trust that both logical and intuitive systems can support.</p><p>Technologies like finance enable a level of prosperity analogous to building an economic skyscraper, reaching heights unimaginable from the ground. A financial crisis, in this view, is a catastrophic failure, but it is a failure that could only happen from such a height. The alternative is not a skyscraper that cannot collapse, but a simple hut with a much lower ceiling for growth.</p><p>In this metaphor, regulation serves as the skyscraper&#8217;s safety systems: fire codes, structural reinforcements, and emergency exits. They cannot eliminate risk altogether, but they can reduce the likelihood of collapse and increase resilience when shocks occur.</p><p>This is why, even after a major crisis, trust is rarely abandoned entirely. It is damaged, but it proves resilient. The response is not to tear down the system, but to repair and renegotiate trust through new regulations and reforms. 
The alternative&#8212;a complete return to simpler, less powerful systems&#8212;is often unthinkable, demonstrating our fundamental dependence on these complex technologies, for all their flaws.</p><h2>Educating for Trust</h2><p>Ultimately, the technologies of trust are not merely abstract systems; they are the essential tools we have engineered to overcome the natural limits of personal relationships, enabling the vast collaborations that define our world. Small-scale, personal trust is innate, a concept that barely needs to be taught, as it is coded into us by our evolution in small communities. Understanding the abstract technologies that allow for this large-scale cooperation, however, does not come so naturally.</p><p>This is why, just as we teach our children about the physical world, we must also educate ourselves and future generations about these abstract systems. Schools, universities, and public institutions should include civic education on money, law, and emerging trust systems. The goal is to build critical literacy so citizens can weigh their potential and their risks. Policymakers can help by ensuring transparency, accountability, and accessible regulation. And as individuals, we can cultivate trust by participating in institutions thoughtfully and critically, rather than retreating into cynicism or ideological views.</p><p>Understanding the value in each system helps reinforce the need for balance. At its simplest, this is immunization against ideology.</p><p>Balance itself is a bit harder to obtain, as it's not enough to understand one's own position, or even the positions of those close to us. Balance requires listening to others as a first step. But in large communities we need to trust others who aggregate views distant from us, either physically or along other dimensions.</p><p>Understanding in this context also allows acknowledging flaws, and working to improve implementation. 
Putting this all together, we can foster a more collaborative and prosperous society for all.</p>]]></content:encoded></item><item><title><![CDATA[AI and the Social Ecosystem]]></title><description><![CDATA[Beyond What AI Will Do]]></description><link>https://substack.norabble.com/p/ai-and-the-social-ecosystem</link><guid isPermaLink="false">https://substack.norabble.com/p/ai-and-the-social-ecosystem</guid><dc:creator><![CDATA[Ryan Baker]]></dc:creator><pubDate>Sat, 09 Aug 2025 18:00:33 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/36851b1d-8bf8-4be1-9027-702965282c6d_2048x2048.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Conversations about AI often focus on what AI will do. While important, this focus can be misleading. A more critical question is: <strong>What will people do with AI?</strong> This shift matters because AI systems are, and will increasingly be, malleable. 
As our understanding grows, we can shape them to do what we want&#8212;and where we can&#8217;t yet, we&#8217;re likely to gain those abilities.</p><p>While developing those capabilities is valuable, AI research has often stopped short of addressing what comes after: deciding how we should use them. Humanity has rarely agreed on such questions. It&#8217;s possible someone has found a complete answer, but more likely, we have only partial and competing ideas. Even when we agree, translating agreement into action proves difficult.</p><p>AI enters a world already full of unresolved issues. To use it wisely, we must acknowledge the societal patterns we&#8217;ve long ignored.</p><h3><strong>Example: AI Companions</strong></h3><p>Researchers ask <a href="https://www.hbs.edu/ris/Publication%20Files/24-078_a3d2e2c7-eca1-4767-8543-122e818bf2e5.pdf">if AI companions could reduce loneliness</a>. This is promising for those who feel isolated, yet reactions vary.</p><p>Some dismiss it as &#8220;unnatural&#8221; or &#8220;creepy,&#8221; ignoring how our social structures actively contribute to loneliness. While some causes are personal, many are external&#8212;rooted in social biases and behaviors. 
Knee-jerk objections often defend the status quo, offering platitudes (&#8220;we should all be nice&#8221;) that history shows we rarely fulfill.</p><p>Others worry about exploitation&#8212;people becoming dependent on AI and vulnerable to manipulation. This is a valid concern, but banning AI for companionship ignores the underlying problem: many people are already left lonely by societal design.</p><h3><strong>The Social Ecosystem Problem</strong></h3><p>The deeper risk lies not in AI itself, but in the social environment it enters. Human biases determine who we befriend and who we exclude. These biases are difficult to overcome, because personal happiness is tied to them. Progress in changing this has been slow for centuries, with only incremental gains.</p><p>In this sense, we live in a manipulative ecosystem. Human social life often revolves around implicit negotiations for status, influence, and belonging. We signal our own value, interpret others' signals, and use those social cues to navigate our standing. This process can subtly or overtly marginalize those who engage in it less, or less effectively, leaving them feeling rejected and reinforcing cycles of isolation. Those without sufficient supportive relationships may turn to artificial companionship&#8212;whether AI chatbots or other digital agents&#8212;that promise consistent validation without the risks of human judgment. While such companionship can meet real needs, it can also increase dependence on entities whose motivations may not align with the user&#8217;s well-being.</p><p>If this ecosystem persists&#8212;as history suggests&#8212;it will shape how AI is used and abused. 
Delaying AI in hopes that society will &#8220;be ready&#8221; may simply leave us unprepared when AI inevitably becomes widespread.</p><p>Instead, we should ask: what can we do, with old or new capabilities, to make this ecosystem safer before AI becomes deeply embedded?</p><h3><strong>Advertising as a Case Study</strong></h3><p>Advertising is one of the most organized forms of social influence, and it provides a clear case study for the risks of AI integration. Consider the AI companions discussed earlier. If they are provided by people who genuinely aim to reduce loneliness, we can be optimistic. But if they come from companies whose main goal is ad revenue, we should be cautious. This same dynamic applies to any system&#8212;from search engines to social media feeds&#8212;that could be funded through advertising.</p><p>Society has rarely addressed the broader effects of advertising. Rules generally only forbid outright falsehoods, leaving manipulative but technically true messages untouched. In an AI-powered world, this gap becomes more dangerous.</p><p>We often avoid regulating advertising because of free speech concerns. While this principle is important, advertising is distinct from the personal, expressive speech we most value. There will be blurry cases, but clear ones exist&#8212;and addressing them could protect vulnerable people without dismantling free expression.</p><h3><strong>Balancing Free Speech and Advertising Limits in an AI Age</strong></h3><p>When addressing the risks of AI-powered advertising&#8212;especially in companionship contexts&#8212;we face a classic dilemma: how to curb manipulation without undermining free speech. The tension lies in the fact that too much personal judgment in setting limits risks turning regulation into a tool for enforcing one person&#8217;s biases over another&#8217;s speech. 
This would strike at the very root of free speech protections.</p><p>One path forward is to prioritize rules that require minimal subjective interpretation. These rules should be inherently more equal in application, less reliant on individual moral standards, and thus less prone to abuse.</p><p><strong>Examples of Lower-Judgment Rules:</strong></p><ul><li><p><strong>Limiting the Total Volume of Advertising:</strong> Capping the overall amount of advertising exposure&#8212;whether in minutes per hour, or ad impressions per day&#8212;reduces manipulation opportunities without privileging one viewpoint over another.</p></li><li><p><strong>Restricting Advertising from Certain Spaces:</strong></p><ul><li><p><strong>Physical spaces:</strong> We already prohibit most advertising in schools, recognizing that young people deserve protection from certain commercial pressures.</p></li><li><p><strong>Digital spaces:</strong> Online environments used primarily for education, mental health support, or community-building could adopt similar restrictions. Currently, digital spaces often import norms from the open web, where advertising is pervasive and largely unregulated.</p></li></ul></li><li><p><strong>Context-Based Exclusions:</strong> Banning advertising in contexts where people are unusually vulnerable&#8212;such as grief counseling forums, addiction recovery platforms, or AI companionship apps&#8212;could help safeguard well-being without making content-based judgments about the ads themselves.</p></li></ul><h3><strong>A Further Step: Regulating Manipulative Methods</strong></h3><p>Beyond rules about the context of advertising, a more ambitious, though complex, path involves regulating the specific manipulative methods used. This approach rightfully raises concerns about potential overreach, so it must be handled with care. 
The focus would not be on the content of a message, but on its structure, targeting verifiably manipulative techniques that often exploit known psychological shortcuts.</p><p>Many of these techniques are applications of well-documented principles of persuasion, such as those identified by <a href="https://www.amazon.com/gp/product/B08HZ57WYN">Robert Cialdini in his foundational book, </a><em><a href="https://www.amazon.com/gp/product/B08HZ57WYN">Influence</a></em>. An automated system could, in theory, be trained to detect and block advertising that employs these methods:</p><ul><li><p><strong>Exploiting Scarcity:</strong> Creating artificial pressure with misleading countdown timers or "limited supply" claims that are demonstrably false.</p></li><li><p><strong>Inflating Social Proof:</strong> Using fabricated testimonials, fake reviews, or inflated user counts to create a false sense of popularity.</p></li><li><p><strong>Deceptive Interface Design:</strong> This includes a wide range of techniques often referred to as "Dark Patterns," a term coined by UX researcher Harry Brignull. Examples include using confusing navigation, hidden opt-outs, or "confirmshaming" language ("No thanks, I hate saving money") to trick users into making unintended choices.</p></li></ul><p>The key advantage of exploring this path is that an AI could be a more consistent and less biased arbiter for these kinds of structural rules than a human reviewer. While the risk of bias in the AI's training data would still exist, it avoids the motivated reasoning a human might use to permit a borderline case, making enforcement more uniform. This remains a highly contingent idea, but one worth exploring as AI capabilities mature.</p><h3><strong>The Funding Problem: Alternatives and Competitive Realities</strong></h3><p>Restricting advertising in vulnerable digital spaces immediately raises a critical question: how will these services be funded? 
Blocking a primary source of revenue is not enough; viable alternatives must exist. This requires considering the funding ecosystem alongside the regulatory one. Potential models include:</p><ul><li><p><strong>Direct Funding:</strong> Services could be supported by government grants or non-profit organizations, treating them as public goods akin to libraries or mental health services.</p></li><li><p><strong>User-Supported Ecosystems:</strong> Deliberately creating infrastructure for subscriptions or micropayments could allow users to directly fund the services they value, aligning the provider&#8217;s incentives with the user&#8217;s well-being.</p></li></ul><p>However, it is not an either/or situation. In a competitive market, the existence of an alternative funding model may not be enough to drive out ad-based systems entirely. A slight competitive edge from having access to even a small amount of advertising revenue can compound over time. While some users may value an ad-free experience, this appreciation is often insufficient to overcome the advantages&#8212;in scale, features, or price&#8212;that an ad-supported competitor can offer. Any serious attempt to create ad-free zones must also account for these powerful market dynamics.</p><h3><strong>Why This Matters in the AI Era</strong></h3><p>If AI companions, search algorithms, or social media feeds are driven by ad-based revenue, they inherit the same risks that have long plagued advertising&#8212;but with greater personalization and subtlety. 
As <a href="https://norabble.substack.com/p/ai-and-the-zero-sum-game">one of the adversarial industries, advertising will respond to scaling differently</a> than other industries, so some extra attention is merited.</p><p>We already use simple guardrails in some areas, like limiting ads in schools, but we have not consistently re-evaluated limits on quantity and placement as trade-offs have shifted. Revisiting those trade-offs can limit harm and need not entangle us in debates over which messages are &#8220;acceptable.&#8221;</p><p>The ability to systematically implement more advanced rules would be more transformative, but rightfully needs more caution. Caution, though, argues for more study and discussion rather than treating the topic as settled.</p><p>Such solutions won&#8217;t remove all risks, but they can help ensure that free speech protections remain intact while still shielding people from the most pervasive forms of manipulation.</p><h3><strong>Conclusion</strong></h3><p>The central challenge is not just what AI will do, but how it will operate within human systems already shaped by bias, neglect, and power imbalances. The examples offered here are far from comprehensive and are ultimately exploratory. Reducing the influence of advertising won&#8217;t create a utopia; its scope is only so broad, and the effort will not be easy. It&#8217;s a concern that hasn&#8217;t entirely escaped our awareness, but one where we&#8217;ve been limited by the trade-offs. As those shift, we must re-examine them.</p><p>We should remain aware of our own individual and group social interactions. Best efforts there are consistent with our humanity, and we should applaud anyone who tries to be a better person. But we must also recognize our limits and how systems encourage patterns that individual efforts either ride or resist. AI could reinforce these patterns&#8212;or help change them. 
A willingness to re-evaluate and adjust trade-offs can find wins where we&#8217;d otherwise see losses. But we must be clear-eyed: even positive changes come with some set of losses. The goal, then, is not to seek a perfect, cost-free solution, but to consciously decide which changes we are willing to bear. That choice&#8212;what we are prepared to change in order to gain something better&#8212;is the work that separates shaping our future from letting it happen to us.</p>]]></content:encoded></item><item><title><![CDATA[Delayed Effects and Dangerous Unpredictability]]></title><description><![CDATA[The Real Risk of Tariffs]]></description><link>https://substack.norabble.com/p/delayed-effects-and-dangerous-unpredictability</link><guid isPermaLink="false">https://substack.norabble.com/p/delayed-effects-and-dangerous-unpredictability</guid><dc:creator><![CDATA[Ryan Baker]]></dc:creator><pubDate>Sun, 03 Aug 2025 16:10:25 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!dQsg!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4882805d-8f7b-4998-9dd1-dec9d944b401_1024x462.png" length="0" 
type="image/jpeg"/><content:encoded><![CDATA[<p>Economists have warned of tariffs' impact on the US economy. The stock market has <strong><a href="https://thedailyeconomy.org/article/equity-markets-react-to-trumps-tariff-announcements-the-data/">dropped when policies were announced, recovered when they were partially undone or delayed</a></strong>, and then reached new records, even as new announcements and implementations continued.</p><p>If your main mechanism for judging the different claims is looking at your own personal situation, you&#8217;d likely be uncertain that anything bad was going to happen. Unfortunately, that&#8217;s not a good way to judge how an economy of 340 million people will behave.</p><p>Tariffs, and the business uncertainty they have created, do have an effect. 
While it may be somewhat masked, the masking has limits.</p><h3><strong>How tariffs affect businesses that produce goods</strong></h3><p>In a simple world, where a tariff was established without all the reversals, delays, and uncertainty, the effect on businesses largely depends on what they do and what options they have to change.</p><p>A business involved in manufacturing is affected on both its inputs and its outputs. On the output side, if a foreign competitor has the same output taxed, the local business gains an opportunity to raise its prices or win more market share, at least in the long term.</p><p>In the short term, a local business has to react to what foreign competitors do. If those competitors keep the same pre-import price, the post-import price increases. The local business can then raise its prices and earn more profit. Alternatively, if it has the capacity to produce more, it can maintain its current prices and find willing customers who choose its product over the foreign competitors&#8217; due to the relative price change.</p><p>It&#8217;s also possible the competitors choose to reduce their pre-import price. This inherently means less profit, unless they can force their suppliers or workforce to accept less for inputs or labor. If foreign competitors take less profit, either their ability to invest long term shrinks, eventually reducing their market share, or their existing investment becomes negative in value.</p><p>In the short term, there would be limits to expanding production, but with some predictability about the future, a business that expects to have a competitive advantage would invest in expanding. If a local manufacturer knows they&#8217;ll make more profit per item at the same price to the consumer, they can price lower to attract more business and be sure that extra production capacity won&#8217;t go to waste.</p><p>The timing of this does matter. 
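The short-run choices just described can be made concrete with a small sketch. All numbers and names here are hypothetical illustrations of the reasoning above, not figures from any real market:

```python
def post_import_price(pre_import_price: float, tariff_rate: float) -> float:
    """Price local buyers face if the foreign producer holds its pre-import price."""
    return pre_import_price * (1 + tariff_rate)

# Hypothetical numbers for illustration only.
foreign_pre = 100.0   # foreign competitor's pre-import price
tariff = 0.25         # an illustrative 25% tariff
local_price = 100.0   # local producer currently matches the foreign price

foreign_post = post_import_price(foreign_pre, tariff)

# Choice A: raise the local price to just under the tariffed import price,
# earning more profit on every unit sold.
raised_local_price = foreign_post - 1.0

# Choice B: hold the current price; the gap below is the relative price
# advantage that wins customers away from the now pricier imports.
price_advantage = foreign_post - local_price

print(foreign_post, raised_local_price, price_advantage)  # 125.0 124.0 25.0
```

Which choice is better depends on the capacity question above: the price advantage only converts into market share if there is spare production to serve the new customers.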
It doesn&#8217;t make sense for a local producer to undercut their foreign competitor until they can produce enough to take additional market share. Until that point, the local producer would be better off maintaining a higher price, accumulating more profit, and using that profit (and the additional access to capital it enables) to invest in more production. Only after that investment adds more production capacity would they undercut their foreign competitors' prices.</p><p>Foreign producers will want to continue to use their existing capacity. They&#8217;ll have less incentive to invest in more capacity, but in the immediate sense, there&#8217;s no reason for them to reduce prices. If they assume the tariffs will remain in place, they would want to capture what profit they could until the local producer undercuts them in a way that makes the existing capacity unprofitable to operate, at which point they would idle that capacity, and possibly take more final measures, such as selling equipment or parts, declaring bankruptcy, etc.</p><p>Again, timing matters. One set of decisions, continued operation or idling, depends on the realities of today. Another set of decisions affords a more complex set of options:</p><ol><li><p>New investment</p></li><li><p>Maintenance with no investment</p></li><li><p>No maintenance, and no investment</p></li><li><p>Dismantling operational capacity</p></li></ol><p>These decisions depend on future expectations. If the competitive advantage looks favorable for the future, new investment makes sense, either to capture any growth in demand, or to force a competitor to choose option (3) or (4). If competitive advantage looks even, option (2) makes sense. If the reality is a competitive disadvantage, you&#8217;ll choose between (3) and (4) based upon how fast your competitor expands and the prices they set.</p><p>If a competitor can't meet all the market demand, a company might continue selling with its existing equipment. 
However, the most profitable move could be to let a shortage happen and dismantle operations (option 4), especially if the costs of maintaining idle equipment are high. In other words, as a company produces less, its competitive disadvantage can worsen, forcing it to dismantle its operations even faster than its rivals can grow to fill the gap. If not, the company would likely continue to produce and sell until its competitors capture the entire market through lower prices and greater supply.</p><h3><strong>Effect of tariffs on inputs</strong></h3><p>In addition to all of these considerations about the effect of tariffs on outputs, there&#8217;s also the possibility that an input has a tariff applied. Assuming there is a local producer for that input, the full story about the balance between foreign and local producers will play out. During that time, the input&#8217;s price would initially increase. The simplest logic assumes the initial price increase would be equal to the tariff rate, but it could be less. As expansions and contractions take place, the price would come down from this increased level. The most likely settling point is still higher than the original pre-tariff price.</p><p>While it is possible to craft a story that ends up with the final local price being lower, it would depend on specific equilibria. Then there is the counterpoint that you could end up with shortages that cause prices to exceed the original price plus tariff. Anyone who tells you such situations are probable without a very detailed set of data is making overconfident predictions.</p><p>If you&#8217;re a local producer whose inputs increase in price and whose foreign competitors are affected by tariffs on the output product, timing your investments becomes more complicated. For one, you probably don&#8217;t have enough information about the business of your potential suppliers to know what their optimal pricing model would be. For two, they may not either. 
For three, even if they have the information to choose an optimal pricing model, they may be run by less than fully rational individuals, or hope that their competitors have less than full information.</p><p>Most businesses must have a lot of confidence in their own competitive advantage or the growth potential of their industry before they commit to investments. In general, they&#8217;d prefer to risk their customers experiencing a shortage, and thus being able to raise prices, than to risk having unproductive investments.</p><p>We often make the mistake of only thinking about high-growth industries' business models. In those industries, early investments are somewhat defensive. Beyond creating customer loyalty, there are also scale advantages. The producer that reaches a certain scale first captures that part of a competitive advantage. Dislodging them from this position later, at the apex of growth, is more expensive than taking an early lead, because those costs compound exponentially. As such, in a high-growth industry everyone wants an early lead.</p><p>But in industries where growth is minimal, flat, or contracting, this preference for early investment doesn&#8217;t exist. Even if a competitor invests first, the scale advantage is trivial or non-existent, and the costs of investing early and investing later are equal or roughly equal.</p><p>It&#8217;s in this environment that a country imposing a tariff should worry about the possibility of shortages. The foreign producers may no longer see the local market as worth exporting to, and yet the local producers may have favored being conservative about their investments to the degree that their capacity expansion isn&#8217;t available until after a shortage is experienced. In a liquid global market, a local shortage would be resolved by bidding up the pre-import price. 
However, markets aren&#8217;t fully liquid, so a shortage could be left unresolved due to contracts, agreements, and the simple reality of shipping delays.</p><h3><strong>How tariff policies create business uncertainty</strong></h3><p>In a long-term, static economy, reasoning about the effect of tariffs is simpler. In a more dynamic model, or during periods of transition, it becomes more complex, and timing must be factored in.</p><p>When Trump announces a policy, reverses the policy, reimposes the policy at half the initial level, and then adds another layer on top, this more complex model gains yet another layer of complexity. The effect of uncertainty is a greater delay in investment, increasing the probability of high local prices or local shortages, which would lead to even higher local prices on top of the shortage itself.</p><h3><strong>Risks of Oversimplifying</strong></h3><p>At times it&#8217;s appealing to try to apply our own personal experiences with money and finance to economic reasoning. For example, when you think of businesses delaying investments, you might think of it like a personal decision to buy a new car or keep driving the old one. If you don&#8217;t spend on a car today, you save that money, which you retain as potential to buy a new car (or something else) later.</p><p>For an individual business this is somewhat similar. They do have the added complexities of building relationships and organizing operations to worry about. Still, a company with $100 million in the bank has a lot of potential saved up.</p><p>Where this analogy breaks down fully is at the level of a national or global economy. Despite all the talk about imports and exports, most national economies produce locally the majority of what they consume, and consume locally the majority of what they produce. 
For example, <strong><a href="https://www.frbsf.org/research-and-insights/publications/economic-letter/2019/01/how-much-do-we-spend-on-imports/">about 89% of U.S. consumer spending is on domestically produced goods and services</a></strong>. When you or a business saves money, you can later exchange the money for a good or service, with the most minimal of restrictions. But a national economy has limited options for using currency to demand goods or services, and the global economy as a whole has none.</p><p>An economy can store value by stockpiling produced goods or raw materials, but delayed investment usually doesn&#8217;t translate into stockpiling. Rather, it tends to mean less circulation of money. In many cases that means less employment, and labor can&#8217;t be stored. If a person is unemployed in 2025, you can&#8217;t get them to do two years&#8217; worth of work in 2026.</p><p>At an economy-wide level, if one business doesn&#8217;t invest, another may find it easier to get the loan necessary to invest. But when investment is lower across an entire economy, the result is mostly waste that can&#8217;t be recovered.</p><h3><strong>The connections between businesses</strong></h3><p>The combined effects of uncertainty and actual tariff costs affect US businesses. Despite braggadocious claims that foreigners would pay for tariffs, the evidence so far is that Americans are bearing them. So far a lot of the cost is <strong><a href="https://paulkrugman.substack.com/p/the-art-of-the-really-stupid-deal">landing on American businesses</a></strong>. Some people might consider that a win, a takedown of profitable US corporations. 
But ultimately that&#8217;s a shortsighted view, and one that ignores the composition of American businesses.</p><p>The problems are that (a) this is unlikely to continue for long, (b) it affects many smaller businesses, and (c) many Americans are invested in those corporations.</p><p>One of the reasons this won&#8217;t continue? Some of those costs are being buffered by <strong><a href="https://apnews.com/article/trump-tariffs-consumers-prices-ebf959d8f8ad24bd0757d8e3693924c1">pre-tariff stockpiling</a></strong>. When those stockpiles are gone, businesses will have harder decisions to make. They could have raised prices already, earning a small profit for being wise enough to stockpile, but they weren&#8217;t forced to. In the future, declining to raise prices will no longer mean forgoing a small one-time profit; it may mean being unprofitable in general. Businesses that aren&#8217;t profitable eventually stop being businesses. If that&#8217;s the case, the pattern of absorbing costs will end and consumer prices will rise.</p><p>Even if the change in costs is small enough to merely reduce profit rather than invert it into a loss, these costs aren&#8217;t borne just by large corporations; many of the affected firms are smaller businesses, whose investment plans are altered by a change in cash flow.</p><p>Finally, remember that Americans own quite a lot of those corporations, and that changes to their profitability will lower rational valuations. Those corporations do a lot of investing too. In the end, either investment or dividends must decline, and in both cases this should translate into a lower valuation.</p><h3><strong>The risks of delayed effects</strong></h3><p>I worry that a set of delayed effects, which so far have masked the developing costs of tariffs and inconsistent policy, will all land at the same time. 
The effect here is far more worrisome than the stock market&#8217;s initial reaction to the tariffs. That reaction was based on expectations, and expectations are easily corrected by a course change. The wave of effects that has been working its way through our economy&#8217;s layers is not so easily reversed.</p><p>Businesses put off investments while they built stockpiles, and because of uncertainty, real work wasn&#8217;t being done. We finally saw the effect of that in a <strong><a href="https://apnews.com/article/jobs-unemployment-economy-trump-federal-reserve-68a15f89d68793a6cf88a522ff33246c">revised jobs report</a></strong>. It&#8217;s not simply that the stockpile is gone; it&#8217;s that the stockpile itself had costs, and we&#8217;ve largely been missing those costs while drawing it down. But <strong><a href="https://www.nytimes.com/2025/08/02/business/trump-tariffs-consumer-prices.html">when that&#8217;s exhausted a new price must be paid</a></strong>, in addition to the loss of investment. Even if all tariffs are permanently reversed next week, those effects will continue to show up.</p><p>Businesses that use profits or a cash cushion to absorb costs will make that clear in quarterly statements, or, for smaller businesses, in more personal ways. Even if all tariffs are permanently reversed next week, those costs remain. 
If they continue, they keep growing.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://www.bea.gov/news/2025/gross-domestic-product-2nd-quarter-2025-advance-estimate" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!dQsg!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4882805d-8f7b-4998-9dd1-dec9d944b401_1024x462.png 424w, https://substackcdn.com/image/fetch/$s_!dQsg!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4882805d-8f7b-4998-9dd1-dec9d944b401_1024x462.png 848w, https://substackcdn.com/image/fetch/$s_!dQsg!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4882805d-8f7b-4998-9dd1-dec9d944b401_1024x462.png 1272w, https://substackcdn.com/image/fetch/$s_!dQsg!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4882805d-8f7b-4998-9dd1-dec9d944b401_1024x462.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!dQsg!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4882805d-8f7b-4998-9dd1-dec9d944b401_1024x462.png" width="1024" height="462" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4882805d-8f7b-4998-9dd1-dec9d944b401_1024x462.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:462,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Contributions to Percent Change in Real GDP, 2nd Quarter 
2025&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:&quot;https://www.bea.gov/news/2025/gross-domestic-product-2nd-quarter-2025-advance-estimate&quot;,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Contributions to Percent Change in Real GDP, 2nd Quarter 2025" title="Contributions to Percent Change in Real GDP, 2nd Quarter 2025" srcset="https://substackcdn.com/image/fetch/$s_!dQsg!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4882805d-8f7b-4998-9dd1-dec9d944b401_1024x462.png 424w, https://substackcdn.com/image/fetch/$s_!dQsg!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4882805d-8f7b-4998-9dd1-dec9d944b401_1024x462.png 848w, https://substackcdn.com/image/fetch/$s_!dQsg!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4882805d-8f7b-4998-9dd1-dec9d944b401_1024x462.png 1272w, https://substackcdn.com/image/fetch/$s_!dQsg!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4882805d-8f7b-4998-9dd1-dec9d944b401_1024x462.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">Note the decline in investment. Does that look like businesses investing in increasing supply?</figcaption></figure></div><p>We&#8217;re deep enough into the tariffs that <strong><a href="https://www.descartes.com/resources/knowledge-center/global-shipping-report-june-2025-us-Imports-down-in-may-led-by-china">foreign shipments are disincentivized by high tariff rates</a></strong>, and yet <strong><a 
href="https://apnews.com/article/us-economy-growth-tariffs-629549a1e77e6f1823adbcd5ccf78941">businesses have not invested in replacing that supply</a></strong>. This creates a balance-sheet problem if they&#8217;ve lost their cash cushion while waiting and only now see enough certainty to make investment worthwhile. In theory this can be fixed by credit: investment can start, and eventually these local producers would gain more market share. But between those points there&#8217;s a looming shortage. We can&#8217;t really avoid those shortages where they are lurking. Maybe I&#8217;m wrong and such shortages aren&#8217;t lurking, but I&#8217;m not sure who could give you a reason to be confident in that.</p><p>Even if all tariffs are permanently reversed next week, arranging new supply will come with many complications. Supply contracts can take time to restart.</p><p>And then there are the businesses that have had, or will have, their production costs increased by higher input prices. Those businesses face a hard set of choices. They can raise prices, but if they do that alone they lose customers. If all businesses in an industry come to the same conclusion, they may simply shrink the market. If the market shrinks, the industry has to scale back costs or lose money.</p><p><strong>In a competitive market,</strong> in the long run, a business has only one option: to raise prices. If it operates at a loss, it will go bankrupt. If it operates at a lower margin, it will lose access to credit and be unable to make investments. That will have different effects depending on the market&#8217;s potential for growth.</p><p><strong>In a growth market,</strong> not making investments will cause market share to drop. At the individual business level, that would be destructive, making raising prices a better option than accepting lower margins (which are sometimes already negative in growth industries) and lower investment. 
If, counter to that, it does make sense to accept lower margins, the result is that the growth industry transitions to a period of no growth, with effects that spread to the wider economy.</p><p><strong>In a stable market,</strong> investments are less frequent, but can still be necessary for maintenance. Loss of access to credit can then force such businesses to shrink, albeit slowly, as deferred maintenance gradually reduces efficiency and capacity. Capacity reductions can cause a loss of scale advantages, raising per-unit costs and thus forcing the increased-price-versus-lower-margin tradeoff to begin again.</p><p>In the end, the only real option for businesses that were at an equilibrium before is to raise prices. It&#8217;s tempting to imagine that most businesses are out of equilibrium, and that profits are high enough that none of these effects are triggered. It&#8217;s a hard story to disprove in any given moment, because finding the equilibrium for individual businesses and industries is quite difficult. For the most part, we don&#8217;t have a better mechanism for discovering it than the actual conduct of markets. If you knew a stable equilibrium other than what markets discover for themselves, that information would enable you to make a lot of money by outsmarting them. While that&#8217;s not impossible, history tells us it is far more difficult and rare than overconfident predictors make it look.</p><h3><strong>Conclusion: Cascading Risks and Lasting Costs</strong></h3><p>The true risk of tariff policies lies not just in their immediate costs, but in their potential to create a cascade of lasting economic damage. The uncertainty generated by shifting trade rules discourages business investment, a loss of productivity that cannot be easily recovered. 
As this article has detailed, these policies create a ripple effect:</p><ul><li><p><strong>Delayed Investment:</strong> Businesses postpone crucial investments in new equipment and capacity, leading to long-term economic waste and a higher risk of future shortages.</p></li><li><p><strong>Supply Chain Fragility:</strong> As foreign supply is disincentivized and domestic replacement fails to materialize, supply chains become brittle and prone to disruption.</p></li><li><p><strong>Absorbed Costs and Rising Prices:</strong> While businesses may initially absorb the costs of tariffs through lower profits or cash reserves, this is not sustainable. Eventually, these costs are passed on to consumers in the form of higher prices.</p></li></ul><p>These factors create a fragile economic environment. Unlike a temporary market fluctuation, the structural changes caused by prolonged trade conflicts, such as lost investment, atrophied supply chains, and bankrupt businesses, are not quickly reversible, even if the tariffs themselves are lifted.</p><p>These changes can develop with less visibility, below the level of awareness that drives market prices. But when that lack of awareness is shattered, a course change is insufficient to undo the change in expectations that awareness brings. The economic consequences, therefore, will persist long after the policies that created them have changed.</p>]]></content:encoded></item></channel></rss>