The Economic Future from and of AI

This is part one of a two-part series. In this first part, I want to outline some of my views about how salient a set of what we might call existential concerns about AI should be. In part two, I will discuss some more immediate interactions with today's economy.
Both of these are important topics, and it can be tempting to ignore one for the other. Existential concerns have such large impacts that they can make present-day concerns seem trivial. The flaw in that thinking is that we don't know when, or even if, existential concerns will emerge. Even if we assume they will, how we manage the present day has a lot to do with how stable and strong a society we will have when it's time to manage those concerns. And if they never emerge, today's concerns are, by default, the more important ones.
That frame of reference can sometimes be used to ignore existential concerns, which would also be a mistake. While their certainty and timing are less knowable, the adjustments we make for them can be modest in the present yet still have large impacts when carried across time. Sometimes this is nothing more than building the understanding that would help us recognize early impacts; sometimes it's preparing the plans we'd enact once those impacts are recognized.
It's tempting to demand responses to existential risks that would be as total as the risk itself. With nuclear weapons, for example, it's tempting to think: why have any? While that sounds reasonable, we should ask whether, if that were the only response we considered, we would be better or worse off than we are with the far less obvious set of non-proliferation treaties, monitoring, and other responses that have actually been employed. If we assume we would have gotten what we demanded, we can make a good, though not air-tight, case for it. But the biggest risk is that we might not have been able to achieve that demand, and would have ended up with nothing as a response.
So both are important, and both interact, but they are different enough that it's unproductive to continually introduce one into the other's domain. With that in mind, I'd like to spend some time on the existential concerns, in order to set them aside before moving to the present-day concerns.
Two Horizons of AI's Economic Impact
How Artificial Intelligence will affect the economy unfolds across two distinct but interconnected horizons. The first is a practical, near-term landscape of familiar economic forces: jobs will be created and destroyed, investments will be made, and markets will adapt. These are the everyday concerns of disruption and growth that have characterized technological shifts of the past. AI is normal technology, at least in the short term.
The second horizon raises questions about the nature of work, value, and social organization. If AI is as transformative as predicted, it may force a fundamental restructuring of the mechanisms that distribute wealth and opportunity. Understanding AI's total economic impact requires analyzing both of these horizons—the immediate disruption and the potential long-term transformation—and recognizing how the former may ultimately lead to the latter.
The Existential Horizon: Abundance, Agency, and the Social Contract
When we talk about an existential horizon, we're assuming a radically changed environment. Not just incremental changes, but something fundamentally different. One such environment would be one in which there is no longer an incentive to utilize more human labor. Historically, productivity improvements have led to greater labor specialization, greater benefits to labor, and continued broad employment of labor across society. While some jobs were reduced, and some disappeared, others were created.
If the historical pattern no longer holds, the result is not merely the temporary friction of workers moving between jobs, but a fundamental, lasting mismatch between workers' skills and the demands of an AI-driven economy. If AI automates a wide swath of cognitive and manual tasks, a large segment of the population may find their skills devalued, creating a challenge that standard economic churn cannot easily resolve. This practical concern, if it grows large enough, becomes the mechanism for a fundamental crisis in the structure of the economy.
It's important to note that part of the context of that crisis is radical productivity gains. This implies an abundance of production which, if distributed, could fulfill all present-day physical needs and desires.
Unstable Dystopia and Inevitable Transformation
The core issue remains the distinction between production and distribution. AI may solve the problem of production, but our primary distribution mechanism—employment income—could be broken by widespread structural unemployment.
An existential thinker considers a potential dystopian state where the owners of AI capture the gains, leaving a majority without income. However, such a system would be profoundly unstable. An economy where the vast majority of the population cannot afford to purchase the goods and services being produced is one that no longer functions for the majority, creating immense pressure for change. It is difficult to envision a scenario where a large, disenfranchised majority would peacefully accept deprivation while a paradise of abundance is technologically possible. The social contract would be broken, and the economic system's legitimacy would evaporate.
Therefore, the more enduring pathway is one of transformation. Faced with systemic collapse, social and political systems would be forced to adapt. Through democratic pressure or mass social movements, new mechanisms for distributing the gains of productivity—such as a Universal Basic Income (UBI) or other forms of social wealth distribution—would likely emerge not as a matter of choice, but of necessity, to ensure social and economic stability.
The Inevitability of Change: Why a Dystopian State Cannot Last
The notion that a wealthy elite could maintain a dystopian system against the will of the majority overlooks the fundamental levers of power in society. A population that has nothing to lose has no reason to respect the existing economic or political order.
Democratic Power: In nations with functioning democratic processes, a disenfranchised majority would have the votes to enact radical change. The political imperative to ensure the well-being of the populace would eventually overwhelm the influence of a small, wealthy minority.
The Power of Mass Action: If democratic channels were to fail, the risk of mass civil unrest and rebellion would become acute. A system that creates widespread deprivation alongside visible, immense wealth is inherently unstable and invites revolutionary change.
The Appeal to Humanity: Beyond coercion and political maneuvering, there remains the appeal to the shared humanity and self-preservation of the powerful. A society in a state of constant, simmering revolt is not a desirable or stable one for anyone, including the elite.
Alternate Economies: A final consideration is that if the formal economy is not functioning for the majority, alternate economies can form. This happens today in exploitative economies around the world. The difference is that in those economies today, productive capacity is low, so they distribute little. To persist, a dystopian formal economy would have to successfully suppress alternate economies, or deny them access to productive capacity, both that of their own members and of AI.
These corrective forces make a prolonged, technologically enforced dystopia an unlikely long-term outcome. The friction and conflict during the transition would be immense, but the ultimate direction would be toward a new social contract that aligns with the new economic reality.
The "Brave New World" Scenario: The Dystopia of Contentment
A more insidious, and perhaps more stable, dystopian outcome is not one of overt oppression but of sophisticated pacification. In this scenario, "bread and circuses" would pacify the population, though an accurate view upgrades that phrase to far more than subsistence: a high standard of living for all, with material needs and entertainment amply provided for.
The trade-off would be a loss of genuine agency. The population would be consumers and spectators in a world run by a small elite, not active participants in shaping their society. The core conflict is not freedom versus suffering, but freedom versus comfort. This presents a more philosophical challenge: whether a comfortable, secure, and entertained population would still value the burdens and responsibilities of self-determination.
Navigating the Transition: A Pragmatic Approach
Given the scale of the potential disruption and the inevitability of the technological advance, the central policy debate should focus on how to manage the transition.
The Case Against Preemptive Policy and Prohibition
A key debate is the timing of major social and economic reforms. Some advocate preemptively implementing policies like Universal Basic Income (UBI) to soften the blow of disruption. But a system like UBI is a response to a fundamentally different economic reality, one in which the link between labor and survival has been severed for a large part of the population. Insisting upon it as a condition of AI development is premature.
To the degree that UBI makes sense today, perhaps in a more limited form, it can be advocated for on those terms. But it becomes absolutely necessary only after a systemic change has occurred, not before. To insist upon it now, on an uncertain timeline, would be to enact a cure for a condition that has not yet manifested.
Similarly, arguments to halt or severely restrict AI development due to these risks are both overly cautious and impractical. The potential for AI to solve humanity's most pressing problems creates an enormous opportunity cost. Furthermore, AI development is a global geopolitical race. Any nation that unilaterally pauses its efforts risks falling catastrophically behind, making a global moratorium unenforceable. Progress is, for all practical purposes, inevitable.
The most viable path forward is not to stop progress or to preemptively re-engineer society, but to focus on managing the risks and guiding the technology's development.
Conclusion: The Risk of a Great Unmasking
While the long-term, existential questions surrounding AI command attention, the most immediate and tangible threats to the economy are rooted in the complex interplay between the current investment boom and other festering economic problems. The path to any future, utopian or otherwise, must first pass through a period of significant short-term risk, where the greatest danger is not a single point of failure, but a cascade of them.
Those near-term threats will be the topic of part two.