Conversations about AI often focus on what AI will do. While important, this focus can be misleading. A more critical question is: What will people do with AI? This shift matters because AI systems are, and will increasingly be, malleable. As our understanding grows, we can shape them to do what we want—and where we can’t yet, we’re likely to gain those abilities.
While developing those capabilities is valuable, AI research has often stopped short of addressing what comes after: deciding how we should use them. Humanity has rarely agreed on such questions. It’s possible someone has found a complete answer, but more likely, we have only partial and competing ideas. Even when we agree, translating agreement into action proves difficult.
AI enters a world already full of unresolved issues. To use it wisely, we must acknowledge the societal patterns we’ve long ignored.
Example: AI Companions
Researchers ask if AI companions could reduce loneliness. This is promising for those who feel isolated, yet reactions vary.
Some dismiss it as “unnatural” or “creepy,” ignoring how our social structures actively contribute to loneliness. While some causes are personal, many are external—rooted in social biases and behaviors. Knee-jerk objections often defend the status quo, offering platitudes (“we should all be nice”) that history shows we rarely fulfill.
Others worry about exploitation—people becoming dependent on AI and vulnerable to manipulation. This is a valid concern, but banning AI for companionship ignores the underlying problem: many people are already left lonely by societal design.
The Social Ecosystem Problem
The deeper risk lies not in AI itself, but in the social environment it enters. Human biases determine who we befriend and who we exclude. These biases are difficult to overcome, because personal happiness is tied to them. Progress in changing this has been slow for centuries, with only incremental gains.
In this sense, we live in a manipulative ecosystem. Human social life often revolves around implicit negotiations for status, influence, and belonging. We signal our own value, interpret others' signals, and use those social cues to navigate our standing. This process can subtly or overtly marginalize those who play it less, or less effectively, leaving them feeling rejected and reinforcing cycles of isolation. Those without sufficient supportive relationships may turn to artificial companionship—whether AI chatbots or other digital agents—that promise consistent validation without the risks of human judgment. While such companionship can meet real needs, it can also increase dependence on entities whose motivations may not align with the user's well-being.
If this ecosystem persists—as history suggests—it will shape how AI is used and abused. Delaying AI in hopes that society will “be ready” may simply leave us unprepared when AI inevitably becomes widespread.
Instead, we should ask: what can we do, with old or new capabilities, to make this ecosystem safer before AI becomes deeply embedded?
Advertising as a Case Study
Advertising is one of the most organized forms of social influence, and it provides a clear case study for the risks of AI integration. Consider the AI companions discussed earlier. If they are provided by people who genuinely aim to reduce loneliness, we can be optimistic. But if they come from companies whose main goal is ad revenue, we should be cautious. This same dynamic applies to any system—from search engines to social media feeds—that could be funded through advertising.
Society has rarely addressed the broader effects of advertising. Rules generally only forbid outright falsehoods, leaving manipulative but technically true messages untouched. In an AI-powered world, this gap becomes more dangerous.
We often avoid regulating advertising because of free speech concerns. While this principle is important, advertising is distinct from the personal, expressive speech we most value. There will be blurry cases, but clear ones exist—and addressing them could protect vulnerable people without dismantling free expression.
Balancing Free Speech and Advertising Limits in an AI Age
When addressing the risks of AI-powered advertising—especially in companionship contexts—we face a classic dilemma: how to curb manipulation without undermining free speech. The tension is that the more personal judgment a limit requires, the more regulation risks becoming a tool for enforcing one person's biases over another's speech. That would strike at the very root of free speech protections.
One path forward is to prioritize rules that require minimal subjective interpretation. These rules should be inherently more equal in application, less reliant on individual moral standards, and thus less prone to abuse.
Examples of Lower-Judgment Rules:
Limiting the Total Volume of Advertising: Capping the overall amount of advertising exposure—whether in minutes per hour or ad impressions per day—reduces manipulation opportunities without privileging one viewpoint over another. A minimal sketch of such a cap appears after this list.
Restricting Advertising in Certain Spaces:
Physical spaces: We already prohibit most advertising in schools, recognizing that young people deserve protection from certain commercial pressures.
Digital spaces: Online environments used primarily for education, mental health support, or community-building could adopt similar restrictions. Currently, digital spaces often import norms from the open web, where advertising is pervasive and largely unregulated.
Context-Based Exclusions: Banning advertising in contexts where people are unusually vulnerable—such as grief counseling forums, addiction recovery platforms, or AI companionship apps—could help safeguard well-being without making content-based judgments about the ads themselves.
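To make the mechanical character of these rules concrete, here is a minimal Python sketch of the volume cap from the first item. The class name, the default limit, and the user identifiers are illustrative assumptions rather than a proposed standard; the point is that the rule counts exposures and never inspects the content of any ad, so enforcement requires little subjective interpretation.

```python
from collections import defaultdict
from datetime import date


class ImpressionCap:
    """Hypothetical per-user daily cap on ad impressions.

    The rule is mechanical: it counts exposures against a fixed limit
    and makes no judgment about the content of any ad.
    """

    def __init__(self, daily_limit: int = 20):
        self.daily_limit = daily_limit
        self._counts: dict[tuple[str, date], int] = defaultdict(int)

    def allow(self, user_id: str, today: date | None = None) -> bool:
        """Record an impression and return True if the user is still under today's cap."""
        key = (user_id, today or date.today())
        if self._counts[key] >= self.daily_limit:
            return False
        self._counts[key] += 1
        return True


if __name__ == "__main__":
    cap = ImpressionCap(daily_limit=3)
    print([cap.allow("user-123") for _ in range(5)])  # [True, True, True, False, False]
```

Because the check never reads the ad itself, two providers with very different content face exactly the same constraint, which is what keeps this kind of rule at the low-judgment end of the spectrum.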
A Further Step: Regulating Manipulative Methods
Beyond rules about the context of advertising, a more ambitious, though complex, path involves regulating the specific manipulative methods used. This approach rightfully raises concerns about potential overreach, so it must be handled with care. The focus would not be on the content of a message, but on its structure, targeting verifiably manipulative techniques that often exploit known psychological shortcuts.
Many of these techniques are applications of well-documented principles of persuasion, such as those identified by Robert Cialdini in his foundational book, Influence. An automated system could, in theory, be trained to detect and block advertising that employs these methods:
Exploiting Scarcity: Creating artificial pressure with misleading countdown timers or "limited supply" claims that are demonstrably false.
Inflating Social Proof: Using fabricated testimonials, fake reviews, or inflated user counts to create a false sense of popularity.
Deceptive Interface Design: This includes a wide range of techniques often referred to as "Dark Patterns," a term coined by UX researcher Harry Brignull. Examples include using confusing navigation, hidden opt-outs, or "confirmshaming" language ("No thanks, I hate saving money") to trick users into making unintended choices.
The key advantage of exploring this path is that an AI could be a more consistent and less biased arbiter for these kinds of structural rules than a human reviewer. While the risk of bias in the AI's training data would still exist, it avoids the motivated reasoning a human might use to permit a borderline case, making enforcement more uniform. This remains a highly contingent idea, but one worth exploring as AI capabilities mature.
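As a rough illustration of what targeting structure rather than content could look like, the sketch below flags two of the techniques listed above using simple pattern matching. The AdCreative fields, the pattern lists, and the flag names are assumptions made for illustration; a real system would need to verify claims (for example, whether a countdown is genuine) and would rely on trained classifiers rather than keyword lists. The design point is that the detector reports which structural rule fired instead of judging the message's viewpoint.

```python
import re
from dataclasses import dataclass


@dataclass
class AdCreative:
    """Minimal stand-in for an ad's text and interface copy (illustrative)."""
    body_text: str
    decline_button_text: str = ""


# Illustrative patterns only: a deployed system would pair structural detection
# with verification (is the "limited supply" claim actually true?) and would
# use trained classifiers rather than keyword lists.
SCARCITY_PATTERNS = [
    r"only \d+ left",
    r"offer ends in \d+ (seconds|minutes)",
    r"limited (time|supply)",
]
CONFIRMSHAME_PATTERNS = [
    r"no thanks,? i (hate|don't want)",
]


def flag_manipulative_structure(ad: AdCreative) -> list[str]:
    """Return the names of structural techniques the ad appears to use."""
    flags = []
    body = ad.body_text.lower()
    if any(re.search(p, body) for p in SCARCITY_PATTERNS):
        flags.append("scarcity_pressure")
    decline = ad.decline_button_text.lower()
    if decline and any(re.search(p, decline) for p in CONFIRMSHAME_PATTERNS):
        flags.append("confirmshaming")
    return flags


if __name__ == "__main__":
    ad = AdCreative(
        body_text="Hurry! Only 3 left at this price. Offer ends in 10 minutes.",
        decline_button_text="No thanks, I hate saving money",
    )
    print(flag_manipulative_structure(ad))  # ['scarcity_pressure', 'confirmshaming']
```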
The Funding Problem: Alternatives and Competitive Realities
Restricting advertising in vulnerable digital spaces immediately raises a critical question: how will these services be funded? Blocking a primary source of revenue is not enough; viable alternatives must exist. This requires considering the funding ecosystem alongside the regulatory one. Potential models include:
Direct Funding: Services could be supported by government grants or non-profit organizations, treating them as public goods akin to libraries or mental health services.
User-Supported Ecosystems: Deliberately creating infrastructure for subscriptions or micropayments could allow users to directly fund the services they value, aligning the provider’s incentives with the user’s well-being.
However, it is not an either/or situation. In a competitive market, the existence of an alternative funding model may not be enough to drive out ad-based systems entirely. A slight competitive edge from having access to even a small amount of advertising revenue can compound over time. While some users may value an ad-free experience, this appreciation is often insufficient to overcome the advantages—in scale, features, or price—that an ad-supported competitor can offer. Any serious attempt to create ad-free zones must also account for these powerful market dynamics.
Why This Matters in the AI Era
If AI companions, search algorithms, or social media feeds are driven by ad-based revenue, they inherit the same risks that have long plagued advertising—but with greater personalization and subtlety. As one of the adversarial industries, advertising will respond to scaling differently than other industries do, so it merits extra attention.
We already use simple guardrails in some areas, like limiting ads in schools, but we have not consistently re-evaluated limits on quantity and placement as trade-offs have shifted. Revisiting those trade-offs can limit harm without entangling us in debates over which messages are “acceptable.”
The ability to systematically implement more advanced rules would be more transformative, but it rightly calls for more caution. That caution, though, argues for further study and discussion rather than treating the topic as settled.
Such solutions won’t remove all risks, but they can help ensure that free speech protections remain intact while still shielding people from the most pervasive forms of manipulation.
Conclusion
The central challenge is not just what AI will do, but how it will operate within human systems already shaped by bias, neglect, and power imbalances. The examples offered here are far from comprehensive and are ultimately exploratory. Reducing the influence of advertising won’t create a utopia; its scope is limited, and the effort will not be easy. It’s a concern that hasn’t entirely escaped our awareness, but one where we’ve been constrained by the trade-offs. As those trade-offs shift, we must re-examine them.
We should remain mindful of our own social interactions, individually and in groups. Making our best effort there is consistent with our humanity, and we should applaud anyone who tries to be a better person. But we must also recognize our limits and how systems encourage patterns that individual efforts either ride or resist. AI could reinforce these patterns or help change them. A willingness to re-evaluate and adjust trade-offs can find wins where we’d otherwise see losses. But we must be clear-eyed: even positive changes come with losses of their own. The goal, then, is not to seek a perfect, cost-free solution, but to consciously decide which changes we are willing to bear. That choice, what we are prepared to change in order to gain something better, is the work that separates shaping our future from letting it happen to us.