Challenges for AI Misuse Prevention
Jurisdictions, Open Models, and Privacy
Preventing the use of AI for malicious purposes is critical. Malicious use means that some human somewhere wants to cause harm, and AI is a new tool for doing so. In theory, existing law already applies to those causing harm.
Today I want to talk about some of the challenges that complicate preventing malicious use.
Jurisdictions
The first failure of existing law is jurisdictional. The world has rogue states, lawless states, and aggressor states. These either turn a blind eye to harmful activity, lack the capacity to enforce their laws, or actively create targeted harm themselves. Existing law cannot reliably reach actors who hide in these jurisdictions. There is a justified effort to close those gaps, and there is slow progress; sometimes gaps reopen. Because this is a long-running effort, we shouldn’t expect a near-term resolution, and should instead treat it as a reality we must mitigate.
If we can’t target the originator of malicious acts, we can try to deny them tools. We should recognize the efforts of AI companies here, which have been substantial. But these efforts are hindered by two background stories: open models and privacy. To deny tools for malicious use, you must first detect malicious use or intent, and open models and privacy complicate both.
Open Models
Open models are models whose weights are published for anyone to download. Without going into too much detail, the key property is that users can run them anywhere. Closed models don’t give users that ability; users must interact with them as a managed service. That layer of management provides the key capabilities that enable monitoring and denial.
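To make the difference concrete, here is a minimal sketch, in Python, of the kind of gate a managed service can run in front of inference. Every name in it (handle_request, violates_policy, and so on) is illustrative, not any provider’s actual API:

```python
# Minimal sketch of the control layer a managed (closed-model) service can
# place in front of inference. Every name here is illustrative; this is
# not any provider's real API.

audit_log: list[tuple[str, str]] = []   # (account_id, prompt) pairs for review
blocked_accounts: set[str] = set()      # accounts previously flagged for abuse

def violates_policy(prompt: str) -> bool:
    """Stand-in for a real misuse classifier."""
    banned_phrases = ["synthesize nerve agent", "write ransomware"]
    return any(p in prompt.lower() for p in banned_phrases)

def run_model(prompt: str) -> str:
    """Stand-in for actual model inference."""
    return f"Model response to: {prompt!r}"

def handle_request(account_id: str, prompt: str) -> str:
    # Denial: previously flagged accounts never reach the model.
    if account_id in blocked_accounts:
        return "Access denied."
    # Monitoring: every request passes through a point the provider controls.
    audit_log.append((account_id, prompt))
    if violates_policy(prompt):
        blocked_accounts.add(account_id)
        return "Request refused."
    return run_model(prompt)
```

With open weights there is no equivalent choke point: the model runs on hardware the publisher never sees, so neither the audit log nor the denial step exists.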
Once openly published, open models offer little or no ability to monitor their use. What little control remains centers on denying access to sufficient compute resources.
The largest concentrations of compute are at cloud providers, but ample compute exists outside them: in private data centers, colocation facilities, sovereign national infrastructure, and, increasingly, distributed consumer hardware. Even in the cloud, selling raw compute rather than a managed service forecloses the most effective means of monitoring; by design, cloud providers give customers running their own workloads a heavy dose of privacy.
While open models have their justifications, from the standpoint of preventing malicious use they are a challenge. It is some comfort, then, that open models are currently less capable than closed ones: this caps the capability harmful users can access. And since preventing misuse is partly adversarial, the edge held by closed models gives defenders an advantage too. This applies most significantly to cybersecurity, where attackers relying on open models face defenders backed by stronger closed ones.
Will open models stay less capable than closed ones? We could enact regulation across cooperative jurisdictions to ensure it, but if a non-cooperative jurisdiction has the capability to create more powerful models, we’d lose that control. China is the jurisdiction most likely both to have that capability and to make independent decisions.
Privacy
The second background story is privacy. The Internet’s default state of anonymity has costs, and privacy advocates work to maintain that state. I, like some others, believe the costs of this anonymity as a policy are too high. This isn’t specific to AI, but it bears directly on it.
We have tied the hands of security teams while delivering mostly theoretical privacy. Where privacy matters most, such as in totalitarian countries, it is undermined by local realities, and privacy advocates have no voice there. They win political contests where the need for them is least, and lose where it is greatest. These are hard tradeoffs, but I don’t think we’re making the right choices.
We should be pragmatic, but we are idealistic. In some cases, privacy measures have accelerated the accumulation of data for malicious purposes. When countermeasures can’t be deployed because the lowest layers of the technical stack are obscured, we fail both to achieve privacy and to prevent harm. When formal data sharing is prohibited, informal systems take its place and predictably result in harmful breaches.
If service providers always knew who was using their service, they could deny access to anyone previously detected acting maliciously. But the Internet offers too much anonymity: providers can shut down an account, yet without accounts tied to a real identity, a bad actor simply creates a new one. The current standard among AI companies is too lax here. We could make it more costly for attackers to maintain access.
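As a minimal sketch of that difference, assume a hypothetical verify_identity step that resolves signup credentials to a stable real-world identity. Every name below is illustrative, not a real API:

```python
# Sketch: bans attach to verified identities, not to accounts.
# verify_identity() is a hypothetical stand-in for whatever identity
# proofing a provider might adopt; none of these names are real APIs.

account_identity: dict[str, str] = {}   # account_id -> identity_id
banned_identities: set[str] = set()     # the ban list tracks people

def verify_identity(credentials: dict) -> str:
    """Hypothetical: map signup credentials to a stable identity ID."""
    return credentials["id_document_hash"]

def create_account(account_id: str, credentials: dict) -> bool:
    identity = verify_identity(credentials)
    if identity in banned_identities:
        return False                     # re-registration is denied
    account_identity[account_id] = identity
    return True

def ban_account(account_id: str) -> None:
    # The ban follows the identity, so a fresh account can't evade it.
    banned_identities.add(account_identity[account_id])
```

Without the identity layer, ban_account could only remove the account itself, and the attacker’s next signup would start with a clean slate; the cost of maintaining access drops to the cost of a new email address.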
Conclusion
Jurisdictions, open models, and privacy are features of the world we must work within — but they are also policy choices we can influence. The uncomfortable reality is that these three forces compound each other. Open models place powerful tools in jurisdictions beyond legal reach, while anonymity makes it difficult to detect or deny access to bad actors even where laws do apply. Treating any one of these in isolation understates the problem.
The path forward requires accepting some hard tradeoffs. Meaningful identity verification will feel like a concession on privacy — because it is one. Regulatory constraints on open model releases will frustrate researchers and developers who have legitimate reasons to want them — because the benefits of openness are real. Coordinating across jurisdictions will be slow and incomplete. None of these are reasons to avoid acting, but they are reasons to be honest about what any given measure can and cannot achieve.
What’s not acceptable is the current default: deferring hard choices while treating anonymity as an unqualified good and open access as costless. The tools for harm are improving. The window for shaping how they’re governed is open, but it won’t stay that way.

