Deployments Can't Wait
Why AI Threats Demand a Deployment Revolution
In the broader discourse on artificial intelligence, the sharpest minds in AI safety are currently looking to the horizon. They are focused on existential, cinematic threats: the potential for AI-generated bioweapons, nuclear command vulnerabilities, and autonomous warfare.
While these are undeniably critical issues, this focus has created a strategic void. The AI industry is aware of enterprise cybersecurity and is actively building tools to address it. But the problem is being approached tactically rather than strategically. Because the industry does not treat the defense of our digital infrastructure as a core, existential mission, a cohesive, industry-wide narrative has failed to materialize.
The hard truth for technology executives—CTOs, CISOs, and business leaders driving technology strategy—is this: the AI cavalry isn’t coming. At best, the AI and security industries will sell you tools. But for gaps that aren’t tool-shaped, it’s up to IT organizations to make defense a strategic priority.
As I argued in Security Can’t Wait, advances in AI are drastically accelerating the attacker-defender cycle. Threat actors are already utilizing AI to automate vulnerability discovery and weaponize exploits at unprecedented speeds. Without an equally aggressive response, the segments of our defense lifecycle that remain manual and sluggish will fall hopelessly behind, handing attackers a permanent, dangerous advantage.
And right now, the weakest, most sluggish point of the defense lifecycle isn’t vulnerability identification. It is deployment.
Untangling the Past
The ability to deploy quickly varies greatly across the industry. That variance has always mattered, but the acceleration of AI-driven threats turns it into a critical crux point. It’s tempting to assume the variance is simply the fault of the organizations that lag behind. But that is not only unhelpful; it is also untrue. History often offers a better explanation, with only a moderate share of fault left to place on the organizations suffering the ill effects. In my experience leading organizations through such changes, I find it’s best to leave that be and move on.
You might be unwilling to do so without understanding that past, so it helps to examine it. And even if you are ready to move forward, your organization may not be able to until you can explain that past to them. Some members of an organization lived through previous efforts to change, bear the scars, and understand the reality. Others joined more recently and cannot see why things are the way they are. A shared understanding is critical for an organization to work as one.
The most recent crux point for deployment was the adoption of “DevOps,” “Continuous Integration” (CI), and “Continuous Deployment” (CD). These paradigms are real, and their value is immense. Understanding them, however, is often clouded by layers of marketing jargon that have saturated software development for the last decade.
Make no mistake: the advent of DevOps, CI, and CD has been incredibly important. Even half-implementations, aligned with marketing that sold success before completion, have moved the needle. And the organizations that implemented them fully are now industry leaders in far more than just technology.
To appreciate why these changes left scars—and why implementations varied so wildly—we must look at the mechanical baseline they aimed to improve. Historically, software development and IT operations were strictly isolated. Development teams created software, generally working independently for months or even years, before handing the code off to the operations team to support and run in production. Because these teams had opposing incentives—developers were measured by feature delivery (progress), while operations were measured by system stability—introducing change was treated as an inherent threat. As a result, deployments were often massive, infrequent, and high-risk events.
DevOps emerged as a pragmatic and cultural approach to resolve this dysfunction. At its core, DevOps isn’t just a set of tools; it is a commitment to teamwork, communication, and shared goals. In its full realization, it requires unifying leadership to keep the two disciplines from pulling apart and devolving into political, rather than technical, management.
The Mechanics of Modernization
To support this cultural shift, the industry developed specific pipeline tooling designed to automate away the friction and reduce the stress that leads to organizational divergence:
Automated Builds: In software development, code changes must be “packaged” into a build. Depending on the platform, this involves compiling human-readable code into machine-readable formats, resolving third-party dependencies, and packaging it into a deployable format.
Validation and Testing: Beyond just compiling, a mature pipeline validates the code’s quality and executes automated tests. To make testing efficient, engineers test the smallest possible units of code (unit tests). This limits the scope of failures and uses less compute time. Inadequate testing can cause a pipeline that otherwise looks complete to produce poor results. Errors that reach production cause costly rollbacks, and the fear of repeating those errors slows everything else down.
Continuous Integration (CI): Integration is the process of reconciling the simultaneous contributions of multiple developers into a cohesive system. CI extends the build process by making this integration a frequent, if not constant, event. By merging developers’ working copies several times a day, the complexity and risk associated with a final, massive merge are dramatically reduced. In the context of security, CI serves as a crucial enforcement point for the unified system. It is here that dependencies from multiple contributors are brought together, making it the primary stage for running deep, automated scanning tools against the combined application.
Automated Deployments (CD): Once integrated, software cannot simply be pushed to users; safety constraints require it to be deployed to isolated test environments first. A true pipeline requires test environments that accurately simulate production. However, creating and supporting these duplicate environments is highly complex and the costs often become prohibitive.
Together, the premise of these mechanics was straightforward: mitigate risk by moving faster with tiny, highly automated, and easily reversible changes caught early by continuous feedback loops.
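The flow these mechanics describe can be sketched as a simple stage runner. This is an illustrative sketch, not any particular CI/CD product’s API: each stage here is a stand-in function, where a real pipeline would delegate to build servers, test frameworks, and scanners.

```python
# Minimal sketch of a fail-fast pipeline: run stages in order and stop at
# the first failure, so feedback arrives early. Stage names and bodies are
# illustrative stand-ins, not a specific CI/CD tool's interface.
from typing import Callable, List, Tuple

def build() -> bool:
    # Compile sources, resolve dependencies, produce a deployable artifact.
    return True

def run_unit_tests() -> bool:
    # Execute small, fast tests against individual units of code.
    return True

def integrate_and_scan() -> bool:
    # Merge contributions and run scans against the combined application.
    return True

def deploy_to_staging() -> bool:
    # Push the artifact to an isolated environment that mimics production.
    return True

def run_pipeline(stages: List[Tuple[str, Callable[[], bool]]]) -> List[str]:
    """Run stages in order, stopping at the first failure."""
    completed = []
    for name, stage in stages:
        if not stage():
            break
        completed.append(name)
    return completed

stages = [("build", build), ("test", run_unit_tests),
          ("integrate", integrate_and_scan), ("staging", deploy_to_staging)]
print(run_pipeline(stages))  # ['build', 'test', 'integrate', 'staging']
```

The fail-fast ordering is the point: a broken unit test stops the line before integration or staging ever runs, which is what keeps each feedback loop tight and each change small and reversible.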
Deployment Divergence
However, as these concepts gained mainstream traction, a clear divergence emerged across the industry. It is tempting to think of organizations making the same technological choices simply by nature of being in the same industry—surely all banks are similarly modernized? In reality, there are significant deviations even within the same sectors. These divergences are shaped heavily by a company’s specific history: when they were formed, or when they attempted a prior wave of modernization.
Generally, organizations fell into one of three paths:
1. True Adoption: Many organizations successfully navigated this transformation. They did the hard work of aligning incentives under unified leadership and invested in comprehensive test environments, proving that modern, automated deployment is a highly achievable goal when backed by genuine commitment.
2. Watered-Down Adoption: Driven by vendor sales cycles and a management desire for painless wins, many organizations adopted the terminology without the substance. The genuinely far-reaching concepts were distorted to justify incremental tool purchases. Crucial but non-mandatory steps—like rigorous unit testing or maintaining accurate test environments—were skipped or done poorly in the name of expediency. Without true CI, integration remained sporadic. Teams bought the tools and declared victory, but failed to fundamentally change their deployment process or speed.
3. Stalled Implementation: Other organizations simply struggled to get momentum at all, weighed down by the sheer complexity and cost of entrenched legacy systems, such as monolithic applications and mainframes, which are notoriously difficult to integrate into modern CI/CD pipelines.
Why did so many organizations fall into the latter two camps? The root causes are deeply embedded in organizational dynamics. For years, technology teams have been caught in a tug-of-war between competing priorities. There is an unrelenting push to deliver short-term wins and new features, which inevitably drives the accumulation of technical debt. This is compounded by coordination issues between siloed teams, cost-cutting mandates, and general corporate politics.
The result of this divergence is that while excellent pipelines certainly exist, a significant portion of enterprises still grapple with brittle, sporadic deployment processes. They have automated the easy parts (like compiling) but left the hard parts (comprehensive testing and security scanning) as manual roadblocks. Without continuous, reliable feedback, deployments are batched, delayed, and risky.
This isn’t an indictment of current leadership; it is simply a realistic accounting of the accumulated friction of technical debt and conflicting priorities. But it is a reality we must acknowledge before we can move forward.
Clouded Perceptions: Restarting from Stalled and Watered-Down Adoptions takes additional effort to rebuild momentum because of terminology drift. It’s too easy to assume a shared commitment that turns out to mask different expectations. While you can’t erase the effects of the past, you can take the extra effort to clarify what is meant at each opportunity.
The Widening Gap and the Irony of Regulation
When an organization’s deployment pipeline is insufficient, the time it takes to patch a newly discovered vulnerability stretches from hours to weeks or months. Attackers face delays of their own, but counting on those delays, which AI may well be shrinking, is a gamble.
We often look to regulation to force improvements in these areas, hoping compliance mandates will motivate continuous improvement. But here lies a painful irony: for the organization that has already fallen behind, regulation often creates extra friction. It introduces new audit gates and reporting requirements that further slow down the deployment process. Until the pressure is redirected toward a truly dramatic overhaul—with all the costs and commitment that entails—the effect of regulation is to slow defenders, leaving a wider gap attackers can exploit.
Assessing the Battlefield and Avoiding the Blame Game
If the mandate is to unblock these pipelines, technology executives must first assess their own relationship to the organization before demanding changes. Are you a new leader brought in with an explicit mandate to improve? Are you an established leader leveraging newly acquired influence? Or are you new to an organization where continuity, rather than disruption, was the stated goal?
Understanding this positioning is critical because diagnosing a lagging deployment pipeline often delivers bad news to teams who believe they are already doing their best. If delivered poorly, it forces the organization into a “fight or flight” response.
Crucially, executives must actively suppress the “blame game.” Blame is a destructive concept when fixing technical debt. Technical systems do not care who is at fault; they will succeed or fail independently. Seeking blame causes internal information sharing to become strategic and self-preserving, rather than solution-oriented. While identifying failures is necessary for strategic leadership changes, day-to-day technical modernization requires actively discouraging the blame game so teams can focus entirely on the fix.
Turning AI Inward
If current pipelines are too encumbered by historical debt to move at the speed of modern threats, fixing them must become a priority. That priority will not materialize on its own; it must be built. The AI and security industries are offering tools, but not implementation.
Technology-focused executives must take the driver’s seat. The DevOps playbook is well documented. But so are the impediments. New efforts and commitments are difficult, and past failures create inertia that must be unblocked.
Tools can’t solve this alone. What they can do is reduce the impediments that held back implementation in the past. That reduction creates a compelling narrative to overcome the inertia and launch new efforts and commitments to modernize deployment pipelines.
Consider the new opportunities AI creates to make modernization faster and more effective:
The Testing Burden
A robust deployment pipeline requires comprehensive automated testing, but developers notoriously loathe writing and maintaining tests. AI fundamentally changes this dynamic. If you have no tests, AI can scale up baseline coverage rapidly. If you have some tests, AI can identify and fill the gaps. More importantly, AI can monitor existing tests for brittleness, automatically suggesting refactoring or updates when underlying code changes. By removing the maintenance overhead, AI removes a primary excuse for failing pipelines.
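The gap-filling step can be made mechanical. The snippet below is a simplistic illustration, assuming a test_&lt;function&gt; naming convention (an assumption for this sketch, not a standard): it lists public functions that lack a matching test, producing the kind of gap report an AI assistant could then be asked to fill.

```python
# Sketch of a test-gap finder: compare a module's public functions against
# its test suite by naming convention. The test_<name> convention is an
# assumption of this illustration; real coverage tools measure execution.
import ast

def public_functions(source: str) -> set:
    """Collect names of top-level and nested public function definitions."""
    tree = ast.parse(source)
    return {node.name for node in ast.walk(tree)
            if isinstance(node, ast.FunctionDef) and not node.name.startswith("_")}

def untested(module_src: str, test_src: str) -> set:
    """Return public functions with no correspondingly named test."""
    funcs = public_functions(module_src)
    tests = public_functions(test_src)
    return {f for f in funcs if f"test_{f}" not in tests}

module_src = "def add(a, b):\n    return a + b\n\ndef sub(a, b):\n    return a - b\n"
test_src = "def test_add():\n    assert add(1, 2) == 3\n"
print(sorted(untested(module_src, test_src)))  # ['sub']
```

A report like this is a natural prompt for AI-assisted test generation: the gaps are enumerated, scoped, and small enough to generate and review one function at a time.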
Accelerating Legacy Transformation
Many deployment bottlenecks are rooted in legacy systems—like mainframes and monolithic applications—that were previously deemed too complex, expensive, or risky to modernize. AI transformation software is changing this calculus.
One methodology here is to reverse engineer specifications from an existing codebase. A significant challenge in modernizing any legacy system is understanding how that system should behave. Documentation may exist, but it very likely contains drift and inaccuracies that would undermine a transformation. The reverse engineering process is unlikely to be hands-free, but AI and human operators complement each other, making it possible to reverse engineer an existing codebase sufficiently to perform a quality transformation.
Testing comes into focus here again. Tests can be generated and run against both the old and new source. Specifications assist in test generation; tests assist in the mechanical transformation of core functions and methods; tests and specifications together support larger-scale structural changes. Transformation strategies have traditionally mixed both approaches, preferring small incremental updates when practical and resorting to larger-scale rewrites strategically.
AI-driven transformation tools not only reduce effort via these steps, but improve accuracy and probability of success.
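One concrete way to employ tests on both the old and new source is characterization ("golden master") testing: record the legacy system’s observed behavior for representative inputs, then hold the transformed code to it. The implementations below are illustrative stand-ins, not a specific tool’s workflow.

```python
# Sketch of characterization testing for a legacy-to-modern transformation.
# The two pricing routines are hypothetical stand-ins for a recovered legacy
# function and its rewritten replacement.
def legacy_price(qty: int) -> float:
    # Imagine this was recovered from a legacy codebase.
    total = qty * 9.99
    if qty >= 10:
        total *= 0.9  # bulk discount
    return round(total, 2)

def modern_price(qty: int) -> float:
    # The transformed implementation, which must preserve behavior.
    discount = 0.9 if qty >= 10 else 1.0
    return round(qty * 9.99 * discount, 2)

def capture_golden_master(fn, inputs):
    """Record the legacy system's outputs for representative inputs."""
    return {i: fn(i) for i in inputs}

inputs = [0, 1, 9, 10, 100]
golden = capture_golden_master(legacy_price, inputs)
mismatches = {i for i in inputs if modern_price(i) != golden[i]}
print(mismatches)  # an empty set means the transformation preserved behavior
```

The golden master doubles as the behavioral specification the documentation failed to preserve: AI can help pick representative inputs and explain mismatches, while humans decide whether a mismatch is a transformation bug or a legacy bug worth carrying forward.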
Shift-Left Security
AI-powered static analysis can be integrated directly into the developer workflow. This ensures that the code (and the AI-generated tests themselves) adheres to established security and quality standards before ever reaching the integration phase. Not only can this help avoid introducing new security issues; it can also raise confidence in the process of deploying fixes for known and newly discovered issues.
The quality of the tools you can integrate may be influenced by how modern and mainstream other parts of the stack are. COBOL and FORTRAN code won’t have the same level of support as Rust, Python, TypeScript, .NET, C or C++ code. While static analysis tools have existed for some time, the most developed tools in this space have evolved past simply flagging potential errors; they now utilize AI to drastically reduce false positives, understand the context of the codebase, and suggest specific, workable auto-fixes.
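As a toy illustration of where such a check sits in the flow, the sketch below flags risky calls with a simple AST scan. Real shift-left tools are far more capable, contextual, and AI-assisted; the deny-list here is an assumption chosen purely for illustration.

```python
# Toy shift-left check: scan source for calls on an illustrative deny-list
# before the code reaches integration. Real static analyzers model data flow
# and context; this only shows where such a gate sits in the workflow.
import ast

RISKY_CALLS = {"eval", "exec"}  # illustrative deny-list, not a full policy

def flag_risky_calls(source: str):
    """Return (line, name) pairs for calls to deny-listed builtins."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in RISKY_CALLS:
                findings.append((node.lineno, node.func.id))
    return findings

snippet = "x = eval(user_input)\ny = len(user_input)\n"
print(flag_risky_calls(snippet))  # [(1, 'eval')]
```

Running a check like this in the editor or as a pre-commit hook is what "shift-left" means in practice: the finding reaches the developer before the pipeline, when it is cheapest to fix.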
Securing the Pipeline
As pipelines become the engine of the enterprise, they become prime targets for attackers. Implementing highly effective but difficult security practices—such as least-privilege access for the pipeline itself—is complex to manage manually.
How this is done will depend on where your pipeline is implemented. AI tools can analyze code for access requirements, sparing administrators from guessing at developers’ needs. AI and conventional tools can also analyze deployment patterns to determine which privileges are used and unused, creating a signal for where to limit them.
Resources:
Do Your CI/CD Pipelines Need Identities? Yes. (Cloud Security Alliance, 2025)
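The used-versus-unused privilege signal can be sketched in a few lines. The privilege names and log shape below are hypothetical; a real pipeline would pull grants from its identity provider and usage from audit logs.

```python
# Sketch of a least-privilege signal for a pipeline identity: diff the
# privileges granted against those observed in use. Privilege names and
# log entries here are hypothetical stand-ins for real audit data.
granted = {"artifact:read", "artifact:write", "prod:deploy", "secrets:admin"}

observed_log = [
    {"action": "artifact:read"},
    {"action": "artifact:write"},
    {"action": "prod:deploy"},
]

used = {entry["action"] for entry in observed_log}
unused = granted - used
print(sorted(unused))  # ['secrets:admin'] -- a candidate to revoke
```

The output is a signal, not a verdict: a privilege unused over the observation window may still be needed for rare paths (disaster recovery, quarterly jobs), which is why a human review step belongs between the diff and the revocation.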
Disrupting Active Exploitation: An Essential Stopgap
While modernizing the deployment pipeline is the ultimate cure, technology executives must manage the immediate reality: vulnerabilities will exist in production while fixes navigate a sluggish pipeline. It would be irresponsible to omit AI’s capability as an ameliorative control during this window. AI-driven behavioral analytics and dynamic anomaly detection can be deployed defensively to disrupt the control and exploitation phases of an attack in real time. By identifying and isolating threat actors attempting to leverage unpatched systems, these tools buy the organization the critical time needed for pipeline improvements to take effect.
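As a rough illustration of the idea, the sketch below baselines a single behavioral metric and flags outliers. Production anomaly detection uses far richer models; the metric and the z-score threshold here are illustrative assumptions.

```python
# Minimal sketch of behavioral anomaly detection as a stopgap control:
# baseline a metric (e.g., requests per minute against an unpatched service)
# and flag observations far outside it. The 3-sigma threshold is an
# illustrative choice, not a recommendation.
import statistics

def anomalies(baseline, observations, threshold=3.0):
    """Return observations more than `threshold` stdevs from the baseline mean."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return [x for x in observations if abs(x - mean) > threshold * stdev]

baseline = [98, 102, 100, 97, 103, 101, 99, 100]  # normal request rates
observations = [101, 99, 250, 100]  # 250 suggests automated exploitation
print(anomalies(baseline, observations))  # [250]
```

Flagging the spike is only half the control; the other half is an automated response, such as isolating the session or rate-limiting the source, that buys time while the fix moves through the pipeline.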
Implementation
AI tooling isn’t enough in the same way that DevOps tooling wasn’t enough. A plan is necessary, and that plan must engage with the culture of your organization. What type of modernization is needed? Why hasn’t it happened already? Will it require a full-scale transformation (mainframes/monoliths)? Is it about completing a watered-down adoption?
There are good sources on DevOps adoption (e.g., The DevOps Handbook), so I won’t try to repeat them in their entirety. Committing to completing adoption, and taking advantage of new opportunities that shorten or de-risk its most challenging aspects, is how to create your plan.
Conclusion: The Call to Arms
The acceleration of the cyber battlefield is a reality. The mandate for technology executives is clear: we must stop viewing AI solely as a threat to be mitigated or a product to be purchased, and start wielding it as an operational imperative. Accelerating our defenses requires accelerating our deployments. The tools are in our hands; it is time to use them.

