By Sherri Davidoff   /   Feb 26th, 2026

AI and the Rise of Negative-Day Exploits

If this quarter feels different, it is because it is. 

AI models are now discovering software vulnerabilities at scale in mature, widely deployed open-source projects that have already been reviewed countless times by experienced engineers. 

On a recent episode of Cyberside Chats, Sherri Davidoff put it plainly: 

“I believe we are going to look back on this period as a true inflection point in software exploitation. The speed and scale of vulnerability discovery have fundamentally changed.” 

For CISOs, security architects, and risk leaders, this is not incremental change. It is a structural shift in how vulnerabilities are identified, weaponized, and exploited, and in how defenders must respond. 

This is the moment patch-centric security begins to break down. 

AI Can Now Find What Humans Missed for Years 

In February 2026, Anthropic published research titled Evaluating and Mitigating the Growing Risk of LLM-Discovered 0-Days that directly addressed the accelerating capability of large language models to uncover exploitable vulnerabilities in production software.

Their findings confirmed what many security teams suspected: frontier models can identify logic flaws and exploit paths in real-world codebases at scale, including vulnerabilities that traditional scanners routinely miss. 

Tenable’s technical analysis of Claude Opus reinforced this concern, showing how AI-driven vulnerability discovery is moving beyond simple pattern matching into reasoning about application flow and state transitions.

The most important detail is this: these systems were not handed prebuilt exploit playbooks. They were asked what vulnerabilities they could find. 

The implications are no longer theoretical. In the decentralized finance ecosystem, AI-assisted vulnerability discovery was linked to a $2.7 million exploit in the Moonwell protocol. The incident drew enough attention that OpenAI released a crypto-focused security tool shortly afterward.

The line between research and exploitation is shrinking. 

If AI can identify and reason through exploit chains this efficiently, adversaries are already doing the same. 

The discovery timeline has collapsed. 

Welcome to the Era of Negative-Day Exploits 

We are familiar with zero-day vulnerabilities, flaws exploited before a patch is available. 

But February 2026 illustrated something more concerning. 

Microsoft’s February Patch Tuesday addressed dozens of vulnerabilities, including six zero-days that were already being actively exploited in the wild at the time of release. SecurityWeek covered the surge in actively exploited issues.

The Zero Day Initiative’s February 2026 Security Update Review also highlighted the unusually high number of vulnerabilities under active exploitation compared to recent patch cycles.

That means exploitation was already underway before many organizations even began triage. 

Davidoff described this emerging reality as negative-day vulnerabilities: 

“A negative day vulnerability is one that is actively being exploited before anyone even realizes it exists. You have to prepare for that reality. Assume your software is exploitable.” 

Traditional vulnerability management assumes a sequence. A vulnerability is discovered. A patch is released. Organizations prioritize and deploy. 

AI compresses, and in some cases eliminates, that sequence entirely. 

Attackers can now feed source code or binaries into AI systems and generate exploit paths in hours. Meanwhile, defenders are waiting for advisories, CVSS scoring, and internal change windows. 

If exploitation precedes awareness, patch prioritization alone cannot be the primary control. 

Exposure Now Matters More Than Severity 

February’s exploited vulnerabilities were not exclusively high-profile internet-facing remote code execution flaws. Some required user interaction. Others involved embedded components and legacy services still present in modern environments. 

The lesson is straightforward. Context matters more than raw severity. 

CVSS scores do not answer the questions that truly determine risk. Is the system internet-facing? Is it accessible to users? Is it part of the identity tier? Is it embedded in remote access infrastructure? 

As Davidoff explained, “We have to prioritize exposure, not just severity. The most dangerous vulnerability is the one attackers can reach.” 

In an AI-driven environment, attackers will automate discovery across your exposed surface area. The relevant question is no longer which flaw appears worst on paper. It is which pathway is most reachable today. 
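To make exposure-first prioritization concrete, here is a minimal, hypothetical sketch of what it might look like in practice. The field names and multipliers below are illustrative assumptions, not a published standard or LMG Security's methodology; the point is simply that reachability factors can reorder a queue that CVSS alone would sort differently.

```python
# Hypothetical sketch: rank findings by reachability, not CVSS alone.
# All field names and weights are illustrative assumptions, not a standard.

def exposure_score(vuln: dict) -> float:
    """Weight a finding's base severity by how reachable the asset is."""
    score = vuln["cvss"]  # base severity, 0-10
    if vuln.get("internet_facing"):
        score *= 2.0      # directly reachable by automated discovery
    if vuln.get("identity_tier"):
        score *= 1.5      # compromise cascades into other systems
    if vuln.get("remote_access_infra"):
        score *= 1.5      # VPNs and gateways are common entry points
    if vuln.get("user_interaction_required"):
        score *= 0.8      # somewhat harder, though not safe, to exploit
    return score

findings = [
    {"id": "CVE-A", "cvss": 9.8, "internet_facing": False},
    {"id": "CVE-B", "cvss": 7.5, "internet_facing": True,
     "remote_access_infra": True},
]

# The lower-severity but reachable flaw outranks the "critical" internal one.
for f in sorted(findings, key=exposure_score, reverse=True):
    print(f["id"], round(exposure_score(f), 1))
```

Under these assumed weights, the internet-facing 7.5 outranks the internal 9.8, which is exactly the reordering an exposure-first program produces.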

Patch Management Is Necessary but No Longer Sufficient 

Patching remains essential. But February’s patch cycle illustrated the growing asymmetry between attacker velocity and defender process. 

AI-assisted discovery widens the gap between vulnerability discovery, vendor remediation, and enterprise deployment. Testing requirements, change control boards, and operational dependencies do not accelerate simply because attackers do. 

As Matt Durrin noted, “Plan for exploitation before disclosure. By the time you apply the patch, the compromise may have already happened.” 

That is not alarmism. It is a recognition of how quickly the threat landscape is shifting. 

Rethinking the Defensive Model 

If we accept that some exposed assets will eventually be compromised, architecture must reflect that assumption. The focus shifts from perfect prevention to resilient containment. 

  1. Plan for exploitation before disclosure
  2. Prioritize exposure, not just severity
  3. Assume compromise on exposed assets and monitor accordingly
  4. Treat compensating controls as first-line defense
  5. Prepare for containment when patches may not exist
  6. Rehearse a “negative-day” tabletop
  7. Integrate AI into your vendor risk model

Vendor risk must also evolve. AI is not only discovering vulnerabilities; it is also generating code at unprecedented scale. As Davidoff observed, “AI is being used to find vulnerabilities at scale. It is also generating massive amounts of new code, and that code can carry vulnerabilities with it.” Security leaders now need to evaluate how vendors govern and validate AI-assisted development. 

At the same time, the fundamentals remain decisive. Internet-exposed services, legacy components, and poorly hardened remote access still represent the most predictable entry points. AI does not eliminate old risks. It magnifies their consequences. 

Conclusion 

February 2026 marked a visible acceleration point.  

Anthropic’s research confirmed AI’s ability to uncover exploitable vulnerabilities at scale. Microsoft’s patch cycle showed multiple zero-days already under active exploitation. Independent reporting and technical analyses reinforced that attackers are moving faster, and the gap between discovery and weaponization is narrowing. 

Security programs built around disclosure timelines and patch prioritization alone are no longer sufficient. Organizations must assume that some vulnerabilities will be exploited before they are publicly known and design their defenses accordingly. 

The shift underway is not about abandoning patch management. It is about recognizing that resilience, detection speed, containment capability, and exposure reduction now carry equal weight. 

We are entering an era where the question is no longer whether a vulnerability exists, but how quickly it can be found and weaponized. In that environment, the strongest organizations will not be those with the longest patch backlogs cleared. They will be the ones that can detect compromise early, limit its spread, and recover decisively. 

That is the new baseline. 

About the Author

Sherri Davidoff

Sherri Davidoff is the Founder of LMG Security and the author of three books, including “Ransomware and Cyber Extortion” and “Data Breaches: Crisis and Opportunity.” As a recognized expert in cybersecurity, she has been called a “security badass” by the New York Times. Sherri is a regular instructor at the renowned Black Hat trainings and a faculty member at the Pacific Coast Banking School. She is also the co-author of Network Forensics: Tracking Hackers Through Cyberspace (Prentice Hall, 2012), and has been featured as the protagonist in the book Breaking and Entering: The Extraordinary Story of a Hacker Called “Alien.” Sherri is a GIAC-certified forensic examiner (GCFA) and penetration tester (GPEN) and received her degree in Computer Science and Electrical Engineering from MIT.