By Sherri Davidoff   /   Feb 5th, 2026

The AI Visibility Gap: a Defining Security Challenge for 2026

blog graphic At first, AI felt like a choice. A tool teams could experiment with, pilot carefully, or decide not to use at all. That moment has passed. Today, AI is embedded directly into everyday enterprise workflows, often without a single, deliberate decision to “adopt AI” ever being made.

This shift has created one of the most significant cybersecurity challenges organizations face this year: a lack of visibility and control over how AI systems access and use data, as discussed in LMG Security’s recent Top Threats of 2026 webinar [TopThreats2026WebinarLink]. In this article, we’ll examine how AI creates new visibility gaps inside security programs, and outline practical steps organizations can take to ensure AI doesn’t become your biggest blind spot.

How AI Is Changing Data Exposure in Practice

Microsoft’s Digital Defense Report 2025 makes a critical observation: many AI-related security risks do not originate from compromised infrastructure or malicious insiders. Instead, they emerge from legitimate AI usage operating exactly as designed, but in ways defenders are not prepared to monitor or constrain.

Microsoft’s research shows that AI introduces new attack surfaces layered on top of traditional ones, rather than replacing them. These include:

  • AI usage security risks, such as oversharing, shadow AI tools, and misuse of copilots
  • AI application risks, including prompt injection and insecure plugins
  • AI platform risks, such as model poisoning and training data exposure
In these scenarios, data does not leave the organization through a classic exfiltration channel. It is processed, transformed, summarized, or re-shared by AI systems that already have legitimate access.

EchoLeak and the Problem of Invisible AI Risk

One of the clearest illustrations of this challenge appeared in 2025 with EchoLeak, a zero-click vulnerability affecting Microsoft 365 Copilot.

Researchers demonstrated that malicious prompts could be embedded in email metadata. Although invisible to users, the content was still ingested by Copilot and incorporated into downstream AI responses.

“All you had to do was send the email,” Matt Durrin explained. “As long as Copilot had access, it would ingest those malicious commands. No click required.”

Crucially, EchoLeak did not generate traditional incident alerts. There was no malware, no suspicious login, and no obvious policy violation. From a monitoring perspective, Copilot was behaving as expected.

EchoLeak was not a failure of intent. It was not a failure of design. It was a failure of assumptions about how AI would ingest information, how much it would be trusted, and how its activity would be monitored once embedded inside daily workflows.

Microsoft’s newer Copilot features further illustrate how visibility challenges evolve as AI moves from analysis to action. Capabilities like Copilot Checkout allow AI to initiate business transactions on a user’s behalf using legitimate access and approved workflows, which makes understanding where AI can act — not just what it can see — increasingly important.

Importantly, this also means sensitive data such as credit card information may now flow through AI-enabled tools, requiring organizations to update data mapping, scoping, and compliance assumptions to account for AI as part of the transaction path.

Why AI Security Failures May Not Be Visible

EchoLeak highlights a critical challenge for defenders: even if the vulnerability had been exploited to spread malware or leak sensitive data, those outcomes might not have been easily detected. There were no suspicious logins, no malware execution events, and no obvious policy violations. From a monitoring perspective, Copilot was behaving as designed, using legitimate access to ingest and process information.

EchoLeak is not an outlier. It is an example of a broader pattern where AI-related security failures occur without triggering the signals security teams rely on. Since AI systems ingest, transform, and re-share data through channels that were not originally designed for security enforcement, they can bypass traditional controls like DLP rules, perimeter firewalls, and content inspection that were built for human-driven workflows. AI effectively creates new information pathways that sit outside many existing detection models.

When AI-related security failures occur, they do not always generate alerts. In many cases, they look exactly like normal business activity, even while sensitive information is going out the door.

Other examples of low-visibility AI security failures include:

  • Copilots summarizing sensitive documents and redistributing them to broader audiences than intended
  • AI-powered search or analytics tools surfacing restricted content because of inherited permissions
  • Employees pasting regulated or proprietary data into personal AI tools without triggering DLP
  • Agentic AI taking actions on behalf of users using legitimate credentials and approved APIs
In each case, nothing “breaks.” The systems behave normally. The failure is only visible if organizations can see how AI is accessing data and what it is producing as output.

Visibility Is the Core Security Challenge of 2026

When asked to identify the single biggest AI-related challenge for the coming year, Sherri Davidoff was direct. “Our number one biggest challenge of 2026 is going to be visibility.”

That gap shows up in practical questions many organizations still struggle to answer:

  • Which AI tools and features are enabled today?
  • Which users and agents interact with them?
  • What data sources can AI access?
  • Where are AI-generated outputs stored or shared?
  • Which third parties process AI-related data?
If those answers are unclear, so are the risks.

What Security Leaders Can Do Now

1. Centralize AI usage

Require a clear approval and intake process for AI tools and features, including AI capabilities embedded in existing platforms.

2. Inventory and monitor AI access

“You can’t secure what you can’t see,” Durrin said. Track AI agents, integrations, and the data they ingest and produce.

3. Establish and enforce data classification

Clear sensitivity labels help determine which data AI should never access and where guardrails must exist.

4. Apply least privilege to AI

Treat AI systems like privileged users and routinely audit what they can access.

5. Evaluate emerging AI security controls

AI gateways and monitoring tools can provide observability into prompts, responses, and agent behavior, aligning with guidance such as the NIST AI Risk Management Framework.

A Practical Next Step

AI adoption will continue to move quickly, but that doesn’t mean organizations are powerless. The same fundamentals that have guided security programs for years still apply — they simply need to be extended to AI.

LMG Security works with organizations to assess AI exposure, strengthen governance, and test AI-enabled systems through security program assessments and penetration testing. With the right visibility in place, AI can be adopted deliberately, monitored effectively, and secured as part of the broader enterprise.

About the Author

Sherri Davidoff

Sherri Davidoff is the Founder of LMG Security and the author of three books, including “Ransomware and Cyber Extortion” and “Data Breaches: Crisis and Opportunity. As a recognized expert in cybersecurity, she has been called a “security badass” by the New York Times. Sherri is a regular instructor at the renowned Black Hat trainings and a faculty member at the Pacific Coast Banking School. She is also the co-author of Network Forensics: Tracking Hackers Through Cyberspace (Prentice Hall, 2012), and has been featured as the protagonist in the book, Breaking and Entering: The Extraordinary Story of a Hacker Called “Alien.” Sherri is a GIAC-certified forensic examiner (GCFA) and penetration tester (GPEN) and received her degree in Computer Science and Electrical Engineering from MIT.

CONTACT US