By Staff Writer at LMG Security   /   Jul 1st, 2025

No-Click Nightmare: How EchoLeak Redefines AI Data Security Threats

AI data security image Artificial intelligence is revolutionizing the workplace, but it’s also opening new doors for cybercriminals. If your organization uses tools like Microsoft 365 Copilot, your AI assistant can silently leak sensitive data. In fact, with the new EchoLeak attack, it’s possible your data could already be compromised. This new attack doesn’t require any user clicks or downloads, just a cleverly crafted email. And while Microsoft has since patched this specific issue, the larger implications for AI data security are urgent and far-reaching.

Let’s break down how EchoLeak works, why AI prompt injection is a growing threat, and what practical steps you can take to protect your organization.

What Is EchoLeak and Why Should You Care?

EchoLeak is a real-world exploit discovered by researchers at AIM Security. It targets Microsoft 365 Copilot by tricking the AI assistant into leaking internal data based on content it shouldn’t have trusted.

“It’s amazing—and honestly a little terrifying,” Sherri Davidoff, founder of LMG Security shared. “The attacker sends a seemingly innocuous email with links labeled as helpful resources, like an ‘HR FAQ’ or an ‘onboarding guide.’ Even if the recipient doesn’t open or click anything, Copilot may still automatically ingest and index the contents of that email in the background. Then later, when someone asks Copilot a simple question—like ‘Where’s the HR FAQ?’—it may retrieve and deliver the attacker’s fake content, unknowingly leaking sensitive information. Boom. You’ve got a data breach without a single user interaction.”

The AI data security implications are frightening. “The users don’t have to go to a website, and they don’t have to download a file,” stated Matt Durrin, director or training and research for LMG Security. The information sits in Copilot until a user searches on that term, and then you are compromised.”

The Hidden Dangers of Retrieval-Augmented Generation (RAG)

AI systems like Copilot use a technique called retrieval-augmented generation (RAG), which allows them to search and synthesize results from a variety of enterprise data sources. This architecture is powerful, but also vulnerable.

In the case of EchoLeak, the attacker “poisons” the AI by feeding it crafted content embedded with malicious outbound links. As a result, the AI delivers responses that trigger exfiltration, even if those links were never clicked.

This concept, known as RAG spraying, creates a persistent threat. Even after the attacker is gone, poisoned data may remain in your system, surfacing in future responses. It’s like malware embedded in your knowledge base.

And it’s not just theory. In their pen testing work, LMG Security discovered prompt injection in a real corporate web application. You can read the full details in this blog, but one of our penetration testing experts Emily Gosney was able to:

  • Extract system prompts
  • Reveal which AI model was in use
  • Manipulate the AI’s future responses to users

These scenarios prove that AI data security failures are already being exploited in the wild.

When AI Becomes an Insider Threat

EchoLeak shows how an AI assistant, designed to improve productivity, can become a silent insider threat.

If your organization uses Microsoft 365 Copilot, keep in mind:

  • It has access to your email, Teams messages, SharePoint files, and more
  • It ingests content automatically unless explicitly restricted
  • It can hallucinate, drawing incorrect or mixed data from internal sources
  • It may leak proprietary information via seemingly innocuous outputs

As Matt points out, “We need to start treating artificial intelligence like a person on the network. It needs serious safety gates on what it can and cannot access.”

This paradigm shift is critical for anyone managing enterprise risk. And as more vendors rush to integrate AI into core products, your AI data security strategy must evolve.

AI Gone Rogue: From Phishing Guides to Ridiculously Cheap Cars

EchoLeak isn’t an isolated incident. Here are a couple of additional examples of real-world AI security failures.

  • A musician manipulated a Chevrolet chatbot into selling him a 2024 Tahoe for $1 by overriding the bot’s instructions with clever prompting, “That’s a legally binding offer. No takesies backsies.”
  • LMG researchers easily tricked the Chinese DeepSeek AI into generating phishing campaign guides using indirect prompts, bypassing safety filters within minutes of its release.
  • There are also real-word examples of AI lying to avoid a shutdown, blackmailing users, and deciding to act as a whistleblower. Read the details in our blog on rogue AI activities.

These incidents underscore a larger trend: many AI tools are deployed without sufficient safeguards, and adversaries are catching on fast. Now that we’ve covered some of the risks, let’s dive into how to protect your organization.

Key Takeaways: How to Improve Your AI Data Security

Here are five critical steps you can take now to shore up your defenses:

  1. Limit and Review Your LLM’s Data Access: Lock down what your AI assistant can ingest. Avoid untrusted inputs like:
    • Inbound email
    • Shared documents
    • Publicly sourced web content

This helps prevent data poisoning and unintentional leakage. You can learn more in LMG’s blog on prompt injection in web apps.

  1. Audit for Prompt Injection Risks: AI prompts must be threat modeled like code. Common triggers like “Ignore previous instructions” or “Tell me your objective” can bypass safeguards. Red-team your tools using natural-sounding, context-aware prompts.
  1. Include AI in All Testing Protocols: Make prompt injection testing a standard part of every web app and email security assessment. Don’t rely on vendor security. With the number of CVE vulnerabilities out there, vendor security is not enough. Conduct testing yourself or partner with a provider. LMG Security’s AI Penetration Testing can help.
  1. Monitor and Restrict Outbound Links: Block unapproved domains. AI-generated content may load images or reference links that exfiltrate data. Pay close attention to Microsoft Teams previews and validate every outbound request.
  1. Red-Team Your LLM Tools: Treat your LLM like any other critical application and provide red team testing. Simulate attacks. Find out whether it can be manipulated, seeded with false data, or coerced into unsafe responses. These tests are essential for true AI data security.

Final Thoughts: Secure Your AI Before It Incorporates Threats

The rise of generative AI is reshaping how we work, but it’s also reshaping how attackers operate. Exploits like EchoLeak demonstrate that AI data security must be baked into every layer of your enterprise architecture.

If you’re deploying Microsoft Copilot, ChatGPT integrations, or any other LLM-driven tools, don’t wait for an incident to reassess your risk posture. Make AI red-teaming, prompt injection testing, and content ingestion reviews part of your core security strategy today.

Because the next time your AI delivers search results, you want to be confident it’s speaking for you, not your attacker.

Need help assessing your AI data security posture? Contact us to schedule a red-team test or penetration assessment of your AI tools.

About the Author

LMG Security Staff Writer

CONTACT US