By Staff Writer at LMG Security   /   Sep 25th, 2025

Vibe Hacking: How AI Is Reshaping Cybercrime and What Your Organization Can Do

Artificial intelligence is changing the way we build software—and the way criminals exploit it. Over the past two years, a new trend has emerged in the development world: “vibe coding,” where developers rely on AI tools to generate or refine their code. On the surface, vibe coding promises faster results and more efficient development. But there’s a darker twin: vibe hacking. That’s when attackers leverage the same AI capabilities to find vulnerabilities, exploit software supply chains, and launch sophisticated cyberattacks.

This rising trend raises urgent questions for CISOs, security leaders, and IT professionals. How can you protect your software supply chain when AI itself is part of the risk? What happens when adversaries upload stolen data into AI tools, effectively supercharging their extortion operations? And how should organizations adapt their defenses in a world where nearly half of AI-generated code contains vulnerabilities?

Let’s dive into what vibe hacking is, how it’s already affecting organizations, and what you can do to stay ahead. 

What Is Vibe Coding (and Vibe Hacking)? 

“Vibe coding is like the nicer brother of vibe hacking,” explained Matt Durrin, LMG Security’s Director of Training and Research, during a recent Cyberside Chats Live episode. “Instead of having to go through and actually write out Python code or Java code, you simply ask an AI engine to do something for you, and it will write the code that will accomplish that task.” 

That’s convenient—until it isn’t. “There’s concerns about the AI introducing malware, the AI tool having vulnerabilities, or just legitimate AI tools being used in evil ways, all three of which we have seen over the past few months,” Sherri Davidoff, founder of LMG Security, pointed out.  

This is vibe hacking: when AI-backed development tools become weapons. Criminals can now lean on AI to accelerate every stage of their operations—from writing malicious code to analyzing stolen data. 

Real-World Examples of Vibe Hacking 

We don’t have to imagine how vibe hacking works—it’s already happening. Two recent cases illustrate the risks. 

One striking example is the use of Anthropic’s Claude AI chatbot to run entire cyber extortion campaigns. Attackers relied on Claude for everything: helping them break in, analyzing stolen data, drafting ransom notes, and even creating monetization strategies. The chilling part? Once exfiltrated data is uploaded into an AI system, it’s essentially out of the victim’s control. “If a hacker uploads all your data to WormGPT, in my mind, that’s gone,” Davidoff shared.  

Another case involves the open-source AI code editor Cursor, a fork of Visual Studio Code, which was found to contain a vulnerability that allowed attackers to silently execute malicious code on developer systems. Since developers often store API keys, SSH credentials, and sensitive data locally, a single compromised tool can cascade into full environment takeover.  
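
The broader lesson is to treat developer tooling configuration as untrusted input. As one small illustration (a defensive sketch under stated assumptions, not a reconstruction of the Cursor flaw itself), the Python below scans a directory of cloned repositories for VS Code-style tasks configured to auto-run when a folder is opened, one known auto-execution vector in Visual Studio Code derivatives. The script structure and output format are illustrative.

    import json
    import sys
    from pathlib import Path

    # Tasks with runOptions.runOn set to "folderOpen" execute automatically when
    # a workspace is opened in VS Code-derived editors (if workspace trust is
    # disabled), which makes them a convenient code-execution vector.
    AUTO_RUN_TRIGGER = "folderOpen"

    def find_auto_run_tasks(repo_root):
        """Return (path, command) pairs for tasks set to run on folder open."""
        hits = []
        for tasks_file in Path(repo_root).rglob(".vscode/tasks.json"):
            try:
                config = json.loads(tasks_file.read_text(encoding="utf-8"))
            except (OSError, json.JSONDecodeError):
                # Unreadable, or JSONC-style comments; skipped in this sketch.
                continue
            for task in config.get("tasks", []):
                if task.get("runOptions", {}).get("runOn") == AUTO_RUN_TRIGGER:
                    hits.append((tasks_file, task.get("command", "<unknown>")))
        return hits

    if __name__ == "__main__":
        root = sys.argv[1] if len(sys.argv) > 1 else "."
        for path, command in find_auto_run_tasks(root):
            print(f"[!] Auto-run task in {path}: {command}")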

The Supply Chain Blind Spot 

When Operation Aurora hit in 2010, organizations suddenly woke up to the risks in source code management. Today’s blind spot is AI-powered development tools. Many organizations don’t even know if their internal developers—or third-party contractors—are using AI assistants. And if they are, how secure are those tools? 

Consider this: a Veracode study found that about 45% of AI-generated code contains vulnerabilities, including SQL injection and cross-site scripting. Another study published on arXiv showed that iterative AI code “refinements” can actually make things worse — after five rounds of refinement, critical vulnerabilities increased by nearly 40%. 

The result? AI may speed up coding, but it also increases the attack surface. When third-party developers, contractors, or open-source maintainers unknowingly use insecure AI-generated code, your organization inherits the risk. 
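
To see what these studies are counting, here is a minimal, self-contained Python illustration (using sqlite3 purely for demonstration) of the most common flaw class, SQL injection. It contrasts the string-built query pattern AI assistants often emit with the parameterized version a reviewer should insist on:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

    def find_user_vulnerable(name):
        # The pattern AI assistants often emit: user input concatenated into SQL.
        query = f"SELECT * FROM users WHERE name = '{name}'"
        return conn.execute(query).fetchall()

    def find_user_safe(name):
        # Parameterized query: the driver treats the input as data, never as SQL.
        return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

    payload = "' OR '1'='1"               # classic injection string
    print(find_user_vulnerable(payload))  # leaks every row in the table
    print(find_user_safe(payload))        # returns [] as it should

The two functions differ by a single line, which is exactly why such bugs slip past review when AI generates plausible-looking code at speed.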

Organized Crime Meets AI 

It’s not just traditional hackers using these tools. Organized crime groups are increasingly blending fraud, infiltration, and AI. Multiple sources, including the FBI, have shared that criminals are using AI to: 

  • Generate fake résumés. 
  • Pass job interviews with natural language AI assistance. 
  • Secure remote developer positions inside legitimate companies. 

This echoes the North Korean “fake employee” operations—but now supercharged with AI. In a recent Cyberside Chats episode on insider threats from hiring fake employees, Davidoff warned, “Even if you have good screening processes for your own employees, what about the developers your vendors hire?” 

Four Steps to Reduce Vibe Hacking Risks 

The rise of vibe hacking doesn’t mean organizations are helpless. Here are practical steps to reduce risk: 

  1. Establish AI ground rules: Even if you don’t employ developers, your staff may experiment with AI tools. Set policies about when AI can be used—and make it clear that sensitive data must never be pasted into AI systems. LMG recently published an AI-Enhanced Cybersecurity Checklist with guidance on balancing productivity and security.
  2. Strengthen your software supply chain: Ask your vendors and contractors about their AI use in development. Do they vet AI-generated code? Do they maintain a Software Bill of Materials (SBOM) so you know what’s inside the software you buy? SBOMs are critical for quickly assessing exposure when new vulnerabilities emerge (see the sketch after this list).
  3. Treat endpoints like crown jewels: Restrict what software employees can install—especially IT staff and developers. Provide a sandbox machine for testing unfamiliar tools and deploy strong endpoint protection with least-privilege access. As Davidoff emphasized, “Do not have IT staff testing AI tools on their production systems. That is critical.” 
  4. Update your incident response playbooks: Include scenarios where AI is part of the attack: compromised dev tools, malicious packages, or vendor incidents. Test these scenarios through tabletop exercises. As Matt Durrin observed, “Pretty much every scoping call for tabletop exercises lately has been, ‘We want to do a third-party compromise, and we want AI involved.’” If you need help developing your IR plan or playbook, please contact us.
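
To make step 2 concrete, here is a hypothetical Python sketch of the SBOM payoff: given a CycloneDX-style JSON SBOM from a vendor, it lists components and flags anything on an internal watch list. The watch-list entries and the file name are invented for illustration, not real packages or advisories.

    import json
    from pathlib import Path

    # Hypothetical internal watch list, e.g. packages flagged after a new advisory.
    WATCH_LIST = {"leftpad-ng", "fastjsonx"}  # illustrative names only

    def flag_sbom_components(sbom_path):
        """Parse a CycloneDX-style JSON SBOM; return flagged name@version strings."""
        sbom = json.loads(Path(sbom_path).read_text(encoding="utf-8"))
        flagged = []
        for component in sbom.get("components", []):
            name = component.get("name", "")
            version = component.get("version", "?")
            if name in WATCH_LIST:
                flagged.append(f"{name}@{version}")
        return flagged

    if __name__ == "__main__":
        for hit in flag_sbom_components("vendor-app.cdx.json"):  # assumed file name
            print(f"[!] Watch-listed component in vendor software: {hit}")

With an SBOM on file, answering “are we exposed?” after the next supply chain advisory becomes a quick query instead of a vendor email thread.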

Looking Ahead 

The AI arms race is here. Criminals are experimenting with “evil” AI engines that actively search for vulnerabilities, and defenders are racing to catch up. It’s not a question of if vibe hacking will impact your organization—it’s when. 

The good news: organizations that set clear policies, strengthen their vendor oversight, and rigorously test their defenses will be far better prepared when that day comes. 

Next Steps 

Vibe hacking highlights the double-edged sword of AI: the same tools that empower productivity can also empower cybercrime. By setting AI use policies, demanding transparency from vendors, locking down endpoints, and updating your response plans, you can reduce risk and build resilience. 

At LMG Security, we help organizations get ahead of these challenges through AI risk assessments, third-party risk management solutions, and custom tabletop exercises. If your organization is ready to build practical defenses against vibe hacking, contact us today to get started. 

About the Author

LMG Security Staff Writer