By Staff Writer at LMG Security   /   Mar 5th, 2026

Google Gemini Changed the Rules: Are Your API Keys Exposed?

For years, Google told developers that certain API keys were not secrets. They were described as project identifiers—safe to embed in client-side code for Maps or Firebase, especially with referrer restrictions.

Then something changed. 

“Google retroactively turned API keys that were meant to be public into sensitive credentials,” said Tom Pohl, Head of Penetration Testing at LMG Security, during a recent episode of Cyberside Chats (our cybersecurity podcast). 

Recent research from Truffle Security identified nearly 3,000 publicly exposed Google API keys that were still live and usable against Gemini endpoints. They called the core problem "retroactive privilege expansion": a key that used to be "safe enough" to publish suddenly gains access to more powerful services after a new feature is enabled in the same project.

Tom put it plainly: “You create an API key, it’s good for all the Google services that it could be used on,” he said — and “when a new service comes along, it’s good for that one too.” In other words, a key that sat harmlessly in a website for years can become far more valuable to an attacker overnight — not because your team changed anything, but because the platform did. 

That’s why this is more than a story about sloppy key management. “Old assumptions met new capabilities, and nobody recalculated the risk,” said Sherri Davidoff, founder of LMG Security. What used to be a low-level billing annoyance can now authorize AI compute, exhaust quotas, and drive unexpected costs via credentials that were never intended to be sensitive.

This is not a vulnerability in Gemini itself. It’s a cloud governance blind spot — the kind that emerges when cloud providers add powerful new services to long-lived environments without forcing teams to re-evaluate legacy credentials. 

 

What Actually Changed? 

Historically, many Google API keys were embedded directly in browser-based applications. They were restricted by HTTP referrer and often scoped to a narrow set of APIs. In that context, the industry message was clear: these were not authentication secrets in the traditional sense. Gemini changed that equation. 

AI endpoints dramatically increase the value and cost of API access. A key that can interact with Gemini services is not just identifying a project. It is authorizing compute-intensive operations tied directly to billing. That shift transforms what was once a relatively low-impact exposure into a potential financial and operational liability. 

The Truffle Security research, “Google API Keys Weren’t Secrets. But then Gemini Changed the Rules,” highlights how thousands of exposed keys could still successfully access Gemini APIs. BleepingComputer similarly reported that previously harmless Google API keys now expose Gemini AI data. 

This is a governance blind spot: organizations enabled AI capabilities in projects that already contained embedded or public-facing keys without reassessing the security and billing implications. 

 

The Real Risk Is Not Just Data 

Unlike legacy Maps calls, Gemini endpoints can generate sustained compute costs in minutes. Unrestricted or loosely restricted keys can lead to: 

  • Unauthorized API usage 
  • Quota exhaustion that disrupts legitimate workloads 
  • Unexpected billing spikes 
  • Abuse that impacts availability 

Depending on configuration, exposed keys may also reveal prompts, model outputs, or other sensitive AI interactions.  

This is the same fundamental lesson we have seen in other API-related incidents. In our blog, “Exposed Secrets, Broken Trust: What the DOGE API Key Leak Teaches Us About Software Security,” we explored how exposed credentials create tangible financial and operational consequences. 

The difference now is that AI amplifies the blast radius. 

As discussed on the podcast, “If it can trigger compute, it can trigger cost, and cost is risk.” 

 

Five Immediate Actions for Security Leaders 

For IT and security leadership, AI enablement should automatically trigger a credential governance review. 

Here is where to start.

  1. Audit Legacy API Keys Before and After AI Enablement

Inventory every API key across your cloud projects. Confirm: 

    • Is this key still required? 
    • Who owns it? 
    • What APIs is it scoped to? 
    • Is it restricted appropriately? 
    • Is it embedded in public-facing code? 

If Gemini or any AI service has been enabled in a project that historically used public-facing API keys, assume the risk profile has changed.  

This is a classic cloud drift problem. Configurations that were acceptable yesterday may be unacceptable today. 
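The inventory questions above can be turned into an automated first pass. The sketch below is illustrative only: it assumes you have already exported your key inventory into a list of dicts (the field names `api_restrictions` and `embedded_in_public_code` are invented for this example), and it flags keys whose risk profile changed once AI services were enabled in their project.

```python
# Illustrative audit sketch -- field names are assumptions, not a real
# cloud provider export format. Adapt to your own inventory tooling.

def flag_risky_keys(keys, ai_enabled_projects):
    """Flag keys whose risk profile changed after AI enablement.

    keys: list of dicts with 'name', 'project', 'api_restrictions'
          (list of allowed API services; empty means unrestricted),
          and 'embedded_in_public_code' (tracked by your own team).
    ai_enabled_projects: set of project IDs where AI services are on.
    """
    findings = []
    for key in keys:
        unrestricted = not key.get("api_restrictions")
        public = key.get("embedded_in_public_code", False)
        if key["project"] in ai_enabled_projects and (unrestricted or public):
            findings.append({
                "key": key["name"],
                "reason": "unrestricted" if unrestricted else "public-facing",
            })
    return findings

# Example inventory: one unrestricted browser key, one scoped backend key.
keys = [
    {"name": "maps-site-key", "project": "prod-web",
     "api_restrictions": [], "embedded_in_public_code": True},
    {"name": "backend-key", "project": "prod-web",
     "api_restrictions": ["maps-backend.googleapis.com"],
     "embedded_in_public_code": False},
]
print(flag_risky_keys(keys, {"prod-web"}))
# -> [{'key': 'maps-site-key', 'reason': 'unrestricted'}]
```

In practice the input would come from your cloud provider's key-listing tooling plus whatever repository or site scanning you use to detect embedded keys.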

 

  2. Treat API Keys as Sensitive Credentials in the AI Era

Even if a vendor once described an API key as “not a secret,” that guidance may no longer align with today’s technical reality. 

That means: 

    • Rotate keys regularly. 
    • Apply strict quotas. 
    • Enable real-time billing alerts. 
    • Monitor for abnormal usage patterns. 
    • Integrate API key monitoring into SOC workflows. 

Google’s own documentation on API key best practices reinforces the need for restriction and monitoring. In the AI era, relying solely on referrer restrictions is not enough. 
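The quota and billing-alert items lend themselves to a simple heuristic check. This is a minimal sketch, not production monitoring: it flags the most recent day's spend when it exceeds a multiple of the trailing average, and should complement (not replace) your provider's native budget alerts.

```python
from statistics import mean

def billing_spike(daily_costs, multiplier=3.0, window=7):
    """Return True if the latest day's cost exceeds `multiplier` times
    the average of the preceding `window` days. Deliberately simple:
    a leaked key hitting AI endpoints often shows up as exactly this
    kind of step change in spend."""
    if len(daily_costs) < window + 1:
        return False  # not enough history to judge
    baseline = mean(daily_costs[-window - 1:-1])
    return daily_costs[-1] > multiplier * baseline

# A quiet week of Maps traffic, then abnormal AI-endpoint usage.
print(billing_spike([12, 11, 13, 12, 11, 12, 13, 240]))  # -> True
```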

 

  3. Enforce Least Privilege at the API Level

Referrer or IP restrictions are helpful, but they are not a substitute for API-level scoping. For example, if you are embedding Google Maps on your website, restrict that key so it can only be used for the Maps API. If it leaks, it cannot be reused for Gemini or any other service.  

Every key should be limited explicitly to only the APIs it requires. “Allow all APIs” should not exist in production environments. 

Least privilege does not stop at IAM roles. It applies to every credential surface — including API keys that developers once treated casually. 
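That rule can be enforced mechanically. A minimal sketch, assuming you can export each key's API restriction list (empty meaning unrestricted) and maintain a per-key allowlist of the APIs it actually needs; the service names shown are placeholders:

```python
def violates_least_privilege(key_restrictions, required_apis):
    """True if a key allows more than it needs. An empty restriction
    list means 'allow all APIs' -- the worst case, always a violation."""
    if not key_restrictions:
        return True
    return not set(key_restrictions) <= set(required_apis)

# A Maps-only key scoped correctly vs. one that could also reach Gemini.
print(violates_least_privilege(["maps-backend"], ["maps-backend"]))  # -> False
print(violates_least_privilege(["maps-backend", "gemini"], ["maps-backend"]))  # -> True
print(violates_least_privilege([], ["maps-backend"]))  # -> True
```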

 

  4. Isolate AI Development from Production Projects

Separate your AI development and test environments from existing production projects. In particular, avoid enabling AI services in long-lived production projects that contain embedded keys or legacy web applications. Instead: 

    • Use separate projects or subscriptions for AI experimentation. 
    • Apply distinct billing accounts. 
    • Enforce tighter quotas in development environments. 
    • Limit cross-project access. 

Isolation reduces blast radius from both a security and financial perspective. 
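The isolation rule is also auditable. The sketch below assumes you can list enabled services per project; the service-name prefixes shown (`generativelanguage`, `aiplatform`) correspond to Google's Gemini API and Vertex AI, but verify the exact names against your own environment.

```python
AI_PREFIXES = ("generativelanguage", "aiplatform")

def isolation_violations(enabled_services, ai_projects, prefixes=AI_PREFIXES):
    """List (project, service) pairs where an AI service is enabled in
    a project NOT designated for AI work.

    enabled_services: dict of project ID -> list of enabled service names.
    ai_projects: set of project IDs set aside for AI experimentation.
    """
    return [
        (project, svc)
        for project, services in enabled_services.items()
        if project not in ai_projects
        for svc in services
        if svc.startswith(prefixes)
    ]

services = {
    "prod-web": ["maps-backend.googleapis.com",
                 "generativelanguage.googleapis.com"],  # Gemini in prod!
    "ai-sandbox": ["aiplatform.googleapis.com"],
}
print(isolation_violations(services, {"ai-sandbox"}))
# -> [('prod-web', 'generativelanguage.googleapis.com')]
```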

Tom Pohl’s DEF CON talk, “Private Keys in Public Places,” underscores how frequently credentials leak in real-world environments. The key question is not whether exposure happens. It is how much damage is possible when it does. 

 

  5. Update Third-Party Risk Management for AI-Driven Credential Risk

AI adoption is accelerating across SaaS vendors and managed service providers. Third-party risk assessments must evolve accordingly. 

Ask vendors: 

    • How are API keys scoped and restricted? 
    • Are AI services isolated from production systems? 
    • Are keys rotated and monitored? 
    • Are billing spikes detected in real time? 
    • What logging is available for AI endpoint access? 

AI introduces new cost and abuse vectors. Vendor questionnaires that do not address AI credential management are already outdated. 

 

Why This Matters Beyond Google 

AI capabilities are being added to existing cloud environments at a rapid pace. When new services are enabled, old configurations inherit new risk. 

We have seen this pattern before: 

  • AWS S3 storage buckets once considered low risk become public exposure events. 
  • IAM roles designed for limited automation gain excessive permissions. 
  • API keys once treated casually become cost-amplifying credentials. 

AI is the latest multiplier. Security leaders should formalize AI enablement as a governance event. That means documented reviews, credential audits, billing guardrails, and architectural segmentation. 

If that process does not exist today, now is the time to build it. 

 

Conclusion: Recalculate Your Risk Model 

Gemini did not break API keys. It changed their potential use cases. Make sure your risk model takes new AI capabilities into account. Now is the time to audit and rotate keys, enforce API-level least privilege, isolate AI workloads, and monitor usage and billing in real time. 

At LMG Security, our cloud security assessments and penetration testing services help organizations find exposed credentials, insecure configurations, and governance blind spots. If you are expanding AI capabilities, consider a proactive review aligned with services like our cloud security and penetration testing offerings at https://www.lmgsecurity.com to ensure your blast radius has not quietly expanded. 

The takeaway is straightforward: make AI enablement a governance event, and include credential review, least privilege, and cost guardrails by default. 

About the Author

LMG Security Staff Writer

CONTACT US