IT Home June 1st news, GitHub's official MCP server can give large language models a number of new capabilities, including reading repository issues that users have access to and submitting new pull requests (PRs). This constitutes a triple threat to prompt injection attacks: private data access, malicious instructions exposure, and information leakage capabilities.
Swiss cybersecurity company Invariant Labs posted on Thursday that they found a vulnerability in GitHub’s official MCP server, and attackers could hide malicious instructions in public repositories, inducing AI agents such as Claude 4 to leak sensitive data from MCP users’ private repositories. At the same time, similar vulnerabilities also appear in GitLab Duo.
The core of the attack is to obtain "other repositories that the user is processing" information. Since the MCP server has access to the user's private repository, LLM will create a new PR after processing the issue - which exposes the private repository name.
In the Invariant test case, the user only needs to issue the following request to Claude to trigger information leakage:
It is worth mentioning that if multiple MCP servers are combined (one accesses private data, the other exposes malicious intentions Token, the third leaked data will pose a greater risk. GitHub MCP has now integrated these three elements into a single system.
Detailed explanation of the attack mechanism
Preconditions:
Users use Claude, etc. MCP client and bind GitHub account
The user has both a public repository (such as
Attack process:
The attacker creates a malicious issue with prompt injection in a public repository
The user sends a regular request to Claude (such as "View the issue of pacman open source repository")
AI triggers a malicious instruction when obtaining a public repository
AI pulls private repository data into the context
AI creates a PR with private data in a public repository (IT Home Note: The attacker can access the data publicly)
Performance results:
Successfully leaks out of the user ukend0464 Private warehouse information
Leaked content includes sensitive data such as private project "Jupiter Star", immigration plan, salary, etc.
This vulnerability originates from AI workflow design flaws, not traditional GitHub platform vulnerabilities. In response, the company proposed two sets of defense solutions: dynamic permission control, restricting access rights of AI agents; continuous security monitoring, intercepting abnormal data flow through real-time behavioral analysis and context-aware policies.