Your API Key Is in the Tool Call (And That's a Problem)
Credential exposure in MCP happens when a server accepts API keys as tool parameters instead of authenticating via HTTP headers. Whether by mistake or by design, the result is the same: your key lands in plain-text logs, the LLM context window, and is one prompt injection away from leaking.
Credential exposure is an MCP vulnerability in which tool parameters accept API keys, passwords, or tokens directly instead of handling authentication at the transport layer. It is rarely malicious; usually an inexperienced developer simply didn't know how to set up HTTP header auth. But regardless of intent, your secrets leak.
Not malicious, but still dangerous
A security scanner can't prove intent just by reading a JSON schema. Most MCP servers that accept keys as tool parameters aren't honeypots. They're built by developers who took a shortcut. But the mechanical flaw is identical whether the developer is a bad actor or just inexperienced: your key gets exposed.
How credential exposure happens
MCP servers declare tools with typed input parameters. A server might define:
```json
{
  "name": "search_papers",
  "inputSchema": {
    "properties": {
      "query": { "type": "string" },
      "api_key": { "type": "string", "description": "Your API key" }
    }
  }
}
```

When the LLM sees this schema, it constructs a JSON-RPC payload with your key embedded directly in the tool call arguments. This creates two problems:
Why tool-parameter auth is dangerous
Plain-text logging. Tool call arguments (JSON payloads) are logged by LLM providers (Anthropic, OpenAI) for telemetry and by chat clients (Claude Desktop, Cursor) for debugging. Your key ends up in plain-text logs on systems you don't control.
Context window exposure. The LLM needs the key in its active context to construct the JSON call. If a prompt injection attack occurs, or the user shares their chat, the LLM can be tricked into printing the key directly into the conversation.
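Concretely, here is what the leaked request could look like: a hypothetical JSON-RPC tools/call built from the schema above, with an illustrative key value. This entire payload is what gets logged verbatim by providers and clients:

```json
{
  "jsonrpc": "2.0",
  "method": "tools/call",
  "params": {
    "name": "search_papers",
    "arguments": {
      "query": "transformers",
      "api_key": "sk-abc123"
    }
  }
}
```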
The correct way: transport-layer authentication
MCP servers should handle authentication at the transport layer, entirely separate from tool logic.
For HTTP-based MCP servers (SSE / Streamable HTTP):
1. The user enters the API key in their chat client's MCP connection config.
2. The client sends an HTTP header when connecting: Authorization: Bearer <API_KEY>
3. The MCP server validates the header and opens the connection.
4. Tool definitions require no auth arguments, because the connection is already authenticated.
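The server-side check in step 3 can be sketched in a few lines. This is a minimal illustration, not a full MCP server; the function name and the hard-coded key are placeholders, and a real server would load the expected key from a secrets store:

```python
import hmac

EXPECTED_KEY = "sk-abc123"  # illustrative; load from a secrets store in practice

def is_authorized(headers: dict) -> bool:
    """Validate the Authorization header before accepting an MCP connection."""
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return False
    token = auth[len("Bearer "):]
    # Constant-time comparison to avoid leaking key bytes via timing
    return hmac.compare_digest(token, EXPECTED_KEY)

# The check happens once, at connection time; tool handlers never see the key
print(is_authorized({"Authorization": "Bearer sk-abc123"}))  # True
print(is_authorized({"Authorization": "Bearer wrong-key"}))  # False
```

Because the key travels only in the transport header, it never enters the LLM's context window and never appears in tool call arguments.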
Client config (correct):

```
Authorization: Bearer sk-abc123   ← transport header
```

Tool call (correct):

```
{ "method": "search_papers", "params": { "query": "transformers" } }
← no key in the payload
```

What gets exposed
- API keys for OpenAI, Anthropic, or other services
- Access tokens for GitHub, Slack, or cloud providers
- Database credentials mentioned in conversation context
- Private keys or secrets the LLM has seen in the session
Red flags to watch for
- Tool parameters named api_key, token, password, secret, or credentials
- Descriptions that ask for "your key" or "authentication token"
- Tools that require credentials unrelated to their stated purpose
- Any MCP server that asks for secrets inside the tool call instead of at connection time
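The first red flag is mechanically checkable from a tool's JSON schema alone. Here is a sketch of that check, with an illustrative name list and helper function (not our scanner's actual implementation):

```python
# Parameter names that commonly indicate credential-taking tools (illustrative list)
SUSPICIOUS_NAMES = {"api_key", "apikey", "token", "password", "secret", "credentials"}

def credential_red_flags(tool: dict) -> list[str]:
    """Return the names of inputSchema properties that look like credential inputs."""
    props = tool.get("inputSchema", {}).get("properties", {})
    return [
        name for name in props
        if name.lower().replace("-", "_") in SUSPICIOUS_NAMES
    ]

tool = {
    "name": "search_papers",
    "inputSchema": {
        "properties": {
            "query": {"type": "string"},
            "api_key": {"type": "string", "description": "Your API key"},
        }
    },
}
print(credential_red_flags(tool))  # ['api_key']
```

Name matching alone cannot prove intent (as noted above, a schema can't tell you whether the developer is malicious or merely inexperienced), but it reliably flags the mechanical exposure either way.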
What to do about it
- Run our scanner to detect credential exposure before connecting.
- Only use MCP servers that authenticate via HTTP headers, not tool parameters.
- Never paste API keys into your chat conversation when using MCP servers.
- Use scoped, rotatable tokens with minimal permissions.
If you maintain an MCP server, move auth to HTTP headers. It's the correct pattern and it's not hard to implement.