
chain-of-thought-mcp-server

MCP.Pizza Chef: beverm2391

The chain-of-thought-mcp-server is an MCP server that uses Groq's API to expose raw chain-of-thought tokens from reasoning models such as Qwen's qwq model. Inspired by Anthropic's think tool, it lets a model 'stop and think' during multi-step tasks, which improves performance on benchmarks such as SWE Bench. It is designed for developers who want transparent reasoning traces in their AI workflows.

Use This MCP Server To

  - Expose raw chain-of-thought tokens for reasoning transparency
  - Integrate multi-step reasoning in AI workflows
  - Improve model performance on complex tool use tasks
  - Enable stepwise reasoning in LLM-powered applications
  - Use Qwen's qwq model chain-of-thought outputs
  - Combine with Anthropic's think tool methodology
  - Feed detailed reasoning context into MCP clients
  - Support debugging and analysis of model reasoning

README

Chain of Thought MCP Server

Anthropic's recent article "The 'think' tool: Enabling Claude to stop and think in complex tool use situations" shows that giving the model an external think tool notably increases performance on SWE Bench.

This MCP server uses Groq's API to call Qwen's qwq model, which exposes its raw chain-of-thought tokens.
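As a rough sketch of how that works (the model id "qwen-qwq-32b" and the `<think>`-tag convention for raw reasoning tokens are assumptions, not details taken from this repository):

```python
# Sketch only: the model id and the <think>...</think> convention for
# raw reasoning tokens are assumptions about Groq/qwq, not guarantees
# from this repository.
import re


def build_cot_request(prompt: str) -> dict:
    """Build a Groq-style chat-completion payload for a reasoning model."""
    return {
        "model": "qwen-qwq-32b",  # assumed model id
        "messages": [{"role": "user", "content": prompt}],
    }


def split_reasoning(text: str) -> tuple[str, str]:
    """Split raw chain-of-thought from the final answer, assuming the
    model wraps its reasoning in <think>...</think> tags."""
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if not match:
        return "", text.strip()
    reasoning = match.group(1).strip()
    answer = text[match.end():].strip()
    return reasoning, answer
```

A client would POST the payload to Groq's chat completions endpoint, then pass the returned message content through split_reasoning to separate the scratchpad from the final answer.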

Installation

  1. Clone this repository to your local machine.
  2. Run uv sync to install dependencies.
  3. Get a Groq API key from here.
  4. Update your mcp configuration with:
"mcpServers": {
  "chain_of_thought": {
    "command": "uv",
    "args": [
        "--directory",
        "path/to/cot-mcp-server",
        "run",
        "src/server.py"
      ],
      "env": {
        "GROQ_API_KEY": "your-groq-api-key"
      }
    }
}

The path should be the absolute local path to this repository; you can get it by running pwd from the root of the repository.

Instructing The AI To Use This MCP Server

I personally prefer that the agent call this tool on every request to improve performance. I add this to my rules for the agent:

<IMPORTANT>
<when_to_use_tool>
You should call the mcp chain_of_thought tool every time you talk to the user. It generates a chain-of-thought stream that you will use to complete the user's request.
</when_to_use_tool>

Before taking any action or responding to the user use the chain of thought tool as a scratchpad to:
- List the specific rules that apply to the current request
- Check if all required information is collected
- Verify that the planned action complies with all policies
- Iterate over tool results for correctness 

Here are some examples of what to iterate over inside the think tool:
<cot_tool_example_1>
User wants to cancel flight ABC123
- Need to verify: user ID, reservation ID, reason
- Check cancellation rules:
  * Is it within 24h of booking?
  * If not, check ticket class and insurance
- Verify no segments have been flown or are in the past
- Plan: collect missing info, verify rules, get confirmation
</cot_tool_example_1>

<cot_tool_example_2>
User wants to book 3 tickets to NYC with 2 checked bags each
- Need user ID to check:
  * Membership tier for baggage allowance
  * Which payment methods exist in profile
- Baggage calculation:
  * Economy class × 3 passengers
  * If regular member: 1 free bag each → 3 extra bags = $150
  * If silver member: 2 free bags each → 0 extra bags = $0
  * If gold member: 3 free bags each → 0 extra bags = $0
- Payment rules to verify:
  * Max 1 travel certificate, 1 credit card, 3 gift cards
  * All payment methods must be in profile
  * Travel certificate remainder goes to waste
- Plan:
1. Get user ID
2. Verify membership level for bag fees
3. Check which payment methods are in the profile and whether their combination is allowed
4. Calculate total: ticket price + any bag fees
5. Get explicit confirmation for booking
</cot_tool_example_2>

</IMPORTANT>
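The baggage arithmetic in the second example can be checked with a short sketch. The free-bag allowances follow the example above; the $50-per-extra-bag rate is inferred from "3 extra bags = $150" and is otherwise an assumption:

```python
# Sketch of the bag-fee arithmetic from cot_tool_example_2 above.
# Free-bag allowances come from the example; the $50 per-extra-bag
# rate is inferred from "3 extra bags = $150".
FREE_BAGS = {"regular": 1, "silver": 2, "gold": 3}
FEE_PER_EXTRA_BAG = 50  # assumed rate in dollars


def bag_fees(tier: str, passengers: int, bags_each: int) -> int:
    """Total checked-bag fees in dollars for one booking."""
    extra_per_passenger = max(0, bags_each - FREE_BAGS[tier])
    return extra_per_passenger * passengers * FEE_PER_EXTRA_BAG
```

For 3 passengers with 2 checked bags each, this reproduces the three cases in the example: $150 for a regular member and $0 for silver or gold.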

chain-of-thought-mcp-server FAQ

How do I install the chain-of-thought-mcp-server?
Clone the repository, run 'uv sync' to install dependencies, obtain a Groq API key, and configure your MCP server with the provided command and environment variables.
What models does this MCP server use to generate chain-of-thought tokens?
It uses Groq's API to call Qwen's qwq model, which exposes raw chain-of-thought tokens for enhanced reasoning.
Can this server improve reasoning performance in LLM workflows?
Yes, by injecting chain-of-thought tokens, it enables models to perform stepwise reasoning, improving performance on benchmarks like SWE Bench.
Is the chain-of-thought-mcp-server compatible with other MCP clients?
Yes, it is designed to integrate seamlessly with MCP clients that consume chain-of-thought context for enhanced reasoning.
What is the benefit of exposing raw chain-of-thought tokens?
It provides transparency into the model's reasoning process, enabling better debugging, analysis, and more interpretable AI outputs.
Do I need a Groq API key to use this server?
Yes, a valid Groq API key is required to access the Qwen model via Groq's API.
Can this server be used with models other than Qwen's qwq?
Currently, it is built specifically to work with Qwen's qwq model through Groq's API.
How does this server relate to Anthropic's think tool?
It implements a similar concept by enabling models to 'stop and think' through chain-of-thought token injection, improving complex reasoning tasks.