
multi-llm-cross-check-mcp-server

MCP.Pizza Chef: lior-ps

The multi-llm-cross-check-mcp-server is an MCP server that queries multiple large language model providers in parallel, including OpenAI, Anthropic, Perplexity AI, and Google Gemini. It cross-checks LLM responses asynchronously to improve reliability and consistency, and integrates easily with Claude Desktop. The server requires Python 3.8+ and an API key for each LLM provider you want to use, offering a unified interface for multi-LLM querying with fast, parallel response aggregation.

Use This MCP server To

  • Compare answers from multiple LLMs for consistency verification
  • Aggregate responses from different LLM providers in real time
  • Integrate multi-LLM querying into Claude Desktop workflows
  • Speed up response times using asynchronous parallel LLM calls
  • Validate AI-generated content by cross-referencing multiple sources
  • Develop applications requiring multi-provider LLM consensus
  • Monitor and benchmark LLM provider response quality simultaneously

README

Multi LLM Cross-Check MCP Server

A Model Context Protocol (MCP) server that allows cross-checking responses from multiple LLM providers simultaneously. This server integrates with Claude Desktop as an MCP server to provide a unified interface for querying different LLM APIs.

Features

  • Query multiple LLM providers in parallel
  • Currently supports:
    • OpenAI (ChatGPT)
    • Anthropic (Claude)
    • Perplexity AI
    • Google (Gemini)
  • Asynchronous parallel processing for faster responses
  • Easy integration with Claude Desktop
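The parallel-query flow can be sketched with asyncio.gather. The provider functions and the PROVIDERS table below are hypothetical stand-ins for the real SDK calls, included only to illustrate how configured providers run concurrently while unconfigured ones are skipped:

```python
import asyncio
import os

# Hypothetical per-provider query functions; the real server wires these
# to each vendor's SDK. Here they just echo the prompt to show the flow.
async def query_openai(prompt: str) -> str:
    return f"openai:{prompt}"

async def query_anthropic(prompt: str) -> str:
    return f"anthropic:{prompt}"

# Map a display name to the env var that enables it and its query function.
PROVIDERS = {
    "ChatGPT": ("OPENAI_API_KEY", query_openai),
    "Claude": ("ANTHROPIC_API_KEY", query_anthropic),
}

async def cross_check(prompt: str) -> dict:
    # Only providers with a configured API key are queried; the rest are
    # skipped. All active providers run concurrently via asyncio.gather.
    active = {name: fn for name, (env_var, fn) in PROVIDERS.items()
              if os.environ.get(env_var)}
    results = await asyncio.gather(*(fn(prompt) for fn in active.values()))
    return dict(zip(active.keys(), results))
```

Because gather awaits all coroutines at once, total latency is roughly that of the slowest provider rather than the sum of all of them.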

Prerequisites

  • Python 3.8 or higher
  • API keys for the LLM providers you want to use
  • uv package manager (install with pip install uv)

Installation

  1. Clone this repository:
git clone https://github.com/lior-ps/multi-llm-cross-check-mcp-server.git
cd multi-llm-cross-check-mcp-server
  2. Initialize a uv environment and install the requirements:
uv venv
uv pip install -r requirements.txt
  3. Configure Claude Desktop: create or edit the file claude_desktop_config.json in your Claude Desktop configuration directory with the following content:

    {
      "mcpServers": {
        "multi-llm-cross-check": {
          "command": "uv",
          "args": [
            "--directory",
            "/multi-llm-cross-check-mcp-server",
            "run",
            "main.py"
          ],
          "env": {
            "OPENAI_API_KEY": "your_openai_key",
            "ANTHROPIC_API_KEY": "your_anthropic_key",
            "PERPLEXITY_API_KEY": "your_perplexity_key",
            "GEMINI_API_KEY": "your_gemini_key"
          }
        }
      }
    }

    Notes:

    1. The config file must be strict JSON, so it cannot contain comments. You only need to add API keys for the LLM providers you want to use; the server will skip any provider without a configured key. Keys are available from OpenAI (https://platform.openai.com/api-keys), Anthropic (https://console.anthropic.com/account/keys), Perplexity (https://www.perplexity.ai/settings/api), and Gemini (https://makersuite.google.com/app/apikey).
    2. You may need to put the full path to the uv executable in the command field. You can get this by running which uv on macOS/Linux or where uv on Windows.

Using the MCP Server

Once configured:

  1. The server will automatically start when you open Claude Desktop
  2. You can use the cross_check tool in your conversations by asking to "cross check with other LLMs"
  3. Provide a prompt, and it will return responses from all configured LLM providers

API Response Format

The server returns a dictionary with responses from each LLM provider:

{
    "ChatGPT": { ... },
    "Claude": { ... },
    "Perplexity": { ... },
    "Gemini": { ... }
}
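A caller can then extract each successful answer and check whether the providers agree. The payload shapes below are an assumption for illustration; the real per-provider payloads follow each vendor's API:

```python
# Example aggregated result; the per-provider payload shape is illustrative.
responses = {
    "ChatGPT": {"text": "Paris"},
    "Claude": {"text": "Paris"},
    "Gemini": {"error": "rate limited"},  # a provider that failed
}

# Keep only successful answers, then check whether they all match.
answers = {p: r["text"] for p, r in responses.items() if "text" in r}
all_agree = len(set(answers.values())) == 1
print(answers, all_agree)
```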

Error Handling

  • If an API key is not provided for a specific LLM, that provider will be skipped
  • API errors are caught and returned in the response
  • Each LLM's response is independent, so errors with one provider won't affect others
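The isolation described above can be sketched as a small wrapper that catches each provider's exception and folds it into the result dictionary. The provider functions here are fakes used only to demonstrate the pattern:

```python
import asyncio

# Fake providers for illustration: one succeeds, one raises.
async def fake_provider_ok(prompt: str) -> str:
    return "fine"

async def fake_provider_down(prompt: str) -> str:
    raise RuntimeError("API quota exceeded")

async def safe_query(name, fn, prompt):
    # Errors are caught per provider and returned in the response,
    # so one failing provider never affects the others.
    try:
        return name, {"text": await fn(prompt)}
    except Exception as exc:
        return name, {"error": str(exc)}

async def run_cross_check(prompt: str) -> dict:
    pairs = await asyncio.gather(
        safe_query("Claude", fake_provider_ok, prompt),
        safe_query("Gemini", fake_provider_down, prompt),
    )
    return dict(pairs)
```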


Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

License

This project is licensed under the MIT License - see the LICENSE file for details.

multi-llm-cross-check-mcp-server FAQ

How do I set up API keys for multiple LLM providers?
Obtain API keys from each provider's developer portal (OpenAI, Anthropic, Perplexity AI, Google Gemini) and configure them in the server's environment or config files.
Can this server handle asynchronous requests to all LLMs?
Yes, it uses asynchronous parallel processing to query multiple LLMs simultaneously for faster aggregated responses.
Is it compatible with Claude Desktop?
Yes, it is designed to integrate easily with Claude Desktop as an MCP server.
What Python version is required?
Python 3.8 or higher is required to run this MCP server.
How do I install the server dependencies?
Use the uv package manager to create a virtual environment and install requirements with 'uv pip install -r requirements.txt'.
Which LLM providers are supported?
Currently supports OpenAI (ChatGPT), Anthropic (Claude), Perplexity AI, and Google Gemini.
Can I add support for other LLM providers?
The server architecture allows extension, but adding new providers requires custom integration and API handling.
How does cross-checking improve response quality?
By comparing outputs from multiple LLMs, you can identify inconsistencies and select the most reliable or consensus answer.
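One simple consensus strategy, sketched here as an assumption rather than anything the server itself implements, is a normalized majority vote over the collected answers:

```python
from collections import Counter

def consensus(answers: dict) -> str:
    # Normalize answers (trim whitespace, lowercase) and pick the most
    # common one; on a tie, one of the tied answers is returned.
    counts = Counter(a.strip().lower() for a in answers.values())
    return counts.most_common(1)[0][0]
```

For short factual prompts this catches an outlier provider quickly; for long-form answers you would likely want a fuzzier comparison than exact string matching.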