mcp-chatbot

MCP.Pizza Chef: 3choff

The mcp-chatbot is a command-line client that demonstrates how to integrate the Model Context Protocol (MCP) into chatbot applications. It highlights MCP's flexibility by supporting multiple MCP servers and by declaring tools dynamically in the system prompt, which keeps it compatible with any LLM provider that follows OpenAI API standards. Tested with models such as Llama 3.2 90b and GPT-4o mini, it offers a practical, extensible foundation for building AI chatbots that leverage real-time context and multi-tool orchestration. The client requires Python 3.10 and a few common dependencies: python-dotenv, requests, mcp, and uvicorn.

Use This MCP Client To

  • Build CLI chatbots with multi-tool MCP integration
  • Demonstrate MCP client capabilities in real time
  • Test LLMs with dynamic tool support
  • Configure multiple MCP servers via JSON
  • Prototype AI chatbots using OpenAI-compatible LLMs

README

MCP Chatbot

This chatbot example demonstrates how to integrate the Model Context Protocol (MCP) into a simple CLI chatbot. The implementation showcases MCP's flexibility by supporting multiple tools through MCP servers and is compatible with any LLM provider that follows OpenAI API standards.

If you find this project helpful, don’t forget to ⭐ star the repository or buy me a ☕ coffee.

Key Features

  • LLM Provider Flexibility: Works with any LLM that follows OpenAI API standards (tested with Llama 3.2 90b on Groq and GPT-4o mini on GitHub Marketplace).
  • Dynamic Tool Integration: Tools are declared in the system prompt, ensuring maximum compatibility across different LLMs (a sketch of this follows the list).
  • Server Configuration: Supports multiple MCP servers through a simple JSON configuration file, using the same format as the Claude Desktop App.
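
To illustrate the second point, here is a minimal sketch of how discovered tools might be rendered into a system prompt. The tool dictionary shape and the prompt wording are illustrative assumptions, not the project's exact implementation.

    # Sketch: declaring discovered tools in the system prompt.
    # The tool dict shape and prompt wording are assumptions for
    # illustration, not the project's exact implementation.
    import json

    def build_system_prompt(tools: list[dict]) -> str:
        blocks = []
        for tool in tools:
            blocks.append(
                f"Tool: {tool['name']}\n"
                f"Description: {tool['description']}\n"
                f"Arguments (JSON Schema): {json.dumps(tool['input_schema'])}"
            )
        return (
            "You are a helpful assistant with access to these tools:\n\n"
            + "\n\n".join(blocks)
            + "\n\nWhen a tool is needed, reply with a JSON object naming "
            "the tool and its arguments; otherwise answer directly."
        )

Because the declaration is plain text in the prompt rather than a provider-specific function-calling API, it works with any OpenAI-compatible LLM.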

Requirements

  • Python 3.10
  • python-dotenv
  • requests
  • mcp
  • uvicorn

Installation

  1. Clone the repository:

    git clone https://github.com/3choff/mcp-chatbot.git
    cd mcp-chatbot
  2. Install the dependencies:

    pip install -r requirements.txt
  3. Set up environment variables:

    Create a .env file in the root directory and add your API key:

    LLM_API_KEY=your_api_key_here
    
  4. Configure servers:

    The servers_config.json follows the same structure as Claude Desktop, allowing for easy integration of multiple servers. Here's an example:

    {
      "mcpServers": {
        "sqlite": {
          "command": "uvx",
          "args": ["mcp-server-sqlite", "--db-path", "./test.db"]
        },
        "puppeteer": {
          "command": "npx",
          "args": ["-y", "@modelcontextprotocol/server-puppeteer"]
        }
      }
    }

    Environment variables are supported as well; pass them as you would with the Claude Desktop App. A sketch of how a client might load this configuration appears after the example below.

    Example:

    {
      "mcpServers": {
        "server_name": {
          "command": "uvx",
          "args": ["mcp-server-name", "--additional-args"],
          "env": {
            "API_KEY": "your_api_key_here"
          }
        }
      }
    }
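
For reference, here is a minimal sketch of how a client might load this configuration together with the .env file from step 3. The function name and return shape are illustrative; only python-dotenv and the standard library are used.

    # Sketch: loading servers_config.json and .env values.
    # Names like load_config are illustrative, not the project's API.
    import json
    import os

    from dotenv import load_dotenv

    def load_config(path: str = "servers_config.json") -> dict:
        load_dotenv()  # copies .env entries into os.environ
        with open(path, encoding="utf-8") as f:
            return json.load(f)["mcpServers"]

    servers = load_config()
    api_key = os.environ["LLM_API_KEY"]  # set in .env during step 3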

Usage

  1. Run the client:

    python main.py
  2. Interact with the assistant:

    The assistant will automatically detect available tools and can respond to queries based on the tools provided by the configured servers.

  3. Exit the session:

    Type quit or exit to end the session (the loop sketch below shows this check).
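
Taken together, the session loop might reduce to a few lines like these. send_to_session is a self-contained stand-in for the project's ChatSession, used here only to keep the sketch runnable.

    # Sketch of the CLI loop: read input, stop on quit/exit,
    # otherwise hand the message to the chat session.
    def send_to_session(text: str) -> str:
        """Stand-in for ChatSession; the real class talks to the LLM."""
        return f"(echo) {text}"

    while True:
        user_input = input("You: ").strip()
        if user_input.lower() in ("quit", "exit"):
            break
        print(f"Assistant: {send_to_session(user_input)}")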

Architecture

  • Tool Discovery: Tools are automatically discovered from configured servers (sketched below).
  • System Prompt: Tools are dynamically included in the system prompt, allowing the LLM to understand available capabilities.
  • Server Integration: Supports any MCP-compatible server; tested with several server implementations, including Python-based (uvicorn) and Node.js-based servers.
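
As an illustration of the discovery step, the sketch below connects to the sqlite server from the example configuration and lists its tools using the mcp Python SDK's stdio client. The client API has evolved across SDK versions, so treat this as a sketch rather than a drop-in snippet.

    # Sketch: discovering tools from one configured server over stdio.
    # Uses the mcp Python SDK; details may differ across SDK versions.
    import asyncio

    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    async def discover_tools() -> None:
        params = StdioServerParameters(
            command="uvx",
            args=["mcp-server-sqlite", "--db-path", "./test.db"],
        )
        async with stdio_client(params) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                result = await session.list_tools()
                for tool in result.tools:
                    print(f"{tool.name}: {tool.description}")

    asyncio.run(discover_tools())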

Class Structure

  • Configuration: Manages environment variables and server configurations
  • Server: Handles MCP server initialization, tool discovery, and execution
  • Tool: Represents individual tools with their properties and formatting
  • LLMClient: Manages communication with the LLM provider
  • ChatSession: Orchestrates the interaction between user, LLM, and tools
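
As a rough skeleton, the five classes might relate as follows; the docstrings paraphrase the responsibilities above, and the structure itself is an illustration rather than the project's exact code.

    # Skeleton sketch of the five classes; responsibilities paraphrase
    # the list above, and the structure is illustrative.
    class Configuration:
        """Loads .env variables and servers_config.json."""

    class Tool:
        """Holds a tool's name, description, and argument schema;
        formats them for inclusion in the system prompt."""

    class Server:
        """Starts one MCP server process, discovers its tools,
        and executes tool calls against it."""

    class LLMClient:
        """Sends chat requests to any OpenAI-compatible endpoint."""

    class ChatSession:
        """Builds the system prompt from all discovered tools,
        relays user input to the LLM, routes tool calls to the
        owning Server, and returns the final reply."""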

Logic Flow

flowchart TD
    A[Start] --> B[Load Configuration]
    B --> C[Initialize Servers]
    C --> D[Discover Tools]
    D --> E[Format Tools for LLM]
    E --> F[Wait for User Input]
    
    F --> G{User Input}
    G --> H[Send Input to LLM]
    H --> I{LLM Decision}
    I -->|Tool Call| J[Execute Tool]
    I -->|Direct Response| K[Return Response to User]
    
    J --> L[Return Tool Result]
    L --> M[Send Result to LLM]
    M --> N[LLM Interprets Result]
    N --> O[Present Final Response to User]
    
    K --> O
    O --> F
  1. Initialization:

    • Configuration loads environment variables and server settings
    • Servers are initialized with their respective tools
    • Tools are discovered and formatted for LLM understanding
  2. Runtime Flow:

    • User input is received
    • Input is sent to LLM with context of available tools
    • LLM response is parsed (see the sketch after this list):
      • If it's a tool call → execute tool and return result
      • If it's a direct response → return to user
    • Tool results are sent back to LLM for interpretation
    • Final response is presented to user
  3. Tool Integration:

    • Tools are dynamically discovered from MCP servers
    • Tool descriptions are automatically included in system prompt
    • Tool execution is handled through the standardized MCP protocol
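
The "response is parsed" step in the runtime flow might be implemented as below. The JSON tool-call shape and the has_tool/execute_tool helpers are assumptions for illustration; the project defines its own convention in the system prompt.

    # Sketch: deciding between a tool call and a direct reply.
    # Assumes the system prompt told the LLM to emit JSON like
    # {"tool": "...", "arguments": {...}} when it wants a tool.
    import json

    def handle_llm_response(response_text: str, servers: list) -> str:
        try:
            call = json.loads(response_text)
        except json.JSONDecodeError:
            return response_text  # direct response goes to the user
        if isinstance(call, dict) and "tool" in call:
            for server in servers:
                if server.has_tool(call["tool"]):  # hypothetical helper
                    result = server.execute_tool(  # hypothetical helper
                        call["tool"], call.get("arguments", {})
                    )
                    return str(result)  # sent back to the LLM to interpret
        return response_text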

Contributing

Feedback and contributions are welcome. If you encounter any issues or have suggestions for improvements, please create a new issue on the GitHub repository.

If you'd like to contribute to the development of the project, feel free to submit a pull request with your changes.

License

This project is licensed under the MIT License.

mcp-chatbot FAQ

How do I configure multiple MCP servers in mcp-chatbot?
You can configure multiple MCP servers through a simple JSON configuration file (servers_config.json), allowing the chatbot to interact with various tools and data sources dynamically.

Which Python version is required to run mcp-chatbot?
mcp-chatbot requires Python 3.10 to ensure compatibility with its dependencies and features.

What dependencies must be installed to use mcp-chatbot?
The main dependencies are python-dotenv, requests, mcp, and uvicorn, all installable via pip.

Can mcp-chatbot work with LLM providers other than OpenAI?
Yes, it supports any LLM provider that follows OpenAI API standards, including models like Llama 3.2 90b on Groq and GPT-4o mini on GitHub Marketplace.

How does mcp-chatbot handle tool integration?
Tools are declared in the system prompt, enabling dynamic and flexible integration across different LLMs and MCP servers.

Is mcp-chatbot suitable for production use?
It is primarily a demonstration and prototyping tool, but it can be extended for production with additional development and customization.

How do I start the mcp-chatbot after installation?
After installing the dependencies, run python main.py from the project root; the client loads the configured MCP servers and their tools for interaction.

Does mcp-chatbot support real-time context updates?
Yes; leveraging MCP, it supports real-time context feeding and multi-step reasoning with connected MCP servers.