
mcp-openapi

MCP.Pizza Chef: rmasters

mcp-openapi is an MCP server that converts HTTP methods defined in an OpenAPI specification into callable MCP tools. It enables real-time interaction with APIs by exposing their operations as structured tools accessible to LLMs. This server supports multiple OpenAPI specs by running multiple instances and can be configured via command line or environment variables. It is a proof-of-concept designed to facilitate API integration within the Model Context Protocol ecosystem.

Use this MCP server to

  • Expose REST API endpoints as MCP tools for LLM interaction
  • Run multiple mcp-openapi instances to serve different API specs
  • Integrate external HTTP APIs into AI workflows via OpenAPI specs
  • Enable LLMs to call API operations directly through MCP
  • Configure MCP servers dynamically using environment variables
  • Serve OpenAPI-based MCP tools over SSE or stdio protocols

README

MCP-OpenAPI

A Model Context Protocol server that exposes HTTP methods from an OpenAPI spec as tools.

Warning

This is a proof of concept, yet to be fully tested and documented.

Usage

Each MCP-OpenAPI server exposes operations from a single OpenAPI spec. You could run multiple instances to expose multiple specs.

This package is available on PyPI as `mcp-openapi`. You can start a server with `uvx mcp-openapi --openapi-url https://httpbin.org/spec.json <sse | stdio>`, passing either `sse` or `stdio` as the transport.

An example Claude config (while running `fastapi dev tests/todos.py` on port 8000):

{
  "mcpServers": {
    "todos": {
      "command": "uvx",
      "args": [
        "mcp-openapi",
        "--openapi-url=http://localhost:8000/openapi.json",
        "stdio"
      ]
    }
  }
}

The OpenAPI URL can also be passed as an environment variable, `MCP_OPENAPI_URL`.

When running as SSE, you can configure the server with:

  • --fastmcp-sse-host - the host to serve the MCP server on
  • --fastmcp-sse-port - the port to serve the MCP server on

There are additional global options:

  • --fastmcp-debug - enable debug mode for the MCP server
  • --fastmcp-log-level - the log level for the MCP server

These can also be configured via environment variables using the FASTMCP_ prefix, e.g. FASTMCP_LOG_LEVEL=DEBUG.

How it works

  1. The MCP-OpenAPI server is initialised with an OpenAPI spec URL.
  2. The server fetches and parses the OpenAPI spec, and registers each path-operation as a tool.
  3. A FastMCP server is started with the registered tools.
  4. When a client requests a tool, the MCP server makes an HTTP request to the API and returns the response.
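The four steps above can be sketched in plain Python. This is an illustration only, with hypothetical names (`register_tools`, `fake_http`); the real server fetches the spec from `--openapi-url` and uses FastMCP and openapi-parser rather than bare dicts:

```python
# Sketch of: parse a spec, register each path-operation as a tool,
# and turn tool calls into HTTP requests (via a stub transport here,
# so the example runs offline).
from typing import Any, Callable

def register_tools(spec: dict, call_http: Callable[..., Any]) -> dict[str, Callable]:
    """Step 2: turn each path-operation in the spec into a callable tool."""
    tools: dict[str, Callable] = {}
    for path, operations in spec.get("paths", {}).items():
        for method, op in operations.items():
            name = op.get("operationId") or f"{method}_{path.strip('/').replace('/', '_')}"

            def tool(_method=method, _path=path, **params):
                # Step 4: a tool call becomes an HTTP request to the API.
                return call_http(_method.upper(), _path, params)

            tools[name] = tool
    return tools

# A minimal inline spec (step 1 would fetch this from the spec URL).
spec = {
    "openapi": "3.1.0",
    "paths": {
        "/todos": {
            "get": {"operationId": "list_todos"},
            "post": {"operationId": "create_todo"},
        }
    },
}

def fake_http(method, path, params):
    # Stub standing in for a real HTTP client such as httpx.
    return {"method": method, "path": path, "params": params}

tools = register_tools(spec, fake_http)
print(sorted(tools))                      # ['create_todo', 'list_todos']
print(tools["list_todos"]())              # {'method': 'GET', 'path': '/todos', 'params': {}}
```

Step 3 (starting the FastMCP server with these tools) is omitted; the point is that each tool is just a closure over one method/path pair.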

Supported OpenAPI/Swagger versions

| Swagger 2.0 | OpenAPI 3.0 | OpenAPI 3.1 |
| :---------: | :---------: | :---------: |
|     ❌      |     ✔️      |     ✔️      |

This package supports OpenAPI 3.0 and 3.1 specs, in JSON and YAML formats.
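Telling these versions apart is straightforward, since each document carries its own version field. A small sketch (not the package's actual check, just an illustration of the convention):

```python
# OpenAPI 3.x documents carry an "openapi" field (e.g. "3.0.3", "3.1.0");
# Swagger 2.0 documents carry a "swagger" field ("2.0").
def spec_version(doc: dict) -> str:
    if "openapi" in doc:
        return doc["openapi"]
    if "swagger" in doc:
        return doc["swagger"]
    raise ValueError("not an OpenAPI/Swagger document")

def is_supported(doc: dict) -> bool:
    # Matches the support table above: 3.0 and 3.1 only.
    return spec_version(doc).startswith(("3.0", "3.1"))

print(is_supported({"openapi": "3.1.0"}))   # True
print(is_supported({"swagger": "2.0"}))     # False
```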

We're using openapi-parser under the hood, which seems to be pretty comprehensive, and supports resolving references to external specs.

Future configuration options

Note: these options are still to be implemented.

Restricting endpoints

By default, all endpoints are exposed as tools. You can restrict which endpoints are exposed:

  • By path patterns,
  • By HTTP method,
  • Explicitly listing the individual operations to expose,
  • Selecting routes using OpenAPI tags.
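None of these filters exist yet; as a sketch of what the four restriction modes might look like (all names here are hypothetical, not part of the package):

```python
# Filter operations by path pattern, HTTP method, explicit operation id,
# or OpenAPI tag. An operation must pass every filter that is set.
import fnmatch
from dataclasses import dataclass, field

@dataclass
class Operation:
    operation_id: str
    method: str
    path: str
    tags: list[str] = field(default_factory=list)

def allowed(op: Operation, *, path_patterns=None, methods=None,
            operation_ids=None, tags=None) -> bool:
    if path_patterns and not any(fnmatch.fnmatch(op.path, p) for p in path_patterns):
        return False
    if methods and op.method.upper() not in {m.upper() for m in methods}:
        return False
    if operation_ids and op.operation_id not in operation_ids:
        return False
    if tags and not set(op.tags) & set(tags):
        return False
    return True

ops = [
    Operation("list_todos", "get", "/todos", ["todos"]),
    Operation("delete_todo", "delete", "/todos/{id}", ["todos", "admin"]),
]

# Expose only read-only endpoints under /todos*
safe = [o for o in ops if allowed(o, path_patterns=["/todos*"], methods=["GET"])]
print([o.operation_id for o in safe])   # ['list_todos']
```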

Handling API authentication

MCP-OpenAPI makes requests to the target API on the client's behalf. It can use a global auth token:

  • --auth-token - a bearer token, or basic credentials in base64 format, applied according to the security scheme declared in the OpenAPI spec.

It would also be useful to make requests with varying authentication tokens (e.g. one per client user) without exposing those tokens to the LLM. TBD.
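A sketch of how a global auth token might be applied (hypothetical helper, not the package's API): the spec's security scheme decides whether the token is sent as a Bearer token or as pre-encoded Basic credentials, and since the token lives in server configuration rather than in tool schemas, the LLM never sees it.

```python
# Build the Authorization header for an outgoing API request,
# based on an OpenAPI 3.x "http" security scheme object.
def auth_header(token: str, scheme: dict) -> dict[str, str]:
    kind = scheme.get("scheme", "").lower()
    if scheme.get("type") == "http" and kind == "bearer":
        return {"Authorization": f"Bearer {token}"}
    if scheme.get("type") == "http" and kind == "basic":
        # Token is assumed to already be base64("user:pass"),
        # matching the --auth-token description above.
        return {"Authorization": f"Basic {token}"}
    raise ValueError(f"unsupported security scheme: {scheme}")

print(auth_header("s3cret", {"type": "http", "scheme": "bearer"}))
# {'Authorization': 'Bearer s3cret'}
```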

mcp-openapi FAQ

How do I start the mcp-openapi server?
Use the command `uvx mcp-openapi --openapi-url <URL> (sse | stdio)` to start the server with your OpenAPI spec URL.
Can I expose multiple OpenAPI specs simultaneously?
Yes, by running multiple instances of mcp-openapi, each serving a different OpenAPI specification.
How do I configure the server host and port?
When running as SSE, use flags like `--fastmcp-sse-host` and `--fastmcp-sse-port` to set the host and port.
Is mcp-openapi production-ready?
No, it is currently a proof of concept and is yet to be fully tested and documented.
How can I pass the OpenAPI URL to the server?
You can pass it as a command-line argument or set the environment variable `MCP_OPENAPI_URL`.
What protocols does mcp-openapi support?
It supports SSE (Server-Sent Events) and stdio for communication.
Can mcp-openapi be integrated with different LLM providers?
Yes, it works with any MCP-compatible client, regardless of which LLM provider (OpenAI, Claude, Gemini, etc.) that client uses.
Where can I find the mcp-openapi package?
It is available on PyPI under the name `mcp-openapi`.