
atla-mcp-server


The atla-mcp-server is an MCP server implementation that provides a standardized interface for large language models (LLMs) to interact with the Atla API. It enables state-of-the-art LLM evaluation by offering tools to assess model responses against specific criteria, returning detailed scores and critiques. This server facilitates advanced LLM response evaluation workflows, integrating seamlessly with the Model Context Protocol ecosystem.

Use This MCP Server To

  • Evaluate LLM responses against specific evaluation criteria
  • Perform multi-criteria evaluation of LLM outputs
  • Generate detailed textual critiques of LLM responses
  • Integrate Atla evaluation models into MCP-based workflows
  • Automate quality assessment of LLM-generated content
  • Benchmark different LLMs using standardized evaluation metrics

README

Atla MCP Server

An MCP server implementation providing a standardized interface for LLMs to interact with the Atla API for state-of-the-art LLM-as-a-Judge (LLMJ) evaluation.

Learn more about Atla here. Learn more about the Model Context Protocol here.


Available Tools

  • evaluate_llm_response: Evaluate an LLM's response to a prompt against a given evaluation criterion. This function uses an Atla evaluation model under the hood to return a dictionary containing a score for the model's response and a textual critique with feedback on the model's response.
  • evaluate_llm_response_on_multiple_criteria: Evaluate an LLM's response to a prompt across multiple evaluation criteria. This function uses an Atla evaluation model under the hood to return a list of dictionaries, each containing an evaluation score and critique for a given criterion.
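To make the descriptions above concrete, here is a sketch of the kind of data these tools return. The field names (`score`, `critique`, `criteria`) are assumptions inferred from the tool descriptions, not the documented Atla API schema:

```python
# Hypothetical result shapes, inferred from the tool descriptions above.
# Exact field names are assumptions, not the official Atla schema.

# evaluate_llm_response: one score plus a textual critique
single_result = {
    "score": 4,
    "critique": "The response is accurate but omits edge cases.",
}

# evaluate_llm_response_on_multiple_criteria: one entry per criterion
multi_result = [
    {"criteria": "accuracy", "score": 5, "critique": "Factually correct."},
    {"criteria": "completeness", "score": 3, "critique": "Misses edge cases."},
]

# A client might aggregate a multi-criteria evaluation into one number:
average = sum(r["score"] for r in multi_result) / len(multi_result)
```

A downstream workflow could, for example, gate content on `average` crossing a quality threshold while surfacing each critique to a reviewer.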

Usage

To use the MCP server, you will need an Atla API key. You can find your existing API key here or create a new one here.

Installation

We recommend using uv to manage the Python environment. See here for installation instructions.

Manually running the server

Once you have uv installed and have your Atla API key, you can manually run the MCP server using uvx (which is provided by uv):

ATLA_API_KEY=<your-api-key> uvx atla-mcp-server

Connecting to the server

Having issues or need help connecting to another client? Feel free to open an issue or contact us!

OpenAI Agents SDK

For more details on using the OpenAI Agents SDK with MCP servers, refer to the official documentation.

  1. Install the OpenAI Agents SDK:
pip install openai-agents
  2. Use the OpenAI Agents SDK to connect to the server:
import asyncio
import os

from agents import Agent
from agents.mcp import MCPServerStdio


async def main():
    # Launch the Atla MCP server over stdio, passing the API key via env
    async with MCPServerStdio(
        params={
            "command": "uvx",
            "args": ["atla-mcp-server"],
            "env": {"ATLA_API_KEY": os.environ.get("ATLA_API_KEY")},
        }
    ) as atla_mcp_server:
        ...  # e.g. create an Agent that uses atla_mcp_server
Claude Desktop

For more details on configuring MCP servers in Claude Desktop, refer to the official MCP quickstart guide.

  1. Add the following to your claude_desktop_config.json file:
{
  "mcpServers": {
    "atla-mcp-server": {
      "command": "uvx",
      "args": ["atla-mcp-server"],
      "env": {
        "ATLA_API_KEY": "<your-atla-api-key>"
      }
    }
  }
}
  2. Restart Claude Desktop to apply the changes.

You should now see options from atla-mcp-server in the list of available MCP tools.

Cursor

For more details on configuring MCP servers in Cursor, refer to the official documentation.

  1. Add the following to your .cursor/mcp.json file:
{
  "mcpServers": {
    "atla-mcp-server": {
      "command": "uvx",
      "args": ["atla-mcp-server"],
      "env": {
        "ATLA_API_KEY": "<your-atla-api-key>"
      }
    }
  }
}

You should now see atla-mcp-server in the list of available MCP servers.

Contributing

Contributions are welcome! Please see the CONTRIBUTING.md file for details.

License

This project is licensed under the MIT License. See the LICENSE file for details.

atla-mcp-server FAQ

How does the atla-mcp-server evaluate LLM responses?
It uses Atla's evaluation models to score and critique LLM responses based on defined criteria.
Can the server evaluate responses on multiple criteria simultaneously?
Yes, it supports multi-criteria evaluation to provide comprehensive feedback.
Is the atla-mcp-server compatible with different LLM providers?
Yes, it works with any LLM integrated via MCP, including OpenAI, Claude, and Gemini.
What kind of feedback does the server provide on LLM responses?
It returns both a numerical score and a textual critique highlighting strengths and weaknesses.
How do I integrate the atla-mcp-server into my existing MCP workflow?
You connect it as an MCP server exposing Atla API evaluation tools accessible by your MCP client.
Does the server support real-time evaluation during LLM interactions?
Yes, it is designed for real-time, structured evaluation within MCP-enabled environments.
Where can I find more documentation about the Atla MCP server?
Detailed docs are available at https://docs.atla-ai.com and MCP info at https://modelcontextprotocol.io.