mcp-mem0

MCP.Pizza Chef: coleam00

MCP-Mem0 is a server implementation of the Model Context Protocol that integrates Mem0 to provide AI agents with persistent long-term memory. It supports storing, retrieving, and semantically searching memories, serving as both a practical memory backend and a template for developers to build their own MCP servers using Python. This server follows best practices for MCP server design, ensuring compatibility with any MCP client and enabling advanced memory management for AI workflows.

Use This MCP Server To

  • Store and retrieve long-term memories for AI agents
  • Enable semantic search over stored agent memories
  • Serve as a template for building custom MCP servers
  • Integrate persistent memory into AI workflows
  • Demonstrate best practices for MCP server implementation
  • Provide a Python-based MCP server example
  • Support multi-step reasoning with memory recall
  • Facilitate memory management in AI agent development

README

MCP-Mem0: Long-Term Memory for AI Agents

Mem0 and MCP Integration

A template implementation of the Model Context Protocol (MCP) server integrated with Mem0 for providing AI agents with persistent memory capabilities.

Use this as a reference point to build your own MCP servers, or give it to an AI coding assistant as an example to follow for structure and code correctness!

Overview

This project demonstrates how to build an MCP server that enables AI agents to store, retrieve, and search memories using semantic search. It uses Mem0 as the memory backend and serves as a practical template for creating your own MCP servers.

The implementation follows the best practices laid out by Anthropic for building MCP servers, allowing seamless integration with any MCP-compatible client.

Features

The server provides three essential memory management tools (sketched in code after this list):

  1. save_memory: Store any information in long-term memory with semantic indexing
  2. get_all_memories: Retrieve all stored memories for comprehensive context
  3. search_memories: Find relevant memories using semantic search
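Under the hood these tools are thin wrappers around the Mem0 client. As a rough sketch (not this repository's actual code), they might be defined as shown below; the fixed USER_ID, the default Memory() construction, and the string return formats are assumptions made for illustration:

from mcp.server.fastmcp import FastMCP
from mem0 import Memory

mcp = FastMCP("mem0-mcp")   # MCP server instance
memory = Memory()           # Mem0 client; in practice built from your env configuration
USER_ID = "user"            # hypothetical fixed user id for this sketch

@mcp.tool()
async def save_memory(text: str) -> str:
    """Store any information in long-term memory."""
    memory.add(text, user_id=USER_ID)
    return f"Saved memory: {text[:50]}"

@mcp.tool()
async def get_all_memories() -> str:
    """Return every stored memory for comprehensive context."""
    return str(memory.get_all(user_id=USER_ID))

@mcp.tool()
async def search_memories(query: str, limit: int = 3) -> str:
    """Find the most relevant memories using semantic search."""
    return str(memory.search(query, user_id=USER_ID, limit=limit))

In the actual server, the Mem0 client would be constructed from the environment variables described in the Configuration section rather than the defaults used here.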

Prerequisites

  • Python 3.12+
  • Supabase or any PostgreSQL database (for vector storage of memories)
  • API keys for your chosen LLM provider (OpenAI, OpenRouter, or Ollama)
  • Docker if running the MCP server as a container (recommended)

Installation

Using uv

  1. Install uv if you don't have it:

    pip install uv
  2. Clone this repository:

    git clone https://github.com/coleam00/mcp-mem0.git
    cd mcp-mem0
  3. Install dependencies:

    uv pip install -e .
  4. Create a .env file based on .env.example:

    cp .env.example .env
  5. Configure your environment variables in the .env file (see Configuration section)

Using Docker (Recommended)

  1. Build the Docker image:

    docker build -t mcp/mem0 --build-arg PORT=8050 .
  2. Create a .env file based on .env.example and configure your environment variables

Configuration

The following environment variables can be configured in your .env file:

| Variable | Description | Example |
|----------|-------------|---------|
| TRANSPORT | Transport protocol (sse or stdio) | sse |
| HOST | Host to bind to when using SSE transport | 0.0.0.0 |
| PORT | Port to listen on when using SSE transport | 8050 |
| LLM_PROVIDER | LLM provider (openai, openrouter, or ollama) | openai |
| LLM_BASE_URL | Base URL for the LLM API | https://api.openai.com/v1 |
| LLM_API_KEY | API key for the LLM provider | sk-... |
| LLM_CHOICE | LLM model to use | gpt-4o-mini |
| EMBEDDING_MODEL_CHOICE | Embedding model to use | text-embedding-3-small |
| DATABASE_URL | PostgreSQL connection string | postgresql://user:pass@host:port/db |
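Put together, a .env file using the example values from the table above would look like the following (placeholder values, not working credentials):

TRANSPORT=sse
HOST=0.0.0.0
PORT=8050
LLM_PROVIDER=openai
LLM_BASE_URL=https://api.openai.com/v1
LLM_API_KEY=sk-...
LLM_CHOICE=gpt-4o-mini
EMBEDDING_MODEL_CHOICE=text-embedding-3-small
DATABASE_URL=postgresql://user:pass@host:port/db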

Running the Server

Using uv

SSE Transport

# Set TRANSPORT=sse in .env then:
uv run src/main.py

The MCP server runs as an API endpoint that you can connect to using the configuration shown below.
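As a quick smoke test, you can also connect to that endpoint with the official MCP Python SDK. The sketch below assumes the default port 8050 and that save_memory accepts a single text parameter; adjust both to match your setup:

import asyncio
from mcp import ClientSession
from mcp.client.sse import sse_client

async def main():
    # Connect to the running SSE endpoint
    async with sse_client("http://localhost:8050/sse") as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([t.name for t in tools.tools])
            # Assumes the tool's parameter is named "text"
            result = await session.call_tool("save_memory", {"text": "The user prefers dark mode"})
            print(result.content)

asyncio.run(main())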

Stdio Transport

With stdio, the MCP client itself spins up the MCP server, so there is nothing to run at this point.

Using Docker

SSE Transport

docker run --env-file .env -p 8050:8050 mcp/mem0

The MCP server runs as an API endpoint inside the container that you can connect to using the configuration shown below.

Stdio Transport

With stdio, the MCP client itself spins up the MCP server container, so there is nothing to run at this point.

Integration with MCP Clients

SSE Configuration

Once you have the server running with SSE transport, you can connect to it using this configuration:

{
  "mcpServers": {
    "mem0": {
      "transport": "sse",
      "url": "http://localhost:8050/sse"
    }
  }
}

Note for Windsurf users: Use serverUrl instead of url in your configuration:

{
  "mcpServers": {
    "mem0": {
      "transport": "sse",
      "serverUrl": "http://localhost:8050/sse"
    }
  }
}

Note for n8n users: Use host.docker.internal instead of localhost since n8n has to reach outside of its own container to the host machine:

So the full URL in the MCP node would be: http://host.docker.internal:8050/sse

Make sure to update the port if you are using a value other than the default 8050.

Python with Stdio Configuration

Add this server to your MCP configuration for Claude Desktop, Windsurf, or any other MCP client:

{
  "mcpServers": {
    "mem0": {
      "command": "your/path/to/mcp-mem0/.venv/Scripts/python.exe",
      "args": ["your/path/to/mcp-mem0/src/main.py"],
      "env": {
        "TRANSPORT": "stdio",
        "LLM_PROVIDER": "openai",
        "LLM_BASE_URL": "https://api.openai.com/v1",
        "LLM_API_KEY": "YOUR-API-KEY",
        "LLM_CHOICE": "gpt-4o-mini",
        "EMBEDDING_MODEL_CHOICE": "text-embedding-3-small",
        "DATABASE_URL": "YOUR-DATABASE-URL"
      }
    }
  }
}

Docker with Stdio Configuration

{
  "mcpServers": {
    "mem0": {
      "command": "docker",
      "args": ["run", "--rm", "-i", 
               "-e", "TRANSPORT", 
               "-e", "LLM_PROVIDER", 
               "-e", "LLM_BASE_URL", 
               "-e", "LLM_API_KEY", 
               "-e", "LLM_CHOICE", 
               "-e", "EMBEDDING_MODEL_CHOICE", 
               "-e", "DATABASE_URL", 
               "mcp/mem0"],
      "env": {
        "TRANSPORT": "stdio",
        "LLM_PROVIDER": "openai",
        "LLM_BASE_URL": "https://api.openai.com/v1",
        "LLM_API_KEY": "YOUR-API-KEY",
        "LLM_CHOICE": "gpt-4o-mini",
        "EMBEDDING_MODEL_CHOICE": "text-embedding-3-small",
        "DATABASE_URL": "YOUR-DATABASE-URL"
      }
    }
  }
}

Building Your Own Server

This template provides a foundation for building more complex MCP servers. To build your own (a minimal skeleton is sketched after the list below):

  1. Add your own tools by creating methods with the @mcp.tool() decorator
  2. Create your own lifespan function to add your own dependencies (clients, database connections, etc.)
  3. Modify the utils.py file for any helper functions you need for your MCP server
  4. Feel free to add prompts and resources as well with @mcp.resource() and @mcp.prompt()
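For reference, a stripped-down skeleton tying those pieces together might look like the following; AppContext, app_lifespan, and my_tool are placeholder names, not code from this repository:

from collections.abc import AsyncIterator
from contextlib import asynccontextmanager
from dataclasses import dataclass
from mcp.server.fastmcp import Context, FastMCP

@dataclass
class AppContext:
    db: object  # replace with your real client or connection type

@asynccontextmanager
async def app_lifespan(server: FastMCP) -> AsyncIterator[AppContext]:
    db = object()          # create your client or database connection here
    try:
        yield AppContext(db=db)
    finally:
        pass               # close connections / clean up here

mcp = FastMCP("my-server", lifespan=app_lifespan)

@mcp.tool()
async def my_tool(ctx: Context, query: str) -> str:
    """Describe for the LLM what this tool does."""
    app = ctx.request_context.lifespan_context   # access lifespan dependencies
    return f"processed {query!r} with {app.db}"

if __name__ == "__main__":
    mcp.run(transport="stdio")

Running the file directly starts the server over stdio; switching the transport argument to "sse" serves it as an HTTP endpoint instead.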

mcp-mem0 FAQ

How do I deploy the MCP-Mem0 server?
You can deploy MCP-Mem0 by following the GitHub repository instructions, which include setting up Python dependencies and configuring Mem0 integration.
Can MCP-Mem0 be used with any MCP-compatible client?
Yes, MCP-Mem0 follows MCP standards and works seamlessly with any MCP-compatible client.
What memory capabilities does MCP-Mem0 provide?
MCP-Mem0 offers persistent long-term memory storage, retrieval, and semantic search for AI agents.
Is MCP-Mem0 suitable as a development template?
Yes, it is designed as a practical example and template for building your own MCP servers in Python.
Does MCP-Mem0 support semantic search?
Yes, it uses Mem0's semantic search capabilities to enable efficient memory querying.
What programming language is MCP-Mem0 implemented in?
MCP-Mem0 is implemented in Python, making it accessible for many developers.
How does MCP-Mem0 handle memory persistence?
It integrates with Mem0 to store memories persistently and retrieve them as needed.
Can MCP-Mem0 improve multi-step reasoning in AI agents?
Yes, by providing persistent memory and semantic search, it supports complex reasoning workflows.