
mcp-local-rag

MCP.Pizza Chef: nkapila6

mcp-local-rag is a lightweight, local RAG-style MCP server that performs web search using DuckDuckGo, computes embeddings with Google's MediaPipe Text Embedder, ranks results by similarity, and extracts context from URLs to provide relevant markdown content to language models without relying on external APIs.

Use this MCP server to

- Perform local web search for LLM context without API dependencies
- Rank and extract relevant web content for query-based context
- Integrate with LLMs to enrich responses with real-time web data
- Enable privacy-focused RAG workflows by running searches locally
- Fetch and embed search results for improved LLM answer accuracy

README

mcp-local-rag

"primitive" RAG-like web search model context protocol (MCP) server that runs locally. ✨ no APIs ✨

flowchart TD
    A[User] -->|1. Submits LLM Query| B[Language Model]
    B -->|2. Sends Query| C[mcp-local-rag Tool]
    
    subgraph mcp-local-rag Processing
    C -->|Search DuckDuckGo| D[Fetch 10 search results]
    D -->|Fetch Embeddings| E[Embeddings from Google's MediaPipe Text Embedder]
    E -->|Compute Similarity| F[Rank Entries Against Query]
    F -->|Select top k results| G[Context Extraction from URL]
    end
    
    G -->|Returns Markdown from HTML content| B
    B -->|3. Generates response with context| H[Final LLM Output]
    H -->|4. Presents result to user| A

    classDef default fill:#f9f,stroke:#333,stroke-width:2px;
    classDef process fill:#bbf,stroke:#333,stroke-width:2px;
    classDef input fill:#9f9,stroke:#333,stroke-width:2px;
    classDef output fill:#ff9,stroke:#333,stroke-width:2px;

    class A input;
    class B,C process;
    class G output;
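The processing stage in the diagram (embed, rank, select top-k) can be sketched in a few lines. This is an illustrative sketch, not the server's actual code: the embeddings below are toy vectors standing in for MediaPipe Text Embedder output, and cosine similarity is used as the scoring function.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def rank_results(query_vec, results, k=3):
    """Rank (title, embedding) pairs against the query embedding
    and keep the top-k, as in the middle of the flowchart."""
    scored = sorted(results, key=lambda r: cosine(query_vec, r[1]), reverse=True)
    return [title for title, _ in scored[:k]]

# Toy embeddings; a real run would embed the 10 DuckDuckGo results.
query = [1.0, 0.0, 0.0]
results = [
    ("relevant page", [0.9, 0.1, 0.0]),
    ("off-topic page", [0.0, 1.0, 0.0]),
    ("somewhat related", [0.7, 0.7, 0.0]),
]
print(rank_results(query, results, k=2))
# → ['relevant page', 'somewhat related']
```

The selected URLs would then be fetched and their HTML converted to markdown before being returned to the model.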

Installation

Using Docker (recommended)

Ensure you have Docker installed.
Add this to your MCP server configuration:

{
  "mcpServers": {
    "mcp-local-rag": {
      "command": "docker",
      "args": [
        "run",
        "--rm",
        "-i",
        "--init",
        "-e",
        "DOCKER_CONTAINER=true",
        "ghcr.io/nkapila6/mcp-local-rag:latest"
      ]
    }
  }
}

Using Python + uv

For this step, make sure you have uv installed: https://docs.astral.sh/uv/.

There are 2 ways to approach this:

  1. Option 1: Directly running via uvx
  2. Option 2: Clone and Run Locally

Run Directly via uvx

This is the easiest and quickest method. Add the following to your MCP config:

{
  "mcpServers": {
    "mcp-local-rag": {
      "command": "uvx",
      "args": [
        "--python=3.10",
        "--from",
        "git+https://github.com/nkapila6/mcp-local-rag",
        "mcp-local-rag"
      ]
    }
  }
}

Clone and Run Locally

  1. Clone this GitHub repository:
git clone https://github.com/nkapila6/mcp-local-rag
  2. Add the following to your MCP server configuration:
{
  "mcpServers": {
    "mcp-local-rag": {
      "command": "uv",
      "args": [
        "--directory",
        "<path where this folder is located>/mcp-local-rag/",
        "run",
        "src/mcp_local_rag/main.py"
      ]
    }
  }
}

You can find MCP config file paths here: https://modelcontextprotocol.io/quickstart/user


Example use

Prompt

When an LLM (like Claude) is asked a question requiring recent web information, it will trigger mcp-local-rag.

When asked to fetch, look up, or search the web, the model prompts you to allow use of the MCP server for the chat.

In the example, I asked it about Google's latest Gemma models, released the day before. This is new information that Claude is not aware of.

Result

mcp-local-rag performs a live web search, extracts context, and sends it back to the model, giving it fresh knowledge:


🛠️ Contributing

Have ideas or want to improve this project? Issues and pull requests are welcome!

📝 License

This project is licensed under the MIT License.

mcp-local-rag FAQ

How does mcp-local-rag perform web searches without APIs?
It uses DuckDuckGo's public search interface to fetch results directly without requiring API keys.
What embedding model does mcp-local-rag use?
It uses Google's MediaPipe Text Embedder to generate embeddings for search result ranking.
Can mcp-local-rag run fully offline?
It requires internet access for web searches but runs all processing locally without external API calls.
How does mcp-local-rag rank search results?
It computes similarity scores between the query embedding and search result embeddings to select the most relevant entries.
What output format does mcp-local-rag provide to the LLM?
It returns extracted context as markdown content derived from the HTML of top-ranked URLs.
Is mcp-local-rag compatible with multiple LLM providers?
Yes, it can be integrated with models like OpenAI GPT, Anthropic Claude, and Google Gemini.
How do I deploy mcp-local-rag locally?
Clone the repository, install dependencies, and run the server; no API keys are needed.
Does mcp-local-rag support custom search engines?
Currently, it is designed to work with DuckDuckGo but can be extended with custom adapters.
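The last step the FAQ describes, turning fetched HTML into plain context for the LLM, can be illustrated with a minimal stdlib-only sketch. This is hypothetical and much cruder than the server's real markdown conversion; it merely collects visible text while skipping script and style blocks.

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text, skipping script/style content,
    as a crude stand-in for HTML-to-markdown extraction."""
    SKIP = {"script", "style"}

    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth and data.strip():
            self.parts.append(data.strip())

def extract_text(html):
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.parts)

page = "<html><head><style>p{}</style></head><body><h1>Gemma</h1><p>New models.</p></body></html>"
print(extract_text(page))  # → Gemma New models.
```

A real deployment would keep headings, links, and lists intact so the model receives structured markdown rather than flat text.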