
mcp-DEEPwebresearch


mcp-DEEPwebresearch is an enhanced MCP server for advanced deep web research. It enables structured, real-time extraction of webpage content through tools like visit_page, and is optimized to work within MCP timeout constraints. The server improves page loading efficiency, reduces default crawl depth and branching, and adds robust error and timeout handling. Built with Node.js and TypeScript, it is a fork of the original mcp-webresearch project, tailored for efficient and reliable deep web data gathering in AI workflows.

Use This MCP Server To

  • Extract deep web page content for AI analysis
  • Perform structured web crawling with timeout control
  • Integrate real-time web data into LLM workflows
  • Optimize web research tasks within MCP environments
  • Handle web page loading errors gracefully
  • Reduce crawl depth and branching for focused research

README

MCP Deep Web Research Server (v0.3.0)


A Model Context Protocol (MCP) server for advanced web research.


Latest Changes

  • Added visit_page tool for direct webpage content extraction
  • Optimized performance to work within MCP timeout limits
    • Reduced default maxDepth and maxBranching parameters
    • Improved page loading efficiency
    • Added timeout checks throughout the process
    • Enhanced error handling for timeouts
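The "timeout checks throughout the process" described above can be sketched as a small deadline-budget helper. This is a hypothetical illustration, not the server's actual code; the `Deadline` class and its method names are assumptions.

```typescript
// Hypothetical sketch of a deadline budget used to keep crawl steps
// inside an overall MCP timeout; not the server's actual implementation.
class Deadline {
  private readonly endsAt: number;

  constructor(budgetMs: number) {
    this.endsAt = Date.now() + budgetMs;
  }

  // Milliseconds left before the budget is exhausted (never negative).
  remaining(): number {
    return Math.max(0, this.endsAt - Date.now());
  }

  // Throw early instead of letting a long operation overrun the budget.
  check(step: string): void {
    if (this.remaining() === 0) {
      throw new Error(`Timeout exceeded before step: ${step}`);
    }
  }
}
```

Each crawl stage would call `check()` before starting, so the server fails fast with a clear error instead of silently overrunning the MCP timeout.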

This project is a fork of mcp-webresearch by mzxrai, enhanced with additional features for deep web research capabilities. We're grateful to the original creators for their foundational work.

Bring real-time info into Claude with intelligent search queuing, enhanced content extraction, and deep research capabilities.

Features

  • Intelligent Search Queue System

    • Batch search operations with rate limiting
    • Queue management with progress tracking
    • Error recovery and automatic retries
    • Search result deduplication
  • Enhanced Content Extraction

    • TF-IDF based relevance scoring
    • Keyword proximity analysis
    • Content section weighting
    • Readability scoring
    • Improved HTML structure parsing
    • Structured data extraction
    • Better content cleaning and formatting
  • Core Features

    • Google search integration
    • Webpage content extraction
    • Research session tracking
    • Markdown conversion with improved formatting
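The TF-IDF relevance scoring mentioned above can be illustrated with a simplified sketch. This is not the server's actual scorer (which also weights sections and keyword proximity); the function names here are hypothetical.

```typescript
// Simplified TF-IDF relevance scoring sketch; illustrative only.
function tokenize(text: string): string[] {
  return text.toLowerCase().match(/[a-z0-9]+/g) ?? [];
}

function tfIdfScore(query: string, doc: string, corpus: string[]): number {
  const docTokens = tokenize(doc);
  let score = 0;
  for (const term of tokenize(query)) {
    // Term frequency: how often the query term appears in this document.
    const tf =
      docTokens.filter((t) => t === term).length / (docTokens.length || 1);
    // Smoothed inverse document frequency across the corpus: rarer
    // terms contribute more to the score.
    const df = corpus.filter((d) => tokenize(d).includes(term)).length;
    const idf = Math.log((corpus.length + 1) / (df + 1)) + 1;
    score += tf * idf;
  }
  return score;
}
```

Results scoring below a threshold (compare `minRelevanceScore` in the deep_research arguments) would be filtered out before content extraction.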

Prerequisites

  • Node.js >= 18 (includes npm and npx)
  • Claude Desktop app

Installation

Global Installation (Recommended)

# Install globally using npm
npm install -g mcp-deepwebresearch

# Or using yarn
yarn global add mcp-deepwebresearch

# Or using pnpm
pnpm add -g mcp-deepwebresearch

Local Project Installation

# Using npm
npm install mcp-deepwebresearch

# Using yarn
yarn add mcp-deepwebresearch

# Using pnpm
pnpm add mcp-deepwebresearch

Claude Desktop Integration

After installing the package, add this entry to your claude_desktop_config.json:

Windows

{
  "mcpServers": {
    "deepwebresearch": {
      "command": "mcp-deepwebresearch",
      "args": []
    }
  }
}

Location: %APPDATA%\Claude\claude_desktop_config.json

macOS

{
  "mcpServers": {
    "deepwebresearch": {
      "command": "mcp-deepwebresearch",
      "args": []
    }
  }
}

Location: ~/Library/Application Support/Claude/claude_desktop_config.json

This config allows Claude Desktop to automatically start the web research MCP server when needed.

First-time Setup

After installation, run this command to install required browser dependencies:

npx playwright install chromium

Usage

Simply start a chat with Claude and send a prompt that would benefit from web research. If you'd like a prebuilt prompt customized for deeper web research, you can use the agentic-research prompt provided by this package. Access it in Claude Desktop by clicking the paperclip icon in the chat input, then selecting Choose an integration → deepwebresearch → agentic-research.

Tools

  1. deep_research

    • Performs comprehensive research with content analysis
    • Arguments:
      {
        topic: string;
        maxDepth?: number;      // default: 2
        maxBranching?: number;  // default: 3
        timeout?: number;       // default: 55000 (55 seconds)
        minRelevanceScore?: number;  // default: 0.7
      }
    • Returns:
      {
        findings: {
          mainTopics: Array<{name: string, importance: number}>;
          keyInsights: Array<{text: string, confidence: number}>;
          sources: Array<{url: string, credibilityScore: number}>;
        };
        progress: {
          completedSteps: number;
          totalSteps: number;
          processedUrls: number;
        };
        timing: {
          started: string;
          completed?: string;
          duration?: number;
          operations?: {
            parallelSearch?: number;
            deduplication?: number;
            topResultsProcessing?: number;
            remainingResultsProcessing?: number;
            total?: number;
          };
        };
      }
  2. parallel_search

    • Performs multiple Google searches in parallel with intelligent queuing
    • Arguments: { queries: string[], maxParallel?: number }
    • Note: maxParallel is limited to 5 to ensure reliable performance
  3. visit_page

    • Visit a webpage and extract its content
    • Arguments: { url: string }
    • Returns:
      {
        url: string;
        title: string;
        content: string;  // Markdown formatted content
      }
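The `maxParallel` cap on parallel_search can be approximated with a simple concurrency limiter. The sketch below is an assumption about how such a cap might work, not the server's actual queue implementation.

```typescript
// Hypothetical concurrency limiter mirroring the maxParallel cap on
// parallel_search; not the server's actual queue implementation.
async function runLimited<T>(
  tasks: Array<() => Promise<T>>,
  maxParallel: number
): Promise<T[]> {
  const results: T[] = new Array(tasks.length);
  let next = 0;

  // Each worker pulls the next pending task as soon as it finishes one.
  async function worker(): Promise<void> {
    while (next < tasks.length) {
      const i = next++;
      results[i] = await tasks[i]();
    }
  }

  // Start at most maxParallel workers over the task list.
  const workers = Array.from(
    { length: Math.min(maxParallel, tasks.length) },
    () => worker()
  );
  await Promise.all(workers);
  return results;
}
```

Capping workers rather than batching keeps all slots busy: a slow search in one slot does not block the others from picking up new queries.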

Prompts

agentic-research

A guided research prompt that helps Claude conduct thorough web research. The prompt instructs Claude to:

  • Start with broad searches to understand the topic landscape
  • Prioritize high-quality, authoritative sources
  • Iteratively refine the research direction based on findings
  • Keep you informed and let you guide the research interactively
  • Always cite sources with URLs

Configuration Options

The server can be configured through environment variables:

  • MAX_PARALLEL_SEARCHES: Maximum number of concurrent searches (default: 5)
  • SEARCH_DELAY_MS: Delay between searches in milliseconds (default: 200)
  • MAX_RETRIES: Number of retry attempts for failed requests (default: 3)
  • TIMEOUT_MS: Request timeout in milliseconds (default: 55000)
  • LOG_LEVEL: Logging level (default: 'info')
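A server reading these variables might parse them with defaults like this. The variable names and defaults come from the list above; the `envInt` helper itself is a hypothetical sketch.

```typescript
// Sketch of reading the documented environment variables with their
// documented defaults; envInt is an illustrative helper.
function envInt(name: string, fallback: number): number {
  const raw = process.env[name];
  const parsed = raw === undefined ? NaN : Number.parseInt(raw, 10);
  return Number.isNaN(parsed) ? fallback : parsed;
}

const config = {
  maxParallelSearches: envInt("MAX_PARALLEL_SEARCHES", 5),
  searchDelayMs: envInt("SEARCH_DELAY_MS", 200),
  maxRetries: envInt("MAX_RETRIES", 3),
  timeoutMs: envInt("TIMEOUT_MS", 55000),
  logLevel: process.env.LOG_LEVEL ?? "info",
};
```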

Error Handling

Common Issues

  1. Rate Limiting

    • Symptom: "Too many requests" error
    • Solution: Increase SEARCH_DELAY_MS or decrease MAX_PARALLEL_SEARCHES
  2. Network Timeouts

    • Symptom: "Request timed out" error
    • Solution: Lower TIMEOUT_MS, or reduce maxDepth and maxBranching, so operations finish within the 60-second MCP timeout
  3. Browser Issues

    • Symptom: "Browser failed to launch" error
    • Solution: Ensure Playwright is properly installed (npx playwright install chromium)
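The automatic retries mentioned under Features (and the MAX_RETRIES and SEARCH_DELAY_MS settings) suggest a retry wrapper along these lines. This is a hypothetical sketch, not the server's actual retry logic.

```typescript
// Hypothetical retry helper in the spirit of MAX_RETRIES and
// SEARCH_DELAY_MS; not the server's actual implementation.
async function withRetries<T>(
  fn: () => Promise<T>,
  maxRetries = 3,
  delayMs = 200
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Wait before the next attempt to avoid hammering the target site.
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw lastError;
}
```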

Debugging

This is beta software. If you run into issues:

  1. Check Claude Desktop's MCP logs:

    # On macOS
    tail -n 20 -f ~/Library/Logs/Claude/mcp*.log
    
    # On Windows
    Get-Content -Path "$env:APPDATA\Claude\logs\mcp*.log" -Tail 20 -Wait
  2. Enable debug logging:

    export LOG_LEVEL=debug

Development

Setup

# Install dependencies
pnpm install

# Build the project
pnpm build

# Watch for changes
pnpm watch

# Run in development mode
pnpm dev

Testing

# Run all tests
pnpm test

# Run tests in watch mode
pnpm test:watch

# Run tests with coverage
pnpm test:coverage

Code Quality

# Run linter
pnpm lint

# Fix linting issues
pnpm lint:fix

# Type check
pnpm type-check

Contributing

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add some amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

Coding Standards

  • Follow TypeScript best practices
  • Maintain test coverage above 80%
  • Document new features and APIs
  • Update CHANGELOG.md for significant changes
  • Follow semantic versioning

Performance Considerations

  • Use batch operations where possible
  • Implement proper error handling and retries
  • Consider memory usage with large datasets
  • Cache results when appropriate
  • Use streaming for large content
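"Cache results when appropriate" can be as simple as a small TTL cache keyed by URL. The class below is an illustrative sketch under that assumption, not part of this package.

```typescript
// Minimal TTL cache sketch for caching fetched results; illustrative only.
class TtlCache<V> {
  private store = new Map<string, { value: V; expiresAt: number }>();

  constructor(private ttlMs: number) {}

  get(key: string): V | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() >= entry.expiresAt) {
      this.store.delete(key); // evict stale entries lazily on read
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: V): void {
    this.store.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }
}
```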

Requirements

  • Node.js >= 18
  • Playwright (automatically installed as a dependency)

Verified Platforms

  • macOS
  • Windows
  • Linux

License

MIT

Credits

This project builds upon the excellent work of mcp-webresearch by mzxrai. The original codebase provided the foundation for our enhanced features and capabilities.

Author

qpd-v

mcp-DEEPwebresearch FAQ

How does mcp-DEEPwebresearch handle timeouts?
It includes timeout checks throughout the crawling process to ensure operations complete within MCP limits, preventing stalls.
What is the visit_page tool?
A feature that allows direct extraction of webpage content for immediate use by the model.
Is mcp-DEEPwebresearch compatible with different MCP hosts?
Yes, it is designed to work with any MCP-compatible host application.
What technologies is mcp-DEEPwebresearch built on?
It is built with Node.js (>= 18) and TypeScript.
How does it improve over the original mcp-webresearch?
It optimizes performance, reduces default crawl depth and branching, and enhances error and timeout handling.
Can it handle errors during web page loading?
Yes, it has enhanced error handling to manage loading failures gracefully.
Is the server open source?
Yes, it is licensed under MIT and available on GitHub.
Can it be used with multiple LLM providers?
Yes. Because it speaks the Model Context Protocol, it can be used from any MCP host, including ones built on OpenAI, Anthropic Claude, or Google Gemini models.