
mentor-mcp-server

MCP.Pizza Chef: cyanheads

mentor-mcp-server is a Model Context Protocol server that provides LLM agents with AI-powered mentorship through Deepseek-Reasoning (R1). It offers expert second opinions including code review, design critique, writing feedback, and idea brainstorming by integrating the Deepseek API. This server enhances LLM agent workflows by delivering actionable insights and expert-level feedback to improve decision-making and output quality.

Use this MCP server to:

  • Provide code review feedback to LLM agents
  • Offer design critique for software architecture
  • Generate writing feedback for content improvement
  • Brainstorm ideas collaboratively with LLM agents
  • Deliver expert second opinions to AI workflows
  • Integrate the Deepseek API for advanced reasoning
  • Enhance LLM agent decision-making with mentorship
  • Support multi-domain feedback for AI-generated outputs

README

mentor-mcp-server


A Model Context Protocol server providing LLM Agents a second opinion via AI-powered Deepseek-Reasoning (R1) mentorship capabilities, including code review, design critique, writing feedback, and idea brainstorming through the Deepseek API. Set your LLM Agent up for success with expert second opinions and actionable insights.

Model Context Protocol

The Model Context Protocol (MCP) enables communication between:

  • Clients: Claude Desktop, IDEs, and other MCP-compatible clients
  • Servers: Tools and resources for task management and automation
  • LLM Agents: AI models that leverage the server's capabilities
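
Concretely, MCP messages are JSON-RPC 2.0. When a client invokes one of this server's tools, it sends a `tools/call` request shaped roughly like this (the `id` and argument values are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "code_review",
    "arguments": {
      "file_path": "src/app.ts",
      "language": "typescript"
    }
  }
}
```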

Features

Code Analysis

  • Comprehensive code reviews
  • Bug detection and prevention
  • Style and best practices evaluation
  • Performance optimization suggestions
  • Security vulnerability assessment

Design & Architecture

  • UI/UX design critiques
  • Architectural diagram analysis
  • Design pattern recommendations
  • Accessibility evaluation
  • Consistency checks

Content Enhancement

  • Writing feedback and improvement
  • Grammar and style analysis
  • Documentation review
  • Content clarity assessment
  • Structural recommendations

Strategic Planning

  • Feature enhancement brainstorming
  • Second opinions on approaches
  • Innovation suggestions
  • Feasibility analysis
  • User value assessment

Installation

# Clone the repository
git clone git@github.com:cyanheads/mentor-mcp-server.git
cd mentor-mcp-server

# Install dependencies
npm install

# Build the project
npm run build

Configuration

Add to your MCP client settings:

{
  "mcpServers": {
    "mentor": {
      "command": "node",
      "args": ["build/index.js"],
      "env": {
        "DEEPSEEK_API_KEY": "your_api_key",
        "DEEPSEEK_MODEL": "deepseek-reasoner",
        "DEEPSEEK_MAX_TOKENS": "8192",
        "DEEPSEEK_MAX_RETRIES": "3",
        "DEEPSEEK_TIMEOUT": "30000"
      }
    }
  }
}

Environment Variables

Variable              Required  Default            Description
DEEPSEEK_API_KEY      Yes       -                  Your Deepseek API key
DEEPSEEK_MODEL        No        deepseek-reasoner  Deepseek model name
DEEPSEEK_MAX_TOKENS   No        8192               Maximum tokens per request
DEEPSEEK_MAX_RETRIES  No        3                  Number of retry attempts
DEEPSEEK_TIMEOUT      No        30000              Request timeout (ms)
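
As a rough sketch of how these variables and defaults might be consumed at startup (the `loadConfig` helper and `DeepseekConfig` shape are hypothetical, not the server's actual API):

```typescript
// Hypothetical sketch (not the server's actual config module) of how the
// documented variables and defaults could be read and validated.
interface DeepseekConfig {
  apiKey: string;
  model: string;
  maxTokens: number;
  maxRetries: number;
  timeoutMs: number;
}

function loadConfig(env: Record<string, string | undefined>): DeepseekConfig {
  const apiKey = env.DEEPSEEK_API_KEY;
  if (!apiKey) {
    // The only variable with no default: fail fast if it is missing.
    throw new Error("DEEPSEEK_API_KEY is required");
  }
  return {
    apiKey,
    model: env.DEEPSEEK_MODEL ?? "deepseek-reasoner",
    maxTokens: Number(env.DEEPSEEK_MAX_TOKENS ?? "8192"),
    maxRetries: Number(env.DEEPSEEK_MAX_RETRIES ?? "3"),
    timeoutMs: Number(env.DEEPSEEK_TIMEOUT ?? "30000"),
  };
}
```

Called as `loadConfig(process.env)`, this turns the table above into one typed object.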

Tools

Code Review

<use_mcp_tool>
<server_name>mentor-mcp-server</server_name>
<tool_name>code_review</tool_name>
<arguments>
{
  "file_path": "src/app.ts",
  "language": "typescript"
}
</arguments>
</use_mcp_tool>
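
The server answers with a standard MCP tool result. The review text itself varies per request, but the envelope looks roughly like this (illustrative):

```json
{
  "content": [
    {
      "type": "text",
      "text": "Review feedback for src/app.ts ..."
    }
  ]
}
```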

Design Critique

<use_mcp_tool>
<server_name>mentor-mcp-server</server_name>
<tool_name>design_critique</tool_name>
<arguments>
{
  "design_document": "path/to/design.fig",
  "design_type": "web UI"
}
</arguments>
</use_mcp_tool>

Writing Feedback

<use_mcp_tool>
<server_name>mentor-mcp-server</server_name>
<tool_name>writing_feedback</tool_name>
<arguments>
{
  "text": "Documentation content...",
  "writing_type": "documentation"
}
</arguments>
</use_mcp_tool>

Feature Enhancement

<use_mcp_tool>
<server_name>mentor-mcp-server</server_name>
<tool_name>brainstorm_enhancements</tool_name>
<arguments>
{
  "concept": "User authentication system"
}
</arguments>
</use_mcp_tool>
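
Each of these tool calls is forwarded to the Deepseek API, and the `DEEPSEEK_MAX_RETRIES` and `DEEPSEEK_TIMEOUT` settings suggest every request is wrapped with retry and timeout logic. A self-contained sketch of what such a wrapper could look like (`withRetries` is illustrative, not mentor-mcp-server's actual implementation):

```typescript
// Illustrative retry + timeout wrapper; not the server's actual code.
async function withRetries<T>(
  fn: () => Promise<T>,   // one attempt at the underlying API call
  maxRetries: number,     // extra attempts after the first (DEEPSEEK_MAX_RETRIES)
  timeoutMs: number       // per-attempt deadline in ms (DEEPSEEK_TIMEOUT)
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    let timer: ReturnType<typeof setTimeout> | undefined;
    const timeout = new Promise<never>((_, reject) => {
      timer = setTimeout(
        () => reject(new Error(`request timed out after ${timeoutMs}ms`)),
        timeoutMs
      );
    });
    const call = fn();
    call.catch(() => {}); // swallow a late rejection if the timeout wins the race
    try {
      // Whichever settles first decides this attempt.
      return await Promise.race([call, timeout]);
    } catch (err) {
      lastError = err; // remember the failure, then retry
    } finally {
      if (timer !== undefined) clearTimeout(timer);
    }
  }
  throw lastError;
}
```

With `maxRetries = 3`, a flaky call gets up to four attempts before the last error is surfaced to the agent.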

Examples

Detailed examples of each tool's usage and output can be found in the examples directory:

  • Second Opinion Example - Analysis of authentication system requirements
  • Code Review Example - Detailed TypeScript code review with security and performance insights
  • Design Critique Example - Comprehensive UI/UX feedback for a dashboard design
  • Writing Feedback Example - Documentation improvement suggestions
  • Brainstorm Enhancements Example - Feature ideation with implementation details

Each example includes the request format and sample response, demonstrating the tool's capabilities and output structure.

Development

# Build TypeScript code
npm run build

# Start the server
npm run start

# Development with watch mode
npm run dev

# Clean build artifacts
npm run clean

Project Structure

src/
├── api/         # API integration modules
├── tools/       # Tool implementations
│   ├── second-opinion/
│   ├── code-review/
│   ├── design-critique/
│   ├── writing-feedback/
│   └── brainstorm-enhancements/
├── types/       # TypeScript type definitions
├── utils/       # Utility functions
├── config.ts    # Server configuration
├── index.ts     # Entry point
└── server.ts    # Main server implementation

License

Apache License 2.0. See LICENSE for more information.


Built with the Model Context Protocol

mentor-mcp-server FAQ

How does mentor-mcp-server integrate with LLM agents?
It connects via the Model Context Protocol, providing mentorship through the Deepseek API to enhance agent reasoning and feedback.

What types of feedback can mentor-mcp-server provide?
It offers code reviews, design critiques, writing feedback, and idea brainstorming support.

Is mentor-mcp-server compatible with multiple LLM providers?
Yes, by adhering to MCP standards it works with agents backed by various LLMs, including OpenAI, Anthropic Claude, and Google Gemini.

How do I set up mentor-mcp-server in my environment?
Follow the GitHub repository instructions, which cover the TypeScript build and MCP client integration steps.

Can mentor-mcp-server improve AI-generated content quality?
Yes, by providing expert second opinions and actionable insights, it helps refine and improve AI outputs.

Does mentor-mcp-server support real-time interaction?
Yes, it offers real-time mentorship capabilities to LLM agents during their workflows.

What is Deepseek-Reasoning (R1) in this context?
It is Deepseek's reasoning model (deepseek-reasoner, also known as R1), accessed via the Deepseek API to provide advanced mentorship and critique.

Is mentor-mcp-server open source?
Yes, it is licensed under Apache 2.0 and available on GitHub for community use and contributions.