
producthunt-mcp-server

MCP.Pizza Chef: jaipandya

Product Hunt MCP Server is a plug-and-play MCP server that integrates Product Hunt's API with LLMs and agents. It enables real-time access to the latest product launches, trends, and community discussions from Product Hunt, allowing models to query and interact with this data seamlessly. Designed for Python 3.10+ and Docker-ready, it requires a Product Hunt API token for operation and supports secure, scoped interactions via the Model Context Protocol.

Use this MCP server to

  • Fetch the latest product launches from Product Hunt in real time
  • Query trending products and categories for market analysis
  • Integrate Product Hunt data into AI-powered product discovery tools
  • Enable LLMs to generate summaries of new tech products
  • Automate alerts for new products matching specific criteria
  • Combine Product Hunt insights with other data sources for research
  • Support AI agents in recommending innovative products to users

README

🚀 Product Hunt MCP Server

PyPI version License: MIT Python 3.10+ Docker Ready MCP Compatible

A plug-and-play MCP server for Product Hunt


📦 Quick Install

pip install product-hunt-mcp

🏃‍♂️ Quick Start Example

# Run the MCP server (requires PRODUCT_HUNT_TOKEN environment variable)
export PRODUCT_HUNT_TOKEN=your_token_here
product-hunt-mcp

✨ What is this?

Product Hunt MCP Server connects Product Hunt's API to any LLM or agent that speaks the Model Context Protocol (MCP). Perfect for AI assistants, chatbots, or your own automations!

  • 🔍 Get posts, collections, topics, users
  • 🗳️ Get votes, comments, and more
  • 🛠️ Use with Claude Desktop, Cursor, or any MCP client

🛠️ Features

  • Get detailed info on posts, comments, collections, topics, users
  • Search/filter by topic, date, votes, etc.
  • Paginated comments, user upvotes, and more
  • Built with FastMCP for speed and compatibility

🧑‍💻 Who is this for?

  • AI/LLM users: Plug into Claude Desktop, Cursor, or your own agent
  • Developers: Build bots, dashboards, or automations with Product Hunt data
  • Tinkerers: Explore the MCP ecosystem and build your own tools

🏁 Setup

Prerequisites

  • Python 3.10+
  • Product Hunt API token (get one here)
    • You'll need to create an account on Product Hunt
    • Navigate to the API Dashboard and create a new application
    • Use the application's Developer Token as your API token

Note: When creating a new application on Product Hunt, you will be asked for a redirect_uri. While the MCP server does not use the redirect URI, it is a required field. You can enter any valid URL, such as https://localhost:8424/callback.

Installation

Preferred: uv (fast, modern Python installer)

# Install uv if you don't have it
pip install uv

# Install from PyPI (recommended)
uv pip install product-hunt-mcp
# or
pip install product-hunt-mcp

# Install from GitHub (latest main branch)
uv pip install 'git+https://github.com/jaipandya/producthunt-mcp-server.git'
# or
pip install 'git+https://github.com/jaipandya/producthunt-mcp-server.git'

# Install locally from source
uv pip install .
# or
pip install .

🚀 Usage with Claude Desktop & Cursor

Once installed, the product-hunt-mcp command will be available. Add it to your Claude Desktop or Cursor configuration:

{
  "mcpServers": {
    "product-hunt": {
      "command": "product-hunt-mcp",
      "env": {
        "PRODUCT_HUNT_TOKEN": "your_token_here"
      }
    }
  }
}
  • Replace your_token_here with your actual Product Hunt API token.
  • The token must be set as an environment variable in your Claude Desktop or Cursor config for the server to authenticate.
  • Always restart your client (Claude Desktop/Cursor) after editing the config file.

Tip: On macOS, Claude Desktop may not always find the product-hunt-mcp command if it's not in the default PATH. If you encounter issues, you can provide the full path to the executable. After installing, run:

which product-hunt-mcp

Use the output path in your Claude Desktop config, replacing "command": "product-hunt-mcp" with the full path (e.g., "command": "/Users/youruser/.local/bin/product-hunt-mcp").

Finding your configuration file

  • Claude Desktop:

    • Windows: %APPDATA%\claude-desktop\config.json
    • macOS: ~/Library/Application Support/claude-desktop/config.json
    • Linux: ~/.config/claude-desktop/config.json
  • Cursor:

    • Windows: %APPDATA%\Cursor\User\settings.json
    • macOS: ~/Library/Application Support/Cursor/User/settings.json
    • Linux: ~/.config/Cursor/User/settings.json

Docker

You can also run the server using Docker:

# Build the Docker image
docker build -t product-hunt-mcp .

# Run the Docker container (interactive for MCP)
docker run -i --rm -e PRODUCT_HUNT_TOKEN=your_token_here product-hunt-mcp

For Claude Desktop/Cursor integration with Docker, use this configuration:

{
  "mcpServers": {
    "product-hunt": {
      "command": "docker",
      "args": ["run", "-i", "--rm", "-e", "PRODUCT_HUNT_TOKEN=your_token_here", "product-hunt-mcp"],
      "env": {}
    }
  }
}

Security Note: Your PRODUCT_HUNT_TOKEN is sensitive. Do not share it or commit it to version control.
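As a sketch of how to keep the token out of config files and fail fast when it is missing (the helper name here is illustrative, not part of the server's API), you can resolve it from the environment at startup:

```python
import os


def load_product_hunt_token() -> str:
    """Read the Product Hunt API token from the environment.

    Raising a clear error up front beats letting API requests fail
    later with an opaque 401.
    """
    token = os.environ.get("PRODUCT_HUNT_TOKEN", "").strip()
    if not token:
        raise RuntimeError(
            "PRODUCT_HUNT_TOKEN is not set. Export it before starting "
            "the server, e.g. `export PRODUCT_HUNT_TOKEN=...`"
        )
    return token
```

The same pattern works in CI: inject the token as a secret rather than writing it into any tracked file.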


🛠️ MCP Tools

| Tool | Description | Key Parameters |
| --- | --- | --- |
| get_post_details | Get info about a specific post | id or slug, comments_count, comments_after |
| get_posts | Get posts with filters | topic, order, count, featured, posted_before, posted_after |
| get_comment | Get info about a specific comment | id (required) |
| get_post_comments | Get comments for a post | post_id or slug, order, count, after |
| get_collection | Get info about a collection | id or slug |
| get_collections | Get collections with filters | featured, user_id, post_id, order, count |
| get_topic | Get info about a topic | id or slug |
| search_topics | Search topics | query, followed_by_user_id, order, count |
| get_user | Get info about a user | id or username, posts_type, posts_count |
| get_viewer | Get info about the authenticated user | None |
| check_server_status | Check server/API status & authentication | None |
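Under the hood, these tools map onto Product Hunt's GraphQL API (v2). As a rough, standard-library-only sketch of the kind of request a tool like get_posts issues (the query shape is illustrative, not the server's exact implementation; consult Product Hunt's API docs for the full schema):

```python
import json
import urllib.request

API_URL = "https://api.producthunt.com/v2/api/graphql"


def build_posts_request(count: int = 5) -> dict:
    """Build a GraphQL payload asking for recent posts (illustrative query)."""
    query = """
    query RecentPosts($first: Int!) {
      posts(first: $first) {
        edges { node { name tagline votesCount } }
      }
    }
    """
    return {"query": query, "variables": {"first": count}}


def fetch_posts(token: str, count: int = 5) -> dict:
    """POST the payload with bearer auth and return the decoded JSON response."""
    payload = json.dumps(build_posts_request(count)).encode()
    req = urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The MCP server wraps calls like this so your client never has to construct GraphQL by hand.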

🏗️ Project Structure

product-hunt-mcp/
├── src/
│   └── product_hunt_mcp/ # Main package directory
│       ├── __init__.py
│       ├── cli.py        # Command-line entry point
│       ├── api/          # API clients & queries
│       ├── schemas/      # Data validation schemas
│       ├── tools/        # MCP tool definitions
│       └── utils/        # Utility functions
├── pyproject.toml      # Project metadata, dependencies, build config
├── README.md
├── CONTRIBUTING.md
├── CHANGELOG.md
├── Dockerfile
└── ... (config files, etc.)

🔄 Rate Limiting

The Product Hunt API has rate limits that this client respects. If you encounter rate limit errors, the client will inform you when the rate limit resets. You can check your current rate limit status using the get_api_rate_limits or check_server_status tools.
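If you are scripting on top of these tools, a simple wait-until-reset retry loop is usually enough. A minimal sketch (the exception type and injectable `sleep`/`now` hooks are illustrative, not part of this server's code):

```python
import time


class RateLimited(Exception):
    """Raised when the API reports a rate limit, carrying the reset time."""

    def __init__(self, reset_at: float):
        super().__init__(f"rate limited until {reset_at}")
        self.reset_at = reset_at


def call_with_rate_limit_retry(fn, max_attempts=3, sleep=time.sleep, now=time.time):
    """Call fn(); on RateLimited, wait until the reported reset and retry.

    `sleep` and `now` are injectable so the backoff logic is testable.
    """
    for attempt in range(max_attempts):
        try:
            return fn()
        except RateLimited as exc:
            if attempt == max_attempts - 1:
                raise
            # Never sleep a negative duration if the reset is in the past.
            sleep(max(0.0, exc.reset_at - now()))
```

Keeping query counts low (e.g. smaller `count` values) also stretches your quota further.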


🐛 Troubleshooting

  • Missing token: Ensure your PRODUCT_HUNT_TOKEN is correctly set as an environment variable.
  • Connection issues: Verify your internet connection and that the Product Hunt API is accessible.
  • Rate limiting: If you hit rate limits, wait until the reset time or reduce your query frequency.
  • Claude Desktop/Cursor not finding the server: Verify the path to your Python executable and restart the client.

🤝 Contributing

  • PRs and issues welcome!
  • Please follow PEP8 and use ruff for linting.
  • See pyproject.toml for dev dependencies.

📝 Notes

  • This project is not affiliated with Product Hunt.
  • The Product Hunt API is subject to change.

📜 License

MIT

producthunt-mcp-server FAQ

How do I install the Product Hunt MCP Server?
Install it from PyPI with pip install product-hunt-mcp.

What environment variable is required to run the server?
Set PRODUCT_HUNT_TOKEN to your Product Hunt API token.

Is the Product Hunt MCP Server compatible with Docker?
Yes, it ships with a Dockerfile for containerized deployment.

Which Python versions are supported?
Python 3.10 and above.

How does this server interact with LLMs?
It exposes Product Hunt data through the Model Context Protocol, enabling LLMs to query product information in real time.

Can I use this server with multiple LLM providers?
Yes, it is provider-agnostic: any MCP-compatible client can use it, regardless of whether the underlying model is from OpenAI, Anthropic, Google, or elsewhere.

Is the server secure when handling API tokens?
The token is read from an environment variable rather than stored in code, and interactions follow MCP's scoped-access model. Keep the token out of version control.

Where can I find the source code?
The server is open source and available on GitHub under the MIT license.