langchain-mcp-client

langchain-mcp-client is a Model Context Protocol (MCP) client built around LangChain's ReAct agent. It connects to any MCP server, accepts any LangChain-compatible LLM, and provides a command-line interface for interactive conversations, letting developers route live context between servers and orchestrate their tools from a single agent.

Use This MCP Client To

  • Connect LangChain agents to multiple MCP servers for real-time context sharing
  • Use LangChain-compatible LLMs to interact with MCP server tools
  • Enable dynamic CLI-based conversations with MCP servers
  • Orchestrate multi-step reasoning workflows using LangChain and MCP
  • Integrate external MCP data sources into LangChain agent workflows
  • Test and prototype MCP server interactions via the command line

README


🦜 🔗 LangChain MCP Client


This simple Model Context Protocol (MCP) client demonstrates how a LangChain ReAct agent can use MCP server tools.

  • 🌐 Seamlessly connect to any MCP server.
  • 🤖 Use any LangChain-compatible LLM for flexible model selection.
  • 💬 Interact via CLI, enabling dynamic conversations.

Conversion to LangChain Tools

It leverages the utility function convert_mcp_to_langchain_tools(). This function initializes the specified MCP servers in parallel and converts their available tools into a list of LangChain-compatible tools (List[BaseTool]), as shown in the sketch below.
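As a rough sketch, assuming the convert_mcp_to_langchain_tools() API from the langchain_mcp_tools package (the server spec and model name below are illustrative, not prescribed by this repo):

import asyncio

from langchain_anthropic import ChatAnthropic
from langchain_mcp_tools import convert_mcp_to_langchain_tools
from langgraph.prebuilt import create_react_agent

async def main():
    # Illustrative server spec; any MCP server launch command works here.
    mcp_servers = {
        "filesystem": {
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-filesystem", "."],
        },
    }
    # Starts the listed servers in parallel and wraps their tools
    # as List[BaseTool]; cleanup() later shuts the sessions down.
    tools, cleanup = await convert_mcp_to_langchain_tools(mcp_servers)
    try:
        llm = ChatAnthropic(model="claude-3-5-haiku-latest")  # any LangChain LLM
        agent = create_react_agent(llm, tools)
        result = await agent.ainvoke(
            {"messages": [("user", "List the files in the current directory")]}
        )
        print(result["messages"][-1].content)
    finally:
        await cleanup()

asyncio.run(main())

The returned cleanup() coroutine closes the MCP server sessions, so it is worth calling in a finally block as above.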

Installation

Python 3.11 or higher is required.

pip install langchain_mcp_client

Configuration

Create a .env file containing the API keys needed to access your LLM.
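For instance, with Anthropic or OpenAI models it might look like this (ANTHROPIC_API_KEY and OPENAI_API_KEY are the standard variable names those providers' LangChain integrations read; include only the keys you need):

ANTHROPIC_API_KEY=sk-ant-...
OPENAI_API_KEY=sk-...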

Configure the LLM, MCP servers, and example queries in the llm_mcp_config.json5 file (a sketch follows the list):

  1. LLM Configuration: Set up your LLM parameters.
  2. MCP Servers: Specify the MCP servers to connect to.
  3. Example Queries: Define example queries that invoke MCP server tools. Press Enter to use these example queries when prompted.
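Put together, a minimal llm_mcp_config.json5 might look like the sketch below; the key names here are assumptions mirroring the three parts above, so treat the sample config shipped with the repository as authoritative.

{
  // 1. LLM configuration
  "llm": {
    "model_provider": "anthropic",
    "model": "claude-3-5-haiku-latest",
  },
  // 2. MCP servers to connect to
  "mcp_servers": {
    "fetch": {
      "command": "uvx",
      "args": ["mcp-server-fetch"],
    },
  },
  // 3. Example queries offered at the prompt
  "example_queries": [
    "Fetch the top headline from bbc.com",
  ],
}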

Usage

Below is an example with a Jupyter MCP Server:

Check the llm_mcp_config.json5 configuration (the commands differ depending on whether you are running Linux or macOS/Windows).

# Start jupyterlab.
make jupyterlab
# Launch the CLI.
make cli

Here is an example prompt:

create matplotlib examples with many variants in jupyter

Credits

The initial code of this repo was taken from hideya/mcp-client-langchain-py (MIT License) and from langchain_mcp_tools (MIT License).

langchain-mcp-client FAQ

How do I install the langchain-mcp-client?
You can install it via PyPI using 'pip install langchain-mcp-client'.
Can I use any LLM with langchain-mcp-client?
Yes, it supports any LangChain-compatible LLM, including OpenAI, Claude, and Gemini models.
Does langchain-mcp-client support multiple MCP servers simultaneously?
Yes, it can connect to and manage context from multiple MCP servers concurrently.
How do I interact with MCP servers using this client?
Interaction is primarily through a command-line interface that supports dynamic conversations.
Is langchain-mcp-client suitable for production use?
It is designed for both prototyping and production workflows, depending on your integration needs.
How does langchain-mcp-client handle security?
It relies on MCP's own security model: tool access is scoped to the servers you explicitly configure, and interactions remain observable.
Can I extend langchain-mcp-client with custom LangChain agents?
Yes, you can customize and extend it using LangChain's agent framework to fit specific workflows.
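For example, because the converted tools are ordinary BaseTool objects, they plug into any agent construction, not just the bundled ReAct loop (llm and tools here refer to the conversion sketch earlier on this page):

# Bind the MCP-derived tools directly to a chat model for a custom agent loop.
llm_with_tools = llm.bind_tools(tools)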
Where can I find documentation and support?
Documentation is available on the GitHub repository, and community support can be found via the MCP and LangChain forums.