mcp-agent

MCP.Pizza Chef: joshuaalpuerto

mcp-agent is a TypeScript client framework designed to build lightweight, composable AI agents using the Model Context Protocol (MCP). It enables developers to manage connections and execute MCP tools with minimal integration effort, providing a type-safe and modular approach to create robust AI agents within JavaScript and TypeScript environments. Inspired by a Python counterpart, it facilitates direct tool calls and simple agent construction in existing architectures.

Use This MCP client To

  • Build custom AI agents that interact with MCP-aware services
  • Manage MCP tool connections within TypeScript applications
  • Execute MCP tools programmatically with minimal setup
  • Create composable and type-safe AI workflows in JavaScript
  • Integrate AI agents into existing web or backend architectures
  • Prototype AI-driven automation using MCP in TypeScript
  • Simplify multi-step reasoning by orchestrating MCP tools
  • Develop modular AI components for scalable applications

README

mcp-agent

Build Effective Agents with Model Context Protocol in TypeScript

mcp-agent is a TypeScript framework inspired by the Python lastmile-ai/mcp-agent project. It provides a simple, composable, and type-safe way to build AI agents leveraging the Model Context Protocol (MCP) in JavaScript and TypeScript environments.

This library aims to bring the powerful patterns and architecture of mcp-agent to the JavaScript ecosystem, enabling developers to create robust and controllable AI agents that can interact with MCP-aware services and tools.

Installation

First, create or update your .npmrc file with:

@joshuaalpuerto:registry=https://npm.pkg.github.com

Then

npm install @joshuaalpuerto/mcp-agent

Key Capabilities

mcp-agent empowers you to build sophisticated AI agents with the following core capabilities:

  • Agent Abstraction: Define intelligent agents with clear instructions, access to tools (both local functions and MCP servers), and integrated LLM capabilities.
  • Model Context Protocol (MCP) Integration: Seamlessly connect and interact with services and tools exposed through MCP servers.
  • Local Function Tools: Extend agent capabilities with custom, in-process JavaScript/TypeScript functions that act as tools, alongside MCP server-based tools.
  • LLM Flexibility: Integrate with various Large Language Models (LLMs). The library includes an example implementation for Fireworks AI, demonstrating extensibility for different LLM providers.
  • Memory Management: Basic in-memory message history to enable conversational agents.
  • Workflows: Implement complex agent workflows like the Orchestrator pattern to break down tasks into steps and coordinate multiple agents. Support for additional patterns from Anthropic's Building Effective Agents and OpenAI's Swarm coming soon.
  • TypeScript & Type Safety: Built with TypeScript, providing strong typing, improved code maintainability, and enhanced developer experience.
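The local function tool capability above can be sketched as follows. This is an illustrative shape only; the `LocalFunctionTool` interface and the `writeLocalSystem` tool below are assumptions for the sake of the example, and the actual signature expected by mcp-agent's `functions` option may differ.

```typescript
// Hypothetical shape for a local function tool: a name, a JSON-schema-style
// parameter description for the LLM, and an in-process handler.
// The exact interface expected by mcp-agent's `functions` option may differ.
interface LocalFunctionTool {
  name: string;
  description: string;
  parameters: Record<string, unknown>;
  handler: (args: Record<string, unknown>) => Promise<string>;
}

const writeLocalSystem: LocalFunctionTool = {
  name: "write_local_system",
  description: "Write text content to a file on the local file system.",
  parameters: {
    type: "object",
    properties: {
      path: { type: "string", description: "Destination file path" },
      content: { type: "string", description: "Text to write" },
    },
    required: ["path", "content"],
  },
  // A real tool would call fs.promises.writeFile here; this version only
  // reports what it would do, so the example stays side-effect free.
  handler: async ({ path, content }) =>
    `Wrote ${(content as string).length} characters to ${path as string}`,
};

// An agent would receive this via `functions: [writeLocalSystem]`.
writeLocalSystem
  .handler({ path: "theory_on_ai.md", content: "Hello" })
  .then(console.log);
```
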

Quick Start

Standalone Usage

Get started quickly with a basic standalone example:

import { fileURLToPath } from 'url';
import path from 'path';
import { createSmitheryUrl } from '@smithery/sdk'; // Used below to connect to a community MCP server
import { Agent, LLMFireworks, Orchestrator } from '@joshuaalpuerto/mcp-agent';
import { writeLocalSystem } from './tools/writeLocalSystem'; // Assuming you have example tools

const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);

async function runOrchestrator() {
  const llm = new LLMFireworks("accounts/fireworks/models/deepseek-v3", { // Example LLM from Fireworks
    maxTokens: 2048,
    temperature: 0.1
  });

  const researcher = await Agent.initialize({
    llm,
    name: "researcher",
    description: `Your expertise is to find information.`,
    serverConfigs: [ // Example MCP Server Configurations
      {
        name: "read_file_from_local_file_system",
        type: "stdio",
        command: "node",
        args: ['--loader', 'ts-node/esm', path.resolve(__dirname, 'servers', 'readLocalFileSystem.ts'),]
      },
      {
        name: "search_web",
        type: "ws",
        url: createSmitheryUrl( // Example using community mcp server via @smithery/sdk
          "https://server.smithery.ai/exa/ws",
          {
            exaApiKey: process.env.EXA_API_KEY
          }
        )
      },
    ],
  });

  const writer = await Agent.initialize({
    llm,
    name: "writer",
    description: `Your expertise is to write information to a file.`,
    functions: [writeLocalSystem], // Example local function tool
  });

  const orchestrator = new Orchestrator({
    llm,
    agents: [researcher, writer],
  });

  const result = await orchestrator.generate('Search for the latest developments in AI and write about them to `theory_on_ai.md` on my local machine. No need to verify the result.');
  console.log(JSON.stringify(result));

  await researcher.close();
  await writer.close();
}

runOrchestrator().catch(console.error);

To run this example:

  1. Install Dependencies:
    pnpm install
  2. Set Environment Variables: Create a .env file (or set environment variables directly) and add your API keys (e.g., EXA_API_KEY, Fireworks AI API key if needed).
  3. Run the Demo:
    node --loader ts-node/esm ./demo/standalone/index.ts

REST Server Integration

For a complete Express.js integration example with multi-agent orchestration, check out the demo/express/README.md.
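Before diving into the full Express demo, the basic wiring can be sketched framework-agnostically. The orchestrator below is a stub standing in for the `Orchestrator` from the Quick Start, so this sketch is self-contained; the handler shape and the stub's return value are assumptions, not the demo's actual code.

```typescript
// Sketch of wiring an orchestrator into an HTTP request handler.
// The orchestrator is stubbed so the example runs without the library;
// in the Express demo you would construct it as in the Quick Start.
interface OrchestratorLike {
  generate(task: string): Promise<unknown>;
}

const stubOrchestrator: OrchestratorLike = {
  // Stand-in for Orchestrator#generate from the Quick Start example.
  generate: async (task) => ({ status: "completed", task }),
};

// A framework-agnostic handler: Express, Fastify, etc. would adapt this
// to their own request/response objects.
async function handleGenerate(
  orchestrator: OrchestratorLike,
  body: { task: string }
): Promise<{ result: unknown }> {
  const result = await orchestrator.generate(body.task);
  return { result };
}

handleGenerate(stubOrchestrator, { task: "summarize AI news" }).then((res) =>
  console.log(JSON.stringify(res))
);
```

In a real Express route, the handler would read `task` from the request body and send the orchestrator's result as JSON; see demo/express/README.md for the complete multi-agent version.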

Core Concepts

  • Agent: The fundamental building block. An Agent is an autonomous entity with a specific role, instructions, and access to tools.
  • MCP Server Aggregator (MCPServerAggregator): Manages connections to multiple MCP servers, providing a unified interface for agents to access tools.
  • MCP Connection Manager (MCPConnectionManager): Handles the lifecycle and reuse of MCP server connections, optimizing resource usage.
    • Supported transports: stdio, SSE, streamable HTTP, and WebSocket
  • LLM Integration (LLMInterface, LLMFireworks): Abstracts interaction with Large Language Models. LLMFireworks is an example implementation for Fireworks AI models.
  • Tools: Functions or MCP server capabilities that Agents can use to perform actions. Tools can be:
    • MCP Server Tools: Capabilities exposed by external MCP servers (e.g., file system access, web search).
    • Local Function Tools: JavaScript/TypeScript functions defined directly within your application.
  • Workflows: Composable patterns for building complex agent behaviors (see Anthropic's Building Effective Agents post).
    • Orchestrator - coordinates multiple agents toward a larger objective by breaking the task into steps.
    • Prompt chaining - coming soon.
    • Routing - coming soon.
    • Parallelization - coming soon.
    • Evaluator-optimizer - coming soon.
  • Memory (SimpleMemory): Provides basic in-memory message history for conversational agents.
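The `SimpleMemory` concept above can be illustrated with a minimal in-memory history. The `InMemoryHistory` class below is a sketch of the idea only, not the library's actual `SimpleMemory` API.

```typescript
// Minimal sketch of an in-memory message history, illustrating the
// SimpleMemory concept. The library's actual API may differ.
type Role = "system" | "user" | "assistant" | "tool";

interface Message {
  role: Role;
  content: string;
}

class InMemoryHistory {
  private messages: Message[] = [];

  add(role: Role, content: string): void {
    this.messages.push({ role, content });
  }

  // Return the most recent messages, e.g. to stay within a context window.
  recent(limit = 20): Message[] {
    return this.messages.slice(-limit);
  }

  clear(): void {
    this.messages = [];
  }
}

const memory = new InMemoryHistory();
memory.add("user", "Find the latest developments in AI.");
memory.add("assistant", "Here is a summary of recent developments...");
console.log(memory.recent().length); // 2
```

Because the history lives in process memory, it resets on restart; persistent or windowed memory would be layered on top of the same interface.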

Acknowledgements

This project is heavily inspired by and builds upon the concepts and architecture of the excellent lastmile-ai/mcp-agent Python framework.

We encourage you to explore their repository for a deeper understanding of the underlying principles and patterns that have informed this TypeScript implementation.

Contributing

Contributions are welcome!

mcp-agent FAQ

How do I install mcp-agent in my project?
Add the GitHub package registry to your .npmrc and run npm install @joshuaalpuerto/mcp-agent.
Can mcp-agent be used with plain JavaScript or only TypeScript?
While optimized for TypeScript with type safety, mcp-agent can also be used in JavaScript projects.
Does mcp-agent support integration with multiple MCP servers?
Yes, it manages connections and tool execution across multiple MCP servers seamlessly.
Is mcp-agent compatible with different LLM providers?
It defines an LLM interface and ships an example implementation for Fireworks AI; other providers can be supported by implementing the same interface.
Can I build complex multi-step AI workflows using mcp-agent?
Yes, mcp-agent supports composing and orchestrating multiple MCP tools for advanced workflows.
What programming environments is mcp-agent designed for?
It is designed primarily for JavaScript and TypeScript environments, chiefly Node.js; note that stdio-based MCP servers require a Node.js runtime.
How does mcp-agent simplify AI agent development?
By providing abstractions for tool management, connection handling, and type-safe agent composition.
Is mcp-agent open source and actively maintained?
Yes, it is open source on GitHub and inspired by a well-established Python project.