
baml-agents

MCP.Pizza Chef: Elijas

baml-agents is a Python library for building structured agents with BAML (structured LLM generation), MCP tools, and the 12-Factor Agents methodology. It supports structured generation and orchestration of LLM-powered agents, enabling modular, maintainable, and scalable AI workflows within the MCP ecosystem. The project is experimental but actively maintained, and is aimed at developers building advanced AI agents with real-time context and tool integration.

Use baml-agents To

  • Create structured LLM agents with modular components
  • Integrate MCP tools for enhanced agent capabilities
  • Implement 12-Factor principles in AI agent development
  • Orchestrate multi-step reasoning workflows with LLMs
  • Build scalable and maintainable AI agent architectures
  • Enable real-time context feeding into LLM agents
  • Develop experimental AI agents with structured generation

README

baml-agents

Status: Experimental · Maintained: yes · License: MIT · Linter: Ruff

Building Agents with LLM structured generation (BAML), MCP Tools, and 12-Factor Agents principles

This repository shares useful patterns I use when working with BAML. Note: the API may change unexpectedly between minor versions, so install with specific version constraints:

pip install "baml-agents>=0.38.2,<0.39.0"

Found this useful? Star the repo on GitHub to show support and follow for updates. Also, find me on Discord if you have questions or would like to join a discussion!


Disclaimer

This project is maintained independently by Elijas and is not affiliated with the official BAML project.

Repository Structure

  • /notebooks: Core Tutorials & Examples. Contains curated Jupyter notebooks demonstrating key features and recommended patterns. Start here to learn baml-agents.
  • /explorations: Experimental & Niche Content. Holds prototypes, tests, and examples for specific or advanced use cases. Content may be less polished or stable. See the explorations README.
  • /baml_agents/devtools: Developer Utilities. Contains helper scripts for project maintenance, development workflows, and automating tasks (e.g., updating baml generator versions). See the devtools README.

Contents (Core Tutorials)

The primary tutorials are located in the /notebooks directory:

  1. Flexible LLM Client Management in BAML
    • Effortlessly switch between different LLM providers (like OpenAI, Anthropic, Google) at runtime using simple helper functions.
    • Bridge compatibility gaps: Connect to unsupported LLM backends or tracing systems (e.g., Langfuse, LangSmith) via standard proxy setups.
    • Solve common configuration issues: Learn alternatives for managing API keys and client settings if environment variables aren't suitable.
  2. Introduction to AI Tool Use with BAML
    • Learn how to define custom actions (tools) for your AI using Pydantic models, making your agents capable of doing things.
    • See how to integrate these tools with BAML manually or dynamically using ActionRunner for flexible structured outputs.
    • Understand how BAML translates goals into structured LLM calls that select and utilize the appropriate tool.
  3. Integrating Standardized MCP Tools with BAML
    • Discover how to leverage the Model Context Protocol (MCP) to plug pre-built third-party tools (like calculators or web search) into your BAML agents with minimal effort.
    • See ActionRunner in action, automatically discovering and integrating tools from MCP servers with minimal configuration.
    • Learn techniques to filter and select specific MCP tools to offer to the LLM, controlling the agent's capabilities precisely.
  4. Interactive BAML Development in Jupyter
    • See BAML's structured data generation stream live into your Jupyter output cell as the LLM generates it.
    • Interactively inspect the details: Use collapsible sections to view full LLM prompts and responses, optionally grouped by call or session, directly in the notebook.
    • Chat with your agent: an interactive chat widget embedded in the notebook lets you converse with your agent in real time.
  5. Simple Agent Demonstration
    • Putting it all together: Build a simple, functional agent capable of tackling a multi-step task.
    • Learn how to combine custom Python actions (defined as Action classes) with standardized MCP tools (like calculators or time servers) managed by ActionRunner.
    • Follow the agent's decision-making loop driven by BAML's structured output generation (GetNextAction), see it execute tools, and observe how it uses the results to progress.
    • Includes demonstration of JupyterBamlMonitor for transparent inspection of the underlying LLM interactions.
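The tool-dispatch idea behind tutorials 2 and 3 can be sketched in plain Python. This is a minimal, illustrative stand-in, not the actual baml-agents API: the real library defines actions as Pydantic models and manages them through ActionRunner, and all names below are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical stand-in for the Action/tool pattern described above.
@dataclass
class Action:
    name: str          # identifier the LLM selects in its structured output
    description: str   # shown to the LLM so it can pick the right tool
    handler: Callable[..., str]

def dispatch(registry: dict[str, Action], name: str, **kwargs) -> str:
    """Run the action the LLM selected, passing its structured arguments."""
    return registry[name].handler(**kwargs)

get_time = Action("get_time", "Return a fixed timestamp (demo only).",
                  lambda: "2024-01-01T00:00:00Z")
registry = {get_time.name: get_time}
print(dispatch(registry, "get_time"))  # → 2024-01-01T00:00:00Z
```

The key design point the tutorials build on is the same as here: the LLM never calls Python directly; it emits a structured choice (a tool name plus arguments), and the runner maps that choice onto real code.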

Simple example

Tip

The code below is trimmed for brevity to illustrate the core concepts; some function names and setup steps may differ slightly from the full notebook implementation. The full, runnable code is available in the notebook Simple Agent Demonstration (notebooks/05_simple_agent_demo.ipynb).

Show code for the example below
def get_weather_info(city: str):
    return f"The weather in {city} is 63 degrees Fahrenheit with cloudy conditions."

def stop_execution(final_answer: str):
    return f"Final answer: {final_answer}"

r = ActionRunner() # Doing an action means using a tool

# Adding a tool to allow the agent to do math
r.add_from_mcp_server(server="uvx mcp-server-calculator")

# Adding a tool to get the current time
r.add_from_mcp_server(server="uvx mcp-timeserver")  # Note: you can also add URLs

# Adding a tool to get the current weather
r.add_action(get_weather_info)

# Adding a tool to let the agent stop execution
r.add_action(stop_execution)

async def execute_task(llm, task: str) -> str:
    interactions = []
    while True:
        action = await llm.GetNextAction(task, interactions)
        if result := is_result_available(action):
            return result

        result = r.run(action)
        interactions.append(new_interaction(action, result))

llm = LLMClient("gpt-4.1-nano")
answer = await execute_task(llm, "State the current date along with avg temp between LA, NY, and Chicago in Fahrenheit.")
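The loop above relies on two helpers, is_result_available and new_interaction, that are defined in the full notebook. A rough sketch of what they might look like, assuming the action exposes a name and parsed arguments (illustrative only; the notebook versions operate on BAML action objects):

```python
from types import SimpleNamespace

def is_result_available(action):
    # Hypothetical sketch: if the LLM chose the stop_execution action,
    # surface its final answer so the loop can return; otherwise keep going.
    if getattr(action, "name", None) == "stop_execution":
        return action.args.get("final_answer")
    return None

def new_interaction(action, result):
    # Record one loop step: which action ran and what it returned,
    # so the next GetNextAction call sees the accumulated history.
    return {"action": action, "result": result}

stop = SimpleNamespace(name="stop_execution", args={"final_answer": "Done."})
print(is_result_available(stop))  # → Done.
```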

BAML Agent execution trace in Jupyter showing LLM prompts and completions

To try it yourself, check out the notebook Simple Agent Demonstration (notebooks/05_simple_agent_demo.ipynb).

Running the Notebooks

To run code from the notebooks/ folder, you'll first need to:

  • Install the uv Python package manager.
  • Install all dependencies: uv sync --dev
  • Generate necessary BAML code: uv run baml-cli generate
    • Alternatively, you can use the VSCode extension to do it automatically every time you edit a .baml file.

baml-agents FAQ

How do I install baml-agents?
You can install baml-agents from PyPI. Because the API may change between minor versions, install with a version constraint, e.g. pip install "baml-agents>=0.38.2,<0.39.0".
Is baml-agents actively maintained?
Yes, baml-agents is actively maintained and updated regularly.
What programming language is baml-agents written in?
baml-agents is primarily written in Python, making it easy to integrate with Python-based MCP clients and servers.
Can baml-agents work with multiple LLM providers?
Yes, baml-agents supports integration with various LLM providers including OpenAI, Anthropic Claude, and Google Gemini.
What does '12-Factor Agents' mean in this context?
It refers to applying 12-Factor App principles to AI agents for better modularity, scalability, and maintainability.
Is baml-agents suitable for production use?
Currently, baml-agents is experimental but stable enough for development and testing environments.
How does baml-agents leverage MCP Tools?
It uses MCP Tools to expose structured data and functionality, enabling LLMs to interact with their environment effectively.
Where can I find documentation and support?
Documentation and issue tracking are available on the GitHub repository, with community support channels linked there.