The OpenAI Agents SDK (openai-agents) is a Python framework for building multi-agent workflows with tool calling, handoffs, and guardrails. This guide shows you how to send traces from your OpenAI Agents applications to Lemma.

How It Works

instrument_openai_agents from uselemma-tracing registers the Lemma OTel provider and patches the OpenAI Agents SDK using OpenInference instrumentation. Every agent run, LLM call, tool call, and handoff automatically emits spans to Lemma. Wrapping your agent function with wrap_agent creates a top-level Lemma span that ties the whole run together — giving you the run ID, input, output, and experiment flag alongside the auto-instrumented child spans.

Getting Started

Install Dependencies

pip install "uselemma-tracing[openai-agents]" openai-agents

Set Up Instrumentation

Call instrument_openai_agents before importing agents. This call typically goes at the top of your application’s entry point (main.py, app.py, etc.):
# main.py
from dotenv import load_dotenv
load_dotenv()

# Must come before importing openai-agents
from uselemma_tracing import instrument_openai_agents
instrument_openai_agents()

from agents import Agent, Runner
Set the LEMMA_API_KEY and LEMMA_PROJECT_ID environment variables for your application. You can find both values in your Lemma project settings.
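
Since the snippet above loads configuration with load_dotenv(), the easiest way to supply these locally is a .env file next to your entry point. The values here are placeholders; substitute your own keys:

# .env (placeholder values)
LEMMA_API_KEY=your-lemma-api-key
LEMMA_PROJECT_ID=your-lemma-project-id
# The OpenAI Agents SDK itself reads OPENAI_API_KEY
OPENAI_API_KEY=your-openai-api-key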

Examples

Basic Agent

import asyncio
from uselemma_tracing import TraceContext, instrument_openai_agents, wrap_agent

# Must come before importing openai-agents
instrument_openai_agents()

from agents import Agent, Runner

agent = Agent(
    name="Assistant",
    instructions="You are a helpful assistant.",
)

async def run_agent(ctx: TraceContext, input: dict):
    result = await Runner.run(agent, input["user_message"])
    ctx.on_complete(result.final_output)
    return result.final_output

async def main():
    user_message = "What is the capital of France?"

    wrapped = wrap_agent("assistant", run_agent)

    result, run_id, span = await wrapped({"user_message": user_message})
    print(f"run_id: {run_id}")
    print(f"result: {result}")

asyncio.run(main())
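
The wrapped function returns the final output together with the Lemma run ID and the root span; the agent and LLM spans emitted by the auto-instrumentation attach beneath that root span automatically.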

Agent with Tools

import asyncio
from uselemma_tracing import TraceContext, instrument_openai_agents, wrap_agent

# Must come before importing openai-agents
instrument_openai_agents()

from agents import Agent, Runner, function_tool

@function_tool
def lookup_order(order_id: str) -> str:
    """Look up an order by ID."""
    return f"Order {order_id} is shipped and arriving tomorrow."

agent = Agent(
    name="Order assistant",
    instructions="You help customers check their order status.",
    tools=[lookup_order],
)

async def run_agent(ctx: TraceContext, input: dict):
    result = await Runner.run(agent, input["user_message"])
    ctx.on_complete(result.final_output)
    return result.final_output

async def main():
    user_message = "What's the status of order #12345?"

    wrapped = wrap_agent("order-agent", run_agent)

    result, run_id, span = await wrapped({"user_message": user_message})
    print(f"run_id: {run_id}")
    print(f"result: {result}")

asyncio.run(main())
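
In the resulting trace, the lookup_order tool call appears as its own child span alongside the LLM calls, under the top-level order-agent span.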

Multi-Agent Handoffs

import asyncio
from uselemma_tracing import TraceContext, instrument_openai_agents, wrap_agent

# Must come before importing openai-agents
instrument_openai_agents()

from agents import Agent, Runner

spanish_agent = Agent(
    name="Spanish agent",
    instructions="You only respond in Spanish.",
)

english_agent = Agent(
    name="English agent",
    instructions="You only respond in English.",
)

triage_agent = Agent(
    name="Triage agent",
    instructions="Hand off to the appropriate agent based on the language of the request.",
    handoffs=[spanish_agent, english_agent],
)

async def run_agent(ctx: TraceContext, input: dict):
    result = await Runner.run(triage_agent, input["user_message"])
    ctx.on_complete(result.final_output)
    return result.final_output

async def main():
    user_message = "Hola, ¿cómo estás?"

    wrapped = wrap_agent("triage-agent", run_agent)

    result, run_id, span = await wrapped({"user_message": user_message})
    print(f"run_id: {run_id}")
    print(f"result: {result}")

asyncio.run(main())
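
Because handoffs are instrumented too, the trace shows the triage agent's run, the handoff, and the Spanish agent's run as separate spans under the top-level triage-agent span.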

OpenAI Traces Dashboard

By default, the OpenAI Agents SDK also sends traces to OpenAI’s own Traces dashboard. This works independently of Lemma — you get visibility in both platforms without any extra configuration. If you want to disable OpenAI’s built-in tracing and only send traces to Lemma, set the OPENAI_AGENTS_DISABLE_TRACING environment variable:
OPENAI_AGENTS_DISABLE_TRACING=1
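
If you'd rather flip this switch in code, the Agents SDK also exposes a set_tracing_disabled helper that toggles the same global setting:

from agents import set_tracing_disabled

# Same effect as OPENAI_AGENTS_DISABLE_TRACING=1; call before running any agents
set_tracing_disabled(True)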

Next Steps