The OpenAI Agents SDK (@openai/agents) is a framework for building multi-agent workflows with tool calling, handoffs, guardrails, and built-in tracing. This guide shows you how to send traces from your OpenAI Agents applications to Lemma.
How It Works
The OpenAI Agents SDK has its own tracing system that captures agent runs, LLM generations, tool calls, handoffs, and guardrails. Using wrapAgent from @uselemma/tracing, you can wrap your agent runs to send top-level execution traces to Lemma — including inputs, outputs, timing, and metadata.
For deeper visibility, you can add a custom TracingProcessor that bridges the SDK’s native spans into OpenTelemetry, giving Lemma full access to every operation in your agent workflow.
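Here is a minimal sketch of what such a bridge could look like. It assumes the SDK's addTraceProcessor hook and the shape of its span objects (spanData.type, startedAt, endedAt, spanId); check those names against your SDK version. For clarity it mirrors each finished SDK span as a flat OpenTelemetry span, without parent linking:

// otel-bridge.ts (a sketch, not a drop-in implementation)
import { trace } from '@opentelemetry/api';
import { addTraceProcessor } from '@openai/agents';

const tracer = trace.getTracer('openai-agents-bridge');

addTraceProcessor({
  onTraceStart: async () => {},
  onTraceEnd: async () => {},
  onSpanStart: async () => {},
  onSpanEnd: async (span) => {
    // Mirror each finished SDK span as an OpenTelemetry span,
    // keeping the original timestamps so durations are preserved.
    const otelSpan = tracer.startSpan(span.spanData.type, {
      startTime: span.startedAt ? new Date(span.startedAt) : undefined,
      attributes: { 'agents.span_id': span.spanId },
    });
    otelSpan.end(span.endedAt ? new Date(span.endedAt) : undefined);
  },
  shutdown: async () => {},
  forceFlush: async () => {},
});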
Getting Started
Install Dependencies
npm install @openai/agents @uselemma/tracing zod
Set Up the Tracer Provider
Next.js (Recommended)
Create an instrumentation.ts file in your project root:
// instrumentation.ts
export async function register() {
  if (process.env.NEXT_RUNTIME === 'nodejs') {
    const { registerOTel } = await import('@uselemma/tracing');
    registerOTel();
  }
}
Make sure to enable instrumentation in your next.config.js:
// next.config.js
module.exports = {
  experimental: {
    instrumentationHook: true,
  },
};
Node.js (Non-Next.js)
Create a tracer setup file:
// tracer.ts
import { registerOTel } from '@uselemma/tracing';
registerOTel();
Then import it at the top of your application entry point:
// index.ts or server.ts
import './tracer'; // Must be first!
// ... rest of your imports
Set the LEMMA_API_KEY and LEMMA_PROJECT_ID environment variables in your application. You can find these in your Lemma project settings.
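For local development, these typically live in a .env file (placeholder values shown; copy the real ones from your project settings):

# .env (placeholder values, not real credentials)
LEMMA_API_KEY=your-lemma-api-key
LEMMA_PROJECT_ID=your-lemma-project-id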
Examples
Basic Agent
import { wrapAgent } from '@uselemma/tracing';
import { Agent, run } from '@openai/agents';

const agent = new Agent({
  name: 'Assistant',
  instructions: 'You are a helpful assistant.',
});

export const callAgent = async (userMessage: string) => {
  const wrappedFn = wrapAgent(
    'my-agent',
    async ({ onComplete }, input) => {
      const result = await run(agent, input.userMessage);
      onComplete(result.finalOutput);
      return result.finalOutput;
    }
  );

  const { result, runId } = await wrappedFn({ userMessage });
  return { result, runId };
};
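To see how this fits into an application, here is a hypothetical Next.js route handler that calls callAgent; the file path and response shape are illustrative, not part of either SDK:

// app/api/chat/route.ts (illustrative path)
import { NextResponse } from 'next/server';
import { callAgent } from '@/lib/agent'; // wherever your callAgent lives

export async function POST(req: Request) {
  const { message } = await req.json();
  const { result, runId } = await callAgent(message);
  // Returning runId lets the client correlate this response with its trace in Lemma.
  return NextResponse.json({ result, runId });
}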
Agent with Tools
import { z } from 'zod';
import { wrapAgent } from '@uselemma/tracing';
import { Agent, run, tool } from '@openai/agents';

const lookupOrder = tool({
  name: 'lookup_order',
  description: 'Look up an order by ID',
  parameters: z.object({ orderId: z.string() }),
  execute: async (input) => {
    // your lookup logic
    return { orderId: input.orderId, status: 'shipped' };
  },
});

const agent = new Agent({
  name: 'Order assistant',
  instructions: 'You help customers check their order status.',
  tools: [lookupOrder],
});

export const callAgent = async (userMessage: string) => {
  const wrappedFn = wrapAgent(
    'order-agent',
    async ({ onComplete }, input) => {
      const result = await run(agent, input.userMessage);
      onComplete(result.finalOutput);
      return result.finalOutput;
    }
  );

  const { result, runId } = await wrappedFn({ userMessage });
  return { result, runId };
};
Multi-Agent Handoffs
import { z } from 'zod';
import { wrapAgent } from '@uselemma/tracing';
import { Agent, run, tool } from '@openai/agents';

const getWeather = tool({
  name: 'get_weather',
  description: 'Get the weather for a city',
  parameters: z.object({ city: z.string() }),
  execute: async (input) => `The weather in ${input.city} is sunny, 72°F.`,
});

const weatherAgent = new Agent({
  name: 'Weather agent',
  instructions: 'You provide weather information.',
  handoffDescription: 'Handles weather-related questions',
  tools: [getWeather],
});

const triageAgent = Agent.create({
  name: 'Triage agent',
  instructions:
    'You route user questions to the right specialist. Hand off to the weather agent for weather questions.',
  handoffs: [weatherAgent],
});

export const callAgent = async (userMessage: string) => {
  const wrappedFn = wrapAgent(
    'triage-agent',
    async ({ onComplete }, input) => {
      const result = await run(triageAgent, input.userMessage);
      onComplete(result.finalOutput);
      return result.finalOutput;
    }
  );

  const { result, runId } = await wrappedFn({ userMessage });
  return { result, runId };
};
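If you want to check which agent ultimately answered (for example, to log it alongside the runId), the run result exposes the last active agent; this sketch assumes the SDK's lastAgent property:

const result = await run(triageAgent, 'What is the weather in Tokyo?');
// lastAgent is the agent that produced the final output:
// the weather agent here, if the triage agent handed off.
console.log(result.lastAgent?.name); // e.g. 'Weather agent'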
Streaming
For streaming responses, set autoEndRoot: true so the RunBatchSpanProcessor automatically ends the root span when all direct child spans have finished. Call onComplete to record the output once the stream finishes:
import { wrapAgent } from '@uselemma/tracing';
import { Agent, run } from '@openai/agents';

const agent = new Agent({
  name: 'Assistant',
  instructions: 'You are a helpful assistant.',
});

export const callAgent = async (userMessage: string) => {
  const wrappedFn = wrapAgent(
    'my-agent',
    async ({ onComplete, recordError }, input) => {
      try {
        const result = await run(agent, input.userMessage, {
          stream: true,
        });

        let finalOutput = '';
        for await (const event of result) {
          if (
            event.type === 'run_item_stream_event' &&
            event.item.type === 'message_output_item'
          ) {
            finalOutput = event.item.rawItem.content
              .map((c) => (c.type === 'output_text' ? c.text : ''))
              .join('');
          }
        }

        onComplete({ finalOutput });
        return result;
      } catch (error) {
        recordError(error);
        throw error;
      }
    },
    { autoEndRoot: true }
  );

  const { result, runId } = await wrappedFn({ userMessage });
  return { result, runId };
};
OpenAI Traces Dashboard
By default, the OpenAI Agents SDK also sends traces to OpenAI’s own Traces dashboard. This works independently of Lemma — you get visibility in both platforms without any extra configuration.
If you want to disable OpenAI’s built-in tracing and only send traces to Lemma, set the OPENAI_AGENTS_DISABLE_TRACING environment variable:
OPENAI_AGENTS_DISABLE_TRACING=1
Next Steps