Use @uselemma/tracing with the OpenAI Node SDK to get a top-level run trace from `agent()` plus a child span for every `chat.completions.create` (or Responses API) call, complete with prompts, completions, model, token usage, and timing.
How It Works
`registerOTel()` sets up the OTel transport. The `OpenAIInstrumentation` from OpenInference patches the `openai` package so that every API call emits a `gen_ai.chat` child span under whatever span is currently active, including the run span that `agent()` opens.
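That parent lookup can be illustrated with a toy sketch (not the real implementation; the span shape and function names here are illustrative) built on Node's `AsyncLocalStorage`, the same mechanism OTel's Node context manager uses: the patched call simply reads whatever span is active in the current async context and records it as its parent.

```typescript
import { AsyncLocalStorage } from "node:async_hooks";

type Span = { name: string; parent?: string };

const activeSpan = new AsyncLocalStorage<Span>();

// Run fn with a new span installed as the "currently active" span.
function startActiveSpan<T>(name: string, fn: () => T): T {
  const span: Span = { name, parent: activeSpan.getStore()?.name };
  return activeSpan.run(span, fn);
}

// A "patched" SDK method: it reads the active context and attaches its
// new span as a child of whatever span it finds there.
function patchedApiCall(): Span {
  return { name: "gen_ai.chat", parent: activeSpan.getStore()?.name };
}

const chatSpan = startActiveSpan("ai.agent.run", () => patchedApiCall());
// chatSpan is parented to "ai.agent.run" even though patchedApiCall
// never received the parent span explicitly.
```

This is why no wiring is needed between `agent()` and the OpenAI SDK: the run span is active in the async context when the patched call fires, so the child attaches automatically.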
Getting Started
Install
npm install @uselemma/tracing openai @opentelemetry/instrumentation @arizeai/openinference-instrumentation-openai
Register at startup
In a Next.js app, export a `register()` hook from `instrumentation.ts`:

```ts
// instrumentation.ts
export async function register() {
  if (process.env.NEXT_RUNTIME === "nodejs") {
    const { registerOTel } = await import("@uselemma/tracing");
    const { registerInstrumentations } = await import("@opentelemetry/instrumentation");
    const { OpenAIInstrumentation } = await import(
      "@arizeai/openinference-instrumentation-openai"
    );

    const provider = registerOTel();
    registerInstrumentations({
      instrumentations: [new OpenAIInstrumentation()],
      tracerProvider: provider,
    });
  }
}
```
In a plain Node service, put the registration in its own module and import it before anything else:

```ts
// tracer.ts — import this first in your entry point
import { registerOTel } from "@uselemma/tracing";
import { registerInstrumentations } from "@opentelemetry/instrumentation";
import { OpenAIInstrumentation } from "@arizeai/openinference-instrumentation-openai";

const provider = registerOTel();
registerInstrumentations({
  instrumentations: [new OpenAIInstrumentation()],
  tracerProvider: provider,
});
```

```ts
// index.ts
import "./tracer"; // must be first
```
Set the `LEMMA_API_KEY` and `LEMMA_PROJECT_ID` environment variables. You can find both in your Lemma project settings.
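If you want to fail fast when those variables are missing rather than silently drop spans, a small guard at startup can help. This helper is an assumption added for illustration, not part of @uselemma/tracing:

```typescript
// Hypothetical startup guard — not part of @uselemma/tracing.
// Throws if any required Lemma credential is absent from the environment.
function assertLemmaEnv(env: Record<string, string | undefined> = process.env): void {
  const missing = ["LEMMA_API_KEY", "LEMMA_PROJECT_ID"].filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing Lemma env vars: ${missing.join(", ")}`);
  }
}
```

Call it once at startup, before registering instrumentation.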
Examples
Single-turn completion
```ts
import { agent } from "@uselemma/tracing";
import OpenAI from "openai";

const openai = new OpenAI();

const supportAgent = agent("support-agent", async (input: { message: string }) => {
  const response = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [
      { role: "system", content: "You are a helpful support agent." },
      { role: "user", content: input.message },
    ],
  });
  return response.choices[0].message.content ?? "";
});

const { result, runId } = await supportAgent({ message: "How do I reset my password?" });
```
Tool calls

```ts
import { agent, tool } from "@uselemma/tracing";
import OpenAI from "openai";

const openai = new OpenAI();

// Stub implementation — replace with a real order lookup.
const lookupOrder = tool("lookup-order", async (orderId: string) => {
  return { orderId, status: "shipped", estimatedDelivery: "2026-04-10" };
});

const orderAgent = agent("order-agent", async (input: { message: string }) => {
  const tools = [
    {
      type: "function" as const,
      function: {
        name: "lookup_order",
        description: "Look up an order by ID",
        parameters: {
          type: "object",
          properties: { order_id: { type: "string" } },
          required: ["order_id"],
        },
      },
    },
  ];

  const messages: OpenAI.ChatCompletionMessageParam[] = [
    { role: "user", content: input.message },
  ];

  let response = await openai.chat.completions.create({ model: "gpt-4o-mini", messages, tools });

  // Keep calling the model until it stops requesting tools.
  while (response.choices[0].finish_reason === "tool_calls") {
    const toolCalls = response.choices[0].message.tool_calls ?? [];
    messages.push(response.choices[0].message);
    for (const call of toolCalls) {
      const args = JSON.parse(call.function.arguments);
      const result = await lookupOrder(args.order_id); // creates a tool.lookup-order child span
      messages.push({ role: "tool", tool_call_id: call.id, content: JSON.stringify(result) });
    }
    response = await openai.chat.completions.create({ model: "gpt-4o-mini", messages, tools });
  }

  return response.choices[0].message.content ?? "";
});

const { result, runId } = await orderAgent({ message: "What's the status of order ORD-123?" });
```
What You’ll See in Lemma
| Span | Source | Contains |
|---|---|---|
| `ai.agent.run` | `agent()` | Full run input, output, timing, run ID |
| `gen_ai.chat` | OpenInference | Model name, prompt messages, completion, token usage |
| `tool.lookup-order` | `tool()` helper | Tool input and return value |
Next Steps