This guide shows how to instrument an agent built from scratch, without relying on a framework integration. You will add:
  1. a top-level run span using agent()
  2. generation spans for LLM calls using llm()
  3. tool-call spans using tool()
  4. run-level metadata for filtering
  5. error handling for failed runs and child spans

Prerequisites

  • You have a Lemma API key and project ID.
  • You can run a plain Node.js or Python app.
  • Your agent loop is code you control directly (no framework wrapper).
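
The snippets below read credentials from environment variables. A minimal shell setup, with placeholder values you should replace with your real Lemma and OpenAI credentials:

```shell
# Placeholder values; substitute your actual credentials.
export LEMMA_API_KEY="lm_placeholder"
export LEMMA_PROJECT_ID="proj_placeholder"
export OPENAI_API_KEY="sk-placeholder"
```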

Instrument the Agent

Step 1: Install and configure tracing

npm install @uselemma/tracing openai
// tracer.ts
import { registerOTel } from "@uselemma/tracing";

registerOTel({
  apiKey: process.env.LEMMA_API_KEY,
  projectId: process.env.LEMMA_PROJECT_ID,
});
Import this file before any code that creates spans or invokes your agent, so tracing is registered first.
Step 2: Wrap your agent as a run

Use agent() to create the top-level run span.
import "./tracer";
import { agent } from "@uselemma/tracing";

const runAgent = agent("scratch-agent", async (input, ctx) => {
  ctx.span.setAttribute("lemma.user_id", input.userId);
  ctx.span.setAttribute("lemma.session_id", input.sessionId);
  ctx.span.setAttribute("lemma.feature", "support_chat");

  return await executeAgentLoop(input.message);
});
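
The intro promises error handling for failed runs: if the wrapped function throws, the wrapper records the error on the run span, closes the span, and re-throws. A rough plain-TypeScript sketch of that behavior, where `FakeSpan`, `spans`, and `withRunSpan` are illustrative stand-ins and not part of the @uselemma/tracing API:

```typescript
// Illustrative stand-ins only; the library's internals may differ.
interface FakeSpan {
  name: string;
  status: "ok" | "error";
  error?: string;
  ended: boolean;
}

// Spans are collected here so the behavior can be inspected.
const spans: FakeSpan[] = [];

// Sketch of what an agent()/llm()/tool() wrapper plausibly does:
// run the function, record any thrown error on the span, always end it.
async function withRunSpan<T>(
  name: string,
  fn: (span: FakeSpan) => Promise<T>
): Promise<T> {
  const span: FakeSpan = { name, status: "ok", ended: false };
  spans.push(span);
  try {
    return await fn(span);
  } catch (err) {
    span.status = "error";
    span.error = err instanceof Error ? err.message : String(err);
    throw err; // the failure still propagates to the caller
  } finally {
    span.ended = true; // the span is closed on both success and failure
  }
}
```

This is why the wiring section later notes that no manual try/finally is needed in your own agent code: the wrapper owns the span lifecycle on both paths.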
Step 3: Wrap LLM calls with llm()

The llm() helper creates a generation span under the active run. Pass the model name as the span label.
import { llm } from "@uselemma/tracing";
import OpenAI from "openai";

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const generate = llm("gpt-4o", async (userMessage: string) => {
  const response = await client.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: userMessage }],
  });
  return response.choices[0]?.message?.content ?? "";
});
Step 4: Wrap tools with tool()

The tool() helper creates a tool-call span nested under the active run.
import { tool } from "@uselemma/tracing";

const getWeatherTool = tool("get-weather", async (city: string) => {
  return await getWeather(city);
});
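
The tool examples call a `getWeather(city)` helper that this guide never defines. A hypothetical stub, handy for exercising the tracing locally without a real weather API:

```typescript
// Hypothetical stand-in for the getWeather helper used in the tool examples.
// Replace with a real weather API call in production.
async function getWeather(city: string): Promise<string> {
  const canned: Record<string, string> = {
    London: "12°C, overcast",
    Tokyo: "21°C, clear",
  };
  return `${city}: ${canned[city] ?? "conditions unknown"}`;
}
```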
Step 5: Wire everything into one agent loop

import "./tracer";
import { agent, llm, tool } from "@uselemma/tracing";
import OpenAI from "openai";

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const generate = llm("gpt-4o", async (prompt: string) => {
  const res = await client.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: prompt }],
  });
  return res.choices[0]?.message?.content ?? "";
});

const getWeatherTool = tool("get-weather", async (city: string) => {
  return await getWeather(city);
});

const runAgent = agent("scratch-agent", async (input, ctx) => {
  ctx.span.setAttribute("lemma.user_id", input.userId);

  const weatherContext = await getWeatherTool(input.city); // span: tool.get-weather
  const answer = await generate(input.message);             // span: llm.gpt-4o

  return answer;
});
The helpers handle error recording and the span lifecycle automatically; no try/finally blocks or manual span.end() calls are needed.
Step 6: Run and verify in Lemma

  • Execute one agent request.
  • In Lemma, verify:
    • top-level ai.agent.run span exists
    • llm.gpt-4o generation span is nested under the run
    • tool.get-weather span appears with the tool result
    • custom metadata (lemma.user_id, lemma.session_id) is filterable

Troubleshooting checklist

  • No runs visible: ensure registerOTel / register_otel runs before your app logic.
  • Run appears but no child spans: make sure llm() / tool() wrapped functions are called inside the agent() wrapper.
  • Run never closes: in streaming mode, ensure ctx.complete() is called inside the finish callback.
  • Missing metadata filters: verify attributes are set on the run span, not on unrelated spans.
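
The streaming item in the checklist follows a general pattern that can be sketched without the library: whatever code consumes the stream must invoke the completion hook when the stream finishes (or fails), or the run span stays open forever. Here `completeRun` is an illustrative stand-in for ctx.complete(), and `streamTokens` is a hypothetical token source:

```typescript
// Hypothetical token source standing in for a streaming LLM response.
async function* streamTokens(tokens: string[]): AsyncGenerator<string> {
  for (const t of tokens) yield t;
}

// Consume the stream and guarantee the completion hook runs,
// even if iteration throws partway through.
async function consumeStream(
  tokens: string[],
  completeRun: () => void // stand-in for ctx.complete()
): Promise<string> {
  let out = "";
  try {
    for await (const t of streamTokens(tokens)) out += t;
  } finally {
    completeRun(); // without this, the run span never closes
  }
  return out;
}
```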