Each page in this section uses TypeScript and Python tabs for code and runtime-specific notes.

Environment variables

export LEMMA_API_KEY="lma_..."
export LEMMA_PROJECT_ID="proj_..."
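
The SDK reads these variables at startup, so a fail-fast check can surface a missing key before the first trace is silently dropped. A minimal sketch; `requireEnv` is a hypothetical helper for illustration, not part of @uselemma/tracing:

```typescript
// Hypothetical startup check (not part of @uselemma/tracing):
// throws immediately if a required variable is missing or blank.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value || value.trim() === "") {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Run before registerOTel(), e.g.:
// const apiKey = requireEnv("LEMMA_API_KEY");
// const projectId = requireEnv("LEMMA_PROJECT_ID");
```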

Install dependencies

npm install @uselemma/tracing @opentelemetry/api
If you want automatic per-LLM-call spans, add provider instrumentation:
npm install @opentelemetry/instrumentation @arizeai/openinference-instrumentation-openai @arizeai/openinference-instrumentation-anthropic

Register OpenTelemetry once at startup

Call registerOTel() once, before any code that creates spans runs.
// instrumentation.ts (Next.js)
export async function register() {
  if (process.env.NEXT_RUNTIME === "nodejs") {
    const { registerOTel } = await import("@uselemma/tracing");
    registerOTel();
  }
}
// tracer.ts (Node.js)
import { registerOTel } from "@uselemma/tracing";

registerOTel();

Create a run

A run is one top-level wrapAgent execution. Return both the wrapped function's result and the runId so you can attach external signals to the run later.
import { wrapAgent } from "@uselemma/tracing";

export const callAgent = async (userMessage: string) => {
  const wrapped = wrapAgent("my-agent", async ({ onComplete }, input) => {
    const response = await llmCall(input.userMessage);
    onComplete(response);
    return response;
  });

  const { result, runId } = await wrapped(
    { userMessage },
    { threadId: sessionThreadId } // optional; omit second arg if unused
  );
  return { result, runId };
};
See Wrapping your agent for the threadId and isExperiment options on the wrapped function.

What is captured automatically vs manually

Understanding this boundary prevents the most common “data is missing” issues.
Runs capture input, output, timing, and errors on the root span. Per-LLM-call detail requires OpenInference instrumentation or manual llm.step spans. Tool arguments and results require manual tool.call spans. Business metadata is never captured automatically; set it as attributes on the run span yourself.
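
OpenTelemetry attribute values must be primitives or arrays of primitives, so nested business metadata is typically flattened into dot-delimited keys before being set on the run span. A sketch under that assumption; `flattenAttributes` is an illustrative helper, not part of @uselemma/tracing:

```typescript
// Hypothetical helper: flattens nested business metadata into the
// dot-delimited, primitive-valued shape OpenTelemetry attributes require.
type Primitive = string | number | boolean;
type AttrValue = Primitive | Primitive[];

function flattenAttributes(
  obj: Record<string, unknown>,
  prefix = ""
): Record<string, AttrValue> {
  const out: Record<string, AttrValue> = {};
  for (const [key, value] of Object.entries(obj)) {
    const path = prefix ? `${prefix}.${key}` : key;
    if (value !== null && typeof value === "object" && !Array.isArray(value)) {
      // Recurse into plain objects, extending the key path.
      Object.assign(out, flattenAttributes(value as Record<string, unknown>, path));
    } else if (value !== undefined) {
      // Primitives and arrays of primitives are kept as-is.
      out[path] = value as AttrValue;
    }
  }
  return out;
}

// flattenAttributes({ customer: { tier: "pro" }, retries: 2 })
// → { "customer.tier": "pro", "retries": 2 }
```

The flattened record can then be passed to the run span's setAttributes call in one go, rather than setting each key individually.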

Optional: add OpenInference instrumentors

Use this when you want provider-generated LLM spans to appear as run children.
import { registerOTel } from "@uselemma/tracing";
import { registerInstrumentations } from "@opentelemetry/instrumentation";
import { OpenAIInstrumentation } from "@arizeai/openinference-instrumentation-openai";

const provider = registerOTel();
registerInstrumentations({
  instrumentations: [new OpenAIInstrumentation()],
  tracerProvider: provider,
});
For LiteLLM call paths and import pitfalls, see Provider instrumentation.

Next steps