This page describes how Lemma wires OpenTelemetry to its API: tracer provider, span processor, exporter, and context. It is not a symptom-based troubleshooting guide — for that, see Common issues and Debug mode.

Architecture

Your Application
  ↓  wrapAgent / trace.getTracer()
NodeTracerProvider (creates spans)
  ↓
RunBatchSpanProcessor (batches spans per agent run)
  ↓
OTLPTraceExporter (sends over HTTP)
  ↓
Lemma API (https://api.uselemma.ai/otel/v1/traces)

Components

NodeTracerProvider — The factory that creates tracers and spans. Integrates with Node.js async hooks to propagate context across async boundaries.

RunBatchSpanProcessor — A custom span processor that groups spans by agent run ID. When the top-level ai.agent.run span ends, all spans belonging to that run are exported together in a single batch. It also propagates the lemma.run_id attribute to child spans automatically and filters out framework-internal spans (e.g. Next.js instrumentation).

OTLPTraceExporter — Serializes spans using OpenTelemetry Protocol (OTLP) and sends them to Lemma with authentication headers.
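The batching behavior of RunBatchSpanProcessor can be sketched in isolation. The types and internals below are simplified assumptions for illustration, not Lemma's actual implementation: finished spans are buffered per run ID, and the buffer is flushed to the exporter when the run's root span ends.

```typescript
// Illustrative sketch only; type shapes and class internals are assumptions.
interface SketchSpan {
  name: string;
  attributes: Record<string, string>;
}

interface SketchExporter {
  export(spans: SketchSpan[]): void;
}

class RunBatchSpanProcessorSketch {
  // Finished spans buffered per run ID until the run's root span ends.
  private buffers = new Map<string, SketchSpan[]>();

  constructor(private exporter: SketchExporter) {}

  onEnd(span: SketchSpan): void {
    const runId = span.attributes["lemma.run_id"];
    if (!runId) return; // skip spans that don't belong to an agent run
    const batch = this.buffers.get(runId) ?? [];
    batch.push(span);
    this.buffers.set(runId, batch);
    // When the top-level agent span ends, export the whole run as one batch.
    if (span.name === "ai.agent.run") {
      this.exporter.export(batch);
      this.buffers.delete(runId);
    }
  }
}
```

Because spans without a lemma.run_id attribute are dropped before buffering, framework-internal spans never reach the exporter.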

Context propagation

When wrapAgent creates a span, it:
  1. Opens a new root span against ROOT_CONTEXT: tracer.startSpan("ai.agent.run", { ... }, ROOT_CONTEXT)
  2. Sets the span on the context: trace.setSpan(ROOT_CONTEXT, span)
  3. Executes your agent function inside that context: context.with(ctx, async () => { ... })
  4. Any span created inside the callback automatically becomes a child, because tracers resolve the parent from context.active()
Using ROOT_CONTEXT ensures each wrapAgent call creates an independent trace, even when called from within another traced context. Spans from frameworks like the Vercel AI SDK automatically nest under your agent span — the context is propagated transparently across async operations, HTTP requests, and framework boundaries.

Source code

The snippets below are illustrative of how Lemma’s SDK builds the processor and provider.
import { NodeTracerProvider } from "@opentelemetry/sdk-trace-node";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-proto";
import { RunBatchSpanProcessor } from "./run-batch-span-processor";

export function createLemmaSpanProcessor(options = {}) {
  const apiKey = options.apiKey ?? process.env.LEMMA_API_KEY;
  const projectId = options.projectId ?? process.env.LEMMA_PROJECT_ID;
  const baseUrl = options.baseUrl ?? "https://api.uselemma.ai";

  if (!apiKey || !projectId) {
    throw new Error(
      "Missing Lemma API key or project ID. Pass them via options or set LEMMA_API_KEY and LEMMA_PROJECT_ID."
    );
  }

  return new RunBatchSpanProcessor(
    new OTLPTraceExporter({
      url: `${baseUrl}/otel/v1/traces`,
      headers: {
        Authorization: `Bearer ${apiKey}`,
        "X-Lemma-Project-ID": projectId,
      },
    })
  );
}

export function registerOTel(options = {}) {
  const tracerProvider = new NodeTracerProvider({
    spanProcessors: [createLemmaSpanProcessor(options)],
  });

  tracerProvider.register();
  return tracerProvider;
}
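Given the environment-variable fallbacks above, credentials can also be supplied from the shell instead of options (placeholder values shown):

```shell
# Placeholder values; substitute your project's credentials.
export LEMMA_API_KEY="<your-api-key>"
export LEMMA_PROJECT_ID="<your-project-id>"
```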

Next steps