Architecture
Components
NodeTracerProvider — The factory that creates tracers and spans. Integrates with Node.js async hooks to propagate context across async boundaries.
RunBatchSpanProcessor — A custom span processor that groups spans by agent run ID. When the top-level ai.agent.run span ends, all spans belonging to that run are exported together in a single batch. It also propagates the lemma.run_id attribute to child spans automatically and filters out framework-internal spans (e.g. Next.js instrumentation).
OTLPTraceExporter — Serializes spans using OpenTelemetry Protocol (OTLP) and sends them to Lemma with authentication headers.
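The run-batching behavior described above can be sketched in a few lines. This is a dependency-free illustration, not Lemma's actual source: the class and method names, the SpanRecord shape, and the next.js name prefix are assumptions; only the ai.agent.run and lemma.run_id names come from the description above.

```typescript
// Illustrative sketch of a run-batching span processor (not Lemma's real code).
interface SpanRecord {
  name: string;
  attributes: Record<string, string>;
}

// Assumed naming convention for framework-internal spans to drop.
const IGNORED_PREFIXES = ["next.js"];

class RunBatchProcessor {
  private buffers = new Map<string, SpanRecord[]>();

  constructor(private exportBatch: (batch: SpanRecord[]) => void) {}

  onStart(span: SpanRecord, runId: string): void {
    // Propagate the run id to every span in the run automatically.
    span.attributes["lemma.run_id"] = runId;
  }

  onEnd(span: SpanRecord): void {
    const runId = span.attributes["lemma.run_id"];
    if (!runId) return;
    // Filter out framework-internal spans (e.g. Next.js instrumentation).
    if (IGNORED_PREFIXES.some((p) => span.name.startsWith(p))) return;

    const buffer = this.buffers.get(runId) ?? [];
    buffer.push(span);
    this.buffers.set(runId, buffer);

    // When the top-level ai.agent.run span ends, flush the run as one batch.
    if (span.name === "ai.agent.run") {
      this.buffers.delete(runId);
      this.exportBatch(buffer);
    }
  }
}
```

Buffering per run id is what allows a single export call per agent run even when several runs end their spans interleaved.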
Context propagation
When wrapAgent creates a span, it:
- Opens a new root span against ROOT_CONTEXT: tracer.startSpan("ai.agent.run", { ... }, ROOT_CONTEXT)
- Sets the span on the context: trace.setSpan(ROOT_CONTEXT, span)
- Executes your agent function inside that context: context.with(ctx, async () => { ... })
- Any spans created inside automatically become children by checking context.active()
ROOT_CONTEXT ensures each wrapAgent call creates an independent trace, even when called from within another traced context. Spans from frameworks like the Vercel AI SDK automatically nest under your agent span — the context is propagated transparently across async operations, HTTP requests, and framework boundaries.
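The mechanism behind this transparent propagation is Node's AsyncLocalStorage (the async-hooks facility mentioned above). The toy sketch below mimics the context.with / context.active pair; the Span shape and all function names here are illustrative stand-ins, far simpler than the real OpenTelemetry types.

```typescript
import { AsyncLocalStorage } from "node:async_hooks";

// Minimal stand-in for a span; the real SDK types are much richer.
interface Span {
  name: string;
  parent?: Span;
}

// Holds the "active span", like OpenTelemetry's context storage.
const storage = new AsyncLocalStorage<Span>();

// Like trace.setSpan(ROOT_CONTEXT, span) + context.with(ctx, fn):
// runs fn with `span` active, ignoring whatever context was active before.
function withRootSpan<T>(name: string, fn: () => Promise<T>): Promise<T> {
  const span: Span = { name }; // no parent — an independent trace
  return storage.run(span, fn);
}

// Like tracer.startSpan(name): parented to whatever context.active() holds.
function startChildSpan(name: string): Span {
  return { name, parent: storage.getStore() };
}
```

Because AsyncLocalStorage follows the async call chain, a child span created after an await, inside a callback, or deep in a framework still finds the enclosing root span; outside withRootSpan it finds none.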
Source code
The snippets below are illustrative of how Lemma’s SDK builds the processor and provider.
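For orientation, wiring a provider to an OTLP exporter with the upstream OpenTelemetry JS SDK (1.x API) looks roughly like this. The endpoint URL, header name, and environment variable are assumptions rather than Lemma's documented values, and the stock BatchSpanProcessor stands in where Lemma's SDK would use its RunBatchSpanProcessor.

```typescript
import { NodeTracerProvider } from "@opentelemetry/sdk-trace-node";
import { BatchSpanProcessor } from "@opentelemetry/sdk-trace-base";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";

// Serializes spans as OTLP and sends them with an auth header
// (the endpoint and header shown here are placeholders).
const exporter = new OTLPTraceExporter({
  url: "https://example.invalid/v1/traces",
  headers: { Authorization: `Bearer ${process.env.LEMMA_API_KEY ?? ""}` },
});

const provider = new NodeTracerProvider();
// Lemma's SDK would attach its RunBatchSpanProcessor here instead.
provider.addSpanProcessor(new BatchSpanProcessor(exporter));

// Registers the provider globally and enables async context propagation.
provider.register();
```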
Next steps
- Instrumentation — Register OpenTelemetry and wrapAgent
- Provider instrumentation — OpenInference and run vs child spans

