
Quick Start

Instrumenting your application means setting up OpenTelemetry to send traces to Lemma. This page focuses on OTel setup and architecture; for a run/step/tool-call lifecycle guide, see Custom Instrumentation. Here's how:

1. Set environment variables

export LEMMA_API_KEY="lma_..."
export LEMMA_PROJECT_ID="proj_..."

2. Call registerOTel() before your application code runs

Create instrumentation.ts at your project root:
// instrumentation.ts
export async function register() {
  if (process.env.NEXT_RUNTIME === 'nodejs') {
    const { registerOTel } = await import('@uselemma/tracing');
    registerOTel();
  }
}
The NEXT_RUNTIME check ensures instrumentation only runs on the server.

3. Use wrapAgent to trace your agent functions

registerOTel() (register_otel() in Python) sets up export and batching. wrapAgent (wrap_agent) creates the top-level run span, capturing inputs, outputs, and timing. Per-LLM-call prompts and completions require a separate layer, OpenInference on the provider SDK; see Provider instrumentation.
import { wrapAgent } from '@uselemma/tracing';

const myAgent = wrapAgent('my-agent', async (ctx, input) => {
  const result = await doWork(input);
  ctx.onComplete(result);
  return result;
});

await myAgent({ query: 'Hello' });
// Optional: per-invocation metadata (thread linking, experiment override)
await myAgent({ query: 'Hello' }, { threadId: 'thread-abc' });
That’s it for the run boundary. Your traces will appear in Lemma. For more on threadId and isExperiment at call time, see Wrapping your agent.
Order matters! Call registerOTel() before any code that creates spans.

How It Works

Next.js

For Next.js 15+, use the instrumentation.ts file at your project root. This file runs once when the Node.js runtime starts, before any application code executes.
// instrumentation.ts
export async function register() {
  if (process.env.NEXT_RUNTIME === 'nodejs') {
    const { registerOTel } = await import('@uselemma/tracing');
    registerOTel();
  }
}
The NEXT_RUNTIME check ensures instrumentation only runs on the server, not in the browser or Edge runtime.

Node.js (General)

For standalone Node.js applications, create a dedicated tracer.ts or instrumentation.ts file and import it before any other application code:
// tracer.ts
import { registerOTel } from '@uselemma/tracing';

registerOTel();
Then in your entry point:
// index.ts or server.ts
import './tracer'; // Must be first!
import express from 'express';
import { wrapAgent } from '@uselemma/tracing';

// ... rest of your application
Order matters! The tracer must be registered before you import any code that creates spans. Otherwise, those spans won’t be captured.
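The failure mode is easy to reproduce in miniature. Here is a hypothetical sketch (not the Lemma internals): spans recorded before a processor is attached never reach it, while spans recorded afterwards do.

```typescript
// Hypothetical sketch of why registration order matters (not the real SDK).
type Span = { name: string };
type Processor = (span: Span) => void;

const processors: Processor[] = [];
const exported: string[] = [];

// Stand-in for application code that creates spans at import time.
function createSpan(name: string): void {
  // Spans are handed to whatever processors exist *right now*.
  for (const p of processors) p({ name });
}

createSpan("too-early"); // no processor attached yet, so this span is dropped

// Stand-in for registerOTel(): attaches the exporting processor.
processors.push((span) => exported.push(span.name));

createSpan("after-register"); // captured

console.log(exported); // only "after-register" survives
```

This is why the tracer import must come first: any module evaluated before it can create spans into the void.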

Python

For Python applications, call register_otel() at startup:
# app.py or __init__.py
from uselemma_tracing import register_otel

register_otel()
register_otel() (and registerOTel() in TypeScript) sets up the OpenTelemetry infrastructure to:
  1. Create spans when you call wrap_agent / wrapAgent or framework instrumentors
  2. Batch them per agent run using RunBatchSpanProcessor, so all spans for a single agent execution are grouped together
  3. Export them to Lemma’s API at https://api.uselemma.ai/otel/v1/traces when the top-level agent span ends
Once registered, spans automatically nest correctly across async operations, HTTP requests, and framework boundaries.
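Nesting survives await boundaries because Node's OTel context manager is built on AsyncLocalStorage. A minimal standalone sketch of the idea (this is an illustration of the mechanism, not the OTel implementation):

```typescript
import { AsyncLocalStorage } from "node:async_hooks";

// Each store entry is the stack of currently active "span" names.
const context = new AsyncLocalStorage<string[]>();

async function withSpan<T>(name: string, fn: () => Promise<T>): Promise<T> {
  const parent = context.getStore() ?? [];
  // The child sees its parent's stack plus itself, even across awaits.
  return context.run([...parent, name], fn);
}

const observed: string[][] = [];

async function demo(): Promise<void> {
  await withSpan("agent-run", async () => {
    await new Promise((r) => setTimeout(r, 5)); // cross an async boundary
    await withSpan("llm-call", async () => {
      observed.push(context.getStore() ?? []); // parent is still visible here
    });
  });
  observed.push(context.getStore() ?? []); // back outside: stack is empty again
}

await demo();
console.log(observed); // [["agent-run", "llm-call"], []]
```

This is why startActiveSpan parents spans correctly across HTTP handlers and awaited calls without any manual context passing.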

Configuration

By default, registerOTel() reads from environment variables. You can also pass options directly:
registerOTel({
  apiKey: 'lma_...',        // defaults to process.env.LEMMA_API_KEY
  projectId: 'proj_...',    // defaults to process.env.LEMMA_PROJECT_ID
  baseUrl: '...',           // defaults to https://api.uselemma.ai
});

Exporting to Multiple Destinations

Already sending traces to Datadog, Jaeger, or another collector? Add Lemma as a second span processor on the same TracerProvider. Every span, including those from wrapAgent and any other instrumentation, is forwarded to each processor independently.
import { NodeTracerProvider } from "@opentelemetry/sdk-trace-node";
import { BatchSpanProcessor } from "@opentelemetry/sdk-trace-base";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-proto";
import { createLemmaSpanProcessor } from "@uselemma/tracing";

const provider = new NodeTracerProvider({
  spanProcessors: [
    createLemmaSpanProcessor(),
    new BatchSpanProcessor(new OTLPTraceExporter({
      url: "https://your-collector/v1/traces",
    })),
  ],
});

provider.register();
Processor order is not significant: each processor independently receives every span, and they do not filter or gate each other, so you can add the Lemma processor before or after your existing one and both will see all spans. If tracing is already initialized (a framework or APM agent has already registered a provider), add Lemma as an additional processor rather than replacing the existing provider. In Python:
from opentelemetry import trace
from uselemma_tracing import create_lemma_span_processor

trace.get_tracer_provider().add_span_processor(create_lemma_span_processor())
Calling register_otel() replaces the global provider entirely, which would discard any processors your existing setup registered. For multi-destination setup and using OpenInference on the same custom provider, see the Dual export guide.

Provider spans (OpenInference)

To get child spans for each LLM call (prompt, completion, tokens, model), add OpenInference instrumentors for the OpenAI and/or Anthropic SDKs. That layer is separate from wrapAgent: it does not create the top-level run span.
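As a sketch, an OpenInference instrumentor is typically registered once at startup, after registerOTel(), so it attaches to the provider that exports to Lemma. The package name below is the OpenInference one; check the Provider instrumentation page for the exact packages and versions Lemma supports.

```typescript
import { registerInstrumentations } from "@opentelemetry/instrumentation";
import { OpenAIInstrumentation } from "@arizeai/openinference-instrumentation-openai";

// Register after registerOTel() so LLM-call child spans flow through
// the same provider (and RunBatchSpanProcessor) as the wrapAgent run span.
registerInstrumentations({
  instrumentations: [new OpenAIInstrumentation()],
});
```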

Troubleshooting

Traces not showing up?
  • Make sure registerOTel() is called before any code that creates spans
  • Verify LEMMA_API_KEY and LEMMA_PROJECT_ID are set: echo $LEMMA_API_KEY
  • Check console for authentication errors
Spans not nesting correctly?
  • Use wrapAgent or tracer.startActiveSpan (not tracer.startSpan)
  • Context propagates automatically across async/await
Performance issues?
  • The RunBatchSpanProcessor is production-ready by default — it batches all spans for an agent run and exports them together when the run ends
  • For high-volume apps, consider sampling to trace only a percentage of requests
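Trace-id ratio sampling can be sketched in a few lines. This simplified version is illustrative only (OTel's TraceIdRatioBasedSampler is similar in spirit): the decision is a deterministic function of the trace ID, so every span in a trace gets the same verdict.

```typescript
// Simplified ratio sampler sketch, not the production OTel sampler.
// Keep a trace iff a deterministic function of its ID falls below the ratio.
function shouldSample(traceId: string, ratio: number): boolean {
  // Interpret the last 8 hex chars of the 32-char trace ID as a 32-bit number.
  const tail = parseInt(traceId.slice(-8), 16);
  return tail < ratio * 0x100000000;
}

// Because the decision depends only on the ID, repeated calls agree,
// and all spans sharing a trace ID are sampled (or dropped) together.
const traceId = "4bf92f3577b34da6a3ce929d0e0e4736";
console.log(shouldSample(traceId, 0.1) === shouldSample(traceId, 0.1)); // deterministic
```

In a real setup you would configure the sampler on the TracerProvider rather than calling it by hand.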

Next Steps