Quick Start
Instrumenting your application means setting up OpenTelemetry to send traces to Lemma. Here’s how:
1. Set environment variables
```shell
export LEMMA_API_KEY="lma_..."
export LEMMA_PROJECT_ID="proj_..."
```
2. Call registerOTel() before your application code runs
For Next.js, create instrumentation.ts at your project root:

```typescript
// instrumentation.ts
export async function register() {
  if (process.env.NEXT_RUNTIME === 'nodejs') {
    const { registerOTel } = await import('@uselemma/tracing');
    registerOTel();
  }
}
```

The NEXT_RUNTIME check ensures instrumentation only runs on the server.

For general Node.js, create a tracer.ts file and import it first in your entry point:

```typescript
// tracer.ts
import { registerOTel } from '@uselemma/tracing';
registerOTel();
```

```typescript
// index.ts or server.ts
import './tracer'; // Must be first!
import express from 'express';
// ... rest of your application
```

For Python, call register_otel() at the start of your application:

```python
# app.py or __init__.py
from uselemma_tracing import register_otel

register_otel()
# ... rest of your application
```
3. Use wrapAgent to trace your agent functions
```typescript
import { wrapAgent } from '@uselemma/tracing';

const myAgent = wrapAgent('my-agent', async (ctx, input) => {
  const result = await doWork(input);
  ctx.onComplete(result);
  return result;
});

await myAgent({ query: 'Hello' });
```
That’s it. Your traces will now appear in Lemma.
Order matters! Call registerOTel() before any code that creates spans.
How It Works
Next.js
For Next.js 15+, use the instrumentation.ts file at your project root. This file runs once when the Node.js runtime starts, before any application code executes.
```typescript
// instrumentation.ts
export async function register() {
  if (process.env.NEXT_RUNTIME === 'nodejs') {
    const { registerOTel } = await import('@uselemma/tracing');
    registerOTel();
  }
}
```
The NEXT_RUNTIME check ensures instrumentation only runs on the server, not in the browser or Edge runtime.
Node.js (General)
For standalone Node.js applications, create a dedicated tracer.ts or instrumentation.ts file and import it before any other application code:
```typescript
// tracer.ts
import { registerOTel } from '@uselemma/tracing';
registerOTel();
```

Then in your entry point:

```typescript
// index.ts or server.ts
import './tracer'; // Must be first!
import express from 'express';
import { wrapAgent } from '@uselemma/tracing';
// ... rest of your application
```
Order matters! The tracer must be registered before you import any code that creates spans. Otherwise, those spans won’t be captured.
Python
For Python applications, call register_otel() or use the convenience functions:
```python
# app.py or __init__.py
from uselemma_tracing import register_otel

register_otel()

# Or use framework-specific registration
from uselemma_tracing import instrument_openai

instrument_openai()
```
registerOTel() sets up the OpenTelemetry infrastructure to:
- Create spans when you call wrapAgent or framework instrumentors
- Batch them per agent run using RunBatchSpanProcessor — all spans for a single agent execution are grouped together
- Export them to Lemma's API at https://api.uselemma.ai/otel/v1/traces when the top-level agent span ends
Once registered, spans automatically nest correctly across async operations, HTTP requests, and framework boundaries.
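This nesting behavior can be illustrated with a small, self-contained sketch built on Node's AsyncLocalStorage — the same primitive OpenTelemetry's Node context manager uses. The Span type and startActiveSpan here are simplified stand-ins for illustration, not Lemma's or OpenTelemetry's actual API:

```typescript
import { AsyncLocalStorage } from 'node:async_hooks';

// Simplified stand-in for an OTel span: each span remembers its parent.
interface Span { name: string; parent?: Span; }

const activeSpan = new AsyncLocalStorage<Span>();

// Mimics tracer.startActiveSpan: the new span is parented to whatever
// span is active in the current async context, and fn runs inside it.
async function startActiveSpan<T>(name: string, fn: (span: Span) => Promise<T>): Promise<T> {
  const span: Span = { name, parent: activeSpan.getStore() };
  return activeSpan.run(span, () => fn(span));
}

async function main() {
  await startActiveSpan('ai.agent.run', async () => {
    await new Promise((r) => setTimeout(r, 10)); // cross an async boundary
    await startActiveSpan('llm.call', async (child) => {
      // The child still sees its parent, even after awaits in between.
      console.log(child.parent?.name); // → ai.agent.run
    });
  });
}

main();
```

Because the active span rides along with the async context rather than being passed explicitly, nesting works across await boundaries and framework callbacks without any plumbing in your code.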
Configuration
By default, registerOTel() reads from environment variables. You can also pass options directly:
```typescript
registerOTel({
  apiKey: 'lma_...',     // defaults to process.env.LEMMA_API_KEY
  projectId: 'proj_...', // defaults to process.env.LEMMA_PROJECT_ID
  baseUrl: '...',        // defaults to https://api.uselemma.ai
});
```

```python
register_otel(
    api_key="lma_...",      # defaults to os.getenv("LEMMA_API_KEY")
    project_id="proj_...",  # defaults to os.getenv("LEMMA_PROJECT_ID")
    base_url="...",         # defaults to https://api.uselemma.ai
)
```
Exporting to Multiple Destinations
Already sending traces to Datadog, Jaeger, or another collector? Add Lemma as a second destination:
```typescript
import { NodeTracerProvider } from "@opentelemetry/sdk-trace-node";
import { createLemmaSpanProcessor } from "@uselemma/tracing";

const provider = new NodeTracerProvider({
  spanProcessors: [
    createLemmaSpanProcessor(),  // Send to Lemma
    yourExistingSpanProcessor,   // Keep your existing setup
  ],
});
provider.register();
```

```python
from opentelemetry.sdk.trace import TracerProvider
from uselemma_tracing import create_lemma_span_processor

provider = TracerProvider()
provider.add_span_processor(create_lemma_span_processor())   # Send to Lemma
provider.add_span_processor(your_existing_span_processor)    # Keep your existing setup
```
Auto-Instrumentation for Frameworks
Want automatic spans for OpenAI, Anthropic, or other frameworks? Add their instrumentors after registerOTel():
```typescript
// instrumentation.ts
export async function register() {
  if (process.env.NEXT_RUNTIME === 'nodejs') {
    const { registerOTel } = await import('@uselemma/tracing');
    registerOTel();

    // Auto-instrument OpenAI
    const { OpenAIInstrumentation } = await import('@arizeai/openinference-instrumentation-openai');
    new OpenAIInstrumentation().instrument();
  }
}
```

```python
# Convenience function that combines register_otel + OpenAI instrumentation
from uselemma_tracing import instrument_openai

instrument_openai()
```
Troubleshooting
Traces not showing up?
- Make sure registerOTel() is called before any code that creates spans
- Verify LEMMA_API_KEY and LEMMA_PROJECT_ID are set: echo $LEMMA_API_KEY
- Check the console for authentication errors
Spans not nesting correctly?
- Use wrapAgent or tracer.startActiveSpan (not tracer.startSpan)
- Context propagates automatically across async/await
Performance issues?
- The RunBatchSpanProcessor is production-ready by default — it batches all spans for an agent run and exports them together when the run ends
- For high-volume apps, consider sampling to trace only a percentage of requests
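The usual way to sample is a deterministic decision from the trace ID, so every span in a trace gets the same keep/drop verdict. The sketch below shows the core idea with no dependencies; the hashing scheme is illustrative, not the OpenTelemetry SDK's exact algorithm:

```typescript
// Deterministic head sampling: the same trace ID always yields the same
// decision, so all spans of one trace are kept or dropped together.
function shouldSample(traceId: string, ratio: number): boolean {
  // Interpret the first 8 hex chars of the trace ID as a number in [0, 1).
  const bucket = parseInt(traceId.slice(0, 8), 16) / 0x100000000;
  return bucket < ratio;
}

// Example: keep roughly 10% of traces.
console.log(shouldSample('00000000aabbccdd00000000aabbccdd', 0.1)); // → true
console.log(shouldSample('ffffffff00000000ffffffff00000000', 0.1)); // → false
```

In practice you would not write this yourself: the OpenTelemetry SDK ships `TraceIdRatioBasedSampler` in `@opentelemetry/sdk-trace-base`, which you can pass as the `sampler` option when constructing a `NodeTracerProvider` alongside `createLemmaSpanProcessor()`. Whether `registerOTel()` itself accepts a sampler option is not shown here, so assume you need to build your own provider for this.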
Next Steps
Advanced: How It Works Under the Hood
This section explains the OpenTelemetry architecture for those who want to understand the internals or customize the setup.
Architecture
```
Your Application
        ↓
wrapAgent / trace.getTracer()
        ↓
NodeTracerProvider (creates spans)
        ↓
RunBatchSpanProcessor (batches spans per agent run)
        ↓
OTLPTraceExporter (sends over HTTP)
        ↓
Lemma API (https://api.uselemma.ai/otel/v1/traces)
```
Components
NodeTracerProvider — The factory that creates tracers and spans. Integrates with Node.js async hooks to propagate context across async boundaries.
RunBatchSpanProcessor — A custom span processor that groups spans by agent run ID. When the top-level ai.agent.run span ends, all spans belonging to that run are exported together in a single batch. It also propagates the lemma.run_id attribute to child spans automatically and filters out framework-internal spans (e.g. Next.js instrumentation).
OTLPTraceExporter — Serializes spans using OpenTelemetry Protocol (OTLP) and sends them to Lemma with authentication headers.
Context Propagation
When wrapAgent creates a span, it:
- Opens a new root span against ROOT_CONTEXT: tracer.startSpan("ai.agent.run", { ... }, ROOT_CONTEXT)
- Sets the span on the context: trace.setSpan(ROOT_CONTEXT, span)
- Executes your agent function inside that context: context.with(ctx, async () => { ... })
- Any spans created inside automatically become children by checking context.active()
Using ROOT_CONTEXT ensures each wrapAgent call creates an independent trace, even when called from within another traced context. Spans from frameworks like the Vercel AI SDK automatically nest under your agent span — the context is propagated transparently across async operations, HTTP requests, and framework boundaries.
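The difference ROOT_CONTEXT makes can be shown with stand-in types rather than the real API: a span parented against the active context nests under the current agent, while a span parented against an empty root context starts a fresh, independent trace. This is a sketch of the idea only:

```typescript
import { AsyncLocalStorage } from 'node:async_hooks';

interface SketchSpan { name: string; parent?: SketchSpan; }
const active = new AsyncLocalStorage<SketchSpan>();

// Nests under whatever span is currently active (the default behavior).
function startNested(name: string): SketchSpan {
  return { name, parent: active.getStore() };
}

// Mimics wrapAgent's use of ROOT_CONTEXT: always a fresh root span,
// regardless of what is active in the surrounding context.
function startAgentRun(name: string): SketchSpan {
  return { name, parent: undefined };
}

const outer = startAgentRun('outer-agent');
active.run(outer, () => {
  const nested = startNested('llm.call');     // child of outer-agent
  const inner = startAgentRun('inner-agent'); // independent root
  console.log(nested.parent?.name); // → outer-agent
  console.log(inner.parent);        // → undefined
});
```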
Source Code
```typescript
import { NodeTracerProvider } from "@opentelemetry/sdk-trace-node";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-proto";
import { RunBatchSpanProcessor } from "./run-batch-span-processor";

export function createLemmaSpanProcessor(options = {}) {
  const apiKey = options.apiKey ?? process.env.LEMMA_API_KEY;
  const projectId = options.projectId ?? process.env.LEMMA_PROJECT_ID;
  const baseUrl = options.baseUrl ?? "https://api.uselemma.ai";

  if (!apiKey || !projectId) {
    throw new Error("Missing API key and/or project ID");
  }

  return new RunBatchSpanProcessor(
    new OTLPTraceExporter({
      url: `${baseUrl}/otel/v1/traces`,
      headers: {
        Authorization: `Bearer ${apiKey}`,
        "X-Lemma-Project-ID": projectId,
      },
    })
  );
}

export function registerOTel(options = {}) {
  const tracerProvider = new NodeTracerProvider({
    spanProcessors: [createLemmaSpanProcessor(options)],
  });
  tracerProvider.register();
  return tracerProvider;
}
```
```python
import os

from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider

from .run_batch_span_processor import RunBatchSpanProcessor


def create_lemma_span_processor(*, api_key=None, project_id=None, base_url="https://api.uselemma.ai"):
    api_key = api_key or os.getenv("LEMMA_API_KEY")
    project_id = project_id or os.getenv("LEMMA_PROJECT_ID")
    if not api_key or not project_id:
        raise ValueError("Missing API key and/or project ID")

    exporter = OTLPSpanExporter(
        endpoint=f"{base_url}/otel/v1/traces",
        headers={
            "Authorization": f"Bearer {api_key}",
            "X-Lemma-Project-ID": project_id,
        },
    )
    return RunBatchSpanProcessor(exporter)


def register_otel(*, api_key=None, project_id=None, base_url="https://api.uselemma.ai"):
    provider = TracerProvider()
    provider.add_span_processor(
        create_lemma_span_processor(api_key=api_key, project_id=project_id, base_url=base_url)
    )
    trace.set_tracer_provider(provider)
    return provider
```