Langfuse exposes an OTLP-compatible endpoint, which means you can route the same traces to both Langfuse and Lemma simultaneously. No changes to your agent logic are needed — you configure the processors once at startup.

Getting started

Install

npm install @uselemma/tracing @opentelemetry/instrumentation @arizeai/openinference-instrumentation-openai

Register at startup

Use registerOTel() from @uselemma/tracing to configure the tracer provider, then register OpenInference instrumentors against it:
import { registerOTel } from '@uselemma/tracing';
import { registerInstrumentations } from '@opentelemetry/instrumentation';
import { OpenAIInstrumentation } from '@arizeai/openinference-instrumentation-openai';

const provider = registerOTel();

registerInstrumentations({
  instrumentations: [new OpenAIInstrumentation()],
  tracerProvider: provider,
});
In a Next.js app, put this in instrumentation.ts:
// instrumentation.ts
export async function register() {
  if (process.env.NEXT_RUNTIME === 'nodejs') {
    const { registerOTel } = await import('@uselemma/tracing');
    const provider = registerOTel();

    const { registerInstrumentations } = await import('@opentelemetry/instrumentation');
    const { OpenAIInstrumentation } = await import('@arizeai/openinference-instrumentation-openai');
    registerInstrumentations({
      instrumentations: [new OpenAIInstrumentation()],
      tracerProvider: provider,
    });
  }
}
Set the LEMMA_API_KEY and LEMMA_PROJECT_ID environment variables; you can find both values in your Lemma project settings.
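For local development, you might export them in your shell or keep them in a .env file; the values below are placeholders, not real credentials:
export LEMMA_API_KEY="<your-lemma-api-key>"
export LEMMA_PROJECT_ID="<your-lemma-project-id>"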

Wrap your agent

import { agent } from "@uselemma/tracing";
import OpenAI from "openai";

const openai = new OpenAI();

const myAgent = agent("my-agent", async (input: string) => {
  const response = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: input }],
  });
  return response.choices[0].message.content ?? "";
  // wrapper auto-captures the return value as ai.agent.output and closes the span
});

const { result, runId } = await myAgent("What is the capital of France?");
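At the call site, the wrapped agent behaves like any ordinary async function. As a rough sketch (the Express handler below is illustrative and not part of @uselemma/tracing), you could return the runId alongside the answer so the client can reference that specific run later:
import express from 'express';

const app = express();
app.use(express.json());

app.post('/ask', async (req, res) => {
  // Each call to the wrapped agent produces one ai.agent.run span
  const { result, runId } = await myAgent(req.body.question);
  res.json({ answer: result, runId });
});

app.listen(3000);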

Sending to both Langfuse and Lemma

Build the TracerProvider manually and add both span processors. The Langfuse OTLP endpoint authenticates with HTTP Basic auth, using your Langfuse public and secret keys base64-encoded as public_key:secret_key in the Authorization header:
import { NodeTracerProvider } from '@opentelemetry/sdk-trace-node';
import { BatchSpanProcessor } from '@opentelemetry/sdk-trace-base';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-proto';
import { createLemmaSpanProcessor } from '@uselemma/tracing';
import { registerInstrumentations } from '@opentelemetry/instrumentation';
import { OpenAIInstrumentation } from '@arizeai/openinference-instrumentation-openai';

const provider = new NodeTracerProvider({
  spanProcessors: [
    // Sends spans to Lemma
    createLemmaSpanProcessor(),
    // Sends the same spans to Langfuse over OTLP
    new BatchSpanProcessor(
      new OTLPTraceExporter({
        url: 'https://cloud.langfuse.com/api/public/otel/v1/traces',
        headers: {
          // Langfuse authenticates OTLP requests with Basic auth: base64("public_key:secret_key")
          Authorization: `Basic ${Buffer.from(
            `${process.env.LANGFUSE_PUBLIC_KEY}:${process.env.LANGFUSE_SECRET_KEY}`
          ).toString('base64')}`,
        },
      })
    ),
  ],
});

provider.register();

registerInstrumentations({
  instrumentations: [new OpenAIInstrumentation()],
  tracerProvider: provider,
});
Both processors receive all spans independently — order has no effect on correctness. See Dual export for the general pattern and additional options (pre-initialized providers, passing credentials explicitly).
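If this runs in a short-lived process (a script, a cron job, a serverless handler), flush before exit so spans still queued in the batch processors reach both backends. A minimal sketch using the standard OpenTelemetry provider API:
// Flush queued spans to Lemma and Langfuse, then release exporter resources.
await provider.forceFlush();
await provider.shutdown();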

What you’ll see in Lemma

Span | Source | Contains
ai.agent.run | agent() | Run input, output, timing, run ID
gen_ai.chat | OpenInference (OpenAI) | Model name, prompt, completion, token usage
