Langfuse is a popular observability platform for LLM applications. If you’re already using Langfuse for tracing, you can send those traces to Lemma by configuring OpenTelemetry.
This guide shows how to use Langfuse’s native instrumentation with Lemma. If you’re starting fresh, consider using the Vercel AI SDK with Lemma’s registerOTel helper.

How It Works

Once a tracer provider is registered, Langfuse automatically captures spans via OpenTelemetry. Any LLM calls and operations instrumented by Langfuse will be sent to Lemma. You don’t need to make changes to your existing Langfuse instrumentation — the spans are automatically exported to Lemma’s endpoint.
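Under the hood, Langfuse emits its spans through the standard OpenTelemetry API, so anything that reaches the registered tracer provider gets exported. As a rough illustration (using the raw OpenTelemetry API rather than Langfuse itself), a manually created span flows through the same pipeline and also ends up in Lemma:
import { trace } from '@opentelemetry/api';

// Spans created through the global OpenTelemetry API are routed to whichever
// tracer provider is registered, and therefore exported to Lemma.
const tracer = trace.getTracer('my-app');

tracer.startActiveSpan('custom-operation', (span) => {
  span.setAttribute('example.attribute', 'value'); // illustrative attribute
  // ... your logic ...
  span.end();
});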

Getting Started

Install Dependencies

npm install @langfuse/tracing @uselemma/tracing

Set Up the Tracer Provider

Use registerOTel from @uselemma/tracing to configure the tracer provider:
import { registerOTel } from '@uselemma/tracing';
registerOTel();
Or in a Next.js app, use the instrumentation.ts file:
// instrumentation.ts
export async function register() {
  if (process.env.NEXT_RUNTIME === 'nodejs') {
    const { registerOTel } = await import('@uselemma/tracing');
    registerOTel();
  }
}
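In Next.js, instrumentation.ts lives at the project root (or inside src/ if you use one), and register() runs once when a new server instance starts, which makes it a natural place for one-time tracing setup.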
Set the LEMMA_API_KEY and LEMMA_PROJECT_ID environment variables. Find these in your Lemma project settings.
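For local development these typically live in an .env file; the values below are placeholders, not a real key format:
# .env (placeholder values; copy the real ones from your Lemma project settings)
LEMMA_API_KEY=<your-api-key>
LEMMA_PROJECT_ID=<your-project-id>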

Dual Export (Optional)

If you want to send traces to both Langfuse and Lemma simultaneously, you’ll need to configure the tracer provider manually with multiple exporters instead of using registerOTel:
npm install @opentelemetry/sdk-trace-node @opentelemetry/sdk-trace-base @opentelemetry/exporter-trace-otlp-proto
import { NodeTracerProvider } from '@opentelemetry/sdk-trace-node';
import { BatchSpanProcessor } from '@opentelemetry/sdk-trace-base';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-proto';
import { createLemmaSpanProcessor } from '@uselemma/tracing';

const tracerProvider = new NodeTracerProvider({
  spanProcessors: [
    // Export to Lemma (uses RunBatchSpanProcessor internally)
    createLemmaSpanProcessor(),
    // Export to Langfuse
    new BatchSpanProcessor(
      new OTLPTraceExporter({
        url: 'https://cloud.langfuse.com/api/public/otel/v1/traces',
        headers: {
          // Langfuse's OTLP endpoint authenticates with Basic auth: base64("publicKey:secretKey")
          Authorization: `Basic ${Buffer.from(
            `${process.env.LANGFUSE_PUBLIC_KEY}:${process.env.LANGFUSE_SECRET_KEY}`
          ).toString('base64')}`,
        },
      })
    ),
  ],
});

tracerProvider.register();
This sends the same trace data to both platforms, letting you use Langfuse features alongside Lemma’s experimentation and metrics capabilities.
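If you are using the Next.js instrumentation hook shown earlier, the manual setup can live there instead of registerOTel. A sketch under the same assumptions (environment variables already set, Node.js runtime):
// instrumentation.ts (sketch: dual export inside the Next.js hook)
export async function register() {
  if (process.env.NEXT_RUNTIME === 'nodejs') {
    const { NodeTracerProvider } = await import('@opentelemetry/sdk-trace-node');
    const { BatchSpanProcessor } = await import('@opentelemetry/sdk-trace-base');
    const { OTLPTraceExporter } = await import('@opentelemetry/exporter-trace-otlp-proto');
    const { createLemmaSpanProcessor } = await import('@uselemma/tracing');

    const tracerProvider = new NodeTracerProvider({
      spanProcessors: [
        // Export to Lemma
        createLemmaSpanProcessor(),
        // Export to Langfuse
        new BatchSpanProcessor(
          new OTLPTraceExporter({
            url: 'https://cloud.langfuse.com/api/public/otel/v1/traces',
            headers: {
              Authorization: `Basic ${Buffer.from(
                `${process.env.LANGFUSE_PUBLIC_KEY}:${process.env.LANGFUSE_SECRET_KEY}`
              ).toString('base64')}`,
            },
          })
        ),
      ],
    });

    tracerProvider.register();
  }
}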

Example

import { wrapAgent } from "@uselemma/tracing";
// The Langfuse client class comes from the "langfuse" package (install it alongside @langfuse/tracing)
import { Langfuse } from "langfuse";

const langfuse = new Langfuse({
  publicKey: process.env.LANGFUSE_PUBLIC_KEY,
  secretKey: process.env.LANGFUSE_SECRET_KEY,
});

export const callAgent = async (userInput: string) => {
  const wrappedFn = wrapAgent(
    "my-agent",
    async (ctx, input) => {
      // Your Langfuse-instrumented agent logic
      const trace = langfuse.trace({ name: "agent-execution" });
      const generation = trace.generation({
        name: "llm-call",
        model: "gpt-4",
        input: input.userInput,
      });

      // Your actual agent work here
      const result = await doWork(input.userInput);

      generation.end({ output: result });
      trace.update({ output: result });
      await langfuse.flushAsync(); // flush buffered events before the function returns

      return result;
    }
  );

  const { result, runId } = await wrappedFn({ userInput });
  return { result, runId };
};
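Calling the wrapped agent then looks like any other async function; the returned runId identifies the run in Lemma (the prompt below is just an example):
// Example caller (e.g. inside an async route handler)
const { result, runId } = await callAgent("Summarize the open support tickets");
console.log(`Agent run ${runId} finished:`, result);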

What Gets Traced

When using Langfuse with Lemma, you’ll see:
  • Top-level agent span — Created by wrapAgent, contains inputs and outputs
  • Langfuse trace spans — Automatically created by Langfuse’s instrumentation
  • Generation spans — LLM calls captured by Langfuse
  • Nested operations — Any additional spans from other instrumented libraries
All spans are sent to Lemma where you can:
  • View the full execution hierarchy
  • Analyze timing and performance
  • Filter by operation type or error status
  • Link metric events to specific runs

Additional Resources

For more on instrumenting your agent with Langfuse, see the Langfuse tracing documentation.

Next Steps