Documentation Index
Fetch the complete documentation index at: https://docs.uselemma.ai/llms.txt
Use this file to discover all available pages before exploring further.
If your app is already instrumented with OpenInference (including Arize-managed OpenInference setups), keep that instrumentation. Add Lemma as an OpenTelemetry export destination so the same spans are available in Lemma.
Langfuse is the recommended greenfield path, but OpenInference users do not need to replace their existing instrumentation.
Setup
Register OpenInference before your application imports the instrumented SDK. These examples instrument OpenAI calls and export the resulting OpenInference spans to Lemma.
Install OpenInference instrumentation and the OpenTelemetry exporter:

npm install @arizeai/openinference-instrumentation-openai @opentelemetry/api @opentelemetry/exporter-trace-otlp-proto @opentelemetry/instrumentation @opentelemetry/sdk-trace-base @opentelemetry/sdk-trace-node openai
// instrumentation.ts
import { OpenAIInstrumentation } from "@arizeai/openinference-instrumentation-openai";
import { registerInstrumentations } from "@opentelemetry/instrumentation";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-proto";
import { BatchSpanProcessor } from "@opentelemetry/sdk-trace-base";
import { NodeTracerProvider } from "@opentelemetry/sdk-trace-node";
const provider = new NodeTracerProvider({
  spanProcessors: [
    new BatchSpanProcessor(
      new OTLPTraceExporter({
        url: process.env.LEMMA_BASE_URL,
        headers: {
          Authorization: `Bearer ${process.env.LEMMA_API_KEY}`,
          "X-Lemma-Project-ID": process.env.LEMMA_PROJECT_ID,
        },
      }),
    ),
  ],
});

provider.register();

registerInstrumentations({
  instrumentations: [new OpenAIInstrumentation()],
});
// app.ts
import OpenAI from "openai";
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
await openai.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Help me understand my invoice" }],
});
Install OpenInference instrumentation and the OpenTelemetry exporter:

pip install openai openinference-instrumentation-openai opentelemetry-exporter-otlp opentelemetry-sdk
# instrumentation.py
import os
from openinference.instrumentation.openai import OpenAIInstrumentor
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
provider = TracerProvider()
provider.add_span_processor(
    BatchSpanProcessor(
        OTLPSpanExporter(
            endpoint=os.environ["LEMMA_BASE_URL"],
            headers={
                "Authorization": f"Bearer {os.environ['LEMMA_API_KEY']}",
                "X-Lemma-Project-ID": os.environ["LEMMA_PROJECT_ID"],
            },
        )
    )
)

trace.set_tracer_provider(provider)
OpenAIInstrumentor().instrument(tracer_provider=provider)
# app.py
from openai import OpenAI
client = OpenAI()
client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Help me understand my invoice"}],
)
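BatchSpanProcessor exports spans from a background thread, so a short-lived script can exit before buffered spans reach Lemma. One way to guard against this — a sketch, assuming the `provider` defined in instrumentation.py above — is to shut the provider down on exit, which flushes any remaining spans:

```python
import atexit

from instrumentation import provider  # the TracerProvider configured above

# Shutting down the provider stops the batch processor and flushes
# buffered spans, so nothing is lost when the process exits.
atexit.register(provider.shutdown)
```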
Required environment variables
LEMMA_BASE_URL — the Lemma OTLP traces endpoint (https://api.uselemma.ai/otel/v1/traces)
LEMMA_API_KEY — your Lemma API key, sent as a Bearer token
LEMMA_PROJECT_ID — the ID of the Lemma project that should receive the traces
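It can help to fail fast at startup when any of these variables is unset, rather than silently exporting nowhere. A minimal sketch — the helper name here is ours, not part of any SDK:

```python
import os

# The three variables the exporter configuration above reads.
REQUIRED_VARS = ("LEMMA_BASE_URL", "LEMMA_API_KEY", "LEMMA_PROJECT_ID")

def missing_lemma_vars(env=None):
    """Return the names of required Lemma variables that are unset or empty."""
    env = os.environ if env is None else env
    return [name for name in REQUIRED_VARS if not env.get(name)]
```

Call it once at startup and raise if the returned list is non-empty.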
If you already have an OpenInference provider configured, add the Lemma exporter to that provider instead of creating a second global tracer provider.
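In the Python SDK, that can look like the following sketch. It assumes your OpenInference setup already created and registered an SDK TracerProvider; it only attaches one more span processor, so your existing exporters keep working:

```python
import os

from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace.export import BatchSpanProcessor

# Reuse the provider your OpenInference setup registered earlier.
# (This must be an SDK TracerProvider, i.e. set_tracer_provider already ran.)
provider = trace.get_tracer_provider()

# Add Lemma as an additional export destination.
provider.add_span_processor(
    BatchSpanProcessor(
        OTLPSpanExporter(
            endpoint=os.environ["LEMMA_BASE_URL"],
            headers={
                "Authorization": f"Bearer {os.environ['LEMMA_API_KEY']}",
                "X-Lemma-Project-ID": os.environ["LEMMA_PROJECT_ID"],
            },
        )
    )
)
```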