Arize Phoenix is an open-source observability platform for LLM applications. Phoenix accepts standard OTLP traces, so you can send traces from Lemma’s instrumentation to both Lemma and Phoenix.
If you’re starting fresh and only need Lemma, use registerOTel from @uselemma/tracing. This guide is for adding Phoenix as a destination for Lemma traces.
## How It Works
Phoenix accepts standard OTLP over HTTP. Once you configure an OpenTelemetry tracer provider to export to Phoenix’s endpoint, any spans captured by your instrumentation (e.g., wrapAgent, OpenInference instrumentors) are sent to Phoenix.
You don’t need Phoenix-specific SDK code — just point the OTLP exporter at Phoenix’s collector endpoint.
## Getting Started
### Install Dependencies

```shell
npm install @uselemma/tracing @opentelemetry/sdk-trace-node @opentelemetry/sdk-trace-base @opentelemetry/exporter-trace-otlp-proto @opentelemetry/instrumentation @arizeai/openinference-instrumentation-openai
```

```shell
pip install uselemma-tracing opentelemetry-api opentelemetry-sdk opentelemetry-exporter-otlp-proto-http
```
### Set Up the Tracer Provider

```typescript
import { NodeTracerProvider } from "@opentelemetry/sdk-trace-node";
import { BatchSpanProcessor } from "@opentelemetry/sdk-trace-base";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-proto";
import { createLemmaSpanProcessor } from "@uselemma/tracing";
import { registerInstrumentations } from "@opentelemetry/instrumentation";
import { OpenAIInstrumentation } from "@arizeai/openinference-instrumentation-openai";

const provider = new NodeTracerProvider({
  spanProcessors: [
    // Send spans to Lemma
    createLemmaSpanProcessor(),
    // Send the same spans to Phoenix via OTLP over HTTP
    new BatchSpanProcessor(
      new OTLPTraceExporter({
        url: process.env.PHOENIX_COLLECTOR_ENDPOINT ?? "https://otlp.arize.com/v1/traces",
        headers: process.env.PHOENIX_API_KEY
          ? { api_key: process.env.PHOENIX_API_KEY }
          : {},
      })
    ),
  ],
});

provider.register();

registerInstrumentations({
  instrumentations: [new OpenAIInstrumentation()],
  tracerProvider: provider,
});
```
```python
import os

from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from uselemma_tracing import create_lemma_span_processor, instrument_openai

provider = TracerProvider()

# Send spans to Lemma
provider.add_span_processor(create_lemma_span_processor())

# Send the same spans to Phoenix via OTLP over HTTP
provider.add_span_processor(
    BatchSpanProcessor(
        OTLPSpanExporter(
            endpoint=os.environ.get("PHOENIX_COLLECTOR_ENDPOINT", "https://otlp.arize.com/v1/traces"),
            headers={"api_key": os.environ["PHOENIX_API_KEY"]} if os.environ.get("PHOENIX_API_KEY") else {},
        )
    )
)

trace.set_tracer_provider(provider)
instrument_openai()
```
Self-hosted Phoenix: For a locally running Phoenix instance, set PHOENIX_COLLECTOR_ENDPOINT=http://localhost:6006/v1/traces. Self-hosted deployments typically don’t require an API key.
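As a sketch, a local Phoenix instance can be started from its official Docker image; the image name and port mapping below reflect Phoenix’s standard distribution, but check the Phoenix docs for your version:

```shell
# Run Phoenix locally; the UI and OTLP/HTTP collector both listen on port 6006
docker run -p 6006:6006 arizephoenix/phoenix:latest

# Point the exporter at the local collector; no API key needed
export PHOENIX_COLLECTOR_ENDPOINT=http://localhost:6006/v1/traces
```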
## Environment Variables

| Variable | Description |
|---|---|
| `PHOENIX_API_KEY` | Your Arize API key (required for Arize cloud; omit for self-hosted) |
| `PHOENIX_COLLECTOR_ENDPOINT` | OTLP endpoint URL. Defaults to `https://otlp.arize.com/v1/traces` |
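The endpoint fallback and conditional api_key header used in the snippets above can be isolated into small helpers. This is an illustrative sketch — `phoenix_endpoint` and `phoenix_headers` are hypothetical names, not part of any package:

```python
import os

def phoenix_endpoint() -> str:
    # Fall back to the Arize cloud endpoint when no override is set
    return os.environ.get("PHOENIX_COLLECTOR_ENDPOINT", "https://otlp.arize.com/v1/traces")

def phoenix_headers() -> dict:
    # Send the api_key header only when a key is configured;
    # self-hosted Phoenix typically runs without one
    key = os.environ.get("PHOENIX_API_KEY")
    return {"api_key": key} if key else {}
```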
## Example

```typescript
import { wrapAgent } from "@uselemma/tracing";

export const callAgent = async (userInput: string) => {
  const wrappedFn = wrapAgent(
    "my-agent",
    async (ctx, input) => {
      const result = await doWork(input.userInput);
      ctx.onComplete(result);
      return result;
    },
    { autoEndRoot: true }
  );

  const { result, runId } = await wrappedFn({ userInput });
  return { result, runId };
};
```
With the tracer provider configured for Lemma + Phoenix, all spans from wrapAgent and any OpenInference-instrumented calls (OpenAI, Anthropic, etc.) are exported to Phoenix.
## What Gets Traced
When using Phoenix with Lemma’s instrumentation, you’ll see:
- Top-level agent span — Created by wrapAgent; contains inputs and outputs
- LLM spans — Model calls captured by OpenInference instrumentors (OpenAI, Anthropic, etc.)
- Tool spans — Function/tool invocations
- Nested operations — Any additional spans from other instrumented libraries
All spans are sent to Phoenix where you can:
- View the full execution hierarchy
- Analyze timing and token usage
- Filter by operation type or error status
- Use Phoenix’s LLM-specific evaluation and monitoring features
## Additional Resources
For more on Phoenix, see the Arize Phoenix documentation.
## Next Steps