Azure Application Insights is Microsoft’s cloud-native monitoring and observability service. If you’re already sending traces to Application Insights, you can also send the same traces to Lemma by adding Lemma’s span processor alongside the Azure Monitor exporter.
If you’re starting fresh and only need Lemma, use registerOTel from @uselemma/tracing. This guide is for adding Lemma as a destination alongside an existing Azure Application Insights setup.
How It Works
Azure Application Insights uses the AzureMonitorTraceExporter from the Azure Monitor OpenTelemetry exporter package. Once you have a tracer provider configured with Azure, adding Lemma is a matter of attaching createLemmaSpanProcessor() as a second span processor on the same provider. Both destinations receive the same spans.
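The fan-out behavior can be illustrated with a minimal sketch. This is plain TypeScript, not the real OpenTelemetry API (`SpanProcessor`, `RecordingProcessor`, and `TinyProvider` here are simplified stand-ins): the point is that a provider configured with multiple processors delivers every span to each of them, so Azure and Lemma both receive a full copy.

```typescript
// Minimal illustration of span-processor fan-out (not the real OTel types).
interface SpanProcessor {
  onEnd(spanName: string): void;
}

class RecordingProcessor implements SpanProcessor {
  seen: string[] = [];
  onEnd(spanName: string): void {
    this.seen.push(spanName);
  }
}

// A provider with multiple processors forwards each ended span to all of them.
class TinyProvider {
  constructor(private processors: SpanProcessor[]) {}
  endSpan(spanName: string): void {
    for (const p of this.processors) p.onEnd(spanName);
  }
}

const azure = new RecordingProcessor(); // stands in for BatchSpanProcessor + Azure exporter
const lemma = new RecordingProcessor(); // stands in for createLemmaSpanProcessor()
const provider = new TinyProvider([azure, lemma]);

provider.endSpan("llm-call");
provider.endSpan("tool-call");

console.log(azure.seen); // [ 'llm-call', 'tool-call' ]
console.log(lemma.seen); // [ 'llm-call', 'tool-call' ]
```

In the real SDK, the `spanProcessors` array on `NodeTracerProvider` (or repeated `add_span_processor` calls in Python) plays the role of `TinyProvider`'s processor list.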
Getting Started
Install Dependencies
If you already have Azure telemetry configured, you likely have these installed:

```bash
npm install @azure/monitor-opentelemetry-exporter @opentelemetry/sdk-trace-node @opentelemetry/sdk-trace-base
```
To add Lemma and OpenInference instrumentation:

```bash
npm install @uselemma/tracing @opentelemetry/instrumentation @arizeai/openinference-instrumentation-openai
```
If you already have Azure telemetry configured, you likely have these installed:

```bash
pip install azure-monitor-opentelemetry-exporter opentelemetry-api opentelemetry-sdk
```
To add Lemma integration:

```bash
pip install uselemma-tracing
```
If Azure telemetry is already configured in your project, the Lemma package is the only new dependency required to send the same traces to Lemma.
Set Up the Tracer Provider
```typescript
import { NodeTracerProvider } from "@opentelemetry/sdk-trace-node";
import { BatchSpanProcessor } from "@opentelemetry/sdk-trace-base";
import { AzureMonitorTraceExporter } from "@azure/monitor-opentelemetry-exporter";
import { createLemmaSpanProcessor } from "@uselemma/tracing";
import { registerInstrumentations } from "@opentelemetry/instrumentation";
import { OpenAIInstrumentation } from "@arizeai/openinference-instrumentation-openai";

const tracerProvider = new NodeTracerProvider({
  spanProcessors: [
    // Export to Azure Application Insights
    new BatchSpanProcessor(
      new AzureMonitorTraceExporter({
        connectionString: process.env.APPLICATIONINSIGHTS_CONNECTION_STRING,
      })
    ),
    // Export to Lemma
    createLemmaSpanProcessor(),
  ],
});

tracerProvider.register();

registerInstrumentations({
  instrumentations: [new OpenAIInstrumentation()],
  tracerProvider: tracerProvider,
});
```
In a Next.js app, place this in `instrumentation.ts`:

```typescript
// instrumentation.ts
export async function register() {
  if (process.env.NEXT_RUNTIME === "nodejs") {
    const { NodeTracerProvider } = await import("@opentelemetry/sdk-trace-node");
    const { BatchSpanProcessor } = await import("@opentelemetry/sdk-trace-base");
    const { AzureMonitorTraceExporter } = await import("@azure/monitor-opentelemetry-exporter");
    const { createLemmaSpanProcessor } = await import("@uselemma/tracing");
    const { registerInstrumentations } = await import("@opentelemetry/instrumentation");
    const { OpenAIInstrumentation } = await import("@arizeai/openinference-instrumentation-openai");

    const tracerProvider = new NodeTracerProvider({
      spanProcessors: [
        new BatchSpanProcessor(
          new AzureMonitorTraceExporter({
            connectionString: process.env.APPLICATIONINSIGHTS_CONNECTION_STRING,
          })
        ),
        createLemmaSpanProcessor(),
      ],
    });

    tracerProvider.register();

    registerInstrumentations({
      instrumentations: [new OpenAIInstrumentation()],
      tracerProvider: tracerProvider,
    });
  }
}
```
```python
import os

from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from azure.monitor.opentelemetry.exporter import AzureMonitorTraceExporter
from opentelemetry import trace
from uselemma_tracing import create_lemma_span_processor, instrument_openai

provider = TracerProvider()

# Export to Azure Application Insights
provider.add_span_processor(
    BatchSpanProcessor(
        AzureMonitorTraceExporter(
            connection_string=os.environ["APPLICATIONINSIGHTS_CONNECTION_STRING"]
        )
    )
)

# Export to Lemma
provider.add_span_processor(create_lemma_span_processor())

trace.set_tracer_provider(provider)
instrument_openai()
```
Environment Variables
| Variable | Description |
|---|---|
| `APPLICATIONINSIGHTS_CONNECTION_STRING` | Your Application Insights connection string. Find this in your Application Insights resource under Configure > Properties. |
| `LEMMA_API_KEY` | Your Lemma API key |
| `LEMMA_PROJECT_ID` | Your Lemma project ID |
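Because a missing connection string or API key typically surfaces only as silently dropped telemetry, it can help to validate these variables at startup. A minimal sketch (the `assertEnv` helper is illustrative, not part of either SDK):

```typescript
// Fail fast if a required variable is missing; names match the table above.
const required = [
  "APPLICATIONINSIGHTS_CONNECTION_STRING",
  "LEMMA_API_KEY",
  "LEMMA_PROJECT_ID",
];

function assertEnv(env: Record<string, string | undefined>): void {
  const missing = required.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing environment variables: ${missing.join(", ")}`);
  }
}

// Call before setting up the tracer provider:
// assertEnv(process.env);
```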
Example
```typescript
import { wrapAgent } from "@uselemma/tracing";

export const callAgent = async (userInput: string) => {
  const wrappedFn = wrapAgent(
    "my-agent",
    async (ctx, input) => {
      const result = await doWork(input.userInput);
      ctx.onComplete(result);
      return result;
    },
    { autoEndRoot: true }
  );

  const { result, runId } = await wrappedFn({ userInput });
  return { result, runId };
};
```
With both processors registered, all spans from wrapAgent and any auto-instrumented calls are sent to both Application Insights and Lemma.
What Gets Traced
When using Azure Application Insights alongside Lemma, you’ll see:
- Top-level agent span — Created by `wrapAgent`, contains inputs and outputs
- LLM spans — Model calls captured by OpenInference or Azure Monitor instrumentors
- Tool spans — Function/tool invocations
- Nested operations — Any additional spans from other instrumented libraries
In Application Insights, spans appear as distributed traces. In Lemma, they appear as runs with full execution hierarchy, timing, and metric event links.
Additional Resources
For more on Azure Application Insights with OpenTelemetry, see the Azure Monitor OpenTelemetry documentation.
Next Steps