
Set Lemma environment variables

```shell
export LEMMA_API_KEY="lma_..."
export LEMMA_PROJECT_ID="proj_..."
```
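A quick fail-fast check at startup can surface a missing or malformed key before the first traced request. This is an illustrative sketch, not part of the Lemma SDK; the `lma_`/`proj_` prefixes are taken from the placeholders above:

```typescript
// assertLemmaEnv: illustrative startup check, NOT a Lemma API.
// Throws if either variable is missing or has an unexpected prefix.
export function assertLemmaEnv(
  env: Record<string, string | undefined> = process.env
): void {
  const key = env.LEMMA_API_KEY;
  if (!key || !key.startsWith("lma_")) {
    throw new Error('LEMMA_API_KEY is missing or does not start with "lma_"');
  }
  const project = env.LEMMA_PROJECT_ID;
  if (!project || !project.startsWith("proj_")) {
    throw new Error('LEMMA_PROJECT_ID is missing or does not start with "proj_"');
  }
}
```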

Install dependencies

```shell
npm install @uselemma/tracing @opentelemetry/api
```

If you want automatic per-LLM-call spans, also install the provider instrumentation packages:

```shell
npm install @opentelemetry/instrumentation @arizeai/openinference-instrumentation-openai @arizeai/openinference-instrumentation-anthropic
```

Register OpenTelemetry once at startup

Call `registerOTel()` once at startup, before any code that creates spans.

```typescript
// instrumentation.ts (Next.js)
export async function register() {
  if (process.env.NEXT_RUNTIME === "nodejs") {
    const { registerOTel } = await import("@uselemma/tracing");
    registerOTel();
  }
}
```

```typescript
// tracer.ts (plain Node.js)
import { registerOTel } from "@uselemma/tracing";

registerOTel();
```

Create a run

A run is a single top-level `wrapAgent` execution. Return both the wrapped function's result and the `runId` so you can attach external signals to the run later.

```typescript
import { wrapAgent } from "@uselemma/tracing";

export const callAgent = async (userMessage: string) => {
  const wrapped = wrapAgent("my-agent", async ({ onComplete }, input) => {
    // llmCall is a placeholder for your own LLM invocation.
    const response = await llmCall(input.userMessage);
    onComplete(response);
    return response;
  });

  const { result, runId } = await wrapped({ userMessage });
  return { result, runId };
};
```
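The contract the example above relies on can be sketched independently of the SDK. This is not Lemma's implementation, only an illustration of the shape: the wrapper runs the inner function and pairs its result with a run identifier.

```typescript
// A minimal sketch of the wrapper contract, NOT the Lemma implementation:
// the wrapped call resolves to both the inner result and a run id.
type WrappedResult<T> = { result: T; runId: string };

function wrapSketch<I, T>(
  name: string,
  fn: (input: I) => Promise<T>
): (input: I) => Promise<WrappedResult<T>> {
  return async (input: I) => {
    // Illustrative id; the real SDK generates its own run ids.
    const runId = `${name}-${Math.random().toString(36).slice(2)}`;
    const result = await fn(input);
    return { result, runId };
  };
}
```

Keeping `runId` alongside `result` is what lets callers attach external signals to the run after the agent has returned.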

Optional: add OpenInference instrumentors

Use this when you want provider-generated LLM spans to appear as children of the run.

```typescript
import { registerOTel } from "@uselemma/tracing";
import { registerInstrumentations } from "@opentelemetry/instrumentation";
import { OpenAIInstrumentation } from "@arizeai/openinference-instrumentation-openai";

const provider = registerOTel();
registerInstrumentations({
  instrumentations: [new OpenAIInstrumentation()],
  tracerProvider: provider,
});
```

Next Steps