New to Lemma? Lemma is an observability platform for AI agents. Every agent execution becomes a trace — a structured record of timing, inputs, outputs, LLM calls, and tool invocations — so you can understand exactly what your agent did and where it went wrong. Learn more about the concepts →
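To build intuition for what a trace contains, here is a small sketch of the run/span structure described above. The field and type names are illustrative only, not the Lemma wire format, and the `spanDurations` helper is a hypothetical utility for this sketch:

```typescript
// Hypothetical trace shape, for intuition only: names and fields here are
// illustrative, not the Lemma wire format.
interface Span {
  name: string;            // e.g. an LLM call or tool invocation
  startMs: number;         // timing captured by instrumentation
  endMs: number;
  input?: unknown;
  output?: unknown;
  children: Span[];        // child spans, e.g. per-LLM-call details
}

interface Trace {
  runId: string;           // the identifier you look up in the dashboard
  root: Span;              // the agent execution itself
}

// Walk a trace and report how long each span took.
function spanDurations(span: Span, out: Array<[string, number]> = []): Array<[string, number]> {
  out.push([span.name, span.endMs - span.startMs]);
  for (const child of span.children) spanDurations(child, out);
  return out;
}
```

The key idea is the nesting: an agent run is the root span, and each LLM call or tool invocation hangs off it as a child with its own timing and payloads.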

Agentic setup

Install the Lemma AI skill to let your coding agent integrate Lemma automatically. It detects your framework, picks the right integration path, and knows the common pitfalls.
Point your agent at the skill repository and ask it to add tracing:
Install the Lemma AI skill from github.com/uselemma/skills
and use it to add tracing to this application.

Manual setup

1. Install the SDK

npm install @uselemma/tracing
2. Set environment variables

Find your API key and project ID in Lemma project settings.
export LEMMA_API_KEY="lma_..."
export LEMMA_PROJECT_ID="proj_..."
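A missing key usually surfaces later as a silent failure to export traces, so it can be worth failing fast at startup. A minimal defensive sketch, where the variable names come from this page but the `assertLemmaEnv` helper is our own illustration, not part of the SDK:

```typescript
// Throw at startup if the Lemma credentials from the step above are missing.
// assertLemmaEnv is an illustrative helper, not part of @uselemma/tracing.
function assertLemmaEnv(env: Record<string, string | undefined>): void {
  for (const name of ["LEMMA_API_KEY", "LEMMA_PROJECT_ID"]) {
    if (!env[name]) {
      throw new Error(`Missing environment variable: ${name}`);
    }
  }
}

// In your app you would call assertLemmaEnv(process.env) before registering the tracer.
```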
3. Register the tracer and wrap your agent

Pick the tab that matches your setup. Each example sends a trace to Lemma and prints the runId you can look up in the dashboard.
// instrumentation.ts (Next.js) or tracer.ts (Node.js)
import { registerOTel } from "@uselemma/tracing";
registerOTel();

// agent code, e.g. agent.ts
import { agent } from "@uselemma/tracing";
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

const myAgent = agent("my-agent", async (input: string) => {
  const { text } = await generateText({
    model: openai("gpt-4o-mini"),
    prompt: input,
    experimental_telemetry: { isEnabled: true },
  });
  return text; // wrapper auto-captures output and closes the span
});

const { result, runId } = await myAgent("What is the capital of France?");
console.log(result);  // Paris
console.log(runId);   // look this up in the Lemma dashboard
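Real agents also fail, and you will usually want failures logged alongside the happy path. A minimal error-handling sketch; `fakeAgent` is a stand-in stub so the example is self-contained (in your app it would be the wrapped agent from this step), and whether the wrapper records failures before rethrowing is an assumption here, not something this page documents:

```typescript
// fakeAgent stands in for the wrapped agent above so this sketch runs on its own.
const fakeAgent = async (input: string) => {
  if (!input) throw new Error("empty input");
  return { result: input.toUpperCase(), runId: "run_stub" };
};

async function runOnce(input: string): Promise<string> {
  try {
    const { result, runId } = await fakeAgent(input);
    console.log(`run ${runId}:`, result);
    return result;
  } catch (err) {
    // Log and rethrow so callers can decide how to recover; we assume the
    // tracing wrapper has already closed its span by this point.
    console.error("agent run failed:", err);
    throw err;
  }
}
```

Keeping the catch block thin (log, then rethrow) avoids swallowing errors that the trace itself will give you better tools to diagnose.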
4. See your trace in Lemma

Open the Lemma dashboard and navigate to Traces. Your run appears within a few seconds — click it to see timing, input, output, and any child spans from provider instrumentation.

What’s next

Concepts

Understand runs, spans, threads, and how they relate.

Tracing guides

Choose the right setup for your integration depth.

Provider instrumentation

Add per-LLM-call child spans with prompt, completion, and token data.

Recipes

Complete copy-paste examples for common patterns.

Query traces from your IDE

Use the Lemma MCP server to search and inspect traces from Cursor, Claude Desktop, or any MCP client.