New to Lemma? Lemma is an observability platform for AI agents. Every agent execution becomes a trace — a structured record of timing, inputs, outputs, LLM calls, and tool invocations — so you can understand exactly what your agent did and where it went wrong. Learn more about the concepts →
Agentic setup
Install the Lemma AI skill to let your coding agent integrate Lemma automatically. It detects your framework, picks the right integration path, and knows the common pitfalls.
- Ask your coding agent
- Manual installation
Point your agent at the skill repository and ask it to add tracing:
Manual setup
Set environment variables
Find your API key and project ID in Lemma project settings.
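The exact environment-variable names your SDK reads depend on your setup; assuming names like `LEMMA_API_KEY` and `LEMMA_PROJECT_ID` (an assumption — confirm them in your Lemma project settings), a quick preflight check might look like:

```python
import os

# LEMMA_API_KEY / LEMMA_PROJECT_ID are assumed names -- confirm the exact
# variable names your Lemma SDK expects in your project settings.
api_key = os.environ.get("LEMMA_API_KEY", "")
project_id = os.environ.get("LEMMA_PROJECT_ID", "")

configured = bool(api_key and project_id)
print("Lemma configured" if configured else "Set LEMMA_API_KEY and LEMMA_PROJECT_ID first")
```

Running this before your agent starts catches missing credentials early instead of failing on the first trace export.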
Register the tracer and wrap your agent
Pick the tab that matches your setup. Each example sends a trace to Lemma and prints the runId you can look up in the dashboard.
- Vercel AI SDK
- OpenAI Agents SDK
- OpenAI SDK (TypeScript)
- OpenAI SDK (Python)
- Anthropic SDK (Python)
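Whichever tab you pick, the integration follows the same shape: register a tracer once, then wrap your agent's entrypoint so each execution is recorded as a run. The stub below sketches that pattern in plain Python — every name here is illustrative, not Lemma's actual API; use the real registration call from the tab that matches your SDK.

```python
import functools
import time
import uuid

# Illustrative sketch only -- Lemma's real SDK API may differ. This shows the
# shape of the pattern: wrap the agent entrypoint so each execution becomes a
# trace record with a runId, timing, input, and output.
def traced(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        run_id = uuid.uuid4().hex          # the runId you'd look up in the dashboard
        start = time.monotonic()
        output = fn(*args, **kwargs)
        elapsed = time.monotonic() - start
        # A real tracer would export this record to Lemma; here we just print it.
        print(f"runId={run_id} duration={elapsed:.3f}s")
        return output
    return wrapper

@traced
def my_agent(question: str) -> str:
    # stand-in for your agent logic (LLM calls, tool invocations, ...)
    return f"answer to: {question}"

result = my_agent("What is observability?")
```

The decorator approach means your agent code stays unchanged; only the entrypoint gains tracing.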
See your trace in Lemma
Open the Lemma dashboard and navigate to Traces. Your run appears within a few seconds — click it to see timing, input, output, and any child spans from provider instrumentation.
What’s next
Concepts
Understand runs, spans, threads, and how they relate.
Tracing guides
Choose the right setup for your integration depth.
Provider instrumentation
Add per-LLM-call child spans with prompt, completion, and token data.
Recipes
Complete copy-paste examples for common patterns.
Query traces from your IDE
Use the Lemma MCP server to search and inspect traces from Cursor, Claude Desktop, or any MCP client.