How a Trace Is Structured
A single agent execution produces a tree of spans. `agent()` creates the root, and everything that runs inside it — provider SDK calls, tool executions, retrieval steps — becomes a child span.
```
agent()
└─ ai.agent.run           run ID · input · output · timing
   ├─ gen_ai.chat         prompt · completion · tokens
   ├─ tool.lookup-order   input · output
   └─ gen_ai.chat         second LLM call
```
Lemma records `gen_ai.chat` child spans when you call OpenAI or Anthropic. The `tool.` spans come from the `tool()` helper or your own `tracer.startActiveSpan` calls.
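As a rough sketch of this structure, here is a toy in-memory span tree (this is not the Lemma SDK; only the span names mirror the trace shown above):

```typescript
// Toy in-memory span tree, illustrating the shape of a single agent run.
// NOT the Lemma SDK; only the span names follow the example trace.
interface Span {
  name: string;
  children: Span[];
}

function span(name: string, children: Span[] = []): Span {
  return { name, children };
}

// One agent execution: the run span is the root, and every provider call
// or tool execution inside it becomes a child span.
const run = span("ai.agent.run", [
  span("gen_ai.chat"),        // first LLM call
  span("tool.lookup-order"),  // tool execution
  span("gen_ai.chat"),        // second LLM call
]);

// Walking the tree depth-first yields the execution order.
function names(s: Span): string[] {
  return [s.name, ...s.children.flatMap(names)];
}
console.log(names(run));
// → ["ai.agent.run", "gen_ai.chat", "tool.lookup-order", "gen_ai.chat"]
```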
Traces
A trace represents a single execution of your agent from start to finish. It captures:
- Inputs — The initial state and parameters passed to your agent
- Outputs — The final result produced by your agent
- Spans — Nested operations within the execution (LLM calls, tool invocations, etc.)
- Timing — Duration and timing of each operation
- Metadata — Additional context like model names, token counts, and error states
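The fields above can be pictured as a record like the following. This shape is purely illustrative; the field names here are assumptions, not Lemma's actual schema:

```typescript
// Illustrative shape of a trace record. Field names are assumptions
// for explanation only, not Lemma's wire format.
interface TraceRecord {
  runId: string;
  input: unknown;   // initial state and parameters
  output: unknown;  // final result
  spans: { name: string; durationMs: number; error?: string }[];
  metadata: { model?: string; tokens?: number };
}

const example: TraceRecord = {
  runId: "run_123",
  input: { question: "Where is my order?" },
  output: { answer: "It ships tomorrow." },
  spans: [
    { name: "gen_ai.chat", durationMs: 820 },
    { name: "tool.lookup-order", durationMs: 45 },
  ],
  metadata: { model: "gpt-4o", tokens: 512 },
};
```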
Run ID
The run ID is Lemma’s identifier for a specific agent execution. It’s returned by `agent` and used to:
- Query and filter traces in the dashboard
- Link multi-turn conversations via a shared thread ID
Thread ID
A thread ID is an optional string you attach to multiple agent runs so Lemma can treat them as part of the same conversation or multi-turn flow. Set it in the SDK via the per-invocation options on the wrapped function (threadId in TypeScript, thread_id in Python); it is stored on the run span as lemma.thread_id.
When you use the same non-empty thread ID across multiple traces, the Lemma dashboard can surface a Thread · N indicator on the traces list (for threads with more than one finished trace). Thread IDs are not inferred—you must pass them explicitly when you call the wrapped agent.
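The grouping logic can be sketched like this. The records below are stand-ins for finished traces, not real SDK objects; only the `threadId` semantics follow the description above:

```typescript
// Simulates how finished traces sharing a thread ID group into one
// conversation. These records are stand-ins, not real SDK objects.
interface FinishedTrace {
  runId: string;
  threadId?: string; // set per invocation: threadId (TS) / thread_id (Python)
}

const traces: FinishedTrace[] = [
  { runId: "run_1", threadId: "chat-42" },
  { runId: "run_2", threadId: "chat-42" },
  { runId: "run_3" }, // no thread ID: never grouped, IDs are not inferred
];

// Count finished traces per non-empty thread ID.
const counts = new Map<string, number>();
for (const t of traces) {
  if (t.threadId) counts.set(t.threadId, (counts.get(t.threadId) ?? 0) + 1);
}

// A "Thread · N" indicator only applies to threads with more than one
// finished trace.
const badge = (id: string) =>
  (counts.get(id) ?? 0) > 1 ? `Thread · ${counts.get(id)}` : null;

console.log(badge("chat-42")); // → "Thread · 2"
```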
Spans
Spans are the building blocks of a trace. Each span represents a single operation within your agent’s execution, such as:
- An LLM generation call
- A tool or function invocation
- A database query
- A custom operation you want to track

Nested spans form the execution timeline, showing:
- Which operations happened in what order
- How long each operation took
- Where errors occurred in the execution path
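To make concrete what a span records, here is a minimal stand-in for wrapping an operation in a span. It is a toy helper, not the OpenTelemetry `tracer.startActiveSpan` API, but it captures the same three things: name, duration, and error state:

```typescript
// Toy stand-in for span wrapping: runs a function inside a "span" and
// records timing and error state. Not the real OpenTelemetry API.
interface RecordedSpan {
  name: string;
  durationMs: number;
  error?: string;
}

const recorded: RecordedSpan[] = [];

function withSpan<T>(name: string, fn: () => T): T {
  const start = Date.now();
  const span: RecordedSpan = { name, durationMs: 0 };
  try {
    return fn();
  } catch (e) {
    span.error = String(e); // errors are captured on the span
    throw e;
  } finally {
    span.durationMs = Date.now() - start;
    recorded.push(span); // every operation ends up in the trace
  }
}

// A database query wrapped in a custom span:
const rows = withSpan("db.query", () => ["order-1", "order-2"]);

// A failing operation still produces a span, marked with the error.
try {
  withSpan("tool.flaky", () => { throw new Error("timeout"); });
} catch {}

console.log(recorded.map(s => s.name)); // → ["db.query", "tool.flaky"]
```

The real `tracer.startActiveSpan` additionally handles context propagation, so child spans nest under the active one automatically.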
Projects
A project is the top-level container in Lemma. It groups all traces from your agent(s).

Each project has:
- A project ID — Used when sending traces and making API calls
- An API key — For authentication
- A dashboard for viewing and analyzing data
Tracer Provider
The tracer provider is the OpenTelemetry component responsible for:
- Creating and managing spans
- Exporting trace data to Lemma’s OTLP endpoint
- Handling batching and retries

The SDK configures the tracer provider with:
- Lemma’s OTLP endpoint URL
- Your API key and project ID
- Span processors (`RunBatchSpanProcessor` groups all spans for an agent run and exports them together when the run completes)
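The run-scoped batching can be sketched as follows. This is a simplified toy, not Lemma's actual `RunBatchSpanProcessor`; it only shows the buffer-per-run, flush-on-run-end idea:

```typescript
// Toy version of run-scoped batching: buffer every span for a run and
// export them together when the run ends. Not Lemma's real processor.
interface EndedSpan { runId: string; name: string }

class RunBatcher {
  private buffers = new Map<string, EndedSpan[]>();
  constructor(private exporter: (batch: EndedSpan[]) => void) {}

  onSpanEnd(span: EndedSpan): void {
    const buf = this.buffers.get(span.runId) ?? [];
    buf.push(span);
    this.buffers.set(span.runId, buf);
  }

  // Called when the root run span completes: flush the whole run at once.
  onRunEnd(runId: string): void {
    const batch = this.buffers.get(runId) ?? [];
    this.buffers.delete(runId);
    if (batch.length > 0) this.exporter(batch);
  }
}

const exported: EndedSpan[][] = [];
const batcher = new RunBatcher(batch => exported.push(batch));

batcher.onSpanEnd({ runId: "run_1", name: "gen_ai.chat" });
batcher.onSpanEnd({ runId: "run_1", name: "tool.lookup-order" });
batcher.onRunEnd("run_1"); // exports both spans in one batch

console.log(exported[0].map(s => s.name));
// → ["gen_ai.chat", "tool.lookup-order"]
```

Exporting per run (rather than on a timer, as OpenTelemetry's standard `BatchSpanProcessor` does) keeps every span of an execution in a single export.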
Next Steps
Now that you understand Lemma’s core concepts:
- Explore Tracing to start sending traces
- Learn about Manual Instrumentation to add custom spans to any function

