- trace a custom agent loop end-to-end
- control exactly how run, step, and tool-call spans are created
- attach business metadata for filtering in the dashboard
- combine Lemma export with other OpenTelemetry destinations
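The multi-destination pattern in the last bullet can be sketched as a fan-out exporter. The `SpanData`, `SpanExporter`, and `MultiExporter` shapes below are simplified illustrative stand-ins, not the real OpenTelemetry or Lemma types; with the actual OpenTelemetry SDK you would typically register one span processor per destination on the tracer provider instead.

```typescript
// Illustrative fan-out sketch: every exported batch is forwarded to each
// configured destination (e.g. Lemma plus a local collector). The types
// here are assumptions for the sake of a self-contained example.

interface SpanData {
  name: string;
  attributes: Record<string, string>;
}

interface SpanExporter {
  export(spans: SpanData[]): void;
}

// Fan-out: forward every batch to each configured destination.
class MultiExporter implements SpanExporter {
  constructor(private readonly destinations: SpanExporter[]) {}
  export(spans: SpanData[]): void {
    for (const dest of this.destinations) dest.export(spans);
  }
}

// Two in-memory stand-ins for, say, Lemma and a local collector.
class InMemoryExporter implements SpanExporter {
  readonly received: SpanData[] = [];
  export(spans: SpanData[]): void {
    this.received.push(...spans);
  }
}

const lemma = new InMemoryExporter();
const local = new InMemoryExporter();
const exporter = new MultiExporter([lemma, local]);

exporter.export([{ name: "ai.agent.run", attributes: { session_id: "s-1" } }]);
// Both destinations now hold the same span batch.
```

The design point is that fan-out happens at the export layer, so instrumentation code never needs to know how many backends are configured.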
## Concept Mapping
This section maps common observability terms used across agent platforms onto their Lemma equivalents:

| Concept | In Lemma |
|---|---|
| Run | A top-level `wrapAgent` / `wrap_agent` execution (`ai.agent.run`) |
| Step | An LLM-call span inside the run (manual or OpenInference-generated) |
| Tool call | A child span that captures tool name, args, result, and status |
| Session | A group of related runs linked by shared metadata (for example `session_id`) |
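The Session row deserves a concrete illustration: runs are independent top-level traces, and a session is nothing more than the set of runs whose metadata shares the same `session_id`. The `Run` shape and `groupBySession` helper below are hypothetical, written only to show how that grouping works.

```typescript
// Sketch of the Session concept: group independent runs into sessions
// by a shared session_id metadata key. The Run type is an assumption,
// not part of the Lemma API.

interface Run {
  traceId: string;
  name: string; // e.g. "ai.agent.run"
  metadata: Record<string, string>;
}

// Bucket runs by their session_id; runs without one land in a fallback bucket.
function groupBySession(runs: Run[]): Map<string, Run[]> {
  const sessions = new Map<string, Run[]>();
  for (const run of runs) {
    const id = run.metadata["session_id"] ?? "(no session)";
    const bucket = sessions.get(id) ?? [];
    bucket.push(run);
    sessions.set(id, bucket);
  }
  return sessions;
}

const runs: Run[] = [
  { traceId: "t1", name: "ai.agent.run", metadata: { session_id: "chat-42" } },
  { traceId: "t2", name: "ai.agent.run", metadata: { session_id: "chat-42" } },
  { traceId: "t3", name: "ai.agent.run", metadata: { session_id: "chat-99" } },
];

const sessions = groupBySession(runs); // "chat-42" → 2 runs, "chat-99" → 1 run
```

This is also why session filtering works in the dashboard without any special session object: attach the same `session_id` to every run and the grouping falls out of the metadata.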
## Using These Docs
Start with the setup page for your runtime, then use the run, step, and tool-call pages as reference docs for the parts of the lifecycle you need to instrument.

- TypeScript: Setup, Run usage, Step usage, Tool call usage
- Python: Setup, Run usage, Step usage, Tool call usage
## Next Steps
- Start with TypeScript setup or Python setup
- Follow Manual Instrumentation for a step-by-step framework-free guide
- Use Advanced for streaming, custom child spans, and multi-destination export patterns
- Enable Debug mode if traces are not appearing in the dashboard as expected

