For initialization (`registerOTel()`) and the run boundary (`agent()`), see Overview and Quickstart.
## Span helpers (recommended)
The `@uselemma/tracing` and `uselemma_tracing` packages export typed helpers that wrap functions in child spans automatically. Use these instead of raw `startActiveSpan` calls: they handle error recording, span lifecycle, and automatic I/O capture for you.
| Helper | `span.type` | Use for |
|---|---|---|
| `trace(name, fn)` | (none) | General-purpose child span |
| `tool(name, fn)` | `tool` | Tool / function execution |
| `llm(name, fn)` | `generation` | LLM call (when OpenInference is not used) |
| `retrieval(name, fn)` | `retriever` | Vector search, document retrieval |
Each helper creates a child span nested under the active `agent()` span without any extra wiring.
Automatic I/O capture: every helper serializes the function's input as `input.value` and its return value as `output.value` on the span. These appear as "Input" and "Output" in the Lemma trace view without any manual `setAttribute` calls.
In TypeScript, the helpers are always called as `tool("name", fn)`. In Python, they also work as decorators (`@tool("name")`).
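The behavior described above can be sketched framework-free. The stand-in span dicts below are illustrative only (the real helpers create OpenTelemetry spans), and `get_weather` is a hypothetical tool:

```python
import functools
import json

# Stand-in span store, for illustration only. The real uselemma helpers
# create OpenTelemetry spans via the configured tracer.
SPANS = []

def tool(name):
    """Sketch of a tool() helper: wraps fn in a child span, records
    input.value / output.value, and marks errors on the span."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            span = {"name": name, "span.type": "tool", "attributes": {}}
            SPANS.append(span)
            # Automatic I/O capture: serialize the input before the call...
            span["attributes"]["input.value"] = json.dumps(
                {"args": args, "kwargs": kwargs}
            )
            try:
                result = fn(*args, **kwargs)
            except Exception as exc:
                # ...and record errors instead of silently dropping the span.
                span["status"] = "error"
                span["attributes"]["error.message"] = str(exc)
                raise
            span["attributes"]["output.value"] = json.dumps(result)
            span["status"] = "ok"
            return result
        return wrapper
    return decorator

@tool("get_weather")
def get_weather(city):
    # Hypothetical tool used only for this sketch.
    return {"city": city, "temp_c": 21}

get_weather("Paris")
print(SPANS[0]["attributes"]["output.value"])  # → {"city": "Paris", "temp_c": 21}
```

The same wrapper doubles as a plain function wrapper, `tool("get_weather")(get_weather)`, which mirrors the TypeScript `tool("name", fn)` calling convention.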
## LLM step spans (raw OTel)
For full control over step-level attributes (model, tokens, cost, finish reason), create child spans manually using the OpenTelemetry API.
## Tool call spans (raw OTel)
The `tool()` helper above is simpler; use raw `startActiveSpan` only when you need explicit control over attributes like `tool.args` and `tool.result`.
## Next Steps
- Custom attributes — attach user ID, session, and environment metadata to run spans
- Multi-turn threads — link related runs into a conversation thread
- Troubleshooting — spans not appearing, nesting issues
- From-Scratch Agent recipe — framework-free step-by-step instrumentation walkthrough
- Adding provider instrumentation — auto-instrument your provider SDK so `gen_ai.chat` child spans appear without manual step spans

