- a top-level run span using agent()
- generation spans for LLM calls using llm()
- tool-call spans using tool()
- run-level metadata for filtering
- error handling for failed runs and child spans
Prerequisites
- You have a Lemma API key and project ID.
- You can run a plain Node.js or Python app.
- Your agent loop is code you control directly (no framework wrapper).
Instrument the Agent
Wrap LLM calls with llm()
The llm() helper creates a generation span under the active run. Pass the model name as the span label.
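As a rough sketch of the pattern: the import path, the exact signature of llm(), and the stand-in implementation below are assumptions made so the example runs on its own — check the Lemma SDK for the real ones.

```typescript
// Hypothetical import -- consult the Lemma SDK for the actual path:
// import { llm } from "@lemma/sdk";

// Stand-in mimicking "open a generation span labeled with the model name,
// run the call inside it, end the span when the call settles".
const recorded: string[] = [];
function llm<T>(label: string, fn: () => Promise<T>): Promise<T> {
  recorded.push(label); // real SDK: start a generation span named `label`
  return fn();          // real SDK: also end the span when fn resolves/rejects
}

// Wrap the model call; the model name becomes the span label.
async function generateReply(prompt: string): Promise<string> {
  return llm("gpt-4o", async () => {
    // ...call your model provider here; stubbed for the sketch:
    return `echo: ${prompt}`;
  });
}
```

Every call to generateReply now produces one generation span under whatever run span is active.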
Wrap tools with tool()
The tool() helper creates a tool-call span nested under the active run.
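A minimal sketch of the wrapping pattern, with an assumed signature (tool(name, fn) returning a wrapped function) and a stand-in body so it is runnable — the real helper ships with the Lemma SDK:

```typescript
// Hypothetical import -- the real helper comes from the Lemma SDK:
// import { tool } from "@lemma/sdk";

// Stand-in mimicking "create a tool-call span nested under the active run".
const toolCalls: string[] = [];
function tool<A extends unknown[], R>(name: string, fn: (...args: A) => R) {
  return (...args: A): R => {
    toolCalls.push(name); // real SDK: open a tool-call span named `name`
    return fn(...args);   // real SDK: record the result/error, end the span
  };
}

// Wrap a plain function once; call the wrapped version from your agent loop.
const getWeather = tool("get_weather", (city: string) => {
  // ...real lookup here; stubbed for the sketch:
  return `sunny in ${city}`;
});
```

Calling getWeather("Oslo") then records one tool-call span named get_weather under the active run.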
Wire everything into one agent loop
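Putting the pieces together, one run span wraps the whole loop, with generation and tool-call spans nested inside it. This is a sketch under assumed signatures, with stand-in helpers (logging span names instead of emitting real spans) so the wiring is runnable on its own:

```typescript
// Hypothetical imports -- consult the Lemma SDK for real names/signatures:
// import { agent, llm, tool } from "@lemma/sdk";

// Minimal stand-ins: each helper "opens" a span by logging its name,
// then runs the wrapped work.
const spans: string[] = [];
const agent = async <T>(name: string, fn: () => Promise<T>): Promise<T> => {
  spans.push(`run:${name}`);  // real SDK: top-level run span
  return fn();
};
const llm = async <T>(model: string, fn: () => Promise<T>): Promise<T> => {
  spans.push(`llm:${model}`); // real SDK: generation span under the run
  return fn();
};
const tool = <A extends unknown[], R>(name: string, fn: (...args: A) => R) =>
  (...args: A): R => {
    spans.push(`tool:${name}`); // real SDK: tool-call span under the run
    return fn(...args);
  };

const kbSearch = tool("kb_search", (q: string) => `docs for ${q}`);

// One run span wrapping one tool call between two generations.
async function runAgent(question: string): Promise<string> {
  return agent("support-agent", async () => {
    const plan = await llm("gpt-4o", async () => `lookup:${question}`);
    const data = kbSearch(plan);
    return llm("gpt-4o", async () => `answer from ${data}`);
  });
}
```

Because llm() and kbSearch are called inside the agent() callback, their spans land under the run rather than floating loose.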
No try/finally or manual span.end() is needed.
Troubleshooting checklist
- No runs visible: ensure registerOTel/register_otel runs before your app logic.
- Run appears but no child spans: make sure llm()- and tool()-wrapped functions are called inside the agent() wrapper.
- Run never closes: in streaming mode, ensure ctx.complete() is called inside the finish callback.
- Missing metadata filters: verify attributes are set on the run span, not on unrelated spans.
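The streaming pitfall above can be sketched as follows. The shape of ctx and of the finish callback here are assumptions (the real ctx comes from Lemma's agent() wrapper); stand-ins make the example self-contained:

```typescript
// Hypothetical shape -- the real `ctx` is provided by the Lemma agent() wrapper.
type RunCtx = { completed: boolean; complete: () => void };
const makeCtx = (): RunCtx => {
  const ctx: RunCtx = { completed: false, complete: () => { ctx.completed = true; } };
  return ctx;
};

// A streaming call typically returns immediately and finishes later, so the
// run span must be closed in the finish callback, not after the call itself.
function streamReply(chunks: string[], onFinish: () => void) {
  for (const chunk of chunks) {
    // ...emit `chunk` to the client here...
    void chunk;
  }
  onFinish(); // fires once the stream has ended
}

const ctx = makeCtx();
streamReply(["hel", "lo"], () => {
  ctx.complete(); // close the run span only after streaming finishes
});
```

Calling ctx.complete() right after streamReply() returns would close the run before the last chunks are traced.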

