This guide adds the following to a custom agent loop:

- a top-level run span
- manual step spans for LLM calls
- manual tool-call spans for tool execution
- run-level metadata for filtering
- error handling for failed runs and child spans
## Prerequisites
- You have a Lemma API key and project ID.
- You can run a plain Node.js or Python app.
- Your agent loop is code you control directly (no framework wrapper).
## Instrument the Agent
### Wrap your agent as a run
Use `wrapAgent` (TypeScript) or `wrap_agent` (Python) to create the top-level run span (`ai.agent.run`).
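The wrap-as-run pattern can be sketched with a stdlib-only stand-in. The `Span` class and the decorator body below are illustrative, not the Lemma SDK; only the `wrap_agent` name and the `ai.agent.run` span name come from this guide, and the `agent.question` attribute key is an assumption:

```python
import functools

class Span:
    """Illustrative stand-in for a tracing span: name, attributes, status."""
    def __init__(self, name):
        self.name = name
        self.attributes = {}
        self.status = "UNSET"

    def set_attribute(self, key, value):
        self.attributes[key] = value

def wrap_agent(fn):
    """Run fn inside a top-level ai.agent.run span (hypothetical sketch)."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        run = Span("ai.agent.run")
        wrapper.last_run = run  # exposed here only so the sketch is inspectable
        try:
            result = fn(run, *args, **kwargs)
            run.status = "OK"
            return result
        except Exception:
            run.status = "ERROR"
            raise  # rethrow so the caller still sees the failure
    return wrapper

@wrap_agent
def agent(run, question):
    # the run span is passed in so the loop can attach metadata to it
    run.set_attribute("agent.question", question)
    return f"(answer to: {question})"
```

Each call to `agent(...)` then produces one run span whose status mirrors success or failure.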
### Add a step span for each LLM call
A step is a child span inside the run that captures one LLM request/response.
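A step can be sketched as a context manager that appends a child span to the run. This is a stdlib-only illustration, not the Lemma API; the `ai.agent.step` span name and the `llm.*` attribute keys are assumptions (only `ai.agent.run` is named in this guide):

```python
from contextlib import contextmanager

@contextmanager
def step_span(run, model):
    """Open one child 'step' span under the run for a single LLM call."""
    span = {"name": "ai.agent.step",  # assumed name
            "attributes": {"llm.model": model},
            "status": "UNSET"}
    run["children"].append(span)
    try:
        yield span
        span["status"] = "OK"
    except Exception:
        span["status"] = "ERROR"
        raise

run = {"name": "ai.agent.run", "children": []}
with step_span(run, model="example-model") as step:
    # record the request/response on the step span, not on the run
    step["attributes"]["llm.prompt"] = "Should I call a tool?"
    step["attributes"]["llm.response"] = "Yes: get_weather"
```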
### Add tool-call spans around tools
A tool call is another child span nested under the active run.
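The same nesting pattern applies to tools. In the stdlib-only sketch below, the span and attribute names are illustrative and `get_weather` is a hypothetical tool:

```python
from contextlib import contextmanager

@contextmanager
def tool_span(run, tool_name):
    """Open one child tool-call span under the active run."""
    span = {"name": "ai.agent.tool_call",  # assumed name
            "attributes": {"tool.name": tool_name},
            "status": "UNSET"}
    run["children"].append(span)
    try:
        yield span
        span["status"] = "OK"
    except Exception:
        span["status"] = "ERROR"
        raise

def get_weather(city):
    """Hypothetical tool; a real tool would call an external API here."""
    return f"sunny in {city}"

run = {"name": "ai.agent.run", "children": []}
with tool_span(run, "get_weather") as span:
    span["attributes"]["tool.output"] = get_weather("Oslo")
```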
### Wire everything into one agent loop
Below is a minimal sequence:
- run starts with `wrapAgent`/`wrap_agent`
- step span records the LLM decision
- tool-call span records the tool execution
- step span records the final LLM response
- run ends with `onComplete`/`on_complete`
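The sequence above can be sketched end to end with stdlib-only stand-ins. The `span` helper, the `on_complete` callback signature, and all attribute values here are illustrative assumptions, not the Lemma API:

```python
from contextlib import contextmanager

@contextmanager
def span(trace, name, **attributes):
    """Record a span in the trace list; status mirrors success/failure."""
    s = {"name": name, "attributes": dict(attributes), "status": "UNSET"}
    trace.append(s)
    try:
        yield s
        s["status"] = "OK"
    except Exception:
        s["status"] = "ERROR"
        raise

def run_agent(question, on_complete):
    trace = []
    with span(trace, "ai.agent.run", question=question):
        with span(trace, "step") as s:           # LLM decides what to do
            s["attributes"]["decision"] = "call get_weather"
        with span(trace, "tool_call", tool="get_weather") as s:
            s["attributes"]["output"] = "sunny"  # hypothetical tool result
        with span(trace, "step") as s:           # LLM writes the final answer
            s["attributes"]["response"] = "It is sunny."
    on_complete(trace)                           # run ends here
    return trace
```

One call yields a single run span followed by step, tool-call, and step children, all closed in order.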
## Troubleshooting checklist
- No runs visible: ensure `registerOTel`/`register_otel` runs before your app logic.
- Run appears but no child spans: make sure step and tool-call spans are created inside the wrapped function.
- Run never closes: ensure `onComplete`/`on_complete` is reached, or that errors are rethrown.
- Missing metadata filters: verify attributes are set on the run span, not on unrelated spans.
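The "run never closes" item can be illustrated with a stdlib-only sketch (not the Lemma API): end the run span on both the success and the error path, then rethrow so the caller still sees the failure. The `finished` list stands in for an exporter:

```python
finished = []  # stand-in for an exporter that receives ended spans

class Span:
    """Minimal illustrative span with an explicit end() call."""
    def __init__(self, name):
        self.name = name
        self.status = "UNSET"
        self.ended = False

    def end(self, status):
        self.status = status
        self.ended = True
        finished.append(self)

def agent_run(fail):
    run = Span("ai.agent.run")
    try:
        if fail:
            raise RuntimeError("tool crashed")
        run.end("OK")
        return "done"
    except RuntimeError:
        run.end("ERROR")  # close the span even on failure...
        raise             # ...then rethrow so callers see the error
```

If the `except` branch swallowed the error without calling `end`, the run would stay open and never appear as closed in the trace.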

