Use agent() for the standard case: an agent function that calls an LLM once and returns a single response.
import { registerOTel, agent } from "@uselemma/tracing";

registerOTel();

const wrapped = agent("my-agent", async (input: { userMessage: string }) => {
  const response = await callLLM(input.userMessage); // your own LLM call
  return response; // wrapper auto-captures output and closes the span
});

const { result, runId } = await wrapped({ userMessage: "Hello" });
To get per-call LLM visibility (prompt, response, token counts), register the OpenInference instrumentor that matches your provider at startup. Details: Adding provider instrumentation.
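As one hedged sketch of that startup registration, assuming the OpenAI provider and the community @arizeai/openinference-instrumentation-openai package (the instrumentor package and class name will differ for other providers; check the provider instrumentation guide for yours):

```typescript
import { registerOTel } from "@uselemma/tracing";
import { registerInstrumentations } from "@opentelemetry/instrumentation";
// Assumed provider instrumentor; swap in the package for your LLM provider.
import { OpenAIInstrumentation } from "@arizeai/openinference-instrumentation-openai";

// Register the tracer provider first, then the instrumentor, so each
// LLM call made inside an agent span is recorded as a child span with
// prompt, response, and token attributes attached.
registerOTel();

registerInstrumentations({
  instrumentations: [new OpenAIInstrumentation()],
});
```

This runs once per process, before any agent is invoked; agents wrapped afterwards need no changes to pick up the per-call spans.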