Use this for agents that loop — model calls tools, tools return results, model calls again — until a stop condition is reached. The run span stays open for the entire loop and closes when the wrapped function returns.
import { registerOTel, agent, tool, llm } from "@uselemma/tracing";

registerOTel();

// tool() wraps your dispatcher so each execution is traced as a tool.* child span.
const executeTool = tool("dispatch", async (name: string, args: Record<string, unknown>) => {
  return await runTool(name, args); // runTool is your own tool dispatcher
});

// llm() wraps your model call so each turn is traced as an llm.* child span.
const callLLM = llm("gpt-4o", async (history: unknown[]) => {
  return await callModel(history); // callModel is your own model client
});

const wrapped = agent("my-agent", async (input: { userMessage: string }) => {
  const history = [{ role: "user", content: input.userMessage }];

  while (true) {
    const response = await callLLM(history); // span: llm.gpt-4o

    if (response.stopReason === "end_turn") {
      return response.text;
    }

    for (const toolCall of response.toolCalls) {
      const result = await executeTool(toolCall.name, toolCall.args); // span: tool.dispatch
      history.push({ role: "tool", name: toolCall.name, content: String(result) });
    }
  }
});

const { result, runId } = await wrapped({ userMessage: "What is the weather in London?" });
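The snippet above assumes two helpers you supply yourself: `runTool` (executes a named tool) and `callModel` (calls the model with the conversation history). A hypothetical sketch of their shape, with a canned model that requests one tool call and then stops, might look like this — the types and behavior here are illustrative assumptions, not part of the library:

```typescript
// Hypothetical stand-ins for the user-provided helpers referenced above.
type ToolCall = { name: string; args: Record<string, unknown> };
type ModelResponse = {
  stopReason: "end_turn" | "tool_use";
  text: string;
  toolCalls: ToolCall[];
};

// Executes the named tool; throws for tools it does not know.
async function runTool(name: string, args: Record<string, unknown>): Promise<unknown> {
  if (name === "get_weather") return { city: args.city, tempC: 14 };
  throw new Error(`Unknown tool: ${name}`);
}

// Canned model: asks for a tool call on the first turn, then ends the turn
// once a tool result appears in the history.
async function callModel(history: unknown[]): Promise<ModelResponse> {
  const sawToolResult = history.some((m: any) => m.role === "tool");
  if (sawToolResult) {
    return { stopReason: "end_turn", text: "It is 14°C in London.", toolCalls: [] };
  }
  return {
    stopReason: "tool_use",
    text: "",
    toolCalls: [{ name: "get_weather", args: { city: "London" } }],
  };
}
```

With these in place the loop above runs two model turns: one that dispatches `get_weather`, and one that returns the final text.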
Key points:
  • Each tool execution gets its own tool.* child span via the tool() helper.
  • Each LLM call gets its own llm.* child span via the llm() helper.
  • The run span closes only after the whole loop finishes, so all turns are captured.
  • Error recording and span lifecycle are handled automatically by the helpers.
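The last point — automatic error recording and span lifecycle — can be pictured with a conceptual sketch. This is not the library's actual implementation, just the general shape such a helper tends to take: the span always closes, a thrown error is recorded on it, and the error is re-thrown so the caller still sees the failure.

```typescript
// Conceptual sketch only: how a span-wrapping helper can record errors and
// close the span automatically, whether the wrapped function succeeds or throws.
type Span = { name: string; status: "ok" | "error"; error?: unknown; ended: boolean };

// Finished spans are collected here so they can be inspected or exported.
const spans: Span[] = [];

function withSpan<A extends unknown[], R>(
  name: string,
  fn: (...args: A) => Promise<R>,
): (...args: A) => Promise<R> {
  return async (...args: A) => {
    const span: Span = { name, status: "ok", ended: false };
    try {
      return await fn(...args);
    } catch (err) {
      span.status = "error"; // record the failure on the span
      span.error = err;
      throw err; // re-throw so the caller still observes the error
    } finally {
      span.ended = true; // the span closes on every path
      spans.push(span);
    }
  };
}
```

Because the `finally` block runs on both paths, a tool that throws mid-loop still produces a closed, error-marked span rather than a dangling one.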