
Trace ends after the first generation

Symptom: An agent that loops internally (model → tools → model → …) produces a trace that captures only the first LLM call. Subsequent rounds and tool calls are missing.

Cause: autoEndRoot: true (the default) ends the root span when all direct child spans finish. In an agentic loop there is an idle gap between rounds — the first LLM span closes, your tool code runs, and before the next LLM call starts there is a moment with zero active child spans. The RunBatchSpanProcessor treats this as the run completing and flushes early.

Fix: Use autoEndRoot: false and call onComplete once after the full loop finishes:
const wrappedFn = wrapAgent(
  "my-agent",
  async ({ onComplete, recordError }, input) => {
    try {
      let finalResponse = "";

      while (true) {
        const response = await callModel(input.userMessage, history);

        if (response.stop_reason === "end_turn") {
          finalResponse = response.text;
          break;
        }

        await runTools(response.tool_calls, history);
      }

      onComplete({ text: finalResponse }); // once, after all rounds
      return finalResponse;
    } catch (err) {
      recordError(err instanceof Error ? err : new Error(String(err)));
      throw err;
    }
  },
  { autoEndRoot: false }
);
See autoEndRoot in depth for a full explanation of both modes.

Tool calls are missing from the trace

Symptom: LLM calls appear as child spans, but the tool executions between them are absent.

Cause: Provider instrumentors (OpenAI, Anthropic, etc.) only patch the LLM SDK — they have no visibility into your code that runs between calls. Tool execution, database lookups, and HTTP requests won’t produce spans automatically.

Fix: Wrap each tool execution in a manual tool.call span:
import { trace } from "@opentelemetry/api";

const tracer = trace.getTracer("my-agent");

async function executeTool(name: string, args: Record<string, unknown>) {
  return tracer.startActiveSpan("tool.call", async (span) => {
    span.setAttribute("tool.name", name);
    span.setAttribute("tool.args", JSON.stringify(args));
    try {
      const result = await runTool(name, args);
      span.setAttribute("tool.result", JSON.stringify(result));
      span.setAttribute("tool.status", "ok");
      return result;
    } catch (err) {
      span.recordException(err as Error);
      span.setAttribute("tool.status", "error");
      throw err;
    } finally {
      span.end();
    }
  });
}
See Tool call usage for the full reference.

Anthropic streaming breaks with auto-instrumentation

Symptom: After installing @arizeai/openinference-instrumentation-anthropic, the server throws messages.create(...).withResponse is not a function.

Cause: AnthropicInstrumentation patches messages.create(). The high-level messages.stream() helper internally calls messages.create(...).withResponse() — some versions of the instrumentation don’t preserve that method on the patched return value.

Fix: Replace messages.stream() with messages.create({ stream: true }) and consume the async iterable directly. This goes through the same patched create() call without the .withResponse() chain:
// Before (breaks with instrumentation):
const stream = anthropic.messages.stream({ model, messages, ... });
stream.on("text", (delta) => { ... });
const msg = await stream.finalMessage();

// After (compatible with instrumentation):
const stream = await anthropic.messages.create({ model, messages, ..., stream: true });
let fullText = "";
for await (const event of stream) {
  if (event.type === "content_block_delta" && event.delta.type === "text_delta") {
    fullText += event.delta.text;
  }
}
Alternatively, drop AnthropicInstrumentation and add manual LLM step spans instead.

Output is empty or not recorded

Symptom: The trace appears in the dashboard but the run output field is blank.

Cause: How output is captured depends on autoEndRoot:

| autoEndRoot | How output is stored |
| --- | --- |
| true (default) | The return value of the wrapped function. Make sure you return the final result. |
| false | The argument passed to onComplete(result). Make sure you call it before the function exits. |

In both modes, the value must be a plain object or string — not a stream, Promise, or callback.
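To make the constraint concrete, here is a small check you can run against a candidate output. This helper is purely illustrative and not part of the SDK:

```typescript
// Illustrative helper (not part of the SDK): mirrors the rule above.
// A run output must be a string or a plain serializable object.
function isValidRunOutput(value: unknown): boolean {
  if (typeof value === "string") return true;
  if (typeof value !== "object" || value === null) return false;
  // Reject Promise-likes: the output must already be resolved.
  if (typeof (value as { then?: unknown }).then === "function") return false;
  // Reject stream-likes (anything async-iterable).
  if (Symbol.asyncIterator in value) return false;
  // Require a plain object rather than a class instance or array.
  return Object.getPrototypeOf(value) === Object.prototype;
}
```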

Need more detail?

Enable Debug Mode to see a log line for every span start, end, and export event. This makes it easy to confirm whether spans are being created, attributed to a run, and successfully flushed. The following issues require debug mode output to diagnose.

Run never appears in the dashboard

Symptom: Your agent runs successfully but no trace shows up.

What to look for: Check whether exporting batch appears in the logs. If it does, the spans were sent and the issue is likely downstream (network, API key, or ingestion). If it doesn’t, work backwards through these checks:
  1. No onStart: top-level run span — registerOTel was never called, or was called after the agent ran. Make sure registerOTel is called once at startup, before any agent invocations.
  2. onStart: top-level run span appears but exporting batch never does — The root span was opened but the run batch never completed. This usually means the top-level span never ended:
    • With autoEndRoot: true (the default): check that span auto-ended after fn returned appears. If it doesn’t, the function may have thrown and the error path should show span ended on error.
    • With autoEndRoot: false: check that span ended via onComplete appears. If it doesn’t, onComplete was never called.
  3. exporting batch appears with spanCount: 1 — Only the root span was exported; no child spans were included. Child spans from frameworks like the Vercel AI SDK or OpenAI Agents SDK won’t appear unless their instrumentation is registered. See the integrations.
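For check 1, the usual fix in a Next.js app is to call registerOTel from the instrumentation hook, which runs once at server startup. A sketch; the import path and options here are assumptions, so adjust them to match your setup:

```typescript
// instrumentation.ts: Next.js calls register() once at server startup,
// before any request handler runs (and therefore before any agent run).
import { registerOTel } from "@vercel/otel"; // assumed import path

export function register() {
  registerOTel({ serviceName: "my-agent" });
}
```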

Spans appear but the run doesn’t close

Symptom: You see onStart lines for child spans, but exporting batch never fires and the run hangs open in the dashboard.

What to look for: Check the remainingChildren value on onEnd: direct child ended lines. If it never reaches 0, a child span that was opened never ended — commonly a streaming span left open if an error is swallowed mid-stream.
[LEMMA:processor] onEnd: direct child ended { ..., remainingChildren: 1 }
[LEMMA:processor] onEnd: direct child ended { ..., remainingChildren: 1 }
# remainingChildren never hits 0 — one span is stuck open
The onEnd: triggering auto-end of top-level span line only fires once remainingChildren hits 0. If that line is missing, find the span that was started but never ended and ensure its lifecycle is closed (e.g. by awaiting the stream to completion or using a finally block).

Spans are started but not attributed to a run

Symptom: You see onStart: child span but no matching onStart: top-level run span before it.

What to look for: The processor only attributes child spans to a run if it saw the parent run start first. This can happen if:
  • The child span is created outside the wrapAgent context (before or after the wrapped function executes)
  • The span’s parent context doesn’t trace back to an ai.agent.run span
Make sure all instrumented calls happen inside the wrapped function body.

skipped: true on a span

Symptom: A span appears in onEnd logs with skipped: true and is not included in the export batch.

What to look for: The processor skips spans from the next.js instrumentation scope to avoid noise from framework internals. If you’re seeing unexpected spans skipped, check the spanName in the log — the span may be emitted by Next.js middleware or routing and is intentionally excluded.