Trace ends after the first generation
Symptom: An agent that loops internally (model → tools → model → …) produces a trace that captures only the first LLM call. Subsequent rounds and tool calls are missing. Cause: `autoEndRoot: true` (the default) ends the root span when all direct child spans finish. In an agentic loop there is an idle gap between rounds — the first LLM span closes, your tool code runs, and before the next LLM call starts there is a moment with zero active child spans. The `RunBatchSpanProcessor` treats this as the run completing and flushes early.
Fix: Use `autoEndRoot: false` and call `onComplete` once after the full loop finishes.
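The shape of the fix looks like this. It is a dependency-free sketch: `wrapAgent` is stubbed out so the example runs on its own (in real code it comes from the tracing SDK), and `callModel`/`runTool` are hypothetical stand-ins for your LLM and tool calls.

```typescript
// Minimal stand-ins so this sketch is self-contained. In real code,
// wrapAgent comes from your tracing SDK and these stubs do real work.
type AgentOpts = { autoEndRoot: boolean };
type RunContext = { onComplete: (result: string) => void };

let completedWith: string | undefined;

function wrapAgent<T>(
  name: string,
  opts: AgentOpts,
  fn: (ctx: RunContext) => Promise<T>,
): Promise<T> {
  // Stub: the real helper opens the root span and, with autoEndRoot: false,
  // keeps it open until ctx.onComplete is called.
  const ctx: RunContext = { onComplete: (r) => { completedWith = r; } };
  return fn(ctx);
}

async function callModel(history: string[]): Promise<{ text: string; toolCall?: string }> {
  // Stub LLM: requests a tool on the first round, answers on the second.
  return history.length === 0
    ? { text: "", toolCall: "lookup" }
    : { text: "final answer" };
}

async function runTool(toolName: string): Promise<string> {
  return `${toolName}-result`; // stub tool execution
}

// The agentic loop: autoEndRoot: false keeps the root span open across the
// idle gaps between rounds; onComplete ends it exactly once at the end.
const result = await wrapAgent("my-agent", { autoEndRoot: false }, async (ctx) => {
  const history: string[] = [];
  for (;;) {
    const reply = await callModel(history);
    if (!reply.toolCall) {
      ctx.onComplete(reply.text); // end the root span after the FULL loop
      return reply.text;
    }
    history.push(await runTool(reply.toolCall));
  }
});
```

The key point is that `onComplete` fires once, after the last round, rather than the span closing in the idle gap between rounds.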
Tool calls are missing from the trace
Symptom: LLM calls appear as child spans, but the tool executions between them are absent. Cause: Provider instrumentors (OpenAI, Anthropic, etc.) only patch the LLM SDK — they have no visibility into your code that runs between calls. Tool execution, database lookups, and HTTP requests won't produce spans automatically. Fix: Wrap each tool execution in a manual `tool.call` span.
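A wrapper along these lines does the job. The tracer here is a minimal stub so the sketch runs standalone; in real code you would start the span via your tracer from `@opentelemetry/api`, and the `tool.name`/`tool.input` attribute keys are illustrative, not a fixed convention.

```typescript
// Minimal span/tracer stubs so the sketch runs standalone. Real code would
// obtain a tracer from @opentelemetry/api instead of startSpan below.
type Span = {
  setAttribute(key: string, value: string): void;
  recordException(err: unknown): void;
  end(): void;
};

const endedSpans: Array<{ name: string; attrs: Record<string, string> }> = [];

function startSpan(name: string): Span {
  const attrs: Record<string, string> = {};
  return {
    setAttribute: (k, v) => { attrs[k] = v; },
    recordException: () => { attrs["error"] = "true"; },
    end: () => { endedSpans.push({ name, attrs }); },
  };
}

// Wrap every tool execution so the work between LLM calls shows up as a span.
async function withToolSpan<T>(toolName: string, input: string, fn: () => Promise<T>): Promise<T> {
  const span = startSpan("tool.call");
  span.setAttribute("tool.name", toolName);
  span.setAttribute("tool.input", input);
  try {
    return await fn();
  } catch (err) {
    span.recordException(err); // keep failed tool runs visible in the trace
    throw err;
  } finally {
    span.end(); // always close the span, even on error
  }
}

const out = await withToolSpan("search", "weather in Paris", async () => "sunny");
```

The `finally` block matters: a tool span left open on error is exactly the kind of unended child that keeps a run from closing (see below).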
Anthropic streaming breaks with auto-instrumentation
Symptom: After installing `@arizeai/openinference-instrumentation-anthropic`, the server throws `messages.create(...).withResponse is not a function`.
Cause: `AnthropicInstrumentation` patches `messages.create()`. The high-level `messages.stream()` helper internally calls `messages.create(...).withResponse()` — some versions of the instrumentation don't preserve that method on the patched return value.
Fix: Replace `messages.stream()` with `messages.create({ stream: true })` and consume the async iterable directly. This goes through the same patched `create()` call without the `.withResponse()` chain:
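The consumption pattern looks like the following. The event shapes match Anthropic's streaming events (`content_block_delta` carrying a `text_delta`), but the stream here is a stub generator so the sketch runs without the SDK or an API key; with the real client you would iterate over `await client.messages.create({ ..., stream: true })`.

```typescript
// Stub of the async iterable that client.messages.create({ stream: true })
// returns. Event shapes follow Anthropic's streaming events; this generator
// just replays canned deltas so the sketch runs without the SDK.
type StreamEvent =
  | { type: "content_block_delta"; delta: { type: "text_delta"; text: string } }
  | { type: "message_stop" };

async function* createStream(): AsyncGenerator<StreamEvent> {
  yield { type: "content_block_delta", delta: { type: "text_delta", text: "Hel" } };
  yield { type: "content_block_delta", delta: { type: "text_delta", text: "lo" } };
  yield { type: "message_stop" };
}

// Consume the iterable directly instead of using messages.stream(),
// accumulating text deltas as they arrive.
let text = "";
for await (const event of createStream()) {
  if (event.type === "content_block_delta" && event.delta.type === "text_delta") {
    text += event.delta.text;
  }
}
```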
If you need to keep `messages.stream()`, disable `AnthropicInstrumentation` and add manual LLM step spans instead.
Output is empty or not recorded
Symptom: The trace appears in the dashboard but the run output field is blank. Cause: How output is captured depends on `autoEndRoot`:
| `autoEndRoot` | How output is stored |
|---|---|
| `true` (default) | The return value of the wrapped function. Make sure you return the final result. |
| `false` | The argument passed to `onComplete(result)`. Make sure you call it before the function exits. |
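The two capture paths can be contrasted in a dependency-free sketch. The `wrapAgent` stub below models only the output-capture rule from the table (it opens no real spans), so the example runs on its own.

```typescript
type Opts = { autoEndRoot: boolean };

// Stub that models only the output-capture rule from the table above:
// autoEndRoot: true records the return value; false records whatever was
// passed to onComplete (blank if onComplete is never called).
async function wrapAgent<T>(
  opts: Opts,
  fn: (onComplete: (r: T) => void) => Promise<T>,
): Promise<T | undefined> {
  let completed: T | undefined;
  const returned = await fn((r) => { completed = r; });
  return opts.autoEndRoot ? returned : completed;
}

// With the default, returning the result is enough:
const a = await wrapAgent({ autoEndRoot: true }, async () => "answer");

// With autoEndRoot: false, forgetting onComplete leaves the output blank:
const b = await wrapAgent({ autoEndRoot: false }, async () => "answer");

// Calling onComplete before returning records the output:
const c = await wrapAgent({ autoEndRoot: false }, async (onComplete) => {
  onComplete("answer");
  return "answer";
});
```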
Need more detail?
Enable Debug Mode to see a log line for every span start, end, and export event. This makes it easy to confirm whether spans are being created, attributed to a run, and successfully flushed. The following issues require debug mode output to diagnose.

Run never appears in the dashboard
Symptom: Your agent runs successfully but no trace shows up. What to look for: Check whether `exporting batch` appears in the logs. If it does, the spans were sent and the issue is likely downstream (network, API key, or ingestion). If it doesn't, work backwards through these checks:
- No `onStart: top-level run span` — `registerOTel` was never called, or was called after the agent ran. Make sure `registerOTel` is called once at startup, before any agent invocations.
- `onStart: top-level run span` appears but `exporting batch` never does — The root span was opened but the run batch never completed. This usually means the top-level span never ended:
  - With `autoEndRoot: true` (the default): check that `span auto-ended after fn returned` appears. If it doesn't, the function may have thrown and the error path should show `span ended on error`.
  - With `autoEndRoot: false`: check that `span ended via onComplete` appears. If it doesn't, `onComplete` was never called.
- `exporting batch` appears with `spanCount: 1` — Only the root span was exported, with no child spans. Child spans from frameworks like the Vercel AI SDK or OpenAI Agents SDK won't appear unless their instrumentation is registered. See the integrations.
Spans appear but the run doesn’t close
Symptom: You see `onStart` lines for child spans, but `exporting batch` never fires and the run hangs open in the dashboard.
What to look for: Check the `remainingChildren` value on `onEnd: direct child ended` lines. If it never reaches 0, a child span that was opened never ended — commonly a streaming span left open if an error is swallowed mid-stream. The `onEnd: triggering auto-end of top-level span` line only fires once `remainingChildren` hits 0. If that line is missing, find the span that was started but never ended and ensure its lifecycle is closed (e.g. by awaiting the stream to completion or using a `finally` block).
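The bookkeeping behind those log lines amounts to a per-run counter. This is a sketch with hypothetical names, not the processor's actual implementation, but it shows why one unended streaming span keeps the run open forever.

```typescript
// Sketch of the per-run bookkeeping behind the debug lines above: the root
// auto-ends only when every direct child that started has also ended.
class RunTracker {
  private remainingChildren = 0;
  rootEnded = false;

  childStarted(): void {
    this.remainingChildren += 1;
  }

  childEnded(): number {
    this.remainingChildren -= 1;
    if (this.remainingChildren === 0) {
      // corresponds to "onEnd: triggering auto-end of top-level span"
      this.rootEnded = true;
    }
    return this.remainingChildren;
  }
}

// A streaming span that is never ended keeps remainingChildren above 0,
// so the run hangs open:
const hung = new RunTracker();
hung.childStarted(); // LLM span
hung.childStarted(); // streaming span, error swallowed, never ended
hung.childEnded();   // LLM span ends; remainingChildren stays at 1

// When every child is ended, the count reaches 0 and the root auto-ends:
const healthy = new RunTracker();
healthy.childStarted();
healthy.childEnded();
```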
Spans are started but not attributed to a run
Symptom: You see `onStart: child span` but no matching `onStart: top-level run span` before it.
What to look for: The processor only attributes child spans to a run if it saw the parent run start first. This can happen if:
- The child span is created outside the `wrapAgent` context (before or after the wrapped function executes)
- The span's parent context doesn't trace back to an `ai.agent.run` span
skipped: true on a span
Symptom: A span appears in `onEnd` logs with `skipped: true` and is not included in the export batch.
What to look for: The processor skips spans from the `next.js` instrumentation scope to avoid noise from framework internals. If you're seeing unexpected spans skipped, check the `spanName` in the log — the span may be emitted by Next.js middleware or routing and is intentionally excluded.
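The filter the logs describe boils down to a predicate on the span's instrumentation scope. The exact predicate below is an assumption for illustration; confirm the scope name against the `skipped: true` lines in your own debug output.

```typescript
// Sketch of the scope-based filter described above: spans emitted by
// Next.js's internal tracer are dropped before export. The exact matching
// rule here ("next.js" scope name) is an assumption, not the real source.
type SpanInfo = { spanName: string; scopeName: string };

function isSkipped(span: SpanInfo): boolean {
  return span.scopeName === "next.js"; // framework-internal spans
}

const frameworkSpan = isSkipped({ spanName: "resolve page components", scopeName: "next.js" });
const appSpan = isSkipped({ spanName: "tool.call", scopeName: "my-app" });
```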
