A run is a single end-to-end execution of your agent. In Lemma's custom instrumentation, runs are created with wrap_agent.

Create a run wrapper

from uselemma_tracing import TraceContext, wrap_agent

async def run_agent(ctx: TraceContext, input_data: dict):
    final_answer = await solve_user_request(input_data["user_message"])
    ctx.on_complete(final_answer)
    return final_answer

wrapped = wrap_agent("support-agent", run_agent)

Execute the run

result, run_id, span = await wrapped({"user_message": "Reset my password"})
run_id is the stable identifier you can return, store, and correlate with downstream signals.
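To illustrate the correlation pattern, here is a minimal, self-contained sketch. The `wrapped` coroutine below is a stub standing in for the callable returned by wrap_agent (same `(result, run_id, span)` return shape), and `feedback_by_run` is a hypothetical in-memory store keyed by run_id; in production you would persist the id and attach later signals (feedback, escalations) to it.

```python
import asyncio
import uuid

# Stub standing in for the wrapped callable returned by wrap_agent;
# the real one returns the same (result, run_id, span) triple.
async def wrapped(input_data: dict):
    return f"answer to {input_data['user_message']}", str(uuid.uuid4()), None

# Hypothetical in-memory store keyed by run_id.
feedback_by_run: dict = {}

async def handle_request(user_message: str) -> str:
    result, run_id, _span = await wrapped({"user_message": user_message})
    feedback_by_run[run_id] = "pending"  # correlate downstream signals later
    return run_id

run_id = asyncio.run(handle_request("Reset my password"))
feedback_by_run[run_id] = "thumbs_up"  # e.g. user feedback arriving later
```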

Optional run data

Attach run-level attributes on the root span:
async def run_agent(ctx: TraceContext, input_data: dict):
    ctx.span.set_attribute("lemma.user_id", input_data["user_id"])
    ctx.span.set_attribute("lemma.session_id", input_data["session_id"])
    ctx.span.set_attribute("lemma.route", "chat.support")

    final_answer = await solve_user_request(input_data["user_message"])
    ctx.on_complete(final_answer)
    return final_answer
Recommended run-level keys:
  • lemma.user_id
  • lemma.session_id
  • lemma.environment
  • lemma.feature
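As a sketch of how those keys are typically populated, the stub below stands in for ctx.span (only the set_attribute call shape matters); the APP_ENV variable and the "password-reset" feature name are illustrative assumptions, not part of the SDK.

```python
import os

class SpanStub:
    """Minimal stand-in for ctx.span; the real span exposes set_attribute."""
    def __init__(self):
        self.attributes = {}

    def set_attribute(self, key, value):
        self.attributes[key] = value

span = SpanStub()
# Typically set once at the top of run_agent:
span.set_attribute("lemma.environment", os.environ.get("APP_ENV", "development"))
span.set_attribute("lemma.feature", "password-reset")
```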

Mark a run as failed

Use record_error and re-raise the exception:
async def run_agent(ctx: TraceContext, input_data: dict):
    try:
        final_answer = await solve_user_request(input_data["user_message"])
        ctx.on_complete(final_answer)
        return final_answer
    except Exception as err:
        # Record the failure on the run's root span, then re-raise so callers see it.
        ctx.record_error(err)
        raise
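Because the exception is re-raised, callers of the wrapped agent should still expect it to propagate. The self-contained sketch below mirrors the record-then-re-raise pattern with a stub (`failing_run` and `recorded_errors` are illustrative stand-ins, not SDK names):

```python
import asyncio

recorded_errors = []  # stands in for what ctx.record_error captures

# Stub mirroring the record-then-re-raise pattern above.
async def failing_run(input_data: dict):
    try:
        raise RuntimeError("upstream model timeout")  # simulated failure
    except Exception as err:
        recorded_errors.append(err)  # ctx.record_error(err) in the real code
        raise  # the run is marked failed AND the caller still sees the error

async def main() -> str:
    try:
        await failing_run({"user_message": "Reset my password"})
    except RuntimeError:
        return "failed"
    return "ok"

outcome = asyncio.run(main())
```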

Dashboard outcome

A run appears as the top-level ai.agent.run span with:
  • total duration
  • final output from on_complete
  • error state (if record_error is called or an uncaught exception occurs)
  • custom attributes for filtering

Next Steps