Metric events connect feedback signals to specific agent executions. When a user rates a response, flags an issue, or when your system detects a problem, you can record that event against the trace—creating a link between what happened and how it was received. This data powers Lemma’s analysis: you can filter traces by feedback, identify patterns in negative responses, and track how changes to your agent affect user satisfaction over time.

Prerequisites

Before recording metric events, you need:
  1. A metric created in your Lemma project (find the metric ID in your dashboard under Metrics)
  2. A run ID from an agent execution (see Tracing Your Agent)

Record a Metric Event

To record feedback against a trace, send a POST request to the metric events endpoint:
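// Reusable helper that posts a feedback value for a given run to Lemma.
// Assumes LEMMA_API_KEY is set in the environment.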
async function recordMetricEvent(
  metricId: string,
  runId: string,
  feedback: boolean,
  description?: string
) {
  const response = await fetch("https://api.uselemma.ai/metric-events", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.LEMMA_API_KEY}`,
    },
    body: JSON.stringify({
      metricId,
      runId,
      value: {
        feedback,
        description,
      },
    }),
  });

  if (!response.ok) {
    throw new Error(`Failed to record metric event: ${response.statusText}`);
  }

  return response.json();
}
The runId is returned by wrapAgent when tracing your agent. See Tracing Your Agent for setup instructions.

Request Body

Field              Type     Required  Description
metricId           string   Yes       UUID of the metric to record against
runId              string   Yes       The run ID returned by wrapAgent
value.feedback     boolean  Yes       Whether the feedback is positive (true) or negative (false)
value.description  string   No        Optional context about the feedback
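
Putting these fields together, the body for a piece of negative feedback might look like this (the IDs are placeholders):
{
  "metricId": "<metric-uuid>",
  "runId": "<run-id>",
  "value": {
    "feedback": false,
    "description": "Response missed the user's question"
  }
}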

Use Cases

User Feedback

The most common use case is capturing explicit user feedback—thumbs up/down, ratings, or flag buttons:
// After your agent runs, store the runId
const { result, runId } = await wrapAgent("my-agent", { input }, async () => {
  return doWork(input);
});

// When user clicks thumbs up/down
async function onThumbsUp() {
  await recordMetricEvent(METRIC_ID, runId, true);
}

async function onThumbsDown(reason?: string) {
  await recordMetricEvent(METRIC_ID, runId, false, reason);
}

Automated Quality Checks

You can also record metric events from automated systems—moderation filters, fact-checkers, or format validators:
const { result, runId } = await wrapAgent("my-agent", { input }, async () => {
  return doWork(input);
});

// Check if response passed moderation
const passedModeration = await moderationCheck(result);

await recordMetricEvent(
  MODERATION_METRIC_ID,
  runId,
  passedModeration,
  passedModeration ? undefined : "Failed content policy check"
);

Downstream Outcomes

Track whether the agent’s output led to a successful outcome:
// Agent generates a support response
const { result, runId } = await wrapAgent("support-agent", { supportTicket }, async () => {
  return generateSupportResponse(supportTicket);
});
await sendResponseToCustomer(result);

// Later, when ticket is resolved or escalated
async function onTicketResolved() {
  await recordMetricEvent(RESOLUTION_METRIC_ID, runId, true);
}

async function onTicketEscalated() {
  await recordMetricEvent(RESOLUTION_METRIC_ID, runId, false, "Required human escalation");
}
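
In practice, resolution or escalation happens long after the agent call returns, so the runId has to be persisted in the meantime. A minimal sketch, assuming hypothetical saveTicketRun and getTicketRun helpers backed by your own ticket storage:
// Right after the agent runs, remember which run produced this response
await saveTicketRun(supportTicket.id, runId);

// In the resolution handler, look the runId back up before recording the event
async function onTicketClosed(ticketId: string, resolved: boolean) {
  const storedRunId = await getTicketRun(ticketId);
  await recordMetricEvent(
    RESOLUTION_METRIC_ID,
    storedRunId,
    resolved,
    resolved ? undefined : "Required human escalation"
  );
}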

Viewing Metric Events

Once recorded, metric events appear in your Lemma dashboard:
  • On the trace detail page — see all feedback associated with a specific execution
  • In the metrics view — filter and aggregate events to spot trends
  • When analyzing experiments — compare feedback rates across different agent versions