A Run is a single execution of an Agent. When you assign a task to an agent by providing it with specific inputs, you initiate a run. The run represents the entire lifecycle of that task, from starting the process to delivering the final result. Each run is a distinct, logged instance of an agent’s work, allowing you to track and analyze its performance on a case-by-case basis. Runs are tied to a specific agent revision, ensuring consistent and reproducible execution.

The Anatomy of a Run

  • Input: Every run begins with an input, which is a JSON object that conforms to the agent’s defined inputSchema. This input provides the specific data and parameters the agent needs to complete its task. For example, if you have a research agent, the input might be {"topic": "AI in healthcare"}. Learn more about how to define and use Inputs.
  • Agent Revision: Each run executes against a specific agent revision. By default, runs use the published revision, but you can specify any revision ID to test draft changes or reproduce historical results.
  • Status: A run progresses through several statuses:
    • pending: The run has been created but has not yet started.
    • running: The agent is actively working on the task.
    • done: The agent has successfully completed the task.
    • error: The run failed due to an error.
  • Events: During a run, the agent emits a series of events that provide a real-time log of its thought process and actions. These events include planning, reasoning, and tool_call, giving you full visibility into how the agent is working towards its goal.
  • Result: Once a run is complete, it produces a resultData payload. This can be a text document or a structured JSON object, depending on the agent’s configuration. If the agent used research tools, the result will also include a list of sources it consulted. The sketch after this list shows how these pieces might appear on a single run record.
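
Putting the pieces together, the sketch below fetches a finished run and shows what its record might contain. The GET /runs/{runId} path and the exact response field names are illustrative assumptions rather than a confirmed API shape; the status values, event types, and resultData concept follow the descriptions above.

curl https://api.ag.dev/v0.1/agents/{agentId}/runs/{runId} \
  -H "Authorization: Bearer YOUR_API_KEY"

# Example response (abridged; field names are illustrative assumptions):
# {
#   "status": "done",
#   "input": {"topic": "AI in healthcare"},
#   "events": [
#     {"type": "planning"},
#     {"type": "tool_call"},
#     {"type": "reasoning"}
#   ],
#   "resultData": {"document": "..."},
#   "sources": ["https://example.com/article"]
# }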

Why are Runs Important?

Runs are essential for several reasons:
  • Traceability: They provide a complete audit trail of an agent’s activity. You can see exactly what an agent did, what tools it used, and how it arrived at its conclusions.
  • Debugging: When a run fails or produces unexpected results, the detailed event log is invaluable for debugging. You can pinpoint the exact step where things went wrong.
  • Performance Monitoring: By analyzing multiple runs, you can evaluate an agent’s consistency and accuracy over time, helping you identify opportunities for improvement.
  • Scalability: An agent can be run many times in parallel with different inputs, allowing you to automate tasks at a massive scale. Each run is an independent process; see the fan-out sketch after this list.
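
As a concrete illustration of that fan-out, the shell loop below launches one run per topic in parallel. It reuses the run-creation endpoint shown later on this page; everything else here is a minimal sketch, not a prescribed pattern.

# Launch one run per topic in parallel; each POST creates an independent run.
for TOPIC in "AI in healthcare" "AI in finance" "AI in education"; do
  curl -s -X POST "https://api.ag.dev/v0.1/agents/{agentId}/runs" \
    -H "Authorization: Bearer YOUR_API_KEY" \
    -H "Content-Type: application/json" \
    -d "{\"input\": {\"topic\": \"$TOPIC\"}}" &
done
wait  # block until every request has returned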

Creating Runs with Specific Revisions

When creating a run, you have two options for revision selection: rely on the agent’s published revision, or pin a specific revision ID.
By default, runs use the agent’s published revision:
curl -X POST https://api.ag.dev/v0.1/agents/{agentId}/runs \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"input": {"topic": "AI in healthcare"}}'
If no revision has been published, the run will fail with an error. Make sure to publish at least one revision before creating production runs.
You can create and monitor runs through the Console or programmatically via the API. Each run maintains a permanent link to the specific revision it executed, ensuring full traceability.
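
For programmatic monitoring, one simple approach is to poll the run until it reaches a terminal status (done or error). The loop below is a minimal sketch assuming a GET /runs/{runId} endpoint that returns a JSON object with a status field; it also requires jq to be installed.

# Poll every 5 seconds until the run reaches a terminal status.
# Assumes GET /runs/{runId} returns JSON with a "status" field (done|error).
RUN_URL="https://api.ag.dev/v0.1/agents/{agentId}/runs/{runId}"
while true; do
  STATUS=$(curl -s "$RUN_URL" -H "Authorization: Bearer YOUR_API_KEY" | jq -r '.status')
  if [ "$STATUS" = "done" ] || [ "$STATUS" = "error" ]; then
    echo "Run finished with status: $STATUS"
    break
  fi
  sleep 5
done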