Metrics provides Agent observability for Cloud Agents and automated workflows. Agent observability answers a simple question: What are my AI agents doing, and is it working? Use Metrics to monitor agent activity, understand human intervention, measure success rates, and evaluate the cost and impact of AI-driven work across your repositories.
What Metrics Show About Your Cloud Agents
Continue’s Metrics give you operational observability for AI agents, similar to how traditional observability tools provide visibility into services, jobs, and pipelines. Instead of logs and latency, agent observability focuses on:
- Runs and execution frequency
- Success vs. human intervention
- Pull request outcomes
- Cost per run and per workflow
Agent Activity & Execution Volume
Understand when and how often your agents run.
- See which Cloud Agents are running most often
- Spot spikes, trends, or recurring failures
- Monitor automated Workflows in production
Agent Success & Outcome Metrics
Measure whether agents produce usable results.
- Total runs
- PR creation rate
- PR status (open, merged, closed, failed)
- Success vs. intervention rate
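As a rough sketch of how these outcome metrics relate to one another, the snippet below derives them from a handful of run records. The record fields, agent names, and status values here are illustrative only, not Continue’s actual data model:

```python
from collections import Counter

# Hypothetical run records; field names and values are illustrative only.
runs = [
    {"agent": "triage-bot", "pr_status": "merged", "intervened": False},
    {"agent": "triage-bot", "pr_status": "open",   "intervened": True},
    {"agent": "doc-fixer",  "pr_status": None,     "intervened": False},
    {"agent": "doc-fixer",  "pr_status": "closed", "intervened": False},
]

# Total runs: every agent execution counts, with or without a PR.
total_runs = len(runs)

# PR creation rate: fraction of runs that produced a pull request.
pr_creation_rate = sum(r["pr_status"] is not None for r in runs) / total_runs

# PR status breakdown across runs that opened a PR.
pr_status_counts = Counter(r["pr_status"] for r in runs if r["pr_status"])

# Intervention rate: fraction of runs where a human had to step in.
intervention_rate = sum(r["intervened"] for r in runs) / total_runs

print(total_runs, f"{pr_creation_rate:.0%}", dict(pr_status_counts), f"{intervention_rate:.0%}")
```

A run with no PR still counts toward the total, which is why PR creation rate and intervention rate use the same denominator.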
Workflow Reliability & Impact
Evaluate automated agent workflows in production.
- Which Workflows generate the most work
- Completion and success rates
- Signals that a Workflow needs refinement or guardrails
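One way to read these signals is to group runs by Workflow and flag low success rates for review. A minimal sketch, assuming a simple run log (workflow names, fields, and the 50% threshold are all hypothetical):

```python
from collections import defaultdict

# Hypothetical run log: (workflow name, whether the run succeeded).
runs = [
    ("nightly-deps", True), ("nightly-deps", True), ("nightly-deps", False),
    ("issue-triage", False), ("issue-triage", False), ("issue-triage", True),
]

# workflow -> [successes, total runs]
by_workflow = defaultdict(lambda: [0, 0])
for name, succeeded in runs:
    by_workflow[name][0] += succeeded
    by_workflow[name][1] += 1

# Flag workflows whose success rate falls below an (arbitrary) 50% bar.
for name, (ok, total) in sorted(by_workflow.items()):
    rate = ok / total
    flag = "  <- needs refinement or guardrails" if rate < 0.5 else ""
    print(f"{name}: {ok}/{total} ({rate:.0%}){flag}")
```

Total runs per Workflow doubles as the "which Workflows generate the most work" signal, while the rate drives the refinement flag.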
Why Metrics Matter
Improve Agent Reliability
Identify which Agents need better rules, tools, or prompts.
Measure Automation Value
See how much work your automated Workflows are completing across your repos.