Guides
Explore practical workflows for getting the most out of AI Observability.
- Browse and debug conversations: Search, filter, and drill into conversations to understand what your agents did, where they failed, and how they performed.
- Instrument agents with frameworks: Use AI Observability framework integrations to automatically capture generations from LangChain, LangGraph, OpenAI Agents, Vercel AI SDK, and other frameworks.
- Use the agent catalog: Monitor agent versions, track tool and prompt changes, and compare performance across agents in the AI Observability agent catalog.
- Use built-in dashboards: Monitor agent activity, performance, cost, and quality using the AI Observability analytics dashboards.
- Set up online evaluation: Create evaluators and rules to continuously score agent quality on live production traffic.
- Optimize cost and performance: Use AI Observability data to reduce LLM costs, improve cache efficiency, and tune agent performance.
- Set up guards: Configure guards to block, redact, or filter LLM requests before they reach the model.
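To make the guard pattern concrete, here is a minimal, purely illustrative sketch of what a guard does: it inspects an outgoing LLM request and decides whether to allow, redact, or block it before the request reaches the model. This is not the Grafana guards API or configuration format (see the guards guide for that); the rule names and `apply_guard` function are hypothetical.

```python
import re

# Hypothetical guard logic: block requests matching a disallowed topic,
# redact email addresses, and allow everything else. This illustrates the
# concept only; it is NOT how Grafana AI Observability guards are configured.

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
BLOCKED_PHRASES = ("credit card number",)  # example block rule, for illustration

def apply_guard(prompt: str) -> dict:
    """Return a guard decision for a single prompt."""
    lowered = prompt.lower()
    if any(phrase in lowered for phrase in BLOCKED_PHRASES):
        # Block: the request never reaches the model.
        return {"action": "block", "prompt": None}
    # Redact: strip sensitive content but let the request through.
    redacted = EMAIL_RE.sub("[REDACTED_EMAIL]", prompt)
    action = "redact" if redacted != prompt else "allow"
    return {"action": action, "prompt": redacted}

print(apply_guard("Summarize the email from alice@example.com"))
```

The same three outcomes (allow, redact, block) are what the built-in guards produce; the linked guide covers how to configure them against live traffic.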