
LLM Observability
With LLM Observability, you can monitor, troubleshoot, and evaluate your LLM-powered applications, such as chatbots. You can investigate the root cause of issues, monitor operational …
Setup and Usage
How to set up LLM Observability Experiments and start running experiments.
Quickstart
Make requests to your application that trigger LLM calls, then view the resulting traces in the Traces tab of the LLM Observability page in Datadog. If you don't see any traces, make sure you are using a …
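For reference, here is a minimal Python sketch of a request that triggers an LLM call and produces such a trace, assuming the ddtrace LLM Observability SDK with its OpenAI integration; the ml_app name, site, and model below are placeholders, and the API keys are expected in environment variables.

from ddtrace.llmobs import LLMObs
from openai import OpenAI

# Enable LLM Observability; supported integrations (such as OpenAI) are then traced automatically.
# Assumes DD_API_KEY is set in the environment when running in agentless mode.
LLMObs.enable(
    ml_app="my-chatbot",     # placeholder application name
    site="datadoghq.com",    # your Datadog site
    agentless_enabled=True,  # send data directly to Datadog without a local Agent
)

# Any chat completion made after enable() should appear as a trace in the Traces tab.
client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)

The SDK can also be enabled without code changes by running the application under ddtrace-run with the corresponding DD_LLMOBS_* environment variables.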
Automatic Instrumentation for LLM Observability
Automatic instrumentation works for calls to supported frameworks and libraries. To trace other calls (for example: API calls, database queries, internal functions), see the LLM Observability SDK reference …
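As an illustration of tracing such calls manually, here is a short Python sketch using the SDK's tracing decorators; the function names and logic are hypothetical, and @workflow/@task are assumed to come from ddtrace.llmobs.decorators.

from ddtrace.llmobs import LLMObs
from ddtrace.llmobs.decorators import task, workflow

LLMObs.enable(ml_app="my-chatbot")  # placeholder application name

@task
def fetch_documents(question: str) -> list[str]:
    # A database query or internal function that automatic instrumentation
    # would not capture on its own; the decorator records it as a task span.
    return ["doc-1", "doc-2"]

@workflow
def answer_question(question: str) -> str:
    docs = fetch_documents(question)
    answer = f"Answering from {len(docs)} documents"  # stand-in for an LLM call
    # Optionally attach inputs and outputs to the current span.
    LLMObs.annotate(input_data=question, output_data=answer)
    return answer

print(answer_question("What does the retry policy do?"))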
Experiments
An overview of Datadog's LLM Observability Experiments feature.
LLM Observability MCP Tools
The Datadog MCP Server enables AI agents to access your LLM Observability data through the Model Context Protocol (MCP). The llmobs toolset provides tools for searching and analyzing traces, …
Analyze Your Experiments Results
This page describes how to analyze LLM Observability Experiment results in Datadog's Experiments UI and widgets. After running an Experiment, you can review the results to understand performance …
Managed Evaluations
Managed evaluations are built-in tools to assess your LLM application on dimensions like quality, security, and safety. By creating them, you can assess the effectiveness of your application's …
LLM Observability Metrics
Learn about useful metrics you can generate from LLM Observability data.
Custom LLM-as-a-Judge Evaluations
How to create custom LLM-as-a-judge evaluations, and how to use these evaluation results across LLM Observability.
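As a rough illustration of the judge-then-record pattern behind such evaluations, here is a Python sketch assuming the SDK's export_span and submit_evaluation helpers and an OpenAI model as the judge; the evaluation label, prompt, and model are placeholders, not Datadog's built-in configuration.

from ddtrace.llmobs import LLMObs
from openai import OpenAI

judge = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def judge_answer(question: str, answer: str) -> None:
    # Ask a judge model for a yes/no verdict on the answer.
    verdict = judge.chat.completions.create(
        model="gpt-4o-mini",  # placeholder judge model
        messages=[{
            "role": "user",
            "content": (
                "Does the answer address the question? Reply yes or no.\n"
                f"Question: {question}\nAnswer: {answer}"
            ),
        }],
    ).choices[0].message.content.strip().lower()

    # Attach the verdict to the span that produced the answer as a categorical evaluation.
    LLMObs.submit_evaluation(
        span_context=LLMObs.export_span(),  # span_id/trace_id of the current active span
        label="answers_question",           # hypothetical evaluation label
        metric_type="categorical",
        value="pass" if verdict.startswith("yes") else "fail",
    )

The sketch assumes it runs while the answer's span is still active; otherwise a previously exported span context would be passed to submit_evaluation instead.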