
feat(mcp-insights): product docs #14544


Merged: 4 commits, Aug 6, 2025
Original file line number Diff line number Diff line change
@@ -4,7 +4,7 @@ sidebar_order: 500
description: "Learn how to manually instrument your code to use Sentry's Agents module."
---

With <Link to="/product/insights/agents/dashboard/">Sentry AI Agent Monitoring</Link>, you can monitor and debug your AI systems with full-stack context. You'll be able to track key insights like token usage, latency, tool usage, and error rates. AI Agent Monitoring data will be fully connected to your other Sentry data like logs, errors, and traces.
With <Link to="/product/insights/ai/agents/dashboard/">Sentry AI Agent Monitoring</Link>, you can monitor and debug your AI systems with full-stack context. You'll be able to track key insights like token usage, latency, tool usage, and error rates. AI Agent Monitoring data will be fully connected to your other Sentry data like logs, errors, and traces.

As a prerequisite to setting up AI Agent Monitoring with JavaScript, you'll need to first <PlatformLink to="/tracing/">set up tracing</PlatformLink>. Once this is done, the JavaScript SDK will automatically instrument AI agents created with supported libraries. If that doesn't fit your use case, you can use custom instrumentation described below.
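The tracing prerequisite usually amounts to initializing the SDK with a traces sample rate before anything else runs. A minimal sketch for Node (the DSN placeholder and sample rate are illustrative; follow the linked tracing guide for your exact setup):

```javascript
import * as Sentry from "@sentry/node";

// Initialize Sentry before importing anything that should be instrumented.
Sentry.init({
  dsn: "___PUBLIC_DSN___",
  // Capture 100% of transactions; lower this in production.
  tracesSampleRate: 1.0,
});
```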

4 changes: 2 additions & 2 deletions docs/platforms/python/integrations/openai/index.mdx
@@ -7,7 +7,7 @@ This integration connects Sentry with the [OpenAI Python SDK](https://github.com

Once you've installed this SDK, you can use Sentry AI Agents Monitoring, a Sentry dashboard that helps you understand what's going on with your AI requests.

Sentry AI Monitoring will automatically collect information about prompts, tools, tokens, and models. Learn more about the [AI Agents Dashboard](/product/insights/agents).
Sentry AI Monitoring will automatically collect information about prompts, tools, tokens, and models. Learn more about the [AI Agents Dashboard](/product/insights/ai/agents).

## Install

@@ -57,7 +57,7 @@ def my_llm_stuff():

After running this script, the resulting data should show up in the `"AI Spans"` tab on the `"Explore" > "Traces"` page on Sentry.io.

If you manually created an <PlatformLink to="/tracing/instrumentation/custom-instrumentation/ai-agents-module/#invoke-agent-span">Invoke Agent Span</PlatformLink> (not done in the example above) the data will also show up in the [AI Agents Dashboard](/product/insights/agents).
If you manually created an <PlatformLink to="/tracing/instrumentation/custom-instrumentation/ai-agents-module/#invoke-agent-span">Invoke Agent Span</PlatformLink> (not done in the example above), the data will also show up in the [AI Agents Dashboard](/product/insights/ai/agents).

It may take a couple of moments for the data to appear in [sentry.io](https://sentry.io).

@@ -4,7 +4,7 @@ sidebar_order: 500
description: "Learn how to manually instrument your code to use Sentry's Agents module."
---

With <Link to="/product/insights/agents/dashboard/">Sentry AI Agent Monitoring</Link>, you can monitor and debug your AI systems with full-stack context. You'll be able to track key insights like token usage, latency, tool usage, and error rates. AI Agent Monitoring data will be fully connected to your other Sentry data like logs, errors, and traces.
With <Link to="/product/insights/ai/agents/dashboard/">Sentry AI Agent Monitoring</Link>, you can monitor and debug your AI systems with full-stack context. You'll be able to track key insights like token usage, latency, tool usage, and error rates. AI Agent Monitoring data will be fully connected to your other Sentry data like logs, errors, and traces.

As a prerequisite to setting up AI Agent Monitoring with Python, you'll need to first <PlatformLink to="/tracing/">set up tracing</PlatformLink>. Once this is done, the Python SDK will automatically instrument AI agents created with supported libraries. If that doesn't fit your use case, you can use custom instrumentation described below.

2 changes: 1 addition & 1 deletion docs/product/index.mdx
@@ -46,7 +46,7 @@ Releases are integrated with the rest of Sentry so you can directly see how an e

### AI Agents Monitoring

Our [**AI Agents Monitoring**](/product/insights/agents/) feature gives you insights into your AI agent workflows within the broader context of your app. When you `pip install sentry` into a project using AI agents, Sentry will automatically pick up useful metrics like agent invocations, tool executions, handoffs between agents, and token usage, sending them to our AI Agents Insights dashboard.
Our [**AI Agents Monitoring**](/product/insights/ai/agents/) feature gives you insights into your AI agent workflows within the broader context of your app. When you `pip install sentry-sdk` into a project using AI agents, Sentry will automatically pick up useful metrics like agent invocations, tool executions, handoffs between agents, and token usage, sending them to our AI Agents Insights dashboard.

### Uptime Monitoring

@@ -6,7 +6,7 @@ description: "Learn how to use Sentry's AI Agents Dashboard."

<Include name="feature-limited-on-team-retention.mdx" />

Once you've [configured the Sentry SDK](/product/insights/agents/getting-started/) for your AI agent project, you'll start receiving data in the Sentry [AI Agents Insights](https://sentry.io/orgredirect/organizations/:orgslug/insights/agents/) dashboard.
Once you've [configured the Sentry SDK](/product/insights/ai/agents/getting-started/) for your AI agent project, you'll start receiving data in the Sentry [AI Agents Insights](https://sentry.io/orgredirect/organizations/:orgslug/insights/agents/) dashboard.

The main dashboard provides a comprehensive view of all your AI agent activities, performance metrics, and recent executions.

@@ -1,6 +1,6 @@
---
title: "AI Agents"
sidebar_order: 40
sidebar_order: 10
description: "Learn how to use Sentry's AI Agent monitoring tools to trace and debug your AI agent workflows, including agent runs, tool calls, and model interactions."
---

@@ -20,4 +20,4 @@ To use AI Agent Monitoring, you must have an existing Sentry account and project

![AI Agents Monitoring Overview](./img/overview.png)

Learn how to [set up Sentry for AI Agents](/product/insights/agents/getting-started/).
Learn how to [set up Sentry for AI Agents](/product/insights/ai/agents/getting-started/).
1 change: 0 additions & 1 deletion docs/product/insights/ai/index.mdx
@@ -1,7 +1,6 @@
---
title: "AI Performance"
sidebar_order: 50
sidebar_hidden: true
description: "Learn how to use Sentry's AI Performance tool to get insights into things that may be affecting your application health, including critical LLM metrics."
---

28 changes: 28 additions & 0 deletions docs/product/insights/ai/mcp/dashboard.mdx
@@ -0,0 +1,28 @@
---
title: MCP Dashboard
sidebar_order: 10
description: "Learn how to use Sentry's MCP Dashboard."
---

<Include name="feature-limited-on-team-retention.mdx" />

Once you've [configured the Sentry SDK](/product/insights/ai/mcp/getting-started/) for your MCP project, you'll start receiving data in the Sentry [MCP Insights](https://sentry.io/orgredirect/organizations/:orgslug/insights/mcp/) dashboard.

The main dashboard provides a comprehensive view of all your MCP server activities, performance metrics, and recent tool executions.

![MCP Monitoring Overview](./img/overview.png)

The dashboard displays key widgets like:

- **Traffic**: Shows MCP requests over time, error rates, and releases to track overall server activity and health
- **Traffic by Client**: Displays which MCP clients are connecting to your server (cursor-vscode, CustomMCPClient, etc.)
- **Transport Distribution**: Shows the distribution of transport protocols used (http, sse, custom transports)
- **Most Used Tools/Resources/Prompts**: Shows which MCP tools/resources/prompts are called most frequently by clients
- **Slowest Tools/Resources/Prompts**: Identifies tools/resources/prompts with the highest response times for performance optimization
- **Most Failing Tools/Resources/Prompts**: Highlights tools/resources/prompts with the highest error rates that need attention

Underneath these widgets are tables that allow you to view data in more detail:

- **Tools**: Performance metrics for each MCP tool including request count, error rate, average duration, and P95 latency
- **Resources**: Access patterns and performance for MCP resources by URI, showing requests, error rate, average duration, and P95 latency
- **Prompts**: Usage statistics for MCP prompt templates by name, including requests, error rate, average duration, and P95 latency
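The average-duration and P95 columns in these tables summarize the distribution of recorded durations. For reference, a nearest-rank P95 over a batch of durations can be sketched like this (illustrative only; Sentry computes these aggregates for you server-side):

```javascript
// Nearest-rank percentile over a list of durations (in ms).
function percentile(durations, p) {
  const sorted = [...durations].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(rank - 1, 0)];
}

const durations = [120, 80, 450, 95, 300, 110, 220, 90, 130, 1000];
console.log(percentile(durations, 95)); // → 1000
```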
36 changes: 36 additions & 0 deletions docs/product/insights/ai/mcp/getting-started.mdx
@@ -0,0 +1,36 @@
---
title: Set Up
sidebar_order: 0
description: "Learn how to set up Sentry MCP Monitoring."
---

Sentry MCP Monitoring helps you track and debug Model Context Protocol (MCP) implementations using our supported SDKs and integrations. Monitor your complete MCP workflows from client connections to server responses, including tool executions, resource access, and protocol communications.

To start sending MCP data to Sentry, make sure you've created a Sentry project for your MCP-enabled repository and follow the guide below:

## Supported SDKs

### JavaScript - MCP Server

The Sentry JavaScript SDK supports MCP monitoring by wrapping the MCP Server from the [@modelcontextprotocol/sdk](https://www.npmjs.com/package/@modelcontextprotocol/sdk) package. This wrapper automatically captures spans for your MCP server workflows including tool executions, resource access, and client connections.

#### Quick Start with MCP Server

```javascript
import * as Sentry from "@sentry/node";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";

// Sentry init needs to be above everything else
Sentry.init({
dsn: "___PUBLIC_DSN___",
tracesSampleRate: 1.0,
});

// Your MCP server
const server = Sentry.wrapMcpServerWithSentry(new McpServer({
name: "my-mcp-server",
version: "1.0.0",
}));

...
```
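The wrapping approach (intercepting each method call to record what happened and how long it took) can be sketched generically with a `Proxy`. This is purely illustrative of the pattern; the real `wrapMcpServerWithSentry` emits Sentry spans rather than invoking a callback:

```javascript
// Wrap an object so every method call is timed and reported.
function withTiming(target, report) {
  return new Proxy(target, {
    get(obj, prop) {
      const value = obj[prop];
      if (typeof value !== "function") return value;
      return (...args) => {
        const start = Date.now();
        try {
          return value.apply(obj, args);
        } finally {
          report(String(prop), Date.now() - start);
        }
      };
    },
  });
}

// Example: a toy "server" whose tool handler we want to observe.
const calls = [];
const server = withTiming(
  { echo: (msg) => msg.toUpperCase() },
  (name, ms) => calls.push({ name, ms })
);
server.echo("hello"); // → "HELLO"; records { name: "echo", ms: … }
```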
Binary file added docs/product/insights/ai/mcp/img/overview.png
23 changes: 23 additions & 0 deletions docs/product/insights/ai/mcp/index.mdx
@@ -0,0 +1,23 @@
---
title: "MCP"
sidebar_order: 20
description: "Learn how to use Sentry's MCP monitoring tools to trace and debug your Model Context Protocol implementations, including server connections, resource access, and tool executions."
---

<Include name="feature-stage-beta.mdx" />

Sentry's MCP (Model Context Protocol) monitoring tools help you understand what's happening in your MCP implementations. They automatically collect information about MCP server connections, resource access, tool executions, and errors across your entire MCP pipeline—from client requests to server responses.

## Example MCP Monitoring Use Cases

- Your MCP server is failing to respond to tool calls, and you want to trace the complete request flow to identify where the connection is breaking.
- Clients report that your MCP resources are returning outdated or malformed data, and you need to debug the full context of resource requests and server responses.
- Your MCP implementations are experiencing performance issues, and you want to identify which components (server startup, resource fetching, or tool execution) are causing bottlenecks.
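For the bottleneck use case, the underlying analysis boils down to grouping span durations by operation and seeing which component dominates. A toy sketch of the idea (the `op` names here are made up for illustration, not Sentry's actual span conventions; the trace view does this analysis for you):

```javascript
// Group spans by operation and compute total time per op.
function totalsByOp(spans) {
  const totals = {};
  for (const { op, durationMs } of spans) {
    totals[op] = (totals[op] ?? 0) + durationMs;
  }
  return totals;
}

const spans = [
  { op: "mcp.server.startup", durationMs: 40 },
  { op: "mcp.resource.fetch", durationMs: 380 },
  { op: "mcp.tool.execute", durationMs: 120 },
  { op: "mcp.resource.fetch", durationMs: 410 },
];
console.log(totalsByOp(spans));
// mcp.resource.fetch totals 790 ms, the clear hotspot in this toy data.
```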

## Get Started

To use MCP Monitoring, you must have an existing Sentry account and project set up. If you don't have one, [create an account here](https://sentry.io/signup/).

![MCP Monitoring Overview](./img/overview.png)

Learn how to [set up Sentry for MCP](/product/insights/ai/mcp/getting-started/).
1 change: 1 addition & 0 deletions package.json
@@ -15,6 +15,7 @@
"./apps/*"
],
"scripts": {
"dev:minimal": "yarn enforce-redirects && concurrently \"node ./src/hotReloadWatcher.mjs\" \"next dev\"",
"dev": "yarn enforce-redirects && concurrently \"yarn sidecar\" \"node ./src/hotReloadWatcher.mjs\" \"next dev\"",
"dev:developer-docs": "yarn enforce-redirects && NEXT_PUBLIC_DEVELOPER_DOCS=1 yarn dev",
"build:developer-docs": "yarn enforce-redirects && git submodule init && git submodule update && NEXT_PUBLIC_DEVELOPER_DOCS=1 yarn build",
6 changes: 3 additions & 3 deletions redirects.js
@@ -968,11 +968,11 @@ const userDocsRedirects = [
},
{
source: '/product/insights/llm-monitoring/:path*',
destination: '/product/insights/ai/:path*',
destination: '/product/insights/ai/',
},
{
source: '/product/insights/ai/:path*',
destination: '/product/insights/agents/',
source: '/product/insights/agents/:path*',
destination: '/product/insights/ai/agents/:path*',
},
{
source: '/product/insights/retention-priorities/',
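The `:path*` patterns in the redirects above carry the remainder of the old URL into the new `ai/agents` tree. A minimal sketch of that prefix rewrite (illustrative; Next.js's redirect matcher handles this natively):

```javascript
// Rewrite an old agents URL to its new location under /ai/.
function rewriteAgentsPath(pathname) {
  const prefix = "/product/insights/agents/";
  if (!pathname.startsWith(prefix)) return pathname;
  return "/product/insights/ai/agents/" + pathname.slice(prefix.length);
}

console.log(rewriteAgentsPath("/product/insights/agents/dashboard/"));
// → "/product/insights/ai/agents/dashboard/"
```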
8 changes: 4 additions & 4 deletions src/middleware.ts
@@ -3124,19 +3124,19 @@ const USER_DOCS_REDIRECTS: Redirect[] = [
},
{
from: '/product/ai-monitoring/',
to: '/product/insights/agents/',
to: '/product/insights/ai/agents/',
},
{
from: '/product/insights/llm-monitoring/',
to: '/product/insights/agents/',
to: '/product/insights/ai/agents/',
},
{
from: '/product/insights/llm-monitoring/getting-started/',
to: '/product/insights/agents/getting-started/',
to: '/product/insights/ai/agents/getting-started/',
},
{
from: '/product/insights/llm-monitoring/getting-started/the-dashboard/',
to: '/product/insights/agents/getting-started/the-dashboard/',
to: '/product/insights/ai/agents/getting-started/the-dashboard/',
},
{
from: '/product/metrics/',