OpenAI adds MCP support to Agents SDK

原始链接: https://openai.github.io/openai-agents-python/mcp/

The Model Context Protocol (MCP) provides a standardized way to connect large language models (LLMs) to external data sources and tools, much as USB-C provides a standardized way to connect devices to peripherals. The Agents SDK supports MCP, allowing agents to draw on a wide range of MCP servers.

MCP servers come in two types: `stdio` servers (run locally as subprocesses) and HTTP-over-SSE servers (run remotely and reached via a URL). The SDK's `MCPServerStdio` and `MCPServerSse` classes connect to these servers. Agents can be configured with MCP servers; the SDK calls `list_tools()` on each server to make its tools available to the LLM, and when the LLM invokes a tool, the SDK calls `call_tool()` on the corresponding server.

For performance, the `list_tools()` response can be cached by passing `cache_tools_list=True`; this is best used when the tool list is static. The cache can be invalidated manually with `invalidate_tools_cache()`. The Agents SDK automatically traces MCP operations (such as list-tools calls), which helps with debugging.

OpenAI is adding support for MCP (the Model Context Protocol) to its Agents SDK, a move that endorses what one commenter called "the industry standard for connecting large language models to external tools" and makes MCP support table stakes for agent frameworks. Although MCP is mostly used in local applications today, OpenAI plans to bring support to its desktop app and API in the coming months. One user asked how a locally running MCP server could be reached when a request is handled by the OpenAI API. Another commenter joked that MCP is "the USB-C of AI applications," alluding to the often painful implementation complexity associated with USB.

Original text

The Model Context Protocol (aka MCP) is a way to provide tools and context to the LLM. From the MCP docs:

MCP is an open protocol that standardizes how applications provide context to LLMs. Think of MCP like a USB-C port for AI applications. Just as USB-C provides a standardized way to connect your devices to various peripherals and accessories, MCP provides a standardized way to connect AI models to different data sources and tools.

The Agents SDK has support for MCP. This enables you to use a wide range of MCP servers to provide tools to your Agents.

MCP servers

Currently, the MCP spec defines two kinds of servers, based on the transport mechanism they use:

  1. stdio servers run as a subprocess of your application. You can think of them as running "locally".
  2. HTTP over SSE servers run remotely. You connect to them via a URL.

You can use the MCPServerStdio and MCPServerSse classes to connect to these servers.

For example, this is how you'd use the official MCP filesystem server.
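A minimal sketch of that usage, assuming the `openai-agents` package and Node's `npx` are installed; the `sample_files` directory is a placeholder for whatever path you want to expose:

```python
import asyncio

from agents.mcp import MCPServerStdio  # from the openai-agents package


async def main() -> None:
    # Launch the official MCP filesystem server as a subprocess via npx.
    # "sample_files" is a placeholder directory the server will expose.
    async with MCPServerStdio(
        params={
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-filesystem", "sample_files"],
        }
    ) as server:
        tools = await server.list_tools()
        print([tool.name for tool in tools])


asyncio.run(main())
```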

Using MCP servers

MCP servers can be added to Agents. The Agents SDK will call list_tools() on the MCP servers each time the Agent is run. This makes the LLM aware of the MCP server's tools. When the LLM calls a tool from an MCP server, the SDK calls call_tool() on that server.
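Attaching servers to an agent can be sketched as follows (assuming the `openai-agents` package; the server parameters and the SSE URL are illustrative placeholders):

```python
from agents import Agent
from agents.mcp import MCPServerSse, MCPServerStdio

# A local stdio server (subprocess) and a remote SSE server (URL).
local_server = MCPServerStdio(
    params={
        "command": "npx",
        "args": ["-y", "@modelcontextprotocol/server-filesystem", "sample_files"],
    }
)
remote_server = MCPServerSse(params={"url": "https://example.com/sse"})

# The SDK will call list_tools() on each server whenever this agent runs,
# and call_tool() on the owning server when the LLM invokes one of its tools.
agent = Agent(
    name="Assistant",
    instructions="Use the tools to achieve the task",
    mcp_servers=[local_server, remote_server],
)
```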

Caching

Every time an Agent runs, it calls list_tools() on the MCP server. This can be a latency hit, especially if the server is a remote server. To automatically cache the list of tools, you can pass cache_tools_list=True to both MCPServerStdio and MCPServerSse. You should only do this if you're certain the tool list will not change.

If you want to invalidate the cache, you can call invalidate_tools_cache() on the servers.
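The caching contract described above can be sketched in plain Python (a hypothetical stand-in, not SDK code): with caching enabled, `list_tools()` only reaches the server on a cache miss, and `invalidate_tools_cache()` forces the next call to refetch.

```python
class CachingServer:
    """Hypothetical sketch of cache_tools_list behavior; not SDK code."""

    def __init__(self, fetch_tools, cache_tools_list=False):
        self._fetch_tools = fetch_tools      # callable that hits the real server
        self._cache_tools_list = cache_tools_list
        self._cached = None
        self.fetch_count = 0                 # for illustration only

    def list_tools(self):
        if self._cache_tools_list and self._cached is not None:
            return self._cached              # cache hit: no server round-trip
        self.fetch_count += 1                # cache miss: ask the server
        tools = self._fetch_tools()
        if self._cache_tools_list:
            self._cached = tools
        return tools

    def invalidate_tools_cache(self):
        self._cached = None                  # next list_tools() refetches


server = CachingServer(lambda: ["read_file", "write_file"], cache_tools_list=True)
server.list_tools()
server.list_tools()                          # served from cache
print(server.fetch_count)                    # → 1
server.invalidate_tools_cache()
server.list_tools()                          # cache was invalidated, refetch
print(server.fetch_count)                    # → 2
```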

End-to-end examples

View complete working examples at examples/mcp.

Tracing

Tracing automatically captures MCP operations, including:

  1. Calls to the MCP server to list tools
  2. MCP-related info on function calls

MCP Tracing Screenshot
