Tambo 1.0: Open-source toolkit for agents that render React components

Original link: https://github.com/tambo-ai/tambo

## Tambo: Generative UI for React – Summary

Tambo is an open-source React toolkit that lets developers build AI-powered agents that dynamically render UI components in response to user input. It makes it easier to create applications that *adapt* to users instead of following a rigid structure.

Developers register React components with Zod schemas that define their props. When a user interacts – for example, typing "Show me sales by region" – Tambo's agent selects the appropriate component (such as `<Chart>`) and streams the props needed to render it.

Tambo is a full-stack solution comprising a React SDK and a backend that manages conversation state and agent execution, with support for LLMs such as OpenAI, Anthropic, and Gemini. It handles streaming, cancellation, and reconnection, and can be deployed via Tambo Cloud or self-hosted with Docker.

Key features include built-in agent management, streaming infrastructure, and integration with the MCP protocol for connecting to external tools and APIs. Developers can also define browser-based tools for tasks such as DOM manipulation. Tambo provides hooks like `useTambo()` and `useTamboThreadInput()` for easy integration, along with context passing, user authentication, and prompt suggestions.

Learn more and get started at [https://tambo.ai](https://tambo.ai).

## Tambo 1.0: AI Agents and React Components

Tambo is a new open-source toolkit (with over 8k stars on GitHub) aimed at simplifying the construction of AI agents that use React components for generative user interfaces. The 1.0 release lets developers register React components with Zod schemas so the agent can intelligently select and render the appropriate UI with the correct props.

The toolkit tackles key challenges in generative UI development, including state management, rendering of partially streamed props, and authentication. While libraries such as Google's A2UI exist, Tambo focuses on understanding *your* existing UI, and may support rendering A2UI in the future.

The team stresses that Tambo is a batteries-included solution that even ships its own agent, and it is already in production at companies including a Fortune 1000 fintech. Discussion has centered on its compatibility with standards such as MCP Apps and whether a unified protocol for agent-UI interaction will emerge.

Build agents that speak your UI

The open-source generative UI toolkit for React. Connect your components—Tambo handles streaming, state management, and MCP.



Start For Free · Docs · Discord


Tambo 1.0 is here! Read the announcement: Introducing Tambo: Generative UI for React


Tambo is a React toolkit for building agents that render UI (also known as generative UI).

Register your components with Zod schemas. The agent picks the right one and streams the props so users can interact with them. "Show me sales by region" renders your `<Chart>`. "Add a task" updates your `<TaskBoard>`.

Get started in 5 minutes →

2025-11-07-cheatsheet-demo.mp4

Tambo is a fullstack solution for adding generative UI to your app. You get a React SDK plus a backend that handles conversation state and agent execution.

1. Agent included — Tambo runs the LLM conversation loop for you. Bring your own API key (OpenAI, Anthropic, Gemini, Mistral, or any OpenAI-compatible provider). Works with agent frameworks like LangChain and Mastra, but they're not required.

2. Streaming infrastructure — Props stream to your components as the LLM generates them. Cancellation, error recovery, and reconnection are handled for you (a sketch of handling partially streamed props follows this list).

3. Tambo Cloud or self-host — Cloud is a hosted backend that manages conversation state and agent orchestration. Self-hosted runs the same backend on your infrastructure via Docker.
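
Because props arrive incrementally while the LLM is still generating, a registered component should render sensibly when some fields are not yet filled in. A minimal sketch of that pattern, assuming an illustrative `Chart` component and schema (not part of Tambo's API):

```tsx
import { z } from "zod";

// Illustrative schema for a chart component that would be registered with Tambo.
export const chartSchema = z.object({
  title: z.string(),
  data: z.array(z.object({ name: z.string(), value: z.number() })),
});

// While props are still streaming, some fields may be missing,
// so treat every field as optional at render time.
type ChartProps = Partial<z.infer<typeof chartSchema>>;

export function Chart({ title, data }: ChartProps) {
  return (
    <div>
      {/* title may arrive before data has finished streaming */}
      <h3>{title ?? "Loading…"}</h3>
      <ul>
        {(data ?? []).map((point) => (
          <li key={point.name}>
            {point.name}: {point.value}
          </li>
        ))}
      </ul>
    </div>
  );
}
```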

Most software is built around a one-size-fits-all mental model. We built Tambo to help developers build software that adapts to users.

npm create tambo-app my-tambo-app
cd my-tambo-app
npx tambo init      # choose cloud or self-hosted
npm run dev

Tambo Cloud is a hosted backend that is free to get started with and includes plenty of credits for building. Self-hosted runs on your own infrastructure.

Check out the pre-built component library for agent and generative UI primitives:

2025-11-07-ui-component-library.mp4

Or fork a template:

Tell the AI which components it can use. Zod schemas define the props. These schemas become LLM tool definitions—the agent calls them like functions and Tambo renders the result.

Render once in response to a message. Charts, summaries, data visualizations.

2025-11-07-generative-form.mp4
import { z } from "zod";
import type { TamboComponent } from "@tambo-ai/react";
import { Graph } from "./components/Graph"; // your Recharts-based chart component (path is illustrative)

const components: TamboComponent[] = [
  {
    name: "Graph",
    description: "Displays data as charts using Recharts library",
    component: Graph,
    propsSchema: z.object({
      data: z.array(z.object({ name: z.string(), value: z.number() })),
      type: z.enum(["line", "bar", "pie"]),
    }),
  },
];
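
Conceptually, a registration like the one above is exposed to the model as a function-style tool definition derived from the Zod schema. The object below is an illustrative, OpenAI-style rendering of that idea; the exact wire format Tambo uses is an internal detail not shown in this README:

```ts
// Illustrative only: roughly what the agent "sees" for the Graph registration above.
const graphToolDefinition = {
  type: "function",
  function: {
    name: "Graph",
    description: "Displays data as charts using Recharts library",
    parameters: {
      type: "object",
      properties: {
        data: {
          type: "array",
          items: {
            type: "object",
            properties: {
              name: { type: "string" },
              value: { type: "number" },
            },
            required: ["name", "value"],
          },
        },
        type: { type: "string", enum: ["line", "bar", "pie"] },
      },
      required: ["data", "type"],
    },
  },
};
```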

Persist and update as users refine requests. Shopping carts, spreadsheets, task boards.

2025-11-07-db-thing.mp4
import { z } from "zod";
import { withInteractable } from "@tambo-ai/react";
import { Note } from "./components/Note"; // your Note component (path is illustrative)

const InteractableNote = withInteractable(Note, {
  componentName: "Note",
  description: "A note supporting title, content, and color modifications",
  propsSchema: z.object({
    title: z.string(),
    content: z.string(),
    color: z.enum(["white", "yellow", "blue", "green"]).optional(),
  }),
});

Docs: generative components, interactable components

Wrap your app with TamboProvider. You must provide either userKey or userToken to identify the thread owner.

<TamboProvider
  apiKey={process.env.NEXT_PUBLIC_TAMBO_API_KEY!}
  userKey={currentUserId}
  components={components}
>
  <Chat />
  <InteractableNote id="note-1" title="My Note" content="Start writing..." />
</TamboProvider>

Use userKey for server-side or trusted environments. Use userToken (OAuth access token) for client-side apps where the token contains the user identity. See User Authentication for details.
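
For example, a client-side app might pass the access token from its auth provider instead of a userKey. A minimal sketch; where the token comes from (here, a hypothetical `session` object) depends on your auth setup and is not a Tambo API:

```tsx
<TamboProvider
  apiKey={process.env.NEXT_PUBLIC_TAMBO_API_KEY!}
  userToken={session.accessToken} // OAuth access token identifying the user
  components={components}
>
  <Chat />
</TamboProvider>
```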

Docs: provider options

useTambo() is the primary hook — it gives you messages, streaming state, and thread management. useTamboThreadInput() handles user input and message submission.

const { messages, isStreaming } = useTambo();
const { value, setValue, submit, isPending } = useTamboThreadInput();
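
Putting the two hooks together, a minimal chat component might look like the sketch below. The import path matches the package used elsewhere in this README, but the message fields used for rendering (id, content) are assumptions for illustration:

```tsx
import { useTambo, useTamboThreadInput } from "@tambo-ai/react";

export function Chat() {
  const { messages, isStreaming } = useTambo();
  const { value, setValue, submit, isPending } = useTamboThreadInput();

  return (
    <div>
      {/* message shape (id, content) is assumed for illustration */}
      {messages.map((message: any) => (
        <div key={message.id}>{message.content}</div>
      ))}
      {isStreaming && <p>Thinking…</p>}

      <form
        onSubmit={(event) => {
          event.preventDefault();
          submit(); // send the current input value to the agent
        }}
      >
        <input
          value={value}
          onChange={(event) => setValue(event.target.value)}
          placeholder="Ask the agent…"
        />
        <button type="submit" disabled={isPending}>
          Send
        </button>
      </form>
    </div>
  );
}
```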

Docs: threads and messages, streaming status, full tutorial

Connect to Linear, Slack, databases, or your own MCP servers. Tambo supports the full MCP protocol: tools, prompts, elicitations, and sampling.

import { MCPTransport } from "@tambo-ai/react/mcp";

const mcpServers = [
  {
    name: "filesystem",
    url: "http://localhost:8261/mcp",
    transport: MCPTransport.HTTP,
  },
];

<TamboProvider
  apiKey={process.env.NEXT_PUBLIC_TAMBO_API_KEY!}
  userKey={currentUserId}
  components={components}
  mcpServers={mcpServers}
>
  <App />
</TamboProvider>;
2025-11-07-elicitations.mp4

Docs: MCP integration

Sometimes you need functions that run in the browser. DOM manipulation, authenticated fetches, accessing React state. Define them as tools and the AI can call them.

import { z } from "zod";
import type { TamboTool } from "@tambo-ai/react";

const tools: TamboTool[] = [
  {
    name: "getWeather",
    description: "Fetches weather for a location",
    tool: async (params: { location: string }) =>
      fetch(`/api/weather?q=${encodeURIComponent(params.location)}`).then((r) =>
        r.json(),
      ),
    inputSchema: z.object({
      location: z.string(),
    }),
    outputSchema: z.object({
      temperature: z.number(),
      condition: z.string(),
      location: z.string(),
    }),
  },
];

<TamboProvider
  apiKey={process.env.NEXT_PUBLIC_TAMBO_API_KEY!}
  userKey={currentUserId}
  tools={tools}
  components={components}
>
  <App />
</TamboProvider>;

Docs: local tools

## Context, Auth, and Suggestions

Additional context lets you pass metadata to give the AI better responses. User state, app settings, current page. User authentication passes tokens from your auth provider. Suggestions generates prompts users can click based on what they're doing.

<TamboProvider
  apiKey={process.env.NEXT_PUBLIC_TAMBO_API_KEY!}
  userToken={userToken}
  contextHelpers={{
    selectedItems: () => ({
      key: "selectedItems",
      value: selectedItems.map((i) => i.name).join(", "),
    }),
    currentPage: () => ({ key: "page", value: window.location.pathname }),
  }}
/>
const { suggestions, accept } = useTamboSuggestions({ maxSuggestions: 3 });

suggestions.map((s) => (
  <button key={s.id} onClick={() => accept(s)}>
    {s.title}
  </button>
));

Docs: additional context, user authentication, suggestions

Supported model providers: OpenAI, Anthropic, Cerebras, Google Gemini, Mistral, and any OpenAI-compatible provider. Full list. Missing one? Let us know.

| Feature | Tambo | Vercel AI SDK | CopilotKit | Assistant UI |
| --- | --- | --- | --- | --- |
| Component selection | AI decides which components to render | Manual tool-to-component mapping | Via agent frameworks (LangGraph) | Chat-focused tool UI |
| MCP integration | Built-in | Experimental (v4.2+) | Recently added | Requires AI SDK v5 |
| Persistent stateful components | Yes | No | Shared state patterns | No |
| Client-side tool execution | Declarative, automatic | Manual via onToolCall | Agent-side only | No |
| Self-hostable | MIT (SDK + backend) | Apache 2.0 (SDK only) | MIT | MIT |
| Hosted option | Tambo Cloud | No | CopilotKit Cloud | Assistant Cloud |
| Best for | Full app UI control | Streaming and tool abstractions | Multi-agent workflows | Chat interfaces |

Join the Discord to chat with other developers and the core team.

Interested in contributing? Read the Contributing Guide.

Join the conversation on Twitter and follow @tambo_ai.

MIT unless otherwise noted. Some workspaces (like apps/api) are Apache-2.0.



For AI/LLM agents: docs.tambo.co/llms.txt
