A first look at Aperture by Tailscale (private alpha)

Original link: https://tailscale.com/blog/aperture-private-alpha

## Aperture: enabling AI coding agents, safely

Coding agents advanced rapidly in 2025, becoming powerful tools for generating and reviewing code. This surge in capability has driven demand from engineers and executives alike, but leaves IT and security teams struggling to balance access against control. Many organizations lack a way to enable secure, visible AI usage without hurting developer productivity.

Aperture, currently in alpha, addresses this challenge. It is an AI gateway that provides organization-wide visibility into coding agent usage, supporting tools such as Claude Code, Gemini, and custom frameworks, *without* requiring developers to manage API keys. Aperture leverages existing Tailscale network identity to simplify access and track usage, yielding insight into adoption, potential security exposure, and cost efficiency.

Aperture integrates with existing systems (including SIEM integration via S3 export) and is designed to be extensible; a partnership with Oso, for example, adds further controls and auditing. While initially focused on coding agents, the platform plans to expand to other "agentic" AI workloads beyond code, aiming to become the central point for managing AI access and visibility within an organization. A free tier with a limited user count is available, with dedicated instances offered for larger deployments.

## Tailscale's "Aperture" – summary

Tailscale has launched a private alpha of "Aperture," a new product focused on managing LLM access within organizations. While some users question whether this fits Tailscale's core mission, originally built around simplified VPN-like networking, others see it as a natural extension of its access-control expertise.

The core problem Aperture addresses is the complexity enterprises face in centralizing LLM access securely, a larger challenge than typical network security. It aims to simplify policy management and provide visibility into LLM usage and token efficiency, potentially avoiding the distribution of individual API keys and reducing security risk.

Discussion highlighted the tension between focusing on core product improvements (such as dynamic routing) and chasing AI-adjacent opportunities driven by investor expectations. Some argue Tailscale should prioritize existing problems and maintain its strength in networking and access control, while others see value in serving enterprise demand for the growing category of LLM management solutions. Ultimately, the product is positioned as an access-control solution enabled by the network, rather than simply a networking company expanding into AI.

## Original article

It’s understandable if you missed it, but coding agents are getting good [^1]. Throughout 2025 they went from interesting in limited circumstances, to nearly there, to being able to generate and review code at a surprisingly advanced level. As coding agents have been able to take on more and more complex tasks, it’s no surprise more and more folks want to bring them to work, even if their company’s tooling and policies are far from ready.

With growing interest in coding agents—not only from individual engineers, but also executives looking for force multipliers—security, IT, and engineering management are left with a difficult question: How can they enable secure, visible AI usage, without putting up roadblocks that engineers will work around?

We heard from countless customers that they don’t have a good answer to this. The thing is, neither did we. So we built one.

Aperture, currently in alpha release, is an AI gateway that provides visibility into coding agent usage across your entire organization without getting in the way of developers. It works great with most CLI or VS-Code-based AI coding tools, including Claude Code, Codex, Gemini CLI, and custom agent frameworks. It uses the underlying identity built into every Tailscale connection to eliminate the need to distribute API keys to developer laptops, VMs, containers, CI/CD platforms (e.g. GitHub Actions), and other sandbox environments.

A diagram with "Tailscale/Aperture" in the center, with four pieces flowing into it: "LLM API A," with a key icon near it; "LLM API O," with another key; "Autonomous Agent," with a "tag:pr-review-bot" icon underneath, and "Coding Agent," with "user:pangolin@" nearby.

Any environment that can connect via Tailscale can use Aperture to eliminate keys and improve visibility. From an end-user perspective, setting up Aperture with an agent like Claude Code is as simple as adding the following to ~/.claude/settings.json, either manually or via MDM:

```json
{
  "apiKeyHelper": "echo '-'",
  "env": {
    "ANTHROPIC_BASE_URL": "http://ai"
  }
}
```
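Other OpenAI-compatible tools can usually be pointed at the gateway the same way, via environment variables rather than a settings file. The exact variable names depend on the tool; this sketch uses the conventions of the official OpenAI SDKs and reuses the `http://ai` hostname from the example above:

```shell
# Hypothetical equivalent for an OpenAI-compatible tool; variable names vary by tool.
export OPENAI_BASE_URL="http://ai"   # route requests through the Aperture gateway
export OPENAI_API_KEY="-"            # placeholder; identity comes from the tailnet
```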

Giving developers or agents within your organization access to new models or providers is as simple as adding a single API key and endpoint to Aperture's settings. Once a new provider is added, Aperture associates user and machine identities with API usage, while transparently passing along traffic to the LLM provider.
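Conceptually, the gateway's job at this step is small: drop whatever placeholder credential the client sent, inject the real provider key (which never leaves the gateway), and tag the request with the caller's identity. A toy sketch of that idea, not Aperture's actual implementation, with hypothetical header names:

```python
# Toy sketch of an identity-attaching AI gateway step (not Aperture's real code).
# Provider keys live only in the gateway's settings, never on developer machines.
PROVIDER_KEYS = {"anthropic": "sk-example-key-stored-in-gateway"}

def prepare_upstream_request(client_headers: dict, provider: str,
                             user: str, machine: str) -> dict:
    # Drop the client's placeholder credential before forwarding.
    headers = {k: v for k, v in client_headers.items()
               if k.lower() != "authorization"}
    headers["Authorization"] = f"Bearer {PROVIDER_KEYS[provider]}"
    headers["X-Tailscale-User"] = user        # hypothetical log fields
    headers["X-Tailscale-Machine"] = machine
    return headers
```

In the real product, the user and machine identities come from the tailnet connection itself, not from anything the client sends.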

A screenshot of Tailscale/Aperture, showing the "Logs" section selected, with select users and metrics shown and tallied for: Requests, Input Tokens, Output Tokens, Cached, Reasoning, Tool Uses, Models, User, and Last Activity.

Since Aperture picks up identity information from your Tailscale network (tailnet), there is no need to set up user accounts or keys. Out of the gate, Aperture supports all major LLM providers using their native protocols, as well as most major cloud AI endpoints, self-hosted LLMs, and LLM inference providers that conform to the OpenAI v1 response or chat completions endpoint specifications.
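The OpenAI chat completions shape referred to here is a simple JSON body POSTed to `/v1/chat/completions`; any backend that accepts a request like the following can sit behind the gateway. The model name is illustrative:

```python
import json

# Minimal OpenAI-style chat completions request body; model name is illustrative.
payload = {
    "model": "gpt-4.1",
    "messages": [
        {"role": "user", "content": "Summarize this diff."},
    ],
}
body = json.dumps(payload)
```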

The visibility provided by Aperture can be used in two main ways. The first is to understand AI adoption across an organization. The second is to look for signs of compromised or unapproved usage of agents or tools.

Models, agents, and AI usage best practices are constantly changing, and sometimes seemingly innocuous changes can have drastic impacts on tokens and, by extension, cost. By collecting usage information into a single place, engineering and IT leaders can get a complete picture into both user and agent token efficiency across the organization and providers.
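As a concrete illustration of why token-level visibility matters, here is a back-of-the-envelope cost model. The per-million-token prices are placeholders, not real provider rates:

```python
# Illustrative cost model; prices are placeholders, not real provider rates.
PRICE_PER_MTOK = {"input": 3.00, "output": 15.00, "cached_input": 0.30}

def usage_cost_usd(input_tok: int, output_tok: int, cached_tok: int) -> float:
    """Dollar cost of one usage record at the placeholder rates above."""
    return (input_tok * PRICE_PER_MTOK["input"]
            + output_tok * PRICE_PER_MTOK["output"]
            + cached_tok * PRICE_PER_MTOK["cached_input"]) / 1_000_000
```

At these rates, a prompt change that defeats caching turns a million cached input tokens ($0.30) into a million full-price ones ($3.00), the kind of "seemingly innocuous" change the post describes.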

A bar chart graph, showing peaks and valleys of usage increasing over time. Users are charted at the bottom, "amelie@tailscale.com" and "pangolin@tailscale.com," with numbers for each: "Cached tokens," "Input tokens," "Output tokens," "Reasoning Tokens," and "Requests."

When users feel security postures are too cumbersome, they’ll work around them and end up out on their own, unmonitored and unprotected. When deployed, Aperture quickly becomes the lowest-friction way to access AI at your company, while seamlessly attaching existing machine and user identities to logs, sessions, local tool calls, and local or remote MCP tool calls. All information can be easily viewed, filtered, and sorted inside Aperture, and also exported to S3 for easy SIEM integration.
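A SIEM pipeline consuming such an export would typically see one JSON object per log line. A hypothetical example of the kind of record it could contain; every field name here is invented for illustration, since the post does not document Aperture's export schema:

```python
import json

# Hypothetical usage record for a SIEM pipeline; all field names are invented.
record = {
    "user": "pangolin@tailscale.com",
    "machine": "laptop-1",
    "model": "claude-sonnet-4",
    "input_tokens": 1842,
    "output_tokens": 312,
    "tool_calls": ["Bash", "Read", "Edit"],
}
line = json.dumps(record)  # one JSON object per line, a common S3 export layout
```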

A chart showing layered usage across days, with colored bands for tool call breakdowns by type for each session. One highlighted day, Jan 15, shows calls for "Bash," "Read," "Edit," "Grep," "TodoWrite," "Write," "Glob," "Task," "WebFetch," and "WebSearch," and "+6 more tools."

While Aperture itself is built on the Tailscale platform to leverage the built-in identity and connectivity, we’re making Aperture extensible as well: as part of making the safe way of providing AI access the easy way, any company can build detailed AI agent security features on top of Aperture. One of the first companies we'll be partnering with is Oso, to provide additional visibility, controls, alerting, and auditing for AI agents.

Oso Agent Dashboard, showing event volume over time, tools by risk score, and other statistics.
Oso Agent Dashboard.

In our initial alpha release of Aperture, we’ve focused mostly on providing security and insights for coding agent usage. But coding agents, we believe, are just the beginning of the proverbial agentic AI iceberg.

So in addition to expanding Aperture to cover more typical chat-UI-based use cases, we’re also planning support for other agentic workloads. Inside organizations there are tons of processes beyond coding waiting for their coding-agent moment, and Aperture will be there to provide the visibility, control, convenience, and ease of use to make it happen.

Try it out today for free by signing up here. Like the Tailscale Personal plan, Aperture comes with a usage limit of 3 free users, but it can be used on any plan type. If you are trying to safely deploy coding agents for tens, hundreds, or thousands of engineers inside your organization, or building internal agents, sign up for the waitlist. We’d be happy to walk through your requirements and set you up with a dedicated instance of Aperture.

Please keep in mind that this is an early alpha release of an experimental product. It’s still a work in progress, so pieces may be incomplete, features may change, and you may experience bugs. We’re sharing it to learn and gather feedback, so before using it in production, please talk to us.

[^1]: Look, they’re certainly not perfect, but if it’s been more than 6 months since you’ve tried Claude Code or Codex, I think you’ll be surprised at how good they are now. I just recommend you start with a plan and iterate a bit to refine it before you ask it to start building something from scratch.
