Tailscale Peer Relays is now generally available

Original link: https://tailscale.com/blog/peer-relays-ga

## Tailscale Peer Relays reach general availability

Tailscale aims to make connections between devices seamless and direct, but real-world networks often get in the way with firewalls and NAT. When direct connections fail, Tailscale falls back to relays. Now **Tailscale Peer Relays** is generally available, offering a customer-deployed relay option that improves performance and control.

Peer relays deliver significant throughput and reliability gains, especially when many clients are involved. Improvements include smarter interface selection, more efficient packet handling, and the use of multiple UDP sockets. Importantly, they now integrate with **static endpoints**, enabling connectivity even behind strict firewalls or load balancers, and in some cases effectively replacing subnet routers.

Enhanced visibility is a key feature: integration with `tailscale ping` provides real-time diagnostics, and detailed metrics are exposed for monitoring through tools such as Prometheus and Grafana.

Peer relays are available on all Tailscale plans and serve as a core building block for scaling Tailscale in complex network environments, offering high throughput, controlled connectivity, and strong observability.


Original article

When Tailscale works best, it feels effortless, almost boring. Devices connect directly, packets take the shortest possible path, and performance ceases to be a pressing concern.

But real-world networks aren’t always that cooperative. Firewalls, NATs, and cloud networking constraints can block direct peer-to-peer connections. When that happens, Tailscale relies on relays (DERP) to keep traffic moving securely and reliably.

Today, we’re excited to announce that Tailscale Peer Relays is now generally available (GA). Peer relays bring customer-deployed, high-throughput relaying to production readiness, giving you a tailnet-native relaying option that you can run on any Tailscale node. Since their beta release, we’ve shaped Tailscale Peer Relays to deliver major improvements in performance, reliability, and visibility.

What started as a way to work around hard NATs has grown into a production-grade connectivity option. One that gives teams the performance, control, and flexibility they need to scale Tailscale in even the most challenging network environments.

Figure: Inside a "Your Network" container, "Your Resource" connects to a peer relay; traffic passes through a peer relay IP:port exception at the edge of the network firewall, then through a series of checkpoints to the destination resources.

We have made big throughput improvements to Tailscale Peer Relays that are especially noticeable when many clients are forwarding through them. Connecting clients now select a better interface and address family when more than one is available on a single relay, which helps bootstrap connections and improves overall connection quality. On the relay itself, throughput has increased: packets are handled more efficiently on every peer relay thanks to reduced lock contention, and traffic is now spread across multiple UDP sockets where available.

Together, these changes deliver meaningful gains in both performance and reliability across day-to-day tailnet traffic. Even when direct peer-to-peer connections aren’t possible, peer relays can now achieve performance much closer to a true mesh.

In some environments, particularly in public cloud networks, automatic endpoint discovery isn’t always possible. Instances may sit behind strict firewall rules, rely on port forwarding or load balancers in peered public subnets, or operate in setups where opening arbitrary ports simply isn’t an option. In many cases, the infrastructure in front of those instances can’t run Tailscale directly, making standard discovery mechanisms ineffective.

Peer relays now integrate with static endpoints to address these constraints. Using the `--relay-server-static-endpoints` flag with `tailscale set`, a peer relay can advertise one or more fixed IP:port pairs to the tailnet. These endpoints can live behind infrastructure such as an AWS Network Load Balancer, enabling external clients to relay traffic through the peer relay even when automatic endpoint discovery fails.
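As a rough sketch, advertising a fixed endpoint might look like the following; the IP address and port are placeholders for whatever your load balancer or firewall actually exposes, and the exact flag syntax for your client version is covered in the Tailscale docs.

```sh
# On the peer relay node: advertise a fixed public IP:port that external
# clients can reach (for example, the address exposed by an AWS NLB).
# 203.0.113.10:41641 is a placeholder value, not a real endpoint.
tailscale set --relay-server-static-endpoints=203.0.113.10:41641
```

Clients outside the network can then relay through that advertised endpoint even when UDP hole punching and automatic endpoint discovery fail.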

Figure: An AWS resource in a private subnet exchanges traffic with a peer relay in a public subnet. The peer relay advertises a static endpoint behind a network load balancer, through which an external laptop reaches it; other Internet-bound traffic from the AWS resource exits through a NAT gateway to the Internet.

This unlocks high-throughput connectivity in restrictive cloud environments where traditional NAT traversal and endpoint discovery don’t work. Customers can now deploy peer relays behind load balancers and still provide reliable, high-performance relay paths to clients outside those networks.

For many customers, this also means peer relays can replace subnet routers, unlocking full-mesh deployments with core Tailscale features like Tailscale SSH and MagicDNS.

Now in general availability, Tailscale Peer Relays also integrate more deeply into Tailscale’s visibility and observability tooling, making relay behavior clear, measurable, and auditable.

Peer relays integrate directly with `tailscale ping`, allowing you to see whether a relay is being used, whether it’s reachable, and how it impacts latency and reliability when testing connectivity. This removes much of the guesswork from troubleshooting. When issues arise, it’s easy to determine whether traffic is being relayed, whether the relay is healthy, and whether it’s contributing to degraded performance.
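For example, a quick connectivity check from any node might look like the sketch below; `peer-hostname` is a placeholder, and the exact output format varies by client version.

```sh
# Check how traffic to a peer is being routed. The output reports whether
# the path is direct, via DERP, or via a peer relay, along with latency.
tailscale ping peer-hostname
```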

For ongoing observability, Tailscale Peer Relays now expose client metrics such as tailscaled_peer_relay_forwarded_packets_total and tailscaled_peer_relay_forwarded_bytes_total. These metrics can be scraped and exported to monitoring systems like Prometheus and Grafana alongside existing Tailscale client metrics, enabling teams to track relay usage, understand traffic patterns, detect anomalies, and monitor tailnet health at scale.
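One possible way to spot-check these counters locally is sketched below; it assumes a client version that includes the `tailscale metrics` subcommand, otherwise use whatever client metrics endpoint you already scrape with Prometheus.

```sh
# Print current client metrics and filter for the peer relay counters.
# The metric names come from this post; the `tailscale metrics print`
# subcommand is assumed to be available in recent client releases.
tailscale metrics print | grep tailscaled_peer_relay_forwarded
```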

With general availability, Tailscale Peer Relays become a core building block for scaling Tailscale in real-world networks. They enable:

  • High-throughput, low-latency connections when direct paths are unavailable
  • Deployments in restricted cloud environments through static endpoints
  • Full mesh in private subnets, with controlled ingress/egress paths

At the same time, Tailscale Peer Relays deliver intelligent, resilient path selection across the tailnet, along with first-class observability, auditability, and debuggability. All of this comes without compromising on Tailscale’s foundational guarantees: end-to-end encryption, least-privilege access, and simple, predictable operation.

Getting started is straightforward. Tailscale Peer Relays can be enabled on any supported Tailscale node using the CLI, controlled through grants in your ACLs, and deployed incrementally alongside existing relay infrastructure; you can read more in our docs.
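As a minimal sketch, turning a node into a peer relay might look like the following, assuming the `--relay-server-port` flag introduced with the peer relays beta still applies to your client version; the port number is arbitrary, and access is then controlled through grants in your ACLs as described in the docs.

```sh
# On the node that should act as a peer relay: choose a UDP port for
# relayed traffic. 40000 is an arbitrary example port.
tailscale set --relay-server-port=40000
```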

Peer Relays are available on all Tailscale plans, including our free Personal plan. If you need deployment support or have specific throughput goals, don't hesitate to reach out.
