Show HN: Netfence – Like Envoy for eBPF Filters

Original link: https://github.com/danthegoodman1/netfence

## Netfence: eBPF-based network filtering

Netfence is a system that uses eBPF filters to dynamically control network access for virtual machines and containers. A daemon runs on every host, injects filters into network interfaces (TC) or cgroups, and is centrally controlled by a user-implemented gRPC control plane.

The daemon manages a per-attachment DNS server that resolves allowed domains and automatically updates the IP allowlist in the eBPF filters. This enables policy-driven network control (allowlist, denylist, or block-all) with support for IPv4/IPv6 CIDRs and domain-based rules, including subdomains.

Orchestration systems interact with the daemon over a local gRPC API to attach/detach filters and supply identifying metadata (VM ID, tenant, etc.). The daemon then synchronizes with the control plane, receiving its initial configuration and subsequent updates over a bidirectional stream.

Netfence prioritizes performance by filtering traffic *before* it leaves the host, minimizing latency. The control plane owns the rule definitions (ALLOW/DENY) and pushes them to the daemons, keeping network policy consistent.

## Netfence: an Envoy-like firewall built on eBPF

Netfence is a new tool for firewalling agents, described as "Envoy for eBPF filters". It lets you define DNS-based rules that are resolved to IP addresses and enforced by efficient eBPF filters, controlling outbound network access without a performance penalty.

Unlike traditional approaches, Netfence avoids modifying base images and keeps agents from tampering with the rules. An Envoy xDS-style control plane automatically manages the eBPF filter lifecycle for containers and micro-VMs (such as Firecracker), enabling dynamic rule updates and per-cgroup/per-interface DNS resolution.

The creator uses Netfence to restrict agents to specific services such as S3, pip, apt, and npm. A key discussion point was DNS caching: Netfence relies on standard DNS TTLs for IP lookups and on efficient incremental updates to the permission rules, minimizing latency. It is similar to Cilium, but is currently used outside of Kubernetes environments.

Original

Like Envoy xDS, but for eBPF filters.

Netfence runs as a daemon on your VM/container hosts and automatically injects eBPF filter programs into cgroups and network interfaces, with a built-in DNS server that resolves allowed domains and populates the IP allowlist.

Netfence daemons connect to a central control plane that you implement via gRPC to synchronize allowlists/denylists with your backend.

Your control plane pushes network rules like ALLOW *.pypi.org or ALLOW 10.0.0.0/16 to attached interfaces/cgroups. When a VM/container queries DNS, Netfence resolves it, adds the resulting IPs to the eBPF filter, and drops traffic to unknown IPs before it leaves the host, with no performance penalty.

  • Attach eBPF filters to network interfaces (TC) or cgroups
  • Policy modes: disabled, allowlist, denylist, block-all
  • IPv4 and IPv6 CIDR support with optional TTLs
  • Per-attachment DNS server with domain allowlist/denylist
  • Domain rules support subdomains with specificity-based matching (more specific rules win; see the sketch after this list)
  • Resolved domains auto-populate IP filter
  • Metadata on daemons and attachments for associating with VM ID, tenant, etc.
  • Support for proxying DNS queries to the control plane to make DNS decisions per-attachment
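
The specificity-based matching mentioned in the list above can be pictured with a short sketch. The rule set, types, and helper names below are made up for illustration; Netfence's real rules live in its eBPF maps and daemon code, not in a Go map like this.

```go
package main

import (
	"fmt"
	"strings"
)

// Illustrative rule set: a pattern matches the exact domain and all of its
// subdomains. The domains and the map layout are invented for this sketch.
var domainRules = map[string]string{
	"example.com":         "ALLOW",
	"blocked.example.com": "DENY", // more specific, so it wins over "example.com"
}

// decide returns the action of the most specific matching rule (the longest
// matching domain suffix) and whether any rule matched at all.
func decide(domain string) (string, bool) {
	bestLen := -1
	action := ""
	for pattern, a := range domainRules {
		if domain == pattern || strings.HasSuffix(domain, "."+pattern) {
			if len(pattern) > bestLen {
				bestLen = len(pattern)
				action = a
			}
		}
	}
	return action, bestLen >= 0
}

func main() {
	for _, d := range []string{"example.com", "blocked.example.com", "api.blocked.example.com", "other.org"} {
		if a, ok := decide(d); ok {
			fmt.Printf("%s -> %s\n", d, a)
		} else {
			fmt.Printf("%s -> no rule\n", d)
		}
	}
}
```

Here a DENY on blocked.example.com overrides an ALLOW on example.com for blocked.example.com and anything beneath it, while example.com itself stays allowed.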
+------------------+         +-------------------------+
|  Your Control    |<------->|  Daemon (per host)      |
|  Plane (gRPC)    |  stream |                         |
+------------------+         |  +-------------------+  |
                             |  | DNS Server        |  |
                             |  | (per-attachment)  |  |
                             |  +-------------------+  |
                             +-------------------------+
                                        |
                                 +------+------+
                                 |             |
                              TC Filter    Cgroup Filter
                              (veth, eth)  (containers)

Each attachment gets a unique DNS address (port) provisioned by the daemon. Containers/VMs should be configured to use their assigned DNS address.
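
As a hedged sketch of what "use their assigned DNS address" can mean for a workload, the Go program below points a resolver at an address the daemon is assumed to have provisioned (127.0.0.1:5301 is invented for illustration). In practice you would normally configure the container runtime or the guest's resolver with the assigned address rather than wiring it into application code.

```go
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Hypothetical per-attachment DNS address handed out by the daemon.
	assignedDNS := "127.0.0.1:5301"

	resolver := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			// Send every lookup to the attachment's Netfence DNS server,
			// which resolves allowed domains and populates the IP filter.
			d := net.Dialer{Timeout: 2 * time.Second}
			return d.DialContext(ctx, network, assignedDNS)
		},
	}

	ips, err := resolver.LookupHost(context.Background(), "pypi.org")
	if err != nil {
		fmt.Println("lookup failed (the domain may not be allowed):", err)
		return
	}
	fmt.Println("resolved and allowlisted IPs:", ips)
}
```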

Run the daemon, which:

  • Exposes a local gRPC API (DaemonService) for attaching/detaching filters
  • Connects to your control plane via bidirectional stream (ControlPlane.Connect)
  • Loads and manages eBPF programs

Start the daemon:

# Start with default config
netfenced start

# Start with custom config file
netfenced start --config /etc/netfence/config.yaml

Check daemon status:

Your orchestration system calls the daemon's local API.

RPC:

DaemonService.Attach(interface_name: "veth123", metadata: {vm_id: "abc"})
// or
DaemonService.Attach(cgroup_path: "/sys/fs/cgroup/...", metadata: {container_id: "xyz"})

CLI:

# Attach to a network interface (TC)
netfenced attach --interface veth123 --metadata vm_id=abc

# Attach to a cgroup
netfenced attach --cgroup /sys/fs/cgroup/... --metadata container_id=xyz

# Attach with metadata
netfenced attach --interface eth0 --metadata tenant=acme,env=prod

What happens on attach:

  • Daemon attaches eBPF filter to the target
  • Daemon sends Subscribed{id, target, type, metadata} to control plane and waits for SubscribedAck with initial config (mode, CIDRs, DNS rules)
  • If the control plane doesn't respond within the timeout (default 5s, configurable via control_plane.subscribe_ack_timeout), the attachment is rolled back and the attach call fails
  • Daemon watches for target removal and sends Unsubscribed automatically
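
For orchestrators written in Go, the attach call might look roughly like this. The import path, request type, and field names are hypothetical stand-ins modelled on the RPC shown above (check the repo's proto definitions for the real generated API), and the daemon's socket address is assumed.

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"

	pb "example.com/netfence/gen/daemon" // hypothetical generated stubs
)

func main() {
	// Assumed local daemon address; the daemon exposes DaemonService locally.
	conn, err := grpc.NewClient("unix:///run/netfence/daemon.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := pb.NewDaemonServiceClient(conn)

	// Attach only returns once the control plane has sent SubscribedAck
	// (default 5s timeout on the daemon side), so leave some headroom.
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	resp, err := client.Attach(ctx, &pb.AttachRequest{
		InterfaceName: "veth123",
		Metadata:      map[string]string{"vm_id": "abc"},
	})
	if err != nil {
		// On failure (including ack timeout) the daemon rolls the attachment
		// back, so it is safe for the orchestrator to retry.
		log.Fatalf("attach failed: %v", err)
	}
	log.Printf("attached: %v", resp)
}
```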

Detach an attachment:

RPC:

CLI:

netfenced detach --id <attachment-id>

List attachments:

netfenced list
netfenced list --all  # fetch all pages

On the control plane (you implement this)

Implement ControlPlane.Connect RPC - a bidirectional stream:

Receive from daemon:

  • SyncRequest on connect/reconnect (lists current attachments)
  • Subscribed when new attachments are added
  • Unsubscribed when attachments are removed
  • Heartbeat with stats

Send to daemon:

  • SyncAck after receiving SyncRequest
  • SubscribedAck{mode, cidrs, dns_config} after receiving Subscribed (required - daemon waits for this)
  • SetMode{mode} - change IP filter policy mode
  • AllowCIDR{cidr, ttl} / DenyCIDR / RemoveCIDR
  • SetDnsMode{mode} - change DNS filtering mode
  • AllowDomain{domain} / DenyDomain / RemoveDomain
  • BulkUpdate{mode, cidrs, dns_config} - full state sync

When the daemon receives Subscribed, it blocks waiting for SubscribedAck before returning success to the caller. This ensures the attachment has its initial configuration before traffic flows. Use the metadata to identify which VM/tenant/container this attachment belongs to and respond with the appropriate initial rules.
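
A minimal control-plane sketch of that handshake is below. Only the message roles (SyncRequest, Subscribed, SubscribedAck, and so on) come from the protocol described above; the generated package, oneof wrappers, field names, and the example rules are hypothetical and would need to match the repo's actual proto definitions.

```go
package main

import (
	"io"
	"log"
	"net"

	"google.golang.org/grpc"

	pb "example.com/netfence/gen/controlplane" // hypothetical generated stubs
)

type controlPlane struct {
	pb.UnimplementedControlPlaneServer
}

// Connect handles the bidirectional stream each daemon opens on startup.
func (cp *controlPlane) Connect(stream pb.ControlPlane_ConnectServer) error {
	for {
		msg, err := stream.Recv()
		if err == io.EOF {
			return nil
		}
		if err != nil {
			return err
		}

		switch m := msg.Kind.(type) {
		case *pb.DaemonMessage_SyncRequest:
			// Acknowledge the daemon's current attachment list on (re)connect.
			_ = stream.Send(&pb.ControlPlaneMessage{
				Kind: &pb.ControlPlaneMessage_SyncAck{SyncAck: &pb.SyncAck{}},
			})

		case *pb.DaemonMessage_Subscribed:
			// Use the metadata (vm_id, tenant, ...) to look up this
			// attachment's initial rules; the daemon blocks the attach call
			// until this SubscribedAck arrives.
			ack := &pb.SubscribedAck{
				Id:        m.Subscribed.Id,
				Mode:      pb.Mode_ALLOWLIST,
				Cidrs:     []*pb.Cidr{{Cidr: "10.0.0.0/16"}},
				DnsConfig: &pb.DnsConfig{AllowDomains: []string{"*.pypi.org"}},
			}
			_ = stream.Send(&pb.ControlPlaneMessage{
				Kind: &pb.ControlPlaneMessage_SubscribedAck{SubscribedAck: ack},
			})

		case *pb.DaemonMessage_Unsubscribed:
			log.Printf("attachment removed: %s", m.Unsubscribed.Id)

		case *pb.DaemonMessage_Heartbeat:
			// Record stats if useful; later rule changes (SetMode, AllowCIDR,
			// AllowDomain, BulkUpdate, ...) can be sent on this same stream.
		}
	}
}

func main() {
	lis, err := net.Listen("tcp", ":9000")
	if err != nil {
		log.Fatal(err)
	}
	srv := grpc.NewServer()
	pb.RegisterControlPlaneServer(srv, &controlPlane{})
	log.Fatal(srv.Serve(lis))
}
```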
