Corundum – open-source FPGA-based NIC and platform for in-network compute

Original link: https://github.com/corundum/corundum

## Corundum: an open-source FPGA-based NIC for in-network compute

Corundum is a high-performance, open-source network interface card (NIC) built for FPGA platforms that enables in-network compute. It supports Ethernet speeds of up to 100G over PCI Express Gen 3 and features a custom DMA engine with thousands of configurable transmit, receive, completion, and event queues for fine-grained packet control.

Key features include advanced scheduling (including TDMA with IEEE 1588 PTP timestamping), flow hashing, checksum offloading, and a dedicated application section for implementing custom logic. A Linux driver integrates Corundum with the standard networking stack.

Development is supported by a comprehensive simulation framework, and designs are included for a range of Xilinx and Intel FPGA boards. The architecture uses block/ultra RAM to store queue state efficiently.

Resources include the documentation ([https://docs.corundum.io/](https://docs.corundum.io/)), the GitHub repository ([https://github.com/corundum/corundum](https://github.com/corundum/corundum)), a Google group ([https://groups.google.com/d/forum/corundum-nic](https://groups.google.com/d/forum/corundum-nic)), and Zulip chat ([https://corundum.zulipchat.com/](https://corundum.zulipchat.com/)).

## Corundum: open-source FPGA networking

Corundum is an open-source project developing an FPGA-based network interface card (NIC) and a platform for in-network compute. Recent development has largely moved to the "Taxi" project on GitHub.

While capable, the hardware traditionally associated with Corundum can be expensive, so users are looking at more affordable options such as the Kria SoM and second-hand Alveo cards, or at Zynq FPGAs with ARM CPUs to bring costs down.

The project is regarded as a top-tier NIC reference design, particularly for applications up to 100G of bandwidth. One user is exploring Corundum as the basis for implementing PagedAttention over RoCE entirely in hardware.

Resources include the project's GitHub repository and a livestream in which Alex Forencich demonstrates rebuilding the platform.

GitHub repository: https://github.com/corundum/corundum

Documentation: https://docs.corundum.io/

GitHub wiki: https://github.com/corundum/corundum/wiki

Google group: https://groups.google.com/d/forum/corundum-nic

Zulip: https://corundum.zulipchat.com/

Corundum is an open-source, high-performance FPGA-based NIC and platform for in-network compute. Features include a high performance datapath, 10G/25G/100G Ethernet, PCI Express Gen 3, a custom, high performance, tightly-integrated PCIe DMA engine, many (1000+) transmit, receive, completion, and event queues, scatter/gather DMA, MSI interrupts, multiple interfaces, multiple ports per interface, per-port transmit scheduling including high precision TDMA, flow hashing, RSS, checksum offloading, and native IEEE 1588 PTP timestamping. A Linux driver is included that integrates with the Linux networking stack. Development and debugging are facilitated by an extensive simulation framework that covers the entire system, from a simulation model of the driver and PCI express interface on one side to the Ethernet interfaces on the other side.

Corundum has several unique architectural features. First, transmit, receive, completion, and event queue states are stored efficiently in block RAM or ultra RAM, enabling support for thousands of individually-controllable queues. These queues are associated with interfaces, and each interface can have multiple ports, each with its own independent scheduler. This enables extremely fine-grained control over packet transmission. Coupled with PTP time synchronization, this enables high precision TDMA.
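To make the queue-state idea concrete, here is a rough Python model of the kind of per-queue record a queue manager might pack into block RAM; the field names and widths are illustrative, not the actual RTL layout.

```python
from dataclasses import dataclass

@dataclass
class QueueState:
    """Illustrative per-queue record; not the actual Corundum RTL layout."""
    base_addr: int   # DMA address of the descriptor ring in host memory
    log_size: int    # ring size as log2(number of descriptors)
    prod_ptr: int    # producer pointer, advanced by the host
    cons_ptr: int    # consumer pointer, advanced by hardware
    cpl_queue: int   # index of the associated completion queue
    enabled: bool    # queue enable flag

    def occupancy(self) -> int:
        """Descriptors outstanding, computed modulo the ring size."""
        return (self.prod_ptr - self.cons_ptr) & ((1 << self.log_size) - 1)

# A few dozen bytes per queue is why thousands of queues fit in on-chip
# block/ultra RAM rather than in flip-flops.
queues = [QueueState(0x1000_0000 + (i << 16), 10, 0, 0, i, False)
          for i in range(1024)]
print(len(queues), queues[3].occupancy())
```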

Corundum also provides an application section for implementing custom logic. The application section has a dedicated PCIe BAR for control and a number of interfaces that provide access to the core datapath and DMA infrastructure.

Corundum currently supports devices from both Xilinx and Intel, on boards from several different manufacturers. Designs are included for the following FPGA boards:

  • Alpha Data ADM-PCIE-9V3 (Xilinx Virtex UltraScale+ XCVU3P)
  • Dini Group DNPCIe_40G_KU_LL_2QSFP (Xilinx Kintex UltraScale XCKU040)
  • Cisco Nexus K35-S (Xilinx Kintex UltraScale XCKU035)
  • Cisco Nexus K3P-S (Xilinx Kintex UltraScale+ XCKU3P)
  • Cisco Nexus K3P-Q (Xilinx Kintex UltraScale+ XCKU3P)
  • Silicom fb2CG@KU15P (Xilinx Kintex UltraScale+ XCKU15P)
  • NetFPGA SUME (Xilinx Virtex 7 XC7V690T)
  • BittWare 250-SoC (Xilinx Zynq UltraScale+ XCZU19EG)
  • BittWare XUSP3S (Xilinx Virtex UltraScale XCVU095)
  • BittWare XUP-P3R (Xilinx Virtex UltraScale+ XCVU9P)
  • BittWare IA-420F (Intel Agilex F 014)
  • Intel Stratix 10 MX dev kit (Intel Stratix 10 MX 2100)
  • Intel Stratix 10 DX dev kit (Intel Stratix 10 DX 2800)
  • Intel Agilex F dev kit (Intel Agilex F 014)
  • Terasic DE10-Agilex (Intel Agilex F 014)
  • Xilinx Alveo U50 (Xilinx Virtex UltraScale+ XCU50)
  • Xilinx Alveo U55N/Varium C1100 (Xilinx Virtex UltraScale+ XCU55N)
  • Xilinx Alveo U200 (Xilinx Virtex UltraScale+ XCU200)
  • Xilinx Alveo U250 (Xilinx Virtex UltraScale+ XCU250)
  • Xilinx Alveo U280 (Xilinx Virtex UltraScale+ XCU280)
  • Xilinx Kria KR260 (Xilinx Zynq UltraScale+ XCK26)
  • Xilinx VCU108 (Xilinx Virtex UltraScale XCVU095)
  • Xilinx VCU118 (Xilinx Virtex UltraScale+ XCVU9P)
  • Xilinx VCU1525 (Xilinx Virtex UltraScale+ XCVU9P)
  • Xilinx ZCU102 (Xilinx Zynq UltraScale+ XCZU9EG)
  • Xilinx ZCU106 (Xilinx Zynq UltraScale+ XCZU7EV)

For operation at 10G and 25G, Corundum uses the open source 10G/25G MAC and PHY modules from the verilog-ethernet repository, so no extra licenses are required. However, it is possible to use other MAC and/or PHY modules.

Operation at 100G on Xilinx UltraScale+ devices currently requires using the Xilinx CMAC core with RS-FEC enabled, which is covered by the free CMAC license.

For detailed documentation, see https://docs.corundum.io/

Corundum block diagram

Block diagram of the Corundum NIC. PCIe HIP: PCIe hard IP core; AXIL M: AXI lite master; DMA IF: DMA interface; AXI M: AXI master; PHC: PTP hardware clock; TXQ: transmit queue manager; TXCQ: transmit completion queue manager; RXQ: receive queue manager; RXCQ: receive completion queue manager; EQ: event queue manager; MAC + PHY: Ethernet media access controller (MAC) and physical interface layer (PHY).

cmac_pad module

Frame pad module for 512 bit 100G CMAC TX interface. Zero pads transmit frames to minimum 64 bytes.

cpl_op_mux module

Completion operation multiplexer module. Merges completion write operations from different sources to enable sharing a single cpl_write module instance.

cpl_queue_manager module

Completion queue manager module. Stores device to host queue state in block RAM or ultra RAM.

cpl_write module

Completion write module. Responsible for enqueuing completion and event records into the completion queue managers and writing records into host memory via DMA.

desc_fetch module

Descriptor fetch module. Responsible for dequeuing descriptors from the queue managers and reading descriptors from host memory via DMA.

desc_op_mux module

Descriptor operation multiplexer module. Merges descriptor fetch operations from different sources to enable sharing a single desc_fetch module instance.

event_mux module

Event mux module. Enables multiple event sources to feed the same event queue.

mqnic_core module

Core module. Contains the interfaces, asynchronous FIFOs, PTP subsystem, statistics collection subsystem, and application block.

mqnic_core_pcie module

Core module for a PCIe host interface. Wraps mqnic_core along with generic PCIe interface components, including DMA engine and AXI lite masters.

mqnic_core_pcie_us module

Core module for a PCIe host interface on Xilinx 7-series, UltraScale, and UltraScale+. Wraps mqnic_core_pcie along with FPGA-specific interface logic.

mqnic_interface module

Interface module. Contains the event queues, interface queues, and ports.

mqnic_port module

Port module. Contains the transmit and receive datapath components, including transmit and receive engines and checksum and hash offloading.

mqnic_ptp module

PTP subsystem. Contains one mqnic_ptp_clock instance and a parametrizable number of mqnic_ptp_perout instances.

mqnic_ptp_clock module

PTP clock module. Contains an instance of ptp_clock with a register interface.

mqnic_ptp_perout module

PTP period output module. Contains an instance of ptp_perout with a register interface.

mqnic_tx_scheduler_block_rr module

Transmit scheduler block with round-robin transmit scheduler and register interface.

mqnic_tx_scheduler_block_rr_tdma module

Transmit scheduler block with round-robin transmit scheduler, TDMA scheduler, TDMA scheduler controller, and register interface.

queue_manager module

Queue manager module. Stores host to device queue state in block RAM or ultra RAM.

rx_checksum module

Receive checksum computation module. Computes 16 bit checksum of Ethernet frame payload to aid in IP checksum offloading.
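As a point of reference, the 16-bit payload checksum is the familiar folded ones'-complement sum; a behavioral Python sketch follows (the exact byte range covered and any final inversion are defined by the RTL and driver, not by this snippet).

```python
def csum16(data: bytes) -> int:
    """Folded 16-bit ones'-complement sum over big-endian 16-bit words."""
    if len(data) % 2:
        data += b"\x00"  # pad odd-length input with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
    while total >> 16:   # fold carries back into the low 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return total

# Example: checksum over an arbitrary payload (Ethernet header already stripped)
print(hex(csum16(bytes(range(64)))))
```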

rx_engine module

Receive engine. Manages receive datapath operations including descriptor dequeue and fetch via DMA, packet reception, data writeback via DMA, and completion enqueue and writeback via DMA. Handles PTP timestamps for inclusion in completion records.

rx_hash module

Receive hash computation module. Extracts IP addresses and ports from packet headers and computes 32 bit Toeplitz flow hash.
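For readers unfamiliar with it, the Toeplitz hash XORs a sliding 32-bit window of the RSS key into the result for every set bit of the flow tuple; below is a straightforward Python sketch (the tuple layout and key programming used by the actual module are defined by the RTL and driver).

```python
def toeplitz_hash(data: bytes, key: bytes) -> int:
    """Textbook 32-bit Toeplitz (RSS) hash of 'data' under secret 'key'.

    'data' is typically the concatenated flow tuple, e.g.
    src IP | dst IP | src port | dst port, and 'key' the 40-byte RSS key.
    """
    key_int = int.from_bytes(key, "big")
    key_bits = len(key) * 8
    result = 0
    for i, byte in enumerate(data):
        for bit in range(8):
            if byte & (0x80 >> bit):
                # XOR in the 32-bit key window aligned with this input bit
                shift = key_bits - 32 - (i * 8 + bit)
                result ^= (key_int >> shift) & 0xFFFF_FFFF
    return result

# Example with a placeholder key (not the key the hardware is programmed with)
key = bytes(range(40))
tuple_v4 = (bytes([10, 0, 0, 1]) + bytes([10, 0, 0, 2])
            + (1234).to_bytes(2, "big") + (80).to_bytes(2, "big"))
print(hex(toeplitz_hash(tuple_v4, key)))
```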

stats_collect module

Statistics collector module. Parametrizable number of increment inputs, single AXI stream output for accumulated counts.

stats_counter module

Statistics counter module. Receives increments over AXI stream and accumulates them in block RAM, which is accessible via AXI lite.

stats_dma_if_pcie module

Collects DMA-related statistics for dma_if_pcie module, including operation latency.

stats_dma_latency module

DMA latency measurement module.

stats_pcie_if module

Collects TLP-level statistics for the generic PCIe interface.

stats_pcie_tlp module

Extracts TLP-level statistics for the generic PCIe interface (single channel).

tdma_ber_ch module

TDMA bit error ratio (BER) test channel module. Controls PRBS logic in Ethernet PHY and accumulates bit errors. Can be configured to bin error counts by TDMA timeslot.

tdma_ber module

TDMA bit error ratio (BER) test module. Wrapper for a tdma_scheduler and multiple instances of tdma_ber_ch.

tdma_scheduler module

TDMA scheduler module. Generates TDMA timeslot index and timing signals from PTP time.
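As a behavioral illustration (the parameter names here are made up; the real module is configured through its register interface), the timeslot index and remaining time for a simple periodic schedule follow directly from the PTP time.

```python
def tdma_slot(ptp_time_ns: int, start_ns: int,
              timeslot_period_ns: int, num_slots: int):
    """Return (timeslot index, ns left in the slot) for a periodic TDMA schedule.

    Illustrative only: the real tdma_scheduler also handles schedule period,
    active period, and restart, which this sketch ignores.
    """
    elapsed = ptp_time_ns - start_ns
    if elapsed < 0:
        return None  # schedule has not started yet
    slot = (elapsed // timeslot_period_ns) % num_slots
    remaining = timeslot_period_ns - (elapsed % timeslot_period_ns)
    return slot, remaining

# Example: 4 timeslots of 25 us each
print(tdma_slot(ptp_time_ns=1_234_567, start_ns=0,
                timeslot_period_ns=25_000, num_slots=4))
```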

tx_checksum module

Transmit checksum computation and insertion module. Computes 16 bit checksum of frame data with specified start offset, then inserts computed checksum at the specified position.
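The same folded ones'-complement sum appears on the transmit side, but computed from a per-packet start offset and written back into the frame; a rough sketch is below (the offsets are examples, and whether the result is bit-inverted before insertion is up to the RTL/driver contract).

```python
def tx_csum_insert(frame: bytearray, csum_start: int, csum_pos: int) -> bytearray:
    """Sum frame[csum_start:] as 16-bit words and store the folded result
    big-endian at frame[csum_pos:csum_pos + 2]. Illustrative only."""
    data = bytes(frame[csum_start:])
    if len(data) % 2:
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    frame[csum_pos:csum_pos + 2] = total.to_bytes(2, "big")
    return frame

# Example: checksum a 60-byte frame starting at byte 34, insert at byte 50
print(tx_csum_insert(bytearray(60), csum_start=34, csum_pos=50).hex())
```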

tx_engine module

Transmit engine. Manages transmit datapath operations including descriptor dequeue and fetch via DMA, packet data fetch via DMA, packet transmission, and completion enqueue and writeback via DMA. Handles PTP timestamps for inclusion in completion records.

tx_scheduler_ctrl_tdma module

TDMA transmit scheduler control module. Controls queues in a transmit scheduler based on PTP time, via a tdma_scheduler instance.

tx_scheduler_rr module

Round-robin transmit scheduler. Determines from which queues to send packets.
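A behavioral sketch of the round-robin selection idea follows (the bookkeeping here is illustrative; the RTL also tracks queue state, scheduler enable, and TDMA control, which this sketch omits).

```python
from collections import deque

class RoundRobinScheduler:
    """Pick the next transmit queue in round-robin order among active queues."""

    def __init__(self) -> None:
        self._ring = deque()   # indices of queues with packets pending
        self._active = set()

    def mark_active(self, queue_index: int) -> None:
        """Note that a queue has at least one packet to send."""
        if queue_index not in self._active:
            self._active.add(queue_index)
            self._ring.append(queue_index)

    def select(self, still_has_packets=lambda q: False):
        """Return the next queue to service, or None if everything is idle."""
        if not self._ring:
            return None
        q = self._ring.popleft()
        if still_has_packets(q):
            self._ring.append(q)   # keep busy queues in the rotation
        else:
            self._active.remove(q)
        return q

sched = RoundRobinScheduler()
for q in (3, 7, 1):
    sched.mark_active(q)
print(sched.select(), sched.select(), sched.select())  # -> 3 7 1
```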

cmac_pad.v                         : Pad frames to 64 bytes for CMAC TX
cpl_op_mux.v                       : Completion operation mux
cpl_queue_manager.v                : Completion queue manager
cpl_write.v                        : Completion write module
desc_fetch.v                       : Descriptor fetch module
desc_op_mux.v                      : Descriptor operation mux
event_mux.v                        : Event mux
event_queue.v                      : Event queue
mqnic_core.v                       : Core logic
mqnic_core_pcie.v                  : Core logic for PCIe
mqnic_core_pcie_us.v               : Core logic for PCIe (UltraScale)
mqnic_interface.v                  : Interface
mqnic_port.v                       : Port
mqnic_ptp.v                        : PTP subsystem
mqnic_ptp_clock.v                  : PTP clock wrapper
mqnic_ptp_perout.v                 : PTP period output wrapper
mqnic_tx_scheduler_block_rr.v      : Scheduler block (round-robin)
mqnic_tx_scheduler_block_rr_tdma.v : Scheduler block (round-robin TDMA)
queue_manager.v                    : Queue manager
rx_checksum.v                      : Receive checksum offload
rx_engine.v                        : Receive engine
rx_hash.v                          : Receive hashing module
stats_collect.v                    : Statistics collector
stats_counter.v                    : Statistics counter
stats_dma_if_pcie.v                : DMA interface statistics
stats_dma_latency.v                : DMA latency measurement
stats_pcie_if.v                    : PCIe interface statistics
stats_pcie_tlp.v                   : PCIe TLP statistics
tdma_ber_ch.v                      : TDMA BER channel
tdma_ber.v                         : TDMA BER
tdma_scheduler.v                   : TDMA scheduler
tx_checksum.v                      : Transmit checksum offload
tx_engine.v                        : Transmit engine
tx_scheduler_ctrl_tdma.v           : TDMA transmit scheduler controller
tx_scheduler_rr.v                  : Round robin transmit scheduler

Running the included testbenches requires cocotb, cocotbext-axi, cocotbext-eth, cocotbext-pcie, scapy, and Icarus Verilog. The testbenches can be run with pytest directly (requires cocotb-test), pytest via tox, or via cocotb makefiles.
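For orientation, a minimal cocotb-test/pytest wrapper follows the pattern sketched below; the file paths, module names, and parameter values are placeholders rather than the actual Corundum testbench layout. The simulator is selected through cocotb-test's usual mechanisms (for example the SIM environment variable), and pytest or tox then discovers and runs such wrappers.

```python
# test_example.py -- run an RTL testbench with cocotb-test under pytest
import os
from cocotb_test.simulator import run

def test_rx_hash():
    # Placeholder paths/names: point these at the real RTL and cocotb test module.
    rtl_dir = os.path.join(os.path.dirname(__file__), "..", "rtl")
    run(
        verilog_sources=[os.path.join(rtl_dir, "rx_hash.v")],
        toplevel="rx_hash",              # top-level RTL module under test
        module="test_rx_hash",           # Python module containing the cocotb tests
        parameters={"DATA_WIDTH": 256},  # example parameter override
        sim_build="sim_build",           # scratch directory for the simulator
    )
```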

  • A. Forencich, A. C. Snoeren, G. Porter, G. Papen, Corundum: An Open-Source 100-Gbps NIC, in FCCM'20. (FCCM Paper, FCCM Presentation)

  • J. A. Forencich, System-Level Considerations for Optical Switching in Data Center Networks. (Thesis)

If you use Corundum in your project, please cite one of the following papers and/or link to the project on GitHub:

@inproceedings{forencich2020fccm,
    author = {Alex Forencich and Alex C. Snoeren and George Porter and George Papen},
    title = {Corundum: An Open-Source {100-Gbps} {NIC}},
    booktitle = {28th IEEE International Symposium on Field-Programmable Custom Computing Machines},
    year = {2020},
}

@phdthesis{forencich2020thesis,
    author = {John Alexander Forencich},
    title = {System-Level Considerations for Optical Switching in Data Center Networks},
    school = {UC San Diego},
    year = {2020},
    url = {https://escholarship.org/uc/item/3mc9070t},
}

Corundum internally uses the following libraries:
