Hedgehog Supported Devices
Hedgehog tests, certifies, and supports devices in Hedgehog fabrics for your AI training, fine tuning, RAG, inference, and general purpose compute use cases. Every supported device runs continuous automated testing in our lab, ensuring the Hedgehog VPC and Transit Gateways your tenants rely on work flawlessly for their AI workloads.
Hedgehog gateway and control software runs on commodity servers you can buy just about anywhere. We support switches from multiple vendors so you can diversify your supply chain and add elasticity to your hybrid cloud.
- Hedgehog gateway servers
- Hedgehog control servers
- Leaf switches
- Spine switches
- IoT switches
Hedgehog Gateway Servers
You don't need expensive, specialized routers for data center interconnect (DCI). Hedgehog offers a software-based router called Hedgehog Transit Gateway. This soft router runs on commodity server hardware equipped with NVIDIA ConnectX-7 SmartNICs.
Hedgehog Control Servers
When you download the Hedgehog software appliance, you run it on a control server. This controller includes an embedded, fully managed Kubernetes implementation that serves the Hedgehog cloud native API, configures the network operating systems running on your switches, and manages your gateway servers. In sum, your control servers are the brains of your AI cloud network.
Celestica DS5000
A workhorse for AI training, the Celestica DS5000 operates as a leaf or spine in front-end AI networks. Its 800G ports connect GPUs with shared memory access and congestion control in back-end AI networks.
- Fabrics: AI Training, AI Fine Tuning, AI RAG, AI Inference
- Roles: Leaf, Spine
- Silicon: Broadcom Tomahawk 5
- Bandwidth: 51.2 Tb/s
- Ports: 64xOSFP-800G, 2xSFP28-25G
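As a quick sanity check, the bandwidth figures in these spec lists follow from the port counts. A minimal Python sketch, using the DS5000 port list above (port names and speeds copied from that list):

```python
# DS5000 front-panel ports: name -> (count, speed in Gb/s), per the spec list.
ports = {"OSFP-800G": (64, 800), "SFP28-25G": (2, 25)}

count, speed = ports["OSFP-800G"]
fabric_tbps = count * speed / 1000  # 64 x 800 Gb/s = 51.2 Tb/s
print(fabric_tbps, "Tb/s")  # prints: 51.2 Tb/s
```

The 51.2 Tb/s total from the OSFP ports matches the Tomahawk 5 switching capacity listed above; the same arithmetic holds for the other switches in this catalog (for example, 32 x 800G = 25.6 Tb/s on the DS4101).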

Celestica DS4101
With 32x800G ports powered by Broadcom Tomahawk 4, the Celestica DS4101 is an ideal choice for a spine role in front end networks for AI training, fine tuning, RAG, inference or general purpose compute.
- Fabrics: AI Training, AI Fine Tuning, AI RAG, AI Inference, General Purpose Compute
- Roles: Spine
- Silicon: Broadcom Tomahawk 4
- Bandwidth: 25.6 Tb/s
- Ports: 32xOSFP-800G, 2xSFP28-10G

Celestica DS4000
If your workload requires a little less network throughput and you want to save some cash, consider the Celestica DS4000 as a spine in your AI Inference or General Purpose Compute fabrics. Port configuration is comparable to the DS4101, but you are settling for an older-generation ASIC.
- Fabrics: AI Inference, General Purpose Compute
- Roles: Spine
- Silicon: Broadcom Tomahawk 3
- Bandwidth: 12.8 Tb/s
- Ports: 32xQSFPDD-400G, 1xSFP28-10G

Celestica DS3000
With 100G ports powered by Broadcom Trident 3-X7, the Celestica DS3000 is designed for a primary role as a leaf in general purpose compute fabrics. It also works great as a leaf for AI inference applications at the data edge or as a spine in a 25G fabric.
- Fabrics: AI Inference, General Purpose Compute
- Roles: Leaf, Spine
- Silicon: Broadcom Trident 3-X7 3.2T
- Bandwidth: 3.2 Tb/s
- Ports: 32xQSFP28-100G, 1xSFP28-10G

Supermicro SSE-C4632SB
Comparable to the Celestica DS3000, the Supermicro SSE-C4632SB is a good choice for AI inference or general purpose compute fabrics.
- Fabrics: AI Inference, General Purpose Compute
- Roles: Leaf, Spine
- Silicon: Broadcom Trident 3-X7 3.2T
- Bandwidth: 3.2 Tb/s
- Ports: 32xQSFP28-100G, 1xSFP28-10G

Dell Z9332F-ON
Comparable to the Celestica DS4000, the Dell Z9332F-ON is a good choice for an alternate spine in general purpose compute or AI inference fabrics.
- Fabrics: AI Inference, General Purpose Compute
- Roles: Spine
- Silicon: Broadcom Tomahawk 3
- Bandwidth: 12.8 Tb/s
- Ports: 32xQSFPDD-400G, 1xSFP28-10G

Dell S5232F-ON
Comparable to the Celestica DS3000, the Dell S5232F-ON is a good alternate in a 100G general purpose or AI inference fabric.
- Fabrics: AI Inference, General Purpose Compute
- Roles: Leaf, Spine
- Silicon: Broadcom Trident 3-X7 3.2T
- Bandwidth: 3.2 Tb/s
- Ports: 32xQSFP28-100G, 1xSFP28-10G

Dell S5248F-ON
Same guts as the Dell S5232F-ON, but a different port configuration. Use the Dell S5248F-ON with 25G ports as a leaf in a general purpose fabric with the S5232F as a spine.
- Fabrics: General Purpose Compute
- Roles: Leaf, Spine
- Silicon: Broadcom Trident 3-X7 3.2T
- Bandwidth: 3.2 Tb/s
- Ports: 48xSFP28-25G, 8xQSFP28-100G

Edgecore DCS204
Comparable to the Celestica DS3000 or the Dell S5232F-ON, the Edgecore DCS204 (aka "AS7726-32X") is a good alternate in a 100G general purpose or AI inference fabric.
- Fabrics: AI Inference, General Purpose Compute
- Roles: Leaf, Spine
- Silicon: Broadcom Trident 3-X7 3.2T
- Bandwidth: 3.2 Tb/s
- Ports: 32xQSFP28-100G, 1xSFP28-10G

Edgecore DCS203
Comparable to the Dell S5248F-ON, use the Edgecore DCS203 (aka "AS7326-56X") as an alternate leaf in a general purpose fabric.
- Fabrics: General Purpose Compute
- Roles: Leaf, Spine
- Silicon: Broadcom Trident 3-X7 3.2T
- Bandwidth: 3.2 Tb/s
- Ports: 48xSFP28-25G, 8xQSFP28-100G

Edgecore DCS501
If you just want a cheap OG spine in a 25G general purpose compute fabric, pick the Edgecore DCS501 (aka "Edgecore AS7712-32X"). Keep in mind that Broadcom has released four more versions of their Tomahawk ASIC since these went to the fab. You can get comparable bandwidth from a Trident 3-X7 ASIC with the flexibility to use it as a leaf, too.
- Fabrics: General Purpose Compute
- Roles: Spine
- Silicon: Broadcom Tomahawk
- Bandwidth: 3.2 Tb/s
- Ports: 32xQSFP28-100G

Edgecore EPS203
Not to be confused with the DCS203 ("Data Center Switch"), the Edgecore EPS203 (aka "AS4630-54NPE") is an edge switch. We certified this device to power and connect IoT devices for unique inference data sources in enterprise AI applications. Power over Ethernet (PoE) provides the power, and RJ45 ports get IoT data into your inference models.
- Fabrics: AI Inference
- Roles: PoE access
- Silicon: Broadcom Trident 3-X3
- Bandwidth: 560 Gb/s
- Ports: 36xRJ45-2.5G, 12xRJ45-10G, 4xSFP28-25G, 2xQSFP28-100G
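Note that on this switch the listed bandwidth likely reflects the Trident 3-X3 switching capacity rather than the sum of the front-panel port speeds, which comes out slightly lower. A quick Python check, with port counts and speeds copied from the list above:

```python
# EPS203 front-panel ports as (count, speed in Gb/s) pairs, per the spec list.
ports = [(36, 2.5), (12, 10), (4, 25), (2, 100)]

total_gbps = sum(count * speed for count, speed in ports)
print(total_gbps, "Gb/s")  # prints: 510.0 Gb/s (90 + 120 + 100 + 200)
```

The 510 Gb/s port total sits comfortably under the 560 Gb/s figure above, so the ports are not oversubscribed at the ASIC.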
