Hedgehog Contributes OCP Reference Architectures for AI Networks
New OCP Accepted™ designs accelerate deployment of open, scalable AI infrastructure
Hedgehog has contributed its AI training fabric and AI inference fabric designs to the Open Compute Project (OCP) as reference architectures, which are immediately available through the OCP Marketplace. These validated, production-ready blueprints provide operators, system builders, and integrators with everything needed to deploy open, Ethernet-based AI networks using disaggregated hardware and Hedgehog AI network software.
Proven AI Fabrics Built for Real Deployments
Based on real-world production deployments supporting today’s most demanding workloads, these architectures emphasize interoperability across silicon vendors to prevent hardware lock-in.
Scale-Out Training
- Delivers predictable performance through congestion-aware routing and lossless Ethernet (RoCEv2 QoS).
- Features rail-optimized topology options that keep intra-rail collective traffic leaf-local, radically reducing spine congestion.
- Automates Day-0 to Day-2 network lifecycle management via Kubernetes-native Continuous Reconciliation.
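As a rough illustration of the rail-optimized idea above, consider a rail-aligned wiring plan: every server's rail-N NIC lands on the same leaf switch, so collectives that stay within a rail are switched entirely at one leaf and never touch a spine. This is a hedged sketch only; the switch names, port counts, and mapping function are hypothetical and not taken from the OCP designs.

```python
# Hypothetical sketch of rail-aligned cabling: the rail-R NIC of every
# GPU server connects to leaf switch R, so intra-rail collective
# traffic is leaf-local and imposes no load on the spine layer.

def rail_aligned_plan(num_servers: int, rails_per_server: int) -> dict:
    """Map each (server, rail) NIC port to a leaf switch, one leaf per rail."""
    plan = {}
    for server in range(num_servers):
        for rail in range(rails_per_server):
            # All rail-R ports land on leaf R.
            plan[(server, rail)] = f"leaf-{rail}"
    return plan

plan = rail_aligned_plan(num_servers=8, rails_per_server=8)
# Every server's rail-0 NIC shares leaf-0, so a rail-0 all-reduce
# across all 8 servers stays on a single leaf switch.
assert {plan[(s, 0)] for s in range(8)} == {"leaf-0"}
```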
High-Efficiency Inference
- Optimized for efficiency and low latency.
- Built-in multi-tenant security and hybrid multi-cloud routing capabilities.
- Simplified operations that ensure consistent performance at scale.
Modular Design from 64 to 1,024+ xPUs

The Open Compute Project Reference Architectures provide prescriptive, scale-out Ethernet fabrics built on Open Pod Group (OPG) scalable units. This allows everyone to network like a hyperscaler, utilizing modular building blocks to seamlessly expand AI clusters.
- Massive Scalability: Architectures scale smoothly from 64 to 1,024+ xPUs.
- Modular Building Blocks: Compose massive clusters using standardized OPG sizes like OPG-64, 128, 256, 512, and 1024.
- Advanced Capabilities: Implement dual-plane resilience, VPC abstraction, and full lifecycle automation.
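To make the modular-scaling idea concrete: a target cluster size can be decomposed into the standardized OPG unit sizes named above. The sketch below is an illustration under stated assumptions, not the OCP composition rules; only the unit sizes (OPG-64 through OPG-1024) come from the text, and the greedy decomposition is a hypothetical example.

```python
# Illustrative only: compose a target xPU count from the standardized
# OPG building-block sizes listed in the reference architectures.
OPG_SIZES = [1024, 512, 256, 128, 64]  # xPUs per Open Pod Group unit

def compose_cluster(target_xpus: int) -> list[int]:
    """Greedily pick OPG units whose xPU counts sum to the target."""
    units, remaining = [], target_xpus
    for size in OPG_SIZES:
        while remaining >= size:
            units.append(size)
            remaining -= size
    if remaining:
        raise ValueError(f"{target_xpus} is not a multiple of 64 xPUs")
    return units

print(compose_cluster(1024))  # [1024]      -> one OPG-1024
print(compose_cluster(320))   # [256, 64]   -> OPG-256 + OPG-64
```

Growing a cluster then means adding another standardized unit rather than redesigning the fabric, which is the sense in which the blocks "compose" in the text above.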
Ecosystem Collaboration from Design to Production
- Proven by Neocloud Operators: Providers like FarmGPU rely on these open designs to ensure their GPU clusters are consistently fed without bottlenecks, allowing them to deploy functional AI infrastructure without reinventing the wheel.
- Supported by Hardware Leaders: Platinum OCP Members like Celestica have contributed to these architectures, demonstrating how OCP-recognized hardware and open-source software provide a clear, deployable path for modern AI workloads.
- Advancing the OCP Mission: The Open Compute Project Foundation recognizes these contributions as a critical step in giving system builders direct access to validated designs, making it easier to adopt open, Ethernet-based AI networking with confidence.