Glossary
  • 3-stage Clos

    A 3-stage Clos network uses three layers of switches to create a scalable, non-blocking topology for efficient, high-performance connectivity in data centers and HPC environments.
  • 5-Stage Clos Network

    A 5-stage Clos network is a scalable network architecture with five switch layers, providing high throughput, redundancy, and efficient connectivity for large-scale data centers and HPC environments.
  • Access Leaf

    An access leaf is a network switch at the edge of a data center fabric, connecting end devices such as servers and storage systems to the network and enabling scalable, high-performance access.
  • Active Node

    An active node is a node that is currently operational and actively participating in processing tasks, serving requests, or executing workloads.
  • Adaptive Routing

    A dynamic networking technique that adjusts data paths in real time to optimize performance and reliability.
  • Agent

    A software program that autonomously performs tasks on behalf of a user or system, often incorporating intelligence and automation.
  • AI

    Artificial Intelligence (AI) is the field of computer science that focuses on creating systems capable of performing tasks that typically require human intelligence. These tasks include learning, reasoning, problem-solving, perception, and language understanding.
  • AI Cloud

    AI Cloud is a specialized cloud infrastructure geared towards the development and deployment of artificial intelligence (AI) models, offering high-performance computing, tailored storage, and a suite of tools and services designed to streamline AI workflows.
  • Alerting

    A process that notifies stakeholders about critical events, anomalies, or threshold breaches in IT systems to enable rapid response and mitigation.
  • API Gateway

    A centralized entry point that manages, secures, and routes API requests between clients and backend services in cloud and microservices environments.
  • API Management

    The process of designing, publishing, securing, monitoring, and analyzing APIs to maximize their value and ensure reliable, scalable integrations.
  • Application Cluster

    An application cluster is a group of interconnected computing resources that work together to host and run specific applications, providing high availability, scalability, and reliability for critical workloads.
  • Back-end Network

    A dedicated network segment designed to support the extreme performance demands of AI training and inference workloads. To handle intensive GPU-to-GPU communication, these networks rely on technologies such as RoCEv2, RDMA, PFC, and ECN.
  • Bandwidth

    Bandwidth defines the maximum data transfer rate of a network connection, determining how much data can be transmitted per second and influencing overall network speed and capacity.
  • Bare-metal

    Bare-metal runs software directly on physical hardware without a hypervisor, providing maximum performance, predictability, and resource control.
  • BGP

    The Border Gateway Protocol (BGP) is the standard protocol for exchanging routing information between autonomous systems on the Internet, enabling scalable, policy-driven inter-domain routing.
  • BGP EVPN

    Border Gateway Protocol - Ethernet VPN (BGP EVPN) is a modern networking technology that combines BGP and EVPN to deliver scalable Layer 2 and Layer 3 VPN services for multi-tenant and virtualized environments.
  • Big Three

    The "Big Three" refers to the trio of leading public cloud service providers: Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). These tech giants dominate the cloud industry, offering scalable and robust services for computing, storage, and various specialized cloud-based tasks including artificial intelligence (AI) and machine learning (ML).
  • Bootstrapping

    Bootstrapping automates the initialization and configuration of systems, devices, or applications, ensuring consistent, reliable, and scalable deployment in modern IT environments.
  • Border Leaf

    A border leaf is a specialized leaf switch that connects the data center network fabric to external networks, enabling secure, high-performance communication with the Internet, WANs, or cloud providers.
  • Buffer Budgeting

    Buffer budgeting allocates and manages buffer resources in network devices to optimize packet handling, prevent congestion, and ensure reliable performance.
  • Buffering

    The temporary storage of data during transfer or processing, essential for optimizing performance and reliability in computing and networking.
  • Chain Booting

    Chain booting is a network-based process where a device loads its operating system or firmware from another device over the network, enabling centralized management and diskless operation.
  • Chip Resource Management

    Chip resource management optimizes the allocation and utilization of on-chip resources, such as CPU cores, memory, and I/O, to balance performance, power, and reliability.
  • CI/CD

    A set of automated practices for continuously integrating code changes and delivering software updates rapidly, reliably, and at scale.
  • Clos

    Clos is a scalable, non-blocking network architecture that uses multiple stages of switches to efficiently connect large numbers of endpoints, often used in data centers and HPC environments.
  • Cloud Automation

    The use of software tools and scripts to automate the provisioning, management, and optimization of cloud resources and services, reducing manual intervention and improving efficiency.
  • Cloud Builder

    A Cloud Builder is a professional skilled in designing, deploying, and managing cloud infrastructure to maximize performance, reliability, and efficiency while ensuring security and cost-effectiveness.
  • Cloud Bursting

    A cloud deployment strategy that allows applications to dynamically extend from private infrastructure to public cloud resources to handle peak workloads.
  • Cloud Computing

    Cloud computing is an internet-based model providing on-demand access to shared pools of configurable computing resources, such as servers, storage, and applications, enabling rapid scaling and flexible resource management.
  • Cloud Connect

    Cloud Connect is the network connectivity service provided by CSPs to establish secure, dedicated connections between an organization's on-premises infrastructure and resources hosted within the cloud.
  • Cloud Gateway

    A Cloud Gateway securely connects on-premises networks to cloud resources, enabling hybrid cloud integration, traffic management, and seamless data exchange.
  • Cloud Migration

    The process of moving data, applications, and workloads from on-premises infrastructure to cloud environments to achieve greater scalability, flexibility, and cost efficiency.
  • Cloud Orchestration

    The coordinated management and automated arrangement of cloud resources, services, and workflows to optimize operations and achieve business objectives.
  • Cloud Repatriation

    The process of moving workloads or data from public cloud environments back to on-premises infrastructure or private clouds, often for cost, performance, or compliance reasons.
  • Cloud Storage

    A service that enables users to store, manage, and access data over the internet using scalable, distributed infrastructure provided by cloud vendors.
  • Cloud-native

    An approach to designing, building, and running applications that fully exploit the advantages of cloud computing, emphasizing scalability, resilience, and automation.
  • Cluster

    A cluster is a group of interconnected computers or servers that work together to perform a common task or provide a specific service.
  • CNI

    Container Network Interface (CNI) is a standard that enables container runtimes to integrate with networking plugins, providing consistent, flexible network connectivity for containers.
  • CNI Operator

    A CNI operator automates the installation, configuration, and lifecycle management of Container Network Interface (CNI) plugins in Kubernetes clusters.
  • Collapsed Core

    A collapsed core architecture combines the core and distribution layers of a network into a single, simplified layer, reducing complexity and cost while improving scalability.
  • Commodity Server

    A commodity server is a standard off-the-shelf hardware configuration widely available from multiple vendors, offering cost-effective and interoperable general-purpose computing resources.
  • Compute Leaf

    A compute leaf is a network switch at the access layer, specifically designed to connect compute resources such as servers and virtual machines to the network fabric in data centers.
  • Congestion

    Congestion occurs when network demand exceeds available capacity, causing degraded performance, increased latency, and potential packet loss. Effective management is essential for reliable application delivery.
  • Container Orchestration

    The automated management of containerized applications, including deployment, scaling, and networking, across clusters.
  • Control Node

    A control node manages and orchestrates network resources, policies, and services, enabling centralized automation and optimization of modern network infrastructures.
  • Control Plane

    The control plane is a component of network architecture that governs how data packets are routed and manages the configuration and behavior of network elements. It performs critical functions including routing decisions, policy enforcement, and network topology management to ensure efficient and secure data transmission.
  • Cost-optimized

    A cost-optimized path is the route taken by data packets that minimizes the overall cost of network transmission. Hedgehog's service gateway will allow customization for cost or latency optimization.
  • CPU

    The CPU, or Central Processing Unit, is the primary hardware component of a computer responsible for interpreting and executing most of the commands from the computer's other hardware and software.
  • CRD

    A Kubernetes feature that allows users to extend the API with custom resource types, enabling tailored automation and management.
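    A CRD is declared as a manifest. The sketch below shows one possible minimal definition expressed as a Python dict; the group, kind, and resource names are made-up examples, not an actual API.

```python
# Minimal CustomResourceDefinition manifest as a Python dict (illustrative
# names only); serialized to YAML or JSON and applied, it teaches the
# Kubernetes API server a new resource type.
import json

crd = {
    "apiVersion": "apiextensions.k8s.io/v1",
    "kind": "CustomResourceDefinition",
    "metadata": {"name": "vpcs.example.com"},
    "spec": {
        "group": "example.com",
        "scope": "Namespaced",
        "names": {"plural": "vpcs", "singular": "vpc", "kind": "VPC"},
        "versions": [{
            "name": "v1alpha1", "served": True, "storage": True,
            "schema": {"openAPIV3Schema": {"type": "object"}},
        }],
    },
}
print(json.dumps(crd, indent=2))
```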
  • Data Edge

    The "data edge" refers to the frontiers of a network where data is first produced or collected, typically through devices like sensors, Internet of Things (IoT) gadgets, and edge servers. This concept is central to edge computing and is pivotal for real-time data processing and analytics.
  • Data Isolation

    Data isolation is a security process that involves keeping data segregated to prevent unauthorized access and maintain confidentiality, integrity, and privacy. It applies both logical and physical constraints to protect sensitive information within a computing environment.
  • Data Plane

    The data plane, also known as the forwarding plane, is the component of network infrastructure that processes and forwards data packets between devices. It performs essential functions like packet routing, filtering, and switching, based on predefined network rules. Hedgehog utilizes an open-source VPP data plane, allowing for transparent, standards-based technology usage.
  • DevOps

    DevOps is a set of practices that combines software development and IT operations to shorten development cycles, increase deployment frequency, and deliver high-quality software reliably.
  • DHCP

    Dynamic Host Configuration Protocol (DHCP) automates the assignment of IP addresses and network configuration parameters to devices, simplifying network administration and connectivity.
  • DHCP Relay

    A DHCP relay enables centralized IP address management by forwarding DHCP requests between clients and servers across different subnets, ensuring seamless network configuration and scalability.
  • Direct Connection

    A direct connection is a dedicated, point-to-point link established between two endpoints without intermediate network devices or the need to traverse the public internet. Hedgehog's service gateway will provide private direct connection support.
  • Distributed Firewall

    A distributed firewall is a network security solution that extends traditional firewall capabilities across multiple locations or devices within a network, distributing enforcement points closer to the endpoints they protect.
  • DNS

    Domain Name System (DNS) is a decentralized, hierarchical system that translates human-readable domain names into IP addresses, enabling seamless access to internet and network resources.
  • DPU

    A specialized processor designed to offload networking, storage, and security functions from CPUs, enhancing performance in cloud and data center environments.
  • DPU-resident CNI

    A DPU-resident CNI is a network interface technology that resides on a Data Processing Unit (DPU) to facilitate and accelerate network communication for containerized applications, enhancing performance by offloading processing from the CPU.
  • DPU/Host CNI

    DPU/Host CNI integrates Data Processing Units (DPUs) with Container Network Interface (CNI) plugins on host systems, offloading network processing to hardware accelerators to improve performance and security for container workloads.
  • Dual Homing

    Dual homing is a network configuration in which a device is connected to two independent network paths for redundancy, high availability, and improved reliability.
  • East-west Traffic

    East-west traffic in a cloud environment refers to the lateral communication between servers, containers, or applications within the same data center or cloud network, as opposed to traffic entering or leaving it.
  • ECMP

    Equal-Cost Multi-Path (ECMP) is a routing strategy that distributes network traffic across multiple paths with identical cost, optimizing performance, redundancy, and load balancing.
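    Switches implement ECMP in hardware, but the idea can be sketched in a few lines: hash each flow's 5-tuple and use the result to pick one of the equal-cost next hops, so a single flow stays on one path while different flows spread across all of them. A minimal, illustrative sketch with made-up path names:

```python
# Illustrative ECMP path selection: hash the flow's 5-tuple and map it onto
# one of several equal-cost next hops.
import hashlib

def pick_next_hop(src_ip, dst_ip, src_port, dst_port, proto, next_hops):
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return next_hops[digest % len(next_hops)]

paths = ["spine-1", "spine-2", "spine-3", "spine-4"]
print(pick_next_hop("10.0.0.5", "10.0.1.9", 49152, 443, "tcp", paths))
```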
  • Edge Computing

    A distributed computing paradigm that processes data near the source of generation, reducing latency and bandwidth usage compared to centralized cloud models.
  • Egress

    The process of transmitting outgoing data traffic from a network or device to external destinations.
  • End-to-end

    "end-to-end" refers to a system or solution that comprehensively handles all stages of a process, from the initial starting point to the final outcome, without requiring external intervention or additional systems. Hedgehog offers an end-to-end private networking solution for AI applications.
  • ESI

    An Ethernet Segment Identifier (ESI) is a unique identifier assigned to an Ethernet segment in EVPN, enabling loop prevention, traffic isolation, and redundancy in multihoming deployments.
  • Explicit Congestion Notification (ECN)

    Explicit Congestion Notification (ECN) is a network feature that allows routers to signal congestion to endpoints by marking packets instead of dropping them, enabling a preemptive response to avoid packet loss and maintain throughput.
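    A rough sketch of the marking idea, with made-up thresholds and field names rather than any vendor's implementation: once queue occupancy crosses a threshold, ECN-capable packets are marked instead of dropped.

```python
# Illustrative ECN marking: mark packets (set the CE codepoint) once the
# queue fills past a threshold, dropping only when the queue is truly full.
ECN_CE = 0b11  # Congestion Experienced codepoint in the IP header's ECN bits

def enqueue(packet, queue, mark_threshold=80, capacity=100):
    if len(queue) >= capacity:
        return False              # tail-drop only as a last resort
    if len(queue) >= mark_threshold and packet.get("ecn_capable"):
        packet["ecn"] = ECN_CE    # signal congestion without dropping
    queue.append(packet)
    return True
```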
  • External Peering Policy

    External peering policy governs the establishment, security, and optimization of direct network interconnections between an organization and external networks or autonomous systems.
  • Fabric Cluster

    A fabric cluster is a network of interconnected nodes or servers that work together to provide a high-availability, scalable, and fault-tolerant computing environment, enabling efficient execution of distributed applications.
  • Far Edge

    Far edge computing refers to the deployment of computational resources and services at the most remote areas of a network, directly adjacent to the devices generating or consuming data, facilitating minimal latency and immediate data processing.
  • Firewall

    A firewall is a network security device or software that monitors and controls incoming and outgoing network traffic based on predetermined security rules. Hedgehog's Cloud Network Services Security PLUS will offer a distributed firewall.
  • Flow

    A sequence of network packets sharing common attributes, fundamental for traffic management and optimization in modern networks.
  • Flow Control

    Flow control regulates data transmission rates between devices to prevent congestion, buffer overflow, and data loss, ensuring reliable and efficient network communication.
  • Flow Controller

    A flow controller is a device or algorithm designed to regulate the rate of fluid, data, or event flow through a system to maintain optimal performance, prevent congestion, and ensure system stability.
  • Forwarding

    Forwarding is the process of passing data packets from one network device to another based on their destination addresses.
  • Forwarding Pipeline

    A forwarding pipeline is a series of processes within a network device that handles the examination, decision-making, and direction of data packets to their next destination. It is a crucial component that determines how data moves through a network, affecting speed, efficiency, and traffic management.
  • Front-end Network

    A network segment in AI clouds that manages user access and data exchange with backend resources, supporting multi-tenant environments.
  • Generative AI (GenAI)

    Generative AI (GenAI) encompasses machine learning models and algorithms designed to autonomously create new, synthetic data that mimics authentic samples in various forms such as text, images, music, or videos.
  • GitOps

    GitOps is a paradigm that combines software development and IT operations, utilizing Git as the single source of truth for system configuration and state. It automates the application delivery pipeline by leveraging Git version control for collaboration, versioning, and change management.
  • GPU

    A specialized processor designed to accelerate parallel computations, essential for modern AI and machine learning workloads.
  • GPU Cloud

    GPU cloud refers to cloud-based computing services that offer powerful graphics processing units (GPUs) to handle data-intensive tasks such as AI model training, deep learning, and complex simulations, delivering accelerated performance compared to traditional CPUs.
  • Hardened

    Hardened means a system or component is secured against threats through additional protections, configurations, and best practices.
  • Hedgehog Fabric Operator

    The Hedgehog Fabric Operator is a Kubernetes operator designed to automate the management of Hedgehog's network fabric, a tool for creating and managing network connectivity within containerized environments.
  • Hybrid Cloud

    An IT architecture that integrates public and private cloud environments, enabling data and application portability for greater flexibility and optimization.
  • Hyperscaler

    A company that operates massive data centers and cloud infrastructure, delivering scalable and global cloud services.
  • IaC

    Infrastructure as Code (IaC) is a methodology that manages and provisions computing infrastructure using machine-readable definition files, enabling automation, consistency, and version control.
  • Immutable Linux

    Immutable Linux makes critical system files and directories read-only or immutable to prevent unauthorized modifications, enhancing security and system stability.
  • In-band Management

    In-band management is the practice of managing network devices using the same network infrastructure that carries production traffic, offering convenience but requiring careful security and prioritization.
  • Incident Response

    A structured approach for detecting, managing, and recovering from security or operational incidents to minimize impact and restore normal operations quickly.
  • Infiniband

    InfiniBand is a communications protocol for high-throughput, low-latency networking, primarily used in supercomputing and data center environments to interconnect servers, storage systems, and other hardware to facilitate rapid data transfer and processing.
  • Ingress

    The process of receiving and handling incoming data traffic as it enters a network or device.
  • Inter-site Gateway

    An inter-site gateway connects geographically distributed sites, ensuring secure, optimized, and reliable data exchange across wide area networks.
  • Inter-VPC Peering Policy

    Inter-VPC peering policy enables secure, scalable communication between Virtual Private Clouds, supporting multi-cloud architectures and flexible network segmentation.
  • Internet Gateway

    An Internet Gateway bridges local networks to the public internet, providing routing, NAT, security, and traffic management for secure and reliable connectivity.
  • Intra-VPC

    Intra-VPC policy governs secure, segmented communication and data flow within a single Virtual Private Cloud, enabling fine-grained control, compliance, and isolation for cloud resources.
  • Introspection

    Introspection is a programming capability that allows a system to examine and manipulate its own internal structures, properties, and states at runtime, enabling dynamic behavior and self-modification.
  • Intrusion Protection

    Intrusion Protection is a network security solution designed to monitor network traffic for malicious activity or security policy violations and take automated actions to block or mitigate threats in real-time.
  • IoT

    The Internet of Things (IoT) is a network of interconnected devices that communicate and exchange data, enabling automation, monitoring, and control across diverse environments.
  • IPAM

    IP Address Management (IPAM) is a comprehensive approach to planning, tracking, and managing IP addresses within a network, ensuring efficient utilization and centralized control.
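    As a toy illustration (not a real IPAM system), an allocator only needs a pool, a record of what is in use, and allocate/release operations:

```python
# Toy IPAM allocator using only the standard library: hand out free host
# addresses from a pool and track them centrally.
import ipaddress

class SimpleIpam:
    def __init__(self, cidr):
        self.pool = ipaddress.ip_network(cidr)
        self.allocated = set()

    def allocate(self):
        for host in self.pool.hosts():
            if host not in self.allocated:
                self.allocated.add(host)
                return host
        raise RuntimeError("pool exhausted")

    def release(self, addr):
        self.allocated.discard(ipaddress.ip_address(addr))

ipam = SimpleIpam("10.0.0.0/29")
print(ipam.allocate())  # 10.0.0.1
```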
  • IPsec

    IPsec (Internet Protocol Security) is a suite of protocols that secures IP communications through cryptographic authentication and encryption of IP packets.
  • IPU

    A specialized processor designed for AI workloads, offering highly parallelized computation for machine learning tasks.
  • IPv4

    Internet Protocol version 4 (IPv4) is the foundational protocol for addressing and routing data across the Internet, enabling global connectivity through a 32-bit address space.
  • IPv6

    Internet Protocol version 6 (IPv6) is the successor to IPv4, offering a vastly expanded address space and enhanced features for modern Internet connectivity.
  • Isolation Policy

    An isolation policy is a set of rules and procedures for segregating network traffic, systems, or resources to enhance security and containment.
  • Jitter

    Jitter is the variation in packet arrival times across a network, impacting the quality and reliability of real-time communications like voice and video.
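    A simplified worked example: measure how far the inter-arrival gaps stray from the expected cadence. (RFC 3550 defines a smoothed interarrival-jitter estimator; this just shows the intuition.)

```python
# Jitter as the mean deviation of inter-arrival gaps from the expected
# 20 ms cadence (simplified, not the RFC 3550 estimator).
arrivals_ms = [0.0, 20.1, 40.4, 59.8, 80.9]              # packet arrival times
gaps = [b - a for a, b in zip(arrivals_ms, arrivals_ms[1:])]
jitter = sum(abs(g - 20.0) for g in gaps) / len(gaps)
print(f"mean deviation from expected cadence: {jitter:.2f} ms")
```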
  • K3s

    A lightweight Kubernetes distribution designed for resource-constrained environments and edge computing.
  • K8s

    Kubernetes (K8s) is an open-source platform for automating deployment, scaling, and management of containerized applications, widely used in cloud-native and AI infrastructure.
  • Kubernetes

    An open-source platform for automating deployment, scaling, and management of containerized applications across clusters of hosts.
  • L3-4 SLB

    Layer 3-4 Server Load Balancing (L3-4 SLB) distributes network traffic across multiple servers at the network (Layer 3) and transport (Layer 4) layers, ensuring high availability and optimal performance for applications.
  • L7 Gateway

    An L7 gateway operates at the application layer to perform content-aware routing, load balancing, and security functions for HTTP(S) traffic, enhancing performance and protection for web applications.
  • Latency

    Latency is the time it takes for data to travel across a network, directly affecting application responsiveness and user experience, especially for real-time services.
  • Latency Optimized Path

    A latency-optimized path is the route taken by data packets that minimizes the time data takes to travel from source to destination. Hedgehog's service gateway will offer latency- or cost-optimization options.
  • Leaf

    A leaf is a network switch that serves as an access or edge device within a network fabric, especially in a leaf-spine topology, providing connectivity for end devices in data centers and cloud environments.
  • Legacy Connect

    Legacy Connect is the process of integrating or connecting legacy systems, technologies, or infrastructure with modern IT environments or newer systems. Hedgehog's Gateway MAX service will offer legacy connect.
  • Linear Planner

    A linear planner generates a deterministic sequence of actions based on preconditions and effects to achieve a goal in a predictable, step-by-step manner.
  • LLM

    A Large Language Model (LLM) is an advanced AI framework capable of understanding, processing, and generating human-like text by learning from extensive datasets of natural language.
  • Load Balancing

    A technique that distributes network or application traffic across multiple servers to optimize resource use, maximize throughput, and ensure high availability.
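    The simplest policy is round-robin, sketched below with made-up backend names:

```python
# Minimal round-robin load-balancing sketch: rotate through the backends.
from itertools import cycle

backends = cycle(["app-1:8080", "app-2:8080", "app-3:8080"])
for _ in range(5):
    print("forwarding request to", next(backends))
```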
  • Lossless

    A compression method that preserves all original data, ensuring perfect fidelity after decompression.
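    A quick illustration with Python's standard zlib module: the decompressed output is byte-for-byte identical to the input.

```python
# Lossless compression round-trips exactly: decompressing recovers the
# original bytes with no information lost.
import zlib

data = b"aaaaabbbbbcccccdddddeeeee" * 100
packed = zlib.compress(data)
assert zlib.decompress(packed) == data
print(f"{len(data)} bytes -> {len(packed)} bytes, no information lost")
```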
  • Lossy

    A compression method that reduces file size by discarding some data, commonly used in multimedia at the expense of perfect fidelity.
  • Management Network

    A management network is a dedicated network infrastructure used exclusively for managing and monitoring network devices, systems, and services, providing isolation, security, and centralized control.
  • MCLAG

    Multi-Chassis Link Aggregation Group (MCLAG) is a technology that pairs switches to provide redundancy, load balancing, and high availability by presenting multiple switches as a single logical entity.
  • Merchant Silicon

    Merchant silicon refers to commercially available integrated circuits (ICs) or chips from third-party vendors, used in networking equipment to deliver cost-effective, scalable, and standards-based solutions.
  • Microservices

    An architectural approach that structures applications as a collection of loosely coupled, independently deployable services, each responsible for a specific function.
  • MLAG

    Multi-Chassis Link Aggregation Group (MLAG) enables multiple switches to operate as a single logical switch, providing redundancy, high availability, and load balancing in network deployments.
  • Modified A*

    Modified A* is an enhanced version of the A* (A-star) pathfinding algorithm that integrates additional optimizations and heuristics to address specific challenges, such as reducing memory usage and improving search efficiency in complex environments.
  • Modified Dijkstra's

    Modified Dijkstra's algorithm is a variation of Dijkstra's algorithm used to find the shortest path from a source node to all other nodes in a weighted graph.
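    For reference, the unmodified algorithm is shown below; modified variants typically start from this skeleton and change the edge-cost function or add constraints.

```python
# Baseline Dijkstra's shortest-path search with a binary heap.
import heapq

def dijkstra(graph, source):
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue                      # stale heap entry, skip it
        for neighbor, weight in graph[node]:
            nd = d + weight
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist

graph = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
print(dijkstra(graph, "a"))  # {'a': 0, 'b': 1, 'c': 3}
```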
  • Monitoring

    The process of continuously observing systems, applications, and infrastructure to detect issues, ensure performance, and support proactive management.
  • Multi-cloud

    A strategy that utilizes multiple cloud service providers to optimize performance, cost, and resilience while avoiding vendor lock-in.
  • Multi-cluster

    A multi-cluster architecture is an infrastructure setup where multiple clusters of computing resources operate independently but are interconnected to achieve shared goals, offering flexibility, scalability, and resilience for distributed workloads.
  • Multi-cluster Gateway

    A multi-cluster gateway provides unified ingress and egress routing, service discovery, and security enforcement for applications spanning multiple clusters.
  • Multi-cluster Service Load Balancer

    A multi-cluster service load balancer distributes client requests across service endpoints in multiple Kubernetes clusters to deliver low-latency and high-availability applications.
  • Multi-homing

    Multi-homing is a network architecture in which a device or network connects to two or more independent networks, providing enhanced redundancy, load balancing, and traffic optimization.
  • Multi-tenancy

    An architecture in which a single instance of software serves multiple customers (tenants), providing logical isolation, resource sharing, and cost efficiency.
  • NaaS

    Network as a Service (NaaS) is an outsourced networking model enabling businesses to subscribe to network capabilities as a managed service, often on a pay-as-you-go basis. This model provides scalable, flexible, and efficient networking solutions without the need for physical infrastructure ownership.
  • NAT

    A technique that modifies IP address information in packets, enabling multiple devices to share a single public IP and enhancing network security.
  • NAT/PAT

    NAT (Network Address Translation) and PAT (Port Address Translation) enable multiple devices to share a single public IP address by translating private addresses and ports, improving security and conserving IP resources.
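    A toy PAT table, with illustrative addresses and port range: each private (IP, port) pair is mapped to a unique port on one shared public IP.

```python
# Toy port-address-translation sketch (addresses and port base are made up).
PUBLIC_IP = "203.0.113.10"

class PortTranslator:
    def __init__(self, port_base=40000):
        self.next_port = port_base
        self.table = {}                      # (priv_ip, priv_port) -> pub_port

    def translate(self, priv_ip, priv_port):
        key = (priv_ip, priv_port)
        if key not in self.table:
            self.table[key] = self.next_port
            self.next_port += 1
        return PUBLIC_IP, self.table[key]

nat = PortTranslator()
print(nat.translate("192.168.1.20", 51515))  # ('203.0.113.10', 40000)
```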
  • NCCL

    A software library developed by NVIDIA to accelerate multi-GPU and multi-node communication for parallel computing and deep learning.
  • Near Edge

    Near edge computing refers to a distributed computing framework that situates processing capabilities closer to data sources or end-users than centralized data centers, but not as close as edge devices, to achieve a balance of low latency and scalability.
  • Network Cluster

    A network cluster is a group of interconnected computing devices organized to work together as a single system, providing shared network services, processing power, or storage capacity for enhanced performance and reliability.
  • NIC

    A Network Interface Card (NIC) is hardware that connects a computer or server to a network, enabling communication and data transfer.
  • North-south Traffic

    Traffic flowing between external networks and internal cloud resources, crucial for managing cloud security and scalability.
  • NTP

    Network Time Protocol (NTP) is a protocol used to synchronize clocks across devices in a computer network, ensuring accurate and consistent timekeeping.
  • NVMe/TCP

    A technology that enables high-speed, low-latency storage access over standard TCP/IP networks, combining NVMe storage with TCP transport.
  • Object Storage

    A data storage architecture that manages data as objects, enabling scalable, durable, and cost-effective storage for unstructured data in cloud environments.
  • Observability

    A comprehensive approach to understanding the internal states of systems by collecting, analyzing, and correlating metrics, logs, and traces for effective monitoring and troubleshooting.
  • Observer

    An observer, in a technical context, is a design pattern where an object, referred to as the observer, is notified of and reacts to state changes in another object, known as the subject. This pattern facilitates a one-to-many dependency allowing for decentralized event handling and efficient data updates.
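    A minimal example of the pattern:

```python
# Classic observer pattern in miniature: observers register with a subject
# and are notified whenever its state changes.
class Subject:
    def __init__(self):
        self._observers = []
        self.state = None

    def attach(self, observer):
        self._observers.append(observer)

    def set_state(self, state):
        self.state = state
        for observer in self._observers:
            observer(state)                 # push the change to every observer

subject = Subject()
subject.attach(lambda s: print("observer saw state:", s))
subject.set_state("link-down")
```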
  • Open Network Fabric

    A network architecture based on open standards and SDN principles, enabling scalable, programmable, and vendor-neutral connectivity.
  • Operator

    A software extension that automates the management of complex applications and resources on Kubernetes by encoding domain-specific knowledge.
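    The core of any operator is a reconcile loop that compares desired state with observed state and acts to close the gap. The sketch below mimics that pattern without any Kubernetes client; the resource names are illustrative.

```python
# Reconcile-loop sketch: create what is declared but missing, delete what
# exists but is no longer declared.
def reconcile(desired, observed, create, delete):
    for name in desired - observed:
        create(name)       # missing resources get created
    for name in observed - desired:
        delete(name)       # unwanted resources get removed

reconcile(
    desired={"vpc-a", "vpc-b"},
    observed={"vpc-b", "vpc-stale"},
    create=lambda n: print("creating", n),
    delete=lambda n: print("deleting", n),
)
```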
  • Out-of-Band Management

    Out-of-band management is a method of managing network devices using a separate, dedicated network isolated from production traffic, enhancing security, reliability, and operational continuity.
  • Over-the-top Operation

    Over-the-top (OTT) operation delivers digital content and services directly to end-users over the internet, bypassing traditional distribution channels like cable or satellite.
  • P4

    P4 is a domain-specific language for programming packet-processing devices to define customizable data plane behavior.
  • Packet Loss

    Packet loss is the failure of data packets to reach their destination, causing reduced network reliability and degraded application performance, especially for real-time services.
  • Packet Restamper

    A packet restamper is a network function or device that modifies specific header fields in data packets, such as timestamps or checksums, to ensure accurate, synchronized, and reliable data transmission.
  • Passive Node

    A passive node is a component within a distributed computing system designed to remain in standby mode without actively handling tasks or client requests, poised to take over in case the primary, or active, node fails. This concept is key to high availability and fault tolerance strategies, enabling uninterrupted service continuity.
  • Peering

    A direct interconnection between networks that enables efficient and cost-effective exchange of traffic.
  • Peering Provider

    A peering provider is an entity that facilitates the connection of different networks, allowing them to exchange traffic directly without routing through the public internet. Hedgehog's service gateway will offer support for multiple peering providers.
  • PFC

    Priority-based Flow Control (PFC) is a network protocol mechanism that temporarily halts data transmission on Ethernet networks to prevent packet loss during congestion by pausing specific traffic classes based on their assigned priority.
  • Pipeline Builder

    A tool or framework that automates the construction and management of data pipelines for processing and analyzing large datasets.
  • PoE

    Power over Ethernet (PoE) is a technology that delivers electrical power and data over standard Ethernet cables, simplifying deployment of network devices such as IP phones, cameras, and access points.
  • Prioritization

    Prioritization in networking assigns precedence to certain traffic or data, ensuring critical applications receive the resources they need for optimal performance.
  • Private Cloud

    A cloud infrastructure operated exclusively for a single organization, providing greater control, security, and customization compared to public cloud.
  • Private Connection

    A private connection is a dedicated, secure link established between two endpoints, typically without traversing the public internet. Hedgehog's service gateway will provide private direct connection support.
  • PTP

    Precision Time Protocol (PTP) is a network protocol that synchronizes the clocks of devices to a highly accurate time reference, enabling precise coordination across distributed systems.
  • Public Cloud

    A cloud computing model in which services and infrastructure are provided by third-party vendors over the internet, accessible to multiple organizations.
  • PXE

    Preboot eXecution Environment (PXE) enables network-based booting and automated OS installation, streamlining deployment and management of large-scale computer fleets.
  • RDMA

    Remote Direct Memory Access (RDMA) is a communication protocol enabling the exchange of data directly between the main memory of two systems without CPU involvement, significantly reducing latency and overhead.
  • Redundancy

    Redundancy in technical systems is a strategic implementation of duplicate components, systems, or processes to ensure reliability and continuous operation in the event of failure or disruption of any single element.
  • Reserved Flow

    Reserved flow is a network management technique in which a specific amount of bandwidth or resources is allocated exclusively for a particular type of traffic, application, or service to ensure reliable performance and Quality of Service (QoS).
  • Resource Allocator

    A resource allocator is a system or mechanism within a computing environment that manages and distributes various resources, such as CPU cycles, memory, storage space, and network bandwidth, to applications and users according to their needs and priorities.
  • Resource Manager

    A resource manager is a system in distributed computing that coordinates the allocation and management of computational resources like CPU, memory, and network bandwidth to fulfill the requirements of various applications or tasks.
  • RFC7938

    RFC 7938 describes using BGP for routing in large-scale data centers, offering scalability, flexibility, and best practices for modern network architectures.
  • RoCEv2

    RoCEv2, or RDMA over Converged Ethernet version 2, is a network protocol that facilitates efficient data transfers by enabling RDMA over Ethernet, minimizing latency and CPU load during direct memory access between systems. The Hedgehog data plane manages congestion and adapts routing in backend GPU fabrics with RoCEv2 for optimal AI network performance.
  • Scheduler

    A scheduler is a component responsible for managing and coordinating the execution of tasks or jobs across available resources.
  • Scheduler Node

    A scheduler node is a dedicated server or process within a distributed computing system tasked with managing the distribution and execution of workloads across computational resources, ensuring efficient operation and adherence to scheduling policies.
  • Security Policy

    A security policy defines an organization’s rules and procedures to protect information assets, ensure compliance, and mitigate cybersecurity risks.
  • Service Load Balancer

    A service load balancer distributes client requests across multiple backend servers or services to deliver high availability, optimal performance, and seamless scalability for applications.
  • Service Mesh

    A dedicated infrastructure layer that manages service-to-service communication, security, and observability within microservices architectures.
  • Service Node

    A service node is a specialized network component that hosts and delivers specific services or applications, supporting essential network functions and user needs.
  • Service Policy

    A service policy is a network configuration that defines rules for classifying, filtering, prioritizing, and shaping traffic to deliver consistent performance, security, and compliance.
  • Shaping

    Shaping is the process of controlling the rate at which data packets are transmitted across a network to enforce bandwidth limits and maintain quality of service.
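    The classic mechanism is a token bucket, sketched below: tokens accrue at the configured rate, each transmitted byte spends one, and the bucket depth bounds how large a burst can be.

```python
# Token-bucket shaper sketch: admit a packet only if enough tokens have
# accumulated; otherwise hold or delay it.
import time

class TokenBucket:
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8          # bytes added per second
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes):
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True                    # send now
        return False                       # hold the packet for later

shaper = TokenBucket(rate_bps=1_000_000, burst_bytes=15_000)
print(shaper.allow(1500))                  # True: within the burst allowance
```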
  • Shared Resource Flow

    A shared-resource flow refers to network traffic where multiple entities share underlying network resources, requiring policies and mechanisms to manage contention and ensure fairness.
  • Site Connect

    Site Connect is the establishment of network connectivity between different physical locations or sites within an organization's network infrastructure. Hedgehog's Cloud Network Services will offer site connect.
  • SLA

    A formal agreement that defines the expected level of service between a provider and a customer, including metrics, responsibilities, and remedies for non-compliance.
  • SLI

    A quantitative metric used to measure the performance or reliability of a service, forming the basis for defining and tracking SLOs and SLAs.
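    A common availability SLI is simply good events divided by total events over a window, for example:

```python
# Availability SLI over a measurement window, compared against a 99.9% SLO.
good_requests, total_requests = 999_352, 1_000_000
sli = good_requests / total_requests
slo = 0.999                              # target the SLI is measured against
print(f"SLI = {sli:.4%}, SLO met: {sli >= slo}")
```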
  • SLO

    A specific, measurable target for service reliability or performance that guides operational priorities and customer expectations.
  • SmartNIC

    A SmartNIC is a programmable network interface card that offloads advanced networking, security, and storage tasks from the CPU, improving performance and efficiency in cloud and data center environments.
  • Spine

    A spine is a high-capacity network switch that forms the core or backbone of a network fabric in a leaf-spine topology, providing high-speed interconnectivity between leaf switches.
  • SRE

    Site Reliability Engineering (SRE) is a discipline that incorporates aspects of software engineering into the IT operations domain to create scalable and reliable software systems.
  • Tenant

    In cloud computing, a "tenant" refers to an individual, organization, or service that occupies a portion of shared cloud infrastructure, much like a renter occupies a unit within an apartment complex. Tenants securely access and manage their allocated resources, such as computing power, storage, and applications, while isolated from other tenants.
  • Tenant Traffic Prioritization

    Tenant traffic prioritization assigns different priority levels to network traffic from multiple tenants, optimizing resource allocation and ensuring SLA compliance in shared environments.
  • Throughput

    Throughput measures the actual rate of successful data transfer over a network, reflecting real-world performance and efficiency for users and applications.
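    A back-of-the-envelope check: ideal transfer time is data size divided by link rate; real throughput lands below this because of protocol overhead, congestion, and loss.

```python
# Ideal (best-case) transfer time for a bulk copy over a 10 Gb/s link.
data_bytes = 10 * 10**9            # 10 GB payload
link_bps = 10 * 10**9              # 10 Gb/s link rate
ideal_seconds = (data_bytes * 8) / link_bps
print(f"best case: {ideal_seconds:.1f} s")   # 8.0 s
```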
  • Turn-key

    A turn-key solution refers to a system or service designed to be fully operational and immediately usable upon delivery, with no need for further installation, configuration, or customization. Hedgehog offers a turn-key end-to-end private networking solution for AI applications and the distributed cloud.
  • Virtual Private Cloud

    A logically isolated cloud environment that emulates a traditional data center network, providing customizable and secure networking for cloud resources.
  • Virtual Private Cloud (VPC)

    A Virtual Private Cloud (VPC) is a customizable, isolated network space within a public cloud that provides users with control over virtual networking resources, facilitating secure and scalable cloud operations.
  • VLAN

    A Virtual Local Area Network (VLAN) is a logical segmentation of a physical network, providing isolation, security, and optimized performance for grouped devices.
  • VM

    A virtual machine (VM) is a software-based emulation of a physical computer that runs an operating system and applications, offering flexible resource isolation and portability.
  • VPC API

    The VPC API is an interface that enables programmatic configuration, management, and monitoring of Virtual Private Clouds, allowing automated control over network infrastructure in a cloud environment.
  • VPP

    Vector Packet Processing (VPP) is an open-source framework that implements high-performance packet processing in software. It processes packets in batches (vectors) to optimize throughput and minimize latency, making it suitable for demanding network functions. Hedgehog utilizes the open-source VPP data plane, allowing for transparent, standards-based technology usage.
  • VRF

    Virtual Routing and Forwarding (VRF) enables multiple isolated routing domains to coexist within a single network infrastructure, allowing for secure, independent traffic segmentation and multi-tenancy.
  • VXLAN

    Virtual Extensible LAN (VXLAN) is a network virtualization technology that enables scalable, isolated Layer 2 overlay networks across Layer 3 infrastructures in modern data centers.
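    The encapsulation adds an 8-byte VXLAN header (flags plus a 24-bit VNI) carried in a UDP datagram to port 4789; the sketch below builds that header by hand, purely for illustration.

```python
# Build the 8-byte VXLAN header: flags word (I flag set) followed by the
# 24-bit VNI and a reserved byte. The encapsulated Ethernet frame would
# follow this header inside the UDP payload.
import struct

def vxlan_header(vni):
    flags = 0x08 << 24                 # "I" flag: VNI field is valid
    return struct.pack("!II", flags, vni << 8)

hdr = vxlan_header(vni=5001)
print(hdr.hex())  # 0800000000138900 -> flags + VNI 0x1389 + reserved byte
```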
  • White Box

    White box networking hardware is built from standardized, off-the-shelf components and open architectures, enabling flexibility, cost savings, and vendor independence compared to proprietary solutions.
  • WireGuard

    WireGuard is a modern, high-performance VPN protocol that provides secure, efficient, and simple encrypted connectivity across diverse platforms and network environments.
  • Zero Trust

    A security model that requires strict identity verification for every user and device, regardless of location, to protect resources and data.
  • ZTP

    Zero Touch Provisioning (ZTP) automates the deployment and configuration of network devices, enabling rapid, consistent, and hands-off provisioning for scalable network operations.