Glossary

near edge

Near edge computing refers to a distributed computing model that places processing capabilities closer to data sources or end-users than centralized data centers are, though not as close as edge devices themselves, in order to balance low latency with scalability.

Near edge computing represents a layer within the distributed computing hierarchy that provides a compromise between the immediate proximity of edge computing and the robust resources of centralized cloud infrastructures. It serves as an intermediary processing tier that can handle data aggregation, filtering, and preliminary analytics before sending refined information to centralized systems or returning insights to users or devices.

Proximity to End-Users: By positioning computing resources nearer to where data is generated or consumed, near edge computing reduces transmission delays, thereby enhancing experiences in augmented reality (AR), virtual reality (VR), and real-time analytics.

Aggregation and Processing: This layer acts as a funnel for data from numerous edge devices, performing initial computations to lessen the data payload sent to the cloud, which conserves bandwidth and accelerates processing times.
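The aggregation-and-filtering role described above can be illustrated with a minimal sketch. The function name, the threshold value, and the summary fields below are hypothetical choices for illustration, not part of any standard near-edge API: a near-edge node discards out-of-range samples and forwards only a compact summary instead of every raw reading.

```python
from statistics import mean

def aggregate_readings(readings, threshold=100.0):
    """Filter out-of-range samples and reduce a batch of raw
    edge-device readings to a compact summary before it is
    forwarded upstream to the cloud."""
    valid = [r for r in readings if r <= threshold]  # drop implausible values
    if not valid:
        return None
    return {
        "count": len(valid),
        "min": min(valid),
        "max": max(valid),
        "mean": mean(valid),
    }

# A batch of raw sensor samples arriving from edge devices;
# 150.3 exceeds the (assumed) threshold and is filtered out:
batch = [21.5, 22.0, 150.3, 21.8]
summary = aggregate_readings(batch)
# Only the small summary dict is sent upstream, not every raw sample.
```

Sending four summary fields in place of an arbitrarily long list of samples is what conserves bandwidth between the near edge and the cloud.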

Scalability and Flexibility: Near edge computing environments can scale up or down based on demand, providing the flexibility to manage various AI workloads efficiently without overburdening edge devices or relying solely on distant cloud data centers.

Edge-to-Cloud Integration: It facilitates a hybrid approach, combining local, low-latency processing at the near edge with the extensive capabilities of cloud services, optimizing both performance and cost.

Edge Networking Infrastructure: The success of near edge computing is underpinned by a robust edge networking infrastructure that ensures seamless data transfer and communication across the distributed network.