
Omni-Path

Omni-Path is a high-performance communication architecture designed for high-performance computing (HPC) and artificial intelligence (AI) applications, enabling low-latency, high-bandwidth interconnectivity across large-scale clusters of servers and storage systems. Originally developed by Intel Corporation, it supports remote direct memory access (RDMA) protocols and features like adaptive routing and quality of service (QoS) to optimize data transfer efficiency in tightly coupled environments. Since 2020, Cornelis Networks has owned and advanced the technology, relaunching it in 2025 with enhanced capabilities for modern AI and scale-out workloads. In November 2025, Cornelis announced the CN6000 family, extending Omni-Path to 800 Gbps with Ethernet RoCEv2 and Ultra Ethernet integration, planned for customer sampling in mid-2026.

Intel introduced Omni-Path Architecture (OPA) in November 2015 as a core component of its Scalable System Framework aimed at exascale-class HPC, building on prior technologies but with innovations for cost-effective scalability. The architecture quickly gained adoption, powering over 50% of 100 Gbps systems on the TOP500 list by November 2016 and supporting diverse applications from scientific simulations to machine learning. In 2019, Intel discontinued further development and canceled plans for a 200 Gbps variant, shifting focus to other interconnect solutions. Cornelis Networks, a spinout from Intel's Omni-Path division, acquired the technology in September 2020, revitalizing the ecosystem with independent support and a new product roadmap.

Key components of Omni-Path include host fabric adapters (HFAs, rebranded as SuperNICs by Cornelis), edge and director switches, optical and copper cables, and management software suites that facilitate deployment and monitoring. It delivers sub-microsecond latencies (as low as 910 ns end-to-end for the original OPA100; under 600 ns for the 2025 CN5000), message rates exceeding 800 million bi-directional messages per second for the CN5000, and bandwidths scaling from 100 Gbps in the original OPA100 series to 400 Gbps in the 2025 CN5000 family, with 800 Gbps in the CN6000 announced in November 2025 and planned for mid-2026. Notable features encompass packet integrity protection for error correction without added latency, dynamic lane scaling for link resiliency, and compatibility with open standards like the OpenFabrics Enterprise Distribution (OFED) and MPI libraries. These attributes position Omni-Path as a competitive alternative to InfiniBand and Ethernet in HPC environments, emphasizing performance-per-watt efficiency and reduced total cost of ownership for large-scale deployments.

Introduction

Overview

Omni-Path is a high-performance fabric architecture originally developed by Intel for data centers and high-performance computing (HPC) environments. It originated from Intel's 2012 acquisitions of interconnect technologies from QLogic and Cray, forming the basis for a new generation of scalable networking fabrics. The primary purpose of Omni-Path is to provide low-latency, high-bandwidth interconnects that support scalable clusters, with a focus on integrating CPUs, memory, and storage in tightly coupled systems. This enables efficient data movement in demanding workloads such as scientific simulations and large-scale data processing. A key goal of the architecture is to reduce power consumption compared to predecessor technologies, achieved through an optimized design that minimizes infrastructure overhead while sustaining performance in HPC deployments. Following Intel's divestiture in 2020, Cornelis Networks has owned and advanced the technology, relaunching it in 2025 as the CN5000 family with 400 Gbps bandwidth and enhanced support for AI and scale-out workloads, with 800 Gbps planned for 2026. Omni-Path was announced by Intel in November 2014, with the first products shipping in early 2016.

Key Characteristics

Omni-Path features a lossless fabric design that employs credit-based flow control to eliminate packet loss and maintain reliable data transmission across the network. This approach uses flow control digits and link transfer packets to manage congestion at a granular level, ensuring deterministic performance without retransmissions. A core attribute is its support for remote direct memory access (RDMA), which allows direct data transfers between application memory spaces on remote nodes, bypassing the CPU to minimize processing overhead and enable efficient, low-latency communication. Omni-Path integrates with standard protocols, including the OpenFabrics Interfaces (OFI), to provide compatibility with a wide range of software ecosystems and off-the-shelf applications. Unlike solutions focused solely on maximum throughput, Omni-Path prioritizes cost-effectiveness and power efficiency in its design principles, aiming to deliver scalable fabrics that reduce overall system expenses and energy consumption. The architecture relies on modular hardware components, including Host Fabric Adapters (HFAs) for host connectivity, switches for fabric interconnection, and director-class switches for expanding to large-scale deployments. Building on InfiniBand-like technologies, Omni-Path introduces optimizations such as adaptive routing and traffic prioritization to better suit HPC and AI demands.
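
As a sketch of how the OFI integration above is typically exercised, the short C program below enumerates the libfabric providers visible on a host and flags entries whose provider name suggests an Omni-Path path. The provider names checked ("psm2" for the Intel-era interface and "opx" for the Cornelis Omni-Path Express provider) and the API version constant are assumptions about a common installation rather than details taken from this article.

    /* Minimal sketch: enumerate libfabric (OFI) providers and look for an
     * Omni-Path-capable one. Assumes libfabric headers are installed.
     * Build: gcc ofi_scan.c -lfabric -o ofi_scan
     * Provider names "psm2" and "opx" are assumptions about typical setups. */
    #include <stdio.h>
    #include <string.h>
    #include <rdma/fabric.h>

    int main(void)
    {
        struct fi_info *info = NULL, *cur;

        /* Ask libfabric for every provider/fabric it can discover (no hints). */
        int ret = fi_getinfo(FI_VERSION(1, 9), NULL, NULL, 0, NULL, &info);
        if (ret) {
            fprintf(stderr, "fi_getinfo failed: %d\n", ret);
            return 1;
        }

        for (cur = info; cur; cur = cur->next) {
            const char *prov = cur->fabric_attr->prov_name;
            const char *fab  = cur->fabric_attr->name;
            int is_opa = prov && (strstr(prov, "psm2") || strstr(prov, "opx"));
            printf("provider=%-12s fabric=%-24s %s\n",
                   prov ? prov : "?", fab ? fab : "?",
                   is_opa ? "<-- likely Omni-Path" : "");
        }

        fi_freeinfo(info);
        return 0;
    }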

History

Development and Launch

The development of Omni-Path Architecture (OPA) originated from Intel's strategic acquisition of QLogic's InfiniBand business in January 2012, which included the True Scale interconnect technology, for $125 million. This move provided Intel with key assets and expertise in high-performance networking, enabling the company to build upon InfiniBand's foundations while addressing limitations in scalability and efficiency. Additionally, OPA drew significant influence from Cray's Aries interconnect, incorporating elements of its adaptive routing and low-latency design to enhance performance in large-scale systems. These origins positioned Intel to create a next-generation fabric tailored for high-performance computing (HPC) environments. Intel formally announced Omni-Path at the SC14 supercomputing conference on November 17, 2014, positioning it as a 100 Gbps successor to True Scale. The architecture was designed to deliver up to 56% lower switch fabric latency compared to contemporary solutions, with motivations centered on reducing the high costs and power consumption associated with InfiniBand while supporting the demands of exascale computing. Key design goals included optimizing for massive parallelism, minimizing energy use per bit transferred, and enabling cost-effective scaling to millions of nodes, thereby facilitating advancements toward exascale systems without the overhead of proprietary or power-intensive alternatives. Development progressed rapidly, with the first hardware prototypes emerging in 2015, including early host fabric interfaces (HFIs) and switch silicon tested in partner environments. The commercial launch of the OPA 100 series occurred in early 2016, featuring 100 Gbps bidirectional bandwidth per port across adapters, 48-port edge switches, and director-class switches for larger fabrics. Initial partnerships accelerated adoption, including collaborations with Cray for integration into its CS-series cluster supercomputers, Hewlett Packard Enterprise (HPE) for Apollo and ProLiant systems, and Dell for PowerEdge servers, ensuring seamless deployment in HPC clusters from the outset. These efforts established OPA as a viable alternative in the interconnect market, powering early deployments like the Bridges system at the Pittsburgh Supercomputing Center.

Acquisition and Discontinuation by Intel

In 2016, Intel integrated Omni-Path Architecture (OPA) directly into its Xeon Phi (Knights Landing) processors, providing on-package support for the interconnect to enhance performance in high-performance computing (HPC) environments. This integration allowed for tighter coupling between the processor and fabric, reducing latency and improving efficiency for many-core systems targeted at scientific simulations and data analytics. In the same year, Intel released the OPA 100 series, which included host fabric adapters and switches that significantly improved scalability, supporting clusters of up to millions of nodes while maintaining low latency and high bandwidth. This series emphasized cost-effective scaling for large-scale HPC deployments, with features like adaptive routing and congestion management to handle extreme node counts without performance loss. In July 2019, Intel announced it would cease further development of Omni-Path, canceling the planned OPA 200 series that was intended to deliver 200 Gbps upgrades for next-generation HPC and AI workloads. The decision stemmed from Intel's strategic shift toward Ethernet-based fabrics and accelerators, coupled with intense competition from NVIDIA's acquisition of Mellanox and the dominance of InfiniBand in the HPC market. As a result, the last major updates came in 2020, focusing on software enhancements such as IPoIB bonding for improved resiliency and expanded MPI support to optimize application performance on existing OPA 100 infrastructure. This marked the end of Intel's active stewardship, with the technology handed over to Cornelis Networks in a 2020 spin-out.

Revival under Cornelis Networks

In 2020, Intel spun out its Omni-Path Architecture business to form Cornelis Networks as an independent entity, which acquired the technology and intellectual property to revive and advance high-performance interconnect solutions for HPC and AI applications. This move followed Intel's decision to discontinue further development of Omni-Path in 2019, allowing Cornelis to rebrand and redirect the technology toward open standards and broader market adoption. Cornelis quickly outlined a revitalization roadmap, launching Omni-Path Express as a software-optimized evolution of the original architecture. This release replaced the legacy PSM2 drivers with an OpenFabrics Interfaces (OFI)-based provider, enabling higher message rates, reduced latency, and compatibility with existing Omni-Path hardware while facilitating integration with modern HPC software stacks. The initiative positioned Omni-Path as a flexible, vendor-agnostic fabric, emphasizing scalability and cost-effectiveness for large-scale deployments. By June 2025, Cornelis released the CN5000 series, a 400 Gbps Omni-Path solution featuring reworked silicon tailored for AI and HPC workloads, including enhanced collective operations and near-linear scaling for model training. Key enhancements included advanced workload management through adaptive routing and lossless fabric design, alongside a roadmap for Ultra Ethernet Consortium compatibility to bridge performance gaps with Ethernet-based networks. Cornelis also intensified its focus on government and HPC modernization, deploying CN5000 in U.S. Department of Energy labs to support simulations for national security, climate modeling, and AI-driven public services. Strategically, Cornelis repositioned Omni-Path as a cost-competitive alternative to InfiniBand, targeting AI scaling with lower total cost of ownership through open procurement and multi-vendor ecosystems, while delivering superior message rates and efficiency for hyperscale environments. This shift has supported scalability to over 500,000 nodes, emphasizing U.S.-based innovation and reduced dependency on proprietary fabrics. On November 18, 2025, Cornelis unveiled the CN6000 series, introducing an 800 Gbps Ethernet SuperNIC that combines Omni-Path with RoCEv2 and Ultra Ethernet support for enhanced flexibility and performance.

Technical Architecture

The physical layer of Omni-Path Architecture (OPA), as advanced by Cornelis Networks in the CN5000 family (launched 2025), employs QSFP112 connectors to support 400 Gbps links, utilizing four lanes each operating at 112 Gbps with PAM4 signaling for high-speed data transmission in AI and HPC environments. The original OPA 100 (2016) used QSFP28 connectors with four 25 Gbps NRZ lanes for 100 Gbps. This standard interface accommodates both passive copper cables for short distances up to 3 meters and active optical cables for longer reaches of 100 meters or more, enabling flexible deployment in rack-scale and larger fabrics while maintaining signal integrity. At the link layer, OPA implements credit-based flow control to ensure lossless transmission by preventing buffer overflows through real-time credit exchanges between sender and receiver. Adaptive routing complements this mechanism by dynamically balancing load across fabric paths, reducing congestion and optimizing traffic distribution without requiring static configurations. Key hardware components include SuperNIC adapters (formerly Host Fabric Adapters, or HFAs, under Intel), which interface with host systems via PCIe Gen5 x16 for full 400 Gbps throughput; dual-port variants use QSFP-DD connectors. Earlier generations used PCIe 3.0 x16 for 100 Gbps or x8 for up to 58 Gbps. Switch ASICs in the CN5000 series support up to 48 ports per edge switch at 400 Gbps and up to 576 ports in director-class switches, facilitating dense connectivity in scalable topologies. Error detection and correction occur at the link level through cyclic redundancy check (CRC) validation on packets, with automatic retries via the Packet Integrity Protection mechanism to retransmit erroneous 1056-bit bundles without introducing per-packet overhead. This approach enhances fabric reliability by isolating and resolving bit errors locally, distinct from higher-layer error handling. For scalability, OPA supports non-blocking fat-tree topologies that deliver full bisection bandwidth, enabling fabrics for exascale systems with millions of endpoints through extended addressing, as implemented in the CN5000 generation; the original OPA 100 supported up to 16,384 nodes.
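
To make the fat-tree scaling concrete, the following back-of-the-envelope sketch applies the textbook full-bisection formulas for fabrics built from p-port switches (p²/2 endpoints for two tiers, p³/4 for three tiers) to the 48-port edge-switch radix described above; with p = 48 these give 1,152 and 27,648 endpoints. This is generic topology arithmetic under those assumptions, not a vendor sizing tool.

    /* Back-of-the-envelope fat-tree sizing for full-bisection fabrics built
     * from p-port switches (textbook formulas, not vendor sizing data):
     *   two-tier  : p^2 / 2 endpoints
     *   three-tier: p^3 / 4 endpoints */
    #include <stdio.h>

    static long two_tier(long p)   { return p * p / 2; }
    static long three_tier(long p) { return p * p * p / 4; }

    int main(void)
    {
        long radix = 48; /* ports per edge-switch ASIC */
        printf("two-tier fat tree  : %ld endpoints\n", two_tier(radix));   /* 1152  */
        printf("three-tier fat tree: %ld endpoints\n", three_tier(radix)); /* 27648 */
        return 0;
    }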

Protocol and Transport Layers

The Omni-Path link layer manages packet forwarding through a combination of adaptive and deterministic routing algorithms, enabling efficient path selection across the fabric while supporting extended addressing for more than 10 million endpoints via Layer 2 mechanisms like the Destination Local Identifier (DLID) and LID Mask Control (LMC). Virtual lanes (VLs) provide quality of service (QoS) by prioritizing traffic flows, with features such as packet preemption and Traffic Flow Optimization (TFO) that pause lower-priority packets to guarantee deterministic latency for high-priority ones, alongside credit-based flow control to mitigate congestion. At the transport layer, Omni-Path facilitates both reliable connected (RC) and unreliable datagram (UD) modes, allowing flexible data transfer options suited to varying workload requirements, with configurable maximum transmission units (MTUs) up to 65,520 bytes in connected mode. It incorporates Remote Direct Memory Access (RDMA) verbs for one-sided operations, enabling direct memory-to-memory transfers without CPU involvement, and supports atomic memory access for synchronized operations like compare-and-swap to maintain data consistency in distributed environments. The overall protocol stack aligns with the OpenFabrics Enterprise Distribution (OFED), providing compatibility with standard verbs APIs while including Omni-Path-specific extensions like the Performance Scaled Messaging 2 (PSM2) library, which optimizes messaging for bursty, latency-sensitive traffic patterns common in high-performance computing. Management protocols rely on the Subnet Manager (SM) for fabric discovery, topology configuration, and ongoing monitoring of routing and QoS parameters, complemented by OPA tools such as opafabricinfo for querying fabric status and hfidiags for low-level diagnostics like register decoding. Security features at the transport level include partitioning via Partition Keys (PKeys) to isolate traffic within virtual fabrics, preventing cross-communication between unauthorized partitions, and key-based access control alongside GUID verification to enforce secure endpoint admission and mitigate spoofing risks.
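
Because the stack exposes standard OFED verbs, an Omni-Path port can be inspected with ordinary libibverbs calls; the C sketch below lists verbs devices and prints each one's first-port state, LID, and active MTU. The check for a device name containing "hfi1" (the name usually exposed by the Omni-Path host driver on Linux) is an assumption about a typical installation.

    /* Minimal sketch: query verbs devices and first-port attributes through
     * the standard OFED/libibverbs API. Build: gcc verbs_scan.c -libverbs
     * The "hfi1" device-name hint is an assumption about typical Omni-Path hosts. */
    #include <stdio.h>
    #include <string.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num = 0;
        struct ibv_device **devs = ibv_get_device_list(&num);
        if (!devs) {
            perror("ibv_get_device_list");
            return 1;
        }

        for (int i = 0; i < num; i++) {
            const char *name = ibv_get_device_name(devs[i]);
            struct ibv_context *ctx = ibv_open_device(devs[i]);
            if (!ctx)
                continue;

            struct ibv_port_attr port;
            if (ibv_query_port(ctx, 1, &port) == 0) {
                printf("%-12s state=%d lid=%u active_mtu=%d %s\n",
                       name, port.state, (unsigned)port.lid, port.active_mtu,
                       strstr(name, "hfi1") ? "<-- likely Omni-Path" : "");
            }
            ibv_close_device(ctx);
        }

        ibv_free_device_list(devs);
        return 0;
    }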

Performance Specifications

Bandwidth and Latency

Omni-Path's first generation, known as OPA 100 and launched in 2016, provided 100 Gbps bidirectional bandwidth per port, enabling high-throughput data transfer in HPC environments. This was achieved through four 25 Gbps lanes using QSFP28 connectors, supporting efficient scaling for HPC workloads. Intel planned a successor, OPA 200, with 200 Gbps bidirectional bandwidth per port to address growing demands in larger clusters, but development was canceled in 2019 due to market shifts. Under Cornelis Networks, the revived Omni-Path architecture introduced the CN5000 switch in 2025, delivering 400 Gbps bidirectional bandwidth per port across 48 ports, with an aggregate of 38.4 Tbps to support AI and HPC workloads. The CN5000 series maintains sub-1 μs end-to-end latency and up to 2x higher message rates compared to InfiniBand NDR, enhancing performance for scale-out workloads. Latency in Omni-Path systems remains a key strength, with end-to-end latency for small messages under 1 μs in OPA 100 configurations, facilitating rapid inter-node communication. Specifically, the OPA 100 series achieves an end-to-end latency of 910 ns for small messages (one-way, one switch hop, 8-byte payload), minimizing delays in message-passing interfaces. Benchmarks demonstrate Omni-Path's capability for high message rates, reaching up to 172 million messages per second bidirectionally per port in OPA 100 tests using Intel Xeon processors. Performance scales linearly up to 2,048 nodes, retaining near-full injection bandwidth in large fabrics, as validated by Intel's internal testing on clusters from 256 to 2,048 nodes. This behavior is confirmed in HPC benchmarks such as High-Performance Linpack (HPL), where efficiency remains high without significant degradation.
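
Small-message latency figures like those quoted above are conventionally measured with an MPI ping-pong loop between two ranks. The sketch below is a generic microbenchmark of that kind; the iteration counts and 8-byte payload are illustrative choices, not the exact methodology behind the vendor-reported numbers.

    /* Generic MPI ping-pong microbenchmark for small-message round-trip and
     * one-way latency between rank 0 and rank 1. Illustrative only.
     * Build: mpicc pingpong.c -o pingpong ; Run: mpirun -np 2 ./pingpong */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        const int iters = 10000, warmup = 1000;
        char buf[8] = {0};                      /* 8-byte payload */
        int rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        double t0 = 0.0;
        for (int i = 0; i < warmup + iters; i++) {
            if (i == warmup)
                t0 = MPI_Wtime();               /* start timing after warmup */
            if (rank == 0) {
                MPI_Send(buf, 8, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, 8, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, 8, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(buf, 8, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }

        if (rank == 0) {
            double usec = (MPI_Wtime() - t0) * 1e6 / iters;
            printf("avg round trip: %.2f us, one-way: %.2f us\n", usec, usec / 2);
        }

        MPI_Finalize();
        return 0;
    }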

Power Efficiency and Scalability

Omni-Path Architecture (OPA) emphasizes power efficiency through optimized hardware design, achieving lower consumption compared to competing interconnects like InfiniBand. In the initial OPA 100 series, host fabric adapters (HFAs) typically consume 7.4–10.6 W per port (copper to optical cabling), with maximums up to 14.9 W, depending on configuration and cabling type. This represents up to 60% lower power usage than equivalent InfiniBand EDR solutions, primarily due to reduced complexity in error correction and routing mechanisms. A key efficiency metric is power per gigabit per second (Gbps), calculated as total port power divided by bidirectional bandwidth. For early OPA generations, this yields approximately 0.037–0.053 W/Gbps for HFAs, enabling sustained performance in energy-constrained environments. Under Cornelis Networks' CN5000 series, power efficiency improves further, targeting sub-20 W for 400 Gbps adapters: specifically 15 W typical for single-port and 19 W for dual-port configurations without optics. This equates to roughly 0.019–0.025 W/Gbps bidirectionally, supporting denser deployments without proportional power increases. Switches in the CN5000 lineup, such as the 48-port edge model, consume around 710 W typical without optics, scaling to higher figures with optical transceivers but maintaining efficiency through integrated cooling options such as air or liquid. Scalability in Omni-Path extends to large clusters via hierarchical director-class switches, supporting up to approximately 27,600 nodes in large fabrics, with proposals for even larger clusters exceeding 100,000 nodes, while incorporating fabric telemetry for real-time health monitoring, including signal integrity and thermal data. Adaptive routing algorithms dynamically select least-congested paths, enhancing efficiency in common topologies like fat-tree or dragonfly by mitigating hotspots and ensuring balanced load distribution. This congestion management preserves low latency and high throughput at scale, critical for exascale systems where interconnects must fit within tight overall power budgets of 20–30 MW. In energy-efficient computing, Omni-Path helps minimize environmental impact by optimizing power budgets; for instance, deployments like TSUBAME 3.0 achieved 14.1 gigaflops per watt on the Green500 list (June 2017), aided by OPA's reduced thermal overhead and reliable scaling. Such efficiency supports sustainable growth by lowering operational demands without sacrificing node count or performance.
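
The watts-per-gigabit figures above follow directly from dividing typical port power by bidirectional bandwidth; for example, using the OPA 100 adapter range and the CN5000 single-port figure quoted in this section:

    \[
    \frac{7.4\,\mathrm{W}}{2 \times 100\,\mathrm{Gbps}} \approx 0.037\ \mathrm{W/Gbps},\qquad
    \frac{10.6\,\mathrm{W}}{2 \times 100\,\mathrm{Gbps}} \approx 0.053\ \mathrm{W/Gbps},\qquad
    \frac{15\,\mathrm{W}}{2 \times 400\,\mathrm{Gbps}} \approx 0.019\ \mathrm{W/Gbps}.
    \]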

Applications

High-Performance Computing

Omni-Path Architecture has been integrated into several high-ranking systems on the TOP500 list of supercomputers, particularly during Intel's stewardship prior to 2021, when it served as the leading 100 Gbps fabric for HPC deployments. Clusters from vendors like Cray (now part of HPE), such as the Nurion system at the Korea Institute of Science and Technology Information, utilized Omni-Path interconnects to achieve scalable performance in large-scale computing environments. Intel's own testbeds and development platforms further demonstrated its efficacy in prototyping TOP500-caliber systems, enabling efficient scaling for scientific workloads. The architecture is optimized for Message Passing Interface (MPI)-based simulations common in traditional HPC, providing reliable transport for distributed parallel computing tasks such as seismic modeling. In benchmarks using the SeisSol application for earthquake simulations, Omni-Path delivered up to 1.53 times better price-performance compared to InfiniBand, highlighting its efficiency in handling compute-intensive, latency-sensitive workloads. Omni-Path's fabric benefits include low-jitter messaging through link-level traffic flow optimization, which minimizes variability in communication delays for high-priority operations. This design supports clusters of up to 2,048 nodes without performance degradation, ensuring consistent throughput in tightly coupled environments. Deployments in U.S. national laboratories during the Intel era, such as a Lawrence Livermore National Laboratory cluster with 2,604 compute nodes, leveraged Omni-Path for physics simulations and climate modeling applications, enabling high-fidelity representations of complex environmental and material behaviors. The software stack integrates with MPI implementations like MVAPICH, which provides native support via the PSM2 interface for optimized point-to-point and collective operations, and with Intel MPI for streamlined HPC orchestration across Omni-Path fabrics.

Artificial Intelligence and Machine Learning

The CN5000 series from Cornelis Networks introduces key optimizations for AI workloads, particularly in model training, by leveraging the Omni-Path architecture's lossless, congestion-free design to accelerate collective operations such as all-reduce in distributed deep learning environments. This enhancement addresses communication bottlenecks in large-scale training, where synchronizing gradients across thousands of nodes is critical, delivering up to 6X faster collective communication compared to RDMA over Converged Ethernet (RoCE) implementations. The architecture's adaptive routing and credit-based flow control ensure predictable low-latency performance, enabling more efficient scaling of models without the slowdowns common in congested networks. Omni-Path integrates into machine learning fabrics through its support for remote direct memory access (RDMA), which facilitates direct GPU-to-GPU data transfers in frameworks like TensorFlow and PyTorch, significantly reducing communication overhead in distributed setups. Libraries such as MVAPICH2, optimized for Omni-Path, enable CUDA-aware MPI operations that overlap computation and communication, minimizing idle time for GPUs during multi-node training. This RDMA capability bypasses CPU involvement, allowing higher throughput in gradient exchanges and parameter synchronization, which is essential for scaling training across clusters. In 2025, Omni-Path deployments gained prominence in government AI scaling initiatives, including the U.S. Department of Energy's Lynx cluster at Lawrence Livermore National Laboratory, where the CN5000 fabric supports mission-critical workloads involving real-time inference for large language models. These systems achieve near-linear scaling for training trillion-parameter models while enabling low-latency inference pipelines that process dynamic queries in agentic AI frameworks. Such applications benefit from Omni-Path's ability to handle extended context windows in LLMs without performance degradation. Performance benchmarks highlight Omni-Path's advantages in AI-driven seismic applications, with the architecture providing up to 53% better price-performance than InfiniBand HDR when running SeisSol, optimizing resource utilization for hybrid simulation-ML workflows. This efficiency stems from reduced overhead in data-intensive operations, making it suitable for integrating machine learning models with geophysical datasets. Omni-Path's zero-congestion networking excels at managing the bursty traffic patterns inherent to AI workloads, such as irregular data flows during training on exabyte-scale data sets, ensuring consistent throughput without retransmissions. The fabric's patented congestion management sustains high message rates across massive clusters, supporting the irregular access patterns of large-scale data ingestion and model updates in AI pipelines.
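
The all-reduce pattern central to distributed training maps directly onto MPI collectives, which is one way fabric-level gains of the kind described above reach application frameworks. The sketch below sums a toy gradient buffer across ranks with MPI_Allreduce; the buffer size and contents are illustrative, and production frameworks typically invoke MPI- or NCCL-backed collectives internally rather than code like this.

    /* Toy gradient synchronization via MPI_Allreduce: every rank contributes a
     * local gradient buffer and receives the element-wise sum, the same
     * communication pattern data-parallel training performs each step.
     * Build: mpicc allreduce.c -o allreduce ; Run: mpirun -np 4 ./allreduce */
    #include <stdio.h>
    #include <mpi.h>

    #define NGRAD 1024  /* illustrative gradient length */

    int main(int argc, char **argv)
    {
        int rank, size;
        float local[NGRAD], summed[NGRAD];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        for (int i = 0; i < NGRAD; i++)
            local[i] = (float)rank;             /* stand-in for computed gradients */

        /* Element-wise sum across all ranks; dividing by size would yield the
         * mean gradient used for a data-parallel parameter update. */
        MPI_Allreduce(local, summed, NGRAD, MPI_FLOAT, MPI_SUM, MPI_COMM_WORLD);

        if (rank == 0)
            printf("summed[0]=%.1f over %d ranks (expected %.1f)\n",
                   summed[0], size, (float)(size * (size - 1)) / 2.0f);

        MPI_Finalize();
        return 0;
    }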

Comparisons with Other Technologies

Versus InfiniBand

Omni-Path and InfiniBand are both lossless interconnect technologies designed for high-performance computing (HPC) and artificial intelligence (AI) environments, but they differ in protocol complexity and feature sets. Omni-Path employs a simpler, on-load protocol design compared to InfiniBand, which includes more advanced hardware offloads for functions like remote direct memory access (RDMA) and atomic operations. While both support GPUDirect RDMA for direct GPU-to-network transfers, Omni-Path's design prioritizes reduced overhead in certain messaging patterns, though it historically exhibited higher latency in computational fluid dynamics (CFD) workloads, making it slower than InfiniBand in those scenarios. Recent advancements in the Cornelis CN5000 series, however, demonstrate up to 35% lower latency and 2x higher message rates than InfiniBand NDR in select HPC simulations. Economically, Omni-Path offers a compelling alternative through lower costs, with host fabric interfaces (HFIs) priced around $880 per 100 Gb/s adapter under Cornelis, often 30-50% less than comparable InfiniBand components, leading to up to 53% better price-performance in benchmarks like SeisSol. Despite this, InfiniBand maintains a larger ecosystem, bolstered by NVIDIA's In-Network Computing features such as SHARP for collective operations, which enhance scalability in AI training clusters. Omni-Path's smaller ecosystem limits its software optimizations and vendor support, positioning it as a niche option for budget-conscious deployments. In terms of bandwidth, both technologies offer 400 Gb/s per port as of 2025, with Omni-Path's CN5000 series matching InfiniBand's NDR speeds while emphasizing power efficiency. Omni-Path adapters consume approximately 7-15 W per port depending on cabling, translating to lower watts per Gb/s than earlier generations, though InfiniBand has improved its power management in NDR implementations. This focus on efficiency makes Omni-Path suitable for dense, power-constrained fabrics. Interoperability between Omni-Path and InfiniBand is limited without gateways, as Omni-Path relies on the OpenFabrics Interfaces (OFI) stack with Performance Scaled Messaging 2 (PSM2) for its host interface, contrasting with InfiniBand's native Verbs API. Cornelis provides gateways to bridge Omni-Path fabrics to InfiniBand or Ethernet networks, but native cross-fabric communication remains unsupported, requiring application-level adaptations. Overall, Omni-Path is marketed as a cost-effective alternative for price-sensitive HPC and AI applications, particularly where price-performance outweighs ecosystem breadth.

Versus Ethernet

Omni-Path employs a lossless fabric utilizing credit-based flow control to prevent packet drops, contrasting with Ethernet's inherently lossy, best-effort model that relies on routing protocols for path selection. This non-routable design in Omni-Path enables a flat topology optimized for high-performance computing (HPC), achieving sub-microsecond end-to-end latencies, such as approximately 0.6–1 μs in typical configurations, compared to Ethernet-based RDMA solutions like RoCE, which often exhibit 5–10 μs latencies due to additional protocol layers and potential congestion. In terms of RDMA implementation, Omni-Path provides native support through its verbs interface without the encapsulation overhead of RoCEv2, which requires UDP/IP headers and can introduce additional processing latency on Ethernet, though RoCE benefits from broader hardware offload compatibility across commodity Ethernet infrastructure. Ethernet's routable nature supports seamless integration in diverse environments, including Ultra Ethernet enhancements for AI workloads, while Omni-Path's specialized fabric avoids such overheads but remains proprietary. Omni-Path demonstrates superior scalability in flat fabrics, supporting clusters exceeding 100,000 nodes without the hierarchical layers that can introduce congestion and added latency in Ethernet networks, enabling consistent low-latency performance at extreme scales. Ethernet, while versatile, often requires multi-tier designs that introduce bottlenecks in large-scale HPC deployments. Use cases diverge accordingly: Ethernet dominates in cloud and general-purpose data centers due to its ubiquity and cost-effectiveness for routed, multi-tenant environments, whereas Omni-Path is preferred for performance-critical AI and HPC applications where lossless operation and minimal jitter are essential to avoid application slowdowns. As of 2025, Omni-Path's development roadmap, led by Cornelis Networks, includes 400 Gbps and future 800 Gbps offerings that align with Ultra Ethernet Consortium standards for enhanced AI/HPC features, while maintaining proprietary optimizations in power efficiency, such as lower per-port consumption compared to routed Ethernet alternatives in dense fabrics.

Current Status and Future Outlook

Market Adoption

Omni-Path, initially developed by Intel, achieved its peak adoption in the TOP500 during 2018, appearing in 38 systems on the list that June, representing approximately 7.6% of the ranked supercomputers. This growth followed its 2016 release and wide competitive evaluations for cost-effective fabrics, but adoption began to decline after 2019 amid Intel's strategic shifts away from specialized interconnects. By the early 2020s, its presence in TOP500 systems had significantly diminished following Intel's sale of the Omni-Path division to Cornelis Networks in 2020, reflecting a contraction in market momentum. Under Cornelis Networks, Omni-Path has seen renewed deployments in 2025, particularly in U.S. government and AI sectors, with the CN5000 platform powering the 'Lynx' cluster for the National Nuclear Security Administration (NNSA) at Lawrence Livermore National Laboratory to handle advanced simulation and AI workloads. The technology was also showcased at SC25, highlighting its integration into AI and HPC environments for scalable performance. These installations underscore a focus on mission-critical applications where low-latency fabrics provide reliability for national security and research priorities. The vendor ecosystem supporting Omni-Path includes partnerships with major OEMs such as Lenovo for integrated solutions like the EveryScale platform, alongside legacy integrations from HPE and Dell in HPC systems. In the broader HPC and AI interconnect market, Omni-Path holds an estimated 3.5% share as of late 2024, with potential growth in cost-sensitive segments driven by its price-performance advantages. Key challenges for Omni-Path include its smaller developer community compared to the dominant InfiniBand ecosystem, which limits software optimizations and third-party support in large-scale deployments. However, recent advancements like the 400 Gbps CN5000 upgrades are fostering growth in AI applications by offering competitive throughput for scale-out clusters. As of 2025, Omni-Path's installed base centers on targeted HPC and AI environments, with deployments spanning thousands of nodes across U.S. government labs and AI-focused firms, emphasizing efficient fabrics for distributed workloads over expansive general-purpose data centers.

Ongoing Developments

Cornelis Networks has outlined an ambitious roadmap for Omni-Path beyond 2025, with the CN6000 family slated for release in 2026 featuring 800 Gbps bandwidth to support escalating demands in AI and high-performance computing (HPC) environments. On November 18, 2025, at SC25, Cornelis unveiled the CN6000 as the first SuperNIC integrating Omni-Path with Ethernet RoCEv2 and Ultra Ethernet for enhanced performance, flexibility, and scalability. This next-generation platform emphasizes deeper AI integrations, including optimized support for real-time inference and model training, enabling faster AI workflows and improved return on investment (ROI) through reduced latency and enhanced scalability. Additionally, the architecture incorporates liquid-cooled fabric options, building on the CN5000's director switch designs to manage thermal challenges in dense, high-power AI/HPC deployments. Key innovations include advanced telemetry capabilities integrated into the fabric for improved AI orchestration, allowing real-time monitoring and adaptive routing to minimize congestion in large-scale AI clusters. Hybrid Ethernet-Omni-Path bridges, facilitated through Omni-Path Express Gateways and the CN6000's unification of Omni-Path with RoCE-enabled Ethernet, enable interoperability between the two networking paradigms, supporting cross-fabric data flows for hybrid AI infrastructures. Looking ahead, Cornelis faces challenges in expanding the Omni-Path ecosystem to compete more effectively with NVIDIA's dominance, particularly by broadening partner integrations and middleware support for AI/HPC applications. The company is exploring potential open-sourcing of additional management tools within the Omni-Path Express (OPX) software suite, which already provides open-source fabric management to foster community-driven enhancements and wider adoption. Strategically, Omni-Path is positioning itself for exascale and zettabyte-era computing by prioritizing lossless, scalable fabrics that deliver high ROI for inference workloads, such as those serving large language models, through efficient resource utilization and predictable performance. To encourage broader adoption, Cornelis is aligning Omni-Path with Ultra Ethernet Consortium (UEC) standards, as seen in the CN7000's planned 1.6 Tbps integration and the CN5000's compatibility with emerging UEC requirements for congestion-aware transport in AI networks.
