Understanding InfiniBand Cables: Types, Speeds, Connectors, and Best Practices (2025)

Author: Network Switches IT Hardware Experts (https://network-switch.com/pages/about-us)

Introduction

In high-performance computing (HPC), artificial intelligence (AI), and data-intensive applications, bandwidth and latency can make or break performance. Standard Ethernet has come a long way, but when workloads demand microsecond latencies, InfiniBand (IB) often becomes the interconnect of choice.

InfiniBand provides a purpose-built, low-latency, lossless fabric with Remote Direct Memory Access (RDMA), efficient congestion control, and the ability to scale clusters seamlessly. But for architects and engineers, one question comes up repeatedly: what cable should I buy?

This guide explains InfiniBand cables in detail, covering generations (EDR, HDR, NDR), connectors (QSFP56, OSFP, QSFP112), cable types (DAC, AOC, optics), distances, and practical buying tips. Whether you’re wiring an HPC cluster, an AI training pod, or a data center fabric, this article will help you choose wisely.

[Image: Understanding InfiniBand cables]

InfiniBand in One Minute

What is InfiniBand?

InfiniBand is a high-speed, low-latency interconnect architecture. Unlike general-purpose Ethernet, it’s designed to:

  • Deliver ultra-low latency with RDMA.
  • Scale modularly into large fabrics for supercomputers and GPU clusters.
  • Support collective operations critical to AI training and HPC workloads.
  • Achieve high utilization with credit-based flow control and lossless transmission.

It is now standard in top supercomputers, AI infrastructure, financial trading, and data analytics clusters where every microsecond counts.

Speed Generations

InfiniBand speeds are defined per lane and then multiplied by the number of lanes (link width). Most modern deployments use 4 lanes (4x), carried in QSFP-family connectors.

InfiniBand Generations, Signaling, and 4x Rates

| Generation | Per-lane signaling | Modulation / encoding | 4x link rate | Typical connector(s) |
|---|---|---|---|---|
| SDR | 2.5 Gbps | NRZ, 8b/10b | 10 Gbps | Legacy |
| DDR | 5 Gbps | NRZ, 8b/10b | 20 Gbps | Legacy |
| QDR | 10 Gbps | NRZ, 8b/10b | 40 Gbps | QSFP |
| FDR10 | ~10.3 Gbps | NRZ, 64b/66b | 40 Gbps | QSFP+ |
| FDR | 14.06 Gbps | NRZ, 64b/66b | 56 Gbps | QSFP+ |
| EDR | 25.78 Gbps | NRZ, 64b/66b | 100 Gbps | QSFP28 |
| HDR | 53.125 Gbps | PAM4 | 200 Gbps | QSFP56 |
| NDR | 106.25 Gbps | PAM4 | 400 Gbps | OSFP (switches), QSFP112 (NICs/DPUs) |

  • HDR (200G) doubled per-lane signaling to ~50G PAM4, using QSFP56.
  • NDR (400G) doubles again to 100G PAM4 per lane, typically using OSFP twin-port cages on switches and QSFP112 on NICs/DPUs.
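As a quick check on the table, the 4x link rates are simply the per-lane signaling rate multiplied by the link width; the marketed rates (100/200/400G) are the data rates left after encoding and FEC overhead. A minimal Python sketch using the per-lane figures above:

```python
# Per-lane signaling rates in Gbps, taken from the table above.
PER_LANE_GBPS = {
    "EDR": 25.78125,   # NRZ, 64b/66b
    "HDR": 53.125,     # PAM4
    "NDR": 106.25,     # PAM4
}

def link_signaling_gbps(generation: str, lanes: int = 4) -> float:
    """Raw link signaling rate = per-lane rate x link width (4x by default)."""
    return PER_LANE_GBPS[generation] * lanes

for gen in PER_LANE_GBPS:
    print(f"4x {gen}: {link_signaling_gbps(gen):.1f} Gbps raw signaling")
# Output: 103.1 / 212.5 / 425.0 Gbps raw signaling, which correspond to the
# familiar 100G / 200G / 400G data rates once encoding/FEC overhead is removed.
```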

Connectors and Port Form Factors

HDR vs NDR

HDR (200G InfiniBand)

  • Connector: QSFP56 (backward-compatible with QSFP28).
  • Found on HDR switches, ConnectX-6 adapters, and BlueField-2 DPUs.
  • Cable ecosystem: DAC (≤2.5 m), AOC (≤100 m), multimode (≤100 m), single-mode (≤2 km).

NDR (400G InfiniBand)

  • Switches: Quantum-2 platforms use twin-port OSFP cages.
  • NICs/DPUs: ConnectX-7 (OSFP/QSFP112), BlueField-3 (QSFP112).
  • Cables: OSFP↔OSFP AOCs, OSFP↔QSFP112 harnesses, QSFP112 DAC/AOC (short to medium reach).

Takeaway: Always confirm whether your device has OSFP or QSFP112/QSFP56, since cables are not cross-compatible.
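One practical way to enforce this is to encode the expected form factors in your inventory or validation tooling. A minimal sketch, assuming the HDR/NDR pairings described above (the table and function names are illustrative, not from any vendor library):

```python
# Expected port form factor by InfiniBand generation and device role,
# per the pairings above: HDR uses QSFP56 on both switches and NICs,
# while NDR splits into OSFP on switches and QSFP112 on NICs/DPUs.
EXPECTED_FORM_FACTOR = {
    ("HDR", "switch"): "QSFP56",
    ("HDR", "nic"):    "QSFP56",
    ("NDR", "switch"): "OSFP",
    ("NDR", "nic"):    "QSFP112",
}

def required_cable_ends(generation: str, end_a: str, end_b: str) -> tuple[str, str]:
    """Return the connector pair a cable needs between two device roles."""
    return (EXPECTED_FORM_FACTOR[(generation, end_a)],
            EXPECTED_FORM_FACTOR[(generation, end_b)])

# Example: an NDR switch-to-NIC link needs an OSFP <-> QSFP112 cable,
# while an HDR switch-to-NIC link is QSFP56 on both ends.
print(required_cable_ends("NDR", "switch", "nic"))   # ('OSFP', 'QSFP112')
print(required_cable_ends("HDR", "switch", "nic"))   # ('QSFP56', 'QSFP56')
```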

Cable Families and Realistic Distances

InfiniBand cable options fall into three main categories:

InfiniBand Cable Types, Reach, and Use Cases

| Cable family | Typical reach (HDR/NDR) | Pros | Cons | Best use |
|---|---|---|---|---|
| DAC (passive twinax) | ≤2–2.5 m @200G; very short at 400G | Cheapest, lowest latency, no power | Very short, bulky, bend limits | In-rack, same-chassis or adjacent ports |
| ACC (active copper) | Slightly longer than DAC (platform-specific) | Extends copper a bit further | Costlier than DAC, niche | Rarely used; transitional |
| AOC (active optical) | 3–100 m | Light, flexible, plug-and-play | Higher cost/power than DAC | Row-to-row, dense GPU pods |
| Transceivers (MM/SM) | MM ~100 m; SM ~2 km | Longest reach, scalable | Most expensive, adds optics management | Row-to-row, aggregation, DCI within campus |

  • DACs are the workhorse inside a rack.
  • AOCs dominate row-to-row AI training clusters where airflow and cable bulk matter.
  • Multimode/single-mode optics are for longer rows or building-to-building links.

HDR vs NDR

  • Signaling: HDR = 50G PAM4 per lane; NDR = 100G PAM4 per lane.
  • Connector fit: HDR = QSFP56; NDR = OSFP (switch) and QSFP112 (NIC/DPU).
  • Cable options: HDR widely supports DAC, AOC, and MM/SM optics. NDR mostly ships as OSFP AOC/optics, with QSFP112 DAC/AOC for NICs.
  • Airflow & density: NDR’s OSFP cages increase power draw and require more careful cooling.
  • Breakouts: Some NDR OSFP ports support 2×200G HDR breakout modes.
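On the last point, the breakout arithmetic is worth spelling out. The sketch below assumes a Quantum-2-class switch with 32 twin-port OSFP cages; check your specific switch's datasheet for the cage count and supported breakout modes:

```python
# Port-count arithmetic for a twin-port OSFP NDR switch (assumed here:
# 32 twin-port OSFP cages, as on a Quantum-2-class chassis).
CAGES = 32
NDR_PORTS_PER_CAGE = 2        # each twin-port OSFP cage carries two 400G NDR ports
HDR_PORTS_PER_NDR_PORT = 2    # a 400G NDR port can break out to 2 x 200G HDR

ndr_ports = CAGES * NDR_PORTS_PER_CAGE
hdr_ports = ndr_ports * HDR_PORTS_PER_NDR_PORT
print(f"{ndr_ports} x 400G NDR ports, or up to {hdr_ports} x 200G HDR with breakout")
# -> 64 x 400G NDR ports, or up to 128 x 200G HDR with breakout
```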

InfiniBand vs Ethernet

  • Throughput: Both ecosystems now ship 200/400/800G. Bandwidth alone isn’t the differentiator.
  • Latency: InfiniBand fabrics deliver sub-microsecond latency, while Ethernet adds stack overhead (even with RoCEv2).
  • RDMA and collectives: IB natively supports hardware collective operations and lossless RDMA, critical for AI/HPC. Ethernet needs added QoS/lossless mechanisms.
  • Fabric management: InfiniBand includes built-in subnet managers and fabric collectives; Ethernet leans on external SDN/automation.

Bottom line: Use Ethernet for general-purpose networking; use InfiniBand where latency, determinism, and collective scaling matter most.

How to Choose the Right InfiniBand Cable?

Step 1: Match the Generation

  • HDR (200G): QSFP56.
  • NDR (400G): OSFP on switches, QSFP112 on NICs/DPUs.

Step 2: Measure Distance

  • ≤2 m: DAC.
  • 3–100 m: AOC.
  • 100 m–2 km: MM/SM optics.

Step 3: Check Form Factor

  • Confirm OSFP vs QSFP. Order exact part numbers (e.g., NVIDIA LinkX).

Step 4: Consider Environment

  • Bend radius in dense racks.
  • Shielding in noisy environments.
  • Airflow in GPU-dense clusters.

Step 5: Budget & Power

  • DAC is cheapest, AOC midrange, optics highest cost but longest reach.
[Image: How to choose the right InfiniBand cable]
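
Putting the steps together, the distance decision (Step 2) can be captured in a small helper. A rough sketch using the ballpark reach figures from this article (real limits depend on the exact part, so treat the output as a starting point, not a bill of materials); combine it with the form-factor check from the earlier sketch for Steps 1 and 3:

```python
# Rough distance-to-cable-family chooser based on Step 2. Reach thresholds
# are the ballpark figures quoted in this article, not datasheet limits.
def recommend_family(distance_m: float) -> str:
    """Suggest a cable family for a planned run length in metres."""
    if distance_m <= 2:
        return "DAC (passive copper): cheapest, lowest latency, in-rack only"
    if distance_m <= 100:
        return "AOC or multimode optics: row-to-row, lighter and more flexible"
    if distance_m <= 2000:
        return "Single-mode optics: longest reach, highest cost and power"
    return "Beyond typical InfiniBand cable reach; revisit the topology"

for run_m in (1.5, 30, 500):
    print(f"{run_m} m -> {recommend_family(run_m)}")
```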

Applications of InfiniBand Cables

High-Performance Computing (HPC)

  • Cluster interconnect for climate modeling, molecular dynamics, genomics.
  • Requires high throughput and deterministic latency.

AI Training & Inference

  • GPU-to-GPU and GPU-to-storage fabrics in training pods.
  • Collective operations accelerate deep learning frameworks.
  • NDR cables (400G) are rapidly being adopted for AI superpods.

Data Centers

  • Storage backends (NVMe over Fabrics with InfiniBand).
  • Low-latency east-west fabrics in large cloud/edge deployments.
  • High resilience with fabric redundancy.

FAQs

Q1: What does “4x” mean in InfiniBand cables?
A: It means four lanes per direction. A “4x HDR” cable carries 4×50G PAM4 = 200G. A “4x NDR” cable carries 4×100G PAM4 = 400G.

Q2: What are typical cable reaches?
A: DAC: ≤2–2.5 m. AOC: 3–100 m. MM optics: ~100 m. SM optics: ~2 km.

Q3: What connectors are used?
A: HDR uses QSFP56. NDR uses OSFP (switches) and QSFP112 (NICs/DPUs).

Q4: Can InfiniBand cables plug into Ethernet ports?
A: No. Even if the form factors match, the protocols are not interoperable. Use them only between InfiniBand devices.

Q5: Is active copper still used?
A: Rarely. Active copper (ACC) exists but is niche; most deployments use DAC (short) or AOC/optics (longer).

Conclusion

InfiniBand cables aren’t just wires; they’re critical enablers of the world’s fastest computing fabrics. From DACs inside racks to AOCs across rows and OSFP optics linking AI superpods, the right choice impacts both performance and reliability.

If you want pre-tested, vendor-coded InfiniBand cables and transceivers, or help planning HDR vs NDR interconnects, visit Network-Switch.com. We’re an authorized distributor of Cisco, Huawei, and Ruijie, and we manufacture our own fiber cables and optical modules. With our CCIE, HCIE, and RCNP engineers, we can help you build InfiniBand fabrics that deliver the throughput and latency your workloads demand.

Did this article help you? Tell us on Facebook and LinkedIn. We’d love to hear from you!
