
NS-S6900 Family: 100G/200G/400G Core & Fabric for AI-Ready Data Centers

Bob Lam
Senior Engineer, network-switch.com

Hello, my name is Bob, and I am a Senior Engineer with the Technical Services team at network-switch.com. I am also a certified Cisco CCIE and HCIE engineer, which reflects my expertise in networking and my dedication to delivering high-quality technical solutions. I specialize in advanced network configurations, troubleshooting, and expert-level support to ensure seamless network operations.

AI training clusters, NVMe/TCP storage, and container build storms are pushing spine-leaf networks well past 10/25G. The NS-S6900 family from Network-Switch gives you the same-class building blocks you expect at the apex of the fabric: Data Center Switches designed for 100G, 200G, and 400G uplinks, but delivered as your product. We customize the faceplate logo, exterior colorway, labels/packaging, and even the day-0 software (VLAN plan, QoS/AAA, Syslog/SNMP, LACP/M-LAG templates). Every rack boots your standard, with no guesswork.
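
As a rough sketch of what shipping day-0 software means in practice, the snippet below renders a startup configuration from site variables. The CLI syntax and variable names are illustrative assumptions, not the actual NS-S6900 command set:

    # Minimal day-0 config renderer (illustrative; CLI syntax is an
    # assumption, not the actual NS-S6900 command set).
    SITE = {
        "hostname": "spine-01",
        "vlans": {10: "GPU", 20: "Storage", 30: "Mgmt"},  # example VLAN plan
        "syslog_target": "10.0.30.5",
        "snmp_user": "netops",
    }

    def render_day0(site: dict) -> str:
        lines = [f"hostname {site['hostname']}"]
        for vid, name in sorted(site["vlans"].items()):
            lines += [f"vlan {vid}", f" name {name}"]
        lines.append(f"logging host {site['syslog_target']}")
        lines.append(f"snmp-server user {site['snmp_user']} v3")
        return "\n".join(lines)

    print(render_day0(SITE))

The point of factory pre-configuration is that this rendering happens before shipment, so every unit arrives with the same validated baseline.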

The portfolio spans fixed 400G/200G boxes for high-radix spines and 2–4-slot Modular Switches for gateway/DCI roles. Published series pages confirm:

  • RG-S6990-128QC2XS-class hardware with 128 × 400G QSFP112 ports (downshiftable to 200G/100G), 102.4 Tbps switching and 21,000 Mpps forwarding.
  • RG-S6980-64QC with 64 × 400G QSFP-DD, 51.2 Tbps and 10,300 Mpps, plus ZR/ZR+ optics for up to ~120 km DCI.
  • RG-S6980-128DC with 128 × 200G QSFP56 (supports 100G/200G), also 51.2 Tbps / 10,300 Mpps.
  • RG-S6930-2C modular core: up to 144 × 100G or 72 × 200G in 4RU, 28.8 Tbps / 5,400 Mpps.
  • RG-S6920-4C modular: up to 128 × 100G or 64 × 100G + 16 × 400G.
  • RG-S6910-3C modular: up to 96 × 100G or 24 × 400G, 4RU, 2+2 PSU & 5+1 fan redundancy.
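
The headline capacity figures above follow directly from port count and per-port speed, counting both directions of full duplex. A few lines of Python make the arithmetic explicit:

    # Switching capacity = ports × speed × 2 (full duplex counts both directions).
    def capacity_tbps(ports: int, gbps: int) -> float:
        return ports * gbps * 2 / 1000

    print(capacity_tbps(128, 400))  # 102.4 — matches the S6990-class figure
    print(capacity_tbps(64, 400))   # 51.2  — matches the S6980-64QC figure
    print(capacity_tbps(128, 200))  # 51.2  — matches the S6980-128DC figure
    print(capacity_tbps(72, 200))   # 28.8  — matches the S6930-2C figure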

Product Overview

  • Fixed 400G/200G core & spine
    Choose NS-S6990 Switches when you need the highest radix at the core: 128 × 400G QSFP112 ports in 4RU with downshift to 200G/100G per port, lossless features for RoCEv2, and telemetry for AI fabrics.
    For slightly smaller high-speed domains, NS-S6980 Switches offer either 64 × 400G (QSFP-DD) or 128 × 200G (QSFP56) in 4RU, both with line-rate forwarding.
  • Modular DC gateway & aggregation
    NS-S6930 Switches (2-slot) scale to 144 × 100G or 72 × 200G, a compact Spine Switch/DC Switch for DCI or campus-core roles.
    NS-S6920 Switches (4-slot) flex for mixed 100G/400G migrations.
    NS-S6910 Switches (3-slot) cover 100G today with a path to 24 × 400G as needed.
  • Fabric & operations
    Across the class you’ll find EVPN-VXLAN, PFC/ECN for lossless RDMA, rich QoS, MLAG, and real-time telemetry: exactly what modern AIOps and high-fan-in designs demand.

Model Lineup at a Glance

Each model below is summarized by port geometry, fabric performance, redundancy, typical roles, and notes.

  • NS-S6990-128QC2XS. Ports: 128 × 100G/200G/400G (QSFP112), plus 2 × 10G SFP+ management. Performance: 102.4 Tbps / 21,000 Mpps. Redundancy: 2+2 PSU, 4+1 fans. Typical roles: high-radix Spine Switch, AI training pods, DCI. Notes: the official page lists per-port speeds, RDMA-friendly features, and telemetry.
  • NS-S6980-64QC. Ports: 64 × 100G/200G/400G (QSFP-DD). Performance: 51.2 Tbps / 10,300 Mpps. Redundancy: 2+2 PSU, 7+1 fans. Typical roles: 400G core/spine with fewer but fatter links. Notes: supports 400G ZR/ZR+ for ~120 km interconnect.
  • NS-S6980-128DC. Ports: 128 × 100G/200G (QSFP56). Performance: 51.2 Tbps / 10,300 Mpps. Redundancy: 2+2 PSU, 7+1 fans. Typical roles: 200G leaf/aggregation with 100G downshift. Notes: strong fit for staged 100G→200G migrations.
  • NS-S6930-2C. Ports: 2 service slots; up to 144 × 100G or 72 × 200G. Performance: 28.8 Tbps / 5,400 Mpps. Redundancy: 2+2 PSU, 7+1 fans. Typical roles: compact Modular Switch for DC gateway/DCI. Notes: centralized chassis; large buffer noted.
  • NS-S6920-4C. Ports: 4 service slots; up to 128 × 100G or 64 × 100G + 16 × 400G. Redundancy: 2+2 PSU (class), hot-swap fans. Typical roles: flexible migration core with mixed 100G/400G. Notes: the datasheet details slot combinations and the 4RU form factor.
  • NS-S6910-3C. Ports: 3 service slots; up to 96 × 100G or 24 × 400G. Performance: 19.2 Tbps / 4,000 Mpps (class). Redundancy: 2+2 PSU, 5+1 fans. Typical roles: 100G core with a 400G growth path. Notes: 4RU chassis; 16 GB buffer cited in the datasheet.

What Can You Customize?

  • Exterior & labeling: faceplate logo, bezel color, rear/belly labels, carton and insert design.
  • Default config: VLAN plan, LACP/MLAG on fabric ports, MSTP edge guards, CoPP/NFPP/CPP, AAA order, SNMPv3/Syslog targets, login banners.
  • Port-role templates: “Server,” “GPU,” “Storage,” “vHost,” “OOB”—with ACL/QoS pre-applied.
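
A port-role template is easiest to picture as a role-to-defaults mapping. The Python sketch below is illustrative only; the role names, VLAN numbers, and CLI stanzas are assumptions, not vendor syntax:

    # Illustrative port-role templates: each role bundles VLAN, QoS, and ACL
    # defaults so a field tech assigns the role, never the individual knobs.
    PORT_ROLES = {
        "GPU":     {"vlan": 10, "qos": "lossless-roce", "acl": "permit-east-west"},
        "Storage": {"vlan": 20, "qos": "lossless-roce", "acl": "storage-only"},
        "Server":  {"vlan": 10, "qos": "default",       "acl": "standard"},
        "OOB":     {"vlan": 30, "qos": "default",       "acl": "mgmt-only"},
    }

    def config_for(port: str, role: str) -> str:
        t = PORT_ROLES[role]
        return (f"interface {port}\n"
                f" switchport access vlan {t['vlan']}\n"
                f" service-policy {t['qos']}\n"
                f" ip access-group {t['acl']} in")

    print(config_for("Ethernet1/1", "GPU"))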

Fabric Features & Why They Matter

  • Lossless Ethernet for AI/Storage. PFC + ECN and RDMA-friendly scheduling are documented across the 200G/400G class, enabling zero-loss east-west traffic in RoCEv2 clusters (see the sketch after this list).
  • Overlay at scale. EVPN-VXLAN gives you L2 extension and any-to-any L3 gateways right at the leaf/spine, so policy follows workloads. (Listed in software specs on the fixed-400G pages.)
  • Telemetry & AI-tuned O&M. Real-time streaming, AI ECN tuning, INT/MOD, and congestion analytics let you prove SLA and catch incast hot spots quickly.
  • Serviceability. Hot-swap PSUs/fans with 1+1/2+2 and 4+1/7+1 schemes keep maintenance windows short; all documented on model pages.
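
To make the PFC/ECN bullet concrete, here is a simplified WRED-style ECN marking decision. The thresholds and probabilities are illustrative values, not NS-S6900 defaults:

    # Simplified ECN marking decision, WRED-style (thresholds are
    # illustrative): between min and max queue depth, marking probability
    # ramps linearly; above max, every packet is marked.
    import random

    def ecn_mark(queue_depth_kb: float, min_kb: float = 100,
                 max_kb: float = 400, max_prob: float = 0.2) -> bool:
        if queue_depth_kb <= min_kb:
            return False
        if queue_depth_kb >= max_kb:
            return True
        p = max_prob * (queue_depth_kb - min_kb) / (max_kb - min_kb)
        return random.random() < p

    # Prints e.g. False False True (the middle value is probabilistic).
    print(ecn_mark(50), ecn_mark(250), ecn_mark(500))

An RoCEv2 sender that sees ECN marks slows down before the queue overflows, so PFC pause frames, the blunter tool, rarely have to trigger.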

Deployment Scenarios

400G spine for GPU training clusters (NS-S6990-128QC2XS)

Use 8 to 16 × 400G links per rack for high-bisection AI fabrics. The official page documents 128 × 400G ports with per-port downshift (100/200/400G), 102.4 Tbps, and 21,000 Mpps: ample headroom for mixed node sizes and future expansion. Enable MLAG for dual-active leaf pairs and rely on AI-powered ECN to blunt incast.
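
A quick sketch shows how the 128-port radix divides into pods; the per-rack link counts are the 8 and 16 from this scenario, and the rest is straightforward arithmetic:

    # Radix planning for a 128 × 400G spine (pod sizes are hypothetical).
    SPINE_PORTS = 128

    for links_per_rack in (8, 16):
        racks = SPINE_PORTS // links_per_rack
        bw_per_rack_tbps = links_per_rack * 400 / 1000
        print(f"{links_per_rack} × 400G per rack -> {racks} racks, "
              f"{bw_per_rack_tbps:.1f} Tbps/rack uplink")
    # 8 × 400G  -> 16 racks, 3.2 Tbps/rack
    # 16 × 400G -> 8 racks,  6.4 Tbps/rack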

200G leaf with 400G core (NS-S6980-128DC → NS-S6990)

Standardize 200G to the servers/accelerators while keeping 400G uplinks at the spine. The 128 × 200G QSFP56 fixed box gives dense leaf capacity now and a clean path to 400G later.
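
To reason about oversubscription during the staged migration, here is a small worked example. The 96/32 port split is a hypothetical allocation, not a published reference design:

    # Oversubscription for a hypothetical split of the 128 × 200G leaf:
    # 96 ports down to servers, 32 ports up to the 400G spine (run at 200G).
    down = 96 * 200   # Gbps toward servers/accelerators
    up   = 32 * 200   # Gbps toward the spine
    print(f"oversubscription = {down / up:.1f}:1")   # 3.0:1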

400G core + long-reach DCI (NS-S6980-64QC)

Build a compact 4RU 400G core and bridge sites (campus-to-DC) using 400G ZR/ZR+ optics (~120 km) without external DWDM shelves.

Modular gateway / aggregation (NS-S6930 / NS-S6920 / NS-S6910)

Pick NS-S6930 Switches for two fat line-card slots (up to 72×200G). Move to NS-S6920 Switches when you need four slots and a blend of 100G + 400G uplinks. Keep NS-S6910 Switches for 100G-heavy cores that want a measured path to 24 × 400G.

Optics & Cabling

Each port type below is summarized by recommended modules/cables, common use, and notes.

  • QSFP112 400G (NS-S6990). Modules/cables: 400G SR8/DR4/FR4/LR; DAC/AOC for short reach. Common use: spine interconnects, AI pods. Notes: per-port speeds of 100/200/400G are supported on this class.
  • QSFP-DD 400G (NS-S6980-64QC). Modules/cables: 400G ZR/ZR+ for ~120 km; SR8/DR4/FR4 for in-fabric links. Common use: campus-to-DC interconnect, spine uplinks. Notes: ZR/ZR+ reach and 64 × 400G in 4RU are documented.
  • QSFP56 200G (NS-S6980-128DC). Modules/cables: 200G SR4/FR4; breakout to 2 × 100G where supported. Common use: 200G leaf/aggregation. Notes: port groups support 100G/200G operation.
  • QSFP28 100G (modular line cards). Modules/cables: 100G SR4/DR/FR/LR; 4 × 25G breakout as needed. Common use: legacy 100G leaf/core. Notes: slot mixes on NS-S6920 allow 100G + 400G in one chassis.
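
As a small planning aid, the speed capabilities in the table above can be distilled into a lookup that validates a proposed link; this is a sketch over the table's contents, not a vendor compatibility tool:

    # Port-type speed lookup distilled from the table above; always confirm
    # against the actual optic and platform datasheets.
    PORTS = {
        "QSFP112": (100, 200, 400),
        "QSFP-DD": (100, 200, 400),
        "QSFP56":  (100, 200),
        "QSFP28":  (100,),
    }

    def link_ok(port_type: str, speed_gbps: int) -> bool:
        return speed_gbps in PORTS[port_type]

    print(link_ok("QSFP56", 200))   # True  — 200G leaf port
    print(link_ok("QSFP56", 400))   # False — needs QSFP112/QSFP-DD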

NS vs. Fixed-Brand Core/Spine Platforms

Each aspect below compares the NS-S6910/S6920/S6930/S6980/S6990 family, fixed-brand OEMs, and other third-party resellers.

  • Performance & ports. NS: same-class hardware at 100G/200G/400G, ZR/ZR+ on 400G, high Mpps/Tbps figures. OEM: OEM-grade, often tied to licensed optics/features. Third-party: mixed; may be re-badged or refurbished.
  • Fabric & overlays. NS: EVPN-VXLAN, PFC/ECN (RoCEv2), MLAG, telemetry/INT/MOD. OEM: similar features but tied to proprietary toolchains. Third-party: feature claims vary widely.
  • Customization. NS: exterior (logo) and labels; preloaded VLAN/QoS/AAA, LACP/MLAG, edge guards, ACL baselines. OEM: branding/defaults fixed; limited templating. Third-party: cosmetic only; configuration after arrival.
  • Serviceability. NS: hot-swap PSUs/fans; dual-image; modular options (2–4 slots). OEM: similar FRUs, but ecosystem-locked SKUs are common. Third-party: FRU sourcing inconsistent.
  • Pricing & TCO. NS: typically ~30% under OEM; ships ready to deploy. OEM: premium list price plus license add-ons. Third-party: slightly under OEM; fewer guarantees.
  • Lead time. NS: custom SKUs with factory pre-configuration and burn-in. OEM: standard SKUs. Third-party: variable, with long tails.

Day-0 to Day-2 Operations

  • Zero-touch day-0. We ship your image/config: VLAN plan (e.g., 10-GPU / 20-Storage / 30-Mgmt), LACP/MLAG on fabric ports, STP edge guards, CoPP/NFPP/CPP, SNMPv3/Syslog, and port roles, so field techs just patch and power (see the validation sketch after this list).
  • Real-time visibility. Streaming telemetry, sFlow/INT/MOD, and optical-power readouts make validation and troubleshooting fast; AI ECN and congestion analytics surface hotspots early.
  • Resiliency. With 2+2 PSU and 4+1/7+1 fan schemes, maintenance is routine and low-risk. BFD/GR keep routing adjacencies up through events.
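
One simple day-0 validation is confirming from a jump host that the configured Syslog collector is reachable. The collector address below is an example, and UDP syslog gives no delivery guarantee, so confirm receipt on the collector side:

    # Send one RFC 3164-style test message over UDP/514 to the syslog
    # collector configured in the shipped baseline (address is an example).
    import socket

    COLLECTOR = ("10.0.30.5", 514)
    msg = "<134>day0-validation: test message from staging"  # local0.info

    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(msg.encode("ascii"), COLLECTOR)
    print("test syslog sent to", COLLECTOR)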

FAQs

Q1: Which NS-S6900 platforms provide true 400G?
A: NS-S6990-128QC2XS (128×400G QSFP112) and NS-S6980-64QC (64×400G QSFP-DD) are fixed 4RU 400G systems; both publish lossless and telemetry features.

Q2: Can 400G ports operate at lower speeds?
A: On NS-S6990-128QC2XS, service ports are specified as 100G/200G/400G; NS-S6980-64QC lists 100G/200G/400G on QSFP-DD.

Q3: I need dense 200G at the leaf; what fits?
A: NS-S6980-128DC exposes 128 × 200G (downshiftable to 100G) in 4RU with the same 51.2 Tbps/10,300 Mpps fabric numbers.

Q4: What are my modular choices for gateway/DCI?
A: NS-S6930-2C (2 slots) supports up to 72 × 200G; NS-S6920-4C (4 slots) mixes 100G and 400G line cards; NS-S6910-3C scales to 96 × 100G or 24 × 400G.

Q5: Do these platforms support EVPN-VXLAN and lossless RoCEv2?
A: Yes, software specs list EVPN-VXLAN, PFC/ECN, RDMA-friendly behaviors, and MLAG across the fixed 200G/400G pages.

Conclusion

Whether you want NS-S6990 Switches for maximum 400G radix, NS-S6980 Switches for 200G/400G in a compact 4RU, or NS-S6930/NS-S6920/NS-S6910 Switches for modular gateway/DCI without vendor lock-in, the NS-S6900 family from Network-Switch is the straight path.

You get standards-aligned optics, EVPN-VXLAN overlays, a clean migration from 100G to 400G uplinks, and factory pre-configuration under your brand. That means lower rollout friction today and a cleaner path to tomorrow’s bandwidth.

Did this article help you? Tell us on Facebook and LinkedIn. We’d love to hear from you!
