Introduction
If you rely on switch SPAN/port mirroring for security or compliance monitoring, you're likely missing packets during bursts and may even lose error frames (e.g., CRC/short frames) that some security tools need to see. The "mirror-first" approach is inherently best-effort and can drop traffic under congestion.
A dedicated TAP/NPB exists to solve exactly that problem: transparent inline access + deterministic replication at high speed.
How to choose fast (3 branded flagship models → 3 NSComm Giant series matches):
- FS T5580-48Y8C → Giant 662 if your world is mostly 10/25G access + a few 100G uplinks, and you want flexible input/output ports for aggregation/replication.
- Gigamon GigaVUE-HC3 → Giant 663 if you're concentrating many 100G links and need line-rate processing with replication + load balancing.
- Keysight Vision 400 → Giant 674 if you're moving into 100G/400G (AI clusters / spine / backbone), and want breakout-friendly 400G with wire-speed handling.
Why does the NPB/TAP exist?
Most teams start with switch port mirroring because it's "free." The problem is that it's not built for lossless, security-grade visibility.
Real limitations:
- Packet loss under burst / congestion: mirrored traffic can be dropped during spikes or when the switch is busy.
- Missing "bad" frames: switches may filter error frames like CRC errors or runt frames, which means your IDS/NDR may never see certain attack artifacts.
- Cross-device mirroring adds operational risk: RSPAN requires VLAN/routing configuration and increases management overhead.
That's why a TAP/NPB is positioned as part of modern observability infrastructure: hardware-level processing for high-speed lossless copy, plus aggregation and distribution to tools.
Overview of the 3 flagship NPBs
NPB selection dimensions
| Selection dimension | T5580-48Y8C (flagship) | GigaVUE-HC3 (flagship) | Vision 400 (flagship) | NSComm match | Why this match is practical |
| --- | --- | --- | --- | --- | --- |
| Primary port mix | 48×25G + 8×100G | 1/10/25/40/100G links supported | 24×SFP56 + 16×QSFP-DD; 10G-400G | Giant 662 | 48×1/10/25G + 8×40/100G fits 25G access + 100G uplinks |
| Wire-speed / lossless intent | NPB positioning + L2-L4 aggregation/replication/LB | Line-rate forwarding + traffic replication/aggregation | "line-rate... without dropped traffic" | Giant 663 | Explicit: full-duplex line-speed, zero packet loss |
| 100G density focus | Mixed 25/100G | Modular + scale-out cluster up to 25 Tbps/32 nodes | High-speed density (400G-ready) | Giant 663 | 32×40/100G, breakout to 4×10/25G where needed |
| 400G-era readiness | Not the main focus | Platform family covers up to 100G links in doc sample | 9.2 Tbps total capacity, 10-400G | Giant 674 | 24×40/100G + 8×100/400G; 400G breakout to 4×100G |
| Tunnel-aware steering | Encapsulation/decapsulation (GRE/VXLAN...) | Optimizations & tool delivery; supports encapsulated-traffic LB | Multi-speed visibility fabric capabilities | Giant 662 / 663 / 674 | Supports GTP/GRE/VXLAN (model-dependent wording) |
What are the flagship models typically chosen for?
1. T5580-48Y8C (balanced 25G + 100G NPB)
This model is often positioned as a "do-it-all" NPB at the 25G access edge: 48×25G SFP28 + 8×100G QSFP28, supporting traffic aggregation, filtering, replication, and load balancing (L2-L4).
It's attractive when your monitoring tools are still largely 10/25G, but your uplinks are 100G.
Closest NSComm Giant series match: Giant 662
Giant 662 provides 48×1/10/25G + 8×40/100G, with flexible in/out port roles, plus aggregation/filtering/replication/load balancing functions in one box.
It also notes tunnel protocol support and tunnel header stripping (VXLAN/GRE/GTP).
2. GigaVUE-HC3 (visibility fabric + scale-out)
The HC series positions itself around line-rate forwarding, traffic replication/aggregation, and traffic optimization (e.g., deduplication, metadata/flow export, masking).
A standout "scale" statement in the datasheet: up to 25 Tbps across 32 cluster nodes.
This is the archetype people reference when they search "Gigamon NPB."
Closest NSComm Giant series match: Giant 663
Giant 663 is built for high-density 100G environments: 32×40/100G interfaces, flexible input/output, plus replication and load-balancing.
Most importantly, it explicitly claims full-duplex line-speed processing with zero packet loss.
3. Vision 400 (400G-ready visibility)
This platform positions around high-speed visibility fabrics, with stated hardware scale: 24×SFP56 + 16×QSFP-DD, support from 10G up to 400G, and 9.2 Tbps total capacity.
It also explicitly states line-rate traffic handling without dropped traffic.
Closest NSComm Giant series match: Giant 674
Giant 674 offers 24×40/100G + 8×100/400G ports with flexible I/O roles.
It also claims full-duplex wire-speed processing without packet loss, and supports 400G breakout to 4×100G.
The selection dimensions that actually change outcomes
In 2026, many organizations don't fail because they "lack security tools." They fail because, in moments of stress (traffic bursts, partial outages, or coordinated pressure), their tools don't see the same reality the network is experiencing.
Think of the kind of situations smaller countries and mid-sized economies have faced recently: sudden traffic surges against public services, communications disruptions, and infrastructure instability.
In those scenarios, a visibility layer (NPB/TAP) can be the difference between guessing and knowing. With full-fidelity traffic access and controlled tool delivery, teams can often spot patterns earlier, validate hypotheses faster, and reduce the chance of a costly misread.
Below are the selection dimensions that actually change outcomes, because they determine whether your visibility is trustworthy when conditions are worst.
1. Lossless behavior under burst: the foundation of "trustworthy visibility"
When traffic spikes or links degrade, "best-effort" monitoring is the first thing to break. If your visibility path drops packets during bursts, your IDS/NDR/APM conclusions become unreliable, because they're built on incomplete evidence. That's why relying solely on switch mirroring is risky: mirroring can drop packets under congestion, and some switches may filter error frames (e.g., CRC/runt frames), creating blind spots.
What to select for: platforms that are explicitly designed for full-duplex, line-rate processing and stable replication/aggregation under realistic monitoring workloads.
2. Flow-aware load balancing: preventing "tool overload" from turning into false negatives
In real-world pressure events, tools often fail not because they're bad, but because they're overwhelmed. If sessions are sprayed randomly across multiple tool instances, you lose session integrity, making alerts noisy or, worse, hiding the real issue.
An NPB with flow-aware distribution keeps related traffic together (same flow → same tool instance), while still spreading load across multiple devices.
What to select for: deterministic, flow-consistent load balancing at high speed (not just round-robin splitting).
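To make "flow-consistent" concrete, here is a minimal Python sketch of symmetric 5-tuple hashing, the idea behind this kind of load balancing. Real NPBs do this in hardware at line rate; the tool-port names and hash choice below are illustrative assumptions, not any vendor's implementation.

```python
import hashlib

TOOL_PORTS = ["tool-1", "tool-2", "tool-3", "tool-4"]  # hypothetical tool instances

def flow_key(src_ip: str, dst_ip: str, src_port: int, dst_port: int, proto: int) -> str:
    """Order-independent 5-tuple key: both directions of a session
    produce the same key (symmetric hashing)."""
    a, b = (src_ip, src_port), (dst_ip, dst_port)
    lo, hi = (a, b) if a <= b else (b, a)
    return f"{lo}|{hi}|{proto}"

def pick_tool_port(src_ip, dst_ip, src_port, dst_port, proto) -> str:
    """Deterministically map a flow to one tool instance: the same flow
    always lands on the same port, unlike round-robin spraying."""
    key = flow_key(src_ip, dst_ip, src_port, dst_port, proto).encode()
    index = int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % len(TOOL_PORTS)
    return TOOL_PORTS[index]

# Both directions of one TCP session map to the same tool instance:
assert pick_tool_port("10.0.0.5", "10.0.1.9", 40512, 443, 6) == \
       pick_tool_port("10.0.1.9", "10.0.0.5", 443, 40512, 6)
```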
3. Speed mismatch + breakout planning: the biggest hidden constraint in modern migrations
A very common pattern today is: the network is upgrading toward 100G/400G, but monitoring tools are still 10/25/100G. When a surge or incident happens, teams discover too late that "the backbone is fast, but visibility is bottlenecked."
Breakout isn't just a checkbox; it's often the migration bridge that lets you keep existing tools while adopting higher-speed links.
What to select for: practical breakout options that match your tool ecosystem and a clear plan for optics/cabling, so "visibility capacity" scales with link capacity.
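As a sanity check for breakout planning, a small sketch like the following can confirm the tool edge keeps up at peak rates, not averages. All figures are illustrative assumptions, and the math only holds if the NPB flow-balances the tapped link across the breakout lanes.

```python
def breakout_headroom(link_gbps: int, lane_gbps: int, tool_port_gbps: int,
                      peak_utilization: float) -> dict:
    """Check whether the breakout lanes behind one tapped link can absorb
    its peak traffic. Assumes the NPB flow-balances across lanes."""
    lanes = link_gbps // lane_gbps                      # e.g., 400G -> 4x100G
    peak_gbps = link_gbps * peak_utilization
    tool_capacity = lanes * min(lane_gbps, tool_port_gbps)
    return {"lanes": lanes, "peak_gbps": peak_gbps,
            "tool_capacity_gbps": tool_capacity,
            "headroom_gbps": tool_capacity - peak_gbps}

# 400G tapped link, broken out to 4x100G tools, assuming 70% peak utilization:
print(breakout_headroom(400, 100, 100, 0.70))
# {'lanes': 4, 'peak_gbps': 280.0, 'tool_capacity_gbps': 400, 'headroom_gbps': 120.0}
```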
4. Tunnel/overlay visibility: the difference between seeing traffic and understanding it
In many modern environments (cloud DC, service provider edges, multi-tenant networks), traffic is encapsulated. If your visibility layer can't classify or steer based on inner headers, you can end up analyzing the wrong "shape" of traffic, leading to incorrect conclusions during high-stakes events.
What to select for: tunnel-aware handling (based on what your environment actually uses) and the ability to steer traffic based on inner-layer fields when needed.
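To show what "steering on inner fields" actually involves, here is a minimal Python sketch that decapsulates a VXLAN header (RFC 7348 layout) to recover the VNI and the inner EtherType. An NPB performs this in hardware at line rate; the sketch only illustrates the parsing step.

```python
import struct

def parse_vxlan(udp_payload: bytes) -> dict | None:
    """Minimal VXLAN decap (RFC 7348 layout): recover the VNI and the
    inner EtherType from the UDP payload of an encapsulated packet."""
    if len(udp_payload) < 22 or not udp_payload[0] & 0x08:  # I-bit: VNI valid
        return None
    vni = int.from_bytes(udp_payload[4:7], "big")     # 24-bit VNI
    inner_frame = udp_payload[8:]                     # inner Ethernet after 8-byte header
    (inner_ethertype,) = struct.unpack("!H", inner_frame[12:14])
    return {"vni": vni, "inner_ethertype": hex(inner_ethertype)}
```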
5. Operational resilience: can your team prove what happened, fast?
When the environment is unstable, the best teams don't just "monitor"; they prove: whether traffic disappeared, moved, or was dropped; whether a tool is blind; whether the visibility path is intact.
That demands repeatable configuration, reliable telemetry/counters, and quick fault isolation, so you can differentiate "no traffic" from "visibility failure" in minutes, not hours.
What to select for: clear management interfaces (CLI/Web/SNMP/API), auditable changes, and measurable per-port/flow behavior.
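As a sketch of that "prove it fast" workflow, the function below separates the two failure modes using counter deltas sampled over the same interval (e.g., via SNMP IF-MIB ifHCInOctets/ifHCOutOctets). The tolerance threshold and filter-keep parameter are assumptions, not any vendor's API.

```python
def diagnose(ingress_delta_octets: int, egress_delta_octets: int,
             expected_filter_keep: float = 1.0) -> str:
    """Separate 'no traffic' from 'visibility failure' using counter deltas
    sampled over the same interval. expected_filter_keep accounts for
    intentional filtering configured on the NPB."""
    if ingress_delta_octets == 0:
        return "no traffic arriving at the NPB: look upstream (TAP/link)"
    expected_out = ingress_delta_octets * expected_filter_keep
    if egress_delta_octets < expected_out * 0.99:   # tolerance for sampling skew
        return "visibility path is losing traffic: check drop counters/filters"
    return "visibility path intact: a tool-side gap is a tool problem"

# Example: ~160 GB in, ~120 GB out over the same window, no filters configured:
print(diagnose(160_000_000_000, 120_000_000_000))
# -> visibility path is losing traffic: check drop counters/filters
```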
6. The most common failure: oversubscribing tool ports during replication
A visibility layer can replicate traffic to multiple tools, but replication multiplies bandwidth. Under pressure, the tool edge is where drops often occur. If you design for averages and ignore peak replication factors, your monitoring fails exactly when you need it most.
What to select for: a design process that starts with peak-rate math (replication factor + burst assumptions), then maps to tool interfaces and filtering strategy.
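Here is what that peak-rate math looks like as a minimal Python sketch; the traffic figures are illustrative assumptions. Note how a design that looks comfortable on averages is badly oversubscribed once the replication factor is applied at peak.

```python
def tool_edge_check(link_gbps: float, peak_util: float, copies: int,
                    tool_ports: int, tool_port_gbps: float,
                    filter_keep_ratio: float = 1.0) -> str:
    """Peak-rate math for replication: every copy multiplies bandwidth,
    and the tool edge must absorb the worst case, not the average."""
    replicated = link_gbps * peak_util * copies * filter_keep_ratio
    capacity = tool_ports * tool_port_gbps
    verdict = "OK" if replicated <= capacity else "OVERSUBSCRIBED"
    return (f"peak after {copies} copies: {replicated:.0f}G, "
            f"tool edge: {capacity:.0f}G -> {verdict}")

# 2x100G uplinks at 80% peak, 3 tool copies, feeding 6x25G tool ports:
print(tool_edge_check(link_gbps=200, peak_util=0.8, copies=3,
                      tool_ports=6, tool_port_gbps=25))
# -> peak after 3 copies: 480G, tool edge: 150G -> OVERSUBSCRIBED
```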
Configuration examples of NPB
Scenario A - "25G access + 100G uplinks, multiple tools need copies"
Environment: 20-40 racks, each ToR is 25G down / 100G up.
Monitoring goal: feed NDR + IDS + APM probe, and keep traffic separated (east-west vs north-south).
Recommended: Giant 662
- Ports: 48×1/10/25G + 8×40/100G
- Capabilities: aggregation + filtering + replication + load balancing
Design sketch (conceptual):
- 2×100G uplinks mirrored/inline → NPB
- NPB outputs:
  - Copy #1 to IDS (filtered "internet-facing" flows)
  - Copy #2 to NDR (full traffic sample, load-balanced)
  - Copy #3 to APM (L7-relevant subset)
Why not SPAN? Because under bursts, mirroring may drop packets, and switches can filter error frames.
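A minimal Python sketch of this output mapping might look like the following; the internal address range, tool names, and filter predicates are all illustrative assumptions.

```python
import ipaddress

INTERNAL = ipaddress.ip_network("10.0.0.0/8")   # assumption: the site's internal range

def is_internet_facing(src_ip: str, dst_ip: str) -> bool:
    """North-south if exactly one endpoint is internal (XOR)."""
    return (ipaddress.ip_address(src_ip) in INTERNAL) != \
           (ipaddress.ip_address(dst_ip) in INTERNAL)

# Conceptual output map for Scenario A; tools and predicates are illustrative:
OUTPUTS = [
    ("IDS", is_internet_facing),                 # Copy #1: north-south only
    ("NDR", lambda s, d: True),                  # Copy #2: everything (then load-balanced)
    ("APM", lambda s, d: True),                  # Copy #3: L7 subset, filtered further on-box
]

def copies_for(src_ip: str, dst_ip: str) -> list:
    return [tool for tool, match in OUTPUTS if match(src_ip, dst_ip)]

print(copies_for("10.0.0.5", "8.8.8.8"))    # ['IDS', 'NDR', 'APM']  (north-south)
print(copies_for("10.0.0.5", "10.0.1.9"))   # ['NDR', 'APM']         (east-west skips IDS)
```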
Scenario B - "Many 100G links, you need deterministic distribution (LB)"
Environment: 8-16×100G service links (core edge, firewall clusters, service mesh gateways).
Monitoring goal: distribute traffic to multiple tool instances and keep sessions stable.
Recommended: Giant 663
- 32×40/100G ports
- Explicit: full-duplex line-speed, zero packet loss
- Supports replication + load balancing + classification
Why it maps well to "HC3-style" searches:
The HC series is often associated with optimized delivery + scaling, and references large processing capacity in clustered designs.
Giant 663 is the clean "dense 100G" alternative for a single-device deployment that still prioritizes line-rate behavior.
Scenario C - "400G spine / AI cluster era, tools are 100G today"
Environment: 400G backbone links are arriving, but many tools still ingest 100G.
Monitoring goal: tap 400G links and break out to 100G tool farm without losing visibility.
Recommended: Giant 674
- Ports: 24×40/100G + 8×100/400G
- Breakout: 400G → 4×100G, and 100G → 4×10/25G
- Full-duplex wire-speed, no packet loss
This scenario is exactly where people look at 400G-ready visibility platforms (e.g., the Vision 400's 10-400G support and 9.2 Tbps capacity).
Giant 674 gives you a practical path to adopt 400G while preserving tool compatibility through breakout.
A "decision workflow" for you
Step 1 - Classify your links (by speed + direction + encapsulation)
- Speed tier: 10/25G vs 100G vs 400G
- Full-duplex visibility requirement (most security/compliance cases)
- Encapsulation: VXLAN/GRE/GTP
Step 2 - Decide the tool delivery model
- Replication model: one-to-many copies for multiple tools
- Aggregation model: many-to-one feeds when a single tool must observe multiple links
- Load-balancing model: distribute flows across tool instances while keeping session integrity
Step 3 - Pick the chassis by port mix (the simplest rule that works; see the sketch after this list)
- Mostly 25G access + some 100G uplinks → Giant 662
- Many 100G links → Giant 663
- You already have 400G or will within 12-24 months → Giant 674
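The three steps condense into a small sketch like this; the yes/no simplifications are assumptions, while the model mapping mirrors the list above.

```python
def pick_chassis(mostly_25g_access: bool, many_100g_links: bool,
                 has_or_plans_400g: bool) -> str:
    """Step 3's port-mix rule as code; inputs are simplified yes/no
    answers derived from Steps 1-2."""
    if has_or_plans_400g:       # 400G now or within 12-24 months
        return "Giant 674"
    if many_100g_links:         # dense 100G aggregation/distribution
        return "Giant 663"
    if mostly_25g_access:       # 25G access + a few 100G uplinks
        return "Giant 662"
    return "revisit Step 1: classify links by speed/direction/encapsulation"

print(pick_chassis(mostly_25g_access=True, many_100g_links=False,
                   has_or_plans_400g=False))   # -> Giant 662
```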
Conclusion
If you're building a visibility stack across multiple vendors, network-switch.com can help consolidate sourcing (switches, NPB/TAP, optics, cables) and align configurations. Our certified engineers (CCIE/HCIE/H3CIE/RCNP) can sanity-check port plans, breakout choices, and tool-side compatibility before you buy.
Did this article help you? Tell us on Facebook and LinkedIn. We'd love to hear from you!
https://www.linkedin.com/company/network-switch/