- 1. Nvidia Vera Rubin NVL72 Racks Priced at $5M-$7M Each - OEM Margins Under Pressure
- 2. Nvidia Still Has Not Sold a Single H200 to China - Three Months After U.S. Approval
- 3. U.S. Export Control Agency Loses 20% of Licensing Staff - AI Chip Approvals Bottlenecked
- 4. Nvidia 2026-2028 Full Roadmap: Feynman GPUs, Rosa CPUs, and Optical NVLink
- 5. Arista Q1 2026 Earnings Scheduled May 5 - Analysts Bullish Ahead of Report
- 6. Dell'Oro: Co-Packaged Optics Enter Volume Ramp on AI Switches in 2026
- Editor's Summary
- Sources
1. Nvidia Vera Rubin NVL72 Racks Priced at $5M-$7M Each - OEM Margins Under Pressure
Supply chain pricing intelligence published in late March reveals that Nvidia's forthcoming Vera Rubin NVL72 (VR200) rack-scale systems are being quoted to OEM customers at between $5 million and $7 million per unit. The higher-tier Rubin Ultra NVL144 (VR300) - which doubles GPU packages per rack to 144 and is not yet in silicon production - is generating early quotes in the range of $7 million to $8.8 million per rack, according to supply chain sources cited by Tom's Hardware. Nvidia has not confirmed official list prices for either system.
The pricing represents a meaningful step up from Blackwell-generation racks, reflecting the NVL72's integration of 36 Vera CPUs, 72 Rubin GPUs with 288 GB of HBM4 memory each, sixth-generation NVLink switching, ConnectX-9 SuperNICs, BlueField-4 DPUs, and custom liquid cooling. Volume shipments from OEM partners - including Dell, HPE, Lenovo, and Supermicro - are targeted for the second half of 2026. Despite the unit price premium, analysts note that OEM margins on these systems are expected to remain tight due to supply chain complexity, elevated memory costs, and competitive pressure to deliver attractive total-cost-of-ownership against the backdrop of Rubin's promised 10x lower cost-per-token at the workload level.
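To make the cost-per-token argument concrete, here is a minimal back-of-envelope sketch. Only the $5M-$7M rack price range comes from the reporting above; the throughput, power draw, energy price, utilization, and lifespan figures are illustrative assumptions, not published specifications.

```python
# Hedged sketch: amortized cost-per-token for a rack-scale AI system.
# Only the $5M-$7M rack price range comes from the article; throughput,
# power, energy price, utilization, and lifespan are illustrative guesses.

def cost_per_million_tokens(rack_price_usd: float,
                            tokens_per_second: float,
                            power_kw: float,
                            usd_per_kwh: float = 0.08,
                            utilization: float = 0.6,
                            lifespan_years: float = 4.0) -> float:
    """Rough amortized $ per 1M tokens: (capex + energy opex) / lifetime tokens."""
    seconds = lifespan_years * 365 * 24 * 3600
    lifetime_tokens = tokens_per_second * utilization * seconds
    energy_cost = power_kw * utilization * (seconds / 3600) * usd_per_kwh
    return (rack_price_usd + energy_cost) / lifetime_tokens * 1e6

# Compare a hypothetical current-generation rack with a hypothetical
# Rubin-class rack that costs more up front but delivers much higher
# aggregate token throughput - the mechanism behind a lower cost-per-token.
current_gen = cost_per_million_tokens(3_500_000, tokens_per_second=400_000, power_kw=120)
rubin_class = cost_per_million_tokens(6_000_000, tokens_per_second=4_000_000, power_kw=150)
print(f"Current-gen rack: ${current_gen:.3f} per 1M tokens")
print(f"Rubin-class rack: ${rubin_class:.3f} per 1M tokens")
```

The point of the sketch is that a higher sticker price can still lower cost-per-token if throughput rises faster than capex and power, which is exactly the trade-off buyers will be evaluating.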
2. Nvidia Still Has Not Sold a Single H200 to China - Three Months After U.S. Approval
Despite receiving U.S. government clearance in late January 2026 to sell H200 AI chips to vetted Chinese customers - subject to a 25% tariff on each shipment and independent third-party security testing of every unit before export - Nvidia has not generated any revenue from China H200 sales as of late April, according to CNBC reporting and Nvidia CFO Colette Kress's own remarks on the company's February earnings call. CEO Jensen Huang confirmed at GTC 2026 in March that Chinese customers including ByteDance, Alibaba, and Tencent had placed purchase orders totaling more than 400,000 H200 units. Yet none of those shipments has cleared.
Two countervailing forces are stalling the deal. On the Chinese side, Beijing has directed customs officials to restrict H200 imports and has pushed domestic technology firms toward Huawei's Ascend AI accelerator series. On the American side, the Bureau of Industry and Security - the agency responsible for approving each export license - is operating with a severely reduced and overloaded staff (see story 3). CFO Kress acknowledged at the February earnings call that Chinese AI competitors are "bolstered by recent IPOs, making progress and have the potential to disrupt the structure of the global AI industry over the long-term." Nvidia's China AI chip market share has declined from over 90% in 2022 to approximately 50% as of early 2026.
3. U.S. Export Control Agency Loses 20% of Licensing Staff - AI Chip Approvals Bottlenecked
A Bloomberg investigation published on April 13 found that the Bureau of Industry and Security (BIS) - the U.S. Commerce Department office responsible for reviewing and approving AI chip export licenses - has shed approximately 101 employees since 2024, a 19% workforce reduction. Turnover among licensing and rulemaking staff specifically has run at nearly 20%. The outlet's analysis drew on Office of Personnel Management figures, LinkedIn profile changes, and accounts from more than 20 people familiar with the agency's operations.
The staffing reduction has created a severe processing bottleneck at precisely the moment when the volume and complexity of AI chip export license applications have surged. Under Secretary of Commerce Jeffrey Kessler is reportedly insisting on personally reviewing nearly every significant license application, a centralization of decision-making that compounds the throughput problem. For Nvidia, AMD, and other U.S. AI hardware exporters, this means legally approved trade arrangements now face unpredictable processing delays. For enterprise IT buyers outside the United States, it adds a concrete new layer of supply chain risk to any hardware passing through U.S. export-control review - a factor that should be incorporated into project lead-time planning for the remainder of 2026.
4. Nvidia 2026-2028 Full Roadmap: Feynman GPUs, Rosa CPUs, and Optical NVLink
At GTC 2026 in March, Nvidia disclosed the most detailed multi-year data center roadmap in the company's history, extending architecture visibility through 2028. The three-year plan by generation:
2026 - Vera Rubin: The Vera CPU (88 custom "Olympus" Arm cores, 1.8 TB/s NVLink chip-to-chip bandwidth) paired with the Rubin R200 GPU (50 petaflops FP4, 288 GB HBM4). Accompanied by the Groq LP30 LPU for low-latency inference, BlueField-4 DPU, NVLink-6 switch, Spectrum-6 Ethernet with co-packaged optics, and ConnectX-9 SuperNIC. Nvidia projects combined Blackwell and Rubin purchase commitments of $1 trillion through 2027 - double the figure disclosed at GTC 2025.
2027 - Rubin Ultra: The NVL144 VR300 doubles GPU packages per rack to 144. Rubin Ultra GPUs use four compute chiplets with 1 TB of HBM4E memory per package, delivering approximately 100 petaflops of FP4 compute. The Groq LP35 LPU gains NVFP4 support. The Kyber NVL144 rack is designed to deliver at least 4x the performance of today's Blackwell NVL72.
2028 - Feynman + Rosa: Feynman GPUs adopt 3D die stacking - the first Nvidia AI GPUs to stack multiple GPU dies vertically - using custom high-bandwidth memory pushing per-GPU capacity well beyond 1 TB. Feynman is paired with the Rosa CPU (named after Nobel Prize-winning physicist Rosalyn Sussman Yalow), a ground-up redesign that cuts Nvidia's CPU development cycle from four years to two, signaling long-term commercial CPU ambitions beyond Nvidia's own platforms. The 2028 ecosystem includes BlueField-5, NVLink-8 with co-packaged optics (the first optical NVLink in Nvidia history), Spectrum-7 Ethernet with CPO, and ConnectX-10 SuperNIC. For the first time, Nvidia will support both copper and optical scale-up networking simultaneously.
5. Arista Q1 2026 Earnings Scheduled May 5 - Analysts Bullish Ahead of Report
Arista Networks (NYSE: ANET) confirmed on April 7 that Q1 2026 financial results will be released after U.S. market close on May 5, 2026, followed by an executive call at 1:30 PM PT. As of April 22, Arista carries a Zacks Rank of #2 (Buy), with broad analyst consensus pointing to continued AI networking momentum. The company enters the quarter having raised its 2026 AI networking revenue target to $3.25 billion and guided for approximately 25% full-year revenue growth toward $11.25 billion - which would mark Arista's first fiscal year above the $10 billion threshold.
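A quick sanity check on the guidance arithmetic: the $11.25 billion target and ~25% growth rate are from the reporting above, while the implied 2025 base and the AI networking share of revenue are derived here, not reported figures.

```python
# Back-of-envelope check on Arista's reported 2026 guidance.
# target_2026, growth, and ai_networking_target are from the article;
# the implied 2025 base and AI share are derived, not reported figures.

target_2026 = 11.25e9          # guided full-year 2026 revenue
growth = 0.25                  # guided ~25% growth rate
ai_networking_target = 3.25e9  # raised 2026 AI networking revenue target

implied_2025_base = target_2026 / (1 + growth)
ai_share = ai_networking_target / target_2026

print(f"Implied 2025 revenue base: ${implied_2025_base / 1e9:.1f}B")
print(f"AI networking share of 2026 target: {ai_share:.0%}")
```

The guidance implies a 2025 base of roughly $9.0 billion, with AI networking accounting for just under a third of the 2026 target - useful context for judging how much of the growth story rides on the AI back-end business.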
Key metrics analysts are tracking: the pace of 800G port deployments at Microsoft and Meta (together over 40% of Arista revenue), Q1 AI networking order activity versus the $3.25 billion annual target, and campus networking growth following the VeloCloud SD-WAN acquisition (guided for 60% growth in 2026). Any commentary on the competitive dynamic between open Ethernet and Nvidia Spectrum-X in hyperscale AI back-end switching will also be closely watched. BNP Paribas, Evercore ISI, JPMorgan, and TD Cowen have all reiterated positive outlooks ahead of the print.
6. Dell'Oro: Co-Packaged Optics Enter Volume Ramp on AI Switches in 2026
In its 2026 data center networking outlook, research firm Dell'Oro Group identified co-packaged optics (CPO) as one of the year's defining technical transitions. After years of laboratory demonstration and limited pilot deployments, 2026 is expected to mark the beginning of CPO's volume production ramp on both InfiniBand and Ethernet AI switches - driven by hyperscaler demand for higher port density, better power-per-bit economics, and the reliability advantages of eliminating pluggable transceiver interfaces. Nvidia is leading the industry deployment, with its Spectrum-X and Spectrum-6 Ethernet platforms already specifying CPO as a core feature. Dell'Oro expects other vendors to follow shortly.
CPO integrates optical transceivers directly onto the switch package, eliminating the pluggable module entirely. Nvidia's published benchmarks for its Spectrum-X SN6000 series cite 5x better power efficiency, 10x higher reliability, and 5x longer uptime compared to traditional pluggable alternatives. For enterprise IT buyers, the practical implication is that by 2027, high-performance switch procurement decisions will increasingly require evaluating CPO compatibility alongside port count and bandwidth specifications - not just as a premium option but as a baseline expectation in AI-grade network designs. Dell'Oro also reiterated that the AI networking market remains supply-constrained, with chip, memory, and optical component shortages representing the primary risk to 2026 deployment timelines - a factor that procurement teams planning infrastructure upgrades should build into their lead-time assumptions.
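The power-per-bit economics can be sketched with simple arithmetic. The 5x power-efficiency multiple is Nvidia's claimed figure cited above; the per-port wattage of an 800G pluggable module, the port count, and the fleet size are illustrative assumptions.

```python
# Hedged sketch of the "power-per-bit" economics behind the CPO transition.
# The 5x efficiency multiple is Nvidia's claim cited above; the pluggable
# per-port wattage, port count, and port speed are illustrative guesses.

def fleet_optics_power_kw(ports: int, watts_per_port: float) -> float:
    """Total optical-interface power for a switch fleet, in kW."""
    return ports * watts_per_port / 1000

def pj_per_bit(watts: float, gbps: float) -> float:
    """Picojoules per bit at line rate: W / Gbps * 1000."""
    return watts / gbps * 1000

PORTS = 10_000                 # hypothetical 800G ports across an AI fabric
PLUGGABLE_W = 15.0             # rough power of an 800G pluggable module
CPO_W = PLUGGABLE_W / 5        # applying the claimed 5x efficiency gain

pluggable_kw = fleet_optics_power_kw(PORTS, PLUGGABLE_W)
cpo_kw = fleet_optics_power_kw(PORTS, CPO_W)

print(f"Pluggable fleet optics power: {pluggable_kw:.0f} kW "
      f"({pj_per_bit(PLUGGABLE_W, 800):.2f} pJ/bit per port)")
print(f"CPO fleet optics power:       {cpo_kw:.0f} kW "
      f"({pj_per_bit(CPO_W, 800):.2f} pJ/bit per port)")
```

Under these assumptions, a 10,000-port fabric would spend 150 kW on pluggable optics versus 30 kW with CPO - the kind of fleet-level delta that makes power-per-bit a procurement criterion rather than a datasheet footnote.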
Editor's Summary
This edition maps the structural forces shaping AI infrastructure investment through the rest of 2026 and into 2028. Vera Rubin rack pricing at $5M-$7M per unit makes the cost of the next GPU generation tangible for procurement planners at every tier of the market. The China H200 stalemate and BIS staffing crisis confirm that geopolitical and regulatory friction have become concrete supply chain variables, not merely strategic abstractions. Nvidia's 2026-2028 roadmap - Feynman, Rosa, and optical NVLink - provides the multi-year architecture framework that enterprise IT architects need to plan investments against. CPO's shift from pilot to volume production will reshape the economics of every high-performance switch purchase over the next two to three years. And Arista's May 5 earnings call will deliver the first real data point on how AI networking demand tracked in calendar Q1 2026 - a significant bellwether for the rest of the year.
Network-Switch.com is a global professional distributor of networking equipment from Cisco, Huawei, Ruijie, H3C, and our own NS brand - including switches, routers, firewalls, wireless APs, optical modules, and fiber patch cables. Our CCIE, HCIE, H3CIE, and RCNP certified engineers deliver end-to-end technical support and complete network solutions for enterprise, campus, and large-scale infrastructure deployments worldwide.
Visit us: https://network-switch.com
Sources
- Price of Nvidia Vera Rubin NVL72 Racks Skyrockets to $8.8 Million - Tom's Hardware (Mar 24, 2026)
- Nvidia Still Hasn't Sold Its U.S.-Approved China AI Chips - CNBC (Feb 26, 2026)
- U.S. Export Control Agency Has Lost Nearly a Fifth of Its Licensing Staff - Tom's Hardware (Apr 13, 2026)
- Driving Down the AI System Roadmap With Nvidia: Rubin, Feynman, Rosa - The Next Platform (Mar 19, 2026)
- Arista Networks to Announce Q1 2026 Financial Results on May 5, 2026 - Arista Networks (Apr 7, 2026)
- Data Center Networking 2025-2026: CPO Ramp, Supply Constraints and AI Switch Outlook - Dell'Oro Group (Jan 7, 2026)