Server Model

ARS-221GL-NHIR
CSE-MG202TS-R000NDFP

Form Factor

Rack – 2U Server

Condition

New

Drive Bays

Default: Total 8 bay(s)
8 front hot-swap E1.S NVMe drive bay(s)

Processor (CPU)

72-core NVIDIA Grace™ CPU on the GH200 Grace Hopper™ Superchip

Supports up to 1000W TDP CPUs (Air Cooled)

System Memory

Slot Count: Onboard Memory
Max Memory: Up to 480GB ECC LPDDR5X
Additional GPU Memory: Up to 144GB ECC HBM3

Storage

2 M.2 NVMe slot(s) (M-key)

Graphics Card

Max GPU Count: Up to 2 onboard GPU(s)
Supported GPU: NVIDIA: H100 Tensor Core GPU on GH200 Grace Hopper™ Superchip (Liquid-cooled)
CPU-GPU Interconnect: NVLink®-C2C
GPU-GPU Interconnect: NVIDIA® NVLink®

Input / Output

LAN: 1 RJ45 1 GbE Dedicated IPMI LAN port(s)
USB: 2 port(s) (rear)
Video: 1 mini-DP port(s)

BIOS Type

AMI 64MB SPI Flash EEPROM

PC Health Monitoring

CPU
– Monitoring of CPU core, chipset, and memory voltages
FAN
– Fans with tachometer monitoring
– Status monitor for speed control
– Pulse Width Modulated (PWM) fan connectors
Temperature
– Monitoring for CPU and chassis environment
– Thermal Control for fan connectors

Front Panel

LED: HDD activity
Buttons: Power On/Off

Expansion Slots

PCI-Express (PCIe) Default Configuration: 3 PCIe 5.0 x16 (in x16) FHFL slot(s)

System Cooling

7 Removable heavy-duty 6cm Fan(s)

Power Supply

4x 2000W Redundant (2 + 2) Titanium Level power supplies

Power Supply Module Dimension (W x H x L): 73.5 x 40 x 185 mm

+12V
Max: 83A / Min: 0A (100Vac-127Vac)
Max: 150A / Min: 0A (200Vac-220Vac)
Max: 165A / Min: 0A (220Vac-230Vac)
Max: 166A / Min: 0A (230Vac-240Vac)

12V SB
Max: 3.5A / Min: 0A

AC Input
1000W: 100-127Vac / 50-60Hz
1800W: 200-220Vac / 50-60Hz
1980W: 220-230Vac / 50-60Hz
2000W: 220-240Vac / 50-60Hz (for UL only)
2000W: 230-240Vac / 50-60Hz
2000W: 230-240Vdc (for CQC only)

Output Type: Backplanes (gold finger)
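As a quick arithmetic sanity check (not part of the vendor spec), the +12V rail current limits above line up with the AC input wattage ratings, since output power is roughly current times rail voltage. A minimal sketch:

```python
# Sanity-check: the +12 V current limits above imply the PSU's rated
# output at each AC input range (current and wattage values from the
# spec table; conversion losses ignored for this rough check).
RAIL_VOLTAGE = 12.0  # volts, main +12 V rail

# (AC input range, max +12 V current in amps, rated output in watts)
spec = [
    ("100-127 Vac", 83, 1000),
    ("200-220 Vac", 150, 1800),
    ("220-230 Vac", 165, 1980),
    ("230-240 Vac", 166, 2000),
]

for ac_range, max_amps, rated_watts in spec:
    implied_watts = max_amps * RAIL_VOLTAGE
    print(f"{ac_range}: {max_amps} A x 12 V = {implied_watts:.0f} W "
          f"(rated {rated_watts} W)")
```

At 230-240 Vac, for example, 166 A x 12 V gives about 1992 W, consistent with the 2000 W rating.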


ARS-221GL-NHIR Dimensions

Height: 3.43" (87 mm)
Width: 17.3" (438.4 mm)
Depth: 35.43" (900 mm)

Package: 11.02" (H) x 27.56" (W) x 47.24" (D)
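The chassis dimensions above are given in both inches and millimeters; a quick conversion check (1 in = 25.4 mm) confirms the paired figures agree:

```python
# Convert the listed millimeter dimensions to inches (1 in = 25.4 mm)
# and compare against the inch figures in the spec above.
MM_PER_INCH = 25.4
dims_mm = {"Height": 87.0, "Width": 438.4, "Depth": 900.0}

for name, mm in dims_mm.items():
    inches = mm / MM_PER_INCH
    print(f"{name}: {mm} mm = {inches:.2f} in")
```

Height (87 mm) and depth (900 mm) round to the listed 3.43" and 35.43"; width (438.4 mm) comes to 17.26", which the spec rounds to 17.3".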

ARS-221GL-NHIR Weight

Gross Weight: 103 lbs (46.8 kg)
Net Weight: 88 lbs (40 kg)

U.S.-Based Support
Based in Los Angeles, our U.S.-based engineering team supports customers across the United States, Canada, and globally. You get direct access to real engineers, fast response times, and rapid deployment with reliable parts availability and professional service for mission-critical systems.
Expert Guidance You Can Trust
Companies rely on our engineering team for optimal hardware configuration, CUDA and model compatibility, thermal and airflow planning, and AI workload sizing to avoid bottlenecks. The result is a precisely built system that maximizes performance, prevents misconfigurations, and eliminates unnecessary hardware overspend.
Reliable 24/7 Performance
Every system is fully tested, thermally validated, and burn-in certified to ensure reliable 24/7 operation. Built for long AI training cycles and production workloads, these enterprise-grade systems minimize downtime, reduce failure risk, and deliver consistent performance for mission-critical teams.
Future Proof Hardware
Built for AI training, machine learning, and data-intensive workloads, our high-performance systems eliminate bottlenecks, reduce training time, and accelerate deployment. Designed for enterprise teams, these scalable systems deliver faster iteration, reliable performance, and future-ready infrastructure for demanding production environments.
Engineers Need Faster Iteration
Slow training slows product velocity. Our high-performance systems eliminate queues and throttling, enabling instant experimentation. Faster iteration and shorter shipping cycles keep engineers unblocked, operating at startup speed while meeting enterprise demands for reliability, scalability, and long-term growth.
Cloud Costs Are Insane
Cloud GPUs are convenient, until they become your largest monthly expense. Our workstations and servers often pay for themselves in 4–8 weeks, giving you predictable, fixed-cost compute with no surprise billing and no resource throttling.
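How quickly a purchase pays for itself depends entirely on your cloud rates, utilization, and system price. The figures below are illustrative assumptions for the break-even arithmetic, not quotes; plug in your own numbers:

```python
# Illustrative break-even estimate for owned hardware vs. cloud GPU rental.
# Every number here is an assumption for the sake of the arithmetic.
cloud_rate_per_gpu_hour = 10.00  # assumed $/hr for a comparable cloud GPU
gpus = 2                         # the GH200 system above has up to 2 GPUs
utilization = 0.75               # fraction of each week the GPUs are busy
system_price = 45_000.00         # assumed purchase price, USD

hours_per_week = 24 * 7
cloud_cost_per_week = cloud_rate_per_gpu_hour * gpus * hours_per_week * utilization
weeks_to_break_even = system_price / cloud_cost_per_week

print(f"Cloud spend: ${cloud_cost_per_week:,.0f}/week")
print(f"Break-even: {weeks_to_break_even:.1f} weeks")
```

Higher hourly rates, larger GPU counts, or near-continuous utilization shorten the payback window; light or bursty usage lengthens it.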