Server Model

ARS-221GL-NR
CSE-GP201TS-R000NP

Form Factor

Rack – 2U Server

Condition

New

Drive Bays

Default: Total 8 bay(s)
8 front hot-swap E1.S NVMe drive bay(s)

Processor (CPU)

Dual processor(s)
NVIDIA Grace™ CPU Superchip (dual 72-core CPUs, 144 cores total)

Supports up to 500W TDP CPUs (Air Cooled)

System Memory

Slot Count: Onboard Memory
Max Memory: Up to 480GB ECC LPDDR5X
Memory Voltage: 1.1V

Storage

M.2

Graphics Card

Max GPU Count: Up to 4 double-width GPU(s)
Supported GPUs (NVIDIA PCIe):
– NVIDIA PCIe H100
– NVIDIA PCIe L40S
– NVIDIA PCIe H100 NVL
CPU-GPU Interconnect: PCIe 5.0 x16
GPU-GPU Interconnect: NVIDIA® NVLink® Bridge (optional)
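
The GPU count and PCIe 5.0 x16 link listed above can be verified from the host OS once GPUs are installed. Below is a minimal sketch using the NVML Python bindings (the nvidia-ml-py / pynvml package, assumed to be installed alongside the NVIDIA driver); it simply lists each GPU with its current PCIe generation and width and is illustrative rather than Supermicro-specific tooling.

# Minimal sketch: list installed NVIDIA GPUs and their current PCIe link.
# Assumes the nvidia-ml-py ("pynvml") package and NVIDIA drivers are present.
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        if isinstance(name, bytes):  # older pynvml versions return bytes
            name = name.decode()
        gen = pynvml.nvmlDeviceGetCurrPcieLinkGeneration(handle)
        width = pynvml.nvmlDeviceGetCurrPcieLinkWidth(handle)
        print(f"GPU {i}: {name}  PCIe Gen{gen} x{width}")
finally:
    pynvml.nvmlShutdown()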

Input / Output

Video: 1 mini-DP port(s)

System BIOS

AMI 32MB SPI Flash EEPROM

Management

Software
– SuperDoctor® 5
– Watch Dog
– NMI
– SUM
– KVM with dedicated LAN
– SPM
– Intel® Node Manager
– SSM
– IPMI 2.0
– Redfish API (see the example query below)
– OOB Management Package (SFT-OOB-LIC)
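
As noted in the list above, the BMC's Redfish API can be queried over HTTPS with standard tooling. The sketch below is a minimal Python example that reads the standard /redfish/v1/Systems collection; the BMC address and credentials shown are placeholders for your environment, and the call illustrates the generic DMTF Redfish interface rather than any Supermicro-specific extension.

# Minimal sketch: query the BMC's Redfish service for its managed systems.
# BMC_HOST, BMC_USER, and BMC_PASS are placeholders for your environment.
import requests

BMC_HOST = "https://192.0.2.10"   # example BMC address (placeholder)
BMC_USER = "ADMIN"                # placeholder credentials
BMC_PASS = "your-password"

resp = requests.get(
    f"{BMC_HOST}/redfish/v1/Systems",
    auth=(BMC_USER, BMC_PASS),
    verify=False,   # BMCs commonly ship self-signed certs; enable verification in production
    timeout=10,
)
resp.raise_for_status()
for member in resp.json().get("Members", []):
    print(member.get("@odata.id"))   # e.g. /redfish/v1/Systems/1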

Power configurations
– Power-on mode for AC power recovery
– ACPI Power Management

PC Health Monitoring

CPU
– Monitors for CPU Cores, Chipset Voltages, Memory
FAN
– Fans with tachometer monitoring
– Status monitor for speed control
– Pulse Width Modulated (PWM) fan connectors
Temperature
– Monitoring for CPU and chassis environment
– Thermal Control for fan connectors
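
The fan tachometer and temperature readings described above are exposed through the IPMI 2.0 interface listed under Management, so they can be collected out of band. Below is a minimal sketch that shells out to the open-source ipmitool CLI from Python; the BMC address and credentials are placeholders, and "Temperature" and "Fan" are standard IPMI SDR sensor types rather than Supermicro-specific names.

# Minimal sketch: read temperature and fan sensors over IPMI using ipmitool.
# The BMC address and credentials below are placeholders for your environment.
import subprocess

BMC_ARGS = ["-I", "lanplus", "-H", "192.0.2.10", "-U", "ADMIN", "-P", "your-password"]

for sensor_type in ("Temperature", "Fan"):
    result = subprocess.run(
        ["ipmitool", *BMC_ARGS, "sdr", "type", sensor_type],
        capture_output=True, text=True, check=True,
    )
    print(f"--- {sensor_type} sensors ---")
    print(result.stdout.strip())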

ARS-221GL-NR Dimensions

Height: 3.46" (88 mm)
Width: 17.25" (438.4 mm)
Depth: 35.43" (900 mm)

Package: 11" (H) x 22.5" (W) x 45.5" (D)

ARS-221GL-NR Weight

Gross Weight: 86.5 lbs (39.2 kg)
Net Weight: 67.5 lbs (30.6 kg)

Front Panel

LED
– Hard drive activity LED
– Network activity LEDs
– Power status LED
– System Overheat & Power Fail LED
Buttons
– Power On/Off button
– System Reset button

Expansion Slots

PCI-Express (PCIe)
Default Configuration: 7 PCIe 4.0 x16 FHFL slot(s)

System Cooling

6 heavy-duty fans with optimal fan speed control

Power Supply

2000W Redundant Titanium Level power supplies

Dimensions (W x H x L): 73.5 x 40 x 185 mm

+12V
Max: 83A / Min: 0A (100Vac-127Vac)
Max: 150A / Min: 0A (200Vac-220Vac)
Max: 165A / Min: 0A (220Vac-230Vac)
Max: 166A / Min: 0A (230Vac-240Vac)
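
For context, each +12V current limit above lines up with a rating in the AC Input list below: 12 V x 83 A ≈ 996 W (the 1000W low-line rating at 100-127Vac), 12 V x 150 A = 1800 W, 12 V x 165 A = 1980 W, and 12 V x 166 A ≈ 1992 W, i.e. the full 2000W output is only available at high-line input.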

12V SB
Max: 3.5A / Min: 0A

AC Input
1000W: 100-127Vac / 50-60Hz
1800W: 200-220Vac / 50-60Hz
1980W: 220-230Vac / 50-60Hz
2000W: 220-240Vac / 50-60Hz (for UL only)
2000W: 230-240Vac / 50-60Hz
2000W: 230-240Vdc (for CQC only)

Output Type: Backplanes (gold finger)

Warranty

U.S.-Based Support
Based in Los Angeles, our U.S.-based engineering team supports customers across the United States, Canada, and globally. You get direct access to real engineers, fast response times, and rapid deployment with reliable parts availability and professional service for mission-critical systems.
Expert Guidance You Can Trust
Companies rely on our engineering team for optimal hardware configuration, CUDA and model compatibility, thermal and airflow planning, and AI workload sizing to avoid bottlenecks. The result is a precisely built system that maximizes performance, prevents misconfigurations, and eliminates unnecessary hardware overspend.
Reliable 24/7 Performance
Every system is fully tested, thermally validated, and burn-in certified to ensure reliable 24/7 operation. Built for long AI training cycles and production workloads, these enterprise-grade workstations minimize downtime, reduce failure risk, and deliver consistent performance for mission-critical teams.
Future Proof Hardware
Built for AI training, machine learning, and data-intensive workloads, our high-performance workstations eliminate bottlenecks, reduce training time, and accelerate deployment. Designed for enterprise teams, these scalable systems deliver faster iteration, reliable performance, and future-ready infrastructure for demanding production environments.
Engineers Need Faster Iteration
Slow training slows product velocity. Our high-performance systems eliminate queues and throttling, enabling instant experimentation. Faster iteration and shorter shipping cycles keep engineers unblocked, operating at startup speed while meeting enterprise demands for reliability, scalability, and long-term growth.
Cloud Costs Are Insane
Cloud GPUs are convenient, until they become your largest monthly expense. Our workstations and servers often pay for themselves in 4–8 weeks, giving you predictable, fixed-cost compute with no surprise billing and no resource throttling.