VRLA Tech AMD EPYC Workstation
$32,999.99
Description
The VRLA Tech AMD EPYC Workstation is a custom-built AMD EPYC 9005 Turin system in a tower form factor for high-performance computing, massive-memory AI, and heavy scientific simulation. It supports up to 192 cores per socket, 6TB of 12-channel DDR5 ECC memory per socket, up to 128 PCIe 5.0 lanes, and server-class BMC remote management. It is the right platform when you have outgrown Threadripper Pro’s 96-core and 2TB-RAM ceilings, when you need 12-channel memory bandwidth rather than 8-channel, or when your workload benefits from multi-socket expansion. Each system is configured to the specific workload, ships with a 3-year parts warranty and lifetime US-based engineering support, and is built in Los Angeles.
| Spec | Detail |
|---|---|
| CPU | AMD EPYC 9005 Turin — 9555P (64 cores), 9655P (96 cores), 9755 (128 cores), or 9965 (192 cores). Single-socket or dual-socket configurations. |
| Platform | SP5 socket, 12-channel DDR5, up to 128 PCIe 5.0 lanes per socket |
| Memory | 12-channel DDR5 ECC RDIMM, up to 6TB per socket (12TB total in 2P) |
| GPU | Up to four NVIDIA RTX PRO 6000 Blackwell, RTX 5090, H100, or H200 cards |
| Remote management | ASPEED BMC with IPMI 2.0, Redfish API, and dedicated management network port |
| Storage | PCIe 5.0 NVMe boot, plus enterprise SATA, SAS, or U.2 NVMe arrays for data tier |
| Cooling | Enterprise-grade tower chassis with server-class cooling, optional liquid loop on 500W+ TDP configurations |
| Warranty | 3-year parts, lifetime US-based engineering support |
Built for when Threadripper Pro is not enough
EPYC is AMD’s data-center platform. Most EPYC deployments run in rack-mounted servers inside data centers. A small but important segment of users needs EPYC’s capabilities — memory bandwidth, memory capacity, core count, remote management — in workstation form factor next to their desk. That is what this configuration is for. It is not a general-purpose workstation alternative; it is a purpose-built platform for specific workloads that Threadripper Pro cannot handle.
Those workloads are: in-memory AI inference serving language models whose weights plus KV cache exceed 2TB, computational fluid dynamics on models that require massive RAM, computational chemistry and molecular dynamics simulations, multi-tenant research workstations where several researchers share one machine via VMs or containers, and HPC codes that were designed for dual-socket systems. For every other workload — including most AI training, rendering, VFX, CAD, and creative work — Threadripper Pro will be faster, simpler, and more cost-effective. We tell you honestly whether EPYC or Threadripper Pro fits your workflow. Request a consultation here.
When EPYC is the right platform
Versus Threadripper Pro
Threadripper Pro covers the vast majority of workstation workloads: up to 96 cores, 2TB of 8-channel ECC memory, 128 PCIe 5.0 lanes, four GPUs at full bandwidth. Step up to EPYC only when you specifically need one of the following: more than 2TB of RAM (EPYC supports 6TB per socket), 12-channel memory bandwidth instead of 8-channel, more than 96 cores in a single socket, multi-socket (2P) capability for 12TB+ total RAM, or IPMI/BMC remote management. If none of those apply, Threadripper Pro is the right platform and will be simpler and more cost-effective. See our Threadripper Pro workstation page for details.
Versus Intel Xeon
Intel Xeon Scalable 6th generation (Granite Rapids) is EPYC’s direct competitor on server and HPC workloads. Xeon has strengths in certain ISV-certified scientific pipelines and in workflows tuned for Intel’s AVX-512 and AMX extensions. EPYC typically wins on memory bandwidth, on per-dollar core count, and on workloads that scale across many cores without specific Intel optimizations. For most HPC and AI-serving workloads, EPYC is the stronger general-purpose choice in 2026.
Versus an EPYC rack server
The CPU and motherboard silicon are identical. An EPYC workstation uses a tower chassis with consumer-friendly acoustics and I/O for desk-side placement. An EPYC rack server uses 1U, 2U, or 4U form factor with higher fan noise, hot-aisle airflow, and rack-rail mounting. Choose the workstation form factor when the system sits next to a user and noise matters. Choose the rack form factor for data center or co-location deployment. We build both.
Platform comparison
| Feature | AMD EPYC 9005 | Threadripper Pro | Threadripper (non-Pro) | Intel Xeon 6 | Ryzen 9000 |
|---|---|---|---|---|---|
| Max cores per socket | 192 | 96 | 64 | 128 | 16 |
| Multi-socket | 2P supported | Single only | Single only | 2P supported | Single only |
| Memory channels | 12 | 8 | 4 | 12 | 2 |
| Max RAM per socket | 6TB ECC | 2TB ECC | 256GB | 6TB ECC | 192GB |
| PCIe 5.0 lanes | 128 per socket | 128 | 48 | 96 per socket | 28 |
| Remote management | BMC / IPMI | None standard | None | BMC / IPMI | None |
| Best for | HPC, massive-memory AI, multi-tenant | Heavy AI, sim, 4-GPU workstation | Render, VFX, dual-GPU | HPC with Intel AVX/AMX tuning | CAD, creative, single-GPU |
What you configure
Every EPYC workstation we build is a full custom configuration. The components we help you specify:
- Processor. The 9555P (64 cores) is the entry point, delivering 12-channel memory at a lower cost per core than the Threadripper Pro 9995WX. The 9655P (96 cores) matches the Threadripper Pro 9995WX on core count but adds 12-channel memory. The 9755 (128 cores) and 9965 (192 cores) exceed any Threadripper Pro and target HPC, in-memory AI, and multi-tenant workloads. Dual-socket configurations double both core count and memory capacity.
- Memory. 12-channel DDR5 ECC RDIMM, sized to workload. Populating all 12 channels delivers peak memory bandwidth, which matters most for CFD, sparse linear algebra, and in-memory AI serving. Capacity scales from 256GB for GPU-heavy configurations up to 6TB per socket for massive-memory workloads.
- GPUs. One to four cards at full PCIe 5.0 x16 bandwidth per socket. Common configurations include 4× NVIDIA H100 PCIe for mixed training and inference, 4× RTX PRO 6000 Blackwell (384GB total VRAM) for local LLM work, or 4× H200 for frontier-scale inference. EPYC’s 128 PCIe lanes per socket allow full GPU bandwidth without compromise.
- Storage. PCIe 5.0 NVMe boot, plus enterprise U.2 NVMe arrays for high-throughput data tiers, and SATA or SAS for high-capacity cold storage. For HPC and AI preprocessing workloads, storage bandwidth is often the bottleneck and warrants careful sizing.
- Remote management and chassis. ASPEED BMC with IPMI 2.0 and Redfish API standard on workstation-class EPYC motherboards. Full-tower server-grade chassis with proper airflow for 500W+ TDP CPUs, redundant PSU options, and dual-socket capable layouts.
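The BMC described above speaks the standard DMTF Redfish API over HTTPS. As a minimal sketch, here is how a Redfish GET request against the BMC might be assembled in Python; the BMC address is a placeholder, and while `/redfish/v1/` and `/redfish/v1/Systems` are standard Redfish entry points, exact resource IDs vary by motherboard:

```python
# Sketch: building a Redfish GET request against a BMC.
# 192.0.2.10 is a placeholder BMC address (TEST-NET); substitute your
# management-port IP. Authentication is via a Redfish session token.
import urllib.request

def redfish_get(bmc_host, path, session_token=None):
    """Build a Redfish GET request (call urllib.request.urlopen to send it)."""
    req = urllib.request.Request(f"https://{bmc_host}{path}", method="GET")
    req.add_header("Accept", "application/json")
    if session_token:
        req.add_header("X-Auth-Token", session_token)
    return req

# Standard Redfish entry points exposed by most BMCs:
service_root = redfish_get("192.0.2.10", "/redfish/v1/")
systems = redfish_get("192.0.2.10", "/redfish/v1/Systems")
print(systems.full_url)  # https://192.0.2.10/redfish/v1/Systems
```

The same endpoints are reachable from any Redfish client or plain `curl`, which is what makes BMC management scriptable without vendor tooling.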
Workloads we build EPYC workstations for
EPYC workstations serve specific workloads that Threadripper Pro cannot handle. Our typical EPYC builds fall into these categories:
- Massive-memory AI inference. Serving language models where the model weights plus KV cache exceed 2TB, requiring EPYC’s 6TB per socket. See our EPYC AI / LLM server configuration for the rack-mounted variant.
- Computational fluid dynamics. ANSYS Fluent, OpenFOAM, Star-CCM+ on models that saturate 12-channel memory bandwidth and require more than 2TB of RAM. CFD on large meshes is memory-bandwidth-bound, and EPYC’s 12-channel memory is the correct platform.
- Computational chemistry and molecular dynamics. GROMACS, LAMMPS, NAMD, Gaussian, VASP on large systems. Long-running scientific simulations benefit from both EPYC’s memory capacity and server-class reliability.
- Multi-tenant research environments. Systems running multiple VMs, containers, or user sessions with real compute and memory resources per tenant. 192 cores and 6TB RAM let multiple researchers share one machine without contention.
- Seismic and geophysical processing. Energy industry workloads where large model domains and fast interconnect matter. EPYC’s memory bandwidth and PCIe capacity support these pipelines in workstation form factor.
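For the multi-tenant case above, the usual pattern is to pin each researcher's container or VM to a dedicated slice of cores. A hypothetical sketch of that partitioning (core counts and tenant count are illustrative; the resulting strings are in the format `docker run --cpuset-cpus` and cgroup cpusets accept):

```python
# Hypothetical sketch: split one 192-core EPYC socket into contiguous
# per-tenant core ranges for container or VM pinning.
def cpuset_ranges(total_cores, tenants):
    """Split [0, total_cores) into contiguous, near-equal 'lo-hi' ranges."""
    base, extra = divmod(total_cores, tenants)
    ranges, start = [], 0
    for i in range(tenants):
        count = base + (1 if i < extra else 0)  # spread any remainder
        ranges.append(f"{start}-{start + count - 1}")
        start += count
    return ranges

# Six researchers sharing one 192-core 9965:
print(cpuset_ranges(192, 6))
# ['0-31', '32-63', '64-95', '96-127', '128-159', '160-191']
```

Each tenant then gets a real 32-core allocation rather than competing for a shared scheduler queue.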
Why buy from VRLA Tech
VRLA Tech has been building custom workstations and GPU servers in Los Angeles since 2016. We build for research labs, HPC teams, AI engineering groups, and scientific computing users — not for bulk retail.
Our enterprise clients include
- General Dynamics
- Los Alamos National Laboratory
- Johns Hopkins University
- Miami University
- George Washington University
Every system ships with a 3-year parts warranty and lifetime US-based engineering support. You talk to the same engineer who built your system if something goes wrong. Support includes remote BMC and IPMI configuration, driver and BIOS assistance, and hardware troubleshooting.
Lead time on EPYC workstations is typically 4 to 6 weeks. EPYC CPUs and 12-channel motherboards are often supply-constrained, particularly in 192-core 9965 and 6TB memory configurations.
Frequently asked questions
What is the difference between AMD EPYC and Threadripper Pro?
EPYC uses the SP5 socket with 12-channel DDR5 memory supporting up to 6TB per socket, while Threadripper Pro uses sTR5 with 8-channel memory and up to 2TB. EPYC supports up to 192 cores per socket and can be configured in dual-socket (2P) systems, while Threadripper Pro maxes at 96 cores single-socket. EPYC includes server-class remote management (IPMI or BMC); Threadripper Pro does not. EPYC is designed for data-center workloads and enterprise reliability; Threadripper Pro is optimized for workstation form factor and single-user productivity.
How many cores can an EPYC workstation have?
Current AMD EPYC 9005 Turin processors offer 8 to 192 cores per socket. Workstation-class configurations typically specify the 64-core 9555P, 96-core 9655P, 128-core 9755, or 192-core 9965, depending on the workload’s parallelism characteristics and budget.
When should I choose an EPYC workstation over Threadripper Pro?
Choose EPYC when your workload needs more than 2TB of RAM, requires 12-channel memory bandwidth rather than 8-channel, benefits from more than 96 cores in a single socket, or requires server-class remote management (IPMI or BMC). Typical EPYC-over-Threadripper-Pro cases include large CFD simulation, computational chemistry, serving very large language models in memory, and multi-tenant research environments.
Is an EPYC workstation overkill for single-user workloads?
For most single-user workloads, yes. EPYC is justified when specific requirements cannot be met by Threadripper Pro: more than 2TB of RAM, more than 96 cores, 12-channel memory bandwidth, multi-socket expansion, or IPMI remote management. If none of those apply, Threadripper Pro will be faster per dollar and simpler to maintain for single-user work.
How much RAM can an EPYC workstation support?
EPYC 9005 Turin supports up to 6TB of DDR5 ECC memory per socket using 12-channel configurations with 512GB RDIMMs. Dual-socket EPYC systems can reach up to 12TB of total memory. This capacity matters for in-memory database work, massive CFD models, large-context language model serving, and scientific simulations that exceed Threadripper Pro’s 2TB ceiling.
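The capacity figures above follow directly from the channel and DIMM arithmetic:

```python
# Capacity arithmetic from the figures above: 12 channels populated
# with 512GB RDIMMs, in single- and dual-socket configurations.
CHANNELS = 12
DIMM_GB = 512   # largest RDIMM size cited above
SOCKETS = 2     # dual-socket (2P)

per_socket_tb = CHANNELS * DIMM_GB / 1024
total_tb = per_socket_tb * SOCKETS
print(per_socket_tb, total_tb)  # 6.0 12.0
```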
Does EPYC have more memory bandwidth than Threadripper Pro?
Yes. EPYC 9005 uses 12-channel DDR5 memory, while Threadripper Pro uses 8-channel. For memory-bound workloads such as CFD simulation, sparse linear algebra, and serving large language models at inference time, EPYC’s higher memory bandwidth can meaningfully reduce wall-clock time.
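The bandwidth advantage is roughly the channel-count ratio. As an illustrative calculation only (the DDR5-6000 speed here is an assumption for the sketch; actual supported speeds depend on CPU, DIMM population, and BIOS settings):

```python
# Illustrative theoretical peak memory bandwidth.
# Peak = channels × transfers/s × bytes per transfer (DDR5 bus is 8 bytes).
def peak_gbs(channels, mt_per_s, bus_bytes=8):
    return channels * mt_per_s * bus_bytes / 1000  # GB/s

epyc = peak_gbs(channels=12, mt_per_s=6000)   # assumed DDR5-6000
trpro = peak_gbs(channels=8, mt_per_s=6000)   # same assumed speed
print(epyc, trpro, epyc / trpro)  # 576.0 384.0 1.5
```

At matched DIMM speeds, 12 channels give 1.5× the theoretical peak of 8 channels, which is why memory-bound codes see the gap directly in wall-clock time.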
Can I use EPYC for AI workloads?
Yes. EPYC is particularly well-suited for AI workloads where CPU memory bandwidth and capacity matter: serving large language models where model weights exceed GPU VRAM, multi-tenant inference infrastructure, and preprocessing pipelines moving large datasets. For GPU-bound training, the GPUs do the work and CPU platform choice matters less; Threadripper Pro is typically simpler for single-user GPU-heavy AI.
What is the difference between an EPYC workstation and an EPYC server?
The silicon is identical. An EPYC workstation uses a tower chassis for desk-side placement, consumer-friendly I/O, and typical workstation cooling. An EPYC server uses a 1U, 2U, or 4U rackmount chassis for data-center deployment with higher fan noise, hot-aisle airflow, and rack-rail mounting. Choose the workstation form factor when the system sits next to a user; choose server for rack installation.
Does an EPYC workstation support multi-GPU configurations?
Yes. EPYC provides up to 128 PCIe 5.0 lanes per socket, supporting four or more double-wide GPUs at full x16 bandwidth each. This matches or exceeds Threadripper Pro’s 128 PCIe lanes. For dense GPU workloads combined with massive memory requirements, EPYC is the correct choice.
What warranty comes with a VRLA Tech EPYC workstation?
All VRLA Tech workstations include a 3-year parts warranty and lifetime US-based engineering support. Customers work directly with the engineer who built their system. Support includes remote diagnostics, BMC and IPMI configuration assistance, driver and BIOS support, and component troubleshooting.
Additional information
| Weight | 50 lbs |
| Dimensions | 26 × 14 × 27 in |
Dr. Matthew
5 stars across the board. Quality, customer service, price, speed, warranty. Excellent job VRLA Tech. I'll be back soon, and so will my colleagues.
Steven
VRLA Tech delivered fast and strong. Got my project up and running ASAP and I have already been back 3 times. I will continue to return as my work progresses. Their price is fair and their craftsmanship is ideal. Highly recommended!!!!