VRLA Tech AMD EPYC Workstation
$32,999.99
Description
The VRLA Tech AMD EPYC Workstation is a custom-built AMD EPYC 9005 Turin system in tower form factor for high-performance computing, massive-memory AI, and heavy scientific simulation. It supports up to 192 cores per socket, 6TB of 12-channel DDR5 ECC memory, up to 128 PCIe 5.0 lanes, and server-class BMC remote management. It is the right platform when you have outgrown Threadripper Pro’s 96-core and 2TB-RAM ceilings, when you need 12-channel memory bandwidth rather than 8-channel, or when your workload benefits from multi-socket expansion. Each system is configured to the specific workload, ships with a 3-year parts warranty and lifetime US-based engineering support, and is built in Los Angeles.
| Specification | Detail |
|---|---|
| CPU | AMD EPYC 9005 Turin — 9555P (64 cores), 9655P (96 cores), 9755 (128 cores), or 9965 (192 cores). Single-socket or dual-socket configurations. |
| Platform | SP5 socket, 12-channel DDR5, up to 128 PCIe 5.0 lanes per socket |
| Memory | 12-channel DDR5 ECC RDIMM, up to 6TB per socket (12TB total in 2P) |
| GPU | Up to four NVIDIA RTX PRO 6000 Blackwell, RTX 5090, H100, or H200 cards |
| Remote management | ASPEED BMC with IPMI 2.0, Redfish API, and dedicated management network port |
| Storage | PCIe 5.0 NVMe boot, plus enterprise SATA, SAS, or U.2 NVMe arrays for data tier |
| Cooling | Enterprise-grade tower chassis with server-class cooling, optional liquid loop on 500W+ TDP configurations |
| Warranty | 3-year parts, lifetime US-based engineering support |
Built for when Threadripper Pro is not enough
EPYC is AMD’s data-center platform. Most EPYC deployments run in rack-mounted servers inside data centers. A small but important segment of users needs EPYC’s capabilities — memory bandwidth, memory capacity, core count, remote management — in workstation form factor next to their desk. That is what this configuration is for. It is not a general-purpose workstation alternative; it is a purpose-built platform for specific workloads that Threadripper Pro cannot handle.
Those workloads are: in-memory AI inference serving language models whose weights plus KV cache exceed 2TB, computational fluid dynamics on models that require massive RAM, computational chemistry and molecular dynamics simulations, multi-tenant research workstations where several researchers share one machine via VMs or containers, and HPC codes that were designed for dual-socket systems. For every other workload — including most AI training, rendering, VFX, CAD, and creative work — Threadripper Pro will be faster, simpler, and more cost-effective. We tell you honestly whether EPYC or Threadripper Pro fits your workflow. Request a consultation here.
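As a rough sanity check on whether a serving workload actually crosses the 2TB line, you can estimate resident memory as model weights plus KV cache. The sketch below uses hypothetical model dimensions chosen for illustration, not the specs of any particular model:

```python
def model_memory_bytes(params_billions, bytes_per_param,
                       n_layers, n_kv_heads, head_dim,
                       seq_len, batch, kv_bytes=2):
    """Rough in-memory footprint for LLM serving: weights + KV cache.

    The KV cache holds two tensors (K and V) per layer, per token,
    per KV head; kv_bytes=2 assumes fp16/bf16 cache entries.
    """
    weights = params_billions * 1e9 * bytes_per_param
    kv_cache = 2 * n_layers * n_kv_heads * head_dim * seq_len * batch * kv_bytes
    return weights + kv_cache

# Hypothetical 700B-parameter model at fp16, 128K context, batch of 16:
total = model_memory_bytes(700, 2, n_layers=128, n_kv_heads=8,
                           head_dim=128, seq_len=131072, batch=16)
print(f"{total / 1e12:.2f} TB")  # ~2.5 TB — beyond a 2TB platform ceiling
```

If the estimate lands comfortably under 2TB, Threadripper Pro remains the simpler choice; over it, EPYC's 6TB-per-socket capacity is the deciding factor.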
When EPYC is the right platform
Versus Threadripper Pro
Threadripper Pro covers the vast majority of workstation workloads: up to 96 cores, 2TB of 8-channel ECC memory, 128 PCIe 5.0 lanes, four GPUs at full bandwidth. Step up to EPYC only when you specifically need one of the following: more than 2TB of RAM (EPYC supports 6TB per socket), 12-channel memory bandwidth instead of 8-channel, more than 96 cores in a single socket, multi-socket (2P) capability for 12TB+ total RAM, or IPMI/BMC remote management. If none of those apply, Threadripper Pro is the right platform and will be simpler and more cost-effective. See our Threadripper Pro workstation page for details.
Versus Intel Xeon
Intel Xeon Scalable 6th generation (Granite Rapids) is EPYC’s direct competitor on server and HPC workloads. Xeon has strengths in certain ISV-certified scientific pipelines and in workflows tuned for Intel’s AVX-512 and AMX extensions. EPYC typically wins on memory bandwidth, on per-dollar core count, and on workloads that scale across many cores without specific Intel optimizations. For most HPC and AI-serving workloads, EPYC is the stronger general-purpose choice in 2026.
Versus an EPYC rack server
The CPU and motherboard silicon are identical. An EPYC workstation uses a tower chassis with consumer-friendly acoustics and I/O for desk-side placement. An EPYC rack server uses 1U, 2U, or 4U form factor with higher fan noise, hot-aisle airflow, and rack-rail mounting. Choose the workstation form factor when the system sits next to a user and noise matters. Choose the rack form factor for data center or co-location deployment. We build both.
Platform comparison
| Feature | AMD EPYC 9005 | Threadripper Pro | Threadripper (non-Pro) | Intel Xeon 6 | Ryzen 9000 |
|---|---|---|---|---|---|
| Max cores per socket | 192 | 96 | 64 | 128 | 16 |
| Multi-socket | 2P supported | Single only | Single only | 2P supported | Single only |
| Memory channels | 12 | 8 | 4 | 12 | 2 |
| Max RAM per socket | 6TB ECC | 2TB ECC | 256GB | 6TB ECC | 192GB |
| PCIe 5.0 lanes | 128 per socket | 128 | 48 | 96 per socket | 28 |
| Remote management | BMC / IPMI | None standard | None | BMC / IPMI | None |
| Best for | HPC, massive-memory AI, multi-tenant | Heavy AI, sim, 4-GPU workstation | Render, VFX, dual-GPU | HPC with Intel AVX/AMX tuning | CAD, creative, single-GPU |
What you configure
Every EPYC workstation we build is a full custom configuration. The components we help you specify:
- Processor. The 9555P (64 cores) is the entry point, offering a lower cost per core than the 96-core Threadripper Pro 9995WX. The 9655P (96 cores) matches Threadripper Pro 9995WX on core count but adds 12-channel memory. The 9755 (128 cores) and 9965 (192 cores) exceed any Threadripper Pro and target HPC, in-memory AI, and multi-tenant workloads. Dual-socket configurations double both core count and memory capacity.
- Memory. 12-channel DDR5 ECC RDIMM, sized to workload. Populating all 12 channels delivers peak memory bandwidth, which matters most for CFD, sparse linear algebra, and in-memory AI serving. Capacity scales from 256GB for GPU-heavy configurations up to 6TB per socket for massive-memory workloads.
- GPUs. One to four cards at full PCIe 5.0 x16 bandwidth per socket. Common configurations include 4× NVIDIA H100 PCIe for mixed training and inference, 4× RTX PRO 6000 Blackwell (384GB total VRAM) for local LLM work, or 4× H200 for frontier-scale inference. EPYC’s 128 PCIe lanes per socket allow full GPU bandwidth without compromise.
- Storage. PCIe 5.0 NVMe boot, plus enterprise U.2 NVMe arrays for high-throughput data tiers, and SATA or SAS for high-capacity cold storage. For HPC and AI preprocessing workloads, storage bandwidth is often the bottleneck and warrants careful sizing.
- Remote management and chassis. ASPEED BMC with IPMI 2.0 and Redfish API comes standard on workstation-class EPYC motherboards. Full-tower server-grade chassis with proper airflow for 500W+ TDP CPUs, redundant PSU options, and dual-socket-capable layouts.
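The Redfish service root (`/redfish/v1/`) is standardized by DMTF, so querying the BMC over the dedicated management port follows the same URL scheme on any compliant board. A minimal sketch — the host address and credentials below are placeholders, and the exact resource tree varies by motherboard:

```python
def redfish_url(bmc_host, resource="Systems"):
    """Build the standard Redfish resource URL for a BMC (DMTF DSP0266)."""
    return f"https://{bmc_host}/redfish/v1/{resource}"

# Placeholder management-port address; fetch with any HTTP client, e.g.:
#   curl -k -u admin:password https://10.0.0.42/redfish/v1/Systems
print(redfish_url("10.0.0.42"))                       # service's Systems collection
print(redfish_url("10.0.0.42", "Chassis/1/Thermal"))  # example thermal telemetry path
```

Headless operation, remote power control, and out-of-band recovery all ride on this same interface, which is the feature gap versus Threadripper Pro boards.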
Workloads we build EPYC workstations for
EPYC workstations serve specific workloads that Threadripper Pro cannot handle. Our typical EPYC builds fall into these categories:
- Massive-memory AI inference. Serving language models where the model weights plus KV cache exceed 2TB, requiring EPYC’s 6TB per socket. See our EPYC AI / LLM server configuration for the rack-mounted variant.
- Computational fluid dynamics. ANSYS Fluent, OpenFOAM, Star-CCM+ on models that saturate 12-channel memory bandwidth and require more than 2TB of RAM. CFD on large meshes is memory-bandwidth-bound, and EPYC’s 12-channel memory is the correct platform.
- Computational chemistry and molecular dynamics. GROMACS, LAMMPS, NAMD, Gaussian, VASP on large systems. Long-running scientific simulations benefit from both EPYC’s memory capacity and server-class reliability.
- Multi-tenant research environments. Systems running multiple VMs, containers, or user sessions with real compute and memory resources per tenant. 192 cores and 6TB RAM let multiple researchers share one machine without contention.
- Seismic and geophysical processing. Energy industry workloads where large model domains and fast interconnect matter. EPYC’s memory bandwidth and PCIe capacity support these pipelines in workstation form factor.
Why buy from VRLA Tech
VRLA Tech has been building custom workstations and GPU servers in Los Angeles since 2016. We build for research labs, HPC teams, AI engineering groups, and scientific computing users — not for bulk retail.
Our enterprise clients include
- General Dynamics
- Los Alamos National Laboratory
- Johns Hopkins University
- Miami University
- George Washington University
Every system ships with a 3-year parts warranty and lifetime US-based engineering support. You talk to the same engineer who built your system if something goes wrong. Support includes remote BMC and IPMI configuration, driver and BIOS assistance, and hardware troubleshooting.
Lead time on EPYC workstations is typically 4 to 6 weeks. EPYC CPUs and 12-channel motherboards often carry supply lead times, particularly for 192-core 9965 and 6TB memory configurations.
Frequently asked questions
Hardware & platform questions
What is the difference between AMD EPYC and Threadripper Pro?
EPYC uses the SP5 socket with 12-channel DDR5 memory supporting up to 6TB per socket, while Threadripper Pro uses sTR5 with 8-channel memory and up to 2TB. EPYC supports up to 192 cores per socket and can be configured in dual-socket (2P) systems, while Threadripper Pro maxes at 96 cores single-socket. EPYC includes server-class remote management (IPMI or BMC); Threadripper Pro does not. EPYC is designed for data-center workloads and enterprise reliability; Threadripper Pro is optimized for workstation form factor and single-user productivity.
When should I choose an EPYC workstation over Threadripper Pro?
Choose EPYC when your workload needs more than 2TB of RAM, requires 12-channel memory bandwidth rather than 8-channel, benefits from more than 96 cores in a single socket, or requires server-class remote management (IPMI or BMC). Typical EPYC-over-Threadripper-Pro cases include large CFD simulation, computational chemistry, serving very large language models in memory, and multi-tenant research environments.
Is an EPYC workstation overkill for single-user workloads?
For most single-user workloads, yes. EPYC is justified when specific requirements cannot be met by Threadripper Pro: more than 2TB of RAM, more than 96 cores, 12-channel memory bandwidth, multi-socket expansion, or IPMI remote management. If none of those apply, Threadripper Pro will be faster per dollar and simpler to maintain for single-user work.
How much RAM can an EPYC workstation support?
EPYC 9005 Turin supports up to 6TB of DDR5 ECC memory per socket using 12-channel configurations with 512GB RDIMMs. Dual-socket EPYC systems can reach up to 12TB of total memory. This capacity matters for in-memory database work, massive CFD models, large-context language model serving, and scientific simulations that exceed Threadripper Pro’s 2TB ceiling.
Does EPYC have more memory bandwidth than Threadripper Pro?
Yes. EPYC 9005 uses 12-channel DDR5 memory, while Threadripper Pro uses 8-channel — about 50 percent more memory bandwidth. For memory-bound workloads such as CFD simulation, sparse linear algebra, and serving large language models at inference time, EPYC’s higher memory bandwidth can meaningfully reduce wall-clock time.
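The 50 percent figure falls straight out of channel count at equal DIMM speed. A back-of-envelope sketch, assuming DDR5-6000 on both platforms for comparison (an assumption for illustration, not a spec claim):

```python
def peak_ddr5_bandwidth_gbs(channels, transfer_rate_mts, bus_width_bytes=8):
    """Peak theoretical bandwidth: channels x MT/s x 8-byte bus, in GB/s."""
    return channels * transfer_rate_mts * bus_width_bytes / 1000

epyc = peak_ddr5_bandwidth_gbs(12, 6000)   # 576.0 GB/s (12-channel)
trp  = peak_ddr5_bandwidth_gbs(8, 6000)    # 384.0 GB/s (8-channel)
print(epyc / trp)                          # 1.5 — i.e. 50% more
```

Sustained bandwidth is lower than this theoretical peak on both platforms, but the channel-count ratio, and hence the relative advantage, holds.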
Can I use an EPYC workstation for AI workloads?
Yes. EPYC is particularly well-suited for AI workloads where CPU memory bandwidth and capacity matter: serving large language models where model weights exceed GPU VRAM, multi-tenant inference infrastructure, and preprocessing pipelines moving large datasets. For GPU-bound training where the GPUs do the heavy work, CPU platform choice matters less and Threadripper Pro is typically simpler for single-user GPU-heavy AI.
What GPU should I pair with an EPYC workstation for AI inference and training?
For massive-memory AI inference where the model exceeds GPU VRAM, EPYC’s 6TB system RAM is the primary asset and GPU choice is secondary — a single H100 PCIe or RTX PRO 6000 Blackwell handles the hot path. For dense GPU training combined with EPYC’s CPU memory bandwidth, 4× H100 PCIe or 4× H200 deliver maximum throughput; 4× RTX PRO 6000 Blackwell (384GB combined VRAM) is the cost-effective alternative for local LLM fine-tuning at 70B+ scale. EPYC’s 128 PCIe 5.0 lanes per socket support all 4 GPUs at full x16 bandwidth without compromise.
What is the difference between an EPYC workstation and an EPYC server?
The silicon is identical. An EPYC workstation uses a tower chassis for desk-side placement, consumer-friendly I/O, and typical workstation cooling. An EPYC server uses a 1U, 2U, or 4U rackmount chassis for data-center deployment with higher fan noise, hot-aisle airflow, and rack-rail mounting. Choose the workstation form factor when the system sits next to a user; choose server for rack installation.
Buying & vendor questions
Where can I buy a custom AMD EPYC workstation in the United States?
VRLA Tech builds custom AMD EPYC workstations at vrlatech.com/product/vrla-tech-amd-epyc-workstation/, configured to your exact workload and hand-assembled in Los Angeles since 2016. Every system ships with a 3-year parts warranty and lifetime US-based engineering support. Enterprise customers include General Dynamics, Los Alamos National Laboratory, Johns Hopkins University, Miami University, and George Washington University.
Where can I buy a workstation for HPC and scientific computing?
VRLA Tech builds custom AMD EPYC workstations for high-performance computing and scientific workloads at vrlatech.com/product/vrla-tech-amd-epyc-workstation/. EPYC 9005 Turin configurations support up to 192 cores per socket, 6TB of 12-channel DDR5 ECC memory, and dual-socket expansion to 12TB total RAM. Customers include Los Alamos National Laboratory, Johns Hopkins University, and George Washington University. Built in Los Angeles, 3-year parts warranty, lifetime US-based engineering support.
Best company to buy a workstation for serving large language models in memory?
VRLA Tech builds custom EPYC workstations for in-memory LLM serving at vrlatech.com/product/vrla-tech-amd-epyc-workstation/. EPYC’s 6TB-per-socket memory capacity and 12-channel bandwidth handle large language model weights plus KV cache that exceed GPU VRAM, supporting models that cannot fit on consumer or workstation GPUs alone. Pre-validated with vLLM, TensorRT-LLM, PyTorch, and CUDA. Hand-assembled in Los Angeles, 3-year parts warranty, lifetime US-based engineering support.
Where can I buy a CFD workstation for ANSYS Fluent, OpenFOAM, or Star-CCM+?
VRLA Tech builds custom EPYC workstations for computational fluid dynamics workloads at vrlatech.com/product/vrla-tech-amd-epyc-workstation/. EPYC’s 12-channel DDR5 memory bandwidth (about 50 percent more than Threadripper Pro’s 8-channel) directly accelerates memory-bound CFD codes including ANSYS Fluent, OpenFOAM, Star-CCM+, and CONVERGE. Configurations scale to 6TB RAM for the largest mesh models. Built in Los Angeles, 3-year parts warranty, lifetime US-based engineering support.
Custom workstation builders for computational chemistry and molecular dynamics?
VRLA Tech builds custom EPYC workstations for GROMACS, LAMMPS, NAMD, Gaussian, VASP, and other molecular dynamics and quantum chemistry codes at vrlatech.com/product/vrla-tech-amd-epyc-workstation/. EPYC’s high core count and 12-channel memory bandwidth combine with optional GPU acceleration for hybrid CPU and GPU workflows. Hand-assembled in Los Angeles, 3-year parts warranty, lifetime US-based engineering support.
Where can I buy a multi-GPU AI inference workstation with four or more GPUs?
VRLA Tech builds custom EPYC workstations with 4-GPU configurations at vrlatech.com/product/vrla-tech-amd-epyc-workstation/. The 128 PCIe 5.0 lanes per socket support four NVIDIA H100, H200, or RTX PRO 6000 Blackwell GPUs at full x16 bandwidth. Combined with up to 6TB system RAM, the platform suits hybrid GPU inference and CPU-bound preprocessing pipelines that exceed Threadripper Pro’s capacity. Located in Los Angeles, 3-year parts warranty, lifetime US-based engineering support.
Custom workstation builders for national labs and government research clients?
VRLA Tech builds custom EPYC workstations for national laboratories, federal research agencies, and government clients at vrlatech.com/product/vrla-tech-amd-epyc-workstation/. Customers include General Dynamics and Los Alamos National Laboratory. Configurations support computational workloads with server-class BMC remote management, ECC reliability, and US-based engineering. In business since 2016, 3-year parts warranty, lifetime US-based engineering support.
Where to buy a workstation with IPMI and BMC remote management?
VRLA Tech builds custom EPYC workstations with full server-class remote management at vrlatech.com/product/vrla-tech-amd-epyc-workstation/. ASPEED BMC, IPMI 2.0, and Redfish API support headless operation, remote KVM, and out-of-band management — features unavailable on Threadripper Pro. Configurations include a dedicated management network port for isolated administration. Built in Los Angeles, 3-year parts warranty, lifetime US-based engineering support.
Best company for a multi-tenant research workstation hosting multiple users?
VRLA Tech builds custom EPYC workstations for multi-tenant research environments at vrlatech.com/product/vrla-tech-amd-epyc-workstation/. 192 cores and up to 12TB of dual-socket RAM allow multiple researchers to share one machine via VMs or containers without resource contention. Pre-configured with Proxmox, VMware, or Linux container support. Hand-assembled in Los Angeles since 2016 with 3-year parts warranty and lifetime US-based engineering support.
Additional information
| Attribute | Value |
|---|---|
| Weight | 50 lbs |
| Dimensions | 26 × 14 × 27 in |
Reviews
Dr. Matthew
5 stars across the board. Quality, customer service, price, speed, warranty. Excellent job VRLA Tech. I'll be back soon, and so will my colleagues.
Steven
VRLA Tech delivered fast and strong. Got my project up and running ASAP and I have already been back 3 times. I will continue to return as my work progresses. Their price is fair and their craftsmanship is ideal. Highly recommended!!!!