Flux.1, developed by Black Forest Labs, replaced SDXL as the dominant local AI image generation model in 2025 and remains the standard in 2026. Its 12B parameter transformer architecture produces substantially higher image quality than SDXL — particularly for photorealism, text rendering in images, and accurate prompt following — at the cost of higher VRAM requirements. This guide covers what hardware you need to run Flux.1 effectively.
Flux.1 variants and their hardware requirements
Black Forest Labs released Flux.1 in three variants. Flux.1 Schnell is the fastest: a distilled model designed for generation in as few as 1–4 denoising steps, released under the permissive Apache 2.0 license. Flux.1 Dev is the higher-quality open-weight variant, released under a non-commercial license; commercial use requires a license from Black Forest Labs. Flux.1 Pro is the API-only commercial version.
For local deployment, Flux.1 Dev is the primary target. At 12B parameters, Flux.1 Dev in BF16 requires approximately 24GB of VRAM for the transformer weights alone, and the T5-XXL and CLIP text encoders plus the VAE add several gigabytes on top. Adding ControlNet models, LoRA weights, and generation buffers raises the practical VRAM requirement to 28–32GB for comfortable use at standard 1024×1024 resolution.
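The 24GB figure follows directly from parameter count times bytes per parameter. A quick back-of-the-envelope estimate (decimal gigabytes, weights only; activations, text encoders, and buffers come on top):

```python
def weight_vram_gb(params_billions: float, bytes_per_param: float) -> float:
    """Estimate VRAM for model weights alone, in decimal GB.

    BF16 uses 2 bytes per parameter; fp8 quantization uses 1 byte.
    """
    return params_billions * 1e9 * bytes_per_param / 1e9

# Flux.1 Dev: 12B parameters
print(weight_vram_gb(12, 2))  # BF16 -> 24.0 GB
print(weight_vram_gb(12, 1))  # fp8  -> 12.0 GB
```

This is why fp8 quantized variants roughly halve the weight footprint, bringing Flux.1 Dev within reach of 16GB cards at some quality cost.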
VRAM requirements for Flux.1 workflows
| Workflow | VRAM needed | RTX 5090 (32GB)? |
|---|---|---|
| Flux.1 Schnell, standard res | 16–20GB | Yes |
| Flux.1 Dev, 1024px generation | 24–28GB | Yes |
| Flux.1 Dev, 2048px generation | 28–36GB | Yes (tight) |
| Flux.1 Dev + ControlNet | 32–48GB | Partial |
| Flux.1 LoRA training | 24–40GB | Yes |
| Flux.1 DreamBooth fine-tuning | 40–60GB | Limited |
| Flux.1 + multiple LoRAs + ControlNet | 48–64GB | No |
RTX 5090 vs RTX PRO 6000 for Flux.1
The RTX 5090 with 32GB covers Flux.1 Dev generation at standard and high resolutions, Flux.1 LoRA training, and Flux workflows without ControlNet stacking. For most AI artists who generate images and train LoRAs but do not run heavy multi-ControlNet or DreamBooth full fine-tuning sessions, the RTX 5090 is sufficient.
The RTX PRO 6000 Blackwell’s 96GB eliminates all Flux.1 constraints. Complex multi-ControlNet pipelines, DreamBooth fine-tuning at high resolution, batch generation at 4K, and running Flux.1 alongside video diffusion models simultaneously all require more VRAM than the RTX 5090 provides. For studios using Flux.1 as a production tool, the RTX PRO 6000 removes the need to manage VRAM budgets between workflow steps.
ComfyUI for Flux.1: the recommended workflow
ComfyUI is the standard interface for Flux.1 in 2026. Its node-based workflow allows precise control over the Flux.1 pipeline — selecting quantized model variants for memory efficiency, combining multiple ControlNet nodes, chaining LoRA weights, and building complex automation pipelines. Flux.1 runs with ComfyUI’s native loader nodes using either the full BF16 weights or fp8 quantized variants that reduce VRAM usage at a small quality cost.
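As a sketch of what that looks like in practice, a minimal Flux.1 Dev loading setup in ComfyUI's API-format workflow JSON uses the native UNETLoader, DualCLIPLoader, and VAELoader nodes. Node IDs and filenames below are illustrative; the `weight_dtype` of `fp8_e4m3fn` selects the fp8 quantized path:

```json
{
  "1": {"class_type": "UNETLoader",
        "inputs": {"unet_name": "flux1-dev.safetensors",
                   "weight_dtype": "fp8_e4m3fn"}},
  "2": {"class_type": "DualCLIPLoader",
        "inputs": {"clip_name1": "t5xxl_fp16.safetensors",
                   "clip_name2": "clip_l.safetensors",
                   "type": "flux"}},
  "3": {"class_type": "VAELoader",
        "inputs": {"vae_name": "ae.safetensors"}}
}
```

Setting `weight_dtype` to `default` loads the full BF16 weights instead; the rest of the graph (sampler, conditioning, LoRA, and ControlNet nodes) hangs off these three loaders.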
VRLA Tech ships generative AI workstations with ComfyUI pre-installed and Flux.1 validated before delivery. The generation pipeline is operational from day one.
Recommended configurations
AI artist — Flux.1 generation and LoRA training
- GPU: NVIDIA RTX 5090 (32GB GDDR7)
- CPU: AMD Ryzen 9 9950X
- RAM: 64GB DDR5
- Model NVMe: 2TB (Flux weights, LoRAs, ControlNets)
- Output NVMe: 4TB
Studio — production Flux.1 pipeline, DreamBooth, multi-ControlNet
- GPU: NVIDIA RTX PRO 6000 Blackwell (96GB)
- CPU: AMD Ryzen 9 9950X
- RAM: 128GB DDR5
- NVMe: 4TB model storage + 8TB outputs
Browse Flux.1 and generative AI workstations on the VRLA Tech Generative AI Workstation page.
Tell us your Flux.1 workflow
Share your generation resolutions, whether you use ControlNet, whether you train LoRAs or DreamBooth models, and whether you run video generation alongside images, and we'll configure the right VRAM setup for your workflow.
Flux.1 workstations. ComfyUI pre-installed. Ships configured.
3-year parts warranty. Lifetime US engineer support.
VRLA Tech has been building custom AI workstations since 2016. All systems ship with a 3-year parts warranty and lifetime US-based engineer support.