Containerized Flux

Important: Model Access Requirements

The Flux Dev model is gated on Hugging Face. Before using it, you must:

  • Log into Hugging Face at huggingface.co
  • Go to the model page: black-forest-labs/FLUX.1-dev
  • Click “Access Repository” and accept the model’s terms.

If you don’t do this, the container will fail when trying to download the model.
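
One quick way to confirm access before launching the container is to probe the model API with your token (HF_TOKEN setup is covered under "Setting Up Hugging Face Cache and Token" below). This is just a sanity-check sketch, not part of the official setup:

```shell
# Probe the gated repo with your token.
# 200 = access granted; 401/403 = token missing or terms not yet accepted.
STATUS=$(curl -s -o /dev/null -w "%{http_code}" \
    -H "Authorization: Bearer $HF_TOKEN" \
    "https://huggingface.co/api/models/black-forest-labs/FLUX.1-dev" \
    2>/dev/null) || STATUS="000"
echo "HTTP status: $STATUS"
```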

Flux Dev in a Container

Flux Dev is a powerful AI model for image generation, but setting it up is a hassle.

  • Library versions conflict.
  • GPU dependencies break.
  • Python environments turn into an unmaintainable mess.

Instead of dealing with dependency issues, I built a containerized Flux Dev setup: a self-contained Docker image that lets you start generating images instantly.

No manual installs. No environment headaches. Just run a container, pass a prompt, and get an image.

Why Containerize Flux Dev?

Most AI models require:

  • Torch versions that conflict with other Python projects.
  • Hugging Face dependencies that need authentication.
  • GPU drivers that may or may not cooperate.

A containerized solution fixes this:

  • Zero Install Hassles – No need to install Python, Torch, or Diffusers.
  • 100% Repeatable – Works the same on your laptop, a cloud VM, or a Kubernetes cluster.
  • GPU or CPU Support – Automatically detects and uses a GPU when available.
  • Preloaded with Model Caching – Uses an external Hugging Face cache, so you don’t redownload models every time.

This setup is optimized for 24GB GPUs like the RTX 3090, generating each image in ~8 seconds.
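
To see what GPU (if any) the container will pick up, a quick host-side check looks like this. It assumes the NVIDIA driver is installed when a GPU is present:

```shell
# Report the GPU and VRAM visible to the host; the container falls back
# to CPU when no GPU is available.
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi --query-gpu=name,memory.total --format=csv,noheader
else
    echo "nvidia-smi not found; runs will fall back to CPU"
fi
```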

Setting Up Hugging Face Cache and Token

To prevent the container from re-downloading the model every time, set up a persistent model cache on your host machine.

Step 1: Create a Hugging Face Model Cache Directory

I use /data/models because it sits on a big, beefy SSD rather than the lean, mean NVMe that holds my host OS. Whatever you choose, pick a location with plenty of free space and keep it consistent across containers and environments to make your life easier.

mkdir -p /data/models

This directory will store all downloaded models and should be mounted into the container.

Step 2: Get Your Hugging Face API Token

Create an access token at huggingface.co/settings/tokens (a read-scoped token is enough), then export it in your shell:

export HF_TOKEN="your_huggingface_api_token"

This ensures that your container can authenticate and download the model.
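
If you'd rather not retype the export in every new shell, one option is to keep the token in a small env file and source it when needed. The filename flux.env here is hypothetical; any path kept out of version control works:

```shell
# Store the token in a local env file (hypothetical name) and load it.
# Keep this file out of version control.
echo 'export HF_TOKEN="your_huggingface_api_token"' > flux.env
chmod 600 flux.env
source ./flux.env
echo "token loaded: ${HF_TOKEN:+yes}"
```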

How to Use This

Step 1: Pull the Docker Image

If you just want to run it, grab the pre-built image:

docker pull ghcr.io/codysnider/flux:latest

Or build it yourself:

git clone https://github.com/codysnider/flux.git
cd flux
docker build -t ghcr.io/codysnider/flux .

Step 2: Run the Container

To generate an image from a text prompt:

mkdir -p "$(pwd)/output"

# Generate a random seed between 1 and 9999
SEED=$((1 + RANDOM % 9999))

docker run --rm --gpus all \
    -v "/data/models:/app/hf_cache" \
    -e HF_HOME="/app/hf_cache" \
    -e HF_TOKEN="$HF_TOKEN" \
    -v "$(pwd)/output:/app/output" \
    ghcr.io/codysnider/flux:latest \
    --prompt "A potato farmer holding a sign that says 'Flux on a potato'" \
    --width 512 --height 512 --steps 20 --seed "$SEED" \
    --output "/app/output/potato.png"

What this does:

  • Creates an output folder for generated images.
  • Mounts the Hugging Face model cache to avoid redownloading models.
  • Passes your Hugging Face API token for authentication.
  • Runs the container with a custom prompt and image size.
  • Outputs a PNG file with the generated image.

After it runs, check output/potato.png for the result.
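
The same invocation scales to a simple batch loop. This is a sketch (the prompts and filenames are made up) that reuses the mounts and flags from the command above and keeps going if one run fails:

```shell
# Sketch: one container run per prompt; example prompts are placeholders.
PROMPTS=(
    "A potato farmer holding a sign that says 'Flux on a potato'"
    "A lighthouse in a thunderstorm, oil painting"
)
mkdir -p "$(pwd)/output"
i=0
for PROMPT in "${PROMPTS[@]}"; do
    i=$((i + 1))
    SEED=$((1 + RANDOM % 9999))
    docker run --rm --gpus all \
        -v "/data/models:/app/hf_cache" \
        -e HF_HOME="/app/hf_cache" \
        -e HF_TOKEN="$HF_TOKEN" \
        -v "$(pwd)/output:/app/output" \
        ghcr.io/codysnider/flux:latest \
        --prompt "$PROMPT" \
        --width 512 --height 512 --steps 20 --seed "$SEED" \
        --output "/app/output/image_${i}.png" \
        || echo "run $i failed, continuing"
done
echo "finished $i runs"
```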

Performance

This setup is tested on an RTX 3090 (24GB VRAM) and generates images in:

  • ~8 seconds for 512x512 resolution (20 steps)

Automated Testing

To verify the setup, run this test script:

./test.sh

This will:

  • Run the container.
  • Measure execution time.
  • Validate the output image format.
  • Check that a 512x512 image is generated correctly.
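
The real script lives in the repo; a minimal sketch of the format check it describes (the exact logic may differ) looks like this:

```shell
# Sketch of the format check described above; the actual test.sh may differ.
check_png() {
    # Succeed only if the file exists and `file` reports a 512x512 PNG.
    [ -f "$1" ] && file "$1" | grep -q "PNG image data, 512 x 512"
}

if check_png "output/potato.png"; then
    echo "OK: 512x512 PNG generated"
else
    echo "FAIL: missing or wrong format"
fi
```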

Final Thoughts

With this containerized Flux Dev setup, you don’t have to waste time installing dependencies or fixing broken environments.

GitHub: codysnider/flux