Containerized NSFW Image Classification

Detecting NSFW content in images with a pretrained classifier is simple and surprisingly light on system resources. I built a containerized NSFW image classifier: a self-contained Docker image that lets you classify images instantly. No package conflicts. No CUDA headaches. Just run a container, pass an image, and get a classification.


Why Containerize NSFW Classification?

Running AI models outside a container is a mess:

  • PyTorch dependencies may conflict with those of your other projects.
  • CUDA versions are notoriously finicky.
  • Manually setting up Python, PIL, and Transformers is a waste of time.

A containerized solution solves these problems:

  • Zero Install Hassles – No need to install Python, Torch, or Transformers manually.
  • Fully Repeatable – Works identically on your local machine or a cloud GPU.
  • CPU or GPU Support – Detects and uses a GPU when available.
  • Preloaded with Model Caching – Uses an external Hugging Face cache to prevent redownloading.

This setup runs most efficiently on a GPU, but it also works in CPU-only environments.
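
If you plan to use the GPU, a quick sanity check is to confirm Docker can see it before running the classifier. The command below is a common check that assumes the NVIDIA Container Toolkit is installed on the host; if it fails, the classifier will simply fall back to CPU.

# Should print your GPU table; if it errors, fix the host's NVIDIA Container Toolkit setup.
docker run --rm --gpus all ubuntu nvidia-smi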

Setting Up Hugging Face Cache and API Token

Step 1: Create a Model Cache Directory

To avoid redownloading models every time, create a persistent cache directory:

mkdir -p /data/models

This directory will store all downloaded models. To actually reuse it between runs, mount it into the container when you run the classifier; an example of that mount appears later in the How to Use This section.

Step 2: Get Your Hugging Face API Token

  • Log into Hugging Face.
  • Generate a new token with “read” access.
  • Export it in your shell:
export HF_TOKEN="your_huggingface_api_token"

This ensures the container can authenticate with Hugging Face.
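
To confirm the token actually works before involving Docker, you can hit the Hugging Face Hub's whoami endpoint directly from the shell (no local Python required):

# A valid token returns your account details as JSON; an error response means the token is wrong or lacks read access.
curl -s -H "Authorization: Bearer $HF_TOKEN" https://huggingface.co/api/whoami-v2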

How to Use This

Step 1: Pull the Docker Image

To run the classifier without building from scratch, pull the prebuilt image:

docker pull ghcr.io/codysnider/nsfw-image-detection:latest

Or build it yourself:

git clone https://github.com/codysnider/nsfw-image-detection.git
cd nsfw-image-detection
docker build -t ghcr.io/codysnider/nsfw-image-detection .

Step 2: Run the Container for Classification

To classify an image:

docker run --rm \
    -e HF_TOKEN="$HF_TOKEN" \
    -v "$(pwd)/input:/app/input" \
    ghcr.io/codysnider/nsfw-image-detection:latest \
    --image "/app/input/example.jpg"

What this does:

  • Mounts the input directory so the container can access images.
  • Passes your Hugging Face API token for authentication.
  • Runs the classifier on the provided image.
  • Prints “normal” or “nsfw” based on the model’s prediction.
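
If you have a GPU and want to reuse the model cache from earlier, a combined run might look like the sketch below. Two assumptions here: --gpus all requires the NVIDIA Container Toolkit on the host, and /root/.cache/huggingface is the default Hugging Face cache path inside the container; if the image sets HF_HOME somewhere else, mount /data/models to that path instead.

# GPU-enabled run that also persists downloaded models in /data/models (cache path is an assumption).
docker run --rm \
    --gpus all \
    -e HF_TOKEN="$HF_TOKEN" \
    -v /data/models:/root/.cache/huggingface \
    -v "$(pwd)/input:/app/input" \
    ghcr.io/codysnider/nsfw-image-detection:latest \
    --image "/app/input/example.jpg"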

Testing CPU vs. GPU Performance

Step 1: Prepare Test Images

Place two images inside an input/ folder (a quick way to set this up is sketched after this list):

  • sfw.jpg – A safe-for-work image.
  • nsfw.jpg – An NSFW image.
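
For example (the source paths below are placeholders for any two images you already have):

mkdir -p input
cp /path/to/a-safe-image.jpg input/sfw.jpg     # any safe-for-work image
cp /path/to/an-nsfw-image.jpg input/nsfw.jpg   # any NSFW image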

Step 2: Run the Test Script

Run the test script to check classification accuracy and measure CPU vs. GPU inference time:

./test.sh
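
If you want a rough idea of what such a script can look like, here is a minimal sketch (not the repository's actual test.sh): it classifies both images on CPU, then again with the GPU exposed, and times each run. It assumes the image is already pulled, HF_TOKEN is exported, and input/ contains sfw.jpg and nsfw.jpg.

#!/usr/bin/env bash
# Minimal CPU-vs-GPU timing sketch (not the repo's test.sh).
set -euo pipefail

IMAGE="ghcr.io/codysnider/nsfw-image-detection:latest"

run_case () {
    local label="$1"
    shift
    for img in sfw.jpg nsfw.jpg; do
        echo "== ${label}: ${img} =="
        # "$@" carries extra docker flags, e.g. --gpus all for the GPU case.
        time docker run --rm "$@" \
            -e HF_TOKEN="$HF_TOKEN" \
            -v "$(pwd)/input:/app/input" \
            "$IMAGE" \
            --image "/app/input/${img}"
    done
}

run_case "CPU"
run_case "GPU" --gpus all   # requires the NVIDIA Container Toolkit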

Final Thoughts

With this containerized NSFW image classifier, you don’t have to mess around with dependency issues or CUDA mismatches.

Just run a container, pass an image, and get instant results.

GitHub: codysnider/nsfw-image-detection