Podman + NVIDIA Setup

This guide configures Podman with NVIDIA GPU support on Linux systems. Podman runs containers without a daemon and supports rootless operation for better security.

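This guide assumes the NVIDIA driver is already installed on the host. If you are not sure, a quick check:

Terminal window
# Should print your GPU(s) and driver version; if it fails, install the NVIDIA driver first
nvidia-smi
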
Install Podman using your distribution’s package manager:

Terminal window
# Ubuntu/Debian
sudo apt update && sudo apt install podman
# Fedora/RHEL
sudo dnf install podman
# Check installation
podman --version

Verify Podman is working:

Terminal window
podman run docker.io/hello-world

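If you want to confirm that Podman is running rootless (the default when run as a regular user), you can inspect podman info:

Terminal window
# Prints "rootless: true" when running as an unprivileged user
podman info | grep -i rootless
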
Install the NVIDIA Container Toolkit. The repository commands below are for Ubuntu/Debian; on Fedora/RHEL, add NVIDIA's dnf repository instead and install the same nvidia-container-toolkit package with dnf:

Terminal window
# Add NVIDIA repository (same as Docker setup)
distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
&& curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
&& curl -s -L https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.list | \
sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
# Install toolkit
sudo apt update && sudo apt install nvidia-container-toolkit

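You can confirm the toolkit installed correctly by checking the CLI version:

Terminal window
# Prints the NVIDIA Container Toolkit CLI version
nvidia-ctk --version
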
Unlike Docker, Podman does not need a separate runtime configuration step (nvidia-ctk runtime configure does not support Podman as a target). Instead, Podman uses the Container Device Interface (CDI) for GPU access:

Terminal window
# Generate CDI specification for your GPUs
sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml
# Verify CDI file was created
ls -la /etc/cdi/

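You can also list the device names the CDI specification exposes; these are the names you pass to --device:

Terminal window
# Lists CDI devices, e.g. nvidia.com/gpu=0 and nvidia.com/gpu=all
nvidia-ctk cdi list
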
Test that containers can access your GPU:

Terminal window
podman run --rm --device nvidia.com/gpu=all docker.io/nvidia/cuda:12.0.0-base-ubuntu22.04 nvidia-smi

You should see your GPU information displayed.

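If the machine has multiple GPUs and you only want to expose one of them to a container, you can pass an individual CDI device name instead of all (the index must match a device listed by nvidia-ctk cdi list):

Terminal window
# Expose only the first GPU to the container
podman run --rm --device nvidia.com/gpu=0 docker.io/nvidia/cuda:12.0.0-base-ubuntu22.04 nvidia-smi
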
Step 5: Configure SELinux (Fedora/RHEL only)

If you’re using Fedora, RHEL, or CentOS, configure SELinux:

Terminal window
# Check if SELinux is enforcing
getenforce
# If output is "Enforcing", run:
sudo setsebool -P container_use_devices=on

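To confirm the boolean took effect:

Terminal window
# Should report "container_use_devices --> on"
getsebool container_use_devices
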
Create data directory:

Terminal window
sudo mkdir -p /opt/gpuflow
sudo chown $USER:$USER /opt/gpuflow

Run the provider container:

Terminal window
podman run -d \
--name gpuflow-provider \
--restart=unless-stopped \
--device nvidia.com/gpu=all \
--network=host \
-v /opt/gpuflow:/data:Z \
-e GPUFLOW_API_KEY="get-from-dashboard" \
ghcr.io/gpuflow/provider:latest

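If you prefer not to put the API key directly on the command line (where it lands in your shell history), one option is to pass it through an environment file; the path below is only an example:

Terminal window
# Create an env file containing a line GPUFLOW_API_KEY=<your key>, then reference it:
podman run -d \
--name gpuflow-provider \
--restart=unless-stopped \
--device nvidia.com/gpu=all \
--network=host \
-v /opt/gpuflow:/data:Z \
--env-file /opt/gpuflow/provider.env \
ghcr.io/gpuflow/provider:latest
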
Check the provider status:

Terminal window
podman ps
podman logs gpuflow-provider

Expected log output:

  • GPU detection successful
  • Network connectivity established
  • Provider registered with GPUFlow

To start the provider automatically on boot:

Terminal window
# Generate systemd service file
podman generate systemd --name gpuflow-provider --files --new
# Move service file to systemd directory
sudo mv container-gpuflow-provider.service /etc/systemd/system/
# Enable and start service
sudo systemctl daemon-reload
sudo systemctl enable container-gpuflow-provider.service
sudo systemctl start container-gpuflow-provider.service

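Note that moving the unit into /etc/systemd/system makes it a system-wide (root) service. If you created the container as a regular user and want to keep it rootless, a user-level unit is the usual alternative; a sketch:

Terminal window
# Keep the unit in your user systemd directory instead
mkdir -p ~/.config/systemd/user
mv container-gpuflow-provider.service ~/.config/systemd/user/
systemctl --user daemon-reload
systemctl --user enable --now container-gpuflow-provider.service
# Allow the user service to keep running after logout
loginctl enable-linger $USER

On Podman 4.4 and newer, Quadlet files are the recommended replacement for podman generate systemd, but the generated unit still works.
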
Common commands for managing your provider:

Terminal window
# Check status
podman ps -a
# View logs
podman logs gpuflow-provider -f
# Restart provider
podman restart gpuflow-provider
# Stop provider
podman stop gpuflow-provider
# Update provider
podman pull ghcr.io/gpuflow/provider:latest
podman stop gpuflow-provider
podman rm gpuflow-provider
# Re-run the podman run command from the provider setup above

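If the provider runs under the systemd unit generated above, Podman's auto-update mechanism can handle image updates for you; a sketch, assuming you add the autoupdate label when creating the container:

Terminal window
# Add this label to the podman run command when creating the container:
#   --label io.containers.autoupdate=registry
# Then, to pull newer images and restart the corresponding systemd units:
podman auto-update
# The bundled podman-auto-update.timer can run this check on a schedule
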
Your provider is now running. Complete the setup:

  1. Create your GPUFlow account
  2. Link your hardware in the dashboard
  3. Create your first GPU listing