Podman + NVIDIA Setup
This guide configures Podman with NVIDIA GPU support on Linux systems. Podman runs containers without a daemon and supports rootless operation for better security.
Step 1: Install Podman
Install Podman using your distribution’s package manager:
# Ubuntu/Debian
sudo apt update && sudo apt install podman

# Fedora/RHEL
sudo dnf install podman

# Check installation
podman --version
Verify Podman is working:
podman run --rm docker.io/library/hello-world
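Podman defaults to rootless mode when invoked without sudo. As an optional sanity check, you can confirm this with a format query (the template field below comes from podman info’s output):

# Confirm Podman is running rootless (prints "true" for a rootless setup)
podman info --format '{{.Host.Security.Rootless}}'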
Step 2: Install NVIDIA Container Toolkit
Install the NVIDIA container toolkit:
# Add NVIDIA repository (same as Docker setup)
distribution=$(. /etc/os-release; echo $ID$VERSION_ID) \
  && curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
  && curl -s -L https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.list | \
    sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
    sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
# Install toolkit
sudo apt update && sudo apt install nvidia-container-toolkit
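The repository commands above are apt-based. On Fedora or RHEL, a common approach (sketched below; confirm the repo URL against NVIDIA’s current toolkit documentation) is to add NVIDIA’s rpm repository and install with dnf:

# Fedora/RHEL: add the NVIDIA toolkit repo and install with dnf
curl -s -L https://nvidia.github.io/libnvidia-container/stable/rpm/nvidia-container-toolkit.repo | \
  sudo tee /etc/yum.repos.d/nvidia-container-toolkit.repo
sudo dnf install nvidia-container-toolkit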
Configure Podman runtime:
sudo nvidia-ctk runtime configure --runtime=podman
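Before continuing, it’s worth confirming that the NVIDIA driver and the toolkit itself work on the host; if either command fails, fix the driver installation before touching containers:

# Host-side sanity checks
nvidia-smi
nvidia-ctk --version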
Step 3: Generate CDI specification
Podman uses the Container Device Interface (CDI) for GPU access:
# Generate CDI specification for your GPUs
sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml

# Verify CDI file was created
ls -la /etc/cdi/
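You can also list the device names the toolkit generated; these are the names you pass to --device in the next step:

# List generated CDI device names (e.g. nvidia.com/gpu=0, nvidia.com/gpu=all)
nvidia-ctk cdi list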
Step 4: Test GPU access
Test that containers can access your GPU:
podman run --rm --device nvidia.com/gpu=all docker.io/nvidia/cuda:12.0.0-base-ubuntu22.04 nvidia-smi
You should see your GPU information displayed.
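On multi-GPU hosts you can expose a single GPU by its CDI index instead of all of them (indices come from nvidia-ctk cdi list above), for example:

# Expose only the first GPU to the container
podman run --rm --device nvidia.com/gpu=0 docker.io/nvidia/cuda:12.0.0-base-ubuntu22.04 nvidia-smi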
Step 5: Configure SELinux (Fedora/RHEL only)
If you’re using Fedora, RHEL, or CentOS, configure SELinux:
# Check if SELinux is enforcing
getenforce

# If output is "Enforcing", run:
sudo setsebool -P container_use_devices=on
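If you would rather not change the system-wide boolean, Podman can relax SELinux label separation for a single container instead; this is less strict, so treat it as a testing fallback rather than the default:

# Per-container alternative to the boolean (testing only)
podman run --rm --security-opt label=disable --device nvidia.com/gpu=all docker.io/nvidia/cuda:12.0.0-base-ubuntu22.04 nvidia-smi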
Step 6: Deploy GPUFlow provider
Create a data directory:
sudo mkdir -p /opt/gpuflow
sudo chown $USER:$USER /opt/gpuflow
Run the provider container:
podman run -d \
  --name gpuflow-provider \
  --restart=unless-stopped \
  --device nvidia.com/gpu=all \
  --network=host \
  -v /opt/gpuflow:/data:Z \
  -e GPUFLOW_API_KEY="get-from-dashboard" \
  ghcr.io/gpuflow/provider:latest
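To keep the API key out of your shell history, one option is a Podman secret injected as an environment variable; the secret name below is just an example, and the remaining flags match the run command above:

# Store the key as a Podman secret and inject it as GPUFLOW_API_KEY
printf 'get-from-dashboard' | podman secret create gpuflow-api-key -
podman run -d \
  --name gpuflow-provider \
  --restart=unless-stopped \
  --device nvidia.com/gpu=all \
  --network=host \
  -v /opt/gpuflow:/data:Z \
  --secret gpuflow-api-key,type=env,target=GPUFLOW_API_KEY \
  ghcr.io/gpuflow/provider:latest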
Check the provider status:
podman ps
podman logs gpuflow-provider
Expected log output:
- GPU detection successful
- Network connectivity established
- Provider registered with GPUFlow
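If the logs report a GPU detection failure, check whether the running container can see the device at all (this assumes the provider image ships nvidia-smi):

# Verify the running container can see the GPU
podman exec gpuflow-provider nvidia-smi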
Step 7: Enable systemd service (optional)
To start the provider automatically on boot:
# Generate systemd service file
podman generate systemd --name gpuflow-provider --files --new

# Move service file to systemd directory
sudo mv container-gpuflow-provider.service /etc/systemd/system/

# Enable and start service
sudo systemctl daemon-reload
sudo systemctl enable container-gpuflow-provider.service
sudo systemctl start container-gpuflow-provider.service
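On Podman 4.4 and newer, a Quadlet unit is the recommended replacement for podman generate systemd. The sketch below is one way to express the Step 6 run command as a Quadlet file; the file path and keys follow the podman-systemd.unit documentation, so double-check them against your Podman version:

# Write a Quadlet unit; Quadlet generates gpuflow-provider.service from it
sudo tee /etc/containers/systemd/gpuflow-provider.container <<'EOF'
[Container]
ContainerName=gpuflow-provider
Image=ghcr.io/gpuflow/provider:latest
AddDevice=nvidia.com/gpu=all
Network=host
Volume=/opt/gpuflow:/data:Z
Environment=GPUFLOW_API_KEY=get-from-dashboard

[Service]
Restart=always

[Install]
WantedBy=multi-user.target
EOF

# Reload systemd and start the generated service
sudo systemctl daemon-reload
sudo systemctl start gpuflow-provider.service

Quadlet units start on boot via the [Install] section; there is no separate systemctl enable step.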
Managing the provider
Common commands for managing your provider:
# Check status
podman ps -a

# View logs
podman logs gpuflow-provider -f

# Restart provider
podman restart gpuflow-provider

# Stop provider
podman stop gpuflow-provider

# Update provider
podman pull ghcr.io/gpuflow/provider:latest
podman stop gpuflow-provider
podman rm gpuflow-provider
# Re-run the podman run command from Step 6
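If the provider runs under systemd (Step 7) and the container carries the io.containers.autoupdate=registry label, Podman’s auto-update command can pull new images and restart the unit for you; treat this as optional and test with --dry-run first:

# Preview, then apply, image-based updates for labelled containers
podman auto-update --dry-run
podman auto-update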
Next steps
Your provider is now running. Complete the setup:
- Create your GPUFlow account
- Link your hardware in the dashboard
- Create your first GPU listing