Introduction
Containers have become the de facto standard for application deployment, but the conversation often jumps straight to Kubernetes when discussing production workloads. While K8s excels at large-scale orchestration, many production services don’t require that level of complexity. For single-host or small-scale deployments, a well-architected Podman setup with systemd integration can provide robust, secure, and maintainable infrastructure.
This article demonstrates a production-grade container deployment using Red Hat Enterprise Linux 10, Podman Quadlets, and Traefik as a reverse proxy. We’ll walk through deploying Forgejo (a self-hosted Git service) as a practical example, covering the technical implementation and the architectural decisions behind it.

Why This Approach?
Before diving into the implementation, let’s address the fundamental design decisions:
Podman over Docker
Red Hat has made Podman the standard container runtime in RHEL for compelling technical and security reasons. This isn’t just vendor preference - it represents fundamental architectural improvements:
- Daemonless architecture: No privileged daemon running as root, reducing the attack surface significantly. Each container runs as a direct child of systemd or the user session.
- Rootless containers: Native support for running containers as unprivileged users - a first-class feature, not a bolt-on
- systemd integration: First-class integration with the init system that already manages your services. This is particularly powerful in RHEL environments where systemd’s maturity and tooling are well-established.
- OCI compliance: Full compatibility with Docker images and registries - your existing container images work without modification
- Pod support: Kubernetes-style pod concepts for grouping containers, making the transition to K8s smoother if needed
- Fork/exec model: Unlike Docker’s client-server architecture, Podman uses traditional Unix fork/exec, making it more auditable and debuggable with standard tools
From an enterprise perspective, Podman aligns with RHEL’s security-first philosophy. SELinux, user namespaces, and cgroups v2 integration are not afterthoughts but core design elements.
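As a quick illustration of the daemonless model (a minimal sketch - the nginx image and port here are arbitrary choices, not part of the deployment below), a rootless container is just an ordinary process tree owned by the invoking user:
# As an unprivileged user - no daemon, no root
podman run -d --name demo-web -p 8080:80 docker.io/library/nginx:alpine
# The container's monitor process is visible with standard tools
ps -o pid,user,comm -C conmon
# Clean up the demo container
podman rm -f demo-web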
Quadlets over Compose
Traditional Docker Compose files are imperative and require a separate daemon. Podman Quadlets leverage systemd’s unit file format, providing:
- Declarative configuration: Define the desired state, let systemd handle the lifecycle
- Native service management: Use familiar systemctl commands - the same tooling administrators already know
- Dependency management: Leverage systemd's robust dependency graph (After=, Requires=, etc.)
- Automatic updates: Built-in support for container image updates via podman-auto-update.timer
- Resource control: Direct access to systemd's cgroup integration for CPU, memory, and I/O limits
- Journal integration: All container logs automatically flow to journald, integrating with existing log infrastructure
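Each .container file is turned into an ordinary service unit by a systemd generator at daemon-reload time. To preview exactly what a Quadlet file will become before deploying it, the generator can be run in dry-run mode (path as shipped in recent RHEL/Podman packages):
# Print the service units Quadlet would generate from /etc/containers/systemd/
/usr/lib/systemd/system-generators/podman-system-generator --dryrun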
Network Segmentation by Design
Proper network isolation is crucial for security:
- Frontend network: IPv6-enabled network for Traefik and application containers
- Backend networks: Isolated networks for database communication
- No unnecessary exposure: Database containers never touch the frontend network
The Architecture
Our example deployment consists of three components:
- Forgejo - The Git service application
- PostgreSQL - The database backend
- Traefik - The reverse proxy handling TLS and routing
Internet
│
▼
Traefik (Port 443)
│
┌────────────┴────────────┐
│ frontend network │
│ (10.89.0.0/24) │
└────────────┬────────────┘
│
Forgejo Container
│
┌────────────┴────────────┐
│ forgejo-backend.network │
│ (isolated) │
└────────────┬────────────┘
│
PostgreSQL Container
Practical Implementation: Deploying Forgejo
Let’s walk through deploying a complete Git hosting service with database backend and TLS termination.
Step 1: Enable Podman Socket for Traefik
Traefik’s Docker provider expects a Docker-compatible API socket. Podman provides this through a systemd-managed socket:
# Enable and start the Podman socket
systemctl enable --now podman.socket
# Verify the socket is active
systemctl status podman.socket
# Check socket location
ls -la /run/podman/podman.sock
Technical detail: The Podman socket (/run/podman/podman.sock) provides a Docker-compatible REST API. Traefik connects to this socket to discover containers and read their labels for dynamic configuration. This is mounted into the Traefik container as /var/run/docker.sock, maintaining compatibility with Traefik’s Docker provider configuration.
Without this socket enabled, Traefik cannot discover containers or process their routing labels - the declarative configuration simply won’t work.
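A quick way to confirm the compatibility API is actually answering (an optional sanity check; requires curl on the host) is to hit the Docker-style _ping endpoint through the socket:
# Should return OK if the Docker-compatible API is live
curl --unix-socket /run/podman/podman.sock http://localhost/_ping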
Step 2: Network Configuration
First, create the networks. The frontend network enables IPv6 and connects to Traefik:
# Create the IPv6-enabled frontend network
podman network create \
--ipv6 \
--subnet 10.89.0.0/24 \
--gateway 10.89.0.1 \
ipv6
The backend network is referenced from the container units below as forgejo-backend.network, so Quadlet expects it to be defined as a network unit rather than created by hand. Create /etc/containers/systemd/forgejo-backend.network:
[Network]
# Internal networks have no outbound routing - ideal for a database-only segment
Internal=true
Design Decision: Why two networks? The frontend network (despite its name “ipv6”) handles all external-facing traffic. The backend network ensures PostgreSQL is never directly accessible from the frontend, implementing defense-in-depth.
Step 3: Secrets Management
Never hardcode passwords in configuration files. Podman secrets integrate with systemd and provide secure credential storage:
# Generate a strong database password
pwgen -s 32 1 | podman secret create forgejo_db_password -
Design Decision: Using podman secret over environment variables in files provides:
- Credentials stored centrally by Podman, outside unit files and container images
- Proper access control
- No secrets in process lists or logs
- Easy rotation without rebuilding containers
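A rotation looks roughly like the following sketch (note that PostgreSQL keeps the old password in its own catalog, so the database user must also be updated, e.g. with ALTER USER, before the new secret takes effect):
# Replace the secret with a freshly generated password
podman secret rm forgejo_db_password
pwgen -s 32 1 | podman secret create forgejo_db_password -
# Restart the consumers so they pick up the new value
systemctl restart forgejo-db.service forgejo-server.service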
Step 4: Database Container (Quadlet)
Create /etc/containers/systemd/forgejo-db.container:
[Container]
ContainerName=forgejo-db
AutoUpdate=registry
Image=docker.io/postgres:16-alpine
# Network isolation - only on backend network
Network=forgejo-backend.network
# PostgreSQL configuration
Environment=POSTGRES_USER=forgejo
Environment=POSTGRES_DB=forgejo
# Secret injection as environment variable
Secret=forgejo_db_password,type=env,target=POSTGRES_PASSWORD
# Persistent storage with SELinux context
Volume=/opt/forgejo/postgres:/var/lib/postgresql/data:z
[Service]
Restart=always
[Install]
WantedBy=default.target
Key technical points:
- AutoUpdate=registry: Enables automatic image updates via podman auto-update
- Volume flag :z: Automatically relabels SELinux contexts for container access
- Secret directive: Injects the secret as an environment variable at runtime
- No frontend network: Database is completely isolated from external access
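After the daemon-reload in Step 7, you can inspect the service systemd generates from this file - a useful check that the Quadlet was picked up at all:
# Show the generated unit, including the podman command Quadlet produced
systemctl cat forgejo-db.service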
Step 5: Application Container (Quadlet)
Create /etc/containers/systemd/forgejo-server.container:
[Container]
ContainerName=forgejo-server
Image=codeberg.org/forgejo/forgejo:13
AutoUpdate=registry
# Dual network attachment
Network=forgejo-backend.network
Network=ipv6
# Application configuration
Environment=USER_UID=1000
Environment=USER_GID=1000
Environment=FORGEJO__database__DB_TYPE=postgres
Environment=FORGEJO__database__HOST=forgejo-db:5432
Environment=FORGEJO__database__NAME=forgejo
Environment=FORGEJO__database__USER=forgejo
# Database password from secret
Secret=forgejo_db_password,type=env,target=FORGEJO__database__PASSWD
# Persistent storage
Volume=/opt/forgejo/forgejo:/data:z
Volume=/etc/timezone:/etc/timezone:ro
Volume=/etc/localtime:/etc/localtime:ro
# Traefik labels for routing
Label="traefik.enable=true"
Label="traefik.docker.network=ipv6"
Label="traefik.http.routers.forgejo.rule=Host(`git.example.com`)"
Label="traefik.http.routers.forgejo.entrypoints=https"
Label="traefik.http.routers.forgejo.service=forgejo-http"
Label="traefik.http.routers.forgejo.tls.certresolver=traefiktls"
Label="traefik.http.routers.forgejo.middlewares=secure-headers@file"
Label="traefik.http.services.forgejo-http.loadbalancer.server.port=3000"
# SSH Git access via Traefik TCP routing
Label="traefik.tcp.routers.forgejo-ssh.rule=HostSNI(`*`)"
Label="traefik.tcp.routers.forgejo-ssh.entrypoints=ssh"
Label="traefik.tcp.routers.forgejo-ssh.service=forgejo-ssh"
Label="traefik.tcp.services.forgejo-ssh.loadbalancer.server.port=22"
[Service]
Restart=always
[Install]
WantedBy=default.target
[Unit]
After=forgejo-db.service
Design decisions explained:
- Dual network attachment: The container needs backend network for PostgreSQL and frontend for Traefik
- Traefik labels: Declarative routing configuration - no manual Traefik config files needed
- SSH routing: Traefik handles both HTTP and TCP (Git SSH) on different ports
- Systemd dependency: After=forgejo-db.service ensures the database starts first
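Once the stack is running (Step 7), the dual attachment is easy to verify. This sketch uses a Go template against podman inspect (the template keys follow the Docker-compatible inspect output):
# List the networks the Forgejo container is attached to
podman inspect forgejo-server --format '{{range $name, $net := .NetworkSettings.Networks}}{{$name}} {{end}}'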
Step 6: Reverse Proxy Configuration
Create /etc/containers/systemd/traefik.container:
[Container]
ContainerName=traefik
Image=docker.io/traefik:latest
AutoUpdate=registry
# Required for binding to privileged ports
AddCapability=CAP_NET_BIND_SERVICE
# Frontend network only
Network=ipv6
# Port exposure
PublishPort=80:80
PublishPort=443:443
PublishPort=2222:2222
# Security hardening
NoNewPrivileges=true
SecurityLabelType=container_runtime_t
# Configuration and state
Volume=/etc/localtime:/etc/localtime:ro
Volume=/run/podman/podman.sock:/var/run/docker.sock:ro
Volume=/opt/traefik/traefik.yml:/etc/traefik/traefik.yml:z,ro
Volume=/opt/traefik/config.yml:/etc/traefik/config.yml:z,ro
Volume=/opt/traefik/letsencrypt:/letsencrypt:z
# Self-configuration for dashboard
Label=traefik.enable=true
Label=traefik.http.routers.dashboard.rule=Host(`traefik.example.com`)
Label=traefik.http.routers.dashboard.entrypoints=https
Label=traefik.http.routers.dashboard.service=api@internal
Label=traefik.http.routers.dashboard.tls=true
Label=traefik.http.routers.dashboard.tls.certresolver=traefiktls
Label=traefik.http.routers.dashboard.middlewares=dashboard-auth,secure-headers@file
Label=traefik.http.middlewares.dashboard-auth.basicauth.users=admin:$$2y$$05$$...
[Service]
Restart=always
[Install]
WantedBy=default.target
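The dashboard-auth middleware expects an htpasswd-style hash in the basicauth.users label. One way to produce it is shown below (htpasswd ships in the httpd-tools package on RHEL); keep the dollar signs doubled in the unit file, as in the label above:
# Generate a bcrypt hash for the dashboard user
htpasswd -nbB admin 'choose-a-strong-password'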
Create /opt/traefik/traefik.yml:
global:
  checkNewVersion: true
  sendAnonymousUsage: false
api:
  dashboard: true
  insecure: false
entryPoints:
  http:
    address: ":80"
    http:
      redirections:
        entryPoint:
          to: https
          scheme: https
  https:
    address: ":443"
  ssh:
    address: ":2222"
providers:
  docker:
    endpoint: "unix:///var/run/docker.sock"
    exposedByDefault: false
    network: ipv6
  file:
    filename: /etc/traefik/config.yml
    watch: true
certificatesResolvers:
  traefiktls:
    acme:
      email: admin@example.com
      storage: /letsencrypt/acme.json
      httpChallenge:
        entryPoint: http
log:
  level: INFO
Create /opt/traefik/config.yml for shared middlewares:
http:
  middlewares:
    secure-headers:
      headers:
        stsSeconds: 31536000
        stsIncludeSubdomains: true
        stsPreload: true
        forceSTSHeader: true
        customFrameOptionsValue: "SAMEORIGIN"
        contentTypeNosniff: true
        browserXssFilter: true
        referrerPolicy: "strict-origin-when-cross-origin"
        permissionsPolicy: "geolocation=(), microphone=(), camera=()"
Security highlights:
- NoNewPrivileges: Prevents privilege escalation
- SecurityLabelType: SELinux type enforcement
- Automatic HTTP to HTTPS redirect
- HSTS headers for HTTPS enforcement
- Let’s Encrypt automation for TLS certificates
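Once the stack is live (and DNS for git.example.com points at the host), both the redirect and the HSTS header can be checked from any client:
# HTTP should redirect to HTTPS
curl -sI http://git.example.com | grep -i location
# HTTPS responses should carry the HSTS header from the secure-headers middleware
curl -sI https://git.example.com | grep -i strict-transport-security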
Step 7: Deployment
Reload systemd so it generates service units from the new Quadlet files:
# Generate and load the new units
systemctl daemon-reload
Note that Quadlet-generated units cannot be enabled with systemctl enable; the [Install] section (WantedBy=default.target) in each .container file already takes care of starting them at boot. Start the stack:
# Start the stack
systemctl start forgejo-db.service
systemctl start forgejo-server.service
systemctl start traefik.service
The beauty of systemd integration: You can now manage containers like any other service:
# Check status
systemctl status forgejo-server.service
# View logs
journalctl -u forgejo-server.service -f
# Restart
systemctl restart forgejo-server.service
# View dependency tree
systemctl list-dependencies forgejo-server.service
Step 8: Automated Updates
Enable automatic container image updates:
# Enable the timer
systemctl enable --now podman-auto-update.timer
# Check update status
podman auto-update --dry-run
With AutoUpdate=registry in the Quadlet files, Podman will:
- Check for new images daily
- Pull updates if available
- Recreate containers with new images
- Preserve all volumes and configuration
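The timer's schedule is ordinary systemd configuration, so it can be inspected and overridden like any other timer (the Sunday 03:00 schedule below is just an example):
# See when the next update check will run
systemctl list-timers podman-auto-update.timer
# Optionally override the schedule with a drop-in
systemctl edit podman-auto-update.timer
# and add, for example:
#   [Timer]
#   OnCalendar=
#   OnCalendar=Sun *-*-* 03:00:00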
Advanced Topics
Working with Red Hat Registries
While this example uses public container registries, production RHEL deployments often leverage Red Hat’s container catalog:
# Authenticate to Red Hat registry (requires active subscription)
podman login registry.redhat.io
# Pull RHEL-based images
podman pull registry.redhat.io/rhel9/postgresql-15
# Use in Quadlet
Image=registry.redhat.io/rhel9/postgresql-15
Red Hat certified container images include:
- Support lifecycle matching RHEL versions
- Security errata and CVE fixes
- Compliance with enterprise requirements
- Verified compatibility with RHEL container hosts
SELinux Integration
RHEL’s mandatory access control is a feature, not a bug. While many Docker tutorials suggest disabling SELinux, Podman embraces it as a critical security layer. The :z flag in volume mounts automatically handles SELinux labeling:
Volume=/opt/forgejo/postgres:/var/lib/postgresql/data:z
This relabels the host directory with the correct SELinux context (container_file_t) for container access. For read-only mounts that need no relabeling (such as /etc/localtime), :ro on its own is enough; where relabeling is still required, :ro combines with :z, as in the Traefik unit above:
Volume=/etc/localtime:/etc/localtime:ro
RHEL best practice: Never disable SELinux. If you encounter permission issues, investigate the context (ls -Z) rather than setting permissive mode. Podman’s integration makes this dramatically easier than it was with Docker.
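When investigating, recent AVC denials show the exact context mismatch (ausearch comes from the audit package, installed by default on RHEL):
# Show recent SELinux denials
ausearch -m AVC -ts recent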
For advanced scenarios, you can use uppercase :Z to make the volume private to that specific container, or manually manage contexts:
# Check SELinux context
ls -Z /opt/forgejo/
# Manually set context if needed
semanage fcontext -a -t container_file_t "/opt/forgejo(/.*)?"
restorecon -Rv /opt/forgejo/
Resource Limits
Systemd provides granular resource control through cgroups:
[Service]
MemoryMax=2G
CPUQuota=200%
TasksMax=1024
These limits are enforced by the kernel and prevent resource exhaustion.
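The effective limits can be read back from systemd, and per-unit consumption watched live:
# Confirm what systemd actually applied
systemctl show forgejo-server.service -p MemoryMax -p TasksMax
# Live cgroup resource usage per unit
systemd-cgtop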
Health Checks
Podman supports container health checks:
[Container]
HealthCmd=/usr/bin/curl -f http://localhost:3000/ || exit 1
HealthInterval=30s
HealthTimeout=5s
HealthRetries=3
Systemd can react to failed health checks and restart containers automatically.
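For that to happen, the container's on-failure action has to be wired to the service's restart policy - a minimal sketch (HealthOnFailure is a standard Quadlet key; kill lets the Restart=always policy already used above bring up a fresh instance):
[Container]
# Kill the container when its health check fails...
HealthOnFailure=kill
[Service]
# ...so systemd's restart policy replaces it with a healthy instance
Restart=always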
Monitoring and Observability
Viewing Logs
All container output goes to journald:
# Real-time logs
journalctl -u forgejo-server.service -f
# Last 100 lines
journalctl -u forgejo-server.service -n 100
# Logs from specific time
journalctl -u forgejo-server.service --since "2024-01-01 12:00:00"
Container Inspection
# Container details
podman inspect forgejo-server
# Resource usage
podman stats forgejo-server
# Network information
podman network inspect ipv6
Comparison with Kubernetes
This approach is not meant to replace Kubernetes for large-scale deployments, but it offers distinct advantages for single-host or small-scale scenarios:
Advantages:
- Significantly lower resource overhead
- Simpler mental model and troubleshooting
- Direct integration with OS-level tools
- No additional control plane components
- Easier to audit and secure
When to choose K8s instead:
- Multi-host orchestration requirements
- Advanced scheduling needs
- Built-in service mesh requirements
- Teams already invested in K8s ecosystem
Security Considerations
This setup implements several security layers:
- Network segmentation: Databases isolated from frontend
- Rootless option: All containers can run as unprivileged users
- SELinux enforcement: Mandatory access control
- Secret management: No credentials in configuration files
- Automatic updates: Regular security patches
- TLS termination: Encrypted transport with Let’s Encrypt
- Security headers: HSTS, frame, and content-type protections via the shared middleware
For even stricter security, run the containers rootless: place the Quadlet files under ~/.config/containers/systemd/, enable lingering so the user's services keep running without an active login session, and manage everything with systemctl --user. Note that publishing ports below 1024 (80/443 for Traefik) then requires either lowering net.ipv4.ip_unprivileged_port_start or publishing on high ports:
# Allow this user's services to run without an active session
loginctl enable-linger
# As the regular user (not root)
systemctl --user daemon-reload
systemctl --user start forgejo-server.service
Conclusion
Modern container deployment doesn’t require Kubernetes for every use case. With RHEL Quadlets, Podman, and proper architectural patterns, you can build production-grade container infrastructure that is:
- Secure: Multiple layers of isolation and access control, leveraging RHEL’s security-first design
- Maintainable: Declarative configuration with systemd integration
- Observable: Native integration with journald and systemd tools
- Automated: Built-in update mechanisms via systemd timers
- Resilient: Systemd’s proven service management
- Enterprise-ready: Backed by Red Hat’s support lifecycle and security practices
This approach proves particularly valuable for:
- Self-hosted services and edge deployments
- Development environments matching production
- Organizations preferring simpler operational models
- Hybrid scenarios where some services don’t warrant K8s overhead
Next Steps for Red Hat Practitioners
This foundation scales naturally into the broader Red Hat container ecosystem:
- Fedora CoreOS: Apply these Quadlet patterns to immutable, auto-updating infrastructure
- OpenShift: Recognize how systemd-managed containers relate to K8s pods
- Ansible automation: Codify Quadlet deployment with the containers.podman collection
- Image building: Explore Buildah and Skopeo for OCI image workflows
The combination of Podman’s security-first design and systemd’s battle-tested service management creates a robust foundation for containerized applications without the operational overhead of full orchestration platforms - and provides essential knowledge for working across Red Hat’s entire container portfolio.