Issue Description
I'm trying to run Prometheus using the container image, which requires passing a large number of command-line flags to the podman run command. This works fine for all of my other containers (even the Alertmanager container, which takes just as many CLI flags). For the Prometheus container, however, the flags do not get appended, and the container is started without any CLI flags beyond the Podman-specific ones (e.g. podman run <podman flags> quay.io/prometheus/prometheus:v3.4.2).
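For context, the podman-systemd.unit(5) documentation describes Exec= as having the same effect as passing extra arguments after the image in a podman run invocation. So even a minimal unit like the sketch below (illustrative only, not my real file) should yield an ExecStart with the flags placed after the image:

# minimal.container -- illustrative sketch
[Container]
Image=quay.io/prometheus/prometheus:v3.4.2
Exec=--log.level=info

# Expected shape of the generated ExecStart (simplified):
#   ExecStart=/usr/bin/podman run ... quay.io/prometheus/prometheus:v3.4.2 --log.level=info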
Steps to reproduce the issue
- I made a Quadlet file, as specified in the Podman documentation, and placed it at $HOME/.config/containers/systemd/prometheus.container
- After placing the Quadlet there, I reloaded systemd with systemctl --user daemon-reload
- I started the container (systemctl --user start prometheus.service) and noticed that the configuration was incorrect, so I checked the generated unit file at /var/run/user/$UID/systemd/generator/prometheus.service (the same steps are summarized as shell commands below)
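For convenience, the same reproduction as plain shell commands (a sketch; the final generator invocation is optional and, if your Podman build supports the --dryrun flag, prints the generated units without touching the running system):

# place the unit where the user generator looks for it
cp prometheus.container $HOME/.config/containers/systemd/prometheus.container

# re-run the generators and start the service
systemctl --user daemon-reload
systemctl --user start prometheus.service

# inspect the generated unit
systemctl --user cat prometheus.service

# optional: run the Quadlet generator directly
/usr/lib/systemd/user-generators/podman-user-generator --dryrun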
Describe the results you received
Here is the Quadlet file I used:
[Unit]
Description=Prometheus Container
After=mainv6.network
After=prometheus-data.volume
After=podman-secret@prometheus-vault-token.service
Requires=mainv6.network
Requires=prometheus-data.volume
Requires=podman-secret@prometheus-vault-token.service
[Container]
Exec=--web.external-url='https://monitor.<my-domain>/' \
--config.file='/etc/prometheus/prometheus.yml' \
--config.auto-reload-interval=30s \
--web.listen-address='0.0.0.0:9191' \
--web.read-timeout=5m \
--web.max-connections=512 \
--web.enable-lifecycle \
--web.enable-admin-api \
--web.enable-otlp-receiver \
--web.enable-remote-write-receiver \
--web.console.templates='consoles' \
--web.console.libraries='console_libraries' \
--web.page-title='Prometheus Time Series Collection and Processing Server' \
--web.cors.origin='https?://(grafana|monitor|alerts)\.<my-domain>\.<tld>' \
--storage.tsdb.path='prometheus/' \
--storage.tsdb.retention.size=256GB \
--enable-feature='expand-external-labels,memory-snapshot-on-shutdown,promql-per-step-stats,promql-experimental-functions,extra-scrape-metrics,native-histograms,concurrent-rule-eval,otlp-write-receiver' \
--log.level=info \
--log.format=logfmt
Pull=newer
ContainerName=prometheus
HostName=prometheus
Network=main-v6
PublishPort=9191:9191
LogDriver=journald
LogOpt=path=/var/log/container/prometheus.json
LogOpt=max-size=10mb
Volume=prometheus-data:/prometheus
Secret=prometheus-vault-token,type=mount,target=/etc/prometheus/prometheus-token,mode=0444,uid=0,gid=0
Mount=type=bind,source=$HOME/prometheus,destination=/etc/prometheus,relabel=shared,idmap,readonly=false
Image=quay.io/prometheus/prometheus:v3.4.2
[Service]
Restart=always
[Install]
WantedBy=default.target
Which resulted in this service being generated:
# Automatically generated by /usr/lib/systemd/user-generators/podman-user-generator
#
[Unit]
Wants=podman-user-wait-network-online.service
After=podman-user-wait-network-online.service
Description=Prometheus Container
After=mainv6-network.service prometheus-data-volume.service podman-secret@prometheus-vault-token.service
Requires=mainv6-network.service prometheus-data-volume.service podman-secret@prometheus-vault-token.service
SourcePath=/var/home/annie/.config/containers/systemd/prometheus.container
RequiresMountsFor=%t/containers
RequiresMountsFor=/var/home/annie/prometheus
[X-Container]
Exec=--web.external-url='https://monitor.<my-domain>/' --config.file='/etc/prometheus/prometheus.yml' --config.auto-reload-interval=30s --web.listen-address='0.0.0.0:9191' --web.read-timeout=5m --web.max-connections=512 --web.enable-lifecycle --web.enable-admin-api --web.enable-otlp-receiver --web.enable-remote-write-receiver --web.console.templates='consoles' --web.console.libraries='console_libraries' --web.page-title='Prometheus Time Series Collection and Processing Server' --web.cors.origin='https?://(grafana|monitor|alerts)\.<my-domain>\.<tld>' --storage.tsdb.path='prometheus/' --storage.tsdb.retention.size=256GB --enable-feature='expand-external-labels,memory-snapshot-on-shutdown,promql-per-step-stats,promql-experimental-functions,extra-scrape-metrics,native-histograms,concurrent-rule-eval,otlp-write-receiver' --log.level=info --log.format=logfmt
Pull=newer
ContainerName=prometheus
HostName=prometheus
Network=main-v6
PublishPort=9191:9191
LogDriver=journald
LogOpt=path=/var/log/container/prometheus.json
LogOpt=max-size=10mb
Volume=prometheus-data:/prometheus
Secret=prometheus-vault-token,type=mount,target=/etc/prometheus/prometheus-token,mode=0444,uid=0,gid=0
Mount=type=bind,source=$HOME/prometheus,destination=/etc/prometheus,relabel=shared,idmap,readonly=false
Image=quay.io/prometheus/prometheus:v3.4.2
[Service]
Restart=always
Environment=PODMAN_SYSTEMD_UNIT=%n
KillMode=mixed
ExecStop=/usr/bin/podman rm -v -f -i --cidfile=%t/%N.cid
ExecStopPost=-/usr/bin/podman rm -v -f -i --cidfile=%t/%N.cid
Delegate=yes
Type=notify
NotifyAccess=all
SyslogIdentifier=%N
ExecStart=/usr/bin/podman run --name prometheus --cidfile=%t/%N.cid --replace --rm --log-driver journald --log-opt path=/var/log/container/prometheus.json --log-opt max-size=10mb --cgroups=split --pull newer --hostname prometheus --network main-v6 --sdnotify=conmon -d -v prometheus-data:/prometheus --publish 9191:9191 --secret prometheus-vault-token,type=mount,target=/etc/prometheus/prometheus-token,mode=0444,uid=0,gid=0 --mount type=bind,source=$HOME/prometheus,destination=/etc/prometheus,relabel=shared,idmap,readonly=false quay.io/prometheus/prometheus:v3.4.2
[Install]
WantedBy=default.target
Note: the $HOME, $UID, and $GID variables are placeholders used to sanitize the output.
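For anyone reproducing this, the missing flags can also be confirmed on the container itself, since podman inspect exposes the command that follows the image via .Config.Cmd:

podman inspect prometheus --format '{{ .Config.Cmd }}'
# with the bug present, the Exec= flags from the Quadlet do not appear here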
Describe the results you expected
This is the unit file that should have been generated:
# Automatically generated by /usr/lib/systemd/user-generators/podman-user-generator
#
[Unit]
Wants=podman-user-wait-network-online.service
After=podman-user-wait-network-online.service
Description=Prometheus Container
After=mainv6-network.service prometheus-data-volume.service podman-secret@prometheus-vault-token.service
Requires=mainv6-network.service prometheus-data-volume.service podman-secret@prometheus-vault-token.service
SourcePath=/var/home/annie/.config/containers/systemd/prometheus.container
RequiresMountsFor=%t/containers
RequiresMountsFor=/var/home/annie/prometheus
[X-Container]
Exec=--web.external-url='https://monitor.<my-domain>/' --config.file='/etc/prometheus/prometheus.yml' --config.auto-reload-interval=30s --web.listen-address='0.0.0.0:9191' --web.read-timeout=5m --web.max-connections=512 --web.enable-lifecycle --web.enable-admin-api --web.enable-otlp-receiver --web.enable-remote-write-receiver --web.console.templates='consoles' --web.console.libraries='console_libraries' --web.page-title='Prometheus Time Series Collection and Processing Server' --web.cors.origin='https?://(grafana|monitor|alerts)\.<my-domain>\.<tld>' --storage.tsdb.path='prometheus/' --storage.tsdb.retention.size=256GB --enable-feature='expand-external-labels,memory-snapshot-on-shutdown,promql-per-step-stats,promql-experimental-functions,extra-scrape-metrics,native-histograms,concurrent-rule-eval,otlp-write-receiver' --log.level=info --log.format=logfmt
Pull=newer
ContainerName=prometheus
HostName=prometheus
Network=main-v6
PublishPort=9191:9191
LogDriver=journald
LogOpt=path=/var/log/container/prometheus.json
LogOpt=max-size=10mb
Volume=prometheus-data:/prometheus
Secret=prometheus-vault-token,type=mount,target=/etc/prometheus/prometheus-token,mode=0444,uid=0,gid=0
Mount=type=bind,source=$HOME/prometheus,destination=/etc/prometheus,relabel=shared,idmap,readonly=false
Image=quay.io/prometheus/prometheus:v3.4.2
[Service]
Restart=always
Environment=PODMAN_SYSTEMD_UNIT=%n
KillMode=mixed
ExecStop=/usr/bin/podman rm -v -f -i --cidfile=%t/%N.cid
ExecStopPost=-/usr/bin/podman rm -v -f -i --cidfile=%t/%N.cid
Delegate=yes
Type=notify
NotifyAccess=all
SyslogIdentifier=%N
ExecStart=/usr/bin/podman run --name prometheus --cidfile=%t/%N.cid --replace --rm --log-driver journald --log-opt path=/var/log/container/prometheus.json --log-opt max-size=10mb --cgroups=split --pull newer --hostname prometheus --network main-v6 --sdnotify=conmon -d -v prometheus-data:/prometheus --publish 9191:9191 --secret prometheus-vault-token,type=mount,target=/etc/prometheus/prometheus-token,mode=0444,uid=0,gid=0 --mount type=bind,source=$HOME/prometheus,destination=/etc/prometheus,relabel=shared,idmap,readonly=false quay.io/prometheus/prometheus:v3.4.2 --web.external-url='https://monitor.<my-domain>/' --config.file='/etc/prometheus/prometheus.yml' --config.auto-reload-interval=30s --web.listen-address='0.0.0.0:9191' --web.read-timeout=5m --web.max-connections=512 --web.enable-lifecycle --web.enable-admin-api --web.enable-otlp-receiver --web.enable-remote-write-receiver --web.console.templates='consoles' --web.console.libraries='console_libraries' --web.page-title='Prometheus Time Series Collection and Processing Server' --web.cors.origin='https?://(grafana|monitor|alerts)\.<my-domain>\.<tld>' --storage.tsdb.path='prometheus/' --storage.tsdb.retention.size=256GB --enable-feature='expand-external-labels,memory-snapshot-on-shutdown,promql-per-step-stats,promql-experimental-functions,extra-scrape-metrics,native-histograms,concurrent-rule-eval,otlp-write-receiver' --log.level=info --log.format=logfmt
[Install]
WantedBy=default.target
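Condensed, the two units differ only in the ExecStart line: the expected one appends the Exec= arguments after the image (middle portions elided for brevity):

-ExecStart=/usr/bin/podman run --name prometheus ... quay.io/prometheus/prometheus:v3.4.2
+ExecStart=/usr/bin/podman run --name prometheus ... quay.io/prometheus/prometheus:v3.4.2 --web.external-url='https://monitor.<my-domain>/' ... --log.format=logfmt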
podman info output
host:
  arch: amd64
  buildahVersion: 1.40.1
  cgroupControllers:
  - cpu
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.13-1.fc42.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.13, commit: '
  cpuUtilization:
    idlePercent: 98.99
    systemPercent: 0.41
    userPercent: 0.59
  cpus: 16
  databaseBackend: sqlite
  distribution:
    distribution: fedora
    variant: coreos
    version: "42"
  eventLogger: journald
  freeLocks: 1985
  hostname: $HOSTNAME
  idMappings:
    gidmap:
    - container_id: 0
      host_id: $GID
      size: 1
    - container_id: 1
      host_id: 589824
      size: 65536
    uidmap:
    - container_id: 0
      host_id: $UID
      size: 1
    - container_id: 1
      host_id: 589824
      size: 65536
  kernel: 6.14.9-300.fc42.x86_64
  linkmode: dynamic
  logDriver: journald
  memFree: 18758422528
  memTotal: 29182439424
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: aardvark-dns-1.15.0-1.fc42.x86_64
      path: /usr/libexec/podman/aardvark-dns
      version: aardvark-dns 1.15.0
    package: netavark-1.15.2-1.fc42.x86_64
    path: /usr/libexec/podman/netavark
    version: netavark 1.15.2
  ociRuntime:
    name: crun
    package: crun-1.21-1.fc42.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.21
      commit: 10269840aa07fb7e6b7e1acff6198692d8ff5c88
      rundir: /run/user/$UID/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
  os: linux
  pasta:
    executable: /usr/bin/pasta
    package: passt-0^20250606.g754c6d7-1.fc42.x86_64
    version: |
      pasta 0^20250606.g754c6d7-1.fc42.x86_64
      Copyright Red Hat
      GNU General Public License, version 2 or later
        <https://www.gnu.org/licenses/old-licenses/gpl-2.0.html>
      This is free software: you are free to change and redistribute it.
      There is NO WARRANTY, to the extent permitted by law.
  remoteSocket:
    exists: true
    path: /run/user/$UID/podman/podman.sock
  rootlessNetworkCmd: pasta
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.3.1-2.fc42.x86_64
    version: |-
      slirp4netns version 1.3.1
      commit: e5e368c4f5db6ae75c2fce786e31eef9da6bf236
      libslirp: 4.8.0
      SLIRP_CONFIG_VERSION_MAX: 5
      libseccomp: 2.5.5
  swapFree: 0
  swapTotal: 0
  uptime: 2h 23m 7.00s (Approximately 0.08 days)
  variant: ""
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - docker.io
store:
  configFile: $HOME/.config/containers/storage.conf
  containerStore:
    number: 26
    paused: 0
    running: 26
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: $HOME/.local/share/containers/storage
  graphRootAllocated: 499309219840
  graphRootUsed: 244189872128
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Supports shifting: "false"
    Supports volatile: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 48
  runRoot: /run/user/$UID/containers
  transientStore: false
  volumePath: $HOME/.local/share/containers/storage/volumes
version:
  APIVersion: 5.5.1
  BuildOrigin: Fedora Project
  Built: 1749081600
  BuiltTime: Thu Jun 5 00:00:00 2025
  GitCommit: 850db76dd78a0641eddb9ee19ee6f60d2c59bcfa
  GoVersion: go1.24.3
  Os: linux
  OsArch: linux/amd64
  Version: 5.5.1
Podman in a container
No
Privileged Or Rootless
Rootless
Upstream Latest Release
Yes
Additional environment details
Here's my systemd version:
systemd 257 (257.6-1.fc42)
+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 +PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD +BPF_FRAMEWORK +BTF +XKBCOMMON +UTMP +SYSVINIT +LIBARCHIVE
Here's my OS:
NAME="Fedora Linux"
VERSION="42.20250609.3.0 (CoreOS)"
RELEASE_TYPE=stable
ID=fedora
VERSION_ID=42
VERSION_CODENAME=""
PLATFORM_ID="platform:f42"
PRETTY_NAME="Fedora CoreOS 42.20250609.3.0"
ANSI_COLOR="0;38;2;60;110;180"
LOGO=fedora-logo-icon
CPE_NAME="cpe:/o:fedoraproject:fedora:42"
HOME_URL="https://getfedora.org/coreos/"
DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora-coreos/"
SUPPORT_URL="https://github.com/coreos/fedora-coreos-tracker/"
BUG_REPORT_URL="https://github.com/coreos/fedora-coreos-tracker/"
REDHAT_BUGZILLA_PRODUCT="Fedora"
REDHAT_BUGZILLA_PRODUCT_VERSION=42
REDHAT_SUPPORT_PRODUCT="Fedora"
REDHAT_SUPPORT_PRODUCT_VERSION=42
SUPPORT_END=2026-05-13
VARIANT="CoreOS"
VARIANT_ID=coreos
OSTREE_VERSION='42.20250609.3.0'
Additional information
This is currently running on a 16-core x86_64 machine, and the same behavior also occurs on my Raspberry Pi 5 running Debian and on a VPS running AlmaLinux.
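In case it helps narrow things down, here is an untested minimal Quadlet sketch that keeps only a multi-line Exec= with backslash continuations; whether the continuations are actually the trigger is a guess on my part, not something I've confirmed:

# repro.container -- hypothetical minimal reproducer
[Container]
Image=quay.io/prometheus/prometheus:v3.4.2
Exec=--log.level=info \
    --log.format=logfmt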