
crio-<containerID>.scope succeeded #8904

Closed as not planned

Description

@Syx5290

What happened?

crio-0644511bc2734f5ff9f9532fdecddb9f1b92597fe269168b137ada040c3a3118.scope succeeded, but the container is still running.

Jan 09 10:07:45 ceasphere23-node-3 systemd[1]: Started crio-conmon-0644511bc2734f5ff9f9532fdecddb9f1b92597fe269168b137ada040c3a3118.scope.
Jan 09 10:07:46 ceasphere23-node-3 conmon[330022]: conmon 0644511bc2734f5ff9f9 <ninfo>: addr{sun_family=AF_UNIX, sun_path=/proc/self/fd/12/attach}
Jan 09 10:07:46 ceasphere23-node-3 conmon[330022]: conmon 0644511bc2734f5ff9f9 <ninfo>: terminal_ctrl_fd: 12
Jan 09 10:07:46 ceasphere23-node-3 conmon[330022]: conmon 0644511bc2734f5ff9f9 <ninfo>: winsz read side: 16, winsz write side: 16
Jan 09 10:08:10 ceasphere23-node-3 systemd[1]: Started libcontainer container 0644511bc2734f5ff9f9532fdecddb9f1b92597fe269168b137ada040c3a3118.
Jan 09 10:08:13 ceasphere23-node-3 crio[27514]: time="2025-01-09 10:08:13.434244935+08:00" level=info msg="Created container 0644511bc2734f5ff9f9532fdecddb9f1b92597fe269168b137ada040c3a3118: ccos-monitoring/prometheus-agent-0-0/prometheus" id=ddb88684-9f80-4cc8-a910-c5044be1a01c name=/runtime.v1.RuntimeService/CreateContainer
Jan 09 10:08:13 ceasphere23-node-3 hyperkube[57677]: I0109 10:08:13.434460   57677 remote_runtime.go:446] "[RemoteRuntimeService] CreateContainer" podSandboxID="7237e359fec361cef9d8adb215defbace399f033eb6ac56a942197e1bc02fb35" containerID="0644511bc2734f5ff9f9532fdecddb9f1b92597fe269168b137ada040c3a3118"
Jan 09 10:08:13 ceasphere23-node-3 hyperkube[57677]: I0109 10:08:13.434532   57677 remote_runtime.go:459] "[RemoteRuntimeService] StartContainer" containerID="0644511bc2734f5ff9f9532fdecddb9f1b92597fe269168b137ada040c3a3118" timeout="2m0s"
Jan 09 10:08:13 ceasphere23-node-3 crio[27514]: time="2025-01-09 10:08:13.434722361+08:00" level=info msg="Starting container: 0644511bc2734f5ff9f9532fdecddb9f1b92597fe269168b137ada040c3a3118" id=e432b000-79cc-4498-a08f-b0e3fe21e658 name=/runtime.v1.RuntimeService/StartContainer
Jan 09 10:08:13 ceasphere23-node-3 crio[27514]: time="2025-01-09 10:08:13.469891032+08:00" level=info msg="Started container" PID=364493 containerID=0644511bc2734f5ff9f9532fdecddb9f1b92597fe269168b137ada040c3a3118 description=ccos-monitoring/prometheus-agent-0-0/prometheus id=e432b000-79cc-4498-a08f-b0e3fe21e658 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7237e359fec361cef9d8adb215defbace399f033eb6ac56a942197e1bc02fb35
Jan 09 10:08:13 ceasphere23-node-3 hyperkube[57677]: I0109 10:08:13.487397   57677 remote_runtime.go:477] "[RemoteRuntimeService] StartContainer Response" containerID="0644511bc2734f5ff9f9532fdecddb9f1b92597fe269168b137ada040c3a3118"
Jan 09 10:08:13 ceasphere23-node-3 hyperkube[57677]: I0109 10:08:13.676730   57677 kubelet.go:2250] "SyncLoop (PLEG): event for pod" pod="ccos-monitoring/prometheus-agent-0-0" event=&{ID:f921ae10-59ef-4825-ae61-248f1989c789 Type:ContainerStarted Data:0644511bc2734f5ff9f9532fdecddb9f1b92597fe269168b137ada040c3a3118}
Jan 09 10:08:16 ceasphere23-node-3 systemd[1]: crio-0644511bc2734f5ff9f9532fdecddb9f1b92597fe269168b137ada040c3a3118.scope: Succeeded.
Jan 09 10:08:16 ceasphere23-node-3 systemd[1]: crio-0644511bc2734f5ff9f9532fdecddb9f1b92597fe269168b137ada040c3a3118.scope: Consumed 4.450s CPU time.
Jan 09 10:08:37 ceasphere23-node-3 hyperkube[57677]:         rpc error: code = Unknown desc = command error: time="2025-01-09T10:08:37+08:00" level=error msg="exec failed: unable to start container process: error adding pid 410135 to cgroups: failed to write 410135: open /sys/fs/cgroup/systemd/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf921ae10_59ef_4825_ae61_248f1989c789.slice/crio-0644511bc2734f5ff9f9532fdecddb9f1b92597fe269168b137ada040c3a3118.scope/cgroup.procs: no such file or directory"

When I checked the status with systemctl status crio-0644511bc2734f5ff9f9532fdecddb9f1b92597fe269168b137ada040c3a3118.scope, the unit was no longer present. Under normal circumstances, the .scope unit should still exist while the container is running.
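For reference, the cgroup path that runc failed to open in the log above can be derived from the pod UID and container ID. A minimal sketch (assuming cgroup v1 with the systemd driver, as in this report; systemd replaces the dashes in the pod UID inside the slice name):

```shell
#!/usr/bin/env bash
# Values taken from the logs in this report.
POD_UID="f921ae10-59ef-4825-ae61-248f1989c789"
CONTAINER_ID="0644511bc2734f5ff9f9532fdecddb9f1b92597fe269168b137ada040c3a3118"

# systemd encodes the pod UID with underscores inside the slice name.
SLICE="kubepods-burstable-pod${POD_UID//-/_}.slice"
CGROUP="/sys/fs/cgroup/systemd/kubepods.slice/kubepods-burstable.slice/${SLICE}/crio-${CONTAINER_ID}.scope"
echo "$CGROUP"

# On an affected node one could then compare cgroup state and container state:
#   [ -d "$CGROUP" ] || echo "scope cgroup directory is gone"
#   crictl inspect "$CONTAINER_ID"   # container may still report Running
```

If the directory is gone while the container is still running, any later exec will fail exactly as in the log, because runc cannot write the new pid to cgroup.procs.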

What did you expect to happen?

crio-<containerID>.scope should not succeed before the container exits!

crictl exec -it <containerID> /bin/sh should execute normally.

How can we reproduce it (as minimally and precisely as possible)?

It occurs only occasionally; I have no reliable reproducer.
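Since the issue is intermittent, a hypothetical watcher sketch like the following could flag the race the moment it happens. The function name is made up, and the .info.pid template path is an assumption based on CRI-O's crictl inspect output:

```shell
#!/usr/bin/env bash
# Hypothetical sketch: wait until the systemd scope for a container goes
# inactive, then check whether the container's init process is still alive.
# Assumes CRI-O exposes the container pid at .info.pid in `crictl inspect`.
watch_scope() {
  local cid="$1"
  while systemctl is-active --quiet "crio-${cid}.scope"; do
    sleep 1
  done
  # Scope is no longer active; is the container process still running?
  local pid
  pid=$(crictl inspect -o go-template --template '{{.info.pid}}' "$cid" 2>/dev/null)
  if [ -n "$pid" ] && kill -0 "$pid" 2>/dev/null; then
    echo "RACE: crio-${cid}.scope inactive but container pid ${pid} still alive"
    return 1
  fi
  return 0
}
```

Running this against each container on an affected node would show whether the scope routinely disappears before the container exits, or only in this one race.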

Anything else we need to know?

No response

CRI-O and Kubernetes version

# crio-status info
cgroup driver: systemd
storage driver: overlay
storage root: /var/lib/containers/storage
default GID mappings (format <container>:<host>:<size>):
  0:0:4294967295
default UID mappings (format <container>:<host>:<size>):
  0:0:4294967295

crio --version
crio version 1.24.1
Version:          1.24.1
GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
GitTreeState:     clean

OS version

4.19.90-52.39

Additional environment details (AWS, VirtualBox, physical, etc.)

runc --version
runc version 1.1.4
spec: 1.0.2-dev

conmon --version
conmon version 2.0.30

Metadata

Labels

kind/bug: Categorizes issue or PR as related to a bug.
lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
lifecycle/stale: Denotes an issue or PR has remained open with no activity and has become stale.
