
After using deviceClass: hdd with enableCrushUpdates: true, the new hosts do not appear in the cluster #16530

@zsksy123

Description

Version

helm list
NAME              	NAMESPACE	REVISION	UPDATED                             	STATUS  	CHART                    	APP VERSION
rook-ceph-cluster 	rook-ceph	59      	2025-09-24 17:55:34.097522 +0800 CST	deployed	rook-ceph-cluster-v1.17.8	v1.17.8
rook-operator     	rook-ceph	5       	2025-09-17 11:27:12.331646 +0800 CST	deployed	rook-ceph-v1.17.8        	v1.17.8

Let me describe my problem in detail. Previously, all OSD devices in our cluster were NVMe. Later, we wanted to add three new nodes whose disks are all of type hdd, and to put these newly added devices into a separate pool named ceph-blockpool-hdd.
The configuration is as follows (a small sketch of an optional per-device deviceClass override follows it):

cephClusterSpec:
  storage:
    useAllNodes: false
    useAllDevices: false
    nodes:
    - name: "stor11"
      devices:
      - name: "nvme0n1"
      - name: "nvme1n1"
      ......
    - name: "stor12"
      devices:
      - name: "nvme0n1"
      - name: "nvme1n1"
      - name: "nvme2n1"
      ......
    - name: "stor18"
      devices:
      - name: "sda"
      - name: "sdb"
      #- name: "sdc"
    - name: "stor19"
      devices:
      - name: "sda"
      - name: "sdb"
cephBlockPools:
  - name: ceph-blockpool
    spec:
      failureDomain: host
      replicated:
        size: 2
      deviceClass: nvme
      enableCrushUpdates: true
    storageClass:
      enabled: true
      name: rook-ceph-block
      isDefault: true
      reclaimPolicy: Retain
      allowVolumeExpansion: true
      volumeBindingMode: "Immediate"
      mountOptions: []
      allowedTopologies: []
      parameters:
        imageFormat: "2"
        imageFeatures: layering
        csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
        csi.storage.k8s.io/provisioner-secret-namespace: "{{ .Release.Namespace }}"
        csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
        csi.storage.k8s.io/controller-expand-secret-namespace: "{{ .Release.Namespace }}"
        csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
        csi.storage.k8s.io/node-stage-secret-namespace: "{{ .Release.Namespace }}"
        csi.storage.k8s.io/fstype: ext4

  - name: ceph-blockpool-hdd
    spec:
      failureDomain: host
      replicated:
        size: 1
      deviceClass: hdd
      enableCrushUpdates: true
    storageClass:
      enabled: true
      name: rook-ceph-block-hdd
      isDefault: true
      reclaimPolicy: Retain
      allowVolumeExpansion: true
      volumeBindingMode: "Immediate"
      mountOptions: []
      allowedTopologies: []
      parameters:
        imageFormat: "2"
        imageFeatures: layering
        csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
        csi.storage.k8s.io/provisioner-secret-namespace: "{{ .Release.Namespace }}"
        csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
        csi.storage.k8s.io/controller-expand-secret-namespace: "{{ .Release.Namespace }}"
        csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
        csi.storage.k8s.io/node-stage-secret-namespace: "{{ .Release.Namespace }}"
        csi.storage.k8s.io/fstype: ext4
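
Since the OSDs on the new nodes are meant to be class hdd, the device class can also, as far as I understand, be pinned explicitly per node or per device in the Rook storage spec. This is only an optional sketch mirroring the node names above, not something I have applied:

cephClusterSpec:
  storage:
    nodes:
    - name: "stor18"
      config:
        deviceClass: hdd      # node-level default class for all OSDs created on this host
      devices:
      - name: "sda"
        config:
          deviceClass: hdd    # per-device override, redundant here but shown for clarity
      - name: "sdb"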

However, when checking with ceph osd tree, the stor18 and stor19 hosts do not appear; only isolated OSDs were detected.
The ceph-blockpool-hdd pool was not created successfully either:

kubectl get cephblockpools
NAME                 PHASE     TYPE         FAILUREDOMAIN   AGE
ceph-blockpool-hdd   Failure   Replicated   host            1d
kubectl describe cephblockpools ceph-blockpool-hdd
failed to create pool "ceph-blockpool-hdd".: failed to configure pool "ceph-blockpool-hdd".:
failed to initialize pool "ceph-blockpool-hdd" for RBD use. : signal: interrupt
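
To get more detail than the interrupted error above, I would cross-check from inside the cluster; the commands below assume the standard rook-ceph-tools toolbox deployment is running in the rook-ceph namespace:

# list pools and their crush rules / sizes as Ceph sees them
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph osd pool ls detail
# look for the pool reconcile errors in the operator log
kubectl -n rook-ceph logs deploy/rook-ceph-operator | grep ceph-blockpool-hdd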

The following osd.166 and osd.167 do not belong to any host:

ceph osd tree
166    hdd          0  osd.166              up   1.00000  1.00000
167    hdd          0  osd.167              up   1.00000  1.00000
ceph osd df
166    hdd        0   1.00000   15 TiB   105 MiB   61 MiB    1 KiB    44 MiB    15 TiB     0  0.00    0      up
167    hdd        0   1.00000   15 TiB   105 MiB   61 MiB    1 KiB    44 MiB    15 TiB     0  0.00    0      up
                        TOTAL  1.0 PiB   2.3 TiB  2.2 TiB   51 MiB   111 GiB  1023 TiB  0.23
ceph osd crush rule dump ceph-blockpool-hdd
{
    "rule_id": 19,
    "rule_name": "ceph-blockpool-hdd",
    "type": 1,
    "steps": [
        {
            "op": "take",
            "item": -21,
            "item_name": "default~hdd"
        },
        {
            "op": "chooseleaf_firstn",
            "num": 0,
            "type": "host"
        },
        {
            "op": "emit"
        }
    ]
}
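
The rule's take step points at the default~hdd shadow root, but since osd.166 and osd.167 have no host ancestor (and a crush weight of 0), a chooseleaf over type host has nothing to choose from, which would explain why initializing the pool for RBD hangs until it is interrupted. Below is only a sketch of how the shadow hierarchy could be inspected and one stray OSD manually re-parented; the host name and weight are illustrative and I have not run this:

# show the per-device-class shadow tree that "default~hdd" refers to
ceph osd crush tree --show-shadow

# example manual fix: create the missing host bucket, attach it to the default root,
# and move one stray OSD under it with a non-zero weight (value is illustrative)
ceph osd crush add-bucket stor18 host
ceph osd crush move stor18 root=default
ceph osd crush create-or-move osd.166 13.97 root=default host=stor18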
