ceph osd tree does not show the newly added nodes


Chel Db

Jun 21, 2021, 10:52:58 AM
to rook-dev
Our current production OSDs are running short of disk space, so to alleviate that, 3 new Kubernetes nodes with 500GB HDDs were added to the cluster (done via Ansible).

k get nodes shows the newly added nodes in the Ready state, but ceph osd tree (the command I used to check) does not show the new nodes in the list.
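
For reference, roughly the commands I used to check (the app=rook-ceph-tools label is the default toolbox label from the Rook examples; ours may be named slightly differently):

kubectl get nodes
# run ceph osd tree from inside the toolbox pod (assuming the default app=rook-ceph-tools label)
kubectl -n rook-ceph exec $(kubectl -n rook-ceph get pod -l app=rook-ceph-tools -o name) -- ceph osd tree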

The osd-prepare pods for those 3 nodes were created fine, but I don't see rook-ceph-osd-1... pods getting created for those 3 nodes.

In total we have 14 worker nodes, but only 11 rook-ceph-osd pods were created as shown below.

Why are the newly added nodes not reflected in the ceph osd tree?

rook-ceph-osd-1-56847868d9-qtbmp                          1/1     Running                 2          13d
rook-ceph-osd-11-5df47f4495-q7k22                         1/1     Running                 3          13d
rook-ceph-osd-12-5c8c79c67c-js8gs                         1/1     Running                 11         39d
rook-ceph-osd-13-67fb54c987-tflnm                         1/1     Running                 3          13d
rook-ceph-osd-2-8655965f4d-zhv77                          1/1     Running                 20         116d
rook-ceph-osd-3-5c6b88b597-7h2rn                          1/1     Running                 15         109d
rook-ceph-osd-4-69d5487657-xvmqn                          0/1     Init:CrashLoopBackOff   71         5h42m
rook-ceph-osd-6-795fdfdfc7-npw6c                          1/1     Running                 21         213d
rook-ceph-osd-7-5fd594ddb7-5xn26                          1/1     Running                 16         109d
rook-ceph-osd-8-5f496fccbd-nq4mv                          1/1     Running                 19         215d
rook-ceph-osd-9-7689855974-6mqtm                          1/1     Running                 15         219d
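
For what it's worth, this is roughly how I confirmed the prepare pods ran for the new nodes (app=rook-ceph-osd-prepare is the standard Rook label; the pod name below is the one from the log snippet further down):

# list the osd-prepare pods and the nodes they ran on
kubectl -n rook-ceph get pods -l app=rook-ceph-osd-prepare -o wide
# dump the full provisioning log of one of the prepare pods
kubectl -n rook-ceph logs rook-ceph-osd-prepare-as-net-12-xttl9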


Snippet of the logs from one of the rook-ceph-osd-prepare pods:

k logs rook-ceph-osd-prepare-as-net-12-xttl9 -n rook-ceph
2021-06-09 20:10:09.109613 I | rookcmd: starting Rook v1.3.8 with arguments '/rook/rook ceph osd provision'
2021-06-09 20:10:09.109673 I | rookcmd: flag values: --cluster-id=cb23b7ba-3290-4427-84ca-3e82f1b61fa6, --data-device-filter=sdb, --data-device-path-filter=, --data-devices=, --encrypted-device=false, --force-format=false, --help=false, --location=, --log-flush-frequency=5s, --log-level=DEBUG, --metadata-device=, --node-name=csg-nscg-0012, --operator-image=, --osd-database-size=0, --osd-store=, --osd-wal-size=576, --osds-per-device=1, --pvc-backed-osd=false, --service-account=
2021-06-09 20:10:09.109678 I | op-mon: parsing mon endpoints: f=10.233.50.141:6789,j=10.233.13.53:6789,m=10.233.44.200:6789
2021-06-09 20:10:09.124240 I | op-osd: CRUSH location=root=default host=as-net-12
2021-06-09 20:10:09.124259 I | cephcmd: crush location of osd: root=default host=as-net-12
2021-06-09 20:10:09.132649 I | cephconfig: writing config file /var/lib/rook/rook-ceph/rook-ceph.config
2021-06-09 20:10:09.132737 I | cephconfig: generated admin config in /var/lib/rook/rook-ceph
2021-06-09 20:10:09.132824 D | cephosd: config file @ /etc/ceph/ceph.conf: [global]
fsid = 092e1f47-e4c3-4175-ba0e-f32d5b58a9f3
mon initial members = f j m
mon host = [v2:10.233.50.141:3300,v1:10.233.50.141:6789],[v2:10.233.13.53:3300,v1:10.233.13.53:6789],[v2:10.233.44.200:3300,v1:10.233.44.200:6789]
public addr = 10.233.91.152
cluster addr = 10.233.91.152
[client.admin]
keyring = /var/lib/rook/rook-ceph/client.admin.keyring
2021-06-09 20:10:09.132834 I | cephosd: discovering hardware
2021-06-09 20:10:09.132840 D | exec: Running command: lsblk --all --noheadings --list --output KNAME
2021-06-09 20:10:09.137101 D | exec: Running command: lsblk /dev/fd0 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME
2021-06-09 20:10:09.138987 D | exec: Running command: sgdisk --print /dev/fd0
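
The log cuts off here. For completeness, a rough sketch of what I'd look at next: whether the device filter (sdb, per the flags above) actually matched a clean disk on the new nodes. The cluster name rook-ceph and the grep pattern are assumptions on my side:

# show the storage/device settings the operator is applying (CephCluster name assumed to be rook-ceph)
kubectl -n rook-ceph get cephcluster rook-ceph -o jsonpath='{.spec.storage}'
# search the prepare log for devices that were skipped during provisioning
kubectl -n rook-ceph logs rook-ceph-osd-prepare-as-net-12-xttl9 | grep -i skip
# on the new node itself, confirm sdb has no existing partitions or filesystem
lsblk -f /dev/sdb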