kubectl version --short
Client Version: v1.18.1
Server Version: v1.18.2
ceph version
ceph version 15.2.4 (7447c15c6ff58d7fce91843b705a268a1917325c) octopus (stable)
rook branch: release-1.4
Hi,
In my test environment, my Kubernetes cluster is composed of 1 master and 3 worker nodes.
To test Rook-Ceph, I use the default cluster.yaml file, configured to use all nodes and all devices.
Two disks (20 GB and 10 GB) have been added to each worker node, so I have plenty of free space.
Rook-Ceph has been deployed successfully.
Nevertheless, the health is not OK but HEALTH_WARN:
ceph -s
  cluster:
    id:     b75eef6a-8194-45d5-b0a9-626d2416a822
    health: HEALTH_WARN
            mon b is low on available space

  services:
    mon: 3 daemons, quorum a,b,c (age 17h)
    mgr: a (active, since 16h)
    osd: 6 osds: 6 up (since 17h), 6 in (since 17h)

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   6.0 GiB used, 84 GiB / 90 GiB avail
    pgs:     1 active+clean
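For context on what I understand this warning to mean (my assumption, please correct me): Ceph raises it when the filesystem holding a monitor's data directory drops below mon_data_avail_warn percent free (30% by default), so it is the mon host's disk, not the OSD capacity shown above, that matters. A rough sketch of how I checked the node running mon b (/var/lib/rook is Rook's default dataDirHostPath; adjust if yours differs):

```shell
# Sketch: measure free space (as a percentage) on the filesystem that
# holds the mon data directory, the way I believe Ceph evaluates it.
MON_PATH=${MON_PATH:-/var/lib/rook}   # Rook's default dataDirHostPath (assumption)
[ -d "$MON_PATH" ] || MON_PATH=/      # fall back to / so the check still runs

# df -P prints: Filesystem 1024-blocks Used Available Capacity Mounted-on
avail_pct=$(df -P "$MON_PATH" | awk 'NR==2 { printf "%d", ($4 / ($3 + $4)) * 100 }')
echo "available on $MON_PATH: ${avail_pct}%"

if [ "$avail_pct" -lt 30 ]; then
  echo "below the 30% mon_data_avail_warn default -> HEALTH_WARN expected"
fi
```

On my mon b node this percentage is indeed low, which seems consistent with the warning.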
[root@rook-ceph-tools-7d9467775-n8s6k /]# ceph df
--- RAW STORAGE ---
CLASS  SIZE    AVAIL   USED    RAW USED  %RAW USED
hdd    90 GiB  84 GiB  19 MiB  6.0 GiB   6.69
TOTAL  90 GiB  84 GiB  19 MiB  6.0 GiB   6.69

--- POOLS ---
POOL                   ID  STORED  OBJECTS  USED  %USED  MAX AVAIL
device_health_metrics   1     0 B        0   0 B      0     25 GiB
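If the root filesystem on the mon host really is nearly full, I suppose freeing space there is the proper fix; otherwise the warning threshold itself can be tuned. A sketch of the commands I would try from the rook-ceph-tools pod (the 15% value below is only an illustrative choice, not a recommendation):

```shell
# Show which mon is affected and the measured free-space percentage
ceph health detail

# Lower the warning threshold from its 30% default (illustrative value)
ceph config set mon mon_data_avail_warn 15
```

Does this look like the right direction, or is something else going on?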
Would you have an idea how I can fix this? Is it a bug?
Thank you.