Hello,
I am having trouble mounting Ceph storage into my Kubernetes cluster.
My Kubernetes cluster consists of 3 CoreOS hosts and 1 CentOS host.
Here are the steps I'm doing to mount the Ceph storage into Kubernetes:
1. I create a PersistentVolume.
2. I create a PersistentVolumeClaim.
3. I create a DaemonSet that starts a busybox image on each node with the claimed Ceph storage. I'm using a DaemonSet because I want this running on both CoreOS and CentOS for comparison.
Here is my YAML file for the steps above:
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
data:
  key: QVFBUkySkZWVFE9PQ==
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: "ceph"
spec:
  capacity:
    storage: "2Gi"
  accessModes:
    - "ReadWriteOnce"
  rbd:
    monitors:
      - "172.28.150.31:6789"
      - "172.28.150.32:6789"
      - "172.28.150.33:6789"
    pool: rbd
    image: foo2
    user: admin
    keyring: "/etc/ceph/ceph.client.admin.keyring"
    secretRef:
      name: "ceph-secret"
    fsType: ext4
    readOnly: false
  persistentVolumeReclaimPolicy: "Recycle"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ceph-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: ceph-pod1
spec:
  template:
    metadata:
      labels:
        app: ceph-pod1
      name: ceph-pod1
    spec:
      containers:
        - image: busybox
          name: ceph-busybox
          command: ["sleep", "60000"]
          volumeMounts:
            - name: ceph-vol1
              mountPath: /usr/share/busybox
              readOnly: false
          securityContext:
            privileged: true
      volumes:
        - name: ceph-vol1
          persistentVolumeClaim:
            claimName: ceph-claim
      hostNetwork: true
      hostPID: true
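(For reference, the key value in ceph-secret is the base64-encoded Ceph admin key, i.e. the output of `ceph auth get-key client.admin | base64`. A sketch with a made-up key:)

```shell
# KEY is a made-up placeholder, not a real Ceph key.
# On a real cluster it would come from: ceph auth get-key client.admin
KEY='AQBplaceholderkeyXYZ=='
# Kubernetes expects the Secret's data value base64-encoded:
echo -n "$KEY" | base64
```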
After a while, the pod on the CentOS host is in the Running state, while the pods on the CoreOS machines remain in the "ContainerCreating" state:
ceph-pod1-2om7i 0/1 ContainerCreating 0 8m
ceph-pod1-8dwrc 0/1 ContainerCreating 0 8m
ceph-pod1-dvip9 1/1 Running 0 8m
ceph-pod1-rk57h 0/1 ContainerCreating 0 8m
ceph-pod1-rsw6l 0/1 ContainerCreating 0 8m
So it seems that the storage can be mounted in the pod on the CentOS host (I checked inside the busybox container).
On the CentOS host there was one additional step I had done: "yum install -y ceph-common". As we know, on CoreOS this is not possible :-)
From my point of view it seems that some Ceph utilities are missing on the CoreOS hosts. A look into the kubelet logs shows the error "rbd: failed to modprobe rbd error:executable file not found in $PATH":
Jul 14 12:03:27 coreos2 kubelet-wrapper[727]: E0714 12:03:27.360815 727 disk_manager.go:56] failed to attach disk
Jul 14 12:03:27 coreos2 kubelet-wrapper[727]: E0714 12:03:27.361464 727 rbd.go:215] rbd: failed to setup
Jul 14 12:03:27 coreos2 kubelet-wrapper[727]: E0714 12:03:27.361854 727 goroutinemap.go:155] Operation for "kubernetes.io/rbd/[172.28.50.231:6789 172.28.50.232:6789 172.28.50.233:6789]:foo2" failed. No retries permitted until 2016-07-14 12:03:27.861840454 +0000 UTC (durationBeforeRetry 500ms). error: MountVolume.SetUp failed for volume "kubernetes.io/rbd/[172.28.50.231:6789 172.28.50.232:6789 172.28.50.233:6789]:foo2" (spec.Name: "ceph") pod "f8606d63-49ba-11e6-8908-326261363032" (UID: "f8606d63-49ba-11e6-8908-326261363032") with: rbd: failed to modprobe rbd error:executable file not found in $PATH
Jul 14 12:03:27 coreos2 kubelet-wrapper[727]: I0714 12:03:27.895290 727 reconciler.go:253] MountVolume operation started for volume "kubernetes.io/rbd/[172.28.50.231:6789 172.28.50.232:6789 172.28.50.233:6789]:foo2" (spec.Name: "ceph") to pod "f8606d63-49ba-11e6-8908-326261363032" (UID: "f8606d63-49ba-11e6-8908-326261363032").
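To illustrate what the kubelet is complaining about, a quick diagnostic on the CoreOS host would look something like this (hypothetical check; rbd and modprobe are the binaries named in the error above):

```shell
# Check whether the userspace tools the kubelet needs are in PATH.
for tool in rbd modprobe; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found at $(command -v "$tool")"
  else
    echo "$tool: not found in PATH"
  fi
done
# Check whether the rbd kernel module is already loaded on the host.
if grep -q '^rbd ' /proc/modules 2>/dev/null; then
  echo "rbd module: loaded"
else
  echo "rbd module: not loaded"
fi
```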
Is there something missing on the CoreOS hosts?
Or am I going about mounting external Ceph storage into Kubernetes the wrong way?
Cheers, Thomas