Pod running but Readiness probe failed: command "/bin/bash -c /ready-probe.sh" timed out


Raphael Stonehorse

Nov 7, 2023, 1:31:23 PM11/7/23
to kubernetes-sig-storage
root@k8s-eu-1-master:~# kubectl describe pod cassandra-0
Name:             cassandra-0
Namespace:        default
Priority:         0
Service Account:  default
Node:             k8s-eu-1-worker-1/xx.xxx.xxx.xxx
Start Time:       Tue, 07 Nov 2023 19:18:49 +0100
Labels:           app=cassandra
                  apps.kubernetes.io/pod-index=0
                  controller-revision-hash=cassandra-58c99f489d
                  statefulset.kubernetes.io/pod-name=cassandra-0
Annotations:      cni.projectcalico.org/containerID: ee11d6b9b5dfade09500ccf53d2d1e4e04aaf479c4502d76f6ce0044c6683ac4
                  cni.projectcalico.org/podIP: 192.168.200.12/32
                  cni.projectcalico.org/podIPs: 192.168.200.12/32
Status:           Running
IP:               192.168.200.12
IPs:
  IP:           192.168.200.12
Controlled By:  StatefulSet/cassandra
Containers:
  cassandra:
    Container ID:   containerd://1386bc65f0f9c11eb9351435578c37efb7081fbbf0acd7a9b2ab6d3507576e0f
    Image:          gcr.io/google-samples/cassandra:v13
    Image ID:       gcr.io/google-samples/cassandra@sha256:7a3d20afa0a46ed073a5c587b4f37e21fa860e83c60b9c42fec1e1e739d64007
    Ports:          7000/TCP, 7001/TCP, 7199/TCP, 9042/TCP
    Host Ports:     0/TCP, 0/TCP, 0/TCP, 0/TCP
    State:          Running
      Started:      Tue, 07 Nov 2023 19:18:51 +0100
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     500m
      memory:  1Gi
    Requests:
      cpu:      500m
      memory:   1Gi
    Readiness:  exec [/bin/bash -c /ready-probe.sh] delay=15s timeout=5s period=10s #success=1 #failure=3
    Environment:
      MAX_HEAP_SIZE:           512M
      HEAP_NEWSIZE:            100M
      CASSANDRA_SEEDS:         cassandra-0.cassandra.default.svc.cluster.local
      CASSANDRA_CLUSTER_NAME:  K8Demo
      CASSANDRA_DC:            DC1-K8Demo
      CASSANDRA_RACK:          Rack1-K8Demo
      POD_IP:                   (v1:status.podIP)
    Mounts:
      /srv/shared-k8s-eu-1-worker-1 from k8s-eu-1-worker-1 (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nzb6p (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  k8s-eu-1-worker-1:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  k8s-eu-1-worker-1-cassandra-0
    ReadOnly:   false
  kube-api-access-nzb6p:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Guaranteed
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age    From               Message
  ----     ------     ----   ----               -------
  Normal   Scheduled  7m28s  default-scheduler  Successfully assigned default/cassandra-0 to k8s-eu-1-worker-1
  Normal   Pulling    7m28s  kubelet            Pulling image "gcr.io/google-samples/cassandra:v13"
  Normal   Pulled     7m28s  kubelet            Successfully pulled image "gcr.io/google-samples/cassandra:v13" in 383ms (383ms including waiting)
  Normal   Created    7m28s  kubelet            Created container cassandra
  Normal   Started    7m27s  kubelet            Started container cassandra
  Warning  Unhealthy  7m     kubelet            Readiness probe failed: command "/bin/bash -c /ready-probe.sh" timed out

The pod is running, but I get `Unhealthy Readiness probe failed: command "/bin/bash -c /ready-probe.sh" timed out`:

I do not understand: is the pod in the "Running" state, or is it "Unhealthy"?

Matthew Cary

Nov 7, 2023, 2:34:02 PM11/7/23
to Raphael Stonehorse, kubernetes-sig-storage
It looks like your pod is actually okay & running as expected. k8s events and status can be a little confusing.

For the current status of the pod, look at the "Conditions" section:
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True

Everything seems to be 5x5.

The events section is a historical record of events. I personally find it somewhat confusing, because repeated events are coalesced and it can be hard to tell if events are actively firing or are old. But the pod unhealthy event is 7 minutes old and wasn't repeated:

  Warning  Unhealthy  >>7m<<     kubelet            Readiness probe failed: command "/bin/bash -c /ready-probe.sh" timed out

so likely that just means the pod took some time to go ready, failed one health check, but then got going, and there have been no events since (successful health checks don't generate events).
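If the timeout kept recurring, one option would be to loosen the probe timing in the pod template. A minimal sketch of the container's readiness probe section (the raised `timeoutSeconds` value is illustrative, not a recommendation; the other fields mirror the probe shown in the describe output above):

```yaml
# Sketch of the readinessProbe section of the cassandra container spec.
# Raising timeoutSeconds gives a slow /ready-probe.sh more room to answer.
readinessProbe:
  exec:
    command: ["/bin/bash", "-c", "/ready-probe.sh"]
  initialDelaySeconds: 15
  timeoutSeconds: 10   # the describe output above shows timeout=5s
  periodSeconds: 10
  successThreshold: 1
  failureThreshold: 3
```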




Raphael Stonehorse

Nov 8, 2023, 6:25:55 AM11/8/23
to kubernetes-sig-storage
Thank you Matt

Since I have 5 shared NFS folders:

    root@k8s-eu-1-master:~# df -h | grep /srv/
    aa.aaa.aaa.aaa:/srv/shared-k8s-eu-1-worker-1  391G  6.1G  365G   2% /mnt/data
    bb.bbb.bbb.bbb:/srv/shared-k8s-eu-1-worker-2  391G  6.1G  365G   2% /mnt/data
    cc.ccc.ccc.cc:/srv/shared-k8s-eu-1-worker-3   391G  6.1G  365G   2% /mnt/data
    dd.ddd.ddd.dd:/srv/shared-k8s-eu-1-worker-4   391G  6.1G  365G   2% /mnt/data
    ee.eee.eee.eee:/srv/shared-k8s-eu-1-worker-5  391G  6.1G  365G   2% /mnt/data

I added to `cassandra-statefulset.yaml` a second volumeMount with its volumeClaimTemplate:

            # These volume mounts are persistent. They are like inline claims,
            # but not exactly, because the names need to match exactly one of
            # the stateful pod volumes.
            volumeMounts:
            - name: k8s-eu-1-worker-1
              mountPath: /srv/shared-k8s-eu-1-worker-1
            - name: k8s-eu-1-worker-2
              mountPath: /srv/shared-k8s-eu-1-worker-2

      # These are converted to volume claims by the controller
      # and mounted at the paths mentioned above.
      # do not use these in production until ssd GCEPersistentDisk or other ssd pd
      volumeClaimTemplates:
      - metadata:
          name: k8s-eu-1-worker-1
        spec:
          accessModes: [ "ReadWriteOnce" ]
          storageClassName: k8s-eu-1-worker-1
          resources:
            requests:
              storage: 1Gi
      - metadata:
          name: k8s-eu-1-worker-2
        spec:
          accessModes: [ "ReadWriteOnce" ]
          storageClassName: k8s-eu-1-worker-2
          resources:
            requests:
              storage: 1Gi

    ---
    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: k8s-eu-1-worker-1
    provisioner: k8s-sigs.io/k8s-eu-1-worker-1
    parameters:
      type: pd-ssd
    ---
    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: k8s-eu-1-worker-2
    provisioner: k8s-sigs.io/k8s-eu-1-worker-2
    parameters:
      type: pd-ssd

It seemed to work fine at the beginning:

    root@k8s-eu-1-master:~# kubectl apply -f ./cassandraStatefulApp/cassandra-statefulset.yaml
    statefulset.apps/cassandra created

But the statefulset remains in a "not-ready" state:

    root@k8s-eu-1-master:~# kubectl get sts
    NAME        READY   AGE
    cassandra   0/3     17m

    root@k8s-eu-1-master:~# kubectl describe sts cassandra
    Name:               cassandra
    Namespace:          default
    CreationTimestamp:  Wed, 08 Nov 2023 12:02:10 +0100
    Selector:           app=cassandra
    Labels:             app=cassandra
    Annotations:        <none>
    Replicas:           3 desired | 1 total
    Update Strategy:    RollingUpdate
      Partition:        0
    Pods Status:        0 Running / 1 Waiting / 0 Succeeded / 0 Failed
    Pod Template:
      Labels:  app=cassandra
      Containers:
       cassandra:
        Image:       gcr.io/google-samples/cassandra:v13

        Ports:       7000/TCP, 7001/TCP, 7199/TCP, 9042/TCP
        Host Ports:  0/TCP, 0/TCP, 0/TCP, 0/TCP
        Limits:
          cpu:     500m
          memory:  1Gi
        Requests:
          cpu:      500m
          memory:   1Gi
        Readiness:  exec [/bin/bash -c /ready-probe.sh] delay=15s timeout=5s period=10s #success=1 #failure=3
        Environment:
          MAX_HEAP_SIZE:           512M
          HEAP_NEWSIZE:            100M
          CASSANDRA_SEEDS:         cassandra-0.cassandra.default.svc.cluster.local
          CASSANDRA_CLUSTER_NAME:  K8Demo
          CASSANDRA_DC:            DC1-K8Demo
          CASSANDRA_RACK:          Rack1-K8Demo
          POD_IP:                   (v1:status.podIP)
        Mounts:
          /srv/shared-k8s-eu-1-worker-1 from k8s-eu-1-worker-1 (rw)
          /srv/shared-k8s-eu-1-worker-2 from k8s-eu-1-worker-2 (rw)
      Volumes:  <none>
    Volume Claims:
      Name:          k8s-eu-1-worker-1
      StorageClass:  k8s-eu-1-worker-1
      Labels:        <none>
      Annotations:   <none>
      Capacity:      1Gi
      Access Modes:  [ReadWriteOnce]
      Name:          k8s-eu-1-worker-2
      StorageClass:  k8s-eu-1-worker-2
      Labels:        <none>
      Annotations:   <none>
      Capacity:      1Gi
      Access Modes:  [ReadWriteOnce]

    Events:
      Type    Reason            Age   From                    Message
      ----    ------            ----  ----                    -------
      Normal  SuccessfulCreate  18m   statefulset-controller  create Claim k8s-eu-1-worker-1-cassandra-0 Pod cassandra-0 in StatefulSet cassandra success
      Normal  SuccessfulCreate  18m   statefulset-controller  create Claim k8s-eu-1-worker-2-cassandra-0 Pod cassandra-0 in StatefulSet cassandra success
      Normal  SuccessfulCreate  18m   statefulset-controller  create Pod cassandra-0 in StatefulSet cassandra successful

The corresponding pod remains in the "Pending" state:

    root@k8s-eu-1-master:~# kubectl get pods
    NAME                                                              READY   STATUS    RESTARTS   AGE
    cassandra-0                                                       0/1     Pending   0          19m
    k8s-eu-1-worker-1-nfs-subdir-external-provisioner-79fff4ff2qx7k   1/1     Running   0          19h


    root@k8s-eu-1-master:~# kubectl describe pod cassandra-0
    Name:             cassandra-0
    Namespace:        default
    Priority:         0
    Service Account:  default
    Node:             <none>
    Labels:           app=cassandra
                      apps.kubernetes.io/pod-index=0
                      controller-revision-hash=cassandra-79d64cd8b
                      statefulset.kubernetes.io/pod-name=cassandra-0
    Annotations:      <none>
    Status:           Pending
    IP:              
    IPs:              <none>

    Controlled By:    StatefulSet/cassandra
    Containers:
      cassandra:
        Image:       gcr.io/google-samples/cassandra:v13

        Ports:       7000/TCP, 7001/TCP, 7199/TCP, 9042/TCP
        Host Ports:  0/TCP, 0/TCP, 0/TCP, 0/TCP
        Limits:
          cpu:     500m
          memory:  1Gi
        Requests:
          cpu:      500m
          memory:   1Gi
        Readiness:  exec [/bin/bash -c /ready-probe.sh] delay=15s timeout=5s period=10s #success=1 #failure=3
        Environment:
          MAX_HEAP_SIZE:           512M
          HEAP_NEWSIZE:            100M
          CASSANDRA_SEEDS:         cassandra-0.cassandra.default.svc.cluster.local
          CASSANDRA_CLUSTER_NAME:  K8Demo
          CASSANDRA_DC:            DC1-K8Demo
          CASSANDRA_RACK:          Rack1-K8Demo
          POD_IP:                   (v1:status.podIP)
        Mounts:
          /srv/shared-k8s-eu-1-worker-1 from k8s-eu-1-worker-1 (rw)
          /srv/shared-k8s-eu-1-worker-2 from k8s-eu-1-worker-2 (rw)
          /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wxx58 (ro)
    Conditions:
      Type           Status
      PodScheduled   False
    Volumes:
      k8s-eu-1-worker-1:
        Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
        ClaimName:  k8s-eu-1-worker-1-cassandra-0
        ReadOnly:   false
      k8s-eu-1-worker-2:

        Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
        ClaimName:  k8s-eu-1-worker-2-cassandra-0
        ReadOnly:   false
      kube-api-access-wxx58:

        Type:                    Projected (a volume that contains injected data from multiple sources)
        TokenExpirationSeconds:  3607
        ConfigMapName:           kube-root-ca.crt
        ConfigMapOptional:       <nil>
        DownwardAPI:             true
    QoS Class:                   Guaranteed
    Node-Selectors:              <none>
    Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
    Events:
      Type     Reason            Age                From               Message
      ----     ------            ----               ----               -------
      Warning  FailedScheduling  20m                default-scheduler  0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling..
      Warning  FailedScheduling  10m (x3 over 20m)  default-scheduler  0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling..

Only one of the two PersistentVolumeClaims is in "Bound" status; the other one is still "Pending":

    root@k8s-eu-1-master:~# kubectl get pvc
    NAME                            STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        AGE
    k8s-eu-1-worker-1-cassandra-0   Bound     pvc-4f1d877b-8e01-4b76-b4e1-25bc226fd1a5   1Gi        RWO            k8s-eu-1-worker-1   21m
    k8s-eu-1-worker-2-cassandra-0   Pending                                                                        k8s-eu-1-worker-2   21m
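For context, the "unbound immediate PersistentVolumeClaims" wording in the scheduler events comes from StorageClasses whose `volumeBindingMode` is `Immediate` (the default): the PVC must bind before the pod can be scheduled, so a claim with no working provisioner leaves the pod Pending. A sketch of where that setting lives (names reused from this thread for illustration):

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: k8s-eu-1-worker-2
provisioner: k8s-sigs.io/k8s-eu-1-worker-2
# Immediate (the default) binds the PVC as soon as it is created.
# WaitForFirstConsumer would delay binding until a pod using the claim
# is scheduled; it changes the symptom, but would not fix a missing
# provisioner.
volumeBindingMode: Immediate
```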

What's wrong with my `cassandra-statefulset.yaml` setting above?

Raphael Stonehorse

Nov 8, 2023, 7:10:38 AM11/8/23
to kubernetes-sig-storage
Hi Matt!

My fault: I didn't create the second `k8s-eu-1-worker-2-nfs-subdir-external-provisioner`.

Once I created it, the stateful pod went into the Running state.

Sorry for the noise with my last request for help;
I'm still a bit confused by Kubernetes StatefulSets.

Raphael

Hendrik Land

Nov 8, 2023, 7:16:29 AM11/8/23
to Raphael Stonehorse, kubernetes-sig-storage
Hi Raphael,

Keep in mind that the `mountPath` refers to where the PVC is mounted inside your container. This usually wouldn't be the same as your NFS export path (e.g. Cassandra wouldn't write to /srv/shared-k8s… inside its container). The mountPath needs to be the path your app writes its data to.
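For example, a minimal sketch, assuming the container writes its data under `/cassandra_data` (the directory used by the google-samples Cassandra image in the Kubernetes tutorial):

```yaml
# The volume name still has to match a volumeClaimTemplate name;
# only the mountPath changes, to the directory Cassandra writes to
# inside the container (path is an assumption about the image).
volumeMounts:
- name: k8s-eu-1-worker-1
  mountPath: /cassandra_data
```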

Cheers

Hendrik

Raphael Stonehorse

Nov 8, 2023, 12:36:39 PM11/8/23
to Hendrik Land, matt...@google.com, kubernetes-sig-storage
I do not understand what exactly I should set as `mountPath`.


I started again from scratch, from a "clean" starting point, to better understand what is going on.

These are my `shared nfs folders` :
   
        root@k8s-eu-1-master:~# df -h | grep /srv/
        aa.aaa.aaa.aaa:/srv/shared-k8s-eu-1-worker-1  391G  6.1G  365G   2% /mnt/data
        bb.bbb.bbb.bbb:/srv/shared-k8s-eu-1-worker-2  391G  6.1G  365G   2% /mnt/data
        cc.ccc.ccc.cc:/srv/shared-k8s-eu-1-worker-3   391G  6.1G  365G   2% /mnt/data
        dd.ddd.ddd.dd:/srv/shared-k8s-eu-1-worker-4   391G  6.1G  365G   2% /mnt/data
        ee.eee.eee.eee:/srv/shared-k8s-eu-1-worker-5  391G  6.1G  365G   2% /mnt/data


Following the indications found here: https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/blob/master/charts/nfs-subdir-external-provisioner/README.md , I deployed the `nfs-subdir-external-provisioner` once per NFS server (per the README's configuration table, `nfs.path` is the basepath of the mount point to be used):

        root@k8s-eu-1-master:~# helm install k8s-eu-1-worker-1-nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
        > --set nfs.server=aa.aaa.aaa.aaa \
        > --set nfs.path=/srv/shared-k8s-eu-1-worker-1 \
        > --set storageClass.name=k8s-eu-1-worker-1 \
        > --set storageClass.provisionerName=k8s-sigs.io/k8s-eu-1-worker-1 \
        > --set nfs.volumeName=k8s-eu-1-worker-1-nfs-v
        NAME: k8s-eu-1-worker-1-nfs-subdir-external-provisioner
        LAST DEPLOYED: Wed Nov  8 17:47:20 2023
        NAMESPACE: default
        STATUS: deployed
        REVISION: 1
        TEST SUITE: None
   
        root@k8s-eu-1-master:~# helm install k8s-eu-1-worker-2-nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
        > --set nfs.server=bb.bbb.bbb.bbb \
        > --set nfs.path=/srv/shared-k8s-eu-1-worker-2 \
        > --set storageClass.name=k8s-eu-1-worker-2 \
        > --set storageClass.provisionerName=k8s-sigs.io/k8s-eu-1-worker-2 \
        > --set nfs.volumeName=k8s-eu-1-worker-2-nfs-v
        NAME: k8s-eu-1-worker-2-nfs-subdir-external-provisioner
        LAST DEPLOYED: Wed Nov  8 17:48:34 2023
        NAMESPACE: default
        STATUS: deployed
        REVISION: 1
        TEST SUITE: None

No `pv` and no `pvc`:

        root@k8s-eu-1-master:~# kubectl get pvc
        No resources found in default namespace.

As far as I understand (but I'm here to ask you for clarification), from the example "StatefulSet Basics", https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#creating-a-statefulset ,
the `mountPath:` is the folder where the app, in my case Cassandra, will put its data.

The same seems true in the example from the book `Kubernetes in Action, second edition`, https://github.com/luksa/kubernetes-in-action-2nd-edition/blob/master/Chapter15/sts.quiz.yaml :
the `mountPath` is where the data is going to be put, if I understand it correctly.
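For reference, the tutorial's own template (quoted from memory, so treat the exact values as illustrative) mounts the claim at Cassandra's data directory rather than at an NFS export path:

```yaml
# Pattern from the "StatefulSet Basics" style examples: the volume name
# matches the volumeClaimTemplate, and mountPath is the in-container
# data directory.
volumeMounts:
- name: cassandra-data
  mountPath: /cassandra_data
```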


I applied the `cassandra-statefulset.yaml`:

        root@k8s-eu-1-master:~# kubectl apply -f ./cassandraStatefulApp/cassandra-statefulset.yaml
        statefulset.apps/cassandra created
        Warning: resource storageclasses/k8s-eu-1-worker-2 is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
        The StorageClass "k8s-eu-1-worker-2" is invalid: parameters: Forbidden: updates to parameters are forbidden.

And this is the resulting StatefulSet :

        root@k8s-eu-1-master:~# kubectl get sts
        NAME        READY   AGE
        cassandra   1/3     101s


   
        root@k8s-eu-1-master:~# kubectl describe sts cassandra
        Name:               cassandra
        Namespace:          default
        CreationTimestamp:  Wed, 08 Nov 2023 18:20:06 +0100

        Selector:           app=cassandra
        Labels:             app=cassandra
        Annotations:        <none>
        Replicas:           3 desired | 2 total

        Update Strategy:    RollingUpdate
          Partition:        0
        Pods Status:        2 Running / 0 Waiting / 0 Succeeded / 0 Failed
          Normal  SuccessfulCreate  2m13s  statefulset-controller  create Claim k8s-eu-1-worker-1-cassandra-0 Pod cassandra-0 in StatefulSet cassandra success
          Normal  SuccessfulCreate  2m13s  statefulset-controller  create Claim k8s-eu-1-worker-2-cassandra-0 Pod cassandra-0 in StatefulSet cassandra success
          Normal  SuccessfulCreate  2m13s  statefulset-controller  create Pod cassandra-0 in StatefulSet cassandra successful
          Normal  SuccessfulCreate  96s    statefulset-controller  create Claim k8s-eu-1-worker-1-cassandra-1 Pod cassandra-1 in StatefulSet cassandra success
          Normal  SuccessfulCreate  96s    statefulset-controller  create Claim k8s-eu-1-worker-2-cassandra-1 Pod cassandra-1 in StatefulSet cassandra success
          Normal  SuccessfulCreate  96s    statefulset-controller  create Pod cassandra-1 in StatefulSet cassandra successful


Pods :

        root@k8s-eu-1-master:~# kubectl get pods
        NAME                                                              READY   STATUS             RESTARTS      AGE
        cassandra-0                                                       1/1     Running            0             3m42s
        cassandra-1                                                       0/1     CrashLoopBackOff   3 (41s ago)   3m5s
        k8s-eu-1-worker-1-nfs-subdir-external-provisioner-79fff4ff9tn7z   1/1     Running            0             36m
        k8s-eu-1-worker-2-nfs-subdir-external-provisioner-bf5645b8khs9q   1/1     Running            0             35m


Pod `cassandra-0` in the `Running` state:


        root@k8s-eu-1-master:~# kubectl describe pod cassandra-0
        Name:             cassandra-0
        Namespace:        default
        Priority:         0
        Service Account:  default
        Node:             k8s-eu-1-worker-1/aa.aaa.aaa.aaa
        Start Time:       Wed, 08 Nov 2023 18:20:08 +0100

        Labels:           app=cassandra
                          apps.kubernetes.io/pod-index=0
                          controller-revision-hash=cassandra-79d64cd8b
                          statefulset.kubernetes.io/pod-name=cassandra-0
        Annotations:      cni.projectcalico.org/containerID: 5f905edc4dd1cacc3e7a2fe0fd3299734b73cf8befc89b04e17f3ee42ff87198
                          cni.projectcalico.org/podIP: 192.168.200.17/32
                          cni.projectcalico.org/podIPs: 192.168.200.17/32
        Status:           Running
        IP:               192.168.200.17
        IPs:
          IP:           192.168.200.17

        Controlled By:  StatefulSet/cassandra
        Containers:
          cassandra:
            Container ID:   containerd://2642129ba57afc536ed35534fa78084b15973a0e6a4f48e598697ca48230d52c

            Image:          gcr.io/google-samples/cassandra:v13
            Image ID:       gcr.io/google-samples/cassandra@sha256:7a3d20afa0a46ed073a5c587b4f37e21fa860e83c60b9c42fec1e1e739d64007
            Ports:          7000/TCP, 7001/TCP, 7199/TCP, 9042/TCP
            Host Ports:     0/TCP, 0/TCP, 0/TCP, 0/TCP
            State:          Running
              Started:      Wed, 08 Nov 2023 18:20:09 +0100

            Ready:          True
            Restart Count:  0
            Limits:
              cpu:     500m
              memory:  1Gi
            Requests:
              cpu:      500m
              memory:   1Gi
            Readiness:  exec [/bin/bash -c /ready-probe.sh] delay=15s timeout=5s period=10s #success=1 #failure=3
            Environment:
              MAX_HEAP_SIZE:           512M
              HEAP_NEWSIZE:            100M
              CASSANDRA_SEEDS:         cassandra-0.cassandra.default.svc.cluster.local
              CASSANDRA_CLUSTER_NAME:  K8Demo
              CASSANDRA_DC:            DC1-K8Demo
              CASSANDRA_RACK:          Rack1-K8Demo
              POD_IP:                   (v1:status.podIP)
            Mounts:
              /srv/shared-k8s-eu-1-worker-1 from k8s-eu-1-worker-1 (rw)
              /srv/shared-k8s-eu-1-worker-2 from k8s-eu-1-worker-2 (rw)
              /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-589qf (ro)

        Conditions:
          Type              Status
          Initialized       True
          Ready             True
          ContainersReady   True
          PodScheduled      True
        Volumes:
          k8s-eu-1-worker-1:
            Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
            ClaimName:  k8s-eu-1-worker-1-cassandra-0
            ReadOnly:   false
          k8s-eu-1-worker-2:
            Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
            ClaimName:  k8s-eu-1-worker-2-cassandra-0
            ReadOnly:   false
          kube-api-access-589qf:

            Type:                    Projected (a volume that contains injected data from multiple sources)
            TokenExpirationSeconds:  3607
            ConfigMapName:           kube-root-ca.crt
            ConfigMapOptional:       <nil>
            DownwardAPI:             true
        QoS Class:                   Guaranteed
        Node-Selectors:              <none>
        Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                                     node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
        Events:
          Type     Reason            Age    From               Message
          ----     ------            ----   ----               -------
          Warning  FailedScheduling  5m2s   default-scheduler  0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling..
          Normal   Scheduled         5m     default-scheduler  Successfully assigned default/cassandra-0 to k8s-eu-1-worker-1
          Normal   Pulling           4m59s  kubelet            Pulling image "gcr.io/google-samples/cassandra:v13"
          Normal   Pulled            4m59s  kubelet            Successfully pulled image "gcr.io/google-samples/cassandra:v13" in 384ms (384ms including waiting)
          Normal   Created           4m59s  kubelet            Created container cassandra
          Normal   Started           4m59s  kubelet            Started container cassandra
          Warning  Unhealthy         4m30s  kubelet            Readiness probe failed: command "/bin/bash -c /ready-probe.sh" timed out


Pod `cassandra-1` in the `CrashLoopBackOff` state:


        root@k8s-eu-1-master:~# kubectl describe pod cassandra-1
        Name:             cassandra-1

        Namespace:        default
        Priority:         0
        Service Account:  default
        Node:             k8s-eu-1-worker-2/bb.bbb.bbb.bbb
        Start Time:       Wed, 08 Nov 2023 18:20:44 +0100
        Labels:           app=cassandra
                          apps.kubernetes.io/pod-index=1
                          controller-revision-hash=cassandra-79d64cd8b
                          statefulset.kubernetes.io/pod-name=cassandra-1
        Annotations:      cni.projectcalico.org/containerID: 5aa8466c7b79851e92b9f073f5c2b7adfa10f8caaa5f123ab7b9bdad48e7c042
                          cni.projectcalico.org/podIP: 192.168.236.28/32
                          cni.projectcalico.org/podIPs: 192.168.236.28/32
        Status:           Running
        IP:               192.168.236.28
        IPs:
          IP:           192.168.236.28

        Controlled By:  StatefulSet/cassandra
        Containers:
          cassandra:
            Container ID:   containerd://893e65cce8ff3c72777c8b0c5c170a6e0663fedc276dcafb0da56a2d853357b1

            Image:          gcr.io/google-samples/cassandra:v13
            Image ID:       gcr.io/google-samples/cassandra@sha256:7a3d20afa0a46ed073a5c587b4f37e21fa860e83c60b9c42fec1e1e739d64007
            Ports:          7000/TCP, 7001/TCP, 7199/TCP, 9042/TCP
            Host Ports:     0/TCP, 0/TCP, 0/TCP, 0/TCP
            State:          Waiting
              Reason:       CrashLoopBackOff
            Last State:     Terminated
              Reason:       Error
              Exit Code:    3
              Started:      Wed, 08 Nov 2023 18:25:37 +0100
              Finished:     Wed, 08 Nov 2023 18:26:01 +0100
            Ready:          False
            Restart Count:  5

            Limits:
              cpu:     500m
              memory:  1Gi
            Requests:
              cpu:      500m
              memory:   1Gi
            Readiness:  exec [/bin/bash -c /ready-probe.sh] delay=15s timeout=5s period=10s #success=1 #failure=3
            Environment:
              MAX_HEAP_SIZE:           512M
              HEAP_NEWSIZE:            100M
              CASSANDRA_SEEDS:         cassandra-0.cassandra.default.svc.cluster.local
              CASSANDRA_CLUSTER_NAME:  K8Demo
              CASSANDRA_DC:            DC1-K8Demo
              CASSANDRA_RACK:          Rack1-K8Demo
              POD_IP:                   (v1:status.podIP)
            Mounts:
              /srv/shared-k8s-eu-1-worker-1 from k8s-eu-1-worker-1 (rw)
              /srv/shared-k8s-eu-1-worker-2 from k8s-eu-1-worker-2 (rw)
              /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dth49 (ro)

        Conditions:
          Type              Status
          Initialized       True
          Ready             False
          ContainersReady   False
          PodScheduled      True
        Volumes:
          k8s-eu-1-worker-1:
            Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
            ClaimName:  k8s-eu-1-worker-1-cassandra-1

            ReadOnly:   false
          k8s-eu-1-worker-2:
            Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
            ClaimName:  k8s-eu-1-worker-2-cassandra-1
            ReadOnly:   false
          kube-api-access-dth49:

            Type:                    Projected (a volume that contains injected data from multiple sources)
            TokenExpirationSeconds:  3607
            ConfigMapName:           kube-root-ca.crt
            ConfigMapOptional:       <nil>
            DownwardAPI:             true
        QoS Class:                   Guaranteed
        Node-Selectors:              <none>
        Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                                     node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
        Events:
          Type     Reason            Age                    From               Message
          ----     ------            ----                   ----               -------
          Warning  FailedScheduling  6m43s                  default-scheduler  0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling..
          Normal   Scheduled         6m42s                  default-scheduler  Successfully assigned default/cassandra-1 to k8s-eu-1-worker-2
          Normal   Pulled            6m41s                  kubelet            Successfully pulled image "gcr.io/google-samples/cassandra:v13" in 423ms (423ms including waiting)
          Normal   Pulled            6m14s                  kubelet            Successfully pulled image "gcr.io/google-samples/cassandra:v13" in 416ms (416ms including waiting)
          Normal   Pulled            5m37s                  kubelet            Successfully pulled image "gcr.io/google-samples/cassandra:v13" in 385ms (385ms including waiting)
          Normal   Pulling           4m44s (x4 over 6m41s)  kubelet            Pulling image "gcr.io/google-samples/cassandra:v13"
          Normal   Created           4m43s (x4 over 6m41s)  kubelet            Created container cassandra
          Normal   Started           4m43s (x4 over 6m40s)  kubelet            Started container cassandra
          Normal   Pulled            4m43s                  kubelet            Successfully pulled image "gcr.io/google-samples/cassandra:v13" in 401ms (401ms including waiting)
          Warning  Unhealthy         4m19s (x2 over 5m50s)  kubelet            Readiness probe failed:
          Warning  BackOff           4m7s (x5 over 5m49s)   kubelet            Back-off restarting failed container cassandra in pod cassandra-1_default(3cdaae82-7f9e-4089-ac2d-9ceecba12bcc)
          Warning  Unhealthy         88s (x4 over 6m18s)    kubelet            Readiness probe failed: nodetool: Failed to connect to '127.0.0.1:7199' - ConnectException: 'Connection refused (Connection refused)'.
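For reference, the `nodetool: Failed to connect to '127.0.0.1:7199'` message means the readiness command runs before Cassandra's JMX port is listening. The exact contents of `/ready-probe.sh` in the `gcr.io/google-samples/cassandra:v13` image aren't shown here; below is only a minimal stand-alone sketch of the kind of check such a script performs (the `is_ready` function and the sample `nodetool status` lines are illustrative, not taken from the image):

```shell
#!/bin/bash
# Sketch of a Cassandra readiness check: the node counts as ready only when
# `nodetool status` lists its own IP with state UN (Up/Normal).
# is_ready and the sample output below are illustrative, not from the image.

is_ready() {
  local node_ip="$1" status_output="$2"
  # Escape the dots so the IP matches literally, then look for a "UN <ip>" row.
  local ip_re="${node_ip//./\\.}"
  echo "$status_output" | grep -Eq "^UN[[:space:]]+${ip_re}([[:space:]]|$)"
}

sample_up="UN  192.168.200.19  103.25 KiB  32  100.0%  aa11393a-e0c3  Rack1-K8Demo"
sample_joining="UJ  192.168.200.19  103.25 KiB  32  100.0%  aa11393a-e0c3  Rack1-K8Demo"

is_ready 192.168.200.19 "$sample_up"      && echo "ready"
is_ready 192.168.200.19 "$sample_joining" || echo "not ready"
```

A check like this fails (and the probe with it) for as long as the node is still joining the ring or cannot reach its own JMX port, which matches the events above.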

After deploying `cassandra-statefulset.yaml`, these are the Persistent Volumes and Persistent Volume Claims:

        root@k8s-eu-1-master:~# kubectl get pv
        NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                   STORAGECLASS        REASON   AGE
        pvc-3e93df7b-df0c-46c1-a3bd-5f8c99be2802   1Gi        RWO            Delete           Bound    default/k8s-eu-1-worker-2-cassandra-0   k8s-eu-1-worker-2            10m
        pvc-9f18934f-43a9-4d52-8ea8-bd55b5b6398c   1Gi        RWO            Delete           Bound    default/k8s-eu-1-worker-1-cassandra-1   k8s-eu-1-worker-1            9m30s
        pvc-a1857750-f4f7-47ef-8c9b-d35bd683eb17   1Gi        RWO            Delete           Bound    default/k8s-eu-1-worker-1-cassandra-0   k8s-eu-1-worker-1            10m
        pvc-dc2d429d-3f67-4e76-bfba-a9fb7115cc81   1Gi        RWO            Delete           Bound    default/k8s-eu-1-worker-2-cassandra-1   k8s-eu-1-worker-2            9m30s

       
        root@k8s-eu-1-master:~# kubectl get pvc
        NAME                            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        AGE
        k8s-eu-1-worker-1-cassandra-0   Bound    pvc-a1857750-f4f7-47ef-8c9b-d35bd683eb17   1Gi        RWO            k8s-eu-1-worker-1   10m
        k8s-eu-1-worker-1-cassandra-1   Bound    pvc-9f18934f-43a9-4d52-8ea8-bd55b5b6398c   1Gi        RWO            k8s-eu-1-worker-1   9m34s
        k8s-eu-1-worker-2-cassandra-0   Bound    pvc-3e93df7b-df0c-46c1-a3bd-5f8c99be2802   1Gi        RWO            k8s-eu-1-worker-2   10m
        k8s-eu-1-worker-2-cassandra-1   Bound    pvc-dc2d429d-3f67-4e76-bfba-a9fb7115cc81   1Gi        RWO            k8s-eu-1-worker-2   9m34s


My objective is to use these shared NFS folders as the paths where Cassandra will put its data:

        root@k8s-eu-1-master:~# df -h | grep /srv/
        aa.aaa.aaa.aaa:/srv/shared-k8s-eu-1-worker-1  391G  6.1G  365G   2% /mnt/data
        bb.bbb.bbb.bbb:/srv/shared-k8s-eu-1-worker-2  391G  6.1G  365G   2% /mnt/data
        cc.ccc.ccc.cc:/srv/shared-k8s-eu-1-worker-3   391G  6.1G  365G   2% /mnt/data
        dd.ddd.ddd.dd:/srv/shared-k8s-eu-1-worker-4   391G  6.1G  365G   2% /mnt/data
        ee.eee.eee.eee:/srv/shared-k8s-eu-1-worker-5  391G  6.1G  365G   2% /mnt/data

So I thought that I had to explicitly set these folders in the `cassandra-statefulset.yaml` file.
What exactly should I set as `mountPath` in `cassandra-statefulset.yaml`?

Matthew Cary

Nov 8, 2023, 12:58:28 PM11/8/23
to Raphael Stonehorse, Hendrik Land, kubernetes-sig-storage
mountPath is what your Cassandra application sees: it is the path that appears in the filesystem inside that container.

I'm not totally familiar with the nfs bits in the cassandra helm chart, but it looks like the nfs path is the exported name of the share. Think of it as the address of the volume. It doesn't matter to the Cassandra application; it's only used by k8s to find the volume.

So the tl;dr is that the mountPath in the statefulset resource needs to match what the cassandra application is configured to use. I think for this cassandra helm chart this is done in a configmap somewhere.

If you run kubectl logs on the failing cassandra pod, I wouldn't be surprised to find an error message about a directory not existing; that will give a clue as to what the mountPath should be. Another option is to look through the helm chart defaults: there's probably a cassandra data path variable.
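As a concrete illustration of that matching, a StatefulSet fragment could look something like the following. This is a hypothetical sketch, not the actual `cassandra-statefulset.yaml`: the `/cassandra_data` path is what the container logs in this thread report as the data directory, while the claim-template name and storage class are assumptions based on the PVC listing above.

```yaml
# Hypothetical fragment of cassandra-statefulset.yaml.
# The mountPath is the directory the Cassandra process writes to
# (/cassandra_data per the container logs); the volume behind it is
# resolved through the PersistentVolumeClaim, not through the NFS path.
containers:
  - name: cassandra
    volumeMounts:
      - name: cassandra-data          # must match a volumeClaimTemplate name
        mountPath: /cassandra_data    # must match Cassandra's data directory
volumeClaimTemplates:
  - metadata:
      name: cassandra-data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: k8s-eu-1-worker-1   # the NFS provisioner's class (assumed)
      resources:
        requests:
          storage: 1Gi
```

The NFS export path (`/srv/shared-...`) never appears here: it lives in the PersistentVolume or provisioner configuration, which Kubernetes uses to bind the claim.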

Raphael Stonehorse

Nov 8, 2023, 3:51:41 PM11/8/23
to Matthew Cary, Hendrik Land, kubernetes-sig-storage
I discovered that I was hitting a known Ubuntu-with-Docker remote-repo pulling issue.
I solved it and then tried again to apply the same `cassandra-statefulset.yaml` file.
Again, one of the two pods ends up in the "CrashLoopBackOff" state:


root@k8s-eu-1-master:~# kubectl get pods
NAME                                                              READY   STATUS             RESTARTS      AGE
cassandra-0                                                       1/1     Running            0             5m52s
cassandra-1                                                       0/1     CrashLoopBackOff   4 (81s ago)   5m5s
k8s-eu-1-worker-1-nfs-subdir-external-provisioner-79fff4ff9tn7z   1/1     Running            0             3h59m
k8s-eu-1-worker-2-nfs-subdir-external-provisioner-bf5645b8khs9q   1/1     Running            0             3h57m


Logs of pod `cassandra-0` :

        root@k8s-eu-1-master:~# kubectl logs cassandra-0
        Starting Cassandra on 192.168.200.19
        CASSANDRA_CONF_DIR /etc/cassandra
        CASSANDRA_CFG /etc/cassandra/cassandra.yaml
        CASSANDRA_AUTO_BOOTSTRAP true
        CASSANDRA_BROADCAST_ADDRESS 192.168.200.19
        CASSANDRA_BROADCAST_RPC_ADDRESS 192.168.200.19
        CASSANDRA_CLUSTER_NAME K8Demo
        CASSANDRA_COMPACTION_THROUGHPUT_MB_PER_SEC
        CASSANDRA_CONCURRENT_COMPACTORS
        CASSANDRA_CONCURRENT_READS
        CASSANDRA_CONCURRENT_WRITES
        CASSANDRA_COUNTER_CACHE_SIZE_IN_MB
        CASSANDRA_DC DC1-K8Demo
        CASSANDRA_DISK_OPTIMIZATION_STRATEGY ssd
        CASSANDRA_ENDPOINT_SNITCH SimpleSnitch
        CASSANDRA_GC_WARN_THRESHOLD_IN_MS
        CASSANDRA_INTERNODE_COMPRESSION
        CASSANDRA_KEY_CACHE_SIZE_IN_MB
        CASSANDRA_LISTEN_ADDRESS 192.168.200.19
        CASSANDRA_LISTEN_INTERFACE
        CASSANDRA_MEMTABLE_ALLOCATION_TYPE
        CASSANDRA_MEMTABLE_CLEANUP_THRESHOLD
        CASSANDRA_MEMTABLE_FLUSH_WRITERS
        CASSANDRA_MIGRATION_WAIT 1
        CASSANDRA_NUM_TOKENS 32
        CASSANDRA_RACK Rack1-K8Demo
        CASSANDRA_RING_DELAY 30000
        CASSANDRA_RPC_ADDRESS 0.0.0.0
        CASSANDRA_RPC_INTERFACE
        CASSANDRA_SEEDS cassandra-0.cassandra.default.svc.cluster.local
        CASSANDRA_SEED_PROVIDER org.apache.cassandra.locator.SimpleSeedProvider
        changed ownership of '/cassandra_data/data' from root to cassandra
        changed ownership of '/cassandra_data' from root to cassandra
        changed ownership of '/etc/cassandra/logback.xml' from root to cassandra
        changed ownership of '/etc/cassandra/cassandra-env.sh' from root to cassandra
        changed ownership of '/etc/cassandra/jvm.options' from root to cassandra
        changed ownership of '/etc/cassandra/cassandra.yaml' from root to cassandra
        changed ownership of '/etc/cassandra/cassandra-rackdc.properties' from root to cassandra
        changed ownership of '/etc/cassandra' from root to cassandra
        OpenJDK 64-Bit Server VM warning: Cannot open file /usr/local/apache-cassandra-3.11.2/logs/gc.log due to No such file or directory
       
        INFO  20:40:34 Configuration location: file:/etc/cassandra/cassandra.yaml
        INFO  20:40:35 Node configuration:[allocate_tokens_for_keyspace=null; authenticator=AllowAllAuthenticator; authorizer=AllowAllAuthorizer; auto_bootstrap=true; auto_snapshot=true; back_pressure_enabled=false; back_pressure_strategy=null; batch_size_fail_threshold_in_kb=50; batch_size_warn_threshold_in_kb=5; batchlog_replay_throttle_in_kb=1024; broadcast_address=192.168.200.19; broadcast_rpc_address=192.168.200.19; buffer_pool_use_heap_if_exhausted=true; cas_contention_timeout_in_ms=1000; cdc_enabled=false; cdc_free_space_check_interval_ms=250; cdc_raw_directory=null; cdc_total_space_in_mb=0; client_encryption_options=<REDACTED>; cluster_name=K8Demo; column_index_cache_size_in_kb=2; column_index_size_in_kb=64; commit_failure_policy=stop; commitlog_compression=null; commitlog_directory=/cassandra_data/commitlog; commitlog_max_compression_buffers_in_pool=3; commitlog_periodic_queue_size=-1; commitlog_segment_size_in_mb=32; commitlog_sync=periodic; commitlog_sync_batch_window_in_ms=NaN; commitlog_sync_period_in_ms=10000; commitlog_total_space_in_mb=null; compaction_large_partition_warning_threshold_mb=100; compaction_throughput_mb_per_sec=16; concurrent_compactors=null; concurrent_counter_writes=32; concurrent_materialized_view_writes=32; concurrent_reads=32; concurrent_replicates=null; concurrent_writes=32; counter_cache_keys_to_save=2147483647; counter_cache_save_period=7200; counter_cache_size_in_mb=null; counter_write_request_timeout_in_ms=5000; credentials_cache_max_entries=1000; credentials_update_interval_in_ms=-1; credentials_validity_in_ms=2000; cross_node_timeout=false; data_file_directories=[Ljava.lang.String;@275710fc; disk_access_mode=auto; disk_failure_policy=stop; disk_optimization_estimate_percentile=0.95; disk_optimization_page_cross_chance=0.1; disk_optimization_strategy=ssd; dynamic_snitch=true; dynamic_snitch_badness_threshold=0.1; dynamic_snitch_reset_interval_in_ms=600000; dynamic_snitch_update_interval_in_ms=100; 
enable_materialized_views=true; enable_scripted_user_defined_functions=false; enable_user_defined_functions=false; enable_user_defined_functions_threads=true; encryption_options=null; endpoint_snitch=GossipingPropertyFileSnitch; file_cache_round_up=null; file_cache_size_in_mb=null; gc_log_threshold_in_ms=200; gc_warn_threshold_in_ms=1000; hinted_handoff_disabled_datacenters=[]; hinted_handoff_enabled=true; hinted_handoff_throttle_in_kb=1024; hints_compression=null; hints_directory=/cassandra_data/hints; hints_flush_period_in_ms=10000; incremental_backups=false; index_interval=null; index_summary_capacity_in_mb=null; index_summary_resize_interval_in_minutes=60; initial_token=null; inter_dc_stream_throughput_outbound_megabits_per_sec=200; inter_dc_tcp_nodelay=false; internode_authenticator=null; internode_compression=all; internode_recv_buff_size_in_bytes=0; internode_send_buff_size_in_bytes=0; key_cache_keys_to_save=2147483647; key_cache_save_period=14400; key_cache_size_in_mb=null; listen_address=192.168.200.19; listen_interface=null; listen_interface_prefer_ipv6=false; listen_on_broadcast_address=false; max_hint_window_in_ms=10800000; max_hints_delivery_threads=2; max_hints_file_size_in_mb=128; max_mutation_size_in_kb=null; max_streaming_retries=3; max_value_size_in_mb=256; memtable_allocation_type=heap_buffers; memtable_cleanup_threshold=null; memtable_flush_writers=0; memtable_heap_space_in_mb=null; memtable_offheap_space_in_mb=null; min_free_space_per_drive_in_mb=50; native_transport_max_concurrent_connections=-1; native_transport_max_concurrent_connections_per_ip=-1; native_transport_max_frame_size_in_mb=256; native_transport_max_threads=128; native_transport_port=9042; native_transport_port_ssl=null; num_tokens=32; otc_backlog_expiration_interval_ms=200; otc_coalescing_enough_coalesced_messages=8; otc_coalescing_strategy=DISABLED; otc_coalescing_window_us=200; partitioner=org.apache.cassandra.dht.Murmur3Partitioner; permissions_cache_max_entries=1000; 
permissions_update_interval_in_ms=-1; permissions_validity_in_ms=2000; phi_convict_threshold=8.0; prepared_statements_cache_size_mb=null; range_request_timeout_in_ms=10000; read_request_timeout_in_ms=5000; request_scheduler=org.apache.cassandra.scheduler.NoScheduler; request_scheduler_id=null; request_scheduler_options=null; request_timeout_in_ms=10000; role_manager=CassandraRoleManager; roles_cache_max_entries=1000; roles_update_interval_in_ms=-1; roles_validity_in_ms=2000; row_cache_class_name=org.apache.cassandra.cache.OHCProvider; row_cache_keys_to_save=2147483647; row_cache_save_period=0; row_cache_size_in_mb=0; rpc_address=0.0.0.0; rpc_interface=null; rpc_interface_prefer_ipv6=false; rpc_keepalive=true; rpc_listen_backlog=50; rpc_max_threads=2147483647; rpc_min_threads=16; rpc_port=9160; rpc_recv_buff_size_in_bytes=null; rpc_send_buff_size_in_bytes=null; rpc_server_type=sync; saved_caches_directory=/cassandra_data/saved_caches; seed_provider=org.apache.cassandra.locator.SimpleSeedProvider{seeds=cassandra-0.cassandra.default.svc.cluster.local}; server_encryption_options=<REDACTED>; slow_query_log_timeout_in_ms=500; snapshot_before_compaction=false; ssl_storage_port=7001; sstable_preemptive_open_interval_in_mb=50; start_native_transport=true; start_rpc=false; storage_port=7000; stream_throughput_outbound_megabits_per_sec=200; streaming_keep_alive_period_in_secs=300; streaming_socket_timeout_in_ms=86400000; thrift_framed_transport_size_in_mb=15; thrift_max_message_length_in_mb=16; thrift_prepared_statements_cache_size_mb=null; tombstone_failure_threshold=100000; tombstone_warn_threshold=1000; tracetype_query_ttl=86400; tracetype_repair_ttl=604800; transparent_data_encryption_options=org.apache.cassandra.config.TransparentDataEncryptionOptions@525f1e4e; trickle_fsync=false; trickle_fsync_interval_in_kb=10240; truncate_request_timeout_in_ms=60000; unlogged_batch_across_partitions_warn_threshold=10; user_defined_function_fail_timeout=1500; 
user_defined_function_warn_timeout=500; user_function_timeout_policy=die; windows_timer_interval=1; write_request_timeout_in_ms=2000]
        INFO  20:40:35 DiskAccessMode 'auto' determined to be mmap, indexAccessMode is mmap
        INFO  20:40:35 Global memtable on-heap threshold is enabled at 128MB
        INFO  20:40:35 Global memtable off-heap threshold is enabled at 128MB
        INFO  20:40:36 Initialized back-pressure with high ratio: 0.9, factor: 5, flow: FAST, window size: 2000.
        INFO  20:40:36 Back-pressure is disabled with strategy null.
        INFO  20:40:36 Unable to load cassandra-topology.properties; compatibility mode disabled
        INFO  20:40:36 Overriding RING_DELAY to 30000ms
        INFO  20:40:37 Configured JMX server at: service:jmx:rmi://127.0.0.1/jndi/rmi://127.0.0.1:7199/jmxrmi
        INFO  20:40:37 Hostname: cassandra-0.cassandra.default.svc.cluster.local
        INFO  20:40:37 JVM vendor/version: OpenJDK 64-Bit Server VM/1.8.0_151
        INFO  20:40:37 Heap size: 512.000MiB/512.000MiB
        INFO  20:40:37 Code Cache Non-heap memory: init = 2555904(2496K) used = 4263040(4163K) committed = 4325376(4224K) max = 251658240(245760K)
        INFO  20:40:37 Metaspace Non-heap memory: init = 0(0K) used = 17516032(17105K) committed = 17956864(17536K) max = -1(-1K)
        INFO  20:40:37 Compressed Class Space Non-heap memory: init = 0(0K) used = 2109568(2060K) committed = 2228224(2176K) max = 1073741824(1048576K)
        INFO  20:40:37 G1 Eden Space Heap memory: init = 28311552(27648K) used = 31457280(30720K) committed = 333447168(325632K) max = -1(-1K)
        INFO  20:40:37 G1 Survivor Space Heap memory: init = 0(0K) used = 5242880(5120K) committed = 5242880(5120K) max = -1(-1K)
        INFO  20:40:37 G1 Old Gen Heap memory: init = 508559360(496640K) used = 3249664(3173K) committed = 508559360(496640K) max = 536870912(524288K)
        INFO  20:40:37 Classpath: /etc/cassandra:/usr/local/apache-cassandra-3.11.2/build/classes/main:/usr/local/apache-cassandra-3.11.2/build/classes/thrift:/usr/local/apache-cassandra-3.11.2/lib/HdrHistogram-2.1.9.jar:/usr/local/apache-cassandra-3.11.2/lib/ST4-4.0.8.jar:/usr/local/apache-cassandra-3.11.2/lib/airline-0.6.jar:/usr/local/apache-cassandra-3.11.2/lib/antlr-runtime-3.5.2.jar:/usr/local/apache-cassandra-3.11.2/lib/apache-cassandra-3.11.2.jar:/usr/local/apache-cassandra-3.11.2/lib/apache-cassandra-thrift-3.11.2.jar:/usr/local/apache-cassandra-3.11.2/lib/asm-5.0.4.jar:/usr/local/apache-cassandra-3.11.2/lib/caffeine-2.2.6.jar:/usr/local/apache-cassandra-3.11.2/lib/cassandra-driver-core-3.0.1-shaded.jar:/usr/local/apache-cassandra-3.11.2/lib/commons-cli-1.1.jar:/usr/local/apache-cassandra-3.11.2/lib/commons-codec-1.9.jar:/usr/local/apache-cassandra-3.11.2/lib/commons-lang3-3.1.jar:/usr/local/apache-cassandra-3.11.2/lib/commons-math3-3.2.jar:/usr/local/apache-cassandra-3.11.2/lib/compress-lzf-0.8.4.jar:/usr/local/apache-cassandra-3.11.2/lib/concurrent-trees-2.4.0.jar:/usr/local/apache-cassandra-3.11.2/lib/concurrentlinkedhashmap-lru-1.4.jar:/usr/local/apache-cassandra-3.11.2/lib/disruptor-3.0.1.jar:/usr/local/apache-cassandra-3.11.2/lib/ecj-4.4.2.jar:/usr/local/apache-cassandra-3.11.2/lib/guava-18.0.jar:/usr/local/apache-cassandra-3.11.2/lib/high-scale-lib-1.0.6.jar:/usr/local/apache-cassandra-3.11.2/lib/hppc-0.5.4.jar:/usr/local/apache-cassandra-3.11.2/lib/jackson-core-asl-1.9.13.jar:/usr/local/apache-cassandra-3.11.2/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/apache-cassandra-3.11.2/lib/jamm-0.3.0.jar:/usr/local/apache-cassandra-3.11.2/lib/javax.inject.jar:/usr/local/apache-cassandra-3.11.2/lib/jbcrypt-0.3m.jar:/usr/local/apache-cassandra-3.11.2/lib/jcl-over-slf4j-1.7.7.jar:/usr/local/apache-cassandra-3.11.2/lib/jctools-core-1.2.1.jar:/usr/local/apache-cassandra-3.11.2/lib/jflex-1.6.0.jar:/usr/local/apache-cassandra-3.11.2/lib/jna-4.2.2.jar:/usr/local/ap
ache-cassandra-3.11.2/lib/joda-time-2.4.jar:/usr/local/apache-cassandra-3.11.2/lib/json-simple-1.1.jar:/usr/local/apache-cassandra-3.11.2/lib/jstackjunit-0.0.1.jar:/usr/local/apache-cassandra-3.11.2/lib/libthrift-0.9.2.jar:/usr/local/apache-cassandra-3.11.2/lib/log4j-over-slf4j-1.7.7.jar:/usr/local/apache-cassandra-3.11.2/lib/logback-classic-1.1.3.jar:/usr/local/apache-cassandra-3.11.2/lib/logback-core-1.1.3.jar:/usr/local/apache-cassandra-3.11.2/lib/lz4-1.3.0.jar:/usr/local/apache-cassandra-3.11.2/lib/metrics-core-3.1.0.jar:/usr/local/apache-cassandra-3.11.2/lib/metrics-jvm-3.1.0.jar:/usr/local/apache-cassandra-3.11.2/lib/metrics-logback-3.1.0.jar:/usr/local/apache-cassandra-3.11.2/lib/netty-all-4.0.44.Final.jar:/usr/local/apache-cassandra-3.11.2/lib/ohc-core-0.4.4.jar:/usr/local/apache-cassandra-3.11.2/lib/ohc-core-j8-0.4.4.jar:/usr/local/apache-cassandra-3.11.2/lib/reporter-config-base-3.0.3.jar:/usr/local/apache-cassandra-3.11.2/lib/reporter-config3-3.0.3.jar:/usr/local/apache-cassandra-3.11.2/lib/sigar-1.6.4.jar:/usr/local/apache-cassandra-3.11.2/lib/slf4j-api-1.7.7.jar:/usr/local/apache-cassandra-3.11.2/lib/snakeyaml-1.11.jar:/usr/local/apache-cassandra-3.11.2/lib/snappy-java-1.1.1.7.jar:/usr/local/apache-cassandra-3.11.2/lib/snowball-stemmer-1.3.0.581.1.jar:/usr/local/apache-cassandra-3.11.2/lib/stream-2.5.2.jar:/usr/local/apache-cassandra-3.11.2/lib/thrift-server-0.3.7.jar:/usr/local/apache-cassandra-3.11.2/lib/jsr223/*/*.jar:/usr/local/apache-cassandra-3.11.2/lib/jamm-0.3.0.jar
        INFO  20:40:37 JVM Arguments: [-Xloggc:/usr/local/apache-cassandra-3.11.2/logs/gc.log, -ea, -XX:+UseThreadPriorities, -XX:ThreadPriorityPolicy=42, -XX:+HeapDumpOnOutOfMemoryError, -Xss256k, -XX:StringTableSize=1000003, -XX:+AlwaysPreTouch, -XX:-UseBiasedLocking, -XX:+UseTLAB, -XX:+ResizeTLAB, -XX:+PerfDisableSharedMem, -Djava.net.preferIPv4Stack=true, -XX:+UseG1GC, -XX:G1RSetUpdatingPauseTimePercent=5, -XX:+PrintGCDetails, -XX:+PrintGCDateStamps, -XX:+PrintHeapAtGC, -XX:+PrintTenuringDistribution, -XX:+PrintGCApplicationStoppedTime, -XX:+PrintPromotionFailure, -XX:+UseGCLogFileRotation, -XX:NumberOfGCLogFiles=10, -XX:GCLogFileSize=10M, -Dcassandra.migration_task_wait_in_seconds=1, -Dcassandra.ring_delay_ms=30000, -Xms512M, -Xmx512M, -XX:CompileCommandFile=/etc/cassandra/hotspot_compiler, -javaagent:/usr/local/apache-cassandra-3.11.2/lib/jamm-0.3.0.jar, -Dcassandra.jmx.local.port=7199, -Dcom.sun.management.jmxremote.authenticate=false, -Dcom.sun.management.jmxremote.password.file=/etc/cassandra/jmxremote.password, -Djava.library.path=/usr/local/apache-cassandra-3.11.2/lib/sigar-bin, -Djava.rmi.server.hostname=192.168.200.19, -Dcassandra.libjemalloc=/usr/lib/x86_64-linux-gnu/libjemalloc.so.1, -XX:OnOutOfMemoryError=kill -9 %p, -Dlogback.configurationFile=logback.xml, -Dcassandra.logdir=/usr/local/apache-cassandra-3.11.2/logs, -Dcassandra.storagedir=/usr/local/apache-cassandra-3.11.2/data, -Dcassandra-foreground=yes]
        WARN  20:40:37 Unable to lock JVM memory (ENOMEM). This can result in part of the JVM being swapped out, especially with mmapped I/O enabled. Increase RLIMIT_MEMLOCK or run Cassandra as root.
        INFO  20:40:37 jemalloc seems to be preloaded from /usr/lib/x86_64-linux-gnu/libjemalloc.so.1
        WARN  20:40:37 JMX is not enabled to receive remote connections. Please see cassandra-env.sh for more info.
        INFO  20:40:37 Initializing SIGAR library
        INFO  20:40:37 Checked OS settings and found them configured for optimal performance.
        WARN  20:40:37 Maximum number of memory map areas per process (vm.max_map_count) 65530 is too low, recommended value: 1048575, you can change it with sysctl.
        WARN  20:40:37 Directory /cassandra_data/commitlog doesn't exist
        WARN  20:40:37 Directory /cassandra_data/saved_caches doesn't exist
        WARN  20:40:37 Directory /cassandra_data/hints doesn't exist
        INFO  20:40:38 Initialized prepared statement caches with 10 MB (native) and 10 MB (Thrift)
        INFO  20:40:40 Initializing system.IndexInfo
        INFO  20:40:43 Initializing system.batches
        INFO  20:40:43 Initializing system.paxos
        INFO  20:40:43 Initializing system.local
        INFO  20:40:43 Initializing system.peers
        INFO  20:40:44 Initializing system.peer_events
        INFO  20:40:44 Initializing system.range_xfers
        INFO  20:40:44 Initializing system.compaction_history
        INFO  20:40:44 Initializing system.sstable_activity
        INFO  20:40:44 Initializing system.size_estimates
        INFO  20:40:44 Initializing system.available_ranges
        INFO  20:40:44 Initializing system.transferred_ranges
        INFO  20:40:44 Initializing system.views_builds_in_progress
        INFO  20:40:44 Initializing system.built_views
        INFO  20:40:44 Initializing system.hints
        INFO  20:40:44 Initializing system.batchlog
        INFO  20:40:44 Initializing system.prepared_statements
        INFO  20:40:44 Initializing system.schema_keyspaces
        INFO  20:40:44 Initializing system.schema_columnfamilies
        INFO  20:40:44 Initializing system.schema_columns
        INFO  20:40:44 Initializing system.schema_triggers
        INFO  20:40:44 Initializing system.schema_usertypes
        INFO  20:40:44 Initializing system.schema_functions
        INFO  20:40:45 Initializing system.schema_aggregates
        INFO  20:40:45 Not submitting build tasks for views in keyspace system as storage service is not initialized
        INFO  20:40:45 Scheduling approximate time-check task with a precision of 10 milliseconds
        INFO  20:40:45 Initializing system_schema.keyspaces
        INFO  20:40:45 Initializing system_schema.tables
        INFO  20:40:45 Initializing system_schema.columns
        INFO  20:40:45 Initializing system_schema.triggers
        INFO  20:40:45 Initializing system_schema.dropped_columns
        INFO  20:40:45 Initializing system_schema.views
        INFO  20:40:45 Initializing system_schema.types
        INFO  20:40:45 Initializing system_schema.functions
        INFO  20:40:45 Initializing system_schema.aggregates
        INFO  20:40:46 Initializing system_schema.indexes
        INFO  20:40:46 Not submitting build tasks for views in keyspace system_schema as storage service is not initialized
        INFO  20:40:48 Initializing key cache with capacity of 25 MBs.
        INFO  20:40:48 Initializing row cache with capacity of 0 MBs
        INFO  20:40:48 Initializing counter cache with capacity of 12 MBs
        INFO  20:40:48 Scheduling counter cache save to every 7200 seconds (going to save all keys).
        INFO  20:40:49 Global buffer pool is enabled, when pool is exhausted (max is 128.000MiB) it will allocate on heap
        INFO  20:40:50 Populating token metadata from system tables
        INFO  20:40:50 Token metadata:
        INFO  20:40:50 Completed loading (1 ms; 5 keys) KeyCache cache
        INFO  20:40:50 No commitlog files found; skipping replay
        INFO  20:40:50 Populating token metadata from system tables
        INFO  20:40:50 Token metadata:
        INFO  20:40:51 Preloaded 0 prepared statements
        INFO  20:40:51 Cassandra version: 3.11.2
        INFO  20:40:51 Thrift API version: 20.1.0
        INFO  20:40:51 CQL supported versions: 3.4.4 (default: 3.4.4)
        INFO  20:40:51 Native protocol supported versions: 3/v3, 4/v4, 5/v5-beta (default: 4/v4)
        INFO  20:40:51 Initializing index summary manager with a memory pool size of 25 MB and a resize interval of 60 minutes
        INFO  20:40:51 Starting Messaging Service on /192.168.200.19:7000 (eth0)
        WARN  20:40:51 No host ID found, created aa11393a-e0c3-4985-8b30-cd7cffaa9981 (Note: This should happen exactly once per node).
        INFO  20:40:51 Loading persisted ring state
        INFO  20:40:52 Starting up server gossip
        INFO  20:40:52 This node will not auto bootstrap because it is configured to be a seed node.
        INFO  20:40:52 Generated random tokens. tokens are [-6196879740108485456, 897543194900876496, -4399053445991388, -4555536642788239265, 650711900892166799, -4328425306499364801, -6310491252349751233, -3937082349929739874, 1144586990986383417, -8819711010873651108, -8785511878465955344, -4410845794339372638, -2923908580731187113, 8582561413133760736, 12787708335840440, 5237794556437261671, -4316600112744714400, 3872280806625726649, -1320393483584130223, -4227977998973815265, -6770618365176416068, -334696186727605248, -3362283659288738152, 6920781103980355823, -4230204938206024186, 3763614360949331826, -7945292248833444603, -1975487066853708787, 1922793818115238594, -5323936225012743851, -4720523366346820688, 3366775271922014782]
        INFO  20:40:52 Create new Keyspace: KeyspaceMetadata{name=system_traces, params=KeyspaceParams{durable_writes=true, replication=ReplicationParams{class=org.apache.cassandra.locator.SimpleStrategy, replication_factor=2}}, tables=[org.apache.cassandra.config.CFMetaData@735262f9[cfId=c5e99f16-8677-3914-b17e-960613512345,ksName=system_traces,cfName=sessions,flags=[COMPOUND],params=TableParams{comment=tracing sessions, read_repair_chance=0.0, dclocal_read_repair_chance=0.0, bloom_filter_fp_chance=0.01, crc_check_chance=1.0, gc_grace_seconds=0, default_time_to_live=0, memtable_flush_period_in_ms=3600000, min_index_interval=128, max_index_interval=2048, speculative_retry=99PERCENTILE, caching={'keys' : 'ALL', 'rows_per_partition' : 'NONE'}, compaction=CompactionParams{class=org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy, options={min_threshold=4, max_threshold=32}}, compression=org.apache.cassandra.schema.CompressionParams@32bcc394, extensions={}, cdc=false},comparator=comparator(),partitionColumns=[[] | [client command coordinator duration request started_at parameters]],partitionKeyColumns=[session_id],clusteringColumns=[],keyValidator=org.apache.cassandra.db.marshal.UUIDType,columnMetadata=[client, command, session_id, coordinator, request, started_at, duration, parameters],droppedColumns={},triggers=[],indexes=[]], org.apache.cassandra.config.CFMetaData@5bb8e628[cfId=8826e8e9-e16a-3728-8753-3bc1fc713c25,ksName=system_traces,cfName=events,flags=[COMPOUND],params=TableParams{comment=tracing events, read_repair_chance=0.0, dclocal_read_repair_chance=0.0, bloom_filter_fp_chance=0.01, crc_check_chance=1.0, gc_grace_seconds=0, default_time_to_live=0, memtable_flush_period_in_ms=3600000, min_index_interval=128, max_index_interval=2048, speculative_retry=99PERCENTILE, caching={'keys' : 'ALL', 'rows_per_partition' : 'NONE'}, compaction=CompactionParams{class=org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy, options={min_threshold=4, 
max_threshold=32}}, compression=org.apache.cassandra.schema.CompressionParams@32bcc394, extensions={}, cdc=false},comparator=comparator(org.apache.cassandra.db.marshal.TimeUUIDType),partitionColumns=[[] | [activity source source_elapsed thread]],partitionKeyColumns=[session_id],clusteringColumns=[event_id],keyValidator=org.apache.cassandra.db.marshal.UUIDType,columnMetadata=[activity, event_id, session_id, source, thread, source_elapsed],droppedColumns={},triggers=[],indexes=[]]], views=[], functions=[], types=[]}
        INFO  20:40:56 Not submitting build tasks for views in keyspace system_traces as storage service is not initialized
        INFO  20:40:56 Initializing system_traces.events
        INFO  20:40:56 Initializing system_traces.sessions
        INFO  20:40:56 Create new Keyspace: KeyspaceMetadata{name=system_distributed, params=KeyspaceParams{durable_writes=true, replication=ReplicationParams{class=org.apache.cassandra.locator.SimpleStrategy, replication_factor=3}}, tables=[org.apache.cassandra.config.CFMetaData@3534ce49[cfId=759fffad-624b-3181-80ee-fa9a52d1f627,ksName=system_distributed,cfName=repair_history,flags=[COMPOUND],params=TableParams{comment=Repair history, read_repair_chance=0.0, dclocal_read_repair_chance=0.0, bloom_filter_fp_chance=0.01, crc_check_chance=1.0, gc_grace_seconds=864000, default_time_to_live=0, memtable_flush_period_in_ms=3600000, min_index_interval=128, max_index_interval=2048, speculative_retry=99PERCENTILE, caching={'keys' : 'ALL', 'rows_per_partition' : 'NONE'}, compaction=CompactionParams{class=org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy, options={min_threshold=4, max_threshold=32}}, compression=org.apache.cassandra.schema.CompressionParams@32bcc394, extensions={}, cdc=false},comparator=comparator(org.apache.cassandra.db.marshal.TimeUUIDType),partitionColumns=[[] | [coordinator exception_message exception_stacktrace finished_at parent_id range_begin range_end started_at status participants]],partitionKeyColumns=[keyspace_name, columnfamily_name],clusteringColumns=[id],keyValidator=org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.UTF8Type,org.apache.cassandra.db.marshal.UTF8Type),columnMetadata=[status, id, coordinator, finished_at, participants, exception_stacktrace, parent_id, range_end, range_begin, exception_message, keyspace_name, started_at, columnfamily_name],droppedColumns={},triggers=[],indexes=[]], org.apache.cassandra.config.CFMetaData@3fe7a821[cfId=deabd734-b99d-3b9c-92e5-fd92eb5abf14,ksName=system_distributed,cfName=parent_repair_history,flags=[COMPOUND],params=TableParams{comment=Repair history, read_repair_chance=0.0, dclocal_read_repair_chance=0.0, bloom_filter_fp_chance=0.01, crc_check_chance=1.0, 
gc_grace_seconds=864000, default_time_to_live=0, memtable_flush_period_in_ms=3600000, min_index_interval=128, max_index_interval=2048, speculative_retry=99PERCENTILE, caching={'keys' : 'ALL', 'rows_per_partition' : 'NONE'}, compaction=CompactionParams{class=org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy, options={min_threshold=4, max_threshold=32}}, compression=org.apache.cassandra.schema.CompressionParams@32bcc394, extensions={}, cdc=false},comparator=comparator(),partitionColumns=[[] | [exception_message exception_stacktrace finished_at keyspace_name started_at columnfamily_names options requested_ranges successful_ranges]],partitionKeyColumns=[parent_id],clusteringColumns=[],keyValidator=org.apache.cassandra.db.marshal.TimeUUIDType,columnMetadata=[requested_ranges, exception_message, keyspace_name, successful_ranges, started_at, finished_at, options, exception_stacktrace, parent_id, columnfamily_names],droppedColumns={},triggers=[],indexes=[]], org.apache.cassandra.config.CFMetaData@3c0b5d5c[cfId=5582b59f-8e4e-35e1-b913-3acada51eb04,ksName=system_distributed,cfName=view_build_status,flags=[COMPOUND],params=TableParams{comment=Materialized View build status, read_repair_chance=0.0, dclocal_read_repair_chance=0.0, bloom_filter_fp_chance=0.01, crc_check_chance=1.0, gc_grace_seconds=864000, default_time_to_live=0, memtable_flush_period_in_ms=3600000, min_index_interval=128, max_index_interval=2048, speculative_retry=99PERCENTILE, caching={'keys' : 'ALL', 'rows_per_partition' : 'NONE'}, compaction=CompactionParams{class=org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy, options={min_threshold=4, max_threshold=32}}, compression=org.apache.cassandra.schema.CompressionParams@32bcc394, extensions={}, cdc=false},comparator=comparator(org.apache.cassandra.db.marshal.UUIDType),partitionColumns=[[] | [status]],partitionKeyColumns=[keyspace_name, 
view_name],clusteringColumns=[host_id],keyValidator=org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.UTF8Type,org.apache.cassandra.db.marshal.UTF8Type),columnMetadata=[view_name, status, keyspace_name, host_id],droppedColumns={},triggers=[],indexes=[]]], views=[], functions=[], types=[]}
        INFO  20:40:58 Not submitting build tasks for views in keyspace system_distributed as storage service is not initialized
        INFO  20:40:58 Initializing system_distributed.parent_repair_history
        INFO  20:40:58 Initializing system_distributed.repair_history
        INFO  20:40:58 Initializing system_distributed.view_build_status
        INFO  20:40:58 JOINING: Finish joining ring
        INFO  20:40:59 Create new Keyspace: KeyspaceMetadata{name=system_auth, params=KeyspaceParams{durable_writes=true, replication=ReplicationParams{class=org.apache.cassandra.locator.SimpleStrategy, replication_factor=1}}, tables=[org.apache.cassandra.config.CFMetaData@513aad1a[cfId=5bc52802-de25-35ed-aeab-188eecebb090,ksName=system_auth,cfName=roles,flags=[COMPOUND],params=TableParams{comment=role definitions, read_repair_chance=0.0, dclocal_read_repair_chance=0.0, bloom_filter_fp_chance=0.01, crc_check_chance=1.0, gc_grace_seconds=7776000, default_time_to_live=0, memtable_flush_period_in_ms=3600000, min_index_interval=128, max_index_interval=2048, speculative_retry=99PERCENTILE, caching={'keys' : 'ALL', 'rows_per_partition' : 'NONE'}, compaction=CompactionParams{class=org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy, options={min_threshold=4, max_threshold=32}}, compression=org.apache.cassandra.schema.CompressionParams@32bcc394, extensions={}, cdc=false},comparator=comparator(),partitionColumns=[[] | [can_login is_superuser salted_hash member_of]],partitionKeyColumns=[role],clusteringColumns=[],keyValidator=org.apache.cassandra.db.marshal.UTF8Type,columnMetadata=[salted_hash, member_of, role, can_login, is_superuser],droppedColumns={},triggers=[],indexes=[]], org.apache.cassandra.config.CFMetaData@8b742d[cfId=0ecdaa87-f8fb-3e60-88d1-74fb36fe5c0d,ksName=system_auth,cfName=role_members,flags=[COMPOUND],params=TableParams{comment=role memberships lookup table, read_repair_chance=0.0, dclocal_read_repair_chance=0.0, bloom_filter_fp_chance=0.01, crc_check_chance=1.0, gc_grace_seconds=7776000, default_time_to_live=0, memtable_flush_period_in_ms=3600000, min_index_interval=128, max_index_interval=2048, speculative_retry=99PERCENTILE, caching={'keys' : 'ALL', 'rows_per_partition' : 'NONE'}, compaction=CompactionParams{class=org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy, options={min_threshold=4, max_threshold=32}}, 
compression=org.apache.cassandra.schema.CompressionParams@32bcc394, extensions={}, cdc=false},comparator=comparator(org.apache.cassandra.db.marshal.UTF8Type),partitionColumns=[[] | []],partitionKeyColumns=[role],clusteringColumns=[member],keyValidator=org.apache.cassandra.db.marshal.UTF8Type,columnMetadata=[role, member],droppedColumns={},triggers=[],indexes=[]], org.apache.cassandra.config.CFMetaData@5ca04a17[cfId=3afbe79f-2194-31a7-add7-f5ab90d8ec9c,ksName=system_auth,cfName=role_permissions,flags=[COMPOUND],params=TableParams{comment=permissions granted to db roles, read_repair_chance=0.0, dclocal_read_repair_chance=0.0, bloom_filter_fp_chance=0.01, crc_check_chance=1.0, gc_grace_seconds=7776000, default_time_to_live=0, memtable_flush_period_in_ms=3600000, min_index_interval=128, max_index_interval=2048, speculative_retry=99PERCENTILE, caching={'keys' : 'ALL', 'rows_per_partition' : 'NONE'}, compaction=CompactionParams{class=org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy, options={min_threshold=4, max_threshold=32}}, compression=org.apache.cassandra.schema.CompressionParams@32bcc394, extensions={}, cdc=false},comparator=comparator(org.apache.cassandra.db.marshal.UTF8Type),partitionColumns=[[] | [permissions]],partitionKeyColumns=[role],clusteringColumns=[resource],keyValidator=org.apache.cassandra.db.marshal.UTF8Type,columnMetadata=[role, resource, permissions],droppedColumns={},triggers=[],indexes=[]], org.apache.cassandra.config.CFMetaData@75e5882[cfId=5f2fbdad-91f1-3946-bd25-d5da3a5c35ec,ksName=system_auth,cfName=resource_role_permissons_index,flags=[COMPOUND],params=TableParams{comment=index of db roles with permissions granted on a resource, read_repair_chance=0.0, dclocal_read_repair_chance=0.0, bloom_filter_fp_chance=0.01, crc_check_chance=1.0, gc_grace_seconds=7776000, default_time_to_live=0, memtable_flush_period_in_ms=3600000, min_index_interval=128, max_index_interval=2048, speculative_retry=99PERCENTILE, caching={'keys' : 'ALL', 
'rows_per_partition' : 'NONE'}, compaction=CompactionParams{class=org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy, options={min_threshold=4, max_threshold=32}}, compression=org.apache.cassandra.schema.CompressionParams@32bcc394, extensions={}, cdc=false},comparator=comparator(org.apache.cassandra.db.marshal.UTF8Type),partitionColumns=[[] | []],partitionKeyColumns=[resource],clusteringColumns=[role],keyValidator=org.apache.cassandra.db.marshal.UTF8Type,columnMetadata=[resource, role],droppedColumns={},triggers=[],indexes=[]]], views=[], functions=[], types=[]}
        INFO  20:41:00 Not submitting build tasks for views in keyspace system_auth as storage service is not initialized
        INFO  20:41:00 Initializing system_auth.resource_role_permissons_index
        INFO  20:41:00 Initializing system_auth.role_members
        INFO  20:41:00 Initializing system_auth.role_permissions
        INFO  20:41:00 Initializing system_auth.roles
        INFO  20:41:00 Waiting for gossip to settle...
        INFO  20:41:08 G1 Young Generation GC in 310ms.  G1 Eden Space: 198180864 -> 0; G1 Old Gen: 64626256 -> 39098912; G1 Survivor Space: 1048576 -> 17825792;
        INFO  20:41:08 No gossip backlog; proceeding
        INFO  20:41:09 Netty using native Epoll event loop
        INFO  20:41:09 Using Netty Version: [netty-buffer=netty-buffer-4.0.44.Final.452812a, netty-codec=netty-codec-4.0.44.Final.452812a, netty-codec-haproxy=netty-codec-haproxy-4.0.44.Final.452812a, netty-codec-http=netty-codec-http-4.0.44.Final.452812a, netty-codec-socks=netty-codec-socks-4.0.44.Final.452812a, netty-common=netty-common-4.0.44.Final.452812a, netty-handler=netty-handler-4.0.44.Final.452812a, netty-tcnative=netty-tcnative-1.1.33.Fork26.142ecbb, netty-transport=netty-transport-4.0.44.Final.452812a, netty-transport-native-epoll=netty-transport-native-epoll-4.0.44.Final.452812a, netty-transport-rxtx=netty-transport-rxtx-4.0.44.Final.452812a, netty-transport-sctp=netty-transport-sctp-4.0.44.Final.452812a, netty-transport-udt=netty-transport-udt-4.0.44.Final.452812a]
        INFO  20:41:09 Starting listening for CQL clients on /0.0.0.0:9042 (unencrypted)...
        INFO  20:41:09 Not starting RPC server as requested. Use JMX (StorageService->startRPCServer()) or nodetool (enablethrift) to start it
        WARN  20:41:11 Trigger directory doesn't exist, please create it and try again.
        INFO  20:41:11 Created default superuser role 'cassandra'


Logs of pod `cassandra-1` (the one in CrashLoopBackOff state):

        root@k8s-eu-1-master:~# kubectl logs cassandra-1
        Starting Cassandra on 192.168.236.30
        CASSANDRA_CONF_DIR /etc/cassandra
        CASSANDRA_CFG /etc/cassandra/cassandra.yaml
        CASSANDRA_AUTO_BOOTSTRAP true
        CASSANDRA_BROADCAST_ADDRESS 192.168.236.30
        CASSANDRA_BROADCAST_RPC_ADDRESS 192.168.236.30
        CASSANDRA_CLUSTER_NAME K8Demo
        CASSANDRA_COMPACTION_THROUGHPUT_MB_PER_SEC
        CASSANDRA_CONCURRENT_COMPACTORS
        CASSANDRA_CONCURRENT_READS
        CASSANDRA_CONCURRENT_WRITES
        CASSANDRA_COUNTER_CACHE_SIZE_IN_MB
        CASSANDRA_DC DC1-K8Demo
        CASSANDRA_DISK_OPTIMIZATION_STRATEGY ssd
        CASSANDRA_ENDPOINT_SNITCH SimpleSnitch
        CASSANDRA_GC_WARN_THRESHOLD_IN_MS
        CASSANDRA_INTERNODE_COMPRESSION
        CASSANDRA_KEY_CACHE_SIZE_IN_MB
        CASSANDRA_LISTEN_ADDRESS 192.168.236.30
        CASSANDRA_LISTEN_INTERFACE
        CASSANDRA_MEMTABLE_ALLOCATION_TYPE
        CASSANDRA_MEMTABLE_CLEANUP_THRESHOLD
        CASSANDRA_MEMTABLE_FLUSH_WRITERS
        CASSANDRA_MIGRATION_WAIT 1
        CASSANDRA_NUM_TOKENS 32
        CASSANDRA_RACK Rack1-K8Demo
        CASSANDRA_RING_DELAY 30000
        CASSANDRA_RPC_ADDRESS 0.0.0.0
        CASSANDRA_RPC_INTERFACE
        CASSANDRA_SEEDS cassandra-0.cassandra.default.svc.cluster.local
        CASSANDRA_SEED_PROVIDER org.apache.cassandra.locator.SimpleSeedProvider
        changed ownership of '/cassandra_data/data' from root to cassandra
        changed ownership of '/cassandra_data' from root to cassandra
        changed ownership of '/etc/cassandra/jvm.options' from root to cassandra
        changed ownership of '/etc/cassandra/cassandra.yaml' from root to cassandra
        changed ownership of '/etc/cassandra/cassandra-env.sh' from root to cassandra
        changed ownership of '/etc/cassandra/logback.xml' from root to cassandra
        changed ownership of '/etc/cassandra/cassandra-rackdc.properties' from root to cassandra
        changed ownership of '/etc/cassandra' from root to cassandra
        OpenJDK 64-Bit Server VM warning: Cannot open file /usr/local/apache-cassandra-3.11.2/logs/gc.log due to No such file or directory
       
        INFO  20:46:32 Configuration location: file:/etc/cassandra/cassandra.yaml
        INFO  20:46:34 Node configuration:[allocate_tokens_for_keyspace=null; authenticator=AllowAllAuthenticator; authorizer=AllowAllAuthorizer; auto_bootstrap=true; auto_snapshot=true; back_pressure_enabled=false; back_pressure_strategy=null; batch_size_fail_threshold_in_kb=50; batch_size_warn_threshold_in_kb=5; batchlog_replay_throttle_in_kb=1024; broadcast_address=192.168.236.30; broadcast_rpc_address=192.168.236.30; buffer_pool_use_heap_if_exhausted=true; cas_contention_timeout_in_ms=1000; cdc_enabled=false; cdc_free_space_check_interval_ms=250; cdc_raw_directory=null; cdc_total_space_in_mb=0; client_encryption_options=<REDACTED>; cluster_name=K8Demo; column_index_cache_size_in_kb=2; column_index_size_in_kb=64; commit_failure_policy=stop; commitlog_compression=null; commitlog_directory=/cassandra_data/commitlog; commitlog_max_compression_buffers_in_pool=3; commitlog_periodic_queue_size=-1; commitlog_segment_size_in_mb=32; commitlog_sync=periodic; commitlog_sync_batch_window_in_ms=NaN; commitlog_sync_period_in_ms=10000; commitlog_total_space_in_mb=null; compaction_large_partition_warning_threshold_mb=100; compaction_throughput_mb_per_sec=16; concurrent_compactors=null; concurrent_counter_writes=32; concurrent_materialized_view_writes=32; concurrent_reads=32; concurrent_replicates=null; concurrent_writes=32; counter_cache_keys_to_save=2147483647; counter_cache_save_period=7200; counter_cache_size_in_mb=null; counter_write_request_timeout_in_ms=5000; credentials_cache_max_entries=1000; credentials_update_interval_in_ms=-1; credentials_validity_in_ms=2000; cross_node_timeout=false; data_file_directories=[Ljava.lang.String;@275710fc; disk_access_mode=auto; disk_failure_policy=stop; disk_optimization_estimate_percentile=0.95; disk_optimization_page_cross_chance=0.1; disk_optimization_strategy=ssd; dynamic_snitch=true; dynamic_snitch_badness_threshold=0.1; dynamic_snitch_reset_interval_in_ms=600000; dynamic_snitch_update_interval_in_ms=100; 
enable_materialized_views=true; enable_scripted_user_defined_functions=false; enable_user_defined_functions=false; enable_user_defined_functions_threads=true; encryption_options=null; endpoint_snitch=GossipingPropertyFileSnitch; file_cache_round_up=null; file_cache_size_in_mb=null; gc_log_threshold_in_ms=200; gc_warn_threshold_in_ms=1000; hinted_handoff_disabled_datacenters=[]; hinted_handoff_enabled=true; hinted_handoff_throttle_in_kb=1024; hints_compression=null; hints_directory=/cassandra_data/hints; hints_flush_period_in_ms=10000; incremental_backups=false; index_interval=null; index_summary_capacity_in_mb=null; index_summary_resize_interval_in_minutes=60; initial_token=null; inter_dc_stream_throughput_outbound_megabits_per_sec=200; inter_dc_tcp_nodelay=false; internode_authenticator=null; internode_compression=all; internode_recv_buff_size_in_bytes=0; internode_send_buff_size_in_bytes=0; key_cache_keys_to_save=2147483647; key_cache_save_period=14400; key_cache_size_in_mb=null; listen_address=192.168.236.30; listen_interface=null; listen_interface_prefer_ipv6=false; listen_on_broadcast_address=false; max_hint_window_in_ms=10800000; max_hints_delivery_threads=2; max_hints_file_size_in_mb=128; max_mutation_size_in_kb=null; max_streaming_retries=3; max_value_size_in_mb=256; memtable_allocation_type=heap_buffers; memtable_cleanup_threshold=null; memtable_flush_writers=0; memtable_heap_space_in_mb=null; memtable_offheap_space_in_mb=null; min_free_space_per_drive_in_mb=50; native_transport_max_concurrent_connections=-1; native_transport_max_concurrent_connections_per_ip=-1; native_transport_max_frame_size_in_mb=256; native_transport_max_threads=128; native_transport_port=9042; native_transport_port_ssl=null; num_tokens=32; otc_backlog_expiration_interval_ms=200; otc_coalescing_enough_coalesced_messages=8; otc_coalescing_strategy=DISABLED; otc_coalescing_window_us=200; partitioner=org.apache.cassandra.dht.Murmur3Partitioner; permissions_cache_max_entries=1000; 
permissions_update_interval_in_ms=-1; permissions_validity_in_ms=2000; phi_convict_threshold=8.0; prepared_statements_cache_size_mb=null; range_request_timeout_in_ms=10000; read_request_timeout_in_ms=5000; request_scheduler=org.apache.cassandra.scheduler.NoScheduler; request_scheduler_id=null; request_scheduler_options=null; request_timeout_in_ms=10000; role_manager=CassandraRoleManager; roles_cache_max_entries=1000; roles_update_interval_in_ms=-1; roles_validity_in_ms=2000; row_cache_class_name=org.apache.cassandra.cache.OHCProvider; row_cache_keys_to_save=2147483647; row_cache_save_period=0; row_cache_size_in_mb=0; rpc_address=0.0.0.0; rpc_interface=null; rpc_interface_prefer_ipv6=false; rpc_keepalive=true; rpc_listen_backlog=50; rpc_max_threads=2147483647; rpc_min_threads=16; rpc_port=9160; rpc_recv_buff_size_in_bytes=null; rpc_send_buff_size_in_bytes=null; rpc_server_type=sync; saved_caches_directory=/cassandra_data/saved_caches; seed_provider=org.apache.cassandra.locator.SimpleSeedProvider{seeds=cassandra-0.cassandra.default.svc.cluster.local}; server_encryption_options=<REDACTED>; slow_query_log_timeout_in_ms=500; snapshot_before_compaction=false; ssl_storage_port=7001; sstable_preemptive_open_interval_in_mb=50; start_native_transport=true; start_rpc=false; storage_port=7000; stream_throughput_outbound_megabits_per_sec=200; streaming_keep_alive_period_in_secs=300; streaming_socket_timeout_in_ms=86400000; thrift_framed_transport_size_in_mb=15; thrift_max_message_length_in_mb=16; thrift_prepared_statements_cache_size_mb=null; tombstone_failure_threshold=100000; tombstone_warn_threshold=1000; tracetype_query_ttl=86400; tracetype_repair_ttl=604800; transparent_data_encryption_options=org.apache.cassandra.config.TransparentDataEncryptionOptions@525f1e4e; trickle_fsync=false; trickle_fsync_interval_in_kb=10240; truncate_request_timeout_in_ms=60000; unlogged_batch_across_partitions_warn_threshold=10; user_defined_function_fail_timeout=1500; 
user_defined_function_warn_timeout=500; user_function_timeout_policy=die; windows_timer_interval=1; write_request_timeout_in_ms=2000]
        INFO  20:46:34 DiskAccessMode 'auto' determined to be mmap, indexAccessMode is mmap
        INFO  20:46:34 Global memtable on-heap threshold is enabled at 128MB
        INFO  20:46:34 Global memtable off-heap threshold is enabled at 128MB
        INFO  20:46:34 Initialized back-pressure with high ratio: 0.9, factor: 5, flow: FAST, window size: 2000.
        INFO  20:46:34 Back-pressure is disabled with strategy null.
        INFO  20:46:34 Unable to load cassandra-topology.properties; compatibility mode disabled
        INFO  20:46:34 Overriding RING_DELAY to 30000ms


Do I have to set the mountPath to '/cassandra_data/data'?
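For what it's worth, the upstream Cassandra StatefulSet example mounts the volume claim at `/cassandra_data` (not `/cassandra_data/data`); the entrypoint then creates `data/`, `commitlog/`, `hints/` and `saved_caches/` underneath it, which matches the `changed ownership of '/cassandra_data/data'` line and `commitlog_directory=/cassandra_data/commitlog` in the logs above. A minimal sketch of the relevant fragment (names taken from the logs; this is the standard tutorial layout, not my exact manifest):

```yaml
# Sketch: volume mount as in the standard Cassandra StatefulSet example.
# The PVC is mounted at /cassandra_data; Cassandra creates data/, commitlog/,
# hints/ and saved_caches/ beneath it on first start.
containers:
- name: cassandra
  image: gcr.io/google-samples/cassandra:v13
  volumeMounts:
  - name: cassandra-data
    mountPath: /cassandra_data      # not /cassandra_data/data
volumeClaimTemplates:
- metadata:
    name: cassandra-data
  spec:
    accessModes: ["ReadWriteOnce"]
    resources:
      requests:
        storage: 1Gi
```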


Matthew Cary

Nov 14, 2023, 2:57:47 PM
to Raphael Stonehorse, Hendrik Land, kubernetes-sig-storage
The configuration between the two pods should be the same, so I don't know why the first would succeed and the second would fail.

FWIW, the second instance seems to be stuck on JMX configuration: its log stops right after "Overriding RING_DELAY", before it ever reaches "Starting listening for CQL clients" like cassandra-0 does.
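A few things worth checking to narrow it down (these assume the pod names from this thread and need access to the live cluster, so treat them as a sketch):

```shell
# Run the readiness script by hand in the failing pod and show its exit code
kubectl exec cassandra-1 -- /bin/bash -c '/ready-probe.sh; echo "exit=$?"'

# From the healthy seed node, check whether cassandra-1 ever joined the ring
kubectl exec cassandra-0 -- nodetool status

# Look at recent events on the failing pod (probe timeouts, OOMKills, restarts)
kubectl describe pod cassandra-1 | tail -n 20
```

If `nodetool status` on cassandra-0 never shows a second node, the problem is joining the ring (seed resolution, ports 7000/7001) rather than the probe itself; if the node shows as UN but the probe still times out, the 5s probe timeout may simply be too tight for a CPU-throttled pod (500m limit).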