Closed #52441.
As discussed offline, this is something that needs to be turned on when using local-up-cluster (https://github.com/kubernetes/kubernetes/blob/master/hack/local-up-cluster.sh#L229).
However, it may help to include some documentation on how hostpath provisioning must be turned on using a controller-manager flag.
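For reference, a sketch of what enabling it looks like (assuming the kube-controller-manager --enable-hostpath-provisioner flag, and the ENABLE_HOSTPATH_PROVISIONER variable that local-up-cluster.sh reads; both are dev/test-only settings):

# On the controller manager (development/testing only):
kube-controller-manager --enable-hostpath-provisioner=true ...

# Or when starting a cluster with hack/local-up-cluster.sh:
ENABLE_HOSTPATH_PROVISIONER=true hack/local-up-cluster.sh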
cc @kubernetes/sig-storage-misc
Reopened #52441.
I looked through the tutorial and saw that the example YAMLs manually create hostpath PVs to be used by the mysql and wordpress deployments, so the hostpath provisioner should not need to be invoked. I think the issue is that the PVCs need to specify storageClassName: ""; otherwise the default storageclass may be used instead.
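A minimal sketch of what that would look like on one of the tutorial's claims (the name and size here are illustrative, not taken from the tutorial):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim   # illustrative name
spec:
  storageClassName: ""   # empty string: bind only to pre-created PVs,
                         # never fall through to the default storage class
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi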
/assign
Any evidence leading to your conclusion that the default storage class is invoked?
If I'm understanding this correctly, users are not encouraged to explicitly set storageClassName to anything (including the empty string). This keeps the manifests portable and ensures that the default storage class takes effect.
I concluded that the default storage class was invoked because:
Failed to create provisioner: Provisioning in volume plugin "kubernetes.io/host-path" is disabled

This specific tutorial has a whole section on how to create HostPath volumes for running on minikube. Instead, it sounds like the preferred approach is to revamp that whole section to use dynamic provisioning with the default storage class, and perhaps add a small note that you need to explicitly enable the hostpath provisioner in minikube environments.
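For anyone following along, one way to verify which storage class is picked up as the default is to list the storage classes and look for the (default) marker (the output below is approximate, from a typical minikube setup):

kubectl get storageclass
# NAME                 PROVISIONER                AGE
# standard (default)   k8s.io/minikube-hostpath   1d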
The tutorial has been updated to use the default storage class.
/close
Closed #52441.
What if you are not using minikube, then what? It seems that everybody assumes minikube is the only environment, but some of us run a single-node cluster and still want hostpath, so we can replace our bare-metal VPS with a similar Kubernetes-controlled VPS.
Hi @christhomas, if you cannot use the hostpath provisioner in your environment, then you will have to fall back to statically creating hostpath PVs. I couldn't find a great way to include both methods in the tutorial without making it confusing or complicated, but I am open to suggestions.
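For the record, the static approach is just a hand-written PV plus a claim that opts out of dynamic provisioning; a minimal sketch (the name, path, and size are illustrative):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-1         # illustrative name
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/data/pv-1   # illustrative path on the node

A PVC that sets storageClassName: "" will then bind to a matching static PV instead of invoking the default storage class.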
I've found a way today to implement hostpath quickly and easily; let's see what you think of this.
Use this to set up the storage class and permissions. Maybe you already have the permissions, so you can skip this or take parts of it, but this is what I used:
# -- ServiceAccount and RBAC permissions for the provisioner
apiVersion: v1
kind: ServiceAccount
metadata:
  name: hostpath-provisioner
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: hostpath-provisioner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: hostpath-provisioner
subjects:
  - kind: ServiceAccount
    name: hostpath-provisioner
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: hostpath-provisioner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: hostpath-provisioner
  namespace: kube-system
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["create", "get", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: hostpath-provisioner
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: hostpath-provisioner
subjects:
  - kind: ServiceAccount
    name: hostpath-provisioner
    namespace: kube-system
---
# -- Create a pod in the kube-system namespace to run the hostpath provisioner
apiVersion: v1
kind: Pod
metadata:
  namespace: kube-system
  name: hostpath-provisioner
spec:
  serviceAccountName: hostpath-provisioner
  containers:
    - name: hostpath-provisioner
      image: mazdermind/hostpath-provisioner:latest
      imagePullPolicy: "IfNotPresent"
      env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: PV_DIR
          value: /mnt/kubernetes-pv-manual
      volumeMounts:
        - name: pv-volume
          mountPath: /mnt/kubernetes-pv-manual
  volumes:
    - name: pv-volume
      hostPath:
        path: /mnt/kubernetes-pv-manual
---
# -- Create the standard storage class for running on-node hostpath storage
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: manual
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: EnsureExists
provisioner: hostpath
You might need to adjust /mnt/kubernetes-pv-manual to match your setup.
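Assuming you save the manifest above as hostpath-provisioner.yaml (the filename is mine), applying and sanity-checking it would look roughly like:

kubectl apply -f hostpath-provisioner.yaml
kubectl -n kube-system get pod hostpath-provisioner   # should reach Running
kubectl get storageclass manual                       # should show (default)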
Then you can create a PVC for your app like this:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hello-php
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Mi
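Once the provisioner pod is running, the claim should go from Pending to Bound (output shape is approximate):

kubectl get pvc hello-php
# NAME        STATUS   VOLUME        CAPACITY   ACCESS MODES   STORAGECLASS
# hello-php   Bound    pvc-3f1c...   8Mi        RWO            manual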
Then you can mount it into your application. Here is a deployment I was working on; obviously you can edit this to your taste:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: hello-php
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-php
    spec:
      containers:
        - name: hello-php-nginx
          image: hello-php-nginx:v2
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: php-socket
              mountPath: /sock
        - name: hello-php-phpfpm
          image: hello-php-phpfpm:v2
          volumeMounts:
            - name: php-socket
              mountPath: /sock
            - name: data
              mountPath: /data
      volumes:
        - name: php-socket
          emptyDir: {}
        - name: data
          persistentVolumeClaim:
            claimName: hello-php
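To check that the volume actually landed on the node, you can exec into the pod (the pod name will differ; this is just a sketch):

kubectl get pods -l app=hello-php
kubectl exec <hello-php-pod-name> -c hello-php-phpfpm -- ls /data
# Anything written to /data should also appear under /mnt/kubernetes-pv-manual on the node.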
What do you think?
Looks fine. I think setting up the hostpath provisioner would be better suited to the hostpath documentation and/or the default storage class documentation rather than the tutorial, though. The tutorial is meant to run across many environments, and those may not use hostpath for their default storage class.
Sure, but it's still missing from the official docs and explanations, so it should go somewhere.
Agreed, I think the hostpath section and/or the default storage class section would be a good fit. Would you like to open a PR for it?
Hi,
How do I enable local storage volumes for an installation done through kubeadm?
@vikasgubbi, are you doing a bare-metal installation? If so, I have some YAML files on my GitHub that might help you out; I used a hostpath provisioner. All the details are here:
https://github.com/christhomas/kubernetes-cluster/blob/master/03-storage-hostpath.yml