I'm okay with this, but I'd like a thumbs up from sig-storage that this is their preferred default.
@kubernetes/sig-storage-pr-reviews
Is local-up only used for testing purposes? The only thing I'm concerned about is whether anyone uses it for production; they would be very surprised to lose all their data and then realize the hostpath PV was in /tmp.
@msau42 yes, only for testing.
Yes, local-up should only be used for testing. That said, if there is a risk of data loss, even in testing, we should call it out with a warning.
We call out in the docs and official tutorials that hostpath PVs should only be used for testing, but it still gets missed. With this change, we run the risk of users not being aware of the limitations, because everything will succeed.
@msau42 When I run local-up, I found there is a default storage class:
NAME                 PROVISIONER               AGE
standard (default)   kubernetes.io/host-path   6m
I know that the host_path provisioner is only for development and testing and WILL NOT WORK in a multi-node cluster. But when I create a PVC that refers to the default provisioner, provisioning fails because I have not enabled the hostpath provisioner, so I need to find out how to enable it. It is not friendly for users.
I agree it's not great that it doesn't work out of the box. But I also don't want users to be unaware of the limitations and potentially end up storing production data on it. I don't see a great way to "warn" users that the hostpath provisioner is only for dev/testing. It's already in the docs, but that's very easy to miss if you don't read them.
@msau42 But local-up is only a single-node cluster and only used for testing. I think the docs are enough to "warn" users that the hostpath provisioner is only for dev/testing when they deploy a production cluster.
My opinion is that we should probably enable the provisioner; however, it would be nice to add a yellow (or even red) warning to the hack/local-up-cluster output saying that the provisioner is just for testing.
@jsafrane commented on this pull request.
> @@ -233,7 +233,7 @@ CONTAINER_RUNTIME_ENDPOINT=${CONTAINER_RUNTIME_ENDPOINT:-""}
IMAGE_SERVICE_ENDPOINT=${IMAGE_SERVICE_ENDPOINT:-""}
CHAOS_CHANCE=${CHAOS_CHANCE:-0.0}
CPU_CFS_QUOTA=${CPU_CFS_QUOTA:-true}
-ENABLE_HOSTPATH_PROVISIONER=${ENABLE_HOSTPATH_PROVISIONER:-"false"}
You should enable it only when you create the default storage class for hostpath, i.e. somewhere in create_storage_class. The script deploys a real AWS/GCE/OpenStack/... storage class in the clouds, and it does not need the hostpath provisioner there (unless the user enables it explicitly via an env variable).
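The gating suggested above might look roughly like this sketch; the function name and manifest paths are illustrative assumptions, not the script's actual code:

```shell
# Hypothetical sketch: choose the default StorageClass manifest from
# CLOUD_PROVIDER, so the hostpath provisioner is only needed in the
# local (no cloud provider) case. Paths and names are made up.
storage_class_manifest() {
  cloud_provider="$1"
  case "${cloud_provider}" in
    "") echo "storage-class/local/default.yaml" ;;              # hostpath; needs the provisioner
    *)  echo "storage-class/${cloud_provider}/default.yaml" ;;  # cloud-native class
  esac
}
```

In this shape, create_storage_class would apply the returned manifest, and only the empty-CLOUD_PROVIDER branch would also enable the hostpath provisioner.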
@wackxu pushed 1 commit.
@msau42 @jsafrane Updated; the warning message now looks like this:
root@node1:/xsw/gowork/src/k8s.io/kubernetes# ./hack/local-up-cluster.sh
WARNING : The kubelet is configured to not fail even if swap is enabled; production deployments should disable swap.
WARNING : The hostpath provisioner is used only for development and testing only and WILL NOT WORK in a multi-node cluster.
/lgtm
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: jsafrane, wackxu
To fully approve this pull request, please assign additional approvers.
We suggest the following additional approver: ixdy
Assign the PR to them by writing /assign @ixdy in a comment when ready.
The full list of commands accepted by this bot can be found here.
The pull request process is described here.
Needs approval from an approver in each of these files.
Approvers can indicate their approval by writing /approve in a comment.
Approvers can cancel approval by writing /approve cancel in a comment.
/lgtm cancel
Got second thoughts. What happens when the user hits ^C? Looking at the code, it leaves the hostpath volumes and their data in /tmp/hostpath_pv/ (hardcoded in host_path.go). Should there be some cleanup in local-up-cluster.sh?
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: wackxu
To fully approve this pull request, please assign additional approvers.
We suggest the following additional approver: ixdy
Assign the PR to them by writing /assign @ixdy in a comment when ready.
The full list of commands accepted by this bot can be found here.
The pull request process is described here.
Needs approval from an approver in each of these files.
Approvers can indicate their approval by writing /approve in a comment.
Approvers can cancel approval by writing /approve cancel in a comment.
@jsafrane When users use ^C to force-stop the local cluster, I think we should just leave the data there and let users clear it manually, because they may want to use the data for analysis and so on, just like we have them manually enable the hostpath provisioner.
Hmm, I've never needed any data after stopping my local cluster; I use local-up-cluster.sh to debug Kubernetes, not applications on top of it. I admit different people can use it for different purposes.
We can always add some HOSTPATH_PROVISIONER_CLEANUP variable later.
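That later HOSTPATH_PROVISIONER_CLEANUP idea could look roughly like the sketch below; the variable name and the /tmp/hostpath_pv path come from the discussion, but the function name and the trap wiring are assumptions:

```shell
# Sketch only: opt-in cleanup of hostpath PV data on shutdown.
# HOSTPATH_PROVISIONER_CLEANUP and /tmp/hostpath_pv are from the thread;
# the function name and the trap are illustrative.
HOSTPATH_PV_DIR=${HOSTPATH_PV_DIR:-"/tmp/hostpath_pv"}
HOSTPATH_PROVISIONER_CLEANUP=${HOSTPATH_PROVISIONER_CLEANUP:-"false"}

cleanup_hostpath_pvs() {
  if [ "${HOSTPATH_PROVISIONER_CLEANUP}" = "true" ]; then
    rm -rf "${HOSTPATH_PV_DIR}"   # remove provisioned volume data
  fi
}

# Run the cleanup when the script exits (e.g. after ^C stops the cluster).
trap cleanup_hostpath_pvs EXIT
```

Defaulting the variable to "false" keeps today's leave-the-data behavior unless the user opts in.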
/lgtm
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: jsafrane, wackxu
To fully approve this pull request, please assign additional approvers.
We suggest the following additional approver: ixdy
Assign the PR to them by writing /assign @ixdy in a comment when ready.
The full list of commands accepted by this bot can be found here.
The pull request process is described here.
Needs approval from an approver in each of these files.
Approvers can indicate their approval by writing /approve in a comment.
Approvers can cancel approval by writing /approve cancel in a comment.
@cblecker, please look at the discussion right above and approve / disapprove.
/test all
Tests are more than 96 hours old. Re-running tests.
@cblecker requested changes on this pull request.
> @@ -150,6 +151,11 @@ if [ "$(id -u)" != "0" ]; then
echo "WARNING : This script MAY be run as root for docker socket / iptables functionality; if failures occur, retry as root." 2>&1
fi
+if [ -z "$CLOUD_PROVIDER" ]; then
+ ENABLE_HOSTPATH_PROVISIONER=true
The problem with this logic is that there is no way to force-disable the hostpath provisioner.
We should not set the default above; instead, this logic should check whether the variable is unset as its condition, and then afterwards set the false default if it is still unset.
> @@ -150,6 +151,11 @@ if [ "$(id -u)" != "0" ]; then
echo "WARNING : This script MAY be run as root for docker socket / iptables functionality; if failures occur, retry as root." 2>&1
fi
+if [ -z "$CLOUD_PROVIDER" ]; then
+ ENABLE_HOSTPATH_PROVISIONER=true
+ echo "WARNING : The hostpath provisioner is used only for development and testing only and WILL NOT WORK in a multi-node cluster."
This indicates that the hostpath provisioner is used for development, but it doesn't actually say that the script is taking the action of enabling it.
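One possible rewording along those lines (the function name and exact text are illustrative, not the PR's final wording):

```shell
# Sketch: a warning that states the action being taken, not just the limitation.
warn_hostpath_enabled() {
  echo "WARNING : CLOUD_PROVIDER is not set; enabling the hostpath provisioner." \
       "It is for development and testing only and WILL NOT WORK in a multi-node cluster."
}
```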
@wackxu: The following test failed, say /retest to rerun them all:
| Test name | Commit | Details | Rerun command |
|---|---|---|---|
| pull-kubernetes-integration | 6895805 | link | /test pull-kubernetes-integration |
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
/retest
/test all
Tests are more than 96 hours old. Re-running tests.
@wackxu: The following test failed, say /retest to rerun them all:
| Test name | Commit | Details | Rerun command |
|---|---|---|---|
| pull-kubernetes-e2e-gce | 6895805 | link | /test pull-kubernetes-e2e-gce |
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Closed #62522.
@fejta-bot: Closing this PR.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.