Error: INSTALLATION FAILED: rendered manifests contain a resource that already exists. Unable to continue with install: CustomResourceDefinition "backingimagedatasources.longhorn.io" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "longhorn": current value is "longhorn-crd"
This typically happens when Helm attempts to roll out a new revision of an application and something goes wrong in the process, such as a bug in the application itself or an issue inside the Kubernetes cluster, so the new deployment never completes. The faulty deployment is left dangling and prevents all future deployments from being rolled out. You can check the status of the latest deployment by retrieving the release history with helm history.
To fix this error, perform a rollback with helm rollback [release_name] [revision_number] -n [app_namespace] so that the latest stable revision becomes the active one and the dangling deployment is cancelled. In the example above, the deployment with revision 15 is known to be stable and was successfully rolled out before, so we can perform a rollback like this:
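Following that pattern, a sketch of the rollback (the release name and namespace are assumptions for illustration; given the Longhorn error above, the release would typically be longhorn in a namespace such as longhorn-system):

```
# Roll back to the last revision known to be stable (15 in this example):
helm rollback longhorn 15 -n longhorn-system

# Verify that the rollback created a new, successfully deployed revision:
helm history longhorn -n longhorn-system
```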
helm rollback is a useful command that you can also use to roll back to the latest stable release if you discover issues while testing your application. You can read more about it here: Helm Rollback
Below you can see an example of a helm upgrade command with the --atomic flag: the faulty deployment is terminated automatically once the helm upgrade wait time is reached, and instead of dangling deployments we see rollback operations when we run the helm history command.
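A sketch of such an invocation (release, repository, and chart names are illustrative; the timeout syntax shown is Helm 3 style):

```
# --atomic waits for the upgrade to complete and automatically rolls the
# release back if it fails or does not become ready within --timeout.
helm upgrade my-release my-repo/my-chart --atomic --timeout 5m -n my-namespace
```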
Helm 3 uses Kubernetes Secret objects to store information about a release. Helm stores and reads its state from these secrets every time we run helm list, helm history, or, in our case, helm upgrade.
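You can inspect these release secrets directly with kubectl; Helm 3 labels them with owner=helm and stores one secret per revision (the namespace and release name below are illustrative):

```
kubectl get secrets -n my-namespace -l owner=helm

# Each revision is stored as its own secret, named like:
#   sh.helm.release.v1.my-release.v15
```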
When running helm upgrade, it picks up my values.yaml and creates a /var/lib/grafana/dashboards/default directory; however, the kubernetes.json it loads is empty, and I get the following error in the log:
is referred to from within the child helm chart rather than the parent chart, and I did not want to store my dashboards there because that helm chart is zipped up and therefore difficult to PR. So the only way I could get the parent chart to pick up dashboards was to configure them as configmaps.
Hi @swaps1, I am trying to follow your method to import the dashboards; I am using the grafana helm chart as a dependency chart of my application. When you say "* create a grafana/templates/grafana-dashboard-configmap.yaml", did you create this after downloading the grafana helm chart, or in a different chart? And the same question for the configmap.
Hi @gnutakki, my grafana helm chart is downloaded as a zip file and stored in the root grafana helm directory:
grafana/charts/grafana-6.16.10.tgz
I have put the configmap yaml template in grafana/templates/grafana-dashboard-configmap.yaml, and yes, this was created after I downloaded the helm chart; it lives separately from the zip file in grafana/charts/. This way I can download the latest charts and still overlay them with my own templates and dashboards. The dashboards referred to in the configmap yaml live in:
grafana/grafana-dashboards/dashboard1.json
The helm install/upgrade will then load the chart from the zip file and overlay it with anything in the templates folder. Since a template can use .Files.Get to refer to local files, it can look in the grafana-dashboards folder for any new dashboards. Simply add your dashboard .json into this folder and update the template to look for the new dashboard.
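A minimal sketch of such a template at grafana/templates/grafana-dashboard-configmap.yaml (the configmap name, the grafana_dashboard sidecar label, and the use of .Files.Glob to pick up new dashboards automatically are assumptions, not details from the thread, which describes per-file .Files.Get):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-dashboards
  labels:
    # assumed: picked up by the Grafana dashboard sidecar, if enabled
    grafana_dashboard: "1"
data:
{{- range $path, $_ := .Files.Glob "grafana-dashboards/*.json" }}
  {{ base $path }}: |-
{{ $.Files.Get $path | indent 4 }}
{{- end }}
```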
I ran `helm repo update` and am still not able to install the helm chart. Has anyone else had this issue? My helm and tiller are installed correctly and I am able to install other helm charts with no problems.
The version field inside of the Chart.yaml is used by many of the Helm tools, including the CLI and the Tiller server. When generating a package, the helm package command will use the version that it finds in the Chart.yaml as a token in the package name. The system assumes that the version number in the chart package name matches the version number in the Chart.yaml. Failure to meet this assumption will cause an error.
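For example, with a Chart.yaml like this (name and version are illustrative):

```yaml
apiVersion: v1
name: mychart
version: 1.2.3
```

helm package will produce mychart-1.2.3.tgz, and the version in the archive name must continue to match the version in Chart.yaml.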
When managing charts in a Chart Repository, it is sometimes necessary to deprecate a chart. The optional deprecated field in Chart.yaml can be used to mark a chart as deprecated. If the latest version of a chart in the repository is marked as deprecated, then the chart as a whole is considered to be deprecated. The chart name can later be reused by publishing a newer version that is not marked as deprecated. The workflow for deprecating charts, as followed by the helm/charts project, is:
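Marking a chart deprecated is a one-line addition to Chart.yaml (name and version here are illustrative):

```yaml
apiVersion: v1
name: mychart
version: 1.2.4
deprecated: true
```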
A LICENSE is a plain text file containing the license for the chart. The chart can contain a license as it may have programming logic in the templates and would therefore not be configuration only. There can also be separate license(s) for the application installed by the chart, if required.
The chart can also contain a short plain text templates/NOTES.txt file that will be printed out after installation, and when viewing the status of a release. This file is evaluated as a template, and can be used to display usage notes, next steps, or any other information relevant to a release of the chart. For example, instructions could be provided for connecting to a database, or accessing a web UI. Since this file is printed to STDOUT when running helm install or helm status, it is recommended to keep the content brief and point to the README for greater detail.
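A minimal illustrative templates/NOTES.txt, using the template objects available in this file:

```
Thank you for installing {{ .Chart.Name }}.

Your release is named {{ .Release.Name }}.

To learn more about the release, try:

  $ helm status {{ .Release.Name }}
  $ helm get {{ .Release.Name }}
```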
When helm dependency update retrieves charts, it will store them as chart archives in the charts/ directory. So for the example above, one would expect to see the following files in the charts directory:
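Since the example this passage refers to is not included here, an illustrative requirements.yaml with two dependencies (names, versions, and repository URLs are assumptions):

```yaml
dependencies:
  - name: nginx
    version: "1.2.3"
    repository: "https://example.com/charts"
  - name: memcached
    version: "3.2.1"
    repository: "https://another.example.com/charts"
```

After helm dependency update, the charts/ directory would then contain nginx-1.2.3.tgz and memcached-3.2.1.tgz.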
A chart repository is an HTTP server that houses one or more packaged charts. While helm can be used to manage local chart directories, when it comes to sharing charts, the preferred mechanism is a chart repository.
Helm comes with a built-in package server for developer testing (helm serve). The Helm team has tested other servers, including Google Cloud Storage with website mode enabled, and S3 with website mode enabled.
On the client side, repositories are managed with the helm repo commands. However, Helm does not provide tools for uploading charts to remote repository servers. This is because doing so would add substantial requirements to an implementing server, and thus raise the barrier for setting up a repository.
Hooks allow you, the chart developer, an opportunity to perform operations at strategic points in a release lifecycle. For example, consider the lifecycle for a helm install. By default, the lifecycle looks like this:
Practically speaking, this means that if you create resources in a hook, you cannot rely upon helm delete to remove the resources. To destroy such resources, you need to either write code to perform this operation in a pre-delete or post-delete hook or add a "helm.sh/hook-delete-policy" annotation to the hook template file.
Hook weights can be positive or negative numbers but must be represented as strings. When Tiller starts the execution cycle of hooks of a particular kind (ex. the pre-install hooks or post-install hooks, etc.) it will sort those hooks in ascending order.
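For example, annotations giving a pre-install hook a weight; note that the weight is quoted as a string:

```yaml
annotations:
  "helm.sh/hook": pre-install
  "helm.sh/hook-weight": "-5"
```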
By default Tiller will wait for 60 seconds for a deleted hook to no longer exist in the API server before timing out. This behavior can be changed using the helm.sh/hook-delete-timeout annotation. The value is the number of seconds Tiller should wait for the hook to be fully deleted. A value of 0 means Tiller does not wait at all.
The crd-install hook is executed very early during an installation, before the rest of the manifests are verified. CRDs can be annotated with this hook so that they are installed before any instances of that CRD are referenced. In this way, when verification happens later, the CRDs will be available.
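On a CRD manifest, the annotation looks like this (the CRD name is illustrative):

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: crontabs.example.com
  annotations:
    "helm.sh/hook": crd-install
```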
When a helm release that uses a hook is being updated, it is possible that the hook resource already exists in the cluster. In such circumstances, by default, helm will fail trying to install the hook resource with an "... already exists" error.
A common reason why the hook resource might already exist is that it was not deleted following use on a previous install/upgrade. There are, in fact, good reasons why one might want to keep the hook: for example, to aid manual debugging in case something went wrong. In this case, the recommended way of ensuring subsequent attempts to create the hook do not fail is to define a "hook-delete-policy" that can handle this: "helm.sh/hook-delete-policy": "before-hook-creation". This hook annotation causes any existing hook to be removed, before the new hook is installed.
If it is preferred to actually delete the hook after each use (rather than have to handle it on a subsequent use, as shown above), then this can be achieved using a delete policy of "helm.sh/hook-delete-policy": "hook-succeeded,hook-failed".
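Side by side, the two delete policies discussed above as they would appear in a hook's annotations (the pre-upgrade hook type is illustrative):

```yaml
# Remove any leftover hook resource before the new hook is created:
annotations:
  "helm.sh/hook": pre-upgrade
  "helm.sh/hook-delete-policy": before-hook-creation

# Or delete the hook after each run, whether it succeeded or failed:
annotations:
  "helm.sh/hook": pre-upgrade
  "helm.sh/hook-delete-policy": hook-succeeded,hook-failed
```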
The annotation "helm.sh/resource-policy": keep instructs Tiller to skip this resource during a helm delete operation. However, this resource becomes orphaned. Helm will no longer manage it in any way. This can lead to problems if using helm install --replace on a release that has already been deleted, but has kept resources.
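For example, keeping a Secret around after helm delete (the resource kind here is illustrative):

```yaml
kind: Secret
metadata:
  annotations:
    "helm.sh/resource-policy": keep
```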
Each time you want to add a new chart to your repository, you must regenerate the index. The helm repo index command will completely rebuild the index.yaml file from scratch, including only the charts that it finds locally.
However, you can use the --merge flag to incrementally add new charts to an existing index.yaml file (a great option when working with a remote repository like GCS). Run helm repo index --help to learn more.
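A sketch of that workflow (the file name of the downloaded index and the repository URL are illustrative):

```
# Download the current index.yaml from the remote repository first, then
# merge the newly packaged local charts into it:
helm repo index . --merge old-index.yaml --url https://example.com/charts
```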
Under the hood, the helm repo add and helm repo update commands are fetching the index.yaml file and storing it in the $HELM_HOME/repository/cache/ directory. This is where the helm search function finds information about charts.