Background
I recently had to migrate several Kubernetes services to Helm. These services were previously deployed using kubectl apply -f, and the goal was to convert our current single-file Kubernetes manifest into a Helm Chart (keeping the same names and everything) to avoid having to modify the services or delete and recreate the application (and then get new ELBs, reconfigure DNS…).
I Don’t Own These Resources
Since Helm is standalone, the only way it knows which resources it manages is via labels and annotations.
If we try to helm install a chart and one of the Kubernetes object names already exists, Helm will throw an error saying, more or less, “This resource already exists and I don’t own it, so I’m not going to touch it.”
helm install my-api-dev . --namespace demo --create-namespace -f ...
You will get a message like:
Error: INSTALLATION FAILED: Unable to continue with install: ServiceAccount "my-api-dev" in namespace "demo" exists and
cannot be imported into the current release: invalid ownership metadata; label validation error:
key "app.kubernetes.io/managed-by" must equal "Helm": current value is "deployment";
annotation validation error: missing key "meta.helm.sh/release-name": must be set to "my-api-dev";
annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "demo"
These annotations tell Helm which release they are part of:
meta.helm.sh/release-name: my-api-dev
meta.helm.sh/release-namespace: demo
And this label marks them as managed by Helm:
app.kubernetes.io/managed-by: Helm
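To see what a resource currently carries before adopting it (names taken from the error above), a quick check:

```bash
# Labels currently set on the ServiceAccount ("managed-by" is not Helm yet)
kubectl get serviceaccount my-api-dev -n demo --show-labels
# Annotations: the meta.helm.sh/* keys will be missing at this point
kubectl get serviceaccount my-api-dev -n demo -o jsonpath='{.metadata.annotations}'
```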
I Own These Resources Now!
To trick Helm into thinking that the resources have been deployed using a chart, we need to add the necessary labels and annotations.
This can be accomplished in (at least) two ways: manually with kubectl, or using Kustomize.
The easiest is to leverage Kustomize to apply the labels and annotations to all the K8s resources in one go. This was possible for me because we always store the latest applied K8s manifest in Artifactory.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- path/to/your/k8s-manifest.yaml
commonAnnotations:
meta.helm.sh/release-name: my-api-dev
meta.helm.sh/release-namespace: demo
commonLabels:
app.kubernetes.io/managed-by: Helm
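Optionally, before applying anything, you can preview what the patched resources will look like:

```bash
# Render the kustomization locally without applying it
kubectl kustomize .
# Or ask the API server what would actually change
kubectl diff -k .
```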
kubectl apply -k .
will apply the changes to the resources.
But wait… if you simply apply it like this you are going to get:
namespace/demo configured
serviceaccount/my-api-dev configured
configmap/sia-config-my-api-dev configured
configmap/workload-config-my-api-dev configured
service/my-api-dev configured
horizontalpodautoscaler.autoscaling/my-api-dev configured
ingress.networking.k8s.io/my-api-dev configured
The Deployment "my-api-dev" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app":"my-api-dev", "app.kubernetes.io/managed-by":"Helm"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable
Because the Deployment’s spec.selector is immutable and cannot be changed (Kustomize’s commonLabels also injects the label into the selector, as you can see in the error). In our case we don’t really care, so we can force the apply:
kubectl apply -k . --force
This will bypass the above error and apply the labels and annotations to all the resources.
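After the forced apply, it is worth checking that the ownership metadata landed and that the Deployment rolled out cleanly:

```bash
# The app.kubernetes.io/managed-by label should now read "Helm"
kubectl get deployment my-api-dev -n demo --show-labels
# And the Deployment should settle back into a healthy rollout
kubectl rollout status deployment/my-api-dev -n demo
```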
If you don’t have the latest manifest on hand, you can apply the labels and annotations manually with kubectl (in a loop):
```bash
kind=..
name=...
namespace=...
releasename=...
kubectl annotate $kind $name -n $namespace "meta.helm.sh/release-name"=$releasename
kubectl annotate $kind $name -n $namespace "meta.helm.sh/release-namespace"=$namespace
kubectl label $kind $name -n $namespace "app.kubernetes.io/managed-by"=Helm
```
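For more than a couple of objects, the same thing in a small loop; the resource list below is just an illustrative example (taken from the apply output earlier), adjust it to your application:

```bash
namespace=demo
releasename=my-api-dev

# Illustrative resource list; replace with the objects your manifest actually contains
for res in serviceaccount/my-api-dev \
           configmap/sia-config-my-api-dev configmap/workload-config-my-api-dev \
           service/my-api-dev deployment/my-api-dev \
           horizontalpodautoscaler/my-api-dev ingress/my-api-dev; do
  kubectl annotate "$res" -n "$namespace" "meta.helm.sh/release-name=$releasename" --overwrite
  kubectl annotate "$res" -n "$namespace" "meta.helm.sh/release-namespace=$namespace" --overwrite
  kubectl label    "$res" -n "$namespace" "app.kubernetes.io/managed-by=Helm" --overwrite
done
```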
Once the labels and annotations are applied, you can install the Helm chart:
helm install my-api-dev . -f external_values_files.yml --create-namespace -n demo
NAME: my-api-dev
LAST DEPLOYED: Tue Oct 22 15:00:31 2024
NAMESPACE: demo
STATUS: deployed
REVISION: 1
TEST SUITE: None
Voila.
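At this point Helm considers the resources its own, which you can double-check with:

```bash
# The release should show up as deployed
helm list -n demo
# And this is the manifest Helm recorded for revision 1
helm get manifest my-api-dev -n demo | head
```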
Three-Way Strategic Merge Patches
In our case, applying the required labels and annotations is enough, because we strongly discourage anyone from making manual changes to the K8s resources. All the changes are stored and applied via GitOps.
However… it could be that in your case the live K8s state (or the most recently applied K8s manifest) differs from the templates Helm will install (in the sense that, say, volume names or volume paths have changed), and this will cause issues when installing the Chart. It might not fail, but the applied manifest will not be the one you think you applied (because of the three-way merge).
If your live state differs from the desired state in your Helm chart, Helm attempts to apply only the differences. To ensure consistency between your desired state and the live state, you need to use the three-way strategic merge patches feature in Helm.
Helm 3 implements three-way strategic merge patches: it compares the proposed chart’s manifests with the most recent release’s manifests and with the live state to determine the differences. Helm stores its release metadata, including the entire proposed Kubernetes manifest, in secrets, which is what allows it to perform the three-way merge.
For example, if you already have a chart installed, you can take a peek at what’s inside the release secret:
kubectl get secret -n demo
NAME TYPE DATA AGE
sh.helm.release.v1.my-api-dev.v1 helm.sh/release.v1 1 136m
If you fetch that secret and look at its content:
apiVersion: v1
data:
release: SDR........
kind: Secret
metadata:
labels:
name: name
owner: helm
status: deployed
version: "1"
name: sh.helm.release.v1.my-api-dev.v1
namespace: demo
type: helm.sh/release.v1
The data.release field is double base64 encoded and gzip compressed. This is where the manifest from the latest deploy lives.
echo SDR............. | base64 -d | base64 -d | gunzip -c
will give the following:
{
"name": "my-api-dev",
"info": {
"first_deployed": "2024-10-16T07:03:29.363389168Z",
"last_deployed": "2024-10-16T07:03:29.363389168Z",
"deleted": "",
"description": "Install complete",
"status": "deployed"
},
"chart": {
"metadata": {
"name": "my-api-dev",
"version": "1.0.0",
"description": "XXXXX",
"apiVersion": "v2",
"appVersion": "1.0.0",
[........]
"schema": null,
"files": null
},
"manifest": "---\n# Source: my-api-dev/charts/.........\n",
"version": 1,
"namespace": "demo"
}
The manifest field is where the latest applied K8s manifest is stored, and this is where we can inject the live state of the K8s resources.
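As a shortcut, you can pull the payload straight from the cluster, decode it, and extract just the manifest (release and namespace names as in the example above):

```bash
kubectl get secret sh.helm.release.v1.my-api-dev.v1 -n demo -o jsonpath='{.data.release}' \
  | base64 -d | base64 -d | gunzip -c | jq -r '.manifest'
```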
Trick Helm into thinking that the resources have been deployed using a chart
These are the steps that can be used to migrate an existing Kubernetes application to Helm without downtime if the manifest differs.
- Grab the latest K8s manifest. If you have it stored somewhere, better; otherwise you can use kubectl get to fetch each resource from the live application (a minimal sketch is given at the end of this section).
- Build a one-liner manifest:
cat /tmp/manifest.yml | awk '{printf "%s\\n", $0}'
- Inject the manifest
{
"name": "release-name",
"info": {
...
},
"chart": {
...
},
"config": {...},
"manifest": "INJECT HERE"",
"version": 1,
"namespace": "demo"
}
- Build a one-liner of the above: `cat /tmp/manifest.json | jq -c > /tmp/manifest-one-liner.json`
- Gzip it and double base64 encode it:
data_release=$(cat /tmp/manifest-one-liner.json | gzip -c -k | base64 | base64 | awk '{print}' ORS='')
- And finally build the secret:
cat << EOF | kubectl create -f-
apiVersion: v1
data:
release: ${data_release}
kind: Secret
metadata:
labels:
name: name
owner: helm
status: deployed
version: "1"
name: sh.helm.release.v1.name.v1
namespace: demo
type: helm.sh/release.v1
EOF
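If the secret was built correctly, Helm should now report the fabricated release (names below are the placeholders used above):

```bash
# Revision 1 should show up as deployed
helm history release-name -n demo
# And the manifest Helm will merge against should be the one you injected
helm get manifest release-name -n demo | head
```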
Voila
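For step 1, if the manifest is not stored anywhere, here is a minimal sketch to rebuild it from the live cluster. The resource list is the illustrative one used earlier, and you will likely want to strip server-side fields (status, resourceVersion, uid…) before injecting:

```bash
# Rebuild /tmp/manifest.yml from the live objects
for res in serviceaccount/my-api-dev \
           configmap/sia-config-my-api-dev configmap/workload-config-my-api-dev \
           service/my-api-dev deployment/my-api-dev \
           horizontalpodautoscaler/my-api-dev ingress/my-api-dev; do
  echo '---'
  kubectl get "$res" -n demo -o yaml
done > /tmp/manifest.yml
```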