Pravar Agrawal Technology & Travel

Power Up With Helm3

If you are already familiar with the Kubernetes world, Helm is indispensable. It’s an immensely powerful utility for managing deployments and upgrades of Kubernetes resources. Earlier versions of Helm raised a few concerns, especially around security, but Helm v3 addresses those issues and adds new features. Before we jump into Helm v3, let’s understand a little about its previous version.

Helm is a package manager for the Kubernetes ecosystem. It comes into play when deploying or managing Kubernetes resources like Deployments, Services, Ingresses etc. These resources are managed in the form of Helm charts, which are written in Helm’s own templating DSL and render the underlying K8s resources. A sample Helm chart is available here. Until v3, Helm worked with the help of another component called Tiller, which by default ran as the tiller-deploy deployment in the kube-system namespace. When the user deployed a new Helm chart release with the local helm client, Tiller communicated every client instruction to the kube-apiserver, followed by the instructions to create or update the resources. With the introduction of Helm v3, Tiller has been removed and the older client/server architecture is replaced by a client/library architecture with just the helm binary. This gives a boost to security, since permissions are now evaluated on a per-user basis. Also, releases are now stored as in-cluster Secrets, persisted per namespace, and no longer live in Tiller’s namespace.
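The release-storage change is easy to verify yourself. A minimal sketch, assuming a release has already been installed into a namespace called demo (the namespace name here is just an example):

```shell
# Helm v3 stores each release revision as a Secret in the
# release's own namespace, labelled with owner=helm
kubectl get secrets -n demo -l owner=helm

# The secret type marks it as Helm release data: helm.sh/release.v1
```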

Let’s get into action with Helm now. Helm v3 can be installed in a similar fashion to the previous versions, but this time we don’t need to run helm init, and HELM_HOME has been removed. I’m currently running the below version on my local machine:

$ helm version
version.BuildInfo{Version:"v3.0.3", GitCommit:"ac925eb7279f4a6955df663a0128044a8a6b7593", GitTreeState:"clean", GoVersion:"go1.13.6"}
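For completeness, one common way to get the v3 binary is the Helm project’s official installer script; a sketch:

```shell
# Download and run Helm's official install script
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3.sh
chmod +x get_helm.sh
./get_helm.sh

# Verify the client version (no Tiller, so no server version line)
helm version
```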

We can straight away add the repository for grabbing the charts and get going like below:

$ helm repo add nginx https://helm.nginx.com/stable

We can also search for any existing package in the added repositories:

$ helm search repo nginx-ingress

The helm search command can now query both local repositories and the Helm Hub. Apart from the above, some more changes have been introduced as well:
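The distinction is made with a subcommand; for example:

```shell
# Search charts published on the Helm Hub
helm search hub nginx

# Search only the repositories added locally with `helm repo add`
helm search repo nginx
```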

  • delete is replaced with uninstall,
  • fetch is replaced with pull,
  • helm install now requires a release name or the --generate-name flag,
  • inspect is replaced with show

Now, I’m going to deploy a sample Helm chart in my local K8s cluster, which I’m running using Docker Desktop. I have a sample Node.js application packaged as a Helm chart.

$ git clone [email protected]:pravarag/nodejs-on-k8s.git
Cloning into 'nodejs-on-k8s'...
Receiving objects: 100% (28/28), 7.13 KiB | 3.57 MiB/s, done.
Resolving deltas: 100% (1/1), done.

Now, let’s see what is inside this chart:

.
├── Dockerfile
├── README.md
├── k8s-manifests
│   ├── nodeApp-deploy.yml
│   ├── nodeApp-hpa.yml
│   ├── nodeApp-priorityclass.yml
│   └── nodeApp-service.yml
├── nodejs-chart
│   ├── Chart.yaml
│   ├── templates
│   │   ├── NOTES.txt
│   │   ├── _helpers.tpl
│   │   ├── deployment.yaml
│   │   ├── hpa.yaml
│   │   ├── priority_class.yaml
│   │   ├── service.yaml
│   │   └── tests
│   │       └── test-connection.yaml
│   └── values.yaml
├── package.json
└── server.js

Our main focus here is on two things: values.yaml and the files inside templates. I’ll just show deployment.yaml, which is templatised in Helm’s DSL:

$ cat deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodejs-deployment
  labels:
    app: nodeJs
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: nodeJs
  template:
    metadata:
      labels:
        app: nodeJs
    spec:
      containers:
        - name: hello-world-app
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 3000
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
      priorityClassName: {{ .Values.priorityClassDefault.name }}

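To see how the values get substituted without touching the cluster, the chart can be rendered locally; a sketch, assuming it is run from the repository root (the release name nodejs-release is my choice):

```shell
# Render the manifests locally with the values from values.yaml;
# no connection to the cluster is needed
helm template nodejs-release ./nodejs-chart

# Override a value on the command line to check the substitution
helm template nodejs-release ./nodejs-chart --set replicaCount=3
```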
What we see between those curly braces, like {{ .Values.replicaCount }}, are values being passed in from the values.yaml file. A values.yaml looks like:

$ cat values.yaml
# Default values for nodejs-chart.
# Declare variables to be passed into your templates.

replicaCount: 10

image:
  repository: pravarag/nodejs-test
  # repository: #
  tag: latest
  pullPolicy: IfNotPresent


service:
  type: LoadBalancer
  port: 3000
  targetPort: 3000


resources:
  limits:
    cpu: 500m
    memory: 500Mi
  requests:
    cpu: 5m
    memory: 5Mi

nodeSelector: {}

tolerations: []

affinity: {}

priorityClassDefault:
  enabled: true
  name: high-priority
  value: 1200000


controller:
  autoscaling:
    enabled: true
    minReplicas: 7
    maxReplicas: 10
    averageCPUUtilization: 50
    averageMemoryUtilization: 60

And once we run the install command for this chart, the templates are rendered and applied inside the K8s cluster, and we can see our pods come up and run:
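The install step itself can be sketched as below; the release name nodejs-release is my choice, and the chart path matches the tree shown earlier:

```shell
# Install the chart from the local directory as a named release
helm install nodejs-release ./nodejs-chart

# Watch the pods defined by the deployment template come up
kubectl get pods -l app=nodeJs -w
```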

(screenshot: pods of the nodejs-chart release up and running)

The helm list command can be used to check the charts currently installed in our K8s cluster.
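For example:

```shell
# List releases in the current namespace
helm list

# Releases are namespace-scoped in v3, so other namespaces
# must be queried explicitly
helm list -n kube-system
```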

So, that was a quick power show of Helm v3, and these are not the only things one can do with Helm. To learn more about Helm’s features, follow here. Also check out this repository, which hosts the most widely used Helm charts, here.

Until next time, ciao!!