Creating a Helm chart deployment for a Web App
Getting started
First of all, we need to have Helm installed on our system. You can follow the steps in the official documentation to install it.
Let’s follow these three steps to install it on Windows:
- Download the helm binary from the Helm GitHub releases page
- Create a directory in your user folder called local-apps, then copy into it the helm.exe file that was bundled in the .zip file you downloaded from GitHub
- Go to “My Computer” > “Properties” > “Advanced” > “Environment Variables” > “Path”, and add a new value pointing to the folder that you have created
Finally, open a cmd window and run helm version to confirm that you have installed it correctly. You should see your Helm version displayed on the console.
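The output should look roughly like this (the exact version, commit and Go version will differ on your machine):

version.BuildInfo{Version:"v3.10.0", GitCommit:"<commit-hash>", GitTreeState:"clean", GoVersion:"go1.18.6"}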
General structure of the helm charts
Our charts will be located in a folder named charts in the root folder of the repository; inside it, we will have a folder for each deployment we want to have. We will also have a values folder in the root of the repository, where we will store the environment-specific values for each environment.
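Assuming a single web-app deployment and three environments, the repository layout could look something like this (everything except the charts and values folders is illustrative):

<repository-root>
  charts/
    web-app/
  values/
    dev-values.yaml
    staging-values.yaml
    prod-values.yaml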
Generating the template
Inside a cmd window, go to the charts folder and run the following command:
helm create <deployment-name>
Replace <deployment-name> with your app name. From now on, for the purposes of this document, we will call our app web-app.
This will generate a folder named web-app, which will contain the folders charts and templates, as well as the files .helmignore, Chart.yaml and values.yaml.
In our case we will go with a custom template, so you can keep Chart.yaml, values.yaml and .helmignore and delete the other folders and files, but keep the (now empty) templates folder: that is where we will be creating our chart files.
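After this clean-up, the chart folder should look roughly like this:

charts/
  web-app/
    templates/      (kept, but empty for now)
    .helmignore
    Chart.yaml
    values.yaml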
Chart.yaml file
The Chart.yaml file contains some information about our chart and application. If it has just been generated, it should look something like this:
apiVersion: v2
name: web-app
description: A Helm chart for Kubernetes
type: application
version: 0.1.0
appVersion: "1.16.0"
We should change the following elements (an updated example follows the list):
- description: replace the placeholder text with a short description of your app
- version: this is the Helm chart version, and it should be incremented each time you change something in your Helm charts. It follows Semantic Versioning
- appVersion: this is your app version, and it should follow the same version you are using for your app. It is highly recommended to keep this value between double quotes, as it is not technically expected to be semantic versioning.
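For our web-app example, the edited Chart.yaml could end up looking like this (the description and version numbers are just illustrative):

apiVersion: v2
name: web-app
description: Helm chart for the web-app ASP.NET Core application
type: application
version: 0.2.0
appVersion: "1.0.0"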
values.yaml file
This file is very important: it contains the default values for our Helm chart. It should always hold development environment values by default; only the specific values that change in other environments are overridden in separate files in the values folder we talked about earlier. For now, know that this file should specify all the variables our Helm chart needs to work.
For this example, your values.yaml
file should look something like this:
ReplicaCount: 2
AppName: webapp
Autoscaling:
  UseAutoscaling: true
  MinReplicas: 2
  MaxReplicas: 3
  EnableTargetCPU: true
  EnableTargetMem: true
  TargetCPUUtilizationPercentage: 90
  TargetMemoryUtilizationPercentage: 80
App:
  Resources:
    Requests:
      Cpu: 50m
      Memory: 400Mi
    Limits:
      Cpu: 600m
      Memory: 2Gi
Hosts:
  Local: mywebapp.com
SelfUrl: https://webapp.com
CorsOrigins: https://webapp.com,https://frontend.webapp.com
EnvironmentShortName: Dev
NetCoreEnvironment: Development
Image: myacrurl.azurecr.io/dev/webapp:1.0.0
ImagePullSecret: my-acr-secret
LivenessProbe:
  Enabled: true
StartupProbe:
  Enabled: true
ReadinessProbe:
  Enabled: false
AzureKeyVault:
  IsEnabled: true
  KeyVaultName: MyKeyVault
ApplicationInsights:
  InstrumentationKey: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
Certificate:
  IsEnabled: true
  ClusterIssuerName: letsencrypt
SomeString: Some text
ExampleSettings:
  ListExample:
    - 123456
  BoolExample: false
AnotherExample:
  AnotherListExample:
    - ListItem01
    - ListItem02
    - ListItem03
    - ListItem04
    - ListItem05
    - ListItem06
ThirdExample:
  IsEnabled: false
  StringExample: Some example text
  YetAnotherList:
    - ListItem01
    - ListItem02
templates folder
This folder contains all our Helm “logic”.
The most basic files to have are deployment.yaml, hpa.yaml and service.yaml; with these three files we cover the bare minimum functionality. We will go through these and some other files that will help us control our deployment better and give us more functionality.
One very important note about working with these files is that almost none of the parameters we add to them should have hardcoded values. Everything should be a variable that is specified in the values.yaml file and can later be overridden in a custom <environment>-values.yaml file.
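As a sketch of how this works, a file such as values/prod-values.yaml (the file name and all values here are illustrative) only overrides the keys that differ from the development defaults:

# values/prod-values.yaml - only the values that differ from values.yaml
EnvironmentShortName: Prod
NetCoreEnvironment: Production
Image: myacrurl.azurecr.io/prod/webapp:1.0.0
Hosts:
  Local: prod.mywebapp.com
SelfUrl: https://prod.mywebapp.com

It is then passed to Helm on top of the chart defaults when installing or upgrading the release, for example:

helm upgrade --install web-app ./charts/web-app -f ./values/prod-values.yaml --namespace <namespace-name>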
Variables look like this: {{ .Values.variableName }}. Here .Values is a mandatory keyword, followed by the name of the variable. In this case we assume variableName is at the first level of our values.yaml, i.e. it is not nested inside another key:
variableName: some-value
But it could look something like this:
variableGroup:
  variableSubgroup:
    variableName: some-value
In that case, you have to reference it like this: {{ .Values.variableGroup.variableSubgroup.variableName }}. As you may have noticed, the dot is used to access the parameters of each element; you can use this to your advantage to better organize your parameters.
deployment.yaml
This file contains the Deployment for your app. Here we have a basic deployment for a web-oriented app:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.AppName }}
  namespace: {{ .Release.Namespace }}
spec:
  selector:
    matchLabels:
      app: {{ .Values.AppName }}
  template:
    metadata:
      labels:
        app: {{ .Values.AppName }}
    spec:
      containers:
        - name: {{ .Values.AppName }}
          image: {{ .Values.Image }}
          env:
            - name: ASPNETCORE_ENVIRONMENT
              value: {{ .Values.NetCoreEnvironment }}
          ports:
            - containerPort: 80
          {{- if .Values.LivenessProbe.Enabled }}
          livenessProbe: # https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-liveness-command
            httpGet:
              path: /
              port: 80
            failureThreshold: 1
            periodSeconds: 10
          {{- end }}
          {{- if .Values.StartupProbe.Enabled }}
          startupProbe: # https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-startup-probes
            httpGet:
              path: /
              port: 80
            failureThreshold: 30
            periodSeconds: 5
          {{- end }}
          {{- if .Values.ReadinessProbe.Enabled }}
          readinessProbe: # https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-readiness-probes
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 5
            periodSeconds: 10
          {{- end }}
          volumeMounts:
            - mountPath: /app/appsettings.{{ .Values.NetCoreEnvironment }}.json
              subPath: appsettings.{{ .Values.NetCoreEnvironment }}.json
              name: config-volume
          resources:
            requests:
              cpu: {{ .Values.App.Resources.Requests.Cpu }}
              memory: {{ .Values.App.Resources.Requests.Memory }}
            limits:
              cpu: {{ .Values.App.Resources.Limits.Cpu }}
              memory: {{ .Values.App.Resources.Limits.Memory }}
      {{- if .Values.ImagePullSecret }}
      imagePullSecrets:
        - name: {{ .Values.ImagePullSecret }}
      {{- end }}
      volumes:
        - name: config-volume
          configMap:
            name: {{ .Values.AppName }}-config-{{ lower .Values.EnvironmentShortName }}
I’ve included the imagePullSecrets parameter in case you need any kind of authentication to retrieve your app image. You can create a new secret with the following command, replacing the placeholders with your values:
kubectl create secret docker-registry <name-of-secret> --namespace <namespace-name> --docker-server=<container-registry-hostname> --docker-username=<container-registry-username> --docker-password=<container-registry-password>
Take note that the ImagePullSecret variable should match what you specified in <name-of-secret>, and the secret must be in the same namespace. This is only necessary if you are not using another authentication method that already takes care of this.
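For instance, using the example ImagePullSecret name and registry from values.yaml above (the namespace, username and password remain placeholders you need to fill in):

kubectl create secret docker-registry my-acr-secret --namespace <namespace-name> --docker-server=myacrurl.azurecr.io --docker-username=<container-registry-username> --docker-password=<container-registry-password>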
service.yaml
The Service will allow your app to be reached by other applications in the cluster, and will let us configure an Ingress later on, which makes your application accessible from outside the cluster via a hostname, among other features.
apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.AppName }}
  namespace: {{ .Release.Namespace }}
  labels:
    app: {{ .Values.AppName }}
spec:
  type: ClusterIP
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
      name: http
  selector:
    app: {{ .Values.AppName }}
Something very important about ports in Services is the following (see the sketch after this list):
- port exposes the Kubernetes Service on the specified port within the cluster. Other pods within the cluster can communicate with this Service on that port. This is important when connecting an Ingress to this Service, which we will see later on.
- targetPort is the port the Service will send requests to, and which your pod will be listening on. Your application in the container needs to be listening on this port as well.
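For example, if the container listened on port 8080 instead of 80 (a hypothetical variation, not the case in this chart), the Service would still be exposed inside the cluster on port 80 while forwarding traffic to 8080:

spec:
  type: ClusterIP
  ports:
    - port: 80          # other pods and the Ingress reach the Service on this port
      targetPort: 8080  # the Service forwards traffic to this container port
      protocol: TCP
      name: http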
hpa.yaml
The HPA (HorizontalPodAutoscaler) resource allows you to define the rules for scaling pod instances based on your app's resource consumption:
{{- if .Values.Autoscaling.UseAutoscaling }}
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: {{ .Values.AppName }}
  namespace: {{ .Release.Namespace }}
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ .Values.AppName }}
  minReplicas: {{ .Values.Autoscaling.MinReplicas }}
  maxReplicas: {{ .Values.Autoscaling.MaxReplicas }}
  metrics:
    {{- if .Values.Autoscaling.EnableTargetCPU }}
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: {{ .Values.Autoscaling.TargetCPUUtilizationPercentage }}
    {{- end }}
    {{- if .Values.Autoscaling.EnableTargetMem }}
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: {{ .Values.Autoscaling.TargetMemoryUtilizationPercentage }}
    {{- end }}
{{- end }}
minReplicas and maxReplicas specify the minimum and maximum number of instances allowed when scaling your app. You can also enable metrics: with CPU and memory utilization based scaling, you specify a percentage of the total resources requested for your app that the cluster should take into account before scaling up or down. The requested resources are defined under resources: requests in your deployment.yaml. Let's assume you have configured memory utilization at 80%: if at any given point your deployment uses more than 80% of the memory requested for it, Kubernetes will add more pods to meet the demand on the application; how many pods it creates depends on whether scaling up brings that usage back below 80%, or whether it hits the maxReplicas number of pods first. Once demand goes down and consumption lowers, Kubernetes will automatically reduce the number of pods to match the current demand, down to the minReplicas number of pods.
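To make the math concrete, here is a rough sketch of the calculation the HPA performs with the example values from values.yaml (the 480Mi observed usage is an assumed figure, just for illustration); the formula Kubernetes uses is desiredReplicas = ceil(currentReplicas * currentUsage / targetUsage):

# requests.memory = 400Mi, TargetMemoryUtilizationPercentage = 80  ->  target = 320Mi per pod
# assumed average usage across the current 2 pods: 480Mi each
# desiredReplicas = ceil(2 * 480 / 320) = 3
# 3 <= MaxReplicas (3), so the HPA scales the Deployment up to 3 pods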
configmap.yaml
Another important file is the ConfigMap; it allows us to define files whose values we can set using variables. This whole chart is based on a .NET Core web app, so in this ConfigMap example I will illustrate an appsettings file that matches the current NetCoreEnvironment value:
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Values.AppName }}-config-{{ lower .Values.EnvironmentShortName }}
  namespace: {{ .Release.Namespace }}
data:
  appsettings.{{ .Values.NetCoreEnvironment }}.json: |-
    {
      "Tenant": 1,
      "SomeString": "{{ .Values.SomeString }}",
      "Configuration": {
        "AzureKeyVault": {
          "IsEnabled": {{ .Values.AzureKeyVault.IsEnabled }},
          "KeyVaultName": "{{ .Values.AzureKeyVault.KeyVaultName }}"
        }
      },
      "ApplicationInsights": {
        "InstrumentationKey": "{{ .Values.ApplicationInsights.InstrumentationKey }}"
      },
      "ExampleSettings": {
        "SettingEnabled": true,
        "NumericValue": "654321",
        "StringValue": "XXXXX-XXXXX",
        "ListExample": [{{- $list := .Values.ExampleSettings.ListExample }}
        {{- range $index, $element := $list }}
        {{- quote $element }}{{ if ne (sub (len $list) 1) $index }}, {{ end }}
        {{- end }}],
        "BoolExample": {{ .Values.ExampleSettings.BoolExample }}
      },
      "AnotherExample": {
        "AnotherListExample": [
          {{- $list := .Values.AnotherExample.AnotherListExample }}
          {{- range $index, $element := $list }}
          {{ quote $element }}{{ if ne (sub (len $list) 1) $index }},{{ end }}
          {{- end }}
        ]
      },
      "App": {
        "SelfUrl": "{{ .Values.SelfUrl }}",
        "CorsOrigins": "{{ .Values.CorsOrigins }}"
      },
      "ThirdExample": {
        "IsEnabled": {{ .Values.ThirdExample.IsEnabled }},
        "YetAnotherList": [
          {{- $list := .Values.ThirdExample.YetAnotherList }}
          {{- range $index, $element := $list }}
          {{ quote $element }}{{ if ne (sub (len $list) 1) $index }},{{ end }}
          {{- end }}
        ],
        "StringExample": "{{ .Values.ThirdExample.StringExample }}"
      }
    }
In this ConfigMap I decided to keep some for loops to show how a YAML list can be formatted into a JSON configuration file. The most interesting part is the {{ if ne (sub (len $list) 1) $index }} statement. It is an if conditional that applies to the comma character between the if and the {{ end }}. The logic is that if the current index is not equal to the length of the whole list minus 1 (remember that the index is 0-based), a comma is added; otherwise it is skipped. This keeps the JSON convention where items in a list are separated by commas, except after the last item.
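You can preview the rendered result without deploying anything by running helm template ./charts/web-app from the repository root. With the example values above, the AnotherListExample block should come out roughly like this (the exact whitespace may differ):

"AnotherListExample": [
  "ListItem01",
  "ListItem02",
  "ListItem03",
  "ListItem04",
  "ListItem05",
  "ListItem06"
]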
ingress.yaml
Ingresses are very useful to make our application accessible from the outside. An Ingress basically works as a reverse proxy: we specify the URL and the service, and the Ingress automatically routes the connection to the service that matches the requested URL. This allows us to have multiple web apps running in the same cluster.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ .Values.AppName }}
  namespace: {{ .Release.Namespace }}
  labels:
    app: {{ .Values.AppName }}
spec:
  ingressClassName: nginx
  {{- if .Values.Certificate.IsEnabled }}
  tls:
    - hosts:
        - {{ .Values.Hosts.Local }}
      secretName: {{ .Values.AppName }}-{{ lower .Values.EnvironmentShortName }}-letsencrypt
  {{- end }}
  rules:
    - host: {{ .Values.Hosts.Local }}
      http:
        paths:
          - backend:
              service:
                name: {{ .Values.AppName }}
                port:
                  number: 80
            path: /
            pathType: ImplementationSpecific
Here the important bits are rules: host, which specifies the URL the Ingress will map to this app, and rules: http: paths: backend: service, where we specify which Service this Ingress is connected to. Remember that port: number has to match the Service port (and not the app port).
There is also tls, which is used to configure a TLS (HTTPS) connection. It is wrapped in an if conditional that checks whether the Certificate.IsEnabled variable is true; if it is not, this section will not be added to the deployment, otherwise it will. We will see more about this when we talk about certificate.yaml.
certificate.yaml
This file is used to generate TLS certificates for our application, and it can get pretty complex depending on what we want to do. Teaching how to install cert-manager in your Kubernetes cluster is out of the scope of this document. We will cover a basic Let’s Encrypt configuration. Because of how Let’s Encrypt works, you must have a valid domain that is accessible from the Internet and that is already pointing to your Kubernetes cluster, otherwise Let’s Encrypt will not be able to generate the certificates.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: {{ .Values.AppName }}-{{ lower .Values.EnvironmentShortName }}-letsencrypt
  namespace: {{ .Release.Namespace }}
spec:
  commonName: {{ .Values.Hosts.Local }}
  dnsNames:
    - {{ .Values.Hosts.Local }}
  issuerRef:
    kind: ClusterIssuer
    name: {{ .Values.Certificate.ClusterIssuerName }}
  secretName: {{ .Values.AppName }}-{{ lower .Values.EnvironmentShortName }}-letsencrypt
Take note that it needs a ClusterIssuer, which is specified in Certificate.ClusterIssuerName, so we will need to create this ClusterIssuer. Since this resource is cluster-scoped, it only needs to be created once. You can create it by applying the following yaml with kubectl apply -f <your-cluster-issuer-yaml-path>.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    email: your@email.com
    preferredChain: ""
    privateKeySecretRef:
      name: letsencrypt
    server: https://acme-v02.api.letsencrypt.org/directory
    solvers:
      - http01:
          ingress:
            class: nginx
Make sure that the ClusterIssuer's metadata: name matches your Certificate.ClusterIssuerName value, otherwise the Certificate will not be able to find its issuer and certificate generation will fail.
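Once the chart is deployed, you can check whether the certificate was actually issued with standard kubectl commands against the cert-manager resources (the resource name follows the naming pattern used in certificate.yaml above):

kubectl get certificate --namespace <namespace-name>
kubectl describe certificate <app-name>-<environment>-letsencrypt --namespace <namespace-name>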
Adding other dependencies to your deployment
Obviously you can add more deployments and services to a single chart. To maintain order though, this should be done only for other components or dependencies that our app needs and that are small enough to bundle with our app, for example a Redis image:
redis-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.AppName }}-redis
  namespace: {{ .Release.Namespace }}
spec:
  selector:
    matchLabels:
      app: {{ .Values.AppName }}-redis
  template:
    metadata:
      labels:
        app: {{ .Values.AppName }}-redis
    spec:
      containers:
        - name: {{ .Values.AppName }}-redis
          image: redis:6.2.6
          ports:
            - containerPort: 6379
          resources:
            requests:
              cpu: 10m
              memory: 50Mi
            limits:
              cpu: 300m
              memory: 200Mi
redis-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.AppName }}-redis
  namespace: {{ .Release.Namespace }}
  labels:
    app: {{ .Values.AppName }}-redis
spec:
  type: ClusterIP
  ports:
    - port: 6379
      targetPort: 6379
      name: client
  selector:
    app: {{ .Values.AppName }}-redis
Then we can connect from our app just by using the service name and the port: because it is bundled inside our app’s Helm chart, it will be deployed into the same namespace, so the Kubernetes DNS will automatically resolve the name. If we wanted to access this service from another namespace, we would use the following URL syntax: <servicename>.<namespace>.svc.cluster.local
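For example, if our .NET app reads its Redis connection string from configuration, a hypothetical entry in the ConfigMap template could point at the bundled service by name (the "Redis" section name is an assumption, not part of the appsettings shown earlier):

"Redis": {
  "Configuration": "{{ .Values.AppName }}-redis:6379"
}

From another namespace, the same instance would be reached as {{ .Values.AppName }}-redis.<namespace>.svc.cluster.local:6379.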