Deploying NodeJs (ExpressJs) project with Docker on Kubernetes


Prerequisites: an understanding of Node.js, plus Kubernetes/Docker architecture theory

We are going to learn how to:

  1. Deploy an ExpressJs app (as a Docker image) to Kubernetes.
  2. Add a Kubernetes health check to it.

I will keep this article simple and hopefully you will understand things easily.

I will start by dockerizing the ExpressJs app. The following is a sample index.js file.


const Express = require('express');
const port = process.env.PORT || 2087; // you can use any free port
const app = Express();

app.get('/', (req, res) => {
    res.send('hello world');
});

app.listen(port, () => console.log(`Server listening on port ${port}`));


Docker is a bit like a virtual machine. But unlike a virtual machine, rather than creating a whole virtual operating system, Docker allows applications to use the same kernel as the system that they’re running on and only requires applications be shipped with things not already running on the host computer. This gives a significant performance boost and reduces the size of the application.

Docker makes it easier to create, deploy, and run applications by using containers. Containers allow a developer to package up an application with all of the parts it needs, such as libraries and other dependencies, and ship it all out as one package.

Download Docker for Windows:


We will have to add this Dockerfile in the root of our project.

FROM node:8
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
COPY package*.json ./
RUN npm install
# Add your source files
COPY . .
# Our project listens on port 2087
EXPOSE 2087
CMD ["npm", "start"]

Our Dockerfile is simple and easy to understand:

  1. The first line refers to the Node version that we are using.
  2. We then create an app directory and copy npm's package*.json files. The asterisk (*) means all files whose names start with "package" are copied.
  3. Then we run npm install and copy our project into the image.
  4. Since our project listens on port 2087, we expose that port here.
  5. At the end, npm start runs when the container launches.
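Since COPY . . copies everything in the project folder into the image, it is common to place a .dockerignore file next to the Dockerfile so local artifacts stay out of the image. A minimal sketch (these entries are typical examples, not from the original project):

```
# .dockerignore — keep local artifacts out of the image
node_modules
npm-debug.log
.git
```

Excluding node_modules is especially useful, because npm install inside the image rebuilds dependencies for the image's own platform anyway.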

Docker Repository

Before building the image, we will need a repository to push the Docker image to, so that we can later deploy it to Kubernetes. I suggest creating an account and a repository at Docker Hub.

Note: Docker Hub provides only one private repository for free; otherwise, you get public repositories with a free account.

After you have created a Docker repository, run these commands from the root folder of your project.
Log in to the repository:

docker login
# then enter your username and password

Build and push the Docker image:

docker build -t [docker hub username]/[your repository name]:latest .
docker push [docker hub username]/[your repository name]:latest

So what we have done here is build a Docker image and push it to our repository. We will need this image when we deploy our app to Kubernetes.

Kubernetes and Minikube


Kubernetes is a portable, extensible open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation. Kubernetes provides a container-centric management environment. It orchestrates computing, networking, and storage infrastructure on behalf of user workloads. This provides much of the simplicity of Platform as a Service (PaaS) with the flexibility of Infrastructure as a Service (IaaS), and enables portability across infrastructure providers.



We will be deploying our app to Minikube because it allows us to run Kubernetes locally very easily. Minikube runs a single-node Kubernetes cluster inside a VM on your laptop, for users looking to try out Kubernetes or develop with it day-to-day.

There are other alternatives besides Minikube as well, for example:

  • Docker Desktop is an easy-to-install application for your Mac or Windows environment that enables you to start coding and deploying in containers in minutes on a single-node Kubernetes cluster.
  • Minishift installs the community version of the Kubernetes enterprise platform OpenShift for local development & testing. It offers an all-in-one VM (minishift start) for Windows, macOS, and Linux. The container start is based on oc cluster up (Linux only). You can also install the included add-ons.
  • MicroK8s provides a single command installation of the latest Kubernetes release on a local machine for development and testing. Setup is quick, fast (~30 sec) and supports many plugins including Istio with a single command.



Please follow this article for minikube and kubernetes setup:

Note (a big trouble saver!): You might face issues stopping Minikube, so here is the workaround in advance:

While minikube is running:

 $ minikube ssh
 $ sudo poweroff

Deployment to Minikube

Kubernetes uses config files to organize information about clusters, users, namespaces, and authentication mechanisms. The kubectl command-line tool uses kubeconfig files to find the information it needs to choose a cluster and communicate with its API server. With kubeconfig files, you can organize your clusters, users, accesses, and namespaces. You can also define contexts to quickly and easily switch between clusters and namespaces.

A context element in a kubeconfig file is used to group access parameters under a convenient name. Each context has three parameters: cluster, namespace, and user. By default, the kubectl command-line tool uses parameters from the current context to communicate with the cluster.
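For illustration, a context entry in a kubeconfig file looks roughly like this (the names below are the defaults Minikube creates; treat this as a sketch, not your exact config):

```yaml
contexts:
- context:
    cluster: minikube
    namespace: default
    user: minikube
  name: minikube
current-context: minikube
```

The current-context field is what the kubectl config use-context command switches.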

So, after you have successfully installed Minikube and k8s, don't forget to set the config to minikube, as we will be using our local cluster (i.e. minikube) for our deployment.

$ kubectl config use-context minikube

Now we want to deploy our containerized applications on top of kubernetes. To do so, we need to create a Kubernetes Deployment configuration.

Once we’ve created a Deployment, the Kubernetes master schedules the mentioned application instances onto individual Nodes in the cluster.

Once the application instances are created, a Kubernetes Deployment Controller continuously monitors those instances. If the Node hosting an instance goes down or is deleted, the Deployment controller replaces it. This provides a self-healing mechanism to address machine failure or maintenance. In our deployment, we will use this mechanism in the form of a liveness probe.

In a pre-orchestration world, installation scripts would often be used to start applications, but they did not allow recovery from machine failure.

By both creating your application instances and keeping them running across Nodes, Kubernetes Deployments provide a fundamentally different approach to application management.

When we create a Deployment, we’ll need to specify the container image for our application and the number of replicas that we want to run.

So, after we have set our config to minikube, we will need a deployment file that deploys our Docker image to our local Minikube cluster, along with an HTTPS liveness health check (assuming our project runs on HTTPS).

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-nodejs # your nodejs app name
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: hello-nodejs # your nodejs app name
    spec:
      containers:
        - image: [docker hub username]/[your repository name]
          imagePullPolicy: Always
          name: hello-nodejs # your nodejs app name
          livenessProbe:
            httpGet:
              path: /health
              port: 2087
              scheme: HTTPS
            initialDelaySeconds: 40
            periodSeconds: 3
          ports:
            - containerPort: 2087

Save this file as deployment.yml; I will explain it in detail later. For now, let's deploy our project to our Minikube cluster with a simple command:

$ kubectl create -f deployment.yml

A Pod is the basic building block of Kubernetes: the smallest and simplest unit in the Kubernetes object model that you create or deploy. A Pod represents a running process on your cluster.

A Pod encapsulates an application container (or, in some cases, multiple containers), storage resources, a unique network IP, and options that govern how the container(s) should run. A Pod represents a unit of deployment: a single instance of an application in Kubernetes, which might consist of either a single container or a small number of containers that are tightly coupled and that share resources.

Docker is the most common container runtime used in a Kubernetes Pod, but Pods support other container runtimes as well.


So, after you have deployed your app on minikube, type in this command in terminal:

$ kubectl get pods

You will see 3 pods after entering the above command.

Why do we see 3 pods? Because in our deployment.yml file we assigned 3 replicas for the project, so we now have 3 instances of our project running in 3 different pods.

In the containers section, image refers to the Docker image we pushed to our Docker repository, and imagePullPolicy is set to Always, which means Kubernetes will always pull a fresh copy of the project's Docker image whenever we deploy the project.

Liveness Probe

In our deployment.yml file, you can see that we have added a liveness probe.
The liveness probe is our project's savior. If for any reason (for example, your server goes down or stops responding to requests) our project fails in any pod, the liveness probe will restart that pod, and hence restart our project:

livenessProbe:
    httpGet:
        path: /health
        port: 2087
        scheme: HTTPS
    initialDelaySeconds: 40
    periodSeconds: 3

In our Node.js project, we have exposed a '/health' API (see below), and in our deployment.yml, within the liveness probe section, we have told Kubernetes to hit the project's '/health' API, defined under the httpGet section. If the project fails for any reason, our API will not return a successful status, and Kubernetes will restart the respective pod.

app.get('/health', (req, res) => {
    console.log('health check');
    res.sendStatus(200); // any 2xx status tells the probe we are alive
});

You can also see that, under the httpGet section, we have set the scheme to HTTPS, assuming that our project runs on HTTPS. The liveness check does not verify HTTPS certificates, so you can easily check the health of your project even if it runs on HTTPS with self-signed certificates. If you do not set a scheme, the default is HTTP.

We have set initialDelaySeconds to 40 seconds, which is the time the project needs to start working successfully after it is deployed or restarted. If this interval is set too low (smaller than your project's startup time), the liveness probe will keep failing, because it won't get a reply from the project before the project has started; the probe will then restart the pod, will keep doing so, and your project will never run on that deployment.

periodSeconds in the liveness probe specifies the delay between successive hits of the probe on our health API; here, the liveness probe will hit our health API every 3 seconds.


So in this article, we have learned how to deploy a Docker image to Kubernetes along with a health check that acts as a savior for our project if any failure occurs, automating the entire process of failure handling.

Kubernetes liveness and readiness checks have vastly improved deployment reliability and provide a better user-end experience. However, if these probes are not configured correctly, they will make our deployment worse rather than better.
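For reference, a readiness probe could be declared alongside the liveness probe in the same container spec; this is only a sketch, since our deployment defines just a liveness probe, and the /ready endpoint name here is hypothetical (you could reuse /health instead):

```yaml
readinessProbe:
  httpGet:
    path: /ready   # hypothetical endpoint; reuse /health if you prefer
    port: 2087
    scheme: HTTPS
  initialDelaySeconds: 10
  periodSeconds: 5
```

The difference: a failing readiness probe removes the pod from the Service's endpoints (no traffic is routed to it), whereas a failing liveness probe restarts the pod.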

For more information about health check, visit:


