
The Node.js Developer’s Guide To Kubernetes – Part II

In the previous post of this series, we saw how to create a local Docker development environment powered by Docker Compose, and why it's recommended to run the same stack for production workloads. We also learned why Kubernetes is a better fit than Compose for production, which is why we started exploring this powerful orchestrator.

In this second part, we will go through the details of the Kubernetes object model, architecture, and networking. Our goal is to guide you step by step through all the technical details. After following this tutorial, you will have enough knowledge to deploy and scale most Node.js applications on a Kubernetes cluster.

Deploying MongoDB

It is possible to deploy applications in Kubernetes with a single imperative command, as shown below. However, it is recommended to deploy applications declaratively and to use Deployment objects instead of Pods directly, for two main reasons:
  • The declarative method is easier to review, automate, and backup
  • Deployment objects automate the replication of the Pods and rolling out updates of the application
 $> kubectl run --generator=run-pod/v1 --image=mongo:4.2 mongo-db
To deploy MongoDB to Kubernetes we need the following resources:

Persistent Volume

This resource is needed to define the storage volume where MongoDB data will be stored. Kubernetes supports a wide range of PersistentVolume types, such as GlusterFS, CephFS, AzureFile, and many more. For the sake of simplicity, we will use the HostPath volume plugin to create the volume locally.
Usually, we use YAML files to define resources and the kubectl command line to create, manage, and update them. Four configuration sections are common to all Kubernetes resources. They are listed below:
  • apiVersion: the Kubernetes API version to be used to create the resource.
  • kind: the type of resource to be created.
  • metadata: meta-information about the resource, such as its name or labels.
  • spec: the specifications of the resource.
We are going to use the following YAML to create a PersistentVolume object in Kubernetes. As shown, the definition file follows the structure described above. The spec section provides the object details and configuration, such as the capacity and the path of the volume on the host machine.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mo-data-pv
  labels:
    type: local
spec:
  storageClassName: generic
  capacity:
    storage: 500Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/var/lib/mongo"
Creating the above object can be done by executing one of the following commands:
$>  kubectl apply -f mongo-pv.yaml
$>  kubectl create -f mongo-pv.yaml
And you can verify the creation of the resource and its status using the following commands:
$> kubectl get persistentvolumes mo-data-pv
$> kubectl describe persistentvolumes mo-data-pv
Note that we used the resource name defined in the resource file.
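If everything went well, the get command output should look roughly like the following (the exact values, in particular AGE, will differ; the status is Available until a claim binds to the volume):
NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
mo-data-pv   500Mi      RWO            Retain           Available           generic                 10s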

Persistent Volume Claim

Each application that needs to persist data must request access to a storage volume. This is achieved by creating a PersistentVolumeClaim and attaching it to the application's Pods. The snippet below shows the definition file used to create the PersistentVolumeClaim. It contains the same sections, but with different values and configuration:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mo-data-pvc
spec:
  storageClassName: generic
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
Creating the above object can be done by executing one of the following commands:
$>  kubectl apply -f mongo-pvc.yaml
$>  kubectl create -f mongo-pvc.yaml
You can verify the creation of the resource and its status:
$> kubectl get persistentvolumeclaims mo-data-pvc
$> kubectl describe persistentvolumeclaims mo-data-pvc
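Because the claim requests a size and access mode that match the PersistentVolume created earlier, Kubernetes should bind the two together. The get output should look roughly like this:
NAME          STATUS   VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mo-data-pvc   Bound    mo-data-pv   500Mi      RWO            generic        5s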

Deployment

Deployment resources are used to manage and control the life cycle of application Pods. As a result, the spec section of a Deployment contains information about the application's Pods and how to control them. Below is a brief description of the basic fields required in a Deployment spec:
  • replicas: an integer that specifies how many Pods the Deployment should create.
  • selector: specifies the conditions used to select the Pods managed by the Deployment. For instance, in the example below, Pods that have the label app: mongodb-pod will be managed by the Deployment resource.
  • template: contains the configuration of the Pods.
The template section contains the following subsections:
  • metadata: the information attached to every Pod created by our Deployment. The labels defined in this section must match the ones used in the selector section so that the Deployment can manage the Pods after their creation.
  • spec: the specifications of the Pod's containers (notice that the containers section is a list and can include more than one container definition). Here we define container-specific configuration such as the image name, mounted volumes, restart policy, and exposed ports.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: mongo-db
  name: mongodb-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb-pod
  template:
    metadata:
      labels:
        app: mongodb-pod
    spec:
      containers:
      - name: mongodb
        volumeMounts:
          - mountPath: /var/lib/mongo
            name: mo-data
        image: mongo:4.2
        ports:
        - containerPort: 27017
      volumes:
      - name: mo-data
        persistentVolumeClaim:
          claimName: mo-data-pvc
      restartPolicy: Always
Save these configurations to “mongo-deployment.yaml” and create the described objects using:
$>  kubectl apply -f mongo-deployment.yaml
$>  kubectl create -f mongo-deployment.yaml
To verify the creation and status of the deployed resources, use:
$> kubectl get deployments.apps mongodb-deployment
$> kubectl describe deployments.apps mongodb-deployment
You can also check the status of the Pods created by the Deployment using the following command:
$> kubectl get pods --selector=app=mongodb-pod
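The Pod name is generated by the Deployment (through its underlying ReplicaSet), so the hash suffix below is only illustrative; what matters is that the Pod reaches the Running state:
NAME                                  READY   STATUS    RESTARTS   AGE
mongodb-deployment-7f9c6d8b4d-x2k5q   1/1     Running   0          30s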

Service

The MongoDB port needs to be accessible by other applications in the same cluster. This is what we implement here using a Kubernetes Service. When a Deployment defines more than one replica, the Service takes care of load balancing traffic across all of them.
The specification of a Service resource defines the following items:
  • type: Kubernetes supports several Service types for managing internal and external traffic. Since MongoDB should not be accessed by any external entity, we will create a ClusterIP Service, which exposes the service on a cluster-internal IP (only accessible by Pods in the same cluster).
  • selector: the conditions used to select the Pods targeted by the Service.
  • port: the port exposed by the Service.
  • targetPort: the port exposed by the Pod's container.
apiVersion: v1
kind: Service
metadata:
  labels:
    app: mongo-db
  name: mongodb-service
spec:
  type: ClusterIP
  selector:
    app: mongodb-pod
  ports:
  - port: 27017
    targetPort: 27017
Creating the above object can be done by executing one of the following commands:
$>  kubectl apply -f mongo-service.yaml
$>  kubectl create -f mongo-service.yaml
And you can verify the creation of the resource and its status using the following commands:
$> kubectl get service mongodb-service
$> kubectl describe service mongodb-service
$> kubectl get endpoints mongodb-service
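To double-check that the Service actually routes traffic to MongoDB, you can start a throwaway client Pod inside the cluster and ping the database through the Service name (the Pod below is only for testing and is removed as soon as the command exits):
$> kubectl run mongo-client --rm -it --restart=Never --image=mongo:4.2 -- mongo --host mongodb-service --eval 'db.runCommand({ ping: 1 })'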
Now that we are done with deploying the MongoDB resources, let’s deploy the Node.js application.

Deploy the Node.js App

Deploying this application requires only a Deployment and a Service object, since it is stateless and does not persist data (in contrast to MongoDB, whose data is persistent).

Deployment

The Deployment resource for our Node.js application is similar to the MongoDB one, with minor differences such as the labels and the Deployment name. In addition, the Node.js container spec defines environment variables. Note that we use the MongoDB Service name, “mongodb-service”, as the host in the connection string; Kubernetes resolves Service names through its internal DNS, so this is all the configuration needed to connect Node.js to MongoDB.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: easy-notes
  name: easy-notes
spec:
  replicas: 1
  selector:
    matchLabels:
      app: easy-notes-pod
  template:
    metadata:
      labels:
        app: easy-notes-pod
    spec:
      containers:
      - name: easy-notes
        env:
        - name: MONGO_URL
          value: mongodb://mongodb-service:27017/easy-notes
        image: wshihadeh/node-easy-notes-app:latest
        ports:
        - containerPort: 3000
      restartPolicy: Always

Service

The Node.js Service can be created using the definition file below, which exposes the application on port 8080 and forwards the traffic to the containers on port 3000.
apiVersion: v1
kind: Service
metadata:
  labels:
    app: easy-notes
  name: easy-notes-service
spec:
  ports:
  - port: 8080
    targetPort: 3000
  selector:
    app: easy-notes-pod
  type: ClusterIP
Creating and managing the Node.js deployment and service should be done in a similar way to what we did with MongoDB.
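For example, assuming you saved the two manifests above as “easy-notes-deployment.yaml” and “easy-notes-service.yaml” (the file names are up to you), the application can be deployed and checked with:
$> kubectl apply -f easy-notes-deployment.yaml
$> kubectl apply -f easy-notes-service.yaml
$> kubectl get pods --selector=app=easy-notes-pod
$> kubectl get service easy-notes-service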

Expose the service externally

Both the MongoDB and the Node.js applications are now running on Kubernetes; however, they are only accessible from inside the cluster. In order to access the Node.js application from the host nodes or from the outside, we need to implement one of these options:
  • Update the Node.js Service to be a NodePort service instead of ClusterIP (a minimal sketch of this alternative is shown right after this list).
  • Use an Nginx Ingress controller to expose the service.
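For reference, here is a minimal sketch of the first option, which we will not use in this tutorial: changing the Node.js Service type to NodePort exposes it on a static port of every node (the nodePort value below is only an illustrative choice from the default 30000-32767 range).
apiVersion: v1
kind: Service
metadata:
  name: easy-notes-service
spec:
  type: NodePort
  selector:
    app: easy-notes-pod
  ports:
  - port: 8080
    targetPort: 3000
    nodePort: 30080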
Using the Nginx Ingress controller is the better option: it reduces the number of exposed and managed ports and lets us control the incoming traffic to our services. Therefore, we will implement this option. Below are the steps needed to deploy and use the Nginx Ingress controller.

First of all, we need to deploy a default backend application. This can be any service that responds with a 404 page at “/” and a 200 at the “/healthz” endpoint. Below is an example of the resources needed to deploy such an application:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app: default-http-backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: default-http-backend
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        image: gcr.io/google_containers/defaultbackend:1.4
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  labels:
    app: default-http-backend
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: default-http-backend
The next step is deploying an Nginx Ingress Controller. The Controller can be deployed using a Deployment resource. It is important to configure it with a default backend and set the POD_NAME and POD_NAMESPACE environment variables.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
    spec:
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.13.0
          args:
            - /nginx-ingress-controller
            - --default-backend-service=default/default-http-backend
            - --annotations-prefix=nginx.ingress.kubernetes.io
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
          - name: http
            containerPort: 80
          - name: https
            containerPort: 443
Now, let’s expose the Nginx Controller on ports 80 and 443. To do this, we define a Service resource of type LoadBalancer for the Nginx Controller (you can also use NodePort).
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
  - name: https
    port: 443
    targetPort: 443
    protocol: TCP
  selector:
    app: ingress-nginx
Lastly, configure the Ingress Controller to expose and forward traffic to the Node.js application.

This task can be achieved by creating an Ingress resource that defines how and when to forward traffic to the Node.js application. With the resource below, we configure the Ingress Controller to forward traffic to the application Service on port 8080 when the request host is easynotes.lvh.me (subdomains of lvh.me resolve to 127.0.0.1, which makes it convenient for local testing).

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: easy-notes-ingress
spec:
  rules:
  - host: easynotes.lvh.me
    http:
      paths:
      - backend:
          serviceName: easy-notes-service
          servicePort: 8080
Now, create the defined resources:
$>  kubectl apply -f ${file_name}
$>  kubectl create -f ${file_name}
Once all Services are deployed, you should be able to access the Node.js application from the host machine using your web browser or the following curl command:
$>   curl -fs easynotes.lvh.me

Scale the services

In the previous sections, we created a Deployment and then exposed it via a Service. However, the Deployment created only one Pod. A Pod, like any other physical or virtual resource, has its performance limits. When the workload or external traffic increases, a single Pod can become overloaded and the application Service will fail to serve users reliably. This is where scaling helps: increasing the number of Pods allows the application to handle more traffic. This is why, as a developer, you should learn how to scale your Services.

Fortunately, this powerful feature is easy to use. Scaling any of the services described in this post can be done with the command below.

$>  kubectl scale deployment --replicas ${replica_count} ${deployment_name}
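For example, to run three replicas of the Node.js application defined earlier and watch the new Pods come up:
$> kubectl scale deployment --replicas 3 easy-notes
$> kubectl get pods --selector=app=easy-notes-pod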

Conclusion

Deploying with Kubernetes is more complex than what we did with Docker Compose at the beginning of this series. However, it is more robust, flexible, and secure. The full implementation created in this post can be found in the accompanying repository.