Microservices Orchestration with Kubernetes

In our last post, we discussed the recent trend towards microservices, and some of the complications that can arise as part of a microservices-based architecture. Over the next few weeks, we’ll be diving deeper into that problem. We’ll explore the tradeoffs inherent in different design choices, and what emerging technologies can do to alleviate those issues.

Before diving in, however, we thought it might make sense to take a moment to look at the state of the art in microservices today. We’ll start by going over Kubernetes, the leading container management and service orchestration framework. Kubernetes and microservices are almost synonymous these days, so it’s good to have a thorough understanding of how they fit together.

Kubernetes

Much like microservices themselves, containers have been gaining ground in recent years as an indispensable part of the modern scalable architecture. As with microservices, containers have caught on because they provide real benefits to the development process: they're dependable, they scale easily, and they provide a clean abstraction that isolates the core components of your web services.

In particular, one container-management technology has taken off far above the rest. That's right: the next stop on our microservices journey is Kubernetes, which together with Docker forms the workhorse of the modern microservices setup. Kubernetes is, simply put, the gold standard for modern container-based DevOps, and microservices and containers go hand-in-hand.

As containerization technology was taking off, several technologies competed to manage large-scale Docker deployments and container-based services. You may remember some of those alternatives: Docker Swarm, Apache Mesos, OpenStack Magnum, and others. Since then, Kubernetes has all but eliminated the competition. It's the container orchestration solution offered natively on AWS, Azure, and Google Cloud, as well as by private cloud vendors like Red Hat and Pivotal.

Kubernetes was able to gain so much ground so quickly because it separated configuration from orchestration. This level of sophistication should come as no surprise: Kubernetes emerged from an internal project at Google called Borg, and it reflects decades of combined experience with distributed systems. With Kubernetes, you specify what you want the service to look like: how many instances, what kind of redundancy, where the services are located. The tool then calculates what changes are needed to get from the status quo to that desired state. Think of it as analogous to SQL, where you don't tell the database how to add or transform each individual row; you specify how you want the data to look, and the database figures out how to get there. Kubernetes works the same way.
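
As a taste of that declarative style, here's a trimmed, deliberately incomplete fragment of the kind of manifest we'll build in full later in this post; the names are illustrative:

# Desired state, not a sequence of steps: Kubernetes works out how to
# move the cluster from whatever exists now to what's declared here.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-server
spec:
  replicas: 3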

Kubernetes Features

What Kubernetes brings to the table is the ability to treat containers as the basis of a service definition. Kubernetes starts with just containers: even if you're just looking to deploy a container without getting into the world of microservices, Kubernetes has a lot to offer in terms of management and deployment. You install the Kubernetes software on the servers in your cluster, and the control plane schedules and deploys your containers automatically.
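
For instance, even a one-off container can be handed to the cluster with a single command (using the public nginx image as a stand-in):

# Hand a single container to the cluster; Kubernetes picks a node and runs it
kubectl create deployment hello-nginx --image=nginx

# Confirm the deployment and its pod came up
kubectl get deployments
kubectl get pods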

Beyond individual containers, Kubernetes operates on what it calls pods. A pod is the smallest deployable unit, composed of one or more containers. A pod can contain anything from a single container running alone to a full-fledged multi-container service, such as a database container combined with a key-value store and an HTTP server, all wrapped up in one. Pods are the basic building blocks of Kubernetes.
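
As a rough sketch, a pod manifest along these lines (the names and images are just illustrative) bundles an application container with a sidecar cache into one schedulable unit:

apiVersion: v1
kind: Pod
metadata:
  name: web-with-cache
spec:
  containers:
    # Main application container
    - name: web
      image: nginx
      ports:
        - containerPort: 80
    # Sidecar key-value store, reachable from "web" via localhost
    - name: cache
      image: redis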

The last element is the service. In Kubernetes, a service is like a recipe for combining pods into an application. While pods are concrete deployments with a lifespan, a service is more abstract: it describes a stable way to reach an individual component, like a backend or a database, regardless of which pods happen to be running it.

Tying all of this together is the Kubernetes command-line tool, kubectl. While the abstractions Kubernetes provides are great, the command-line tool is what makes them practical: it lets you describe complex changes to your architecture in a handful of commands. All told, kubectl offers almost 50 subcommands for handling everything that comes up in the course of modifying a container-based microservices deployment (and, of course, more than a few ways to shoot yourself in the foot).
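
A few representative subcommands give a feel for that surface area; the <...> placeholders stand in for your own resource names:

# Inspect what's running
kubectl get pods
kubectl describe pod <pod-name>

# Stream logs from a container
kubectl logs -f <pod-name>

# Apply a declarative manifest
kubectl apply -f service.yaml

# Roll back a bad deployment
kubectl rollout undo deployment <deployment-name>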

Getting Our Hands Dirty

While a high-level description is helpful, nothing beats actually deploying a Kubernetes service to develop an understanding. We're not doing anything fancy here, just deploying a simple "Hello World" service, but it should be instructive.

We've written a simple server in Go that responds to HTTP requests with a "Hello World". The code is pretty straightforward:

package main

import (
   "fmt"
   "log"
   "net/http"
   "os"
)

// handler replies to every request with a version-tagged greeting.
func handler(w http.ResponseWriter, r *http.Request) {
   log.Print("Hello world received a request.")
   version := os.Getenv("VERSION")
   if version == "" {
      version = "v1"
   }
   log.Println(version)
   fmt.Fprintf(w, "Hello world %s\n", version)
}

func main() {
   log.Print("Hello world sample started.")
   http.HandleFunc("/api/hello", handler)

   // Listen on the port given by the environment, defaulting to 8080.
   port := os.Getenv("PORT")
   if port == "" {
      port = "8080"
   }
   log.Fatal(http.ListenAndServe(fmt.Sprintf(":%s", port), nil))
}
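
Before containerizing it, we can sanity-check the server locally. Assuming the source file lives at main/helloworld-v1.go, the path the Dockerfile below expects, something like this should do:

# Run the server on the default port
go run main/helloworld-v1.go

# In a second terminal, hit the endpoint
curl http://localhost:8080/api/hello
# -> Hello world v1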

The first step is to build it into a Docker image. To do that, we write the following Dockerfile, starting from the official Go base image.

# Use the official Golang image to create a build artifact.
# https://hub.docker.com/_/golang
FROM golang AS builder

# Copy local code to the container image.
WORKDIR /go/src/github.com/haseebh/hello-world
COPY . .

# Build a statically linked binary so it runs on the minimal alpine image.
RUN CGO_ENABLED=0 go build -o helloworld-v1 main/helloworld-v1.go

# Use a Docker multi-stage build to create a lean production image.
# https://docs.docker.com/develop/develop-images/multistage-build/#use-multi-stage-builds
FROM alpine
COPY --from=builder /go/src/github.com/haseebh/hello-world/helloworld-v1 /helloworld-v1
ENV PORT 8080

# Run the web service on container startup.
CMD ["/helloworld-v1"]

Now we just need to build it. Pick an image tag (for example, a repository you control on Docker Hub), and run the following two Docker commands to build and push the image:

# Build the container image on your local machine
docker build -t <image-tag> .

# Push the image to your Docker registry
docker push <image-tag>
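
Optionally, you can smoke-test the image locally before pushing it, assuming port 8080 is free on your machine:

# Run the image locally, mapping container port 8080 to the host
docker run --rm -p 8080:8080 <image-tag>

# In another terminal
curl http://localhost:8080/api/hello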

One more step before we can deploy. While we've defined what will go into our pod, we haven't defined our service. Let's write a simple definition, containing both the Service and the Deployment behind it, and save it in a hello-service.yaml file:


apiVersion: v1
kind: Service
metadata:
  name: helloworld-v1
spec:
  ports:
    - port: 80
      protocol: TCP
      targetPort: 8080
  selector:
    app: helloworld-v1
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld-v1
  labels:
    app: helloworld-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: helloworld-v1
  template:
    metadata:
      labels:
        app: helloworld-v1
    spec:
      containers:
        - name: helloworld-kubernetes
          # Replace <image-tag> with the image you pushed earlier
          image: <image-tag>
          ports:
            - containerPort: 8080

And now we've got everything we need to get going: our image has been built, and we've defined a service based on it. Now we can finally deploy it to our cluster with the kubectl command-line tool:

kubectl apply -f hello-service.yaml

To get the service's load balancer IP, run the following command:

kubectl get svc helloworld-v1 -o wide

Note down the external IP. Now, when we visit the load balancer address at the /api/hello path, we can see the deployed service. It's not much, but that "Hello World" shows us that this all worked!
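
Concretely, substituting the address from kubectl's EXTERNAL-IP column:

# Replace <external-ip> with the value kubectl reported
curl http://<external-ip>/api/hello
# -> Hello world v1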

Key Components

Building this service has allowed us to demonstrate most of the main Kubernetes components. First, we wrote a Dockerfile to package the code for a service into an image. To actually create a Service in Kubernetes, though, we needed to define it in YAML. Our definition takes the image we built and provides the key information about running it: how many replicas to run, which ports to expose, and how to label and select the pods.

After that, we deployed the service on a pod. In the Kubernetes model, pods are closely tied to containers. Many deployments, like ours, use a single container per pod. Strictly speaking, Kubernetes doesn't manage containers, it manages pods; sometimes those have a one-to-one relationship with containers, other times a pod holds several.

Finally, we saw the principle of orchestration in action. After defining how we wanted our API deployed, we simply pushed that config file out to Kubernetes with kubectl and it took care of the rest: we specified what we wanted the architecture to become, and Kubernetes calculated the changes needed to get there. As we look at more complex examples later, with multiple versions and complex deployments, we'll see the power of this simple idea more clearly.
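
For a small taste of that power, we can declare a new replica count and let Kubernetes reconcile the cluster to match; a hypothetical follow-up to the deployment above:

# Scale the deployment from 1 replica to 3; Kubernetes handles the rest
kubectl scale deployment helloworld-v1 --replicas=3

# Watch the new pods come up
kubectl get pods -l app=helloworld-v1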

Going Deeper

Deploying a simple service is just the beginning. Kubernetes works naturally with microservices, and it's a good way to deploy both basic and more complex microservices architectures without too much hassle. But to really take advantage of the scalability of microservices, you'll need a little more.

In our next article, we'll take a look at Istio. With a microservices approach, we were able to take a monolithic app and break it up into multiple services. We saw in our first article that this approach offers more developer agility and a better abstraction for working with complex systems, and here we saw how microservices can be deployed in practice using Kubernetes. Next week, we'll dig into some emerging concepts in the microservices realm, like the service mesh, to show you what these technologies are really capable of.

Asad Faizi

Founder CEO

CloudPlex.io, Inc

asad@cloudplex.io