Microservices Mesh – Part I


If you’ve been following the trends in distributed and cloud architectures over the past few years, you’ve likely heard a lot about microservices. They’ve steadily taken over the conversation around enterprise cloud deployments, and more and more companies are announcing their move to microservices in blog posts and press releases.

But while the move to microservices is justly celebrated, many companies neglect to tell you what’s going on behind the scenes. Done properly, microservices solve many of the headaches of a monolithic architecture: they’re easy to iterate on, they map cleanly onto separate teams, and they break code down into manageable components. However, many of the celebratory posts leave out the follow-up work and supporting components required to reap those benefits.

Today, we’re taking a look behind the curtain. We’ll focus on one particular technology, the microservices mesh, which smooths the rough edges of microservices and makes it easier for developers to get working in a microservices framework. By the end, you should have a fuller picture of how a mesh can fit into your microservices deployment.

What Is Microservices Mesh?

A microservices mesh (also called a service mesh) is an abstraction layer that defines the interactions between the different microservices in your application. A mesh uses the networking between containers to control how the components of your application interact. That may sound abstract, but it’s actually quite a practical concept: containers interact via the network, so changing the network topology redefines how containers can interact.

Because the networking between components is fundamental to microservices, manipulating that networking via a mesh lets you accomplish some very useful things. For example, when you deploy new versions of a component, you can instantly shift traffic away from the old instances and onto the new ones without redeploying anything else. Or, if you’re having difficulty scaling, you can use the mesh to point different services at different load balancers and beef up the number of containers for individual components of your application.

When starting out with microservices, one common piece of advice is to treat the different components of your application as APIs from completely different providers. A microservices mesh lets you implement this at the network level by defining exactly which services are available at which network locations. Instead of deploying a configuration change whenever services move or are redefined, you make a networking change.

What Does It Get You?

Microservices are praised for their ability to scale and to break a large app down into digestible components. Monolithic apps, by contrast, shine where centralization is important. Logging is easier in a monolith because everything runs in one place. Versioning is easier because you’re overwriting a single instance. When developers switch from monoliths to microservices, they’re frequently lost: there’s no one central place to log to, or to identify which version of a service they’re targeting.

The key insight of the mesh idea is that, in many cases, there can be a central source of truth for some of that information: the networking layer. Consider the case of deploying a new version of a component, mentioned above. Rather than destroying all containers hosting the old version and launching new containers with the new version (and repeating this process for every component that depends on the updated service), a mesh-based application gives you control over which containers other containers can see via the network.

That means if you want to deploy a new version, you can just point to new containers using DNS if you’d like. Or point to a new load balancer. Or change the containers that the existing load balancer points to.

By focusing on the network behind the components of your app (the mesh in which they operate) you can retain some of the centralization that made management so much easier in the monolithic world. Want to gain more insight into how traffic flows through your app? Add some monitoring to the network between components. Need to beef up security? Add stricter encryption and enforce HTTPS between components. Mesh makes all this possible.

Want To Learn More?

Over the next couple of weeks, we’ll be doing a deeper dive into how microservices mesh operates and what it’s used for in practice for some microservices deployments. We’ll focus on the real benefits that mesh brings to microservices: control, manageability, and insight into what’s going on in a larger application.

Asad Faizi

Founder CEO

CloudPlex.io, Inc

asad@cloudplex.io

    

Microservices Mesh – Part II – Istio Basics


Setting up a basic microservice in Kubernetes is deceptively simple. In our last article, we showed how easy it is to get off the ground using containers: we built a simple Docker image, deployed it with Kubernetes, and queried our app. That was relatively painless! But real-world cloud architectures are usually more complicated, involving tens or hundreds of services, with databases, authentication, and other production concerns.

Managing all those services can be a real hassle. In this article, we’ll introduce Istio, which is the next level up in terms of facilitating and managing large-scale cloud deployments. Earlier, we talked about the mesh architecture for microservices, which is what Istio enables.

Mesh allows you to reap the scalability benefits of a microservices architecture while also enabling the centralization-based advantages common to monoliths, like logging and versioning. For more on mesh, see our previous discussion outlining the basics of mesh, and the advantages it offers.

In this post, we’ll take a tour of what Istio has to offer in implementing the cloud mesh architectural pattern. We’ll install the control plane and see what Istio offers in terms of benefits. Finally, we’ll take the Kubernetes service we defined last time, add a sidecar proxy to it, and link it to the control plane.

Digging In: Data Plane and the Sidecar

Istio defines two key architectural terms: the data plane and the control plane. The data plane refers to the data moving through your application, passed to the different service instances and handled by the services themselves; it is materialized mainly through the sidecar proxy. The control plane determines how the services are configured and is where telemetry and information about the services are held. The key elements of the control plane are Pilot and Mixer. Let’s take a look at each in turn.

The sidecar proxy runs alongside the pod that defines your service in Kubernetes. As the name suggests, it’s added alongside the main service components, and it operates on traffic that is directed at that service. This design allows you to add a sidecar proxy to your existing service definition within a pod: you simply add the lines defining the sidecar into the service and it gets to work.
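In current Istio releases, one common way to get the sidecar added is automatic injection via a namespace label. The sketch below assumes a recent Istio release; the namespace name is a placeholder:

```yaml
# Hedged sketch: enable automatic sidecar injection for a namespace.
# The namespace name "demo" is hypothetical.
apiVersion: v1
kind: Namespace
metadata:
  name: demo
  labels:
    istio-injection: enabled   # Istio's injection webhook watches this label
```

With the label in place, any pod created in that namespace gets the proxy container added automatically; alternatively, istioctl can inject the sidecar into a manifest by hand, as we do later in this post.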


What you get in return is a laundry list of benefits that form the core of Istio’s cloud mesh offering. The sidecar proxy intercepts traffic coming into the service and lets you route it intelligently. That could mean something as simple as load balancing and TLS termination to speed things up, or something more involved like managing the versioning and staged rollout of a new version of the service while collecting metrics on usage. The sidecar lets you add these features to an existing microservices architecture without redesigning the entire system.
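For example, a staged rollout like the one described above can be sketched as a weighted route. This assumes a hypothetical v2 Deployment exists alongside v1, labeled with version: v2:

```yaml
# Hedged sketch: split traffic 90/10 between two versions of a service.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: helloworld
spec:
  host: helloworld
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: helloworld
spec:
  hosts:
  - helloworld
  http:
  - route:
    - destination:
        host: helloworld
        subset: v1
      weight: 90   # most traffic stays on the current version
    - destination:
        host: helloworld
        subset: v2
      weight: 10   # a small slice goes to the new version
```

Shifting the weights over time rolls the new version out gradually, with no changes to the services themselves.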

Once you grasp the goal of the sidecar, a lot of the power of Istio, and of cloud mesh in general, comes into focus. Because they act as a unified bridge between service pods, the sidecars collectively see all the traffic that flows through your application. That means if you want to harden your security, the sidecars offer a single place to add authentication and HTTPS between services, log events to check for anomalies, and add traffic controls and gatekeeping.
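As one hedged example of that hardening, newer Istio releases let you require mutual TLS for all service-to-service traffic with a single mesh-wide policy (older releases used a different resource for this, so treat the exact API version as illustrative):

```yaml
# Hedged sketch: require mutual TLS across the mesh.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # applied to the root namespace, it is mesh-wide
spec:
  mtls:
    mode: STRICT   # sidecars reject plaintext traffic between services
```

The application containers never see any of this; encryption happens entirely between the proxies.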

On top of that, because the sidecars act as the central communication endpoints between services, they allow you to build resilience into your application and add an extra level of scalability. One common concern with microservices is that server pods are all isolated, and requests may get handled slowly or dropped if there’s an issue with a microservice. With sidecars, you can add timeouts, smarter load balancing, and additional monitoring all in one place.
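Timeouts and retries, for instance, are plain routing fields on a VirtualService. The values below are illustrative, not recommendations:

```yaml
# Hedged sketch: add a request deadline and retries in front of a service.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: helloworld
spec:
  hosts:
  - helloworld
  http:
  - timeout: 5s          # overall deadline for a request
    retries:
      attempts: 3        # retry failed calls up to three times
      perTryTimeout: 2s  # each attempt gets its own deadline
    route:
    - destination:
        host: helloworld
```

Because the sidecar enforces these rules, a slow or flaky instance degrades gracefully instead of stalling its callers.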

Centralizing: The Control Plane

At the other end of this setup is the control plane. The control plane acts as a controller for the sidecars running in your application, and as a central repository for the information (like logging and version updates) that the services in your mesh can treat as a single source of truth.


When using Istio, the main way you interact with the control plane is through Kubernetes. After installing the Istio packages and definitions, the control plane is accessible through kubectl commands that manipulate the state of your system. For example, when you upgrade a pod to a new version using kubectl, the update begins as a change to the control plane’s configuration, which then propagates to the sidecars.

The easiest way to see this is to use kubectl’s get svc command to list the services in a given namespace. To check which Istio components are running, you can run

kubectl get svc -n istio-system

and see Istio’s core control plane services running in the background. You may recognize some of them, such as Citadel, which manages security for service-to-service traffic.

Install Istio

First things first, let’s take a look at what Istio offers out of the box. We’ll be installing the Istio control plane to manage the basic HTTP API we defined in our previous article. That API service was defined on Kubernetes, and was implemented as a single Kubernetes pod, with the API running within.

To install Istio, follow the steps in the official Istio quick start guide. Start by downloading the latest release from the Istio releases page; Istio is still under active development, and the latest release is the best place to start. Then extract the archive and make sure its binaries are available in your path.

Then, add Istio’s custom resource definitions to your Kubernetes cluster so they’re available to the kubectl command line tool. Apply the .yaml files from the release’s install directory using kubectl apply:

kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml

Then you’ll need to activate the Istio installation by choosing an authentication method. For this demo, I’ll use the default of mutual TLS authentication, which is great for demo projects and new work. For adding a mesh to an existing project, you’ll want to look into your options a little more. For now, run the following command:

kubectl apply -f install/kubernetes/istio-demo-auth.yaml

With that, you should be good to go. Existing pods will need the Istio sidecar injected into them, and new pods will need to be created with sidecar injection enabled.

Deploy helloworld App

We’ll use the helloworld sample application explained in our last post. The configuration below creates one Deployment, one Service, one Gateway, and one VirtualService. Update your configuration file to match the following:

helloworld.yaml


apiVersion: v1
kind: Service
metadata:
  name: helloworld
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: helloworld
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: helloworld
  template:
    metadata:
      labels:
        app: helloworld
        version: v1
    spec:
      containers:
      - name: helloworld-kubernetes
        image: haseebm/helloworld-kubernetes
        ports:
        - containerPort: 8080
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: helloworld-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: helloworld
spec:
  hosts:
  - "*"
  gateways:
  - helloworld-gateway
  http:
  - route:
    - destination:
        host: helloworld
        port:
          number: 80

Manually Injecting Istio Proxy Sidecar

Istio uses the sidecar pattern, so an Istio sidecar container sits alongside the helloworld app container in the same Pod. Inject it manually with istioctl and apply the result:


$ kubectl apply -f <(istioctl kube-inject -f helloworld.yaml)
service/helloworld created
deployment.extensions/helloworld-v1 created
gateway.networking.istio.io/helloworld-gateway created
virtualservice.networking.istio.io/helloworld created

Confirm Pods and Service running


$ kubectl get pod,svc | grep helloworld
pod/helloworld-v1-1cbca3f8d5-achr2 2/2 Running
service/helloworld ClusterIP 10.160.58.61

Now, send a request through the ingress gateway to check traffic for helloworld (use your own load balancer’s hostname):

$ curl a2******.ap-southeast-1.elb.amazonaws.com/api/hello
Hello world v1

Next Steps

Istio is a great way to get started in the wide world of cloud mesh technologies, and intelligent microservices management more generally. As we’ve seen in the past few articles, properly managed microservices have a lot to offer in terms of technical advantage and scalability, but making the best use of available technology is crucial to reaping those benefits.

In our next few articles, we’ll look at some more applications of Istio and cloud mesh to enhance the security and manageability of our sample architecture. In our next post, we’ll discuss how to manage deploys and version updates in Istio to seamlessly push updates to your code without interruptions or broken deployments.

Asad Faizi

Founder CEO

CloudPlex.io, Inc

asad@cloudplex.io