Developing, deploying and testing Flask applications on Kubernetes – Part I
In this step-by-step blog post, which illustrates how to integrate Python Flask applications with Docker and run them in a Kubernetes cluster, we will cover the following topics:
- Dockerizing an existing Flask application.
- Deploying it using docker-compose.
- Deploying to a Kubernetes cluster.
- Inspecting, testing, and scaling the application.
Before proceeding, make sure that your environment satisfies these requirements. Start by installing the following dependencies on your machine.
The Flask Application
The application that we will use throughout this post is a simple Python application that acts as a wrapper around the OpenWeatherMap weather API. The application exposes the following HTTP endpoints:
- “/”: The root route; it responds with a static string and can be used as a health check for the application.
- “/&lt;city&gt;/&lt;country&gt;/”: The API endpoint used to retrieve the weather information for a given city. Both the city and the country code must be provided.
The complete application source code is shown below. The application simply processes incoming requests, forwards them to the https://samples.openweathermap.org weather API endpoint, and responds with the data retrieved from that API.
```python
from flask import Flask
import requests

app = Flask(__name__)

API_KEY = "b6907d289e10d714a6e88b30761fae22"

@app.route('/')
def index():
    return 'App Works!'

@app.route('/<city>/<country>/')
def weather_by_city(country, city):
    url = 'https://samples.openweathermap.org/data/2.5/weather'
    params = dict(
        q=city + "," + country,
        appid=API_KEY,
    )
    response = requests.get(url=url, params=params)
    data = response.json()
    return data

if __name__ == '__main__':
    app.run(host="0.0.0.0", port=5000)
```
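To make the forwarding logic concrete, here is a small, self-contained sketch of how the upstream OpenWeatherMap URL is assembled from the route parameters. The `build_weather_url` helper is hypothetical, introduced only for illustration; in the app itself, `requests.get()` encodes the query string from the `params` dictionary, producing the same URL.

```python
from urllib.parse import urlencode

# Same constants as in the application above.
BASE_URL = 'https://samples.openweathermap.org/data/2.5/weather'
API_KEY = "b6907d289e10d714a6e88b30761fae22"

def build_weather_url(city, country):
    """Return the full upstream URL for a city/country pair.

    Hypothetical helper: the app lets requests.get() do this encoding,
    but the resulting URL is identical.
    """
    query = urlencode({'q': city + ',' + country, 'appid': API_KEY})
    return BASE_URL + '?' + query

print(build_weather_url('london', 'uk'))
```

Note that `urlencode` percent-encodes the comma in the `q` parameter; the API accepts either form.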
Dockerizing our Flask Application
Dockerizing Python applications is a straightforward task. To do this, we need to add the following files to the project:
- The requirements file: This file lists all the dependencies needed by the application; for instance, one of the dependencies of our application is Flask. It will be used to install all the defined packages at Docker build time. Below is the content of the requirements file:
```
certifi==2019.9.11
chardet==3.0.4
Click==7.0
Flask==1.1.1
idna==2.8
itsdangerous==1.1.0
Jinja2==2.10.3
MarkupSafe==1.1.1
requests==2.22.0
urllib3==1.25.7
Werkzeug==0.16.0
```
- The Dockerfile: This file contains all the Docker instructions needed to build the Docker image of our application. As shown in the file below, Docker will perform the following actions to build the image:
- Use python:3 as a base image for our application.
- Create the working directory inside the image and copy the requirements file into it. (Copying the requirements file separately helps optimize the Docker build time, since the dependency layer is cached as long as the file does not change.)
- Install all the dependencies using pip.
- Copy the rest of the application files.
- Expose port 5000 and set the default command (CMD) for the image.
```dockerfile
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /app
WORKDIR /app
COPY requirements.txt /app
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
COPY . /app
EXPOSE 5000
CMD [ "python", "app.py" ]
```
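Since the Dockerfile copies the whole project directory into the image (`COPY . /app`), it is also good practice to add a `.dockerignore` file so that local artifacts do not end up in the image. A minimal example follows; the exact entries depend on your project:

```
__pycache__/
*.pyc
.git
.dockerignore
Dockerfile
docker-compose.yml
```

Keeping these files out of the build context also makes the `COPY` layer smaller and more cache-friendly.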
We can now build the Docker image of our application using the below command:
$> docker build -t weather:v1.0 .
Running the application
We can run the application locally using Docker CLI as shown below:
$> docker run -dit --rm -p 5000:5000 --name weather weather:v1.0
Or we can use a docker-compose file to manage the build and deployment of the application in a local development environment. For instance, the below Compose file will take care of building the Docker image for the application and deploying it.
```yaml
version: '3.6'
services:
  weather:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - .:/app
```
Running the application using docker-compose can be done using:
$> docker-compose up
Once the application is running, a curl command can be used to retrieve the weather data for London, for instance:
$> curl http://0.0.0.0:5000/london/uk/
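Since our service returns the OpenWeatherMap payload unchanged, a client can work with the well-known response fields directly. As a rough sketch, the payload below is a placeholder that mimics the shape of the sample API's response, and `summarize` is a hypothetical helper:

```python
import json

# Placeholder payload mimicking the shape of an OpenWeatherMap response;
# real values come from the live endpoint.
payload = json.loads('''
{
  "name": "London",
  "main": {"temp": 280.32, "humidity": 81},
  "weather": [{"main": "Drizzle", "description": "light intensity drizzle"}]
}
''')

def summarize(data):
    """Return a one-line, human-readable summary of the weather payload."""
    temp_c = data["main"]["temp"] - 273.15  # OpenWeatherMap reports Kelvin
    return f"{data['name']}: {data['weather'][0]['description']}, {temp_c:.1f}°C"

print(summarize(payload))
```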
Running services directly with the docker command, or even with docker-compose, is not recommended for production, because neither is a production-ready orchestration tool. They will neither ensure that your application runs in a highly available mode nor help you scale it.
To better illustrate the last point: Compose is limited to a single Docker host and does not support running services across a cluster of machines.
As a result, there is a need for other solutions that provide such features. One of the best-known and most widely adopted is Kubernetes, an open-source project for automating the deployment, scaling, and management of containerized applications. It is used by companies and individuals around the world for the following reasons:
- It is free: The project is open-source and maintained by the CNCF.
- It is adopted by big companies such as Google, AWS, Microsoft, and many others.
- There are many cloud systems that offer Kubernetes managed services such as AWS, Google Cloud, and DigitalOcean.
- There are many plugins and tools developed by the Kubernetes community to make managing Kubernetes easier and more productive.
Creating a Kubernetes Cluster for your Development Environment
Kubernetes is a distributed system that integrates several components and binaries. This makes it challenging to build production clusters; at the same time, running a full Kubernetes cluster in a development environment would consume most of a machine's resources, and it would be difficult for developers to maintain such a local cluster.
This is why there is a real need to run Kubernetes locally in an easy and smooth way: a tool that lets developers keep focusing on development rather than on maintaining clusters.
There are several options for achieving this; below are the top three:
- Docker for Mac: If you have a MacBook, you can install Docker for Mac and enable Kubernetes from the application's settings, as shown in the image below. You will then have a Kubernetes cluster deployed locally.
- MicroK8s: A single-package, fully conformant, lightweight Kubernetes that works on 42 flavours of Linux. It can be used to run Kubernetes locally on Linux systems, including a Raspberry Pi. MicroK8s can be installed on CentOS using the snap command line, as shown in the below snippet:
```
$> sudo yum install epel-release
$> sudo yum install snapd
$> sudo systemctl enable --now snapd.socket
$> sudo ln -s /var/lib/snapd/snap /snap
$> sudo snap install microk8s --classic
```
In case you have a different Linux distribution, you can find the instructions on the following page.
- Minikube: Implements a local Kubernetes cluster on macOS, Linux, and Windows and supports most Kubernetes features. Depending on your operating system, you can find the installation commands on this page. For instance, to install Minikube and start it on macOS, you can use the below commands:
```
$> brew install minikube
$> minikube start
```
Once Minikube, MicroK8s, or Docker for Mac is installed and running, you can start using the kubectl command line to interact with the Kubernetes cluster.
The above tools make it easy to bootstrap a development environment and test your Kubernetes Deployments locally. However, they do not implement every feature supported by Kubernetes, and not all of them are designed to support multi-node clusters.
Minikube, MicroK8s, and Docker for Mac are great tools for local development. However, for testing and staging, highly available clusters are needed to simulate and test the application in a production-like environment. On the other hand, running Kubernetes clusters 24/7 for a testing environment can be very expensive, so you should make sure to run your cluster only when needed, shut it down when it is no longer required, and recreate it when it is needed again.
Using Cloudplex, creating, running, terminating, and recreating clusters is as easy as pie. You can deploy your first cluster for free. In a few minutes your cluster is up and running; you can save its configuration, shut it down, and recreate it when needed.
In Part II of this series, we are going to show how to deploy our application to a Kubernetes testing cluster. We will create the Kubernetes Deployment for our Flask application and use Traefik to manage our Ingress and expose the application to external traffic.