
Postgres on Kubernetes: Using AWS EBS as a Volume for Data Persistence – Part II

It is recommended that you go through the first part of this Kubernetes tutorial: Postgres on Kubernetes: Using AWS EBS as a Volume for Data Persistence – Part I

Using an AWS EBS Volume as Persistent Storage for the Postgres Container in Kubernetes

Let’s go ahead and create an EBS volume in AWS; we will use this volume to store the Postgres data. If you are running Kubernetes on AWS or Google Cloud, using cloud-provided volumes is a great and easy way to ensure data persistence and application statefulness.

The process for using AWS EBS is pretty simple: first, we create an EBS volume from the AWS dashboard. Once it’s created, we note down the EBS Volume ID and use it in our Persistent Volume (PV) and Persistent Volume Claim (PVC).

Log into your AWS account, make sure you are in the right AWS region, create an EBS volume, and note down the Volume ID.
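If you prefer the command line, the volume can also be created with the AWS CLI. The following is just a sketch: the availability zone here is an example and must match the zone where your worker nodes run.

aws ec2 create-volume --availability-zone us-east-1a --size 3 --volume-type gp2

The command prints the new Volume ID (a vol-… identifier), which we will reference below.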

Here is what our Persistent Volume YAML looks like now.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: "pg-pv-volume"
spec:
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain  # Recycle is not supported for EBS volumes
  awsElasticBlockStore:
    volumeID: vol-0bef810b29fd11e8d
    fsType: ext4
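We can create the PV right away and check that it is registered; pg-pv.yaml is simply the file name we assume this manifest is saved as.

kubectl apply -f pg-pv.yaml
kubectl get pv

The new volume should be listed with the status Available until a claim binds to it.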
And here is what our Persistent Volume Claim file looks like in this case:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pg-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
  volumeName: pg-pv-volume
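As before, apply the claim and verify that it binds to the volume (pg-claim.yaml is our assumed file name for this manifest):

kubectl apply -f pg-claim.yaml
kubectl get pvc

The pg-claim entry should show the status Bound, with pg-pv-volume as its volume.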
That’s it: using our deployment YAML, we will be able to create a Postgres Pod that uses the AWS EBS volume for data storage and persistence.
Note:
We haven’t exposed our Postgres deployment as a Service. The reason is to keep the database as locked down as possible, and we will only be accessing it from within a single Pod anyway.

If we wish to access this newly deployed Postgres from external locations or from different Pods, then we will have to create a Service of type NodePort or LoadBalancer (on cloud providers like AWS and Google Cloud, the latter will automatically spin up a load balancer).
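For reference, a minimal NodePort Service for this deployment could look like the sketch below; the Service name and selector label are assumptions based on the Deployment we use later in this tutorial.

apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  type: NodePort
  selector:
    app: postgres
  ports:
    - port: 5432
      targetPort: 5432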

Connecting Database Container Using A Sample Application

Let’s summarize what we have achieved so far. We have set up a Postgres container with persistent storage using Persistent Volumes, both on a standalone Kubernetes cluster and on an AWS-managed cluster.
Now we need to see how we can use a sample application (developed in Go) to connect to this Postgres container. The following is a simple program that establishes a connection to the Postgres instance; if it succeeds, it displays a success message, otherwise it prints the error.
In the following file, you can replace the database host, port, username, and password (assuming you used different credentials while creating the ConfigMap) with your own. Here is what our Go script, named main.go, looks like:
package main

import (
        "database/sql"
        "fmt"

        _ "github.com/lib/pq" // Postgres driver, registered as a side effect of the import
)

const (
        host   = "127.0.0.1"
        port   = 5432
        u      = "root"
        p      = "root123"
        dbname = "postgresdb"
)

func main() {
        // Build the connection string from the constants above.
        connStr := fmt.Sprintf("host=%s port=%d user=%s "+
                "password=%s dbname=%s sslmode=disable",
                host, port, u, p, dbname)

        // Open a database handle; this validates the arguments
        // but does not connect to the server yet.
        db, err := sql.Open("postgres", connStr)
        if err != nil {
                panic(err)
        }
        defer db.Close()

        // Ping establishes a connection and verifies it is alive.
        if err = db.Ping(); err != nil {
                panic(err)
        }
        fmt.Println("You are successfully connected!")
}
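Before containerizing the app, you can optionally test the connection string locally by forwarding the Postgres port from the cluster to your machine (this assumes the Deployment is named postgres, as in our manifests, and that you have fetched the driver with go get github.com/lib/pq):

kubectl port-forward deployment/postgres 5432:5432
go run main.go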
We will need to dockerize this Go application and run it on our Kubernetes cluster. To dockerize it, we need a Dockerfile that contains the Go runtime, a working directory, and the above script copied into the container image.

Our Dockerfile will build the code and execute it using ENTRYPOINT or CMD.

FROM golang
RUN mkdir -p /app
WORKDIR /app
ADD . /app
# Fetch the Postgres driver before building (works in GOPATH mode;
# with Go modules, use go mod init / go mod tidy instead)
RUN go get github.com/lib/pq
RUN go build ./main.go
CMD ["./main"]
Build the Docker image for our Go app using the following command:
docker build -t go-app .
This command builds the Docker image of our application and tags it as “go-app”.

Optionally, we can push this Docker image to a private registry. Now the image is ready to be deployed on the Kubernetes cluster.
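If you do use a registry, the image must be tagged with the registry path before pushing; REGISTRY_HOST below is a placeholder for your own registry address:

docker tag go-app REGISTRY_HOST/go-app:latest
docker push REGISTRY_HOST/go-app:latest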

We will run this container in the same Pod where Postgres is running. We need to be careful about the order in which the containers are listed in our Deployment file.

The Postgres container should spin up first, and the app container should follow. Once the Go app container is up, it will make a connection to Postgres and report the result. Please note that we used “127.0.0.1” (localhost) as the connection hostname, because both containers (the Go application and Postgres) run in the same Pod. In other words, they can discover each other on the same host (localhost) using the specified ports.
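Keep in mind that Kubernetes starts the containers in the order they are listed, but it does not wait for Postgres to be ready to accept connections before starting the app container. One way to make startup more robust is a small retry loop in the Go code. The following is a sketch of a hypothetical helper (not part of the original main.go); it reuses the connection string built in main and additionally needs the time package imported:

// connectWithRetry pings the database until it responds or we run out of attempts.
func connectWithRetry(connStr string, attempts int) (*sql.DB, error) {
        db, err := sql.Open("postgres", connStr)
        if err != nil {
                return nil, err
        }
        for i := 0; i < attempts; i++ {
                // Ping succeeds once Postgres is up and accepting connections.
                if err = db.Ping(); err == nil {
                        return db, nil
                }
                time.Sleep(3 * time.Second)
        }
        db.Close()
        return nil, err
}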

If we run this Go container in another Pod, then we will need to expose the Postgres deployment using a NodePort Service. Here is what our new Deployment file looks like now.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:10.4
        imagePullPolicy: "IfNotPresent"
        ports:
          - containerPort: 5432
        envFrom:
          - configMapRef:
              name: pg-config
        volumeMounts:
          - mountPath: /var/lib/postgresql/data
            name: postgredb
      - name: go-app
        image: go-app
        imagePullPolicy: "IfNotPresent"  # use the locally built image instead of pulling
        ports:
          - containerPort: 80
      volumes:
        - name: postgredb
          persistentVolumeClaim:
            claimName: pg-claim
We have set the number of replicas to 1 for now, but for higher availability and better performance we could spin up multiple replicas of the same containers.

Apply the changes using the kubectl command.

kubectl apply -f pg-deployment.yaml
Now, if you check the Pod, you will see that it has two running containers.
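You can verify this with:

kubectl get pods

The READY column for the postgres Pod should show 2/2.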

If we check the logs of our Go container, they will show the successful database connection. This is the command we will use to see the logs of our Go application container:

kubectl logs postgres-6b4b8d98d8-bl852 -c go-app
Replace “postgres-6b4b8d98d8-bl852” with the pod name in your case.

Note that “-c” is used to specify the container name when there are multiple containers in the same Pod.

Notes

Our Go app is interacting with the Postgres container and is accessible within the cluster only. If we want this application to be externally accessible from the Internet, then we will have to expose our Go application using a Kubernetes Service. You can write a YAML file to expose this deployment, or simply use the following command:
kubectl expose deployment postgres --type=NodePort --name=go-app
You can view the service status as well; the following command will list all running services and their associated NodePorts and IPs. If you are using Kubernetes on AWS or Google Cloud, then you can also use “LoadBalancer” with “--type”, and you’ll see a fully working load balancer configured on your cloud. It is also possible to use an Ingress, such as the Nginx Ingress, to expose your application in a secure way.
kubectl get svc
That’s it: our Kubernetes Deployment and Service have been configured and are now available.

Conclusion

We hope that you found this Kubernetes tutorial useful. We covered the difference between stateful and stateless applications, learned how to configure ConfigMaps and use Persistent Volumes for database Deployments, and experimented with connecting our code to the database container. Kubernetes is a powerful tool, and there are many things to learn in order to master it. We will explore other concepts, and you will discover new Kubernetes tutorials each week.