In a previous blog article we explored the capabilities of Docker and created containers for a Django and React web application in a development environment. While Docker is an excellent tool for packaging containerized applications, it might not be enough for deploying applications to production. Managing multiple containers across multiple servers, load balancing, and scaling the application are out of Docker's scope and can quickly become complex, time-consuming, and error-prone tasks. This is where container orchestration comes into play.
Container orchestration is a crucial tool for managing and deploying containerized applications at scale. With the increasing adoption of containerization, orchestration has become critical to ensure efficient management of infrastructure. Container orchestration frameworks enable development and operations teams to automate the deployment and management of containerized applications. This technology also promotes high availability, resilience, and fault tolerance, making it a valuable tool for implementing Continuous Integration/Continuous Deployment (CI/CD).
While there are many container orchestration tools available, Kubernetes has emerged as the most popular framework. Among the many reasons for its popularity are its scalability, flexibility, and active open-source community. Kubernetes offers a comprehensive solution for orchestrating small to large-scale applications, making it a viable choice for diverse enterprises. Beyond that, Kubernetes provides built-in security features and robust monitoring capabilities, making it secure and reliable.
Since Kubernetes is designed to run applications in clusters, developers can benefit from running a Kubernetes cluster locally during the infrastructure development stage. Minikube is a tool that allows developers to create a local Kubernetes cluster which mimics a production environment. Using Minikube, they can test and debug their applications locally and ensure that the application will run correctly when deployed to a production cluster.
The goal of this blog article is to demonstrate how to deploy the previously developed Django and React application to a Kubernetes cluster using Minikube. By following this tutorial, you'll learn how to create a local Kubernetes cluster with Minikube and deploy the pre-built containerized applications to it. The tutorial covers topics such as creating Kubernetes PersistentVolumes, Deployments, Services, and much more. Stay tuned and make the most of it!
In order to structure our learning, the tutorial is split into four stages:
- Stage 0: Foundation - `stage0-base`
  - This branch is a copy of the `main` branch of the previous tutorial
- Stage 1: PostgreSQL Database - `stage1-psql`
- Stage 2: Django API - `stage2-django`
- Stage 3: React APP - `stage3-react`
A basic understanding of Kubernetes and the following resources is required: ConfigMap, Secret, Pod, Deployment, Service, and Ingress. If you haven't worked with these resources yet, take your time to read the Kubernetes documentation. Don't rush, the blog post will still be available for you later!
To follow the approach proposed in this tutorial, we must have the following packages installed on our system:
- Git
  - `git 2.40`
- Docker
  - `docker 23.0`
- Kubernetes
  - `kubectl 1.27`
  - `minikube 1.29`
- Be sure that your Minikube cluster is up and running. To achieve that, execute `minikube start` and check the status of your cluster with `minikube status`. You also need to deploy an Ingress controller by running `minikube addons enable ingress`. The status of your Minikube addons can be verified with `minikube addons list`. For debugging Minikube's setup, refer to their detailed documentation. The consolidated commands are shown below.
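As a quick reference, here are the Minikube commands mentioned above in the order they should be run:

```bash
# Start the local cluster
minikube start
# Check the cluster's status
minikube status
# Deploy the Ingress controller and verify the addons
minikube addons enable ingress
minikube addons list
```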
Remember that you can still use newer versions of these packages, but be aware that your outputs may occasionally differ.
That's all you need! After installing these packages and configuring Minikube, you're good to go!
The first step of our Kubernetes journey is to download the data related to the containers developed in the previous blog article. Since the `main` branch of the previous tutorial was forked into the `stage0-base` branch of the current tutorial, we simply have to clone the current tutorial's Git repository to a local machine and switch the branch.
- Create the local directory `~/mayflower`, clone the Git repository into it, and switch to the branch `stage0-base`.

  ```bash
  mkdir ~/mayflower
  cd ~/mayflower
  git clone https://github.com/rodolfoksveiga/k8s-django-react.git .
  git switch stage0-base
  ```
- Add the host `api.mayflower.de` to the variable `ALLOWED_HOSTS` at `~/mayflower/api/api/settings.py`. By doing so, we allow Django to serve our API not only on `localhost` but also on the domain `api.mayflower.de`.

  ```python
  ...
  ALLOWED_HOSTS = ["localhost", "api.mayflower.de"]
  ...
  ```
- Add the hosts `api.mayflower.de` and `app.mayflower.de` to your hosts file.

  To be able to access our application from the browser, we have to map Minikube's IP address to the target URLs. First we have to find out Minikube's IP address by executing `minikube ip` and checking the command's output. Next we have to add two new lines to our hosts file containing the Minikube IP and the mapped hosts, following the example below:

  ```
  ...
  $MINIKUBE_IP_ADDRESS api.mayflower.de
  $MINIKUBE_IP_ADDRESS app.mayflower.de
  ```

  In this example we just have to substitute `$MINIKUBE_IP_ADDRESS` with the IP address printed out by the command `minikube ip`.

  The hosts file location depends on your operating system. On Linux and macOS the file can be found at `/etc/hosts`, while on Windows it can be found at `c:\Windows\System32\Drivers\etc\hosts`.
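  On Linux and macOS, a one-liner can take care of both entries at once. This is just a convenience sketch, assuming you have sudo rights (both hostnames may share a single line in the hosts file):

  ```bash
  # Append the Minikube IP mapped to both hosts to /etc/hosts
  echo "$(minikube ip) api.mayflower.de app.mayflower.de" | sudo tee -a /etc/hosts
  ```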
- Assign the database host dynamically at `~/mayflower/api/api/settings.py`. The database host defined in `~/mayflower/docker-compose.yaml` (`psql`) differs from the host we'll define later in our Kubernetes Service (`database-service`). Therefore, we have to dynamically check whether our application is running on the Kubernetes cluster or with Docker and assign the right value to the database host.

  ```python
  ...
  DATABASES = {
      "default": {
          ...
          "HOST": environ.get("PSQL_SERVICE") or environ.get("PSQL_HOST"),
          ...
      }
  }
  ...
  ```

  Here we first check whether the environment variable `PSQL_SERVICE`, defined by Kubernetes, exists. If it does, we use its value; if it doesn't, we use the value of the environment variable `PSQL_HOST`, defined in `~/mayflower/api/.env`.
At this point we should be able to run `docker-compose up` from `~/mayflower`, as summarized below. After successfully running the containers, we can open our browser and navigate to http://localhost:8000 (Django API) or http://localhost:3000 (React APP). If you want to get a better idea of the project's structure, feel free to investigate the files within this branch and test the application. By doing so, you'll feel more confident in the coming sections of this tutorial.
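Just as a reminder, this checkpoint boils down to:

```bash
cd ~/mayflower
docker-compose up
# Django API: http://localhost:8000
# React APP:  http://localhost:3000
```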
If it's all still a bit confusing to you, I strongly recommend stepping back and following the previous blog article through.
From now on it's all Kubernetes! In the following sections we'll define the resources necessary to deploy the database, backend, and frontend to Kubernetes. We'll store our environment variables in ConfigMaps or Secrets, according to our needs. All our persistent data will be managed by a PersistentVolume (PV), which will be attached to a Pod through a PersistentVolumeClaim (PVC). We'll deploy our containers using Deployments. A Deployment is an abstraction layer wrapping Pods, Kubernetes' smallest deployable units of computing. Deployments regularly check whether their Pods are healthy and create or delete Pods on demand. Finally, we'll expose our Pods internally using ClusterIP Services and externally using Ingresses.
That's all we need to prepare. Let's get started!
In the database section we'll set up the following resources: Secret, PV, PVC, Deployment, and Service. Note that we don't need to deploy an Ingress for our database Service, because we want the database to be exposed only to the backend, which lives within the cluster. For that, a ClusterIP Service is enough and safer than an Ingress.
- `~/mayflower/infra/database/secret.yaml`

  Since the environment variables used to deploy the database are sensitive and we don't want to expose their values to the world, we'll encode them and store them in a Secret.

  ```yaml
  apiVersion: v1
  data:
    POSTGRES_DB: cG9zdGdyZXM=
    POSTGRES_PASSWORD: cGFzc3dvcmQ=
    POSTGRES_USER: YWRtaW4=
  kind: Secret
  metadata:
    name: database-secret
  ```
  As discussed in the previous blog article, we need to define three environment variables to properly start our database container: `POSTGRES_DB`, `POSTGRES_USER`, and `POSTGRES_PASSWORD`. These variables were encoded with the "base64" scheme before they were stored in the Secret. To get the decoded value of a variable, run `echo $VARIABLE_VALUE | base64 --decode`. For example, the command `echo cG9zdGdyZXM= | base64 --decode` should print out the value of `POSTGRES_DB`, which happens to be `postgres`.
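  If you ever need to encode a new value yourself, the same tool works in the other direction. A quick sketch (the `-n` flag matters, since a trailing newline would otherwise end up inside the Secret):

  ```bash
  # Encode a value before storing it in a Secret
  echo -n admin | base64                  # prints YWRtaW4=
  # Decode a value taken from a Secret
  echo cG9zdGdyZXM= | base64 --decode     # prints postgres
  ```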
- `~/mayflower/infra/database/persistent-volume.yaml`

  The PV is a resource that assures, as the name suggests, that our data persists if the attached Pod occasionally dies or isn't in the `Running` state.

  ```yaml
  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: database-persistent-volume
  spec:
    capacity:
      storage: 200Mi
    hostPath:
      path: /data
    accessModes:
      - ReadWriteOnce
    persistentVolumeReclaimPolicy: Retain
  ```
  - Description of the PV's specification:
    - `spec.capacity.storage = 200Mi`
      - The maximum storage capacity of `200Mi` assures that only PVCs requiring `200Mi` or less will be able to attach to this PV.
    - `spec.hostPath.path = /data`
      - The `hostPath` indicates where the data will be stored on the host machine.
    - `spec.accessModes`: list of values describing under which conditions Pods can connect to the PV
      - `accessModes[0] = ReadWriteOnce`
        - The value `ReadWriteOnce` restricts the PV to be mounted by a single Node. The PV can still be mounted by multiple Pods, as long as they live on the same Node.
    - `spec.persistentVolumeReclaimPolicy = Retain`
      - The value `Retain` assures that if the Pod dies and the PVC is detached from the PV, the PV will persist and wait for a new instance of the Pod to spin up and reattach to the PVC.
- `~/mayflower/infra/database/persistent-volume-claim.yaml`

  A PV can only be requested and mounted to a Pod through a PVC. The PVC describes the requirements that the PV must fulfil in order to attach to the PVC. Remember that if the PVC isn't referenced by a Pod, it won't attach to any PV.

  ```yaml
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: database-persistent-volume-claim
  spec:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 200Mi
  ```
  All the requirements defined for this PVC match the values previously described in our PV manifest. By doing this, we guarantee that only one Pod will store data in and consume data from the pre-defined PV, since the Pod consumes all the storage available on the PV.

  Note that the PVC's access modes must also match the access modes defined in the PV. If they don't match, the PVC won't be bound to the PV and naturally the volume won't mount to any Pod referencing this PVC.
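  Once the manifests are applied (we'll do that at the end of this section), you can check that the binding worked. A small sanity check:

  ```bash
  # Both resources should report STATUS "Bound"
  kubectl get pv database-persistent-volume
  kubectl get pvc database-persistent-volume-claim
  ```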
- `~/mayflower/infra/database/deployment.yaml`

  A Deployment controls the lifecycle of the Pods with labels matching the Deployment's selector labels. We'll use a Deployment to specify all the configurations of the database Pod and its container.

  ```yaml
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: database-deployment
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: database
    template:
      metadata:
        labels:
          app: database
      spec:
        containers:
          - image: postgres:14.1-alpine
            name: database
            ports:
              - containerPort: 5432
            envFrom:
              - secretRef:
                  name: database-secret
            volumeMounts:
              - name: storage
                mountPath: /var/lib/postgresql/data
        volumes:
          - name: storage
            persistentVolumeClaim:
              claimName: database-persistent-volume-claim
  ```
  - Description of the Deployment's specification:
    - `spec.replicas = 1`
      - The Deployment tries to always keep one Pod in `Running` state.
    - `spec.selector.matchLabels = app=database`
      - The Deployment is allowed to manage the Pods with the label `app` equal to `database`. Any other Pod with such a label key and value will also be taken into consideration by this Deployment.
    - `spec.template`: configuration of the Pod to be created by the Deployment whenever needed
      - `template.metadata.labels = app=database`
        - Sets the label key `app` equal to `database` on this Pod, so the Deployment can use it for future state management of its underlying Pods.
      - `template.spec.containers`: list of containers deployed in the Pod
        - `containers[0].image = postgres:14.1-alpine`
          - The image used to deploy the Pods is the official PostgreSQL image.
        - `containers[0].ports.containerPort = 5432`
          - The container exposes the default port of the official PostgreSQL image.
        - `containers[0].envFrom.secretRef`
          - Injects all the secrets from the Secret `database-secret` as environment variables of the container.
        - `containers[0].volumeMounts`
          - The container mounts just one volume on path `/var/lib/postgresql/data` (container), where it stores the data of our database.
      - `template.volumes`
        - Maps the volume mounted on the container to the PVC `database-persistent-volume-claim` created beforehand. Note that the PVC will look for a PV matching its requirements.
- `~/mayflower/infra/database/service.yaml`

  Finally, we want to expose our database Deployment to the backend Service we'll deploy next. Since the database and the backend live in the same cluster, we'll set up this communication using a ClusterIP Service.

  ```yaml
  apiVersion: v1
  kind: Service
  metadata:
    name: database-service
  spec:
    type: ClusterIP
    selector:
      app: database
    ports:
      - name: 5432-5432
        port: 5432
        targetPort: 5432
  ```
  - Description of the Service's specification:
    - `type = ClusterIP`
    - `selector = app=database`
      - Just like the Deployment, the Service uses the `selector` to match the Pods with the same labels. In this case, we want to match the database Pod, which has the label key `app` equal to `database`.
    - `ports`: list of ports that will be exposed by the cluster
      - `ports[0].port = 5432` and `ports[0].targetPort = 5432`
        - Port `5432` of the container (target) is mapped to the same port on the cluster.

After setting it all up we can execute `kubectl create -f ~/mayflower/infra/database`. If everything went right, when we run `kubectl get all` we'll see the resources we just deployed. For example, if we run `kubectl get deployments` we should see a list containing our `database-deployment`.
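A short verification routine might look like this; the `psql` line is optional and assumes the decoded credentials from our Secret (`admin`/`postgres`):

```bash
# Deploy all database manifests at once
kubectl create -f ~/mayflower/infra/database
# Wait until the database Pod reports Ready
kubectl wait --for=condition=ready pod -l app=database --timeout=120s
# Inspect the resources we just created
kubectl get deployments,services,pvc
# Optionally, open a psql shell inside the container
kubectl exec -it deploy/database-deployment -- psql -U admin -d postgres
```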
You've probably learned a lot in this section, and what we'll do next will be pretty similar. To avoid repeating concepts, we'll keep the resources' descriptions much shorter in the coming sections. If you need to, come back to this section to recap anything that may still be confusing.
In the second section we'll set up the Django API and its resources: Secret, Deployment, Service, and Ingress. This time we need an Ingress to externally expose our API. The Ingress will map our cluster IP to a human-readable URL. Be sure that you went through the steps described in the requirements section, so you can later access the API from your browser.
- `~/mayflower/infra/backend/secret.yaml`

  ```yaml
  apiVersion: v1
  data:
    DJANGO_SUPERUSER_EMAIL: YWRtaW5AbWF5Zmxvd2VyLmNvbQ==
    DJANGO_SUPERUSER_PASSWORD: bWF5Zmxvd2Vy
    DJANGO_SUPERUSER_USERNAME: YWRtaW4=
    PSQL_HOST: ZGF0YWJhc2Utc2VydmljZQ==
    PSQL_NAME: cG9zdGdyZXM=
    PSQL_PASSWORD: cGFzc3dvcmQ=
    PSQL_PORT: NTQzMg==
    PSQL_USER: YWRtaW4=
  kind: Secret
  metadata:
    name: backend-secret
  ```
  Here we had to define all eight environment variables necessary to run our Docker image. These variables are essential to properly set up the communication between Django's ORM and the PostgreSQL database. A detailed description of these variables can be found in the previous blog article. To check the value of the variables, you can use the command `echo $VARIABLE_VALUE | base64 --decode`, as mentioned in the database section.
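  Instead of encoding each value by hand, you can also let `kubectl` generate such a manifest for you. A sketch with just two of the variables:

  ```bash
  # kubectl handles the base64 encoding and prints the manifest without applying it
  kubectl create secret generic backend-secret \
    --from-literal=PSQL_HOST=database-service \
    --from-literal=PSQL_USER=admin \
    --dry-run=client -o yaml
  ```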
- `~/mayflower/infra/backend/deployment.yaml`

  ```yaml
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    labels:
      app: backend
    name: backend-deployment
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: backend
    template:
      metadata:
        labels:
          app: backend
      spec:
        containers:
          - image: rodolfoksveiga/django-react_django:new
            name: django
            ports:
              - containerPort: 8000
            envFrom:
              - secretRef:
                  name: backend-secret
            volumeMounts:
              - name: backend-logs
                mountPath: /var/log
        volumes:
          - name: backend-logs
            hostPath:
              path: /var/log
  ```
  - Description of the Deployment's specification:
    - `spec.replicas = 1`
    - `spec.selector.matchLabels = app=backend`
    - `spec.template`
      - `template.metadata.labels = app=backend`
      - `template.spec.containers`
        - `containers[0].image = rodolfoksveiga/django-react_django:new`
          - The image was generated using the backend `Dockerfile` created in the last tutorial.
        - `containers[0].ports.containerPort = 8000`
          - The container exposes Django's default development server port.
        - `containers[0].envFrom.secretRef`
          - Injects all the secrets from the Secret `backend-secret` as environment variables of the container.
        - `containers[0].volumeMounts`
          - The container mounts just one volume on path `/var/log` (container), where it writes the Django API logs.
      - `template.volumes`
        - Maps the volume mounted on the container to the path `/var/log` (host machine).
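  After applying the manifests at the end of this section, it's worth tailing the container's logs to confirm the API booted correctly; for example:

  ```bash
  # Stream the Django container's logs (Ctrl+C to stop)
  kubectl logs -f deploy/backend-deployment
  ```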
- `~/mayflower/infra/backend/service.yaml`

  ```yaml
  apiVersion: v1
  kind: Service
  metadata:
    name: backend-service
  spec:
    type: ClusterIP
    selector:
      app: backend
    ports:
      - name: 8000-8000
        port: 8000
        targetPort: 8000
  ```
  - Description of the Service's specification:
    - `type = ClusterIP`
    - `selector = app=backend`
    - `ports`
      - `ports[0].port = 8000` and `ports[0].targetPort = 8000`
- `~/mayflower/infra/backend/ingress.yaml`

  As we already discussed, a Pod exposed by a ClusterIP Service can only be reached from inside the cluster. Since we want to call the API from the frontend, which will run in our browser, we also need to deploy an Ingress.

  ```yaml
  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: backend-ingress
  spec:
    rules:
      - host: api.mayflower.de
        http:
          paths:
            - path: /
              pathType: Prefix
              backend:
                service:
                  name: backend-service
                  port:
                    number: 8000
  ```
  - Description of the Ingress's specification:
    - `rules`: list of hosts to expose routes associated with services
      - `rules[0].host = api.mayflower.de`
        - The mapped URL of our Django API. This URL matches the URL added to our hosts file in the foundation section.
      - `rules[0].http.paths`: list of routes to expose specific service ports
        - `paths[0].path = /`
          - The root path of the host is mapped to the root endpoint of the Django API.
        - `paths[0].pathType = Prefix`
          - The value `Prefix` of the `pathType` key means that the child routes of our Service will also be mapped to this host. For example, we'll be able to access the backend endpoint `/admin` on `https://api.mayflower.de/admin` as well as the endpoint `/students` on `https://api.mayflower.de/students`.
        - `paths[0].backend.service.name = backend-service`
          - The Ingress will look for a Service called `backend-service`.
        - `paths[0].backend.service.port.number = 8000`
          - We mapped the Service's port `8000` to the root route.
To deploy our backend we can execute `kubectl create -f ~/mayflower/infra/backend`, and "voilà", our Django API is accessible through the URL `https://api.mayflower.de`. Isn't it cool!?
Play around with your API and add some students to our database, so you can see them later on the frontend URL.
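You can also smoke-test the API straight from the terminal. The `/students` endpoint comes from the previous tutorial, and the `-k` flag is only needed if the Ingress serves a self-signed certificate:

```bash
curl http://api.mayflower.de/students
# or over TLS, skipping certificate verification
curl -k https://api.mayflower.de/students
```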
Last but not least, we'll deploy the React APP using the following resources: ConfigMap, Deployment, Service, and Ingress. Note that this time we opted for a ConfigMap instead of a Secret, because the only variable we will store in it isn't sensitive and can safely be exposed.
- `~/mayflower/infra/frontend/config-map.yaml`

  ```yaml
  apiVersion: v1
  data:
    REACT_APP_API_URL: https://api.mayflower.de
  kind: ConfigMap
  metadata:
    name: frontend-config-map
  ```
  In this ConfigMap manifest we set just one environment variable, called `REACT_APP_API_URL`. This variable is used to print the Django Admin URL as a link in the frontend.
- `~/mayflower/infra/frontend/deployment.yaml`

  ```yaml
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    labels:
      app: frontend
    name: frontend-deployment
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: frontend
    template:
      metadata:
        labels:
          app: frontend
      spec:
        containers:
          - image: rodolfoksveiga/django-react_react:latest
            name: react
            envFrom:
              - configMapRef:
                  name: frontend-config-map
  ```
  - Description of the Deployment's specification:
    - `spec.replicas = 1`
    - `spec.selector.matchLabels = app=frontend`
    - `spec.template`
      - `template.metadata.labels = app=frontend`
      - `template.spec.containers`
        - `containers[0].image = rodolfoksveiga/django-react_react:latest`
          - The image was generated using the frontend `Dockerfile` created in the previous tutorial.
        - `containers[0].envFrom.configMapRef`
          - Injects all the configurations from the ConfigMap `frontend-config-map` as environment variables of the container.
        - Note that no `containerPort` is declared this time; the field is merely informational, and the React development server still listens on its default port `3000`, which the Service below targets.
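  After you apply the manifests at the end of this section, you can double-check that the ConfigMap value was injected into the running container; for example:

  ```bash
  # Print the API URL as seen from inside the frontend container
  kubectl exec deploy/frontend-deployment -- printenv REACT_APP_API_URL
  ```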
- `~/mayflower/infra/frontend/service.yaml`

  ```yaml
  apiVersion: v1
  kind: Service
  metadata:
    name: frontend-service
  spec:
    type: ClusterIP
    selector:
      app: frontend
    ports:
      - name: 3000-3000
        port: 3000
        targetPort: 3000
  ```
  - Description of the Service's specification:
    - `type = ClusterIP`
    - `selector = app=frontend`
    - `ports`
      - `ports[0].port = 3000` and `ports[0].targetPort = 3000`
- `~/mayflower/infra/frontend/ingress.yaml`

  ```yaml
  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: ingress
  spec:
    rules:
      - host: app.mayflower.de
        http:
          paths:
            - path: /
              pathType: Prefix
              backend:
                service:
                  name: frontend-service
                  port:
                    number: 3000
  ```
  - Description of the Ingress's specification:
    - `rules`
      - `rules[0].host = app.mayflower.de`
      - `rules[0].http.paths`
        - `paths[0].path = /`
        - `paths[0].pathType = Prefix`
        - `paths[0].backend.service.name = frontend-service`
        - `paths[0].backend.service.port.number = 3000`
Finally we can execute `kubectl create -f ~/mayflower/infra/frontend`, and shortly our React APP will be available in the browser through the URL `https://app.mayflower.de`. If you can see in the frontend the data you created before through the backend URL, it means you did everything right and the services are properly connected to each other. The backend manages the database and the frontend prints the data gathered from the backend. It wasn't that hard, right!?
That was quick, but it's indeed everything you need to get started with Kubernetes. Now you can use this Kubernetes cluster as you wish. You can play around with it, extend it, and perhaps use it as a baseline to create your future customer's application.
Following this tutorial through, you learned how to serve your PostgreSQL, Django, and React containers from a Minikube Kubernetes cluster, which mimics a real cloud server. Since you already learned in the previous tutorial how to package applications in containers, this was your second step into the cloud - and it was a huge one!
You can reproduce many of the concepts you've learned here in a real-world application, but there are still some limitations to consider before you publicly deploy your containers to Kubernetes. Among them, it's worth pointing out once more that the Secrets we defined are only "base64" encoded, so anyone can decode them with a simple command. To really protect your data you must encrypt it at rest. Luckily, Kubernetes has a built-in feature to achieve that. Check it out and implement it on your Kubernetes cluster.
Well, you now have the whole power of Kubernetes at the tip of your fingers! You can seamlessly deploy new resources, expose them internally to the services you've already deployed or to external services, scale your containers using Deployments so the demand on your application is optimally met, and much more... Kubernetes offers you the possibility to easily design an infrastructure that matches your needs, so it's an opportunity to go further and think your infrastructure your way. Use the concepts you've just learned, but don't limit yourself to them.