Note: If you have missed my previous articles on Docker and Kubernetes, you can find them here.
Application deployment models evolution
Getting started with Docker
Docker file and images
Publishing images to Docker Hub and re-using them
Docker- Find out what's going on
Docker Networking- Part 1
Docker Networking- Part 2
Docker Swarm-Multi-Host container Cluster
Docker Networking- Part 3 (Overlay Driver)
Introduction to Kubernetes
Kubernetes- Diving in (Part 1)
Kubernetes-Diving in (Part2)- Services
Kubernetes- Infrastructure As Code with Yaml (part 1)
Kubernetes- Infrastructure As Code Part 2- Creating PODs with YAML
Kubernetes Infrastructure-as-Code part 3- Replicasets with YAML
Kubernetes Infrastructure-as-Code part 4 - Deployments and Services with YAML
Deploying a microservices APP with Kubernetes
Kubernetes makes it easy to deploy microservices, and it enables scaling and monitoring of the deployed objects. With ReplicaSets, additional Pods of the same type can be added at runtime.
Consider a scenario where a microservice application similar to the voting app is deployed. If there is a spike in the number of users, additional Pods can be deployed. Looked at from an operations (DevOps) point of view, the administrator of the app is presented with the following choices:
a) Predict the maximum number of users and deploy replicas sized for that peak load. This means resources (CPU cycles, memory, etc.) are wasted most of the time.
b) Have someone (or a team) continuously monitor user traffic and deploy replicas when required. This needs additional operational staffing.
This leads to the question: is it possible to deploy Kubernetes objects on the fly based on some external variable? The answer is yes. Kubernetes offers client libraries for many popular programming languages such as Python, Go and Java. With a script or program, it is possible to monitor the number of users of the app and dynamically scale the number of replicas up or down.
I am going to show you how to do this with my favorite language, Python. We are going to scale the replicas in a deployment based on the time of day.
If you intend to follow along, you need to have the following components installed:
1) Python 3
2) pip for Python
3) The Kubernetes client for Python
4) Any editor (I prefer Eclipse with PyDev and remote systems plugin)
On an Ubuntu system, you can install all of these components with the apt-get package manager:
root@sathish-vm2:/home/sathish# apt-get install python3
..................
root@sathish-vm2:/home/sathish# apt-get -y install python3-pip
.......................
root@sathish-vm2:/home/sathish# pip3 install kubernetes
.......................
Successfully installed cachetools-4.1.1 google-auth-1.23.0 kubernetes-12.0.0 python-dateutil-2.8.1 requests-oauthlib-1.3.0 rsa-4.6 websocket-client-0.57.0
If everything installed correctly, you should be able to import the kubernetes package without errors:
root@sathish-vm2:/home/sathish# python3
Python 3.8.5 (default, Jul 28 2020, 12:59:40)
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import kubernetes
>>>
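Importing the module only proves that the package is installed. To confirm the client can actually reach your cluster, a quick sanity check is to load the kubeconfig and list pods. This assumes kubectl already works on the same machine, so ~/.kube/config points at your cluster:
from kubernetes import client, config

config.load_kube_config()  # reads ~/.kube/config by default
v1 = client.CoreV1Api()
for pod in v1.list_pod_for_all_namespaces().items:
    print(pod.metadata.namespace, pod.metadata.name)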
Here is the YAML file (saved as deploy.yaml) required for the code:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
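If you want to sanity-check the manifest before handing it to the Python client, a client-side dry run with kubectl validates it without creating anything (assuming kubectl 1.18 or later, where the --dry-run=client flag is available):
kubectl apply --dry-run=client -f deploy.yaml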
Here is my Python code:
from os import path

import yaml

from kubernetes import client, config


def main():
    # Configs can be set in the Configuration class directly or using a helper
    # utility. If no argument is provided, the config is loaded from the
    # default location (~/.kube/config).
    config.load_kube_config()

    with open(path.join(path.dirname(__file__), "deploy.yaml")) as f:
        dep = yaml.safe_load(f)

    k8s_apps_v1 = client.AppsV1Api()
    resp = k8s_apps_v1.create_namespaced_deployment(
        body=dep, namespace="default")
    print("Deployment created. status='%s'" % resp.metadata.name)


if __name__ == '__main__':
    main()
Note: Many examples similar to the above are available here.
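One thing worth knowing: create_namespaced_deployment fails with a 409 Conflict if a deployment with the same name already exists, for example if you run the script twice. Here is a minimal sketch of how that could be handled by catching ApiException and replacing the existing deployment; treat this as an illustration rather than part of the article's tested script:
from os import path

import yaml
from kubernetes import client, config
from kubernetes.client.rest import ApiException


def create_or_replace_deployment():
    config.load_kube_config()
    apps = client.AppsV1Api()
    with open(path.join(path.dirname(__file__), "deploy.yaml")) as f:
        dep = yaml.safe_load(f)
    try:
        resp = apps.create_namespaced_deployment(body=dep, namespace="default")
        print("Deployment created: %s" % resp.metadata.name)
    except ApiException as e:
        if e.status == 409:
            # The deployment already exists, so replace it with the manifest
            resp = apps.replace_namespaced_deployment(
                name=dep["metadata"]["name"], namespace="default", body=dep)
            print("Deployment already existed, replaced: %s" % resp.metadata.name)
        else:
            raise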
I am going to run this and check things out.
root@sathish-vm2:/home/sathish/python# python3 deploypod.py
Deployment created. status='nginx-deployment'
root@sathish-vm2:/home/sathish/python# kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 1/1 1 1 96s
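The same check can also be done from Python instead of kubectl. read_namespaced_deployment returns the deployment object, and its spec and status carry the replica counts; a small sketch, assuming the deployment name and namespace used above:
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()
dep = apps.read_namespaced_deployment(name="nginx-deployment", namespace="default")
print("%s: %s/%s replicas ready" % (dep.metadata.name,
                                    dep.status.ready_replicas, dep.spec.replicas))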
Now that the barebones example works, I am going to modify things a bit and create more replicas. For this purpose I am defining an update_nginx function:
def update_nginx(replicas=3, imagename="nginx"):
    config.load_kube_config()
    k8s_apps_v1 = client.AppsV1Api()

    container = client.V1Container(
        name="nginx",
        image=imagename,
        ports=[client.V1ContainerPort(container_port=80)],
    )
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": "nginx"}),
        spec=client.V1PodSpec(containers=[container]))
    spec = client.V1DeploymentSpec(
        replicas=replicas,
        template=template,
        selector={'matchLabels': {'app': 'nginx'}})

    # Instantiate the deployment object with the desired replica count
    deployment = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name=DEPLOYMENT_NAME),
        spec=spec)

    # Patch the existing deployment with the new spec
    api_response = k8s_apps_v1.patch_namespaced_deployment(
        name=DEPLOYMENT_NAME,
        namespace="default",
        body=deployment)
    print("Deployment updated. status='%s'" % str(api_response.status))
Let me add one more condition: I want to run 3 replicas from a given hour of the day for 3 hours, after which the deployment should scale back down to 1 replica. Let's define the function for this:
def scale_replicas_time(scale_up_hour):
    now = time.localtime()
    hr = int(now.tm_hour)
    # The window runs from scale_up_hour for the next 3 hours
    end_hour = scale_up_hour + 3
    if hr >= scale_up_hour and hr < end_hour:
        return True
    return False
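One caveat with this simple check: it assumes the 3-hour window does not cross midnight. If the window started at, say, 11 PM, a modulo-based comparison would handle the wrap-around; a quick sketch, not used in the rest of the article:
def in_scale_window(scale_up_hour, window_hours=3):
    hr = time.localtime().tm_hour
    # Hours elapsed since the window opened, wrapping around midnight
    return (hr - scale_up_hour) % 24 < window_hours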
Finally, I want the program to check this condition every hour and call update_nginx with the appropriate replica count. I can handle this in the main block:
if __name__ == '__main__':
    create_nginx_deployment()
    while True:
        # 5 PM
        if scale_replicas_time(17):
            update_nginx(replicas=3)
        else:
            update_nginx(replicas=1)
        # sleep for an hour
        time.sleep(3600)
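A flat time.sleep(3600) is good enough here, but note that the checks will slowly drift away from the top of the hour because each iteration also spends time on the API calls. If you would rather wake up near each clock-hour boundary, one option (not applied in the listing below) is to sleep for the remainder of the current hour:
# Sleep until roughly the next clock-hour boundary instead of a fixed 3600 seconds
time.sleep(3600 - (time.time() % 3600))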
Here is a complete listing of code:
from os import path

import yaml

from kubernetes import client, config
import time

DEPLOYMENT_NAME = "nginx-deployment"


def create_nginx_deployment():
    config.load_kube_config()
    k8s_apps_v1 = client.AppsV1Api()

    with open(path.join(path.dirname(__file__), "deploy.yaml")) as f:
        dep = yaml.safe_load(f)

    resp = k8s_apps_v1.create_namespaced_deployment(
        body=dep, namespace="default")
    print("Deployment created. status='%s'" % resp.metadata.name)


def update_nginx(replicas=3, imagename="nginx"):
    config.load_kube_config()
    k8s_apps_v1 = client.AppsV1Api()

    container = client.V1Container(
        name="nginx",
        image=imagename,
        ports=[client.V1ContainerPort(container_port=80)],
    )
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": "nginx"}),
        spec=client.V1PodSpec(containers=[container]))
    spec = client.V1DeploymentSpec(
        replicas=replicas,
        template=template,
        selector={'matchLabels': {'app': 'nginx'}})

    # Instantiate the deployment object with the desired replica count
    deployment = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name=DEPLOYMENT_NAME),
        spec=spec)

    # Patch the existing deployment with the new spec
    api_response = k8s_apps_v1.patch_namespaced_deployment(
        name=DEPLOYMENT_NAME,
        namespace="default",
        body=deployment)
    print("Deployment updated. status='%s'" % str(api_response.status))


def scale_replicas_time(scale_up_hour):
    now = time.localtime()
    hr = int(now.tm_hour)
    print("current hour " + str(hr))
    # The window runs from scale_up_hour for the next 3 hours
    end_hour = scale_up_hour + 3
    if hr >= scale_up_hour and hr < end_hour:
        return True
    return False


if __name__ == '__main__':
    create_nginx_deployment()
    while True:
        # 5 PM
        if scale_replicas_time(17):
            update_nginx(replicas=3)
        else:
            update_nginx(replicas=1)
        # sleep for an hour
        time.sleep(3600)
Let's run the code
root@sathish-vm2:/home/sathish/python# python3 deploy.py
Deployment created. status='nginx-deployment'
current hour 12 <<<<<<<
Deployment updated. status='{'available_replicas': None,
'collision_count': None,
'conditions': [{'last_transition_time': datetime.datetime(2020, 11, 7, 12, 53, 20, tzinfo=tzlocal()),
'last_update_time': datetime.datetime(2020, 11, 7, 12, 53, 20, tzinfo=tzlocal()),
'message': 'Created new replica set '
'"nginx-deployment-7848d4b86f"',
'reason': 'NewReplicaSetCreated',
'status': 'True',
'type': 'Progressing'}],
'observed_generation': None,
'ready_replicas': None,
'replicas': None,
'unavailable_replicas': None,
'updated_replicas': None}'
root@sathish-vm2:/home/sathish# kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 1/1 1 1 100s
Changing the parameter in the main block so that the current hour falls within the scaling window:
if __name__ == '__main__':
    create_nginx_deployment()
    while True:
        if scale_replicas_time(12):
            update_nginx(replicas=3)
        else:
            update_nginx(replicas=1)
        # sleep for an hour
        time.sleep(3600)
root@sathish-vm2:/home/sathish# kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 1/3 3 1 19s
root@sathish-vm2:/home/sathish# kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 1/3 3 1 26s
root@sathish-vm2:/home/sathish# kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 2/3 3 2 30s
root@sathish-vm2:/home/sathish#
root@sathish-vm2:/home/sathish# kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 2/3 3 2 34s
root@sathish-vm2:/home/sathish# kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 3/3 3 3 36s
I hope this article was useful in understanding the basics of the Kubernetes Python client and serves as a good starting point for those who want to play around with it. Thanks for your visit and happy weekend :)