Dynamic Jenkins Cluster And Deploy Application Using Kubernetes

Rupali Gurjar
7 min read · Aug 13, 2020

Jenkins

Jenkins is an open-source continuous integration and delivery system designed to automate builds and deployments. It is well suited to being installed in the cloud to run self-hosted pipelines.

It helps automate the parts of software development related to building, testing, and deploying, facilitating continuous integration and continuous delivery. It is a server-based system that runs in servlet containers such as Apache Tomcat.

Jenkins Cluster

Jenkins supports clustering via master-slave mode. A build process can be delegated to several slave (worker) nodes, which allows a single Jenkins cluster setup to serve multiple projects.

There are two types of Jenkins slaves:

  1. Static
  2. Dynamic

Kubernetes

Kubernetes is an open-source container-orchestration system for automating computer application deployment, scaling, and management. It was originally designed by Google and is now maintained by the Cloud Native Computing Foundation.

Task Overview

Create a dynamic Jenkins cluster and perform Task 3 using it.

Steps to proceed:

1. Create a container image that has Linux and the other basic configuration required to run a slave for Jenkins (for example, here we need kubectl to be configured).

2. When we launch a job, it should automatically start on a slave based on the label provided (the dynamic approach).

3. Create a job chain of Job1 and Job2 using the Build Pipeline plugin in Jenkins.

4. Job1: Pull the GitHub repo automatically when a developer pushes to GitHub, and perform the following operations:

1. Create a new image dynamically for the application and copy the application code into that Docker image.

2. Push that image to Docker Hub (a public repository).

(The GitHub repo contains the application code and the Dockerfile to create the new image.)

5. Job2 (should run on the dynamic Jenkins slave configured with the Kubernetes kubectl command): Launch the application on top of the Kubernetes cluster, performing the following operations:

1. If launching for the first time, create a deployment of the pod using the image created in the previous job. Otherwise, if the deployment already exists, roll out the existing pods with zero downtime for the user.

2. If the application is created for the first time, expose it. Otherwise, don't expose it again.

Task Description

Let’s begin the task!

First of all, for an overview of Task 3, you can visit my article.

Here we are going to set up a distributed job cluster, so there are some prerequisites:

1. Start Minikube: To use the Kubernetes cluster, we have to start Minikube.

minikube start

2. TCP Socket Support: If the Docker client and server are on different systems, we have to enable TCP socket support.

Open the file /usr/lib/systemd/system/docker.service and add the TCP socket option there.
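The edited line appears only as a screenshot in the original; typically the change is to the ExecStart line, roughly as below (the exact flags may differ on your system; port 4243 matches the Docker Host URL used later in this setup):

# In /usr/lib/systemd/system/docker.service: listen on the local socket and on TCP port 4243
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:4243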

Then run these two commands to reload the daemon and restart Docker:

systemctl daemon-reload
systemctl restart docker

3. Configure kubectl: To use the kubectl command, we need to configure it. For this, we have to provide the .crt certificates, the key, and a config file.
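The config file itself is shown only as a screenshot in the original. As a rough sketch, assuming the certificates and key are copied to /root/ under the names ca.crt, client.crt, and client.key (these names, the paths, and the <minikube_ip> placeholder are assumptions; 8443 is Minikube's default API server port), a kubeconfig could look like this:

apiVersion: v1
kind: Config
clusters:
- cluster:
    # Minikube API server and its CA certificate
    server: https://<minikube_ip>:8443
    certificate-authority: /root/ca.crt
  name: minikube
contexts:
- context:
    cluster: minikube
    user: minikube
  name: minikube
current-context: minikube
users:
- name: minikube
  user:
    # Client certificate and key used to authenticate kubectl
    client-certificate: /root/client.crt
    client-key: /root/client.key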

4. SSH Configuration: If the Jenkins slave is a Linux system, SSH should be configured on it.

Here is my Dockerfile

In this Dockerfile, I have installed the kubectl command, generated the SSH keys, and started the SSH service. I have also made a working directory “/root/Task4”.
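The Dockerfile itself appears as a screenshot in the original; a minimal sketch along those lines is below (the Java and OpenSSH package names, the root password, the kubectl download URL, and the certificate file names are assumptions):

FROM centos:7

# Java for the Jenkins agent, OpenSSH so Jenkins can connect to this slave over SSH
RUN yum install -y java-1.8.0-openjdk openssh-server openssh-clients passwd && yum clean all

# Set a root password for the SSH login used by Jenkins (placeholder value)
RUN echo "redhat" | passwd --stdin root

# Install the kubectl binary
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.18.0/bin/linux/amd64/kubectl && \
    chmod +x kubectl && mv kubectl /usr/bin/

# Generate the SSH host keys and copy in the kubectl configuration from the previous step
RUN ssh-keygen -A && mkdir -p /root/.kube
COPY ca.crt client.crt client.key /root/
COPY config /root/.kube/config

# Working directory used as the Remote File System Root in Jenkins
WORKDIR /root/Task4

EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]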

Now build this image:

docker build -t kubernetes:v1 <directory_of_Dockerfile>

Now we push this image to Docker Hub.

docker tag kubernetes:v1 rupali04/kubernetes:v1
docker push rupali04/kubernetes:v1

Here we can see this image on Docker Hub as well.

Now, to configure the cloud, we have to install a plugin named “Docker”.

Then go to Manage Jenkins >> Manage Nodes and Clouds >> Configure Clouds.

Docker Host URL: tcp://<ip_of_docker_server>:4243

Remote File System Root: <jenkins_working_Directory>

Provide the username and password to log in via SSH, and use the “Non verifying Verification Strategy”.

Now it is done. The setup is ready.
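Before creating the jobs, you can quickly sanity-check the TCP socket from any machine that has the Docker client installed (this quick check is not part of the original article):

docker -H tcp://<ip_of_docker_server>:4243 info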

Job1: Pull the GitHub repo automatically when a developer pushes to GitHub, and perform the following operations:

1. Create a new image dynamically for the application and copy the application code into that Docker image.

2. Push that image to Docker Hub (a public repository).

Download the code from GitHub

It will download the code from GitHub and copy it into the “/root/Task4_web” folder. This code contains the Dockerfile and webpages.

It will build and tag that image, and after that push it to Docker Hub.
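The build steps themselves are shown only as screenshots in the original; an “Execute shell” step along these lines would do it (the /root/Task4_web path and the rupali04/webserver:latest image name come from the article, the rest is an assumed sketch):

# Copy the code pulled from GitHub (Dockerfile + webpages) out of the Jenkins workspace
mkdir -p /root/Task4_web
cp -rf * /root/Task4_web/

# Build the application image from the repo's Dockerfile, tag it, and push it to Docker Hub
cd /root/Task4_web
docker build -t webserver:latest .
docker tag webserver:latest rupali04/webserver:latest
docker push rupali04/webserver:latest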

Here we can see the console output.

Here is my Dockerfile:

FROM centos
RUN yum install httpd -y
COPY index.html /var/www/html
EXPOSE 80
CMD /usr/sbin/httpd -DFOREGROUND

Job2 (should run on the dynamic Jenkins slave configured with the Kubernetes kubectl command): Launch the application on top of the Kubernetes cluster, performing the following operations:

1. If launching for the first time, create a deployment of the pod using the image created in the previous job. Otherwise, if the deployment already exists, roll out the existing pods with zero downtime for the user.

2. If the application is created for the first time, expose it. Otherwise, don't expose it again.

We want this job to run on the dynamic Jenkins slave, so here we have to provide the slave's label.

It is part of the pipeline, so it builds after Job1.

It will download the code from GitHub and deploy the application in the Kubernetes cluster.
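The exact commands appear only in the screenshots; a shell step for the dynamic slave could look roughly like this (the myweb name comes from deployment.yml below, and kubectl rollout restart, available in newer kubectl versions, is one way to do the zero-downtime update; the original job may use a different command):

# First launch: create the Deployment and expose it via the NodePort Service in deployment.yml
if ! kubectl get deployment myweb > /dev/null 2>&1
then
    kubectl apply -f deployment.yml
else
    # Deployment already exists: roll the pods over to the new image with zero downtime
    kubectl rollout restart deployment/myweb
    kubectl rollout status deployment/myweb
fi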

Here we can see that this job is running on the dynamic Jenkins slave.

Console Output :-

Here is my deployment.yml file:

apiVersion: v1
kind: Service
metadata:
  name: myweb
  labels:
    app: myweb
spec:
  ports:
  - port: 80
    nodePort: 30105
  selector:
    app: myweb
    tier: frontend
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myweb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myweb
      tier: frontend
  template:
    metadata:
      name: myweb
      labels:
        app: myweb
        tier: frontend
    spec:
      containers:
      - name: myweb
        image: rupali04/webserver:latest

This application is deployed in the Kubernetes cluster, so we can see it here.

Webpage :-

Here I change the code, and the rolling update refreshes the website without any downtime.

I have set Poll SCM as the trigger of Job1, so it will download the code, build the image, push it to Docker Hub, and trigger Job2.
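Poll SCM takes a cron-style schedule; for example, the following polls GitHub every minute (the schedule used in the original job is not shown, so this one is an assumption):

* * * * *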

We can see both versions of the image on Docker Hub as well.

After Rolling Update :-

Build Pipeline View :-

Hope you like this article!

Thanks for reading :)
