
Advanced End-to-End DevSecOps Kubernetes Three-Tier Project using AWS EKS, ArgoCD, Prometheus, Grafana, and Jenkins


PHASE 1: Infrastructure setup
PHASE 2: Jenkins and ArgoCD setup
PHASE 3: Monitoring setup

Phase 1: Infrastructure Setup

Step 1: Create an EC2 instance and install Docker, Jenkins, Terraform, kubectl, AWS CLI, Trivy, and eksctl using user data.

Allocate 30 GB of storage and use an instance with at least 4 vCPUs.

Tools-install URL: https://github.com/Madeep9347/End-to-End-Kubernetes-Three-Tier-DevSecOps-Project/blob/master/Jenkins-Server-TF/tools-install.sh
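The linked tools-install.sh is the script to use; purely as an illustration, a minimal user-data sketch for an Ubuntu instance might look like the following (Jenkins, Terraform, and Trivy install similarly — treat this as an assumption, not the project's actual script):

#!/bin/bash
# Update packages and install Docker
sudo apt-get update -y
sudo apt-get install -y docker.io unzip
sudo usermod -aG docker ubuntu

# AWS CLI v2
curl -sSL "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o awscliv2.zip
unzip -q awscliv2.zip && sudo ./aws/install

# kubectl
curl -LO "https://dl.k8s.io/release/$(curl -Ls https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install kubectl /usr/local/bin/kubectl

# eksctl
curl -sL "https://github.com/eksctl-io/eksctl/releases/latest/download/eksctl_Linux_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin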

Step 2: Access Jenkins:

Open a web browser and go to http://your_server_ip_or_domain:8080.

You will see a page asking for the initial admin password. Retrieve it using:

sudo cat /var/lib/jenkins/secrets/initialAdminPassword

Enter the password, install suggested plugins, and create your first admin user.

Go to Manage Jenkins → Plugins and install the AWS Credentials, AWS Steps, and Terraform plugins.

Step 3: Create an access key and secret key for an IAM user with Administrator access, and add those credentials to the Jenkins credentials store.
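If you prefer the CLI, an access key can also be generated like this (a sketch; the user name is a placeholder):

aws iam create-access-key --user-name <your-iam-user>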
Step 4: Configure Terraform in Jenkins

Go to Manage Jenkins → Tools → Terraform installations and point it to the local installation.

Step 5: Create a Jenkins job

Create a pipeline Jenkins job and use the Jenkinsfile:

https://github.com/Madeep9347/EKS-Terraform-GitHub-Actions/blob/master/Jenkinsfile

Edit the backend configuration with your S3 bucket name and DynamoDB table.
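For reference, an S3 backend block typically looks like this (a sketch only; the bucket, key, and table names are placeholders, not the project's actual values):

terraform {
  backend "s3" {
    bucket         = "your-terraform-state-bucket"   # replace with your S3 bucket
    key            = "eks/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "your-lock-table"                # replace with your DynamoDB lock table
  }
}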
Step 6: Create an instance (jump server) in the VPC created by Terraform and install eksctl, awscli, kubectl, and helm.

Link to tools: https://github.com/Madeep9347/End-to-End-Kubernetes-Three-Tier-DevSecOps-Project/blob/master/Jenkins-Server-TF/tools-install.sh

Step 7: Configure the AWS credentials on the jump server.
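A straightforward way to do this is with aws configure (assuming the same IAM user's access key and secret key created earlier):

aws configure
# AWS Access Key ID [None]: <your-access-key-id>
# AWS Secret Access Key [None]: <your-secret-access-key>
# Default region name [None]: us-east-1
# Default output format [None]: json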


Step 8: Update the Kubeconfig file

aws eks update-kubeconfig --name dev-medium-eks-cluster --region us-east-1

Check that the jump server is able to communicate with the Kubernetes cluster:

kubectl get nodes

Step 9: Now, we will configure the AWS Load Balancer Controller on our EKS cluster because our application will use an ingress.

Download the policy for the LoadBalancer prerequisite.

curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.5.4/docs/install/iam_policy.json

Step 10: Create the IAM policy using the below command

aws iam create-policy --policy-name AWSLoadBalancerControllerIAMPolicy --policy-document file://iam_policy.json

Step 11: Create a service account using the below command, replacing the account ID with your own

eksctl create iamserviceaccount --cluster=dev-medium-eks-cluster --namespace=kube-system --name=aws-load-balancer-controller --role-name AmazonEKSLoadBalancerControllerRole --attach-policy-arn=arn:aws:iam::590184124026:policy/AWSLoadBalancerControllerIAMPolicy --approve --region=us-east-1

Check whether the service account and its IAM role have been created.
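One quick way to check is with eksctl (assuming the same cluster name and region as above):

eksctl get iamserviceaccount --cluster dev-medium-eks-cluster --region us-east-1 --namespace kube-system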

Step 12: Run the below command to deploy the AWS Load Balancer Controller

sudo snap install helm --classic


helm repo add eks https://aws.github.io/eks-charts
helm repo update eks
helm install aws-load-balancer-controller eks/aws-load-balancer-controller -n kube-system --set clusterName=dev-medium-eks-cluster --set serviceAccount.create=false --set serviceAccount.name=aws-load-balancer-controller

After 2 minutes, run the command below to check whether your pods are running or not.

kubectl get deployment -n kube-system aws-load-balancer-controller

Step 13: Create a namespace for ArgoCD


kubectl create namespace argocd

kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v2.4.7/manifests/install.yaml

kubectl get all -n argocd

kubectl get svc -n argocd

Step 14: Edit the argocd-server service and change its type from ClusterIP to LoadBalancer.
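One way to do this, used again in Phase 2, is a kubectl patch:

kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'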

Step 15: Now access ArgoCD using the LoadBalancer DNS name

Default username: admin

The initial admin password is stored in a secret:

kubectl edit secret argocd-initial-admin-secret -n argocd

Get the password and decode it with base64.
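For example, to print it directly (the same command used again in Phase 2):

kubectl get secret argocd-initial-admin-secret -n argocd -o jsonpath="{.data.password}" | base64 -d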

Copy the password and log in to ArgoCD.


PHASE 2: Jenkins (Continuous Integration) and ArgoCD (Continuous Deployment/Delivery)

Step 1: SonarQube setup

SonarQube runs on port 9000 on the same server as Jenkins (same IP address).

Access SonarQube:

Open a web browser and go to http://your_server_ip_or_domain:9000.

The default username and password are both admin.

Step 2: Create the token

Go to Administration → Security → Users → Generate Token.

Copy the token and add it to the Jenkins credentials.
Step 3: Create the projects for the frontend and backend

Click on Set Up, select Manually, and use the generated token.
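The analysis these projects expect is typically driven by a sonar-scanner invocation along these lines (a sketch only; the project key, host URL, and token are placeholders):

sonar-scanner -Dsonar.projectKey=frontend -Dsonar.host.url=http://your_server_ip_or_domain:9000 -Dsonar.login=<generated-token>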
Step 5: Create the ECR repositories

Click on Create repository → choose Private and enter the repository name.

Create two repositories, one for the frontend and one for the backend.
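Equivalently, the repositories can be created from the CLI (a sketch; the repository names are placeholders):

aws ecr create-repository --repository-name frontend --region us-east-1
aws ecr create-repository --repository-name backend --region us-east-1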


Step 6: Add the ECR repository names and the AWS account ID to the Jenkins credentials.

Also add the GitHub credentials and a Personal Access Token (PAT) to the Jenkins credentials.

Finally, all the credentials look like this:

Step 7: Install the Docker, SonarQube Scanner, OWASP Dependency-Check, and NodeJS plugins.

Step 8: Set up the tools in Jenkins (Manage Jenkins → Tools).


Step 9: Create the Jenkins job for the frontend application

Jenkinsfile: https://github.com/Madeep9347/End-to-End-Kubernetes-Three-Tier-DevSecOps-Project/blob/master/Jenkins-Pipeline-Code/Jenkinsfile-Frontend
Step 10: Create the Jenkins job for the backend application

Step 11: Install and configure ArgoCD from the jump server:

We will be deploying our application in a three-tier namespace. To do that, we will create the three-tier namespace on EKS.

Create a separate namespace for the application using the command:

kubectl create namespace three-tier

Now, we will install ArgoCD:

kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v2.4.7/manifests/install.yaml

This command will create all the required pods and services of the ArgoCD server in the argocd namespace.

All pods must be running; to validate, run the command below:

kubectl get pods -n argocd

Now, expose the ArgoCD server as a LoadBalancer using the command below:

kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'

You can validate whether the Load Balancer is created or not by going to the AWS Console:

To access ArgoCD, copy the LoadBalancer DNS name and open it in your browser.

You will get a warning like "Your connection is not private"; click on Advanced to proceed.

Now, we need to get the password for our ArgoCD server to perform the deployment.

kubectl get secret argocd-initial-admin-secret -n argocd -o jsonpath="{.data.password}" | base64 -d

Enter the username (admin) and the password in ArgoCD and click on SIGN IN.

Here is our ArgoCD Dashboard.


Step 12: Now we need to add the GitHub repository to ArgoCD by going to Settings.

Connect the GitHub repository using HTTPS.

Go to Settings → Repositories, choose Connect Repo using HTTPS, and add the GitHub repository URL.

Click on Connect; the repository will then be connected to ArgoCD:
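The same connection can also be made from the argocd CLI (a sketch, assuming the CLI is installed and logged in; the repository URL, username, and PAT are placeholders):

argocd repo add https://github.com/<your-user>/<your-repo>.git --username <github-username> --password <personal-access-token>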
Step 13: Now create separate apps for the Frontend, Backend, Database, and Ingress manifest files:

Frontend app creation:

Create the apps the same way for the Backend, Database, and Ingress; a CLI sketch follows below.
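Purely as an illustration, an equivalent app can also be created from the argocd CLI (the repository URL and manifest path are placeholders and assumptions, not the project's actual layout):

argocd app create frontend --repo https://github.com/<your-user>/<your-repo>.git --path <path-to-frontend-manifests> --dest-server https://kubernetes.default.svc --dest-namespace three-tier --sync-policy automated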
This is the Frontend Application Deployment in ArgoCD:

This is the Backend Application Deployment in ArgoCD:


This is the Database Application Deployment in ArgoCD:

This is the Ingress Application Deployment in ArgoCD:

If you observe, we have configured a Persistent Volume and Persistent Volume Claim, so if the pods get deleted the data won't be lost; the data is stored on the host machine.
Step 14: To check whether all the pods and services are running in the EKS cluster, use the command:

kubectl get all -n three-tier

Step 15: Make sure to modify the deployment.yml files for the frontend and backend.

Also edit the domain name in the env configuration.

Step 16: Create the DNS record for the ingress controller's load balancer
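The hostname to point the record at can be read from the ingress resource (assuming the ingress lives in the three-tier namespace, as above):

kubectl get ingress -n three-tier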
Step 17: Wait for 2-3 minutes and access the application using the domain name

PHASE 3: Monitoring the EKS cluster

We will set up monitoring using Helm.


Step 1: Add the Prometheus repo using the command below

helm repo add stable https://charts.helm.sh/stable

Step 2: Install Prometheus


helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm install prometheus prometheus-community/prometheus

Step 3: Now, we need to access our Prometheus and Grafana consoles from outside of the cluster.

For that, we need to change the service type from ClusterIP to LoadBalancer.

Edit the prometheus-server service

kubectl edit svc prometheus-server
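Alternatively, the same change can be made non-interactively with a patch, mirroring the ArgoCD step (assuming the release was installed into the default namespace as above):

kubectl patch svc prometheus-server -p '{"spec": {"type": "LoadBalancer"}}'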


Step 4: Install Grafana

helm repo add grafana https://grafana.github.io/helm-charts


helm repo update
helm install grafana grafana/grafana

Step 5: Edit the grafana service

kubectl edit svc grafana

Step 6: Get the Grafana password from the secret

kubectl edit secret grafana

Copy the password and decode it with base64.
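A one-line alternative (assuming the default release name grafana and the chart's standard admin-password key):

kubectl get secret grafana -o jsonpath="{.data.admin-password}" | base64 -d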


Step 7: Now access Grafana using the LoadBalancer DNS name.

Step 8: Select Prometheus as the data source.

Enter the Prometheus LoadBalancer domain name and test the connection.


Step 9: Create the Dashboard

Import the dashboard
