GitOps
Deliver quality at speed
eBook 1
Table of Contents
• The Freedom of Choice
• What Happens When you Adopt GitOps?
• GitOps Hands On Tutorial
• Part 3: Setup CI and Connect a Container Registry
GitOps in Practice
A “you build it, you own it” development process requires tools that developers know and understand. GitOps is our name for how we
use developer tooling to drive operations.
GitOps is a way to do Continuous Delivery. More specifically, it is an operating model for building Cloud Native applications that unifies
Deployment, Monitoring and Management. It works by using Git as a source of truth for declarative infrastructure and applications.
Automated CI/CD pipelines roll out changes to your infrastructure when commits are pushed and approved in Git. GitOps also makes use of
diff tools to compare the actual production state with what's under source control, and alerts you when there is a divergence.
The ultimate goal of GitOps is to speed up development so that your team can make changes and updates safely and securely to
complex applications running in Kubernetes.
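A minimal sketch of that divergence check, with plain files standing in for the output of `git show` and `kubectl get -o yaml` (file names hypothetical):

```shell
#!/bin/sh
# Compare the desired state recorded in Git with the actual state
# exported from the cluster, and report any divergence.
check_drift() {
  desired=$1   # e.g. saved output of: git show HEAD:manifests/front-end-dep.yaml
  actual=$2    # e.g. saved output of: kubectl get deployment front-end -o yaml
  if diff -u "$desired" "$actual" >/dev/null 2>&1; then
    echo "in sync"
  else
    echo "drift detected"
  fi
}
```

A real GitOps controller runs this comparison continuously and alerts (or reconverges) when the cluster drifts from Git.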
Freedom of choice
Because there is no single tool that can do everything required in your pipeline, GitOps gives you the freedom to choose the best
tools for the different parts of your CI/CD pipeline. You can select a set of tools from the open source ecosystem or from closed source,
or, depending on your use case, you may even combine them. The most difficult part of creating a pipeline is gluing all of the pieces
together.
Whatever you choose for your continuous delivery pipeline, applying GitOps best practices with Git (or any version control) should
be an integral component of your process. It will make building and adopting continuous delivery in your organization easier and will
significantly speed up your team’s ability to ship features.
The Principles of GitOps
GitOps rests on four principles:
1. The entire system is described declaratively.
2. The canonical desired system state is versioned in Git.
3. Approved changes to the desired state can be automatically applied to the system.
4. Software agents ensure correctness and alert on divergence.
Key Benefits of GitOps
1. Increased Productivity
Continuous deployment automation with an integrated feedback control loop shortens your mean time to deployment. This allows your
team to ship 30-100x more changes per day, and increases overall development output 2-3 times.
4. Higher Reliability
With Git's capability to revert, roll back and fork, you gain stable and reproducible rollbacks. Because your entire system is described in Git,
you have a single source of truth from which to recover after a meltdown, reducing your mean time to recovery (MTTR) from hours to minutes.
What Happens When you Adopt GitOps?
→ Any developer who uses Git can start deploying new features to Kubernetes
→ The same workflows are maintained across development and operations
Since all of your developers are already living in Git, incorporating GitOps into your
organization is simple. Having everything in one place means that your operations team
can use the same workflow to make infrastructure changes by creating issues and
reviewing pull requests. GitOps allows you to roll back any kind of change made in your
cluster. In addition, you get built-in observability, which gives your teams more
autonomy to make changes.
Typical CI/CD Pipeline
What does a typical CI/CD pipeline look like in most organizations? Your development team writes and then pushes code into a
code repo. Let’s say for example that you have one microservice repo and in that repo, you bundle your application code along with
deployment YAML manifest files or a Helm chart that define how your application will run in your cluster. When you push that code
to Git, the continuous integration tool kicks off unit tests that eventually build the Docker image that gets pushed to the container
registry.
With a typical CI/CD pipeline, Docker images are deployed using some sort of bash script or another method of talking directly to
the cluster API.
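As a sketch, such a script usually tags the image with the commit SHA and then pushes straight to the cluster; the registry path and variable names here are hypothetical:

```shell
#!/bin/sh
# Derive the image reference CI would build for a commit: a short-SHA tag,
# as many CI deploy scripts do.
image_for_commit() {
  echo "quay.io/example/front-end:$(printf '%.7s' "$1")"
}
# A typical push-style CI job then talks straight to the cluster API:
#   docker build -t "$(image_for_commit "$TRAVIS_COMMIT")" .
#   docker push "$(image_for_commit "$TRAVIS_COMMIT")"
#   kubectl set image deployment/front-end \
#     front-end="$(image_for_commit "$TRAVIS_COMMIT")"   # needs cluster credentials in CI
```

The kubectl step is what forces you to hand cluster credentials to CI, which is the security problem discussed next.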
[Figure: a typical pipeline, split into a Continuous Integration stage and a Continuous Deployment stage.]
Security and the Typical CI/CD Pipeline
With this approach, your CI tooling pushes and deploys images to the cluster. For the CI system to apply the changes to a cluster,
you have to share your API credentials with the CI tooling and that means your CI tool becomes a high value target. If someone
breaks into your CI tool, they will have total control over your production cluster, even if your production cluster is highly secure.
But what happens if your cluster goes down and you need to recreate it? How do you restore the previous state of your
application? You would have to re-run all of your CI jobs to rebuild everything and then re-apply all of the workloads to the new cluster.
A typical CI pipeline doesn't keep an easily recoverable record of that state.
Let’s see how we can improve the typical CI/CD pipeline with GitOps.
The GitOps Deployment Pipeline
GitOps implements a Kubernetes controller that listens for and synchronizes deployments to your Kubernetes cluster. The controller uses the operator
pattern. This is significant on two levels:
1. It is more secure.
2. It automates complex, error-prone tasks like manually updating YAML manifests.
With the operator pattern, an agent acts on behalf of the cluster. It listens to events relating to custom resource changes, and then applies those changes
based on a deployment policy. The agent is responsible for synchronizing what’s in Git with what’s running in the cluster, providing a simple way for
your team to achieve continuous deployment.
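A file-based sketch of that synchronization step, with cp standing in for kubectl apply and two local files standing in for the Git checkout and the cluster (paths hypothetical):

```shell
#!/bin/sh
# Converge the recorded cluster state toward what Git says should run.
reconcile() {
  repo_manifest=$1    # desired state, from the checked-out config repo
  cluster_state=$2    # actual state, as recorded from the cluster
  if cmp -s "$repo_manifest" "$cluster_state"; then
    echo "in sync"
  else
    cp "$repo_manifest" "$cluster_state"   # in reality: kubectl apply -f
    echo "applied"
  fi
}
```

The real agent performs this step in a loop (or on watch events), so the cluster continually converges toward what is in Git.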
[Figure: developers have read/write (RW) access to the code repo in the VCS; CI has read-only (RO) access to the code and read/write access to the container registry and the base config repo.]
GitOps Separation of Privileges
GitOps separates CI from CD and is a more secure method of deploying applications to Kubernetes. The table below shows how
GitOps separates read/write privileges between the cluster, your CI and CD tooling and the container repository, providing your
team with a secure method of creating updates.
CI Tooling:
• Test, Build, Scan, Publish
• Runs outside the production cluster
• Read/Write access to the continuous integration environment

CD Tooling:
• Synchronize Git with the cluster
• Runs inside the production cluster
• Read/Write access to the production cluster
GitOps Hands On Tutorial
In this tutorial, we’ll show you how to set up a CI/CD pipeline. We’ll then deploy a demo application to a cluster, make a small update
to the application and deploy the change with Weave Cloud.
In our product Weave Cloud, the core GitOps machinery is in its CI/CD tooling, with the critical piece being continuous deployment
(CD) that supports Git-cluster synchronization. This provides a feedback loop that gives you control over your Kubernetes cluster.
The rest of this tutorial walks through a typical developer workflow for creating or updating a feature.
Tutorial Prerequisites
Before you can start doing GitOps, you'll need to set up the following items. Once complete, you will have an entire end-to-end CI/CD pipeline
set up. In this tutorial we use particular tools, but keep in mind that you can pick and choose the tools you need; you don't have to use the
ones listed here.
• A Kubernetes cluster: To complete this tutorial, you will need a Kubernetes cluster. This tutorial should work with any valid
Kubernetes installation. This example uses three Ubuntu hosts on DigitalOcean and then installs Kubernetes with kubeadm.
Sign up for DigitalOcean to receive a three month free trial (you'll have to input your credit card for this). There are plenty
of other options as well, such as minikube, or you can use one of the public cloud providers like Google Kubernetes Engine
or Amazon's EKS.
• GitHub account: You'll need to create a new repository for this tutorial and make it available from your cluster through
a URL from Git.
• Quay.io account: This is the container registry that we'll use in this tutorial.
• Travis account: This tutorial uses Travis as the continuous integration piece of the pipeline.
• Weave Cloud trial account: You'll use Weave Cloud to automate, observe and control your deployments.
Part 1: Spin up a Kubernetes Cluster
• Ubuntu 16.04
• 4GB or more of RAM per instance
kubeadm is a command line tool that you can use to easily spin up a cluster. In this tutorial, we’ll run the minimum commands to get a
cluster up and running. For information on all kubeadm command-line options, see the kubeadm docs.
kubeadm init
kubeadm auto-detects the network interface and advertises the master on the interface associated with the default gateway.
Make a record of the kubeadm join command that kubeadm init outputs; you will need it when it's time to
join the nodes. The token is used for mutual authentication between the master and any joining nodes.
5. Set up the environment for Kubernetes.
On the master run the following as a regular user:
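The commands kubeadm init prints for this step copy the admin kubeconfig so kubectl works for your regular user:

```shell
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```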
Confirm that the pod network is working and that the kube-dns pod is running; kubectl get pods --all-namespaces will show its status.
Once the kube-dns pod is up and running, you can join all of the nodes to form the cluster.
The nodes are where the workloads (containers, pods, etc.) run.
Join the nodes to your cluster with the kubeadm join command that was output when you initialized the master node:
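The command has this shape; the address, token and CA-cert hash below are placeholders for the values your own kubeadm init printed, so substitute rather than copy:

```shell
sudo kubeadm join <master-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```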
After a node has successfully joined, kubeadm confirms it in the terminal.
Run kubectl get nodes on the master to watch each machine join and to confirm that the cluster contains the number of machines you created.
Sign up for Weave Cloud and create a new instance. Weave Cloud instances are the primary workspaces for
your application and provide a view onto your cluster and the application that is running on it.
For automated deployments and GitOps, there is one additional configuration step in Weave Cloud that we will
cover in part four of this tutorial.
Part 2: Fork the Demo Application Repository
Before you can modify the microservices demo application, the Sock Shop, fork the following two repositories:
• https://github.com/microservices-demo/front-end - This is the front-end service of the Sock Shop
application. You will update the color of the buttons in this example.
• https://github.com/microservices-demo/microservices-demo - This is the repo that stores the Kubernetes
configuration files for the application. The Weave Cloud deployment agent automatically updates the front-
end YAML manifest file in this repository.
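After forking, clone your copies locally so you can edit them (substitute your GitHub username; the URLs follow GitHub's standard pattern):

```shell
git clone https://github.com/<YOUR_GITHUB_USERNAME>/front-end.git
git clone https://github.com/<YOUR_GITHUB_USERNAME>/microservices-demo.git
```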
Part 3: Setup CI and Connect a Container Registry
In Quay.io, create a new public repository called front-end. This is the container repository that Travis CI will
push the new images to.
In this example, you will hook up Travis CI to your GitHub and Quay.io accounts.
Connect your GitHub account by clicking the + button next to My Repositories and then toggle the build button
for <YOUR_GITHUB_USERNAME>/front-end so that Travis can automatically run builds for that repo.
1. Go back to Quay.io and select your repository. Create a Robot Account by selecting the + from the header,
and call it ci_push_pull.
2. Ensure that the Robot Account has Admin permissions in Quay on your repository.
3. Copy the username and the generated token for the Quay robot account.
Add the following environment variables to the front-end repository settings in Travis CI:

DOCKER_USER=<"user-name+ci_push_pull">
DOCKER_PASS=<"robot-token">

Where:
• <"user-name+ci_push_pull"> is your username, followed by the + sign and the name of the robot account.
• <"robot-token"> is the token copied from the Robot Token dialog box.
In your fork of the front-end repo, update the env section in .travis.yml so it points at your own Quay.io repository:

env:
  - GROUP=quay.io/<YOUR_QUAY_USERNAME> COMMIT="${TRAVIS_COMMIT}" TAG="${TRAVIS_TAG}" REPO=front-end
Commit and push this change to your fork of the front-end repo.
7. Modify the Manifest file so it Points to Your Container Image
Using an editor of your choice, open manifests/front-end-dep.yaml, from the microservices-demo repo
you forked and update the image line.
Change it from:
image: weaveworksdemos/front-end
To:
image: quay.io/$YOUR_QUAY_USERNAME/front-end:<deploy-tag>
Where:
• $YOUR_QUAY_USERNAME is your Quay.io username
• <deploy-tag> is set to master (although you may want to specify a branch in practice)

Go back to Travis CI and watch as the image is unit tested, built and pushed to Quay.io.
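If you prefer the command line to an editor, the same edit can be scripted. This sketch uses a stand-in file and a hypothetical quay.io/example path; point it at your real manifests/front-end-dep.yaml and username instead:

```shell
#!/bin/sh
set -e
# Stand-in for manifests/front-end-dep.yaml containing the original image line.
printf '        image: weaveworksdemos/front-end\n' > front-end-dep.yaml
# Rewrite the image line to point at your own registry and tag.
sed -i 's|image: weaveworksdemos/front-end|image: quay.io/example/front-end:master|' front-end-dep.yaml
cat front-end-dep.yaml
```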
Part 4: Let’s Get Started with GitOps
1. Select Deploy from the menu and follow the instructions that appear.
2. Config repository: add the SSH URL with the path to the YAML files for the microservices-demo repo.
Go to the Workload screen and watch as all of the services in the Sock Shop get automatically deployed, then go to Explore in Weave
Cloud to watch them roll out to the cluster.
Find the port that the cluster allocated for the front-end service by running kubectl get svc front-end -n sock-shop.
Launch the Sock Shop in your browser by going to the IP address of any of your node machines
and specifying the NodePort, for example http://<master_ip>:<NodePort>. You can find the IP
address of the machines in the DigitalOcean dashboard. The NodePort is typically 30001.
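A small sketch of pulling the NodePort out of the kubectl output, using a sample line in place of a live cluster (the sample values are hypothetical):

```shell
#!/bin/sh
# Extract the NodePort from a `kubectl get svc` line, where the PORT(S)
# column looks like 80:30001/TCP.
nodeport_of() {
  echo "$1" | sed -n 's|.*:\([0-9][0-9]*\)/TCP.*|\1|p'
}
# Sample line standing in for: kubectl get svc front-end -n sock-shop
line='front-end   NodePort   10.97.1.2   <none>   80:30001/TCP   5m'
nodeport_of "$line"
```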
Watch as the new image is built and then shows up in the Weave Cloud UI.
Click the Deploy button, and reload the Sock Shop in your browser. Notice that the buttons in the catalogue and on the cart have
all changed to red!
So that’s useful for manually gated changes, but it’s even better to do continuous delivery.
Further Resources
About Weaveworks
Weaveworks makes it fast and simple for developers to build and operate
containerized applications. The Weave Cloud operations-as-a-service platform
provides a continuous delivery pipeline for building and operating applications,
letting teams connect, monitor and manage microservices and containers on
any server or public cloud. Weaveworks also contributes to several open source
projects, including Weave Scope, Weave Flux and eksctl. It was one of the first
members of the Cloud Native Computing Foundation. Founded in 2014, the company
is backed by Google Ventures, Accel Partners and others.