Basic DevOps hands-on tasks for beginners looking to learn the ropes. This is a work in progress and will be updated as we go along.
Table of Contents:
- DevOps-Tasks
- Pre-requisites
- Task 1: Starting apps
- Task 1.1 Docker
- Task 1.1.1 PostgreSQL Server and Client
- Task 1.1.2 MySQL Server and Client
- Task 1.1.3 MongoDB Server and Client
- Task 1.1.4 Redis Server and Client
- Task 1.1.5 Your Website
- Task 1.1.6 Your Docker Image
- Task 1.1.7 Docker Exec
- Task 1.1.8 Docker Environment Variables
- Task 1.1.9 Docker Volumes
- Task 1.1.10 Docker Networks
- Task 1.1.11 Docker Commit
- Task 1.1.12 Dockerfile and Justauser
- Task 1.2 Docker-compose
- Task 1.2.1 Docker Compose Up/Down/Start/Stop
- Task 1.2.2 Docker Compose Logs
- Task 1.2.3 Docker Compose Ps
- Task 1.2.4 Docker Compose Exec
- Task 1.2.5 Docker Compose Scale
- Task 1.2.6 Docker Compose Top
- Task 1.2.7 Running a Single Service
- Task 1.2.8 Running Multiple Services
- Task 1.2.9 Your Compose
- Task 1.2.10 Your Compose from Source
- Task 1.3 Kubernetes
- Task 1.3.1 Starting a Cluster
- Task 1.3.2 Connecting to the Cluster
- Task 1.3.3 Creating a Pod
- Task 1.3.4 Creating a Deployment
- Task 1.3.5 Creating a StatefulSet
- Task 1.3.6 Creating a Secret
- Task 1.3.7 Creating a ConfigMap
- Task 1.3.8 Creating a Service
- Task 1.3.9 Creating an Ingress
- Task 1.3.10 Performing a Rollout Restart
- Task 1.3.11 Patching a Resource
- Task 1.3.12 Creating a Persistent Volume and Persistent Volume Claim
- Task 1.3.13 Updating a Deployment to Use a Persistent Volume
- Task 1.3.14 Creating a Horizontal Pod Autoscaler
- Task 1.3.15 Creating a Network Policy
- Task 1.3.16 Creating a Job
- Task 2: Pipelines
- Task 3: Data Formats
- Task 4: Infrastructure as Code
- Task 5: Cloud Providers
- Questions
- Terms, buzzwords and more
A few important notes on how to research and learn:
- Google is your best friend. You can find anything you want, including the official documentation.
- Stack Overflow is your second best friend. There is always someone who has had the same problem as you.
- YouTube is your third best friend. There is always a video explaining what you are looking for.
- Reddit is your fourth best friend. There is always a community that can help you.
Also, most of the tools have communities on Reddit, Discord, Slack, Glip, etc. It is good to join them and ask questions if you are stuck.
A good professional is always up to date with the latest news and trends. For starters, you can use the following websites and channels to keep up:
- PowerCert
- A Cloud Guru
- TLF
- Coding Tech
- Crosstalk Solutions
- ExplainingComputers
- Linux For Everyone
- The Linux Experiment
- DistroTube
- Craft Computing
- Gary Explains
- Simplilearn
- CBT Nuggets
- NCIX Tech Tips
- Just me and Open Source
- NixieDoesLinux
- Amit Nepal
- SysAdminGirl
- Computerphile
- GotoConferences
- elithecomputerguy
- NetworkChuck
- TechWorldwithNana
When you are not sure about what is out there in terms of technologies and resources, always consult with your local awesome list:
- AcalephStorage/awesome-devops
- awesome-soft/awesome-devops
- Lets-DevOps/awesome-learning
- awesome-selfhosted/awesome-selfhosted
- juandecarrion/awesome-self-hosted
- kahun/awesome-sysadmin
- epcim/awesome-sysadmin2
- devops-roadmap
In order to move forward, it is better to have a virtual machine running Ubuntu 22.04 (or latest stable) with the following tools (latest stable versions) installed:
- asdf
- git
- docker
- minikube
- terraform (via asdf)
- kubectl (via asdf)
- helm (via asdf)
- nodejs (via asdf)
- npm (via asdf)
- ansible (via asdf)
- jq
- curl
- wget
- DBeaver
- MongoDB Compass
- Redis Commander
Consult the official documentation of each tool to see how to install and configure it.
To ease your way into the DevOps world, we will be using the following services:
- Register at GitHub.com.
- Register at GitLab.com.
- Register at Azure DevOps.
- Register at Docker Hub.
- Register at Oracle for a free VM.
- Register at Cloudflare for managing DNS zones.
- If you are a student (or have a student email), get the GitHub Student Developer Pack.
TODO
Take a look at the examples for running databases, frameworks, apps and others here: Docker Examples
Start the latest stable PostgreSQL server via a `docker run` command. The server should be accessible from the host machine. In another container, run `psql` (the PostgreSQL client CLI) and connect to the server.
DoD:
- You must be able to view the PostgreSQL server version from the client via a command, be able to CRUD databases, and get the size of all databases.
Bonus: Also do it via GUI (ex. DBeaver).
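A minimal sketch of one way to do this (container and network names are illustrative; the same user-defined-network pattern carries over to the MySQL and MongoDB tasks below):

```bash
# User-defined network so the client container can reach the server by name
docker network create dbnet

# Server: publish 5432 to the host, set the required superuser password
docker run -d --name pg-server --network dbnet \
  -e POSTGRES_PASSWORD=example -p 5432:5432 postgres:latest

# Client: run psql from a second container on the same network
docker run -it --rm --network dbnet postgres:latest \
  psql -h pg-server -U postgres

# Inside psql:
#   SELECT version();                          -- server version
#   CREATE DATABASE mydb; DROP DATABASE mydb;  -- CRUD databases
#   SELECT datname, pg_size_pretty(pg_database_size(datname)) FROM pg_database;
```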
Start the latest stable MySQL server via a `docker run` command. The server should be accessible from the host machine. In another container, run `mysql` (the MySQL client CLI) and connect to the server.
DoD:
- You must be able to view the MySQL server version from the client via a command, be able to CRUD databases, and get the size of all databases.
Bonus: Also do it via GUI (ex. DBeaver).
Start a MongoDB server with the latest version via a docker run command. The server should be accessible from the host machine. In another container, run the MongoDB client CLI (mongosh) and connect to the server.
DoD:
- You must be able to view the MongoDB server version from the client via a command, be able to CRUD collections, and retrieve the size of all collections.
Bonus: Also do it via GUI (ex. MongoDB Compass).
Start the latest stable Redis server via a docker run command. The server should be accessible from the host machine. In another container, run the Redis CLI (redis-cli) and connect to the server.
DoD:
- You must be able to view the Redis server version from the client via a command, be able to CRUD data, and retrieve the size of all databases.
Bonus: Also do it via GUI (ex. Redis Commander).
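A sketch for the Redis pair, following the same pattern:

```bash
docker network create redisnet

# Server, published to the host on the default port
docker run -d --name redis-server --network redisnet -p 6379:6379 redis:latest

# Client from a second container on the same network
docker run -it --rm --network redisnet redis:latest redis-cli -h redis-server

# Inside redis-cli:
#   INFO server      # includes redis_version
#   SET foo bar / GET foo / DEL foo
#   INFO keyspace    # key counts per database
```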
Generate a simple website using ChatGPT (inline CSS, JS, and HTML) and save it as index.html. In the same folder, prepare a Dockerfile that uses `nginx:alpine` as a base image and copies the index.html file to the /usr/share/nginx/html folder. Build the image and run it as a container. The website should be accessible from the host machine on port 8080.
DoD:
- You must be able to view the website from the host machine via a browser.
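One possible Dockerfile and build/run sequence (the heredoc is just a convenience; any editor works):

```bash
# Dockerfile in the same folder as index.html
cat > Dockerfile <<'EOF'
FROM nginx:alpine
COPY index.html /usr/share/nginx/html/index.html
EOF

docker build -t my-website .
# nginx listens on 80 inside the container; publish it on host port 8080
docker run -d --name my-website -p 8080:80 my-website
curl http://localhost:8080
```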
Create a hub.docker.com account and push the image you created in the previous task to your account. The image should be public.
DoD:
- You must be able to view the image from the hub.docker.com website.
Run a new `alpine` container in the background. Then, use the `docker exec` command to run the `ls` command in the running container.
DoD:
- You must be able to view the output of the `ls` command from the running container.
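A sketch; note that a bare `alpine` container exits immediately, so it needs a long-running command to stay up in the background:

```bash
# sleep infinity keeps the container alive in the background
docker run -d --name my-alpine alpine sleep infinity
docker exec my-alpine ls /
```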
Create a Dockerfile that uses the `nginx:alpine` image as a base. In the Dockerfile, use the `ENV` instruction to define an environment variable `NGINX_PORT` with a default value of 8080. Then, run a container from this image, but override the `NGINX_PORT` environment variable to 8081 using the `-e` option in the `docker run` command.
DoD:
- You must be able to view the value of the `NGINX_PORT` environment variable from within the running container. You can do this by using the `docker exec` command to run the `env` command in the running container.
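A sketch of the Dockerfile and the override:

```bash
cat > Dockerfile <<'EOF'
FROM nginx:alpine
ENV NGINX_PORT=8080
EOF

docker build -t nginx-env .
docker run -d --name nginx-env -e NGINX_PORT=8081 nginx-env

# Verify the override
docker exec nginx-env env | grep NGINX_PORT   # expect NGINX_PORT=8081
```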
Create a Docker volume using the `docker volume create` command. Then, run a new `nginx:alpine` container and mount the created volume to the `/usr/share/nginx/html` directory in the container. Create a simple `index.html` file (write "hello world" inside) in the volume (from the host machine), and verify that the file is accessible from within the container.
DoD:
- You must be able to view the `index.html` file from within the running container. You can do this by using the `docker exec` command to run the `cat` command in the running container.
Bonus: Prepare the index.html file via a shell script.
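One way to approach it (the mountpoint path shown works for the default local volume driver on a Linux host and usually requires root; Docker Desktop differs):

```bash
docker volume create web-data
docker run -d --name web -v web-data:/usr/share/nginx/html nginx:alpine

# Write index.html into the volume from the host
mountpoint="$(docker volume inspect -f '{{ .Mountpoint }}' web-data)"
echo 'hello world' | sudo tee "$mountpoint/index.html"

docker exec web cat /usr/share/nginx/html/index.html
```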
Create a new Docker network using the `docker network create` command. Then, run two new `alpine` containers in this network. Use the `docker exec` command to install `ping` in both containers, and then use `ping` to verify that the containers can communicate with each other over the network.
DoD:
- You must be able to successfully ping one container from the other.
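A sketch (alpine's busybox ships a basic ping, but installing `iputils` matches the task wording):

```bash
docker network create testnet
docker run -d --name c1 --network testnet alpine sleep infinity
docker run -d --name c2 --network testnet alpine sleep infinity

docker exec c1 apk add --no-cache iputils
docker exec c2 apk add --no-cache iputils

# Containers on a user-defined network resolve each other by name
docker exec c1 ping -c 3 c2
docker exec c2 ping -c 3 c1
```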
Run a new `alpine` container, and use the `docker exec` command to install `curl` in the running container. Then, use the `docker commit` command to create a new image from the running container. Finally, run a new container from this image and verify that `curl` is installed.
DoD:
- You must be able to successfully run the `curl` command in a new container created from the committed image.
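A sketch of the commit round-trip:

```bash
docker run -d --name base alpine sleep infinity
docker exec base apk add --no-cache curl

# Freeze the modified filesystem into a new image
docker commit base alpine-with-curl
docker run --rm alpine-with-curl curl --version
```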
Create a Dockerfile based on the latest stable Debian image. Create a file called test.txt with some sample text inside. In the Dockerfile, use the `RUN` instruction to install `curl`, `git` and `wget` in the image. Then, build the image and run a new container from it. Verify that `curl` and `wget` are installed in the running container. After that, modify the Dockerfile to use a custom user called "justauser" and build the image again. Run a new container from the new image and verify that the user is "justauser". After that, modify the Dockerfile to copy the test.txt file to the /home/justauser folder and build the image again. Run a new container from the new image and verify that the test.txt file is in the /home/justauser folder and justauser can read it.
DoD:
- You must be able to successfully run the `curl`, `git` and `wget` commands in a new container created from the image.
- When running `id` in the container, the user should be "justauser".
- The test.txt file should be in the /home/justauser folder and justauser should be able to read it.
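A possible final Dockerfile covering all three iterations at once (the `debian:stable-slim` tag is one choice for "latest stable Debian"):

```bash
echo 'sample text' > test.txt

cat > Dockerfile <<'EOF'
FROM debian:stable-slim
RUN apt-get update && \
    apt-get install -y curl git wget && \
    rm -rf /var/lib/apt/lists/*
RUN useradd -m justauser
COPY --chown=justauser:justauser test.txt /home/justauser/test.txt
USER justauser
WORKDIR /home/justauser
EOF

docker build -t debian-justauser .
docker run --rm debian-justauser id             # uid shows justauser
docker run --rm debian-justauser cat test.txt   # readable by justauser
```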
Create a Docker Compose file that defines a service to run a simple web server (for example, nginx). Use the `docker compose up` command to start the service, and verify that the web server is running correctly. Stop the service and start it again. Then, use the `docker compose down -v` command to stop the service and remove the volumes.
DoD:
- You must be able to start the service with `docker compose up` and stop it with `docker compose down -v`.
- The web server must be accessible from the host browser when the service is up.
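A minimal compose file and lifecycle, as a sketch:

```bash
cat > docker-compose.yml <<'EOF'
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
EOF

docker compose up -d
curl http://localhost:8080
docker compose stop && docker compose start
docker compose down -v
```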
Start the service from the previous tasks, and then use the `docker compose logs` command to view the logs of the service.
DoD:
- You must be able to view the logs of the service with `docker compose logs`.
Start the service from the previous tasks, and then use the `docker compose ps` command to view the status of the service.
DoD:
- You must be able to view the status of the service with `docker compose ps`.
Start the service from the previous tasks, and then use the `docker compose exec` command to run a command (for example, `ls`) in the service container.
DoD:
- You must be able to run a command in the service container with `docker compose exec`.
Create a Docker Compose file that defines a service to run a stateless application (for example, a web server). Use the `docker compose up --scale` command to start multiple instances of the service, and verify that all instances are running correctly.
DoD:
- You must be able to start multiple instances of the service with `docker compose up --scale`.
- All instances must be running correctly.
Start the service from the previous tasks, and then use the `docker compose top` command to view the running processes in the service container.
DoD:
- You must be able to view the running processes in the service container with `docker compose top`.
Create a Docker Compose file named `docker-compose.yml` and define a service to run a single container based on https://hub.docker.com/r/yeasy/simple-web/. The container should expose a port, and the service should be named `myapp`. Run the service using Docker Compose.
DoD:
- The Docker Compose file `docker-compose.yml` should define a service named `myapp`.
- The service should run a single container based on the yeasy/simple-web image.
- The container should expose a port.
- Running `docker compose up` should start the service and the container.
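A sketch (the assumption here is that the yeasy/simple-web image serves HTTP on container port 80):

```bash
cat > docker-compose.yml <<'EOF'
services:
  myapp:
    image: yeasy/simple-web:latest
    ports:
      - "8080:80"   # assumes the app listens on 80 inside the container
EOF

docker compose up -d
curl http://localhost:8080
```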
Extend the previous Docker Compose file (`docker-compose.yml`) to define two more services: `web` and `database`. The `web` service should run a container based on an image of a web application (e.g., nginx), and the `database` service should run a container based on an image of a database server (e.g., MySQL). The two services should be able to communicate with each other.
DoD:
- The Docker Compose file should define three services named `web`, `database` and `myapp`.
- The `web` service should run a container based on an image of a web application (e.g., nginx).
- You must shell into the web container, install telnet and perform a telnet to the database container on port 3306 - the connection should be successful.
- From the `web` container, you must be able to ping the database container.
- From the `web` container, you must be able to get the website running on the `myapp` container using `curl`.
- The `database` service should run a container based on an image of a database server (e.g., MySQL).
- The containers in the `web` and `database` services should be able to communicate with each other.
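A sketch of one way to satisfy the DoD (image tags and the root password are illustrative; `busybox-extras` provides telnet on alpine-based images):

```bash
cat > docker-compose.yml <<'EOF'
services:
  myapp:
    image: yeasy/simple-web:latest
    ports:
      - "8080:80"
  web:
    image: nginx:alpine
  database:
    image: mysql:latest
    environment:
      MYSQL_ROOT_PASSWORD: example
EOF

docker compose up -d

# All services share the default compose network and resolve by service name
docker compose exec web sh -c 'apk add --no-cache busybox-extras curl'
docker compose exec web telnet database 3306
docker compose exec web ping -c 3 database
docker compose exec web curl http://myapp
```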
Create a docker-compose.yml file that will run your website from the previous task (1.1.5). The website should be accessible from the host machine on port `9999`.
DoD:
- From the host machine, you must be able to view the website when the compose file is up via your browser.
Create a docker-compose.yml file that will build and run your image's Dockerfile from the previous task (1.1.5). The website should be accessible from the host machine on port `9999`.
DoD:
- From the host machine, you must be able to view the website when the compose file is up via your browser.
Start a local Kubernetes cluster using Minikube.
DoD:
- You must be able to get the status of the local Kubernetes cluster using the `minikube status` command.
- Running `kubectl cluster-info` should display information about the running cluster.
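The commands involved, for reference:

```bash
minikube start     # provisions a single-node cluster; driver is auto-detected
minikube status
kubectl cluster-info
```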
Connect to the local Kubernetes cluster using `kubectl`, `k9s`, and `Lens`.
DoD:
- You must be able to view the nodes of the cluster using the `kubectl get nodes` command.
- You must be able to connect to the cluster using the `k9s` TUI and the `Lens` GUI.
Create a Pod named `my-pod` running the `nginx:1.14.2` image in the local Kubernetes cluster.
DoD:
- You must be able to create the Pod using `kubectl apply` with a YAML configuration file.
- Running `kubectl get pods` should display `my-pod` in the list.
- Running `kubectl describe pod my-pod` should show that the Pod is running the `nginx:1.14.2` image.
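A minimal Pod manifest, applied from stdin as a sketch (a separate YAML file works the same way):

```bash
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
    - name: nginx
      image: nginx:1.14.2
      ports:
        - containerPort: 80
EOF

kubectl get pods
kubectl describe pod my-pod
```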
Create a Deployment named `my-deployment` running the `nginx:1.14.2` image with 3 replicas in the local Kubernetes cluster.
DoD:
- You must be able to create the Deployment using `kubectl apply` with a YAML configuration file.
- Running `kubectl get deployments` should display `my-deployment` in the list.
- Running `kubectl describe deployment my-deployment` should show that the Deployment is running 3 replicas of the `nginx:1.14.2` image.
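A minimal Deployment manifest as a sketch; the `app: my-deployment` label is an arbitrary choice that later tasks here reuse:

```bash
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-deployment
  template:
    metadata:
      labels:
        app: my-deployment
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
EOF

kubectl get deployments
kubectl describe deployment my-deployment
```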
Create a StatefulSet named `my-statefulset` running the `nginx:1.14.2` image with 3 replicas in the local Kubernetes cluster.
DoD:
- You must be able to create the StatefulSet using `kubectl apply` with a YAML configuration file.
- Running `kubectl get statefulsets` should display `my-statefulset` in the list.
- Running `kubectl describe statefulset my-statefulset` should show that the StatefulSet is running 3 replicas of the `nginx:1.14.2` image.
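A sketch; unlike a Deployment, a StatefulSet also expects a governing (typically headless) Service, created here alongside it:

```bash
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: my-statefulset
spec:
  clusterIP: None        # headless: gives the pods stable DNS names
  selector:
    app: my-statefulset
  ports:
    - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-statefulset
spec:
  serviceName: my-statefulset
  replicas: 3
  selector:
    matchLabels:
      app: my-statefulset
  template:
    metadata:
      labels:
        app: my-statefulset
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
EOF

kubectl get statefulsets
```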
Create a Secret named `my-secret` with the data `username=admin` and `password=secret` in the local Kubernetes cluster.
DoD:
- You must be able to create the Secret using the `kubectl create secret` command.
- Running `kubectl get secrets` should display `my-secret` in the list.
- Running `kubectl describe secret my-secret` should show that the Secret contains the keys `username` and `password`.
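A sketch using literal values; the same `--from-literal` pattern works for the ConfigMap task that follows (`kubectl create configmap my-configmap --from-literal=log_level=info ...`):

```bash
kubectl create secret generic my-secret \
  --from-literal=username=admin \
  --from-literal=password=secret

kubectl get secrets
kubectl describe secret my-secret   # lists the keys; values stay hidden
```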
Create a ConfigMap named `my-configmap` with the data `log_level=info` and `max_connections=100` in the local Kubernetes cluster.
DoD:
- You must be able to create the ConfigMap using the `kubectl create configmap` command.
- Running `kubectl get configmaps` should display `my-configmap` in the list.
- Running `kubectl describe configmap my-configmap` should show that the ConfigMap contains the keys `log_level` and `max_connections`.
Create a Service named `my-service` that exposes `my-deployment` on port 80 in the local Kubernetes cluster.
DoD:
- You must be able to create the Service using `kubectl apply` with a YAML configuration file.
- Running `kubectl get services` should display `my-service` in the list.
- Running `kubectl describe service my-service` should show that the Service is routing traffic to `my-deployment` on port 80.
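A minimal Service manifest as a sketch; the selector must match the pod labels of the Deployment (here assumed to be `app: my-deployment`, as in the Deployment sketch above):

```bash
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-deployment   # must match the Deployment's pod labels
  ports:
    - port: 80
      targetPort: 80
EOF

kubectl get services
kubectl describe service my-service
```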
Create an Ingress named `my-ingress` that routes traffic to `my-service` on path `/` in the local Kubernetes cluster.
DoD:
- You must be able to create the Ingress using `kubectl apply` with a YAML configuration file.
- Running `kubectl get ingress` should display `my-ingress` in the list.
- Running `kubectl describe ingress my-ingress` should show that the Ingress is routing traffic to `my-service` on path `/`.
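A sketch; on Minikube an ingress controller has to be enabled first:

```bash
minikube addons enable ingress

kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 80
EOF

kubectl get ingress
```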
Perform a rollout restart on `my-deployment`.
DoD:
- You must be able to perform a rollout restart on `my-deployment` using the `kubectl rollout restart` command.
- Running `kubectl rollout status deployment my-deployment` should show that the Deployment has been restarted.
Update `my-deployment` to use the `nginx:1.16.1` image.
DoD:
- You must be able to update `my-deployment` to use the `nginx:1.16.1` image using the `kubectl set image` command.
- Running `kubectl describe deployment my-deployment` should show that the Deployment is now using the `nginx:1.16.1` image.
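For reference, assuming the container in the pod template is named `nginx` as in the Deployment sketch above:

```bash
kubectl set image deployment/my-deployment nginx=nginx:1.16.1
kubectl rollout status deployment my-deployment
```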
Create a Persistent Volume (PV) named `my-pv` with a capacity of 1Gi and access modes `ReadWriteOnce`. Also, create a Persistent Volume Claim (PVC) named `my-pvc` that requests a volume of 1Gi.
DoD:
- You must be able to create the PV and PVC using `kubectl apply` with a YAML configuration file.
- Running `kubectl get pv` should display `my-pv` in the list.
- Running `kubectl get pvc` should display `my-pvc` in the list.
- The `my-pvc` should be bound to `my-pv`.
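A sketch using a `hostPath` PV (fine for a local Minikube cluster; setting `storageClassName: manual` on both objects keeps the default dynamic provisioner from stepping in):

```bash
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: manual
  hostPath:
    path: /mnt/data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: manual
  resources:
    requests:
      storage: 1Gi
EOF

kubectl get pv
kubectl get pvc   # STATUS should become Bound
```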
Update `my-deployment` to mount the volume claimed by `my-pvc` at the path `/data`.
DoD:
- You must be able to update `my-deployment` to use the volume claimed by `my-pvc` using `kubectl apply` with a YAML configuration file.
- Running `kubectl describe deployment my-deployment` should show that the Deployment is mounting the volume at `/data`.
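A sketch of the fields to merge into the Deployment's pod template before re-applying the manifest (`my-deployment.yaml` is a hypothetical file name for your saved Deployment YAML):

```bash
# Fields to add under the Deployment's pod template:
#
#   spec:
#     template:
#       spec:
#         volumes:
#           - name: data
#             persistentVolumeClaim:
#               claimName: my-pvc
#         containers:
#           - name: nginx
#             volumeMounts:
#               - name: data
#                 mountPath: /data

kubectl apply -f my-deployment.yaml
kubectl describe deployment my-deployment   # should list the /data mount
```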
Create a Horizontal Pod Autoscaler (HPA) for `my-deployment` that maintains between 1 and 10 replicas and targets CPU utilization at 50%.
DoD:
- You must be able to create the HPA using `kubectl apply` with a YAML configuration file.
- Running `kubectl get hpa` should display the HPA for `my-deployment`.
- Running `kubectl describe hpa my-deployment` should show that the HPA is maintaining between 1 and 10 replicas and targeting CPU utilization at 50%.
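A sketch using the `autoscaling/v2` API; note the HPA only acts if metrics are available (on Minikube: `minikube addons enable metrics-server`):

```bash
kubectl apply -f - <<'EOF'
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-deployment
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-deployment
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50
EOF

kubectl get hpa
```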
Create a Network Policy that allows traffic to `my-deployment` only from Pods in the same namespace.
DoD:
- You must be able to create the Network Policy using `kubectl apply` with a YAML configuration file.
- Running `kubectl get networkpolicies` should display the Network Policy for `my-deployment`.
- Running `kubectl describe networkpolicy my-network-policy` (replace `my-network-policy` with the name of your Network Policy) should show that the Network Policy allows traffic to `my-deployment` only from Pods in the same namespace.
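A sketch; an empty `podSelector` in the `from` clause means "any pod in this namespace". Enforcement requires a CNI plugin with NetworkPolicy support (e.g., starting Minikube with `--cni=calico`):

```bash
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: my-network-policy
spec:
  podSelector:
    matchLabels:
      app: my-deployment   # the pods this policy protects
  ingress:
    - from:
        - podSelector: {}  # any pod in the same namespace
EOF

kubectl get networkpolicies
```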
Create a Job that runs the `busybox` image and executes the command `echo "Hello, Kubernetes!"`.
DoD:
- You must be able to create the Job using `kubectl apply` with a YAML configuration file.
- Running `kubectl get jobs` should display the Job.
- Running `kubectl logs job/my-job` (replace `my-job` with the name of your Job) should display "Hello, Kubernetes!".
TODO
TODO
TODO
TODO
TODO
TODO
TODO
TODO
TODO
TODO
TODO
TODO
TODO
TODO
TODO
TODO
TODO
TODO
TODO
TODO
TODO
TODO
TODO
TODO
TODO
Try to answer the following questions with a few sentences:
- TODO
In order to understand the DevOps (and IT) world, you need to know the following terms. Copy this table into a markdown file and fill it in: explain how you understand each term, list example tools or technologies, and, if the term is not a technology, describe some use cases:
Terms, buzzwords and more | Explain in one sentence | Provide example tools, technologies and use cases |
---|---|---|
devops pipelines | [YOUR ANSWER] | [YOUR ANSWER] |
containerization | [YOUR ANSWER] | [YOUR ANSWER] |
continuous integration | [YOUR ANSWER] | [YOUR ANSWER] |
continuous delivery | [YOUR ANSWER] | [YOUR ANSWER] |
continuous deployment | [YOUR ANSWER] | [YOUR ANSWER] |
build tools | [YOUR ANSWER] | [YOUR ANSWER] |
container fleet management | [YOUR ANSWER] | [YOUR ANSWER] |
version control system | [YOUR ANSWER] | [YOUR ANSWER] |
source control system | [YOUR ANSWER] | [YOUR ANSWER] |
container orchestration | [YOUR ANSWER] | [YOUR ANSWER] |
artifact repository | [YOUR ANSWER] | [YOUR ANSWER] |
AWS EC2 | [YOUR ANSWER] | [YOUR ANSWER] |
GCP Compute Instance | [YOUR ANSWER] | [YOUR ANSWER] |
DevSecOps | [YOUR ANSWER] | [YOUR ANSWER] |
immutable infrastructure | [YOUR ANSWER] | [YOUR ANSWER] |
server clustering | [YOUR ANSWER] | [YOUR ANSWER] |
server failover | [YOUR ANSWER] | [YOUR ANSWER] |
on-premise | [YOUR ANSWER] | [YOUR ANSWER] |
high availability | [YOUR ANSWER] | [YOUR ANSWER] |
configuration management | [YOUR ANSWER] | [YOUR ANSWER] |
repository branches | [YOUR ANSWER] | [YOUR ANSWER] |
application performance monitoring | [YOUR ANSWER] | [YOUR ANSWER] |
logs management | [YOUR ANSWER] | [YOUR ANSWER] |
dynamic scaling servers | [YOUR ANSWER] | [YOUR ANSWER] |
infrastructure resilience | [YOUR ANSWER] | [YOUR ANSWER] |
UAT | [YOUR ANSWER] | [YOUR ANSWER] |
regression testing | [YOUR ANSWER] | [YOUR ANSWER] |
development environment | [YOUR ANSWER] | [YOUR ANSWER] |
testing environment | [YOUR ANSWER] | [YOUR ANSWER] |
environment provisioning | [YOUR ANSWER] | [YOUR ANSWER] |
staging environment | [YOUR ANSWER] | [YOUR ANSWER] |
bastion server | [YOUR ANSWER] | [YOUR ANSWER] |
bare-metal server | [YOUR ANSWER] | [YOUR ANSWER] |
cloud computing | [YOUR ANSWER] | [YOUR ANSWER] |
big data | [YOUR ANSWER] | [YOUR ANSWER] |
IaaS | [YOUR ANSWER] | [YOUR ANSWER] |
PaaS | [YOUR ANSWER] | [YOUR ANSWER] |
SaaS | [YOUR ANSWER] | [YOUR ANSWER] |
I/O throughput | [YOUR ANSWER] | [YOUR ANSWER] |
Mean Time to Recovery | [YOUR ANSWER] | [YOUR ANSWER] |
linux distribution | [YOUR ANSWER] | [YOUR ANSWER] |
rolling update | [YOUR ANSWER] | [YOUR ANSWER] |
test automation | [YOUR ANSWER] | [YOUR ANSWER] |
technical debt | [YOUR ANSWER] | [YOUR ANSWER] |
VPC peering | [YOUR ANSWER] | [YOUR ANSWER] |
software development lifecycle | [YOUR ANSWER] | [YOUR ANSWER] |
microservices | [YOUR ANSWER] | [YOUR ANSWER] |
monolithic application | [YOUR ANSWER] | [YOUR ANSWER] |
type 1 hypervisor | [YOUR ANSWER] | [YOUR ANSWER] |
bootloader | [YOUR ANSWER] | [YOUR ANSWER] |
API | [YOUR ANSWER] | [YOUR ANSWER] |
deployment frequency | [YOUR ANSWER] | [YOUR ANSWER] |
percentage of failed deployments | [YOUR ANSWER] | [YOUR ANSWER] |
distributed vs decentralized systems | [YOUR ANSWER] | [YOUR ANSWER] |
continuous monitoring | [YOUR ANSWER] | [YOUR ANSWER] |
serverless computing | [YOUR ANSWER] | [YOUR ANSWER] |
RAID | [YOUR ANSWER] | [YOUR ANSWER] |
script shebang | [YOUR ANSWER] | [YOUR ANSWER] |
data science | [YOUR ANSWER] | [YOUR ANSWER] |
deep learning | [YOUR ANSWER] | [YOUR ANSWER] |
FOSS/FLOSS software | [YOUR ANSWER] | [YOUR ANSWER] |
Blue/Green deployment | [YOUR ANSWER] | [YOUR ANSWER] |
A/B Testing | [YOUR ANSWER] | [YOUR ANSWER] |
Canary releases | [YOUR ANSWER] | [YOUR ANSWER] |
Rolling releases | [YOUR ANSWER] | [YOUR ANSWER] |
Infrastructure as Code | [YOUR ANSWER] | [YOUR ANSWER] |
Compliance as Code | [YOUR ANSWER] | [YOUR ANSWER] |
Security as Code | [YOUR ANSWER] | [YOUR ANSWER] |
Pre-commit hooks | [YOUR ANSWER] | [YOUR ANSWER] |
SAST | [YOUR ANSWER] | [YOUR ANSWER] |
DAST | [YOUR ANSWER] | [YOUR ANSWER] |
cloud agnostic | [YOUR ANSWER] | [YOUR ANSWER] |
idempotent resource | [YOUR ANSWER] | [YOUR ANSWER] |
SDLC | [YOUR ANSWER] | [YOUR ANSWER] |
tainted resource | [YOUR ANSWER] | [YOUR ANSWER] |
An example of how to fill in data:
Terms, buzzwords and more | Explain in one sentence | Provide example tools, technologies and use cases |
---|---|---|
devops pipelines | Automated workflows that orchestrate software delivery. | Tools: Jenkins, CircleCI, GitLab CI/CD. Use Cases: Continuous delivery, deployment automation. |
containerization | Packaging applications along with their dependencies. | Technologies: Docker, Kubernetes. Use Cases: Scalable and portable deployments, microservices architecture. |
continuous integration | Regularly integrating code changes to detect issues early. | Tools: Travis CI, Jenkins, GitLab CI/CD. Use Cases: Early bug detection, ensuring code quality. |