The goal of these scripts and of the two associated prow jobs is to automate building and testing the docker-ce and containerd packages (as well as the static binaries) for ppc64le. The packages are then shared with the Docker team and made available in the https://download.docker.com package repositories.
To build these packages, we use the docker-ce-packaging and the containerd-packaging repositories.
For now, this process is semi-automated: we still need to manually edit the env.list file with the versions and the commit hashes.
At the beginning, there was only one periodic prow job, but building the docker and containerd packages took a little more than 3 hours and exceeded the 2-hour timeout. We first worked with a periodic prow job because the docker-ce-packaging repository had no tags and no webhooks pointing to it.
The prow job was then split into two postsubmit prow jobs.
- First prow job: postsubmit-build-docker.yaml
This postsubmit prow job is triggered by an edit of the env.list file. This file contains the information we need to build the packages:
- DOCKER_VERS: the latest version of docker, which we want to build
- DOCKER_PACKAGING_REF: the commit associated with the latest version of docker
- CONTAINERD_BUILD: if set to 1, a new containerd version has been released in the 1.4 branch and we have not built it yet; if set to 0, we have already built it in a previous prow job and do not need to build it again (we will still check that no new distribution has been added in the meantime)
- CONTAINERD_VERS: the latest version of containerd
- CONTAINERD_PACKAGING_REF: the commit associated with the latest version of containerd
- RUNC_VERS: the runc version used to build the static packages
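For illustration, an env.list could look like the following; all versions and commit references here are hypothetical placeholders, not real values:
```
# Hypothetical env.list: replace the placeholders with the real versions and commits
DOCKER_VERS="v20.10.x"
DOCKER_PACKAGING_REF="<commit-sha-in-docker-ce-packaging>"
CONTAINERD_BUILD="1"
CONTAINERD_VERS="v1.4.x"
CONTAINERD_PACKAGING_REF="<commit-sha-in-containerd-packaging>"
RUNC_VERS="v1.0.x"
```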
This prow job builds the dynamic docker packages and then pushes them to our internal COS bucket, before creating the file 'env/date.list', which contains the current date (timestamp). We use this date in the name of the directory where we store the docker packages in the COS bucket, so that the different builds are not confused.
- Access the internal COS bucket and set up the environment variables
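A minimal sketch of how the timestamp mentioned above could be recorded and reused; the exact date format and path layout are assumptions:
```
# Record a timestamp once, then reuse it in the COS bucket paths
mkdir -p env
DATE="$(date +%Y-%m-%d-%H-%M)"
echo "DATE=${DATE}" > env/date.list
# The packages would then be stored under a directory such as docker-ce-${DOCKER_VERS}-${DATE}/
```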
- Second prow job: postsubmit-build-container.yaml
This postsubmit prow job is triggered by an edit of job/postsubmit-build-docker, which is modified at the end of the first prow job. This prow job builds the dynamic containerd packages (if CONTAINERD_BUILD is set to 1 in the env.list) and the static packages, and tests all the packages.
- Start the docker daemon
- Access the internal COS bucket and set up the environment variables
- Get the dockertest directory, and the containerd directory if CONTAINERD_BUILD=0, from the COS bucket
- Build the dynamic containerd packages (if CONTAINERD_BUILD=1) and the static packages
- Test the dynamic and static packages and check whether there are any errors
- Push the docker and containerd packages to the COS bucket shared with the Docker team
The execution of the next prow job is triggered by pushing a file change to a tracking branch of a GitHub repository. The tracking branch is [prow-job-tracking](https://github.com/ppc64le-cloud/docker-ce-build/tree/prow-job-tracking).
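A sketch of this trigger mechanism, assuming any change to a tracked file on the branch is enough to fire the postsubmit job (the file name and commit message are illustrative):
```
git clone -b prow-job-tracking https://github.com/ppc64le-cloud/docker-ce-build.git
cd docker-ce-build
date > trigger.txt                     # any tracked-file change triggers the postsubmit job
git add trigger.txt
git commit -m "trigger next prow job"
git push origin prow-job-tracking
```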
- dockerctl.sh (usage: dockerctl [start|stop])
This script starts or stops the dockerd daemon. It runs dockerd-entrypoint.sh in the background and then checks that the docker daemon has started and is running. We specify the MTU; the reason is given in the Explanations section below.
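A minimal sketch of the start logic, assuming the daemon is considered ready once `docker info` succeeds (the real script's polling and timeout handling may differ):
```
# Start dockerd in the background with the required MTU
/usr/local/bin/dockerd-entrypoint.sh --mtu=1440 &
# Poll until the daemon answers
until docker info > /dev/null 2>&1; do
  sleep 1
done
echo "docker daemon is up"
```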
This script mounts the internal COS bucket for later use. It clones docker-ce-packaging at the commit hash specified in the env.list and gets the list of distributions into env-distrib.list.
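A sketch of these two steps, assuming s3fs is used for the mount (the build image installs s3fs, as noted below); the bucket name, endpoint, and mount point are placeholders:
```
# Mount the internal COS bucket with s3fs
mkdir -p /mnt/cos
s3fs <internal-bucket> /mnt/cos \
  -o url=https://<cos-endpoint> -o passwd_file=${HOME}/.passwd-s3fs
# Clone docker-ce-packaging and pin it to the commit from env.list
git clone https://github.com/docker/docker-ce-packaging.git
(cd docker-ce-packaging && git checkout "${DOCKER_PACKAGING_REF}")
```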
This script mounts the internal COS bucket, if it has not already been mounted. It gets the dockertest directory from the COS bucket. It also gets the latest containerd directory from the COS bucket if the latest containerd version has already been built; we need that containerd directory for the tests.
This script builds the dynamic docker packages at the version specified in the env.list. To save time, we build four distributions in parallel, as sketched below. After each package is successfully built, we push it to our internal COS bucket, so that the packages are preserved in case the prow job fails before finishing.
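A sketch of the four-way parallel build; `build_one` and `push_to_cos` are hypothetical helpers standing in for the real make invocation and upload logic:
```
# DISTROS comes from env-distrib.list, e.g. "ubuntu-focal debian-bullseye ..."
build_one() {
  local distro="$1"
  # "deb-${distro}" is a hypothetical target; the real targets live in docker-ce-packaging
  make -C docker-ce-packaging "deb-${distro}" && push_to_cos "${distro}"
}
export -f build_one   # push_to_cos (the hypothetical upload helper) must be exported too
echo "${DISTROS}" | tr ' ' '\n' | xargs -P 4 -I {} bash -c 'build_one "$@"' _ {}
```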
This script builds the dynamic containerd packages at the version specified in the env.list, as well as the static packages. As already mentioned, it only builds the containerd packages if CONTAINERD_BUILD is set to 1. We cannot build these packages in parallel, due to a git command in the Makefile. As with build-docker.sh, the packages are pushed to the internal COS bucket.
This script builds the static binaries and renames them (removing the version and adding ppc64le), as illustrated below. It should be run in a container; the container image is the same image used as the basis of the prow jobs: quay.io/powercloud/docker-ce-build.
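For example, the rename could look like this (the archive names are hypothetical):
```
# Hypothetical rename: strip the version from the archive name and tag the architecture
mv "docker-${DOCKER_VERS#v}.tgz" docker-ppc64le.tgz
mv "containerd-${CONTAINERD_VERS#v}.tgz" containerd-ppc64le.tgz
```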
This script sets up the tests for the docker-ce and containerd packages and the static binaries. It takes an optional 'test mode' argument:
- local (default): the tests run against the packages that were just built.
- staging: the tests install the packages from Docker's staging repository (yum and apt).
- release: the tests install the packages from Docker's official repository (yum and apt).
Note: the static packages are only tested in 'local' mode at the moment, as those packages are not published.
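A sketch of how the mode could select the package source; the repository URLs are the real ones, while the variable name and structure are illustrative:
```
case "${1:-local}" in
  local)   PACKAGE_SRC="locally built packages" ;;
  staging) PACKAGE_SRC="https://download-stage.docker.com" ;;
  release) PACKAGE_SRC="https://download.docker.com" ;;
  *)       echo "unknown test mode: $1" >&2; exit 1 ;;
esac
```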
Local mode: for each distribution, we build an image in which we install the newly built packages. We then run a container based on this image, in which we run test-launch.sh. We do this for each distribution, for the docker-ce packages and for the static binaries. The script generates an errors.txt file with a summary of all tests, containing the exit code of each test.
Staging mode: this script is used to check that the packages published by Docker on https://download-stage.docker.com are correct. It uses the same basis as test.sh but different images (test-repo-DEBS and test-repo-RPMS).
Release mode: this script is used to check that the packages published by Docker on https://download.docker.com are correct.
This script is called by test.sh. It runs three tests for every distro we have built, using powercloud/dockertest, and uses gotestsum to generate XML files:
- test 1: TestDistro
- test 2: TestDistroInstallPackage
- test 3: TestDistroPackageCheck
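A sketch of a gotestsum invocation producing a JUnit XML report for one of these tests; the report path and the DISTRO variable are illustrative:
```
# Run one test from the dockertest suite and write a JUnit XML report
gotestsum --junitfile "reports/${DISTRO}-TestDistroInstallPackage.xml" \
  -- -run TestDistroInstallPackage ./...
```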
This script checks the errors.txt file generated by test.sh to determine whether any of the package tests failed.
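A minimal sketch, assuming errors.txt holds one line per test ending with its exit code:
```
# Fail if any recorded exit code is non-zero (the file format is an assumption)
if grep -qv ' 0$' errors.txt; then
  echo "Some package tests failed:"
  grep -v ' 0$' errors.txt
  exit 1
fi
```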
This script pushes all the built packages to the COS bucket shared with the Docker team.
This Dockerfile is used to build a docker-in-docker container image. It serves as the basis of the prow jobs, as well as for the container that builds the packages and the one that tests them. It also installs s3fs to give direct access to the COS buckets.
These two Dockerfiles are used for testing the docker-ce and containerd packages. Depending on the distro type (debs or rpms), we use them to build a container to test the packages and run test-launch.sh.
These two Dockerfiles are used for testing the static binaries. Like the two aforementioned Dockerfiles, depending on the distro type (debs or rpms), we use them to build a container to test the packages and run test-launch.sh.
These two Dockerfiles are used for testing the packages after Docker has published them on https://download-stage.docker.com or https://download.docker.com. As with the previous Dockerfiles, depending on the distro type, we use them to build a container and test the packages with the script test-launch.sh.
You first need to set up the docker-token and docker-s3-credentials secrets with kubectl.
# docker-token
docker login
kubectl create secret generic docker-token \
  --from-file=.dockerconfigjson=$HOME/.docker/config.json \
  --type=kubernetes.io/dockerconfigjson
Template of the secret for secret-s3.yaml and git-credentials (the latter only for tests); you just need to add the password in base64.
apiVersion: v1
kind: Secret
metadata:
  name: # add name
type: Opaque
data:
  password: # add password in base64
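To produce the base64 value expected in the data field, you can run, for example:
```
# -n avoids encoding a trailing newline into the secret
echo -n 'the-actual-password' | base64
```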
kubectl apply -f secret-s3.yaml
You also need the dockerd-entrypoint.sh, which is the script that starts the docker daemon:
wget -O /usr/local/bin/dockerd-entrypoint.sh https://raw.githubusercontent.com/docker-library/docker/094faa88f437cafef7aeb0cc36e75b59046cc4b9/20.10/dind/dockerd-entrypoint.sh
chmod +x /usr/local/bin/dockerd-entrypoint.sh
pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-docker-build
spec:
  automountServiceAccountToken: false
  containers:
  - name: test
    command:
    - /usr/local/bin/dockerd-entrypoint.sh
    args:
    - "--mtu=1440"
    image: quay.io/powercloud/docker-ce-build
    resources:
      requests:
        cpu: "4000m"
        memory: "8Gi"
      limits:
        cpu: "4000m"
        memory: "8Gi"
    terminationMessagePolicy: FallbackToLogsOnError
    securityContext:
      privileged: true
    volumeMounts:
    - name: docker-graph-storage
      mountPath: /var/lib/docker
    env:
    - name: DOCKER_SECRET_AUTH
      valueFrom:
        secretKeyRef:
          name: docker-token
          key: .dockerconfigjson
    - name: S3_SECRET_AUTH
      valueFrom:
        secretKeyRef:
          name: docker-s3-credentials
          key: password
  terminationGracePeriodSeconds: 18
  volumes:
  - name: docker-graph-storage
    emptyDir: {}
kubectl apply -f pod.yaml
kubectl exec -it pod/pod-docker-build -- /bin/bash
Explanations:
- The MTU needs to be specified; see Investigate the docker daemon configuration for docker-in-docker
- The resources: see Resource Management for Pods and Containers
- The securityContext must be "privileged" and the volume /var/lib/docker must be specified: see Docker can now run within Docker
Run prow-build-docker.sh or prow-build-test-containerd.sh, except for the line calling dockerctl.sh: the dockerd-entrypoint.sh script has already been run as the entrypoint of the pod, so it should not be called a second time.
If the cluster has already been created, just get the config file containing the information needed to connect to the cluster and point the KUBECONFIG variable to that file. If there is no cluster, you can create a ppc64le cluster with kubeadm. The script that runs the prow job on the ppc64le cluster must be used from an x86 machine (kind has no ppc64le support).
On a ppc64le machine: see https://github.com/ppc64le-cloud/test-infra/wiki/Creating-Kubernetes-cluster-with-kubeadm-on-Power
On an x86 machine:
# Copy the admin.conf from the ppc64le machine
rm -rf $HOME/.kube/config/admin.conf
nano $HOME/.kube/config/admin.conf
export KUBECONFIG=$HOME/.kube/config/admin.conf
# Check that the cluster is running
kubectl cluster-info
On either of these machines, where the ppc64le cluster is running, configure the secrets (docker-s3-credentials and docker-token if needed).
On the x86 machine:
# Set CONFIG_PATH and JOB_CONFIG_PATH with an absolute path
export CONFIG_PATH="$(pwd)/test-infra/config/prow/config.yaml"
export JOB_CONFIG_PATH="$(pwd)/test-infra/config/jobs/periodic/docker-in-docker/periodic-build-docker.yaml"
# The job name is specified in your yaml
./test-infra/hack/test-pj.sh ${JOB_NAME}
- If you are asked about the Volume "ssh-keys-bot-ssh-secret" and you don't need it, answer with an empty value, or remove those lines from the config.yaml.
- In test-pj.sh, the --local flag specifies the local directory in which the logs will be stored. If you want the logs to be pushed to a COS bucket or displayed in the UI, remove the --local flag and the directory specified after it.
- Namespace: test-pods (the namespace in which the prow jobs are tested)
- The prow UI and the job history of the postsubmit-build-docker prow job