I have just started experimenting with CI/CD. I want to create a CI/CD pipeline for my project which builds/tests my application on Linux, macOS, and Windows. For the Linux part, I need to use a specific Docker container (quay.io/pypa/manylinux2010_x86_64:latest). Before starting the build in the container I do the usual setup (e.g., yum -y upgrade, install CMake, etc.). And this is where I am starting to get confused. To my understanding, and after spending some time Googling, the two most common ways to do that are the following:
1) Build a new Docker image which is based on quay.io/pypa/manylinux2010_x86_64:latest but also comes with the other dependencies installed. An example Dockerfile would be the following:
FROM quay.io/pypa/manylinux2010_x86_64:latest

# Upgrade system packages and build CMake 3.15.3 from source in a single layer
RUN yum -y upgrade && \
    yum clean all && \
    rm -rf /var/cache/yum && \
    git clone https://github.com/Kitware/CMake.git && \
    cd CMake && \
    git checkout -b build v3.15.3 && \
    ./configure && \
    make && \
    make install && \
    cd .. && \
    rm -rf CMake
This image is built once and pushed to a container registry. Then, every time the CI/CD pipeline runs, it pulls and uses this prebuilt image.
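For example, in GitLab CI the job could then reference the prebuilt image directly. This is only a sketch; registry.example.com/my-group/manylinux-cmake is a placeholder for wherever the image is actually hosted, and the script lines stand in for the real build:

# .gitlab-ci.yml (sketch) -- approach (1): the job runs inside the prebuilt image
build-linux:
  image: registry.example.com/my-group/manylinux-cmake:latest   # placeholder registry path
  script:
    - cmake --version              # CMake is already baked into the image
    - mkdir build && cd build
    - cmake ..
    - make

GitHub Actions has an analogous per-job container: setting if you go that route.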
2) Use the quay.io/pypa/manylinux2010_x86_64:latest image directly in the CI/CD pipeline and make the yum -y upgrade and CMake installation commands part of the CI/CD pipeline scripts. This means that every time the CI/CD pipeline runs, it: (a) fetches the Docker image, (b) starts the container, (c) runs yum and installs the dependencies.
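In GitLab CI, for instance, that could look roughly like the following (again only a sketch; the last line stands in for whatever the project's real build/test commands are):

# .gitlab-ci.yml (sketch) -- approach (2): install dependencies inside the job itself
build-linux:
  image: quay.io/pypa/manylinux2010_x86_64:latest
  script:
    - yum -y upgrade
    - git clone https://github.com/Kitware/CMake.git
    - cd CMake && git checkout -b build v3.15.3 && ./configure && make && make install && cd ..
    - mkdir build && cd build && cmake .. && make   # stand-in for the real build/test steps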
Can someone provide me with a list of the pros, cons, and technical implications of the two approaches? The ones I can think of are that approach (1) spends less time during the CI/CD build, but at the same time the user is responsible for building and hosting the custom Docker image.
Is any of the two approaches considered a bad practice?
Given my use-case, could you please help me choose which approach is the right one for me?
FYI: I am mostly interested in the GitLab CI/CD and GitHub Actions services.