I have just started experimenting with CI/CD. I want to create a CI/CD pipeline for my project which builds/tests my application on Linux, macOS, and Windows. For the Linux part, I need to use a specific Docker container (quay.io/pypa/manylinux2010_x86_64:latest). Before starting the build in the container, I do the usual setup (e.g., yum -y upgrade, install CMake, etc.). And this is where I am starting to get confused. To my understanding, and after spending some time Googling, the two most common ways to do that are the following:

1) Build a new Docker container which is based on quay.io/pypa/manylinux2010_x86_64:latest but also comes with other dependencies installed. An example Dockerfile would be the following:

FROM quay.io/pypa/manylinux2010_x86_64:latest

RUN yum -y upgrade \
 && yum clean all \
 && rm -rf /var/cache/yum \
 && git clone https://github.com/Kitware/CMake.git \
 && cd CMake \
 && git checkout -b build v3.15.3 \
 && ./configure \
 && make \
 && make install \
 && cd .. \
 && rm -r CMake

This container is built once and stored in a repository. Then, every time the CI/CD pipeline runs, it fetches and uses this container.
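Building and publishing the custom image is then a one-off (or occasional) step outside the pipeline. A minimal sketch of the commands, where the registry host and image name (registry.example.com/my-team/my-manylinux-build) are placeholders, not real values:

```shell
# Build the custom image from the Dockerfile above.
docker build -t registry.example.com/my-team/my-manylinux-build:1.0 .

# Log in and push it so the CI/CD pipeline can pull it later.
docker login registry.example.com
docker push registry.example.com/my-team/my-manylinux-build:1.0
```

The CI/CD pipeline then references that pushed tag instead of the upstream quay.io image.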

2) Use the quay.io/pypa/manylinux2010_x86_64:latest image in the CI/CD pipeline directly and make the yum -y upgrade and CMake installation commands part of the CI/CD pipeline scripts. This means that every time the CI/CD pipeline runs, it: (a) fetches the docker image, (b) starts the container, (c) runs yum and installs the dependencies.
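In GitLab CI syntax, approach (2) might look roughly like this sketch (the job name and script details are assumptions for illustration, not taken from a real project):

```yaml
# .gitlab-ci.yml (sketch) -- approach 2: plain upstream image,
# dependencies installed on every pipeline run.
build-linux:
  image: quay.io/pypa/manylinux2010_x86_64:latest
  before_script:
    - yum -y upgrade
    - git clone https://github.com/Kitware/CMake.git
    - cd CMake && git checkout -b build v3.15.3 && ./configure && make && make install && cd ..
  script:
    - mkdir build && cd build
    - cmake .. && make
```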

  • Can someone provide me with a list of all the pros, cons, and technical implications of the two approaches? The ones I can think of are that approach (1) spends less time during the CI/CD build, but at the same time, the user has to be responsible for building and hosting the custom Docker image.

  • Is either of the two approaches considered a bad practice?

  • Given my use-case, could you please help me choose which approach is the right one for me?

FYI: I am mostly interested in the GitLab and GitHub Actions CI/CD services.

1 Answer

I would definitely go with 1.)

Option 2.) has drawbacks in my opinion; it

  1. consumes more CPU (you do the same work over and over), more time, and more bandwidth,
  2. is more error-prone/unreliable (every network outage of the distribution's package repositories leads to a broken build, along with every other problem at that level). And you want your CI/CD pipelines to stay focused as much as possible on building the software itself; that is arguably the most important part. If a build fails, a big red light should go on in your office, and you don't want that to happen because of a silly network error, only because of a real code error!
  3. is potentially less deterministic, so you may get a different version of an installed package on every run...
  4. In theory, for security reasons, it also makes sense to have a verified and secured "master" image and use that. Every (unverified) installation could introduce a security vulnerability...
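Point 3 can be reduced in either approach by pinning exact versions instead of installing whatever is current at build time; a hypothetical Dockerfile sketch (the package name "somepkg-1.2.3" is a placeholder, not a real yum package):

```dockerfile
FROM quay.io/pypa/manylinux2010_x86_64:latest

# Pin an exact package version instead of taking whatever yum resolves today.
RUN yum -y install somepkg-1.2.3 \
 && yum clean all
```

Checking out a fixed git tag (as with CMake v3.15.3 in the question) has the same effect for dependencies built from source.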

If you are doing professional software development, as you are, you will need a Docker/container repository anyway to upload your build artifacts, which are usually packaged as a container. So I would build a "golden" build container, put it into your repository, and use it as the base for your builds in the CI/CD pipeline. But if that is too difficult at first, start with option 2.).

Answered 2019-09-16T00:28:30.227