With this, I could keep one file and move the architecture-specific configuration one layer up to the Makefile. Makefiles have the advantage of keeping your code outside your CI/CD specification file, which makes development and troubleshooting easier on your local machine. As I pass arguments for the various architectures to the Dockerfile, we have to run the docker buildx build command multiple times. If you have a Dockerfile that supports multiple architectures, you can instead use the --platform argument to pass all the needed architectures at once.

With the Dockerfile in place, the next step is to create and push the manifest. Please note that you have to push the architecture-specific images with unique image tags. These have to be different tags than the one specified in the docker manifest create command. After the manifest is created, the docker manifest push command pushes it to the registry.

With the Dockerfile and the Makefile in place, the last step was to add a job to the gitlab-ci.yml specification. Please note that you need to log in to the GitLab Container Registry; you can see in the gitlab-ci.yml file that we are using the docker login command for this.
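A minimal Makefile sketch of this flow could look like the following. The registry path, image tags, and build arguments are placeholders I chose for illustration, not values from the actual repository:

```makefile
# Illustrative sketch - registry path, tags, and build args are assumptions.
REGISTRY ?= registry.gitlab.com/mygroup/myproject
TAG      ?= latest

build:
	# Build and push one image per architecture, each with a unique tag.
	docker buildx build --push --platform linux/amd64 \
		--build-arg ARCH=amd64 -t $(REGISTRY):$(TAG)-amd64 .
	docker buildx build --push --platform linux/arm64 \
		--build-arg ARCH=arm64 -t $(REGISTRY):$(TAG)-arm64 .

manifest:
	# The manifest tag must differ from the architecture-specific tags.
	docker manifest create $(REGISTRY):$(TAG) \
		$(REGISTRY):$(TAG)-amd64 \
		$(REGISTRY):$(TAG)-arm64
	docker manifest push $(REGISTRY):$(TAG)
```

The CI job then only needs to run `make build manifest`, keeping the build logic testable on a local machine.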
However, the needed buildx plugin is not part of the docker base image we use in the pipeline. So I had to decide how to install the buildx plugin and make it available during the GitLab CI/CD pipeline runs. I identified two ways to achieve this: either you install and configure buildx during every run, or you create a Docker image that you can reuse during each run. I did the second and created the 56-build-base image, which is based on the docker:20 image, and added the code to the GitLab CI/CD configuration to build and push the image. As you can see in the gitlab-ci.yml file, we already use the docker buildx command to build and push this image. This is a typical chicken-and-egg problem that you often face during bootstrapping. To solve it, I built the image once on my laptop and pushed it to the registry.

With the base image in place, I could work on the Docker image that needs to run on x64 and ARM. There are two main things I had to consider: first, I needed to ensure that the Dockerfile works for both architectures, or that I have one Dockerfile for each architecture; and second, I had to build the images and push them to the registry with the proper configuration. The easiest option would be to write one Dockerfile for all architectures. Unfortunately, this is not always possible, so we have to evaluate the best solution on a case-by-case basis. Here, I used one file and pass arguments for the architecture-specific configuration. If you look at the Dockerfile, you can see that I use the Alpine Linux package manager apk to install some packages. This is always the best way, as you don't have to deal with finding the correct package for each architecture. In my case, however, I had to install a specific version of Terraform, for which I had to download the binary. These binaries come in separate versions for each architecture. To solve this problem, I passed variables for the URL and the filename to the Dockerfile.
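A sketch of what such a Dockerfile can look like; the base image version, argument names, and download handling are my own illustrative choices, not taken from the repository:

```dockerfile
# Illustrative sketch - base image, argument names, and paths are assumptions.
FROM alpine:3.18

# The build job passes architecture-specific values for these arguments,
# e.g. a download URL ending in linux_amd64.zip or linux_arm64.zip.
ARG TERRAFORM_URL
ARG TERRAFORM_FILE

# apk resolves the correct package build for the target architecture.
RUN apk add --no-cache curl unzip

# Download and install the architecture-specific Terraform binary.
RUN curl -fsSLo /tmp/${TERRAFORM_FILE} ${TERRAFORM_URL} \
    && unzip /tmp/${TERRAFORM_FILE} -d /usr/local/bin \
    && rm /tmp/${TERRAFORM_FILE}
```

Each architecture-specific invocation of docker buildx build then supplies its own pair of `--build-arg` values.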
Multi-Arch images can be built with the Docker CLI plugin buildx. In the GitLab CI/CD pipeline, we use the docker image, which contains the docker command, as our base image. Later, I explain how to create the images and push them to the GitLab Container Registry.
To build a Docker image that supports multiple processor architectures, an image has to be created for each architecture. Conveniently, the docker buildx command already supports Multi-Arch builds. In the following section, I describe how to make sure that the GitLab CI/CD pipeline can run this command.
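As a rough sketch, a GitLab CI/CD job that runs buildx could look like this. The job name, stage, base image path, and tag scheme are placeholders of my own; the `CI_REGISTRY_*` variables are predefined by GitLab:

```yaml
# Illustrative sketch - job name, stage, and image paths are assumptions.
build-image:
  stage: build
  image: registry.gitlab.com/mygroup/myproject/build-base:latest
  script:
    # Authenticate against the GitLab Container Registry.
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    # Build and push the image for one target platform.
    - docker buildx build --push --platform linux/arm64
        -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA-arm64" .
```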
In one project I was working on, we migrated a Ruby on Rails application to the AWS Elastic Container Service. We decided to run it on the AWS Graviton processor architecture: AWS's own silicon, a 64-bit ARM processor (Alpine ALC12B00) based on ARM Neoverse cores. After that decision, we looked at the entire stack to see which tools we needed to adapt. One tool that we use for the building and deployment process is a container base image. This image was not yet ready to run on the AWS Graviton processor architecture. In this post, I describe the work that I did to make the container base image available for both the x64 and ARM architectures. You can find the code I am referencing in this blog post in this GitLab Repository.