Using Jenkins and Kaniko to build Docker images on AWS

Last updated on November 25, 2022

Running Jenkins on AWS is easy with the serverless Fargate launch type, but what if we need Jenkins to build Docker images? This is the Docker-in-Docker problem, which is usually solved by giving the container privileged access to the host, something that isn't possible in Fargate. The alternative is to use Kaniko, a tool that builds a Docker image inside a container without requiring privileged access.

In this article, you'll learn how to use Kaniko from Jenkins to easily build and push a Docker image, supporting completely serverless CI pipelines.

Contents

How Kaniko can future-proof your Jenkins pipelines

Preparing your Jenkins environment for Kaniko

Script your pipeline to create and push images to AWS ECR

Complete Jenkins pipeline example


Choosing the best option: some Kaniko alternatives

Final thoughts

How Kaniko can future-proof your Jenkins pipelines

Kaniko runs in a Docker container with the sole purpose of building and pushing a Docker image. This design makes it easy to launch one from a Jenkins pipeline, running as many as needed on AWS.

Kaniko works by taking an input known as the build context, which contains the Dockerfile and any other files needed to build the image. The build context can be retrieved from an AWS S3 bucket, a Git repository, local storage, and more.

Kaniko then uses the build context to build the Docker image, before pushing the image to any supported registry, such as AWS ECR, Docker Hub, or Google's GCR.
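To make that concrete, here's a minimal sketch of how the executor is typically invoked. The --context and --destination flags are real Kaniko executor options; the bucket, account, and region values are placeholders, and later in this article we'll pass the same flags via an ECS task rather than docker run:

# Illustrative only: run the Kaniko executor locally against an S3 build context.
# Assumes AWS credentials are available to the container via the mounted ~/.aws directory.
docker run --rm \
    -v $HOME/.aws:/root/.aws:ro \
    gcr.io/kaniko-project/executor:latest \
    --context s3://my-build-context/context.tar.gz \
    --destination <account-id>.dkr.ecr.<region>.amazonaws.com/kaniko-demo:latest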


Benefits of using Kaniko with Jenkins

Thinking specifically of Jenkins pipelines running on AWS, Kaniko is helpful because:

  1. We can run Kaniko as a normal serverless AWS Fargate container using Elastic Container Service (ECS) or Elastic Kubernetes Service (EKS). Privileged host access is not required.
  2. We can run as many Kaniko containers as needed by scaling out our Jenkins pipelines.
  3. We don't have to worry about deleting images, running out of disk space, or anything else related to managing a server.

Preparing your Jenkins environment for Kaniko

The rest of this article shows you how to integrate Kaniko into your own Jenkins pipeline using AWS ECS. There are many ways to set this up, but we assume the build context is stored in S3 and the image is pushed to AWS ECR.

You need to configure a few things in your environment before you can run Kaniko containers on ECS. Below are the general details of what you need. Later, you can deploy an AWS Cloud Development Kit (CDK) application to your own AWS account to provision these resources exactly as required, and see Jenkins and Kaniko in action.

Create an S3 context bucket

Kaniko's build context is stored in S3, so we need a bucket that Jenkins has permission to push objects to. See below for the full list of Jenkins permissions.
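If you're creating this by hand rather than with the CDK application described later, the bucket is a one-liner with the AWS CLI (my-build-context is an example name, matching the IAM policy shown further down):

aws s3 mb s3://my-build-context --region <your-region>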


Target ECR Repository

The target ECR repository is where Kaniko will push your final Docker image. Your Jenkins pipeline can then pull the image directly from ECR.
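Creating the repository manually is just as simple (kaniko-demo is an example name, matching the image verification step later in this article):

aws ecr create-repository --repository-name kaniko-demo --region <your-region>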

ECS task role for Kaniko

If you store the build context in S3, Kaniko needs permission to retrieve it.

s3:GetObject

It also needs permission to push images to the target ECR repository.

ecr:GetAuthorizationToken
ecr:InitiateLayerUpload
ecr:UploadLayerPart
ecr:CompleteLayerUpload
ecr:PutImage
ecr:BatchGetImage
ecr:BatchCheckLayerAvailability

Create an IAM role for Kaniko with all of these permissions.
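As a sketch, a policy document granting these permissions could look like the following. The scoping of resources to a specific bucket and repository is my own assumption of good practice; note that ecr:GetAuthorizationToken can only be granted on all resources:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::my-build-context/*"
        },
        {
            "Effect": "Allow",
            "Action": "ecr:GetAuthorizationToken",
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ecr:InitiateLayerUpload",
                "ecr:UploadLayerPart",
                "ecr:CompleteLayerUpload",
                "ecr:PutImage",
                "ecr:BatchGetImage",
                "ecr:BatchCheckLayerAvailability"
            ],
            "Resource": "arn:aws:ecr:<region>:<account-id>:repository/kaniko-demo"
        }
    ]
}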

ECS task execution role for Kaniko

AWS uses the ECS execution role when managing the Kaniko container. It needs the following permissions to send logs to CloudWatch.

logs:CreateLogStream
logs:PutLogEvents

In this example we use the tkgregory/kaniko-for-ecr:latest public Docker image that I pushed to Docker Hub. If you store this image in ECR instead, you'll need to give the execution role the appropriate permissions to pull it.

ECS task definition for Kaniko

The task definition configures how Kaniko runs on ECS, including details about the Docker image to use, resource requirements, and IAM roles.

  • the launch type must be FARGATE, which means that AWS takes care of provisioning the underlying resources on which the container runs
  • the task role and task execution role should be set to those described above
  • for memory and CPU you can choose whatever suits you. I used 1024 MB memory and 512 CPU units with no problems.
  • the task definition must specify a single container for Kaniko
    • the image should be a Kaniko Docker image plus a config.json indicating that we use ECR. For that we need to extend the gcr.io/kaniko-project/executor base image and add the configuration file (see the sketch after this list). You can use the image I provided, tkgregory/kaniko-for-ecr:latest.
    • for logging you can use the awslogs driver to log to CloudWatch
    • no ports need to be exposed, since we don't send requests to Kaniko
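Here's a minimal sketch of how such an image can be built, following the approach documented by the Kaniko project (the exact contents of tkgregory/kaniko-for-ecr may differ):

# Dockerfile: extend the Kaniko executor image with an ECR-aware Docker config
FROM gcr.io/kaniko-project/executor:latest
COPY config.json /kaniko/.docker/config.json

Here config.json tells Kaniko to use the Amazon ECR credential helper that's built into the executor image:

{ "credsStore": "ecr-login" }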

Jenkins command line tools

You'll need jq and gettext-base (which includes the envsubst command we'll need later) installed on your Jenkins master/agent to follow the rest of this tutorial. This can be done in the Jenkins Dockerfile with a RUN instruction.

RUN apt-get update && apt-get install jq -y && apt-get install gettext-base -y

You also need the AWS CLI so the Jenkins pipeline can interact with AWS. Here's the Dockerfile instruction.

RUN curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" && unzip awscliv2.zip && ./aws/install

If you try out the demo CDK application, it uses a Jenkins Docker image I have provided with these prerequisites already installed.

Jenkins permissions

Jenkins needs these IAM permissions to interact with the above resources.

  • Put objects into the build context S3 bucket (s3:PutObject)
  • Run ECS tasks using the Kaniko task definition (ecs:RunTask)
  • Pass roles, to allow Jenkins to run an ECS task with the two Kaniko roles (iam:PassRole)
  • Describe tasks, used below to check whether Kaniko has stopped running (ecs:DescribeTasks)
  • List task definitions, used below to find the latest Kaniko task definition revision (ecs:ListTaskDefinitions)

If you provision the demo CDK application, your Jenkins role looks like this.

{ "Version": "2012-10-17", "Declaration": [ { "Action": "s3:PutObject", "Resource": "arn:aws:s3:::my-build-context/*" , "Effect": "RunTask" }, { "Action": "ecs:RunTask", "Resource": "arn:aws:ecs:<region>:<account-id>:task-definition/kaniko-builder: 11", "Effect": "Contact" }, { "Action": [ "ecs:DescribeTasks", "ecs:ListTaskDefinitions" ], "Resource": "*", "Effect": "Contact" }, { Action ": "creator:PassRole", "Resource": [ "creator::<account-id>:role/JenkinsMyStack-MyECSRole7D7DCDAA-2CJ4Y8PV1JDW", "creator::<account-id> :role/JenkinsKanikoStack-kanikotaskDefinitionExecutionRo-5LQJA8TGT3HM" ], "Effeito": "Allow" } ]};

Script your pipeline to create and push images to AWS ECR

With the above prerequisites in place, we need to modify our Jenkins pipeline to include these steps to build the Docker image with Kaniko.

1. Upload the build context to S3

After the application is built, the Dockerfile and any files it needs must be archived into a .tar.gz file and uploaded to the build context S3 bucket.

sh "tar c build/docker | gzip | aws s3 cp - 's3://$KANIKO_BUILD_CONTEXT_BUCKET_NAME/context.tar.gz'"

2. Create the run-task JSON file

When we call the AWS CLI to run the task, it's easiest to pass a file containing the Docker container commands that tell Kaniko what to build. This file can live in your application repository, as configured in the sample application that comes with the CDK demo application.


You can see in the following file that:

  • We use environment variables to refer to infrastructure items such as subnet IDs and security group IDs.
  • Environment variables are substituted during the image build stage of the Jenkins pipeline using the envsubst command.
  • Application-specific items, such as Dockerfile build arguments, can be hard-coded or added dynamically using the application's build tool.

{
    "cluster": "${KANIKO_CLUSTER_NAME}",
    "launchType": "FARGATE",
    "networkConfiguration": {
        "awsvpcConfiguration": {
            "subnets": ["${KANIKO_SUBNET_ID}"],
            "securityGroups": ["${KANIKO_SECURITY_GROUP_ID}"],
            "assignPublicIp": "DISABLED"
        }
    },
    "overrides": {
        "containerOverrides": [
            {
                "name": "kaniko",
                "command": [
                    "--context", "s3://${KANIKO_BUILD_CONTEXT_BUCKET_NAME}/context.tar.gz",
                    "--context-sub-path", "./build/docker",
                    "--build-arg", "JAR_FILE=spring-boot-api-example-0.1.0-SNAPSHOT.jar",
                    "--destination", "${KANIKO_REPOSITORY_URI}:latest",
                    "--force"
                ]
            }
        ]
    }
}

We can replace the environment variables with this command, which will generate a new file to use in step 4.

sh 'envsubst <ecs-run-task-template.json > ecs-run-task.json'

To learn more about the different commands you can pass to Kaniko, see the Kaniko docs.

3. Get the latest Kaniko task definition revision

When we run the Kaniko task, we run a task definition with a specific revision number. Since there is no way to tell AWS to run the latest task definition, we must query the revision using the CLI. The following Jenkins pipeline snippet orders task definition revisions newest first, selects the first one, and extracts the relevant part with sed.

script {
    LATEST_TASK_DEFINITION = sh(returnStdout: true, script: "/bin/bash -c 'aws ecs list-task-definitions --status active --sort DESC --family-prefix $KANIKO_TASK_FAMILY_PREFIX --query \'taskDefinitionArns[0]\' --output text | sed \'s:.*/::\''").trim()
}

4. Run the Kaniko task

We tell AWS to run Kaniko on ECS, passing the correct commands via the ecs-run-task.json file created in step 2. Using this file keeps the command more concise.

script {
    TASK_ARN = sh(returnStdout: true, script: "/bin/bash -c 'aws ecs run-task --task-definition $LATEST_TASK_DEFINITION --cli-input-json file://ecs-run-task.json | jq -j \'.tasks[0].taskArn\''").trim()
}

This kicks off the image build, which is where the real magic happens. ✨

5. Wait for the Kaniko task to complete

When you run an ECS task using the AWS CLI, it immediately returns a response. We want the Jenkins pipeline to wait for Kaniko to finish running so that we can use the created image in future stages of the pipeline.

sh "aws ecs wait task-running --cluster jenkins-cluster --task $TASK_ARN"echo "Aufgabe wird ausgeführt"sh "aws ecs wait task-stopped --cluster jenkins-cluster --task $TASK_ARN"echo "A tarefa foi parou"

6. Check the output of the Kaniko build

In this validation stage we make sure the Kaniko container exited with status code 0. Anything else means something went wrong, and we should fail the pipeline early.

script {
    EXIT_CODE = sh(returnStdout: true, script: "/bin/bash -c 'aws ecs describe-tasks --cluster jenkins-cluster --tasks $TASK_ARN --query \'tasks[0].containers[0].exitCode\' --output text'").trim()
    if (EXIT_CODE == '0') {
        echo 'Docker image successfully built and published'
    } else {
        error("Container exited with unexpected exit code $EXIT_CODE. Check the logs for details.")
    }
}

Complete Jenkins pipeline example

Using all of the techniques described above results in a pipeline like the one below. It builds a Spring Boot application using Gradle, then builds the Docker image using Kaniko and waits for the image to be built and pushed to ECR.

pipeline {
    agent any
    stages {
        stage('Build application') {
            steps {
                git url: 'https://github.com/tkgregory/spring-boot-api-example.git', branch: 'kaniko'
                sh "./gradlew assemble dockerPrepare -Porg.gradle.jvmargs=-Xmx2g"
                sh "tar c build/docker | gzip | aws s3 cp - 's3://$KANIKO_BUILD_CONTEXT_BUCKET_NAME/context.tar.gz'"
            }
        }
        stage('Build & publish image') {
            steps {
                sh 'envsubst < ecs-run-task-template.json > ecs-run-task.json'
                script {
                    LATEST_TASK_DEFINITION = sh(returnStdout: true, script: "/bin/bash -c 'aws ecs list-task-definitions --status active --sort DESC --family-prefix $KANIKO_TASK_FAMILY_PREFIX --query \'taskDefinitionArns[0]\' --output text | sed \'s:.*/::\''").trim()
                    TASK_ARN = sh(returnStdout: true, script: "/bin/bash -c 'aws ecs run-task --task-definition $LATEST_TASK_DEFINITION --cli-input-json file://ecs-run-task.json | jq -j \'.tasks[0].taskArn\''").trim()
                }
                echo "Submitted task $TASK_ARN"
                sh "aws ecs wait task-running --cluster jenkins-cluster --task $TASK_ARN"
                echo "Task is running"
                sh "aws ecs wait task-stopped --cluster jenkins-cluster --task $TASK_ARN"
                echo "Task has stopped"
                script {
                    EXIT_CODE = sh(returnStdout: true, script: "/bin/bash -c 'aws ecs describe-tasks --cluster jenkins-cluster --tasks $TASK_ARN --query \'tasks[0].containers[0].exitCode\' --output text'").trim()
                    if (EXIT_CODE == '0') {
                        echo 'Docker image successfully built and published'
                    } else {
                        error("Container exited with unexpected exit code $EXIT_CODE. Check the logs for details.")
                    }
                }
            }
        }
        stage('Deploy') {
            steps {
                echo 'Deploying'
            }
        }
    }
}

Demo setup provided for your AWS account

The fastest way to try out the above pipeline is to deploy the Jenkins Kaniko CDK application in your AWS account. A Jenkins instance is created along with:

  • all prerequisite features discussed above
  • Environment variables automatically exposed for use in the above pipeline
  • a preconfigured task definition, so all you have to do is click the build button to see Kaniko in action

For instructions on deploying this to your AWS account, see the README.md.

CDK?

The AWS Cloud Development Kit (CDK) is a way to programmatically generate CloudFormation infrastructure templates using popular languages such as TypeScript, Python, and JavaScript. For more complex deployments like the one in this article, it's easier, less verbose, and less error-prone than writing a CloudFormation template from scratch.

For more details about the CDK, and how to deploy Jenkins on AWS using it, see the article Jenkins Deployment on AWS ECS with CDK.

Check image upload

To verify that Kaniko really did build and push the application's Docker image, we can run it with this Docker command, pointing at the image in ECR.

(Video) Build & Push Docker Image using Jenkins Pipeline | Devops Integration Live Example Step By Step

docker run --rm -p 8080:8080 <your-aws-account-id>.dkr.ecr.<your-region>.amazonaws.com/kaniko-demo:latest

Remember first to authenticate Docker with AWS by running the command shown in Authenticate to your default registry in the AWS docs.
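For reference, at the time of writing that command looks like this (substitute your own region and account ID):

aws ecr get-login-password --region <your-region> | docker login --username AWS --password-stdin <your-aws-account-id>.dkr.ecr.<your-region>.amazonaws.com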

This should start the Spring Boot application, accessible on port 8080. If so, all is well. 👍


Choosing the best option: some Kaniko alternatives

If you don't think Kaniko is right for your situation right now, why not consider these alternatives?

ECS EC2 cluster

The "traditional" approach is to create an ECS cluster consisting of EC2 instances that you manage yourself. You can run Jenkins as an ECS task with the required privileged mode and/var/run/docker.sockmount. This allows you to run docker commands in the container using the docker daemon on the host.

There's certainly nothing wrong with this approach, but in my experience, it adds complexity because you have to think about autoscaling your ECS cluster, maximizing resource utilization, and managing servers.

AWS CodeBuild

CodeBuild provides out-of-the-box support for building Docker images in a very easy way. No additional containers required!

Of course, if you're already running your CI processes on Jenkins, you'll need some kind of integration with CodeBuild. You can find out exactly how this works in the article AWS CodeBuild integration with Jenkins pipelines.

Final thoughts

You've seen that it's entirely possible to use Kaniko to build a Docker image from a containerized Jenkins pipeline. While the solution isn't exactly a one-liner at this point, this approach has advantages over the alternatives. In particular, we don't need servers, and we don't need to adopt a completely different build tool like AWS CodeBuild.

Note that there are other approaches to solving this problem when building Java applications, specifically the Jib image builder tool. Check out Jib vs. Spring Boot to build Docker images, where I fully explore this option.


Want to learn more about Jenkins?
See the full selection of Jenkins tutorials.
