docker | Carlos Sanchez's Weblog
Building Docker Images with Kaniko: Pushing to Amazon Elastic Container Registry (ECR)
To push to Amazon Elastic Container Registry (ECR) we can either create a secret with AWS credentials or, more securely, run with IAM node instance roles.
When running on EKS we have an EKS worker node IAM role (NodeInstanceRole) to which we need to add the IAM permissions to pull from and push to ECR. These permissions are grouped in the arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryPowerUser policy, which can be attached to the node instance role.
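Attaching the policy from the AWS CLI could look like this (a sketch; the role name is an example and depends on how the cluster was created):
# attach the ECR power user policy to the EKS node instance role
# (eks-kaniko-NodeInstanceRole is a hypothetical name, use your cluster's role)
aws iam attach-role-policy \
  --role-name eks-kaniko-NodeInstanceRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryPowerUser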
When using instance roles we no longer need a secret, but we still need to configure kaniko to authenticate to AWS, using a config.json containing just { "credsStore": "ecr-login" }, mounted in /kaniko/.docker/.
We also need to create the ECR repository beforehand, and, if using caching, another one for the cache.
ACCOUNT=$(aws sts get-caller-identity --query Account --output text)
REPOSITORY=kanikorepo
REGION=us-east-1
# create the repository to push to
aws ecr create-repository --repository-name ${REPOSITORY}/kaniko-demo --region ${REGION}
# when using cache we need another repository for it
aws ecr create-repository --repository-name ${REPOSITORY}/kaniko-demo/cache --region ${REGION}

cat << EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: kaniko-eks
spec:
  restartPolicy: Never
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:v1.0.0
    imagePullPolicy: Always
    args: ["--dockerfile=Dockerfile",
           "--context=git://github.com/carlossg/kaniko-demo.git",
           "--destination=${ACCOUNT}.dkr.ecr.${REGION}.amazonaws.com/${REPOSITORY}/kaniko-demo:latest",
           "--cache=true"]
    volumeMounts:
    - name: docker-config
      mountPath: /kaniko/.docker/
    resources:
      limits:
        cpu: 1
        memory: 1Gi
  volumes:
  - name: docker-config
    configMap:
      name: docker-config
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: docker-config
data:
  config.json: |-
    { "credsStore": "ecr-login" }
EOF
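Once the pod completes we can check that the image was pushed, reusing the variables defined above:
# list the image tags pushed to the ECR repository
aws ecr list-images --repository-name ${REPOSITORY}/kaniko-demo --region ${REGION}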
Building Docker Images with Kaniko: Pushing to Azure Container Registry (ACR)
To push to Azure Container Registry (ACR) we can create an admin password for the ACR registry and use the standard Docker registry method, or we can use a token. We use that token to craft both the standard Docker config file at /kaniko/.docker/config.json and the ACR specific file used by the Docker ACR credential helper in /kaniko/.docker/acr/config.json. ACR does support caching, so it will push the intermediate layers to ${REGISTRY_NAME}.azurecr.io/kaniko-demo/cache:_some_large_uuid_ to be reused in subsequent builds.
RESOURCE_GROUP=kaniko-demo
REGISTRY_NAME=kaniko-demo
LOCATION=eastus
az login
# Create the resource group
az group create --name $RESOURCE_GROUP -l $LOCATION
# Create the ACR registry
az acr create --resource-group $RESOURCE_GROUP --name $REGISTRY_NAME --sku Basic
# If we want to enable password based authentication
# az acr update -n $REGISTRY_NAME --admin-enabled true

# Get the token
token=$(az acr login --name $REGISTRY_NAME --expose-token | jq -r '.accessToken')
And to build the image with kaniko:
git clone https://github.com/carlossg/kaniko-demo.git
cd kaniko-demo

cat << EOF > config.json
{
  "auths": {
    "${REGISTRY_NAME}.azurecr.io": {}
  },
  "credsStore": "acr"
}
EOF
cat << EOF > config-acr.json
{
  "auths": {
    "${REGISTRY_NAME}.azurecr.io": {
      "identitytoken": "${token}"
    }
  }
}
EOF
docker run \
-v `pwd`/config.json:/kaniko/.docker/config.json:ro \
-v `pwd`/config-acr.json:/kaniko/.docker/acr/config.json:ro \
-v `pwd`:/workspace \
gcr.io/kaniko-project/executor:v1.0.0 \
--destination $REGISTRY_NAME.azurecr.io/kaniko-demo:kaniko-docker \
--cache
In Kubernetes
If you want to create a new Kubernetes cluster:
az aks create --resource-group $RESOURCE_GROUP \
--name AKSKanikoCluster \
--generate-ssh-keys \
--node-count 2
az aks get-credentials --resource-group $RESOURCE_GROUP --name AKSKanikoCluster --admin
In Kubernetes we need to mount the Docker config file and the ACR config file with the token.
token=$(az acr login --name $REGISTRY_NAME --expose-token | jq -r '.accessToken')
cat << EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: kaniko-aks
spec:
  restartPolicy: Never
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:v1.0.0
    imagePullPolicy: Always
    args: ["--dockerfile=Dockerfile",
           "--context=git://github.com/carlossg/kaniko-demo.git",
           "--destination=${REGISTRY_NAME}.azurecr.io/kaniko-demo:latest",
           "--cache=true"]
    volumeMounts:
    - name: docker-config
      mountPath: /kaniko/.docker/
    - name: docker-acr-config
      mountPath: /kaniko/.docker/acr/
    resources:
      limits:
        cpu: 1
        memory: 1Gi
  volumes:
  - name: docker-config
    configMap:
      name: docker-config
  - name: docker-acr-config
    secret:
      secretName: kaniko-secret
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: docker-config
data:
  config.json: |-
    {
      "auths": {
        "${REGISTRY_NAME}.azurecr.io": {}
      },
      "credsStore": "acr"
    }
---
apiVersion: v1
kind: Secret
metadata:
  name: kaniko-secret
stringData:
  config.json: |-
    {
      "auths": {
        "${REGISTRY_NAME}.azurecr.io": {
          "identitytoken": "${token}"
        }
      }
    }
EOF
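After the build we can verify that the image made it to the registry:
# list the tags pushed to the kaniko-demo repository in ACR
az acr repository show-tags --name $REGISTRY_NAME --repository kaniko-demo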
Building Docker Images with Kaniko: Pushing to Google Container Registry (GCR)
To push to Google Container Registry (GCR) we need to log in to Google Cloud and mount our local $HOME/.config/gcloud, containing our credentials, into the kaniko container so it can push to GCR. GCR does support caching, so it will push the intermediate layers to gcr.io/$PROJECT/kaniko-demo/cache:_some_large_uuid_ to be reused in subsequent builds.
git clone https://github.com/carlossg/kaniko-demo.git
cd kaniko-demo

gcloud auth application-default login # get the Google Cloud credentials
PROJECT=$(gcloud config get-value project 2> /dev/null) # Your Google Cloud project id
docker run \
-v $HOME/.config/gcloud:/root/.config/gcloud:ro \
-v `pwd`:/workspace \
gcr.io/kaniko-project/executor:v1.0.0 \
--destination gcr.io/$PROJECT/kaniko-demo:kaniko-docker \
--cache
kaniko can cache layers created by RUN commands in a remote repository. Before executing a command, kaniko checks the cache for the layer. If it exists, kaniko will pull and extract the cached layer instead of executing the command. If not, kaniko will execute the command and then push the newly created layer to the cache.
We can see in the output how kaniko uploads the intermediate layers to the cache.
INFO[0001] Resolved base name golang to build-env
INFO[0001] Retrieving image manifest golang
INFO[0001] Retrieving image golang
INFO[0004] Retrieving image manifest golang
INFO[0004] Retrieving image golang
INFO[0006] No base image, nothing to extract
INFO[0006] Built cross stage deps: map[0:[/src/bin/kaniko-demo]]
INFO[0006] Retrieving image manifest golang
INFO[0006] Retrieving image golang
INFO[0008] Retrieving image manifest golang
INFO[0008] Retrieving image golang
INFO[0010] Executing 0 build triggers
INFO[0010] Using files from context: [/workspace]
INFO[0011] Checking for cached layer gcr.io/api-project-642841493686/kaniko-demo/cache:0ab16b2e8a90e3820282b9f1ef6faf5b9a083e1fbfe8a445c36abcca00236b4f...
INFO[0011] No cached layer found for cmd RUN cd /src && make
INFO[0011] Unpacking rootfs as cmd ADD . /src requires it.
INFO[0051] Using files from context: [/workspace]
INFO[0051] ADD . /src
INFO[0051] Taking snapshot of files...
INFO[0051] RUN cd /src && make
INFO[0051] Taking snapshot of full filesystem...
INFO[0061] cmd: /bin/sh
INFO[0061] args: [-c cd /src && make]
INFO[0061] Running: [/bin/sh -c cd /src && make]
CGO_ENABLED=0 go build -ldflags '' -o bin/kaniko-demo main.go
INFO[0065] Taking snapshot of full filesystem...
INFO[0070] Pushing layer gcr.io/api-project-642841493686/kaniko-demo/cache:0ab16b2e8a90e3820282b9f1ef6faf5b9a083e1fbfe8a445c36abcca00236b4f to cache now
INFO[0144] Saving file src/bin/kaniko-demo for later use
INFO[0144] Deleting filesystem...
INFO[0145] No base image, nothing to extract
INFO[0145] Executing 0 build triggers
INFO[0145] cmd: EXPOSE
INFO[0145] Adding exposed port: 8080/tcp
INFO[0145] Checking for cached layer gcr.io/api-project-642841493686/kaniko-demo/cache:6ec16d3475b976bd7cbd41b74000c5d2543bdc2a35a635907415a0995784676d...
INFO[0146] No cached layer found for cmd COPY --from=build-env /src/bin/kaniko-demo /
INFO[0146] Unpacking rootfs as cmd COPY --from=build-env /src/bin/kaniko-demo / requires it.
INFO[0146] EXPOSE 8080
INFO[0146] cmd: EXPOSE
INFO[0146] Adding exposed port: 8080/tcp
INFO[0146] No files changed in this command, skipping snapshotting.
INFO[0146] ENTRYPOINT ["/kaniko-demo"]
INFO[0146] No files changed in this command, skipping snapshotting.
INFO[0146] COPY --from=build-env /src/bin/kaniko-demo /
INFO[0146] Taking snapshot of files...
INFO[0146] Pushing layer gcr.io/api-project-642841493686/kaniko-demo/cache:6ec16d3475b976bd7cbd41b74000c5d2543bdc2a35a635907415a0995784676d to cache now
If we run kaniko twice we can see how the cached layers are pulled instead of rebuilt.
INFO[0001] Resolved base name golang to build-env
INFO[0001] Retrieving image manifest golang
INFO[0001] Retrieving image golang
INFO[0004] Retrieving image manifest golang
INFO[0004] Retrieving image golang
INFO[0006] No base image, nothing to extract
INFO[0006] Built cross stage deps: map[0:[/src/bin/kaniko-demo]]
INFO[0006] Retrieving image manifest golang
INFO[0006] Retrieving image golang
INFO[0008] Retrieving image manifest golang
INFO[0008] Retrieving image golang
INFO[0010] Executing 0 build triggers
INFO[0010] Using files from context: [/workspace]
INFO[0010] Checking for cached layer gcr.io/api-project-642841493686/kaniko-demo/cache:0ab16b2e8a90e3820282b9f1ef6faf5b9a083e1fbfe8a445c36abcca00236b4f...
INFO[0012] Using caching version of cmd: RUN cd /src && make
INFO[0012] Unpacking rootfs as cmd ADD . /src requires it.
INFO[0049] Using files from context: [/workspace]
INFO[0049] ADD . /src
INFO[0049] Taking snapshot of files...
INFO[0049] RUN cd /src && make
INFO[0049] Found cached layer, extracting to filesystem
INFO[0051] Saving file src/bin/kaniko-demo for later use
INFO[0051] Deleting filesystem...
INFO[0052] No base image, nothing to extract
INFO[0052] Executing 0 build triggers
INFO[0052] cmd: EXPOSE
INFO[0052] Adding exposed port: 8080/tcp
INFO[0052] Checking for cached layer gcr.io/api-project-642841493686/kaniko-demo/cache:6ec16d3475b976bd7cbd41b74000c5d2543bdc2a35a635907415a0995784676d...
INFO[0054] Using caching version of cmd: COPY --from=build-env /src/bin/kaniko-demo /
INFO[0054] Skipping unpacking as no commands require it.
INFO[0054] EXPOSE 8080
INFO[0054] cmd: EXPOSE
INFO[0054] Adding exposed port: 8080/tcp
INFO[0054] No files changed in this command, skipping snapshotting.
INFO[0054] ENTRYPOINT ["/kaniko-demo"]
INFO[0054] No files changed in this command, skipping snapshotting.
INFO[0054] COPY --from=build-env /src/bin/kaniko-demo /
INFO[0054] Found cached layer, extracting to filesystem
In Kubernetes
To push to GCR we can use a service account and mount it as a Kubernetes secret, but when running on Google Kubernetes Engine (GKE) it is more convenient and safer to use the node pool service account.
When creating the GKE node pool the default configuration only includes read-only access to the Storage API, and we need full access in order to push to GCR. This is something that we need to change under Add a new node pool – Security – Access scopes – Set access for each API – Storage – Full. Note that the scopes cannot be changed once the node pool has been created.
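The same scope can be set from the command line when creating a node pool; a sketch (the cluster and pool names are examples):
# create a node pool whose nodes get full access to the Storage API
gcloud container node-pools create kaniko-pool \
  --cluster my-cluster \
  --scopes storage-full,gke-default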
If the nodes have the correct service account with full storage access scope then we do not need to do anything extra on our kaniko pod, as it will be able to push to GCR just fine.
PROJECT=$(gcloud config get-value project 2> /dev/null)

cat << EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: kaniko-gcr
spec:
  restartPolicy: Never
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:v1.0.0
    imagePullPolicy: Always
    args: ["--dockerfile=Dockerfile",
           "--context=git://github.com/carlossg/kaniko-demo.git",
           "--destination=gcr.io/${PROJECT}/kaniko-demo:latest",
           "--cache=true"]
    resources:
      limits:
        cpu: 1
        memory: 1Gi
EOF
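When the pod completes we can list the tags pushed to GCR:
# list tags in the GCR repository
gcloud container images list-tags gcr.io/$PROJECT/kaniko-demo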
Building Docker Images with Kaniko: Pushing to Docker Registries
We can build a Docker image with kaniko and push it to Docker Hub or any other standard Docker registry.
Running kaniko from a Docker daemon does not provide much advantage over just running a docker build, but it is useful for testing or validation. It also helps understand how kaniko works and how it supports the different registries and authentication mechanisms.
git clone https://github.com/carlossg/kaniko-demo.git
cd kaniko-demo
# if you just want to test the build, no pushing
docker run \
-v `pwd`:/workspace gcr.io/kaniko-project/executor:v1.0.0 \
--no-push
Building by itself is not very useful, so we want to push to a remote Docker registry.
To push to Docker Hub or any other username and password based Docker registry we need to mount the Docker config.json file that contains the credentials. Caching will not work for Docker Hub, as it does not support repositories with more than 2 path sections (acme/myimage/cache), but it will work in Artifactory and maybe other registry implementations.
DOCKER_USERNAME=[...]
DOCKER_PASSWORD=[...]
AUTH=$(echo -n "${DOCKER_USERNAME}:${DOCKER_PASSWORD}" | base64)
cat << EOF > config.json
{
  "auths": {
    "https://index.docker.io/v1/": {
      "auth": "${AUTH}"
    }
  }
}
EOF
docker run \
-v `pwd`/config.json:/kaniko/.docker/config.json:ro \
-v `pwd`:/workspace \
gcr.io/kaniko-project/executor:v1.0.0 \
--destination $DOCKER_USERNAME/kaniko-demo:kaniko-docker
In Kubernetes
In Kubernetes we can manually create a pod that will do our Docker image build. We need to provide the build context, containing the same files that we would put in the directory used when building a Docker image with a Docker daemon. It should contain the Dockerfile and any other files used to build the image, i.e. referenced in COPY commands.
As build context we can use multiple sources (an S3 example follows this list):
GCS bucket (as a tar.gz file): gs://kaniko-bucket/path/to/context.tar.gz
S3 bucket (as a tar.gz file): s3://kaniko-bucket/path/to/context.tar.gz
Azure Blob Storage (as a tar.gz file)
Local directory, mounted in the /workspace dir as shown above: dir:///workspace
Git repository: git://github.com/acme/myproject.git#refs/heads/mybranch
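For example, to use an S3 bucket we first pack and upload the compressed context; a sketch (the bucket name is an example):
# pack the build context and upload it to S3
tar -C . -zcf context.tar.gz .
aws s3 cp context.tar.gz s3://kaniko-bucket/path/to/context.tar.gz
# then point kaniko at it with --context=s3://kaniko-bucket/path/to/context.tar.gz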
Depending on where we want to push to, we will also need to create the corresponding secrets and config maps. We are going to show examples building from a git repository, as it is the most typical use case.
Deploying to Docker Hub or a Docker registry
We will need the Docker registry credentials in a config.json file, the same way that we need them to pull images from a private registry in Kubernetes.
DOCKER_USERNAME=[...]
DOCKER_PASSWORD=[...]
DOCKER_SERVER=https://index.docker.io/v1/
kubectl create secret docker-registry regcred \
--docker-server=${DOCKER_SERVER} \
--docker-username=${DOCKER_USERNAME} \
--docker-password=${DOCKER_PASSWORD}

cat << EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: kaniko-docker
spec:
  restartPolicy: Never
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:v1.0.0
    imagePullPolicy: Always
    args: ["--dockerfile=Dockerfile",
           "--context=git://github.com/carlossg/kaniko-demo.git",
           "--destination=${DOCKER_USERNAME}/kaniko-demo"]
    volumeMounts:
    - name: docker-config
      mountPath: /kaniko/.docker
    resources:
      limits:
        cpu: 1
        memory: 1Gi
  volumes:
  - name: docker-config
    projected:
      sources:
      - secret:
          name: regcred
          items:
          - key: .dockerconfigjson
            path: config.json
EOF
Building Docker Images with Kaniko

This is the first post in a series about kaniko:
Building Docker Images with Kaniko
Building Docker Images with Kaniko: Pushing to Docker Registries
Building Docker Images with Kaniko: Pushing to Google Container Registry (GCR)
Building Docker Images with Kaniko: Pushing to Azure Container Registry (ACR)
Building Docker Images with Kaniko: Pushing to Amazon Elastic Container Registry (ECR)
kaniko is a tool to build container images from a Dockerfile, similar to docker build, but without needing a Docker daemon. kaniko builds the images inside a container, executing the Dockerfile commands in userspace, so it allows us to build the images in standard Kubernetes clusters.
This means that in a containerized environment, be it a Kubernetes cluster, a Jenkins agent running in Docker, or any other container scheduler, we no longer need to use Docker in Docker nor do the build in the host system by mounting the Docker socket, simplifying and improving the security of container image builds.
Still, kaniko does not make it safe to run untrusted container image builds; it relies on the security features of the container runtime. If you have a minimal base image that doesn’t require permissions to unpack, and your Dockerfile doesn’t execute any commands as the root user, you can run kaniko without root permissions.
kaniko builds the container image inside a container, so it needs a way to get the build context (the directory where the Dockerfile and any other files that we want to copy into the container are) and to push the resulting image to a registry.
The build context can be a compressed tar in a Google Cloud Storage or AWS S3 bucket, a local directory inside the kaniko container, which we need to mount ourselves, or a git repository.
kaniko can be run in Docker, Kubernetes, Google Cloud Build (sending our image build to Google Cloud), or gVisor. gVisor is an OCI sandbox runtime that provides a virtualized container environment, adding an additional security boundary to our container image builds.
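As a sketch, running the kaniko executor under gVisor just means selecting the runsc runtime, assuming gVisor is installed and registered with Docker:
# build without pushing, sandboxed by gVisor's runsc runtime
docker run --runtime=runsc \
  -v `pwd`:/workspace \
  gcr.io/kaniko-project/executor:v1.0.0 \
  --no-push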
Images can be pushed to any standard Docker registry; Google GCR and AWS ECR are also directly supported.
With Docker daemon image builds (docker build) we have caching. Each layer generated by RUN commands in the Dockerfile is kept and reused if the commands don’t change. In kaniko, because the image builds happen inside a container that is gone after the build, we lose anything built locally. To solve this, kaniko can push these intermediate layers resulting from RUN commands to the remote registry when using the --cache flag.
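A minimal sketch of a cached build (the registry and image names are examples; --cache-repo is optional and defaults to the destination repository with /cache appended):
/kaniko/executor \
  --context=git://github.com/acme/myproject.git \
  --destination=registry.example.com/acme/myimage:latest \
  --cache=true \
  --cache-repo=registry.example.com/acme/myimage/cache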
In this series I will be covering using kaniko with several container registries.
Running a JVM in a Container Without Getting Killed II

A follow up to Running a JVM in a Container Without Getting Killed.
In Java 10 there is improved container integration: no need to add extra flags, the JVM will use 1/4 of the container memory for heap.
$ docker run -m 1GB openjdk:10 java -XshowSettings:vm \
-version
VM settings:
Max. Heap Size (Estimated): 247.50M
Using VM: OpenJDK 64-Bit Server VM

openjdk version "10.0.1" 2018-04-17
OpenJDK Runtime Environment (build 10.0.1+10-Debian-4)
OpenJDK 64-Bit Server VM (build 10.0.1+10-Debian-4, mixed mode)
Java 10 obsoletes the -XX:MaxRAM parameter, as the JVM will correctly detect the value. You can still use the -XX:MaxRAMFraction=1 option to squeeze all the memory from the container.
$ docker run -m 1GB openjdk:10 java -XshowSettings:vm \
-XX:MaxRAMFraction=1 -version
OpenJDK 64-Bit Server VM warning: Option MaxRAMFraction was deprecated in version 10.0 and will likely be removed in a future release.
VM settings:
Max. Heap Size (Estimated): 989.88M
Using VM: OpenJDK 64-Bit Server VM

openjdk version "10.0.1" 2018-04-17
OpenJDK Runtime Environment (build 10.0.1+10-Debian-4)
OpenJDK 64-Bit Server VM (build 10.0.1+10-Debian-4, mixed mode)
But it can be risky if your container uses off-heap memory, as almost all the container memory is allocated to heap. You would have to either set -XX:MaxRAMFraction=2 and use only 50% of the container memory for heap, or resort to -Xmx.
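As the deprecation warning above hints, MaxRAMFraction is on its way out; JDK 10 also adds -XX:MaxRAMPercentage, which takes a percentage instead of a fraction and allows finer control. A sketch:
$ docker run -m 1GB openjdk:10 java -XshowSettings:vm \
  -XX:MaxRAMPercentage=75 -version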
Kaniko is a project launched by Google that allows building Dockerfiles without Docker or the Docker daemon. Kaniko can be used inside Kubernetes to build a Docker image and push it to a registry, supporting Docker registry, Google Container Registry and AWS ECR, as well as any other registry supported by Docker credential helpers.
This solution is still not safe, as containers run as root, but it is way better than mounting the Docker socket and launching containers in the host. For one, there are no leaked resources or containers running outside the scheduler.
To launch Kaniko from Jenkins in Kubernetes we just need an agent template that uses the debug Kaniko image (just to have cat and nohup) and a Kubernetes secret with the image registry credentials, as shown in this example pipeline.
UPDATED: some changes needed for the latest Kaniko
/**
 * This pipeline will build and deploy a Docker image with Kaniko
 * https://github.com/GoogleContainerTools/kaniko
 * without needing a Docker host
 *
 * You need to create a jenkins-docker-cfg secret with your docker config
 * as described in
 * https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-secret-in-the-cluster-that-holds-your-authorization-token
 */

def label = "kaniko-${UUID.randomUUID().toString()}"

podTemplate(name: 'kaniko', label: label, yaml: """
kind: Pod
metadata:
  name: kaniko
spec:
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:debug
    imagePullPolicy: Always
    command:
    - /busybox/cat
    tty: true
    volumeMounts:
    - name: jenkins-docker-cfg
      mountPath: /root
  volumes:
  - name: jenkins-docker-cfg
    projected:
      sources:
      - secret:
          name: regcred
          items:
          - key: .dockerconfigjson
            path: .docker/config.json
""") {

  node(label) {
    stage('Build with Kaniko') {
      git 'https://github.com/jenkinsci/docker-jnlp-slave.git'
      container(name: 'kaniko', shell: '/busybox/sh') {
        sh '''#!/busybox/sh
        /kaniko/executor -f `pwd`/Dockerfile -c `pwd` --insecure-skip-tls-verify --destination=mydockerregistry:5000/myorg/myimage
        '''
      }
    }
  }
}
Pros:
No need to mount the Docker socket or have the docker binary
No stray containers running outside of the scheduler
Cons:
Still not secure
Does not support the full Dockerfile syntax yet
Skaffold also has support for Kaniko, and can be used in your Jenkins X pipelines, which use Skaffold to abstract the image building.
The speaking season for this second half of the year is starting, and you’ll find me speaking about DevOps, Kubernetes, Jenkins,… at:
Scaling Jenkins with Kubernetes. Jenkins World, San Francisco, August 29-31
Using Kubernetes for Continuous Integration and Continuous Delivery. Java2Days in Sofia, Bulgaria, October 17-19
Divide and Conquer: Easier Continuous Delivery using Micro-Services. Bosnia Agile Day in Sarajevo, October 21
Using Kubernetes for Continuous Integration and Continuous Delivery. JokerConf in Saint Petersburg, Russia, November 4-5
Jenkins and Containers, a Match Made in Heaven. Agile Testing Days in Potsdam, Germany, November 13-17
If you organize a conference and would like me to give a talk in 2018 you can find me @csanchez.
Running a JVM in a Container Without Getting Killed

No pun intended.
JDK 8u131 has backported a nice feature from JDK 9: the ability of the JVM to detect how much memory is available when running inside a Docker container.
I have talked multiple times about the problems of running a JVM inside a container, and how it will default in most cases to a max heap of 1/4 of the host memory, not the container's.
For example, on my machine:
$ docker run -m 100MB openjdk:8u121 java -XshowSettings:vm -version
VM settings:
Max. Heap Size (Estimated): 444.50M
Ergonomics Machine Class: server
Using VM: OpenJDK 64-Bit Server VM
Wait, WAT? I set a container memory of 100MB and my JVM sets a max heap of 444M? It is very likely that the kernel will kill my JVM at some point.
Let’s try JDK 8u131 with the experimental option -XX:+UseCGroupMemoryLimitForHeap:
$ docker run -m 100MB openjdk:8u131 java \
-XX:+UnlockExperimentalVMOptions \
-XX:+UseCGroupMemoryLimitForHeap \
-XshowSettings:vm -version
VM settings:
Max. Heap Size (Estimated): 44.50M
Ergonomics Machine Class: server
Using VM: OpenJDK 64-Bit Server VM
OK, this makes more sense: the JVM was able to detect the container has only 100MB and set the max heap to 44M.
Let’s try in a bigger container:
$ docker run -m 1GB openjdk:8u131 java \
-XX:+UnlockExperimentalVMOptions \
-XX:+UseCGroupMemoryLimitForHeap \
-XshowSettings:vm -version
VM settings:
Max. Heap Size (Estimated): 228.00M
Ergonomics Machine Class: server
Using VM: OpenJDK 64-Bit Server VM
Mmm, now the container has 1GB but the JVM is only using 228M as max heap. Can we optimize this even more, given that nothing other than the JVM is running in the container? Yes we can!
$ docker run -m 1GB openjdk:8u131 java \
-XX:+UnlockExperimentalVMOptions \
-XX:+UseCGroupMemoryLimitForHeap \
-XX:MaxRAMFraction=1 -XshowSettings:vm -version
VM settings:
Max. Heap Size (Estimated): 910.50M
Ergonomics Machine Class: server
Using VM: OpenJDK 64-Bit Server VM
Using -XX:MaxRAMFraction we are telling the JVM to use available memory/MaxRAMFraction as max heap. Using -XX:MaxRAMFraction=1 we are using almost all the available memory as max heap.
UPDATE: follow up for Java 10+ at Running a JVM in a Container Without Getting Killed II.
A one-liner to run an SSL Docker registry generating a Let's Encrypt certificate.
This command will create a registry proxying the Docker Hub, caching the images in a registry volume. The Let's Encrypt certificate will be auto-generated and stored in the host dir as letsencrypt.json. You could also use a Docker volume to store it.
In order for the certificate generation to work the registry needs to be accessible from the internet on port 443. After the certificate is generated that's no longer needed.
docker run -d -p 443:5000 --name registry \
  -v `pwd`:/etc/docker/registry/ \
  -v registry:/var/lib/registry \
  -e REGISTRY_HTTP_ADDR=0.0.0.0:5000 \
  -e REGISTRY_HTTP_HOST=https://docker.example.com \
  -e REGISTRY_HTTP_TLS_LETSENCRYPT_CACHEFILE=/etc/docker/registry/letsencrypt.json \
  -e REGISTRY_HTTP_TLS_LETSENCRYPT_EMAIL=[email protected] \
  -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
  registry:2
You can also create a config.yml in this dir and run the registry using the file instead of environment variables:
version: 0.1
storage:
  filesystem:
http:
  addr: 0.0.0.0:5000
  host: https://docker.example.com
  tls:
    letsencrypt:
      cachefile: /etc/docker/registry/letsencrypt.json
      email: [email protected]
proxy:
  remoteurl: https://registry-1.docker.io
Then run:
docker run -d -p 443:5000 --name registry \
-v `pwd`:/etc/docker/registry/ \
-v registry:/var/lib/registry \
registry:2
If you want to use this as a remote repository and not just for proxying, remove the proxy entry in the configuration.
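To verify the pull-through cache works, pull any Docker Hub image through the new registry (docker.example.com is the example host used above):
docker pull docker.example.com/library/alpine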