Carlos Sanchez's Weblog
Building Docker Images with Kaniko: Pushing to Google Container Registry (GCR)
This post is part of the Building Docker Images with Kaniko series, which also covers pushing to Docker registries, Azure Container Registry (ACR), and Amazon Elastic Container Registry (ECR).
To push to Google Container Registry (GCR) we need to log in to Google Cloud and mount our local $HOME/.config/gcloud directory, which contains our credentials, into the kaniko container so it can push to GCR. GCR supports caching, so kaniko will push the intermediate layers to gcr.io/$PROJECT/kaniko-demo/cache:_some_large_uuid_ to be reused in subsequent builds.
git clone https://github.com/carlossg/kaniko-demo.git
cd kaniko-demo

gcloud auth application-default login # get the Google Cloud credentials
PROJECT=$(gcloud config get-value project 2> /dev/null) # Your Google Cloud project id
docker run \
  -v "$HOME/.config/gcloud":/root/.config/gcloud:ro \
  -v "$(pwd)":/workspace \
  gcr.io/kaniko-project/executor:v1.0.0 \
  --destination "gcr.io/$PROJECT/kaniko-demo:kaniko-docker" \
  --cache
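After a build has run, the layers that kaniko cached can be inspected directly in the registry; a quick way to do that (repository name taken from the build above, and this assumes you are already authenticated with gcloud):

```shell
# List the cache tags kaniko pushed; each tag is the cache key of one command
gcloud container images list-tags "gcr.io/$PROJECT/kaniko-demo/cache"
```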
kaniko can cache layers created by RUN commands in a remote repository. Before executing a command, kaniko checks the cache for the layer. If it exists, kaniko will pull and extract the cached layer instead of executing the command. If not, kaniko will execute the command and then push the newly created layer to the cache.
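The cache tags in the log output below look like SHA-256 digests. As a rough sketch of the idea only (the exact inputs kaniko hashes are an assumption here, not its real algorithm, which also covers the files a command depends on):

```shell
# Hypothetical sketch: derive a content-addressed cache tag from a command
PROJECT=my-project              # placeholder project id
cmd='RUN cd /src && make'       # the Dockerfile command being cached
key=$(printf '%s' "$cmd" | sha256sum | cut -d' ' -f1)
# The cache repo/tag kaniko would check before executing the command
echo "gcr.io/$PROJECT/kaniko-demo/cache:$key"
```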
We can see in the output how kaniko uploads the intermediate layers to the cache.
INFO[0001] Resolved base name golang to build-env
INFO[0001] Retrieving image manifest golang
INFO[0001] Retrieving image golang
INFO[0004] Retrieving image manifest golang
INFO[0004] Retrieving image golang
INFO[0006] No base image, nothing to extract
INFO[0006] Built cross stage deps: map[0:[/src/bin/kaniko-demo]]
INFO[0006] Retrieving image manifest golang
INFO[0006] Retrieving image golang
INFO[0008] Retrieving image manifest golang
INFO[0008] Retrieving image golang
INFO[0010] Executing 0 build triggers
INFO[0010] Using files from context: [/workspace]
INFO[0011] Checking for cached layer gcr.io/api-project-642841493686/kaniko-demo/cache:0ab16b2e8a90e3820282b9f1ef6faf5b9a083e1fbfe8a445c36abcca00236b4f...
INFO[0011] No cached layer found for cmd RUN cd /src && make
INFO[0011] Unpacking rootfs as cmd ADD . /src requires it.
INFO[0051] Using files from context: [/workspace]
INFO[0051] ADD . /src
INFO[0051] Taking snapshot of files...
INFO[0051] RUN cd /src && make
INFO[0051] Taking snapshot of full filesystem...
INFO[0061] cmd: /bin/sh
INFO[0061] args: [-c cd /src && make]
INFO[0061] Running: [/bin/sh -c cd /src && make]
CGO_ENABLED=0 go build -ldflags '' -o bin/kaniko-demo main.go
INFO[0065] Taking snapshot of full filesystem...
INFO[0070] Pushing layer gcr.io/api-project-642841493686/kaniko-demo/cache:0ab16b2e8a90e3820282b9f1ef6faf5b9a083e1fbfe8a445c36abcca00236b4f to cache now
INFO[0144] Saving file src/bin/kaniko-demo for later use
INFO[0144] Deleting filesystem...
INFO[0145] No base image, nothing to extract
INFO[0145] Executing 0 build triggers
INFO[0145] cmd: EXPOSE
INFO[0145] Adding exposed port: 8080/tcp
INFO[0145] Checking for cached layer gcr.io/api-project-642841493686/kaniko-demo/cache:6ec16d3475b976bd7cbd41b74000c5d2543bdc2a35a635907415a0995784676d...
INFO[0146] No cached layer found for cmd COPY --from=build-env /src/bin/kaniko-demo /
INFO[0146] Unpacking rootfs as cmd COPY --from=build-env /src/bin/kaniko-demo / requires it.
INFO[0146] EXPOSE 8080
INFO[0146] cmd: EXPOSE
INFO[0146] Adding exposed port: 8080/tcp
INFO[0146] No files changed in this command, skipping snapshotting.
INFO[0146] ENTRYPOINT ["/kaniko-demo"]
INFO[0146] No files changed in this command, skipping snapshotting.
INFO[0146] COPY --from=build-env /src/bin/kaniko-demo /
INFO[0146] Taking snapshot of files...
INFO[0146] Pushing layer gcr.io/api-project-642841493686/kaniko-demo/cache:6ec16d3475b976bd7cbd41b74000c5d2543bdc2a35a635907415a0995784676d to cache now
If we run kaniko twice we can see how the cached layers are pulled instead of rebuilt.
INFO[0001] Resolved base name golang to build-env
INFO[0001] Retrieving image manifest golang
INFO[0001] Retrieving image golang
INFO[0004] Retrieving image manifest golang
INFO[0004] Retrieving image golang
INFO[0006] No base image, nothing to extract
INFO[0006] Built cross stage deps: map[0:[/src/bin/kaniko-demo]]
INFO[0006] Retrieving image manifest golang
INFO[0006] Retrieving image golang
INFO[0008] Retrieving image manifest golang
INFO[0008] Retrieving image golang
INFO[0010] Executing 0 build triggers
INFO[0010] Using files from context: [/workspace]
INFO[0010] Checking for cached layer gcr.io/api-project-642841493686/kaniko-demo/cache:0ab16b2e8a90e3820282b9f1ef6faf5b9a083e1fbfe8a445c36abcca00236b4f...
INFO[0012] Using caching version of cmd: RUN cd /src && make
INFO[0012] Unpacking rootfs as cmd ADD . /src requires it.
INFO[0049] Using files from context: [/workspace]
INFO[0049] ADD . /src
INFO[0049] Taking snapshot of files...
INFO[0049] RUN cd /src && make
INFO[0049] Found cached layer, extracting to filesystem
INFO[0051] Saving file src/bin/kaniko-demo for later use
INFO[0051] Deleting filesystem...
INFO[0052] No base image, nothing to extract
INFO[0052] Executing 0 build triggers
INFO[0052] cmd: EXPOSE
INFO[0052] Adding exposed port: 8080/tcp
INFO[0052] Checking for cached layer gcr.io/api-project-642841493686/kaniko-demo/cache:6ec16d3475b976bd7cbd41b74000c5d2543bdc2a35a635907415a0995784676d...
INFO[0054] Using caching version of cmd: COPY --from=build-env /src/bin/kaniko-demo /
INFO[0054] Skipping unpacking as no commands require it.
INFO[0054] EXPOSE 8080
INFO[0054] cmd: EXPOSE
INFO[0054] Adding exposed port: 8080/tcp
INFO[0054] No files changed in this command, skipping snapshotting.
INFO[0054] ENTRYPOINT ["/kaniko-demo"]
INFO[0054] No files changed in this command, skipping snapshotting.
INFO[0054] COPY --from=build-env /src/bin/kaniko-demo /
INFO[0054] Found cached layer, extracting to filesystem
In Kubernetes
To push to GCR we can use a service account and mount it as a Kubernetes secret, but when running on Google Kubernetes Engine (GKE) it is more convenient and safer to use the node pool service account.
When creating the GKE node pool the default configuration only includes read-only access to the Storage API, and we need full access in order to push to GCR. This is something that we need to change under Add a new node pool – Security – Access scopes – Set access for each API – Storage – Full. Note that the scopes cannot be changed once the node pool has been created.
If the nodes have the correct service account with the full storage access scope then we do not need to do anything extra in our kaniko pod; it will be able to push to GCR just fine.
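The same node pool configuration can be done from the command line with the gcloud scope aliases; cluster and pool names here are placeholders:

```shell
# Create a node pool whose nodes get the default GKE scopes plus
# full access to Storage (required to push to GCR)
gcloud container node-pools create kaniko-pool \
  --cluster my-cluster \
  --scopes gke-default,storage-full
```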
PROJECT=$(gcloud config get-value project 2> /dev/null)

cat << EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: kaniko-gcr
spec:
  restartPolicy: Never
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:v1.0.0
    imagePullPolicy: Always
    args: ["--dockerfile=Dockerfile",
           "--context=git://github.com/carlossg/kaniko-demo.git",
           "--destination=gcr.io/${PROJECT}/kaniko-demo:latest",
           "--cache=true"]
    resources:
      limits:
        cpu: 1
        memory: 1Gi
EOF
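To follow the build and clean up afterwards (pod name taken from the manifest above):

```shell
# Stream the kaniko build output, then remove the finished pod
kubectl logs -f kaniko-gcr
kubectl delete pod kaniko-gcr
```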
While testing Jenkins X I hit an issue that puzzled me. I use Kaniko to build Docker images and push them into Google Container Registry, but the push to GCR was failing with:
INFO[0000] Taking snapshot of files...
error pushing image: failed to push to destination gcr.io/myprojectid/croc-hunter:1: DENIED: Token exchange failed for project 'myprojectid'. Caller does not have permission 'storage.buckets.get'. To configure permissions, follow instructions at: https://cloud.google.com/container-registry/docs/access-control
During installation Jenkins X creates a GCP service account based on the name of the cluster (in my case jx-rocks) called jxkaniko-jx-rocks with the roles:
roles/storage.admin
roles/storage.objectAdmin
roles/storage.objectCreator
More roles are added if you install Jenkins X with Vault enabled.
A key is created for the service account and added to Kubernetes as secrets/kaniko-secret, containing the service account key JSON, which is later mounted in the pods running Kaniko, as described in their instructions.
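That mount looks roughly like this; a sketch following kaniko's documented pattern of pointing GOOGLE_APPLICATION_CREDENTIALS at the mounted key file (the secret name matches the one above, everything else is illustrative):

```shell
cat << EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: kaniko
spec:
  restartPolicy: Never
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:v1.0.0
    args: ["--dockerfile=Dockerfile",
           "--context=git://github.com/carlossg/kaniko-demo.git",
           "--destination=gcr.io/myprojectid/kaniko-demo:latest"]
    env:
    # Tell the Google Cloud client libraries where the key file lives
    - name: GOOGLE_APPLICATION_CREDENTIALS
      value: /secret/kaniko-secret
    volumeMounts:
    - name: kaniko-secret
      mountPath: /secret
  volumes:
  - name: kaniko-secret
    secret:
      secretName: kaniko-secret
EOF
```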
After looking at the service account and roles again and again they all seemed correct in the GCP console, but the Kaniko build was still failing. I found a Stack Overflow post claiming that the permissions were cached if you had a previous service account with the same name (WAT?), so I tried a new service account with the same permissions and a different name, and that worked. Weird. So I created a script to replace the service account with another one and update the Kubernetes secret.
ACCOUNT=jxkaniko-jx-rocks
PROJECT_ID=myprojectid

# delete the existing service account and policy binding
gcloud -q iam service-accounts delete ${ACCOUNT}@${PROJECT_ID}.iam.gserviceaccount.com
gcloud -q projects remove-iam-policy-binding ${PROJECT_ID} --member=serviceAccount:${ACCOUNT}@${PROJECT_ID}.iam.gserviceaccount.com --role roles/storage.admin
gcloud -q projects remove-iam-policy-binding ${PROJECT_ID} --member=serviceAccount:${ACCOUNT}@${PROJECT_ID}.iam.gserviceaccount.com --role roles/storage.objectAdmin
gcloud -q projects remove-iam-policy-binding ${PROJECT_ID} --member=serviceAccount:${ACCOUNT}@${PROJECT_ID}.iam.gserviceaccount.com --role roles/storage.objectCreator

# create a new one
gcloud -q iam service-accounts create ${ACCOUNT} --display-name ${ACCOUNT}
gcloud -q projects add-iam-policy-binding ${PROJECT_ID} --member=serviceAccount:${ACCOUNT}@${PROJECT_ID}.iam.gserviceaccount.com --role roles/storage.admin
gcloud -q projects add-iam-policy-binding ${PROJECT_ID} --member=serviceAccount:${ACCOUNT}@${PROJECT_ID}.iam.gserviceaccount.com --role roles/storage.objectAdmin
gcloud -q projects add-iam-policy-binding ${PROJECT_ID} --member=serviceAccount:${ACCOUNT}@${PROJECT_ID}.iam.gserviceaccount.com --role roles/storage.objectCreator

# create a key for the service account and update the secret in Kubernetes
gcloud -q iam service-accounts keys create kaniko-secret --iam-account=${ACCOUNT}@${PROJECT_ID}.iam.gserviceaccount.com
kubectl create secret generic kaniko-secret --from-file=kaniko-secret
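To double-check the result, the bindings for the recreated account can be listed; for example:

```shell
# Print every role bound to the service account in the project
gcloud projects get-iam-policy ${PROJECT_ID} \
  --flatten="bindings[].members" \
  --filter="bindings.members:${ACCOUNT}@${PROJECT_ID}.iam.gserviceaccount.com" \
  --format="value(bindings.role)"
```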
And that also worked, so I still have no idea why it was failing, but at least now I'll remember how to manually clean up and recreate the service account.