GKE with Artifact Registry

Create a private Container Registry with Artifact Registry

Joaquín Menchaca (智裕)
6 min read · Jan 11, 2024


Popular cloud platforms offer a container registry service where you can manage container images for your organization. Azure has ACR (Azure Container Registry), AWS has ECR (Elastic Container Registry), and Google Cloud had GCR (Google Container Registry), which has since been deprecated.

Google Cloud has a new solution called Google Artifact Registry. The cool thing about this new service is that it supports not only container images but also other artifact formats, such as language packages (Maven, npm, Python) and OS packages (Apt, Yum).

So in a sense, this is like having either Sonatype Nexus Repository or JFrog Artifactory, but from the cloud provider itself.

This article will cover how to configure Google Artifact Registry for container images, and authorize read access for a GKE cluster.

Prerequisites

Tools

These are the tools used in this article.

  • Google Cloud SDK [gcloud command] to interact with Google Cloud
  • Docker [docker] tool for building and pushing images to a container registry
  • A POSIX shell [sh] such as bash [bash] or zsh [zsh] is used to run the commands. These come standard on Linux, and with macOS you can get the latest with brew install bash zsh if Homebrew is installed.

Configure an Artifact Registry

These instructions walk through how to set up a private Docker registry using Artifact Registry.

Shared Environment Variables

Here are some shared environment variables that will be used for this part. Save them in a file, such as gar_env.sh.

#######################################
# gar_env.sh
#######################################
export GAR_PROJECT_ID="<my-artifact-registry>"
export CLOUD_BILLING_ACCOUNT="<my-cloud-billing-account>"
export GAR_REPO_NAME="<my-artifact-repo>"
export GAR_LOCATION="us-central1" # change if needed

export GAR_REPO="$GAR_LOCATION-docker.pkg.dev/$GAR_PROJECT_ID/$GAR_REPO_NAME"

Once the placeholder values are filled in, source the file to make the variables available in the current shell.

source gar_env.sh
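Before moving on, it is worth confirming that every variable was actually filled in. Here is a small sanity check (a convenience sketch, not part of the original guide):

```shell
# Warn about any variable that is empty or still holds a <placeholder> value
for var in GAR_PROJECT_ID CLOUD_BILLING_ACCOUNT GAR_REPO_NAME GAR_LOCATION GAR_REPO; do
  eval "val=\${$var:-}"
  case "$val" in
    ""|"<"*">") echo "WARNING: $var is not set" ;;
    *)          echo "$var=$val" ;;
  esac
done
```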

Setup a Project

As a container registry is a shared service, the best practice is to give it its own Google project. This can be set up with the following steps.


# create project and link billing
gcloud projects create $GAR_PROJECT_ID
gcloud beta billing projects link $GAR_PROJECT_ID \
--billing-account $CLOUD_BILLING_ACCOUNT

# enable API
gcloud config set project $GAR_PROJECT_ID
gcloud services enable artifactregistry.googleapis.com

Create a Container Repository

Now that the project for Artifact Registry is set up, create a private container repository.

gcloud config set project $GAR_PROJECT_ID

# create repository
gcloud artifacts repositories create $GAR_REPO_NAME \
--repository-format=docker \
--location=$GAR_LOCATION \
--description="Docker repository"

# verify that the repository was created
gcloud artifacts repositories list

For the local Docker build environment, we need to configure credentials for the registry host with the following command:

# configure auth in $HOME/.docker/config.json
gcloud auth configure-docker $GAR_LOCATION-docker.pkg.dev
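This writes a credHelpers entry into $HOME/.docker/config.json that maps the registry host to the gcloud credential helper. You can confirm the entry with a quick check like this (a sketch; a plain cat of the file works too):

```shell
# Show the credential helper mapping that gcloud added for the registry host
if [ -f "$HOME/.docker/config.json" ]; then
  grep -A 3 '"credHelpers"' "$HOME/.docker/config.json" \
    || echo "no credHelpers entry found yet"
else
  echo "no Docker config found at $HOME/.docker/config.json"
fi
```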

Push an Image

# fetch an example image
docker pull hello-world

# push a new image into the repository
docker tag hello-world $GAR_REPO/hello-world
docker push $GAR_REPO/hello-world

In the cloud web console, you should see something like this:

Illustration: View of Repository via Google Cloud Console
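You can also verify the push from the command line. The snippet below assumes gar_env.sh has been sourced; the guard is just defensive, so it skips quietly if gcloud is unavailable:

```shell
# List images stored in the repository
if command -v gcloud >/dev/null 2>&1 && [ -n "${GAR_REPO:-}" ]; then
  gcloud artifacts docker images list "$GAR_REPO"
else
  echo "skipping: gcloud missing or GAR_REPO unset (source gar_env.sh first)"
fi
```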

Fetch the Image

Once an image has been pushed up to the private registry, you can pull it back down:

docker pull $GAR_REPO/hello-world

Integration with GKE

For this section, you will need to have a running GKE cluster and configure a Google Service Account that manages the cluster.

📔 NOTE: The default GCE service account is not secure, as it permits privilege escalation, and should never be used.

I have an article, listed below, on how to set up a least-privilege service account, a private network infrastructure, and a GKE cluster that uses this network and service account:

Configure Access to Artifact Repository

Both env.sh from the Ultimate Baseline GKE Cluster guide and gar_env.sh from this guide need to be sourced from this point forward.

source env.sh     # vars from the Ultimate Baseline GKE Cluster guide
source gar_env.sh # vars from this tutorial

Now you need to grant the GKE service account the Artifact Registry reader role in the Artifact Registry project.


gcloud projects add-iam-policy-binding $GAR_PROJECT_ID \
--member "serviceAccount:$GKE_SA_EMAIL" \
--role "roles/artifactregistry.reader"

📓 NOTE: This is how cross-project permissions work: principals, in this case service accounts, are added to roles. Each project contains a list of roles, and each role contains a list of principals, where a listed principal can come from the same project or an external one. The principal identity contains both the service account name and the project ID where the service account was created: service-account-name@project-id.iam.gserviceaccount.com.
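To confirm the binding took effect, you can filter the registry project's IAM policy by the service account. This is a sketch that assumes GKE_SA_EMAIL is set from the companion guide, with a guard so it skips when gcloud or the variables are unavailable:

```shell
# Show which roles the GKE service account holds in the registry project
if command -v gcloud >/dev/null 2>&1 \
   && [ -n "${GKE_SA_EMAIL:-}" ] && [ -n "${GAR_PROJECT_ID:-}" ]; then
  gcloud projects get-iam-policy "$GAR_PROJECT_ID" \
    --flatten="bindings[].members" \
    --filter="bindings.members:serviceAccount:$GKE_SA_EMAIL" \
    --format="table(bindings.role)"
else
  echo "skipping: gcloud, GKE_SA_EMAIL, or GAR_PROJECT_ID unavailable"
fi
```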

Testing Access

Using the same environment variables specified in gar_env.sh, push an image for Apache HTTP server to the private container registry.

# Fetch docker.io/library/httpd:latest
# NOTE: the --platform arg is only needed if your laptop uses an arm64 processor
docker pull --platform linux/amd64 httpd

# Push httpd to private container registry (make sure to push only amd64)
docker tag httpd $GAR_REPO/httpd && docker push $GAR_REPO/httpd

Now deploy a container that will pull the image from the private container registry:

kubectl create deployment httpd \
--image=$GAR_REPO/httpd \
--replicas=3 \
--port=80

Afterward, you can check the status with:

kubectl get all

You will have these components installed into the default namespace:

Illustration: Components Deployed as viewed from Monokle

You can check for events in the current namespace, and search for images that were pulled from the private container registry:

kubectl get events | grep "Pulled.*$GAR_REPO"
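You can also inspect the pods directly to double-check which image each one is running (kubectl create deployment sets the app=httpd label automatically). A defensive sketch:

```shell
# Print each httpd pod name and the image it runs; images should match $GAR_REPO/httpd
if command -v kubectl >/dev/null 2>&1; then
  kubectl get pods -l app=httpd \
    -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}' \
    || echo "could not reach the cluster"
else
  echo "kubectl not found"
fi
```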

Cleanup

Delete Apache HTTP service

When finished, you can delete the deployed web service with the following:

kubectl delete deployment httpd

Destroy GKE Cluster

If you used the Ultimate Baseline GKE cluster guide to create the GKE cluster, you can run the following:

gcloud container clusters delete $GKE_CLUSTER_NAME --project $GKE_PROJECT_ID

The above guide will have configured an external KUBECONFIG file, which you can delete with the following:

rm -f $KUBECONFIG

Destroy Network Infrastructure

The network infrastructure can be removed with:

gcloud compute routers nats delete $GKE_NAT_NAME --router $GKE_ROUTER_NAME --project $GKE_PROJECT_ID
gcloud compute routers delete $GKE_ROUTER_NAME --project $GKE_PROJECT_ID
gcloud compute networks subnets delete $GKE_SUBNET_NAME --project $GKE_PROJECT_ID
gcloud compute networks delete $GKE_NETWORK_NAME --project $GKE_PROJECT_ID

Cleanup Role Assignments

It is good practice to remove a service account's role bindings from projects before deleting the account, so that stale entries are not left behind.

#######################################
# list of roles configured earlier
#######################################
ROLES=(
roles/logging.logWriter
roles/monitoring.metricWriter
roles/monitoring.viewer
roles/stackdriver.resourceMetadata.writer
)

#######################################
# remove service account from roles
#######################################

# remove service account items from GKE project
for ROLE in "${ROLES[@]}"; do
gcloud projects remove-iam-policy-binding $GKE_PROJECT_ID \
--member "serviceAccount:$GKE_SA_EMAIL" \
--role $ROLE
done

# remove service account items from artifact registry project
gcloud projects remove-iam-policy-binding $GAR_PROJECT_ID \
--member "serviceAccount:$GKE_SA_EMAIL" \
--role "roles/artifactregistry.reader"

Destroy the GKE Service Account

gcloud iam service-accounts delete $GKE_SA_EMAIL --project $GKE_PROJECT_ID

Destroy Artifact Repositories

When finished and you no longer need the repository, you can run the following:

gcloud artifacts repositories delete $GAR_REPO_NAME \
--project $GAR_PROJECT_ID \
--location=$GAR_LOCATION

Conclusion

GCR is deprecated, so it is important to get up to speed with its replacement, Artifact Registry.

One thing not covered in this guide is adding permission to write images to the private container registry, which is something you would do when creating a build system, for example, CI pipelines. To keep this secure, you want to limit write privileges to only the pods that need them, using Workload Identity. For such a build system, ideally you would use build tools that do not require access to the host's Docker daemon, such as buildah or podman.
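As a rough sketch of that direction (not covered in this article): the namespace ci, the Kubernetes service account builder, and the Google service account gar-writer below are all hypothetical names. The two bindings follow the standard Workload Identity pattern: the Kubernetes service account is allowed to impersonate a Google service account, which in turn gets write access to the registry project.

```shell
# Hypothetical names for illustration only
CI_NS="ci"       # Kubernetes namespace running build pods
CI_KSA="builder" # Kubernetes service account used by those pods

if command -v gcloud >/dev/null 2>&1 \
   && [ -n "${GKE_PROJECT_ID:-}" ] && [ -n "${GAR_PROJECT_ID:-}" ]; then
  CI_GSA="gar-writer@$GKE_PROJECT_ID.iam.gserviceaccount.com"

  # Let the KSA impersonate the Google service account (Workload Identity)
  gcloud iam service-accounts add-iam-policy-binding "$CI_GSA" \
    --role "roles/iam.workloadIdentityUser" \
    --member "serviceAccount:$GKE_PROJECT_ID.svc.id.goog[$CI_NS/$CI_KSA]"

  # Grant that Google service account write access to the registry project
  gcloud projects add-iam-policy-binding "$GAR_PROJECT_ID" \
    --role "roles/artifactregistry.writer" \
    --member "serviceAccount:$CI_GSA"
else
  echo "skipping: gcloud, GKE_PROJECT_ID, or GAR_PROJECT_ID unavailable"
fi
```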

This guide, combined with the Ultimate Baseline GKE Cluster guide, is a useful tutorial for getting started with a more secure GKE, as the default settings for GKE are not secure.

Resources

Google Documentation

Other documentation
