AKS with Azure Container Registry

Using Azure Container Registry with Azure Kubernetes Service

Joaquín Menchaca (智裕)
8 min read · Jul 24, 2021


A private container registry is useful for hosting, well, private images, but it is also invaluable for republishing images that may otherwise be unavailable due to outages or low availability, such as images on the Quay registry in the last few years, or on less reliable registries like GHCR (GitHub Container Registry).

In this article, we will cover using Azure Container Registry (ACR) with Azure Kubernetes Service (AKS). Each of these components by itself is not too complex, but when they are combined, the logistics of deploying applications can get complicated.

What this article will cover

This article will cover building a Python client that will connect to the Dgraph distributed graph database using gRPC. This client will be built with Docker, pushed to ACR, and finally deployed to AKS using the image pulled from ACR.

This will be implemented through running these steps:

  1. Provision Azure Resources
  2. Deploy Dgraph distributed graph database
  3. Build and Push pydgraph-client utility image to ACR
  4. Deploy pydgraph-client utility with the image pulled from ACR
  5. Demonstrate using the client to run gRPC and HTTP mutations and queries

Articles in Series

This series shows how to both secure and load balance gRPC and HTTP traffic.

  1. AKS with Azure Container Registry (this article)
  2. AKS with Calico network policies
  3. AKS with Linkerd service mesh
  4. AKS with Istio service mesh

Requirements

To create the Azure cloud resources, you will need an Azure subscription with permission to create resources.

Required tools

These tools are required for this article:

  • Azure CLI tool (az): command line tool that interacts with the Azure API.
  • Kubernetes client tool (kubectl): command line tool that interacts with the Kubernetes API.
  • Helm (helm): command line tool for “templating and sharing Kubernetes manifests” (ref) that are bundled as Helm chart packages.
  • helm-diff plugin: allows you to see the changes made with helm or helmfile before applying the changes.
  • Helmfile (helmfile): command line tool that uses a “declarative specification for deploying Helm charts across many environments” (ref).
  • Docker (docker): command line tool to build, test, and push docker images.

Optional tools

As most of the tools to interact with gRPC or HTTP are included in the Docker image, only a shell is recommended to manage environment variables and run scripts:

  • POSIX shell (sh) such as GNU Bash (bash) or Zsh (zsh): the scripts in this guide were tested using either of these shells on macOS and Ubuntu Linux.

Project setup

Below is a file structure that will be used for this article:

~/azure_acr/
├── env.sh
└── examples
    ├── dgraph
    │   └── helmfile.yaml
    └── pydgraph
        ├── Dockerfile
        ├── Makefile
        ├── helmfile.yaml
        ├── load_data.py
        ├── requirements.txt
        ├── sw.nquads.rdf
        └── sw.schema

With either Bash or Zsh, you can create the file structure with the following commands:
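For example, something like the following will create the directories and empty files (any equivalent commands work):

mkdir -p ~/azure_acr/examples/{dgraph,pydgraph}
cd ~/azure_acr
touch env.sh examples/dgraph/helmfile.yaml
touch examples/pydgraph/{Dockerfile,Makefile,helmfile.yaml,load_data.py,requirements.txt,sw.nquads.rdf,sw.schema}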

Project environment variables

Set up the environment variables below to keep a consistent environment among the different tools used in this article. If you are using a POSIX shell, you can save these into a script and source that script whenever needed.

Copy this source script and save as env.sh:
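The exact values depend on your environment; a minimal sketch with placeholder values (the resource group, region, cluster, and registry names are assumptions you should adjust) could look like this:

# env.sh - shared environment variables (example placeholder values)
export AZ_RESOURCE_GROUP="aks-acr-demo"                   # resource group for AKS and ACR
export AZ_LOCATION="westus2"                              # Azure region
export AZ_CLUSTER_NAME="aks-acr-demo-cluster"             # AKS cluster name
export AZ_ACR_NAME="aksacrdemo12345"                      # ACR name: globally unique, alphanumeric only
export KUBECONFIG="$HOME/.kube/${AZ_CLUSTER_NAME}.yaml"   # written by az aks get-credentials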

Provision Azure Resources

Azure Resources

Both the AKS and ACR cloud resources can be provisioned with the steps below.
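As a sketch, the provisioning steps with the Azure CLI and the variables from env.sh could look like the following; flags such as the node count and SKU are assumptions you can adjust. The --attach-acr flag grants the cluster's kubelet identity pull access to the registry:

source env.sh

## create the resource group
az group create --name ${AZ_RESOURCE_GROUP} --location ${AZ_LOCATION}

## create the container registry (Basic SKU is sufficient for this exercise)
az acr create \
  --resource-group ${AZ_RESOURCE_GROUP} \
  --name ${AZ_ACR_NAME} \
  --sku Basic

## create the AKS cluster and grant it pull access to the registry
az aks create \
  --resource-group ${AZ_RESOURCE_GROUP} \
  --name ${AZ_CLUSTER_NAME} \
  --node-count 3 \
  --attach-acr ${AZ_ACR_NAME}

## fetch credentials into the KUBECONFIG path defined in env.sh
az aks get-credentials \
  --resource-group ${AZ_RESOURCE_GROUP} \
  --name ${AZ_CLUSTER_NAME} \
  --file ${KUBECONFIG}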

Verify AKS and KUBECONFIG

Verify that the AKS cluster was created and that you have a KUBECONFIG that is authorized to access the cluster by running the following:

source env.sh
kubectl get all --all-namespaces

The results should look similar to the following:

Dgraph Service

Dgraph is a distributed graph database and can be installed with the steps below.

Save the following as examples/dgraph/helmfile.yaml:
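As a sketch, a minimal helmfile that installs the official Dgraph chart could look like the following; the release name demo and the dgraph namespace match the clean-up commands used later in this article:

repositories:
  - name: dgraph
    url: https://charts.dgraph.io

releases:
  - name: demo
    namespace: dgraph
    chart: dgraph/dgraph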

Now run this below to deploy the Dgraph service:

source env.sh
helmfile --file examples/dgraph/helmfile.yaml apply

Once the deployment completes, it will take about two minutes for the Dgraph cluster to be ready. You can check it with:

kubectl --namespace dgraph get all

This should show something like the following:

Dgraph cluster

Building the pydgraph client

Now comes the time to build the pydgraph client utility.

The Dockerfile

Dockerfile

The Dockerfile contains the instructions to build a client utility image with a Python environment, a few tools, and the client script and data.

Copy the following and save as examples/pydgraph/Dockerfile:
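As a sketch, a Dockerfile along these lines would provide the Python environment, the pydgraph dependencies, and the curl, jq, and grpcurl tools used later from inside the container; the base image, tool versions, and the api.proto download path are assumptions:

FROM python:3.9-slim

# HTTP tools used in the examples
RUN apt-get update && apt-get install -y curl jq && rm -rf /var/lib/apt/lists/*

# grpcurl for testing the gRPC endpoint (version is an example)
RUN curl -sSL https://github.com/fullstorydev/grpcurl/releases/download/v1.8.1/grpcurl_1.8.1_linux_x86_64.tar.gz \
    | tar -xz -C /usr/local/bin grpcurl

WORKDIR /app

# api.proto used by the grpcurl CheckVersion example; source path is an assumption
RUN curl -sSL -o api.proto https://raw.githubusercontent.com/dgraph-io/dgo/master/protos/api.proto

# Python client dependencies (pydgraph)
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# client script, schema, and data
COPY load_data.py sw.schema sw.nquads.rdf ./

# keep the container running so we can exec into it
CMD ["sleep", "infinity"]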

The Makefile

Makefile

This Makefile will encapsulate steps that can be used to build Docker images and push them to the ACR repository.

Copy the following and save as examples/pydgraph/Makefile:
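As a sketch, a Makefile with build and push targets could look like the following; it assumes AZ_ACR_NAME is exported from env.sh and derives the registry login server from it (the command lines under each target must be indented with a tab):

# Makefile: build and push the pydgraph-client image to ACR
ACR_REGISTRY := $(AZ_ACR_NAME).azurecr.io
IMAGE        := $(ACR_REGISTRY)/pydgraph-client:latest

build:
	docker build --tag $(IMAGE) .

push:
	docker push $(IMAGE)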

NOTE: Copy the above exactly, including the tabs, as make requires tabs to mark the commands within a target.

The client script

load_data.py

This is a script that will load the Dgraph schema and data. Copy the following and save as examples/pydgraph/load_data.py:
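As a condensed sketch, the script could look like the following; the command-line options match how it is invoked later in this article (--plaintext, --alpha, --files, --schema), while the rest is an approximation built on the pydgraph client API:

#!/usr/bin/env python3
"""Sketch: load a Dgraph schema and RDF data through the pydgraph gRPC client."""
import argparse

import pydgraph


def parse_args():
    parser = argparse.ArgumentParser(description="load schema and RDF data into Dgraph")
    parser.add_argument("--alpha", default="localhost:9080", help="Dgraph Alpha gRPC address")
    parser.add_argument("--plaintext", action="store_true", help="use an unencrypted gRPC channel")
    parser.add_argument("--schema", required=True, help="path to the Dgraph schema file")
    parser.add_argument("--files", required=True, help="path to the RDF (N-Quads) data file")
    return parser.parse_args()


def main():
    args = parse_args()
    # NOTE: a TLS branch would normally live here; this sketch only handles --plaintext
    stub = pydgraph.DgraphClientStub(args.alpha)
    client = pydgraph.DgraphClient(stub)

    # apply the schema
    with open(args.schema) as f:
        client.alter(pydgraph.Operation(schema=f.read()))

    # load the RDF triples in a single transaction
    txn = client.txn()
    try:
        with open(args.files) as f:
            txn.mutate(set_nquads=f.read())
        txn.commit()
    finally:
        txn.discard()

    stub.close()


if __name__ == "__main__":
    main()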

The client package manifest

Copy the following and save as examples/pydgraph/requirements.txt:
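At a minimum this only needs the pydgraph package, which pulls in grpcio and protobuf as dependencies; pinning a specific version is optional:

pydgraph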

The Dgraph schema

sw.schema

Copy the following and save as examples/pydgraph/sw.schema:
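As a sketch, the predicates can be inferred from the queries later in this article (name, release_date, revenue, running_time, director, starring); a schema along these lines would support them, though the exact index types are assumptions:

name: string @index(term) .
release_date: datetime @index(year) .
revenue: float .
running_time: int .
director: [uid] .
starring: [uid] .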

The Dgraph RDF data

sw.nquads.rdf

Copy the following and save as examples/pydgraph/sw.nquads.rdf:

Build and Push the Image

Now that all the required source files are available, build and push the image:

source env.sh
pushd examples/pydgraph
## build the image
make build
## push the image to ACR
az acr login --name ${AZ_ACR_NAME}
make push
popd

During the build process, you should see something similar to this:

docker build

Deploying the pydgraph client

Now comes the time to deploy the pydgraph client utility, so that we can run queries and mutations using gRPC with Python or HTTP with curl.

The Helmfile.yaml configuration

This helmfile.yaml configuration can be used to deploy the client utility, once its image is available in the ACR.

Copy the following and save as examples/pydgraph/helmfile.yaml:
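As a sketch, the helmfile might look something like the following; it assumes a small local chart that renders a Deployment for the client, reads AZ_ACR_NAME from the environment to build the image reference, and injects the Dgraph Alpha address as the DGRAPH_ALPHA_SERVER variable used by the in-container commands below:

releases:
  - name: pydgraph-client
    namespace: pydgraph-client
    chart: ./chart    # hypothetical local chart that renders a single Deployment
    values:
      - image:
          repository: '{{ requiredEnv "AZ_ACR_NAME" }}.azurecr.io/pydgraph-client'
          tag: latest
        env:
          # Dgraph Alpha service from the demo release in the dgraph namespace (name is an assumption)
          DGRAPH_ALPHA_SERVER: demo-dgraph-alpha.dgraph.svc.cluster.local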

examples/pydgraph/helmfile.yaml

Deploy the Client

Once the pydgraph-client image is available on ACR, the Kubernetes resources that use the image can be deployed:

source env.sh
helmfile --file examples/pydgraph/helmfile.yaml apply

You can run this to check the status of deployment:

kubectl --namespace pydgraph-client get all 

This should result in something like the following:

pydgraph-client

Use the pydgraph client

Log into the container with the following command:

PYDGRAPH_POD=$(kubectl get pods \
--namespace pydgraph-client --output name
)
kubectl exec -ti --namespace pydgraph-client ${PYDGRAPH_POD} -- bash

Health Checks

Verify that the cluster is functional and healthy with this command:

curl ${DGRAPH_ALPHA_SERVER}:8080/health | jq

This should show something like:

/health (HTTP)

gRPC checks

Verify that gRPC is functional using grpcurl command:

grpcurl -plaintext -proto api.proto \
${DGRAPH_ALPHA_SERVER}:9080 \
api.Dgraph/CheckVersion

NOTE: Dgraph serves HTTP traffic through port 8080 and gRPC traffic through port 9080.

This should show something like:

api.Dgraph/CheckVersion (gRPC)

Run the Load Data Script

Load the schema and RDF data using the load_data.py Python script:

python3 load_data.py --plaintext \
--alpha ${DGRAPH_ALPHA_SERVER}:9080 \
--files ./sw.nquads.rdf \
--schema ./sw.schema

Query All Movies

Run this query to get all movies:

curl "${DGRAPH_ALPHA_SERVER}:8080/query" --silent \
--request POST \
--header "Content-Type: application/dql" \
--data $'{ me(func: has(starring)) { name } }' | jq

The result set should look similar to the following:

/query (HTTP)

Query movies released after 1980

Run this query to get movies released after 1980:

curl "${DGRAPH_ALPHA_SERVER}:8080/query" --silent \
--request POST \
--header "Content-Type: application/dql" \
--data $'
{
  me(func: allofterms(name, "Star Wars"), orderasc: release_date) @filter(ge(release_date, "1980")) {
    name
    release_date
    revenue
    running_time
    director {
      name
    }
    starring (orderasc: name) {
      name
    }
  }
}
' | jq

The result set should look similar to this:

/query (HTTP)

Clean up

All resources can be deleted with the following commands:

source env.sh
az aks delete \
--resource-group ${AZ_RESOURCE_GROUP} \
--name ${AZ_CLUSTER_NAME}
rm -rf ${KUBECONFIG}

NOTE: Because Azure manages cloud resources like load balancers and external volumes under a resource group for the AKS cluster, deleting the AKS cluster will delete all cloud resources provisioned through Kubernetes.
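If you are completely done with the exercise, you can also remove the registry, or the entire resource group if nothing else lives in it:

## delete just the registry
az acr delete --name ${AZ_ACR_NAME} --resource-group ${AZ_RESOURCE_GROUP}

## or delete the whole resource group and everything in it
az group delete --name ${AZ_RESOURCE_GROUP}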

Delete Kubernetes Resources

If you wish to keep the AKS cluster for other projects and only want to remove Dgraph and pydgraph, you can delete those resources with the following commands:

source env.sh
helm delete --namespace dgraph demo
kubectl delete pvc --namespace dgraph --selector release=demo
helm delete --namespace pydgraph-client pydgraph-client

Resources

Here are some links to topics, articles, and tools used in this article:

Blog Source Code

Container Image Standards

Azure Documentation

Conclusion

This article lays the groundwork for managing private container images and for deploying clients and services that can communicate through both HTTP and gRPC. Though this article uses the ACR and AKS flavors, the same principles apply to similar solutions.

This article will be part of a new series that I am developing that will cover the following topic areas:

  1. Build, Push, and Deploy a containerized gRPC client application and corresponding server application. (this article)
  2. Restrict traffic between designated clients and servers using network policies with Calico.
  3. Secure and load balance gRPC traffic between clients and servers that are part of a service mesh such as Linkerd or Istio.

This will serve as a springboard to explore more advanced patterns, such as blue-green deployment scenarios and o11y (cloud native observability) with metrics, tracing, logging, visualization, and alerting, areas where there are a few popular solutions.
