AKS with Azure Container Registry

Using Azure Container Registry with Azure Kubernetes Service

A private container registry is useful for building, well, private images, but it is also invaluable for republishing images that may not otherwise be available due to outages or low availability, such as images on the Quay registry in the last few years, or on less reliable registries like GHCR (GitHub Container Registry).

What this article will cover

This article will cover building a Python client that will connect to the Dgraph distributed graph database using gRPC. This client will be built with Docker, pushed to ACR, and finally deployed to AKS using the image pulled from ACR.

  1. Deploy Dgraph distributed graph database
  2. Build and Push pydgraph-client utility image to ACR
  3. Deploy the pydgraph-client utility with the image pulled from ACR
  4. Demonstrate using the client to run gRPC and HTTP mutations and queries

Articles in Series

This series shows how to both secure and load balance gRPC and HTTP traffic.

  1. AKS with Calico network policies
  2. AKS with Linkerd service mesh
  3. AKS with Istio service mesh

Requirements

To create Azure cloud resources, you will need a subscription with permission to create resources.

Required tools

  • Kubernetes client tool (kubectl): command line tool that interacts with Kubernetes API
  • Helm (helm): command line tool for “templating and sharing Kubernetes manifests” (ref) that are bundled as Helm chart packages.
  • helm-diff plugin: allows you to see the changes made with helm or helmfile before applying the changes.
  • Helmfile (helmfile): command line tool that uses a “declarative specification for deploying Helm charts across many environments” (ref)
  • Docker (docker): command line tool to build, test, and push docker images.
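
Before continuing, it helps to confirm that these tools are installed and on your PATH. The Azure CLI (az) is also used throughout this article for provisioning and registry login. A quick sanity check (exact versions are not critical; any recent release should work):

az --version
kubectl version --client
helm version
helm plugin list | grep -i diff
helmfile --version
docker version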

Optional tools

Project setup

Below is a file structure that will be used for this article:

~/azure_acr/
├── env.sh
└── examples
    ├── dgraph
    │   └── helmfile.yaml
    └── pydgraph
        ├── Dockerfile
        ├── Makefile
        ├── helmfile.yaml
        ├── load_data.py
        ├── requirements.txt
        ├── sw.nquads.rdf
        └── sw.schema

Project environment variables

Set up these environment variables below to keep a consistent environment across the different tools used in this article. If you are using a POSIX shell, you can save these into a script and source that script whenever needed.
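
The contents of env.sh are not reproduced here. A minimal sketch, assuming the variable names referenced later in this article (AZ_RESOURCE_GROUP, AZ_CLUSTER_NAME, AZ_ACR_NAME, KUBECONFIG) with illustrative values, plus an AZ_LOCATION for the region, could look like this:

## env.sh - source this before running the other commands in this article
export AZ_RESOURCE_GROUP="azure-acr-demo"               ## illustrative name
export AZ_LOCATION="westus2"                            ## illustrative region
export AZ_CLUSTER_NAME="aks-acr-demo"                   ## illustrative name
export AZ_ACR_NAME="acrdemo12345"                       ## must be globally unique, lowercase alphanumeric
export KUBECONFIG="$HOME/.kube/${AZ_CLUSTER_NAME}.yaml"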

Provision Azure Resources

Azure Resources
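
For reference, a minimal sketch of provisioning these resources with the Azure CLI, assuming the variables from the env.sh sketch above, could look like the following. One way to grant the cluster pull access to the registry is the --attach-acr flag:

source env.sh
## resource group that will hold the registry and the cluster
az group create --name ${AZ_RESOURCE_GROUP} --location ${AZ_LOCATION}
## private container registry (name must be globally unique)
az acr create --resource-group ${AZ_RESOURCE_GROUP} --name ${AZ_ACR_NAME} --sku Basic
## AKS cluster with pull access to the registry
az aks create \
  --resource-group ${AZ_RESOURCE_GROUP} \
  --name ${AZ_CLUSTER_NAME} \
  --attach-acr ${AZ_ACR_NAME} \
  --generate-ssh-keys
## write cluster credentials to the KUBECONFIG path from env.sh
az aks get-credentials \
  --resource-group ${AZ_RESOURCE_GROUP} \
  --name ${AZ_CLUSTER_NAME} \
  --file ${KUBECONFIG}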

Verify AKS and KUBECONFIG

Verify that the AKS cluster was created and that you have a KUBECONFIG that is authorized to access the cluster by running the following:

source env.sh
kubectl get all --all-namespaces
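
You can also verify that the cluster has pull access to the registry (assuming the registry was attached to the cluster during provisioning):

az aks check-acr \
  --resource-group ${AZ_RESOURCE_GROUP} \
  --name ${AZ_CLUSTER_NAME} \
  --acr ${AZ_ACR_NAME}.azurecr.io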

Dgraph Service

Dgraph is a distributed graph database that can be installed with these steps below.

source env.sh
helmfile --file examples/dgraph/helmfile.yaml apply
kubectl --namespace dgraph get all
Dgraph cluster
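
The Dgraph helmfile.yaml is not reproduced here. For reference, an equivalent deployment with plain helm, assuming the public Dgraph chart repository at https://charts.dgraph.io and the release name demo used in the cleanup section, might look like this:

helm repo add dgraph https://charts.dgraph.io
helm repo update
helm install demo dgraph/dgraph --namespace dgraph --create-namespace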

Building the pydgraph client

Now comes the time to build the pydgraph client utility.

The Dockerfile

Dockerfile

The Makefile

Makefile

The client script

load_data.py

The client package manifest

requirements.txt

The Dgraph schema

sw.schema

The Dgraph RDF data

sw.nquads.rdf

Build and Push the Image

Now that all the required source files are available, build the image:

source env.sh
pushd examples/pydgraph
## build the image
make build
## push the image to ACR
az acr login --name ${AZ_ACR_NAME}
make push
popd
docker build
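
If you would rather run Docker directly instead of the make targets, the equivalent steps might look like this (a sketch, assuming the image is named pydgraph-client and that the registry login server is ${AZ_ACR_NAME}.azurecr.io):

source env.sh
az acr login --name ${AZ_ACR_NAME}
## build the image and tag it against the ACR login server
docker build --tag ${AZ_ACR_NAME}.azurecr.io/pydgraph-client:latest examples/pydgraph
## push the image to the private registry
docker push ${AZ_ACR_NAME}.azurecr.io/pydgraph-client:latest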

Deploying the pydgraph client

Now comes the time to deploy the pydgraph client utility, so that we can run queries and mutations using gRPC with Python or HTTP with curl.

The Helmfile.yaml configuration

examples/pydgraph/helmfile.yaml
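
Because the helm-diff plugin is installed, you can preview the changes this helmfile will make before applying it:

source env.sh
helmfile --file examples/pydgraph/helmfile.yaml diff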

Deploy the Client

Once the pydgraph-client image is available on ACR, Kubernetes resources that use the image can now be deployed:

source env.sh
helmfile --file examples/pydgraph/helmfile.yaml apply
kubectl --namespace pydgraph-client get all 
pydgraph-client
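
To confirm that the running pod is using the image from ACR rather than a public registry, you can inspect the image reference on the pod spec:

kubectl get pods --namespace pydgraph-client \
  --output jsonpath='{.items[*].spec.containers[*].image}'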

Use the pydgraph client

Log into the container with the following command:

PYDGRAPH_POD=$(kubectl get pods \
--namespace pydgraph-client --output name
)
kubectl exec -ti --namespace pydgraph-client ${PYDGRAPH_POD} -- bash
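
The commands in the following sections are run from inside this container. They assume a DGRAPH_ALPHA_SERVER environment variable pointing at the Dgraph Alpha service is already set in the pod; if it is not, it can be set manually (a sketch, assuming the chart's default service naming for the release demo in the dgraph namespace):

export DGRAPH_ALPHA_SERVER="demo-dgraph-alpha.dgraph.svc.cluster.local"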

Health Checks

Verify that the cluster is functional and healthy with this command:

curl ${DGRAPH_ALPHA_SERVER}:8080/health | jq
/health (HTTP)

gRPC checks

Verify that gRPC is functional using grpcurl command:

grpcurl -plaintext -proto api.proto \
${DGRAPH_ALPHA_SERVER}:9080 \
api.Dgraph/CheckVersion
api.Dgraph/CheckVersion (gRPC)

Run the Load Data Script

Load the schema and RDF data using the load_data.py Python script:

python3 load_data.py --plaintext \
--alpha ${DGRAPH_ALPHA_SERVER}:9080 \
--files ./sw.nquads.rdf \
--schema ./sw.schema

Query All Movies

Run this query to get all movies:

curl "${DGRAPH_ALPHA_SERVER}:8080/query" --silent \
--request POST \
--header "Content-Type: application/dql" \
--data $'{ me(func: has(starring)) { name } }' | jq
/query (HTTP)

Query movies released after 1980

Run this query to get movies released after 1980:

curl "${DGRAPH_ALPHA_SERVER}:8080/query" --silent \
--request POST \
--header "Content-Type: application/dql" \
--data $'
{
  me(func: allofterms(name, "Star Wars"), orderasc: release_date) @filter(ge(release_date, "1980")) {
    name
    release_date
    revenue
    running_time
    director {
      name
    }
    starring (orderasc: name) {
      name
    }
  }
}
' | jq
/query (HTTP)

Clean up

All resources can be deleted with the following commands:

source env.sh
az aks delete \
--resource-group ${AZ_RESOURCE_GROUP} \
--name ${AZ_CLUSTER_NAME}
rm -rf ${KUBECONFIG}
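
The commands above only remove the AKS cluster. If the container registry and resource group were created solely for this exercise, they can be removed as well:

az acr delete --resource-group ${AZ_RESOURCE_GROUP} --name ${AZ_ACR_NAME}
## deleting the resource group removes everything remaining in it
az group delete --name ${AZ_RESOURCE_GROUP}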

Delete Kubernetes Resources

If you wish to continue using the AKS cluster for other projects and only want to delete the Dgraph and pydgraph resources, you can remove them with the following commands:

source env.sh
helm delete --namespace dgraph demo
kubectl delete pvc --namespace dgraph --selector release=demo
helm delete --namespace pydgraph-client pydgraph-client

Resources

Here are some links to topics, articles, and tools used in this article:

Blog Source Code

Container Image Standards

Azure Documentation

Conclusion

This article lays the groundwork for managing private container images and deploying clients and services that communicate over both HTTP and gRPC. Though this article uses ACR and AKS, the same principles could apply to similar solutions:

  1. Restrict traffic between designated clients and servers using network policies with Calico.
  2. Secure and load balance gRPC traffic between clients and servers that are part of a service mesh such as Linkerd or Istio.
