AKS with Azure Container Registry
Using Azure Container Registry with Azure Kubernetes Service
A private container registry is useful for hosting, well, private images, but it is also invaluable for republishing images that may not otherwise be available due to outages or low availability, such as images on the Quay registry in the last few years, or on less reliable registries like GHCR (GitHub Container Registry).
In this article, we will cover using Azure Container Registry with Azure Kubernetes Service. Individually, these components are not too complex, but combined together, the logistics of deploying applications can get complex.
What this article will cover
This article will cover building a Python client that will connect to the Dgraph distributed graph database using gRPC. This client will be built with Docker, pushed to ACR, and finally deployed to AKS using the image pulled from ACR.
This will be implemented through running these steps:
- Provision Azure Resources
- Deploy Dgraph distributed graph database
- Build and push the `pydgraph-client` utility image to ACR
- Deploy the `pydgraph-client` utility with the image pulled from ACR
- Demonstrate using the client to run gRPC and HTTP mutations and queries
Articles in Series
This series shows how to both secure and load balance gRPC and HTTP traffic.
- AKS with Azure Container Registry (this article)
- AKS with Calico network policies
- AKS with Linkerd service mesh
- AKS with Istio service mesh
Requirements
To create Azure cloud resources, you will need an Azure subscription with permissions to create resources.
Required tools
These tools are required for this article:
- Azure CLI tool (`az`): command line tool that interacts with the Azure API.
- Kubernetes client tool (`kubectl`): command line tool that interacts with the Kubernetes API.
- Helm (`helm`): command line tool for “templating and sharing Kubernetes manifests” (ref) that are bundled as Helm chart packages.
- helm-diff plugin: allows you to see the changes `helm` or `helmfile` would make before applying them.
- Helmfile (`helmfile`): command line tool that uses a “declarative specification for deploying Helm charts across many environments” (ref).
- Docker (`docker`): command line tool to build, test, and push Docker images.
Optional tools
As most of the tools to interact with gRPC or HTTP are included in the Docker image, only a shell is recommended to manage environment variables and run scripts:
- POSIX shell (`sh`) such as GNU Bash (`bash`) or Zsh (`zsh`): the scripts in this guide were tested using either of these shells on macOS and Ubuntu Linux.
Project setup
Below is a file structure that will be used for this article:
```
~/azure_acr/
├── env.sh
└── examples
    ├── dgraph
    │   └── helmfile.yaml
    └── pydgraph
        ├── Dockerfile
        ├── Makefile
        ├── helmfile.yaml
        ├── load_data.py
        ├── requirements.txt
        ├── sw.nquads.rdf
        └── sw.schema
```
With either Bash or Zsh, you can create the file structure with the following commands:
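A minimal sketch of those commands is below; it creates the directories and empty files, which are filled in through the rest of the article:

```bash
mkdir -p ~/azure_acr/examples/{dgraph,pydgraph}

touch ~/azure_acr/env.sh \
      ~/azure_acr/examples/dgraph/helmfile.yaml \
      ~/azure_acr/examples/pydgraph/{Dockerfile,Makefile,helmfile.yaml} \
      ~/azure_acr/examples/pydgraph/{load_data.py,requirements.txt,sw.nquads.rdf,sw.schema}
```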
Project environment variables
Set up the environment variables below to keep a consistent environment amongst the different tools used in this article. If you are using a POSIX shell, you can save these into a script and source that script whenever needed.
Copy this source script and save it as `env.sh`:
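The original script is not reproduced here, so below is a minimal sketch based on the variables referenced throughout this guide. The resource names and region are placeholder assumptions; adjust them for your environment, and note that ACR names must be globally unique and alphanumeric:

```bash
# env.sh — example values only; change these for your environment
export AZ_RESOURCE_GROUP="aks-acr-demo"       # assumed resource group name
export AZ_LOCATION="westus2"                  # assumed Azure region
export AZ_CLUSTER_NAME="aks-acr-demo"         # assumed AKS cluster name
export AZ_ACR_NAME="aksacrdemo12345"          # must be globally unique
export KUBECONFIG="$HOME/.kube/${AZ_CLUSTER_NAME}.yaml"
```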
Provision Azure Resources
Both the AKS and ACR cloud resources can be provisioned with the following steps:
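A hedged sketch of those steps using standard `az` commands is below; the `--attach-acr` flag grants the cluster's managed identity pull access to the registry, which is what lets AKS pull images from ACR without explicit image pull secrets:

```bash
source env.sh

az group create --name ${AZ_RESOURCE_GROUP} --location ${AZ_LOCATION}

az acr create \
  --resource-group ${AZ_RESOURCE_GROUP} \
  --name ${AZ_ACR_NAME} \
  --sku Basic

az aks create \
  --resource-group ${AZ_RESOURCE_GROUP} \
  --name ${AZ_CLUSTER_NAME} \
  --node-count 3 \
  --attach-acr ${AZ_ACR_NAME}

az aks get-credentials \
  --resource-group ${AZ_RESOURCE_GROUP} \
  --name ${AZ_CLUSTER_NAME} \
  --file ${KUBECONFIG}
```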
Verify AKS and KUBECONFIG
Verify that the AKS cluster was created and that you have a KUBECONFIG that is authorized to access the cluster by running the following:
```bash
source env.sh

kubectl get all --all-namespaces
```
The results should look similar to the following:
Dgraph Service
Dgraph is a distributed graph database that can be installed with these steps below.
Save the following as `examples/dgraph/helmfile.yaml`:
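The original configuration is not reproduced here; this minimal sketch installs the official Dgraph chart from https://charts.dgraph.io as a release named `demo`, the release name the clean-up steps later assume:

```yaml
# examples/dgraph/helmfile.yaml — a minimal sketch
repositories:
  - name: dgraph
    url: https://charts.dgraph.io

releases:
  - name: demo
    namespace: dgraph
    chart: dgraph/dgraph
```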
Now run this below to deploy the Dgraph service:
```bash
source env.sh

helmfile --file examples/dgraph/helmfile.yaml apply
```
Once deployed, it will take about two minutes for the Dgraph cluster to be ready. You can check its status with:
```bash
kubectl --namespace dgraph get all
```
This should show something like the following:
Building the pydgraph client
Now comes the time to build the pydgraph client utility.
The Dockerfile
The `Dockerfile` will contain the instructions to build a client utility image that contains a Python environment with a few tools, as well as the client script and data.
Copy the following and save it as `examples/pydgraph/Dockerfile`:
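The original `Dockerfile` is not reproduced here; a plausible sketch that installs the HTTP and gRPC tools used later in this article (`curl`, `jq`, `grpcurl`) looks like the following. The base image, the `grpcurl` version pin, and the `api.proto` source path are assumptions:

```dockerfile
# examples/pydgraph/Dockerfile — a minimal sketch
FROM python:3.10-slim

# HTTP tools used by the examples in this article
RUN apt-get update && \
    apt-get install --yes curl jq && \
    rm -rf /var/lib/apt/lists/*

# grpcurl for exercising gRPC endpoints (assumed version pin)
RUN curl -sSL \
    https://github.com/fullstorydev/grpcurl/releases/download/v1.8.7/grpcurl_1.8.7_linux_x86_64.tar.gz \
    | tar -xz -C /usr/local/bin grpcurl

WORKDIR /app

# Dgraph's gRPC service definition, used by grpcurl (assumed source path)
RUN curl -sSLO https://raw.githubusercontent.com/dgraph-io/dgo/master/protos/api.proto

COPY requirements.txt .
RUN pip install --no-cache-dir --requirement requirements.txt

COPY load_data.py sw.schema sw.nquads.rdf ./

# keep the container alive so we can exec into it later
CMD ["sleep", "infinity"]
```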
The Makefile
This `Makefile` will encapsulate the steps used to build Docker images and push them to the ACR repository.
Copy the following and save it as `examples/pydgraph/Makefile`:
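The original `Makefile` is not shown; a minimal sketch with `build` and `push` targets, assuming the image is named `pydgraph-client` and `AZ_ACR_NAME` is exported from `env.sh`, would be:

```makefile
# examples/pydgraph/Makefile — a minimal sketch; image name is an assumption
NAME  := pydgraph-client
TAG   := latest
IMAGE := $(AZ_ACR_NAME).azurecr.io/$(NAME):$(TAG)

build:
	docker build --tag $(IMAGE) .

push:
	docker push $(IMAGE)
```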
NOTE: Copy the above exactly, including tabs, as tabs tell `make` to run a command.
The client script
This script will load the Dgraph schema and data. Copy the following and save it as `examples/pydgraph/load_data.py`:
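The original script is not reproduced here; below is a hedged sketch built on the `pydgraph` client API that supports the flags used later in this article (`--plaintext`, `--alpha`, `--files`, `--schema`):

```python
#!/usr/bin/env python3
"""Load a Dgraph schema and RDF data (a minimal sketch of the article's script)."""
import argparse

import pydgraph


def parse_args():
    parser = argparse.ArgumentParser(description="load schema and RDF data into Dgraph")
    parser.add_argument("--alpha", default="localhost:9080", help="Dgraph Alpha gRPC address")
    parser.add_argument("--plaintext", action="store_true", help="use an insecure gRPC channel")
    parser.add_argument("--files", required=True, help="path to the RDF N-Quads data file")
    parser.add_argument("--schema", required=True, help="path to the Dgraph schema file")
    return parser.parse_args()


def main():
    args = parse_args()

    # NOTE: TLS credentials beyond --plaintext are omitted in this sketch
    stub = pydgraph.DgraphClientStub(args.alpha)
    client = pydgraph.DgraphClient(stub)

    # apply the schema
    with open(args.schema) as f:
        client.alter(pydgraph.Operation(schema=f.read()))

    # load the RDF N-Quads data in a single transaction
    txn = client.txn()
    try:
        with open(args.files) as f:
            txn.mutate(set_nquads=f.read(), commit_now=True)
    finally:
        txn.discard()

    stub.close()


if __name__ == "__main__":
    main()
```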
The client package manifest
Copy the following and save it as `examples/pydgraph/requirements.txt`:
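At minimum, this needs the `pydgraph` package, which pulls in `grpcio` and `protobuf` as dependencies; whether the original pinned a version is unknown, so it is left unpinned in this sketch:

```
pydgraph
```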
The Dgraph schema
Copy the following and save it as `examples/pydgraph/sw.schema`:
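The original schema file is not reproduced here; this sketch covers the predicates used by the queries later in the article, following the Star Wars example from the Dgraph documentation (the term index on `name` and the year index on `release_date` are what those queries require):

```
name: string @index(term) .
release_date: datetime @index(year) .
revenue: float .
running_time: int .
starring: [uid] .
director: [uid] .
```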
The Dgraph RDF data
Copy the following and save it as `examples/pydgraph/sw.nquads.rdf`:
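Likewise, the data file is not reproduced here; an abridged sketch of the Star Wars dataset from the Dgraph documentation shows the shape of the data:

```
_:luke <name> "Luke Skywalker" .
_:leia <name> "Princess Leia" .
_:han <name> "Han Solo" .
_:lucas <name> "George Lucas" .
_:irvin <name> "Irvin Kernshner" .

_:sw1 <name> "Star Wars: Episode IV - A New Hope" .
_:sw1 <release_date> "1977-05-25" .
_:sw1 <revenue> "775000000" .
_:sw1 <running_time> "121" .
_:sw1 <starring> _:luke .
_:sw1 <starring> _:leia .
_:sw1 <starring> _:han .
_:sw1 <director> _:lucas .

_:sw2 <name> "Star Wars: Episode V - The Empire Strikes Back" .
_:sw2 <release_date> "1980-05-21" .
_:sw2 <revenue> "534000000" .
_:sw2 <running_time> "124" .
_:sw2 <starring> _:luke .
_:sw2 <starring> _:leia .
_:sw2 <starring> _:han .
_:sw2 <director> _:irvin .
```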
Build and Push the Image
Now that all the required source files are available, build the image:
```bash
source env.sh
pushd examples/pydgraph

# build the image
make build

# push the image to ACR
az acr login --name ${AZ_ACR_NAME}
make push

popd
```
During the build process, you should see something similar to this:
Deploying the pydgraph client
Now comes the time to deploy the pydgraph client utility, so that we can run queries and mutations using gRPC with `python` or HTTP with `curl`.
The helmfile.yaml configuration
This `helmfile.yaml` configuration can be used to deploy the client utility once its image is available in ACR.
Copy the following and save it as `examples/pydgraph/helmfile.yaml`:
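The original configuration is not reproduced here. One hedged way to deploy a single Deployment without authoring a chart is the generic `bedag/raw` chart together with helmfile's `requiredEnv` templating; the image reference and the in-cluster address of the Dgraph Alpha service (release `demo` in namespace `dgraph`) are assumptions to verify against your environment:

```yaml
# examples/pydgraph/helmfile.yaml — a minimal sketch, not the article's original
repositories:
  - name: bedag
    url: https://bedag.github.io/helm-charts/

releases:
  - name: pydgraph-client
    namespace: pydgraph-client
    chart: bedag/raw
    values:
      - resources:
          - apiVersion: apps/v1
            kind: Deployment
            metadata:
              name: pydgraph-client
            spec:
              replicas: 1
              selector:
                matchLabels:
                  app: pydgraph-client
              template:
                metadata:
                  labels:
                    app: pydgraph-client
                spec:
                  containers:
                    - name: pydgraph-client
                      image: {{ requiredEnv "AZ_ACR_NAME" }}.azurecr.io/pydgraph-client:latest
                      env:
                        # assumed service name for the Dgraph Alpha of release "demo"
                        - name: DGRAPH_ALPHA_SERVER
                          value: demo-dgraph-alpha.dgraph
                      command: ["sleep", "infinity"]
```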
Deploy the Client
Once the pydgraph-client image is available on ACR, Kubernetes resources that use the image can now be deployed:
```bash
source env.sh

helmfile --file examples/pydgraph/helmfile.yaml apply
```
You can run this to check the status of deployment:
```bash
kubectl --namespace pydgraph-client get all
```
This should result in something like the following:
Use the pydgraph client
Log into the container with the following commands:
```bash
PYDGRAPH_POD=$(kubectl get pods \
  --namespace pydgraph-client --output name
)

kubectl exec -ti --namespace pydgraph-client ${PYDGRAPH_POD} -- bash
```
Health Checks
Verify that the cluster is functional and healthy with this command:
```bash
curl ${DGRAPH_ALPHA_SERVER}:8080/health | jq
```
This should show something like:
gRPC checks
Verify that gRPC is functional using the `grpcurl` command:
```bash
grpcurl -plaintext -proto api.proto \
  ${DGRAPH_ALPHA_SERVER}:9080 \
  api.Dgraph/CheckVersion
```
NOTE: Dgraph serves HTTP traffic on port `8080` and gRPC traffic on port `9080`.
This should show something like:
Run the Load Data Script
Load the schema and RDF data using the `load_data.py` Python script:
```bash
python3 load_data.py --plaintext \
  --alpha ${DGRAPH_ALPHA_SERVER}:9080 \
  --files ./sw.nquads.rdf \
  --schema ./sw.schema
```
Query All Movies
Run this query to get all movies:
curl "${DGRAPH_ALPHA_SERVER}:8080/query" --silent \
--request POST \
--header "Content-Type: application/dql" \
--data $'{ me(func: has(starring)) { name } }' | jq
The result set should look similar to the following:
Query movies released since 1980
Run this query to get Star Wars movies released in 1980 or later:
curl "${DGRAPH_ALPHA_SERVER}:8080/query" --silent \
--request POST \
--header "Content-Type: application/dql" \
--data $'
{
me(func: allofterms(name, "Star Wars"), orderasc: release_date) @filter(ge(release_date, "1980")) {
name
release_date
revenue
running_time
director {
name
}
starring (orderasc: name) {
name
}
}
}
' | jq
The result set should look similar to this:
Clean up
All resources can be deleted with the following commands:
```bash
source env.sh

az aks delete \
  --resource-group ${AZ_RESOURCE_GROUP} \
  --name ${AZ_CLUSTER_NAME}

rm -rf ${KUBECONFIG}
```
NOTE: Because Azure manages cloud resources like load balancers and external volumes under a resource group for the AKS cluster, deleting the AKS cluster will delete all cloud resources provisioned through Kubernetes.
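The registry and resource group created earlier are not removed by deleting the cluster; assuming the names from `env.sh`, they can be cleaned up as well:

```bash
az acr delete \
  --resource-group ${AZ_RESOURCE_GROUP} \
  --name ${AZ_ACR_NAME}

az group delete --name ${AZ_RESOURCE_GROUP}
```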
Delete Kubernetes Resources
If you wish to continue using the AKS cluster for other projects and only want to remove the Dgraph and pydgraph resources, you can delete them with the following commands:
```bash
source env.sh

helm delete --namespace dgraph demo
kubectl delete pvc --namespace dgraph --selector release=demo

helm delete --namespace pydgraph-client pydgraph-client
```
Resources
Here are some links to topics, articles, and tools used in this article:
Blog Source Code
- Build-Push Container Image to ACR: https://github.com/darkn3rd/blog_tutorials/tree/master/kubernetes/aks/series_2_network_mgmnt/part_0_docker
- AKS with ACR registry integration: https://github.com/darkn3rd/blog_tutorials/tree/master/kubernetes/aks/series_2_network_mgmnt/part_1_acr
Azure Documentation
- Azure Container Registry: https://azure.microsoft.com/services/container-registry/
- Azure Kubernetes Services: https://docs.microsoft.com/azure/aks/
- Tutorial: Deploy and use Azure Container Registry: https://docs.microsoft.com/azure/aks/tutorial-kubernetes-prepare-acr
- Authenticate with ACR from AKS: https://docs.microsoft.com/azure/aks/cluster-container-registry-integration
Conclusion
This article lays the groundwork for managing private container images and deploying clients and services that communicate over both HTTP and gRPC. Though this article uses the ACR and AKS flavors, the same principles apply to similar solutions:
- Kubernetes flavors: GKE, EKS, RKE, KubeSpray, PMK, microK8s
- Container Registry flavors: GCR, ECR, Harbor, Docker Distribution, Project Quay, Sonatype Nexus
This article is part of a new series I am developing that will cover the following topic areas:
- Build, Push, and Deploy a containerized gRPC client application and corresponding server application. (this article)
- Restrict traffic between designated clients and servers using network policies with Calico.
- Secure and load balance gRPC traffic between clients and servers that are part of a service mesh such as Linkerd or Istio.
This will serve as a springboard to explore more advanced patterns with blue-green deploy scenarios and o11y (cloud native observability) with metrics, tracing, logging, and visualization and alerting. There are a few popular solutions in each of these areas:
- blue-green deploy: ArgoCD, ArgoRollouts, Spinnaker, Kayenta, Flux or Flagger
- metrics: Prometheus, Metricbeat, or Telegraf
- tracing: Jaeger or Tempo
- logging: Fluentbit, Filebeat, or Loki
- visualization and alerting: Kibana, Grafana, AlertManager, Kapacitor, or Chronograf