AKS with Azure Container Registry

Using Azure Container Registry with Azure Kubernetes Service

What this article will cover

  1. Provision Azure Resources
  2. Deploy Dgraph distributed graph database
  3. Build and Push pydgraph-client utility image to ACR
  4. Deploy pydgraph-client utility with the image pulled from ACR
  5. Demonstrate using the client to run gRPC and HTTP mutations and queries

Articles in Series

  1. AKS with Azure Container Registry (this article)
  2. AKS with Calico network policies
  3. AKS with Linkerd service mesh
  4. AKS with Istio service mesh

Requirements

Required tools

  • Azure CLI tool (az): command line tool that interacts with the Azure API.
  • Kubernetes client tool (kubectl): command line tool that interacts with the Kubernetes API.
  • Helm (helm): command line tool for “templating and sharing Kubernetes manifests” (ref) that are bundled as Helm chart packages.
  • helm-diff plugin: allows you to see the changes made with helm or helmfile before applying the changes.
  • Helmfile (helmfile): command line tool that uses a “declarative specification for deploying Helm charts across many environments” (ref).
  • Docker (docker): command line tool to build, test, and push docker images.

Optional tools

  • POSIX shell (sh) such as GNU Bash (bash) or Zsh (zsh): the scripts in this guide were tested with either of these shells on macOS and Ubuntu Linux.

Project setup

~/azure_acr/
├── env.sh
└── examples
    ├── dgraph
    │   └── helmfile.yaml
    └── pydgraph
        ├── Dockerfile
        ├── Makefile
        ├── helmfile.yaml
        ├── load_data.py
        ├── requirements.txt
        ├── sw.nquads.rdf
        └── sw.schema

Project environment variables
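The env.sh script itself is not shown in this extract; a minimal sketch with hypothetical resource names (ACR names must be globally unique and alphanumeric; pick your own values):

```shell
# env.sh -- example values; replace with your own names and region
export AZ_RESOURCE_GROUP="my-dgraph-rg"     # hypothetical resource group name
export AZ_LOCATION="westus2"                # any Azure region
export AZ_CLUSTER_NAME="my-aks-cluster"     # hypothetical AKS cluster name
export AZ_ACR_NAME="mydgraphacr"            # must be globally unique, alphanumeric only
export KUBECONFIG="$HOME/.kube/$AZ_CLUSTER_NAME.yaml"
```

Each later section begins with `source env.sh`, so these variables are the only place names need to change.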

Provision Azure Resources

Azure Resources
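The provisioning commands are not inlined in this extract; a sketch of the typical sequence using the variables from env.sh (the --attach-acr flag is what grants the cluster's kubelet identity pull access to the registry; node count is illustrative):

```shell
source env.sh

## create the resource group
az group create --name ${AZ_RESOURCE_GROUP} --location ${AZ_LOCATION}

## create the container registry
az acr create \
  --resource-group ${AZ_RESOURCE_GROUP} \
  --name ${AZ_ACR_NAME} \
  --sku Basic

## create the AKS cluster with pull access to the registry
az aks create \
  --resource-group ${AZ_RESOURCE_GROUP} \
  --name ${AZ_CLUSTER_NAME} \
  --attach-acr ${AZ_ACR_NAME} \
  --node-count 3

## fetch cluster credentials into KUBECONFIG
az aks get-credentials \
  --resource-group ${AZ_RESOURCE_GROUP} \
  --name ${AZ_CLUSTER_NAME} \
  --file ${KUBECONFIG}
```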

Verify AKS and KUBECONFIG

source env.sh
kubectl get all --all-namespaces

Dgraph Service

source env.sh
helmfile --file examples/dgraph/helmfile.yaml apply
kubectl --namespace dgraph get all
Dgraph cluster
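The examples/dgraph/helmfile.yaml is not reproduced in this extract; a minimal sketch, assuming the public Dgraph chart repository and illustrative replica counts (the release name demo matches the cleanup commands later in this article):

```yaml
repositories:
  - name: dgraph
    url: https://charts.dgraph.io

releases:
  - name: demo
    namespace: dgraph
    chart: dgraph/dgraph
    values:
      - zero:
          replicaCount: 3
        alpha:
          replicaCount: 3
```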

Building the pydgraph client

The Dockerfile

Dockerfile
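The Dockerfile itself is not inlined in this extract; a minimal sketch of what the pydgraph-client image could look like (the base image and package choices are assumptions; the real image also needs grpcurl for the gRPC checks shown later, whose installation is omitted here):

```dockerfile
FROM python:3.9-slim

# curl and jq are used for the HTTP health checks and queries later in this article
RUN apt-get update \
 && apt-get install -y --no-install-recommends curl jq \
 && rm -rf /var/lib/apt/lists/*

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# the client script, schema, and data used in later steps
COPY load_data.py sw.schema sw.nquads.rdf ./

# keep the container running so we can `kubectl exec` into it
CMD ["sleep", "infinity"]
```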

The Makefile

Makefile
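A sketch of the Makefile's build and push targets, assuming env.sh has been sourced so AZ_ACR_NAME is available in the environment (ACR login servers always take the form <name>.azurecr.io):

```makefile
IMAGE := $(AZ_ACR_NAME).azurecr.io/pydgraph-client:latest

build:
	docker build --tag $(IMAGE) .

push:
	docker push $(IMAGE)
```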

The client script

load_data.py
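The load_data.py script is not inlined in this extract. A sketch using the pydgraph client library, mirroring the flags used later in the article (--plaintext, --alpha, --files, --schema); the actual script may differ:

```python
#!/usr/bin/env python3
"""Sketch of a loader: applies a schema, then mutates RDF triples into Dgraph."""
import argparse

import pydgraph


def parse_args():
    parser = argparse.ArgumentParser(description="Load schema and RDF into Dgraph")
    parser.add_argument("--alpha", default="localhost:9080", help="Dgraph Alpha gRPC address")
    parser.add_argument("--files", required=True, help="RDF N-Quads file to load")
    parser.add_argument("--schema", required=True, help="Dgraph schema file to apply")
    parser.add_argument("--plaintext", action="store_true", help="connect without TLS")
    return parser.parse_args()


def main():
    args = parse_args()
    # DgraphClientStub connects without TLS by default, so --plaintext is a no-op here
    stub = pydgraph.DgraphClientStub(args.alpha)
    client = pydgraph.DgraphClient(stub)

    # apply the schema
    with open(args.schema) as f:
        client.alter(pydgraph.Operation(schema=f.read()))

    # load all triples in a single transaction
    txn = client.txn()
    try:
        with open(args.files) as f:
            txn.mutate(set_nquads=f.read())
        txn.commit()
    finally:
        txn.discard()

    stub.close()


if __name__ == "__main__":
    main()
```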

The client package manifest
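The package manifest (requirements.txt) most likely needs only the pydgraph library, which pulls in grpcio and protobuf as dependencies; a minimal sketch:

```text
pydgraph
```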

The Dgraph schema

sw.schema
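The schema file is not reproduced here, but the queries later in the article imply predicates and indexes along these lines (a sketch inferred from those queries, not the article's exact file — allofterms on name needs a term index, and the release_date filter and sort need a datetime index):

```text
name: string @index(term) .
release_date: datetime @index(year) .
revenue: float .
running_time: int .
starring: [uid] .
director: [uid] .
```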

The Dgraph RDF data

sw.nquads.rdf
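The RDF file is not reproduced here; an illustrative excerpt of the kind of N-Quads data the loader ingests, using blank nodes for people and films and the predicates seen in the queries (titles and figures are examples only):

```text
_:lucas <name> "George Lucas" .
_:luke <name> "Luke Skywalker" .

_:sw1 <name> "Star Wars: Episode IV - A New Hope" .
_:sw1 <release_date> "1977-05-25" .
_:sw1 <revenue> "775000000" .
_:sw1 <running_time> "121" .
_:sw1 <director> _:lucas .
_:sw1 <starring> _:luke .
```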

Build and Push the Image

source env.sh
pushd examples/pydgraph
## build the image
make build
## push the image to ACR
az acr login --name ${AZ_ACR_NAME}
make push
popd
docker build

Deploying the pydgraph client

The Helmfile.yaml configuration

examples/pydgraph/helmfile.yaml
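The client's helmfile.yaml is not reproduced here; a sketch under stated assumptions — the chart path is a hypothetical local chart, and the Alpha service DNS name assumes the dgraph chart's defaults with the release name demo:

```yaml
releases:
  - name: pydgraph-client
    namespace: pydgraph-client
    chart: ./chart      # hypothetical local chart for the client pod
    values:
      - image:
          repository: '{{ requiredEnv "AZ_ACR_NAME" }}.azurecr.io/pydgraph-client'
          tag: latest
        env:
          # assumed service name from the dgraph chart with release name "demo"
          DGRAPH_ALPHA_SERVER: demo-dgraph-alpha.dgraph.svc.cluster.local
```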

Deploy the Client

source env.sh
helmfile --file examples/pydgraph/helmfile.yaml apply
kubectl --namespace pydgraph-client get all 
pydgraph-client

Use the pydgraph client

PYDGRAPH_POD=$(kubectl get pods \
--namespace pydgraph-client --output name
)
kubectl exec -ti --namespace pydgraph-client ${PYDGRAPH_POD} -- bash

Health Checks

curl ${DGRAPH_ALPHA_SERVER}:8080/health | jq
/health (HTTP)

gRPC checks

grpcurl -plaintext -proto api.proto \
${DGRAPH_ALPHA_SERVER}:9080 \
api.Dgraph/CheckVersion
api.Dgraph/CheckVersion (gRPC)

Run the Load Data Script

python3 load_data.py --plaintext \
--alpha ${DGRAPH_ALPHA_SERVER}:9080 \
--files ./sw.nquads.rdf \
--schema ./sw.schema

Query All Movies

curl "${DGRAPH_ALPHA_SERVER}:8080/query" --silent \
  --request POST \
  --header "Content-Type: application/dql" \
  --data $'{ me(func: has(starring)) { name } }' | jq
/query (HTTP)

Query movies released after 1980

curl "${DGRAPH_ALPHA_SERVER}:8080/query" --silent \
  --request POST \
  --header "Content-Type: application/dql" \
  --data $'
{
  me(func: allofterms(name, "Star Wars"), orderasc: release_date) @filter(ge(release_date, "1980")) {
    name
    release_date
    revenue
    running_time
    director {
      name
    }
    starring (orderasc: name) {
      name
    }
  }
}
' | jq
/query (HTTP)

Clean up

source env.sh
az aks delete \
--resource-group ${AZ_RESOURCE_GROUP} \
--name ${AZ_CLUSTER_NAME}
rm -rf ${KUBECONFIG}

Delete Kubernetes Resources

source env.sh
helm delete --namespace dgraph demo
kubectl delete pvc --namespace dgraph --selector release=demo
helm delete --namespace pydgraph-client pydgraph-client

Resources

Blog Source Code

Container Image Standards

Azure Documentation

Conclusion

  1. Build, Push, and Deploy a containerized gRPC client application and corresponding server application. (this article)
  2. Restrict traffic between designated clients and servers using network policies with Calico.
  3. Secure and load balance gRPC traffic between clients and servers that are part of a service mesh such as Linkerd or Istio.

Joaquín Menchaca (智裕)

Linux NinjaPants Automation Engineering Mutant — exploring DevOps, o11y, k8s, progressive deployment (ci/cd), cloud native infra, infra as code