Dgraph on Kubernetes: Why?

Horizontally Scaling a graph database with high availability

Joaquín Menchaca (智裕)
8 min read · Jan 9, 2024


In a previous article I introduced Dgraph with the following:

Dgraph is a highly performant distributed graph database that speaks GraphQL natively. This means that you do not need to translate from GraphQL to other languages, such as Cypher with Neo4j, or to a non-graph database, such as the case with Hasura.

Dgraph also has a superset of the GraphQL language called DQL (Dgraph Query Language), which can communicate through either HTTP or through gRPC for higher performance.
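
For example, here is a minimal sketch of a DQL query sent over HTTP to a local Dgraph Alpha, assuming its default HTTP port of 8080 (gRPC clients would use port 9080 instead):

```
# Send a DQL query to a local Dgraph Alpha over HTTP (default port 8080).
# This assumes some data with a "name" predicate already exists.
curl -s -X POST localhost:8080/query \
  -H 'Content-Type: application/dql' \
  -d '{
    q(func: has(name)) {
      uid
      name
    }
  }'
```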

In this article, I wanted to talk about the high availability that is built into Dgraph, and how an orchestration and scheduling platform like Kubernetes, especially on top of existing automation from cloud providers, can further enhance this high availability by adding more layers of redundancy.

For this article, we’ll talk about three layers:

  1. Application (Dgraph): distributed graph database that replicates data and has recovery.
  2. Platform (Kubernetes): manages containers and storage across several virtual machine instances (aka worker nodes). The containers and storage are resilient and recoverable, and will be replaced should they fail.
  3. Infrastructure (AWS, Azure, or GCP): the virtual machine instances themselves, called worker nodes, that are managed by a cloud service that will recover or replace failed nodes.

For the technically curious, this automation service is VMSS (Virtual Machine Scale Sets) on Azure, MIGs (Managed Instance Groups) on Google Cloud, and ASGs (Auto Scaling Groups) on AWS.

Dgraph Architecture

Dgraph is deployed as a single binary with no external dependencies. You can use the same binary to run all of Dgraph. The official Dgraph Docker image simplifies deploying to Kubernetes, where you can set up a multi-node highly available Dgraph cluster.

Two kinds of processes are in a Dgraph cluster: Dgraph Zero (cluster managers) and Dgraph Alpha (data servers). The Dgraph Zero members control the Dgraph cluster, store the group membership of the Dgraph Alpha members, and manage transactions cluster-wide. The Dgraph Alpha members store and index the data and serve all client requests.

We will need at least one Dgraph Zero member and one Dgraph Alpha member to run Dgraph. There can be multiple Dgraph Zero members and Dgraph Alpha members running in groups as a cluster. There is always a single group of Dgraph Zero members and one or more groups of Dgraph Alpha members. Each group is a Raft consensus group for high availability and consistent replication.
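
As a rough sketch of the minimal case, the same dgraph binary starts both roles; exact flags can vary slightly between Dgraph versions:

```
# Start a Dgraph Zero (cluster manager); its internal gRPC port defaults to 5080.
dgraph zero &

# Start a Dgraph Alpha (data server) and point it at the Zero member.
dgraph alpha --zero localhost:5080 &
```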

Illustration: Dgraph Raft Consensus groups

Dgraph is resilient to any one of these members failing. The cluster remains available for users to read and write their data. Distributed systems can fail for a myriad of reasons, and these failures shouldn’t make our backend go awry. As long as the majority of a group remains up, requests can proceed. More specifically, if the number of replicas is 2N + 1, up to N members can be down without any impact on reads or writes. For example, a group of three replicas (N = 1) tolerates one failed member, and a group of five (N = 2) tolerates two. Groups have an odd number of members so that a majority (quorum) can always be established.

For a great animation of Raft Consensus (using raftscope animation), check out https://raft.github.io/.

Kubernetes: An ideal Dgraph companion

Although Dgraph can run on a cluster of nodes for high availability, an orchestration tool is necessary for health checking, self-healing, storage volume management, and setting up the network. The Kubernetes platform provides all of this, making it the ideal platform to host Dgraph.

Kubernetes maintains Dgraph’s resiliency as it constantly monitors Dgraph instances for readiness and liveness, and it can automatically move instances off of unhealthy worker nodes and run them on healthy ones. Kubernetes manages stateful apps like Dgraph with StatefulSets.

Every Dgraph process is deployed as a Pod. In StatefulSets, Kubernetes gives each pod the same hostname identity and persistent volumes even when it moves to different worker nodes. If a worker node goes bad or if a Dgraph process gets restarted, then the pod will restart on a healthy worker node and re-sync the latest changes from replicated peers. Even when a pod restarts, Dgraph is available to process all requests on the rest of the healthy nodes.

The illustration below shows the Dgraph StatefulSet (sts) as it relates to other components. This includes three pods and a corresponding allocation of storage, called a persistent volume claim (pvc), that comes from a persistent volume (pv). Alongside this, we deploy a service (svc) resource that creates a proxy connecting to one of the three highly available pods.
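
As a hedged sketch, assuming Dgraph was installed into a namespace named dgraph, these resources can be inspected with kubectl:

```
# List the StatefulSets, pods, volume claims, and services for Dgraph
# (this assumes a namespace named "dgraph"; adjust for your install).
kubectl get statefulsets,pods,persistentvolumeclaims,services -n dgraph

# Each Alpha and Zero pod keeps a stable ordinal identity (-0, -1, -2)
# and its own PVC, so a rescheduled pod re-attaches the same volume.
kubectl get pvc -n dgraph -o wide
```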

Illustration: Dgraph Deployment on Kubernetes

About Deploying on Kubernetes

Dgraph is deployed using Kubernetes manifests, which are files in either YAML or JSON that describe the desired state of the Kubernetes resource objects, such as the StatefulSets and Services. You can lovingly handcraft these files using your favorite text editor, or have them automatically generated using a Helm chart. It just so happens that there is a Dgraph helm chart that can be used to deploy Dgraph.

The default will create a Dgraph cluster with three Dgraph Alpha pods and three Dgraph Zero pods. You can optionally enable installation of the web client Ratel, or install it separately with the Ratel helm chart.
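
As a sketch, an install using the chart’s defaults might look like the following; the release and namespace names are placeholders, and the Ratel chart name may vary by repository version:

```
# Add the Dgraph Helm chart repository.
helm repo add dgraph https://charts.dgraph.io
helm repo update

# Install Dgraph with the chart defaults: 3 Alpha pods and 3 Zero pods.
helm install dgraph dgraph/dgraph \
  --namespace dgraph --create-namespace

# Optionally install the Ratel web client from its own chart, in a
# separate namespace (chart name is assumed here; check the repo).
helm install ratel dgraph/ratel \
  --namespace ratel --create-namespace
```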

The Dgraph Alpha pods and Dgraph Zero pods are run via StatefulSets with persistent storage provided through the cloud service, such as an Amazon Elastic Block Store (Amazon EBS) volume on Elastic Kubernetes Service (EKS), to store the data, so they keep the same data around even if they’re restarted or rescheduled to different machines. The web client Ratel is stateless, so it is a Deployment with no extra persistent disks needed.

No services will be exposed to the public internet for security reasons. If you would like to add endpoints, you can configure the Helm config values to change the default service type to LoadBalancer, or add an Ingress resource.
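
For illustration only, here is one way a LoadBalancer service type could be set through Helm values; the exact value keys depend on the chart version, so check the chart’s values.yaml before using this:

```
# Illustrative only: switch the Alpha service to a cloud load balancer.
# The "alpha.service.type" key is an assumption; verify it in values.yaml.
helm upgrade dgraph dgraph/dgraph \
  --namespace dgraph \
  --reuse-values \
  --set alpha.service.type=LoadBalancer
```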

The following Kubernetes diagram is similar to diagrams used in the Kubernetes basic tutorial and other Kubernetes documentation. This is an overview of the components involved when deploying Dgraph on Kubernetes.

The Dgraph Zero pods and Dgraph Alpha pods deployed by a StatefulSet controller, as mentioned previously, will be distributed across the cluster and have an attached disk, indicated by the purple disk icon. The Ratel pod deployed by a Deployment controller will only have a single pod in one of the three worker nodes and does not have an attached persistent disk.

Illustration: Dgraph Kubernetes components

Some further notes on this diagram:

  • Current versions of Kubernetes no longer use Docker for container management.
  • Ratel and Dgraph should be deployed in separate namespaces as a best practice, such as ratel and dgraph.

I wrote a general tutorial about how to deploy Dgraph on Kubernetes.

Cloud Managed Kubernetes Solutions

Cloud providers like Azure, AWS, and Google Cloud provide managed Kubernetes services (AKS, EKS, and GKE, respectively), which are recommended.

These solutions manage the master control plane and worker nodes and offer integration with cloud resources in security, networking, storage, monitoring, scaling, and more.

Although Kubernetes provides high availability through scaling and recovery of workloads called pods, a managed solution on the cloud will also provide scaling and recovery to the worker nodes themselves.

The illustration below is an example Amazon EKS cluster that has six worker nodes available to host Dgraph Alphas and Dgraph Zeros, as well as multiple masters managed by Amazon EKS.

Illustration: Dgraph on Elastic Kubernetes Service

Kubernetes Recommendations

For any of these solutions, I recommend configuring high-speed storage with SSDs and adding some minimal security measures to get started. I have some notes for each of these providers.

In general, you can use cloud CLI tools to provision a cluster, but for the long term I recommend using a tool like Terraform to manage production infrastructure. There are some excellent open source Terraform modules for AKS, GKE, and EKS, so I recommend using these rather than creating them from scratch.

Azure Kubernetes Service (AKS)

Azure’s managed service AKS is easy to set up and get started with, but there are some options I recommend considering when setting up a cluster.

I currently do not have any articles on this yet. My previous articles documented how to use Azure AD pod identity, which provides least privilege access to cloud resources through Azure Active Directory.

Since then, Azure AD workload identity was developed to allow Kubernetes to federate with external identity providers.
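
As an illustrative sketch only (resource group, cluster name, and node count are placeholders), an AKS cluster with workload identity enabled might be provisioned like this:

```
# Create an AKS cluster with OIDC issuer and workload identity enabled.
# Names and sizes are placeholders, not recommendations.
az aks create \
  --resource-group my-dgraph-rg \
  --name my-dgraph-aks \
  --node-count 3 \
  --enable-oidc-issuer \
  --enable-workload-identity \
  --generate-ssh-keys
```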

Google Kubernetes Engine (GKE)

Google Cloud’s managed service GKE is easy to set up but comes with insecure defaults that I recommend not using. For this reason, there are a few options I recommend when provisioning GKE.

I wrote an article about how to provision GKE with the Google Cloud SDK (gcloud command) using some of these options.
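
As a rough sketch (project, region, and cluster name are placeholders, and the flags below are my own illustrative choices rather than a complete hardening checklist), provisioning might look like this:

```
# Create a regional GKE cluster with SSD boot disks, shielded nodes,
# and Workload Identity; adjust machine type and node counts as needed.
gcloud container clusters create my-dgraph-gke \
  --region us-central1 \
  --num-nodes 2 \
  --machine-type e2-standard-4 \
  --disk-type pd-ssd \
  --enable-shielded-nodes \
  --workload-pool my-project.svc.id.goog
```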

Elastic Kubernetes Service (EKS)

Amazon EKS is comparatively far more complex to provision, especially as it no longer comes with any default storage. With EKS, I recommend installing and configuring a few additional components, such as a storage driver.

I wrote an article that covers how to quickly provision a robust EKS cluster using the AWS CLI (aws) and the eksctl tool.
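
As an illustrative sketch (names, region, and sizes are placeholders), a cluster plus the EBS CSI storage driver add-on might be provisioned like this; note that the add-on also needs an IAM role (IRSA or Pod Identity), which is omitted here:

```
# Create an EKS cluster with a managed node group.
eksctl create cluster \
  --name my-dgraph-eks \
  --region us-west-2 \
  --nodes 3 \
  --node-type m5.xlarge

# Install the EBS CSI driver add-on so PersistentVolumeClaims can be
# backed by EBS volumes (IAM permissions for the driver are not shown).
eksctl create addon \
  --name aws-ebs-csi-driver \
  --cluster my-dgraph-eks \
  --region us-west-2
```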

Conclusion

I wanted to write some introductory material about Dgraph and Kubernetes that I can reference in future articles, rather than rewriting it for every new article and drowning people in verbosity and complexity.

Originally I wrote this material exclusively as a blog post on AWS (see https://aws.amazon.com/blogs/opensource/dgraph-on-aws-setting-up-a-horizontally-scalable-graph-database/), and this version is refreshed to be more generic and high level, with links to more explicit implementation tutorials.

I hope this helps, and if you have any questions or comments, feel free to drop a reply.
