GKE with Emissary-Ingress

Running Emissary-Ingress on GKE

Joaquín Menchaca (智裕)
10 min read · Nov 20, 2023


Ambassador, now called emissary-ingress, is a popular gateway built around the famous Envoy proxy. Emissary-ingress, like other ingress controllers, supports the standard but quite limited ingress resource, and also provides more robust CRDs (Mapping, Listener, Host) to configure routing back to your applications running on Kubernetes.

Emissary-ingress supports advanced features such as the following:

  • advanced load balancing with policies such as round robin, least request, ring hash, and Maglev; some of these policies support sticky sessions with session affinity based on a cookie, header, or source IP of the incoming request (see the sketch after this list).
  • service discovery with optional Consul integration to route traffic to both Kubernetes services and applications on VMs outside of the Kubernetes cluster.
  • distributed circuit breaker support to improve resilience by preventing additional requests or connections that can overload a service.
  • support for external authorization, such as HTTP basic or SSO authentication protocols.
  • external log support to ship logs to an external log aggregation solution.
  • distributed tracing support, where the x-request-id HTTP header is generated and populated.
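As a hedged illustration of the load-balancing and circuit-breaker features above, a Mapping can opt into a policy directly in its spec. This is a minimal sketch with hypothetical names (example-svc is not part of this guide), showing cookie-based sticky sessions with the ring hash policy:

apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: example-sticky            # hypothetical name
spec:
  hostname: "*"
  prefix: /example/
  service: example-svc:8080       # hypothetical backend service
  load_balancer:
    policy: ring_hash
    cookie:
      name: sticky-session
      ttl: 60s
  circuit_breakers:
    - max_connections: 1024       # cap concurrent connections to the backend
      max_pending_requests: 1024  # cap queued requests before fast-failing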

For Google Cloud, we will install with the defaults, using an external (layer 4) load balancer for incoming traffic. Emissary-ingress will then route traffic to services running on Kubernetes based on what we configure.

Segue: Configuring HTTPS

The diagram above shows only insecure HTTP traffic. For secure traffic with HTTPS and a public-facing service, you will need a publicly registered domain and certificates issued by a trusted CA (Certificate Authority).

On GKE, the more popular method to implement this is setting up a domain with Cloud DNS (or another solution like Cloudflare), using Cert Manager to automate the installation of certificates, and optionally using External DNS to automate the management of DNS records.
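As a rough sketch of that path, assuming cert-manager is already installed and a Cloud DNS zone exists for your domain (example.com and my-gcp-project below are placeholders), a Let's Encrypt issuer solving DNS-01 challenges through Cloud DNS might look like this:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod              # hypothetical name
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com          # replace with a real contact address
    privateKeySecretRef:
      name: letsencrypt-prod-key      # secret that holds the ACME account key
    solvers:
      - dns01:
          cloudDNS:
            project: my-gcp-project   # hypothetical GCP project ID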

Another alternative is to use Google's managed SSL certificates via the ManagedCertificate CRD, which requires placing a layer 7 load balancer in front of emissary-ingress.

This is documented for Edge Stack under Configure and connect ambassador to the ingress. This part of the process should work for emissary-ingress, since the configuration at this stage happens with Google Cloud resources, but it will need some adjustments, such as using the applicable emissary-ingress values when configuring backend.serviceName and backend.servicePort.

I also wrote an article on the topic of explicitly using ingress-gce as the sole L7 router for the cluster. To use that guide, you will need to remove the host key from the ingress configuration, which causes the host to default to *, and fill in the values for backend so that it routes to emissary-ingress.
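As a rough sketch of this L7 path (my own, not taken from either guide, translated to the current networking.k8s.io/v1 schema; example.com and the resource names are placeholders), the Google-side pieces would look roughly like this, with the host defaulting to * and everything routed to emissary-ingress:

apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: emissary-cert               # hypothetical name
spec:
  domains:
    - example.com                   # replace with your registered domain
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: emissary-l7                 # hypothetical name
  annotations:
    kubernetes.io/ingress.class: "gce"
    networking.gke.io/managed-certificates: emissary-cert
spec:
  # no host key under rules, so the host defaults to "*"
  defaultBackend:
    service:
      name: emissary-ingress        # route everything to emissary-ingress
      port:
        number: 80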

For this guide, only HTTP will be documented, to keep the complexity to a minimum. If there is sufficient interest in emissary-ingress, I can document HTTPS solution paths using either L4 with Cert Manager or L7 with ingress-gce.

Components

This was tested with the components below and may not work if your versions are significantly different.

  • Kubernetes API 1.27.3-gke.100
  • kubectl v1.27.3
  • helm v3.12.2
  • gcloud 455.0.0
  • emissary-ingress 3.9.1
  • Dgraph v21.03.2

Prerequisites

Setup Kubernetes Cluster with GKE

You will need a GKE cluster set up, preferably one that doesn't use the default network and security configuration. I wrote about how to set up a baseline cluster in a separate article.

Tools

These are the tools used in this article.

  • Google Cloud SDK [gcloud] is used to interact with Google Cloud.
  • kubectl client [kubectl] is the tool used to interact with the Kubernetes cluster. This can be installed using the asdf tool.
  • helm [helm] is a tool that installs Kubernetes applications packaged as helm charts.
  • grpcurl [grpcurl] is a tool that interacts with the gRPC protocol from the command line.
  • curl [curl] is a tool to interact with web servers from the command line.
  • POSIX shell [sh], such as bash [bash] or zsh [zsh], is used to run the commands. These come standard on Linux, and on macOS you can get the latest with brew install bash zsh if Homebrew is installed.

These tools are highly recommended:

  • asdf [asdf] is a tool that installs versions of popular tools like kubectl.
  • jq [jq] is a tool to query and print JSON data.
  • GNU grep [grep] supports extracting string patterns using extended regex and PCRE. This comes by default on Linux distros, and on macOS it can be installed with brew install grep if Homebrew is installed.

Setup Environment Variables

If you ran through the Ultimate Baseline GKE cluster tutorial, these environment variables should already be set up. You will need to configure access by setting the KUBECONFIG environment variable. Below are the environment variables required for that process.

# gke vars
export GKE_CLUSTER_NAME="base-gke" # change to match your cluster
export GKE_REGION="us-central1" # change to match your cluster

# kubectl client vars
export USE_GKE_GCLOUD_AUTH_PLUGIN="True"
export KUBECONFIG=~/.kube/gcp/$GKE_REGION-$GKE_CLUSTER_NAME.yaml

If you have not configured KUBECONFIG yet, you can run the gcloud container clusters get-credentials command to configure access, for example:

gcloud container clusters get-credentials $GKE_CLUSTER_NAME \
--region $GKE_REGION

Emissary-Ingress

You can install emissary-ingress (previously called Ambassador) with the commands below.

################
# Add repository if not done already
#########################
helm repo add datawire https://app.getambassador.io
helm repo update

################
# Install CRDs and create Namespace
#########################
kubectl create namespace emissary
kubectl apply \
--filename https://app.getambassador.io/yaml/emissary/3.9.0/emissary-crds.yaml
kubectl wait \
--timeout=90s \
--for=condition=available deployment emissary-apiext \
--namespace emissary-system

################
# Install Emissary-Ingress service
#########################
helm install emissary-ingress --namespace emissary datawire/emissary-ingress
kubectl wait \
--namespace emissary \
--for condition=available \
--timeout=90s deploy \
--selector app.kubernetes.io/instance=emissary-ingress

When completed, you can run these commands to see the components installed:

kubectl get all,secret,cm,sa --namespace emissary-system
kubectl get all,secret,cm,sa --namespace emissary

Deploy Dgraph

These steps will install Dgraph, a distributed graph database that supports the GraphQL language natively, as well as DQL, a superset of GraphQL. The following ports are supported to communicate with the database:

  • port 8080 (HTTP) for GraphQL and DQL
  • port 9080 (gRPC) for DQL

Run the following to install Dgraph:

export DGRAPH_ALLOW_LIST="0.0.0.0/0" # change as needed
export DGRAPH_RELEASE_NAME="dg"

# Add Dgraph repository if not done already
helm repo add dgraph https://charts.dgraph.io
helm repo update

# Install Dgraph
helm install $DGRAPH_RELEASE_NAME dgraph/dgraph \
  --namespace dgraph \
  --create-namespace \
  --values - <<EOF
zero:
  persistence:
    storageClass: premium-rwo
    size: 10Gi

alpha:
  configFile:
    config.yaml: |
      security:
        whitelist: ${DGRAPH_ALLOW_LIST}
  persistence:
    storageClass: premium-rwo
    size: 30Gi
EOF

You can see the installed components with the following command:

kubectl get all --namespace dgraph
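Optionally, before wiring up the ingress, you can sanity-check Dgraph directly with a port-forward. This is my own addition, and the service name follows the release naming used in the Mapping configuration later in this guide:

# Forward local port 8080 to the Dgraph Alpha service
kubectl port-forward svc/$DGRAPH_RELEASE_NAME-dgraph-alpha 8080:8080 --namespace dgraph &

# Check the cluster state endpoint over plain HTTP
curl -s localhost:8080/state | jq .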

Configure Ingress for Dgraph

We can now configure access to Dgraph through the following CRDs:

  • Host: ties together a Listener and one or more Mappings and determines how secure and insecure requests will be handled and how TLS certificates should be handled.
  • Listener: defines how to handle incoming traffic.
  • Mapping: associates incoming requests to a specific backend service based on HTTP path, header, and other request attributes.

export DGRAPH_HOSTNAME_HTTP="dgraph.local"
export DGRAPH_RELEASE_NAME="dg"

kubectl apply --namespace dgraph --filename - <<EOF
---
apiVersion: getambassador.io/v3alpha1
kind: Host
metadata:
  name: $DGRAPH_RELEASE_NAME-dgraph
spec:
  hostname: "*"
  requestPolicy:
    insecure:
      action: Route
---
apiVersion: getambassador.io/v3alpha1
kind: Listener
metadata:
  name: $DGRAPH_RELEASE_NAME-dgraph
spec:
  port: 8080
  protocol: HTTP
  securityModel: INSECURE
  hostBinding:
    namespace:
      from: SELF
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: $DGRAPH_RELEASE_NAME-dgraph-http
spec:
  hostname: $DGRAPH_HOSTNAME_HTTP
  prefix: /
  service: $DGRAPH_RELEASE_NAME-dgraph-alpha.dgraph:8080
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: $DGRAPH_RELEASE_NAME-dgraph-grpc
spec:
  hostname: "*"
  prefix: /api.Dgraph/
  rewrite: /api.Dgraph/
  service: $DGRAPH_RELEASE_NAME-dgraph-alpha.dgraph:9080
  grpc: True
EOF

We can look at the results of this deployment with:

kubectl get Host,Mapping,Listener --namespace dgraph
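If a Mapping does not show up as expected, Emissary's built-in diagnostics can help. This is an optional check of my own, assuming the default deployment name from the Helm install above:

# Forward the admin port from the emissary-ingress deployment
kubectl port-forward deploy/emissary-ingress 8877:8877 --namespace emissary &

# Fetch the diagnostics overview as JSON (also browsable at this URL)
curl -s 'http://localhost:8877/ambassador/v0/diag/?json=true' | jq .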

Testing Access to Dgraph

These commands test access through both HTTP and gRPC. To simulate hostnames without a DNS server, we can add local entries to the /etc/hosts file (on macOS, Linux, or WSL).

############
# Fetch endpoint and set other variables
#############################
export DGRAPH_HOSTNAME_HTTP="dgraph.local"
export DGRAPH_HOSTNAME_GRPC="grpc.dgraph.local"

export SERVICE_IP=$(kubectl get svc emissary-ingress \
--namespace emissary \
--output jsonpath='{.status.loadBalancer.ingress[0].ip}'
)

############
# Add hostnames to /etc/hosts
# WARNING: Be very careful with these commands
#############################
sudo sh -c "echo ${SERVICE_IP} ${DGRAPH_HOSTNAME_HTTP} >> /etc/hosts"
sudo sh -c "echo ${SERVICE_IP} ${DGRAPH_HOSTNAME_GRPC} >> /etc/hosts"

############
# Test HTTP Access
#############################
curl -s http://$DGRAPH_HOSTNAME_HTTP/state
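If you would rather not edit /etc/hosts at all, curl's --resolve flag can map the hostname to the load balancer IP for a single request:

curl -s --resolve ${DGRAPH_HOSTNAME_HTTP}:80:${SERVICE_IP} \
  http://${DGRAPH_HOSTNAME_HTTP}/state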

############
# Test gRPC Access
#############################
# Download the api.proto specification locally
# Alternative source: https://raw.githubusercontent.com/dgraph-io/dgo/master/protos/api.proto
curl -sOL https://raw.githubusercontent.com/dgraph-io/pydgraph/master/pydgraph/proto/api.proto

# Interact with Dgraph using gRPC
grpcurl -plaintext -proto api.proto \
$DGRAPH_HOSTNAME_GRPC:80 api.Dgraph/CheckVersion

Cleaning Up

This will show how to delete the Kubernetes resources and some notes about deleting the Google Cloud resources.

Kubernetes resources

We can delete Dgraph with the following commands.

DGRAPH_RELEASE_NAME="dg"

# Delete dgraph application
helm delete $DGRAPH_RELEASE_NAME --namespace dgraph

# IMPORTANT: delete persistent volume claims
kubectl delete pvc --namespace dgraph --selector release=$DGRAPH_RELEASE_NAME

# Delete namespace
kubectl delete namespace dgraph

Emissary-ingress can be deleted with the following:

# Delete Emissary application
helm delete emissary-ingress --namespace emissary

# Delete CRDs
kubectl delete --filename \
https://app.getambassador.io/yaml/emissary/3.9.0/emissary-crds.yaml

# Delete namespace
kubectl delete ns emissary

Google Cloud Resources

Before you delete the Kubernetes cluster, you will want to make sure there are no active persistent volume claims. If these are not deleted, leftover disks can continue to eat up costs. You can check the cluster for these with kubectl get pvc -A.
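For example (the gcloud check is my addition; it lists all disks in the project so you can spot leftovers after the cluster is gone):

# List any remaining persistent volume claims across all namespaces
kubectl get pvc --all-namespaces

# After deleting the cluster, check for orphaned disks still accruing cost
gcloud compute disks list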

When ready, how you delete the Kubernetes cluster will depend on what tools were used to create it originally, such as terraform destroy or gcloud container clusters delete $GKE_CLUSTER_NAME.


Conclusion

Emissary-ingress, back when it was called Ambassador, has been one of the celebrity ingress solutions for Kubernetes. Its unique CRDs, Host, Listener, Mapping, and others, are important not only for configuring advanced features that are not available in the Kubernetes ingress API, but also for enabling separation of concerns and self-service.

Separation of Concerns

The separation of concerns is vital because it allows developers to maintain the part of the configuration that is necessary for interacting with the application they are developing, such as the Mapping, while platform engineers can focus on the overall infrastructure as it applies to several teams, which would be the Host and Listener. This model allows Emissary-ingress to support self-service.

The alternative, such as the current classic ingress API, is that operations would handle the infrastructure, including parts that are closely coupled with the application. In this scenario, operations can easily become a bottleneck in the software development lifecycle.

The Kubernetes Gateway API, which recently went GA, affords the same separation of concerns that Emissary-ingress currently offers, so it will be interesting to see how this project evolves to support that API. As of this writing, Emissary-ingress does not support the GA release of the Gateway API, as far as I understand.
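For comparison, here is a minimal sketch of how the Gateway API expresses that same split, with hypothetical names: the platform team owns the Gateway, and the application team owns the HTTPRoute.

apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: shared-gateway              # owned by the platform team
spec:
  gatewayClassName: example-class   # hypothetical GatewayClass
  listeners:
    - name: http
      port: 80
      protocol: HTTP
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: app-route                   # owned by the application team
spec:
  parentRefs:
    - name: shared-gateway
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: app-svc             # hypothetical backend service
          port: 8080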

Classic Ingress API support (broken)

Emissary-ingress supports the Kubernetes classic ingress API for those that need only the basic features. The documentation in this area has not been updated since 2020 (issue 5450). In writing this tutorial, I could not get this functionality to work, and wrote this up as issue 5451. The proprietary CRDs, Host, Listener, and Mapping, do work, however.

Followup: I had some recent feedback from Ambassador Labs:

Just a note however that the ingress resource is not the preferred way to use Emissary. We recommend installing from the QuickStart because the Ingress resource is quite limited in functionality.

I hope this doesn't mean that the ingress API will not be investigated and will simply be left in a broken state.

General Emissary support

When testing Emissary, I ran into another serious issue. The Dgraph service itself is quite responsive: a response through the Kubernetes service proxy takes 0.066s. In contrast, the response when going through Emissary takes 5.327s, a whopping extra +5.261s. That's huge; something is wrong. I opened another issue as 5469.
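One way to reproduce this kind of comparison (my own sketch, not necessarily the exact method used for the numbers above) is curl's built-in timing:

# Direct to the service, via a port-forward
kubectl port-forward svc/$DGRAPH_RELEASE_NAME-dgraph-alpha 8080:8080 --namespace dgraph &
curl -o /dev/null -s -w 'direct: %{time_total}s\n' http://localhost:8080/state

# Through emissary-ingress
curl -o /dev/null -s -w 'emissary: %{time_total}s\n' http://$DGRAPH_HOSTNAME_HTTP/state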

On issues in general with Emissary, I received some feedback through X/Twitter:

if you’re eager for continued and regular support we would recommend using Edge Stack. That will definitely get you further than the open-source version, Emissary.

This is scary, as it would seem to indicate that the open-source version is not well supported. If their commercial product is built from the open source, then it could mean that the commercial product inherits these problems.

Final Thoughts

My goal is to try out popular ingress controllers and API gateways and create some fun, kickass tutorials around them. Unfortunately, I cannot recommend Emissary or the commercial Edge Stack at this time, based on the following:

  • documentation is limited and not well organized; see Emissary Ingress to be a poorly documented nightmare of a project to work with, posted in 2021. This adds significant lead time.
  • features such as the ingress API do not work as documented, which is fine if they are not needed, but the response from Ambassador Labs was troubling.
  • core API gateway functionality, after deploying the CRD (Mapping, Host, Listener) configuration, adds +5.3 seconds to the response. The response from Ambassador Labs, as I understand it, was to get commercial support.

Given these issues, the time cost, and the experience interacting with the team, I would caution against investing too much into this project unless there is a specific feature you require.
