Using Kubernetes Ingress plus Ocelot API Gateways

When using Kubernetes (such as in an Azure Kubernetes Service cluster), you usually unify all the HTTP requests through the Kubernetes Ingress tier based on Nginx.

In Kubernetes, if you don't use any ingress approach, your services and pods have IPs that are only routable within the cluster network.

But if you use an ingress approach, you will have a middle tier between the Internet and your services (including your API Gateways), acting as a reverse proxy.

As a definition, an Ingress is a collection of rules that allow inbound connections to reach the cluster services. An ingress is usually configured to give services externally reachable URLs, load balance traffic, terminate SSL, and more. Users request ingress by POSTing the Ingress resource to the API server.
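As a sketch of what such a rule set looks like, the following is a minimal Ingress manifest that routes two URL prefixes to two different backing services. The hostname, service names, and paths here are illustrative assumptions, not taken from eShopOnContainers:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: eshop-ingress              # hypothetical resource name
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: myapp.example.com        # hypothetical external host
    http:
      paths:
      - path: /webshoppingapigw    # routed to a hypothetical API Gateway service
        pathType: Prefix
        backend:
          service:
            name: webshoppingapigw
            port:
              number: 80
      - path: /webmvc              # routed to a hypothetical web app service
        pathType: Prefix
        backend:
          service:
            name: webmvc
            port:
              number: 80
```

With a manifest like this, the Nginx ingress controller forwards requests for each URL prefix to the matching cluster service, so clients only ever see the single external host.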

In eShopOnContainers, when developing locally and using just your development machine as the Docker host, you are not using any ingress but only the multiple API Gateways. However, when moving to a "production" environment based on Kubernetes, eShopOnContainers uses an ingress in front of the API Gateways, so the clients still call the same base URL but the requests are routed to multiple API Gateways or BFFs.

Note that the API Gateways are front-ends, or facades, only for the services or Web APIs, not for the web applications, plus they might hide certain internal microservices. The ingress, however, just redirects HTTP requests and doesn't try to hide anything.

Therefore, having an ingress Nginx tier in Kubernetes in front of the web applications plus the multiple Ocelot API Gateways/BFFs is the ideal architecture, as shown in the following diagram.

Figure 6-41. The ingress tier in eShopOnContainers when deployed into Kubernetes

The deployment of eShopOnContainers into Kubernetes exposes only a few services or endpoints via ingress, basically the following list of postfixes on the URLs:

 

When deploying to Kubernetes, each Ocelot API Gateway uses a different "configuration.json" file for each pod running the API Gateways. Those "configuration.json" files are provided by mounting (originally with the deploy.ps1 script) a volume created from a Kubernetes config map named 'ocelot'. Each container mounts its related configuration file in the container's folder named /app/configuration.

In the source code files of eShopOnContainers, the original "configuration.json" files can be found within the k8s/ocelot/ folder. There's one file for each BFF/API Gateway.
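The wiring described above can be sketched roughly as follows. This is an illustrative deployment fragment, not the exact manifest from eShopOnContainers; the container name and config map key are assumptions:

```yaml
# Deployment fragment: mount the 'ocelot' config map into /app/configuration.
# The config map itself can be created from the files under k8s/ocelot/, e.g.:
#   kubectl create configmap ocelot --from-file=k8s/ocelot/
spec:
  containers:
  - name: webshoppingapigw                    # hypothetical gateway container
    image: eshop/ocelotapigw
    volumeMounts:
    - name: ocelot-config
      mountPath: /app/configuration           # the gateway reads its configuration.json here
  volumes:
  - name: ocelot-config
    configMap:
      name: ocelot                            # the config map named 'ocelot'
      items:
      - key: configuration-web-shopping.json  # hypothetical key: the file for this gateway
        path: configuration.json              # mounted as /app/configuration/configuration.json
```

The items/path mapping is what lets every gateway pod mount only its own file from the shared config map, under the single file name the gateway expects.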

Additional cross-cutting features in an Ocelot API Gateway

There are other important features to research when using an Ocelot API Gateway, which are described in the following links.

Service discovery on the client side, integrating Ocelot with Consul or Eureka

http://ocelot.readthedocs.io/en/latest/features/servicediscovery.html

Caching at the API Gateway tier

http://ocelot.readthedocs.io/en/latest/features/caching.html

Logging at the API Gateway tier

http://ocelot.readthedocs.io/en/latest/features/logging.html

Quality of Service (Retries and Circuit breakers) at the API Gateway tier

http://ocelot.readthedocs.io/en/latest/features/qualityofservice.html

Rate limiting

http://ocelot.readthedocs.io/en/latest/features/ratelimiting.html