Mesh doc

eiximenis 2019-10-31 09:29:58 +01:00
parent 3190db2691
commit de0bfae04b
4 changed files with 106 additions and 26 deletions

@@ -102,7 +102,7 @@ Save the file and close the editor. This should reapply the deployment in the cl
All steps need to be performed in the `/k8s/helm` folder. The easiest way is to use the `deploy-all.ps1` script from a PowerShell window:
```
.\deploy-all.ps1 -externalDns aks -aksName eshoptest -aksRg eshoptest -imageTag dev -useMesh $false
```
This will install all the [eShopOnContainers public images](https://hub.docker.com/u/eshop/) with tag `dev` on the AKS named `eshoptest` in the resource group `eshoptest`. By default, all infrastructure (sql, mongo, rabbit and redis) is also installed in the cluster.
@@ -110,30 +110,32 @@ This will install all the [eShopOnContainers public images](https://hub.docker.c
Once the script is run, you should see the following output when using `kubectl get deployment`:
```
NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
eshop-apigwmm                    1/1     1            1           29d
eshop-apigwms                    1/1     1            1           29d
eshop-apigwwm                    1/1     1            1           29d
eshop-apigwws                    1/1     1            1           29d
eshop-basket-api                 1/1     1            1           30d
eshop-basket-data                1/1     1            1           30d
eshop-catalog-api                1/1     1            1           30d
eshop-identity-api               1/1     1            1           30d
eshop-keystore-data              1/1     1            1           30d
eshop-locations-api              1/1     1            1           30d
eshop-marketing-api              1/1     1            1           30d
eshop-mobileshoppingagg          1/1     1            1           30d
eshop-nosql-data                 1/1     1            1           30d
eshop-ordering-api               1/1     1            1           30d
eshop-ordering-backgroundtasks   1/1     1            1           30d
eshop-ordering-signalrhub        1/1     1            1           30d
eshop-payment-api                1/1     1            1           30d
eshop-rabbitmq                   1/1     1            1           30d
eshop-sql-data                   1/1     1            1           30d
eshop-webhooks-api               1/1     1            1           30d
eshop-webhooks-web               1/1     1            1           30d
eshop-webmvc                     1/1     1            1           30d
eshop-webshoppingagg             1/1     1            1           30d
eshop-webspa                     1/1     1            1           30d
eshop-webstatus                  1/1     1            1           30d
```
Every public service is exposed through its own ingress resource, as you can see by using `kubectl get ing`:
@@ -144,6 +146,8 @@ eshop-apigwms eshop.<your-guid>.<region>.aksapp.io <public-ip> 80
eshop-apigwwm        eshop.<your-guid>.<region>.aksapp.io   <public-ip>   80   4d
eshop-apigwws        eshop.<your-guid>.<region>.aksapp.io   <public-ip>   80   4d
eshop-identity-api   eshop.<your-guid>.<region>.aksapp.io   <public-ip>   80   4d
eshop-webhooks-api   eshop.<your-guid>.<region>.aksapp.io   <public-ip>   80   4d
eshop-webhooks-web   eshop.<your-guid>.<region>.aksapp.io   <public-ip>   80   4d
eshop-webmvc         eshop.<your-guid>.<region>.aksapp.io   <public-ip>   80   4d
eshop-webspa         eshop.<your-guid>.<region>.aksapp.io   <public-ip>   80   4d
eshop-webstatus      eshop.<your-guid>.<region>.aksapp.io   <public-ip>   80   4d
@@ -151,6 +155,8 @@ eshop-webstatus eshop.<your-guid>.<region>.aksapp.io <public-ip> 80
Ingresses are automatically configured to use the public DNS of the AKS cluster provided by the "HTTP application routing" addon.
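If you need to retrieve that DNS zone (the `<your-guid>.<region>.aksapp.io` host shown above), one way is to query the addon configuration with the Azure CLI (a sketch, assuming the Azure CLI and the resource names used earlier):
```
az aks show -g eshoptest -n eshoptest --query addonProfiles.httpApplicationRouting.config.HTTPApplicationRoutingZoneName -o tsv
```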
### Allow large headers (needed for login to work)
One more step is needed: the NGINX ingress controller that AKS uses has to be configured to allow larger headers, because the headers sent by Identity Server exceed the size configured by default. Fortunately, this is very easy to do. Just type (from the `/k8s/helm` folder):
```
@@ -165,7 +171,25 @@ Then you can restart the pod that runs the nginx controller. Its name is `addon-
kubectl delete pod $(kubectl get pod -l app=addon-http-application-routing-nginx-ingress -n kube-system -o jsonpath="{.items[0].metadata.name}") -n kube-system
```
You can view the MVC client at `http://[dns]/webmvc` and the SPA at `http://[dns]/`.
## Using Linkerd as Service Mesh (Advanced Scenario)
eShopOnContainers can also be installed ready to run with the [Linkerd](https://linkerd.io/) service mesh. To use Linkerd, follow these steps:
1. Install Linkerd on your cluster. We don't provide Linkerd installation scripts, but the process is described in the [Linkerd installation documentation](https://linkerd.io/2/getting-started/#step-0-setup). Steps 0 through 3 need to be completed.
2. Then install eShopOnContainers using the procedure described above, but pass `$true` for the `useMesh` parameter of `deploy-all.ps1`, as shown below.
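For example, reusing the parameter values from the deployment command shown earlier:
```
.\deploy-all.ps1 -externalDns aks -aksName eshoptest -aksRg eshoptest -imageTag dev -useMesh $true
```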
Once eShopOnContainers is installed, you can check that all non-infrastructure pods have two containers:
![Pods with two containers](./images/Mesh/pods.png)
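You can also verify this from the command line: for meshed pods, the `READY` column should show `2/2` (the application container plus the injected Linkerd proxy):
```
kubectl get pods
```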
Now you can use the `linkerd dashboard` command to show the mesh and monitor all the connections between eShopOnContainers pods.
The mesh monitors all HTTP connections (including gRPC), but does not monitor RabbitMQ or any other connections (SQL, Mongo, ...).
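If you prefer the terminal to the dashboard, the same metrics (success rate, request rate, latency) are available through the Linkerd CLI, for example:
```
linkerd stat deployments
```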
For more information, read the [Resiliency and Service Mesh](./Resiliency-and-mesh.md) page.
## Customizing the deployment

Resiliency-and-mesh.md Normal file

@@ -0,0 +1,55 @@
# Resiliency and Service Mesh
Previous versions of eShopOnContainers used the [Polly library](https://github.com/App-vNext/Polly) to provide resiliency. Polly is a fantastic open source library that provides advanced resiliency scenarios and patterns such as retries (with exponential backoff) and circuit breakers.
This version of eShopOnContainers drops the use of Polly in the following cases:
* HTTP REST calls between microservices
* gRPC calls between microservices
Polly is still used to guarantee resiliency in database connections and RabbitMQ/Azure Service Bus connections, but it is no longer used for resiliency in synchronous microservice-to-microservice communication.
## Service Mesh
The reason to drop Polly for microservice-to-microservice communication is to show the use of a [Service Mesh](https://docs.microsoft.com/en-us/dotnet/architecture/cloud-native/service-mesh-communication-infrastructure). One of the reasons to use a service mesh is to delegate communication resiliency to it, setting policies for retries, circuit breakers and QoS.
To use a service mesh, eShopOnContainers needs to be deployed on a Kubernetes cluster. When running eShopOnContainers on a Docker host (using compose), a service mesh can't be used, and in that case no resiliency is built in for these communications.
eShopOnContainers is ready to use [Linkerd](https://linkerd.io) as its service mesh. There were several options to choose from; we chose Linkerd mainly for its easy installation and configuration, and because it has minimal impact on the cluster where it is installed.
## Enabling Mesh
To use eShopOnContainers under Linkerd, you first need to install Linkerd in your cluster. This is an administrative task performed only once. We don't provide scripts for the Linkerd installation, but the process is very straightforward and is clearly described on its [installation page](https://linkerd.io/2/getting-started/). Just follow steps 0 through 3.
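As a quick reference, steps 0 through 3 of that guide boil down to commands like the following (a sketch based on the Linkerd 2.x getting-started page; check the linked documentation for the current steps):
```
curl -sL https://run.linkerd.io/install | sh   # install the Linkerd CLI
linkerd check --pre                            # validate that the cluster is ready
linkerd install | kubectl apply -f -           # install the Linkerd control plane
linkerd check                                  # verify the installation
```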
Once Linkerd is installed you can deploy eShopOnContainers. To enable the integration with Linkerd, pass `$true` for the `useMesh` parameter when running the `deploy-all.ps1` script. For the curious, what this parameter does is set the value `inf.mesh.enabled` to `true` for all Helm charts. When this value is enabled, the Helm charts:
1. Add the `linkerd.io/inject: enabled` annotation to all needed deployments.
2. Add the annotations declared in the file `ingress_values.yaml` to all ingress resources. The provided `ingress_values.yaml` is as follows:
```yaml
ingress:
mesh:
annotations:
nginx.ingress.kubernetes.io/configuration-snippet: |
proxy_set_header l5d-dst-override $service_name.$namespace.svc.cluster.local:$service_port;
proxy_hide_header l5d-remote-ip;
proxy_hide_header l5d-server-id;
```
This is the specific configuration needed to enable the integration between the NGINX ingress (and/or HTTP Application Routing, as it is derived from NGINX) and Linkerd. If you use another ingress controller, you will need to update this file accordingly, following the [Linkerd ingress integration](https://linkerd.io/2/tasks/using-ingress/) instructions.
## Service profiles
By default, Linkerd only monitors the network status and gives you detailed results that you can view using the `linkerd` CLI tool.
To enable retries and other network policies, you must declare a _service profile_ for each service you want to be controlled. A very detailed explanation of service profiles can be found in the [Linkerd documentation](https://linkerd.io/2/tasks/setting-up-service-profiles/).
Just for reference, we include service profiles for the basket and catalog APIs. Feel free to update them, play with them, and explore all the Linkerd scenarios!
You can find the service profiles in the `/k8s/linkerd` folder. Just use `kubectl apply` to apply them to the cluster, as in the example below. Once a service profile is applied, Linkerd can give you detailed per-route statistics and apply retries and other policies.
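For example (a sketch: `kubectl apply -f` accepts a folder path, and the deployment name below matches the output of `kubectl get deployment` shown earlier):
```
kubectl apply -f k8s/linkerd/
linkerd routes deploy/eshop-basket-api
```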
**Note** Previous versions of eShopOnContainers had specific business scenarios to demo the [circuit breaker](https://en.wikipedia.org/wiki/Circuit_breaker_design_pattern) pattern. These scenarios have been removed because, when using a mesh, circuit breakers are applied by the mesh under the hood, and the caller does not receive any specific information that a request has been aborted by the circuit breaker. Right now, Linkerd2 has no specific option to set a circuit breaker policy. This could change in the future as the mesh itself evolves.

@@ -28,6 +28,7 @@
- Logging and Monitoring
  - [Serilog & Seq](Serilog-and-Seq)
  - [Using HealthChecks](Using-HealthChecks)
  - [Resiliency and Service Mesh](Resiliency-and-mesh)
  - [ELK Stack](ELK-Stack)
  - [Application Insights](Application-Insights)
- Tests

BIN images/Mesh/pods.png Normal file
Binary file not shown. (81 KiB)