Merge branch 'dotnet3-migration/merge'
eShopOnContainers uses four API Gateways that implement the [Backends for Frontends (BFF) pattern](https://samnewman.io/patterns/architectural/bff/).

The overall architecture is shown in the following diagram:

![](images/Architecture/eshopOnContainers-architecture.png)

The image above shows the architecture when running eShopOnContainers on Kubernetes with Service Mesh enabled. If Service Mesh is disabled, the "Linkerd" containers don't exist, and if running outside Kubernetes, the "Ingress controller" doesn't exist and you access the API Gateways directly.

In this architecture, the four blue boxes in the column labelled "eShop Ingress" are the four BFFs.

They are currently implemented using [Envoy](https://www.envoyproxy.io/). Each BFF provides a unique endpoint for its clients and then forwards each call to the specific microservice or to the custom aggregator.

It's important to highlight that:

- The communication between the BFFs and the microservices and aggregators is HTTP/REST.
- The communication between the aggregators and the microservices is gRPC.

This could change in the future by using gRPC from the BFFs to the microservices and aggregators, while maintaining an HTTP/REST façade from the BFFs to the clients.
> **CONTENT**

- [Overview](#overview)
- [EventBus](#eventbus)
- [gRPC](#grpc)
- [API Gateways](#api-gateways)
- [Internal architectural patterns](#internal-architectural-patterns)
- [Database servers](#database-servers)
- [More on-line details and guidance](#more-on-line-details-and-guidance)
## Overview

This reference application is cross-platform on both the server and client side, thanks to .NET Core services capable of running on Linux or Windows containers depending on your Docker host. It also has a Xamarin mobile app that supports Android, iOS and Windows/UWP, as well as an ASP.NET Core Web MVC app and an SPA app.
eShopOnContainers includes a simplified EventBus abstraction to handle integration events.

For production-grade solutions, you should use a more robust implementation based on a product such as [NServiceBus](https://github.com/Particular/NServiceBus). You can even see a (somewhat outdated) implementation of eShopOnContainers with NServiceBus here: https://github.com/Particular/eShopOnContainers.
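To make the pub/sub flow concrete, here is a rough sketch of how such an EventBus abstraction is typically used in the solution; the event, handler and variable names below are illustrative assumptions, not the exact types in the repo:

```cs
// Illustrative sketch only: the integration event and handler names are assumptions.
public class OrderStartedIntegrationEvent : IntegrationEvent
{
    public string UserId { get; }
    public OrderStartedIntegrationEvent(string userId) => UserId = userId;
}

// Publisher side (for example, Ordering.API):
_eventBus.Publish(new OrderStartedIntegrationEvent(userId));

// Subscriber side (for example, Basket.API) registers its handler at startup:
eventBus.Subscribe<OrderStartedIntegrationEvent, OrderStartedIntegrationEventHandler>();
```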
## gRPC

Most communications between microservices are decoupled using the EventBus and the "pub/sub" pattern.
However, the communication between the custom aggregators and the internal microservices is currently implemented with gRPC instead of HTTP/JSON. gRPC is an RPC-based protocol that has great performance and low bandwidth usage, making it the best candidate for internal microservice communication.

More information about gRPC and eShopOnContainers can be found [in the gRPC article in this wiki](./gRPC.md).
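As a quick illustration of that communication style, here is a minimal sketch of an aggregator opening a gRPC channel to a microservice with the .NET Core 3 client library (`Grpc.Net.Client`); the address and the request field are assumptions for the example:

```cs
using System;
using System.Threading.Tasks;
using Grpc.Net.Client;

public static async Task CallCatalogAsync()
{
    // Calling over plain HTTP requires enabling HTTP/2 without TLS
    // (see "Using gRPC without TLS" in the gRPC article).
    AppContext.SetSwitch("System.Net.Http.SocketsHttpHandler.Http2UnencryptedSupport", true);

    // "http://catalog-api:81" is an illustrative cluster-internal address,
    // and the Id field is assumed from the catalog.proto contract.
    using var channel = GrpcChannel.ForAddress("http://catalog-api:81");
    var client = new Catalog.CatalogClient(channel);

    var item = await client.GetItemByIdAsync(new CatalogItemRequest { Id = 1 });
}
```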
## API Gateways

The architecture also includes an implementation of the API Gateway and [Backends for Frontends (BFF)](https://samnewman.io/patterns/architectural/bff/) patterns, to publish simplified APIs and include additional security measures for hiding/securing the internal microservices from the client apps or outside consumers.

These API Gateways are implemented using [Envoy](https://www.envoyproxy.io/), an OSS high-performance, production-ready proxy and API Gateway. Currently these API Gateways only perform request forwarding to the internal microservices and custom aggregators, giving the clients the experience of a single base URL. Features that could be implemented in the future are:

- Automatic translation from/to gRPC to/from HTTP/REST.
- Authentication and authorization management.
- Cache support.

If you need additional functionality and a much richer set of features suitable for commercial APIs, you can also add a full API Gateway product like [Azure API Management](https://azure.microsoft.com/services/api-management/) on top of these API Gateways.
![](images/API-gateways/eshop-aks.png)

Alongside the API Gateways, a set of "custom aggregators" is provided. Those aggregators offer the clients a simple API for some operations.

Currently two aggregators exist:

1. Mobile Shopping: aggregator for shopping operations called by the Xamarin app.
2. Web Shopping: aggregator for shopping operations called by the web clients (MVC & SPA).

> **Note** Previous versions of eShopOnContainers used [Ocelot](https://github.com/ThreeMammals/Ocelot) instead of Envoy. Ocelot is a great .NET Core OSS project for creating an API Gateway. It supports a wide set of features and is a serious candidate for any .NET Core based project. However, the lack of gRPC support was the main reason to replace Ocelot with Envoy in eShopOnContainers.

## Internal architectural patterns

There are different types of microservices according to their internal architectural patterns and approaches, depending on their purposes, as shown in the image below.
BFF-implementation.md (new file)

The current implementation of the [Backends for Frontends (BFF) pattern](https://samnewman.io/patterns/architectural/bff/) is shown in the following diagram:
![](images/BFF-implementation/BFF.png)

**Note:** This schema only shows one BFF. Each client type (web and mobile) has its own BFF.

The BFF is composed of two containers: one Envoy proxy and one custom aggregator (note: the Marketing BFF doesn't have a custom aggregator, because it doesn't have any complex logic).
## Envoy

An Envoy proxy acts as the ingress for the BFF and **provides a single URL** for the client. All client calls go through the Envoy proxy. Then, based on some rules, the Envoy proxy can:

1. Forward the call to the custom aggregator.
2. Forward the call directly to an internal microservice.
## Custom aggregator

The custom aggregator is another container that exposes an HTTP/JSON API and has complex methods that involve data from various internal microservices. Each method of the custom aggregator calls one (or usually more than one) internal microservice, aggregates the results (applying custom logic) and returns data to the client.

All calls from the aggregator to microservices are performed using gRPC (dashed lines in the diagram).
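The following is a hypothetical sketch of such an aggregator method, only to illustrate the pattern; every type and member name in it is made up for the example:

```cs
// Hypothetical aggregator service; all type and member names are illustrative.
public class BasketAggregatorService
{
    private readonly Catalog.CatalogClient _catalogClient;
    private readonly Basket.BasketClient _basketClient;

    public BasketAggregatorService(Catalog.CatalogClient catalogClient, Basket.BasketClient basketClient)
    {
        _catalogClient = catalogClient;
        _basketClient = basketClient;
    }

    public async Task<BasketData> AddItemToBasketAsync(AddBasketItemRequest request)
    {
        // Fetch data from two internal microservices over gRPC.
        var item = await _catalogClient.GetItemByIdAsync(new CatalogItemRequest { Id = request.CatalogItemId });
        var basket = await _basketClient.GetBasketByIdAsync(new BasketRequest { Id = request.BasketId });

        // Custom aggregation logic: merge the catalog item into the basket,
        // then return a single HTTP/JSON-friendly response to the client.
        basket.Items.Add(new BasketDataItem
        {
            ProductId = item.Id,
            ProductName = item.Name,
            UnitPrice = item.Price,
            Quantity = request.Quantity
        });
        return basket;
    }
}
```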
## Client Application

A client application calls the BFF only through the single URL exposed by the Envoy proxy. Based on the request data, the request is then forwarded to an internal microservice (simple CRUD calls) or to the custom aggregator (complex logic calls), but this is transparent to the client.
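From the client's point of view this means all requests share one base address, and whether a given route lands on the aggregator or on a microservice is purely an Envoy routing decision. A minimal sketch (the base URL and routes are illustrative):

```cs
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

public static async Task QueryBffAsync()
{
    // One base URL for everything; Envoy decides where each route goes.
    var client = new HttpClient { BaseAddress = new Uri("http://web-shopping-bff/") };

    // A simple CRUD call: Envoy forwards it straight to an internal microservice.
    var items = await client.GetStringAsync("c/api/v1/catalog/items");

    // A "complex logic" call: Envoy forwards it to the custom aggregator instead.
    var body = new StringContent("{\"catalogItemId\":1,\"quantity\":2}", Encoding.UTF8, "application/json");
    var response = await client.PostAsync("api/v1/basket/items", body);
}
```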
When the call is forwarded directly from Envoy to an internal microservice, it's performed using HTTP/JSON. That is, right now, internal microservices expose a mix of methods: some in gRPC (called by the aggregators) and some in HTTP/JSON (called by Envoy). This is subject to change in the future (all microservice methods could be in gRPC, and Envoy could automatically translate between gRPC and HTTP/JSON if needed).
- [Additional pre-requisites](#additional-pre-requisites)
- [Install Helm](#install-helm)
- [Install eShopOnContainers using Helm](#install-eshoponcontainers-using-helm)
- [Allow large headers (needed for login to work)](#allow-large-headers-needed-for-login-to-work)
- [Using Linkerd as Service Mesh (Advanced Scenario)](#using-linkerd-as-service-mesh-advanced-scenario)
- [Customizing the deployment](#customizing-the-deployment)
- [Using your own images](#using-your-own-images)
- [Using specific DNS](#using-specific-dns)
The following steps are using the Azure portal to create the AKS cluster:

- Start the process by providing the general data, like in the following screenshot:

![image](img/aks/create-kubernetes-cluster-basics.png)

- Then, very important, in the next step, enable RBAC:

![image](img/aks/create-kubernetes-cluster-authentication.png)

- **Enable http routing**. Make sure to check the checkbox "Http application routing" in the "Networking" settings. For more info, read the [documentation](https://docs.microsoft.com/en-us/azure/aks/http-application-routing).

You can use **basic network** settings, since for a test you don't need integration into any existing VNET.

![image](img/aks/create-kubernetes-cluster-networking.png)

- You can also enable monitoring:

![image](img/aks/create-kubernetes-cluster-monitoring.png)

- Finally, create the cluster. It'll take a few minutes for it to be ready.
In order NOT to get errors in the Kubernetes dashboard, you'll need to apply the following service-account steps.

Here you can see the errors you might get:

![image](img/aks/rbac-kubernetes-dashboard.png)

- Because the cluster is using RBAC, you need to grant the needed rights to the Service Account `kubernetes-dashboard` with this kubectl command:

`kubectl create clusterrolebinding kubernetes-dashboard -n kube-system --clusterrole=cluster-admin --serviceaccount=kube-system:kubernetes-dashboard`

![image](img/aks/command-to-kubernetes-dashboard.png)

Now, just run the Azure CLI command to browse the Kubernetes Dashboard:

`az aks browse --resource-group pro-eshop-aks-helm-linux-resgrp --name pro-eshop-aks-helm-linux`

![image](img/aks/kubernetes-dashboard.png)

## Additional pre-requisites
You need to have Helm installed on your machine, and Tiller must be installed on the cluster.

**Note**: If your AKS cluster is not RBAC-enabled (the default option in the portal) you may receive the following error when running a helm command:

```console
Error: Get http://localhost:8080/api/v1/namespaces/kube-system/configmaps?labelSelector=OWNER%!D(MISSING)TILLER: dial tcp [::1]:8080: connect: connection refused
```

If so, type:

```powershell
kubectl --namespace=kube-system edit deployment/tiller-deploy
```

Your default text editor will pop up with the YAML definition of the tiller deploy. Search for:

```yaml
automountServiceAccountToken: false
```

And change it to:

```yaml
automountServiceAccountToken: true
```
Save the file and close the editor. This should reapply the deployment in the cluster.

## Install eShopOnContainers using Helm

All steps need to be performed in the `/k8s/helm` folder. The easiest way is to use the `deploy-all.ps1` script from a PowerShell window:

```powershell
.\deploy-all.ps1 -externalDns aks -aksName eshoptest -aksRg eshoptest -imageTag dev -useMesh $false
```

This will install all the [eShopOnContainers public images](https://hub.docker.com/u/eshop/) with tag `dev` on the AKS named `eshoptest` in the resource group `eshoptest`. By default, all infrastructure (sql, mongo, rabbit and redis) is also installed in the cluster.

Once the script is run, you should see the following output when using `kubectl get deployment`:
```console
NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
eshop-apigwmm                    1/1     1            1           29d
eshop-apigwms                    1/1     1            1           29d
eshop-apigwwm                    1/1     1            1           29d
eshop-apigwws                    1/1     1            1           29d
eshop-basket-api                 1/1     1            1           30d
eshop-basket-data                1/1     1            1           30d
eshop-catalog-api                1/1     1            1           30d
eshop-identity-api               1/1     1            1           30d
eshop-keystore-data              1/1     1            1           30d
eshop-locations-api              1/1     1            1           30d
eshop-marketing-api              1/1     1            1           30d
eshop-mobileshoppingagg          1/1     1            1           30d
eshop-nosql-data                 1/1     1            1           30d
eshop-ordering-api               1/1     1            1           30d
eshop-ordering-backgroundtasks   1/1     1            1           30d
eshop-ordering-signalrhub        1/1     1            1           30d
eshop-payment-api                1/1     1            1           30d
eshop-rabbitmq                   1/1     1            1           30d
eshop-sql-data                   1/1     1            1           30d
eshop-webhooks-api               1/1     1            1           30d
eshop-webhooks-web               1/1     1            1           30d
eshop-webmvc                     1/1     1            1           30d
eshop-webshoppingagg             1/1     1            1           30d
eshop-webspa                     1/1     1            1           30d
eshop-webstatus                  1/1     1            1           30d
```
Every public service is exposed through its own ingress resource, as you can see when using `kubectl get ing`:

```console
eshop-apigwmm        eshop.<your-guid>.<region>.aksapp.io   <public-ip>   80   4d
eshop-apigwms        eshop.<your-guid>.<region>.aksapp.io   <public-ip>   80   4d
eshop-apigwwm        eshop.<your-guid>.<region>.aksapp.io   <public-ip>   80   4d
eshop-apigwws        eshop.<your-guid>.<region>.aksapp.io   <public-ip>   80   4d
eshop-identity-api   eshop.<your-guid>.<region>.aksapp.io   <public-ip>   80   4d
eshop-webhooks-api   eshop.<your-guid>.<region>.aksapp.io   <public-ip>   80   4d
eshop-webhooks-web   eshop.<your-guid>.<region>.aksapp.io   <public-ip>   80   4d
eshop-webmvc         eshop.<your-guid>.<region>.aksapp.io   <public-ip>   80   4d
eshop-webspa         eshop.<your-guid>.<region>.aksapp.io   <public-ip>   80   4d
eshop-webstatus      eshop.<your-guid>.<region>.aksapp.io   <public-ip>   80   4d
```
Ingresses are automatically configured to use the public DNS of the AKS cluster provided by the "http routing" addon.

### Allow large headers (needed for login to work)

One more step is needed: you have to configure the nginx ingress controller that AKS provides to allow larger headers, because the headers sent by the identity server exceed the default size. Fortunately, this is very easy to do. Just type (from the `/k8s/helm` folder):

```powershell
kubectl apply -f aks-httpaddon-cfg.yaml
```
Then you can restart the pod that runs the nginx controller; its name starts with `addon-http-application-routing-nginx-ingress`.

**Note:** If running in a bash shell you can type:

```bash
kubectl delete pod $(kubectl get pod -l app=addon-http-application-routing-nginx-ingress -n kube-system -o jsonpath="{.items[0].metadata.name}") -n kube-system
```

You can view the MVC client at `http://[dns]/webmvc` and the SPA at `http://[dns]/`.
## Using Linkerd as Service Mesh (Advanced Scenario)

You can install eShopOnContainers ready to run with the [Linkerd](https://linkerd.io/) [Service Mesh](./Resiliency-and-Service-Mesh.md). To use Linkerd, you must complete the following steps:

1. Install Linkerd on your cluster. The process is described in the [Linkerd installation documentation](https://linkerd.io/2/getting-started/#step-0-setup). Steps 0 through 3 need to be done.
2. Then install eShopOnContainers as per the procedure described above, but using this command:

```powershell
.\deploy-all.ps1 -externalDns aks -aksName eshoptest -aksRg eshoptest -imageTag dev -useMesh $true
```

Once eShop is installed you can check that all non-infrastructure pods have two containers:

[Image: pods.png]

Now you can use the command `linkerd dashboard` to show the mesh and monitor all the connections between eShopOnContainers pods.

The mesh monitors all HTTP connections (including gRPC), but doesn't monitor RabbitMQ or any other connections (SQL, Mongo, ...).

For more information read the [Resiliency and Service Mesh](./Resiliency-and-Service-Mesh.md) page in the wiki.

## Customizing the deployment
Then install Helm from the package manager:

```powershell
choco install kubernetes-helm
```

```powershell
Get-FileHash -Path .\helm-v2.16.1-windows-amd64.zip -Algorithm sha256
```

### Install Helm server (Tiller)

To install Tiller:

- Go to the **deploy/k8s** folder in your local copy of the eShopOnContainers repo
- Create the Tiller service account by running:
To install the NGINX Ingress controller, run the following commands:

## Install eShopOnContainers using Helm

- Go to the **deploy/k8s/helm** folder in your local copy of the eShopOnContainers repo.

At this point you have two options for installing eShopOnContainers:
### Deploy your local images

The first task to deploy your local images is to create them, which you can achieve by just running the usual command from the CLI in the `src` folder of your local repo:

```powershell
docker-compose build
```

Then, just run this script from the `deploy\k8s\helm` folder, to deploy your local images:

```powershell
.\deploy-all.ps1 -imageTag linux-latest -useLocalk8s $true -imagePullPolicy Never
```
Setting the parameter `useLocalk8s` to `$true` forces the script to use `localhost` as the DNS for the Helm charts.

If you prefer to deploy the public images (built from the **dev** branch on each commit), just run this script:

```powershell
.\deploy-all.ps1 -imageTag dev
```

### Check deployment status
The reason is that MVC needs to access the Identity Server from both outside and inside the cluster.

Solving this requires some manual steps:

From the `deploy/k8s` folder run the following command:

```powershell
kubectl apply -f .\nginx-ingress\local-dockerk8s\mvc-fix.yaml
```

This will create two additional ingresses (for MVC and the Identity API) on any valid DNS that points to your machine. This enables the use of the 10.0.75.1 IP.

Update the `configmap` of Web MVC by typing (**line breaks are mandatory**):
Mac-setup.md

This configuration is necessary so you don't get the following error when trying to log in:

![](images/mac-setup/identity-server-invalid-redirect-uri.png)

That is because the default IP used to redirect to the Identity service/app used by the application (based on IdentityServer4) is `10.0.75.1`.

That IP is always set up when installing Docker for Windows on a Windows 10 machine. It is also used by Windows Server 2016 when using Windows Containers.

eShopOnContainers uses that IP as the default choice so anyone testing the app doesn't need to configure further settings. However, that IP is not used by "Docker for Mac", so you need to change the config.
### Setting up the docker-compose environment variables and settings

As explained in the [networking page of Docker Desktop for Mac](https://docs.docker.com/docker-for-mac/networking/#use-cases-and-workarounds), the Mac has a changing IP address (or none, if you have no network access). So the recommendation now is to connect to the special Mac-only DNS name `docker.for.mac.localhost`, which will resolve to the internal IP address used by the host.

In the `docker-compose.override.yml` file, replace the IdentityUrl environment variable (or any place where the IP 10.0.75.1 is used) with:

```yaml
IdentityUrl=http://docker.for.mac.localhost:5105
```

You could also set your real IP at the Mac's network adapter, but that would be a worse solution, as it'd depend on the network your Mac development machine is connected to.
Therefore, the WebMVC service definition in the `docker-compose.override.yml` should finally be configured as shown below:

```yaml
webmvc:
  environment:
    - ASPNETCORE_ENVIRONMENT=Development
    - ASPNETCORE_URLS=http://0.0.0.0:80
    - CatalogUrl=http://catalog.api
    - OrderingUrl=http://ordering.api
    - BasketUrl=http://basket.api
    - LocationsUrl=http://locations.api
    - IdentityUrl=http://docker.for.mac.localhost:5105
    - MarketingUrl=http://marketing.api
    - CatalogUrlHC=http://catalog.api/hc
    - OrderingUrlHC=http://ordering.api/hc
    - IdentityUrlHC=http://identity.api/hc
    - BasketUrlHC=http://basket.api/hc
    - MarketingUrlHC=http://marketing.api/hc
    - PaymentUrlHC=http://payment.api/hc
    - UseCustomizationData=True
    - ApplicationInsights__InstrumentationKey=${INSTRUMENTATION_KEY}
    - OrchestratorType=${ORCHESTRATOR_TYPE}
    - UseLoadTest=${USE_LOADTEST:-False}
  ports:
    - "5100:80"
```
You also have to update the `.env` file as follows:

```yaml
ESHOP_EXTERNAL_DNS_NAME_OR_IP=docker.for.mac.localhost
```

> **IMPORTANT**
>
> If you've already encountered the error shown above when trying to sign in, then **you have to delete the `sqldata` microservice**, because the **IdentityServer** database would've already been seeded, allowing only connections from `localhost`.
When starting up for the first time, the **IdentityServer** database is seeded with the allowed clients, taking the information from the environment variables below, in `docker-compose.override.yml`:

```yaml
identity-api:
  environment:
    - ASPNETCORE_ENVIRONMENT=Development
    - ASPNETCORE_URLS=http://0.0.0.0:80
    - SpaClient=http://${ESHOP_EXTERNAL_DNS_NAME_OR_IP}:5104
    - XamarinCallback=http://${ESHOP_PROD_EXTERNAL_DNS_NAME_OR_IP}:5105/xamarincallback
    - ConnectionString=${ESHOP_AZURE_IDENTITY_DB:-Server=sqldata;Database=Microsoft.eShopOnContainers.Service.IdentityDb;User Id=sa;Password=Pass@word}
    - MvcClient=http://${ESHOP_EXTERNAL_DNS_NAME_OR_IP}:5100
    - LocationApiClient=http://${ESHOP_EXTERNAL_DNS_NAME_OR_IP}:5109
    - MarketingApiClient=http://${ESHOP_EXTERNAL_DNS_NAME_OR_IP}:5110
    - BasketApiClient=http://${ESHOP_EXTERNAL_DNS_NAME_OR_IP}:5103
    - OrderingApiClient=http://${ESHOP_EXTERNAL_DNS_NAME_OR_IP}:5102
    - MobileShoppingAggClient=http://${ESHOP_EXTERNAL_DNS_NAME_OR_IP}:5120
    - WebShoppingAggClient=http://${ESHOP_EXTERNAL_DNS_NAME_OR_IP}:5121
    - WebhooksApiClient=http://${ESHOP_EXTERNAL_DNS_NAME_OR_IP}:5113
    - WebhooksWebClient=http://${ESHOP_EXTERNAL_DNS_NAME_OR_IP}:5114
    - UseCustomizationData=True
    - ApplicationInsights__InstrumentationKey=${INSTRUMENTATION_KEY}
    - OrchestratorType=${ORCHESTRATOR_TYPE}
  ports:
    - "5105:80"
```
If you re-deploy with `docker-compose up`, the login page should now work properly, as in the screenshot below.

NOTE: For some reason, the Safari browser can't reach `docker.for.mac.localhost`, but Chrome on Mac works with no issues. Since the usage of `docker.for.mac.localhost` is just for development purposes, just use Chrome for tests.

![](images/mac-setup/home-page-without-authentication.png)
Building the docker images should take between 15 and 30 minutes to complete.

The first time you run this command it'll take some additional time, as it needs to pull/download the dotnet/core/aspnet and SDK images.

When the `docker-compose build` command finishes, you can check the created images with the Docker CLI (`docker images`).
For running just the Docker containers and web apps, you'd just need the .NET Core SDK.

But if you want to try the eShopOnContainers mobile app, that requires Xamarin and therefore the iOS and Android platforms, too. Those mobile platforms are optional for this Wiki walkthrough, though.

**Make sure you have the latest SDK 3.0 version from <https://dotnet.microsoft.com/download/dotnet-core/3.0> installed.**

### Open the solution with Visual Studio for Mac

Run Visual Studio for Mac and open the solution `eShopOnContainers-ServicesAndWebApps.sln`.

If you just want to run the containers/microservices and web apps, do NOT open the other solutions, like `eShopOnContainers.sln`, as those solutions will also open the Xamarin projects and that might slow you down when testing due to additional dependencies in VS.

After opening the `eShopOnContainers-ServicesAndWebApps.sln` solution for the first time, it is recommended to wait for a few minutes, as VS will be restoring many NuGet packages and the solution won't be able to compile or run until it gets all the NuGet package dependencies (this is only needed the first time you open the solution; afterwards it's a lot faster).

This is VS for Mac with the `eShopOnContainers-ServicesAndWebApps.sln` solution.
Resiliency-and-Service-Mesh.md (new file)

> **CONTENT**

- [Service Mesh](#service-mesh)
- [Service Mesh and eShopOnContainers](#service-mesh-and-eshoponcontainers)
- [Why Linkerd?](#why-linkerd)
- [Installing the Mesh](#installing-the-mesh)
- [Enabling Mesh](#enabling-mesh)
- [Service profiles](#service-profiles)

Previous versions of eShopOnContainers used the [Polly library](https://github.com/App-vNext/Polly) to provide resiliency scenarios. Polly is a fantastic open source library that provides advanced resiliency scenarios and patterns like retries (with exponential backoff) or circuit breakers.

This version of eShopOnContainers drops the use of Polly in the following cases:

- HTTP REST calls between microservices
- gRPC calls between microservices

Polly is still used to guarantee resiliency in database connections and RabbitMQ/Azure Service Bus connections, but it's no longer used for resiliency in synchronous microservice-to-microservice communication.
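For context, this is roughly what such an infrastructure-level Polly policy looks like; a minimal sketch with exponential backoff, where the connection object is illustrative:

```cs
using System;
using Polly;

// Retry up to 5 times with exponential backoff (2, 4, 8, 16, 32 seconds)
// when opening an infrastructure connection such as RabbitMQ or SQL.
var retryPolicy = Policy
    .Handle<Exception>()
    .WaitAndRetry(
        retryCount: 5,
        sleepDurationProvider: attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)));

retryPolicy.Execute(() => rabbitMqConnection.TryConnect()); // illustrative connection object
```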
## Service Mesh

In production scenarios based on Kubernetes, using a [Service Mesh](https://docs.microsoft.com/azure/aks/servicemesh-about) is a good option to provide resiliency between your services.

A Service Mesh is a product that offers resiliency, observability and other features to the workloads running on your cluster. There are various meshes you can use, and each one has its own characteristics, so you should evaluate carefully which one suits your needs best. Once installed, the mesh will monitor all the traffic between your services and apply the needed policies. Those policies could be for resiliency (like using retries and/or circuit breakers) or for other tasks (like encrypting all traffic inside the cluster).

When you use a Mesh for resiliency, nothing special is needed **in your code**. The Mesh is a pure infrastructure concept, so your Kubernetes files will be affected, but your code won't. If the Mesh is controlling the network and applying a specific retry policy, those retries will be made automatically (at the Mesh level) without your code even noticing. From your code's perspective, you just make a single call, which can either succeed or fail after some retries. If the mesh is applying an open circuit breaker, your code won't know that: simply, all your network calls will fail (until the circuit is closed again).

This simplifies your code and allows you to focus on the business requirements, letting the mesh apply the needed policies.
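To underline that point, here is a sketch of what calling code looks like when the mesh owns resiliency (the URL is illustrative): there is no retry logic in the application code at all.

```cs
using System.Net.Http;
using System.Threading.Tasks;

public static async Task<string> GetCatalogAsync(HttpClient client)
{
    // No Polly policy, no retry loop: a single logical call. If the mesh
    // applies retries or an open circuit breaker, that happens at the
    // network level and this code only observes success or failure.
    return await client.GetStringAsync("http://catalog-api/api/v1/catalog/items"); // illustrative URL
}
```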
## Service Mesh and eShopOnContainers

The reason to drop Polly for microservice-to-microservice communications is to show the use of a Service Mesh. One of the reasons to use a Service Mesh is to delegate communication resiliency to it, setting policies for retries, circuit breakers and QoS.

To use a service mesh, eShopOnContainers has to be deployed in a Kubernetes cluster. Using eShopOnContainers from a docker host (deployed with docker-compose) means you can't use the service mesh and, in this case, there's no built-in resiliency for the communications.

eShopOnContainers is ready to use [Linkerd](https://linkerd.io) as its Service Mesh. There were several options to choose from, but Linkerd was chosen mainly for its ease of installation and configuration, and because it has a minimal impact on the cluster where it's installed.

### Why Linkerd?

There are various meshes out there, and selecting the right one for your project can be a hard choice. Every mesh offers a set of features and follows different approaches to solve the same set of problems. Based on your experience, deployment operations, code management and requirements, one mesh can fit better than others.

So, **before choosing a specific service mesh, evaluate various options, explore some of them, and make a decision based on your experience and needs**. Linkerd was chosen for eShopOnContainers, but this doesn't mean that Linkerd should be the mesh for your projects. In the future, eShopOnContainers might be integrated with other meshes as well.

Before selecting a Mesh, you should ask yourself some questions:

- Do you _really_ need the Mesh? A Mesh is a non-trivial piece of infrastructure that impacts your solution. Don't add complexity if it's not needed. Some of the problems that a Mesh solves can be solved in other ways (that is, resiliency can be solved with Polly, A/B testing can be solved using different services and a standard ingress controller, and so on).
- Can your infrastructure support the Mesh? The mesh is not free. Usually every Mesh comes with a set of containers, runs side-car containers alongside all your workloads, and needs additional containers running as a control plane. Those containers require CPU and memory to run.

If you evaluate those questions and conclude that you want to use a Mesh, then you need to choose the right one for you. There are various options out there:

- [Istio](https://istio.io/): Istio is a full-featured and highly customizable Mesh. It offers a lot of powerful features, but comes with a high learning curve and complex deployment (for example, 80+ CRDs are added to your cluster). Although a basic installation of Istio is not complex, getting the real benefits requires a significant amount of work. Istio integrates a lot of different products (Envoy, Grafana, Prometheus, Jaeger, Kiali), with each one playing a specific role in the mesh.
- [Consul](https://www.consul.io/mesh.html): Consul from Hashicorp is another option for a service Mesh. Like Istio, it uses Envoy as sidecars and offers a wide set of advanced capabilities.
- [Linkerd](https://linkerd.io/2/overview/): Linkerd (please note that in eShop, "Linkerd" always refers to **Linkerd 2**) is a lightweight and easy-to-install service mesh. It doesn't offer the same broad range of capabilities as Istio or Consul, but it's easier to install and start with.

**Linkerd was chosen for eShopOnContainers because it's easy to install and set up**. Other meshes offer a broader range of services, but most of them go beyond the scope of a project like eShopOnContainers. However, remember: you must evaluate the options for your needs before choosing your mesh, if any.
## Installing the Mesh

To use eShopOnContainers under Linkerd, you need to install Linkerd in your cluster first. This is an administrative task performed only once. You can find the details, very straightforward and clearly described, in its [installation page](https://linkerd.io/2/getting-started/). Just follow steps 0 through 3.

## Enabling Mesh

Once Linkerd is installed you can deploy eShopOnContainers. To enable the integration with Linkerd, set the parameter `useMesh` to `$true` when running the `deploy-all.ps1` script. For the curious ones, this parameter passes the value `true` for `inf.mesh.enabled` to all helm charts. When this value is enabled, the helm charts:

1. Add the `linkerd.io/inject: enabled` annotation to all needed deployments.
2. Add the annotations declared in the file `ingress_values.yaml` to all ingress resources. The provided `ingress_values.yaml` is as follows:

```yaml
ingress:
  mesh:
    annotations:
      nginx.ingress.kubernetes.io/configuration-snippet: |
        proxy_set_header l5d-dst-override $service_name.$namespace.svc.cluster.local:$service_port;
        proxy_hide_header l5d-remote-ip;
        proxy_hide_header l5d-server-id;
```
This is the specific configuration needed to enable the integration between the NGINX ingress (and/or Http Application Routing, which is derived from NGINX) and Linkerd. If you use another ingress controller, you will need to update this file accordingly, following the [Linkerd ingress integration](https://linkerd.io/2/tasks/using-ingress/) instructions.

## Service profiles

By default, Linkerd only monitors the network status and gives you detailed results that you can view by using the `linkerd` CLI tool.

To enable retries and other network policies, you must declare a _service profile_ for the specific service you want to be controlled. A very detailed explanation about service profiles is in the [Linkerd documentation](https://linkerd.io/2/tasks/setting-up-service-profiles/).

Just for reference, the service profiles for the basket and catalog APIs are provided. Feel free to update them, play with them and explore all Linkerd scenarios!

You can find the service profiles in the folder `deploy/k8s/linkerd`. Just use `kubectl apply` to apply them to the cluster. Once a service profile is applied, Linkerd is able to give you detailed statistics (by route) and apply retries and other policies.

**Note** Previous versions of eShopOnContainers had specific business scenarios to demo the [circuit breaker](https://en.wikipedia.org/wiki/Circuit_breaker_design_pattern) pattern. These scenarios have been removed since, when using a mesh, the circuit breakers are applied by the mesh under the hood, and the caller doesn't receive any specific information that a request has been aborted by the circuit breaker. Right now there is no specific option in Linkerd 2 to set a circuit breaker policy. This could change in the future as the mesh itself evolves.
Roadmap.md

Features that will be included in the next major release.

To Be Decided (TBD)

## Feature candidates

Check the [backlog](Backlog) for candidate features.
git clone https://github.com/dotnet-architecture/eShopOnContainers.git

```console
cd eShopOnContainers
docker-compose build
```

While building the docker images, you should see something like the following image, and the process should take between 10 and 30 minutes to complete, depending on the system speed.

If you want to explore the code and debug the application to see it working, you have to install at least VS 2017 (15.9); you can install the latest release from https://visualstudio.microsoft.com/vs/.

**Make sure you have the latest SDK 3.0 version from <https://dotnet.microsoft.com/download/dotnet-core/3.0> installed.**

Upon running the installer, select the following workloads depending on the apps you intend to test or work with:
|
@ -19,15 +19,18 @@
|
|||||||
## Explore
|
## Explore
|
||||||
|
|
||||||
- [Architecture](Architecture)
|
- [Architecture](Architecture)
|
||||||
|
- [BFF implementation](BFF-implementation)
|
||||||
- [Application](Explore-the-application)
|
- [Application](Explore-the-application)
|
||||||
- [Code](Explore-the-code)
|
- [Code](Explore-the-code)
|
||||||
- [Simplified CQRS & DDD](Simplified-CQRS-and-DDD)
|
- [Simplified CQRS & DDD](Simplified-CQRS-and-DDD)
|
||||||
|
- [gRPC](gRPC)
|
||||||
- [API gateways](API-gateways)
|
- [API gateways](API-gateways)
|
||||||
- [Webhooks](Webhooks)
|
- [Webhooks](Webhooks)
|
||||||
- [Azure Key Vault](Azure-Key-Vault)
|
- [Azure Key Vault](Azure-Key-Vault)
|
||||||
- Logging and Monitoring
|
- Logging and Monitoring
|
||||||
- [Serilog & Seq](Serilog-and-Seq)
|
- [Serilog & Seq](Serilog-and-Seq)
|
||||||
- [Using HealthChecks](Using-HealthChecks)
|
- [Using HealthChecks](Using-HealthChecks)
|
||||||
|
- [Resiliency and Service Mesh](Resiliency-and-Service-Mesh)
|
||||||
- [ELK Stack](ELK-Stack)
|
- [ELK Stack](ELK-Stack)
|
||||||
- [Application Insights](Application-Insights)
|
- [Application Insights](Application-Insights)
|
||||||
- Tests
|
- Tests
|
||||||
|
gRPC.md (new file)

> **CONTENT**

- [gRPC usage in eShopOnContainers](#grpc-usage-in-eshoponcontainers)
- [gRPC implementation in eShopOnContainers](#grpc-implementation-in-eshoponcontainers)
- [Generating server or client stubs from proto file in .NET Core 3](#generating-server-or-client-stubs-from-proto-file-in-net-core-3)
- [Creating the gRPC server](#creating-the-grpc-server)
- [Adding the gRPC pipeline into ASP.NET Core](#adding-the-grpc-pipeline-into-aspnet-core)
- [Creating the gRPC client](#creating-the-grpc-client)
- [Using gRPC without TLS](#using-grpc-without-tls)
- [More information](#more-information)

One of the big new features of .NET Core 3.0 is native support for [gRPC](https://grpc.io/). eShopOnContainers makes use of gRPC for internal microservice-to-microservice synchronous communication. Note that, in eShop, most of the communication between microservices is decoupled and asynchronous, using an Event Bus (supporting either RabbitMQ or Azure Service Bus).

gRPC is a high-performance communication protocol based on HTTP/2 and protocol buffers. It should be the primary choice for direct synchronous communication between services (as opposed to other protocols, like AMQP, used for asynchronous communication such as queues or pub/sub).

Its main benefits over using HTTP with JSON are:

- Protocol buffers are a binary, high-performance serialization mechanism. Depending on the language implementation, protocol buffers can be up to 8x faster than JSON serialization, while the messages can be around 60%-80% smaller. The sketch after this list illustrates the size difference.
- Support for streaming of data.
- The contract between service and client is explicit (by using _proto_ files).
|
||||||
|
|
||||||
|
## gRPC usage in eShopOnContainers
|
||||||
|
|
||||||
In the current implementation, the use of gRPC is limited to the communication between aggregators and microservices. eShopOnContainers currently has the following synchronous communications between services:

1. External clients (that is, the Xamarin app or the browser) to API Gateways (BFFs): HTTP/REST.
2. From BFFs to aggregators: HTTP/REST.
   - This is basically a request forward: based on the request route, the call is forwarded from the BFF to the aggregator. This is done for "logically complex" requests, where a single client call involves various microservices coordinated by the aggregator.
3. From BFFs to microservices: HTTP/REST.
   - This is basically a request forward: based on the request route, the call is forwarded from the BFF to the internal microservice. This is done for simple CRUD requests.
4. From aggregators to microservices: gRPC.
Currently there's no transcoding from gRPC to HTTP/REST. Transcoding would allow the use of gRPC from the BFFs to both aggregators and microservices, while keeping an HTTP/REST interface for the clients. This gRPC<->HTTP/REST translation could be done at the BFF level.
The following microservices expose gRPC endpoints:

- Ordering API
- Catalog API
- Basket API

And the following BFFs are gRPC clients:

- Mobile Shopping
- Web Shopping
## gRPC implementation in eShopOnContainers
gRPC is language agnostic: all services are defined using _proto_ files (usually with the `.proto` extension). These files use the [protocol buffers language](https://developers.google.com/protocol-buffers/docs/proto) and define the interface of the service. From the _proto_ file, code for creating the server and the client can be generated for every supported language. The canonical tool is _protoc_, which supports generating C# code.

Starting with .NET Core 3, gRPC is deeply integrated into both the tooling and the framework, to make the experience of using gRPC as seamless as possible.
### Generating server or client stubs from proto file in .NET Core 3
The tooling is integrated in `msbuild` (so it can be used from Visual Studio and the `dotnet build` CLI command) and generates the code needed to create a gRPC server or client from a _proto_ file. The _proto_ file has to be referenced in the `csproj` using a `<Protobuf>` tag (inside an `<ItemGroup>`):
```xml
<ItemGroup>
  <Protobuf Include="Protos\catalog.proto" GrpcServices="Client" />
</ItemGroup>
```
The `GrpcServices` attribute specifies whether a `Server` or a `Client` stub has to be generated.

>**Note** You can include as many `<Protobuf>` tags as you need.
When you compile the code (either by running _Build_ from Visual Studio or `dotnet build`), all the code is generated and placed in the `obj` folder. This is intentional: generated code should never be checked into the source control repository.


### Creating the gRPC server
The generated server stub code defines an abstract base class with one overridable method for every `rpc` method defined in the _proto_ file. You implement the service by overriding these methods.

So, given a _proto_ file that defines the following methods:
```proto
service Catalog {
    rpc GetItemById (CatalogItemRequest) returns (CatalogItemResponse) {}
    rpc GetItemsByIds (CatalogItemsRequest) returns (PaginatedItemsResponse) {}
}
```
A `CatalogBase` abstract class is generated, which you inherit from to implement the service:
```cs
public class CatalogService : CatalogBase
{
    public CatalogService()
    {
    }

    public override async Task<CatalogItemResponse> GetItemById(CatalogItemRequest request, ServerCallContext context)
    {
        // Code
    }

    public override async Task<PaginatedItemsResponse> GetItemsByIds(CatalogItemsRequest request, ServerCallContext context)
    {
        // Code
    }
}
```
All the C# types needed for parameters and return values are generated automatically.
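As an illustration, a method body might look like the following minimal sketch. Everything beyond the method signature is hypothetical: the message fields and the `_repository` abstraction are illustrative and are not the actual eShopOnContainers code.

```cs
public override async Task<CatalogItemResponse> GetItemById(CatalogItemRequest request, ServerCallContext context)
{
    // Look the item up through a hypothetical repository abstraction.
    var item = await _repository.FindItemAsync(request.Id);

    if (item is null)
    {
        // Surface "not found" as a proper gRPC status code instead of returning null.
        throw new RpcException(new Status(StatusCode.NotFound, $"Item with id {request.Id} not found"));
    }

    // Map the domain entity to the generated protobuf response type.
    return new CatalogItemResponse
    {
        Id = item.Id,
        Name = item.Name
    };
}
```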
### Adding the gRPC pipeline into ASP.NET Core
ASP.NET Core supports direct integration of the gRPC pipeline. You only have to call the `MapGrpcService` method of the `IEndpointRouteBuilder` in your `Startup` class:
```cs
app.UseEndpoints(endpoints =>
{
    endpoints.MapDefaultControllerRoute();
    endpoints.MapControllers();
    endpoints.MapGrpcService<CatalogService>();
});
```
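For this mapping to work, the gRPC framework services also need to be registered in the dependency injection container. A minimal sketch of the corresponding `ConfigureServices` method:

```cs
public void ConfigureServices(IServiceCollection services)
{
    // Registers the gRPC framework services so that MapGrpcService<T> can resolve them.
    services.AddGrpc();
}
```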
### Creating the gRPC client
If you are creating a gRPC client instead of a server, you need to create a `GrpcChannel` and then a gRPC client over that channel:
```cs
var channel = GrpcChannel.ForAddress(UrlOfService);
var client = new Basket.BasketClient(channel);
```
The `Basket.BasketClient` class is the stub generated from the _proto_ file. You can then call the methods of the `BasketClient` class.
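For example, a call could look like the following sketch, assuming the _proto_ file defines a `GetBasketById` rpc; the method and message field names here are hypothetical:

```cs
// Unary call; the generated client exposes an awaitable XxxAsync method per rpc.
var response = await client.GetBasketByIdAsync(new BasketRequest { Id = "some-buyer-id" });
Console.WriteLine($"The basket contains {response.Items.Count} item(s).");
```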
### Using gRPC without TLS
gRPC works over HTTP/2 only. Usually, when a client connects to a server, the connection starts as HTTP/1.1 and is promoted to HTTP/2 only if both server and client support it. This promotion is performed through protocol negotiation, usually implemented with ALPN, which requires TLS.
**That means that, by default, you need a TLS-enabled endpoint to be able to use gRPC.**
However, internal microservices often don't have TLS-enabled endpoints (precisely because those endpoints are internal). In this case you have two options:
- Open a single Kestrel endpoint, listening on HTTP/2
- Open two Kestrel endpoints, one listening on HTTP/1.1 and the other listening on HTTP/2
The second option is needed if your server must support HTTP/1.1 clients in addition to gRPC clients. The following C# code (in `Program.cs`) shows the second approach:
```cs
WebHost.CreateDefaultBuilder(args)
    .ConfigureKestrel(options =>
    {
        // Endpoint for regular HTTP/1.1 clients (also accepts HTTP/2).
        options.Listen(IPAddress.Any, ports.httpPort, listenOptions =>
        {
            listenOptions.Protocols = HttpProtocols.Http1AndHttp2;
        });

        // Dedicated HTTP/2-only endpoint for gRPC traffic.
        options.Listen(IPAddress.Any, ports.grpcPort, listenOptions =>
        {
            listenOptions.Protocols = HttpProtocols.Http2;
        });
    })
```
But this is not enough: we also need to tell the gRPC client that it can connect directly to an HTTP/2 endpoint without TLS. By default, .NET doesn't allow a gRPC client to connect to a non-TLS endpoint.
The following lines are needed on the client:
```cs
AppContext.SetSwitch("System.Net.Http.SocketsHttpHandler.Http2UnencryptedSupport", true);
AppContext.SetSwitch("System.Net.Http.SocketsHttpHandler.Http2Support", true);
```
These switches only need to be set once, at client startup, before any channel is created.
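Putting it together, a client connecting to an internal, non-TLS gRPC endpoint could look like this sketch (the `http://catalog-api:81` address is illustrative):

```cs
// Allow HTTP/2 calls over an unencrypted connection.
AppContext.SetSwitch("System.Net.Http.SocketsHttpHandler.Http2UnencryptedSupport", true);

// Note the "http" (not "https") scheme: the channel talks HTTP/2 in the clear.
var channel = GrpcChannel.ForAddress("http://catalog-api:81");
var client = new Catalog.CatalogClient(channel);
```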
## More information
- [gRPC](https://grpc.io/)
- [Introduction to gRPC services](https://docs.microsoft.com/aspnet/core/grpc/?view=aspnetcore-3.0)
BIN images/API-gateways/eshop-overall-architecture.png (new file, 630 KiB)
BIN images/BFF-implementation/bff-pattern.png (new file, 14 KiB)
BIN images/Deploy-to-Azure-Kubernetes-Service-(AKS)/create-kubernetes-cluster-authentication.png (new file, 233 KiB)
BIN images/Deploy-to-Azure-Kubernetes-Service-(AKS)/create-kubernetes-cluster-basics.png (new file, 255 KiB)
BIN images/Deploy-to-Azure-Kubernetes-Service-(AKS)/create-kubernetes-cluster-monitoring.png (new file, 266 KiB)
BIN images/Deploy-to-Azure-Kubernetes-Service-(AKS)/create-kubernetes-cluster-networking.png (new file, 266 KiB)
BIN images/Deploy-to-Azure-Kubernetes-Service-(AKS)/pods.png (new file, 81 KiB)
BIN images/gRPC/grpc-generated-files.png (new file, 52 KiB)