# Superseded

This wiki page has been superseded by:

- [Windows setup](Windows-setup)
- [Mac setup](Mac-setup)

# Windows dev machine

## Recommended hardware requirements

- 16 GB of RAM. Docker Community Edition (aka Docker for Windows/Mac) needs Hyper-V to run the Linux Docker host, and the solution also runs a SQL Server container and a Redis container, so an 8 GB RAM machine might be too tight.

## Software requirements for Windows

- Docker Community Edition (aka Docker for Windows). Requires 64-bit Windows 10 Pro with Hyper-V enabled.
- Latest .NET Core 2.1 SDK from: https://www.microsoft.com/net/download
- (Optional) Visual Studio 2017 **15.8** or later, which is much better for debugging multi-container apps.

The requirements for VS 2017 are:

Supported operating systems

Visual Studio 2017 will install and run on the following operating systems:

- Windows 10 version 1507 or higher (Windows 1803 recommended): Home, Professional, Education, and Enterprise (LTSB is not supported)
- Windows Server 2016: Standard and Datacenter
- Windows 8.1 (with Update 2919355): Basic, Professional, and Enterprise
- Windows Server 2012 R2 (with Update 2919355): Essentials, Standard, Datacenter
- Windows 7 SP1 (with latest Windows Updates): Home Premium, Professional, Enterprise, Ultimate

However, the requirements for "Docker for Windows" are more restrictive:

https://docs.docker.com/docker-for-windows/install/#download-docker-for-windows

**Why is Windows 10 required?**

Docker for Windows uses Windows Hyper-V. While older Windows versions have Hyper-V, their Hyper-V implementations lack features critical for Docker for Windows to work.

Docker for Windows requires 64-bit Windows 10 Pro, Enterprise or Education (1511 November update, Build 10586 or later) and Microsoft **Hyper-V**.

# Mac dev machine

## Recommended hardware requirements

- 16 GB of RAM. Since the Mac runs a VM with the Linux Docker host, and we're also running a SQL Server container and a Redis container, 8 GB of RAM might not be enough.

## Software requirements for Mac

- Docker Community Edition for Mac (aka Docker for Mac). Requires OS X El Capitan 10.11 or a newer macOS release.
- .NET Core 2.1 SDK for Mac
- (Optional) Visual Studio for Mac
- (Optional) Visual Studio Code for Mac

Docker for Mac requires OS X El Capitan 10.11 or a newer macOS release, running on a 2010 or newer Mac with Intel's hardware support for MMU virtualization.

## Questions

[QUESTION] Answer +1 if the solution is working for you (through VS 2017 or the CLI environment):

https://github.com/dotnet/eShopOnContainers/issues/107
# Superseded

## Related readme files (use them for more information after reading this)

* [https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/branch-guide.md](https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/branch-guide.md): Branches used and their purpose. Any branch not listed in this file is "temporary" and "unsupported" and can be deleted at any time.

## Version 2.1 (current version, based on .NET Core 2.1)

eShopOnContainers v2.1 is evolving in the DEV and MASTER branches.
The features supported in eShopOnContainers v2.1 are those supported in v2.0 (listed below), plus the following:

- Use of HttpClientFactory with Polly integration for resilient HTTP communication (retries with exponential backoff and circuit breakers)
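Polly's actual policies are C# code configured on HttpClientFactory; the circuit-breaker idea itself, though, is easy to sketch in any language. Below is a minimal, hypothetical circuit breaker (class name, thresholds, and API are illustrative, not the project's code):

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: opens after N consecutive failures,
    fails fast while open, and half-opens after a cool-down period."""

    def __init__(self, failure_threshold=3, reset_timeout=30.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()  # open the circuit
            raise
        self.failures = 0  # success closes the circuit again
        return result
```

The point of failing fast while open is that a struggling downstream service gets breathing room instead of a retry storm.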
- Real-time communication: a SignalR Hub microservice/container for real-time communication. Scenario: notification of an order's status changes. This is new, since SignalR is provided in ASP.NET Core 2.1.

## Version 2 (based on .NET Core 2.0)

- Docker containers with .NET Core; Linux images and Windows containers supported and tested.
- .NET Core 2.0 and EF Core 2.0 support
- Visual Studio 2017, including VS Docker Tooling based on docker-compose.yml files supporting multi-container debugging, etc. CLI and VS Code environments are also supported.
- CLI build process using a Docker ASP.NET Core build image (microsoft/aspnetcore-build) with all the needed SDKs, so compilation takes place in the same container recommended for your CI (Continuous Integration) pipeline. No need to have all dependencies on the dev machine when using this method. [Using this CI docker-compose file](https://github.com/dotnet/eShopOnContainers/blob/master/docker-compose.ci.build.yml).
- Microservice-oriented architecture, easy to get started with, described in this <a href='https://aka.ms/microservicesebook'>Guide/eBook</a>.
- Implementation of the [API Gateway and BFF (Backend-For-Frontend) patterns](http://microservices.io/patterns/apigateway.html), so you can filter and publish simplified APIs and URIs and apply additional security in that tier, while hiding/securing the internal microservices from the client apps or outside consumers. The sample API Gateways in eShopOnContainers are based on [Ocelot](https://github.com/ThreeMammals/Ocelot), an OSS lightweight API Gateway solution explained [here](http://threemammals.com/ocelot).
- Support for Windows Containers running on Windows Server Nano, using different Docker base images instead of the Linux-based images.
- INTEGRATION EVENTS with Event Bus implementations: event-driven communication between microservices/containers based on Event Bus interfaces and two implementations:
  1. (Implemented as PoC) Standalone pub/sub messaging implementation based on an out-of-proc RabbitMQ container
  2. (Future version) Azure-attached implementation based on Azure Service Bus, using Topics for pub/sub

  Two integration event scenarios to implement in the app:
  1. Simple (higher priority): change product info (name, image URL, etc.) in the Catalog and update it in the existing orders and baskets (everything except the price)
  2. (Future version) Complex: events propagating an order's state changes related to the order-process SAGA (InProcess, Paid, Handling, Shipped, Canceled if it times out because it was not paid, etc.)
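The publish/subscribe shape behind those Event Bus interfaces can be sketched with a toy in-process bus. This is only an illustration of the flow (the real implementations sit on RabbitMQ or Azure Service Bus topics, in C#; the event name and payload here are made up):

```python
from collections import defaultdict

class EventBus:
    """Toy in-process pub/sub bus mirroring the subscribe/publish
    shape of an integration event bus."""

    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, event_name, handler):
        self._handlers[event_name].append(handler)

    def publish(self, event_name, payload):
        # Every subscribed handler receives the event.
        for handler in self._handlers[event_name]:
            handler(payload)

# Sketch of scenario 1: propagate a catalog product change to a
# subscriber (a plain function standing in for the Basket service).
bus = EventBus()
updated = []
bus.subscribe("ProductInfoChanged", lambda e: updated.append(e["product_id"]))
bus.publish("ProductInfoChanged", {"product_id": 42, "name": "New name"})
```

With a real broker the subscriber lives in another process, but the contract (event name plus serialized payload) is the same idea.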
- DOMAIN EVENTS: implement Domain Events, which are related to but not the same as integration events used for inter-microservice communication. Domain Events are initially intended to be used within a single microservice's domain, e.g. communicating state changes between aggregates, although they can lead to Integration Events if what happened in a microservice's domain should impact other microservices.

  Scenarios implemented:
  1. Check price changes between basket and catalog when converting to an order: https://github.com/dotnet/eShopOnContainers/issues/38
  2. Multiple AGGREGATE changes within the same command handler, decoupled by domain events.
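The "aggregate records events, the handler dispatches them" pattern can be sketched as follows. This is a hypothetical, heavily simplified model (the real implementation uses MediatR in C#; the names here are illustrative):

```python
class Order:
    """Aggregate root that records domain events instead of calling
    other aggregates directly."""

    def __init__(self, order_id):
        self.order_id = order_id
        self.status = "Submitted"
        self.domain_events = []

    def set_paid(self):
        self.status = "Paid"
        self.domain_events.append(("OrderStatusChangedToPaid", self.order_id))

def handle_command(order, handlers):
    # The command handler mutates the aggregate, then dispatches the
    # recorded domain events so other parts of the domain react in a
    # decoupled way.
    order.set_paid()
    for event in order.domain_events:
        for h in handlers.get(event[0], []):
            h(event)
    order.domain_events.clear()
```

The aggregate never knows who listens; handlers are looked up by event name only, which is what decouples the multiple aggregate changes mentioned in scenario 2.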
- Resilient communication: resilient synchronous HTTP communication with retry loops with exponential backoff, plus circuit-breaker pattern implementations to avoid DDoS initiated by the clients themselves. Implemented with the Polly OSS library: https://github.com/App-vNext/Polly/

- Idempotent updates at the microservices, so the same update (like order creation) cannot be executed multiple times. The server must implement operations idempotently: an operation is idempotent if it produces the same result when performed multiple times. Implementing idempotency is domain-specific.
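One common way to get idempotency is to deduplicate by a client-supplied request id, caching the original result so a replay returns it instead of running the operation again. A minimal sketch under that assumption (the class and method names are illustrative; the project's actual implementation is C# and domain-specific):

```python
class IdempotentOrderHandler:
    """Rejects duplicate executions of the same client request:
    results are cached by request id, so replays return the original
    outcome instead of creating a second order."""

    def __init__(self):
        self._processed = {}   # request_id -> result of the first run
        self._next_order_id = 1

    def create_order(self, request_id, payload):
        if request_id in self._processed:
            return self._processed[request_id]  # replay: same result
        order_id = self._next_order_id          # real work happens once
        self._next_order_id += 1
        self._processed[request_id] = order_id
        return order_id
```

In a distributed setup the `_processed` map would live in durable storage, so the dedup survives restarts.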
- Exception handling: an ASP.NET middleware extension with business exceptions plus a generic exception handler (ExceptionHandlerHandler)

- Command validations with a MediatR decorator and FluentValidation: https://github.com/JeremySkinner/FluentValidation

- HEALTH CHECKS: the Health Check library (preview) from the ASP.NET team. It provides a model of health-check results, a middleware to return ok/bad, and a polling service calling the health-check service and publishing the results (open/pluggable to orchestrators, App Insights, etc.)
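The model/middleware split described above boils down to "run every registered check, aggregate to ok/bad". A toy aggregator, assuming each check is a function returning pass/fail (names illustrative; the real library is the ASP.NET Health Checks preview):

```python
def check_services(checks):
    """Run each registered health check and aggregate the results:
    each check returns truthy/falsy, and the overall status is 'ok'
    only if every check passes."""
    results = {name: bool(fn()) for name, fn in checks.items()}
    status = "ok" if all(results.values()) else "bad"
    return status, results
```

A polling service would call this periodically and publish `(status, results)` to an orchestrator or App Insights.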
- Legacy ASP.NET WebForms client running in a Windows Container (Catalog Manager, simple CRUD maintenance), consuming the same Catalog microservice. This is an example of a "lift and shift" scenario.

- Monolithic ASP.NET MVC app (just a web app, no microservices) with the public area of the eShop (basically, a home page with catalog-view functionality).
  - Running as a container: https://github.com/dotnet/eShopOnContainers/tree/master/src/Web/WebMonolithic
  - A more advanced layered web app, available as a segregated project at https://github.com/dotnet/eShopOnWeb. That app does not use Docker containers, just a plain web app.

- Additional microservices (Marketing and Location microservices) with data stored in MongoDB containers or Azure Cosmos DB

## Deploying to Azure

<img src="img/exploring-to-production-ready.png">

- Deployment/support in Kubernetes on ACS (Linux containers tested)

- Deployment/support in Azure Service Fabric (Windows containers and Linux containers)

- Azure Storage Blob: using Azure Blobs to store the product images instead of plain files in folders

- Azure Functions: an Azure Function microservice (the Marketing feature has an Azure Function returning the marketing campaign content)

- DevOps: eShopOnContainers scripts/procedures for CI/CD pipelines in Visual Studio Team Services

## Previous version 1.0 (based on .NET Core 1.1)

This is an older version supporting .NET Core 1.1, tagged as v1.0 and available here:
https://github.com/dotnet-architecture/eShopOnContainers/releases/tag/netcore1.1

# VNext

Other possible features, to be evaluated for the backlog:

- Helm support for Kubernetes deployment and Azure Dev Spaces compatibility

- Support for Service Fabric Mesh (codename "SeaBreeze")

- Manage secrets with Azure Key Vault

- Split unit test projects and distribute them across the microservices, so each microservice "owns" its own tests.

- Implement a more advanced versioning system based on [aspnet-api-versioning](https://github.com/Microsoft/aspnet-api-versioning) or a comparable system. The current API versioning is very basic, simply based on the URLs.

- Implement more advanced logging, such as using Serilog (https://serilog.net/) or another selected approach.

- Azure Event Grid: implement an additional Event Bus implementation based on Azure Event Grid.

- Azure Functions integrated with Azure Event Grid: an additional event-driven Azure Function microservice (e.g. grabbing uploaded images, adding a watermark and putting them into Azure Blobs). The notification would come from Azure Event Grid when any image is uploaded into Blob storage.

- Monitoring/diagnostics of microservices based on Application Insights with custom perf keys

- Service Fabric stateful service implementation in the SF branch

- Gracefully stopping or shutting down microservice instances, implemented as ASP.NET Core middleware in the ASP.NET Core pipeline: drain in-flight requests before stopping the microservice/container process.

- (To be confirmed) Support for .NET Core 2.0 Razor Pages as an additional client app.

- Security:
  - Encrypt secrets in configuration files (like docker-compose.yml). Multiple possibilities: Azure Key Vault, simple certificates at the container level, Consul, etc.
  - Other "secure code" practices
  - Encrypt communication with SSL (related to the specific cloud infrastructure being used)
  - Implement security best practices for app secrets (connection strings, environment variables, etc.)

  (However, this subject depends on the chosen orchestrator...)
  See, when using Swarm: https://blog.docker.com/2017/02/docker-secrets-management/

- Create a building block to handle idempotency in a generic way ([Issue 143](https://github.com/dotnet/eShopOnContainers/issues/143))

- Implement an example of optimistic concurrency updates and optimistic concurrency exceptions
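The optimistic concurrency idea can be sketched with a version column: an update only succeeds if the row's version still matches what the caller originally read. This is a hypothetical sketch (EF Core implements the same idea with concurrency tokens and `DbUpdateConcurrencyException`):

```python
class ConcurrencyError(Exception):
    """Raised when a concurrent writer changed the row first."""

def update_with_version(row, expected_version, new_values):
    # The caller passes the version it read earlier; if the row has
    # moved on since then, we refuse to overwrite the newer write.
    if row["version"] != expected_version:
        raise ConcurrencyError("row was modified by another writer")
    row.update(new_values)
    row["version"] += 1  # each successful write bumps the version
    return row
```

The caller typically catches the error, re-reads the row, and retries or surfaces a conflict to the user.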
- (To be confirmed) Nancy: add a Nancy-based microservice, also with DocDB, etc.

- (To be confirmed) Support other DataProtection providers, such as AspNetCore.DataProtection.ServiceFabric

- (To be confirmed) Guide on Kubernetes
  - Possible "Getting started guide with local dev Kubernetes integrated with Docker for Windows and Mac"

- (To be confirmed) In the Windows Containers fork, implement and add a simple WCF microservice/container implementing some logic, like a simulated legacy payment gateway, as an example of a "lift and shift" scenario.

- (To be confirmed) Semantic logging, related to the Azure app version and Application Insights usage:
  - Monitor which microservices are up/down, etc., related to App Insights, but with custom events
  - ETW events and "Semantic Application Log" from P&P
  - Multiple implementations for the storage of the events: Azure Diagnostics, Elasticsearch
  - Using the EventSource base class, etc.

- (To be confirmed) Composite UI based on microservices, including the "UI per microservice".
  References on composite UI with microservices:
  - Composite UI using ASP.NET (Particular's workshop): http://bit.ly/particular-microservices
  - The Monolithic Frontend in the Microservices Architecture: http://blog.xebia.com/the-monolithic-frontend-in-the-microservices-architecture/
  - The secret of better UI composition: https://particular.net/blog/secret-of-better-ui-composition
  - Including Front-End Web Components Into Microservices: https://technologyconversations.com/2015/08/09/including-front-end-web-components-into-microservices/
  - Managing Frontend in the Microservices Architecture: http://allegro.tech/2016/03/Managing-Frontend-in-the-microservices-architecture.html

- Enhance the domain logic for the Order root aggregate. Item stock validation is already implemented (it cancels the order when the quantity is not enough), but additional features could be added; check [issue #5](https://github.com/dotnet-architecture/eShopOnContainers/issues/5).

- Support "multiple redirect URLs" for the STS container based on IdentityServer 4, check [issue #113](https://github.com/dotnet-architecture/eShopOnContainers/issues/113).

- Add proper handling of authentication token lifetime, check [issue #118](https://github.com/dotnet-architecture/eShopOnContainers/issues/118) for details.

- Refactor/improve Polly's resilience code, check [issue #177](https://github.com/dotnet-architecture/eShopOnContainers/issues/177) for details.

- Add a jitter strategy to the retry policy, check [issue #188](https://github.com/dotnet-architecture/eShopOnContainers/issues/188) for details.
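A jitter strategy typically means randomizing each exponential-backoff delay so many clients don't retry in lockstep. A small sketch of the common "full jitter" variant (function name and defaults are illustrative, not the issue's final design):

```python
import random

def backoff_delays(retries, base=0.5, cap=30.0, rng=random.random):
    """Exponential backoff with 'full jitter': each delay is drawn
    uniformly from [0, min(cap, base * 2**attempt)], spreading
    retries out instead of synchronizing them."""
    delays = []
    for attempt in range(retries):
        ceiling = min(cap, base * (2 ** attempt))
        delays.append(rng() * ceiling)
    return delays
```

Passing `rng=lambda: 1.0` degenerates to plain exponential backoff, which makes the ceiling sequence easy to verify.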
- Increase the resilience of the RequestProvider class in the mobile app, check [issue #206](https://github.com/dotnet-architecture/eShopOnContainers/issues/206) for details.

- Investigate using the OIDC library to communicate with IdentityServer from the Xamarin client, check [issue #215](https://github.com/dotnet-architecture/eShopOnContainers/issues/215) for details.

- Replace the WebView in the Xamarin client with device web browsers, check [issue #216](https://github.com/dotnet-architecture/eShopOnContainers/issues/216) for details.

- Consider using Bash instead of PowerShell scripts, check [issue #228](https://github.com/dotnet-architecture/eShopOnContainers/issues/228) for details.

- Improve the app startup time of the Xamarin client, check [issue #231](https://github.com/dotnet-architecture/eShopOnContainers/issues/231) for details.

- Add social login to the MVC and SPA apps, check [issue #475](https://github.com/dotnet-architecture/eShopOnContainers/issues/475) for details.

- Create a new "ServerProblemDetails" response that conforms to RFC 7807, check [issue #602](https://github.com/dotnet-architecture/eShopOnContainers/issues/602) for details.

- Include some guidance on testing in CI/CD pipelines, check [issue #549](https://github.com/dotnet-architecture/eShopOnContainers/issues/549) for details.

- Encrypt sensitive information, such as the credit card number, along the ordering process, check [issue #407](https://github.com/dotnet-architecture/eShopOnContainers/issues/407).

- Fix the naming inconsistency in the EventBus projects and namespaces; they should be "EventBus.RabbitMQ" and "EventBus.ServiceBus", check [issue #943](https://github.com/dotnet-architecture/eShopOnContainers/issues/943).

- Create a load testing alternative that's not dependent on the about-to-be-deprecated load testing feature of VS Enterprise, see [issue #950](https://github.com/dotnet-architecture/eShopOnContainers/issues/950) for more details.

- Revamp the UI to a more modern look, considering semantically correct HTML, check [issue #1017](https://github.com/dotnet-architecture/eShopOnContainers/issues/1017).

- Use the JSON:API specification for implementing APIs in eShopOnContainers, check [issue #1064](https://github.com/dotnet-architecture/eShopOnContainers/issues/1064).

## Sending feedback and pull requests

We'd appreciate your feedback, improvements and ideas.
You can create new issues in the issues section, send pull requests, and/or send emails to eshop_feedback@service.microsoft.com

This wiki page has been superseded by:

- [Roadmap](Roadmap)
- [Backlog](Backlog)
## Want to try it out from Visual Studio 2017?

## IMPORTANT NOTE ON THE VISUAL STUDIO 2017 VERSION NEEDED!

**The currently supported Visual Studio version for eShopOnContainers is Visual Studio 2017 15.8** or later.

Also, make sure you have the latest .NET Core 2.2 SDK from: https://www.microsoft.com/net/download

Main steps to run it in Visual Studio:

```
- git clone https://github.com/dotnet-architecture/eShopOnContainers.git
- Open the eShopOnContainers-ServicesAndWebApps.sln solution with Visual Studio 2017
- Set the VS startup project to the "docker-compose" project
- Hit F5! (Or Ctrl+F5 for a faster start-up)
```

**NOTE:** In order for the authentication based on the STS (Security Token Service) to work properly, and to have access from remote client apps like the Xamarin mobile app, you also need to open the ports in your firewall, as specified in the procedure below.

For further instructions, especially if this is the first time you are going to try .NET Core on Docker, see the detailed instructions below.

---------------------------------------------------------------------------------

## Detailed procedure: setting up eShopOnContainers on a Visual Studio 2017 development machine

Visual Studio 2017 provides built-in Docker Tools with features like:

* docker-compose support
* Multi-container debugging, supporting true microservice scenarios
* Linux Docker containers (usually for .NET Core apps)
* Windows Docker containers (usually for .NET Framework apps)

So, here's how to set up a VS 2017 environment where you can test eShopOnContainers.

### GitHub branch to use

By default, use the DEV branch, which has the latest changes and testing.
The MASTER branch is also an option, but it'll usually be less up to date while we keep evolving the application.

### Software requirements

Software installation requirements for a Windows dev machine with Visual Studio 2017 and Docker for Windows:

- <a href='https://docs.docker.com/docker-for-windows/install/'>Docker for Windows</a> with the specific configuration described below.
- <a href='https://www.visualstudio.com/vs/'>Visual Studio 2017 version 15.5</a> (minimum version) with the workloads specified below.
- NPM and related dependencies for running the SPA web app. <a href='https://github.com/dotnet/eShopOnContainers/wiki/06.-Setting-the-Web-SPA-application-up'>Setup process described here</a>

### Installing and configuring Docker on your development machine

#### Install Docker CE for Windows

Install Docker CE for Windows (the Stable channel should suffice) from this page: https://docs.docker.com/docker-for-windows/install/
For further info on Docker for Windows, check this additional page:
https://docs.docker.com/docker-for-windows/

Docker for Windows uses Hyper-V to run a Linux VM, which is the default Docker host. If you don't have Hyper-V installed/enabled, it'll be installed and you will probably need to reboot your machine. Docker's setup should warn you about it, though.

**IMPORTANT**: Check that you don't have any other hypervisor installed that might not be compatible with Hyper-V. For instance, Intel HAXM can be installed by VS 2017 if you chose to install Google's Android emulator, which works on top of Intel HAXM. In that case, you'd need to uninstall Google's Android emulator and Intel HAXM.
VS 2017 recommends installing the Google Android emulator because it is the only Android emulator with support for Google Play Store, Google Maps, etc. However, take into account that it is currently not compatible with Hyper-V, so you might run into incompatibilities in this scenario.
#### Set the needed memory and CPU assigned to Docker

By default, the development environment of eShopOnContainers runs 1 instance of SQL Server as a container with multiple databases (one DB per microservice), 6 additional ASP.NET Core apps/services each running as a container, plus 1 Redis server running as a container. Therefore, especially because of SQL Server's memory requirements, it is important to set Docker up with enough memory and CPU assigned to it, or you will get errors when starting the containers with VS 2017 or "docker-compose up".

Once Docker for Windows is installed on your machine, open its Settings and then the Advanced menu option, so you can adjust it to the minimum amount of memory and CPU (memory: around 4096 MB, CPU: 3) as shown in the image. You usually need a 16 GB machine for this configuration if you also want to run the Android emulator for the Xamarin app, or multiple memory-demanding applications at the same time. If you have a less powerful machine, you can try a lower configuration and/or not start certain containers, like the basket and Redis. But if you don't start all the containers, the application will of course not function fully.

<img src="img/docker_settings.png">

#### Share drives in Docker settings (in order to deploy and debug with Visual Studio 2017)

(Note: this is not required if running from the Docker CLI with docker-compose up and using VS 2015 or any other IDE or editor.)

In order to deploy/debug from Visual Studio 2017, you'll need to share the drives from Settings -> Shared Drives in the "Docker for Windows" configuration.
If you don't do this, you will get an error when trying to deploy/debug from VS 2017, like "Cannot create container for service yourApplication: C: drive is not shared".
The drive you'll need to share depends on where you place your source code.

<img src="img/docker_settings_shared_drives.png">

### IMPORTANT: Open ports in the local firewall so authentication to the STS (Security Token Service container) can be done through the 10.0.75.1 IP, which should be available and already set up by Docker. This is also needed for remote client apps like the Xamarin app or the SPA app in a remote browser.

- You can manually create a rule in the local firewall of your development machine, or you can create that rule by simply executing the <b>add-firewall-rules-for-sts-auth-thru-docker.ps1</b> script available in the solution's **cli-windows** folder.
- Basically, you need to open ports 5100 to 5110, which are used by the solution, by creating an INBOUND RULE in your firewall, as shown in the screenshot below (for Windows).

<img src="img/firewall-rule-for-eshop.png">

- **NOTE:** If you get the error **Unable to obtain configuration from: `http://10.0.75.1:5105/.well-known/openid-configuration`**, you might need to allow the program `vpnkit` to make connections to and from any computer through all ports (see [issue #295](https://github.com/dotnet-architecture/eShopOnContainers/issues/295#issuecomment-327973650)).

### Installing and configuring Visual Studio 2017 on your development machine

#### Install Visual Studio 2017

Run the VS 2017 setup file (latest RTM version, **Visual Studio 2017 15.5 or later**) and select the following workloads, depending on the apps you intend to test or work with:

##### Working only with the server side (microservices and web applications) - workloads

- ASP.NET and web development
- .NET Core cross-platform development
- Azure development (optional) - It is optional but recommended in case you want to deploy to Docker hosts in Azure or use any other infrastructure in Azure.

<img src="img/vs2017/vs2017_server_workload.png">

##### Working with the mobile app (Xamarin mobile apps for iOS, Android and Windows UWP) - workloads

If you also want to test/work with the eShopOnContainers mobile app based on Xamarin, you need to install the following additional workloads:

- Mobile development with .NET (Xamarin)
- Universal Windows Platform development
- .NET desktop development (optional) - This is not required, but useful in case you also want to run tests consuming the microservices from WPF or WinForms desktop apps.

<img src="img/vs2017/vs2017_additional_mobile_workloads.png">

IMPORTANT: As mentioned above, make sure you are NOT installing Google's Android emulator with the Intel HAXM hypervisor, or you will run into an incompatibility: Hyper-V won't work on your machine, and therefore Docker for Windows won't work when trying to run the Linux host or any host with Hyper-V.

Make sure you are NOT selecting the options highlighted below with red arrows:

<img src="img/vs2017/xamarin-workload-options.png">
### Issue/workarounds for "Visual Studio 2017 Tools for Docker" when there's a network proxy between your machine and Docker Hub on the Internet

After installing VS 2017 with Docker support, if you cannot debug properly and you are trying from a corporate network behind a proxy, take into account the following issue and workarounds, until it is fixed in Visual Studio:
https://github.com/dotnet-architecture/eShopOnContainers/issues/224#issuecomment-319462344

### Clone the eShopOnContainers code from GitHub

By default, clone the DEV branch, which is currently the default branch for accepting pull requests, etc.
Like here:

`git clone https://github.com/dotnet-architecture/eShopOnContainers.git`

**Note:** Remember that active development is done in the `dev` branch. To test the latest code, use this branch instead of `master`.

### Open the eShopOnContainers solution, build, run

#### Open the eShopOnContainers solution in Visual Studio 2017

- If testing/working only with the server-side applications and services, open the solution **eShopOnContainers-ServicesAndWebApps.sln** (recommended for most cases when testing the containers and web apps)

- If testing/working with the server-side applications and services plus the Xamarin mobile apps, open the solution **eShopOnContainers.sln**

Below you can see the full **eShopOnContainers-ServicesAndWebApps.sln** solution (server side) opened in Visual Studio 2017:

<img src="img/vs2017/vs-2017-eshoponcontainers-servicesandwebapps-solution.png">

Note how VS 2017 loads the docker-compose.yml files in a special node tree, so it uses that configuration to deploy/debug all the configured containers into your Docker host at the same time.

#### Build and run eShopOnContainers from Visual Studio 2017

##### Set docker-compose as the default StartUp project

**IMPORTANT**: If the **"docker-compose" project** is not your default startup project, right-click on the "docker-compose" node and select the "Set as Startup Project" menu option, as shown below:

<img src="img/vs2017/set-docker-node-as-default.png">

At this point, after waiting some time for the NuGet packages to be restored, you should be able to build the whole solution, or even directly deploy/debug it into Docker by simply hitting F5 or pressing the debug "Play" button, which should now be labeled "Docker":

<img src="img/vs2017/debug-F5-button.png">

VS 2017 should compile the .NET projects, then create the Docker images, and finally deploy the containers into the Docker host (your default Linux VM in Docker for Windows).
Note that the first time you hit F5 it'll take longer, at least a few minutes, because in addition to compiling your bits, it needs to pull/download the base images (SQL Server for Linux Docker image, Redis image, ASP.NET Core image, etc.) and register them in the local image repository of your PC. The next time you hit F5 it'll be much faster.

Finally, because the docker-compose configuration project is set up to open the MVC application, it should open your default browser and show the MVC application with data coming from the microservices/containers:

<img src="img/vs2017/vs2017-f5-with-eshoponcontainers-web-mvc-in-browser.png">

Here's how the docker-compose configuration project is configured to open the MVC application:

<img src="img/vs2017/docker-compose-properties.png">

Finally, you can check how the multiple containers are running in your Docker host by running the **"docker ps"** command, like below:

<img src="img/vs2017/docker-ps.png">

You can see the 8 containers running and the ports being exposed, etc.
### Debug with several breakpoints across the multiple containers/projects

Something very compelling and productive in VS 2017 is the ability to debug with several breakpoints across multiple containers/projects.

For instance, you could set a breakpoint in a controller within the MVC web app plus a second breakpoint in a controller within the Catalog Web API microservice, then refresh the browser (or hit F5 again if you weren't still running the app), and VS will stop within your microservices running in Docker, as shown below! :)

Breakpoint at the MVC app running as a Docker container in the Docker host:

<img src="img/vs2017/debugging-mvc-app.png">

Press F5 again...

Breakpoint at the Catalog microservice running as a Docker container in the Docker host:

<img src="img/vs2017/debugging-webapi-microservice.png">

And that's it! Super simple! Visual Studio handles all the complexity under the covers, and you can simply hit F5 and debug a multi-container application!

### Test the SPA Web app

While the containers are running, open a browser, type `http://localhost:5104/` and hit Enter.

You should see the SPA application, as in the following screenshot:

<img src="img/eshop-webspa-app-screenshot.png">

<br>

### Test a microservice's Swagger interface (i.e. the Catalog microservice)

While the containers are running, open a browser, type `http://localhost:5101` and hit Enter.

You should see the Swagger page for that microservice, which allows you to test the Web API, as in the following screenshot:

<img src="img/swagger-catalog-1.png">

Then, after providing the page size (i.e. 10) and the current page (i.e. 1) for the catalog data, you can run the service by hitting the "Try it out!" button and see the returned JSON data:

<img src="img/swagger-catalog-2.png">

<br>

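The same request that the Swagger "Try it out!" button sends can be sketched from the command line. The route and query parameters below are assumptions based on the Swagger page; verify them against your branch:

```shell
# Build the same Catalog request that Swagger's "Try it out!" button issues
# (route and parameter names assumed from the Swagger UI shown above).
base="http://localhost:5101/api/v1/catalog/items"
query="pageSize=10&pageIndex=0"
echo "GET ${base}?${query}"
# With the containers running, uncomment to fetch the JSON payload:
# curl --silent "${base}?${query}"
```

This is handy for scripting smoke tests against the API without opening the Swagger UI.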
----

### Testing all the applications and microservices

Once the containers are deployed, you should be able to access any of the services at the following URLs or connection strings from your dev machine:

- Web MVC: <a href="http://localhost:5100" target="top">http://localhost:5100</a>
- Web SPA: <a href="http://localhost:5104" target="top">http://localhost:5104</a> (Important: check how to set up the SPA app and its requirements before building the Docker images. Instructions at https://github.com/dotnet/eShopOnContainers/wiki/06.-Setting-the-Web-SPA-application-up)
- Catalog microservice: <a href="http://localhost:5101" target="top">http://localhost:5101</a> (Not secured)
- Ordering microservice: <a href="http://localhost:5102" target="top">http://localhost:5102</a> (Requires token for authorization)
- Basket microservice: <a href="http://localhost:5103" target="top">http://localhost:5103</a> (Requires token for authorization)
- Identity microservice: <a href="http://localhost:5105" target="top">http://localhost:5105</a>
- Orders database (SQL Server connection string): Server=tcp:localhost,5432;Database=Microsoft.eShopOnContainers.Services.OrderingDb;User Id=sa;Password=Pass@word
- Catalog database (SQL Server connection string): Server=tcp:localhost,5434;Database=CatalogDB;User Id=sa;Password=Pass@word
- ASP.NET Identity database (SQL Server connection string): Server=localhost,5433;Database=aspnet-Microsoft.eShopOnContainers;User Id=sa;Password=Pass@word
- Basket data (Redis): listening at localhost:6379

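As a quick sanity check, the endpoints listed above can be enumerated in a small bash sketch (the ports are the default docker-compose mappings; the commented `curl` probe is an illustrative suggestion, not part of the solution's scripts):

```shell
# Enumerate the default eShopOnContainers endpoints on the dev machine.
for port in 5100 5101 5102 5103 5104 5105; do
  echo "HTTP endpoint: http://localhost:${port}"
done
for sqlport in 5432 5433 5434; do
  echo "SQL Server:    localhost,${sqlport}"
done
echo "Redis:         localhost:6379"
# With the containers up, you could probe each HTTP endpoint, e.g.:
# curl --silent --output /dev/null --write-out '%{http_code}\n' http://localhost:5100
```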
#### Creating an Order and Authenticating on the Web MVC application with the DemoUser@microsoft.com user account

When you try the Web MVC application at http://localhost:5100, you can test the home page, which is also the catalog page. But if you want to add articles to the basket, you need to log in first at the login page, which is handled by the STS (Security Token Service) microservice/container. At this point, you could register your own user/customer, or you can use a convenient default user named **demoUser@microsoft.com** so you don't need to register your own user and testing is easier.

The credentials for this demo user are:

- User: **demouser@microsoft.com**
- Password: **Pass@word1**

Below you can see the login page when providing those credentials.

<img src="img/login-demo-user.png">

#### Trying the Xamarin.Forms mobile apps for Android, iOS and Windows

You can deploy the Xamarin app to real iOS, Android or Windows devices.

You can also test it on a Hyper-V based Android emulator like the Visual Studio Android Emulator. (Do NOT install Google's Android emulator, or it will break Docker and Hyper-V, as mentioned above.)

By default, the Xamarin app shows fake data from mock services. In order to really access the microservices/containers in Docker from the mobile app, you need to:

- Disable the mock services in the Xamarin app by setting <b>UseMockServices = false</b> in App.xaml.cs, and specify the host IP in BaseEndpoint = "http://10.106.144.28" in GlobalSettings.cs. Both files are in the Xamarin.Forms project (PCL).
- Alternatively, change that IP through the app UI, by modifying the IP address in the Settings page of the app, as shown in the screenshot below.
- In addition, make sure the TCP ports used by the services are open in the local firewall. <img src="img/xamarin-settings.png">

## Sending feedback and pull requests

We'd appreciate your feedback, improvements and ideas.

You can create new issues in the issues section, submit pull requests and/or send emails to eshop_feedback@service.microsoft.com

## Questions

[QUESTION] Answer +1 if the solution is working for you (through VS 2017 or the CLI environment):

https://github.com/dotnet/eShopOnContainers/issues/107

# Superseded

This wiki page has been superseded by:

- [Windows setup](Windows-setup)

## Related readme files (use them for more information after reading this)

* [https://github.com/dotnet-architecture/eShopOnContainers/blob/master/readme/readme-docker-compose.md](https://github.com/dotnet-architecture/eShopOnContainers/blob/master/readme/readme-docker-compose.md): all the docker-compose files we have and how to use them

## .NET Core SDK

Make sure you have the latest .NET Core 2.1 SDK installed from: https://www.microsoft.com/net/download

## Want to try it out from the CLI?

Main steps to run from the CLI command window:

```
git clone https://github.com/dotnet/eShopOnContainers.git
cd eShopOnContainers
docker-compose build
docker-compose up
```

(Alternatively, you can directly run `docker-compose up` and it will run the "build" step first.)

Then, using a browser, try the MVC app at http://localhost:5100

NOTE: In order for the authentication based on the STS (Security Token Service) to work properly, and to have access from remote client apps like the Xamarin mobile app, you also need to open the ports in your firewall, as specified in the procedure below.

For further instructions, especially if this is the first time you are trying .NET Core on Docker, see the detailed instructions below. They are also important to make the SPA (Single Page Application) app work, as there are some considerations (npm install, etc.) regarding the use of NPM from Windows and Linux.

--------------------------------------------------------------------

# Detailed procedure - Setting eShopOnContainers up on a CLI and Windows based development machine

This CLI environment means that you want to build/run using the CLI (command-line interface) available in .NET Core and the Docker CLI.

You don't need Visual Studio 2017 for this environment; you can use any code editor, like Visual Studio Code, Sublime, etc. Of course, you could still use VS 2017 at the same time as well.

## Docker Multi-stage support

Since December 2017, Visual Studio 2017 15.5 and eShopOnContainers support [Docker Multi-stage](https://blogs.msdn.microsoft.com/stevelasker/2017/09/11/net-and-multistage-dockerfiles/); therefore, the steps to compile the .NET apps/projects before creating the Docker images can now be performed in a single step with `docker-compose build` or `docker build`.

## Prerequisites (Software requirements)

1. [Docker for Windows](https://docs.docker.com/docker-for-windows/install/). Important: follow the specific configuration indicated in the steps below.
1. A Git client. The [git-scm site](https://git-scm.com/download/gui/mac) maintains a great list of clients.
1. (OPTIONAL) [Node.js](http://nodejs.org). The stable channel is fine as well.
1. (OPTIONAL) Bower (`npm install -g bower`), needed for the MVC web app.
1. [.NET Core SDK](http://dot.net). Install the latest SDK and runtime.
1. Any code editor, like [Visual Studio Code](https://code.visualstudio.com/)

*IMPORTANT NOTE:* When building with Docker Multi-stage you don't really need to have Node, NPM, Bower or even the .NET Core SDK installed on your local Windows machine, as the build image used by Docker Multi-stage has all the SDKs needed to compile the projects. However, we recommend having them installed on Windows so you can do further development and testing.

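If you have a bash shell available on Windows (e.g. Git Bash), a minimal sketch like the following can check which of the optional tools are present locally (purely illustrative; remember that none of them are strictly required with Docker Multi-stage):

```shell
# Report which of the (mostly optional) local tools are on PATH.
for tool in docker dotnet node npm bower git; do
  if command -v "${tool}" >/dev/null 2>&1; then
    echo "${tool}: installed"
  else
    echo "${tool}: NOT found (optional when using Docker Multi-stage)"
  fi
done
```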
# Setting up the development environment

## Installing and configuring Docker in your development machine

### Install Docker for Windows

Install Docker for Windows (the Stable channel should suffice) from this page: https://docs.docker.com/docker-for-windows/install/

For further info on Docker for Windows, check this additional page: https://docs.docker.com/docker-for-windows/

Docker for Windows uses Hyper-V to run a Linux VM, which is the default Docker host. If you don't have Hyper-V installed/enabled, it will be installed and you will probably need to reboot your machine. Docker's setup should warn you about it, though.

**IMPORTANT**: Check that you don't have any other hypervisor installed that might not be compatible with Hyper-V. For instance, Intel HAXM can be installed by VS 2017 if you chose to install Google's Android emulator, which runs on top of Intel HAXM. In that case, you'd need to uninstall Google's Android emulator and Intel HAXM.

VS 2017 recommends installing Google's Android emulator because it is the only Android emulator with support for Google Play Store, Google Maps, etc. However, take into account that it is currently not compatible with Hyper-V, so you might hit incompatibilities in this scenario.

### Set the needed assigned memory and CPU for Docker

The development environment of eShopOnContainers, by default, runs 1 instance of SQL Server as a container with multiple databases (one DB per microservice), 6 additional ASP.NET Core apps/services each running as a container, plus 1 Redis server running as a container. Therefore, especially because of SQL Server's memory requirements, it is important to set Docker up with enough RAM and CPU assigned to it, or you will get errors when starting the containers with `docker-compose up`.

Once Docker for Windows is installed on your machine, open its Settings and the Advanced menu option so you can adjust it to the minimum amount of memory and CPU (Memory: around 4096 MB and CPU: 3), as shown in the image. You might need a 16 GB machine for an optimal configuration.

<img src="img/docker_settings.png">

### Share drives in Docker settings

This step is optional but recommended, as Docker sometimes needs to access the shared drives when building, depending on the build actions.

With the default eShopOnContainers build process in the CLI, you don't need it. But if you use Visual Studio, it is mandatory to share the drive where your code resides.

The drive you'll need to share depends on where you place your source code.

<img src="img/docker_settings_shared_drives.png">

### IMPORTANT: Open ports in the local firewall so authentication with the STS (Security Token Service container) can be done through the 10.0.75.1 IP, which should already be available and set up by Docker. This is also needed for remote client apps like the Xamarin app or the SPA app in a remote browser.

- You can manually create a rule in the local firewall of your development machine, or you can create that rule by just executing the <b>add-firewall-rules-for-sts-auth-thru-docker.ps1</b> script available in the solution's **cli-windows** folder.

- Basically, you need to open ports 5100 to 5105, which are used by the solution, by creating an INBOUND RULE in your firewall, as shown in the screenshot below (for Windows).

<img src="img/firewall-rule-for-eshop.png">

- **NOTE:** If you get the error **Unable to obtain configuration from: `http://10.0.75.1:5105/.well-known/openid-configuration`**, you might need to allow the `vpnkit` program connections to and from any computer through all ports (see [issue #295](https://github.com/dotnet-architecture/eShopOnContainers/issues/295#issuecomment-327973650)).

## .NET Core SDK setup

(OPTIONAL) As mentioned, this requirement is optional because when building through Docker Multi-stage, the build uses the .NET SDK available within the ASP.NET Core build image, not the local .NET Core SDK. However, it is recommended to have it installed locally for any further building/testing of the ASP.NET Core projects without Docker.

The .NET Core SDK installs the .NET Core framework plus the SDK CLI tools, with commands like `dotnet build`, `dotnet publish`, etc.

Install the .NET Core SDK from here: https://www.microsoft.com/net/download/windows#/current

## Install NPM (optional; this local installation is not required when using Docker Multi-stage)

(OPTIONAL) As mentioned, this requirement is optional because when building through Docker Multi-stage, the build uses the dependencies available within the ASP.NET Core build image, not the software installed on the local machine/PC.

In order to build the JavaScript dependencies from the command line using npm, you need to install npm globally.

NPM is bundled with Node.js. Installing NPM and Node is pretty straightforward using the installer package available at https://nodejs.org/en/

<img src="img/spa/installing_npm_node.png">

You can install the "Recommended For Most Users" version of Node (the LTS version).

After installing Node, you can check the installed NPM version with the command <b>npm -v</b>, as shown below.

<img src="img/spa/npm-versions-powershell.png">

## Install Bower (optional; this local installation is not required when using Docker Multi-stage)

(OPTIONAL) As mentioned, this requirement is optional because when building through Docker Multi-stage, the build uses the dependencies available within the ASP.NET Core build image, not the software installed on the local machine/PC.

Bower is needed by minor dependencies in the MVC web app. If using Visual Studio, VS will handle this installation. But if using the CLI on Windows without Docker Multi-stage, you'd need to install Bower globally by running the following NPM command:

`npm install -g bower`

# Clone the eShopOnContainers GitHub code repository into your dev machine

## GitHub branch to use/pull

Use the default branch of the eShopOnContainers GitHub repo. The same branch's code supports the Visual Studio 2017 and CLI scenarios simultaneously, depending on each developer's preference.

Clone the code from: https://github.com/dotnet/eShopOnContainers.git

as in the following screenshot:

<img src="img/cli-windows/git-clone-powershell.png">

# Compile the application's projects and build the Docker images with a single command thanks to Docker Multi-stage

The recommended approach is to build the .NET application/microservices bits and the Docker images with a single command based on [Docker Multi-stage](https://blogs.msdn.microsoft.com/stevelasker/2017/09/11/net-and-multistage-dockerfiles/), by simply running the following commands from the solution's root folder:

Move to the root folder of the solution:

`cd YourPath\eShopOnContainers\`

Then, run the following Docker command:

`docker-compose build`

The first time you run this command it will take additional time, as it needs to pull/download the aspnet-build image with the SDKs.

It should take a few minutes to compile the .NET Core projects plus the SPA application (TypeScript/JavaScript).

- You can check the created images with the Docker CLI by typing the following command in the PowerShell console:

`docker images`

Those Docker images are the ones available in the local image repository on your machine.

You might have additional images, but at least you should see the custom images starting with the prefix "eshop", which is the name of the image repo. The images not starting with "eshop" will probably be official base images, like microsoft/aspnetcore or the SQL Server for Linux image.

# Deploy the containers into the local Docker host

With a single command you can deploy the whole solution into your local Docker host by just executing the following:

`docker-compose up`

<img src="img/cli-windows/docker-compose-up-1.png">

Note that the first time you try to run the application (with `docker run` or `docker-compose`), Docker detects that it needs a few related infrastructure images, such as the SQL Server, Redis and RabbitMQ images, so it pulls/downloads those base images from Docker Hub, the public Docker registry: "microsoft/mssql-server-linux" (the base image for SQL Server for Linux on containers), "library/redis" (the base Redis image), and so on. Therefore, the first time you run `docker-compose up` it might take a few minutes pulling those images before it spins up your custom containers.

Finally, you can see how the script waits after deploying all the containers:

<img src="img/cli-windows/docker-compose-up-1.2.png">

- The next time you run `docker-compose up`, because you already have all the base images downloaded and registered in your local repo, and your custom images built and ready to go, it will be much faster, since it just needs to deploy the containers, as in the following screenshot:

<img src="img/cli-windows/docker-compose-up-2.png">

- <b>Check out the containers running in your Docker host</b>: Once `docker-compose up` finishes, the original PowerShell window stays busy, showing the execution's output in a "wait state". So, in order to ask Docker "how it went" and see which containers are running, open a second PowerShell window and type `docker ps` to see all the running containers.

### Test the MVC Web app

Open a browser, type `http://localhost:5100/` and hit Enter.

You should see the MVC application, as in the following screenshot:

<img src="img/eshop-webmvc-app-screenshot.png">

<br>

### Test the SPA Web app

Open a browser, type `http://localhost:5104/` and hit Enter.

You should see the SPA application, as in the following screenshot:

<img src="img/eshop-webspa-app-screenshot.png">

<br>

### Test a microservice's Swagger interface (i.e. the Catalog microservice)

Open a browser, type `http://localhost:5101` and hit Enter.

You should see the Swagger page for that microservice, which allows you to test the Web API, as in the following screenshot:

<img src="img/swagger-catalog-1.png">

Then, after providing the page size (i.e. 10) and the current page (i.e. 1) for the catalog data, you can run the service by hitting the "Try it out!" button and see the returned JSON data:

<img src="img/swagger-catalog-2.png">

<br>

### Using VS Code to edit C# code or .yml code

After installing VS Code from <a href='https://code.visualstudio.com/'>Visual Studio Code</a>, you can edit a particular file or "open" the whole solution folder, as in the following screenshots:

`Opening the solution's folder`

<img src="img/cli-windows/vs-code-1.png">

`Editing a .yml file`

<img src="img/cli-windows/vs-code-2.png">

It is also recommended to install the C# extension and the Docker extension for VS Code:

<img src="img/cli-windows/vs-code-3-extensions.png">

----

### Testing all the applications and microservices

Once the containers are deployed, you should be able to access any of the services at the following URLs or connection strings from your dev machine:

- Web MVC: <a href="http://localhost:5100" target="top">http://localhost:5100</a>
- Web SPA: <a href="http://localhost:5104" target="top">http://localhost:5104</a> (Important: check how to set up the SPA app and its requirements before building the Docker images. Instructions at https://github.com/dotnet/eShopOnContainers/tree/master/src/Web/WebSPA/eShopOnContainers.WebSPA or the README.MD from eShopOnContainers/src/Web/WebSPA/eShopOnContainers.WebSPA)
- Catalog microservice: <a href="http://localhost:5101" target="top">http://localhost:5101</a> (Not secured)
- Ordering microservice: <a href="http://localhost:5102" target="top">http://localhost:5102</a> (Requires token for authorization)
- Basket microservice: <a href="http://localhost:5103" target="top">http://localhost:5103</a> (Requires token for authorization)
- Identity microservice: <a href="http://localhost:5105" target="top">http://localhost:5105</a>
- Orders database (SQL Server connection string): Server=tcp:localhost,5432;Database=Microsoft.eShopOnContainers.Services.OrderingDb;User Id=sa;Password=Pass@word
- Catalog database (SQL Server connection string): Server=tcp:localhost,5434;Database=CatalogDB;User Id=sa;Password=Pass@word
- ASP.NET Identity database (SQL Server connection string): Server=localhost,5433;Database=aspnet-Microsoft.eShopOnContainers;User Id=sa;Password=Pass@word
- Basket data (Redis): listening at localhost:6379

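If you need the host/port pair out of one of the connection strings above (for example, to pass it to a SQL client), a small bash sketch can extract it with pure string handling (the connection string is copied from the Catalog entry in the list):

```shell
# Split a SQL Server connection string into key=value parts
# and pull the host and port out of the Server= entry.
conn='Server=tcp:localhost,5434;Database=CatalogDB;User Id=sa;Password=Pass@word'
server=$(echo "${conn}" | tr ';' '\n' | grep '^Server=' | cut -d= -f2)
host=${server#tcp:}; host=${host%,*}   # strip the tcp: prefix and the ,port suffix
port=${server##*,}                     # keep everything after the last comma
echo "host=${host} port=${port}"
```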
#### Creating an Order and Authenticating on the Web MVC application with the DemoUser@microsoft.com user account

When you try the Web MVC application at http://localhost:5100, you can test the home page, which is also the catalog page. But if you want to add articles to the basket, you need to log in first at the login page, which is handled by the STS (Security Token Service) microservice/container. At this point, you could register your own user/customer, or you can use a convenient default user named **demoUser@microsoft.com** so you don't need to register your own user and testing is easier.

The credentials for this demo user are:

- User: **demouser@microsoft.com**
- Password: **Pass@word1**

Below you can see the login page when providing those credentials.

<img src="img/login-demo-user.png">

#### Trying the Xamarin.Forms mobile apps for Android, iOS and Windows

You can deploy the Xamarin app to real iOS, Android or Windows devices.

You can also test it on a Hyper-V based Android emulator like the Visual Studio Android Emulator. (Do NOT install Google's Android emulator, or it will break Docker and Hyper-V, as mentioned above.)

By default, the Xamarin app shows fake data from mock services. In order to really access the microservices/containers in Docker from the mobile app, you need to:

- Disable the mock services in the Xamarin app by setting <b>UseMockServices = false</b> in App.xaml.cs, and specify the host IP in BaseEndpoint = "http://10.106.144.28" in GlobalSettings.cs. Both files are in the Xamarin.Forms project (PCL).
- Alternatively, change that IP through the app UI, by modifying the IP address in the Settings page of the app, as shown in the screenshot below.
- In addition, make sure the TCP ports used by the services are open in the local firewall. <img src="img/xamarin-settings.png">

## Sending feedback and pull requests

We'd appreciate your feedback, improvements and ideas.

You can create new issues in the issues section, submit pull requests and/or send emails to eshop_feedback@service.microsoft.com

## Questions

[QUESTION] Answer +1 if the solution is working for you (through VS 2017 or the CLI environment):

https://github.com/dotnet/eShopOnContainers/issues/107

# Global prerequisites

## Docker for Mac

Install [Docker for Mac](https://docs.docker.com/docker-for-mac/install/). The stable channel is fine.

## .NET Core SDK

Make sure you have the latest .NET Core 2.1 SDK installed from: https://www.microsoft.com/net/download

## Configure Docker for Mac

### Docker for Mac (Linux VM) assigned memory

The SQL Server image for Docker requires more memory to run. You will need to update your Docker settings to allocate at least 4 GB of memory.

Depending on how many apps you are running on your Mac, you might need to assign more memory to Docker. Usually 4 GB should suffice, but we got feedback from devs who needed to assign up to 8 GB of RAM to Docker on the Mac.

### Folder shares in Docker for Mac

If your projects are placed within the /Users folder, you don't need to configure anything else, as that is a pre-shared folder. However, if you place your projects under a different path, like /MyRootProjects, then you'd need to add that shared folder to Docker's configuration.

If using Visual Studio for Mac, it is also important that you share the folder `/usr/local/share/dotnet`.

## Docker Multi-stage support

Since December 2017, eShopOnContainers supports [Docker Multi-stage](https://blogs.msdn.microsoft.com/stevelasker/2017/09/11/net-and-multistage-dockerfiles/); therefore, the steps to compile the .NET apps/projects before creating the Docker images can now be performed with a single command: `docker-compose build` or `docker build`.

# Option A: Use a CLI environment (dotnet CLI, Docker CLI with the bash shell) and VS Code as a plain editor

As a summary, with the following simple CLI commands in a bash window you'll be able to build the Docker images and deploy the containers into your local Docker host:

```
$ git clone https://github.com/dotnet-architecture/eShopOnContainers.git
$ cd eShopOnContainers
$ docker-compose build
$ docker-compose up
```

The `docker-compose build` step is actually optional: if you run `docker-compose up` without having built the Docker images, Docker runs `docker-compose build` under the covers. But splitting the commands in two makes it clearer what each step does.

The following explanations show you in detail how to set it up, build and deploy.

## Prerequisites

1. [Docker for Mac](https://docs.docker.com/docker-for-mac/install/). You should already have this.
1. A Git client. The [git-scm site](https://git-scm.com/download/gui/mac) maintains a great list of clients.
1. (OPTIONAL) NPM, installed with [Node.js](http://nodejs.org). The stable channel is fine as well.
1. (OPTIONAL) Bower (`sudo npm install -g bower`), needed by the MVC web app.
1. [.NET Core and SDK](http://dot.net). Install the SDK and runtime.

### (OPTIONAL) Installing NPM/Node, Bower and the .NET Core SDK on the local Mac is not required when using Docker Multi-stage

The SDKs and dependencies like NPM, Bower and even the .NET Core SDK are optional because when building through Docker Multi-stage, the build uses the dependencies available within the ASP.NET Core build image container instead of the software installed on the local Mac.

However, if you will be developing .NET apps on the Mac or customizing eShopOnContainers, it is recommended to install all the dependencies locally as well.

# Compile the application's projects and build the Docker images with a single command thanks to Docker Multi-stage

The recommended approach is to build the .NET application/microservices bits and the Docker images with a single command based on [Docker Multi-stage](https://blogs.msdn.microsoft.com/stevelasker/2017/09/11/net-and-multistage-dockerfiles/), by simply running the following commands from the solution's root folder:

Move to the root folder of the solution:

`cd YourPath/eShopOnContainers/`

Then, run the following Docker command:

`docker-compose build`

The first time you run this command it will take additional time, as it needs to pull/download the aspnet-build image with the SDKs.

It should take a few minutes to compile all the .NET Core projects plus the SPA application (Angular/TypeScript/JavaScript), which has additional processes and dependencies using NPM.

- When the `docker-compose build` command finishes, you can check the created images with the Docker CLI by typing the following command in bash:

`docker images`

Those Docker images are the ones available in the local image repository on your machine.

You might have additional images, but at least you should see the custom images starting with the prefix "eshop", which is the name of the image repo. The images not starting with "eshop" will probably be official base images, like microsoft/aspnetcore or the SQL Server for Linux image.

# Deploy the containers into the local Docker host

With a single command you can deploy the whole solution into your local Docker host by just executing the following:

`docker-compose up`

Ignore the warnings about environment variables for Azure, as those are only needed if you were using infrastructure services in Azure (Azure SQL Database, Redis as a service, Azure Service Bus, etc.), which is the "next step" when using eShopOnContainers.

Note that the first time you try to run the application (with `docker run` or `docker-compose`), Docker detects that it needs a few related infrastructure images, such as the SQL Server, Redis and RabbitMQ images, so it pulls/downloads those base images from Docker Hub, the public Docker registry: "microsoft/mssql-server-linux" (the base image for SQL Server for Linux on containers), "library/redis" (the base Redis image), and so on. Therefore, the first time you run `docker-compose up` it might take a few minutes pulling those images before it spins up your custom containers.

Finally, you can see how the script waits after deploying all the containers:



- The next time you run `docker-compose up`, it will be much faster: all the base images are already downloaded and registered in your local repo, and your custom images are already built, so Docker only needs to deploy the containers.

- **Check out the containers running in your Docker host**: Once `docker-compose up` finishes, the original bash window stays busy showing the execution's output in a "wait state". To see what containers are running, open a second bash window and type `docker ps`, which lists all the running containers, as shown in the following screenshot.

Type `docker ps`



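As an optional convenience (not in the original walkthrough), `docker ps` also accepts a Go-template `--format` flag that trims the output to the columns you care about; the snippet assumes the Docker CLI may be missing and degrades gracefully:

```shell
# Show only name, image and published ports of the running containers.
if command -v docker >/dev/null 2>&1; then
  docker ps --format "table {{.Names}}\t{{.Image}}\t{{.Ports}}" || echo "could not query the Docker daemon"
else
  echo "docker CLI not found"
fi
```
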
## How to run/test the application

At this point you can test the app with a browser at `http://localhost:5100`.

For further apps and tests, check the last section of this post, named "Running/testing the application", to see how to open the multiple apps and services in the browser.

# Option B: Use Visual Studio for Mac

The quickest path to get eShopOnContainers running on your Mac is by using VS for Mac, which will install most of the prerequisites you need.

## Prerequisites

1. [Docker for Mac](https://docs.docker.com/docker-for-mac/install/). (You should already have this installed)
1. [Visual Studio for Mac](https://www.visualstudio.com/vs/visual-studio-mac/).

## Install Visual Studio for Mac

When installing [Visual Studio for Mac](https://www.visualstudio.com/vs/visual-studio-mac/), you can select between multiple workloads or platforms.

Make sure you select the .NET Core platform:



Before completing the installation, VS for Mac will prompt you to install Xcode, which is needed for multiple dependencies.

If you install Android as a target platform, Java will also be installed as a dependency for building mobile apps for Android.

For running just the Docker containers and web apps, you only need the .NET Core platform.

But if you want to try the eShopOnContainers mobile app, that requires Xamarin and therefore the iOS and Android platforms, too. Those mobile platforms are optional for this Wiki walkthrough, though.

## Clone the eShopOnContainers repo

Open a bash shell and run the following commands:

```
$ mkdir MyGitRepos
$ cd MyGitRepos
$ git clone https://github.com/dotnet-architecture/eShopOnContainers.git
$ cd eShopOnContainers
```

With that, you'll have the code in the /Users/yourUser/MyGitRepos/eShopOnContainers folder.

## Open the 'eShopOnContainers-ServicesAndWebApps.sln' solution with VS for Mac

Run Visual Studio for Mac and open the solution `eShopOnContainers-ServicesAndWebApps.sln`.

If you just want to run the containers/microservices and web apps, do NOT open the other solutions, like `eShopOnContainers.sln`, as those solutions also open the Xamarin projects, and their additional dependencies might slow you down when testing.

After opening the `eShopOnContainers-ServicesAndWebApps.sln` solution for the first time, it is recommended to wait for a few minutes, as VS will be restoring many NuGet packages and the solution won't compile or run until it gets all the NuGet package dependencies. This wait is only needed the first time you open the solution; afterwards it is much faster.

This is VS for Mac with the `eShopOnContainers-ServicesAndWebApps.sln` solution.



## Run eShopOnContainers from VS for Mac (F5 or Ctrl+F5)

Make sure that the default startup project is the Docker project named `docker-compose`.

Hit Ctrl+F5 or press the "play" button in VS for Mac.

IMPORTANT: The first time you run eShopOnContainers, it will take longer than subsequent launches. Under the covers, Docker is pulling quite a few "heavy" images from Docker Hub (the public image registry), like the SQL Server, Redis, RabbitMQ and base ASP.NET Core images. That pull/download process will take a few minutes. Then, VS launches the application's custom containers plus the infrastructure containers (SQL Server, Redis, RabbitMQ and MongoDB), populates sample data in the databases and finally runs the microservices and web apps in custom containers.

Note that you will see normal/controlled HTTP exceptions caused by our retries with exponential backoff: the web apps have to wait until the microservices are ready, and the microservices first need to run the SQL statements that populate the sample data.

Once the solution is up and running, you should be able to see it in the browser at:

http://localhost:5100



If you open a bash window, you can type `docker images` and see the pulled/downloaded images plus the custom images created by VS for Mac:



By typing `docker ps` you can see the containers running in Docker: the infrastructure containers like SQL Server, Redis and RabbitMQ, plus the custom containers running the Web API microservices and the web apps.



*IMPORTANT:* In order to have the full app working, like being able to log in with a user, add items to the basket and create orders, or being able to consume the services from a remote Xamarin app or web SPA, you need to configure additional settings, like the IP used by the Identity Service, because it needs to be redirected to. Check the additional configuration at the end of this post.

# Running/testing the application

Once the containers have launched, open a browser and navigate to `http://localhost:5100` to visit the MVC application:



You can also try/test the SPA (Single Page Application), which is based on Angular, by opening this URL in a browser: `http://localhost:5104`



Here you can see all the URLs for the multiple apps and services:

- MVC web app: `http://localhost:5100`
- SPA web app: `http://localhost:5104`
- Health Status web app: `http://localhost:5107`
- Catalog Microservice API: `http://localhost:5101`
- Ordering Microservice API: `http://localhost:5102`
- Basket Microservice API: `http://localhost:5103`
- Identity Microservice API: `http://localhost:5105`
- Payment API: `http://localhost:5108`
- Marketing API: `http://localhost:5110`
- Locations API: `http://localhost:5109`

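A quick way to check which of the endpoints above are already answering is a small probe loop. This is an illustrative sketch (only a subset of the URLs is probed; endpoints that are not running simply report "down"):

```shell
# Probe a few of the endpoints; unreachable ones report "down"
# without aborting the loop.
for url in http://localhost:5100 http://localhost:5104 http://localhost:5101; do
  if curl -fsS -o /dev/null --max-time 2 "$url"; then
    echo "$url up"
  else
    echo "$url down"
  fi
done
```
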
To add items to the shopping cart or check out, you'll need to log in to the site.
The credentials for a demo user are:

- User: **demouser@microsoft.com**
- Password: **Pass@word1**

# Configuring the app for Authentication and access from remote client apps (Remote access through the network)

If you don't configure these further settings, you will get the following error when trying to log in to the MVC web app.



That is because the default IP used to redirect to the Identity service/app (based on IdentityServer4) is 10.0.75.1.

That IP is always set up when installing Docker for Windows on a Windows 10 machine. It is also used by Windows Server 2016 when using Windows Containers.

eShopOnContainers uses that IP as the default choice so anyone testing the app doesn't need to configure further settings. However, that IP is not used by "Docker for Mac", so you need to change the config.

If you were to access the Docker containers from remote machines or mobile phones, like when using the Xamarin app or the web apps from remote PCs, you would also need to change that IP and use a real IP from the network adapter.

## Setting up the docker-compose file environment variables and settings

As explained [here by Docker](https://docs.docker.com/docker-for-mac/networking/#use-cases-and-workarounds), the Mac has a changing IP address (or none, if you have no network access). From June 2017 onwards, the recommendation is to connect to the special Mac-only DNS name `docker.for.mac.localhost`, which resolves to the internal IP address used by the host.

In the `docker-compose.override.yml` file, replace the IdentityUrl environment variable (or any place where the IP 10.0.75.1 is used) with:

```bash
IdentityUrl=http://docker.for.mac.localhost:5105
```

You could also set your Mac's network adapter's real IP, but that is a worse solution, as it depends on the network your Mac development machine is connected to.

Therefore, the WebMVC service definition in the `docker-compose.override.yml` should finally be configured as shown below:

```yml
webmvc:
  environment:
    - ASPNETCORE_ENVIRONMENT=Development
    - ASPNETCORE_URLS=http://0.0.0.0:80
    - CatalogUrl=http://catalog.api
    - OrderingUrl=http://ordering.api
    - BasketUrl=http://basket.api
    - LocationsUrl=http://locations.api
    - IdentityUrl=http://docker.for.mac.localhost:5105
    - MarketingUrl=http://marketing.api
    - CatalogUrlHC=http://catalog.api/hc
    - OrderingUrlHC=http://ordering.api/hc
    - IdentityUrlHC=http://identity.api/hc
    - BasketUrlHC=http://basket.api/hc
    - MarketingUrlHC=http://marketing.api/hc
    - PaymentUrlHC=http://payment.api/hc
    - UseCustomizationData=True
    - ApplicationInsights__InstrumentationKey=${INSTRUMENTATION_KEY}
    - OrchestratorType=${ORCHESTRATOR_TYPE}
    - UseLoadTest=${USE_LOADTEST:-False}
  ports:
    - "5100:80"
```

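The `${INSTRUMENTATION_KEY}` and `${USE_LOADTEST:-False}` entries use standard variable substitution: docker-compose reads the value from the shell environment or the `.env` file, and `:-` supplies a default when the variable is unset or empty. The behavior mirrors POSIX shell parameter expansion, which you can verify directly:

```shell
# ${VAR:-default}: the default applies only when VAR is unset or empty.
unset USE_LOADTEST
echo "UseLoadTest=${USE_LOADTEST:-False}"
export USE_LOADTEST=True
echo "UseLoadTest=${USE_LOADTEST:-False}"
```
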
If you re-deploy with `docker-compose up`, the login page should now work properly, as in the screenshot below.

NOTE: For some reason, the Safari browser cannot reach docker.for.mac.localhost, but Chrome on Mac works with no issues. Since docker.for.mac.localhost is only used for development purposes, just use Chrome for tests.



# Configuring the app for external access from remote client apps

If using the services from remote apps, like a phone with the Xamarin mobile app on the same Wi-Fi network, or web apps accessing the Docker host remotely, you need to change a few default URLs.

The eShopOnContainers app uses the .env file to set certain default environment variables used by the multiple docker-compose.override files you can have.

Therefore, the following change must be done in the .env file at the root of the eShopOnContainers folder.
If you don't see the .env file, run the following commands and restart the Finder:

```bash
$ defaults write com.apple.finder AppleShowAllFiles TRUE
$ killall Finder
```

Then, edit the .env file (with VS Code, for instance) and change the ESHOP_EXTERNAL_DNS_NAME_OR_IP variable: instead of using "localhost" as the value, set a real IP or a real DNS name:

`ESHOP_EXTERNAL_DNS_NAME_OR_IP=192.168.0.25`

or

`ESHOP_EXTERNAL_DNS_NAME_OR_IP=myserver.mydomain.com`

This is something you'll want to do if deploying to a real Docker host, like a VM in Azure, where you can use a DNS name for it.

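If you prefer to script the change instead of editing by hand, a `sed` one-liner works. This sketch operates on a throwaway copy (`.env.sample` is a hypothetical file name; the variable name is the real one, and the IP is a placeholder):

```shell
# Create a sample file and rewrite the variable in place (a .bak backup
# is kept; -i.bak works with both GNU and BSD sed).
printf 'ESHOP_EXTERNAL_DNS_NAME_OR_IP=localhost\n' > .env.sample
sed -i.bak 's/^ESHOP_EXTERNAL_DNS_NAME_OR_IP=.*/ESHOP_EXTERNAL_DNS_NAME_OR_IP=192.168.0.25/' .env.sample
cat .env.sample
```
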
## Sending feedback and pull requests

We'd appreciate your feedback, improvements and ideas.
You can create new issues at the issues section, do pull requests and/or send emails to eshop_feedback@service.microsoft.com

## Questions

[QUESTION] Answer +1 if the solution is working for you on the Mac:
https://github.com/dotnet/eShopOnContainers/issues/107

- [Mac setup](Mac-setup)

# Superseded

## Related readme files (use them for more information after reading this)

* [readme-docker-compose.md](https://github.com/dotnet-architecture/eShopOnContainers/blob/master/readme/readme-docker-compose.md): List of all docker-compose files and how to use them
* [README.ENV.md](https://github.com/dotnet-architecture/eShopOnContainers/blob/master/readme/README.ENV.md): How to use the `.env` file to configure the external resources

## Deploying on a "production" environment

_IMPORTANT: This section is in an early draft state, because the current version of eShopOnContainers has been tested mostly on the plain Docker engine, with just smoke tests on some orchestrators like ACS Kubernetes and Docker Swarm._

However, since a few folks are testing it in "production" environments beyond the dev machine (VS 2017 + Docker), such as Azure or a regular Docker host, here is some important info to take into account, in addition to the CLI deployment procedure explained here: https://github.com/dotnet/eShopOnContainers/wiki/03.-Setting-the-eShopOnContainers-solution-up-in-a-Windows-CLI-environment-(dotnet-CLI,-Docker-CLI-and-VS-Code)

The default configuration in the docker-compose.override.yml file makes it very straightforward to test the solution on a Windows PC with Visual Studio 2017: almost just F5 after the first configuration. For instance, it uses the "**10.0.75.1**" IP configured by default in all "Docker for Windows" installations, so it can be used by the Identity container for the login page when being redirected from the client apps, **without you having to change any specific external IP**.

However, when deploying eShopOnContainers to other environments, like a real Docker host, or simply if you want to access the apps from remote applications, some settings for the Identity service need to be changed by using the config specified in a "PRODUCTION" docker-compose file:
docker-compose.**prod**.yml (**for "production environments"**).
That file uses the environment variables provided by the "**.env**" file, which basically holds the local name and the "external" IP or DNS name to be used by the remote client apps.

**FIRST STEP:**
Basically, you need to change the IP (or DNS name) in the .env file to the IP or DNS name of your Docker host or orchestrator cluster. If it is a local machine using Docker for Windows, it will be your real Wi-Fi or Ethernet card IP:
https://github.com/dotnet/eShopOnContainers/blob/master/**.env**

The IP below should be swapped for your real IP or DNS name, like 192.168.88.248 or your DNS name, if testing from remote browsers or mobile devices.

ESHOP_EXTERNAL_DNS_NAME_OR_IP=localhost
ESHOP_PROD_EXTERNAL_DNS_NAME_OR_IP=10.121.122.92

**SECOND STEP:**
For deploying with docker-compose, instead of doing a regular `docker-compose up`, do:

`docker-compose -f docker-compose.yml -f docker-compose.prod.yml up`

So it uses the docker-compose.**prod**.yml file, which uses the EXTERNAL IP or DNS name.
https://github.com/dotnet/eShopOnContainers/blob/master/docker-compose.prod.yml

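Putting both steps together, a sketch of a deployment run (assuming you are in the repo root and have already edited `.env`; the snippet degrades gracefully when docker-compose is not installed):

```shell
# Bring the stack up detached, using the production override file.
if command -v docker-compose >/dev/null 2>&1; then
  docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d \
    || echo "docker-compose failed (are you in the repo root?)"
else
  echo "docker-compose not found"
fi
```
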
And take into account the procedure here:
https://github.com/dotnet/eShopOnContainers/wiki/03.-Setting-the-eShopOnContainers-solution-up-in-a-Windows-CLI-environment-(dotnet-CLI,-Docker-CLI-and-VS-Code)

This wiki page has been superseded by:

- [Docker host](Docker-host)

# Deprecated

## Important notice

The Web SPA application currently builds just fine while building the Docker images with docker-compose.

**You only need to go through this article if you want to run it locally with Visual Studio, to build the required JavaScript code and dependencies.**

## Requirements and set up

### Install NPM

You need to use **npm** from the command line to build the JS application, so it has to be installed globally.

**NPM** is bundled with Node.js, and installing Node and NPM is pretty straightforward using the installer package available at https://nodejs.org/en/



You can install the Long Term Support (LTS) version of Node ("Recommended For Most Users"); however, the version used in the WebSPA application is [8.11](https://nodejs.org/download/release/v8.11.4/).

You can also see the installed NPM version with the command `npm -v`, as shown below.



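A quick sanity check that the tooling is on the PATH before building (a hedged sketch; any reasonably recent Node version will print here, though the WebSPA at the time targeted Node 8.11.x):

```shell
# Print the installed Node.js and NPM versions, or a hint if missing.
if command -v node >/dev/null 2>&1; then
  echo "node $(node -v)"
  echo "npm $(npm -v 2>/dev/null)"
else
  echo "Node.js is not installed"
fi
```
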
### Set NPM path into Visual Studio

This step is only required if you are also using the full Visual Studio 2017.

NPM (just installed by you) will usually be installed under this path:
**C:\Program Files\nodejs**.

You might need to update that path in Visual Studio under the "External Web Tools" location paths, as shown below:



If you skip this step, you might hit issues caused by VS and the command line using different NPM/Node versions against the same JavaScript code.
See:
http://www.hanselman.com/blog/VisualStudio2015FixingDependenciesNpmNotInstalledFromFseventsWithNodeOnWindows.aspx

### Build the SPA app with NPM

Now, you need to build the SPA app (the TypeScript and Angular 6+ based client app) with NPM.

- Open a command-prompt window and move to the root of the SPA application (src\Web\WebSPA\)

- Run the command `npm install` as shown below:



**IMPORTANT NOTE/UPDATE:** It seems that in some NPM environments running just "npm install" does not work properly. If you have an issue similar to [this issue](https://github.com/dotnet-architecture/eShopOnContainers/issues/253), try running **"npm install enhanced-resolve@3.3.0"** instead of "npm install". (Please share your experience at that issue.)

- Then, run the command `npm run build:prod` as shown below:



- If you get an error like **"Node Sass could not find a binding for your current environment: Windows 64-bit with Node.js 6.x"**, then run the command `npm rebuild node-sass` as in the following screenshot:



Then, run the `npm run build:prod` command again; it should finish with no errors.



### Run the SPA locally

To run the SPA locally, you have to set WebSPA as the startup project for the solution and then press [F5] or [Ctrl+F5] as usual.

The SPA application should start on port 58018, but `localhost:58018` is not one of the authorized client redirect URIs in `Identity.API`, so this address has to be added manually.

The easiest way is to edit the proper database table and add the required record.

If using SQL Server Management Studio, do the following:

1. Connect to `localhost, 5433` with username `sa` and password `Pass@word`



2. Find the table `dbo.ClientRedirectUris` in the `IdentityDb` database and add a new record like the one that contains `http://localhost:5104`, but with `http://localhost:58018` instead, as shown in the next image:



You should now be able to run the SPA application with VS, using [F5] or [Ctrl+F5], and log in as usual, as shown here:



### (Optional) Run NPM tasks from within Visual Studio 2017

When developing a client front-end app (with JS frameworks, etc.), you, as a developer, need to be able to trigger the npm tasks whenever you want.

Of course, you can always open a command prompt and run npm from the CLI, as you just did in the steps above (which is, in fact, what most front-end developers do).

However, you can also run npm tasks inside Visual Studio if you install the following VS extension: https://marketplace.visualstudio.com/items?itemName=MadsKristensen.NPMTaskRunner

This extension adds the option to run npm tasks from the "Task Runner Explorer" (out of the box, only gulp/grunt tasks are supported by VS 2017). After this extension is installed you can run npm tasks from inside VS 2017 and also set build bindings if you want.



This extension honors the VS External Web Tools configuration and allows you to use bindings, so if you want to run npm tasks automatically on every VS build, you could do so. This is not set as default in the provided eShopOnContainers code, as it would slow down each VS build with the npm build tasks.

## Sending feedback and pull requests

We'd appreciate your feedback, improvements and ideas.

You can create new [issues](https://github.com/dotnet-architecture/eShopOnContainers/issues) or [pull requests](https://github.com/dotnet-architecture/eShopOnContainers/pulls) in this repo or send emails to [eshop_feedback@service.microsoft.com](mailto:eshop_feedback@service.microsoft.com)

This wiki page has been deprecated as it's no longer necessary.

# Superseded

_IMPORTANT: This section is in an early draft state and will keep evolving._

# Important Notes for the Xamarin app:
* When running the Xamarin app, note that it can run in "Mock mode", so you won't need any connection to the microservices; the data shown is "fake data" generated by the Xamarin client app.
* In order to really access the microservices/containers, you'll need to deploy the containers following this ["production" deployment procedure for the containers](https://github.com/dotnet-architecture/eShopOnContainers/wiki/05.-Deploying-eShopOnContainers-to-a-Docker-Host-Production-environment) and then provide the external IP of your dev machine, or the DNS name or IP of the Docker host you are using, in the Xamarin app settings when NOT using "Mock mode".

This wiki page has been superseded by:

# Guidance on Architecture patterns of Xamarin.Forms apps

The following book (in an early draft state) is being created aligned with this sample/reference Xamarin app.
You can download it here:

<a href='https://aka.ms/xamarinpatternsebook'><img src="/dotnet/eShopOnContainers/blob/master/img/xamarin-enterprise-patterns-ebook-cover-small.png"> </a>

<a href='https://aka.ms/xamarinpatternsebook'>**Download** (Early DRAFT, still work in progress)</a>

## Sending feedback and pull requests

We'd appreciate your feedback, improvements and ideas.
You can create new issues at the issues section, do pull requests and/or send emails to eshop_feedback@service.microsoft.com

- [Xamarin](Xamarin-setup)

# Superseded

This is a draft page which will keep evolving while our tests and development regarding Windows Containers are completed.

Windows Containers support:

**Windows 10** - Development Environment:

- Install **[Docker Community Edition](https://store.docker.com/editions/community/docker-ce-desktop-windows?tab=description)** (Docker CE, formerly **Docker for Windows**)
- Support via forums/GitHub
- Can switch between Windows container development and Linux (in a VM). There is no plan to drop either OS from Docker CE
- Designed for devs only. Not production

**Windows Server 2016** - Production Environment:

- Install **[Docker Enterprise Edition](https://store.docker.com/editions/enterprise/docker-ee-server-windows?tab=description)** (Docker EE)
- Designed to run apps in production
- Call Microsoft for support. If it's a Docker rather than Windows problem, they escalate to Docker and get it solved

Docker might provide a per-incident support system for Docker Community Edition, or provide an "EE Desktop" for developers, but that is their call to make, not Microsoft's.

## Set Docker to use Windows Containers (Windows 10 only)

On Windows 10 you need to set Docker to use "Windows containers" instead of Linux containers (on Windows Server 2016, Windows Containers are used by default). To do this, first you must have enabled container support in Windows 10: in "Turn Windows features on or off", select "Containers":

[[/img/win-containers/enable-windows-containers.png|Enabling windows containers]]

Then right-click the Docker icon in the notification bar and select the option "Switch to Windows Containers". If you instead see the option "Switch to Linux Containers", you're already using Windows Containers.

## The localhost loopback limitation in Windows Containers Docker hosts

Due to a default NAT limitation in current versions of Windows (see [https://blog.sixeyed.com/published-ports-on-windows-containers-dont-do-loopback/](https://blog.sixeyed.com/published-ports-on-windows-containers-dont-do-loopback/)), you can't access your containers using `localhost` from the host computer.
You can find further information here, too: https://blogs.technet.microsoft.com/virtualization/2016/05/25/windows-nat-winnat-capabilities-and-limitations/

That [limitation has been removed beginning with Build 17025](https://blogs.technet.microsoft.com/networking/2017/11/06/available-to-windows-10-insiders-today-access-to-published-container-ports-via-localhost127-0-0-1/) (as of early 2018, only available to Windows Insiders, not in a public/stable release). With Windows 10 Build 17025 or later, access to published container ports via "localhost"/127.0.0.1 is available.

Until you can use a newer build of Windows 10 or Windows Server 2016, instead of localhost you can use either an IP address from the host's network card (for example, let's suppose you have the 192.168.0.1 address) or the DockerNAT IP address, which is `10.0.75.1`. If you don't see that IP (`10.0.75.1`) when you check with `ipconfig`, you'll need to switch to Linux containers so Docker creates that NAT, and then go back to Windows containers (right-click the Docker icon in the task bar).

If you use `start-windows-containers.ps1` to start the containers, as explained in the following section, that script will create the environment variables with that IP for you; but if you use docker-compose directly, then you have to set the following environment variables.

Where you see `10.0.75.1` you could also use your network card's IP discovered with `ipconfig`, or a production DNS name or IP if this were a production deployment.

* `ESHOP_EXTERNAL_DNS_NAME_OR_IP` to `10.0.75.1`
* `ESHOP_AZURE_STORAGE_CATALOG_URL` to `http://10.0.75.1:5101/api/v1/catalog/items/[0]/pic/`
* `ESHOP_AZURE_STORAGE_MARKETING_URL` to `http://10.0.75.1:5110/api/v1/campaigns/[0]/pic/`

Note that the last two env-vars should only be set if you have not set them already for Azure Storage. If you are using Azure Storage for the images, you don't need to provide those URLs.

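For a bash-style shell, setting the three variables looks like this (10.0.75.1 is the default DockerNAT address; on Windows cmd you would use `set` instead of `export`):

```shell
# Export the variables docker-compose will substitute into the
# Windows compose files; the URLs are quoted because of the [0] brackets.
export ESHOP_EXTERNAL_DNS_NAME_OR_IP=10.0.75.1
export ESHOP_AZURE_STORAGE_CATALOG_URL='http://10.0.75.1:5101/api/v1/catalog/items/[0]/pic/'
export ESHOP_AZURE_STORAGE_MARKETING_URL='http://10.0.75.1:5110/api/v1/campaigns/[0]/pic/'
echo "$ESHOP_EXTERNAL_DNS_NAME_OR_IP"
```
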
Once these variables are set, you can run docker-compose to start the containers and navigate to `http://10.0.75.1:5100` to view the MVC web app.

Using `start-windows-containers.ps1` is simpler, as it creates the env-vars for you.

## Deploy Windows Containers of eShopOnContainers

Since eShopOnContainers is using Docker multi-stage builds, the compilation of the .NET application bits is performed by Docker itself right before building the Docker images.

Although you can create the Docker images while trying to run the containers, let's split it into two steps, so it is clearer.

### 1. Compile the .NET application/services bits and build the Docker images for Windows Containers

In order to compile the bits and build the Docker images, run:
```
cd <root-folder-of--eshoponcontainers>
docker-compose -f docker-compose.yml -f docker-compose.windows.yml build
```

**Note**: Be sure to pass both `-f` files when building the images for Windows containers!

### 2. Deploy/run the containers

The easiest way to run/start the Windows containers of eShopOnContainers is by running this PowerShell script:

`start-windows-containers.ps1`

You can find this script at /cli-windows/start-windows-containers.ps1

Otherwise, you could also run it directly with `docker-compose up`, but then you'd be missing a few environment variables needed for Windows containers. See the section below on the environment variables you also need to configure.

Under the covers, in any case, start-windows-containers.ps1 runs this command to deploy/run the containers:

```
set ESHOP_OCELOT_VOLUME_SPEC=C:\app\configuration
docker-compose -f docker-compose.yml -f docker-compose.override.yml -f docker-compose.windows.yml -f docker-compose.override.windows.yml up
```

**IMPORTANT**: You need to include those files when running docker-compose up, **and the `ESHOP_OCELOT_VOLUME_SPEC` environment variable must be set to `C:\app\configuration`**. You also have to set the environment variables related to the localhost loopback limitation mentioned at the beginning of this post (if it applies to your environment).

Just for reference, here are the docker-compose files and what they do:

1. `docker-compose.yml`: Main compose file. Defines all services for both Linux & Windows and sets base images for Linux
2. `docker-compose.override.yml`: Main override file. Defines all config for both Linux & Windows, with Linux-based defaults
3. `docker-compose.windows.yml`: Overrides some previous data (like images) for Windows containers
4. `docker-compose.override.windows.yml`: Adds Windows-only configuration

## Test/use the eShopOnContainers MVC app in a browser

Open a browser and navigate to the following URL:

`http://10.0.75.1:5100`

## RabbitMQ user and password
|
||||
|
||||
For RabbitMQ we are using the [https://hub.docker.com/r/spring2/rabbitmq/](spring2/rabbitmq) image, which provides a ready RabbitMQ to use. This RabbitMQ is configured to accept AMQP connections from the user `admin:password`(this is different from the RabbitMQ Linux image which do not require any user/password when creating AMQP connections)
|
||||
|
||||
If you use `start-windows-containers.ps1` script to launch the containers or include the file `docker-compose.override.windows.yml` in the `docker-compose` command, then the containers will be configured to use this login/password, so everything will work.
|
||||
|
||||
## Using custom login/password for RabbitMQ (if needed)
|
||||
|
||||
**Note**: Read this only if you use any other RabbitMQ image (or server) that have its own user/password needed.
|
||||
|
||||
We support any user/password needed using the environment variables `ESHOP_SERVICE_BUS_USERNAME` and `ESHOP_SERVICE_BUS_PASSWORD`. These variables are used to set a username and password when connecting to RabbitMQ. So:
|
||||
|
||||
* In Linux these variables should be unset (or empty) **unless you're using any external RabbitMQ that requires any specific login/password**
|
||||
* In Windows these variables should be set
|
||||
|
||||
To set this variables you have two options
|
||||
|
||||
1. Just set them on your shell
|
||||
2. Edit the `.env` file and add these variables
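
For the second option, this is a minimal sketch of adding the two variables to the `.env` file that docker-compose reads; the variable names come from this page, while the values are placeholders for your own broker's credentials:

```shell
# Demo in a scratch folder so we don't touch a real repo; in practice
# you would edit the .env file at the repo root.
cd "$(mktemp -d)"
cat >> .env <<'EOF'
ESHOP_SERVICE_BUS_USERNAME=myuser
ESHOP_SERVICE_BUS_PASSWORD=mypassword
EOF
grep -c '^ESHOP_SERVICE_BUS' .env   # prints 2
```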

If you have set these variables and want to launch the containers, you can use:

```
.\cli-windows\start-windows-containers.ps1 -customEventBusLoginPassword $true
```

Passing the parameter `-customEventBusLoginPassword $true` to the script forces it to use the login/password set in the environment variables instead of the default one (the one needed for spring2/rabbitmq).

If you prefer to use `docker-compose`, you can. Just call it without the `docker-compose.override.windows.yml` file:

```
set ESHOP_OCELOT_VOLUME_SPEC=C:\app\configuration
docker-compose -f docker-compose.yml -f docker-compose.override.yml -f docker-compose.windows.yml up
```

This wiki page has been superseded by:

- [Windows containers](Deploy-to-Windows-containers)

# Using Helm Charts to deploy eShopOnContainers to AKS

It is possible to deploy eShopOnContainers on an AKS cluster using [Helm](https://helm.sh/) instead of custom scripts (which will be deprecated soon).

## Create Kubernetes cluster in AKS

You can create the AKS cluster in one of two ways:

- A. Use the Azure CLI: Follow a procedure using the [Azure CLI like here](https://docs.microsoft.com/en-us/azure/aks/kubernetes-walkthrough), but make sure you **enable RBAC** with `--enable-rbac` and **enable application routing** with `--enable-addons http_application_routing` in the `az aks create` command.

- B. Use the Azure portal

The following steps use the Azure portal to create the AKS cluster:

- Start the process by providing the general data, like in the following screenshot:

![image](...)

- Then, very important, in the next step, enable RBAC:

![image](...)

- **Enable HTTP routing**. Make sure to check the checkbox "Http application routing" in the "Networking" settings. For more info, read the [documentation](https://docs.microsoft.com/en-us/azure/aks/http-application-routing).

You can use **basic network** settings, since for a test you don't need integration into any existing VNET.

![image](...)

- You can also enable monitoring:

![image](...)

- Finally, create the cluster. It'll take a few minutes for it to be ready.

### Configure RBAC security for the K8s dashboard service account

In order NOT to get errors in the Kubernetes dashboard, you'll need to perform the following service-account steps.

Here you can see the errors you might get:

![image](...)

- Because the cluster is using RBAC, you need to grant the needed rights to the service account `kubernetes-dashboard` with this kubectl command:

`kubectl create clusterrolebinding kubernetes-dashboard -n kube-system --clusterrole=cluster-admin --serviceaccount=kube-system:kubernetes-dashboard`

![image](...)

Now, just run the Azure CLI command to browse the Kubernetes dashboard:

`az aks browse --resource-group pro-eshop-aks-helm-linux-resgrp --name pro-eshop-aks-helm-linux`

![image](...)

## Additional pre-requisites

In addition to having an AKS cluster created in Azure and having kubectl and the Azure CLI installed on your local machine and configured to use your Azure subscription, you also need the following pre-requisites:

### Install Helm

You need to have Helm installed on your machine, and Tiller must be installed on the AKS. Follow these instructions on how to ['Install applications with Helm in Azure Kubernetes Service (AKS)'](https://docs.microsoft.com/en-us/azure/aks/kubernetes-helm) to set up Helm and Tiller for AKS.

**Note**: If your AKS cluster is not RBAC-enabled (the default option in the portal) you may receive the following error when running a helm command:

```
Error: Get http://localhost:8080/api/v1/namespaces/kube-system/configmaps?labelSelector=OWNER%!D(MISSING)TILLER: dial tcp [::1]:8080: connect: connection refused
```

If so, type:

```
kubectl --namespace=kube-system edit deployment/tiller-deploy
```

Your default text editor will pop up with the YAML definition of the tiller deployment. Search for:

```
automountServiceAccountToken: false
```

And change it to:

```
automountServiceAccountToken: true
```

Save the file and close the editor. This should reapply the deployment in the cluster. Now Helm commands should work.

## Install eShopOnContainers using Helm

All steps need to be performed in the `/k8s/helm` folder. The easiest way is to use the `deploy-all.ps1` script from a PowerShell window:

```
.\deploy-all.ps1 -externalDns aks -aksName eshoptest -aksRg eshoptest -imageTag dev
```

This will install all the [eShopOnContainers public images](https://hub.docker.com/u/eshop/) with tag `dev` on the AKS named `eshoptest` in the resource group `eshoptest`. By default, all infrastructure (sql, mongo, rabbit and redis) is also installed in the cluster.

Once the script has run, you should see the following output when using `kubectl get deployment`:

```
NAME                             DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
eshop-apigwmm                    1         1         1            1           4d
eshop-apigwms                    1         1         1            1           4d
eshop-apigwwm                    1         1         1            1           4d
eshop-apigwws                    1         1         1            1           4d
eshop-basket-api                 1         1         1            1           4d
eshop-basket-data                1         1         1            1           4d
eshop-catalog-api                1         1         1            1           4d
eshop-identity-api               1         1         1            1           4d
eshop-keystore-data              1         1         1            1           4d
eshop-locations-api              1         1         1            1           4d
eshop-marketing-api              1         1         1            1           4d
eshop-mobileshoppingagg          1         1         1            1           4d
eshop-nosql-data                 1         1         1            1           4d
eshop-ordering-api               1         1         1            1           4d
eshop-ordering-backgroundtasks   1         1         1            1           4d
eshop-ordering-signalrhub        1         1         1            1           4d
eshop-payment-api                1         1         1            1           4d
eshop-rabbitmq                   1         1         1            1           4d
eshop-sql-data                   1         1         1            1           4d
eshop-webmvc                     1         1         1            1           4d
eshop-webshoppingagg             1         1         1            1           4d
eshop-webspa                     1         1         1            1           4d
eshop-webstatus                  1         1         1            1           4d
```

Every public service is exposed through its own ingress resource, as you can see when using `kubectl get ing`:

```
eshop-apigwmm        eshop.<your-guid>.<region>.aksapp.io   <public-ip>   80   4d
eshop-apigwms        eshop.<your-guid>.<region>.aksapp.io   <public-ip>   80   4d
eshop-apigwwm        eshop.<your-guid>.<region>.aksapp.io   <public-ip>   80   4d
eshop-apigwws        eshop.<your-guid>.<region>.aksapp.io   <public-ip>   80   4d
eshop-identity-api   eshop.<your-guid>.<region>.aksapp.io   <public-ip>   80   4d
eshop-webmvc         eshop.<your-guid>.<region>.aksapp.io   <public-ip>   80   4d
eshop-webspa         eshop.<your-guid>.<region>.aksapp.io   <public-ip>   80   4d
eshop-webstatus      eshop.<your-guid>.<region>.aksapp.io   <public-ip>   80   4d
```

Ingresses are automatically configured to use the public DNS of the AKS cluster provided by the "HTTP application routing" addon.

One more step is needed: the NGINX ingress controller installed by AKS must be configured to allow larger headers, because the headers sent by Identity Server exceed the size configured by default. Fortunately this is very easy to do. Just type (from the `/k8s/helm` folder):

```
kubectl apply -f aks-httpaddon-cfg.yaml
```

Then restart the pod that runs the NGINX controller. Its name is `addon-http-application-routing-nginx-ingress-controller-<something>` and it runs in the `kube-system` namespace. So run `kubectl get pods -n kube-system`, find it, and delete it with `kubectl delete pod <pod-name> -n kube-system`.

**Note:** If running in a bash shell you can type:

```
kubectl delete pod $(kubectl get pod -l app=addon-http-application-routing-nginx-ingress -n kube-system -o jsonpath="{.items[0].metadata.name}") -n kube-system
```

You can view the MVC client at http://[dns]/webmvc and the SPA at http://[dns]/

## Customizing the deployment

### Using your own images

To use your own images instead of the public ones, you have to pass the following additional parameters to the `deploy-all.ps1` script:

* `registry`: Login server for the Docker registry
* `dockerUser`: User login for the Docker registry
* `dockerPassword`: User password for the Docker registry

This will deploy a secret on the cluster to connect to the specified server, and all deployed image names will be prepended with the `registry/` value.
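
As a quick illustration of that image-name rewriting (the registry name below is a placeholder, not a real server):

```shell
# With -registry set, every chart image reference is prefixed with the
# registry login server. 'myregistry.azurecr.io' is a placeholder.
registry="myregistry.azurecr.io"
image="eshop/catalog.api:dev"
echo "${registry}/${image}"   # prints myregistry.azurecr.io/eshop/catalog.api:dev
```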

### Using a specific DNS

The `-externalDns` parameter controls the DNS bound to the ingresses. You can pass a custom DNS (like `my.server.com`), or the value `aks` to auto-discover the AKS DNS. For auto-discovery to work you also need to specify which AKS to use, via the `-aksName` and `-aksRg` parameters.
Auto-discovery works using the Azure CLI under the hood, so make sure the Azure CLI is logged in and pointing to the right subscription.

If you don't pass any external DNS at all, the ingresses aren't bound to any DNS, and you have to use the public IP to access the resources.

### Not deploying infrastructure containers

If you want to use external resources, use `-deployInfrastructure $false` to skip deploying the infrastructure containers. However, **you still have to manually update the scripts to provide your own configuration** (see the next section).

### Providing your own configuration

The file `inf.yaml` contains the description of the infrastructure used. The file is documented, so take a look at it to understand all of its entries. If using external resources you need to edit this file according to your needs. You'll need to edit:

* `inf.sql.host` with the host name of the SQL Server
* `inf.sql.common` entries to provide your SQL user and password. `Pid` is not used when using external resources (it is used to set a specific product id for the SQL Server container).
* `inf.sql.catalog`, `inf.sql.ordering`, `inf.sql.identity`: to provide the database names for the catalog, ordering and identity services
* `mongo.host`: with the host name of the MongoDB
* `mongo.locations`, `mongo.marketing` with the database names for the locations and marketing services
* `redis.basket.constr` with the connection string to Redis for the Basket service. Note that `redis.basket.svc` is not used when using external services
* `redis.keystore.constr` with the connection string to Redis for the Keystore service. Note that `redis.keystore.svc` is not used when using external services
* `eventbus.constr` with the connection string to Azure Service Bus, and `eventbus.useAzure` set to `true` to use Azure Service Bus. Note that `eventbus.svc` is not used when using external services

### Using Azure storage for catalog photos

Using Azure storage for catalog (and marketing) photos is not directly supported, but you can accomplish it by editing the file `k8s/helm/catalog-api/templates/configmap.yaml`. Search for the line:

```
catalog__PicBaseUrl: http://{{ $webshoppingapigw }}/api/v1/c/catalog/items/[0]/pic/
```

And replace it with:

```
catalog__PicBaseUrl: http://<url-of-the-storage>/
```

In the same way, to use Azure storage for the marketing service, you have to edit the file `k8s/helm/marketing-api/templates/configmap.yaml` and replace the line:

```
marketing__PicBaseUrl: http://{{ $webshoppingapigw }}/api/v1/c/catalog/items/[0]/pic/
```

with:

```
marketing__PicBaseUrl: http://<url-of-the-storage>/
```
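
The manual edit above can also be scripted with `sed`. This is a sketch with a placeholder storage URL, run here against a scratch copy of the line; in a real checkout you would point it at `k8s/helm/catalog-api/templates/configmap.yaml` (and likewise for the marketing configmap):

```shell
# Placeholder storage URL - substitute your own storage account.
STORAGE_URL='https://mystorage.blob.core.windows.net/pics/'
cfg="$(mktemp)"
# Scratch file holding the original line from the chart template.
cat > "$cfg" <<'EOF'
catalog__PicBaseUrl: http://{{ $webshoppingapigw }}/api/v1/c/catalog/items/[0]/pic/
EOF
# Rewrite the whole line, keeping the key and swapping the URL.
sed -i "s|^catalog__PicBaseUrl:.*|catalog__PicBaseUrl: ${STORAGE_URL}|" "$cfg"
cat "$cfg"   # prints catalog__PicBaseUrl: https://mystorage.blob.core.windows.net/pics/
```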

# Using Helm Charts to deploy eShopOnContainers to a local Kubernetes in Windows with 'Docker for Windows'

## Additional pre-requisites

In addition to having Docker for Windows/Mac with Kubernetes enabled and having kubectl installed, you also need the following pre-requisites:

### Install Helm

You need to have Helm installed on your machine, and Tiller must be installed on the local Docker Kubernetes cluster. Once you have [Helm downloaded](https://helm.sh/) and installed on your machine you must:

1. Create the tiller service account, by running `kubectl apply -f helm-rbac.yaml` from the `/k8s` folder
2. Install tiller and configure it to use the tiller service account by typing `helm init --service-account tiller`

### Install the NGINX ingress controller

The local Docker Kubernetes cluster does not have any ingress controller installed by default, so you need to install one. Any ingress controller should work, but we have created scripts for installing the NGINX ingress controller. To install it, just type (from the `/k8s` folder):

1. `.\deploy-ingress.ps1`
2. `.\deploy-ingress-dockerlocal.ps1`

## Install eShopOnContainers using Helm

All steps need to be performed in the `/k8s/helm` folder. The easiest way is to use the `deploy-all.ps1` script from a PowerShell window:

```
.\deploy-all.ps1 -imageTag dev -useLocalk8s $true
```

Setting the parameter `useLocalk8s` to `$true` forces the script to use `localhost` as the DNS for all Helm charts and also creates the ingresses with the correct ingress class.

This will install all the [eShopOnContainers public images](https://hub.docker.com/u/eshop/) with tag `dev` on the local Docker Kubernetes cluster. By default, all infrastructure (sql, mongo, rabbit and redis) is also installed in the cluster.

Once the script has run, you should see the following output when using `kubectl get deployment`:

```
NAME                             DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
eshop-apigwmm                    1         1         1            1           2h
eshop-apigwms                    1         1         1            1           2h
eshop-apigwwm                    1         1         1            1           2h
eshop-apigwws                    1         1         1            1           2h
eshop-basket-api                 1         1         1            1           2h
eshop-basket-data                1         1         1            1           2h
eshop-catalog-api                1         1         1            1           2h
eshop-identity-api               1         1         1            1           2h
eshop-keystore-data              1         1         1            1           2h
eshop-locations-api              1         1         1            1           2h
eshop-marketing-api              1         1         1            1           2h
eshop-mobileshoppingagg          1         1         1            1           2h
eshop-nosql-data                 1         1         1            1           2h
eshop-ordering-api               1         1         1            1           2h
eshop-ordering-backgroundtasks   1         1         1            1           2h
eshop-ordering-signalrhub        1         1         1            1           2h
eshop-payment-api                1         1         1            1           2h
eshop-rabbitmq                   1         1         1            1           2h
eshop-sql-data                   1         1         1            1           2h
eshop-webmvc                     1         1         1            1           2h
eshop-webshoppingagg             1         1         1            1           2h
eshop-webspa                     1         1         1            1           2h
eshop-webstatus                  1         1         1            1           2h
```

Every public service is exposed through its own ingress resource, as you can see when using `kubectl get ing`:

```
NAME                 HOSTS       ADDRESS     PORTS   AGE
eshop-apigwmm        localhost   localhost   80      2h
eshop-apigwms        localhost   localhost   80      2h
eshop-apigwwm        localhost   localhost   80      2h
eshop-apigwws        localhost   localhost   80      2h
eshop-identity-api   localhost   localhost   80      2h
eshop-webmvc         localhost   localhost   80      2h
eshop-webspa         localhost   localhost   80      2h
eshop-webstatus      localhost   localhost   80      2h
```

Note that the ingresses are bound to the DNS localhost and the host is also "localhost". So, you can access the web SPA by typing `http://localhost` and the MVC by typing `http://localhost/webmvc`.

As this is the local Docker K8s cluster, you can also see the containers running on your machine. If you type `docker ps` you'll see them all:
```
CONTAINER ID   IMAGE                            COMMAND                  CREATED         STATUS         PORTS   NAMES
fec1e3499416   a3f21ec4bd11                     "/entrypoint.sh /ngi…"   9 minutes ago   Up 9 minutes           k8s_nginx-ingress-controller_nginx-ingress-controller-f88c75bc6-5xs2n_ingress-nginx_f1cc7094-e68f-11e8-b4b6-00155d016146_0
76485867f032   eshop/payment.api                "dotnet Payment.API.…"   2 hours ago     Up 2 hours             k8s_payment-api_eshop-payment-api-75d5f9bdf6-6zx2v_default_4a3cdab4-e67f-11e8-b4b6-00155d016146_1
c2c4640ed610   eshop/marketing.api              "dotnet Marketing.AP…"   2 hours ago     Up 2 hours             k8s_marketing-api_eshop-marketing-api-6b8c5989fd-jpxqv_default_45780626-e67f-11e8-b4b6-00155d016146_1
85301d538574   eshop/ordering.signalrhub        "dotnet Ordering.Sig…"   2 hours ago     Up 2 hours             k8s_ordering-signalrhub_eshop-ordering-signalrhub-58cf5ff6-cnlm8_default_4932c344-e67f-11e8-b4b6-00155d016146_1
7a408a98000e   eshop/ordering.backgroundtasks   "dotnet Ordering.Bac…"   2 hours ago     Up 2 hours             k8s_ordering-backgroundtasks_eshop-ordering-backgroundtasks-cc8f6d4d8-ztfk7_default_47f9cf10-e67f-11e8-b4b6-00155d016146_1
12c64b3a13e0   eshop/basket.api                 "dotnet Basket.API.d…"   2 hours ago     Up 2 hours             k8s_basket-api_eshop-basket-api-658546684d-6hlvd_default_4262d022-e67f-11e8-b4b6-00155d016146_1
133fccfeeff3   eshop/webstatus                  "dotnet WebStatus.dll"   2 hours ago     Up 2 hours             k8s_webstatus_eshop-webstatus-7f46479dc4-bqnq7_default_4dc13eb2-e67f-11e8-b4b6-00155d016146_0
00c6e4c52135   eshop/webspa                     "dotnet WebSPA.dll"      2 hours ago     Up 2 hours             k8s_webspa_eshop-webspa-64cb8df9cb-dcbwg_default_4cd47376-e67f-11e8-b4b6-00155d016146_0
d4507f1f6b1a   eshop/webshoppingagg             "dotnet Web.Shopping…"   2 hours ago     Up 2 hours             k8s_webshoppingagg_eshop-webshoppingagg-cc94fc86-sxd2v_default_4be6cdb9-e67f-11e8-b4b6-00155d016146_0
9178e26703da   eshop/webmvc                     "dotnet WebMVC.dll"      2 hours ago     Up 2 hours             k8s_webmvc_eshop-webmvc-985779684-4br5z_default_4addd4d6-e67f-11e8-b4b6-00155d016146_0
1088c281c710   eshop/ordering.api               "dotnet Ordering.API…"   2 hours ago     Up 2 hours             k8s_ordering-api_eshop-ordering-api-fb8c548cb-k68x9_default_4740958a-e67f-11e8-b4b6-00155d016146_0
12424156d5c9   eshop/mobileshoppingagg          "dotnet Mobile.Shopp…"   2 hours ago     Up 2 hours             k8s_mobileshoppingagg_eshop-mobileshoppingagg-b54645d7b-rlrgh_default_46c00017-e67f-11e8-b4b6-00155d016146_0
65463ffd437d   eshop/locations.api              "dotnet Locations.AP…"   2 hours ago     Up 2 hours             k8s_locations-api_eshop-locations-api-577fc94696-dfhq8_default_44929c4b-e67f-11e8-b4b6-00155d016146_0
5b3431873763   eshop/identity.api               "dotnet Identity.API…"   2 hours ago     Up 2 hours             k8s_identity-api_eshop-identity-api-85d9b79f4-s5ks7_default_43d6eb7c-e67f-11e8-b4b6-00155d016146_0
7c8e77252459   eshop/catalog.api                "dotnet Catalog.API.…"   2 hours ago     Up 2 hours             k8s_catalog-api_eshop-catalog-api-59fd444fb-ztvhz_default_4356705a-e67f-11e8-b4b6-00155d016146_0
94d95d0d3653   eshop/ocelotapigw                "dotnet OcelotApiGw.…"   2 hours ago     Up 2 hours             k8s_apigwws_eshop-apigwws-65474b979d-n99jw_default_41395473-e67f-11e8-b4b6-00155d016146_0
bc4bbce71d5f   eshop/ocelotapigw                "dotnet OcelotApiGw.…"   2 hours ago     Up 2 hours             k8s_apigwwm_eshop-apigwwm-857c549dd8-8w5gv_default_4098d770-e67f-11e8-b4b6-00155d016146_0
840aabcceaa9   eshop/ocelotapigw                "dotnet OcelotApiGw.…"   2 hours ago     Up 2 hours             k8s_apigwms_eshop-apigwms-5b94dfb54b-dnmr9_default_401fc611-e67f-11e8-b4b6-00155d016146_0
aabed7646f5b   eshop/ocelotapigw                "dotnet OcelotApiGw.…"   2 hours ago     Up 2 hours             k8s_apigwmm_eshop-apigwmm-85f96cbdb4-dhfwr_default_3ed7967a-e67f-11e8-b4b6-00155d016146_0
49c5700def5a   f06a5773f01e                     "docker-entrypoint.s…"   2 hours ago     Up 2 hours             k8s_basket-data_eshop-basket-data-66fbc788cc-csnlw_default_3e0c45fe-e67f-11e8-b4b6-00155d016146_0
a5db4c521807   f06a5773f01e                     "docker-entrypoint.s…"   2 hours ago     Up 2 hours             k8s_keystore-data_eshop-keystore-data-5c9c85cb99-8k56s_default_3ce1a273-e67f-11e8-b4b6-00155d016146_0
aae88fd2d810   d69a5113ceae                     "docker-entrypoint.s…"   2 hours ago     Up 2 hours             k8s_rabbitmq_eshop-rabbitmq-6b68647bc4-gr565_default_3c37ee6a-e67f-11e8-b4b6-00155d016146_0
65d49ca9589d   bbed8d0e01c1                     "docker-entrypoint.s…"   2 hours ago     Up 2 hours             k8s_nosql-data_eshop-nosql-data-579c9d89f8-mtt95_default_3b9c1f89-e67f-11e8-b4b6-00155d016146_0
090e0dde2ec4   bbe2822dfe38                     "/opt/mssql/bin/sqls…"   2 hours ago     Up 2 hours             k8s_sql-data_eshop-sql-data-5c4fdcccf4-bscdb_default_3afd29b8-e67f-11e8-b4b6-00155d016146_0
```

## Known issues

Login from the webmvc results in the following error: HttpRequestException: Response status code does not indicate success: 404 (Not Found).

The reason is that MVC needs to access the Identity Server from both outside the container (browser) and inside the container (C# code). Thus, the configuration always uses the *external URL* of the Identity Server, which in this case is just `http://localhost/identity-api`. But this external URL is incorrect when used from C# code, and the web MVC can't access the identity API. This is the only case where this issue happens (and it is the reason why we use 10.0.75.1 as the local address for the web MVC in local development mode).

Solving this requires some manual steps:

From the `/k8s` folder run `kubectl apply -f .\nginx-ingress\local-dockerk8s\mvc-fix.yaml`. This will create two additional ingresses (for MVC and Identity API) on any valid DNS that points to your machine. This enables the use of the 10.0.75.1 IP.

Update the configmap of the Web MVC by typing (**line breaks are mandatory**):

```
kubectl patch cm cfg-eshop-webmvc --type strategic --patch @'
data:
  urls__IdentityUrl: http://10.0.75.1/identity
  urls__mvc: http://10.0.75.1/webmvc
'@
```

Update the configmap of the Identity API by typing (**line breaks are mandatory**):

```
kubectl patch cm cfg-eshop-identity-api --type strategic --patch @'
data:
  mvc_e: http://10.0.75.1/webmvc
'@
```

Restart the SQL Server pod to ensure the database is recreated again:

```
kubectl delete pod --selector app=sql-data
```

Wait until the SQL Server pod is ready to accept connections and then restart all other pods:

```
kubectl delete pod --selector="app!=sql-data"
```

**Note:** Pods are deleted to ensure the databases are recreated, as the identity API stores its client names and URLs in the database.

Now, you can access the MVC app using `http://10.0.75.1/webmvc`. All other services (like the SPA) must be accessed using `http://localhost`.

# Superseded

This wiki page has been superseded by:

- [Azure Kubernetes Service (AKS)](Deploy-to-Azure-Kubernetes-Service-(AKS))

# Azure Dev Spaces Support

Please [go to the official Dev Spaces docs](https://docs.microsoft.com/en-us/azure/dev-spaces/). You should be familiar with:

* Enabling Dev Spaces on a cluster
* Creating a dev space
* Creating a child dev space
* Deploying to a dev space

## Enabling Dev Spaces

You need an AKS cluster created in a supported Dev Spaces region. Then just type:

```
az aks use-dev-spaces -g your-aks-devspaces-resgrp -n YourAksDevSpacesCluster
```

Note: This command will install the _Azure Dev Spaces CLI_ if it is not installed on your computer.

The tool will ask you to create a dev space. Enter the name of the dev space (e.g. `dev`) and make it a root dev space by selecting _none_ when prompted for its parent dev space:

![image](...)

Once the Dev Spaces tooling is added, type `azds --version` to get its version.
The tested Dev Spaces tooling version was:

```
Azure Dev Spaces CLI (Preview)
0.1.20190320.5
API v2.17
```

Future versions should work, unless they introduce _breaking changes_.

## Prepare the environment for Dev Spaces

From a PowerShell console, go to the `/src` folder and run `prepare-devspaces.ps1` (no parameters needed). This script will copy the `inf.yaml` and `app.yaml` files from `/k8s/helm` to all project folders. This is needed due to a limitation of the Dev Spaces tooling used. Note that the copied files are added to `.gitignore`.

Remember that `inf.yaml` and `app.yaml` contain the parameters needed for the Helm charts to run.

![image](...)

## Deploy to a dev space

Dev Spaces deployment is done using the **same Helm charts used to deploy to a "production" cluster**.

If you want to deploy a project to a specific dev space, just go to its source folder (where the `.csproj` is) and type `azds up`. This will deploy the project to the current dev space. You can use the `-v` modifier for more verbosity and the `-d` modifier to detach the terminal from the application. If `-d` is not used, the `azds up` command stays attached to the running service and you are able to see its logs.

The command `azds up` will:

1. Sync files with the dev space builder container
2. Deploy the Helm chart
3. Build the service container
4. Attach the current console to the container output (if `-d` is not passed)

**Note**: You should deploy **all** Dev Spaces-enabled projects (one by one) in the parent dev space.

The command `azds list-up` will show which APIs are deployed in the dev space. The command `azds list-uris` will show the ingress URLs:

![image](...)

## Deploy to a child dev space

Once everything is deployed to the root dev space, use `azds space select` to create a child dev space.

![image](...)

Then deploy the desired service to this child dev space (using `azds up` again). Use `azds list-up` to verify that the service is deployed in the child dev space. The image shows the _WebMVC_ deployed in the child dev space _alice_:

![image](...)

The `azds list-uris` command will show you the new ingress URL for the child dev space:

![image](...)

If you use the child URL (starting with `alice.s.`), the Web MVC that runs will be the one deployed in the child dev space. This web app will use all services deployed in the child dev space and, if a service is not found there, will fall back to the one deployed in the parent dev space.

If you use the parent dev space URL, the Web MVC that runs will be the one deployed in the parent dev space, using only the services deployed in the parent dev space.

Usually you deploy everything in the parent dev space, and then create one child dev space per developer. Each developer deploys only the service they are updating in their own dev space.

Please refer to the [Dev Spaces documentation](https://docs.microsoft.com/en-us/azure/dev-spaces/) for more info.

**Note**: The _Web SPA_ is not enabled for Dev Spaces (so you can't deploy the SPA to a dev space). Use the Web MVC for testing.

This wiki page has been superseded by:

- [Azure Dev Spaces](Azure-Dev-Spaces)

# Deploying eShopOnContainers to Azure Service Fabric

Service Fabric supports both Linux and Windows containers, so this guidance is split in two, depending on whether you want to deploy eShopOnContainers as Linux or Windows containers, because the SF cluster has to be created for one environment or the other.

# Deprecated

## Deploying eShopOnContainers as Windows containers to Azure Service Fabric

>**Note**: This deployment procedure uses the public eShopOnContainers images published on [DockerHub](https://cloud.docker.com/u/eshop/)

### Creating a secured Azure Service Fabric cluster based on Windows nodes/VMs

In order to deploy eShopOnContainers as Windows containers, an Azure SF environment must first be set up. An ARM template to do that job is available at the following link: [SF Win ARM deployment](https://github.com/dotnet-architecture/eShopOnContainers/tree/dev/deploy/az/servicefabric/WindowsContainers). Follow the steps in [create SF](https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/deploy/az/servicefabric/WindowsContainers/readme.md).
The ARM script will generate all the Azure resources necessary to publish eShopOnContainers as Windows containers.

Once the SF resources have been successfully created, the next step is to publish the SF projects. These projects are under the [SF directory](https://github.com/dotnet-architecture/eShopOnContainers/tree/dev/ServiceFabric/Windows) and contain all the XML config files needed to configure and publish eShopOnContainers. The solution file is `sfwin.sln`:

<img src="https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/img/sf/sf-directory.PNG">

Before deploying, replace **ALL** the external URLs in the cloud.xml config file of each SF app that reference the SF DNS name with the DNS name of your SF cluster. That is, replace the token **#{your_sf_dns}#** with your cluster's DNS name.
Image tags in the servicemanifest.xml files need to be changed as well, by replacing the token **#{tag}#** with an existing tag (e.g. eshop/ordering.api:**latest**).

Example:

<img src="https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/img/sf/cloud-config.PNG">

<img src="https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/img/sf/cloud-config-idsrv.PNG">
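
The token replacement described above can also be scripted. This is a sketch run against a scratch file with two invented sample lines (the `#{your_sf_dns}#` and `#{tag}#` tokens come from this page; the file contents and values are placeholders, not the repo's actual config):

```shell
# Placeholder values - substitute your cluster's DNS name and image tag.
SF_DNS='mycluster.westus.cloudapp.azure.com'
TAG='latest'
cfg="$(mktemp)"
# Hypothetical lines standing in for cloud.xml / servicemanifest.xml entries.
printf '%s\n' 'IdentityUrl=http://#{your_sf_dns}#:5105' 'ImageName=eshop/ordering.api:#{tag}#' > "$cfg"
# Replace both tokens in place.
sed -i -e "s/#{your_sf_dns}#/${SF_DNS}/g" -e "s/#{tag}#/${TAG}/g" "$cfg"
cat "$cfg"
```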
|
||||
|
||||
Additionally, to enable AppInsights Azure in SF, follow steps described bellow:
|
||||
- Create Azure AppInsights
|
||||
<img src="https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/img/appinsights/create-insights.PNG">
|
||||
|
||||
- Enable Multi-role application map
|
||||
<img src="https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/img/appinsights/settings-insights.PNG">
|
||||
|
||||
- Retrieve the Instrumentation Key from your AppInsights service properties and set it in the cloud.xml config file of each SF app.
|
||||
<img src="https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/img/sf/set-instrumentationkey.PNG">
|
||||
|
||||
To deploy the SF apps:

- Open eShopOnContainers-ServicesAndWebApps.sln with VS 2017 (the Service Fabric SDK must be installed).
- Add the existing SF projects to the solution so they can be published.
- Right-click each SF project and select Publish. First, publish the [SF infrastructure services](https://github.com/dotnet-architecture/eShopOnContainers/tree/dev/ServiceFabric/Windows/Infrastructure) and, once they are deployed, repeat the process for the rest of the apps.

<img src="https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/img/sf/publish-button.PNG">

A new window opens, allowing you to select the SF cluster you previously created.

![image](https://user-images.githubusercontent.com/31349815/32475998-4dc870a8-c377-11e7-8c50-dc8a768cbb eaa.png)

Open the SF Explorer page to check the deployment and health check status.

<img src="https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/img/sf/explorer-apps-status.PNG">

<img src="https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/img/sf/explorer-deployment-status.PNG">
## Deploying eShopOnContainers as Linux containers to Azure Service Fabric

### Creating a secured Azure Service Fabric cluster based on Linux nodes/VMs

...
TBD
...

In order to deploy eShopOnContainers as Linux containers, an Azure SF environment must first be set up. An ARM template to do that job is available at [SF Linux ARM deployment](https://github.com/dotnet-architecture/eShopOnContainers/tree/dev/deploy/az/servicefabric/LinuxContainers). Follow the steps in [create SF](https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/deploy/az/servicefabric/LinuxContainers/readme.md).

The ARM script generates all the Azure resources needed to publish eShopOnContainers as Linux containers.
Once the SF resources have been successfully created, the next step is to publish the SF projects. These projects are under the [SF directory](https://github.com/dotnet-architecture/eShopOnContainers/tree/dev/ServiceFabric/Linux) and contain all the XML config files needed to configure and publish eShopOnContainers. It is composed of the following SF projects:

<img src="https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/img/sf/sf-directory.PNG">

- eShopOnServiceFabric: contains all the API services consumed by eShop.
- eShopOnServiceFabricIdSrv: contains the Identity Server for authentication.
- eShopOnServiceFabricWebMVC: contains the MVC web app.
- eShopOnServiceFabricWebSPA: contains the SPA web app.
- eShopOnServiceFabricWebStatus: contains the web app for service health checking.
- eShopOnServiceFabricBus: contains a bus service (RabbitMQ).
- eShopOnServiceFabricNoSql: contains a NoSQL service (MongoDB).
- eShopOnServiceFabricRedis: contains a cache service (Redis).
- eShopOnServiceFabricSql: contains a SQL service (MS SQL Server).
Before deploying, replace **ALL** the external URLs in the cloud.xml config file of each SF app that reference the SF DNS name with the DNS name of your SF cluster; that is, replace the token **#{your_sf_dns}#** with your cluster DNS name.

The image tags in the servicemanifest.xml files need to be changed as well, replacing the token **#{tag}#** with an existing tag (e.g. eshop/ordering.api:**latest**).

Example:

<img src="https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/img/sf/cloud-config.PNG">

<img src="https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/img/sf/cloud-config-idsrv.PNG">

<img src="https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/img/sf/cloud-config-spa.PNG">

<img src="https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/img/sf/cloud-config-mvc.PNG">
Additionally, to enable Azure Application Insights in SF, follow the steps described below:

- Create Azure AppInsights

<img src="https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/img/appinsights/create-insights.PNG">

- Enable Multi-role application map

<img src="https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/img/appinsights/settings-insights.PNG">

- Retrieve the Instrumentation Key from your AppInsights service properties and set it in the cloud.xml config file of each SF app.

<img src="https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/img/sf/set-instrumentationkey.PNG">
To deploy the SF apps:

- Open eShopOnContainers-ServicesAndWebApps.sln with VS 2017 (the Service Fabric SDK must be installed).
- Add the existing SF projects to the solution so they can be published.
- Right-click each SF project and select Publish. First, publish the [SF infrastructure services](https://github.com/dotnet-architecture/eShopOnContainers/tree/dev/ServiceFabric/Linux/Infrastructure) and, once they are deployed, repeat the process for the rest of the apps.

<img src="https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/img/sf/publish-button.PNG">

A new window opens, allowing you to select the SF cluster you previously created.

![image](https://user-images.githubusercontent.com/31349815/32475998-4dc870a8-c377-11e7-8c50-dc8a768cbbeaa.png)

Open the SF Explorer page to check the deployment and health check status.

<img src="https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/img/sf/explorer-apps-status.PNG">

<img src="https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/img/sf/explorer-deployment-status.PNG">
This wiki page has been deprecated.
@ -1,6 +1,5 @@
_Temporary page, to be defined in better detail._

The current setup procedures for "Deploying the infrastructure resources into Azure (DB, Cache, Service Bus, etc.)" are here:

# Superseded

- Deploying Resources On Azure
https://github.com/dotnet-architecture/eShopOnContainers/blob/master/deploy/readme.md

This wiki page has been superseded by:

- [Using Azure resources](Using-Azure-resources)
@ -1,82 +1,5 @@
The ASP.NET Core 2.2 HealthChecks package is used in all APIs and applications of eShopOnContainers.

# Superseded

All applications and APIs expose two endpoints (`/liveness` and `/hc`) to check the current application and all of its dependencies. The `/liveness` endpoint is intended to be used as a liveness probe in Kubernetes, and `/hc` is intended to be used as a readiness probe in Kubernetes.

This wiki page has been superseded by:

## Implementing health checks in ASP.NET Core services

Here is the documentation on how to implement health checks in ASP.NET Core 2.2:

https://docs.microsoft.com/en-us/dotnet/standard/microservices-architecture/implement-resilient-applications/monitor-app-health

Also, there's a **nice blog post** on HealthChecks by @scottsauber:
https://scottsauber.com/2017/05/22/using-the-microsoft-aspnetcore-healthchecks-package/

## Implementation in eShopOnContainers

The readiness endpoint (`/hc`) checks all the dependencies of the API. Let's take the MVC client as an example. This client depends on:

* Web purchasing BFF
* Web marketing BFF
* Identity API

So the following code is added to `ConfigureServices` in `Startup`:

```cs
services.AddHealthChecks()
    .AddCheck("self", () => HealthCheckResult.Healthy())
    .AddUrlGroup(new Uri(configuration["PurchaseUrlHC"]), name: "purchaseapigw-check", tags: new string[] { "purchaseapigw" })
    .AddUrlGroup(new Uri(configuration["MarketingUrlHC"]), name: "marketingapigw-check", tags: new string[] { "marketingapigw" })
    .AddUrlGroup(new Uri(configuration["IdentityUrlHC"]), name: "identityapi-check", tags: new string[] { "identityapi" });

return services;
```

Four checkers are added: one named "self" that always returns OK, and three that check the dependent services. The next step is to add the two endpoints (`/liveness` and `/hc`). Note that `/liveness` must always return HTTP 200: if the liveness endpoint can be reached, the MVC web app is in a healthy state, although it may not be usable if some dependent service is unhealthy.

```cs
app.UseHealthChecks("/liveness", new HealthCheckOptions
{
    Predicate = r => r.Name.Contains("self")
});
```

The predicate defines which checkers are executed. In this case, for the `/liveness` endpoint we only want to run the checker named "self" (the one that always returns OK).

The next step is to define the `/hc` endpoint:

```cs
app.UseHealthChecks("/hc", new HealthCheckOptions()
{
    Predicate = _ => true,
    ResponseWriter = UIResponseWriter.WriteHealthCheckUIResponse
});
```

In this case we want to run **all defined checkers**, so the predicate always returns true to select all of them.
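The predicate-based selection of checks can be illustrated with a small sketch (plain Python here, not ASP.NET Core); the check names come from the registration code above, while their results are made up for the example:

```python
# Toy model of predicate-filtered health checks: each registered check
# has a name, and an endpoint runs only the checks its predicate selects.
def run_checks(checks, predicate):
    """Run only the checks whose name the predicate accepts."""
    return {name: fn() for name, fn in checks.items() if predicate(name)}

# Hypothetical results; a real check would probe a URL or resource.
checks = {
    "self": lambda: "Healthy",
    "purchaseapigw-check": lambda: "Healthy",
    "identityapi-check": lambda: "Unhealthy",
}

liveness = run_checks(checks, lambda name: "self" in name)  # like /liveness
readiness = run_checks(checks, lambda name: True)           # like /hc
print(sorted(liveness), len(readiness))  # → ['self'] 3
```

The liveness result ignores dependency failures entirely, which is exactly why `/liveness` can always return 200 while `/hc` reports the aggregate state.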
## Configuring probes for Kubernetes using health checks

The Helm charts already configure the needed Kubernetes probes using the health checks, but you can override the provided configuration by **editing the file `/k8s/helm/<chart-folder>/values.yaml`**. You'll see code like this:

```yaml
probes:
  liveness:
    path: /liveness
    initialDelaySeconds: 10
    periodSeconds: 15
    port: 80
  readiness:
    path: /hc
    timeoutSeconds: 5
    initialDelaySeconds: 90
    periodSeconds: 60
    port: 80
```

You can remove a probe if you want, or update its configuration. The default configuration is the same for all charts:

* 10 seconds before k8s starts to test the liveness probe
* 1 sec timeout for the liveness probe (**not configurable**)
* 15 sec between liveness probe calls
* 90 seconds before k8s starts to test the readiness probe
* 5 sec timeout for the readiness probe
* 60 sec between readiness probe calls

- [Using HealthChecks](Using-HealthChecks)
@ -1,92 +1,5 @@
## eShopOnContainers Application Insights

# Superseded

Follow the steps below to configure App Insights as a logging service. Instructions are included for setting up App Insights whether you log eShopOnContainers locally or in a cluster environment with Kubernetes or Service Fabric.

This wiki page has been superseded by:

### Add App Insights to an ASP.NET Core app

Install the following NuGet packages:
* Microsoft.ApplicationInsights.AspNetCore
* Microsoft.ApplicationInsights.DependencyCollector

If you deploy your app in a cluster environment, install these additional packages:
* Microsoft.ApplicationInsights.Kubernetes (for a Kubernetes environment)
* Microsoft.ApplicationInsights.ServiceFabric (for a Service Fabric environment)

Include the **UseApplicationInsights** extension method in your WebHostBuilder, located in Program.cs:
<img src="https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/img/appinsights/useappinsights-program.PNG">

Register the App Insights service by including the **AddApplicationInsightsTelemetry** extension method in the ConfigureServices method of Startup.cs. When using K8s or SF, also add the **EnableKubernetes** or **FabricTelemetryInitializer** method, respectively:
<img src="https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/img/appinsights/appinsights-register.PNG">

To enable App Insights in your ILogger, include the **AddAzureWebAppDiagnostics** and **AddApplicationInsights** ILoggerFactory extension methods in the Configure method of Startup.cs:
<img src="https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/img/appinsights/appinsights-loggerfactory.PNG">

### Create an App Insights service

Go to the Azure portal and create the service:
- Create Azure AppInsights
<img src="https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/img/appinsights/create-insights.PNG">

When using App Insights to log eShopOnContainers in a cluster environment, the Multi-role application map must be enabled:
- Enable Multi-role application map
<img src="https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/img/appinsights/settings-insights.PNG">

Retrieve the generated Instrumentation Key to be used later on. Go to Properties in the portal and copy the key.

### Setting up Application Insights locally
Go to the root of the project and open the **.env** file where all the environment variables are set. Uncomment the **INSTRUMENTATION_KEY** variable and set it to the Instrumentation Key of the App Insights service you created earlier:

```
#ESHOP_AZURE_REDIS_BASKET_DB=<YourAzureRedisBasketInfo>
#ESHOP_AZURE_STORAGE_CATALOG_URL=<YourAzureStorage_Catalog_BLOB_URL>
#ESHOP_AZURE_STORAGE_MARKETING_URL=<YourAzureStorage_Marketing__BLOB_URL>
#ESHOP_AZURE_SERVICE_BUS=<YourAzureServiceBusInfo>
#ESHOP_AZURE_COSMOSDB=<YourAzureCosmosDBConnData>
#ESHOP_AZURE_CATALOG_DB=<YourAzureSQLDBCatalogDBConnString>
#ESHOP_AZURE_IDENTITY_DB=<YourAzureSQLDBIdentityDBConnString>
#ESHOP_AZURE_ORDERING_DB=<YourAzureSQLDBOrderingDBConnString>
#ESHOP_AZURE_MARKETING_DB=<YourAzureSQLDBMarketingDBConnString>
#ESHOP_AZUREFUNC_CAMPAIGN_DETAILS_URI=<YourAzureFunctionCampaignDetailsURI>
#ESHOP_AZURE_STORAGE_CATALOG_NAME=<YourAzureStorageCatalogName>
#ESHOP_AZURE_STORAGE_CATALOG_KEY=<YourAzureStorageCatalogKey>
#ESHOP_AZURE_STORAGE_MARKETING_NAME=<YourAzureStorageMarketingName>
#ESHOP_AZURE_STORAGE_MARKETING_KEY=<YourAzureStorageMarketingKey>
#ESHOP_SERVICE_BUS_USERNAME=<ServiceBusUserName-OnlyUsedIfUsingRabbitMQUnderwindows>
#ESHOP_SERVICE_BUS_PASSWORD=<ServiceBusUserPassword-OnlyUsedIfUsingRabbitMQUnderwindows>
INSTRUMENTATION_KEY=
#USE_LOADTEST=<True/False>
```
### Setting up Application Insights in Service Fabric
- Retrieve the Instrumentation Key from your AppInsights service properties and set it in the cloud.xml config file of each SF app.
<img src="https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/img/sf/set-instrumentationkey.PNG">

### Setting up Application Insights in Kubernetes
- Retrieve the Instrumentation Key from your AppInsights service properties. Open the **conf_local.yml** config file located in the '/k8s' directory and set the **Instrumentation_Key** variable to the key.

```
BasketBus: rabbitmq
BasketRedisConStr: basket-data
CatalogBus: rabbitmq
CatalogSqlDb: Server=sql-data;Initial Catalog=Microsoft.eShopOnContainers.Services.CatalogDb;User Id=sa;Password=Pass@word;
CatalogAzureStorageEnabled: "False"
IdentitySqlDb: Server=sql-data;Initial Catalog=Microsoft.eShopOnContainers.Services.IdentityDb;User Id=sa;Password=Pass@word;
LocationsBus: rabbitmq
LocationsNoSqlDb: mongodb://nosql-data
LocationsNoSqlDbName: LocationsDb
MarketingBus: rabbitmq
MarketingNoSqlDb: mongodb://nosql-data
MarketingNoSqlDbName: MarketingDb
MarketingSqlDb: Server=sql-data;Initial Catalog=Microsoft.eShopOnContainers.Services.MarketingDb;User Id=sa;Password=Pass@word;
OrderingBus: rabbitmq
OrderingSqlDb: Server=sql-data;Initial Catalog=Microsoft.eShopOnContainers.Services.OrderingDb;User Id=sa;Password=Pass@word;
PaymentBus: rabbitmq
UseAzureServiceBus: "False"
EnableLoadTest: "False"
keystore: keystore-data
GracePeriodManager_GracePeriodTime: "1"
GracePeriodManager_CheckUpdateTime: "15000"
Instrumentation_Key: ""
```

### Application Insights filters
- Go to the App Insights Search option in the Azure portal and check all the telemetry and traces your app is generating.
- In a cluster environment with multiple containers, each with their own set of instances, note that you can filter and trace logs by service and instance:
<img src="https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/img/appinsights/appinsights-screenshot.PNG">
- [Application Insights](Application-Insights)
@ -1,3 +1,5 @@
For detailed info about "Implementing API Gateways with Ocelot" as implemented in eShopOnContainers, check out the following blog post:

# Superseded

https://blogs.msdn.microsoft.com/cesardelatorre/2018/05/15/designing-and-implementing-api-gateways-with-ocelot-in-a-microservices-and-container-based-architecture/

This wiki page has been superseded by:

- [API gateways](API-gateways)
@ -1,81 +1,5 @@
# Introduction

# Superseded

The management of secrets in our business applications is always an important issue, for which we keep looking for alternatives. The arrival of .NET Core and its *layered* configuration system has given us a lot of flexibility when playing with the different configuration parameters of our applications. Being able to keep them in settings files such as *appsettings.json*, and to replace or extend them through other sources such as environment variables, is something we use very often in this project when creating and composing containers.

This wiki page has been superseded by:

The use of [User Secrets](https://docs.microsoft.com/en-us/aspnet/core/security/app-secrets?view=aspnetcore-2.1&tabs=windows) or of *.env* files simplifies some scenarios, generally for development environments, but it does not help in our release processes. In these release processes, for example in VSTS, we can modify the configuration values by means of release variables, but we still have to know those secrets, and anyone with permission to edit a release can see those secret values.

## Azure Key Vault

A valid option to prevent secret values from being known by different people is the use of a key vault, or private store of secrets. Azure offers such a service, known as [Azure Key Vault](https://azure.microsoft.com/en-us/services/key-vault/), which gives us many of the desired features as a cloud service.

> If you are not familiar with Azure Key Vault, we recommend you review the [product documentation](https://docs.microsoft.com/en-us/azure/key-vault/) to get a quick idea of everything the service offers.

.NET Core offers a configuration provider based on Azure Key Vault, which we can integrate like any other provider and to which we delegate the management of secrets. Below you can see the code fragment that configures Azure Key Vault in one of the eShopOnContainers services.

```csharp
public static IWebHost BuildWebHost(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseStartup<Startup>()
        .UseHealthChecks("/hc")
        .UseContentRoot(Directory.GetCurrentDirectory())
        .ConfigureAppConfiguration((builderContext, config) =>
        {
            config.AddJsonFile("settings.json");

            var builtConfig = config.Build();

            var configurationBuilder = new ConfigurationBuilder();

            if (Convert.ToBoolean(builtConfig["UseVault"]))
            {
                configurationBuilder.AddAzureKeyVault(
                    $"https://{builtConfig["Vault:Name"]}.vault.azure.net/",
                    builtConfig["Vault:ClientId"],
                    builtConfig["Vault:ClientSecret"]);
            }

            configurationBuilder.AddEnvironmentVariables();

            config.AddConfiguration(configurationBuilder.Build());
        })
        .ConfigureLogging((hostingContext, builder) =>
        {
            builder.AddConfiguration(hostingContext.Configuration.GetSection("Logging"));
            builder.AddConsole();
            builder.AddDebug();
        })
        .UseApplicationInsights()
        .Build();
```

As you can see, the **AddAzureKeyVault** method is responsible for adding the configuration provider. Therefore, each time someone requests a configuration key, it will be searched for, in the appropriate provider order, within the Azure Key Vault service.

> By default, keys with the `--` separator are transformed to `:`. If you want to change this behavior, **AddAzureKeyVault** supports specifying a custom implementation of IKeyVaultSecretManager instead of the default DefaultKeyVaultSecretManager.
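The provider layering and the `--` to `:` key transform can be illustrated with a rough sketch (plain Python standing in for the .NET configuration system; the secret names and values below are invented for the example):

```python
# Toy model of layered configuration: ChainMap plays the provider chain,
# a plain dict plays the Key Vault. Later-added providers win lookups.
from collections import ChainMap

def from_key_vault(secrets: dict) -> dict:
    """Key Vault secret names cannot contain ':', so '--' is used in
    the secret name and mapped back to the ':' section separator."""
    return {name.replace("--", ":"): value for name, value in secrets.items()}

# Hypothetical secrets and settings, for illustration only.
vault = from_key_vault({"ConnectionStrings--OrderingDb": "Server=vault-server"})
json_settings = {
    "ConnectionStrings:OrderingDb": "Server=localhost",
    "Serilog:MinimumLevel": "Information",
}

# The vault layer takes precedence over the JSON-file layer here.
config = ChainMap(vault, json_settings)
print(config["ConnectionStrings:OrderingDb"])
print(config["Serilog:MinimumLevel"])
```

Keys found in the vault shadow the file-based values, while keys absent from the vault fall through to the next provider, which is the behavior the paragraph above describes.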
## Azure MSI

If we look closely, we are actually facing a situation that could be described as *the snake biting its own tail*: we no longer need to keep secrets in our applications, since they are in Azure Key Vault, but we still need to keep our Key Vault credentials somewhere, and everyone who knows them can access the service and obtain the information it contains.

One way to overcome this problem in Azure is to use [Managed Service Identity](https://docs.microsoft.com/en-us/azure/active-directory/managed-service-identity/overview) (MSI), which is relatively simple to operate. Basically, Azure MSI allows resources created in Azure (for the moment the list of supported resources is limited to App Services, VMs, Azure Key Vault and Azure Functions) to have an identity, an SPN, within the Active Directory associated with the subscription. Once this SPN is created, it can be used to grant access permissions to other resources.

> If you use ARM templates, establishing MSI support for a resource is as easy as setting the *identity* attribute to *systemAssigned*.

Once we have a service, say a WebApp, with MSI enabled, and we have established its permissions in the access policies of our Azure Key Vault, we can connect to Key Vault without using a username or password. Below is an example of how the **AddAzureKeyVault** call changes when using MSI, with respect to the previous form.

```csharp
public static IWebHost BuildWebHost(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseStartup<Startup>()
        .ConfigureAppConfiguration((context, builder) =>
        {
            var azureServiceTokenProvider = new AzureServiceTokenProvider();

            var keyVaultClient = new KeyVaultClient(
                new KeyVaultClient.AuthenticationCallback(
                    azureServiceTokenProvider.KeyVaultTokenCallback));

            builder.AddAzureKeyVault("https://[mykeyvault].vault.azure.net/",
                keyVaultClient,
                new DefaultKeyVaultSecretManager());
        }).Build();
```

- [Azure Key Vault](Azure-Key-Vault)
@ -1,48 +1,5 @@
# Using Webhooks in eShopOnContainers

# Superseded

eShopOnContainers supports using _webhooks_ to notify external services about events that happen inside eShopOnContainers. A new API and a webhooks demo client were developed.

This wiki page has been superseded by:

## Webhooks API

The Webhooks API is exposed directly (not through any BFF) because its usage is not tied to any particular client. The API offers endpoints to register and view the current webhooks. The API is authenticated, so you can only register a new webhook when authenticated against Identity.API, and when you list the webhooks you only see the ones registered by you.

### Registering a webhook

Registering a webhook is a process that involves two parties: the Webhooks API and the webhooks client (outside eShopOnContainers). To avoid registering URLs that aren't under your control, a basic security mechanism (known as URL granting) is used when registering webhooks:

* When registering the webhook (using the Webhooks API under an authenticated account), you must pass a token (any string value, up to you) and a "grant URL".
* The Webhooks API calls the grant URL using HTTP `OPTIONS`, passing your token in the `X-eshop-whtoken` header.
* The Webhooks API expects to receive an HTTP success status code **and** the same token in the same `X-eshop-whtoken` header of the response.

If the token is not sent back in the response, or the HTTP status code is not successful, the Webhooks API returns HTTP status code 418 (because trying to register a URL owned by someone else is almost the same as making coffee in a teapot ;-)).
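The URL-granting handshake can be sketched end to end as follows. This is a hypothetical Python stand-in for both parties: the `OPTIONS` flow and the header name follow the text above, while the port, paths and helper names are invented for the example:

```python
# Sketch of the grant handshake: a tiny server plays the external
# client's grant URL, and grant_check plays the Webhooks API's probe.
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

TOKEN_HEADER = "X-eshop-whtoken"

class GrantHandler(BaseHTTPRequestHandler):
    """Plays the external webhook client's grant URL."""
    def do_OPTIONS(self):
        # Echo the token back, proving ownership of this URL.
        token = self.headers.get(TOKEN_HEADER, "")
        self.send_response(200)
        self.send_header(TOKEN_HEADER, token)
        self.end_headers()
    def log_message(self, *args):
        pass  # silence request logging

def grant_check(grant_url: str, token: str) -> int:
    """What the Webhooks API does before registering a hook: call the
    grant URL with OPTIONS and expect the same token echoed back.
    Returns the status the registration attempt would get."""
    req = Request(grant_url, method="OPTIONS", headers={TOKEN_HEADER: token})
    with urlopen(req) as resp:
        granted = resp.status < 300 and resp.headers.get(TOKEN_HEADER) == token
    return 200 if granted else 418

server = HTTPServer(("127.0.0.1", 0), GrantHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
grant_url = f"http://127.0.0.1:{server.server_address[1]}/grant"
status = grant_check(grant_url, "my-secret-token")
server.shutdown()
print(status)
```

A client that fails to echo the token (or returns an error status) would get the 418 described above instead of a successful registration.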
For security reasons, the grant URL and the URL for the webhook MUST belong to the same host.

When eShopOnContainers sends the webhook, this token is also sent (in the same header), giving the client the choice of whether to process the hook.

## Webhooks client

The webhooks client is a basic web app (developed with Razor Pages) that allows you to test the eShopOnContainers webhooks system. It allows you to register the "OrderPaid" webhook.

The client is exposed directly (like all other clients). In k8s, the ingress path is `/webhooks-web`.

Here are the configuration values of this demo client (with the values used in the default compose file):

```yaml
- ASPNETCORE_URLS=http://0.0.0.0:80
- Token=6168DB8D-DC58-4094-AF24-483278923590 # Webhooks are registered with this token
- IdentityUrl=http://10.0.75.1:5105
- CallBackUrl=http://localhost:5114
- WebhooksUrl=http://webhooks.api
- SelfUrl=http://webhooks.client/
```

- `Token`: The client always sends this token when the Webhooks API asks for URL granting. The client also expects this token in the webhooks sent by eShopOnContainers.
- `IdentityUrl`: URL of the Identity API.
- `CallBackUrl`: Callback URL for the Identity API.
- `WebhooksUrl`: URL of the Webhooks API (using internal container networking).
- `SelfUrl`: URL where the demo client can be reached from the Webhooks API. In k8s deployments the ingress-based URL is used; in compose, the internal URL has to be used.

There is an additional configuration value named `ValidateToken`. If set to `true` (it defaults to `false`), the webhook demo client ensures that the webhook sent by eShopOnContainers carries the same token as the `Token` configuration value. If set to `false`, the client accepts any hook, regardless of its token header value.

>**Note**: Regardless of the value of the `ValidateToken` configuration entry, the client **always sends back the value of the `Token` entry when granting the URL**.

- [Webhooks](Webhooks)
@ -1,250 +1,5 @@
This article contains a brief introduction to centralized structured logging with [Serilog](https://serilog.net/) and event viewing with [Seq](https://getseq.net/) in eShopOnContainers.
|
||||
# Superseded
|
||||
|
||||
Serilog is an [open source project in GitHub](https://github.com/serilog/serilog) and even though Seq is not, it's possible to [use it for free in development and small projects](https://getseq.net/Pricing), so it fits nicely for eShopOnContainers.
|
||||
This wiki page has been superseded by:
|
||||
|
||||
This article begins with a few sample use cases for logging, that also showcase the internals of some of the most interesting DDD patterns, that are not obvious by simply using the application.
|
||||
|
||||
Then we cover the most important tips for using structured logging in C# and conclude with some details on the setup of the logging system.
|
||||
|
||||
## Logging samples in eShopOnContainers
|
||||
|
||||
These are just a few samples of what you can get when you combine proper structured logging with filtering by some convenient properties, as seen from **Seq**.
|
||||
|
||||
The filter expression is highlighted on the top of each image.
|
||||
|
||||
### Application startup
|
||||
|
||||
Get the details of application startup:
|
||||
|
||||

|
||||
|
||||
Filtering by `ApplicationContext` shows all events from the application, in this sample we just added a `DateTime` limit to show only the initial traces.
|
||||
|
||||
The "level" of the events shown, such as `Debug`, `Information`, `Warning`, can be configured as explained in the [setup and configuration section](#setup-and-configuration).
|
||||
|
||||
### Closing in on a specific type of trace
|
||||
|
||||
You can focus on a specific type of trace by filtering by "event template" (for a specific `ApplicationContext` here):
|
||||
|
||||

|
||||
|
||||
You can also show the same event template or "type" for all applications:
|
||||
|
||||

|
||||
|
||||
### Integration event handling
|
||||
|
||||
Filtering by `IntegrationEventId` and `IntegrationEventContext` shows the publishing (1) and handling (2) of the `UserCheckoutAcceptedIntegrationEvent`. This handling begins a transaction (3), creates an order (4), commits the transaction (5) and publishes the events `OrderStartedIntegrationEvent` (6) and `OrderStatusChangedToSubmittedIntegrationEvent` (7).
|
||||
|
||||
Worth noting here is that integration events are queued while in the scope of the transaction, and then published after it finishes:
|
||||
|
||||

|
||||
|
||||
### Tracing an integration event from publishing to handling in other microservices
|
||||
|
||||
A filter similar to the previous one, but showing the logging event details, with an `OrderStatusChangedToStockConfirmedIntegrationEvent` published in `Ordering.API` (1) and handled in `Ordering.SignalrHub` (2) and in `Payment.API` (3). Notice that while still handling the event in `Payment.API`, a new `OrderPaymentSuccededIntegrationEvent` (4) is published:
|
||||
|
||||

|
||||
|
||||
### Viewing the log event details
|
||||
|
||||
If you use [Firefox Developer Edition](https://www.mozilla.org/firefox/developer/) or your browser has a JSON files viewer, you can get the raw JSON event:
|
||||
|
||||

|
||||
|
||||
And view or navigate/expand/colapse all event details much more easily:
|
||||
|
||||

|
||||
|
||||
## Using structured logging
|
||||
|
||||
This section explores the code-related aspects of logging, beginning with the "structured logging" concept that makes it possible to get the samples show above.
|
||||
|
||||
In a few word, **structured logging** can be thought of as a stream of key-value pairs for every event logged, instead of just the plain text line of conventional logging.
|
||||
|
||||
The key-value pairs are then the base to query the events, as was shown in the samples above.
|
||||
|
||||
### Getting the logger
|
||||
|
||||
The logging infrastructure of .NET supports structured logging when used with a `LoggerFactory`, such as **Serilog**, that supports it, and the simplest way to use is by requesting an `ILogger<T>` through Dependency Injection (DI) in the class constructor as shown here:
|
||||
|
||||
```cs
|
||||
public class WorkerClass
|
||||
{
|
||||
private readonly ILogger<WorkerClass> _logger;
|
||||
|
||||
public WorkerClass(ILogger<WorkerClass> logger) => _logger = logger;
|
||||
|
||||
// If you have to use ILoggerFactory, change the constructor like this:
|
||||
public WorkerClass(ILoggerFactory loggerFactory) => _logger = loggerFactory.CreateLogger<WorkerClass>();
|
||||
}
|
||||
```
|
||||
|
||||
The nice part of using `ILogger<T>` is that you get a `SourceContext` property, as shown here:

![image](images/Serilog-and-Seq/source-context-property.png)

### Logging events

Logging events is pretty simple, as shown in the following code, which produces the trace shown in the image above:
```cs
_logger.LogInformation("----- Publishing integration event: {IntegrationEventId} from {AppName} - ({@IntegrationEvent})", pubEvent.EventId, Program.AppName, pubEvent.IntegrationEvent);
```
The code above is similar to what you've seen with the `string.Format()` method, with three very important differences:

1. The first string defines a **type of event** or **template** property that can also be queried, along with any other of the event properties.

2. Every name in curly braces in the **template** defines a **property** that gets its value from a parameter after the template, just as in `string.Format()`.

3. If a property name begins with `@`, then the whole object graph is stored in the event log (some limits apply / can be configured).

#### Important logging rules

There are just a few simple rules to get the most from structured logging:

1. NEVER use string interpolation with variables as the template.

   If you use interpolation, the "template" loses its meaning as an event type, you also lose the key-value pairs, and the trace becomes a plain old simple text trace.

2. Log exceptions with the proper overload, as shown in the following code fragments:
```cs
catch (Exception ex)
{
    _logger.LogWarning(ex, "Could not publish event: {EventId} after {Timeout}s ({ExceptionMessage})", @event.Id, $"{time.TotalSeconds:n1}", ex.Message);
}

.../...

catch (Exception ex)
{
    _logger.LogError(ex, "Program terminated unexpectedly ({Application})!", AppName);
    return 1;
}
```
Don't log only the exception message, as that would effectively violate rule #1.
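To make rule #1 concrete, here is a minimal sketch (the event and variable names are illustrative, not taken from the eShopOnContainers code) contrasting a proper message template with string interpolation:

```cs
// GOOD: the template "Publishing integration event {IntegrationEventId}" is stored
// as the event type, and IntegrationEventId is stored as a queryable property.
_logger.LogInformation("Publishing integration event {IntegrationEventId}", eventId);

// BAD: interpolation produces a different "template" for every event id, so there
// is no common event type to filter on and no key-value pair is kept.
_logger.LogInformation($"Publishing integration event {eventId}");
```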
### Logging contexts and correlation IDs

A logging context allows you to define a scope, so you can trace and correlate a set of events, even across the boundaries of the applications involved. The use of different types of contexts was shown in the [logging samples section](#logging-samples-in-eshoponcontainers) above.

Correlation IDs are a means to establish a link between two or more contexts or applications, but they can get difficult to trace. At some point it might be better to handle contexts that cover business concepts or entities, such as an **OrderContext** that can be easily identified across different applications, even when using different technologies.

These are some of the context properties used in eShopOnContainers:

- **ApplicationContext** - Is defined on application startup and adds the `ApplicationContext` property to all events.

- **SourceContext** - Identifies the full name of the class where the event is logged; it's usually defined when creating or injecting the logger.

- **RequestId** - Is a typical context that covers all events while serving a request. It's defined by the ASP.NET Core request pipeline.

- **Transaction context** - Covers the events from the beginning of the database transaction up to its commit.

- **IntegrationEventContext** - Identifies all events that occur while handling an integration event in an application.
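As a sketch of how such a context can be defined in code with Serilog (the property name and event variable are illustrative), `LogContext.PushProperty` attaches a property to every event logged inside the `using` block:

```cs
using Serilog.Context;

// Every event logged inside this block carries the IntegrationEventContext
// property, so all of them can be filtered together in Seq.
using (LogContext.PushProperty("IntegrationEventContext", @event.Id))
{
    _logger.LogInformation("Handling integration event {IntegrationEventId}", @event.Id);
    // ... handler logic ...
}
```

Note that this only works if the logger was configured with `.Enrich.FromLogContext()`.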
## Setup and configuration

### Serilog

The logging setup used in eShopOnContainers is somewhat different from the usual samples in ASP.NET Core and is taken mostly from <https://github.com/serilog/serilog-aspnetcore>. The main reason is to have logging services available as soon as possible during application startup.

These are the packages typically used to enable Serilog in the applications:

- Serilog.AspNetCore
- Serilog.Enrichers.Environment
- Serilog.Settings.Configuration
- Serilog.Sinks.Console
- Serilog.Sinks.Seq

Logger configuration is done in `Program.cs` as shown here:
```cs
private static Serilog.ILogger CreateSerilogLogger(IConfiguration configuration)
{
    var seqServerUrl = configuration["Serilog:SeqServerUrl"];

    return new LoggerConfiguration()
        .MinimumLevel.Verbose()
        .Enrich.WithProperty("ApplicationContext", AppName)
        .Enrich.FromLogContext()
        .WriteTo.Console()
        .WriteTo.Seq(string.IsNullOrWhiteSpace(seqServerUrl) ? "http://seq" : seqServerUrl)
        .ReadFrom.Configuration(configuration)
        .CreateLogger();
}
```
The following aspects can be highlighted from the code above:

- `.Enrich.WithProperty("ApplicationContext", AppName)` defines the `ApplicationContext` for all traces in the application.
- `.Enrich.FromLogContext()` allows you to define a log context anywhere you need it.
- `.ReadFrom.Configuration(configuration)` allows you to override the configuration with values from `appsettings.json` or environment variables, which becomes very handy for containers.

The next JSON fragment shows the typical default configuration in the `appsettings.json` of the eShopOnContainers microservices:
```json
"Serilog": {
  "SeqServerUrl": null,
  "MinimumLevel": {
    "Default": "Information",
    "Override": {
      "Microsoft": "Warning",
      "Microsoft.eShopOnContainers": "Information",
      "System": "Warning"
    }
  }
},
```
The previous JSON fragment shows how to configure the minimum level for traces according to the namespace of the `SourceContext`: the default is **Information**, the `Microsoft.*` and `System.*` namespaces are restricted to **Warning**, except for `Microsoft.eShopOnContainers.*`, which is logged at **Information** again.
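Because `.ReadFrom.Configuration(configuration)` uses the standard .NET configuration providers, any of these settings can also be overridden with environment variables, which is handy for containers. As an illustrative docker-compose fragment (the service name and values are assumptions, not taken from the repo), using the `__` separator that ASP.NET Core maps to the `:` configuration delimiter:

```yml
  ordering-api:
    environment:
      - Serilog__MinimumLevel__Default=Debug
      - Serilog__SeqServerUrl=http://seq
```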
### Seq

Seq is added as another container in the `docker-compose` files, as shown here:
```yml
# In docker-compose.yml
services:
  seq:
    image: datalust/seq:latest

# In docker-compose.override.yml
  seq:
    environment:
      - ACCEPT_EULA=Y
    ports:
      - "5340:80"
```
With the above configuration, **Seq** will be available at `http://10.0.75.1:5340` or `http://localhost:5340`.

**Important configuration note**

To limit the amount of disk space used by the event store, it's recommended that you create a retention policy of **one day**, with the option: **Settings > RETENTION > ADD POLICY > Delete all events after 1 day**.
## Additional resources

- **Logging in ASP.NET Core** \
  <https://docs.microsoft.com/aspnet/core/fundamentals/logging/>

- **Serilog — simple .NET logging with fully-structured events** \
  <https://serilog.net/>

- **Seq — structured logs for .NET apps** \
  <https://getseq.net/>

- **Structured logging concepts in .NET Series (1)** \
  <https://nblumhardt.com/2016/06/structured-logging-concepts-in-net-series-1/>

- **Events and levels - structured logging concepts in .NET (2)** \
  <https://nblumhardt.com/2016/06/events-and-levels-structured-logging-concepts-in-net-2/>

- **Smart Logging Middleware for ASP.NET Core** \
  <https://blog.getseq.net/smart-logging-middleware-for-asp-net-core/>

- **Tagging log events for effective correlation** \
  <https://nblumhardt.com/2015/01/designing-log-events-for-effective-correlation/>

- [Serilog & Seq](Serilog-and-Seq)
@ -1,90 +1,5 @@
# Logging using ELK stack

# Superseded

This article contains a brief introduction to centralized structured logging with [Serilog](https://serilog.net/) and event viewing with [ELK](https://www.elastic.co/elk-stack) in eShopOnContainers. ELK is an acronym for ElasticSearch, LogStash and Kibana, one of the most widely used logging stacks in the industry.

This wiki page has been superseded by:

![image](images/Logging-using-elk/kibana-working.png)

## Wiring eShopOnContainers with ELK in localhost

eShopOnContainers is ready to work with ELK; you only need to set the configuration parameter **LogstashUrl** in the **Serilog** section. To achieve this, you can modify this parameter in the `appsettings.json` of every service, or set it via the environment variable **Serilog:LogstashUrl**.

There is another option: a zero-configuration environment for testing the integration, launched via the ```docker-compose``` command in the root directory of eShopOnContainers:

```sh
docker-compose -f docker-compose.yml -f docker-compose.override.yml -f docker-compose.elk.yml build

docker-compose -f docker-compose.yml -f docker-compose.override.yml -f docker-compose.elk.yml up
```
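As an illustrative sketch (the service name and URL are assumptions, not taken from the repo), setting **Serilog:LogstashUrl** via an environment variable in a docker-compose override could look like this:

```yml
  catalog-api:
    environment:
      - Serilog__LogstashUrl=http://logstash:8080
```

Note the `__` separator, which ASP.NET Core maps to the `:` configuration delimiter.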
### Configuring the Logstash index on Kibana

Once you have started and configured your application, you only need to configure the Logstash index on Kibana.
With the docker-compose setup, Kibana is available at [http://localhost:5601](http://localhost:5601).

If you access Kibana too early, you may see this error. It's normal; depending on your machine, the Kibana stack needs a bit of time to start up.

![image](images/Logging-using-elk/kibana-error.png)

Wait a bit and refresh the page. The first time you enter, you need to configure an index pattern; in the ```docker-compose``` configuration, the index pattern name is **eshops-\***.

![image](images/Logging-using-elk/kibana-index.png)

With the index pattern configured, you can enter the Discover section and start viewing how the tool is collecting the logging information.

![image](images/Logging-using-elk/kibana-discover.png)
## Configuring ELK on an Azure VM

Another option is to use a preconfigured virtual machine with Logstash, ElasticSearch and Kibana, and point the configuration parameter **LogstashUrl** at it. To do this, go to Microsoft Azure and search for a certified ELK virtual machine.

![image](images/Logging-using-elk/elk-azure-marketplace.png)

This option offers certified preconfigured settings (network, virtual machine type, OS, RAM, disks) as a good starting point for ELK with good performance.

![image](images/Logging-using-elk/elk-azure-vm.png)

When you have configured the main aspects of your virtual machine, you will have a "Review & create" last step like this:

![image](images/Logging-using-elk/elk-azure-create.png)
### Configuring the Bitnami environment

This virtual machine comes with a lot of the configuration plumbing already done. If you want to change something in the default configuration, you can refer to this documentation:
[https://docs.bitnami.com/virtual-machine/apps/elk/get-started/](https://docs.bitnami.com/virtual-machine/apps/elk/get-started/)

The only thing you have to change is the Logstash configuration inside the machine. This configuration is in the file ```/opt/bitnami/logstash/conf/logstash.conf```.
You must edit the file and overwrite it with this configuration:
```conf
input {
  http {
    #default host 0.0.0.0:8080
    codec => json
  }
}

## Add your filters / logstash plugins configuration here
filter {
  split {
    field => "events"
    target => "e"
    remove_field => "events"
  }
}

output {
  elasticsearch {
    hosts => "elasticsearch:9200"
    index => "eshops-%{+xxxx.ww}"
  }
}
```
To do this, you can connect to the VM via SSH and edit the file using, for example, the vi editor.
Once the file has been edited, check that there are inbound port rules created for the Logstash service. You can do this in the Networking menu of your ELK virtual machine resource in Azure.

![image](images/Logging-using-elk/elk-azure-ports.png)

The only thing that remains is to connect to your VM via a browser and check that the Bitnami splash page is showing.

![image](images/Logging-using-elk/elk-azure-bitnami.png)

You can get the password for access by going to your virtual machine in Azure and checking the boot diagnostics; there's a message that shows you your password.

When you have the user and password, you can access the Kibana tool and create the ```eshops-*``` index pattern, as documented at the beginning of this article, and then start to discover.

![image](images/Logging-using-elk/elk-azure-kibana.png)
- [ELK Stack](ELK-Stack)
@ -1,81 +1,5 @@
# Superseded

## Build pipelines YAML definitions

This wiki page has been superseded by:

The folder `/build/azure-devops` has all the YAML files for all build pipelines. Although (for simplicity reasons) eShopOnContainers has all the code in the same repo, we have one separate build per microservice. All builds have two jobs (named `BuildLinux` and `BuildWindows`) that build the Linux version and the Windows version of the microservice.

We use _path filters_ to queue a build only when commits include files in certain paths. For example, this is the _path filters_ section of the Web Status microservice:

```yaml
paths:
  include:
  - src/BuildingBlocks/*
  - src/Web/WebStatus/*
  - k8s/helm/webstatus/*
```
The build will be triggered if the commits include files in these folders; any other change won't trigger the build. Using _path filters_ we have the flexibility to use a single repository with separate builds, triggering only the builds that are needed.

Please refer to the [Azure DevOps YAML build pipelines documentation](https://docs.microsoft.com/en-us/azure/devops/pipelines/yaml-schema?view=azure-devops&tabs=schema) for more information.
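For reference, a complete `trigger` section combining branch and path filters in an Azure DevOps YAML pipeline follows this shape (the branch names here are illustrative, not taken from the repo):

```yaml
trigger:
  branches:
    include:
    - dev
    - master
  paths:
    include:
    - src/BuildingBlocks/*
    - src/Web/WebStatus/*
```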
### Azure DevOps build pipelines

We have a build pipeline for each service in Azure DevOps, created from the YAML file. To create a build pipeline from an existing YAML file, create a new build pipeline in Azure DevOps:

![image](images/Azure-DevOps-pipelines/new-pipeline.png)

Select the repository type where you have the code (like GitHub) and then select the repository with the eShopOnContainers code. Then, in the _Configure_ tab, select the option _Existing Azure Pipelines YAML file_:

![image](images/Azure-DevOps-pipelines/existing-pipeline.png)

Then you need to enter the YAML file to use:

![image](images/Azure-DevOps-pipelines/select-yaml.png)

In the _Review_ tab you'll see the content of the YAML file, and you can review it (and update it if needed).

Once the build pipeline is created in Azure DevOps, you can override some of its parameters (e.g. the branches affected, or whether the build is a CI build or must be launched manually). To do so, select the build pipeline and edit it. In the _Review_ tab, you can click the three-dots button to override the parameters:

![image](images/Azure-DevOps-pipelines/override-parameters.png)
## Pull Request builds

We have enabled the build pipelines for Pull Requests. There are some differences between a normal build and the build triggered by a PR, though:

* The build triggered from a PR does not push the Docker images to any Docker registry.

## Windows vs Linux images

Each build generates the Windows AND Linux images (note that we could have two separate builds instead). The build pushes the images to [eshop Dockerhub](https://hub.docker.com/u/eshop/).

* Linux images have the tag `linux-<branch>`, where `<branch>` is the git branch that triggered the build.
* Windows images have the tag `win-<branch>`, where `<branch>` is the git branch that triggered the build.

We have **multiarch tags** for the tags `dev`, `master` and `latest`, so you don't need to use `win-dev` or `linux-dev`; the tag `dev` will pick the right architecture automatically. Only these three tags are multiarch, **and they are the only tags intended to be used**. The tag `dev` is the most up to date.
## Release pipelines

We have an Azure DevOps release pipeline per microservice. The source artifact for the release is the build.

All release pipelines are very similar, as we use Helm to deploy to Kubernetes:

![image](images/Azure-DevOps-pipelines/release-pipeline.png)

We use three Azure DevOps tasks:

* A Helm task to install Helm in the build agent
* A Helm task to perform the `helm init`
* A Helm task to perform the `helm upgrade`

The helm upgrade task **uses the chart files which are produced as a build output** to install the Helm chart from disk.

![image](images/Azure-DevOps-pipelines/helm-upgrade-task.png)

The following arguments are passed to the task:

* Namespace: where to install the release
* Chart Type: "File Path", as we are using the YAML files
* Chart Path: path with the chart files (part of the build output)
* Release Name: name of the release to upgrade
* Set Values: the values to set on the chart. We have to pass the `inf.k8s.dns` and the image name and tag
* Mark the checkboxes: "Install if not present", "Recreate pods" and "Wait"
* Arguments: additional YAML files needed
- [Azure DevOps pipelines](Azure-DevOps-pipelines)
@ -1,113 +1,5 @@
# FAQ about eShopOnContainers

# Superseded

## Support questions

This wiki page has been superseded by:

### Is Visual Studio 2015 supported?

The short answer is NO. There are a lot of changes between VS2017 and VS2015 regarding the .NET Core project format (no more `project.json`) and Docker support.

When the project started, VS2017 was not yet available, so the project started with VS2015. There is **an old version of the project** that works with VS2015 and Docker. You can find it [using the vs2015 tag](https://github.com/dotnet-architecture/eShopOnContainers/tree/vs2015), but it is currently **not supported**. For more information about this version, read the [corresponding wiki section](https://github.com/dotnet-architecture/eShopOnContainers/wiki/05.-Setting-up-the-eShopOnContainers-solution-version-based-on-project.json-files-and-Visual-Studio-2015-environment).
### I want to submit a PR

Glad to hear it! We are open to PRs, just keep in mind three (little) things:

1. Ensure that there is no active development already targeting your PR. If you find a bug and want to solve it, check that there isn't an active issue for that bug. We open issues for any bug we find during our testing, so if there is an issue for the same bug, it is probably already in development. **In case of doubt, feel free to ask in the issue**. If you don't find an issue, **feel free to open a new one** and just note in the comments that you will address it in an upcoming PR.

2. When submitting a PR, reference the issue that the PR solves (if any).

3. **All PRs must be done against the `dev` branch**. Any PR against `master` won't be accepted (with a few exceptions). This is to avoid merge conflicts between master and dev. Dev is where all development is happening.

### Are Linux or Mac supported?

Yes. For Linux the CLI is supported. For Mac we support both the CLI and Visual Studio for Mac. Feel free to report any issue you find using Mac or Linux.

### How can I manage and test the RabbitMQ server running in the container?

You need to connect to the RabbitMQ management UI on port 15672:

http://127.0.0.1:15672/

And use these credentials:

user: guest
pwd: guest
## Bugs or warnings

### The SQL Server container is not running

It looks like the SQL container tried to start but then exited. If you run `docker ps -a`, the STATUS column for the SQL container does NOT show a status of "Up" but shows the STATUS as "Exited".

Workaround: Usually this is due to not enough memory being assigned to the Docker host Linux VM.

IMPORTANT: Note that sometimes, after installing a "Docker for Windows" update, the assigned memory value might have been reset to 2GB again (see Docker issue https://github.com/docker/for-win/issues/1169), which is not enough for the SQL container. Assign at least 4GB of memory to the Docker host in the "Docker for Windows" settings.
### When I run the solution (using Visual Studio 2017 or the CLI) I see one or more warnings like "The ESHOP_AZURE_XXXX variable is not set. Defaulting to a blank string."

You can ignore those warnings. They're not from VS2017 but from docker-compose. These variables are used to allow eShopOnContainers to use external resources (like Redis or SQL Server) from Azure. If they're not set, the `docker-compose.override.yml` file uses default values that are good when running everything locally. So, the rule is:

* If you run everything locally: there's no need to set these variables, and you can ignore these warnings.
* If you run all or some resources externally (i.e. in Azure): you need to set these variables. Refer to [https://github.com/dotnet-architecture/eShopOnContainers/blob/master/readme/README.ENV.md](https://github.com/dotnet-architecture/eShopOnContainers/blob/master/readme/README.ENV.md) for more information about how to set them up.
### When I run "docker-compose up" I receive an error like ERROR: Service 'xxxxx' failed to build: COPY failed: stat /var/lib/docker/tmp/docker-builder123456789/obj/Docker/publish: no such file or directory

This error is produced because some Docker image can't be built, because the corresponding project was not published. All projects are published in their `obj/Docker/publish` folder. If there is any compilation error, the project won't be published, the corresponding Docker image can't be built, and you will receive this error.

**Note**: When you run the project using F5 from VS2017, projects are not published, so you won't receive this error in VS2017.

### When I build the SPA I receive a `Cannot read property '0' of undefined` in "npm install"

This is because of npm 5.3.0 on newer versions of Node.js. Downgrade to npm 5.2.0 until a fix is released. For more info check [this issue](https://github.com/dotnet-architecture/eShopOnContainers/issues/268).
### When trying to log in from the MVC app the following error appears: IDX10803: Unable to obtain configuration from: 'http://10.0.75.1:5105/.well-known/openid-configuration'

#### Deploying in Windows with Docker for Windows

First open a browser and navigate to `http://10.0.75.1:5105/.well-known/openid-configuration`. You should receive a JSON response. If not, ensure that Identity.API and Docker are running without issues.

If a response is received, the problem is that requests from a container cannot reach `10.0.75.1` (which is the IP of the host machine inside the DockerNAT). Be sure that:

* You have opened the ports of the firewall (run the script `cli-windows\add-firewall-rules-for-sts-auth-thru-docker.ps1`).

If this does not solve your problem, ensure that `vpnkit` is not being blocked by the firewall. For more info refer to @huangmaoyixxx's comment in [issue #295](https://github.com/dotnet-architecture/eShopOnContainers/issues/295).

Another possibility is that the ASP.NET Identity database was not generated correctly or in time by EF Migrations when the app first started, because the SQL container was too slow to be ready for the Identity service. You can work around that by increasing the number of retries with exponential backoff of the EF contexts within the Identity.API service (i.e. increase `maxRetryCount` in the `sqlOptions` provided to `ConfigureDbContext`). Or simply try re-deploying the app into Docker.
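As a sketch of that workaround (the context type and numbers are illustrative, not the exact Identity.API code), the retry policy is configured through `EnableRetryOnFailure` in the `sqlOptions` of the DbContext registration:

```cs
services.AddDbContext<ApplicationDbContext>(options =>
    options.UseSqlServer(connectionString,
        sqlOptions =>
        {
            // Retry up to 15 times with exponential backoff (capped at 30s),
            // giving the SQL container time to become ready.
            sqlOptions.EnableRetryOnFailure(
                maxRetryCount: 15,
                maxRetryDelay: TimeSpan.FromSeconds(30),
                errorNumbersToAdd: null);
        }));
```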
#### Deploying on a Mac with Docker for Mac

On a Mac, you cannot use the 10.0.75.1 IP, so you need to change that in the `docker-compose.override.yml` file: replace the IdentityUrl environment variable (or any place where the IP 10.0.75.1 is used) with:

```bash
IdentityUrl=http://docker.for.mac.localhost:5105
```

Now open a browser and navigate to `http://docker.for.mac.localhost:5105/.well-known/openid-configuration`.

You should receive a JSON response. If not, ensure that Identity.API and Docker are running without issues.
### When I try to run the solution in "Docker for Windows" (on the Linux VM) I get the error "Did you mean to run dotnet SDK commands?"

If you get this error:

    Did you mean to run dotnet SDK commands? Please install dotnet SDK from:
    http://go.microsoft.com/fwlink/?LinkID=798306&clcid=0x409

That usually happens when you have just switched from Windows containers to Linux containers in "Docker for Windows". This might be a temporary bug in the "Docker for Windows" environment.

Workaround: Reboot your machine and you should be able to deploy to Linux containers without these issues.
### When I build the bits with the Docker build-container, when it is running "dotnet publish" against the whole solution, it tries to use docker-compose.dcproj as if it were a .NET project, and I get the error 'The SDK Microsoft.Docker.Sdk specified could not be found'

This issue is related to this issue/bug:
https://github.com/dotnet/cli/issues/6178

#### WORKAROUND when using a Docker Linux build-container

When trying to get the Docker image (microsoft/aspnetcore-build) working, see:
https://github.com/aspnet/aspnet-docker/issues/299

Use the **microsoft/aspnetcore-build:1.0-2.0** image, which comes with the Microsoft.Docker.Sdk.

#### WORKAROUND until fixed, if using just .NET Core with NO Docker build-container

Copy the Microsoft.Docker.Sdk folder from C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\MSBuild\Sdks, or C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\MSBuild\Sdks if you are using the Enterprise version (only the Sdk subfolder; do not copy the build and tools subfolders).

Then paste it into C:\Program Files\dotnet\sdk\2.0.0\Sdks on Windows, or /usr/share/dotnet/sdk/2.0.0/Sdks on Linux (Ubuntu).
Then `dotnet build SolutionName.sln` will work fine.
These steps will fix both errors:

    error MSB4236: The SDK 'Microsoft.Docker.Sdk' specified could not be found.

    error MSB4022: The result "" of evaluating the value "$(DockerBuildTasksAssembly)" of the "AssemblyFile" attribute in element is not valid.

- [FREQUENT ERRORS](Frecuent-errors)
API-gateways.md
Normal file
@ -0,0 +1,3 @@

For detailed info about "Implementing API Gateways with Ocelot" as implemented in eShopOnContainers, check out the following blog post:

<https://blogs.msdn.microsoft.com/cesardelatorre/2018/05/15/designing-and-implementing-api-gateways-with-ocelot-in-a-microservices-and-container-based-architecture/>
Application-Insights.md
Normal file
@ -0,0 +1,109 @@
Follow the steps below to configure App Insights as a logging service. Instructions are included for setting up App Insights whether you register eShopOnContainers logs locally or in a cluster environment with Kubernetes or Service Fabric.

> **CONTENT**

- [Add Application Insights to an ASP.NET Core app](#add-application-insights-to-an-aspnet-core-app)
- [Create an App Insights service](#create-an-app-insights-service)
- [Setting up Application Insights locally](#setting-up-application-insights-locally)
- [Setting up Application Insights in Kubernetes](#setting-up-application-insights-in-kubernetes)
- [Application Insights filters](#application-insights-filters)
## Add Application Insights to an ASP.NET Core app

Install the following NuGet packages:

- Microsoft.ApplicationInsights.AspNetCore
- Microsoft.ApplicationInsights.DependencyCollector

In case you deploy your app in a cluster environment, install these additional packages:

- Microsoft.ApplicationInsights.Kubernetes (for a Kubernetes environment)
- Microsoft.ApplicationInsights.ServiceFabric (for a Service Fabric environment)

Include the **UseApplicationInsights** extension method in your `WebHostBuilder`, located in `Program.cs`:

![image](images/Application-Insights/program.png)

Register the AppInsights service by including the **AddApplicationInsightsTelemetry** extension method in the `ConfigureServices` method of your `Startup.cs`. In case you're using K8s or SF, add the **EnableKubernetes** or **FabricTelemetryInitializer** method respectively:

![image](images/Application-Insights/configure-services.png)

To enable AppInsights in your `ILogger`, include the **AddAzureWebAppDiagnostics** and **AddApplicationInsights** `ILoggerFactory` extension methods in the `Configure` method of your `Startup.cs`:

![image](images/Application-Insights/configure.png)
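Since the screenshots above can be hard to read, here is a sketch of what those registrations looked like in code for the App Insights packages of that era (the `EnableKubernetes` call in particular depends on the Microsoft.ApplicationInsights.Kubernetes package version, so treat the details as assumptions):

```cs
// Program.cs
public static IWebHost BuildWebHost(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseApplicationInsights()   // from Microsoft.ApplicationInsights.AspNetCore
        .UseStartup<Startup>()
        .Build();

// Startup.cs
public void ConfigureServices(IServiceCollection services)
{
    services.AddApplicationInsightsTelemetry(Configuration);
    services.EnableKubernetes();    // only when running in Kubernetes
}
```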
## Create an App Insights service

Go to the Azure portal and create the service:

- Create Azure AppInsights

![image](images/Application-Insights/create-appinsights.png)

In case of using App Insights for logging eShopOnContainers in a cluster environment, multi-role application map must be enabled:

- Enable multi-role application map

![image](images/Application-Insights/multirole-map.png)

Retrieve the generated Instrumentation Key, to be used later on: go to Properties in the portal and copy the key.

## Setting up Application Insights locally

Go to the root of the project and open the **.env** file where all the environment variables are set, uncomment the **INSTRUMENTATION_KEY** variable, and set the Instrumentation Key from the App Insights service you created previously:
```
#ESHOP_AZURE_REDIS_BASKET_DB=<YourAzureRedisBasketInfo>
#ESHOP_AZURE_STORAGE_CATALOG_URL=<YourAzureStorage_Catalog_BLOB_URL>
#ESHOP_AZURE_STORAGE_MARKETING_URL=<YourAzureStorage_Marketing__BLOB_URL>
#ESHOP_AZURE_SERVICE_BUS=<YourAzureServiceBusInfo>
#ESHOP_AZURE_COSMOSDB=<YourAzureCosmosDBConnData>
#ESHOP_AZURE_CATALOG_DB=<YourAzureSQLDBCatalogDBConnString>
#ESHOP_AZURE_IDENTITY_DB=<YourAzureSQLDBIdentityDBConnString>
#ESHOP_AZURE_ORDERING_DB=<YourAzureSQLDBOrderingDBConnString>
#ESHOP_AZURE_MARKETING_DB=<YourAzureSQLDBMarketingDBConnString>
#ESHOP_AZUREFUNC_CAMPAIGN_DETAILS_URI=<YourAzureFunctionCampaignDetailsURI>
#ESHOP_AZURE_STORAGE_CATALOG_NAME=<YourAzureStorageCatalogName>
#ESHOP_AZURE_STORAGE_CATALOG_KEY=<YourAzureStorageCatalogKey>
#ESHOP_AZURE_STORAGE_MARKETING_NAME=<YourAzureStorageMarketingName>
#ESHOP_AZURE_STORAGE_MARKETING_KEY=<YourAzureStorageMarketingKey>
#ESHOP_SERVICE_BUS_USERNAME=<ServiceBusUserName-OnlyUsedIfUsingRabbitMQUnderwindows>
#ESHOP_SERVICE_BUS_PASSWORD=<ServiceBusUserPassword-OnlyUsedIfUsingRabbitMQUnderwindows>
INSTRUMENTATION_KEY=
#USE_LOADTEST=<True/False>
```
## Setting up Application Insights in Kubernetes

- Retrieve the Instrumentation Key from your AppInsights service properties. Open the **conf_local.yml** config file located in the '/k8s' directory and set the variable **Instrumentation_Key** with the key:
```
BasketBus: rabbitmq
BasketRedisConStr: basket-data
CatalogBus: rabbitmq
CatalogSqlDb: Server=sql-data;Initial Catalog=Microsoft.eShopOnContainers.Services.CatalogDb;User Id=sa;Password=Pass@word;
CatalogAzureStorageEnabled: "False"
IdentitySqlDb: Server=sql-data;Initial Catalog=Microsoft.eShopOnContainers.Services.IdentityDb;User Id=sa;Password=Pass@word;
LocationsBus: rabbitmq
LocationsNoSqlDb: mongodb://nosql-data
LocationsNoSqlDbName: LocationsDb
MarketingBus: rabbitmq
MarketingNoSqlDb: mongodb://nosql-data
MarketingNoSqlDbName: MarketingDb
MarketingSqlDb: Server=sql-data;Initial Catalog=Microsoft.eShopOnContainers.Services.MarketingDb;User Id=sa;Password=Pass@word;
OrderingBus: rabbitmq
OrderingSqlDb: Server=sql-data;Initial Catalog=Microsoft.eShopOnContainers.Services.OrderingDb;User Id=sa;Password=Pass@word;
PaymentBus: rabbitmq
UseAzureServiceBus: "False"
EnableLoadTest: "False"
keystore: keystore-data
GracePeriodManager_GracePeriodTime: "1"
GracePeriodManager_CheckUpdateTime: "15000"
Instrumentation_Key: ""
```
## Application Insights filters
|
||||
|
||||
- Go to App Insights Search option in the Azure portal and check all the telemetry and traces that your app is generating.
|
||||
- In a cluster environment with multiple containers with their own set of instances, notice that we can actually filter and trace logs by these service and instances:
|
||||
|
||||


# Architecture.md

## Overview

This reference application is cross-platform on both the server and client side, thanks to .NET Core services capable of running on Linux or Windows containers depending on your Docker host. It also has a Xamarin mobile app that supports Android, iOS and Windows/UWP, as well as an ASP.NET Core MVC web app and an SPA app.

The architecture proposes a microservice-oriented implementation with multiple autonomous microservices (each one owning its own data/db). The microservices also showcase different approaches, from simple CRUD to more elaborate DDD/CQRS patterns. Communication uses HTTP between client apps and microservices, and asynchronous message-based communication between microservices. Message queues can be handled either by RabbitMQ or Azure Service Bus, to convey integration events.

Domain events are handled in the ordering microservice using [MediatR](https://github.com/jbogard/MediatR), a simple in-process implementation of the Mediator pattern.



## EventBus

eShopOnContainers includes a simplified EventBus abstraction to handle integration events, as well as two implementations: one based on [RabbitMQ](https://www.rabbitmq.com/) and another based on [Azure Service Bus](https://docs.microsoft.com/en-us/azure/service-bus/).

For production-grade solutions you should use a more robust implementation based on a proven product such as [NServiceBus](https://github.com/Particular/NServiceBus). You can even see a (somewhat outdated) implementation of eShopOnContainers with NServiceBus here: https://github.com/Particular/eShopOnContainers.

## API Gateways

The architecture also includes an implementation of the API Gateway and Backend-For-Frontend (BFF) patterns, to publish simplified APIs and add security measures that hide the internal microservices from client apps or outside consumers.

These sample API Gateways are based on [Ocelot](https://github.com/ThreeMammals/Ocelot), an OSS lightweight API Gateway solution. The API Gateways are deployed as autonomous microservices/containers, so you can test them in a simple development environment just using Docker Desktop, or with orchestrators like Kubernetes in AKS or Service Fabric.

For a production-ready architecture you can keep using Ocelot, which is simple, easy to use, and currently used in production by large companies. If you need additional functionality and a much richer set of features suitable for commercial APIs, you can substitute those API Gateways with Azure API Management or any other commercial API Gateway, as shown in the following diagram.



## Internal architectural patterns

There are different types of microservices, with different internal architectural patterns and approaches depending on their purpose, as shown in the image below.



## Database servers

There are four SQL Server databases, but they are all deployed to a single container to keep memory requirements as low as possible. This is not a recommended approach for production deployments, where you should use high-availability solutions.

There are also one Redis instance and one MongoDB instance, in separate containers, as samples of two widely used NoSQL databases.

## More on-line details and guidance

You can get more details on the related technologies and components in these selected articles from the [.NET Microservices architecture guide](https://docs.microsoft.com/dotnet/standard/microservices-architecture/):

- [Introduction to containers and Docker](https://docs.microsoft.com/dotnet/standard/microservices-architecture/container-docker-introduction/)
- [Key concepts of microservices container based applications](https://docs.microsoft.com/dotnet/standard/microservices-architecture/architect-microservice-container-applications/)
- [Data sovereignty and eventual consistency](https://docs.microsoft.com/dotnet/standard/microservices-architecture/architect-microservice-container-applications/data-sovereignty-per-microservice)
- [API gateways](https://docs.microsoft.com/dotnet/standard/microservices-architecture/architect-microservice-container-applications/direct-client-to-microservice-communication-versus-the-api-gateway-pattern)
- [Communication in a microservices architecture](https://docs.microsoft.com/dotnet/standard/microservices-architecture/architect-microservice-container-applications/communication-in-microservice-architecture)
- [Asynchronous message-based communications](https://docs.microsoft.com/dotnet/standard/microservices-architecture/architect-microservice-container-applications/asynchronous-message-based-communication)
- [Resiliency](https://docs.microsoft.com/dotnet/standard/microservices-architecture/architect-microservice-container-applications/resilient-high-availability-microservices)
- [Clusters and orchestrators](https://docs.microsoft.com/dotnet/standard/microservices-architecture/architect-microservice-container-applications/scalable-available-multi-container-microservice-applications)
- [Details on the eShopOnContainers sample application](https://docs.microsoft.com/dotnet/standard/microservices-architecture/multi-container-microservice-net-applications/)
- [Key concepts of Domain Driven Design (DDD) and Command and Query Responsibility Segregation (CQRS)](https://docs.microsoft.com/dotnet/standard/microservices-architecture/microservice-ddd-cqrs-patterns/)
- [Implementing resilient applications](https://docs.microsoft.com/dotnet/standard/microservices-architecture/implement-resilient-applications/)
- [Authentication and authorization](https://docs.microsoft.com/dotnet/standard/microservices-architecture/secure-net-microservices-web-applications/)
- [The development process for Docker-based applications](https://docs.microsoft.com/dotnet/standard/microservices-architecture/docker-application-development-process/)


# Azure-Dev-Spaces.md

Please [go to the official Dev Spaces documentation](https://docs.microsoft.com/azure/dev-spaces/) for details.

You should be familiar with the topics that follow.

> **CONTENT**

- [Enabling Dev Spaces](#enabling-dev-spaces)
- [Prepare environment for Dev Spaces](#prepare-environment-for-dev-spaces)
- [Deploy to a dev space](#deploy-to-a-dev-space)
- [Deploy to a child dev space](#deploy-to-a-child-dev-space)

## Enabling Dev Spaces

You need an AKS cluster created in a region where Dev Spaces is supported. Then just type:

```console
az aks use-dev-spaces -g your-aks-devspaces-resgrp -n YourAksDevSpacesCluster
```

Note: This command will install the _Azure Dev Spaces CLI_ if it's not already installed on your computer.

The tool will ask you to create a dev space. Enter the name of the dev space (e.g. `dev`) and make it a root dev space by selecting _none_ when prompted for its parent dev space:



Once the Dev Spaces tooling is added, type `azds --version` to get its version. The tooling version tested was:

```console
Azure Dev Spaces CLI (Preview)
0.1.20190320.5
API v2.17
```

Future versions should work, unless they introduce _breaking changes_.

## Prepare environment for Dev Spaces

From a PowerShell console, go to the `/src` folder and run `prepare-devspaces.ps1` (no parameters needed). This script copies the `inf.yaml` and `app.yaml` files from `/k8s/helm` to all project folders, which is needed due to a limitation of the Dev Spaces tooling. Note that the copied files are listed in `.gitignore`.

Remember that `inf.yaml` and `app.yaml` contain the parameters needed for the Helm charts to run.
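The copy step can be sketched in shell (hedged: the real script is PowerShell, and `Demo.API` below is a made-up project used only for illustration):

```shell
# Mimic what prepare-devspaces.ps1 does: copy the shared Helm parameter files
# next to every .csproj. Set up a tiny illustrative tree first.
mkdir -p k8s/helm src/Services/Demo.API
touch k8s/helm/inf.yaml k8s/helm/app.yaml src/Services/Demo.API/Demo.API.csproj

# Copy inf.yaml and app.yaml into each project folder.
find src -name '*.csproj' | while read -r proj; do
  cp k8s/helm/inf.yaml k8s/helm/app.yaml "$(dirname "$proj")/"
done

ls src/Services/Demo.API
```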



## Deploy to a dev space

Dev Spaces deployment uses the **same Helm charts used to deploy to a "production" cluster**.

If you want to deploy a project to a specific dev space, just go to its source folder (where the `.csproj` is) and type `azds up`. This will deploy the project to the current dev space. You can use the `-v` modifier for more verbosity and the `-d` modifier to detach the terminal from the application. If `-d` is not used, the `azds up` command stays attached to the running service and you can see its logs.

The command `azds up` will:

1. Sync files with the dev space builder container
2. Deploy the helm chart
3. Build the service container
4. Attach the current console to the container output (if `-d` is not passed)

**Note**: You should deploy **all** Dev Spaces-enabled projects (one by one) in the parent dev space.

The command `azds list-up` will show which APIs are deployed in the dev space. The command `azds list-uris` will show the ingress URLs:



## Deploy to a child dev space

Once everything is deployed to the root dev space, use `azds space select` to create a child dev space.



Then deploy the desired service to this child dev space (using `azds up` again). Use `azds list-up` to verify that the service is deployed in the child dev space. The image shows the _WebMVC_ deployed in the child dev space _alice_:



The `azds list-uris` command will show you the new ingress URL for the child dev space:



If you use the child URL (starting with `alice.s.`), the Web MVC that runs will be the one deployed in the child dev space. This web app will use all services deployed in the child dev space and, when a service is not found there, fall back to the ones deployed in the parent dev space.

If you use the parent dev space URL, the Web MVC that runs will be the one deployed in the parent dev space, using only the services deployed there.

Usually you deploy everything to the parent dev space and then create one child dev space per developer. Each developer deploys only the service they are updating in their own namespace.

Please refer to the [Dev Spaces documentation](https://docs.microsoft.com/azure/dev-spaces/) for more info.

**Note**: _Web SPA_ is not enabled to use Dev Spaces (so you can't deploy the SPA in a dev space). Use the Web MVC for testing.

# Azure-DevOps-pipelines.md

This page contains a brief setup description for CI/CD pipelines in Azure DevOps.

> **CONTENT**

- [Build pipelines YAML definitions](#build-pipelines-yaml-definitions)
- [Azure DevOps Build pipelines](#azure-devops-build-pipelines)
- [Pull Request Builds](#pull-request-builds)
- [Windows vs Linux images](#windows-vs-linux-images)
- [Release pipelines](#release-pipelines)

## Build pipelines YAML definitions

The folder `/build/azure-devops` has the YAML files for all build pipelines. Although (for simplicity) eShopOnContainers has all code in the same repo, we have one separate build per microservice. All builds have two jobs (named `BuildLinux` and `BuildWindows`) that build the Linux and Windows versions of the microservice.

We use _path filters_ to queue a build only when commits touch files in certain paths. For example, this is the _path filters_ section of the Web Status microservice:

```yaml
paths:
  include:
  - src/BuildingBlocks/*
  - src/Web/WebStatus/*
  - k8s/helm/webstatus/*
```

The build will be triggered only if the commits include files in these folders; any other change won't trigger it. Using _path filters_ gives us the flexibility of a single repository with separate builds, triggering only the builds needed.

Please refer to the [Azure DevOps YAML build pipelines documentation](https://docs.microsoft.com/azure/devops/pipelines/yaml-schema?view=azure-devops&tabs=schema) for more information.
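For context, path filters live under the pipeline's `trigger` section. A hedged sketch of a fuller trigger section (the branch name is illustrative; the path filters are from the Web Status example above):

```yaml
# Sketch of a complete trigger section for one microservice's pipeline.
trigger:
  branches:
    include:
    - dev
  paths:
    include:
    - src/BuildingBlocks/*
    - src/Web/WebStatus/*
    - k8s/helm/webstatus/*
```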
### Azure DevOps Build pipelines

We have a build pipeline for each service in Azure DevOps, created from the YAML file. To create a build pipeline from an existing YAML file, create a new build pipeline in Azure DevOps:



Select the repository type where you have the code (like GitHub) and then select the repository with the eShopOnContainers code. In the _Configure_ tab, select the option _Existing Azure Pipelines YAML file_:



Then enter the YAML file to use:



In the _Review_ tab you'll see the content of the YAML file, which you can review (and update if needed).

Once the build pipeline is created in Azure DevOps, you can override some of its parameters (for example, the branches affected, or whether the build is a CI build or must be launched manually). To do so, select the build pipeline and edit it. In the _review_ tab you can click the three-dots button to override the parameters:



## Pull Request Builds

We have enabled the build pipelines for Pull Requests. There are some differences between a normal build and a build triggered by a PR, though:

* The build triggered from a PR does not push the docker images to any docker registry.

## Windows vs Linux images

Each build generates the Windows AND Linux images (note that we could have two separate builds instead). The build pushes the images to [eshop Dockerhub](https://hub.docker.com/u/eshop/).

* Linux images have the tag `linux-<branch>`, where `<branch>` is the git branch that triggered the build.
* Windows images have the tag `win-<branch>`, where `<branch>` is the git branch that triggered the build.

We have **multi-arch tags** for the tags `dev`, `master` and `latest`, so you don't need to use `win-dev` or `linux-dev`; the tag `dev` will pick the right architecture automatically. Only these three tags are multi-arch, **and they are the only tags intended to be used**. The tag `dev` is the most up to date.
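The tag convention above can be sketched as follows (`eshop/webstatus` is an illustrative repository name):

```shell
# Given the branch that triggered the build, derive the per-OS tags and the
# multi-arch tag (the latter only exists for dev, master and latest).
BRANCH=dev
echo "eshop/webstatus:linux-${BRANCH}"   # Linux-specific image
echo "eshop/webstatus:win-${BRANCH}"     # Windows-specific image
echo "eshop/webstatus:${BRANCH}"         # multi-arch tag
```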
## Release pipelines

We have an Azure DevOps Release pipeline per microservice. The source artifact for each release is its build.

All release pipelines are very similar, as we use Helm to deploy to Kubernetes:



We use three Azure DevOps tasks:

* A Helm task to install Helm in the build agent
* A Helm task to perform the `helm init`
* A Helm task to perform the `helm upgrade`

The `helm upgrade` task **uses the chart files produced as a build output** to install the Helm chart from disk:



The following arguments are passed to the task:

* Namespace where to install the release
* Chart Type as "File Path", as we are using the yaml files
* Chart Path: path to the chart files (part of the build output)
* Release Name: name of the release to upgrade
* Set Values: the values to set on the chart. We have to pass the `inf.k8s.dns` value and the image name and tag
* Checkboxes to mark: Install if not present, Recreate pods, and Wait
* Arguments: additional yaml files needed
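As a hedged sketch, the "Set Values" field corresponds to chart value overrides like these (the DNS name and image coordinates below are illustrative):

```yaml
# Values the release passes to the chart via "Set Values" (sketch):
inf:
  k8s:
    dns: your-aks-cluster.westus.cloudapp.azure.com
image:
  repository: eshop/webstatus
  tag: linux-dev
```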

# Azure-Key-Vault.md

The management of secrets in business applications is always an important issue, for which we keep looking for alternatives. The arrival of .NET Core, with its layered configuration system, has given us a lot of flexibility when playing with the different configuration parameters of our applications. Being able to keep them in settings files such as *appsettings.json*, and to replace or extend them through other sources such as environment variables, is something we use very often in this project when creating and composing containers.

The use of [User Secrets](https://docs.microsoft.com/en-us/aspnet/core/security/app-secrets?view=aspnetcore-2.1&tabs=windows) or of *.env* files simplifies some scenarios, generally for development environments, but it does not help us in our release processes. In those release processes, for example in VSTS, we can modify configuration values by means of release variables, but we still have to know those secrets, and anyone with permission to edit a release can see their values.

> **CONTENT**

- [Azure Key Vault](#azure-key-vault)
- [Azure MSI](#azure-msi)

## Azure Key Vault

A valid option to prevent the values of secrets being known by different people is the use of a key vault, or private store of secrets. Azure offers such a service, known as [Azure Key Vault](https://azure.microsoft.com/en-us/services/key-vault/), which gives us many of the desired features as a service in the cloud.

> If you are not familiar with Azure Key Vault, I recommend reviewing the [product documentation](https://docs.microsoft.com/en-us/azure/key-vault/) to get a quick idea of everything the service offers.

.NET Core offers a configuration provider based on Azure Key Vault, which we can integrate as just another provider to which we delegate the management of secrets. Below you can see the code fragment that configures Azure Key Vault in one of the *eshopOnContainers* services.

```csharp
public static IWebHost BuildWebHost(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseStartup<Startup>()
        .UseHealthChecks("/hc")
        .UseContentRoot(Directory.GetCurrentDirectory())
        .ConfigureAppConfiguration((builderContext, config) =>
        {
            config.AddJsonFile("settings.json");

            var builtConfig = config.Build();

            var configurationBuilder = new ConfigurationBuilder();

            if (Convert.ToBoolean(builtConfig["UseVault"]))
            {
                configurationBuilder.AddAzureKeyVault(
                    $"https://{builtConfig["Vault:Name"]}.vault.azure.net/",
                    builtConfig["Vault:ClientId"],
                    builtConfig["Vault:ClientSecret"]);
            }

            configurationBuilder.AddEnvironmentVariables();

            config.AddConfiguration(configurationBuilder.Build());
        })
        .ConfigureLogging((hostingContext, builder) =>
        {
            builder.AddConfiguration(hostingContext.Configuration.GetSection("Logging"));
            builder.AddConsole();
            builder.AddDebug();
        })
        .UseApplicationInsights()
        .Build();
```

As you can see, the **AddAzureKeyVault** method is responsible for adding the configuration provider, so each time someone requests a configuration key, it will be searched for, in the appropriate order, including within the Azure Key Vault service.

> By default, secret names with the `--` separator are mapped to configuration keys with `:`. If you want to change this behavior, **AddAzureKeyVault** accepts a custom implementation of IKeyVaultSecretManager instead of the default DefaultKeyVaultSecretManager.
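To illustrate the naming convention (a sketch; the secret name is made up):

```shell
# A Key Vault secret named "Vault--ClientId" surfaces in .NET configuration
# as the key "Vault:ClientId" — the "--" separator becomes ":".
secret_name='Vault--ClientId'
config_key=$(printf '%s' "$secret_name" | sed 's/--/:/g')
echo "$config_key"   # Vault:ClientId
```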
## Azure MSI

If we look closely, we are really facing a situation that could be described as *the snake biting its own tail*: we no longer need to keep secrets in our applications, since they are in Azure Key Vault, but we still need to keep the Key Vault credentials somewhere, and anyone who knows them can access the service and obtain the information it contains.

One way to overcome this problem in Azure is by using [Managed Service Identity](https://docs.microsoft.com/en-us/azure/active-directory/managed-service-identity/overview), or MSI, which is relatively simple to operate. Basically, Azure MSI allows resources created in Azure (for the moment the list of possible resources is limited: App Services, VMs, Azure Key Vault, Azure Functions) to have an identity, an SPN, within the Active Directory associated with the subscription. Once this SPN is created, it can be used to grant access permissions to other resources.

> If you use ARM templates, enabling MSI support on a resource is as easy as setting the *identity* attribute to *systemAssigned*.

Once we have a service (say, a WebApp) with MSI enabled, and have granted it permissions in the access policies of our Azure Key Vault, it can connect to Key Vault without using a username or password. Below, we can see an example of how the **AddAzureKeyVault** call changes when using MSI, with respect to the previous form.

```csharp
public static IWebHost BuildWebHost(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseStartup<Startup>()
        .ConfigureAppConfiguration((context, builder) =>
        {
            var azureServiceTokenProvider = new AzureServiceTokenProvider();

            var keyVaultClient = new KeyVaultClient(
                new KeyVaultClient.AuthenticationCallback(
                    azureServiceTokenProvider.KeyVaultTokenCallback));

            builder.AddAzureKeyVault("https://[mykeyvault].vault.azure.net/",
                keyVaultClient,
                new DefaultKeyVaultSecretManager());
        }).Build();
```

# Backlog.md

These are candidate features to consider for the [roadmap](Roadmap).

- [Create an issue to add a feature to this list](https://github.com/dotnet-architecture/eShopOnContainers/issues/new).

## API handling

- Implement a more advanced versioning system based on [aspnet-api-versioning](https://github.com/Microsoft/aspnet-api-versioning) or a comparable system. Current API versioning is very basic, simply based on the URLs. \
<https://github.com/Microsoft/aspnet-api-versioning>

## Integration event messaging

- Azure Event Grid: Implement an additional Event Bus implementation based on Azure Event Grid.

## Diagnostics and monitoring

- Monitoring/diagnostics of microservices based on Application Insights with custom perfkeys.

- Semantic logging - Related to the Azure app version and Application Insights usage: monitor which microservices are up/down, etc. Related to App Insights, but with custom ETW events and "Semantic Application Log" from P&P. Multiple implementations for the storage of the events: Azure Diagnostics, Elastic Search, etc., using the EventSource base class.

- Create a new "ServerProblemDetails" response that conforms with RFC 7807; check [issue #602](https://github.com/dotnet-architecture/eShopOnContainers/issues/602) for details.

## Orchestrators

- Service Fabric Stateful Service implementation in the SF branch

## Front-end

- Support for .NET Core 2.0 Razor Pages as an additional client app.

- Composite UI based on microservices, including the "UI per microservice".

- Composite UI using ASP.NET (Particular's workshop) \
<http://bit.ly/particular-microservices>

- The Monolithic Frontend in the Microservices Architecture \
<http://blog.xebia.com/the-monolithic-frontend-in-the-microservices-architecture/>

- The secret of better UI composition \
<https://particular.net/blog/secret-of-better-ui-composition>

- Including Front-End Web Components Into Microservices \
<https://technologyconversations.com/2015/08/09/including-front-end-web-components-into-microservices/>

- Managing Frontend in the Microservices Architecture \
<http://allegro.tech/2016/03/Managing-Frontend-in-the-microservices-architecture.html>

- Revamp the whole UI to a more modern look, with a CSS framework/template like [CoreUI](https://coreui.io/)

- Explore [Micro Frontends](https://micro-frontends.org/)

## Security

- Encrypt secrets in configuration files (like docker-compose.yml). Multiple possibilities: Azure Key Vault, simple certificates at container level, Consul, etc.

- Other "secure-code" practices

- Encrypt communication with SSL (related to the specific cloud infrastructure being used)

- Implement security best practices for the app's secrets (conn-strings, env vars, etc.)
  (However, this subject depends on the chosen orchestrator...)
  See, when using Swarm: https://blog.docker.com/2017/02/docker-secrets-management/

- Support "multiple redirect urls" for the STS container based on IdentityServer4; check [issue #113](https://github.com/dotnet-architecture/eShopOnContainers/issues/113).

- Add social login to the MVC and SPA apps; check [issue #475](https://github.com/dotnet-architecture/eShopOnContainers/issues/475) for details.

- Encrypt sensitive information, such as the credit card number, along the ordering process; check [issue #407](https://github.com/dotnet-architecture/eShopOnContainers/issues/407)

## Resiliency

- Refactor/improve Polly's resilience code; check [issue #177](https://github.com/dotnet-architecture/eShopOnContainers/issues/177) for details.

- Add a jitter strategy to the retry policy; check [issue #188](https://github.com/dotnet-architecture/eShopOnContainers/issues/188) for details.

## Domain Driven Design

- Enhance the domain logic for the Order root aggregate.

- Item stock validation is already implemented (the order is cancelled when the quantity is not enough), but additional features could be added; check [issue #5](https://github.com/dotnet-architecture/eShopOnContainers/issues/5).

- Handle validation results from MediatR's pipeline.

## Testing

- Include some guidance on testing in CI/CD pipelines; check [issue #549](https://github.com/dotnet-architecture/eShopOnContainers/issues/549) for details.

- Create a load testing alternative that's not dependent on the about-to-be-deprecated load testing feature of VS Enterprise; see [issue #950](https://github.com/dotnet-architecture/eShopOnContainers/issues/950) for more details.

## Other

- Azure Functions integrated with Azure Event Grid: an additional event-driven Azure Function microservice (e.g. grabbing uploaded images, adding a watermark and putting them into Azure blobs). The notification would come from Azure Event Grid when any image is uploaded into BLOB storage.

- Gracefully stopping or shutting down microservice instances, implemented as an ASP.NET Core middleware in the ASP.NET Core pipeline. Drain in-flight requests before stopping the microservice/container process.

- Create a building block to handle idempotency in a generic way ([issue #143](https://github.com/dotnet/eShopOnContainers/issues/143))

- Implement an example of optimistic concurrency updates and optimistic concurrency exceptions

- Nancy: Add a Nancy-based microservice, also with DocDB, etc.

- Support other DataProtection providers, such as AspNetCore.DataProtection.ServiceFabric

- In the Windows Containers fork, implement and add a simple WCF microservice/container implementing some logic, like a simulated legacy payment gateway, as an example of a "lift and shift" scenario.

- Consider using Bash instead of PowerShell scripts; check [issue #228](https://github.com/dotnet-architecture/eShopOnContainers/issues/228) for details.

- Fix naming inconsistency in the EventBus projects and namespaces; they should be "EventBus.RabbitMQ" and "EventBus.ServiceBus"; check [issue #943](https://github.com/dotnet-architecture/eShopOnContainers/issues/943)

# Build-eShopOnContainers.md

Build and run the application placeholder.

# Building.md

## Building placeholder

- Explain scripts in CLI-* folders

# Databases-and-containers.md

# IMPORTANT

In this solution's current configuration for a development environment, the SQL databases are automatically deployed with sample data into a single SQL Server container (a single Docker container shared by all the SQL databases), so the whole solution can be up and running without any dependency on the cloud or on a specific server. Each database could also be deployed as its own Docker container, but then you'd need much more than 8 GB of RAM assigned to Docker on your development machine just to be able to run five SQL Server containers.

A similar reasoning applies to the Redis cache running as a container for the development environment, and to the NoSQL database (MongoDB) running as a container.

However, in a real production environment it is recommended to have your databases (SQL Server, Redis, and the NoSQL database, in this case) in HA (High Availability) services such as Azure SQL Database, Redis as a service, and Azure Cosmos DB instead of the MongoDB container (both systems share the same access protocol). If you want to change to a production configuration, you'll just need to change the connection strings once you have set up the servers in an HA cloud or on-premises.
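As a hedged sketch, the development-only shared SQL Server container looks roughly like this in a compose file (the image tag is illustrative; each service then points its connection string at `sql-data`):

```yaml
# docker-compose sketch: one SQL Server container hosting all the databases
# (development only; use HA database services in production).
sql-data:
  image: microsoft/mssql-server-linux:2017-latest
  environment:
    - ACCEPT_EULA=Y
    - SA_PASSWORD=Pass@word
```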

# Deploy-to-Azure-Kubernetes-Service-(AKS).md

> **CONTENT**
|
||||
|
||||
- [Create Kubernetes cluster in AKS](#create-kubernetes-cluster-in-aks)
|
||||
- [Configure RBAC security for K8s dashboard service-account](#configure-rbac-security-for-k8s-dashboard-service-account)
|
||||
- [Additional pre-requisites](#additional-pre-requisites)
|
||||
- [Install Helm](#install-helm)
|
||||
- [Install eShopOnContainers using Helm](#install-eshoponcontainers-using-helm)
|
||||
- [Customizing the deployment](#customizing-the-deployment)
|
||||
- [Using your own images](#using-your-own-images)
|
||||
- [Using specific DNS](#using-specific-dns)
|
||||
- [Not deploying infrastructure containers](#not-deploying-infrastructure-containers)
|
||||
- [Providing your own configuration](#providing-your-own-configuration)
|
||||
- [Using Azure storage for Catalog Photos](#using-azure-storage-for-catalog-photos)
|
||||
|
||||
It's possible to deploy eShopOnContainers on a AKS using [Helm](https://helm.sh/) instead of custom scripts (that will be deprecated soon).
|
||||
|
||||
## Create Kubernetes cluster in AKS

You can create the AKS cluster in one of two ways:

- A. Use Azure CLI: follow the procedure using [Azure CLI described here](https://docs.microsoft.com/en-us/azure/aks/kubernetes-walkthrough), but make sure you **enable RBAC** with `--enable-rbac` and **enable application routing** with `--enable-addons http_application_routing` in the `az aks create` command.
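
The CLI route can be sketched as follows. The resource group and cluster names here are placeholders, and the exact flags vary between Azure CLI versions, so check `az aks create --help` before running:

```
# Example names; RBAC and HTTP application routing enabled as required above
az aks create --resource-group eshop-aks-resgrp --name eshop-aks \
    --node-count 2 --generate-ssh-keys \
    --enable-rbac \
    --enable-addons http_application_routing
```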

- B. Use the Azure portal

The following steps use the Azure portal to create the AKS cluster:

- Start the process by providing the general data, like in the following screenshot:

![image](https://user-images.githubusercontent.com/1712635/45787214-5bb3f880-bc29-11e8-9d4f-65537e0a7f60.png)

- Then, very important, in the next step, enable RBAC:

![image](https://user-images.githubusercontent.com/1712635/45780023-e4a21e80-bc0e-11e8-8449-bd1f6dee1dcb.png)

- **Enable HTTP routing**. Make sure to check the "Http application routing" checkbox in the "Networking" settings. For more info, read the [documentation](https://docs.microsoft.com/en-us/azure/aks/http-application-routing).

You can use **basic network** settings, since for a test you don't need integration with any existing VNET.

![image](https://user-images.githubusercontent.com/1712635/45780086-09968180-bc0f-11e8-8b56-671e3fa84d85.png)

- You can also enable monitoring:

![image](https://user-images.githubusercontent.com/1712635/45780121-2763e680-bc0f-11e8-8954-29c2e7e01f41.png)

- Finally, create the cluster. It'll take a few minutes for it to be ready.

## Configure RBAC security for K8s dashboard service-account

To avoid errors in the Kubernetes dashboard, you need to grant rights to its service account, as described below. These are the errors you might otherwise see:

![image](https://user-images.githubusercontent.com/1712635/45916889-a7bc1e80-be5f-11e8-8c3b-d4a0e3d74bc9.png)

- Because the cluster is using RBAC, you need to grant the needed rights to the `kubernetes-dashboard` service account with this kubectl command:

`kubectl create clusterrolebinding kubernetes-dashboard -n kube-system --clusterrole=cluster-admin --serviceaccount=kube-system:kubernetes-dashboard`
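
If you prefer a declarative approach, the imperative command above corresponds to a manifest like the following. Note that granting `cluster-admin` to the dashboard is fine for a test cluster but far too broad for production:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin          # dev/test only; too permissive for production
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
```

Save it to a file and apply it with `kubectl apply -f <file>.yaml`.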

![image](https://user-images.githubusercontent.com/1712635/45917612-36535600-be63-11e8-8e66-cbeb28d2a2a2.png)

Now, just run this Azure CLI command to browse the Kubernetes dashboard:

`az aks browse --resource-group pro-eshop-aks-helm-linux-resgrp --name pro-eshop-aks-helm-linux`

![image](https://user-images.githubusercontent.com/1712635/45917711-e9bc4a80-be63-11e8-9936-1b7b66fa7cf6.png)

## Additional pre-requisites

In addition to having an AKS cluster created in Azure, and having kubectl and Azure CLI installed on your local machine and configured to use your Azure subscription, you also need the following pre-requisites:

### Install Helm

You need to have Helm installed on your machine, and Tiller must be installed in the AKS cluster. Follow the instructions in ['Install applications with Helm in Azure Kubernetes Service (AKS)'](https://docs.microsoft.com/en-us/azure/aks/kubernetes-helm) to set up Helm and Tiller for AKS.

**Note**: If your AKS cluster is not RBAC-enabled (the default option in the portal) you may receive the following error when running a helm command:

```
Error: Get http://localhost:8080/api/v1/namespaces/kube-system/configmaps?labelSelector=OWNER%!D(MISSING)TILLER: dial tcp [::1]:8080: connect: connection refused
```

If so, type:

```
kubectl --namespace=kube-system edit deployment/tiller-deploy
```

Your default text editor will pop up with the YAML definition of the tiller deployment. Search for:

```
automountServiceAccountToken: false
```

And change it to:

```
automountServiceAccountToken: true
```

Save the file and close the editor. This should reapply the deployment in the cluster. Now Helm commands should work.

## Install eShopOnContainers using Helm

All steps need to be performed in the `/k8s/helm` folder. The easiest way is to use the `deploy-all.ps1` script from a PowerShell window:

```
.\deploy-all.ps1 -externalDns aks -aksName eshoptest -aksRg eshoptest -imageTag dev
```

This will install all the [eShopOnContainers public images](https://hub.docker.com/u/eshop/) with tag `dev` on the AKS cluster named `eshoptest` in the resource group `eshoptest`. By default, all the infrastructure (SQL Server, MongoDB, RabbitMQ and Redis) is also installed in the cluster.

Once the script has run, you should see the following output when using `kubectl get deployment`:

```
NAME                             DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
eshop-apigwmm                    1         1         1            1           4d
eshop-apigwms                    1         1         1            1           4d
eshop-apigwwm                    1         1         1            1           4d
eshop-apigwws                    1         1         1            1           4d
eshop-basket-api                 1         1         1            1           4d
eshop-basket-data                1         1         1            1           4d
eshop-catalog-api                1         1         1            1           4d
eshop-identity-api               1         1         1            1           4d
eshop-keystore-data              1         1         1            1           4d
eshop-locations-api              1         1         1            1           4d
eshop-marketing-api              1         1         1            1           4d
eshop-mobileshoppingagg          1         1         1            1           4d
eshop-nosql-data                 1         1         1            1           4d
eshop-ordering-api               1         1         1            1           4d
eshop-ordering-backgroundtasks   1         1         1            1           4d
eshop-ordering-signalrhub        1         1         1            1           4d
eshop-payment-api                1         1         1            1           4d
eshop-rabbitmq                   1         1         1            1           4d
eshop-sql-data                   1         1         1            1           4d
eshop-webmvc                     1         1         1            1           4d
eshop-webshoppingagg             1         1         1            1           4d
eshop-webspa                     1         1         1            1           4d
eshop-webstatus                  1         1         1            1           4d
```

Every public service is exposed through its own ingress resource, as you can see by running `kubectl get ing`:

```
NAME                 HOSTS                                  ADDRESS       PORTS     AGE
eshop-apigwmm        eshop.<your-guid>.<region>.aksapp.io   <public-ip>   80        4d
eshop-apigwms        eshop.<your-guid>.<region>.aksapp.io   <public-ip>   80        4d
eshop-apigwwm        eshop.<your-guid>.<region>.aksapp.io   <public-ip>   80        4d
eshop-apigwws        eshop.<your-guid>.<region>.aksapp.io   <public-ip>   80        4d
eshop-identity-api   eshop.<your-guid>.<region>.aksapp.io   <public-ip>   80        4d
eshop-webmvc         eshop.<your-guid>.<region>.aksapp.io   <public-ip>   80        4d
eshop-webspa         eshop.<your-guid>.<region>.aksapp.io   <public-ip>   80        4d
eshop-webstatus      eshop.<your-guid>.<region>.aksapp.io   <public-ip>   80        4d
```

Ingresses are automatically configured to use the public DNS of the AKS cluster provided by the "http application routing" addon.

One more step is needed: you have to configure the NGINX ingress controller installed by AKS to allow larger headers, because the headers sent by Identity Server exceed the default configured size. Fortunately this is very easy to do. Just type (from the `/k8s/helm` folder):

```
kubectl apply -f aks-httpaddon-cfg.yaml
```
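
Conceptually, that file feeds the addon's NGINX ingress controller a ConfigMap with larger header buffers. A sketch of such a ConfigMap is shown below; the ConfigMap name and the exact values are assumptions, so treat the `aks-httpaddon-cfg.yaml` in the repo as the source of truth:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  # Name assumed: the ConfigMap read by the addon's NGINX ingress controller
  name: addon-http-application-routing-nginx-configuration
  namespace: kube-system
data:
  # Larger buffer so Identity Server's big response headers fit
  proxy-buffer-size: "128k"
```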

Then restart the pod that runs the NGINX controller. Its name is `addon-http-application-routing-nginx-ingress-controller-<something>` and it runs in the `kube-system` namespace. Run `kubectl get pods -n kube-system` to find it, then delete it with `kubectl delete pod <pod-name> -n kube-system`.

**Note:** If running in a bash shell you can type:

```
kubectl delete pod $(kubectl get pod -l app=addon-http-application-routing-nginx-ingress -n kube-system -o jsonpath="{.items[0].metadata.name}") -n kube-system
```

You can view the MVC client at http://[dns]/webmvc and the SPA at http://[dns]/

## Customizing the deployment

### Using your own images

To use your own images instead of the public ones, you have to pass the following additional parameters to the `deploy-all.ps1` script:

* `registry`: Login server for the Docker registry
* `dockerUser`: User login for the Docker registry
* `dockerPassword`: User password for the Docker registry

This will deploy a secret on the cluster to connect to the specified server, and all deployed image names will be prepended with the `registry/` value.

### Using specific DNS

The `-externalDns` parameter controls the DNS bound to ingresses. You can pass a custom DNS (like `my.server.com`), or the `aks` value to autodiscover the AKS DNS. For autodiscovery to work you also need to specify which AKS cluster to use, with the `-aksName` and `-aksRg` parameters.

Autodiscovery works by using Azure CLI under the hood, so make sure Azure CLI is logged in and pointing to the right subscription.
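
If you want to see the DNS name yourself, the HTTP application routing zone is exposed in the cluster's addon profile and can be queried directly with Azure CLI. This is a sketch using the same example names as above:

```
az aks show --resource-group eshoptest --name eshoptest \
    --query addonProfiles.httpApplicationRouting.config.HTTPApplicationRoutingZoneName \
    --output tsv
```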

If you don't pass any external DNS at all, ingresses aren't bound to any DNS, and you have to use the public IP to access the resources.

### Not deploying infrastructure containers

If you want to use external resources, use `-deployInfrastructure $false` to skip deploying the infrastructure containers. However, **you still have to manually update the scripts to provide your own configuration** (see next section).

### Providing your own configuration

The file `inf.yaml` contains the description of the infrastructure used. The file is documented, so take a look at it to understand all of its entries. If you're using external resources, you need to edit this file according to your needs. You'll need to edit:

* `inf.sql.host` with the host name of the SQL Server
* `inf.sql.common` entries to provide your SQL user and password. `Pid` is not used when using external resources (it is used to set the specific product ID for the SQL Server container).
* `inf.sql.catalog`, `inf.sql.ordering`, `inf.sql.identity`: to provide the database names for the catalog, ordering and identity services
* `mongo.host`: with the host name of the MongoDB server
* `mongo.locations`, `mongo.marketing` with the database names for the locations and marketing services
* `redis.basket.constr` with the connection string to Redis for the Basket service. Note that `redis.basket.svc` is not used when using external services
* `redis.keystore.constr` with the connection string to Redis for the Keystore service. Note that `redis.keystore.svc` is not used when using external services
* `eventbus.constr` with the connection string to Azure Service Bus, and `eventbus.useAzure` set to `true` to use Azure Service Bus. Note that `eventbus.svc` is not used when using external services
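
As an illustration only, the dotted entry names above map onto a YAML structure roughly like the following. All host names and values here are placeholders, and any leaf key not listed above is hypothetical; the authoritative schema is the documented `inf.yaml` itself:

```yaml
# Illustrative sketch of inf.yaml entries for external resources; not the real file
inf:
  sql:
    host: my-sql-server.example.com        # inf.sql.host
    common: {}                             # inf.sql.common: your SQL user/password
mongo:
  host: my-mongo.example.com               # mongo.host
redis:
  basket:
    constr: my-redis.example.com:6380      # redis.basket.constr (connection string)
eventbus:
  constr: Endpoint=sb://my-bus.servicebus.windows.net/   # eventbus.constr
  useAzure: true                           # use Azure Service Bus
```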

### Using Azure storage for Catalog Photos

Using Azure storage for catalog (and marketing) photos is not directly supported, but you can accomplish it by editing the file `k8s/helm/catalog-api/templates/configmap.yaml`. Search for the line:

```
catalog__PicBaseUrl: http://{{ $webshoppingapigw }}/api/v1/c/catalog/items/[0]/pic/
```

And replace it with:

```
catalog__PicBaseUrl: http://<url-of-the-storage>/
```

In the same way, to use Azure storage for the marketing service, you have to edit the file `k8s/helm/marketing-api/templates/configmap.yaml` and replace the line:

```
marketing__PicBaseUrl: http://{{ $webshoppingapigw }}/api/v1/c/catalog/items/[0]/pic/
```

with:

```
marketing__PicBaseUrl: http://<url-of-the-storage>/
```

# Deploy to Local Kubernetes

<h3>Content</h3>

- [Install/upgrade to the latest version of Docker for Desktop](#installupgrade-to-the-latest-version-of-docker-for-desktop)
- [Enable Kubernetes](#enable-kubernetes)
- [Disable / stop Kubernetes](#disable--stop-kubernetes)
- [Install Helm](#install-helm)
- [Install Helm client](#install-helm-client)
- [Install Helm server (Tiller)](#install-helm-server-tiller)
- [Install the NGINX Ingress controller](#install-the-nginx-ingress-controller)
- [Install eShopOnContainers using Helm](#install-eshoponcontainers-using-helm)
- [Known issues](#known-issues)
- [Optional - Install Kubernetes Dashboard UI](#optional---install-kubernetes-dashboard-ui)
- [IMPORTANT](#important)
- [Explore eShopOnContainers](#explore-eshoponcontainers)
- [Additional resources](#additional-resources)

## Install/upgrade to the latest version of Docker for Desktop

Start by installing or upgrading to the latest version of Docker Desktop for Windows, which includes Kubernetes support.

## Enable Kubernetes

To enable Kubernetes (k8s), check the **Enable Kubernetes** checkbox in the **Kubernetes** tab of **Docker Settings**, and then click the "Apply" button.

![image](https://user-images.githubusercontent.com/1712635/48954695-a6b5cd80-ef26-11e8-93e0-27a98c0f94c6.png)

If you also enable the "Show system containers" checkbox, you can see the Kubernetes system containers running by using `docker ps`.

![image](https://user-images.githubusercontent.com/1712635/48954875-7e7a9e80-ef27-11e8-8840-6a5e2c4aa09c.png)

This will start 11 containers in your Docker installation.

Your Docker Desktop Kubernetes installation already contains [kubectl](https://kubernetes.io/docs/reference/kubectl/overview/), the CLI for running Kubernetes commands, which you'll need for the rest of the steps here.

**IMPORTANT:** You'll also have to increase the memory allocated to Docker to at least **6144 MB**, because you'll have 70+ containers running after deploying eShopOnContainers.

### Disable / stop Kubernetes

If you ever want to remove Kubernetes from Docker Desktop, just disable Kubernetes in the **Settings > Kubernetes** page shown above and click "Apply".

You can stop/start the Kubernetes cluster from the Docker context menu in the system tray:

![image](https://user-images.githubusercontent.com/1712635/48954940-c4377700-ef27-11e8-89ac-26b9a6a2c1d4.png)

## Install Helm

[Helm](https://helm.sh/) is the package manager for Kubernetes.

### Install Helm client

You can see the installation details in the [documentation page](https://helm.sh/docs/using_helm/#installing-helm).

The easiest way is probably to use a package manager, like [Chocolatey for Windows](https://chocolatey.org/).

Then install Helm from the package manager:

```powershell
choco install kubernetes-helm
```

### Install Helm server (Tiller)

To install Tiller:

- Go to the **k8s** folder in your local copy of the eShopOnContainers repo

- Create the Tiller service account by running:

```powershell
kubectl apply -f helm-rbac.yaml
```

- Install Tiller and configure it to use the Tiller service account with the command:

```powershell
helm init --service-account tiller
```

## Install the NGINX Ingress controller

An [Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) is an API object that allows access to your clustered services from the outside.

It's like a reverse proxy that can handle load balancing, TLS, virtual hosting and the like.

[NGINX](https://github.com/kubernetes/ingress-nginx/blob/master/README.md) is the Ingress controller used for eShopOnContainers.

To install the NGINX Ingress controller, run the following commands:

1. `.\deploy-ingress.ps1`
2. `.\deploy-ingress-dockerlocal.ps1`

## Install eShopOnContainers using Helm

- Go to the **k8s\helm** folder in your local copy of the eShopOnContainers repo.

- Run this script to create all eShopOnContainers services in the Kubernetes cluster:

```powershell
.\deploy-all.ps1 -imageTag dev -useLocalk8s $true
```

Setting the parameter `useLocalk8s` to `$true` forces the script to use `localhost` as the DNS for all Helm charts, and also creates the ingresses with the correct ingress class.

This will install all the [eShopOnContainers public images](https://hub.docker.com/u/eshop/) with tag `dev` on the Docker local Kubernetes cluster. By default, all the infrastructure (SQL Server, MongoDB, RabbitMQ and Redis) is also installed in the cluster.

To check that the services are running, execute this command:

```powershell
kubectl get deployment
```

You should get an output similar to this one:

```console
NAME                             DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
eshop-apigwmm                    1         1         1            1           2h
eshop-apigwms                    1         1         1            1           2h
eshop-apigwwm                    1         1         1            1           2h
eshop-apigwws                    1         1         1            1           2h
eshop-basket-api                 1         1         1            1           2h
eshop-basket-data                1         1         1            1           2h
eshop-catalog-api                1         1         1            1           2h
eshop-identity-api               1         1         1            1           2h
eshop-keystore-data              1         1         1            1           2h
eshop-locations-api              1         1         1            1           2h
eshop-marketing-api              1         1         1            1           2h
eshop-mobileshoppingagg          1         1         1            1           2h
eshop-nosql-data                 1         1         1            1           2h
eshop-ordering-api               1         1         1            1           2h
eshop-ordering-backgroundtasks   1         1         1            1           2h
eshop-ordering-signalrhub        1         1         1            1           2h
eshop-payment-api                1         1         1            1           2h
eshop-rabbitmq                   1         1         1            1           2h
eshop-sql-data                   1         1         1            1           2h
eshop-webmvc                     1         1         1            1           2h
eshop-webshoppingagg             1         1         1            1           2h
eshop-webspa                     1         1         1            1           2h
eshop-webstatus                  1         1         1            1           2h
```

To check the public services exposed, run:

```powershell
kubectl get ing
```

You should get an output similar to this one:

```console
NAME                 HOSTS       ADDRESS     PORTS     AGE
eshop-apigwmm        localhost   localhost   80        2h
eshop-apigwms        localhost   localhost   80        2h
eshop-apigwwm        localhost   localhost   80        2h
eshop-apigwws        localhost   localhost   80        2h
eshop-identity-api   localhost   localhost   80        2h
eshop-webmvc         localhost   localhost   80        2h
eshop-webspa         localhost   localhost   80        2h
eshop-webstatus      localhost   localhost   80        2h
```

Note that the ingresses are bound to the DNS `localhost` and the host is also "localhost". So you can access the **webspa** app at `http://localhost` and the **MVC** app at `http://localhost/webmvc`.

As this is the Docker local K8s cluster, you can also see the containers running on your machine.

If you type the command:

```powershell
docker ps
```

You should see them all (something similar to this):

```console
CONTAINER ID   IMAGE                            COMMAND                  CREATED         STATUS         PORTS   NAMES
fec1e3499416   a3f21ec4bd11                     "/entrypoint.sh /ngi…"   9 minutes ago   Up 9 minutes           k8s_nginx-ingress-controller_nginx-ingress-controller-f88c75bc6-5xs2n_ingress-nginx_f1cc7094-e68f-11e8-b4b6-00155d016146_0
76485867f032   eshop/payment.api                "dotnet Payment.API.…"   2 hours ago     Up 2 hours             k8s_payment-api_eshop-payment-api-75d5f9bdf6-6zx2v_default_4a3cdab4-e67f-11e8-b4b6-00155d016146_1
c2c4640ed610   eshop/marketing.api              "dotnet Marketing.AP…"   2 hours ago     Up 2 hours             k8s_marketing-api_eshop-marketing-api-6b8c5989fd-jpxqv_default_45780626-e67f-11e8-b4b6-00155d016146_1
85301d538574   eshop/ordering.signalrhub        "dotnet Ordering.Sig…"   2 hours ago     Up 2 hours             k8s_ordering-signalrhub_eshop-ordering-signalrhub-58cf5ff6-cnlm8_default_4932c344-e67f-11e8-b4b6-00155d016146_1
7a408a98000e   eshop/ordering.backgroundtasks   "dotnet Ordering.Bac…"   2 hours ago     Up 2 hours             k8s_ordering-backgroundtasks_eshop-ordering-backgroundtasks-cc8f6d4d8-ztfk7_default_47f9cf10-e67f-11e8-b4b6-00155d016146_1
12c64b3a13e0   eshop/basket.api                 "dotnet Basket.API.d…"   2 hours ago     Up 2 hours             k8s_basket-api_eshop-basket-api-658546684d-6hlvd_default_4262d022-e67f-11e8-b4b6-00155d016146_1
133fccfeeff3   eshop/webstatus                  "dotnet WebStatus.dll"   2 hours ago     Up 2 hours             k8s_webstatus_eshop-webstatus-7f46479dc4-bqnq7_default_4dc13eb2-e67f-11e8-b4b6-00155d016146_0
00c6e4c52135   eshop/webspa                     "dotnet WebSPA.dll"      2 hours ago     Up 2 hours             k8s_webspa_eshop-webspa-64cb8df9cb-dcbwg_default_4cd47376-e67f-11e8-b4b6-00155d016146_0
d4507f1f6b1a   eshop/webshoppingagg             "dotnet Web.Shopping…"   2 hours ago     Up 2 hours             k8s_webshoppingagg_eshop-webshoppingagg-cc94fc86-sxd2v_default_4be6cdb9-e67f-11e8-b4b6-00155d016146_0
9178e26703da   eshop/webmvc                     "dotnet WebMVC.dll"      2 hours ago     Up 2 hours             k8s_webmvc_eshop-webmvc-985779684-4br5z_default_4addd4d6-e67f-11e8-b4b6-00155d016146_0
1088c281c710   eshop/ordering.api               "dotnet Ordering.API…"   2 hours ago     Up 2 hours             k8s_ordering-api_eshop-ordering-api-fb8c548cb-k68x9_default_4740958a-e67f-11e8-b4b6-00155d016146_0
12424156d5c9   eshop/mobileshoppingagg          "dotnet Mobile.Shopp…"   2 hours ago     Up 2 hours             k8s_mobileshoppingagg_eshop-mobileshoppingagg-b54645d7b-rlrgh_default_46c00017-e67f-11e8-b4b6-00155d016146_0
65463ffd437d   eshop/locations.api              "dotnet Locations.AP…"   2 hours ago     Up 2 hours             k8s_locations-api_eshop-locations-api-577fc94696-dfhq8_default_44929c4b-e67f-11e8-b4b6-00155d016146_0
5b3431873763   eshop/identity.api               "dotnet Identity.API…"   2 hours ago     Up 2 hours             k8s_identity-api_eshop-identity-api-85d9b79f4-s5ks7_default_43d6eb7c-e67f-11e8-b4b6-00155d016146_0
7c8e77252459   eshop/catalog.api                "dotnet Catalog.API.…"   2 hours ago     Up 2 hours             k8s_catalog-api_eshop-catalog-api-59fd444fb-ztvhz_default_4356705a-e67f-11e8-b4b6-00155d016146_0
94d95d0d3653   eshop/ocelotapigw                "dotnet OcelotApiGw.…"   2 hours ago     Up 2 hours             k8s_apigwws_eshop-apigwws-65474b979d-n99jw_default_41395473-e67f-11e8-b4b6-00155d016146_0
bc4bbce71d5f   eshop/ocelotapigw                "dotnet OcelotApiGw.…"   2 hours ago     Up 2 hours             k8s_apigwwm_eshop-apigwwm-857c549dd8-8w5gv_default_4098d770-e67f-11e8-b4b6-00155d016146_0
840aabcceaa9   eshop/ocelotapigw                "dotnet OcelotApiGw.…"   2 hours ago     Up 2 hours             k8s_apigwms_eshop-apigwms-5b94dfb54b-dnmr9_default_401fc611-e67f-11e8-b4b6-00155d016146_0
aabed7646f5b   eshop/ocelotapigw                "dotnet OcelotApiGw.…"   2 hours ago     Up 2 hours             k8s_apigwmm_eshop-apigwmm-85f96cbdb4-dhfwr_default_3ed7967a-e67f-11e8-b4b6-00155d016146_0
49c5700def5a   f06a5773f01e                     "docker-entrypoint.s…"   2 hours ago     Up 2 hours             k8s_basket-data_eshop-basket-data-66fbc788cc-csnlw_default_3e0c45fe-e67f-11e8-b4b6-00155d016146_0
a5db4c521807   f06a5773f01e                     "docker-entrypoint.s…"   2 hours ago     Up 2 hours             k8s_keystore-data_eshop-keystore-data-5c9c85cb99-8k56s_default_3ce1a273-e67f-11e8-b4b6-00155d016146_0
aae88fd2d810   d69a5113ceae                     "docker-entrypoint.s…"   2 hours ago     Up 2 hours             k8s_rabbitmq_eshop-rabbitmq-6b68647bc4-gr565_default_3c37ee6a-e67f-11e8-b4b6-00155d016146_0
65d49ca9589d   bbed8d0e01c1                     "docker-entrypoint.s…"   2 hours ago     Up 2 hours             k8s_nosql-data_eshop-nosql-data-579c9d89f8-mtt95_default_3b9c1f89-e67f-11e8-b4b6-00155d016146_0
090e0dde2ec4   bbe2822dfe38                     "/opt/mssql/bin/sqls…"   2 hours ago     Up 2 hours             k8s_sql-data_eshop-sql-data-5c4fdcccf4-bscdb_default_3afd29b8-e67f-11e8-b4b6-00155d016146_0
```

## Known issues

Logging in from the webmvc app results in the following error: HttpRequestException: Response status code does not indicate success: 404 (Not Found).

The reason is that the MVC app needs to access the Identity Server both from outside the container (the browser) and from inside the container (the C# code). Thus, the configuration always uses the *external URL* of the Identity Server, which in this case is just `http://localhost/identity-api`. But this external URL is incorrect when used from C# code, so the web MVC app can't access the identity API. This is the only case where this issue happens (and is the reason why we use 10.0.75.1 as the local address for the web MVC app in local development mode).

Solving this requires some manual steps:

From the `/k8s` folder run `kubectl apply -f .\nginx-ingress\local-dockerk8s\mvc-fix.yaml`. This will create two additional ingresses (for the MVC app and the Identity API) for any valid DNS that points to your machine. This enables the use of the 10.0.75.1 IP.

Update the configmap of the Web MVC app by typing (**line breaks are mandatory**):

```
kubectl patch cm cfg-eshop-webmvc --type strategic --patch @'
data:
  urls__IdentityUrl: http://10.0.75.1/identity
  urls__mvc: http://10.0.75.1/webmvc
'@
```

Update the configmap of the Identity API by typing (**line breaks are mandatory**):

```
kubectl patch cm cfg-eshop-identity-api --type strategic --patch @'
data:
  mvc_e: http://10.0.75.1/webmvc
'@
```

Restart the SQL Server pod to ensure the database is recreated again:

```
kubectl delete pod --selector app=sql-data
```

Wait until the SQL Server pod is ready to accept connections, and then restart all other pods:

```
kubectl delete pod --selector="app!=sql-data"
```

**Note:** Pods are deleted to ensure the databases are recreated, as the identity API stores its client names and URLs in the database.

Now you can access the MVC app using `http://10.0.75.1/webmvc`. All other services (like the SPA) must be accessed using `http://localhost`.

## Optional - Install Kubernetes Dashboard UI

You can deploy the Kubernetes Web UI (Dashboard) to monitor the cluster locally.

To enable the dashboard:

1. Go to the **k8s** folder in your local copy of the eShopOnContainers repo.

2. Deploy the dashboard with this command:

```powershell
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended/kubernetes-dashboard.yaml
```

3. Create a sample admin user by applying the manifest:

```powershell
kubectl apply -f .\dashboard-adminuser.yaml
```

4. Bind the admin-user to the admin role by applying the manifest:

```powershell
kubectl apply -f .\dashboard-bind-adminrole.yml
```

5. Start the dashboard proxy by running this command:

```powershell
kubectl proxy
```

6. Get the bearer token to log in to the dashboard by running this command:

```powershell
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
```
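
The `$(...)` sub-expression just selects the admin user's secret name from the listing. You can see what the `grep`/`awk` part does by running it on a canned sample of `kubectl get secret` output (the secret names below are made up):

```shell
# Fake listing standing in for `kubectl -n kube-system get secret`
sample='NAME                     TYPE                                  DATA   AGE
admin-user-token-95nxr   kubernetes.io/service-account-token   3      2h
default-token-abc12      kubernetes.io/service-account-token   3      2h'

# grep keeps the admin-user line; awk prints its first column (the secret name)
printf '%s\n' "$sample" | grep admin-user | awk '{print $1}'
# prints: admin-user-token-95nxr
```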

You should get something like this:

```console
Name:         admin-user-token-95nxr
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name=admin-user
              kubernetes.io/service-account.uid=aec979a2-7cb4-11e9-96aa-00155d013633

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI...(800+ characters)...FkM_tAclj9o8T7ALdPZciaQ
```

7. Copy the token and navigate to: http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

8. Select "Token" and paste the copied token into the "Enter token" field: \
\
![image](https://user-images.githubusercontent.com/4712635/58166300-3b604700-7c99-11e9-92b0-dd34d2e1520a.png)

You should see something like this:

![image](https://user-images.githubusercontent.com/1712635/58258718-2d1a8380-7d2b-11e9-9fd1-b98a4b90eca6.png)

From there you can explore all the components of your cluster.

### IMPORTANT

You have to manually start the dashboard proxy every time you restart the cluster, with the command:

```powershell
kubectl proxy
```

## Explore eShopOnContainers

After a while, when all services are running OK, you should get something like this:

![image](https://user-images.githubusercontent.com/1712635/48955407-e9c58000-ef29-11e8-92fc-cd9b49f9a317.png)

- WebStatus: <http://localhost/webstatus>
- WebMVC: <http://10.0.75.1/webmvc>
- WebSPA: <http://10.0.75.1/webspa>

## Additional resources

- **Kubernetes Web UI setup** \
  <https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/>

# Deploy to Windows containers

This is a draft page which will evolve while our tests and development regarding Windows Containers are completed.

> **CONTENT**

- [Supported platforms](#supported-platforms)
- [Set Docker to use Windows Container (Windows 10 only)](#set-docker-to-use-windows-container-windows-10-only)
- [The localhost loopback limitation in Windows Containers Docker hosts](#the-localhost-loopback-limitation-in-windows-containers-docker-hosts)
- [Deploy Windows Containers of eShopOnContainers](#deploy-windows-containers-of-eshoponcontainers)
- [1. Compile the .NET application/services and build Docker images for Windows Containers](#1-compile-the-net-applicationservices-and-build-docker-images-for-windows-containers)
- [2. Deploy/run the containers](#2-deployrun-the-containers)
- [Test/use the eShopOnContainers MVC app in a browser](#testuse-the-eshoponcontainers-mvc-app-in-a-browser)
- [RabbitMQ user and password](#rabbitmq-user-and-password)
- [Using custom login/password for RabbitMQ (if needed)](#using-custom-loginpassword-for-rabbitmq-if-needed)

## Supported platforms

**Windows 10** - Development environment:

- Install **[Docker Community Edition](https://store.docker.com/editions/community/docker-ce-desktop-windows?tab=description)** (Docker CE, formerly **Docker for Windows**)
- Support via forums/GitHub
- Can switch between Windows container development and Linux containers (in a VM). There is no plan to drop either OS from Docker CE
- Designed for devs only. Not for production

**Windows Server 2016** - Production environment:

- Install **[Docker Enterprise Edition](https://store.docker.com/editions/enterprise/docker-ee-server-windows?tab=description)** (Docker EE)
- Designed to run apps in production
- Call Microsoft for support. If it's a Docker rather than a Windows problem, they escalate to Docker and get it solved

Docker might provide a per-incident support system for Docker Community Edition, or provide an "EE Desktop" for developers, but it's their call to do so, not Microsoft's.

## Set Docker to use Windows Container (Windows 10 only)

On Windows 10 you need to set Docker to use Windows containers instead of Linux containers (on Windows Server 2016, Windows Containers are used by default). To do this, you must first have enabled container support in Windows 10: in "Turn Windows features on or off", select "Containers":

![image](https://user-images.githubusercontent.com/1712635/31190803-da971718-a8f7-11e7-865b-ab8b7acadeab.png)

Then right-click the Docker icon in the notification bar and select the option "Switch to Windows containers". If you don't see this option but see "Switch to Linux containers" instead, you're already using Windows containers.

## The localhost loopback limitation in Windows Containers Docker hosts

Due to a default NAT limitation in current versions of Windows (see [https://blog.sixeyed.com/published-ports-on-windows-containers-dont-do-loopback/](https://blog.sixeyed.com/published-ports-on-windows-containers-dont-do-loopback/)) you can't access your containers using `localhost` from the host computer.

There is further information here, too: https://blogs.technet.microsoft.com/virtualization/2016/05/25/windows-nat-winnat-capabilities-and-limitations/

That [limitation has been removed beginning with Build 17025](https://blogs.technet.microsoft.com/networking/2017/11/06/available-to-windows-10-insiders-today-access-to-published-container-ports-via-localhost127-0-0-1/) (as of early 2018, still only available to Windows Insiders, not in the public/stable release). With Windows 10 Build 17025 or later, access to published container ports via "localhost"/127.0.0.1 is available.

Until you can use a newer build of Windows 10 or Windows Server 2016, instead of localhost you can use either an IP address from the host's network card (for example, suppose you have the address 192.168.0.1), or the DockerNAT IP address, which is `10.0.75.1`. If you don't see that IP (`10.0.75.1`) when you run `ipconfig`, you'll need to switch to Linux containers so Docker creates the Docker NAT, and then switch back to Windows containers (right-click the Docker icon in the task bar).
|
||||
|
||||
If you use `start-windows-containers.ps1` to start the containers, as explained in the following section, that script will create environment variables with that IP for you, but if you directly use docker-compose, then you have to set the following environment variables:
|
||||
|
||||
Where you see `10.75.0.1` you could also use your network card IP discovered with `ipconfig`, or a production DNS name or IP if this were a production deployment.
|
||||
|
||||
* `ESHOP_EXTERNAL_DNS_NAME_OR_IP` to `10.75.0.1`
|
||||
* `ESHOP_AZURE_STORAGE_CATALOG_URL` to `http://10.0.75.1:5101/api/v1/catalog/items/[0]/pic/`
|
||||
* `ESHOP_AZURE_STORAGE_MARKETING_URL` to `http://10.0.75.1:5110/api/v1/campaigns/[0]/pic/`
|
||||
|
||||
Note that the two last env-vars must be set only if you have not set them already because you were using Azure Storage for the images. If you are using azure storage for the images, you don't need to provide those URLs.
|
||||
|
||||
Once these variables are set you can run docker-compose to start the containers and navigate to `http://10.0.75.1:5100` to view the MVC Web app.
|
||||
|
||||
Using `start-windows-containers.ps1` is simpler as it'll create the env-vars for you.
|
||||
|
||||
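As a concrete sketch, the variables above can be exported before calling docker-compose. POSIX shell syntax is shown for brevity; on CMD use `set NAME=value`, on PowerShell use `$env:NAME = "value"`. The `10.0.75.1` address assumes the default DockerNAT IP; substitute your network card IP if yours differs.

```shell
# Point the external DNS/IP variables at the DockerNAT address
# (assumption: the default 10.0.75.1).
export ESHOP_EXTERNAL_DNS_NAME_OR_IP="10.0.75.1"
export ESHOP_AZURE_STORAGE_CATALOG_URL="http://10.0.75.1:5101/api/v1/catalog/items/[0]/pic/"
export ESHOP_AZURE_STORAGE_MARKETING_URL="http://10.0.75.1:5110/api/v1/campaigns/[0]/pic/"

# Quick sanity check that the variable is set:
echo "External endpoint: $ESHOP_EXTERNAL_DNS_NAME_OR_IP"

# Then start the containers (commented out here; see the next section):
# docker-compose -f docker-compose.yml -f docker-compose.windows.yml up
```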
## Deploy Windows Containers of eShopOnContainers

Since eShopOnContainers uses Docker multi-stage builds, the .NET application bits are compiled by Docker itself right before building the Docker images.

Although you can create the Docker images when trying to run the containers, let's split the process in two steps so it is clearer.

### 1. Compile the .NET application/services and build Docker images for Windows Containers

To compile the bits and build the Docker images, run:

```console
cd <root-folder-of-eshoponcontainers>
docker-compose -f docker-compose.yml -f docker-compose.windows.yml build
```

**Note**: Be sure to pass both `-f` files when building images for Windows containers!
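After the build finishes, an optional sanity check is to list the resulting images. The `eshop/` prefix is an assumption based on the default image naming used by the compose files:

```console
docker images --filter=reference="eshop/*"
```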
### 2. Deploy/run the containers

The easiest way to run/start the Windows Containers of eShopOnContainers is by running this PowerShell script:

`start-windows-containers.ps1`

You can find this script at `/cli-windows/start-windows-containers.ps1`.

Otherwise, you could also run it directly with `docker-compose up`, but then you'd be missing a few environment variables needed for Windows containers. See the section below on the environment variables you also need to configure.

Under the covers, in any case, `start-windows-containers.ps1` runs this command to deploy/run the containers:

```console
set ESHOP_OCELOT_VOLUME_SPEC=C:\app\configuration
docker-compose -f docker-compose.yml -f docker-compose.override.yml -f docker-compose.windows.yml -f docker-compose.override.windows.yml up
```

**IMPORTANT**: You need to include those files when running `docker-compose up`, **and the `ESHOP_OCELOT_VOLUME_SPEC` environment variable must be set to `C:\app\configuration`**. You also have to set the environment variables related to the localhost loopback limitation mentioned at the beginning of this page (if it applies to your environment).

Just for reference, here are the docker-compose files and what they do:

1. `docker-compose.yml`: Main compose file. Defines all services for both Linux and Windows and sets the base images for Linux.
2. `docker-compose.override.yml`: Main override file. Defines all configuration for both Linux and Windows, with Linux-based defaults.
3. `docker-compose.windows.yml`: Overrides some of the previous data (like images) for Windows containers.
4. `docker-compose.override.windows.yml`: Adds Windows-only configuration.
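If you want to inspect what the four compose files produce once merged, `docker-compose config` prints the effective configuration without starting anything, which is a handy way to confirm the Windows overrides were picked up:

```console
docker-compose -f docker-compose.yml -f docker-compose.override.yml -f docker-compose.windows.yml -f docker-compose.override.windows.yml config --services
```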
## Test/use the eShopOnContainers MVC app in a browser

Open a browser and navigate to the following URL:

`http://10.0.75.1:5100`

## RabbitMQ user and password

For RabbitMQ we use the [spring2/rabbitmq](https://hub.docker.com/r/spring2/rabbitmq/) image, which provides a ready-to-use RabbitMQ. It is configured to accept AMQP connections with the credentials `admin:password` (unlike the RabbitMQ Linux image, which doesn't require any user/password when creating AMQP connections).

If you use the `start-windows-containers.ps1` script to launch the containers, or include the file `docker-compose.override.windows.yml` in the `docker-compose` command, the containers will be configured to use this login/password, so everything will work.

## Using a custom login/password for RabbitMQ (if needed)

**Note**: Read this only if you use another RabbitMQ image (or server) that requires its own user/password.

Any required user/password is supported through the environment variables `ESHOP_SERVICE_BUS_USERNAME` and `ESHOP_SERVICE_BUS_PASSWORD`, which set the username and password used when connecting to RabbitMQ. So:

* In Linux these variables should be unset (or empty) **unless you're using an external RabbitMQ that requires a specific login/password**.
* In Windows these variables should be set.

To set these variables you have two options:

1. Just set them in your shell.
2. Edit the `.env` file and add these variables.
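For example, the `.env` entries could look like this. The values are placeholders for whatever your external RabbitMQ requires, not real credentials:

```
ESHOP_SERVICE_BUS_USERNAME=myrabbituser
ESHOP_SERVICE_BUS_PASSWORD=myrabbitpassword
```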
Once you have set these variables and want to launch the containers, you can use:

```powershell
.\cli-windows\start-windows-containers.ps1 -customEventBusLoginPassword $true
```

Passing the parameter `-customEventBusLoginPassword $true` to the script forces it to use the login/password set in the environment variables instead of the default one (the one needed for spring2/rabbitmq).

If you prefer to use `docker-compose`, you can: just call it without the `docker-compose.override.windows.yml` file:

```cmd
set ESHOP_OCELOT_VOLUME_SPEC=C:\app\configuration
docker-compose -f docker-compose.yml -f docker-compose.override.yml -f docker-compose.windows.yml up
```
---

# Deploying Azure resources
This page contains links to README files about deploying Azure resources; see [Using Azure resources](Using-Azure-resources) for details on using them with eShopOnContainers.

All related information is in the folder [**deploy/az** of the eShopOnContainers repo](https://github.com/dotnet-architecture/eShopOnContainers/tree/dev/deploy/az).

## Pre-requisites

1. [Azure CLI 2.0 installed](https://docs.microsoft.com/cli/azure/install-azure-cli)
2. An Azure subscription

Log into your Azure subscription by typing `az login` (note that you may need to use `az account set` to select the subscription to use). Refer to [this article](https://docs.microsoft.com/cli/azure/authenticate-azure-cli) for more details.
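A typical login sequence looks like this; the subscription name is a placeholder for one of your own:

```console
az login
az account list --output table
az account set --subscription "My Subscription Name"
```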
## Deploying using CLI

See the [README for Azure resource creation scripts](https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/deploy/az/readme.md).

### Virtual machines

1. [Deploying a Linux VM to run a single-server **development environment** using docker-machine](https://github.com/dotnet-architecture/eShopOnContainers/tree/dev/deploy/az/vms/docker-machine.md)
2. [Deploying a Linux VM or Windows Server 2016 VM to run a single-server **testing environment** using an ARM template](https://github.com/dotnet-architecture/eShopOnContainers/tree/dev/deploy/az/vms/plain-vm.md)

Using `docker-machine` is the recommended way to create a VM with Docker installed, but it is limited to Linux-based VMs.

### Azure resources used by the services

1. [Deploying SQL Server and databases](https://github.com/dotnet-architecture/eShopOnContainers/tree/dev/deploy/az/sql/readme.md)
2. [Deploying Azure Service Bus](https://github.com/dotnet-architecture/eShopOnContainers/tree/dev/deploy/az/servicebus/readme.md)
3. [Deploying Redis Cache](https://github.com/dotnet-architecture/eShopOnContainers/tree/dev/deploy/az/redis/readme.md)
4. [Deploying CosmosDb](https://github.com/dotnet-architecture/eShopOnContainers/tree/dev/deploy/az/cosmos/readme.md)
5. [Deploying Catalog Storage](https://github.com/dotnet-architecture/eShopOnContainers/tree/dev/deploy/az/storage/catalog/readme.md)
6. [Deploying Marketing Storage](https://github.com/dotnet-architecture/eShopOnContainers/tree/dev/deploy/az/storage/marketing/readme.md)
7. [Deploying Marketing Azure functions](https://github.com/dotnet-architecture/eShopOnContainers/tree/dev/deploy/az/azurefunctions/readme.md)
---

# Deployment
## Deployment placeholder

- [Deploy to local Kubernetes](Deploy-to-Local-Kubernetes)
---

# DevOps
## DevOps placeholder

## Additional resources

- Azure DevOps build/release pipeline best practices and artifacts to reproduce it \
  https://github.com/dotnet-architecture/eShopOnContainers/issues/949
---

# Docker compose deployment files
The root folder of the repo contains all docker-compose files (`docker-compose*.yml`). Here is a list of all of them and their purpose, for different deployment needs.

> **CONTENT**

- [Run eShopOnContainers locally](#run-eshoponcontainers-locally)
- [Run eShopOnContainers on a remote docker host](#run-eshoponcontainers-on-a-remote-docker-host)
- [Run eShopOnContainers on Windows containers](#run-eshoponcontainers-on-windows-containers)
- [Run "infrastructure" containers](#run-%22infrastructure%22-containers)
- [Other files](#other-files)

## Run eShopOnContainers locally

* `docker-compose.yml`: Contains **the definition of all images needed for running eShopOnContainers**.
* `docker-compose.override.yml`: Contains the base configuration for all images of the previous file.

Usually these two files are used together. The standard way to start eShopOnContainers from the CLI is:

```console
docker-compose -f docker-compose.yml -f docker-compose.override.yml up
```

This starts eShopOnContainers with all containers running locally; it is the default development environment.

## Run eShopOnContainers on a remote docker host

* `docker-compose.prod.yml`: A replacement for `docker-compose.override.yml` that contains configuration more suitable for a "production" environment, or for when you need to run the services using an external docker host.

```console
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up
```

When using this file, the following environment variables must be set:

* `ESHOP_PROD_EXTERNAL_DNS_NAME_OR_IP` with the IP or DNS name of the docker host that runs the services (`localhost` can be used if needed).
* `ESHOP_AZURE_STORAGE_CATALOG` with the URL of the Azure Storage that will host the catalog images.
* `ESHOP_AZURE_STORAGE_MARKETING` with the URL of the Azure Storage that will host the marketing campaign images.

You might wonder why an external image resource (storage) is needed when using `docker-compose.prod.yml` instead of `docker-compose.override.yml`. The answer is related to a limitation of the Docker Compose file format. This is how the environment configuration of the Catalog microservice is set in `docker-compose.override.yml`:

```yml
PicBaseUrl=${ESHOP_AZURE_STORAGE_CATALOG:-http://localhost:5101/api/v1/catalog/items/[0]/pic/}
```

`PicBaseUrl` is set to the value of `ESHOP_AZURE_STORAGE_CATALOG` if that variable is set to any non-blank value; otherwise it is set to `http://localhost:5101/api/v1/catalog/items/[0]/pic/`. That works perfectly in a local environment, where you run all your services on `localhost`: by setting (or not setting) `ESHOP_AZURE_STORAGE_CATALOG` you choose whether or not to use Azure Storage for the images (if you don't use Azure Storage, images are served locally by the Catalog service). But when you run the services on an external docker host, specified in `ESHOP_PROD_EXTERNAL_DNS_NAME_OR_IP`, the configuration would need to be:

```yml
PicBaseUrl=${ESHOP_AZURE_STORAGE_CATALOG:-http://${ESHOP_PROD_EXTERNAL_DNS_NAME_OR_IP}:5101/api/v1/catalog/items/[0]/pic/}
```

That is: use `ESHOP_AZURE_STORAGE_CATALOG` if set, and if not, use `http://${ESHOP_PROD_EXTERNAL_DNS_NAME_OR_IP}:5101/api/v1/catalog/items/[0]/pic/`. Unfortunately, docker-compose does not substitute variables inside variables, so if `ESHOP_AZURE_STORAGE_CATALOG` is not set, the value `PicBaseUrl` gets is literally `http://${ESHOP_PROD_EXTERNAL_DNS_NAME_OR_IP}:5101/api/v1/catalog/items/[0]/pic/`, without any substitution.
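You can check this yourself with `docker-compose config`, which prints the resolved configuration without starting anything; with `ESHOP_AZURE_STORAGE_CATALOG` unset, the `${ESHOP_PROD_EXTERNAL_DNS_NAME_OR_IP}` placeholder should appear unexpanded in the `PicBaseUrl` value:

```console
docker-compose -f docker-compose.yml -f docker-compose.prod.yml config
```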
## Run eShopOnContainers on Windows containers

All `docker-compose-windows*.yml` files have a 1:1 relationship with the file of the same name without `-windows`. These files are used to run Windows containers instead of Linux containers.

* `docker-compose-windows.yml`: Contains the definitions of all containers needed to run eShopOnContainers using Windows containers (equivalent to `docker-compose.yml`).
* `docker-compose-windows.override.yml`: Contains the base configuration for all Windows containers.

**Note**: We plan **to remove** the `docker-compose-windows.override.yml` file, because it is **exactly the same** as `docker-compose.override.yml`. The reason for its existence is historical, and it is no longer needed. You can use `docker-compose.override.yml` instead.

* `docker-compose-windows.prod.yml`: The equivalent of `docker-compose.prod.yml` for Windows containers. As with `docker-compose-windows.override.yml`, this file will be deleted in the near future, so you should use `docker-compose.prod.yml` instead.

## Run "infrastructure" containers

These files were intended to provide a fast way to start only the "infrastructure" containers (SQL Server, Redis, etc.). **These files are deprecated and will be deleted in the near future**:

* `docker-compose-external.override.yml`
* `docker-compose-external.yml`

If you want to start only certain containers, use `docker-compose -f ... -f ... up container1 container2 containerN`, as specified in the [compose documentation](https://docs.docker.com/compose/reference/up/).

## Other files

* `docker-compose.nobuild.yml`: Contains the definition of all images needed to run eShopOnContainers. It contains **the same images as `docker-compose.yml`**, but without any `build` instruction. If you use this file instead of `docker-compose.yml` when launching the project and you don't have the images built locally, **the images will be pulled from Docker Hub**. This file is not intended for development use, but for some CI/CD scenarios.
* `docker-compose.vs.debug.yml`: Used by the Docker Tools of VS2017; should not be used directly.
* `docker-compose.vs.release.yml`: Used by the Docker Tools of VS2017; should not be used directly.

**Note**: The reason we need `docker-compose.nobuild.yml` is [docker-compose issue #3391](https://github.com/docker/compose/issues/3391). Once it is solved, the `--no-build` parameter of docker-compose can be used safely in CI/CD environments, and the need for this file will disappear.
---

# Docker configuration
The initial Docker for Desktop configuration is not suitable to run eShopOnContainers, because the application runs a total of 25 Linux containers.

Even though the microservices are rather light, the application also runs SQL Server, Redis, MongoDB, RabbitMQ and Seq as separate containers. The SQL Server container hosts four databases (for different microservices) and takes a significant amount of memory.

So it's important to assign enough memory and CPU to Docker.

## Memory and CPU

Once Docker for Windows is installed, go to Settings > Advanced to configure the minimum amount of memory and CPU:

- Memory: 4096 MB
- CPU: 2

This amount of memory is the absolute minimum to have the app running, which is why a 16 GB RAM machine is recommended for an optimal configuration.

![](images/Windows-setup/docker-configuration.png)

[What can I do if my computer only has 8 GB RAM?](#low-memory-configuration)
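You can check the resources actually allocated to the Docker VM from the CLI; `NCPU` and `MemTotal` are standard fields of the `docker info` Go-template output:

```console
docker info --format "CPUs: {{.NCPU}}  Memory: {{.MemTotal}} bytes"
```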
## Shared drives

This step is optional but recommended, as Docker sometimes needs to access the shared drives when building, depending on the build actions.

It is not really necessary when building from the CLI, but it is mandatory when building from Visual Studio, so it can access the code to build.

The drive you'll need to share depends on where you place your source code.

![](images/Windows-setup/docker-shared-drives.png)

## Networking

**IMPORTANT**: Ports 5100 to 5105 must be open in the local firewall, so authentication with the STS (Security Token Service container, based on IdentityServer) can be done through the 10.0.75.1 IP, which should be available and already set up by Docker. These ports are also needed for remote client apps, like the Xamarin app or the SPA app in a remote browser.

You can manually create a rule in the local firewall of your development machine, or you can just run the **add-firewall-rules-for-sts-auth-thru-docker.ps1** script available in the solution's **cli-windows** folder.
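If you prefer to create the rule manually, a single `New-NetFirewallRule` call from an elevated PowerShell prompt should be roughly equivalent to what the script does. This is a sketch, not the script's exact contents, and the display name is arbitrary:

```powershell
# Open inbound TCP ports 5100-5105 for eShopOnContainers / STS authentication.
New-NetFirewallRule -DisplayName "eShopOnContainers STS" `
    -Direction Inbound -Protocol TCP -LocalPort 5100-5105 -Action Allow
```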

|
||||
|
||||
**NOTE:** If you get the error **Unable to obtain configuration from: `http://10.0.75.1:5105/.well-known/openid-configuration`** you might need to allow the program `vpnkit` for connections to and from any computer through all ports.
|
||||
|
||||
If you are working within a corporate VPN you might need to run this power shell command every time you power up your machine, to allow access from `DockerNAT` network:
|
||||
|
||||
```
|
||||
Get-NetConnectionProfile | Where-Object { $_.InterfaceAlias -match "(DockerNAT)" } | ForEach-Object { Set-NetConnectionProfile -InterfaceIndex $_.InterfaceIndex -NetworkCategory Private }
|
||||
```
|
||||
|
||||
## Low memory configuration
|
||||
|
||||
If your computer only has 8 GB RAM, you **might** still get eShopOnContainers up and running, but it's not sure and you will not be able to run Visual Studio. You might be able to run VS Code and you'll be limited to the CLI. You might even need to run Chromium or any other bare browser, Chrome will most probably not do. You'll also need to close any other program running in your machine.
|
||||
|
||||
The easiest way to get Chromium binaries directly from Google is to install the [node-chromium package](https://www.npmjs.com/package/chromium) in a folder and then look for the `chrome.exe` program, as follows:
|
||||
|
||||
1. Install node.
|
||||
|
||||
2. Create a folder wherever best suits you.
|
||||
|
||||
3. Run `npm install --save chromium`
|
||||
|
||||
4. After installation finishes go to folder `node_modules\chromium\lib\chromium\chrome-win` (in Windows) to find `chrome.exe`
|
||||
|
||||
The installation process should look something like this:
|
||||
|
||||

|
||||
|
||||
## Additional resources
|
||||
|
||||
- **[eShopOnContainers issue] Can't display login page on MVC app** \
|
||||
<https://github.com/dotnet-architecture/eShopOnContainers/issues/295#issuecomment-327973650>
|
||||
|
||||
- **[docs.microsoft.com issue] Configuring Windows vEthernet Adapter Networks to Properly Support Docker Container Volumes** \
|
||||
<https://github.com/dotnet/docs/issues/11528#issuecomment-486662817>
|
||||
|
||||
- **[eShopOnContainers PR] Add Power Shell script to set network category to private for DockerNAT** \
|
||||
<https://github.com/dotnet-architecture/eShopOnContainers/pull/1019>
|
||||
|
||||
- **Troubleshoot Visual Studio development with Docker (Networking)** \
|
||||
<https://docs.microsoft.com/en-us/visualstudio/containers/troubleshooting-docker-errors?view=vs-2019#errors-specific-to-networking-when-debugging-your-application>
|
---

# Docker host
## Deploying to a "production" environment

_IMPORTANT: This section is in an early draft state, because the current version of eShopOnContainers has been tested mostly on the plain Docker engine, with just smoke tests on some orchestrators like AKS and Kubernetes._

However, since a few folks are testing it in "production" environments, outside the dev PC (VS2017 + Docker) and in a production-like environment such as Azure or a regular Docker host, here is some important information to consider, in addition to the [CLI setup procedure detailed for Windows](Windows-setup).

The default configuration in the `docker-compose.override.yml` file is set up to make it very straightforward to test the solution on a Windows PC with Visual Studio or the CLI: almost just F5 after the first configuration. For instance, it uses the "**10.0.75.1**" IP used by default in all "Docker for Windows" installations, so the Identity container can serve the login page when redirected from the client apps **without you having to change any specific external IP**.

However, when deploying eShopOnContainers to other environments, like a real Docker host, or simply if you want to access the apps from remote applications, some settings for the Identity service need to be changed, by using the configuration specified in a "production" docker-compose file:

docker-compose.**prod**.yml (**for "production" environments**).

That file uses the environment variables provided by the "**.env**" file, which basically holds the local name and the "external" IP or DNS name to be used by remote client apps.

## Steps

It takes just a couple of basic steps to deploy to a simple test "production" environment.

### 1. Configure a valid "external" IP address

Basically, you need to change the IP (or DNS name) in the .env file to the IP or DNS name of your Docker host or orchestrator cluster. If it is a local machine using Docker for Windows, it will be your real Wi-Fi or Ethernet card IP:
<https://github.com/dotnet/eShopOnContainers/blob/master/.env>

The IP below should be swapped for your real IP or DNS name, like 192.168.88.248, if testing from remote browsers or mobile devices:

```
ESHOP_EXTERNAL_DNS_NAME_OR_IP=localhost
ESHOP_PROD_EXTERNAL_DNS_NAME_OR_IP=10.121.122.92
```

### 2. Include the docker-compose "prod" .yml file

To deploy with docker-compose, instead of doing a regular `docker-compose up`, run:

```console
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up
```

This uses the docker-compose.**prod**.yml file, which uses the EXTERNAL IP or DNS name:
<https://github.com/dotnet/eShopOnContainers/blob/master/docker-compose.prod.yml>

## Additional resources

- [Windows setup](Windows-setup)
- [Docker-compose files](Docker-compose-deployment-files)
- [Using Azure resources](Using-Azure-resources)
---

# ELK stack
This article contains a brief introduction to setting up the [ELK stack](https://www.elastic.co/elk-stack) with eShopOnContainers. ELK is an acronym for Elasticsearch, Logstash and Kibana, one of the most widely used logging stacks in the industry.

For a more general introduction to structured logging, see the [Serilog & Seq](Serilog-and-Seq) page in this wiki.

> **CONTENT**

- [Configuring ELK in Localhost](#configuring-elk-in-localhost)
- [Configuring Logstash index on Kibana](#configuring-logstash-index-on-kibana)
- [Configuring ELK on Azure VM](#configuring-elk-on-azure-vm)
- [Configuring the bitnami environment](#configuring-the-bitnami-environment)

![](images/ELK/elk-logo.png)

## Configuring ELK in Localhost

eShopOnContainers is ready to work with ELK; you only need to set the configuration parameter **LogstashUrl** in the **Serilog** section. To achieve this, you can modify this parameter in the appsettings.json of every service, or set it via the environment variable **Serilog:LogstashUrl**.
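For instance, the relevant fragment of a service's `appsettings.json` could look like this. The URL is a placeholder for wherever your Logstash HTTP input listens; port `8080` matches the Bitnami default mentioned later on this page:

```json
{
  "Serilog": {
    "LogstashUrl": "http://localhost:8080"
  }
}
```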
There is another option: a zero-configuration environment for testing the integration, launched via the `docker-compose` command from the root directory of eShopOnContainers:

```sh
docker-compose -f docker-compose.yml -f docker-compose.override.yml -f docker-compose.elk.yml build
docker-compose -f docker-compose.yml -f docker-compose.override.yml -f docker-compose.elk.yml up
```

### Configuring Logstash index on Kibana

Once you have started and configured your application, you only need to configure the Logstash index on Kibana. With the docker-compose setup, Kibana is available at <http://localhost:5601>.

If you access Kibana too early, you may see the following error. This is normal; depending on your machine, the Kibana stack needs a bit of time to start up.

![](images/ELK/kibana-error.png)

Wait a bit and refresh the page. The first time you enter, you need to configure an index pattern; in the `docker-compose` configuration, the index pattern name is **eshops-\***.

![](images/ELK/kibana-index-pattern.png)

With the index pattern configured, you can enter the Discover section and start viewing how the tool is collecting the logging information.

![](images/ELK/kibana-discover.png)

## Configuring ELK on Azure VM

Another option is to use a preconfigured virtual machine with Logstash, Elasticsearch and Kibana, and point the **LogstashUrl** configuration parameter at it. To do this, go to Microsoft Azure and search for a certified ELK virtual machine.

![](images/ELK/azure-elk-certified.png)

These certified options come preconfigured (network, virtual machine type, OS, RAM, disks) to give you a good, performant starting point for ELK.

![](images/ELK/azure-elk-configuration.png)

When you have configured the main aspects of your virtual machine, you will reach a final "Review & create" step like this:

![](images/ELK/azure-elk-review.png)

### Configuring the bitnami environment

This virtual machine comes with a lot of the configuration plumbing already done. If you want to change any of the default configuration, you can refer to this documentation:
<https://docs.bitnami.com/virtual-machine/apps/elk/get-started/>

The only thing you have to change is the Logstash configuration inside the machine, in the file `/opt/bitnami/logstash/conf/logstash.conf`.
Edit the file and overwrite it with this configuration:

```conf
input {
  http {
    #default host 0.0.0.0:8080
    codec => json
  }
}

## Add your filters / logstash plugins configuration here
filter {
  split {
    field => "events"
    target => "e"
    remove_field => "events"
  }
}

output {
  elasticsearch {
    hosts => "elasticsearch:9200"
    index => "eshops-%{+xxxx.ww}"
  }
}
```

To do this, you can connect to the VM via SSH and edit the file using, for example, the vi editor.
Once the file is edited, check that there are inbound port rules created for the Logstash service. You can do this in the Networking menu of your ELK virtual machine resource in Azure.

![](images/ELK/azure-elk-networking.png)

The only thing that remains is to connect to your VM via a browser and check that the Bitnami splash page is shown.

![](images/ELK/bitnami-splash.png)

You can get the access password from your virtual machine's boot diagnostics in Azure; there is a message that shows you your password.

Once you have the user and password, you can access the Kibana tool, create the `eshops-*` index pattern (documented at the beginning of this page), and then start to discover.

![](images/ELK/kibana-data.png)
---

# Explore the application
This page covers the exploration of the eShopOnContainers application and assumes you've already:
|
||||
|
||||
- Setup your development system for [Windows](Windows-setup) or [Mac](Mac-setup), at least up to the point of running eShopOnContainers from the CLI.
|
||||
|
||||
> **CONTENT**
|
||||
|
||||
- [MVC Web app](#mvc-web-app)
|
||||
- [Authenticating and creating an order on the Web MVC app](#authenticating-and-creating-an-order-on-the-web-mvc-app)
|
||||
- [SPA Web app](#spa-web-app)
|
||||
- [Swagger UI - REST API microservices - Catalog](#swagger-ui---rest-api-microservices---catalog)
|
||||
- [Xamarin.Forms mobile apps for Android, iOS and Windows](#xamarinforms-mobile-apps-for-android-ios-and-windows)
|
||||
- [All applications and microservices](#all-applications-and-microservices)
|
||||
|
||||
## MVC Web app
|
||||
|
||||
Open a browser and type <http://localhost:5100> and hit enter.
|
||||
You should see the MVC application like in the following screenshot:
|
||||
|
||||

|
||||
|
||||
### Authenticating and creating an order on the Web MVC app
|
||||
|
||||
When you try the Web MVC application by using the url <http://localhost:5100>, you'll be able to test the home page which is also the catalog page. But if you want to add articles to the basket you need to login first at the login page which is handled by the STS microservice/container (Security Token Service). At this point, you could register your own user/customer or you can also use a convenient default user/customer named **demoUser@microsoft.com** so you don't need to register your own user and it'll be easier to explore.
|
||||
The credentials for this demo user are:
|
||||
|
||||
- User: **demouser@microsoft.com**
|
||||
- Password: **Pass@word1**
|
||||
|
||||
Below you can see the login page when providing those credentials.
|
||||
|
||||

|
||||
|
||||
## SPA Web app
|
||||
|
||||
While having the containers running, open a browser and type `http://localhost:5104/` and hit enter.
|
||||
You should see the SPA application like in the following screenshot:
|
||||
|
||||

|
||||
|
||||
## Swagger UI - REST API microservices - Catalog
|
||||
|
||||
While having the containers running, open a browser and type `http://localhost:5101` and hit enter.
|
||||
You should see the Swagger UI page for that microservice that allows you to test the Web API, like in the following screenshot:
|
||||
|
||||

|
||||
|
||||
Then, after providing the size (i.e. 10) and the current page (i.e. 1) for the data of the catalog, you can run the service hitting the "Try it out!" button and see the returned JSON Data:
|
||||
|
||||

|
||||
|
||||
## Xamarin.Forms mobile apps for Android, iOS and Windows

You can deploy the Xamarin app to real iOS, Android or Windows devices.

You can also test it on a Hyper-V based Android emulator like the Visual Studio Android Emulator (do NOT install Google's Android emulator, or it will break Docker and Hyper-V, as mentioned in the [Windows setup page](Windows-setup)).

By default, the Xamarin app shows fake data from mock services. In order to really access the microservices/containers in Docker from the mobile app, you need to:

- Disable the mock services in the Xamarin app by setting **UseMockServices = false** in App.xaml.cs, and specify the host IP in `BaseEndpoint = "http://10.106.144.28"` in GlobalSettings.cs. Both files are in the Xamarin.Forms project (PCL).
- Alternatively, you can change that IP through the app UI, by modifying the IP address in the Settings page of the app, as shown in the screenshot below.
- In addition, you need to make sure that the TCP ports used by the services are open in the local firewall.

## All applications and microservices

Once the containers are deployed, you should be able to access any of the services at the following URLs or connection strings, from your dev machine:

- Web apps
  - Web MVC: <http://localhost:5100>
  - Web SPA: <http://localhost:5104>
  - Web Status: <http://localhost:5107>
- Microservices
  - Catalog microservice: <http://localhost:5101> (Not secured)
  - Ordering microservice: <http://localhost:5102> (Requires login - Click on the Authorize button)
  - Basket microservice: <http://localhost:5103> (Requires login - Click on the Authorize button)
  - Identity microservice: <http://localhost:5105> (View the "discovery document")
- Infrastructure
  - SQL Server (connect with [SSMS](https://docs.microsoft.com/en-us/sql/ssms/download-sql-server-management-studio-ssms) to `tcp:localhost,5433` with `User Id=sa;Password=Pass@word;`) and explore the databases:
    - Identity: `Microsoft.eShopOnContainers.Service.IdentityDb`
    - Catalog: `Microsoft.eShopOnContainers.Services.CatalogDb`
    - Marketing: `Microsoft.eShopOnContainers.Services.MarketingDb`
    - Ordering: `Microsoft.eShopOnContainers.Services.OrderingDb`
    - Webhooks: `Microsoft.eShopOnContainers.Services.WebhooksDb`
  - Redis (Basket data): install and run [redis-commander](https://www.npmjs.com/package/redis-commander) and explore at <http://localhost:8081/>
  - RabbitMQ (Queue management): <http://10.0.75.1:15672/> (login with username=guest, password=guest)
  - Seq (Logs collector): <http://10.0.75.1:5340>
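A quick way to verify that the public endpoints above respond is a small shell loop; this is a sketch (HTTP code `000` means the service is unreachable), so adjust the list to the services you actually deployed:

```shell
# Smoke-test the public endpoints; each entry is "name|url".
for entry in \
  "Web MVC|http://localhost:5100" \
  "Web SPA|http://localhost:5104" \
  "Web Status|http://localhost:5107" \
  "Catalog API|http://localhost:5101" \
  "Ordering API|http://localhost:5102" \
  "Basket API|http://localhost:5103" \
  "Identity API|http://localhost:5105"
do
  name=${entry%%|*}   # text before the first '|'
  url=${entry##*|}    # text after the last '|'
  code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 3 "$url" || true)
  echo "$name ($url) -> HTTP $code"
done
```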
---

# Explore-the-code.md
This page covers the exploration of eShopOnContainers' code base and assumes you've already:

- Set up your development system for [Windows](Windows-setup) or [Mac](Mac-setup)

> **CONTENT**

- [Overview of the application code](#Overview-of-the-application-code)
- [MVC Application (ASP.NET Core)](#MVC-Application-ASPNET-Core)
- [SPA (Single Page Application)](#SPA-Single-Page-Application)
- [Xamarin Mobile App (For iOS, Android and Windows/UWP)](#Xamarin-Mobile-App-For-iOS-Android-and-WindowsUWP)
- [Additional resources](#Additional-resources)
## Overview of the application code

In this repo you can find a sample reference application that will help you understand how to implement a microservice-architecture-based application using **.NET Core** and **Docker**.

The example business domain or scenario is an eShop or eCommerce, implemented as a multi-container application. Each container is a microservice deployment (like the basket, catalog, ordering and identity microservices), developed with ASP.NET Core running on .NET Core, so they can run on either Linux or Windows containers.

The screenshot below shows the VS solution structure for those microservices/containers and client apps.

- (*Recommended when getting started*) Open **eShopOnContainers-ServicesAndWebApps.sln** for a solution containing just the server-side projects related to the microservices and web applications.
- Open **eShopOnContainers-MobileApps.sln** for a solution containing just the client mobile app projects (Xamarin mobile apps only). It works independently, based on mocks, too.
- Open **eShopOnContainers.sln** for a solution containing all the projects (all client apps and services).



Finally, those microservices are consumed by multiple client web and mobile apps, as described below.
## MVC Application (ASP.NET Core)

This is an MVC application where you can find interesting scenarios on how to consume HTTP-based microservices from C# running on the server side, as it is a typical ASP.NET Core MVC application. Since it is a server-side application, access to other containers/microservices is done within the internal Docker host network, using its internal name resolution.

## SPA (Single Page Application)

This app provides similar "eShop business functionality", but it's developed with Angular and TypeScript, with only a light use of ASP.NET Core MVC. It's another approach for client web applications, to be used when you want a more modern client experience that doesn't do the typical browser round-trip on every action, but behaves like a Single Page Application, which is closer to a desktop app usage experience. The consumption of the HTTP-based microservices is done from TypeScript/JavaScript in the client browser, so the calls to the microservices come from outside the Docker host internal network (like from your network, or even from the Internet).

## Xamarin Mobile App (For iOS, Android and Windows/UWP)

This is a client mobile app supporting the most common mobile OS platforms (iOS, Android and Windows/UWP). In this case, the consumption of the microservices is done from C# running on the client devices, so outside the Docker host internal network (like from your network, or even the Internet).

## Additional resources

- **General setup and initial exploration** - [eShopOnContainers issue] \
  <https://github.com/dotnet-architecture/eShopOnContainers/issues/1032>

- **Why doesn't OrdersController follow common REST guidelines?** - [eShopOnContainers issue] \
  <https://github.com/dotnet-architecture/eShopOnContainers/issues/1002>

- **How WebMVC calls Identity.API?** - [eShopOnContainers issue] \
  <https://github.com/dotnet-architecture/eShopOnContainers/issues/1043>

- **Login workflow does not work from mvc app to identity.api** - [eShopOnContainers issue] \
  <https://github.com/dotnet-architecture/eShopOnContainers/issues/1050>

- **How to use an external SQL Server machine?** - [eShopOnContainers issue] \
  <https://github.com/dotnet-architecture/eShopOnContainers/issues/172>

- **Using Ocelot Configuration.Json files in multiple projects for BFF framework** - [eShopOnContainers issue] \
  <https://github.com/dotnet-architecture/eShopOnContainers/issues/593>

- **Shared integration events** - [eShopOnContainers issue] \
  <https://github.com/dotnet-architecture/eShopOnContainers/issues/724>

- **There seems to be an atomicity issue while raising domain events in Ordering.Api** - [eShopOnContainers issue] \
  <https://github.com/dotnet-architecture/eShopOnContainers/issues/700>

- **Should the domain model be completely isolated?** - [eShopOnContainers issue] \
  <https://github.com/dotnet-architecture/eShopOnContainers/issues/869>

- **Event design and testing** - [eShopOnContainers issue] \
  <https://github.com/dotnet-architecture/eShopOnContainers/issues/924>
---

# Frecuent-errors.md
These are the most frequent errors encountered when running eShopOnContainers for the first time.

> **CONTENT**

- [When trying to log in from the MVC app I get an error](#When-trying-to-log-in-from-the-MVC-app-I-get-an-error)
  - [Deploying in Windows with Docker for Windows](#Deploying-in-Windows-with-Docker-for-Windows)
  - [Deploying in a Mac with Docker for Mac](#Deploying-in-a-Mac-with-Docker-for-Mac)
  - [Additional resources](#Additional-resources)
- [The SQL Server container is not running](#The-SQL-Server-container-is-not-running)
- [When I run the solution (using Visual Studio or the CLI) I get warnings like 'The ESHOP_AZURE_XXXX variable is not set...'](#When-I-run-the-solution-using-Visual-Studio-or-the-CLI-I-get-warnings-like-The-ESHOPAZUREXXXX-variable-is-not-set)
- [When I run 'docker-compose up' I get an error like ERROR: Service 'xxxxx' failed to build: COPY failed: stat ...: no such file or directory](#When-I-run-docker-compose-up-I-get-an-error-like-ERROR-Service-xxxxx-failed-to-build-COPY-failed-stat--no-such-file-or-directory)
- [When I try to run the solution in 'Docker for Windows' (on the Linux VM) I get the error: 'Did you mean to run dotnet SDK commands?'](#When-I-try-to-run-the-solution-in-Docker-for-Windows-on-the-Linux-VM-I-get-the-error-Did-you-mean-to-run-dotnet-SDK-commands)
## When trying to log in from the MVC app I get an error

There are usually two errors related to this:

- IDX10803: Unable to obtain configuration from: `http://10.0.75.1:5105/.well-known/openid-configuration`
- IDX20804: Unable to retrieve document from: '[PII is hidden]'
### Deploying in Windows with Docker for Windows

First, open a browser and navigate to <http://10.0.75.1:5105/.well-known/openid-configuration>. You should receive a JSON response. If not, make sure Identity.API and Docker are running without issues.

If the response is received, the problem is that requests from a container cannot reach `10.0.75.1` (which is the IP of the host machine inside the DockerNAT). Make sure that:

- You have opened the required ports in the firewall (run the script `cli-windows\add-firewall-rules-for-sts-auth-thru-docker.ps1`).

If this does not solve your problem, make sure the firewall is not blocking `vpnkit`. For more info refer to @huangmaoyixxx's comment in [issue #295](https://github.com/dotnet-architecture/eShopOnContainers/issues/295).

Another possibility is that the ASP.NET Identity database was not created correctly or in time by EF migrations when the app first started, because the SQL container was too slow to be ready for the Identity service. You can work around that by increasing the number of retries with exponential backoff of the EF contexts within the Identity.API service (i.e. increase `maxRetryCount` in the `sqlOptions` provided to `ConfigureDbContext`). Or, simply, try re-deploying the app into Docker.
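As a quick alternative to the browser check above, this sketch queries the discovery document from a terminal and prints the HTTP status code (`000` means the endpoint is unreachable):

```shell
# Query the Identity service discovery document; 200 means it's reachable.
code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 5 \
  "http://10.0.75.1:5105/.well-known/openid-configuration" || true)
echo "discovery document -> HTTP $code"
```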
### Deploying in a Mac with Docker for Mac

On a Mac you cannot use the 10.0.75.1 IP, so you need to change that. In the `docker-compose.override.yml` file, replace the IdentityUrl environment variable (or any place where the IP 10.0.75.1 is used) with:

```bash
IdentityUrl=http://docker.for.mac.localhost:5105
```

Now, open a browser and navigate to `http://docker.for.mac.localhost:5105/.well-known/openid-configuration`.

You should receive a JSON response. If not, make sure Identity.API and Docker are running without issues.
### Additional resources

- **Working behind corporate firewall** - [docs.microsoft.com issue] \
  <https://github.com/dotnet/docs/issues/11528>
## The SQL Server container is not running

It looks like the SQL container tried to start but then exited. If you run `docker ps -a`, the STATUS column for the SQL container does not show "Up" but "Exited".

Workaround: usually this is due to not enough memory assigned to the Docker host Linux VM.

IMPORTANT: Note that sometimes, after installing a "Docker for Windows" update, the assigned memory value might have been reset to 2GB again (see Docker issue <https://github.com/docker/for-win/issues/1169>), which is not enough for the SQL container. Assign at least 4GB of memory to the Docker host in the "Docker for Windows" settings.

For further information see the [Windows setup](Windows-setup) and [Mac setup](Mac-setup) pages.
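To see why the SQL container exited, its last log lines are usually enough; a sketch, where the `"sql"` name filter is an assumption and should be adjusted to your actual container name:

```shell
# Find the (possibly exited) SQL Server container and show its last log lines.
id=$(docker ps -aqf "name=sql" 2>/dev/null | head -n1)
if [ -n "$id" ]; then
  docker logs --tail 20 "$id"
else
  echo "no SQL container found (is Docker running?)"
fi
```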
## When I run the solution (using Visual Studio or the CLI) I get warnings like 'The ESHOP_AZURE_XXXX variable is not set...'

You can ignore those warnings. They're not from Visual Studio but from docker-compose. These variables are used to allow eShopOnContainers to use external resources (like Redis or SQL Server) from Azure. If they're not set, the `docker-compose.override.yml` file uses default values that are fine when running everything locally. So, the rule is:

- If you run everything locally: there is no need to set up these variables, and you can ignore the warnings.
- If you run all or some resources externally (say, in Azure), you need to set up these variables. Refer to [README.ENV.md](https://github.com/dotnet-architecture/eShopOnContainers/blob/master/readme/README.ENV.md) for more information on how to set them up.
## When I run 'docker-compose up' I get an error like ERROR: Service 'xxxxx' failed to build: COPY failed: stat ...: no such file or directory

This error occurs when some Docker image can't be built because the corresponding project has not been published. All projects are published to their `obj/Docker/publish` folder. If there is any compilation error, the project won't be published, the corresponding Docker image can't be built, and you will receive this error.

**Note**: When you run the project using F5 from VS2017, projects are not published, so you won't receive this error in VS2017.
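You can check which projects are missing their publish output before building the images; a sketch, where the project paths are examples and must be adjusted to your checkout:

```shell
# Check which service projects have publish output (example paths; adjust to your repo layout).
for proj in src/Services/Catalog/Catalog.API src/Services/Basket/Basket.API src/Web/WebMVC; do
  if [ -d "$proj/obj/Docker/publish" ]; then
    echo "$proj: publish output found"
  else
    echo "$proj: publish output MISSING (build/publish the project first)"
  fi
done
```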
## When I try to run the solution in 'Docker for Windows' (on the Linux VM) I get the error: 'Did you mean to run dotnet SDK commands?'

If you get this error:

    Did you mean to run dotnet SDK commands? Please install dotnet SDK from:
    http://go.microsoft.com/fwlink/?LinkID=798306&clcid=0x409

That usually happens when you've just switched from Windows containers to Linux containers in "Docker for Windows". This might be a temporary bug in the "Docker for Windows" environment.

Workaround: reboot your machine and you should be able to deploy to Linux containers without these issues.
---

# Home.md
# Welcome to the eShopOnContainers wiki!

You'll find here a lot of complementary information that, along with the related [e-books](eBooks), will help you get the most from this learning resource.

These are the main sections in the wiki, which you can always access from the sidebar to the right:
## Getting Started

Information about setting up your development environment, including:

- System requirements and setup steps for
  - [Windows](Windows-setup) and
  - [Mac](Mac-setup)
- [Frequent errors](Frequent-errors)
## Explore

Information and details to help you get to know eShopOnContainers from these points of view:

- [Architecture](Architecture)
- [Application UI](Explore-the-application)
- [Code](Explore-the-code)
- Technology

In the [Explore the code page](Explore-the-code) you'll find links to relevant issues that include value-bearing discussions.
## Deployment

Information about deploying to local Kubernetes, on-premises, and Azure environments.

## DevOps

Information about setting up CI/CD pipelines for eShopOnContainers in the Azure DevOps service.
# NEWS / ANNOUNCEMENTS

Want to be up to date on .NET Architecture guidance and reference apps like eShopOnContainers?
Subscribe by "WATCHING" this GitHub repo: https://github.com/dotnet-architecture/News
## Related readme files (use them for more information after reading this)

* Documentation index: [https://github.com/dotnet-architecture/eShopOnContainers/blob/master/readme/readme.md](https://github.com/dotnet-architecture/eShopOnContainers/blob/master/readme/readme.md)
## Questions

[QUESTION] Answer +1 if the solution is working for you (through VS2017 or the CLI environment):
https://github.com/dotnet/eShopOnContainers/issues/107

## Roadmap

https://github.com/dotnet/eShopOnContainers/wiki/01.-Roadmap-and-Milestones-for-future-releases

## Setting up your development environment for eShopOnContainers

### Visual Studio 2017 and Windows based

This is the more straightforward way to get started:
https://github.com/dotnet-architecture/eShopOnContainers/wiki/02.-Setting-eShopOnContainers-in-a-Visual-Studio-2017-environment

### CLI and Windows based

For those who prefer the CLI on Windows, using the dotnet CLI, docker CLI and VS Code for Windows:
https://github.com/dotnet/eShopOnContainers/wiki/03.-Setting-the-eShopOnContainers-solution-up-in-a-Windows-CLI-environment-(dotnet-CLI,-Docker-CLI-and-VS-Code)

### CLI and Mac based

For those who prefer the CLI on a Mac, using the dotnet CLI, docker CLI and VS Code for Mac:
https://github.com/dotnet-architecture/eShopOnContainers/wiki/04.-Setting-eShopOnContainer-solution-up-in-a-Mac,-VS-for-Mac-or-with-CLI-environment--(dotnet-CLI,-Docker-CLI-and-VS-Code)

## Related Documentation

### Guide/eBook: .NET Microservices: Architecture for Containerized .NET Applications

https://aka.ms/microservicesebook

## Issues

https://github.com/dotnet/eShopOnContainers/issues

## Sending feedback and pull requests

We'd appreciate your feedback, improvements and ideas. You can create new issues in the issues section, submit pull requests, and/or send emails to eshop_feedback@service.microsoft.com
---

# Identity-Server.md
Identity Server placeholder

## Additional resources

- **How WebMVC calls Identity.API?** - [eShopOnContainers issue] \
  <https://github.com/dotnet-architecture/eShopOnContainers/issues/1043>

- **Login from localhost fails** - [eShopOnContainers issue] \
  <https://github.com/dotnet-architecture/eShopOnContainers/issues/686#issuecomment-410226422>

- **Login flow** - [eShopOnContainers issue] \
  <https://github.com/dotnet-architecture/eShopOnContainers/issues/1050>
---

# Load-testing.md
This page details the setup needed to run load tests locally or on a Kubernetes / Service Fabric cluster.

Load testing requires Visual Studio Enterprise Edition.

> **CONTENT**

- [Local environment](#local-environment)
- [Kubernetes environment](#kubernetes-environment)
- [Run Load Tests](#run-load-tests)

## Local environment

Modify the **app.config** file in the LoadTest project directory and set the following service URLs:

```conf
<Servers>
  <MvcWebServer url="http://localhost:5100" />
  <CatalogApiServer url="http://localhost:5101" />
  <OrderingApiServer url="http://localhost:5102" />
  <BasketApiServer url="http://localhost:5103" />
  <IdentityApiServer url="http://localhost:5105" />
  <LocationsApiServer url="http://localhost:5109" />
  <MarketingApiServer url="http://localhost:5110" />
</Servers>
```
Modify the **.env** file and set the following config property as shown below:

```env
USE_LOADTEST=True
```
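Before starting the containers, you can confirm the flag is actually set; a quick sketch, run from the folder containing the `.env` file:

```shell
# Confirm the load-test flag is set before starting the containers.
grep -E '^USE_LOADTEST=' .env || echo "USE_LOADTEST is not set in .env"
```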
## Kubernetes environment

Modify the **app.config** file in the LoadTest project directory and set the following service URLs:

```conf
<Servers>
  <MvcWebServer url="http://<public_ip_k8s>/webmvc" />
  <CatalogApiServer url="http://<public_ip_k8s>/catalog-api" />
  <OrderingApiServer url="http://<public_ip_k8s>/ordering-api" />
  <BasketApiServer url="http://<public_ip_k8s>/basket-api" />
  <IdentityApiServer url="http://<public_ip_k8s>/identity" />
  <LocationsApiServer url="http://<public_ip_k8s>/locations-api" />
  <MarketingApiServer url="http://<public_ip_k8s>/marketing-api" />
</Servers>
```

Modify the **conf_local.yml** file in the K8s directory and set the **EnableLoadTest** environment variable to True. This setting enables the load tests to bypass authorization in the API services.



Deploy the Kubernetes services. Read the wiki pages related to the Kubernetes setup:

- [Deploy to local Kubernetes](Deploy-to-Local-Kubernetes)
- [Deploy to Azure Kubernetes Service (AKS)](Deploy-to-Azure-Kubernetes-Service-(AKS))
## Run Load Tests

Open the load test you want to perform (***.loadtest** files) and click the **Run Load Test** button.

---

# Mac-setup.md
This page covers the setup of your Mac development computer and assumes you've already:

- Ensured your system meets the [system requirements](System-requirements#Mac) and
- Installed Docker Desktop for Mac as directed in \
  <https://docs.docker.com/docker-for-mac/install/>.

The approach followed is to have the app running from the CLI first, since it's usually easier to deploy, and then move on to the option of using Visual Studio.

> **CONTENT**

- [Configure Docker](#configure-docker)
  - [Memory and CPU](#memory-and-cpu)
  - [Shared folders](#shared-folders)
- [Configure local networking](#configure-local-networking)
  - [Setting up the docker-compose environment variables and settings](#setting-up-the-docker-compose-environment-variables-and-settings)
- [Build and deploy eShopOnContainers](#build-and-deploy-eshoponcontainers)
  - [1. Create a folder for your repositories](#1-create-a-folder-for-your-repositories)
  - [2. Clone eShopOnContainer's GitHub repo](#2-clone-eshoponcontainers-github-repo)
  - [3. Build the application](#3-build-the-application)
  - [4. Deploy to the local Docker host](#4-deploy-to-the-local-docker-host)
  - [5. Check the running containers](#5-check-the-running-containers)
- [Explore the application](#explore-the-application)
- [Optional - Use Visual Studio for Mac](#optional---use-visual-studio-for-mac)
  - [Open the solution with Visual Studio for Mac](#open-the-solution-with-visual-studio-for-mac)
  - [Build and run the application with F5 or Ctrl+F5](#build-and-run-the-application-with-f5-or-ctrlf5)
  - [Explore the code](#explore-the-code)
- [Configuring the app for external access from remote client apps](#configuring-the-app-for-external-access-from-remote-client-apps)
## Configure Docker

The initial Docker Desktop configuration is not suitable to run eShopOnContainers, because the app uses a total of 25 Linux containers.

Even though the microservices are rather light, the application also runs SQL Server, Redis, MongoDB, RabbitMQ and Seq as separate containers. The SQL Server container hosts four databases (for different microservices) and takes a significant amount of memory.

So it's important to assign enough memory and CPU to Docker.

### Memory and CPU

Once Docker for Mac is installed, configure the minimum amount of memory and CPU like so:

- Memory: 4096 MB
- CPU: 2

This amount of memory is the absolute minimum to have the app running, and that's why you need a 16GB RAM machine for an optimal configuration.



Depending on how many apps you are running on your Mac, you might need to assign more memory to Docker. Usually 4GB should suffice, but we've had feedback from developers who've needed to assign up to 8GB of RAM to Docker on the Mac.
### Shared folders

If your projects are placed within the /Users folder, you don't need to configure anything else, as that is a pre-shared folder. However, if you place your projects under a different path, like /MyRootProjects, then you'd need to add that shared folder to Docker's configuration.

If using Visual Studio for Mac, it is also important that you share the folder `/usr/local/share/dotnet`, like here:

## Configure local networking

This configuration is necessary so you don't get the following error when trying to log in to the MVC web app.



That's because the default IP used to redirect to the Identity service/app used by the application (based on IdentityServer4) is 10.0.75.1.
That IP is always set up when installing Docker for Windows on a Windows 10 machine. It is also used by Windows Server 2016 when using Windows containers.

eShopOnContainers uses that IP as the default choice so anyone testing the app doesn't need to configure further settings. However, that IP is not used by Docker for Mac, so you need to change the config.

If you were to access the Docker containers from remote machines or mobile phones, like when using the Xamarin app or the web apps from remote PCs, you would also need to change that IP and use a real IP from the network adapter.
### Setting up the docker-compose environment variables and settings

As explained [here by Docker](https://docs.docker.com/docker-for-mac/networking/#use-cases-and-workarounds), the Mac has a changing IP address (or none, if you have no network access). From June 2017 onwards, our recommendation is to connect to the special Mac-only DNS name `docker.for.mac.localhost`, which resolves to the internal IP address used by the host.

In the `docker-compose.override.yml` file, replace the IdentityUrl environment variable (or any place where the IP 10.0.75.1 is used) with:
```bash
IdentityUrl=http://docker.for.mac.localhost:5105
```
You could also set your real IP at the Mac's network adapter, but that would be a worse solution, as it would depend on the network your Mac development machine is connected to.
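If you do need a real IP (for example, to test from other devices on your network), you can look it up from a terminal. This is a sketch: `en0` is the usual Wi-Fi interface on a Mac, but it may differ on your machine, and the `hostname -I` fallback only applies on Linux:

```shell
# Print the host's IP on the primary interface (en0 is typical for Wi-Fi on a Mac).
ip=$(ipconfig getifaddr en0 2>/dev/null || hostname -I 2>/dev/null | awk '{print $1}')
echo "host IP: ${ip:-unknown}"
```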
Therefore, the WebMVC service definition in the `docker-compose.override.yml` should finally be configured as shown below:

```bash
webmvc:
  environment:
    - ASPNETCORE_ENVIRONMENT=Development
    - ASPNETCORE_URLS=http://0.0.0.0:80
    - CatalogUrl=http://catalog.api
    - OrderingUrl=http://ordering.api
    - BasketUrl=http://basket.api
    - LocationsUrl=http://locations.api
    - IdentityUrl=http://docker.for.mac.localhost:5105
    - MarketingUrl=http://marketing.api
    - CatalogUrlHC=http://catalog.api/hc
    - OrderingUrlHC=http://ordering.api/hc
    - IdentityUrlHC=http://identity.api/hc
    - BasketUrlHC=http://basket.api/hc
    - MarketingUrlHC=http://marketing.api/hc
    - PaymentUrlHC=http://payment.api/hc
    - UseCustomizationData=True
    - ApplicationInsights__InstrumentationKey=${INSTRUMENTATION_KEY}
    - OrchestratorType=${ORCHESTRATOR_TYPE}
    - UseLoadTest=${USE_LOADTEST:-False}
  ports:
    - "5100:80"
```
If you re-deploy with `docker-compose up`, the login page should now work properly, as in the screenshot below.

NOTE: For some reason, Safari cannot reach `docker.for.mac.localhost`, but Chrome on the Mac works with no issues. Since the usage of `docker.for.mac.localhost` is just for development purposes, just use Chrome for tests.



There's some additional configuration needed in case you want to connect to the app from the Xamarin app or from the WiFi network, outside the computer where the app is installed. You can find the detailed explanation at the end of this page.
## Build and deploy eShopOnContainers

At this point you should be able to run eShopOnContainers from the command line. To do that, you should:

### 1. Create a folder for your repositories

```console
cd
mkdir MyGitRepos
cd MyGitRepos
```

This will create the folder `/Users/<username>/MyGitRepos`.
### 2. Clone [eShopOnContainer's GitHub repo](https://github.com/dotnet-architecture/eShopOnContainers)

```console
git clone https://github.com/dotnet-architecture/eShopOnContainers.git
```

**Note:** Remember that active development is done in the `dev` branch. To test the latest code, use this branch instead of `master`.
### 3. Build the application

```console
cd eShopOnContainers
docker-compose build
```

Building the Docker images should take between 15 and 30 minutes to complete, depending on the system speed.

The first time you run this command it'll take some additional time, as it needs to pull/download the dotnet/core/aspnet and SDK images.

Later on, you can try adding a parameter to speed up the image building process:

```console
cd eShopOnContainers
docker-compose build --build-arg RESTORECMD=scripts/restore-packages
```

When the `docker-compose build` command finishes, you can check the images created using the Docker CLI with the following command:

```console
docker images
```



Those are the Docker images available in your local image repository.

You might have additional images, but you should see, at least, the custom images starting with the prefix "eshop/", which is the name of the eShopOnContainers images repo.

The images starting with `<none>` haven't been tagged with a name and are intermediate images of the build process. Other named images that don't start with "eshop/" are official base images, like microsoft/aspnetcore or the SQL Server for Linux images.
### 4. Deploy to the local Docker host
|
||||
|
||||
```console
|
||||
docker-compose up
|
||||
```
|
||||
|
||||
With the above single command you deploy the whole solution into your local Docker host. You should view something like this in the first seconds:
|
||||
|
||||

|
||||
|
||||
Ignore the warnings about environment variables for Azure, as that's only needed when deploying to Azure (Azure SQL Database, Redis as a service, Azure Service Bus, etc.) which is the "next step" when using eShopOnContainers.
|
||||
|
||||
Note that the first time you deploy the application (with docker run or docker-compose) it detects that it needs a few related infrastructure images, like the SQL Server, Redis, RabbitMQ images, and the like. So it'll pull or download those base images from the public Docker registry named DOCKER HUB, by pulling the "microsoft/mssql-server-linux" which is the base image for the SQL Server for Linux on containers, and the "library/redis" which is the base Redis image, and so on. Therefore, the first time you run "docker-compose up" it might take a few minutes pulling those images before it spins up your custom containers.

After a few more seconds, when all containers are deployed, you should see something like this:

*(image)*

The next time you run "docker-compose up", the app will start much faster, because all the base images will already be downloaded.

To stop all containers, just press Ctrl-C in the same terminal.

### 5. Check the running containers

Open a new terminal to view the running containers with the following command:

```console
docker ps
```

*(image)*

## Explore the application

You can now [explore the application](Explore-the-application) or continue with the optional Visual Studio for Mac setup.

## Optional - Use Visual Studio for Mac

If you want to explore the code and debug the application to see it working, you have to install Visual Studio for Mac.

When installing [Visual Studio for Mac](https://www.visualstudio.com/vs/visual-studio-mac/), you can select between multiple workloads or platforms.

Make sure you select the .NET Core platform:

*(image)*

Before completing the installation, VS for Mac will ask you to install Xcode, which is needed for multiple dependencies.

If you install Android as a target platform, Java will also be installed, as a dependency for building Android mobile apps.

For running just the Docker containers and web apps, you only need the .NET Core platform.

If you want to try the eShopOnContainers mobile app, you also need Xamarin and, therefore, the iOS and Android platforms. Those mobile platforms are optional for this walkthrough, though.

### Open the solution with Visual Studio for Mac

Run Visual Studio for Mac and open the solution `eShopOnContainers-ServicesAndWebApps.sln`.

If you just want to run the containers/microservices and web apps, do NOT open the other solutions, like `eShopOnContainers.sln`, as those also open the Xamarin projects, and their additional dependencies might slow you down when testing.

The first time you open the `eShopOnContainers-ServicesAndWebApps.sln` solution, it's recommended to wait a few minutes, because VS will be restoring many NuGet packages and the solution won't compile or run until it gets all of them. This only happens the first time you open the solution; subsequent loads are much faster.

This is VS for Mac with the `eShopOnContainers-ServicesAndWebApps.sln` solution:

*(image)*

### Build and run the application with F5 or Ctrl+F5

Make sure that the default start-up project is the Docker project named `docker-compose`.

Hit Ctrl+F5 or press the "play" button in VS for Mac.

IMPORTANT: The first time you run eShopOnContainers, it will take longer than subsequent launches. Under the covers, Docker is pulling quite a few "heavy" images from Docker Hub (the public image registry), like the SQL Server, Redis, RabbitMQ and base ASP.NET Core images. That pull/download process will take a few minutes. Then, VS launches the application's custom containers plus the infrastructure containers (SQL Server, Redis, RabbitMQ and MongoDB), populates sample data in the databases and finally runs the microservices and web apps in custom containers.

Note that you will see normal/controlled HTTP exceptions caused by the retries with exponential backoff: the web apps have to wait until the microservices are ready for the first time, and the microservices first need to run the SQL statements that populate the sample data.

Once the solution is up and running, you should be able to see it in the browser at:

http://localhost:5100

*(image)*

If you open a bash window, you can type `docker images` and see the pulled/downloaded images plus the custom images created by VS for Mac:

*(image)*

And by typing `docker ps` you can see the containers running in Docker: the infrastructure containers like SQL Server, Redis and RabbitMQ, plus the custom containers running the Web API microservices and the web apps.

*(image)*

*IMPORTANT:* To have the full app working (logging in with a user, adding items to the basket, creating orders, or consuming the services from a remote Xamarin app or web SPA), you need to configure a few additional steps, like the IP used by the Identity service, because it needs to be redirected. Check the additional configuration at the end of this page.

## Explore the code

You should now be ready to start learning by [exploring the code](Explore-the-code) and debugging eShopOnContainers.

## Configuring the app for external access from remote client apps

If you use the services from remote apps, like a phone with the Xamarin mobile app on the same Wi-Fi network, or web apps accessing the Docker host remotely, you need to change a few default URLs.

eShopOnContainers uses the .env file to set certain default environment variables used by the multiple docker-compose.override files you can have.

Therefore, make the following change in the .env file at the root of the eShopOnContainers folder.

If you don't see the .env file, run the following commands to show hidden files and restart the Finder:

```bash
$ defaults write com.apple.finder AppleShowAllFiles TRUE

$ killall Finder
```

Then, edit the .env file (with VS Code, for instance) and change the ESHOP_EXTERNAL_DNS_NAME_OR_IP variable: instead of "localhost", set a real IP or a real DNS name:

`ESHOP_EXTERNAL_DNS_NAME_OR_IP=192.168.0.25`

or

`ESHOP_EXTERNAL_DNS_NAME_OR_IP=myserver.mydomain.com`

This is something you'll want to do when deploying to a real Docker host, like a VM in Azure, where you can use a DNS name.
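For context, the docker-compose override files consume that variable through standard compose variable substitution. A simplified sketch follows; the service name, setting name and port are illustrative, not the exact file contents:

```yml
# Sketch only: shows how a compose file can reference the .env variable
identity.api:
  environment:
    - SpaClient=http://${ESHOP_EXTERNAL_DNS_NAME_OR_IP}:5104
```

Any service whose clients must reach it from outside the Docker host should build its public URLs from this variable.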

---

**Microservices-Architecture-eBook-changelog.md** (new file)

# e-Book changelog placeholder

---

*(changes to an existing file; filename not captured)*

Reference commit [6541f0d2](https://github.com/dotnet/docs/pull/12020/commits/65

- Add section on Visual Studio Code (VS Code) and Docker extension for VS Code.
- Update Docker image references to Microsoft Container Registry (MCR).
- Update samples to use Azure DevOps.
- Include section on creating Azure DevOps pipelines.

---

**RabbitMQ.md** (new file)

Placeholder

## Additional resources

- **[eShopOnContainers issue] EventBusRabbitMQ Message Processing Problem** \
  <https://github.com/dotnet-architecture/eShopOnContainers/issues/888>

- **[eShopOnContainers PR] Use AsyncEventingBasicConsumer in RabbitMQ** \
  <https://github.com/dotnet-architecture/eShopOnContainers/pull/987>

---

**Readme-files.md** (new file)

Repo README files index.

## README files - **TEMPORARY**

- Using Helm Charts to deploy eShopOnContainers to AKS with ISTIO \
  https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/elk/Readme.md

- Deploy a VM to run the services - **REFERENCED FROM THE WIKI** \
  https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/deploy/az/vms/plain-vm.md

- Docker-compose yaml files - **DELETED FROM MAIN REPO, PENDING PUSH** \
  https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/readme/readme-docker-compose.md

- Create a VM using docker-machine - **REFERENCED FROM THE WIKI** \
  https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/deploy/az/vms/docker-machine.md

- Simplified CQRS and DDD - **DELETED FROM MAIN REPO, PENDING PUSH** \
  https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/docs-kb/simplified-cqrs-ddd/post.md

- Geolocator Plugin details \
  https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/Components/GeolocatorPlugin-1.0.3/component/Details.md

- Getting Started with Geolocator Plugin \
  https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/Components/GeolocatorPlugin-1.0.3/component/GettingStarted.md

- Kubernetes 101 \
  https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/KUBERNETES.md

- YAML files used to deploy to k8s \
  https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/k8s/conf-files.md

- Wiring eshopOnContainers with ELK in Localhost \
  https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/elk/Readme.md

- Kubernetes (k8s) deploy information \
  https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/k8s/readme.md

- Running Tests for eShopOnContainers - **COPIED TO WIKI** \
  https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/test/readme.md

- Documentation index \
  https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/readme/readme.md

- Deploying Resources On Azure \
  https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/deploy/readme.md

- eShopOnContainers on Kubernetes \
  https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/k8s/README.k8s.md

- Simplified CQRS and DDD - **COPIED TO THE WIKI** \
  https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/docs-kb/simplified-cqrs-ddd/post.md

- Deploying resources using create-resources script - **REFERENCED FROM THE WIKI** \
  https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/deploy/az/readme.md

- VSTS - Xamarin Android Build \
  https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/vsts-docs/builds/xamarin-android.md

- VSTS - Xamarin iOS Build \
  https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/vsts-docs/builds/xamarin-iOS.md

- Azure Resources - **COPIED TO THE WIKI** \
  https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/readme/README.ENV.md

- eShopOnContainers on Mobile \
  https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/src/Mobile/README.md

- Kubernetes CI/CD VSTS \
  https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/k8s/README.CICD.k8s.md

- Deploying SQL Server & SQL Databases - **REFERENCED FROM THE WIKI** \
  https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/deploy/az/sql/readme.md

- Create VM with Docker installed - **REFERENCED FROM THE WIKI** \
  https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/deploy/az/vms/readme.md

- Deploying Redis Cache - **REFERENCED FROM THE WIKI** \
  https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/deploy/az/redis/readme.md

- Deploying Azure Cosmosdb - **REFERENCED FROM THE WIKI** \
  https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/deploy/az/cosmos/readme.md

- Deploying Azure Service Bus - **REFERENCED FROM THE WIKI** \
  https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/deploy/az/servicebus/readme.md

- Docker-compose yaml files - **COPIED TO THE WIKI** \
  https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/readme/readme-docker-compose.md

- Deploying Azure Functions - **REFERENCED FROM THE WIKI** \
  https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/deploy/az/azurefunctions/readme.md

- Deploying Catalog Storage - **REFERENCED FROM THE WIKI** \
  https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/deploy/az/storage/catalog/readme.md

- Deploying Marketing Storage - **REFERENCED FROM THE WIKI** \
  https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/deploy/az/storage/marketing/readme.md

- Load Testing settings - **COPIED TO THE WIKI** \
  https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/test/ServicesTests/LoadTest/readme.md

- Deploying a Service Fabric cluster based on Linux nodes \
  https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/deploy/az/servicefabric/LinuxContainers/readme.md

- Deploying a Service Fabric cluster based on Windows nodes \
  https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/deploy/az/servicefabric/WindowsContainers/readme.md

---

**Roadmap.md** (new file)

## Next major release

Features that will be included in the next major release:

- Migrate the solution from ASP.NET Core 2.2 to 3.0 and update all projects to use the latest .NET Core 3.0 templates.

- Implement the new .NET Core 3.0 WorkerService in Ordering.API and other background processes.

- Improve Ordering.API:
  - Group order items
  - Apply discounts from Marketing.API

- Handle two deployment scenarios:
  - Basic deployment, better for learning:
    - Docker compose
    - Local Kubernetes
    - Visual Studio F5 experience
  - Advanced deployment, complex but more real-life:
    - Sidecar implementation with Envoy/Istio
    - Improved API Gateway and resilience
    - gRPC for inter-service communications
    - Azure Dev Spaces

## Feature candidates

Check the [backlog](Backlog) for candidate features.

---

**Serilog-and-Seq.md** (new file)

Logging is a key element when you need to diagnose failures, and it's also important as a learning resource in eShopOnContainers. Logging allows you to view and explore inner working details that would be very hard to understand otherwise.

This article contains a few sample use cases for logging that also showcase the internals of some of the most interesting DDD patterns, which are not obvious from simply using the application. You'll also find here a brief introduction to centralized structured logging with [Serilog](https://serilog.net/) and event viewing with [Seq](https://getseq.net/) in eShopOnContainers.

Serilog is an [open source project on GitHub](https://github.com/serilog/serilog) and, even though Seq is not, it's possible to [use it for free in development and small projects](https://getseq.net/Pricing), so it fits eShopOnContainers nicely.

This article covers the most important tips for using structured logging in C# and concludes with some details on the setup of the logging system.

> **CONTENT**

- [Logging samples in eShopOnContainers](#Logging-samples-in-eShopOnContainers)
  - [Application startup](#Application-startup)
  - [Closing in on a specific type of trace](#Closing-in-on-a-specific-type-of-trace)
  - [Integration event handling](#Integration-event-handling)
  - [Tracing an integration event from publishing to handling in other microservices](#Tracing-an-integration-event-from-publishing-to-handling-in-other-microservices)
  - [Viewing the log event details](#Viewing-the-log-event-details)
- [Runtime detail level configuration](#Runtime-detail-level-configuration)
- [Using structured logging](#Using-structured-logging)
  - [Getting the logger](#Getting-the-logger)
  - [Logging events](#Logging-events)
  - [Logging contexts and correlation Ids](#Logging-contexts-and-correlation-Ids)
  - [Important logging rules](#Important-logging-rules)
- [Setup and configuration](#Setup-and-configuration)
  - [Serilog](#Serilog)
  - [Seq](#Seq)
- [Additional resources](#Additional-resources)
## Logging samples in eShopOnContainers

These are just a few samples of what you can get when you combine proper structured logging with filtering by some convenient properties, as seen from **Seq**.

The filter expression is highlighted at the top of each image.

### Application startup

Get the details of application startup:

*(image)*

Filtering by `ApplicationContext` shows all events from the application; in this sample we just added a `DateTime` limit to show only the initial traces.

The "level" of the events shown, such as `Debug`, `Information` or `Warning`, can be configured as explained in the [setup and configuration section](#setup-and-configuration).

### Closing in on a specific type of trace

You can focus on a specific type of trace by filtering by "event template" (for a specific `ApplicationContext` here):

*(image)*

You can also show the same event template or "type" for all applications:

*(image)*

### Integration event handling

Filtering by `IntegrationEventId` and `IntegrationEventContext` shows the publishing (1) and handling (2) of the `UserCheckoutAcceptedIntegrationEvent`. This handling begins a transaction (3), creates an order (4), commits the transaction (5) and publishes the events `OrderStartedIntegrationEvent` (6) and `OrderStatusChangedToSubmittedIntegrationEvent` (7).

It's worth noting here that integration events are queued while in the scope of the transaction, and then published after it finishes:

*(image)*

### Tracing an integration event from publishing to handling in other microservices

A filter similar to the previous one, but showing the logging event details, with an `OrderStatusChangedToStockConfirmedIntegrationEvent` published in `Ordering.API` (1) and handled in `Ordering.SignalrHub` (2) and in `Payment.API` (3). Notice that, while still handling the event in `Payment.API`, a new `OrderPaymentSuccededIntegrationEvent` (4) is published:

*(image)*

### Viewing the log event details

If you use [Firefox Developer Edition](https://www.mozilla.org/firefox/developer/) or your browser has a JSON file viewer, you can get the raw JSON event:

*(image)*

And view or navigate/expand/collapse all event details much more easily:

*(image)*
## Runtime detail level configuration

When you need to explore logged events in more detail, you can fine-tune the logging level filter by adding some `Serilog__MinimumLevel__Override__*` environment variables in the `docker-compose.override.yml` file, as shown next:

```yml
ordering.api:
  environment:
    - Serilog__MinimumLevel__Override__Microsoft.eShopOnContainers.BuildingBlocks.EventBusRabbitMQ=Verbose
    - Serilog__MinimumLevel__Override__Ordering.API=Verbose
```

This is equivalent to adding the following to the `Serilog` configuration in the `appsettings.json` file:

```json
"Serilog": {
  "MinimumLevel": {
    "Override": {
      "Ordering": "Verbose",
      "eShopOnContainers.BuildingBlocks.EventBusRabbitMQ": "Verbose"
    }
  }
},
```

This simply means: log any event with level "Verbose" or higher, for any class in an `Ordering.*` namespace.
## Using structured logging

This section explores the code-related aspects of logging, beginning with the "structured logging" concept that makes it possible to get the samples shown above.

In a few words, **structured logging** can be thought of as a stream of key-value pairs for every event logged, instead of just the plain text line of conventional logging.

The key-value pairs are then the basis for querying the events, as was shown in the samples above.
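As an illustration, a single structured event might be stored roughly like this, with the constant message template plus one entry per captured property (all values below are hypothetical, not a real trace from the repo):

```json
{
  "Timestamp": "2019-04-01T10:15:30.000Z",
  "Level": "Information",
  "MessageTemplate": "Order {OrderId} created for buyer {BuyerId}",
  "Properties": {
    "OrderId": 1234,
    "BuyerId": "8f4b3c0e-0000-0000-0000-000000000000",
    "ApplicationContext": "Ordering.API"
  }
}
```

Because `OrderId`, `BuyerId` and `ApplicationContext` are stored as separate values, a sink like Seq can index and query them directly.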

### Getting the logger

The .NET logging infrastructure supports structured logging when used with a logging provider that supports it, such as **Serilog**. The simplest way to use it is to request an `ILogger<T>` through Dependency Injection (DI) in the class constructor, as shown here:

```cs
public class WorkerClass
{
    private readonly ILogger<WorkerClass> _logger;

    public WorkerClass(ILogger<WorkerClass> logger) => _logger = logger;

    // If you have to use ILoggerFactory, change the constructor like this:
    public WorkerClass(ILoggerFactory loggerFactory) => _logger = loggerFactory.CreateLogger<WorkerClass>();
}
```

A nice side effect of using `ILogger<T>` is that you get a `SourceContext` property, as shown here:

*(image)*
### Logging events

Logging events is pretty simple, as shown in the following code, which produces the trace shown in the image above:

```cs
_logger.LogInformation("----- Publishing integration event: {IntegrationEventId} from {AppName} - ({@IntegrationEvent})", pubEvent.EventId, Program.AppName, pubEvent.IntegrationEvent);
```

The code above is similar to what you do with the `string.Format()` method, with three very important differences:

1. The first string defines a **type of event** or **template** property that can also be queried, along with any other of the event's properties.

2. Every name in curly braces in the **template** defines a **property** that gets its value from a parameter after the template, just as in `string.Format()`.

3. If a property name begins with `@`, then the whole object graph is stored in the event log (some limits apply and can be configured).
### Logging contexts and correlation Ids
|
||||
|
||||
Logging context allows you to define a scope, so you can trace and correlate a set of events, even across the boundaries of the applications involved. The use of different types of contexts was shown in the [logging samples section](#logging-samples-in-eshoponcontainers) above.
|
||||
|
||||
Correlation Ids are a mean to establish a link between two or more contexts or applications, but can get difficult to trace. At some point it might be better to handle contexts that cover business concepts or entities, such as an **OrderContext** that can be easily identified across different applications, even when using different technologies.
|
||||
|
||||
These are some of the context properties used in eShopOnContainers:
|
||||
|
||||
- **ApplicationContext** Is defined on application startup and adds the `ApplicationContext` property to all events.
|
||||
|
||||
- **SourceContext** Identifies the full name of the class where the event is logged, it's usually defined when creating or injecting the logger.
|
||||
|
||||
- **RequestId** Is a typical context that covers all events while serving a request. It's defined by the ASP.NET Core request pipeline.
|
||||
|
||||
- **Transaction context** Covers the events from the beginning of the database transaction up to it's commit.
|
||||
|
||||
- **IntegrationEventContext** - Identifies all events that occur while handling an integration event in an application.
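As a sketch of how such a context property can be attached with Serilog, `LogContext.PushProperty` adds a property to every event logged inside its scope (the class and handler below are illustrative, not taken verbatim from the repo):

```cs
using System;
using Microsoft.Extensions.Logging;
using Serilog.Context;

public class SampleIntegrationEventHandler
{
    private readonly ILogger<SampleIntegrationEventHandler> _logger;

    public SampleIntegrationEventHandler(ILogger<SampleIntegrationEventHandler> logger) => _logger = logger;

    public void Handle(Guid integrationEventId)
    {
        // Every event logged inside this scope carries the property,
        // so you can filter by it in Seq, as in the samples above.
        using (LogContext.PushProperty("IntegrationEventContext", integrationEventId))
        {
            _logger.LogInformation("Handling integration event {IntegrationEventId}", integrationEventId);
        } // the property is removed when the scope is disposed
    }
}
```

This only works when the logger configuration includes `.Enrich.FromLogContext()`, as shown in the [setup and configuration section](#setup-and-configuration).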

### Important logging rules

There are just a few simple rules to get the most from structured logging:

1. NEVER use string interpolation with variables as the template.

   If you use interpolation, the "template" loses its meaning as an event type, you also lose the key-value pairs, and the trace becomes a plain old text trace.

2. Log exceptions with the proper overload, as shown in the following code fragments:
```cs
catch (Exception ex)
{
    _logger.LogWarning(ex, "Could not publish event: {EventId} after {Timeout}s ({ExceptionMessage})", @event.Id, $"{time.TotalSeconds:n1}", ex.Message);
}

.../...

catch (Exception ex)
{
    _logger.LogError(ex, "Program terminated unexpectedly ({Application})!", AppName);
    return 1;
}
```

Don't log only the exception message, because that would be like violating rule #1.
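To make rule #1 concrete, compare the two calls below (the `order` object is hypothetical, for illustration only):

```cs
// BAD: string interpolation bakes the values into the message, so every event
// has a different "template", and OrderId/BuyerName are not stored as properties.
_logger.LogInformation($"Order {order.Id} created for {order.BuyerName}");

// GOOD: a constant message template; OrderId and BuyerName are captured
// as key-value pairs that can be queried in Seq.
_logger.LogInformation("Order {OrderId} created for {BuyerName}", order.Id, order.BuyerName);
```

Both calls print similar console text, but only the second one produces a queryable event type with structured properties.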
## Setup and configuration

### Serilog

The logging setup used in eShopOnContainers is somewhat different from the usual ASP.NET Core samples; it's taken mostly from <https://github.com/serilog/serilog-aspnetcore>. The main reason is to have logging services available as soon as possible during application startup.

These are the packages typically used to enable Serilog in the applications:

- Serilog.AspNetCore
- Serilog.Enrichers.Environment
- Serilog.Settings.Configuration
- Serilog.Sinks.Console
- Serilog.Sinks.Seq

Logger configuration is done in `Program.cs`, as shown here:

```cs
private static Serilog.ILogger CreateSerilogLogger(IConfiguration configuration)
{
    var seqServerUrl = configuration["Serilog:SeqServerUrl"];

    return new LoggerConfiguration()
        .MinimumLevel.Verbose()
        .Enrich.WithProperty("ApplicationContext", AppName)
        .Enrich.FromLogContext()
        .WriteTo.Console()
        .WriteTo.Seq(string.IsNullOrWhiteSpace(seqServerUrl) ? "http://seq" : seqServerUrl)
        .ReadFrom.Configuration(configuration)
        .CreateLogger();
}
```
The following aspects of the code above are worth highlighting:

- `.Enrich.WithProperty("ApplicationContext", AppName)` defines the `ApplicationContext` for all traces in the application.
- `.Enrich.FromLogContext()` allows you to define a log context anywhere you need it.
- `.ReadFrom.Configuration(configuration)` allows you to override the configuration with values from `appsettings.json` or environment variables, which is very handy for containers.

The next JSON fragment shows the typical default `appsettings.json` configuration for the eShopOnContainers microservices:

```json
"Serilog": {
  "SeqServerUrl": null,
  "MinimumLevel": {
    "Default": "Information",
    "Override": {
      "Microsoft": "Warning",
      "Microsoft.eShopOnContainers": "Information",
      "System": "Warning"
    }
  }
},
```

This fragment configures the minimum level for traces according to the namespace of the `SourceContext`: the default is **Information**, except for the Microsoft.* and System.* namespaces, which only log **Warning** and above, except again for **Microsoft.eShopOnContainers**, which is back to **Information**. For example, events from `Microsoft.AspNetCore` are logged from **Warning** up, while events from a `Microsoft.eShopOnContainers.*` class are logged from **Information** up.
### Seq

Seq is added as another container in the `docker-compose` files, as shown here:

```yml
# In docker-compose.yml
services:
  seq:
    image: datalust/seq:latest

# In docker-compose.override.yml
  seq:
    environment:
      - ACCEPT_EULA=Y
    ports:
      - "5340:80"
```

With the above configuration, **Seq** will be available at `http://10.0.75.1:5340` or `http://localhost:5340`.

**Important configuration note**

To limit the amount of disk space used by the event store, it's recommended that you create a retention policy of **one day**, using the option **Settings > RETENTION > ADD POLICY > Delete all events after 1 day**.
## Additional resources

- **Logging in ASP.NET Core** \
  <https://docs.microsoft.com/aspnet/core/fundamentals/logging/>

- **Serilog — simple .NET logging with fully-structured events** \
  <https://serilog.net/>

- **Seq — structured logs for .NET apps** \
  <https://getseq.net/>

- **Structured logging concepts in .NET Series (1)** \
  <https://nblumhardt.com/2016/06/structured-logging-concepts-in-net-series-1/>

- **Events and levels - structured logging concepts in .NET (2)** \
  <https://nblumhardt.com/2016/06/events-and-levels-structured-logging-concepts-in-net-2/>

- **Smart Logging Middleware for ASP.NET Core** \
  <https://blog.getseq.net/smart-logging-middleware-for-asp-net-core/>

- **Tagging log events for effective correlation** \
  <https://nblumhardt.com/2015/01/designing-log-events-for-effective-correlation/>

---

**Simplified-CQRS-and-DDD.md** (new file)

This page explores some code details related to the simplified CQRS and DDD approaches used in eShopOnContainers.

> **CONTENT**

- [Conceptual overview](#conceptual-overview)
- [Code details](#code-details)
  - [CQRS](#cqrs)
  - [DDD](#ddd)
- [Additional resources](#additional-resources)
## Conceptual overview

CQRS, for Command and Query Responsibility Segregation, is an architectural pattern that, in very simple terms, provides two different ways to handle the application model.

**Commands** are responsible for **changing** the application state, i.e. creating, updating and deleting entities (data).

**Queries** are responsible for **reading** the application state, e.g. to display information to the user.

**Commands** are designed around the domain rules, restrictions and transaction boundaries.

**Queries** are designed around the presentation layer, the client UI.

When handling **commands**, the application model is usually represented by DDD constructs (root aggregates, entities, value objects, etc.) and there's usually some sort of rule that restricts the allowed state changes, e.g. an order has to be paid before dispatching.

When handling **queries**, the application model is usually represented by entities and relations, and can be read much like SQL queries to display information.

Queries don't change state, so they can be run as many times as required and will always return the same values as long as the application state hasn't changed, i.e. queries are "idempotent".

Why the separation? Because the rules for **changing** the model can impose unnecessary constraints on **reading** the model. For example, you might allow changing order items only before dispatching, so the order acts as the gate-keeper (root aggregate) for accessing the order items; but you might also want to view all orders for some catalog item, so you have to be able to access the order items directly (in a read-only way).

In this simplified CQRS approach, both the DDD model and the query model use the same database.

**Commands** and **queries** are located in the application layer, because:

1. It's where the composition of domain root aggregates occurs (commands), and
2. It's close to the UI requirements and has access to the whole database of the microservice (queries).

Ideally, root aggregates are ignorant of each other, and it's the application layer's responsibility to compose coordinated actions by means of domain events, because it knows about all root aggregates.

Regarding **queries**, by a similar analysis, the application layer knows about all entities and relationships in the database, beyond the restrictions of the root aggregates.
## Code details
|
||||
|
||||
### CQRS
|
||||
|
||||
The CQRS pattern can be checked in the Ordering service:
|
||||
|
||||
Commands and queries are clearly separated in the application layer (Ordering.API).
|
||||
|
||||
**Solution Explorer [Ordering.API]:**
|
||||
|
||||

|
||||
|
||||
Commands are basically read-only Data Transfer Objects (DTOs) that contain all the data required to execute the operation.
|
||||
|
||||
**CreateOrderCommand:**
|
||||
|
||||

|
||||
|
||||
Each command has a specific command handler that's responsible for executing the operations intended for the command.
|
||||
|
||||
**CreateOrderCommandHandler:**
|
||||
|
||||

|
||||
|
||||
In this case:
|
||||
|
||||
1. Creates an Order object (root aggregate)
|
||||
2. Adds the order items using the root aggregate method
|
||||
3. Adds the order through the repository
|
||||
4. Saves the order
|
||||
|
||||
Queries, on the other hand, just return whatever the UI needs, which could be a domain object or collections of specific DTOs.
|
||||
|
||||
**IOrderQueries:**
|
||||
|
||||

|
||||
|
||||
And they are implemented as plain SQL queries, in this case using [Dapper](http://dapper-tutorial.net/) as a micro-ORM.
|
||||
|
||||
**OrderQueries:**
|
||||
|
||||

|
||||
|
||||
There can even be specific ViewModels or DTOs just to get the query results.
|
||||
|
||||
**OrderViewModel:**
|
||||
|
||||

|
||||
|
||||
### DDD
|
||||
|
||||
The DDD pattern can be checked in the domain layer (Ordering.Domain).
|
||||
|
||||
**Solution Explorer [Ordering.Domain + Ordering.Infrastructure]:**
|
||||
|
||||

|
||||
|
||||
There you can see the Buyer aggregate and the Order aggregate, as well as the repository implementations in Ordering.Infrastructure.
|
||||
|
||||
Command handlers in the application layer use the root aggregates from the Domain layer and the repository implementations from the Infrastructure layer, the latter obtained through Dependency Injection.
|
||||
|
||||
## Additional resources
|
||||
|
||||
- **Issue #592 - [Question] Ordering Queries** \
|
||||
<https://github.com/dotnet-architecture/eShopOnContainers/issues/592>
|
||||
|
||||
- **Applying simplified CQRS and DDD patterns in a microservice** \
|
||||
<https://docs.microsoft.com/en-us/dotnet/standard/microservices-architecture/microservice-ddd-cqrs-patterns/apply-simplified-microservice-cqrs-ddd-patterns>
|
||||
|
# System requirements
|
||||
## Windows
|
||||
|
||||
### Recommended Hardware requirements for Windows
|
||||
|
||||
- 16 GB of RAM - Hyper-V is needed for Docker Community Edition (a.k.a. Docker Desktop for Windows/Mac) to run the Linux Docker host, and a SQL Server container and a Redis container are also running, so an 8 GB machine might be too tight.
|
||||
|
||||
To run Hyper-V you also need:
|
||||
|
||||
- Windows 10 Pro, Education or Enterprise.
|
||||
- 64-bit Processor with Second Level Address Translation (SLAT).
|
||||
- CPU support for VM Monitor Mode Extension (VT-c on Intel CPUs).
|
||||
- Virtualization must be enabled in the BIOS. Typically, virtualization is enabled by default.
|
||||
- This is different from having Hyper-V enabled.
|
||||
|
||||
### Software requirements for Windows
|
||||
|
||||
- Docker Community Edition (a.k.a. Docker Desktop for Windows) - Requires 64-bit Windows 10 Pro and Hyper-V enabled.
|
||||
- Latest **.NET Core 2.2 SDK** from: https://www.microsoft.com/net/download
|
||||
- (Optional) Visual Studio 2017 **15.8** or later (Visual Studio 2019 recommended) – Much better for debugging multi-container apps.
|
||||
- (Optional) Visual Studio Code.
|
||||
|
||||
If your system meets the Docker requirements above, it will be fine for Visual Studio too.
|
||||
|
||||
### Setting up your development system for Windows
|
||||
|
||||
- Begin by installing Docker Desktop for Windows following the instructions in <https://docs.docker.com/docker-for-windows/install/>.
|
||||
|
||||
- Continue to the [Windows setup wiki page](Windows-setup).
|
||||
|
||||
## Mac
|
||||
|
||||
### Recommended Hardware requirements for Mac
|
||||
|
||||
- 16 GB of RAM - Since you run a VM on the Mac with the Linux Docker host, plus a SQL Server container and a Redis container, 8 GB of RAM might not be enough.
|
||||
- Processor with support for MMU virtualization.
|
||||
|
||||
### Software requirements for Mac
|
||||
|
||||
- Docker Community Edition (a.k.a. Docker for Mac) - Requires OS X El Capitan 10.11 or a newer macOS version.
|
||||
- Latest **.NET Core 2.2 SDK** from: https://www.microsoft.com/net/download
|
||||
- (Optional) Visual Studio for Mac.
|
||||
- (Optional) Visual Studio Code.
|
||||
|
||||
### Setting up your development system for Mac
|
||||
|
||||
- Begin by installing Docker Desktop for Mac following the instructions in <https://docs.docker.com/docker-for-mac/install/>.
|
||||
|
||||
- Continue to the [Mac setup wiki page](Mac-setup).
|
# Unit and integration testing
|
||||
Tests are an excellent way to explore the internals of any application, besides their main purpose of ensuring quality.
|
||||
|
||||
> **CONTENT**
|
||||
|
||||
- [Unit and functional tests per microservice](#unit-and-functional-tests-per-microservice)
|
||||
- [Running Unit Tests](#running-unit-tests)
|
||||
- [Running Functional/Integration Tests](#running-functionalintegration-tests)
|
||||
- [Global integration tests across microservices](#global-integration-tests-across-microservices)
|
||||
- [Load Testing](#load-testing)
|
||||
|
||||
The tests in eShopOnContainers are organized as follows, per type:
|
||||
|
||||
- Tests per microservice
|
||||
- Unit Tests
|
||||
- Functional/Integration Tests
|
||||
|
||||
- Global application tests
|
||||
- Microservices Functional/Integration Tests across the whole application
|
||||
|
||||
## Unit and functional tests per microservice
|
||||
|
||||
Within each microservice's folder there are multiple tests (Unit Tests and Functional Tests) available to validate its behaviour.
|
||||
The test projects are placed within each microservice's physical folder because that helps maintain maximum development autonomy per microservice. This way, in a more advanced scenario, you could even move each microservice to its own GitHub repo, along with its test projects.
|
||||
|
||||
For instance, this is how the project folders look for the *Ordering* microservice, where you also have *Ordering.FunctionalTests* and *Ordering.UnitTests* within that folder structure.
|
||||
|
||||

|
||||
|
||||
### Running Unit Tests
|
||||
|
||||
In order to run the Unit Tests for any microservice, you just need to select them with [*Test Explorer* in Visual Studio](https://docs.microsoft.com/en-us/visualstudio/test/run-unit-tests-with-test-explorer) (or use your preferred tool) and run them.
|
||||
|
||||
For instance, you can filter and see just the Unit Test projects by typing *"UnitTest"* in the filter edit box within **Test Explorer**:
|
||||
|
||||

|
||||
|
||||
Then you can run all or selected tests, like in the following image:
|
||||
|
||||

|
||||
|
||||
These Unit Tests have no dependency on any external infrastructure or any other microservice, and that's why you don't need to spin up additional infrastructure (database server or additional containers).
|
||||
|
||||
### Running Functional/Integration Tests
|
||||
|
||||
In this case, the Functional Tests do have dependencies on additional infrastructure. For instance, they might depend on the microservice's database in the SQL Server container, the messaging broker (RabbitMQ container), etc.
|
||||
|
||||
Therefore, in order to run the Functional Tests you first need to spin up the infrastructure containers.
|
||||
|
||||
To make it easy to get the infrastructure containers up and running, there are docker-compose files you can use with `docker-compose up`. These files are available here:
|
||||
|
||||
https://github.com/dotnet-architecture/eShopOnContainers/tree/feature/orgtestprojects/test
|
||||
|
||||
If you open docker-compose-tests.yml you can see that it just contains the infrastructure containers to spin up:
|
||||
|
||||
```yml
|
||||
# docker-compose-tests.yml
|
||||
|
||||
version: '3'
|
||||
services:
|
||||
redis.data:
|
||||
image: redis:alpine
|
||||
rabbitmq:
|
||||
image: rabbitmq:3-management-alpine
|
||||
sql.data:
|
||||
image: microsoft/mssql-server-linux:2017-latest
|
||||
nosql.data:
|
||||
image: mongo
|
||||
```
|
||||
|
||||
Here is how you start the infrastructure containers with `docker-compose up` in PowerShell or any command-line window:
|
||||
|
||||
> docker-compose -f .\docker-compose-tests.yml -f .\docker-compose-tests.override.yml up
|
||||
|
||||

|
||||
|
||||
Each Functional Test project uses a [TestServer](https://docs.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.testhost.testserver?view=aspnetcore-2.1) configured with the required infrastructure which should be available thanks to the previous "docker-compose up", so the Functional Tests can be run.
|
||||
|
||||
> For more info about **TestServer** and *Functional Tests* and *Integration Tests*, see the article [Integration tests in ASP.NET Core](https://docs.microsoft.com/aspnet/core/test/integration-tests?view=aspnetcore-2.1).
|
||||
|
||||
In order to filter and see the Functional Tests to run, type *"Functional"* in **Test Explorer**.
|
||||
|
||||

|
||||
|
||||
You can, for instance, run the Functional Tests for the Catalog microservice, which, under the covers, access the SQL Server container that should be running in Docker:
|
||||
|
||||

|
||||
|
||||
## Global integration tests across microservices
|
||||
|
||||
So far, we've focused on Unit Tests and Functional Tests for single, isolated microservices, even though the functional tests per microservice do take the infrastructure into account.
|
||||
|
||||
However, in a microservice-based application you also need to test how the multiple microservices interact within the whole application. For instance, you might raise an event from one microservice by publishing it on the Event Bus (based on RabbitMQ) and validate that another microservice, subscribed to it, receives that same event.
|
||||
|
||||
These global Functional/Integration tests need to be placed in a common place instead of within a specific microservice's folder, as they deal with multiple microservices.
|
||||
|
||||
That common place is the **"test/ServiceTests/FunctionalTests"** folder and it has those multiple integration tests for the whole application.
|
||||
|
||||

|
||||
|
||||
In order to run these application services tests, you can filter, like in the following image.
|
||||
|
||||

|
||||
|
||||
Then, making sure that you have the infrastructure containers up and running (thanks to the previous `docker-compose up` command, already explained), select and run the desired global application functional tests, as in the following image:
|
||||
|
||||

|
||||
|
||||
## Load Testing
|
||||
|
||||
Load Testing for eShopOnContainers is described in the [Load testing](Load-testing) page.
|
||||
|
# Using Azure resources
|
||||
This page contains details about configuring eShopOnContainers to use Azure resources.
|
||||
|
||||
See the page [Deploying Azure resources](Deploying-Azure-resources) for information about deploying the required resources.
|
||||
|
||||
- [Azure Redis Cache service](#azure-redis-cache-service)
|
||||
- [Azure Service Bus service](#azure-service-bus-service)
|
||||
- [Azure Storage Account service](#azure-storage-account-service)
|
||||
- [Check status of Azure Storage Account with Health Checks](#check-status-of-azure-storage-account-with-health-checks)
|
||||
- [Azure SQL Database](#azure-sql-database)
|
||||
- [Azure Cosmos DB](#azure-cosmos-db)
|
||||
- [Azure Functions](#azure-functions)
|
||||
|
||||
**Note**: It is very important to disable any ESHOP_AZURE variables in the .env file when the local storage or container services are used. Remember that you can disable any variable in the .env file by putting a '#' character before the variable declaration, so you can run and test each Azure service separately.
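For instance, commenting a variable out can be scripted. Here's a minimal sketch against a throwaway `.env` fragment (the variable name is real; the value is made up):

```shell
# Create a throwaway .env fragment to demonstrate (hypothetical value).
cat > .env.sample <<'EOF'
ESHOP_AZURE_REDIS_BASKET_DB=myredis.redis.cache.windows.net:6379,ssl=False
EOF

# Disable the Azure override by prefixing the declaration with '#',
# so the local Redis container is used instead.
sed -i 's/^ESHOP_AZURE_REDIS_BASKET_DB=/#&/' .env.sample

cat .env.sample
```

With the line commented out, docker-compose falls back to whatever default the compose file defines for that setting.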
|
||||
|
||||
With the steps explained in the next section, you will be able to run the application with Azure Redis Cache instead of the Redis container.
|
||||
|
||||
# Azure Redis Cache service
|
||||
|
||||
To enable Azure Redis Cache in eShop you must first have created the Azure Redis service, either through the ARM file or manually through the Azure portal. You can use the [ARM files](deploy/az/redis/readme.md) already created in eShop. Once the Redis Cache service is created, get the Primary connection string from the service information in the Azure portal and modify the port value from 6380 to 6379 and the ssl value from True to False, to establish a non-SSL connection with the cache server. This Primary connection string must be declared in the .env file located in the solution root folder, using the `ESHOP_AZURE_REDIS_BASKET_DB` variable name.
|
||||
|
||||
For example:
|
||||
>ESHOP_AZURE_REDIS_BASKET_DB=yourredisservice.redis.cache.windows.net:6379,password=yourredisservicepassword,ssl=False,abortConnect=False
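The two edits (port and `ssl` flag) can be applied mechanically. A sketch with a made-up connection string:

```shell
# Primary connection string as copied from the Azure portal (hypothetical values).
conn='myredis.redis.cache.windows.net:6380,password=secret,ssl=True,abortConnect=False'

# Switch to the non-SSL port and disable SSL, as described above.
conn=$(printf '%s' "$conn" | sed -e 's/:6380/:6379/' -e 's/ssl=True/ssl=False/')

echo "ESHOP_AZURE_REDIS_BASKET_DB=$conn"
```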
|
||||
|
||||
With the steps explained in the next section, you will be able to run the application with Azure Service Bus instead of the RabbitMQ container.
|
||||
|
||||
# Azure Service Bus service
|
||||
|
||||
To enable Azure Service Bus in the eShop solution you must first have created the Service Bus service, either through the ARM file or manually through the Azure portal. You can use the [ARM files](deploy/az/servicebus/readme.md) already created in eShop. Finally, get the Shared access policy named "Root" (if you generated the service through the ARM file) from the eshop_event_bus topic. This policy must be declared in the .env file located in the solution root folder, using the `ESHOP_AZURE_SERVICE_BUS` name.
|
||||
|
||||
For example:
|
||||
>ESHOP_AZURE_SERVICE_BUS=Endpoint=sb://yourservicebusservice.servicebus.windows.net/;SharedAccessKeyName=Root;SharedAccessKey=yourtopicpolicykey=;EntityPath=eshop_event_bus
|
||||
|
||||
Once the Service Bus service is created, set the "AzureServiceBusEnabled" environment variable to true in the `settings.json` file of Catalog.API, Ordering.API, Basket.API, Payment.API, GracePeriodManager, Marketing.API and Locations.API.
|
||||
|
||||
With the steps explained in the next section, you will be able to run the application with Azure Storage Account instead of the local container storage.
|
||||
|
||||
# Azure Storage Account service
|
||||
|
||||
To enable Azure Storage in the eShop solution you must first have created the storage service, either through the ARM file or manually through the Azure portal. You can use the ARM files found under the **deploy/az/storage** folder, already created in eShop. Once the storage account is created, it is very important to create a new container (blob kind) and upload the solution's catalog pictures before continuing. Later, set the "AzureStorageEnabled" environment variable to true in the `settings.json` of Catalog.API and Marketing.API. Finally, get the container endpoint URL from the service information in the Azure portal. This URL must be declared in the .env file located in the solution root folder, using `ESHOP_AZURE_STORAGE_CATALOG` for the Catalog.API content and `ESHOP_AZURE_STORAGE_MARKETING` for the Marketing.API content.
|
||||
|
||||
Do not forget to put a slash character '/' at the end of the URL.
|
||||
|
||||
For example:
|
||||
>ESHOP_AZURE_STORAGE_CATALOG=https://yourcatalogstorageaccountservice.blob.core.windows.net/yourcontainername/
|
||||
>ESHOP_AZURE_STORAGE_MARKETING=https://yourmarketingstorageaccountservice.blob.core.windows.net/yourcontainername/
|
||||
|
||||
|
||||
## Check status of Azure Storage Account with Health Checks
|
||||
|
||||
It is possible to add a status check for the Azure Storage Account inside the Catalog Web Status. If the status check is enabled for the Catalog and/or Marketing section in the WebStatus page, Azure Storage will be checked as one of the dependencies for these APIs. To enable this check, add the account name and key to the .env file for your account.
|
||||
|
||||
For example:
|
||||
>ESHOP_AZURE_STORAGE_CATALOG_NAME=storageaccountname
|
||||
>ESHOP_AZURE_STORAGE_CATALOG_KEY=storageaccountkey
|
||||
>ESHOP_AZURE_STORAGE_MARKETING_NAME=storageaccountname
|
||||
>ESHOP_AZURE_STORAGE_MARKETING_KEY=storageaccountkey
|
||||
|
||||
With the steps explained in the next section, you will be able to run the application with Azure SQL Database instead of local storage.
|
||||
|
||||
# Azure SQL Database
|
||||
|
||||
To enable Azure SQL Database in eShop you need an Azure SQL server with the databases for Ordering.API, Identity.API, Catalog.API and Marketing.API. You can use the [ARM files](deploy/az/sql/readme.md) already created in this project or create them manually. Once the databases are created, get the connection string for each service and set the corresponding variable in the .env file.
|
||||
|
||||
For example:
|
||||
>ESHOP_AZURE_CATALOG_DB=catalogazureconnectionstring
|
||||
>ESHOP_AZURE_IDENTITY_DB=identityazureconnectionstring
|
||||
>ESHOP_AZURE_ORDERING_DB=orderingazureconnectionstring
|
||||
>ESHOP_AZURE_MARKETING_DB=marketingazureconnectionstring
|
||||
|
||||
With the steps explained in the next section, you will be able to run the application with Azure Cosmos DB Database instead of local storage.
|
||||
|
||||
# Azure Cosmos DB
|
||||
|
||||
To enable Azure Cosmos DB in eShop you need the connection string. If you do not have an Azure Cosmos DB created, you can use the ARM files under the **deploy/az/cosmos** folder available in eShop or create it manually. Once the connection string is available, add it to the .env file in the `ESHOP_AZURE_COSMOSDB` variable.
|
||||
|
||||
For example:
|
||||
>ESHOP_AZURE_COSMOSDB=cosmosconnectionstring
|
||||
|
||||
# Azure Functions
|
||||
|
||||
To enable the Azure Functions in eShop, add the URI where the functions have been deployed. You can use the ARM files under **deploy/az/azurefunctions** to create the resources in Azure. Once created and available, add the `ESHOP_AZUREFUNC_CAMPAIGN_DETAILS_URI` variable to the .env file.
|
||||
|
||||
For example:
|
||||
>ESHOP_AZUREFUNC_CAMPAIGN_DETAILS_URI=https://marketing-functions.azurewebsites.net/api/MarketingDetailsHttpTrigger?code=AzureFunctioncode
|
||||
|
||||
See the Azure Functions deployment files and readme for more details: [ARM files](deploy/az/azurefunctions/readme.md)
|
# Using HealthChecks
|
||||
The ASP.NET Core 2.2 HealthChecks package is used in all APIs and applications of eShopOnContainers.
|
||||
|
||||
All applications and APIs expose two endpoints (`/liveness` and `/hc`) to check the current application and all its dependencies. The `/liveness` endpoint is intended to be used as a liveness probe in Kubernetes and `/hc` is intended to be used as a readiness probe.
|
||||
|
||||
## Implementing health checks in ASP.NET Core services
|
||||
|
||||
Here is the documentation on how to implement HealthChecks in ASP.NET Core 2.2:
|
||||
|
||||
https://docs.microsoft.com/dotnet/standard/microservices-architecture/implement-resilient-applications/monitor-app-health
|
||||
|
||||
Also, there's a **nice blog post** on HealthChecks by @scottsauber:
|
||||
https://scottsauber.com/2017/05/22/using-the-microsoft-aspnetcore-healthchecks-package/
|
||||
|
||||
## Implementation in eShopOnContainers
|
||||
|
||||
The readiness endpoint (`/hc`) checks all the dependencies of the API. Let's take the MVC client as an example. This client depends on:
|
||||
|
||||
* Web purchasing BFF
|
||||
* Web marketing BFF
|
||||
* Identity API
|
||||
|
||||
So, the following code is added to `ConfigureServices` in `Startup`:
|
||||
|
||||
```cs
|
||||
services.AddHealthChecks()
|
||||
.AddCheck("self", () => HealthCheckResult.Healthy())
|
||||
.AddUrlGroup(new Uri(configuration["PurchaseUrlHC"]), name: "purchaseapigw-check", tags: new string[] { "purchaseapigw" })
|
||||
.AddUrlGroup(new Uri(configuration["MarketingUrlHC"]), name: "marketingapigw-check", tags: new string[] { "marketingapigw" })
|
||||
.AddUrlGroup(new Uri(configuration["IdentityUrlHC"]), name: "identityapi-check", tags: new string[] { "identityapi" });
|
||||
return services;
|
||||
```
|
||||
|
||||
Four checkers are added: one named "self" that will always return OK, and three that will check the dependent services. The next step is to add the two endpoints (`/liveness` and `/hc`). Note that `/liveness` must always return HTTP 200: if the liveness endpoint can be reached, the MVC web app is in a healthy state, although it may not be usable if some dependent service is unhealthy.
|
||||
|
||||
```cs
|
||||
app.UseHealthChecks("/liveness", new HealthCheckOptions
|
||||
{
|
||||
Predicate = r => r.Name.Contains("self")
|
||||
});
|
||||
```
|
||||
|
||||
The predicate defines which checkers are executed. In this case, for the `/liveness` endpoint we only want to run the checker named "self" (the one that always returns OK).
|
||||
|
||||
Next step is to define the `/hc` endpoint:
|
||||
|
||||
```cs
|
||||
app.UseHealthChecks("/hc", new HealthCheckOptions()
|
||||
{
|
||||
Predicate = _ => true,
|
||||
ResponseWriter = UIResponseWriter.WriteHealthCheckUIResponse
|
||||
});
|
||||
```
|
||||
|
||||
In this case we want to run **all checkers defined**, so the predicate always returns true to select all of them.
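To picture what a Kubernetes probe does with these endpoints, here is a rough shell sketch; the JSON shape is an assumption loosely based on the HealthChecks UI response writer, not the exact payload:

```shell
# Hypothetical /hc response body (real payloads contain more detail).
response='{"status":"Healthy","entries":{"self":{"status":"Healthy"}}}'

# A readiness gate treats anything other than an overall "Healthy" as not ready.
if printf '%s' "$response" | grep -q '^{"status":"Healthy"'; then
  ready=yes
else
  ready=no
fi
echo "ready=$ready"
```

If any registered checker reports unhealthy, the overall `status` changes and the probe fails, so Kubernetes stops routing traffic to the pod until it recovers.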
|
||||
|
||||
## Configuring probes for Kubernetes using health checks
|
||||
|
||||
Helm charts already configure the needed probes in Kubernetes using the health checks, but you can override the provided configuration by **editing the file `/k8s/helm/<chart-folder>/values.yaml`**. You'll see code like this:
|
||||
|
||||
```yaml
|
||||
probes:
|
||||
liveness:
|
||||
path: /liveness
|
||||
initialDelaySeconds: 10
|
||||
periodSeconds: 15
|
||||
port: 80
|
||||
readiness:
|
||||
path: /hc
|
||||
timeoutSeconds: 5
|
||||
initialDelaySeconds: 90
|
||||
periodSeconds: 60
|
||||
port: 80
|
||||
```
|
||||
|
||||
You can remove a probe or update its configuration. The default configuration is the same for all charts:
|
||||
|
||||
* 10 seconds before k8s starts to test the liveness probe
|
||||
* 1 sec of timeout for liveness probe (**not configurable**)
|
||||
* 15 sec between liveness probe calls
|
||||
* 90 seconds before k8s starts to test the readiness probe
|
||||
* 5 sec of timeout for readiness probe
|
||||
* 60 sec between readiness probe calls
|
# Webhooks
|
||||
eShopOnContainers supports using _webhooks_ to notify external services about events that happened inside eShopOnContainers. A new API and a webhooks demo client were developed.
|
||||
|
||||
> **CONTENT**
|
||||
|
||||
- [Webhooks API](#webhooks-api)
|
||||
- [Registering a webhook](#registering-a-webhook)
|
||||
- [Webhooks client](#webhooks-client)
|
||||
|
||||
## Webhooks API
|
||||
|
||||
The Webhooks API is exposed directly (not through any BFF) because its usage is not tied to any particular client. The API offers endpoints to register and view the current webhooks. The API is authenticated, so you can only register a new webhook when authenticated against Identity.API, and when you list the webhooks you only see the ones you registered.
|
||||
|
||||
### Registering a webhook
|
||||
|
||||
Registering a webhook is a process that involves two parties: the Webhooks API and the webhooks client (outside eShopOnContainers). To avoid registering URLs that aren't under the client's control, a basic security mechanism (known as URL granting) is used when registering webhooks:
|
||||
|
||||
- When registering the webhook (using Webhooks API under authenticated account) you must pass a token (any string value up to you) and a "Grant URL".
|
||||
- Webhooks API will call the "Grant URL" using HTTP `OPTIONS` and passing the token sent by you in the `X-eshop-whtoken` header.
|
||||
- Webhooks API expects to receive a successful HTTP status code **and** the same token in the same `X-eshop-whtoken` header in the response.
|
||||
|
||||
If the token is not sent in the response, or the HTTP status code is not successful, the Webhooks API returns HTTP status code 418 (because trying to register a URL owned by someone else is almost the same as making coffee in a teapot ;-)).
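The handshake boils down to an echo check. The following is an illustrative simulation of that logic only, not the real HTTP exchange:

```shell
# Token chosen by the registrant (hypothetical value).
sent_token='my-registration-token'

# A well-behaved grant endpoint echoes the X-eshop-whtoken value back.
grant_endpoint() { printf '%s' "$1"; }

returned_token=$(grant_endpoint "$sent_token")

# The Webhooks API registers the hook only if the tokens match.
if [ "$returned_token" = "$sent_token" ]; then
  result='registered'
else
  result='refused (418)'
fi
echo "$result"
```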
|
||||
|
||||
For security reasons, the "Grant URL" used and the URL for the webhook MUST belong to the same host.
|
||||
|
||||
When eShopOnContainers sends the webhook, this token is also sent (in the same header), giving the client the choice of whether to process the hook.
|
||||
|
||||
## Webhooks client
|
||||
|
||||
The Webhooks client is a basic web app (developed with Razor Pages) that allows you to test the eShopOnContainers webhooks system. It allows you to register the "OrderPaid" webhook.
|
||||
|
||||
The client is exposed directly (like all other clients). In k8s the ingress path is `/webhooks-web`.
|
||||
|
||||
Here are the configuration values of this demo client (with the values used in default compose file):
|
||||
|
||||
```yaml
|
||||
- ASPNETCORE_URLS=http://0.0.0.0:80
|
||||
- Token=6168DB8D-DC58-4094-AF24-483278923590 # Webhooks are registered with this token
|
||||
- IdentityUrl=http://10.0.75.1:5105
|
||||
- CallBackUrl=http://localhost:5114
|
||||
- WebhooksUrl=http://webhooks.api
|
||||
- SelfUrl=http://webhooks.client/
|
||||
```
|
||||
|
||||
- `Token`: The client will always send this token when the Webhooks API asks for URL grant. The client also expects this token to be in the webhooks sent by eShopOnContainers.
|
||||
- `IdentityUrl`: URL of the Identity API
|
||||
- `CallBackUrl`: Callback url for Identity API
|
||||
- `WebhooksUrl`: URL of webhooks API (using internal containers networking)
|
||||
- `SelfUrl`: URL where the demo client can be reached from the Webhooks API. In k8s deployments the ingress-based URL is used; in compose the internal URL has to be used.
|
||||
|
||||
There is an additional configuration value named `ValidateToken`. If set to `true` (it defaults to `false`), the webhook demo client ensures that the webhook sent by eShopOnContainers carries the same token as the `Token` configuration value. If set to `false`, the client accepts any hook, regardless of its token header value.
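The client-side decision can be summarized as follows; this is a sketch of the logic with hypothetical values (the real client is an ASP.NET Core Razor Pages app):

```shell
validate_token=true                                # the ValidateToken setting
configured='6168DB8D-DC58-4094-AF24-483278923590'  # the Token setting
received='6168DB8D-DC58-4094-AF24-483278923590'    # X-eshop-whtoken header of the incoming hook

# Reject only when validation is on and the tokens differ.
if [ "$validate_token" = true ] && [ "$received" != "$configured" ]; then
  decision=reject
else
  decision=accept
fi
echo "decision=$decision"
```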
|
||||
|
||||
>**Note**: Regardless of the value of the `ValidateToken` configuration entry, note that the client **always sends back the value of the `Token` entry when granting the URL**.
|
# Windows setup
|
||||
This page covers the setup of your Windows development computer and assumes you've already:
|
||||
|
||||
- Ensured your system meets the [system requirements](System-requirements#Windows) and
|
||||
- Installed Docker Desktop for Windows as directed in \
|
||||
<https://docs.docker.com/docker-for-windows/install/>.
|
||||
|
||||
The approach followed is to get the app running from the CLI first, since that's usually easier to deploy, and then move on to the option of using Visual Studio.
|
||||
|
||||
> **CONTENT**
|
||||
|
||||
- [Configure Docker](#configure-docker)
|
||||
- [Memory and CPU](#memory-and-cpu)
|
||||
- [Shared drives](#shared-drives)
|
||||
- [Configure local networking](#configure-local-networking)
|
||||
- [Build and deploy eShopOnContainers](#build-and-deploy-eshoponcontainers)
|
||||
- [1. Create a folder for your repositories](#1-create-a-folder-for-your-repositories)
|
||||
- [2. Clone eShopOnContainer's GitHub repo](#2-clone-eshoponcontainers-github-repo)
|
||||
- [3. Build the application](#3-build-the-application)
|
||||
- [4. Deploy to the local Docker host](#4-deploy-to-the-local-docker-host)
|
||||
- [Explore the application](#explore-the-application)
|
||||
- [Optional - Use Visual Studio](#optional---use-visual-studio)
|
||||
- [Server side (Microservices and web applications) - Workloads](#server-side-microservices-and-web-applications---workloads)
|
||||
- [Mobile (Xamarin apps for iOS, Android and Windows UWP) - Workloads](#mobile-xamarin-apps-for-ios-android-and-windows-uwp---workloads)
|
||||
- [Stop Docker background tasks on project open](#stop-docker-background-tasks-on-project-open)
|
||||
- [Open eShopOnContainers solution in Visual Studio](#open-eshoponcontainers-solution-in-visual-studio)
|
||||
- [Build and run the application with F5 or Ctrl+F5](#build-and-run-the-application-with-f5-or-ctrlf5)
|
||||
- [Set docker-compose as the default StartUp project](#set-docker-compose-as-the-default-startup-project)
|
||||
- [Debug with several breakpoints across the multiple containers/projects](#debug-with-several-breakpoints-across-the-multiple-containersprojects)
|
||||
- [Issue with "Visual Studio 2017 Tools for Docker" and network proxies/firewalls](#issue-with-%22visual-studio-2017-tools-for-docker%22-and-network-proxiesfirewalls)
|
||||
- [Optional - Use Visual Studio Code](#optional---use-visual-studio-code)
|
||||
- [Explore the code](#explore-the-code)
|
||||
- [Low memory configuration](#low-memory-configuration)
|
||||
- [Additional resources](#additional-resources)
|
||||
|
||||
## Configure Docker
|
||||
|
||||
The initial Docker Desktop configuration is not suitable for running eShopOnContainers, because the app uses a total of 25 Linux containers.
|
||||
|
||||
Even though the microservices are rather light, the application also runs SQL Server, Redis, MongoDB, RabbitMQ and Seq as separate containers. The SQL Server container hosts four databases (for different microservices) and takes a significant amount of memory.
|
||||
|
||||
So it's important to assign Docker enough memory and CPU.
|
||||
|
||||
### Memory and CPU
|
||||
|
||||
Once Docker for Windows is installed, go to the **Settings > Advanced** option, from the Docker icon in the system tray, to configure the minimum amount of memory and CPU like so:
|
||||
|
||||
- Memory: 4096 MB
|
||||
- CPU: 2
|
||||
|
||||
This amount of memory is the absolute minimum to have the app running, which is why a 16 GB RAM machine is recommended for an optimal configuration.
|
||||
|
||||

|
||||
|
||||
[What can I do if my computer has only 8 GB RAM?](#low-memory-configuration)
|
||||
|
||||
### Shared drives
|
||||
|
||||
This step is optional but recommended, as Docker sometimes needs to access the shared drives when building, depending on the build actions.
|
||||
|
||||
This is not really necessary when building from the CLI, but it's mandatory when building from Visual Studio to access the code to build.
|
||||
|
||||
The drive you'll need to share depends on where you place your source code.
|
||||
|
||||

|
||||
|
||||
## Configure local networking
|
||||
|
||||
IMPORTANT: Ports 5100 to 5105 must be open in the local firewall, so authentication to the STS (Security Token Service container, based on IdentityServer) can be done through the 10.0.75.1 IP, which should be available and already set up by Docker. These ports are also needed for remote client apps, like the Xamarin app or the SPA running in a remote browser.
|
||||
|
||||
You can manually create a rule in your local firewall in your development machine or you can just run the **add-firewall-rules-for-sts-auth-thru-docker.ps1** script available in the solution's **cli-windows** folder.
|
||||
|
||||

|
||||
|
||||
**NOTE:** If you get the error **Unable to obtain configuration from: `http://10.0.75.1:5105/.well-known/openid-configuration`** you might need to allow the program `vpnkit` for connections to and from any computer through all ports.
|
||||
|
||||
If you are working within a corporate VPN you might need to run this power shell command every time you power up your machine, to allow access from the `DockerNAT` network:
|
||||
|
||||
```powershell
|
||||
Get-NetConnectionProfile | Where-Object { $_.InterfaceAlias -match "(DockerNAT)" } | ForEach-Object { Set-NetConnectionProfile -InterfaceIndex $_.InterfaceIndex -NetworkCategory Private }
|
||||
```
|
||||
|
||||
## Build and deploy eShopOnContainers

At this point you should be able to run eShopOnContainers from the command line. To do that, follow these steps:

### 1. Create a folder for your repositories

Go to a directory where you want to clone the repo; something like `C:\Users\<username>\source` will be fine.

### 2. Clone [eShopOnContainer's GitHub repo](https://github.com/dotnet-architecture/eShopOnContainers)

```console
git clone https://github.com/dotnet-architecture/eShopOnContainers.git
```

**Note:** Remember that active development is done in the `dev` branch. To test the latest code, use this branch instead of `master`.
### 3. Build the application

```console
cd eShopOnContainers
docker-compose build --build-arg RESTORECMD=scripts/restore-packages
```

While building the Docker images you should see something like the following image. The process should take between 10 and 30 minutes to complete, depending on the system speed.



The first time you run this command it takes additional time, because Docker also needs to pull/download the dotnet/core/aspnet and SDK base images.
### 4. Deploy to the local Docker host

```console
docker-compose up
```

You should see something like this during the first few seconds:



After a few more seconds you should see something like this:



At this point you should be able to navigate to <http://localhost:5107/> and see the WebStatus microservice:


When all microservices are up (green checks) you should be able to navigate to <http://localhost:5100/> and see the home page of eShopOnContainers:



## Explore the application

You can now [explore the application](Explore-the-application) or continue with the optional Visual Studio setup.
## Optional - Use Visual Studio

If you want to explore the code and debug the application to see it working, you have to install Visual Studio.

You need at least VS 2017 (15.9); you can install the latest release from https://visualstudio.microsoft.com/vs/.

When running the installer, select the following workloads depending on the apps you intend to test or work with:

### Server side (Microservices and web applications) - Workloads

- .NET Core cross-platform development
- Azure development (Optional) - Recommended in case you want to deploy to Docker hosts in Azure or use any other Azure infrastructure.


### Mobile (Xamarin apps for iOS, Android and Windows UWP) - Workloads

If you also want to test/work with the eShopOnContainers mobile app based on Xamarin, you need to install the following additional workloads:

- Mobile development with .NET (Xamarin)
- Universal Windows Platform development
- .NET desktop development (Optional) - Not required, but useful in case you also want to test the microservices from WPF or WinForms desktop apps.



IMPORTANT: As mentioned above, make sure you are NOT installing Google's Android emulator with the Intel HAXM hypervisor, or you will run into an incompatibility: Hyper-V won't work on your machine and, therefore, Docker for Windows won't be able to run the Linux host or any other Hyper-V host.

Make sure you DO NOT select the options highlighted below with red arrows:


### Stop Docker background tasks on project open

VS runs some Docker-related tasks when opening a project with Docker support. To prevent these tasks from executing and slowing down your system, you might want to configure these options:


### Open eShopOnContainers solution in Visual Studio

- If testing/working only with the server-side applications and services, open the solution **eShopOnContainers-ServicesAndWebApps.sln** (recommended for most cases, when testing the containers and web apps).
- If testing/working with both the server-side applications/services and the Xamarin mobile apps, open the solution **eShopOnContainers.sln**.

Below you can see the full **eShopOnContainers-ServicesAndWebApps.sln** solution (server side) opened in Visual Studio 2017:



Note how VS 2017 loads the docker-compose.yml files in a special node tree, so it uses that configuration to deploy/debug all the configured containers into your Docker host at the same time.
### Build and run the application with F5 or Ctrl+F5

#### Set docker-compose as the default StartUp project

**IMPORTANT**: If the **"docker-compose" project** is not your default startup project, right-click the "docker-compose" node and select the "Set as Startup Project" menu option, as shown below:



At this point, after waiting some time for the NuGet packages to be restored, you should be able to build the whole solution, or even directly deploy/debug it into Docker by simply hitting F5 or pressing the debug "Play" button, which should now be labeled "Docker":


VS 2017 should compile the .NET projects, then create the Docker images and, finally, deploy the containers in the Docker host (by default, the Linux VM in Docker for Windows).

Note that the first time you hit F5 it'll take longer, a few minutes at least, because in addition to compiling your code, it needs to pull/download the base images (SQL Server for Linux, Redis, ASP.NET, etc.) and register them in the local image repo of your PC. The next time you hit F5 it'll be much faster.

Finally, because the docker-compose configuration project is set up to open the MVC application, it should open your default browser and show the MVC application with data coming from the microservices/containers:



Here's how the docker-compose configuration project is configured to open the MVC application:



Finally, you can check how the multiple containers are running in your Docker host by running the **`docker ps`** command, like below:



You can see the 8 containers running, which ports are exposed, etc.
#### Debug with several breakpoints across the multiple containers/projects

Something very compelling and productive in VS 2017 is the capability to debug with several breakpoints across multiple containers/projects.

For instance, you could set a breakpoint in a controller within the MVC web app, plus a second breakpoint in a controller within the Catalog Web API microservice, then refresh the browser (or hit F5 again), and VS will stop at your breakpoints within the microservices running in Docker, as shown below! :)

Breakpoint at the MVC app running as a Docker container in the Docker host:



Press F5 again...

Breakpoint at the Catalog microservice running as a Docker container in the Docker host:



And that's it! Super simple! Visual Studio handles all the complexity under the covers, so you can just hit F5 and debug a multi-container application!
### Issue with "Visual Studio 2017 Tools for Docker" and network proxies/firewalls

After installing VS 2017 with Docker support, if you cannot debug properly and you are working from a corporate network behind a proxy, consider the following issue and workarounds, until it's fixed in Visual Studio:

- https://github.com/dotnet-architecture/eShopOnContainers/issues/224#issuecomment-319462344
## Optional - Use Visual Studio Code

After installing VS Code from <a href='https://code.visualstudio.com/'>Visual Studio Code</a> you can edit a particular file or "open" the whole solution folder, like in the following screenshots:

`Opening the Solution's folder`



`Editing a .yml file`



It is also recommended to install the C# extension and the Docker extension for VS Code:


## Explore the code

You should now be ready to begin learning by [exploring the code](Explore-the-code) and debugging eShopOnContainers.
## Low memory configuration

If your computer has only 8 GB RAM, you **might** still get eShopOnContainers up and running, but it's not guaranteed, and you won't be able to run Visual Studio. You might be able to run VS Code, but you'll be limited to the CLI. You might even need to run Chromium or some other bare browser; Chrome will most probably not do. You'll also need to close any other programs running on your machine.

The easiest way to get the Chromium binaries directly from Google is to install the [node-chromium package](https://www.npmjs.com/package/chromium) in a folder and then look for the `chrome.exe` program, as follows:
1. Install node.
2. Create a folder wherever best suits you.
3. Run `npm install --save chromium`
4. After installation finishes, go to the folder `node_modules\chromium\lib\chromium\chrome-win` (in Windows) to find `chrome.exe`
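The steps above can be collected into one script. This is a dry run: each command is only echoed so you can review the plan (and so the sketch runs without node installed); drop the `run` prefix to execute it for real, assuming node/npm are available. The folder name `chromium-install` is just an example.

```shell
# Dry-run helper: print each command instead of executing it.
run() { echo "+ $*"; }

run mkdir chromium-install            # step 2: create a folder
run cd chromium-install
run npm install --save chromium       # step 3: download the Chromium binaries
# step 4: on Windows the browser binary ends up here:
run node_modules/chromium/lib/chromium/chrome-win/chrome.exe
```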
The installation process should look something like this:


## Additional resources

- **[eShopOnContainers issue] Can't display login page on MVC app** \
  <https://github.com/dotnet-architecture/eShopOnContainers/issues/295#issuecomment-327973650>

- **[docs.microsoft.com issue] Configuring Windows vEthernet Adapter Networks to Properly Support Docker Container Volumes** \
  <https://github.com/dotnet/docs/issues/11528#issuecomment-486662817>

- **[eShopOnContainers PR] Add Power Shell script to set network category to private for DockerNAT** \
  <https://github.com/dotnet-architecture/eShopOnContainers/pull/1019>

- **Troubleshoot Visual Studio development with Docker (Networking)** \
  <https://docs.microsoft.com/en-us/visualstudio/containers/troubleshooting-docker-errors?view=vs-2019#errors-specific-to-networking-when-debugging-your-application>

- **[eShopOnContainers issue] Projects won't load in VS2019** \
  <https://github.com/dotnet-architecture/eShopOnContainers/issues/1013#issuecomment-488664792>
---

# Xamarin-setup.md

_IMPORTANT: This section is in an early draft state and will keep evolving._
## Important Notes for the Xamarin app

- Note that the Xamarin app can run in "mock mode", so you won't need any connection to the microservices; in that case, the data shown is fake data generated by the client Xamarin app.

- To really access the microservices/containers, you'll need to deploy the containers following this ["production" deployment procedure for the containers](Docker-host) and then, when NOT using mock mode, provide the external IP of your dev machine (or the DNS name or IP of the Docker host you are using) in the Xamarin app settings.

## Guidance on Architecture patterns of Xamarin.Forms apps

The following book (in early draft state) is being created in alignment with this sample/reference Xamarin app.
You can download it here:

<a href='https://aka.ms/xamarinpatternsebook'> <img src="images/eBooks/xamarinpatternsebook-cover.png"></a>

[**Download** (Early DRAFT, still work in progress)](https://aka.ms/xamarinpatternsebook)
---

# _Sidebar.md
## eShopOnContainers

- [Home](Home)
- [Roadmap](Roadmap)
- [e-books](eBooks)
- [Follow updates](https://github.com/dotnet-architecture/News/issues?q=is%3Aopen+is%3Aissue)

## Getting started

- [System requirements](System-requirements)
- Development setup
  - [Windows](Windows-setup)
  - [Mac](Mac-setup)
  - [Xamarin](Xamarin-setup)
- [Databases & containers](Databases-and-containers)

### [FREQUENT ERRORS](Frecuent-errors)

## Explore

- [Architecture](Architecture)
- [Application](Explore-the-application)
- [Code](Explore-the-code)
- [Simplified CQRS & DDD](Simplified-CQRS-and-DDD)
- [API gateways](API-gateways)
- [Webhooks](Webhooks)
- [Azure Key Vault](Azure-Key-Vault)
- Logging and Monitoring
  - [Serilog & Seq](Serilog-and-Seq)
  - [Using HealthChecks](Using-HealthChecks)
  - [ELK Stack](ELK-Stack)
  - [Application Insights](Application-Insights)
- Tests
  - [Unit & Integration](Unit-and-integration-testing)
  - [Load](Load-testing)

## Deployment

- [Docker-compose files](Docker-compose-deployment-files)

### Local

- [Kubernetes](Deploy-to-Local-Kubernetes)
- [Windows containers](Deploy-to-Windows-containers)

### Production (generic)

- [Docker host](Docker-host)

### Cloud

- [Azure Kubernetes Service (AKS)](Deploy-to-Azure-Kubernetes-Service-(AKS))
- [Azure Dev Spaces](Azure-Dev-Spaces)
- [Using Azure resources](Using-Azure-resources)
- [Deploying Azure resources](Deploying-Azure-resources)

## DevOps

- [Azure DevOps pipelines](Azure-DevOps-pipelines)
---

# eBooks.md
While developing this reference application, we also created the following companion **Reference Guides/eBooks**:

| Architecting & Developing | |
|---------------------------|---|
| <a href='https://aka.ms/microservicesebook'><img src="images/eBooks/microservicesebook-cover.png"></a> | <ul><li><a href='https://aka.ms/microservicesebook'>**Download .PDF** (v2.2 Edition)</a></li><li><a href="https://docs.microsoft.com/dotnet/standard/microservices-architecture/">Read online</a></li><li><a href="Microservices-Architecture-eBook-changelog">Changelog</a></li></ul> |

| Containers Lifecycle & CI/CD | |
|------------------------------|---|
| <a href='https://aka.ms/dockerlifecycleebook'> <img src="images/eBooks/dockerlifecycleebook-cover.png"></a> | <ul><li><a href='https://aka.ms/dockerlifecycleebook'>**Download** </a></li><li><a href="https://docs.microsoft.com/dotnet/standard/containerized-lifecycle-architecture/">Read online</a></li><li><a href="Microservices-DevOps-eBook-changelog">Changelog</a></li></ul> |

| App patterns with Xamarin.Forms | |
|---------------------------------|---|
| <a href='https://aka.ms/xamarinpatternsebook'> <img src="images/eBooks/xamarinpatternsebook-cover.png"></a> | <ul><li><a href='https://aka.ms/xamarinpatternsebook'>**Download** </a></li></ul> |