Doc updated (parent a9eb2c97ab, commit 40ff688a6b)
@ -54,25 +54,6 @@ Services expose Pods to external networks. For example, the `basket` Service exp
>    app: eshop
>    component: basket
>```

Kubernetes's built-in DNS service resolves Service names to cluster-internal IP addresses. This allows the nginx frontend to proxy connections to the app's microservices by name:

>`nginx.conf`
>```
>location /basket-api {
>    proxy_pass http://basket;
>```

The frontend Pod is different in that it needs to be exposed outside the cluster. This is accomplished with another Service:

>`frontend.yaml`
>```yaml
>spec:
>  ports:
>  - port: 80
>    targetPort: 8080
>  selector:
>    app: eshop
>    component: frontend
>  type: LoadBalancer
>```

`type: LoadBalancer` tells Kubernetes to expose the Service behind a load balancer appropriate for the cluster's platform. For Azure Container Service, this creates an Azure load balancer rule with a public IP.
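
As a quick check (the Service name `frontend` is taken from the manifest above), you can watch Kubernetes assign that public IP by querying the Service until the EXTERNAL-IP column is populated:

>```
>kubectl get service frontend --watch
>```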

### Deployments

Kubernetes uses Pods to organize containers, and Services to network them. It uses **Deployments** to organize the creation and modification of Pods. A Deployment describes a state of one or more Pods. When a Deployment is created or modified, Kubernetes attempts to realize that state.
@ -91,30 +72,20 @@ This allows the deployment script to change images before Kubernetes creates the
>kubectl rollout resume -f deployments.yaml
>```

### Ingress

Ingress is the preferred way (since Kubernetes 1.1) to manage external exposure of services running in the cluster. An Ingress resource defines which services are exposed outside the cluster and under which conditions. An _Ingress controller_ (a Pod plus a Service) is then paired with the Ingress resource to enforce those rules.

In eShopOnContainers we use [nginx-ingress](https://github.com/kubernetes/ingress-nginx), an Ingress controller implementation based on NGINX.
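
For illustration only, an Ingress resource that routes a path to one of the app's Services might look like the sketch below; the resource name and path are assumptions for the example, not values copied from the project's manifests:

>```yaml
>apiVersion: extensions/v1beta1
>kind: Ingress
>metadata:
>  name: eshop-ingress              # hypothetical name
>  annotations:
>    kubernetes.io/ingress.class: nginx
>spec:
>  rules:
>  - http:
>      paths:
>      - path: /basket-api          # route the basket API through the controller
>        backend:
>          serviceName: basket      # the existing basket Service
>          servicePort: 80
>```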

### ConfigMaps

A **ConfigMap** is a collection of key/value pairs commonly used to provide configuration information to Pods. The deployment script creates two ConfigMaps:

* `urls`, which contains all the URLs needed (internal and external).
* `externalcfg`, which contains all the settings for the microservices. This ConfigMap is created from the file passed in the `-configFile` parameter.

Some additional ConfigMaps needed by nginx-ingress are also created.
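
As a hedged sketch of how ConfigMaps like these can be created from the command line (generic kubectl usage; the exact keys and literals used by `deploy.ps1` may differ):

>```
>kubectl create configmap urls --from-literal=BasketUrl=http://<external-ip>/basket-api
>kubectl create configmap externalcfg --from-file=conf_local.yml
>```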

This facilitates rapid iteration better than other techniques, such as building an image to bake in configuration.

Configuration for all Pods is stored in a ConfigMap named `externalcfg`, created from the file `conf_local.yml`. This file contains the configuration to run k8s with "local" resources (that is, using SQL Server, Redis, RabbitMQ, etc., as containers in k8s). You can provide your own configuration file if you want to use external resources (i.e. if you want to use resources in Azure). The deployment script accepts the `-configFile` parameter to set which config file must be used to create the `externalcfg` ConfigMap.
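
As a hedged sketch, a Deployment can surface such a ConfigMap to a container as environment variables like this; the container shown is illustrative, and the actual eShopOnContainers manifests may wire the settings differently (for example, with individual `configMapKeyRef` entries):

>```yaml
>spec:
>  containers:
>  - name: catalog                  # illustrative container
>    image: eshop/catalog.api
>    envFrom:
>    - configMapRef:
>        name: externalcfg          # every key becomes an environment variable
>```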
@ -149,7 +120,6 @@ You might wonder why `urls` ConfigMap is created by the script instead to be cre
* [kubectl for Docker Users](https://kubernetes.io/docs/user-guide/docker-cli-to-kubectl/)
* [Kubernetes API reference](https://kubernetes.io/docs/api-reference/v1.5/)

# Setting eShopOnContainers up on Kubernetes (in Azure Container Service)

## Prerequisites
@ -274,18 +244,42 @@ Right after running that command you now will have the file `config` with the ne

## Deploying eShopOnContainers to Kubernetes in ACS (Azure Container Service) with the deployment script

Before installing anything:

1. Open a PowerShell command line at the `k8s` directory of your local eShopOnContainers repository.
2. Ensure `docker`, `docker-compose`, and `kubectl` are on the path, and configured for your Docker machine and Kubernetes cluster.

### Step 1: Install nginx-ingress

To install nginx-ingress, follow these steps:

1. Run `.\deploy-ingress.ps1`
2. Run `.\deploy-ingress-azure.ps1`

Neither PowerShell script accepts parameters. The first script creates the generic nginx-ingress resources, and the second one creates the resources specific to running nginx-ingress under ACS/AKS.

To check that everything is installed, run `kubectl get services -n ingress-nginx`. The output should be something like:

```
NAME                   CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
default-http-backend   10.0.171.196   <none>        80/TCP                       1d
ingress-nginx          10.0.93.222    aa.bb.cc.dd   80:32133/TCP,443:31472/TCP   19h
```

>**Note**: The EXTERNAL-IP of the ingress-nginx Service takes a few minutes to appear. This will be the external IP of the cluster.

### Step 2a: Install eShopOnContainers using the public images provided by Microsoft

Microsoft provides public images for eShopOnContainers. The images tagged `:dev` are the most up to date. Use them if you want to install eShopOnContainers on your k8s cluster.

1. Run the `deploy.ps1` script: `.\deploy.ps1 -configFile .\conf_local.yml -imageTag dev -buildImages $false -deployCI $false -deployInfrastructure $true`

This command installs eShopOnContainers using the public images tagged `:dev`. Note that the `-buildImages` parameter is set to `$false` to force the script to neither build nor push the Docker images.

The `deploy.ps1` script autodetects the IP of the ingress Service. If the ingress Service is not installed, or its EXTERNAL-IP is not yet established, an error is thrown and the deployment is cancelled.
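
If you want to check that external IP yourself, a generic kubectl query such as the following works (this is not necessarily the exact call the script makes):

>```
>kubectl get service ingress-nginx -n ingress-nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
>```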

### Step 2b: Install eShopOnContainers using your own images

To use your own images, you need to build them and push them to a registry. Docker Hub and ACR are both supported by the deployment script. If you are using ACR, you'll need to know your Azure Container Registry credentials first. The Docker username and password are provided by Azure Container Registry and can be retrieved from the Azure portal. Optionally, ACR credentials can be obtained by running the following command:

>```
>az acr credential show -n eShopAutogenContainerRegistry
>```
@ -299,21 +293,16 @@ The following example uses your already available Azure Container Registry where
>```
>./deploy.ps1 -registry eshopautogencontainerregistry.azurecr.io -dockerUser eShopAutogenContainerRegistry -dockerPassword YourSuperSecretPassword -configFile ./conf_local.yml
>```

**Note**: If the `-registry` parameter is not passed, Docker Hub is assumed.

**Note**: You don't need to set `-buildImages` to `$true` because that is the default value. The Docker images are built using the multi-stage build Dockerfiles.
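
If you ever need to build and push a single image by hand, the underlying Docker workflow is roughly the sketch below; the Dockerfile path and tag are illustrative assumptions, not paths taken from the repository:

>```
>docker build -t eshopautogencontainerregistry.azurecr.io/basket.api:dev -f src/Services/Basket/Basket.API/Dockerfile .
>docker login eshopautogencontainerregistry.azurecr.io -u eShopAutogenContainerRegistry -p YourSuperSecretPassword
>docker push eshopautogencontainerregistry.azurecr.io/basket.api:dev
>```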

<img src="img/kubernetes/deploying-eshop-to-kubernetes-cluster.png">

The script builds the Docker images, pushes them to your registry at Azure Container Registry, and deploys the application to your cluster. Building the .NET Core projects and/or the Docker images is optional, based on parameters. Please refer to the file `k8s/README.k8s.md` for detailed info about how the script works and its parameters.

While deploying eShopOnContainers with the script, you can see the Docker image repositories being created in your **Azure Container Registry** (which happens right before deploying to Kubernetes) in the Azure portal:

<img src="img/kubernetes/azure-container-service-repos.png">

You can also watch the deployment unfold from the Kubernetes web interface by running `kubectl proxy` in a different PowerShell or bash window and opening a browser at [http://localhost:8001/ui](http://localhost:8001/ui), as in the following screenshot:

<img src="img/kubernetes/kubernetes-admin-ui-01.png">
@ -352,8 +341,6 @@ In that case, you can try to resume/restart that specific deployment by executin

`kubectl rollout resume deployments/webspa`

`kubectl rollout resume deployments/catalog`

Then, check if the deployment was performed successfully by using the Kubernetes dashboard or by using the following command:
@ -362,9 +349,7 @@ Then, check if the deployment was performed successfully by using the Kubernetes

## Scaling out eShopOnContainers in Kubernetes

In order to explicitly scale out any specific service, you can also use kubectl.

For instance, let's say you want to scale out the CATALOG service to 5 instances. You just need to run the following command:

`kubectl scale --replicas=5 deployments/catalog`
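
To confirm the new replica count afterwards (a plain kubectl check, shown here only as a convenience), run:

`kubectl get deployments/catalog`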
@ -1,320 +0,0 @@

# Introduction to Kubernetes and ACS (Azure Container Service)

## Azure Container Service (ACS)

Azure Container Service makes it simpler to create, configure, and manage a cluster of virtual machines in Azure that are preconfigured to run containerized applications (based on the Docker container format).

ACS uses optimized configurations of popular open-source scheduling and orchestration tools such as Kubernetes, Docker Swarm, and Mesosphere DC/OS, so you can scale these applications to thousands of containers.

In this case, targeting Kubernetes, ACS is basically the infrastructure in Azure where you create your Kubernetes cluster.

## Docker and Kubernetes

Docker helps you package applications into images and execute them in containers. Kubernetes is a robust orchestration platform for containerized applications. It abstracts away the underlying network infrastructure and hardware required to run them, simplifying their deployment, scaling, and management.

## Kubernetes from the container up

### Pods
The basic unit of a Kubernetes deployment is the **Pod**. A Pod encapsulates one or more containers. For example, the `basket` Pod specifies two containers:

>`deployments.yaml`
>
>The first container runs the `eshop/basket.api` image:
>```yaml
>spec:
>  containers:
>  - name: basket
>    image: eshop/basket.api
>    env:
>    - name: ConnectionString
>      value: 127.0.0.1
>```
>Note the `ConnectionString` environment variable: containers within a Pod are networked via `localhost`. The second container runs the `redis` image:
>```yaml
>- name: basket-data
>  image: redis:3.2-alpine
>  ports:
>  - containerPort: 6379
>```

Placing `basket` and `basket-data` in the same Pod is reasonable here because the former requires the latter, and owns all its data. If we wanted to scale the service, however, it would be better to place the containers in separate Pods, because the basket API and Redis scale at different rates.

If the containers were in separate Pods, they would no longer be able to communicate via `localhost`; a **Service** would be required.

### Services

Services expose Pods to external networks. For example, the `basket` Service exposes Pods with labels `app=eshop` and `component=basket` to the cluster at large:

>`services.yaml`
>```yaml
>kind: Service
>metadata:
>  ...
>  name: basket
>spec:
>  ports:
>  - port: 80
>  selector:
>    app: eshop
>    component: basket
>```

Kubernetes's built-in DNS service resolves Service names to cluster-internal IP addresses. This allows the nginx frontend to proxy connections to the app's microservices by name:

>`nginx.conf`
>```
>location /basket-api {
>    proxy_pass http://basket;
>```

The frontend Pod is different in that it needs to be exposed outside the cluster. This is accomplished with another Service:

>`frontend.yaml`
>```yaml
>spec:
>  ports:
>  - port: 80
>    targetPort: 8080
>  selector:
>    app: eshop
>    component: frontend
>  type: LoadBalancer
>```

`type: LoadBalancer` tells Kubernetes to expose the Service behind a load balancer appropriate for the cluster's platform. For Azure Container Service, this creates an Azure load balancer rule with a public IP.

### Deployments

Kubernetes uses Pods to organize containers, and Services to network them. It uses **Deployments** to organize the creation and modification of Pods. A Deployment describes a state of one or more Pods. When a Deployment is created or modified, Kubernetes attempts to realize that state.

The Deployments in this project are basic. Still, `deploy.ps1` shows some more advanced Deployment capabilities. For example, Deployments can be paused. Each Deployment of this app is paused at creation:

>`deployments.yaml`
>```yaml
>kind: Deployment
>spec:
>  paused: true
>```

This allows the deployment script to change images before Kubernetes creates the Pods:

>`deploy.ps1`
>```powershell
>kubectl set image -f deployments.yaml basket=$registry/basket.api ...
>kubectl rollout resume -f deployments.yaml
>```

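
As a hedged aside, once the rollout is resumed its progress can be followed with plain kubectl (this is not a step taken from `deploy.ps1`):

>```
>kubectl rollout status -f deployments.yaml
>```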

### ConfigMaps

A **ConfigMap** is a collection of key/value pairs commonly used to provide configuration information to Pods. The deployment script uses one to store the frontend's configuration:

>`deploy.ps1`
>```
>kubectl create configmap config-files --from-file=nginx-conf=nginx.conf
>```

This creates a ConfigMap named `config-files` with key `nginx-conf` whose value is the content of nginx.conf. The frontend Pod mounts that value as `/etc/nginx/nginx.conf`:

>`frontend.yaml`
>```yaml
>spec:
>  containers:
>  - name: nginx
>    ...
>    volumeMounts:
>    - name: config
>      mountPath: /etc/nginx
>  volumes:
>  - name: config
>    configMap:
>      name: config-files
>      items:
>      - key: nginx-conf
>        path: nginx.conf
>```

This facilitates rapid iteration better than other techniques, e.g. building an image to bake in configuration.

The script also stores public URLs for the app's components in a ConfigMap:

>`deploy.ps1`
>```powershell
>kubectl create configmap urls --from-literal=BasketUrl=http://$($frontendUrl)/basket-api ...
>```

>Here's how the `webspa` Deployment uses it:
>
>`deployments.yaml`
>```yaml
>spec:
>  containers:
>  - name: webspa
>    ...
>    env:
>    ...
>    - name: BasketUrl
>      valueFrom:
>        configMapKeyRef:
>          name: urls
>          key: BasketUrl
>```

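
As a convenience (plain kubectl, independent of the project's scripts), the generated values can be inspected directly:

>```
>kubectl get configmap urls -o yaml
>```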

### Further reading

* [Kubernetes Concepts](https://kubernetes.io/docs/concepts/)
* [kubectl for Docker Users](https://kubernetes.io/docs/user-guide/docker-cli-to-kubectl/)
* [Kubernetes API reference](https://kubernetes.io/docs/api-reference/v1.5/)

# Setting eShopOnContainers up on Kubernetes (in Azure Container Service)

## Prerequisites

To create an Azure Container Service cluster using the Azure CLI 2.0, you must:

* Have an **Azure subscription account** ([get a free Azure trial](https://azure.microsoft.com/en-us/free/))
* Have a Docker development environment (based on [Docker Community Edition](https://www.docker.com/community-edition)) with `docker` and `docker-compose`. You should already have this environment if you were already testing eShopOnContainers on Docker. Other than that:
  * Visit [docker.com](https://docker.com) to download the tools and set up the environment. Docker's [installation guide](https://docs.docker.com/engine/getstarted/step_one/#step-3-verify-your-installation) covers verifying your Docker installation.
* Have installed and set up the Azure CLI 2.0 (explained in the next step)

* Installing Azure CLI 2.0

If you don't have the Azure CLI installed, install it by following these steps:
https://docs.microsoft.com/en-us/cli/azure/install-azure-cli

Once the Azure CLI is installed, test it by simply typing "az" and hitting Enter in your PowerShell:

<img src="img/azure-cli/azure-cli-test.png">

Next, to verify the installation was successful, run `az --version` from your command line.
You should see the version number of the Azure CLI and the other dependent libraries installed on your computer.

## Manually creating a Kubernetes cluster in ACS (Azure Container Service) and the Azure environment

As an optional procedure, you can manually create your Kubernetes cluster and Azure Container Registry, as in the following links:

* A Kubernetes cluster. Follow Azure Container Service's [walkthrough](https://docs.microsoft.com/en-us/azure/container-service/container-service-kubernetes-walkthrough) to create one.
* A private Docker registry. Follow Azure Container Registry's [guide](https://docs.microsoft.com/en-us/azure/container-registry/container-registry-get-started-portal) to create one.

However, unless you want to spend more time learning, step by step, how to create an ACS cluster and an ACR, it is recommended to create those prerequisites with the scripts in the following procedure, as it will be much quicker to set up.

## Automatically creating the Azure environment with a PowerShell script provided by eShopOnContainers

Under the **k8s** folder in the root of the eShopOnContainers solution, you can run the **gen-k8s-env.ps1** PowerShell script (from eShopOnContainers) to automatically create the Azure environment needed for a Kubernetes deployment.

Here is the script that creates the Kubernetes cluster in ACS, plus the script and metadata definitions eShopOnContainers uses to deploy into the Kubernetes cluster:

<img src="img/kubernetes/kubernetes-scripts.png">

Steps to follow:

1. Make sure you have **Azure CLI 2.0** or a later version installed on your dev machine. If you don't have it installed, follow the previous section, which explains it.

2. **Log in to Azure through the Azure CLI.** Log in to Azure using a work or school account or a Microsoft account identity. Use the `az login` command with either type of account identity to authenticate through Azure Active Directory. Most customers creating new Azure deployments should use this method. With certain accounts, the `az login` command requires you to log in interactively through a web portal, as shown in the screenshot below.

Type the following in your command prompt:

>```
>az login
>```

<img src="img/azure-cli/az-login-1.png">

You can also use the `az login` command to authenticate a service principal for an Azure Active Directory application, which is useful for running automated services.

After logging in with a supported account identity, you can use either Azure Resource Manager mode or Azure Service Management mode CLI commands.

3. **Select your Azure subscription.** You might have [several Azure subscriptions](https://docs.microsoft.com/en-us/cli/azure/account#set), as shown if you type the following:

>```
>az account list
>```

If you have multiple subscription accounts, you first need to select the Azure subscription account you want to target. Type the following:

>```
>az account set --subscription "Your Azure Subscription Name or ID"
>```

<img src="img/azure-cli/set-azure-subscription-default-to-target.png">
<p>

4. **Run the gen-k8s-env.ps1 script**. This will create the Azure environment needed: basically, a Kubernetes cluster to deploy the containers to, and an Azure Container Registry where you will push the images in the first place. Make sure that you are positioned at the right folder in the PowerShell prompt (like [eShopOnContainers folder]\k8s), then run the following command with your own names and IDs:

>```
>./gen-k8s-env -resourceGroupName eShopAutogenk8sResGroup -location westus -registryName eShopAutogenContainerRegistry -orchestratorName eshop-autogen-k8s-cluster -dnsName eshop-autogen-k8s-dns
>```

The execution should be similar to the following screenshot. It will take a few minutes to complete:

<img src="img/kubernetes/creating-kubernetes-cluster-and-acr.png">

At the end of that execution you will see the passwords generated for your Azure Container Registry. Copy those passwords so you can use them later (and change them later, if you wish).

You can see in the Azure portal how the ACS-Kubernetes cluster was created, with its basic information, such as the Master FQDN:

eshop-autogen-k8s-dns.westus.cloudapp.azure.com

<img src="img/kubernetes/kubernetes-cluster-in-azure-portal.png">
<p>
<p>

## Connect to a Kubernetes cluster in ACS

1. **Make sure you have the SSH RSA keys and service principal needed to connect.**

* **IMPORTANT**: When you created the Kubernetes cluster with the script provided by eShopOnContainers, or by using the command `az acs create --orchestrator-type=kubernetes --resource-group $RESOURCE_GROUP --name=$CLUSTER_NAME --dns-prefix=$DNS_PREFIX --generate-ssh-keys`, it generated the SSH RSA keys and the service principal for the Kubernetes cluster.

Those keys were stored on the client PC used when you created the cluster, and you definitely need them in order to connect to the cluster. If you are going to connect to the cluster from the same machine, it will work directly because the keys are there. But if you try to connect to the cluster from a different client machine, you'll need to copy the SSH RSA keys and the service principal to that new client machine.

The SSH RSA private key file and the corresponding public key file generated when you created the cluster are stored in a folder on the client machine you used.

<img src="img/kubernetes/ssh-rsa-keys-folder.png">

The service principal credentials are written to the file ~/.azure/acsServicePrincipal.json on the client machine you used, as shown in the image below.

<img src="img/kubernetes/service-principal-credentials-folder.png">

It is critical that you copy and store the files above in a secure place so you can re-use them in the future on additional client machines.
If you are trying to connect from a new client machine, copy those files into the same folder paths.
For further info about SSH RSA keys, see these links:

https://docs.microsoft.com/en-us/azure/virtual-machines/linux/ssh-from-windows

https://docs.microsoft.com/en-us/azure/virtual-machines/linux/troubleshoot-ssh-connection

2. **Install the Kubernetes command line client**, `kubectl`.

* `kubectl` is the Kubernetes command line client. If you don't already have it installed, you can install it with:

`az acs kubernetes install-cli`

as in the following screenshot (use 'sudo' if on Mac or Linux).

<img src="img/kubernetes/install-kubernetes-cli.png">

By default, this command installs the `kubectl` binary to `C:\Program Files (x86)\kubectl.exe` on Windows or to `/usr/local/bin/kubectl` on a Linux or macOS system. To specify a different installation path, use the `--install-location` parameter.

You can also download it from the [Kubernetes site](https://kubernetes.io/docs/tasks/tools/install-kubectl/).

3. **Download the Kubernetes cluster configuration**: Note that you only need to do this if you are using a different client machine than the one you used to create the cluster in ACS. If this is a different client machine, run the following command to download the master Kubernetes cluster configuration to the ~/.kube/config file:

`az acs kubernetes get-credentials --resource-group=$RESOURCE_GROUP --name=$CLUSTER_NAME`

<img src="img/kubernetes/kubernetes-acs-get-credentials-az-cli.png">

Note: If the cluster was provisioned using any SSH key other than /home/myusername/.ssh/id_rsa, then the --ssh-key-file parameter must be used, pointing to the SSH key used to provision the cluster.
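
For example, a hedged sketch of that variant (the key path below is only a placeholder):

`az acs kubernetes get-credentials --resource-group=$RESOURCE_GROUP --name=$CLUSTER_NAME --ssh-key-file /path/to/your_private_key`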

<p>
<p>
Right after running that command, you will have the `config` file with the credentials needed to connect to the Kubernetes cluster.

<img src="img/kubernetes/kubernetes-acs-get-credentials-folder.png">

4. Now that you have the credentials and `kubectl` is installed, add its directory to your system path. Make sure you can access it by typing `kubectl version`, as in the following screenshot:

<img src="img/kubernetes/kubernetes-cli-version.png">

You can also verify connectivity to the ACS Kubernetes cluster by running `kubectl cluster-info`:

<img src="img/kubernetes/kubernetes-cli-cluster-info.png">

* For further info about the `az` tool and Kubernetes, see the Azure Container Service [walkthrough](https://docs.microsoft.com/en-us/azure/container-service/container-service-kubernetes-walkthrough).
* `az` is also helpful for getting the credentials `kubectl` needs to access your cluster. For other installation options, and information about configuring `kubectl` yourself, see the [Kubernetes documentation](https://kubernetes.io/docs/tasks/kubectl/install/).
* For further info on connecting to a Kubernetes cluster in ACS, see: [Connecting to a Kubernetes cluster in ACS](https://docs.microsoft.com/en-us/azure/container-service/container-service-connect)

## Deploying eShopOnContainers to Kubernetes in ACS (Azure Container Service) with the deployment script

We have simplified the deployment so you can do it just by executing a script, following these few steps:

1. Open a PowerShell command line at the `k8s` directory of your local eShopOnContainers repository.
2. Ensure `docker`, `docker-compose`, and `kubectl` are on the path, and configured for your Docker machine and Kubernetes cluster.
3. Run the `deploy.ps1` script. But you'll need to know your Azure Container Registry credentials first. The Docker username and password are provided by Azure Container Registry, and can be retrieved from the Azure portal. Optionally, ACR credentials can be obtained by running the following command:

>```
>az acr credential show -n eShopAutogenContainerRegistry
>```

<img src="img/kubernetes/acr-passwords.png">

Once the user and password are retrieved, run the following script for deployment, including your values and password. For example:

>```
>./deploy.ps1 -registry eshopautogencontainerregistry.azurecr.io -dockerUser eShopAutogenContainerRegistry -dockerPassword YourSuperSecretPassword
>```

<img src="img/kubernetes/deploying-eshop-to-kubernetes-cluster.png">

The script will build the .NET Core code, the SPA TypeScript code and the corresponding Docker images, push the latter to your registry at Azure Container Registry, and deploy the application to your cluster.

While deploying eShopOnContainers with the script, you can see the Docker image repositories being created in your **Azure Container Registry** (which happens right before deploying to Kubernetes) in the Azure portal:

<img src="img/kubernetes/azure-container-service-repos.png">

You can also watch the deployment unfold from the Kubernetes web interface by running `kubectl proxy` in a different PowerShell or bash window and opening a browser at [http://localhost:8001/ui](http://localhost:8001/ui), as in the following screenshot:

<img src="img/kubernetes/kubernetes-admin-ui-01.png">

Finally, you can test the web applications at the URLs shown at the end of the script's execution, like:

>```
>WebSPA is exposed at http://your_kubernetes_cluster_ip
>```

>```
>WebMVC at http://your_kubernetes_cluster_ip/webmvc
>```

You should be able to run the eShop MVC app as in the following screenshot (using your external IP for the cluster, of course):

<img src="img/kubernetes/eshop-mvc-web-running-on-kubernetes.png">

## Scaling out eShopOnContainers in Kubernetes

TBD

## High Availability of eShopOnContainers in Kubernetes

TBD

## Sending feedback and pull requests

We'd appreciate your feedback, improvements, and ideas.
You can create new issues in the issues section, submit pull requests, and/or send emails to
[eshop_feedback@service.microsoft.com](eshop_feedback@service.microsoft.com)

## Questions

[QUESTION] Answer +1 if the solution is working for you (through the VS2017 or CLI environment):
https://github.com/dotnet/eShopOnContainers/issues/107
Binary file not shown (image size before: 313 KiB, after: 123 KiB).