
Remove "obsolete" folder

migration/net-5
Miguel Veloso 4 years ago
parent
commit
8689cf01d5
17 changed files with 0 additions and 464 deletions
  1. obsolete/KUBERNETES.md (+0 -134)
  2. obsolete/readme/README.ENV.md (+0 -61)
  3. obsolete/readme/readme-docker-compose.md (+0 -85)
  4. obsolete/readme/readme.md (+0 -19)
  5. obsolete/vsts-docs/builds/images/android-build-step1.png (BIN)
  6. obsolete/vsts-docs/builds/images/android-build-step2.png (BIN)
  7. obsolete/vsts-docs/builds/images/android-build-step3.png (BIN)
  8. obsolete/vsts-docs/builds/images/android-build-step4.png (BIN)
  9. obsolete/vsts-docs/builds/images/android-build-step5.png (BIN)
  10. obsolete/vsts-docs/builds/images/android-build.png (BIN)
  11. obsolete/vsts-docs/builds/images/ios-build-step1.png (BIN)
  12. obsolete/vsts-docs/builds/images/ios-build-step2.png (BIN)
  13. obsolete/vsts-docs/builds/images/ios-build-step3.png (BIN)
  14. obsolete/vsts-docs/builds/images/ios-build.png (BIN)
  15. obsolete/vsts-docs/builds/xamarin-android.md (+0 -95)
  16. obsolete/vsts-docs/builds/xamarin-iOS.md (+0 -63)
  17. obsolete/vsts-docs/readme.md (+0 -7)

+ 0
- 134
obsolete/KUBERNETES.md View File

@@ -1,134 +0,0 @@
# Kubernetes 101
## Docker vs. Kubernetes
Docker helps you package applications into images, and execute them in containers. Kubernetes is a robust platform for containerized applications. It abstracts away the underlying network infrastructure and hardware required to run them, simplifying their deployment, scaling, and management.
## Kubernetes from the container up
### Pods
The basic unit of a Kubernetes deployment is the **Pod**. A Pod encapsulates one or more containers. For example, the `basket` Pod specifies two containers:
>`deployments.yaml`
>
>The first container runs the `eshop/basket.api` image:
>```yaml
>spec:
>  containers:
>  - name: basket
>    image: eshop/basket.api
>    env:
>    - name: ConnectionString
>      value: 127.0.0.1
>```
>Note the `ConnectionString` environment variable: containers within a Pod are networked via `localhost`. The second container runs the `redis` image:
>```yaml
>- name: basket-data
>  image: redis:3.2-alpine
>  ports:
>  - containerPort: 6379
>```
Placing `basket` and `basket-data` in the same Pod is reasonable here because the former requires the latter, and owns all its data. If we wanted to scale the service, however, it would be better to place the containers in separate Pods because the basket API and redis scale at different rates.
If the containers were in separate Pods, they would no longer be able to communicate via `localhost`; a **Service** would be required.
### Services
Services expose Pods to external networks. For example, the `basket` Service exposes Pods with labels `app=eshop` and `component=basket` to the cluster at large:
>`services.yaml`
>```yaml
>kind: Service
>metadata:
>  ...
>  name: basket
>spec:
>  ports:
>  - port: 80
>  selector:
>    app: eshop
>    component: basket
>```
Kubernetes's built-in DNS service resolves Service names to cluster-internal IP addresses. This allows the nginx frontend to proxy connections to the app's microservices by name:
>`nginx.conf`
>```
>location /basket-api {
> proxy_pass http://basket;
>```
The frontend Pod is different in that it needs to be exposed outside the cluster. This is accomplished with another Service:
>`frontend.yaml`
>```yaml
>spec:
>  ports:
>  - port: 80
>    targetPort: 8080
>  selector:
>    app: eshop
>    component: frontend
>  type: LoadBalancer
>```
`type: LoadBalancer` tells Kubernetes to expose the Service behind a load balancer appropriate for the cluster's platform. For Azure Container Service, this creates an Azure load balancer rule with a public IP.
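For context, a complete manifest for such a Service would look roughly like the following; this is a sketch assembled from the fragments above, and the metadata values are illustrative:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend            # illustrative name
  labels:
    app: eshop
spec:
  type: LoadBalancer        # ask the cluster's platform (e.g. Azure) for a public load balancer
  ports:
  - port: 80                # port exposed by the Service
    targetPort: 8080        # port the frontend container listens on
  selector:
    app: eshop
    component: frontend     # route traffic to Pods carrying these labels
```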
### Deployments
Kubernetes uses Pods to organize containers, and Services to network them. It uses **Deployments** to organize the creation and modification of Pods. A Deployment describes a desired state for one or more Pods; when a Deployment is created or modified, Kubernetes attempts to realize that state.
The Deployments in this project are basic. Still, `deploy.ps1` shows some more advanced Deployment capabilities. For example, Deployments can be paused. Each Deployment of this app is paused at creation:
>`deployments.yaml`
>```yaml
>kind: Deployment
>spec:
>  paused: true
>```
This allows the deployment script to change images before Kubernetes creates the Pods:
>`deploy.ps1`
>```powershell
>kubectl set image -f deployments.yaml basket=$registry/basket.api ...
>kubectl rollout resume -f deployments.yaml
>```
### ConfigMaps
A **ConfigMap** is a collection of key/value pairs commonly used to provide configuration information to Pods. The deployment script uses one to store the frontend's configuration:
>`deploy.ps1`
>```
>kubectl create configmap config-files --from-file=nginx-conf=nginx.conf
>```
This creates a ConfigMap named `config-files` with key `nginx-conf` whose value is the content of nginx.conf. The frontend Pod mounts that value as `/etc/nginx/nginx.conf`:
>`frontend.yaml`
>```yaml
>spec:
>  containers:
>  - name: nginx
>    ...
>    volumeMounts:
>    - name: config
>      mountPath: /etc/nginx
>  volumes:
>  - name: config
>    configMap:
>      name: config-files
>      items:
>      - key: nginx-conf
>        path: nginx.conf
>```
This facilitates rapid iteration better than other techniques, e.g. building an image to bake in configuration.
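For reference, the same ConfigMap could also be declared in a manifest instead of being created imperatively; a minimal sketch, with the file content inlined for illustration:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-files
data:
  nginx-conf: |             # key mounted as nginx.conf by the frontend Pod
    # ... contents of nginx.conf would go here ...
```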
The script also stores public URLs for the app's components in a ConfigMap:
>`deploy.ps1`
>```powershell
>kubectl create configmap urls --from-literal=BasketUrl=http://$($frontendUrl)/basket-api ...
>```
>Here's how the `webspa` Deployment uses it:
>
>`deployments.yaml`
>```yaml
>spec:
>  containers:
>  - name: webspa
>    ...
>    env:
>    ...
>    - name: BasketUrl
>      valueFrom:
>        configMapKeyRef:
>          name: urls
>          key: BasketUrl
>```
### Further reading
* [Kubernetes Concepts](https://kubernetes.io/docs/concepts/)
* [kubectl for Docker Users](https://kubernetes.io/docs/user-guide/docker-cli-to-kubectl/)
* [Kubernetes API reference](https://kubernetes.io/docs/api-reference/v1.5/)

+ 0
- 61
obsolete/readme/README.ENV.md View File

@@ -1,61 +0,0 @@
**Note**: It is very important to disable any ESHOP_AZURE_* variables in the `.env` file when the local storage or container-based services are used. Remember that you can disable any variable in the `.env` file by putting a '#' character before its declaration, so you can run and test any of the Azure services separately.
With the steps explained in the next section, you will be able to run the application with Azure Redis Cache instead of the Redis container.
# Azure Redis Cache service
To enable Azure Redis Cache in eShop you must have previously configured the Azure Redis service, either through an ARM file or manually through the Azure portal. You can use the [ARM files](deploy/az/redis/readme.md) already created in eShop. Once the Redis Cache service is created, get the primary connection string from the service information in the Azure portal and modify the port value from 6380 to 6379 and the ssl value from True to False, to establish a non-SSL connection with the cache server. This connection string must be declared in the `.env` file located in the solution root folder, under the `ESHOP_AZURE_REDIS_BASKET_DB` variable name.
For example:
>ESHOP_AZURE_REDIS_BASKET_DB=yourredisservice.redis.cache.windows.net:6379,password=yourredisservicepassword,ssl=False,abortConnect=False
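For reference, a docker-compose override fragment along these lines consumes the variable, falling back to the local Redis container when it is not set (service name and fallback value are illustrative, not the exact project contents):
```yaml
services:
  basket-api:
    environment:
      # Falls back to the local Redis container when ESHOP_AZURE_REDIS_BASKET_DB is not set in .env
      - ConnectionString=${ESHOP_AZURE_REDIS_BASKET_DB:-basket-data}
```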
With the steps explained in the next section, you will be able to run the application with Azure Service Bus instead of the RabbitMQ container.
# Azure Service Bus service
To enable Azure Service Bus in the eShop solution you must have previously created the Service Bus, either through an ARM file or manually through the Azure portal. You can use the [ARM files](deploy/az/servicebus/readme.md) already created in eShop. Finally, get the shared access policy named "Root" (if you generated the service through the ARM file) from the eshop_event_bus topic. This policy must be declared in the `.env` file located in the solution root folder, under the `ESHOP_AZURE_SERVICE_BUS` name.
For example:
>ESHOP_AZURE_SERVICE_BUS=Endpoint=sb://yourservicebusservice.servicebus.windows.net/;SharedAccessKeyName=Root;SharedAccessKey=yourtopicpolicykey=;EntityPath=eshop_event_bus
Once the Service Bus is created, set the "AzureServiceBusEnabled" environment variable to true in the `settings.json` file of Catalog.API, Ordering.API, Basket.API, Payment.API and GracePeriodManager.
With the steps explained in the next section, you will be able to run the application with Azure Storage Account instead of the local container storage.
# Azure Storage Account service
To enable Azure Storage in the eShop solution you must have previously created the storage service, either through an ARM file or manually through the Azure portal. You can use the ARM files already created in eShop, found under the **deploy/az/storage** folder. Once the storage account is created, it is very important to create a new container (blob kind) and upload the solution's catalog picture files before continuing. Later, set the "AzureStorageEnabled" environment variable to true in `settings.json` in Catalog.API. Finally, get the container endpoint URL from the service information in the Azure portal. This URL must be declared in the `.env` file located in the solution root folder, under `ESHOP_AZURE_STORAGE_CATALOG` for the Catalog.API content.
Do not forget to put a slash character '/' at the end of the URL.
For example:
>ESHOP_AZURE_STORAGE_CATALOG=https://yourcatalogstorageaccountservice.blob.core.windows.net/yourcontainername/
## Check status of Azure Storage Account with Health Checks
It is possible to add a status check for the Azure Storage Account inside the Catalog Web Status. When this check is enabled, Azure Storage will be checked as one of the dependencies for these APIs in the Catalog section of the WebStatus page. To enable the check, add your account name and key to the `.env` file.
For example:
>ESHOP_AZURE_STORAGE_CATALOG_NAME=storageaccountname
>ESHOP_AZURE_STORAGE_CATALOG_KEY=storageaccountkey
With the steps explained in the next section, you will be able to run the application with Azure SQL Database instead of local storage.
# Azure SQL Database
To enable Azure SQL Database in eShop you need an Azure SQL server with the databases for Ordering.API, Identity.API and Catalog.API. You can use the [ARM files](deploy/az/sql/readme.md) already created in this project or create them manually. Once the databases are created, get the connection string for each service and set the corresponding variable in the `.env` file.
For example:
>ESHOP_AZURE_CATALOG_DB=catalogazureconnectionstring
>ESHOP_AZURE_IDENTITY_DB=identityazureconnectionstring
>ESHOP_AZURE_ORDERING_DB=orderingazureconnectionstring
With the steps explained in the next section, you will be able to run the application with an Azure Cosmos DB database instead of local storage.
# Azure Cosmos DB
To enable Azure Cosmos DB in eShop you need the connection string. If you do not have an Azure Cosmos DB account created, you can use the ARM files under the **deploy/az/cosmos** folder available in eShop or create it manually. Once the connection string is available, add it to the `.env` file in the `ESHOP_AZURE_COSMOSDB` variable.
For example:
>ESHOP_AZURE_COSMOSDB=cosmosconnectionstring
# Azure Functions
To enable the Azure Functions in eShop, add the URI where the functions have been deployed. You can use the ARM files under **deploy/az/azurefunctions** to create the resources in Azure. Once they are created and available, add the `ESHOP_AZUREFUNC_CAMPAIGN_DETAILS_URI` variable to the `.env` file.
See the Azure Functions deployment files and readme for more details: [ARM files](deploy/az/azurefunctions/readme.md)

+ 0
- 85
obsolete/readme/readme-docker-compose.md View File

@@ -1,85 +0,0 @@
# Docker-compose yaml files
All the docker-compose files (`docker-compose*.yml`) are in the root folder of the repo. Here is a list of all of them and their purpose:
## Files needed to run eShopOnContainers locally
* `docker-compose.yml`: This file contains **the definition of all images needed for running eShopOnContainers**.
* `docker-compose.override.yml`: This file contains the base configuration for all the images of the previous file.
Usually these two files are used together. The standard way to start eShopOnContainers from the CLI is:
```
docker-compose -f docker-compose.yml -f docker-compose.override.yml up
```
This will start eShopOnContainers with all containers running locally, and it is the default development environment.
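To make the split concrete, here is a heavily trimmed sketch of how a single service is typically spread across the two files (service name, paths and values are illustrative, not the exact project contents):
```yaml
# docker-compose.yml: defines the images and how to build them
services:
  basket-api:
    image: eshop/basket.api
    build:
      context: .
      dockerfile: src/Services/Basket/Basket.API/Dockerfile   # illustrative path
---
# docker-compose.override.yml: layers the base configuration on top
services:
  basket-api:
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
    ports:
      - "5103:80"   # illustrative host:container mapping
```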
## Files needed to run eShopOnContainers on a remote docker host
* `docker-compose.prod.yml`: This file is a replacement for `docker-compose.override.yml`, with configuration more suitable for a "production" environment or for when you need to run the services using an external docker host.
```
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up
```
When using this file, the following environment variables must be set:
* `ESHOP_PROD_EXTERNAL_DNS_NAME_OR_IP` with the IP or DNS name of the docker host that runs the services (you can use `localhost` if needed).
* `ESHOP_AZURE_STORAGE_CATALOG` with the URL of the Azure Storage that will host the catalog images
You might wonder why an external image resource (storage) is needed when using `docker-compose.prod.yml` instead of `docker-compose.override.yml`. The answer is related to a limitation of the Docker Compose file format. This is how we set the environment configuration of the Catalog microservice in `docker-compose.override.yml`:
```
PicBaseUrl=${ESHOP_AZURE_STORAGE_CATALOG:-http://localhost:5101/api/v1/catalog/items/[0]/pic/}
```
The `PicBaseUrl` variable is set to the value of `ESHOP_AZURE_STORAGE_CATALOG` if that variable is set to anything other than a blank string; otherwise it is set to `http://localhost:5101/api/v1/catalog/items/[0]/pic/`. That works perfectly in a local environment where you run all your services on `localhost`: by setting `ESHOP_AZURE_STORAGE_CATALOG` or not, you choose whether images are served from Azure Storage or locally by the catalog service. But when you run the services on an external docker host, specified in `ESHOP_PROD_EXTERNAL_DNS_NAME_OR_IP`, the configuration should be as follows:
```
PicBaseUrl=${ESHOP_AZURE_STORAGE_CATALOG:-http://${ESHOP_PROD_EXTERNAL_DNS_NAME_OR_IP}:5101/api/v1/catalog/items/[0]/pic/}
```
That is: use `ESHOP_AZURE_STORAGE_CATALOG` if it is set, and if not, use `http://${ESHOP_PROD_EXTERNAL_DNS_NAME_OR_IP}:5101/api/v1/catalog/items/[0]/pic/`. Unfortunately, docker-compose does not substitute variables inside variables, so if `ESHOP_AZURE_STORAGE_CATALOG` is not set, the value `PicBaseUrl` gets is literally `http://${ESHOP_PROD_EXTERNAL_DNS_NAME_OR_IP}:5101/api/v1/catalog/items/[0]/pic/}`, without any substitution.
## Build container (DEPRECATED)
NOTE: since we support Docker multi-stage builds (supported in VS 2017 since December 2017), the build container is no longer needed in CI/CD pipelines, as a similar process is done by Docker itself under the covers with multi-stage builds.
For more info on Docker Multi-Stage, read:
https://docs.docker.com/develop/develop-images/multistage-build/
https://blogs.msdn.microsoft.com/stevelasker/2017/09/11/net-and-multistage-dockerfiles/
* `docker-compose.ci.build.yml`: This file is for starting the build container to build the project using a container that has all needed prerequisites. Refer to [corresponding wiki section](https://github.com/dotnet-architecture/eShopOnContainers/wiki/03.-Setting-the-eShopOnContainers-solution-up-in-a-Windows-CLI-environment-(dotnet-CLI,-Docker-CLI-and-VS-Code)#build-the-bits-through-the-build-container-image) for more information.
**For more information** about docker-compose variable substitution read the [compose docs](https://docs.docker.com/compose/compose-file/#variable-substitution).
## Other files
* `docker-compose.nobuild.yml`: This file contains the definition of all images needed to run eShopOnContainers. It contains **the same images as `docker-compose.yml`**, but without any `build` instruction (see the sketch below, after this list). If you use this file instead of `docker-compose.yml` when launching the project and you don't have the images built locally, **the images will be pulled from Docker Hub**. This file is not intended for development use, but for some CI/CD scenarios.
* `docker-compose.vs.debug.yml`: This file is used by Docker Tools of VS2017, and should not be used directly.
* `docker-compose.vs.release.yml`: This file is used by Docker Tools of VS2017, and should not be used directly.
**Note**: The reason we need `docker-compose.nobuild.yml` is [docker-compose issue #3391](https://github.com/docker/compose/issues/3391). Once it is solved, the `--no-build` parameter of docker-compose can be used safely in CI/CD environments and the need for this file will disappear.
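A rough sketch of the difference, using the same illustrative service as the earlier sketch; the `build:` section is simply dropped, so compose can only pull the image:
```yaml
# docker-compose.nobuild.yml: image reference only, no build instructions
services:
  basket-api:
    image: eshop/basket.api   # pulled from Docker Hub if not already present locally
```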
## Windows container files
All `docker-compose-windows*.yml` files have a 1:1 relationship with the file of the same name without the `-windows` suffix. These files are used to run Windows Containers instead of Linux Containers.
* `docker-compose-windows.yml`: Contains the definitions of all containers that are needed to run eShopOnContainers using windows containers (equivalent to `docker-compose.yml`).
* `docker-compose-windows.override.yml`: Contains the base configuration for all Windows containers.
**Note**: We plan **to remove** the `docker-compose-windows.override.yml` file, because it is **exactly the same** as `docker-compose.override.yml`. The reason for its existence is historical and it is no longer needed; you can use `docker-compose.override.yml` instead.
* `docker-compose-windows.prod.yml` is the equivalent of `docker-compose.prod.yml` for Windows containers. As with `docker-compose-windows.override.yml`, this file will be deleted in the near future, so you should use `docker-compose.prod.yml` instead.
## "External container" files
These files were intended to provide a fast way to start only the "infrastructure" containers (SQL Server, Redis, etc.). **These files are deprecated and will be deleted in the near future**:
* `docker-compose-external.override.yml`
* `docker-compose-external.yml`
If you want to start only certain containers, use `docker-compose -f ... -f ... up container1 container2 containerN`, as specified in the [compose docs](https://docs.docker.com/compose/reference/up/).

+ 0
- 19
obsolete/readme/readme.md View File

@@ -1,19 +0,0 @@
# Documentation index
This file contains links to the documentation of the project.
* **Wiki**: The wiki contains detailed step-by-step information about how to set up the project. Read it at: [https://github.com/dotnet-architecture/eShopOnContainers/wiki](https://github.com/dotnet-architecture/eShopOnContainers/wiki)
## Documentation included in files
* [Branch Guide](../branch-guide.md): List of branches used and their purpose.
* [vsts-docs folder](../vsts-docs/readme.md): Information about how to set up a CI/CD procedure using Azure DevOps
* [Kubernetes](../k8s/readme.md): Information about how to deploy eShopOnContainers in a Kubernetes cluster, and how to set up CI/CD for k8s using VSTS
* [deploy](../deploy/readme.md): Information about how to deploy Azure resources using the Azure CLI 2.0.
* [.env file](./README.ENV.md): What is the `.env` file and how to use it to configure eShopOnContainers to use external resources (like Azure)
* [docker-compose files](./readme-docker-compose.md): What are all these `docker-compose-*.yml` files
## Docs folder
The `/docs` folder contains the PDF versions of the books.

BIN
obsolete/vsts-docs/builds/images/android-build-step1.png View File


BIN
obsolete/vsts-docs/builds/images/android-build-step2.png View File


BIN
obsolete/vsts-docs/builds/images/android-build-step3.png View File


BIN
obsolete/vsts-docs/builds/images/android-build-step4.png View File


BIN
obsolete/vsts-docs/builds/images/android-build-step5.png View File


BIN
obsolete/vsts-docs/builds/images/android-build.png View File


BIN
obsolete/vsts-docs/builds/images/ios-build-step1.png View File


BIN
obsolete/vsts-docs/builds/images/ios-build-step2.png View File


BIN
obsolete/vsts-docs/builds/images/ios-build-step3.png View File


BIN
obsolete/vsts-docs/builds/images/ios-build.png View File


+ 0
- 95
obsolete/vsts-docs/builds/xamarin-android.md View File

@@ -1,95 +0,0 @@
# Xamarin Android Build
Follow these steps to create a VSTS build for your eShopOnContainers app (android).
**Note**: This document assumes basic knowledge about creating builds and configuring external VSTS connections
## Creating the build
Apart from the _"Get Sources"_ task, there are five more tasks in the build:
1. Restore NuGet Packages
2. Build Xamarin Android Project
3. Download the certstore to sign the APK
4. Sign the APK
5. Publish the build artifact.
![Android Build Steps](images/android-build.png)
Let's discuss each of them.
### Restore NuGet Packages
Add a "NuGet restore" task and enter the following configuration:
1. Enter `eShopOnContainers-Android.sln` in "Path to solution, packages.config, or project.json". This solution is created specifically for the build and contains only the Xamarin Android project plus the Xamarin Forms one.
![Android Build Step 1](images/android-build-step1.png)
### Build the project
Add a "Xamarin Android" task with following configuration:
1. `**/*Droid*.csproj` in "Project"
2. `$(build.binariesdirectory)/$(BuildConfiguration)` in "Output Directory"
3. `$(BuildConfiguration)` in "Configuration"
4. Ensure that the "Create App Package" checkbox is enabled
5. In "JDK Options" be sure to select "JDK 8" in the "JDK Version" dropdown.
![Android Build Step 2](images/android-build-step2.png)
### Download the keystore to sign the build
**Note**: This requires you to have a valid keystore. Refer to [this Xamarin article](https://developer.xamarin.com/guides/android/deployment,_testing,_and_metrics/publishing_an_application/part_2_-_signing_the_android_application_package/) for instructions on how to create one using Visual Studio and Xamarin. Or, if you prefer, read [how to use the Android SDK tools to create a keystore](https://developer.android.com/studio/publish/app-signing.html).
This build assumes the keystore is stored somewhere on the internet. Be careful where you store your keystores! Keep them safe and private. Always consider the possible alternatives for where to store the keystore:
1. Store it in the source control repository, **assuming it's private**. For public repositories this option is ruled out.
2. Store it on the build agent. If you use a custom VSTS build agent, store the keystore files locally on the agent. This is simple and secure.
3. Store it on the internet. In this case, **protect the resource**. You may be forced to use this option if your repository is public *and* you use the VSTS hosted agent.
Add a "Download file" task (**Note**: this task is installed [through a VSTS extension](https://marketplace.visualstudio.com/items?itemName=automagically.DownloadFile)) with the following configuration:
1. `$(keystore.url)$(keystore.name)` in "File URL"
2. `$(Build.SourcesDirectory)` in "Destination Folder"
Fill the "Credentials" section accordly.
![Android Build Step 3](images/android-build-step3.png)
**Note:** You can, of course, use any other way to download the file (such as a PowerShell task).
### Signing the APK
Add a "Android Signing" task with following configuation:
1. `$(build.binariesdirectory)/$(BuildConfiguration)/*.apk` in "APK Files"
2. Ensure the checkbox "Sign the APK" is checked
3. `$(Build.SourcesDirectory)\$(keystore.name)` in "Keystore file". This has to be the location of the keystore. If you downloaded it using a previous task (as in our example), use the same value. If the keystore is physically on the VSTS agent, you can use its file path.
4. `$(keystore.pwd)` in "Keystore Password"
5. `$(keystore.alias)` in "Keystore Alias"
6. `$(key.pwd)` in "Key password".
7. `-verbose` in "Jarsigner Arguments"
7. Ensure the checkbox "Zipalign" is checked.
![Android Build Step 4](images/android-build-step4.png)
### Publishing build artifact
Add a "Publish Build Artifacts" task, with following configuration:
1. `$(build.binariesdirectory)/$(BuildConfiguration)` in "Path to publish"
2. `drop` in "Artifact Name"
3. `Server` in "Artifact Type"
![Android Build Step 5](images/android-build-step5.png)
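For reference, if this step were expressed in YAML pipeline syntax instead of the classic editor, it would look roughly like this sketch; `Container` is the YAML counterpart of the "Server" artifact type:
```yaml
steps:
- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: '$(build.binariesdirectory)/$(BuildConfiguration)'  # same path as above
    ArtifactName: 'drop'
    publishLocation: 'Container'   # equivalent of the "Server" artifact type
```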
## Variables
You need to set up the following variables:
1. `keystore.pwd` -> Password of the keystore
2. `keystore.alias` -> Alias of the keystore
3. `keystore.url` -> Full URL of the keystore
4. `key.pwd` -> Password of the key
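In YAML pipeline syntax, the non-secret values could be sketched as follows (placeholders only); note that `keystore.name`, referenced by the download and signing steps above, would also go here, while the passwords belong in secret variables or a variable group:
```yaml
variables:
  keystore.url: 'https://your-secure-location.example.com/'   # placeholder
  keystore.name: 'eshop.keystore'                              # placeholder
  keystore.alias: 'your-keystore-alias'                        # placeholder
  # keystore.pwd and key.pwd must be defined as secret variables, not in plain YAML
```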

+ 0
- 63
obsolete/vsts-docs/builds/xamarin-iOS.md View File

@@ -1,63 +0,0 @@
# Xamarin iOS Build
Follow these steps to create a VSTS build for your eShopOnContainers app (iOS)
**Note**: This document assumes basic knowledge about creating builds and configuring external VSTS connections
## Creating the build
Apart from the _"Get Sources"_ task, there are three more tasks in the build:
1. Build Xamarin iOS Project
2. Copy generated packages
3. Publish the build artifact.
![iOS Build Steps](images/ios-build.png)
Let's discuss each of them.
### Build the project
Add a "Xamarin iOS" task with following configuration:
1. `eShopOnContainers-iOS.sln` in "Solution". This solution has been created ex professo for the build.
2. Ensure that the "Create App Package" checkbox is enabled
**About signing & Provisioning section**
In order to deploy your app to a physical device you must sign it using a certificate with a provisioning profile. Refer to [this blog post of the Xamarin team](https://blog.xamarin.com/continuous-integration-for-ios-apps-with-visual-studio-team-services/) for more info.
Basically you have three options for setting the certificate (p12 file) and the provisioning profile:
1. Use the MacInCloud VSTS agent and set up the p12 file and provisioning profile in the agent configuration: [https://blogs.msdn.microsoft.com/visualstudioalm/2015/11/18/macincloud-visual-studio-team-services-build-and-improvements-to-ios-build-support/](https://blogs.msdn.microsoft.com/visualstudioalm/2015/11/18/macincloud-visual-studio-team-services-build-and-improvements-to-ios-build-support/)
2. Use a custom Mac machine with the certificate and provisioning profile installed. In this case you don't have to do anything else.
3. Have the p12 file and the provisioning profile reachable somewhere online.
If you choose option 3, you need to download the certificate and the provisioning profile onto the build agent (using a previous build task).
Once both files are downloaded, specify the location of each in the "Signing & Provisioning" section.
![iOS Build Step 1](images/ios-build-step1.png)
### Copy generated files to output folder
Add a "Copy files" task with following configuration:
1. `src/Mobile/eShopOnContainers/eShopOnContainers.iOS/bin/iPhone/$(BuildConfiguration)` in "Source Folder"
2. `**/*.ipa` in "Contents"
3. `$(Build.ArtifactStagingDirectory)` in "Target Folder"
4. Ensure that "Clean Target folder" (under "Advanced" section) is checked
This way we copy the generated IPA into the _Build.ArtifactStagingDirectory_ folder (and remove any IPA generated by a previous build).
![iOS Build Step 2](images/ios-build-step2.png)
### Publishing build artifact
Add a "Publish Build Artifacts" task, with following configuration:
1. `$(Build.ArtifactStagingDirectory)` in "Path to publish"
2. `drop` in "Artifact Name"
3. `Server` in "Artifact Type"
![iOS Build Step 3](images/ios-build-step3.png)

+ 0
- 7
obsolete/vsts-docs/readme.md View File

@@ -1,7 +0,0 @@
# VSTS Related Documentation
## Builds and releases
1. [VSTS build for Xamarin App (Android)](builds/xamarin-android.md)
2. [VSTS build for Xamarin App (iOS)](builds/xamarin-iOS.md)
