Designing and developing containerized apps using Docker and Microsoft Azure
Vision: Design and develop scalable solutions with Docker in mind.
There are many great-fit use cases for containers, not just for microservices-oriented architectures, but also when you simply have regular services or web applications to run and you want to reduce friction between development and production environment deployments.
Chapter 1 introduced the fundamental concepts regarding containers and Docker. That information is the basic level of information you need to get started. But enterprise applications can be complex and composed of multiple services instead of a single service or container. For those use cases, you need to know additional design approaches, such as Service-Oriented Architecture (SOA), the more advanced microservices concepts, and container orchestration concepts. The scope of this document is not limited to microservices but covers any Docker application life cycle; therefore, it does not explore microservices architecture in depth, because you also can use containers and Docker with regular SOA, background tasks or jobs, or even with monolithic application deployment approaches.
More info To learn more about enterprise applications and microservices architecture in depth, read the eBook .NET Microservices: Architecture for Containerized .NET Applications, which you can download from https://aka.ms/MicroservicesEbook
However, before we get into the application life cycle and DevOps, it is important to know how you are going to design and construct your application and what your design choices are.
Before getting into the development process, there are a few basic concepts worth mentioning with regard to how you use containers.
In the container development model, a container represents a single process. By defining a container as a process boundary, you begin to create the primitives used to scale, or batch-off, processes. When you run a Docker container, you’ll see an ENTRYPOINT definition. This defines the process and the lifetime of the container. When the process completes, the container life cycle ends. There are long-running processes, such as web servers, and short-lived processes, such as batch jobs, which might have been implemented as Microsoft Azure WebJobs. If the process fails, the container ends, and the orchestrator takes over. If the orchestrator was instructed to keep five instances running and one fails, the orchestrator will create another container to replace the failed process. In a batch job, the process is started with parameters. When the process completes, the work is complete.
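You can see this process-bound life cycle for yourself with the official hello-world image: the container starts, its single process prints a message and exits, and the container's life cycle ends with it:

docker run --rm hello-world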
You might find a scenario in which you want multiple processes running in a single container. In any architecture document, there’s never a “never,” nor is there always an “always.” For scenarios requiring multiple processes, a common pattern is to use Supervisor.
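A minimal sketch of that pattern, assuming an illustrative Ubuntu-based image and illustrative program names, is a Dockerfile that installs Supervisor plus a supervisord.conf that declares each process:

# Dockerfile (illustrative): Supervisor manages two processes in one container
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y supervisor python3
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
CMD ["/usr/bin/supervisord"]

# supervisord.conf (illustrative): run Supervisor in the foreground
# [supervisord]
# nodaemon=true
# [program:web]
# command=/usr/bin/python3 -m http.server 80
# [program:worker]
# command=/usr/local/bin/my-worker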
In this scenario, you are building a single and monolithic web application or service and deploying it as a container. Within the application, the structure might not be monolithic; it might comprise several libraries, components, or even layers (application layer, domain layer, data access layer, etc.). Externally, it is a single container, like a single process, single web application, or single service.
To manage this model, you deploy a single container to represent the application. To scale it, just add a few more copies with a load balancer in front. The simplicity comes from managing a single deployment in a single container or virtual machine (VM).
Following the principle that a container does one thing only, and does it in one process, the monolithic pattern is in conflict. You can include multiple components/libraries or internal layers within each container, as illustrated in Figure 4-1.
Figure 4-1: An example of monolithic application architecture
The downside to this approach comes if or when the application grows, requiring it to scale. If the entire application scales, it isn't really a problem. However, in most cases, a few parts of the application are the choke points that require scaling, whereas other components are used less.
Using the typical e-commerce example, what you likely need is to scale the product information component. Many more customers browse products than purchase them. More customers use their basket than use the payment pipeline. Fewer customers add comments or view their purchase history. And you likely have only a handful of employees, in a single region, that need to manage the content and marketing campaigns. By scaling the monolithic design, all of the code is deployed multiple times.
In addition to the “scale-everything” problem, changes to a single component require complete retesting of the entire application as well as a complete redeployment of all the instances.
The monolithic approach is common, and many organizations are developing with this architectural method. Many enjoy good-enough results, whereas others encounter limits. Many organizations designed their applications in this model because the tools and infrastructure made it too difficult to build SOAs, and they didn't see the need, until the app grew.
From an infrastructure perspective, each server can run many applications within the same host and have an acceptable ratio of efficiency in resource usage, as shown in Figure 4-2.
Figure 4-2: A host running multiple apps/containers
Finally, from an availability perspective, monolithic applications must be deployed as a whole; that means that if you must stop and start the application, all of its functionality and all of its users are affected during the deployment window. In certain situations, the use of Azure and containers can minimize these cases and reduce the probability of downtime for your application, as you can see in Figure 4-3.
You can deploy monolithic applications in Azure by using dedicated VMs for each instance. Using Azure VM Scale Sets, you can scale the VMs easily.
You can also use Azure App Services to run monolithic applications and easily scale instances without having to manage the infrastructure of the VMs. Since 2016, Azure App Services can run single instances of Docker Linux containers, simplifying the deployment.
You can deploy multiple VMs as Docker hosts and run any number of containers per VM. Then, by using the Azure Load Balancer, as illustrated in Figure 4-3, you can manage scaling.
Figure 4-3: Multiple hosts scaling-out a single Docker application
You can manage the deployment of the hosts themselves via traditional deployment techniques.
You can manage Docker container deployment manually by using commands like docker run and docker-compose up, and you can also automate it in Continuous Delivery (CD) pipelines and deploy to Docker hosts from VSTS releases, for instance.
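For example, on a single Docker host, a manual deployment might look like the following (the image name is illustrative):

docker run -d --restart=always -p 80:80 mycompany/mymonolith:1.0
# Or, for an app described in a docker-compose.yml file:
docker-compose up -d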
There are benefits to using containers to manage monolithic deployments. Scaling the instances of containers is far faster and easier than deploying additional VMs.
Deploying updates as Docker images is far faster and more network-efficient. Docker containers typically start in seconds, speeding rollouts. Tearing down a Docker container is as easy as invoking the docker stop command, and it typically completes in less than a second.
Because containers are inherently immutable, by design, you never need to worry about corrupted VMs because an update script forgot to account for some specific configuration or file left on disk.
Although monolithic apps can benefit from Docker, we're only scratching the surface of the benefits. The larger benefits of managing containers come from deploying with container orchestrators that manage the various instances and the life cycle of each container instance. Breaking up the monolithic application into subsystems that can be scaled, developed, and deployed individually is your entry point into the realm of microservices.
To learn how to "lift and shift" monolithic applications with containers and how to modernize your applications, you can read the additional Microsoft eBook Modernize existing .NET applications with Azure cloud and Windows Containers, which you can download from https://t.co/xw5ilGAJmY
Whether you want to get a quick validation of a container deployed to Azure or the app is simply a single-container app, Azure App Service provides a great way to provide scalable single-container services (as of 2017, Azure App Service supports only Linux containers).
Using Azure App Service is intuitive and you can get up and running quickly because it provides great Git integration to take your code, build it in Microsoft Visual Studio, and deploy it directly to Azure. But, traditionally (with no Docker), if you needed other capabilities, frameworks, or dependencies that weren't supported in App Service, you had to wait until the Azure team updated those dependencies in App Service, or switch to other services like Service Fabric, Cloud Services, or even plain VMs, over which you have further control and can install a required component or framework for your application.
Now, however (announced at Microsoft Connect 2016 in November 2016), and as shown in Figure 4-4, when using Visual Studio 2017, container support in Azure App Service gives you the ability to include whatever you want in your app environment. Because you are running your app in a container, if you add a dependency to your app, you can include that dependency in your Dockerfile or Docker image.
Figure 4-4: Publishing a container to Azure App Service from Visual Studio
Figure 4-4 also shows that the publish flow pushes an image through a container registry, which can be the Azure Container Registry (a registry near your deployments in Azure and secured by Azure Active Directory groups and accounts) or any other Docker registry, like Docker Hub or an on-premises registry.
A primitive of containers is immutability. When compared to a VM, containers don't disappear as a common occurrence. A VM might fail in various ways, from dead processes to an overloaded CPU to a full or failed disk. Yet we expect the VM to be available, and RAID drives are commonplace to ensure that data survives drive failures.
However, containers are thought of as instances of processes. A process doesn't maintain durable state. Although a container can write to its local storage, assuming that the instance will be around indefinitely would be equivalent to assuming that a single copy of memory will be durable. You should assume that containers, like processes, can be duplicated or killed or, when managed with a container orchestrator, moved.
Docker uses a feature known as an overlay file system to implement a copy-on-write process that stores any updated information to the root file system of a container, compared to the original image on which it is based. These changes are lost if the container is subsequently deleted from the system. A container, therefore, does not have persistent storage by default. Although it’s possible to save the state of a container, designing a system around this would be in conflict with the principle of container architecture.
To manage persistent data in Docker applications, there are common solutions:
Data volumes are specially designated directories within one or more containers that bypass the Union File System. Data volumes are designed to maintain data, independent of the container’s life cycle. Docker therefore never automatically deletes volumes when you remove a container, nor will it “garbage collect” volumes that are no longer referenced by a container. The host operating system can browse and edit the data in any volume freely, which is just another reason to use data volumes sparingly.
A data volume container is an improvement over regular data volumes. It is essentially a dormant container that has one or more data volumes created within it (as described earlier). The data volume container provides access to containers from a central mount point. The benefit of this method of access is that it abstracts the location of the original data, making the data container a logical mount point. It also allows "application" containers accessing the data container volumes to be created and destroyed while keeping the data persistent in a dedicated container.
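As a sketch of both approaches with the Docker CLI (volume, container, and image names are illustrative), a named data volume can be mounted directly, and a data volume container can share its volumes through --volumes-from:

# Named data volume mounted into a container
docker volume create mydata
docker run -d -v mydata:/var/lib/mydata mycompany/myservice:1.0

# Data volume container pattern: a dormant container holds the volume...
docker create -v /dbdata --name dbstore ubuntu /bin/true
# ...and "application" containers mount it from that central point
docker run -d --volumes-from dbstore --name service1 mycompany/myservice:1.0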
Figure 4-5 shows that regular Docker volumes can be placed on storage outside of the containers themselves, but within the host server/VM physical boundaries. Docker volumes don't have the ability to span from one host server/VM to another.
Figure 4-5: Data volumes and external data sources for containers
Because of the inability to manage data shared between containers that run on separate physical hosts, it is recommended that you not use volumes for business data unless the Docker host is a fixed host/VM, because when you use Docker containers in an orchestrator, containers are expected to be moved from one host to another, depending on the optimizations performed by the cluster.
Therefore, regular data volumes are a good mechanism for working with trace files, temporary files, or any similar concept that won't affect business data consistency if or when your containers are moved across multiple hosts.
Volume plug-ins like Flocker provide data across all hosts in a cluster. Although not all volume plug-ins are created equally, volume plug-ins typically provide externalized persistent reliable storage from the immutable containers.
Remote data sources and caches like SQL Database, DocumentDB, or a remote cache like Redis would be the same as developing without containers. This is one of the preferred, and proven, ways to store business application data.
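In practice, this means the container receives the location of the remote data source through configuration rather than storing the data itself; for example, as an environment variable following the ASP.NET Core configuration convention (server, database, and image names are illustrative):

docker run -d -p 80:80 \
    -e "ConnectionStrings__DefaultConnection=Server=tcp:myserver.database.windows.net,1433;Database=mycatalog" \
    mycompany/myservice:1.0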
SOA was an overused term that meant many different things to different people. But as a minimum common denominator, SOA, or service orientation, means that you structure the architecture of your application by decomposing it into multiple services (most commonly as HTTP services) that can be classified into different types, like subsystems or, in other cases, tiers.
Today, you can deploy those services as Docker containers, which solves deployment-related issues because all of the dependencies are included within the container image. However, when you need to scale out SOAs, you might encounter challenges if you are deploying based on single instances. This is where Docker clustering software or an orchestrator will help you. We'll look at this in greater detail in the next section when we examine microservices approaches.
At the end of the day, container clustering solutions are useful for both a traditional SOA architecture and a more advanced microservices architecture in which each microservice owns its data model. And, thanks to having multiple databases, you also can scale out the data tier instead of working with monolithic databases shared by the SOA services. However, the discussion about splitting the data is purely about architecture and design.
Using orchestrators for production-ready applications is essential if your application is based on microservices or simply split across multiple containers. As introduced previously, in a microservice-based approach, each microservice owns its model and data so that it will be autonomous from a development and deployment point of view. But even if you have a more traditional application that is composed of multiple services (like SOA), you also will have multiple containers or services comprising a single business application that need to be deployed as a distributed system. These kinds of systems are complex to scale out and manage; therefore, you absolutely need an orchestrator if you want to have a production-ready and scalable multicontainer application.
Figure 4-6 illustrates deployment into a cluster of an application composed of multiple microservices (containers).
Figure 4-6: A cluster of containers
It looks like a logical approach. But how are you handling load balancing, routing, and orchestrating these composed applications?
The Docker command-line interface (CLI) meets the needs of managing one container on one host, but it falls short when it comes to managing multiple containers deployed on multiple hosts for more complex distributed applications. In most cases, you need a management platform that will automatically start containers, suspend them, or shut them down when needed, and ideally also control how they access resources like the network and data storage.
To go beyond the management of individual containers or very simple composed apps and move toward larger enterprise applications with microservices, you must turn to orchestration and clustering platforms.
From an architecture and development point of view, if you are building large, enterprise, microservices-based applications, it is important to understand the platforms and products that support advanced scenarios, summarized in Table 4-1.
More info Microsoft released a preview version of Azure Container Service (AKS) on October 24, 2017. AKS makes it easy to run Kubernetes at large scale, with management and maintenance tools. You will learn more about AKS later in this book.
Table 4-1: Software platforms for container clustering, orchestration, and scheduling
The concepts of a cluster and a scheduler are closely related, so the products provided by different vendors often provide both sets of capabilities. Table 4-1 lists the most important platform and software choices you have for clusters and schedulers. These clusters are generally offered in public clouds like Azure.
Several cloud vendors offer support for Docker containers plus Docker clusters and orchestration, including Azure, Amazon EC2 Container Service, and Google Container Engine. Azure provides Docker cluster and orchestrator support through Azure Container Service, as explained in the next section.
Another choice is to use Azure Service Fabric, which also supports Docker based on Linux and Windows Containers. Service Fabric runs on Azure or any other cloud as well as on-premises.
A Docker cluster pools multiple Docker hosts and exposes them as a single virtual Docker host, so you can deploy multiple containers into the cluster. The cluster will handle all the complex management plumbing such as scalability and health. Figure 4-7 represents how a Docker cluster for composed applications maps to Container Service.
Container Service provides a way to simplify the creation, configuration, and management of a cluster of VMs that are preconfigured to run containerized applications. Using an optimized configuration of popular open-source scheduling and orchestration tools, Container Service gives you the ability to use your existing skills or draw on a large and growing body of community expertise to deploy and manage container-based applications in Azure.
Container Service optimizes the configuration of popular Docker clustering open-source tools and technologies specifically for Azure. You get an open solution that offers portability for both your containers and your application configuration. You select the size, the number of hosts, and the orchestrator tools, and Container Service handles everything else.
Container Service uses Docker images to ensure that your application containers are fully portable. It supports your choice of open-source orchestration platforms like DC/OS, Kubernetes, and Docker Swarm to ensure that these applications can scale to thousands or even tens of thousands of containers.
With Azure Container Service, you can take advantage of the enterprise-grade features of Azure while still maintaining application portability, including at the orchestration layers.
Figure 4-7: Clustering choices in Azure Container Service
Azure Container Service is simply the infrastructure provided by Azure in order to deploy clusters of DC/OS, Kubernetes, or Docker Swarm, but it does not implement any additional orchestrator. Therefore, Azure Container Service is not an orchestrator, as such; it is only an infrastructure that takes advantage of existing open-source orchestrators for containers, as shown in Figure 4-8.
Figure 4-8: Azure Container Service provides the infrastructure for open-source orchestrators
From a usage perspective, the goal of Container Service is to provide a container hosting environment by using popular open-source tools and technologies. To this end, it exposes the standard API endpoints for your chosen orchestrator. By using these endpoints, you can use any software that can communicate to those endpoints. For example, in the case of the Docker Swarm endpoint, you might choose to use the Docker CLI. For DC/OS, you might choose to use the DC/OS CLI.
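For example, with a Swarm cluster you could point the standard Docker CLI at the Swarm management endpoint over an SSH tunnel (the host name is illustrative; this is a sketch, assuming the default ACS Swarm SSH port):

# Open an SSH tunnel to the Swarm master
ssh -fNL 2375:localhost:2375 -p 2200 azureuser@myswarm-mgmt.westus.cloudapp.azure.com
# Point the Docker CLI at the tunneled endpoint and use it as usual
export DOCKER_HOST=tcp://localhost:2375
docker info
docker ps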
To begin using Container Service, you deploy a Container Service cluster from the Azure portal by using an Azure Resource Manager template or the CLI. Available templates include Docker Swarm, Kubernetes, and DC/OS. You can modify the quickstart templates to include additional or advanced Azure configuration.
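For example, with the CLI, creating a cluster is a single command (the resource group and cluster names are illustrative; the orchestrator type can be kubernetes, dcos, or swarm):

# Assumes the resource group was created beforehand with az group create
az acs create --orchestrator-type kubernetes \
    --resource-group myResourceGroup \
    --name myACSCluster \
    --generate-ssh-keys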
More info To learn more about deploying a Container Service cluster, on the Azure website, read Deploy an Azure Container Service cluster.
There are no fees for any of the software installed by default as part of ACS. All default options are implemented with open-source software.
Container Service is currently available for Standard A, D, DS, G, and GS series Linux VMs in Azure. You are charged only for the compute instances you choose as well as the other underlying infrastructure resources consumed, such as storage and networking. There are no incremental charges for Container Service itself.
Following are locations where you can find additional information:
Azure Container Service overview: https://azure.microsoft.com/services/container-service/
Azure Container Service documentation: https://docs.microsoft.com/azure/container-service/
Azure Container Service has been available since 2015 with support for multiple container orchestrators, as described earlier. In those years, Kubernetes has emerged as the open-source standard for container orchestration. Microsoft has been working on a new version of Azure Container Service (AKS) that contains a lot of improvements and full coverage of Kubernetes.
Using Azure Container Service (AKS), you can take advantage of the enterprise-grade features of Azure while still maintaining application portability through Kubernetes and Docker.
Later you will learn through some examples how you can start using AKS.
You can read more about AKS here:
https://docs.microsoft.com/en-us/azure/aks/
https://azure.microsoft.com/en-us/services/container-service/
If your current applications are based on Docker containers, whether in any cloud or on-premises, you can bring them directly to Azure through AKS with Kubernetes, thanks to the portability that containers provide.
The goal with AKS is to provide a container hosting environment by using open-source tools and technologies that are popular among customers. To this end, AKS exposes the standard Kubernetes API endpoints. By using these standard endpoints, you can leverage any tool that is capable of talking to a Kubernetes cluster. For example, you might choose kubectl, helm, or draft.
As mentioned before, Kubernetes is an open-source platform for automating deployment, scaling, and operations of application containers across clusters of hosts. It was originally created by Google, which has used it for a long time in its own systems; that means it is a solution proven in real systems, created and conceptualized from the experience and needs of real systems that usually run significant workloads.
The main characteristics of Kubernetes are:
Automated deployment, rollouts, and rollbacks of application containers
Self-healing: restarting, replacing, and rescheduling containers when they fail
Horizontal scaling of services
Service discovery and load balancing
Secret and configuration management
Storage orchestration
In the next chapters you will learn more about Kubernetes and how you use it locally, from your development machine and in Microsoft Azure using Azure Container Service (AKS).
Service Fabric arose from Microsoft’s transition from delivering “box” products, which were typically monolithic in style, to delivering services. The experience of building and operating large services at scale, such as Azure SQL Database, Azure Document DB, Azure Service Bus, or Cortana’s Backend, shaped Service Fabric. The platform evolved over time as more and more services adopted it. Importantly, Service Fabric had to run not only in Azure but also in standalone Windows Server deployments.
The aim of Service Fabric is to solve the difficult problems of building and running a service and utilizing infrastructure resources efficiently so that teams can solve business problems using a microservices approach.
Service Fabric provides two broad areas to help you build applications that use a microservices approach:
A platform that provides system services to deploy, upgrade, detect, and restart failed services, discover services, route messages, manage state, and monitor health.
Programming APIs to help you build applications as microservices: Reliable Services and Reliable Actors.
Service Fabric is agnostic with respect to how you build your service, and you can use any technology. However, it provides built-in programming APIs that make it easier to build microservices.
Figure 4-9 demonstrates how you can create and run microservices in Service Fabric either as simple processes or as Docker containers. It is also possible to mix container-based microservices with process-based microservices within the same Service Fabric cluster.
Figure 4-9: Deploying microservices as processes or as containers in Azure Service Fabric
Service Fabric clusters based on Linux and Windows hosts can run Docker Linux containers and Windows Containers.
More info For up-to-date information about containers support in Service Fabric, on the Azure website, read Service Fabric and containers.
Service Fabric is a good example of a platform with which you can define a different logical architecture (business microservices or Bounded Contexts) than the physical implementation. For example, if you implement Stateful Reliable Services in Azure Service Fabric, which are introduced in the next section, “Stateless versus stateful microservices,” you have a business microservice concept with multiple physical services.
In Service Fabric, you can group and deploy groups of services as a Service Fabric Application, which is the unit of packaging and deployment for the orchestrator or cluster. Therefore, the Service Fabric Application could be mapped to this autonomous business and logical microservice boundary or Bounded Context, as well.
With regard to containers in Service Fabric, you also can deploy services in container images within a Service Fabric cluster. Figure 4-10 illustrates that most of the time there will be only one container per service.
Figure 4-10: Business microservice with several services (containers) in Service Fabric
However, so-called "sidecar" containers (two containers that must be deployed together as part of a logical service) are also possible in Azure Service Fabric. The important thing is that a business microservice is the logical boundary around several cohesive elements. In many cases, it might be a single service with a single data model, but in some other cases you might have several physical services, as well.
As of this writing (late 2017), in Service Fabric you cannot deploy SF Reliable Stateful Services on containers—you can deploy only guest containers, stateless services, or actor services in containers. But note that you can mix services in processes and services in containers in the same Service Fabric application, as shown in Figure 4-11.
Figure 4-11: Business microservice mapped to a Service Fabric application with containers and stateful services
Support is also different depending on whether you are using Docker containers on Linux or Windows Containers. Support for containers in Service Fabric will be expanding in upcoming releases. For up-to-date news about container support in Service Fabric, on the Azure website, read Service Fabric and containers.
As mentioned earlier, each microservice (logical Bounded Context) must own its domain model (data and logic). In the case of stateless microservices, the databases will be external, employing relational options like SQL Server, or NoSQL options like MongoDB or Azure DocumentDB.
But the services themselves also can be stateful, which means that the data resides within the microservice. This data might exist not just on the same server, but within the microservice process, in memory, and persisted on drives and replicated to other nodes. Figure 4-12 shows the different approaches.
Figure 4-12: Stateless versus stateful microservices
A stateless approach is perfectly valid and is easier to implement than stateful microservices because the approach is similar to traditional and well-known patterns. But stateless microservices impose latency between the process and data sources. They also involve more moving pieces when you are trying to improve performance with additional cache and queues. The result is that you can end up with complex architectures that have too many tiers.
In contrast, stateful microservices can excel in advanced scenarios because there is no latency between the domain logic and data. Heavy data processing, gaming back-ends, databases as a service, and other low-latency scenarios all benefit from stateful services, which provide local state for faster access.
Stateless and stateful services are complementary. For instance, a stateful service could be split into multiple partitions. To access those partitions, you might need a stateless service acting as a gateway service that knows how to address each partition based on partition keys.
Stateful services do have drawbacks. They impose a level of complexity in order to scale out. Functionality that would usually be implemented by external database systems must be addressed within the service for tasks such as data replication across stateful microservices and data partitioning. However, this is one of the areas where an orchestrator like Service Fabric, with its stateful Reliable Services, can help the most, by simplifying the development and life cycle of stateful microservices using the Reliable Services API and Reliable Actors.
Other microservice frameworks that allow stateful services, that support the Actor pattern, and that improve fault tolerance and latency between business logic and data are Microsoft Orleans, from Microsoft Research, and Akka.NET. Both frameworks are currently improving their support for Docker.
Note that Docker containers are themselves stateless. If you want to implement a stateful service, you need one of the additional prescriptive and higher-level frameworks noted earlier. However, as of this writing, stateful services in Service Fabric are not supported as containers, only as plain microservices. Reliable services support in containers will be available in upcoming versions of Service Fabric.
You can interact with AKS from your preferred client operating system. Here we show how to do it from Microsoft Windows, using the Windows Subsystem for Linux (an embedded Ubuntu Linux within Windows) to run Bash commands.
Prerequisites to have before using AKS are:
An Azure subscription
The Azure CLI (Azure-CLI) installed
Optionally, the Windows Subsystem for Linux, if you want to run the commands from Bash on Windows
More info
To find complete information about:
Azure-CLI, go to https://docs.microsoft.com/cli/azure/overview?view=azure-cli-latest
Windows Subsystem for Linux, go to https://msdn.microsoft.com/commandline/wsl/about
There are several ways to create the AKS environment: you can use Azure-CLI commands or the UI in the Azure portal.
Here you can explore some examples that use the Azure-CLI to create the cluster and the Azure portal UI to review the results. There are other tools that you need to install on the development machine, like kubectl and, obviously, Docker.
As of the writing of this eBook, AKS was in preview, so if you want to use it while it is in preview, you have to activate it in your subscription. To enable AKS, use the following command:
az provider register -n Microsoft.ContainerService
After enabling the preview, you can create the AKS cluster by using the following command:
az aks create --resource-group MSSampleResourceGroup --name MSSampleClusterK801 --agent-count 1 --generate-ssh-keys --location westus2
Figure 4-13: AKS Creation Command
After the creation job finishes, you can see the AKS cluster that was created in the Azure portal UI:
The resource group:
Figure 4-14: AKS Resource Group view from Azure.
The AKS cluster:
Figure 4-15: AKS view from Azure.
You can also view the nodes that were created by using the Azure-CLI and kubectl: first get the credentials, and then invoke the kubectl get nodes command:
# az aks get-credentials --resource-group MSSampleK8ClusterRG --name MSSampleK8Cluster
# kubectl get nodes
Here you can see the commands under the Linux subsystem and the result:
Figure 4-16: View of nodes.
No matter if you prefer a full and powerful IDE or a lightweight and agile editor, Microsoft has you covered when it comes to developing Docker applications.
If you prefer a lightweight, cross-platform editor supporting any development language, you can use Visual Studio Code and the Docker CLI. These products provide a simple yet robust experience, which is critical for streamlining the developer workflow. By installing "Docker for Mac" or "Docker for Windows" (development environment), Docker developers can use a single Docker CLI to build apps for both Windows and Linux (runtime environment). Plus, Visual Studio Code supports extensions for Docker with IntelliSense for Dockerfiles and shortcut tasks to run Docker commands from the editor.
Note
To download Visual Studio Code, go to https://code.visualstudio.com/download.
To download Docker for Mac and Windows, go to http://www.docker.com/products/docker.
We recommend using Visual Studio 2017 (or later) with its built-in Docker Tools enabled. With Visual Studio you can develop, run, and validate your applications directly in the chosen Docker environment. Press F5 to run your application (single container or multiple containers) directly in a Docker host with debugging, or press Ctrl+F5 to edit and refresh your app without having to rebuild the container. This is the simplest and most powerful choice for Windows developers creating Docker containers for Linux or Windows.
You can use Visual Studio for Mac when developing Docker-based applications. Visual Studio for Mac offers a richer IDE for the Mac compared to Visual Studio Code for Mac, which is a plain code editor.
You can develop Docker applications with Microsoft tools using most modern languages. The following is an initial list, but you are not limited to it:
Basically, you can use any modern language supported by Docker in Linux or Windows.
Before triggering the outer-loop workflow spanning the entire DevOps cycle, it all begins on each developer's machine, coding the app itself, using their preferred languages or platforms, and testing it locally (Figure 4-17). But in every case, you will have a very important point in common, no matter what language, framework, or platform you choose: in this specific workflow, you are always developing and testing Docker containers, but locally.
Figure 4-17: Inner-loop development context
The container or instance of a Docker image will contain these components:
An operating system selection (for example, a Linux distribution or Windows Nano Server)
Files added by the developer (app binaries, etc.)
Configuration (environment settings and dependencies)
You can set up the inner-loop development workflow that utilizes Docker as the process (described in the next section). Take into account that the initial steps to set up the environment are not included, because you need to do that just once.
Apps are made up of your own services plus additional libraries (dependencies).
Figure 4-18 shows the basic steps that you usually need to carry out when building a Docker app, followed by detailed descriptions of each step.
Figure 4-18: High-level workflow for the life cycle for Docker containerized applications using Docker CLI
The way you develop your application is pretty similar to the way you do it without Docker. The difference is that while developing, you are deploying and testing your application or services running within Docker containers placed in your local environment (like a Linux VM or Windows).
Setting up your local environment
With the latest versions of Docker for Mac and Windows, it’s easier than ever to develop Docker applications, and the setup is straightforward.
More info For instructions on setting up Docker for Windows, go to:
https://docs.docker.com/docker-for-windows/
For instructions on setting up Docker for Mac, go to:
https://docs.docker.com/docker-for-mac/
In addition, you’ll need a code editor so that you can actually develop your application while using Docker CLI.
Microsoft provides Visual Studio Code, a lightweight code editor that is supported on Mac, Windows, and Linux, and provides IntelliSense with support for many languages (JavaScript, .NET, Go, Java, Ruby, Python, and most modern languages), debugging, integration with Git, and support for extensions. This editor is a great fit for Mac and Linux developers. In Windows, you also can use the full Visual Studio application.
More info For instructions on installing Visual Studio Code for Windows, Mac, or Linux, go to:
http://code.visualstudio.com/docs/setup/setup-overview/
You can work with the Docker CLI and write your code using any code editor, but Visual Studio Code makes it easy to author Dockerfile and docker-compose.yml files in your workspace. Plus, you can run Visual Studio Code tasks from the IDE that run scripts performing elaborate operations using the Docker CLI underneath.
The Docker extension for VS Code provides the following features:
Automatic Dockerfile and docker-compose.yml file generation
Syntax highlighting and hover tips for docker-compose.yml and Dockerfile files
IntelliSense (completions) for Dockerfile and docker-compose.yml files
Linting (errors and warnings) for Dockerfile files
Command Palette integration for the most common Docker commands
To install the Docker extension press Ctrl+Shift+P, type ext install, and then run the Extensions: Install Extension command to bring up the Marketplace extension list. Next, type docker to filter the results, and then select the Docker Support extension, as depicted in Figure 4-19.
Figure 4-19: Installing the Docker Extension in Visual Studio Code
You will need a Dockerfile for each custom image to be built and for each container to be deployed. If your app is made up of a single custom service, you will need a single Dockerfile. But if your app is composed of multiple services (as in a microservices architecture), you'll need one Dockerfile per service.
The Dockerfile is usually placed within the root folder of your app or service and contains the required commands so that Docker knows how to set up and run that app or service. You can create your Dockerfile and add it to your project along with your code (Node.js, .NET Core, etc.), or, if you are new to the environment, take a look at the following Tip.
Tip You can use the Docker extension to guide you when creating the Dockerfile and docker-compose.yml files related to your Docker containers. Eventually, you will probably write these kinds of files without this tool, but using the Docker extension is a good starting point that will accelerate your learning curve.
In Figure 4-20 you can see how docker-compose files are added by using the Docker extension for VS Code.
Figure 4-20: Docker files added using the Add Docker files to Workspace command
When you add a Dockerfile, you specify what base Docker image you'll be using (like "FROM microsoft/aspnetcore"). You usually build your custom image on top of a base image that you can get from an official repository at the Docker Hub registry (like an image for .NET Core or one for Node.js).
Option A: Use an existing official Docker image
Using an official repository of a language stack with a version number ensures that the same language features are available on all machines (including development, testing, and production).
Following is a sample Dockerfile for a .NET Core container:
# Base Docker image to use
FROM microsoft/aspnetcore:2.0
# Set the Working Directory and files to be copied to the image
ARG source
WORKDIR /app
COPY ${source:-bin/Release/PublishOutput} .
# Configure the listening port to 80 (Internal/Secured port within Docker host)
EXPOSE 80
# Application entry point
ENTRYPOINT ["dotnet", "MyCustomMicroservice.dll"]
In this case, it is using the version 2.0 of the official ASP.NET Core Docker image for Linux named microsoft/aspnetcore:2.0. For further details, consult the ASP.NET Core Docker Image page and the .NET Core Docker Image page. You also could be using another comparable image like node:4-onbuild for Node.js, or many other preconfigured images for development languages, which are available at Docker Hub.
In the Dockerfile, you can instruct Docker to listen on the TCP port that you will use at runtime (such as port 80).
There are other lines of configuration that you can add in the Dockerfile depending on the language/framework you are using, so Docker knows how to run the app. For instance, you need the ENTRYPOINT line with ["dotnet", "MyCustomMicroservice.dll"] to run a .NET Core app, although you can have multiple variants depending on the approach you use to build and run your service. If you're using the SDK and the dotnet CLI to build and run the .NET app, it would be slightly different. The bottom line is that the ENTRYPOINT line plus additional lines will be different depending on the language/platform you choose for your application.
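For example, a sketch of the SDK-based variant using a multi-stage build (available since Docker 17.05; the project and output names are illustrative) could look like this:

# Build stage: compile and publish using the .NET Core SDK image
FROM microsoft/dotnet:2.0-sdk AS build-env
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /app

# Runtime stage: copy the published output into the smaller runtime image
FROM microsoft/aspnetcore:2.0
WORKDIR /app
COPY --from=build-env /app .
ENTRYPOINT ["dotnet", "MyCustomMicroservice.dll"]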
More info For information about building Docker images for .NET Core applications, go to:
https://docs.microsoft.com/dotnet/articles/core/docker/building-net-docker-images.
To learn more about building your own images, go to:
Multiplatform/Multi-Architecture image repositories
As Windows containers become more prevalent, a single image repository can contain platform variants, such as a Linux and a Windows image. This is a feature introduced in Docker in 2017 that makes it possible for vendors to use a single image repository to cover multiple platforms, such as the microsoft/aspnetcore repository, which is available at the Docker Hub registry.
When pulling the microsoft/aspnetcore image from a Windows host, it pulls the Windows variant, whereas pulling the same image name from a Linux host pulls the Linux variant.
Option B: Create your base image from scratch
You can create your own Docker base image from scratch as explained in this article from Docker. This is a scenario that is probably not best for you if you are just starting with Docker, but if you want to set the specific bits of your own base image, you can do it.
For each custom service that comprises your app, you’ll need to create a related image. If your app is made up of a single service or web app, you’ll need just a single image.
Note When taking into account the “outer-loop DevOps workflow,” the images will be created by an automated build process whenever you push your source code to a Git repository (Continuous Integration) so the images will be created in that global environment from your source code.
But before going down that outer-loop route, developers need to ensure that the Docker application is actually working properly, so that they don't push code that might not work correctly to the source control system (Git, etc.).
Therefore, each developer first needs to do the entire inner-loop process to test locally and continue developing until they want to push a complete feature or change to the source control system.
To create an image in your local environment by using the Dockerfile, you can use the docker build command, as demonstrated in Figure 4-21 (you also can run docker-compose up --build for applications composed of several containers/services).
Figure 4-21: Running docker build
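The command shown in Figure 4-21 is along these lines, run from the project's folder (which acts as the build context) and producing the image named in the next paragraph:

docker build -t cesardl/netcore-webapi-microservice-docker:first .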
Optionally, instead of directly running docker build from the project's folder, you first can generate a deployable folder with the required .NET libraries by using the dotnet publish command, and then run docker build.
In this example, this creates a Docker image with the name cesardl/netcore-webapi-microservice-docker:first (:first is a tag, like a specific version). You can take this step for each custom image you need to create for your composed Docker application with several containers.
You can find the existing images in your local repository (your development machine) by using the docker images command, as illustrated in Figure 4-22.
Figure 4-22: Viewing existing images using docker images
With the docker-compose.yml file, you can define a set of related services to be deployed as a composed application with the deployment commands explained in the next section.
You need to create that file in your main or root solution folder; it should have content similar to the following docker-compose.yml file:
version: '2'
services:
  web:
    build: .
    ports:
      - "81:80"
    volumes:
      - .:/code
    depends_on:
      - redis
  redis:
    image: redis
In this particular case, this file defines two services: the web service (your custom service) and the redis service (a popular cache service). Each service will be deployed as a container, so we need to use a concrete Docker image for each. For this particular web service, the image will need to do the following:
Build from the Dockerfile in the current directory
Forward the exposed port 80 on the container to port 81 on the host machine
Mount the project directory on the host to /code within the container, so you can modify the code without having to rebuild the image
Start after the redis service, which it depends on
The redis service uses the latest public redis image pulled from the Docker Hub registry. redis is a very popular cache system for server-side applications.
If your app has only a single container, you just need to run it by deploying it to your Docker Host (VM or physical server). However, if your app is made up of multiple services, you need to compose it, too. Let’s see the different options.
Option A: Run a single container or service
You can run the Docker image by using the docker run command, as shown here:
docker run -t -d -p 80:5000 cesardl/netcore-webapi-microservice-docker:first
Note that for this particular deployment, we’ll be redirecting requests sent to port 80 to the internal port 5000. Now, the application is listening on the external port 80 at the host level.
Option B: Compose and run a multiple-container application
In most enterprise scenarios, a Docker application will be composed of multiple services. For these cases, you can run the command docker-compose up (Figure 4-23), which will use the docker-compose.yml file that you might have created previously. Running this command deploys a composed application with all of its related containers.
Figure 4-23: Results of running the "docker-compose up" command
After you run docker-compose up, you deploy your application and its related container(s) into your Docker Host, as illustrated in Figure 4-24, in the VM representation.
Figure 4-24: VM with Docker containers deployed
This step will vary depending on what your app is doing.
In a very simple .NET Core Web API "Hello World" deployed as a single container or service, you'd just need to access the service by providing the TCP port specified in the Dockerfile.
If localhost is not turned on, to navigate to your service, find the IP address for the machine by using this command:
docker-machine ip your-docker-machine-name
On the Docker host, open a browser and navigate to that site; you should see your app/service running, as demonstrated in Figure 4-25.
Figure 4-25: Testing your Docker application locally using localhost
Note that it is using port 80, but internally it was being redirected to port 5000, because that’s how it was deployed with docker run, as explained earlier.
You can test this by using CURL from the terminal. In a Docker installation on Windows, the default IP is 10.0.75.1, as depicted in Figure 4-26.
Figure 4-26: Testing a Docker application locally by using CURL
Debugging a container running on Docker
Visual Studio Code supports debugging applications running in Docker if you're using Node.js and other platforms like .NET Core containers.
You also can debug .NET Core or .NET Framework containers in Docker when using Visual Studio for Windows or Mac, as described in the next section.
More info: To learn more about debugging Node.js Docker containers, go to:
https://blog.docker.com/2016/07/live-debugging-docker and https://blogs.msdn.microsoft.com/user_ed/2016/02/27/visual-studio-code-new-features-13-big-debugging-updates-rich-object-hover-conditional-breakpoints-node-js-mono-more/
The developer workflow when using Visual Studio Tools for Docker is similar to the workflow when using Visual Studio Code and Docker CLI (in fact, it is based on the same Docker CLI), but it is easier to get started, simplifies the process, and provides greater productivity for the build, run, and compose tasks. It’s also able to execute and debug your containers via simple actions like F5 and Ctrl+F5 from Visual Studio. Even more, with Visual Studio 2017, in addition to being able to run and debug a single container, you also can run and debug a group of containers (a whole solution) at the same time if they are defined in the same docker-compose.yml file at the solution level.
With the latest versions of Docker for Windows, it is easier than ever to develop Docker applications because the setup is straightforward, as explained in the following references.
More info: To learn more about installing Docker for Windows, go to https://docs.docker.com/docker-for-windows/.
When you add Docker support to a service project in your solution (see Figure 4-27), Visual Studio is not just adding a DockerFile file to your project, it also is adding a service section in your solution’s docker-compose.yml files (or creating the files if they didn’t exist). It’s an easy way to begin composing your multicontainer solution; you then can open the docker-compose.yml files and update them with additional features.
Figure 4-27: Turning on Docker Solution support in a Visual Studio 2017 project
This action not only adds the DockerFile to your project, it also adds the required configuration lines of code to a global docker-compose.yml set at the solution level.
You also can turn on Docker support when creating an ASP.NET Core project in Visual Studio 2017, as shown in Figure 4-28.
Figure 4-28: Turning on Docker support when creating a project
After you add Docker support to your solution in Visual Studio, you also will see a new node tree in Solution Explorer with the added docker-compose.yml files, as depicted in Figure 4-29.
Figure 4-29: docker-compose.yml files in VS Solution Explorer
You could deploy a multicontainer application by using a single docker-compose.yml file when you run docker-compose up; however, Visual Studio adds a group of them, so you can override values depending on the environment (development versus production) and the execution type (release versus debug). This capability will be better explained in later chapters.
More info: For further details on the services implementation and use of Visual Studio Tools for Docker, read the following articles:
Build, debug, update, and refresh apps in a local Docker container:
https://azure.microsoft.com/documentation/articles/vs-azure-tools-docker-edit-and-refresh/
Deploy an ASP.NET container to a remote Docker host:
https://azure.microsoft.com/documentation/articles/vs-azure-tools-docker-hosting-web-apps-in-docker/
With Windows Containers, you can convert your existing Windows applications to Docker images and deploy them with the same tools as the rest of the Docker ecosystem.
To use Windows Containers, you just need to write Windows PowerShell commands in the Dockerfile, as demonstrated in the following example:
FROM microsoft/windowsservercore
LABEL Description="IIS" Vendor="Microsoft" Version="10"
RUN powershell -Command Add-WindowsFeature Web-Server
CMD [ "ping", "localhost", "-t" ]
In this case, we’re using Windows PowerShell to install a Windows Server Core base image as well as IIS.
In a similar way, you also could use Windows PowerShell commands to set up additional components like the traditional ASP.NET 4.x and .NET 4.6 or any other Windows software, as shown here:
RUN powershell add-windowsfeature web-asp-net45
In late 2017, Microsoft released a preview version of a new managed Kubernetes service, Azure Container Service (AKS), which is a separate product from the existing ACS, which offers support for multiple orchestrators (Kubernetes, Mesos DC/OS, and Docker Swarm).
More info: To learn more about the Microsoft AKS announcement, you can go to the AKS docs:
The new features in AKS include an Azure-hosted control plane, automated upgrades, self-healing, user-configurable scaling, and a simpler user experience for both developers and cluster operators. At launch, AKS defaults to Kubernetes 1.7.7, the software's latest stable release, and customers can opt into the new 1.8 beta if they choose to do so.
In the following examples you can see the creation of an application, using Visual Studio 2017 with .NET Core 2.0, that runs on Linux and deploys to an AKS Cluster in Azure.
.NET Core is a general-purpose development platform maintained by Microsoft and the .NET community on GitHub. It is cross-platform, supporting Windows, macOS and Linux, and can be used in device, cloud, and embedded/IoT scenarios.
For this example, we use a simple project based on a Visual Studio Web API template, so you don't need any additional knowledge to create the sample. You only have to create a project using a standard template that includes all the elements to run a small project with a REST API, using Microsoft .NET Core 2.0 technology.
To create the sample project, select File > New > Project in Visual Studio, and then choose ASP.NET Core Web Application.
Figure 4-30: Creating ASP.NET Core Application
Visual Studio shows a list of templates for web projects. For our example, select Web API, which means you will create an ASP.NET Web API application.
Verify that you have selected ASP.NET Core 2.0 as the framework. .NET Core 2.0 is included in the latest version of Visual Studio 2017 and is automatically installed and configured for you when you install Visual Studio 2017.
If you have a previous version of .NET Core, you can download and install version 2.0 from https://www.microsoft.com/net/download/core#/sdk
Figure 4-31: Selecting .NET CORE 2.0 and Web API project type
You can enable Docker support at the moment of project creation, in the previous step, or later, which means you can Dockerize your project at any moment. To enable Docker support after project creation, right-click the project file and select Add > Docker Support on the context menu.
Figure 4-32: Enabling Docker support
You can select Windows or Linux. In our case, you must select Linux, because as of late 2017, AKS doesn't support Windows Containers.
Figure 4-33: Selecting Linux Containers.
With these simple steps, you have your application based on .NET Core 2.0 running on a Linux container.
As you can see, the integration between Visual Studio 2017 and Docker is totally oriented to developer productivity.
Now you can run your application with the F5 key or by using the Play button.
After running the project, you can check the images by using the docker images command; in the list, you will see the mssampleapplication image that was created by the build and automatic deployment of our project using Visual Studio 2017.
# docker images
Figure 4-34: View of docker images
We have to upload the image to a Docker registry, like Azure Container Registry (ACR) or Docker Hub, so the image can be deployed to the AKS cluster from that registry. In this case, we're uploading the image to Azure Container Registry.
Create the image in Release mode
We will create the image in Release mode (ready for production) by changing to Release, as shown here, and executing the application as you have done before.
Figure 4-35: Selecting Release Mode
If you execute the docker images command, you will see both images that were created, in Debug and Release mode.
Create a new Tag for the Image
Each container image needs to be tagged with the loginServer name of the registry. This tag is used for routing when pushing container images to an image registry.
You can view the name of the loginServer in two ways. The first is by accessing the Azure portal UI and taking the information from the Azure ACR registry:
Figure 4-36: View of the name of the Registry
The second is by running the following command:
# az acr list --resource-group MSSampleResourceGroup --query "[].{acrLoginServer:loginServer}" --output table
Figure 4-37: Listing the name of the registry using PowerShell
In both cases, you will obtain the name; in our example, mssampleacr.azurecr.io.
Now you can tag the image, taking the latest (Release) image, by using the following command:
# docker tag mssampleaksapplication:latest mssampleacr.azurecr.io/mssampleaksapplication:v1
Figure 4-38: Apply Tag Command
After running the tag command, if you use the docker images command to see the images, you will see the image with the new tag.
Figure 4-39: View of images
Push the image to Azure ACR
Now you can push the image to Azure ACR by executing the following command:
# docker push mssampleacr.azurecr.io/mssampleaksapplication:v1
This command starts uploading the image; it takes a little time and shows you the progress.
Figure 4-40: Uploading the image to the ACR
When the upload has finished, you will see the following information:
Figure 4-41: View of nodes
The next step is to deploy your container into your AKS Kubernetes cluster. For that, you need a deployment file (.yml), which for our example will contain the following:
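A minimal sketch of such a deployment file, reusing the image tagged and pushed earlier (the resource names, replica count, and port are assumptions), could be:

# deploy.yml (illustrative): a Deployment plus a public Service for the sample app
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: mssampleaksapplication
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mssampleaksapplication
    spec:
      containers:
      - name: mssampleaksapplication
        image: mssampleacr.azurecr.io/mssampleaksapplication:v1
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: mssampleaksapplication
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: mssampleaksapplication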
More info: You can find more information about yml files here:
Now you can deploy by using kubectl. First, you must get the credentials for the AKS cluster; the creation of the AKS cluster used in this sample is covered in the Azure Kubernetes managed service (AKS) content of this book.
# az aks get-credentials --resource-group MSSampleResourceGroupAKS --name mssampleclusterk801
Figure 4-42: getting credentials
Then you can use the kubectl create command to launch the deployment:
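Assuming the manifest shown earlier is saved as deploy.yml (the file name is an assumption), the command is:

# kubectl create -f deploy.yml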
Figure 4-43: Deploy in Kubernetes
When the deployment has finished, you can access the Kubernetes console through a local proxy, which you can start temporarily by using this command:
# az aks browse --resource-group MSSampleResourceGroupAKS --name mssampleclusterk801
and then browsing to the URL http://127.0.0.1:8001
Figure 4-44: View Kubernetes cluster information
Now you have your application deployed on Azure, using a Linux container and an AKS Kubernetes cluster, and you can access it through the public IP of your service, as specified in the Services information.