Development Process for Docker-Based Applications

Vision

Develop containerized .NET applications the way you like: either IDE-focused with Visual Studio and Visual Studio Tools for Docker, or CLI/editor-focused with the Docker CLI and Visual Studio Code.

Development environment for Docker apps

Development tool choices: IDE or editor

Whether you prefer a full and powerful IDE or a lightweight and agile editor, Microsoft has tools that you can use for developing Docker applications.

Visual Studio (for Windows). When developing Docker-based applications with Visual Studio, it is recommended that you use Visual Studio 2017 15.7 or later, which comes with tools for Docker already built in. The tools for Docker let you develop, run, and validate your applications directly in the target Docker environment. You can press F5 to run and debug your application (single container or multiple containers) directly in a Docker host, or press CTRL+F5 to edit and refresh your application without having to rebuild the container. This is the most powerful development choice for Docker-based apps.

Visual Studio for Mac. This IDE, the evolution of Xamarin Studio, runs on macOS and has supported Docker since mid-2017. It should be the preferred choice for developers working on Mac machines who also want to use a powerful IDE.

Visual Studio Code and Docker CLI. If you prefer a lightweight and cross-platform editor that supports any development language, you can use Microsoft Visual Studio Code (VS Code) and the Docker CLI. This is a cross-platform development approach for Mac, Linux, and Windows.

By installing Docker Community Edition (CE) tools, you can use a single Docker CLI to build apps for both Windows and Linux.
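For instance, the same CLI commands work whether the Docker daemon is in Linux-container or Windows-container mode (illustrative commands; the image name is just a placeholder):

    # Report the OS/architecture of both the Docker client and the daemon
    docker version
    # Build an image for whichever platform the daemon currently targets
    docker build -t myapp .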


.NET languages and frameworks for Docker containers

As mentioned in earlier sections of this guide, you can use .NET Framework, .NET Core, or the open-source Mono project when developing Docker containerized .NET applications. You can develop in C#, F#, or Visual Basic when targeting Linux or Windows Containers, depending on which .NET framework is in use. For more details about .NET languages, see the blog post The .NET Language Strategy.

Development workflow for Docker apps

The application development lifecycle starts at each developer's machine, where the developer codes the application using their preferred language and tests it locally. No matter which language, framework, or platform the developer chooses, with this workflow the developer is always developing and testing Docker containers, but doing so locally.

Each container (an instance of a Docker image) includes the following components:

- An operating system selection, for example, a Linux distribution, Windows Nano Server, or Windows Server Core.
- Files added during development, for example, source code and application binaries.
- Configuration information, such as environment settings and dependencies.

Workflow for developing Docker container-based applications

This section describes the inner-loop development workflow for Docker container-based applications. Inner-loop means that the broader DevOps workflow, which can extend up to production deployment, is not taken into account; the focus is only on the development work done on the developer's computer. The initial steps to set up the environment are not included, since those are done only once.

An application is composed of your own services plus additional libraries (dependencies). The following are the basic steps you usually take when building a Docker application, as illustrated in Figure 5-1.


Figure 5-1. Step-by-step workflow for developing Docker containerized apps

This section details that whole process and explains every major step, focusing on a Visual Studio environment.

When you are using an editor/CLI development approach (for example, Visual Studio Code plus Docker CLI on macOS or Windows), you need to know every step, generally in more detail than if you are using Visual Studio. For more details about working in a CLI environment, refer to the eBook Containerized Docker Application Lifecycle with Microsoft Platforms and Tools.

When you are using Visual Studio 2017, many of those steps are handled for you, which dramatically improves your productivity. This is especially true when you are using Visual Studio 2017 and targeting multi-container applications. For instance, with just one mouse click, Visual Studio adds the Dockerfile and docker-compose.yml file to your projects with the configuration for your application. When you run the application in Visual Studio, it builds the Docker image and runs the multi-container application directly in Docker; it even allows you to debug several containers at once. These features will boost your development speed.
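As an illustration, a minimal docker-compose.yml for a solution with one web service might look like the following sketch (the service name, image name, and paths are placeholders, not the exact file Visual Studio generates):

    version: '3'

    services:
      webmvc:                        # hypothetical service name
        image: myorg/webmvc          # image to build and run
        build:
          context: ./src/Web/WebMVC  # folder that contains the Dockerfile
          dockerfile: Dockerfile
        ports:
          - "5100:80"                # host port 5100 maps to container port 80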

However, just because Visual Studio makes those steps automatic does not mean that you do not need to know what is going on underneath with Docker. Therefore, in the guidance that follows, we detail every step.

Developing a Docker application is similar to the way you develop an application without Docker. The difference is that while developing for Docker, you are deploying and testing your application or services running within Docker containers in your local environment (either a Linux VM set up by Docker or directly in Windows if you are using Windows Containers).

Set up your local environment with Visual Studio

To begin, make sure you have Docker Community Edition (CE) for Windows installed, as explained in the following instructions:

Get started with Docker CE for Windows

In addition, you will need Visual Studio 2017 installed. Visual Studio 2017 includes the tooling for Docker if you selected the .NET Core and Docker workload during installation, as shown in Figure 5-2.


Figure 5-2. Selecting the .NET Core and Docker workload during Visual Studio 2017 setup

You can start coding your application in plain .NET (usually in .NET Core if you are planning to use containers) even before enabling Docker in your application and deploying and testing in Docker. However, it is recommended that you start working with Docker as soon as possible, because that will be the real environment, and any issues can be discovered as soon as possible. This is encouraged because Visual Studio makes it so easy to work with Docker that it almost feels transparent; the best example is when debugging multi-container applications from Visual Studio.


You need a Dockerfile for each custom image you want to build; you also need a Dockerfile for each container to be deployed, whether you deploy automatically from Visual Studio or manually using the Docker CLI (docker run and docker-compose commands). If your application contains a single custom service, you need a single Dockerfile. If your application contains multiple services (as in a microservices architecture), you need one Dockerfile for each service.

The Dockerfile is placed in the root folder of your application or service. It contains the commands that tell Docker how to set up and run your application or service in a container. You can manually create a Dockerfile in code and add it to your project along with your .NET dependencies.

With Visual Studio and its tools for Docker, this task requires only a few mouse clicks. When you create a new project in Visual Studio 2017, there is an option named Enable Container (Docker) Support, as shown in Figure 5-3.


Figure 5-3. Enabling Docker Support when creating a new project in Visual Studio 2017

You can also enable Docker support on a new or existing project by right-clicking your project file in Visual Studio and selecting the option Add > Docker Support, as shown in Figure 5-4.


Figure 5-4. Enabling Docker support in an existing Visual Studio 2017 project

This action on a project (like an ASP.NET Web application or Web API service) adds a Dockerfile to the project with the required configuration. It also adds a docker-compose.yml file for the whole solution. In the following sections, we describe the information that goes into each of those files. Visual Studio can do this work for you, but it is useful to understand what goes into a Dockerfile.

Option A: Creating a project using an existing official .NET Docker image

You usually build a custom image for your container on top of a base image you can get from an official repository at the Docker Hub registry. That is precisely what happens under the covers when you enable Docker support in Visual Studio. Your Dockerfile will use an existing aspnetcore image.

Earlier we explained which Docker images and repos you can use, depending on the framework and OS you have chosen. For instance, if you want to use ASP.NET Core (Linux or Windows), the image to use is microsoft/dotnet:2.1-aspnetcore-runtime. Therefore, you just need to specify which base Docker image you will use for your container. You do that by adding FROM microsoft/dotnet:2.1-aspnetcore-runtime to your Dockerfile. Visual Studio performs this step automatically, but if you were to update the version, you would update this value.

Using an official .NET image repository from Docker Hub with a version number ensures that the same language features are available on all machines (including development, testing, and production).

The following example shows a sample Dockerfile for an ASP.NET Core container.
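The original listing is not reproduced here; the following is a minimal sketch consistent with the description below, assuming the published output sits in obj/Docker/publish (as the Visual Studio 2017 tooling used to do):

    FROM microsoft/dotnet:2.1-aspnetcore-runtime
    WORKDIR /app
    # Tell Docker to listen on this TCP port at runtime
    EXPOSE 80
    # Copy the published output into the image (path is illustrative)
    COPY obj/Docker/publish .
    ENTRYPOINT ["dotnet", "MySingleContainerWebApp.dll"]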


In this case, the image is based on version 2.1 of the official ASP.NET Core Docker image (multi-arch for Linux and Windows). This is the setting microsoft/dotnet:2.1-aspnetcore-runtime. (For further details about this base image, see the ASP.NET Core Docker Image page and the .NET Core Docker Image page.) In the Dockerfile, you also need to instruct Docker to listen on the TCP port you will use at runtime (in this case, port 80, as configured with the EXPOSE setting).

You can specify additional configuration settings in the Dockerfile, depending on the language and framework you are using. For instance, the ENTRYPOINT line with ["dotnet", "MySingleContainerWebApp.dll"] tells Docker to run a .NET Core application. If you are using the SDK and the .NET CLI (dotnet CLI) to build and run the .NET application, this setting would be different. The bottom line is that the ENTRYPOINT line and other settings will be different depending on the language and platform you choose for your application.
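As an illustration of that difference, a sketch of an SDK-based variant (assuming the source code is copied into the image) could end with a different ENTRYPOINT:

    FROM microsoft/dotnet:2.1-sdk
    WORKDIR /app
    COPY . .
    # Build and run from source with the .NET CLI instead of a prebuilt DLL
    ENTRYPOINT ["dotnet", "run"]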

Additional resources

Using multi-arch image repositories

A single repo can contain platform variants, such as a Linux image and a Windows image. This feature allows vendors like Microsoft (base image creators) to create a single repo to cover multiple platforms (that is, Linux and Windows). For example, the microsoft/aspnetcore repository available in the Docker Hub registry provides support for Linux and Windows Nano Server by using the same repo name.

You can specify a tag that explicitly targets a platform, as in the following cases:
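(The tags below are illustrative of the 2.0-era aspnetcore repo; exact tag names vary by release.)

    docker pull microsoft/aspnetcore:2.0.0-jessie           # Linux (Debian) specific image
    docker pull microsoft/aspnetcore:2.0.0-nanoserver-1709  # Windows Nano Server specific image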


But, and this is new since mid-2017, if you specify the same image name, even with the same tag, the new multi-arch images (like the aspnetcore image, which supports multi-arch) will use the Linux or Windows version depending on the Docker host OS you are deploying to, as shown in the following example:
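(Illustrative command; the aspnetcore 2.0 tag is a multi-arch tag.)

    # The same multi-arch tag resolves to the Linux or the Windows variant,
    # depending on the OS of the Docker host that pulls it
    docker pull microsoft/aspnetcore:2.0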


This way, when you pull an image from a Windows host, it will pull the Windows variant, and pulling the same image name from a Linux host will pull the Linux variant.

Option B: Creating your base image from scratch

You can create your own Docker base image from scratch. This scenario is not recommended for someone who is starting with Docker, but if you want to set the specific bits of your own base image, you can do so.


Multi-stage builds in Dockerfile

The Dockerfile is similar to a batch script: it contains, in essence, what you would do if you had to set up the machine from the command line.

It starts with a base image that sets up the initial context; it's like the startup filesystem, which sits on top of the host OS. It's not an OS, but you can think of it as "the" OS inside the container.

The execution of every command creates a new layer on the filesystem with the changes from the previous one, so that, when combined, they produce the resulting filesystem.

Since every new layer “rests” on top of the previous one and the resulting image size increases with every command, images can get very large if they have to include, for example, the SDK needed to build and publish an application.
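You can inspect this layering on any image with the docker history command (the image name is just an example):

    # Lists every layer of the image and the size each command added
    docker history microsoft/dotnet:2.1-sdk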

This is where multi-stage builds (available in Docker 17.05 and higher) come into play to do their magic.

The core idea is that you can separate the Dockerfile execution process into stages, where a stage is an initial image followed by one or more commands, and the last stage determines the final image size.

In short, multi-stage builds allow splitting the creation into different "phases" and then assembling the final image, taking only the relevant directories from the intermediate stages. The general strategy for using this feature is:

  1. Use a base SDK image (it doesn't matter how large), with everything needed to build and publish the application to a folder, and then
  2. Use a small, runtime-only base image and copy the published folder from the previous stage to produce a small final image.

Probably the best way to understand multi-stage builds is to go through a Dockerfile in detail, line by line. Let's begin with the initial Dockerfile created by Visual Studio when adding Docker support to a project, and we'll get into some optimizations later.

The initial Dockerfile might look something like this:
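The original listing appears as an image in the source; the following is a reconstruction consistent with the line-by-line walkthrough below, assuming the eShopOnContainers Catalog.API layout (the COPY lines for lines 8-16, one per referenced project, are abbreviated):

    FROM microsoft/dotnet:2.1-aspnetcore-runtime AS base      # line 1
    WORKDIR /app                                              # line 2
    EXPOSE 80                                                 # line 3

    FROM microsoft/dotnet:2.1-sdk AS build                    # line 5
    WORKDIR /src                                              # line 6
    # Lines 7-16: one COPY per referenced .csproj file, for example:
    COPY src/Services/Catalog/Catalog.API/Catalog.API.csproj src/Services/Catalog/Catalog.API/
    # ...more COPY lines, one per referenced project...
    RUN dotnet restore src/Services/Catalog/Catalog.API/Catalog.API.csproj   # line 17
    COPY . .                                                  # line 18
    WORKDIR /src/src/Services/Catalog/Catalog.API             # line 19
    RUN dotnet build -c Release -o /app                       # line 20

    FROM build AS publish                                     # line 22
    RUN dotnet publish -c Release -o /app                     # line 23

    FROM base AS final                                        # line 25
    WORKDIR /app                                              # line 26
    COPY --from=publish /app .                                # line 27
    ENTRYPOINT ["dotnet", "Catalog.API.dll"]                  # line 28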

And these are the details, line by line:

1) Begin a stage with a "small" runtime-only base image; call it base for reference.
2) Create the /app directory in the image.
3) Expose port 80.

 

5) Begin a new stage with a "large" image for building and publishing; call it build for reference.
6) Create the /src directory in the image.
7) Up to line 16, copy the .csproj files of the referenced projects, to be able to restore packages later.

 

17) Restore packages for the Catalog.API project and the referenced projects.
18) Copy the whole directory tree for the solution (except the files/directories included in the .dockerignore file) to the /src directory in the image.
19) Change the current folder to the Catalog.API project.
20) Build the project (and other project dependencies) and output to the /app directory in the image.

 

22) Begin a new stage continuing from build; call it publish for reference.
23) Publish the project (and dependencies) and output to the /app directory in the image.

 

25) Begin a new stage continuing from base; call it final.
26) Change the current directory to /app.
27) Copy the /app directory from the publish stage to the current directory.
28) Define the command to run when the container is started.

Now we'll explore some optimizations to improve the performance of the whole process, which, in the case of eShopOnContainers, takes about 22 minutes or more to build the complete solution in Linux containers.

We'll take advantage of Docker's layer cache feature, which is quite simple: if the base image and the commands are the same as some previously executed ones, Docker can just reuse the resulting layer without the need to execute the commands, thus saving some time.

So, let's focus on the build stage. Lines 5-6 are mostly the same, but lines 7-17 are different for every service of eShopOnContainers, so they have to execute every single time. However, if we changed lines 7-16 to:

COPY . .

Then it would be just the same for every service; it would copy the whole solution and create a larger layer, but:

1) The copy process would only be executed the first time (and when rebuilding if a file is changed) and would use the cache for all other services, and
2) Since the larger image occurs in an intermediate stage, it doesn't affect the final image size.

The next significant optimization involves the restore command executed in line 17, which is also different for every service of eShopOnContainers. If we changed that line to just:

RUN dotnet restore

It would restore the packages for the whole solution but, then again, it would do it just once, instead of the 15 times required by the current strategy.

However, dotnet restore runs only if there's a single project or solution file in the folder, so achieving this is a bit more complicated. The way to solve it, without getting into too many details, is this:

1) Add lines to .dockerignore so that only one solution file remains visible to Docker (see the sketch after this list).
2) Include the /ignoreprojectextensions:.dcproj argument to dotnet restore, so it also ignores the docker-compose project and only restores the packages for the eShopOnContainers-ServicesAndWebApps solution.
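Putting both pieces together, under the assumption that the .dockerignore entries exclude every solution file except the one named above, the changes would look like this:

    # .dockerignore (excerpt): hide every solution file except the one to restore
    *.sln
    !eShopOnContainers-ServicesAndWebApps.sln

    # Dockerfile: restore once for the whole solution, skipping the .dcproj project
    RUN dotnet restore /ignoreprojectextensions:.dcproj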

For the final optimization, it just happens that line 20 is redundant: line 23 also builds the application and comes, in essence, right after line 20, so there goes another time-consuming command.

The resulting file is then:
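The original listing is an image in the source; here is a reconstruction under the same assumptions as the earlier walkthrough:

    FROM microsoft/dotnet:2.1-aspnetcore-runtime AS base
    WORKDIR /app
    EXPOSE 80

    FROM microsoft/dotnet:2.1-sdk AS build
    WORKDIR /src
    # Copy the whole solution once; cached for every service after the first build
    COPY . .
    # Restore once for the whole solution, ignoring the docker-compose project
    RUN dotnet restore /ignoreprojectextensions:.dcproj
    WORKDIR /src/src/Services/Catalog/Catalog.API
    # Publish directly (build is implied), removing the redundant build step
    RUN dotnet publish -c Release -o /app

    FROM base AS final
    WORKDIR /app
    COPY --from=build /app .
    ENTRYPOINT ["dotnet", "Catalog.API.dll"]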


Besides being much shorter, the resulting file, when the pattern is applied to all the Dockerfiles, also achieves a reduction of more than 50% in build time.
