Introduction to containers and Docker

Containerization is an approach to software development in which an application or service, its dependencies, and its configuration (abstracted as deployment manifest files) are packaged together as a container image. You then can test the containerized application as a unit and deploy it as a container image instance to the host operating system.

Just as the shipping industry uses standardized containers to move goods by ship, train, or truck, regardless of the cargo within them, software containers act as a standard unit of software that can contain different code and dependencies. Placing software into containers makes it possible for developers and IT professionals to deploy those containers across environments with little or no modification.

Containers also isolate applications from one another on a shared operating system (OS). Containerized applications run on top of a container host, which in turn runs on the OS (Linux or Windows). Because containers share the host's kernel rather than bundling a full guest OS, they have a significantly smaller footprint than virtual machine (VM) images.

Each container can run an entire web application or a service, as shown in Figure 1-1.

Image

Figure 1-1: Multiple containers running on a container host

In this example, Docker Host is a container host, and App 1, App 2, Svc 1, and Svc 2 are containerized applications or services.

Another benefit you can derive from containerization is scalability. You can scale out quickly by creating new containers for short-term tasks. From an application point of view, instantiating an image (creating a container) is similar to instantiating a process like a service or web app. For reliability, however, when you run multiple instances of the same image across multiple host servers, you typically want each container (image instance) to run in a different host server or VM, in different fault domains.
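As a minimal sketch of this process-like instantiation, the following Docker CLI session pulls one image and starts two independent containers from it; the `nginx` image and the container names are illustrative, not taken from this book:

```shell
# Pull an image once; it is a static, reusable artifact.
docker pull nginx:latest

# Each `docker run` instantiates the image as an independent container,
# much like starting multiple processes from the same executable.
docker run -d --name web1 -p 8081:80 nginx:latest
docker run -d --name web2 -p 8082:80 nginx:latest

# List the running containers (image instances).
docker ps
```

In production, an orchestrator would typically place such replicas on different hosts or fault domains rather than on one machine, as the paragraph above notes.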

In short, containers offer the benefits of isolation, portability, agility, scalability, and control across the entire application life cycle workflow. The most important benefit is the isolation provided between Dev and Ops.

What is Docker?

Docker is an open-source project for automating the deployment of applications as portable, self-sufficient containers that can run in the cloud or on-premises (see Figure 1-2). Docker is also a company that promotes and develops this technology, working in collaboration with cloud, Linux, and Windows vendors, including Microsoft.

Image

Figure 1-2: Docker deploys containers at all layers of the hybrid cloud

Docker image containers can run natively on Linux and Windows. However, Windows images can run only on Windows hosts and Linux images can run only on Linux hosts, whether that host is a physical server or a VM.

Developers can use development environments on Windows, Linux, or macOS. On the development computer, the developer runs a Docker host to which Docker images are deployed, including the app and its dependencies. Developers who work on Linux or on the Mac use a Docker host that is Linux based, and they can create images only for Linux containers. (Developers working on the Mac can edit code or run the Docker command-line interface [CLI] from macOS, but, as of this writing, containers do not run directly on macOS.) Developers who work on Windows can create images for either Linux or Windows Containers.

To host containers in development environments and provide additional developer tools, Docker ships Docker Community Edition (CE) for Windows or for macOS. These products install the necessary VM (the Docker host) to host the containers. Docker also makes available Docker Enterprise Edition (EE), which is designed for enterprise development and is used by IT teams who build, ship, and run large business-critical applications in production.

To run Windows Containers, there are two types of runtimes:

Windows Server Containers provide application isolation through process and namespace isolation technology. A Windows Server Container shares a kernel with the container host and with all containers running on the host.

Hyper-V Containers expand on the isolation provided by Windows Server Containers by running each container in a highly optimized virtual machine. In this configuration, the kernel of the container host is not shared with the Hyper-V Containers, providing better isolation.

The images for these containers are created in the same way and function the same. The difference is in how the container is created from the image: running a Hyper-V Container requires an extra parameter. For details, see Hyper-V Containers.
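On a Windows host, that extra parameter is the `--isolation` flag on `docker run`. A brief sketch, using a Microsoft-provided base image as an example:

```shell
# Default isolation on Windows Server runs a Windows Server Container:
docker run -it mcr.microsoft.com/windows/servercore:ltsc2022 cmd

# The same, unmodified image runs as a Hyper-V Container when the
# --isolation parameter is added; only the runtime isolation changes.
docker run -it --isolation=hyperv mcr.microsoft.com/windows/servercore:ltsc2022 cmd
```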

Comparing Docker containers with VMs

Figure 1-3 shows a comparison between VMs and Docker containers.

Because containers require far fewer resources (for example, they do not need a full OS), they are easy to deploy and they start fast. This makes it possible for you to have higher density, meaning that you can run more services on the same hardware unit, thereby reducing costs.

As a side effect of sharing the host's kernel, however, containers provide less isolation than VMs.

The main goal of an image is that it makes the environment (dependencies) the same across different deployments. This means that you can debug it on your machine and then deploy it to another machine with the same environment guaranteed.
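To illustrate how dependencies get baked into an image, here is a minimal, hypothetical Dockerfile for a Node.js web app; the base image tag, file names, and port are illustrative assumptions, not from this book:

```dockerfile
# Base image pins the runtime version for every deployment.
FROM node:18-alpine

WORKDIR /app

# Dependencies are resolved at build time and stored in the image,
# so every environment that runs this image sees the same dependencies.
COPY package*.json ./
RUN npm install

COPY . .

EXPOSE 3000
CMD ["node", "server.js"]
```

Because the dependencies live inside the image rather than on the host, the environment you debug locally is the same one that runs in production.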

A container image is a way to package an app or service and deploy it in a reliable and reproducible manner. In this respect, Docker is not only a technology, it’s also a philosophy and a process.

When using Docker, you will not hear developers say, “It works on my machine, why not in production?” They can simply say, “It runs on Docker,” because the packaged Docker application can be run on any supported Docker environment, and it will run the way it was intended to on all deployment destinations (Dev, QA, staging, production, etc.).

Image

Figure 1-3: Comparison of traditional VMs to Docker containers

Docker terminology

This section lists terms and definitions with which you should become familiar before delving deeper into Docker (for further definitions, see the extensive glossary provided by Docker here):

Docker containers, images, and registries

When using Docker, you create an app or service and package it and its dependencies into a container image. An image is a static representation of the app or service and its configuration and dependencies.

To run the app or service, the app’s image is instantiated to create a container, which will be running on the Docker host. Containers are initially tested in a development environment or PC.

You store images in a registry, which acts as a library of images. You need a registry when deploying to production orchestrators. Docker maintains a public registry via Docker Hub; other vendors provide registries for different collections of images. Alternatively, enterprises can have a private registry on-premises for their own Docker images.

Figure 1-4 shows how images and registries in Docker relate to other components. It also shows the multiple registry offerings from vendors.

Image

Figure 1-4: Taxonomy of Docker terms and concepts

By putting images in a registry, you can store static and immutable application bits, including all of their dependencies, at a framework level. You then can version and deploy images in multiple environments and thus provide a consistent deployment unit.
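The versioning-and-deployment flow described above maps to a few Docker CLI commands; the registry host and repository names below are placeholders, not from this book:

```shell
# Build a local image and give it a version tag.
docker build -t myapp:1.0 .

# Re-tag it for a registry so it can be stored and shared.
docker tag myapp:1.0 registry.example.com/myteam/myapp:1.0

# Push the versioned, immutable image to the registry.
docker push registry.example.com/myteam/myapp:1.0

# Any environment (QA, staging, production) can now pull the same bits.
docker pull registry.example.com/myteam/myapp:1.0
```

Tagging each image with an explicit version is what makes it a consistent, repeatable deployment unit across environments.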

Private image registries, either hosted on-premises or in the cloud, are recommended for the following situations:

Road to modern applications based on containers

You are probably reading this book because you are planning the development of new applications and analyzing the impact that Docker, containers, and new approaches such as microservices would have on your company.

The adoption of new development paradigms should be assessed before starting a project, so you understand the impact on your development teams, your budget, and your infrastructure.

Microsoft has been working on rich guidance, sample applications, and a suite of eBooks that can help you make the right decisions and guide your team to successful development, deployment, and operation of your new applications.

This book belongs to a suite of Microsoft guides that cover many of the needs you might have while developing new, modern applications based on containers.

The additional Microsoft eBooks related to Docker containers are the following:

.NET Microservices: Architecture for Containerized .NET Applications https://aka.ms/MicroservicesEbook

Modernize existing .NET applications with Azure cloud and Windows Containers https://t.co/xw5ilGAJmY