Building the application from a build (CI) container

Another benefit of Docker is that you can build your application from a preconfigured container, as shown in Figure 6-13, so you do not need to create a build machine or VM to build your application. You can use or test that build container by running it on your development machine. But what is even more interesting is that you can use the same build container from your CI (Continuous Integration) pipeline.


Figure 6-13. Docker build-container compiling your .NET binaries

For this scenario, we provide the microsoft/aspnetcore-build image, which you can use to compile and build your ASP.NET Core apps. The output is placed in an image based on the microsoft/aspnetcore image, which is an optimized runtime image, as previously noted.

The aspnetcore-build image contains everything you need in order to compile an ASP.NET Core application, including .NET Core, the ASP.NET SDK, npm, Bower, Gulp, etc.

We need these dependencies at build time. But we do not want to carry these with the application at runtime, because it would make the image unnecessarily large. In the eShopOnContainers application, you can build the application from a container by just running the following docker-compose command.

docker-compose -f docker-compose.ci.build.yml up

 

Figure 6-14 shows this command running at the command line.


Figure 6-14. Building your .NET application from a container

As you can see, the container that is running is the ci-build_1 container. This is based on the aspnetcore-build image so that it can compile and build your whole application from within that container instead of from your PC. That is why in reality it is building and compiling the .NET Core projects in Linux—because that container is running on the default Docker Linux host.

The docker-compose.ci.build.yml file for that image (part of eShopOnContainers) contains the following code. You can see that it starts a build container using the microsoft/aspnetcore-build image.

version: '3'

services:

  ci-build:
    image: microsoft/aspnetcore-build:2.0
    volumes:
      - .:/src
    working_dir: /src
    command: /bin/bash -c "pushd ./src/Web/WebSPA && npm rebuild node-sass && popd && dotnet restore ./eShopOnContainers-ServicesAndWebApps.sln && dotnet publish ./eShopOnContainers-ServicesAndWebApps.sln -c Release -o ./obj/Docker/publish"

 

 

Once the build container is up and running, it runs the .NET Core SDK dotnet restore and dotnet publish commands against all the projects in the solution in order to compile the .NET bits. In this case, because eShopOnContainers also has an SPA based on TypeScript and Angular for the client code, it also needs to check JavaScript dependencies with npm, but that action is not related to the .NET bits.

The dotnet publish command builds each project and publishes the compiled output to the obj/Docker/publish folder within each project's folder, as shown in Figure 6-15.


Figure 6-15. Binary files generated by the dotnet publish command

Creating the Docker images from the CLI

Once the application output is published to the related folders (within each project), the next step is to actually build the Docker images. To do this, you use the docker-compose build and docker-compose up commands, as shown in Figure 6-16.


Figure 6-16. Building Docker images and running the containers

In Figure 6-17, you can see how the docker-compose build command runs.


Figure 6-17. Building the Docker images with the docker-compose build command

The difference between the docker-compose build and docker-compose up commands is that docker-compose up both builds the images and starts the containers.

When you use Visual Studio, all these steps are performed under the covers. Visual Studio compiles your .NET application, creates the Docker images, and deploys the containers into the Docker host. Visual Studio offers additional features, like the ability to debug your containers running in Docker, directly from Visual Studio.

The overall takeaway here is that you are able to build your application the same way your CI/CD pipeline should build it—from a container instead of from a local machine. After the images are created, you just need to run them using the docker-compose up command.

Additional resources

Using a database server running as a container

You can have your databases (SQL Server, PostgreSQL, MySQL, etc.) on regular standalone servers, in on-premises clusters, or in PaaS services in the cloud like Azure SQL DB. However, for development and test environments, having your databases running as containers is convenient, because you do not have any external dependency, and simply running the docker-compose command starts the whole application. Having those databases as containers is also great for integration tests, because the database is started in the container and is always populated with the same sample data, so tests can be more predictable.

SQL Server running as a container with a microservice-related database

In eShopOnContainers, there is a container named sql.data defined in the docker-compose.yml file that runs SQL Server for Linux with all the SQL Server databases needed for the microservices. (You could also have one SQL Server container for each database, but that would require more memory assigned to Docker.) The important point in microservices is that each microservice owns its related data, and therefore, in this case, its related SQL database. But the databases can be anywhere.

The SQL Server container in the sample application is configured with the following YAML code in the docker-compose.yml file, which is executed when you run docker-compose up. Note that the YAML code has consolidated configuration information from the generic docker-compose.yml file and the docker-compose.override.yml file. (Usually you would separate the environment settings from the base or static information related to the SQL Server image.)

  sql.data:
    image: microsoft/mssql-server-linux
    environment:
      - MSSQL_SA_PASSWORD=Pass@word
      - ACCEPT_EULA=Y
      - MSSQL_PID=Developer
    ports:
      - "5434:1433"

 

In a similar way, instead of using docker-compose, the following docker run command can run that container:

docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=your@password' -p 1433:1433 -d microsoft/mssql-server-linux

 

However, if you are deploying a multi-container application like eShopOnContainers, it is more convenient to use the docker-compose up command so that it deploys all the required containers for the application.

When you start this SQL Server container for the first time, the container initializes SQL Server with the password that you provide. Once SQL Server is running as a container, you can update the database by connecting through any regular SQL connection, such as from SQL Server Management Studio, Visual Studio, or C# code.
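For example (a minimal sketch, not code from eShopOnContainers), a console check that connects to the containerized SQL Server through the host port mapped in the docker-compose.yml example above could look like the following; it assumes the System.Data.SqlClient package and connects to the master database to avoid assuming any application database name.

using System;
using System.Data.SqlClient;

class SqlContainerSmokeTest
{
    static void Main()
    {
        // Host port 5434 maps to the container's 1433, per the "5434:1433" setting above.
        var connectionString =
            "Server=localhost,5434;Database=master;User Id=sa;Password=Pass@word;";

        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            using (var command = new SqlCommand("SELECT @@VERSION", connection))
            {
                // Prints the SQL Server version reported by the container.
                Console.WriteLine(command.ExecuteScalar());
            }
        }
    }
}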

The eShopOnContainers application initializes each microservice database with sample data by seeding it with data on startup, as explained in the following section.

Having SQL Server running as a container is not just useful for a demo where you might not have access to an instance of SQL Server. As noted, it is also great for development and testing environments so that you can easily run integration tests starting from a clean SQL Server image and known data by seeding new sample data.

Additional resources

Seeding with test data on Web application startup

To add data to the database when the application starts up, you can add code like the following to the Configure method in the Startup class of the Web API project:

public class Startup
{
    // Other Startup code...

    public void Configure(IApplicationBuilder app,
        IHostingEnvironment env,
        ILoggerFactory loggerFactory)
    {
        // Other Configure code...

        // Seed data through our custom class
        CatalogContextSeed.SeedAsync(app)
            .Wait();

        // Other Configure code...
    }
}

 

The following code in the custom CatalogContextSeed class populates the data.

public class CatalogContextSeed
{
    public static async Task SeedAsync(IApplicationBuilder applicationBuilder)
    {
        var context = (CatalogContext)applicationBuilder
            .ApplicationServices.GetService(typeof(CatalogContext));
        using (context)
        {
            context.Database.Migrate();

            if (!context.CatalogBrands.Any())
            {
                context.CatalogBrands.AddRange(
                    GetPreconfiguredCatalogBrands());

                await context.SaveChangesAsync();
            }

            if (!context.CatalogTypes.Any())
            {
                context.CatalogTypes.AddRange(
                    GetPreconfiguredCatalogTypes());

                await context.SaveChangesAsync();
            }
        }
    }

    static IEnumerable<CatalogBrand> GetPreconfiguredCatalogBrands()
    {
        return new List<CatalogBrand>()
        {
            new CatalogBrand() { Brand = "Azure" },
            new CatalogBrand() { Brand = ".NET" },
            new CatalogBrand() { Brand = "Visual Studio" },
            new CatalogBrand() { Brand = "SQL Server" }
        };
    }

    static IEnumerable<CatalogType> GetPreconfiguredCatalogTypes()
    {
        return new List<CatalogType>()
        {
            new CatalogType() { Type = "Mug" },
            new CatalogType() { Type = "T-Shirt" },
            new CatalogType() { Type = "Backpack" },
            new CatalogType() { Type = "USB Memory Stick" }
        };
    }
}

When you run integration tests, having a way to generate data consistent with your integration tests is useful. Being able to create everything from scratch, including an instance of SQL Server running on a container, is great for test environments.

EF Core InMemory database versus SQL Server running as a container

Another good choice when running tests is to use the Entity Framework InMemory database provider. You can specify that configuration in the ConfigureServices method of the Startup class in your Web API project:

public class Startup
{
    // Other Startup code ...
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddSingleton<IConfiguration>(Configuration);

        // DbContext using an InMemory database provider
        services.AddDbContext<CatalogContext>(opt => opt.UseInMemoryDatabase());

        // Alternative: DbContext using a SQL Server provider
        //services.AddDbContext<CatalogContext>(c =>
        //{
        //    c.UseSqlServer(Configuration["ConnectionString"]);
        //});
    }

    // Other Startup code ...
}

 

There is an important catch, though. The in-memory database does not support many constraints that are specific to a particular database. For instance, you might add a unique index on a column in your EF Core model and write a test to verify that it does not let you add a duplicate value. But the in-memory database does not enforce unique indexes, so the test would pass even though a real database would reject the duplicate. In other words, the in-memory database does not behave exactly the same as a real SQL Server database—it does not emulate database-specific constraints.

Even so, an in-memory database is still useful for testing and prototyping. But if you want to create accurate integration tests that take into account the behavior of a specific database implementation, you need to use a real database like SQL Server. For that purpose, running SQL Server in a container is a great choice and more accurate than the EF Core InMemory database provider.
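To make the comparison concrete, the following is a minimal sketch of a test that uses the InMemory provider directly (assuming EF Core 2.x, where UseInMemoryDatabase requires a name, and assuming CatalogContext has the usual constructor that accepts DbContextOptions&lt;CatalogContext&gt;); the database name is arbitrary and only isolates the test.

using System.Linq;
using Microsoft.EntityFrameworkCore;
using Xunit;

public class CatalogContextInMemoryTests
{
    [Fact]
    public void Adds_a_catalog_brand()
    {
        // Arbitrary name; each distinct name gets its own in-memory store.
        var options = new DbContextOptionsBuilder<CatalogContext>()
            .UseInMemoryDatabase("catalog_test_db")
            .Options;

        using (var context = new CatalogContext(options))
        {
            context.CatalogBrands.Add(new CatalogBrand() { Brand = ".NET" });
            context.SaveChanges();
        }

        // A second context over the same named store sees the saved data.
        using (var context = new CatalogContext(options))
        {
            Assert.Equal(1, context.CatalogBrands.Count());
        }
    }
}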

Using a Redis cache service running in a container

You can run Redis on a container, especially for development and testing and for proof-of-concept scenarios. This scenario is convenient, because you can have all your dependencies running on containers—not just for your local development machines, but for your testing environments in your CI/CD pipelines.

However, when you run Redis in production, it is better to look for a high-availability solution like Azure Redis Cache, which runs as a PaaS (Platform as a Service) offering. In your code, you just need to change your connection strings.

Redis provides an official Docker image, which is available from Docker Hub at this URL:

https://hub.docker.com/_/redis/

You can directly run a Docker Redis container by executing the following Docker CLI command in your command prompt:

docker run --name some-redis -d redis

 

The Redis image includes expose:6379 (the port used by Redis), so standard container linking will make it automatically available to the linked containers.

In eShopOnContainers, the basket.api microservice uses a Redis cache running as a container. That basket.data container is defined as part of the multi-container docker-compose.yml file, as shown in the following example:

# docker-compose.yml file
# ...
  basket.data:
    image: redis
    expose:
      - "6379"

 

This code in the docker-compose.yml file defines a container named basket.data based on the redis image, with port 6379 exposed internally. This means the port is accessible only to other containers running within the Docker host.

Finally, in the docker-compose.override.yml file, the basket.api microservice for the eShopOnContainers sample defines the connection string to use for that Redis container:

  basket.api:
    environment:
      # Other data ...
      - ConnectionString=basket.data
      - EventBusConnection=rabbitmq
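To illustrate how that ConnectionString value could be consumed, the following is a rough sketch (not the actual basket.api code) that uses the StackExchange.Redis client; the host name basket.data resolves to the Redis container on the Docker network.

using Microsoft.Extensions.Configuration;
using StackExchange.Redis;

public static class RedisConnectionFactory
{
    public static ConnectionMultiplexer Connect(IConfiguration configuration)
    {
        // "ConnectionString" is the environment variable set in docker-compose.override.yml
        var options = ConfigurationOptions.Parse(configuration["ConnectionString"], true);
        options.ResolveDns = true; // resolve the basket.data service name

        return ConnectionMultiplexer.Connect(options);
    }
}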

Implementing event-based communication between microservices (integration events)

As described earlier, when you use event-based communication, a microservice publishes an event when something notable happens, such as when it updates a business entity. Other microservices subscribe to those events. When a microservice receives an event, it can update its own business entities, which might lead to more events being published. This is the essence of the eventual consistency concept. This publish/subscribe system is usually performed by using an implementation of an event bus. The event bus can be designed as an interface with the API needed to subscribe and unsubscribe to events and to publish events. It can also have one or more implementations based on any inter-process or messaging communication, such as a messaging queue or a service bus that supports asynchronous communication and a publish/subscribe model.

You can use events to implement business transactions that span multiple services, which gives you eventual consistency between those services. An eventually consistent transaction consists of a series of distributed actions. At each action, the microservice updates a business entity and publishes an event that triggers the next action.


Figure 6-18. Event-driven communication based on an event bus

This section describes how you can implement this type of communication with .NET by using a generic event bus interface, as shown in Figure 6-18. There are multiple potential implementations, each using a different technology or infrastructure such as RabbitMQ, Azure Service Bus, or any other third-party open-source or commercial service bus.

Using message brokers and service buses for production systems

As noted in the architecture section, you can choose from multiple messaging technologies for implementing your abstract event bus. But these technologies are at different levels. For instance, RabbitMQ, a messaging broker transport, is at a lower level than commercial products like Azure Service Bus, NServiceBus, MassTransit, or Brighter. Most of these products can work on top of either RabbitMQ or Azure Service Bus. Your choice of product depends on how many features and how much out-of-the-box scalability you need for your application.

For implementing just an event bus proof-of-concept for your development environment, as in the eShopOnContainers sample, a simple implementation on top of RabbitMQ running as a container might be enough. But for mission-critical and production systems that need high scalability, you might want to evaluate and use Azure Service Bus.

If you require high-level abstractions and richer features like Sagas for long-running processes that make distributed development easier, other commercial and open-source service buses like NServiceBus, MassTransit, and Brighter are worth evaluating. In this case, the abstractions and API to use would usually be the ones provided directly by those high-level service buses, instead of your own abstractions (like the simple event bus abstractions provided in eShopOnContainers). For that matter, you can research the forked eShopOnContainers using NServiceBus (an additional derived sample implemented by Particular Software).

Of course, you could always build your own service bus features on top of lower-level technologies like RabbitMQ and Docker, but the work needed to “reinvent the wheel” might be too costly for a custom enterprise application.

To reiterate: the sample event bus abstractions and implementation showcased in the eShopOnContainers sample are intended to be used only as a proof of concept. Once you have decided that you want to have asynchronous and event-driven communication, as explained in the current section, you should choose the service bus product that best fits your needs for production.

Integration events

Integration events are used for bringing domain state in sync across multiple microservices or external systems. This is done by publishing integration events outside the microservice. When an event is published to multiple receiver microservices (to as many microservices as are subscribed to the integration event), the appropriate event handler in each receiver microservice handles the event.

An integration event is basically a data-holding class, as in the following example:

public class ProductPriceChangedIntegrationEvent : IntegrationEvent
{
    public int ProductId { get; private set; }
    public decimal NewPrice { get; private set; }
    public decimal OldPrice { get; private set; }

    public ProductPriceChangedIntegrationEvent(int productId, decimal newPrice,
        decimal oldPrice)
    {
        ProductId = productId;
        NewPrice = newPrice;
        OldPrice = oldPrice;
    }
}
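The IntegrationEvent base type used above is essentially a small data-holding class that stamps every event with an identity and a creation time. A minimal sketch of that base class follows (the eShopOnContainers version follows the same idea).

using System;

public class IntegrationEvent
{
    public IntegrationEvent()
    {
        Id = Guid.NewGuid();            // lets receivers correlate or deduplicate events
        CreationDate = DateTime.UtcNow; // records when the event was created
    }

    public Guid Id { get; }
    public DateTime CreationDate { get; }
}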

 

The integration events can be defined at the application level of each microservice, so they are decoupled from other microservices, in a way comparable to how ViewModels are defined in the server and client. What is not recommended is sharing a common integration events library across multiple microservices; doing so would couple those microservices to a single event definition data library. You do not want to do that for the same reasons that you do not want to share a common domain model across multiple microservices: microservices must be completely autonomous.

There are only a few kinds of libraries you should share across microservices. One is libraries that are final application blocks, like the Event Bus client API, as in eShopOnContainers. Another is libraries that constitute tools that could also be shared as NuGet components, like JSON serializers.

The event bus

An event bus allows publish/subscribe-style communication between microservices without requiring the components to explicitly be aware of each other, as shown in Figure 6-19.


Figure 6-19. Publish/subscribe basics with an event bus

The event bus is related to the Observer pattern and the publish-subscribe pattern.

Observer pattern

In the Observer pattern, your primary object (known as the Observable) notifies other interested objects (known as Observers) with relevant information (events).

Publish-subscribe (Pub/Sub) pattern

The purpose of the Publish-Subscribe pattern is the same as the Observer pattern: you want to notify other services when certain events take place. But there is an important difference between the Observer and Pub/Sub patterns. In the Observer pattern, the broadcast is performed directly from the observable to the observers, so they “know” each other. But when using a Pub/Sub pattern, there is a third component, called the broker, message broker, or event bus, which is known by both the publisher and the subscriber. Therefore, when using the Pub/Sub pattern the publisher and the subscribers are decoupled thanks to that event bus or message broker.

The middleman or event bus

How do you achieve anonymity between publisher and subscriber? An easy way is to let a middleman take care of all the communication. An event bus is one such middleman.

An event bus is typically composed of two parts: the abstraction or interface, and one or more implementations of that abstraction.

In Figure 6-19 you can see how, from an application point of view, the event bus is nothing more than a Pub/Sub channel. The way you implement this asynchronous communication can vary. It can have multiple implementations so that you can swap between them, depending on the environment requirements (for example, production versus development environments).

In Figure 6-20 you can see an abstraction of an event bus with multiple implementations based on infrastructure messaging technologies like RabbitMQ, Azure Service Bus, or another event/message broker.


Figure 6-20. Multiple implementations of an event bus

However, and as mentioned previously, using your own abstractions (the event bus interface) is good only if you need basic event bus features supported by your abstractions. If you need richer service bus features, you should probably use the API and abstractions provided by your preferred commercial service bus instead of your own abstractions.

Defining an event bus interface

Let’s start with some implementation code for the event bus interface and possible implementations for exploration purposes. The interface should be generic and straightforward, as in the following interface.

public interface IEventBus
{
    void Publish(IntegrationEvent @event);

    void Subscribe<T, TH>()
        where T : IntegrationEvent
        where TH : IIntegrationEventHandler<T>;

    void SubscribeDynamic<TH>(string eventName)
        where TH : IDynamicIntegrationEventHandler;

    void UnsubscribeDynamic<TH>(string eventName)
        where TH : IDynamicIntegrationEventHandler;

    void Unsubscribe<T, TH>()
        where TH : IIntegrationEventHandler<T>
        where T : IntegrationEvent;
}

 

The Publish method is straightforward. The event bus will broadcast the integration event passed to it to any microservice, or even an external application, subscribed to that event. This method is used by the microservice that is publishing the event.

The Subscribe methods (you can have several overloads depending on the arguments) are used by the microservices that want to receive events. The method has two generic type parameters. The first is the integration event to subscribe to (a type derived from IntegrationEvent). The second is the integration event handler (or callback method), of type IIntegrationEventHandler&lt;T&gt;, to be executed when the receiver microservice gets that integration event message.
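For reference, the handler side of that contract is a small generic interface. The following sketch mirrors the shape of the eShopOnContainers abstraction; the non-generic marker interface exists only so handlers for different event types can be stored in the same collection.

using System.Threading.Tasks;

// Marker interface so handlers of different event types can share a collection.
public interface IIntegrationEventHandler
{
}

public interface IIntegrationEventHandler<in TIntegrationEvent> : IIntegrationEventHandler
    where TIntegrationEvent : IntegrationEvent
{
    // Invoked by the event bus when an event of type TIntegrationEvent arrives.
    Task Handle(TIntegrationEvent @event);
}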

Implementing an event bus with RabbitMQ for the development or test environment

We should start by saying that if you create your custom event bus based on RabbitMQ running in a container, as the eShopOnContainers application does, it should be used only for your development and test environments. You should not use it for your production environment, unless you are building it as a part of a production-ready service bus. A simple custom event bus might be missing many production-ready critical features that a commercial service bus has.

One of the custom event bus implementations in eShopOnContainers is basically a library that uses the RabbitMQ API. (There is another implementation based on Azure Service Bus.)

The event bus implementation with RabbitMQ lets microservices subscribe to events, publish events, and receive events, as shown in Figure 6-21.


Figure 6-21. RabbitMQ implementation of an event bus

In the code, the EventBusRabbitMQ class implements the generic IEventBus interface. This is based on Dependency Injection so that you can swap from this dev/test version to a production version.

public class EventBusRabbitMQ : IEventBus, IDisposable
{
    // Implementation using RabbitMQ API
    //...
}

 

The RabbitMQ implementation of a sample dev/test event bus is boilerplate code. It has to handle the connection to the RabbitMQ server and provide code for publishing a message event to the queues. It also has to implement a dictionary of collections of integration event handlers for each event type; these event types can have a different instantiation and different subscriptions for each receiver microservice, as shown in Figure 6-21.
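A simple way to picture that dictionary of handlers is the following sketch; it is not the actual eShopOnContainers subscriptions manager, just the core idea of keying handler types by event name.

using System;
using System.Collections.Generic;

public class InMemorySubscriptions
{
    // Event name -> handler types registered for that event.
    private readonly Dictionary<string, List<Type>> _handlers =
        new Dictionary<string, List<Type>>();

    public void AddSubscription<T, TH>()
        where T : IntegrationEvent
        where TH : IIntegrationEventHandler<T>
    {
        var eventName = typeof(T).Name;
        if (!_handlers.ContainsKey(eventName))
        {
            _handlers[eventName] = new List<Type>();
        }
        _handlers[eventName].Add(typeof(TH));
    }

    public bool HasSubscriptionsForEvent(string eventName) =>
        _handlers.ContainsKey(eventName);

    public IEnumerable<Type> GetHandlersForEvent(string eventName) =>
        _handlers[eventName];
}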

Implementing a simple publish method with RabbitMQ

The following code is part of a simplified event bus implementation for RabbitMQ; the actual code in eShopOnContainers improves on it. You usually do not need to write this code yourself unless you are making improvements. The code gets a connection and channel to RabbitMQ, creates a message, and then publishes the message to the queue.

public class EventBusRabbitMQ : IEventBus, IDisposable
{
    // Member objects and other methods ...
    // ...

    public void Publish(IntegrationEvent @event)
    {
        var eventName = @event.GetType().Name;
        var factory = new ConnectionFactory() { HostName = _connectionString };
        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            channel.ExchangeDeclare(exchange: _brokerName, type: "direct");

            string message = JsonConvert.SerializeObject(@event);
            var body = Encoding.UTF8.GetBytes(message);

            channel.BasicPublish(exchange: _brokerName,
                                 routingKey: eventName,
                                 basicProperties: null,
                                 body: body);
        }
    }
}

 

The actual code of the Publish method in the eShopOnContainers application is improved by using a Polly retry policy, which retries the task a certain number of times in case the RabbitMQ container is not ready. This can occur when docker-compose is starting the containers; for example, the RabbitMQ container might start more slowly than the other containers.
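As a rough sketch of what such a policy looks like with Polly (the retry count and exception types here are illustrative, not the exact eShopOnContainers configuration):

using System;
using System.Net.Sockets;
using Polly;
using RabbitMQ.Client.Exceptions;

public static class PublishRetryPolicy
{
    // Retry with exponential back-off when RabbitMQ is not reachable yet,
    // for example while docker-compose is still starting the broker container.
    public static Policy Create(int retryCount = 5)
    {
        return Policy.Handle<BrokerUnreachableException>()
            .Or<SocketException>()
            .WaitAndRetry(retryCount,
                retryAttempt => TimeSpan.FromSeconds(Math.Pow(2, retryAttempt)));
    }
}

// Usage inside Publish:
// PublishRetryPolicy.Create().Execute(() =>
//     channel.BasicPublish(exchange: _brokerName,
//                          routingKey: eventName,
//                          basicProperties: null,
//                          body: body));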

As mentioned earlier, there are many possible configurations in RabbitMQ, so this code should be used only for dev/test environments.

Implementing the subscription code with the RabbitMQ API

As with the publish code, the following code is a simplification of part of the event bus implementation for RabbitMQ. Again, you usually do not need to change it unless you are improving it.

public class EventBusRabbitMQ : IEventBus, IDisposable
{
    // Member objects and other methods ...
    // ...

    public void Subscribe<T, TH>()
        where T : IntegrationEvent
        where TH : IIntegrationEventHandler<T>
    {
        var eventName = _subsManager.GetEventKey<T>();
        var containsKey = _subsManager.HasSubscriptionsForEvent(eventName);
        if (!containsKey)
        {
            if (!_persistentConnection.IsConnected)
            {
                _persistentConnection.TryConnect();
            }

            using (var channel = _persistentConnection.CreateModel())
            {
                channel.QueueBind(queue: _queueName,
                                  exchange: BROKER_NAME,
                                  routingKey: eventName);
            }
        }

        _subsManager.AddSubscription<T, TH>();
    }
}

Each event type has a related channel to get events from RabbitMQ. You can then have as many event handlers per channel and event type as needed.

The Subscribe method takes two type parameters: an IIntegrationEventHandler type, which is like a callback method in the current microservice, plus its related IntegrationEvent type. The code then adds that event handler to the list of event handlers that each integration event type can have per client microservice. If the client code has not already been subscribed to the event, the code creates a channel for the event type so it can receive events in a push style from RabbitMQ when that event is published from any other service.
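On the receiving side, the push-style consumption looks roughly like the following sketch with the RabbitMQ .NET client (this assumes a client version where the message body is exposed as a byte array; ProcessEvent is a hypothetical method that dispatches to the registered handlers).

using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

public static class ConsumerSketch
{
    public static void StartConsuming(IModel channel, string queueName)
    {
        var consumer = new EventingBasicConsumer(channel);
        consumer.Received += (sender, eventArgs) =>
        {
            // The routing key carries the event type name used at publish time.
            var eventName = eventArgs.RoutingKey;
            var message = Encoding.UTF8.GetString(eventArgs.Body);
            ProcessEvent(eventName, message);
        };

        channel.BasicConsume(queue: queueName, autoAck: true, consumer: consumer);
    }

    static void ProcessEvent(string eventName, string message)
    {
        // Deserialize the message and invoke the handlers subscribed to eventName (omitted).
    }
}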

Subscribing to events

The first step for using the event bus is to subscribe the microservices to the events they want to receive. That should be done in the receiver microservices.

The following simple code shows what each receiver microservice needs to implement when starting the service (that is, in the Startup class) so it subscribes to the events it needs. In this case, the basket.api microservice needs to subscribe to ProductPriceChangedIntegrationEvent and the OrderStartedIntegrationEvent messages.

For instance, when subscribing to the ProductPriceChangedIntegrationEvent event, that makes the basket microservice aware of any changes to the product price and lets it warn the user about the change if that product is in the user’s basket.

var eventBus = app.ApplicationServices.GetRequiredService<IEventBus>();

eventBus.Subscribe<ProductPriceChangedIntegrationEvent,
    ProductPriceChangedIntegrationEventHandler>();
eventBus.Subscribe<OrderStartedIntegrationEvent,
    OrderStartedIntegrationEventHandler>();

 

After this code runs, the subscriber microservice will be listening through RabbitMQ channels. When any message of type ProductPriceChangedIntegrationEvent arrives, the code invokes the event handler that is passed to it and processes the event.

Publishing events through the event bus

Finally, the message sender (origin microservice) publishes the integration events with code similar to the following example. (This is a simplified example that does not take atomicity into account.) You would implement similar code whenever an event must be propagated across multiple microservices, usually right after committing data or transactions from the origin microservice.

First, the event bus implementation object (based on RabbitMQ or based on a service bus) would be injected at the controller constructor, as in the following code:

[Route("api/v1/[controller]")]
public class CatalogController : ControllerBase
{
    private readonly CatalogContext _context;
    private readonly IOptionsSnapshot<Settings> _settings;
    private readonly IEventBus _eventBus;

    public CatalogController(CatalogContext context,
        IOptionsSnapshot<Settings> settings,
        IEventBus eventBus)
    {
        _context = context;
        _settings = settings;
        _eventBus = eventBus;
        // ...
    }

 

Then you use it from your controller’s methods, like in the UpdateProduct method:

[Route("update")]
[HttpPost]
public async Task<IActionResult> UpdateProduct([FromBody]CatalogItem product)
{
    var item = await _context.CatalogItems.SingleOrDefaultAsync(
        i => i.Id == product.Id);
    // ...
    if (item.Price != product.Price)
    {
        var oldPrice = item.Price;
        item.Price = product.Price;
        _context.CatalogItems.Update(item);

        var @event = new ProductPriceChangedIntegrationEvent(item.Id,
                                                             item.Price,
                                                             oldPrice);
        // Commit changes in original transaction
        await _context.SaveChangesAsync();
        // Publish integration event to the event bus
        // (RabbitMQ or a service bus underneath)
        _eventBus.Publish(@event);
        // ...

In this case, since the origin microservice is a simple CRUD microservice, that code is placed right into a Web API controller.

In more advanced microservices, like when using CQRS approaches, it can be implemented in the CommandHandler class, within the Handle() method.

Designing atomicity and resiliency when publishing to the event bus

When you publish integration events through a distributed messaging system like your event bus, you have the problem of atomically updating the original database and publishing an event (that is, either both operations complete or neither does). For instance, in the simplified example shown earlier, the code commits data to the database when the product price is changed and then publishes a ProductPriceChangedIntegrationEvent message. Initially, it might look essential that these two operations be performed atomically. However, achieving that with a distributed transaction involving the database and the message broker, as older systems based on Microsoft Message Queuing (MSMQ) did, is not recommended for the reasons described by the CAP theorem.

Basically, you use microservices to build scalable and highly available systems. Simplifying somewhat, the CAP theorem says that you cannot build a (distributed) database (or a microservice that owns its model) that is continually available, strongly consistent, and tolerant to any partition. You must choose two of these three properties.

In microservices-based architectures, you should choose availability and partition tolerance, and you should deemphasize strong consistency. Therefore, in most modern microservice-based applications, you usually do not want to use distributed transactions in messaging, as you do when you implement distributed transactions based on the Windows Distributed Transaction Coordinator (DTC) with MSMQ.

Let’s go back to the initial issue and its example. If the service crashes after the database is updated (in this case, right after the line of code with _context.SaveChangesAsync()), but before the integration event is published, the overall system could become inconsistent. This might be business critical, depending on the specific business operation you are dealing with.

As mentioned earlier in the architecture section, you can have several approaches for dealing with this issue:
- Using the full Event Sourcing (ES) pattern.
- Using transaction log mining.
- Using a transactional outbox: a database table that stores the integration events as part of the local transaction, so they can be published afterwards.

For this scenario, using the full Event Sourcing (ES) pattern is one of the best approaches, if not the best. However, in many application scenarios, you might not be able to implement a full ES system. ES means storing only domain events in your transactional database, instead of storing current state data. Storing only domain events can have great benefits, such as having the history of your system available and being able to determine the state of your system at any moment in the past. However, implementing a full ES system requires you to rearchitect most of your system and introduces many other complexities and requirements. For example, you would want to use a database specifically made for event sourcing, such as Event Store, or a document-oriented database such as Azure Cosmos DB, MongoDB, Cassandra, CouchDB, or RavenDB. ES is a great approach for this problem, but not the easiest solution unless you are already familiar with event sourcing.

The option to use transaction log mining initially looks very transparent. However, to use this approach, the microservice has to be coupled to your RDBMS transaction log, such as the SQL Server transaction log. This is probably not desirable. Another drawback is that the low-level updates recorded in the transaction log might not be at the same level as your high-level integration events. If so, the process of reverse-engineering those transaction log operations can be difficult.

A balanced approach is a mix of a transactional database table and a simplified ES pattern. You can use a state such as “ready to publish the event,” which you set in the original event when you commit it to the integration events table. You then try to publish the event to the event bus. If the publish-event action succeeds, you start another transaction in the origin service and move the state from “ready to publish the event” to “event already published.”

If the publish-event action in the event bus fails, the data still will not be inconsistent within the origin microservice—it is still marked as “ready to publish the event,” and with respect to the rest of the services, it will eventually be consistent. You can always have background jobs checking the state of the transactions or integration events. If the job finds an event in the “ready to publish the event” state, it can try to republish that event to the event bus.

Notice that with this approach, you are persisting only the integration events for each origin microservice, and only the events that you want to communicate to other microservices or external systems. In contrast, in a full ES system, you store all domain events as well.

Therefore, this balanced approach is a simplified ES system. You need a list of integration events with their current state (“ready to publish” versus “published”). But you only need to implement these states for the integration events. And in this approach, you do not need to store all your domain data as events in the transactional database, as you would in a full ES system.

If you are already using a relational database, you can use a transactional table to store integration events. To achieve atomicity in your application, you use a two-step process based on local transactions. Basically, you have an IntegrationEvent table in the same database where you have your domain entities. That table works as an insurance for achieving atomicity so that you include persisted integration events into the same transactions that are committing your domain data.

Step by step, the process goes like this: the application begins a local database transaction. It then updates the state of your domain entities and inserts an event into the integration event table. Finally, it commits the transaction. You get the desired atomicity.

When implementing the steps of publishing the events, you have these choices:
- Publish the integration event right after committing the transaction, and use another local transaction to mark the event in the table as published. You then use the table just as an artifact to track the integration events in case of issues in the remote microservices, and to perform compensatory actions based on the stored integration events.
- Use the table as a kind of queue. A separate application thread or process queries the integration event table, publishes the events to the event bus, and then uses a local transaction to mark the events as published.

Figure 6-22 shows the architecture for the first of these approaches.


Figure 6-22. Atomicity when publishing events to the event bus

The approach illustrated in Figure 6-22 is missing an additional worker microservice that is in charge of checking and confirming the success of the published integration events. In case of failure, that additional checker worker microservice can read events from the table and republish them, that is, repeat step number 2.

About the second approach: you use the EventLog table as a queue and always use a worker microservice to publish the messages. In that case, the process is like that shown in Figure 6-23. This shows an additional microservice, and the table is the single source when publishing events.


Figure 6-23. Atomicity when publishing events to the event bus with a worker microservice

For simplicity, the eShopOnContainers sample uses the first approach (with no additional processes or checker microservices) plus the event bus. However, eShopOnContainers does not handle all possible failure cases. In a real application deployed to the cloud, you must embrace the fact that issues will arise eventually, and you must implement that check and resend logic. Using the table as a queue can be more effective than the first approach if you have that table as the single source of events when publishing them (with the worker) through the event bus.

Implementing atomicity when publishing integration events through the event bus

The following code shows how you can create a single transaction involving multiple DbContext objects—one context related to the original data being updated, and the second context related to the IntegrationEventLog table.

Note that the transaction in the example code below will not be resilient if connections to the database have any issue at the time when the code is running. This can happen in cloud-based systems like Azure SQL DB, which might move databases across servers. For implementing resilient transactions across multiple contexts, see the Implementing resilient Entity Framework Core SQL connections section later in this guide.

For clarity, the following example shows the whole process in a single piece of code. However, the eShopOnContainers implementation is refactored so that this logic is split into multiple classes, making it easier to maintain.

// Update Product from the Catalog microservice
//
public async Task<IActionResult> UpdateProduct([FromBody]CatalogItem productToUpdate)
{
    var catalogItem =
        await _catalogContext.CatalogItems.SingleOrDefaultAsync(i =>
            i.Id == productToUpdate.Id);
    if (catalogItem == null) return NotFound();

    bool raiseProductPriceChangedEvent = false;
    IntegrationEvent priceChangedEvent = null;
    if (catalogItem.Price != productToUpdate.Price)
        raiseProductPriceChangedEvent = true;

    if (raiseProductPriceChangedEvent) // Create event if price has changed
    {
        var oldPrice = catalogItem.Price;
        priceChangedEvent = new ProductPriceChangedIntegrationEvent(catalogItem.Id,
                                                                    productToUpdate.Price,
                                                                    oldPrice);
    }
    // Update current product
    catalogItem = productToUpdate;

    // Just save the updated product if the product's price hasn't changed.
    if (!raiseProductPriceChangedEvent)
    {
        await _catalogContext.SaveChangesAsync();
    }
    else // Publish to event bus only if product price changed
    {
        // Achieving atomicity between original DB and the IntegrationEventLog
        // with a local transaction
        using (var transaction = _catalogContext.Database.BeginTransaction())
        {
            _catalogContext.CatalogItems.Update(catalogItem);
            await _catalogContext.SaveChangesAsync();

            // Save to EventLog only if product price changed
            if (raiseProductPriceChangedEvent)
                await _integrationEventLogService.SaveEventAsync(priceChangedEvent);

            transaction.Commit();
        }

        // Publish the integration event through the event bus
        _eventBus.Publish(priceChangedEvent);

        await _integrationEventLogService.MarkEventAsPublishedAsync(priceChangedEvent);
    }

    return Ok();
}

After the ProductPriceChangedIntegrationEvent integration event is created, the transaction that stores the original domain operation (update the catalog item) also includes the persistence of the event in the EventLog table. This makes it a single transaction, and you will always be able to check whether event messages were sent.

The event log table is updated atomically with the original database operation, using a local transaction against the same database. If any of the operations fail, an exception is thrown and the transaction rolls back any completed operation, thus maintaining consistency between the domain operations and the event messages saved to the table.

Receiving messages from subscriptions: event handlers in receiver microservices

In addition to the event subscription logic, you need to implement the internal code for the integration event handlers (like a callback method). The event handler is where you receive and process the event messages of a specific type.

An event handler first receives an event instance from the event bus. Then it locates the component to be processed related to that integration event, propagating and persisting the event as a change in state in the receiver microservice. For example, if a ProductPriceChanged event originates in the catalog microservice, it is handled in the basket microservice and changes the state in this receiver basket microservice as well, as shown in the following code.

//
// Integration-event handler
//
namespace Microsoft.eShopOnContainers.Services.Basket.API.IntegrationEvents.EventHandling
{
    public class ProductPriceChangedIntegrationEventHandler :
        IIntegrationEventHandler<ProductPriceChangedIntegrationEvent>
    {
        private readonly IBasketRepository _repository;

        public ProductPriceChangedIntegrationEventHandler(
            IBasketRepository repository)
        {
            _repository = repository ??
                throw new ArgumentNullException(nameof(repository));
        }

        public async Task Handle(ProductPriceChangedIntegrationEvent @event)
        {
            var userIds = await _repository.GetUsers();
            foreach (var id in userIds)
            {
                var basket = await _repository.GetBasket(id);
                await UpdatePriceInBasketItems(@event.ProductId, @event.NewPrice,
                    basket);
            }
        }

        private async Task UpdatePriceInBasketItems(int productId, decimal newPrice,
            CustomerBasket basket)
        {
            var itemsToUpdate = basket?.Items?
                .Where(x => int.Parse(x.ProductId) == productId).ToList();

            if (itemsToUpdate != null)
            {
                foreach (var item in itemsToUpdate)
                {
                    if (item.UnitPrice != newPrice)
                    {
                        var originalPrice = item.UnitPrice;
                        item.UnitPrice = newPrice;
                        item.OldUnitPrice = originalPrice;
                    }
                }

                await _repository.UpdateBasket(basket);
            }
        }
    }
}

The event handler needs to verify whether the product exists in any of the basket instances. It also updates the item price for each related basket line item. Finally, it creates an alert to be displayed to the user about the price change, as shown in Figure 6-24.


Figure 6-24. Displaying an item price change in a basket, as communicated by integration events

Idempotency in update message events

An important aspect of update message events is that a failure at any point in the communication should cause the message to be retried. Otherwise a background task might try to publish an event that has already been published, creating a race condition. You need to make sure that the updates are either idempotent or that they provide enough information to ensure that you can detect a duplicate, discard it, and send back only one response.

As noted earlier, idempotency means that an operation can be performed multiple times without changing the result. In a messaging environment, as when communicating events, an event is idempotent if it can be delivered multiple times without changing the result for the receiver microservice. This may be necessary because of the nature of the event itself, or because of the way the system handles the event. Message idempotency is important in any application that uses messaging, not just in applications that implement the event bus pattern.

An example of an idempotent operation is a SQL statement that inserts data into a table only if that data is not already in the table. It does not matter how many times you run that insert SQL statement; the result will be the same—the table will contain that data. Idempotency like this can also be necessary when dealing with messages if the messages could potentially be sent and therefore processed more than once. For instance, if retry logic causes a sender to send exactly the same message more than once, you need to make sure that it is idempotent.

It is possible to design idempotent messages. For example, you can create an event that says "set the product price to $25" instead of "add $5 to the product price." You could safely process the first message any number of times and the result will be the same. That is not true for the second message. But even in the first case, you might not want to process the first event, because the system could also have sent a newer price-change event and you would be overwriting the new price.

Another example might be an order-completed event being propagated to multiple subscribers. It is important that order information be updated in other systems just once, even if there are duplicated message events for the same order-completed event.

It is convenient to have some kind of identity per event so that you can create logic that enforces that each event is processed only once per receiver.
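The Id carried by the IntegrationEvent base class (shown earlier) is what enables that logic. A hypothetical sketch of such a check follows; the store of processed Ids is an assumption for illustration, not part of eShopOnContainers, and a real implementation would persist it rather than keep it in memory.

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

public class DeduplicatingDispatcher
{
    // Remembers which event Ids were already handled by this receiver.
    private readonly ConcurrentDictionary<Guid, bool> _processedEvents =
        new ConcurrentDictionary<Guid, bool>();

    public async Task HandleOnceAsync(IntegrationEvent @event, Func<Task> handle)
    {
        // TryAdd returns false if the Id was seen before, so the duplicate is skipped.
        if (!_processedEvents.TryAdd(@event.Id, true))
        {
            return;
        }

        await handle();
    }
}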

Some message processing is inherently idempotent. For example, if a system generates image thumbnails, it might not matter how many times the message about the generated thumbnail is processed; the outcome is that the thumbnails are generated and they are the same every time. On the other hand, operations such as calling a payment gateway to charge a credit card may not be idempotent at all. In these cases, you need to ensure that processing a message multiple times has the effect that you expect.

Additional resources

Deduplicating integration event messages

You can make sure that message events are sent and processed just once per subscriber at different levels. One way is to use a deduplication feature offered by the messaging infrastructure you are using. Another is to implement custom logic in your destination microservice. Having validations at both the transport level and the application level is your best bet.

Deduplicating message events at the EventHandler level

One way to make sure that an event is processed just once by any receiver is by implementing certain logic when processing the message events in event handlers. For example, that is the approach used in the eShopOnContainers application, as you can see in the source code of the UserCheckoutAcceptedIntegrationEventHandler class when it receives an integration event. (In this case, the CreateOrderCommand is wrapped with an IdentifiedCommand, using the eventMsg.RequestId as an identifier, before it is sent to the command handler.)

Deduplicating messages when using RabbitMQ

When intermittent network failures happen, messages can be duplicated, and the message receiver must be ready to handle these duplicated messages. If possible, receivers should handle messages in an idempotent way, which is better than explicitly handling them with deduplication.

According to the RabbitMQ documentation, “If a message is delivered to a consumer and then requeued (because it was not acknowledged before the consumer connection dropped, for example) then RabbitMQ will set the redelivered flag on it when it is delivered again (whether to the same consumer or a different one).”

If the “redelivered” flag is set, the receiver must take that into account, because the message might already have been processed. But that is not guaranteed; the message might never have reached the receiver after it left the message broker, perhaps because of network issues. On the other hand, if the “redelivered” flag is not set, it is guaranteed that the message has not been sent more than once. Therefore, the receiver needs to deduplicate messages or process messages in an idempotent way only if the “redelivered” flag is set in the message.
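In terms of the RabbitMQ .NET client, that flag is exposed on the delivery arguments, so the consumer callback can branch on it, as in this sketch (the duplicate check itself is application specific and appears here only as a hypothetical placeholder).

using System.Text;
using RabbitMQ.Client.Events;

public static class RedeliveryAwareConsumer
{
    public static void OnReceived(object sender, BasicDeliverEventArgs eventArgs)
    {
        var message = Encoding.UTF8.GetString(eventArgs.Body);

        // Only pay the deduplication cost when RabbitMQ marks the message as redelivered.
        if (eventArgs.Redelivered && AlreadyProcessed(message))
        {
            return; // duplicate, skip it
        }

        Process(message);
    }

    // Application-specific check, for example against a store of processed event Ids.
    static bool AlreadyProcessed(string message) => false;

    static void Process(string message)
    {
        // Dispatch to the appropriate integration event handler (omitted).
    }
}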

Additional resources

http://go.particular.net/eShopOnContainers

http://soapatterns.org/design_patterns/event_driven_messaging

https://jimmybogard.com/refactoring-towards-resilience-evaluating-coupling/

http://www.enterpriseintegrationpatterns.com/patterns/messaging/PublishSubscribeChannel.html

https://msdn.microsoft.com/library/jj591572.aspx

https://en.wikipedia.org/wiki/Eventual_consistency

http://culttt.com/2014/11/26/strategies-integrating-bounded-contexts/

https://www.infoq.com/articles/microservices-aggregates-events-cqrs-part-2-richardson

http://microservices.io/patterns/data/event-sourcing.html

https://msdn.microsoft.com/library/jj591559.aspx

https://geteventstore.com/

https://dzone.com/articles/event-driven-data-management-for-microservices-1

https://en.wikipedia.org/wiki/CAP_theorem

https://www.quora.com/What-Is-CAP-Theorem-1

https://msdn.microsoft.com/library/dn589800.aspx

https://blogs.msdn.microsoft.com/rickatmicrosoft/2013/01/03/the-cap-theorem-why-everything-is-different-with-the-cloud-and-internet/

https://www.infoq.com/articles/cap-twelve-years-later-how-the-rules-have-changed

https://code.msdn.microsoft.com/Brokered-Messaging-c0acea25

https://www.rabbitmq.com/reliability.html#consumer

Testing ASP.NET Core services and web apps

Controllers are a central part of any ASP.NET Core API service and ASP.NET MVC Web application. As such, you should have confidence they behave as intended for your application. Automated tests can provide you with this confidence and can detect errors before they reach production.

You need to test how the controller behaves based on valid or invalid inputs, and test controller responses based on the result of the business operation it performs. However, you should have these types of tests for your microservices:
- Unit tests, which ensure that individual components of the application work as expected.
- Integration tests, which ensure that component interactions work as expected against external artifacts like databases.
- Functional tests for each microservice, which ensure that the application works as expected from the user's perspective.
- Service tests, which ensure that end-to-end service use cases, including multiple services operating at the same time, work as expected. For this type of testing, you need to prepare the environment first, which means starting the services (for example, by using docker-compose up).

Implementing unit tests for ASP.NET Core Web APIs

Unit testing involves testing a part of an application in isolation from its infrastructure and dependencies. When you unit test controller logic, only the content of a single action or method is tested, not the behavior of its dependencies or of the framework itself. Unit tests do not detect issues in the interaction between components—that is the purpose of integration testing.

As you unit test your controller actions, make sure you focus only on their behavior. A controller unit test avoids things like filters, routing, or model binding (the mapping of request data to a ViewModel or DTO). Because they focus on testing just one thing, unit tests are generally simple to write and quick to run. A well-written set of unit tests can be run frequently without much overhead.

Unit tests are implemented based on test frameworks like xUnit.net, MSTest, or NUnit, often combined with a mocking library like Moq. For the eShopOnContainers sample application, we are using xUnit.

When you write a unit test for a Web API controller, you instantiate the controller class directly using the new keyword in C#, so that the test will run as fast as possible. The following example shows how to do this when using xUnit as the test framework.

 

[Fact]
public async Task Get_order_detail_success()
{
    // Arrange
    var fakeOrderId = "12";
    var fakeOrder = GetFakeOrder();
    //...

    // Act
    var orderController = new OrderController(
        _orderServiceMock.Object,
        _basketServiceMock.Object,
        _identityParserMock.Object);

    orderController.ControllerContext.HttpContext = _contextMock.Object;
    var actionResult = await orderController.Detail(fakeOrderId);

    // Assert
    var viewResult = Assert.IsType<ViewResult>(actionResult);
    Assert.IsAssignableFrom<Order>(viewResult.ViewData.Model);
}

 

Implementing integration and functional tests for each microservice

As noted, integration tests and functional tests have different purposes and goals. However, the way you implement both when testing ASP.NET Core controllers is similar, so in this section we concentrate on integration tests.

Integration testing ensures that an application's components function correctly when assembled. ASP.NET Core supports integration testing using unit test frameworks and a built-in test web host that can be used to handle requests without network overhead.

Unlike unit testing, integration tests frequently involve application infrastructure concerns, such as a database, file system, network resources, or web requests and responses. Unit tests use fakes or mock objects in place of these concerns. But the purpose of integration tests is to confirm that the system works as expected with these systems, so for integration testing you do not use fakes or mock objects. Instead, you include the infrastructure, like database access or service invocation from other services.

Because integration tests exercise larger segments of code than unit tests, and because integration tests rely on infrastructure elements, they tend to be orders of magnitude slower than unit tests. Thus, it is a good idea to limit how many integration tests you write and run.

ASP.NET Core includes a built-in test web host that can be used to handle HTTP requests without network overhead, meaning that you can run those tests faster than when using a real web host. The test web host is available in a NuGet component as Microsoft.AspNetCore.TestHost. It can be added to integration test projects and used to host ASP.NET Core applications.

As you can see in the following code, when you create integration tests for ASP.NET Core controllers, you instantiate the controllers through the test host. This is comparable to an HTTP request, but it runs faster.

public class PrimeWebDefaultRequestShould
{
    private readonly TestServer _server;
    private readonly HttpClient _client;

    public PrimeWebDefaultRequestShould()
    {
        // Arrange
        _server = new TestServer(new WebHostBuilder()
            .UseStartup<Startup>());
        _client = _server.CreateClient();
    }

    [Fact]
    public async Task ReturnHelloWorld()
    {
        // Act
        var response = await _client.GetAsync("/");
        response.EnsureSuccessStatusCode();

        var responseString = await response.Content.ReadAsStringAsync();

        // Assert
        Assert.Equal("Hello World!", responseString);
    }
}

Additional resources

Implementing service tests on a multi-container application

As noted earlier, when you test multi-container applications, all the microservices need to be running within the Docker host or container cluster. End-to-end service tests that include multiple operations involving several microservices require you to deploy and start the whole application in the Docker host by running docker-compose up (or a comparable mechanism if you are using an orchestrator). Once the whole application and all its services are running, you can execute end-to-end integration and functional tests.

There are a few approaches you can use. In the docker-compose.yml file that you use to deploy the application (or similar ones, like docker-compose.ci.build.yml), at the solution level you can expand the entry point to use dotnet test. You can also use another compose file that would run your tests in the image you are targeting. By using another compose file for integration tests that includes your microservices and databases on containers, you can make sure that the related data is always reset to its original state before running the tests.

Once the compose application is up and running, you can take advantage of breakpoints and exceptions if you are running Visual Studio. Or you can run the integration tests automatically in your CI pipeline in Visual Studio Team Services or any other CI/CD system that supports Docker containers.

Implement background tasks in microservices with IHostedService and the BackgroundService class

Background tasks and scheduled jobs are something you might eventually need to implement in a microservice-based application, or in any kind of application. The difference when using a microservices architecture is that you can implement a single microservice process/container for hosting these background tasks, so you can scale it up or down as needed, or you can even make sure that a single instance of that microservice process/container is running.

From a generic point of view, in .NET Core these types of tasks are called hosted services, because they are services or logic that you host within your host/application/microservice. Note that in this case, the hosted service simply means a class with the background task logic.

Since .NET Core 2.0, the framework provides a new interface named IHostedService that helps you easily implement hosted services. The basic idea is that you can register multiple background tasks (hosted services) that run in the background while your web host or host is running, as shown in Figure 6-25.

Image

Figure 6-25. Using IHostedService in a WebHost vs. a Host

Note the difference between WebHost and Host.

A WebHost (base class implementing IWebHost) in ASP.NET Core 2.0 is the infrastructure artifact you use to provide HTTP server features to your process, for instance when you are implementing an MVC web app or a Web API service. It provides all the new infrastructure goodness in ASP.NET Core, enabling you to use dependency injection, insert middleware into the HTTP pipeline, and so on, including precisely the use of these IHostedServices for background tasks.

A Host (base class implementing IHost), however, is something new in .NET Core 2.1. Basically, a Host gives you infrastructure similar to what you get with a WebHost (dependency injection, hosted services, etc.), but in this case you just want a simple and lighter process as the host, with nothing related to MVC, Web API, or HTTP server features.

Therefore, you can choose to either create a specialized host process with IHost to handle the hosted services and nothing else, such as a microservice made just for hosting IHostedServices, or you can alternatively extend an existing ASP.NET Core WebHost, such as an existing ASP.NET Core Web API or MVC app.

Each approach has pros and cons depending on your business and scalability needs. The bottom line is that if your background tasks have nothing to do with HTTP (IWebHost), you should use IHost when it becomes available in .NET Core 2.1.
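As a hedged sketch of the first option, assuming the .NET Core 2.1 generic host APIs (HostBuilder and the AddHostedService extension method) and a hypothetical MyHostedWorker class implementing IHostedService, a process that does nothing but run hosted services could look like the following.

using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

public class Program
{
    public static async Task Main(string[] args)
    {
        // Build a plain Host (no HTTP server) whose only job is running hosted services
        var host = new HostBuilder()
            .ConfigureServices((hostContext, services) =>
            {
                // MyHostedWorker is a hypothetical IHostedService implementation
                services.AddHostedService<MyHostedWorker>();
            })
            .Build();

        // Runs until the process is asked to shut down (for example, Ctrl+C or SIGTERM)
        await host.RunAsync();
    }
}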

Registering hosted services in your WebHost or Host

Let’s drill down further into the IHostedService interface, since its usage is pretty similar in a WebHost and in a Host.

SignalR is one example of an artifact that uses hosted services, but you can also use them for much simpler things, such as a background task that polls a database looking for changes, a scheduled task that periodically updates a cache, or a task that processes messages from a message queue in the background of a web app. You can basically offload any of those actions to a background task based on IHostedService.

The way you add one or more IHostedServices into your WebHost or Host is by registering them through the standard DI (dependency injection) mechanism in an ASP.NET Core WebHost (or in a Host in .NET Core 2.1). Basically, you have to register the hosted services within the familiar ConfigureServices() method of the Startup class, as in the following code from a typical ASP.NET Core WebHost.

public IServiceProvider ConfigureServices(IServiceCollection services)
{
    //Other DI registrations;

    // Register Hosted Services
    services.AddSingleton<IHostedService, GracePeriodManagerService>();
    services.AddSingleton<IHostedService, MyHostedServiceB>();
    services.AddSingleton<IHostedService, MyHostedServiceC>();
    //...
}

In that code, the GracePeriodManagerService hosted service is real code from the Ordering business microservice in eShopOnContainers, while the other two are just additional samples.

The IHostedService background task execution is coordinated with the lifetime of the application (host or microservice, for that matter). You register tasks when the application starts and you have the opportunity to do some graceful action or clean-up when the application is shutting down.

Without using IHostedService, you could always start a background thread to run any task. The difference is precisely at the app’s shutdown time: that thread would simply be killed, without the opportunity to run graceful clean-up actions.

The IHostedService interface

When you register an IHostedService, .NET Core will call the StartAsync() and StopAsync() methods of your IHostedService type during application start and stop respectively. Specifically, start is called after the server has started and IApplicationLifetime.ApplicationStarted is triggered.

The IHostedService interface, as defined in .NET Core, looks like the following.

namespace Microsoft.Extensions.Hosting
{
    //
    // Summary:
    //     Defines methods for objects that are managed by the host.
    public interface IHostedService
    {
        //
        // Summary:
        //     Triggered when the application host is ready to start the service.
        Task StartAsync(CancellationToken cancellationToken);

        //
        // Summary:
        //     Triggered when the application host is performing a graceful shutdown.
        Task StopAsync(CancellationToken cancellationToken);
    }
}

As you can imagine, you can create multiple implementations of IHostedService and register them in the ConfigureServices() method in the DI container, as shown previously. All those hosted services will be started and stopped along with the application/microservice.

As a developer, you are responsible for handling the stopping action of your services when the StopAsync() method is triggered by the host.

Implementing IHostedService with a custom hosted service class deriving from the BackgroundService base class

You could go ahead and create your custom hosted service class from scratch and implement the IHostedService interface, as you need to do when using .NET Core 2.0.
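For example, the following is a minimal sketch (with a hypothetical class name, not code from eShopOnContainers) of a hosted service implemented directly against IHostedService, using a timer to trigger periodic work.

using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

public class TimedCacheRefreshService : IHostedService, IDisposable
{
    private Timer _timer;

    public Task StartAsync(CancellationToken cancellationToken)
    {
        // Run the background work immediately and then every 30 seconds
        _timer = new Timer(DoWork, null, TimeSpan.Zero, TimeSpan.FromSeconds(30));
        return Task.CompletedTask;
    }

    private void DoWork(object state)
    {
        // Refresh a cache, poll a database, etc.
    }

    public Task StopAsync(CancellationToken cancellationToken)
    {
        // Stop the timer so no further callbacks run while shutting down
        _timer?.Change(Timeout.Infinite, 0);
        return Task.CompletedTask;
    }

    public void Dispose()
    {
        _timer?.Dispose();
    }
}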

However, since most background tasks have similar needs with regard to cancellation token management and other typical operations, .NET Core 2.1 will provide a very convenient abstract base class you can derive from, named BackgroundService.

That class provides the main plumbing needed to set up the background task. Note that this class will ship in the .NET Core 2.1 library, so you don’t need to write it yourself.

However, as of the time of this writing, .NET Core 2.1 has not been released. Therefore, in eShopOnContainers, which currently uses .NET Core 2.0, we are just temporarily incorporating that class from the .NET Core 2.1 open-source repo (no proprietary license is needed, just the open-source license), because it is compatible with the current IHostedService interface in .NET Core 2.0. When .NET Core 2.1 is released, you’ll just need to point to the right NuGet package.

The next code is the abstract BackgroundService base class as implemented in .NET Core 2.1.

// Copyright (c) .NET Foundation. Licensed under the Apache License, Version 2.0.
/// <summary>
/// Base class for implementing a long running <see cref="IHostedService"/>.
/// </summary>
public abstract class BackgroundService : IHostedService, IDisposable
{
    private Task _executingTask;
    private readonly CancellationTokenSource _stoppingCts =
        new CancellationTokenSource();

    protected abstract Task ExecuteAsync(CancellationToken stoppingToken);

    public virtual Task StartAsync(CancellationToken cancellationToken)
    {
        // Store the task we're executing
        _executingTask = ExecuteAsync(_stoppingCts.Token);

        // If the task is completed then return it,
        // this will bubble cancellation and failure to the caller
        if (_executingTask.IsCompleted)
        {
            return _executingTask;
        }

        // Otherwise it's running
        return Task.CompletedTask;
    }

    public virtual async Task StopAsync(CancellationToken cancellationToken)
    {
        // Stop called without start
        if (_executingTask == null)
        {
            return;
        }

        try
        {
            // Signal cancellation to the executing method
            _stoppingCts.Cancel();
        }
        finally
        {
            // Wait until the task completes or the stop token triggers
            await Task.WhenAny(_executingTask, Task.Delay(Timeout.Infinite,
                cancellationToken));
        }
    }

    public virtual void Dispose()
    {
        _stoppingCts.Cancel();
    }
}

When deriving from this abstract base class, thanks to that inherited implementation, you just need to implement the ExecuteAsync() method in your own custom hosted service class, as in the following simplified code from eShopOnContainers, which polls a database and publishes integration events into the Event Bus when needed.

public class GracePeriodManagerService : BackgroundService
{
    private readonly ILogger<GracePeriodManagerService> _logger;
    private readonly OrderingBackgroundSettings _settings;

    private readonly IEventBus _eventBus;

    public GracePeriodManagerService(IOptions<OrderingBackgroundSettings> settings,
                                     IEventBus eventBus,
                                     ILogger<GracePeriodManagerService> logger)
    {
        // Constructor's parameter validations...
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        _logger.LogDebug($"GracePeriodManagerService is starting.");

        stoppingToken.Register(() =>
            _logger.LogDebug($"GracePeriod background task is stopping."));

        while (!stoppingToken.IsCancellationRequested)
        {
            _logger.LogDebug($"GracePeriod task doing background work.");

            // This eShopOnContainers method queries a database table
            // and publishes events into the Event Bus (RabbitMQ / ServiceBus)
            CheckConfirmedGracePeriodOrders();

            await Task.Delay(_settings.CheckUpdateTime, stoppingToken);
        }

        _logger.LogDebug($"GracePeriod background task is stopping.");
    }

    // StopAsync must be a public override, since the base class declares it as public virtual
    public override async Task StopAsync(CancellationToken stoppingToken)
    {
        // Run your graceful clean-up actions here

        await base.StopAsync(stoppingToken);
    }
}

In this specific case in eShopOnContainers, the background task executes an application method that queries a database table looking for orders in a specific state and, when applying changes, publishes integration events through the event bus (underneath, it can be using RabbitMQ or Azure Service Bus).

Of course, you could run any other business background task instead.

By default, the cancellation token is set with a 5-second timeout, although you can change that value when building your WebHost by using the UseShutdownTimeout extension of IWebHostBuilder. This means that your service is expected to cancel within 5 seconds; otherwise, it will be killed more abruptly.

The following code changes that time to 10 seconds.

WebHost.CreateDefaultBuilder(args)
    .UseShutdownTimeout(TimeSpan.FromSeconds(10))
    ...
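For context, here is a hedged sketch (assuming the standard ASP.NET Core 2.x Program template and an existing Startup class) showing where that call fits when building the WebHost.

using System;
using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;

public class Program
{
    public static void Main(string[] args)
    {
        BuildWebHost(args).Run();
    }

    public static IWebHost BuildWebHost(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseStartup<Startup>()
            // Give hosted services up to 10 seconds to stop gracefully on shutdown
            .UseShutdownTimeout(TimeSpan.FromSeconds(10))
            .Build();
}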

Summary class diagram

The following image shows a visual summary of the classes and interfaces involved when implementing IHostedServices.

Image

Figure 6-26. Class diagram showing the multiple classes and interfaces related to IHostedService

Deployment considerations and takeaways

It is important to note that the way you deploy your ASP.NET Core WebHost or .NET Core Host might impact the final solution. For instance, if you deploy your WebHost on IIS or a regular Azure App Service, your host can be shut down because of app pool recycles. But if you are deploying your host as a container into an orchestrator like Kubernetes or Service Fabric, you can control the assured number of live instances of your host. In addition, you could consider other approaches in the cloud designed especially for these scenarios, like Azure Functions.

But even for a WebHost deployed into an app pool, there are scenarios, such as repopulating or flushing the application’s in-memory cache, where this approach would still be applicable.

The IHostedService interface provides a convenient way to start background tasks in an ASP.NET Core web application (in .NET Core 2.0) or in any process/host (starting in .NET Core 2.1 with IHost). Its main benefit is the opportunity you get, with graceful cancellation, to clean up your background tasks when the host itself is shutting down.

Additional resources

https://blog.maartenballiauw.be/post/2017/08/01/building-a-scheduled-cache-updater-in-aspnet-core-2.html

https://www.stevejgordon.co.uk/asp-net-core-2-ihostedservice

https://github.com/aspnet/Hosting/tree/dev/samples/GenericHostSample