9

Tackling Business Complexity in a Microservice with DDD and CQRS Patterns

Vision

Design a domain model for each microservice or Bounded Context that reflects understanding of the business domain.

This section focuses on more advanced microservices that you implement when you need to tackle complex subsystems, or microservices derived from the knowledge of domain experts with ever-changing business rules. The architecture patterns used in this section are based on domain-driven design (DDD) and Command and Query Responsibility Segregation (CQRS) approaches, as illustrated in Figure 9-1.


Figure 9-1. External microservice architecture versus internal architecture patterns for each microservice

However, most of the techniques for data-driven microservices, such as how to implement an ASP.NET Core Web API service or how to expose Swagger metadata with Swashbuckle, are also applicable to the more advanced microservices implemented internally with DDD patterns. This section is an extension of the previous sections, because most of the practices explained earlier also apply here and to any other kind of microservice.

This section first provides details on the simplified CQRS patterns used in the eShopOnContainers reference application. Later, you will get an overview of the DDD techniques that enable you to find common patterns that you can reuse in your applications.

DDD is a large topic with a rich set of resources for learning. You can start with books like Domain-Driven Design by Eric Evans and additional materials from Vaughn Vernon, Jimmy Nilsson, Greg Young, Udi Dahan, Jimmy Bogard, and many other DDD/CQRS experts. But most of all you need to try to learn how to apply DDD techniques from the conversations, whiteboarding, and domain modeling sessions with the experts in your concrete business domain.

Additional resources
DDD (Domain-Driven Design)
DDD books
DDD training

Applying simplified CQRS and DDD patterns in a microservice

CQRS is an architectural pattern that separates the models for reading and writing data. The related term Command Query Separation (CQS) was originally defined by Bertrand Meyer in his book Object-Oriented Software Construction. The basic idea is that you can divide a system's operations into two sharply separated categories:

Queries. These return a result and do not change the state of the system, and they are free of side effects.

Commands. These change the state of the system.

CQS is a simple concept: it is about methods within the same object being either queries or commands. Each method either returns state or mutates state, but not both. Even a single repository pattern object can comply with CQS. CQS can be considered a foundational principle for CQRS.
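For illustration, the following minimal sketch (a hypothetical ShoppingCart class, not from eShopOnContainers) shows an object whose members comply with CQS:

using System.Collections.Generic;

public class ShoppingCart
{
    private readonly List<string> _items = new List<string>();

    // Query: returns a result and does not change the state of the system.
    public int GetItemCount()
    {
        return _items.Count;
    }

    // Command: changes the state of the system and returns no result.
    public void AddItem(string productName)
    {
        _items.Add(productName);
    }
}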

Command and Query Responsibility Segregation (CQRS) was introduced by Greg Young and strongly promoted by Udi Dahan and others. It is based on the CQS principle, although it is more detailed. It can be considered a pattern based on commands and events plus optionally on asynchronous messages. In many cases, CQRS is related to more advanced scenarios, like having a different physical database for reads (queries) than for writes (updates). Moreover, a more evolved CQRS system might implement Event-Sourcing (ES) for your updates database, so you would only store events in the domain model instead of storing the current-state data. However, this is not the approach used in this guide; we are using the simplest CQRS approach, which consists of just separating the queries from the commands.

The separation aspect of CQRS is achieved by grouping query operations in one layer and commands in another layer. Each layer has its own data model (note that we say model, not necessarily a different database) and is built using its own combination of patterns and technologies. More importantly, the two layers can be within the same tier or microservice, as in the example (ordering microservice) used for this guide. Or they could be implemented on different microservices or processes so they can be optimized and scaled out separately without affecting one another.

CQRS means having two objects for read and write operations where in other contexts there is one. There are reasons to have a denormalized reads database, which you can learn about in more advanced CQRS literature, but we are not using that approach here. Our goal is to have more flexibility in the queries instead of limiting them with constraints from DDD patterns like aggregates.

An example of this kind of service is the ordering microservice from the eShopOnContainers reference application. This service implements a microservice based on a simplified CQRS approach. It uses a single data source or database, but two logical models plus DDD patterns for the transactional domain, as shown in Figure 9-2.


Figure 9-2. Simplified CQRS- and DDD-based microservice

The application layer can be the Web API itself. The important design aspect here is that the microservice has split the queries and ViewModels (data models especially created for the client applications) from the commands, domain model, and transactions following the CQRS pattern. This approach keeps the queries independent from restrictions and constraints coming from DDD patterns that only make sense for transactions and updates, as explained in later sections.

Applying CQRS and CQS approaches in a DDD microservice in eShopOnContainers

The design of the ordering microservice in the eShopOnContainers reference application is based on CQRS principles. However, it uses the simplest approach, which is just separating the queries from the commands and using the same database for both actions.

The essence of those patterns, and the important point here, is that queries are idempotent: no matter how many times you query a system, the state of that system will not change. You could even use a different “reads” data model than the transactional logic “writes” domain model, although the ordering microservice uses the same database. Hence, this is a simplified CQRS approach.

On the other hand, commands, which trigger transactions and data updates, change state in the system. With commands, you need to be careful when dealing with complexity and ever-changing business rules. This is where you want to apply DDD techniques to have a better-modeled system.

The DDD patterns presented in this guide should not be applied universally. They introduce constraints on your design. Those constraints provide benefits such as higher quality over time, especially in commands and other code that modifies system state. However, those constraints add complexity with fewer benefits for reading and querying data.

One such pattern is the Aggregate pattern, which we examine more in later sections. Briefly, in the Aggregate pattern, you treat many domain objects as a single unit as a result of their relationship in the domain. You might not always gain advantages from this pattern in queries; it can increase the complexity of query logic. For read-only queries, you do not get the advantages of treating multiple objects as a single Aggregate. You only get the complexity.

As shown in Figure 9-2, this guide suggests using DDD patterns only in the transactional/updates area of your microservice (that is, as triggered by commands). Queries can follow a simpler approach and should be separated from commands, following a CQRS approach.

For implementing the “queries side”, you can choose between many approaches, from a full-blown ORM like EF Core, AutoMapper projections, stored procedures, views, and materialized views, to a micro ORM.

In this guide and in eShopOnContainers (specifically the ordering microservice) we chose to implement straight queries using a micro ORM like Dapper. This lets you implement any query based on SQL statements to get the best performance, thanks to a light framework with very little overhead.

Note that when you use this approach, any updates to your model that impact how entities are persisted to a SQL database also need separate updates to SQL queries used by Dapper or any other separate (non-EF) approaches to querying.

CQRS and DDD patterns are not top-level architectures

It is important to understand that CQRS and most DDD patterns (like DDD layers or a domain model with aggregates) are not architectural styles, but only architecture patterns. Microservices, SOA, and event-driven architecture (EDA) are examples of architectural styles. They describe a system of many components, such as many microservices. CQRS and DDD patterns describe something inside a single system or component; in this case, something inside a microservice.

Different Bounded Contexts (BCs) will employ different patterns. They have different responsibilities, and that leads to different solutions. It is worth emphasizing that forcing the same pattern everywhere leads to failure. Do not use CQRS and DDD patterns everywhere. Many subsystems, BCs, or microservices are simpler and can be implemented more easily using simple CRUD services or using another approach.

There is only one application architecture: the architecture of the system or end-to-end application you are designing (for example, the microservices architecture). However, the design of each Bounded Context or microservice within that application reflects its own tradeoffs and internal design decisions at an architecture patterns level. Do not try to apply the same architectural patterns like CQRS or DDD everywhere.

Additional resources

Implementing reads/queries in a CQRS microservice

For reads/queries, the ordering microservice from the eShopOnContainers reference application implements the queries independently from the DDD model and transactional area. This was done primarily because the demands for queries and for transactions are drastically different. Writes execute transactions that must be compliant with the domain logic. Queries, on the other hand, are idempotent and can be segregated from the domain rules.

The approach is simple, as shown in Figure 9-3. The API interface is implemented by the Web API controllers using any infrastructure (such as a micro ORM like Dapper) and returning dynamic ViewModels depending on the needs of the UI applications.


Figure 9-3. The simplest approach for queries in a CQRS microservice

This is the simplest possible approach for queries. The query definitions query the database and return a dynamic ViewModel built on the fly for each query. Since the queries are idempotent, they will not change the data no matter how many times you run a query. Therefore, you do not need to be restricted by any DDD pattern used in the transactional side, like aggregates and other patterns, and that is why queries are separated from the transactional area. You simply query the database for the data that the UI needs and return a dynamic ViewModel that does not need to be statically defined anywhere (no classes for the ViewModels) except in the SQL statements themselves.

Since this is a simple approach, the code required for the queries side (such as code using a micro ORM like Dapper) can be implemented within the same Web API project. Figure 9-4 shows this. The queries are defined in the Ordering.API microservice project within the eShopOnContainers solution.


Figure 9-4. Queries in the Ordering microservice in eShopOnContainers

Using ViewModels specifically made for client apps, independent from domain model constraints

Since the queries are performed to obtain the data needed by the client applications, the returned type can be specifically made for the clients, based on the data returned by the queries. These models, or Data Transfer Objects (DTOs), are called ViewModels.

The returned data (ViewModel) can be the result of joining data from multiple entities or tables in the database, or even across multiple aggregates defined in the domain model for the transactional area. In this case, because you are creating queries independent of the domain model, the aggregates boundaries and constraints are completely ignored and you are free to query any table and column you might need. This approach provides great flexibility and productivity for the developers creating or updating the queries.

The ViewModels can be static types defined in classes. Or they can be created dynamically based on the queries performed (as is implemented in the ordering microservice), which is very agile for developers.

Using Dapper as a micro ORM to perform queries

You can use any micro ORM, Entity Framework Core, or even plain ADO.NET for querying. In the sample application, we selected Dapper for the ordering microservice in eShopOnContainers as a good example of a popular micro ORM. It can run plain SQL queries with great performance, because it is a very light framework. Using Dapper, you can write a SQL query that can access and join multiple tables.

Dapper is an open-source project (originally created by Sam Saffron) and is part of the building blocks used in Stack Overflow. To use Dapper, you just need to install it through the Dapper NuGet package.


You will also need to add a using directive so your code has access to the Dapper extension methods.

When you use Dapper in your code, you directly use the SqlConnection class available in the System.Data.SqlClient namespace. Through the QueryAsync method and other extension methods that extend the SqlConnection class, you can run queries in a straightforward and performant way.

Dynamic and static ViewModels

As shown in the following code from the ordering microservice, most of the ViewModels returned by the queries are implemented as dynamic. That means that the subset of attributes to be returned is based on the query itself. If you add a new column to the query or join, that data is dynamically added to the returned ViewModel. Because you do not need to update static ViewModel classes whenever a query changes, this design approach is flexible and tolerant of future changes.

using Dapper;
using Microsoft.Extensions.Configuration;
using System.Data.SqlClient;
using System.Threading.Tasks;
using System.Dynamic;
using System.Collections.Generic;

public class OrderQueries : IOrderQueries
{
    // Connection string injected when the query class is constructed
    // (for example, from configuration)
    private readonly string _connectionString;

    public OrderQueries(string connectionString)
    {
        _connectionString = connectionString;
    }

    public async Task<IEnumerable<dynamic>> GetOrdersAsync()
    {
        using (var connection = new SqlConnection(_connectionString))
        {
            connection.Open();
            return await connection.QueryAsync<dynamic>(
                @"SELECT o.[Id] as ordernumber,
                      o.[OrderDate] as [date], os.[Name] as [status],
                      SUM(oi.units * oi.unitprice) as total
                  FROM [ordering].[Orders] o
                  LEFT JOIN [ordering].[orderitems] oi ON o.Id = oi.orderid
                  LEFT JOIN [ordering].[orderstatus] os ON o.OrderStatusId = os.Id
                  GROUP BY o.[Id], o.[OrderDate], os.[Name]");
        }
    }
}

The important point is that by using a dynamic type, the returned collection of data will be dynamically assembled as the ViewModel.

For most queries, you do not need to predefine a DTO or ViewModel class, which makes coding them straightforward and productive. However, you can predefine ViewModels (like predefined DTOs) if you want to have ViewModels with a more restricted definition as contracts.
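As a sketch of that more restricted alternative (the OrderSummary type and the sqlQuery variable are hypothetical names, not from eShopOnContainers), you can define the ViewModel as a class and let Dapper map the query's column aliases to its properties:

using System;

public class OrderSummary
{
    public int OrderNumber { get; set; }
    public DateTime Date { get; set; }
    public string Status { get; set; }
    public decimal Total { get; set; }
}

// In the query class, the only change is the type parameter:
// return await connection.QueryAsync<OrderSummary>(sqlQuery);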

Additional resources

Designing a DDD-oriented microservice

Domain-driven design (DDD) advocates modeling based on the reality of business as relevant to your use cases. In the context of building applications, DDD talks about problems as domains. It describes independent problem areas as Bounded Contexts (each Bounded Context correlates to a microservice), and emphasizes a common language to talk about these problems. It also suggests many technical concepts and patterns, like domain entities with rich models (no anemic-domain model), value objects, aggregates and aggregate root (or root entity) rules to support the internal implementation. This section introduces the design and implementation of those internal patterns.

Sometimes these DDD technical rules and patterns are perceived as obstacles that have a steep learning curve for implementing DDD approaches. But the important part is not the patterns themselves, but organizing the code so it is aligned to the business problems, and using the same business terms (ubiquitous language). In addition, DDD approaches should be applied only if you are implementing complex microservices with significant business rules. Simpler responsibilities, like a CRUD service, can be managed with simpler approaches.

Where to draw the boundaries is the key task when designing and defining a microservice. DDD patterns help you understand the complexity in the domain. For the domain model for each Bounded Context, you identify and define the entities, value objects, and aggregates that model your domain. You build and refine a domain model that is contained within a boundary that defines your context, and that boundary is very explicit in the form of a microservice. The components within those boundaries end up being your microservices, although in some cases a BC or business microservice can be composed of several physical services. DDD is about boundaries, and so are microservices.

Keep the microservice context boundaries relatively small

Determining where to place boundaries between Bounded Contexts balances two competing goals. First, you want to initially create the smallest possible microservices, although that should not be the main driver; you should create a boundary around things that need cohesion. Second, you want to avoid chatty communications between microservices. These goals can contradict one another. You should balance them by decomposing the system into as many small microservices as you can until you see communication boundaries growing quickly with each additional attempt to separate a new Bounded Context. Cohesion is key within a single bounded context.

It is similar to the Inappropriate Intimacy code smell when implementing classes. If two microservices need to collaborate a lot with each other, they should probably be the same microservice.

Another way to look at this is autonomy. If a microservice must rely on another service to directly service a request, it is not truly autonomous.

Layers in DDD microservices

Most enterprise applications with significant business and technical complexity are defined by multiple layers. The layers are a logical artifact, and are not related to the deployment of the service. They exist to help developers manage the complexity in the code. Different layers (like the domain model layer versus the presentation layer, etc.) might have different types, which mandates translations between those types.

For example, an entity could be loaded from the database. Then part of that information, or an aggregation of information including additional data from other entities, can be sent to the client UI through a REST Web API. The point here is that the domain entity is contained within the domain model layer and should not be propagated to other areas that it does not belong to, like to the presentation layer.

Additionally, you need to have always-valid entities (see the Designing validations in the domain model layer section) controlled by aggregate roots (root entities). Therefore, entities should not be bound to client views, because at the UI level some data might still not be validated. This is what the ViewModel is for. The ViewModel is a data model exclusively for presentation layer needs. The domain entities do not belong directly to the ViewModel. Instead, you need to translate between ViewModels and domain entities and vice versa.
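For instance, a translation can be as simple as the following sketch (the OrderViewModel shape and the Order member names used here are assumptions for illustration):

using System.Linq;

public class OrderViewModel
{
    public int OrderNumber { get; set; }
    public string Status { get; set; }
    public decimal Total { get; set; }
}

public static class OrderViewModelMapper
{
    // The domain entity stays in the domain model layer; only this
    // presentation-oriented projection reaches the UI.
    public static OrderViewModel FromOrder(Order order)
    {
        return new OrderViewModel
        {
            OrderNumber = order.Id,
            Status = order.OrderStatus.Name,
            Total = order.OrderItems.Sum(item => item.Units * item.UnitPrice)
        };
    }
}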

When tackling complexity, it is important to have a domain model controlled by aggregate roots (we go into this in more detail later) that make sure that all the invariants and rules related to that group of entities (aggregate) are performed through a single entry point or gate, the aggregate root.

Figure 9-5 shows how a layered design is implemented in the eShopOnContainers application.


Figure 9-5. DDD layers in the ordering microservice in eShopOnContainers

You want to design the system so that each layer communicates only with certain other layers. That may be easier to enforce if layers are implemented as different class libraries, because you can clearly identify what dependencies are set between libraries. For instance, the domain model layer should not take a dependency on any other layer (the domain model classes should be Plain Old CLR Object, or POCO, classes). As shown in Figure 9-6, the Ordering.Domain layer library has dependencies only on the .NET Core libraries but not on any other custom library (data library, persistence library, etc.).


Figure 9-6. Layers implemented as libraries allow better control of dependencies between layers

The domain model layer

Eric Evans’s excellent book Domain Driven Design says the following about the domain model layer and the application layer.

Domain Model Layer: Responsible for representing concepts of the business, information about the business situation, and business rules. State that reflects the business situation is controlled and used here, even though the technical details of storing it are delegated to the infrastructure. This layer is the heart of business software.

The domain model layer is where the business is expressed. When you implement a microservice domain model layer in .NET, that layer is coded as a class library with the domain entities that capture data plus behavior (methods with logic).

Following the Persistence Ignorance and the Infrastructure Ignorance principles, this layer must completely ignore data persistence details. These persistence tasks should be performed by the infrastructure layer. Therefore, this layer should not take direct dependencies on the infrastructure, which means that an important rule is that your domain model entity classes should be POCOs.

Domain entities should not have any direct dependency (like deriving from a base class) on any data access infrastructure framework like Entity Framework or NHibernate. Ideally, your domain entities should not derive from or implement any type defined in any infrastructure framework.

Most modern ORM frameworks like Entity Framework Core allow this approach, so that your domain model classes are not coupled to the infrastructure. However, having POCO entities is not always possible when using certain NoSQL databases and frameworks, like Actors and Reliable Collections in Azure Service Fabric.

Even though it is important to follow the Persistence Ignorance principle for your domain model, you should not ignore persistence concerns. It is still very important to understand the physical data model and how it maps to your entity object model. Otherwise you can create impossible designs.

Also, this does not mean you can take a model designed for a relational database and directly move it to a NoSQL or document-oriented database. In some entity models, the model might fit, but usually it does not. There are still constraints that your entity model must adhere to, based both on the storage technology and ORM technology.

The application layer

Moving on to the application layer, we can again cite Eric Evans’s book Domain Driven Design:

Application Layer: Defines the jobs the software is supposed to do and directs the expressive domain objects to work out problems. The tasks this layer is responsible for are meaningful to the business or necessary for interaction with the application layers of other systems. This layer is kept thin. It does not contain business rules or knowledge, but only coordinates tasks and delegates work to collaborations of domain objects in the next layer down. It does not have state reflecting the business situation, but it can have state that reflects the progress of a task for the user or the program.

A microservice’s application layer in .NET is commonly coded as an ASP.NET Core Web API project. The project implements the microservice’s interaction, remote network access, and the external Web APIs used from the UI or client apps. It includes queries if using a CQRS approach, commands accepted by the microservice, and even the event-driven communication between microservices (integration events). The ASP.NET Core Web API that represents the application layer must not contain business rules or domain knowledge (especially domain rules for transactions or updates); these should be owned by the domain model class library. The application layer must only coordinate tasks and must not hold or define any domain state (domain model). It delegates the execution of business rules to the domain model classes themselves (aggregate roots and domain entities), which will ultimately update the data within those domain entities.
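The following sketch illustrates that thinness (CreateOrderCommand, the handler signature, and the SaveChangesAsync call are assumptions for illustration; the IOrderRepository contract appears later in this chapter). The handler only coordinates; the business rules live in the Order aggregate root:

using System.Threading.Tasks;

public class CreateOrderCommand
{
    public int BuyerId { get; set; }
    public int PaymentId { get; set; }
}

public class CreateOrderCommandHandler
{
    private readonly IOrderRepository _orderRepository;

    public CreateOrderCommandHandler(IOrderRepository orderRepository)
    {
        _orderRepository = orderRepository;
    }

    public async Task<bool> Handle(CreateOrderCommand command)
    {
        // No business rules here; the Order constructor and methods own them.
        var order = new Order(command.BuyerId, command.PaymentId);
        _orderRepository.Add(order);

        // Assumes the IUnitOfWork contract exposes a SaveChangesAsync method.
        return await _orderRepository.UnitOfWork.SaveChangesAsync() > 0;
    }
}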

Basically, the application logic is where you implement all use cases that depend on a given front end, for example, the implementation related to a Web API service.

The goal is that the domain logic in the domain model layer, its invariants, the data model, and related business rules must be completely independent from the presentation and application layers. Most of all, the domain model layer must not directly depend on any infrastructure framework.

The infrastructure layer

The infrastructure layer is how the data that is initially held in domain entities (in memory) is persisted in databases or another persistent store. An example is using Entity Framework Core code to implement the Repository pattern classes that use a DbContext to persist data in a relational database.
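A minimal sketch of such a repository might look like the following (OrderingContext is an assumed DbContext name; IOrderRepository and IUnitOfWork are the domain-layer contracts shown later in this chapter):

// Lives in the infrastructure layer and references the domain model layer,
// never the other way around.
public class OrderRepository : IOrderRepository
{
    private readonly OrderingContext _context;

    public OrderRepository(OrderingContext context)
    {
        _context = context;
    }

    // Assumes the DbContext itself implements IUnitOfWork.
    public IUnitOfWork UnitOfWork => _context;

    public Order Add(Order order)
    {
        return _context.Orders.Add(order).Entity;
    }
}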

In accordance with the previously mentioned Persistence Ignorance and Infrastructure Ignorance principles, the infrastructure layer must not “contaminate” the domain model layer. You must keep the domain model entity classes agnostic from the infrastructure that you use to persist data (EF or any other framework) by not taking hard dependencies on frameworks. Your domain model layer class library should have only your domain code, just POCO entity classes implementing the heart of your software and completely decoupled from infrastructure technologies.

Thus, your layers or class libraries and projects should ultimately depend on your domain model layer (library), not vice versa, as shown in Figure 9-7.


Figure 9-7. Dependencies between layers in DDD

This layer design should be independent for each microservice. As noted earlier, you can implement the most complex microservices following DDD patterns, while implementing simpler data-driven microservices (simple CRUD in a single layer) in a simpler way.

Additional resources

Designing a microservice domain model

Define one rich domain model for each business microservice or Bounded Context

Your goal is to create a single cohesive domain model for each business microservice or Bounded Context (BC). Keep in mind, however, that a BC or business microservice could sometimes be composed of several physical services that share a single domain model. The domain model must capture the rules, behavior, business language, and constraints of the single Bounded Context or business microservice that it represents.

The Domain Entity pattern

Entities represent domain objects and are primarily defined by their identity, continuity, and persistence over time, and not only by the attributes that comprise them. As Eric Evans says, “an object primarily defined by its identity is called an Entity.” Entities are very important in the domain model, since they are the base for a model. Therefore, you should identify and design them carefully.

An entity’s identity can cross multiple microservices or Bounded Contexts.

The same identity (though not the same entity) can be modeled across multiple Bounded Contexts or microservices. However, that does not imply that the same entity, with the same attributes and logic, would be implemented in multiple Bounded Contexts. Instead, entities in each Bounded Context limit their attributes and behaviors to those required in that Bounded Context's domain.

For instance, the buyer entity might have most of a person’s attributes that are defined in the user entity in the profile or identity microservice, including the identity. But the buyer entity in the ordering microservice might have fewer attributes, because only certain buyer data is related to the order process. The context of each microservice or Bounded Context impacts its domain model.

Domain entities must implement behavior in addition to implementing data attributes

A domain entity in DDD must implement the domain logic or behavior related to the entity data (the object accessed in memory). For example, as part of an order entity class you must have business logic and operations implemented as methods for tasks such as adding an order item, data validation, and total calculation. The entity’s methods take care of the invariants and rules of the entity instead of having those rules spread across the application layer.

Figure 9-8 shows a domain entity that implements not only data attributes but operations or methods with related domain logic.


Figure 9-8. Example of a domain entity design implementing data plus behavior

Of course, sometimes you can have entities that do not implement any logic as part of the entity class. This can happen in child entities within an aggregate if the child entity does not have any special logic because most of the logic is defined in the aggregate root. If you have a complex microservice that has a lot of logic implemented in the service classes instead of in the domain entities, you could be falling into the anemic domain model, explained in the following section.

Rich domain model versus anemic domain model

In his post AnemicDomainModel, Martin Fowler describes an anemic domain model this way:

The basic symptom of an Anemic Domain Model is that at first blush it looks like the real thing. There are objects, many named after the nouns in the domain space, and these objects are connected with the rich relationships and structure that true domain models have. The catch comes when you look at the behavior, and you realize that there is hardly any behavior on these objects, making them little more than bags of getters and setters.

Of course, when you use an anemic domain model, those data models will be used from a set of service objects (traditionally named the business layer) which capture all the domain or business logic. The business layer sits on top of the data model and uses the data model just as data.

The anemic domain model is just a procedural style design. Anemic entity objects are not real objects because they lack behavior (methods). They only hold data properties and thus it is not object-oriented design. By putting all the behavior out into service objects (the business layer) you essentially end up with spaghetti code or transaction scripts, and therefore you lose the advantages that a domain model provides.

Regardless, if your microservice or Bounded Context is very simple (a CRUD service), the anemic domain model in the form of entity objects with just data properties might be good enough, and it might not be worth implementing more complex DDD patterns. In that case, it will be simply a persistence model, because you have intentionally created an entity with only data for CRUD purposes.

That is why microservices architectures are perfect for a multi-architectural approach depending on each Bounded Context. For instance, in eShopOnContainers, the ordering microservice implements DDD patterns, but the catalog microservice, which is a simple CRUD service, does not.

Some people say that the anemic domain model is an anti-pattern. It really depends on what you are implementing. If the microservice you are creating is simple enough (for example, a CRUD service), following the anemic domain model is not an anti-pattern. However, if you need to tackle the complexity of a microservice's domain that has a lot of ever-changing business rules, the anemic domain model might be an anti-pattern for that microservice or Bounded Context. In that case, designing it as a rich model with entities containing data plus behavior, as well as implementing additional DDD patterns (aggregates, value objects, etc.), might have huge benefits for the long-term success of such a microservice.

Additional resources

The Value Object pattern

As Eric Evans has noted, “Many objects do not have conceptual identity. These objects describe certain characteristics of a thing.”

An entity requires an identity, but there are many objects in a system that do not, like the Value Object pattern. A value object is an object with no conceptual identity that describes a domain aspect. These are objects that you instantiate to represent design elements that only concern you temporarily. You care about what they are, not who they are. Examples include numbers and strings, but can also be higher-level concepts like groups of attributes.

Something that is an entity in a microservice might not be an entity in another microservice, because in the second case, the Bounded Context might have a different meaning. For example, an address in an e-commerce application might not have an identity at all, since it might only represent a group of attributes of the customer’s profile for a person or company. In this case, the address should be classified as a value object. However, in an application for an electric power utility company, the customer address could be important for the business domain. Therefore, the address must have an identity so the billing system can be directly linked to the address. In that case, an address should be classified as a domain entity.

A person with a name and surname is usually an entity because a person has identity, even if the name and surname coincide with another set of values, such as when those names also refer to a different person.

Value objects are hard to manage in relational databases and ORMs like EF, whereas in document-oriented databases they are easier to implement and use.

Additional resources

The Aggregate pattern

A domain model contains clusters of different data entities and processes that can control a significant area of functionality, such as order fulfillment or inventory. A more fine-grained DDD unit is the aggregate, which describes a cluster or group of entities and behaviors that can be treated as a cohesive unit.

You usually define an aggregate based on the transactions that you need. A classic example is an order that also contains a list of order items. An order item will usually be an entity. But it will be a child entity within the order aggregate, which will also contain the order entity as its root entity, typically called an aggregate root.

Identifying aggregates can be hard. An aggregate is a group of objects that must be consistent together, but you cannot just pick a group of objects and label them an aggregate. You must start with a domain concept and think about the entities that are used in the most common transactions related to that concept. Those entities that need to be transactionally consistent are what forms an aggregate. Thinking about transaction operations is probably the best way to identify aggregates.

The Aggregate Root or Root Entity pattern

An aggregate is composed of at least one entity: the aggregate root, also called root entity or primary entity. Additionally, it can have multiple child entities and value objects, with all entities and objects working together to implement required behavior and transactions.

The purpose of an aggregate root is to ensure the consistency of the aggregate; it should be the only entry point for updates to the aggregate through methods or operations in the aggregate root class. You should make changes to entities within the aggregate only via the aggregate root. It is the aggregate’s consistency guardian, taking into account all the invariants and consistency rules you might need to comply with in your aggregate. If you change a child entity or value object independently, the aggregate root cannot ensure that the aggregate is in a valid state. It would be like a table with a loose leg. Maintaining consistency is the main purpose of the aggregate root.

In Figure 9-9, you can see sample aggregates like the buyer aggregate, which contains a single entity (the aggregate root Buyer). The order aggregate contains multiple entities and a value object.


Figure 9-9. Example of aggregates with multiple or single entities

Note that the Buyer aggregate could have additional child entities, depending on your domain, as it does in the ordering microservice in the eShopOnContainers reference application. Figure 9-9 just illustrates a case in which the buyer has a single entity, as an example of an aggregate that contains only an aggregate root.

In order to maintain separation of aggregates and keep clear boundaries between them, it is a good practice in a DDD domain model to disallow direct navigation between aggregates and to have only a foreign key (FK) field, as implemented in the ordering microservice domain model in eShopOnContainers. The Order entity has only a FK field for the buyer, but not an EF Core navigation property, as shown in the following code:

public class Order : Entity, IAggregateRoot
{
    private DateTime _orderDate;
    public Address Address { get; private set; }

    private int? _buyerId; // FK pointing to a different aggregate root
    public OrderStatus OrderStatus { get; private set; }

    // ...
}

Identifying and working with aggregates requires research and experience. For more information, see the following Additional resources list.

Additional resources

Implementing a microservice domain model with .NET Core

In the previous section, the fundamental design principles and patterns for designing a domain model were explained. Now it is time to explore possible ways to implement the domain model by using .NET Core (plain C# code) and EF Core. Note that your domain model will be composed simply of your code. It will have just the EF Core model requirements, but not real dependencies on EF. You should not have hard dependencies or references to EF Core or any other ORM in your domain model.

Domain model structure in a custom .NET Standard Library

The folder organization used for the eShopOnContainers reference application demonstrates the DDD model for the application. You might find that a different folder organization more clearly communicates the design choices made for your application. As you can see in Figure 9-10, in the ordering domain model there are two aggregates, the order aggregate and the buyer aggregate. Each aggregate is a group of domain entities and value objects, although you could have an aggregate composed of a single domain entity (the aggregate root or root entity) as well.


Figure 9-10. Domain model structure for the ordering microservice in eShopOnContainers

Additionally, the domain model layer includes the repository contracts (interfaces) that are the infrastructure requirements of your domain model. In other words, these interfaces express what repositories the infrastructure layer must implement and how. It is critical that the implementation of the repositories be placed outside of the domain model layer, in the infrastructure layer library, so the domain model layer is not “contaminated” by APIs or classes from infrastructure technologies, like Entity Framework.

You can also see a SeedWork folder that contains custom base classes that you can use as a base for your domain entities and value objects, so you do not have redundant code in each domain’s object class.

Structuring aggregates in a custom .NET Standard Library

An aggregate refers to a cluster of domain objects grouped together to match transactional consistency. Those objects could be instances of entities (one of which is the aggregate root or root entity) plus any additional value objects.

Transactional consistency means that an aggregate is guaranteed to be consistent and up to date at the end of a business action. For example, the order aggregate from the eShopOnContainers ordering microservice domain model is composed as shown in Figure 9-11.


Figure 9-11. The order aggregate in Visual Studio solution

If you open any of the files in an aggregate folder, you can see how it derives from or implements a custom base class or interface, like Entity or ValueObject, as defined in the SeedWork folder.

Implementing domain entities as POCO classes

You implement a domain model in .NET by creating POCO classes that implement your domain entities. In the following example, the Order class is defined as an entity and also as an aggregate root. Because the Order class derives from the Entity base class, it can reuse common code related to entities. Bear in mind that these base classes and interfaces are defined by you in the domain model project, so it is your code, not infrastructure code from an ORM like EF.

// COMPATIBLE WITH ENTITY FRAMEWORK CORE 1.0
// Entity is a custom base class with the ID
public class Order : Entity, IAggregateRoot
{
    public int BuyerId { get; private set; }
    public DateTime OrderDate { get; private set; }
    public int StatusId { get; private set; }
    public ICollection<OrderItem> OrderItems { get; private set; }
    public Address ShippingAddress { get; private set; }
    public int PaymentId { get; private set; }

    protected Order() { } // Design constraint needed only by EF Core

    public Order(int buyerId, int paymentId)
    {
        BuyerId = buyerId;
        PaymentId = paymentId;
        StatusId = OrderStatus.InProcess.Id;
        OrderDate = DateTime.UtcNow;
        OrderItems = new List<OrderItem>();
    }

    public void AddOrderItem(int productId, string productName, string pictureUrl,
                             decimal unitPrice, decimal discount, int units)
    {
        // ...
        // Domain rules/logic for adding the OrderItem to the order
        // ...
        OrderItem item = new OrderItem(this.Id, productId, productName,
                                       pictureUrl, unitPrice, discount, units);
        OrderItems.Add(item);
    }

    // ...
    // Additional methods with domain rules/logic related to the Order aggregate
    // ...
}

It is important to note that this is a domain entity implemented as a POCO class. It does not have any direct dependency on Entity Framework Core or any other infrastructure framework. This implementation is as it should be, just C# code implementing a domain model.

In addition, the class is decorated with an interface named IAggregateRoot. That interface is an empty interface, sometimes called a marker interface, that is used just to indicate that this entity class is also an aggregate root.
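Its definition can be as small as the following sketch; the interface body is intentionally empty:

// Marker interface: no members. It only flags an entity class as an aggregate
// root so that, for example, a generic repository can be constrained to roots.
public interface IAggregateRoot { }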

A marker interface is sometimes considered an anti-pattern; however, it is also a clean way to mark a class, especially when that interface might be evolving. An attribute could be the other choice for the marker, but it is quicker to see the base class (Entity) next to the IAggregateRoot interface instead of putting an Aggregate attribute marker above the class. It is a matter of preference, in any case.

Having an aggregate root means that most of the code related to consistency and business rules of the aggregate’s entities should be implemented as methods in the Order aggregate root class (for example, AddOrderItem when adding an OrderItem object to the aggregate). You should not create or update OrderItems objects independently or directly; the AggregateRoot class must keep control and consistency of any update operation against its child entities.

For example, you should not do the following from any command handler method or application layer class:

// WRONG ACCORDING TO DDD PATTERNS – CODE AT THE APPLICATION LAYER OR
// COMMAND HANDLERS
// Code in command handler methods or Web API controllers

// ... (WRONG) Some code with business logic out of the domain classes ...
OrderItem myNewOrderItem = new OrderItem(orderId, productId, productName,
                                         pictureUrl, unitPrice, discount, units);

// ... (WRONG) Accessing the OrderItems collection directly from the application
// layer or command handlers
myOrder.OrderItems.Add(myNewOrderItem);
// ...

In this case, the Add method is purely an operation to add data, with direct access to the OrderItems collection. Therefore, most of the domain logic, rules, or validations related to that operation with the child entities will be spread across the application layer (command handlers and Web API controllers).

If you go around the aggregate root, the aggregate root cannot guarantee its invariants, its validity, or its consistency. Eventually you will have spaghetti code or transactional script code.

To follow DDD patterns, entities must not have public setters in any entity property. Changes in an entity should be driven by explicit methods with explicit ubiquitous language about the change they are performing in the entity.
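For example, instead of exposing a public setter for the order status, the entity can offer a method named after the business operation, as in this sketch (the status values and the guard condition are assumptions for illustration):

// Ubiquitous language: the method name states the business change being made.
public void SetAwaitingValidationStatus()
{
    // Guard clause protecting the aggregate's invariants.
    if (_orderStatusId == OrderStatus.Submitted.Id)
    {
        _orderStatusId = OrderStatus.AwaitingValidation.Id;
    }
}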

Furthermore, collections within the entity (like the order items) should be read-only properties (using the AsReadOnly method explained later). You should be able to update them only from within the aggregate root class methods or the child entity methods.

As you can see in the code for the Order aggregate root, all setters should be private or at least read-only externally, so that any operation against the entity’s data or its child entities has to be performed through methods in the entity class. This maintains consistency in a controlled and object-oriented way instead of implementing transactional script code.

The following code snippet shows the proper way to code the task of adding an OrderItem object to the Order aggregate.

// RIGHT ACCORDING TO DDD – CODE AT THE APPLICATION LAYER OR COMMAND HANDLERS
// The code in command handlers or Web API controllers, related only to
// application concerns
// There is NO code here related to the OrderItem object's business logic
myOrder.AddOrderItem(productId, productName, pictureUrl, unitPrice, discount, units);

// The code related to OrderItem params validations or domain rules should
// be WITHIN the AddOrderItem method.
// ...

In this snippet, most of the validations or logic related to the creation of an OrderItem object will be under the control of the Order aggregate root—in the AddOrderItem method—especially validations and logic related to other elements in the aggregate. For instance, you might get the same product item as the result of multiple calls to AddOrderItem. In that method, you could examine the product items and consolidate the same product items into a single OrderItem object with several units. Additionally, if there are different discount amounts but the product ID is the same, you would likely apply the higher discount. This principle applies to any other domain logic for the OrderItem object.

In addition, the new OrderItem(params) operation will also be controlled and performed by the AddOrderItem method from the Order aggregate root. Therefore, most of the logic or validations related to that operation (especially anything that impacts the consistency between other child entities) will be in a single place within the aggregate root. That is the ultimate purpose of the aggregate root pattern.

When you use Entity Framework Core 1.1 or later, a DDD entity can be better expressed because EF Core 1.1 allows mapping to fields in addition to properties. This is useful when protecting collections of child entities or value objects. With this enhancement, you can use simple private fields instead of properties, implement any update to the field collection in public methods, and provide read-only access through the AsReadOnly method.

In DDD you want to update the entity only through methods in the entity (or the constructor) in order to control any invariant and the consistency of the data, so properties are defined only with a get accessor. The properties are backed by private fields. Private members can only be accessed from within the class. However, there is one exception: EF Core needs to set these fields as well.

// ENTITY FRAMEWORK CORE 1.1 OR LATER
// Entity is a custom base class with the ID
public class Order : Entity, IAggregateRoot
{
    // DDD patterns comment
    // Using private fields, allowed since EF Core 1.1, is a much better
    // encapsulation aligned with DDD aggregates and domain entities (instead of
    // properties and property collections)
    private bool _someOrderInternalState;
    private DateTime _orderDate;

    public Address Address { get; private set; }
    public Buyer Buyer { get; private set; }
    private int _buyerId;

    public OrderStatus OrderStatus { get; private set; }
    private int _orderStatusId;

    // DDD patterns comment
    // Using a private collection field is better for DDD aggregate encapsulation.
    // OrderItem objects cannot be added from outside the aggregate root
    // directly to the collection, but only through the
    // Order.AddOrderItem method, which includes behavior.
    private readonly List<OrderItem> _orderItems;
    public IEnumerable<OrderItem> OrderItems => _orderItems.AsReadOnly();
    // Using List<>.AsReadOnly()
    // This will create a read-only wrapper around the private list so it is
    // protected against external updates. It's much cheaper than .ToList(),
    // because it will not have to copy all items in a new collection
    // (just one heap alloc for the wrapper instance).
    // https://msdn.microsoft.com/en-us/library/e78dcd75(v=vs.110).aspx

    public PaymentMethod PaymentMethod { get; private set; }
    private int _paymentMethodId;

    protected Order() { }

    public Order(int buyerId, int paymentMethodId, Address address)
    {
        _orderItems = new List<OrderItem>();
        _buyerId = buyerId;
        _paymentMethodId = paymentMethodId;
        _orderStatusId = OrderStatus.InProcess.Id;
        _orderDate = DateTime.UtcNow;
        Address = address;
    }

    // DDD patterns comment
    // The Order aggregate root method AddOrderItem() should be the only way
    // to add items to the Order object, so that any behavior (discounts, etc.)
    // and validations are controlled by the aggregate root in order to
    // maintain consistency within the whole aggregate.
    public void AddOrderItem(int productId, string productName, decimal unitPrice,
                             decimal discount, string pictureUrl, int units = 1)
    {
        // ...
        // Domain rules/logic here for adding OrderItem objects to the order
        // ...
        OrderItem item = new OrderItem(this.Id, productId, productName,
                                       pictureUrl, unitPrice, discount, units);
        _orderItems.Add(item); // Add to the private field, not the read-only property
    }

    // ...
    // Additional methods with domain rules/logic related to the Order aggregate
    // ...
}

Mapping properties with only get accessors to the fields in the database table

Mapping properties to the database table columns is not a domain responsibility, but part of the infrastructure and persistence layer. We mention this here just so you are aware of the new capabilities in EF Core 1.1 related to how you can model entities. Additional details on this topic are explained in the infrastructure and persistence section.

When you use EF Core 1.1, within the DbContext you need to map the properties that are defined only with getters to the actual fields in the database table. This is done with the HasField method of the PropertyBuilder class.
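For example, a get-only OrderDate property backed by an _orderDate field could be mapped as in the following sketch (placed in your DbContext; the entity and field names follow the earlier example):

// Inside the DbContext (requires using Microsoft.EntityFrameworkCore;)
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    // Tell EF Core to read and write OrderDate through its backing field.
    modelBuilder.Entity<Order>()
        .Property(o => o.OrderDate)
        .HasField("_orderDate");
}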

Mapping fields without properties

With the new feature in EF Core 1.1 to map columns to fields, it is also possible to not use properties. Instead, you can just map columns from a table to fields. A common use case for this is private fields for an internal state that does not need to be accessed from outside the entity.

For example, in the preceding code example, the _someOrderInternalState field has no related property for either a setter or getter. That field will be calculated within the order's business logic and used from the order's methods, but it needs to be persisted in the database as well. So, in EF Core 1.1 there is a way to map a field without a related property to a column in the database. This is also explained in the Infrastructure layer section of this guide.
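A sketch of that field-only mapping follows (the column name is an assumption; EF Core falls back to the field because Order has no property with that name):

// Inside the DbContext (requires using Microsoft.EntityFrameworkCore;)
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    // No CLR property named _someOrderInternalState exists on Order, so EF Core
    // maps the private field directly to a database column.
    modelBuilder.Entity<Order>()
        .Property("_someOrderInternalState")
        .HasColumnName("SomeOrderInternalState");
}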

Additional resources

Seedwork (reusable base classes and interfaces for your domain model)

As mentioned, in the solution folder you can also see a SeedWork folder. This folder contains custom base classes that you can use as a base for your domain entities and value objects, so you do not have redundant code in each domain's object class. The folder for these types of classes is called SeedWork and not something like Framework, because the folder contains just a small subset of reusable classes that cannot really be considered a framework. Seedwork is a term introduced by Michael Feathers and popularized by Martin Fowler, but you could also name that folder Common, SharedKernel, or similar.

Figure 9-12 shows the classes that form the seedwork of the domain model in the ordering microservice. It has a few custom base classes like Entity, ValueObject, and Enumeration, plus a few interfaces. These interfaces (IRepository and IUnitOfWork) inform the infrastructure layer about what needs to be implemented. Those interfaces are also used through Dependency Injection from the application layer.


Figure 9-12. A sample set of domain model “seedwork” base classes and interfaces

This is the type of copy-and-paste reuse that many developers share between projects, not a formal framework. You can have this kind of seedwork in any layer or library. However, if the set of classes and interfaces gets big enough, you might want to create a single class library.
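As a sketch of what one of those contracts can look like (the member list is an assumption based on common implementations; the IRepository interface appears later in this chapter):

using System;
using System.Threading;
using System.Threading.Tasks;

public interface IUnitOfWork : IDisposable
{
    // Commits all pending changes made through the repositories as one unit.
    Task<int> SaveChangesAsync(CancellationToken cancellationToken = default(CancellationToken));
}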

The custom Entity base class

The following code is an example of an Entity base class where you can place code that can be used the same way by any domain entity, such as the entity ID, equality operators, etc.

// ENTITY FRAMEWORK CORE 1.1
public abstract class Entity
{
    int? _requestedHashCode;
    int _Id;

    public virtual int Id
    {
        get { return _Id; }
        protected set { _Id = value; }
    }

    public bool IsTransient()
    {
        return this.Id == default(Int32);
    }

    public override bool Equals(object obj)
    {
        if (obj == null || !(obj is Entity))
            return false;
        if (Object.ReferenceEquals(this, obj))
            return true;
        if (this.GetType() != obj.GetType())
            return false;

        Entity item = (Entity)obj;
        if (item.IsTransient() || this.IsTransient())
            return false;
        else
            return item.Id == this.Id;
    }

    public override int GetHashCode()
    {
        if (!IsTransient())
        {
            if (!_requestedHashCode.HasValue)
                _requestedHashCode = this.Id.GetHashCode() ^ 31;
            // XOR for random distribution. See:
            // http://blogs.msdn.com/b/ericlippert/archive/2011/02/28/guidelines-and-rules-for-gethashcode.aspx
            return _requestedHashCode.Value;
        }
        else
            return base.GetHashCode();
    }

    public static bool operator ==(Entity left, Entity right)
    {
        if (Object.Equals(left, null))
            return (Object.Equals(right, null)) ? true : false;
        else
            return left.Equals(right);
    }

    public static bool operator !=(Entity left, Entity right)
    {
        return !(left == right);
    }
}

Repository contracts (interfaces) in the domain model layer

Repository contracts are simply .NET interfaces that express the contract requirements of the repositories to be used for each aggregate. The repositories themselves, with EF Core code or any other infrastructure dependencies and code, must not be implemented within the domain model; the repositories should only implement the interfaces you define.

A pattern related to this practice (placing the repository interfaces in the domain model layer) is the Separated Interface pattern. As explained by Martin Fowler, “Use Separated Interface to define an interface in one package but implement it in another. This way a client that needs the dependency to the interface can be completely unaware of the implementation.”

Following the Separated Interface pattern enables the application layer (in this case, the Web API project for the microservice) to have a dependency on the requirements defined in the domain model, but not a direct dependency on the infrastructure/persistence layer. In addition, you can use Dependency Injection to isolate the implementation, which lives in the infrastructure/persistence layer using repositories.
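With the built-in ASP.NET Core DI container, that wiring can be a one-line registration at startup, as in this sketch (class names follow the examples in this chapter):

using Microsoft.Extensions.DependencyInjection;

// In the Startup class:
public void ConfigureServices(IServiceCollection services)
{
    // The application layer asks for IOrderRepository; the container supplies
    // the EF Core-based implementation from the infrastructure layer.
    services.AddScoped<IOrderRepository, OrderRepository>();
}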

For example, the following IOrderRepository interface defines what operations the OrderRepository class will need to implement at the infrastructure layer. In the current implementation of the application, the code just needs to add the order to the database, since queries are split following the CQS approach, and updates to orders are not implemented.

public interface IOrderRepository : IRepository<Order>
{
    Order Add(Order order);
}

public interface IRepository<T> where T : IAggregateRoot
{
    IUnitOfWork UnitOfWork { get; }
}

Additional resources

Implementing value objects

As discussed in earlier sections about entities and aggregates, identity is fundamental for entities. However, there are many objects and data items in a system that do not require an identity and identity tracking, such as value objects.

A value object can reference other entities. For example, in an application that generates a route that describes how to get from one point to another, that route would be a value object. It would be a snapshot of points on a specific route, but this suggested route would not have an identity, even though internally it might refer to entities like City, Road, etc.

Figure 9-13 shows the Address value object within the Order aggregate.

Figure 9-13. Address value object within the Order aggregate

As shown in Figure 9-13, an entity is usually composed of multiple attributes. For example, Order can be modeled as an entity with an identity and composed internally of a set of attributes such as OrderId, OrderDate, OrderItems, etc. But the address, which is simply a complex value composed of country, street, city, etc., has no identity and must be modeled and treated as a value object.

Important characteristics of value objects

There are two main characteristics for value objects:

They have no identity.

They are immutable.

The first characteristic was already discussed. Immutability is an important requirement. The values of a value object must be immutable once the object is created. Therefore, when the object is constructed, you must provide the required values, but you must not allow them to change during the object’s lifetime.

Value objects allow you to perform certain tricks for performance, thanks to their immutable nature. This is especially true in systems where there may be thousands of value object instances, many of which have the same values. Their immutable nature allows them to be reused; they can be interchangeable objects, since their values are the same and they have no identity. This type of optimization can sometimes make a difference between software that runs slowly and software with good performance. Of course, all these cases depend on the application environment and deployment context.

Value object implementation in C#

In terms of implementation, you can have a value object base class that has basic utility methods like equality based on comparison between all the attributes (since a value object must not be based on identity) and other fundamental characteristics. The following example shows a value object base class used in the ordering microservice from eShopOnContainers.

public abstract class ValueObject
{
    protected static bool EqualOperator(ValueObject left, ValueObject right)
    {
        if (ReferenceEquals(left, null) ^ ReferenceEquals(right, null))
        {
            return false;
        }
        return ReferenceEquals(left, null) || left.Equals(right);
    }

    protected static bool NotEqualOperator(ValueObject left, ValueObject right)
    {
        return !(EqualOperator(left, right));
    }

    protected abstract IEnumerable<object> GetAtomicValues();

    public override bool Equals(object obj)
    {
        if (obj == null || obj.GetType() != GetType())
        {
            return false;
        }
        ValueObject other = (ValueObject)obj;
        IEnumerator<object> thisValues = GetAtomicValues().GetEnumerator();
        IEnumerator<object> otherValues = other.GetAtomicValues().GetEnumerator();
        while (thisValues.MoveNext() && otherValues.MoveNext())
        {
            if (ReferenceEquals(thisValues.Current, null) ^
                ReferenceEquals(otherValues.Current, null))
            {
                return false;
            }
            if (thisValues.Current != null &&
                !thisValues.Current.Equals(otherValues.Current))
            {
                return false;
            }
        }
        return !thisValues.MoveNext() && !otherValues.MoveNext();
    }

    // Other utility methods
}

You can use this class when implementing your actual value object, as with the Address value object shown in the following example:

public class Address : ValueObject
{
    public string Street { get; private set; }
    public string City { get; private set; }
    public string State { get; private set; }
    public string Country { get; private set; }
    public string ZipCode { get; private set; }

    public Address(string street, string city, string state,
        string country, string zipcode)
    {
        Street = street;
        City = city;
        State = state;
        Country = country;
        ZipCode = zipcode;
    }

    protected override IEnumerable<object> GetAtomicValues()
    {
        yield return Street;
        yield return City;
        yield return State;
        yield return Country;
        yield return ZipCode;
    }
}
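A short usage sketch makes the value-based equality visible: two Address instances with the same attribute values are considered equal, because the comparison walks the atomic values rather than object identity.

var home = new Address("21 Main St", "Seattle", "WA", "USA", "98101");
var same = new Address("21 Main St", "Seattle", "WA", "USA", "98101");

Console.WriteLine(home.Equals(same));           // True: same values, no identity involved
Console.WriteLine(ReferenceEquals(home, same)); // False: two distinct instances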

Hiding the identity characteristic when using EF Core to persist value objects

A limitation when using EF Core is that in its current version (EF Core 1.1) you cannot use complex types as you could in EF 6.x. Therefore, you must store your value object as an EF entity. However, you can hide its ID so you make clear that the identity of a value object is not important in the model. You hide the ID by configuring it as a shadow property. Since that configuration for hiding the ID is set up at the infrastructure level, it will be transparent for your domain model, and its infrastructure implementation could change in the future.

In eShopOnContainers, the hidden ID needed by the EF Core infrastructure is implemented in the following way at the DbContext level, using the Fluent API in the infrastructure project.

// Fluent API within the OrderingContext:DbContext in the
// Ordering.Infrastructure project
void ConfigureAddress(EntityTypeBuilder<Address> addressConfiguration)
{
    addressConfiguration.ToTable("address", DEFAULT_SCHEMA);
    addressConfiguration.Property<int>("Id")
        .IsRequired();
    addressConfiguration.HasKey("Id");
}

Therefore, the ID is hidden from the domain model point of view, and in the future, the value object infrastructure could also be implemented as a complex type or another way.

Additional resources

Using Enumeration classes instead of C# language enum types

Enumerations (enums for short) are a thin language wrapper around an integral type. You might want to limit their use to when you are storing one value from a closed set of values. Classifications based on gender (for example, male, female, unknown) or sizes (S, M, L, XL) are good examples. Using enums for control flow or more robust abstractions can be a code smell. This type of usage will lead to fragile code with many control flow statements checking values of the enum.

Instead, you can create Enumeration classes that enable all the rich features of an object-oriented language. However, this is not a critical issue and in many cases, for simplicity, you can still use regular enums if that is your preference.

Implementing Enumeration classes

The ordering microservice in eShopOnContainers provides a sample Enumeration base class implementation, as shown in the following example:

public abstract class Enumeration : IComparable
{
    public string Name { get; private set; }
    public int Id { get; private set; }

    protected Enumeration()
    {
    }

    protected Enumeration(int id, string name)
    {
        Id = id;
        Name = name;
    }

    public override string ToString()
    {
        return Name;
    }

    public static IEnumerable<T> GetAll<T>() where T : Enumeration, new()
    {
        var type = typeof(T);
        var fields = type.GetTypeInfo().GetFields(BindingFlags.Public |
            BindingFlags.Static |
            BindingFlags.DeclaredOnly);
        foreach (var info in fields)
        {
            var instance = new T();
            var locatedValue = info.GetValue(instance) as T;
            if (locatedValue != null)
            {
                yield return locatedValue;
            }
        }
    }

    public override bool Equals(object obj)
    {
        var otherValue = obj as Enumeration;
        if (otherValue == null)
        {
            return false;
        }
        var typeMatches = GetType().Equals(obj.GetType());
        var valueMatches = Id.Equals(otherValue.Id);
        return typeMatches && valueMatches;
    }

    public int CompareTo(object other)
    {
        return Id.CompareTo(((Enumeration)other).Id);
    }

    // Other utility methods ...
}

You can use this class as a type in any entity or value object, as for the following CardType Enumeration class.

public class CardType : Enumeration
{
    public static CardType Amex = new CardType(1, "Amex");
    public static CardType Visa = new CardType(2, "Visa");
    public static CardType MasterCard = new CardType(3, "MasterCard");

    protected CardType() { }

    public CardType(int id, string name)
        : base(id, name)
    {
    }

    public static IEnumerable<CardType> List()
    {
        return new[] { Amex, Visa, MasterCard };
    }

    // Other utility methods
}

Additional resources

Designing validations in the domain model layer

In DDD, validation rules can be thought of as invariants. The main responsibility of an aggregate is to enforce invariants across state changes for all the entities within that aggregate.

The reasoning behind this is that many bugs occur because objects are in a state they should never have been in. The following is a good explanation from Greg Young in an online discussion:

Let’s propose we now have a SendUserCreationEmailService that takes a UserProfile … how can we rationalize in that service that Name is not null? Do we check it again? Or more likely … you just don’t bother to check and “hope for the best”—you hope that someone bothered to validate it before sending it to you. Of course, using TDD one of the first tests we should be writing is that if I send a customer with a null name that it should raise an error. But once we start writing these kinds of tests over and over again we realize … “wait if we never allowed name to become null we wouldn’t have all of these tests”

Implementing validations in the domain model layer

Validations are usually implemented in domain entity constructors or in methods that can update the entity. There are multiple ways to implement validations, such as verifying data and raising exceptions if the validation fails. There are also more advanced patterns such as using the Specification pattern for validations, and the Notification pattern to return a collection of errors instead of returning an exception for each validation as it occurs.

Validating conditions and throwing exceptions

The following code example shows the simplest approach to validation in a domain entity by raising an exception. In the references table at the end of this section you can see links to more advanced implementations based on the patterns we have discussed previously.

public void SetAddress(Address address)
{
    _shippingAddress = address ?? throw new ArgumentNullException(nameof(address));
}

A better example would demonstrate the need to ensure that either the internal state did not change, or that all the mutations for a method occurred. For example, the following implementation would leave the object in an invalid state:

public void SetAddress(string line1, string line2,
    string city, string state, int zip)
{
    _shippingAddress.line1 = line1 ?? throw new ...;
    _shippingAddress.line2 = line2;
    _shippingAddress.city = city ?? throw new ...;
    _shippingAddress.state = (IsValid(state) ? state : throw new ...);
}

If the value of the state is invalid, the first address line and the city have already been changed. That might make the address invalid.
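One way to avoid this partial mutation is to validate everything first and only then mutate state. The following is a minimal sketch of that approach, reusing the Address value object shown earlier (IsValid is the same hypothetical helper from the previous example):

public void SetAddress(string street, string city, string state,
    string country, string zipcode)
{
    // Validate all inputs before touching any state.
    if (street == null) throw new ArgumentNullException(nameof(street));
    if (city == null) throw new ArgumentNullException(nameof(city));
    if (!IsValid(state)) throw new ArgumentException("Invalid state", nameof(state));

    // All checks passed; replace the whole value object atomically.
    _shippingAddress = new Address(street, city, state, country, zipcode);
}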

A similar approach can be used in the entity’s constructor, raising an exception to make sure that the entity is valid once it is created.
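For instance, the following minimal sketch (a hypothetical Buyer entity, consistent with the one used later in this section) validates its constructor arguments so that an instance can never be created in an invalid state:

public class Buyer : Entity
{
    public string IdentityGuid { get; private set; }

    public Buyer(string identityGuid)
    {
        // Fail fast: reject an invalid identity before the object exists.
        if (string.IsNullOrWhiteSpace(identityGuid))
            throw new ArgumentNullException(nameof(identityGuid));

        IdentityGuid = identityGuid;
    }
}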

Using validation attributes in the model based on data annotations

Another approach is to use validation attributes based on data annotations. Validation attributes provide a way to configure model validation, which is similar conceptually to validation on fields in database tables. This includes constraints such as assigning data types or required fields. Other types of validation include applying patterns to data to enforce business rules, such as a credit card number, phone number, or email address. Validation attributes make it easy to enforce requirements.

However, as shown in the following code, this approach might be too intrusive in a DDD model, because it takes a dependency on ModelState.IsValid from Microsoft.AspNetCore.Mvc.ModelState, which you must call from your MVC controllers. The model validation occurs prior to each controller action being invoked, and it is the controller method’s responsibility to inspect the result of calling ModelState.IsValid and react appropriately. The decision to use it depends on how tightly coupled you want the model to be with that infrastructure.

using System.ComponentModel.DataAnnotations;
// Other using statements ...

// Entity is a custom base class that has the ID
public class Product : Entity
{
    [Required]
    [StringLength(100)]
    public string Title { get; private set; }

    [Required]
    [Range(0, 999.99)]
    public decimal Price { get; private set; }

    [Required]
    [VintageProduct(1970)]
    [DataType(DataType.Date)]
    public DateTime ReleaseDate { get; private set; }

    [Required]
    [StringLength(1000)]
    public string Description { get; private set; }

    // Constructor ...
    // Additional methods for entity logic ...
}

However, from a DDD point of view, the domain model is best kept lean, with validation implemented through exceptions in your entity’s behavior methods, or by implementing the Specification and Notification patterns to enforce validation rules. Validation frameworks like data annotations in ASP.NET Core, or other validation frameworks like FluentValidation, carry a requirement to invoke the application framework. For example, the ModelState.IsValid check based on data annotations must be invoked from ASP.NET controllers.

It can make sense to use data annotations at the application layer in ViewModel classes (instead of domain entities) that will accept input, to allow for model validation within the UI layer. However, this should not be done at the exclusion of validation within the domain model.
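The following minimal sketch (a hypothetical ViewModel, not from eShopOnContainers) shows how data annotations can live on an application-layer input class while the domain entity stays free of framework dependencies:

using System.ComponentModel.DataAnnotations;

// Validated by the UI/application layer (ModelState.IsValid in MVC),
// while the Product domain entity enforces its own invariants separately.
public class CreateProductViewModel
{
    [Required]
    [StringLength(100)]
    public string Title { get; set; }

    [Required]
    [Range(0, 999.99)]
    public decimal Price { get; set; }
}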

Validating entities by implementing the Specification pattern and the Notification pattern

Finally, a more elaborate approach to implementing validations in the domain model is by implementing the Specification pattern in conjunction with the Notification pattern, as explained in some of the additional resources listed later.

It is worth mentioning that you can also use just one of those patterns—for example, validating manually with control statements, but using the Notification pattern to stack and return a list of validation errors.
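As a rough illustration of the Notification half of that combination, the following minimal sketch (hypothetical types, not from eShopOnContainers) stacks validation errors instead of throwing on the first failure:

public class Notification
{
    private readonly List<string> _errors = new List<string>();

    public void AddError(string message) => _errors.Add(message);
    public bool HasErrors => _errors.Count > 0;
    public IReadOnlyList<string> Errors => _errors;
}

public Notification ValidateAddress(Address address)
{
    var notification = new Notification();
    if (string.IsNullOrWhiteSpace(address.Street))
        notification.AddError("Street is required.");
    if (string.IsNullOrWhiteSpace(address.City))
        notification.AddError("City is required.");
    // The caller inspects HasErrors instead of catching exceptions one by one.
    return notification;
}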

Using deferred validation in the domain

There are various approaches to deal with deferred validations in the domain. In his book Implementing Domain-Driven Design, Vaughn Vernon discusses these in the section on validation.

Two-step validation

Also consider two-step validation. Use field-level validation on your command Data Transfer Objects (DTOs) and domain-level validation inside your entities. You can do this by returning a result object instead of exceptions, to make it easier to deal with the validation errors.

Using field validation with data annotations, for example, you do not duplicate the validation definition. The execution, though, can be both server side and client side in the case of DTOs (commands and ViewModels, for instance).

Additional resources

Client-side validation (validation in the presentation layers)

Even when the source of truth is the domain model and ultimately you must have validation at the domain model level, validation can still be handled at both the domain model level (server side) and the client side.

Client-side validation is a great convenience for users. It saves time they would otherwise spend waiting for a round trip to the server that might return validation errors. In business terms, even a few fractions of a second multiplied hundreds of times each day adds up to a lot of time, expense, and frustration. Straightforward and immediate validation enables users to work more efficiently and produce better quality input and output.

Just as the view model and the domain model are different, view model validation and domain model validation might be similar but serve a different purpose. If you are concerned about DRY (the Don’t Repeat Yourself principle), consider that in this case code reuse might also mean coupling, and in enterprise applications it is more important not to couple the server side to the client side than to follow the DRY principle.

Therefore, in client-side code you typically validate the ViewModels. You could also validate the client output DTOs or commands before you send them to the services.

The implementation of client-side validation depends on what kind of client application you are building. It will be different if you are validating data in an MVC web application with most of the code in .NET, in a SPA web application with that validation coded in JavaScript or TypeScript, or in a mobile app coded with Xamarin and C#.

Additional resources
Validation in Xamarin mobile apps
Validation in ASP.NET Core apps
Validation in SPA Web apps (Angular 2, TypeScript, JavaScript)

In summary, these are the most important concepts with regard to validation:

Entities and aggregates should enforce their own consistency and be valid at all times; the aggregate root is responsible for the invariants of the whole aggregate.

Validations are usually implemented in entity constructors and in the behavior methods that can update state, either by raising exceptions or through the Specification and Notification patterns.

Client-side validation (and validation of ViewModels or DTOs at the application layer) is a convenience for users, but it never replaces validation in the domain model, which remains the source of truth.

Domain events: design and implementation

Use domain events to explicitly implement side effects of changes within your domain. In other words, and using DDD terminology, use domain events to explicitly implement side effects across multiple aggregates. Optionally, for better scalability and less impact on database locks, use eventual consistency between aggregates within the same domain.

What is a domain event?

An event is something that has happened in the past. A domain event is, logically, something that happened in a particular domain, and something you want other parts of the same domain (in-process) to be aware of and potentially react to.

An important benefit of domain events is that side effects after something happened in a domain can be expressed explicitly instead of implicitly. Those side effects must be consistent so either all the operations related to the business task happen, or none of them. In addition, domain events enable a better separation of concerns among classes within the same domain.

For example, if you are just using Entity Framework and entities or even aggregates, and a use case must provoke side effects, those side effects will be implemented as an implicit concept in the coupled code after something happens. But if you just see that code, you might not know whether that code (the side effect) is part of the main operation or really a side effect. On the other hand, using domain events makes the concept explicit and part of the ubiquitous language. For example, in the eShopOnContainers application, creating an order is not just about the order; it updates or creates a buyer aggregate based on the original user, because the user is not a buyer until there is an order in place. If you use domain events, you can explicitly express that domain rule based on the ubiquitous language provided by the domain experts.

Domain events are somewhat similar to messaging-style events, with one important difference. With real messaging, message queuing, message brokers, or a service bus using AMQP, a message is always sent asynchronously and communicated across processes and machines. This is useful for integrating multiple Bounded Contexts, microservices, or even different applications. However, with a domain event, you want to raise an event from the domain operation you are currently running, and you want any side effects to occur within the same domain.

The domain events and their side effects (the actions triggered afterwards that are managed by event handlers) should occur almost immediately, usually in-process, and within the same domain. Thus, domain events could be synchronous or asynchronous. Integration events, however, should always be asynchronous.

Domain events versus integration events

Semantically, domain and integration events are the same thing: notifications about something that just happened. However, their implementation must be different. Domain events are just messages pushed to a domain event dispatcher, which could be implemented as an in-memory mediator based on an IoC container or any other method.

On the other hand, the purpose of integration events is to propagate committed transactions and updates to additional subsystems, whether they are other microservices, Bounded Contexts or even external applications. Hence, they should occur only if the entity is successfully persisted, since in many scenarios if this fails, the entire operation effectively never happened.

In addition, and as mentioned, integration events must be based on asynchronous communication between multiple microservices (other Bounded Contexts) or even external systems/applications. Thus, the event bus interface needs some infrastructure that allows inter-process and distributed communication between potentially remote services. It can be based on a commercial service bus, queues, a shared database used as a mailbox, or any other distributed, ideally push-based, messaging system.

Domain events as a preferred way to trigger side effects across multiple aggregates within the same domain

If executing a command related to one aggregate instance requires additional domain rules to be run on one or more additional aggregates, you should design and implement those side effects to be triggered by domain events. As shown in Figure 9-14, and as one of the most important use cases, a domain event should be used to propagate state changes across multiple aggregates within the same domain model.

image

Figure 9-14. Domain events to enforce consistency between multiple aggregates within the same domain

In the figure, when the user initiates an order, the OrderStarted domain event triggers creation of a Buyer object in the ordering microservice, based on the original user info from the identity microservice (with information provided in the CreateOrder command). The domain event is generated by the order aggregate when it is created in the first place.

Alternately, you can have the aggregate root subscribe to events raised by members of its aggregate (child entities). For instance, each OrderItem child entity can raise an event when the item price is higher than a specific amount, or when the product item amount is too high. The aggregate root can then receive those events and perform a global calculation or aggregation.

It is important to understand that this event-based communication is not implemented directly within the aggregates; you need to implement domain event handlers. Handling the domain events is an application concern. The domain model layer should only focus on the domain logic—things that a domain expert would understand, not application infrastructure like handlers and side-effect persistence actions using repositories. Therefore, the application layer level is where you should have domain event handlers triggering actions when a domain event is raised.

Domain events can also be used to trigger any number of application actions, and more importantly, that number must remain open to grow in the future in a decoupled way. For instance, when the order is started, you might want to publish a domain event to propagate that info to other aggregates or even to raise application actions like notifications.

The key point is the open number of actions to be executed when a domain event occurs. Eventually, the actions and rules in the domain and application will grow. The complexity or number of side-effect actions when something happens will grow, but if your code were coupled with “glue” (that is, just instantiating objects with the new keyword in C#), then every time you needed to add a new action you would need to change the original code. This could result in new bugs, because with each new requirement you would need to change the original code flow. This goes against the Open/Closed principle from SOLID. Not only that, the original class that was orchestrating the operations would grow and grow, which goes against the Single Responsibility Principle (SRP).

On the other hand, if you use domain events, you can create a fine-grained and decoupled implementation by segregating responsibilities using this approach:

  1. Send a command (for example, CreateOrder).
  2. Receive the command in a command handler.
  3. Handle domain events (within the current process) that will execute an open number of side effects in multiple aggregates or application actions. For example:

As shown in Figure 9-15, starting from the same domain event, you can handle multiple actions related to other aggregates in the domain or additional application actions you need to perform across microservices connecting with integration events and the event bus.

image

Figure 9-15. Handling multiple actions per domain

The event handlers are typically in the application layer, because you will use infrastructure objects like repositories or an application API for the microservice’s behavior. In that sense, event handlers are similar to command handlers, so both are part of the application layer. The important difference is that a command should be processed just once. A domain event could be processed zero or n times, because it can be received by multiple receivers or event handlers, each with a different purpose.

The possibility of an open number of handlers per domain event allows you to add many more domain rules without impacting your current code. For instance, implementing the following business rule that has to happen right after an event might be as easy as adding a few event handlers (or even just one):

When the total amount purchased by a customer in the store, across any number of orders, exceeds $6,000, apply a 10% off discount to every new order and notify the customer with an email about that discount for future orders.
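As a rough sketch of that idea (hypothetical handler and members, not taken from eShopOnContainers), the discount policy could be just one more handler subscribed to an existing domain event, added without touching the code that raises the event:

public class ApplyLoyaltyDiscountWhenOrderStartedDomainEventHandler
    : IAsyncNotificationHandler<OrderStartedDomainEvent>
{
    private readonly IBuyerRepository<Buyer> _buyerRepository;

    public ApplyLoyaltyDiscountWhenOrderStartedDomainEventHandler(
        IBuyerRepository<Buyer> buyerRepository)
    {
        _buyerRepository = buyerRepository;
    }

    public async Task Handle(OrderStartedDomainEvent orderStartedEvent)
    {
        // Hypothetical members for illustration: look up the buyer's
        // accumulated purchases and, past the $6,000 threshold, apply the
        // discount to the new order and queue the notification email.
        // var buyer = await _buyerRepository.FindAsync(...);
        // if (buyer.TotalPurchased > 6000m)
        // {
        //     orderStartedEvent.Order.ApplyDiscount(0.10m);
        //     // Notify the customer by email about the discount ...
        // }
        await Task.CompletedTask;
    }
}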

Implementing domain events

In C#, a domain event is simply a data-holding structure or class, like a DTO, with all the information related to what just happened in the domain, as shown in the following example:

public class OrderStartedDomainEvent : IAsyncNotification
{
    public int CardTypeId { get; private set; }
    public string CardNumber { get; private set; }
    public string CardSecurityNumber { get; private set; }
    public string CardHolderName { get; private set; }
    public DateTime CardExpiration { get; private set; }
    public Order Order { get; private set; }

    public OrderStartedDomainEvent(Order order,
        int cardTypeId, string cardNumber,
        string cardSecurityNumber, string cardHolderName,
        DateTime cardExpiration)
    {
        Order = order;
        CardTypeId = cardTypeId;
        CardNumber = cardNumber;
        CardSecurityNumber = cardSecurityNumber;
        CardHolderName = cardHolderName;
        CardExpiration = cardExpiration;
    }
}

This is essentially a class that holds all the data related to the OrderStarted event.

In terms of the ubiquitous language of the domain, since an event is something that happened in the past, the class name of the event should be represented as a past-tense verb, like OrderStartedDomainEvent or OrderShippedDomainEvent. That is how the domain event is implemented in the ordering microservice in eShopOnContainers.

As we have noted, an important characteristic of events is that since an event is something that happened in the past, it should not change. Therefore it must be an immutable class. You can see in the preceding code that the properties are read-only from outside of the object. The only way to update the object is through the constructor when you create the event object.

Raising domain events

The next question is how to raise a domain event so it reaches its related event handlers. You can use multiple approaches.

Udi Dahan originally proposed (for example, in several related posts, such as Domain Events – Take 2) using a static class for managing and raising the events. This might include a static class named DomainEvents that would raise domain events immediately when it is called, using syntax like DomainEvents.Raise(Event myEvent). Jimmy Bogard wrote a blog post (Strengthening your domain: Domain Events) that recommends a similar approach.

However, when the domain events class is static, it also dispatches to handlers immediately. This makes testing and debugging more difficult, because the event handlers with side-effect logic are executed immediately after the event is raised. When you are testing and debugging, you want to focus on just what is happening in the current aggregate classes; you do not want to suddenly be redirected to other event handlers for side effects related to other aggregates or application logic. This is why other approaches have evolved, as explained in the next section.

The deferred approach for raising and dispatching events

Instead of dispatching to a domain event handler immediately, a better approach is to add the domain events to a collection and then to dispatch those domain events right before or right after committing the transaction (as with SaveChanges in EF). (This approach was described by Jimmy Bogard in his post A better domain events pattern.)

Deciding if you send the domain events right before or right after committing the transaction is important, since it determines whether you will include the side effects as part of the same transaction or in different transactions. In the latter case, you need to deal with eventual consistency across multiple aggregates. This topic is discussed in the next section.

The deferred approach is what eShopOnContainers uses. First, you add the events happening in your entities into a collection or list of events per entity. That list should be part of the entity object, or even better, part of your base entity class, as shown in the following example:

public abstract class Entity
{
    private List<IAsyncNotification> _domainEvents;
    public List<IAsyncNotification> DomainEvents => _domainEvents;

    public void AddDomainEvent(IAsyncNotification eventItem)
    {
        _domainEvents = _domainEvents ?? new List<IAsyncNotification>();
        _domainEvents.Add(eventItem);
    }

    public void RemoveDomainEvent(IAsyncNotification eventItem)
    {
        if (_domainEvents is null) return;
        _domainEvents.Remove(eventItem);
    }

    // ...
}

When you want to raise an event, you just add it to the event collection to be placed within an aggregate entity method, as the following code shows:

var orderStartedDomainEvent = new OrderStartedDomainEvent(this, // Order object
    cardTypeId,
    cardNumber,
    cardSecurityNumber,
    cardHolderName,
    cardExpiration);
this.AddDomainEvent(orderStartedDomainEvent);

Notice that the only thing that the AddDomainEvent method is doing is adding an event to the list. No event is raised yet, and no event handler is invoked yet.

You actually want to dispatch the events later on, when you commit the transaction to the database. If you are using Entity Framework Core, that means in the SaveChanges method of your EF DbContext, as in the following code:

// EF Core DbContext
public class OrderingContext : DbContext, IUnitOfWork
{
    // ...
    public async Task<int> SaveEntitiesAsync()
    {
        // Dispatch Domain Events collection.
        // Choices:
        // A) Right BEFORE committing data (EF SaveChanges) into the DB. This makes
        //    a single transaction including side effects from the domain event
        //    handlers that are using the same DbContext with Scoped lifetime
        // B) Right AFTER committing data (EF SaveChanges) into the DB. This makes
        //    multiple transactions. You will need to handle eventual consistency and
        //    compensatory actions in case of failures.
        await _mediator.DispatchDomainEventsAsync(this);

        // After this line runs, all the changes (from the Command Handler and Domain
        // event handlers) performed through the DbContext will be committed
        var result = await base.SaveChangesAsync();
        return result;
    }
}

With this code, you dispatch the entity events to their respective event handlers.

The overall result is that you have decoupled the raising of a domain event (a simple add into a list in memory) from dispatching it to an event handler. In addition, depending on what kind of dispatcher you are using, you could dispatch the events synchronously or asynchronously.
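The DispatchDomainEventsAsync extension method itself is not shown in this section. The following is a minimal sketch of what it might look like, assuming a MediatR-style mediator that exposes a PublishAsync method and the Entity base class shown earlier; treat the details as an assumption rather than the exact eShopOnContainers implementation:

static class MediatorExtension
{
    public static async Task DispatchDomainEventsAsync(
        this IMediator mediator, OrderingContext ctx)
    {
        // Find tracked entities that have pending domain events.
        var domainEntities = ctx.ChangeTracker
            .Entries<Entity>()
            .Where(x => x.Entity.DomainEvents != null && x.Entity.DomainEvents.Any())
            .ToList();

        var domainEvents = domainEntities
            .SelectMany(x => x.Entity.DomainEvents)
            .ToList();

        // Clear the events so they are not dispatched twice.
        domainEntities.ForEach(entry => entry.Entity.DomainEvents.Clear());

        // Publish each event so every registered handler can react to it.
        foreach (var domainEvent in domainEvents)
        {
            await mediator.PublishAsync(domainEvent);
        }
    }
}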

Be aware that transactional boundaries come into significant play here. If your unit of work and transaction can span more than one aggregate (as when using EF Core and a relational database), this can work well. But if the transaction cannot span aggregates, such as when you are using a NoSQL database like Azure DocumentDB, you have to implement additional steps to achieve consistency. This is another reason why persistence ignorance is not universal; it depends on the storage system you use.

Single transaction across aggregates versus eventual consistency across aggregates

The question of whether to perform a single transaction across aggregates versus relying on eventual consistency across those aggregates is a controversial one. Many DDD authors like Eric Evans and Vaughn Vernon advocate the rule that one transaction = one aggregate and therefore argue for eventual consistency across aggregates. For example, in his book Domain-Driven Design, Eric Evans says this:

Any rule that spans Aggregates will not be expected to be up-to-date at all times. Through event processing, batch processing, or other update mechanisms, other dependencies can be resolved within some specific time. (pg. 128)

Vaughn Vernon says the following in Effective Aggregate Design. Part II: Making Aggregates Work Together:

Thus, if executing a command on one aggregate instance requires that additional business rules execute on one or more aggregates, use eventual consistency […] There is a practical way to support eventual consistency in a DDD model. An aggregate method publishes a domain event that is in time delivered to one or more asynchronous subscribers.

This rationale is based on embracing fine-grained transactions instead of transactions spanning many aggregates or entities. The idea is that in the second case, the number of database locks will be substantial in large-scale applications with high scalability needs. Embracing the fact that highly scalable applications need not have instant transactional consistency between multiple aggregates helps with accepting the concept of eventual consistency. Atomic changes are often not needed by the business, and it is in any case the responsibility of the domain experts to say whether particular operations need atomic transactions or not. If an operation always needs an atomic transaction between multiple aggregates, you might ask whether your aggregate should be larger or was not correctly designed.

However, other developers and architects like Jimmy Bogard are okay with spanning a single transaction across several aggregates—but only when those additional aggregates are related to side effects for the same original command. For instance, in A better domain events pattern, Bogard says this:

Typically, I want the side effects of a domain event to occur within the same logical transaction, but not necessarily in the same scope of raising the domain event […] Just before we commit our transaction, we dispatch our events to their respective handlers.

If you dispatch the domain events right before committing the original transaction, it is because you want the side effects of those events to be included in the same transaction. For example, if the EF DbContext SaveChanges method fails, the transaction will roll back all changes, including the result of any side effect operations implemented by the related domain event handlers. This is because the DbContext life scope is by default defined as “scoped.” Therefore, the DbContext object is shared across multiple repository objects being instantiated within the same scope or object graph. This coincides with the HttpRequest scope when developing Web API or MVC apps.

In reality, both approaches (single atomic transaction and eventual consistency) can be right. It really depends on your domain or business requirements and what the domain experts tell you. It also depends on how scalable you need the service to be (more granular transactions have less impact with regard to database locks). And it depends on how much investment you are willing to make in your code, since eventual consistency requires more complex code to detect possible inconsistencies across aggregates and to implement compensatory actions. Take into account that if you commit changes to the original aggregate and afterwards, when the events are being dispatched, there is an issue and the event handlers cannot commit their side effects, you will have inconsistencies between aggregates.

A way to allow compensatory actions would be to store the domain events in additional database tables so they can be part of the original transaction. Afterwards, you could have a batch process that detects inconsistencies and runs compensatory actions by comparing the list of events with the current state of the aggregates. The compensatory actions are part of a complex topic that will require deep analysis from your side, which includes discussing it with the business user and domain experts.

In any case, you can choose the approach you need. But the initial deferred approach—raising the events before committing, so you use a single transaction—is the simplest approach when using EF Core and a relational database. It is easier to implement and valid in many business cases. It is also the approach used in the ordering microservice in eShopOnContainers.

But how do you actually dispatch those events to their respective event handlers? What is the _mediator object that you see in the previous example? That has to do with the techniques and artifacts you can use to map between events and their event handlers.

The domain event dispatcher: mapping from events to event handlers

Once you are able to dispatch or publish the events, you need some kind of artifact that will publish the event so that every related handler can get it and process side effects based on that event.

One approach is to use a real messaging system or even an event bus, possibly based on a service bus as opposed to in-memory events. However, real messaging would be overkill for processing domain events, since you just need to process those events within the same process (that is, within the same domain and application layer).

Another way to map events to multiple event handlers is by using types registration in an IoC container so that you can dynamically infer where to dispatch the events. In other words, you need to know what event handlers need to get a specific event. Figure 9-16 shows a simplified approach for that.

image

Figure 9-16. Domain event dispatcher using IoC

You can build all the plumbing and artifacts to implement that approach by yourself. However, you can also use available libraries like MediatR, which under the covers uses your IoC container. You can therefore directly use the predefined interfaces and the mediator object’s publish/dispatch methods.

In code, you first need to register the event handler types in your IoC container, as shown in the following example:

public class MediatorModule : Autofac.Module
{
    protected override void Load(ContainerBuilder builder)
    {
        // Other registrations ...

        // Register the DomainEventHandler classes (they implement
        // IAsyncNotificationHandler<>) in the assembly holding the Domain Events
        builder.RegisterAssemblyTypes(
                typeof(ValidateOrAddBuyerAggregateWhenOrderStartedDomainEventHandler)
                    .GetTypeInfo().Assembly)
            .Where(t => t.IsClosedTypeOf(typeof(IAsyncNotificationHandler<>)))
            .AsImplementedInterfaces();

        // Other registrations ...
    }
}

The code first identifies the assembly that contains the domain event handlers by locating the assembly that holds any of the handlers (using typeof(ValidateOrAddBuyerAggregateWhenXxxx), but you could have chosen any other event handler to locate the assembly). Since all the event handlers implement the IAsyncNotificationHandler interface, the code then just searches for those types and registers all the event handlers.

How to subscribe to domain events

When you use MediatR, each event handler must use an event type that is provided on the generic parameter of the IAsyncNotificationHandler interface, as you can see in the following code:

public class ValidateOrAddBuyerAggregateWhenOrderStartedDomainEventHandler
    : IAsyncNotificationHandler<OrderStartedDomainEvent>

Based on the relationship between event and event handler, which can be considered the subscription, the MediatR artifact can discover all the event handlers for each event and trigger each of those event handlers.

How to handle domain events

Finally, the event handler usually implements application layer code that uses infrastructure repositories to obtain the required additional aggregates and to execute side-effect domain logic. The following code shows an example.

public class ValidateOrAddBuyerAggregateWhenOrderStartedDomainEventHandler
    : IAsyncNotificationHandler<OrderStartedDomainEvent>
{
    private readonly ILoggerFactory _logger;
    private readonly IBuyerRepository<Buyer> _buyerRepository;
    private readonly IIdentityService _identityService;

    public ValidateOrAddBuyerAggregateWhenOrderStartedDomainEventHandler(
        ILoggerFactory logger,
        IBuyerRepository<Buyer> buyerRepository,
        IIdentityService identityService)
    {
        // Parameter validations
        // ...
    }

    public async Task Handle(OrderStartedDomainEvent orderStartedEvent)
    {
        var cardTypeId = (orderStartedEvent.CardTypeId != 0) ?
            orderStartedEvent.CardTypeId : 1;
        var userGuid = _identityService.GetUserIdentity();
        var buyer = await _buyerRepository.FindAsync(userGuid);
        bool buyerOriginallyExisted = (buyer != null);

        if (!buyerOriginallyExisted)
        {
            buyer = new Buyer(userGuid);
        }

        buyer.VerifyOrAddPaymentMethod(cardTypeId,
            $"Payment Method on {DateTime.UtcNow}",
            orderStartedEvent.CardNumber,
            orderStartedEvent.CardSecurityNumber,
            orderStartedEvent.CardHolderName,
            orderStartedEvent.CardExpiration,
            orderStartedEvent.Order.Id);

        var buyerUpdated = buyerOriginallyExisted ?
            _buyerRepository.Update(buyer) :
            _buyerRepository.Add(buyer);

        await _buyerRepository.UnitOfWork.SaveEntitiesAsync();

        // Logging code using buyerUpdated info, etc.
    }
}

This event handler code is considered application layer code because it uses infrastructure repositories, as explained in the next section on the infrastructure-persistence layer. Event handlers could also use other infrastructure components.

Domain events can generate integration events to be published outside of the microservice boundaries

Finally, it is important to mention that you might sometimes want to propagate events across multiple microservices. That propagation is an integration event, and it could be published through an event bus from any specific domain event handler.

Conclusions on domain events

As stated, use domain events to explicitly implement side effects of changes within your domain. To use DDD terminology, use domain events to explicitly implement side effects across one or multiple aggregates. Additionally, and for better scalability and less impact on database locks, use eventual consistency between aggregates within the same domain.

Additional resources

Designing the infrastructure persistence layer

Data persistence components provide access to the data hosted within the boundaries of a microservice (that is, a microservice’s database). They contain the actual implementation of components such as repositories and Unit of Work classes, like custom EF DbContexts.

The Repository pattern

Repositories are classes or components that encapsulate the logic required to access data sources. They centralize common data access functionality, providing better maintainability and decoupling the infrastructure or technology used to access databases from the domain model layer. If you use an ORM like Entity Framework, the code that must be implemented is simplified, thanks to LINQ and strong typing. This lets you focus on the data persistence logic rather than on data access plumbing.

The Repository pattern is a well-documented way of working with a data source. In the book Patterns of Enterprise Application Architecture, Martin Fowler describes a repository as follows:

A repository performs the tasks of an intermediary between the domain model layers and data mapping, acting in a similar way to a set of domain objects in memory. Client objects declaratively build queries and send them to the repositories for answers. Conceptually, a repository encapsulates a set of objects stored in the database and operations that can be performed on them, providing a way that is closer to the persistence layer. Repositories, also, support the purpose of separating, clearly and in one direction, the dependency between the work domain and the data allocation or mapping.

Define one repository per aggregate

For each aggregate or aggregate root, you should create one repository class. In a microservice based on DDD patterns, the only channel you should use to update the database should be the repositories. This is because they have a one-to-one relationship with the aggregate root, which controls the aggregate’s invariants and transactional consistency. It is okay to query the database through other channels (as you can do following a CQRS approach), because queries do not change the state of the database. However, the transactional area—the updates—must always be controlled by the repositories and the aggregate roots.

Basically, a repository allows you to populate data in memory that comes from the database in the form of the domain entities. Once the entities are in memory, they can be changed and then persisted back to the database through transactions.

As noted earlier, if you are using the CQS/CQRS architectural pattern, the initial queries are performed by side queries outside of the domain model, implemented with simple SQL statements using Dapper. This approach is much more flexible than repositories, because you can query and join any tables you need, and these queries are not restricted by rules from the aggregates. That data goes to the presentation layer or client app.
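The following is a minimal sketch of such a side query, assuming the Dapper library, a hypothetical connection string, and hypothetical table names; the details are for illustration, not the exact eShopOnContainers query classes:

using System.Data.SqlClient;
using Dapper;

public class OrderQueries
{
    private readonly string _connectionString;

    public OrderQueries(string connectionString)
    {
        _connectionString = connectionString;
    }

    public async Task<IEnumerable<dynamic>> GetOrdersAsync()
    {
        using (var connection = new SqlConnection(_connectionString))
        {
            connection.Open();
            // Free-form SQL joining whatever tables the presentation layer
            // needs, with no aggregate or repository restrictions.
            return await connection.QueryAsync<dynamic>(
                @"SELECT o.[Id], o.[OrderDate], s.[Name] AS [Status]
                  FROM [ordering].[Orders] o
                  JOIN [ordering].[OrderStatus] s ON o.[OrderStatusId] = s.[Id]");
        }
    }
}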

If the user makes changes, the data to be updated will come from the client app or presentation layer to the application layer (such as a Web API service). When you receive a command (with data) in a command handler, you use repositories to get the data you want to update from the database. You update it in memory with the information passed with the commands, and you then add or update the data (domain entities) in the database through a transaction.

We must emphasize again that only one repository should be defined for each aggregate root, as shown in Figure 9-17. To achieve the goal of the aggregate root to maintain transactional consistency between all the objects within the aggregate, you should never create a repository for each table in the database.

image

Figure 9-17. The relationship between repositories, aggregates, and database tables

Enforcing one aggregate root per repository

It can be valuable to implement your repository design in such a way that it enforces the rule that only aggregate roots should have repositories. You can create a generic or base repository type that constrains the type of entities it works with to ensure they have the IAggregateRoot marker interface.

Thus, each repository class implemented at the infrastructure layer implements its own contract or interface, as shown in the following code:

namespace Microsoft.eShopOnContainers.Services.Ordering.Infrastructure.Repositories
{
    public class OrderRepository : IOrderRepository
    {
        // ...
    }
}

Each specific repository interface extends the generic IRepository interface:

public interface IOrderRepository : IRepository<Order>
{
    Order Add(Order order);
    // ...
}

However, a better way to have the code enforce the convention that each repository is related to a single aggregate is to make the generic repository type explicit about targeting an aggregate root. That can easily be done by constraining the generic parameter in the IRepository base interface, as in the following code:

public interface IRepository<T> where T : IAggregateRoot

The Repository pattern makes it easier to test your application logic

The Repository pattern allows you to easily test your application with unit tests. Remember that unit tests only test your code, not infrastructure, so the repository abstractions make it easier to achieve that goal.

As noted in an earlier section, it is recommended that you define and place the repository interfaces in the domain model layer so the application layer (for instance, your Web API microservice) does not depend directly on the infrastructure layer where you have implemented the actual repository classes. By doing this and using Dependency Injection in the controllers of your Web API, you can implement mock repositories that return fake data instead of data from the database. That decoupled approach allows you to create and run unit tests that can test just the logic of your application without requiring connectivity to the database.

Connections to databases can fail and, more importantly, running hundreds of tests against a database is bad for two reasons. First, it can take a long time because of the large number of tests. Second, the database records might change and impact the results of your tests, so the tests might not be consistent. Testing against the database is not a unit test but an integration test. You should have many unit tests running fast, but fewer integration tests against the databases.

In terms of separation of concerns for unit tests, your logic operates on domain entities in memory. It assumes the repository class has delivered those. Once your logic modifies the domain entities, it assumes the repository class will store them correctly. The important point here is to create unit tests against your domain model and its domain logic. Aggregate roots are the main consistency boundaries in DDD.
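As a minimal sketch (assuming the xUnit and Moq packages, neither of which is mandated by this guide), a test can substitute a mock for the IOrderRepository contract and verify behavior with no database involved:

using Moq;
using Xunit;

public class OrderingLogicTests
{
    [Fact]
    public void Adds_the_order_through_the_repository_contract()
    {
        // The mock satisfies the domain-layer contract; no infrastructure needed.
        var repository = new Mock<IOrderRepository>();

        // Hypothetical application-layer code under test would call Add here;
        // for illustration, we invoke the contract directly.
        var order = new Order(buyerId: 1, paymentMethodId: 1, address: null);
        repository.Object.Add(order);

        repository.Verify(r => r.Add(order), Times.Once());
    }
}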

The difference between the Repository pattern and the legacy Data Access class (DAL class) pattern

A data access object directly performs data access and persistence operations against storage. A repository marks the data with the operations you want to perform in the memory of a unit of work object (as in EF when using the DbContext), but these updates will not be performed immediately.

A unit of work is referred to as a single transaction that involves multiple insert, update, or delete operations. In simple terms, it means that for a specific user action (for example, registration on a website), all the insert, update, and delete transactions are handled in a single transaction. This is more efficient than handling multiple database transactions in a chattier way.

These multiple persistence operations will be performed later in a single action when your code from the application layer commands it. The decision about applying the in-memory changes to the actual database storage is typically based on the Unit of Work pattern. In EF, the Unit of Work pattern is implemented as the DBContext.

In many cases, this pattern or way of applying operations against the storage can increase application performance and reduce the possibility of inconsistencies. Also, it reduces transaction blocking in the database tables, because all the intended operations are committed as part of one transaction. This is more efficient in comparison to executing many isolated operations against the database. Therefore, the selected ORM will be able to optimize the execution against the database by grouping several update actions within the same transaction, as opposed to many small and separate transaction executions.

Repositories should not be mandatory

Custom repositories are useful for the reasons cited earlier, and that is the approach for the ordering microservice in eShopOnContainers. However, it is not an essential pattern to implement in a DDD design or even in general development in .NET.

For instance, Jimmy Bogard, when providing direct feedback for this guide, said the following:

This’ll probably be my biggest feedback. I’m really not a fan of repositories, mainly because they hide the important details of the underlying persistence mechanism. It’s why I go for MediatR for commands, too. I can use the full power of the persistence layer, and push all that domain behavior into my aggregate roots. I don’t usually want to mock my repositories – I still need to have that integration test with the real thing. Going CQRS meant that we didn’t really have a need for repositories any more.

We find repositories useful, but we acknowledge that they are not critical for your DDD design, in the way that the Aggregate pattern and rich domain model are. Therefore, use the Repository pattern or not, as you see fit.

Additional resources
The Repository pattern
Unit of Work pattern

Implementing the infrastructure persistence layer with Entity Framework Core

When you use relational databases such as SQL Server, Oracle, or PostgreSQL, a recommended approach is to implement the persistence layer based on Entity Framework (EF). EF supports LINQ and provides strongly typed objects for your model, as well as simplified persistence into your database.

Entity Framework has a long history as part of the .NET Framework. When you use .NET Core, you should also use Entity Framework Core, which runs on Windows or Linux in the same way as .NET Core. EF Core is a complete rewrite of Entity Framework, implemented with a much smaller footprint and important improvements in performance.

Introduction to Entity Framework Core

Entity Framework (EF) Core is a lightweight, extensible, and cross-platform version of the popular Entity Framework data access technology. It was introduced with .NET Core in mid-2016.

Since an introduction to EF Core is already available in Microsoft documentation, here we simply provide links to that information.

Additional resources

Infrastructure in Entity Framework Core from a DDD perspective

From a DDD point of view, an important capability of EF is the ability to use POCO domain entities, also known in EF terminology as POCO code-first entities. If you use POCO domain entities, your domain model classes are persistence-ignorant, following the Persistence Ignorance and the Infrastructure Ignorance principles.

Per DDD patterns, you should encapsulate domain behavior and rules within the entity class itself, so it can control invariants, validations, and rules when accessing any collection. Therefore, it is not a good practice in DDD to allow public access to collections of child entities or value objects. Instead, you want to expose methods that control how and when your fields and property collections can be updated, and what behavior and actions should occur when that happens.

In EF Core 1.1, to satisfy those DDD requirements you can have plain fields in your entities instead of properties with public and private setters. If you do not want an entity field to be externally accessible, you can just create the attribute or field instead of a property. There is no need to use private setters if you prefer this cleaner approach.

In a similar way, you can now have read-only access to collections by using a public property typed as IEnumerable<T>, which is backed by a private field member for the collection (like a List<>) in your entity that relies on EF for persistence. Previous versions of Entity Framework required collection properties to support ICollection<T>, which meant that any developer using the parent entity class could add or remove items from its property collections. That possibility would be against the recommended patterns in DDD.

You can use a private collection while exposing a read-only IEnumerable object, as shown in the following code example:

public class Order : Entity
{
    // Using private fields, allowed since EF Core 1.1
    private DateTime _orderDate;
    // Other fields ...
    private readonly List<OrderItem> _orderItems;
    public IEnumerable<OrderItem> OrderItems => _orderItems.AsReadOnly();

    protected Order() { }

    public Order(int buyerId, int paymentMethodId, Address address)
    {
        // Initializations ...
    }

    public void AddOrderItem(int productId, string productName,
        decimal unitPrice, decimal discount,
        string pictureUrl, int units = 1)
    {
        // Validation logic ...
        var orderItem = new OrderItem(productId, productName, unitPrice, discount,
            pictureUrl, units);
        _orderItems.Add(orderItem);
    }
}

Note that the OrderItems property can only be accessed as read-only using List<>.AsReadOnly(). This method creates a read-only wrapper around the private list so that it is protected against external updates. It is much cheaper than using the ToList method, because it does not have to copy all the items in a new collection; instead, it performs just one heap alloc operation for the wrapper instance.

EF Core provides a way to map the domain model to the physical database without contaminating the domain model. It is pure .NET POCO code, because the mapping action is implemented in the persistence layer. In that mapping action, you need to configure the fields-to-database mapping. In the following example of an OnModelCreating method, the call to SetPropertyAccessMode tells EF Core to access the OrderItems property through its field.

protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    // ...
    modelBuilder.Entity<Order>(ConfigureOrder);
    // Other entities ...
}

void ConfigureOrder(EntityTypeBuilder<Order> orderConfiguration)
{
    // Other configuration ...
    var navigation = orderConfiguration.Metadata
        .FindNavigation(nameof(Order.OrderItems));
    navigation.SetPropertyAccessMode(PropertyAccessMode.Field);
    // Other configuration ...
}

When you use fields instead of properties, the Order entity is persisted just as if it had a List<OrderItem> property. However, it exposes a single accessor (the AddOrderItem method) for adding new items to the order. As a result, behavior and data are tied together and will be consistent throughout any application code that uses the domain model.

Implementing custom repositories with Entity Framework Core

At the implementation level, a repository is simply a class with data persistence code, coordinated by a unit of work (the DbContext in EF Core) when performing updates, as shown in the following class:

// using statements ...
namespace Microsoft.eShopOnContainers.Services.Ordering.Infrastructure.Repositories
{
    public class BuyerRepository : IBuyerRepository
    {
        private readonly OrderingContext _context;

        public IUnitOfWork UnitOfWork
        {
            get
            {
                return _context;
            }
        }

        public BuyerRepository(OrderingContext context)
        {
            if (context == null)
            {
                throw new ArgumentNullException(nameof(context));
            }
            _context = context;
        }

        public Buyer Add(Buyer buyer)
        {
            return _context.Buyers
                .Add(buyer)
                .Entity;
        }

        public async Task<Buyer> FindAsync(string buyerIdentityGuid)
        {
            var buyer = await _context.Buyers
                .Include(b => b.Payments)
                .Where(b => b.FullName == buyerIdentityGuid)
                .SingleOrDefaultAsync();
            return buyer;
        }
    }
}

Note that the IBuyerRepository interface comes from the domain model layer. However, the repository implementation is done at the persistence and infrastructure layer.

The EF DbContext is passed in through the constructor via Dependency Injection. It is shared between multiple repositories within the same HTTP request scope, thanks to its default lifetime (ServiceLifetime.Scoped) in the IoC container (which can also be set explicitly with services.AddDbContext<>).

Methods to implement in a repository (updates or transactions versus queries)

Within each repository class, you should put the persistence methods that update the state of entities contained by its related aggregate. Remember there is a one-to-one relationship between an aggregate and its related repository. Take into account that an aggregate root entity object might have embedded child entities within its EF graph. For example, a buyer might have multiple payment methods as related child entities. The contract for such a repository might look like the sketch that follows.
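
The following is a sketch of an aggregate-scoped repository contract; the member list is an assumption for illustration, and the actual eShopOnContainers interfaces may differ:

// using System.Threading.Tasks;

// Domain model layer contract for the Order aggregate's repository.
// IAggregateRoot is assumed to be a marker interface for aggregate roots.
public interface IOrderRepository : IRepository<Order>
{
    Order Add(Order order);
    void Update(Order order);
    // A narrow query, used only to load the aggregate before updating it.
    Task<Order> GetAsync(int orderId);
}

public interface IRepository<T> where T : IAggregateRoot
{
    IUnitOfWork UnitOfWork { get; }
}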

Since the approach for the ordering microservice in eShopOnContainers is also based on CQS/CQRS, most of the queries are not implemented in custom repositories. Developers have the freedom to create the queries and joins they need for the presentation layer without the restrictions imposed by aggregates, custom repositories per aggregate, and DDD in general. Most of the custom repositories suggested by this guide have several update or transactional methods but just the query methods needed to get data to be updated. For example, the BuyerRepository repository implements a FindAsync method, because the application needs to know whether a particular buyer exists before creating a new buyer related to the order.

However, as mentioned, the real query methods that get data to send to the presentation layer or client apps are implemented in the CQRS query classes, which use flexible queries based on Dapper, along the lines of the following sketch.
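
The class shape, SQL, and table names below are illustrative rather than the exact eShopOnContainers code; the point is that queries bypass the domain model and repositories entirely and return data shaped for the presentation layer:

using System.Collections.Generic;
using System.Data.SqlClient;
using System.Threading.Tasks;
using Dapper;

public class OrderQueries : IOrderQueries
{
    private readonly string _connectionString;

    public OrderQueries(string connectionString)
    {
        _connectionString = connectionString;
    }

    public async Task<IEnumerable<dynamic>> GetOrderSummariesAsync(string buyerId)
    {
        using (var connection = new SqlConnection(_connectionString))
        {
            connection.Open();
            // Shape the result exactly as the presentation layer needs it.
            return await connection.QueryAsync<dynamic>(
                @"SELECT o.Id, o.OrderDate, o.OrderStatusId,
                         SUM(oi.Units * oi.UnitPrice) AS Total
                  FROM ordering.orders o
                  JOIN ordering.orderItems oi ON o.Id = oi.OrderId
                  WHERE o.BuyerId = @buyerId
                  GROUP BY o.Id, o.OrderDate, o.OrderStatusId",
                new { buyerId });
        }
    }
}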

Using a custom repository versus using EF DbContext directly

The Entity Framework DbContext class is based on the Unit of Work and Repository patterns, and can be used directly from your code, such as from an ASP.NET Core MVC controller. That is the way you can create the simplest code, as in the CRUD catalog microservice in eShopOnContainers. In cases where you want the simplest code possible, you might want to directly use the DbContext class, as many developers do.

However, implementing custom repositories provides several benefits when implementing more complex microservices or applications. The Unit of Work and Repository patterns are intended to encapsulate the infrastructure persistence layer so it is decoupled from the application and domain model layers. Implementing these patterns can facilitate the use of mock repositories simulating access to the database.

In Figure 9-18 you can see the differences between not using repositories (directly using the EF DbContext) and using custom repositories, which make it much easier to mock data access.

image

Figure 9-18. Using custom repositories versus a plain DbContext

There are multiple alternatives when mocking. You could mock just repositories or you could mock a whole unit of work. Usually mocking just the repositories is enough, and the complexity to abstract and mock a whole unit of work is usually not needed.

Later, when we focus on the application layer, you will see how Dependency Injection works in ASP.NET Core and how it is implemented when using repositories.

In short, custom repositories allow you to test code more easily with unit tests that are not impacted by the data tier state. If you run tests that also access the actual database through the Entity Framework, they are not unit tests but integration tests, which are a lot slower.
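
For example, a unit test for the CreateOrderCommandHandler shown earlier could fake the repository with a mocking library such as Moq. The following sketch assumes an xUnit test project, the IUnitOfWork shape sketched later in this section, and a hypothetical CreateFakeCreateOrderCommand helper:

// using Moq; using Xunit; using System.Threading.Tasks;

[Fact]
public async Task Handle_persists_a_new_order()
{
    var unitOfWorkMock = new Mock<IUnitOfWork>();
    unitOfWorkMock.Setup(u => u.SaveEntitiesAsync())
        .ReturnsAsync(1);   // simulate one entity written

    var orderRepositoryMock = new Mock<IOrderRepository>();
    orderRepositoryMock.SetupGet(r => r.UnitOfWork)
        .Returns(unitOfWorkMock.Object);

    var handler = new CreateOrderCommandHandler(orderRepositoryMock.Object);

    // CreateFakeCreateOrderCommand is a hypothetical helper that builds
    // a valid command with test data.
    var result = await handler.Handle(CreateFakeCreateOrderCommand());

    Assert.True(result);
    orderRepositoryMock.Verify(r => r.Add(It.IsAny<Order>()), Times.Once);
}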

If you were using DbContext directly, the only choice you would have would be to run your tests against an in-memory database (for example, the EF Core InMemory provider) with predictable data. You would not be able to control mock objects and fake data in the same way at the repository level. Of course, you could always test the MVC controllers.

EF DbContext and IUnitOfWork instance lifetime in your IoC container

The DbContext object (exposed as an IUnitOfWork object) might need to be shared among multiple repositories within the same HTTP request scope. For example, this is true when the operation being executed must deal with multiple aggregates, or simply because you are using multiple repository instances. It is also important to mention that the IUnitOfWork interface is part of the domain, not an EF type.
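
A minimal sketch of that domain-level contract follows, with member signatures inferred from how it is used in this section's examples (the real interface may differ):

using System;
using System.Threading;
using System.Threading.Tasks;

// Domain layer contract; implemented by the EF Core DbContext in the
// infrastructure layer. Signatures are assumptions based on the handler
// examples in this section.
public interface IUnitOfWork : IDisposable
{
    Task<int> SaveChangesAsync(CancellationToken cancellationToken);
    Task<int> SaveEntitiesAsync();
}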

In order to do that, the instance of the DbContext object has to have its service lifetime set to ServiceLifetime.Scoped. This is the default lifetime when registering a DbContext with services.AddDbContext in your IoC container from the ConfigureServices method of the Startup.cs file in your ASP.NET Core Web API project. The following code illustrates this.

public IServiceProvider ConfigureServices(IServiceCollection services)
{
    // Add framework services.
    services.AddMvc(options =>
    {
        options.Filters.Add(typeof(HttpGlobalExceptionFilter));
    }).AddControllersAsServices();

    services.AddEntityFrameworkSqlServer()
        .AddDbContext<OrderingContext>(options =>
        {
            options.UseSqlServer(Configuration["ConnectionString"],
                sqlop => sqlop.MigrationsAssembly(
                    typeof(Startup).GetTypeInfo().Assembly.GetName().Name));
        },
        ServiceLifetime.Scoped  // Note that Scoped is the default choice in
                                // AddDbContext. It is shown here only for
                                // pedagogic purposes.
        );

    // Other registrations; building and returning the IoC container's
    // IServiceProvider is omitted here for brevity.
}

The DbContext instantiation mode should not be configured as ServiceLifetime.Transient or ServiceLifetime.Singleton.

The repository instance lifetime in your IoC container

In a similar way, a repository's lifetime should usually be set as scoped (InstancePerLifetimeScope in Autofac). It could also be transient (InstancePerDependency in Autofac), but your service will be more efficient in regard to memory use with the scoped lifetime.

// Registering a Repository in Autofac IoC container

builder.RegisterType<OrderRepository>()

.As<IOrderRepository>()

.InstancePerLifetimeScope();

Note that using the singleton lifetime for the repository could cause serious concurrency problems when your DbContext is set to the scoped (InstancePerLifetimeScope) lifetime (the default lifetime for a DbContext).

Table mapping

Table mapping identifies the table data to be queried from and saved to the database. Previously you saw how domain entities (for example, a Product or Order entity) can be used to generate a related database schema. EF is designed strongly around the concept of conventions. Conventions address questions like "What will the name of a table be?" or "What property is the primary key?" Conventions are typically based on conventional names; for example, it is typical for the primary key to be a property that ends with Id.

By convention, each entity will be set up to map to a table with the same name as the DbSet<TEntity> property that exposes the entity on the derived context. If no DbSet<TEntity> value is provided for the given entity, the class name is used.
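
For example, with no configuration at all, a context like the following sketch maps its entities to tables named after the DbSet properties:

public class OrderingContext : DbContext
{
    // Maps to a table named "Orders" by convention.
    public DbSet<Order> Orders { get; set; }

    // Maps to a table named "Buyers" by convention.
    public DbSet<Buyer> Buyers { get; set; }
}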

Data Annotations versus Fluent API

There are many additional EF Core conventions, and most of them can be changed by using either data annotations or Fluent API, implemented within the OnModelCreating method.

Data annotations must be used on the entity model classes themselves, which is more intrusive from a DDD point of view, because you are contaminating your model with data annotations related to the infrastructure database, as the sketch below illustrates. On the other hand, Fluent API is a convenient way to change most conventions and mappings within your data persistence infrastructure layer, so the entity model remains clean and decoupled from the persistence infrastructure.
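
For contrast, here is how the table name and key for the Order entity could be expressed with data annotations (a sketch; the schema name is illustrative). Note how persistence details end up inside the domain class, which is exactly what the Fluent API approach avoids:

using System;
using System.ComponentModel.DataAnnotations;
using System.ComponentModel.DataAnnotations.Schema;

[Table("orders", Schema = "ordering")]
public class Order
{
    [Key]
    public int Id { get; set; }

    [Required]
    public DateTime OrderDate { get; set; }
    // ...
}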

Fluent API and the OnModelCreating method

As mentioned, in order to change conventions and mappings, you can use the OnModelCreating method in the DbContext class. The following example shows how we do this in the ordering microservice in eShopOnContainers.

protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    // Other entities ...
    modelBuilder.Entity<Order>(ConfigureOrder);
    // Other entities ...
}

void ConfigureOrder(EntityTypeBuilder<Order> orderConfiguration)
{
    orderConfiguration.ToTable("orders", DEFAULT_SCHEMA);
    orderConfiguration.HasKey(o => o.Id);
    orderConfiguration.Property(o => o.Id)
        .ForSqlServerUseSequenceHiLo("orderseq", DEFAULT_SCHEMA);
    orderConfiguration.Property<DateTime>("OrderDate").IsRequired();
    orderConfiguration.Property<string>("Street").IsRequired();
    orderConfiguration.Property<string>("State").IsRequired();
    orderConfiguration.Property<string>("City").IsRequired();
    orderConfiguration.Property<string>("ZipCode").IsRequired();
    orderConfiguration.Property<string>("Country").IsRequired();
    orderConfiguration.Property<int>("BuyerId").IsRequired();
    orderConfiguration.Property<int>("OrderStatusId").IsRequired();
    orderConfiguration.Property<int>("PaymentMethodId").IsRequired();

    var navigation =
        orderConfiguration.Metadata.FindNavigation(nameof(Order.OrderItems));
    // DDD Patterns comment:
    // Set as Field (new since EF Core 1.1) to access
    // the OrderItem collection property through its field
    navigation.SetPropertyAccessMode(PropertyAccessMode.Field);

    orderConfiguration.HasOne(o => o.PaymentMethod)
        .WithMany()
        .HasForeignKey("PaymentMethodId")
        .OnDelete(DeleteBehavior.Restrict);

    orderConfiguration.HasOne(o => o.Buyer)
        .WithMany()
        .HasForeignKey("BuyerId");

    orderConfiguration.HasOne(o => o.OrderStatus)
        .WithMany()
        .HasForeignKey("OrderStatusId");
}

You could set all the Fluent API mappings within the same OnModelCreating method, but it is advisable to partition that code and have multiple submethods, one per entity, as shown in the example. For particularly large models, it can even be advisable to have separate source files (static classes) for configuring different entity types.

The code in the example is explicit. However, EF Core conventions do most of this automatically, so the actual code you would need to write to achieve the same thing would be much smaller.

The Hi/Lo algorithm in EF Core

An interesting aspect of the code in the preceding example is that it uses the Hi/Lo algorithm as the key generation strategy.

The Hi/Lo algorithm is useful when you need unique keys. As a summary, the Hi/Lo algorithm assigns unique identifiers to table rows while not depending on storing the row in the database immediately. This lets you start using the identifiers right away, something that is not possible with regular sequential database IDs, which are only assigned when the row is saved.

The Hi/Lo algorithm describes a mechanism for generating safe IDs on the client side rather than in the database. Safe in this context means without collisions. This algorithm is interesting for these reasons:

It does not break the Unit of Work pattern.
It minimizes database round trips, because key values are reserved in batches from a sequence.
It generates human-readable identifiers, unlike techniques that use GUIDs.

EF Core supports HiLo with the ForSqlServerUseSequenceHiLo method, as shown in the preceding example.
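
If you wanted HiLo to be the default key generation strategy for the whole model instead of configuring it property by property, the SQL Server provider also exposes a model-level method. A sketch, assuming the EF Core 1.1 SQL Server provider API:

protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    // All keys in the model use a single "orderseq" HiLo sequence by default.
    modelBuilder.ForSqlServerUseSequenceHiLo("orderseq");
    // Per-entity configuration can still override this ...
}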

Mapping fields instead of properties

With the EF Core 1.1 feature that maps columns to fields, it is possible to avoid using properties in the entity class altogether and just map table columns to fields. A common use for that is private fields for internal state that does not need to be accessed from outside the entity.

EF Core 1.1 supports mapping a field without a related property to a column in the database. You can do this with single fields or with collections, like a List<> field. This point was mentioned earlier when we discussed modeling the domain model classes, but here you can see how that mapping is performed with the PropertyAccessMode.Field configuration shown in the previous code.

Using shadow properties in value objects for hidden IDs at the infrastructure level

Shadow properties in EF Core are properties that do not exist in your entity class model. The values and states of these properties are maintained purely in the ChangeTracker class at the infrastructure level.

From a DDD point of view, shadow properties are a convenient way to implement value objects by hiding the ID as a shadow property primary key. This is important, because a value object should not have identity (at least, you should not have the ID in the domain model layer when shaping value objects). The point here is that as of EF Core 1.1, EF Core does not have a way to implement value objects as complex types, as is possible in EF 6.x. That is why you currently need to implement a value object as an entity with a hidden ID (primary key) set as a shadow property.

As you can see in the Address value object in eShopOnContainers, in the Address model you do not see an ID:

public class Address : ValueObject
{
    public String Street { get; private set; }
    public String City { get; private set; }
    public String State { get; private set; }
    public String Country { get; private set; }
    public String ZipCode { get; private set; }

    // Constructor initializing, etc.
}

But under the covers, we need to provide an ID so that EF Core is able to persist this data in the database tables. We do that in the ConfigureAddress method of the OrderingContext.cs class at the infrastructure level, so we do not pollute the domain model with EF infrastructure code.

void ConfigureAddress(EntityTypeBuilder<Address> addressConfiguration)
{
    addressConfiguration.ToTable("address", DEFAULT_SCHEMA);

    // DDD pattern comment:
    // Implementing the Address ID as a shadow property, because the
    // address is a value object and an identity is not required for a
    // value object
    // EF Core just needs the ID so it can store it in a database table
    // See: https://docs.microsoft.com/en-us/ef/core/modeling/shadow-properties
    addressConfiguration.Property<int>("Id").IsRequired();
    addressConfiguration.HasKey("Id");
}
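
Although the domain model never sees that ID, infrastructure or query code can still reach the shadow property through the EF Core APIs when needed. The following is a minimal sketch (the context and variable names are illustrative):

// using Microsoft.EntityFrameworkCore; using System.Linq;

// Reading the shadow-property value through the change tracker.
var hiddenId = (int)orderingContext.Entry(address).Property("Id").CurrentValue;

// Referencing the shadow property in a LINQ query with EF.Property<T>.
var sameAddress = orderingContext.Set<Address>()
    .SingleOrDefault(a => EF.Property<int>(a, "Id") == hiddenId);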

Using NoSQL databases as a persistence infrastructure

When you use NoSQL databases for your infrastructure data tier, you typically do not use an ORM like Entity Framework Core. Instead you use the API provided by the NoSQL engine, such as Azure Document DB, MongoDB, Cassandra, RavenDB, CouchDB, or Azure Storage Tables.

However, when you use a NoSQL database, especially a document-oriented database like Azure Document DB, CouchDB, or RavenDB, the way you design your model with DDD aggregates is partially similar to how you do it in EF Core, with regard to the identification of aggregate roots, child entity classes, and value object classes. But, ultimately, the database selection will impact your design.

When you use a document-oriented database, you implement an aggregate as a single document, serialized in JSON or another format. However, the use of the database is transparent from a domain model code point of view. When using a NoSQL database, you still are using entity classes and aggregate root classes, but with more flexibility than when using EF Core because the persistence is not relational.

The difference is in how you persist that model. If you implemented your domain model based on POCO entity classes, agnostic to the infrastructure persistence, it might look like you could move to a different persistence infrastructure, even from relational to NoSQL. However, that should not be your goal. There are always constraints in the different databases that will push back on your design, so you will not be able to use the same model for relational and NoSQL databases. Changing persistence models is not trivial, because transactions and persistence operations will be very different.

For example, in a document-oriented database, it is okay for an aggregate root to have multiple child collection properties. In a relational database, querying multiple child collection properties performs poorly, because EF generates a UNION ALL SQL statement for them. Having the same domain model for relational databases and NoSQL databases is not simple, and you should not try it. You really have to design your model with an understanding of how the data is going to be used in each particular database.

A benefit when using NoSQL databases is that the entities are more denormalized, so you do not set a table mapping. Your domain model can be more flexible than when using a relational database.

When you design your domain model based on aggregates, moving to NoSQL and document-oriented databases might be even easier than using a relational database, because the aggregates you design are similar to serialized documents in a document-oriented database. Then you can include in those “bags” all the information you might need for that aggregate.

For instance, the following JSON code is a sample implementation of an order aggregate when using a document-oriented database. It is similar to the order aggregate we implemented in the eShopOnContainers sample, but without using EF Core underneath.

{
    "id": "2017001",
    "orderDate": "2/25/2017",
    "buyerId": "1234567",
    "address": [
        {
            "street": "100 One Microsoft Way",
            "city": "Redmond",
            "state": "WA",
            "zip": "98052",
            "country": "U.S."
        }
    ],
    "orderItems": [
        { "id": 20170011, "productId": "123456", "productName": ".NET T-Shirt",
          "unitPrice": 25, "units": 2, "discount": 0 },
        { "id": 20170012, "productId": "123457", "productName": ".NET Mug",
          "unitPrice": 15, "units": 1, "discount": 0 }
    ]
}

When you use a C# model to implement the aggregate to be used by something like the Azure Document DB SDK, the aggregate is similar to the C# POCO classes used with EF Core. The difference is in the way to use them from the application and infrastructure layers, as in the following code:

// C# EXAMPLE OF AN ORDER AGGREGATE BEING PERSISTED WITH THE DOCUMENTDB API
// *** Domain model code ***
// Aggregate: Create an Order object with its child entities and/or value objects.
// Then, use the aggregate root's methods to add the nested objects so invariants
// and logic are consistent across the nested properties (value objects and
// entities). This can be saved as JSON as is, without converting it into
// rows/columns.
Order orderAggregate = new Order
{
    Id = "2017001",
    OrderDate = new DateTime(2005, 7, 1),
    BuyerId = "1234567",
    PurchaseOrderNumber = "PO18009186470"
};

Address address = new Address
{
    Street = "100 One Microsoft Way",
    City = "Redmond",
    State = "WA",
    Zip = "98052",
    Country = "U.S."
};

orderAggregate.UpdateAddress(address);

OrderItem orderItem1 = new OrderItem
{
    Id = 20170011,
    ProductId = "123456",
    ProductName = ".NET T-Shirt",
    UnitPrice = 25,
    Units = 2,
    Discount = 0
};

OrderItem orderItem2 = new OrderItem
{
    Id = 20170012,
    ProductId = "123457",
    ProductName = ".NET Mug",
    UnitPrice = 15,
    Units = 1,
    Discount = 0
};

// Using methods with domain logic within the entity. No anemic domain model.
orderAggregate.AddOrderItem(orderItem1);
orderAggregate.AddOrderItem(orderItem2);
// *** End of domain model code ***
// ...
// *** Infrastructure code using the Document DB client API ***
Uri collectionUri = UriFactory.CreateDocumentCollectionUri(databaseName,
                                                           collectionName);
await client.CreateDocumentAsync(collectionUri, orderAggregate);

// As your app evolves, let's say your object has a new schema. You can insert
// OrderV2 objects without any changes to the database tier.
OrderV2 newOrder = GetOrderV2Sample("IdForSalesOrder2");
await client.CreateDocumentAsync(collectionUri, newOrder);

You can see that the way you work with your domain model can be similar to the way you use it in your domain model layer when the infrastructure is EF. You still use the same aggregate root methods to ensure consistency, invariants, and validations within the aggregate.

However, when you persist your model into the NoSQL database, the code and API change dramatically compared to EF Core code or any other code related to relational databases.

Designing the microservice application layer and Web API

Using SOLID principles and Dependency Injection

SOLID principles are critical techniques to be used in any modern and mission-critical application, such as developing a microservice with DDD patterns. SOLID is an acronym that groups five fundamental principles:

Single Responsibility principle
Open/Closed principle
Liskov Substitution principle
Interface Segregation principle
Dependency Inversion principle

SOLID is more about how you design your application or microservice internal layers and about decoupling dependencies between them. It is not related to the domain, but to the application's technical design. The final principle, the Dependency Inversion principle, allows you to decouple the infrastructure layer from the rest of the layers, which enables a better-decoupled implementation of the DDD layers.

Dependency Injection (DI) is one way to implement the Dependency Inversion principle. It is a technique for achieving loose coupling between objects and their dependencies. Rather than directly instantiating collaborators, or using static references, the objects that a class needs in order to perform its actions are provided to (or "injected into") the class. Most often, classes declare their dependencies via their constructor, allowing them to follow the Explicit Dependencies principle. DI is usually based on specific Inversion of Control (IoC) containers. ASP.NET Core provides a simple built-in IoC container, but you can also use your favorite IoC container, like Autofac or Ninject.

By following the SOLID principles, your classes will tend naturally to be small, well-factored, and easily tested. But how can you know if too many dependencies are being injected into your classes? If you use DI through the constructor, it will be easy to detect that by just looking at the number of parameters for your constructor. If there are too many dependencies, this is generally a sign (a code smell) that your class is trying to do too much, and is probably violating the Single Responsibility principle.

It would take another guide to cover SOLID in detail. Therefore, this guide requires only a minimal knowledge of these topics.

Implementing the microservice application layer using the Web API

Using Dependency Injection to inject infrastructure objects into your application layer

As mentioned previously, the application layer can be implemented as part of the artifact you are building, such as within a Web API project or an MVC web app project. In the case of a microservice built with ASP.NET Core, the application layer will usually be your Web API library. If you want to separate what is coming from ASP.NET Core (its infrastructure plus your controllers) from your custom application layer code, you could also place your application layer in a separate class library, but that is optional.

For instance, the application layer code of the ordering microservice is directly implemented as part of the Ordering.API project (an ASP.NET Core Web API project), as shown in Figure 9-19.

image

Figure 9-19. The application layer in the Ordering.API ASP.NET Core Web API project

ASP.NET Core includes a simple built-in IoC container (represented by the IServiceProvider interface) that supports constructor injection by default, and ASP.NET Core makes certain services available through DI. ASP.NET Core uses the term service for any of the types you register so that they can be injected through DI. You configure the built-in container's services in the ConfigureServices method in your application's Startup class. The types you register there become the dependencies that other types receive through DI.

Typically, you want to inject dependencies that implement infrastructure objects. A very typical dependency to inject is a repository. But you could inject any other infrastructure dependency that you may have. For simpler implementations, you could directly inject your Unit of Work pattern object (the EF DbContext object), because the DbContext is also the implementation of your infrastructure persistence objects.

In the following example, you can see how .NET Core is injecting the required repository objects through the constructor. The class is a command handler, which we will cover in the next section.

// Sample command handler
public class CreateOrderCommandHandler
    : IAsyncRequestHandler<CreateOrderCommand, bool>
{
    private readonly IOrderRepository _orderRepository;

    // Constructor where dependencies are injected
    public CreateOrderCommandHandler(IOrderRepository orderRepository)
    {
        if (orderRepository == null)
        {
            throw new ArgumentNullException(nameof(orderRepository));
        }
        _orderRepository = orderRepository;
    }

    public async Task<bool> Handle(CreateOrderCommand message)
    {
        //
        // ... Additional code ...
        //
        // Create the Order aggregate root
        // Add child entities and value objects through the Order aggregate root
        // methods and constructor, so validations, invariants, and business logic
        // make sure that consistency is preserved across the whole aggregate
        var address = new Address(message.Street, message.City, message.State,
                                  message.Country, message.ZipCode);
        var order = new Order(address, message.CardTypeId, message.CardNumber,
                              message.CardSecurityNumber,
                              message.CardHolderName,
                              message.CardExpiration);
        foreach (var item in message.OrderItems)
        {
            order.AddOrderItem(item.ProductId, item.ProductName, item.UnitPrice,
                               item.Discount, item.PictureUrl, item.Units);
        }

        // Persist the Order through the repository
        _orderRepository.Add(order);
        var result = await _orderRepository.UnitOfWork
            .SaveEntitiesAsync();
        return result > 0;
    }
}

The class uses the injected repositories to execute the transaction and persist the state changes. It does not matter whether that class is a command handler, an ASP.NET Core Web API controller method, or a DDD Application Service. It is ultimately a simple class that coordinates repositories, domain entities, and other application concerns, in a fashion similar to a command handler. Dependency Injection works the same way for all the mentioned classes, as in the example using DI based on the constructor.

Registering the dependency implementation types and interfaces or abstractions

Before you use the objects injected through constructors, you need to know where to register the interfaces and classes that produce the objects injected into your application classes through DI. (Like DI based on the constructor, as shown previously.)

Using the built-in IoC container provided by ASP.NET Core

When you use the built-in IoC container provided by ASP.NET Core, you register the types you want to inject in the ConfigureServices method in the Startup.cs file, as in the following code:

// Registration of types into ASP.NET Core built-in container
public void ConfigureServices(IServiceCollection services)
{
    // Register out-of-the-box framework services.
    services.AddDbContext<CatalogContext>(c =>
    {
        c.UseSqlServer(Configuration["ConnectionString"]);
    },
    ServiceLifetime.Scoped);

    services.AddMvc();

    // Register custom application dependencies.
    services.AddScoped<IMyCustomRepository, MyCustomSQLRepository>();
}

The most common pattern when registering types in an IoC container is to register a pair of types: an interface and its related implementation class. Then when you request an object from the IoC container through any constructor, you request an object of a certain type of interface. For instance, in the previous example, the last line states that when any of your constructors have a dependency on IMyCustomRepository (interface or abstraction), the IoC container will inject an instance of the MyCustomSQLRepository implementation class.

Using the Scrutor library for automatic type registration

When using DI in .NET Core, you might want to be able to scan an assembly and automatically register its types by convention. This feature is not currently available in the built-in ASP.NET Core container. However, you can use the Scrutor library for that, as in the following sketch. This approach is convenient when you have dozens of types that need to be registered in your IoC container.
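
A sketch of convention-based registration with Scrutor; the marker type and the naming convention are illustrative:

// using Scrutor; (extension methods over IServiceCollection)

// Scan the assembly containing MyCustomSQLRepository and register every
// class whose name ends in "Repository" against its implemented
// interfaces, with a scoped lifetime.
services.Scan(scan => scan
    .FromAssemblyOf<MyCustomSQLRepository>()
    .AddClasses(classes => classes.Where(t => t.Name.EndsWith("Repository")))
    .AsImplementedInterfaces()
    .WithScopedLifetime());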

Using Autofac as an IoC container

You can also use additional IoC containers and plug them into the ASP.NET Core pipeline, as in the ordering microservice in eShopOnContainers, which uses Autofac. When using Autofac you typically register the types via modules, which allow you to split the registration types between multiple files depending on where your types are, just as you could have the application types distributed across multiple class libraries.

For example, the following is the Autofac application module for the Ordering.API Web API project with the types you will want to inject.

public class ApplicationModule
    : Autofac.Module
{
    public string QueriesConnectionString { get; }

    public ApplicationModule(string qconstr)
    {
        QueriesConnectionString = qconstr;
    }

    protected override void Load(ContainerBuilder builder)
    {
        builder.Register(c => new OrderQueries(QueriesConnectionString))
            .As<IOrderQueries>()
            .InstancePerLifetimeScope();

        builder.RegisterType<BuyerRepository>()
            .As<IBuyerRepository>()
            .InstancePerLifetimeScope();

        builder.RegisterType<OrderRepository>()
            .As<IOrderRepository>()
            .InstancePerLifetimeScope();

        builder.RegisterType<RequestManager>()
            .As<IRequestManager>()
            .InstancePerLifetimeScope();
    }
}

The registration process and concepts are very similar to the way you register types with the built-in ASP.NET Core IoC container, but the syntax when using Autofac is a bit different.

In the example code, the abstraction IOrderRepository is registered along with the implementation class OrderRepository. This means that whenever a constructor is declaring a dependency through the IOrderRepository abstraction or interface, the IoC container will inject an instance of the OrderRepository class.

The instance scope type determines how an instance is shared between requests for the same service or dependency. When a request is made for a dependency, the IoC container can return the following:

A single instance per lifetime scope (referred to in the ASP.NET Core IoC container as scoped).
A new instance per dependency (referred to as transient).
A single instance shared across all objects using the IoC container (referred to as singleton).

Implementing the Command and Command Handler patterns

In the DI-through-constructor example shown in the previous section, the IoC container was injecting repositories through a constructor in a class. But exactly where were they injected? In a simple Web API (for example, the catalog microservice in eShopOnContainers), you inject them at the MVC controllers level, in a controller constructor. However, in the initial code of this section (the CreateOrderCommandHandler class from the Ordering.API service in eShopOnContainers), the injection of dependencies is done through the constructor of a particular command handler. Let us explain what a command handler is and why you would want to use it.

The Command pattern is intrinsically related to the CQRS pattern that was introduced earlier in this guide. CQRS has two sides. The first area is queries, using simplified queries with the Dapper micro ORM, which was explained previously. The second area is commands, which are the starting point for transactions, and the input channel from outside the service.

As shown in Figure 9-20, the pattern is based on accepting commands from the client side, processing them based on the domain model rules, and finally persisting the states with transactions.

image

Figure 9-20. High-level view of the commands or “transactional side” in a CQRS pattern

The command class

A command is a request for the system to perform an action that changes the state of the system. Commands are imperative, and should be processed just once.

Since commands are imperatives, they are typically named with a verb in the imperative mood (for example, “create” or “update”), and they might include the aggregate type, such as CreateOrderCommand. Unlike an event, a command is not a fact from the past; it is only a request, and thus may be refused.

Commands can originate from the UI as a result of a user initiating a request, or from a process manager when the process manager is directing an aggregate to perform an action.

An important characteristic of a command is that it should be processed just once by a single receiver. This is because a command is a single action or transaction you want to perform in the application. For example, the same order creation command should not be processed more than once. This is an important difference between commands and events. Events may be processed multiple times, because many systems or microservices might be interested in the event.

In addition, it is important that a command be processed only once in case the command is not idempotent. A command is idempotent if it can be executed multiple times without changing the result, either because of the nature of the command, or because of the way the system handles the command.

It is a good practice to make your commands and updates idempotent when it makes sense under your domain's business rules and invariants. For instance, to use the same example, if for any reason (retry logic, hacking, etc.) the same CreateOrder command reaches your system multiple times, you should be able to identify it and ensure that you do not create multiple orders. To do so, you need to attach some kind of identity to the operations and identify whether the command or update was already processed, as sketched below.
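
The following sketch shows that idea. The IRequestManager name echoes the type registered in the Autofac ApplicationModule shown earlier, but its members and the client-generated requestId are assumptions for illustration:

using System;
using System.Threading.Tasks;

// Members are assumptions for illustration; the real eShopOnContainers
// implementation may differ.
public interface IRequestManager
{
    Task<bool> ExistAsync(Guid id);
    Task RecordRequestAsync(Guid id);
}

public class IdempotentCreateOrderHandler
{
    private readonly IRequestManager _requestManager;

    public IdempotentCreateOrderHandler(IRequestManager requestManager)
    {
        _requestManager = requestManager;
    }

    // requestId is a client-generated identifier attached to the command.
    public async Task<bool> HandleAsync(Guid requestId /*, command data */)
    {
        if (await _requestManager.ExistAsync(requestId))
        {
            // Duplicate delivery detected; do not create a second order.
            return true;
        }
        await _requestManager.RecordRequestAsync(requestId);
        // ... create and persist the order as in the handler examples
        return true;
    }
}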

You send a command to a single receiver; you do not publish a command. Publishing is for integration events that state a fact: that something has happened and might be interesting for event receivers. In the case of events, the publisher has no concerns about which receivers get the event or what they do with it. Integration events, however, are a different story, and were already introduced in previous sections.

A command is implemented with a class that contains data fields or collections with all the information that is needed in order to execute that command. A command is a special kind of Data Transfer Object (DTO), one that is specifically used to request changes or transactions. The command itself is based on exactly the information that is needed for processing the command, and nothing more.

The following example shows the simplified CreateOrderCommand class. This is an immutable command that is used in the ordering microservice in eShopOnContainers.

// DDD and CQRS patterns comment
// Note that it is recommended that you implement immutable commands.
// In this case, immutability is achieved by having all the setters private
// plus being able to set the data just once, when creating the object
// through the constructor.

// References on immutable commands:
// http://cqrs.nu/Faq
// https://docs.spine3.org/motivation/immutability.html
// http://blog.gauffin.org/2012/06/griffin-container-introducing-command-support/
// https://msdn.microsoft.com/en-us/library/bb383979.aspx

[DataContract]
public class CreateOrderCommand
    : IAsyncRequest<bool>
{
    [DataMember]
    private readonly List<OrderItemDTO> _orderItems;
    [DataMember]
    public string City { get; private set; }
    [DataMember]
    public string Street { get; private set; }
    [DataMember]
    public string State { get; private set; }
    [DataMember]
    public string Country { get; private set; }
    [DataMember]
    public string ZipCode { get; private set; }
    [DataMember]
    public string CardNumber { get; private set; }
    [DataMember]
    public string CardHolderName { get; private set; }
    [DataMember]
    public DateTime CardExpiration { get; private set; }
    [DataMember]
    public string CardSecurityNumber { get; private set; }
    [DataMember]
    public int CardTypeId { get; private set; }
    [DataMember]
    public IEnumerable<OrderItemDTO> OrderItems => _orderItems;

    public CreateOrderCommand()
    {
        _orderItems = new List<OrderItemDTO>();
    }

    public CreateOrderCommand(List<OrderItemDTO> orderItems, string city,
        string street, string state, string country, string zipcode,
        string cardNumber, string cardHolderName, DateTime cardExpiration,
        string cardSecurityNumber, int cardTypeId) : this()
    {
        _orderItems = orderItems;
        City = city;
        Street = street;
        State = state;
        Country = country;
        ZipCode = zipcode;
        CardNumber = cardNumber;
        CardHolderName = cardHolderName;
        CardSecurityNumber = cardSecurityNumber;
        CardTypeId = cardTypeId;
        CardExpiration = cardExpiration;
    }

    public class OrderItemDTO
    {
        public int ProductId { get; set; }
        public string ProductName { get; set; }
        public decimal UnitPrice { get; set; }
        public decimal Discount { get; set; }
        public int Units { get; set; }
        public string PictureUrl { get; set; }
    }
}

Basically, the command class contains all the data you need for performing a business transaction by using the domain model objects. Thus, commands are simply data structures that contain read-only data, and no behavior. The command’s name indicates its purpose. In many languages like C#, commands are represented as classes, but they are not true classes in the real object-oriented sense.

As an additional characteristic, commands are immutable, because the expected usage is that they are processed directly by the domain model. They do not need to change during their projected lifetime. In a C# class, immutability can be achieved by not having any setters or other methods that change internal state.

For example, the command class for creating an order is probably similar in terms of data to the order you want to create, but you probably do not need the same attributes. For instance, CreateOrderCommand does not have an order ID, because the order has not been created yet.

Many command classes can be simple, requiring only a few fields about some state that needs to be changed. That would be the case if you are just changing the status of an order from “in process” to “paid” or “shipped” by using a command similar to the following:

[DataContract]
public class UpdateOrderStatusCommand
    : IAsyncRequest<bool>
{
    [DataMember]
    public string Status { get; private set; }
    [DataMember]
    public string OrderId { get; private set; }
    [DataMember]
    public string BuyerIdentityGuid { get; private set; }
}

Some developers make their UI request objects separate from their command DTOs, but that is just a matter of preference. It is a tedious separation with not much added value, and the objects are almost exactly the same shape. For instance, in eShopOnContainers, the commands come directly from the client side.

The Command Handler class

You should implement a specific command handler class for each command. That is how the pattern works, and it is where you will use the command object, the domain objects, and the infrastructure repository objects. The command handler is in fact the heart of the application layer in terms of CQRS and DDD. However, all the domain logic should be contained within the domain classes—within the aggregate roots (root entities), child entities, or domain services, but not within the command handler, which is a class from the application layer.

A command handler receives a command and obtains a result from the aggregate that is used. The result should be either successful execution of the command, or an exception. In the case of an exception, the system state should be unchanged.

The command handler usually takes the following steps:

It receives the command object, like a DTO (from the mediator or another infrastructure object).
It validates that the command is valid (if not validated by the mediator).
It instantiates the aggregate root instance that is the target of the current command.
It executes the method on the aggregate root instance, getting the required data from the command.
It persists the new state of the aggregate to its related database. This last operation is the actual transaction.

Typically, a command handler deals with a single aggregate driven by its aggregate root (root entity). If multiple aggregates should be impacted by the reception of a single command, you could use domain events to propagate states or actions across multiple aggregates, as sketched in the following example.
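
A minimal sketch of that idea, assuming an Entity base class that can collect domain events; AddDomainEvent and the event type are illustrative names:

// Sketch: the aggregate records a domain event when its state changes;
// a separate event handler can then update other affected aggregates.
public class Order : Entity
{
    public void SetPaidStatus()
    {
        // ... validations and state changes on this aggregate ...

        // Record the event; a handler elsewhere reacts to it, for example
        // by updating a Buyer-related aggregate within the same transaction.
        AddDomainEvent(new OrderStatusChangedToPaidDomainEvent(Id));
    }
}

public class OrderStatusChangedToPaidDomainEvent
{
    public int OrderId { get; }

    public OrderStatusChangedToPaidDomainEvent(int orderId)
    {
        OrderId = orderId;
    }
}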

The important point here is that when a command is being processed, all the domain logic should be inside the domain model (the aggregates), fully encapsulated and ready for unit testing. The command handler just acts as a way to get the domain model from the database, and as the final step, to tell the infrastructure layer (repositories) to persist the changes when the model is changed. The advantage of this approach is that you can refactor the domain logic in an isolated, fully encapsulated, rich, behavioral domain model without changing code in the application or infrastructure layers, which are the plumbing level (command handlers, Web API, repositories, etc.).

When command handlers get complex, with too much logic, that can be a code smell. Review them, and if you find domain logic, refactor the code to move that domain behavior to the methods of the domain objects (the aggregate root and child entity).

As an example of a command handler class, the following code shows the same CreateOrderCommandHandler class that you saw at the beginning of this chapter. In this case we have highlighted the Handle method and the operations with the domain model objects/aggregates.

public class CreateOrderCommandHandler
    : IAsyncRequestHandler<CreateOrderCommand, bool>
{
    private readonly IBuyerRepository _buyerRepository;
    private readonly IOrderRepository _orderRepository;

    public CreateOrderCommandHandler(IBuyerRepository buyerRepository,
                                     IOrderRepository orderRepository)
    {
        if (buyerRepository == null)
        {
            throw new ArgumentNullException(nameof(buyerRepository));
        }
        if (orderRepository == null)
        {
            throw new ArgumentNullException(nameof(orderRepository));
        }
        _buyerRepository = buyerRepository;
        _orderRepository = orderRepository;
    }

    public async Task<bool> Handle(CreateOrderCommand message)
    {
        //
        // Additional code (getting the buyer and payment method, etc.) ...
        //
        // Create the Order aggregate root
        // Add child entities and value objects through the Order aggregate root
        // methods and constructor, so validations, invariants, and business logic
        // make sure that consistency is preserved across the whole aggregate
        var order = new Order(buyer.Id, payment.Id,
                              new Address(message.Street,
                                          message.City, message.State,
                                          message.Country, message.ZipCode));
        foreach (var item in message.OrderItems)
        {
            order.AddOrderItem(item.ProductId, item.ProductName, item.UnitPrice,
                               item.Discount, item.PictureUrl, item.Units);
        }

        // Persist the Order through the aggregate's repository
        _orderRepository.Add(order);
        var result = await _orderRepository.UnitOfWork.SaveEntitiesAsync();
        return result > 0;
    }
}

A command handler can also take additional steps, such as raising integration events after the transaction succeeds so that other microservices can react to the change.

The Command process pipeline: how to trigger a command handler

The next question is how to invoke a command handler. You could manually call it from each related ASP.NET Core controller. However, that approach would be too coupled and is not ideal.

The other two main options, which are the recommended approaches, are using an in-memory Mediator pattern and using asynchronous message queues, as described in the following sections.

Using the Mediator pattern (in-memory) in the command pipeline

As shown in Figure 9-21, in a CQRS approach you use an intelligent mediator, similar to an in-memory bus, which is smart enough to redirect to the right command handler based on the type of the command or DTO being received. The single black arrows between components represent the dependencies between objects (in many cases, injected through DI) with their related interactions.

image

Figure 9-21. Using the Mediator pattern in process in a single CQRS microservice

The reason that using the Mediator pattern makes sense is that in enterprise applications, processing requests can get complicated. You want to be able to add an open number of cross-cutting concerns like logging, validations, audit, and security. In these cases, you can rely on a mediator pipeline (see Mediator pattern) to provide a means for these extra behaviors or cross-cutting concerns.

A mediator is an object that encapsulates the "how" of this process: it coordinates execution based on state, the way a command handler is invoked, or the payload you provide to the handler. With a mediator component you can apply cross-cutting concerns in a centralized and transparent way by applying decorators (or pipeline behaviors since MediatR 3). (For more information, see the Decorator pattern.)

Decorators and behaviors are similar to Aspect Oriented Programming (AOP), only applied to a specific process pipeline managed by the mediator component. Aspects in AOP that implement cross-cutting concerns are applied based on aspect weavers injected at compilation time or based on object call interception. Both typical AOP approaches are sometimes said to work “like magic,” because it is not easy to see how AOP does its work. When dealing with serious issues or bugs, AOP can be difficult to debug. On the other hand, these decorators/behaviors are explicit and applied only in the context of the mediator, so debugging is much more predictable and easy.

For example, in the eShopOnContainers ordering microservice, we implemented two sample decorators, a LogDecorator class and a ValidatorDecorator class. The decorator’s implementation is explained in the next section. Note that in a future version, eShopOnContainers will migrate to MediatR 3 and move to behaviors instead of using decorators.

Using message queues (out-of-proc) in the command’s pipeline

Another choice is to use asynchronous messages based on brokers or message queues, as shown in Figure 9-22. That option could also be combined with the mediator component right before the command handler.

image

Figure 9-22. Using message queues (out of process and inter-process communication) with CQRS commands

Using message queues to accept the commands can further complicate your command’s pipeline, because you will probably need to split the pipeline into two processes connected through the external message queue. Still, it should be used if you need to have improved scalability and performance based on asynchronous messaging. Consider that in the case of Figure 9-22, the controller just posts the command message into the queue and returns. Then the command handlers process the messages at their own pace. That is a great benefit of queues—the message queue can act as a buffer in cases when hyper scalability is needed, such as for stocks or any other scenario with a high volume of ingress data.

However, because of the asynchronous nature of message queues, you need to figure out how to communicate with the client application about the success or failure of the command’s process. As a rule, you should never use “fire and forget” commands. Every business application needs to know if a command was processed successfully, or at least validated and accepted.

Thus, being able to respond to the client after validating a command message that was submitted to an asynchronous queue adds complexity to your system, as compared to an in-process command process that returns the operation’s result after running the transaction. Using queues, you might need to return the result of the command process through other operation result messages, which will require additional components and custom communication in your system.

Additionally, async commands are one-way commands, which in many cases might not be needed, as is explained in the following interesting exchange between Burtsev Alexey and Greg Young in an online conversation:

[Burtsev Alexey] I find lots of code where people use async command handling or one way command messaging without any reason to do so (they are not doing some long operation, they are not executing external async code, they do not even cross application boundary to be using message bus). Why do they introduce this unnecessary complexity? And actually, I haven’t seen a CQRS code example with blocking command handlers so far, though it will work just fine in most cases.

[Greg Young] […] an asynchronous command doesn’t exist; it’s actually another event. If I must accept what you send me and raise an event if I disagree, it’s no longer you telling me to do something. It’s you telling me something has been done. This seems like a slight difference at first, but it has many implications.

Asynchronous commands greatly increase the complexity of a system, because there is no simple way to indicate failures. Therefore, asynchronous commands are not recommended other than when scaling requirements are needed or in special cases when communicating the internal microservices through messaging. In those cases, you must design a separate reporting and recovery system for failures.

In the initial version of eShopOnContainers, we decided to use synchronous command processing, started from HTTP requests and driven by the Mediator pattern. That easily allows you to return the success or failure of the process, as in the CreateOrderCommandHandler implementation.

In any case, this should be a decision based on your application’s or microservice’s business requirements.

Implementing the command process pipeline with a mediator pattern (MediatR)

As a sample implementation, this guide proposes using the in-process pipeline based on the Mediator pattern to drive command ingestion and to route commands, in memory, to the right command handlers. The guide also proposes applying decorators or behaviors in order to separate cross-cutting concerns.

For implementation in .NET Core, there are multiple open-source libraries available that implement the Mediator pattern. The library used in this guide is the MediatR open-source library (created by Jimmy Bogard), but you could use another approach. MediatR is a small and simple library that allows you to process in-memory messages like a command, while applying decorators or behaviors.

Using the Mediator pattern helps you to reduce coupling and to isolate the concerns of the requested work, while automatically connecting to the handler that performs that work—in this case, to command handlers.

Another good reason to use the Mediator pattern was explained by Jimmy Bogard when reviewing this guide:

I think it might be worth mentioning testing here – it provides a nice consistent window into the behavior of your system. Request-in, response-out. We’ve found that aspect quite valuable in building consistently behaving tests.

First, let us take a look at the controller code where you would actually use the mediator object. If you were not using the mediator object, you would need to inject all the dependencies for that controller (things like a logger object and others), so the constructor would be quite complicated. On the other hand, if you use the mediator object, the constructor of your controller can be a lot simpler, with just a few dependencies instead of one dependency per cross-cutting operation, as in the following example:

public class OrdersController : Controller
{
    public OrdersController(IMediator mediator,
                            IOrderQueries orderQueries)
    // ...
You can see that the mediator provides a clean and lean Web API controller constructor. In addition, within the controller methods, the code to send a command to the mediator object is almost one line:

[Route("new")]
[HttpPost]
public async Task<IActionResult> CreateOrder(
    [FromBody]CreateOrderCommand createOrderCommand)
{
    var commandResult = await _mediator.SendAsync(createOrderCommand);

    return commandResult ? (IActionResult)Ok() : (IActionResult)BadRequest();
}

In order for MediatR to be aware of your command handler classes, you need to register the mediator classes and the command handler classes in your IoC container. In eShopOnContainers, MediatR is plugged in through Autofac, but you can also use the built-in ASP.NET Core IoC container or any other container supported by MediatR.

The following code shows how to register Mediator’s types and commands when using Autofac modules.

public class MediatorModule : Autofac.Module
{
    protected override void Load(ContainerBuilder builder)
    {
        builder.RegisterAssemblyTypes(typeof(IMediator).GetTypeInfo().Assembly)
            .AsImplementedInterfaces();

        builder.RegisterAssemblyTypes(typeof(CreateOrderCommand)
                .GetTypeInfo().Assembly)
            .As(o => o.GetInterfaces()
                .Where(i => i.IsClosedTypeOf(typeof(IAsyncRequestHandler<,>)))
                .Select(i => new KeyedService("IAsyncRequestHandler", i)));

        builder.RegisterGenericDecorator(typeof(LogDecorator<,>),
            typeof(IAsyncRequestHandler<,>),
            "IAsyncRequestHandler");

        // Other types registration ...
    }
}

Because each command handler implements the generic IAsyncRequestHandler<TRequest, TResponse> interface, registering the assembly types lets MediatR relate each command to its command handler, because that relationship is stated in the command handler class, as in the following example:

public class CreateOrderCommandHandler
    : IAsyncRequestHandler<CreateOrderCommand, bool>
{

This is the code that correlates commands with command handlers. The handler is just a simple class, but it implements IAsyncRequestHandler<CreateOrderCommand, bool>, and MediatR makes sure it gets invoked with the correct payload (the command).

Applying cross-cutting concerns when processing commands with the Mediator and Decorator patterns

There is one more thing: being able to apply cross-cutting concerns to the mediator pipeline. You can also see at the end of the Autofac registration module code how it registers a decorator type, specifically, a custom LogDecorator class.

Again, note that in a future version, eShopOnContainers will migrate to MediatR 3 and move to behaviors instead of using decorators.

That LogDecorator class can be implemented as the following code, which logs information about the command handler being executed and whether it was successful or not.

public class LogDecorator<TRequest, TResponse>
    : IAsyncRequestHandler<TRequest, TResponse>
    where TRequest : IAsyncRequest<TResponse>
{
    private readonly IAsyncRequestHandler<TRequest, TResponse> _inner;
    private readonly ILogger<LogDecorator<TRequest, TResponse>> _logger;

    public LogDecorator(
        IAsyncRequestHandler<TRequest, TResponse> inner,
        ILogger<LogDecorator<TRequest, TResponse>> logger)
    {
        _inner = inner;
        _logger = logger;
    }

    public async Task<TResponse> Handle(TRequest message)
    {
        _logger.LogInformation($"Executing command {_inner.GetType().FullName}");
        var response = await _inner.Handle(message);
        _logger.LogInformation(
            $"Succeeded executing command {_inner.GetType().FullName}");
        return response;
    }
}

Just by implementing this decorator class and registering it in the mediator pipeline, all the commands processed through MediatR will log information about their execution.

The eShopOnContainers ordering microservice also applies a second decorator for basic validations, the ValidatorDecorator class that relies on the FluentValidation library, as shown in the following code:

public class ValidatorDecorator<TRequest, TResponse>
    : IAsyncRequestHandler<TRequest, TResponse>
    where TRequest : IAsyncRequest<TResponse>
{
    private readonly IAsyncRequestHandler<TRequest, TResponse> _inner;
    private readonly IValidator<TRequest>[] _validators;

    public ValidatorDecorator(
        IAsyncRequestHandler<TRequest, TResponse> inner,
        IValidator<TRequest>[] validators)
    {
        _inner = inner;
        _validators = validators;
    }

    public async Task<TResponse> Handle(TRequest message)
    {
        var failures = _validators
            .Select(v => v.Validate(message))
            .SelectMany(result => result.Errors)
            .Where(error => error != null)
            .ToList();

        if (failures.Any())
        {
            throw new OrderingDomainException(
                $"Command Validation Errors for type {typeof(TRequest).Name}",
                new ValidationException("Validation exception", failures));
        }

        var response = await _inner.Handle(message);
        return response;
    }
}

Then, based on the FluentValidation library, we created validation for the data passed with CreateOrderCommand, as in the following code:

public class CreateOrderCommandValidator : AbstractValidator<CreateOrderCommand>
{
    public CreateOrderCommandValidator()
    {
        RuleFor(command => command.City).NotEmpty();
        RuleFor(command => command.Street).NotEmpty();
        RuleFor(command => command.State).NotEmpty();
        RuleFor(command => command.Country).NotEmpty();
        RuleFor(command => command.ZipCode).NotEmpty();
        RuleFor(command => command.CardNumber).NotEmpty().Length(12, 19);
        RuleFor(command => command.CardHolderName).NotEmpty();
        RuleFor(command => command.CardExpiration).NotEmpty()
            .Must(BeValidExpirationDate)
            .WithMessage("Please specify a valid card expiration date");
        RuleFor(command => command.CardSecurityNumber).NotEmpty().Length(3);
        RuleFor(command => command.CardTypeId).NotEmpty();
        RuleFor(command => command.OrderItems)
            .Must(ContainOrderItems).WithMessage("No order items found");
    }

    private bool BeValidExpirationDate(DateTime dateTime)
    {
        return dateTime >= DateTime.UtcNow;
    }

    private bool ContainOrderItems(IEnumerable<OrderItemDTO> orderItems)
    {
        return orderItems.Any();
    }
}

You could create additional validations. This is a very clean and elegant way to implement your command validations.

In a similar way, you could implement other decorators for additional aspects or cross-cutting concerns that you want to apply to commands when handling them.

Additional resources
The mediator pattern
The decorator pattern
MediatR (Jimmy Bogard)
Fluent validation