Build the modular monolith first

Even talking about building a monolith today is a bit taboo. It is all about microservices at the moment, and has been for a few years. But they aren’t a silver bullet…

Sure, a bunch of the big players use them. But microservices also come with a lot of extra complexity that can make life a lot harder than it has to be. So maybe…just maybe…you should consider building a modular monolith to start off with, and then transition it to a service-based architecture when you actually need it.

The pros and cons of microservices

Microservices do have benefits. I’m not saying that they don’t…

Personally, I think the biggest benefit of microservices is the ability to distribute the development across multiple smaller teams, without having them trip over each other. But that is an organizational problem, not a technical one. They also allow us to choose the best language, and architecture, for each service. And they allow us to scale the services independently, making it possible to tailor the system resources to the actual needs. Not to mention that they allow us to deploy our services independently, allowing for faster release cycles. At least if they are built right, and the system is architected properly.

However, they also come with quite a few disadvantages, or “complexities”. First of all, they are distributed across the network. And networks aren’t always as reliable as you would like. Network calls are also much slower than executing code inside the same machine. Not to mention that distribution makes logging and tracing a lot harder. And those last parts become really important when you start doing microservices. They are really the only way to debug issues, and when the system is distributed, well…your issues are also distributed. So the logging and tracing must be good enough to find out not only what has gone wrong, but also where…

Oh…and that part about being able to build and deploy your services independently? That very much depends on the interfaces you define. If they need to change, which they will over time, all changes need to be backwards compatible for that to work. And that, my friend, can sometimes be harder than it sounds. At least if you haven’t planned for it upfront.
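To make “backwards compatible” a bit more concrete, here is a sketch of the kind of rule this implies. The OrderDto type below is hypothetical, not from any real system; the point is that members added to a shared contract after the fact have to be optional, with defaults the old consumers can live with.

```csharp
using System;

// A hypothetical shared contract between two services. Adding a new
// *required* member would break every existing consumer at once, so
// new members must be optional with safe defaults.
public record OrderDto
{
    public int Id { get; init; }
    public DateTime OrderDate { get; init; }

    // Added in a later version: nullable, so payloads from old clients
    // (which never send it) and old servers (which never return it)
    // still deserialize cleanly.
    public string? DeliveryNote { get; init; }
}
```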

So, yes, there are definitely pros to using a microservices architecture. But there is also a lot of complexity that comes with it. To the point that the first law of distributed objects is “don’t distribute your objects”.

The pros and cons of monoliths

Building a monolith today might not sound that awesome. Over the years, the term monolith has come to mean a bunch of poorly built legacy code. But that isn’t necessarily what it means.

Yes, it can mean a monolithically coded, entangled ball of mud. But it can also mean a system that is deployed as a single unit. This doesn’t mean that the code that makes up the system has to be a ball of mud.

Yes, when you build a monolith, it quite easily becomes an entangled mess. But if you take some time, and put in some love, it doesn’t have to. And honestly, your microservices architecture can also quite easily become an entangled monolith if you aren’t careful.

Sure, monoliths don’t support independent scaling of individual pieces of the system, and they don’t allow for releasing parts of the system independently. But those are really the biggest downsides in my mind. We can still write “beautiful” code, and build a system that can be properly maintained, and that can evolve over time. And maybe even evolve into a distributed system if needed.

On top of that, by not distributing the system across multiple services, we get rid of a lot of the complexity. Logging and tracing become a lot easier. There are no costly cross-network calls. And we don’t have to be as scared of the calls failing, as most of them will now be inside the same machine.

Note: Yes, by moving to a message-based system, which most people believe is the right thing for microservices, we can get rid of some of the problems with failing networks and temporarily missing services. But it also means designing the system to be as asynchronous as possible, which can be a bit complicated in some cases.

However, I still love the idea of being able to split the system into individual pieces, possibly across multiple teams that can work independently to a large degree. I also really like the idea of being able to choose the architecture based on what the different parts of the system do. Some parts might just need a simple CRUD model with EF Core, while other parts might need a domain model, or event sourcing, or maybe an actor-based model. But these things don’t mean that we can’t still have a monolith. A well-designed, modular monolith. And if we do it right, we can even prepare the whole thing to be pulled apart into smaller services if we get to the point where we really need that.

Designing a modular monolith

When designing a modular monolith, it is all about breaking up the system into modules, and then combining those modules into a monolith for deployment.

It might be worth noting that finding what modules you will need might not be as easy as you would think. They have to be as independent as possible. High cohesion and low coupling are very important here, as all communication between the modules might end up being a cross-network call, if you decide to break the system into services in the future.

This means that all communication between modules needs to be well abstracted, and be either asynchronous, so that it can handle the call going across the network in the future, or use some form of messaging.

Comment: You must also resist the urge to be “DRY”. You will probably end up with duplicate code in some places. And that is ok! Better some duplication of code in independent modules than unnecessary dependencies between them.

The next step is to figure out how to work on those pieces in a good way. Preferably in a way that allows you to work on each module, and in the future potentially deploy it, as an individual piece, while still being able to deploy the whole thing as a monolith for now.

In the architecture I’m about to describe, this is done by having each module be its own ASP.NET Core API project, complete with an entry point that allows you to start and run the module on its own. This allows each module to have its own architecture, be tested individually using the MVC Testing framework, and be worked on in isolation.

These modules are then pulled together, into a single API, in a separate ASP.NET Core API project, allowing us to deploy the whole system as a monolith. But it still allows us to pull out the individual modules into separate services, if needed in the future.

The sample

To demonstrate the architecture, I will use 2 very simple modules. A user management module, and an order management module. The order management module depends on the user management module to fetch user information.

Note: I have kept this sample extremely basic, as it is not the actual functionality that is interesting, but the set-up of the system. I have also not demonstrated message-based interaction, as this has very little impact on the system as such.

For this, I have created 3 ASP.NET Core Web projects and one class library:

  • OrderManagement.Module - The project that contains the code for the order management module
  • UserManagement.Module - The project that contains the code for the user management module
  • Api - The “host” project that ties the modules together into a monolith for deployment
  • UserManagement - A “shared” class library that contains the interface and DTO that enables interaction with the user management module

Comment: In the code on GitHub, there are also 3 test projects. One for each of the two modules, to show how you can test the modules individually, and one to test the host to see that it all works together as intended.

The UserManagement.Module project

The user management module is quite simple. It contains a single controller called UsersController, which has a single Get method that allows you to get a user.

To fetch the actual user entity, it uses a simple repository interface called IUsers.

[Route("api/[controller]")]
[ApiController]
public class UsersController : ControllerBase
{
    private readonly Data.IUsers users;

    public UsersController(Data.IUsers users)
    {
        this.users = users;
    }

    [HttpGet("{id}")]
    public async Task<ActionResult<User>> Get(int id)
    {
        var user = await users.WithId(id);

        return user == null ? NotFound() : user;
    }
}

The IUsers interface is also really simple

public interface IUsers
{
    Task<User?> WithId(int id);
}

That’s it! The implementation of the interface is pretty unimportant for this post.

Not to mention that the implementation in the demo is really dumb. But it removes the need for a database etc.
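For reference, a dummy implementation could look something like this. This is a sketch of what the demo’s “really dumb” implementation might be, not the actual code from GitHub, and the User record here is a self-contained stand-in for the sample’s entity.

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;

// Stand-in for the sample's User entity (the real one lives in the module).
public record User(int Id, string FirstName, string LastName);

public interface IUsers
{
    Task<User?> WithId(int id);
}

// A hypothetical in-memory implementation, removing the need for a database.
public class InMemoryUsers : IUsers
{
    private static readonly Dictionary<int, User> users = new()
    {
        [1] = new User(1, "John", "Doe")
    };

    public Task<User?> WithId(int id)
        => Task.FromResult(users.TryGetValue(id, out var user) ? user : null);
}
```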

The project also has a Program.cs file that looks a lot like a standard web application

using FiftyNine.ModularMonolith.UserManagement.Module.Extensions;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddControllers();

// Add the User Management module
builder.AddUserManagement();

var app = builder.Build();

if (!app.Environment.IsDevelopment())
{
    app.UseExceptionHandler("/Error");
    app.UseHsts();
}

app.UseHttpsRedirection();
app.UseStaticFiles();
app.UseRouting();
app.UseAuthorization();
app.UseEndpoints(endpoints =>
{
    endpoints.MapControllers();
});
app.Run();

The only real thing to note in here, is the call to AddUserManagement(), which is an extension method that adds all the stuff needed to run the API in the user management module.

However, it doesn’t “just” add the stuff needed to run it on its own. It also adds some MVC code that makes it integrate with the “host” as well. Like this

public static WebApplicationBuilder AddUserManagement(this WebApplicationBuilder builder)
{
    builder.Services.AddControllers()
                    .AddApplicationPart(typeof(WebApplicationBuilderExtensions).Assembly);

    builder.Services.AddSingleton<Users>()
                    .AddSingleton<Data.IUsers>(x => x.GetRequiredService<Users>())
                    .AddSingleton<IUsers>(x => x.GetRequiredService<Users>());

    return builder;
}

As you can see, it starts off by telling MVC to include any controller defined in the current assembly.

When running it on its own, this has no effect at all. But when you run it “inside” the “host”, it will make sure that any controller in this project is registered in the host.

After that, it just adds the services needed for this module. In this case the IUsers service. However, as you can see, the Users service is registered using 2 different interfaces, both unfortunately called IUsers.

The reason for the double IUsers interface is that we need two ways to retrieve users. One that we can use internally in this module, and one that can be used from any external module that also needs to retrieve users. Which is what the UserManagement project is all about.

But for now, all you need to know is that we need to register 2 interfaces. And in this case, the service that is registered in the AddUserManagement() method happens to implement both.

Note: Yes, the naming is unfortunate. But I’m not sure how to name this any better. The module project becomes UserManagement.Module, and the “integration” project used to talk to this module is UserManagement. And inside them, we end up having the same interface name, because it is unfortunately the name that makes sense in both cases. If you have a better suggestion, please let me know!

The UserManagement “integration” project

The UserManagement project contains the objects needed to interact with the user management module from external modules. In this case for example, the order module needs to retrieve users from the user module to add to the orders. For this to work, the UserManagement project contains an interface and a DTO. The interface is, as mentioned before, called IUsers and looks like this

public interface IUsers
{
    Task<User?> WithId(int id);
}

As you can see, it is pretty much identical to the one inside the UserManagement.Module project. Which is, as mentioned, a bit unfortunate, as it causes some interesting namespacing issues inside the module. But the naming makes sense as such, so I have kept it. And most of the time, we only really care about the internal one anyway.

However, a big difference is that it returns a User DTO, defined in the UserManagement project, and not the User from the module project. Yes, it is a bit confusing when describing it, but it makes sense if you look at the code. I promise!

The actual implementation implements both interfaces

public class Users : IUsers, UserManagement.IUsers
{
    public Task<User?> WithId(int id)
    {
        // Implementation
    }

    Task<UserManagement.User?> UserManagement.IUsers.WithId(int id)
        => WithId(id).ContinueWith(x => x.Result?.ToUser());
}

As you can see, the “integration” implementation just uses the “internal” WithId() method to retrieve a User entity, and then uses a ToUser() extension method to map it to a User DTO instance from the “integration” project.

And since the Users class happens to implement both the interfaces, it is registered like this

builder.Services.AddSingleton<Users>()
                .AddSingleton<Data.IUsers>(x => x.GetRequiredService<Users>())
                .AddSingleton<IUsers>(x => x.GetRequiredService<Users>());

The first AddSingleton() call registers the service itself in the IoC container. The second and third register that same instance as the two IUsers interfaces.

The OrderManagement.Module project

The order management module is very similar to the user management module in this case. It contains a very simple controller that uses an internal IOrders interface to retrieve orders, as well as the IUsers interface from the user management “integration” project, to retrieve the user who placed the order.

[Route("api/[controller]")]
[ApiController]
public class OrdersController : ControllerBase
{
    private readonly IOrders orders;
    private readonly IUsers users;

    public OrdersController(IOrders orders, IUsers users)
    {
        this.orders = orders;
        this.users = users;
    }

    [HttpGet("{id}")]
    public async Task<ActionResult> Get(int id)
    {
        var order = await orders.WithId(id);

        if (order == null)
            return NotFound();

        var user = await users.WithId(order.OrderedById);

        return Ok(new { 
            Id = order.Id,
            OrderDate = order.OrderDate.ToString("yyyy-MM-dd HH:mm"),
            OrderedBy = user == null ? null : new
            {
                Id = user.Id,
                FirstName = user.FirstName,
                LastName = user.LastName
            }
        });
    }
}

Yes, a bit more code, but a lot of it is just mapping the result to an anonymous object to be returned. Other than that, it is pretty simple. Retrieve the order, and then retrieve the user who placed the order by calling the IUsers.WithId() method.

In the monolithic version, the IUsers interface will be the implementation that is created in the UserManagement.Module. However, if we wanted to split out the user management module to a separate service, we could just replace the implementation with one that uses an HttpClient to retrieve the user instead. And none of the code in here would change.
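As a sketch of what such a replacement could look like, here is a hypothetical HTTP-backed implementation. The HttpUsers name and the route are my assumptions (the route mirrors the module’s UsersController), and the User record and IUsers interface below are self-contained stand-ins for the types in the UserManagement “integration” project.

```csharp
using System.Net;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

// Stand-ins for the "integration" contract from the UserManagement project.
public record User(int Id, string FirstName, string LastName);

public interface IUsers
{
    Task<User?> WithId(int id);
}

// Hypothetical HTTP-backed implementation, registered in place of the
// in-process one once user management runs as its own service. None of
// the order management code needs to change.
public class HttpUsers : IUsers
{
    private readonly HttpClient client;

    public HttpUsers(HttpClient client) => this.client = client;

    public async Task<User?> WithId(int id)
    {
        // Assumes the user service exposes the same route as the module.
        var response = await client.GetAsync($"api/users/{id}");
        if (response.StatusCode == HttpStatusCode.NotFound)
            return null;

        response.EnsureSuccessStatusCode();
        return await response.Content.ReadFromJsonAsync<User>();
    }
}
```

In the host, this could then be wired up with something like builder.Services.AddHttpClient&lt;IUsers, HttpUsers&gt;(c => c.BaseAddress = new Uri("https://users.internal/")), with the address being whatever the new service ends up living at.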

But what implementation is used when this module is run on its own? Well, for this sample, I have simply added a FakeItEasy fake to do the job. It looks like this in the Program.cs

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddControllers();

// Add the Order Management module
builder.AddOrderManagement();

var usersFake = A.Fake<IUsers>();
A.CallTo(() => usersFake.WithId(1)).Returns(User.Create(1, "John", "Doe"));
builder.Services.AddSingleton(usersFake);

var app = builder.Build();

if (!app.Environment.IsDevelopment())
{
    app.UseExceptionHandler("/Error");
    app.UseHsts();
}

app.UseHttpsRedirection();
app.UseStaticFiles();
app.UseRouting();
app.UseAuthorization();
app.UseEndpoints(endpoints =>
{
    endpoints.MapControllers();
});
app.Run();

As you can see, it calls an AddOrderManagement() extension method to add the required things to the system, just like the UserManagement.Module project. However, since it requires an implementation of the IUsers interface from the UserManagement project, which isn’t supplied in this case, I create a simple fake to take its place.

Note: Yes, this implementation is probably overly simplified. However, you could obviously create something more elaborate here. The main thing is that you are not dependent on the other module to be able to work.

Remember, this Program.cs will only be called when running the project in isolation. When running it as part of the “host”, this code will never be executed, and the IUsers implementation will be the one registered in the UserManagement.Module project.

The AddOrderManagement() extension method looks like this

public static WebApplicationBuilder AddOrderManagement(this WebApplicationBuilder builder)
{
    builder.Services.AddControllers()
                    .AddApplicationPart(typeof(WebApplicationBuilderExtensions).Assembly);

    builder.Services.AddSingleton<IOrders, Orders>();

    return builder;
}

As you can see, it is pretty much an exact copy of the one from the user management module. Which makes sense, as all modules need to register their own “application part”, and their own services. And since this demo is so ridiculously simple, the modules are almost identical.

The final part of the puzzle is to bring it all together in the monolith for deployment.

The API project

The API project is the thing that is responsible for this. It references all the required modules and registers them during startup, using the same extension methods that the individual modules use.

Like this

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddControllers();

builder.AddOrderManagement();
builder.AddUserManagement();

var app = builder.Build();

if (!app.Environment.IsDevelopment())
{
    app.UseExceptionHandler("/Error");
    app.UseHsts();
}

app.UseHttpsRedirection();
app.UseStaticFiles();
app.UseRouting();
app.UseAuthorization();
app.UseEndpoints(endpoints =>
{
    endpoints.MapControllers();
});
app.Run();

As you can see, all it really does is ask the modules to register themselves. They will then add their own assemblies as application parts in MVC, making sure that their controllers are discovered when calling endpoints.MapControllers(), and add whatever services they provide to the IoC container. Both internal services, and “integration” services that are to be used by other modules.

This allows the “host” to have a complete set of endpoints, made up of both local ones and those defined in the referenced modules.

Note: This registration could obviously be automated using some attribute or interface, and some reflection if you have a lot of modules.
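One possible sketch of that automation: each module implements a marker interface, and the host scans the module assemblies for implementations. The IModule interface and AddModules method here are my invention, not part of the sample.

```csharp
using System;
using System.Linq;
using System.Reflection;
using Microsoft.AspNetCore.Builder;

// Hypothetical marker interface that each module would implement.
public interface IModule
{
    void Add(WebApplicationBuilder builder);
}

public static class ModuleRegistrationExtensions
{
    // Scan the given assemblies for modules and let each one register
    // itself, instead of calling every AddXyz() extension method by hand.
    public static WebApplicationBuilder AddModules(
        this WebApplicationBuilder builder, params Assembly[] moduleAssemblies)
    {
        var moduleTypes = moduleAssemblies
            .SelectMany(a => a.GetTypes())
            .Where(t => typeof(IModule).IsAssignableFrom(t)
                        && t is { IsInterface: false, IsAbstract: false });

        foreach (var type in moduleTypes)
            ((IModule)Activator.CreateInstance(type)!).Add(builder);

        return builder;
    }
}
```

The assemblies are passed in explicitly here, as assemblies that haven’t been loaded yet won’t show up if you scan AppDomain.CurrentDomain instead.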

If the “host” needs to get some information from one of the modules, it can use the “integration” services in the same way as other modules do.

public class HomeController : Controller
{
    private readonly IUsers users;

    public HomeController(IUsers users)
    {
        this.users = users;
    }

    [HttpGet("/")]
    public async Task<IActionResult> Index()
    {
        var user = await users.WithId(1);

        return user == null ? NotFound() : View(user);
    }
}

And if you ever need to break the system apart and put some of the modules in external services, you can do so as long as you create new implementations for the “integration” services.

And since every module is its own project, much like microservices, they can choose their own architecture and be developed and managed by separate teams.

Another nice benefit of this is that since the “integration” service interfaces are actual C# interfaces, any breaking change will show up when you try to compile the “host” project. Instead of at run time, which can easily happen when your interfaces are loosely defined as REST endpoints.

Conclusion

I personally believe that this architecture is a good starting point when building ASP.NET Core Web applications. It allows for a nice separation of concerns, just as with a microservices architecture. It allows the system to be migrated to a distributed system if needed. And it allows for different architecture styles inside each of the modules, without making the system design look disjointed and weird. But, compared to a microservices architecture, it doesn’t have the same complexity out of the gate. Instead, you have the simplicity of a monolith, but hopefully without the all too common spaghetti code that is caused by putting it all in one place. And when you really need it, you can split it into separate services.

Comment: Being able to split it into separate services obviously depends on how you design your modules. If, for example, you make them too chatty, splitting them might cause a lot of latency. And if you put all the data in a single database, you still have a common dependency that has to be managed as such. And so on…

If you have any thoughts or questions, I’m on Twitter as usual. Just ping me at @ZeroKoll!

And of course, the code is available on GitHub! Just go to https://github.com/chrisklug/asp-net-modular-monolith to look at it!

zerokoll

Chris

Developer-Badass-as-a-Service at your service