Azure Functions – Expect Significant Clock Skew

While running the experiment I posted about on Sunday I annotated the message I sent on to the event hub with the Azure Functions view of the current time (basically I set a property to DateTime.UtcNow) and, out of curiosity, grouped my results by one-second tumbling windows based on that date. This gave me results that were observably different from those produced by doing the same with the enqueue date and time logged by the Event Hub (as an aside there is some interesting information about Event Hubs and clock skew here). My experiments didn’t need massively accurate time tracking as I was really just looking for trends over a period that was long relative to the clock skew. However, when I looked at some of the underlying numbers I became suspicious that there was a significant degree of clock skew across my functions.

I reached out to the @AzureFunctions team on Twitter asking how they handled clock sync on the platform and one of the engineers, Chris Anderson, replied confirming what I suspected: there are no guarantees about clock sync on the Azure Functions platform and, further, you should expect the time skew to be large.

That means you can’t really obtain a consistent view of “now” from within an Azure Function. You could go and get it from an external source but that in and of itself is going to introduce other inaccuracies. Essentially you can’t handle dynamic time reliably inside a function with any precision and you’re limited to working with reference points obtained upstream and passed in.
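As a rough illustration, here’s a minimal sketch of working from an upstream reference point rather than the local clock – it assumes the Service Bus trigger’s message metadata binding, which surfaces the broker’s enqueue time as an enqueuedTimeUtc parameter (the queue name and connection setting are illustrative):

[FunctionName("ProcessWithUpstreamTime")]
public static void Run(
    [ServiceBusTrigger("myqueue", Connection = "SbConnectionString")] string myQueueItem,
    DateTime enqueuedTimeUtc, // stamped by the Service Bus broker, not by this host's clock
    TraceWriter log)
{
    // Any time based logic keys off the broker's timestamp rather than DateTime.UtcNow
    log.Info($"Message enqueued at {enqueuedTimeUtc:o}: {myQueueItem}");
}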

This is definitely something to be aware of when designing systems that make use of Azure Functions: it would be easy to use them to apply timestamps in scenarios that expect some sense of temporal sequence.

This isn’t, of course, a new problem – dealing with precise and accurate time in a distributed system is always a challenge and requires careful consideration – but it does underline the importance of understanding your cloud vendor’s various runtime environments.

Azure Functions – Queue Trigger Scaling (Part 1)

I’m a big fan of the serverless compute model (known as Azure Functions on the Azure platform), but in some ways its greatest strength is also its weakness: the serverless model essentially asks you to run small units of code inside a fully managed environment on a pay-for-what-you-need basis that will, in theory, scale infinitely in response to demand. With this increased granularity it is the next evolution in cloud elasticity: no more need to buy and reserve CPUs that sit partially idle until the next scaling point is reached. However, as a result you lose control over the levers you might be used to pulling in a more traditional cloud compute environment – it’s very much a black box. Using typical queue processing patterns as an example, this includes the number of “threads” or actors watching a queue and the length of the back-off timings.

Most of the systems I’ve transitioned onto Azure Functions to date have been more focused on cost than scale and have had no particular latency requirements, so I’ve been happy to reduce my costs without a particularly close examination. However, I’m now starting to look at moving spikier, higher volume queue systems onto Azure Functions and so I’ve been trying to understand the opaque aspects more fully by running a series of experiments.

Before continuing it’s worth noting that Microsoft continue to evolve the runtime host for Azure Functions, so these results are only really valid at the time they were captured – run the experiments again in six months and you’re likely to see (hopefully subtle and improved) changes in behaviour.

Most of my higher volume requirements are light in terms of compute power but heavy on message volume, so I’ve created a simple function that pulls a message from a Service Bus queue and writes it, along with a timestamp, onto an event hub:

[FunctionName("DequeAndForwardOn")]
[return: EventHub("results", Connection = "EhConnectionString")]
public static string Run([ServiceBusTrigger("testqueue", Connection = "SbConnectionString")]string myQueueItem, TraceWriter log)
{
    log.Info($"C# ServiceBus queue trigger function processed message: {myQueueItem}");
    EventHubOutput message = JsonConvert.DeserializeObject<EventHubOutput>(myQueueItem);
    message.ProcessedAtUtc = DateTime.UtcNow;
    string json = JsonConvert.SerializeObject(message);
    return json;
}
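The EventHubOutput type isn’t shown above; a minimal sketch of the shape the function relies on (the real test messages may well carry additional payload fields) would be:

public class EventHubOutput
{
    // The only member the function above actually sets; anything else in the
    // queue message simply round-trips through the serializer
    public DateTime ProcessedAtUtc { get; set; }
}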

Once the items are on the Event Hub I’m using a Stream Analytics job to count the number of dequeues per second with a tumbling window and output them to table storage:

SELECT
    System.TimeStamp AS TbPartitionKey,
    '' as TbRowKey,
    SUBSTRING(CAST(System.TimeStamp as nvarchar(max)), 12, 8) as Time,
    COUNT(1) AS totalProcessed
INTO
    resultsbysecond
FROM
    [results]
TIMESTAMP BY EventEnqueuedUtcTime
GROUP BY TUMBLINGWINDOW(second,1)

Taking all the above gives us a workflow that runs from the pre-loaded queue, through the function, onto the Event Hub and finally, via the Stream Analytics job, into table storage. For this initial experiment I’m simply going to pre-load the Service Bus queue with 1,000,000 messages and analyse the dequeue rate.
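A minimal pre-load sketch, assuming the Microsoft.Azure.ServiceBus package (the QueueLoader class and its anonymous payload are illustrative; the queue name matches the function’s trigger):

using System.Collections.Generic;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;
using Newtonsoft.Json;

public static class QueueLoader
{
    public static async Task PreloadAsync(string connectionString)
    {
        QueueClient client = new QueueClient(connectionString, "testqueue");
        // 10,000 batches of 100 messages gives us the 1,000,000 message pre-load
        for (int batch = 0; batch < 10000; batch++)
        {
            List<Message> messages = new List<Message>();
            for (int i = 0; i < 100; i++)
            {
                string json = JsonConvert.SerializeObject(new { Index = batch * 100 + i });
                messages.Add(new Message(Encoding.UTF8.GetBytes(json)));
            }
            await client.SendAsync(messages);
        }
        await client.CloseAsync();
    }
}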

Executing all this gave some interesting results.

From a cold start it took just under 13 minutes to dequeue all 1,000,000 messages, with a fairly linear, if spiky, scaling up of the dequeue rate from a low of 23 dequeues per second at the beginning to a peak of over 3000, increasing at a very rough rate of 3.2 messages per second each second. It seems entirely likely that this would continue until we started to hit IO limits around the Service Bus. We’d need to do more experiments to be certain, but it looks like the runtime allocates more queue consumers while all existing consumers continue to find there are items on the queue to process.

In the next part we’re going to run a few more experiments to help us understand the scaling rate better and how it is impacted by quiet periods.

C# Cloud Application Architecture – Commanding via a Mediator (Part 2)

In the last post we converted our traditional layered architecture into one organised around business domains instead, making use of the command and mediator patterns to promote a more loosely coupled approach. However, things were definitely still very much a work in progress, with our controllers still fairly scruffy and our command handlers largely just a copy and paste of our services from the layered system.

In this post I’m going to concentrate on simplifying the controllers and undertake some minor rework on our commands to help with this. This part is going to be a bit more code heavy than the last and will involve diving into parts of ASP.Net Core. The code for the result of all these changes can be found on GitHub here:

https://github.com/JamesRandall/CommandMessagePatternTutorial/tree/master/Part2

As a reminder, our aim is to move from controllers with actions that look like this:

[HttpPut("{productId}/{quantity}")]
public async Task<IActionResult> Put(Guid productId, int quantity)
{
    CommandResponse response = await _dispatcher.DispatchAsync(new AddToCartCommand
    {
        UserId = this.GetUserId(),
        ProductId = productId,
        Quantity = quantity
    });
    if (response.IsSuccess)
    {
        return Ok();
    }
    return BadRequest(response.ErrorMessage);
}

To controllers with actions that look like this:

[HttpPut("{productId}/{quantity}")]
public async Task<IActionResult> Put([FromRoute] AddToCartCommand command) => await ExecuteCommand(command);

And we want to do this in a way that makes adding future controllers and commands super simple and reliable.

We can do this fairly straightforwardly as we now only have a single “service”, our dispatcher, and our commands are simple state. That being the case, we can use some of ASP.Net Core’s model binding to take care of populating our commands, along with some conventions and a base class to do the heavy lifting for each of our controllers. This focuses the complexity in a single, testable, place and means that our controllers all become simple and thin.

If we’re going to use model binding to construct our commands the first thing we need to be wary of is security: many of our commands have a UserId property that, as can be seen in the code snippet above, is set within the action from a claim. Covering claims based security and the options available in ASP.Net Core is a topic in and of itself and not massively important to the application architecture and code structure we’re focusing on, so for the sake of simplicity I’m going to use a hard coded GUID. Hopefully it’s clear from the code below where this ID would come from in a fully fledged solution:

public static class ControllerExtensions
{
    public static Guid GetUserId(this Controller controller)
    {
        // in reality this would pull the user ID from the claims e.g.
        //     return Guid.Parse(controller.User.FindFirst("userId").Value);
        return Guid.Parse("A9F7EE3A-CB0D-4056-9DB5-AD1CB07D3093");
    }
}

If we’re going to use model binding we need to be extremely careful that this ID cannot be set by a consumer as this would almost certainly lead to a security incident. In fact if we make an interim change to our controller action we can see immediately that we’ve got a problem:

[HttpPut("{productId}/{quantity}")]
public async Task<IActionResult> Put([FromRoute]AddToCartCommand command)
{
    CommandResponse response = await _dispatcher.DispatchAsync(command);
    if (response.IsSuccess)
    {
        return Ok();
    }
    return BadRequest(response.ErrorMessage);
}

The user ID has bled through into the Swagger definition but, because we’ve configured the routing with no userId parameter on the route definition, the binder will ignore any value we try to supply: there is nowhere to specify it as a route parameter, and a query parameter or request body will be ignored. Still, it’s not pleasant: it’s misleading to the consumer of the API and if, for example, we were reading the command from the request body the model binder would pick it up.

We’re going to take a three-pronged approach to this so that we can robustly prevent this happening in all scenarios. Firstly, to support some of these changes, we’re going to rename the UserId property on our commands to AuthenticatedUserId and introduce an interface called IUserContextCommand, shown below, that all commands requiring this information will implement:

public interface IUserContextCommand
{
    Guid AuthenticatedUserId { get; set; }
}

With that change made our AddToCartCommand now looks like this:

public class AddToCartCommand : ICommand<CommandResponse>, IUserContextCommand
{
    public Guid AuthenticatedUserId { get; set; }

    public Guid ProductId { get; set; }

    public int Quantity { get; set; }
}

I’m using the Swashbuckle.AspNetCore package to provide a Swagger definition and user interface, and fortunately it’s quite a configurable package that allows us to customise how it interprets actions (operations in its parlance) and schema through a filter system. We’re going to create and register both an operation and a schema filter to ensure that any reference to AuthenticatedUserId, either inbound or outbound, is removed. The first code snippet below removes the property from any schema (request or response bodies) and the second removes it from any operation parameters – if you use models with the [FromRoute] attribute as we have done then Web API will correctly bind only the parameters specified in the route, but Swashbuckle will still include all the properties of the model in its definition.

public class SwaggerAuthenticatedUserIdFilter : ISchemaFilter
{
    private const string AuthenticatedUserIdProperty = "authenticatedUserId";
    private static readonly Type UserContextCommandType = typeof(IUserContextCommand);

    public void Apply(Schema model, SchemaFilterContext context)
    {
        if (UserContextCommandType.IsAssignableFrom(context.SystemType))
        {
            if (model.Properties.ContainsKey(AuthenticatedUserIdProperty))
            {
                model.Properties.Remove(AuthenticatedUserIdProperty);
            }
        }
    }
}

public class SwaggerAuthenticatedUserIdOperationFilter : IOperationFilter
{
    private const string AuthenticatedUserIdProperty = "authenticateduserid";

    public void Apply(Operation operation, OperationFilterContext context)
    {
        IParameter authenticatedUserIdParameter = operation.Parameters?.SingleOrDefault(x => x.Name.ToLower() == AuthenticatedUserIdProperty);
        if (authenticatedUserIdParameter != null)
        {
            operation.Parameters.Remove(authenticatedUserIdParameter);
        }
    }
}

These are registered in our Startup.cs file alongside Swagger itself:

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();
    services.AddSwaggerGen(c =>
    {
        c.SwaggerDoc("v1", new Info { Title = "Online Store API", Version = "v1" });
        c.SchemaFilter<SwaggerAuthenticatedUserIdFilter>();
        c.OperationFilter<SwaggerAuthenticatedUserIdOperationFilter>();
    });

    CommandingDependencyResolver = new MicrosoftDependencyInjectionCommandingResolver(services);
    ICommandRegistry registry = CommandingDependencyResolver.UseCommanding();

    services
        .UseShoppingCart(registry)
        .UseStore(registry)
        .UseCheckout(registry);
}

If we try this now we can see that our action looks as we expect – the authenticatedUserId parameter no longer appears in the Swagger definition.

With our Swagger work we’ve at least obfuscated our AuthenticatedUserId property; now we want to make sure ASP.Net ignores it during binding operations. A common means of doing this is to use the [BindNever] attribute on models. However, we want our commands to remain free of the concerns of the host environment, we don’t want to have to remember to stamp [BindNever] on all of our models, and we definitely don’t want the assemblies that contain them to need to reference ASP.Net packages – after all, one of the aims of taking this approach is to promote a very loosely coupled design.
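For contrast, the rejected approach would look something like the sketch below – it behaves correctly, but [BindNever] lives in Microsoft.AspNetCore.Mvc.ModelBinding and so drags exactly the ASP.Net reference we want to avoid into the command assembly:

using System;
using Microsoft.AspNetCore.Mvc.ModelBinding;

public class AddToCartCommand : ICommand<CommandResponse>, IUserContextCommand
{
    [BindNever] // works, but couples the command to ASP.Net Core
    public Guid AuthenticatedUserId { get; set; }

    public Guid ProductId { get; set; }

    public int Quantity { get; set; }
}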

This being the case we need to find another way, and other than the binding attributes possibly the easiest option is to decorate the default body binder with some additional functionality that forces an empty GUID. Binders consist of two parts, the binder itself and a binder provider, so to add our binder we need two new classes (we’ll also include a helper class with an extension method). Firstly our custom binder:

public class AuthenticatedUserIdAwareBodyModelBinder : IModelBinder
{
    private readonly IModelBinder _decoratedBinder;

    public AuthenticatedUserIdAwareBodyModelBinder(IModelBinder decoratedBinder)
    {
        _decoratedBinder = decoratedBinder;
    }

    public async Task BindModelAsync(ModelBindingContext bindingContext)
    {
        await _decoratedBinder.BindModelAsync(bindingContext);
        if (bindingContext.Result.Model is IUserContextCommand command)
        {
            command.AuthenticatedUserId = Guid.Empty;
        }
    }
}

You can see from the code above that we delegate all the actual binding down to the decorated binder, then simply check whether we are dealing with a command that implements our IUserContextCommand interface and blank out the GUID if necessary.

We then need a corresponding provider to supply this model binder:

internal static class AuthenticatedUserIdAwareBodyModelBinderProviderInstaller
{
    public static void AddAuthenticatedUserIdAwareBodyModelBinderProvider(this MvcOptions options)
    {
        IModelBinderProvider bodyModelBinderProvider = options.ModelBinderProviders.Single(x => x is BodyModelBinderProvider);
        int index = options.ModelBinderProviders.IndexOf(bodyModelBinderProvider);
        options.ModelBinderProviders.Remove(bodyModelBinderProvider);
        options.ModelBinderProviders.Insert(index, new AuthenticatedUserIdAwareBodyModelBinderProvider(bodyModelBinderProvider));
    }
}

internal class AuthenticatedUserIdAwareBodyModelBinderProvider : IModelBinderProvider
{
    private readonly IModelBinderProvider _decoratedProvider;

    public AuthenticatedUserIdAwareBodyModelBinderProvider(IModelBinderProvider decoratedProvider)
    {
        _decoratedProvider = decoratedProvider;
    }

    public IModelBinder GetBinder(ModelBinderProviderContext context)
    {
        // Reuse the binder from the decorated provider rather than resolving it twice
        IModelBinder modelBinder = _decoratedProvider.GetBinder(context);
        return modelBinder == null ? null : new AuthenticatedUserIdAwareBodyModelBinder(modelBinder);
    }
}
}

Our installation extension method looks for the default BodyModelBinderProvider class, extracts it, constructs our decorator around it, and replaces it in the set of binders.

Again, as with our Swagger filters, we configure this in the ConfigureServices method of our Startup class:

// This method gets called by the runtime. Use this method to add services to the container.
public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc(c =>
    {
        c.AddAuthenticatedUserIdAwareBodyModelBinderProvider();
    });
    services.AddSwaggerGen(c =>
    {
        c.SwaggerDoc("v1", new Info { Title = "Online Store API", Version = "v1" });
        c.SchemaFilter<SwaggerAuthenticatedUserIdFilter>();
        c.OperationFilter<SwaggerAuthenticatedUserIdOperationFilter>();
    });

    CommandingDependencyResolver = new MicrosoftDependencyInjectionCommandingResolver(services);
    ICommandRegistry registry = CommandingDependencyResolver.UseCommanding();

    services
        .UseShoppingCart(registry)
        .UseStore(registry)
        .UseCheckout(registry);
}

At this point we’ve configured ASP.Net so that consumers of the API cannot manipulate sensitive properties on our commands, so for our next step we want to ensure that the AuthenticatedUserId property on our commands is populated with the legitimate ID of the logged in user.

While we could set this in our model binder (where currently we ensure it is blank), I’d suggest that would be a mixing of concerns and in any case it would only be triggered on an action with a request body – it wouldn’t work for a command bound from route or query parameters, for example. All that being the case, I’m going to implement this through a simple action filter as below:

public class AssignAuthenticatedUserIdActionFilter : IActionFilter
{
    public void OnActionExecuting(ActionExecutingContext context)
    {
        foreach (object parameter in context.ActionArguments.Values)
        {
            if (parameter is IUserContextCommand userContextCommand)
            {
                userContextCommand.AuthenticatedUserId = ((Controller) context.Controller).GetUserId();
            }
        }
    }

    public void OnActionExecuted(ActionExecutedContext context)
    {
            
    }
}

Action filters run after model binding, authentication and authorization have all occurred, so at this point we can be sure we can access the claims for a validated user.
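One detail not shown above is where this filter gets registered; since it has no constructor dependencies, one option (a sketch) is to add it globally as an instance alongside our model binder provider in ConfigureServices:

services.AddMvc(c =>
{
    c.AddAuthenticatedUserIdAwareBodyModelBinderProvider();
    // Runs for every action, so any IUserContextCommand picks up the real user ID
    c.Filters.Add(new AssignAuthenticatedUserIdActionFilter());
});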

Before we move on it’s worth reminding ourselves what our updated controller action now looks like:

[HttpPut("{productId}/{quantity}")]
public async Task<IActionResult> Put([FromRoute]AddToCartCommand command)
{
    CommandResponse response = await _dispatcher.DispatchAsync(command);
    if (response.IsSuccess)
    {
        return Ok();
    }
    return BadRequest(response.ErrorMessage);
}

To remove this final bit of boilerplate we’re going to introduce a base class that handles the dispatch and HTTP response interpretation:

public abstract class AbstractCommandController : Controller
{
    protected AbstractCommandController(ICommandDispatcher dispatcher)
    {
        Dispatcher = dispatcher;
    }

    protected ICommandDispatcher Dispatcher { get; }

    protected async Task<IActionResult> ExecuteCommand(ICommand<CommandResponse> command)
    {
        CommandResponse response = await Dispatcher.DispatchAsync(command);
        if (response.IsSuccess)
        {
            return Ok();
        }
        return BadRequest(response.ErrorMessage);
    }
}

With that in place we can finally get our controller to our target form:

[Route("api/[controller]")]
public class ShoppingCartController : AbstractCommandController
{
    public ShoppingCartController(ICommandDispatcher dispatcher) : base(dispatcher)
    {
            
    }

    [HttpPut("{productId}/{quantity}")]
    public async Task<IActionResult> Put([FromRoute] AddToCartCommand command) => await ExecuteCommand(command);

    // ... other verbs
}

To roll this out easily over the rest of our commands we’ll need to get them into a more consistent shape – at the moment they have a mix of response types which isn’t ideal for translating the results into HTTP responses. We’ll unify them so they all return a CommandResponse object that can optionally also contain a result object:

public class CommandResponse
{
    protected CommandResponse()
    {
            
    }

    public bool IsSuccess { get; set; }

    public string ErrorMessage { get; set; }

    public static CommandResponse Ok() { return new CommandResponse { IsSuccess = true }; }

    public static CommandResponse WithError(string error) { return new CommandResponse { IsSuccess = false, ErrorMessage = error }; }
}

public class CommandResponse<T> : CommandResponse
{
    public T Result { get; set; }

    public static CommandResponse<T> Ok(T result) { return new CommandResponse<T> { IsSuccess = true, Result = result }; }

    public new static CommandResponse<T> WithError(string error) { return new CommandResponse<T> { IsSuccess = false, ErrorMessage = error }; }

    public static implicit operator T(CommandResponse<T> from)
    {
        return from.Result;
    }
}

With all that done we can complete our AbstractCommandController class and convert all of our controllers to the simpler syntax. By making these changes we’ve removed all the repetitive boilerplate code from our controllers, which means fewer tests to write and maintain, less scope for us to make mistakes, and less scope for us to introduce inconsistencies. Instead we’ve leveraged the battle hardened model binding capabilities of ASP.Net and concentrated all of our response handling in a single class that we can test extensively. And if we want to add new capabilities all we need to do is add commands, handlers and controllers that follow our convention and all the wiring is taken care of for us.
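For completeness, the generic counterpart on AbstractCommandController might look like the sketch below (the finished class is in the Part 2 repository) – it returns the payload of a CommandResponse<T> as the body of a 200 response:

protected async Task<IActionResult> ExecuteCommand<TResult>(ICommand<CommandResponse<TResult>> command)
{
    CommandResponse<TResult> response = await Dispatcher.DispatchAsync(command);
    if (response.IsSuccess)
    {
        return Ok(response.Result);
    }
    return BadRequest(response.ErrorMessage);
}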

In the next part we’ll explore how we can add validation to our commands in a way that is loosely coupled and reusable in domains other than ASP.Net and then we’ll revisit our handlers to separate out our infrastructure code (logging for example) from our business logic.

Other Parts in the Series

Part 5
Part 4
Part 3
Part 1

C# Cloud Application Architecture – Commanding via a Mediator (Part 1)

When designing cloud based systems we often think about the benefits of loosely coupling systems, for example through queues and pub/sub mechanisms, but fall back on more traditional onion or layered patterns for the application architecture when alternative approaches might provide additional benefits.

My aim over a series of posts is to show how combining the Command pattern with the Mediator pattern can yield a number of benefits including:

  • A better separation of concerns with a clear demarcation of domains
  • An internally decomposed application that is simple to work with early in its lifecycle yet can easily evolve into a microservice approach
  • Reliable support for non-transactional or distributed data storage operations
  • Less repetitive infrastructure boilerplate to write, such as logging and telemetry

To illustrate this we’re going to refactor an example application, over a number of stages, from a layered onion ring style to a decoupled in-process application and finally on to a distributed application. An up front warning: to get the most out of these posts a fair amount of code reading is encouraged. It doesn’t make sense for me to walk through each change, and doing so would make the posts ridiculously long, but hopefully the sample code provided is clear and concise. It really is designed to go along with the posts.

The application is a RESTful HTTP API supporting some basic, simplified, functionality for an online store. The core API operations are:

  • Shopping cart management: get the cart, add products to the cart, clear the cart
  • Checkout: convert a cart into an order, mark the order as paid
  • Product: get a product

To support the above there is also a sense of a store product catalog; it’s not exposed through the API, but it is used internally.

I’m going to be making use of my own command mediation framework, AzureFromTheTrenches.Commanding, as it has some features that will be helpful to us as we progress towards a distributed application. Alternatives are available, such as the popular MediatR, and if this approach appeals to you I’d encourage you to look at those too.

Our Starting Point

The code for the onion ring application, our starting point, can be found on GitHub:

https://github.com/JamesRandall/CommandMessagePatternTutorial/tree/master/Part1a

Firstly let’s define what I mean by an onion ring or layered architecture. In the context of our example application it’s a series of layers built one on top of the other with calls up and down the chain never skipping a layer: the Web API talks to the application services, the application services to the data access layer, and the data access layer to the storage. Each layer drives the one below. To support this, each layer is broken out into its own assembly, and a mixture of encapsulation (at the assembly and class level) and careful reference management ensures this remains the case.

In the case of our application that means a Web API layer on top of an application services layer, on top of a data access layer, on top of the underlying storage.

Concrete class coupling is avoided through the use of interfaces, and these are constructor injected to promote a testable approach. Normally in such an architecture each layer has its own set of models (data access models, application models etc.) with mapping taking place between them; however for the sake of brevity, and as I’m just using mock data stores, I’ve ignored this and used a single set of models that can be found in the OnlineStore.Model assembly.
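To give a flavour of those layer contracts, the application layer’s product service in the starting solution is little more than a pass-through (a sketch consistent with the controller and repository interfaces shown later):

public interface IProductService
{
    Task<StoreProduct> GetAsync(Guid id);
}

internal class ProductService : IProductService
{
    private readonly IStoreProductRepository _repository;

    public ProductService(IStoreProductRepository repository)
    {
        _repository = repository;
    }

    // The application layer delegates straight down to the data access layer
    public Task<StoreProduct> GetAsync(Guid id)
    {
        return _repository.GetAsync(id);
    }
}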

As this is such a common pattern – it’s certainly the pattern I’ve come across most often over the last few years (where application architecture has been considered at all; sadly the spaghetti, make-it-up-as-you-go-along approach is still prevalent) – it’s worth looking at what’s good about it:

  1. Perhaps most importantly – it’s simple. You can draw out the architecture on a whiteboard with a set of circles or boxes in 30 seconds and it’s quite easy to explain what is going on and why.
  2. It’s testable. Obviously things get more complicated in the real world, but above the data access layer testing is really easy: everything is mockable and the major concrete external dependencies have been isolated in our data access layer.
  3. A good example of the pattern is open for extension but closed for modification: because there are clear contractual interfaces, decorators can be added to, for example, add telemetry without modifying business logic.
  4. It’s very well supported in tooling – if you’re using Visual Studio, its built-in refactoring tools or Resharper will have no problem crunching over this solution to support changes and help keep things consistent.
  5. It can be used in a variety of different operating environments. I’ve used a Web API for this example, but you can find onion / layer type architectures in all manner of solutions.
  6. It’s what’s typically been done before – most engineers are familiar with the pattern and it’s become almost a default choice.

I’ve built many successful systems myself using this pattern and, to a point, it’s served me well. However, I started looking for better approaches as I experienced some of its weaknesses:

  1. It’s not as loosely coupled and malleable as it seems. This might seem an odd statement to make given our example makes strong use of interfaces and composition via dependency injection, an approach often seen to promote loose coupling and flexibility. And it does, to a point. However, we still tend to think of operations in such an architecture in quite a tightly coupled way: this controller calls this method on this interface which is implemented by this service class, and so on down through the layers. It’s a very explicit invocation process.
  2. Interfaces and classes in the service layer often become “fat” – they tend to end up coupled around a specific part of the application domain (in our case, for example, the shopping basket) and used to group together functionality.
  3. As an onion ring application gets larger it’s quite common for the application layer to begin to resemble spaghetti – repositories are referenced from across the code base, services refer to services, and helpers to other helpers. You can mitigate this by decomposing the application layer into further assemblies and making use of various design patterns, but that only takes you so far. These are all sticking plasters over a fundamental problem with the onion ring approach: it’s designed to segregate at the layer level, not at the domain or bounded context level (we’ll talk about DDD later). It’s also common to see concerns start to bleed across layers that were once discrete boundaries.
  4. In the event of fundamental change the architecture can be quite fragile and resistant to easy change. Let’s consider a situation where our online store has become wildly successful and, to keep it serving the growing userbase, we need to break it apart to handle the growing scale. This will probably involve breaking our API into multiple components and using patterns such as queues to asynchronously run and throttle some actions. With the onion ring architecture we’re going to have to analyse what is probably a large codebase looking for ways to safely break it apart. Almost inevitably this will be more difficult than it first seems (I may have been here myself!) and we’ll uncover all manner of implicit tight coupling points.

There are of course solutions to many of these problems, particularly the code organisation issues, but at some point it’s worth considering if the architecture is still helping you or beginning to hinder you and if perhaps there are alternative approaches.

With that in mind, and as I said earlier, I’d like to introduce an alternative. It’s by no means the alternative, but it’s one I’ve found tremendously helpful when building applications for the cloud.

The Command Pattern with a Mediator

The classic command pattern aims to encapsulate all the information needed to perform an action typically within a class that contains properties (the data needed to perform the action) and an execute method (that undertakes the action). For example:

public class GetProductCommand : ICommand<Product>
{
    public Guid ProductId { get; set; }

    public Task<Product> Execute()
    {
        // ... logic
    }
}

This can be a useful way of structuring an application, but it tightly couples state and execution, and that can prove limiting if, for example, it would be helpful for the command to participate in a pub/sub approach, take part in a command chain (step 1 – store state for undo, step 2 – execute the command, step 3 – log that the command was executed), or operate across a process boundary.

A powerful improvement can be made to the approach by decoupling the execution from the command state via the mediator pattern. In this pattern all commands, each of which consists of simple serializable state, are dispatched to the mediator, and the mediator determines which handlers will execute the command.
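Stripped to its essentials the mediator is just a dispatch interface; a sketch of the shape (the framework’s real ICommandDispatcher offers more than this):

public interface ICommand<TResult>
{
}

public interface ICommandDispatcher
{
    // Looks up the handler(s) registered for the command's concrete type and executes them
    Task<TResult> DispatchAsync<TResult>(ICommand<TResult> command);
}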

Because the command is decoupled from execution it is possible to associate multiple handlers with a command, and moving a handler to a different process space (for example splitting a Web API into multiple Web APIs) is simply a matter of changing the configuration of the mediator. And because the command is simple state we can easily serialize it into, for example, an event store (a mediator may support serializing commands to stores directly, or this can be accomplished through additional handlers).

It’s perhaps worth also briefly touching on the CQRS pattern – while the command message pattern may facilitate a CQRS approach, it neither requires it nor has any particular opinion on it.

With that out of the way let’s take a look at how this pattern impacts our application’s architecture. Firstly, the code for our refactored solution can be found here:

https://github.com/JamesRandall/CommandMessagePatternTutorial/tree/master/Part1b

And from our layered approach we now end up with a solution organised around a set of domain applications – shopping cart, checkout and product – fronted by the Web API.

It’s important to realise when looking at this code that we’re going to do more work to take advantage of the capabilities being introduced here. This is really just a “least work” refactor to demonstrate some high level differences – hopefully over the next few posts you’ll see how we can really use these capabilities to simplify our solution. However, even at this early stage there are some clear points to note about how this affects the solution structure and code:

  1. Rather than structure our application architecture around technology we’ve restructured it around domains. Where previously we had a data access layer, an application layer and an API layer, we now have a shopping cart application, a checkout application, a product application and a Web API application. Rather than the focus being on technology boundaries, it’s more about business domain boundaries.
  2. Across boundaries (previously layers, now applications) we’ve shifted our invocation semantics from interfaces, methods and parameters to simple state that is serializable and persistable and communicated through a “black box” mediator or dispatcher. The instigator of an operation no longer has any knowledge of what will handle the operation and, as we’ll see later, that can even include the handler executing in a different process space such as another web application.
  3. I’ve deliberately used the term application for these business domain implementations as, even though they are all operating in the same process space, it really is best to think of them as self contained units. Other than a simple wiring class that allows them to be configured in the host application (via a dependency injector) they interoperate with their host (the Web API) and between themselves entirely through the dispatch of commands.
  4. The intent is that each application is fairly small in and of itself and that it can take the best approach required to solve its particular problem. In reality it’s quite typical that many of the applications follow the same conventions and patterns, as they do here, and when this is the case it’s best to establish some code organisation conventions. In fact it’s not uncommon to see a lightweight onion ring architecture used inside each of these applications (so for lovers of the onion ring – you don’t need to abandon it completely!).

Let’s start by comparing the controllers. In our traditional layered architecture our product controller looked like this:

[Route("api/[controller]")]
public class ProductController : Controller
{
    private readonly IProductService _productService;

    public ProductController(IProductService productService)
    {
        _productService = productService;
    }

    [HttpGet("{id}")]
    public async Task<StoreProduct> Get(Guid id)
    {
        return await _productService.GetAsync(id);
    }
}

As expected this accepts an instance of a service, in this case an IProductService, and its Get method simply passes the ID on down to it. Although the controller is abstracted away from the implementation of the IProductService, it is clearly and directly linked to one type of service and a method on that service.

Now let’s look at its replacement:

[Route("api/[controller]")]
public class ProductController : Controller
{
    private readonly ICommandDispatcher _dispatcher;

    public ProductController(ICommandDispatcher dispatcher)
    {
        _dispatcher = dispatcher;
    }

    [HttpGet("{id}")]
    public async Task<StoreProduct> Get(Guid id)
    {
        return await _dispatcher.DispatchAsync(new GetStoreProductQuery
        {
            ProductId = id
        });
    }
}

In this implementation the controller accepts an instance of ICommandDispatcher and then, rather than invoking a method on a service, it calls DispatchAsync on that dispatcher, supplying it a command model. The controller no longer has any knowledge of what is going to handle the request and all our commands are executed by discrete handlers. In this case the GetStoreProductQuery command is handled by the GetStoreProductQueryHandler in the Store.Application assembly:

internal class GetStoreProductQueryHandler : ICommandHandler<GetStoreProductQuery, StoreProduct>
{
    private readonly IStoreProductRepository _repository;

    public GetStoreProductQueryHandler(IStoreProductRepository repository)
    {
        _repository = repository;
    }

    public Task<StoreProduct> ExecuteAsync(GetStoreProductQuery command, StoreProduct previousResult)
    {
        return _repository.GetAsync(command.ProductId);
    }
}

It really does no more than the implementation of the ProductService in our initial application, but importantly it derives from the ICommandHandler generic interface, supplying as generic types the type of the command it handles and the result type. Our command mediation framework takes care of routing the command to this handler and will ultimately call ExecuteAsync on it. The framework we are using here allows commands to be chained, and so as well as being given the command state the handler is also given any previous result.
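For reference, the handler contract implied by the code above looks something like this sketch (the authoritative definition lives in the AzureFromTheTrenches.Commanding package):

public interface ICommandHandler<in TCommand, TResult> where TCommand : ICommand<TResult>
{
    // previousResult carries the output of any earlier handler in a chain
    Task<TResult> ExecuteAsync(TCommand command, TResult previousResult);
}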

Handlers are registered with the command registry as follows (this can be seen in the IServiceCollectionExtensions.cs file of Store.Application):

commandRegistry.Register<GetStoreProductQueryHandler>();

The command mediation framework has all it needs from the definition of GetStoreProductQueryHandler to map the handler to the command.

I think it’s fair to say that’s all pretty loosely coupled! In fact it’s so loose that if we weren’t going to go on to make more of this pattern we might conclude that the loss of immediate traceability isn’t worth it.

In the next part we’re going to visit some of our more complicated controllers and commands, such as the Put verb on the ShoppingCartController, to look at how we can massively simplify the code below:

[HttpPut("{productId}/{quantity}")]
public async Task<IActionResult> Put(Guid productId, int quantity)
{
    CommandResponse response = await _dispatcher.DispatchAsync(new AddToCartCommand
    {
        UserId = this.GetUserId(),
        ProductId = productId,
        Quantity = quantity
    });
    if (response.IsSuccess)
    {
        return Ok();
    }
    return BadRequest(response.ErrorMessage);
}

By way of a teaser, the aim is to end up with really thin controller actions that, across the piece, look like this:

[HttpPut("{productId}/{quantity}")]
public async Task<IActionResult> Put([FromRoute] AddToCartCommand command) => await ExecuteCommand(command);

We’ll then roll that approach out over the rest of the API. Then get into some really cool stuff!

Other Parts in the Series

Part 5
Part 4
Part 3
Part 2
