Author: James

C# Cloud Application Architecture – Commanding via a Mediator (Part 2)

In the last post we converted our traditional layered architecture into one organised around business domains, making use of the command and mediator patterns to promote a more loosely coupled approach. However, things were still very much a work in progress: our controllers remained fairly scruffy and our command handlers were largely a copy and paste of the services from the layered system.

In this post I’m going to concentrate on simplifying the controllers and undertake some minor rework on our commands to help with this. This part is going to be a bit more code heavy than the last and will involve diving into parts of ASP.Net Core. The code resulting from all these changes can be found on GitHub here:

https://github.com/JamesRandall/CommandMessagePatternTutorial/tree/master/Part2

As a reminder our aim is to move from controllers with actions that look like this:

[HttpPut("{productId}/{quantity}")]
public async Task<IActionResult> Put(Guid productId, int quantity)
{
    CommandResponse response = await _dispatcher.DispatchAsync(new AddToCartCommand
    {
        UserId = this.GetUserId(),
        ProductId = productId,
        Quantity = quantity
    });
    if (response.IsSuccess)
    {
        return Ok();
    }
    return BadRequest(response.ErrorMessage);
}

To controllers with actions that look like this:

[HttpPut("{productId}/{quantity}")]
public async Task<IActionResult> Put([FromRoute] AddToCartCommand command) => await ExecuteCommand(command);

And we want to do this in a way that makes adding future controllers and commands super simple and reliable.

We can do this fairly straightforwardly because we now only have a single “service”, our dispatcher, and our commands are simple state. That being the case we can use ASP.Net Core’s model binding to take care of populating our commands, along with some conventions and a base class to do the heavy lifting for each of our controllers. This focuses the complexity in a single, testable, place and means that our controllers all become simple and thin.

If we’re going to use model binding to construct our commands the first thing we need to be wary of is security: many of our commands have a UserId property that, as can be seen in the code snippet above, is set within the action from a claim. Covering claims based security and the options available in ASP.Net Core is a topic in and of itself and not massively important to the application architecture and code structure we’re focusing on, so for the sake of simplicity I’m going to use a hard coded GUID. Hopefully it’s clear from the code below where this ID would come from in a fully fledged solution:

public static class ControllerExtensions
{
    public static Guid GetUserId(this Controller controller)
    {
        // in reality this would pull the user ID from the claims e.g.
        //     return Guid.Parse(controller.User.FindFirst("userId").Value);
        return Guid.Parse("A9F7EE3A-CB0D-4056-9DB5-AD1CB07D3093");
    }
}

If we’re going to use model binding we need to be extremely careful that this ID cannot be set by a consumer as this would almost certainly lead to a security incident. In fact if we make an interim change to our controller action we can see immediately that we’ve got a problem:

[HttpPut("{productId}/{quantity}")]
public async Task<IActionResult> Put([FromRoute]AddToCartCommand command)
{
    CommandResponse response = await _dispatcher.DispatchAsync(command);
    if (response.IsSuccess)
    {
        return Ok();
    }
    return BadRequest(response.ErrorMessage);
}

The user ID has bled through into the Swagger definition but, because of how we’ve configured the routing with no userId parameter on the route definition, the binder will ignore any value we try to supply: there is nowhere to specify it as a route parameter, and a query parameter or request body will be ignored. Still, it’s not pleasant: it’s misleading to the consumer of the API and if, for example, we were reading the command from the request body the model binder would pick it up.

We’re going to take a three pronged approach to this so that we can robustly prevent this happening in all scenarios. Firstly, to support some of these changes, we’re going to rename the UserId property on our commands to AuthenticatedUserId and introduce an interface called IUserContextCommand as shown below that all our commands that require this information will implement:

public interface IUserContextCommand
{
    Guid AuthenticatedUserId { get; set; }
}

With that change made our AddToCartCommand now looks like this:

public class AddToCartCommand : ICommand<CommandResponse>, IUserContextCommand
{
    public Guid AuthenticatedUserId { get; set; }

    public Guid ProductId { get; set; }

    public int Quantity { get; set; }
}

I’m using the Swashbuckle.AspNetCore package to provide a Swagger definition and user interface, and fortunately it’s quite a configurable package that allows us to customise how it interprets actions (operations in its parlance) and schema through a filter system. We’re going to create and register both an operation filter and a schema filter that ensure any reference to AuthenticatedUserId, either inbound or outbound, is removed. The first code snippet below removes the property from any schema (request or response bodies) and the second removes it from any operation parameters – if you use models with the [FromRoute] attribute as we have done then Web API will correctly bind only the parameters specified in the route, but Swashbuckle will still include all the properties of the model in its definition.

public class SwaggerAuthenticatedUserIdFilter : ISchemaFilter
{
    private const string AuthenticatedUserIdProperty = "authenticatedUserId";
    private static readonly Type UserContextCommandType = typeof(IUserContextCommand);

    public void Apply(Schema model, SchemaFilterContext context)
    {
        if (UserContextCommandType.IsAssignableFrom(context.SystemType))
        {
            if (model.Properties.ContainsKey(AuthenticatedUserIdProperty))
            {
                model.Properties.Remove(AuthenticatedUserIdProperty);
            }
        }
    }
}
public class SwaggerAuthenticatedUserIdOperationFilter : IOperationFilter
{
    private const string AuthenticatedUserIdProperty = "authenticateduserid";

    public void Apply(Operation operation, OperationFilterContext context)
    {
        IParameter authenticatedUserIdParameter = operation.Parameters?.SingleOrDefault(x => x.Name.ToLower() == AuthenticatedUserIdProperty);
        if (authenticatedUserIdParameter != null)
        {
            operation.Parameters.Remove(authenticatedUserIdParameter);
        }
    }
}

These are registered in our Startup.cs file alongside Swagger itself:

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();
    services.AddSwaggerGen(c =>
    {
        c.SwaggerDoc("v1", new Info { Title = "Online Store API", Version = "v1" });
        c.SchemaFilter<SwaggerAuthenticatedUserIdFilter>();
        c.OperationFilter<SwaggerAuthenticatedUserIdOperationFilter>();
    });

    CommandingDependencyResolver = new MicrosoftDependencyInjectionCommandingResolver(services);
    ICommandRegistry registry = CommandingDependencyResolver.UseCommanding();

    services
        .UseShoppingCart(registry)
        .UseStore(registry)
        .UseCheckout(registry);
}

If we try this now we can see that our action looks as we expect:

With our Swagger work we’ve at least removed the AuthenticatedUserId property from the API definition; now we want to make sure ASP.Net ignores it during binding operations. A common means of doing this is to use the [BindNever] attribute on models, however we want our commands to remain free of the concerns of the host environment: we don’t want to have to remember to stamp [BindNever] on all of our models, and we definitely don’t want the assemblies that contain them to need to reference ASP.Net packages – after all, one of the aims of taking this approach is to promote a very loosely coupled design.
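For reference, the [BindNever] approach we're avoiding would look something like the sketch below – note how the attribute drags an ASP.Net Core package reference into the assembly containing the command:

```csharp
// Sketch of the rejected approach: [BindNever] works, but it couples the
// command model to the ASP.Net Core host environment.
using Microsoft.AspNetCore.Mvc.ModelBinding;

public class AddToCartCommand : ICommand<CommandResponse>, IUserContextCommand
{
    [BindNever] // requires referencing a Microsoft.AspNetCore.Mvc.* package
    public Guid AuthenticatedUserId { get; set; }

    public Guid ProductId { get; set; }

    public int Quantity { get; set; }
}
```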

This being the case we need to find another way. Other than the binding attributes, possibly the easiest way to prevent user IDs being set from a request body is to decorate the default body binder with some additional functionality that forces an empty GUID. Binders consist of two parts, the binder itself and a binder provider, so to add our binder we need two new classes (we also include a helper class with an extension method). Firstly our custom binder:

public class AuthenticatedUserIdAwareBodyModelBinder : IModelBinder
{
    private readonly IModelBinder _decoratedBinder;

    public AuthenticatedUserIdAwareBodyModelBinder(IModelBinder decoratedBinder)
    {
        _decoratedBinder = decoratedBinder;
    }

    public async Task BindModelAsync(ModelBindingContext bindingContext)
    {
        await _decoratedBinder.BindModelAsync(bindingContext);
        if (bindingContext.Result.Model is IUserContextCommand command)
        {
            command.AuthenticatedUserId = Guid.Empty;
        }
    }
}

You can see from the code above that we delegate all the actual binding down to the decorated binder and then simply check to see if we are dealing with a command that implements our IUserContextCommand interface, blanking out the GUID if necessary.

We then need a corresponding provider to supply this model binder:

internal static class AuthenticatedUserIdAwareBodyModelBinderProviderInstaller
{
    public static void AddAuthenticatedUserIdAwareBodyModelBinderProvider(this MvcOptions options)
    {
        IModelBinderProvider bodyModelBinderProvider = options.ModelBinderProviders.Single(x => x is BodyModelBinderProvider);
        int index = options.ModelBinderProviders.IndexOf(bodyModelBinderProvider);
        options.ModelBinderProviders.Remove(bodyModelBinderProvider);
        options.ModelBinderProviders.Insert(index, new AuthenticatedUserIdAwareBodyModelBinderProvider(bodyModelBinderProvider));
    }
}

internal class AuthenticatedUserIdAwareBodyModelBinderProvider : IModelBinderProvider
{
    private readonly IModelBinderProvider _decoratedProvider;

    public AuthenticatedUserIdAwareBodyModelBinderProvider(IModelBinderProvider decoratedProvider)
    {
        _decoratedProvider = decoratedProvider;
    }

    public IModelBinder GetBinder(ModelBinderProviderContext context)
    {
        IModelBinder modelBinder = _decoratedProvider.GetBinder(context);
        return modelBinder == null ? null : new AuthenticatedUserIdAwareBodyModelBinder(modelBinder);
    }
}

Our installation extension method looks for the default BodyModelBinderProvider class, extracts it, constructs our decorator around it, and replaces it in the set of binders.

Again, like with our Swagger filters, we configure this in the ConfigureServices method of our Startup class:

// This method gets called by the runtime. Use this method to add services to the container.
public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc(c =>
    {
        c.AddAuthenticatedUserIdAwareBodyModelBinderProvider();
    });
    services.AddSwaggerGen(c =>
    {
        c.SwaggerDoc("v1", new Info { Title = "Online Store API", Version = "v1" });
        c.SchemaFilter<SwaggerAuthenticatedUserIdFilter>();
        c.OperationFilter<SwaggerAuthenticatedUserIdOperationFilter>();
    });

    CommandingDependencyResolver = new MicrosoftDependencyInjectionCommandingResolver(services);
    ICommandRegistry registry = CommandingDependencyResolver.UseCommanding();

    services
        .UseShoppingCart(registry)
        .UseStore(registry)
        .UseCheckout(registry);
}

At this point we’ve configured ASP.Net so that consumers of the API cannot manipulate sensitive properties on our commands, and so for our next step we want to ensure that the AuthenticatedUserId property on our commands is populated with the legitimate ID of the logged in user.

While we could set this in our model binder (where currently we ensure it is blank) I’d suggest that’s a mixing of concerns, and in any case it would only be triggered on an action with a request body – it won’t work for a command bound from route or query parameters, for example. All that being the case I’m going to implement this through a simple action filter as below:

public class AssignAuthenticatedUserIdActionFilter : IActionFilter
{
    public void OnActionExecuting(ActionExecutingContext context)
    {
        foreach (object parameter in context.ActionArguments.Values)
        {
            if (parameter is IUserContextCommand userContextCommand)
            {
                userContextCommand.AuthenticatedUserId = ((Controller) context.Controller).GetUserId();
            }
        }
    }

    public void OnActionExecuted(ActionExecutedContext context)
    {
            
    }
}

Action filters run after model binding, authentication and authorization have occurred, so at this point we can be sure we can access the claims for a validated user.
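The filter also needs registering somewhere; one way to do this (an assumption on my part – registering it globally so every action benefits) is alongside our model binder provider in the AddMvc configuration:

```csharp
// Hypothetical registration: add the action filter globally via MvcOptions
// so every controller action gets the authenticated user ID assigned.
services.AddMvc(c =>
{
    c.AddAuthenticatedUserIdAwareBodyModelBinderProvider();
    c.Filters.Add(new AssignAuthenticatedUserIdActionFilter());
});
```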

Before we move on it’s worth reminding ourselves what our updated controller action now looks like:

[HttpPut("{productId}/{quantity}")]
public async Task<IActionResult> Put([FromRoute]AddToCartCommand command)
{
    CommandResponse response = await _dispatcher.DispatchAsync(command);
    if (response.IsSuccess)
    {
        return Ok();
    }
    return BadRequest(response.ErrorMessage);
}

To remove this final bit of boilerplate we’re going to introduce a base class that handles the dispatch and HTTP response interpretation:

public abstract class AbstractCommandController : Controller
{
    protected AbstractCommandController(ICommandDispatcher dispatcher)
    {
        Dispatcher = dispatcher;
    }

    protected ICommandDispatcher Dispatcher { get; }

    protected async Task<IActionResult> ExecuteCommand(ICommand<CommandResponse> command)
    {
        CommandResponse response = await Dispatcher.DispatchAsync(command);
        if (response.IsSuccess)
        {
            return Ok();
        }
        return BadRequest(response.ErrorMessage);
    }
}

With that in place we can finally get our controller to our target form:

[Route("api/[controller]")]
public class ShoppingCartController : AbstractCommandController
{
    public ShoppingCartController(ICommandDispatcher dispatcher) : base(dispatcher)
    {
            
    }

    [HttpPut("{productId}/{quantity}")]
    public async Task<IActionResult> Put([FromRoute] AddToCartCommand command) => await ExecuteCommand(command);

    // ... other verbs
}

To roll this out easily over the rest of our commands we’ll need to get them into a more consistent shape – at the moment they have a mix of response types, which isn’t ideal for translating the results into HTTP responses. We’ll unify them so they all return a CommandResponse object that can optionally also contain a result object:

public class CommandResponse
{
    protected CommandResponse()
    {
            
    }

    public bool IsSuccess { get; set; }

    public string ErrorMessage { get; set; }

    public static CommandResponse Ok() {  return new CommandResponse { IsSuccess = true};}

    public static CommandResponse WithError(string error) {  return new CommandResponse { IsSuccess = false, ErrorMessage = error};}
}

public class CommandResponse<T> : CommandResponse
{
    public T Result { get; set; }

    public static CommandResponse<T> Ok(T result) { return new CommandResponse<T> { IsSuccess = true, Result = result}; }

    public new static CommandResponse<T> WithError(string error) { return new CommandResponse<T> { IsSuccess = false, ErrorMessage = error }; }

    public static implicit operator T(CommandResponse<T> from)
    {
        return from.Result;
    }
}

With all that done we can complete our AbstractCommandController class and convert all of our controllers to the simpler syntax. By making these changes we’ve removed all the repetitive boilerplate code from our controllers, which means fewer tests to write and maintain, less scope for us to make mistakes and less scope for us to introduce inconsistencies. Instead we’ve leveraged the battle hardened model binding capabilities of ASP.Net and concentrated all of our response handling in a single class that we can test extensively. And if we want to add new capabilities all we need to do is add commands, handlers and controllers that follow our convention and all the wiring is taken care of for us.
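To give a flavour of what “completing” the base class might involve, here is a sketch of a generic overload for commands that return a CommandResponse&lt;T&gt; – my own illustration rather than code lifted from the repository:

```csharp
// Sketch of an additional method on AbstractCommandController for commands
// whose handlers return a result payload. Names are assumed from the post.
protected async Task<IActionResult> ExecuteCommand<TResult>(ICommand<CommandResponse<TResult>> command)
{
    CommandResponse<TResult> response = await Dispatcher.DispatchAsync(command);
    if (response.IsSuccess)
    {
        // unlike the non-generic version we return the result as the response body
        return Ok(response.Result);
    }
    return BadRequest(response.ErrorMessage);
}
```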

In the next part we’ll explore how we can add validation to our commands in a way that is loosely coupled and reusable in domains other than ASP.Net and then we’ll revisit our handlers to separate out our infrastructure code (logging for example) from our business logic.

 

C# Cloud Application Architecture – Commanding via a Mediator (Part 1)

When designing cloud based systems we often think about the benefits of loosely coupling systems, for example through queues and pub/sub mechanisms, but fall back on more traditional onion or layered patterns for the application architecture when alternative approaches might provide additional benefits.

My aim over a series of posts is to show how combining the Command pattern with the Mediator pattern can yield a number of benefits including:

  • A better separation of concerns with a clear demarcation of domains
  • An internally decomposed application that is simple to work with early in its lifecycle yet can easily evolve into a microservice approach
  • The ability to reliably support non-transactional or distributed data storage operations
  • Less repetitive infrastructure boilerplate to write, such as logging and telemetry

To illustrate this we’re going to refactor an example application, over a number of stages, from a layered onion ring style to a decoupled in-process application and finally on through to a distributed application. An up front warning: to get the most out of these posts a fair amount of code reading is encouraged. It doesn’t make sense for me to walk through each change, and doing so would make the posts ridiculously long, but hopefully the sample code provided is clear and concise. It really is designed to go along with the posts.

The application is a RESTful HTTP API supporting some basic, simplified, functionality for an online store. The core API operations are:

  • Shopping cart management: get the cart, add products to the cart, clear the cart
  • Checkout: convert a cart into an order, mark the order as paid
  • Product: get a product

To support the above there is also a store product catalog; it’s not exposed through the API but it is used internally.

I’m going to be making use of my own command mediation framework, AzureFromTheTrenches.Commanding, as it has some features that will be helpful to us as we progress towards a distributed application. Alternatives are available, such as the popular MediatR, and if this approach appeals to you I’d encourage you to look at those too.

Our Starting Point

The code for the onion ring application, our starting point, can be found on GitHub:

https://github.com/JamesRandall/CommandMessagePatternTutorial/tree/master/Part1a

Firstly let’s define what I mean by an onion ring or layered architecture. In the context of our example application it’s a series of layers built one on top of the other with calls up and down the chain never skipping a layer: the Web API talks to the application services, the application services to the data access layer, and the data access layer to the storage. Each layer drives the one below. To support this each layer is broken out into its own assembly, and a mixture of encapsulation (at the assembly and class level) and careful reference management ensures this remains the case.

In the case of our application this looks like this:

Concrete class coupling is avoided through the use of interfaces and these are constructor injected to promote a testable approach. Normally in such an architecture each layer has its own set of models (data access models, application models etc.) with mapping taking place between them; however for the sake of brevity, and as I’m just using mock data stores, I’ve ignored this and used a single set of models that can be found in the OnlineStore.Model assembly.

As this is such a common pattern – it’s certainly the pattern I’ve come across most often over the last few years (where application architecture has been considered at all; sadly the spaghetti / make it up as you go along approach is still prevalent) – it’s worth looking at what’s good about it:

  1. Perhaps most importantly – it’s simple. You can draw out the architecture on a whiteboard with a set of circles or boxes in 30 seconds and it’s quite easy to explain what is going on and why.
  2. It’s testable. Obviously things get more complicated in the real world, but above the data access layer testing is really easy: everything is mockable and the major concrete external dependencies have been isolated in our data access layer.
  3. A good example of the pattern is open for extension but closed for modification: because there are clear contractual interfaces, decorators can be added to, for example, add telemetry without modifying business logic.
  4. It’s very well supported in tooling – if you’re using Visual Studio, its built-in refactoring tools or ReSharper will have no problem crunching over this solution to support changes and help keep things consistent.
  5. It can be used in a variety of different operating environments – I’ve used a Web API for this example but you can find onion / layer type architectures in all manner of solutions.
  6. It’s what’s typically been done before – most engineers are familiar with the pattern and it’s become almost a default choice.

I’ve built many successful systems myself using this pattern and, to a point, it’s served me well. However I started looking for better approaches as I experienced some of its weaknesses:

  1. It’s not as loosely coupled and malleable as it seems. This might seem an odd statement to make given our example makes strong use of interfaces and composition via dependency injection, an approach often seen to promote loose coupling and flexibility. And it does to a point. However we still tend to think of operations in such an architecture in quite a tightly coupled way: this controller calls this method on this interface which is implemented by this service class and on down through the layers. It’s a very explicit invocation process.
  2. Interfaces and classes in the service layer often become “fat” – they tend to end up coupled around a specific part of the application domain (in our case, for example, the shopping basket) and used to group together functionality.
  3. As an onion ring application gets larger it’s quite common for the application layer to begin to resemble spaghetti – repositories are referenced from across the code base, services refer to services, and helpers to other helpers. You can mitigate this by decomposing the application layer into further assemblies, and by making use of various design patterns, but that only takes you so far. These are all sticking plaster over a fundamental problem with the onion ring approach: it’s designed to segregate at the layer level, not at the domain or bounded context level (we’ll talk about DDD later). It’s also common to see concerns start to bleed across the layers where once there were discrete boundaries.
  4. In the event of fundamental change the architecture can be quite fragile and resistant to easy change. Let’s consider a situation where our online store has become wildly successful and, to keep it serving the growing userbase, we need to break it apart to handle the growing scale. This will probably involve breaking our API into multiple components and using patterns such as queues to asynchronously run and throttle some actions. With the onion ring architecture we’re going to have to analyse what is probably a large codebase looking for how we can safely break it apart. Almost inevitably this will be more difficult than it first seems (I may have been here myself!) and we’ll uncover all manner of implicit tight coupling points.

There are of course solutions to many of these problems, particularly the code organisation issues, but at some point it’s worth considering if the architecture is still helping you or beginning to hinder you and if perhaps there are alternative approaches.

With that in mind, and as I said earlier, I’d like to introduce an alternative. It’s by no means the alternative but it’s one I’ve found tremendously helpful when building applications for the cloud.

The Command Pattern with a Mediator

The classic command pattern aims to encapsulate all the information needed to perform an action typically within a class that contains properties (the data needed to perform the action) and an execute method (that undertakes the action). For example:

public class GetProductCommand : ICommand<Product>
{
    public Guid ProductId { get; set; }

    public Task<Product> Execute()
    {
         // ... logic
    }
}

This can be a useful way of structuring an application but it tightly couples state and execution, and that can prove limiting if, for example, it would be helpful for the command to participate in a pub/sub approach, take part in a command chain (step 1 – store state for undo, step 2 – execute the command, step 3 – log that the command was executed), or operate across a process boundary.

A powerful improvement can be made by decoupling the execution from the command state via the mediator pattern. In this pattern all commands, each of which consists of simple serializable state, are dispatched to the mediator and the mediator determines which handlers will execute the command:

Because the command is decoupled from execution it is possible to associate multiple handlers with a command, and moving a handler to a different process space (for example splitting a Web API into multiple Web APIs) is simply a matter of changing the configuration of the mediator. And because the command is simple state we can easily serialize it into, for example, an event store (the example below illustrates a mediator that is able to serialize commands to stores directly, but this can also be accomplished through additional handlers):

It’s perhaps worth also briefly touching on the CQRS pattern – while the command message pattern may facilitate a CQRS approach it neither requires it nor has any particular opinion on it.
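Stripped of framework concerns, the dispatch mechanism described above can be sketched in a few lines of C#. This is illustrative only – the real framework adds registration options, handler lifetimes, chaining and much more – and the interface shapes are assumed from the snippets later in the post:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

public interface ICommand<TResult> { }

public interface ICommandHandler<in TCommand, TResult> where TCommand : ICommand<TResult>
{
    Task<TResult> ExecuteAsync(TCommand command, TResult previousResult);
}

public class SimpleDispatcher
{
    // Maps a command type to a function that executes it against its handler
    private readonly Dictionary<Type, Func<object, object>> _handlers =
        new Dictionary<Type, Func<object, object>>();

    public void Register<TCommand, TResult>(ICommandHandler<TCommand, TResult> handler)
        where TCommand : ICommand<TResult>
    {
        _handlers[typeof(TCommand)] = command =>
            handler.ExecuteAsync((TCommand)command, default(TResult));
    }

    public Task<TResult> DispatchAsync<TResult>(ICommand<TResult> command)
    {
        // The caller has no knowledge of which handler runs the command
        return (Task<TResult>)_handlers[command.GetType()](command);
    }
}
```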

With that out of the way let’s take a look at how this pattern impacts our application’s architecture. Firstly, the code for our refactored solution can be found here:

https://github.com/JamesRandall/CommandMessagePatternTutorial/tree/master/Part1b

And from our layered approach we now end up with something that looks like this:

It’s important to realise when looking at this code that we’re going to do more work to take advantage of the capabilities being introduced here. This is really just a “least work” refactor to demonstrate some high level differences – hopefully over the next few posts you’ll see how we can really use these capabilities to simplify our solution. However even at this early stage there are some clear points to note about how this affects the solution structure and code:

  1. Rather than structure our application architecture around technology we’ve restructured it around domains. Where previously we had a data access layer, an application layer and an API layer we now have a shopping cart application, a checkout application, a product application and a Web API application. Rather than the focus being on technology boundaries it’s more about business domain boundaries.
  2. Across boundaries (previously layers, now applications) we’ve shifted our invocation semantics from interfaces, methods and parameters to simple state that is serializable and persistable and communicated through a “black box” mediator or dispatcher. The instigator of an operation no longer has any knowledge about what will handle the operation and as we’ll see later that can even include the handler executing in a different process space such as another web application.
  3. I’ve deliberately used the term application for these business domain implementations as even though they are all operating in the same process space it really is best to think of them as self contained units. Other than a simple wiring class to allow them to be configured in the host application (via a dependency injector) they interoperate with their host (the Web API) and between themselves entirely through the dispatch of commands.
  4. The intent is that each application is fairly small in and of itself and that it can take the best approach required to solve its particular problem. In reality it’s quite typical that many of the applications follow the same conventions and patterns, as they do here, and when this is the case it’s best to establish some code organisation conventions. In fact it’s not uncommon to see a lightweight onion ring architecture used inside each of these applications (so for lovers of the onion ring – you don’t need to abandon it completely!).

Let’s start by comparing the controllers. In our traditional layered architecture our product controller looked like this:

[Route("api/[controller]")]
public class ProductController : Controller
{
    private readonly IProductService _productService;

    public ProductController(IProductService productService)
    {
        _productService = productService;
    }

    [HttpGet("{id}")]
    public async Task<StoreProduct> Get(Guid id)
    {
        return await _productService.GetAsync(id);
    }
}

As expected this accepts an instance of a service, in this case an IProductService, and its Get method simply passes the ID on down to it. Although the controller is abstracted away from the implementation of the IProductService it is clearly and directly linked to one type of service and a method on that service.

Now lets look at its replacement:

[Route("api/[controller]")]
public class ProductController : Controller
{
    private readonly ICommandDispatcher _dispatcher;

    public ProductController(ICommandDispatcher dispatcher)
    {
        _dispatcher = dispatcher;
    }

    [HttpGet("{id}")]
    public async Task<StoreProduct> Get(Guid id)
    {
        return await _dispatcher.DispatchAsync(new GetStoreProductQuery
        {
            ProductId = id
        });
    }
}

In this implementation the controller accepts an instance of ICommandDispatcher and then, rather than invoking a method on a service, it calls DispatchAsync on that dispatcher, supplying it a command model. The controller no longer has any knowledge of what is going to handle this request and all our commands are executed by discrete handlers. In this case the GetStoreProductQuery command is handled by the GetStoreProductQueryHandler in the Store.Application assembly:

internal class GetStoreProductQueryHandler : ICommandHandler<GetStoreProductQuery, StoreProduct>
{
    private readonly IStoreProductRepository _repository;

    public GetStoreProductQueryHandler(IStoreProductRepository repository)
    {
        _repository = repository;
    }

    public Task<StoreProduct> ExecuteAsync(GetStoreProductQuery command, StoreProduct previousResult)
    {
        return _repository.GetAsync(command.ProductId);
    }
}

It really does no more than the implementation of the ProductService in our initial application but, importantly, it implements the ICommandHandler generic interface, supplying as generic types the type of the command it is handling and the result type. Our command mediation framework takes care of routing the command to this handler and will ultimately call ExecuteAsync on it. The framework we are using here allows commands to be chained, and so as well as being given the command state the handler is also given any previous result.

Handlers are registered against their commands as follows (this can be seen in the IServiceCollectionExtenions.cs file of Store.Application):

commandRegistry.Register<GetStoreProductQueryHandler>();

The command mediation framework has all it needs from the definition of GetStoreProductQueryHandler to map the handler to the command.

I think it’s fair to say that’s all pretty loosely coupled! In fact it’s so loose that if we weren’t going to go on to make more of this pattern we might conclude that the loss of immediate traceability isn’t worth it.

In the next part we’re going to visit some of our more complicated controllers and commands, such as the Put verb on the ShoppingCartController, to look at how we can massively simplify the code below:

[HttpPut("{productId}/{quantity}")]
public async Task<IActionResult> Put(Guid productId, int quantity)
{
    CommandResponse response = await _dispatcher.DispatchAsync(new AddToCartCommand
    {
        UserId = this.GetUserId(),
        ProductId = productId,
        Quantity = quantity
    });
    if (response.IsSuccess)
    {
        return Ok();
    }
    return BadRequest(response.ErrorMessage);
}

By way of a teaser the aim is to end up with really thin controller actions that, across the piece, look like this:

[HttpPut("{productId}/{quantity}")]
public async Task<IActionResult> Put([FromRoute] AddToCartCommand command) => await ExecuteCommand(command);

We’ll then roll that approach out over the rest of the API. Then get into some really cool stuff!

Part 2 can be found here.

Cosmos DB x-ms-partitionkey Error

A small tip but one that might save you some time – if you’re trying to run Cosmos DB queries via the REST API you might encounter the following Bad Request error:

The partition key supplied in x-ms-partitionkey header has fewer components than defined in the the collection.

What you actually need to specify is the partition key for your collection using the x-ms-documentdb-partitionkey header.

An annoyingly misleading error message!

Upgrading an ASP.Net Core 1.1 project to ASP.Net Core 2.0 and running it on Azure

Following the instructions on the ASP.Net Core 2.0 announcement blog post I was quickly able to get a website updated to ASP.Net Core 2.0 and run it locally.

The website is hosted on Azure in the App Service model and unfortunately after publishing my project I encountered IIS 502.5 errors. Luckily this was in a deployment slot so my public website was unaffected.

After a bit of head scratching I found two extra steps were required to upgrade an existing web site.

Firstly, in your .csproj file you should have an ItemGroup like the one below:

<ItemGroup>
    <DotNetCliToolReference Include="Microsoft.VisualStudio.Web.CodeGeneration.Tools" Version="1.0.1" />
</ItemGroup>

You need to change the version to 2.0.0.

Secondly you need to clear out all the old files from your Azure hosted instance. The easiest way to do this is to open Kudu (Advanced Tools in the portal), navigate to the site/wwwroot folder and delete them.

After that if you publish your website again it should work fine.

Hope that saves someone a few hours of head-scratching.

Open Source Commanding Framework

I’ve been making use of a lightweight configuration based commanding framework in a recent project that proved to be really useful and so I’ve spun it out into an open source project on GitHub, documented it, and released the packages on NuGet.

What I’ve found most useful about it is that commands and actors are registered and configured using an IoC type syntax that also defines how dispatch and execution behave – by changing the configuration you can update your commands to run in memory, over HTTP or via Azure Storage queues without changing your actual application code.

It’s a set of .NET Standard assemblies and runs well with both ASP.Net and ASP.Net Core (along with WebJobs and other Azure type goodness). It’s designed specifically for dependency inversion and IoC containers and works with Unity, Autofac, Ninject and the out-of-the-box ASP.Net Core service container.

I’ve released it under the permissive MIT license and I hope it’s useful to you. If you get stuck or have any questions I’m best contacted on Twitter as @azuretrenches.

ApplicationSupport Update

I first wrote this back in 2012 as a set of commonly useful abstractions for building applications in the cloud. It’s evolved over time but an awful lot of things have changed and grown in the last 5 years and although many of the core patterns and abstractions are still useful it’s become hindered by its own weight when put into a more recent context. When I wrote it all sorts of things that are effectively “off the shelf” now just didn’t exist. And of course, I hope, I’ve learned and grown as a developer and architect since then.

I’m also increasingly wanting to take advantage of new developments in the .NET space such as .NET Standard and ASP.Net Core and the AccidentalFish.ApplicationSupport framework is very much a .NET 4.x rooted framework.

All that being the case, and being mindful that a reasonable number of people are using this framework in production software, my intention is to place the current version (6.0.0) of the framework into maintenance mode and start afresh with a simplified framework aimed squarely at supporting .NET Standard and taking advantage of recent developments. That leaves the way open for future updates to the existing framework but lets me move forward with some major changes.

My goals for this major reworking include:

  • .NET Standard support to enable use in as full a range as possible of .NET runtimes (Core, Framework, Xamarin etc.)
  • Increased modularity of packages so you can use just what you need
  • A removal of the configuration framework and replacement by an abstracted bridge to external configuration sources
  • A removal of the logging abstraction framework and adoption of the abstraction layer provided by Microsoft in the Microsoft.Extensions.Logging.Abstractions package.
  • A pulling apart of the dependency injection abstraction framework.

Before I release it I’ll be reworking a significantly sized product of my own (Simple Status Page) over to it which should mean it hits the open source world in reasonable shape.

The code will be on GitHub just as soon as I can figure out what to call it!

If you want to discuss this then the best thing to do is to get in touch with me on Twitter at @azuretrenches.

Introductory Guide to AABB Tree Collision Detection

Those of you who read my Christmas break update will know I’ve taken a slight detour into low level graphics and games programming, writing a C++ voxel engine from the ground up for fun and a challenge. I’ve continued to tinker with it when I can and have since added support for voxel based sprites. In the screenshot below the aliens, player and bullet are all made up of voxels and move around the world space in a fashion vaguely reminiscent of the classic Space Invaders game:

2017_01_08_2

Though I realise it just looks like a flat textured mesh, and that a flat textured mesh would be many times more efficient than voxels for this, the ground is also made up of voxels.

Of course as soon as you add sprites or anything that moves to a world you start to think about detecting collisions, and doing this efficiently remains an interesting area of development with different approaches being more or less optimal for different conditions. It’s essentially a spatial organisation problem, so although you might think “huh, collision detection, what use is that to me?” it’s not hard to see how similar indexing approaches can be useful in answering similar questions that don’t involve games. Similarly, although the A* algorithm, for example, is traditionally thought of as a gaming algorithm I’ve found myself using it in some very interesting non-gaming spaces.

In the case of collisions between sprites in my voxel engine I need to consider hundreds of objects made up of, in sum, millions of voxels spread across an indeterminate, but essentially very large and mostly sparse, three dimensional space. Pixel / voxel perfect detection between every one of those sprites is clearly going to be horrifically expensive and so typically collision detection is broken down into two phases:

  1. Broad range collisions – quickly draw up a shortlist of probable collisions.
  2. Narrow range collisions – use additional detail to filter the probable collisions resulting from the broad range pass into the actual pixel / voxel perfect collisions.

My first attempt at solving the broad range problem in my voxel engine is a common one and takes advantage of an efficient means of determining if two boxes (2d or 3d) intersect that relies on them being axis aligned – hence the axis aligned bounding box, or AABB. If that sounds complicated – don’t worry, it isn’t and I’ll explain it later.

There are many implementations of this available online and various blog posts and tutorials but I didn’t find anything that brought it all together and so what follows is a step by step guide to AABBs and how to sort them in a tree to quickly determine intersections. There is sample code provided in the form of the C++ implementation in my voxel engine and at the bottom of this post I explain how you can use this (just the AABB tree implementation) in your own code. However this post is more about the theory and once you understand that a simple implementation is fairly straightforward (though there are numerous optimisations, some of which are in my sample code some of which aren’t, that can complicate things).

For the purposes of keeping this guide simple I’m going to draw diagrams and talk about 2 dimensional boxes (rectangles) rather than 3 dimensional boxes; however, to add the 3rd dimension you quite literally just add the 3rd dimension. Wherever x and y apply in the below, add in z following the same pattern as for x and y. Hopefully that is clear in the sample code but if you have any questions you can always reach out on Twitter.

What are AABBs?

AABBs are simpler than they sound – they are essentially boxes whose axes (so x,y for 2d and x,y,z for 3d) align / run in the same direction. The bounding part of the name is because when used for collision detection or as part of a tree they commonly contain, or bind, other boxes. The diagram below shows two simple and compatible AABBs:

simpleaabbs3

In contrast the two boxes shown in the diagram below are not AABBs as their axes do not align:

incompatibleaabbs3

A key characteristic of an AABB is that the space it occupies can be defined by 2 points irrespective of whether it is in a 2 or 3 dimensional space. In a 2 dimensional space the 2 points are (minx, miny) and (maxx, maxy).

This can be used to perform a very fast check as to whether or not two AABBs intersect. Consider the two AABBs in the diagram below:

intersectaabbs3

In this diagram we have two AABBs defined by a pair of points and the result of the following expression can determine whether or not they intersect:

maxx1 > minx2 && minx1 < maxx2 && maxy1 > miny2 && miny1 < maxy2

An important point to note about that expression is that it is made up of a series of ands, which means that evaluation will stop as soon as one of the conditions fails.
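As a sketch of the above (using my own minimal types, not the sample code’s), the 2D test looks like this:

```cpp
// A 2D axis-aligned bounding box defined by its two corner points.
struct AABB {
    float minX, minY;
    float maxX, maxY;
};

// True when the two boxes overlap. The && chain short-circuits, so
// evaluation stops at the first comparison that fails.
bool intersects(const AABB& a, const AABB& b) {
    return a.maxX > b.minX && a.minX < b.maxX &&
           a.maxY > b.minY && a.minY < b.maxY;
}
```

Adding the third dimension is just a matter of adding minZ/maxZ members and a third pair of comparisons following the same pattern.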

I’m fortunate working in a voxel engine in that objects in my game world are naturally axis aligned: voxels essentially being 3d pixels or tiny cubes. However what if your objects aren’t naturally aligned or are made up of shapes other than boxes? This is where the bounding part of the AABB comes in as you need to create a bounding box that encompasses the complex shape as shown in the diagram below:

boundingaabbs2

Obviously testing the AABBs for an intersection will not result in pixel perfect collision detection but remember the primary goal of using AABBs is in the broad range part of the process. Having quickly and cheaply determined that the two AABBs in the diagram above do not intersect we can save ourselves the computational expense of trying to figure out if two complex shapes intersect.

The AABB Tree

Using the above approach you can see how we can quickly and easily test for collisions between two objects in your world space – but what if you have 100 objects? Or 1000? No matter how efficient the individual tests, comparing 1000 AABBs against each other is going to be an expensive and highly wasteful operation.

This is where the AABB tree comes in. It allows us to organise and index our AABBs to minimise the number of AABB intersection tests that need to be made by slicing the world up using, guess what, more AABBs.

If you’ve not come across them before, trees are incredibly useful hierarchical data structures and there are many variants on the basic concept (if such things interest you an excellent, if quite formal, book on the subject is Introduction to Algorithms). Before continuing it’s worth gaining a basic knowledge of the structure and terminology from this Wikipedia article.

In the case of the AABB tree presented here the root, branch and leaves have some very specific properties:

Branch – Our branches always have exactly two children (known as left and right) and are assigned an AABB that is large enough to contain all of their descendants.
Leaf – Our leaves are associated with a game world object and through that have an AABB. A leaf’s AABB must fit entirely within its parent’s AABB and, due to how our branches are defined, that means it fits in every ancestor’s AABB.
Root – Our root may be a branch or a leaf.

The best way to illustrate how this works is through a worked example.

Constructing an AABB Tree

Imagine we have an empty world – and so an empty tree – to which we are adding our first object. As our tree is currently empty we create a leaf node that corresponds to our new object and shares its AABB, and assign that leaf to be the root:

treeaabb___1

Now we add a second object to our world; it doesn’t intersect with our first node, and something interesting happens to our tree:

treeaabb2

When we added the second object to our game world a number of things occurred:

  1. We created a branch node for our tree and assigned it an AABB that is large enough to contain both object (1) and object (2).
  2. We created a new leaf node for object (2) and attached it to our new branch node.
  3. We took our original leaf node for object (1) and attached it to our new branch node.
  4. We made the new branch node the root of the tree.

Ok. Let’s add another object to the game world and this time we’ll have it intersect with an existing object:

treeaabb3

Again when we added this object some interesting things happened to our tree:

  1. We created a new branch node (b) and assigned it an AABB that encompassed objects (1) and (3).
  2. We created a new leaf node for object (3) and assigned it to branch (b).
  3. We moved leaf node (1) to be a child of branch node (b) and attached this new branch node (b) to branch node (a).
  4. This is subtle but important: we adjusted the AABB assigned to branch node (a) so that it accounts for the new leaf node. If we hadn’t done that the AABB assigned to branch node (a) would no longer have been big enough to contain the AABBs of its descendants.

Essentially every time we add a new game world object we manipulate the tree so that the rules for branch and root nodes that I described earlier still apply. This being the case we can describe a generic process for adding a new game world object into the tree:

  1. Create a leaf node for the object and assign it an AABB based on its associated object.
  2. Find the best existing node (leaf or branch) in the tree to make the new leaf a sibling of.
  3. Create a new branch node for the located node and the new leaf and assign it an AABB that contains both nodes (essentially combine the AABBs of the located node and the new leaf).
  4. Attach the new leaf to the new branch node.
  5. Remove the existing node from the tree and attach it to the new branch node.
  6. Attach the new branch node as a child of the existing node’s previous parent node.
  7. Walk back up the tree adjusting the AABB of all of our ancestors to ensure they still contain the AABBs of all of their descendants.

Step 2 in the above does beg the question: how do you find the best existing node in the tree to make the new leaf a sibling of? Essentially this involves descending the tree and evaluating the likely cost of attaching to the left or right of each branch you move through. The better the decision you can make the more balanced the tree and the cheaper the subsequent queries.

A common heuristic used here is to assign a cost to the surface area of the left and right nodes having been adjusted for the addition of the new leaf’s AABB and descend in the direction of the cheapest node until you find yourself on a leaf.
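As a minimal sketch of that heuristic (the names and types here are my own – the sample code differs), in 2D we can cost a child by how much its box would have to grow to absorb the new leaf:

```cpp
struct AABB { float minX, minY, maxX, maxY; };

// Area works as a 2D stand-in for surface area; in 3D you would sum
// the areas of the box's faces instead.
float area(const AABB& b) {
    return (b.maxX - b.minX) * (b.maxY - b.minY);
}

// The smallest AABB containing both a and b.
AABB combine(const AABB& a, const AABB& b) {
    return { a.minX < b.minX ? a.minX : b.minX,
             a.minY < b.minY ? a.minY : b.minY,
             a.maxX > b.maxX ? a.maxX : b.maxX,
             a.maxY > b.maxY ? a.maxY : b.maxY };
}

// How much bigger a child's AABB would become if it absorbed the new
// leaf. Zero means the leaf already fits inside the child.
float growthCost(const AABB& child, const AABB& newLeaf) {
    return area(combine(child, newLeaf)) - area(child);
}
```

At each branch you compare the growth cost of the left and right children for the new leaf’s AABB and descend towards the smaller, stopping when you reach a leaf.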

Querying the AABB Tree

This is where all of our hard work pays off – it’s very simple and very fast. If we want to find all the possible collisions for a given AABB object all we need to do, starting at the root of the tree, is:

  1. Check to see if the current node intersects with the test object’s AABB.
  2. If it does and it’s a leaf node then this is a collision, so add it to the list of collisions.
  3. If it does and it’s a branch node then descend to the left and right and recursively repeat this process.

At the end of the above your list will contain all the possible collisions for your test object and we will have minimised the number of AABB intersection checks we had to perform, as we will not have descended down any pathways (and therefore all their subsequent children) that could not intersect with the test AABB.

In implementation it’s best not to actually use a recursive approach as this can get expensive (and could overflow the call stack) on large trees. Instead just maintain a stack / list of nodes that are to be further explored, as follows:

  1. Push the root node onto a stack (in C++ I use std::stack)
  2. While the stack is not empty:
    1. Pop from the stack
    2. Check and see if the node intersects with the test AABB object
    3. If it does then either:
      1. If it is a leaf node then this is a match for a collision. Add the leaf node (or its referenced object) to the list of collisions.
      2. If it is a branch node then push the children (the left and right nodes) on to the stack.

Understanding how to iterate over trees non-recursively can be very useful.
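The steps above can be sketched as follows (my own pointer-based node layout for illustration – the sample code’s AABBTree organises its nodes differently):

```cpp
#include <stack>
#include <vector>

struct AABB { float minX, minY, maxX, maxY; };

bool intersects(const AABB& a, const AABB& b) {
    return a.maxX > b.minX && a.minX < b.maxX &&
           a.maxY > b.minY && a.minY < b.maxY;
}

// Leaves have no children; branches always have exactly two.
struct Node {
    AABB box;
    const Node* left = nullptr;
    const Node* right = nullptr;
    bool isLeaf() const { return left == nullptr; }
};

// Collect every leaf whose AABB overlaps 'test' without recursing:
// any subtree whose AABB misses the test box is pruned wholesale.
std::vector<const Node*> queryOverlaps(const Node* root, const AABB& test) {
    std::vector<const Node*> hits;
    std::stack<const Node*> pending;
    if (root) pending.push(root);
    while (!pending.empty()) {
        const Node* node = pending.top();
        pending.pop();
        if (!intersects(node->box, test)) continue;
        if (node->isLeaf()) {
            hits.push_back(node);
        } else {
            pending.push(node->left);
            pending.push(node->right);
        }
    }
    return hits;
}
```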

Updating the AABB Tree

In most (but not all) scenarios involving collision detection at least some of the objects in the world are moving. As the objects move that does require the tree to be updated and this is accomplished by removing the leaf that corresponds to the world object and reinserting it.

This can be an expensive operation, and you can minimise the number of times you will need to do it if you express the movement of your world objects using a velocity vector and use this to “fatten” the AABBs that you insert into the tree. For example take the object in the diagram below: it has an (x,y) velocity of (1,0) and has had its bounding AABB fattened accordingly:

velocityaabbs

How much you fatten the AABBs is a trade-off between update cost, predictability and broad range accuracy, and you may need to experiment to get the best performance.
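For illustration (the names and the margin parameter are my own, not the sample code’s), fattening only in the direction of travel might look like:

```cpp
struct AABB { float minX, minY, maxX, maxY; };

// Extend a box along its velocity so the tree entry stays valid for a
// while as the object moves. 'margin' scales the velocity and is the
// tuning knob: bigger means fewer tree updates but sloppier queries.
AABB fatten(const AABB& box, float velX, float velY, float margin) {
    AABB fat = box;
    if (velX > 0) fat.maxX += velX * margin; else fat.minX += velX * margin;
    if (velY > 0) fat.maxY += velY * margin; else fat.minY += velY * margin;
    return fat;
}
```

With this in place the leaf only needs removing and reinserting when the object’s real AABB escapes its fattened one.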

Finally, as trees are updated it is possible for them to become unbalanced, with some queries inappropriately requiring the traversal of many more nodes than others. One technique for resolving this is to rebalance the tree using rotations based on the height (depth from the bottom) of each node. Another is to balance the tree based around how evenly child nodes divide the parent node’s AABB. That’s slightly beyond the scope of a beginners guide and I’ve not yet implemented it myself in the sample code – however I may return to it at some point.

Sample Code

Finally, there is sample code that goes along with this blog post and you can find it in my game engine. It’s pretty decoupled from the engine itself and you should be able to use it in your own code without too many problems. The key files are:

AABB.h – defines a structure for representing and manipulating an AABB
AABBTree.h – header file that defines the entry points to the tree class AABBTree and the AABBNode structure
AABBTree.cpp – implementation
IAABB.h – defines an interface that objects added to the tree must implement

To use the tree you will need to add those four files to your project and construct an instance of the AABBTree class – its constructor is very simple and takes an initial size (the number of tree nodes to reserve up front). Any object you want to add to the tree needs to implement the IAABB interface, which simply needs to return the AABB structure when asked. You can add, update and remove those objects with the insertObject, updateObject and removeObject methods respectively and you can query for collisions with the queryOverlaps method.

Useful Links

While researching AABB trees I found a number of useful links and these are below.

Dynamic AABB Tree
Game Physics: Broadphase Dynamic AABB Tree
AABB.cc
Box2D

In particular I found the code in the last two links very useful to read through and was able to base my own code on that, particularly the node allocation.

React Sample App

Having built a number of reasonably large apps using Angular (v1.x) I began to grow uncomfortable with some of its pain points and so started to look for alternatives. As a result I’ve spent a lot of time over the last 12 months with various JavaScript frameworks but in particular I found myself settling on React combined with Redux.

I find I’m an increasingly functional programmer and the idea of the user interface as a clean expression of state appeals to me massively. I’ve also found that when using React I feel like I’m using the JavaScript language itself, particularly ES6, rather than alternatives to what are becoming core language features.

However although the documentation is pretty good I found I started to struggle to find good patterns and practices when I moved away from simple ToDo type apps. As an experienced developer what I really wanted was some guidance illustrated by code in a working app.

To that end I recently built a sample app that illustrates solutions to typical problems you’ll encounter when moving beyond a ToDo app and have started working on a short book to go along with it. Initially I’d intended to do this as a series of blog posts but as I started to write them I realised the structure was not well suited to the blog format. My intention is still to push this out, at least largely, as open source, probably under a Creative Commons license.

As a first step I’ve published an early version of the sample app onto GitHub and you can find it here. Constructive feedback would be very welcome and the best way to get in touch with me is on Twitter. I’ll have more details on the book as the content develops further.

segmentcomparison

A Simple Voxel Engine – Christmas 2016 Project

Happy New Year everyone – I hope everyone reading this had a great end of year break and has a fantastic 2017.

I suspect I’m not alone in that I like to spend a little time over the Christmas break working (in the fun sense of the word) on something a bit different from the day job. This year I took about 4 days out to work on a really simple voxel engine as both the look and constructible / destructible feel of voxels have always appealed to me, perhaps as a result of a childhood playing with Lego. Though bizarrely I’ve never played Minecraft.

The code for the engine along with a couple of demonstrations can, as ever, be found over on GitHub.

I didn’t want to feel like I was writing glue code and so rather than use an existing library I started with a blank C++ file, OpenGL, and an awful lot of ignorance. Now to be fair I do have a reasonable background in C, C++, graphics and systems programming – before this web thing came along I was working with embedded systems and 2d graphics engines using C, C++ and 68000 assembly – but that was back in 1994. I continued as a commercial C++ developer for 4 more years but then found myself working with C# and the first .Net beta via a brief foray with Delphi.

So I was rusty. Very rusty. However I had enthusiasm on my side and persevered until I had a couple of demos and some code that I wasn’t too embarrassed to share. The video below is me navigating a voxel “landscape” created using a black and white self portrait as a height map:

First step was some early research and I quickly discovered some really useful resources:

  • GLFW – a great library for abstracting away the underlying hardware and OS and getting you a window to draw in
  • Glad – OpenGL is a strange, strange, place and chances are you will need to use a “loader” to get access to the full API. This web service makes it really easy.
  • GLM – A header only mathematics library based on the OpenGL shader specification
  • Learn OpenGL – tutorial series
  • OpenGL Tutorial – another OpenGL tutorial series

Being conscious that C++ had changed a fair bit since my youthful glory days I also skim-read Accelerated C++ and Effective Modern C++. I’ll be returning to the latter and reading it in some more depth as I’m sure my code is still pretty stinky – but it works!

Day 1 – I Know Nothing

Armed with my new found expertise I set myself a goal of generating and rendering a landscape based on Perlin noise, but given I couldn’t yet draw a cube I thought I’d better bootstrap my way up. This first day was the toughest and involved a fair amount of staring at a blank screen, cursing, and feeling extremely thankful for the safety of managed environments. However by the end of the day I’d managed to get not only a single cube on the screen but a whole chunk of voxels rendering (a chunk is an organisational concept in a voxel engine – a collection of voxels), which you can see below:

2016_12_29_1

I didn’t yet have any lighting so to help me process what I was looking at I did a simple adjustment of the colours around each vertex.

The next step was to display several chunks together. My first effort at this went… badly. Only one chunk was ever displayed. After spending an hour looking for obscure OpenGL problems I realised I’d done something more basically dumb with C++ (we’ve all copied something we thought we were passing by reference, right?), pounded my head into the desk a few times, corrected it and hurrah: 4×4 chunks!

2016_12_29_2

Flush with success wine may have been consumed.

Day 2 – Algorithms and Structures

Sitting at the desk with a slightly groggy head – which the wine had nothing to do with, nothing, not a damn thing – I started to ponder how to arrange and manage chunks. The memory requirements for voxels can be huge. Imagine that each of your voxels takes 16 bytes to store and that your landscape is a modest sized 1024 voxels wide, 32 voxels high, and 1024 voxels deep – that’s 536,870,912 bytes, better known as 512MB, of storage needed before you even account for rendering geometry.

It’s clear you’re going to need some way of managing that and, for practical usage, compressing and/or virtualising it.

I came up with – and by came up with I mean I’d read about somewhere in my research and recalled – the concept of a chunk manager, along with chunk factories (these didn’t come up in my research but seem a logical extension) and a chunk optimiser (not yet implemented!).

The main decision here was how low to go – so to speak. The most optimal route likely involves getting the STL out of the way, pre-allocating memory and organising it in a very specific manner. However that also seemed the route most likely to feel like having your teeth pulled. In the end I decided to go with the path of least resistance and mostly used the std::vector class with pre-allocated sizes where known.

Nothing to see from my efforts yet except that the cube still renders, now using the new code – which, while not worth a screenshot, was a result!

With that out of the way I needed a perlin noise generator to implement as part of one of my new-fangled chunk factories. I’ve written one before, for a lava type effect, back when you had to find the algorithm in a book and figure out how to write it yourself. Having no wish to repeat that painful experience I quickly found a great post and gist on GitHub written in C# and converted it to C++.

I hit F5, Visual Studio whirred away for a few seconds, and to my utter amazement my good run of luck continued and I had an ugly but working voxel landscape.

2016_12_30_2

Good times. And good times when you’re on your Christmas break deserve wine.

Day 3 – This Is Far From Light Speed

Without any lighting it was hard to really get a sense of the terrain in the landscape that I could see, but before I could do something about that I needed to address some performance issues. Rendering was fine – once it started I was cruising at a vertical-refresh-locked 60 frames per second. The problem was that even on my 4.5GHz 6700K i7 it took about 90 seconds to generate the voxels and the initial geometry.

Normally I’d advocate measuring before going any further but the issue was self evidently (famous last words…. but not this time) in a fairly narrow section of code so step 1 was a code review which quickly highlighted a moment of stupidity from the previous day: I was calling the perlin noise function 16 times more than I needed to.

I could see from Visual Studio that memory allocation was also sub-optimal as it was occurring on an “as needed” basis but as previously noted I’m favouring clarity and simplicity over gnarly. However Visual Studio was also highlighting another issue – my CPU was only being used at around 15%. The code is single threaded and I’ve got 4 cores capable of supporting 8 threads.

Haha thinks me with my C# head on. This is eminently parallelisable (the clues being in the names “chunk” and perlin noise “function”) and C++ must have a decent task / parallel library by now. Oh yes. Futures. They’ll do it.

Oh boy. Oh boy oh boy. That hurt. Converting the code to use lambdas and futures was straightforward but then the issues began – it turns out there is not yet an equivalent of something like Task.WhenAll in the C++ standard library. I got there in the end after a bit of a deep dive into modern C++ threading concepts (I really didn’t want to use the OS primitives directly as I want to make this multi platform) and solved this with locks and condition_variables.

I’ve tested it on a variety of systems (with different performance characteristics and cores available) and it hasn’t crashed or done anything unpredictable, but I’m sure there is something wrong with this code, so if any C++ experts read this I’d love some pointers.

So now it’s quicker – about 6 or 7 seconds to generate a respectably sized landscape – but it looks the same and most of the day has gone.

Fortunately adding basic lighting proved much simpler. Simple voxels – cubes – are pretty easy to light as the normals are somewhat obvious and there are plenty of documented examples of using vertex normals inside vertex and fragment shaders to apply the appropriate colouring. A couple of hours later I had ambient, diffuse, and specular lighting up and running and could turn off my dodgy half shading.

2016_12_31_2

A great way to go into New Year’s Eve and definitely deserving of a celebratory brandy. Slightly warmed and in an appropriate glass of course.

Day 4 – This Code Stinks

Old Man Randall (me) fell asleep long before midnight and so I woke up fresh and eager on New Year’s Day to do some more work. I’d had the brainwave a few days earlier to use photographs as the source for landscape height maps, using the brightness (whiteness in black and white photographs, calculated perceived brightness in colour photographs) of each pixel to build an appropriately high stack of voxels.

However, before I could do anything else the code really needed some work. It had the distinctive whiff of a codebase in which many new things had been tried with little idea of what would or wouldn’t work. The structure had become scrambled, inconsistencies were everywhere, and commented-out attempts at different approaches abounded. Not to mention I’d built everything into a single executable with no attempt to separate the engine from its usage.

A few hours later, although still not perfect, I had something workable and set about implementing my voxel photographs, starting with black and white. Being a supreme narcissist I selected a photograph of myself from my Flickr library and busied myself trying to load it. This proved to be the hardest part – I’d forgotten how painful C++ and libraries without package managers can be (is there a package manager for C++?), with chains of dependencies.

In the end I realised that CImg could load BMP files without any dependencies and implemented a new chunk factory that used it to scan the pixels and create voxels based on their whiteness.
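The factory itself is essentially a pair of nested loops over the image. A sketch with the CImg pixel read abstracted behind a sampling function so it stands alone – all the names here are mine, not the engine’s:

```cpp
#include <functional>
#include <vector>

struct Voxel { int x, y, z; };

// Build voxel columns from an image: one column per pixel, with height
// proportional to whiteness. `sample(x, z)` returns a brightness in
// 0-255; in the engine this would come from CImg's pixel access.
std::vector<Voxel> buildFromHeightSource(
        int width, int depth, int maxHeight,
        const std::function<float(int, int)>& sample) {
    std::vector<Voxel> voxels;
    for (int z = 0; z < depth; ++z) {
        for (int x = 0; x < width; ++x) {
            int height = static_cast<int>((sample(x, z) / 255.0f) * maxHeight);
            for (int y = 0; y < height; ++y) {
                voxels.push_back({x, y, z});   // stack from the ground up
            }
        }
    }
    return voxels;
}
```

Keeping the image read behind a function also means the same factory works for generated height maps, which fits nicely with the chunk factory approach from the earlier days.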

This worked a treat and I captured the video of me moving around the “landscape” that appears at the top of this post.


A great way to finish off, and in a burst of enthusiasm I quickly sent the link to my fans. Or fan. By which I mean my mum. No celebratory brandy sadly, as structured cycling training has restarted, but happiness abounds.

Conclusions and What Next?

A few things struck me throughout this short project that I think are worth collecting and reflecting on:

  • The modern Internet truly is a wonderful thing. The last time I was coding in C++ (about 1998) it wasn’t the Internet of today and things were so much harder. It’s an amazing resource.
  • The development community is amazing. There are so many people giving their time and expertise away for free in the form of blog posts, articles, and code that unless you’re doing something really on the bleeding edge there is someone, somewhere, who can help you.
  • Wow C++ has changed – for the better. It’s still complicated but there’s a lot more available out of the box to help you not make simple mistakes.
  • GPUs are incredible number crunching machines. I’m fortunate enough to have a pretty top notch GPU in my desktop (a 980Ti) but the sheer number of computations being performed to render, at 60 fps, some of the scenes I threw at this code is astounding. I knew the capabilities of GPUs in theory but didn’t “feel” it – I’m now interested in where else I can apply this power.
  • Iterative approaches are great for tackling something you don’t know much about. I just kept setting myself mini-milestones and evolving the code, not being afraid to do something really badly at first. By setting small milestones you get a series of positive kicks that keeps you motivated through the hard times.
  • Coding, particularly when it involves learning, is fun. It really is. I still love it. I’ve been coding for around 34 years now, since I was 6 or 7, and it still gives me an enormous thrill. My passion for it at its core is undiminished.

As for this project and what I might do next: I’ve got a couple of simple projects in mind I’d like to use it for, and a small shopping list of things I think will be interesting to implement:

  • Mac / *nix support
  • General optimisation. I’ve only done the really low hanging fruit so far and there is scope for improvement everywhere.
  • Voxel sprite support
  • Shadows
  • Skybox / fog support
  • Paging / virtualisation – this is one of the main reasons that scenes are constructed using the chunk factory
  • Level of detail support
  • Simple physics

When, or whether, I’ll get time to do all this I can’t say, as I have a big product launch on the horizon and a React book to write, but I’ve had so much fun that I’m likely to keep tinkering when I can.

 
