Category: Function Monkey

Using Function Monkey with MediatR

There are a lot of improvements coming in v4 of Function Monkey and the beta is currently available on NuGet. As the full release approaches I thought it would make sense to introduce some of these new capabilities here.

In order to simplify Azure Functions development Function Monkey makes heavy use of commanding via a mediator and ships with my own mediation library. However there's a lot of existing code out there that makes use of the popular MediatR library which, if Function Monkey supported it, could fairly easily be moved into a serverless execution environment.

Happily Function Monkey now supports just this! You can use my existing bundled mediator, bring your own mediator, or add the shiny new FunctionMonkey.MediatR NuGet package. Here we’re going to take a look at using the latter.

First begin by creating a new, empty, Azure Functions project and add three NuGet packages:

FunctionMonkey
FunctionMonkey.Compiler
FunctionMonkey.MediatR

At the time of writing be sure to use the prerelease packages, version 4.0.39-beta.4 or later.

Next create a folder called Models. Add a class called ToDoItem:

public class ToDoItem
{
    public Guid Id { get; set; }
    
    public string Title { get; set; }
    
    public bool IsComplete { get; set; }
}

Now add a folder called Services and add an interface called IRepository:

internal interface IRepository
{
    Task<ToDoItem> Create(string title);
}

And a memory based implementation of this called Repository:

internal class Repository : IRepository
{
    private readonly List<ToDoItem> _items = new List<ToDoItem>();
    
    public Task<ToDoItem> Create(string title)
    {
        ToDoItem newItem = new ToDoItem()
        {
            Title = title,
            Id = Guid.NewGuid(),
            IsComplete = false
        };
        _items.Add(newItem);
        return Task.FromResult(newItem);
    }
}

Now create a folder called Commands and in here create a class called CreateToDoItemCommand:

public class CreateToDoItemCommand : IRequest<ToDoItem>
{
    public string Title { get; set; }
}

If you’re familiar with Function Monkey you’ll notice the difference here – we’d normally implement the ICommand<> interface but here we’re implementing MediatR’s IRequest<> interface instead.
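For comparison, a rough sketch of what the same command would look like with the bundled mediator (this assumes the ICommand<> interface from the commanding abstractions Function Monkey ships with):

// Sketch for comparison only – the bundled mediator equivalent of the MediatR request above
public class CreateToDoItemCommand : ICommand<ToDoItem>
{
    public string Title { get; set; }
}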

Next create a folder called Handlers and in here create a class called CreateToDoItemCommandHandler as shown below:

internal class CreateToDoItemCommandHandler : IRequestHandler<CreateToDoItemCommand, ToDoItem>
{
    private readonly IRepository _repository;

    public CreateToDoItemCommandHandler(IRepository repository)
    {
        _repository = repository;
    }
    
    public Task<ToDoItem> Handle(CreateToDoItemCommand request, CancellationToken cancellationToken)
    {
        return _repository.Create(request.Title);
    }
}

Again the only real difference here is that rather than implement the ICommandHandler interface we implement the IRequestHandler interface from MediatR.
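Again for comparison, the bundled mediator equivalent would look something like the sketch below – the ICommandHandler shape with its ExecuteAsync method is the same one used in the acceptance testing post later on:

// Sketch for comparison only – the bundled mediator equivalent of the MediatR handler above
internal class CreateToDoItemCommandHandler : ICommandHandler<CreateToDoItemCommand, ToDoItem>
{
    private readonly IRepository _repository;

    public CreateToDoItemCommandHandler(IRepository repository)
    {
        _repository = repository;
    }

    public Task<ToDoItem> ExecuteAsync(CreateToDoItemCommand command, ToDoItem previousResult)
    {
        return _repository.Create(command.Title);
    }
}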

Finally we need to add our FunctionAppConfiguration class to the root of the project to wire everything up:

public class FunctionAppConfiguration : IFunctionAppConfiguration
{
    public void Build(IFunctionHostBuilder builder)
    {
        builder
            .Setup(sc => sc
                .AddMediatR(typeof(FunctionAppConfiguration).Assembly)
                .AddSingleton<IRepository, Repository>()
            )
            .UseMediatR()
            .Functions(functions => functions
                .HttpRoute("todo", route => route
                    .HttpFunction<CreateToDoItemCommand>(HttpMethod.Post)
                )
            );
    }
}

Again this should look familiar, however there are two key differences. Firstly, in the Setup block we use MediatR's IServiceCollection extension method AddMediatR – this will wire up the request handlers in the dependency injector. Secondly, the .UseMediatR() option instructs Function Monkey to use MediatR for its command mediation.

And really that's all there is to it! You can use both requests and notifications and you can find a more fleshed out example of this on GitHub.
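As a flavour of the notification side, a MediatR notification and handler are declared just as they would be in any other MediatR project – the sketch below is illustrative only (the ToDoItemCreatedNotification type is made up here) and the Function Monkey wiring for publishing notifications is covered in the GitHub example:

// Illustrative only – a standard MediatR notification and handler
public class ToDoItemCreatedNotification : INotification
{
    public Guid ToDoItemId { get; set; }
}

internal class ToDoItemCreatedNotificationHandler : INotificationHandler<ToDoItemCreatedNotification>
{
    public Task Handle(ToDoItemCreatedNotification notification, CancellationToken cancellationToken)
    {
        // React to the newly created item here – e.g. log it or update a projection
        return Task.CompletedTask;
    }
}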

As always feedback is welcome on Twitter or over on the GitHub issues page for Function Monkey.

Function Monkey for F#

Over the last couple of weeks I’ve been working on adapting Function Monkey so that it feels natural to work with in F#. The driver for this is that I find myself writing more and more F# and want to develop the backend for a new app in it and run it on Azure Functions.

I'm not going to pretend it's pretty under the covers, but it's starting to take shape and I'm beginning to use it in my new backend, so now seemed like a good time to write a little about it by walking through putting together a simple ToDo style API that saves data to CosmosDB.

Declaring a Function App

As ever you'll need to begin by creating a new Azure Functions app in the IDE / editor of your choice. Once you've got that empty starting point you'll need to add two NuGet packages to the project (either with Paket or NuGet):

FunctionMonkey.FSharp
FunctionMonkey.Compiler

This is currently in alpha and so you’ll need to enable pre-release packages and add the following NuGet repository:

https://www.myget.org/F/functionmonkey-beta/api/v3/index.json
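If you're using the NuGet client, one way to do that is with a nuget.config file alongside the solution – a minimal sketch (the key names are arbitrary):

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" />
    <add key="functionmonkey-beta" value="https://www.myget.org/F/functionmonkey-beta/api/v3/index.json" />
  </packageSources>
</configuration>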

Next create a new module called EntryPoint that looks like this:

namespace FmFsharpDemo
open AccidentalFish.FSharp.Validation
open System.Security.Claims
open FunctionMonkey.FSharp.Configuration
open FunctionMonkey.FSharp.Models

module EntryPoint =
    exception InvalidTokenException
    
    let validateToken (bearerToken:string) =
        match bearerToken.Length with
        | 0 -> raise InvalidTokenException
        | _ -> new ClaimsPrincipal(new ClaimsIdentity([new Claim("userId", "2FF4D861-F9E3-4694-9553-C49A94D7E665")]))
    
    let isResultValid (result:ValidationState) =
        match result with
        | Ok -> true
        | _ -> false
                                    
    let app = functionApp {
        // authorization
        defaultAuthorizationMode Token
        tokenValidator validateToken
        claimsMappings [
            claimsMapper.shared ("userId", "userId")
        ]
        // validation
        isValid isResultValid
        // functions
        httpRoute "version" [
            azureFunction.http (Handler(fun () -> "1.0.0"), Get, authorizationMode=Anonymous)
        ]
    }

Ok. So what’s going on here? We’ll break it down block by block. We’re going to demonstrate authorisation using a (pretend) bearer token and so we begin by creating a function that can validate a token:

exception InvalidTokenException

let validateToken (bearerToken:string) =
    match bearerToken.Length with
    | 0 -> raise InvalidTokenException
    | _ -> new ClaimsPrincipal(new ClaimsIdentity([new Claim("userId", "2FF4D861-F9E3-4694-9553-C49A94D7E665")]))

This is our F# equivalent of the ITokenValidator interface in the C# version. In this case we take valid to mean any non-empty string in the authorization header, and if the token is valid we return a ClaimsPrincipal – here we just return a made-up principal. In the case of an invalid token we simply raise an exception – Function Monkey will translate this to a 401 HTTP status.

We're going to validate the inputs to our functions using my recently released validation framework. Function Monkey for F# supports any validation framework, but as a result you need to tell it what constitutes a validation failure, so next we create a function that can do this:

let isResultValid result = match result with | Ok -> true | _ -> false

Finally we declare our Function App itself:

let app = functionApp {
    // authorization
    defaultAuthorizationMode Token
    tokenValidator validateToken
    claimsMappings [
        claimsMapper.shared ("userId", "userId")
    ]
    // validation
    isValid isResultValid
    // functions
    httpRoute "version" [
        azureFunction.http (Handler(fun () -> "1.0.0"), Get, authorizationMode=Anonymous)
    ]
}

We declare our settings (and optionally functions) inside a functionApp block that we have to assign to a public member on the module so that the Function Monkey compiler can find the declaration.

Within the block we start by setting up our authorisation to use token validation (line 3) and instruct it to use the token validator function we created earlier (line 4). In lines 5 to 7 we then set up a claims mapping which will set userId on any of our record types associated with functions to the value of the userId claim. You can also set mappings to a specific command type property like in the C# version.

On line 9 we tell Function Monkey to use our isResultValid function to determine whether a validation result constitutes success or failure.

Then finally on line 11 we declare an HTTP route and a function within it. If you're familiar with the C# version you can see here that we no longer use commands and command handlers – instead we use functions: their input parameter determines the type of the model being passed into the Azure Function and their return value determines the output of the Azure Function. In this case the function has no parameters and returns a string – a simple API version. We set this specific function to not require authorisation.

Finally let's add a host.json file to remove the auto-prefixing of api to routes (this causes problems with things like Open API output):

{
  "version": "2.0",
  "extensions": {
    "http": {
      "routePrefix": ""
    }
  }
}

If we run this now then in Postman we should be able to call the endpoint http://localhost:7071/version and receive the response "1.0.0".

Building our ToDo API

If you're familiar with Function Monkey for C# then at this point you might be wondering where the rest of the functions are. We could declare them all here like we would in C#, but the F# version of Function Monkey allows functions to be declared in multiple modules so that the functions can be located close to the domain logic and a huge function declaration block can be avoided.

To get started create a new module called ToDo and we'll begin by creating a type to model our to do items – we'll also use this type for updating our to do items:

type ToDoItem =
    {
        id: string
        owningUserId: string
        title: string
        isComplete: bool
    }

Next we’ll declare a type for adding a to do item:

type AddToDoItemCommand =
    {
        userId: string
        title: string
        isComplete: bool
    }

And finally a type that represents a query to find an item:

type GetToDoItemQuery =
    {
        id: string
    }

Next we’ll declare our validations for these models:

let withIdValidations = [
    isNotEmpty
    hasLengthOf 36
]

let withTitleValidations = [
    isNotEmpty
    hasMinLengthOf 1
    hasMaxLengthOf 255
]

let validateGetToDoItemQuery = createValidatorFor<GetToDoItemQuery>() {
    validate (fun q -> q.id) withIdValidations
}
    
let validateAddToDoItemCommand = createValidatorFor<AddToDoItemCommand>() {
    validate (fun c -> c.userId) withIdValidations
    validate (fun c -> c.title) withTitleValidations
}

let validateToDoItem = createValidatorFor<ToDoItem>() {
    validate (fun c -> c.id) withIdValidations
    validate (fun c -> c.title) withTitleValidations
    validate (fun c -> c.owningUserId) withIdValidations
}

Ok. So now we need to create a function for adding an item to the database and another for getting one from it. We'll use Azure CosmosDB as a data store and I'm going to assume you've set one up. Our add function needs to accept a record of type AddToDoItemCommand and return a new record of type ToDoItem, assigning properties as appropriate:

let addToDoItem command =
    {
        id = Guid.NewGuid().ToString()
        owningUserId = command.userId
        title = command.title
        isComplete = command.isComplete
    }

The user ID on our command will have been populated by the claims binding. We don’t write the item to Cosmos here, instead we’re going to use an output binding shortly.

Next our function for reading a to do item from Cosmos:

let getToDoItem query =
    CosmosDb.reader<ToDoItem> <| query.id

CosmosDb.reader is a super simple helper function I created:

namespace FmFsharpDemo
open Microsoft.Azure.Cosmos
open System

module CosmosDb =
    let cosmosDatabase = "testdatabase"
    let cosmosCollection = "colToDoItems"
    let cosmosConnectionString = Environment.GetEnvironmentVariable("cosmosConnectionString")
    
    let reader<'t> id =
        async {
            use client = new CosmosClient(cosmosConnectionString)
            let container = client.GetContainer(cosmosDatabase, cosmosCollection)
            let! response = container.ReadItemAsync<'t>(id, new PartitionKey(id)) |> Async.AwaitTask
            return response.Resource
        }
    

If we inspect the signatures for our two functions we'll find that addToDoItem has a signature of AddToDoItemCommand -> ToDoItem and getToDoItem has a signature of GetToDoItemQuery -> Async<ToDoItem>. One of them is asynchronous and the other is not – Function Monkey for F# supports both forms. We're not going to create a handler function for updating an existing item – instead we'll use that operation to demonstrate handler-less functions (though as we'll see we'll duck a slight issue for the time being!).

There is one last step we’re going to take before we declare our functions and that’s to create a curried output binding function:

let todoDatabase =
    cosmosDb cosmosCollection cosmosDatabase

In the above cosmosDb is a function that is part of the Function Monkey output binding set and it takes three parameters – the collection / container name, the database name and finally the function that the output binding is being applied to. We’re going to use it multiple times so we create this curried function to make our code less repetitive and more readable.

With all that we can now declare our functions block:

let toDoFunctions = functions {
    httpRoute "api/v1/todo" [
        azureFunction.http (AsyncHandler(getToDoItem),
                            verb=Get, subRoute="/{id}",
                            validator=validateGetToDoItemQuery)
        azureFunction.http (Handler(addToDoItem),
                            verb=Post,
                            validator=validateAddToDoItemCommand,
                            returnResponseBodyWithOutputBinding=true)
            |> todoDatabase
        azureFunction.http (NoHandler, verb=Put, validator=validateToDoItem)
            |> todoDatabase
    ]
}

The functions block is a subset of the functionApp block we saw earlier and can only be used to define functions – shared configuration must go in the functionApp block.

Hopefully the first, GET verb, function is reasonably self-explanatory. The AsyncHandler case instructs Function Monkey that this is an async function and we assign a validator with the validator option.

The second function, for our POST verb, introduces a new concept – output bindings. We pipe the output of azureFunction.http to our curried output binding and this will result in a function being created that outputs to Cosmos DB. Because we're using the Cosmos output binding we also need to add the Microsoft.Azure.WebJobs.Extensions.CosmosDB package to our functions project. We set the option returnResponseBodyWithOutputBinding to true so that as well as sending the output of our function to the output binding we also return it as part of the HTTP response (this is optional – as you can imagine, in a more complex scenario that could leak data).

Finally for the third function, our PUT verb also uses an output binding but this one doesn't have a handler at all, hence the NoHandler case. In this scenario the command that is passed in, once validated, is simply passed on as the output of the function. And so in this instance we can PUT a to do item to our endpoint and it will update the appropriate entry in Cosmos. (Note that for the moment I have not answered the question of how to prevent one user from updating another user's to do items – our authorisation approach is currently limited and I'll come back to that in a future post).

Trying It Out

With all that done we can try this function app out in Postman. If we begin by attempting to POST an invalid item, say one with an empty title, we'll get a 400 status code returned and a response as follows:

{
  "case": "Errors",
  "fields": [
    [
      {
        "message": "Must not be empty",
        "property": "title",
        "errorCode": "isNotEmpty"
      },
      {
        "message": "Must have a length no less than 1",
        "property": "title",
        "errorCode": "hasMinLengthOf"
      }
    ]
  ]
}

Now if we run it with a valid payload we will get:

{
  "id": "09482e8d-41aa-4c25-9552-b7b05bf0a787",
  "owningUserId": "2FF4D861-F9E3-4694-9553-C49A94D7E665",
  "title": "Buy underpants",
  "isComplete": false
}

Next Steps

The next steps are mostly with me – I need to continue to flesh out the functionality, which at this point essentially boils down to expanding the computation expression and its helpers. I also need to spend some time refactoring aspects of Function Monkey – I've had to dig up and change quite a few things so that it can work in this more functional manner as well as continue to support the more typical C# patterns.

Then of course there is documentation!

Writing and Testing Azure Functions with Function Monkey – Part 4

Part 4 of my series on writing Azure Functions with Function Monkey is now available on YouTube:

This part focuses on addressing cross cutting concerns in a DRY manner by implementing a custom command dispatcher.

I’ve also switched over to Rider as my main IDE now and in this video I’m making use of its Presentation Mode. I think it works really well but let me know.

Function Monkey 2.1.0

I’ve just pushed out a new version of Function Monkey with one fairly minor but potentially important change – the ability to create functions without routes.

You can now use the .HttpRoute() method without specifying an actual route. If you then also specify no path on the .HttpFunction<TCommand>() method that will result in an Azure Function with no route specified – it will then be named in the usual way based on the function name, which in the case of Function Monkey is the command name.
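As a rough sketch (assuming the same builder shape used elsewhere in these posts, just with the route string omitted – MyCommand is a placeholder command type):

public class FunctionAppConfiguration : IFunctionAppConfiguration
{
    public void Build(IFunctionHostBuilder builder)
    {
        builder
            .Functions(functions => functions
                // No route specified – the function is named after the command
                .HttpRoute(route => route
                    .HttpFunction<MyCommand>(HttpMethod.Post)
                )
            );
    }
}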

I’m not entirely comfortable with the approach I’ve taken to this at an API level but didn’t want to break anything – next time I plan a set of breaking changes I’ll probably look to clean this up a bit.

The reason for this is to support Logic Apps. Logic Apps only support routes with an accompanying Swagger / OpenAPI doc and you don’t necessarily want the latter for your functions.

While I was using proxies, HTTP functions had no route and so they could be called from Logic Apps using the underlying function (while the outside world would use the shaped endpoint exposed through the proxy).

Having moved to a proxy-less world I’d managed to break a production Logic App of my own because the Logic App couldn’t find the function (404 error). Redeployment then generated a more meaningful error – that routed functions aren’t supported. Jeff Hollan gives some background on why here.

I had planned a bunch of improvements for 2.1.0 (which I’ve started) which will now move to 2.2.0.

Writing and Testing Azure Functions with Function Monkey – Part 3

Part 3 of my series on writing Azure Functions with Function Monkey focuses on writing tests using the newly released testing package – while this is by no means required it does make writing high value acceptance tests that use your application's full runtime quick and easy.

Lessons Learned

It really is amazing how quickly time passes when you're talking and coding – I really hadn't realised I'd recorded over an hour's footage until I came to edit the video. I thought about splitting it in two but the contents really belonged together so I've left it as is.

Writing and Testing Azure Functions with Function Monkey – Part 2

Part 2 in my series of writing and testing Azure Functions with Function Monkey looks at adding validation and returning appropriate status codes from HTTP requests. Part 3 will look at acceptance testing this.

(Don’t worry – we’re going to look at some additional trigger types soon!)

Lessons Learned

I made a bunch of changes following some awesome feedback on Twitter and hopefully that results in an improved viewing experience for people.

I still haven't got the font size right – I used a mix of Windows 10 scaling and zoomed text in Visual Studio but it's not quite there. I also need to start thinking of actually using zoom when I'm pointing at something.

Hopefully in part 3 I can address more of these things.

Writing and Testing Azure Functions with Function Monkey – Part 1

I’ve long thought about creating some video content but forever put it off – like many people I dislike seeing and hearing myself on video (as I quipped half in jest, half in horror, on Twitter “I have a face made for radio and a voice for silent movies”). I finally convinced myself to just get on with it and so my first effort is presented below along with some lessons learned.

Hopefully video n+1 will be an improvement on video n in this series – if I can do that I’ll be happy!

The Process and Lessons Learned

This was very much a voyage of discovery – I've never attempted recording myself coding before and I've never edited video before.

To capture the screen and initial audio I used the free OBS Studio which, after a little bit of fiddling to persuade it to capture at 4K (and user error resulting in the loss of 20 minutes of video), worked really well. It was unobtrusive and did what it says on the tin.

I use a good quality 4K display with my desktop machine and so on my first experiment the text was far too small, so I used the scaling feature in Windows 10 to bump things up to 200% and that seemed about right (but you tell me!).

I sketched out a rough application to build but left things fairly loose as I hoped the video would feel natural and I know from presentations (which I don’t mind at all – in contrast to seeing and hearing myself!) that if I plan too much I get a bit robotic. I also figured I’d be likely to make some mistakes and with a bit of luck they’d be mistakes that would be informative to others.

This mostly worked but I could have done with a little more practice up front, as I took myself down a stupid route at one point when my brain struggled with coding and narrating simultaneously! Fortunately I was able to fix this later while editing, but I made things harder for myself than they needed to be and there's a slight alteration in audio tone where I cut the new voice work in.

Having captured the video I transferred it to my MacBook to process in Final Cut Pro X and at this point realised I’d captured the video in flv – a format Final Cut doesn’t import. This necessitated downloading Handbrake to convert the video into something Final Cut could import. Not a big deal but I could have saved myself some time – even a pretty fast Mac takes a while to re-encode 55 minutes of 4K video!

I’d never used Final Cut before but it turned out to be fairly easy to use and I was able to cut out my slurps of coffee and the time wasting uninformative mistake I made. I did have to recut some audio as I realised I’d mangled some names – this was fairly simple but the audio doesn’t sound exactly the same as it did when recorded earlier despite using the same microphone in the same room. Again not the end of the world (I’m not challenging for an Oscar here).

Slightly more irritating – I have a mechanical cherry switch keyboard which I find super pleasant to type on, but carries the downside of making quite a clatter which is really rather loud in the video. Hmmm. I do have an Apple bluetooth keyboard next to me, I may try connecting that to the PC for the next installment but it might impede my typing too much.

Overall that was a less fraught experience than I’d imagined – I did slowly get used to hearing myself while editing the video, though listening to it fresh again a day later some of that discomfort has returned! I’m sure I’ll get used to it in time.

Would love to hear any feedback over on Twitter.

Acceptance Testing with Function Monkey

I’ve been so busy the last few months that finding time to write technical articles has been difficult and I’ve not even managed to cover Function Monkey – the framework I put together for, according to its own strapline, writing testable more elegant Azure Functions with less boilerplate, more consistency, and support for REST APIs with C# / .Net.

I’m not proposing to start retro-covering 9 months of development as there is some fairly comprehensive documentation available on its own website but I thought it might be quite interesting to talk briefly about testing.

Function Monkey and the pattern it promotes (commanding / mediation) make it very easy to unit test functions: command handlers can be constructed using dependency injection (either through an IServiceCollection approach or "poor man's" injection) and as the function trigger handlers themselves are written and compiled by the Function Monkey build tool there is by definition a clean separation of infrastructure / host and business logic (though of course you can always muddy this water!).

Perhaps less obvious is how to handle acceptance / integration type tests. One option, as with the out of the box approach to writing Azure Functions, is to test the Functions by actually triggering them with an event. This works fine but it can quickly become fairly complex – you need to run the Functions, and outcomes can be asynchronous and so awkward to validate.

Function Monkey allows for another approach, which often provides a high level of coverage and high value tests while eliminating the complexity associated with a full end to end test that includes the triggers: run the tests at the command level – essentially run everything except the pre-tested boilerplate generated by the Function Monkey framework. This also makes it easy to decouple from external dependencies such as storage if you so wish.

Within the generated Function Monkey boilerplate, after all the busy work of deserializing, logging, and dealing with cross cutting concerns, the heart of the function is a simple dispatch through a mediator of the command associated with the function. It looks something like this:

var result = await dispatcher.DispatchAsync(deserializedCommand);

This will invoke all the implementation in your handlers and below, so by providing an environment in our tests that lets us run those commands we can get massive test coverage of the integrated system without having to actually stand up a Function App and deal with its triggers and dependencies.

The key to this is the implementation of a custom IFunctionHostBuilder for testing – this is the interface that is passed to your Function Monkey configuration class (based on IFunctionAppConfiguration). Function Monkey uses an internal “real” implementation of this as part of its runtime setup process but by making use of a custom one built for testing you can set up your test scenario using the exact same code as your production system but then modify its configuration, if required, for test.

It's not a lot of work to build one of these but, as it's fairly generic and requires a small amount of Function Monkey implementation knowledge and a little knowledge of how the mediator works in a multi-threaded environment, I've created a FunctionMonkey.Testing NuGet package that contains two important classes:

AbstractAcceptanceTest – a base class for your acceptance tests that will create an environment in which you can dispatch and test commands. This is useful for test frameworks that take a constructor approach to test setup such as xUnit.

AcceptanceTestScaffold – a class that can be used with frameworks that take a method based approach to test setup such as NUnit. Internally AbstractAcceptanceTest makes use of this class.

To see how they are used consider the below simple example function app:

public class FunctionAppConfiguration : IFunctionAppConfiguration
{
    public void Build(IFunctionHostBuilder builder)
    {
        builder
            .Setup((serviceCollection, commandRegistry) =>
            {
                // register our dependencies
                serviceCollection.AddTransient<ICalculator, Calculator>();

                // register our commands and handlers
                commandRegistry.Discover<FunctionAppConfiguration>();
            })
            .Functions(functions => functions
                .HttpRoute("/calculator", route => route
                    .HttpFunction<AdditionCommand>("/add")
                )
            )
            ;
    }
}

This creates a function app with a single HTTP route that takes two values and adds them together. The handler itself makes use of a dependency that is injected:

internal class AdditionCommandHandler : ICommandHandler<AdditionCommand, int>
{
    private readonly ICalculator _calculator;

    public AdditionCommandHandler(ICalculator calculator)
    {
        _calculator = calculator;
    }

    public Task<int> ExecuteAsync(AdditionCommand command, int previousResult)
    {
        return Task.FromResult(_calculator.Add(command.ValueOne, command.ValueTwo));
    }
}

We could run the Function App and write our tests against the exposed HTTP endpoint simply by calling the URL (for example http://localhost:7071/calculator/add?valueOne=2&valueTwo=3) and reading the result, and there's certainly value in that, but there's a fair amount of orchestration involved. You can make use of some Azure Functions test code (based on the team's own integration tests) to run the Functions in process but at the time of writing it's complex and not well documented (it's also been a bit of a moving target).

However we can use the classes I mentioned earlier to invoke our commands without doing this. Here’s an example of an xUnit test that demonstrates this (to use these classes add the FunctionMonkey.Testing package to your test project):

public class AdditionFunctionShould : AbstractAcceptanceTest
{
    [Fact]
    public async Task ReturnTheSumOfTwoValues()
    {
        int result = await Dispatcher.DispatchAsync(new AdditionCommand
        {
            ValueOne = 5,
            ValueTwo = 4
        });

        Assert.Equal(9, result);
    }
}

That’s an awful lot easier than the alternatives and, importantly, the test is running in the exact same runtime that Function Monkey and your FunctionAppConfiguration class set up for the real functions.

For comparison the MS Test version using the AcceptanceTestScaffold looks like this:

[TestClass]
public class AdditionFunctionShould
{
    private AcceptanceTestScaffold _acceptanceTestScaffold;

    [TestInitialize]
    public void Setup()
    {
        _acceptanceTestScaffold = new AcceptanceTestScaffold();
        _acceptanceTestScaffold.Setup(typeof(FunctionAppConfiguration).Assembly);
    }

    [TestMethod]
    public async Task ReturnTheSumOfTwoValues()
    {
        int result = await _acceptanceTestScaffold.Dispatcher.DispatchAsync(new AdditionCommand
        {
            ValueOne = 5,
            ValueTwo = 4
        });

        Assert.AreEqual(9, result);
    }
}

I'm going to focus on the xUnit version and AbstractAcceptanceTest for the remainder of this blog post, but the same functionality exists on AcceptanceTestScaffold as methods rather than overrides.

It's common in tests, even integration or acceptance tests, to want to mock out part of the system. For example you might depend on a third party system that isn't really a comfortable fit for test scenarios. You can accomplish this kind of modification to your normal execution runtime by using the AfterBuild and BeforeBuild support. The AfterBuild method is invoked immediately after the Setup method of the function app builder has been called and can be used, for example, to replace dependencies. The example below, though a little contrived, replaces the ICalculator with an NSubstitute substitute:

public class AdditionCommandShouldIncludingDependencyReplacement : AbstractAcceptanceTest
{
    public override void AfterBuild(IServiceCollection serviceCollection, ICommandRegistry commandRegistry)
    {
        base.AfterBuild(serviceCollection, commandRegistry);
        ICalculator calculator = Substitute.For<ICalculator>();
        calculator.Add(Arg.Any<int>(), Arg.Any<int>()).Returns(8);
        serviceCollection.Replace(new ServiceDescriptor(typeof(ICalculator), calculator));
    }

    [Fact]
    public async Task ReturnSubstitutedDependencyResult()
    {
        int result = await Dispatcher.DispatchAsync(new AdditionCommand
        {
            ValueOne = 2,
            ValueTwo = 2
        });

        Assert.Equal(8, result);
    }
}

Finally, if you require environment variables to be set for configuration you can use a normal Azure Functions *.settings.json file and add it where you require. The below example introduces an additional base class that ensures the environment variables are always added for each test – because environment variables are global, by default the Function Monkey test harness will only add them once (though you can override this with an optional boolean flag on the AddEnvironmentVariables method):

public abstract class CommonAcceptanceTest : AbstractAcceptanceTest
{
    protected CommonAcceptanceTest()
    {
        AddEnvironmentVariables("./test.settings.json");
    }
}

public class AdditionCommandShouldWithEnvironmentVariables : CommonAcceptanceTest
{
    [Fact]
    public async Task ReturnTheSumOfTwoValues()
    {
        int result = await Dispatcher.DispatchAsync(new AdditionCommand
        {
            ValueOne = 5,
            ValueTwo = 4
        });

        Assert.Equal(9, result);
    }
}

And that's it – hope that's useful! The source code for the examples can be found here on GitHub.
