Writing and Testing Azure Functions with Function Monkey – Part 4

Part 4 of my series on writing Azure Functions with Function Monkey is now available on YouTube.

This part focuses on addressing cross-cutting concerns in a DRY manner by implementing a custom command dispatcher (a rough sketch of the idea is below).

I’ve also switched over to Rider as my main IDE, and in this video I’m making use of its Presentation Mode. I think it works really well, but let me know what you think.
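For anyone who’d like the gist of the video without watching it: the core idea is to wrap the dispatcher so that shared behaviour runs around every command rather than being repeated in each handler. The sketch below illustrates the shape of that with deliberately simplified interfaces (the real ICommandDispatcher in the commanding framework has more members) – it isn’t the exact code from the video:

// simplified stand-ins for the commanding framework's abstractions
public interface ICommand<TResult> { }

public interface IDispatcher
{
    Task<TResult> DispatchAsync<TResult>(ICommand<TResult> command);
}

// a decorating dispatcher that applies a cross-cutting concern (logging)
// around every dispatch so that no individual handler has to repeat it
public class LoggingDispatcher : IDispatcher
{
    private readonly IDispatcher _underlying;
    private readonly ILogger _logger;

    public LoggingDispatcher(IDispatcher underlying, ILogger logger)
    {
        _underlying = underlying;
        _logger = logger;
    }

    public async Task<TResult> DispatchAsync<TResult>(ICommand<TResult> command)
    {
        _logger.LogInformation("Dispatching {CommandType}", command.GetType().Name);
        TResult result = await _underlying.DispatchAsync(command);
        _logger.LogInformation("Completed {CommandType}", command.GetType().Name);
        return result;
    }
}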

Function Monkey 2.1.0

I’ve just pushed out a new version of Function Monkey with one fairly minor but potentially important change – the ability to create functions without routes.

You can now use the .HttpRoute() method without specifying an actual route. If you then also specify no path on the .HttpFunction<TCommand>() method, the result is an Azure Function with no route specified – it will be named in the usual way based on the function name, which in Function Monkey’s case is the command name.
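For example, a setup along these lines (an illustrative sketch with a placeholder MyCommand, not code lifted from the release) results in a function with no route that picks up its name from the command:

builder.Functions(functions => functions
    // no route prefix on .HttpRoute() and no path on .HttpFunction<TCommand>():
    // the generated Azure Function has no route and is named after the command
    // (MyCommand here is just a placeholder)
    .HttpRoute(route => route
        .HttpFunction<MyCommand>()
    )
);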

I’m not entirely comfortable with the approach I’ve taken to this at an API level but didn’t want to break anything – next time I plan a set of breaking changes I’ll probably look to clean this up a bit.

The reason for this change is to support Logic Apps: Logic Apps only support routed functions when there’s an accompanying Swagger / OpenAPI doc, and you don’t necessarily want the latter for your functions.

While I was using proxies, HTTP functions had no route and so could be called from Logic Apps via the underlying function (while the outside world used the shaped endpoint exposed through the proxy).

Having moved to a proxy-less world, I’d managed to break a production Logic App of my own because the Logic App couldn’t find the function (a 404 error). Redeployment then generated a more meaningful error – that routed functions aren’t supported. Jeff Hollan gives some background on why here.

I had planned a bunch of improvements for 2.1.0 (which I’ve started on); those will now move to 2.2.0.

Writing and Testing Azure Functions with Function Monkey – Part 3

Part 3 of my series on writing Azure Functions with Function Monkey focuses on writing tests using the newly released testing package – while this is by no means required, it makes writing high-value acceptance tests that exercise your application’s full runtime quick and easy.

Lessons Learned

It really is amazing how quickly time passes when you’re talking and coding – I hadn’t realised I’d recorded over an hour’s footage until I came to edit the video. I thought about splitting it in two, but the content really belonged together so I’ve left it as is.

Writing and Testing Azure Functions with Function Monkey – Part 2

Part 2 in my series on writing and testing Azure Functions with Function Monkey looks at adding validation and returning appropriate status codes from HTTP requests. Part 3 will look at acceptance testing this.

(Don’t worry – we’re going to look at some additional trigger types soon!)

Lessons Learned

I made a bunch of changes following some awesome feedback on Twitter and hopefully that results in an improved viewing experience for people.

I still haven’t got the font size right – I used a mix of Windows 10 scaling and zoomed text in Visual Studio, but it’s not quite there. I also need to remember to actually zoom in when I’m pointing at something.

Hopefully in part 3 I can address more of these things.

Writing and Testing Azure Functions with Function Monkey – Part 1

I’ve long thought about creating some video content but forever put it off – like many people I dislike seeing and hearing myself on video (as I quipped half in jest, half in horror, on Twitter “I have a face made for radio and a voice for silent movies”). I finally convinced myself to just get on with it and so my first effort is presented below along with some lessons learned.

Hopefully video n+1 will be an improvement on video n in this series – if I can do that I’ll be happy!

The Process and Lessons Learned

This was very much a voyage of discovery – I’ve never attempted recording myself code before and I’ve never edited video before.

To capture the screen and initial audio I used the free OBS Studio which, after a little fiddling to persuade it to capture at 4K (and some user error that cost me 20 minutes of footage), worked really well. It was unobtrusive and did what it says on the tin.

I use a good quality 4K display with my desktop machine, so on my first experiment the text was far too small. I used the scaling feature in Windows 10 to bump things up to 200% and that seemed about right (but you tell me!).

I sketched out a rough application to build but left things fairly loose as I hoped the video would feel natural and I know from presentations (which I don’t mind at all – in contrast to seeing and hearing myself!) that if I plan too much I get a bit robotic. I also figured I’d be likely to make some mistakes and with a bit of luck they’d be mistakes that would be informative to others.

This mostly worked, but I could have done with a little more practice up front as I took myself down a stupid route at one point when my brain struggled with coding and narrating simultaneously! Fortunately I was able to fix this later while editing, but I made things harder for myself than they needed to be and there’s a slight alteration in audio tone where I cut the new voice work in.

Having captured the video I transferred it to my MacBook to process in Final Cut Pro X, at which point I realised I’d captured the video in FLV – a format Final Cut doesn’t import. This necessitated downloading Handbrake to convert the video into something Final Cut could work with. Not a big deal, but I could have saved myself some time – even a pretty fast Mac takes a while to re-encode 55 minutes of 4K video!

I’d never used Final Cut before but it turned out to be fairly easy to use, and I was able to cut out my slurps of coffee and the time-wasting, uninformative mistake I made. I did have to recut some audio as I realised I’d mangled some names – this was fairly simple, but the audio doesn’t sound exactly the same as it did when recorded earlier despite using the same microphone in the same room. Again, not the end of the world (I’m not challenging for an Oscar here).

Slightly more irritating – I have a mechanical Cherry-switch keyboard which I find super pleasant to type on, but it carries the downside of making quite a clatter, which is really rather loud in the video. Hmmm. I do have an Apple Bluetooth keyboard next to me; I may try connecting that to the PC for the next installment, but it might impede my typing too much.

Overall that was a less fraught experience than I’d imagined – I did slowly get used to hearing myself while editing the video, though listening to it fresh again a day later some of that discomfort has returned! I’m sure I’ll get used to it in time.

Would love to hear any feedback over on Twitter.

Acceptance Testing with Function Monkey

I’ve been so busy the last few months that finding time to write technical articles has been difficult and I’ve not even managed to cover Function Monkey – the framework I put together for, according to its own strapline, writing testable more elegant Azure Functions with less boilerplate, more consistency, and support for REST APIs with C# / .Net.

I’m not proposing to start retro-covering nine months of development, as there is some fairly comprehensive documentation available on its own website, but I thought it might be quite interesting to talk briefly about testing.

Function Monkey and the pattern it promotes (commanding / mediation) make it very easy to unit test functions: command handlers can be constructed using dependency injection (either through an IServiceCollection approach or “poor man’s” injection), and as the function trigger handlers themselves are written and compiled by the Function Monkey build tool there is, by definition, a clean separation of infrastructure / host and business logic (though of course you can always muddy this water!).

Perhaps less obvious is how to handle acceptance / integration tests. One option, as with the out-of-the-box approach to writing Azure Functions, is to test the functions by actually triggering them with an event. This works fine, but it can quickly become fairly complex – you need to run the Functions, and outcomes can be asynchronous and so, again, awkward to validate.

Function Monkey allows for another approach that often provides a high level of coverage and high-value tests while eliminating the complexity of a full end-to-end test that includes the triggers: run the tests at the command level – essentially running everything except the pre-tested boilerplate generated by the Function Monkey framework. This also makes it easy to decouple from external dependencies, such as storage, if you wish.

Within the generated Function Monkey boilerplate, after all the busy work of deserializing, logging, and dealing with cross-cutting concerns, the heart of the function is a simple dispatch through a mediator of the command associated with the function. It looks something like this:

var result = await dispatcher.DispatchAsync(deserializedCommand);

This invokes all the implementation in your handlers and below, so simply by providing an environment in our tests that lets us run those commands we can get extensive test coverage of the integrated system without having to stand up a Function App and deal with its triggers and dependencies.

The key to this is a custom implementation of IFunctionHostBuilder for testing – this is the interface that is passed to your Function Monkey configuration class (based on IFunctionAppConfiguration). Function Monkey uses an internal “real” implementation of this as part of its runtime setup process, but by making use of a custom one built for testing you can set up your test scenario using the exact same code as your production system and then modify its configuration, if required, for test.

It’s not a lot of work to build one of these, but as it’s fairly generic and requires a small amount of Function Monkey implementation knowledge (and a little knowledge of how the mediator works in a multi-threaded environment), I’ve created a FunctionMonkey.Testing NuGet package that contains two important classes:

AbstractAcceptanceTest – a base class for your acceptance tests that creates an environment in which you can dispatch and test commands. This is useful for test frameworks that take a constructor-based approach to test setup, such as xUnit.

AcceptanceTestScaffold – a class that can be used with frameworks that take a method-based approach to test setup, such as NUnit. Internally, AbstractAcceptanceTest makes use of this class.

To see how they are used, consider the simple example function app below:

public class FunctionAppConfiguration : IFunctionAppConfiguration
{
    public void Build(IFunctionHostBuilder builder)
    {
        builder
            .Setup((serviceCollection, commandRegistry) =>
            {
                // register our dependencies
                serviceCollection.AddTransient<ICalculator, Calculator>();

                // register our commands and handlers
                commandRegistry.Discover<FunctionAppConfiguration>();
            })
            .Functions(functions => functions
                .HttpRoute("/calculator", route => route
                    .HttpFunction<AdditionCommand>("/add")
                )
            )
            ;
    }
}

This creates a function app with a single HTTP route that takes two values and adds them together. The handler itself makes use of a dependency that is injected:

internal class AdditionCommandHandler : ICommandHandler<AdditionCommand, int>
{
    private readonly ICalculator _calculator;

    public AdditionCommandHandler(ICalculator calculator)
    {
        _calculator = calculator;
    }

    public Task<int> ExecuteAsync(AdditionCommand command, int previousResult)
    {
        return Task.FromResult(_calculator.Add(command.ValueOne, command.ValueTwo));
    }
}
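The AdditionCommand itself isn’t shown above – a minimal sketch would simply declare its result type and carry the two values. And because the handler is just a class with constructor injection, it can also be unit tested directly with no Function Monkey machinery at all (this sketch uses NSubstitute and assumes the internal handler is visible to the test assembly, e.g. via InternalsVisibleTo):

public class AdditionCommand : ICommand<int>
{
    public int ValueOne { get; set; }
    public int ValueTwo { get; set; }
}

public class AdditionCommandHandlerShould
{
    [Fact]
    public async Task DelegateToTheCalculator()
    {
        // substitute the injected dependency
        ICalculator calculator = Substitute.For<ICalculator>();
        calculator.Add(2, 3).Returns(5);

        var handler = new AdditionCommandHandler(calculator);
        int result = await handler.ExecuteAsync(
            new AdditionCommand { ValueOne = 2, ValueTwo = 3 }, 0);

        Assert.Equal(5, result);
    }
}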

We could run the Function App and write our tests against the exposed HTTP endpoint simply by calling the URL (for example http://localhost:7071/calculator/add?valueOne=2&valueTwo=3) and reading the result, and there’s certainly value in that, but there’s a fair amount of orchestration involved. You can make use of some Azure Functions test code (based on the team’s own integration tests) to run the Functions in-process, but at the time of writing it’s complex, not well documented, and has been a bit of a moving target.
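To make the comparison concrete, such an end-to-end test might look something like the sketch below. Note the assumptions baked in: the Function App must already be running locally (for example via func start) on port 7071, and the response body is assumed to be the bare result – exactly the kind of orchestration and coupling the command-level approach avoids:

public class AdditionEndpointShould
{
    private static readonly HttpClient Client = new HttpClient();

    [Fact]
    public async Task ReturnTheSumOfTwoValues()
    {
        // requires the Function App to be running before the test executes
        HttpResponseMessage response = await Client.GetAsync(
            "http://localhost:7071/calculator/add?valueOne=2&valueTwo=3");

        response.EnsureSuccessStatusCode();
        Assert.Equal("5", await response.Content.ReadAsStringAsync());
    }
}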

However, we can use the classes I mentioned earlier to invoke our commands without any of that. Here’s an xUnit test that demonstrates the approach (to use these classes, add the FunctionMonkey.Testing package to your test project):

public class AdditionFunctionShould : AbstractAcceptanceTest
{
    [Fact]
    public async Task ReturnTheSumOfTwoValues()
    {
        int result = await Dispatcher.DispatchAsync(new AdditionCommand
        {
            ValueOne = 5,
            ValueTwo = 4
        });

        Assert.Equal(9, result);
    }
}

That’s an awful lot easier than the alternatives and, importantly, the test runs in the exact same runtime that Function Monkey and your FunctionAppConfiguration class set up for the real functions.

For comparison, the MSTest version using the AcceptanceTestScaffold looks like this:

[TestClass]
public class AdditionFunctionShould
{
    private AcceptanceTestScaffold _acceptanceTestScaffold;

    [TestInitialize]
    public void Setup()
    {
        _acceptanceTestScaffold = new AcceptanceTestScaffold();
        _acceptanceTestScaffold.Setup(typeof(FunctionAppConfiguration).Assembly);
    }

    [TestMethod]
    public async Task ReturnTheSumOfTwoValues()
    {
        int result = await _acceptanceTestScaffold.Dispatcher.DispatchAsync(new AdditionCommand
        {
            ValueOne = 5,
            ValueTwo = 4
        });

        Assert.AreEqual(9, result);
    }
}

I’m going to focus on the xUnit version and AbstractAcceptanceTest for the remainder of this post, but the same functionality exists on AcceptanceTestScaffold as methods rather than overrides.

It’s common in tests, even integration or acceptance tests, to want to mock out part of the system – for example, you might depend on a third-party system that isn’t a comfortable fit for test scenarios. You can accomplish this kind of modification to your normal execution runtime by using the BeforeBuild and AfterBuild support. The AfterBuild method is invoked immediately after the Setup method of the function app builder has been called and can be used, for example, to replace dependencies. The example below, though a little contrived, replaces the ICalculator with an NSubstitute substitute:

public class AdditionCommandShouldIncludingDependencyReplacement : AbstractAcceptanceTest
{
    public override void AfterBuild(IServiceCollection serviceCollection, ICommandRegistry commandRegistry)
    {
        base.AfterBuild(serviceCollection, commandRegistry);
        ICalculator calculator = Substitute.For<ICalculator>();
        calculator.Add(Arg.Any<int>(), Arg.Any<int>()).Returns(8);
        serviceCollection.Replace(new ServiceDescriptor(typeof(ICalculator), calculator));
    }

    [Fact]
    public async Task ReturnSubstitutedDependencyResult()
    {
        int result = await Dispatcher.DispatchAsync(new AdditionCommand
        {
            ValueOne = 2,
            ValueTwo = 2
        });

        Assert.Equal(8, result);
    }
}

Finally, if you require environment variables to be set for configuration, you can use a normal Azure Functions *.settings.json file and add it where you require. The example below introduces an additional base class that ensures the environment variables are always added for each test – because environment variables are global, by default the Function Monkey test harness will only add them once (though you can override this with an optional boolean flag on the AddEnvironmentVariables method):

public abstract class CommonAcceptanceTest : AbstractAcceptanceTest
{
    protected CommonAcceptanceTest()
    {
        AddEnvironmentVariables("./test.settings.json");
    }
}

public class AdditionCommandShouldWithEnvironmentVariables : CommonAcceptanceTest
{
    [Fact]
    public async Task ReturnTheSumOfTwoValues()
    {
        int result = await Dispatcher.DispatchAsync(new AdditionCommand
        {
            ValueOne = 5,
            ValueTwo = 4
        });

        Assert.Equal(9, result);
    }
}

And that’s it – I hope that’s useful! The source code for the examples can be found here on GitHub.
