
Azure Functions Performance – Update on EP1 Results

In yesterday's post comparing Azure Functions to AWS Lambda the EP1 plan was a notably poor performer – to the extent that I wondered if it was an anomalous result. For context, here are yesterday's results for a low load test:

I created a new plan this morning with a view to exploring the results further and I think I can provide some additional insight into this.

You shouldn’t have to “warm” a Premium Plan but to be sure and consistent I ran these tests after allowing for an idle / scale down period and then making a single request for a Mandelbrot.

The data here is based around a test making 32 concurrent requests to the Function App for a single Mandelbrot. Here is the graph for the initial run.

First, if we consider the overall statistics for the full run – they are still not great. If I pop those into the comparison chart I used yesterday, EP1 is still trailing – the blue column is yesterday's EP1 result and the green line is today's.

It's improved – but it's still poor. However, if we look at the graph of the run over time we can see it's something of a graph of two halves, and I've highlighted two sections of it (with the highlight numbers in the top half):

There is a marked increase in response time and requests per second rate between the two halves. Although I'm not tracking the instance IDs, I would conclude that Azure Functions scaled up to involve a second App Service instance and that resulted in the improved throughput.

To verify this I immediately ran the test again to take advantage of the increased resource availability in the Function Plan and that result is shown below along with another comparative graph of the run in context.

We can see here that the EP1 plan is now in the same kind of ballpark as Lambda and the EP2 plan. With two EP1 instances in play we are now running with a similar amount of total compute to the EP2 plan – just on two 210 ACU instances rather than one 420 ACU instance.

To obtain this level of performance we are sacrificing consumption-based billing and moving to a baseline cost of £0.17 per hour (£125 per month), bursting to £0.34 per hour (£250 per month), to cover this low level of load.

Conclusions

I would argue this verifies yesterday's results – with a freshly deployed Function App we have obtained similar results, and by looking at its behaviour over time we can see how Azure Functions adds resources to an EP1 plan, eventually giving us similar total resources to the EP2 plan and similar results.

Every workload is different and I would always encourage testing your own, but based on this I would strongly suggest that if you're using Premium Plans you dive into your workload and seek to understand whether it is a cost-effective use of your spend.

Comparative performance of Azure Functions and AWS Lambda

Update: the results below showed the EP1 plan to be a clear outlier in some of its performance results. I've since retested on a fresh EP1 plan, confirmed these results as accurate, and been able to provide further insight into the performance: Azure Functions Performance – Update on EP1 Results – Azure From The Trenches.

I was recently asked if I would spend some time comparing AWS Lambda with Azure Functions at a meetup – of course, happily! As part of preparing for that I did a bit of a dive into the performance aspects of the two systems, and I think the results are interesting and useful, so I'm also going to share them here.

Test Methodology

Methodology may be a bit grand but here’s how I ran the tests.

The majority of the tests were conducted with SuperBenchmarker against systems deployed entirely in the UK (eu-west-2 on AWS and UK South on Azure). I interleaved the test runs – testing on AWS, then testing on Azure – and ran each test multiple times to ensure I was getting consistent results.

I’ve not focused on cold start as Mikhail Shilkov has covered that ground excellently and I really have nothing to add to his analysis:

Cold Starts in Azure Functions | Mikhail Shilkov
Cold Starts in AWS Lambda | Mikhail Shilkov

I essentially focused on two sets of tests – an IO workload (flushing a queue and writing some blobs) and a compute workload (calculating a Mandelbrot and returning it as an image).

All tests are making use of .NET Core 3.1 and I’ve tested on the following configurations:

  1. Azure Functions – Consumption Plan
  2. Azure Functions – EP1 Premium Plan
  3. Azure Functions – EP2 Premium Plan
  4. AWS Lambda – 256 Mb
  5. AWS Lambda – 1024 Mb
  6. AWS Lambda – 2048 Mb

It's worth noting that Lambda assigns a proportion of CPU(s) based on the allocated memory – more memory means more horsepower and potentially multiple cores (beyond the 1.8Gb mark if memory serves).

Queue Processing

For this test I preloaded a queue with 10,000 and 100,000 queue items and wrote the time each queue item was processed to a blob file (a file per queue item). The measured times are between the time the first blob was created and the time the last blob was created.

On Azure I made use of Azure Queue Storage and Azure Blob Storage and on AWS I used SQS and S3.
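
The post doesn't include the function code itself, but a minimal sketch of the Azure side of this test might look like the following – the queue name, container name and blob naming scheme here are assumptions for illustration:

using System;
using System.IO;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;

public static class QueueProcessor
{
    // For each queue item write the time it was processed to a blob, one blob per item
    [FunctionName("QueueProcessor")]
    public static async Task Run(
        [QueueTrigger("benchmark-queue")] string queueItem,
        [Blob("benchmark-output/{rand-guid}.txt", FileAccess.Write)] Stream outputBlob)
    {
        byte[] timestamp = Encoding.UTF8.GetBytes(DateTimeOffset.UtcNow.ToString("o"));
        await outputBlob.WriteAsync(timestamp, 0, timestamp.Length);
    }
}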

AWS was the clear winner of this test, and from the shape of the data it appeared that AWS was accelerating faster than Azure – more eager to process more items – though I would need to do further testing to confirm. It is possible that the other services involved were an influencing factor, but it is still a reasonable IO test of a function working with common services.

HTTP Trigger under steady load

This test was focused on a compute workload – essentially calculating a Mandelbrot. The Function / Lambda will generate n Mandelbrots based on a query parameter. The Mandelbrots are generated in parallel using the Task system.
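
The exact implementation isn't shown in the post, but a minimal sketch of an HTTP handler along these lines is below – the n query parameter name and the RenderMandelbrot helper are assumptions for illustration:

using System;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class MandelbrotFunction
{
    [FunctionName("Mandelbrot")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get")] HttpRequest req)
    {
        // Number of Mandelbrots to calculate per request, driven by a query parameter
        int count = int.TryParse(req.Query["n"], out int n) ? n : 1;

        // Generate the Mandelbrots in parallel using the Task system
        byte[][] images = await Task.WhenAll(
            Enumerable.Range(0, count)
                      .Select(_ => Task.Run(() => RenderMandelbrot(1024, 768))));

        // Return one of the generated images
        return new FileContentResult(images[0], "image/png");
    }

    // Placeholder for the actual fractal calculation used in the tests
    private static byte[] RenderMandelbrot(int width, int height) =>
        throw new NotImplementedException();
}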

32 concurrent requests, 1 Mandelbrot per request

Percentile and average response times can be seen in the graph below (lower is better):

With this low level of load all the services performed acceptably. The Azure Premium Plans strangely performed the worst, with the EP1 service being particularly bad. I reran this several times and received similar results.

The range of response times (min, max) can be seen below alongside the average, followed by the total number of requests served over the 2 minute period:

32 concurrent requests, 8 Mandelbrots per request

In this test each request results in the Lambda / Function calculating the Mandelbrot 8 times in parallel and then returning one of the Mandelbrots as an image.

Percentile and average response times can be seen in the graph below (lower is better):

Things get a bit more interesting here. The level of compute is beyond the proportion of CPU assigned to the 256Mb Lambda and it struggles constantly. The Consumption Plan and EP1 Premium Plan fare a little better but are still impacted. However the 1024Mb and 2048Mb Lambdas are comfortable – with the latter essentially being unchanged.

The range of response times (min, max) can be seen below alongside the average, followed by the total number of requests served over the 2 minute period:

I don’t think there’s much to add here – it largely plays out as you’d expect.

HTTP undergoing a load spike

In this test I ran at a low and steady rate of 2 concurrent requests for 1 Mandelbrot over 4 minutes. After around 1 to 2 minutes I then loaded the system, independently, with a spike of 64 concurrent requests for 8 Mandelbrots.

Azure

First up Azure with the Consumption Plan:

It's clear to see in this graph where the additional load begins and, unfortunately, Azure really struggles with it. Served requests largely flatline throughout the spike. To provide more insight here's a graph of the spike (note: I actually captured this from a later run but the results were the same as this first run).

Azure struggled to serve any of this. It didn't fail any requests but performance really nosedived.

I captured the same data for the EP1 and EP2 Premium Plans and these can be seen below:

Unfortunately Azure continues to struggle – even on an EP2 plan (costing £250 per month at a minimum). The spike data was broadly the same as in the Consumption plan run.

I would suggest this is due to Azure's fairly monolithic architecture – all of our functions run on shared resources, and the more expensive requests can sink the entire shared space, something Azure isn't able to address.

Lambda

First up the 256Mb Lambda:

We can see here that the modest 1 Mandelbrot requests made to the Lambda are untroubled by the spike. You can see a slight rise in response time and a drop in RPS when the additional load hits, but overall the Lambda maintains consistent performance. You can see what is happening in the spike below:

As in our earlier tests, the 256 Mb Lambda struggles with the request for 8 Mandelbrots – but its performance is isolated away from the smaller requests due to Lambda's more isolated architecture. The additional spikes showed characteristics similar to the runs measured earlier. The 1024 Mb and 2048 Mb runs are shown below:

Again they run at a predictable and consistent rate. The graphs for the spikes behaved in line with the performance of their respective consistent loads.

Spike Test Results Summary

Based on the above it's unsurprising that the overall metrics are heavily in favour of AWS Lambda.

Concluding Thoughts

It's fascinating to see how the different architectures impact the scaling and performance characteristics of the services. With Lambda being based around containerized functions, as long as the workload of a specific request fits within a container's capabilities, performance remains fairly constant and consistent, and any containers that are stretched have only an isolated impact.

As long as you measure, understand, and plan for the resource requirements of your workloads, Lambda can present a fairly consistent, consumption-based pricing and scaling model.

Whereas Azure Functions uses the coarse App Service instance scaling model – many functions run within a single resource, and this means that additional workload can stretch a resource beyond its capabilities and have an adverse effect on the whole system. Even spending more money has a limited impact for spikes – we would need resources that can manage the "peak peak" request workloads.

I’ve obviously only pushed these systems so far in these tests – enough to show the weaknesses in the Azure Functions compute architecture but not enough to really expose Lambda. That’s something to return to in a future installment.

In my view the Azure Functions infrastructure needs a major overhaul in order to be performance competitive. It has a FaaS programming model shackled to, and held back by, a traditional web server architecture. That said, I am surprised by the EP1 results and will reach out to the Functions team.

An Azure Reference Architecture

There are an awful lot of services available on Azure but I’ve noticed a pattern emerging in a lot of my work around web apps. At their core they often have a similar architecture, deployment in Azure, and process for build and release.

For context, a lot of my hands-on work over the last 3 years has been as a freelancer developing custom systems for people or on my own side projects (most recently https://www.forcyclistsbycyclists.com). In these situations I've found productivity to be super important in a few key ways:

  1. There’s a lot to get done, one or two people, and not much time – so being able to crank out a lot of work quickly and to a good level of quality is key.
  2. Adaptability – if it's an externally focused green-field system there's a reasonable chance that there's a degree of uncertainty over what the right feature set is. I generally expect to have to iterate a few times.
  3. I can’t be wasting time repeating myself or undertaking lengthy manual tasks.

Due to this I generally avoid over-complicating my early stage deployment with too much separation – but I *do* make sure I understand where my boundaries are, and I apply principles in the code that support the later distribution of the system.

With that out of the way… here's an architecture I've used as a good starting point several times now. While it continues to evolve and I will vary specific decisions based on need, it's served me well and so I thought I'd share it here.

I realise there are some elements on here that are not "the latest and greatest", however it's rarely productive to be on the bleeding edge. It seems likely, for example, that I'll adopt the Azure SPA support at some point – but there's not much in it for me doing that now. Similarly I can imagine giving GitHub Actions a go at some point – but what do I really gain by throwing what I know away today? From the experiments I've run I gain no productivity. Judging this stuff is something of a fine line, but at the risk of banging this drum too hard: far too many people adopt technology because they see it being pushed and talked about on Twitter or dev.to (etc.) by the vendor, by their DevRel folk, by their community (e.g. MVPs), and by those who have jumped early and are quite possibly (likely!) suffering from a bizarre mix of Stockholm Syndrome and sunk cost fallacy: "honestly the grass is so much greener over here… I'm so happy I crawled through the barbed wire".

Rant over. If you’ve got any questions, want to tell me I’m crazy or question my parentage: catch me over on Twitter.

Architecture

Build & Release

I’ve long been a fan of automating at least all the high value parts of build & release. If you’re able to get it up and running quickly it rapidly pays for itself over the lifetime of a project. And one of the benefits of not CV chasing the latest tech is that most of this stuff is movable from project to project. Once you’ve set up a pipeline for a given set of assets and components its pretty easy to use on your next project. Introduce lots of new components… yeah you’ll have lots of figuring out to do. Consistency is underrated in our industry.

So what do I use and why?

  1. Git repository – I was actually an early adopter of Git, mostly because I was taking my personal laptop into a disconnected environment on a regular basis when it first started to emerge and because I'm a frequent committer.

    In this architecture it holds all the assets required to build & deploy my system other than secrets.
  2. Azure DevOps – I use the pipelines to co-ordinate build & release activities, directly using built-in tasks, third-party tasks, and scripts. Why? At the time I started it was free and "good enough". I've slowly moved over to the YAML pipelines. Slowly.
  3. My builds output four main assets: an ARM template, a Docker container, a built single page application, and SQL migration scripts. These get deployed into an Azure resource group, an Azure container registry, blob storage, and a SQL database respectively.

    My migration scripts are applied against a SQL database using DbUp (see the sketch after this list) and my ARM templates are generated using Farmer and then used to provision a resource group. I'm fairly new to Farmer but so far it's been fantastic – previously I was using Terraform but Farmer just fits a little nicer with my workflow and I like to support the F# community.
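
As an illustration, a minimal DbUp runner along these lines does the job – the connection string handling and embedded script location here are assumptions, not my exact setup:

using System;
using System.Reflection;
using DbUp;

internal class Program
{
    private static int Main(string[] args)
    {
        // Connection string supplied by the release pipeline (an assumption for this sketch)
        string connectionString = args[0];

        var upgrader = DeployChanges.To
            .SqlDatabase(connectionString)
            .WithScriptsEmbeddedInAssembly(Assembly.GetExecutingAssembly())
            .LogToConsole()
            .Build();

        var result = upgrader.PerformUpgrade();
        if (!result.Successful)
        {
            Console.Error.WriteLine(result.Error);
            return -1;
        }
        return 0;
    }
}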

Runtime Environment

So what do I actually use to run and host my code?

  1. App Service – I’ve nearly always got an API to host and though I will sometimes use Azure Functions for this I more often use the Web App for Containers support.

    Originally I deployed directly into a "plain" App Service but grew really tired of the ongoing "now this is really how you deploy without locked files" fiasco, and the final straw was the bungled .NET Core release.

    It's just easier and more reliable to deploy a container.
  2. Azure DNS – what it says on the tin! Unless there is a good reason to run it elsewhere I prefer to keep things together, keeps things simple.
  3. Azure CDN – gets you a free SSL cert for your single page app, is fairly inexpensive, and helps with load times.
  4. SQL Database – still, I think, the most flexible, general purpose, and productive data solution. Sure, at scale others might be better. Sure, sometimes less structured data is better suited to less structured data sources. But when you're trying to get stuff done there's a lot to be said for having an atomic, transactional data store. And if I had a tenner for every distributed / non-transactional design I've seen that dealt only with the happy path I would be a very, very wealthy man.

    Oh and “schema-less”. In most cases the question is is the schema explicit or implicit. If its implicit… again a lot of what I’ve seen doesn’t account for much beyodn the happy path.

    SQL might not be cool, and maybe I’m boring (but I’ll take boring and gets shit done), but it goes a long way in a simple to reason about manner.
  5. Storage accounts – in many systems you come across small bits of data that are handy to dump into, say, a blob store (poor man's NoSQL right there!) or table store. I generally find myself using one at some point.
  6. Service Bus – the unsung hero of Azure in my opinion. It's reliable, does what it says on the tin, and is easy to work with. Most applications have some background activity, chatter, or async events to deal with and Service Bus is a great way of handling this. I sometimes pair this (and Azure Functions below) with SignalR.
  7. Azure Functions – great for processing the Service Bus, running code on a schedule, and generally providing glue for your system. Again I often find myself with at least a handful of these. I often also use Service Bus queues with Functions to provide a "poor man's admin console" – basically allowing me to kick off administrative events by dropping a message on a queue (see the sketch after this list).
  8. Application Insights – easy way of gathering together logs, metrics, telemetry etc. If something does go wrong or your system is doing something strange the query console is a good way of exploring what the root cause might be.
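
The "poor man's admin console" mentioned above can be as simple as a Service Bus queue triggered function – a hypothetical sketch is below (the queue name, connection setting name and function name are made up for illustration):

using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class AdminCommands
{
    // Dropping a message on the queue (from the portal, a script, etc.) kicks off the work
    [FunctionName("RebuildSearchIndex")]
    public static void Run(
        [ServiceBusTrigger("admin-rebuild-search-index", Connection = "ServiceBusConnection")] string message,
        ILogger log)
    {
        log.LogInformation("Admin command received: {message}", message);
        // ... perform the administrative task here ...
    }
}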

Code

I’m not going to spend too long talking about how I write the system itself (plenty of that on this blog already). In generally I try and keep things loosely coupled and normally start with a modular monolith – easy to reason about, well supported by tooling, minimal complexity but can grow into something more complex when and if that’s needed.

My current tools of choice are end to end F# with Fable and Saturn / Giraffe on top of ASP.Net Core, and Fable Remoting on top of all that. I hopped onto functional programming as:

  1. It seemed a better fit for building web applications and APIs.
  2. I’d grown tired with all the C# ceremony.
  3. Collectively we seem to have decided that traditional OO is not that useful – yet we’re working in languages built for that way of working. And I felt I was holding myself back / being held back.

But if you’re looking to be productive – use what you know.

App Service Easy Auth with Auth0 (or any Open ID Connect provider)

So I’m going to prefix this with a warning – I doubt this is officially supported but at a basic level it does seem to work. I would use at your peril and I’m writing this in the hope that it makes for a useful starting point discussion with the App Service team.

I was looking at Easy Auth this week and found myself curious as to whether it would work with a generic Open ID Connect identity provider. My first choice provider is Auth0 but that's not one of the listed providers on the Easy Auth configuration page which, on the face of it, is quite limited:

Azure AD is (as well as many other things) an Open ID Connect provider, so I had a look at its settings in the advanced tab and it's asking for two pretty common pieces of information in the identity world: a client ID and an issuer URL. I had an app in Auth0 that I use for general testing so I pasted in its well known configuration endpoint and the ID for my client:

I hit save and it seemed to accept everything. My web app is sat on the URL https://jdreasyauth0.azurewebsites.net/ so on the Auth0 side I added a callback URL to the Easy Auth callback endpoint:

Easy Auth forwards on the contents of common claims in headers such as X-MS-CLIENT-PRINCIPAL-ID (the subject) and X-MS-CLIENT-PRINCIPAL-NAME (the name) so to see if this was working I uploaded a simple ASP.Net Core app that would output the contents of the request headers to a web page. Then I paid it a visit in my browser:
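
That header-dumping app was only a few lines of ASP.NET Core – something along these lines (a sketch rather than the exact code I deployed):

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;

public class Startup
{
    public void Configure(IApplicationBuilder app)
    {
        // Echo every request header back as plain text so the Easy Auth headers
        // (X-MS-CLIENT-PRINCIPAL-ID, X-MS-CLIENT-PRINCIPAL-NAME, etc.) are visible in the browser
        app.Run(async context =>
        {
            context.Response.ContentType = "text/plain";
            foreach (var header in context.Request.Headers)
            {
                await context.Response.WriteAsync($"{header.Key}: {header.Value}\n");
            }
        });
    }
}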

Oh. So that’s hurdle one passed. It does redirect successfully to a none-Azure AD identity provider. What about logging in?

Great. Yes. This works too. And the headers are correct based on the identity I used to login with.

How does this compare to the headers from an Azure AD backed Easy Auth:

Basically the Auth0 login is missing the refresh token (I did later set a client secret and tweak configuration in Auth0) – so there might be some work needed there. But I don’t think that’s essential.

It would be incredibly useful to be able to use Easy Auth in a supported manner with other identity providers – particularly for Azure Functions where dealing with token level authorization is a bit more “low level” than in a fully fledged framework like ASP .Net Core (though my Function Monkey library can help with this) and is only dealt with after a function invocation.

Using Function Monkey with MediatR

There are a lot of improvements coming in v4 of Function Monkey and the beta is currently available on NuGet. As the full release approaches I thought it would make sense to introduce some of these new capabilities here.

In order to simplify Azure Functions development Function Monkey makes heavy use of commanding via a mediator and ships with my own mediation library. However there's a lot of existing code out there that makes use of the popular MediatR library which, if Function Monkey supported it, could fairly easily be moved into a serverless execution environment.

Happily Function Monkey now supports just this! You can use my existing bundled mediator, bring your own mediator, or add the shiny new FunctionMonkey.MediatR NuGet package. Here we’re going to take a look at using the latter.

First begin by creating a new, empty, Azure Functions project and add three NuGet packages:

FunctionMonkey
FunctionMonkey.Compiler
FunctionMonkey.MediatR

At the time of writing be sure to use the prerelease packages version 4.0.39-beta.4 or later.

Next create a folder called Models. Add a class called ToDoItem:

public class ToDoItem
{
    public Guid Id { get; set; }
    
    public string Title { get; set; }
    
    public bool IsComplete { get; set; }
}

Now add a folder called Services and add an interface called IRepository:

internal interface IRepository
{
    Task<ToDoItem> Create(string title);
}

And a memory based implementation of this called Repository:

internal class Repository : IRepository
{
    private readonly List<ToDoItem> _items = new List<ToDoItem>();
    
    public Task<ToDoItem> Create(string title)
    {
        ToDoItem newItem = new ToDoItem()
        {
            Title = title,
            Id = Guid.NewGuid(),
            IsComplete = false
        };
        _items.Add(newItem);
        return Task.FromResult(newItem);
    }
}

Now create a folder called Commands and in here create a class called CreateToDoItemCommand:

public class CreateToDoItemCommand : IRequest<ToDoItem>
{
    public string Title { get; set; }
}

If you’re familiar with Function Monkey you’ll notice the difference here – we’d normally implement the ICommand<> interface but here we’re implementing MediatR’s IRequest<> interface instead.

Next create a folder called Handlers and in here create a class called CreateToDoItemCommandHandler as shown below:

internal class CreateToDoItemCommandHandler : IRequestHandler<CreateToDoItemCommand, ToDoItem>
{
    private readonly IRepository _repository;

    public CreateToDoItemCommandHandler(IRepository repository)
    {
        _repository = repository;
    }
    
    public Task<ToDoItem> Handle(CreateToDoItemCommand request, CancellationToken cancellationToken)
    {
        return _repository.Create(request.Title);
    }
}

Again the only real difference here is that rather than implement the ICommandHandler interface we implement the IRequestHandler interface from MediatR.

Finally we need to add our FunctionAppConfiguration class to the root of the project to wire everything up:

public class FunctionAppConfiguration : IFunctionAppConfiguration
{
    public void Build(IFunctionHostBuilder builder)
    {
        builder
            .Setup(sc => sc
                .AddMediatR(typeof(FunctionAppConfiguration).Assembly)
                .AddSingleton<IRepository, Repository>()
            )
            .UseMediatR()
            .Functions(functions => functions
                .HttpRoute("todo", route => route
                    .HttpFunction<CreateToDoItemCommand>(HttpMethod.Post)
                )
            );
    }
}

Again this should look familiar, however there are two key differences. Firstly, in the Setup block we use MediatR's IServiceCollection extension method AddMediatR – this will wire up the request handlers in the dependency injector. Secondly, the .UseMediatR() option instructs Function Monkey to use MediatR for its command mediation.

And really that’s all their is to it! You can use both requests and notifications and you can find a more fleshed out example of this on GitHub.

As always feedback is welcome on Twitter or over on the GitHub issues page for Function Monkey.

Function Monkey for F#

Over the last couple of weeks I’ve been working on adapting Function Monkey so that it feels natural to work with in F#. The driver for this is that I find myself writing more and more F# and want to develop the backend for a new app in it and run it on Azure Functions.

I’m not going to pretend its pretty under the covers but its starting to take shape and I’m beginning to use it in my new backend and so now seemed like a good time to write a little about it by walking through putting together a simple ToDo style API that saves data to CosmosDB.

Declaring a Function App

As ever you’ll need to begin by creating a new Azure Function app in the IDE / editor of your choice. Once you’ve got that empty starting point you’ll need to two NuGet package FunctionMonkey.FSharp to the project (either with Paket or Nuget):

FunctionMonkey.FSharp
FunctionMonkey.Compiler

This is currently in alpha and so you’ll need to enable pre-release packages and add the following NuGet repository:

https://www.myget.org/F/functionmonkey-beta/api/v3/index.json

Next create a new module called EntryPoint that looks like this:

namespace FmFsharpDemo
open AccidentalFish.FSharp.Validation
open System.Security.Claims
open FunctionMonkey.FSharp.Configuration
open FunctionMonkey.FSharp.Models

module EntryPoint =
    exception InvalidTokenException
    
    let validateToken (bearerToken:string) =
        match bearerToken.Length with
        | 0 -> raise InvalidTokenException
        | _ -> new ClaimsPrincipal(new ClaimsIdentity([new Claim("userId", "2FF4D861-F9E3-4694-9553-C49A94D7E665")]))
    
    let isResultValid (result:ValidationState) =
        match result with
        | Ok -> true
        | _ -> false
                                    
    let app = functionApp {
        // authorization
        defaultAuthorizationMode Token
        tokenValidator validateToken
        claimsMappings [
            claimsMapper.shared ("userId", "userId")
        ]
        // validation
        isValid isResultValid
        // functions
        httpRoute "version" [
            azureFunction.http (Handler(fun () -> "1.0.0"), Get, authorizationMode=Anonymous)
        ]
    }

Ok. So what’s going on here? We’ll break it down block by block. We’re going to demonstrate authorisation using a (pretend) bearer token and so we begin by creating a function that can validate a token:

exception InvalidTokenException

let validateToken (bearerToken:string) =
    match bearerToken.Length with
    | 0 -> raise InvalidTokenException
    | _ -> new ClaimsPrincipal(new ClaimsIdentity([new Claim("userId", "2FF4D861-F9E3-4694-9553-C49A94D7E665")]))

This is our F# equivalent of the ITokenValidator interface in the C# version. In this case we take valid to mean any non-empty string in the authorization header, and if the token is valid then we return a ClaimsPrincipal – again here we just return a made up principal. In the case of an invalid token we simply raise an exception – Function Monkey will translate this to a 401 HTTP status.

We’re going to validate the inputs to our functions using my recently released validation framework. Function Monkey for F# supports any validation framework but as such you need to tell it what constitutes a validation failure and so next we create a function that is able to do this:

let isResultValid result = match result with | Ok -> true | _ -> false

Finally we declare our Function App itself:

let app = functionApp {
    // authorization
    defaultAuthorizationMode Token
    tokenValidator validateToken
    claimsMappings [
        claimsMapper.shared ("userId", "userId")
    ]
    // validation
    isValid isResultValid
    // functions
    httpRoute "version" [
        azureFunction.http (Handler(fun () -> "1.0.0"), Get, authorizationMode=Anonymous)
    ]
}

We declare our settings (and optionally functions) inside a functionApp block that we have to assign to a public member on the module so that the Function Monkey compiler can find your declaration.

Within the block we start by setting up our authorisation to use token validation (line 3) and instruct it to use the token validator function we created earlier (line 4). In lines 5 to 7 we then set up a claims mapping which will set userId on any of our record types associated with functions to the value of the userId claim. You can also set mappings to a specific command type property like in the C# version.

On line 9 we tell Function Monkey to use our isResultValid function to determine if a validation result constitutes success or failure.

Then finally on line 11 we declare a HTTP route and a function within it. If you’re familiar with the C# version you can see here that we no longer use commands and command handlers – instead we use functions and their input parameter determines the type of the model being passed into the Azure Function and their return value determines the output of the Azure Function. In this case the function has no parameters and returns a string – a simple API version. We set this specific function to not require authorisation.

Finally let's add a host.json file to remove the auto-prefixing of api to routes (this causes problems with things like Open API output):

{
  "version": "2.0",
  "extensions": {
    "http": {
      "routePrefix": ""
    }
  }
}

If we run this now then in Postman we should be able to call the endpoint http://localhost:7071/version and receive the response "1.0.0".

Building our ToDo API

If you’re familiar with Function Monkey for C# then at this point you might be wandering where the rest of the functions are. We could declare them all here like we would in C# but the F# version of Function Monkey allows functions to be declared in multiple modules so that the functions can be located close to the domain logic and to avoid a huge function declaration block.

To get started create a new module called ToDo and we'll begin by creating a type to model our to do items – we'll also use this type for updating our to do items:

type ToDoItem =
    {
        id: string
        owningUserId: string
        title: string
        isComplete: bool
    }

Next we’ll declare a type for adding a to do item:

type AddToDoItemCommand =
    {
        userId: string
        title: string
        isComplete: bool
    }

And finally a type that represents querying for an item:

type GetToDoItemQuery =
    {
        id: string
    }

Next we’ll declare our validations for these models:

let withIdValidations = [
   isNotEmpty
   hasLengthOf 36
]

let withTitleValidations = [
    isNotEmpty
    hasMinLengthOf 1
    hasMaxLengthOf 255
]

let validateGetToDoItemQuery = createValidatorFor<GetToDoItemQuery>() {
    validate (fun q -> q.id) withIdValidations
}
    
let validateAddToDoItemCommand = createValidatorFor<AddToDoItemCommand>() {
    validate (fun c -> c.userId) withIdValidations
    validate (fun c -> c.title) withTitleValidations
}

let validateToDoItem = createValidatorFor<ToDoItem>() {
    validate (fun c -> c.id) withIdValidations
    validate (fun c -> c.title) withTitleValidations
    validate (fun c -> c.owningUserId) withIdValidations
}

Ok. So now we need to create a function for adding an item to the database and another for getting one from it. We'll use Azure CosmosDB as a data store and I'm going to assume you've set one up. Our add function needs to accept a record of type AddToDoItemCommand and return a new record of type ToDoItem, assigning properties as appropriate:

let addToDoItem command =
    {
        id = Guid.NewGuid().ToString()
        owningUserId = command.userId
        title = command.title
        isComplete = command.isComplete
    }

The user ID on our command will have been populated by the claims binding. We don’t write the item to Cosmos here, instead we’re going to use an output binding shortly.

Next our function for reading a to do item from Cosmos:

let getToDoItem query =
    CosmosDb.reader<ToDoItem> <| query.id

CosmosDb.reader is a super simple helper function I created:

namespace FmFsharpDemo
open Microsoft.Azure.Cosmos
open System

module CosmosDb =
    let cosmosDatabase = "testdatabase"
    let cosmosCollection = "colToDoItems"
    let cosmosConnectionString = Environment.GetEnvironmentVariable("cosmosConnectionString")
    
    let reader<'t> id =
        async {
            use client = new CosmosClient(cosmosConnectionString)
            let container = client.GetContainer(cosmosDatabase, cosmosCollection)
            let! response = container.ReadItemAsync<'t>(id, new PartitionKey(id)) |> Async.AwaitTask
            return response.Resource
        }
    

If we inspect the signatures of our two functions we'll find that addToDoItem has a signature of AddToDoItemCommand -> ToDoItem and getToDoItem has a signature of GetToDoItemQuery -> Async<ToDoItem>. One of them is asynchronous and the other is not – Function Monkey for F# supports both forms. We're not going to write a handler for updating an existing item at all, in order to demonstrate handler-less functions (though as we'll see we'll duck a slight issue for the time being!).

There is one last step we’re going to take before we declare our functions and that’s to create a curried output binding function:

let todoDatabase =
    cosmosDb cosmosCollection cosmosDatabase

In the above cosmosDb is a function that is part of the Function Monkey output binding set and it takes three parameters – the collection / container name, the database name and finally the function that the output binding is being applied to. We’re going to use it multiple times so we create this curried function to make our code less repetitive and more readable.

With all that we can now declare our functions block:

let toDoFunctions = functions {
    httpRoute "api/v1/todo" [
        azureFunction.http (AsyncHandler(getToDoItem),
                            verb=Get, subRoute="/{id}",
                            validator=validateGetToDoItemQuery)
        azureFunction.http (Handler(addToDoItem),
                            verb=Post,
                            validator=validateAddToDoItemCommand,
                            returnResponseBodyWithOutputBinding=true)
            |> todoDatabase
        azureFunction.http (NoHandler, verb=Put, validator=validateToDoItem)
            |> todoDatabase
    ]
}

The functions block is a subset of the functionApp block we saw earlier and can only be used to define functions – shared configuration must go in the functionApp block.

Hopefully the first, GET verb, function is reasonably self-explanatory. The AsyncHandler case instructs Function Monkey that this is an async function and we assign a validator with the validator option.

The second function, for our POST verb, introduces a new concept – output bindings. We pipe the output of azureFunction.http to our curried output binding and this will result in a function being created that outputs to Cosmos DB. Because we’re using the Cosmos output binding we also need to add the Microsoft.Azure.WebJobs.Extensions.CosmosDB package to our functional project. We set the option returnResponseBodyWithOutputBinding to true so that as well as sending the output of our function to the output trigger we also return it as part of the HTTP response (this is optional as you can imagine in a more complex scenario that could leak data).

Finally for the third function our PUT verb also uses an output binding but this doesn't have a handler at all, hence the NoHandler case. In this scenario the command that is passed in, once validated, is simply passed on as the output of the function. And so in this instance we can PUT a to do item to our endpoint and it will update the appropriate entry in Cosmos. (Note that for the moment I have not answered the question as to how to prevent one user from updating another user's to do items – our authorisation approach is currently limited and I'll come back to that in a future post).

Trying It Out

With all that done we can try this function app out in Postman. If we begin by attempting to add an invalid item to our POST endpoint, say with an empty title, we'll get a 400 status code returned and a response as follows:

{
  "case": "Errors",
  "fields": [
    [
      {
        "message": "Must not be empty",
        "property": "title",
        "errorCode": "isNotEmpty"
      },
      {
        "message": "Must have a length no less than 1",
        "property": "title",
        "errorCode": "hasMinLengthOf"
      }
    ]
  ]
}

Now if we run it with a valid payload we will get:

{
  "id": "09482e8d-41aa-4c25-9552-b7b05bf0a787",
  "owningUserId": "2FF4D861-F9E3-4694-9553-C49A94D7E665",
  "title": "Buy underpants",
  "isComplete": false
}

Next Steps

These are with me really – I need to continue to flesh out the functionality which at this point essentially boils down to expanding out the computation expression and its helpers. I also need to spend some time refactoring aspects of Function Monkey. I’ve had to dig up and change quite a few things so that it can work in this more functional manner as well as continue to support the more typical C# patterns.

Then of course there is documentation!

