Migrating www.forcyclistsbycyclists.com to AWS from Azure (Part 1)

If you follow me on Twitter you might have seen that as a side project I run a cycling performance analytics website called Performance For Cyclists – this is currently running on Azure.

It’s built in F# and deploys to Azure largely like this (it’s moved on a little since I drew this but not massively):

It runs fine but if you’ve been following me recently you’ll know I’ve been looking at AWS and am becoming somewhat concerned that Microsoft are falling behind in a couple of key areas:

  • Support for .NET – AWS always seem to be a step ahead when it comes to running .NET in serverless environments: official support for the latest runtimes rolls out quickly, you can deploy custom runtimes if you need to, cold starts are much better, and they can run an ASP.NET Core application serverlessly with much less fuss.

    I can also, already, run .NET on ARM on AWS which leads me to my second point (it’s almost as if I planned this)…
  • Lower compute costs – my recent tests demonstrated that I could achieve a 20% to 40% saving, depending on the workload, by making use of ARM on AWS. It seems clear that AWS are going to broaden ARM out yet further and I can imagine them using that to put some distance between Azure and AWS pricing.

    I’ve poked around this as best I can with the channels available to me but can’t get any engagement, so my current assumption is that Microsoft either aren’t listening (to me or more broadly), know but have no response, or know but aren’t yet ready to reveal one.

(just want to be clear about something – I don’t have an intrinsic interest in ARM, it’s the outcomes and any coupled economic opportunities that I am interested in)

I’m also just plain and simple curious. I’ve dabbled with AWS, mostly when clients were using it when I freelanced, but never really gone at it with anything of significant size.

I’m going to have to figure my way through things a bit, and doubtless iterate, but at the moment I’m figuring its going to end up looking something like this:

Leaving Azure Maps there isn’t a mistake – I’m not sure what service on AWS offers the functionality I need, happy to hear suggestions on Twitter!

I may go through this process and decide I’m going to stick with Azure but worst case is that I learn something! Either way I’ll blog about what I learn. I’ve already got the API up and running in ECS backed by Fargate and being built and deployed through GitHub Actions and so I’ll write about that in my next post.

Compute “Bang for Buck” on Azure and AWS – 20% to 40% advantage on AWS

As I normally post from a developer perspective I thought it might be worth starting off with some additional context for this post. If you follow me on Twitter you might know that about 14 months ago I moved into a CTO role at a rapidly growing business – we’re making ever increasing use of the cloud both by migrating workloads and the introduction of new workloads. Operational cost is a significant factor in my budget. To me the cloud can be summarised as “cloud = economics + capabilities” and so if I have a similar set of capabilities (or at least capabilities that map to my needs) then reduction in compute costs has the potential to drive the choice of vendor and unlock budget I can use to grow faster.

In the last few posts I’ve been exploring the performance of ARM processors in the cloud but ultimately what matters to me is not a processor architecture but the economics it brings – how much am I paying for a given level of performance and set of characteristics.

It struck me there were some interesting differences across ARM, x86, Azure and AWS and I’ve expanded my testing and attempted here to present these findings in (hopefully) useful terms.

All tests have been run on CentOS Linux (or Amazon Linux, the AWS derivative) using the .NET 5 runtime with Apache acting as a reverse proxy to Kestrel. I’ve followed the same setup process on every VM and then run performance tests directly against their public IP using loader.io, all within the USA.

I’ve run two workloads:

  1. Generate a Mandelbrot – this is computationally heavy with no asynchronous yield points.
  2. A test that simulates handing off asynchronously to remote resources. I’ve included a small degree of randomness in this.

At the bottom of the post is a table containing the full set of tests I’ve run on the many different VM types available. I’m going to focus on some of the more interesting scenarios here.

Computational Workload

2 Core Tests

For these tests I picked what on AWS is a mid-range machine and on Azure the entry level D series machine:

AWS (ARM): t4g.large – a 2 core VM with 8GiB of RAM and costing $0.06720 per hour
AWS (x86): t3.large – a 2 core VM with 8GiB of RAM and costing $0.08320 per hour
Azure (x86): D2s v4 – a 2 core VM with 8GiB of RAM and costing $0.11100 per hour

On these machines I then ran the workloads with different numbers of clients per second and measured their response times and the failure rate (failure being categorised as a response of > 10 seconds):

Both Intel VMs generated too many errors at the 25 client per second rate and the load tester aborted.

It’s clear from these results that the ARM VM running on AWS has a significant bang for buck advantage – it’s more performant than the Intel machines and is 20% cheaper than the AWS Intel machine and 40% cheaper than the Azure machine.

Interestingly the Intel machine on AWS lags behind the Intel machine on Azure particularly when stressed. It is however around 20% cheaper and it feels as if performance between the Intel machines is largely on the same economic path (the AWS machine is slightly ahead if you normalise the numbers).

4 Core Tests

I wanted to understand what a greater number of cores would do for performance – in theory it should let me scale past the 20 client per second level of the smaller instances. Having concluded that ARM represented the best value for money for this workload on AWS I didn’t do an x86 test on AWS. I used:

AWS: t4g.xlarge (ARM) – a 4 core VM with 16GiB of RAM and costing $0.13440 per hour
Azure: D4s_v4 – a 4 core VM with 16GiB of RAM and costing $0.22200 per hour

I then ran the workloads with different numbers of clients per second and measured their response times and the failure rate (failure being categorised as a response of > 10 seconds):

The Azure instance failed the 55 client per second rate – it had so many responses above 10 seconds in duration that the load test tool aborted the test.

It’s clear from these graphs that the ARM VM running on AWS outperforms Azure both in terms of response time and massively in terms of bang for buck – it’s around 40% cheaper than the corresponding Azure VM.

Starter Workloads

One of the nice things about AWS and Azure is they offer very cheap VMs. The Azure VMs are burstable (and there is some complexity here with banked credits) which makes them hard to measure but as we saw in a previous post the ARM machines perform very well at this level.

The three machines used are:

AWS (ARM): t4g.micro, 2 core, 1GiB of RAM costing $0.00840 per hour
Azure (x86): B1S, 1 core, 1GiB of RAM costing $0.00690 per hour
AWS (x86): t3.micro, 2 core, 1 GiB of RAM costing $0.00840 per hour

It’s an easy victory for ARM on AWS here – it’s performant, cheap and predictable. The B1S instance on Azure couldn’t handle 15 or 20 clients per second at all but may be worth consideration if its bursting system works for you.

Simulated Async Workload

2 Core Tests

For these tests I used the same configurations as in the computational workload.

There is less to separate the processors and vendors with a less computationally intensive workload. Interestingly the AWS machines have a less stable response time, with more > 10 second responses, but in the case of the ARM chip this comes while holding a lower average response time under load.

It’s worth noting that the ARM VM is doing this while 40% cheaper than the Azure VM and so I would argue again represents the best bang for buck. The AWS x86 VM is 20% cheaper than the Azure equivalent – if you can live with the extra “chop” that may still be worth it, or you can use that saving to purchase a bigger tier unit.

4 Core Tests

For these tests I used the same virtual machines as for the computational workload:

There is little to separate the two VMs until they come under heavy load at which point we see mixed results – I would argue the ARM VM suffers more as it becomes much more spiky with no consistent benefit in average response time.

However in terms of bang for buck this ARM VM is around 40% cheaper than the Azure VM. There’s no contest. I could put two of these behind a load balancer for not much more than the cost of the Azure VM.

Starter Workloads

For these tests I used the same virtual machines as for the computational workload:

It’s a pretty even game here until we hit the 100 clients per second range, at which point the AWS VMs begin to outperform the Azure VM, though at the 200 clients per second range this comes at the expense of more long response times.

Conclusions

Given the results, at least with these workloads, it’s hard not to conclude that AWS currently offers significantly greater bang for buck than Azure for compute. Particularly with their use of ARM processors, AWS seem to have taken a big leap ahead in terms of value for money for which, at the moment, Azure doesn’t look to have any response.

Perhaps tailoring Azure VMs to your specific workloads may get you more mileage.

I’ve tried to measure raw compute here in the simplest way I can – I’d stress that if you use more managed services you may see a different story (though ultimately it’s all running on the same infrastructure so my suspicion is not). And as always, particularly if you’re considering a switch of vendor, I’d recommend running and measuring representative workloads.

Full Results

Test | Vendor | Instance | Clients per second | Min | Max | Average | Successful Responses | Timeouts | > 10 seconds | Price per hour
MandelbrotAzureA2_V2 (x64)29179279346000.0%$0.10600
MandelbrotAzureA2_V2 (x64)51263664939755600.0%$0.10600
MandelbrotAzureA2_V2 (x64)101205102037985342138.2%$0.10600
MandelbrotAzureA2_V2 (x64)15ERROR RATE TOO HIGH$0.10600
MandelbrotAzureA2_V2 (x64)20ERROR RATE TOO HIGH$0.10600
AsyncAzureA2_V2 (x64)2017334325260000.0%$0.10600
AsyncAzureA2_V2 (x64)50196504274149800.0%$0.10600
AsyncAzureA2_V2 (x64)10023942402484179400.0%$0.10600
AsyncAzureA2_V2 (x64)20042389295475172500.0%$0.10600
MandelbrotAzureB1S (x86)2670255111715700.0%$0.00690
MandelbrotAzureB1S (x86)51612552132527200.0%$0.00690
MandelbrotAzureB1S (x86)1012591000171157222.7%$0.00690
MandelbrotAzureB1S (x86)15ERROR RATE TOO HIGH$0.00690
AsyncAzureB1S (x64)2020638326858000.0%$0.00690
MandelbrotAzureB1S (x86)20ERROR RATE TOO HIGH$0.00690
AsyncAzureB1S (x64)50209436278149800.0%$0.00690
AsyncAzureB1S (x64)10029231511892225200.0%$0.00690
AsyncAzureB1S (x64)20048277084474213600.0%$0.00690
MandelbrotAzureD1 v2 (x64)27488287876000.0%$0.08780
MandelbrotAzureD1 v2 (x64)52858424236467000.0%$0.08780
MandelbrotAzureD1 v2 (x64)1011921000175235758.1%$0.08780
MandelbrotAzureD1 v2 (x64)15ERROR RATE TOO HIGH$0.08780
MandelbrotAzureD1 v2 (x64)20ERROR RATE TOO HIGH$0.08780
AsyncAzureD1 v2 (x64)20$0.08780
AsyncAzureD1 v2 (x64)50168407244149900.0%$0.08780
AsyncAzureD1 v2 (x64)10024133981986215600.0%$0.08780
AsyncAzureD1 v2 (x64)20040791714927195100.0%$0.08780
MandelbrotAzureD2as_v425596045666000.0%$0.11100
MandelbrotAzureD2as_v455872606159613300.0%$0.11100
MandelbrotAzureD2as_v41013055920354113400.0%$0.11100
MandelbrotAzureD2as_v41513589607559612600.0%$0.11100
AsyncAzureD2as_v42020030523960000.0%$0.11100
MandelbrotAzureD2as_v4206381237974351043324.1%$0.11100
MandelbrotAzureD2as_v4251459102938850587054.7%$0.11100
AsyncAzureD2as_v450200312238149800.0%$0.11100
AsyncAzureD2as_v4100202347247300000.0%$0.11100
AsyncAzureD2as_v420029541292053427600.0%$0.11100
AsyncAzureD2as_v43003291126931904334230.5%$0.11100
AsyncAzureD2as_v440033817305397842472054.6%$0.11100
MandelbrotAWSt2.micro (x86)2675114010105800.0%$0.01160
MandelbrotAWSt2.micro (x86)5651532433327200.0%$0.01160
MandelbrotAWSt2.micro (x86)10186710193699956812.5%$0.01160
MandelbrotAWSt2.micro (x86)151445102039458324457.9%$0.01160
AsyncAWSt2.micro (x64)2024241229860000.0%$0.01160
MandelbrotAWSt2.micro (x86)201486102068895114078.4%$0.01160
AsyncAWSt2.micro (x64)50241545312149700.0%$0.01160
AsyncAWSt2.micro (x64)10024498292260198900.0%$0.01160
AsyncAWSt2.micro (x64)200347173753858211825210.6%$0.01160
MandelbrotAWSt3.micro (x86)27018857446000.0%$0.01040
MandelbrotAWSt3.micro (x86)58783313206910800.0%$0.01040
MandelbrotAWSt3.micro (x86)108558037449810300.0%$0.01040
MandelbrotAWSt3.micro (x86)159731020269308499.7%$0.01040
AsyncAWSt3.micro (x64)2023340227960000.0%$0.01160
MandelbrotAWSt3.micro (x86)201030102158495743532.1%$0.01040
AsyncAWSt3.micro (x64)502354912407149800.0%$0.01160
AsyncAWSt3.micro (x64)100235545292299400.0%$0.01160
AsyncAWSt3.micro (x64)20023417376259830892607.8%$0.01160
MandelbrotAWSt4g.large (ARM)26327796546000.0%$0.06720
MandelbrotAWSt4g.large (ARM)56982436175313700.0%$0.06720
MandelbrotAWSt4g.large (ARM)1019366284368213700.0%$0.06720
MandelbrotAWSt4g.large (ARM)1521209927562413300.0%$0.06720
MandelbrotAWSt4g.large (ARM)208651020774721003123.7%$0.06720
MandelbrotAWSt4g.large (ARM)25757102078432568058.8%$0.06720
AsyncAWSt4g.large (ARM)2023439828059900.0%$0.06720
AsyncAWSt4g.large (ARM)50229395275149800.0%$0.06720
AsyncAWSt4g.large (ARM)100236426287299200.0%$0.06720
AsyncAWSt4g.large (ARM)20031617359208040262606.1%$0.06720
AsyncAWSt4g.large (ARM)300241173813322306063917.3%$0.06720
AsyncAWSt4g.large (ARM)4003491312733464038108821.2%$0.06720
MandelbrotAWSt4g.micro (ARM)26187516386000.0%$0.00840
MandelbrotAWSt4g.micro (ARM)57652794170913200.0%$0.00840
MandelbrotAWSt4g.micro (ARM)107616958388213000.0%$0.00840
MandelbrotAWSt4g.micro (ARM)1575910203570412710.8%$0.00840
AsyncAWSt4g.micro (ARM)2023637127560000.0%$0.00840
MandelbrotAWSt4g.micro (ARM)208021020774591191410.5%$0.00840
AsyncAWSt4g.micro (ARM)502224178373149800.0%$0.00840
AsyncAWSt4g.micro (ARM)100231414286299400.0%$0.00840
AsyncAWSt4g.micro (ARM)20031017388202839952004.8%$0.00840
AsyncAzureD4s_v42016723920060000.0%$0.22200
AsyncAzureD4s_v450165242197149900.0%$0.22200
AsyncAzureD4s_v4100153243198300000.0%$0.22200
AsyncAzureD4s_v4200165270204600000.0%$0.22200
AsyncAzureD4s_v430020899621395790000.0%$0.22200
AsyncAzureD4s_v440021116283204976951141.5%$0.22200
MandelbrotAzureD4s_v423043343136000.0%$0.22200
MandelbrotAzureD4s_v4541567550015000.0%$0.22200
MandelbrotAzureD4s_v41048834501670238800.0%$0.22200
MandelbrotAzureD4s_v4154864371301225600.0%$0.22200
MandelbrotAzureD4s_v4207276572402723900.0%$0.22200
MandelbrotAzureD4s_v42514538024512723500.0%$0.22200
MandelbrotAzureD4s_v4308869282598823800.0%$0.22200
MandelbrotAzureD4s_v435613100056850196198.8%$0.22200
MandelbrotAzureD4s_v4401817133527905215146.1%$0.22200
MandelbrotAzureD4s_v44524121020786392044116.7%$0.22200
MandelbrotAzureD4s_v4507471020789538015866.4%$0.22200
MandelbrotAzureD4s_v455ERROR RATE TOO HIGH
MandelbrotAzureD2s v424594824696000.0%$0.11100
MandelbrotAzureD2s v458833449176412300.0%$0.11100
MandelbrotAzureD2s v4104806747405312300.0%$0.11100
MandelbrotAzureD2s v41548310202628611810.8%$0.11100
MandelbrotAzureD2s v420506102067636862824.6%$0.11100
MandelbrotAzureD2s v425ERROR RATE TOO HIGH$0.11100
AsyncAzureD2s v42016827020658000.0%$0.11100
AsyncAzureD2s v450164266205149900.0%$0.11100
AsyncAzureD2s v4100167310217300000.0%$0.11100
AsyncAzureD2s v420025938182233399000.0%$0.11100
AsyncAzureD2s v43002491560335923808120.3%$0.11100
AsyncAzureD2s v440033016811434139142034.9%$0.11100
MandelbrotAWSt3.large (x86)27118787536000.0%$0.08320
MandelbrotAWSt3.large (x86)57583150202411300.0%$0.08320
MandelbrotAWSt3.large (x86)1010237656439311500.0%$0.08320
MandelbrotAWSt3.large (x86)15234010202661510443.7%$0.08320
MandelbrotAWSt3.large (x86)202453104068479475453.5%$0.08320
MandelbrotAWSt3.large (x86)25ERROR RATE TOO HIGH$0.08320
AsyncAWSt3.large (x86)2023438028060000.0%$0.08320
AsyncAWSt3.large (x86)50230386276144800.0%$0.08320
AsyncAWSt3.large (x86)100235424287299000.0%$0.08320
AsyncAWSt3.large (x86)20023717373280830262708.2%$0.08320
AsyncAWSt3.large (x86)300230173823358312760316.2%$0.08320
AsyncAWSt3.large (x86)4005511312734513664108822.9%$0.08320
AsyncAWSt4g.xlarge (ARM)2023138427459900.0%$0.13440
AsyncAWSt4g.xlarge (ARM)50221380273149800.0%$0.13440
AsyncAWSt4g.xlarge (ARM)100221382273299500.0%$0.13440
AsyncAWSt4g.xlarge (ARM)200222395276600000.0%$0.13440
AsyncAWSt4g.xlarge (ARM)300266102097068699560.6%$0.13440
AsyncAWSt4g.xlarge (ARM)4002581110619537793108812.3%$0.13440
MandelbrotAWSt4g.xlarge (ARM)26337796466000.0%$0.13440
MandelbrotAWSt4g.xlarge (ARM)5581111573715000.0%$0.13440
MandelbrotAWSt4g.xlarge (ARM)106134203154026400.0%$0.13440
MandelbrotAWSt4g.xlarge (ARM)1510796666270826400.0%$0.13440
MandelbrotAWSt4g.xlarge (ARM)2011786820372327200.0%$0.13440
MandelbrotAWSt4g.xlarge (ARM)257518171442526400.0%$0.13440
MandelbrotAWSt4g.xlarge (ARM)3067710231555526562.2%$0.13440
MandelbrotAWSt4g.xlarge (ARM)35687102456356239259.5%$0.13440
MandelbrotAWSt4g.xlarge (ARM)40895103347353229228.8%$0.13440
MandelbrotAWSt4g.xlarge (ARM)4510411020780441995321.0%$0.13440
MandelbrotAWSt4g.xlarge (ARM)5012361020686241737630.5%$0.13440
MandelbrotAWSt4g.xlarge (ARM)55119310206917910516160.5%$0.13440

.NET 5 – ARM vs x64 in the Cloud Part 2 – Azure

Having conducted my ARM and x64 tests on AWS yesterday I was curious to see how Azure would fare – it doesn’t support ARM but ultimately that’s a mechanism for delivering value (performance and price) and not an end in and of itself. And so this evening I set about replicating the tests on Azure.

In the end I’ve massively limited my scope to two instance sizes:

  1. A2 – this has 2 CPUs and 4GB of RAM (much more RAM than yesterday’s) and costs $0.120 per hour
  2. B1S – a burstable VM that has 1 CPU and 1GB of RAM (so most similar to yesterday’s t2.micro) and costs $0.0124 per hour

Note – I’ve begun to conduct tests on the D series too; the preliminary finding is that the D1 is similar to the A2 in performance characteristics.

I was struggling to find Azure VMs with the same pricing as AWS and so had to start with a burstable VM to get something in the same kind of ballpark. Not ideal but they are the chips you are dealt on Azure! I started with the B1S which was still more expensive than the ARM VM. I created the VM, installed software, and ran the tests – the machine comes with 30 credits for bursting. However after running tests several times it was still performing consistently so these were either exhausted quickly, made little difference, or were used consistently.

I moved to the A2_V2 because, frankly, the performance was dreadful on my early tests with the B1S and I also wanted something that wouldn’t burst. I was also trying to match the spec of the AWS machines – 2 cores and 1GB of RAM. I’ll attempt the same tests with a D series when I can.

Test setup was the same and all tests are run on VMs accessed directly on their public IP using Apache as a reverse proxy to Kestrel and our .NET application.

I’ve left the t2.micro instance out of this analysis.

Mandelbrot

With 2 clients per test we see the following response times:

We can see that the two Azure instances are already off to a bad start on this computationally heavy test.

At 10 clients per second we continue to see this reflected:

However at this point the two Azure instances begin to experience timeout failures (the threshold being set at 10 seconds in the load tester):

The A2_V2 instance is faring particularly badly, especially given it is 10x the cost of the AWS instances.

Unfortunately there is no meaningful comparison I can make under higher load as both Azure instances collapse when I push to 15 clients per second. For completeness’ sake here are the results on AWS at 20 clients per second (average response and total requests):

Simulated Async Workload

With our simulated async workload Azure fares better at low scale. Here are the results at 20 requests per second:

As we push the scale up things get interesting with different patterns across the two vendors. Here are the average response times at 200 clients per second:

At first glance AWS looks to be running away with things however both the t4g.micro and t3.micro suffer from performance degradation at the extremes – the max response time is 17 seconds for both while for the Azure instances it is around 9 seconds.

You can see this reflected in the success and total counts where the AWS instances see a number of timeout failures (> 10 seconds) while the Azure instances stay more consistent:

However the AWS instances have completed many more requests overall. I’ve not done a percentile breakdown (see comments yesterday) but it seems likely that at the edges AWS is fraying and degrading more severely than Azure leading to this pattern.

Conclusions

The different VMs clearly have different strengths and weaknesses, however in the computational test the Azure results are disappointing – the VMs are more expensive yet, at best, offer performance with different characteristics (more consistent when pushed but lower average performance – pick your poison) and at worst offer much lower performance and far less value for money. They seem to struggle with computational load and nosedive rapidly when pushed in that scenario.

Full Results

Test | Vendor | Instance | Clients per second | Min | Max | Average | Successful Responses | Timeouts
MandelbrotAWSt4g.micro (ARM)2618751638600
MandelbrotAWSt4g.micro (ARM)5765279417091320
MandelbrotAWSt4g.micro (ARM)10761695838821300
MandelbrotAWSt4g.micro (ARM)157591020357041271
MandelbrotAWSt4g.micro (ARM)2080210207745911914
MandelbrotAWSt3.micro (x64)2701885744600
MandelbrotAWSt3.micro (x64)5878331320691080
MandelbrotAWSt3.micro (x64)10855803744981030
MandelbrotAWSt3.micro (x64)15973102026930849
MandelbrotAWSt3.micro (x64)2010301021584957435
MandelbrotAWSt2.micro (x64)267511401010580
MandelbrotAWSt2.micro (x64)565153243332720
MandelbrotAWSt2.micro (x64)101867101936999568
MandelbrotAWSt2.micro (x64)1514451020394583244
MandelbrotAWSt2.micro (x64)2014861020688951140
MandelbrotAzureA2_V2 (x64)2917927934600
MandelbrotAzureA2_V2 (x64)5126366493975560
MandelbrotAzureA2_V2 (x64)1012051020379853421
MandelbrotAzureA2_V2 (x64)15ERROR RATE TOO HIGH
MandelbrotAzureA2_V2 (x64)20ERROR RATE TOO HIGH
MandelbrotAzureB1S (x64)267025511171570
MandelbrotAzureB1S (x64)5161255213252720
MandelbrotAzureB1S (x64)101259100017115722
MandelbrotAzureB1S (x64)15ERROR RATE TOO HIGH
MandelbrotAzureB1S (x64)20ERROR RATE TOO HIGH
AsyncAWSt4g.micro (ARM)202363712756000
AsyncAWSt4g.micro (ARM)50222417837314980
AsyncAWSt4g.micro (ARM)10023141428629940
AsyncAWSt4g.micro (ARM)2003101738820283995200
AsyncAWSt3.micro (x64)202334022796000
AsyncAWSt3.micro (x64)50235491240714980
AsyncAWSt3.micro (x64)10023554529229940
AsyncAWSt3.micro (x64)2002341737625983089260
AsyncAWSt2.micro (x64)202424122986000
AsyncAWSt2.micro (x64)5024154531214970
AsyncAWSt2.micro (x64)1002449829226019890
AsyncAWSt2.micro (x64)2003471737538582118252
AsyncAzureA2_V2 (x64)201733432526000
AsyncAzureA2_V2 (x64)5019650427414980
AsyncAzureA2_V2 (x64)1002394240248417940
AsyncAzureA2_V2 (x64)2004238929547517250
AsyncAzureB1S (x64)202063832685800
AsyncAzureB1S (x64)5020943627814980
AsyncAzureB1S (x64)1002923151189222520
AsyncAzureB1S (x64)2004827708447421360

.NET 5 – ARM vs x64 in the Cloud

With Microsoft and Apple both now beginning to use ARM chips in laptops, what was traditionally the domain of x86/x64 architecture, I found myself curious as to the ramifications of this move – particularly by Apple who are transitioning their entire lineup to ARM over the next 2 years.

While musing on the pain points of this I found myself wondering if Azure supported ARM processors (they don’t) and got pointed to AWS, who do. @thebeebs (an AWS developer advocate) mentioned that some customers had seen significant cost reductions by moving some workloads over to ARM and so I, inevitably, found myself curious as to how typical .NET workloads might run in comparison to x64 and set about some tests.

The Tests

I quickly rustled up a simple API containing two invocable workloads:

  1. A computation heavy workload – I’m rendering a Mandelbrot and returning it as an image. This involves floating point maths (the core iteration is sketched after this list).
  2. A simulated await workload – often with APIs we hand off to some other system (e.g. a database) and then do a small amount of computation. I’ve simulated this with Task.Delay and a (very small) random factor to mimic the slight variations you will get with any network / remote service request, and around this I compute two tiny Mandelbrots and return a couple of numbers (also sketched below). It would be nice to come back at some point and use a more structured approach for the simulated remote latency.
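
For a flavour of what the computation workload is doing, here’s a minimal sketch of the escape-time iteration at the heart of a Mandelbrot renderer. This is an illustrative reconstruction rather than the actual code, and the function and parameter names are my own:

let mandelbrotIterations maxIterations (cx: float) (cy: float) =
  // iterate z = z^2 + c until the point escapes the radius-2 circle or we
  // hit the iteration cap; the iteration count drives the pixel colour
  let rec iterate zx zy i =
    if i >= maxIterations || zx * zx + zy * zy > 4.0 then i
    else iterate (zx * zx - zy * zy + cx) (2.0 * zx * zy + cy) (i + 1)
  iterate 0.0 0.0 0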

I’ve written this in F# (it’s not particularly “functional”) using Giraffe on top of ASP.NET Core just because that’s my go-to language these days. It’s running under the .NET 5 runtime.

The code for this is here. It’s not particularly elegant and I simply converted some old JavaScript code of mine into F# for the Mandelbrot. It does a job.
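
And for the simulated await workload, here’s a sketch of what such a Giraffe handler might look like. Again this is illustrative rather than the real code: the handler name, delay and co-ordinates are assumptions, and it reuses the mandelbrotIterations sketch from above:

open System
open System.Threading.Tasks
open FSharp.Control.Tasks.V2.ContextInsensitive
open Giraffe

let private random = Random()

let simulatedAsyncHandler : HttpHandler =
  fun next ctx -> task {
    // simulate handing off to a remote resource, with a little jitter to
    // mimic the variation of a real network call
    do! Task.Delay(100 + random.Next(0, 50))
    // a small amount of computation around the "remote" call
    let first = mandelbrotIterations 50 (-0.5) 0.5
    let second = mandelbrotIterations 50 0.3 (-0.2)
    return! json {| First = first; Second = second |} next ctx
  }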

The Setup

Within AWS I created three EC2 Linux instances:

  1. t4g.micro – ARM based, 2 vCPU, 1Gb memory, $0.0084 per hour
  2. t3.micro – x64 based, 2 vCPU, 1Gb memory, $0.0104 per hour
  3. t2.micro – x64 based, 1 vCPU, 1Gb memory, $0.0116 per hour

It’s worth noting that my ARM instance is costing me 20% less than the t3.micro.

I’ve deliberately chosen very small instances in order to make it easier to stress them without having to sell a kidney to fund the load testing. We should be able to stress these instances quite quickly.

I then SSHed into each box, installed .NET 5 from the appropriate binaries, and set up Apache as a reverse proxy. On the ARM machine I also had to install GCC and compile a version of libicui18n for .NET to work.
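
For reference, the reverse proxy part amounts to something like the following Apache virtual host, a minimal sketch assuming Kestrel is listening on its default port of 5000:

<VirtualHost *:80>
    # forward all traffic to the Kestrel process listening locally
    ProxyPreserveHost On
    ProxyPass / http://127.0.0.1:5000/
    ProxyPassReverse / http://127.0.0.1:5000/
</VirtualHost>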

Next I used git clone to bring down the source and ran dotnet restore followed by dotnet run. At this point I had the same code working on each of my EC2 instances. Easy to verify as the root of the site shows a Mandelbrot:

This was all pretty easy to set up. You can also do it using a Cloud Formation sample that I was pointed at (again by @thebeebs).

I still think it’s worth remarking how much .NET has changed in the last few years – I’ve not touched Windows here and have the same source running on two different CPU architectures with no real effort on my part. Yes it’s “get through the door” stakes these days but it was hard to imagine this a few years back.

Benchmarks

My tests were fairly simple – I’ve used loader.io to maintain a steady state of a given number of clients per second and gathered up the response times and total execution counts along with the number of timeouts. I had the timeout threshold set at 10 seconds.

Time allowing I will come back to this and run some percentile analysis – loader doesn’t support this and so I would need to do some additional work.

I’ve run the test several times and averaged the results – though they were all in the same ballpark.

Mandelbrot

Firstly, as a baseline, let’s look at things running with just two clients per second:

With little going on we can see that the ARM instance already has a slight advantage – it’s consistently (min, max and average) around 100ms faster than the closest x64 based instance.

Unsurprisingly if we push things a little harder to 5 clients per second this becomes magnified:

We’re getting no errors or timeouts at this point and you can see the total throughput over the 30 second run below:

The ARM instance has completed around 20% more requests than the nearest x64 instance, with an 18% improvement in average response time and at 80% of the cost.

And if we push this out to 20 clients per second (my largest scale test) the ARM instance looks better again:

It’s worth noting that at this point all three instances are generating timeouts in our load test suite but again the ARM instance wins out here – we get fewer timeouts and get through more overall requests:

You can see from this that our ARM instance is performing much better under this level of load. We can say that:

  • It’s successfully completed 60% more requests than the nearest x64 instance
  • It has a roughly 12% improvement on average response time
  • And it is doing this at 80% of the cost of the x64 instance

With our Mandelbrot test it’s clear that the ARM instance has a consistent advantage both in performance and cost.

Simulated Async Workload

Starting again with a low scale test (in this case 50 clients per second – this test spends significant time awaiting) we can see that our t2 x64 instance had an advantage of around 40ms:

However if we move up to 100 clients per second we can see the t2 instance essentially collapse while our t4g ARM instance and t3 x64 instance are essentially level pegging (286ms and 292ms respectively):

We get no timeouts at this point and our ARM and x64 instance level peg again on total requests:

However if we push on to a higher scale test (200 clients per second) we can see the ARM instance begin to pull ahead:

Conclusions

Going into this I really didn’t know what to expect but these fairly simple tests suggest there is an economic advantage to running under ARM in the cloud. At worst you will see comparable performance at a lower price point but for some workloads you may see a significant performance gain – again at a lower price point.

20% performance gain at 80% of the price is most certainly not to be sniffed at and for large workloads could quickly offset the cost of moving infrastructure to ARM.

Presumably the price savings are due to the power efficiency of the ARM chips. However what is hard to tell is how much of the pricing is “early adopter” to encourage people to move to CPUs that have long term advantage to cloud vendors (even minor power efficiency gains over cloud scale data centers must total significant numbers on the bottom line) and how much of that will be sustained and passed on to users in the long term.

Doubtless we’ll land somewhere in the middle.

The question I have now is: where the heck is Azure in all this? Between Lambda and ARM on AWS it’s hard not to feel as if the portability advantages, both processor and OS, of .NET Core / 5 are being realised more effectively by Amazon than they are by Microsoft themselves. Strange times.

Full Results

Response Times (ms)
Test | Instance | Clients per second | Min | Max | Average | Successful Responses | Timeouts
Mandelbrott4g.micro (ARM)2618751638600
Mandelbrott4g.micro (ARM)5765279417091320
Mandelbrott4g.micro (ARM)10761695838821300
Mandelbrott4g.micro (ARM)157591020357041271
Mandelbrott4g.micro (ARM)2080210207745911914
Mandelbrott3.micro (x64)2701885744600
Mandelbrott3.micro (x64)5878331320691080
Mandelbrott3.micro (x64)10855803744981030
Mandelbrott3.micro (x64)15973102026930849
Mandelbrott3.micro (x64)2010301021584957435
Mandelbrott2.micro (x64)267511401010580
Mandelbrott2.micro (x64)565153243332720
Mandelbrott2.micro (x64)101867101936999568
Mandelbrott2.micro (x64)1514451020394583244
Mandelbrott2.micro (x64)2014861020688951140
Asynct4g.micro (ARM)202363712756000
Asynct4g.micro (ARM)50222417837314980
Asynct4g.micro (ARM)10023141428629940
Asynct4g.micro (ARM)2003101738820283995200
Asynct3.micro (x64)202334022796000
Asynct3.micro (x64)50235491240714980
Asynct3.micro (x64)10023554529229940
Asynct3.micro (x64)2002341737625983089260
Asynct2.micro (x64)202424122986000
Asynct2.micro (x64)5024154531214970
Asynct2.micro (x64)1002449829226019890
Asynct2.micro (x64)2003471737538582118252

Code has no value

Nor does SCRUM.
Nor does XP.
Nor does waterfall (I could keep going with the methodologies).
Nor does unit testing.
Nor does acceptance testing.
Nor does the runtime you are using.
Nor does the version of a framework you are using.
Nor does your automated deployment process.
Nor does the language you are using.
Nor do the patterns you are using.
Nor do those post-its you’ve put on the wall.

We get so worked up about these things on Twitter (and boy do we get worked up) but they are all intrinsically valueless. Utterly valueless.

What gives them value (positive or negative) is context – your goal and your constraints. To illustrate this I’m going to use three examples from my career.

Hobbyist developer

As far as my public persona goes this is largely what I am today. My goal and constraints are pretty clear: I want to have fun, there’s only me and limited time, and I only want to spend hobby levels of money.

In many ways I’m back where I started with my BBC Micro. I have an idea, it appeals to me, and I just crack on and have a bash at it. Sometimes the focus is on satisfying my curiosity, sometimes it’s making something I will use, sometimes it’s making something I hope others will use. I don’t have to talk to anybody about it, I don’t need to make money, I’m the customer / customer proxy.

This being the case I’ll use a toolchain I think is fun and the most complex process I’m likely to have is a Trello board, but mostly not even that. I might add a unit test in if it helps me. I’ll probably do an automated deployment process because the things I do are simple enough that it’s really just “build and upload”.

I really just make day to day decisions based on what will amuse me.

Small startup

The phrase “next paycheck delivery” was used in what I think was a pejorative manner in a reply to one of my tweets earlier today but actually next paycheck delivery (much like code) has no value in and of itself and can be either good or bad depending on your context. I’ve spent time in a number of startups where actually that was entirely the right thing to be doing – without the next paycheck the company was going to sink.

In the specific example I’m thinking of the business needed to pivot and had very little runway left. We had to attract customers and fast and they wanted to see something working and be able to use it.

So our goal was clear – get a working, usable, demo for a customer. Our constraints: we had an insane deadline, no money, and there were two of us.

We sketched out a plan. Minimum possible set of features based on what would be useful and what we thought it would cost to build. We divided up the work so we could do it in parallel. We wrote no unit tests. We used the most bare bones of process (you do this stuff and I’ll do this stuff). We threw things together in a fairly ramshackle way based on how each of us could move fastest. We were not gnashing our teeth about what version of runtimes we were using. We reused some bits in a different language that had issues but would serve our purpose.

It was so down to the wire we were doing the proverbial “coding while the customer was watching”. Put simply: “I ain’t got time to bleed”.

Did this cause a problem for later down the road? Yes. Absolutely.
Did it enable the business to survive? Yes. Absolutely.
Would it have been better not to end up here? Yes. Absolutely.
Is it code I was proud of? Yes. Absolutely. It met its goal.

I’ve been involved in a few more startups and super constrained environments since and I’ve got a lot better at moving fast and ending up with something that is easier to move along and develop.

If you find yourself here the important thing is to recognise that yes you’re likely building up debt for the future, try to minimise that, understand that at some point yes you may enter rewrite territory.

Large organisation, established products

I’ve also spent time in larger organisations doing various roles – I think the biggest had 50 development teams and it was an environment where, at the wrong time, mistakes in some parts of the system could cost millions in minutes.

The company had market fit, it had customers, it was still growing rapidly, it had complex integrated systems and services, and a physical presence. It didn’t matter if teams did things differently but there were economies of scale that came from solving similar classes of problem in similar ways (why generate unique DevOps pipelines for two .NET API services for example). But what was really hard was co-ordinating all this. Sure we could use versioned APIs, contracts etc. but delivering a significant new piece of value required the delivery from several teams and co-ordination with the physical aspects of the business.

Goals and constraints would change but to focus in on just a handful of key things that were common:

  • Predictable delivery
  • Visibility
  • Common understanding of contracts
  • High quality

As a result we had people focused on service integration and standards, teams followed agile processes (there was variance between the teams), there were two or three runtime environments but some basic conformance standards, a lot of focus on both unit and acceptance testing, formalised roles, code reviews, architecture reviews, discipline specialists, etc. etc.

These things worked for this organisation – in the context of the organisations goals and constraints they added value. “We’re not at home to Mr Cockup”.

But the practices they found essential would have crippled the startup trying to pivot.

Finishing thoughts

I provide these examples hopefully to illustrate that it’s all about context. For me, understanding this is what sets a good developer apart from a coder and it’s vital to engineering management.

To finish in the pithy tone in which I started I just have a few recommendations really:

  • Always make sure you understand and can clearly articulate your goal
  • Always make sure you understand and can clearly articulate your constraints
  • Always make sure you understand what success looks like and how you will measure that
  • Always make sure you understand what failure looks like
  • Always make sure your approach is based on maximizing your outcomes within those constraints.
  • Have an eye on the future but don’t let it impede your today – the future may never come if you do.
  • Relentlessly focus on value – but remember that value is related to your context and not what some dude told you on Twitter.
  • When someone tells you technology, tools, practices have value in and of themselves – they’re probably trying to sell you something.
  • Do not, I repeat, do not become an evangelist unless your job is to be an evangelist. The minute you do you’ve lost objectivity.

Dell XPS 15 Mini-Review – A Developer’s Perspective

If you follow me on Twitter you’ll know I’m mostly a Mac user and am fortunate enough to have a super nice 16″ MacBook Pro that I definitely consider my benchmark for a good developer laptop. From a usability / tactility perspective the hardware quality is still the best out there in my opinion and you get a lot of power in the box.

However I do like to keep my hand in with Windows 10 and “full” Visual Studio and so like to keep a Windows machine around too. On the one hand having another laptop for this is definitely extravagant… on the other I spend 3/4 of my life with this stuff (and the other 1/4 with my bikes!) and it’s nice to be able to freshen things up through switching device. My previous Windows laptop was a Surface Book 2 but although at first I was quite impressed, over time and with long term use its flaws became obvious.

Firstly, Microsoft’s own updates managed to brick the dedicated GPU on two occasions. It would just go missing. And their support teams were hopeless in terms of resolving the actual issue – sure, they’d send me a new unit almost no questions asked (awesome!) but it was clearly a software issue relating to a Windows Update and would likely happen again (there were loads of people affected by it). There have been driver issues with screen detachment. Honestly just not good enough when you own the hardware and operating system and are charging premium prices.

And finally the hardware itself – it’s got a great feel to it but there are some glaring issues, particularly in 2020 and with the Surface Book 3 too as it’s not really been updated. Specifically:

  • The device is unstable on a lap – there’s a lot of weight in the screen so it can tip over easily
  • The trackpad is tiny
  • There are huge borders round the screen

To me, all these things make it an absolute no-goer at this price point. I applaud trying something new but the compromises are in all the wrong places for most use cases.

I considered the Surface Laptop range – I really like the tactility aspects, great solid feeling kit, decent trackpad and keyboard. But my word at the prices they are underpowered. In fact they are underpowered full stop.

Plenty of people are fans of them I realise… but to me its like the Apple contingent ignoring the keyboard issues on the laptops or the thermal and expansion issues on the trash can Mac Pro. Obvious issues for premium devices.

So all that leads me to the Dell XPS 15 which I picked up about 3 weeks ago, have done a lot of coding on, and am writing this on. It’s an i7 model with the 4K touch screen, 32GB of RAM and the 1650 GPU. That’s a touch behind my MacBook Pro which is the i9 model with 64GB of RAM.

Hopefully all the above gives you context for the following comments. Some might be operating system related but unless you want to go Linux the hardware and software go together.

Good Stuff

  • It was really well packaged with a great unboxing experience – Dell have definitely learned from Apple here, first impressions count.
  • Purchase and delivery was easy. The Dell website is a bit “dense” when it comes to how it presents information but I clicked buy and it arrived the next day.
  • The screen is beautiful – it really is. Bright, high pixel density, very thin borders, vibrant, sharp. I love it. I never use it as a touch screen. I hate grubby fingerprints on my screens.
  • I’ve not benchmarked it but it feels plenty fast.
  • Plugging it in to my daisy chained Thunderbolt monitor setup via my existing Thunderbolt hub on a single cable “just worked” and it seems to have no trouble running three 4K displays (its own and the two attached)
  • The interior of the laptop feels really nice. It’s almost soft and feels more approachable than the MacBook Pro, which now feels cold in comparison.
  • Windows 10. I really like Windows 10 – it’s a nice operating system and things like WSL are neat touches. That said…. see bad stuff too.
  • It handles external displays better than a Mac. Why Apple cannot get this right is beyond me – but when I plug my MacBook in it’s like a lottery as to what it will decide to do with the screens and it’s not unusual for me to have to unplug it and try again several times and even restart. The Dell “just works”.
  • I can use XBox Game Pass on it – this has a decent selection of strategy games (my favourite) that I can now play without having to buy them on Steam. Awesome!
  • It has WiFi 6 support – I’ve recently upgraded to a WiFi 6 network to resolve some issues and it was nice to find the Dell has hardware of this standard.

Ok Stuff

  • The trackpad is ok. It’s not dreadful and it’s certainly big enough but clicking it is weird and less accurate than the Mac. It can sometimes feel what I can only describe as “loose”. Gestures are nowhere near as accurate as on a Mac. I’ve got used to it but I don’t like it.
  • The keyboard is ok. Again not dreadful but not quite to my taste (for context I really like the Apple Magic Keyboard and the reworked and fixed keyboard on the new MacBooks). It just feels a bit… squidgy?
  • The outside of the device isn’t very attractive to my eyes. Sat next to my MacBook it looks like a cheap device – it is not a cheap device!
  • Battery seems ok. I’ve never found myself desperate for power.

Bad Stuff

  • The fans are noisy when working hard and it seems to spin them up more readily than the MacBook Pro. Relatedly it also seems to get hot more easily than the MacBook.
  • Windows 10 spyware. Not being able to disable telemetry is, and excuse me, a fucking disgrace.
  • The out-of-the-box Windows 10 experience is terrible. After initial boot up there were loads of updates to apply. The Store was only partially working until one of the updates downloaded. Worse, it does all these downloads in bits and pieces with many reboots and no end in sight. It’s confusing and incredibly user hostile. I don’t mind an update but gather them together and do a single update – and if the operating system is only going to partially work until this is done, hold off until it can be done. Rubbish frankly. Utterly rubbish.

Overall

I like it! Interestingly I’m finding myself preferring the Dell, and Windows 10, when plugged in at my desk and preferring my MacBook when on the move, and thanks to WSL I don’t have to grapple with different shells. When at my desk I’m using an Apple Magic Keyboard and Logitech MX Master mouse and so that is certainly part of it.

I don’t (and won’t again) do app development so if Apple messes up with this ARM transition or we end up with borked lower power laptops as a result I feel I’ve got a decent way forward with hardware. At the moment I feel as if I could happily use this as my daily driver.

All this said – longer term problems may emerge. I guess I’ll find out.

Creating FableTrek – Part 1

If you want to follow along with this series of posts the source code can be found here, the published game here and the source for this specific post here.

If you read my blog or follow me on Twitter you’ll know I’ve really fallen in love with F# this last year and am using the SAFE stack to build out my cycling performance application. A big part of that toolkit is Fable – the F# to JavaScript transpiler that also comes with some great bindings and tools for building single page applications.

A few people have asked me how to get started with Fable, why I think F# is so well suited for this kind of work, and how to make the transition from C# and so I thought I’d build out a simple game using Fable and talk about it in a series of posts / videos.

Now before anyone gets too excited this isn’t going to be the next Halo! Instead I’m going to rework my iOS version of the classic Star Trek game (PaddTrek) into Fable – now called FableTrek!

PaddTrek – an early iPad game

It’s not going to be flashy but hopefully strikes the right balance between complex enough to show real problems and solutions and simple enough to understand. I don’t have a grand up-front plan for building out this game and I’m creating each part as I go – let’s just hope it doesn’t turn out like the new Star Wars trilogy!

I’ll also add that I still class myself as a relative beginner with F# but I do manage to get stuff done.

With all that out of the way – let’s take a look at the first code drop.

Exploring the code

To get started I used this bare bones starting point for a simple Fable app with webpack and debug support for Visual Studio Code (hat tip to Isaac Abraham who created this for me to help me out a little while back).

I’ve then added a few things to it:

One of the first things to wrap your head round with Fable is that it lives in both .NET and JavaScript worlds. And so in the root folder you’ll find a package.json and a webpack.config.js and in the src folder an F# project file – App.fsproj.

When adding dependencies you’ll often find yourself adding both a NuGet package and an npm package. I’m using the dotnet command line tools to manage the former and yarn for the latter.
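
For example, adding a dependency typically looks something like this (the package names here are illustrative rather than an exact record of what the project pulls in):

dotnet add src/App.fsproj package Fable.Elmish.React
yarn add --dev fable-loader fable-compiler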

The Fable compiler itself is generally invoked through webpack using the fable-loader plugin. I’ve got a fairly simple webpack file – I’ve added support for Stylus and a little work to output a production build to a deploy folder.

App.fs and Index.html – entry point

Starting up a React app in F# is not much different to doing the same with JavaScript or TypeScript. First you need a HTML file:

<!doctype html>
<html>
<head>
  <title>Fable</title>
  <meta http-equiv='Content-Type' content='text/html; charset=utf-8'>
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <link rel="shortcut icon" href="fable.ico" />
</head>
<body>
    <div id="trek-app"></div>
</body>
</html>

We’re going to create the React app inside the trek-app div block. You might notice there is no <script> tag – so how does any code get invoked? I’ve set up webpack to inject the <script> tag during the build process using the HtmlWebpackPlugin plugin – this way we can handle unique script filenames that include a hash component.
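
The relevant part of the webpack configuration looks roughly like this. It’s a simplified fragment, and the filename pattern and template path are assumptions rather than the project’s exact values:

// html-webpack-plugin generates our HTML page from a template and injects
// a <script> tag pointing at the hashed bundle filename
const HtmlWebpackPlugin = require('html-webpack-plugin')

module.exports = {
  output: { filename: 'app.[contenthash].js' },
  plugins: [new HtmlWebpackPlugin({ template: './src/Index.html' })]
}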

The code entry point for our app is App.fs in the source folder. At the top of the file first we import some dependencies:

open Elmish
open Fable.React
open App.Types
open App.State
open Fable.Core.JsInterop

importSideEffects "./game.styl"

The interesting line here is the final one where we import game.styl. This is our Stylus CSS file. Importing it like this will ensure webpack will compile it to CSS and include it in our final bundle – to do this I’ve used the stylus-loader package for webpack.
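
The webpack rule for that is a standard loader chain, something along these lines (a sketch, not the exact config):

// in webpack.config.js
module.exports = {
  module: {
    rules: [
      // compile .styl imports to CSS and inject the result into the page
      { test: /\.styl$/, use: ['style-loader', 'css-loader', 'stylus-loader'] }
    ]
  }
}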

If we skip to the end of the file we can see a block of code that starts up a Fable Elmish based React app:

Program.mkProgram init update root
|> Program.toNavigable (parseHash Router.pageParser) urlUpdate
#if DEBUG
|> Program.withConsoleTrace
|> Program.withDebugger
#endif
|> Program.withReactBatched "trek-app"
|> Program.run

We’re passing in a whole load of stuff to this startup block – init, update, root, pageParser and urlUpdate are the basic building blocks of our Elmish app. And so I guess it’s time I answered the question: what is Elmish?

Elmish…

… is a set of abstractions that allow us to implement the Model View Update pattern in F#, based on Elm. Essentially it’s a pattern where a user interface (a view) is created from immutable state and from which messages are passed back to our program to update the state (via a transformation – it’s immutable) and regenerate the view. To make this work there are a number of concepts which I’ve started to flesh out in our game.

Thomas Bandt has written a blog post around a simple counter example that introduces the same concepts that is worth reading before continuing. I’m going to go through the same concepts but using the game we’re creating which is a little more complex as we have more data to model and in our game I’m subdividing the system into smaller sub-applications.

Model

The model contains our application state and the best way to express this in F# is using a record. You can see our applications top level model in our App.Types.fs file:

type Model =
  {
    CurrentPage: Router.Page
    StartScreen: Interface.StartScreen.Types.Model option
  }
  static member Empty =
    { CurrentPage = StartScreenPage
      StartScreen = Some Interface.StartScreen.Types.Model.Empty
    }

Our root model keeps track of the current page we’re looking at (based on a URL – we’ll look at the router later) and the state for our sub-applications of which we currently only have one – the start screen. These are optional pieces of state that we create as required as the URL changes. I’ve added a static member that returns an empty model that we can use when we need to initiate a default model.
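
This is also where the init function we passed to Program.mkProgram earlier comes in. A plausible sketch, not necessarily the exact code, is to seed the program with the empty model and hand the initially parsed route straight to the urlUpdate function we’ll meet in the routing section:

let init (route: Option<Router.Page>) =
  // start from the empty model and let the routing logic set up
  // the right sub-application for the initial URL
  urlUpdate route Model.Empty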

Being records the state is immutable and we’ll look at how we manage updates to this a little later.

The StartScreen model is currently just a placeholder which we’ll flesh out in later parts.

Messages

Messages are what we use to glue our system together. They’re typically sent by the user interacting with our views through a dispatcher and are handled by update functions that we’ll look at in a moment. Each of our sub-applications will have its own set of messages only visible within its own scope – this makes for a great way to isolate the moving parts of a system and is particularly useful in large systems like my cycling analytics website. It means I can change one sub-application without worrying about breaking others.

In our game you can find the messages for the start screen sub-application in Interface/StartScreen/Types.fs:

type StartScreenMsg =
  | NewGame

When we flesh out our start screen this message will be dispatched when the user clicks a new game button.

Discriminated unions are almost tailor made for messages. If you come from a C# background you might think they look like an enumerated type but they are much more! For one thing they can be associated with payloads and we already have an example of this – we use a top level set of messages that themselves encapsulate our sub-application messages to enable our program to route messages to the correct sub-application. You can see an example of this in App.Types.fs:

type Msg =
  | GameScreenDispatcherMsg
  | StartScreenDispatcherMsg of Interface.StartScreen.Types.StartScreenMsg

Updates

Update functions are how we respond to messages and update our state; however they never modify state directly. Instead, given the current state and a message, they return new state and a command (often another message).
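
As an illustration, a bare-bones update for the start screen sub-application might look like this (hypothetical; the real behaviour will be built out in later parts):

let update msg model =
  match msg with
  | NewGame ->
    // eventually this will create game state and navigate to the game
    // screen; for now return the model unchanged with no follow-up command
    model, Cmd.none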

The game doesn’t yet contain a real example this simple, but we do have an example of an update function coordinating between our sub-applications which you can find in App.State.fs:

let update msg model =
  match (msg, model) with
  | (StartScreenDispatcherMsg subMsg, { StartScreen = Some extractedModel }) ->
    let (subModel, subCmd) = Interface.StartScreen.State.update subMsg extractedModel
    { model with StartScreen = Some subModel}, Cmd.map StartScreenDispatcherMsg subCmd
  | _ ->
    console.error("Missing match in App.State")
    model, Cmd.none

If you’re not familiar with F# you might currently be asking what is this crazy wizardry. I’ll try and break it down! F# has great support for pattern matching – we begin a match set using the match with construct. In our case we’re saying match the msg and model together (what we’re actually doing is creating a tuple out of the message and the model).

Each pattern for consideration, they are evaluated in order, is started with the pipe | operator and in our case we have two possible matches.

Our first pattern match looks for a message of type StartScreenDispatcherMsg and a model that has the StartScreen attribute set to Some. While matching we can also extract values – in our case we’re extracting the sub message for the start screen and the model we have assigned to StartScreen. If we find these things then we pass them on to our sub-application’s updater – it also returns a model and a command. Finally we return a model with the new submodel returned by our child updater and whatever command it returned.

Our second match (the one that reads | _) is our fall through match. This will match anything, simply returning the model with no command and popping an error into our browser’s console. I’ve got no implementation at all yet for the GameScreenDispatcherMsg.

Routing

I briefly mentioned routing earlier when we looked at models – routing being the process of displaying the correct content within a SPA for a given URL. Elmish comes with some support for routing which I’ve used to build the router here, which you can find in Router.fs. Firstly we use a discriminated union to define our routes:

type Page =
  | StartScreenPage
  | GameScreenPage

If we had sub-routes we could use the payload of a discriminated union to nest them but we don’t (at least not yet) so for now that’s it. Next we need a way to turn our union members into URL paths:

let private toHash page =
  match page with
  | StartScreenPage -> "#/"
  | GameScreenPage -> "#/game"

Here I’m using hash based paths but you can also use full paths. Finally we need a parser – a function that can extract values from routes and return a correctly structured Page discriminated union. We don’t have any parameters for the routes yet so this is quite simple:

let pageParser =
  oneOf [
    map GameScreenPage (s "game")
    map StartScreenPage top
  ]

Finally when a URL changes we need a way of updating the current page and state in our model. In App.State.fs we have a urlUpdate function:

let urlUpdate (result: Option<Router.Page>) model =
  match result with
  | None ->
    console.error("Error parsing url: " + window.location.href)
    model, Router.modifyUrl model.CurrentPage
  | Some page ->
    let model = { model with CurrentPage = page}
    match page with
    | StartScreenPage ->
      let (subModel, subCmd) = Interface.StartScreen.State.init () 
      { model with StartScreen = Some subModel }, Cmd.map StartScreenDispatcherMsg subCmd
    | _ -> model, Cmd.none

This is very similar in form to a regular update function except rather than taking a message it takes our page discriminated union. Like the regular update it returns an updated model and command.

Views

Ok. With all that plumbing in place we need a way of transforming our state into a user interface and we do that with views. Our root view can be found in App.fs:

let root model dispatch =
  div [Class "container"] [
    match model with
    | { CurrentPage = Router.Page.StartScreenPage ; StartScreen = Some extractedModel } ->
      Interface.StartScreen.View.root extractedModel (StartScreenDispatcherMsg >> dispatch)
    | _ -> div [] [str "not found"]
  ]

Like much of our code so far this is somewhat infrastructural – it looks at our model, determines the current sub-application based on the route, and delegates its handling down to that (which you can find in Interface/StartScreen/View.fs):

let root model dispatch =
  div [Class "startScreen"] [
    div [Class "title"] [str "< work in progress >"]
  ]

If you come from a typical React background you’re probably used to using JSX – a DSL that is extremely similar to HTML. In F# we use a DSL that is, well, F#. It’s still very similar in use but being F# is typesafe, and because it’s F# you don’t have those slightly awkward breaks between JSX and JavaScript. We’ll see later on how easy it is to use with FP techniques to create complex interfaces cleanly.
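
As a tiny, hypothetical taste of that (this component isn’t in the game): because views are just F# functions and the DSL’s children are just lists, ordinary functions like List.map slot straight in with no template syntax in between:

let crewList (names: string list) =
  // each name becomes an li element - plain F#, no JSX-style interpolation needed
  ul [Class "crew"] (names |> List.map (fun name -> li [] [str name]))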

Styling with Stylus

I’m building the CSS by hand for this project and to help me do that I’m using a preprocessor called Stylus. I like it because it removes a lot of the noise and clutter from a CSS file. You can find the, fairly limited, amount of CSS in our game.styl file.

For the time being I’m referring to class names by strings but I may do something about this as we progress 🙂

Putting it all together

If we wind back to where we started and look at the block of code at the bottom of App.fs:

Program.mkProgram init update root
|> Program.toNavigable (parseHash Router.pageParser) urlUpdate
#if DEBUG
|> Program.withConsoleTrace
|> Program.withDebugger
#endif
|> Program.withReactBatched "trek-app"
|> Program.run

You can see how all these bits we’ve looked at are brought together in the first two lines of this to create our Elmish program that can now parse URLs, update models, and translate models into views.

Deploying to Azure Static Web Hosting with GitHub Actions

Right back at the start of this piece I shared a link to the game running live on the web – you can find it at https://www.fabletrek.com. It’s not all that exciting at the moment:

But it is running – and it’s deploying automatically and directly off the GitHub repository using GitHub Actions and Azure Static Web Apps. This solution doesn’t come with automatic support for Fable but it’s pretty straightforward to add.

Start by creating the Azure Static Web App itself in Azure following these instructions. For the app artifact directory enter deploy, but otherwise follow those instructions as they are.

This will create a GitHub Action for you in the repository but if you run it you’ll get an error stating that the job they provide doesn’t know how to build our code. There is support for using a custom build command (and we build our solution with the command yarn webpack) but I found using that gives an error locating and running dotnet from within the Fable compiler webpack loader.

Instead what we need to do is move the build outside of their job so all we need to then do is pick up the contents of our deploy folder. To do that we add these steps:

      - name: Setup .NET Core
        uses: actions/setup-dotnet@v1
        with:
          dotnet-version: 3.1.402
      - name: Install .NET dependencies
        working-directory: "src"
        run: dotnet restore
      - name: Install JS dependencies
        run: yarn install
      - name: Build
        run: yarn webpack

This makes sure the correct version of .NET Core is installed, restores our NuGet and NPM packages, and then finally builds the solution.

We then need to modify the provided Azure Static Web App job to prevent it from trying to build the solution again. There is no way to disable the build process but what you can do is supply it with a harmless build command – rather than build anything I simply run yarn --version:

        with:
          azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN_UNIQUE_FOR_YOU }}
          repo_token: ${{ secrets.GITHUB_TOKEN }} # Used for Github integrations (i.e. PR comments)
          action: "upload"
          ###### Repository/Build Configurations - These values can be configured to match your app requirements. ######
          # For more information regarding Static Web App workflow configurations, please visit: https://aka.ms/swaworkflowconfig
          app_location: "/" # App source code path
          api_location: "api" # Api source code path - optional
          app_artifact_location: "deploy" # Built app content directory - optional
          # we need a separate build step as the fable-compiler fails when run under the context of the static site builder
          # and there is no disable-build property that I am aware of so we run a harmless command
          app_build_command: "yarn --version"
          ###### End of Repository/Build Configurations ######

With those changes made every time you commit code to your branch it will be built and deployed. You can find the complete action here. I then added a custom domain which gives you a free SSL cert too.

Next Steps

That’s the setup part done really – if you’re new to Fable, F# and Elmish then this I think is the more complex bit. In the next stage we’ll build out a start screen and get to the point where we can start a new game.

Azure SQL Database deployment with Farmer, DbUp and GitHub Actions

Farmer is a DSL for generating and executing ARM templates and one of the great things about it is that it’s based on .NET Core. That means that you can use it in combination with other components from the .NET ecosystem to create end to end typesafe deployment solutions.

As an aside – I recently posted a critique of Microsoft’s new DSL Bicep. One of the things I didn’t mention in that but did in a series of tweets was the shortcomings of inventing a new language that lives in its own ecosystem.

Ultimately Bicep will need to support “extension points” or you’ll have to wrap them in script and communicate information across boundaries (of course there can be benefits to that approach too). Not to mention they need to write all the tooling from scratch and developers / administrators need to learn another language.

By taking the approach Farmer has taken, handling boundaries is a lot cleaner – as we’ll see – and we can take advantage of some neat language features.

In this example I’m going to provision an Azure SQL Database and then upgrade its schema using DbUp, and we’ll run all this through GitHub Actions giving us an automated end to end deployment / upgrade system for our SQL database. You could do this with less F# code (almost none) but I also want to try and illustrate how this approach can form a nice framework for more complicated deployment scenarios, so we’re also going to look at error handling across a deployment pipeline.

As all the components themselves are well documented I’m not going to go end to end on all the detail of each component here – instead I’m going to focus on the big picture and the glue. You can find the code for the finished demonstration on GitHub here.

Starting with an F# console app, adding the Farmer NuGet package, and the boilerplate Program.fs file, first we need to declare our Azure resources – in this case a SQL Server and a database – and then bring them together in an ARM template:

let demoDatabase = sqlServer {
    name serverName
    admin_username "demoAdmin"
    enable_azure_firewall
    
    add_databases [
        sqlDb { name databaseName ; sku DbSku.Basic }
    ]
}

let template = arm {
    location Location.UKWest
    add_resource demoDatabase
    output "connection-string" (demoDatabase.ConnectionString databaseName)
}

Pretty straightforward but a few things worth noting:

  1. Both serverName and databaseName are simple constants (e.g. let databaseName = "myDatabaseName") that I’ve created as I’m going to use them a couple of times.
  2. Opening up the database to azure services (enable_azure_firewall) will allow the GitHub Actions Runner to access the database.
  3. On the final line of our arm block we output the connection string for the database so we can use it later.

That’s our Azure resources but how do we apply our SQL scripts to generate our schema? First we’ll need to add the dbup-sqlserver NuGet package and with that in place add a Scripts folder to our solution – in my example it contains four scripts:

DbUp keeps track of the last script it ran and applies subsequent scripts – essentially it’s a forward-only ladder of migrations. If you’re adding scripts of your own make sure you mark them as Embedded Resource otherwise DbUp won’t find them. To apply the scripts we simply need some fairly standard DbUp code like that shown below. I’ve placed this in an F# module called DbUpgrade so, as we’ll see in a minute, we can pipe to it quite elegantly:

let tryExecute =
  Result.bind (fun (outputs:Map<string,string>) ->
    try
      let connectionString = outputs.["connection-string"]
      let result =
        DeployChanges
          .To
          .SqlDatabase(connectionString)
          .WithScriptsEmbeddedInAssembly(Assembly.GetExecutingAssembly())
          .LogToConsole()
          .Build()
          .PerformUpgrade()
      match result.Successful with
      | true -> Ok outputs
      | false -> Error (sprintf "%s: %s" (result.Error.GetType().Name.ToUpper()) result.Error.Message)
    with _ -> Error "Unexpected error occurred upgrading database"
  )

If you’re not familiar with F# you might wonder what this Result.bind function is. F# has a wrapper type for handling success and error states called Result and a bunch of helper functions for working with it. One of the neat things about it is it lets you chain lots of functions together with an elegant pattern for handling failure – this is often referred to as Railway Oriented Programming.
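
As a minimal, self-contained sketch of the pattern (these functions are purely illustrative and not part of the demo):

// each step returns a Result; Result.bind only runs the next step on Ok
// and short-circuits straight to the end on Error
let validate (input: string) = if input <> "" then Ok input else Error "Input was empty"
let shout (value: string) = Ok (value.ToUpper())

let pipeline = validate >> Result.bind shout

pipeline "hello" evaluates to Ok "HELLO", while pipeline "" short-circuits to Error "Input was empty" without ever calling shout – exactly the behaviour we want when a provisioning step fails.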

We’ve now declared our Azure resources and we’ve got a process for deploying our upgrade scripts, so we need to bring it all together and actually execute all this. First let’s create our deployment pipeline, which provisions the resources and then upgrades the database:

let deploymentPipeline =
  Deploy.tryExecute "demoResourceGroup" [ adminPasswordParameter ]
  >> DbUpgrade.tryExecute 

If we had additional tasks to run in our pipeline we’d join them together with the >> operator as I’ve done here.
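
For example a hypothetical third stage – DataSeed.tryExecute doesn’t exist in the demo, it’s purely illustrative – would slot in like so, provided it also takes and returns a Result carrying the outputs map:

let deploymentPipeline =
  Deploy.tryExecute "demoResourceGroup" [ adminPasswordParameter ]
  >> DbUpgrade.tryExecute
  >> DataSeed.tryExecute // hypothetical extra stage with the same Result-in, Result-out shape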

To run the deployment we need to provide an admin password for SQL Server, which you can see in the pipeline snippet above as adminPasswordParameter, and we need to do this securely – so it can’t sit in the source code. Instead, as I’m going to be running this from GitHub Actions, an obvious place is the Secrets area of GitHub and an easy way to make that available to our deployment console app is through an environment variable in the appropriate action (which we’ll look at later). We can then access this and format it for use with Farmer by adding this line:

let adminPasswordParameter =
  Environment.GetEnvironmentVariable("ADMIN_PASSWORD") |> createSqlServerPasswordParameter serverName

Farmer uses a convention-based approach to parameter naming – I’ve built a little helper function createSqlServerPasswordParameter to form that up.
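
The helper itself isn’t shown in the snippets above, but it needs to do little more than pair the conventionally named parameter with the password – something along these lines, assuming Farmer’s password-for-{serverName} naming convention:

// sketch of the helper - assumes Farmer expects a parameter named "password-for-<serverName>"
let createSqlServerPasswordParameter (serverName: string) (password: string) =
  sprintf "password-for-%s" serverName, password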

(We could take a number of different approaches to this – ARM parameters for example – I’ve just picked a simple mechanism for this demo)

Finally to invoke all this we add this line at the bottom of our file:

template |> deploymentPipeline |> asGitHubAction

asGitHubAction is another little helper I’ve created that simply returns a 0 on success or prints a message to the console and returns a 1 in the event of an error. This will cause the GitHub Action to fail as we want.
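
It’s not shown above either, but a minimal sketch of its shape would be:

// map a Result onto a process exit code so that an Error fails the GitHub Action
let asGitHubAction result =
  match result with
  | Ok _ -> 0
  | Error message ->
    printfn "Deployment failed: %s" message
    1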

That’s the code side of things done. Our finished Program.fs looks like this:

open System
open Farmer
open Farmer.Builders
open Sql
open Constants
open Helpers

[<EntryPoint>]
let main _ =
  let adminPasswordParameter =
    Environment.GetEnvironmentVariable("ADMIN_PASSWORD") |> createSqlServerPasswordParameter serverName

  let demoDatabase = sqlServer {
    name serverName
    admin_username "demoAdmin"
    enable_azure_firewall
      
    add_databases [
      sqlDb { name databaseName ; sku DbSku.Basic }
    ]
  }

  let template = arm {
    location Location.UKWest
    add_resource demoDatabase
    output "connection-string" (demoDatabase.ConnectionString databaseName)
  }

  let deploymentPipeline =
    Deploy.tryExecute "demoResourceGroup" [ adminPasswordParameter ]
    >> DbUpgrade.tryExecute 
  
  template |> deploymentPipeline |> asGitHubAction

All we need to do now is wrap it up in a GitHub Action. I’ve based this action on the stock .NET Core build one – let’s take a look at it:

name: Deploy SQL database

on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

jobs:
  build:

    runs-on: ubuntu-latest

    steps:
    - uses: actions/checkout@v2
    - name: Setup .NET Core
      uses: actions/setup-dotnet@v1
      with:
        dotnet-version: 3.1.301
    - name: Install dependencies
      run: dotnet restore
    - name: Build
      run: dotnet build --configuration Release --no-restore
    - name: Login via Az module
      uses: azure/login@v1.1
      with:
        creds: ${{secrets.AZURE_CREDENTIALS}}
        enable-AzPSSession: true
    - name: Run
      env:
        ADMIN_PASSWORD: ${{ secrets.ADMIN_PASSWORD}}
      run: dotnet DeployDb/bin/Release/netcoreapp3.1/DeployDb.dll

If you’re familiar with GitHub Actions most of this should be fairly self explanatory – there’s nothing special about our deployment code, it’s a standard .NET Core console app so we begin by building it as we would any other (again this is one of the things I like about Farmer – it’s just .NET, and if you’re using .NET there’s nothing else required). However after building it we do a couple of things:

  1. To do the deployment Farmer will use the Azure CLI and so we need to login to Azure via that. We do that in the Login via Az module step which is pretty stock and documented on GitHub here. I’ve stored the secret for the service principal in the secrets area of GitHub.
  2. In the final step we run our deployment – again it’s just a standard console app. You can see in this step the use of the env section – we take a secret we’ve called ADMIN_PASSWORD and set it as an environment variable making it available to our console app.

And that’s it! At this point you’ve got an automated solution that will make sure your Azure SQL database infrastructure and its schema are managed and kept up to date. Change the configuration of your SQL database and/or add a SQL script and this will kick off and apply the changes for you. If / when you run it for the first time you should see output like this from the build section of the Action:

I think it’s a simple, but powerful, example of infrastructure as code and the benefits of using an existing language and ecosystem for creating DSLs – you get so much for free by doing so. And if the rest of your codebase is in .NET then with Farmer you can share code, whether that be simple constants and names or implementation, easily across your deployment and runtime environments. That’s a big win. I’m slowly adding it into my Performance for Cyclists project and the approach here is largely lifted from there.

Finally I think it’s worth emphasising – you don’t need to really know F# to use Farmer and you certainly don’t need to be using it elsewhere in your solution. It’s a pretty simple DSL built on top of F# and a fantastic example of how good F# is as a basis for DSLs. I’ve dug a little deeper into the language here to integrate another .NET tool but if all you want to do is generate ARM templates then, as you can see from the Farmer examples on its website, you really don’t need to get into the F# side (though I do encourage you to!).

Bicep – an utterly uninspiring start

I’m not sure when it went public but Microsoft now have an alpha of Bicep available on GitHub. Bicep is their attempt to deal with the developer-unfriendly horror and rat’s nest that is ARM templates.

I took a look at it today and came away thoroughly disappointed with what I saw. Before I even started to look at the DSL itself the goals raised a whole bunch of red flags… so let’s start there.

And strap in… because this is going to be brutal.

Bicep Goals

  1. Build the best possible language for describing, validating, and deploying infrastructure to Azure.
    Laudable.
  2. The language should provide a transparent abstraction for the underlying platform. There must be no “onboarding step” to enable it to a new resource type and/or apiVersion in Bicep.
    Seems important. Azure moves fast and new things are added all the time. On the one hand it would be nice to see each push of a new resource / API to Azure be coupled to some nice Bicep wrapping – but given the breadth and pace of what goes on it’s probably not realistic.

    On the other hand…. this seems dangerous. There’s a lot of intricacy in that stuff and if we just boil down to expressing the ARM inside strings with a bit of sugar round it what have we gained?
  3. Code should be easy to understand at a glance and straightforward to learn, regardless of your experience with other programming languages.
    Ok. No great quibble here.
  4. Users should be given a lot of freedom to modularize and reuse their code. Reusing code should not require any ‘copy/paste’.
    Seems weak. I’d like to see this strengthened such that writing new Bicep code shouldn’t require copy/paste – see my point (2) above as these things seem somewhat coupled. Cough. Magic strings. Cough.
  5. Tooling should provide a high level of resource discoverability and validation, and should be developed alongside the compiler rather than added at the end.
    On the one hand I don’t disagree. On the other… this seems like a bit of a cop out back to magic strings.
  6. Users should have a high level of confidence that their code is ‘syntactically valid’ before deploying.
    Wait. What? A “high level of confidence”. I don’t want a high level of confidence. I want to know. Ok – if I’m on the bleeding edge and the Azure resource hasn’t been packed into some nice DSL support yet then ok. But if I’m deploying, say, a vanilla App Service I don’t want a high level of confidence – I want to know.

I don’t know about you but to me this sounds like it has the hallmarks of another half-baked solution (“no its awesome” – random Twitter devrel) that still requires you to remember a bunch of low level details. And probably still relies on strings.

The Tutorial

Next I cautiously cracked open the tutorial… oh god.

Strings galore. Strings for well known identities. The joy. I’ve not installed the tooling but I assume it helps you pick the right string. But really? REALLY? Strings.

I was pretty horrified / disappointed so I continued through hoping this was just the start but as far as I can tell – nope. Strings are a good idea apparently. This is the final example for storage:

Same dog’s dinner.

I can absolutely see why you want some form of string support in there. As I noted in the goals if a new API version is released you want to be able to specify it. But there are ways to achieve this that don’t involve this kind of untyped nonsense. In the normal course of events for common things like Storage accounts the only strings in use should be for names.

No wonder they can only give “confidence” things are syntactically correct.

The tutorial finishes with “convert any ARM template to Bicep” – it’s not hard, is it? This is still a thin low value wrapper on top of ARM. We’ve just got rid of the JSON and replaced it with something else. If you don’t add much value then converting between two things is generally straightforward.

I’m struggling not to be unkind… but is there any kind of peer review for this stuff? Do people who understand languages or have a degree of breadth get involved? Do people actually *making* things using this stuff get involved? Because as someone involved in making lots of stuff and running teams making stuff – this misses by miles.

It really doesn’t have to be this way – the community are coming up with better solutions frankly. Bit of a “back to Build” but why the heck didn’t they put some weight behind Farmer – or at least lift some of its ideas (I’m glad they at least acknowledge it). Because here’s a storage account modelled in that:
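
Something along these lines – a sketch in the spirit of the Farmer quickstart, with placeholder names:

let myStorage = storageAccount {
    name "myuniquestorage"
}

let template = arm {
    location Location.UKWest
    add_resource myStorage
}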

Here’s a more complex Farmer block for storage and a web app:
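
Again indicatively, following the shape of the Farmer quickstart (names are placeholders):

let myStorage = storageAccount {
    name "myuniquestorage"
}

let myWebApp = webApp {
    name "myuniquewebapp"
    // pass the storage key to the app and make sure storage is provisioned first
    setting "storageKey" myStorage.Key
    depends_on myStorage
}

let template = arm {
    location Location.UKWest
    add_resource myStorage
    add_resource myWebApp
}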

Neater right? And typesafe. It’s not hard to imagine something in between Farmer and Bicep that doesn’t rely on all these strings and bespoke tooling (Farmer is based on F#… so the tooling already exists) but still allows you to dive into things “not in the box”.

Conclusion

Super disappointing. Hopelessly basic. Doesn’t look to solve many problems. Another “requires lots of custom tooling” project. A tiny incremental move on from ARM. Doesn’t seem worth the effort. Better hope the Bicep tooling is good and frequently updated if you plan on using this.

If we’re now “treating ARM as the IL” (which is what it is – despite years of MS pushing back on feedback and insisting ARM is awesome) then this really is a poor effort to build on it. Which is sad because as it comes from Microsoft it’s likely to become the most commonly used solution. Merit won’t have much to do with it.

If anyone from the Bicep team wants to talk about this – happy to.

An Azure Reference Architecture

There are an awful lot of services available on Azure but I’ve noticed a pattern emerging in a lot of my work around web apps. At their core they often have a similar architecture, deployment in Azure, and process for build and release.

For context a lot of my hands-on work over the last 3 years has been as a freelancer developing custom systems for people or on my own side projects (most recently https://www.forcyclistsbycyclists.com). In these situations I’ve found productivity to be super important in a few key ways:

  1. There’s a lot to get done, one or two people, and not much time – so being able to crank out a lot of work quickly and to a good level of quality is key.
  2. Adaptability – if it’s an externally focused greenfield system there’s a reasonable chance that there’s a degree of uncertainty over what the right feature set is. I generally expect to have to iterate a few times.
  3. I can’t be wasting time repeating myself or undertaking lengthy manual tasks.

Due to this I generally avoid over-complicating my early stage deployment with too much separation – but I *do* make sure I understand where my boundaries are and apply principles in the code that support the later distribution of a system.

With that out of the way… here’s an architecture I’ve used as a good starting point several times now. And while it continues to evolve and I will vary specific decisions based on need, it’s served me well and so I thought I’d share it here.

I realise there are some elements on here that are not “the latest and greatest” however it’s rarely productive to be on the bleeding edge. It seems likely, for example, that I’ll adopt the Azure SPA support at some point – but there’s not much in it for me doing that now. Similarly I can imagine giving GitHub Actions a go at some point – but what do I really gain by throwing away what I know today? From the experiments I’ve run I gain no productivity. Judging this stuff is something of a fine line but at the risk of banging this drum too hard: far too many people adopt technology because they see it being pushed and talked about on Twitter or dev.to (etc.) by the vendor, by their DevRel folk and by their community (e.g. MVPs) and by those who have jumped early and are quite possibly (likely!) suffering from a bizarre mix of Stockholm Syndrome and sunk cost fallacy – “honestly the grass is so much greener over here… I’m so happy I crawled through the barbed wire”.

Rant over. If you’ve got any questions, want to tell me I’m crazy or question my parentage: catch me over on Twitter.

Architecture

Build & Release

I’ve long been a fan of automating at least all the high value parts of build & release. If you’re able to get it up and running quickly it rapidly pays for itself over the lifetime of a project. And one of the benefits of not CV-chasing the latest tech is that most of this stuff is movable from project to project. Once you’ve set up a pipeline for a given set of assets and components it’s pretty easy to reuse on your next project. Introduce lots of new components… yeah, you’ll have lots of figuring out to do. Consistency is underrated in our industry.

So what do I use and why?

  1. Git repository – I was actually an early adopter of Git, mostly because I was taking my personal laptop into a disconnected environment on a regular basis when it first started to emerge and I’m a frequent committer.

    In this architecture it holds all the assets required to build & deploy my system other than secrets.
  2. Azure DevOps – I use the pipelines to co-ordinate build & release activities using built-in tasks, third-party tasks, and scripts. Why? At the time I started it was free and “good enough”. I’ve slowly moved over to the YAML pipelines. Slowly.
  3. My builds will output four main assets: an ARM template, a Docker container, a built single page application, and SQL migration scripts. These get deployed into an Azure resource group, an Azure container registry, blob storage, and a SQL database respectively.

    My migration scripts are applied against a SQL database using DbUp and my ARM templates are generated using Farmer and then used to provision a resource group. I’m fairly new to Farmer but so far it’s been fantastic – previously I was using Terraform but Farmer just fits a little nicer with my workflow and I like to support the F# community.

Runtime Environment

So what do I actually use to run and host my code?

  1. App Service – I’ve nearly always got an API to host and though I will sometimes use Azure Functions for this I more often use the Web App for Containers support.

    Originally I deployed directly into a “plain” App Service but grew really tired of the ongoing “now this is really how you deploy without locked files” fiasco and the final straw was the bungled .NET Core release.

    It’s just easier and more reliable to deploy a container.
  2. Azure DNS – what it says on the tin! Unless there is a good reason to run it elsewhere I prefer to keep things together – it keeps things simple.
  3. Azure CDN – gets you a free SSL cert for your single page app, is fairly inexpensive, and helps with load times.
  4. SQL Database – still, I think, the most flexible, general purpose, and productive data solution. Sure, at scale others might be better. Sure, sometimes less structured data is better suited to less structured data sources. But when you’re trying to get stuff done there’s a lot to be said for having an atomic, transactional data store. And if I had a tenner for every distributed / non-transactional design I’ve seen that dealt only with the happy path I would be a very, very wealthy man.

    Oh and “schema-less”. In most cases the question is: is the schema explicit or implicit? If it’s implicit… again, a lot of what I’ve seen doesn’t account for much beyond the happy path.

    SQL might not be cool, and maybe I’m boring (but I’ll take boring and gets shit done), but it goes a long way in a simple to reason about manner.
  5. Storage accounts – in many systems you come across small bits of data that are handy to dump into, say, a blob store (poor man’s NoSQL right there!) or table store. I generally find myself using one at some point.
  6. Service Bus – the unsung hero of Azure in my opinion. It’s reliable, does what it says on the tin, and is easy to work with. Most applications have some background activity, chatter or async events to deal with and Service Bus is a great way of handling this. I sometimes pair this (and Azure Functions below) with SignalR.
  7. Azure Functions – great for processing the Service Bus, running code on a schedule and generally providing glue for your system. Again I often find myself with at least a handful of these. I often also use Service Bus queues with Functions to provide a “poor man’s admin console” – basically allowing me to kick off administrative events by dropping a message on a queue.
  8. Application Insights – easy way of gathering together logs, metrics, telemetry etc. If something does go wrong or your system is doing something strange the query console is a good way of exploring what the root cause might be.

Code

I’m not going to spend too long talking about how I write the system itself (plenty of that on this blog already). In general I try and keep things loosely coupled and normally start with a modular monolith – easy to reason about, well supported by tooling, minimal complexity, but able to grow into something more complex when and if that’s needed.

My current tools of choice are end-to-end F# with Fable, Saturn / Giraffe on top of ASP.Net Core, and Fable Remoting on top of all that. I hopped onto functional programming as:

  1. It seemed a better fit for building web applications and APIs.
  2. I’d grown tired with all the C# ceremony.
  3. Collectively we seem to have decided that traditional OO is not that useful – yet we’re working in languages built for that way of working. And I felt I was holding myself back / being held back.

But if you’re looking to be productive – use what you know.
