DigitalOcean App Platform – Security Concerns

While recently reviewing my options for hosting a new project (SPA, API, database – pretty stock stuff) I took a good look at DigitalOcean.

With the recent addition of their managed App platform their hosting solution is simple to use, competitively priced, and very appealing for simple apps. I did some basic experimentation and had a dev system running in it for a while and it all seemed pretty good.

However, as I looked to deploy a production environment I came across what is, to me, a glaring issue. The App Platform can only communicate with a Managed Database if you disable "trusted sources" – meaning your database sits on the public Internet without even an IP restriction in place. If you try to associate an App with a managed database you are given a link explaining how to disable trusted sources, and when you do so you get this sensible warning:


I attempted to engage on Twitter to see if I was missing something and was advised to raise a ticket. I’ve done that and they’ve confirmed this is the case.

Let that sink in for a moment: DigitalOcean thought it was ok to launch a platform, designed for ease of use, that contravenes their own otherwise recommended (and very sensible) security practices and requires managed databases to sit on a public network without even an IP restriction.

The “workaround” is to deploy using droplets – but, to me, this defeats the point of using such a high level platform.

Addressing this is apparently on the backlog, but that this made it to market in this fashion raises, for me at least, all manner of questions about the culture at DigitalOcean with respect to security and rules them out as a vendor I am comfortable hosting my data and systems with.

It's terrifying that they are leading people down this path and I wonder just how many databases sit exposed as a result.

I’ve terminated my own experiments and am deleting my account.

Note: as per the post I've reached out via Twitter and to DigitalOcean to ensure the information is accurate – if anyone at DigitalOcean disagrees I'm easy to get in touch with on Twitter.

Azure Functions Performance – Update on EP1 Results

In yesterday's post comparing Azure Functions to AWS Lambda the EP1 plan was a notably poor performer – to the extent that I wondered if it was an anomalous result. For context, here are yesterday's results for a low load test:

I created a new plan this morning with a view to exploring the results further and I think I can provide some additional insight into this.

You shouldn’t have to “warm” a Premium Plan but to be sure and consistent I ran these tests after allowing for an idle / scale down period and then making a single request for a Mandelbrot.

The data here is based around a test making 32 concurrent requests to the Function App for a single Mandelbrot. Here is the graph for the initial run.

First, if we consider the overall statistics for the full run – they are still not great. If I pop those into the comparison chart I used yesterday EP1 is still trailing – the blue column is yesterday's EP1 result and the green line today's.

It's improved – but it's still poor. However if we look at the graph of the run over time we can see it's something of a graph of two halves, and I've highlighted two sections of it (with the highlight numbers in the top half):

There is a marked increase in response time and requests per second between the two halves. Although I'm not tracking the instance IDs I would conclude that Azure Functions scaled up to involve a second App Service Instance and that this resulted in the improved throughput.

To verify this I immediately ran the test again to take advantage of the increased resource availability in the Function Plan and that result is shown below along with another comparative graph of the run in context.

We can see here that the EP1 plan is now in the same kind of ballpark as Lambda and the EP2 plan. With two EP1 instances in play we are now running with a similar amount of total compute as the EP2 plan – just on two 210 ACU instances rather than one 420 ACU instance.

To obtain this level of performance we are sacrificing consumption based billing and moving to a baseline cost of £0.17 per hour (around £125 per month), bursting to £0.34 per hour (around £250 per month), to cover this low level of load.
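For context, the arithmetic behind those figures can be sketched quickly (a rough check only; the hourly rates are the ones quoted above, and Azure's conventional 730-hour billing month is assumed):

```python
# Back-of-envelope Premium Plan cost check. The per-hour GBP figures are
# the approximate EP1 rates quoted in the post; 730 hours is Azure's
# standard monthly hour count.
HOURS_PER_MONTH = 730

ep1_hourly = 0.17        # one EP1 instance running
ep1_burst_hourly = 0.34  # two EP1 instances during scale-out

baseline_monthly = ep1_hourly * HOURS_PER_MONTH
burst_monthly = ep1_burst_hourly * HOURS_PER_MONTH

print(f"Baseline: £{baseline_monthly:.0f}/month")  # ≈ £124
print(f"Burst:    £{burst_monthly:.0f}/month")     # ≈ £248
```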


I would argue this verifies yesterday's results – with a freshly deployed Function App we have obtained similar results, and by looking at its behaviour over time we can see how Azure Functions adds resource to an EP1 plan, eventually giving us similar total resource to the EP2 plan and similar results.

Every workload is different and I would always encourage measuring your own, but based on this I would strongly suggest that if you're using Premium Plans you dive into your workload and seek to understand whether it is a cost effective use of your spend.

Comparative performance of Azure Functions and AWS Lambda

Update: the results below showed the EP1 plan to be a clear outlier in some of its performance results. I've since retested on a fresh EP1 plan, confirmed these results as accurate, and been able to provide further insight into the performance: Azure Functions Performance – Update on EP1 Results – Azure From The Trenches.

I was recently asked if I would spend some time comparing AWS Lambda with Azure Functions at a meetup – of course, happily! As part of preparing for that I did a bit of a dive into the performance aspects of the two systems and I think the results are interesting and useful, so I'm also going to share them here.

Test Methodology

"Methodology" may be a bit grand, but here's how I ran the tests.

The majority of the tests were conducted with SuperBenchmarker against systems deployed entirely in the UK (eu-west-2 on AWS and UK South on Azure). I interleaved the results – testing on AWS, testing on Azure, and ran the tests multiple times to ensure I was getting consistent results.

I’ve not focused on cold start as Mikhail Shilkov has covered that ground excellently and I really have nothing to add to his analysis:

Cold Starts in Azure Functions | Mikhail Shilkov
Cold Starts in AWS Lambda | Mikhail Shilkov

I essentially focused on two sets of tests – an IO workload (flushing a queue and writing some blobs) and a compute workload (calculating a Mandelbrot and returning it as an image).

All tests are making use of .NET Core 3.1 and I’ve tested on the following configurations:

  1. Azure Functions – Consumption Plan
  2. Azure Functions – EP1 Premium Plan
  3. Azure Functions – EP2 Premium Plan
  4. AWS Lambda – 256 Mb
  5. AWS Lambda – 1024 Mb
  6. AWS Lambda – 2048 Mb

It's worth noting that Lambda assigns a proportion of CPU(s) based on the allocated memory – more memory means more horsepower and potentially multiple cores (beyond the 1.8Gb mark if memory serves).
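As a rough illustration of that relationship (the 1,769 MB ≈ 1 vCPU figure is an approximation I've seen quoted for Lambda, not something derived from these tests – treat it as an assumption):

```python
# Rough sketch of Lambda's memory-to-CPU relationship: CPU is allocated
# proportionally to configured memory. The MB-per-vCPU constant below is
# an approximation, not an official figure from these tests.
MB_PER_VCPU = 1769  # approximate memory size mapping to one full vCPU

def approx_vcpus(memory_mb: int) -> float:
    """Estimate the vCPU share for a given Lambda memory setting."""
    return memory_mb / MB_PER_VCPU

for mb in (256, 1024, 2048):
    print(f"{mb} MB -> ~{approx_vcpus(mb):.2f} vCPU")
```

This lines up with the test configurations above: the 256 Mb Lambda gets a small fraction of a core while the 2048 Mb one crosses into multi-core territory.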

Queue Processing

For this test I preloaded a queue with 10,000 and 100,000 queue items and wrote the time each queue item was processed to a blob file (one file per queue item). The measured time is between the time the first blob was created and the time the last blob was created.

On Azure I made use of Azure Queue Storage and Azure Blob Storage and on AWS I used SQS and S3.
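The measurement itself can be sketched as follows – a minimal Python illustration, with made-up timestamps standing in for the creation times read back from the blobs:

```python
from datetime import datetime

# Sketch of the queue-drain measurement: one timestamp per blob (one blob
# per queue item); the drain time is the span between the earliest and
# latest. The timestamps below are illustrative only.
timestamps = [
    datetime(2021, 3, 1, 10, 0, 0),
    datetime(2021, 3, 1, 10, 0, 12),
    datetime(2021, 3, 1, 10, 1, 30),
]

drain_time = max(timestamps) - min(timestamps)
print(f"Queue drained in {drain_time.total_seconds():.0f}s")  # 90s
```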

AWS was the clear winner of this test and, from the shape of the data, it appeared that AWS was accelerating faster than Azure – more eager to process more items – though I would need to do further testing to be sure, and it is possible the other services were an influencing factor. Either way it's a reasonable IO test of a function against common services.

HTTP Trigger under steady load

This test was focused on a compute workload – essentially calculating a Mandelbrot. The Function / Lambda will generate n Mandelbrots based on a query parameter. The Mandelbrots are generated in parallel using the Task system.
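For illustration, here's a rough Python analogue of that workload (the real functions use .NET and the Task system; the grid size and iteration cap here are arbitrary):

```python
from concurrent.futures import ThreadPoolExecutor

# Python analogue of the test workload: compute n Mandelbrot escape-count
# grids in parallel and return one of them (the real handler renders it
# as an image). Grid size and iteration cap are illustrative.

def mandelbrot(width=64, height=64, max_iter=50):
    grid = []
    for py in range(height):
        row = []
        for px in range(width):
            # Map pixel to the complex plane: real -2..1, imag -1.5..1.5
            c = complex(-2.0 + 3.0 * px / width, -1.5 + 3.0 * py / height)
            z, n = 0j, 0
            while abs(z) <= 2 and n < max_iter:
                z = z * z + c
                n += 1
            row.append(n)
        grid.append(row)
    return grid

def handle_request(n: int):
    # n Mandelbrots in parallel, mirroring the query parameter
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda _: mandelbrot(), range(n)))
    return results[0]  # return one as the "image"

image = handle_request(8)
print(len(image), len(image[0]))  # 64 64
```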

32 concurrent requests, 1 Mandelbrot per request

Percentile and average response times can be seen in the graph below (lower is better):

With this low level of load all the services performed acceptably. The Azure Premium Plans strangely perform the worst, with the EP1 service being particularly bad. I reran this several times and received similar results.

The range of response times (min, max) can be seen below alongside the average, followed by the total number of requests served over the 2 minute period:
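For anyone wanting to reproduce the summary statistics from raw data, here's a minimal sketch (the response times are illustrative, and nearest-rank is just one of several common percentile definitions):

```python
import statistics

# Sketch of how each run's summary statistics can be derived from raw
# response times. The values (in ms) are illustrative only.
response_times_ms = [120, 135, 150, 180, 210, 250, 400, 900]

def percentile(data, p):
    """Nearest-rank percentile over sorted data."""
    data = sorted(data)
    k = max(0, round(p / 100 * len(data)) - 1)
    return data[k]

print("min:", min(response_times_ms))              # 120
print("max:", max(response_times_ms))              # 900
print("avg:", statistics.mean(response_times_ms))  # 293.125
print("p90:", percentile(response_times_ms, 90))   # 400
```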

32 concurrent requests, 8 Mandelbrots per request

In this test each request results in the Lambda / Function calculating the Mandelbrot 8 times in parallel and then returning one of the Mandelbrots as an image.

Percentile and average response times can be seen in the graph below (lower is better):

Things get a bit more interesting here. The level of compute is beyond the proportion of CPU assigned to the 256Mb Lambda and it struggles constantly. The Consumption Plan and EP1 Premium Plan fare a little better but are still impacted. However the 1024Mb and 2048Mb Lambdas are comfortable – with the latter essentially being unchanged.

The range of response times (min, max) can be seen below alongside the average, followed by the total number of requests served over the 2 minute period:

I don’t think there’s much to add here – it largely plays out as you’d expect.

HTTP undergoing a load spike

In this test I ran a low and steady rate of 2 concurrent requests for 1 Mandelbrot over 4 minutes. After around 1 to 2 minutes I then loaded the system, independently, with a spike of 64 concurrent requests for 8 Mandelbrots.
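The load profile can be sketched as a simple function of time (the spike start and duration here are illustrative – the real spike was driven independently by the load tool):

```python
# Sketch of the spike test's load profile: a steady background of 2
# concurrent requests over 4 minutes, with an independent spike of 64
# concurrent requests injected partway through. Times in seconds; the
# spike start and duration below are illustrative assumptions.
STEADY_CONCURRENCY = 2
SPIKE_CONCURRENCY = 64
TEST_DURATION = 240
SPIKE_START = 90      # "after around 1 to 2 minutes"
SPIKE_DURATION = 60   # illustrative spike length

def concurrency_at(t: int) -> int:
    load = STEADY_CONCURRENCY if t < TEST_DURATION else 0
    if SPIKE_START <= t < SPIKE_START + SPIKE_DURATION:
        load += SPIKE_CONCURRENCY
    return load

print(concurrency_at(30), concurrency_at(120), concurrency_at(200))  # 2 66 2
```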


First up Azure with the Consumption Plan:

It's clear to see in this graph where the additional load begins and, unfortunately, Azure really struggles with it. Served requests largely flatline throughout the spike. To provide more insight here's a graph of the spike (note: I actually captured this from a later run but the results were the same as the first run).

Azure struggled to serve this load. It didn't fail any requests but performance really nosedived.

I captured the same data for the EP1 and EP2 Premium Plans and these can be seen below:

Unfortunately Azure continues to struggle – even on an EP2 plan (costing £250 per month at a minimum). The spike data was broadly the same as in the Consumption plan run.

I would suggest this is due to Azure's fairly monolithic architecture – all of our functions run in a shared resource, the more expensive requests can sink the entire shared space, and Azure isn't able to address this.


First up the 256Mb Lambda:

We can see here that the modest 1 Mandelbrot requests made to the Lambda are untroubled by the spike. You can see a slight rise in response time and drop in RPS when the additional load hits, but overall the Lambda maintains consistent performance. You can see what is happening in the spike below:

As in our earlier tests the 256 Mb Lambda struggles with the request for 8 Mandelbrots – but its performance is isolated away from the smaller requests due to Lambda's more isolated architecture. The additional spikes showed characteristics similar to the runs measured earlier. The 1024 Mb and 2048 Mb runs are shown below:

Again they run at a predictable and consistent rate. The graphs for the spikes behaved in line with the performance of their respective consistent loads.

Spike Test Results Summary

Based on the above it's unsurprising that the overall metrics are heavily in favour of AWS Lambda.

Concluding Thoughts

It's fascinating to see how the different architectures impact the scaling and performance characteristics of the services. With Lambda being based around containerized functions, as long as the workload of a specific request fits within a container's capabilities performance remains fairly constant and consistent, and any containers that are stretched have an isolated impact.

As long as you measure, understand and plan for the resource requirements of your workloads Lambda can present a fairly consistent consumption based pricing scaling model.

Azure Functions, in contrast, uses the coarse App Service Instance scaling model – many functions run within a single resource, which means that additional workload can stretch a resource beyond its capabilities and have an adverse effect on the whole system. Even spending more money has a limited impact for spikes – we would need resources that can manage the "peak peak" request workloads.

I’ve obviously only pushed these systems so far in these tests – enough to show the weaknesses in the Azure Functions compute architecture but not enough to really expose Lambda. That’s something to return to in a future installment.

In my view the Azure Functions infrastructure needs a major overhaul in order to be performance competitive. It has a FaaS programming model shackled to, and held back by, a traditional web server architecture. That said, I am surprised by the EP1 results and will reach out to the Functions team.

Who is Azure for?

As I’ve worked with a wider variety of cloud vendors in recent months I’m becoming increasingly unsure who Azure is a good fit for. To contextualise this a lot of my work has been based around fairly common application patterns: APIs, SPAs, data storage, message brokers, background activities. As some of this is self funded I’m often interested in cost but still want good performance for users.

For simple projects (let's say an SPA, an API and a database) you now have services like Digital Ocean which will deploy your app direct from GitHub and let you set up a database and CDN with a few lines of code or a couple of button pushes in the portal. The portals are super easy to use and focused. Digital Ocean can also be cheap. If you're a developer focused on code and product it's about as simple as it gets.

Azure has some of this but it's far less streamlined and doesn't go as low on entry level price. It's also mired in inconsistencies across its services.

So for a simple project I don't think it competes on either usability or price. And on the commercials – Microsoft are difficult to approach as a startup or as someone looking to migrate workloads. In contrast, from my recent experiences AWS are far more aggressive in this regard.

At the other end of things, if you want more control and access to more complex services you have AWS – like Azure they have a vast array of services to choose from. I would argue AWS has a slightly steeper learning curve than Azure – you quickly get involved with networking and IAM policies and roles – but it's more consistent than Azure, and once you've got your head around those concepts things start to flow nicely.

AWS feels like it was built bottom up (infrastructure upwards) and, if you look at its history, it was. It's benefiting from that now as it builds higher level components on top of that base. Azure in comparison feels (and was) built top down – it started with PaaS and has moved downwards. Unfortunately it's harder to patch in a foundation afterwards, and I would argue some of the issues we experience are due to that.

I do think Azure has a better portal that brings deployed services together in a clearer way – but if you're doing the more complex work that makes sense on something like Azure or AWS you're almost certainly using Infrastructure as Code technologies (and if not – you should be!) so it becomes less of an advantage at this end of the scale.

And worryingly Microsoft have missed the two big recent advances in compute: serverless and ARM.

With serverless, Azure Functions is barely a competitor to Lambda at this point. It's got some nice trimmings in things like Durable Functions but as a core serverless offering it's slow, both in terms of raw performance and cold start, and it's inflexible – it's still suffering from being cobbled together like a monolithic Frankenstein's monster from the parts Microsoft had lying around when Lambda was launched (the perils of Marketing Led Development).

On ARM, Amazon are on v2 of their Graviton support and you can obtain 20% to 40% cost savings by moving typical .NET 5 workloads onto ARM backed EC2 instances. Azure don't even have anything in preview. How long will it be before AWS have rolled ARM out to further services?

So ultimately who is Azure for? The only audience I’m coming up with are existing Microsoft customers who have large investments in the ecosystem or those locked into spending with Microsoft.

Otherwise I’m struggling to see why, if given a free choice, you would now choose it. I’d looked to put my latest side project on Azure – I felt I ought to give it another chance, I’m an Azure MVP for goodness sake – but I can’t find a compelling reason to and deploying it was painful. Performance experiments today (that will be published after my meetup talk tomorrow) were disappointing. Whereas I can find compelling reasons to deploy on AWS.

It genuinely saddens me to be writing this – I've had a lot of success with Azure over the years but Microsoft seem to have lost their way. In the chase for every feature and every customer the commercial focus has gone and it feels like they're paying a price for chasing AWS from weak foundations. I'll continue to engage with Product Teams and users as I can, and I don't think of myself as "walking away" from Azure, but my latest project is headed to AWS. Commercially and productivity wise it's a no brainer.

All in, it makes it very difficult to recommend Azure at this point if you have a free choice of cloud vendor.

Update: It's worth noting that some folk I trust, such as Howard van Roojen of Endjin, speak highly of Azure's data platform capabilities and have delivered some serious systems based around it, so as ever you do have to look at your use case. And if you're looking at making a significant investment with a cloud vendor I highly recommend finding someone independent with experience in the space and testing with some representative workloads as soon as you can. Don't rely on what the vendor is telling you. And don't assume that because you're using .NET, Azure is the answer – it may be, but it may not.

Thoughts on the .NET Foundation

The .NET Foundation Annual Survey is currently open (please do participate) and I made some comments on Twitter about how I thought the Foundation was operating “back to front” and consumer first with OSS. Claire Novotny reached out and asked me for specifics and so I thought I’d try and capture them in a format a bit more useful than a 280 character tweet.

Before I launch into this I'll start by saying that these are just my opinions – I have no special insight. They are observations meant in good spirit. Secondly: I don't think anybody involved in the Foundation is malicious or has any kind of "bad" agenda, but I do think that, like all of us, their views are influenced by their own bubbles, and I think this is in part what creates such a feeling of disconnect between the Foundation and those of us in the .NET OSS space whose bubbles look very different.

Foundation Independence and Make-Up

Going through the “About Us” page 10 of the 17 people involved in the directorship, management team, or advisory board are from Microsoft (Microsoft / GitHub). Of the others I would argue at least 2 are from large tech businesses (World Wide Technology and Twilio).

A frequent (at least in my circles) observation made about the .NET Foundation is that it is not independent from Microsoft. The .NET Foundation maintains it is independent, and perhaps legally it is, but with 10 of the 17 lead roles populated by Microsoft employees how can it be in practice? It's hard to believe it's ever going to act against Microsoft's interests, and it's certainly heavily slanted by Microsoft opinions.

In order to build trust Microsoft need to take a genuine back seat from the Foundation's leadership: take Microsoft off the board, or reduce them to a small number of seats, and have a minority of the advisors being from Microsoft. It was good to see the majority of funding came from outside Microsoft in the last accounts, but it's still the dominant sponsor and dominant sponsors have a lot of clout.

Clearly Microsoft have a place at this table and the people involved are talented and well meaning, but I don't see how the Foundation will ever be truly independent, or represent anyone other than a very narrow range of interests, while it is slanted as it is. It desperately needs more diversity of viewpoint or it's going to continue to operate in a small bubble – and it's certainly not going to shake its perception as a PR stunt by Microsoft.

And an independent Foundation looking out for the community should hold Microsoft to account when they work against its stated mission. It's hard to see that really happening as things are currently set up.

The 500kg Gorilla in the room

When it comes to .NET, Microsoft are, unsurprisingly, absolutely the dominant voice in the room. Through both action and inaction they steer the .NET ecosystem – they own the roadmap for .NET and they build many of the key frameworks and surrounding technologies. If they announce a piece of technology in a space it will suck the oxygen out of the room for non-Microsoft solutions, even when the Microsoft solution is arguably poorer.

Combined with this, Microsoft seem to have a real "Not Invented Here" attitude. I'm not privy to the inner workings of Microsoft but I assume they are like many large organisations – heavily siloed (though management will likely claim otherwise), with promotion obtained through success in product launches and a good degree of power dynamics at play. They also have a lot of smart engineers who they need to keep happy and who are susceptible to the same foibles as the rest of us (this is too complicated, I could do that better, etc. etc.).

It creates a real trust issue for OSS contributors that I'm not sure Microsoft can solve. They're a business, and if they decide something is key to their success they are going to want to clearly and unequivocally own the Intellectual Property Rights. Even if they acquihire, that seems likely to result in starting again in a space. And speaking of IPR…

Copyright Assignment

While writing this piece I came across a fascinating GitHub Issue – the short version is that the Foundation has removed all but the assignment model from the options for projects joining the Foundation. This means that for a new project to join the Foundation it has to assign copyright to the Foundation itself, with the original authors receiving a license in return.

Reading the only response so far from a Foundation director in the issue, this is down to enabling the Foundation to better protect the copyright of member projects, with more details to follow.

I assume that following a copyright assignment the license is updated to read like this (taken from Benchmark .NET):

Beyond it being a mystery to me why anyone who has put thousands of hours of work into something would accept this, it strikes me that there are a number of things wrong here.

  1. It's back to front – this is a significant decision for someone considering contributing their project to the Foundation, and the information should be available ahead of the change.
  2. That’s a hell of an ask and somewhat based on trust in a position of power imbalance. The Foundation has a lawyer on hand – contributors often do not.
  3. Have the Foundation ever taken legal action on behalf of a member project? I genuinely don’t know. Would they take legal action against Microsoft? Or any of their other sponsors?
  4. For me at least it would need to be accompanied by a contract stating exactly what the Foundation commits to provide in return for this assignment of, essentially, ownership and what the consequences are for violation or change of those terms.
  5. It's not clear to me why this is required despite the statement. As far as I am aware a lawyer can be paid to act on anybody's behalf, and whether that is a contributor or the Foundation surely doesn't affect a copyright violation or any other form of violation. Again – this is why clear information needs to be provided ahead of time.
  6. How does this impact OSS practicalities? Hybrid licenses for commercial business models for example? Is this one way? Can contributors regain ownership of the copyright?

Again I feel we are back to trust and communication issues. The Foundation and Microsoft seem like one and the same thing to many, and handling things like this just stirs that pot. It's easy to view this either as a land grab or as being for consumers (a .NET Foundation license is likely more palatable to a risk averse bank than one licensed by James Randall).

OSS Professionalism

Around 15 months ago the .NET Foundation proposed a maturity model for OSS – essentially an attempt to professionalise the OSS space so that more "risk averse" businesses would be comfortable making use of OSS. It caused quite a ruckus amongst contributors and, fortunately, the Foundation listened and pulled back.

We’re currently going through that loop again with a slightly different, and arguably healthier, spin: ecosystem growth though that too still touches on the professionalism of OSS contrasting with other ecosystems and how many projects are (as suggested) worked on by employees of businesses – essentially funded by business.

Perhaps this is the future of .NET OSS, but I don't think there are many businesses who will fund OSS project development to a serious degree. Certainly they exist, but it's a percent of a percent: larger tech companies and consultancies using OSS for marketing (unsurprisingly, the companies on the Foundation board are likely to do this). Other than my own business I've only worked in one that would fund OSS in even a minor sense (and perhaps that is my loss).

And circling back to copyright assignment for a moment – I've never worked in a business that would consider giving up its copyright ownership to another organisation and I find it hard to imagine many would. Maybe I'm wrong.

In any case there has to be a clear incentive for businesses to build and maintain OSS, which leads me to contributor challenges.

Contributor Challenges

The .NET OSS space can often feel like a no-win scenario – it can be very hard to get any attention on to an OSS project, and projects that do achieve a measure of success often seem to flounder for one of two reasons: the author(s) lose motivation due to the demands of a now significant audience, or Microsoft decide they need to do something in the same space, sucking all the air out of the room.

A lot of what people seem to struggle with is non-technical – how to manage an OSS project, how to publicize it and grow an audience, how to recruit fellow contributors, what opportunities exist for funding and monetising, example commercial licences. Yet I can't find any materials produced by the Foundation to help with this (Resources is essentially a list of tech books).

Once this is in place then we could start to look at maturity models and other frameworks.

Closing Thoughts

On a sunny day it seems to me the Foundation is a well meaning but clumsy communicator. On a rainy day it feels like it is far too focused on growing the ecosystem for Microsoft’s benefit with contributors at best an afterthought.

Ultimately I just think it's stuck in a bit of a weird bubble.

I’d like to see it lose a lot more Microsoft employees from the Board. Not because they are bad people but because it needs to inject a more diverse set of opinions into its leadership from across broader sections of the tech community.

It also needs to communicate before it acts on critical subjects. Not doing so means it makes easily avoidable mistakes and causes distrust – the Maturity Model was carnage, and the copyright assignment issue has the potential to be the same.

And finally: focus on contributors. They are the, pun unintended, foundation of OSS. Help them help you.

The State of Azure Deployment

(I’m happy to engage with anyone about MS / Azure about this but I don’t think their is any new feedback here sadly – its a “more of the same” thing)

This last week I needed to deploy a greenfield system to Azure for the first time in a good while and so it seemed like a good point to reflect on the state of Azure deployment. tl;dr – it was like pulling teeth.

The system is fairly typical – it uses a variety of Azure components to allow users to access a React (Fable) based website that talks to APIs (.NET 5), a PostgreSQL database, and a simple blob storage based key/document store, letting users manage risk and capabilities in their organisations.

As it's greenfield I've gone all in on managed identity and Key Vault wherever possible. I use Docker to run the API within Web Apps for Containers, and I use the Azure CDN backed by a storage account to serve the React app.

Build, release and deployment occurs via a GitHub Action with the real work taking place inside Farmer and so ultimately the deployment is based on ARM.

The system is written end to end in F# and the architecture is shown below.

Just a couple of notes on this architecture based on common questions.

  1. Why not use AKS? For something this simple? Not needed – massive overkill.
  2. Why use Docker then? I value being able to move things between vendors. For example I’ve experimented with this deployment in AWS, Azure and Digital Ocean. Using fairly standardised packaging mechanisms means I’m generally just worrying about configuration.
  3. Why not use Azure Static Web Apps? Somehow this is still in preview, and I already knew how to do the same thing with the CDN and had to do a "manual" build anyway as I am using Fable (which isn't hard – you can find some notes about that here).
  4. I thought you loved Service Bus – where's the Service Bus? v1.1 🙂

The Good

  1. Farmer is excellent – I'm a big fan of using actual code for infrastructure and see no reason at all to learn another (stunted) language like Bicep to do so. Pages of ARM become just a handful of lines of Farmer code, and as it ultimately outputs and (optionally) executes ARM templates you're not locked into anything. My final build and release script is entirely within F# – it's not a Frankenstein's monster of JSON, Bash, Powershell etc., though I do call out to az cli on occasion to work around problems in ARM.

    At this point I’ve used ARM itself, Farmer, Pulumi, Terraform and played with Bicep and my favourite of these is definitely Farmer. It saves time and reduces cognitive load.
  2. Managed Identity now feels pretty usable within Azure. Last time I tried this, support was so patchy it just wasn't worth the effort. This time round, although it's not available everywhere it is in many places, is fairly well documented, and supports local development easily – again, last time I experimented with it local development (at least on a Mac at the time) was somewhat painful.
  3. The Azure Portal is useful when you're dealing with multiple services. It's got its issues for sure but it does bring things together in a helpful way.
  4. Log Streaming for Web Apps is great – you can flick to that tab and quickly see application related issues around startup within your containers.

The Bad

  1. Error reporting from the Azure Resource Manager itself is still dreadful. At times I was faced with utterly opaque errors and at others completely misleading ones. I'm 95% sure that if I cracked the lid on the underlying code I would find things like "try … catch all … log generic error".

    Additionally, what are basically validation errors ("you can't do this with that") are left to the various resources to handle. The problem is that these won't occur until a long way into development, which makes the feedback loop torturously slow.

    You can burn days on this stuff. I understand the need to decouple resource deployment from the orchestration of it, but it's not hard to see how validation could also be decoupled and performed at an earlier stage for many errors.
  2. Read the small print! Things in Azure are in a constant state of moving forward – and this is great for the most part – but when those moving parts are foundational you can find they are only partially supported and that there are caveats. And you can still get surprised by things if you are a new Azure user: for example Functions not supporting .NET 5.
  3. Things are declarative until they are not! The defence of ARM, and now Bicep, is that it’s declarative. The problem is it really isn’t – it requires orchestration, and some of this is in areas that really need smoothing out.

    A great example of this is deployment from ACR to Web Apps for Containers. The Web App for Containers won’t deploy with CI/CD turned on until the ACR is both created and has an image inside it. This immediately means I have to split my deployment into two ARM templates and orchestrate between the two. This is such a common use case it really ought to be smoothed out – let the Web App be created but stay empty until an ACR is created and a container pushed. If I need to restart the web app afterwards, fine – but don’t prevent me from creating it.
  4. Bugs. Even in this simple deployment I’ve come across a few of these. Just a couple of examples below, but the real issue is that when you combine them with the above you end up with very confusing situations.
    1. Granting my web app’s managed identity access to blob storage fails on repeat runs of the ARM template. It will work when the account is created but fail on subsequent deployments. Workaround: do it using the Az CLI.
    2. For reasons unknown KeyVault references are not working on the Web App. Workaround: don’t use them; use the same managed identity to load them into the ASP.NET Core app configuration with AddKeyVault(). I’m hoping this is something that can be resolved but I’ve tried all sorts of things with no luck yet.
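The two-phase orchestration forced by the ACR ordering in (3) can be sketched as a handful of CLI calls. Resource group, registry, and template names here are hypothetical, and the commands are echoed rather than executed so the sequence is visible without an Azure subscription:

```shell
# Hypothetical names throughout; commands are echoed (dry-run) –
# remove the echo prefix to execute for real.
echo az deployment group create --resource-group my-rg --template-file acr.json     # 1. create the registry first
echo az acr build --registry myregistry --image myapp:latest .                      # 2. push a first image into it
echo az deployment group create --resource-group my-rg --template-file webapp.json  # 3. only now will the Web App template deploy
```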
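As a sketch of the Az CLI workaround for (1) – the principal id and storage account scope below are placeholders, and the command is echoed rather than run so it can be inspected without an Azure login:

```shell
# Placeholder values – substitute the web app's managed identity object id
# and the real storage account resource id.
PRINCIPAL_ID="00000000-0000-0000-0000-000000000000"
SCOPE="/subscriptions/SUB_ID/resourceGroups/my-rg/providers/Microsoft.Storage/storageAccounts/mystorage"

# Echoed (dry-run) – remove the echo prefix to apply the role assignment for real.
echo az role assignment create \
  --assignee "$PRINCIPAL_ID" \
  --role "Storage Blob Data Contributor" \
  --scope "$SCOPE"
```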

Closing Thoughts

Deploying to Azure is still a painful and unpredictable task for developers. It shouldn’t be. As a point of comparison this took me two days to complete for dev and live environments (the latter took about an hour) – I deployed a very similar system into AWS a couple of months back knowing absolutely nothing about AWS and it also took two days. Given I’ve been working with Azure for 10+ years that’s really disappointing.

I think one of the reasons it’s such a muddle is that Azure has been built “top down” rather than bottom up. The underlying compute, network, and identity platforms have been patched in under the original, rather humble, set of PaaS services. AWS on the other hand feels like it was built bottom up, and so while it can feel lower level than Azure it also feels more consistent and predictable.

Why don’t these time burning and infuriating issues get attention? I think there are three things at play.

Firstly – Conway’s Law. Microsoft is huge and you can see the walls between the teams and divisions bleeding into common areas like this, and there doesn’t seem to be a guiding set of minimum standards for Azure teams. And if there are, they are clearly high up on the list of things to be sacrificed when things are running late.

Secondly – these things aren’t sexy. They can’t be launched at a huge PR event. They don’t result in a 5 minute demo. They don’t sell to CTOs of large, big-spending businesses who are a big distance from this stuff, and the cost of these issues is often hidden – it’s buried in the minutiae.

Thirdly – Microsoft is at war with Eurasia and it’s always been at war with Eurasia. It’s hard to announce you’re working on fixing something that is really sub-standard without admitting you have something really sub-standard. And so, thanks to the marketing, things are awesome until they can be replaced by a new feature that can be celebrated. Take ARM as an example – celebrated and championed, despite the complaints of real world users, right up until Bicep, when ARM became the IL (though ironically it seems all the same hacks are still needed).

As ever I’m sure the Product teams are well meaning but like the rest of us they are wrapped in management, commercial concerns, and organisational systems that result in sub-optimal outcomes for users. Sadly we, as users of Azure, pay for this every day and the deployment area of Azure continues to feel at the sharp and expensive end of this.

Looking for folk to help

I’m currently taking a bit of a break from regular work. I can’t remember the last time I didn’t have a deadline or an imminent sense of expectation – I’ve been digging at the coal face of professional software development almost constantly for 27 years. I’m lucky to be in a position, through hard work over those 27 years and circumstance, that I don’t need to find paid work “tomorrow” and so I’m taking a little time out to recharge, scratch a few itches, complete a couple of side projects, and figure out what I want to do next.

I’m a week in and one of the things that dawned on me is that I’m not really talking to people in the day and I miss helping fellow developers with technical problems and career development and that general chewing of the fat that goes on. Whatever I do next has to include this!

And so I figured a way to perhaps solve that problem and give something back to the community would be to offer some of my time to help others and hopefully everyone wins.

As a starting point I figure offering a regular hour a week to 3 people would be a good place to start. A space to knowledge share, talk through technical issues, career issues, approaches, challenges, look at code – basically whatever helps. I’m likely to favor helping people at the start of their career – just because it’s so hard when you’re faced with the entire crazy world of software development, and the choices you make then reverberate down the rest of your career.

If it’s of interest then drop me a DM on Twitter with just a short note of how you think I might be able to help and we can set up a Teams or Zoom call to talk it through and see if it’s a good fit. I’m on UK time. If I end up oversubscribed (feels very hubristic to say but I think it’s important to be clear) I’ll need to filter down to 3 people and that will be based on where I think I can have the most impact.

As some background on me – I’m based in the UK and I taught myself to code in the 80s on the 8-bit micros (a BBC Model B and Master 128) and moved into professional software development in, I think, 1994. I skipped university – I had a place to do Computer Science but school had left a really nasty taste in my mouth and I managed to find a job through sending in some source code to businesses (on a 3.5″ floppy!). Over the years I’ve done just about every role from writing code to managing large teams, working in product teams, consultancy, corporates, and startups. Making things is my passion and my natural home is small teams with a good deal of end to end autonomy. Tech wise – I started out with BBC Basic and 6502 assembly. Spent most of the 90s doing C and C++ on 68000 based embedded systems and PCs. Then the 2000s have been mostly .NET and web technologies (JavaScript, TypeScript, React, CSS etc.) – more recently I’ve got heavily into functional programming with F# but still use C#. I’ve been all over Azure for the last 10 years and more recently have been dabbling with AWS and Digital Ocean.

Outside of tech – I’m (to say the least) a keen cyclist and love to cycle up mountains. Sadly I live somewhere pan-flat so COVID has been somewhat limiting this last year.


While working on a fun little F# side project over the Christmas break I had the need for a couple of resilience policies. I had a Google but couldn’t really find much in the space and what I was really looking for was an elegant way to use the battle hardened Polly with my F# code. It wasn’t critical so I parked it and moved on.

The itch kept, errr, itching and so I came back to it this last week looking to build an expressive F# feeling wrapper for Polly and the result of that is Pholly (hat tip to Scott Wlaschin for the name).

So far it includes support for the three reactive Polly policies – retry, circuit breaker and fallback – and they can be used in both asynchronous and synchronous situations. The async support makes use of tasks via the Ply task builder – this seemingly being the direction of travel and, as Isaac Abraham pointed out, with F# async we’d be wrapping one form of async computation in another.

The library is available on NuGet and GitHub, and is documented in a Dotnet Interactive notebook; as an example, constructing and using a retry policy looks like this:

let retryPolicy =
  Policy.retryAsync [
    retry (upto 10<times>)
    withIntervalsOf [50<ms> ; 150<ms> ; 500<ms>]
    beforeEachRetry (fun _ t _ -> printf "onRetry %d\n" t ; ())
  ]
let! resultOne = asyncWorkload |> retryPolicy

Future changes at this point include a more “word based” approach to policy composition, as there are strong opinions around the custom operators currently used, and the addition of the pro-active policies.

It’s an early version number on NuGet but I’m actively using it and I’ll bump it up to a 1.0.0 in fairly short order – if you hit any problems please let me know in a GitHub issue or on Twitter.

2020 Review

I think the most generous thing you could say about 2020 is that it was strange – nevertheless there have been some highs and lows and so, in true narcissistic fashion, here are some of my personal highlights and lowlights as a middle aged grumpy bastard working in tech.


A real highlight for me. Over 2019 I’d started to become ever more frustrated by C# with this exciting numbered list of issues bothering me the most:

  1. I increasingly felt I was “fighting” the language – its object-oriented roots support patterns we’ve mostly decided are unhelpful to us now, and while C# has evolved it’s still held back by those roots.
  2. Ever increasing complexity – as it’s evolved lots of complexity has been added and it shows no sign of slowing down.
  3. As a combination of (1) and (2) there is no longer an obvious path of least resistance.
  4. When I started to pick up F# some of the things I felt were helpful (immutability being a standout) were not baked into C#. They’re still not common.

In any case, I jumped ship to F# as I recognized my approach was increasingly functional with a dash of object orientation – which really is F#’s bag. There was definitely a steep learning curve to “do things well” (I mean how the actual fuck do you do a loop without being able to change a variable) but the payoff has been massive.

There is also some fantastic framework support – the SAFE stack comes to mind.

In any case – at the end of it all I’ve never felt as productive as I do with F# – it was well worth the effort to learn. And there are some fantastic folk in the F# community – helpful, friendly, thoughtful and generous with their time.

Getting to talk at fsharpConf 2020 was bloody amazing too! A real genuine highlight of 2020 for me.

Receiving an MVP Award

That was super cool. It was nice to be recognized and I was super appreciative of the award and the nomination.

I’m not sure if I’ll be renewed – my community contributions have fallen off (change of role, bit burned out with everything going on) and it’s fair to say I’m about a million miles away from toeing any kind of Microsoft line and have found myself quite critical on a number of occasions this year (I hope they appreciate critical friends….!).

OSS in .NET land

I’ve bailed on this. To me at this point it seems like something of a lost cause. A mug’s game, as I’ve written before. Yes, Microsoft are having another trip around the “how could we improve this” loop but the published pieces I’ve seen aren’t really encouraging.

I’m not going to blather on about it all here. Aaron Stannard has published many many thoughtful articles on the topic but my takeaway is: you’re better off working in another ecosystem or simply accepting the ecosystem for what it is, leveraging it as it stands, and building a product and business out of it. OSS may form part of your strategy (likely adoption and PR), it may not.


I became curious about AWS while poking around their support for ARM processors and discovering a significant economic advantage was to be had. I’ve migrated my bike performance website over to it and learned a lot through that. Ironically they seem to be making better use of .NET than Microsoft and Lambda is fantastic (having a full suite of F# blueprints puts Azure Functions to shame frankly).

If I was to compare AWS with Azure I would say that AWS feels more low level and like it is built for developers by developers with a more consistent foundation. You can get going more easily with Azure “in just 5 minutes” but once you get past that facade AWS just feels more “put together” to me. I can’t imagine working with AWS without a robust infrastructure as code solution (I’ve been using Pulumi).

If I was to start a new project today as a .NET developer what would I choose? AWS.

(and in fact I have been doing that this last couple of weeks)

A full year as CTO

If I’d known we were going to have a pandemic I’m not sure I’d have moved into a role that took me so far out of my comfort zone. It’s had highs and lows. I still have the itch to make things (myself) and COVID + CTOing has left me too exhausted to scratch it in my free time, which has been frustrating and has led to quite a few started and unfinished projects. I’m trying not to beat myself up too much about that.

Personal stuff

Like many, my mental health has definitely taken a battering this year – I’ve not been affected directly by COVID in terms of my health or my job (beyond operational issues) but I’ve seen people who have been, and it’s “always there” like a nasty hum in the background – combined with the role change it really gave me a pounding. I’d also finally, in my 40s, figured out ways to compensate for the things I don’t have in my life by putting other things in their place, and they’ve been cut off by the pandemic.

End of the year James

I crawled into the Christmas break with quite bad insomnia, what I can only describe as “micro-panics” each night when I went to bed, and an utter absence of energy and enthusiasm.

It took about 12 days of my Christmas break to start feeling back to even vaguely normal again. I’m nervous as to how I’ll get on in the first quarter of 2021 but will push on.

Looking ahead

I’ve been thinking about what I’m good at and what interests me. Really it’s doing early stage product development on small budgets / tight resource constraints with tiny teams / solopreneur land. I love it and I’ve got quite practiced at optimizing the development side of it. I’m thinking maybe there is some writing and perhaps even a business opportunity around that.

I’ve also got a truck load of product ideas. I always do….

Integrating Pulumi Stack Output with GitHub Actions

As part of migrating Performance for Cyclists to AWS I’ve been exploring the use of Pulumi to manage the infrastructure, running through GitHub Actions when I commit code (targeting dev) or when I create a release (targeting live). To do this I’m using the Pulumi GitHub Action available in the marketplace.

This has been fairly straightforward if a little verbose compared to Farmer (which I use to do the same with Azure) – with one exception: using a Pulumi Stack Output in a subsequent GitHub Action step. For example passing the URL of a provisioned application load balancer on to an acceptance test suite or the endpoint for a database that I want to run a migration on.

I scratched my head for a while before stumbling on the secret to doing this in a GitHub Issue. It took a little bit of tweaking to get it actually working due to the way the Pulumi Action wraps the output command – it was a bit fiddly getting the escape sequences right. In any case here are the steps.

Firstly after your pulumi up step add a step that looks like this:

- name: Extract stack output
  uses: docker://pulumi/actions
  id: pulumiStackOutput
  with:
    args: stack output -j | jq --raw-output "'"to_entries | map(\"::set-output name=\" + .key+\"::\" + (.value | tostring)+\"^\") | .[]"'" | xargs -d \"^\" echo
  env:
    PULUMI_ROOT: aws
    PULUMI_CI: stack
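
The jq pipeline inside that args value can be sanity-checked locally. Assuming a hypothetical stack with two outputs, it turns the JSON from pulumi stack output -j into one ::set-output line per entry (the trailing ^ and xargs dance in the Action exist only to survive the wrapper’s argument handling and are omitted here):

```shell
# Simulate `pulumi stack output -j` for a hypothetical stack, then apply
# the same jq transform to emit one ::set-output line per stack output.
echo '{"ApiUrl":"https://api.example.com","DbEndpoint":"db.example.com:5432"}' |
  jq --raw-output 'to_entries | map("::set-output name=" + .key + "::" + (.value | tostring)) | .[]'
# prints:
#   ::set-output name=ApiUrl::https://api.example.com
#   ::set-output name=DbEndpoint::db.example.com:5432
```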

It’s important your step has an ID so that you can reference it subsequently. Now let’s assume one of your output variables is ApiUrl – then you would consume it like the below (just showing it in a dotnet run type command):

- name: Test endpoint
  run: dotnet run -- ${{ steps.pulumiStackOutput.outputs.ApiUrl }}

This seems to me a common task in a real world CI/CD pipeline so it’s surprising that it’s not supported directly in the Action. As best I can tell (and I looked in the source) it doesn’t seem to be, though Pull Requests were created for it back in April. Hopefully they’ll add this capability soon, as without it this feels very sticking-plaster and over-complex. Really I just want to be able to add an option like PULUMI_EXPOSE_OUTPUTS as in the example below:

- name: Infrastructure up
  uses: docker://pulumi/actions
  with:
    args: up --yes
  env:
    PULUMI_CI: up
    PULUMI_ROOT: aws
    PULUMI_EXPOSE_OUTPUTS: true

