Having conducted my ARM and x64 tests on AWS yesterday, I was curious to see how Azure would fare – it doesn’t support ARM, but ultimately ARM is a mechanism for delivering value (performance and price) and not an end in and of itself. And so this evening I set about replicating the tests on Azure.
In the end I’ve massively limited my scope to two instance sizes:
- A2 – this has 2 CPUs and 4GB of RAM (much more RAM than yesterday’s instances) and costs $0.120 per hour
- B1S – a burstable VM that has 1 CPU and 1GB of RAM (so most similar to yesterday’s t2.micro) and costs $0.0124 per hour
Note – I’ve begun to conduct tests on the D series too; preliminary findings are that the D1 is similar to the A2 in its performance characteristics.
I was struggling to find Azure VMs with the same pricing as AWS and so had to start with a burstable VM to get something in the same kind of ballpark. Not ideal, but those are the chips you are dealt on Azure! I started with the B1S, which was still more expensive than the ARM VM. I created the VM, installed the software, and ran the tests – the machine comes with 30 credits for bursting. However, after running the tests several times it was still performing consistently, so the credits were either exhausted quickly, made little difference, or were being consumed at a steady rate.
I moved to the A2_V2 because, frankly, the performance on my early tests with the B1S was dreadful, and I also wanted something that wouldn’t burst. I was also trying to match the spec of the AWS machines – 2 cores and 1GB of RAM. I’ll attempt the same tests with a D series when I can.
The test setup was the same as yesterday, and all tests were run against VMs accessed directly on their public IPs, using Apache as a reverse proxy in front of Kestrel and our .NET application.
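For reference, the reverse-proxy setup looks roughly like this – a minimal sketch only; the port (5000) and server name here are assumptions, not the actual values used:

```apache
# Requires mod_proxy and mod_proxy_http to be enabled
<VirtualHost *:80>
    ServerName example.com

    # Forward all traffic to the Kestrel server hosting the .NET application
    ProxyPreserveHost On
    ProxyPass / http://127.0.0.1:5000/
    ProxyPassReverse / http://127.0.0.1:5000/
</VirtualHost>
```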
I’ve left the t2.micro instance out of this analysis.
With 2 clients per test we see the following response times:
We can see that the two Azure instances are already off to a bad start on this computationally heavy test.
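The computationally heavy test renders a Mandelbrot set on each request. As a rough sketch of the kind of CPU-bound work involved – the grid size and iteration cap below are illustrative assumptions, not the actual test parameters:

```csharp
using System;

// Escape-time iteration count for the point (cr, ci), capped at maxIter.
int Iterations(double cr, double ci, int maxIter)
{
    double zr = 0.0, zi = 0.0;
    int i = 0;
    while (i < maxIter && zr * zr + zi * zi <= 4.0)
    {
        double t = zr * zr - zi * zi + cr;
        zi = 2.0 * zr * zi + ci;
        zr = t;
        i++;
    }
    return i;
}

// Render a small grid over the usual region (-2..1 real, -1.5..1.5 imaginary)
// and sum the iteration counts - a stand-in for the per-request CPU work.
long Render(int width, int height, int maxIter)
{
    long total = 0;
    for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++)
            total += Iterations(-2.0 + 3.0 * x / (width - 1),
                                -1.5 + 3.0 * y / (height - 1), maxIter);
    return total;
}

Console.WriteLine(Render(64, 64, 256));
```

Work like this is pure arithmetic with no I/O waits, so throughput tracks raw CPU performance almost directly – which is why it separates the instance types so sharply.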
At 10 clients per second we continue to see this reflected:
However at this point the two Azure instances begin to experience timeout failures (the threshold being set at 10 seconds in the load tester):
The A2_V2 instance fares particularly badly given that it is 10x the cost of the AWS instances.
Unfortunately there is no meaningful comparison I can make under higher load, as both Azure instances collapse when I push to 15 clients per second. For completeness’ sake, here are the results on AWS at 20 clients per second (average response and total requests):
Simulated Async Workload
With our simulated async workload Azure fares better at low scale. Here are the results at 20 requests per second:
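The simulated async workload stands in for I/O-bound requests: each request awaits a delay rather than burning CPU, so threads are returned to the pool while waiting. A minimal sketch of the idea – the delay and request count are illustrative assumptions, not the actual test parameters:

```csharp
using System;
using System.Diagnostics;
using System.Threading.Tasks;

// Simulated async "request": awaits a delay standing in for a downstream
// call (database, HTTP, etc.) rather than doing CPU work.
async Task<string> HandleRequestAsync(int delayMs)
{
    await Task.Delay(delayMs); // non-blocking wait - the thread is freed
    return "ok";
}

// Fire a batch of concurrent requests and time them. Because the waits
// overlap, 100 x 50ms requests complete in roughly 50ms, not 5 seconds.
var sw = Stopwatch.StartNew();
var tasks = new Task<string>[100];
for (int i = 0; i < tasks.Length; i++)
    tasks[i] = HandleRequestAsync(50);
await Task.WhenAll(tasks);
sw.Stop();
Console.WriteLine($"{tasks.Length} requests in {sw.ElapsedMilliseconds}ms");
```

Because the work is mostly waiting rather than computing, this style of load is far less sensitive to raw CPU speed – which is why the gap between the vendors narrows here.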
As we push the scale up things get interesting with different patterns across the two vendors. Here are the average response times at 200 clients per second:
At first glance AWS looks to be running away with it; however, both the t4g.micro and t3.micro suffer from performance degradation at the extremes – the max response time is 17 seconds for both, while for the Azure instances it is around 9 seconds.
You can see this reflected in the success and total counts where the AWS instances see a number of timeout failures (> 10 seconds) while the Azure instances stay more consistent:
However the AWS instances have completed many more requests overall. I’ve not done a percentile breakdown (see comments yesterday) but it seems likely that at the edges AWS is fraying and degrading more severely than Azure leading to this pattern.
The different VMs clearly have different strengths and weaknesses; however, in the computational test the Azure results are disappointing – the VMs are more expensive yet, at best, offer performance with different characteristics (more consistent when pushed but lower average performance – pick your poison) and, at worst, offer much lower performance and far less value for money. They seem to struggle with computational load and nosedive rapidly when pushed in that scenario.
| Test | Vendor | Instance | Clients per second | Min | Max | Average | Successful Responses | Timeouts |
|---|---|---|---|---|---|---|---|---|
| Mandelbrot | Azure | A2_V2 (x64) | 15 | ERROR RATE TOO HIGH | | | | |
| Mandelbrot | Azure | A2_V2 (x64) | 20 | ERROR RATE TOO HIGH | | | | |
| Mandelbrot | Azure | B1S (x64) | 15 | ERROR RATE TOO HIGH | | | | |
| Mandelbrot | Azure | B1S (x64) | 20 | ERROR RATE TOO HIGH | | | | |