Month: May 2018

Azure AD B2C – A Painful Journey, Goodbye For Now

I’ve dipped in and out of Azure AD B2C since it first launched. In theory it provides a flexible and fully managed consumer identity provider inside Azure, and while I’ve had a couple of successes, after recent experiences I’ve come to the conclusion that it’s simply not ready for anything other than narrow requirements, and I am pulling my current projects away from it and onto Auth0 (this isn’t a random choice – I’ve used Auth0 before and found it to be excellent). If you’re not on that happy path it might still do what you need, but getting there is likely to be a time sink and involve a lot of preview features.

If you’re considering B2C for a project in the near future I’d suggest a couple of things:

  • Take a read of the below. It might save you some time.
  • Proof of concept all the B2C touch points (policies, customisation etc.) and ensure it will do what you need

Just a quick note on the below – I’ve discussed some of this with MS previously; hopefully they will provide some feedback and I will update this if and as appropriate.

Documentation and Ease of Adoption

One of the more serious issues for Azure AD B2C is the absolutely awful state of the documentation and samples, which often feel unfinished and half-baked. Identity, and the protocols and integration points that go with it, is complex, can be intimidating, and is important to get right – incorrect integrations can lead to security vulnerabilities.

As an example of documentation done right I think Auth0 have this nailed – they have lots of detailed documentation, samples, and tutorials on a per-framework basis, covering both common frameworks and more obscure ones, plus a component that will drop into nearly all scenarios.

In contrast Azure AD B2C, at the time of writing, doesn’t really have any of this. If you do find a link to a framework it will generally just drop you to a sample on GitHub, and these are often out of date or incomplete. For example the SPA “how-tos” don’t really tell you how to – they have you clone a sample of a plain JavaScript project (no framework) and enter the magic numbers.

If you then go and look at the MSAL library that this makes use of, a single scenario is documented (using a popup), and while this is actually a fairly comprehensive framework, if you go to its API reference you will find no content at all – it’s just the method and property signatures with no real help on how to wire this together. There’s just enough there for you to figure it out if you know the space but it doesn’t make it easy. If you’re using React I recently published a component to wrap some of this up.

The level of friction is huge and you will have to rely almost entirely on Google and external sources.

In my view the biggest single improvement the Azure AD B2C team could make right now is to take a good look at how the competing vendors have documented their products and take a similar approach for the features they have covering step by step integration with a wide variety of platforms.

Customer Experience

If you read the Azure documentation the favored approach, and what most of the limited documentation covers, is to follow the “sign in and sign up” policy route. This presents a user interface to customers that allows them to either sign in or register. Unfortunately the user experience around this is weak and nearly every client I’ve shown this to has, rightly in my view, felt that it would discourage customers from signing up – it’s all well and good having a strong call to action on your homepage, but getting dropped into this kind of decision process is poor. You can see an example of this with local accounts below – note the small “Sign up now” button at the bottom:

Additionally many sites want to take people directly to a registration page – you’d imagine you could modify the authorization request to Azure to take people straight to this section of the policy, but unfortunately that’s not possible, and so you need to introduce a separate sign up policy for this. Carrying information through the sign up process (for example an invite code or a payment plan) can also be awkward – and again poorly documented.
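To make that concrete: B2C selects the policy via the p query parameter on its authorize endpoint, so “straight to registration” means issuing a second authorize request against a dedicated sign up policy. A minimal sketch – the tenant, client ID, and policy names here are hypothetical:

```javascript
// Build a B2C authorize URL for a given policy. The "p" parameter is
// what selects the policy; everything else is standard OpenID Connect.
function buildAuthorizeUrl(tenant, clientId, policy, redirectUri) {
  const params = new URLSearchParams({
    p: policy,                 // e.g. a dedicated sign up policy
    client_id: clientId,
    response_type: 'id_token',
    scope: 'openid',
    nonce: 'defaultNonce',
    redirect_uri: redirectUri
  });
  return `${tenant}/oauth2/v2.0/authorize?${params}`;
}

// A "register now" link just points at the sign up policy:
const signUpUrl = buildAuthorizeUrl(
  '', 'my-client-id', 'B2C_1_signup', '');
```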

If you then want to use a sign in policy as a pair to this, the user experience with local accounts is truly abysmal – the form is missing labels. This screenshot was taken from an out-of-the-box, minty fresh B2C installation:

At the time of writing this has been like this for 4 days on multiple subscriptions and others have confirmed the issue for me – I’ve reported it on Twitter and via a support ticket (on Monday – someone literally just got back to me as I wrote that – I will update this with the outcome). It just seems so… shoddy. And that’s not confidence inspiring in an identity provider.

Update: I’ve had a back and forth with Azure Support over this and it’s not something they are going to fix. The advice is to use a “sign in or sign up” policy instead. Basically the sign up policy as it stands today is pretty much a legacy hangover and work is underway to address the issues. Unfortunately if you do need to split sign up from sign in (and there are many reasons: users can’t self service, account migration etc.) then your only option is to use the “sign in or sign up” policy and customise the CSS to hide the sign in link. Of course a savvy user can use browser tools to edit the CSS and put it right back – you can’t really prevent sign up with this policy. Needless to say I think this is pretty poor – it’s a pretty basic feature of an identity solution.

Finally, and rightly, Azure requires people signing up with an email address to verify that email address – however the way they do this is extremely clunky and takes the following form:

  1. User enters an email address in the sign up page.
  2. User clicks the Send Verification Code button.
  3. Microsoft send an email to the user with the code in.
  4. User checks their email (in the middle of the sign in process).
  5. The user copies and pastes the verification code from the email back into B2C.
  6. They can then hit “Create” on the registration page.

There are a couple of problems with this but the main issue is: it’s synchronous. If the user doesn’t receive the email or doesn’t have convenient access to it at the point of registration then they can’t proceed and you’ve likely lost that user. There is, as far as I’m aware, no option to send out a more traditional “click the link” email to verify an address and have this done asynchronously by the user – coupled with a claim about the email verification status, this would then allow applications to make decisions about a user’s access based on email verification.

Branding and Customization

Azure AD B2C does support a reasonable degree of flexibility around branding. Essentially you can replace all the HTML around the core components and use your own CSS to style those. But this is inconsistent in how it is handled and, again, feels rushed and unfinished.

With the sign up or sign in policy we get a good range of options and you can easily provide a custom page design by uploading an HTML file to blob storage. Great:

With the sign up policy you get the options for customization shown below, again, makes sense:

Now we come to the sign in policy:

Wait, what? Where are the options? How do we customize this? Well it turns out this is done a different way, on the Azure AD directory that the B2C directory lives within, using the “Company Branding” feature, which is more limited and only allows for basic customization:

Not only is this just, well, random (from a B2C perspective) but it makes it hard to get a unified branding experience across the piece. You essentially have to design for the lowest common denominator. A retort to this might be “well just use the sign in or sign up policy” but this isn’t always appropriate if sign ups are actually triggered elsewhere and self service isn’t enabled.

Azure Portal Experience

Firstly Azure AD B2C gets created as a resource in its own Azure AD directory which means it doesn’t really sit with the rest of the resources you are using to deliver a system, and the Azure Portal is only able to look at one directory at a time. You can link it back to a resource group, which provides a shortcut to the B2C resource, but ultimately you’ll end up having to manage the directory in another tab in your browser and, if you’re like me and work on multiple systems with multiple clients, it’s yet another directory in the list.

Not the end of the world – but it can irk.

Beyond that it suffers, like some other services do, from being squeezed into the same one-size-fits-all UI of the portal. Nothing major, but it’s not as slick as vendors like Auth0 or Okta who have dedicated user interfaces.

Finally there is an extremely annoying bug – if you’re testing the sign up and login processes there’s a pretty good chance you’ll be creating and deleting users a lot. However if you try to delete a user after pressing the refresh button (to show the newly created user) on the Users page you get a “can’t find object with ID xxxx” error and have to move out of the Users page and back again. It’s not the end of the world but it sure is aggravating and has been unfixed for as long as I can remember.

Access to the Underlying Identity Provider Token

Let’s say you’re using Facebook to log your users in and want to access the Facebook API on the user’s behalf – a fairly common requirement. You will generally require the identity and/or access token returned to you from Facebook. Azure AD B2C provides no documented way to do this – so although you can authenticate with social logins you can’t then use the APIs that go along with them.

Apparently this is possible through custom policies – after a hint from an MS PM, and from looking at the docs, I believe you can do it by introducing custom user flow policies – but this is a lot of work for a commonplace requirement that ought to be achievable much more simply.

Again in sharp contrast this is clearly documented by Auth0 and is a fairly simple API call.
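For illustration, with Auth0 the upstream provider’s token is returned on the user profile from the Management API (a GET of /api/v2/users/{id} with the read:user_idp_tokens scope); picking it out is then trivial. A sketch with an abbreviated, made-up profile:

```javascript
// The Management API user profile carries an "identities" array; the
// upstream access token sits on the entry for the matching provider.
function getIdpAccessToken(profile, provider) {
  const identity = (profile.identities || []).find(i => i.provider === provider);
  return identity ? identity.access_token : null;
}

// Abbreviated example of the profile shape (values are made up):
const exampleProfile = {
  identities: [
    { provider: 'facebook', user_id: '12345', access_token: 'EAAB...' }
  ]
};

const facebookToken = getIdpAccessToken(exampleProfile, 'facebook');
```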

Custom Claims

It’s not uncommon to want to store attributes against a user for custom claims and Azure AD B2C supports this via the Azure AD Graph API. This is a perfectly fine API and it’s fairly self explanatory, though there is a pretty good chance you will bang your head against the wall for a while with the way that attributes are identified. Essentially they take the form of:

extension_<azureAdApplicationId>_<AttributeName>
But as ever Azure AD B2C finds a way to make this unintuitive and weird: there’s no way to quickly find this in the portal, and instead you have to call the Graph API to get a list of attributes. When you do, you’ll find that the azureAdApplicationId actually comes from the Azure AD that hosts your Azure AD B2C tenant – this is not the same AD your subscription lives in, as each Azure AD B2C tenant gets created in its own Azure AD tenant (and yes, that makes for real fun with lots of directory switching in the portal). This AD will have an application in it called b2c-extensions-app:

The two blurred out sections are GUIDs – application IDs. And the azureAdApplicationId you actually need as part of the attribute name is the b2c-extensions-app ID with the dashes removed. Sigh.
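Putting that together, building the attribute name is mechanical once you’ve dug out the b2c-extensions-app ID – a sketch (the GUID and attribute name are illustrative):

```javascript
// Graph API name for a B2C custom attribute:
// extension_<b2c-extensions-app ID, dashes removed>_<AttributeName>
function extensionAttributeName(extensionsAppId, attributeName) {
  return `extension_${extensionsAppId.replace(/-/g, '')}_${attributeName}`;
}

const name = extensionAttributeName(
  '75ee2b43-ad2c-4366-9b8f-84b7d19d776e', 'InviteCode');
// => 'extension_75ee2b43ad2c43669b8f84b7d19d776e_InviteCode'
```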

Getting into the Authentication / Sign Up Flow

You can now do this through custom policies (in preview) but my goodness it’s fiddly and hard to test, and involves a fair amount of copy and pasting and calling out to remote REST APIs that you will need to implement. This could eventually provide a lot of flexibility and is a reasonable fit with Azure Functions, but given there are so many simple nuts left to crack this currently feels like the proverbial sledgehammer.

Again in contrast Auth0 make this a fair bit simpler with their rules and JavaScript approach.


To me, at this point, B2C is massively compromised by the state of the documentation and samples, compounded by some poor decisions that directly affect the customer experience in what is, for many systems, a critical part of the user acquisition journey.

It then further suffers on the developer side from inconsistencies and a somewhat byzantine approach to things.

To me it currently feels undercooked on common requirements while trying to be all things to all men via the complex custom policies.

Is it usable? Yes – if your requirements fit down its happy path and you’re prepared to put up with the many rough edges.

However if you want to do anything that veers away from straight “sign in or sign up” type usage then at this point I’d suggest staying away. I just think there are too many pitfalls; it doesn’t feel either complete or ready, and there are plenty of established vendors in the space with alternative offerings.

First time I’ve really said that about an Azure service.

Build Elegant REST APIs with Azure Functions

When I originally posted this piece the library I was using and developing was in an early stage of development. The good news is that if you’re looking to build REST APIs with Azure Functions it’s now pretty mature, well documented, and carries the name Function Monkey.

A good jumping off point is this video series here:

Or the getting started guide here:

There are also a few more posts on this blog:

Original Post

Serverless technologies bring a lot of benefits to developers and organisations running compute activities in the cloud – in fact I’d argue if you’re considering compute for your next solution or looking to evolve an existing platform and you are not considering serverless as a core component then you’re building for the past.

Serverless might not form your whole solution but for the right problem the technology and patterns can be transformational shifting the focus ever further away from managing infrastructure and a platform towards focusing on your application logic.

Azure Functions are Microsoft’s offering in this space and they can be very cost-effective as not only do they remove management burden, scale with consumption, and simplify handling events but they come with a generous monthly free allowance.

That being the case building a REST API on top of this model is a compelling proposition.

However… it’s a bit awkward. Azure Functions are more abstract than something like ASP.NET Core, having to deal with all manner of events in addition to HTTP. For example the out-of-the-box example for a function that responds to an HTTP request looks like this:

public static class Function1
{
    [FunctionName("Function1")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req,
        TraceWriter log)
    {
        log.Info("C# HTTP trigger function processed a request.");

        string name = req.Query["name"];

        string requestBody = new StreamReader(req.Body).ReadToEnd();
        dynamic data = JsonConvert.DeserializeObject(requestBody);
        name = name ?? data?.name;

        return name != null
            ? (ActionResult)new OkObjectResult($"Hello, {name}")
            : new BadRequestObjectResult("Please pass a name on the query string or in the request body");
    }
}

It’s missing all the niceties that come with a more dedicated HTTP framework, there’s no provision for cross-cutting concerns, and if you want a nice public route to your Function you also need to build out proxies in a proxies.json file.

I think the boilerplate and cruft that goes with a typical ASP.NET Core project is bad enough and so I wouldn’t want to build out 20 or 30 of those to support a REST API. It’s not that there’s anything wrong with what these teams have done (ASP.NET Core and Azure Functions) but they have to ship something that allows for as many user scenarios as possible – whereas simplifying something is generally about making decisions and assumptions on behalf of users and removing things. That I’ve been able to build both my REST framework and now this on top of the respective platforms is testament to a job well done!

In any case to help with this I’ve built out a framework that takes advantage of Roslyn and my commanding mediator framework to enable REST APIs to be created using Azure Functions in a more elegant manner. I had some very specific technical objectives:

  1. A clean separation between the trigger (Function) and the code that executes application logic
  2. Promote testable code
  3. No interference with the Function runtime
  4. An initial focus on HTTP triggers but extensible to support other triggers
  5. Support for securing Functions using token based security methods such as Open ID Connect
  6. Simple routing
  7. No complicated JSON configuration files
  8. Open API / Swagger generation – under development
  9. Validation – under development

Probably the easiest way to illustrate how this works is by way of an example – so fire up Visual Studio 2017 and follow the steps below.

Firstly create a new Azure Function project in Visual Studio. When you’re presented with the Azure Functions new project dialog make sure you use the Azure Functions v2 Preview and create an Empty project:

After your project is created you should see an empty Azure Functions project. The next step is to add the required NuGet packages – using the Package Manager Console run the following commands:

Install-Package FunctionMonkey -pre
Install-Package FunctionMonkey.Compiler -pre

The first package adds the references we need for the commanding framework and Function specific components while the second adds an MSBuild build target that will be run as part of the build process to generate an assembly containing our Functions and the corresponding JSON for them.

Next create a folder in the project called Model and into that add a class named BlogPost:

class BlogPost
{
    public Guid PostId { get; set; }

    public string Title { get; set; }

    public string Body { get; set; }
}

Next create a folder in the solution called Queries and into that add a class called GetBlogPostQuery:

public class GetBlogPostQuery : ICommand<BlogPost>
{
    public Guid PostId { get; set; }
}

This declares a command which when invoked with a blog post ID will return a blog post.

Now we need to write some code that will actually handle the invoked command – we’ll just write something that returns a blog post with some static content but with a post ID that mirrors that supplied. Create a folder called Handlers and into that add a class called GetBlogPostQueryHandler:

class GetBlogPostQueryHandler : ICommandHandler<GetBlogPostQuery, BlogPost>
{
    public Task<BlogPost> ExecuteAsync(GetBlogPostQuery command, BlogPost previousResult)
    {
        return Task.FromResult(new BlogPost
        {
            Body = "Our blog posts main text",
            PostId = command.PostId,
            Title = "Post Title"
        });
    }
}

At this point we’ve written our application logic and you should have a solution structure that looks like this:

With that in place it’s time to surface this as a REST endpoint on an Azure Function. To do this we need to add a class to the project that implements the IFunctionAppConfiguration interface. This class is used in two ways: firstly the FunctionMonkey.Compiler package will look for it in order to compile the assembly containing our function triggers and the associated JSON; secondly it will be invoked at runtime to provide an operating environment that supplies implementations for our cross-cutting concerns.

Create a class called ServerlessBlogConfiguration and add it to the root of the project:

public class ServerlessBlogConfiguration : IFunctionAppConfiguration
{
    public void Build(IFunctionHostBuilder builder)
    {
        builder
            .Setup((serviceCollection, commandRegistry) => commandRegistry.Discover<ServerlessBlogConfiguration>())
            .Functions(functions => functions
                .HttpRoute("/api/v1/post", route => route
                    .HttpFunction<GetBlogPostQuery>("/{postId}", HttpMethod.Get)));
    }
}
The interface requires us to implement the Build method, which is supplied an IFunctionHostBuilder, and it’s this we use to define both our Azure Functions and a runtime environment. In this case it’s very simple.

Firstly in the Setup method we use the supplied commandRegistry (an ICommandRegistry interface – for more details on my commanding framework please see the documentation here) to register our command handlers (our GetBlogPostQueryHandler class) via a discovery approach (supplying ServerlessBlogConfiguration as a reference type for the assembly to search). The serviceCollection parameter is an IServiceCollection interface from Microsoft’s IoC extensions package that we can use to register any further dependencies.

Secondly we define our Azure Functions based on commands. As we’re building a REST API we can also group HTTP functions by route (this is optional – you can just define a set of functions directly without routing) essentially associating a command type with a verb. (Quick note on routes: proxies don’t work in the local debug host for Azure Functions but a proxies.json file is generated that will work when the functions are published to Azure).

If you run the project you should see the Azure Functions local host start and a HTTP function that corresponds to our GetBlogPostQuery command being exposed:

The function naming uses a convention-based approach which gives each function the same name as the command but with a postfix of Command or Query removed – hence GetBlogPost.
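My reading of that convention can be sketched in a couple of lines (an illustration, not Function Monkey’s actual implementation):

```javascript
// Derive the generated function name from a command type name by
// stripping a trailing "Command" or "Query" postfix.
function functionNameFor(commandTypeName) {
  return commandTypeName.replace(/(Command|Query)$/, '');
}

// functionNameFor('GetBlogPostQuery')       => 'GetBlogPost'
// functionNameFor('CreateZipBackupCommand') => 'CreateZipBackup'
```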

If we run this in Postman we can see that it works as we’d expect – it runs the code in our GetBlogPostQueryHandler:

The example here is fairly simple but already a little cleaner than rolling out functions by hand. However it starts to come into its own when we have more Functions to define. Let’s elaborate on our configuration block:

public class ServerlessBlogConfiguration : IFunctionAppConfiguration
{
    private const string ObjectIdentifierClaimType = "";

    public void Build(IFunctionHostBuilder builder)
    {
        builder
            .Setup((serviceCollection, commandRegistry) =>
            {
                serviceCollection.AddTransient<IPostRepository, CosmosDbPostRepository>();
                commandRegistry.Discover<ServerlessBlogConfiguration>();
            })
            .Authorization(authorization => authorization
                .Claims(mapping => mapping
                    .MapClaimToCommandProperty(ObjectIdentifierClaimType, "AuthenticatedUserId")))
            .Functions(functions => functions
                .HttpRoute("/api/v1/post", route => route
                    .HttpFunction<GetBlogPostQuery>("/{postId}", HttpMethod.Get))
                .HttpRoute("/api/v1/user", route => route
                    .HttpFunction<GetUserBlogPostsQuery>("/posts", HttpMethod.Get))
                .StorageQueueFunction<CreateZipBackupCommand>("StorageAccountConnectionString", "BackupQueue"));
    }
}

In this example we’ve defined more API endpoints and we’ve also introduced a Function with a storage queue trigger – this will behave just like our HTTP functions but instead of being triggered by an HTTP request will be triggered by an item on a queue, applying the same principles to this trigger type (note: I haven’t yet pushed this to the public package).

You can also see me registering a dependency in our IoC container – this will be available for injection across the system and into any of our command handlers.

We’ve also added support for token-based security with our Authorization block – this adds in a class that validates tokens and builds a ClaimsPrincipal from them, which we can then use by mapping claims onto the properties of our commands. This works in exactly the same way as it does in my REST API commanding library, and with or without claims mapping or Authorization, sensitive properties can be prevented from user access with the SecurityPropertyAttribute, just as in the REST API library.

The code for the above can be found in GitHub.

Development Status

The eagle-eyed will have noticed that the packages I’ve referenced here are in preview (as is the v2 Azure Functions runtime itself) and for sure I still have more work to do, but they are already usable and I’m using them in three different serverless projects at the moment – as such, development on them is moving quite fast; I’m essentially dogfooding.

As a rough roadmap I’m planning on tackling the following things (no particular order, they’re all important before I move out of beta):

  1. Fix bugs and tidy up code (see 6. below)
  2. Documentation
  3. Validation of input (commands)
  4. Open API / Swagger generation
  5. Additional trigger / function support
  6. Return types
  7. Automated tests – lots of automated tests. Currently the framework is not well covered by automated tests – mainly because this was a non-trivial thing to figure out. I wasn’t quite sure what was going to work and what wouldn’t and so a lot of the early work was trying different approaches and experimenting. Now that’s all settled down I need to get some tests written.

I’ve pushed this out as a couple of people have been asking if they can take a look and I’d really like to get some feedback on it. The code for the implementation of the NuGet packages is in GitHub here (make sure you’re in the develop branch).

Please do let me have any feedback over on Twitter or on the GitHub Issues page for this project.

Using ReactJS with Azure AD B2C

Azure AD B2C is Microsoft’s identity provider for social and enterprise logins, allowing you to, for example, unify the login process across Twitter, Facebook, and Azure AD / Office 365. It comes with a generous free tier, and beyond that the pricing is reasonable, particularly compared to the pricing for “enterprise” logins with some of the competition.

However the downside is that the documentation for B2C and its integration with specific technologies isn’t that clear – there’s nothing particularly strange about B2C, ultimately it’s just an OpenID Connect identity provider, but there is some nuance to it.

In parallel Microsoft provide MSAL (Microsoft Authentication Library) for handling authentication from JavaScript clients, and here the documentation is clearer but still a little incomplete, and it can be difficult to figure out the implementation required for a particular scenario – not helped by the library reference having no content other than to repeat method definitions.

I’m currently working with a handful of projects based around React JS, Azure AD B2C, and a combination of ASP.Net Core MVC and Azure Functions and found myself grappling with this. What I was doing seemed eminently reusable (and I hope useful) and so I set some time aside to take what I’d learned and create a B2C specific npm package – react-azure-adb2c.

To install it if you’re using npm:

npm install react-azure-adb2c --save

Or if you’re using yarn:

yarn add react-azure-adb2c

Before continuing you’ll need to set up Azure AD B2C for API access and the three tutorials here are a reasonably easy-to-follow guide on how to do that. At the end of that process you should have a tenant name, a sign in and/or sign up policy, an application ID, and one or more scopes.

The first change to make in your app to use the package is to initialize it with your B2C details:

import authentication from 'react-azure-adb2c';

  // your B2C tenant
  tenant: '',
  // the policy to use to sign in, can also be a sign up or sign in policy
  signInPolicy: 'mysigninpolicy',
  // the B2C application you want to authenticate with
  applicationId: '75ee2b43-ad2c-4366-9b8f-84b7d19d776e',
  // where MSAL will store state - localStorage or sessionStorage
  cacheLocation: 'sessionStorage',
  // the scopes you want included in the access token
  scopes: [''],
  // optional, the URI to redirect to after logout
  postLogoutRedirectUri: ''
});

There are then two main ways you can use the library. You can either protect the entire application (for example if you have a React app that is launched from another landing area) or specific components. To protect the entire application simply wrap the app startup code in index.js as shown below: => {
  ReactDOM.render(<App />, document.getElementById('root'));

To require authentication for specific components the react-azure-adb2c library provides a function that will wrap a component in a higher order component as shown in the example below:

import React, { Component } from 'react';
import authentication from 'react-azure-adb2c'
import { BrowserRouter as Router, Route, Switch } from "react-router-dom";
import HomePage from './Homepage'
import MembersArea from './MembersArea'

class App extends Component {
  render() {
    return (
      <Router basename={process.env.PUBLIC_URL}>
        <Switch>
          <Route exact path="/" component={HomePage} />
          <Route exact path="/membersArea" component={authentication.required(MembersArea)} />
        </Switch>
      </Router>
    );
  }
}
And finally to get the access token to use with API calls:

import authentication from 'react-azure-adb2c'

// ...

const token = authentication.getAccessToken();
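From there the token is typically attached as a bearer credential on API requests – a quick sketch (the helper and the endpoint are mine, not part of the package):

```javascript
// Build request headers carrying an access token as a bearer
// credential; spread in any additional headers the call needs.
function withBearerToken(token, extraHeaders = {}) {
  return { ...extraHeaders, Authorization: `Bearer ${token}` };
}

// e.g.:
// const token = authentication.getAccessToken();
// fetch('', { headers: withBearerToken(token) });
```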

If you find any issues please let me know over on GitHub.

Hopefully that’s useful and takes some of the pain out of using ReactJS with Azure AD B2C and as ever I can be reached on Twitter for discussion.



Azure Cosmos DB and its Perplexing Pricing Problems

I’ve had a lot of success with Azure Cosmos DB and have regularly tweeted about how “it does what it says on the tin” – and while the Graph support has been a bit of a bumpy .NET developer experience, other than that it, so far (admittedly at limited volume in real world scenarios), looks good too. It’s certainly looking like a success for Microsoft, though it’s had a massive marketing push over the last year.

However Microsoft seem to be continually undermining it with ridiculous pricing levels that put it out of reach of smaller, more cost-conscious customers and prevent it from being a truly end-to-end scalable database solution for the cloud – which is disappointing given how it is usually pitched as the de facto database for Azure.

Until this week’s Build event pricing was governed by the amount of request units (RUs) you applied to a collection (or Graph), collections essentially being sets of documents that share a partition key.

There are two types of collection: fixed and unlimited. Fixed is:

  • Un-partitioned (no partition key is specified).
  • The minimum number of RUs you can allocate is 400.
  • The maximum is 10000 RUs.
  • Maximum storage of 10GB.

Whereas unlimited is:

  • Partitioned – you need to supply a partition key when you create the collection and structure your code to make use of it.
  • Pricing starts at 1,000 RUs (down from its previous entry point of 2,500 RUs).
  • There is no maximum limit, although if you want to go beyond 100,000 RUs you might need to call Azure Support.
  • There is no maximum limit for storage.

To put those into financial context fixed pricing begins at around $23 per month per collection and unlimited at $58 per month per collection.
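Those figures follow from the rate card at the time – roughly $0.008 per 100 RU/s per hour, over ~730 hours in a month (treat the rate as approximate):

```javascript
// Back-of-envelope monthly cost for a provisioned RU level, assuming
// ~$0.008 per 100 RU/s per hour and ~730 hours per month (2018 rates).
function monthlyCostUsd(rus) {
  return (rus / 100) * 0.008 * 730;
}

// monthlyCostUsd(400)   ≈ $23    (fixed collection minimum)
// monthlyCostUsd(1000)  ≈ $58    (unlimited collection minimum)
// monthlyCostUsd(50000) ≈ $2,920 (the new database-level minimum)
```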

On its own that’s pretty reasonable, but remember this is per collection, so if you have multiple types of document that’s potentially multiple collections. I’ve heard two workarounds suggested for this. The first is to store multiple document types inside a single collection – the problem with this is that when you design a storage strategy for a solution you often know you are going to outgrow this kind of bodge, so you’re essentially being forced to incur technical debt from day one (you’re just trading hosting costs for development cost). The second is to “take advantage of our free plans” and, while it’s great to see free plans available, it’s hard not to feel like you’ve just set a timer running on a small explosive and sat on it; plus, combined with other service usage, those plans might not be sufficient: Cosmos is unlikely to be running on its own. (As an aside – why is this such an issue? Different document types often require different partition keys and strategies based on both their content and the characteristics of the queries you want to run against them – it’s one of the most important considerations for the storage strategy with a database like Cosmos.)

Now, with the announcement at Build, we're able to provision RUs at the database level, allowing the collections in that database to share them. To me that sounded great: being able to provision, say, a 2,500 RU database and have it sit over 10 collections that I could later pull out and provision with their own RU allocations as my needs grow. Unfortunately the devil was in the detail on this one – visiting the Azure pricing page, the minimum RU level for a database has been set at 50,000. That's approximately $3,000 per month and, as Howard van Rooijen noted on Twitter, "feels like you're renting the cluster". I completely agree, and it's not the first time in the last few months I've felt that Azure seemingly has an internal inability to operate at a granular level, and that this is limiting its services.

Maybe Microsoft feel that ignoring the cost-conscious segment of the market is OK, but that runs the risk of alienating up-and-coming startups or turning them towards competing services – the cloud is an increasingly competitive market and Microsoft are still the underdog. And with the advent of .NET Core and its cross-platform capabilities it's getting easier than ever to run .NET code on other vendors' platforms – sometimes even better than on Azure. For example, Azure Functions have come on in leaps and bounds but are still beset by problems that are not an issue on AWS Lambda.

In fact, in the same space Cosmos plays in, AWS have DynamoDB as a document database and Neptune as a graph database. I'm no expert in either, but they seem to have far more proportional pricing models than Cosmos and increasingly look worth investigating. And that's dangerous for Microsoft – I've been using both .NET and Azure since they respectively launched, so I'm pretty much in their developer heartland – but the more I'm pushed away by decisions like this, the more likely it is that I, and others, jump in a more wholesale manner. It's really not the message they want to be leaving us with at one of their premier events.

I guess it's a case of watch this space. This pricing model is in preview and there's a track record of backtracking on RU limits, but getting down from 50,000 RUs to something affordable – that's going to be a massive change.

I’d love to see it happen.

Publishing to GitHub Pages from Visual Studio Team Services

During the last major release of my commanding framework I wrote a lot of new documentation. I wrote this in Markdown and converted it to a documentation website using DocFX, resulting in this website.

DocFX has a VSTS extension which makes it easy to integrate into your build and release pipeline, but I couldn't find any simple way to publish the output to GitHub Pages, which is what I am using as a simple, and free, host. My initial solution was to use a short PowerShell script which took the following steps as part of a Release Plan:

  1. Clone the gh-pages branch from my repository into a folder
  2. Copy the output from the DocFX build artefact to the folder
  3. Commit the changes
  4. Push the branch to GitHub (which publishes the changes)
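For anyone curious what those four steps look like in code, here's a rough, hypothetical sketch in Python (not my original PowerShell script – the repository URL and site folder are assumed inputs):

```python
import shutil
import subprocess
import tempfile

def publish(repo_url: str, site_dir: str,
            message: str = "Publish documentation") -> None:
    """Publish a generated documentation site to a gh-pages branch."""
    work = tempfile.mkdtemp()
    # 1. Clone only the gh-pages branch into a scratch folder
    subprocess.run(["git", "clone", "--branch", "gh-pages",
                    "--single-branch", repo_url, work], check=True)
    # 2. Copy the generated site over the working tree
    shutil.copytree(site_dir, work, dirs_exist_ok=True)
    # 3. Commit the changes (git commit fails on an empty diff, hence the guard)
    subprocess.run(["git", "add", "-A"], cwd=work, check=True)
    diff = subprocess.run(["git", "diff", "--cached", "--quiet"], cwd=work)
    if diff.returncode != 0:
        subprocess.run(["git", "commit", "-m", message], cwd=work, check=True)
        # 4. Push the branch, which makes GitHub Pages serve the new content
        subprocess.run(["git", "push", "origin", "gh-pages"], cwd=work, check=True)
```

The empty-diff guard is worth having in any variant of this: re-running a release with unchanged documentation shouldn't fail the pipeline.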

This worked nicely, but it irked me not being able to do it in a more off-the-shelf manner, so I've tidied this up and released a VSTS build and release task to the marketplace. It comes complete with documentation and hopefully it's useful to people. The source code is available on GitHub.

Writing a simple VSTS build and release task is well documented and there are plenty of blog posts on the topic, so I won't repeat that here, but I did think it might be helpful to quickly cover a couple of things that often aren't covered.

Accessing Variables

I needed to find the default working folder within my task, as I needed scratch space for cloning the gh-pages branch. VSTS exposes this as a built-in variable called System.DefaultWorkingDirectory – it's a commonly used variable and you've probably seen it expressed as $(System.DefaultWorkingDirectory) when configuring release tasks.

You can access variables from within a script by using the Get-VstsTaskVariable cmdlet. For example:

$defaultWorkingDirectory = Get-VstsTaskVariable -Name 'System.DefaultWorkingDirectory'

Reporting Errors

You can write to the VSTS log using the Write-Host cmdlet but I wanted a nice red error message when things went wrong. You can do this by prefixing your message as shown below:

Write-Host "##vso[task.logissue type=error;]Your error message"

To exit the script and have it fail you need to exit with a non-zero return code, using the Exit method on System.Environment:

[System.Environment]::Exit(1)

And that’s it. As always I hope this is useful and I can be reached for discussion and questions on Twitter.



