Debugging the Visual Studio Team Services Build and Release System

I’ve been doing a lot of work with the new task-based build and release system in VSTS which, thankfully, is immeasurably saner than the XAML abomination Microsoft have subjected us to in the past.

Mostly things have “just worked” but on a couple of occasions, normally while deploying to Azure, I’ve had inscrutable summary error messages presented to me and needed to know more about what exactly was run.

It turns out it’s really easy to flip the whole system into verbose logging mode by adding a variable to your build or release called System.Debug and setting its value to True.
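In the build or release definition this is just an ordinary variable – something like this in the Variables tab:

```
Name:  System.Debug
Value: True
```

Queue a new build (or create a new release) after adding it and the logs will contain the verbose output.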

Super useful and on each occasion I’ve had an issue this has helped me get to the bottom of it.

PowerShell, Binding Redirects, and Visual Studio Team Services

I’ve blogged previously about setting up binding redirects for PowerShell, with Newtonsoft.Json being a particularly troublesome package – it’s such a common dependency for NuGet packages that in a complex project you’ll almost certainly need a redirect in your app.config or web.config files to get things to play ball, and if you use the Azure cmdlets alongside others (such as your own) you’re likely to face the same problem in PowerShell.

I’ve recently moved my projects into Visual Studio Team Services using the new (vastly improved!) scriptable build system, where I often make use of the PowerShell script task to perform custom actions. If you hit a dependency issue that requires a binding redirect to resolve, my previous approach of creating a PowerShell.exe.config file won’t work in VSTS: unless you build a custom build agent you don’t have access to the machine at that level.

After a bit of head scratching I came up with an alternative solution that in many ways is neater and more portable, as it doesn’t require any special machine setup. My revised approach is to hook the AppDomain’s AssemblyResolve event and return a preloaded target assembly, as shown in the example below:
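A minimal sketch of the idea, assuming Newtonsoft.Json is the assembly that needs redirecting and that the version you want to win sits alongside the script (adjust both for your own dependency):

```powershell
# Preload the version of Newtonsoft.Json we want everything to use.
$targetAssembly = [System.Reflection.Assembly]::LoadFrom("$PSScriptRoot\Newtonsoft.Json.dll")

# Hook AssemblyResolve directly on the AppDomain with a delegate.
$onAssemblyResolve = [System.ResolveEventHandler] {
    param($sender, $e)
    if ($e.Name -like 'Newtonsoft.Json, *') { return $targetAssembly }
    foreach ($assembly in [System.AppDomain]::CurrentDomain.GetAssemblies()) {
        if ($assembly.FullName -eq $e.Name) { return $assembly }
    }
    return $null
}
[System.AppDomain]::CurrentDomain.add_AssemblyResolve($onAssemblyResolve)

# ... run the cmdlets that need the redirect here ...

# Unhook when done so the handler doesn't outlive the script.
[System.AppDomain]::CurrentDomain.remove_AssemblyResolve($onAssemblyResolve)
```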

Note that you can’t use the more common Register-ObjectEvent method of subscribing to events as this will balk at the need for a return value.

You can of course use this technique to deal with other assemblies that might be giving you issues.

Capturing and Tracing All HTTP Requests in C# and .Net

Modern applications are complex and often rely on a large number of external resources, increasingly accessed over HTTP – most Azure services, for example, are interacted with over HTTP.

That being the case, it can be useful to get a view of the requests your application is making, and while this can be done with a tool like Fiddler, that’s not always convenient in a production environment.

If you’re using the HttpClient class another option is to pass a custom message handler to its constructor, but this relies on you being in direct control of all the code making HTTP requests, and that’s unlikely.
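For comparison, a sketch of that message-handler approach (the class name here is illustrative):

```csharp
using System.Diagnostics;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

// Logs each request - but only for HttpClient instances you construct
// yourself with this handler, which is the limitation described above.
public class LoggingHandler : DelegatingHandler
{
    public LoggingHandler() : base(new HttpClientHandler()) { }

    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        Trace.WriteLine($"HTTP {request.Method} {request.RequestUri}");
        return await base.SendAsync(request, cancellationToken);
    }
}

// Usage: var client = new HttpClient(new LoggingHandler());
```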

A simple way of capturing this information, without getting into the unpleasantness of writing a TCP listener or HTTP proxy, is to use the System.Diagnostics.Tracing namespace. From .NET 4.5 onwards the framework has been writing HTTP events to the System.Diagnostics.Eventing.FrameworkEventSource event source. This isn’t well documented and I found the easiest way to figure out what events are available, and their event IDs, is to read the reference source.

Once you’ve found the HTTP events it’s quite straightforward to write an event listener for this source. The class below will do this and output the details to the trace writer (so you can view them in the Visual Studio Debug Output window), but you can just as easily send them to a file, table storage, or any other output of your choosing.
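A sketch of such a listener – the event source name is the one given above, but the keyword mask is an assumption to check against the reference source for your framework version, as are the event IDs you’ll want to filter on:

```csharp
using System.Diagnostics;
using System.Diagnostics.Tracing;

// Traces every event raised by FrameworkEventSource; filter on
// eventData.EventId once you've identified the HTTP events in the source.
public class HttpEventListener : EventListener
{
    protected override void OnEventSourceCreated(EventSource eventSource)
    {
        if (eventSource.Name == "System.Diagnostics.Eventing.FrameworkEventSource")
        {
            // 0x4 is the NetClient keyword in the reference source.
            EnableEvents(eventSource, EventLevel.Verbose, (EventKeywords)0x4);
        }
    }

    protected override void OnEventWritten(EventWrittenEventArgs eventData)
    {
        var payload = eventData.Payload == null
            ? string.Empty
            : string.Join(", ", eventData.Payload);
        Trace.WriteLine($"Event {eventData.EventId}: {payload}");
    }
}
```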

To set it running all you need to do is instantiate the class.
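For example, keeping a reference so it isn’t garbage collected:

```csharp
var httpEventListener = new HttpEventListener();
// ... make some HTTP requests; the details appear in the Debug Output window ...
```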

If you’d like to see this kind of data and much more collected, correlated and analysed, then you might want to check out my project Hub Analytics, which is currently running a free beta.

Changing the App Service Plan of an Azure App Service

To allow a number of App Services to scale independently I needed to pull one of them out of the App Service Plan where it had lived with three others and into its own plan – experience had shown me that its scaling characteristics are quite different from those of the other App Services.

You can do this straightforwardly and pretty much instantly, either in the Portal (there’s a Change App Service Plan option in Settings) or with PowerShell (the Set-AzureRmWebApp cmdlet and its -AppServicePlan parameter).
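A sketch of the PowerShell route – the resource group, app, and plan names are hypothetical, and the app and target plan need to be in the same resource group and region:

```powershell
# Move the app service into its own dedicated plan.
Set-AzureRmWebApp -ResourceGroupName "my-resource-group" `
                  -Name "my-app-service" `
                  -AppServicePlan "my-dedicated-plan"
```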

Super simple – but I did encounter one gotcha: this doesn’t move any deployment slots you might have created, so you can end up with the main App Service sat in one App Service Plan and its deployment slots in another. That’s probably not what you want and, in any case, Azure won’t let you swap slots that are in different service plans.

The solution is simple: you can also move them between App Service Plans in the same way.
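The slot cmdlet appears to mirror the app-level one – treat the -AppServicePlan parameter here as an assumption to verify against your version of the Azure cmdlets:

```powershell
# Move the staging slot into the same plan as its parent app service.
Set-AzureRmWebAppSlot -ResourceGroupName "my-resource-group" `
                      -Name "my-app-service" `
                      -Slot "staging" `
                      -AppServicePlan "my-dedicated-plan"
```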

Serving Static Markdown Content from ASP.Net MVC to JavaScript

I recently moved a bunch of documentation into the Markdown format as I wanted to render it into multiple output formats and inside multiple hosting technologies – including an AngularJS based single page application.

While doing this I decided it would make sense to have a single source of truth for these files, so I placed them as content inside my MVC 5 based website, which is entirely public access. After dropping them into the website the first step is to configure ASP.Net to serve the content, which involves adding the below to a web.config file:
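Something along these lines – mapping .md to text/markdown is my choice here, and text/plain works just as well:

```xml
<system.webServer>
  <staticContent>
    <!-- IIS won't serve unmapped extensions, so give .md a MIME type -->
    <mimeMap fileExtension=".md" mimeType="text/markdown" />
  </staticContent>
</system.webServer>
```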

This enables the Markdown to be served to a browser, but if you try to download it from JavaScript you’ll find it blocked by CORS. I solved this with a small ASP.Net module that is registered in the web.config above, the code for which is below:
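A sketch of such a module – the class name and namespace are hypothetical, and the wide-open Access-Control-Allow-Origin suits fully public content like mine but should be narrowed otherwise:

```csharp
using System;
using System.Web;

// Adds a CORS header to responses for markdown files. Register it in
// web.config under <system.webServer><modules>, e.g.:
//   <add name="MarkdownCorsModule" type="MyWebsite.MarkdownCorsModule" />
public class MarkdownCorsModule : IHttpModule
{
    public void Init(HttpApplication context)
    {
        context.PreSendRequestHeaders += (sender, e) =>
        {
            var application = (HttpApplication)sender;
            if (application.Request.Path.EndsWith(".md", StringComparison.OrdinalIgnoreCase))
            {
                application.Response.Headers.Add("Access-Control-Allow-Origin", "*");
            }
        };
    }

    public void Dispose() { }
}
```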

Hope it’s useful to you.

Signalling API Unavailability

When doing upgrades of websites it’s often useful to be able to signal to users that your service is offline for maintenance, either in part or in its entirety. That’s quite straightforward to implement unless you’ve got something like an AngularJS or React app – one that could well be cached in the browser and that actually wants to respond to 503 status codes returned from a web based API. Then CORS has a habit of getting in the way.

To help with that I’ve just pushed this super simple and lightweight ASP.Net website to GitHub: it responds with a 503 status code to any request made of it, while ensuring that the CORS protocol succeeds so that the 503 makes its way through to your own error handling.
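The essence of the approach looks something like this – a sketch rather than the project’s actual code:

```csharp
using System;
using System.Web;

// Global.asax.cs: answer every request with a 503, but keep the CORS
// handshake happy so browsers surface the status to the calling app
// instead of reporting an opaque CORS failure.
public class Global : HttpApplication
{
    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        Response.Headers.Add("Access-Control-Allow-Origin", "*");
        Response.Headers.Add("Access-Control-Allow-Methods", "GET, POST, PUT, DELETE, OPTIONS");
        Response.Headers.Add("Access-Control-Allow-Headers",
            Request.Headers["Access-Control-Request-Headers"] ?? "*");

        // Preflight requests must succeed or the browser never exposes the 503.
        Response.StatusCode = Request.HttpMethod == "OPTIONS" ? 200 : 503;
        CompleteRequest();
    }
}
```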

It’s ideal for hosting in an Azure deployment slot during upgrades that swap slots.

Note: an alternative approach would be to use the URL rewriter in web.config. It’s not particularly intuitive or, to my taste, readable, but I believe it can be configured to perform the same task.

Accidental Fish Application Support v3.3.0 Release

Last night I published a minor update to this framework to GitHub and NuGet that adds timer capabilities via a new ITimerFactory interface.

An interval timer is available that runs an action, pauses for the specified interval on completion, and then runs the action again until cancelled (either by the action itself or via a cancellation token). A regular metronomic timer is also available that runs an action every n seconds irrespective of how long the action takes. Importantly, in the latter case, if the action takes longer than the timer interval to complete it will be cancelled, to prevent overlapping actions compounding into issues such as excessive CPU usage or running out of memory.
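This isn’t the library’s API – just a plain C# sketch of the two timer semantics described above:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public static class TimerSketch
{
    // Interval timer: run the action, pause for the interval, repeat.
    public static async Task IntervalAsync(
        Func<CancellationToken, Task> action, TimeSpan interval, CancellationToken ct)
    {
        while (!ct.IsCancellationRequested)
        {
            await action(ct);
            await Task.Delay(interval, ct);
        }
    }

    // Metronomic timer: run every interval, cancelling the action if it
    // overruns the tick so executions never overlap.
    public static async Task MetronomeAsync(
        Func<CancellationToken, Task> action, TimeSpan interval, CancellationToken ct)
    {
        while (!ct.IsCancellationRequested)
        {
            var tick = Task.Delay(interval, ct);
            using (var cts = CancellationTokenSource.CreateLinkedTokenSource(ct))
            {
                cts.CancelAfter(interval);
                try { await action(cts.Token); }
                catch (OperationCanceledException) { /* action overran the tick */ }
            }
            await tick;
        }
    }
}
```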

I’ve also added an IBackoffFactory interface that allows the backoff policies to be created with custom backoff timings. The backoff policies continue to be directly injectable with their default timings.

I hope these additions are useful. Feedback is, as ever, welcome.

Microservice Analytics – Update

For those who’ve got in touch asking for a beta code – thank you for your interest, it’s very much appreciated and I’m nearly ready to start issuing them.

Before releasing them I wanted to make sure I was capturing user and session data properly and that it was being integrated deeply across the other measures and data I capture. Happily this is all working and I’ve just deployed it to my live site.

In this initial release, unless you configure data capture otherwise, magic numbers will be generated for users and sessions (you can override this to provide your own IDs if you so choose) and these are correlated with everything else that is happening, making it possible to look at the data from both perspectives:

  • What are your users doing and what system activity is that generating in your applications?
  • Given a system activity (say a SQL command or Web API call), which user initiated it and/or under which session?

There are some specific views to help with the above, but in addition you can tag any user or session and the whole user interface is then filtered by that tag, allowing you to explore this subset of the data quite organically.

I’ve attached a couple of screenshots below (taken immediately after deploying the new code to live – so not many users and sessions captured yet!). As I mentioned earlier, I’m nearly ready to share those invite codes – thanks for your patience.