Game of the Year 2014

Winner: Forza Horizon 2
Runner Up: The Last of Us Remastered


There’s been a lot of negativity in the gaming world this year about the next-gen consoles and how they are yet to deliver, and while I’ve not played anything game-changing (pun not intended) I’ve had plenty of great gaming experiences over the last 12 months, including Wolfenstein, Elite: Dangerous, Monument Valley, Far Cry 4, Alien: Isolation, Diablo III, Dragon Age and Mario Kart.

However, my winner and runner-up stood head and shoulders above those other games, with only a hair’s breadth between them. Forza Horizon 2 is the first driving game to have made me whoop and holler since Daytona in the arcade, and the most satisfying since the Project Gotham series, to which this feels, to me, like a spiritual successor. For me it strikes the perfect balance between arcade and simulation, with a driving model that sends me into a trance-like state while hours slip by. I’ve recently tucked into the Storm Island DLC and some of the tracks are incredible, with masses of air and dirty rallying to go at. It all looks gorgeous too, with beautiful scenery, incredible car models, and a steady framerate – it runs at 30fps but I’ve never seen it drop a beat, and although I prefer racers to run at 60fps the nature of the driving means this isn’t particularly detrimental to the experience; I soon got used to it.

Despite owning a PS3 I somehow missed The Last of Us on that console and eagerly awaited it in remastered form – I was not disappointed. I can’t remember ever being so gripped by a videogame’s story and characters before; from beginning to end I was desperate to see what would happen next, yet dreading something bad happening to Ellie and Joel. The pacing was fantastic, mixing quiet and frantic moments with the occasional scare to get the pulse going. When the game finished I was sad to see the story end but satisfied that it was complete – I hope Naughty Dog doesn’t cheapen the experience by revisiting Ellie and Joel. The DLC was also fantastic and I loved the parts without combat; they really pulled on the heartstrings because throughout you know where it’s all going. It’s a beautiful game throughout, both technically solid and visually arresting with great art direction, and the gameplay was, for me, just the right balance of stealth and combat.

As I say, literally only a hair’s breadth separates those two games, and on another day, in a different mood, I could easily swap them round. I think Forza Horizon 2 just pips it because of its sheer scale – I’ve driven a ridiculous number of miles and am still enjoying it as much as I did when I first started with it.

Special mentions also need to go to Alien: Isolation for the sheer level of terror it generates (I still haven’t completed it due to fear and dread!) and Elite: Dangerous for being mesmerising even while in beta.

Gadget of the Year 2014

Winner: Surface Pro 3
Runner up: iPhone 6 Plus


I’ve invested in a tonne of new kit this year including cameras, laptops, tablets, phones, games consoles and the many accessories that go along with all of those; however, the device that has surprised and delighted me the most is the Surface Pro 3.

It was an almost hesitant purchase for me as I had concerns about how well it would work as a laptop (the word “lapability” seems to have been invented for Surface reviews – never a good sign when you need a new word) and as an environment for running fairly demanding apps such as Visual Studio, Lightroom and Photoshop. Coupled with that, I’ve never had a Windows device with decent battery life, though admittedly since my first PowerBook my main experience of Windows devices has been cheap corporate-supplied crapware (happily these days also a decent bit of kit – MacBook Pro Retina). However I’ve been really intrigued by the Surface concept and Microsoft’s approach since the first version, and the Pro 3 is the first time it seemed they were starting to hit their vision, so I made a purchase thinking that if I didn’t get on with it I could sell it on.

However my concerns quickly abated after a few days’ use. The 8GB i7 model I purchased is well up to running those applications and makes for a great mobile development environment and photo lab, and I can fit the odd game of Civilization V in too. The only issue I’ve had with “lapability” is my early-morning lay-on-the-sofa-drinking-coffee posture, where I tend to have my laptop almost vertical while I snooze (err, work) and ease into my day. Because it doesn’t have any top-of-display to bottom-of-keyboard rigidity it just doesn’t work in that position.

The build quality is excellent, on a par with my MacBook Pro Retina, and I find the pen a useful addition for quick diagrams (though I’ve yet to come across a great tool for converting scrawls into shapes). For the most part it stays cool and silent, only really warming up and ramping up fan noise if I do intensive work on large image files (24MB RAW files) or play a game.

As a tablet it’s insanely fast and I quite like Windows 8.1, as I make frequent use of the side-by-side snap views that just aren’t there on iOS. Its battery life isn’t as good as my iPad Air’s (go figure, it’s running an i7) but I easily get a few hours out of it and I’ve never had to worry about it running out. It’s also heavy compared to an iPad Air – again no surprise given what it’s packing.

Ultimately I love it. It’s become my carry-everywhere gadget as I can fit it in my camera bag with ease and in my main laptop bag alongside my corporate device. Whether I want to read, do some coding or photography, play a game, get online, or watch a movie, it can do it. And it has horsepower an iOS device (or Microsoft’s RT variant) can’t match, without the sandbox restrictions.

I also think there are some real takeaways for Apple here (a year or two back the MacBook Air would have been my first choice for something so mobile). On the laptop front they really, really need to sort out the display on the MacBook Air. At the top-spec price point it utterly sucks and, for me, simply makes it a no-purchase at this point in time. On the tablet side they really need to sort out a keyboard solution for the iPad that is as elegant and slender as the Surface’s. Sure, there are third-party options (I have a couple) but none of them comes close to the Surface’s, simply because they are afterthoughts designed to fit around a tablet that wasn’t designed to have a keyboard attached.

Finally a couple of brief notes on my runner up – the iPhone 6 Plus. This is the first “big screen” phone I’ve owned and like the Surface I really bought it to see how I got on particularly as I mostly use my phone as a computer and reading device rather than as a phone. The extra screen space has massively improved my smartphone experience, it’s still light, and it fits in my pocket (no bending yet!).


Apologies for the tumbleweed round these parts lately particularly if you left a comment asking for help and I never got back to you. Unfortunately I had a rough few months with illness and have been slowly easing myself back into business as usual and my priority was getting back to things that paid my bills!

I’ve got a few “… of the year 2014” posts and then I hope 2015 will be back to usual.

Requiring the use of https on an AngularJS application hosted in an Azure Website

As I was reaching the tail end of my AngularJS coding challenge, one of the tasks I had outstanding was to enforce the use of https across the site, even if a visitor accessed it via an http-prefixed link.

For the last few years I’ve mostly been working in MVC and have done this selectively at the filter level (RequireHttps), but my Angular site was just a collection of client-side files with no server component – that was hosted off in a second site that fronted up only a RESTful Web API.

I’m familiar with the concept of URL rewriting from ages back but hadn’t used it (ever? or at least for as long as I can remember) on IIS / ASP.Net. It turns out it’s pretty simple to do: all I had to do was drop the block below into my site’s web.config file:

    <system.webServer>
      <rewrite>
        <rules>
          <clear />
          <rule name="Redirect to https" stopProcessing="true">
            <match url="(.*)" />
            <conditions>
              <add input="{HTTPS}" pattern="off" ignoreCase="true" />
            </conditions>
            <action type="Redirect" url="https://{HTTP_HOST}{REQUEST_URI}" redirectType="Permanent" appendQueryString="false" />
          </rule>
        </rules>
      </rewrite>
    </system.webServer>

Handy to know if you’re deploying an AngularJS site to an Azure website or IIS.

CORS with Owin

In a recent post I’d written about my struggles with the Web API CORS support and in the comments Taiseer suggested looking at accomplishing it through OWIN. I needed to create a new Web API project this week and so tried out this approach – the short version is it was utterly painless.

This project is (sorry!) closed source / commercial so I can’t share the source directly however it was an easy set of steps to enable CORS support through OWIN on my Web API project.

1. Either through the Package Manager GUI or console install the package Microsoft.Owin.Cors into your Web API project.

2. Within one of your startup files (the easiest out of the box is Startup.Auth.cs, and that seemed a reasonable place for enabling CORS) just add the line:

    app.UseCors(Microsoft.Owin.Cors.CorsOptions.AllowAll);

And that’s it.

If you want finer-grained control over which domains and verbs are allowed, you can supply detailed configuration via the CorsOptions class.

Angular JS Pack

I’ve spent some time over the weekend and last week tidying up my initial AngularJS code and learning about some more advanced topics in order to reduce code repetition and standardise my approach to handling success and failure around $http calls to Web API.

Not being someone who is shy of creating new public GitHub repositories I’ve spun what I’ve written out into an open source (MIT License) Accidental Fish AngularJS Pack (how grand!) and written some documentation for it. You can find it over at GitHub.

It contains some element-level directives for Bootstrap 3 boilerplate and some wrappers for working with Web API responses.

Code quality should improve over time as I learn more about AngularJS – I’m really only a beginner.

12 Hour Coding Challenge – AngularJS and Azure

This last weekend I set myself a challenge to build and ship a web application in under 12 hours and in the process learn something about AngularJS – a framework I’ve been itching to try for some time. I also wanted to build something that would serve as a basis for a small iOS application written in Swift (to learn a little about that) and that might allow for extension opportunities in iOS8.

The good news is I succeeded in my challenge and you can find the website here and the source code on GitHub. I’ll post more details on how to build, configure and deploy this application shortly.

Without further ado – this is how my day went.

Hour 1 – Idea, Technology Choices, System Architecture

With only 12 hours and some learning to do I needed something I could pare back to a useful absolute minimum, but that I could evolve additional features on top of easily in the future. I used to be a big user of delicious but gave up on it while it was under Yahoo’s tenure. I have tonnes of different devices on different platforms (so, for example, iCloud syncing of bookmarks isn’t enough for me – iPhone and iPad yes, but what about my Surface?) so I thought I’d have a bash at a simple online bookmark site. I pared things back to a handful of requirements I thought I could achieve:

  • Signup and sign-in (in a way that I could expand to include social account sign-in in a later version)
  • View links in descending order of when they were saved
  • Save links from within the website
  • Save links from a bookmarklet (a button on your browser’s toolbar)
  • Tag links
  • View links within a tag in descending order of when they were saved
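For reference, the bookmarklet in that list is nothing more than a javascript: URL that grabs the current page’s address and title and hands them to the site’s save page. A minimal sketch of the idea – the save endpoint here is a placeholder of my own, not the real one:

```javascript
// Build the URL the bookmarklet opens; saveEndpoint is hypothetical.
function buildSaveUrl(saveEndpoint, pageUrl, pageTitle) {
  return saveEndpoint +
    '?url=' + encodeURIComponent(pageUrl) +
    '&title=' + encodeURIComponent(pageTitle);
}

// The bookmarklet itself inlines that logic into a javascript: URL, e.g.:
// javascript:(function(){window.open('https://example.com/save?url='+
//   encodeURIComponent(location.href)+'&title='+encodeURIComponent(document.title));})();
```

The user drags that javascript: URL onto their browser toolbar; clicking it opens the save page pre-populated with the current page’s details.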

As AngularJS is new to me (and Swift will be too, when I get round to it), I wanted to build everything else on a solid, well-understood foundation, so for the backend I picked:

  • C# and .Net 4.5.1 – I’m hankering to experiment with Go but if I added that to everything else there’s no way I’d finish in 12 hours, so I stuck to C#.
  • Azure – table storage, web sites and a worker role. I plumped for table storage rather than SQL as I want this to scale easily and be super cheap to run – I’m paying for it myself and am willing to sacrifice features and some development complexity for a lower monthly bill.
  • Asp.Net Web API – I know this really well now so an obvious choice for the web service layer given I was using C#.
  • My open source Azure application framework, it wraps up most common operations you’d undertake against the core of Azure in a testable fashion and makes deployment and configuration easy.
  • My open source ASP.Net Identity 2.0 table storage provider.
  • The Bootstrap CSS template for the UI. It looks OK out of the box and I can apply an off-the-shelf theme easily later (or tinker with it myself).

Most of the above took place in my head with a handful of notes.

Hour 2 – AngularJS Research

I didn’t expect this to hold too many surprises in terms of the overall system architecture, as I’m pretty familiar with rich browser application development in jQuery and have some experience with Backbone and Knockout, but I didn’t know how to structure an application properly in AngularJS.

All I’d done with this previously was really an equivalent of the sample on the home page tucked away inside another website but it looked to be a super useful and comprehensive single page application framework that could provide a really clean MVC structure for a JavaScript client. I basically went through the developer guide at quite a clip and looked at the structure of a couple of non-trivial apps.

I was no expert after an hour, and didn’t expect to be, but I felt I could build the backend and not get surprised by the front end to a degree that would cause me to uproot the system architecture. Importantly I learned how I could integrate the client with the authentication endpoint via a helpful blog post (thank you!) which also introduced me to interceptors – most handy.
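For what it’s worth, the interceptor idea that blog post introduced can be sketched like this – the factory name and the `tokenStore` service are placeholders of my own, not names from the post or from my code:

```javascript
// Sketch of an AngularJS $http interceptor: attach a bearer token to every
// outgoing request and react to 401 responses. tokenStore is hypothetical.
function authInterceptorFactory($q, tokenStore) {
  return {
    request: function (config) {
      var token = tokenStore.get();
      if (token) {
        config.headers = config.headers || {};
        config.headers.Authorization = 'Bearer ' + token;
      }
      return config;
    },
    responseError: function (rejection) {
      if (rejection.status === 401) {
        tokenStore.clear(); // drop the stale token so the user re-authenticates
      }
      return $q.reject(rejection);
    }
  };
}

// Wiring it into an Angular module would look something like:
// app.factory('authInterceptor', ['$q', 'tokenStore', authInterceptorFactory]);
// app.config(['$httpProvider', function ($httpProvider) {
//   $httpProvider.interceptors.push('authInterceptor');
// }]);
```

Once pushed onto `$httpProvider.interceptors`, every `$http` call in the application picks up the token automatically, which keeps authentication concerns out of individual controllers and services.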

Hours 3 to 6 – Build the Backend

The storage model fell pretty trivially out of the requirements and came together quickly. I used the Chrome plugin Postman to test my REST interface without needing to write further code. I used my standard approach to this sort of thing in terms of project structure.

Nothing really new at all so largely just predictable legwork and at the end of the period I had a clean back end following a fairly pure REST model that I was fairly sure would work for the UI and I could deploy into Azure. So I did just that.

Hours 6 to 12 – Build the Frontend

Best summarised as grappling with AngularJS with lots of Googling and referring to the documentation!

Actually, to be fair, other than a couple of pain points it was rather simple, and I’m pretty sold on AngularJS as a framework for single-page applications; I will certainly be using it on future projects.

I basically copied the application folder structure I would use if I were building a traditional server page request website in ASP.Net MVC, and that I’d seen used in a couple of other apps, which worked out really well: controllers, views and services separated by folders. I added a service layer that used the $http angular module to talk to my Web API and kept the http grubbiness out of the controllers.
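As an illustration of that separation – the service name, routes and `apiBaseUrl` below are placeholders of my own, not the actual code from the repository – a thin service wrapping `$http` might look like:

```javascript
// Hypothetical sketch of a service that keeps $http calls out of controllers.
// Controllers only ever see plain data via promises, never raw responses.
function linkServiceFactory($http, apiBaseUrl) {
  return {
    getLinks: function (page) {
      return $http.get(apiBaseUrl + '/links', { params: { page: page } })
        .then(function (response) { return response.data; });
    },
    saveLink: function (link) {
      return $http.post(apiBaseUrl + '/links', link)
        .then(function (response) { return response.data; });
    }
  };
}

// Registered as something like:
// app.factory('linkService', ['$http', function ($http) {
//   return linkServiceFactory($http, 'https://api.example.com');
// }]);
```

With this in place a controller just injects `linkService` and calls `getLinks(page)`, so swapping the backend or changing routes never touches controller code.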

I managed to avoid the temptation to fall back to old patterns and stuck as far as I could to straight AngularJS, for example it was tempting to start inserting jQuery all over the place to show and hide things whereas I really wanted to do this in a clean and declarative fashion using Angular.

I had to refactor things a couple of times but nothing major – it came together quite easily. The last hour was spent putting a front page on it and dealing with something I hadn’t considered – as a new user, when I went to the bookmark feed there was no real clue as to what to do next, so I added a quick “if there are no links” welcome page. By the time I’d begun the UI work my own test account was littered with links!

The things that caused me most pain:

  • CORS support. My client website was running on a different domain (localhost in development) to my Web API, and would in production (it’s static HTML, so why waste expensive ASP.Net server resource!), which meant I needed to use the CORS protocol (Cross Origin Resource Sharing) to access the Web API from JavaScript. Except… it didn’t work. After much teeth-gnashing it turned out that there were issues with the Web API 2.0 binaries and the accompanying Web API 2.0 CORS package, and I would need to upgrade to 2.2. I did this but, in Microsoft’s recent “fun” fashion, that included breaking changes on a point release. Fortunately they were simple to fix and then everything worked fine.
  • Infinite scrolling. I wanted to take an infinite scrolling approach to the bookmark list. You’ll have seen this if you’ve used things like Facebook or Twitter in a web browser – there are no “next page” and “previous page” buttons; you simply scroll towards the end of the page and the next page is fetched in the background and added to the bottom. There is an AngularJS module that purports to assist with this, however it had a number of bugs (I must do a pull request) and so I spent 30 minutes learning the code and fixing them. Fortunately it was only 100 lines of code to deal with, and even so it was still a net win in terms of time. Maybe I’ve just missed something in terms of library dependencies.
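The core of the infinite scrolling technique is small, which is why fixing that module in 30 minutes was viable. A sketch of the idea – this is my own illustration, not the module’s code:

```javascript
// When the user scrolls within `threshold` pixels of the bottom, fetch the
// next page – at most one request in flight, and exactly one fetch per page.
function createInfiniteScroller(fetchPage, threshold) {
  var nextPage = 1;
  var loading = false;
  return {
    onScroll: function (scrollTop, viewportHeight, contentHeight) {
      var distanceFromBottom = contentHeight - (scrollTop + viewportHeight);
      if (!loading && distanceFromBottom <= threshold) {
        loading = true;
        fetchPage(nextPage, function done() {
          nextPage += 1;
          loading = false; // allow the next page to be requested
        });
      }
    }
  };
}
```

In an Angular directive, `onScroll` would be bound to the scroll container’s scroll event and `fetchPage` would call the $http service, appending the returned links to the list the view is bound to.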

Lessons Learned

  • AngularJS is pretty cool! Easy to use, well considered, and provides excellent structure. My only concern is that while digging around for answers to common problems it seems to be evolving very quickly with not a lot of consideration given to backwards compatibility. Maybe I’m wrong – I’m utterly new to it.
  • By keeping things tightly focussed I had something in the hands of a customer (me!) in 12 hours from start to finish. It doesn’t let me do things I’d eventually want to do (edit links and tags, delete links for example) and has some rough edges but I can already use it and “pilot” it myself in a beta fashion. I shipped as early as I absolutely possibly could.
  • The aggressive timescale meant I couldn’t go off on architectural / development flights of fancy, not that that’s really my thing – see below. I think every line of code I wrote in this system is specific to the business problem in hand. No custom UI components, no custom architecture, no funky “time saving” code / model / blah blah blah generators that are often the symptom of over architected solutions put together by people with too much time on their hands! My first choice was always to go find something off the shelf (hence my infinite scrolling bug fixes – and even factoring that in it was quicker than writing it myself).
  • There are lots of things I’d like a site like this to do (social sharing, editing as above, public / private feeds and links, trending URLs) and while I had those in mind and have rough views of how to build them I did not allow myself to get distracted by them. If I had my 12 hours would have become a week, would have become 2 weeks and so on. Just because they are not in v1 (or v0.1 if you prefer) doesn’t mean they can’t be put into v1.1.
  • You really do need to stand on the shoulders of giants – I can’t emphasise enough how strong my belief is that if the code you are writing is not specific to your particular problem then you’re going wrong: you’re hovering around the ankle while someone else starts at head height!

Next Steps

  • Understand the best way to unit test AngularJS and, errr, write the unit tests.
  • Present a tag list in the UI.
  • Deal with error conditions, particularly http errors, in a consistent way.
  • Beef up validations on both the server and the client.

Uploading an image to a Blob Container via Web API

Handling image (or other binary object) uploads via Web API for storing in Azure blob storage without using the local file system (handy if, for example, you’re using Azure Websites) seems to be a frequently asked question.

I’ve not tested this in anger yet, only in fairly limited scenarios, but I’ve posted my own attempt at solving the issue as a gist on GitHub. It seems to work so far.

If you use my Azure Application Framework I’ve also added a GetMultipartStreamProvider method to the IAsynchronousBlockBlobRepository interface that provides a pre-configured implementation for a given blob container.

Hope that’s helpful.

ASP.Net Identity Provider 2.0 and Table Storage

I’ve started to implement the new storage interfaces in AccidentalFish.AspNet.Identity.Azure so you should be able to use the new features of Identity Provider 2.0 with table storage.

I’ve added an MVC5 project that shows how to configure the provider.

It’s not fully tested yet but if you come across any bugs please log them on GitHub and I’ll get right on it. Once complete I’ll publish the new version on NuGet.

Azure by Default

When I first started with Azure it existed only in PaaS form and had a very limited set of services compared to the rich variety available now. Adopting Azure at the time was seen as something of a risk, even within a heavy C# / .Net shop, and my first use of it was on a carefully targeted project – one on which I wasn’t betting the bank, so to speak.

Over the last few years the platform has matured significantly, adding further PaaS and IaaS features along the way, and has proven robust, reliable, cost-effective and flexible in the development and operation of real systems serving real customers. It’s done what it says on the tin, and the people I have worked with who also have significant Azure experience largely say the same.

As such it’s been interesting to observe my own corresponding shift in behaviour over the last 12 months, and throughout 2014 in particular. When I started on this journey back in 2011 I would have spoken of Azure in terms of interest and caution. Throughout late 2012 and 2013 I would have spoken of it as an excellent option to be considered for many systems. That leads me to today, where in the last few weeks I have found myself recommending it as the “default choice”.

By this I don’t mean it’s the only tool for every job, but it is the platform I now look to first for greenfield development, and I then look for reasons why it might not be a good fit, drilling into those reasons hard as the benefits of the platform are so great. The kinds of things that can make me look elsewhere are regulatory or compliance considerations, or a peculiar or edge-case technical requirement.

It’s been a fascinating journey, and still is. At this point I consider Azure to be amongst the best things Microsoft has done, right up there with C# – it’s a massively enabling technology. If you’ve not looked at it yet, and particularly if you’re a .Net / Microsoft developer, you really should.
