Author: James

Follow up: Use Azure Table Storage as an OAuth Identity Store with Web API 2

My original blog post and NuGet package have generated a lot of interest (great!) and a couple of bugs (not so great!) in the slightly naive code I dropped out of an early development project I was working on.

I’ve since had the opportunity to test the code at reasonable scale (> 1 million users, 1000s of concurrent logons) and made a few changes to the table structures as a result of learning more about the ASP.Net provider workflow.

There’s also a fix on the way for the “Google” issue – invalid (from an Azure table storage perspective) characters being placed in partition and row keys.

Finally, apologies for the delay in replying to everybody’s comments – I took a solid 2 weeks off over the Christmas break and since rejoining the fray have been absolutely snowed under. I hope normal service is now resumed; as well as these bug fixes I’ve got some blog posts on the way covering topics such as cross-table concurrent table storage patterns, alerting around an Azure system and using Shared Access Signatures to reduce server rental fees.

Azure AD, Groups, Roles and the Authorize Attribute

Configuring a new MVC 5 website to authenticate against an Azure Active Directory is really simple – all you need to do is set it up in the ASP.Net 4.5 Web Application creation wizard when you create a new project, as described here.

If you complete that successfully and add an Authorize attribute to one of your controllers or actions then you’ll find everything works as you’d expect.

However if you try to use roles with AD groups (an obvious thing to try, right – particularly now that groups are exposed in the management portal) you’ll find that UnauthorizedExceptions are thrown. This is because, out of the box, the current implementation doesn’t translate group memberships into role claims.

Fortunately this is fairly straightforward to resolve by supplying a specialization of the System.Security.Claims.ClaimsAuthenticationManager class to ASP.Net and using the Azure AD Graph API to retrieve the memberships of the authenticated user.

The good news is I’ve done the grunt work for you and extended the NuGet package I blogged about recently to include an implementation of a claims authentication manager that does just this. You can take a look at the code involved here. It makes use of the Windows AD Graph API Helper Library posted on MSDN which takes care of some of the required service wrapping.

As before the source for the whole library is on GitHub and covered by the MIT license, and a NuGet package is available.

In order to use AD groups as roles your application is going to need to be able to read data from your Azure AD, and to allow that you’ll need to do some additional configuration of your application inside the Azure Management Portal. The New Project wizard will have added your website to your Active Directory, which you can see in the Management Portal on the Applications page. You can see mine listed below:

adstep1

Select your application and you’ll be taken to the application configuration page which should look something like the below:

adstep2

To configure your app you need to tap the Manage Access button down the bottom. Do that and then select Change the directory access for this app. Then on the next page select Single sign on, read directory data.

Azure will whirr away for a short while changing settings on your AD and when it’s done you need to go to the Configure tab (click configure at the top of the page as shown in the image above). In here you need to create a key that your application can use to authenticate with the Graph API and read AD data.

To do this scroll down to the Keys section and click the drop down and choose whether you want a 1 year or 2 year key. In the screenshot below I’ve picked 1 year.

adstep3

After you’ve done this click save in the toolbar at the bottom and you’ll see your key.

We’re just about done in the Azure Management Portal; all you need to do before you leave is take note of the Client ID and the Key as shown in the image below (mine are blurred out!).

adstep4

For the rest of this walkthrough I’m assuming you’re configuring a web site rather than an Azure Web Role, but the claims manager uses the Azure Configuration Manager for both so if you are using a Web Role you can simply put the settings in your .cscfg and .csdef files.

Firstly add the NuGet package to your project which you can do in the Package Manager GUI or in the console:

Install-Package accidentalfish.aspnet.identity.azure

Then you need to edit your web.config file with a couple of app settings. For this you need your Client ID and Key that you noted down earlier.

adstep5

The RoleClaimIssuer is optional but is the claim issuer you want inserted into the claim; if you leave it out the issuer will be set to DefaultRoleIssuer.
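If the screenshot doesn’t come through, the settings look something like the below – note that the key names here are illustrative placeholders rather than the exact names the package expects (check the package source for those); the values are the Client ID and Key you noted down earlier:

<appSettings>
  <!-- Illustrative key names only; the Client ID and Key come from the portal steps above -->
  <add key="GraphApiClientId" value="your-client-id" />
  <add key="GraphApiClientKey" value="your-key" />
  <add key="RoleClaimIssuer" value="https://yourtenant.onmicrosoft.com" />
</appSettings>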

The final step you need to take is to tell the ASP.Net identity model about the claims manager. To do this locate the <system.identitymodel> section of the web.config file and insert the line highlighted below at the bottom:

adstep6

To save you some typing the line to paste is:

<claimsAuthenticationManager type="AccidentalFish.AspNet.Identity.Azure.GraphRoleClaimsAuthenticationManager, AccidentalFish.AspNet.Identity.Azure"/>

With that you’re done. You can use the groups you’ve configured in the Azure AD as roles with the [Authorize(Roles="...")] attribute.
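For example, with a directory group called Administrators (a made-up group name for illustration) a controller can be protected like this:

// "Administrators" is a hypothetical group name - substitute one of your own AD groups
[Authorize(Roles = "Administrators")]
public class AdminController : Controller
{
    // only members of the Administrators group will get past the Authorize attribute
}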

I hope all this is useful, if you have any feedback then please do get in touch.

Azure Application Framework

As I’ve worked with Azure over the last 18 months there is a bunch of plumbing I’ve found to be commonplace but in which, other than effort, there is little inherent intellectual property value. Examples include dependency injectable patterns for resource access, configuring components across multiple projects and servers, deployment, separation of concerns, sending emails, fault diagnosis and a management dashboard, to name just a few.

The AccidentalFish.ApplicationSupport framework is my attempt to bring solutions to these common requirements into a reusable package in order to bootstrap my own, and hopefully others, future work. The code is all purpose written specifically for this framework and with reuse in mind and it’s heavily focussed on an asynchronous programming model.

Different parts of the framework are at different states of maturity but they are moving quite quickly: since deciding to author this code as a framework I’m using it in this form in two personal projects, one of which is the previously alluded to companion application for this website.

The framework itself continues to evolve as I continue to learn more about Azure and as Azure itself moves forward and it’s covered by the permissive MIT License so is free to use in open source and commercial applications. The source code can be found on GitHub.

Documentation is currently scant but that is a priority for me – I hope I can provide documentation to a similar standard as I have written for Simple Paging Grid as I think that’s played a big part in its modest success.

I’m publishing early basically so some friends can make use of it and so that I can pull it out of my applications to manage separately, but if you have any feedback let me know. Bug fix submissions greatly welcomed!

And finally, for those who are wondering – Accidental Fish is the moniker (and UK Limited Company) under which I publish applications on the iOS App Store and under which I place the copyright for most of my open source work. Somehow using it in namespaces seems less egotistical than my own name or initials, though as I’m the sole employee it is me and I am it.

How To: Use Azure Table Storage as an OAuth Identity Store with Web API 2

In my previous post I looked at how to register, login and authenticate using the new OWIN based ASP.Net architecture underpinning MVC 5 and Web API 2. The default website provided was configured to use a SQL database, which is why we needed to configure a SQL Database within Azure as we deployed our website.

There’s a fair chance, if you’re experienced with Azure, that you’re wondering if you can swap that out and use Table Storage. Fortunately one of the improvements in this latest version of ASP.Net is that storage is better abstracted away from identity management.

There is however a fair bit of leg work to doing so. I’m going to firstly touch on how you go about this, then look at the NuGet package I’ve put online (source code on GitHub) that means you don’t have to do this leg work, and finally we’ll look at the changes you would need to make in the Web API 2 sample project we introduced in the previous post. I’m going to make reference to that code so you can quickly grab it from GitHub if that’s useful.

1) Implementing a New Identity Store

The clue to how to go about this can, again, be found at the top of the Startup.Auth.cs file in the App_Start folder:

Step1

A factory function is assigned that returns a user manager constructed with a user store.

The UserManager is a core class of the identity framework (Microsoft.AspNet.Identity.Core) and works with classes that describe users through the IUser interface (another core class) and persists them with implementations of the IUserStore interface.

The core identity framework provides no storage implementation and the UserStore class that is being instantiated here is provided by Microsoft.AspNet.Identity.EntityFramework as is the IdentityUser class.
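For reference, the factory wiring in the template’s Startup.Auth.cs looks something like the below (paraphrased – your generated file may differ slightly):

public static Func<UserManager<IdentityUser>> UserManagerFactory { get; set; }

// ... and in ConfigureAuth the factory is wired up to the Entity Framework backed store
UserManagerFactory = () => new UserManager<IdentityUser>(new UserStore<IdentityUser>());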

In fact if we look at the Microsoft.AspNet.Identity.Core assembly we can see it’s really very focussed on managing abstract interfaces:

Step2

It’s not difficult to see where we’re going at this point – to implement our own store we need to provide implementations for a number of interfaces. If we want to replicate full local identity management in the same way as the Entity Framework supplied implementation then realistically we need to implement most of the interfaces shown above – IRole, IRoleStore, IUser, IUserClaimStore, IUserLoginStore, IUserPasswordStore, IUserRoleStore, IUserSecurityStampStore and IUserStore.

That’s not as daunting as it sounds, as most of the interfaces are quite simple – for example IUserStore:

Step3
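If the screenshot doesn’t come through, the interface looks roughly like this (paraphrased from the Microsoft.AspNet.Identity.Core assembly):

public interface IUserStore<TUser> : IDisposable where TUser : IUser
{
    Task CreateAsync(TUser user);
    Task UpdateAsync(TUser user);
    Task DeleteAsync(TUser user);
    Task<TUser> FindByIdAsync(string userId);
    Task<TUser> FindByNameAsync(string userName);
}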

The remaining interfaces also follow this asynchronous CRUD pattern and are fairly simple to implement. Here’s the Entity Framework implementation for CreateAsync:

Step4

And by way of contrast here’s a Table Storage implementation:

Step5
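If the screenshot doesn’t come through, a Table Storage backed CreateAsync looks something like the below – an illustrative sketch rather than the exact code in the package (the _userTable field is assumed to be a CloudTable reference the store sets up elsewhere):

public async Task CreateAsync(TableUser user)
{
    if (user == null) throw new ArgumentNullException("user");
    // _userTable is a CloudTable reference the store configures elsewhere
    TableOperation insert = TableOperation.Insert(user);
    await _userTable.ExecuteAsync(insert);
}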

Pretty much the same I think you’ll agree, however it’s still a lot of boilerplate code to write so I’ve wrapped it into a NuGet package called AccidentalFish.AspNet.Identity.Azure which can also be found on GitHub.

2) Using AccidentalFish.AspNet.Identity.Azure

To get started either download the package from NuGet using the package manager in Visual Studio or download the source from GitHub and attach the project to your solution. The NuGet package manager console to install the package is:

Install-Package accidentalfish.aspnet.identity.azure

You can use the package in commercial or open source projects as it’s published under the permissive MIT license though as ever I do appreciate an email or GitHub star if it’s useful to you – yes I’m that vain (and I like to hear about my code being used).

Once you’ve got the package installed you’ll find there is still a little work to do to integrate it into your Web API 2 project: although the Microsoft.AspNet.Identity framework is nice and clean, whoever put the Web API 2 project template together didn’t keep that level of abstraction and has tied it tightly to the Entity Framework implementation in a few places.

However it’s not too onerous (just a couple of steps) and I’ve built the package with replacement in mind. To help I’ve included the Web API 2 project from my previous post and commented out the old Entity Framework code that bleeds into the MVC host site. I’ll walk through these changes below.

Firstly we need to visit the Startup.Auth.cs file and make three changes at the start:

Step6

1) Update the factory assignment to return a UserManager that manipulates users of type TableUser and uses a data store of type TableUserStore. Pass it your Azure connection string. The constructor is overloaded with parameters for the table names and whether or not to create them if they don’t exist – by default it will. A sketch of this change follows the list below.

2) Replace the ApplicationOAuthProvider with a generic version of it contained within the package. This code is exactly the same but replaces the fixed IdentityUser types with a generic (you can see what I was referring to in regard to the template – this ought to have been this way out of the box).

3) Update the declaration for the factory to return a UserManager manipulating users of type TableUser.
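A minimal sketch of the revised factory assignment is below – I’m assuming a constructor that takes the storage connection string here, and the setting name is a placeholder, so check the package source for the exact overloads:

// Illustrative only - the setting name is a placeholder and the exact
// constructor overloads may differ slightly in the package
UserManagerFactory = () => new UserManager<TableUser>(
    new TableUserStore<TableUser>(
        CloudConfigurationManager.GetSetting("StorageConnectionString")));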

At this point we’ve done the bulk of the OAuth work but unfortunately the MVC AccountController also has a needless hard dependency on the Entity Framework library, so we need to sort that out. To do this either go through the class and replace the type declarations of IdentityUser and IdentityUserLogin with TableUser and TableUserLogin respectively, or “cheat” and remove the Microsoft.AspNet.Identity.EntityFramework using reference and add a pair of aliases:

Step7
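The aliases look something like this (assuming the types live directly in the AccidentalFish.AspNet.Identity.Azure namespace):

// Replaces the Microsoft.AspNet.Identity.EntityFramework using directive
// at the top of AccountController.cs
using IdentityUser = AccidentalFish.AspNet.Identity.Azure.TableUser;
using IdentityUserLogin = AccidentalFish.AspNet.Identity.Azure.TableUserLogin;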

That’s it, you’re done. Identity information should now be persisted in Azure Table Storage.

I’ll be doing some more work on this package in the coming weeks – I want to test it at scale and I know I need to build at least one index for one of the IUserStore calls which queries in the reverse way to which the partition and row key are set up: I’ve tried to set up the partition and row keys for the most commonly used methods.

If you find any problems or have any feedback then please let me know.

 

How To: Register and Authenticate with Web API 2, OAuth and OWIN

The latest release of Asp.Net introduces some fundamental architectural changes that have a significant effect on frameworks such as MVC and Web API as Asp.Net now sits on top of the OWIN stack.

As part of this change Microsoft have yet again changed the authentication and authorisation model. Yes you still use the Authorize attribute within your MVC and Web API controllers but the workflow around authentication has been rejigged considerably.

If, like me, you have a penchant for writing mobile apps that consume Web API based services hosted in Azure chances are you’ll want to register and authenticate with your services from the device. This is really simple to achieve with Web API 2 and OWIN, in fact it’s all in place out of the box, but the trouble is that it’s barely documented.

Having spent a morning going through significant pain figuring this out I’ve put this How To guide together to show how to do this. For added fun I’ve built the client in Xamarin as an iOS application but the approach will work on any platform including from Windows 8, JavaScript, whatever you like. In fact the C# code that I outline below can be lifted straight from the Xamarin project and dropped into any other C# application. If you want to skip ahead to the example code it can be found on GitHub.

We’ll get started by creating a Web API project. In Visual Studio create a new solution and pick ASP.Net Web Application. On the ASP.Net project type page select the Web API template and change the authentication type to Individual User Accounts:

Step1

Inside this project will be an API controller called ValuesController – this is the normal Web API sample. You’ll notice the class has an Authorize attribute which will protect it from anonymous access. Your exact URL will vary depending on what port IIS Express has been set up with, but enter the URL to get the values from the controller (in my case http://localhost:5287/api/Values) and you should see some XML (I’m using Chrome):

<Error>
<Message>Authorization has been denied for this request.</Message>
</Error>

So far so good – we need to be logged in to access the API which is what we want. In order to login first we’re going to need to register a user.

If you visit the help page for the API (for example http://localhost:5287/Help) you’ll see that there is an Account API that fronts lots of interesting methods – one of which is Register:

Step2

So if we post the right data to that action we should be able to register an account. Believe it or not we’re done with the website, so publish it to a free website in Azure and take note of the URL you’ve dropped it on. I recommend using VS2013 and the latest version of the Azure SDK (2.2 at the time of writing) as the tools make this really simple. The only real gotcha to watch out for is to point your website at a SQL Database rather than the local file approach that will be configured within your website. To point to a real database just make sure that you pick, or create as in the example below, a real database server:

Step4

And then make sure that the DefaultConnection is updated:

Step5

With the website deployed and ready it’s time to create an iOS application. You can use either Xamarin Studio or the Visual Studio plugin. I used the Hello World template for my application and targeted iOS 7, which I then reworked to give me the user interface below:

ReworkedUI

I’m not going to spend too much time talking about the user interface of the Xamarin application (it’s not really the focus of this How To) but all I really did was rework it to the user interface above and add the Json.Net component from the Xamarin Component Store (if you’re from the .Net world – think NuGet; looking forward to a PCL version of Json.Net!). None of the connectivity code was Xamarin specific.

For the application to register a user it needs to send the right model to the Register action we located earlier. The website contains a class called RegisterBindingModel which we’re going to replicate in our Xamarin application (in a production application I recommend pulling these models out into a Portable Class Library rather than copying and pasting code) in a class called RegisterModel:

class RegisterModel
{
    public string UserName { get; set; }
    public string Password { get; set; }
    public string ConfirmPassword { get; set; }
}

We’re going to form up an HttpWebRequest with the registration information and send it to the controller. As long as we don’t get an HTTP error then registration has been successful. I’ve wrapped this in a class called RegisterService:

class RegisterService
{
    public async Task<bool> Register(string username, string password, string confirmPassword)
    {
        RegisterModel model = new RegisterModel
        {
            ConfirmPassword = confirmPassword,
            Password = password,
            UserName = username
        };

        HttpWebRequest request = new HttpWebRequest(new Uri(String.Format("{0}api/Account/Register", Constants.BaseAddress)));
        request.Method = "POST";
        request.ContentType = "application/json";
        request.Accept = "application/json";
        string json = JsonConvert.SerializeObject(model);
        byte[] bytes = Encoding.UTF8.GetBytes(json);
        using(Stream stream = await request.GetRequestStreamAsync())
        {
            stream.Write(bytes, 0, bytes.Length);
        }

        try
        {
            // Any non-error response means the registration succeeded
            await request.GetResponseAsync();
            return true;
        }
        catch (Exception)
        {
            return false;
        }
    }
}

That will register a user, but how do we log in? If we refer back to our website’s API help page, although there are a lot of interesting looking methods there is no Login method.

This is where OWIN comes in. If you take a look at the code that was generated for the website you’ll see that in MVC 5 / Web API 2 there is a new file in the solution (compared to MVC 4) called Startup.Auth.cs:

Step7

This is where the configuration for authentication and authorization takes place with OWIN and the real interesting bit for us is the section towards the top where OAuthOptions are set:

Step8

Essentially OWIN is running an OAuth authentication server within our website and setting up OAuth endpoints for us. I’m not going to dwell on OAuth but essentially to authenticate we need to request a token using our username and password to identify ourselves and then in subsequent service calls supply this token as a HTTP header in the request.

The token end point we need to call can be seen in the image above: it’s /Token. We need to pass it the username and password and also an additional piece of information: the grant type. We need the grant type to be password. The endpoint responds to form data and we make the call as shown below:

class LoginService
{
    public async Task<string> Login(string username, string password)
    {
        HttpWebRequest request = new HttpWebRequest(new Uri(String.Format("{0}Token", Constants.BaseAddress)));
        request.Method = "POST";

        // The token endpoint expects form data with a grant_type of password;
        // the username and password values must be URL encoded
        string postString = String.Format("username={0}&password={1}&grant_type=password", Uri.EscapeDataString(username), Uri.EscapeDataString(password));
        byte[] bytes = Encoding.UTF8.GetBytes(postString);
        using (Stream requestStream = await request.GetRequestStreamAsync())
        {
            requestStream.Write(bytes, 0, bytes.Length);
        }

        try
        {
            HttpWebResponse httpResponse = (HttpWebResponse)(await request.GetResponseAsync());
            string json;
            using (Stream responseStream = httpResponse.GetResponseStream())
            {
                json = new StreamReader(responseStream).ReadToEnd();
            }
            TokenResponseModel tokenResponse = JsonConvert.DeserializeObject<TokenResponseModel>(json);
            return tokenResponse.AccessToken;
        }
        catch (Exception ex)
        {
            throw new SecurityException("Bad credentials", ex);
        }
    }
}

In response to a successful call on the Token endpoint the OAuth server will return us JSON data that includes the access token and some additional information. I’m deserializing it into a class called TokenResponseModel, but the thing we’re really interested in is the access token. The full response is modelled like this:

class TokenResponseModel
{
    [JsonProperty("access_token")]
    public string AccessToken { get; set; }
 
    [JsonProperty("token_type")]
    public string TokenType { get; set; }
 
    [JsonProperty("expires_in")]
    public int ExpiresIn { get; set; }
 
    [JsonProperty("userName")]
    public string Username { get; set; }
 
    [JsonProperty(".issued")]
    public string IssuedAt { get; set; }
 
    [JsonProperty(".expires")]
    public string ExpiresAt { get; set; }
}

Now we’ve got that access token we can use it for any future requests that require authentication and authorization – so finally we can return to our attempt to access the ValuesController. We need to supply the access token in an HTTP header called Authorization and the value for the header must have the format "Bearer {token}"; the space between Bearer and the token is significant – if you miss it out authorization will fail. Here’s how we use the token to retrieve the list of values from the controller:

class ValuesService
{
    public async Task<IEnumerable<string>> GetValues(string accessToken)
    {
        HttpWebRequest request = new HttpWebRequest(new Uri(String.Format("{0}api/Values", Constants.BaseAddress)));
        request.Method = "GET";
        request.Accept = "application/json";
        // The space after Bearer is required
        request.Headers.Add("Authorization", String.Format("Bearer {0}", accessToken));

        try
        {
            HttpWebResponse httpResponse = (HttpWebResponse)(await request.GetResponseAsync());
            string json;
            using (Stream responseStream = httpResponse.GetResponseStream())
            {
                json = new StreamReader(responseStream).ReadToEnd();
            }
            List<string> values = JsonConvert.DeserializeObject<List<string>>(json);
            return values;
        }
        catch (Exception ex)
        {
            throw new SecurityException("Bad credentials", ex);
        }
    }
}

Obviously I’ve cut out a lot of error handling and have taken some short cuts to stay focussed on the topic but really it’s quite simple, just appallingly documented at the moment.

You can find the full code on GitHub.

Saving Images to a Blob Container with Azure SDK 2.2 and VS2013

On the face of it the below is quite an obscure post but since this has bitten me and image upload is so common I figure it will bite others too. And to be fair I have called this blog Azure From The Trenches so inevitably it’s going to get a bit grubby every now and then!

I’ve been quite happily working on my companion application for this blog over the last few evenings and tonight ported it all over to VS2013, Azure SDK 2.2 and the various NuGet updates that have been released alongside all that and hit an odd problem – a simple image upload that worked previously no longer does.

Part of my application uploads images into blob storage via Web API and I’d been using code that looks somewhat like the below:

Image myImage = new Image(); // paraphrasing here
using(CloudBlobStream uploadStream = blobRef.OpenWrite())
{
    myImage.Save(uploadStream, ImageFormat.Png);
}

This has been working with no issues at all (on my laptop, desktop and in Azure) until I ran the app after the upgrade this evening and then I began to get a NotSupportedException with the following root cause:

Microsoft.WindowsAzure.Storage.Blob.BlobWriteStreamBase.get_Length() is not supported

Cracking out dotPeek to take a look at the storage client code in the latest NuGet package (2.1.0.3) reveals that yes, getting the length of a stream on a block blob (the kind I am using) is unsupported:

public override long Length
{
  get
  {
    if (this.pageBlob != null)
      return this.pageBlobSize;
    else
      throw new NotSupportedException();
  }
}

However this is the exact same implementation on the previous version of the client and so it’s not that – and it makes sense right? You can’t really expect to get the length of a stream that is writing to a remote server.

This led me off into System.Drawing. I’ll spare you the grisly details, but if your Image instance doesn’t have raw data (and chances are it doesn’t) and you call Save without an EncoderParameters parameter (and even sometimes when you do) then a native method called GdipSaveImageToStream is called and this expects a COM stream. Your .Net stream is converted to a COM stream using a class called ComStreamFromDataStream and unfortunately this class has a nasty habit of calling get_Length() on your .Net stream.

And this is what causes the crash.

It’s easy to fix: grab yourself a byte array or memory stream first and upload that. I’ve wrapped this into my application framework with an UploadImage method but essentially this does the following:

Image myImage = new Image(); // paraphrasing here
using (MemoryStream ms = new MemoryStream())
{
    // Save to an in-memory stream first (which supports Length) and then upload that
    myImage.Save(ms, ImageFormat.Png);
    ms.Position = 0;
    blobRef.UploadFromStream(ms);
}

I’m still none the wiser as to why this has become a problem now but I guess with the move to VS2013 there is a lot of change and the layers of abstraction are now so deep that without spending hours reading Microsoft code I’m never going to know.

Relational Data in Azure

One of the primary concerns of any application is how to store and retrieve information and Azure offers many options both familiar and new.

Although some of the newer cloud oriented options such as Table Storage can be more cost effective and scalable, relational databases are still an important part of most cloud applications and Azure presents a number of options for managing relational data with the most accessible and familiar probably being SQL Database. However SQL Database is most definitely not SQL Server as you know it and comes with a number of limitations and differences that will mean you either need to develop your application using techniques you may not have with SQL Server or consider other approaches. Picking up an existing codebase built against SQL Server and dropping it straight into SQL Database might get you started but is likely to run into difficulties.

Below I’ve looked at some of the options and hopefully my experiences with them will help you decide which way to go. I’ve got the most to say about SQL Database simply because it’s the one I’ve used the most and I suspect will be what most people are interested in.

SQL Database

The most important thing to realise about SQL Database is that it’s basically a $5 database running in the cloud, it’s not SQL Server running on a big piece of tin in your data centre dedicated to serving requests for your application.

That said the vast majority of the time in practice I’ve found it to perform extremely well but, like all cloud services, you do need to begin using it with your eyes open.

When you create a SQL Database in Azure your database is running on what is effectively a multi-tenanted SQL Server cluster that is attempting to balance your needs with the other tenants while still allowing Microsoft to operate this at the price it does. You have a SQL database but you have no guarantees over server resources (there is now a Premium Database option that I will touch on towards the end of this post).

The first place you’ll catch sight of this is in the SLA:

A 5-minute interval is marked as unavailable if all the customer’s attempts to establish a connection to SQL Azure fail or take longer than 30 seconds to succeed, or if all basic valid read and write operations (as described in our technical documentation) fail after connection is established.

What does this mean in practice? Let’s say you have a website that is moderately active and user activity results in it making 1000 requests to your database every 5 minutes, and those requests are all short, low impact, read operations that typically take a few milliseconds to resolve in normal circumstances.

However let’s say Microsoft suffer some network difficulty within their data centre, or another tenant causes issues with the server, and that causes a dramatic change in the behaviour of your database requests, resulting in 999 of those requests taking 40 seconds to succeed and 1 of them taking 30 seconds to succeed.

At this point your customers are probably giving up on your website but SQL Database is still within its SLA and Microsoft are under no obligation to either give you service credits (which is probably the least of your concerns at this point) or, more importantly, resolve the issue.

How likely is this to happen? I’m not sure I can give you odds but I can say that it’s rare based on sustained usage however it’s possible and it’s happened to me once and it lasted, intermittently, for 48 hours. On that occasion despite it being an acknowledged fault in the data centre (firewall) the Azure Operations Team initially refused to resolve it as the issue was within SLA. Fortunately common sense prevailed and despite being within the “letter” of the SLA it was not within the “spirit” of Azure and the situation was ultimately resolved.

What is far more likely, actually it’s a certainty, is that you will find yourself being throttled by SQL Database either due to your own usage or the usage of another tenant. This will present itself to your application as a SQL exception with a specific error code and a request to retry in 10 seconds time. These errors are known as transient faults and in most situations you try again and things will go through fine.

Because SQL Database is multi-tenanted it’s almost impossible to say when you’ll be throttled but something I’ve seen from experimentation and observation is that as the number of simple low read select queries per second grows past 220 throttling begins to kick in. That’s by no means hard and fast but it’s the best data I have to share.

However another cause of retry that I’ve seen on consistently busy systems seems to be caused by SQL Database doing a failover to another node on what looks to be a scheduled basis as it has a pretty consistent cadence of around 23 to 24 hours, presumably this is Microsoft moving the active node around to enable maintenance on the inactive nodes. This transient fault can be a little more problematic to deal with as it can take longer than 30 seconds for the failover to complete.

Ultimately you need to design your application to function in the face of failure – this can mean simply accepting that sometimes things will fail, or building in retry logic (it’s fairly easy to do and is generally recommended – see the sketch below), while considering how long you want to wait and balancing the expense of immediate failure against the expense of recovery. In either case you ultimately have to design for failure – but this is no different to a sound data centre application architecture.
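By way of illustration, a minimal retry sketch is below. Error 40501 is the documented “service is busy, retry in 10 seconds” throttling code, but there are other transient error numbers too, and for production use I’d lean on a tested policy such as the Transient Fault Handling Application Block rather than rolling your own:

using System;
using System.Data.SqlClient;
using System.Threading.Tasks;

static class SqlRetry
{
    // Minimal sketch: retry an operation when SQL Database reports throttling (error 40501)
    public static async Task<T> ExecuteWithRetryAsync<T>(Func<Task<T>> operation, int maxAttempts = 3)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return await operation();
            }
            catch (SqlException ex)
            {
                if (ex.Number != 40501 || attempt >= maxAttempts) throw;
                // The throttling response asks us to retry after 10 seconds
                await Task.Delay(TimeSpan.FromSeconds(10));
            }
        }
    }
}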

Finally I want to touch on database size – SQL Database is limited to an individual database size of 150Gb and at that capacity is an expensive monthly resource when compared to, for example, Table Storage. By comparison 150Gb of SQL Database will cost, at the time of writing, $225.78 per month whereas 150Gb of Table Storage will cost $14.25 per month.

SQL Database has many more limitations when compared to a big tin SQL Server installation and it’s well worth familiarising yourself with these in depth – the documentation is a good starting point.

As long as you make yourself aware of these limitations and plan to work with them (and there are many techniques to deal with these constraints that I’ll go through in more depth in the future) then SQL Database is still an effective solution for relational data within Azure and, certainly if you’re starting with a greenfield project, will probably be your best option for relational data.

Premium SQL Database

Recently Microsoft introduced Premium SQL Database, which comes in two flavours, both of which give you a guarantee of a minimum level of resource – however it comes at a steep price. The service is currently running in preview with a 50% discount on the price, but even at that a single 1.6GHz core with 8Gb of memory will cost you $479.25 per month for a single 150Gb database. A dual core 1.6GHz server with 16Gb of memory will cost basically twice that at $944.25.

You need to be aware that this is still a multi-tenanted offering and if you exceed the resource limits you have paid for then your application is subject to the same restrictions as standard SQL Database – and in fact you may suffer transient faults in any case due to things like server failover.

As the service is in preview there isn’t an SLA available yet but I would expect it to be very similar to the existing SQL Database SLA, certainly in terms of service availability.

My recommendation would be to only use Premium SQL Database if you are porting a project from a traditional data centre into Azure or have a very specific dependency on SQL, but I would look carefully at whether this makes economic sense and consider concluding the port with code changes to support standard SQL Database.

SQL Server

It’s fairly straightforward to run SQL Server inside Azure using Virtual Machines and in fact you can deploy this using a gallery item that also includes the SQL Server license. This approach can work but there are a number of things to be aware of.

Firstly the cost – you need a SQL Server license to deploy this and, certainly compared to SQL Database, that isn’t cheap. A two node SQL Server cluster using the Standard edition of SQL Server and large virtual machines (4x 1.6GHz cores and 7Gb of memory) will cost you $1354.08 per month, and that’s before you factor in storage costs, but it does include licensing.

Secondly, on top of that, you have to manage this yourself just as you would a standard SQL Server. And finally the real kicker: it still might not behave as your on premise SQL Server behaves, as you are running on top of Azure storage and the network latencies and other characteristics will almost certainly be (at best) different to your on premise solution.

Like Premium SQL Database I would really only recommend this if you are porting a project from a traditional data centre into Azure or you have a very specific requirement (for example there are many SQL Server features not available in SQL Database) but you really need to look and see if it makes economic sense.

MySQL

MySQL has always been a popular database in the open source world, at least until Oracle bought it! It and its derivatives are completely viable to run within Azure and may well be a good option for you. I’m not going to go into the ins and outs of MySQL but there are two fairly straightforward options for running it within Azure.

Firstly you can run your own virtual machine and install MySQL – personally if doing this I would run a Linux virtual machine; it’s cheaper than an equivalent Windows machine and is MySQL’s natural home. Like running SQL Server yourself, if you take this route you are responsible for all the management activities you would need to undertake if you were running it on premise.

Alternatively cleardb provide a MySQL as a service option available from the Azure store. I can’t give much information based on experience, however this blog is running WordPress underpinned by this service and setup was simple. As a service, administration and backup activities are taken care of for you and they offer a free option to get you started.

Conclusion

Azure offers a variety of ways for you to handle relational data, and ultimately through virtual machines you can host any database you like, but for most applications SQL Database is going to prove a cost effective and reliable option with the added benefit of familiarity to most .Net developers. Just make sure you bear in mind its limitations and design your application accordingly. I’ll be coming back to this in a future post.

 

Welcome (and a Table Storage date sorting tip)

Welcome to Azure From The Trenches, a blog where I hope to share information learned from using Azure to develop and run real production applications. The  applications themselves shall largely remain nameless but to illustrate some points I have a sample application to go along with this blog which I’ll be publishing on GitHub shortly.

I really want to cover Azure in a broad sense so expect to see topics on subjects such as:

  • Architecture
  • Business
  • Code
  • Cost modelling
  • Deployment
  • Operating an Azure system
  • Scaling
  • SLAs
  • Testing

As well as sharing information I’d love for this blog to be visited and commented on by others working on Azure so that I can learn from you and improve my own understanding and future work.

As my companion application isn’t quite ready I thought I’d kick things off with a short tip for sorting table storage within a partition by date in a descending order. This had me scratching my head for a while when I first wanted to do this during my early use of Table Storage.

The solution relies on the rows in a partition being stored sorted by the row key in an ascending fashion. The row key is a string so you can’t just drop a date in there, but as long as you’re dealing with recent dates you can quickly and easily place a string in the row key that will let you sort in ascending date order: the Ticks property of the DateTime class.

This is a long value that counts the number of ticks (100 nanosecond intervals) that have elapsed since midnight on the 1st January 0001. If we zero pad that as a string then the partition will be sorted by date:

RowKey = String.Format("{0:D19}", DateTime.UtcNow.Ticks);

To sort in a descending date order we simply need to subtract the current number of ticks from the number of ticks for the maximum date:

RowKey = String.Format("{0:D19}", DateTime.MaxValue.Ticks - DateTime.UtcNow.Ticks);

When I first began to use Azure and Table Storage my instinct was always to run back to SQL but with some lateral thinking you can often bend Table Storage to your will and get better scalability, reliability and cheaper running costs – a topic I plan on coming back to.

