
C# Cloud Application Architecture – Commanding via a Mediator (Part 1)

When designing cloud-based systems we often think about the benefits of loosely coupling systems, for example through queues and pub/sub mechanisms, but fall back on more traditional onion or layered patterns for the application architecture when alternative approaches might provide additional benefits.

My aim over a series of posts is to show how combining the Command pattern with the Mediator pattern can yield a number of benefits including:

  • A better separation of concerns with a clear demarcation of domains
  • An internally decomposed application that is simple to work with early in its lifecycle yet can easily evolve into a microservice approach
  • Reliable support for non-transactional or distributed data storage operations
  • Less repetitive infrastructure boilerplate to write, such as logging and telemetry

To illustrate this we’re going to refactor an example application, over a number of stages, from a layered onion ring style to a decoupled in-process application and finally on through to a distributed application. An up-front warning: to get the most out of these posts a fair amount of code reading is encouraged. It doesn’t make sense for me to walk through each change, and doing so would make the posts ridiculously long, but hopefully the sample code provided is clear and concise. It really is designed to go along with the posts.

The application is a RESTful HTTP API supporting some basic, simplified, functionality for an online store. The core API operations are:

  • Shopping cart management: get the cart, add products to the cart, clear the cart
  • Checkout: convert a cart into an order, mark the order as paid
  • Product: get a product

To support the above there is also a notion of a store product catalog; it’s not exposed through the API, but it is used internally.

I’m going to be making use of my own command mediation framework AzureFromTheTrenches.Commanding as it has some features that will be helpful to us as we progress towards a distributed application, but alternatives such as the popular MediatR are available, and if this approach appeals to you I’d encourage you to look at those too.

Our Starting Point

The code for the onion ring application, our starting point, can be found on GitHub:

https://github.com/JamesRandall/CommandMessagePatternTutorial/tree/master/Part1a

Firstly let’s define what I mean by onion ring or layered architecture. In the context of our example application it’s a series of layers built one on top of the other with calls up and down the chain never skipping a layer: the Web API talks to the application services, the application services to the data access layer, and the data access layer to the storage. Each layer drives the one below. To support this each layer is broken out into its own assembly and a mixture of encapsulation (at the assembly and class level) and careful reference management ensures this remains the case.

In the case of our application this looks like this:

[Diagram: the layered solution – the Web API on top of the application services, on top of the data access layer, on top of storage]

Concrete class coupling is avoided through the use of interfaces and these are constructor injected to promote a testable approach. Normally in such an architecture each layer has its own set of models (data access models, application models etc.) with mapping taking place between them; however for the sake of brevity, and as I’m just using mock data stores, I’ve ignored this and used a single set of models that can be found in the OnlineStore.Model assembly.

As this is such a common pattern – it’s certainly the pattern I’ve come across most often over the last few years (where application architecture has been considered at all; sadly the spaghetti / make-it-up-as-you-go-along approach is still prevalent) – it’s worth looking at what’s good about it:

  1. Perhaps most importantly – it’s simple. You can draw out the architecture on a whiteboard with a set of circles or boxes in 30 seconds and it’s quite easy to explain what is going on and why.
  2. It’s testable. Obviously things get more complicated in the real world, but above the data access layer testing is really easy: everything is mockable and the major concrete external dependencies have been isolated in our data access layer.
  3. A good example of the pattern is open for extension but closed for modification: because there are clear contractual interfaces, decorators can be added to, for example, add telemetry without modifying business logic.
  4. It’s very well supported in tooling – if you’re using Visual Studio its built-in refactoring tools or ReSharper will have no problem crunching over this solution to support changes and help keep things consistent.
  5. It can be used in a variety of different operating environments. I’ve used a Web API for this example but you can find onion / layer type architectures in all manner of solutions.
  6. It’s what’s typically been done before – most engineers are familiar with the pattern and it’s become almost a default choice.

I’ve built many successful systems myself using this pattern and, to a point, it’s served me well. However I started looking for better approaches as I experienced some of the weaknesses:

  1. It’s not as loosely coupled and malleable as it seems. This might seem an odd statement to make given our example makes strong use of interfaces and composition via dependency injection, an approach often seen to promote loose coupling and flexibility. And it does to a point. However we still tend to think of operations in such an architecture in quite a tightly coupled way: this controller calls this method on this interface which is implemented by this service class and on down through the layers. It’s a very explicit invocation process.
  2. Interfaces and classes in the service layer often become “fat” – they tend to end up coupled around a specific part of the application domain (in our case, for example, the shopping basket) and used to group together functionality.
  3. As an onion ring application gets larger it’s quite common for the Application Layer to begin to resemble spaghetti – repositories are referenced from across the code base, services refer to services, and helpers to other helpers. You can mitigate this by decomposing the application layer into further assemblies and making use of various design patterns, but that only takes you so far. These are all sticking plaster over a fundamental problem with the onion ring approach: it’s designed to segregate at the layer level, not at the domain or bounded context level (we’ll talk about DDD later). It’s also common to see concerns start to bleed across what were once discrete layer boundaries.
  4. In the event of fundamental change the architecture can be quite fragile and resistant to easy change. Let’s consider a situation where our online store has become wildly successful and to keep it serving the growing userbase we need to break it apart to handle the growing scale. This will probably involve breaking our API into multiple components and using patterns such as queues to asynchronously run and throttle some actions. With the onion ring architecture we’re going to have to analyse what is probably a large codebase looking for ways to safely break it apart. Almost inevitably this will be more difficult than it first seems (I may have been here myself!) and we’ll uncover all manner of implicit tight coupling points.

There are of course solutions to many of these problems, particularly the code organisation issues, but at some point it’s worth considering if the architecture is still helping you or beginning to hinder you and if perhaps there are alternative approaches.

With that in mind, and as I said earlier, I’d like to introduce an alternative. It’s by no means the alternative but it’s one I’ve found tremendously helpful when building applications for the cloud.

The Command Pattern with a Mediator

The classic command pattern aims to encapsulate all the information needed to perform an action typically within a class that contains properties (the data needed to perform the action) and an execute method (that undertakes the action). For example:

public class GetProductCommand : ICommand<Product>
{
    public Guid ProductId { get; set; }

    public Task<Product> Execute()
    {
         // ... logic
    }
}

This can be a useful way of structuring an application but tightly couples state and execution and that can prove limiting if, for example, it would be helpful for the command to participate in a pub/sub approach, take part in a command chain (step 1 – store state in undo, step 2 – execute command, step 3 – log command was executed), or operate across a process boundary.

A powerful improvement can be made to the approach by decoupling the execution from the command state via the mediator pattern. In this pattern all commands, each of which consist of simple serializable state, are dispatched to the mediator and the mediator determines which handlers will execute the command:
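As a sketch, the command from the earlier example reduces to pure state in this style, with execution moving into a separate handler (I’m using the interface shapes of the framework we’ll see later in the refactored solution; treat the names as illustrative):

public class GetProductCommand : ICommand<Product>
{
    // Pure, serializable state - no execution logic
    public Guid ProductId { get; set; }
}

public class GetProductCommandHandler : ICommandHandler<GetProductCommand, Product>
{
    public Task<Product> ExecuteAsync(GetProductCommand command, Product previousResult)
    {
        // ... logic to fetch and return the product
        return Task.FromResult(new Product());
    }
}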

Because the command is decoupled from execution it is possible to associate multiple handlers with a command, and moving a handler to a different process space (for example splitting a Web API into multiple Web APIs) is simply a matter of changing the configuration of the mediator. And because the command is simple state we can easily serialize it into, for example, an event store (the example below illustrates a mediator that is able to serialize commands to stores directly, but this can also be accomplished through additional handlers):

It’s perhaps worth also briefly touching on the CQRS pattern – while the Command Message pattern may facilitate a CQRS approach it neither requires it nor has any particular opinion on it.

With that out of the way let’s take a look at how this pattern impacts our application’s architecture. Firstly the code for our refactored solution can be found here:

https://github.com/JamesRandall/CommandMessagePatternTutorial/tree/master/Part1b

And from our layered approach we now end up with something that looks like this:

[Diagram: the restructured solution – a Web API host dispatching commands to self-contained shopping cart, checkout, and product applications]

It’s important to realise when looking at this code that we’re going to do more work to take advantage of the capabilities being introduced here. This is really just a “least work” refactor to demonstrate some high level differences – hopefully over the next few posts you’ll see how we can really use these capabilities to simplify our solution. However even at this early stage there are some clear points to note about how this affects the solution structure and code:

  1. Rather than structure our application architecture around technology we’ve restructured it around domains. Where previously we had a data access layer, an application layer and an API layer we now have a shopping cart application, a checkout application, a product application and a Web API application. Rather than the focus being on technology boundaries it’s now on business domain boundaries.
  2. Across boundaries (previously layers, now applications) we’ve shifted our invocation semantics from interfaces, methods and parameters to simple state that is serializable and persistable and communicated through a “black box” mediator or dispatcher. The instigator of an operation no longer has any knowledge about what will handle the operation and as we’ll see later that can even include the handler executing in a different process space such as another web application.
  3. I’ve deliberately used the term application for these business domain implementations as even though they are all operating in the same process space it really is best to think of them as self contained units. Other than a simple wiring class to allow them to be configured in the host application (via a dependency injector) they interoperate with their host (the Web API) and between themselves entirely through the dispatch of commands.
  4. The intent is that each application is fairly small in and of itself and that it can take the best approach required to solve its particular problem. In reality it’s quite typical that many of the applications follow the same conventions and patterns, as they do here, and when this is the case it’s best to establish some code organisation conventions. In fact it’s not uncommon to see a lightweight onion ring architecture used inside each of these applications (so for lovers of the onion ring – you don’t need to abandon it completely!).

Let’s start by comparing the controllers. In our traditional layered architecture our product controller looked like this:

[Route("api/[controller]")]
public class ProductController : Controller
{
    private readonly IProductService _productService;

    public ProductController(IProductService productService)
    {
        _productService = productService;
    }

    [HttpGet("{id}")]
    public async Task<StoreProduct> Get(Guid id)
    {
        return await _productService.GetAsync(id);
    }
}

As expected this accepts an instance of a service, in this case an IProductService, and its Get method simply passes the ID on down to it. Although the controller is abstracted away from the implementation of the IProductService it is clearly and directly linked to one type of service and a method on that service.

Now let’s look at its replacement:

[Route("api/[controller]")]
public class ProductController : Controller
{
    private readonly ICommandDispatcher _dispatcher;

    public ProductController(ICommandDispatcher dispatcher)
    {
        _dispatcher = dispatcher;
    }

    [HttpGet("{id}")]
    public async Task<StoreProduct> Get(Guid id)
    {
        return await _dispatcher.DispatchAsync(new GetStoreProductQuery
        {
            ProductId = id
        });
    }
}

In this implementation the controller accepts an instance of ICommandDispatcher and then, rather than invoke a method on a service, it calls DispatchAsync on that dispatcher supplying it a command model. The controller no longer has any knowledge of what is going to handle this request and all our commands are executed by discrete handlers. In this case the GetStoreProductQuery command is handled by the GetStoreProductQueryHandler in the Store.Application assembly:

internal class GetStoreProductQueryHandler : ICommandHandler<GetStoreProductQuery, StoreProduct>
{
    private readonly IStoreProductRepository _repository;

    public GetStoreProductQueryHandler(IStoreProductRepository repository)
    {
        _repository = repository;
    }

    public Task<StoreProduct> ExecuteAsync(GetStoreProductQuery command, StoreProduct previousResult)
    {
        return _repository.GetAsync(command.ProductId);
    }
}

It really does no more than the implementation of the ProductService in our initial application but importantly it implements the ICommandHandler generic interface, supplying as generic types the type of the command it handles and the result type. Our command mediation framework takes care of routing the command to this handler and will ultimately call ExecuteAsync on it. The framework we are using here allows commands to be chained and so as well as being given the command state the handler is also given any previous result.

Handlers are registered against commands as follows (this can be seen in the IServiceCollectionExtenions.cs file of Store.Application):

commandRegistry.Register<GetStoreProductQueryHandler>();

The command mediation framework has all it needs from the definition of GetStoreProductQueryHandler to map the handler to the command.

I think it’s fair to say that’s all pretty loosely coupled! In fact it’s so loose that if we weren’t going to go on to make more of this pattern we might conclude that the loss of immediate traceability is not worth it.

In the next part we’re going to visit some of our more complicated controllers and commands, such as the Put verb on the ShoppingCartController, to look at how we can massively simplify the code below:

[HttpPut("{productId}/{quantity}")]
public async Task<IActionResult> Put(Guid productId, int quantity)
{
    CommandResponse response = await _dispatcher.DispatchAsync(new AddToCartCommand
    {
        UserId = this.GetUserId(),
        ProductId = productId,
        Quantity = quantity
    });
    if (response.IsSuccess)
    {
        return Ok();
    }
    return BadRequest(response.ErrorMessage);
}

By way of a teaser the aim is to end up with really thin controller actions that, across the piece, look like this:

[HttpPut("{productId}/{quantity}")]
public async Task<IActionResult> Put([FromRoute] AddToCartCommand command) => await ExecuteCommand(command);

We’ll then roll that approach out over the rest of the API. Then get into some really cool stuff!

Part 2 can be found here.

Upgrading an ASP.Net Core 1.1 project to ASP.Net Core 2.0 and running it on Azure

Following the instructions on the ASP.Net Core 2.0 announcement blog post I was quickly able to get a website updated to ASP.Net Core 2.0 and run it locally.

The website is hosted on Azure in the App Service model and unfortunately after publishing my project I encountered IIS 502.5 errors. Luckily this was in a deployment slot so my public website was unaffected.

After a bit of head scratching I found two extra steps were required to upgrade an existing web site.

Firstly, in your .csproj file you should have an ItemGroup like the one below:

<ItemGroup>
    <DotNetCliToolReference Include="Microsoft.VisualStudio.Web.CodeGeneration.Tools" Version="1.0.1" />
</ItemGroup>

You need to change the version to 2.0.0.
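So the ItemGroup becomes:

<ItemGroup>
    <DotNetCliToolReference Include="Microsoft.VisualStudio.Web.CodeGeneration.Tools" Version="2.0.0" />
</ItemGroup>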

Secondly you need to clear out all the old files from your Azure hosted instance. The easiest way to do this is to open Kudu (Advanced Tools in the portal), navigate to the site/wwwroot folder and delete them.

After that if you publish your website again it should work fine.

Hope that saves someone a few hours of head-scratching.

PowerShell, Binding Redirects, and Visual Studio Team Services

I’ve blogged previously about setting up binding redirects for PowerShell, with Newtonsoft.Json being a particularly troublesome package – it’s such a common dependency for NuGet packages that if you deal with a complex project you’ll almost certainly need a redirect in your app/web.config files to get things to play ball, and if you use the Azure cmdlets alongside others (such as your own) you’re likely to face this problem in PowerShell.

I’ve recently moved my projects into Visual Studio Team Services using the new (vastly improved!) scriptable build system where I often make use of the PowerShell script task to perform custom actions. If you hit a dependency issue that requires a binding redirect to resolve then my previous approach of creating a Powershell.exe.config file won’t work in VSTS: unless you build a custom build agent you don’t have access to the machine at this level.

After a bit of head scratching I came up with an alternative solution that in many ways is neater and more generally portable as it doesn’t require any special machine setup. My revised approach is to hook the AssemblyResolve event and return a preloaded target assembly as shown in the example below:
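A sketch of the approach (assuming Newtonsoft.Json is the troublesome assembly and that the version you want everything to use sits alongside the script – adjust the path and name to suit):

# Preload the version of the assembly we want everything to use
$newtonsoftAssembly = [System.Reflection.Assembly]::LoadFrom("$PSScriptRoot\Newtonsoft.Json.dll")

# Hook AssemblyResolve and hand back the preloaded assembly whenever any version is requested
$onAssemblyResolve = [System.ResolveEventHandler] {
    param($sender, $eventArgs)
    if ($eventArgs.Name.StartsWith("Newtonsoft.Json,")) {
        return $newtonsoftAssembly
    }
    return $null
}
[System.AppDomain]::CurrentDomain.add_AssemblyResolve($onAssemblyResolve)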

Note that you can’t use the more common Register-ObjectEvent method of subscribing to events as this will balk at the need for a return value.

You can of course use this technique to deal with other assemblies that might be giving you issues.

Capturing and Tracing All HTTP Requests in C# and .Net

Modern applications are complex and often rely on a large number of external resources increasingly accessed using HTTP – for example most Azure services are interacted with using the HTTP protocol.

That being the case it can be useful to get a view of the requests your application is making and while this can be done with a tool like Fiddler that’s not always convenient in a production environment.

If you’re using the HttpClient class another option is to pass a custom message handler to its constructor, but this relies on you being in direct control of all the code making HTTP requests and that’s unlikely.

A simple way of capturing this information without getting into all the unpleasantness of writing a TCP listener or HTTP proxy is to use the System.Diagnostics namespace. From .net 4.5 onwards the framework has been writing HTTP events to the System.Diagnostics.Eventing.FrameworkEventSource source. This isn’t well documented and I found the easiest way to figure out what events are available, and their Event IDs, is to read the source.

Once you’ve found the HTTP events it’s quite straightforward to write an event listener that listens to this source. The below class will do this and output the details to the trace writer (so you can view it in the Visual Studio Debug Output window) but you can easily output it to a file, table storage, or any other output format of your choosing.
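A sketch of such a listener is below – it derives from System.Diagnostics.Tracing.EventListener, and the keyword mask is the NetClient keyword as I read it from the FrameworkEventSource source, so do verify it against the version of the framework you’re running:

public class HttpEventListener : EventListener
{
    // 0x4 is the NetClient keyword in FrameworkEventSource and covers the HTTP events
    private const EventKeywords NetClient = (EventKeywords)0x0004;

    protected override void OnEventSourceCreated(EventSource eventSource)
    {
        if (eventSource.Name == "System.Diagnostics.Eventing.FrameworkEventSource")
        {
            EnableEvents(eventSource, EventLevel.Informational, NetClient);
        }
    }

    protected override void OnEventWritten(EventWrittenEventArgs eventData)
    {
        // The payload contains details such as the request URI depending on the event
        string payload = eventData.Payload != null ? string.Join(", ", eventData.Payload) : string.Empty;
        Trace.WriteLine(string.Format("HTTP event {0}: {1}", eventData.EventId, payload));
    }
}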

To set it running all you need to do is instantiate the class.

If you’d like to see this kind of data and much more, collected, correlated and analysed then you might want to check out my project Hub Analytics that is currently running a free beta.

Changing the App Service Plan of an Azure App Service

To allow a number of App Services to scale independently I needed to pull one of them out of an App Service Plan where it had lived with 3 others to sit in its own plan – experience had shown me that its scaling characteristics are really quite different from those of the other App Services.

You can do this straightforwardly and pretty much instantly either in the Portal (there’s a Change App Service Plan option in Settings) or with PowerShell (with the Set-AzureRmAppServicePlan cmdlet).

Super simple – but I did encounter one gotcha. This doesn’t move any deployment slots you might have created and so you end up in a situation with the main App Service sat in one App Service Plan and its deployment slots in another, which probably isn’t what you want and, in any case, Azure won’t let you swap slots in different service plans.

The solution is simple: you can also move them between App Service Plans in the same way.

 

Signalling API Unavailability

When doing upgrades of websites it’s often useful to be able to signal to users that your service is offline for maintenance, either in part or in entirety. This is quite straightforward to implement unless you’ve got something like an AngularJS or React app that could well be cached in the browser and that actually wants to respond to 503 status codes returned from a web based API – then CORS has a habit of getting in the way.

To help with that I’ve just pushed this super simple and lightweight ASP.Net website to GitHub that will respond with a 503 status code to any request made of it while ensuring that the CORS protocol will succeed meaning that the 503 status code will make its way through to your own error handling.
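The repository is the definitive version, but a minimal sketch of the technique looks something like this in Global.asax.cs:

public class Global : HttpApplication
{
    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        HttpRequest request = HttpContext.Current.Request;
        HttpResponse response = HttpContext.Current.Response;

        // Reflect the origin back so the CORS checks pass and the browser lets
        // the calling JavaScript see the real status code
        string origin = request.Headers["Origin"];
        if (!string.IsNullOrEmpty(origin))
        {
            response.AppendHeader("Access-Control-Allow-Origin", origin);
            response.AppendHeader("Access-Control-Allow-Methods", "GET, POST, PUT, DELETE, OPTIONS");
            response.AppendHeader("Access-Control-Allow-Headers", request.Headers["Access-Control-Request-Headers"] ?? "*");
        }

        // The CORS preflight must succeed; everything else gets the 503
        response.StatusCode = request.HttpMethod == "OPTIONS" ? 200 : 503;
        CompleteRequest();
    }
}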

It’s ideal for hosting in an Azure deployment slot during upgrades that swap slots.

Note: an alternative approach would be to use the URL rewriter in web.config. It’s not particularly intuitive or, to my taste, readable but I believe it can be configured to perform the same task.

Azure Resource Manager and Powershell 1.0

Microsoft recently updated Azure PowerShell to 1.0, in the process introducing a large number of breaking changes to the cmdlets. Essentially they’ve removed Switch-AzureMode and instead the Azure Resource Manager cmdlets have all had an Rm introduced to them, so for example:

New-AzureResourceGroupDeployment

becomes

New-AzureRmResourceGroupDeployment

For the most part the errors on upgrading a PowerShell script are obvious and a rename will get you going, however somewhat confusingly there are now pairs of cmdlets that do the same thing but only affect their own cmdlet set.

An example of one that caught me out is Select-AzureSubscription. I’ve got numerous Azure subscriptions and scripts that deploy into different subscriptions depending on what I’m doing. Previously I used Select-AzureSubscription along with Switch-AzureMode and this worked. The problem is that Select-AzureSubscription still works – but it has no effect on the cmdlets that use the Rm prefix.

This led, confusingly, to a resource group being created in a different subscription to the one I intended, and to attempts to create resources in it with the same names as resources in my intended resource group. Those attempts failed as the resources already existed with the same names elsewhere.

The fix was to use Select-AzureRmSubscription.
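For example:

Select-AzureRmSubscription -SubscriptionName "My Production Subscription"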

 

Multi-Tenanted Authentication with Azure AD and Office 365 (and IdentityServer3)

With solutions such as Azure AD and Office 365 becoming more common as a source of an organisation’s identity on the Internet it can be useful to have an application offer authentication against them. As a typical scenario let’s imagine you’ve developed a time tracking SaaS solution and you have a number of customers who want to use it but log on with their Office 365 identities.

It’s a bit of a lengthy process but not difficult to do when you know how and hopefully by way of this worked example I can show you how to add this functionality yourself.

As it’s fairly commonplace to want to offer other external identities alongside Office 365 I’m going to integrate this into the excellent IdentityServer3 OpenID Connect Provider; however nothing here really depends on IdentityServer3 and if you’ve got a basic familiarity with OWIN it should be fairly obvious how to decouple what I’ve done here.

The source code for all the below can be found in GitHub.

Setup the Visual Studio Project

Firstly open up Visual Studio (I’m using 2015) and create a new ASP.Net Web Application. In the New ASP.Net Project dialog select to build an MVC application and ensure that the authentication method is set to No Authentication:

[Screenshot: the New ASP.Net Project dialog with MVC selected and No Authentication set]

Now you need to add a set of NuGet packages to the project:

Install-Package Thinktecture.IdentityServer3
Install-Package Microsoft.Owin.Host.SystemWeb
Install-Package Microsoft.Owin.Security.Cookies
Install-Package Microsoft.Owin.Security.OpenIdConnect

Now in the properties pane for the project enable SSL and note the port number:

[Screenshot: the project properties pane with SSL enabled and the port number shown]

And in the project properties (I love how this is spread liberally about the place!) set the start page to be the https site:

[Screenshot: the project properties with the start page set to the https URL]

Now add a class called Startup to the root of the project as below. As we’ve added the Microsoft.Owin.Host.SystemWeb package it’s going to look for this on startup and will throw an exception if it can’t find it – we’ll fill it in later.

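For now it’s just an empty shell:

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // We'll configure IdentityServer3 in here shortly
    }
}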

 

Now add the following to your web.config file:

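From memory this is the RAMMFAR setting that IdentityServer3’s hosting documentation calls for – check the IdentityServer3 docs if in doubt:

<system.webServer>
    <modules runAllManagedModulesForAllRequests="true" />
</system.webServer>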

 

At this point you should be able to run your solution and the boilerplate MVC website should appear, secured with a self-signed SSL certificate generated by Visual Studio.

Setup an Azure AD

We’re going to use two domains for the following – one for our application to authenticate against and federate against other directories, and a second domain that we’re going to use to log in with.

If you’ve got an Azure subscription you should already have a default domain so we’ll use that for logging in.

To enable the multi-tenanted scenario create a second Azure AD in the (old) management portal. This AD is used to host an Azure AD Application and handles the federation between the other domains.

Go to the Applications tab for your directory and add an application, selecting “Add an application my organisation is developing”. Give the application an appropriate customer-facing name (they will be able to see this in their own domain configuration and during sign up) and set this to be a web application:

[Screenshot: the Azure AD add application wizard]

In the next page set the sign on URL to be based on the root of your web project (as you noted earlier) extended with the path /identity/signin-azuread giving you a URL such as:

https://localhost:44300/identity/signin-azuread

In the second box you need to set a unique URI for your application. The best way to do this is based on your own domain in the form http://mydomain.onmicrosoft.com/myapplication:

[Screenshot: the application properties page with the sign-on URL and App ID URI completed]

Azure will now whirr away for a second or two before showing you the application dashboard. Click the configure tab, move down to the middle of the page, set Application is Multi-Tenant to on and create a new key. Then hit save – the key will only be displayed after you’ve saved. Note down both the key and the client ID as you will need them later.

[Screenshot: the configure tab with multi-tenancy enabled and the new key displayed]

Configure IdentityServer3 and Azure AD / Office 365 as an Identity Provider

Back in Visual Studio edit the Startup class we created earlier. Firstly add a folder to the root of the project called Configuration, download the IdentityServer3 test certificate, and add it to this folder setting it to Copy if Newer in the properties panel:

[Screenshot: the Configuration folder containing the test certificate]

IdentityServer3 will use this certificate (when configured below) to sign the tokens it issues however it’s important to be clear that in a production environment this certificate needs to be generated and kept securely. The certificate used here is a certificate used in the IdentityServer3 tests and stored / loaded in a manner designed for brevity in the source code rather than security. In a production environment you’d keep this in your certificate store.

Now configure IdentityServer3 as an endpoint within this solution by adding the following to the Startup class:

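A sketch of that configuration (in-memory stores, the test signing certificate, and the client definitions left to a helper – see the IdentityServer3 docs for the client options):

public void Configuration(IAppBuilder app)
{
    app.Map("/identity", idsrvApp =>
    {
        idsrvApp.UseIdentityServer(new IdentityServerOptions
        {
            SiteName = "Example Identity Server",
            SigningCertificate = LoadCertificate(),
            Factory = new IdentityServerServiceFactory()
                .UseInMemoryUsers(new List<InMemoryUser>())
                .UseInMemoryClients(Clients.Get()) // a helper you write returning your client definitions
                .UseInMemoryScopes(StandardScopes.All),
            AuthenticationOptions = new AuthenticationOptions
            {
                IdentityProviders = ConfigureIdentityProviders
            }
        });
    });
}

private static X509Certificate2 LoadCertificate()
{
    // The IdentityServer3 test certificate added to the Configuration folder earlier
    return new X509Certificate2(
        string.Format(@"{0}\Configuration\idsrv3test.pfx", AppDomain.CurrentDomain.BaseDirectory),
        "idsrv3test");
}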

We’ve set IdentityServer3 up to use its in-memory stores and created a client that we’ll later authenticate against (see the IdentityServer3 docs for more details on this).

We now need to configure Azure AD / Office 365 as an identity provider by filling in the ConfigureIdentityProviders method:

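A sketch of that method – the client ID is the one you noted down from the Azure portal, and the common endpoint plus disabled issuer validation are what make the multi-tenanted sign-in possible:

private void ConfigureIdentityProviders(IAppBuilder app, string signInAsType)
{
    app.UseOpenIdConnectAuthentication(new OpenIdConnectAuthenticationOptions
    {
        AuthenticationType = "azuread",
        Caption = "Azure AD / Office 365",
        SignInAsAuthenticationType = signInAsType,
        Authority = "https://login.microsoftonline.com/common/", // note the trailing /
        ClientId = "<client ID from the Azure portal>",
        RedirectUri = "https://localhost:44300/identity/signin-azuread",
        TokenValidationParameters = new TokenValidationParameters
        {
            // Tokens can come from any tenant so we can't validate a single issuer up front
            ValidateIssuer = false
        }
    });
}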

It’s vital that the client ID and redirect URI supplied to the Open ID Connect options exactly match those in the Azure AD application you created earlier and the Authority string must look exactly as written here including the trailing /.

At this point things will build but there’s nothing in our solution requiring authorization; however you can navigate to the published configuration endpoint and should see some JSON describing our service. My endpoint is at:

https://localhost:44300/identity/.well-known/openid-configuration

Authorizing and Viewing Claims

We’re going to use an Authorize attribute on an MVC action to test our work so far. To begin with we need to get our MVC project to use the token end point we’ve created above. We need to add the following to the bottom of our Configuration method:

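Something like the following – the client ID must match a client configured in IdentityServer and the port must be your own:

app.UseCookieAuthentication(new CookieAuthenticationOptions
{
    AuthenticationType = "Cookies"
});

app.UseOpenIdConnectAuthentication(new OpenIdConnectAuthenticationOptions
{
    Authority = "https://localhost:44300/identity",
    ClientId = "mvc",
    RedirectUri = "https://localhost:44300/",
    ResponseType = "id_token",
    Scope = "openid profile",
    SignInAsAuthenticationType = "Cookies"
});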

You need to ensure you use the port your website is running on on your development machine. Mine is on 44300.

Modify the About action in HomeController.cs to look like the following:

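The only changes are the Authorize attribute and handing the user’s claims to the view:

[Authorize]
public ActionResult About()
{
    return View((User as ClaimsPrincipal).Claims);
}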

And the corresponding View to:

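A simple view that just lists the claims will do:

@model IEnumerable<System.Security.Claims.Claim>

<dl>
    @foreach (var claim in Model)
    {
        <dt>@claim.Type</dt>
        <dd>@claim.Value</dd>
    }
</dl>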

Now run the project and if you’ve got all the magic numbers and URLs right you should be navigated to the Azure AD logon page:

[Screenshot: the Azure AD logon page]

Log in using a different AD to the one we set up earlier and you should end up back at the About page with a set of claims showing:

[Screenshot: the About page listing the claims]

If you now head back into the Azure Management Portal and inspect the Applications tab for the Azure AD you used to login with you should find that the application we created earlier has been added here:

[Screenshot: the Applications tab of the login directory showing the newly added application]

If all that worked then great – that’s the happy, easy, path dealt with! If not I suggest checking out my number one tip for dealing with IdentityServer3 issues.

The Administrator Consent Workflow

Depending on your domain configuration (the one you logged in with) then you might have hit the error page shown below:

[Screenshot: the Azure AD error page shown when administrator consent is required]

Many domains, particularly of larger organisations, will have enabled Administrator Consent. This prevents users from being able to use their domain account to access resources they are not authorized to – unfortunately the error above isn’t very useful to end users.

To allow users in a domain configured this way to be able to access your application you need to implement a workflow that allows the Administrator to grant consent for users. Before continuing, if you’ve followed the steps above, then in Azure delete the application from your login domain – select it in the management portal and click Manage Access, then click Remove Access.

Firstly let’s turn on Administrator Consent in our login domain – select the domain in the management portal and click the Configure tab. When Azure has finished whirring away scroll down to the Integrated Applications section and set “USERS MAY GIVE APPLICATIONS PERMISSION TO ACCESS THEIR DATA” and “USERS MAY ADD INTEGRATED APPLICATIONS” to no. Then press save.

[Screenshot: the Integrated Applications settings with both options set to no]

Now in the users section of your domain add a new user and give them only the User role:

[Screenshot: adding a new user with only the User role]

Next head back to the Azure AD application we created at the start of this post, click the Configure tab and scroll down to the single sign on section. Enter a URL for the root of your website and click save:

[Screenshot: the single sign-on settings with the root URL entered]

Ensure the browser is closed and if you now run the Visual Studio solution again and attempt to login with this new user then you should see the error message shown earlier. If not check the configuration in Azure once more.

At this point we’ve got a domain that requires administrator consent for users to attach to our application. Begin by adding a new NuGet package:

Install-Package Microsoft.IdentityModel.Clients.ActiveDirectory

Then add a class called AzureTenant as shown below:

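It’s little more than a bag of state – a sketch (the property names here are illustrative; the real class is in the GitHub repo):

public class AzureTenant
{
    // Our own identifier for the tenant - it doubles as the OAuth state parameter
    public Guid Id { get; set; }

    // The Azure AD tenant ID, populated once association is complete
    public string TenantId { get; set; }

    public bool IsPending { get; set; }
}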

We now need to be able to manage our tenants. Normally you’d use some form of persistent store for this but I’m going to use a very basic in-memory service by way of an example. The full service code is in GitHub here, however there are two important sections. Firstly our controller (which we’ll work on later) will create a temporary tenant before handing off to Azure, and later on, after association is complete, we convert this to a permanent tenant. This two stage process can be useful as in a more realistic example you may want to track some additional state across this request; all you need is some way of attaching data to a persistent unique ID, and reusing the tenant like this keeps the example simple. This code can be seen below:

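A sketch of those two stages (the GitHub version is the reference; this is just the shape of it):

private readonly ConcurrentDictionary<Guid, AzureTenant> _tenants = new ConcurrentDictionary<Guid, AzureTenant>();

public Task<AzureTenant> CreateTemporaryTenantAsync()
{
    AzureTenant tenant = new AzureTenant { Id = Guid.NewGuid(), IsPending = true };
    _tenants[tenant.Id] = tenant;
    return Task.FromResult(tenant);
}

public Task<AzureTenant> MakePermanentAsync(Guid id, string azureTenantId)
{
    AzureTenant tenant = _tenants[id];
    tenant.TenantId = azureTenantId;
    tenant.IsPending = false;
    return Task.FromResult(tenant);
}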

When our controller hands off to Azure to handle the domain association we need to do so by forming up a URL, as can be seen below:

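The URL is the common authorization endpoint with prompt=admin_consent and our temporary tenant ID as the state – a sketch (the field names are illustrative):

public string GetAdminConsentUrl(Guid temporaryTenantId)
{
    // prompt=admin_consent is what triggers the administrator consent page
    return "https://login.microsoftonline.com/common/oauth2/authorize" +
           "?client_id=" + Uri.EscapeDataString(_clientId) +
           "&response_type=code" +
           "&redirect_uri=" + Uri.EscapeDataString(_redirectUri) +
           "&prompt=admin_consent" +
           "&state=" + temporaryTenantId;
}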

And finally following the association we need to retrieve the tenant ID for the newly associated tenant (and this is where your client key created at the beginning of this piece is required):
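A sketch using ADAL (the Microsoft.IdentityModel.Clients.ActiveDirectory package we added above) to exchange the authorization code, from which the tenant ID drops out of the result:

public async Task<string> GetTenantIdFromAuthorizationCodeAsync(string authorizationCode)
{
    var authenticationContext = new AuthenticationContext("https://login.microsoftonline.com/common/");
    // The client ID and key you noted down at the beginning of this piece
    var clientCredential = new ClientCredential(_clientId, _clientKey);
    AuthenticationResult result = await authenticationContext.AcquireTokenByAuthorizationCodeAsync(
        authorizationCode, new Uri(_redirectUri), clientCredential);
    return result.TenantId;
}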

Now we need to add an MVC controller that is going to present this functionality to the user. There is some boilerplate at the bottom but I want to focus here on the AdminConsent and Associate methods as shown below:

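Sketched out (the service and action names are illustrative – the full controller is in the GitHub repo):

[HttpGet]
public ActionResult AdminConsent()
{
    return View();
}

[HttpPost]
[ActionName("AdminConsent")]
public async Task<ActionResult> AdminConsentPost()
{
    // Stage one: create the temporary tenant...
    AzureTenant tenant = await _tenantService.CreateTemporaryTenantAsync();
    // ...then hand off to Azure using its ID as the OAuth state
    return Redirect(_tenantService.GetAdminConsentUrl(tenant.Id));
}

public async Task<ActionResult> Associate(string state, string code, string error)
{
    if (!string.IsNullOrEmpty(error))
    {
        return RedirectToAction("AssociationFailed");
    }

    // Stage two: exchange the code for the tenant ID and make the tenant permanent
    Guid temporaryTenantId = Guid.Parse(state);
    string azureTenantId = await _tenantService.GetTenantIdFromAuthorizationCodeAsync(code);
    await _tenantService.MakePermanentAsync(temporaryTenantId, azureTenantId);
    return RedirectToAction("AssociationComplete");
}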

The default admin consent view presents a button to the user and a paragraph confirming what they are about to do – when the post back hits the controller it does two things. Firstly it creates a temporary tenant in our data store and then redirects to the Azure authorization endpoint using the temporary ID of the tenant as the state.

When the user confirms the association in the Azure AD management page presented during the redirect then it will call back to the Associate action passing in the temporary tenant ID as the state. In more complex examples tracking the state in this way allows us to track back the Azure request to any information we’ve gathered before confirmation. The Associate action then uses our in memory service to confirm the association and handles the routing off to the appropriate page based on success or failure.

Finally having done all of the above we’re going to put a link to our admin consent page on the home page. I modified the Jumbotron to point at the AdminConsent view.

At this point you should be able to run the solution (and this is the complete solution in GitHub) and if things are wired up correctly then you should be able to click your admin consent link. Try this with your test user and it will fail but if you then try it with your admin user you should see a screen like this:

[Screenshot: the Azure AD administrator consent page]

However if you’ve got everything right you should end up back at the success screen on your website and can now navigate into the About section.

Hopefully that helps anyone trying to figure out how to do this themselves and feel free to ask me questions in the comments or on Twitter.

Finally, worth repeating – the code is all available on GitHub.

Powershell and External IP Addresses

While doing work with PowerShell and Azure Resource Manager templates today I wanted to configure the Azure SQL Database firewall to allow my build machine access to configure schema and stored procedures. One fairly reliable method of knowing your Internet-facing IP is to bounce it back from a remote server, so I stood up a super simple website on Azure with a REST API that returns your IP to you. You can find its swagger endpoint here. All that’s really behind the get action is a simple return of a request header:

public string Get()
{
    return HttpContext.Current.Request.UserHostAddress;
}

Getting hold of it from PowerShell is pretty simple, you can use the Invoke-WebRequest cmdlet as follows (I’m also removing the double quotes surrounding the returned JSON string):

$ipAddress = (Invoke-WebRequest https://afwhatsmyip.azurewebsites.net/ip).Content
$ipAddress = $ipAddress.Substring(1,$ipAddress.Length-2)

Feel free to use the endpoint yourself, I’m not planning on taking it down.

Azure Notification Hub – Double Message Gotcha

I recently worked on a piece of Azure hosted software that was required to send a large volume of push notifications to mobile devices and so decided to give Azure Notification Hubs a whirl.

On the whole I found it to be pretty easy to set up and get going with and very well documented with walkthroughs for most scenarios (both server side and device) across all the major platforms.

But – and you knew there was a but coming, didn’t you? – I did come across one major gotcha while building an implementation based off the tutorial on sending notifications from your server.

After implementing this and doing a bit of debugging I was finding that every time I sent a message from the server to the device my device was responding to it multiple times. Due to how devices and the registration process work you do have to be careful to ensure that your device isn’t already registered with the notification hub and Microsoft themselves show you how to do this in this code snippet:

public class DeviceRegistration
{
    public string Platform { get; set; }
    public string Handle { get; set; }
    public string[] Tags { get; set; }
}
 
// POST api/register
// This creates a registration id
public async Task<string> Post(string handle = null)
{
    string newRegistrationId = null;
 
    // make sure there are no existing registrations for this push handle (used for iOS and Android)
    if (handle != null)
    {
        var registrations = await hub.GetRegistrationsByChannelAsync(handle, 100);
 
        foreach (RegistrationDescription registration in registrations)
        {
            if (newRegistrationId == null)
            {
                newRegistrationId = registration.RegistrationId;
            }
            else
            {
                await hub.DeleteRegistrationAsync(registration);
            }
        }
    }
 
    if (newRegistrationId == null) 
        newRegistrationId = await hub.CreateRegistrationIdAsync();
 
    return newRegistrationId;
}
 
// PUT api/register/5
// This creates or updates a registration (with provided channelURI) at the specified id
public async Task<HttpResponseMessage> Put(string id, DeviceRegistration deviceUpdate)
{
    RegistrationDescription registration = null;
    switch (deviceUpdate.Platform)
    {
        case "mpns":
            registration = new MpnsRegistrationDescription(deviceUpdate.Handle);
            break;
        case "wns":
            registration = new WindowsRegistrationDescription(deviceUpdate.Handle);
            break;
        case "apns":
            registration = new AppleRegistrationDescription(deviceUpdate.Handle);
            break;
        case "gcm":
            registration = new GcmRegistrationDescription(deviceUpdate.Handle);
            break;
        default:
            throw new HttpResponseException(HttpStatusCode.BadRequest);
    }
 
    registration.RegistrationId = id;
    var username = HttpContext.Current.User.Identity.Name;
 
    // add check if user is allowed to add these tags
    registration.Tags = new HashSet<string>(deviceUpdate.Tags);
    registration.Tags.Add("username:" + username);
 
    try
    {
        await hub.CreateOrUpdateRegistrationAsync(registration);
    }
    catch (MessagingException e)
    {
        ReturnGoneIfHubResponseIsGone(e);
    }
 
    return Request.CreateResponse(HttpStatusCode.OK);
}

You’ll notice the comment in the body of the Post action about making sure there are no existing registrations for the push handle. The handle is a token you obtain from the device and is used, as illustrated, as part of the registration process.

The problem is – this code sample doesn’t work.

I learned the hard way that when the registration is created (in the Put action) the notification hub converts the handle to all upper case text. The handle from my device (which I obtained using the Cordova PushPlugin – I was working in Ionic) contained a mix of lower case letters and numbers.

When you subsequently search for registrations that match the handle (var registrations = await hub.GetRegistrationsByChannelAsync(handle, 100)) it performs a case sensitive search and none of your previous registrations will be found.

The result is that you can find yourself registering the same device token multiple times against different registration IDs and when you do a message send it is sent multiple times – multiple registrations == multiple sends.
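One straightforward workaround is to normalise the handle to upper case before it goes anywhere near the hub, in both the Post and the Put actions, so that the stored and searched values always match. For example, in the Post action:

public async Task<string> Post(string handle = null)
{
    string newRegistrationId = null;

    if (handle != null)
    {
        // The hub upper-cases handles when a registration is created, so
        // upper-case ours before searching to make the search match
        handle = handle.ToUpperInvariant();
        var registrations = await hub.GetRegistrationsByChannelAsync(handle, 100);

        foreach (RegistrationDescription registration in registrations)
        {
            if (newRegistrationId == null)
            {
                newRegistrationId = registration.RegistrationId;
            }
            else
            {
                await hub.DeleteRegistrationAsync(registration);
            }
        }
    }

    if (newRegistrationId == null)
        newRegistrationId = await hub.CreateRegistrationIdAsync();

    return newRegistrationId;
}

Do the same to deviceUpdate.Handle in the Put action before the registration description is created.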

If you suspect you might have a similar problem you can enumerate all registrations with the GetAllRegistrationsAsync method on the NotificationHubClient.

Other than that, and as I said earlier, my experience with the Notification Hub was really very smooth.
