Updates from June, 2012

  • Richard 7:36 pm on June 24, 2012 Permalink |  

    Applying CQRS to Azure 

    I was recently fortunate enough to hear Nathan Totten’s talk about the architecture of the Tankster game. What interested me was the broadly CQRS-style approach taken in the game’s architecture. As a point of comparison, a traditional web service provides CRUD operations, both reads and writes, through a single compute-backed RESTful interface.

    Tankster, on the other hand, has a web service for writes only. This service essentially ends up placing JSON documents in Windows Azure Blob Storage. The documents are made publicly available, and are laid out in such a way that they look like a read-only RESTful interface. Clever. Your solution now has a highly scalable read capability at very low cost; you’re certainly not paying for the compute to deliver it (on blob storage Azure charges for storage, transactions and egress, while compute is charged by the hour).
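To make the idea concrete, here is a minimal sketch of that read path. The account and container names are invented for illustration; the point is that once game state lives in a publicly readable blob at a predictable URL, a "read" is just an HTTP GET against storage, with no compute role involved.

```python
# Hypothetical sketch of a Tankster-style read path: game state is a JSON
# document in public Blob Storage, addressed by a predictable URL.
# Account/container names below are made up.

def game_state_url(account: str, container: str, game_id: str) -> str:
    """Build the public blob URL for a game's state document."""
    return f"https://{account}.blob.core.windows.net/{container}/{game_id}.json"

url = game_state_url("tankster", "gamestate",
                     "3f2504e0-4f89-11d3-9a0c-0305e82c3301")
# A client would then simply fetch it, e.g.:
#   urllib.request.urlopen(url).read()  -> the game-state JSON bytes
```

No server-side code of yours runs to satisfy that GET; the storage service serves the bytes directly.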

    In the case of Tankster there is no security around GETs on the Blob Store. The URLs contain GUIDs, which are hard to guess, and in any case it’s only game state. Security could be added by creating shared access signatures for the blobs you wish to provide read access to.
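At its core, a shared access signature is an HMAC-SHA256 over a "string to sign" (permissions, expiry, resource path and so on), keyed with the storage account key, and appended to the URL as a query parameter. The exact string-to-sign layout is dictated by the storage service version, so the sketch below shows only the general mechanism, not Azure's precise wire format; the key and field values are made up.

```python
import base64
import hashlib
import hmac

def sign(account_key_b64: str, string_to_sign: str) -> str:
    """HMAC-SHA256 the string-to-sign with the decoded account key."""
    key = base64.b64decode(account_key_b64)
    mac = hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256)
    return base64.b64encode(mac.digest()).decode("utf-8")

fake_key = base64.b64encode(b"0" * 32).decode()   # stand-in account key
string_to_sign = "\n".join([
    "r",                                # permissions: read only
    "",                                 # start time (omitted)
    "2012-07-01T00:00:00Z",             # expiry
    "/tankster/gamestate/game1.json",   # canonicalised resource
    "",                                 # signed identifier (none)
])
sig = sign(fake_key, string_to_sign)
# The signature travels as the `sig` query parameter on the blob URL;
# the storage service recomputes it and rejects the request if it differs.
```

Because the service recomputes the HMAC itself, possession of a valid signed URL is the proof of authorisation, and the signature stops working once the expiry passes.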

    But what would happen if you took this example a stage further? Why use your compute to write to Blob Storage at all? All your code needs to do is decide which resources a client should be allowed to read or write; the actual transfer of data to those endpoints is not something you need to be involved in (unless input validation is required). We could turn our application into a resource facilitator rather than a resource provider.

    In fact, as of the June 2012 refresh, Shared Access Signatures can also be created for Queues and Tables. So direct access to these resources could also be provided.

    How would this work if the client was a web browser? Well, let’s assume it’s a single page application (now called an SPA, apparently). The web page and its associated scripts and resources are all static, so they can be placed in Blob Storage. Calls to the REST web service include the identity of the user, can be made as GET requests over JSONP, and so let us navigate around the same-origin policy. The CRUD activity of our application is carried out by writing to the Blob Store directly, using shared access signatures. You only use compute resource to decide which blobs the client can read or write. For data-heavy applications this could make a massive difference: your Azure compute time is spent servicing lightweight requests for permission, not transferring data.
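The "resource facilitator" idea can be sketched as a single lightweight function: the only thing running on compute is a permission check that hands back a signed URL, and the data itself never flows through your roles. Everything below (the ACL, user names, and the placeholder token) is invented for illustration; in a real system the token would be a proper HMAC-signed SAS with an expiry.

```python
from typing import Optional

# Toy access-control list: which users may write which games' state.
ALLOWED = {"alice": {"game1"}, "bob": {"game1", "game2"}}

def grant_write_url(user: str, game_id: str) -> Optional[str]:
    """Return a signed write URL if the user may write this game's state."""
    if game_id not in ALLOWED.get(user, set()):
        return None
    # Placeholder for a real shared access signature with an expiry.
    return (f"https://tankster.blob.core.windows.net/gamestate/{game_id}.json"
            "?sv=2012-02-12&sp=w&sig=PLACEHOLDER")

# The client then PUTs its JSON document straight to the returned URL;
# our compute tier never touches the payload.
```

The request/response here is tiny (a name and a URL), which is what lets the compute footprint stay small no matter how much data the clients move.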

    This is all very theoretical at the moment. I’ll report back with a working example!

    • Andrew 9:04 pm on June 24, 2012 Permalink | Log in to Reply

      I’ve been thinking about this recently too. Could make for some very low cost, highly scalable applications.

  • Richard 8:53 am on June 21, 2012 Permalink |  

    Windows Azure 2.0 – the smaller changes 

    On the 7th of June a number of large changes were made to Windows Azure, amongst these big announcements a number of smaller changes were made, which haven’t caught people’s attention quite so much. Here’s what I have found so far:

    1. 20 storage accounts per subscription (used to be 5).
    2. The new portal allows you to update Cloud Service Roles independently (you did have to update a whole deployment at a time).
    3. Blobs can now be copied between storage accounts.
    4. The service definition schema was updated to include ‘InstanceInputEndpoints’, giving each instance its own port number, so you can directly address individual worker/web role instances through the load balancer.
    5. A ‘LoadBalancerProbe’ allows you to specify a custom URL the load balancer uses to inspect the health of your web role.
    6. The new 1.7 SDK can be installed side-by-side with the 1.6 SDK.
    7. The 1.7 SDK works with Visual Studio 11.
    8. UDP is enabled.
    9. Shared access signatures available for tables and queues.
    10. Azure Web Sites work with SSL out of the box.

    • Simon Hart 10:10 pm on June 26, 2012 Permalink | Log in to Reply

      Good attention to detail!

      The load balancer probe can only be set via the management API right now, not through the portal, which is probably why folks haven’t talked about it or found it. A set of PowerShell cmdlets shipped with the preview release helps set this up. The URL that Windows Azure calls in order to figure out whether a server should be removed from the load-balanced cluster must also be anonymous, and must respond to an HTTP GET with a 200 if all is OK.
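The probe endpoint's contract as described above is simple: answer an anonymous HTTP GET with a 200 when the instance is healthy, and anything else to be taken out of rotation. A minimal sketch, with the health check itself as a stand-in and the `/probe` path invented:

```python
from http.server import BaseHTTPRequestHandler

def instance_is_healthy() -> bool:
    """Stand-in for real checks (queue depth, database connectivity, ...)."""
    return True

class ProbeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/probe" and instance_is_healthy():
            self.send_response(200)   # keep this instance in rotation
        else:
            self.send_response(503)   # load balancer removes this instance
        self.end_headers()
```

Keeping the check anonymous matters: the load balancer cannot authenticate, so the probe URL must succeed without credentials while still exercising whatever the role genuinely depends on.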

      UDP is enabled both ingress and egress of Windows Azure data centers, and any protocol can be used between Azure IaaS servers, but there are TCP protocol limitations on egress from a VM to the internet.


      • Richard 11:37 pm on June 26, 2012 Permalink | Log in to Reply

        Thanks for the information Simon. It’s interesting to see how PowerShell is the primary means of support for some of these new features (media services is the same). I guess we’re waiting for the portal to catch up.
