Updates from October, 2013

  • Richard 10:53 am on October 28, 2013

    Deleting Instances on Azure Cloud Services 

    One small new feature added to the Windows Azure Management API recently is the ability to ‘delete’ individual instances in a Cloud Service.

    More information here: http://msdn.microsoft.com/en-us/library/windowsazure/dn469418.aspx

    What’s changed?

    Previously you were able to reduce the instance count of a Cloud Service deployment, which would always delete the last instance in the Role. This new feature allows you to choose which instances you want to remove.

    Why is this useful?

    In most scenarios it isn’t. However, there are a few cases where this could be useful:

    Batch Processing

    If you have a large computational job you want to run in the cloud, it makes sense to chop it into a series of small jobs and distribute them amongst a number of instances, perhaps using a queue to coordinate the work. But what happens when the queue is drained? Some workers are likely to finish before others, and you can’t predict which is going to complete first (drawing from the queue randomises the order).

    Now you can write some logic so that instances delete themselves when the queue is empty. Because billing is now down to the nearest minute, it makes sense to turn machines off when you get the opportunity.
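
    Something like this would do it. A rough sketch: ‘queue’ is assumed to be the storage queue the workers draw from, and the URI, version header and XML body are my reading of the MSDN page above – check the documentation for the exact details:

    if (queue.GetMessage() == null) // nothing left to process
    {
      var subscriptionId = "your-subscription-id";
      var serviceName = "your-cloud-service";
      var deploymentName = "your-deployment";
      var managementCertificate = new X509Certificate2("management.pfx", "password");

      // Delete Role Instances operation: POST .../roleinstances/?comp=delete
      var uri = string.Format("https://management.core.windows.net/{0}/services/hostedservices/{1}/deployments/{2}/roleinstances/?comp=delete",
        subscriptionId, serviceName, deploymentName);
      var request = (HttpWebRequest)WebRequest.Create(uri);
      request.Method = "POST";
      request.ContentType = "application/xml";
      request.Headers["x-ms-version"] = "2013-08-01";
      request.ClientCertificates.Add(managementCertificate); // management certificate auth

      // name this instance as the one to delete
      var body = Encoding.UTF8.GetBytes(
        "<RoleInstances xmlns=\"http://schemas.microsoft.com/windowsazure\"><Name>" +
        RoleEnvironment.CurrentRoleInstance.Id + "</Name></RoleInstances>");
      using (var stream = request.GetRequestStream())
      {
        stream.Write(body, 0, body.Length);
      }
      request.GetResponse().Close();
    }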

    Machine Quality

    Perhaps you request 100 machines and one turns out to be a duffer. By this I mean you don’t get the perf you expect. I’m not aware of this being much of a problem in Azure; however, other clouds which contain a mix of old/new hardware certainly do suffer from this.

    Running a quick benchmark on boot would reveal whether the machine is good enough; if not, delete it and ask for another one – hoping you don’t get the same one back again!

    Multi-tenancy

    Some (clever) people are taking old single-tenant software and multi-tenanting it on Azure. They’re dynamically provisioning customers on Web/Worker Roles, and then programming a reverse proxy like ARR or Nginx to route incoming traffic to the right instance for that customer. This means that load is not always evenly distributed across instances.

    When it comes to scaling down, it’s often easier to remove one of the instances with only a few customers provisioned on it, as this would mean transitioning fewer installations.

    Update 1

    Gaurav Mantri has posted an excellent code sample on his blog:

    http://gauravmantri.com/2013/10/16/a-new-version-of-windows-azure-service-management-api-is-available-with-delete-specific-role-instances-and-more-goodies/

    Update 2

    I have since created a ‘self destruct’ project on GitHub, making it really easy for an instance to delete itself.

    https://github.com/richorama/Two10.Azure.SelfDestruct

  • Richard 7:44 pm on October 10, 2013

    Push APIs 

    We had a long discussion at work yesterday about how to build an API which supports push. By this I mean that messages originate at the server, and clients (which could be other servers, browsers or apps) consume those messages.

    First, a bit of background: my colleague Rob Blackwell has a good yardstick to test API ease of use, called ‘the cURL test’. If your API can’t be consumed with a simple cURL command, you’ve failed. That doesn’t mean that your API is useless; it’s just hard to use, test and document.

    Let’s look at a few options:

    #1 Polling

    Well, this isn’t push at all, but it’s a common approach, and makes for a good starting point (a minimal polling client is sketched after the lists below).

    Pros:

    • Passes the cURL test.
    • Everything can poll.

    Cons:

    • Increases the average message delivery time, proportional to poll frequency.
    • Unnecessary load on the server, serving empty poll responses.
    • Bandwidth is wasted.
    • Extra work on the client.
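
    For illustration, here’s roughly what a polling client looks like; the endpoint and its ‘since’ cursor are made up:

    var http = new WebClient();
    var since = string.Empty;
    while (true)
    {
      // ask the (hypothetical) endpoint for anything newer than our cursor
      var body = http.DownloadString("https://api.example.com/messages?since=" + since);
      if (!string.IsNullOrEmpty(body))
      {
        Console.WriteLine(body);
        since = DateTime.UtcNow.ToString("o"); // naive cursor update
      }
      Thread.Sleep(TimeSpan.FromSeconds(10)); // poll interval sets the delivery delay
    }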

    #2 Raw TCP Sockets

    Seriously? I’m not sure sockets can really count as an API, but they will do the job (a minimal client is sketched after the lists below).

    Pros:

    • Efficient.

    Cons:

    • Hard to work with, you’ll probably need to ship a client library in a variety of languages.
    • Won’t work for web browser clients
    • Fails the cURL test.
    • Unsuitable for infrequent messages to a large number of clients, as there is a cost in keeping the connection open.
    • Sockets are sometimes not available on the network (thanks to network infrastructure).
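
    A consumer can be very simple, though. Here’s a sketch, assuming a made-up server that writes newline-delimited messages:

    using (var tcp = new TcpClient("push.example.com", 9000)) // placeholder endpoint
    using (var reader = new StreamReader(tcp.GetStream()))
    {
      string line;
      while ((line = reader.ReadLine()) != null)
      {
        Console.WriteLine(line); // one message per line
      }
    }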

    #3 Web Hooks

    The concept here is that you use a normal REST API to register a URL to be called back on. When a message appears on the server, you’ll normally get an HTTP POST to that URL, with the body being the message. There are a few standards lying around for web hooks, and adoption seems quite good, with GitHub as an example (a registration sketch follows the lists below).

    Pros:

    • Simple technology.
    • Good for infrequent messages to many clients, as TCP/IP connections do not need to be kept open.

    Cons:

    • Due to the cost of establishing an outbound connection for every message, it’s unsuitable for scenarios where a high volume of messages are sent to a small number of clients.
    • Won’t work for web browser clients.
    • Fails the cURL test (you can’t get called back).
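
    Registering is just an ordinary REST call; something like this, where the /subscriptions endpoint and the payload are invented for illustration:

    var http = new WebClient();
    http.Headers[HttpRequestHeader.ContentType] = "application/json";
    // tell the (hypothetical) API where to POST messages
    http.UploadString("https://api.example.com/subscriptions",
      "{ \"callback\": \"https://myserver.example.com/hook\" }");
    // the API will now POST each message to https://myserver.example.com/hook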

    #4 Web Sockets

    Using the newish HTML5 standard, a browser can upgrade a request to a server into a full duplex communication channel. By web sockets I mean raw web sockets (a client sketch follows the lists below). Libraries such as Socket.IO, SignalR, Faye, etc. build protocols on top of web sockets, and support alternative transports.

    Pros:

    • Good native support in some web browsers.

    Cons:

    • The last time I checked, there wasn’t consistent support for the same web socket standard across all browsers.
    • Poor support for server-server communication, as there aren’t many client libraries for talking raw web sockets.
    • Poor support for creating a raw web socket server.
    • Unsuitable for infrequent messages to a large number of clients, as there is a cost in keeping the connection open.
    • Fails the cURL test.
    • Web sockets are sometimes not available on the network (thanks to network infrastructure).
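
    That said, .NET 4.5 does ship a raw client (System.Net.WebSockets.ClientWebSocket). A sketch, with a placeholder endpoint, to be run inside an async method:

    var ws = new ClientWebSocket();
    await ws.ConnectAsync(new Uri("wss://push.example.com/stream"), CancellationToken.None);
    var buffer = new byte[4096];
    while (ws.State == WebSocketState.Open)
    {
      // receive the next frame (or fragment) and print it as text
      var result = await ws.ReceiveAsync(new ArraySegment<byte>(buffer), CancellationToken.None);
      Console.WriteLine(Encoding.UTF8.GetString(buffer, 0, result.Count));
    }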

    #5 Web Socket Libraries

    By this I mean the libraries that support web sockets, and degrade to a variety of exotic transports like long polling, forever frames, flash sockets, SSE, etc. There are a number of these around, including Socket.IO, SignalR and Faye. I also include third-party services in this category, such as Pusher, Azure Service Bus and various notification services.

    Pros:

    • Good browser support.
    • Sometimes good server support (e.g. a C# client for SignalR; see the sketch after this list).
    • Handles degrading to alternative transports.

    Cons:

    • Lock-in to a technology, which will severely limit client adoption.
    • Fails the cURL test.
    • Some of these libraries don’t scale well, or aren’t that ‘finished’ (I’m not naming names!).
    • This doesn’t seem like the right way to build an API.
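
    To give a flavour of the lock-in, here’s roughly what consuming a SignalR hub from .NET looks like (using Microsoft.AspNet.SignalR.Client; the hub and method names are made up):

    var connection = new HubConnection("https://push.example.com/");
    var hub = connection.CreateHubProxy("MessageHub");
    hub.On<string>("newMessage", msg => Console.WriteLine(msg)); // pushed messages land here
    connection.Start().Wait();
    Console.ReadLine(); // keep the process alive to receive pushes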

    #6 The Long Get (Comet)

    I’ve seen this technique used for consuming the Twitter firehose: you make a GET request to a server, the connection is kept open, and data is streamed down to your client (sketched after the lists below).

    Pros:

    • Passes the cURL test!
    • Good for a high volume of messages sent to a small number of clients.
    • In theory this works in a browser (I haven’t tested it).

    Cons:

    • You have to invent your own framing protocol.
    • Long running HTTP requests are terminated by network equipment in some cases. You would need retry logic.
    • Not _that_ easy to implement in all client and server technology.
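
    A sketch of a consumer, with a placeholder endpoint and newline-delimited messages standing in for the framing protocol (retry logic omitted):

    var request = WebRequest.Create("https://stream.example.com/firehose");
    using (var response = request.GetResponse())
    using (var reader = new StreamReader(response.GetResponseStream()))
    {
      string line;
      while ((line = reader.ReadLine()) != null) // blocks until the server sends more
      {
        Console.WriteLine(line); // one message per line
      }
    }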

    Conclusion

    So there you have it: there isn’t a single best way. I think your decision comes down to answering two key questions:

    1. Are you dealing with high message throughput to a small number of clients, or the occasional message to a large number of clients?
    2. Do you need to support browsers and/or servers?

    Depending on these answers, I think realistically you’re looking at web hooks and the long GET as the two best options, possibly combining both of them.

    What about long polling, or SSE, or something else? I don’t think many people use these technologies in their raw state; instead they form one of the possible transports in section #5.

    If there’s anything I’ve missed, please let me know!

    I expect in a few years’ time there will be a fancy new technology which solves this problem, but for now, we’ll have to make do.

    (thanks to Andy Cross for the comet tip).

     
  • Richard 3:33 pm on October 3, 2013

    Managing Windows Azure Reporting Services 

    Windows Azure provides Reporting Services (SSRS) as a service. This allows you to take your SSRS report, upload it to the portal, and start running the report for your users. I have blogged before about how to download a rendered report programmatically, but how do you upload the reports in the first place?

    You can manually upload reports in the portal. However, there seems to be a small problem with this technique: you can create folders and subfolders to hold your reports, but you can’t upload a report to a subfolder.

    I set about attempting to upload reports from code. There’s no management API support for reports; instead you need to use the SSRS Web Service (using something called SOAP – consult your history books if you’re under the age of 60).

    I had some problems creating the proxy (using WSDL in Mono), so for your convenience, here’s a proxy generated for you.

    With the proxy referenced in your project, you’re good to go. Here’s how to log in to the service:

     

    var serviceUsername = "your username";
    var servicePassword = "your password";
    // replace XXX with your reporting service endpoint name
    var url = @"https://XXX.reporting.windows.net/reportserver/reportservice2010.asmx";
    var client = new ReportingService2010
    {
      Url = url,
      CookieContainer = new CookieContainer() // stores the auth cookie issued by LogonUser
    };
    client.LogonUser(serviceUsername, servicePassword, url);
    
    

    You’ve now got a logged-in client for the web service, ready to manage reports.

     

    First, let’s list the reports we’ve got:

     

    var items = client.ListChildren("/", true); // true = recursive, starting at the root folder
    
    

    This will list everything: data sources, folders and reports. To see just the reports, you can filter by the TypeName:

    foreach (var report in items.Where(x => x.TypeName == "Report"))
    {
      Console.WriteLine(report.Path);
    }
    
    

    The ‘Path’ property is the full name of the report, including the path representing the folder hierarchy.

    Uploading a report is a bit more involved, so I’ll save you some time by giving you this extension method:

     

    public static void DeployReport(this ReportingService2010 client, string reportFile, string parent = "/", string name = "new report")
    {
      // read the report definition (.rdl file) into a byte array
      byte[] definition;
      using (var stream = File.OpenRead(reportFile))
      {
        definition = new byte[stream.Length];
        stream.Read(definition, 0, (int)stream.Length);
      }
      Warning[] warnings = null;
      client.CreateCatalogItem("Report", name, parent, true, definition, null, out warnings);
      if (warnings != null)
      {
        foreach (var warning in warnings)
          Console.WriteLine(warning.Message);
      }
    }
    
    

    If you want to set the description of the report, just add a NameValueItem to the Properties argument. If the NameValueItem has a Name of ‘Description’, the value will be added as the description of the report.
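
    For example, inside DeployReport above you could pass the properties like this (assuming the generated proxy’s NameValueItem has Name and Value properties; the description text is made up):

    var properties = new[] { new NameValueItem { Name = "Description", Value = "Uploaded by DeployReport" } };
    client.CreateCatalogItem("Report", name, parent, true, definition, properties, out warnings);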

    A report can be deleted like this:

     

    client.DeleteItem(reportName); // reportName is the full path, e.g. "/my folder/my report"
    
    

    Simple!

     