Updates from May, 2012

  • Richard 12:42 pm on May 21, 2012 Permalink |  

    Introducing the Azure Plugin Library 

    tl;dr

    An open source library of plugins installed with a command line tool (a package manager). Once a plugin is installed, it can be easily packaged with an Azure deployment to install application dependencies on a Web or Worker role.

    http://richorama.github.com/AzurePluginLibrary/

    Watch a screencast demonstration

    Background

    One of the key strengths of Windows Azure is its Platform as a Service offering. Why would you want to patch an operating system, manage the deployment of your application, and check the health of your infrastructure? It’s better left to someone else (Microsoft) so you can focus on your application. The pitfall, however, is when your application depends on something extra being installed or configured on the machine.

    There are a few ways of installing third-party components on an Azure instance. This blog post has a good summary of the options.

    In summary, start-up tasks are the best mechanism available for installing dependencies. That’s fine for something straightforward, but for more complicated components (MongoDB, for example) there is quite a bit of work involved in scripting out the installation. Projects like AzureRunMe help with this, but ideally you want something that just works, without you having to write a lot of script.

    Azure Plugin Library

    The Azure Plugin Library exploits an undocumented feature of the Azure SDK, whereby modules referenced in the Service Definition file are bundled with your application in a package, which is uploaded and deployed to the Azure instances. The SDK uses this mechanism to set up Remote Desktop, Connect, WebDeploy and Diagnostics; however, additional plugins can be added by copying their files to the “Windows Azure SDK\v1.6\bin\plugins” folder.

    The Azure Plugin Library offers a range of additional plugins which you can download, and include with your Azure deployment. The library is hosted on GitHub, and is open source (accepting contributions).

    http://richorama.github.com/AzurePluginLibrary/

    Installing a plugin using APM

    The AzurePluginManager (APM) is a command line utility to discover, install, update and remove plugins:

    apm list                   (displays a list of plugins available in the library)
    apm installed              (displays a list of installed plugins)
    apm install [PluginName]   (installs the specified plugin)
    apm remove [PluginName]    (removes the specified plugin)
    apm update [PluginName]    (updates the specified plugin)
    apm update                 (updates all plugins)
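    For example, assuming the library contains a plugin named MongoDB (a hypothetical name, just for illustration), installing it is a single command:

    apm install MongoDB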

    Download options for APM.

    What plugins are available?

    At launch only a few plugins are available, but the list is set to grow. Community contributions will be accepted, so please fork the repository and issue a pull request with your own ideas.

    How do I include a plugin in my Azure package?

    Installed plugins will be included in your Azure package if you add them as an Import in your ServiceDefinition.csdef file:

    <ServiceDefinition>
      <WorkerRole>
        <Imports>
          <Import moduleName="[PluginName]" />
        </Imports>
      </WorkerRole>
    </ServiceDefinition>

    How do I add my own plugin to the library?

    The library has some instructions on how to do this.

     
    • vbmagic 1:09 pm on May 24, 2012 Permalink | Log in to Reply

      Reblogged this on VB Magic and commented:
      A great open source library of plug-ins that should hopefully make life easier in Azure when making use of third-party software or currently unsupported components like Classic ASP.

    • jmayrbaeurl 9:33 am on May 27, 2012 Permalink | Log in to Reply

      Really cool stuff and I’ll try to develop my first own plugin asap.

  • Richard 9:28 am on May 18, 2012 Permalink |  

    Windows Azure CDN Nodes – on a map! 

    The CDN node locations according to this list.

    Explore the locations on Google Maps.

     
  • Richard 1:41 pm on May 17, 2012 Permalink |  

    Command to create a certificate with a private key (pfx) 

    This batch file will create a certificate (.cer) and a certificate with a private key (.pfx). Useful for creating self-signed SSL certificates on Azure!

    The first argument is the certificate name, the second is the password:

    REM create a self-signed certificate and install it in the current user's My store
    makecert -sky exchange -r -n "CN=%~1" -a sha1 -len 2048 -pe -ss My "%~1.cer"
    REM export it, together with its private key, to a password-protected pfx
    certutil.exe -user -p "%~2" -exportPFX -privatekey "%~1" "%~1.pfx"

    You’ll probably need to run it from the Visual Studio command prompt.
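    For example, assuming you save the script as createcert.bat (the file name is just for illustration), this would create myserver.cer and myserver.pfx, protected by the given password:

    createcert.bat myserver pass@word1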

     
  • Richard 3:19 pm on May 11, 2012 Permalink |  

    Three things you should know about querying the Azure Table Storage SDK 

    1. .AsTableServiceQuery()

    If your query spans multiple partitions, or exceeds the maximum number of entities allowed in a single request, you’ll get a subset of the results, and a continuation token to retrieve the remaining records. Adding AsTableServiceQuery() to the end of your statement will handle the continuation token for you.

    var account = CloudStorageAccount.DevelopmentStorageAccount;
    var context = new TableServiceContext(account.TableEndpoint.ToString(), account.Credentials);
    var query = context.CreateQuery<Foo>("Foo").AsTableServiceQuery();

    It will also introduce a retry policy for you, in case your request fails.

    2. .ToString()

    Calling ToString() on a table service query will give you the URL that will be used to access the Table Storage API. This is very useful for debugging your queries.

    So this:

    var account = CloudStorageAccount.DevelopmentStorageAccount;
    var context = new TableServiceContext(account.TableEndpoint.ToString(), account.Credentials);
    var query = context.CreateQuery<Foo>("Foo").Where(x => x.PartitionKey == "123");
    Console.WriteLine(query);

    Produces this:

    http://127.0.0.1:10002/devstoreaccount1/Foo()?$filter=PartitionKey eq '123'

    In fact, Entity Framework queries do a similar thing (with some worrying SQL), as do Expressions.

    3. It’s plain old reflection

    The Table Storage SDK will simply map the properties in the table onto the type you have supplied. This means you could read and write the same entities using different classes. This could be useful if you want to model inheritance, or to extend the properties you store for certain parts of your code. The only drawback is that you need a new context object when you want to switch which type maps to the table.

    Consider two classes, where Bar inherits from Foo.

    public class Foo : TableServiceEntity
    {
        public string FooVariable { get; set; }        
    }
    
    public class Bar : Foo
    {
        public string BarVariable { get; set; }
    }

    This code will write a Bar, and then read the entity back as a Foo:

    var account = CloudStorageAccount.DevelopmentStorageAccount;
    var context = new TableServiceContext(account.TableEndpoint.ToString(), account.Credentials);
    context.AddObject("Foo", new Bar { PartitionKey = "1", RowKey= "2", FooVariable = "FOO", BarVariable = "BAR" });
    context.SaveChanges();
    
    context = new TableServiceContext(account.TableEndpoint.ToString(), account.Credentials);
    var foo = context.CreateQuery<Foo>("Foo").AsTableServiceQuery().FirstOrDefault();
     
  • Richard 8:58 am on May 2, 2012 Permalink |  

    Using HTML5 WebSockets 

    One of the interesting features of HTML5 is the inclusion of WebSockets. This allows you to open a TCP/IP socket and do full-duplex messaging between the browser and the server.

    There are a variety of frameworks out there that make it easy to do socket programming (e.g. SignalR, socket.io), but I was curious to understand how raw WebSockets work.

    In the browser, it’s fairly straightforward to use the WebSockets API. The code looks a bit like this:

    if ("WebSocket" in window)
    {
      ws = new WebSocket("ws://your.domain.com:1337/Path");
      ws.onopen = function() {
        // the socket is open
        ws.send("Hello");
      };
      ws.onmessage = function (evt){ 
        // message received
        console.log(evt.data);
      };
      ws.onclose = function() { 
        // websocket is closed.
      };
    }

    It’s a fairly straightforward model: a method to send a message, and an event fired when a message is received.

    On the server side however, things are a little more complicated.

    When the browser wants to start using WebSockets, an HTTP GET is made, requesting an upgrade to the connection. The request looks like this:

    GET http://your.domain.com:1337/Path/ HTTP/1.1
    Upgrade: websocket
    Connection: Upgrade
    Host: your.domain.com:1337
    Origin: http://another.domain.com
    Sec-WebSocket-Key: yrLV2RDDPo10K4jwARFt6Q==
    Sec-WebSocket-Version: 13
    Cookie: xxx

    The server should then respond like this:

    HTTP/1.1 101 Switching Protocols
    Upgrade: WebSocket
    Connection: Upgrade
    Sec-WebSocket-Accept: 3EAoJRawXLXN/IeksBFhfwlhGec=

    The important part is the ‘Sec-WebSocket-Key’ and ‘Sec-WebSocket-Accept’ headers. You must take the ‘Sec-WebSocket-Key’ value, append ‘258EAFA5-E914-47DA-95CA-C5AB0DC85B11’ to it, and then calculate a base64-encoded SHA-1 hash, which you set as the ‘Sec-WebSocket-Accept’ header in the response (don’t ask me why!).

    This bit of JavaScript (node.js) will do this for you (wrapped in a minimal net server here so the snippet runs standalone):

    var net = require('net');
    var crypto = require('crypto');
    
    // a plain TCP server; the handshake below runs in the socket's data handler
    var server = net.createServer(function (socket) {
      socket.on('data', function (data) {
        // the opening handshake arrives as a normal HTTP GET request
        if (data.toString().substring(0, 3) == "GET") {
          var key = getRequestVariable(data.toString().split("\r\n"), 'Sec-WebSocket-Key');
          var sha = sha1(key + '258EAFA5-E914-47DA-95CA-C5AB0DC85B11');
          socket.write("HTTP/1.1 101 Switching Protocols\r\nUpgrade: WebSocket\r\nConnection: Upgrade\r\nSec-WebSocket-Accept: " + sha + "\r\n\r\n");
        }
      });
    });
    server.listen(1337);
    
    // base64-encoded SHA-1 hash of a string
    function sha1(value) {
      var sha = crypto.createHash('sha1');
      sha.update(value);
      return sha.digest('base64');
    }
    
    // find the value of a named header in the raw request lines
    function getRequestVariable(items, name) {
      for (var i = 0; i < items.length; i++) {
        var parts = items[i].split(':');
        if (parts && parts.length > 1 && parts[0] == name) {
          return parts[1].trim();
        }
      }
    }

    Now that you’ve shaken hands, you’re ready to start talking across your socket. However, the socket data you receive on the server (from the browser) is encoded, and can’t simply be read as text. Essentially each frame takes this format:

    • The first byte is hard-coded to 129 (0x81: the FIN flag plus the text-frame opcode), and can be ignored for simple text messages.
    • The next byte (or bytes) describes the length of the data: the low 7 bits of the second byte give the length, unless their value is 126 (the real length follows in the next 2 bytes) or 127 (it follows in the next 8 bytes).
    • The next 4 bytes hold the masks, used to decode the data.
    • The remaining bytes contain the data, XOR’d with the masks.

    To decode the data, you must extract the masks and then XOR each byte of the data against a mask value (cycling through the 4 mask bytes in turn).

    You can use this function to decode the data for you:

    function decodeWebSocket(data){
      // the low 7 bits of the second byte give the payload length
      var datalength = data[1] & 127;
      var indexFirstMask = 2;
      if (datalength == 126) {
        indexFirstMask = 4;   // real length is in the following 2 bytes
      } else if (datalength == 127) {
        indexFirstMask = 10;  // real length is in the following 8 bytes
      }
      // the 4-byte masking key sits just before the payload
      var masks = data.slice(indexFirstMask, indexFirstMask + 4);
      var i = indexFirstMask + 4;
      var index = 0;
      var output = "";
      // XOR each payload byte with a mask byte, cycling through the 4 masks
      while (i < data.length) {
        output += String.fromCharCode(data[i++] ^ masks[index++ % 4]);
      }
      return output;
    }

    Sending data to the browser is slightly simpler, as you don’t need to use the masks. However, you still need to set the byte (or bytes) indicating the length.

    function encodeWebSocket(bytesRaw){
      var bytesFormatted = new Array();
      bytesFormatted[0] = 129; // 0x81: FIN flag plus the text-frame opcode
      if (bytesRaw.length <= 125) {
        // short payload: the length fits in the second byte
        bytesFormatted[1] = bytesRaw.length;
      } else if (bytesRaw.length >= 126 && bytesRaw.length <= 65535) {
        // medium payload: 126 marker, then a 16-bit big-endian length
        bytesFormatted[1] = 126;
        bytesFormatted[2] = ( bytesRaw.length >> 8 ) & 255;
        bytesFormatted[3] = ( bytesRaw.length ) & 255;
      } else {
        // large payload: 127 marker, then a 64-bit big-endian length.
        // JavaScript bitwise shifts only work on 32-bit values, so the top
        // four bytes are simply zero (a payload can't exceed 2^32 here)
        bytesFormatted[1] = 127;
        bytesFormatted[2] = 0;
        bytesFormatted[3] = 0;
        bytesFormatted[4] = 0;
        bytesFormatted[5] = 0;
        bytesFormatted[6] = ( bytesRaw.length >> 24 ) & 255;
        bytesFormatted[7] = ( bytesRaw.length >> 16 ) & 255;
        bytesFormatted[8] = ( bytesRaw.length >> 8 ) & 255;
        bytesFormatted[9] = ( bytesRaw.length ) & 255;
      }
      // server-to-client frames are not masked, so the payload follows as-is
      for (var i = 0; i < bytesRaw.length; i++){
        bytesFormatted.push(bytesRaw.charCodeAt(i));
      }
      return new Buffer(bytesFormatted);
    }
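
    Putting the pieces together, here’s a minimal sketch (assuming the same raw socket handler as the handshake code above, and text frames only) that echoes each decoded message back to the browser:

    socket.on('data', function (data) {
      if (data.toString().substring(0, 3) == "GET") {
        // perform the opening handshake shown earlier
      } else {
        // decode the masked client frame and echo it back, unmasked
        var message = decodeWebSocket(data);
        socket.write(encodeWebSocket("echo: " + message));
      }
    });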

    Simple!

     
    • Tomás Hidalgo 8:18 am on September 18, 2012 Permalink | Log in to Reply

      Hello, I’m very interested in your article.

      If I understood well, you mean that it’s possible to establish a raw socket connection from a web browser client towards a device/application with a raw socket listening?
