Updates from April, 2013

  • Richard 12:19 pm on April 15, 2013

    Publishing a Node.js Website to Azure from the Raspberry Pi in 3 mins 

    Just for fun, here is a screencast of me creating a ‘hello world’ in Node.js and publishing it as a new website in Windows Azure, all done on my Raspberry Pi in 3 minutes.

    I used SSH to get onto the Pi, just to make the screen recording possible.
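
    For reference, the rough sequence of commands is something like this (‘mysite’ is just a placeholder name, and I’m assuming the azure-cli npm package and git are already set up on the Pi):

    $> azure site create mysite --git
    $> git add server.js
    $> git commit -m "hello world"
    $> git push azure master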

  • Richard 10:48 am on April 12, 2013

    Streaming a Node.js Azure Website log 

    Following Scott Hanselman’s comprehensive blog post on logging in Windows Azure Websites, I tried it out with Node.js. Here’s how…

    First, create your site; you can do this in the portal or from the command line, as described in Scott’s post.
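
    From the command line, that’s a one-liner, something like this (assuming the azure-cli tool is installed and your publish settings are imported):

    $> azure site create NAME_OF_YOUR_SITE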

    Create your node application, and write debug information to the console. Here’s a quick example:

    var http = require('http');
    http.createServer(function(req, res){
      // anything written to the console ends up in the iisnode log
      console.log("Hello World");
      res.end("Hello World");
    }).listen(process.env.port); // iisnode supplies the port (a named pipe)

    To enable logging, you need to set it up in the iisnode.yml file. The easiest way to do this is to use a custom deploy script, which will give you access to the file. Run this command in the root of your app:

     

    $> azure site deploymentscript NAME_OF_YOUR_SITE --node

    This will put a copy of iisnode.yml in the same directory.

    Edit this file and turn logging on by setting ‘loggingEnabled’ to ‘true’:

     

    loggingEnabled: true
    debuggingEnabled: false
    devErrorsEnabled: false
    node_env: production

    Now deploy your site (using Git or Dropbox, or whatever!).
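
    If you’re using Git (where an ‘azure’ remote was set up when the site was created), the push is just:

    $> git add .
    $> git commit -m "turn on logging"
    $> git push azure master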

    Once deployed, you can stream your log using this command:

    $> azure site log tail NAME_OF_YOUR_SITE

    The log lines will appear in your console like this:

    info: Executing command site log tail
    2013-04-12T10:43:25 Welcome, you are now connected to log-streaming service.
    Hello World
    Hello World
    Hello World
    Hello World

    If your site throws an uncaught exception, this will also show up.
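
    A quick way to test that is to deliberately throw from inside a request handler. Here’s a sketch (the ‘/boom’ route is just something I made up):

    var http = require('http');
    http.createServer(function(req, res){
      if (req.url === '/boom') {
        // this uncaught exception should appear in the streamed log
        throw new Error('testing the log stream');
      }
      res.end("Hello World");
    }).listen(process.env.port);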

     
  • Richard 2:58 pm on April 8, 2013

    (Roll Your Own) Website Monitoring in Azure 

    Web Endpoint Status

    A new ‘web endpoint status’ feature recently appeared in the Azure portal to automatically monitor Azure Websites for you.


    The system will poll up to two endpoints for you, every 5 minutes, from up to 3 locations (i.e. other data centres).

    Clicking on the name of your endpoint (‘Home’ in this case) will give you the last 5 ‘ping’ times from each location.


    You will get a warning in the portal if your application takes more than 30 seconds to respond.

    Your website must be running in ‘reserved’ mode to enable this feature.

    For me, 30 seconds is a long time. If your website is taking that long to respond, you’ve got a problem. I’d also like to monitor websites running in the free and shared modes. There are services out there that offer comprehensive monitoring, Pingdom being the obvious example. However, I thought it would be interesting to roll my own…

    Roll Your Own Monitoring

    Azure Mobile Services allows you to schedule a task every 15 minutes. This task is actually a Node.js process, and the ‘request‘ library is one of the few packages available. This is my simple monitor:

     

    var request = require('request');
    
    function CheckWebsiteHealth() {
        var options = {
            uri: "http://XYZ.azurewebsites.net/",
            timeout: 3 * 1000 // ms
        };
    
        var startTime = new Date().getTime();
        request(options, function(error, response, body){
            var endTime = new Date().getTime();
            if (error) {
                // transport failure: timeout, DNS error, connection refused...
                console.error(error);
            } else if (response.statusCode >= 400) {
                // the site responded, but with an HTTP error status
                console.error("ping returned HTTP " + response.statusCode);
            } else {
                console.log("ping response in " + (endTime - startTime) + "ms");
            }
        });
    }
    
    

    I have chosen to allow only 3 seconds for the site to respond; this is just an option, so choose what is appropriate for you. You could also extend this code to monitor multiple sites, as sketched below.
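
    Here’s what the multi-site version might look like (the second URL is made up):

    var request = require('request');

    var sites = [
        "http://XYZ.azurewebsites.net/",
        "http://ABC.azurewebsites.net/"
    ];

    function CheckWebsiteHealth() {
        sites.forEach(function(uri){
            var startTime = new Date().getTime();
            request({ uri: uri, timeout: 3 * 1000 }, function(error, response, body){
                var endTime = new Date().getTime();
                if (error) {
                    console.error(uri + ": " + error);
                } else {
                    console.log(uri + " responded in " + (endTime - startTime) + "ms");
                }
            });
        });
    }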

    It writes out to the Mobile Services log every time it makes the request. The entry will show up as an error if the site returns an error.


    One advantage here is that you get far more history than just the 5 most recent pings.

    It would be fairly straightforward to send an email using SendGrid or a similar service. Alternatively, you could write this data out to table storage and hook up a nice application with a graph…
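
    As a rough sketch of the table storage idea, using the ‘azure’ package (I’m assuming it is available to your script; the account name, key and ‘pings’ table are placeholders):

    var azure = require('azure');
    var tableService = azure.createTableService('ACCOUNT_NAME', 'ACCOUNT_KEY');

    function logPing(site, milliseconds) {
        tableService.createTableIfNotExists('pings', function(error) {
            if (error) { return console.error(error); }
            tableService.insertEntity('pings', {
                PartitionKey: site,                      // one partition per site
                RowKey: new Date().getTime().toString(), // timestamp as row key
                latency: milliseconds
            }, function(error) {
                if (error) { console.error(error); }
            });
        });
    }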

     
  • Richard 12:08 pm on April 4, 2013

    Ruby + Sinatra + Azure 

    Following excellent work done by Thomas Conte and Richard Conway, I thought it would be interesting to get a Ruby application running in Azure’s PaaS – Cloud Services (i.e. a WorkerRole) in a really simple way.

    I have created a sample on GitHub; this article walks through the steps I took to set it up.

    Create a Visual Studio project

    Start by creating a Cloud Project in Visual Studio, and add a Worker Role. This will basically bootstrap the Ruby application.

    In the Worker Role project we’ll add the files we need deployed to the Azure machine.

    Installing Ruby

    To install Ruby, I used the x64 Windows Installer from RubyInstaller.com.

    Download the installer and add it to the Worker Role project. In the properties of this file, set ‘Copy to Output Directory’ to ‘Copy if newer’ to ensure it gets included in the package.

    Add a script file ‘install.cmd’ to the project, ensure it is encoded as US-ASCII (anything but the default encoding for a text file in Visual Studio!), and set its properties to ‘Copy if newer’.

    The file should look like this:

    rubyinstaller-2.0.0-p0-x64.exe /silent
    D:\Ruby200-x64\bin\gem.bat install sinatra --no-ri --no-rdoc

    This will install Ruby and the Sinatra gem.

    The Application

    Add a ‘main.rb’ file to the Worker Role project, and set it to ‘Copy if newer’. The file should look like this:

    require 'sinatra'
    set :environment, :production
    set :port, 8080
    get '/' do
      "Hello World!"
    end

    I had trouble getting Sinatra to work on the default port, but 8080 seems to work fine.

    Add a script file called ‘start.cmd’ to the Worker Role project and, as before, set it to ‘Copy if newer’. The file should contain a line like this, which will start the Sinatra process:

    D:\Ruby200-x64\bin\ruby.exe main.rb

    Configuring the Cloud Project

    The final piece is to wire up the script files, and open the correct ports in the firewall. This is done by modifying the ServiceDefinition.csdef file to look like this:

    <?xml version="1.0" encoding="utf-8"?>
    <ServiceDefinition name="RubyOnAzure" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition" schemaVersion="2012-10.1.8">
      <WorkerRole name="WorkerRole1" vmsize="Small">
        <Startup>
          <Task commandLine="install.cmd" executionContext="elevated" taskType="simple"></Task>
          <Task commandLine="start.cmd" executionContext="elevated" taskType="background"></Task>
        </Startup>
        <Imports>
          <Import moduleName="Diagnostics" />
        </Imports>
        <Endpoints>
          <InputEndpoint name="Endpoint1" protocol="tcp" port="80" localPort="8080" />
        </Endpoints>
      </WorkerRole>
    </ServiceDefinition>

    Example

    I have put the example project on GitHub.

     
  • Richard 11:30 am on April 4, 2013

    Node.js + LevelDB 

    This article is based on a recent NodeUp podcast dedicated to LevelDB; I recommend listening to it.

    I have played around a bit with Level, and I’ve been really impressed by its speed, ease and simplicity. You should give it a go.

    LevelDB was developed by Jeffrey Dean and Sanjay Ghemawat at Google. It is an ‘in-process’ sorted key-value database that persists data to the file system. This means that it works in a similar way to SQLite, in that the database is private to your process. Level is great if you want to create your own database (Riak, for example) or you want to create a portable app that persists data itself (like Chrome).

    Node has good support for LevelDB with two packages: LevelDown, which provides low-level bindings to the native LevelDB library, and LevelUp, a higher-level (i.e. easier to use) abstraction over LevelDown.

    Getting started

    To get started, just install LevelUp.

    $> npm install levelup

    In your node app, you can now start using the database like this:

    var level = require('levelup');
    var db = level('./DatabaseDirectory'); // the directory is created if it doesn't already exist

    You can then read, write and delete key/value pairs in the database like this:

    db.put('key', 'value');  // write (a completion callback is optional)
    db.get('key', function(error, data){ /* data == 'value' */ });
    db.del('key');           // delete

    Iterating over a range of keys

    What’s more interesting is that you can iterate over the keys, like this:

    var options = {start:'key1', end: 'key9'};
    var readStream = db.createReadStream(options);
    readStream.on('data', function (data) {
      // each key/value is returned as a 'data' event
      console.log(data.key + ' = ' + data.value);
    });

    To stop reading records, just call destroy on the stream. For example, if you’re scanning for a particular value (a full scan like this is a bad idea in practice):

    var readStream = db.createReadStream({});
    readStream.on('data', function (data) {
      if (data.value == 'foo'){
        readStream.destroy();
      }
    });

    Using the options, you can iterate forwards/backwards, retrieve just the keys or values, and limit the number of keys returned.
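
    For instance (my reading of the LevelUp readme options):

    var options = {
      reverse: true,  // iterate backwards
      limit: 10,      // return at most 10 entries
      keys: true,     // emit keys...
      values: false   // ...but not values
    };
    db.createReadStream(options).on('data', function (key) {
      // with values: false, each 'data' event is just the key
      console.log(key);
    });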

    Multiple tables?

    There is only one table in LevelDB. To store different types of data you need to express a hierarchy in the key names. As the keys in level are stored in unicode order, the ‘~’ (tilde) character seems to be a good choice for separating parts of a key, like this:

    customers~customer1~orders~

    You can then retrieve all orders for customer1 using this range (use this for the options object in the previous example):

    {start:'customers~customer1~', end: 'customers~customer1~~'};

    The double tilde at the end of the range ensures you get all the orders for ‘customer1’.
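
    A quick sketch to make that concrete (keys invented for the example):

    // two customers, with orders nested under each key prefix
    db.put('customers~customer1~orders~0001', '...');
    db.put('customers~customer1~orders~0002', '...');
    db.put('customers~customer2~orders~0001', '...');

    // only customer1's orders fall inside this range
    db.createReadStream({start: 'customers~customer1~', end: 'customers~customer1~~'})
      .on('data', function (data) {
        console.log(data.key + ' = ' + data.value);
      });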

    More?

    There are a few more bits to Level, including transactions and an event system. You can find out more on the GitHub readme.

     
    • loveencounterflow 2:03 pm on September 12, 2014

      as i try to demonstrate in https://github.com/loveencounterflow/hollerith#lexicographic-order-and-utf-8, i consider using `~` as a field delimiter as snake oil / mediocre. you write, “As the keys in level are stored in unicode order, the ‘~’ (tilde) character seems to be a good choice for separating parts of a key”—well, first of all, LevelDB *itself* knows nothing about Unicode, only octets (bytes). it is the particular encoding you apply to your strings that define how your characters will relate to the ordering of keys inside the DB; as such, you can very well `npm install iconv-lite` and then apply whatever encoding comes to your mind. that could be an encoding where the latin letters are sorted as `AEIOUBDGFH…ZXY`. not that that would make sense in the general case, but still: LevelDB knows no Unicode.

      and yes, if your encoding is utf-8, which makes total sense, then you will get strings sorted in a lexicographic way *as if* sorted by Unicode character IDs (codepoint values). this is because although utf-8 is a variable-length encoding, it nevertheless does preserve the relative ordering of all encoded codepoints.

      next, your statement is somewhat self-contradictory: “because Unicode, `~` is good”. why? i’d say, “because ASCII, `~` is good”, because ASCII is 7bit and has `~` at position 0x7e (= 0b1111110), making it the highest printable codepoint of ASCII that will come after everything else in ASCII.

      but not in Unicode, where roughly 112’956 – 120 = 112’836 eqv. to 99.99% printing codepoints come *after* position 0x7e. the first time you enter a German, Swedish, French, Hungarian, Russian, Danish, … customer’s name as part of a key, chances are that any ÆÇÄÕÑ… will violate your assumption.

      why would you want to store data as opposed to (or alongside with) field names as a key? well, simply because that’s how you build indexes in LevelDB. if you don’t do that, you can never quickly iterate over all customers age 40 and older who live in Washington, DC or Århus, Denmark, which is, like, *the* classical DB application example.
