Updates from August, 2012

  • Richard 2:28 pm on August 31, 2012 Permalink |  

    Azure Mobile Services = Node.js 

    The new Mobile Services preview in Windows Azure provides developers with a database and API to store data for a mobile application. You can hook in your own custom code to perform validation, send messages, etc. for each of the CRUD operations on an entity. The code is JavaScript, which seems to be hosted by a Node.js process. A small amount of playing around revealed that you can use the ‘require’ function to load in some of the standard node modules.

    Available Modules

    Access is not provided to all node modules, but these are available:

    • events
    • util
    • crypto
    • string_decoder
    • path
    • dns
    • url
    • punycode
    • readline
    • assert
    • zlib

    These modules are not available:

    • azure
    • tls
    • os
    • fs
    • dgram
    • http
    • https
    • repl
    • vm
    • child_process
    • tty
    • cluster

    The ‘process’ variable is not available either.

    Logging

    If you write to the console, your message will appear in the log!

    Arguments

    The objects that get passed to your function look like this:

    item = { text: 'A VALUE', complete: false } // this will depend on your model
    user = { level: 'anonymous' }
    request = { operation: 'insert', parameters: {}, execute: [Function], respond: [Function] }
    this = null

    Conclusion

    Whether anything really interesting can be done with this limited set of modules remains to be seen, but I’m really pleased to see that node is being used. Time for a bit more playing!

    • Paul Batum 5:14 pm on August 31, 2012 Permalink | Log in to Reply

      Try require(‘request’) – this is how we make http available. We plan to expand what is available (‘azure’ was a particularly painful omission). Feedback on the available modules is very welcome!

      • Tony 3:53 am on September 11, 2012 Permalink | Log in to Reply

        How about adding rx.js? That’s what I am talking about!

  • Richard 3:59 pm on August 30, 2012 Permalink |  

    What is Big Data? 

    We had a discussion in the office today about big data, trying to pin down what it is and where you can use it. Big Data seems like a difficult thing to define, and I find most of the definitions on the internet unsatisfactory. Like this, and this.

    So here’s my definition:

    “If you’ve got more data than will fit in a relational database, you’ve got big data.”

    In Azure this is easy to quantify: currently you can’t create a database larger than 150GB.

    This means that you have to split your data into smaller databases (i.e. sharding or SQL Federations), but to really embrace big data you need to forget SQL, transactions and aggregate queries, and embrace NoSQL and eventual consistency.

    In Azure you can use Blob or Table Storage to hold up to 100TB per account, and if you design your storage correctly you can get scale without compromising performance. Hadoop is also available to perform map-reduce data transformations, and Cloud Services allow you to scale out a compute farm very quickly, to crunch this data.

     
  • Richard 1:29 pm on August 29, 2012 Permalink |  

    Node.js + SQL Azure = Node-SqlServer 

    The Microsoft Driver for Node.js for SQL Server seems to be the most reliable way of getting Node to talk to SQL Server. However, it’s a preview, and it’s not finished.

    To install, just do:

    npm install node-sqlserver

    You can then start querying in your application like this:

    var sql = require("node-sqlserver");
    
    sql.query("CONNECTION STRING", "SELECT * FROM Foo WHERE ID = ?", [1], function(error, results){
    	if (error){
    		console.log(error);
    		return;
    	}
    	results.forEach(function(result){
    		console.log(result);
    	});
    });

    The `query` function takes four arguments:

    1. Connection string
    2. SQL Command
    3. An array of parameters, which will replace ? characters in the command text (optional)
    4. A callback, with an error object and an array of results

    You get an object for each record, with the fields as properties of the object, which is nice.

    However, there are a few things to watch out for:

    Installation

    The module consists of a C++ library, which is compiled when you install with npm. However, the compiled output doesn’t seem to get put in the right place. The compiled DLL is called sqlserver.node, and is deposited in the \build\Release folder. The JavaScript library is expecting to find it in \lib. You just need to copy it manually.

    Multiple commands

    If you insert some data, and then want a copy of the object back (not an unreasonable request – you’ll probably want to know the ID of it) you’ll write a command something like this:

    INSERT Foo (bar) VALUES ('baz'); SELECT * FROM Foo WHERE ID = @@IDENTITY

    However, I just got back an empty array or an array with this in it:

    { Columnremove: undefined }

    I found that setting NOCOUNT to ON overcame this:

    SET NOCOUNT ON; INSERT Foo (bar) VALUES ('baz'); SELECT * FROM Foo WHERE ID = @@IDENTITY

    I now get a nice object back:

    { ID: 1, bar: 'baz' }

    Errors

    I managed to get a dialog to display when attempting to delete a record that doesn’t exist.

    According to GitHub this is fixed, but it still seems to be causing me problems.

    Empty Records

    I can’t reproduce this in isolation, but I seem to get extra objects in my results array which are empty. The object contains all the properties of a record, but every value is undefined:

    { ID: 1, bar: 'baz' }
    { ID: undefined, bar: undefined }

    Conclusion

    Node-SqlServer looks really promising. I like the design and the simplicity, but I don’t think it’s quite ready yet.

     
  • Richard 3:53 pm on August 24, 2012 Permalink |  

    Windows 8 

    There’s been a lot of discussion on the internet predicting the success of Windows 8, and I can’t help but insert my thoughts into the discussion.

    I think the battle will be fought with the tablet form factor, and the new and important technology that Microsoft is releasing is WinRT.

    What is WinRT?

    WinRT (Windows Run Time) is a new framework, a sandbox if you like, which hosts applications and provides them with access to the services that the operating system provides, like files, web cams, location, etc. Applications that run in WinRT can be written in C++, C# or JavaScript (basically hosted in IE10), and all have access to a common API. These applications appear in the ‘live tile’ user interface, and not in desktop mode.

    The lower power (ARM) devices running Windows 8 will only support WinRT, whereas the more powerful devices (such as laptops and desktops) will also have the traditional Windows Desktop.

    Why are WinRT devices important?

    The iPad has had a good head start and there’s a lot of ground for Microsoft to cover here. The quality of the Android tablets doesn’t seem to be there (at the right price point) and the OS is not quite there either, although it’s getting very close.

    However, it’s a big market, and there are a few areas where Windows 8 is strong.

    1. Price. If rumours are to be believed, the Surface will launch at $199. This is unbelievably cheap given the specs of the device.
    2. Developer Friendly. You can develop for WinRT in JavaScript, C++ or C#; each seems to have equal footing on the platform. That’s a large developer community that can hook straight in with a relatively small learning curve. Visual Studio is an excellent IDE, and the development and testing process is straightforward.
    3. Enterprise. With WinRT, IT departments will be able to bypass the app store and load their applications directly onto the device. Allowing IT staff to manage tablets as just another Windows device will be a good reason to choose Windows 8 for field staff or corporate types.
    4. Azure. WinRT devices are going to have relatively low storage capacity, so cheap scalable storage in the Microsoft cloud complements this limitation. Push notifications and the Service Bus in Azure should also play well together. The Media Services piece provides an easy-to-consume set of services to facilitate video content. WinRT devices will be internet devices, and Microsoft have a serious internet capability to back this up.

    Not another Vista?

    My concern is not that this will be another Vista, I think there is some real benefit to this OS. It’s faster and leaner than previous versions, and I’ve enjoyed using it over the past few days.

    My concern is that it’ll be another Windows Phone 7. An OS that looked really promising – with some brilliant ideas, but ultimately failed to deliver high quality applications. Consumers will decide whether Windows 8 is any good based on Twitter, Facebook, LinkedIn, eBay and YouTube apps. Developers will ultimately make or break this.

     
  • Richard 12:07 pm on August 20, 2012 Permalink |  

    Data access directly from the browser (using Azure Blobs) 

    A few weeks ago I blogged about an architectural pattern I had been thinking about, where an application running in the browser could load/save data directly to the (Windows Azure Blob) storage service without going through your server side code. This makes a lot of sense in some scenarios: why consume CPU cycles that you’re paying for on the server to move data from one place to another? If the browser can do this directly, then why not bypass the server and write directly?

    I have since built an example ‘todo list’ application that does this. OK, there are a lot of pieces missing and it’s very rough, but it’s an illustration. The source is available here: https://github.com/richorama/AjaxBlobExample. There is an example of it running here: http://two10ra.blob.core.windows.net/index.html – prepare for bugs!

    The server side component of the application can run anywhere, and is accessed by the browser using JSONP. I have written this in Node, and it’s running on an Azure Website. It exposes one endpoint, which takes a username and returns a URL to a blob, which the code in the browser can then access for an hour. There’s no real security here, it’s just for fun:

    var azure = require("azure");
    var app = require('express').createServer();
    app.enable("jsonp callback");
    
    var blobService = azure.createBlobService();
    blobService.createContainerIfNotExists("container", function(error){});
    
    app.get('/login/:username', function(req, res){
    	var url = blobService.generateSharedAccessSignature("container", req.params.username, {
    		AccessPolicy : {
    			Permissions : "rwdl",
    			Expiry : getDate()
    		}
    	});
    	res.json({url: url.url()});
    });
    
    app.listen(process.env.port || 210);	
    
    function getDate(){
    	var date = new Date();
    	date.setHours(date.getHours() + 1);
    	return date;
    }

    The client side of the application is a static HTML file served out of Blob Storage. It calls the node service using a jQuery AJAX GET using JSONP, and then proceeds to GET and PUT to the blob to read/write data.

    There are a couple of things to watch out for:

    1. A shared access signature on the blob is not immediately available. I implemented some retry logic in the application.
    2. To write to the blob you need to issue a ‘PUT’. Not all browsers will support this (I guess).

    The conclusion is that the idea works, but what does this pattern mean?

    1. It allows you to think of your API as a set of static files stored in a JSON format. This has obvious scale-out advantages, especially if the blobs are enabled for public read.
    2. Your application just governs resources, and provides access to them as the user requires them. In my example, one request to my node server provided an hour of use, with no further involvement. There are huge cost savings here: you could now run a web app with significantly fewer resources.
    3. You lose control of input validation. This would need to be carefully considered before building out this pattern.

    Can anyone think of a name for the pattern? Has it been documented before?

     

     
  • Richard 11:09 am on August 20, 2012 Permalink |  

    Handling Continuation Tokens with Node.js on Windows Azure Table Storage 

    If you’re querying Windows Azure Table Storage, and you could have more than 1,000 results, or your entities span more than one PartitionKey, your results may be split. In the C# SDK there is a handy ‘AsTableServiceQuery’ extension method that handles this automatically for you. In the node SDK (as far as I know) this doesn’t exist. Instead, you are passed a continuation token object which you can use to retrieve the remaining results. It’s slightly awkward to use, so here is an example of some code which will call you back when all results have been retrieved:

    var azure = require("azure");
    var tableService = azure.createTableService();
    
    function queryWithContinuation(query, cb) {
        tableService.queryEntities(query, function(error, entities, continuationToken){
            if (continuationToken.nextPartitionKey) { 
                nextPage(entities, continuationToken, cb);
            } else {
                cb(entities);                    
            }
        });
    }
    
    // used to recursively retrieve the results
    function nextPage(entities, continuationToken, cb){
        continuationToken.getNextPage(function(error, results, newContinuationToken){
            entities = entities.concat(results);
            if (newContinuationToken.nextPartitionKey){
                nextPage(entities, newContinuationToken, cb);
            } else {
                cb(entities);
            }
        });
    }
    
    // example usage
    var query = azure.TableQuery.select().from('really-big-table');
    queryWithContinuation(query, function(results){
        console.log(results);
    });

    This code has no error handling, but it shows how the continuation token is used.

     
  • Richard 3:05 pm on August 16, 2012 Permalink |  

    Securing node.js RESTful services with JWT Tokens 

    I wanted to create a web service in node. It needed to be stateless, and secure such that only users with the correct credentials could access certain entities. The answer was to use a token. There are a few token modules for node, and I settled on node-jwt-simple. This gives you a JWT (JSON Web Token), which is a:

    …means of representing claims to be transferred between two parties. The claims in a JWT are encoded as a JavaScript Object Notation (JSON) object that is digitally signed or MACed using JSON Web Signature (JWS) and/or encrypted using JSON Web Encryption (JWE).

    To implement this in Node; first, allow users to log in, check they’re ok, and return them a token (I’m using express):

    var app = require('express').createServer();
    var jwt = require('jwt-simple');
    
    app.get('/token/', function(req, res) {
    	var username = req.headers.username;
    	var password = req.headers.password;
    
    	if(checkUser(username, password)) {
    		var token = jwt.encode({username: username}, tokenSecret);
    		res.json({token : token});
    	} else {
    		res.json({result: "AuthError"});
    	}
    });

    Notice that I’m sending the username and password as headers. I could use a POST and pass them in the body; I don’t think it really matters. When you create the token, you have the opportunity to set some claims: basically, properties of an object. Here I set the username, but if there’s something you need to know about your user, you can put it here.

    From the browser I can call this endpoint, passing the username and password in on the header, to retrieve the token:

    $.ajax({
    	type: "GET",
    	cache: false,
    	dataType: "json",
    	url: "/token/",
    	headers: {username:username, password:password},
    	success: function(token){
    		setToken(token);
    	}
    });

    Back in Node, I can then add some more endpoints to my API, and check the token on each request to ensure it’s valid.

    app.get('/XYZ/', function(req, res){
    	var decoded = jwt.decode(req.headers.token, tokenSecret);
    	if (checkUserCanAccessResource(decoded.username)){
    		...
    	}
    });

    The token is read from the header, so you need to add it to each jQuery request:

    $.ajax({
    	type: "GET",
    	cache: false,
    	dataType: "json",
    	url: "/XYZ/",
    	headers: { token: getToken() },
    	success: function(data){
    		...
    	}
    });

    This code is only an illustration. You need to think about expiry, error messages, etc.

     
    • vrossign 3:23 pm on October 3, 2013 Permalink | Log in to Reply

      I’m wondering what’s the point of decoding the token?
      Since the token is probably attached to the user somewhere in a database you can still find the user by token. Did I miss something?

    • Richard 3:43 pm on October 3, 2013 Permalink | Log in to Reply

      Decoding it ensures that your node process is the one that encoded in the first place (because only node knows the secret). This protects you against spoof tokens. You can also embed some extra information in the token, which may save you some database lookups.

    • vrossign 1:04 pm on October 4, 2013 Permalink | Log in to Reply

      thanks for the answer, still if the token has been stolen there is nothing to prevent the bad user from using it.

      At the end of the post you talk about expiry of the token… any useful link to share regarding this topic and jwt-simple?

    • Richard 1:15 pm on October 4, 2013 Permalink | Log in to Reply

      Sure, if the token is stolen, then you’ve got a problem. But if you decode it and check it, then you prevent users from creating their own tokens (a more likely attack vector).

      For expiry, just write an expiry date to your payload data, and check it after you have decrypted it.

    • gsarwohadi 11:48 pm on March 3, 2014 Permalink | Log in to Reply

      Thanks for the article. I’ve successfully implemented this with passport-local and jwt-simple, and it works great. I’m using this for a cordova app calling a REST server. I’ve also improvised it a bit, by including the device uuid in the jwt.encode body and, for the secret, using the session ID generated from passport-local. I know that jwt has an expire parameter, but I’m not sure if it’s secure enough as the encoded token is always the same (due to same data and secret). Using the session ID as the secret, it acts as an expiry date and also helps generate different tokens.

      Let me know what you think about this. Thanks

    • vrossign 8:20 pm on March 11, 2014 Permalink | Log in to Reply

      how do you get the session ID from passport local ?

    • alejandropaciotti 3:44 pm on June 12, 2014 Permalink | Log in to Reply

      Where is the setToken function?

      Thanks!

    • Richard 3:47 pm on June 12, 2014 Permalink | Log in to Reply

      Good question, that’s up to you to implement in the browser, by storing it in a variable or local storage.
