
  • Richard, 2:53 pm on April 16, 2014

    Running Clojure on Azure Web Sites 

    The recent release of Java on Microsoft Azure Web Sites was a big one. Not only are there plenty of Java developers out there who can now get easy access to a great public cloud (and yes, Web Sites have sticky sessions out of the box), but the announcement covers much more than just Java: it brings the whole Java Virtual Machine.

    The JVM is host to a number of interesting languages, so this announcement unlocks the potential to run things like Scala, Clojure, Groovy, JRuby, Jython and Kotlin on Azure Web Sites.

    I thought it would be fun to start with Clojure. It’s what Uncle Bob would do.

    This tutorial takes you from scratch to having a Clojure ‘hello world’ running. I did this all from Windows 8, but I assume you could do this just as easily from any desktop operating system.


    Install Java

    Install Java SE Development Kit 7

    Install Clojure 1.6

    Install Leiningen

    Install Git

    (yes there’s a lot of installing)

    Create a Hello World Application

    We’ll use Compojure to create a simple hello world app, and then package it as a war file.

    From the command prompt type this:

    > lein new compojure hello-world

    You can test your new web application by typing in:

    > cd hello-world
    > lein ring server

    … which will start the web server on port 3000.

    Now create a ‘war’ file, which we will use to deploy the application to Azure:

    > lein ring uberwar

    This should create something called target/hello-world-0.1.0-SNAPSHOT-standalone.war

    Creating the Azure Web Site from the Jetty Template

    The easiest way to get Java running is to take the existing Jetty template.

    In the Azure Portal go to ‘New’ and select Compute -> Web Site -> From Gallery.

    Select the ‘Jetty’ template (it’s in the ‘Templates’ sub section).

    Type in a unique url, and select a region, and click the ‘tick’ button to create the site.

    This will create a template website with the Jetty stuff already configured.

    Deploy to Azure

    To deploy our Clojure app, we need to put the war file in the right place in the Jetty template, and then push it up to Azure.

    In the portal navigate to the newly created website and click ‘Set up deployment from source control’ and select ‘Local Git repository’.

    This will take a few seconds, then give you a git url.

    Copy this URL, then from the command prompt navigate out of the Clojure app directory and clone the website to a new directory (make sure the USERNAME and WEBSITENAME bits are correct for your site).

    > cd ..
    > git clone

    Now take the ‘target/hello-world-0.1.0-SNAPSHOT-standalone.war’ file, and save it in the ‘bin/jetty-distribution-9.1.2.v20140210/webapps’ directory as ‘ROOT.war’. This should replace the existing file. Then, from the cloned website directory, commit and push:

    > git commit -am "added clojure war"
    > git push origin master

    And you’re done.  Navigate to the URL for your web site, and you should see ‘Hello World’.


    It’s exciting to see Microsoft providing support for the JVM in Azure Web Sites. The JVM supports a rich ecosystem of new languages and web frameworks, and deployment to Azure Web Sites is a simple git push away.

    Oh, and Clojure is fun.

  • Richard, 9:51 am on April 3, 2014

    Microsoft Codename Orleans Public Preview 


    Yesterday at Build, the public preview of Microsoft’s Codename Orleans project was announced.

    You can download the Orleans binaries here:

    The blog post on the .NET blog has more details:

    I was privileged enough to have early access to the project, and I wrote a few of the samples available in the Codeplex repository:


    Why is Orleans Interesting?

    Orleans is ideally suited to problems requiring both high scale and low latency, but that doesn’t preclude you from using it at small scale.

    What Orleans does well is manage a large number of grains (the Orleans term for an actor) in memory; think hundreds of thousands of grains per server. It does this by managing the lifecycle of these objects, so they are only active when required, and after a certain period they can be garbage collected. Keeping these objects in memory gives you low latency, as you can respond to requests on your system from state held in memory, rather than having to load information out of a database. It’s a bit like an in-memory cache in some regards, only the objects in the cache are programmable.

    Orleans can run on a cluster of machines (think hundreds of machines) to provide the high scale. The system has a fairly linear relationship between throughput and cluster size, which in plain English means that adding more machines to your cluster will increase your capacity, and performance doesn’t degrade too badly due to inter-server communication.

    Orleans is written in .NET, and you write your grains in C#. Grains are simply an interface and a class. The system is designed to be easy for the developer to use. You don’t have to be a distributed systems expert; those kinds of problems have been solved for you in the framework.

    Orleans will run on Windows Server, but it’s really designed for Azure. It’s a simple xcopy deploy to install, and comes with libraries designed to work with Worker Roles. You’ll need to front Orleans with your own API, which will have to be .NET (probably ASP.NET/WebAPI).

    Orleans gives you some constraints you must work within. You cannot have any shared state between grains, and all code must be asynchronous.

    However, Orleans gives you some good guarantees: there will only ever be one instance of a grain of a given identity, and the code in a grain will only run on a single thread (there are exceptions to these rules). This means you don’t have to worry about thread safety, locking and parallel programming. You just write simple class implementations.
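    To make that concrete, here is a rough sketch of a grain. This is an illustrative example, not taken from the preview samples; the names IGreeterGrain and GreeterGrain are made up, and the IGrain/GrainBase types reflect the preview API surface, which may change in later releases. The key shape to notice is that a grain is just an interface plus a class, and every method returns a Task.

    using System.Threading.Tasks;
    using Orleans;

    // Hypothetical grain: an interface deriving from IGrain...
    public interface IGreeterGrain : IGrain
    {
        // All grain methods are asynchronous and return a Task.
        Task<string> SayHello(string name);
    }

    // ...and a class inheriting from GrainBase that implements it.
    public class GreeterGrain : GrainBase, IGreeterGrain
    {
        // Per-grain state. It is never shared between grains, and because
        // Orleans runs a grain's code single-threaded, no locking is needed.
        private int greetings;

        public Task<string> SayHello(string name)
        {
            greetings++;
            return Task.FromResult(
                string.Format("Hello {0} (greeting #{1})", name, greetings));
        }
    }
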

    Single-threaded async: it sounds a bit like node.js, doesn’t it? It even has the node hexagons. This is probably the reason I like Orleans so much!

    What next?

    Watch this space. I expect a small ecosystem of tools and libraries to grow up around Orleans. Let me know if you’re interested.

  • Richard, 2:52 pm on February 20, 2014

    Async weirdness 

    What would you expect this program to print? (it’s set to .NET Framework 4.5 as the target framework)


    I’m not a world-leading expert on C#, but I would expect to see this:


    But this is actually what you get:


    The program attempts to read the value of ‘Result’ while the task is delayed. It blocks at this point, waiting for the delay to elapse. It then doesn’t need to ‘Wait’, because the task has already completed.

    You don’t need to call ‘Wait’ at all. The ‘Result’ property will ‘Wait’ for you when you try to read it.

    I think it’s generally assumed that reading a property isn’t an expensive operation, so this behaviour came as quite a surprise to me (thanks to Rob Blackwell for pointing it out). I can understand the motivation (it makes code less buggy), but it strikes me as odd.
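    The original snippet isn’t shown above, but the behaviour can be reproduced with a small program along these lines (an illustrative reconstruction, not the original code):

    using System;
    using System.Diagnostics;
    using System.Threading.Tasks;

    class Program
    {
        static async Task<int> GetValueAsync()
        {
            await Task.Delay(2000); // simulate some slow work
            return 42;
        }

        static void Main()
        {
            var stopwatch = Stopwatch.StartNew();
            Task<int> task = GetValueAsync();
            Console.WriteLine("Before reading Result: {0}ms", stopwatch.ElapsedMilliseconds);

            // Reading Result blocks this thread until the task completes.
            int value = task.Result;
            Console.WriteLine("After reading Result: {0}ms, value = {1}",
                stopwatch.ElapsedMilliseconds, value);

            // By now the task is already complete, so Wait() returns
            // immediately; it isn't needed at all.
            task.Wait();
        }
    }

    The second timestamp lands roughly two seconds after the first, because the read of ‘Result’ is the thing doing the waiting.
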

  • Richard, 10:29 am on January 21, 2014

    Node.js + Visual Studio 

    One of the things that attracted me to Node.js in the first place was the opportunity to break away from the development stack I have been using for years (C#, .NET and Visual Studio) and use something completely different. Something that was fast, lightweight, productive, cross platform and fun.

    I quickly found Sublime Text was the most productive place to develop Node. I’m not a hardcore Vim/Emacs person (much as I’d like to be) and Sublime gave me some autocompletion, and I could add in things like JSHint. I was happy.

    Then Microsoft released their Node.js tools for Visual Studio.

    I was sceptical. There was nothing wrong with my development workflow, and I was enjoying my life without Visual Studio. However, after watching the video I got interested.

    There are a number of really powerful features in these tools, but for me it’s the debugging that makes the biggest difference to my productivity. Here are a few screen shots.

    1. Press F5 to Run, automatic Git integration, and package inspection.


    You can have a solution file which contains multiple projects. You can choose which file in the project is the one for Node to run when you press F5. This saves you the workflow break of going to the console and starting your app manually. Yes, you do have to add a new ‘project’ file, and a ‘solution’ file too, but Visual Studio doesn’t pollute your filesystem beyond that.

    2. Automatically break into the debugger on an Error.


    If an error is thrown Visual Studio breaks at the line in question, and gives you a nice summary of the problem. For me, this is much quicker than reading the error reported by node, and going and looking up the line. At this point you are debugging, so you can go and inspect the variables and the stack trace as shown below.

    3. Set breakpoints and debug


    You can set a breakpoint to launch the debugger. When debugging you can hover over any variable and inspect its value. You can drill into objects and arrays. I normally solve this kind of problem with lots of calls to console.log. I have also used node-inspector, but this is better and easier to use.

    4. Get the stack trace


    When you’re debugging the code you get a stack trace, which you can explore.


    None of these features give you anything that you can’t already do in one way or another, but what they do is bring everything together into one integrated environment. This is what Visual Studio has always been good at; some would say that historically Visual Studio has attempted to do too much. The real test is to ask yourself whether using these tools is saving you time and increasing the quality of your code. I have developed software with Visual Studio for years, so for me the answer is yes.

  • Richard, 10:02 am on January 9, 2014

    Publishing a Node.js Website to Azure in under 2 minutes using only the browser 

    A while ago I recorded a screencast showing how you can publish a Web Site to Windows Azure using a Raspberry Pi. It took 3 minutes.

    With the recent launch of Visual Studio ‘Monaco’ and its integration with Windows Azure Web Sites, I thought it would be interesting to try again. Instead of the Pi, I’d use only my web browser.

    It took 1 minute 25 seconds (it would be quicker if I could type faster!).

    I have been really impressed by Monaco. I probably wouldn’t use it as my default IDE, but for hacking together something quickly from a random web browser, it’s just the ticket.

  • Richard, 12:51 pm on December 24, 2013

    The Windows Azure Architect’s Reference 

    My degree was not in computer science, I’m a mechanical engineer in disguise. On the first week of my degree course I was given a reference book which contained all kinds of useful facts, figures and formulas which you can pull together to construct a mathematical model of the world (which is what half of engineering is about). This book had everything from the periodic table, to the formulas for Newtonian physics, the tensile strength of various steels to the formulas for fluid mechanics. It was really useful.

    In the IT world we face similar problems. We construct a solution to a problem by pulling together various technologies. However, we often rely on gut instinct (or experience) to know whether a solution will scale in an acceptable way (I’m sometimes guilty of this myself).

    Rob Blackwell suggested that we pull together a one-page summary of the features in Windows Azure, to provide architects with a reference to help inform design decisions. This doesn’t include cost information, as this is well served by the calculator, but it does include performance expectations, SLAs, and naming restrictions. All figures come straight from MSDN documentation, so there’s no original content here, just a collation of data already on the internet:

    There are a large number of gaps at the moment, and perhaps you can help to fill them in? I hope this is something that will grow over time, and become increasingly useful.

  • Richard, 10:46 am on December 20, 2013

    Continuous Integration in Windows Azure 

    I wanted to do some continuous integration on a .NET project using Virtual Machines in Windows Azure.

    The project is hosted in Bitbucket, and I wanted to try out TeamCity to build it.

    Firing up a VM and installing TeamCity was easy. Configuring TeamCity to build the application wasn’t much harder.

    Now we get to the interesting bit. My build machine isn’t very busy. It takes a few seconds to build my project (it’s not a very big application) and I don’t spend all day and night working on it, so the machine spends most of its time idle. However, an idle machine is still racking up a bill.

    The solution is to start the machine when it’s needed, and shut it down when not.

    Bitbucket (like GitHub) supports service hooks. This is the ability to POST to any URL when a commit is made to the repository, so I created a node.js app which runs on Azure Web Sites and listens for the service hook. It then starts a Virtual Machine, and shuts it down again 15 minutes after the last request to start it.

    The code is available in this gist:

    You can clone the gist, and install it in your own Web Site using Git and the Azure Command Line Tools:

      git clone vmstartstop
      cd vmstartstop
      azure site create --git
      [edit server.js to set your own values for subscriptionId, hostedService, deploymentId, roleInstance]
      [add a mycert.pem file, following these steps:]
      git add .
      git commit -am "initial commit"
      git push azure master

    The app supports two actions, start or stop:


    To install in Bitbucket go to Settings, Hooks and add a ‘POST’:


    Or in GitHub go to settings, Service Hooks and select WebHook URLs:



    When you check in to Bitbucket, the VM starts up, builds the project, emails me if there is a problem, and then gets shut down again. Magic.

  • Richard, 2:18 pm on December 12, 2013

    100% Cloud Business 

    A week or two ago we switched off the last of our on-premises services, our two domain controllers. I tweeted this at the time, and was surprised by the reaction (mainly from Exchange Server MVPs), who saw this as a risky or forward-thinking thing to do. I didn’t think so; in fact I hadn’t logged into the domain for over 6 months. The only services I use locally are network printers (which don’t require authentication). Just to set expectations, we’re a small business and we’ve been using cloud technology for a while.

    Many startups probably aren’t encumbered by any on-prem infrastructure, but for us, every service used by the business was once hosted in our own racks in a locked, air conditioned room.

    What services are we using now?

    • Windows Azure for hosting our software products, and for ad-hoc VMs. We have also been experimenting with VMs as a development environment.
    • Office365 for email, calendars, file archive (SharePoint) and Lync.
    • Bitbucket and Visual Studio Online for source control, continuous integration and managing the product backlog.
    • SkyDrive for personal file backup – although I prefer Dropbox :¬P
    • Passpack for password management.
    • Microsoft CRM for managing customers and support cases.
    • Basecamp for project management.
    • (there are probably a number of other small tools that guys in the team use)

    Was the transition painful?

    Not at all. There were a couple of bumps along the way, but hats off to our IT team; they managed the transition with no surprises. There was a large data migration task to do, for example moving old Subversion repositories up to Git, and migrating on-premises CRM to the cloud offering. This wasn’t always straightforward, but it was carefully planned, and it all now does seem to work!

    Does it work?

    Yes! In many cases you wouldn’t know any different from using the services on-premises. OK, SharePoint Online doesn’t allow as much customisation, but we’ve never had to do that in the past. Outlook is almost exactly the same regardless of where the Exchange server is hosted, and a git remote is a git remote.

    I think there is some way to go. We could probably benefit from stitching together some of these systems to create a better ‘single view’ of the organisation; many of these services have APIs, and it would be interesting to see what we could build on top of them.

    What we don’t have to worry about any more is keeping a rack of servers running, patching operating systems, updating product versions, and planning downtime.

    What about network disruption?

    One concern many people have about cloud migration is being unable to access these services during periods of network outage. But systems like git (with Bitbucket), Outlook for email and Dropbox for files allow you to keep a local copy of what you’re working on, so for short periods of network disruption (which is relatively rare) there isn’t any interruption to work. In fact the ability to use all these services from anywhere on the internet (i.e. at home/coffee shop/conference/customer site) without having to mess about with a VPN far outweighs any drawback.


    You probably don’t have a water well in your back garden, you probably don’t keep a cow and grind flour. In the near-future people will only maintain servers for fun.

  • Richard, 12:43 pm on November 6, 2013

    Benchmarking PostgreSQL on Windows Azure VMs 

    Out of curiosity I benchmarked a vanilla PostgreSQL install on a Medium-sized VM on Azure.

    I used the pgbench tool, with the number of transactions set to 1000.

    I didn’t do any performance tuning, or attempt to move the data onto separate disks.

    Ubuntu Server 13.10

    starting vacuum...end.
    transaction type: TPC-B (sort of)
    scaling factor: 1
    query mode: simple
    number of clients: 1
    number of threads: 1
    number of transactions per client: 1000
    number of transactions actually processed: 1000/1000
    tps = 56.782699 (including connections establishing)
    tps = 56.827597 (excluding connections establishing)

    Windows Server 2012 R2 SP1

    starting vacuum...end.
    transaction type: TPC-B (sort of)
    scaling factor: 1
    query mode: simple
    number of clients: 1
    number of threads: 1
    number of transactions per client: 1000
    number of transactions actually processed: 1000/1000
    tps = 96.959695 (including connections establishing)
    tps = 97.290895 (excluding connections establishing)

    That’s right. Windows is faster, much faster.

  • Richard, 12:32 pm on November 1, 2013

    One Commit a Day #codevember 

    I thought it would be fun to try and make at least one commit to an open source project on GitHub every day for the month of November. It beats growing facial hair, and benefits the OSS community. I don’t promise to make some amazing commit to the Linux kernel, or solve world peace; I may just update one of my own projects or fix a small bug somewhere else. OSS is the sum total of a lot of small contributions, and that’s all I’m aiming to do.

    I’ll update this post with progress, or you can check my GitHub profile.

    I encourage you to do the same. Let’s make it a thing #codevember.


    Day 1 – AzureRunMe

    A nice easy start, I upgraded the AzureRunMe project to the latest version of the Azure SDK (2.2).

    View the pull request.

    Day 2 – MuonRedux

    This is my personal wiki, which I use for making notes. It uses Dropbox for storage. Upgraded to Bootstrap 3, and added support for folders.

    View the commit. View the app.

    Day 3 – NodeStorageExplorer

    My lightweight and quick node.js app for browsing Azure Storage. I fixed it up, and upgraded to Bootstrap 3. This app could do with some serious improvement; feel free to help out!

    View the commit. View the app.

    Day 4 – NetMFAzureStorage

    Contributed (whether he likes it or not) to Andy Cross’ storage client for the .NET Micro Framework.

    View the pull request.

    Day 5 – gluon.js

    Created a JavaScript library for dynamic templating in Twitter Bootstrap.

    View the commit.

    Day 6 – Scoop & AzurePluginLibrary

    Added a custom bucket to install APM using Scoop. View the commit.

    Added Scoop as a plugin in the AzurePluginLibrary. View the commit.

    Day 7 - NetMFAzureStorage (again)

    Added an experimental feature to use reflection to discover types when inserting into tables.

    View the commit.

    Day 8 – Scoop-Extras

    Added the Nano text editor to the scoop-extras repo.

    View the pull request.

    Day 9 – NetMFAzureStorage (again!!)

    Added some more experimental code to read table entities the horrible way.

    View the commit.

    Day 10 – NodeStorageExplorer (again)

    Added support for downloading private blobs.

    View the commit.

    Day 11 – NetMFAzureStorage

    Added table querying support.

    View the pull request.

    Day 12 – Gluon

    Added the ability to turn a form back into a JavaScript object.

    View the commit.

    Day 13 – Side Waffle

    Added a Windows Azure Cloud Service Plugin Template to the Side Waffle library.

    View the pull request.

    Day 14 – Muon Redux

    Added icons to the buttons and lists.

    View the commit.

    Day 15 – AzureDirectory

    Brushed up some of the code in my fork of the AzureDirectory project.

    View the commit.

    Day 16 – Windows Azure Python SDK

    Upgraded the X_MS_VERSION header to enable cross-account blob copy.

    View the pull request.

    Day 17 - NetMFAzureStorage

    Added table entity update.

    View the commit.

    Day 18 – NetMFAzureStorage

    Added documentation and refactored project structure.

    View the pull request.

    Day 19 – NetMFAzureStorage

    Fixed a bug on table writes.

    View the commit.

    Day 20 – MuonRedux

    Added grunt to build dependencies.

    View the commit.

    Day 21 – NetMFAzureStorage

    Added table entity delete.

    View the commit.

    Day 22 – NetMFAzureStorage

    Updated the readme file.

    View the commit.

    Day 23 – FAIL!

    Day 24 – Azure Directory

    Tidy up.

    View the commit.

    Day 25 – AzureDirectory

    Made blob compression an option rather than a debug symbol.

    View the commit.

    Day 26 – AzureDirectory

    Switched from Close to Dispose on streams.

    View the commit.

    Day 27 – AzureDirectory

    Refactored code.

    View the commit.

    Day 28 – Scoop

    Added support for escript.exe.

    View the pull request.

    Day 29 – github/gitignore

    Added the node_modules directory to the .gitignore file for Visual Studio.

    View the pull request.

    Day 30 – AzureTableBackup

    Created a utility to backup and restore Windows Azure Table Storage.

    View the commit.

    Other Codevemberers
