Updates from December, 2013

  • Richard 12:51 pm on December 24, 2013 Permalink |  

    The Windows Azure Architect’s Reference 

    My degree was not in computer science; I’m a mechanical engineer in disguise. In the first week of my degree course I was given a reference book containing all kinds of useful facts, figures and formulas that you could pull together to construct a mathematical model of the world (which is what half of engineering is about). This book had everything from the periodic table to the formulas of Newtonian physics, from the tensile strengths of various steels to the formulas of fluid mechanics. It was really useful.

    In the IT world we face similar problems. We construct a solution to a problem by pulling together various technologies. However, we often rely on gut instinct (or experience) to know whether a solution will scale acceptably (I’m certainly guilty of this myself).

    Rob Blackwell suggested that we pull together a one-page summary of the features in Windows Azure, to provide architects with a reference to help inform design decisions. This doesn’t include cost information, as that is already well served by the pricing calculator, but it does include performance expectations, SLAs, and naming restrictions. All figures come straight from MSDN documentation, so there’s no original content here, just a collation of data already on the internet:


    There are a large number of gaps at the moment, and perhaps you can help to fill them in? I hope this is something that will grow over time, and become increasingly useful.

  • Richard 10:46 am on December 20, 2013 Permalink |  

    Continuous Integration in Windows Azure 

    I wanted to do some continuous integration on a .NET project using Virtual Machines in Windows Azure.

    The project is hosted in Bitbucket, and I wanted to try out Teamcity to build it.

    Firing up a VM, and installing Teamcity was easy. Configuring Teamcity to build the application wasn’t much harder.

    Now we get to the interesting bit. My build machine isn’t very busy. It takes a few seconds to build my project (it’s not a very big application), and I don’t spend all day and night working on it, so the machine spends most of its time idle. However, an idle machine is still racking up a bill.

    The solution is to start the machine when it’s needed, and shut it down when not.

    Bitbucket (like GitHub) supports service hooks. This is the ability to POST to any URL when a commit is made to the repository, so I created a node.js app which will run on Azure Web Sites, and listen to a service hook. It then starts a Virtual Machine, and shuts it down again, 15 minutes after the last request to start it.

    The code is available in this gist: https://gist.github.com/richorama/8052843
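    At its heart, the shutdown behaviour is a resettable countdown: every incoming service hook pushes the shutdown deadline 15 minutes into the future, and the VM only stops once the window expires with no further requests. A minimal sketch of that logic (the names here are illustrative, not the gist’s actual identifiers):

```javascript
// Resettable idle countdown: each start request moves the shutdown
// deadline 15 minutes past "now"; the VM is stopped only once the
// deadline passes with no further requests.
// Illustrative sketch -- not the actual code from the gist.
function makeIdleTimer(idleMs) {
  var deadline = null;
  return {
    // Called on every service-hook POST (i.e. every commit).
    touch: function (now) { deadline = now + idleMs; },
    // Checked on a schedule; true means it is time to stop the VM.
    shouldStop: function (now) { return deadline !== null && now >= deadline; }
  };
}

var vmTimer = makeIdleTimer(15 * 60 * 1000); // 15-minute idle window
</antml_code_fence>

    A second commit arriving mid-countdown simply moves the deadline out again, so a busy afternoon of check-ins keeps the VM alive with a single start.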

    You can clone the gist, and install it in your own Web Site using Git and the Azure Command Line Tools:

      git clone https://richorama@gist.github.com/8052843.git vmstartstop
      cd vmstartstop
      azure site create --git
      [edit server.js to set your own values for subscriptionId, hostedService, deploymentId, roleInstance]
      [add a mycert.pem file, following these steps: https://coderead.wordpress.com/2013/08/09/managing-azure-iaas-from-node-js/]
      git add .
      git commit -am "initial commit"
      git push azure master
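    The values you edit in server.js would look something like this (the variable names come from the steps above; the values are placeholders you replace with your own):

```javascript
// Illustrative configuration -- substitute your own subscription and
// VM details here (check server.js in the gist for the exact shape).
var config = {
  subscriptionId: '00000000-0000-0000-0000-000000000000', // from the Azure portal
  hostedService:  'my-build-service',                     // cloud service name
  deploymentId:   'my-deployment',                        // deployment name
  roleInstance:   'build-vm'                              // VM instance to start/stop
};
</antml_code_fence>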

    The app supports two actions, start or stop:
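    A sketch of how the two actions might be dispatched (in the real server.js these branches call the Azure Service Management API, authenticated with mycert.pem; the handler bodies here are placeholders):

```javascript
// Dispatch sketch for the two supported actions. The return values
// are placeholders; the real app would issue the management API call
// for the configured role instance here.
function handleAction(path) {
  switch (path) {
    case '/start': return 'start requested'; // would start the VM (and reset the idle timer)
    case '/stop':  return 'stop requested';  // would shut the VM down
    default:       return 'unknown action';
  }
}
</antml_code_fence>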



    To install in Bitbucket go to Settings, Hooks and add a ‘POST’:


    Or in GitHub go to settings, Service Hooks and select WebHook URLs:



    When I check in to Bitbucket, the VM starts up, builds the project, emails me if there is a problem, and then gets shut down again. Magic.

    • danmiser 1:52 pm on May 21, 2014 Permalink

      Thanks for this. I was able to deploy the web site, but it’s not stopping my VM. I don’t see anything coming back in the console. I imagine I didn’t update my variables correctly, but I checked the online reference for startRole from the MSDN site and the only one that seems confusing is the deploymentId variable. I tried both with “prod” (arbitrary) and the Deployment Name listed in the Azure portal for my cloud service linked to my VM. Any thoughts about how best to debug where I went wrong?

      • Richard 6:00 am on May 25, 2014 Permalink

        Hi, pleased to hear you’re finding this useful.

        The environment should be either Production or Staging.

        For debugging I would try using the log streaming service. If you look at the command line tools or the Kudu console you should be able to get the log output and see where it’s going wrong.



    • danmiser 12:51 pm on May 25, 2014 Permalink

      Awesome. Thanks for the reply. The log streaming service showed me it was an issue with the cert, so I’ll go back and double check my work there, and I appreciate the environment answer as well.

      Thanks again.

  • Richard 2:18 pm on December 12, 2013 Permalink |  

    100% Cloud Business 

    A week or two ago we switched off the last of our on-premises services: our two domain controllers. I tweeted this at the time and was surprised by the reaction, mainly from Exchange Server MVPs, who saw it as either risky or forward-thinking. I didn’t think so; in fact I hadn’t logged into the domain for over 6 months. The only services I use locally are network printers (which don’t require authentication). Just to set expectations: we’re a small business, and we’ve been using cloud technology for a while.

    Many startups probably aren’t encumbered by any on-prem infrastructure, but for us, every service used by the business was once hosted in our own racks in a locked, air conditioned room.

    What services are we using now?

    • Windows Azure for hosting our software products, and for ad-hoc VMs. We have also been experimenting with VMs as a development environment.
    • Office365 for email, calendars, file archive (SharePoint) and Lync.
    • Bitbucket and Visual Studio Online for source control, continuous integration and managing the product backlog.
    • SkyDrive for personal file backup – although I prefer Dropbox :¬P
    • Passpack for password management.
    • Microsoft CRM for managing customers and support cases.
    • Basecamp for project management.
    • (there are probably a number of other small tools that people on the team use)

    Was the transition painful?

    Not at all. There were a couple of bumps along the way, but hats off to our IT team: they managed the transition with no surprises. There was a large data migration task to do, for example moving old Subversion repositories up to Git, and migrating on-premises CRM to the cloud offering. This wasn’t always straightforward, but it was carefully planned, and it now all does seem to work!

    Does it work?

    Yes! In many cases you wouldn’t know any different from using the services on-premises. OK, SharePoint Online doesn’t allow as much customisation, but we’ve never needed that in the past. Outlook is almost exactly the same regardless of where the Exchange server is hosted, and a git remote is a git remote.

    I think there is still some way to go. We could probably benefit from stitching together some of these systems to create a better ‘single view’ of the organisation; many of these services have APIs, and it would be interesting to see what we could build on top of them.

    What we don’t have to worry about any more is keeping a rack of servers running, patching operating systems, updating product versions, and planning downtime.

    What about network disruption?

    One concern many people have about cloud migration is being unable to access these services during a network outage. But systems like Git (with Bitbucket), Outlook for email and Dropbox for files let you keep a local copy of what you’re working on, so short periods of network disruption (which are relatively rare) don’t interrupt work at all. In fact, the ability to use all these services from anywhere on the internet (at home, a coffee shop, a conference, a customer site) without having to mess about with a VPN far outweighs any drawback.


    You probably don’t have a water well in your back garden, you probably don’t keep a cow and grind flour. In the near-future people will only maintain servers for fun.
