Updates from March, 2012

  • Richard 1:59 pm on March 23, 2012 Permalink |  

    Three Tier Architecture in the Cloud? 

    A brief history of system architecture

    Back at the dawn of the computing era, machines were single mainframes, often with dumb terminal clients. These mainframes ran a program for lots of simultaneous users, and everyone was happy. Over time, our software needs outgrew the capability of a single machine, so we decided to scale. The easiest way to scale is upwards, separating the presentation code and business code into separate physical layers. This architecture still operates today (think desktop applications talking to a centralised database).

    Fast forward a few more years, and in the 1990s we have the classical three tier architecture, which seems to have stuck. The first tier is typically a database server (like SQL Server), in the middle you have some business logic (DCOM or Web Services or something equally nasty), and this is fronted by a presentation tier at the top (ASP.NET or a web server).

    Advantages cited for this design include logical consistency, scalability and security. You have a nice (physical) separation of your code, and each part can be replaced without affecting the other parts (apparently). You can scale because you can add more hardware at any point. It’s secure because you would typically expose your presentation layer in your DMZ, but keep the other tiers within your internal infrastructure, thus reducing your attack surface.

    Enter the cloud

    Does the three tier architecture still stand up in cloud environments? In summary, it can, but I don’t think it’s the best architecture for cloud deployments.

    There are a number of differences between cloud and physical environments that have a major influence on architecture:

    1. You don’t have a DMZ. Whilst you can keep servers hidden from the outside world, I tend to find that people want to expose their middle layer anyway, as an API to consume from outside the cloud.
    2. Compute time is money. Because you can scale up and down so easily, you want to get the most from your virtual hardware, to minimise the number of machines in your infrastructure, which minimises your costs. Putting the presentation and business logic tiers on the same physical machine reduces latency, which gives you better throughput. You also have the problem of scaling efficiently. If your application is split across different machine types, each scales independently. You probably have a formula that says you need 4 presentation machines for 3 business logic machines, or something similar, so what happens when you want to add just a bit more capacity? How many machines do you need, and which layer should you increment? Adjusting the deployment to get maximum value from every machine is difficult. Having an architecture where all machines are identical, and can handle every kind of request, makes this easier: it’s just one dial to adjust.
    3. Resilience is everything. Cloud applications should be designed for hardware failure. You don’t know when a machine is going to be removed from your infrastructure. Therefore you need at least two of every ‘type’ of machine (i.e. 2x presentation, 2x business logic), which gives you a minimum deployment of four machines. That’s quite a high starting point for some applications.
    4. Compute is just a utility. In the physical world, the clear logical separation of layers gives you the flexibility to swap out your middle tier, to upgrade the operating system or switch to an alternative platform. That’s just not the paradigm in the cloud: you’re not responsible for maintaining those machines any more, so you don’t need that flexibility. You’re not going to switch your middle tier over to another operating system, so concentrate on your application rather than the platform.

    So I think there are a number of compelling reasons for simplifying your application to reside on a single role or ‘type’ of virtual machine. I don’t think the tiered architecture translates to the cloud, and I don’t think it brings any value there. Sure, you can still build your application with logical tiers, but don’t just map those straight onto physical layers.

    So what can you do?

    The service definition for an Azure Web Role allows you to install multiple applications on the same role:

        <Sites>
          <Site name="<web-site-name>">
            <VirtualApplication name="<application-name>" physicalDirectory="<directory-path>"/>
            <VirtualDirectory name="<directory-path>" physicalDirectory="<directory-path>"/>
            <Bindings>
              <Binding name="<binding-name>" endpointName="<endpoint-name-bound-to>" hostHeader="<url-of-the-site>"/>
            </Bindings>
          </Site>
        </Sites>

    This allows you to put your Web Services on the same box, and use the loopback address (localhost) to connect to them. However, you’re still paying a serialization/deserialization overhead.

    It’s better to have an application that doesn’t have a web services layer at all: just go straight through to your logical business tier from your presentation code.
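    As a rough illustration (OrderService and its method are hypothetical names), the presentation code simply calls the business logic class in-process, rather than going through a service proxy:

        public class OrdersController : Controller
        {
            // OrderService is an ordinary class in a referenced class library,
            // not a WCF/ASMX proxy, so there is no serialization hop.
            private readonly OrderService orders = new OrderService();

            public ActionResult Index(int customerId)
            {
                var model = orders.GetOrdersForCustomer(customerId);
                return View(model);
            }
        }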

    The Worker Role is often used to process background tasks, which happen with no user interaction. These can also be consolidated onto the same machine as your web server: just add a ‘RoleEntryPoint’ to your web project. The code will run in a separate app domain.
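    A minimal sketch of what that looks like (the class name and the loop body are placeholders; the real work is whatever your background task does):

        using System;
        using System.Threading;
        using Microsoft.WindowsAzure.ServiceRuntime;

        public class WebRole : RoleEntryPoint
        {
            public override void Run()
            {
                // Runs separately from the IIS-hosted web application.
                while (true)
                {
                    // ProcessBackgroundWork();  // your batch/queue processing goes here
                    Thread.Sleep(TimeSpan.FromSeconds(30));
                }
            }
        }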

    So we’re back to the old mainframe architecture, albeit with a database as a service. However, it’s a mainframe that can grow horizontally, the clients connecting to it are powerful web browsers rather than dumb terminals, and the hardware is significantly faster (most of the time).

    • Dan 11:35 pm on May 10, 2012 Permalink | Log in to Reply

      Good stuff!

    • kay28point5 2:49 pm on March 31, 2014 Permalink | Log in to Reply

      Hi. Having trouble imagining how you would remove a web server tier and still communicate with those powerful browsers? (Unless you were implying coding a whole subset of browser-interfacing code which the web service already provides…)

  • Richard 12:36 pm on March 23, 2012 Permalink |  

    Downloading a report from the SQL Azure Reporting Service 

    To download a rendered report from the SQL Azure Reporting Service is more straightforward than I expected:

    // ServerReport comes from the ReportViewer assembly (e.g. Microsoft.Reporting.WebForms).
    ServerReport report = new ServerReport();
    report.ReportServerUrl = new Uri("https://xxx.reporting.windows.net/ReportServer");
    // ReportServerCredentials here is a custom IReportServerCredentials implementation
    // supplying the username, password and server name for forms authentication.
    report.ReportServerCredentials = new ReportServerCredentials("username", "password", "xxx.reporting.windows.net");
    report.ReportPath = "/ReportName.rdl";
    // Render returns the report as a byte array in the requested format.
    var bytearray = report.Render("PDF");
    using (var fs = new FileStream(@"c:\file.pdf", FileMode.Create))
    {
        fs.Write(bytearray, 0, bytearray.Length);
    }

    I’m saving the byte array to the disk, but you could just as easily write it to a blob.
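    For example, using the (2012-era) StorageClient library, something like this would push the same byte array into a container; the connection string, container and blob names are placeholders:

    var account = CloudStorageAccount.Parse(connectionString);
    var container = account.CreateCloudBlobClient().GetContainerReference("reports");
    container.CreateIfNotExist();
    var blob = container.GetBlobReference("ReportName.pdf");
    blob.Properties.ContentType = "application/pdf";
    blob.UploadByteArray(bytearray); // the same byte array returned by Render()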

    The format can be any of these values:

    HTML3.2, HTML4.0, MHTML, IMAGE, EXCEL, WORD, CSV, PDF, XML, and NULL

    Not sure why you would want ‘null’, but there you are.

     
    • A 4:36 pm on May 9, 2012 Permalink | Log in to Reply

      Thanks for the blog. I believe it is taking me in the right direction.

      However, I am getting a strange rsAuthenticationExtensionError message. “The Authentication Extension threw an unexpected exception or returned a value that is not valid: identity==null.”

      I have googled it and spent nearly a day on it. Solutions I found on the web are either not applicable as it is an azure version of reporting services or the solutions don’t work.

      As usual with Microsoft, most of the error messages are misleading. I keep thinking that it’s an error related to the assembly version. But no luck so far.

      Any suggestions would be much appreciated.

    • slackshot 9:47 pm on August 29, 2013 Permalink | Log in to Reply

      Just wanted to post that I love your work. I have been poring over it all day. I’m a big Azure fan, been working with it for quite a while now. Started out doing non-profit projects.. for a local at-risk youth program..

  • Richard 9:55 pm on March 14, 2012 Permalink |  

    Tunnelling UDP over the Service Bus – or how to get the Sentinel licensing server working on Azure 

    Before I start, I don’t think that a licensing server is necessary for cloud applications. It just adds extra complexity and cost with no benefit. If you are in control of your code, piracy shouldn’t be a concern. However, I accept that in some cases software may be licensed via third parties, or businesses may not want a version of their codebase with licensing disabled.

    Sentinel cannot be installed on a VM (I believe it needs a fixed MAC address), and it talks UDP, which isn’t currently available in Azure. Therefore we need to keep Sentinel on premises and find a way of getting UDP traffic across the internet. Well, it turns out that this is a problem shared with the Quake 3 community. The Quake server also uses UDP, and playing against people on a different network is a challenge. Their solution is ‘Tunnel’, a UDP-to-TCP tunnel written in Java:

    http://tunnel.mrq3.com/

    Another problem is that businesses are unlikely to want to expose their licensing server on the internet, but Azure has the solution: the Service Bus. PortBridge is a ready-to-go application VPN, which can traverse firewalls and connect together otherwise unconnectable endpoints:

    http://vasters.com/clemensv/2009/11/18/Port+Bridge.aspx

    The deployed system looks like this:

    The steps to get this up and running are as follows:

    1. Configure a ServiceBus endpoint in the Azure Management Portal.
    2. Put the Java runtime in a zip file, and upload it to Blob Storage (you need to manually deploy this to the Azure Role).
    3. Create a cloud project with a Worker/Web Role, and add the PortBridge Agent and Tunnel client files to the project, set the files to copy local. This will package up these dependencies with your application.
    4. Update the PortBridge config file, so the portBridgeAgent section looks something like this:
       
      <portBridgeAgent serviceBusNamespace="YOUR_NAMESPACE" serviceBusIssuerName="owner" serviceBusIssuerSecret="YOUR_SECRET">
        <portMappings>
          <port localTcpPort="6667" targetHost="YOUR_SENTINEL_SERVER_NAME" remoteTcpPort="6667">
            <firewallRules>
              <rule source="127.0.0.1"/>
              <rule sourceRangeBegin="10.0.0.0" sourceRangeEnd="10.255.255.255"/>
            </firewallRules>
          </port>
        </portMappings>
      </portBridgeAgent>
    5. Update the client.txt of Tunnel to something like this:
       
      #Login data for the server
      username = ech
      password = password
      
      #Address of the tunnel server
      tunnel = 127.0.0.1:6667
      
      #Number of TCP connections to make (1-30)
      connections = 30
      
      redirect1 = 5093 -> 127.0.0.1:5093
    6. The Worker Role should then have a startup task which will download Java and unzip it. (I tend to use the 7zip console to extract the files from the archive, and I just wrote a quick .NET console application to download the zip for me; see the sketch after this list.) Add startup tasks to run the PortBridge Agent and the Tunnel client as background processes. I used this script to start the Tunnel, which will restart the Tunnel if the connection drops:
       
      :start
      
      REM pause for 10 seconds, waiting for portbridge to start
      PING 1.1.1.1 -n 1 -w 10000 >NUL
      
      c:\applications\java\bin\java -classpath build;lib\log4j-1.2.7.jar -Dlog4j.configuration=file:.\log4j.properties com.mrq3.tunnel.Client 
      
      goto start
    7. Configure the PortBridge service config file (which will run on premises):
       
      <portBridge serviceBusNamespace="YOUR_NAMESPACE" serviceBusIssuerName="owner" serviceBusIssuerSecret="YOUR_SECRET">
        <hostMappings>
          <add targetHost="YOUR_SENTINEL_SERVER_NAME" allowedPorts="6667"/>
        </hostMappings>
      </portBridge>
    8. Configure the Tunnel server.txt file (which will run on premises):
       
      user1 = ech
      pass1 = password
    9. Start the on-premises components, deploy the Worker Role, and cross your fingers…
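
    For reference, the ‘quick .NET console application’ from step 6 can be a simple sketch like this (the blob URL is a placeholder, and it assumes the zip is publicly readable or that the URL includes a shared access signature):

      using System;
      using System.Net;

      class DownloadJava
      {
          static void Main()
          {
              // Pull the zipped Java runtime down from blob storage...
              var url = "http://yourstorage.blob.core.windows.net/tools/java.zip";
              using (var client = new WebClient())
              {
                  client.DownloadFile(url, @"c:\applications\java.zip");
              }
              // ...then a subsequent startup task can extract it with the 7zip console,
              // e.g. 7za.exe x c:\applications\java.zip -oc:\applications\java
              Console.WriteLine("java.zip downloaded");
          }
      }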
     
  • Richard 1:22 pm on March 2, 2012 Permalink |  

    Azure HTTP timeouts 

    An avid reader of my blog pointed me towards this post, which suggests that Azure will time out your incoming requests after a minute.

    I constructed a quick test in MVC, which would block the server for a number of milliseconds before returning:

    public ActionResult Index(int duration)
    {
        System.Threading.Thread.Sleep(duration);
        return Content(string.Format("Response after {0}ms wait", duration));
    }

    I passed in a variety of durations, and found that anything under 4 minutes (240,000 ms) worked, and anything over 4 minutes didn’t.
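
    If you want to reproduce the test from a client, note that the client’s own timeout has to be raised well past the limit you’re probing (HttpWebRequest defaults to 100 seconds). A rough sketch, with a placeholder URL:

    var request = (HttpWebRequest)WebRequest.Create(
        "http://yourapp.cloudapp.net/Home/Index?duration=250000");
    request.Timeout = 10 * 60 * 1000;          // 10 minutes
    request.ReadWriteTimeout = 10 * 60 * 1000;
    try
    {
        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            Console.WriteLine(reader.ReadToEnd());
        }
    }
    catch (WebException ex)
    {
        // Expected once the duration goes past the ~4 minute cut-off.
        Console.WriteLine("Request failed: " + ex.Message);
    }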

    Trying the same test with IE on the VM (using the local IP address), and thus not going through the load balancer, resulted in much longer waits; I didn’t have the patience to find out exactly how long.

    We can conclude that something in the Azure infrastructure (like the firewall or load balancer) is terminating HTTP requests after 4 minutes (my reader confirms this too).

    It’s not uncommon for network infrastructure (which could be anywhere between you and the cloud) to terminate HTTP connections after a minute or two, so your code shouldn’t try to block for this long anyway.

     
    • vbmagic 10:31 am on March 5, 2012 Permalink | Log in to Reply

      It would be interesting to see if you get the same time-outs when using a non windows web server such as jBoss/Apache

      • Richard 11:38 am on March 5, 2012 Permalink | Log in to Reply

        It should be exactly the same (unless the alternative server closes the connection earlier) as it’s the Azure infrastructure which is closing the connection. Worth trying though!

        • vbmagic 11:57 am on March 5, 2012 Permalink | Log in to Reply

          I was wondering if it’s a feature of the IIS that exists in the Azure compute role (different to the IIS that runs in Server 2008 (R2)) or, like you said, the Azure infrastructure.

        • Richard 12:08 pm on March 5, 2012 Permalink | Log in to Reply

          Good question. I did all of my testing from inside Azure. Some requests I sent via the load balancer, by using the *.cloudapp.net address, other requests I made directly by using the machine’s internal IP address. When I went via the NLB, it timed out after 4 minutes. Otherwise the requests lasted a lot longer.

          Having said all of this, some people are experiencing longer timeouts, and I’m not sure why.

  • Richard 12:37 pm on March 1, 2012 Permalink |  

    Cloud Computing Talk 

    Here is the presentation from my cloud computing talk at the first joint workshop between academia and industry at Essex University Computer Science and Electronic Engineering department.

     