Three Tier Architecture in the Cloud?

A brief history of system architecture

Back at the dawn of the computing era, machines were single mainframes, often with dumb terminal clients. These mainframes ran a program for many simultaneous users, and everyone was happy. Over time, our software needs outgrew the capability of a single machine, so we decided to scale. The easiest way to scale was outwards, separating the presentation code and business code onto separate physical layers. This architecture still operates today (think desktop applications talking to a centralised database).

Fast forward a few more years, and in the 1990s we have the classical three tier architecture, which seems to have stuck. The first tier is typically a database server (like SQL Server); in the middle you have some business logic (DCOM or Web Services or something equally nasty), which is fronted by a presentation tier at the top (ASP.NET or a web server).

Advantages cited for this design include logical consistency, scalability and security. You have a nice (physical) separation of your code, and each part can be replaced without affecting the other parts (apparently). You can scale because you can add more hardware at any tier. It’s secure because you would typically expose only your presentation layer in your DMZ, keeping the other tiers within your internal infrastructure, thus reducing your attack surface.

Enter the cloud

Does the three tier architecture still stand up in cloud environments? In summary, it can, but I don’t think it’s the best architecture for cloud deployments.

There are a number of differences between cloud and physical environments that have a major influence on architecture:

  1. You don’t have a DMZ. Whilst you can keep servers hidden from the outside world, I tend to find that people want to expose their middle layer anyway, as an API to consume from outside the cloud.
  2. Compute time is money. Because you can scale up and down so easily, you want to get the most from your virtual hardware, to minimise the number of machines in your infrastructure, which minimises your costs. Putting the presentation and business logic tiers on the same physical machine reduces latency, which gives you better throughput. You also have the problem of scaling efficiently. If your application is split across different machine types, each scales independently. You probably have a formula that says you need 4 presentation machines for every 3 business logic machines, so what happens when you want to add just a bit more capacity? How many machines do you need, and which layer should you increment? Adjusting the deployment to get maximum value from every machine is difficult. Having an architecture where all machines are identical, and can handle every kind of request, makes this easier: it’s just one dial to adjust.
  3. Resilience is everything. Cloud applications should be designed for hardware failure. You don’t know when a machine is going to be removed from your infrastructure. Therefore you need at least two of every ‘type’ of machine (i.e. 2x presentation, 2x business logic), which gives you a minimum deployment of four machines. That’s quite a high starting point for some applications.
  4. Compute is just a utility. In the physical world, the clear logical separation of layers gives you the flexibility to swap out your middle tier, to upgrade the operating system or switch to an alternative platform. That’s just not the paradigm in the cloud: you’re not responsible for maintaining those machines any more, so you don’t need that flexibility. You’re not going to switch your middle tier over to another operating system, so concentrate on your application rather than the platform.
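The scaling-granularity problem from point 2 can be sketched with some toy arithmetic. This is purely illustrative: the 4:3 ratio, the function names, and the idea of measuring load in whole “machine units” of work are all assumptions made up for the example.

```python
import math

def tiered_machines(units_needed, presentation=4, business_logic=3):
    """A tiered deployment scales in whole ratio blocks:
    each step adds 4 presentation + 3 business logic machines."""
    block = presentation + business_logic
    return math.ceil(units_needed / block) * block

def homogeneous_machines(units_needed):
    """Identical machines that can handle every kind of request:
    one dial, scale one machine at a time."""
    return units_needed

# A load worth 8 machines of work forces the tiered layout up to a
# second 4+3 block (14 machines); the homogeneous layout needs just 8.
print(tiered_machines(8), homogeneous_machines(8))  # → 14 8
```

The gap between the two numbers is the cost of the coarse scaling step, and it is paid again at every increment that doesn’t land exactly on a ratio boundary.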

So I think there are a number of compelling reasons for simplifying your application to reside on a single role, or ‘type’, of virtual machine. I don’t think the tiered architecture translates to the cloud, and I don’t think it brings any value there. Sure, you can still build your application with logical tiers, but don’t just translate those straight onto physical layers.

So what can you do?

The service definition for an Azure Web Role allows you to install multiple applications on the same role:

    <Sites>
      <Site name="<web-site-name>">
        <VirtualApplication name="<application-name>" physicalDirectory="<directory-path>"/>
        <VirtualDirectory name="<directory-path>" physicalDirectory="<directory-path>"/>
        <Bindings>
          <Binding name="<binding-name>" endpointName="<endpoint-name-bound-to>" hostHeader="<url-of-the-site>"/>
        </Bindings>
      </Site>
    </Sites>

This means you can put your Web Services on the same box, and use the loopback address (localhost) to connect to them. However, you’re still paying a serialization/deserialization overhead on every call.
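For illustration, this is roughly how two sites might sit side by side on one role, with the API distinguished by a host header. All of the site names, directory paths and host names below are made up for the example:

```xml
<Sites>
  <!-- Presentation site: answers requests for the public host name -->
  <Site name="WebFrontEnd" physicalDirectory="..\WebFrontEnd">
    <Bindings>
      <Binding name="HttpIn" endpointName="HttpIn" hostHeader="www.example.com" />
    </Bindings>
  </Site>
  <!-- Web Services site: same box, same endpoint, different host header -->
  <Site name="WebServices" physicalDirectory="..\WebServices">
    <Bindings>
      <Binding name="HttpIn" endpointName="HttpIn" hostHeader="api.example.com" />
    </Bindings>
  </Site>
</Sites>
```

Because both sites live on the same role instance, a call from the front end to the services site never has to leave the machine.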

It’s better to have an application that doesn’t have a web services layer at all: just go straight through to your business logic tier from your presentation code.

The Worker Role is often used to process background tasks, which happen with no user interaction. These can also be consolidated onto the same machine as your web server: just add a ‘RoleEntryPoint’ to your web project. The code will run in a separate app domain.

So we’re back to the old mainframe architecture, albeit with a database as a service. However, it’s a mainframe that can grow horizontally, the clients connecting to it are powerful web browsers rather than dumb terminals, and the hardware is significantly faster (most of the time).
