
Web 2.0 Application Architecture Template

Application created for a Startup based in Chicago

The term ‘Load Balancer’ is quite self-explanatory: it balances the load across the application servers behind it. There can be ‘n’ application servers behind the Load Balancer (LB), none of which directly face the end users.


The LB distributes the load generated by end-user requests to the web application among the different application servers, using various scheduling and balancing algorithms. Because the load is spread across multiple application servers, the portal performs far better than it would if it were served by a single application server. However, if there is only one LB in place, that LB becomes a SPOF (Single Point of Failure) in the whole architecture. To overcome this we can run 2 or more LBs. But what happens if one LB runs into an issue and is no longer capable of handling requests? In a 2-LB deployment this again leaves a SPOF. To overcome this we can configure automation at the LB layer in such a way that when an LB becomes unhealthy it gets replaced with a fresh LB without any manual intervention.

Figure: load balancer architecture
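To make the scheduling idea above concrete, here is a minimal Python sketch of round-robin balancing over a pool of backends. The backend names and the pick_backend helper are purely illustrative and are not part of the LB application used in this deployment.

```python
from itertools import cycle

# Hypothetical pool of application servers sitting behind the LB.
BACKENDS = ["app-server-1:8080", "app-server-2:8080", "app-server-3:8080"]

# Round-robin scheduling: each incoming request is handed to the next
# backend in the rotation, spreading the load evenly across the pool.
_rotation = cycle(BACKENDS)

def pick_backend() -> str:
    """Return the backend that should serve the next request."""
    return next(_rotation)

if __name__ == "__main__":
    for request_id in range(6):
        print(f"request {request_id} -> {pick_backend()}")
```

Real load balancers offer more elaborate algorithms (least connections, weighted round robin, and so on), but the principle of rotating requests across the pool is the same.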

For automation we use Auto Scaling, a service offered by AWS (Amazon Web Services); the architecture itself is also deployed on AWS. Auto Scaling allows us to scale our Amazon EC2 capacity up or down automatically according to conditions we define. Here we have set up a custom LB on an EC2 instance, running an application that acts as the LB. A job is also configured on the same instance to upload the LB application's configuration file to S3 at regular intervals. A base AMI is taken with this setup, and Auto Scaling is configured on top of it.
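A rough sketch of that configuration-upload job, written with boto3. The bucket name, object key, config path and interval are assumptions for illustration; the actual deployment may schedule the upload with cron and a different AWS SDK version.

```python
import time
import boto3

# Placeholders -- the real bucket, key and config path belong to the deployment.
CONFIG_PATH = "/etc/lb/lb.conf"
BUCKET = "example-lb-config-bucket"
KEY = "lb/lb.conf"
UPLOAD_INTERVAL_SECONDS = 300

s3 = boto3.client("s3")

def upload_config() -> None:
    """Push the current LB configuration file to S3 so that a replacement
    instance can pull the latest copy when it boots."""
    s3.upload_file(CONFIG_PATH, BUCKET, KEY)

if __name__ == "__main__":
    while True:
        upload_config()
        time.sleep(UPLOAD_INTERVAL_SECONDS)
```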

Auto Scaling is configured so that there are always ‘n’ LB instances in the LB layer. For now we do not touch the DNS level, as it is maintained with constant values (IPs). The Auto Scaling configuration includes the instance type, security groups, availability zone, key pair, user-data and so on. Since Auto Scaling is configured to always keep ‘n’ LB instances, whenever AWS detects that one of our LB instances is unhealthy it is terminated and a new LB instance is launched in its place by Auto Scaling. The public IP of the terminated instance is attached to the new one, and the latest configuration the LB application was running with is downloaded to the new instance as per the settings in the user-data, after which it starts serving on that basis. No change has to be made at the DNS level. During this process there is no outage for the portal, as the other LBs remain available to serve end-user requests. Whenever there is a major change at the LB application level, the Auto Scaling configuration can be modified accordingly.
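As a rough illustration of this setup, the sketch below creates a launch configuration and an Auto Scaling group pinned to ‘n’ instances using boto3. The AMI ID, resource names, instance type, availability zones and user-data contents are placeholders, not the actual values used in this deployment, and attaching the terminated instance's public IP would be handled by separate automation not shown here.

```python
import boto3

autoscaling = boto3.client("autoscaling")

N_LB_INSTANCES = 2            # keep exactly 'n' LB instances alive
BASE_LB_AMI = "ami-xxxxxxxx"  # placeholder for the base AMI with the LB setup baked in

# User-data run at boot on every replacement instance: fetch the latest
# LB configuration from S3 and start the LB application (illustrative only).
USER_DATA = """#!/bin/bash
aws s3 cp s3://example-lb-config-bucket/lb/lb.conf /etc/lb/lb.conf
service lb-app start
"""

# Launch configuration: instance type, security groups, key pair and user-data.
autoscaling.create_launch_configuration(
    LaunchConfigurationName="lb-launch-config",
    ImageId=BASE_LB_AMI,
    InstanceType="m1.small",
    SecurityGroups=["lb-security-group"],
    KeyName="lb-keypair",
    UserData=USER_DATA,
)

# Auto Scaling group pinned to 'n' instances: when AWS marks an LB instance
# unhealthy it is terminated and a fresh one is launched automatically.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="lb-autoscaling-group",
    LaunchConfigurationName="lb-launch-config",
    MinSize=N_LB_INSTANCES,
    MaxSize=N_LB_INSTANCES,
    DesiredCapacity=N_LB_INSTANCES,
    AvailabilityZones=["us-east-1a", "us-east-1b"],
)
```

Setting MinSize, MaxSize and DesiredCapacity to the same value is what keeps the LB layer at a fixed size rather than scaling with traffic.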


