

Upgrading Rocket.Chat

This is a very quick post. Many organizations use Rocket.Chat as a Slack / IRC alternative.

One important point that is not often documented is the upgrade process. We ourselves ran into Node.js errors, Meteor errors etc. multiple times.

Here is how we can upgrade without breaking anything.

Take a backup of the installation directory:

cd /home/rocket
su -l rocket
cp -rfp Rocket.chat "backups/$(date +%F)"

Assuming that backups are kept in /home/rocket/backups and DB backups in /home/rocket/backups/db:

cd backups/db
mongodump
rm -rf /home/rocket/Rocket.chat

#get the new version

cd /home/rocket
curl -L https://rocket.chat/releases/latest/download -o rocket.chat.tgz
tar xf rocket.chat.tgz
mv bundle Rocket.chat

Install dependencies:

(cd /home/rocket/Rocket.chat/programs/server && npm install )

All set to start the new server, migrate the database etc.

cd /home/rocket/Rocket.chat
node main.js

The last step performs the upgrade and maintenance of the DB schema. This method works with most versions.

It is assumed that the user rocket is the user under which Rocket.chat is installed.

Further, the server runs as a service, which was set up with:

sudo forever-service install -s main.js -e "ROOT_URL=https://chat.agileblaze.net/ MONGO_URL=mongodb://localhost:27017/rocketchat PORT=3000" rocketchat
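
If the migration fails on startup, the backups taken above let you roll back. A minimal sketch, assuming forever-service registered an init script named rocketchat and using the backup paths from the steps above (the dated directory name is just an example):

#stop the running service before rolling back
sudo service rocketchat stop

#restore the previous application bundle from the dated backup
rm -rf /home/rocket/Rocket.chat
cp -rfp /home/rocket/backups/2016-05-01 /home/rocket/Rocket.chat

#restore the database dump taken earlier with mongodump
mongorestore --drop /home/rocket/backups/db/dump

sudo service rocketchat start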

From CAP, Puppet, and Now Chef: Evolution of Configuration Management Tools

CHEF, PUPPET & CAPISTRANO are basically used for two purposes:

Application Deployment is all of the activities that make a software system available for use.

Configuration Management is the task of tracking and controlling changes in the software. Configuration management practices include revision control and the establishment of baselines.

Let me explain how we evolved from the early days, when we were using tools like ssh and scp, to the point where we began to abstract things and equip ourselves with these sophisticated yet simple-to-use tools. Earlier we relied on tools like:

  • ssh, which served as the configuration management solution for admins.
  • scp, which acted as a secure channel for application deployment.

The need for any other tools was out of the question until things got complicated!

HISTORY

Earlier an application deployment was just a few steps away:

  1. scp app to production box
  2. restart server (optional)
  3. profit

And these software refreshes/updates were done:

  1. Manual (ssh)
  2. with shell scripts living on the servers
  3. or not done at all

CAPISTRANO
(Introduced by Jamis Buck, written in Ruby, initially for Rails projects)

Capistrano is a developer tool for deploying web applications. It is typically installed on a workstation and used to deploy code from your source code management (SCM) system to one or more servers. In its simplest form, Capistrano allows you to copy code from your source control repository (SVN or Git) to your server via SSH, and perform pre and post-deploy functions like restarting a webserver, busting caches, renaming files, running database migrations and so on.

Nice things Capistrano introduced:

  1. Automate deploys with one set of files
  2. The files don’t have to live on the production server
  3. The language (Ruby) allows some abstraction

Now the application deployment step can be coded and tested like the rest of the project. Capistrano has become the de facto way to deploy Ruby on Rails applications, and tools like Webistrano have been built on top of it to provide a graphical interface to the command line tool.
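
For a flavour of the workflow, the shell side of a typical Capistrano 2 setup looks roughly like this (a sketch; the actual deploy recipe lives in config/deploy.rb and is project specific):

[bash]
gem install capistrano      # install the tool on the workstation
cd my_rails_app
capify .                    # generates Capfile and config/deploy.rb
cap deploy:setup            # create the directory structure on the servers
cap deploy                  # check out the code from SCM and restart the app
[/bash]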

Drawback: the tool seems to be widely used but not well supported.

PUPPET

(Written in Ruby and evolved from cfengine)

Luke Kanies came up with the idea for Puppet in 2003 after getting fed up with existing server-management software in his career as a systems administrator. In 2005 he quit his job at BladeLogic, a maker of data-center management software, and spent the next 10 months writing code to automate the dozens of steps required to set up a server with the right software, storage space, and network configurations. The result: scores of templates for different kinds of servers, which let systems administrators become, in Kanies’s metaphor, puppet masters, pulling on strings to give computers particular personalities and behaviors. He formed Puppet Labs to begin consulting for some of the thousands of companies using the software; the list includes Google, Zynga, and Twitter.

Puppet is typically used in a client server formation, with all your clients talking to one or more servers. Each client contacts the servers periodically (every half an hour by default), downloads the latest configuration and makes sure it is in sync with that configuration.

The Server in Puppet is called Puppet Master.
Puppet Manifests contain all the configuration details, which are declarative as opposed to imperative.

The DSL is not Ruby: you are not writing scripts, you are writing definitions. Install order is determined through dependencies.
Puppet runs are idempotent, which makes sure the client machines match the definitions. This is good, as you can roll out changes across machines automatically just by updating the manifest on the Puppet Master.
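
To make the declarative style concrete, here is a minimal sketch of a manifest, applied locally with puppet apply (in the client/server formation described above the same definition would live on the Puppet Master):

[bash]
cat > ntp.pp <<'EOF'
# declare the desired state; Puppet derives the install order from the dependencies
package { 'ntp':
  ensure => installed,
}
service { 'ntp':
  ensure  => running,
  enable  => true,
  require => Package['ntp'],
}
EOF
puppet apply ntp.pp
[/bash]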

CHEF
(Written in Ruby, evolved from Puppet)

CHEF is an open source configuration management tool that uses pure Ruby as its domain specific language for writing system configuration (Recipes and Cookbooks).
CHEF brings a new feel with its cookery-themed naming conventions: Cookbooks (which contain the code for installing and configuring a software package, in the form of Recipes), Knife (the command line / API tool), Data bags (which act like global variables) etc.

The Chef Server holds the deployment scripts called Cookbooks and Recipes, configuration instructions for nodes, security details etc. The clients in the Chef infrastructure are called Nodes. Chef recipes are imperative as opposed to declarative. The DSL is extended Ruby, so you can write scripts as well as definitions. Install order is script order; there is no dependency checking.

CHEF & PUPPET

Chef and Puppet automatically set up and tweak the operating systems and programs that run in massive data centers and the new-age “cloud” services, designed to replace massive data centers.

Chef Recipes are more programmer friendly, as they are more easily understood by a developer than a Puppet Manifest.

And when it comes to features in comparison to Puppet, Chef is rather more intriguing.
For example, Chef’s ability to search an environment and use that information at run time is very appealing.

Knife is Chef’s powerful command line interface. Knife allows you to interact with your entire infrastructure and Chef code base. Use knife to bootstrap a server, build the scaffolding for a new cookbook, or apply a role to a set of nodes in your environment. You can use knife ssh to execute commands on any number of nodes in your environment. knife ssh + search is a very powerful combination.
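
For example, a single knife ssh invocation can search the Chef server for matching nodes and run a command on all of them (a sketch; the role name and ssh user are hypothetical):

[bash]
# restart Apache on every node that has the webserver role
knife ssh "role:webserver" "sudo /etc/init.d/apache2 restart" -x ubuntu
[/bash]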

Defining dependencies in Puppet was overly verbose and cumbersome. With Chef, order matters, and dependencies are met if we specify them in the proper order.

“We can deploy additional software applications on virtual machine instances without dealing with the overhead of doing everything manually,” Stowe explains. “We can do it with code — recipes that define how various applications and libraries are deployed and configured.” According to Stowe, creating and deploying a new software image now takes minutes or hours rather than hours or weeks. They call this technique DevOps because it applies traditional programming techniques to system administration tasks. “It’s just treating IT operations as a software development problem,” says Stowe, CEO of Cycle Computing, a Greenwich, Connecticut-based start-up that uses Chef to manage the software underpinning the online “supercomputing” service it offers to big businesses and academic outfits. “Before this, there were ways of configuring servers and managing them, but DevOps has gotten it right.”

Let’s CATEGORIZE

Let me help you see which buckets the above tools, and other similar tools, fall into…

  • App Deploy: Capistrano, ControlTier, Fabric, Func, mCollective
  • SysConfig: Chef, Puppet, cfengine, SmartFrog, Bcfg2
  • Cloud/VM: Xen, LXC, OpenVZ, Eucalyptus, KVM
  • OS Install: Kickstart, Jumpstart, Cobbler, OpenQRM, xCAT

Achieving HIPAA on AWS / EC2 with Windows Server 2008

When you are creating a HIPAA compliant system on a cloud service like AWS / EC2 / S3, you have to carefully examine the different levels of data security provided by the cloud service provider.

At a minimum level, the following should be ascertained:

i) Where is the cloud provider’s data center physically located? In some countries, regulations restrict Protected Health Information ( PHI ) from being stored on servers located outside of the country.

ii) Is the cloud provider contractually obligated to protect the customer’s data at the same level as the customer’s own internal policies?

iii) Cloud provider’s Backup and Recovery policies

iv) What are the provider’s policies on data handling/management and access control? Do adequate controls exist to prevent impermissible copying or removal of customer data by the provider, or by unauthorized employees of the company?

v) What happens to data when it is deleted? This is very important, as customers will be storing data on virtual machines. Also, what happens to the hardware when it is replaced?

In this blog we are only looking at the security measures to be taken by the application developer to make sure that a web application built on AWS / EC2 using Windows Server 2008 / .NET / MSSQL / IIS 7 is HIPAA compliant. The basic requirement is to encrypt all data at rest and in transit.

1. Encrypting Data in transit between the user ( clients ) and the server ( Webserver )

SSL over HTTP ( HTTPS )

Steps used to Implement SSL on IIS are the following:

[bash]
1. Open IIS Manager.
2. Click on the server name.
3. Double-click the “Server Certificates” button in the “Security” section.
4. Click “Create Self-Signed Certificate” in the Actions pane.
5. Enter a certificate name and click OK.
6. Select the name of the server to which the certificate was installed.
7. From the “Actions” menu (on the right), click on “Bindings”. This will open the “Site Bindings” window.
8. In the “Site Bindings” window, click “Add”. This will open the “Add Site Binding” window.
9. Under “Type” choose https. The IP address should be the IP address of the site, and the port over which traffic will be secured by SSL is usually 443. The “SSL Certificate” field should specify the certificate that was installed in step 5.
10. Click “OK”. SSL is now installed.
[/bash]

2. Encrypting Data at Rest ( Document Root )

EFS with IIS

You can use EFS ( Encrypting File System ) in Windows Server 2008 to automatically encrypt your data when it is stored on the hard disk.

Encrypt a Folder:

[bash]
1. Open Windows Explorer.
2. Right-click the folder that you want to encrypt , and then click Properties.
3. On the General tab, click Advanced.
4. Under Compress or Encrypt attributes, select the Encrypt contents to secure data check box and then click OK.
5. Click OK.
6. In the Confirm Attribute Changes dialog box that appears, use one of the following steps:
i) If you want to encrypt only the folder, click Apply changes to this folder only, and then click OK.
ii) If you want to encrypt the existing folder contents along with the folder, click Apply changes to this folder, subfolders and files, and then click OK.
[/bash]

The folder becomes an encrypted folder. New files that you create in this folder are automatically encrypted.
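
The same encryption can also be applied from the command line with the built-in cipher.exe utility, which is handy when several servers have to be configured; the path below is just an example for an IIS document root:

[bash]
REM encrypt the folder and everything under it using EFS
cipher /E /S:C:\inetpub\wwwroot
[/bash]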


3. Encrypting MSSQL Database ( Data at Rest )

TDE ( Transparent Data Encryption )

TDE is a feature built into MSSQL Server 2008 Enterprise Edition. Data is encrypted before it is written to disk and decrypted when it is read from disk. The “transparent” aspect of TDE is that the encryption is performed by the database engine and SQL Server clients are completely unaware of it. There is absolutely no code that needs to be written to perform the encryption and decryption, so there is no need to change any code ( database queries ) in the application.

STEPS

i) Create a Master Key

A master key is a symmetric key that is used to create certificates and asymmetric keys. Execute the following script to create a master key:

[bash]
USE master;
CREATE MASTER KEY
ENCRYPTION BY PASSWORD = 'Pass@word1';
GO
[/bash]

ii)Create Certificate

Certificates can be used to create symmetric keys for data encryption or to encrypt the data directly. Execute the following script to create a certificate:

[bash]
CREATE CERTIFICATE TDECert
WITH SUBJECT = 'TDE Certificate'
GO
[/bash]

iii) Create a Database Encryption Key and Protect it by the Certificate

[bash]
1. Go to Object Explorer in the left pane of MSSQL Server Management Studio.
2. Right-click the database on which TDE is required.
3. Click Tasks and navigate to Manage Database Encryption.
4. Select the encryption algorithm (AES 128/192/256) and select the certificate you have created.
5. Then mark the check box “Set Database Encryption On”.
[/bash]

You can query the is_encrypted column in sys.databases to determine whether TDE is enabled for a particular database.

[bash]
SELECT [name], is_encrypted FROM sys.databases
GO
[/bash]


4. Encrypting Data in transit between the Webserver and the MSSQL Database

MSSQL secure connection using SSL

i) Creating a self-signed cert using makecert
[bash]
makecert -r -pe -n "CN=YOUR_SERVER_FQDN" -b 01/01/2000 -e 01/01/2036 -eku 1.3.6.1.5.5.7.3.1 -ss my -sr localMachine -sky exchange -sp "Microsoft RSA SChannel Cryptographic Provider" -sy 12 c:\test.cer
[/bash]

ii) Install this cert

[bash]
Copy c:\test.cer to your client machine and run c:\test.cer from a command window, then select “Install Certificate” -> click “Next” -> select “Place all certificates in the following store” -> click “Browse” -> select “Trusted Root Certification Authorities” -> click OK and Finish.
[/bash]

iii) Open SQL Server Configuration Manager

[bash]
Expand SQL Server Network Configuration, right-click “Protocols for MSSQLSERVER”, then click “Properties”. On the “Certificate” tab select the certificate just installed. On the “Flags” tab, set “ForceEncryption” to YES.
[/bash]

Now SSL is ready to be used on the server. The only modification needed in the .NET code is the connection string. It will be:

[bash]
connectionString="Data Source=localhost;Initial Catalog=mydb;User ID=user1;Password=pas@123;Encrypt=true;TrustServerCertificate=true"
[/bash]

SSL for Tomcat on AWS EC2

To launch an AWS/EC2 instance, first set up a security group to specify what network traffic is allowed to reach the instance. Then select an AMI and launch an instance from it. Create a volume in the same zone as the instance and attach it. Format the device and mount it on a directory. After that, follow the steps below to set up SSL for Tomcat:

1. Tomcat needs Java, so create a directory to hold the Java binary file.

[shell] mkdir /usr/java
cd /usr/java [/shell]

2. Download the JDK binary file (jdk-x-linux-ix.bin) from http://www.oracle.com/technetwork/java/archive-139210.html

3. Execute the Binary file

[shell] /usr/java/jdk-x-linux-ix.bin [/shell]

Now we have Java on our device. Next, download Tomcat and install it by following these instructions:

1. Create a directory to save the tomcat

[shell] mkdir /usr/tomcat
cd /usr/tomcat [/shell]

2. Download the Tomcat distribution (apache-tomcat-x.tar.gz) from http://apache.hoxt.com/tomcat/tomcat-6/v6.0.32/bin/

3. Extract that file

[shell] tar -zxvf apache-tomcat-x.tar.gz [/shell]

4. Edit the catalina.sh file

[shell] vim /usr/tomcat/apache-tomcat-x/bin/catalina.sh [/shell]

[shell]

#** Add at the top **
JAVA_HOME=/usr/java/jdk1.x.x_x

[/shell]

save and exit
5. Start the tomcat

[shell] /usr/tomcat/apache-tomcat-x/bin/startup.sh [/shell]

6. We can watch the logs with the following command

[shell]tail -f /usr/tomcat/apache-tomcat-x/logs/catalina.out [/shell]

7. Open a browser and enter the URL http://localhost:8080 (Tomcat’s default HTTP port).
Now we can see the Tomcat index page

8. To stop the tomcat

[shell]/usr/tomcat/apache-tomcat-x/bin/shutdown.sh [/shell]

Now configure the SSL Certificate for tomcat. When you choose to activate SSL on your web server you will be prompted to complete a number of questions about the identity of your website and your company. Your web server then creates two cryptographic keys – a Private Key and a Public Key. The Public Key does not need to be secret and is placed into a Certificate Signing Request (CSR) – a data file also containing your details.

Create a self signed certificate authority (CA) and keystore.

1. Make a directory to hold the certs and keystore. This might be something like:

[shell] mkdir /usr/tomcat/ssl
cd /usr/tomcat/ssl [/shell]

2. Generate a private key for the server and remember the pass phrase for the next steps

[shell]openssl genrsa -des3 -out server.key 1024[/shell]

Generating RSA private key, 1024 bit long modulus
…………………..++++++
…++++++
e is 65537 (0x10001)
Enter pass phrase for server.key:
Verifying – Enter pass phrase for server.key:

3. Generate a CSR (Certificate Signing Request). Provide the requested data after executing this command

[shell]openssl req -new -key server.key -out server.csr[/shell]

Enter pass phrase for server.key:
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter ‘.’, the field will be left blank.
—–
Country Name (2 letter code) [GB]:
State or Province Name (full name) [Berkshire]:
Locality Name (eg, city) [Newbury]:
Organization Name (eg, company) [My Company Ltd]:
Organizational Unit Name (eg, section) []:
Common Name (eg, your name or your server’s hostname) []:
Email Address []:

Please enter the following ‘extra’ attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:

4. Remove the passphrase from the key

[shell]cp server.key server.key.org
openssl rsa -in server.key.org -out server.key[/shell]

Enter pass phrase for server.key.org:
writing RSA key

5. Generate the self signed certificate

[shell]openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt[/shell]

Signature ok
subject=/C=GB/ST=Berkshire/L=Newbury/O=My Company Ltd
Getting Private key

You should then submit the CSR. During the SSL Certificate application process, the Certification Authority will validate your details and issue an SSL Certificate containing your details and allowing you to use SSL. Typically an SSL Certificate will contain your domain name, your company name, your address, your city, your state and your country. It will also contain the expiration date of the Certificate and details of the Certification Authority responsible for the issuance of the Certificate.

Create a certificate for tomcat and add both to the keystore

1. Change the path to ssl

[shell]cd /usr/tomcat/ssl[/shell]

2. Create a keypair for ‘tomcat’

[shell]keytool -genkey -alias tom -keyalg RSA -keystore tom.ks[/shell]

Enter keystore password:
Re-enter new password:
What is your first and last name?
[Unknown]:
What is the name of your organizational unit?
[Unknown]:
What is the name of your organization?
[Unknown]:
What is the name of your City or Locality?
[Unknown]:
What is the name of your State or Province?
[Unknown]:
What is the two-letter country code for this unit?
[Unknown]:

Is CN=Unknown, OU=Unknown, O=Unknown, L=Unknown, ST=Unknown, C=Unknown correct?
[no]: yes

Enter key password for <tom>
(RETURN if same as keystore password):
Re-enter new password:

3. Generate a CSR (Certificate Signing Request) for tomcat

[shell]keytool -keystore tom.ks -alias tom -certreq -file tom.csr[/shell]

Enter keystore password:

4. create unique serial number

[shell]echo 02 > serial.txt[/shell]

5. Sign the tomcat CSR

[shell]openssl x509 -CA server.crt -CAkey server.key -CAserial serial.txt -req -in tom.csr -out tom.cer -days 365[/shell]

Signature ok
subject=/C=Unknown/ST=Unknown/L=Unknown/O=Unknown/OU=Unknown/CN=Unknown
Getting CA Private Key

6. Import the server CA certificate into the keystore

[shell]keytool -import -alias serverCA -file server.crt -keystore tom.ks[/shell]

Enter keystore password:
Owner: O=My Company Ltd, L=Newbury, ST=Berkshire, C=GB
Issuer: O=My Company Ltd, L=Newbury, ST=Berkshire, C=GB
Serial number: ee13c90cb351968b
Valid from: Thu May 19 02:12:51 EDT 2011 until: Fri May 18 02:12:51 EDT 2012
Certificate fingerprints:
MD5: EE:F0:69:01:4D:D2:DA:A2:4E:88:EF:DC:A8:3F:A9:00
SHA1: 47:97:72:EF:30:02:F7:82:BE:CD:CA:F5:CE:4E:ED:89:73:23:4E:24
Signature algorithm name: SHA1withRSA
Version: 1
Trust this certificate? [no]: yes
Certificate was added to keystore

7. Add the tomcat certificate to the keystore

[shell]keytool -import -alias tom -file tom.cer -keystore tom.ks[/shell]

Enter keystore password:
Certificate reply was installed in keystore

To configure a secure (SSL) HTTP connector for Tomcat, verify that it is activated in the $TOMCAT_HOME/conf/server.xml file. Edit this file and add the following lines.

Tomcat configuration

1. Edit the given portion of the Tomcat configuration file and change the port to 80

[shell]vim /usr/tomcat/apache-tomcat-6.0.13/conf/server.xml[/shell]

[shell]
<!-- existing connector -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443" />

<!-- changed to port 80 -->
<Connector port="80" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443" />

[/shell]

2. Add the following portion to server.xml and put your keystore password in the keystorePass attribute

[shell]
<Connector port="443" protocol="HTTP/1.1" SSLEnabled="true"
           maxThreads="150" scheme="https" secure="true"
           keystoreFile="tom.ks"
           keystorePass="password"
           clientAuth="false" sslProtocol="TLS" />
[/shell]

When you start Tomcat, your web server will match your issued SSL Certificate to your Private Key. It will then be able to establish an encrypted link between the website and your customer’s web browser.

Start the tomcat with SSL Certificate

1. Restart tomcat

[shell]/usr/tomcat/apache-tomcat-6.0.13/bin/shutdown.sh
/usr/tomcat/apache-tomcat-6.0.13/bin/startup.sh[/shell]

2. Go to https://<public-DNS-name>:443/

Your browser will then show a security warning because the certificate is self-signed; accept it and you can reach Tomcat over HTTPS with your certificate. When a browser connects to a secure site it retrieves the site’s SSL Certificate and checks that it has not expired, that it was issued by a Certification Authority the browser trusts, and that it is being used by the website for which it was issued. If it fails any one of these checks, the browser displays a warning to the end user letting them know that the site is not secured by SSL.
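
You can also verify the connector from the command line; openssl will print the certificate chain the server presents (replace the host name with your instance’s public DNS name):

[shell]openssl s_client -connect your-public-dns-name:443 -showcerts[/shell]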

You are Done !!!

Creating phusion passenger AMI on Amazon EC2

Phusion Passenger is an Apache and Nginx module for deploying Ruby web applications (such as those built on the Ruby on Rails web framework). Phusion Passenger works on any POSIX-compliant operating system, which means practically any operating system except Microsoft Windows.

Here we are not going to discuss Ruby on Rails applications in much detail, as our aim is to create an AMI of an Ubuntu AWS instance from which we can launch instances that come pre-built for developing and deploying Rails applications.

Install apache2 web-server

[bash]
sudo apt-get install apache2   # by default its DocumentRoot is /var/www/
[/bash]

Install mysql-server and mysql-client ( to support Rails applications that access a database )

[bash]sudo apt-get install mysql-server mysql-client[/bash]

Install Ruby from repository

The default ruby1.8 package is missing some important files, so install ruby1.8-dev. Otherwise, at some stage when using gem install, it may end up with “Error: Failed to build gem native extensions”.

[bash]sudo apt-get install ruby1.8-dev[/bash]

 

Install RubyGems

Install rubygems >= 1.3.6

The package can be downloaded and installed as follows:

[bash]
wget http://rubyforge.org/frs/download.php/70696/rubygems-1.3.7.tgz
tar xvzf rubygems-1.3.7.tgz
cd rubygems-1.3.7
sudo ruby setup.rb
sudo ln -s /usr/bin/gem1.8 /usr/bin/gem
[/bash]

Install Rails via RubyGems

Once RubyGems is installed, use it to install Rails:

[bash]sudo gem install rails[/bash]

Installing Phusion Passenger

 

There are three ways to install Phusion Passenger:

1. By installing the Phusion Passenger gem.

2. By downloading the source tarball from the Phusion Passenger website (passenger-x.x.x.tar.gz).

3. By installing a native Linux package (e.g. a Debian package)

Before installing, you will probably need to switch to the root user first. The Phusion Passenger installer will attempt to automatically detect Apache, and compile Phusion Passenger against that Apache version. It does this by looking for the apxs or apxs2 command in the PATH environment variable.

An Apache installed in a non-standard location can prevent the Phusion Passenger installer from detecting it. To solve this, become the root user and export the path of apxs.

The easiest way to install Passenger is via the gem.

Install RubyGems and then run the Phusion Passenger installer by typing the following commands as root.

1. Open a terminal, and type:

[bash]gem install passenger[/bash]

2. Type:

[bash]passenger-install-apache2-module[/bash]

and follow the instructions from the installer.

The installer will:

1. Install the Apache2 module.

2. Instruct you how to configure Apache.

3. Inform you how to deploy a Ruby on Rails application.

If anything goes wrong, this installer will advise you on how to solve any problems.

The installer will ask you to add the following lines to the apache2.conf file.

[bash]
LoadModule passenger_module /usr/lib/ruby/gems/1.8/gems/passenger-3.0.0/ext/apache2/mod_passenger.so
PassengerRoot /usr/lib/ruby/gems/1.8/gems/passenger-3.0.0
PassengerRuby /usr/bin/ruby1.8
[/bash]


Now consider that you have a Rails application in the directory /var/www/RFP_tool/. Add the following VirtualHost entry to your Apache configuration file:

[bash]
<VirtualHost *:80>
    ServerName www.yoursite.com
    DocumentRoot /var/www/RFP_tool/public
    <Directory /var/www/RFP_tool/public>
        AllowOverride all
        Options -MultiViews
    </Directory>
</VirtualHost>
[/bash]

Restart your apache server.
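
On Ubuntu that is typically:

[bash]sudo /etc/init.d/apache2 restart[/bash]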

Phusion Passenger installation is finished.

Installation via the source tarball

Extract the tarball to whatever location you prefer

[bash]
cd /usr/local/passenger/
tar xzvf passenger-x.x.x.tar.gz
/usr/local/passenger/passenger-x.x.x/bin/passenger-install-apache2-module
[/bash]

Please follow the instructions given by the installer. Do not remove the passenger-x.x.x folder after installation. Furthermore, the passenger-x.x.x folder must be accessible by Apache.

CREATING AN AMI OF AN EC2 INSTANCE

First you will have to download ec2-api-tools.zip from

http://www.amazon.com/gp/redirect.html/ref=aws_rc_ec2tools?location=http://s3.amazonaws.com/ec2-downloads/ec2-api-tools.zip&token=A80325AA4DAB186C80828ED5138633E3F49160D9

[bash]
unzip ec2-api-tools.zip
mkdir ~/ec2
cp -rf ec2-api-tools/* ~/ec2
[/bash]

Upload your aws certificate and private-key to /mnt of the instance.

 

Then add the following to ~/.bashrc

[bash]
export EC2_HOME=~/ec2
export PATH=$PATH:$EC2_HOME/bin
export EC2_PRIVATE_KEY=/mnt/pk-xxxxxxxxxxxxxxxxxxx.pem
export EC2_CERT=/mnt/cert-xxxxxxxxxxxxxxxx.pem
export JAVA_HOME=/usr/local/java/   # your JAVA_HOME here
export PATH=~/ec2/bin:$PATH
[/bash]

If your EC2 instance is an EBS-backed one, you can use the following command to create an AMI

[bash]ec2-create-image -n your-image-name instance-id[/bash]

If your instance is an s3-backed ( instance store ) one, you will have to install ec2-ami-tools first. It can be downloaded from

 

http://s3.amazonaws.com/ec2-downloads/ec2-ami-tools.zip

[bash]
unzip ec2-ami-tools.zip
cp ec2-ami-tools-x.x-xxxxx/bin/* ~/ec2/bin
[/bash]

Then add the AMI tools home to ~/.bashrc as well:

[bash]
export EC2_AMITOOL_HOME=~/ec2/ec2-ami-tools-1.3-56066/
[/bash]

Now you can use the following commands to create an AMI of your s3-backed instance

[bash]
mkdir /mnt/bundle-vol/
ec2-bundle-vol -u USER-ID -c /mnt/cert-xxxxxxx.pem -k /mnt/pk-xxxx.pem -d /mnt/bundle-vol
[/bash]

( Login to your AWS account; your USER-ID is available from Account–> Security Credentials )

[bash]
ec2-upload-bundle -b s3-bucket-name -a aws-access-key -s aws-secret-key -d /mnt/bundle-vol/ -m /mnt/bundle-vol/image.manifest.xml
ec2-register -K /mnt/pk-xxxxxx.pem -C /mnt/cert-xxxxxxx.pem s3-bucket-name/image.manifest.xml -n name-of-the-image
[/bash]

To see the created images

[bash]ec2-describe-images [/bash]

Deploying a load balanced e-commerce portal in Amazon EC2

Update: NFS should not be used as that will be a SPOF. One should use S3 or other object stores. An alternative could be multi-node GlusterFS if someone needs volumes shared across nodes.

When building the infrastructure for an eCommerce portal on the Cloud, it is important that it is available all the time, that it is fail-safe against outages like the ones we had recently in the AWS EU and US East regions, and that it survives hardware failures or other issues like bugs in the system or deployment errors. We built an infrastructure on the AWS Cloud that addresses all these issues for a LAMP stack using various AWS Cloud services like EC2, S3, RDS, EBS etc. It is described in detail below:

 

Achieving High Availability & Fail over across Datacenters

Elastic Load Balancer (ELB)

The Elastic Loadbalancer ( ELB ) service provided by AWS tries to achieve the following:

(i) Spans across Datacenters: load balance traffic across multiple datacenters ( AZs ), thus providing high availability even if one datacenter goes down. So you should always make sure that when you launch instances under an ELB, you launch them in different Availability Zones. You can also launch instances in the same AZ, but by default ELB will redirect requests across multiple AZs in a round-robin way.

(ii) Failover: ELB will periodically monitor the health of the instances, and if any of the instances or the monitored service ( e.g. HTTP ) goes down, ELB will stop redirecting requests to that instance and all requests will be redirected to the remaining instances registered under the ELB. When the instance comes back up, ELB will again start redirecting requests to it.

(iii) Handling the root domain ( apex / main domain ) and subdomains: ELB can load balance only those requests coming to an alias / subdomain ( www ). It cannot handle requests coming to the root domain. This is because when you configure DNS for enabling ELB, you can only set a CNAME to the ELB for subdomains. There are 2 reasons for this. One is that when you configure an ELB, you only get a public DNS name for the ELB, like the following, instead of a public IP.

[bash]Test-1736333854.us-east-1.elb.amazonaws.com [/bash]

This is because AWS changes the public IP of the ELB periodically to provide scalability for the ELB itself. The other reason why you cannot redirect main domain requests to the ELB is that the DNS protocol itself restricts the usage of a CNAME or anything other than an “A” record for a root domain. So you cannot CNAME a root domain to the ELB DNS name.

So for serving root domain requests with ELB, there are only workarounds, like those mentioned below:

a) We can assign an Elastic IP to an instance under the ELB. But what if this instance goes down? Set up heartbeat to switch the EIP? This is a somewhat complicated setup, as switching an EIP between instances across AZs takes time.

b) The other option is to have the root domain point to the IP addresses of the destination by configuring one or more “A” records (address records) for the root domain. You can do that if you know the destination IP addresses are fixed, such as if you are using EC2 Elastic IP addresses. We wouldn’t recommend this, because IP addresses will be cached at the client end for a long time even if you set a low TTL value at the nameservers. This is because the TTL value can also be configured at the client end, overriding the TTL value provided by the nameserver of the domain; e.g. with nscd ( the name service caching daemon ) you can set the TTL value manually in its configuration file.

c) You can keep a separate web server not under ELB with a Redirect Rule for redirecting root domain requests to www. You should make sure that this webserver is highly available as well.

d) A better solution is to go for Domain Registrars ( DNS service providers ) who provide this feature of redirecting root domain requests to www. So this can be handled at the DNS itself. The DNS service provided by AWS “Route53” can be used for this ‘Zone apex’ ( root domain ) redirection.

(iv) SSL Termination

There is support for “SSL termination” in ELB, which means you can use ELB to load balance HTTPS requests too. You just need to buy the SSL certificate and simply upload it to ELB. ELB will redirect all the HTTPS requests to the backend servers. So you can make an eCommerce portal highly secure and highly available with ELB.

(v) Persistent Session

You can enable Sticky Sessions with ELB, but the problem is that users will be logged out if any of the instances / webservers goes down: ELB will redirect their subsequent requests to a different instance, which will prompt them to log in again. To tackle this there were a few options we considered:
a) set up a distributed, failover memcached cluster, or
b) use RDS for storing sessions.

We went for RDS as our Session Management store since RDS is an excellent choice for Database Administration as well if you are using MySQL as the Database.

Your application must be configured to write session data to an RDS database. So when an instance / webserver goes down and the ELB redirects the user’s request to a different instance, the user will not be asked to log in again, as all servers read session data from the same place, that is, RDS. The user won’t notice anything at all, even though they’ve now started talking to another server. We recommend using a Multi-AZ RDS instance and writing session data into this. So if one of your EC2 instances goes down, the other instances will still have access to the RDS database, and likewise if an RDS zone goes down, Amazon fails it over to the second AZ internally, transparently to you and your application.

So the easiest and most reliable way to share sessions for failover on a multi-server environment is to use RDS, since Amazon handle the database layer’s failover for you.

So basically you can achieve two things by using RDS – Session management and Database Management.

 

AutoScaling

The Autoscaling service provided by AWS allows you to scale horizontally up / down based on CPU usage, RAM, disk I/O etc.

Ideally you should use a Base AMI with Autoscaling that will pull the required packages from a Centralized location like Chef Platform and code from the Version Control System or S3. You can write a startup script to run on instance bootup for this purpose. So when Autoscaling launches a new instance it will pull all the latest updated versions of the packages, code and also any other required custom configurations from a centralized location. This will also make it easier to manage all the configuration details, code updates from a centralized location using tools like Chef Platform, Version Control System or S3 respectively.
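
A very small boot-time script along these lines gives the idea; this is only a sketch, assuming chef-client is pre-installed on the base AMI and the code is mirrored to an S3 bucket (the bucket name and paths are hypothetical):

[bash]
#!/bin/bash
# run once at boot (e.g. from rc.local) on the base AMI

# pull and apply the latest configuration from the Chef server
chef-client -j /etc/chef/first-boot.json

# fetch the latest release of the application code from S3
s3cmd get --recursive s3://example-bucket/releases/current/ /var/www/app/

# restart the web tier so it picks up the new code
/etc/init.d/apache2 restart
[/bash]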

Apart from centralised configuration / code management, the reason for using a base AMI with Autoscaling is that it is not possible to change the AMI configured with the Autoscaling service dynamically.

 

Storage for Application Files

We came across a lot of options for storing the application files. However, you have to consider your priorities before you select a storage service for the code. The following are the points to consider for your application file storage system:

(i) Latency: all shared storage systems like NFS / GlusterFS / EBS / S3 etc. have latency issues when compared to the instance store ( ephemeral storage ).

(ii) High availability: if you are using a shared storage service like NFS, it must never go down if the entire system is to be available all the time.

(iii) Access to the code: how to get the latest code during the incremental roll out of a new instance; with shared storage it becomes difficult to give a newly launched instance access to the shared storage system.

We went for the instance store / ephemeral store, which gives you better I/O performance. You can keep your own highly available SVN repository or go for publicly available version control systems like GitHub. At the same time you can also keep a copy in S3 and sync to it whenever there is a code update. This will make it more redundant.

The problem with using a shared storage service like NFS / GlusterFS with EBS / S3 is that it becomes difficult to avoid a single point of failure for the NFS / GlusterFS service. But if your site doesn’t get many hits and your priority is redundancy, you can mount S3 as a filesystem using FUSE-based tools like s3fs and use that as shared storage with NFS for multiple instances. The problem with S3 is that it is not intended to be used as a filesystem, and there have been issues reported with speed and caching. Or you can use an EBS volume for code storage if you have only a single instance serving the requests. Even using NFS with EBS volumes ( with frequent snapshots to S3 ) gives better performance than using S3 as shared storage for files.

Not only does the instance store give you better performance, error rates are also very rare, whereas with EBS volumes errors are reported frequently. The recent outages in the AWS EU & US East regions showed that the downtime was made worse by the increased time taken to recover from EBS errors.

 

Code Deployment

For automating code deployment, you can configure deployment tools like Capistrano. This becomes very handy when you have multiple servers to update simultaneously. Capistrano uses the Ruby language and is built for Ruby code deployment, but with small changes you can automate the deployment of PHP / Perl / Python / Java based applications.

chef-deploy is another tool that comes with chef for automating code deployment. Continuous Integration tools like Hudson / Cruise Control are excellent tools when you want to automate the Build, Deployment, Test and Rollback process.

For code deployment, we follow a Release Management process where we keep a staging environment that is an exact replica of the production environment. We push code to the production environment only when it’s been completely tested in the staging environment and approved by the Release Manager. This will further reduce the errors / bugs / and downtime time caused due to the code release.

 

Database Server

We went for RDS across AZs for high availability. AWS will take care of redundancy, performance optimisation, scalability and backup. You can avoid the hassle of managing a database server by using RDS. RDS is also an excellent distributed, highly available session management system. You can also take regular backups from RDS and keep them in S3.

You can also use a Master–Slave replication setup instead of RDS. This is also a good option for achieving high availability for the database server. The challenging part will be to manually configure failover for both master and slave servers, achieve scalability with traffic, configure backups and optimize performance with increasing load. With RDS, all of these are managed by AWS.

 

Log handling

Keep all the important logs like application logs, syslogs, the SSH log etc. on an EBS volume. You can either schedule regular snapshots of this EBS volume to S3, or sync the log files to an S3 bucket periodically using tools like s3sync.
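
For example, a cron entry on each instance can push the logs to a bucket every hour (a sketch; the bucket name is hypothetical and s3cmd is just one of several tools that can do this):

[bash]
# sync application and system logs to S3 once an hour
0 * * * * s3cmd sync /var/log/ s3://example-log-bucket/$(hostname)/ >/dev/null 2>&1
[/bash]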

 

Configuration Management

If you have more than one server, are planning to scale up in future, or would like to automate a lot of administration / coding work, you should definitely use one of the freely available open source configuration management tools like Chef / Puppet / Cfengine.

Chef is newer and has default support for AWS / EC2. We use Chef extensively for managing our infrastructure in AWS. Chef provides a lot of readily available cookbooks ( recipes / roles ) for LAMP, Java apps, Cassandra, Hadoop, Nagios etc. which can be used as-is ( or with minimal customization ) to automate infrastructure setup and configuration. Chef also comes with a tool called chef-deploy for automating the deployment of code.

So using Chef along with tools like Hudson / Cruisecontrol, you can automate the entire setup from infrastructure setup to configuration management to building, deployment and testing of your application.

 

Performance

To improve performance you can implement the following:

(i) Use caching mechanisms like Memcached ( for DB scaling ) / aiCache / Varnish.

(ii) A CDN ( Content Delivery Network ) is a must if you want to provide better end-user response times. There are a lot of CDN providers, but we recommend AWS CloudFront or Akamai for serving static files and images. For start-ups and small businesses a CDN might be costly, but as your target audience grows larger and becomes more global, a CDN is necessary to achieve fast response times.

 

Monitoring & Alert

For monitoring, go for open source monitoring tools along with a SaaS based monitoring application.

(i) There are a lot of free and open source options available on the market, such as Nagios, Zenoss, Zabbix etc. This can be automated with Chef in such a way that when a new server is launched into the cluster, it is automatically added to the Nagios list of monitored servers.

(ii) You can also use excellent SaaS based monitoring apps like Pingdom, mon.itor.us, site24x7.com etc. for monitoring and alerting via email, SMS or Twitter.

(iii) Use custom scripts or tools like Munin & Monit for monitoring services and restarting them if they crash.

 

Backup

You can keep copies of the code in an S3 bucket and sync them with tools like s3sync on every update. For DB backup, in addition to the automated RDS backups, you can take periodic standard DB backups using mysqldump and store them in an S3 bucket. You can also use EBS volumes for keeping replicas of the code and DB backups, with periodic snapshots to S3.
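
A simple nightly dump along these lines covers the mysqldump part (a sketch; the RDS endpoint, credentials and bucket name are placeholders):

[bash]
# dump the database from RDS, compress it and push it to S3
mysqldump -h mydb.xxxxxx.us-east-1.rds.amazonaws.com -u admin -p'secret' mydb \
  | gzip > /tmp/mydb-$(date +%F).sql.gz
s3cmd put /tmp/mydb-$(date +%F).sql.gz s3://example-backup-bucket/db/
[/bash]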

An important thing to note about S3 is that it is only a highly available storage system; it is not backed up automatically. That means if you delete anything from S3, it is gone forever unless you have kept multiple copies in S3 yourself. So make sure that you have enough backups available in S3.