

MapReduce using Hadoop and Pig/Hive on an AWS EC2 Hadoop cluster

This article discusses running MapReduce jobs using the Apache tools Pig and Hive. Before we can process the data, we need to upload the files to be processed to HDFS/S3. We recommend uploading to HDFS and keeping copies of the important files in S3 for backup. S3 is easily accessible from the command line using tools like s3cmd, while HDFS is a fault-tolerant cluster filesystem that provides enough protection for your data against instance failures.

Mapreduce

MapReduce is a programming model and an associated implementation for processing and generating large data sets. We can specify a map function that processes a key/value pair to generate a set of intermediate key/value pairs, and a reduce function that merges all intermediate values associated with the same intermediate key.

At a high level, four entities are involved when Hadoop runs a job:

  1. The client, which submits the MapReduce job.
  2. The jobtracker, which coordinates the job run. The jobtracker is a Java application whose main class is JobTracker.
  3. The tasktrackers, which run the tasks that the job has been split into. Tasktrackers are Java applications whose main class is TaskTracker.
  4. The distributed filesystem (normally HDFS), which is used for sharing job files between the other entities.

Hadoop Map/Reduce is very powerful, but:

• It requires a Java programmer.

• Jobs are harder to write and time consuming.

• Jobs are difficult to update frequently.

A solution is to run jobs using Pig (Pig Latin) or Hive (HiveQL).

Pig

• An engine for executing programs on top of Hadoop

• It provides a language, Pig Latin, to specify these programs

Pig has two main parts:

– A high-level language to express data analysis

– A compiler to generate MapReduce programs (which run on top of Hadoop)

Pig Latin is the name of the language in which Pig scripts are written; it is a high-level language. Pig also provides an interactive shell for executing simple commands, called Grunt. Pig runs on top of Hadoop: it reads the data to be processed from the HDFS filesystem and submits the jobs to the Hadoop MapReduce system.

A sample MapReduce job (a kind of "Hello World" program) using Pig is given below.

It is assumed that you are on one of the machines of a Hadoop cluster, with the NameNode/DataNode as well as JobTracker/TaskTracker services set up.

We will be executing Pig Latin commands using the Grunt shell. Switch to the hadoop user first.

Consider we have a file 'users' on our local filesystem which contains the data to be processed. First we have to upload it to HDFS.
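A hedged sketch of the upload step (paths and the S3 bucket name are illustrative):

[bash]
hadoop fs -put users /user/hadoop/users          # copy the file into HDFS
s3cmd put users s3://my-backup-bucket/users      # optional: keep a backup copy in S3
[/bash]

With the data in place, start Pig in MapReduce mode: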

# pig -x mapreduce

This command will take you to the Grunt shell. Pig Latin statements are generally organized in the following manner: a LOAD statement reads data from the filesystem, a series of statements then process the data, and a STORE statement writes output back to the filesystem. A DUMP statement displays output on the screen instead.

grunt> Users = load 'users' as (name, age);

grunt> Fltrd = filter Users by age >= 18 and age <= 25;

grunt> Pages = load 'pages' as (user, url);

grunt> Jnd = join Fltrd by name, Pages by user;

grunt> Grpd = group Jnd by url;

grunt> Smmd = foreach Grpd generate group, COUNT(Jnd) as clicks;

grunt> Srtd = order Smmd by clicks desc;

grunt> Top5 = limit Srtd 5;

grunt> store Top5 into 'top5sites';

We can also view the progress of the job through the web interface http://<ipaddress of jobtracker machine>:50030.
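The same pipeline can also be saved in a script file and run non-interactively; a minimal hedged sketch (the script name is illustrative, the output path comes from the STORE statement above):

[bash]
pig -x mapreduce top5sites.pig           # run the saved Pig Latin script as a batch job
hadoop fs -cat top5sites/part-*          # inspect the output written by STORE
[/bash]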

Tools like PigPen (an Eclipse plugin) are available that help us create Pig scripts, test them using the example generator and then submit them to a Hadoop cluster.

There is another tool called Oozie, a server-based workflow engine specialized in running workflow jobs whose actions run Hadoop Map/Reduce and Pig jobs.

Pig tasks can be modeled as a workflow in Oozie. Workflows are deployed to the Oozie server using a command-line utility; once deployed, they can be started and manipulated as necessary using the same utility. Once a workflow is started, Oozie runs through each step of the flow. The web console of the Oozie server can be used to monitor the progress of the various workflow jobs being managed by the server.

Hive

 

Pig was causing some slowdowns at Facebook, as it needed training to bring business intelligence users up to speed, so the development team decided to write Hive, which has an SQL-like syntax.

Apache Hive is a data warehouse infrastructure built on top of Apache Hadoop. It provides tools for querying and analysis of large data sets stored in Hadoop files. Hive defines a simple SQL-like query language, called HiveQL, that enables users familiar with SQL to query the data. Also it allows custom mappers and reducers to perform more sophisticated analysis that may not be supported by the built-in capabilities of the language.

Some example HiveQL statements are given below; the syntax is very similar to SQL.

# show tables;

# describe <tablename>;

# SELECT * FROM <tablename> LIMIT 10;

#  CREATE TABLE table_name

#  ALTER TABLE table_name RENAME TO new_table_name

#  DROP TABLE table_name
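These statements can be typed in the interactive Hive shell or passed from the command line with hive -e. A hedged sketch, assuming a tab-delimited users file like the one used in the Pig example (table and path names are illustrative):

[bash]
hive -e "CREATE TABLE users (name STRING, age INT) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';"
hive -e "LOAD DATA INPATH '/user/hadoop/users' INTO TABLE users;"
hive -e "SELECT * FROM users LIMIT 10;"
[/bash]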

NoSQL databases like Cassandra also provide support for Hadoop. Cassandra supports running Hadoop MapReduce jobs against a Cassandra cluster; with proper cluster configuration, MapReduce jobs can retrieve data from Cassandra and then write results either back into Cassandra or into a filesystem.

Hadoop Cluster on AWS EC2 with Hadoop 0.20 and Ubuntu 10.04

Let's start with a small introduction: what is Hadoop? Hadoop is an open-source project administered by the Apache Software Foundation. Apache Hadoop is a Java software framework that supports data-intensive distributed applications under a free license. It enables applications to work with thousands of nodes and petabytes of data. Hadoop was inspired by Google's MapReduce and Google File System (GFS) papers.

Technically, Hadoop consists of two key services: reliable data storage using the Hadoop Distributed File System (HDFS) and high-performance parallel data processing using a technique called MapReduce.

Dealing with big data requires two things:

  • Inexpensive, reliable storage; and
  • New tools for analyzing unstructured and structured data.

Hadoop creates clusters of machines and coordinates work among them. Clusters can be built with inexpensive computers. If one fails, Hadoop continues to operate the cluster without losing data or interrupting work, by shifting work to the remaining machines in the cluster.

HDFS manages storage on the cluster by breaking incoming files into pieces, called “blocks,” and storing each of the blocks redundantly across the pool of servers.
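Once a cluster is running, you can see how a particular file has been split into blocks and where the replicas are stored using the fsck tool; a hedged sketch (the path is illustrative):

[bash]
hadoop fsck /user/hadoop/users -files -blocks -locations
[/bash]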

The main services running in a hadoop cluster will be

1)namenode

2)jobtracker

3)secondarynamenode

These three will be running only on a single node (machine); that machine is the central machine which controls the cluster.

4)datanode

5)tasktracker

These two services will be running on all other nodes in the cluster.

HDFS has a master/slave architecture. An HDFS cluster consists of a single NameNode, a master server that manages the file system namespace and regulates access to files by clients. In addition, there are a number of DataNodes, usually one per node in the cluster, which manage storage attached to the nodes that they run on.

Above the file systems comes the MapReduce  engine, which consists of one Job Tracker, to which client applications submit MapReduce jobs. The Job Tracker pushes work out to available Task Tracker nodes in the cluster, striving to keep the work as close to the data as possible.

The only purpose of the secondary namenode is to perform periodic checkpoints: it periodically downloads the current namenode image and edits log, merges them into a new image, and uploads the new image back to the (primary and only) namenode.

Now let us have a look at how to build a Hadoop cluster using Cloudera hadoop-0.20 on Ubuntu 10.04.

You should install the Sun JDK first. Then add the following repositories to the apt sources list:

vim /etc/apt/sources.list.d/cloudera.list

[bash]

deb http://archive.cloudera.com/debian lucid-cdh3u0 contrib

deb-src http://archive.cloudera.com/debian lucid-cdh3u0 contrib

[/bash]

Import key

[bash]curl -s http://archive.cloudera.com/debian/archive.key | apt-key add -[/bash]

Then run

[bash]apt-get update[/bash]

For Namenode/Jobtracker ( These two services should run only on a single central machine in the cluster)

[bash]

apt-get install hadoop --yes

apt-get install hadoop-0.20-namenode

apt-get install hadoop-0.20-jobtracker

apt-get install hadoop-0.20-secondarynamenode

[/bash]

Configuration

vim /etc/hadoop/conf/hadoop-env.sh

Append these

[bash]

export JAVA_HOME=/usr/lib/jvm/java-6-sun-1.6.0.24/   # your Java home comes here

export HADOOP_CONF_DIR=/etc/hadoop/conf

export HADOOP_HOME=/usr/lib/hadoop-0.20

export HADOOP_NAMENODE_USER=hdfs

export HADOOP_SECONDARYNAMENODE_USER=hdfs

export HADOOP_DATANODE_USER=hdfs

export HADOOP_JOBTRACKER_USER=mapred

export HADOOP_TASKTRACKER_USER=mapred

export HADOOP_IDENT_STRING=hadoop

[/bash]

vim /etc/hadoop/conf/core-site.xml

[bash]

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://< ip address of this machine >:8020</value>
  </property>
</configuration>

[/bash]

vim /etc/hadoop/conf/hdfs-site.xml

 

[bash]

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/var/lib/hadoop-0.20/name</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/var/lib/hadoop-0.20/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>

[/bash]

vim /etc/hadoop/conf/mapred-site.xml

[bash]

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>< ip address of this machine >:8021</value>
  </property>
  <property>
    <name>mapred.system.dir</name>
    <value>/var/lib/hadoop-0.20/system</value>
  </property>
  <property>
    <name>mapred.local.dir</name>
    <value>/var/lib/hadoop-0.20/mapred</value>
  </property>
</configuration>

[/bash]

——————————————————————————————————————————————

[bash]

mkdir /var/lib/hadoop-0.20/name
mkdir /var/lib/hadoop-0.20/data
mkdir /var/lib/hadoop-0.20/system
mkdir /var/lib/hadoop-0.20/mapred
chown -R hdfs /var/lib/hadoop-0.20/name
chown -R hdfs /var/lib/hadoop-0.20/data
chown -R mapred /var/lib/hadoop-0.20/mapred

[/bash]

Now format NameNode

[bash]yes Y | /usr/bin/hadoop namenode -format[/bash]

Start namenode

[bash]/etc/init.d/hadoop-0.20-namenode start[/bash]

Check the log Files for error:

less /usr/lib/hadoop-0.20/logs/hadoop-hadoop-namenode-<ip>.log

Also you can check whether the Namenode process is up or not using the command

[bash]# jps[/bash]

Start the SecondaryNamenode

[bash]/etc/init.d/hadoop-0.20-secondarynamenode start[/bash]

Log: less /usr/lib/hadoop-0.20/logs/hadoop-hadoop-secondarynamenode-<ip>.log

[bash]

sudo -u hdfs hadoop fs -mkdir /var/lib/hadoop-0.20/system

sudo -u hdfs hadoop fs -chown mapred /var/lib/hadoop-0.20/system

[/bash]

Now Start the JobTracker

[bash]/etc/init.d/hadoop-0.20-jobtracker start[/bash]

Log : less /usr/lib/hadoop-0.20/logs/hadoop-hadoop-jobtracker-ip-10-108-39-34.log

Now the jps command will show the three processes up:

# jps

19233 JobTracker

18994 SecondaryNameNode

18871 NameNode

For Datanode/Tasktracker ( These two services should be running on all the other machines in the cluster )

[bash]

apt-get install hadoop-0.20-datanode

apt-get install hadoop-0.20-tasktracker

[/bash]

Configuration

vim /etc/hadoop/conf/core-site.xml

 

[bash]

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://< ip address of the namenode >:8020</value>
  </property>
</configuration>

[/bash]

vim /etc/hadoop/conf/hdfs-site.xml

[bash]

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/var/lib/hadoop-0.20/name</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/var/lib/hadoop-0.20/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>

[/bash]

vim /etc/hadoop/conf/mapred-site.xml

[bash]

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>< ip address of jobtracker >:8021</value>
  </property>
  <property>
    <name>mapred.system.dir</name>
    <value>/var/lib/hadoop-0.20/system</value>
  </property>
  <property>
    <name>mapred.local.dir</name>
    <value>/var/lib/hadoop-0.20/mapred</value>
  </property>
</configuration>

[/bash]

———————————————————————————————————————————————

[bash]

mkdir  /var/lib/hadoop-0.20/data/

chown -R hdfs /var/lib/hadoop-0.20/data

mkdir /var/lib/hadoop-0.20/mapred

chown -R mapred /var/lib/hadoop-0.20/mapred

[/bash]

Start the DataNode

[bash]/etc/init.d/hadoop-0.20-datanode start[/bash]

Log : less /usr/lib/hadoop-0.20/logs/hadoop-hadoop-datanode-<ip>.log

Start the TaskTracker

[bash]/etc/init.d/hadoop-0.20-tasktracker start[/bash]

Log: less /usr/lib/hadoop-0.20/logs/hadoop-hadoop-tasktracker-<ip>.log

You can now check the web interfaces:

http://< namenode-ip >:50070 - for the HDFS overview

and

http://< jobtracker-ip >:50030 - for the MapReduce overview
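Once both interfaces are reachable, you can also ask the namenode for a capacity and datanode summary from the command line; a hedged sketch:

[bash]
hadoop dfsadmin -report
[/bash]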

Cassandra Cluster on AWS EC2 with Cassandra 0.7.x and Ubuntu 10.04

Cassandra is a highly scalable, eventually consistent, distributed, structured key-value store. Cassandra brings together  Dynamo’s fully distributed design  and Bigtable’s ColumnFamily-based data model.

In a cluster, Cassandra nodes exchange information about one another using a mechanism called Gossip, so the nodes in a cluster need to know about one another. Nodes designated as "seeds" are the centre of this communication mechanism. It is customary to pick a small number of relatively stable nodes to serve as your seeds, and to make sure that each seed also knows of at least one other; two seeds is the usual choice.

Let's have a look at how we can bring up a Cassandra cluster with Cassandra 0.7.x on Ubuntu 10.04.

First of all you have to install Java/the JDK. As that is out of scope for this discussion, please do it on your own, and let's start with Cassandra.

Add the following repositories to your apt sources list

vim /etc/apt/sources.list.d/cassandra.list

[bash]deb http://www.apache.org/dist/cassandra/debian 07x main
deb-src http://www.apache.org/dist/cassandra/debian 07x main[/bash]

Import the following keys and add them to apt-key:

[bash]

gpg --keyserver keyserver.ubuntu.com --recv-keys 4BD736A82B5C1B00
gpg --export --armor 4BD736A82B5C1B00 | sudo apt-key add -
gpg --keyserver keyserver.ubuntu.com --recv-keys F758CE318D77295D
gpg --export --armor F758CE318D77295D | sudo apt-key add -

[/bash]

Execute

[bash]apt-get update[/bash]

and make sure there are no errors while accessing the packages.

Install Cassandra on all the nodes (machines) with which we intend to build the cluster:

[bash]apt-get install cassandra --yes[/bash]

Now edit the configuration file for Cassandra

vim /etc/cassandra/cassandra.yaml

Here I will discuss the important directives that have to be edited for the cluster to take effect.

initial_token:

eg:  initial_token:  136112946768375385385349842972707284582

This parameter determines the position of each node in the Cassandra ring. The initial token for the first seed node should be '0'. Here is a simple Python script that helps to calculate the token values.

[bash]
#!/usr/bin/python
import sys

if len(sys.argv) > 1:
    num = int(sys.argv[1])
else:
    num = int(raw_input("How many nodes are in your cluster? "))

for i in range(0, num):
    print 'node %d: %d' % (i, (i * (2 ** 127) / num))
[/bash]

Executing this script will prompt you for the number of nodes in your cluster (or you can pass it as an argument); it will then output the initial token for each node.

For eg: Consider a 2 node cluster, the tokens will be

node 0: 0

node 1: 85070591730234615865843651857942052864

auto_bootstrap: false

You can set this to false as we are just going to start the cluster for the first time.

seeds:

    - < ip address >

As I told you earlier, the seeds mentioned here will control the communication between the nodes.

You can give the IPs of the two nodes here for which you assigned the first two initial tokens generated by the script above.

Example:

seeds:

    - 192.168.1.10

    - 192.168.1.13

These seed entries should be the same on all nodes of the cluster.

listen_address:

&

rpc_address:

You can leave both empty.

Starting Cassandra

To start Cassandra you can either use the init script or the command "cassandra"; here I will use the second option.

As the Cassandra service was started during the installation, some values will already be stored in the /var/lib/cassandra/data directory, so before starting Cassandra follow these steps:

[bash]

/etc/init.d/cassandra stop
rm -rf /var/lib/cassandra/data
mkdir /var/lib/cassandra/data

[/bash]

After doing these steps on all the nodes, run the following command to start Cassandra on each node, starting from the first seed node:

[bash]# cassandra &[/bash]

After starting Cassandra on all the nodes you can check the cluster status using the following command

[bash]nodetool -h <ip of the node >  -p 8080 ring[/bash]

or

[bash]nodetool -h localhost -p 8080 ring[/bash]
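To confirm that a node is also accepting client (Thrift) connections, you can attach the command-line client shipped with the package; a hedged sketch (9160 is the default RPC port in 0.7):

[bash]cassandra-cli --host localhost --port 9160[/bash]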

SSL for Tomcat on AWS EC2

To launch an AWS EC2 instance, first set up a security group to specify what network traffic is allowed to reach the instance. Then select an AMI and launch an instance from it. Create a volume in the same zone as the instance, attach it, format the device and mount it to a directory. After that, follow these steps to set up SSL for Tomcat:

1. Tomcat needs Java, so create a directory to hold the Java binary file.

[shell] mkdir /usr/java
cd /usr/java [/shell]

2. Download the JDK binary file (jdk-x-linux-ix.bin)
Use the URL http://www.oracle.com/technetwork/java/archive-139210.html

3. Execute the Binary file

[shell] /usr/java/jdk-x-linux-ix.bin [/shell]

Now we have Java on our machine. Next, download Tomcat and install it following these instructions:

1. Create a directory to save the tomcat

[shell] mkdir /usr/tomcat
cd /usr/tomcat [/shell]

2. Download the Tomcat binary distribution (apache-tomcat-x.tar.gz)
Use the URL http://apache.hoxt.com/tomcat/tomcat-6/v6.0.32/bin/

3. Extract that file

[shell] tar -zxvf apache-tomcat-x.tar.gz [/shell]

4. Edit the catalina.sh file

[shell] vim /usr/tomcat/apache-tomcat-x/bin/catalina.sh [/shell]

[shell]

#** Add at the top **
JAVA_HOME=/usr/java/jdk1.x.x_x

[/shell]

save and exit
5. Start the tomcat

[shell] /usr/tomcat/apache-tomcat-x/bin/startup.sh [/shell]

6. We can watch the logs using the following command:

[shell]tail -f /usr/tomcat/apache-tomcat-x/logs/catalina.out [/shell]

7. Open a browser and enter the URL http://localhost:8080 (Tomcat's default HTTP port)
Now we can see the Tomcat index page

8. To stop the tomcat

[shell]/usr/tomcat/apache-tomcat-x/bin/shutdown.sh [/shell]

Now configure the SSL Certificate for tomcat. When you choose to activate SSL on your web server you will be prompted to complete a number of questions about the identity of your website and your company. Your web server then creates two cryptographic keys – a Private Key and a Public Key. The Public Key does not need to be secret and is placed into a Certificate Signing Request (CSR) – a data file also containing your details.

Create a self signed certificate authority (CA) and keystore.

1. Make a directory to hold the certs and keystore. This might be something like:

[shell] mkdir /usr/tomcat/ssl
cd /usr/tomcat/ssl [/shell]

2. Generate a private key for the server and remember the pass phrase for the next steps

[shell]openssl genrsa -des3 -out server.key 1024[/shell]

Generating RSA private key, 1024 bit long modulus
…………………..++++++
…++++++
e is 65537 (0x10001)
Enter pass phrase for server.key:
Verifying – Enter pass phrase for server.key:

3. Generate a CSR (Certificate Signing Request). Give the data after executing this command

[shell]openssl req -new -key server.key -out server.csr[/shell]

Enter pass phrase for server.key:
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter ‘.’, the field will be left blank.
—–
Country Name (2 letter code) [GB]:
State or Province Name (full name) [Berkshire]:
Locality Name (eg, city) [Newbury]:
Organization Name (eg, company) [My Company Ltd]:
Organizational Unit Name (eg, section) []:
Common Name (eg, your name or your server’s hostname) []:
Email Address []:

Please enter the following ‘extra’ attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:

4. Remove the passphrase from the key

[shell]cp server.key server.key.org
openssl rsa -in server.key.org -out server.key[/shell]

Enter pass phrase for server.key.org:
writing RSA key

5. Generate the self signed certificate

[shell]openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt[/shell]

Signature ok
subject=/C=GB/ST=Berkshire/L=Newbury/O=My Company Ltd
Getting Private key

You should then submit the CSR. During the SSL Certificate application process, the Certification Authority will validate your details and issue an SSL Certificate containing your details and allowing you to use SSL. Typically an SSL Certificate will contain your domain name, your company name, your address, your city, your state and your country. It will also contain the expiration date of the Certificate and details of the Certification Authority responsible for the issuance of the Certificate.

Create a certificate for tomcat and add both to the keystore

1. Change the path to ssl

[shell]cd /usr/tomcat/ssl[/shell]

2. Create a keypair for ‘tomcat’

[shell]keytool -genkey -alias tom -keyalg RSA -keystore tom.ks[/shell]

Enter keystore password:
Re-enter new password:
What is your first and last name?
[Unknown]:
What is the name of your organizational unit?
[Unknown]:
What is the name of your organization?
[Unknown]:
What is the name of your City or Locality?
[Unknown]:
What is the name of your State or Province?
[Unknown]:
What is the two-letter country code for this unit?
[Unknown]:

Is CN=Unknown, OU=Unknown, O=Unknown, L=Unknown, ST=Unknown, C=Unknown correct?
[no]: yes

Enter key password for <tom>
(RETURN if same as keystore password):
Re-enter new password:

3. Generate a CSR (Certificate Signing Request) for tomcat

[shell]keytool -keystore tom.ks -alias tom -certreq -file tom.csr[/shell]

Enter keystore password:

4. Create a unique serial number

[shell]echo 02 > serial.txt[/shell]

5. Sign the tomcat CSR

[shell]openssl x509 -CA server.crt -CAkey server.key -CAserial serial.txt -req -in tom.csr -out tom.cer -days 365[/shell]

Signature ok
subject=/C=Unknown/ST=Unknown/L=Unknown/O=Unknown/OU=Unknown/CN=Unknown
Getting CA Private Key

6. Import the server CA certificate into the keystore

[shell]keytool -import -alias serverCA -file server.crt -keystore tom.ks[/shell]

Enter keystore password:
Owner: O=My Company Ltd, L=Newbury, ST=Berkshire, C=GB
Issuer: O=My Company Ltd, L=Newbury, ST=Berkshire, C=GB
Serial number: ee13c90cb351968b
Valid from: Thu May 19 02:12:51 EDT 2011 until: Fri May 18 02:12:51 EDT 2012
Certificate fingerprints:
MD5: EE:F0:69:01:4D:D2:DA:A2:4E:88:EF:DC:A8:3F:A9:00
SHA1: 47:97:72:EF:30:02:F7:82:BE:CD:CA:F5:CE:4E:ED:89:73:23:4E:24
Signature algorithm name: SHA1withRSA
Version: 1
Trust this certificate? [no]: yes
Certificate was added to keystore

7. Add the tomcat certificate to the keystore

[shell]keytool -import -alias tom -file tom.cer -keystore tom.ks[/shell]

Enter keystore password:
Certificate reply was installed in keystore

To configure a secure (SSL) HTTP connector for Tomcat, verify that it is activated in the $TOMCAT_HOME/conf/server.xml file. Edit this file and add the following lines.

Tomcat configuration

1. Edit the following portion of the Tomcat configuration file and change the port to 80

[shell]vim /usr/tomcat/apache-tomcat-6.0.13/conf/server.xml[/shell]

[shell]
<!-- original connector -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443" />

<!-- changed to listen on port 80 -->
<Connector port="80" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443" />
[/shell]

2. Add the following connector to server.xml, putting your keystore password in the keystorePass attribute:

[shell]

<Connector port="443" protocol="HTTP/1.1" SSLEnabled="true"
           maxThreads="150" scheme="https" secure="true"
           keystoreFile="tom.ks"
           keystorePass="password"
           clientAuth="false" sslProtocol="TLS" />

[/shell]

When you start Tomcat, your web server will match your issued SSL Certificate to your Private Key. Your web server will then be able to establish an encrypted link between the website and your customer's web browser.

Start the tomcat with SSL Certificate

1. Restart tomcat

[shell]/usr/tomcat/apache-tomcat-6.0.13/bin/shutdown.sh
/usr/tomcat/apache-tomcat-6.0.13/bin/startup.sh[/shell]

2. Go to https://<public DNS name>:443/

Your browser will then show a security warning because the certificate is self-signed; accept it and you can reach Tomcat over HTTPS with your certificate. When a browser connects to a secure site it will retrieve the site's SSL Certificate and check that it has not expired, that it has been issued by a Certification Authority the browser trusts, and that it is being used by the website for which it has been issued. If it fails any one of these checks, the browser will display a warning to the end user letting them know that the site is not secured by SSL.
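If you prefer to verify the certificate from the command line instead of the browser, a quick hedged sketch using openssl (replace localhost with your public DNS name):

[shell]openssl s_client -connect localhost:443 </dev/null | openssl x509 -noout -subject -issuer -dates[/shell]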

You are Done !!!

Creating phusion passenger AMI on Amazon EC2

Phusion Passenger is an Apache and Nginx module for deploying Ruby web applications (such as those built on the Ruby on Rails web framework). Phusion Passenger works on any POSIX-compliant operating system, which means practically any operating system except Microsoft Windows.

Here we are not going to discuss Ruby on Rails applications in much detail, as our aim is to create an AMI of an Ubuntu AWS instance from which we can launch instances pre-built for developing and deploying Rails applications.

Install apache2 web-server

[bash]
sudo apt-get install apache2   # by default its DocumentRoot is /var/www/
[/bash]

 

Install mysql-server and mysql-client (to support Rails applications that use a database)

[bash]sudo apt-get install mysql-server mysql-client[/bash]

Install Ruby from the repository

The default ruby1.8 package is missing some important files, so install ruby1.8-dev; otherwise, at some stage gem install may fail with "Error: Failed to build gem native extensions".

[bash]sudo apt-get install ruby1.8-dev[/bash]

 

Install RubyGems

Install rubygems >= 1.3.6

The package can be downloaded as follows:

wget http://rubyforge.org/frs/download.php/70696/rubygems-1.3.7.tgz

 

[bash]
tar xvzf rubygems-1.3.7.tgz
cd rubygems-1.3.7
sudo ruby setup.rb
sudo ln -s /usr/bin/gem1.8 /usr/bin/gem
[/bash]

Install Rails via RubyGems

Once RubyGems is installed, use it to install Rails:

[bash]sudo gem install rails[/bash]

Installing Phusion Passenger

There are three ways to install Phusion Passenger:

1. By installing the Phusion Passenger gem.

2. By downloading the source tarball from the Phusion Passenger website (passenger-x.x.x.tar.gz).

3. By installing the native Linux package (eg: Debian package)

Before installing, you will probably need to switch to the root user first. The Phusion Passenger installer will attempt to automatically detect Apache, and compile Phusion Passenger against that Apache version. It does this by looking for the apxs or apxs2 command in the PATH environment variable.

If Apache is installed in a non-standard location, that can prevent the Phusion Passenger installer from detecting it. To solve this, become the root user and add the directory containing apxs to your PATH.
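A minimal sketch of that, assuming Apache was installed under /usr/local/apache2 (the path is illustrative):

[bash]
export PATH=/usr/local/apache2/bin:$PATH   # directory containing apxs / apxs2
[/bash]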

The easiest way to install Passenger is via the gem.

Install RubyGems as described above, then run the Phusion Passenger installer by typing the following commands as root.

1.Open a terminal, and type:

[bash]gem install passenger[/bash]

2.Type:

[bash]passenger-install-apache2-module[/bash]

and follow the instructions from the installer.

The installer will:

1. Install the Apache 2 module.

2. Instruct you how to configure Apache.

3. Inform you how to deploy a Ruby on Rails application.

If anything goes wrong, this installer will advise you on how to solve any problems.

The installer will ask to add the following lines to the apache2.conf file.

[bash]
LoadModule passenger_module /usr/lib/ruby/gems/1.8/gems/passenger-3.0.0/ext/apache2/mod_passenger.so
PassengerRoot /usr/lib/ruby/gems/1.8/gems/passenger-3.0.0
PassengerRuby /usr/bin/ruby1.8
[/bash]


Now consider that you have a Rails application in the directory /var/www/RFP_tool/. Add the following VirtualHost entry to your Apache configuration file:

[bash]
<VirtualHost *:80>

ServerName  www.yoursite.com

DocumentRoot  /var/www/RFP_tool/public

<Directory  /var/www/RFP_tool/public>

AllowOverride  all

Options  -MultiViews

</Directory>

</VirtualHost>
[/bash]

Restart your apache server.
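On Ubuntu this is typically done with (a sketch):

[bash]sudo /etc/init.d/apache2 restart[/bash]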

Phusion Passenger installation is finished.

Installation via the source tarball

Extract the tarball to whatever location you prefer and run the installer from there:

[bash]
cd /usr/local/passenger/
tar xzvf passenger-x.x.x.tar.gz
/usr/local/passenger/passenger-x.x.x/bin/passenger-install-apache2-module
[/bash]

Please follow the instructions given by the installer. Do not remove the passenger-x.x.x folder after installation. Furthermore, the passenger-x.x.x folder must be accessible by Apache.

CREATING AN AMI OF AN EC2 INSTANCE

First you will have to install ec2-api-tools.zip from

http://www.amazon.com/gp/redirect.html/ref=aws_rc_ec2tools?location=http://s3.amazonaws.com/ec2-downloads/ec2-api-tools.zip&token=A80325AA4DAB186C80828ED5138633E3F49160D9

[bash]
unzip ec2-api-tools.zip
mkdir ~/ec2
cp -rf ec2-api-tools/* ~/ec2
[/bash]

Upload your aws certificate and private-key to /mnt of the instance.

 

Then add the following to ~/.bashrc

[bash]
export EC2_HOME=~/ec2
export PATH=$PATH:$EC2_HOME/bin
export EC2_PRIVATE_KEY=/mnt/pk-xxxxxxxxxxxxxxxxxxx.pem
export EC2_CERT=/mnt/cert-xxxxxxxxxxxxxxxx.pem
export JAVA_HOME=/usr/local/java/   # your JAVA_HOME here
export PATH=~/ec2/bin:$PATH
[/bash]
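After saving the file, reload your shell profile and check that the tools can reach the EC2 API; a hedged sketch:

[bash]
source ~/.bashrc
ec2-describe-instances
[/bash]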

If your EC2 instance is an EBS-backed one, you can use the following command to create an AMI

[bash]ec2-create-image -n your-image-name instance-id[/bash]

If your instance is an s3-backed ( instance store ) one, you will have to install ec2-ami-tools first. It can be downloaded from

 

http://s3.amazonaws.com/ec2-downloads/ec2-ami-tools.zip

[bash]
unzip ec2-ami-tools.zip
cp ec2-ami-tools-x.x-xxxxx/bin/* ~/ec2/bin
[/bash]

vim ~/.bashrc

export EC2_AMITOOL_HOME=~/ec2/ec2-ami-tools-1.3-56066/

Now you can use the following commands to create an AMI of your s3-backed instance

[bash]
mkdir /mnt/bundle-vol/
ec2-bundle-vol -u USER-ID -c /mnt/cert-xxxxxxx.pem -k /mnt/pk-xxxx.pem -d /mnt/bundle-vol
[/bash]

( Login to your AWS account; your USER-ID is available from Account–> Security Credentials )

[bash]
ec2-upload-bundle -b s3-bucket-name -a aws-access-key -s aws-secret-key -d /mnt/bundle-vol/ -m /mnt/bundle-vol/image.manifest.xml
ec2-register -K /mnt/pk-xxxxxx.pem -C /mnt/cert-xxxxxxx.pem s3-bucket-name/image.manifest.xml -n name-of-the-image
[/bash]

To see the created images

[bash]ec2-describe-images [/bash]

Apache-Tomcat Load Balanced Persistent Session Setup on Amazon EC2

Although Tomcat is a good option for heavy Java applications, it performs poorly under high load. The best way to solve this problem is to set up an Apache-Tomcat load-balanced configuration in your Amazon EC2 environment. In this case you will have more than one Tomcat instance running in parallel, each handling its share of the traffic.

Deploying a load balanced e-commerce portal in Amazon EC2

Update: NFS should not be used as that will be a SPOF. One should use S3 or other object stores. An alternative could be multi-node GlusterFS if someone needs volumes shared across nodes.

When building the infrastructure for an eCommerce portal in the cloud, it is important that it is available all the time, that it can survive outages like the ones we had recently in the AWS EU and US East Regions, and that it can survive hardware failures or other issues like bugs in the system or deployment errors. We built an infrastructure on the AWS cloud that addresses all these issues with LAMP, using various AWS services such as EC2, S3, RDS and EBS. It is described in detail below.

 

Achieving High Availability & Fail over across Datacenters

Elastic Load Balancer (ELB)

The Elastic Loadbalancer ( ELB ) service provided by AWS tries to achieve the following:

(i) Spans across datacenters: ELB load-balances traffic across multiple datacenters (Availability Zones), thus providing high availability even if one datacenter goes down. So you should always make sure that when you launch instances under an ELB, you launch them in different Availability Zones. You can also launch instances in the same AZ, but by default ELB will redirect requests across multiple AZs in a round-robin way.

(ii) Failover: ELB will periodically monitor the health of the instances, and if any instance or monitored service (e.g. HTTP) goes down, ELB will stop redirecting requests to that instance; all requests will be redirected to the remaining instances registered under the ELB. When the instance comes back up, ELB will again start redirecting requests to it.

(iii) Handling the root domain (apex / main domain) and subdomains: ELB can load-balance only requests coming to an alias / subdomain (www). It cannot handle requests coming to the root domain. This is because when you configure DNS for an ELB, you can only set a CNAME to the ELB for subdomains. There are two reasons for this. One is that when you configure an ELB, you only get a public DNS name for the ELB, like the following, instead of a public IP:

[bash]Test-1736333854.us-east-1.elb.amazonaws.com [/bash]

This is because AWS changes the public IP of the ELB periodically to provide scalability for the ELB itself. The other reason why you cannot point the main domain at an ELB is that the DNS protocol itself restricts the use of a CNAME or anything other than an "A" record for a root domain. So you cannot CNAME a root domain to the ELB DNS name.

So for serving root domain requests with ELB, there are only workarounds like the ones mentioned below:

a) Assign an Elastic IP to one instance under the ELB. But what if this instance goes down? Set up heartbeat to switch the EIP? This is a complicated setup, as switching an EIP to an instance in another AZ takes time.

b) The other option is to have the root domain point to the IP addresses of the destination by configuring one or more "A" records (address records) for the root domain. You can do that if you know the destination IP addresses are fixed, such as when you are using EC2 Elastic IP addresses. We wouldn't recommend this, because IP addresses will be cached at the client end for a long time even if you set a low TTL value at the nameservers. This is because the TTL value can also be configured at the client end, overriding the TTL value provided by the domain's nameserver; e.g. with nscd (the name service cache daemon) you can set the TTL value manually in its configuration file.

c) You can keep a separate web server, not under the ELB, with a redirect rule for redirecting root domain requests to www. You should make sure that this web server is highly available as well.

d) A better solution is to go with domain registrars / DNS service providers that offer the feature of redirecting root domain requests to www, so that this is handled at the DNS level itself. The DNS service provided by AWS, Route 53, can be used for this 'zone apex' (root domain) redirection.

(iv) SSL Termination

There is support for “SSL termination” in ELB which means you can use ELB to loadbalance HTTPS requests too. You just need to buy the SSL certificate and simply upload it to ELB. ELB will redirect all the HTTPS request to the backend servers. So you can make an eCommerce portal highly secure and highly available with ELB.

(v) Persistent Session

You can enable sticky sessions with ELB, but the problem is that users will be logged out if any instance / webserver goes down: ELB will redirect subsequent requests from the same user to a different instance, which will prompt the user to log in again. To tackle this, there were a few options we considered:
a) set up a distributed, failover memcached server, or
b) use RDS for storing sessions.

We went for RDS as our Session Management store since RDS is an excellent choice for Database Administration as well if you are using MySQL as the Database.

Your application must be configured to write session data to an RDS database. So when an instance / webserver goes down and the ELB redirects the user's request to a different instance, the user will not be asked to log in again, as all servers read session data from the same place, namely RDS. The user won't notice anything at all, even though they've now started talking to another server. We recommend using a Multi-AZ RDS instance and writing session data into this. So if one of your EC2 instances goes down, the other instances will still have access to the RDS database; likewise, if an RDS zone goes down, Amazon fails it over to the second AZ internally, transparently to you and your application.

So the easiest and most reliable way to share sessions for failover on a multi-server environment is to use RDS, since Amazon handle the database layer’s failover for you.

So basically you can achieve two things by using RDS – Session management and Database Management.

 

AutoScaling

The Autoscaling service provided by AWS allows you to scale horizontally (adding or removing instances) based on metrics such as CPU usage, RAM or disk I/O.

Ideally you should use a Base AMI with Autoscaling that will pull the required packages from a Centralized location like Chef Platform and code from the Version Control System or S3. You can write a startup script to run on instance bootup for this purpose. So when Autoscaling launches a new instance it will pull all the latest updated versions of the packages, code and also any other required custom configurations from a centralized location. This will also make it easier to manage all the configuration details, code updates from a centralized location using tools like Chef Platform, Version Control System or S3 respectively.

Apart from Centralised Configuration / Code management, the reason for using Base ami with Autoscaling is that it is not possible to change the ami configured with Autoscaling service dynamically.

 

Storage for Application Files

We came across a lot of options for storing the application files. However, you have to consider your priorities before you select a storage service for the code. The following are the points to consider for your application file storage system:

(i)Latency issues: All shared storage systems like NFS / GlusterFS / EBS / S3 etc have latency issues when compared to Instance store (Ephemeral Storage)

(ii)High availability: If you are using a shared storage service like NFS, it should never go down for the entire system to be available all the time.

(iii) Access to the code: how to get the latest code during the incremental roll-out of a new instance; if you are using shared storage, it becomes difficult to give a newly launched instance access to the shared storage system.

We went for instance store / ephemeral store that gives you better I/O performance. You can keep your own highly available SVN repository or go for publicly available Version Control Systems like GitHub. At the same time you can also keep a copy in S3 and sync to it whenever there is a code update. This will make it more redundant.

The problem with using a shared storage service like NFS / GlusterFS with EBS / S3 is that it becomes difficult to avoid a single point of failure for the NFS / GlusterFS service. But if your site doesn't get many hits and your priority is redundancy, you can go for mounting S3 as a filesystem using FUSE-based tools like s3fs and use that as shared storage with NFS for multiple instances. The problem with S3 is that it is not intended to be used as a filesystem, and there have been issues reported with speed and caching. Alternatively, you can use an EBS volume for code storage if you have only a single instance serving requests. Even using NFS with EBS volumes (with frequent snapshots to S3) gives better performance than using S3 as shared storage for files.
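If you do decide to experiment with S3 mounted as a filesystem, a heavily hedged sketch using s3fs (bucket name, mount point and credential handling are illustrative; check the s3fs documentation for your version):

[bash]
mkdir -p /mnt/s3-code
s3fs my-code-bucket /mnt/s3-code -o allow_other   # requires FUSE and s3fs installed, with S3 credentials configured
[/bash]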

Not only does the instance store give you better performance, error rates are also very rare, whereas with EBS volumes errors are reported more frequently. Recent outages in the AWS EU and US East Regions showed that downtime was made worse by the increased time taken to recover from EBS errors.

 

Code Deployment

For automating code deployment, you can configure deployment tools like Capistrano. This becomes very handy when you have multiple servers to update simultaneously. Capistrano is written in Ruby and built for Ruby code deployment, but with a few changes you can automate the deployment of PHP / Perl / Python / Java based applications as well.

chef-deploy is another tool, shipped alongside Chef, for automating code deployment. Continuous Integration tools like Hudson / CruiseControl are excellent when you want to automate the build, deployment, test and rollback process.

For code deployment we follow a release management process, where we keep a staging environment that is an exact replica of the production environment. We push code to the production environment only when it has been completely tested in the staging environment and approved by the Release Manager. This further reduces the errors, bugs and downtime caused by a code release.

 

Database Server

We went for RDS across AZ for High availability. AWS will take care of Redundancy, Performance Optimisation, Scalability and Backup. You can avoid the hassle of managing a Database Server by using RDS. RDS is as an excellent distributed highly available Session Management System. You can also take regular backup from RDS and keep it in S3.

You can also use Master–Slave Replication setup instead of RDS. This is also a good option for achieving high availability for Database server. The challenging part will be to manually configure failover for both master and slave servers, achieving scalability with traffic, backup configuration and performance optimization with increasing load. With RDS, all these will be managed by AWS.

 

Log handling

Keep all the important logs, like application logs, syslogs, SSH logs etc., on an EBS volume. You can either schedule regular snapshots of this EBS volume to S3, or sync the log files to an S3 bucket periodically using tools like s3sync; a sketch of the latter is given below.
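A hedged sketch of such a periodic sync using s3cmd (the article mentions s3sync; s3cmd's sync subcommand works similarly; the bucket and paths are illustrative):

[bash]
#!/bin/sh
# dropped into /etc/cron.hourly/ as a sketch
s3cmd sync /var/log/ s3://my-log-bucket/$(hostname)/logs/
[/bash]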

 

Configuration Management

If you have more than one server, are planning to scale up in the future, or would like to automate a lot of administration and deployment work, you should definitely use one of the free, open-source configuration management tools like Chef / Puppet / Cfengine.

Chef is newer and has default support for AWS / EC2. We use Chef extensively for managing our infrastructure in AWS. Chef provides a lot of ready-made cookbooks (recipes / roles) for LAMP, Java applications, Cassandra, Hadoop, Nagios etc., which can be used readily (or with minimal customization) to automate infrastructure setup and configuration. Chef also comes with a tool called chef-deploy for automating the deployment of code.

So using Chef along with tools like Hudson / Cruisecontrol, you can automate the entire setup from infrastructure setup to configuration management to building, deployment and testing of your application.

 

Performance

To improve performance you can implement the following:

(i) Use caching mechanisms like Memcached (for DB scaling) / aiCache / Varnish.

(ii) A CDN (Content Delivery Network) is a must if you want to provide better end-user response times. There are a lot of CDN providers, but we recommend AWS CloudFront or Akamai for serving static files and images. For a start-up or small business a CDN might be costly, but as your target audience grows larger and more global, a CDN becomes necessary to achieve fast response times.

 

Monitoring & Alert

For monitoring, go for open source monitoring tools along with a SaaS based monitoring application.

(i) There are a lot of free and open-source options available, such as Nagios, Zenoss and Zabbix. This can be automated with Chef in such a way that when a new server is launched into the cluster, it is automatically added to Nagios's list of monitored servers.

(ii) You can also use excellent SaaS-based monitoring apps like Pingdom, mon.itor.us or site24x7.com for monitoring and alerting via email, SMS or Twitter.

(iii) Use custom scripts or tools like Munin and Monit for monitoring services and restarting them if they crash.

 

Backup

You can keep copies of the code in an S3 bucket and sync it with tools like s3sync on every update. For DB backup, in addition to the automated RDS backups, you can take periodic standard DB dumps using mysqldump and store them in an S3 bucket, as sketched below. You can also use EBS volumes for keeping replicas of the code and DB backups, with periodic snapshots to S3.
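A hedged sketch of such a dump pushed to S3 (host name, credentials, database and bucket names are all illustrative):

[bash]
mysqldump -h mydb.xxxxxxxx.us-east-1.rds.amazonaws.com -u dbuser -p'secret' shopdb | gzip > /mnt/backup/shopdb-$(date +%F).sql.gz
s3cmd put /mnt/backup/shopdb-$(date +%F).sql.gz s3://my-db-backups/
[/bash]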

An important thing to note about S3 is that it is only a highly available storage system; it is not backed up automatically. That means if you delete anything from S3 manually, it is gone forever unless you have kept multiple copies of it in S3 yourself. So make sure that you have enough backups available in S3.