

It’s Phab! That makes your life easier

We have been using plenty of different tools for tracking bugs, product management, project management, to-do lists and code review, such as ClearCase, ClearQuest, Bugzilla, GitHub, Asana, Pivotal Tracker, Google Drive etc. Then we found Phabricator, a “too good to be true” software engineering web application platform originally developed at Facebook. It has code review, a wiki, repository browsing, tickets and a lot more to make Phab more fabulous.

Phabricator is an open source suite of web applications which helps software companies build better software. The following are the most important tools in Phabricator:
Maniphest – bug tracker/task management tracker
Diffusion – source code browser
Differential – code review tool that allows developers to easily submit reviews to one another via the arc command line tool when they check in code using Git or Subversion (see the sketch below)
Phriction – wiki tool
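
Differential, for instance, is driven by the arc command line tool from the arcanist checkout described below. A rough sketch of the usual review cycle, assuming a Git working copy (paths and the branch name are placeholders):

#cd ~/src/myproject
#git checkout -b fix-login-bug
# …hack and commit locally, then:
#arc diff
#arc land

arc diff uploads the change to Differential for review; arc land pushes the commit once the review has been accepted.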

How to setup and configure the code review and project management tool – Phabricator

Installation

Server – 4GB Digital ocean droplet
OS – Ubuntu 14.04

1. Install dependencies

apt-get install mysql-server apache2 dpkg-dev php5 php5-mysql php5-gd php5-dev php5-curl php-apc php5-cli php5-json

2. Get code

#mkdir -p /var/www/codereview
#cd /var/www/codereview

git clone https://github.com/phacility/libphutil.git

git clone https://github.com/phacility/arcanist.git

git clone https://github.com/phacility/phabricator.git

3. Configure virtual host entry

#add below lines

#######################################################################

<VirtualHost *:80>
ServerName phabricator.your-domain.com
DocumentRoot /var/www/codereview/phabricator/webroot
RewriteEngine on
RewriteRule ^/rsrc/(.*) - [L,QSA]
RewriteRule ^/favicon.ico - [L,QSA]
RewriteRule ^(.*)$ /index.php?__path__=$1 [B,L,QSA]
<Directory /var/www/codereview/phabricator/webroot>
Order allow,deny
Allow from all
</Directory>
</VirtualHost>
#######################################################################
4. Enable the virtual host entry for phabricator.

# a2ensite phabricator.conf
# service apache2 reload

5. Configure the MySQL database for phabricator

– create database
# /var/www/codereview/phabricator/bin/config set mysql.user mysql_username
# /var/www/codereview/phabricator/bin/config set mysql.pass mysql_password
# /var/www/codereview/phabricator/bin/config set mysql.host mysql_host
# /var/www/codereview/phabricator/bin/storage upgrade
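
storage upgrade builds the phabricator_* databases, so the MySQL account above must already exist with privileges over that prefix. A minimal sketch from the MySQL prompt, assuming the placeholder names used above:

mysql> CREATE USER 'mysql_username'@'localhost' IDENTIFIED BY 'mysql_password';
mysql> GRANT ALL PRIVILEGES ON `phabricator\_%`.* TO 'mysql_username'@'localhost';
mysql> FLUSH PRIVILEGES;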
– tweak MySQL

Open /etc/mysql/my.cnf and add the following line under [mysqld] section:

sql-mode = STRICT_ALL_TABLES

#service mysql restart

Set the base URI of the Phabricator install:

# /var/www/codereview/phabricator/bin/config set phabricator.base-uri 'http://phabricator.your-domain.com/'

Configure Outbound Email – External SMTP (Google Apps)

Set the following configuration keys using /var/www/codereview/phabricator/bin/config set <key> <value>:

– metamta.mail-adapter -> PhabricatorMailImplementationPHPMailerAdapter
– phpmailer.mailer -> smtp
– phpmailer.smtp-host -> smtp.gmail.com
– phpmailer.smtp-port -> 465
– phpmailer.smtp-user -> Your Google apps mail id
– phpmailer.smtp-password -> set to your password used for authentication
– phpmailer.smtp-protocol -> ssl

Start the phabricator daemons

You can start all the Phabricator daemons using the script:
# /var/www/codereview/phabricator/bin/phd start
To start daemons at the boot time, add this entry to the file /etc/rc.local

/var/www/codereview/phabricator/bin/phd start

Diffusion repository hosting with git

1. Install git

#apt-get install git

2. Create a local repository directory:

#mkdir -p /data/repo

3. Edit the repository.default-local-path key to the new local repository directory.

Go to the Config -> Repositories -> repository.default-local-path

4. Configure System user accounts

Phabricator uses as many as three user accounts. These are system user accounts on the machine Phabricator runs on, not Phabricator user accounts.

* daemon-user – The user the daemons run as

We will configure the root user to run the daemons

* www-user – The user the web server runs as

We will use www-data to be the web user

* vcs-user – The user that users will connect over SSH as

We will configure the git user as the vcs-user

To enable SSH access to repositories, edit /etc/sudoers file using visudo to contain:

#includedir /etc/sudoers.d
git ALL=(root) SETENV: NOPASSWD: /usr/bin/git-upload-pack, /usr/bin/git-receive-pack, /usr/bin/git

Since we are going to enable SSH access to the repository, ensure the following:

– Open /etc/shadow and find the line for vcs-user, git.

The second field (which is the password field) must not be set to !!. This value will prevent login. If it is set to !!, edit it and set it to NP (“no password”) instead.

– Open /etc/passwd and find the line for the vcs-user, git.
The last field (which is the login shell) must be set to a real shell. If it is set to something like /bin/false, then sshd will not be able to execute commands. Instead, you should set it to a real shell, like /bin/sh.

– Use phd.user as our daemon user:
# /var/www/codereview/phabricator/bin/config set phd.user root
# /var/www/codereview/phabricator/bin/config set diffusion.ssh-user git

5. Configuring SSH

We will move the normal sshd daemon to another port, say 222, and use this port to get a normal login shell. We will run a highly restrictive sshd on port 22, managed by Phabricator.

Move Normal SSHD

– Make a backup of sshd_config before making any changes.

#cp /etc/ssh/sshd_config /etc/ssh/sshd_config.backup

– Update /etc/ssh/sshd_config, changing the port to some other port, like 222.

Port 222

– Restart sshd and verify that you are able to connect to the new port

ssh -p 222 user@host

Configure and start Phabricator SSHD

We now configure and start a second sshd instance which will run on port 22. This instance uses a special locked-down configuration that uses Phabricator to handle authentication and command execution.

– Create a phabricator-ssh-hook.sh file

– Create a sshd_phabricator config file

– Start a copy of sshd using the new configuration

Create phabricator-ssh-hook.sh: Copy the template in phabricator/resources/sshd/phabricator-ssh-hook.sh to somewhere like /usr/lib/phabricator-ssh-hook.sh and edit it to have the correct settings:

##############################################################

#!/bin/sh

# NOTE: Replace this with the username that you expect users to connect with.
VCSUSER="git"

# NOTE: Replace this with the path to your Phabricator directory.
ROOT="/var/www/codereview/phabricator"

if [ "$1" != "$VCSUSER" ];
then
exit 1
fi

exec "$ROOT/bin/ssh-auth" "$@"
##############################################################

Make it owned by root and restrict editing:

#sudo chown root /usr/lib/phabricator-ssh-hook.sh
#chmod 755 /usr/lib/phabricator-ssh-hook.sh

Create sshd_config for Phabricator: Copy the template in phabricator/resources/sshd/sshd_config.phabricator.example to somewhere like /etc/ssh/sshd_config.phabricator.
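
The heart of that template is the hook wiring. A rough sketch of the important directives (paths and the vcs-user must match what you configured above; on older OpenSSH releases AuthorizedKeysCommandUser is spelled AuthorizedKeysCommandRunAs):

##############################################################
AuthorizedKeysCommand /usr/lib/phabricator-ssh-hook.sh
AuthorizedKeysCommandUser git
AllowUsers git

Port 22
Protocol 2
PermitRootLogin no
AllowAgentForwarding no
AllowTcpForwarding no
X11Forwarding no
##############################################################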

Start Phabricator SSHD

#sudo /usr/sbin/sshd -f /etc/ssh/sshd_config.phabricator

Note:
Add this entry to /etc/rc.local to start the daemon on startup.

If you did everything correctly, you should be able to run this:

#echo {} | ssh git@phabricator.your-company.com conduit conduit.ping

and get a response like this:

{"result":"phab-server","error_code":null,"error_info":null}

You should now be able to access your instance over SSH on port 222 for normal login and administrative purposes. The Phabricator sshd runs on port 22 to handle authentication and command execution.

6. To create a git repository

Go to Diffusion -> New Repository -> Create a New Hosted Repository

Upgrade Phabricator

Since Phabricator is under active development, you should update it frequently. To update Phabricator:

– Stop the web server
– Run git pull in libphutil/, arcanist/ and phabricator/.
– Run phabricator/bin/storage upgrade.
– Restart the web server.
You can also use a script similar to this one to automate the process; a sketch follows below:
http://www.phabricator.com/rsrc/install/update_phabricator.sh
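
A rough sketch of such a script, assuming the install layout used in this post:

##############################################################
#!/bin/sh
# Stop the web server and daemons, pull all three repositories,
# run the schema migrations, then bring everything back up.
set -e
ROOT=/var/www/codereview

service apache2 stop
$ROOT/phabricator/bin/phd stop

(cd $ROOT/libphutil && git pull)
(cd $ROOT/arcanist && git pull)
(cd $ROOT/phabricator && git pull)

$ROOT/phabricator/bin/storage upgrade --force
$ROOT/phabricator/bin/phd start
service apache2 start
##############################################################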

Apache on the Cloud – The things you should know

LAMP forms the base of most web applications. As the load on a server increases, the bottlenecks in the underlying infrastructure become more apparent in the form of slow responses to user requests.

To overcome this slow response, the primary choice of most people is to add more hardware resources (in the case of AWS, increasing the instance type). This definitely increases performance, but it costs more money. The web server and the database eat most of the resources. The most commonly used web server is Apache and the most common database is MySQL, so if we can optimize these two, we can improve performance.

Apache optimization techniques can often provide significant acceleration boosts, even when other acceleration techniques, such as a CDN, are in use. mod_pagespeed is a module from Google for Apache HTTP Servers that can improve the page load times of your website; you can read more on this here. If you want to deploy a PHP app on the AWS cloud, it is better to use some kind of caching mechanism, as already discussed in our blog.

Once we were in a situation where we had to use a micro instance for a web server with fewer than 500 hits a day.

When the site went live, we were disappointed: when accessing the website, it would sometimes pause for several seconds before serving the requested page. It took hours to figure out what was going on. Finally we ran the top command and quickly discovered that when the site was being accessed by a certain number of users the CPU would spike, but the spike was not the typical user or system CPU. To test what was happening on the server, we used the Apache benchmark tool ab and ran the following command against localhost:

                                             #ab -n 100 -c 10 http://mywebserver.com/

This shows how fast our web server can handle 100 requests, with a maximum of 10 requests running concurrently. In the meantime, we monitored the output of top on the web server.

For further investigation we started with sar, the Linux command to collect, report, or save system activity information:

  #sar 1

      According to amazon documentation “Micro instances (t1.micro) provide a small amount of consistent CPU resources and allow you to increase CPU capacity in short bursts when additional cycles are available”.

If you use 100% CPU for more than a few minutes, Amazon will “steal” CPU time from the instance, meaning that they throttle your instance. This lasts as long as five minutes, then you get a few seconds of 100% again, and then the restrictions are back. This will affect your website, making it slow and even timing out requests. It basically means the physical hardware is busy and the hypervisor can’t give the VM the amount of CPU cycles it wants.

The real tuning is required on prefork. This is where we can tell Apache to generate only so many processes. The default values are high and cannot be handled by a micro instance. Suppose you get 10 concurrent requests for a PHP page and each request needs around 64MB of RAM (you have to make sure that the PHP memory_limit is above that value). That’s around 640MB of RAM on a micro instance with 613MB of RAM. And that is the case with just 10 connections; Apache is configured to allow 256 clients by default. We need to scale these down, normally to 10–12 MaxClients. In our case even this is a huge number, because 10–12 concurrent connections would use all our memory. If you want to be really cautious, make sure that your maximum memory usage stays below 613MB. Something like a 64M PHP memory limit and 8 MaxClients keeps you under your limit with space to spare; this helps ensure that our MySQL process survives when the server is under load.

MaxClients is an important tuning parameter for the performance of the Apache web server. We can calculate its value for a t1.micro instance.

Theoretically,

MaxClients = (Total Memory – Operating System Memory – MySQL memory) / Size per Apache process

A t1.micro gives us a server with 613MB of total memory. Suppose we are using RDS instead of a local MySQL server.

Stop Apache and run:

#ps aux | awk '{sum1 +=$4}; END {print sum1}'

This gives the percentage of memory used by processes other than Apache. Suppose it comes to around 5%, i.e. roughly 30MB.

From the top command we can check the average memory that each Apache process uses; suppose it is 60MB.

MaxClients = (613 – 30) / 60 = 9.71 ≈ 10
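
Translating that result into Apache’s prefork configuration, a minimal sketch (the numbers come from the calculation above, not universal values):

<IfModule mpm_prefork_module>
    StartServers          2
    MinSpareServers       2
    MaxSpareServers       4
    MaxClients           10
    MaxRequestsPerChild 500
</IfModule>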

Micro instances are awesome, especially when cost is a major concern, but they are not right for all applications. A simple website with only a few hundred hits a day will do just fine, since it will only need CPU in short bursts.

For servers that serve dynamic content, a better approach is to employ a reverse proxy. This can be done with Apache’s mod_proxy or Squid. The main advantages of this configuration are content caching, load balancing etc. The easy method is to use mod_proxy and the ProxyPass directive to pass content to another server. mod_proxy supports a degree of caching that can offer a significant performance boost. Another advantage is that, since the proxy server and the web server are likely to have a very fast interconnect, the web server can quickly serve up large content, freeing up an Apache process, while the proxy slowly feeds out the content to clients.

If you are using Ubuntu, you can enable the modules with:

                                        #a2enmod proxy

                                        #a2enmod proxy_http    

and in apache2.conf

                                         ProxyPass  /  http://192.168.1.46/

                                         ProxyPassReverse  /   http://192.168.1.46/

The ProxyPassReverse directive captures the responses from the web server and masks the URL as if it were responded to directly by Apache, hiding the identity/location of the backend web server. This is a good security practice, since an attacker won’t be able to learn the IP of our web server.

Caching with Apache2 is another important consideration. We can configure Apache to set the Expires HTTP header and the max-age directive of the Cache-Control HTTP header of static files, such as images, CSS and JS files, to a date in the future, so that these files will be cached by your visitors’ browsers. This saves bandwidth and makes the website appear faster; if a user visits your site a second time, static files are fetched from the browser cache.

                                      #a2enmod expires

  edit  /etc/apache2/sites-available/default

<IfModule mod_expires.c>
    ExpiresActive On
    ExpiresByType image/gif "access plus 4 weeks"
    ExpiresByType image/jpg "access plus 4 weeks"
</IfModule>

This tells browsers to cache .jpg and .gif files for four weeks.

If your server requires a large number of read/write operations, you might consider provisioned IOPS EBS volumes for your server. This is really effective if you run a database server on EC2 instances. You can use iostat on the command line to take a look at your reads/sec and writes/sec, and you can also use CloudWatch metrics to determine read and write operations.

Once we move to the security side of Apache, our major concern is DDoS attacks. If a server is under a DDoS attack, it is quite difficult to detect the attack before the damage is done. Attack packets usually have spoofed source IP addresses, hence it is more difficult to trace them back to their real source. The limit on the number of simultaneous requests that will be served by Apache is decided by the MaxClients directive, and is set to a safe limit by default. Any connection attempts over this limit will normally be queued up.

If you want to protect your Apache against DoS and DDoS attacks, use the mod_evasive module. This module is designed specifically as a remedy for Apache DoS attacks. It allows you to specify a maximum number of requests executed by the same IP address; if the limit is reached, the IP address is blacklisted for the time period you specify. A minimal configuration is sketched below.
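
A minimal sketch, assuming the Ubuntu package name libapache2-mod-evasive and the standard mod_evasive directives (the thresholds are placeholders you should tune for your traffic):

#apt-get install libapache2-mod-evasive

<IfModule mod_evasive20.c>
    DOSHashTableSize    3097
    DOSPageCount        5
    DOSSiteCount        50
    DOSPageInterval     1
    DOSSiteInterval     1
    DOSBlockingPeriod   60
</IfModule>

With these hypothetical numbers, an IP requesting the same page more than 5 times a second, or making more than 50 requests a second site-wide, is blocked for 60 seconds.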

From CAP, Puppet Now Chef, Evolution of Configuration Management Tools

CHEF, PUPPET & CAPISTRANO are basically used for two purposes:

Application Deployment is all of the activities that make a software system available for use.

Configuration Management is the task of tracking and controlling changes in the software. Configuration management practices include revision control and the establishment of baselines.

Let me explain how we evolved from the beginning, when we were using tools like ssh and scp, to the point where we began to abstract and equip ourselves with these sophisticated yet simple to use tools. Earlier we had:

  • ssh, which was used as a configuration management solution for admins.
  • scp, which acted as a secure channel for application deployment.

The need for any other tools was out of the question, until things got complicated!!!

HISTORY

Earlier an application deployment was just a few steps away, such as:

  1. scp app to production box
  2. restart server (optional)
  3. profit

And these software refreshes/updates were done:

  1. Manual (ssh)
  2. with shell scripts living on the servers
  3. or not done at all

CAPISTRANO
(Introduced by Jamis Buck, written in Ruby, initially for Rails project)

Capistrano is a developer tool for deploying web applications. It is typically installed on a workstation, and used to deploy code from your source code management (SCM) to one, or more servers. In its simplest form, Capistrano allows you to copy code from your source control repository (SVN or Git) to your server via SSH, and perform pre & post-deploy functions like restarting a webserver, busting cache, renaming files, running database migrations and so on.

Nice things cap introduced:

  1. Automate deploys with one set of files
  2. The files don’t have to live on the production server
  3. The language (Ruby) allows some abstraction

Now the application deployment step can be coded and tested like the rest of the project. It has become the de facto way to deploy Ruby on Rails applications, and has had tools like Webistrano built on top of it to provide a graphical interface to the command line tool.

Drawback: the tool seems to be widely used but not well supported.

PUPPET

(Written in Ruby and evolved from cfengine)

Luke Kanies came up with the idea for Puppet in 2003 after getting fed up with existing server-management software in his career as a systems administrator. In 2005 he quit his job at BladeLogic, a maker of data-center management software, and spent the next 10 months writing code to automate the dozens of steps required to set up a server with the right software, storage space, and network configurations. The result: scores of templates for different kinds of servers, which let systems administrators become, in Kanies’s metaphor, puppet masters, pulling on strings to give computers particular personalities and behaviors. He formed Puppet Labs to begin consulting for some of the thousands of companies using the software; the list includes Google, Zynga and Twitter.

Puppet is typically used in a client server formation, with all your clients talking to one or more servers. Each client contacts the servers periodically (every half hour by default), downloads the latest configuration and makes sure it is in sync with that configuration.

The server in Puppet is called the Puppet Master.
Puppet Manifests contain all the configuration details, which are declarative as opposed to imperative.

The DSL is not Ruby, as you are not writing scripts, you are writing definitions. Install order is determined through dependencies.
The Puppet Master is idempotent, which will make sure the client machines match the definitions. This is good, as you can implement changes across machines automatically just by updating the manifest on the Puppet Master.
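
A tiny hypothetical manifest gives the flavour of the declarative style; Puppet works out for itself that the package must be installed before the service can run:

package { 'ntp':
  ensure => installed,
}

service { 'ntp':
  ensure  => running,
  require => Package['ntp'],
}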

CHEF
(Written in Ruby, evolved from Puppet)

CHEF is an open source configuration management tool using pure Ruby, the Chef domain specific language, for writing system configuration related stuff (recipes and cookbooks).
CHEF brings a new feel with its interesting naming conventions relating to cookery, like Cookbooks (they contain code for a software package installation and configuration in the form of Recipes), Knife (API tool), Databags (which act like global variables) etc.

The Chef Server holds the deployment scripts, called Cookbooks and Recipes, configuration instructions, called Nodes, security details etc. The clients in the Chef infrastructure are called Nodes. Chef recipes are imperative as opposed to declarative. The DSL is extended Ruby, so you can write scripts as well as definitions. Install order is script order; there is NO dependency checking.
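
Since a recipe is just extended Ruby applied top to bottom, a minimal hypothetical recipe looks like this (the package, template and service names are placeholders):

# recipes/default.rb -- resources are applied in the order they are written
package "apache2"

template "/etc/apache2/sites-available/myapp.conf" do
  source "myapp.conf.erb"
  notifies :restart, "service[apache2]"
end

service "apache2" do
  action [:enable, :start]
end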

CHEF & PUPPET

Chef and Puppet automatically set up and tweak the operating systems and programs that run in massive data centers and the new-age “cloud” services, designed to replace massive data centers.

Chef Recipes are more programmer friendly, as they are easily understood by a developer, unlike a Puppet Manifest.

And when it comes to features in comparison to Puppet, Chef is rather more intriguing. For example, Chef’s ability to search an environment and use that information at run time is very appealing.

Knife is Chef’s powerful command line interface. Knife allows you to interact with your entire infrastructure and Chef code base. Use knife to bootstrap a server, build the scaffolding for a new cookbook, or apply a role to a set of nodes in your environment. You can use knife ssh to execute commands on any number of nodes in your environment. knife ssh + search is a very powerful combination; see the example below.
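
For instance, a hedged one-liner (the role name and SSH user are assumptions) that searches for every node carrying the webserver role and restarts Apache on all of them at once:

knife ssh "role:webserver" "sudo service apache2 restart" -x ubuntu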

Defining dependencies in Puppet was overly verbose and cumbersome. With Chef, order matters, and dependencies are met as long as we specify them in the proper order.

“We can deploy additional software applications on virtual machine instances without dealing with the overhead of doing everything manually,” Stowe explains. “We can do it with code: recipes that define how various applications and libraries are deployed and configured.” According to Stowe, creating and deploying a new software image now takes minutes or hours rather than hours or weeks. They call this technique DevOps because it applies traditional programming techniques to system administration tasks. “It’s just treating IT operations as a software development problem,” says Stowe, CEO of Cycle Computing, a Greenwich, Connecticut-based start-up that uses Chef to manage the software underpinning the online “supercomputing” service it offers to big businesses and academic outfits. “Before this, there were ways of configuring servers and managing them, but DevOps has gotten it right.”

Let’s CATEGORIZE

Let me help you see which buckets the above tools, and other similar ones, fall into:

App Deploy – Capistrano, ControlTier, Fabric, Func, mCollective
SysConfig – Chef, Puppet, cfengine, SmartFrog, Bcfg2
Cloud/VM – Xen, LXC, OpenVZ, Eucalyptus, KVM
OS Install – Kickstart, Jumpstart, Cobbler, OpenQRM, xCAT

DevOps on EC2 using Capistrano

DevOps is the combination of development and operations processes. Cloud with DevOps offers some fantastic properties: the ability to leverage, for your infrastructure, all the advancements made in software development around repeatability and testability; the ability to scale up in real time as needed; and, among other things, the power of self-healing systems.

The process piece of DevOps is about taking the principles behind Agile to the entire continuous software development process. The obvious step is bringing Agile ideas to the operations team, which is sorely needed. Traditionally in the enterprise, the application development team is in charge of gathering business requirements for a software program and writing code. The development team tests their program in an isolated development environment for quality assurance, and it is later handed over to the operations team. The operations team is tasked with deploying and maintaining the program. The problem with this paradigm is that when the two teams work separately, the development team may not be aware of operational roadblocks that prevent the program from working as anticipated.

Capistrano

Capistrano is a developer tool for running scripts on multiple servers, mainly used for deploying web applications onto those servers. It is typically installed on a workstation, and used to deploy code from your source code management to one or more servers. Capistrano was originally called “SwitchTower”; the name was changed to Capistrano in March 2006 because of a trademark conflict. It is a time saving command line tool, and it is very useful for AWS/EC2 servers because we can deploy code to thousands of AWS servers using a single command. For server security we commonly use AWS SSH key authentication, and in Capistrano we use this AWS SSH key to deploy the web applications to the AWS servers, as sketched below.
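
In Capistrano 2 this boils down to a couple of lines in config/deploy.rb; a minimal sketch (the key path is a placeholder):

[shell]
# use the EC2 key pair for SSH authentication (path is a placeholder)
ssh_options[:keys] = ["/home/user/.ssh/my-aws-key.pem"]
ssh_options[:forward_agent] = true
default_run_options[:pty] = true
[/shell]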

In cloud computing, deploying applications to production/live servers is always a delicate task. The whole process needs to be quick to minimize downtime. Automating the deployment process helps in running repetitive tasks while minimizing the possibility of human error. It is also a good idea to have a proven and easy way to roll back to a previous version if something goes wrong.

It is a standalone utility that can also integrate nicely with Rails. We simply provide Capistrano with a deployment “recipe” or “formula” that describes our various servers and their roles. It gives us single-command deployment, and it even allows us to roll a bad version out of production and revert back to the previous release very easily.

Capistrano Deployment

The main functionality of Capistrano is to deploy the Rails application which we have already developed, with “SVN” or “GIT” managing the code. It transfers all the files of the Rails application which we have developed on our local host to the AWS servers directly, by simply executing a single command in our command prompt.

Steps to deploy a rails application

[shell]gem install capistrano[/shell]

Now, we need to capistranize our Rails application using the following commands:

[shell]capify .[/shell]

It will create two files:

[shell]

config/deploy.rb
Capfile

[/shell]

How to set up deploy.rb file

[shell]

require 'rubygems'
require 'activesupport'
set :application, "<application name>"
set :scm_username, "<username>"
set :use_sudo, false
set :repository, "http://#{scm_username}@www.example.com/svn/trunk"
set :deploy_to, "/var/www/#{application}"
set :deploy_via, :checkout
set :scm, :git
set :user, "root"
role :app, "<domain_name>"
role :web, "<domain_name>"
role :db, "<domain_name>", :primary => true
namespace :migrations do
  desc "Run the Migrations"
  task :up, :roles => :app do
    run "cd #{current_path}; rake db:auto:migrate;"
  end
  task :down, :roles => :app do
    run "cd #{current_path}; rake db:drop; rake db:create"
  end
end

[/shell]

where,

‘scm_username’ is your user name
‘application’ is an arbitrary name you create to identify your application on the server
‘use_sudo’ specifies to Capistrano that it does not need to prepend ‘sudo’ to all the commands it runs
‘repository’ identifies where your Subversion repository is located

If we aren’t deploying to the server’s default path, we need to specify the actual location using the ‘deploy_to’ variable, as given below:

[shell]
set :deploy_to, "/var/www/#{application}"
set :deploy_via, :checkout
[/shell]

If we are using git to manage our source code, specify the SCM using the ‘scm’ variable, as given below:

[shell]
set :scm, :git
set :user, "root"
role :app, "<domain_name>"
role :web, "<domain_name>"
role :db, "<domain_name>", :primary => true
[/shell]

Since most Rails users will have the same domain name for their web, app and database, we can simply use the domain variable we set earlier.

[shell]
namespace :migrations do
  desc "Run the Migrations"
  task :up, :roles => :app do
    run "cd #{current_path}; rake db:auto:migrate;"
  end
  task :down, :roles => :app do
    run "cd #{current_path}; rake db:drop; rake db:create"
  end
end

[/shell]

After completing our settings in the deploy.rb file, we need to commit the application using the “svn commit” command if we use SVN.

Then we need to run the following command:

[shell]

cap deploy:setup

[/shell]

It is used to create the directory structure on the server.

[shell]cap deploy:check[/shell]

It checks dependencies such as directory permissions and the utilities necessary to deploy the application with Capistrano.

If everything is successful, you should see a message like:

You appear to have all necessary dependencies installed

And finally deploy the application using the following command:

[shell]cap deploy[/shell]

Command finished successfully

To clean up the releases directory, leaving the five most recent releases:

[shell]cap cleanup[/shell]

To print the difference between what was last deployed and what is currently in our repository:

[shell]cap diff_from_last_deploy[/shell]

To roll back to the previously deployed version:

[shell]cap deploy:rollback:code[/shell]

Amazon’s EC2 cloud cuts the requisition time of the order & delivery stages down to just minutes. This is already a 75% saving in deployment time! But without automated deployment, you’ll still need a week to get your application installed.

Creating phusion passenger AMI on Amazon EC2

Phusion Passenger is an Apache and Nginx module for deploying Ruby web applications (such as those built on the Ruby on Rails web framework). Phusion Passenger works on any POSIX-compliant operating system, which means practically any operating system except Microsoft Windows.

Here we are not going to discuss much about Ruby on Rails applications, as our aim is to create an AMI of an Ubuntu AWS instance from which we can launch instances ready-made for developing and deploying Rails applications.

Install apache2 web-server

[bash]
sudo apt-get install apache2 ( By default its DocumentRoot is /var/www/ )
[/bash]

 

Install mysql-server and mysql-client (to support Rails applications that access a database):

[bash]sudo apt-get install mysql-server mysql-client[/bash]

Install Ruby from repository

The default ruby1.8 is missing some important files, so install ruby1.8-dev; otherwise, at some stage when using gem install, it may fail with “Error: Failed to build gem native extensions”.

[bash]sudo apt-get install ruby1.8-dev[/bash]

 

Install RubyGems

Install rubygems >= 1.3.6. The package can be downloaded from here:

[bash]wget http://rubyforge.org/frs/download.php/70696/rubygems-1.3.7.tgz[/bash]

 

[bash]
tar xvzf rubygems-1.3.7.tgz
cd rubygems-1.3.7
sudo ruby setup.rb
sudo ln -s /usr/bin/gem1.8 /usr/bin/gem
[/bash]

Install Rails via rubygems

 

 

Once rubygems is installed, use it to install Rails:

[bash]sudo gem install rails[/bash]

Installing Phusion Passenger

 

There are three ways to install Phusion Passenger:

1. By installing the Phusion Passenger gem.

2. By downloading the source tarball from the Phusion Passenger website (passenger-x.x.x.tar.gz).

3. By installing the native Linux package (eg: Debian package)

Before installing, you will probably need to switch to the root user first. The Phusion Passenger installer will attempt to automatically detect Apache, and compile Phusion Passenger against that Apache version. It does this by looking for the apxs or apxs2 command in the PATH environment variable.

An Apache installed in a non-standard location can prevent the Phusion Passenger installer from detecting it. To solve this, become the root user and export the path of apxs, as sketched below.
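
A minimal sketch, assuming Apache lives under /opt/apache2 (the path is a placeholder; the installer looks for apxs/apxs2 on the PATH, and Passenger also honours the APXS2 environment variable):

[bash]
export PATH=/opt/apache2/bin:$PATH
export APXS2=/opt/apache2/bin/apxs2
[/bash]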

The easiest way to install Passenger is via the gem.

Install rubygems and then run the Phusion Passenger installer, typing the following commands as root.

1. Open a terminal, and type:

[bash]gem install passenger[/bash]

2. Type:

[bash]passenger-install-apache2-module[/bash]

and follow the instructions from the installer.

The installer will:

1. Install the Apache2 module.

2. Instruct you how to configure Apache.

3. Inform you how to deploy a Ruby on Rails application.

If anything goes wrong, this installer will advise you on how to solve any problems.

The installer will ask you to add the following lines to the apache2.conf file:

[bash]
LoadModule passenger_module /usr/lib/ruby/gems/1.8/gems/passenger-3.0.0/ext/apache2/mod_passenger.so
PassengerRoot /usr/lib/ruby/gems/1.8/gems/passenger-3.0.0
PassengerRuby /usr/bin/ruby1.8
[/bash]


Now suppose you have a Rails application in the directory /var/www/RFP_tool/. Add the following virtual host entry to your Apache configuration file:

[bash]
<VirtualHost *:80>
    ServerName www.yoursite.com
    DocumentRoot /var/www/RFP_tool/public
    <Directory /var/www/RFP_tool/public>
        AllowOverride all
        Options -MultiViews
    </Directory>
</VirtualHost>
[/bash]

Restart your apache server.

Phusion Passenger installation is finished.

Installation via the source tarball

Extract the tarball to whatever location you prefer:

[bash]
cd /usr/local/passenger/
tar xzvf passenger-x.x.x.tar.gz
/usr/local/passenger/passenger-x.x.x/bin/passenger-install-apache2-module
[/bash]

Please follow the instructions given by the installer. Do not remove the passenger-x.x.x folder after installation; furthermore, the passenger-x.x.x folder must be accessible by Apache.

CREATING AN AMI OF AN EC2 INSTANCE

First you will have to download ec2-api-tools.zip from

http://www.amazon.com/gp/redirect.html/ref=aws_rc_ec2tools?location=http://s3.amazonaws.com/ec2-downloads/ec2-api-tools.zip&token=A80325AA4DAB186C80828ED5138633E3F49160D9

[bash]
unzip ec2-api-tools.zip
mkdir ~/ec2
cp -rf ec2-api-tools/* ~/ec2
[/bash]

Upload your AWS certificate and private key to /mnt of the instance.

Then add the following to ~/.bashrc:

[bash]
export EC2_HOME=~/ec2
export PATH=$PATH:$EC2_HOME/bin
export EC2_PRIVATE_KEY=/mnt/pk-xxxxxxxxxxxxxxxxxxx.pem
export EC2_CERT=/mnt/cert-xxxxxxxxxxxxxxxx.pem
export JAVA_HOME=/usr/local/java/   # set this to your JAVA_HOME
export PATH=~/ec2/bin:$PATH
[/bash]

If your EC2 instance is an EBS-backed one, you can use the following command to create an AMI:

[bash]ec2-create-image -n your-image-name instance-id[/bash]

If your instance is an S3-backed (instance store) one, you will have to install ec2-ami-tools first. They can be downloaded from

http://s3.amazonaws.com/ec2-downloads/ec2-ami-tools.zip

[bash]
unzip ec2-ami-tools.zip
cp ec2-ami-tools-x.x-xxxxx/bin/* ~/ec2/bin
[/bash]

Open ~/.bashrc again and add:

[bash]export EC2_AMITOOL_HOME=~/ec2/ec2-ami-tools-1.3-56066/[/bash]

Now you can use the following commands to create an AMI of your S3-backed instance:

[bash]
mkdir /mnt/bundle-vol/
ec2-bundle-vol -u USER-ID -c /mnt/cert-xxxxxxx.pem -k /mnt/pk-xxxx.pem -d /mnt/bundle-vol
[/bash]

( Login to your AWS account; your USER-ID is available from Account–> Security Credentials )

[bash]
ec2-upload-bundle -b s3-bucket-name -a aws-access-key -s aws-secret-key -d /mnt/bundle-vol/ -m /mnt/bundle-vol/image.manifest.xml
ec2-register -K /mnt/pk-xxxxxx.pem -C /mnt/cert-xxxxxxx.pem s3-bucket-name/image.manifest.xml -n name-of-the-image
[/bash]

To see the created images

[bash]ec2-describe-images [/bash]