Posts Tagged ‘Operating System’

Apache on the Cloud – The things you should know

    LAMP forms the base of most web applications.  As the load on a server increases, the bottlenecks in the underlying infrastructure become more apparent in the form of slow responses to user requests.

     To overcome this slow response, the primary choice of most people is to add more hardware resources (in the case of AWS, moving to a larger instance type). This will definitely increase performance, but it will also cost more money. The web server and the database consume most of the resources. The most commonly used web server is Apache and the most common database is MySQL, so if we can optimize these two we can improve performance.

   Apache optimization techniques can often provide significant acceleration boosts even when other acceleration techniques, such as a CDN, are already in use.  mod_pagespeed is a module from Google for the Apache HTTP Server that can improve the page load times of your website; you can read more on it here.  If you want to deploy a PHP app on the AWS cloud, it is better to use some kind of caching mechanism, which has already been discussed on our blog.

      Once we came across a situation where we had to use a micro instance for a web server with fewer than 500 hits a day.

      When the site went live, we were disappointed: when accessing the website, it would sometimes pause for several seconds before serving the requested page. It took hours to figure out what was going on. Finally we ran the command top and quickly discovered that when the site was being accessed by a certain number of users the CPU would spike, but the spike was not the typical user or system CPU. To test what was happening on the server we used the Apache benchmark tool 'ab' and ran the following command on localhost.

                                             #ab -n 100 -c 10 http://mywebserver.com/

      This shows how fast our web server can handle 100 requests, with a maximum of 10 requests running concurrently. In the meantime we were monitoring the output of top on the web server.

     For further investigation we started with sar, the Linux command to collect, report, or save system activity information:

  #sar 1

      According to the Amazon documentation, “Micro instances (t1.micro) provide a small amount of consistent CPU resources and allow you to increase CPU capacity in short bursts when additional cycles are available”.

       If you use 100% CPU for more than a few minutes, Amazon will “steal” CPU time from the instance, meaning that they throttle your instance. This can last as long as five minutes, and then you get a few seconds of 100% again before the restrictions come back. This will affect your website, making it slow and even timing out requests. The steal basically means the physical hardware is busy and the hypervisor can’t give the VM the amount of CPU cycles it wants.

   The real tuning required is on the prefork MPM. This is where we can tell Apache how many processes it may generate. The default values are high and cannot be handled by a micro instance. Suppose you get 10 concurrent requests for a PHP page and each request needs around 64MB of RAM (you have to make sure that the PHP memory_limit is above that value). That's around 640MB of RAM on a micro instance with 613MB of RAM, and that is just with 10 connections; Apache is configured to allow 256 clients by default. We need to scale these down, normally to 10-12 MaxClients. In our case even that is a huge number, because 10-12 concurrent connections would use all our memory. If you want to be really cautious, make sure that your maximum memory usage stays below 613MB. Something like a 64M PHP memory limit and 8 MaxClients keeps you under the limit with space to spare, and helps ensure that the MySQL process survives when your server is under load. A sketch of such a prefork section is shown below.
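   As a rough illustration only, a prefork section for a 613MB t1.micro might look like the following; every number here is an assumption and should be adjusted to your own memory measurements:

                                        <IfModule mpm_prefork_module>
                                            StartServers          2
                                            MinSpareServers       2
                                            MaxSpareServers       4
                                            MaxClients            8
                                            MaxRequestsPerChild   500
                                        </IfModule>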

           MaxClients is an important tuning parameter for the performance of the Apache web server. We can calculate a suitable value for a t1.micro instance.

Theoretically,

MaxClients =(Total Memory – Operating System Memory – MySQL memory) / Size Per Apache process.

A t1.micro gives us a server with 613MB of total memory. Suppose we are using RDS instead of a local MySQL server.

Stop Apache and run:

#ps aux | awk '{sum += $6} END {print sum/1024}'

This prints the resident memory, in MB, used by processes other than Apache (column 6 of ps aux is the resident set size in KB).

Suppose we get a value around 30.

From the top command we can check the average memory that each Apache process uses.

Suppose it is 60MB.

MaxClients = (613 - 30) / 60 = 9.7 ≈ 10
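The same arithmetic can be scripted. This is only a sketch of the calculation above; the 613MB instance size and the 60MB-per-process figure are assumptions taken from this example, and Apache should be stopped when it runs:

    #!/bin/bash
    # Rough MaxClients estimate for a t1.micro
    TOTAL_MB=613        # total memory of the instance
    APACHE_AVG_MB=60    # average resident size of one Apache process (taken from top)
    # Resident memory (MB) used by everything currently running (Apache stopped)
    OTHER_MB=$(ps aux | awk '{sum += $6} END {printf "%d", sum/1024}')
    echo "Suggested MaxClients: $(( (TOTAL_MB - OTHER_MB) / APACHE_AVG_MB ))"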

       Micro instances are awesome, especially when cost is a major concern; however, they are not right for all applications. A simple website with only a few hundred hits a day will do just fine, since it will only need CPU in short bursts.

      For servers that serve dynamic content, a better approach is to employ a reverse proxy. This can be done with Apache's mod_proxy or Squid. The main advantages of this configuration are content caching, load balancing, etc. The easiest method is to use mod_proxy and the ProxyPass directive to pass content to another server. mod_proxy supports a degree of caching that can offer a significant performance boost, but another advantage is that since the proxy server and the web server are likely to have a very fast interconnect, the web server can quickly serve up large content to the proxy, freeing up an Apache process, while the proxy slowly feeds out the content to clients.

If you are using Ubuntu, you can enable the module with:

                                        #a2enmod proxy

                                        #a2enmod proxy_http    

and add the following to apache2.conf:

                                         ProxyPass  /  http://192.168.1.46/

                                         ProxyPassReverse  /   http://192.168.1.46/

         The ProxyPassReverse directive captures the responses from the web server and rewrites the URL so the response appears to come directly from the Apache proxy, hiding the identity/location of the backend web server. This is a good security practice, since an attacker won't be able to learn the IP of our web server.

      Caching with Apache2 is another important consideration. We can configure Apache to set the Expires HTTP header and the max-age directive of the Cache-Control HTTP header of static files, such as images, CSS and JS files, to a date in the future so that these files will be cached by your visitors' browsers. This saves bandwidth and makes the website appear faster on a repeat visit, because static files are fetched from the browser cache.

                                      #a2enmod expires

  edit  /etc/apache2/sites-available/default

  <IfModule mod_expires.c>
               ExpiresActive On
               ExpiresByType image/gif "access plus 4 weeks"
               ExpiresByType image/jpeg "access plus 4 weeks"
</IfModule>

This tells browsers to cache .gif and .jpeg files for four weeks.

       If your server requires a large number of read/write operations, you might consider provisioned IOPS EBS volumes for your server. This is really effective if you run a database server on EC2 instances. We can use iostat on the command line to look at reads/sec and writes/sec, and CloudWatch metrics can also be used to determine read and write operations.
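       For example, with the sysstat package installed, extended per-device statistics can be sampled every five seconds and the r/s and w/s columns checked for the EBS device:

                                             #iostat -xd 5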

       Once we move to the security side of Apache, our major concern is DDoS attacks. If a server is under a DDoS attack, it is quite difficult to detect the attack before the damage is done. Attack packets usually have spoofed source IP addresses, which makes it harder to trace them back to their real source. The limit on the number of simultaneous requests that Apache will serve is decided by the MaxClients directive and is set to a safe limit by default; any connection attempts over this limit are normally queued up.

     If you want to protect your Apache against DoS and DDoS attacks, use the mod_evasive module. This module is designed specifically as a remedy for Apache DoS attacks. It allows you to specify a maximum number of requests executed by the same IP address; if the limit is reached, the IP address is blacklisted for the time period you specify.
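     A typical mod_evasive snippet looks something like the following; the thresholds are only illustrative and should be tuned against your real traffic before relying on them:

                                        <IfModule mod_evasive20.c>
                                            DOSHashTableSize    3097
                                            DOSPageCount        5
                                            DOSSiteCount        50
                                            DOSPageInterval     1
                                            DOSSiteInterval     1
                                            DOSBlockingPeriod   60
                                        </IfModule>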

Installation of MongoDB and its performance test

Why MongoDB?

  • Document-oriented
    • Documents (objects) map nicely to programming language data types
    • Embedded documents and arrays reduce need for joins
    • Dynamically-typed (schemaless) for easy schema evolution
    • No joins and no multi-document transactions for high performance and easy scalability
  • High performance
    • No joins and embedding makes reads and writes fast
    • Indexes including indexing of keys from embedded documents and arrays
    • Optional streaming writes (no acknowledgements)
  • High availability
    • Replicated servers with automatic master failover
  • Easy scalability
    • Automatic sharding (auto-partitioning of data across servers)
    • Reads and writes are distributed over shards
    • No joins or multi-document transactions make distributed queries easy and fast
    • Eventually-consistent reads can be distributed over replicated servers

Mongo data model

  • A Mongo system holds a set of databases
  • A database holds a set of collections
  • A collection holds a set of documents
  • A document is a set of fields
  • A field is a key-value pair
  • A key is a name (string)
  • A value is a
    • basic type like string, integer, float, timestamp, binary, etc.,
    • a document, or
    • an array of values

    Mongo query language

  • To retrieve certain documents from a db collection, you supply a query document containing the fields the desired documents should match. For example, {name: {first: 'John', last: 'Doe'}} will match all documents in the collection with a name of John Doe. Likewise, {'name.last': 'Doe'} will match all documents with a last name of Doe, and {'name.last': /^D/} will match all documents with a last name starting with 'D' (regular expression match).
  • Queries will also match inside embedded arrays. For example, {keywords: 'storage'} will match all documents with 'storage' in their keywords array. Likewise, {keywords: {$in: ['storage', 'DBMS']}} will match all documents with 'storage' or 'DBMS' in their keywords array.
  • If you have lots of documents in a collection and you want to make a query fast, build an index for that query, for example ensureIndex({'name.last': 1}) or ensureIndex({keywords: 1}). Note that indexes occupy space and slow down updates a bit, so use them only when the tradeoff is worth it. (A short shell example follows below.)
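Put together in the mongo shell, and assuming a hypothetical collection named people, the pattern looks like this:

> db.people.find({"name.last": "Doe"})                     // match on an embedded field
> db.people.find({keywords: {$in: ["storage", "DBMS"]}})   // match inside an array
> db.people.ensureIndex({"name.last": 1})                  // index the field used by the query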

Install MongoDB on Ubuntu 10.04

Configure Package Management System (APT)

The Ubuntu package management tools (i.e. dpkg and apt) ensure package consistency and authenticity by requiring that distributors sign packages with GPG keys. Issue the following command to import the 10gen public GPG key:

sudo apt-key adv --keyserver keyserver.ubuntu.com --recv 7F0CEB10

Create a /etc/apt/sources.list.d/10gen.list file and include the following line for the 10gen repository.

deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen

Now issue the following command to reload your repository:

sudo apt-get update

Install Packages

Issue the following command to install the latest stable version of MongoDB:

sudo apt-get install mongodb-10gen

When this command completes, you have successfully installed MongoDB! Continue for configuration and start-up suggestions.

Configure MongoDB

These packages configure MongoDB using the /etc/mongodb.conf file in conjunction with the control script. You will find the control script is at /etc/init.d/mongodb.

This MongoDB instance will store its data files in the /var/lib/mongodb and its log files in /var/log/mongodb, and run using the mongodb user account.

Note

If you change the user that runs the MongoDB process, you will need to modify the access control rights to the /var/lib/mongodb and /var/log/mongodb directories.

Controlling MongoDB

Starting MongoDB

You can start the mongod process by issuing the following command:

sudo service mongodb start

You can verify that mongod has started successfully by checking the contents of the log file at /var/log/mongodb/mongodb.log.

Stopping MongoDB

As needed, you may stop the mongod process by issuing the following command:

sudo service mongodb stop

Restarting MongoDB

You may restart the mongod process by issuing the following command:

sudo service mongodb restart

Controlling mongos

As of the current release, there are no control scripts for mongos. mongos is only used in sharding deployments and typically does not run on the same systems where mongod runs. You can use the mongodb script referenced above to derive your own mongos control script.

Using MongoDB

Among the tools included with the MongoDB package is the mongo shell. You can connect to your MongoDB instance by issuing the following command at the system prompt:

mongo
> show dbs                                   —> To list your databases
> use <databasename>                         —> To switch database
> db.createCollection("collectionname")      —> To create a collection
> db.collectionname.find()                   —> To see the contents of the collection
> db.addUser("theadmin", "anadminpassword")  —> To create a user and password

MongoDB performance test :-

To monitor the database system we can use mongotop.

Mongotop tracks and reports the current read and write activity of a MongoDB instance.
Mongotop provides per-collection visibility into use.
Use mongotop to verify that activity and use match expectations.
Mongotop returns time values specified in milliseconds (ms.)
Mongotop only reports active namespaces or databases, depending on the --locks option.
If you don’t see a database or collection, it has received no recent activity.

By default mongotop connects to the MongoDB instance running on localhost port 27017; however, mongotop can optionally connect to remote mongod instances.
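For example, to sample a remote instance every 15 seconds (the hostname below is illustrative):

mongotop --host db1.example.com --port 27017 15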

Next, we can use Mongostat

Mongostat captures and returns counters of database operations. It reports operations on a per-type basis (e.g. insert, query, update, delete, etc.). This format makes it easy to understand the distribution of load on the server. Use mongostat to understand the distribution of operation types and to inform capacity planning.
The mongostat utility provides a quick overview of the status of a currently running mongod or mongos instance. mongostat is functionally similar to the UNIX/Linux file system utility vmstat, but provides data regarding mongod and mongos instances.

Use  db.serverStatus()
It provides an overview of the database process’s state.

Then there is the REST interface.

MongoDB provides a REST interface that exposes diagnostic and monitoring information in a simple web page. Enable it by setting rest to true; the page is then served on the local host interface on a port numbered 1000 higher than the database port. In the default configuration the REST interface is therefore accessible on 28017. For example, to access the REST interface on a locally running mongod instance: http://localhost:28017
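With the old-style configuration format used by these packages, that is a one-line change in /etc/mongodb.conf (restart mongod afterwards):

rest = true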

These are a few basic tips on making your application better/faster/stronger without knowing anything about indexes or sharding.

Connecting

Connecting to the database is a (relatively) expensive operation. Try to minimize the number of times you connect and disconnect: use persistent connections or connection pooling (depending on your language).

There are some side effects with the PHP connection code:

$connection = new Mongo ( );

$connection->connect( );

In this code it appears the user wants to create a new connection. However, under the hood the following happens:

1. The constructor connects to the database.
2. connect() sees that you are already connected and assumes you want to reset the connection.
3. The driver disconnects from the database.
4. It connects again.

The result is that you have doubled your execution time.

ObjectIds

ObjectIds can seem uncomfortable to work with, so people sometimes convert their ObjectIds into strings. The problem is that an ObjectId takes up 12 bytes, but its string representation takes up 29 bytes (almost two and a half times bigger).

Numbers vs. Strings

MongoDB is type-sensitive and it’s important to use the correct type: numbers for numeric values and strings for strings.

If you have large numbers and you save them as strings ("1234567890" instead of 1234567890), MongoDB may slow down as it strcmps the entire length of the number instead of doing a quicker numeric comparison. Also, "12" is going to be sorted as less than "9", because MongoDB will use string, not numeric, comparison on the values. This can lead to some errors.
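A quick way to see this in the mongo shell (the collection name here is made up):

> db.nums.insert({n: "12"})
> db.nums.insert({n: "9"})
> db.nums.find().sort({n: 1})   // "12" comes back before "9" because the comparison is lexicographic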

Driver-specific
Find out whether your driver has particular weaknesses (or strengths). For instance, the Perl driver is one of the fastest drivers, but it is not good at decoding Date types (Perl's DateTime objects take a long time to create).
MongoDB adopts a document-oriented format, so it is more similar to an RDBMS than to a key-value or column-oriented store.

MongoDB operates on a memory basis and places high performance above data scalability. MongoDB uses BSON for data storage.

Mongo uses memory-mapped files, which means that a lot of the memory reported by tools such as top may not actually represent RAM usage. Check mem["resident"], which tells you how much RAM Mongo is actually using.

"mem" : {
    "resident" : 2,
    "virtual" : 2396,
    "supported" : true,
    "mapped" : 0
},

Backup

There are basically two approaches to backing up a Mongo database:

Mongodump and mongorestore are the classic approach: they dump the contents of the database to files. The backup is stored in the same format that Mongo uses internally, so it is very efficient, but it is not a point-in-time snapshot.
To get a point-in-time snapshot, shut the database down, copy the disk files (e.g. with cp) and then start mongod up again. Alternatively, rather than shutting mongod down before making your point-in-time snapshot, you can just stop it from accepting writes:

> db._adminCommand({fsync: 1, lock: 1})
{
        "info" : "now locked against writes, use db.$cmd.sys.unlock.findOne() to unlock",
        "ok" : 1
}

To unlock the database again, you need to switch to the admin database and then unlock it

> use admin
switched to db admin
> db.$cmd.sys.unlock.findOne()
{ "ok" : 1, "info" : "unlock requested" }

Replication
Start your master and slave up like this:

$ mongod --master --oplogSize 500

$ mongod --slave --source localhost:27017 --port 3000 --dbpath /data/slave

When seeding a new slave server from the master, use the --fastsync option.

You can see what’s going on with these two commands:
> db.printReplicationInfo() # tells you how long your oplog will last
> db.printSlaveReplicationInfo() # tells you how far behind the slave is

If the slave isn't keeping up, check the mongo log for any recent errors. Try connecting with the mongo console and running queries from it to see if everything is working. Run the status commands above to try to find out which database is taking up resources.

Timeout

Connection timeout in milliseconds. Defaults to 20000

Connection::query_timeout.

How many milliseconds to wait for a response from the server. Set to 30000 (30 seconds) by default. -1 waits forever (or until TCP times out, which is usually a long time).

Default pool

The default pool has a maximum of 10 connections per MongoDB host. This value is controlled by the variable "connectionsPerHost" within the driver's options class (MongoOptions in the Java driver).

MongoDB Server Connections

The MongoDB server has a property called "maxConns" that sets the maximum number of simultaneous connections. The default for maxConns is 80% of the available file descriptors. One way to check the number of connections is by opening the mongo shell and executing:

> db.serverStatus()

The standard format of the MongoDB connection URI used to connect to a MongoDB database server is:

mongodb://[username:password@]host1[:port1][,host2[:port2],…[,hostN[:portN]]][/[database][?options]]

Finding the Min and Max values in MongoDB

In MongoDB, the min() and max() functions work as limiters – essentially the same as $gte (>=) and $lt (<).

To find the highest (maximum) value in MongoDB, you can use this command:

db.thiscollection.find().sort({"thisfieldname": -1}).limit(1)

This essentially sorts the data by the field name in descending order and takes the first value.

The lowest (minimum) value can be determined in a similar way:

    db.thiscollection.find().sort({"thisfieldname": 1}).limit(1)

Memory Mapped Storage Engine :-

This is the current storage engine for MongoDB, and it uses memory-mapped files for all disk I/O.  Using this strategy, the operating system’s virtual memory manager is in charge of caching.  This has several implications:

There is no redundancy between file system cache and database cache: they are one and the same.
MongoDB can use all free memory on the server for cache space automatically without any configuration of a cache size.
Virtual memory size and resident size will appear to be very large for the mongod process.

This is benign: virtual memory space will be just larger than the size of the datafiles open and mapped; resident size will vary depending on the amount of memory not used by other processes on the machine.

This command shows the memory usage information :- db.serverStatus().mem

For example :-

> db.serverStatus().mem
{
    "bits" : 64,
    "resident" : 31,
    "virtual" : 146,
    "supported" : true,
    "mapped" : 0,
    "mappedWithJournal" : 0
}

We can verify there is no memory leak in the mongod process by comparing the mem.virtual and mem.mapped values (these values are in megabytes).  If you are running with journaling disabled, the difference should be relatively small compared to total RAM on the machine. If you are running with journaling enabled, compare mem.virtual to 2*mem.mapped.   Also watch the delta over time; if it is increasing consistently, that could indicate a leak.

We can also use the free command to check how much memory is being used for memory-mapped files:

Here 2652MB of memory is being used to memory-map files:

root@manager-desktop:~# free -tm

                    total       used       free     shared    buffers     cached
Mem:                 3962       3602        359          0        411       2652
-/+ buffers/cache:                538       3423
Swap:                1491         52       1439
Total:               5454       3655       1799

Garbage collection handling :-

When we remove an object from a MongoDB collection, the space it occupied is not automatically garbage collected, and new records are only appended to the end of the data files, making them grow bigger and bigger. MongoDB maintains lists of deleted blocks within the datafiles when objects or collections are deleted. This space is reused by MongoDB but never freed to the operating system.

To shrink the amount of physical space used by the datafiles themselves, by reclaiming deleted blocks, we must rebuild the database using the command db.repairDatabase(). repairDatabase copies all the database records to new files.

We will need enough free disk space to hold both the old and new database files while the repair is running, and repairDatabase will take a long time to complete. Also, rather than compacting an entire database, you can compact just a single collection by using db.runCommand({compact: 'collectionname'}).

This does not shrink any datafiles, however; it only defragments deleted space so that larger objects might reuse it. The compact command will never delete or shrink database files, and in general it requires extra space to do its work. Thus, it is not a good option when you are running critically low on disk space.

MySQL Optimization

Database optimization is the process of configuring a database to use system resources efficiently and perform tasks quickly. To optimize MySQL you should know the workflow of the entire system: your hardware, operating system, disk I/O performance, etc.
Why Optimize
You can do more with less. The default MySQL setup is optimized for a minimal system, because it should work well on minimal hardware. But when you run a dedicated MySQL server with high traffic and complex queries, you have to optimize MySQL.
MySQL Server Tuning Considerations
Here you will find some common optimization parameters.

  • MySQL variables
  • Hardware
  • Disk
  • Application

MySQL Optimization
MySQL global variables don't have any predefined optimum values; it is a trial-and-monitor process, and it depends on all the parameters above. Here you will see some of the common ones.
key_buffer_size
This is the size of the buffer used for index blocks of MyISAM tables. On a dedicated MySQL server with the MyISAM storage engine you can allocate 25-30% of the system's total memory to key_buffer_size. To fine-tune key_buffer_size you can compare the variables Key_reads and Key_read_requests.
This ratio should be at least 1:100.

SHOW STATUS LIKE '%key_read%';
+-------------------+-------------+
| Variable_name     | Value       |
+-------------------+-------------+
| Key_read_requests | 10726813161 |
| Key_reads         |    92790146 |
+-------------------+-------------+
Here the ratio is 1:115, which is acceptable.
But suppose you get a ratio of 1:10; then you need to add more key buffer and upgrade the hardware accordingly.
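If you do need a bigger key buffer, it is set in my.cnf under the [mysqld] section. The value below is only an example for a dedicated MyISAM server with a few GB of RAM:

[mysqld]
key_buffer_size = 512M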
Query Cache
“My website is too slow while loading dynamic pages.” If this is a MySQL database related issue, the following MySQL variables will be your solution.
query_cache_type
Sets the query cache type. There are three possible values: 0, 1 or 2.

0 Do not cache any query results
1 Cache query results
2 Cache results on demand; only cacheable queries that begin with SELECT SQL_CACHE

query_cache_size
The amount of memory used to cache query results. The default is 0, which disables the query cache.
The optimum value depends on your application.
query_cache_limit
Do not cache results that are larger than this number of bytes. The default value is 1MB.
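A starting point in my.cnf might look like this; the sizes are illustrative and should be tuned by watching the Qcache counters shown below:

[mysqld]
query_cache_type  = 1
query_cache_size  = 64M
query_cache_limit = 1M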
Status checking
SHOW STATUS LIKE '%qcache%';
+-------------------------+----------+
| Variable_name           | Value    |
+-------------------------+----------+
| Qcache_free_blocks      | 1        |
| Qcache_free_memory      | 8371272  |
| Qcache_hits             | 23547551 |
| Qcache_inserts          | 46909131 |
| Qcache_lowmem_prunes    | 5110536  |
| Qcache_not_cached       | 2760196  |
| Qcache_queries_in_cache | 0        |
| Qcache_total_blocks     | 1        |
+-------------------------+----------+
There were 23547551 cache hits, 46909131 query results were inserted into the cache, and 2760196 queries were not cached. Here the issue will be either that some results are larger than query_cache_limit or that query_cache_size itself is too small. You have to trial and monitor 🙂
Qcache_lowmem_prunes.
When a query is removed from the query cache, this value is incremented. If it increases quickly and you still have memory to spare, you can increase query_cache_size; if it never increases, you can reduce the cache size.

sort_buffer
The sort buffer is useful for speeding up myisamchk operations. It can also be useful when performing large numbers of sorts.

tmp_table_size

This variable determines the maximum size of an in-memory temporary table. The effective maximum is the smaller of tmp_table_size and max_heap_table_size. You can compare Created_tmp_disk_tables and Created_tmp_tables to tune tmp_table_size, as shown below.
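Both counters can be read with a single statement; a high proportion of Created_tmp_disk_tables relative to Created_tmp_tables suggests raising tmp_table_size (and max_heap_table_size):

SHOW GLOBAL STATUS LIKE 'Created_tmp%';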

innodb_buffer_pool_size

This variable is the InnoDB counterpart of key_buffer_size for MyISAM tables.
On a dedicated MySQL server using InnoDB you can set this up to 80% of RAM.
Hardware for mysql
If you have large tables (>3GB), you should consider 64-bit hardware, as MySQL uses a lot of 64-bit integers internally.

You need more memory (RAM) if you want MySQL to handle a large number of connections simultaneously. More RAM will also speed up key updates by keeping most of the pages in RAM.

Another consideration is the Ethernet device; you can use 1G Ethernet on a dedicated MySQL server for fast remote connections.

Disk performance is also an important parameter.
Disk Optimization
Striping disks (RAID 0) will increase both read and write throughput.

Don’t use RAID 1 or mirroring on disk for temporary files.

On Linux, mount the disks with async (default) and noatime.
Optimizing your application
Use caching in your application

Specify the column names in queries (e.g. don't use SELECT * FROM ...)

Use persistent connections

Use EXPLAIN to explain! You will see details below.

Queries and Indexes

Let us start with a simple query: SELECT firstname FROM student WHERE id='145870';
MySQL starts searching from the first row to find the student with id 145870; it does not even know whether that id exists or not. An index is a sorted file which has an entry for each row, and MySQL can find the corresponding record very quickly by referring to this index.
EXPLAIN is a nice tool to understand your queries

EXPLAIN SELECT firstname,lastname FROM student WHERE id='145870';

+---------+------+---------------+------+---------+------+-------+------------+
| table   | type | possible_keys | key  | key_len | ref  | rows  | Extra      |
+---------+------+---------------+------+---------+------+-------+------------+
| student | ALL  | NULL          | NULL | NULL    | NULL | 10000 | where used |
+---------+------+---------------+------+---------+------+-------+------------+
The possible_keys column is NULL. In this case MySQL will check all 10000 rows, so we can say this query (or table) is not optimized.
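One way to fix this is to index the column used in the WHERE clause. Assuming id should uniquely identify a student, we can make it the primary key (an ordinary secondary index would also work):

ALTER TABLE student ADD PRIMARY KEY (id);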

Now suppose we add such an index to the table and run EXPLAIN again; then we will get:
+---------+-------+---------------+---------+---------+-------+------+-------+
| table   | type  | possible_keys | key     | key_len | ref   | rows | Extra |
+---------+-------+---------------+---------+---------+-------+------+-------+
| student | const | PRIMARY       | PRIMARY | 10      | const | 1    |       |
+---------+-------+---------------+---------+---------+-------+------+-------+
The type is “const”, which means that the table has only one matching row. The primary key is being used to find this particular record.

There are many more optimization variables and indexing methods; it is difficult to include everything in a single article. But you can start MySQL fine-tuning whenever your database is underperforming.

Creating a Phusion Passenger AMI on Amazon EC2

Phusion Passenger is an Apache and Nginx module for deploying Ruby web applications (such as those built on the Ruby on Rails web framework). Phusion Passenger works on any POSIX-compliant operating system, which means practically any operating system except Microsoft Windows.

Here we are not going to discuss much about Ruby on Rails applications themselves, as our aim is to create an AMI of an Ubuntu AWS instance from which we can launch instances that come pre-built for developing and deploying Rails applications.

Install apache2 web-server

[bash]
sudo apt-get install apache2    # By default its DocumentRoot is /var/www/
[/bash]

Install mysql-server and mysql-client (to support Rails applications that access a database)

[bash]sudo apt-get install mysql-server mysql-client[/bash]

Install Ruby from repository

The default ruby1.8 is missing some important files, so install ruby1.8-dev. Otherwise, at some stage when using gem install, it may fail with “Error: Failed to build gem native extensions”.

[bash]sudo apt-get install ruby1.8-dev[/bash]

Install RubyGems

Install rubygems >= 1.3.6

The package can be downloaded from here

wget http://rubyforge.org/frs/download.php/70696/rubygems-1.3.7.tgz

[bash]
tar xvzf rubygems-1.3.7.tgz
cd rubygems-1.3.7
sudo ruby setup.rb
sudo ln -s /usr/bin/gem1.8 /usr/bin/gem
[/bash]

Install Rails via rubygems

Once rubygems is installed, use it to install Rails:

[bash]sudo gem install rails[/bash]

Installing Phusion Passenger

There are three ways to install Phusion Passenger:

1. By installing the Phusion Passenger gem.

2. By downloading the source tarball from the Phusion Passenger website (passenger-x.x.x.tar.gz).

3. By installing a native Linux package (e.g. a Debian package).

Before installing, you will probably need to switch to the root user first. The Phusion Passenger installer will attempt to automatically detect Apache, and compile Phusion Passenger against that Apache version. It does this by looking for the apxs or apxs2 command in the PATH environment variable.

If Apache is installed in a non-standard location, this can prevent the Phusion Passenger installer from detecting Apache. To solve this, become the root user and export the path of apxs.
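For example, if Apache lives under /usr/local/apache2 (the path is an assumption; use your own installation prefix), the installer can be pointed at its apxs2 binary like this:

[bash]
export APXS2=/usr/local/apache2/bin/apxs2
[/bash]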

The easiest way to install Passenger is via the gem.

Please install the rubygems and then run the Phusion Passenger installer, by typing the following commands as root.

1.Open a terminal, and type:

[bash]gem install passenger[/bash]

2.Type:

[bash]passenger-install-apache2-module[/bash]

and follow the instructions from the installer.

The installer will :

1. Install the Apache2 module.

2. Instruct you how to configure Apache.

3. Inform you how to deploy a Ruby on Rails application.

If anything goes wrong, this installer will advise you on how to solve any problems.

The installer will ask you to add the following lines to the apache2.conf file:

[bash]
LoadModule passenger_module /usr/lib/ruby/gems/1.8/gems/passenger-3.0.0/ext/apache2/mod_passenger.so
PassengerRoot /usr/lib/ruby/gems/1.8/gems/passenger-3.0.0
PassengerRuby /usr/bin/ruby1.8
[/bash]


Now consider that you have a Rails application in the directory /var/www/RFP_tool/. Add the following virtual host entry to your Apache configuration file:

[bash]
<VirtualHost *:80>
    ServerName www.yoursite.com
    DocumentRoot /var/www/RFP_tool/public
    <Directory /var/www/RFP_tool/public>
        AllowOverride all
        Options -MultiViews
    </Directory>
</VirtualHost>
[/bash]

Restart your apache server.

Phusion Passenger installation is finished.

Installation via the source tarball

Extract  the tarball to whatever location you prefer

[bash]
cd /usr/local/passenger/
tar xzvf passenger-x.x.x.tar.gz
passenger-x.x.x/bin/passenger-install-apache2-module
[/bash]

Please follow the instructions given by the installer. Do not remove the passenger-x.x.x folder after installation. Furthermore, the passenger-x.x.x folder must be accessible by Apache.

CREATING AN AMI OF AN EC2 INSTANCE

First you will have to download ec2-api-tools.zip from

http://www.amazon.com/gp/redirect.html/ref=aws_rc_ec2tools?location=http://s3.amazonaws.com/ec2-downloads/ec2-api-tools.zip&token=A80325AA4DAB186C80828ED5138633E3F49160D9

[bash]
unzip ec2-api-tools.zip
mkdir ~/ec2
cp -rf ec2-api-tools/* ~/ec2
[/bash]

Upload your AWS certificate and private key to /mnt of the instance.

Then add the following to ~/.bashrc:

[bash]
export EC2_HOME=~/ec2
export PATH=$PATH:$EC2_HOME/bin
export EC2_PRIVATE_KEY=/mnt/pk-xxxxxxxxxxxxxxxxxxx.pem
export EC2_CERT=/mnt/cert-xxxxxxxxxxxxxxxx.pem
export JAVA_HOME=/usr/local/java/    # your JAVA_HOME here
export PATH=~/ec2/bin:$PATH
[/bash]

If your EC2 instance is an EBS-backed one, you can use the following command to create an AMI

[bash]ec2-create-image -n your-image-name instance-id[/bash]

If your instance is an S3-backed (instance store) one, you will have to install ec2-ami-tools first. It can be downloaded from

http://s3.amazonaws.com/ec2-downloads/ec2-ami-tools.zip

[bash]
unzip ec2-ami-tools.zip
cp ec2-ami-tools-x.x-xxxxx/bin/* ~/ec2/bin
[/bash]

vim ~/.bashrc

export EC2_AMITOOL_HOME=~/ec2/ec2-ami-tools-1.3-56066/

Now you can use the following commands to create an AMI of your s3-backed instance

[bash]
mkdir /mnt/bundle-vol/
ec2-bundle-vol -u USER-ID -c /mnt/cert-xxxxxxx.pem -k /mnt/pk-xxxx.pem -d /mnt/bundle-vol
[/bash]

( Login to your AWS account; your USER-ID is available from Account–> Security Credentials )

[bash]
ec2-upload-bundle -b s3-bucket-name -a aws-access-key -s aws-secret-key -d /mnt/bundle-vol/ -m /mnt/bundle-vol/image.manifest.xml
ec2-register -K /mnt/pk-xxxxxx.pem -C /mnt/cert-xxxxxxx.pem s3-bucket-name/image.manifest.xml -n name-of-the-image
[/bash]

To see the created images

[bash]ec2-describe-images [/bash]

Microsoft SQL Server 2008 Encryption: Part ll

In our first article we discussed the security and privacy of data in the cloud and how to meet MSSQL Server HIPAA (Health Insurance Portability and Accountability Act) compliance. We also learned how to encrypt data in transit between MSSQL Server and applications. Here you will learn how to encrypt data at rest in MSSQL using Transparent Data Encryption (TDE).

It seems that most security products focus their effort on securing data in transit only, using SSL/TLS. But when you build a security system that meets international security regulations and acts, you need to encrypt data at rest as well. An insecure configuration of the server, operating system, firewall and network in general will make it easier for someone to gain access to data at rest.

TDE performs real-time encryption and decryption of data while writing to and reading from MSSQL. As the name implies, the encryption is transparent: there is no need to modify the code or architecture of applications when implementing TDE.

Implementation of TDE

1. Create a master key
A master key is a symmetric key that is used to create certificates and asymmetric keys.

[sql]
USE master;
CREATE MASTER KEY
ENCRYPTION BY PASSWORD = '<password>';
GO
[/sql]

2. Create a certificate

Certificates can be used to create symmetric keys for data encryption or to encrypt the data directly. Execute the following script to create a certificate:

[sql]
CREATE CERTIFICATE TDECert
WITH SUBJECT = 'TDECertificate'
GO
[/sql]

3. Create a database encryption key and protect it with the certificate

1. Click Object Explorer in the left pane of SQL Server Management Studio.
2. Right-click the database you want to encrypt.
3. Click Tasks and navigate to Manage Database Encryption.
4. Select the encryption algorithm (AES 128/192/256) and select the certificate you have created.
5. Mark the checkbox Set Database Encryption On.

The same steps can also be scripted in T-SQL, as sketched below.
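A T-SQL sketch of those GUI steps, assuming a database named MyDatabase and the TDECert certificate created earlier:

[sql]
USE MyDatabase;
GO
CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_128
ENCRYPTION BY SERVER CERTIFICATE TDECert;
GO
ALTER DATABASE MyDatabase SET ENCRYPTION ON;
GO
[/sql]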

You can query the is_encrypted column in sys.databases to determine whether TDE is enabled for a particular database.

[sql]
SELECT [name], is_encrypted FROM sys.databases
GO
[/sql]

Important: back up the certificate and private key to a file to ensure recoverability, as follows:

[sql]
BACKUP CERTIFICATE TDECert TO FILE = 'C:\TDECert.cer'
WITH PRIVATE KEY (FILE = 'C:\TDECert.pvk', ENCRYPTION BY PASSWORD = '<password>');
[/sql]

You are Done!!

Small to Medium Sized Data Center with Applogic Cloud

We use AppLogic (a grid operating system) for grid computation. The Virtual Private Data Center built with AppLogic provides great flexibility in managing the resources of commodity servers, similar to having physical access to them. AppLogic functions with the help of the Xen hypervisor, a distributed kernel, the Logical Connection Manager and other backbone utilities. Using an AppLogic grid, all the resources from the different physical (commodity) servers can be shifted into a single pool of corresponding resources (CPU, RAM, bandwidth, etc.). Read more…

Amazon EC2 Boot From EBS

Amazon EC2 has announced the ability to boot instances directly from Amazon EBS snapshots, providing significantly increased flexibility in how customers can manage their instances. Amazon Machine Images (AMIs) can now be stored either in Amazon S3 or as an Amazon EBS snapshot. Read more…