
Posts Tagged ‘Ubuntu’

DevOps stories 1: working with a high traffic e-commerce portal

It seems like a good idea to write down first-person stories of the various DevOps and cloud migration scenarios we come across.

In this particular case we have a beast of a server with 32 processors of 8 cores each & 256 GB of RAM, running a LAMP stack, CakePHP & the X-cart shopping cart. And yes, everything is dead slow.

Cleaning up the X-cart cache

By default (?), the cache is at /var/www/html/cache or [DOCUMENT_ROOT]/cache. If there are too many files, you will not be able to delete them with a plain rm (the argument list gets too long). The following commands can help.


touch /root/agileblaze/cache-file-list.txt # empty exclusion list
find . -name '*.js' | grep -vFf /root/agileblaze/cache-file-list.txt | xargs /bin/rm -f
find . -name 'sql.*' | grep -vFf /root/agileblaze/cache-file-list.txt | xargs /bin/rm -f
find . -name 'rf*.php' | grep -vFf /root/agileblaze/cache-file-list.txt | xargs /bin/rm -f

The permanent fix for this X-cart behaviour is to change the following row in the config.php file from:

define('USE_SQL_DATA_CACHE', true);
to
define('USE_SQL_DATA_CACHE', false);

MySQL

There are tons of issues, like a database that has not been upgraded, joins without indexes, etc. We decided to make use of the RAM and keep MySQL's MyISAM temporary files there for faster access. Don't forget to create the required directory and add the necessary entries to /etc/fstab so the changes persist over reboots.

/etc/my.cnf is changed as follows

tmpdir = /var/mysqltmp # changed from /var/lib/mysql/tmp
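The directory and fstab part might look roughly like this (a minimal sketch; the 2 GB tmpfs size is an assumption, tune it to your box):

mkdir -p /var/mysqltmp
# /etc/fstab entry so the tmpfs mount survives reboots
echo "tmpfs  /var/mysqltmp  tmpfs  rw,size=2G,mode=1777  0 0" >> /etc/fstab
mount /var/mysqltmp
chown mysql:mysql /var/mysqltmp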

Now that we have some room to look into other matters, things should be easier.

We also hit the not-so-friendly max connections error, so we increased max_connections from the default.

# MAX CONNECTIONS
max_connections = 300 #Sat Apr 30 03:35:25 CDT 2016

Slow Queries

If the slow query log is enabled, mysqldumpslow can be a very handy command

[root@714219-db1 mysql]# mysqldumpslow -a -s r -t 10 /var/log/mysql/slow.log

Reading mysql slow query log from /var/log/mysql/slow.log
Count: 376687  Time=1.63s (613441s)  Lock=0.00s (36s)  Rows=203657.1 (76714970948), 2users@localhost
  SELECT productid, COUNT(remote_ip) AS total, AVG(vote_value) AS rating FROM xcart_product_votes GROUP BY productid

Controlling the RAM usage

 

The RAM usage on GNU/Linux based systems can sometimes look quite weird. The immediate path usually taken is to play around with sysctl: tweak swappiness and maybe run drop_caches.

i.e., change swappiness to, say, 10 and do a cache + buffer cleanup. These alone may not help much, but tuning /proc/sys/vm/vfs_cache_pressure seems to help further (we have it around 512).
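For reference, the cache and buffer cleanup mentioned above is done like this (run as root; it is safe, but you temporarily lose warm caches):

sync && echo 3 > /proc/sys/vm/drop_caches   # drop page cache, dentries and inodes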

Further, the minimum free memory size is a parameter that can help prevent OOM errors. A sample value is shown below.

sysctl -w vm.min_free_kbytes=2621440

Further:

sysctl -w vm.vfs_cache_pressure=1024
sysctl -w vm.swappiness=10
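To make these survive a reboot, the same values would typically also go into /etc/sysctl.conf (or a file under /etc/sysctl.d/):

vm.min_free_kbytes = 2621440
vm.vfs_cache_pressure = 1024
vm.swappiness = 10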

 

Keep an eye on Caches and Buffers

This is often something people miss. The difference between what the free command reports and the total process usage gives us the cache + buffer usage. slabtop is a very handy command to get the exact details.

slabtop --delay=10 -s c

Can give a neat summary.


 

Another very useful tool is dstat

dstat -lrvn 10 gives colourful details of cache usage; the memory, CPU, network and IO columns all provide useful information.

 

How to read dstat : On a fully warmed-up system, memory should be around 95% in-use, with most of it in the cache column. CPUs should be in use with no more than 1-2% of iowait and 2-15% system time.

 

How to set up automatic updates:

Sometimes it is quite good to have automatic updates in place. For Ubuntu, automatic updates can be done following these instructions.
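On Ubuntu this usually comes down to the unattended-upgrades package; a minimal sketch (the default configuration installs security updates only):

apt-get install unattended-upgrades
dpkg-reconfigure -plow unattended-upgrades   # writes /etc/apt/apt.conf.d/20auto-upgrades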

 

 

It’s Phab! That makes your life easier

We have been using plenty of different tools for tracking bugs, product management, project management, to-do lists and code review, such as ClearCase, ClearQuest, Bugzilla, GitHub, Asana, Pivotal Tracker, Google Drive etc. We found Phabricator to be a "too good to be true" software engineering web application platform, originally developed at Facebook. It has code review, a wiki, repository browsing, tickets and a lot more to make Phab more fabulous.

Phabricator is an open source suite of web applications that helps software companies build better software. Following are the most important tools in Phabricator:
Maniphest – Bug tracker/task management tracker
Diffusion- source code browser
Differential – code review tool that allows developers to easily submit reviews to one another via command line tool when they check in code using Git or Subversion
Phriction – wiki tool

How to set up and configure the code review and project management tool – Phabricator

Installation

Server – 4GB Digital ocean droplet
OS – Ubuntu 14.04

1. Install dependencies

apt-get install mysql-server apache2 dpkg-dev php5 php5-mysql php5-gd php5-dev php5-curl php-apc php5-cli php5-json

2. Get code

#cd /var/www/codereview

git clone https://github.com/phacility/libphutil.git

git clone https://github.com/phacility/arcanist.git

git clone https://github.com/phacility/phabricator.git

3. Configure virtual host entry

#add below lines

#######################################################################

DocumentRoot /var/www/codereview/phabricator/webroot
RewriteEngine on
RewriteRule ^/rsrc/(.*) - [L,QSA]
RewriteRule ^/favicon.ico - [L,QSA]
RewriteRule ^(.*)$ /index.php?__path__=$1 [B,L,QSA]
Order allow,deny
allow from all
#######################################################################
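For context, here is roughly how those directives sit inside a complete site file (e.g. /etc/apache2/sites-available/phabricator.conf); the hostname is a placeholder and the <VirtualHost>/<Directory> wrapper is an assumption about your layout, not something taken from the original steps:

<VirtualHost *:80>
  ServerName phabricator.your-domain.com
  DocumentRoot /var/www/codereview/phabricator/webroot
  RewriteEngine on
  RewriteRule ^/rsrc/(.*) - [L,QSA]
  RewriteRule ^/favicon.ico - [L,QSA]
  RewriteRule ^(.*)$ /index.php?__path__=$1 [B,L,QSA]
  <Directory "/var/www/codereview/phabricator/webroot">
    Order allow,deny
    allow from all
    # On Apache 2.4 without mod_access_compat, use "Require all granted" instead.
  </Directory>
</VirtualHost>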
4. Enable the virtual host entry for phabricator.

# a2ensite phabricator.conf
# service apache2 reload

5. Configure the MySQL database configuration for phabricator

– create the databases
# /var/www/codereview/phabricator/bin/config set mysql.user mysql_username
# /var/www/codereview/phabricator/bin/config set mysql.pass mysql_password
# /var/www/codereview/phabricator/bin/config set mysql.host mysql_host
# /var/www/codereview/phabricator/bin/storage upgrade

– tweak MySQL

Open /etc/mysql/my.cnf and add the following line under [mysqld] section:

sql-mode = STRICT_ALL_TABLES

#service mysql restart

Set the Base URI of Phabricator install

# /var/www/codereview/phabricator/bin/config set phabricator.base-uri 'http://phabricator.your-domain.com/'

(here phabricator.your-domain.com is a placeholder; use your own domain)

Configure Outbound Email – External SMTP (Google Apps)

Set the following configuration keys using /var/www/codereview/phabricator/bin/config set <key> <value>:

– metamta.mail-adapter -> PhabricatorMailImplementationPHPMailerAdapter
– phpmailer.mailer -> smtp
– phpmailer.smtp-host -> smtp.gmail.com
– phpmailer.smtp-port -> 465
– phpmailer.smtp-user -> Your Google apps mail id
– phpmailer.smtp-password -> set to your password used for authentication
– phpmailer.smtp-protocol -> ssl

Start the phabricator daemons

You can start all the Phabricator daemons using the script:
# /var/www/codereview/phabricator/bin/phd start
To start daemons at the boot time, add this entry to the file /etc/rc.local

/var/www/codereview/phabricator/bin/phd start

Diffusion repository hosting with git

1. Install git

#apt-get install git

2. Create a local repository directory:

#mkdir -p /data/repo

3. Edit the repository.default-local-path key to the new local repository directory.

Go to the Config -> Repositories -> repository.default-local-path

4. Configure System user accounts

Phabricator uses as many as three user accounts. These are system user accounts on the machine Phabricator runs on, not Phabricator user accounts.

* daemon-user – The user the daemons run as

We will configure the root user to run the daemons

* www-user – The user the web server runs as

We will use www-data to be the web user

* vcs-user – The user that users will connect over SSH as

We will configure git user to the vcs-user

To enable SSH access to repositories, edit /etc/sudoers file using visudo to contain:

#includedir /etc/sudoers.d
git ALL=(root) SETENV: NOPASSWD: /usr/bin/git-upload-pack, /usr/bin/git-receive-pack, /usr/bin/git

Since we are going to enable SSH access to the repository, ensure the following holds good.

– Open /etc/shadow and find the line for vcs-user, git.

The second field (which is the password field) must not be set to !!. This value will prevent login. If it is set to !!, edit it and set it to NP (“no password”) instead.

– Open /etc/passwd and find the line for the vcs-user, git.
The last field (which is the login shell) must be set to a real shell. If it is set to something like /bin/false, then sshd will not be able to execute commands. Instead, you should set it to a real shell, like /bin/sh.

– Set phd.user to our daemon user and diffusion.ssh-user to the vcs-user:
# /var/www/codereview/phabricator/bin/config set phd.user root
# /var/www/codereview/phabricator/bin/config set diffusion.ssh-user git

5. Configuring SSH

We will move the normal sshd daemon to another port, say 222. We will use this port to get a normal login shell. We will run a highly restrictive sshd on port 22, managed by Phabricator.

Move Normal SSHD

– make a backup of sshd_config before making any changes.

#cp /etc/ssh/sshd_config /etc/ssh/sshd_config.backup

– Update /etc/ssh/sshd_config, change the port to some other port like 222.

Port 222

– Restart sshd and verify that you are able to connect to the new port

ssh -p 222 user@host

Configure and start Phabricator SSHD

We now configure and start a second SSHD instance which will run on port 22. This instance will use special locked down configuration that uses Phabricator to handle the authentication and command execution.

– Create a phabricator-ssh-hook.sh file

– Create a sshd_phabricator config file

– Start a copy of sshd using the new configuration

Create phabricator-ssh-hook.sh: Copy the template in phabricator/resources/sshd/phabricator-ssh-hook.sh to somewhere like /usr/lib/phabricator-ssh-hook.sh and edit it to have the correct settings

##############################################################

#!/bin/sh

# NOTE: Replace this with the username that you expect users to connect with.
VCSUSER="git"

# NOTE: Replace this with the path to your Phabricator directory.
ROOT="/var/www/codereview/phabricator"

if [ "$1" != "$VCSUSER" ];
then
  exit 1
fi

exec "$ROOT/bin/ssh-auth" $@
##############################################################

Make it owned by root and restrict editing;

#sudo chown root /usr/lib/phabricator-ssh-hook.sh
#chmod 755 /usr/lib/phabricator-ssh-hook.sh

Create sshd_config for Phabricator: Copy the template in phabricator/resources/sshd/sshd_config.phabricator.example to somewhere like /etc/ssh/sshd_config.phabricator

Start Phabricator SSHD

#sudo /usr/sbin/sshd -f /etc/ssh/sshd_config.phabricator

Note:-
Add this entry to the /etc/rc.local to start the daemon on startup.

If you did everything correctly, you should be able to run this;

#echo {} | ssh git@phabricator.your-company.com conduit conduit.ping

and get a response like this;

{"result":"phab-server","error_code":null,"error_info":null}

You should now be able to access your instance over ssh on port 222 for normal login and administrative purposes. Phabricator SSHD runs on port 22 to handle authentication and command execution.

6. To create a git repository

Go to Diffusion -> New Repository -> Create a New Hosted Repository
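Once a hosted repository exists, you should be able to clone it over SSH with something like the following (the callsign TEST, repository name and hostname are hypothetical):

git clone ssh://git@phabricator.your-company.com/diffusion/TEST/test.git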

Upgrade Phabricator

Since phabricator is under development, you should update frequently. To update phabricator:

– Stop the web server
– Run git pull in libphutil/, arcanist/, and phabricator.
– Run phabricator/bin/storage upgrade.
– Restart the web server.
Also you can use a script similar to this one to automate the process:
http://www.phabricator.com/rsrc/install/update_phabricator.sh

Installation of MongoDB and its performance test

Why MongoDB?

  • Document-oriented
    • Documents (objects) map nicely to programming language data types
    • Embedded documents and arrays reduce need for joins
    • Dynamically-typed (schemaless) for easy schema evolution
    • No joins and no multi-document transactions for high performance and easy scalability
  • High performance
    • No joins and embedding makes reads and writes fast
    • Indexes including indexing of keys from embedded documents and arrays
    • Optional streaming writes (no acknowledgements)
  • High availability
    • Replicated servers with automatic master failover
  • Easy scalability
    • Automatic sharding (auto-partitioning of data across servers)
    • Reads and writes are distributed over shards
    • No joins or multi-document transactions make distributed queries easy and fast
    • Eventually-consistent reads can be distributed over replicated servers

Mongo data model

  • A Mongo system (see deployment above) holds a set of databases
  • A database holds a set of collections
  • A collection holds a set of documents
  • A document is a set of fields
  • A field is a key-value pair
  • A key is a name (string)
  • A value is a
    • basic type like string, integer, float, timestamp, binary, etc.,
    • a document, or
    • an array of values

    Mongo query language

  • To retrieve certain documents from a db collection, you supply a query document containing the fields the desired documents should match. For example, {name: {first: 'John', last: 'Doe'}} will match all documents in the collection with name of John Doe. Likewise, {name.last: 'Doe'} will match all documents with last name of Doe. Also, {name.last: /^D/} will match all documents with last name starting with ‘D’ (regular expression match).
  • Queries will also match inside embedded arrays. For example, {keywords: 'storage'} will match all documents with ‘storage’ in its keywords array. Likewise, {keywords: {$in: ['storage', 'DBMS']}} will match all documents with ‘storage’ or ‘DBMS’ in its keywords array.
  • If you have lots of documents in a collection and you want to make a query fast then build an index for that query. For example, ensureIndex({name.last: 1}) or ensureIndex({keywords: 1}). Note, indexes occupy space and slow down updates a bit, so use them only when the tradeoff is worth it.

Install MongoDB on Ubuntu 10.04

Configure Package Management System (APT)

The Ubuntu package management tools (i.e. dpkg and apt) ensure package consistency and authenticity by requiring that distributors sign packages with GPG keys. Issue the following command to import the 10gen public GPG key:

sudo apt-key adv --keyserver keyserver.ubuntu.com --recv 7F0CEB10

Create a /etc/apt/sources.list.d/10gen.list file and include the following line for the 10gen repository.

deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen

Now issue the following command to reload your repository:

sudo apt-get update

Install Packages

Issue the following command to install the latest stable version of MongoDB:

sudo apt-get install mongodb-10gen

When this command completes, you have successfully installed MongoDB! Continue for configuration and start-up suggestions.

Configure MongoDB

These packages configure MongoDB using the /etc/mongodb.conf file in conjunction with the control script. You will find the control script is at /etc/init.d/mongodb.

This MongoDB instance will store its data files in the /var/lib/mongodb and its log files in /var/log/mongodb, and run using the mongodb user account.

Note

If you change the user that runs the MongoDB process, you will need to modify the access control rights to the /var/lib/mongodb and /var/log/mongodb directories.

Controlling MongoDB

Starting MongoDB

You can start the mongod process by issuing the following command:

sudo service mongodb start

You can verify that mongod has started successfully by checking the contents of the log file at /var/log/mongodb/mongodb.log.

Stopping MongoDB

As needed, you may stop the mongod process by issuing the following command:

sudo service mongodb stop

Restarting MongoDB

You may restart the mongod process by issuing the following command:

sudo service mongodb restart

Controlling mongos

As of the current release, there are no control scripts for mongos. mongos is only used in sharding deployments and typically does not run on the same systems where mongod runs. You can use the mongodb script referenced above to derive your own mongos control script.

Using MongoDB

Among the tools included with the MongoDB package is the mongo shell. You can connect to your MongoDB instance by issuing the following command at the system prompt:

mongo
> show dbs                                     ---> To show your databases
> use <databasename>                           ---> To switch database
> db.createCollection("collectionname")        ---> To create a collection
> db.collectionname.find()                     ---> To see the contents of the collection
> db.addUser("theadmin", "anadminpassword")    ---> To create a user and password

Mongodb performance test :-

To monitor the database system we can use mongotop.

Mongotop tracks and reports the current read and write activity of a MongoDB instance.
Mongotop provides per-collection visibility into use.
Use mongotop to verify that activity and use match expectations.
Mongotop returns time values specified in milliseconds (ms.)
Mongotop only reports active namespaces or databases, depending on the --locks option.
If you don’t see a database or collection, it has received no recent activity.

By default mongotop connects to the MongoDB instance running on localhost, port 27017. However, mongotop can optionally connect to remote mongod instances.
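A couple of typical invocations (the remote address is hypothetical):

mongotop 5                                    # report every 5 seconds against localhost:27017
mongotop --host 192.168.1.20 --port 27017 5   # report against a remote mongod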

Next, we can use Mongostat

Mongostat captures and returns counters of database operations. Mongostat reports operations on a per-type (e.g. insert, query, update, delete, etc.) basis. This format makes it easy to understand the distribution of load on the server. Use mongostat to understand the distribution of operation types and to inform capacity planning.
The mongostat utility provides a quick overview of the status of a currently running mongod or mongos instance. Mongostat is functionally similar to the UNIX/Linux utility vmstat, but provides data regarding mongod and mongos instances.
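For example:

mongostat 5            # print one line of counters every 5 seconds
mongostat --discover   # also report on the other members of a replica set or sharded cluster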

Use db.serverStatus(): it provides an overview of the database process's state.

Then there is the REST interface.

MongoDB provides a REST interface that exposes diagnostic and monitoring information in a simple web page. Enable this by setting rest to true, and access the page via the localhost interface using the port numbered 1000 higher than the database port. In default configurations the REST interface is accessible on 28017. For example, to access the REST interface on a locally running mongod instance: http://localhost:28017
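With the Ubuntu packages this is a one-line change in the config file (a sketch; restart mongod afterwards):

echo "rest = true" >> /etc/mongodb.conf
service mongodb restart
# then browse to http://localhost:28017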

These are a few basic tips on making your application better/faster/stronger without knowing anything about indexes or sharding.

Connecting

Connecting to the database is a (relatively) expensive operation. Try to minimize the number of times you connect and disconnect: use persistent connections or connection pooling (depending on your language).

There are some side effects with the PHP connection code.

$connection = new Mongo();

$connection->connect();

In this code it appears the user wants to create a new connection. However, under the hood the following is happening:

The constructor connects to the database.
connect() sees that you're already connected and assumes you want to reset the connection.
It disconnects from the database.
It connects again.

The result is that you have doubled your execution time.

ObjectIds

ObjectIds can seem awkward to work with, so many developers convert their ObjectIds into strings. The problem is, an ObjectId takes up 12 bytes but its string representation takes up 29 bytes (almost two and a half times bigger).

Numbers vs. Strings

MongoDB is type-sensitive and it’s important to use the correct type: numbers for numeric values and strings for strings.

If you have large numbers and you save them as strings ("1234567890" instead of 1234567890), MongoDB may slow down as it strcmps the entire length of the number instead of doing a quicker numeric comparison. Also, "12" is going to be sorted as less than "9", because MongoDB will use string, not numeric, comparison on the values. This can lead to some errors.

Driver-specific
Find out whether your driver has particular weaknesses (or strengths). For instance, the Perl driver is one of the fastest drivers, but it is not good at decoding Date types (Perl's DateTime objects take a long time to create).
MongoDB adopts a document-oriented format, so it is more similar to an RDBMS than a key-value or column-oriented store.

MongoDB operates on a memory base and places high performance above data scalability. MongoDB uses BSON for data storage.

Mongo uses memory mapped files, which means that a lot of the memory reported by tools such as top may not actually represent RAM usage. Check mem["resident"], which tells you how much RAM Mongo is actually using.

"mem" : {
    "resident" : 2,
    "virtual" : 2396,
    "supported" : true,
    "mapped" : 0
},

Backup

There are basically two approaches to backing up a Mongo database:

Mongodump and Mongorestore are the classic approach. Dumps the contents of the database to files. The backup is stored in the same format as Mongo uses internally, so is very efficient. But it’s not a point-in-time snapshot.
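A minimal dump-and-restore round trip looks like this (the backup path is just an example):

mongodump --host localhost --out /backup/mongo-$(date +%F)
mongorestore /backup/mongo-<date>    # restore a previous dump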
To get a point-in-time snapshot, shut the database down, copy the disk files (e.g. with cp) and then start mongod up again. Alternatively, rather than shutting mongod down before making your point-in-time snapshot, you could just stop it from accepting writes:

> db._adminCommand({fsync: 1, lock: 1})
{
        "info" : "now locked against writes, use db.$cmd.sys.unlock.findOne() to unlock",
        "ok" : 1
}

To unlock the database again, you need to switch to the admin database and then unlock it

> use admin
switched to db admin
> db.$cmd.sys.unlock.findOne()
{ "ok" : 1, "info" : "unlock requested" }

Replication
Start your master and slave up like this:

$ mongod --master --oplogSize 500

$ mongod --slave --source localhost:27017 --port 3000 --dbpath /data/slave

When seeding a new slave server from the master, use the --fastsync option.

You can see what’s going on with these two commands:
> db.printReplicationInfo() # tells you how long your oplog will last
> db.printSlaveReplicationInfo() # tells you how far behind the slave is

If the slave isn't keeping up, check the mongo log for any recent errors. Try connecting with the mongo
console, and try running queries from the console to see if everything is working. Run the status commands
above to try to find out which database is taking up resources.
Timeout

The connection timeout is specified in milliseconds and defaults to 20000.

Connection::query_timeout controls how many milliseconds to wait for a response from the server. It is set to 30000 (30 seconds) by default; -1 waits forever (or until TCP times out, which is usually a long time).

Default pool

The default pool has a maximum of 10 connections per mongodb host. This value is controlled by the variable "connectionsPerHost" within the class.

MongoDB Server Connections

The MongoDB server has a property called "maxConns" that is the maximum number of simultaneous connections. The default for maxConns is 80% of the available file descriptors. One way to check the number of connections is by opening the mongo shell and executing:

> db.serverStatus()

The standard format of the MongoDB connection URI used to connect to a MongoDB database server is:

mongodb://[username:password@]host1[:port1][,host2[:port2],…[,hostN[:portN]]][/[database][?options]]
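For example, connecting to a two-member replica set with authentication might look like this (hosts, credentials and replica set name are all placeholders):

mongodb://appuser:secret@db1.example.com:27017,db2.example.com:27017/mydb?replicaSet=rs0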

Finding the Min and Max values in MongoDB

In MongoDB, the min() and max() functions work as limiters – essentially the same as "gte" (>=) and "lt" (<).

To find the highest (maximum) value in MongoDB, you can use this command;

db.thiscollection.find().sort({"thisfieldname":-1}).limit(1)

This essentially sorts the data by the field name in descending order and takes the first value.

The lowest (minimum) value can be determined in a similar way.

    db.thiscollection.find().sort({"thisfieldname":1}).limit(1)

Memory Mapped Storage Engine :-

This is the current storage engine for MongoDB, and it uses memory-mapped files for all disk I/O.  Using this strategy, the operating system’s virtual memory manager is in charge of caching.  This has several implications:

There is no redundancy between file system cache and database cache: they are one and the same.
MongoDB can use all free memory on the server for cache space automatically without any configuration of a cache size.
Virtual memory size and resident size will appear to be very large for the mongod process.

This is benign: virtual memory space will be just larger than the size of the datafiles open and mapped; resident size will vary depending on the amount of memory not used by other processes on the machine.

This command shows the memory usage information :- db.serverStatus().mem

For example :-

> db.serverStatus().mem
{
    "bits" : 64,
    "resident" : 31,
    "virtual" : 146,
    "supported" : true,
    "mapped" : 0,
    "mappedWithJournal" : 0
}

We can verify there is no memory leak in the mongod process by comparing the mem.virtual and mem.mapped values (these values are in megabytes).  If you are running with journaling disabled, the difference should be relatively small compared to total RAM on the machine. If you are running with journaling enabled, compare mem.virtual to 2*mem.mapped.   Also watch the delta over time; if it is increasing consistently, that could indicate a leak.

We can also check what percentage of memory is being used for memory-mapped files using the free command:

Here 2652 MB of memory is being used to memory-map files.

root@manager-desktop:~# free -tm

             total       used       free     shared    buffers     cached
Mem:          3962       3602        359          0        411       2652
-/+ buffers/cache:        538       3423
Swap:         1491         52       1439
Total:        5454       3655       1799

Garbage collection handling :-

When we remove an object from a MongoDB collection, the space it occupied is not automatically garbage collected, and new records are only appended to the end of the data files, making them grow bigger and bigger. MongoDB maintains lists of deleted blocks within the datafiles when objects or collections are deleted. This space is reused by MongoDB but never freed to the operating system.

To shrink the amount of physical space used by the datafiles themselves, by reclaiming deleted blocks, we must rebuild the database using the command db.repairDatabase(). repairDatabase copies all the database records to new files.

We will need enough free disk space to hold both the old and new database files while the repair is running, and repairDatabase will take a long time to complete. Also, rather than compacting an entire database, you can compact just a single collection using db.runCommand({compact: 'collectionname'}).

This does not shrink any datafiles, however; it only defragments deleted space so that larger objects might reuse it.

The compact command will never delete or shrink database files, and in general requires extra space to do its work.

Thus, it is not a good option when you are running critically low on disk space.

Openstack Cloud Software

OpenStack : The Mission

“ To produce the ubiquitous Open Source Cloud Computing platform that will meet the needs of public and private cloud providers regardless of size, by being simple to implement and massively scalable.”

OpenStack is a collection of open source software projects that enterprises/service providers can use to set up and run their cloud compute and storage infrastructure. Rackspace and NASA are the key initial contributors to the stack. Rackspace contributed their "Cloud Files" platform (code) to power the Object Storage part of OpenStack, while NASA contributed their "Nebula" platform (code) to power the Compute part. The OpenStack consortium has managed to attract more than 150 members, including Canonical, Dell, Citrix etc.

There are 5 main service families under OpenStack

Nova         –   Compute Service

Swift         –    Storage Service

Glance      –    Imaging Service

Keystone  –    Identity Service

Horizon    –    UI Service

OpenStack Compute Infrastructure (Nova)

Nova is the Computing Fabric controller for the OpenStack Cloud. All activities needed to support the life cycle of instances within the OpenStack cloud are handled by Nova. This makes Nova a Management Platform that manages compute resources, networking, authorization, and scalability needs of the OpenStack cloud. But, Nova does not provide any virtualization capabilities by itself; instead, it uses libvirt API to interact with supported hypervisors. Nova exposes all its capabilities through a web services API that is compatible with the EC2 API of Amazon Web Services.

Functions and Features:

• Instance life cycle management

• Management of compute resources

• Networking and Authorization

• REST-based API

• Asynchronous eventually consistent communication

• Hypervisor agnostic : support for Xen, XenServer/XCP, KVM, UML, VMware vSphere and Hyper-V

OpenStack Storage Infrastructure (Swift)

Swift provides a distributed, eventually consistent virtual object store for OpenStack. It is analogous to Amazon Web Services – Simple Storage Service (S3). Swift is capable of storing billions of objects distributed across nodes. Swift has built-in redundancy and fail-over management and is capable of archiving and media streaming. It is extremely scalable in terms of both size (several petabytes) and capacity (number of objects).

Functions and Features

• Storage of large number of objects

• Storage of large sized objects

• Data Redundancy

• Archival capabilities – Work with large datasets

• Data container for virtual machines and cloud apps

• Media Streaming capabilities

• Secure storage of objects

• Backup and archival

• Extreme scalability

OpenStack Imaging Service (Glance)

OpenStack Imaging Service is a lookup and retrieval system for virtual machine images. It can be configured to use any one of the following storage backends:

• Local filesystem (default)

• OpenStack Object Store to store images

• S3 storage directly

• S3 storage with Object Store as the intermediate for S3 access.

• HTTP (read-only)

Functions and Features

• Provides imaging service

OpenStack Identity Service (Keystone)

Keystone provides identity and access policy services for all components in the OpenStack family. It implements its own REST-based API (Identity API). It provides authentication and authorization for all components of OpenStack including (but not limited to) Swift, Glance and Nova. Authentication verifies that a request actually comes from who it says it does. Authorization is verifying whether the authenticated user has access to the services he/she is requesting.

Keystone provides two ways of authentication. One is username/password based and the other is token based. Apart from that, keystone provides the following services:

• Token Service (that carries authorization information about an authenticated user)

• Catalog Service (that contains a list of available services at the users’ disposal)

• Policy Service (that lets Keystone manage access to specific services by specific users or groups).

Openstack Administrative Web-Interface (Horizon)

Horizon, the web-based dashboard, can be used to manage/administer OpenStack services. It can be used to manage instances and images, create keypairs, attach volumes to instances, manipulate Swift containers etc. Apart from this, the dashboard even gives the user access to the instance console and can connect to an instance through VNC. Overall, Horizon features the following:

• Instance Management – Create or terminate instance, view console logs and connect through VNC, Attaching volumes, etc.

• Access and Security Management – Create security groups, manage keypairs, assign floating IPs, etc.

 • Flavor Management – Manage different flavors or instance virtual hardware templates.

 • Image Management – Edit or delete images.

 • View service catalog.

 • Manage users, quotas and usage for projects.

 • User Management – Create user, etc.

 • Volume Management – Creating Volumes and snapshots.

 • Object Store Manipulation – Create, delete containers and objects.

 • Downloading environment variables for a project.

INSTALLING OPENSTACK

We can install OpenStack Essex very easily using the StackGeek scripts. Log in to your box and install git with apt-get. We'll become root and do an update first.

sudo  su
apt-get update
apt-get install git

Now checkout the StackGeek scripts from Github:

git clone git://github.com/StackGeek/openstackgeek.git   
cd openstackgeek

Install the Base Scripts

Be sure to take a look at the scripts before you run them. Keep in mind the scripts will periodically prompt you for input, either for confirming installation of a package, or asking you for information for configuration.

Start the installation by running the first script:

./openstack_base_1.sh

When the script finishes you’ll see instructions for manually configuring your network. You can edit the interfaces file by doing a:

vim /etc/network/interfaces

Copy and paste the network code provided by the script into the file and then edit:

auto eth0
iface eth0 inet static
  address 192.168.1.48
  network 192.168.1.0
  netmask 255.255.255.0
  broadcast 192.168.1.255
  gateway 192.168.1.124
  dns-nameservers 8.8.8.8
auto eth1

Change the settings for your network configuration and then restart networking and run the next script:

/etc/init.d/networking restart

Then run the second script :

./openstack_base_2.sh

After the second script finishes, you’ll need to set up a logical volume for Nova to use for creating snapshots and volumes. Nova is OpenStack’s compute controller process.

Here’s the output from the format and volume creation process:-

root@manager-System-Product-Name:/openstackgeek# fdisk /dev/sda
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0xb39fe7af.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1): 3
First sector (2048-62914559, default 2048):
Using default value 2048
Last sector (2048-62914559, default 62914559):
Using default value 62914559
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table. Syncing disks.
root@manager-System-Product-Name:/openstackgeek# pvcreate -ff /dev/sda3
 Physical volume "/dev/sda3" successfully created
root@manager-System-Product-Name:/openstackgeek# vgcreate nova-volumes /dev/sda3
 Volume group "nova-volumes" successfully created 

Note: Your device names may vary.

Installing MySql

The OpenStack components use MySQL for storing state information. Start the install script for MySQL by entering the following:

./openstack_mysql.sh

You’ll be prompted for a password used for each of the components to talk to MySQL:
Enter a password to be used for the OpenStack services
to talk to MySQL (users nova, glance, keystone): redhat
Note: here "redhat" is the password given to nova, glance and keystone.

During the installation process you will be prompted for a root password for MySQL. In our install example we use the same password, ‘redhat’. At the end of the MySQL install you’ll be prompted for your root password again.

mysql start/running, process 8796
################################################################################ 
Creating OpenStack databases and users. 
Use your database password when prompted. 
 Run './openstack_keystone.sh' when the script exits. 
################################################################################
Enter password:
After MySQL is running, you should be able to login with any of the OpenStack 
users and/or the root admin account by doing the following:

mysql -u root -predhat
mysql -u nova -predhat nova
mysql -u keystone -predhat keystone
mysql -u glance -predhat glance

Installing Keystone

Keystone is OpenStack’s identity manager. Start the install of Keystone by doing:

./openstack_keystone.sh

You’ll be prompted for a token, the password you entered for OpenStack’s services, and your email address. The email address is used to populate the user’s information in the database.

Enter a token for the OpenStack services to auth with keystone: redhattoken
Enter the password you used for the MySQL users (nova, glance, keystone): redhat
Enter the email address for accounts (nova, glance, keystone): user@company.com

You should be able to query Keystone at this point.
You'll need to source the "stackrc" file before you talk to Keystone:
 . ./stackrc
 keystone user-list
Keystone should return a list of users:
+----------------------------------+---------+------------------------+--------+
|                id                | enabled |         email          |  name  |
+----------------------------------+---------+------------------------+--------+
| b32b9017fb954eeeacb10bebf14aceb3 | True    | user@company.com       | demo   |
| bfcbaa1425ae4cd2b8ff1ddcf95c907a | True    | user@company.com       | glance |
| c1ca1604c38443f2856e3818c4ceb4d4 | True    | user@company.com       | nova   |
| dd183fe2daac436682e0550d3c339dde | True    | user@company.com       | admin  |
+----------------------------------+---------+------------------------+--------+

Installing Glance

Glance is OpenStack’s image manager. Start the install of Glance by doing:

./openstack_glance.sh

The script will download an Ubuntu 12.04 LTS cloud image from StackGeek's S3 bucket. Once it's done, you should be able to get a list of images:

glance index

Here’s the expected output:

ID              :- 71b8b5d5-a972-48b3-b940-98a74b85ed6a 
Name            :- Ubuntu 12.04 LTS
Disk Format     :- qcow2 
Container Format:- ovf 
Size            :- 226426880

Installing Nova

We’re almost done installing! The last component is the most important one as well. Nova is OpenStack’s compute and network manager. It’s responsible for starting instances, creating snapshots and volumes, and managing the network. Start the Nova install by doing:

./openstack_nova.sh

You’ll immediately be prompted for a few items, including your existing network interface’s IP address, the fixed network address, and the floating pool addresses:

######################################################
The IP address for eth0 is probably 192.168.1.48.
Keep in mind you need an eth1 for this to work.
######################################################
Enter the primary ethernet interface IP: 192.168.1.48
Enter the fixed network (eg. 10.0.2.32/27): 192.168.1.0/24
Enter the fixed starting IP (eg. 10.0.2.33): 192.168.1.1
############################################################################
The floating range can be a subset of your current network. 
Configure your DHCP server to block out the range before you choose it here. 
An example would be 10.0.1.224-255
############################################################################
Enter the floating network (eg. 10.0.1.224/27):  
Enter the floating network size (eg. 32):

The fixed network is a set of IP addresses which will be local to the compute nodes. Think of these addresses as being held and routed internally inside any of the compute node instances.

The floating network is a pool of addresses which can be assigned to the instances you are running. For example, you could start a web server and map an external IP to it for serving a site on the Internet.


Finish Installing Nova

Nova should finish installing after you enter all the network information. When it’s done, you should be able to get a list of images from Glance via Nova:

 nova image-list

And get the expected output we saw earlier from Glance:

root@manager-System-Product-Name:/openstackgeek# nova image-list
+--------------------------------------+------------------+--------+--------+
|                  ID                  |       Name       | Status | Server |
+--------------------------------------+------------------+--------+--------+
| 71b8b5d5-a972-48b3-b940-98a74b85ed6a | Ubuntu 12.04 LTS | ACTIVE |        |
+--------------------------------------+------------------+--------+--------+

Installing Horizon

Horizon is the UI and dashboard controller for OpenStack. Install it by doing:

./openstack_horizon.sh

When it’s done installing, you’ll be given a URL to access the dashboard. 
You’ll be able to login with the user ‘admin’ 
and whatever you entered earlier for your password. 
If you’ve forgotten it, simply grep for it in your environment:

env |grep OS_PASSWORD

The URL will be : http://192.168.1.48

You can log in to the OpenStack dashboard with the following credentials:

USER : admin

PASSWORD : redhat

DevOPS on AWS Cloud using Opscode Chef

'Rule the Cloud' with Chef
Chef is Infrastructure as Code: an API for your entire infrastructure. You are probably well versed with the cloud, or have at least heard of cloud computing; it is still an evolving paradigm, and cloud computing companies are the newest buzz in the IT sector. Chef is used in conjunction with the cloud from providers such as Amazon's AWS. If the software being developed is a mix of interdependent technologies working in perfect harmony, why not the people behind it? That thought led to the emergence of a new cultural trend called DevOps. Now, if you set up a number of instances on the cloud, what next? New cloud instances are just like bare-metal servers and have to be configured from scratch. Doing that manually is feasible for a couple of them, but what if the count gets bigger, say 100 live instances running different Unix distros? A script could be written, but it will not suffice in the long run once ongoing management is considered. Here is where Chef comes into play.

“chef is sysadmin robot performing configuration tasks automatically and much more quickly than a single admin could ever hope to” – Jesse Robbins, Opscode CEO.

Chef is an open source configuration management tool that uses pure Ruby, the Chef domain-specific language, for writing system configuration related stuff (recipes and cookbooks).

Chef brings a new feel with its interesting naming conventions relating to cookery, like Cookbooks (they contain the code for a software package's installation and configuration, in the form of Recipes), Knife (an API tool), Databags (which act like global variables), etc.

Although there are many configuration management tools prevailing in the industry CHEF was able to secure its position in the race.

"Chef takes a step farther past Puppet and CFEngine – like doing 'live search' within configuration management: a load balancer can call out to get a list of the app servers you need to balance, or an application server can call out and get a reference to the master database server, etc. The centralised Chef server is indexing all the information about your infrastructure so that you can search it from the command line using knife, in real time, so that applications can leverage that data." – Seth Chisamore of Opscode.

A technology peak that isn't fluffy – Cloud
For those new to the cloud: it is a whole bunch of activities which began as innovation, were then turned into products, and have now become so widespread and feature-complete that they are suitable as utility services.

Saying you don't want cloud in your business is like saying you don't want to use the electricity grid and would rather build your own generator and run it to suit your needs. What you lose by carrying on that way is the competitive edge: you are under constant pressure to keep your stuff upgraded in order to hold your place relative to the others in the ecosystem.

Cloud is API-oriented; everything you see in the cloud is ultimately programmable.

Virtualization is the foundation of cloud, but virtualization is not cloud by itself. It certainly enables many of the things we talk about when we talk cloud, but it is not necessarily sufficient to be a cloud. Google App Engine is a cloud that does not incorporate virtualization. One of the reasons that virtualization is great is that you can automate the procurement of new boxes.

A culture that's on a path to revolutionize IT – DevOps
DevOps is something that originated predominantly in web shops, and it requires a kind of tooling that really wasn't available except as home-grown tools that the big web shops built over and over again. Organisations that wanted to adopt DevOps started using the tools that enable this transition; since most organisations depend on the web as a source of revenue in a variety of ways, even the enterprise wants to be as agile as the web shops. This has started a revolution that is permeating from the web shops into the enterprise more and more frequently.

Consider a real-life example of DevOps, say Facebook, the most popular social networking site. Between developers, QA and operations there is a lot of communication and cross-talk: the developers have to write code, QA has to make sure good code goes out, and the operations team has to keep it up and running. All of this also has to be recorded, which altogether becomes inefficient, and that is what drove the whole system to evolve. Under conventional practice the developers write the code and throw it over the wall to testing; once testing is done it moves on to operations, and so on. In a DevOps culture, by contrast, the developers and the operations team are involved in the entire lifecycle of the project as one team. This creates a symbiotic relationship: the operations people understand what the engineers need most, and the developers see the value operations people bring as they make architecture decisions.

Cloud with your DevOps offers some fantastic properties. The ability to leverage all the advancements made in software development around repeatability and testability with your infrastructure. The ability to scale up as need be real time (autoscaling) and among other things being able to harness the power of self healing systems. DevOps better with Cloud.

Configuration management say CHEF is one of the most fundamental elements allowing DevOps in the cloud. It allows you to have different VMs that have just enough OS that they can be provisioned, automatically through virtualization, and then through configuration management can be assigned to a distinct purpose within the cloud. The CM system handles turning the lightly provisioned VM into the type of server that it is intended to be.

DevOps & Chef
DevOps is nothing but a cultural movement where everybody – the developers, QA, operations, testing, etc. – gets along: a project group with a mixed skill set that blurs the line between, say, a developer and a sysadmin. This helps the project meet its deadlines
and avoid unexpected situations. Cloud computing acts as a catalyst to this movement, and that is where Chef hops in.

Chef forms a critical layer in the DevOps stack. Thanks to the concept of infrastructure as code and virtualization, we can define and build our infrastructure based on text files. Those files can be version-controlled and tested like regular code. The artifact (AMI, image) can then be deployed on an infrastructure. The following image gives you an overview of the similarities.

Along the way, questions like "what about the application?" or "what about the infrastructure?" get resolved; the fact is that the application is the infrastructure and the infrastructure is the application, and we are here to enable the business. It also helps bring the people on the team into better alignment across the board.

Chef configuration is written in pure ruby.

Devops == Ruby

For those who think Bash is enough as a scripting language: Bash becomes a liability rather than an asset once your script exceeds 100 lines, and a total nightmare if you need to parse or output HTML, CSV, XML, JSON, etc. A significant point to note is that Chef uses Ruby in its recipes, unlike Puppet, which uses its own Ruby-based configuration language (although Chef is heavily inspired by Puppet). If you choose Chef, you are effectively scripting your infrastructure with Ruby.

Though Chef was only released on January 15th, 2009, it has seen rapid adoption and gained a large number of contributors. According to the Opscode wiki there are 545 approved contributors to Opscode projects and 106 companies. Beyond that, the #chef IRC channel is typically attended by over 100 users and Opscode staff – signs of a healthy, growing open source community.

The SpringSource division of VMware has signed on to contribute to the project. They are even being very public about it, as seen in this endorsement:

“We are excited about the open source contributions the Springsource Division of VMware has made to Opscode Chef.” said Javier Soltero, CTO of Springsource Management Products at VMware. “Chef is an important tool for automating infrastructure management and we look forward to its continued growth and success.”

Moreover, from my experience of using Chef, I really enjoyed the quick responses I could get from the Opscode support team for all my queries, and they have always been able to direct me towards a solution.

Automation Using Chef to create an Instance on Amazon Cloud Service Provider with Apache webserver configured in it.

Memo
chef-workstation – the place where we customize our cookbooks and maintain the chef-repo
chef node – the managed node that we create using Chef; it configures itself based on its run list and downloaded cookbooks

The really cool thing with Chef is that you can rerun cookbooks against a node and it will not redo anything it has already done, i.e. it will not change the end result on the target node as defined by the recipes being run against it. So you will always get the same outcome no matter what state the node is in: actions will not be taken if already done (and conversely will run if it is detected they have not been). When reading about Chef you will see this described as being idempotent (there, I've saved you looking it up).

Prerequisites – an AWS account, EC2 API configured, OS – Ubuntu.

1. Sign up for an account at http://www.opscode.com/hosted-chef/#. Here we use OHC (Opscode Hosted Chef), where we get to create up to 5 nodes for free!

2. Verify your Opscode account.

3. Download the files

Create an organization in the Console page at www.manage.opscode.com, and then download the following files:

  • Your Organization validation key. This is used to automatically register new Chef Clients (like servers you manage).
  • The Knife configuration file.
  • Your User key. This is used to authenticate your user with Hosted Chef.
  • Edit knife.rb to add your AWS access key and secret access key:
  • knife[:aws_access_key_id]     = "Your AWS Access Key"
  • knife[:aws_secret_access_key] = "Your AWS Secret Access Key"

At this stage I have a chef ready user environment, an OpsCode organisation set up and now I want to start by spinning up an ec2 instance. I will not be going into any depth regarding  the ec2 specifics as that would make this post far too long.

4. Setting up the chef workstation

Install Ruby and Development Tools

#sudo apt-get update
#sudo apt-get install ruby ruby-dev libopenssl-ruby rdoc ri irb build-essential wget ssl-cert git-core
#sudo gem update --system

Install RubyGems

#cd /tmp
#wget http://production.cf.rubygems.org/rubygems/rubygems-1.8.10.tgz
#tar zxf rubygems-1.8.10.tgz
#cd rubygems-1.8.10
#sudo ruby setup.rb --no-format-executable

Install Chef

#sudo gem install chef

5. To verify the chef installation

#chef-client -v

6. Build the chef repository

#cd ~
#git clone https://github.com/opscode/chef-repo.git

Knife reads configuration files in .chef, so we need to create that directory as well:

#mkdir -p ~/chef-repo/.chef

Copy the keys and knife configuration you downloaded earlier into this directory:

#cp USERNAME.pem ~/chef-repo/.chef
#cp ORGANIZATION-validator.pem ~/chef-repo/.chef
#cp knife.rb ~/chef-repo/.chef

Run the following command to confirm knife is working with the Hosted Chef API.

#cd ~/chef-repo
#knife client list

output: "ORGANIZATION-validator"

7. Now I need to download the apache2 cookbook onto my workstation, customize it if required, and then upload it to my account on the Opscode platform.

#knife cookbook site install apache2

This will notify git and also pull down the desired cookbook.

8. Upload the cookbook using the following command

#knife cookbook upload apache2

9. Enter the following command, sit back and enjoy the show!!!

#knife ec2 server create -G default -I ami-1212ef7b -f m1.small -S <aws ssh key id> -i <ssh identity file> -x root -r 'recipe[apache2]'
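Once the instance has bootstrapped, you can confirm that it registered with the platform and that the run list was applied (the node name will vary):

#knife node list
#knife node show <node name> -r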


Before proceeding it would probably be a good idea to take time out and read the Opscode  Chef Recipe wiki which has a nice clear explanation on cookbook name spaces. Also remind yourself of the components that make up a cookbook it’s worth noting that recipes manage resources and those resources will be executed in the order they occur.

Cassandra Cluster on AWS EC2 with Cassandra 0.7.x and Ubuntu 10.04

Cassandra is a highly scalable, eventually consistent, distributed, structured key-value store. Cassandra brings together  Dynamo’s fully distributed design  and Bigtable’s ColumnFamily-based data model.

In a cluster, Cassandra nodes exchange information about one another using a mechanism called Gossip. The nodes in a cluster need to know one another. Nodes named "seeds" are the centre of this communication mechanism. It's customary to pick a small number of relatively stable nodes to serve as your seeds. Do make sure that each seed also knows of at least one other. Having two seed nodes is preferred.

Let's have a look at how we can bring up a Cassandra cluster with Cassandra 0.7.x on Ubuntu 10.04.

First of all you have to install Java/the JDK. As that is out of scope for our discussion, please do it on your own and let's start with Cassandra.

Add the following repositories to your apt sources list

vim /etc/apt/sources.list.d/cassandra.list

[bash]deb http://www.apache.org/dist/cassandra/debian 07x main
deb-src http://www.apache.org/dist/cassandra/debian 07x main[/bash]

Import the following keys and add them to apt-key

[bash]

gpg --keyserver keyserver.ubuntu.com --recv-keys 4BD736A82B5C1B00

gpg --export --armor 4BD736A82B5C1B00 | sudo apt-key add -

gpg --keyserver keyserver.ubuntu.com --recv-keys F758CE318D77295D

gpg --export --armor F758CE318D77295D | sudo apt-key add -

[/bash]

Execute

[bash]apt-get update[/bash]

and make sure that no error is there with accessing the packages.

Install Cassandra on all the nodes (machines) with which we intend to build the cluster.

[bash]apt-get install cassandra --yes[/bash]

Now edit the configuration file for Cassandra

vim /etc/cassandra/cassandra.yaml

Here I will discuss the important directives that have to be edited for the cluster to take effect.

initial_token:

eg:  initial_token:  136112946768375385385349842972707284582

This parameter determines the position of each node in the Cassandra ring. The initial token for the first seed node should be '0'. Here is a simple Python script that helps to calculate the token values.

[bash]

#! /usr/bin/python

import sys

if (len(sys.argv) > 1):
    num = int(sys.argv[1])
else:
    num = int(raw_input("How many nodes are in your cluster? "))

for i in range(0, num):
    print 'node %d: %d' % (i, (i*(2**127)/num))

[/bash]

Executing this script will prompt you for the number of nodes in your cluster, then output the initial tokens for each node.

For eg: Consider a 2 node cluster, the tokens will be

node 0: 0

node 1: 85070591730234615865843651857942052864

auto_bootstrap: false

You can set this to false as we are just going to start the cluster for the first time.

seeds:

    - <ip address>

As I told you earlier, the seeds mentioned here will control the communication between the nodes.

You can give the ips of the two nodes here  for which you assigned the first two initial tokens generated by the script above.

Example:

seeds:
    - 192.168.1.10
    - 192.168.1.13

These seed entries should be the same on all nodes of the cluster.

listen_address:

&

rpc_address:

You can leave both empty.
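Putting the directives together, the relevant part of /etc/cassandra/cassandra.yaml on the first seed node would look something like this (using the example IPs from above):

initial_token: 0
auto_bootstrap: false
seeds:
    - 192.168.1.10
    - 192.168.1.13
listen_address:
rpc_address: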

Starting Cassandra

For starting Cassandra you can use either an init script or the command "cassandra". Here I will use the second option.

As the Cassandra service was started during installation, some data will already be stored in the /var/lib/cassandra/data directory. So before starting Cassandra, follow these steps.

[bash]

1)      /etc/init.d/cassandra stop

2)      rm -rf /var/lib/cassandra/data

3)      mkdir /var/lib/cassandra/data

[/bash]

After doing these steps on all the nodes, please run the following command to start Cassandra on each node, starting from seed node 1.

[bash]# cassandra &[/bash]

After starting Cassandra on all the nodes you can check the cluster status using the following command

[bash]nodetool -h <ip of the node >  -p 8080 ring[/bash]

or

[bash]nodetool -h localhost -p 8080 ring[/bash]

Installing apt-fast from the repository with apt-get

apt-fast is a nifty tool that can be used to speed up the installation of software packages on Debian GNU/Linux based systems (i.e. those using .deb packages). It is a script that acts as a wrapper around apt-get, apt, aptitude, etc.

Quick steps for the installation

 

sudo add-apt-repository ppa:saiarcot895/myppa

 

Press enter to accept the repository

apt-get update ; apt-get -y install apt-fast

 

Choose the package manager tool that you want to use from the dialogue. Normally we go for apt-get.

 

 

You can keep the number of downloads per server at the default value.
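Once installed, apt-fast is used exactly like apt-get, for example:

apt-fast update
apt-fast install -y curl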