

DevOps stories 1: working with a high traffic e-commerce portal

It looks like a good idea to write down first-person stories of the various DevOps and cloud migration scenarios that we come across.

In this particular case we have a beast of a server: 32 processors with 8 cores each and 256 GB of RAM, running a LAMP stack, CakePHP and an X-cart shopping cart. And yes, everything is dead slow.

Cleaning up the X-cart cache

By default (?), the cache is at /var/www/html/cache or [DOCUMENT_ROOT]/cache. If there are too many files, you will not be able to delete them in one go (a plain rm * fails with "argument list too long"). The following commands can help.


touch /root/agileblaze/cache-file-list.txt # empty exclusion list
find . -name '*.js' | grep -vFf /root/agileblaze/cache-file-list.txt | xargs /bin/rm -f
find . -name 'sql.*' | grep -vFf /root/agileblaze/cache-file-list.txt | xargs /bin/rm -f
find . -name 'rf*.php' | grep -vFf /root/agileblaze/cache-file-list.txt | xargs /bin/rm -f
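
An alternative that avoids spawning rm entirely is find's built-in -delete (same patterns as above; double-check the path before deleting anything recursively in a cache directory):

find /var/www/html/cache -name '*.js' -delete
find /var/www/html/cache -name 'sql.*' -delete
find /var/www/html/cache -name 'rf*.php' -delete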

The permanent fix for this X-cart behaviour is to change the following line in the config.php file from:

define('USE_SQL_DATA_CACHE', true);
to
define('USE_SQL_DATA_CACHE', false);

MySQL

There are tons of issues: a database that has not been upgraded, joins without indexes and so on. We decided to make use of the RAM and keep the MySQL MyISAM temporary files there, on a tmpfs mount, for faster access. Don't forget to create the required directory and add the necessary entries to /etc/fstab so the change persists across reboots (see the sketch after the my.cnf change below).

/etc/my.cnf is changed as follows

tmpdir = /var/mysqltmp # changed from /var/lib/mysql/tmp
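
A minimal sketch of the tmpfs part of that change, assuming a 4G size (the size, inode limit and mode are assumptions to adjust for your workload):

mkdir -p /var/mysqltmp
chown mysql:mysql /var/mysqltmp
echo "tmpfs /var/mysqltmp tmpfs rw,uid=mysql,gid=mysql,size=4G,nr_inodes=100k,mode=0700 0 0" >> /etc/fstab
mount /var/mysqltmp
service mysqld restart   # so MySQL picks up the new tmpdir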

Now that we have some room to look into other matters, things should be easier.

We also hit the not-so-friendly max connections error, so we increased max_connections from the default.

# MAX CONNECTIONS
max_connections = 300 #Sat Apr 30 03:35:25 CDT 2016
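
Before and after raising the limit it is worth checking how close the server actually gets to it; a quick check, assuming the mysql CLI can log in from the shell:

mysql -e "SHOW GLOBAL STATUS LIKE 'Max_used_connections';"
mysql -e "SHOW VARIABLES LIKE 'max_connections';"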

Slow Queries

If the slow query log is enabled, mysqldumpslow can be a very handy command:

[root@714219-db1 mysql]# mysqldumpslow -a -s r -t 10 /var/log/mysql/slow.log

Reading mysql slow query log from /var/log/mysql/slow.log
Count: 376687  Time=1.63s (613441s)  Lock=0.00s (36s)  Rows=203657.1 (76714970948), 2users@localhost
  SELECT productid, COUNT(remote_ip) AS total, AVG(vote_value) AS rating FROM xcart_product_votes GROUP BY productid
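
If the slow query log is not yet enabled, it can be switched on at runtime on MySQL 5.1+ (the log path and the one-second threshold are assumptions; persist the settings in my.cnf so they survive a restart):

mysql -e "SET GLOBAL slow_query_log = 'ON';"
mysql -e "SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log';"
mysql -e "SET GLOBAL long_query_time = 1;"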

Controlling the RAM usage

 

The RAM usage on GNU/Linux based systems can sometimes look quite weird. The immediate path usually taken is to play around with sysctl, tweak swappiness and maybe run drop_caches.

i.e., change swappiness to, say, 10 and do a cache + buffer cleanup. But these alone may not be very effective; changing /proc/sys/vm/vfs_cache_pressure seems to help further (we have it at around 512).

Further, the minimum free memory size (vm.min_free_kbytes) is a parameter which can help prevent OOM errors. A sample value is shown below.

sysctl -w vm.min_free_kbytes=2621440

Further:

sysctl -w vm.vfs_cache_pressure=1024
sysctl -w vm.swappiness=10
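
These sysctl -w changes are lost on reboot. To persist them, drop the values into a sysctl configuration file (the file name below is arbitrary; on older distributions append to /etc/sysctl.conf and run sysctl -p instead):

cat >> /etc/sysctl.d/99-tuning.conf <<'EOF'
vm.vfs_cache_pressure = 1024
vm.swappiness = 10
vm.min_free_kbytes = 2621440
EOF
sysctl --system   # reload all sysctl configuration files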

 

Keep an eye on Caches and Buffers

This is something people often miss. The difference between the free command output and the total process usage gives us the cache + buffer usage. slabtop is a very handy command for getting the exact details.

slabtop --delay=10 -s c

Can give a neat summary.

[Screenshot: slabtop output, 2016-05-11]

 

Another very useful tool is dstat.

Running dstat -lrvn 10 gives colourful details of cache usage; the memory, CPU, network and IO columns in its output all carry useful information.

 

How to read dstat: on a fully warmed-up system, memory should be around 95% in use, with most of it in the cache column. CPUs should be busy, with no more than 1-2% iowait and 2-15% system time.

 

How to set up automatic updates

Sometimes it is quite good to have automatic updates in place. For Ubuntu, automatic updates can be configured by following these instructions.
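
A minimal sketch of the usual approach on Ubuntu, using the unattended-upgrades package (exact package names and prompts can vary between releases):

apt-get install -y unattended-upgrades
dpkg-reconfigure -plow unattended-upgrades   # answering "Yes" writes /etc/apt/apt.conf.d/20auto-upgrades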

 

 

Splunk on AWS EC2 Cloud

What is Splunk?

Splunk is a log monitoring and reporting tool for IT system administrators, with powerful search capabilities. It crawls logs, metrics and other data from applications, servers and network devices and indexes them in a searchable repository, from which it can generate graphs, reports and alerts. Splunk can easily be set up on an AWS machine with archival storage on EBS volumes, periodically syncing the archive from EBS to an S3 bucket or taking EBS snapshots to back up the logs for future use.

Generally it is hard to keep track of logs across servers. We do have monitoring tools such as Nagios and Zabbix; Splunk is a different kind of tool, a bigger solution that provides visibility inside dynamic and complex environments. For example, an application may seem very slow not because the app itself has an issue, but because of a lack of free memory on the server. That kind of detail can be obtained from the Splunk server.

Why do we go for Splunk?

In auto-scaled scenarios where instances run behind a load balancer, servers get scaled up and down, and sometimes an instance gets terminated without any alert. In such situations it is good to have the login sessions and server access logs from around the time the server went down, so that we can track down the reason. Managing logs on each server is really hard, and the logs end up in different locations. To address this, we set up Splunk to listen on a TCP port; all the other servers pass their logs to this host, and you end up with a centralized, indexed log repository for all of your services.

Here I will guide you through deploying Splunk on AWS EC2 and configuring the Splunk forwarder on the remote machines. Splunk is very flexible and easy to install on any server. You can select the appropriate hardware capacity planning for your Splunk deployment from here.

Once you have installed the Splunk server, follow the steps given below to start the app.

Now start Splunk using the command given below:
[NOTE: Here Splunk is installed under /opt.]

/opt/splunk/bin/splunk start

Now you can access the Splunk web UI using the URL given below:

http://domain.com:8000

Splunk needs to be configured so that it can receive data from the remote machines. For this, follow the steps below:

1. Log in to the Splunk web UI, e.g. http://10.10.10.35:8000
2. Go to Manager –> Forwarding and receiving –> Receive data
3. Click on the New button and add the default port, i.e. 9997
4. Click on the Save button to save the settings.
NOTE: Make sure that the port is open so the server can accept data from the remote machines.
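
The same receiving port can also be enabled from the Splunk CLI; a sketch, assuming Splunk is installed under /opt and still uses the default admin credentials:

/opt/splunk/bin/splunk enable listen 9997 -auth admin:changeme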

Next you will need to install the Splunk forwarder on the remote machine. Once you have installed the forwarder, start it as shown below:

/opt/splunk/bin/splunk start

Then enable the forwarder, point it at the indexer, and restart the Splunk app:

./splunk enable app SplunkLightForwarder -auth
Splunk username: admin
Password: changeme
./splunk add forward-server 10.10.10.35:9997 -auth admin
./splunk restart
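
Out of the box the light forwarder typically only picks up Splunk's own internal logs; to ship application or system logs, add monitors for the paths you care about (the paths below are just examples):

./splunk add monitor /var/log/messages -auth admin:changeme
./splunk add monitor /var/log/httpd/access_log -auth admin:changeme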

After a few minutes you will see Splunk indexing all the logs on the real-time dashboard.

Generally in a Splunk deployment we have a deployment server which pushes configuration onto deployment clients, grouped into server classes. The Splunk deployment server is a centralized manager which manages several Splunk instances, known as deployment clients. A deployment client is the Splunk instance installed on a remote machine; it forwards the logs from that machine to the Splunk server.

 

 

Splunk generally collects data from the remote machines covering both machine-to-machine and human-to-machine interaction. It indexes the collected data, generates reports and drives alerts. Email alerts can be configured for specific conditions; for example, we can configure an alert mail whenever a log entry containing an error message is found. Splunk handles these large volumes of data, provides visibility and intelligence to IT and data warehouse teams, and can perform real-time and historic analysis of all the bulk data from the remote machines.

Its ease of use, installation and deployment set this application apart from others. Splunk is very useful for developer teams for finding and fixing bugs, and it also helps provide real-time insights.

Mapreduce using Hadoop + pig/hive on AWS EC2 hadoop cluster

This article discusses running MapReduce jobs using the Apache tools Pig and Hive. Before we can process the data we need to upload the files to be processed to HDFS/S3. We recommend uploading to HDFS and keeping the important files in S3 for backup; S3 is easily accessible from the command line using tools like s3cmd, while HDFS is a replicated cluster filesystem which provides enough protection for your data against instance failures.

Mapreduce

MapReduce is a programming model and an associated implementation for processing and generating large data sets. We can specify a map function that processes a key/value pair to generate a set of intermediate key/value pairs, and a reduce function that merges all intermediate values associated with the same intermediate key.

The main entities involved when Hadoop runs a job are:

  1. The client, which submits the MapReduce job.
  2. The jobtracker, which coordinates the job run. The jobtracker is a Java application whose main class is JobTracker.
  3. The tasktrackers, which run the tasks that the job has been split into. Tasktrackers are Java applications whose main class is TaskTracker.
  4. The distributed filesystem (normally HDFS), which is used for sharing job files between the other entities.

Hadoop Map/Reduce is very powerful, but:

• it requires a Java programmer;

• it is harder to write and also time consuming;

• it is difficult to update frequently.

A solution is to run jobs using Pig (Pig Latin) or Hive (HiveQL).

Pig

• An engine for executing programs on top of Hadoop

• It provides a language, Pig Latin, to specify these programs

Pig has two main parts:

– A high level language to express data analysis

– Compiler to generate mapreduce programs (which can run on top of Hadoop)

Pig Latin is the name of the language in which Pig scripts are written; it is a high-level language, and Pig also provides an interactive shell, called Grunt, for executing simple commands. Pig runs on top of Hadoop: it reads the data to be processed from the Hadoop HDFS filesystem and submits the jobs to the Hadoop MapReduce system.

A sample mapreduce job (like a Hello World program) using pig is given below

It is assumed that you are on one of the machines of a Hadoop cluster with a NameNode/DataNode as well as a JobTracker/TaskTracker set up.

We will be executing Pig Latin commands using the Grunt shell. Switch to the hadoop user first.

Consider we have a file 'users' on our local filesystem which contains the data to be processed. First we have to upload it to HDFS.
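
A minimal sketch of that upload, assuming the hadoop user's HDFS home directory is /user/hadoop (both the user and the path are assumptions):

su - hadoop
hadoop fs -mkdir -p /user/hadoop              # create the HDFS home directory if missing (-p needs Hadoop 2.x+)
hadoop fs -put users /user/hadoop/users       # copy the local 'users' file into HDFS

With the data in place, start Pig: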

# pig -x mapreduce

This command will take you to the Grunt shell. Pig Latin statements are generally organized in the following manner: a LOAD statement reads data from the file system, then we process the data, and a STORE statement writes the output back to the file system. A DUMP statement displays output on the screen.

grunt> Users = load 'users' as (name, age);
grunt> Fltrd = filter Users by age >= 18 and age <= 25;
grunt> Pages = load 'pages' as (user, url);
grunt> Jnd = join Fltrd by name, Pages by user;
grunt> Grpd = group Jnd by url;
grunt> Smmd = foreach Grpd generate group, COUNT(Jnd) as clicks;
grunt> Srtd = order Smmd by clicks desc;
grunt> Top5 = limit Srtd 5;
grunt> store Top5 into 'top5sites';
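
The same statements can also be saved into a script file and run non-interactively; a sketch, with the file name being an assumption:

pig -x mapreduce top5sites.pig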

We can also view the progress of the job through the web interface http://<ipaddress of jobtracker machine>:50030.

Tools like PigPen (an Eclipse plugin) are available that help us create Pig scripts, test them using the example generator and then submit them to a Hadoop cluster.

There is another tool called Oozie, a server-based workflow engine specialized in running workflow jobs with actions that run Hadoop Map/Reduce and Pig jobs.

Pig tasks can be modeled as a workflow in Oozie. Workflows are deployed to the Oozie server using a command line utility; once deployed, they can be started and manipulated as necessary using the same utility, and once a workflow is started Oozie runs through each step of the flow. The web console of the Oozie server can be used to monitor the progress of the various workflow jobs being managed by the server.

Hive

 

Pig was causing some slowdowns at Facebook, as it needed training to bring business-intelligence users up to speed, so the development team there decided to write Hive, which has an SQL-like syntax.

Apache Hive is a data warehouse infrastructure built on top of Apache Hadoop. It provides tools for querying and analysis of large data sets stored in Hadoop files. Hive defines a simple SQL-like query language, called HiveQL, that enables users familiar with SQL to query the data. Also it allows custom mappers and reducers to perform more sophisticated analysis that may not be supported by the built-in capabilities of the language.

Some HiveQL queries, which are very similar to SQL, are given below.

# show tables;

# describe <tablename>;

# SELECT * FROM <tablename> LIMIT 10;

#  CREATE TABLE table_name

#  ALTER TABLE table_name RENAME TO new_table_name

#  DROP TABLE table_name
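
As a small end-to-end sketch, the Top5-sites job from the Pig section can be expressed in HiveQL and run non-interactively through the hive CLI (the table name, column names and HDFS path below are assumptions made up for illustration):

hive -e "CREATE TABLE IF NOT EXISTS pages (username STRING, url STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';"
hive -e "LOAD DATA INPATH '/user/hadoop/pages' INTO TABLE pages;"
hive -e "SELECT url, COUNT(*) AS clicks FROM pages GROUP BY url ORDER BY clicks DESC LIMIT 5;"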

NoSQL databases like Cassandra provide support for hadoop. Cassandra supports running Hadoop MapReduce jobs against the Cassandra cluster. With proper cluster configuration, MapReduce jobs can retrieve data from Cassandra and then output results either back into Cassandra, or into a file system.

MySQL Optimization

Database optimization is the process of configuring a database to use system resources efficiently and perform tasks quickly. To optimize MySQL you should know the workflow of the entire system, your hardware, operating system, disk I/O performance and so on.
Why Optimize
You can do more with less. The default MySQL setup is optimized for a minimal system, since it should work well on minimal hardware. But when you run a dedicated MySQL server with high traffic and complex queries, you have to optimize MySQL.
MySQL Server Tuning Considerations
Here you will find some common optimization parameters.

  • MySQL variables
  • Hardware
  • Disk
  • Application

MySQL Optimization
MySQL global variables don't have any predefined optimum values; tuning is a trial-and-monitor process and depends on all the parameters above. Here you will see some of the common variables.
key_buffer_size
This is the size of the buffer used for index blocks of MyISAM tables. On a dedicated MySQL server using the MyISAM storage engine you can allocate 25-30% of the system's total memory to key_buffer_size. To fine-tune key_buffer_size you can compare the status variables Key_reads and Key_read_requests.
This ratio should be at least 1:100.

SHOW STATUS LIKE '%key_read%';
+-------------------+-------------+
| Variable_name     | Value       |
+-------------------+-------------+
| Key_read_requests | 10726813161 |
| Key_reads         | 92790146    |
+-------------------+-------------+
Here the ratio is 1:115, which is acceptable.
But suppose you get a ratio of 1:10; then you need to add more key buffer and upgrade the hardware accordingly.
Query Cache
“My website is too slow while loading dynamic pages.” If it is a MySQL database related issue, the following MySQL variables may be your solution.
query_cache_type
Sets the query cache type. There are three possible values: 0, 1 or 2.

0 Do not cache any query results.
1 Cache query results.
2 Cache results on demand, i.e. only cacheable queries that begin with SELECT SQL_CACHE.

query_cache_size
The amount of memory used to cache query results. The default is 0, which disables the query cache.
The optimum value depends on your application.
query_cache_limit
Do not cache results that are larger than this number of bytes. The default value is 1MB.
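
As a sketch, both sizes can also be adjusted at runtime while you experiment (the values below are arbitrary starting points, not recommendations; persist whatever works in my.cnf, and note that on some MySQL versions query_cache_type itself can only be set at server startup):

mysql -e "SET GLOBAL query_cache_size = 67108864;"   # 64 MB, an assumed starting point
mysql -e "SET GLOBAL query_cache_limit = 2097152;"   # 2 MB per result, also an assumption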
Status checking
SHOW STATUS LIKE '%qcache%';
+-------------------------+----------+
| Variable_name           | Value    |
+-------------------------+----------+
| Qcache_free_blocks      | 1        |
| Qcache_free_memory      | 8371272  |
| Qcache_hits             | 23547551 |
| Qcache_inserts          | 46909131 |
| Qcache_lowmem_prunes    | 5110536  |
| Qcache_not_cached       | 2760196  |
| Qcache_queries_in_cache | 0        |
| Qcache_total_blocks     | 1        |
+-------------------------+----------+
Here 46909131 query results were inserted into the cache, 23547551 queries were answered from the cache and 2760196 queries were not cached at all. When results are not cached, the issue is usually either that the result is larger than query_cache_limit or that it is larger than query_cache_size itself. You have to trial and monitor 🙂
Qcache_lowmem_prunes
This value is incremented whenever a query is removed from the query cache to free up memory. If it increases quickly and you still have memory to spare, you can raise query_cache_size; if it never increases, you can reduce the cache size.

sort_buffer
The sort buffer is useful for speeding up myisamchk operations. It can also help when performing large numbers of sorts.

tmp_table_size

This variable determines the maximum size of an in-memory temporary table. The effective in-memory limit is the minimum of tmp_table_size and max_heap_table_size. You can compare Created_tmp_disk_tables and Created_tmp_tables to optimize tmp_table_size, as shown below.
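
A quick check, assuming the mysql CLI can connect from the shell; if Created_tmp_disk_tables grows fast relative to Created_tmp_tables, the in-memory limits are too small:

mysql -e "SHOW GLOBAL STATUS LIKE 'Created_tmp%';"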

innodb_buffer_pool_size

This variable is the InnoDB equivalent of key_buffer_size for MyISAM tables; it sets the size of the buffer pool InnoDB uses to cache data and indexes.
On a dedicated MySQL server using only InnoDB you can set this up to 80% of RAM.
Hardware for MySQL
If you have large tables (>3GB), you should consider 64-bit hardware, as MySQL uses a lot of 64-bit integers internally.

You need more memory (RAM) if you want MySQL to handle a large number of connections simultaneously. More RAM also speeds up key updates by keeping the most-used pages in RAM.

Another consideration is the network device: you can use 1G Ethernet on a dedicated MySQL server for fast remote connections.

Disk performance is also an important parameter.
Disk Optimization
Striping disks (RAID 0) will increase both read and write throughput.

Don't use RAID 1 or mirroring on disks used for temporary files.

On Linux, mount the disks with the async (default) and noatime options.
Optimizing your application
Cache processed results in your application.

Specify the column names in queries (e.g. don't use SELECT * FROM ...).

Use persistent connections.

Use EXPLAIN to explain! You will see the details below.

Queries and Indexes

Let us start with a simple query: SELECT firstname FROM student WHERE id='145870';
MySQL starts searching from the first row to find the student with id 145870; it does not even know whether that id exists or not. An index is a sorted file which has an entry for each row, and MySQL can find the corresponding record very quickly by referring to this index.
EXPLAIN is a nice tool to understand your queries.

EXPLAIN SELECT firstname,lastname FROM student WHERE id='145870';

+---------+------+---------------+------+---------+------+-------+------------+
| table   | type | possible_keys | key  | key_len | ref  | rows  | Extra      |
+---------+------+---------------+------+---------+------+-------+------------+
| student | ALL  | NULL          | NULL | NULL    | NULL | 10000 | where used |
+---------+------+---------------+------+---------+------+-------+------------+
The possible_keys column is NULL, so MySQL will check all 10000 rows. We can say this query (or table) is not optimized.
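
The obvious fix before re-running EXPLAIN is to index the id column; a sketch, assuming id is unique and not yet indexed (both assumptions):

mysql -e "ALTER TABLE student ADD PRIMARY KEY (id);"
# or, if id should not be the primary key, a plain secondary index:
mysql -e "CREATE INDEX idx_student_id ON student (id);"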

Now suppose we have indexed the id column of the table and run EXPLAIN again. Then we will get:
+---------+-------+---------------+---------+---------+-------+------+-------+
| table   | type  | possible_keys | key     | key_len | ref   | rows | Extra |
+---------+-------+---------------+---------+---------+-------+------+-------+
| student | const | PRIMARY       | PRIMARY | 10      | const | 1    |       |
+---------+-------+---------------+---------+---------+-------+------+-------+
The type is “const”, which means that the table has only one matching row. The primary key is being used to find this particular record.

There are many more optimization variables and indexing methods; it is difficult to include everything in a single article. But you can start fine-tuning MySQL whenever your database is underperforming.