Using Apache Hadoop and Impala together with MySQL for data analysis

Apache Hadoop is commonly used for data analysis: it is fast for data loads and it scales well. In a previous post I showed how to integrate MySQL with Hadoop. In this post I will show how to export a table from MySQL to Hadoop, load the data into Cloudera Impala (in a columnar format) and run reporting on top of it. For the examples below I will use the “ontime flight performance” data from my previous post (Increasing MySQL performance with parallel query execution). I’ve used Cloudera Manager v.4 to install Apache Hadoop and Impala. For this test I have (intentionally) used old hardware (servers from 2006) to show that Hadoop can utilize old hardware and still scale. The test cluster consists of 6 datanodes. Below are the specs:

Purpose | Server specs
Namenode, Hive metastore, etc. + Datanodes | 2x PowerEdge 2950, 2x L5335 CPU @ 2.00GHz, 8 cores, 16G RAM, RAID 10 with 8 SAS drives
Datanodes only | 4x PowerEdge SC1425, 2x Xeon CPU @ 3.00GHz, 2 cores, 8G RAM, single 4TB drive

As you can see, these are pretty old servers; the only thing I’ve changed is adding a 4TB drive to each to be able to store more data. Hadoop provides redundancy at the server level (it writes 3 copies of each block across the datanodes), so we do not need RAID on the datanodes (we do still need redundancy for the namenode, though).

Data export

There are a couple of ways to export data from MySQL to Hadoop. For the purpose of this test I have simply exported the ontime table into a text file with:
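A sketch of the export statement (the output file path and the comma delimiter here are my assumptions; adjust to taste):

```sql
-- Run in the MySQL client; the server writes the file on its own host.
SELECT * INTO OUTFILE '/tmp/ontime.txt'
  FIELDS TERMINATED BY ','
FROM ontime;
```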

(You can use “|” or any other symbol as a delimiter.) Alternatively, you can download the data directly from the source site using a simple script:
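A minimal version can be sketched as follows (the BTS download URL is an assumption; verify it against the current site layout). It only prints the zip URLs, one per line; pipe the list to `wget -i -` and unzip the results:

```shell
#!/bin/bash
# Build the list of "On_Time_On_Time_Performance" zip URLs for 1988-2013.
base="https://www.transtats.bts.gov/PREZIP"   # assumed download location
urls=()
for y in {1988..2013}; do
  for m in {1..12}; do
    urls+=("${base}/On_Time_On_Time_Performance_${y}_${m}.zip")
  done
done
printf '%s\n' "${urls[@]}"                    # one URL per line
```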

Load into Hadoop HDFS

The first thing we need to do is load the data into HDFS as a set of files. Hive and Impala work with a directory: you point them at the directory into which you have imported your data, and they treat all files inside that directory as the table’s data. In our case it is easy to simply copy all our files into a directory inside HDFS:
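For example (the /data/ontime_text path is an assumption; it must match the LOCATION of the external table):

```shell
# Copy the exported files into a directory inside HDFS.
hdfs dfs -mkdir -p /data/ontime_text
hdfs dfs -put /tmp/ontime*.txt /data/ontime_text/
```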

Create an external table in Impala

Now that we have all the data files loaded, we can create an external table:
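A sketch of the DDL (the table name ontime_csv, the abbreviated column list, and the LOCATION path are my assumptions; the real ontime table has many more columns):

```sql
CREATE EXTERNAL TABLE ontime_csv (
  yeard INT,
  FlightDate STRING,
  Carrier STRING,
  Origin STRING,
  Dest STRING,
  ArrDelayMinutes INT
  -- ... the remaining columns of the ontime table ...
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE
LOCATION '/data/ontime_text/';
```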

Note the “EXTERNAL” keyword and LOCATION (LOCATION points to a directory inside HDFS, not a file). Impala will only create the metadata (it will not modify or copy the data). We can query this table right away; however, Impala will need to scan all files (a full scan) for each query.
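For example, a simple aggregation (assuming the external table is named ontime_csv):

```sql
SELECT yeard, COUNT(*) AS cnt
FROM ontime_csv
GROUP BY yeard;
```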


(Note that “group by” will not sort the rows, unlike MySQL. To sort we will need to add “ORDER BY yeard”)

Explain plan:
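The plan can be obtained with EXPLAIN (again assuming the external table is named ontime_csv):

```sql
EXPLAIN SELECT yeard, COUNT(*) FROM ontime_csv GROUP BY yeard;
```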

As we can see, it will scan 45GB of data.

Impala with columnar format and compression

The great benefit of Impala is that it supports columnar formats and compression. I’ve tried the new “parquet” format with the “snappy” compression codec. As our table is very wide (and de-normalized), the columnar format will help a lot. To take advantage of the “parquet” format we need to load the data into it, which is easy to do when we already have a table inside Impala and the files inside HDFS:
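A sketch of the conversion (the SET statement must be issued from impala-shell; using ontime_csv as the source table name is my assumption):

```sql
SET PARQUET_COMPRESSION_CODEC=snappy;
CREATE TABLE ontime_parquet_snappy LIKE ontime_csv STORED AS PARQUET;
INSERT OVERWRITE TABLE ontime_parquet_snappy SELECT * FROM ontime_csv;
```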

Then we can test our query against the new table:
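For example, the same aggregation as before, now against the Parquet table:

```sql
SELECT yeard, COUNT(*) AS cnt
FROM ontime_parquet_snappy
GROUP BY yeard;
```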

As we can see, it will scan a much smaller amount of data: 3.95GB (with compression) compared to 45GB.


And the response time is much better as well.

Impala complex query example

I’ve used the complex query from my previous post. I had to adapt it for use with Impala: it does not support the “sum(ArrDelayMinutes>30)” notation, but “sum(if(ArrDelayMinutes>30, 1, 0))” works fine.

The query is intentionally designed so that it does not take advantage of indexes: most of the conditions will each filter out less than 30% of the data.
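The shape of such a query can be sketched as follows (the exact filters and thresholds here are illustrative, not the originals):

```sql
SELECT MIN(yeard) AS min_year, MAX(yeard) AS max_year, Carrier,
       COUNT(*) AS cnt,
       SUM(IF(ArrDelayMinutes > 30, 1, 0)) AS flights_delayed,
       ROUND(SUM(IF(ArrDelayMinutes > 30, 1, 0)) / COUNT(*), 2) AS rate
FROM ontime_parquet_snappy
WHERE DayOfWeek NOT IN (6, 7)                      -- weekdays only
  AND OriginState NOT IN ('AK', 'HI', 'PR', 'VI')
  AND DestState NOT IN ('AK', 'HI', 'PR', 'VI')
GROUP BY Carrier
HAVING cnt > 100000 AND MAX(yeard) > 1990
ORDER BY rate DESC;
```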

Impala results:

15.28 seconds is significantly faster than the original MySQL results (15 min 56.40 sec without parallel execution and 5 min 47 sec with parallel execution). However, this is not an “apples to apples” comparison:

  • MySQL will scan 45GB of data while Impala with Parquet will only scan 3.5GB
  • MySQL will run on a single server, Hadoop + Impala will run in parallel on 6 servers.

Nevertheless, Hadoop + Impala shows impressive performance and the ability to scale out of the box, which can help a lot with large-volume data analysis.


Hadoop + Impala gives us an easy way to analyze large datasets using SQL, with the ability to scale even on old hardware.

In my next posts I plan to explore:

As always, please share your thoughts in the comments.


Comments (15)

  • Peter Zaitsev


    I would wonder what numbers InfiniDB would deliver here. From the looks of it they have quite an interesting offering, being able to speak the MySQL protocol to data which is already in Hadoop (HDFS) as well as its own parallel column store engine.

    April 21, 2014 at 10:26 am
  • Mark Callaghan

    How big are the database files (with or without 3X replication)?

    April 21, 2014 at 10:58 am
  • Andrew Aksyonoff

    Would be interesting to compare Vertica on the same HW/data too. They’re free up to 1 TB.

    April 21, 2014 at 3:02 pm
  • Mark Callaghan

    1TB isn’t big data

    April 21, 2014 at 3:04 pm
  • Alexander Rubin

    Peter, yes, I will plan to experiment with InfiniDB next.

    Mark, the whole dataset is not that large, only ~50G raw files. I will plan to experiment with larger dataset.

    April 21, 2014 at 5:18 pm
  • Big Data Analytics

    Can’t you do the same thing in platforms such as hunk or datameer without coding?

    April 21, 2014 at 5:37 pm
  • Andrew Aksyonoff

    Big Data is anything that crashes Excel (c)

    April 21, 2014 at 6:21 pm
  • Paul Carlucci

    I realize it’s a bit off topic, but since you’re comparing speed versus MySQL, then when you’re done with the run could you try converting the tables to TokuDB and give it another spin please?

    April 21, 2014 at 10:22 pm
  • david

    saw similar post on Stack Overflow but yeah quite impressive data.

    April 22, 2014 at 3:46 pm
  • mark

    Great article. I tried to issue the command to create the parquet tables using Impala (CDH5 VM) and got the following error message

    analysisexception: syntax error in line 1: set parquet_compression_codec=snappy ^ encountered: set expected: alter, compute, create, describe, drop, explain, insert, invalidate, load, refresh, select, show, use, values, with caused by: exception: syntax error

    April 27, 2014 at 2:13 am
  • mark

    I realize that my problem was because I issued the command on the Impala GUI query and not on the Impala shell. But how to start the impala shell?

    April 27, 2014 at 2:32 am
  • mark

    Ignore my previous 2 comments. The shell is started with impala-shell command. But the command
    create table ontime_parquet_snappy LIKE ontime_parquet_snappy STORED AS PARQUET;
    should copy from the csv file.

    April 27, 2014 at 2:49 am
  • Michael

    Hi Alexander,
    Now try to compare running queries on Impala on a dataset that does not fit into memory (swap)
    and you will see the real performance impact. If you query the same amount of data on MySQL using indexes, the performance should be better.

    April 28, 2014 at 5:04 pm
  • Luca Candela

    Let me make your download script for the flight info a little less tedious:

    for y in {1988..2013}; do
      for i in {1..12}; do
        u="https://www.transtats.bts.gov/PREZIP/On_Time_On_Time_Performance_${y}_${i}.zip"  # URL assumed; the original comment omitted this line
        wget "$u" -a ontime.log
        unzip -o On_Time_On_Time_Performance_${y}_${i}.zip
        rm On_Time_On_Time_Performance_${y}_${i}.zip
      done
    done

    February 6, 2015 at 4:47 pm
  • Gerard

    A question if I may.

    I am doing the Cloudera Live Lesson 2 on Impala.
    When I load the data and follow the course notes I get to see the data.
    The if I do a sqoop incremental load with newer data, I cannot see the newer data unless I recreate the external table. Odd.

    Any ideas?

    November 2, 2015 at 8:10 am
