Column Store Database Benchmarks: MariaDB ColumnStore vs. ClickHouse vs. Apache Spark
This blog shares some column store database benchmark results, and compares the query performance of MariaDB ColumnStore v. 1.0.7 (based on InfiniDB), ClickHouse and Apache Spark.
I've already written about ClickHouse (a column store database).
The purpose of the benchmark is to see how these three solutions work on a single big server, with many CPU cores and a large amount of RAM. All three are massively parallel processing (MPP) database systems, so they should be able to use many cores for SELECT queries.
For the benchmarks, I chose three datasets:
- Wikipedia page counts, with the full year 2008 loaded, ~26 billion rows
- Query analytics data from Percona Monitoring and Management
- Online shop orders
This blog post shares the results for the Wikipedia page counts dataset (the same queries as in the ClickHouse benchmark). In the following posts I will use the other datasets to compare the performance.
Databases, Versions and Storage Engines Tested
- MariaDB ColumnStore v. 1.0.7, ColumnStore storage engine
- Yandex ClickHouse v. 1.1.54164, MergeTree storage engine
- Apache Spark v. 2.1.0, Parquet files and ORC files
Although all of the above solutions can run in a “cluster” mode (with multiple nodes), I’ve only used one server.
Hardware
This time I’m using newer and faster hardware:
- CPU: physical = 2, cores = 32, virtual = 64, hyperthreading = yes
- RAM: 256 GB
- Disk: Samsung SSD 960 PRO 1TB, NVMe card
Data Sizes
I've loaded the above data into ClickHouse, ColumnStore, Apache Spark and MySQL. The MySQL tables are InnoDB with a primary key; the Wikistat dataset was not loaded into MySQL due to its size.
| Dataset (size on disk) | ColumnStore | ClickHouse | MySQL | Spark / Parquet | Spark / ORC |
|---|---|---|---|---|---|
| Wikistat | 374.24 GB | 211.3 GB | n/a (> 2 TB) | 395 GB | 273 GB |
| Query metrics | 61.23 GB | 28.35 GB | 520 GB | n/a | n/a |
| Store orders | 9.3 GB | 4.01 GB | 46.55 GB | n/a | n/a |
Query Performance
Wikipedia page counts queries
| Test type (warm), time in seconds | Spark | ClickHouse | ColumnStore |
|---|---|---|---|
| Query 1: count(*) | 5.37 | 2.14 | 30.77 |
| Query 2: group by month | 205.75 | 16.36 | 259.09 |
| Query 3: top 100 wiki pages by hits (group by path) | 750.35 | 171.22 | 1640.7 |

| Test type (cold), time in seconds | Spark | ClickHouse | ColumnStore |
|---|---|---|---|
| Query 1: count(*) | 21.93 | 8.01 | 139.01 |
| Query 2: group by month | 217.88 | 16.65 | 420.77 |
| Query 3: top 100 wiki pages by hits (group by path) | 887.434 | 182.56 | 1703.19 |
Partitioning and Primary Keys
All three solutions can take advantage of data "partitioning" and scan only the needed rows.
ClickHouse has "primary keys" (for the MergeTree storage engine) and scans only the needed chunks of data (similar to partition "pruning" in MySQL). No changes to the SQL or table definitions are needed when working with ClickHouse.
ClickHouse example:

```sql
:) select count(*), toMonth(date) as mon
:-] from wikistat where toYear(date)=2008
:-] and toMonth(date) = 1
:-] group by mon
:-] order by mon;

SELECT
    count(*),
    toMonth(date) AS mon
FROM wikistat
WHERE (toYear(date) = 2008) AND (toMonth(date) = 1)
GROUP BY mon
ORDER BY mon ASC

┌────count()─┬─mon─┐
│ 2077594099 │   1 │
└────────────┴─────┘

1 rows in set. Elapsed: 0.787 sec. Processed 2.08 billion rows, 4.16 GB (2.64 billion rows/s., 5.28 GB/s.)

:) select count(*), toMonth(date) as mon from wikistat
   where toYear(date)=2008 and toMonth(date) between 1 and 10
   group by mon order by mon;

SELECT
    count(*),
    toMonth(date) AS mon
FROM wikistat
WHERE (toYear(date) = 2008) AND ((toMonth(date) >= 1) AND (toMonth(date) <= 10))
GROUP BY mon
ORDER BY mon ASC

┌────count()─┬─mon─┐
│ 2077594099 │   1 │
│ 1969757069 │   2 │
│ 2081371530 │   3 │
│ 2156878512 │   4 │
│ 2476890621 │   5 │
│ 2526662896 │   6 │
│ 2460873213 │   7 │
│ 2480356358 │   8 │
│ 2522746544 │   9 │
│ 2614372352 │  10 │
└────────────┴─────┘

10 rows in set. Elapsed: 13.426 sec. Processed 23.37 billion rows, 46.74 GB (1.74 billion rows/s., 3.48 GB/s.)
```
As we can see here, ClickHouse has processed ~two billion rows for one month of data, and ~23 billion rows for ten months of data. Queries that only select one month of data are much faster.
For ColumnStore, we need to re-write the SQL query and use "between '2008-01-01' and '2008-01-10'" so it can take advantage of partition elimination (as long as the data is loaded in approximate time order). When using functions (i.e., year(dt) or month(dt)), the current implementation does not use this optimization. (This is similar to MySQL: if the WHERE clause has month(dt) or any other function on the dt field, MySQL can't use an index on that field.)
ColumnStore example:

```sql
MariaDB [wikistat]> select count(*), month(date) as mon
    -> from wikistat where year(date)=2008
    -> and month(date) = 1
    -> group by mon
    -> order by mon;
+------------+------+
| count(*)   | mon  |
+------------+------+
| 2077594099 |    1 |
+------------+------+
1 row in set (2 min 12.34 sec)

MariaDB [wikistat]> select count(*), month(date) as mon
    -> from wikistat
    -> where date between '2008-01-01' and '2008-01-31'
    -> group by mon
    -> order by mon;
+------------+------+
| count(*)   | mon  |
+------------+------+
| 2077594099 |    1 |
+------------+------+
1 row in set (12.46 sec)
```
Apache Spark supports partitioning as well; however, partitions have to be declared in the table definition (with the Parquet format, for example). Without declared partitions, even the modified query ("select count(*), month(date) as mon from wikistat where date between '2008-01-01' and '2008-01-31' group by mon order by mon") will have to scan all the data; see the sketch below.
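For illustration, here is a minimal sketch of how a partitioned table could be declared in Spark SQL. The table name wikistat_parquet and the choice of date as the partition column are assumptions for this example, not the definitions used in the benchmark:

```sql
-- A hypothetical partitioned Parquet version of the wikistat table.
-- Declaring PARTITIONED BY lets Spark prune partitions for range
-- predicates on the partition column instead of scanning everything.
CREATE TABLE wikistat_parquet (
    `date`     DATE,
    `time`     TIMESTAMP,
    project    STRING,
    subproject STRING,
    path       STRING,
    hits       BIGINT,
    size       BIGINT
)
USING parquet
PARTITIONED BY (`date`);

-- This query can now read only the January 2008 partitions:
SELECT count(*), month(`date`) AS mon
FROM wikistat_parquet
WHERE `date` BETWEEN '2008-01-01' AND '2008-01-31'
GROUP BY month(`date`)
ORDER BY mon;
```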
The following table shows the performance of the updated query (time in seconds). Spark's time is unchanged because, without declared partitions, the updated syntax still scans all the data:

| Test type / updated query | Spark | ClickHouse | ColumnStore |
|---|---|---|---|
| group by month, one month, updated syntax | 205.75 | 0.93 | 12.46 |
| group by month, ten months, updated syntax | 205.75 | 8.84 | 170.81 |
Working with Large Datasets
With 1 TB of uncompressed data, doing a "GROUP BY" requires lots of memory to store the intermediate results (unlike MySQL, ColumnStore, ClickHouse and Apache Spark use hash tables to store the GROUP BY "buckets"). For example, this query requires a very large hash table:
```sql
SELECT
    path,
    count(*),
    sum(hits) AS sum_hits,
    round(sum(hits) / count(*), 2) AS hit_ratio
FROM wikistat
WHERE project = 'en'
GROUP BY path
ORDER BY sum_hits DESC
LIMIT 100
```
As “path” is actually a URL (without the hostname), it takes a lot of memory to store the intermediate results (hash table) for GROUP BY.
MariaDB ColumnStore cannot "spill" GROUP BY data to disk for now (only disk-based joins are implemented). If you need to GROUP BY on a large text field, you can decrease the disk block cache setting in Columnstore.xml (i.e., set the disk cache to 10% of RAM) to make room for the intermediate GROUP BY results:
```xml
<DBBC>
    <!-- The percentage of RAM to use for the disk block cache. Defaults to 86% -->
    <NumBlocksPct>10</NumBlocksPct>
```
In addition, as the query has an ORDER BY, we need to increase max_length_for_sort_data in MySQL:
```
ERROR 1815 (HY000): Internal error: IDB-2015: Sorting length exceeded. Session variable max_length_for_sort_data needs to be set higher.

mysql> set global max_length_for_sort_data=8*1024*1024;
```
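For comparison, ClickHouse can spill an oversized GROUP BY to disk. A minimal sketch, assuming the server supports the max_bytes_before_external_group_by setting (it is a real ClickHouse setting, but check that your version supports external aggregation):

```sql
-- Let the GROUP BY hash table spill to disk once it grows past ~20 GB,
-- instead of hitting a memory limit (the threshold here is illustrative):
SET max_bytes_before_external_group_by = 20000000000;

SELECT path, count(*), sum(hits) AS sum_hits
FROM wikistat
WHERE project = 'en'
GROUP BY path
ORDER BY sum_hits DESC
LIMIT 100;
```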
SQL Support
| SQL | Spark* | ClickHouse | ColumnStore |
|---|---|---|---|
| INSERT … VALUES | ✅ yes | ✅ yes | ✅ yes |
| INSERT SELECT / BULK INSERT | ✅ yes | ✅ yes | ✅ yes |
| UPDATE | ❌ no | ❌ no | ✅ yes |
| DELETE | ❌ no | ❌ no | ✅ yes |
| ALTER … ADD/DROP/MODIFY COLUMN | ❌ no | ✅ yes | ✅ yes |
| ALTER … change partitions | ✅ yes | ✅ yes | ✅ yes |
| SELECT with WINDOW functions | ✅ yes | ❌ no | ✅ yes |
*Spark itself does not support UPDATE/DELETE. However, Hive supports ACID transactions with UPDATE and DELETE statements, but only with the ORC file format; BEGIN, COMMIT, and ROLLBACK are not yet supported.
ColumnStore is the only database of the three that supports a full set of DML and DDL (almost all of MySQL's SQL implementation is supported).
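As a trivial illustration (these exact statements are not from the benchmark; the WHERE conditions are made up), ordinary MySQL-style DML runs on ColumnStore, while ClickHouse and Spark SQL reject it in the versions tested:

```sql
-- Standard DML works against the ColumnStore engine, just as on InnoDB
-- (hypothetical conditions, for illustration only):
UPDATE wikistat SET project = 'en' WHERE project = 'EN';
DELETE FROM wikistat WHERE hits = 0;
```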
Comparing ColumnStore to Clickhouse and Apache Spark
| Solution | Advantages | Disadvantages |
|---|---|---|
| MariaDB ColumnStore | MySQL endpoint (protocol and syntax); full DML/DDL support; partition elimination with date-range predicates | Slowest of the three in these tests; GROUP BY on large fields can't spill to disk; weaker compression than ClickHouse |
| Yandex ClickHouse | Fastest in these tests (>10x); best compression; primary-key pruning with no SQL or schema changes | No UPDATE/DELETE or window functions; its own SQL dialect and protocol |
| Apache Spark | General-purpose engine (ML, streaming and more, beyond SQL); supports multiple file formats (Parquet, ORC); easy to scale out | Slower than ClickHouse; partitions must be declared in the table definition to get pruning; no UPDATE/DELETE |
Conclusion
Yandex ClickHouse is an absolute winner in this benchmark: it shows both better performance (>10x) and better compression than MariaDB ColumnStore and Apache Spark. If you are looking for the best performance and compression, ClickHouse looks very good.
At the same time, ColumnStore provides a MySQL endpoint (MySQL protocol and syntax), so it is a good option if you are migrating from MySQL. Right now it can't replicate directly from MySQL, but if this option becomes available in the future, we could attach a ColumnStore replication slave to any MySQL master and use the slave for reporting queries (i.e., BI or data science teams can use a ColumnStore database, which is updated in near real time).
Table Structure and List of Queries
Table structure (MySQL / Columnstore version):
```sql
CREATE TABLE `wikistat` (
  `date` date DEFAULT NULL,
  `time` datetime DEFAULT NULL,
  `project` varchar(20) DEFAULT NULL,
  `subproject` varchar(2) DEFAULT NULL,
  `path` varchar(1024) DEFAULT NULL,
  `hits` bigint(20) DEFAULT NULL,
  `size` bigint(20) DEFAULT NULL
) ENGINE=Columnstore DEFAULT CHARSET=utf8
```
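For comparison, a ClickHouse version of the same table might look like the sketch below. This is an assumption for illustration (the exact schema from my earlier ClickHouse benchmark post may differ); in the old-style MergeTree syntax, the parameters are the date column used for pruning, the primary key tuple, and the index granularity:

```sql
-- A possible ClickHouse equivalent of wikistat (v1.1.x MergeTree syntax):
-- MergeTree(date_column, primary_key_tuple, index_granularity)
CREATE TABLE wikistat (
    date       Date,
    time       DateTime,
    project    String,
    subproject String,
    path       String,
    hits       UInt64,
    size       UInt64
) ENGINE = MergeTree(date, (project, path, time), 8192);
```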
Query 1:
```sql
select count(*) from wikistat
```
Query 2a (full scan):
```sql
select count(*), month(date) as mon
from wikistat
where year(date)=2008
and month(date) between 1 and 10
group by month(date)
order by month(date)
```
Query 2b (for the partitioning test):

```sql
select count(*), month(date) as mon
from wikistat
where date between '2008-01-01' and '2008-10-31'
group by mon
order by mon;
```
Query 3:
```sql
SELECT
    path,
    count(*),
    sum(hits) AS sum_hits,
    round(sum(hits) / count(*), 2) AS hit_ratio
FROM wikistat
WHERE project = 'en'
GROUP BY path
ORDER BY sum_hits DESC
LIMIT 100;
```





Comments (15)
I've been looking into different platforms for analytics, and this blog post makes me want to reconsider ClickHouse. What I don't like about it is that, apart from Yandex, almost no one else seems to be using it yet, compared to Hadoop-based alternatives or MariaDB, for which I could easily get support if I ran into issues.
Also, it would be really cool to see a performance comparison over multiple nodes, to see how well these different systems scale across a cluster.
Hello Luis,
As far as we can see, more than a hundred companies use ClickHouse. To confirm this, simply join the ClickHouse Telegram chat or Google group. There you can ask any questions; the community and the ClickHouse team respond to them promptly.
If you still need a support service, please leave your contacts at clickhouse-feedback@yandex-team.ru
It is quickly gaining popularity here in Russia. For instance, we were switching to Spark from our legacy statistical system, but dumped everything we had done as soon as ClickHouse was released:
1) It turned out to be much quicker.
2) The fact that it is a server greatly benefits us: we get input source splitting for free. With Spark, you either create a table with many columns, which is bad for readability and makes the insert statements really long and thus error prone, or you parse the sources several times, which can be overly expensive at times. Neither is a problem with ClickHouse.
3) With ClickHouse you don't just get naturally distributed log parsing; you get continuous data, second by second, minute by minute, day by day, available in a single source. With Spark you will struggle with http://stackoverflow.com/questions/38793170/appending-to-orc-file.
4) ClickHouse gives you free real-time access to the collected data. This is really useful in many circumstances, and a great time saver sometimes.
5) It is fast, as I said. Hadoop is slow, to the extent that you might need several hosts just to match the speed of relational operations done with GNU utils (awk, grep, sort, join) on a single host. Or rather, not quite even that speed. Hadoop is just too slow.
Good to see that it is getting traction. I couldn't find much information about people using it, but maybe if I searched on Yandex I would get better results.
I think it is unfair to compare a database with Spark. Spark is a very general tool. You can do pretty much everything: from data ingestion, cleaning and structuring, up to ML and GraphX modelling, and finally streaming, even natural language processing. Don't forget about BigDL. I also work with highly unstructured data. Spark is incredible. Spark is more like a functional programming language at scale. Yes, it is slower, but that is the tradeoff between functionality and speed. As a data scientist, I don't see any competitors to Spark.
Another side note: I don't know how hard it is to scale ClickHouse. I know that Mongo requires a lot of engineering in order to scale. As for Spark, I can easily install it on a cluster myself.
Yes, that is a good point: Spark is a more general tool and not *just* an MPP database. However, for the purposes of this blog post, I wanted to see how fast Spark can simply process data. If you are using other features of Apache Spark (i.e., ML), those are of course not available in ClickHouse and ColumnStore.
Thank you for a very informative article.
It would be nice if the comparison also included the difficulty of installation, data loading and tuning.
There is no mention of tuning. Does that mean the databases were used "out of the box", with default settings?
Also, how well are MariaDB ColumnStore, ClickHouse and Apache Spark supported online,
I mean by Internet users? Could you find answers to your problems on the Internet?
Thanks a lot
ClickHouse has no UPDATE or DELETE functionality. It is still super fast, but the lack of UPDATE/DELETE is a serious limitation for many users.
I sure hope that Percona can bring ClickHouse into the MySQL protocol so that Percona Toolkit will work with it, as well as PMM. Very interesting. (I sure wish there were window function support, as I now have a Postgres instance just for that, and I sorely miss Percona Toolkit.)
You should look into ProxySQL to talk MySQL with ClickHouse: https://github.com/sysown/proxysql/wiki/ClickHouse-Support
comparing apples to oranges
Alex, I would love to see the same comparison with Druid and Pinot, which seem to be more in the same league as ClickHouse. Have you considered these two? Any comments on them?
very cool, clickhouse is very fast
Potentially, ClickHouse can be accessed via the MySQL protocol using proxysql-clickhouse:
https://github.com/sysown/proxysql/wiki/ClickHouse-Support