Nested Data Structures in ClickHouse

In this blog post, we’ll look at nested data structures in ClickHouse and how they can be used with PMM to look at queries.

Nested structures are not common in relational database management systems; usually, it’s just flat tables. Sometimes, though, it would be convenient to store unstructured information in a structured database.

We are working to adapt ClickHouse as long-term storage for Percona Monitoring and Management (PMM), and in particular to store detailed information about queries. One of the problems we are trying to solve is counting the different errors that cause a particular query to fail.

For example, suppose that on 2017-08-17 a particular query was executed 1000 times. 25 times it failed with error code “1212”, and eight times it failed with error code “1250”. Of course, the traditional way to store this in a relational database would be to have a table "Date, QueryID, ErrorCode, ErrorCnt" and then perform a JOIN against it. Unfortunately, columnar databases don’t perform well with multiple joins, and the usual recommendation is to use de-normalized tables instead.
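The normalized table described above might be sketched like this (the table name and MergeTree parameters are illustrative, not from the original post):

```sql
-- Illustrative normalized schema: one row per (Date, QueryID, ErrorCode).
-- Reporting would then require a JOIN back to the main query table,
-- which columnar engines handle poorly.
CREATE TABLE query_errors
(
    Date Date,
    QueryID UInt32,
    ErrorCode String,
    ErrorCnt UInt32
) ENGINE = MergeTree(Date, (Date, QueryID), 8192);
```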

We could create a column for each possible ErrorCode, but this is not an optimal solution: there could be thousands of them, and most of the time they would be empty.

For a case like this, ClickHouse offers Nested data structures, which let a single row hold parallel arrays of related values. For our case, the error codes and their counts can be defined as follows:
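A sketch of such a definition, assuming a table named queries (the table name, the Fingerprint column, and the MergeTree parameters are illustrative):

```sql
CREATE TABLE queries
(
    Period Date,
    QueryID UInt32,
    Fingerprint String,
    -- The Nested column: each row holds parallel arrays of
    -- error codes and their counts.
    Errors Nested
    (
        ErrorCode String,
        ErrorCnt UInt32
    )
) ENGINE = MergeTree(Period, (Period, QueryID), 8192);
```

Within a row, Errors.ErrorCode and Errors.ErrorCnt must always have the same length.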

This solution raises obvious questions: How do we insert data into this table? How do we extract it?

Let’s start with INSERT. An insert can look like this:
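A sketch, assuming a table queries(Period, QueryID, Fingerprint, Errors Nested(ErrorCode, ErrorCnt)); the QueryID and fingerprint values are illustrative. The nested columns are passed as parallel arrays:

```sql
-- The two arrays line up positionally: error '1220' occurred 5 times, etc.
INSERT INTO queries VALUES
    ('2017-08-17', 5, 'SELECT foo', ['1220', '1230', '1212'], [5, 6, 2]);
```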

which means that on 2017-08-17 the inserted query produced error 1220 five times, error 1230 six times, and error 1212 two times.

Now, during a different date, it might produce different errors:
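For example (values and schema are illustrative, matching the sketch above):

```sql
-- A later date for the same query, with a different mix of errors.
INSERT INTO queries VALUES
    ('2017-08-18', 5, 'SELECT foo', ['1220', '1240', '1258'], [3, 7, 2]);
```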

Let’s take a look at ways to SELECT data. A very basic SELECT:
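With a Nested column, the simplest query returns the nested fields as arrays (table and column names here follow the illustrative schema assumed earlier):

```sql
SELECT * FROM queries WHERE QueryID = 5;
```

Each matching row comes back with Errors.ErrorCode and Errors.ErrorCnt as whole arrays, e.g. ['1220','1230','1212'] and [5,6,2].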

If we want to use a more familiar tabular output, we can use the ARRAY JOIN extension:
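A sketch with the same illustrative names: ARRAY JOIN on the Nested column unrolls one output row per array element, so each (ErrorCode, ErrorCnt) pair becomes its own row:

```sql
SELECT Period, QueryID, Errors.ErrorCode, Errors.ErrorCnt
FROM queries
ARRAY JOIN Errors
WHERE QueryID = 5;
```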

However, usually we want to see the aggregation over multiple periods, which can be done with traditional aggregation functions:
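For example, summing each error code across all periods (again with the illustrative names assumed above):

```sql
-- Unroll the nested arrays first, then aggregate per error code.
SELECT
    Errors.ErrorCode,
    sum(Errors.ErrorCnt) AS TotalCnt
FROM queries
ARRAY JOIN Errors
GROUP BY Errors.ErrorCode;
```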

If we want to get really creative and return only one row per QueryID, we can do that as well:
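One way to do this is with groupArray, which collects the aggregated per-code sums back into arrays, producing one row per QueryID (a sketch using the same illustrative schema):

```sql
SELECT
    QueryID,
    groupArray(ErrorCode) AS ErrorCodes,
    groupArray(TotalCnt)  AS ErrorCnts
FROM
(
    -- Inner query: one row per (QueryID, ErrorCode) with summed counts.
    SELECT
        QueryID,
        Errors.ErrorCode AS ErrorCode,
        sum(Errors.ErrorCnt) AS TotalCnt
    FROM queries
    ARRAY JOIN Errors
    GROUP BY QueryID, ErrorCode
)
GROUP BY QueryID;
```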

Conclusion

ClickHouse provides flexible ways to store data in a less structured manner, and a variety of functions to extract and aggregate it, despite being a columnar database.

Happy data warehousing!

Comments (7)

  • Peter Colclough

    Been storing this type of data in Elasticsearch for years… well, 3-4 years. It’s fast, flexible, and does not require relational setups. Seriously worth a look, especially with the machine learning plugin, which can pick up anomalies for you and warn you when they happen. Add the graphing capability of Kibana, and you are set.

    Just my viewpoint 🙂

    August 30, 2017 at 2:58 pm
    • Peter Zaitsev

      For relational data you will find ClickHouse significantly faster. Here is a third-party benchmark that compares ClickHouse and Elastic for some SQL queries on the same hardware: http://tech.marksblogg.com/benchmarks.html

      August 30, 2017 at 4:52 pm
    • Denis

      > does not require relational setups

      at the cost of performance and storage efficiency.

      September 25, 2017 at 3:09 pm
  • Andy

    How do you increment ErrorCnt? Every time when a query fails you want to increment the appropriate ErrorCnt, right? With a nested structure how do you do that?

    August 30, 2017 at 3:29 pm
    • Vadim Tkachenko

      Andy,

      We do not increment.
      We aggregate reports and make a new entry, say, every 5 mins.

      That’s why, to see the total, we need to use an aggregation function (SUM in this case) over multiple entries.

      August 30, 2017 at 3:32 pm
    • Emanuel Calvo

      ClickHouse does not support update ops; the latest status is at https://github.com/yandex/ClickHouse/issues/923.

      August 30, 2017 at 3:44 pm
    • SUNG GON KIM

      UPDATE and DELETE support is on the roadmap for Q1 2018.

      August 30, 2017 at 10:38 pm
