Nested Data Structures in ClickHouse

In this blog post, we’ll look at nested data structures in ClickHouse and how they can be used with PMM to look at queries.

Nested structures are not common in relational database management systems, which usually deal in flat tables. Yet it is sometimes convenient to store unstructured information in a structured database.

We are working to adapt ClickHouse as long-term storage for Percona Monitoring and Management (PMM), and in particular to store detailed information about queries. One of the problems we are trying to solve is counting the different errors that cause a particular query to fail.

For example, for date 2017-08-17 the query:
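(The exact query text doesn’t matter for this example; assume a hypothetical fingerprint such as:)

```sql
SELECT foo FROM bar WHERE id = 10
```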

was executed 1000 times. 25 times it failed with error code “1212”, and eight times it failed with error code “1250”. Of course, the traditional way to store this relationally would be a separate table (Date, QueryID, ErrorCode, ErrorCnt) joined to the main query table. Unfortunately, columnar databases don’t perform well with multiple joins, and the usual recommendation is to keep tables denormalized.

We could create a column for each possible ErrorCode, but this is not an optimal solution: there could be thousands of error codes, and most of the time the columns would be empty.

In this case, ClickHouse offers Nested data structures. For our case, these can be defined as:
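A sketch of such a table follows; the table and column names (`queries`, `Fingerprint`, etc.) are illustrative, not the exact PMM schema:

```sql
CREATE TABLE queries
(
    Period Date,
    QueryID UInt32,
    Fingerprint String,
    Errors Nested
    (
        ErrorCode String,
        ErrorCnt UInt32
    )
)
ENGINE = MergeTree
ORDER BY (QueryID, Period);
```

Under the hood, a Nested structure is stored as parallel arrays of equal length, addressable as `Errors.ErrorCode` and `Errors.ErrorCnt`.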

This solution raises obvious questions: How do we insert data into this table? How do we extract it?

Let’s start with INSERT. An insert can look like this:
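A sketch, assuming the illustrative table is named `queries` and has a Nested column `Errors(ErrorCode, ErrorCnt)`; each nested column is supplied as an array:

```sql
INSERT INTO queries VALUES
('2017-08-17', 5, 'SELECT foo FROM bar WHERE id = ?',
 ['1220', '1230', '1212'], [5, 6, 2]);
```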

which means that on 2017-08-17 the query produced error 1220 five times, error 1230 six times, and error 1212 two times. Note that each nested column is supplied as an array, and the arrays must be of equal length.

Now, on a different date, the query might produce a different set of errors:
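For example (the values are again illustrative):

```sql
INSERT INTO queries VALUES
('2017-08-18', 5, 'SELECT foo FROM bar WHERE id = ?',
 ['1220', '1240', '1258'], [3, 2, 1]);
```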

Let’s take a look at ways to SELECT data. A very basic SELECT:
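Assuming our table is named `queries`, a plain SELECT returns the nested columns as arrays:

```sql
SELECT * FROM queries;
-- Errors.ErrorCode and Errors.ErrorCnt come back as arrays, e.g.
-- 2017-08-17 | 5 | ... | ['1220','1230','1212'] | [5,6,2]
```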

If we want to use a more familiar tabular output, we can use the ARRAY JOIN extension:
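ARRAY JOIN unfolds the nested arrays into one row per element (a sketch, again assuming the table is named `queries`):

```sql
SELECT
    QueryID,
    Errors.ErrorCode,
    Errors.ErrorCnt
FROM queries
ARRAY JOIN Errors;
```

Joining on the whole `Errors` structure unfolds all of its columns in lockstep, so each output row carries one (ErrorCode, ErrorCnt) pair.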

However, usually we want to see the aggregation over multiple periods, which can be done with traditional aggregation functions:
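For example, to total the error counts per query and error code across all dates (a sketch against the illustrative `queries` table):

```sql
SELECT
    QueryID,
    Errors.ErrorCode,
    sum(Errors.ErrorCnt) AS TotalCnt
FROM queries
ARRAY JOIN Errors
GROUP BY QueryID, Errors.ErrorCode;
```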

If we want to get really creative and return only one row per QueryID, we can do that as well:
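One way to do this (a sketch) is to aggregate per error code first, then fold the (code, count) pairs back into a single array per query with groupArray:

```sql
SELECT
    QueryID,
    groupArray((ErrorCode, ErrorCnt)) AS ErrorsTotal
FROM
(
    SELECT
        QueryID,
        ErrorCode,
        sum(ErrorCnt) AS ErrorCnt
    FROM queries
    ARRAY JOIN
        Errors.ErrorCode AS ErrorCode,
        Errors.ErrorCnt AS ErrorCnt
    GROUP BY QueryID, ErrorCode
)
GROUP BY QueryID;
```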


Despite being a columnar database, ClickHouse provides flexible ways to store data in a less structured manner and a variety of functions to extract and aggregate it.

Happy data warehousing!


Comments (8)

  • Peter Colclough

Been storing this type of data in Elasticsearch for years… well, 3-4 years. It’s fast, flexible, and does not require relational setups. Seriously worth a look, especially with the machine learning plugin, which can pick up anomalies for you and warn you when they happen. Add the graphing capability of Kibana, and you are set.

    Just my viewpoint 🙂

    August 30, 2017 at 2:58 pm
    • Peter Zaitsev

For relational data you will find ClickHouse significantly faster. Here is a third-party benchmark which compares ClickHouse and Elastic for some SQL queries on the same hardware

      August 30, 2017 at 4:52 pm
    • Denis

      > does not require relational setups

      at the cost of performance and storage efficiency.

      September 25, 2017 at 3:09 pm
  • Andy

    How do you increment ErrorCnt? Every time when a query fails you want to increment the appropriate ErrorCnt, right? With a nested structure how do you do that?

    August 30, 2017 at 3:29 pm
    • Vadim Tkachenko


We do not increment.
We aggregate reports and make a new entry, say, every 5 minutes.

That’s why, to see the total, we need to use an aggregation function (SUM in this case) over multiple entries

      August 30, 2017 at 3:32 pm
    • Emanuel Calvo

ClickHouse does not support update operations; the latest status is at

      August 30, 2017 at 3:44 pm

UPDATE and DELETE support is included in the roadmap for Q1 2018

      August 30, 2017 at 10:38 pm
  • Nishit

Is it possible to achieve the same result (the more familiar tabular output via the ARRAY JOIN extension) without an array join on the nested data column? Array joins are slow, so we cannot use them.

    May 28, 2019 at 8:36 am

Comments are closed.