A Quick Look into TiDB Performance on a Single Server


TiDB is an open-source distributed database developed by PingCAP. This is a very interesting project as it can be used as a MySQL drop-in replacement: it implements the MySQL protocol and, for most purposes, behaves like MySQL. PingCAP describes TiDB as a “one-stop data warehouse for both OLTP (Online Transactional Processing) and OLAP (Online Analytical Processing) workloads”. In this blog post I decided to see how TiDB performs on a single server compared to MySQL for both OLTP and OLAP workloads. Please note that this benchmark is very limited in scope: we are only testing TiDB and MySQL on a single server, whereas TiDB is a distributed database out of the box.

Short version: TiDB supports parallel query execution for selects and can utilize many more CPU cores, while MySQL is limited to a single CPU core for a single select query. On higher-end hardware – ec2 instances in my case – TiDB can be 3x-4x faster for complex select queries (OLAP workload) which do not use, or benefit from, indexes. At the same time, point selects and writes, especially inserts, can be 5x-10x slower. Again, please note that this test was on a single server, with a single TiKV process.


Please note: the following setup is only intended for testing and not for production. 

I installed the latest version of TiDB available at the time of writing, to take advantage of the most recent performance improvements:

The normal installation process is described here (different methods are available).
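
For a single-server test setup, the deployment boils down to starting TiDB's three components on one machine. The following is only a sketch assuming the binary deployment method: the download location, version, ports, and data paths are placeholders, so check the TiDB documentation for the current procedure:

```shell
# Start the three components on one host (testing only, not production).
# PD (placement driver, cluster metadata), default client port 2379:
./bin/pd-server --data-dir=pd --log-file=pd.log &
# TiKV (the storage engine), pointed at the local PD:
./bin/tikv-server --pd="127.0.0.1:2379" --data-dir=tikv --log-file=tikv.log &
# tidb-server (the MySQL-protocol SQL layer), backed by TiKV:
./bin/tidb-server --store=tikv --path="127.0.0.1:2379" --log-file=tidb.log &

# tidb-server listens on port 4000 and speaks the MySQL protocol:
mysql -h 127.0.0.1 -P 4000 -u root
```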


The main purpose of this test is to compare MySQL to TiDB. As with any distributed database, it is hard to design an “apples to apples” comparison: we may end up comparing a distributed workload spanning many servers/nodes (in this case TiDB) to a single-server workload (in this case MySQL). To overcome this challenge, I decided to focus on “efficiency”. If the distributed database is not efficient – i.e. it requires 10s or 100s of nodes to do the same job as the non-distributed database – it may be cost prohibitive to use such a database for a small or medium-sized deployment.

The preliminary results are: TiDB is much more efficient for SELECTs (OLAP workloads) but much less efficient for writes and typical OLTP workloads. To overcome these limitations it is possible to add more servers.

For this test I was using two types of benchmarks:

  1. OLAP: a set of complex queries on top of an “ontime” database (a database of historical airline flight information). For this benchmark I used different AWS ec2 instances with CPU cores ranging from 2 to 96. This is a response-time test, not a throughput test.
  2. OLTP: sysbench (as always), with the standard point-select and write-only workloads. This is a throughput test with an increasing number of threads.
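
For reference, the sysbench invocations for these two workloads look roughly like this. This is a sketch assuming sysbench 1.0 with its bundled Lua scripts; host, credentials, table count, and table size are placeholders, not necessarily the values used in the test:

```shell
# Prepare the test tables (placeholder sizing):
sysbench oltp_point_select --mysql-host=127.0.0.1 --mysql-user=sbtest \
  --mysql-password=sbtest --tables=10 --table-size=10000000 prepare

# Point-select run: each query fetches one row by primary key
sysbench oltp_point_select --mysql-host=127.0.0.1 --mysql-user=sbtest \
  --mysql-password=sbtest --tables=10 --table-size=10000000 \
  --threads=64 --time=300 run

# Write-only run: INSERT/UPDATE/DELETE mix, no reads
sysbench oltp_write_only --mysql-host=127.0.0.1 --mysql-user=sbtest \
  --mysql-password=sbtest --tables=10 --table-size=10000000 \
  --threads=64 --time=300 run
```

For TiDB, the same commands apply with `--mysql-port=4000` (tidb-server's default MySQL-protocol port).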

OLAP / analytical queries test

Database size is 70GB in MySQL and 30GB in TiDB (compressed). The table has no secondary indexes (except the primary key).

I used the following four queries:

  1. Simple count(*): select count(*) from ontime;
  2. Simple group by: select count(*), year from ontime group by year order by year;
  3. Complex filter for a full table scan: select * from ontime where UniqueCarrier = 'DL' and TailNum = 'N317NB' and FlightNum = '2' and Origin = 'JFK' and Dest = 'FLL' limit 10;
  4. Complex group by and order by query:
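
As an illustration of the kind of query meant by #4 – this is not necessarily the exact query benchmarked, and the column names assume the usual ontime schema:

```sql
-- Illustrative only: a "complex group by and order by" query
-- over the full ontime table, with no index to help it.
SELECT Carrier,
       COUNT(*) AS cnt,
       SUM(IF(ArrDelayMinutes > 30, 1, 0)) AS flights_delayed,
       ROUND(SUM(IF(ArrDelayMinutes > 30, 1, 0)) / COUNT(*), 2) AS rate
FROM ontime
WHERE DayOfWeek NOT IN (6, 7)
GROUP BY Carrier
HAVING cnt > 1000
ORDER BY rate DESC
LIMIT 10;
```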

I used five ec2 instances:

  • t2.medium: 2 CPU cores
  • x1e.xlarge: 4 CPU cores
  • r4.4xlarge: 16 CPU cores
  • m4.16xlarge: 64 CPU cores
  • m5.24xlarge: 96 CPU cores

The following graph represents the results (bars represent the query response time; the smaller, the better):

As we can see, TiDB scales very well as the number of CPU cores increases, as we go from lower- to higher-end instances. t2.medium and x1e.xlarge are interesting here though:

  1. t2.medium has 2 CPU cores and not enough RAM (4GB) to hold the database in memory. Both MySQL/InnoDB and TiDB/TiKV perform a lot of disk reads – this is a disk-bound workload.
  2. x1e.xlarge is the opposite kind of instance: 4 CPU cores and 122GB of RAM. Here the workload is memory bound (both MySQL and TiDB data is fully cached).

All other instances have enough RAM to cache the database in memory, and with more CPUs TiDB can take advantage of query parallelism and provide better response times.

Sysbench test

Select test

I used point select (meaning a single row is selected by primary key; thread counts range from 1 to 128) with sysbench on an m4.16xlarge instance (memory bound: no disk reads). The results are here. The bars represent the number of transactions per second; the more, the better:

This workload actually gives a great advantage to MySQL/InnoDB as it retrieves a single row based on the primary key. MySQL is significantly faster here: 5x to 10x. Unlike the previous workload – one single slow query – for “point select” queries MySQL scales much better than TiDB with more CPU cores.

Write only test

I also ran a write-only sysbench workload, with threads ranging from 1 to 128. The instance has enough memory to cache the full dataset. Here are the results:

Here we can see that TiDB is also significantly slower than MySQL (for an in-memory workload).

Limitations of the write only test
Running TiDB as a single server is not a recommended (or documented) configuration, so some optimizations for this case may be missing. To create a production-level test, we would need to compare TiDB to MySQL with the binlog enabled plus some form of synchronous, virtually synchronous, or semi-sync replication (e.g. Percona XtraDB Cluster, group replication, or semi-sync replication). Both of these changes are known to decrease the write throughput of MySQL considerably, though some tuning can reduce their impact.
Some of the performance characteristics here also stem from TiDB's use of RocksDB as TiKV's storage engine. The performance of InnoDB should be higher for in-memory inserts, with an LSM tree performing better for data sets that no longer fit in memory.


TiDB scales very well for OLAP / analytical queries (typically complex queries that cannot take advantage of indexes) – this is the area where MySQL performs much worse, as it does not take advantage of multiple CPU cores. At the same time, there is always a price to pay: TiDB has worse “efficiency” for fast queries (i.e. selects by primary key) and for writes. TiDB can scale across multiple servers (nodes); however, to achieve the same level of write efficiency as MySQL we would have to set up tens of nodes. In my opinion, TiDB can be a great fit for an analytical workload when you need near-full compatibility with MySQL: syntax compatibility, inserts/updates, etc.



Comments (5)

  • Shen Li

    Thank you for trying out TiDB! As you note, TiDB is well suited for analytics, but the performance numbers you experienced for the write-only test are lower than expected. This is great feedback, and something we are working on improving in our upcoming 3.0 release.

    January 27, 2019 at 11:06 am
  • Øystein Grøvlen

    Hi Alexander,
    Thanks for sharing this study! What version of MySQL were you using?
    I find it a bit strange that MySQL will use more time for the simple count query on the larger machines than on x1e.xlarge. Why should the number of cores matter for single-thread executions? Do you have any explanation for that?
    Also, it seems the group by queries use less time than the simple count on these larger machines. That seems strange to me.

    January 27, 2019 at 9:51 pm
  • zhao

    I’m interested in the results of PXC, MGR, and TiDB on OLTP workloads, such as sysbench oltp.

    January 29, 2019 at 4:00 am
  • Michael

    I think the results are misleading.
    Some results are even contradictory, as Øystein Grøvlen pointed out.
    What’s more, you don’t show the query execution plans. Maybe there is much room for MySQL to optimize.
    TiDB is not suited for OLAP at all; they recommend TiSpark for complex analytical SQL instead.
    As for OLTP, TiDB makes no sense.

    February 1, 2019 at 9:29 pm
  • Ben

    These benchmarks are interesting, but they are kind of apples to oranges. That MySQL will win on any write-heavy task is a sure thing, as the two databases have different designs. It would have been nice if you had also included a replicated MySQL cluster vs. a TiDB cluster in the results, to get a more real-world view. Nobody in their right mind runs a single database instance (unless you love the risk of losing data). Another detail left out: databases like TiDB and CRDB are more tolerant of running on cheaper / lighter machines thanks to their built-in default replication, whereas you are more or less forced to go with a very reliable (aka expensive) server when faced with a single MySQL instance. The lack of a downtime buffer is always present on a single-instance MySQL server.

    April 4, 2019 at 8:40 pm
