Docker IO overhead

This will be another post on using Percona Server via a Docker image. I want to follow up on my previous post on CPU/network overhead in Docker, “Measuring Percona Server Docker CPU/network overhead”, by measuring whether there is any Docker IO overhead on storage operations.

After running several tests, it appears (spoiler alert) that there is no Docker IO overhead. I still think it is useful to understand the different ways Docker can be used with data volumes, however. Docker’s philosophy is to provide ephemeral containers, but ephemeral does not work well for data – we do not want our data to disappear.

So, the first pattern is to create the data inside the Docker container. This is the default mode:
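The original command is not shown here, so the following is a minimal sketch of what the default-mode invocation likely looked like; the image tag, container name, and password are my assumptions, not taken from the post:

```shell
# Sketch: data lives inside the container's own filesystem (default mode).
# Image tag, container name and password are assumptions for illustration.
docker run -d --name ps13 --net=host \
  -e MYSQL_ROOT_PASSWORD=secret \
  percona/percona-server:5.7
```

With no `-v` option, everything written to `/var/lib/mysql` goes through the container's storage driver.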

(I am using --net=host to avoid network overhead; check the previous post for more information.)

The second pattern is to use an external data volume; for this, we map a host directory to the MySQL datadir with -v /data/flash/d1/:/var/lib/mysql. The full command is:
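Again the exact command is not preserved, so here is a hedged reconstruction, assuming the same hypothetical image tag and password as above:

```shell
# Sketch: bind-mount a host directory as the MySQL datadir.
# Image tag and password are assumptions for illustration.
docker run -d --name ps13 --net=host \
  -v /data/flash/d1/:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=secret \
  percona/percona-server:5.7
```

Writes to `/var/lib/mysql` now bypass the storage driver and go straight to the host filesystem.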

Finally, there is a third pattern: using data volume containers. For this example, I created a dummy container:
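The dummy container's command is missing from the text; a plausible sketch, with the image tag and password again being assumptions, is:

```shell
# Sketch: a throwaway container whose only purpose is to own the
# /var/lib/mysql volume. Image tag and password are assumptions.
docker run --name ps13-data-volume \
  -v /var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=secret \
  percona/percona-server:5.7
```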

After stopping the ps13-data-volume container, we can start the real one, using the data volume from ps13-data-volume, as:
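This would be done with Docker's `--volumes-from` option; the concrete flags below are a sketch under the same assumptions as the earlier commands:

```shell
# Sketch: reuse the volume owned by ps13-data-volume.
# Image tag and password are assumptions for illustration.
docker run -d --name ps13 --net=host \
  --volumes-from ps13-data-volume \
  -e MYSQL_ROOT_PASSWORD=secret \
  percona/percona-server:5.7
```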

I compared all these modes with Percona Server running on a bare metal box with directly mounted storage, using sysbench for both read-intensive and write-intensive IO workloads. For reference, the sysbench command is:
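The sysbench invocation itself did not survive extraction. A hedged sketch in sysbench 0.5 OLTP syntax follows; the table counts, sizes, thread count and credentials are illustrative assumptions, not the post's actual values:

```shell
# Sketch of a sysbench 0.5 OLTP run; every numeric value and the
# credentials here are assumptions, not the post's actual parameters.
sysbench --test=/usr/share/doc/sysbench/tests/db/oltp.lua \
  --oltp-tables-count=8 --oltp-table-size=10000000 \
  --num-threads=16 \
  --mysql-host=127.0.0.1 --mysql-user=root --mysql-password=secret \
  --oltp-read-only=off \
  --max-time=300 --max-requests=0 \
  run
```

Switching `--oltp-read-only` between `on` and `off` covers the read-intensive and write-intensive cases.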

I’m not going to show the final numbers or charts, as the results are identical for all Docker modes and for the bare metal case. So I can confidently say there is NO IO overhead for any of the Docker data volume patterns described above.

As a next experiment, I want to measure the Docker container overhead in a multi-host network environment.

Ben Mildren

I think there’s an edge case here that can lead to some degradation when not using an external data volume.

“OverlayFS works at the file level not the block level. This means that all OverlayFS copy-up operations copy entire files, even if the file is very large and only a small part of it is being modified. This can have a noticeable impact on container write performance.”

It might be a little contrived as an example, but if anyone ever thought to commit test data to their image (rather than create it on startup), they could see a one-time hit as the files are copied up into the running container.

It looks like the overhead would be smaller with Overlay as opposed to AUFS, but I thought it was worth a mention.

Sune Keller

I’m pretty sure this VOLUME instruction means the data directory is always made a bind-mounted volume, regardless of the options passed to docker run:
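The Dockerfile snippet the commenter quoted is missing here. The point can be checked from the shell; the container name `ps13` and the assumption that the image declares `VOLUME ["/var/lib/mysql"]` are illustrative, not confirmed by the post:

```shell
# Sketch: if the image's Dockerfile declares VOLUME ["/var/lib/mysql"]
# (an assumption here), Docker creates an anonymous volume on the host
# even when docker run is given no -v option. Inspect it with:
docker inspect --format '{{ json .Mounts }}' ps13
```

The output lists a volume-type mount for `/var/lib/mysql` even for the "default mode" container, which is the commenter's point.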


I know this blog post is a bit old by now, and my comment applies only to a specific combination of Docker/VM/cloud technology; however, it might be worth leaving a warning for fellow users:

There are known cases of MySQL disk writes slowing down in an extreme fashion when running in Docker containers on cloud VMs.

This hit us this morning, when running the mysql command to import a 170MB SQL dump file proved impossible: after 1 hour it had imported a mere 50MB of data into a single table. At that point we stopped it, and decided instead to seed the DB by copying the MySQL data files from the live server – which took 2 minutes in all for 700MB…

At the moment I have not yet found the cause or a solution, but googling around clearly suggests that this is not an isolated case.