Sample Datasets for Benchmarking and Testing

Sometimes you just need some data to test and stress things. But randomly generated data is awful — it doesn’t have realistic distributions, and it isn’t easy to understand whether your results are meaningful and correct. Real or quasi-real data is best. Whether you’re looking for a couple of megabytes or many terabytes, the following sources of data might help you benchmark and test under more realistic conditions.
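To make the point about distributions concrete, here is a minimal sketch (Python; the key counts and query counts are arbitrary illustration, not from any particular benchmark) comparing uniformly random key access with the Zipf-like skew most real workloads exhibit:

```python
import random
from collections import Counter

random.seed(42)
N_KEYS = 10_000
N_QUERIES = 100_000

# Uniform random: every key is equally likely, so there are no hot rows
# and cache/index behavior in a benchmark looks nothing like production.
uniform = [random.randrange(N_KEYS) for _ in range(N_QUERIES)]

# Zipf-like skew: a few keys dominate, as in most real workloads.
weights = [1 / (rank + 1) for rank in range(N_KEYS)]
skewed = random.choices(range(N_KEYS), weights=weights, k=N_QUERIES)

def top_share(accesses, top_n=100):
    """Fraction of all accesses that hit the top_n hottest keys."""
    counts = Counter(accesses)
    hottest = sum(c for _, c in counts.most_common(top_n))
    return hottest / len(accesses)

print(f"uniform: top-100 keys get {top_share(uniform):.1%} of accesses")
print(f"skewed:  top-100 keys get {top_share(skewed):.1%} of accesses")
```

Under uniform access the top 100 of 10,000 keys get roughly 1% of traffic; under even mild Zipf skew they capture half or more, which is why a benchmark run on flat random data can badly mispredict real cache hit rates.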

Datasets for Benchmarking

Post your favorites in the comments!


Comments (21)

  • pcrews

    It should be noted that imdb has some restrictive licensing that would prevent anyone from making a data set publicly available. While the data is quite good and interesting, it would be of limited utility since it isn’t in database-ready form (IIRC).

    I’m also working on a dataset based on this information:

    I will be making it available on Launchpad (à la the employee dataset) once it is ready.

    There are also some interesting datasets here:

    February 1, 2011 at 5:09 am
  • Morgan Tocker

    @Baron – there’s also the world sample database. Useful for testing, but not for benchmarking:

    @pcrews – See: there is nothing stopping you from having a scripted recreation of the data.

    “easy to understand whether your results are meaningful and correct”.

    I think this part needs to be written in bold, with an underline. We use the IMDB data for examples in our Percona training courses.

    February 1, 2011 at 7:29 am
  • Baron Schwartz

    I bolded a couple of words 🙂

    February 1, 2011 at 7:40 am
  • pcrews


    Perhaps I’m missing something, but how do you get around this:

    “IMDb grants you a limited license to access and make personal use of this site and not to download (other than page caching) or modify it, or any portion of it, except with express written consent of IMDb. This site or any portion of this site may not be reproduced, duplicated, copied, sold, resold, visited, or otherwise exploited for any commercial purpose without express written consent of IMDb. This license does not include any resale or commercial use of this site or its contents or any derivative use of this site or its contents.”

    Using the data in a training course doesn’t seem like personal or non-commercial use.

    I’m not trying to be a pain, but rather to educate myself on these issues. When I was at MySQL/Sun/Oracle, we were expressly forbidden by their fleet of lawyers from trying to create any test datasets based on the data.

    February 1, 2011 at 8:38 am
  • Baron Schwartz

    Hmm. If we were to ask a lawyer right now, I bet they’d scare us and tell us whatever we do, don’t admit wrongdoing. I think the best thing to do is say that we should probably find or generate a dataset whose licensing clearly permits using it for examples in our courses, and thank you for your feedback, Patrick.

    February 1, 2011 at 8:50 am
  • pcrews

    No worries, and sorry if I was a wet blanket. Mainly, I am very interested in datasets and making them available to everyone.

    The IMDB data is very juicy and I want to use it; I was mainly curious to see if I could do something with it as well : )

    February 1, 2011 at 8:59 am
  • Roland Bouman


    For relational work, the csv dumps are probably the quickest bet, but they do contain multivalued attributes. The quadruples dump is a normalized format; I haven’t gotten round to building a relational version from that one.

    February 1, 2011 at 9:30 am
  • pcrews


    I had built a dataset that merged Grouplens and Freebase movie information, but that work has become dusty.

    However, the Freebase data is *great*. Just sifting through the movies-related information produced a fair number of tables with respectable populations : )

    February 1, 2011 at 9:48 am
  • Gerry

    My favorite is the Amarok player one: it’s small, has real data based on your music collection and listening habits, and the dataset is easy to understand and manipulate.

    My $.02

    February 1, 2011 at 11:20 am
  • John

    Where are the freely available literature citation datasets? DBLP seems to cover mostly compsci. I need a REALLY big citation dataset spanning many disciplines and publishing houses (ACM, IEEE, Springer, etc.). Anyone?

    February 2, 2011 at 1:08 am
  • Ronald Bradford

    I have in the past compiled a list of public data sources. Many of the comments contain great links.

    More information at

    February 2, 2011 at 10:32 am
  • Tim Riemenschneider

    For a large dataset, you could import the openstreetmap-data:

    February 4, 2011 at 9:28 am
  • J. Andrew Rogers

    A big problem with these data sets is that they are small, trivial cases, which limits the amount and kind of testing you can do. Large data sets exist, but they are often implausibly large to move around over the Internet. You can use the listed data sets to easily test basic correctness, but you can’t use them to test scaling behaviors.

    Synthetic data sets are not interesting, but neither are they random or unrealistic if built by a competent designer. The great thing about synthetic data set generators, beyond producing data of unbounded size, is that you can configure arbitrary distributions and properties of the data that test a broad range of characteristics not possible with real-world data sets. For example, if I want to simulate several types of skew and bias that move logically in space and time over the properties of the data set, it is pretty simple to do that. A good example of this is location and sensing data, which has fairly complex skew patterns in reality that will break most spatial indexing systems at scale; it is hard to get a data set that demonstrates this, but it is fairly easy to generate a synthetic data set that reproduces the same bulk behavior.

    Purely synthetic data set generators can very accurately model real-world data patterns; it just requires the ability to generate complex skew behaviors and interactions in the data that do not rise above the noise floor in trivial samples. What would be useful, possibly more useful than sample data sets, is building a collection of synthetic workload generators with parameter sets that exactly match the distribution, dynamics, and skew of real-world data sets. The results are not interesting per se, but they allow you to characterize runtime behaviors under all sorts of assumptions at arbitrary scales. For complex data sets like real-time spatial and graph data, synthetic is really the only way to get an accurate measure of a system.

    Real-world data sets would be preferable in theory but in practice you cannot test a lot of things that matter using them. I would not suggest using badly designed randomly generated data sets but a good synthetic data set generator can be an excellent tool.

    February 4, 2011 at 10:12 am
  • Baron Schwartz

    Here’s another one to add to the mix: the EFF’s dump of SSL certificate data (4 GB).

    February 27, 2011 at 12:30 pm
  • Theresia

    Thanks for this post. I am currently searching for a dataset of blogs or forums. I need it to test different spam detection techniques (as part of my studies), but I did not manage to find any available data. Ideally it should be realistic data that contains both spam comments and legitimate comments.

    Do you have an idea of where I can find such datasets, please? :s I’ve been searching for a long time now and have not found anything useful.


    March 1, 2011 at 1:08 pm
  • Ronald Speelman

    Hi Baron,

    Thanks for this list. Some of them are really useful, especially the airline data, which is very difficult to generate.
    For other stuff like e-mail addresses etc., I used to use a tool that was nice and free, but very limited, so about a year ago I decided to write my own generator that can be used directly in your application code.

    I have published an updated version (the code is provided in a zip file) on my blog, because this version is very versatile and can be used to generate very good, realistic-looking test data. This might be useful for many MySQL developers. This is the url to the article:

    June 24, 2012 at 6:47 am
  • chad ambrosius

    It looks like the BP energy use data is now available as an Excel file. The same site also appears to have a lot of potentially good data sets (census, earthquakes, etc.), so you don’t have to search individual places like the Census Bureau or the National Earthquake Information Center.

    September 7, 2012 at 11:58 am
  • Martin Robaey

    And would someone have a clue where to find public datasets produced in a NoSQL database?

    November 17, 2014 at 2:30 pm
  • Ashoke

    Hi Baron,

    Do you know of any data sets that follow a normal distribution?


    January 12, 2015 at 5:16 am
  • Md Monjur Ul Hasan

    I have created an SQLite version of the employee database, but I cannot push it to my fork. I would like to contribute it; how can I do that? The .db file is 73 MB in size.

    December 4, 2016 at 7:33 pm
  • farid

    I have 24 tables in the schema, and I need to run DML and SELECT operations against all 24 tables to check database performance. Can someone give me an idea of how I can generate load on the DB?

    If you need any details, let me know.

    August 10, 2017 at 1:27 am

Comments are closed.
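As J. Andrew Rogers argues in the comments, a synthetic generator with configurable skew can stand in for real data at arbitrary scale. Here is a minimal sketch in Python; the table shape, Zipf exponent, and amount distribution are all illustrative assumptions, not taken from any dataset mentioned above:

```python
import random
import datetime as dt

def synthetic_rows(n_rows, n_customers=1_000, skew=1.2, seed=7):
    """Generate (customer_id, order_ts, amount) rows where customer
    activity follows a Zipf-like distribution: low ids are 'hot'."""
    rng = random.Random(seed)
    # Probability of customer c is proportional to 1 / (c + 1) ** skew,
    # so raising `skew` concentrates more traffic on fewer customers.
    weights = [1 / (c + 1) ** skew for c in range(n_customers)]
    customers = rng.choices(range(n_customers), weights=weights, k=n_rows)
    start = dt.datetime(2011, 1, 1)
    rows = []
    for cust in customers:
        ts = start + dt.timedelta(seconds=rng.randrange(86_400 * 30))
        amount = round(rng.lognormvariate(3.0, 1.0), 2)  # long-tailed amounts
        rows.append((cust, ts, amount))
    return rows

rows = synthetic_rows(10_000)
hot = sum(1 for cust, _, _ in rows if cust < 10)
print(f"top-10 customers account for {hot / len(rows):.1%} of orders")
```

Dumping rows like these to CSV and loading them with LOAD DATA INFILE gives a table of whatever size you need, with hot-key and long-tail behavior much closer to production than a uniformly random fill.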
