
  • Maintain daily log tables by merging or partitioning

    I have a requirement to maintain our application's daily transaction information in tables for logging purposes, and we also have tools that analyze the logs based on those tables.
    The problem is that one table, let us say 'x-table', grows to about 2GB per day. We merge the data every day using 'INSERT INTO x-total-table SELECT * FROM yesterday.x-table'. As a result, x-total-table becomes very large and the merge also takes a long time.
    Hence, I am looking for a solution to two problems: 1) data size and 2) merge time. For that, I would like to create day-wise tables and then build a merge table on top of them.
    Could you please suggest the right approach?

  • #2
    MySQL partitioning can handle that automatically for you, but it is quite inflexible (there is a small hard limit on the number of partitions, so it is not recommended if you are going to add one partition every day). I don't know if you were thinking about the MERGE engine, but if you were, don't.

    pt-archiver, from Percona Toolkit, is a small utility for safely archiving data (in your case, to other tables), but all of the data logic would still have to be implemented in your code.

    There are solutions for transparent multi-server querying (sharding) that could be useful even on a single server: Shard-Query and the SPIDER engine. Those are my suggestions. Sometimes, implementing your own specific way of archiving and querying across multiple tables can be fine. In any case, try to optimize your data types and minimize index size to keep things manageable.
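A minimal sketch of the day-wise-tables idea mentioned above, using SQLite in place of MySQL (the `log_YYYYMMDD` table names and the `ts`/`msg` schema are invented for illustration): keep one small table per day and let the analysis query UNION ALL across them, so nothing has to be copied into one huge total table every night.

```python
import sqlite3

# In-memory SQLite stands in for MySQL here; table and column names
# (log_YYYYMMDD, ts, msg) are made up for illustration.
conn = sqlite3.connect(":memory:")

days = ["20240101", "20240102"]
for i, day in enumerate(days):
    # One small table per day instead of one ever-growing total table.
    conn.execute(f"CREATE TABLE log_{day} (ts TEXT, msg TEXT)")
    conn.execute(f"INSERT INTO log_{day} VALUES (?, ?)", (day, f"event {i}"))

# Analysis tools can read across all daily tables with UNION ALL,
# so no nightly 'INSERT ... SELECT *' copy is needed.
union = " UNION ALL ".join(f"SELECT ts, msg FROM log_{d}" for d in days)
rows = conn.execute(union + " ORDER BY ts").fetchall()

# Retiring a whole day is a cheap DROP TABLE, not a slow DELETE
# against a multi-hundred-GB table.
conn.execute("DROP TABLE log_20240101")
```

In MySQL, the UNION ALL could be wrapped in a view that the log-analysis tools query, and dropping an old day's table becomes a near-instant metadata operation instead of a long-running delete or merge.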

    If your data grows beyond what MySQL's standard engines can handle efficiently (2GB/day does not seem to be the case yet), you can complement them with engines aimed at big data, or with external tools for indexing and analytics, such as the TokuDB engine.
