Xtrabackup and Innobackupex Hot Backup solution

  • #1

    Hi guys,

    I have created a 3-node cluster that has been running for over 7 months.
    Every 10 minutes it runs an incremental backup via innobackupex-1.5.1 (at the beginning of each day it creates a new FULL backup, then only incrementals for the rest of the day).
    So at any time I have a backup of critical data that is at most 10 minutes old. But what about the rest of the data? Can I get backup granularity down to 1 second (point-in-time recovery)?
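
For reference, a cycle like the one described (daily FULL, then incrementals) can be sketched as a script run from cron every 10 minutes; all paths and credentials here are placeholders, not the poster's actual script:

```shell
#!/bin/sh
# Hypothetical sketch: one FULL backup per day, incrementals in between.
DAY_DIR=/backups/$(date +%Y-%m-%d)
LATEST=/backups/latest          # symlink to the most recent backup

if [ ! -d "$DAY_DIR/full" ]; then
    # First run of the day: take a new FULL backup as the base.
    innobackupex --user=backup --password=secret --no-timestamp "$DAY_DIR/full"
    ln -sfn "$DAY_DIR/full" "$LATEST"
else
    # Later runs: incremental against the most recent backup.
    INC="$DAY_DIR/inc-$(date +%H-%M)"
    innobackupex --user=backup --password=secret --no-timestamp \
        --incremental "$INC" --incremental-basedir="$LATEST"
    ln -sfn "$INC" "$LATEST"
fi
```

A crontab entry such as `*/10 * * * * /usr/local/bin/backup.sh` would drive it.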

  • #2
    As your database keeps growing, the incremental backup still has to read the whole database to find the pages that have changed since the last backup. So for really big databases, where a plain read alone takes longer than 10 minutes, you can't take backups that often.

  • #3
    For example, say the last incremental backup ran at 2014-Apr-22 15:00:01 and the next one is due 10 minutes later. If a wrong SQL statement (DROP, DELETE, etc.) is executed within that window, how can I restore the data?

  • #4
    Taking incremental backups every 10 minutes seems excessive and not scalable, so I would definitely rethink that process to start with. Managing it sounds like a nightmare, and as the database grows it seems highly probable that the incremental backups will start to overlap because they cannot finish within the 10-minute window.

    As for other options, you could use a delayed slave: a slave database that is intentionally kept behind in time, say by an hour. Then if someone accidentally drops a table, you can go to the delayed slave and retrieve the table before the DROP TABLE statement is applied there. This can be done with pt-slave-delay or with the functionality built into MySQL 5.6.

    http://www.percona.com/doc/percona-t...ave-delay.html
    https://dev.mysql.com/doc/refman/5.6...n-delayed.html
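
As a sketch of the two approaches mentioned (the one-hour delay and the host are examples, not recommendations):

```shell
# MySQL 5.6+: delay replication on the slave by one hour using the
# built-in MASTER_DELAY option.
mysql -e "STOP SLAVE; CHANGE MASTER TO MASTER_DELAY = 3600; START SLAVE;"

# Pre-5.6 alternative: Percona Toolkit's pt-slave-delay keeps the
# slave's SQL thread about one hour behind, checking every 15 seconds.
pt-slave-delay --delay 1h --interval 15s h=127.0.0.1
```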

  • #5
    Thanks for your reply! But in this situation we still lose some data from the database anyway. I could restore a dropped or deleted table or database, but what about the data that was inserted into other tables or databases in the meantime?

  • #6
    So, as you have told me, I could restore data from up to 1 hour ago (if the slave lags behind the master by 1 hour), but during that last hour a lot of data will have been inserted into the master node's database, and I would lose it.

  • #7
    And if I set innobackupex-1.5.1 to run via crontab, say every 1 hour, wouldn't that be the same as a delayed slave?

  • #8
    So I have 3 Percona XtraDB Cluster nodes: 192.168.0.1, 192.168.0.2, 192.168.0.3, all connected via a 1 Gbit/s Ethernet switch. What should I do to prevent any data loss (even 10 seconds of data loss)?

  • #9
    Please advise us on the best solution.

  • #10
    And is innobackupex-1.5.1 a combination of Xtrabackup and Innobackupex (which also allows backing up MyISAM tables)?

  • #11
    No matter what you do, there is a chance of losing data. But with the delayed slave you could likely recover from many situations with no data loss. For instance, if someone drops a table, you can use mysqldump to export the data for just that table from the delayed slave and load it back onto the master.

    Aside from this, the other method, as mentioned, is to use your binlogs. In a disaster scenario you would load your most recent backup (full backup plus any incrementals), and then replay whatever binlogs you have from after the latest backup to bring you up to the most recent data you have.
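
The binlog replay mentioned above can be sketched like this (file names, the start position, and the stop time are placeholders, taken from an imagined xtrabackup_binlog_info):

```shell
# xtrabackup_binlog_info in the backup directory records where the
# backup ended, e.g. "mysql-bin.000042  1077".
cat /backups/full/xtrabackup_binlog_info

# Replay everything from that position up to just before the bad
# statement, piping the decoded events back into the server.
mysqlbinlog --start-position=1077 \
    --stop-datetime="2014-04-22 15:05:00" \
    /var/lib/mysql/mysql-bin.000042 /var/lib/mysql/mysql-bin.000043 \
    | mysql -u root -p
```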

  • #12
    > Aside from this, the other method, as mentioned, is to use your binlogs. In a disaster scenario you would load your most recent backup (full backup plus any incrementals), and then replay whatever binlogs you have from after the latest backup to bring you up to the most recent data you have.

    Could you please give me a link or some info on how I could do this?

  • #13
    How often should I do backups?

    I have 3 PXC nodes. At the beginning of each day, each of them creates a FULL backup, followed by an incremental backup every hour. My bash script keeps a FULL backup and its incrementals for 12 hours, then creates a new FULL backup (and removes the previous FULL and its incrementals from the last 12 hours), takes an incremental an hour later, and so on, every day.
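
A rotation like that could be sketched as follows (directory layout and credentials are hypothetical, not the poster's actual script):

```shell
#!/bin/sh
# Start a new backup set every 12 hours; drop the previous set (FULL
# plus its incrementals) only after the new FULL backup has succeeded.
ROOT=/backups
OLD=$(ls -d "$ROOT"/set-* 2>/dev/null | tail -n 1)
NEW="$ROOT/set-$(date +%Y-%m-%d_%H)"

if innobackupex --user=backup --password=secret --no-timestamp "$NEW/full"; then
    # The new base exists, so the old set is safe to remove.
    [ -n "$OLD" ] && [ "$OLD" != "$NEW" ] && rm -rf "$OLD"
fi
```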

  • #14
    How often you back up your data is primarily a business decision, with the caveat that there are technical limitations, so what the business wants may not always be feasible for what it is willing to pay. From what I've seen, it is fairly common to do a full backup once a week and a daily incremental backup to supplement it.

    For info on using binlogs there are a lot of different sources, so you'll want to read up on what the binlogs do and how to use them:

    https://dev.mysql.com/doc/refman/5.5...-recovery.html
    http://www.percona.com/doc/percona-x...overy_ibk.html
    etc.

  • #15
    Hi,
    Scott gave you a very good suggestion. You can check how often your binlog rotates and decide how often you need to back it up.

    Recovering the database to a particular moment in its history can be done with innobackupex (XtraBackup) plus the server's binary logs. The binary log contains the operations that modified the database from a point in the past. You need a full data directory as a base, and then you apply a series of operations from the binary log to make the data match what it was at the point in time you want; search for "point-in-time recovery". If you need more details, I can send you some steps to get started.
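
The procedure can be sketched end to end like this (paths, positions, and timestamps are placeholders; the Percona documentation covers the full details):

```shell
# 1. Prepare the FULL backup, keeping it open for incrementals.
innobackupex --apply-log --redo-only /backups/full

# 2. Apply the incrementals in order; only the last one is applied
#    without --redo-only.
innobackupex --apply-log --redo-only /backups/full --incremental-dir=/backups/inc1
innobackupex --apply-log /backups/full --incremental-dir=/backups/inc2

# 3. Final prepare, restore into an empty datadir, fix ownership.
innobackupex --apply-log /backups/full
innobackupex --copy-back /backups/full
chown -R mysql:mysql /var/lib/mysql

# 4. Start the server, then replay binlogs from the position recorded
#    in xtrabackup_binlog_info up to the moment just before the mistake.
mysqlbinlog --start-position=1077 --stop-datetime="2014-04-22 15:05:00" \
    /var/lib/mysql/mysql-bin.000042 | mysql -u root -p
```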
