
How to make incremental backups for a week

Latest Forum Posts - July 14, 2016 - 1:01am
Hello everyone,

I have tried incremental backups.

In our scenario, we take a full backup on Saturday and incremental backups from Monday to Friday.

After the incrementals, I prepared the full backup with --redo-only.

Then I ran --apply-log for Monday's incremental with the full backup as the base directory, and that worked. The issue is with Tuesday's incremental: I cannot run --apply-log for it, because its base directory is Monday's incremental.

That step does not work.

Please, can anyone help with how to make incremental backups for a week?
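For what it's worth, the sequence documented for Percona XtraBackup (sketched here with hypothetical directory names) prepares the full backup with --redo-only, then applies each incremental against the full backup's directory, in chronological order. The base for every incremental is always the full backup directory, never the previous incremental:

```shell
# 1. Prepare the full backup, but keep it ready to accept more increments:
innobackupex --apply-log --redo-only /backups/full

# 2. Apply each incremental to the FULL backup, oldest first.
#    All but the LAST incremental use --redo-only:
innobackupex --apply-log --redo-only /backups/full --incremental-dir=/backups/inc-mon
innobackupex --apply-log --redo-only /backups/full --incremental-dir=/backups/inc-tue
innobackupex --apply-log --redo-only /backups/full --incremental-dir=/backups/inc-wed
innobackupex --apply-log --redo-only /backups/full --incremental-dir=/backups/inc-thu

# 3. The final incremental is applied WITHOUT --redo-only:
innobackupex --apply-log /backups/full --incremental-dir=/backups/inc-fri

# 4. One last prepare rolls back uncommitted transactions:
innobackupex --apply-log /backups/full
```

After the final prepare, /backups/full contains the fully merged, restorable backup.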


percona-nagios-plugins-1.1.6 on an OEL5 system compatibility

Latest Forum Posts - July 13, 2016 - 1:37pm

When attempting to install "percona-nagios-plugins-1.1.6" on an OEL5 system the installation process fails:

$cat /etc/*release
Enterprise Linux Enterprise Linux Server release 5.8 (Carthage)
Oracle Linux Server release 5.8
Red Hat Enterprise Linux Server release 5.8 (Tikanga)
$yum install percona-nagios-plugins
Loaded plugins: priorities, security
nexus_rpm | 1.5 kB 00:00
nexus_rpm_snapshots | 1.5 kB 00:00
52 packages excluded due to repository priority protections
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package percona-nagios-plugins.noarch 0:1.1.6-1 set to be updated
--> Finished Dependency Resolution

Dependencies Resolved

===========================================================================
 Package                   Arch      Version    Repository    Size
===========================================================================
 percona-nagios-plugins    noarch    1.1.6-1    enservio      29 k

Transaction Summary
===========================================================================
Install       0 Package(s)
Upgrade       1 Package(s)

Total size: 29 k
Is this ok [y/N]: y
Downloading Packages:
Running rpm_check_debug
ERROR with rpm_check_debug vs depsolve:
rpmlib(FileDigests) is needed by percona-nagios-plugins-1.1.6-1.noarch
rpmlib(PayloadIsXz) is needed by percona-nagios-plugins-1.1.6-1.noarch
(1, [u'Please report this error in'])

The above error suggests that the 1.1.6 plugin package is not built for OEL5, but for OEL6 or later platforms.

Here is a post that indicates that's the case:

The 'Percona Monitoring Plugins Documentation Release 1.1.6' PDF that I read does not have any specifics on a minimum platform or kernel version required; it only specifies the MySQL version required:

"1.2 System Requirements
The plugins are all written in standard Unix shell script. They should run on any Unix or Unix-like operating system, such as GNU/Linux, Solaris, or FreeBSD.
The plugins are designed to be used with MySQL 5.0 and newer versions, but they may work on 4.1 or older versions as well."

What version of the plugins is supposed to be compatible with OEL5: 1.1.4, 1.1.5, or 1.1.6?

Many thanks in advance!
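A quick way to confirm the diagnosis (a sketch; the version constraints shown in the comments are the usual ones, not taken from this particular package) is to check which rpmlib capabilities the local rpm provides:

```shell
# On the OEL5 box, list the rpmlib capabilities the local rpm supports:
rpm -q --provides rpm | grep -i rpmlib

# The failing package needs these two, which typically appear only with
# rpm >= 4.6 (i.e. EL6 and later):
#   rpmlib(FileDigests)
#   rpmlib(PayloadIsXz)
# If neither line appears in the output, the package was built for a newer
# platform and cannot be installed with EL5's rpm.
```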

Using Ceph with MySQL

Latest MySQL Performance Blog posts - July 13, 2016 - 10:48am

Over the last year, the Ceph world drew me in. Partly because of my taste for distributed systems, but also because I think Ceph represents a great opportunity for MySQL specifically and databases in general. The shift from local storage to distributed storage is similar to the shift from bare-disk host configurations to LVM-managed disk configurations.

Most of the work I’ve done with Ceph was in collaboration with folks from RedHat (mainly Brent Compton and Kyle Bader). This work resulted in a number of talks presented at the Percona Live conference in April and at the RedHat Summit in San Francisco at the end of June. I could write a lot about using Ceph with databases, and I hope this post is the first in a long series on Ceph. Before starting with use cases, setup configurations and performance benchmarks, I think I should quickly review the architecture and principles behind Ceph.

Introduction to Ceph

Inktank created Ceph a few years ago as a spin-off of the hosting company DreamHost. RedHat acquired Inktank in 2014 and now offers it as a storage solution. OpenStack uses Ceph as its dominant storage backend. This blog, however, focuses on a more general review and isn’t restricted to a virtual environment.

A simplistic way of describing Ceph is to say it is an object store, just like S3 or Swift. That is true, but only up to a point. There are at minimum two types of nodes in a Ceph cluster: monitors and object storage daemons (OSDs). The monitor nodes are responsible for maintaining a map of the cluster or, if you prefer, the Ceph cluster metadata. Without access to the information provided by the monitor nodes, the cluster is useless. Redundancy and quorum at the monitor level are important.

Any non-trivial Ceph setup has at least three monitors. The monitors are fairly lightweight processes and can be co-hosted on OSD nodes (the other node type needed in a minimal setup). The OSD nodes store the data on disk, and a single physical server can host many OSD nodes – though it would make little sense for it to host more than one monitor node. The OSD nodes are listed in the cluster metadata (the “crushmap”) in a hierarchy that can span data centers, racks, servers, etc. It is also possible to organize the OSDs by disk types to store some objects on SSD disks and other objects on rotating disks.

With the information provided by the monitors’ crushmap, any client can access data based on a predetermined hash algorithm. There’s no need for a relaying proxy. This becomes a big scalability factor since these proxies can be performance bottlenecks. Architecture-wise, it is somewhat similar to the NDB API, where – given a cluster map provided by the NDB management node – clients can directly access the data on data nodes.

Ceph stores data in a logical container called a pool. With the pool definition comes a number of placement groups. The placement groups are shards of data across the pool. For example, on a four-node Ceph cluster, if a pool is defined with 256 placement groups (pg), then each OSD will have 64 pgs for that pool. You can view the pgs as a level of indirection to smooth out the data distribution across the nodes. At the pool level, you define the replication factor (“size” in Ceph terminology).

The recommended values are a replication factor of three for spinners and two for SSD/Flash. I often use a size of one for ephemeral test VM images. A replication factor greater than one associates each pg with one or more pgs on the other OSD nodes.  As the data is modified, it is replicated synchronously to the other associated pgs so that the data it contains is still available in case an OSD node crashes.
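The pool-level settings above map to a couple of CLI commands (a sketch; the pool name is hypothetical, and 256/3 are just the example values from the text):

```shell
# Create a pool with 256 placement groups and set its replication
# factor ("size") to 3:
ceph osd pool create mysql-data 256 256
ceph osd pool set mysql-data size 3

# Inspect the result:
ceph osd pool get mysql-data pg_num
ceph osd pool get mysql-data size
```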

So far, I have just discussed the basics of an object store. But the ability to update objects atomically in place makes Ceph different and better (in my opinion) than other object stores. The underlying object access protocol, rados, updates an arbitrary number of bytes in an object at an arbitrary offset, exactly as if it were a regular file. That update capability allows for much fancier usage of the object store – for things like the support of block devices, rbd devices, and even a network file system, CephFS.

When using MySQL on Ceph, the rbd disk block device feature is extremely interesting. A Ceph rbd disk is basically the concatenation of a series of objects (4MB objects by default) that are presented as a block device by the Linux kernel rbd module. Functionally it is pretty similar to an iSCSI device as it can be mounted on any host that has access to the storage network and it is dependent upon the performance of the network.
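The rbd workflow described above can be sketched in a few commands (pool, image and mount-point names are hypothetical):

```shell
# Create a 10 GB rbd image, expose it through the kernel rbd module, and
# mount it like any block device:
rbd create mysql-data/datavol --size 10240   # size in MB
rbd map mysql-data/datavol                   # e.g. appears as /dev/rbd0
mkfs.xfs /dev/rbd0
mount /dev/rbd0 /var/lib/mysql
```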

The benefits of using Ceph

In a world striving for virtualization and containers, Ceph makes it easy to move database resources between hosts.

IO scalability
On a single host, you have access only to the IO capabilities of that host. With Ceph, you basically put in parallel all the IO capabilities of all the hosts. If each host can do 1000 iops, a four-node cluster could reach up to 4000 iops.

High availability
Ceph replicates data at the storage level, and provides resiliency to storage node crashes – a kind of DRBD on steroids.

Fast snapshots
Ceph rbd block devices support snapshots, which are quick to make and have no performance impacts. Snapshots are an ideal way of performing MySQL backups.

Thin provisioning
You can clone and mount Ceph snapshots as block devices. This is a useful feature to provision new database servers for replication, either with asynchronous replication or with Galera replication.
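The clone-and-mount provisioning flow can be sketched like this (image and snapshot names are hypothetical; the protect step is required before cloning):

```shell
# Snapshot a master's data volume, protect it, and clone it for a new slave:
rbd snap create mysql-data/datavol@base-backup
rbd snap protect mysql-data/datavol@base-backup   # clones need a protected snapshot
rbd clone mysql-data/datavol@base-backup mysql-data/slave1-vol
rbd map mysql-data/slave1-vol                     # thin clone, ready to mount
```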

The caveats of using Ceph

Of course, nothing is free. Ceph use comes with some caveats.

Ceph reaction to a missing OSD
If an OSD goes down, the Ceph cluster starts copying data to restore the specified number of copies. Although good for high availability, the copying process significantly impacts performance. This implies that you cannot run a Ceph cluster with nearly full storage: you must have enough free disk space to handle the loss of one node.

The “no out” OSD attribute mitigates this, and prevents Ceph from reacting automatically to a failure (but you are then on your own). When using the “no out” attribute, you must monitor and detect that you are running in degraded mode and take action. This resembles a failed disk in a RAID set. You can choose this behavior as default with the mon_osd_auto_mark_auto_out_in setting.

Scrubbing
Every day and every week (deep scrubbing), Ceph runs scrub operations that, although they are throttled, can still impact performance. You can modify the intervals and the hours that control the scrub action. Once per day and once per week are likely fine, but you need to set osd_scrub_begin_hour and osd_scrub_end_hour to restrict the scrubbing to off hours. Also, scrubbing throttles itself to avoid putting too much load on the nodes. The osd_scrub_load_threshold variable sets the threshold.
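The scrub settings just mentioned live in the OSD section of ceph.conf; a sketch (the hours and threshold here are arbitrary examples, not recommendations):

```text
[osd]
osd_scrub_begin_hour     = 1
osd_scrub_end_hour       = 5
osd_scrub_load_threshold = 0.5
```

The same values can also be injected at runtime with `ceph tell osd.* injectargs`, though a config-file change is the more durable route.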

Ceph tuning
Ceph has many parameters, so tuning Ceph can be complex and confusing. Since distributed systems push the hardware, properly tuning Ceph might require things like distributing interrupt load among cores, thread core pinning, and handling of NUMA zones – especially if you use high-speed NVMe devices.


Hopefully, this post provided a good introduction to Ceph. I’ve discussed the architecture, the benefits and the caveats of Ceph. In future posts, I’ll present use cases with MySQL. These cases include performing Percona XtraDB Cluster SST operations using Ceph snapshots, provisioning async slaves and building HA setups. I also hope to provide guidelines on how to build and configure an efficient Ceph cluster.

Finally, a note for those who think cost and complexity put building a Ceph cluster out of reach. The picture below shows my home cluster (which I use quite heavily). The cluster comprises four ARM-based nodes (Odroid-XU4), each with a 2 TB portable USB 3.0 hard disk, a 16 GB eMMC flash disk and a gigabit Ethernet port.

I won’t claim record breaking performance (although it’s decent), but cost-wise it is pretty hard to beat (at around $600)!


Installation failing on RHEL 7.2

Latest Forum Posts - July 13, 2016 - 7:19am
I tried to follow the instructions to install on RHEL 7.2. I successfully ran part 1:

Code: yum install

When I run part 2 (or part 3):

Code: yum list | grep percona

I receive an error:

Code: $ yum list
Loaded plugins: langpacks, product-id, search-disabled-repos, subscription-manager
Repo rhel-7-workstation-rpms forced skip_if_unavailable=True due to: /etc/pki/entitlement/8253033614287909731-key.pem
[Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
To address this issue please refer to the below knowledge base article

If above article doesn't help to resolve this issue please open a ticket with Red Hat Support.

One of the configured repositories failed (Percona-Release YUM repository - noarch),
and yum doesn't have enough cached data to continue. At this point the only
safe thing yum can do is fail. There are a few ways to work "fix" this:

    1. Contact the upstream for the repository and get them to fix the problem.

    2. Reconfigure the baseurl/etc. for the repository, to point to a working
       upstream. This is most often useful if you are using a newer
       distribution release than is supported by the repository (and the
       packages for the previous distribution release still work).

    3. Disable the repository, so yum won't use it by default. Yum will then
       just ignore the repository until you permanently enable it again or use
       --enablerepo for temporary usage:

           yum-config-manager --disable percona-release-noarch

    4. Configure the failing repository to be skipped, if it is unavailable.
       Note that yum will try to contact the repo. when it runs most commands,
       so will have to try and fail each time (and thus. yum will be be much
       slower). If it is a very temporary problem though, this is often a nice
       compromise:

           yum-config-manager --save --setopt=percona-release-noarch.skip_if_unavailable=true

failure: repodata/repomd.xml from percona-release-noarch: [Errno 256] No more mirrors to try.
[Errno 14] HTTP Error 404 - Not Found

For the URL that is failing, it seems that the directory '7Workstation' does not exist. Sure would be nice if this just worked!

What I did instead: I found package 'percona-xtrabackup.x86_64' in the EPEL repositories (which at the time of writing installed version 2.2) and installed like this:

Code: yum install percona-xtrabackup.x86_64

TokuMX: M/R Post Processing Performance Issues

Latest Forum Posts - July 13, 2016 - 4:45am
We are using TokuMX and are observing the following performance issues with Map-Reduce...

* We are running M/R on several thousand documents from the "source" collection and saving results into a "target" collection.
* When we do this for the first time (i.e. when the target collection does not contain results), M/R finishes quickly (a couple of minutes max for several thousand records). No issues there.
* However, documents in the "source" collection get updated over time, and we need to re-do the map-reduce from scratch. At this point, we first delete the previous M/R results from the target collection and then re-run the exact same M/R process. But this time the same process takes anywhere from 1 to 3 hours to finish.

As one can see, there is seemingly no difference in the map-reduce code or in the state of the target collection. Does anyone have an idea why this could be happening?

When we checked the mongo logs, we noticed that the actual map and reduce functions run at the same speed both times; the difference is due to "M/R Reduce Post Processing".

In the first case, it finishes in a jiffy, and we see the following lines in the log:
Tue Jul 12 03:36:57.033 [conn1546990] M/R Reduce Post Processing Progress: 9800
Tue Jul 12 03:37:00.158 [conn1546990] M/R Reduce Post Processing Progress: 51300

So post processing is done in 4-5 seconds.
In the second case, it seems to process only a couple of hundred records per 5 seconds:
Wed Jul 13 03:00:31.802 [conn1565018] M/R Reduce Post Processing Progress: 200
Wed Jul 13 03:00:36.065 [conn1565018] M/R Reduce Post Processing Progress: 400
Wed Jul 13 03:00:40.378 [conn1565018] M/R Reduce Post Processing Progress: 600
Wed Jul 13 03:00:44.607 [conn1565018] M/R Reduce Post Processing Progress: 800
Wed Jul 13 03:11:51.050 [conn1565018] M/R Reduce Post Processing Progress: 53800

So it takes more than 10 minutes for ~50k records.

In case of deletes, TokuMX inserts a delete message into a buffer in the fractal tree, but the actual entry containing the data could still be present in the leaf node. So our guess is that the actual data entries are being deleted one by one during Reduce Post Processing, since new entries with the same IDs need to be added.

Any help in understanding this behaviour, and any pointers or alternative approaches to fixing it, would be appreciated.
P.S. We checked the behaviour with MongoDB and did not see any issues there; M/R worked at almost the same speed both times. But we would like to stick with TokuMX as much as possible, due to the overall speed advantages we get from it. We need to fix this M/R issue for that.

Incremental Backup from Fullbackup

Latest Forum Posts - July 13, 2016 - 1:58am
Hello everyone,

I am taking a full backup of my database.

Then I take incremental backups as well,

but I am not clear on the use of --apply-log-only and --redo-only.

I don't know where to use these two flags.

I have also referred to the Percona XtraBackup docs. Can anyone help with this?
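One point that often causes this confusion (a sketch with hypothetical directory names): the two flags do the same job for the two different tools. The innobackupex wrapper uses --redo-only together with --apply-log, while the xtrabackup binary uses --apply-log-only together with --prepare. Either way, the flag keeps the backup in a state that can still accept further incrementals:

```shell
# innobackupex style:
innobackupex --apply-log --redo-only /backups/full
innobackupex --apply-log --redo-only /backups/full --incremental-dir=/backups/inc1

# xtrabackup style (equivalent):
xtrabackup --prepare --apply-log-only --target-dir=/backups/full
xtrabackup --prepare --apply-log-only --target-dir=/backups/full --incremental-dir=/backups/inc1
```

The last incremental, and the final prepare, are run without the flag so that uncommitted transactions get rolled back.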

pmp-check-unix-memory check specifics

Latest Forum Posts - July 12, 2016 - 12:59pm
Hello there,

It looks like 'pmp-check-unix-memory' only checks memory allocation on the system it's running on (in my case these checks run on one Nagios server against 3 Percona MySQL VMs, and all 3 checks just report the free memory of the Nagios host). According to the documentation I read, there is no host parameter that can be passed with an '-H' option/switch to the check:

Here is what my Nagios setup looks like for one of the hosts I need to check memory on. (Previously I defined the service/command separately, but once I realized it only pulls info from the local Nagios host, I tried defining the same check separately for each host within the 'myperconadbservername.cfg' file, hoping that the check's logic would take the host name defined in the define host {} block – but it doesn't.)

define host {
    use prod-server
    host_name myperconadbservername
    hostgroups linux-servers,mysql-servers,mysql-percona
}

define service {
    use mysql-percona-template
    service_description rdba_unix_memory
    check_command rdba-check-unix-memory3!96!98
}

define command {
    command_name rdba-check-unix-memory3
    command_line $USER1$/pmp-check-unix-memory -w $ARG1$ -c $ARG2$
}

How can 'pmp-check-unix-memory' check the memory of the actual 'myperconadbservername' host, instead of reporting the memory of the local Nagios host?
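A common pattern for this (a sketch assuming NRPE is available on the database hosts; the command names and plugin path are hypothetical) is to run the plugin on each database host and have the Nagios server call it remotely via check_nrpe:

```text
# On each database host, expose the check through NRPE (nrpe.cfg):
command[rdba_unix_memory]=/usr/lib64/nagios/plugins/pmp-check-unix-memory -w 96 -c 98

# On the Nagios server, call it remotely instead of locally:
define command {
    command_name rdba-check-unix-memory-remote
    command_line $USER1$/check_nrpe -H $HOSTADDRESS$ -c rdba_unix_memory
}
```

check_by_ssh would work the same way; either approach runs the plugin on the machine whose memory you actually want to measure.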

Thanks in advance!

Call for Percona Live Europe MongoDB Speakers

Latest MySQL Performance Blog posts - July 12, 2016 - 9:23am

Want to become one of the Percona Live Europe MongoDB speakers? Read this blog for details.

The Percona Live Europe, Amsterdam call for papers is ending soon, and we are looking for MongoDB speakers! This is a great way to build your personal and company brands. If you haven’t submitted a paper yet, here is a list of ideas we would love to see covered at this conference:

If you find any of these ideas interesting, simply let @Percona know and we can help get you listed as the speaker. If nothing on this list strikes your fancy or piques your interest, please submit a similar talk of your own – we’d love to find out what you have to say!

Here are some other ideas that might get your thoughts bubbling:

  • Secret use of “hidden” and tagged ReplicaSets
  • To use a hashed shard key or not?
  • Understanding how a shard key is used in MongoDB
  • Using scatter-gathers to your benefit
  • WriteConcern and its use cases
  • How to quickly build a sharded environment for MongoDB in Docker
  • How to monitor and scale MongoDB in the cloud
  • MongoDB Virtualization: the good, the bad, and the ugly
  • MongoDB and VMware: a cautionary tale
  • Streaming MySQL bin logs to MongoDB and back again
  • How to ensure that other technologies can safely use the oplog for pipelining

The Percona team and conference committee would love to see what other ideas the community has that we haven’t covered. Anything helps: from tweeting @Percona with topics you would like to see, to sharing topics you like on Twitter, or even just sharing the link to the call for papers.

The call for papers closes next Monday (7/18), so let’s get some great things in this week and build a truly dynamic conference!

Percona Server for MongoDB 3.0.12-1.8 is now available

Latest MySQL Performance Blog posts - July 12, 2016 - 8:33am

Percona announces the release of Percona Server for MongoDB 3.0.12-1.8 on July 12, 2016. Download the latest version from the Percona web site or the Percona Software Repositories.

Percona Server for MongoDB 3.0.12-1.8 is an enhanced, open source, fully compatible, highly scalable, zero-maintenance downtime database supporting the MongoDB v3.0 protocol and drivers. Based on MongoDB 3.0.12, it extends MongoDB with MongoRocks and PerconaFT storage engines, as well as features like external authentication and audit logging. Percona Server for MongoDB requires no changes to MongoDB applications or code.

NOTE: The MongoRocks storage engine is still under development. There is currently no officially released version of MongoRocks recommended for production.

This release includes all changes from MongoDB 3.0.12, and the following known issue that will be fixed in a future release:

  • The software version is incorrectly reported by the --version option.

You can find the release notes in the official documentation.


Sudden error 1045 after server reboot

Latest Forum Posts - July 12, 2016 - 3:53am
We've been running Percona on our server for 2 years now. No issues until today. Our hosting company has detected our CPU was overheating and they've fixed the cooling. To do that they had to shut the server down. After starting it up again, something is not right but we can't pinpoint what. I'll try to write what we currently know so any ideas are welcome.

service mysql stop fails.

It fails while executing:

Code: /usr/bin/mysqladmin --defaults-file=/etc/mysql/debian.cnf shutdown

The full output is:

Code: ++ /usr/bin/mysqladmin --defaults-file=/etc/mysql/debian.cnf shutdown
+ shutdown_out='/usr/bin/mysqladmin: connect to server at '\''localhost'\'' failed
error: '\''Access denied for user '\''root'\''@'\''localhost'\'' (using password: NO)'\'''

If I execute "mysqladmin shutdown", it shuts down properly and I can then do service mysql stop. When starting it back up I get this:

Code: root@xxx001:~# service mysql start
[ ok ] Starting MySQL (Percona Server) database server: mysqld.
[info] Checking for corrupt, not cleanly closed and upgrade needing tables..
root@xxx001:~# ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: NO)

Our root has a password. If I access via mysql -u root -p and enter the password, I enter mysql properly and have full access to all databases and tables. MySQL Workbench, set up to connect via SSL, connects successfully too.

~/.my.cnf has root passwords in it for [mysqladmin], [client], [mysql], [mysqldump] and [mysqldiff].

/etc/mysql/ contains:

Code: drwxr-xr-x 3 root root 4096 Jul 12 11:23 .
drwxr-xr-x 89 root root 4096 May 16 10:42 ..
drwxr-xr-x 2 root root 4096 Nov 2 2015 conf.d
-rw------- 1 root root 0 Nov 2 2015 debian.cnf
-rwxr-xr-x 1 root root 1286 Nov 2 2015 debian-start
-rw-r--r-- 1 root root 3646 Nov 2 2015 my.cnf

So everything we've checked is there. Two days ago we did service mysql restart and it restarted successfully, with no problem with the 'root' access. Now it suddenly fails and we can't resolve it to get our sites back to life.
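One observation from the listing above, offered as a hypothesis rather than a confirmed diagnosis: debian.cnf is 0 bytes. On Debian-based systems the init script's stop path authenticates using the credentials stored in that file, which would explain why mysqladmin invoked with --defaults-file=/etc/mysql/debian.cnf connects with no password. A typical debian.cnf has this shape (the password here is a placeholder, not a value to copy):

```text
[client]
host     = localhost
user     = debian-sys-maint
password = PLACEHOLDER_PASSWORD
socket   = /var/run/mysqld/mysqld.sock

[mysql_upgrade]
host     = localhost
user     = debian-sys-maint
password = PLACEHOLDER_PASSWORD
socket   = /var/run/mysqld/mysqld.sock
```

The matching account would also need to exist on the server, e.g. GRANT ALL PRIVILEGES ON *.* TO 'debian-sys-maint'@'localhost' IDENTIFIED BY 'PLACEHOLDER_PASSWORD' WITH GRANT OPTION; – again, a sketch, not a verified fix for this particular system.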

XtraBackup version and MySQL version compatibility?

Latest Forum Posts - July 11, 2016 - 3:00pm
Dear forum,

I have 2 questions regarding XtraBackup:

1) We currently use percona-xtrabackup-2.0.6 with MySQL 5.5.28. Is it valid to switch to the latest xtrabackup version, percona-xtrabackup-2.3.5-1.el6.x86_64.rpm, with MySQL 5.5.28?

2) If yes, is it possible to restore a backup (containing MyISAM and InnoDB tables) with the latest xtrabackup 2.3.5-1 if the backup was taken with xtrabackup version 2.0.6?

Best regards,

pt-online-schema change and DB Backups, how to handle backups during schema change?

Latest Forum Posts - July 11, 2016 - 1:24pm
I have a MySQL database that is hundreds of GB in size and it requires an alter. Normally mysqlhotcopy is used for a full backup once a month and the binary logs are used for incremental backups on a daily basis. Based on my tests the pt-online-schema-change run will take almost 5 days on my system. The toolkit is great in that it keeps my system up and running the whole time but during this time how should I handle backups?

A full backup would double the backup size as the temp tables would be backed up as well. The tool disables binary logging so there are no binary logs to grab either. Anyone have some tips for handling backups during long alters?

Webinar July 14, 10 am PDT: Introduction into storage engine troubleshooting

Latest MySQL Performance Blog posts - July 11, 2016 - 10:33am

Please join Sveta Smirnova for a webinar Thursday, July 14 at 10 am PDT (UTC-7) on an Introduction Into Storage Engine Troubleshooting.

The variety of MySQL storage engines provides great flexibility for database users, administrators and developers. At the same time, engines add an extra level of complexity when it comes to troubleshooting issues. Before choosing the right troubleshooting tool, you need to answer the following questions (and often others):

  • What part of the server is responsible for my issue?
  • Was a lock set at the server or engine level?
  • Is a standard or engine-specific tool better?
  • Where are the engine-specific options?
  • How do you know if an engine-specific command exists?

This webinar will discuss these questions and how to find the right answers across all storage engines in a general sense.

You will also learn:

  • How to troubleshoot issues caused by simple storage engines such as MyISAM or Memory
  • Why Federated is deprecated, and what issues affected that engine
  • How Blackhole can affect replication

. . . and more.

Register for the webinar here.

Note: We will hold a separate webinar specifically for InnoDB.

Sveta Smirnova, Principal Technical Services Engineer. Sveta joined Percona in 2015. Her main professional interests are problem-solving, working with tricky issues and bugs, finding patterns that can solve typical issues more quickly, and teaching others how to deal with MySQL issues, bugs and gotchas effectively. Before joining Percona, Sveta worked as a Support Engineer in the MySQL Bugs Analysis Support Group at MySQL AB, Sun and Oracle. She is the author of the book “MySQL Troubleshooting” and of JSON UDF functions for MySQL.

Use pt-table-checksum

Latest Forum Posts - July 11, 2016 - 7:03am

I have 2 MySQL instances, master and slave. I added some rows to the slave so it is not consistent with the master.

When I run pt-table-checksum I don't see any DIFFS:

Code: ii1> pt-table-checksum --empty-replicate-table -d test S=/tmp/5.6.22_3306_Master/data/mysql.sock --user=SlaveUser --password=***
TS ERRORS DIFFS ROWS CHUNKS SKIPPED TIME TABLE
07-11T17:02:15 0 0 7 1 0 0.008 test.Countries
07-11T17:02:15 0 0 0 1 0 0.006 test.Persons
ii1> pt-table-checksum --empty-replicate-table -d test S=/tmp/5.6.22_3310_Slave/data/mysql.sock --user=SlaveUser --password=***
TS ERRORS DIFFS ROWS CHUNKS SKIPPED TIME TABLE
07-11T17:02:20 0 0 13 1 0 0.007 test.Countries
07-11T17:02:20 0 0 0 1 0 0.006 test.Persons

What am I missing?
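One point worth noting here (based on how the tool is documented to work, not on this poster's exact setup): pt-table-checksum is designed to be run against the master only. It computes checksums on the master, lets the same statements replicate, and then compares the results on the replicas. Running it directly against the slave just checksums the slave against itself, so DIFFS stays 0. A sketch, with the connection details taken from the post:

```shell
# Run against the MASTER only; the tool discovers replicas itself:
pt-table-checksum -d test S=/tmp/5.6.22_3306_Master/data/mysql.sock \
    --user=SlaveUser --password=*** --recursion-method=processlist

# Differences can then be re-checked without re-checksumming:
pt-table-checksum --replicate-check-only \
    S=/tmp/5.6.22_3306_Master/data/mysql.sock --user=SlaveUser --password=***
```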


Exporting database from mysql to percona

Latest Forum Posts - July 11, 2016 - 6:51am
Hi folks,

I'm trying to export a website database from localhost to my hosting account. The problem is that the website is missing some modifications which I made on localhost.
My localhost database is 5.7.9 - MySQL Community Server (GPL), and on the hosting side it is 5.6.30-76.3-log - Percona Server (GPL).
I suspect it has something to do with character encoding, although I'm not sure, because the site is in Portuguese (Brazilian).
So please advise whether I have to do any steps before I upload to my hosting account.


Could pt-table-checksum use CPU's crc32 instruction ?

Latest Forum Posts - July 11, 2016 - 1:49am
Many CPUs include a CRC32 instruction, so could pt-table-checksum use this hardware acceleration?
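A clarification that may partly answer this (hedged, based on the tool's documented design): pt-table-checksum does not checksum rows on the client at all. The checksum is computed inside the MySQL server by the SQL function chosen with --function (CRC32 by default), so any hardware acceleration would have to come from the server's implementation of that function, not from the Perl client. For example (connection details are hypothetical):

```shell
# The hash is computed server-side by the function named here:
pt-table-checksum --function=CRC32 h=master-host,u=checksum_user,p=***
```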
