

Pipelining versus Parallel Query Execution with MySQL 5.7 X Plugin

Latest MySQL Performance Blog posts - July 6, 2016 - 12:14pm

In this blog post, we’ll look at pipelining versus parallel query execution when using X Plugin for MySQL 5.7.

In my previous blog post, I showed how to use X Plugin for MySQL 5.7 for parallel query execution. The tricks I used to make it work:

  • Partitioning by hash
  • Open N connections to MySQL, where N = number of CPU cores

I had to do it manually (as well as to sort the result at the end) as X Plugin only supports “pipelining” (which only saves the round trip time) and does not “multiplex” connections to MySQL (MySQL does not use multiple CPU cores for a single query).
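The manual fan-out trick described above can be sketched in NodeJS. This is a minimal illustration, not the code from the post: `queryPartition` is a hypothetical stand-in for a real per-connection, per-partition query, and the simulated sums are made up.

```javascript
// Sketch of the manual "parallelizing" trick: fan out one query per hash
// partition over N connections, then merge (and sort) the partial results
// on the client. queryPartition stands in for a real per-connection query.
function queryPartition(partition) {
  // Simulate an async per-partition aggregate (e.g. SUM over one partition).
  return new Promise(resolve => {
    setTimeout(() => resolve({ partition, sum: (partition + 1) * 100 }), 10);
  });
}

async function parallelSum(nPartitions) {
  // One in-flight query per partition, all running concurrently.
  const partials = await Promise.all(
    [...Array(nPartitions).keys()].map(p => queryPartition(p))
  );
  // The client must merge (and, if needed, sort) the results itself,
  // since each connection returns only its partition's share.
  partials.sort((a, b) => a.partition - b.partition);
  return partials.reduce((acc, r) => acc + r.sum, 0);
}

parallelSum(4).then(total => console.log('Total:', total)); // logs "Total: 1000"
```

With N equal to the number of CPU cores, each connection keeps one core busy, which is exactly what a single connection cannot do.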

TL;DR version

In this (long) post I’m playing with MySQL 5.7 X Plugin / X Protocol and document store. Here is the summary:

  1. X Plugin does not “multiplex” connections/sessions to MySQL. As with the original protocol, one connection to X Plugin results in one session open to MySQL.
  2. An X Plugin query (if the library supports it) returns immediately and does not wait until the query is finished (async call). MySQL works like a queue.
  3. X Plugin does not have any additional server-level durability settings. Unless you check or wait for the acknowledgement (which is asynchronous) from the server, the data might or might not be written into MySQL (“fire and forget”).

At the same time, X Protocol can be helpful if:

  • We want to implement an asynchronous client (i.e., we do not want to block network communication, such as downloads or API calls) even when the MySQL table is locked.
  • We want to use MySQL as a queue and save the round-trip time.
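The round-trip saving can be illustrated with a small simulation. All names here are hypothetical: `sendInsert` merely fakes a network round trip with a timer, it is not the connector API.

```javascript
// Sketch of "pipelining" versus classic request/response, using a fake
// server with a fixed per-message round-trip time (RTT).
const RTT = 20; // ms, simulated network round trip

function sendInsert(doc) {
  // The call returns immediately; the acknowledgement arrives later.
  return new Promise(resolve =>
    setTimeout(() => resolve({ rows_affected: 1 }), RTT));
}

async function classic(docs) {
  // One round trip per statement: wait for each ack before the next send.
  for (const d of docs) await sendInsert(d);
}

async function pipelined(docs) {
  // Fire all statements back to back, then wait for the acks; the round
  // trips overlap, so the total latency is roughly one RTT.
  await Promise.all(docs.map(d => sendInsert(d)));
}

(async () => {
  const docs = Array.from({ length: 10 }, (_, i) => ({ id: i }));
  let t = Date.now();
  await classic(docs);
  console.log('classic  :', Date.now() - t, 'ms'); // roughly 10 x RTT
  t = Date.now();
  await pipelined(docs);
  console.log('pipelined:', Date.now() - t, 'ms'); // roughly 1 x RTT
})();
```

Note that pipelining only overlaps the network round trips; the server still executes the statements one after another on a single session.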
Benchmark results: “pipelining” versus “parallelizing” versus a single query

I’ve done a couple of tests comparing the results between “pipelining” versus “parallelizing” versus a single query. Here are the results:

      1. Parallel queries with NodeJS:
        $ time node async_wikistats.js
        ...
        All done! Total: 17753
        ...
        real 0m30.668s
        user 0m0.256s
        sys 0m0.028s
      2. Pipeline with NodeJS:
        $ time node async_wikistats_pipeline.js
        ...
        All done! Total: 17753
        ...
        real 5m39.666s
        user 0m0.212s
        sys 0m0.024s
        In the pipeline with NodeJS, I’m reusing the same connection (and do not open a new one for each thread).
      3. Direct query – partitioned table:
        mysql> select sum(tot_visits) from wikistats.wikistats_by_day_spark_part where url like '%postgresql%';
        +-----------------+
        | sum(tot_visits) |
        +-----------------+
        |           17753 |
        +-----------------+
        1 row in set (5 min 31.44 sec)
      4. Direct query – non-partitioned table:
        mysql> select sum(tot_visits) from wikistats.wikistats_by_day_spark where url like '%postgresql%';
        +-----------------+
        | sum(tot_visits) |
        +-----------------+
        |           17753 |
        +-----------------+
        1 row in set (4 min 38.16 sec)
Advantages of pipelines with X Plugin 

Although pipelining with X Plugin does not significantly improve query response time (it only saves round-trip latency), it can be helpful in some cases. For example, let’s say we are downloading something from the Internet and need to save the progress of the download as well as the metadata for the document. In this example, I use youtube-dl to search for and download the metadata about YouTube videos, then save the metadata JSON into MySQL 5.7 Document Store. Here is the code:

var mysqlx = require('mysqlx');
const spawn = require('child_process').spawn;
// This is the same as running: $ youtube-dl -j -i ytsearch100:"mysql 5.7"
const yt = spawn('youtube-dl', ['-j', '-i', 'ytsearch100:"mysql 5.7"'],
                 {maxBuffer: 1024 * 1024 * 128});
var mySession = mysqlx.getSession({
  host: 'localhost',
  port: 33060,
  dbUser: 'root',
  dbPassword: '<your password>'
});
yt.stdout.on('data', (data) => {
  try {
    dataObj = JSON.parse(data);
    console.log(dataObj.fulltitle);
    mySession.then(session => {
      session.getSchema("yt").getCollection("youtube").add(dataObj)
        .execute(function (row) { })
        .catch(err => { console.log(err); })
        .then(function (notices) {
          console.log("Wrote to MySQL: " + JSON.stringify(notices));
        });
    }).catch(function (err) {
      console.log(err);
      process.exit();
    });
  } catch (e) {
    console.log(" --- Can't parse json" + e);
  }
});
yt.stderr.on('data', (data) => {
  console.log("Error receiving data");
});
yt.on('close', (code) => {
  console.log(`child process exited with code ${code}`);
  mySession.then(session => { session.close(); });
});

In the above example, I execute the youtube-dl binary (you need to have it installed first) to search for “MySQL 5.7” videos. Instead of downloading the videos, I only grab the video’s metadata in JSON format (the “-j” flag). Because it is JSON, I can save it into MySQL document store. The table has the following structure:

CREATE TABLE `youtube` (
  `doc` json DEFAULT NULL,
  `_id` varchar(32) GENERATED ALWAYS AS (json_unquote(json_extract(`doc`,'$._id'))) STORED NOT NULL,
  UNIQUE KEY `_id` (`_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4

Here is the execution example:

$ node yt.js
What's New in MySQL 5.7
Wrote to MySQL: {"_state":{"rows_affected":1,"doc_ids":["3f312c3b-b2f3-55e8-0ee9-b706eddf"]}}
MySQL 5.7: MySQL JSON data type example
Wrote to MySQL: {"_state":{"rows_affected":1,"doc_ids":["88223742-9875-59f1-f535-f1cfb936"]}}
MySQL Performance Tuning: Part 1. Configuration (Covers MySQL 5.7)
Wrote to MySQL: {"_state":{"rows_affected":1,"doc_ids":["c377e051-37e6-8a63-bec7-1b81c6d6"]}}
Dave Stokes — MySQL 5.7 - New Features and Things That Will Break — php[world] 2014
Wrote to MySQL: {"_state":{"rows_affected":1,"doc_ids":["96ae0dd8-9f7d-c08a-bbef-1a256b11"]}}
MySQL 5.7 & JSON: New Opportunities for Developers - Thomas Ulin - Forum PHP 2015
Wrote to MySQL: {"_state":{"rows_affected":1,"doc_ids":["ccb5c53e-561c-2ed5-6deb-1b325739"]}}
Cara Instal MySQL 5.7.10 NoInstaller pada Windows Manual Part3
Wrote to MySQL: {"_state":{"rows_affected":1,"doc_ids":["95efbd79-8d79-e7b6-a535-271640c8"]}}
MySQL 5.7 Install and Configuration on Ubuntu 14.04
Wrote to MySQL: {"_state":{"rows_affected":1,"doc_ids":["b8cfe132-aca4-1eba-c2ae-69e48db8"]}}

Now, here is what makes this example interesting: since NodeJS + X Plugin = asynchronous + pipelining, the program execution will not stop if the table is locked. I’ve opened two sessions:

  • session 1: $ node yt.js > test_lock_table.log
  • session 2:
    mysql> lock table youtube read; select sleep(10); unlock tables;
    Query OK, 0 rows affected (0.00 sec)
    +-----------+
    | sleep(10) |
    +-----------+
    |         0 |
    +-----------+
    1 row in set (10.01 sec)
    Query OK, 0 rows affected (0.00 sec)


...
Upgrade MySQL Server from 5.5 to 5.7
... => wrote to MySQL: {"_state":{"rows_affected":1,"doc_ids":["d4d62a8a-fbfa-05ab-2110-2fd5cf6d"]}}
OSC15 - Georgi Kodinov - Secure Deployment Changes Coming in MySQL 5.7
... => wrote to MySQL: {"_state":{"rows_affected":1,"doc_ids":["8ac1cdb9-1499-544c-da2a-5db1ccf5"]}}
MySQL 5.7: Create JSON string using mysql
FreeBSD 10.3 - Instalación de MySQL 5.7 desde Código Fuente - Source Code
Webinar replay: How To Upgrade to MySQL 5.7 - The Best Practices - part 1
How to install MySQL Server on Mac OS X Yosemite - ltamTube
Webinar replay: How To Upgrade to MySQL 5.7 - The Best Practices - part 4
COMO INSTALAR MYSQL VERSION 5.7.13
MySQL and JSON
MySQL 5.7: Merge JSON data using MySQL
... => wrote to MySQL: {"_state":{"rows_affected":1,"doc_ids":["a11ff369-6f23-11e9-187b-e3713e6e"]}}
... => wrote to MySQL: {"_state":{"rows_affected":1,"doc_ids":["06143a61-4add-79da-0e1d-c2b52cf6"]}}
... => wrote to MySQL: {"_state":{"rows_affected":1,"doc_ids":["1eb94ef4-db63-cb75-767e-e1555549"]}}
... => wrote to MySQL: {"_state":{"rows_affected":1,"doc_ids":["e25f15b5-8c19-9531-ed69-7b46807a"]}}
... => wrote to MySQL: {"_state":{"rows_affected":1,"doc_ids":["02b5a4c9-6a21-f263-90d5-cd761906"]}}
... => wrote to MySQL: {"_state":{"rows_affected":1,"doc_ids":["e0bef958-10af-b181-81cd-5debaaa0"]}}
... => wrote to MySQL: {"_state":{"rows_affected":1,"doc_ids":["f48fa635-fa63-7481-0668-addabbac"]}}
... => wrote to MySQL: {"_state":{"rows_affected":1,"doc_ids":["557fa5c5-3c8a-fe01-c17c-549c557e"]}}
MySQL 5.7 Install and Configuration on Ubuntu 14.04
... => wrote to MySQL: {"_state":{"rows_affected":1,"doc_ids":["456b11d8-ba03-0aec-8e06-9517c6e1"]}}
MySQL WorkBench 6.3 installation on Ubuntu 14.04
... => wrote to MySQL: {"_state":{"rows_affected":1,"doc_ids":["0b651987-9b23-b5e0-f8f7-49b8ba5c"]}}
Going through era of IoT with MySQL 5.7 - FOSSASIA 2016
... => wrote to MySQL: {"_state":{"rows_affected":1,"doc_ids":["e133746c-836c-a7e0-3893-292a7429"]}}
MySQL 5.7: MySQL JSON operator example
... => wrote to MySQL: {"_state":{"rows_affected":1,"doc_ids":["4d13830d-7b30-5b31-d068-c7305e0a"]}}

As we can see, the first two writes were immediate. Then I’ve locked the table, and no MySQL queries went through. At the same time the download process (which is the slowest part here) proceeded and was not blocked (we can see the titles above, which are not followed by lines “… => wrote to MySQL:”). When the table was unlocked, a pile of waiting queries succeeded.

This can be very helpful when running a “download” process where the network is the bottleneck. With traditional synchronous query execution, when we lock a table the application gets blocked (including the network communication). With NodeJS and X Plugin, the download part proceeds, with MySQL acting as a queue.

Pipeline Durability

How “durable” is this pipeline, you might ask? In other words, what will happen if I kill the connection? To test it out, I (once again) locked the table (but this time before starting the NodeJS script), killed the connection, and finally unlocked the table. Here are the results:

Session 1:
----------
mysql> truncate table youtube_new;
Query OK, 0 rows affected (0.25 sec)
mysql> lock table youtube_new read;
Query OK, 0 rows affected (0.00 sec)
mysql> select count(*) from youtube_new;
+----------+
| count(*) |
+----------+
|        0 |
+----------+
1 row in set (0.00 sec)

Session 2:
----------
(when table is locked)
$ node yt1.js
11 03 MyISAM
Switching to InnoDB from MyISAM
tablas InnoDB a MyISAM
MongoDB vs MyISAM (MariaDB/MySQL)
MySQL Tutorial 35 - Foreign Key Constraints for the InnoDB Storage Engine
phpmyadmin foreign keys myisam innodb
Convert or change database manual from Myisam to Innodb
... >100 other results omited ...
^C

Session 1:
----------
mysql> select count(*) from youtube_new;
+----------+
| count(*) |
+----------+
|        0 |
+----------+
1 row in set (0.00 sec)

     Id: 4916
   User: root
   Host: localhost:33221
     db: NULL
Command: Query
   Time: 28
  State: Waiting for table metadata lock
   Info: PLUGIN: INSERT INTO `iot`.`youtube_new` (doc) VALUES ('{"upload_date":"20140319","protocol":"

mysql> unlock table;
Query OK, 0 rows affected (0.00 sec)
mysql> select count(*) from youtube_new;
+----------+
| count(*) |
+----------+
|        2 |
+----------+
1 row in set (0.00 sec)
mysql> select json_unquote(doc->'$.title') from youtube_new;
+---------------------------------+
| json_unquote(doc->'$.title')    |
+---------------------------------+
| 11 03 MyISAM                    |
| Switching to InnoDB from MyISAM |
+---------------------------------+
2 rows in set (0.00 sec)

Please note: in the above, there isn’t a single acknowledgement from the MySQL server. When the code receives a response from MySQL, it prints "Wrote to MySQL: {"_state":{"rows_affected":1,"doc_ids":["…"]}}". Also note that although the connection was killed, the MySQL process is still there, waiting on the table lock.

What is interesting here is that only two rows were inserted into the document store. Is there a “history length” here, or some other buffer that we can increase? I asked Jan Kneschke, one of the authors of the X Protocol, and the answers were:

  • Q: Is there any history length or any buffer and can we tune it?
    • A: There is no “history” or “buffer” at all, it is all at the connector level.
  • Q: Then why were two rows finally inserted?
    • A: To answer this question, I collected a tcpdump of port 33060 (X Protocol); see below.

This is very important information! Keep in mind that the asynchronous pipeline has no durability settings: if the application fails and there are some pending writes, those writes can be lost (or they may still be written; there is no guarantee either way).
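Given this “fire and forget” behavior, one application-level remedy is to keep every acknowledgement promise and drain them all before closing the session. A minimal sketch, with `addDocument` as a hypothetical stand-in for the connector's `collection.add(...).execute()` call:

```javascript
// Sketch: tracking each write's acknowledgement and draining them before
// shutdown, so pending pipeline writes are not silently lost.
const pending = [];
let acked = 0;

function addDocument(doc) {
  // Stand-in for collection.add(doc).execute(): resolves when the server
  // acknowledges the write (here, simulated with a timer).
  return new Promise(resolve =>
    setTimeout(() => { acked++; resolve(doc); }, 5));
}

function write(doc) {
  // Keep the acknowledgement promise instead of dropping it on the floor.
  pending.push(addDocument(doc));
}

async function shutdown() {
  // Wait for every outstanding ack; only then is it safe to close the
  // session. Without this, in-flight writes may never reach the server.
  await Promise.all(pending);
  console.log('all ' + acked + ' writes acknowledged');
}

for (let i = 0; i < 5; i++) write({ id: i });
shutdown();
```

This does not add server-side durability, of course; it only makes the client wait for the acknowledgements it would otherwise ignore.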

To fully understand how the protocol works, I’ve captured tcpdump (Jan Kneschke helped me to analyze it):

tcpdump -i lo -s0 -w tests/node-js-pipelining.pcap "tcp port 33060"

(see update below for the tcpdump visualization)

This is what is happening:

  • When I hit CTRL+C, nodejs closes the connection. As the table is still locked, MySQL can’t write to it and will not send the result of the insert back.
  • When the table is unlocked, MySQL starts executing the first statement, despite the fact that the connection has been closed. It then acknowledges the first insert and starts the second one.
  • However, at this point the script (client) has already closed the connection and the final packet (write done, here is the id) gets denied. The X Plugin then finds out that the client closed the connection and stops executing the pipeline.

Actually, this is very similar to how the original MySQL protocol works. If we kill the script/application, it doesn’t automatically kill the MySQL connection (unless you hit CTRL+C in the MySQL client, which sends the kill signal), and the connection waits for the table to get unlocked. When the table is unlocked, it executes the pending insert from the file.

Session 1:
----------
mysql> select * from t_sql;
Empty set (0.00 sec)
mysql> lock table t_sql read;
Query OK, 0 rows affected (0.00 sec)

Session 2:
----------
$ mysql iot < t.sql
$ kill -9 ...
[3]   Killed                  mysql iot < t.sql

Session 1:
----------
mysql> show processlist;
+------+------+-----------+------+---------+------+---------------------------------+----------------------------------------------+
| Id   | User | Host      | db   | Command | Time | State                           | Info                                         |
+------+------+-----------+------+---------+------+---------------------------------+----------------------------------------------+
| 4913 | root | localhost | iot  | Query   |   41 | Waiting for table metadata lock | insert into t_sql values('{"test_field":0}') |
+------+------+-----------+------+---------+------+---------------------------------+----------------------------------------------+
4 rows in set (0.00 sec)
mysql> unlock tables;
Query OK, 0 rows affected (0.00 sec)
mysql> select * from t_sql;
+-------------------+
| doc               |
+-------------------+
| {"test_field": 0} |
+-------------------+
1 row in set (0.00 sec)

Enforcing unique checks

If I restart my script, it finds the same videos again, so we will probably need to enforce the consistency of our data. By default, the plugin generates a unique key (_id) for each document, which prevents inserting duplicates.

Another way to enforce the unique checks is to create a unique key based on the YouTube video ID. Here is the updated table structure:

CREATE TABLE `youtube` (
  `doc` json DEFAULT NULL,
  `youtube_id` varchar(11) GENERATED ALWAYS AS (json_unquote(json_extract(`doc`,'$.id'))) STORED NOT NULL,
  UNIQUE KEY `youtube_id` (`youtube_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4

I’ve changed the default “_id” column to YouTube’s unique video ID. Now when I restart the script, it shows:

MySQL 5.7: Merge JSON data using MySQL
{ [Error: Document contains a field value that is not unique but required to be]
  info:
   { severity: 0,
     code: 5116,
     msg: 'Document contains a field value that is not unique but required to be',
     sql_state: 'HY000' } }
... => wrote to MySQL: undefined

…as this document has already been loaded.
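A restarted script can treat this error as expected and simply skip already-loaded documents. Here is a hedged sketch: `ER_DUPLICATE_DOC` and `insertDoc` are illustrative names I made up, and `insertDoc` only fakes the connector's behavior by rejecting with the same error shape shown in the output above.

```javascript
// Sketch: treating the "not unique" error (code 5116 in the output above)
// as an expected condition, so duplicates are skipped and the run continues.
const ER_DUPLICATE_DOC = 5116; // code observed in the error output above

function insertDoc(doc, existingIds) {
  // Fake insert: reject with the same error shape the connector returned.
  if (existingIds.has(doc.id)) {
    const err = new Error('Document contains a field value that is not unique but required to be');
    err.info = { severity: 0, code: ER_DUPLICATE_DOC, sql_state: 'HY000' };
    return Promise.reject(err);
  }
  existingIds.add(doc.id);
  return Promise.resolve({ rows_affected: 1 });
}

function insertIgnoringDuplicates(doc, existingIds) {
  return insertDoc(doc, existingIds)
    .then(() => 'inserted')
    .catch(err => {
      if (err.info && err.info.code === ER_DUPLICATE_DOC) {
        return 'skipped'; // this document has already been loaded
      }
      throw err; // anything else is a real failure
    });
}

insertIgnoringDuplicates({ id: 'abc' }, new Set()).then(r => console.log(r)); // logs "inserted"
```

This mirrors the INSERT IGNORE pattern from classic SQL: the unique key does the enforcement, and the client just decides what a duplicate means.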


Although X Plugin pipelining does not necessarily improve query response time significantly (though it can save round-trip time), it can be helpful for some applications. We might not want to block the network communication (i.e., downloading or API calls) when the MySQL table is locked, for example. At the same time, unless you check or wait for the acknowledgement from the server, the data might or might not be written into MySQL.

Bonus: data analysis

Now we can see what we have downloaded. There are a number of interesting fields in the result:

"is_live": null,
"license": "Standard YouTube License",
"duration": 2965,
"end_time": null,
"playlist": "\"mysql 5.7\"",
"protocol": "https",
"uploader": "YUI Library",
"_filename": "Douglas Crockford - The JSON Saga--C-JoyNuQJs.mp4",
"age_limit": 0,
"alt_title": null,
"extractor": "youtube",
"format_id": "18",
"fulltitle": "Douglas Crockford: The JSON Saga",
"n_entries": 571,
"subtitles": {},
"thumbnail": "",
"categories": ["Science & Technology"],
"display_id": "-C-JoyNuQJs",
"like_count": 251,
"player_url": null,
"resolution": "640x360",
"start_time": null,
"thumbnails": [{ "id": "0", "url": "" }],
"view_count": 36538,
"annotations": null,
"description": "Yahoo! JavaScript architect Douglas Crockford tells the story of how JSON was discovered and how it became a major standard for describing data.",
"format_note": "medium",
"playlist_id": "\"mysql 5.7\"",
"upload_date": "20110828",
"uploader_id": "yuilibrary",
"webpage_url": "",
"uploader_url": "",
"dislike_count": 5,
"extractor_key": "Youtube",
"average_rating": 4.921875,
"playlist_index": 223,
"playlist_title": null,
"automatic_captions": {},
"requested_subtitles": null,
"webpage_url_basename": "-C-JoyNuQJs"

We can find the most popular videos. To do that, I added one more generated (virtual) column on view_count and created an index on it:

CREATE TABLE `youtube` (
  `doc` json DEFAULT NULL,
  `youtube_id` varchar(11) GENERATED ALWAYS AS (json_unquote(json_extract(`doc`,'$.id'))) STORED NOT NULL,
  `view_count` int(11) GENERATED ALWAYS AS (json_unquote(json_extract(`doc`,'$.view_count'))) VIRTUAL,
  UNIQUE KEY `youtube_id` (`youtube_id`),
  KEY `view_count` (`view_count`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4

We can run the queries like:

mysql> select json_unquote(doc->'$.title'),
    ->        view_count,
    ->        json_unquote(doc->'$.dislike_count') as dislikes
    -> from youtube
    -> order by view_count desc
    -> limit 10;
+----------------------------------------------------------------------------------------------------+------------+----------+
| json_unquote(doc->'$.title')                                                                        | view_count | dislikes |
+----------------------------------------------------------------------------------------------------+------------+----------+
| Beginners MYSQL Database Tutorial 1 # Download , Install MYSQL and first SQL query                  |     664153 | 106      |
| MySQL Tutorial                                                                                      |     533983 | 108      |
| PHP and MYSQL - Connecting to a Database and Adding Data                                            |     377006 | 50       |
| PHP MySQL Tutorial                                                                                  |     197984 | 41       |
| Installing MySQL (Windows 7)                                                                        |     196712 | 28       |
| Understanding PHP, MySQL, HTML and CSS and their Roles in Web Development - CodersCult Webinar 001  |     195464 | 24       |
| jQuery Ajax Tutorial #1 - Using AJAX & API's (jQuery Tutorial #7)                                   |     179198 | 25       |
| How To Root Lenovo A6000                                                                            |     165221 | 40       |
| MySQL Tutorial 1 - What is MySQL                                                                    |     165042 | 45       |
| How to Send Email in Blackboard Learn                                                               |     144948 | 28       |
+----------------------------------------------------------------------------------------------------+------------+----------+
10 rows in set (0.00 sec)

Or if we want to find out the most popular resolutions:

mysql> select count(*) as cnt,
    ->        sum(view_count) as sum_views,
    ->        json_unquote(doc->'$.resolution') as resolution
    -> from youtube
    -> group by resolution
    -> order by cnt desc, sum_views desc
    -> limit 10;
+-----+-----------+------------+
| cnt | sum_views | resolution |
+-----+-----------+------------+
| 273 |   3121447 | 1280x720   |
|  80 |   1195865 | 640x360    |
|  18 |     33958 | 1278x720   |
|  15 |     18560 | 1152x720   |
|  11 |     14800 | 960x720    |
|   5 |      6725 | 1276x720   |
|   4 |     18562 | 1280x682   |
|   4 |      1581 | 1280x616   |
|   4 |       348 | 1280x612   |
|   3 |      2024 | 1200x720   |
+-----+-----------+------------+
10 rows in set (0.02 sec)
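For comparison, the same resolution aggregation can be done client-side over the raw youtube-dl JSON documents. This is just a sketch with made-up sample records (same field names as the metadata); in practice the indexed SQL query is the better tool.

```javascript
// Sketch: the resolution group-by above, done client-side over raw
// youtube-dl JSON records (made-up sample data, same field names).
const records = [
  { resolution: '1280x720', view_count: 500 },
  { resolution: '1280x720', view_count: 700 },
  { resolution: '640x360',  view_count: 300 },
];

function topResolutions(docs) {
  const groups = new Map();
  for (const d of docs) {
    const g = groups.get(d.resolution) || { cnt: 0, sum_views: 0 };
    g.cnt++;
    g.sum_views += d.view_count;
    groups.set(d.resolution, g);
  }
  // Sort by count, then by total views, descending (as in the SQL query).
  return [...groups.entries()]
    .map(([resolution, g]) => ({ resolution, ...g }))
    .sort((a, b) => b.cnt - a.cnt || b.sum_views - a.sum_views);
}

console.log(topResolutions(records));
```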

Special thanks to Jan Kneschke and Morgan Tocker from Oracle for helping with the X Protocol internals.

Update: Jan Kneschke also generated a visualization for the tcpdump I collected (when the connection was killed).

Percona Server 5.7.13-6 is now available

Latest MySQL Performance Blog posts - July 6, 2016 - 9:07am

Percona announces the GA release of Percona Server 5.7.13-6 on July 6, 2016. Download the latest version from the Percona web site or from the Percona Software Repositories.

Based on MySQL 5.7.13, including all the bug fixes in it, Percona Server 5.7.13-6 is the current GA release in the Percona Server 5.7 series. Percona provides completely open-source and free software. All the details of the release can be found in the 5.7.13-6 milestone at Launchpad.

New Features:
  • TokuDB MTR suite is now part of the default MTR suite in Percona Server 5.7.
Bugs Fixed:
  • Querying the GLOBAL_TEMPORARY_TABLES table would cause server crash if temporary table owning threads would execute new queries. Bug fixed #1581949.
  • IMPORT TABLESPACE and undo tablespace truncate could get stuck indefinitely with a writing workload in parallel. Bug fixed #1585095.
  • Requesting to flush the whole of the buffer pool with doublewrite parallel buffer wasn’t working correctly. Bug fixed #1586265.
  • Audit Log Plugin would hang when trying to write log record of audit_log_buffer_size length. Bug fixed #1588439.
  • Audit log in ASYNC mode could skip log records which don’t fit into log buffer. Bug fixed #1588447.
  • In order to support innodb_flush_method being set to ALL_O_DIRECT, the log I/O buffers were aligned to innodb_log_write_ahead_size. That implementation missed the case that the variable is dynamic, which could still lead to a server crash. Bug fixed #1597143.
  • InnoDB tablespace import would fail when trying to import a table with different data directory. Bug fixed #1548597 (upstream #76142).
  • Audit Log Plugin was truncating SQL queries to 512 bytes. Bug fixed #1557293.
  • mysqlbinlog did not free the existing connection before opening a new remote one. Bug fixed #1587840 (upstream #81675).
  • Fixed a memory leak in mysqldump. Bug fixed #1588845 (upstream #81714).
  • Transparent Huge Pages check will now only happen if tokudb_check_jemalloc option is set. Bugs fixed #939 and #713.
  • Logging in ydb environment validation functions now prints more useful context. Bug fixed #722.

Other bugs fixed: #1541698 (upstream #80261), #1587426 (upstream #81657), #1589431, #956, and #964.

The release notes for Percona Server 5.7.13-6 are available in the online documentation. Please report any bugs on the Launchpad bug tracker.








Upgrade to Percona 5.6.30,something happened

Latest Forum Posts - July 6, 2016 - 7:33pm
After I upgraded to Percona 5.6.30, with long_query_time set to 1s, the slow log shows many "# administrator command: Prepare;" entries that cost 1-3s. "SELECT @@session.tx_isolation;" and "SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;" also occur similarly. I want to know why "# administrator command: Prepare;" costs so much time. Is it a MySQL bug?

MySQL 8.0

Latest MySQL Performance Blog posts - July 5, 2016 - 3:18pm

If you haven’t heard the news yet, MySQL 8.0 is apparently the next release of the world-famous database server.

Obviously abandoning plans to name the next release 5.8, Percona Server’s upstream provider relabelled all 5.8-related bugs to 8.0 as follows:

Reported version value updated to reflect release name change from 5.8 to 8.0

What will MySQL 8.0 bring to the world?

While lossless RBR has been suggested by Simon Mudd (for example), the actual feature list (except a Boost 1.60.0 upgrade!) remains a secret.

As far as bug and feature requests go, a smart Google query revealed which bugs are likely to be fixed in (or are feature requests for) MySQL 8.0.

Here is the full list:

  • MySQL Bug #79380: Upgrade to Boost 1.60.0
  • MySQL Bug #79037: get rid of dynamic_array in st_mysql_options
  • MySQL Bug #80793: EXTEND EXPLAIN to cover ALTER TABLE
  • MySQL Bug #79812: JSON_ARRAY and JSON_OBJECT return …
  • MySQL Bug #79666: fix errors reported by ubsan
  • MySQL Bug #79463: Improve P_S configuration behaviour
  • MySQL Bug #79939: default_password_lifetime > 0 should print …
  • MySQL Bug #79330: DROP TABLESPACE fails for missing general …
  • MySQL Bug #80772: Excessive memory used in memory/innodb …
  • MySQL Bug #80481: Accesses to new data-dictionary add confusing …
  • MySQL Bug #77712: mysql_real_query does not report an error for …
  • MySQL Bug #79813: Boolean values are returned inconsistently with …
  • MySQL Bug #79073: Optimizer hint to disallow full scan
  • MySQL Bug #77732: REGRESSION: replication fails for insufficient …
  • MySQL Bug #79076: make hostname a dynamic variable
  • MySQL Bug #78978: Add microseconds support to UNIX_TIMESTAMP
  • MySQL Bug #77600: Bump major version of libmysqlclient in 8.0
  • MySQL Bug #79182: main.help_verbose failing on freebsd
  • MySQL Bug #80627: incorrect function referenced in spatial error …
  • MySQL Bug #80372: Built-in mysql functions are case sensitive …
  • MySQL Bug #79150: InnoDB: Remove runtime checks for 32-bit file …
  • MySQL Bug #76918: Unhelpful error for mysql_ssl_rsa_setup when …
  • MySQL Bug #80523: current_memory in sys.session can go negative!
  • MySQL Bug #78210: SHUTDOWN command should have an option …
  • MySQL Bug #80823: sys should have a mdl session oriented view
  • MySQL Bug #78374: “CREATE USER IF NOT EXISTS” reports an error
  • MySQL Bug #79522: can mysqldump print the fully qualified table …
  • MySQL Bug #78457: Use gettext and .po(t) files for translations
  • MySQL Bug #78593: mysqlpump creates incorrect ALTER TABLE …
  • MySQL Bug #78041: GROUP_CONCAT() truncation should be an …
  • MySQL Bug #76927: Duplicate UK values in READ-COMMITTED …
  • MySQL Bug #77997: Automatic mysql_upgrade
  • MySQL Bug #78495: Table mysql.gtid_executed cannot be opened.
  • MySQL Bug #78698: Simple delete query causes InnoDB: Failing …
  • MySQL Bug #76392: Assume that index_id is unique within a …
  • MySQL Bug #76671: InnoDB: Assertion failure in thread 19 in file …
  • MySQL Bug #76803: InnoDB: Unlock row could not find a 2 mode …
  • MySQL Bug #78527: incomplete support and/or documentation of …
  • MySQL Bug #78732: InnoDB: Failing assertion: *mbmaxlen < 5 in file …
  • MySQL Bug #76356: Reduce header file dependencies for …
  • MySQL Bug #77056: There is no clear error message if …
  • MySQL Bug #76329: COLLATE option not accepted in generated …
  • MySQL Bug #79500: InnoDB: Assertion failure in thread …
  • MySQL Bug #72284: please use better options to …
  • MySQL Bug #78397: Subquery Materialization on DELETE WHERE …
  • MySQL Bug #76552: Cannot shutdown MySQL using JDBC driver
  • MySQL Bug #76532: MySQL calls exit(MYSQLD_ABORT_EXIT …
  • MySQL Bug #76432: handle_fatal_signal (sig=11) in …
  • MySQL Bug #41925: Warning 1366 Incorrect string value: … for …
  • MySQL Bug #78452: Alter table add virtual index hits assert in …
  • MySQL Bug #77097: InnoDB Online DDL should support change …
  • MySQL Bug #77149: sys should possibly offer user threads …

Connection Problem (pt-mysql-summary)

Latest Forum Posts - July 5, 2016 - 3:53am

I'm new in MySQL, etc.

I Installed Toolkit and try to run pt-mysql-summary, but I get this error:
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2)
2016_07_05_13_49_14 Cannot connect to MySQL. Check that MySQL is running and that the options after -- are correct.
The MySQL service is running (ps -eg | grep mysql):
aa_mysql 26148 25680 0 Jul04 pts/0 00:00:29 /tmp/5.6.22_3306_Master/bin/mysqld --basedir=/tmp/5.6.22_3306_Master --datadir=/tmp/5.6.22_3306_Master/data/ --plugin-dir=/tmp/5.6.22_3306_Master/lib/plugin --log-error=/tmp/5.6.22_3306_Master/data/mysql-error.log --open-files-limit=65535 --pid-file=/tmp/5.6.22_3306_Master/data/ --socket=/tmp/5.6.22_3306_Master/data/mysql.sock --port=3306

When I try to use the socket & port in the command options (pt-mysql-summary --socket=/tmp/5.6.22_3306_Master/data/mysql.sock --port=3306), I get this error:
Unknown option: --socket=/tmp/5.6.22_3306_Master/data/mysql.sock
Unknown option: --port=3306

Usage: pt-mysql-summary [OPTIONS] [-- MYSQL OPTIONS]

Any suggestions?
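The usage line above points at the likely fix: per the `[-- MYSQL OPTIONS]` syntax, connection options such as --socket and --port belong after a literal `--`, from where they are passed through to the mysql client rather than parsed by the tool itself. A toy shell sketch of that splitting convention (forward_after_dashes is a hypothetical stand-in to illustrate the parsing, not part of Percona Toolkit):

```shell
#!/bin/sh
# Toy stand-in that mimics the "[-- MYSQL OPTIONS]" convention:
# everything before "--" is for the tool itself, everything after
# "--" is forwarded untouched to the mysql client.
forward_after_dashes() {
  while [ $# -gt 0 ] && [ "$1" != "--" ]; do shift; done
  [ $# -gt 0 ] && shift   # drop the "--" itself, if present
  echo "$@"               # the options the mysql client would receive
}

# The failing invocation from this post would instead be written:
#   pt-mysql-summary -- --socket=/tmp/5.6.22_3306_Master/data/mysql.sock --port=3306
forward_after_dashes -- --socket=/tmp/5.6.22_3306_Master/data/mysql.sock --port=3306
```

Without the `--`, the tool tries to interpret --socket and --port as its own options, which explains the "Unknown option" errors above.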




Can't restore backup with absolute innodb_data_file_path

Lastest Forum Posts - July 5, 2016 - 1:29am
I have a server configuration where the MySQL files are placed on two different disk arrays: /opt/mysql, which is a RAID10 over 15kRPM disks, and /var/lib/mysql, which is a RAID1 on S3710 SSDs. When attempting to restore an xtrabackup backup, it fails because it interprets the my.cnf parameters differently than mysqld does, from what I can tell. Does anyone know of a simple workaround for this problem?

Code:
root@jed:/opt/mysql-dump-20160704# cat /etc/apt/sources.list.d/percona.list
# percona
deb xenial main
deb-src xenial main
root@jed:/opt/mysql-dump-20160704# grep data /etc/mysql/my.cnf
datadir = /var/lib/mysql/
innodb_data_home_dir =
innodb_data_file_path = /opt/mysql/ibdata1:10M:autoextend:max:10G
root@jed:/opt/mysql-dump-20160704# innobackupex --copy-back .
160705 10:14:45 innobackupex: Starting the copy-back operation

IMPORTANT: Please check that the copy-back run completes successfully.
           At the end of a successful copy-back run innobackupex
           prints "completed OK!".

innobackupex version 2.3.4 based on MySQL server 5.6.24 Linux (x86_64) (revision id: e80c779)
160705 10:14:45 [01] Copying ib_logfile0 to /opt/mysql/ib_logfile0
160705 10:14:55 [01]        ...done
160705 10:14:55 [01] Copying ib_logfile1 to /opt/mysql/ib_logfile1
160705 10:15:08 [01]        ...done
innobackupex: Can't create directory '/var/lib/mysql/opt/mysql/' (Errcode: 2 - No such file or directory)
[01] error: cannot open the destination stream for ibdata1
[01] Error: copy_file() failed.

Regards,
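The failing path is itself a clue: '/var/lib/mysql/opt/mysql/' looks like the datadir naively concatenated with the absolute innodb_data_file_path. A small shell sketch reconstructing that broken join (a guess at the behavior inferred from the error message, not innobackupex's actual code):

```shell
#!/bin/sh
# Values from the my.cnf shown above.
datadir="/var/lib/mysql/"
ibdata="/opt/mysql/ibdata1"    # absolute path from innodb_data_file_path

# Naive concatenation treats the absolute path as if it were
# relative to datadir...
naive="${datadir%/}${ibdata}"
echo "$naive"                  # /var/lib/mysql/opt/mysql/ibdata1

# ...which is why copy-back tries to create exactly the directory
# it then fails on:
echo "$(dirname "$naive")/"    # /var/lib/mysql/opt/mysql/
```

If this guess is right, it explains why pointing innodb_data_file_path at an absolute location outside datadir trips up the copy-back step even though mysqld itself accepts the same my.cnf.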

ALTER TABLE reporting 0 records affected in 5.6

Lastest Forum Posts - July 4, 2016 - 1:31pm
We are in the process of moving from Percona 5.5 to 5.6, and I just noticed a difference between the two that I was hoping someone could shed some light on. When I execute an ALTER TABLE command in 5.5 it says "Query OK, 52990 rows affected", but when I run the same command on the same data in 5.6 I get "Query OK, 0 rows affected".

There are no warnings, or errors, and the change does happen as expected, it just says 0 rows were affected.

I know 5.6 changed how ALTER TABLE works, such that it doesn't block nearly as much, but is that why it reports 0 rows affected?

Installation error

Lastest Forum Posts - July 4, 2016 - 6:31am

When I try to install xtrabackup, I get a conflict:

Code:
yum install percona-xtrabackup

Transaction check error:
  file /etc/my.cnf from install of Percona-Server-shared-56-5.6.30-rel76.3.el7.x86_64 conflicts with file from package mysql-community-server-5.7.13-1.el7.x86_64

Can someone help me?

Thank you

Cluster-server unable to install due to xtra backup

Lastest Forum Posts - July 4, 2016 - 2:32am
I am trying to install PXC but am getting a dependency issue with xtrabackup, which is already installed.

[root@dsdbvm2 software]# more /etc/redhat-release
CentOS release 6.6 (Final)
[root@dsdbvm2 software]# uname -a
Linux 2.6.32-504.el6.x86_64 #1 SMP Wed Oct 15 04:27:16 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
[root@dsdbvm2 software]# rpm -qa | grep -i percona
[root@dsdbvm2 software]#
[root@dsdbvm2 software]# rpm -ivh Percona-XtraDB-Cluster-server-56-5.6.30-25.16.1.el6.x86_64.rpm
warning: Percona-XtraDB-Cluster-server-56-5.6.30-25.16.1.el6.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID cd2efd2a: NOKEY
error: Failed dependencies:
percona-xtrabackup >= 2.2.5 is needed by Percona-XtraDB-Cluster-server-56-1:5.6.30-25.16.1.el6.x86_64
[root@dsdbvm2 software]#
