
Feed aggregator

MySQL QA Episode 7: Reducing Testcases for Beginners – single-threaded reducer.sh!

Latest MySQL Performance Blog posts - July 21, 2015 - 3:00am

Welcome to MySQL QA Episode #7 – Reducing Testcases for Beginners: single-threaded reducer.sh!

In this episode we’ll learn how to use reducer.sh. Topics discussed:

  1. reducer.sh introduction/concepts
  2. Basic configurable variables & their default reducer.sh settings
    1. INPUTFILE options
    2. MODE=x
    3. TEXT=”text”
    4. WORKDIR_LOCATION & WORKDIR_M3_DIRECTORY
    5. MYEXTRA
    6. MYBASE
    7. PQUERY_MOD & PQUERY_LOC
    8. MODE5_COUNTTEXT, MODE5_ADDITIONAL_TEXT & MODE5_ADDITIONAL_COUNTTEXT
    9. How to learn more about each of the settings
  3. Manual example
  4. Introduction to the script’s self-recursion concept – subreducer
  5. Quick setup re-cap, details of an already executed QA run
  6. Examples from pquery-prep-red.sh (including some issue reviews)
  7. Gotchas
  8. QUERYTIMEOUT & STAGE1_LINES Variables

Full-screen viewing @ 720p resolution recommended.

If the speed is too slow for you, consider setting YouTube to 1.25 playback speed.

The post MySQL QA Episode 7: Reducing Testcases for Beginners – single-threaded reducer.sh! appeared first on MySQL Performance Blog.

Percona Live Amsterdam discounted pricing ends July 26!

Latest MySQL Performance Blog posts - July 20, 2015 - 2:32pm

The Percona Live Data Performance Conference in Amsterdam is just two months away and it’s going to be an incredible event. With a new expanded focus on MySQL, NoSQL, and Data in the Cloud, this conference will be jam-packed with talks from some of the industry’s leading experts from MongoDB, VMware, Oracle, MariaDB, Facebook, Booking.com, Pythian, Google, Rackspace, Yelp (and many more, including of course Percona).

Early Bird pricing ends this Sunday (July 26)! So if you want to save €25, then you’d better register now. And for all of my readers, you can take an additional 10% off the entire registration price by using the promo code “10off” at checkout.

It’s also important to book your room at the Mövenpick Hotel for a special rate – but hurry because that deal ends July 27 and the rooms are disappearing fast due to some other big events going on in Amsterdam that week.

Sponsorship opportunities are also still available for Percona Live Amsterdam. Event sponsors become part of a dynamic and fast-growing ecosystem and interact with hundreds of DBAs, sysadmins, developers, CTOs, CEOs, business managers, technology evangelists, solution vendors and entrepreneurs who typically attend the event. This year’s conference will feature expanded accommodations and turnkey kiosks. Current sponsors include:

We’ve got a fantastic conference schedule on tap. Sessions, which will follow each morning’s keynote addresses, feature a variety of topics related to MySQL and NoSQL, High Availability, DevOps, Programming, Performance Optimization, Replication and Backup, MySQL in the Cloud, MySQL Case Studies, Security, and What’s New in MySQL and MongoDB.

Sessions Include:
  • “InnoDB: A Journey to the Core,” Jeremy Cole, Sr. Systems Engineer, Google, Inc. and Davi Arnaut, Software Engineer, LinkedIn
  • “MongoDB Patterns and Antipatterns for Dev and Ops,” Steffan Mejia, Principal Consulting Engineer, MongoDB, Inc.
  • “NoSQL’s Biggest Lie: SQL Never Went Away,” Matthew Revell, Lead Developer Advocate, Couchbase
  • “The Future of Replication is Today: New Features in Practice,” Giuseppe Maxia, Quality Assurance Architect, VMware
  • “What’s New in MySQL 5.7,” Geir Høydalsvik, Senior Software Development Director, Oracle
Tutorials include:
  • “Best Practices for MySQL High Availability,” Colin Charles, Chief Evangelist, MariaDB
  • “Mongo Sharding from the Trench: A Veterans Field Guide,” David Murphy, Lead DBA, Rackspace Data Stores
  • “Advanced Percona XtraDB Cluster in a Nutshell, La Suite: Hands on Tutorial Not for Beginners!,” Frederic Descamps, Senior Architect, Percona

The conference’s evening events will be a perfect way to network, relax and have FUN while seeing the beautiful city of Amsterdam!

Monday night, September 21, after the tutorial sessions conclude, attendees are invited to the Delirium Cafe located across the street from the conference venue. With more than 500 beers on tap and great food, this will be the perfect way to kick off the Conference.

Tuesday night, September 22, Booking.com will be hosting the Community dinner of the year at their very own headquarters located in historic Rembrandt Square in the heart of the city. Hop on one of the sponsored canal boats that will pick you up right outside of the Mövenpick for your chance to see the city from the water on the way to the community dinner! You’ll be dropped off right next to Booking.com’s offices!

Wednesday night, September 23, there will be a closing reception taking place at the Mövenpick for your last chance to visit with our exhibitors and to wrap up what promises to be an amazing conference!

See you in Amsterdam!

The post Percona Live Amsterdam discounted pricing ends July 26! appeared first on MySQL Performance Blog.

Fractal Tree library as a Key-Value store

Latest MySQL Performance Blog posts - July 20, 2015 - 6:50am

As you may know, Tokutek is now part of Percona and I would like to explain some internals of TokuDB and TokuMX – what performance benefits they bring, along with further optimizations we are working on.

However, before going into deep details, I feel it is necessary to explain the fundamentals of a Key-Value store, and how a Fractal Tree handles it.

Before that, allow me to say that I have heard opinions that the “Fractal Tree” name does not reflect the internal structure and looks more like a marketing term than a technical one. I will not go into this discussion and will keep using the name “Fractal Tree” out of respect for its inventors. I think they are in a position to give their invention any name they want.

So with that said, the Fractal Tree library implements a new data structure for more efficient handling of a Key-Value store (with a main focus on insertion, but more on this later).

You may question how Key-Value relates to SQL transactional databases – it is more from the NoSQL world. This is partially true, and the Fractal Tree Key-Value library is successfully used in the Percona TokuMX (based on MongoDB 2.4) and Percona TokuMXse (storage engine for MongoDB 3.0) products.

But if we look at a Key-Value store in general, it may actually be a good fit for relational databases too. To explain this, let’s take a look at the Key-Value model in detail.

So what is Key-Value data structure?

We will use the notation (k,v), or key=>val, which basically means we associate some value “v” with a key “k”. For software developers, the following analogies may be close:
key-value access is implemented as a dictionary in Python, an associative array in PHP or a map in C++.
(More details in Wikipedia)
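
As a quick illustration (a toy Python example, not from the original post), a dictionary already behaves like an in-memory Key-Value store:

```python
# A Python dict is a minimal in-memory Key-Value store:
# it associates a value "v" with a key "k".
store = {}
store["k1"] = "v1"   # add the pair (k1 => v1)
store["k1"] = "v2"   # update the value assigned to k1
value = store["k1"]  # lookup: value is now "v2"
del store["k1"]      # delete the pair
```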

I will define a key-value structure as a list of pairs (k,v).

It is important to note that both key and value need not be just scalars (single values); they can be compound.
That is, "k1, k2, k3 => v1, v2", which we can read as "give me two values by a 3-part key".

This brings us closer to a database table structure.
If we apply the additional requirement that all keys (k) in the list (k,v) must be unique, this will represent
a PRIMARY KEY for a traditional database table.
To understand this better, let’s take a look at the following table:
CREATE TABLE metrics (
ts timestamp,
device_id int,
metric_id int,
cnt int,
val double,
PRIMARY KEY (ts, device_id, metric_id),
KEY metric_id (metric_id, ts),
KEY device_id (device_id, ts)
)

We can state that the Key-Value structure (ts, device_id, metric_id => cnt, val), with the requirement
that "ts, device_id, metric_id" be unique, represents the PRIMARY KEY for this table. In fact, this is how InnoDB (and TokuDB, for that matter) stores data internally.

Secondary indexes can also be represented in key=>value notation, for example, again, as used in TokuDB and InnoDB:
(secondary_index_key => primary_key), where a key for a secondary index points to a primary key (so later we can get values by looking up the primary key). Please note that secondary_index_key may not be unique (unless we add a UNIQUE constraint to the secondary index).

Or, taking our table again, the secondary keys are defined as
(metric_id, ts => ts, device_id, metric_id)
and
(device_id, ts => ts, device_id, metric_id)
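
To make the mapping concrete, here is a hypothetical sketch (Python dicts as stand-ins; the row values are invented) of how the metrics table’s primary key and secondary keys could be laid out as Key-Value pairs:

```python
# PRIMARY KEY: (ts, device_id, metric_id) => (cnt, val)
primary = {
    (1437400000, 1, 10): (5, 0.25),
    (1437400060, 2, 10): (3, 1.50),
}

# Secondary keys point back to the primary key. Engines such as InnoDB
# store the full primary key in each secondary entry, which also makes
# the entry unique even when (metric_id, ts) alone is not.
idx_metric_id = {}   # KEY metric_id (metric_id, ts)
idx_device_id = {}   # KEY device_id (device_id, ts)
for (ts, device_id, metric_id) in primary:
    idx_metric_id[(metric_id, ts, device_id)] = (ts, device_id, metric_id)
    idx_device_id[(device_id, ts, metric_id)] = (ts, device_id, metric_id)

# Lookup through a secondary index: find primary keys for metric_id=10,
# then fetch the values (cnt, val) from the primary structure.
rows = [primary[pk] for sk, pk in idx_metric_id.items() if sk[0] == 10]
```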

It is expected from a Key-Value store to support basic data manipulation and extraction operations, such as:

        – Add or Insert: add a (key => value) pair to the collection
        – Update: change (key => value) to (key => value2), that is, update the "value" assigned to "key"
        – Delete: remove(key), i.e. delete the (key => value) pair from the collection
        – Lookup (select): return the "value" assigned to "key"

and I want to add a fifth operation:

        – Range lookup: return all values for keys defined by a range, such as "key > 5" or "key >= 10 and key < 15"
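
A minimal in-memory sketch of these five operations (Python, with a sorted key list for the range lookup; purely illustrative, and not how the Fractal Tree implements them):

```python
import bisect

class KVStore:
    """Toy Key-Value store supporting the five operations above."""
    def __init__(self):
        self._keys = []   # sorted list of keys, for range lookups
        self._data = {}   # key => value

    def insert(self, key, value):          # Add/Insert (also covers Update)
        if key not in self._data:
            bisect.insort(self._keys, key)
        self._data[key] = value

    def delete(self, key):                 # Delete
        if key in self._data:
            del self._data[key]
            self._keys.remove(key)

    def lookup(self, key):                 # Lookup (select)
        return self._data.get(key)

    def range_lookup(self, lo, hi):        # Range lookup: lo <= key < hi
        i = bisect.bisect_left(self._keys, lo)
        j = bisect.bisect_left(self._keys, hi)
        return [(k, self._data[k]) for k in self._keys[i:j]]

kv = KVStore()
for k in (3, 10, 12, 14, 20):
    kv.insert(k, "v%d" % k)
kv.range_lookup(10, 15)   # => [(10, 'v10'), (12, 'v12'), (14, 'v14')]
```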

The way software implements the internal structure of a Key-Value store defines the performance of the mentioned operations, especially when the data size of the store grows beyond memory capacity.

For decades, the most popular data structure for representing a Key-Value store on disk has been the B-Tree, and with good reason. I won’t go into B-Tree details (see for example https://en.wikipedia.org/wiki/B-tree), but it provides probably the best possible time for Lookup operations. However, it has challenges when it comes to Insert operations.

And this is the area where newcomers such as the Fractal Tree and the LSM-tree (https://en.wikipedia.org/wiki/Log-structured_merge-tree) propose structures that provide better performance for Insert operations (often at the expense of Lookup/Select operations, which may become slower).

To get familiar with the LSM-tree (the structure used by RocksDB) I recommend http://www.benstopford.com/2015/02/14/log-structured-merge-trees/. As for the Fractal Tree, I am going to cover its details in following posts.

The post Fractal Tree library as a Key-Value store appeared first on MySQL Performance Blog.

Creating sort index takes 2.39990 seconds

Latest Forum Posts - July 20, 2015 - 12:37am
select this_.WORKLIST_LOG_PK as WORKLIST1_164_0_, this_.APPLICATION_STATUS_LOG as APPLICAT2_164_0_, this_.test_CHECKER1_REMARKS_LOG_COL as test3_164_0_, this_.test_CHECKER2_REMARKS_LOG_COL as test4_164_0_, this_.TRANSACTION_NUMBER_LOG as TRANSACT5_164_0_, this_.HASERRORS_REMARKS_LOG as HASERRORS6_164_0_, this_.NEW_AR_FLAG_LOG_COL as NEW7_164_0_, this_.OWNER_ID_LOG as OWNER8_164_0_, this_.POS_CHECKER_REMARKS as POS9_164_0_, this_.PROPOSER_EIA_NUMBER_LOG as PROPOSER10_164_0_, this_.PROPOSER_ID_LOG_FK as PROPOSER11_164_0_, this_.REJECTED_REMARKS_ID_LOG as REJECTED12_164_0_, this_.REMARKS_LOG as REMARKS13_164_0_, this_.TRANSACTION_TYPE_LOG as TRANSAC14_164_0_, this_.WFACTION_DATE as WFACTION15_164_0_, this_.WFACTION_USER as WFACTION16_164_0_, this_.WF_DISPACTION_LOG as WF17_164_0_, this_.WF_STAGE_LOG as WF18_164_0_ from INSURANCE_WORKLIST_LOG this_ where this_.PROPOSER_EIA_NUMBER_LOG='2000545565555' and this_.TRANSACTION_TYPE_LOG='01' order by this_.WFACTION_DATE desc\G


query profile:

+----------------------+----------+
| Status | Duration |
+----------------------+----------+
| starting | 0.000066 |
| checking permissions | 0.000007 |
| Opening tables | 0.000011 |
| init | 0.000029 |
| System lock | 0.000009 |
| optimizing | 0.000011 |
| statistics | 0.000012 |
| preparing | 0.000013 |
| Sorting result | 0.000006 |
| executing | 0.000004 |
| Sending data | 0.000013 |
| Creating sort index | 2.393860 |
| end | 0.000018 |
| query end | 0.000010 |
| closing tables | 0.000015 |
| freeing items | 0.000023 |
| cleaning up | 0.000046 |
+----------------------+----------+

It takes 2.393 seconds for Creating sort index. How can I rectify this?

Error checksumming table Incorrect decimal value

Latest Forum Posts - July 18, 2015 - 12:58am
Hello, during a checksum after a server crash I can't checksum one table; I get this error:

Error checksumming table knox.prestazioni_dettagli: Error executing checksum query: Checksum query for table knox.prestazioni_dettagli caused MySQL error 1366:
Level: Error
Code: 1366
Message: Incorrect decimal value: '' for column '' at row -1
Query: REPLACE INTO `heartbeat`.`checksums` (db, tbl, chunk, chunk_index, lower_boundary, upper_boundary, this_cnt, this_crc) SELECT ?, ?, ?, ?, ?, ?,
COUNT(*) AS cnt, COALESCE(LOWER(CONV(BIT_XOR(CAST(CRC32(CONCAT_WS(' #', `id`, `idprestazione`, `idtariffa`, `iddescrizionetariffa`, `moltiplicatoretariffa`,
`costofornitore`, `costocliente`, `costoclientefinale`, `userdefined`, `minimofornitore`, `minimocliente`)) AS UNSIGNED)), 10, 16)), 0) AS crc FROM `knox`
.`prestazioni_dettagli` FORCE INDEX(`PRIMARY`) WHERE ((`id` >= ?)) AND ((`id` <= ?)) /*checksum chunk*/

The table was checked and repaired with MySQL and now it's OK.

Thanks for your help!

Marco

Load spike issue in PXC

Latest Forum Posts - July 16, 2015 - 9:52am
Hi,
I have a PXC setup with 3 nodes in two cluster groups, with cross data center replication between the two groups. My cluster setup is like below.
Array
Here Group1 is the primary datacenter and we have replication from B3 to A3 (A3 is a slave).
We have created a new DB (LBC1) in Group1 without replicating it into Group2 (ignored DB LBC1 in Group2).
The problem is that when we are doing a load test on Group1 on the LBC1 database, we are facing a high load issue on A3, while load is normal on A1 and A2.
The configuration is the same on all nodes. Please help me fix this load issue on node A3.
log-slave-updates is enabled on A3 & B1.

Percona XtraDB Cluster 2nd node unable to join the 1st node

Latest Forum Posts - July 16, 2015 - 9:10am
I have 3 instances on Amazon EC2 running CentOS6. I want to set up a 3-node PXC.

My first node works fine and my second node also runs successfully. However, the second node is not able to join the first node.

Could anybody please help me with this issue?


Node 1:

https://gist.github.com/borislin/2b09185b1291ae4840dd


Node 2:

https://gist.github.com/borislin/a2894851b8102e13e798

Bypassing SST in Percona XtraDB Cluster with binary logs

Latest MySQL Performance Blog posts - July 16, 2015 - 3:00am

In my previous post, I used incremental backups in Percona XtraBackup as a method for rebuilding a Percona XtraDB Cluster (PXC) node without triggering an actual SST. Practically this reproduces the SST steps, but it can be handy if you already had backups available to use.

In this post, I want to present another methodology for this that also uses a full backup, but instead of incrementals uses any binary logs that the cluster may be producing.

Binary logs on PXC

Binary logs are not strictly needed in PXC for replication, but you may be using them for backups or for asynchronous slaves of the cluster.  To set them up properly, we need the following settings added to our config:

server-id=1
log-bin
log-slave-updates

As I stated, none of these are strictly needed for PXC.

  • server-id=1 — We recommend PXC nodes share the same server-id.
  • log-bin — actually enables the binary log
  • log-slave-updates — logs ALL updates to the cluster to this server’s binary log

This doesn’t need to be set on every node, but likely you would set these on at least two nodes in the cluster for redundancy.

Note that this strategy should work with or without 5.6 asynchronous GTIDs.

Recovering data with backups and binary logs

This methodology is conventional point-in-time backup recovery for MySQL.  We have a full backup that was taken at a specific binary log position:

... backup created in the past ...
# innobackupex --no-timestamp /backups/full
# cat /backups/full/xtrabackup_binlog_info
node3-bin.000002        735622700

We have this binary log and all binary logs since:

-rw-r-----. 1 root root 1.1G Jul 14 18:53 node3-bin.000002
-rw-r-----. 1 root root 1.1G Jul 14 18:53 node3-bin.000003
-rw-r-----. 1 root root 321M Jul 14 18:53 node3-bin.000004

Recover the full backup

We start by preparing the backup with --apply-log:

# innobackupex --apply-log --use-memory=1G /backups/full
...
xtrabackup: Recovered WSREP position: 1663c027-2a29-11e5-85da-aa5ca45f600f:60072936
...
InnoDB: Last MySQL binlog file position 0 735622700, file name node3-bin.000002
...
# innobackupex --copy-back /backups/full
# chown -R mysql.mysql /var/lib/mysql

The output confirms the same binary log file and position that we knew from before.

Start MySQL without Galera

We need to start mysql, but without Galera so we can apply the binary log changes before trying to join the cluster. We can do this simply by commenting out all the wsrep settings in the MySQL config.

# grep wsrep /etc/my.cnf
#wsrep_cluster_address = gcomm://pxc.service.consul
#wsrep_cluster_name = mycluster
#wsrep_node_name = node3
#wsrep_node_address = 10.145.50.189
#wsrep_provider = /usr/lib64/libgalera_smm.so
#wsrep_provider_options = "gcache.size=8G; gcs.fc_limit=1024"
#wsrep_slave_threads = 4
#wsrep_sst_method = xtrabackup-v2
#wsrep_sst_auth = sst:secret
# systemctl start mysql

Apply the binary logs

We now check our binary log starting position:

# mysqlbinlog -j 735622700 node3-bin.000002 | grep Xid | head -n 1
#150714 18:38:36 server id 1  end_log_pos 735623273 CRC32 0x8426c6bc  Xid = 60072937

We can compare the Xid on this binary log position to that of the backup. The Xid in a binary log produced by PXC will be the seqno of the GTID of that transaction. The starting position in the binary log shows us the next Xid is one increment higher, so this makes sense: we can start at this position in the binary log and apply all changes as high as we can go to get the datadir up to a more current position.
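
The seqno check described above can be sketched in a few lines (a hypothetical helper; it only assumes mysqlbinlog’s `Xid = N` output format shown in this post):

```python
import re

XID_RE = re.compile(r"Xid\s*=\s*(\d+)")

def first_xid(mysqlbinlog_output):
    """First Xid in a chunk of mysqlbinlog text output (None if absent)."""
    m = XID_RE.search(mysqlbinlog_output)
    return int(m.group(1)) if m else None

# In practice this text would come from e.g.:
#   mysqlbinlog -j 735622700 node3-bin.000002 | grep Xid | head -n 1
sample = "#150714 18:38:36 server id 1 end_log_pos 735623273 CRC32 0x8426c6bc Xid = 60072937"
backup_seqno = 60072936  # from 'Recovered WSREP position: <uuid>:60072936'
assert first_xid(sample) == backup_seqno + 1  # binlog resumes right after the backup
```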

# mysqlbinlog -j 735622700 node3-bin.000002 | mysql
# mysqlbinlog node3-bin.000003 | mysql
# mysqlbinlog node3-bin.000004 | mysql

This action isn’t particularly fast as binlog events are being applied by a single connection thread. Remember that if the cluster is taking writes while this is happening, the amount of time you have is limited by the size of gcache and the rate at which it is being filled up.

Prime the grastate

Once the binary logs are applied, we can check the final log’s last position to get the seqno we need:

[root@node3 backups]# mysqlbinlog node3-bin.000004 | tail -n 500
...
#150714 18:52:52 server id 1  end_log_pos 335782932 CRC32 0xb983e3b3  Xid = 63105191
...

This is indeed the seqno we put in our grastate.dat. Like in the last post, we can copy a grastate.dat from another node to get the proper format. However, this time we must put the proper seqno into place:

# cat grastate.dat
# GALERA saved state
version: 2.1
uuid:    1663c027-2a29-11e5-85da-aa5ca45f600f
seqno:   63105191
cert_index:

Be sure the grastate.dat has the proper permissions, uncomment the wsrep settings and restart mysql on the node:

# chown mysql.mysql /var/lib/mysql/grastate.dat
# grep wsrep /etc/my.cnf
wsrep_cluster_address = gcomm://pxc.service.consul
wsrep_cluster_name = mycluster
wsrep_node_name = node3
wsrep_node_address = 10.145.50.189
wsrep_provider = /usr/lib64/libgalera_smm.so
wsrep_provider_options = "gcache.size=8G; gcs.fc_limit=1024"
wsrep_slave_threads = 4
wsrep_sst_method = xtrabackup-v2
wsrep_sst_auth = sst:secret
# systemctl restart mysql

The node should now attempt to join the cluster with the proper GTID:

2015-07-14 19:28:50 4234 [Note] WSREP: Found saved state: 1663c027-2a29-11e5-85da-aa5ca45f600f:63105191

This, of course, still does not guarantee an IST. See my previous post for more details on the conditions needed for that to happen.

The post Bypassing SST in Percona XtraDB Cluster with binary logs appeared first on MySQL Performance Blog.

Bypassing SST in Percona XtraDB Cluster with incremental backups

Latest MySQL Performance Blog posts - July 16, 2015 - 2:00am
Beware the SST

In Percona XtraDB Cluster (PXC) I often run across users who are fearful of SSTs on their clusters. I’ve always maintained that if you can’t cope with a SST, PXC may not be right for you, but that doesn’t change the fact that SSTs with multiple Terabytes of data can be quite costly.

SST, by current definition, is a full backup of a Donor to Joiner.  The most popular method is Percona XtraBackup, so we’re talking about a donor node that must:

  1. Run a full XtraBackup that reads its entire datadir
  2. Keep up with Galera replication to it as much as possible (though laggy donors don’t send flow control)
  3. Possibly still be serving application traffic if you don’t remove Donors from rotation.

So, I’ve been interested in alternative ways to work around state transfers and I want to present one way I’ve found that may be useful to someone out there.

Percona XtraBackup and Incrementals

It is possible to use Percona XtraBackup full and incremental backups to build a datadir that might possibly IST. First we’ll focus on the mechanics of the backups, preparing them and getting the Galera GTID, and then later discuss when it may be viable for IST.

Suppose I have a fairly recent full XtraBackup backup and one or more incremental backups that I can apply on top of it to get VERY close to realtime on my cluster (more on that ‘VERY’ later).

# innobackupex --no-timestamp /backups/full
... sometime later ...
# innobackupex --incremental /backups/inc1 --no-timestamp --incremental-basedir /backups/full
... sometime later ...
# innobackupex --incremental /backups/inc2 --no-timestamp --incremental-basedir /backups/inc1

In my proof of concept test, I now have a full and two incrementals:

# du -shc /backups/*
909M    full
665M    inc1
812M    inc2
2.4G    total

To recover this data, I follow the normal Xtrabackup incremental apply process:

# cp -av /backups/full /backups/restore
# innobackupex --apply-log --redo-only --use-memory=1G /backups/restore
...
xtrabackup: Recovered WSREP position: 1663c027-2a29-11e5-85da-aa5ca45f600f:35694784
...
# innobackupex --apply-log --redo-only /backups/restore --incremental-dir /backups/inc1 --use-memory=1G
# innobackupex --apply-log --redo-only /backups/restore --incremental-dir /backups/inc2 --use-memory=1G
...
xtrabackup: Recovered WSREP position: 1663c027-2a29-11e5-85da-aa5ca45f600f:46469942
...
# innobackupex --apply-log /backups/restore --use-memory=1G

I can see that as I roll forward on my incrementals, I get a higher and higher GTID. Galera’s GTID is stored in the Innodb recovery information, so Xtrabackup extracts it after every batch it applies to the datadir we’re restoring.
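
Pulling the GTID out of that output can be scripted; here is a small sketch (a hypothetical helper, assuming only the `Recovered WSREP position: <uuid>:<seqno>` line format shown above):

```python
import re

WSREP_RE = re.compile(r"Recovered WSREP position:\s*([0-9a-f-]+):(\d+)")

def recovered_position(apply_log_output):
    """Extract (uuid, seqno) from innobackupex --apply-log output, or None."""
    m = WSREP_RE.search(apply_log_output)
    return (m.group(1), int(m.group(2))) if m else None

line = "xtrabackup: Recovered WSREP position: 1663c027-2a29-11e5-85da-aa5ca45f600f:46469942"
uuid, seqno = recovered_position(line)
# seqno grows as each incremental is applied; the final value is what
# belongs in grastate.dat (or leave -1 there for --wsrep_recover to find).
```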

We now have a datadir that is ready to go; we need to copy it into the datadir of our joiner node and set up a grastate.dat. Without a grastate, starting the node would force an SST no matter what.

# innobackupex --copy-back /backups/restore
# ... copy a grastate.dat from another running node ...
# cat /var/lib/mysql/grastate.dat
# GALERA saved state
version: 2.1
uuid:    1663c027-2a29-11e5-85da-aa5ca45f600f
seqno:   -1
cert_index:
# chown -R mysql.mysql /var/lib/mysql/

If I start the node now, it should see the grastate.dat with the -1 seqno and run --wsrep_recover to extract the GTID from InnoDB (I could have also just put that directly into my grastate.dat).

This will allow the node to startup from merged Xtrabackup incrementals with a known Galera GTID.

But will it IST?

That’s the question.  IST happens when the selected donor has all the transactions the joiner needs to get it fully caught up inside of the donor’s gcache.  There are several implications of this:

  • A gcache is mmap allocated and does not persist across restarts on the donor.  A restart essentially purges the mmap.
  • You can query the oldest GTID seqno on a donor by checking the status variable ‘wsrep_local_cached_downto’.  This variable is not available on 5.5, so you are forced to guess if you can IST or not.
  • Most PXC 5.6 versions will auto-select a donor based on IST candidacy. Prior to that (i.e., 5.5), donor selection was not based on IST candidacy at all, meaning you had to be much more careful and do donor selection manually.
  • There’s no direct mapping from the earliest GTID in a gcache to a specific time, so knowing at a glance if a given incremental will be enough to IST is difficult.
  • It’s also difficult to know how big to make your gcache (set in MB/GB/etc.)  with respect to your backups (which are scheduled by the day/hour/etc.)

All that being said, we’re still talking about backups here.  The above method will work if and only if:

  • You do frequent incremental backups
  • You have a large gcache (hopefully more on this in a future blog post)
  • You can restore a backup faster than it takes for your gcache to overflow
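
As a rough back-of-envelope check (all numbers hypothetical except the 8G gcache from the configs in this post), you can estimate the window you have:

```python
# How long until the donor's gcache overflows, i.e. the window within
# which the restore must finish for IST to remain possible.
gcache_bytes   = 8 * 1024**3    # gcache.size = 8G (as configured in this post)
write_rate_bps = 20 * 1024**2   # assumed: ~20 MB/s of replicated writes
window_seconds = gcache_bytes / write_rate_bps   # ~410 seconds, i.e. under 7 minutes
```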

The post Bypassing SST in Percona XtraDB Cluster with incremental backups appeared first on MySQL Performance Blog.

--log-bin-trust-function-creators=1 percona master-master cluster -> safe?

Latest Forum Posts - July 16, 2015 - 1:17am
hi,

one of our web application (I-DOIT) wants to create a function:

Database error : Query error: '
CREATE FUNCTION alphas(str CHAR(100)) RETURNS CHAR(100) DETERMINISTIC READS SQL DATA
BEGIN
DECLARE i, len SMALLINT DEFAULT 1;
DECLARE ret CHAR(100) DEFAULT '';
DECLARE c CHAR(1);
SET len = CHAR_LENGTH( str );
REPEAT
BEGIN
SET c = MID( str, i, 1 );
IF c REGEXP '[[:alpha:]]' THEN
SET ret=CONCAT(ret,c);
END IF;
SET i = i + 1;
END;
UNTIL i > len END REPEAT;
RETURN ret;
END':
You do not have the SUPER privilege and binary logging is enabled (you *might* want to use the less safe log_bin_trust_function_creators variable)

Is it safe to enable the variable on all nodes? The application connects to the cluster via HAProxy (round-robin).

===============================
[...]
[mysqld]

# GENERAL #
port = 3306
user = mysql
default-storage-engine = InnoDB
socket = /var/lib/mysql/mysql.sock
pid-file = /var/lib/mysql/mysql.pid
# MUST DO UTF8 #
collation-server = utf8_unicode_ci
character-set-server = utf8
init_connect = 'SET NAMES utf8, collation_connection=utf8_unicode_ci'

# MyISAM #
key-buffer-size = 32M
myisam-recover = FORCE,BACKUP

# SAFETY #
max-allowed-packet = 25M
max-connect-errors = 1000000
skip-name-resolve
#sql-mode = ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_AUTO_VALUE_ON_ZERO,NO_ZERO_DATE,NO_ZERO_IN_DATE,ONLY_FULL_GROUP_BY
sql-mode = NO_ENGINE_SUBSTITUTION
# deactivated because of i-doit
innodb-strict-mode = 0

# DATA STORAGE #
datadir = /var/lib/mysql/

# BINARY LOGGING #
binlog_format = ROW
log-bin = /var/lib/mysql/mysql-bin
expire-logs-days = 2
max_binlog_size = 100M
sync-binlog = 1

# CACHES AND LIMITS #
tmp-table-size = 32M
max-heap-table-size = 32M
query-cache-type = 0
query-cache-size = 0
max-connections = 500
thread-cache-size = 50
open-files-limit = 65535
table-definition-cache = 4096
table-open-cache = 4096

# INNODB #
innodb-flush-method = O_DIRECT
innodb-log-files-in-group = 2
innodb-log-file-size = 256M
innodb-flush-log-at-trx-commit = 1
innodb-file-per-table = 1
innodb-buffer-pool-size = 6G
innodb_autoinc_lock_mode = 2

# LOGGING #
log-error = /var/lib/mysql/mysql-error.log
log-queries-not-using-indexes = 1
slow-query-log = 1
slow-query-log-file = /var/lib/mysql/mysql-slow.log

# Cluster Configuration #
wsrep_provider = /usr/lib/galera2/libgalera_smm.so
wsrep_provider_options = "pc.ignore_sb = yes; gcache.size=2G; gcache.page_size=1G"
wsrep_cluster_name = "XtraDB_Cluster"
wsrep_sst_method = xtrabackup-v2
wsrep_slave_threads = 16
wsrep_sst_auth = "sstuser:secret"
wsrep_cluster_address = "gcomm://......"

[sst]
streamfmt = xbstream

!includedir /etc/mysql/conf.d/
==================================


cu denny

startup script - any reason to exit if mysqld isn't running?

Latest Forum Posts - July 15, 2015 - 12:49pm
With cluster in version 5.6 with auto-crash recovery, I'm looking at one of the support scripts: Percona-XtraDB-Cluster-5.6.24-72.2/support-files/mysql.server.sh ..

As it stands, if you have a cluster that died from a power cut, that script would prevent it from starting and from taking advantage of auto-recovery. My question (as the topic states): is there a reason to just bail if there's a pid file but no matching process for it? Why not just continue the startup?

The function is "check_running()" and the place in question is lines 308 & 309.

If you go to the start case at line 339, it aborts unless ext_status=3.

Any thoughts appreciated.

Thanks
-b

working ss_get_by_ssh script, broken graph images

Latest Forum Posts - July 15, 2015 - 11:22am
I'm having a strange issue on Centos 7 and Cacti 0.8.8b. I have the ss_get_by_ssh.php script working (from command line) but the cacti graphs are blank. Here's output from the script:

[root@ip-x.x.x.x ~]# sudo -u cacti php /usr/share/cacti/scripts/ss_get_by_ssh.php --type proc_stat --host y.y.y.y --items gw,gx,gy,gz
gw:2561826 gx:13105 gy:5533113 gz:2389303747

Every single Percona graph on Cacti is a broken image. Any ideas?

What is the best way to set up pt-kill to kill 1+ minute queries and report 40+ second queries?

Latest Forum Posts - July 15, 2015 - 10:00am
hi,

What would be the best way to set up pt-kill to do two tasks: report 40+ second queries and kill 1+ minute queries? Just set up two different pt-kill instances with different params?

Percona cluster not replicating newly created database

Latest Forum Posts - July 15, 2015 - 3:00am
I am running a Percona XtraDB Cluster of three nodes (per1, per2, per3). So far everything is working as expected, however I am having an issue. I created a new database "testdb" on one of the servers (per1) and then I also created a new user "testuser" and gave it full permissions on the new database.

The thing is, the new database is not replicated on the other servers. I have checked my.cnf and I don't see anything specifying a specific database to be replicated. Apart from this, the other databases are properly replicated so far.

Any ideas?

Unknown option: remote-host

Latest Forum Posts - July 15, 2015 - 1:59am
Hi everyone,
According to this : https://www.percona.com/doc/percona-...ckups_ibk.html
It's possible to store backups to a remote host, without the need of NFS.
So here is my command:

# innobackupex --remote-host=root@10.1.2.156 --user=percona --password=mypassword /data/db-backup/bdd
Unknown option: remote-host
innobackupex: got a fatal error with the following stacktrace: at /usr/bin/innobackupex line 3916
        main::check_args() called at /usr/bin/innobackupex line 1531
innobackupex: Error: Bad command line arguments

Damn, what am I missing?
The remote-host option appears in the manpage; I don't understand why this doesn't work.

Regards,
Xavier C.

error join cluster

Latest Forum Posts - July 15, 2015 - 1:23am
hi,
I am building a Percona cluster on Percona-XtraDB-Cluster-5.6.24-72.2
and got an error when joining the cluster.

Please help me out.
Thanks.

error.log on node1
2015-07-15 16:19:07 21106 [Note] WSREP: Tables flushed.
rsync: recv_generator: mkdir "mysql" (in rsync_sst) failed: Permission denied (13)
*** Skipping any contents from this failed directory ***
rsync: recv_generator: mkdir "performance_schema" (in rsync_sst) failed: Permission denied (13)
*** Skipping any contents from this failed directory ***
rsync: recv_generator: mkdir "test" (in rsync_sst) failed: Permission denied (13)
*** Skipping any contents from this failed directory ***
rsync: open "ibdata01" (in rsync_sst) failed: Permission denied (13)
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1039) [sender=3.0.6]
WSREP_SST: [ERROR] rsync returned code 23: (20150715 16:19:09.434)
2015-07-15 16:19:09 21106 [ERROR] WSREP: Failed to read from: wsrep_sst_rsync --role 'donor' --address '192.168.6.4:4444/rsync_sst' --auth '(null)' --socket '/mysql/data/mysql.sock' --datadir '/mysql/data/' --defaults-file '/etc/my.cnf' --defaults-group-suffix '' '' --gtid 'b8940f5c-2ac2-11e5-b81c-9aa06090fe52:0'
2015-07-15 16:19:09 21106 [ERROR] WSREP: Process completed with error: wsrep_sst_rsync --role 'donor' --address '192.168.6.4:4444/rsync_sst' --auth '(null)' --socket '/mysql/data/mysql.sock' --datadir '/mysql/data/' --defaults-file '/etc/my.cnf' --defaults-group-suffix '' '' --gtid 'b8940f5c-2ac2-11e5-b81c-9aa06090fe52:0': 255 (Unknown error 255)
2015-07-15 16:19:09 21106 [Note] WSREP: resuming provider at 5
2015-07-15 16:19:09 21106 [Note] WSREP: Provider resumed.
2015-07-15 16:19:09 21106 [ERROR] WSREP: Command did not run: wsrep_sst_rsync --role 'donor' --address '192.168.6.4:4444/rsync_sst' --auth '(null)' --socket '/mysql/data/mysql.sock' --datadir '/mysql/data/' --defaults-file '/etc/my.cnf' --defaults-group-suffix '' '' --gtid 'b8940f5c-2ac2-11e5-b81c-9aa06090fe52:0'
2015-07-15 16:19:09 21106 [Warning] WSREP: 0.0 (ip63): State transfer to 1.0 (ip64) failed: -255 (Unknown error 255)
2015-07-15 16:19:09 21106 [Note] WSREP: Shifting DONOR/DESYNCED -> JOINED (TO: 0)

And here is my conf file:
: [root@mytest1 lib]# cat /etc/my.cnf
[client]
port = 3306
socket = /mysql/data/mysql.sock
#prompt = "\u@\d> "
#pager = more

[mysqld]
# *************** main ***************
port = 3306
user = mysql
basedir = /usr/local/mysql
datadir = /mysql/data
socket = /mysql/data/mysql.sock
pid-file = /mysql/data/mysql.pid
log-error = error.err

# *************** General ***************
memlock
skip-name-resolve
explicit_defaults_for_timestamp
max_connections = 1000
back_log = 500
max_connect_errors = 10000
table_open_cache = 16384
thread_cache_size = 64
query_cache_type = 0
query_cache_size = 0
max_allowed_packet = 1M

# ************* Galera & Innodb & binlog **************
wsrep_provider = /usr/local/mysql/lib/libgalera_smm.so
wsrep_cluster_name = my_cluster
wsrep_cluster_address = gcomm://192.168.6.3,192.168.6.4,192.168.6.5,192.168.6.6
wsrep_node_name = ip63
wsrep_node_address = 192.168.6.3
wsrep_sst_method = rsync
#wsrep_sst_method = xtrabackup
#wsrep_sst_auth = "sst:sst"
log-bin = mysql-bin
binlog_format = ROW
expire_logs_days = 3
default_storage_engine = Innodb
innodb_autoinc_lock_mode = 2
innodb_flush_method = O_DIRECT
innodb_file_per_table = 1
innodb_buffer_pool_size = 2G
innodb_log_buffer_size = 10M
innodb_data_file_path = ibdata01:200M:autoextend
innodb_log_file_size = 256M
innodb_log_files_in_group = 2
innodb_open_files = 8192
innodb_flush_log_at_trx_commit = 2
innodb_lock_wait_timeout = 60

# ************* slow log **************
slow_query_log
long_query_time = 3

# *************** MyIsam ***************
key_buffer_size = 128M
sort_buffer_size = 2M
read_buffer_size = 2M
read_rnd_buffer_size = 8M
myisam_sort_buffer_size = 32M
On NODE1, I start up MySQL with:
: /etc/init.d/mysql start --wsrep_cluster_address=gcomm://
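(For readers with the same symptom: "Permission denied (13)" during an rsync SST usually points at filesystem ownership or SELinux on the joiner, not at Galera itself. A hedged diagnostic sketch, assuming the datadir from the posted my.cnf:)

```shell
# Quick checks for the rsync SST "Permission denied (13)" above. The two
# usual suspects (assumptions, verify on your nodes) are datadir ownership
# and SELinux blocking the rsync_sst process on the joiner.
DATADIR=/mysql/data
if [ -d "$DATADIR" ]; then
    ls -ld "$DATADIR"          # should be owned by mysql:mysql
else
    echo "datadir $DATADIR not present on this host"
fi
if command -v getenforce >/dev/null 2>&1; then
    getenforce                 # "Enforcing" means SELinux may block the SST
else
    echo "SELinux tools not installed"
fi
# If ownership is wrong:  chown -R mysql:mysql "$DATADIR"
# To test SELinux impact: setenforce 0   (temporarily, then fix the policy)
```

Run the checks on the joiner node (ip64 in the log), since that is where rsync failed to create the directories.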

MySQL QA Episode 6: Analyzing and Filtering

Latest MySQL Performance Blog posts - July 15, 2015 - 12:00am

Welcome to MySQL QA Episode #6!

Today we will look into analyzing and filtering our QA run. We’ll use tools like pquery-prep-red.sh, pquery-clean-known.sh and pquery-results.sh.

1. Quick re-cap and QA run setup
2. pquery-prep-red.sh
3. pquery-clean-known.sh
4. pquery-results.sh
5. Bonus: pquery reach – pquery-reach++.sh

We’ll also introduce the text_string.sh tool, which extracts the most specific text string from the error log in order to classify a bug/issue.
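As a rough illustration (this is not the actual text_string.sh from percona-qa, just a sketch of the idea), extracting such a string can be as simple as grepping the last assertion line from the error log and stripping the volatile parts such as timestamps and thread ids:

```shell
# Build a small demo error log, then pull out the most specific
# failure line in a form stable enough to classify duplicate bugs.
cat > /tmp/demo_error.log <<'EOF'
2015-07-15 10:00:01 [Note] InnoDB: Starting crash recovery.
2015-07-15 10:00:02 [ERROR] InnoDB: Assertion failure in thread 140 in file rem0rec.cc line 578
2015-07-15 10:00:02 [Note] InnoDB: Shutdown complete
EOF
# Keep only the last assertion/signal line, drop timestamp and thread id:
grep -E 'Assertion failure|signal [0-9]+' /tmp/demo_error.log \
  | tail -n1 \
  | sed -E 's/^[0-9: -]+//; s/in thread [0-9]+ //'
```

The surviving string ("Assertion failure in file rem0rec.cc line 578") is what you would use to match a crash against known bugs.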

Full-screen viewing @ 720p resolution recommended

The post MySQL QA Episode 6: Analyzing and Filtering appeared first on MySQL Performance Blog.

XtraBackup | Replication | Table 'mysql.slave_master_info' cannot be opened.

Latest Forum Posts - July 14, 2015 - 8:04am
Hello - I am looking for some advice. I used XtraBackup with no issues, fantastic bit of kit.
I took a backup of my database from the master and then moved the files over to the slave without issue.

It's running, but I am unable to get replication working now; it had worked before, but the DB was fluffed!

2015-07-13 00:29:02 57003 [Warning] InnoDB: Cannot open table mysql/slave_master_info from the internal data dictionary of InnoDB though the .frm file for the table exists. See http://dev.mysql.com/doc/refman/5.6/...eshooting.html for how you can resolve the problem.
2015-07-13 00:29:02 57003 [Warning] Info table is not ready to be used. Table 'mysql.slave_master_info' cannot be opened.
2015-07-13 00:29:02 57003 [ERROR] Error in checking mysql.slave_master_info repository info type of TABLE.
2015-07-13 00:29:02 57003 [ERROR] Error creating master info: Error checking repositories.
2015-07-13 00:29:02 57003 [ERROR] Failed to create or recover replication info repository.
2015-07-13 00:29:02 57003 [Note] Check error log for additional messages. You will not be able to start replication until the issue is resolved.

It cannot read the following tables for some reason:

| slave_master_info |
| slave_relay_log_info |
| slave_worker_info |

mysql> SELECT * from slave_worker_info;
ERROR 1146 (42S02): Table 'mysql.slave_worker_info' doesn't exist
mysql>

The issue is extremely similar to:
http://dba.stackexchange.com/questio...ts/48189#48189

I have tried to drop these and re-create them, but it keeps saying they exist.
I have dropped the tables, deleted the files from /var/lib/mysql/mysql, and then ran mysql_install_db:

rm -rf slave_master_info.ibd slave_relay_log_info.frm slave_relay_log_info.ibd slave_worker_info.frm slave_worker_info.ibd slave_master_info.frm

I have also tried the above steps, minus mysql_install_db, and tried to create the tables, but MySQL says the tables already exist.

They are no longer showing below, but I can't create them, as it says they exist:

mysql> use mysql;
Database changed
mysql> show tables;
+---------------------------+
| Tables_in_mysql |
+---------------------------+
| columns_priv |
| db |
| event |
| func |
| general_log |
| help_category |
| help_keyword |
| help_relation |
| help_topic |
| host |
| ndb_binlog_index |
| plugin |
| proc |
| procs_priv |
| proxies_priv |
| servers |
| slow_log |
| tables_priv |
| time_zone |
| time_zone_leap_second |
| time_zone_name |
| time_zone_transition |
| time_zone_transition_type |
| user |
+---------------------------+
24 rows in set (0.00 sec)

mysql> CREATE TABLE `innodb_index_stats` (
  `database_name` varchar(64) COLLATE utf8_bin NOT NULL,
  `table_name` varchar(64) COLLATE utf8_bin NOT NULL,
  `index_name` varchar(64) COLLATE utf8_bin NOT NULL,
  `last_update` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `stat_name` varchar(64) COLLATE utf8_bin NOT NULL,
  `stat_value` bigint(20) unsigned NOT NULL,
  `sample_size` bigint(20) unsigned DEFAULT NULL,
  `stat_description` varchar(1024) COLLATE utf8_bin NOT NULL,
  PRIMARY KEY (`database_name`,`table_name`,`index_name`,`stat_name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin STATS_PERSISTENT=0;
ERROR 1050 (42S01): Table '`mysql`.`innodb_index_stats`' already exists
ERROR 1050 (42S01): Table '`mysql`.`innodb_index_stats`' already exists

Can anyone help me?
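(A commonly suggested workaround for this symptom, ERROR 1050 on a table whose .frm/.ibd files are gone but which is still registered in InnoDB's internal data dictionary, is the .frm copy trick. This is a hedged sketch, not an official procedure; take a full backup first:)

```shell
# Hedged recovery sketch for an orphaned InnoDB dictionary entry.
# 1) Create a table with the same DDL in a scratch schema:
#      mysql> CREATE DATABASE scratch;
#      mysql> CREATE TABLE scratch.slave_worker_info (...);   -- same DDL
# 2) Copy its .frm over the missing one so InnoDB can "see" the table:
FRM_SRC=/var/lib/mysql/scratch/slave_worker_info.frm
echo "cp $FRM_SRC /var/lib/mysql/mysql/"   # shown, not executed
# 3) Restart mysqld, then DROP TABLE mysql.slave_worker_info; which
#    clears the dictionary entry. Finally recreate the slave_* tables
#    from the DDL shipped with the server, e.g.
#    /usr/share/mysql/mysql_system_tables.sql (path varies by distro).
```

Repeat for slave_master_info and slave_relay_log_info; once the tables are properly recreated, replication setup via CHANGE MASTER should work again.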

MongoDB benchmark: sysbench-mongodb IO-bound workload comparison

Latest MySQL Performance Blog posts - July 14, 2015 - 7:26am

In this post I’ll share the results of a sysbench-mongodb benchmark I performed on my server. I compared MMAP, WiredTiger, RocksDB and TokuMXse (based on MongoDB 3.0) and TokuMX (based on MongoDB 2.4) in an IO-intensive workload.

The full results are available here, and below I’ll just share the summary chart:

I would like to highlight that this benchmark was designed to emulate a heavy IO load on a (relatively) slow IO subsystem. This use case, I believe, is totally valid and represents frequently used “cloud” setups with limited memory and slow IO.

The WiredTiger engine, being B-Tree based, is expected to perform worse compared to RocksDB and the Toku Fractal Trees, which are designed to handle IO-intensive workloads. My assumption is that WiredTiger will perform better (or even outperform the others) for CPU-intensive in-memory workloads (see for example Mark Callaghan’s results). WiredTiger is also expected to perform better with faster storage.

The post MongoDB benchmark: sysbench-mongodb IO-bound workload comparison appeared first on MySQL Performance Blog.
