Feed aggregator

MySQL Design Question: How to store a uuid + char string column in the database

Latest Forum Posts - December 20, 2014 - 8:38am
One of the key application tables has a column adid (VARCHAR(50)) as a unique key, which has the following format (uuid + a character string). This column is used for heavy batch updates (millions of updates, 100 updates per commit). This causes slave lag and is also a scalability concern for the future.

0000015c-aae9-46f0-ac23-f3d188ca1686-0-us_gse
00000161-cefd-4c56-bd99-d3ba716d6acf-0-cts_gsb

The table has an auto-increment primary key.

Is there a more efficient way to store this data? For a uuid it is recommended to convert it into BINARY(16). What can we do here? Can this string also be converted into binary? Or is there any other design idea?
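For illustration only, a rough sketch of one possible layout (the table and column names here are hypothetical): store the UUID part unpacked into BINARY(16) plus a short suffix column, and make the pair the unique key:

-- hypothetical names; adid layout: chars 1-36 are the uuid, char 38 onward is the suffix
ALTER TABLE app_table
  ADD COLUMN adid_uuid BINARY(16) NOT NULL,
  ADD COLUMN adid_suffix VARCHAR(20) NOT NULL,
  ADD UNIQUE KEY uk_adid (adid_uuid, adid_suffix);

UPDATE app_table
SET adid_uuid   = UNHEX(REPLACE(SUBSTRING(adid, 1, 36), '-', '')),
    adid_suffix = SUBSTRING(adid, 38);

This cuts the unique key from 50 characters to 16 bytes plus the short suffix; whether it helps the batch updates enough would still need to be benchmarked.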

CentOS 6.5 And Percona XtraDB Cluster Just Doesn't Work?

Latest Forum Posts - December 19, 2014 - 1:55pm
I can easily get Ubuntu/Debian clustering working. However, on CentOS I get nothing but errors:
141219 14:42:55 mysqld_safe mysqld from pid file /var/lib/mysql/perconadb2.hsmovedev.com.pid ended
141219 14:43:30 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
141219 14:43:30 mysqld_safe WSREP: Running position recovery with --log_error='/var/lib/mysql/wsrep_recovery.WnB5jJ' --pid-file='/var/lib/mysql/perconadb2.hsmovedev.com-recover.pid'
141219 14:43:32 mysqld_safe WSREP: Recovered position cab4275d-87c6-11e4-8794-87a6df8cd654:0
141219 14:43:32 [Note] WSREP: wsrep_start_position var submitted: 'cab4275d-87c6-11e4-8794-87a6df8cd654:0'
141219 14:43:32 [Note] WSREP: Read nil XID from storage engines, skipping position init
141219 14:43:32 [Note] WSREP: wsrep_load(): loading provider library '/usr/lib64/libgalera_smm.so'
141219 14:43:32 [Note] WSREP: wsrep_load(): Galera 2.11(r318911d) by Codership Oy <info@codership.com> loaded successfully.
141219 14:43:32 [Note] WSREP: Found saved state: cab4275d-87c6-11e4-8794-87a6df8cd654:-1
141219 14:43:32 [Note] WSREP: Reusing existing '/var/lib/mysql//galera.cache'.
141219 14:43:32 [Note] WSREP: Passing config to GCS: base_host = 192.168.102.210; base_port = 4567; cert.log_conflicts = no; debug = no; evs.inactive_check_period = PT0.5S; evs.inactive_timeout = PT15S; evs.join_retrans_period = PT1S; evs.max_install_timeouts = 3; evs.send_window = 4; evs.stats_report_period = PT1M; evs.suspect_timeout = PT5S; evs.user_send_window = 2; evs.view_forget_timeout = PT24H; gcache.dir = /var/lib/mysql/; gcache.keep_pages_size = 0; gcache.mem_size = 0; gcache.name = /var/lib/mysql//galera.cache; gcache.page_size = 128M; gcache.size = 128M; gcs.fc_debug = 0; gcs.fc_factor = 1.0; gcs.fc_limit = 16; gcs.fc_master_slave = no; gcs.max_packet_size = 64500; gcs.max_throttle = 0.25; gcs.recv_q_hard_limit = 9223372036854775807; gcs.recv_q_soft_limit = 0.25; gcs.sync_donor = no; gmcast.version = 0; pc.announce_timeout = PT3S; pc.checksum = false; pc.ignore_quorum = false; pc.ignore_sb = false; pc.npvo = false; pc.version = 0; pc.wait_prim = true; pc.wait_prim_timeout = P30S; pc.weight = 1; protonet.backend = asio;
141219 14:43:32 [Note] WSREP: Assign initial position for certification: 0, protocol version: -1
141219 14:43:32 [Note] WSREP: wsrep_sst_grab()
141219 14:43:32 [Note] WSREP: Start replication
141219 14:43:32 [Note] WSREP: Setting initial position to cab4275d-87c6-11e4-8794-87a6df8cd654:0
141219 14:43:32 [Note] WSREP: protonet asio version 0
141219 14:43:32 [Note] WSREP: backend: asio
141219 14:43:32 [Note] WSREP: GMCast version 0
141219 14:43:32 [Note] WSREP: (14146b78, 'tcp://0.0.0.0:4567') listening at tcp://0.0.0.0:4567
141219 14:43:32 [Note] WSREP: (14146b78, 'tcp://0.0.0.0:4567') multicast: , ttl: 1
141219 14:43:32 [Note] WSREP: EVS version 0
141219 14:43:32 [Note] WSREP: PC version 0
141219 14:43:32 [Note] WSREP: gcomm: connecting to group 'my_centos_cluster', peer '192.168.102.209:,192.168.102.210:,192.168.102.211 :'
141219 14:43:32 [Warning] WSREP: (14146b78, 'tcp://0.0.0.0:4567') address 'tcp://192.168.102.210:4567' points to own listening address, blacklisting
141219 14:43:35 [Warning] WSREP: no nodes coming from prim view, prim not possible
141219 14:43:36 [Warning] WSREP: last inactive check more than PT1.5S ago (PT3.50768S), skipping check
141219 14:44:05 [ERROR] WSREP: failed to open gcomm backend connection: 110: failed to reach primary view: 110 (Connection timed out)
at gcomm/src/pc.cpp:connect():141
141219 14:44:05 [ERROR] WSREP: gcs/src/gcs_core.cpp:long int gcs_core_open(gcs_core_t*, const char*, const char*, bool)():204: Failed to open backend connection: -110 (Connection timed out)
141219 14:44:05 [ERROR] WSREP: gcs/src/gcs.cpp:long int gcs_open(gcs_conn_t*, const char*, const char*, bool)():1303: Failed to open channel 'my_centos_cluster' at 'gcomm://192.168.102.209,192.168.102.210,192.168.102.211': -110 (Connection timed out)
141219 14:44:05 [ERROR] WSREP: gcs connect failed: Connection timed out
141219 14:44:05 [ERROR] WSREP: wsrep::connect() failed: 7
141219 14:44:05 [ERROR] Aborting

141219 14:44:05 [Note] WSREP: Service disconnected.
141219 14:44:06 [Note] WSREP: Some threads may fail to exit.
141219 14:44:06 [Note] /usr/sbin/mysqld: Shutdown complete

141219 14:44:06 mysqld_safe mysqld from pid file /var/lib/mysql/perconadb2.hsmovedev.com.pid ended

I followed the guide to the letter, and I can telnet to the ports just fine on the master. What is the issue?

Store UUID in an optimized way

Latest MySQL Performance Blog posts - December 19, 2014 - 6:00am

A few years ago Peter Zaitsev, in a post titled “To UUID or not to UUID,” wrote: “There is timestamp based part in UUID which has similar properties to auto_increment and which could be used to have values generated at same point in time physically local in BTREE index.”

For this post I’ve rearranged the timestamp part of UUID (Universal Unique Identifier) and did some benchmarks.

Many people store UUIDs as CHAR(36) and use them as the row identity value (PRIMARY KEY), because a UUID is unique across every table, every database and every server, and allows easy merging of records from different databases. But here comes the problem: using it as a PRIMARY KEY causes the issues described below.

Problems with UUID
  • A UUID has 36 characters, which makes it bulky.
  • InnoDB stores data in PRIMARY KEY order, and all the secondary keys also contain the PRIMARY KEY. So having a UUID as the PRIMARY KEY makes the indexes bigger, and they may not fit into memory.
  • Inserts are random and the data is scattered.

Despite the problems with UUIDs, people still prefer them because they are unique across every table and can be generated anywhere. In this blog post, I will explain how to store a UUID in an efficient way by rearranging the timestamp part of the UUID.

Structure of UUID

MySQL uses UUID version 1, which is a 128-bit number represented as a utf8 string of five hexadecimal numbers:

  • The first three numbers are generated from a timestamp.
  • The fourth number preserves temporal uniqueness in case the timestamp value loses monotonicity (for example, due to daylight saving time).
  • The fifth number is an IEEE 802 node number that provides spatial uniqueness. A random number is substituted if the latter is not available (for example, because the host computer has no Ethernet card, or we do not know how to find the hardware address of an interface on your operating system). In this case, spatial uniqueness cannot be guaranteed. Nevertheless, a collision should have very low probability.

The timestamp is mapped as follows:
When the timestamp has the (60-bit) hexadecimal value 1d8eebc58e0a7d7, the following parts of the UUID are set: 58e0a7d7-eebc-11d8-9669-0800200c9a66. The 1 before the most significant digits of the timestamp (in 11d8) indicates the UUID version; for time-based UUIDs this is 1.

The fourth and fifth parts would be mostly constant if the UUID is generated on a single server. The first three numbers are based on the timestamp, so they will be monotonically increasing. Let's rearrange the sequence to make the UUID closer to sequential. This makes inserts and recent-data lookups faster. The dashes ('-') serve no purpose, so let's remove them:
58e0a7d7-eebc-11d8-9669-0800200c9a66 => 11d8eebc58e0a7d796690800200c9a66

Benchmarking

I created three tables:

  • events_uuid – UUID binary(16) PRIMARY KEY
  • events_int – an additional BIGINT auto-increment column used as the PRIMARY KEY, with an index on the UUID column
  • events_uuid_ordered – Rearranged UUID binary(16) as PRIMARY KEY

I created three stored procedures, each of which inserts 25K random rows at a time into its respective table. There are three more stored procedures which call the random-insert procedures in a loop and also record the time taken to insert 25K rows, plus the data and index size after each loop. In total I inserted 25M records.
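For illustration, a minimal sketch of what one such insert procedure could look like for the ordered-UUID table (the exact procedures are not reproduced in this post; the column list here is assumed, and ordered_uuid() is the function shown in the Conclusions section below):

DELIMITER //
CREATE PROCEDURE insert_batch_uuid_ordered()
BEGIN
  -- insert 25K random rows in one call
  DECLARE i INT DEFAULT 0;
  WHILE i < 25000 DO
    INSERT INTO events_uuid_ordered (id, unit_id, event)
    VALUES (ordered_uuid(UUID()), UNHEX(REPLACE(UUID(), '-', '')), FLOOR(RAND() * 10));
    SET i = i + 1;
  END WHILE;
END //
DELIMITER ;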

    • Data Size
      Horizontal Axis – Number of inserts x 25,000
      Vertical Axis – Data Size in MB

      The data size for the UUID table is larger than for the other two tables.
    • Index Size
      Horizontal axis – Number of inserts x 25,000
      Vertical axis – Index Size in MB
    • Total Size
      Horizontal Axis – Number of inserts x 25,000
      Vertical Axis – Total Size in MB
    • Time taken
      Horizontal axis – Number of inserts x 25,000
      Vertical axis – Time Taken in seconds

For the table with the UUID as PRIMARY KEY, you can notice that as the table grows, the time taken to insert rows increases almost linearly, whereas for the other tables the time taken is almost constant.

The size of the UUID table is almost 50% bigger than the ordered-UUID table and 30% bigger than the table with a BIGINT PRIMARY KEY. Comparing the ordered-UUID table and the BIGINT table, the time taken to insert rows and the sizes are almost the same, though they may vary slightly based on the index structure.

root@localhost:~# ls -lhtr /media/data/test/ | grep ibd
-rw-rw---- 1 mysql mysql  13G Jul 24 15:53 events_uuid_ordered.ibd
-rw-rw---- 1 mysql mysql  20G Jul 25 02:27 events_uuid.ibd
-rw-rw---- 1 mysql mysql  15G Jul 25 07:59 events_int.ibd

Table Structure

#1 events_int

CREATE TABLE `events_int` (
  `count` bigint(20) NOT NULL AUTO_INCREMENT,
  `id` binary(16) NOT NULL,
  `unit_id` binary(16) DEFAULT NULL,
  `event` int(11) DEFAULT NULL,
  `ref_url` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
  `campaign_id` binary(16) COLLATE utf8_unicode_ci DEFAULT '',
  `unique_id` binary(16) COLLATE utf8_unicode_ci DEFAULT NULL,
  `user_agent` varchar(100) COLLATE utf8_unicode_ci DEFAULT NULL,
  `city` varchar(80) COLLATE utf8_unicode_ci DEFAULT NULL,
  `country` varchar(80) COLLATE utf8_unicode_ci DEFAULT NULL,
  `demand_partner_id` binary(16) DEFAULT NULL,
  `publisher_id` binary(16) DEFAULT NULL,
  `site_id` binary(16) DEFAULT NULL,
  `page_id` binary(16) DEFAULT NULL,
  `action_at` datetime DEFAULT NULL,
  `impression` smallint(6) DEFAULT NULL,
  `click` smallint(6) DEFAULT NULL,
  `sold_impression` smallint(6) DEFAULT NULL,
  `price` decimal(15,7) DEFAULT '0.0000000',
  `actioned_at` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
  `unique_ads` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
  `notification_url` text COLLATE utf8_unicode_ci,
  PRIMARY KEY (`count`),
  KEY `id` (`id`),
  KEY `index_events_on_actioned_at` (`actioned_at`),
  KEY `index_events_unit_demand_partner` (`unit_id`,`demand_partner_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

#2 events_uuid

CREATE TABLE `events_uuid` (
  `id` binary(16) NOT NULL,
  `unit_id` binary(16) DEFAULT NULL,
~
~
  PRIMARY KEY (`id`),
  KEY `index_events_on_actioned_at` (`actioned_at`),
  KEY `index_events_unit_demand_partner` (`unit_id`,`demand_partner_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

#3 events_uuid_ordered

CREATE TABLE `events_uuid_ordered` (
  `id` binary(16) NOT NULL,
  `unit_id` binary(16) DEFAULT NULL,
~
~
  PRIMARY KEY (`id`),
  KEY `index_events_on_actioned_at` (`actioned_at`),
  KEY `index_events_unit_demand_partner` (`unit_id`,`demand_partner_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

Conclusions

 

    • Create a function to rearrange the UUID fields and use it:

DELIMITER //
CREATE DEFINER=`root`@`localhost` FUNCTION `ordered_uuid`(uuid BINARY(36))
RETURNS binary(16) DETERMINISTIC
RETURN UNHEX(CONCAT(SUBSTR(uuid, 15, 4),SUBSTR(uuid, 10, 4),SUBSTR(uuid, 1, 8),SUBSTR(uuid, 20, 4),SUBSTR(uuid, 25)));
//
DELIMITER ;

Inserts

INSERT INTO events_uuid_ordered VALUES (ordered_uuid(uuid()),'1','M',....);

Selects

SELECT HEX(uuid),is_active,... FROM events_uuid_ordered ;

    • Define the UUID column as BINARY(16), since binary columns do not have a character set.
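A related helper, shown here only as a sketch (it is not part of the original post), converts a stored ordered value back to the familiar UUID text layout for display:

-- sketch: inverse of ordered_uuid() above, for displaying stored values
CREATE FUNCTION uuid_from_ordered(b BINARY(16)) RETURNS CHAR(36) DETERMINISTIC
RETURN LOWER(CONCAT(
  SUBSTR(HEX(b), 9, 8), '-',
  SUBSTR(HEX(b), 5, 4), '-',
  SUBSTR(HEX(b), 1, 4), '-',
  SUBSTR(HEX(b), 17, 4), '-',
  SUBSTR(HEX(b), 21)));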

 


The post Store UUID in an optimized way appeared first on MySQL Performance Blog.

wsrep/xtrabackup-v2 fails when having a dedicated binary logfile directory

Latest Forum Posts - December 19, 2014 - 1:24am
I've already set up the donor and now want to start the second node.

rpm -qa | grep -i percona
Percona-XtraDB-Cluster-client-56-5.6.21-25.8.938.el6.x86_64
percona-xtrabackup-2.2.7-5050.el6.x86_64
Percona-XtraDB-Cluster-shared-56-5.6.21-25.8.938.el6.x86_64
Percona-XtraDB-Cluster-server-56-5.6.21-25.8.938.el6.x86_64
percona-release-0.1-3.noarch
Percona-XtraDB-Cluster-galera-3-3.8-1.3390.rhel6.x86_64

sst-method is set to 'xtrabackup-v2'.
With a dedicated binlog directory '/db03/mysql56-01/binlogs', wsrep fails with the following messages:

WSREP_SST: [INFO] Cleaning the existing datadir and innodb-data/log directories (20141219 09:03:29.673)
removed `/db02/mysql56-01/innodata/undo001'
removed `/db02/mysql56-01/innodata/ibdata2'
removed `/db02/mysql56-01/innodata/auto.cnf'
removed `/db02/mysql56-01/innodata/ibdata1'
removed `/db04/mysql56-01/innologs/ib_logfile1'
removed `/db04/mysql56-01/innologs/ib_logfile0'
dirname: extra operand `/db03/mysql56-01/binlogs/mysql56-01-bin'
Try `dirname --help' for more information.
WSREP_SST: [ERROR] Cleanup after exit with status:1 (20141219 09:03:29.770)
2014-12-19 09:03:29 17264 [ERROR] WSREP: Process completed with error: wsrep_sst_xtrabackup-v2 --role 'joiner' --address '192.168.137.171' --auth 'root:mysql' --datadir '/db02/mysql56-01/innodata/' --defaults-file '/db05/mysql56-01/admin/startup/my.cnf' --parent '17264' '' : 1 (Operation not permitted)
2014-12-19 09:03:29 17264 [ERROR] WSREP: Failed to read uuid:seqno from joiner script.
2014-12-19 09:03:29 17264 [ERROR] WSREP: SST failed: 1 (Operation not permitted)
2014-12-19 09:03:29 17264 [ERROR] Aborting

The 'dirname' command is somehow called with the wrong number of arguments, I guess.

The whole err-log is attached.

Kind regards,
Wolfsrudel

Need to understand the log cleanup in mysql

Latest Forum Posts - December 18, 2014 - 10:53pm
Hi,

My my.cnf is

innodb_log_file_size = 100M

And

mysql> SHOW GLOBAL VARIABLES LIKE 'innodb_log_file_size';
+----------------------+-----------+
| Variable_name | Value |
+----------------------+-----------+
| innodb_log_file_size | 104857600 |
+----------------------+-----------+
1 row in set (0.02 sec)

Also /var/lib/mysql

-rw-rw---- 1 mysql mysql 104857600 Nov 6 12:03 ib_logfile0
-rw-rw---- 1 mysql mysql 104857600 May 21 2012 ib_logfile1


I understand the log file has reached its maximum allocated size. I want to know, if I simply delete the files with "rm -rf ib_logfile0" and "rm -rf ib_logfile1", will there be any impact on MySQL functionality? Kindly advise.
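For reference, the InnoDB redo logs are written in a circular fashion, so reaching their configured size is normal; they should not be removed while mysqld is running. A rough sketch of the commonly used resize/recreate procedure on 5.5-era servers (an assumption here, not advice tailored to this specific setup):

-- optional extra-safe step before the restart: force a full flush on shutdown
SET GLOBAL innodb_fast_shutdown = 0;
-- 1. shut down mysqld cleanly
-- 2. move (do not delete) ib_logfile0 and ib_logfile1 out of the datadir
-- 3. adjust innodb_log_file_size in my.cnf if desired
-- 4. start mysqld; it recreates the redo logs at the configured size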

I am very new to MySQL and stuck with one issue: our SOS Berlin JobScheduler is not running due to a MySQL issue. Please help me.

Thanks in Advance!
Bala..

Why does pt-kill hang when connecting to MySQL?

Latest Forum Posts - December 18, 2014 - 6:58pm
I could connect to local MySQL successfully:
gf@li741-164 ~> mysql -uroot -S /run/mysqld/mysqld.sock
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 202
Server version: 5.6.21-70.1 Source distribution
Copyright (c) 2009-2014 Percona LLC and/or its affiliates
Copyright (c) 2000, 2014, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql>

But when using pt-kill, it seems to hang while connecting to MySQL. I waited for 5 minutes, but it still hangs:

gf@li741-164 ~> pt-kill --busy-time 6 --user root --print --socket /run/mysqld/mysqld.sock

Making HAProxy 1.5 replication lag aware in MySQL

Latest MySQL Performance Blog posts - December 18, 2014 - 7:48am

HAProxy is frequently used as a software load balancer in the MySQL world. Peter Boros, in a past post, explained how to set it up with Percona XtraDB Cluster (PXC) so that it only sends queries to available nodes. The same approach can be used in a regular master-slaves setup to spread the read load across multiple slaves. However with MySQL replication, another factor comes into play: replication lag. In this case the approach mentioned for Percona XtraDB Cluster does not work that well as the check we presented only returns ‘up’ or ‘down’. We would like to be able to tune the weight of a replica inside HAProxy depending on its replication lag. This is what we will do in this post using HAProxy 1.5.

Agent checks in HAProxy

HAProxy 1.5 allows us to run an agent check, which is a check that can be added to a regular health check. The benefit of agent checks is that the return value can be not only ‘up’ or ‘down’, but also a weight.

What is an agent? It is simply a program that can be accessed from a TCP connection on a given port. So if we want to run an agent on a MySQL server that will:

  • Mark the server as down in HAProxy if replication is not working
  • Set the weight to 100% if the replication lag is < 10s
  • Set the weight to 50% if the replication lag is >= 10s and < 60s
  • Set the weight to 5% in all other situations

We can use a script like this:

$ less agent.php
<?php
$port       = $argv[1];
$mysql_port = $argv[2];
$mysql      = "/usr/bin/mysql";
$user       = "haproxy";
$password   = "haproxy_pwd";
$query      = "SHOW SLAVE STATUS";

function set_weight($lag) {
    # replication broken (Seconds_Behind_Master is NULL): mark the server down
    if ($lag === 'NULL' || $lag === '') {
        return "down";
    } else if ($lag < 10) {
        return "up 100%";
    } else if ($lag >= 10 && $lag < 60) {
        return "up 50%";
    } else
        return "up 5%";
}

set_time_limit(0);
$socket = stream_socket_server("tcp://127.0.0.1:$port", $errno, $errstr);
if (!$socket) {
    echo "$errstr ($errno)\n";
} else {
    while ($conn = stream_socket_accept($socket, 9999999999999)) {
        $cmd = "$mysql -h127.0.0.1 -u$user -p$password -P$mysql_port -Ee \"$query\" | grep Seconds_Behind_Master | cut -d ':' -f2 | tr -d ' '";
        exec("$cmd", $lag);
        $weight = set_weight($lag[0]);
        unset($lag);
        fputs($conn, $weight);
        fclose($conn);
    }
    fclose($socket);
}
?>

If you want the script to be accessible from port 6789 and connect to a MySQL instance running on port 3306, run:

$ php agent.php 6789 3306

You will also need a dedicated MySQL user:

mysql> GRANT REPLICATION CLIENT ON *.* TO 'haproxy'@'127.0.0.1' IDENTIFIED BY 'haproxy_pwd';

When the agent is started, you can check that it is working properly:

# telnet 127.0.0.1 6789
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
up 100%
Connection closed by foreign host.

Assuming it is run locally on the app server, that 2 replicas are available (192.168.10.2 and 192.168.10.3) and that the application will send all reads on port 3307, you will define a frontend and a backend in your HAProxy configuration like this:

frontend read_only-front
    bind *:3307
    mode tcp
    option tcplog
    log global
    default_backend read_only-back

backend read_only-back
    mode tcp
    balance leastconn
    server slave1 192.168.10.2 weight 100 check agent-check agent-port 6789 inter 1000 rise 1 fall 1 on-marked-down shutdown-sessions
    server slave2 192.168.10.3 weight 100 check agent-check agent-port 6789 inter 1000 rise 1 fall 1 on-marked-down shutdown-sessions

Demo

Now that everything is set up, let’s see how HAProxy can dynamically change the weight of the servers depending on the replication lag.

No lag

# Slave1
$ mysql -Ee "show slave status" | grep Seconds_Behind_Master
Seconds_Behind_Master: 0
# Slave2
$ mysql -Ee "show slave status" | grep Seconds_Behind_Master
Seconds_Behind_Master: 0
# HAProxy
$ echo "show stat" | socat stdio /run/haproxy/admin.sock | cut -d ',' -f1,2,18,19
# pxname,svname,status,weight
read_only-front,FRONTEND,OPEN,
read_only-back,slave1,UP,100
read_only-back,slave2,UP,100
read_only-back,BACKEND,UP,200

Slave1 lagging

# Slave1
$ mysql -Ee "show slave status" | grep Seconds_Behind_Master
Seconds_Behind_Master: 25
# Slave2
$ mysql -Ee "show slave status" | grep Seconds_Behind_Master
Seconds_Behind_Master: 0
# echo "show stat" | socat stdio /run/haproxy/admin.sock | cut -d ',' -f1,2,18,19
# pxname,svname,status,weight
read_only-front,FRONTEND,OPEN,
read_only-back,slave1,UP,50
read_only-back,slave2,UP,100
read_only-back,BACKEND,UP,150

Slave2 down

# Slave1
$ mysql -Ee "show slave status" | grep Seconds_Behind_Master
Seconds_Behind_Master: 0
# Slave2
$ mysql -Ee "show slave status" | grep Seconds_Behind_Master
Seconds_Behind_Master: NULL
# echo "show stat" | socat stdio /run/haproxy/admin.sock | cut -d ',' -f1,2,18,19
# pxname,svname,status,weight
read_only-front,FRONTEND,OPEN,
read_only-back,slave1,UP,100
read_only-back,slave2,DOWN (agent),100
read_only-back,BACKEND,UP,100

Conclusion

Agent checks are a nice addition in HAProxy 1.5. The setup presented above is a bit simplistic though: for instance, if HAProxy fails to connect to the agent, it will not mark the corresponding server as down. It is therefore recommended to keep a regular health check along with the agent check.

Astute readers will also notice that in this configuration, if replication is broken on all nodes, HAProxy will stop sending reads. This may not be the best solution. Possible options are: stop the agent and mark the servers as UP using the stats socket or add the master as a backup server.

And as a final note, you can edit the code of the agent so that replication lag is measured with Percona Toolkit’s pt-heartbeat instead of Seconds_Behind_Master.
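As a sketch, assuming pt-heartbeat --update is writing to its conventional percona.heartbeat table and the row is replicating, the agent's lag calculation could become something like:

-- assumption: percona.heartbeat is the pt-heartbeat table and ts its timestamp column
SELECT TIMESTAMPDIFF(SECOND, MAX(ts), NOW()) AS lag_seconds
FROM percona.heartbeat;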

The post Making HAProxy 1.5 replication lag aware in MySQL appeared first on MySQL Performance Blog.

replication error

Latest Forum Posts - December 17, 2014 - 11:14pm
Hi all.

My replication stopped with error Last_SQL_Errno: 1032 or Last_SQL_Errno: 1062
example error:
Last_SQL_Error: Could not execute Delete_rows event on table cportal.b_search_content_stem; Can't find record in 'b_search_content_stem', Error_code: 1032; handler error HA_ERR_KEY_NOT_FOUND; the event's master log mk-db1-master-bin.000488, end_log_pos 74862564

I found this issue http://www.percona.com/blog/2013/09/...ith-mysql-5-6/
but it is not working for me.
My version is Percona-Server-server-56-5.6.21-rel69.0.el6.x86_64.

my slave cnf http://pastebin.com/wTdcbTwU
my master cnf http://pastebin.com/q2uJAynU

thanks for help

Why does pt-align automatically discard the first line?

Latest Forum Posts - December 17, 2014 - 11:10pm
When I type the following command:

gf@li741-164 ~> mysql -uroot -h 127.0.0.1 -Be "select * from mysql.user limit 2"

the result contains one column-name header line and two host lines:

Host User Password Select_priv Insert_priv Update_priv Delete_priv Create_priv Drop_priv Reload_priv Shutdown_priv Process_priv File_priv Grant_priv References_priv Index_priv Alter_priv Show_db_priv Super_priv Create_tmp_table_priv Lock_tables_priv Execute_priv Repl_slave_priv Repl_client_priv Create_view_priv Show_view_priv Create_routine_priv Alter_routine_priv Create_user_priv Event_priv Trigger_priv Create_tablespace_priv ssl_type ssl_cipher x509_issuer x509_subject max_questions max_updates max_connections max_user_connections plugin authentication_string password_expired
localhost root Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y 0 0 0 0 N
li741-164.members.linode.com root Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y 0 0 0 0 N

But after piping through pt-align:

gf@li741-164 ~> mysql -uroot -h 127.0.0.1 -Be "select * from mysql.user limit 2" | pt-align

the column-name header line is dropped, and only the two host lines are left:

localhost root Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y 0 0 0 0 N
li741-164.members.linode.com root Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y 0 0 0 0 N

Why does this happen?

pt-table-checksum details

Latest Forum Posts - December 17, 2014 - 12:51pm
Hi Team,

We have run pt-table-checksum on a specific server and understand that a database named percona gets created, under which a checksums table is added once we run pt-table-checksum.

There are different fields in the checksum table. Could you please clarify the actual meaning of the below fields:
this_crc,
this_cnt,
master_crc,
master_cnt,
chunk,
chunk_index,
lower_boundary,
upper_boundary.
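For context, these columns are what pt-table-checksum itself compares to find chunks that differ between the master and a replica; a commonly used query, run on the replica against the default percona.checksums table, is:

-- chunks where the replica's checksum or row count differs from the master's
SELECT db, tbl, SUM(this_cnt) AS total_rows, COUNT(*) AS chunks
FROM percona.checksums
WHERE (master_cnt <> this_cnt
    OR master_crc <> this_crc
    OR ISNULL(master_crc) <> ISNULL(this_crc))
GROUP BY db, tbl;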

Best Regards,
Krishna

InnoDB’s multi-versioning handling can be Achilles’ heel

Latest MySQL Performance Blog posts - December 17, 2014 - 8:05am

I believe the InnoDB storage engine architecture is great for a lot of online workloads; however, there are no silver bullets in technology and all design choices have their trade-offs. In this blog post I’m going to talk about one important InnoDB limitation that you should consider.

InnoDB is a multiversion concurrency control (MVCC) storage engine, which means many versions of a single row can exist at the same time. In fact there can be a huge number of such row versions. Depending on the isolation mode you have chosen, InnoDB might have to keep all row versions going back to the earliest active read view, but at the very least it will have to keep all versions going back to the start of the SELECT query which is currently running.

In most cases this is not a big deal – if you have many short transactions happening you will have only a few row versions to deal with. If you just use the system for reporting queries but do not modify data aggressively at the same time you also will not have many row versions. However, if you mix heavy updates with slow reporting queries going at the same time you can get into a lot of trouble.

Consider for example an application with a hot row (something like an actively updated counter) which gets 1,000 updates per second, together with some heavy batch job that takes 1,000 seconds to run. In such a case we will have 1M row versions to deal with.

Let’s now talk about how those old row versions are stored in InnoDB – they are stored in the undo space as essentially a linked list, where each row version points to the previous row version, together with transaction visibility information that helps to decide which version will be visible to this query. Such a design favors short new queries that will typically need to see one of the newer rows, so they do not have to go too far in this linked list. This might not be the case with reporting queries that might need to read rather old row versions which correspond to the time when the query was started, or logical backups that use consistent reads (think mysqldump or mydumper), which often need to access such very old row versions.

So going through the linked list of versions is expensive, but how expensive can it get? A lot depends on whether the UNDO space fits in memory, in which case the list will be traversed efficiently – or it does not, in which case you might be looking at massive disk IO. Keep in mind the undo space is not clustered by PRIMARY KEY, as normal data in InnoDB tables is, so if you’re updating multiple rows at the same time (the typical case) you will be looking at a row-version chain stored across many pages, often as little as one row version per page, requiring either massive IO or a large number of UNDO space pages to be present in the InnoDB buffer pool.
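One way to get a feel for how far purge is falling behind (and thus how long those version chains are growing) is to watch the history list length; a quick sketch, assuming MySQL 5.6's INNODB_METRICS table (the same figure also appears in SHOW ENGINE INNODB STATUS):

-- history list length: undo log units not yet purged
SELECT name, count
FROM information_schema.INNODB_METRICS
WHERE name = 'trx_rseg_history_len';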

Where it can get even worse is an index scan. This is because indexes are structured in InnoDB to include all row versions corresponding to the key value, current and past. This means, for example, the index for KEY=5 will contain pointers to all rows that either have value 5 now or had value 5 some time in the past and have not been purged yet. Now where it can really bite is the following – InnoDB needs to know which of the values stored for the key are visible to the current transaction – and that might mean going through all the long version chains for each of the keys.

This is all theory, so let’s see how we can simulate such workloads and see how bad things really can get in practice.

I created a 1 billion row “sysbench” table which takes some 270GB of space, and I will use a small buffer pool – 6GB. I will run sysbench with 64 threads and a pareto distribution (hot rows) while running a full table scan query concurrently: select avg(k) from sbtest1. Here is the exact sysbench run done after the prepare step:

sysbench --num-threads=64 --report-interval=10 --max-time=0 --max-requests=0 --rand-type=pareto --oltp-table-size=1000000000 --mysql-user root --mysql-password=password --test /usr/share/doc/sysbench/tests/db/oltp.lua run

Here is the EXPLAIN for the “reporting” query, which you would think to be a rather efficient index scan query. With just 4 bytes per value, 1 billion values would be just 4GB (really more because of InnoDB overhead) – not a big deal for modern systems:

mysql> explain select avg(k) from sbtest1 \G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: sbtest1
         type: index
possible_keys: NULL
          key: k_1
      key_len: 4
          ref: NULL
         rows: 953860873
        Extra: Using index
1 row in set (0.00 sec)

2 days have passed and the “reporting” query is still running… furthermore the foreground workload started to look absolutely bizarre:

[207850s] threads: 64, tps: 0.20, reads: 7.40, writes: 0.80, response time: 222481.28ms (95%), errors: 0.70, reconnects: 0.00
[207860s] threads: 64, tps: 0.00, reads: 0.00, writes: 0.00, response time: 0.00ms (95%), errors: 0.00, reconnects: 0.00
[207870s] threads: 64, tps: 0.00, reads: 0.00, writes: 0.00, response time: 0.00ms (95%), errors: 0.00, reconnects: 0.00
[207880s] threads: 64, tps: 0.00, reads: 0.00, writes: 0.00, response time: 0.00ms (95%), errors: 0.00, reconnects: 0.00
[207890s] threads: 64, tps: 0.00, reads: 0.00, writes: 0.00, response time: 0.00ms (95%), errors: 0.00, reconnects: 0.00
[207900s] threads: 64, tps: 2.70, reads: 47.60, writes: 11.60, response time: 268815.49ms (95%), errors: 0.00, reconnects: 0.00
[207910s] threads: 64, tps: 0.00, reads: 0.00, writes: 0.00, response time: 0.00ms (95%), errors: 0.00, reconnects: 0.00
[207920s] threads: 64, tps: 2.30, reads: 31.60, writes: 9.50, response time: 294954.28ms (95%), errors: 0.00, reconnects: 0.00
[207930s] threads: 64, tps: 0.00, reads: 0.00, writes: 0.00, response time: 0.00ms (95%), errors: 0.00, reconnects: 0.00
[207940s] threads: 64, tps: 2.90, reads: 42.00, writes: 12.20, response time: 309332.04ms (95%), errors: 0.00, reconnects: 0.00
[207950s] threads: 64, tps: 0.20, reads: 4.60, writes: 1.00, response time: 318922.41ms (95%), errors: 0.40, reconnects: 0.00
[207960s] threads: 64, tps: 0.20, reads: 1.90, writes: 0.50, response time: 335170.09ms (95%), errors: 0.00, reconnects: 0.00
[207970s] threads: 64, tps: 0.60, reads: 13.20, writes: 2.60, response time: 292842.88ms (95%), errors: 0.60, reconnects: 0.00
[207980s] threads: 64, tps: 2.60, reads: 37.60, writes: 10.20, response time: 351613.43ms (95%), errors: 0.00, reconnects: 0.00
[207990s] threads: 64, tps: 5.60, reads: 78.70, writes: 22.10, response time: 186407.21ms (95%), errors: 0.00, reconnects: 0.00
[208000s] threads: 64, tps: 8.10, reads: 120.20, writes: 32.60, response time: 99179.05ms (95%), errors: 0.00, reconnects: 0.00
[208010s] threads: 64, tps: 19.50, reads: 280.50, writes: 78.90, response time: 27559.69ms (95%), errors: 0.00, reconnects: 0.00
[208020s] threads: 64, tps: 50.70, reads: 691.28, writes: 200.70, response time: 5214.43ms (95%), errors: 0.00, reconnects: 0.00
[208030s] threads: 64, tps: 77.40, reads: 1099.72, writes: 311.31, response time: 2600.66ms (95%), errors: 0.00, reconnects: 0.00
[208040s] threads: 64, tps: 328.20, reads: 4595.40, writes: 1313.40, response time: 911.36ms (95%), errors: 0.00, reconnects: 0.00
[208050s] threads: 64, tps: 538.20, reads: 7531.90, writes: 2152.10, response time: 484.46ms (95%), errors: 0.00, reconnects: 0.00
[208060s] threads: 64, tps: 350.70, reads: 4913.45, writes: 1404.09, response time: 619.42ms (95%), errors: 0.00, reconnects: 0.00
[208070s] threads: 64, tps: 0.00, reads: 0.00, writes: 0.00, response time: 0.00ms (95%), errors: 0.00, reconnects: 0.00
[208080s] threads: 64, tps: 0.00, reads: 0.00, writes: 0.00, response time: 0.00ms (95%), errors: 0.00, reconnects: 0.00
[208090s] threads: 64, tps: 0.00, reads: 0.00, writes: 0.00, response time: 0.00ms (95%), errors: 0.00, reconnects: 0.00
[208100s] threads: 64, tps: 0.00, reads: 0.00, writes: 0.00, response time: 0.00ms (95%), errors: 0.00, reconnects: 0.00
[208110s] threads: 64, tps: 1.60, reads: 24.20, writes: 6.80, response time: 42385.40ms (95%), errors: 0.10, reconnects: 0.00
[208120s] threads: 64, tps: 0.80, reads: 28.20, writes: 3.40, response time: 51381.54ms (95%), errors: 2.80, reconnects: 0.00
[208130s] threads: 64, tps: 0.00, reads: 0.00, writes: 0.00, response time: 0.00ms (95%), errors: 0.00, reconnects: 0.00
[208140s] threads: 64, tps: 0.00, reads: 0.00, writes: 0.00, response time: 0.00ms (95%), errors: 0.00, reconnects: 0.00
[208150s] threads: 64, tps: 0.00, reads: 0.00, writes: 0.00, response time: 0.00ms (95%), errors: 0.00, reconnects: 0.00
[208160s] threads: 64, tps: 0.60, reads: 14.20, writes: 2.40, response time: 93248.04ms (95%), errors: 0.80, reconnects: 0.00

As you can see we have long stretches of time when no queries are completed at all… followed by some spikes of higher performance. This is how it looks on the graph:

Corresponding CPU usage:

This shows that we are not only observing something we would expect given InnoDB's design, but also that there seems to be some severe starvation happening in this case, which we can confirm:

Last time write locked in file /mnt/workspace/percona-server-5.6-debian-binary/label_exp/ubuntu-trusty-64bit/percona-server-5.6-5.6.21-70.0/storage/innobase/row/row0ins.cc line 2357
--Thread 139790809552640 has waited at btr0cur.cc line 3852 for 194.00 seconds the semaphore:
S-lock on RW-latch at 0x7f24d9c01bc0 '&block->lock'
a writer (thread id 139790814770944) has reserved it in mode wait exclusive
number of readers 1, waiters flag 1, lock_word: ffffffffffffffff
Last time read locked in file row0sel.cc line 4125
Last time write locked in file /mnt/workspace/percona-server-5.6-debian-binary/label_exp/ubuntu-trusty-64bit/percona-server-5.6-5.6.21-70.0/storage/innobase/row/row0ins.cc line 2357
--Thread 139790804735744 has waited at row0sel.cc line 3506 for 194.00 seconds the semaphore:
S-lock on RW-latch at 0x7f24d9c01bc0 '&block->lock'
a writer (thread id 139790814770944) has reserved it in mode wait exclusive
number of readers 1, waiters flag 1, lock_word: ffffffffffffffff
Last time read locked in file row0sel.cc line 4125
Last time write locked in file /mnt/workspace/percona-server-5.6-debian-binary/label_exp/ubuntu-trusty-64bit/percona-server-5.6-5.6.21-70.0/storage/innobase/row/row0ins.cc line 2357
--Thread 139790810756864 has waited at row0sel.cc line 4125 for 194.00 seconds the semaphore:
S-lock on RW-latch at 0x7f24d9c01bc0 '&block->lock'
a writer (thread id 139790814770944) has reserved it in mode wait exclusive
number of readers 1, waiters flag 1, lock_word: ffffffffffffffff
Last time read locked in file row0sel.cc line 4125
Last time write locked in file /mnt/workspace/percona-server-5.6-debian-binary/label_exp/ubuntu-trusty-64bit/percona-server-5.6-5.6.21-70.0/storage/innobase/row/row0ins.cc line 2357
--Thread 139790811158272 has waited at btr0cur.cc line 3852 for 194.00 seconds the semaphore:
S-lock on RW-latch at 0x7f24d9c01bc0 '&block->lock'
a writer (thread id 139790814770944) has reserved it in mode wait exclusive

Waiting for a given buffer pool block to become available for more than 3 minutes is a big issue – this lock should never be held for more than a few microseconds.

SHOW PROCESSLIST confirms that even the most basic selects by primary key can get stalled for a long time:

| 5499 | root | localhost | sbtest | Query |     14 | statistics   | SELECT c FROM sbtest1 WHERE id=1 | 0 | 0 |
| 5500 | root | localhost | sbtest | Query |     14 | statistics   | SELECT c FROM sbtest1 WHERE id=1 | 0 | 0 |
| 5501 | root | localhost | sbtest | Query |    185 | statistics   | SELECT c FROM sbtest1 WHERE id=1 | 0 | 0 |
| 5502 | root | localhost | sbtest | Query |     14 | statistics   | SELECT c FROM sbtest1 WHERE id=1 | 0 | 0 |
| 5503 | root | localhost | sbtest | Query |     14 | statistics   | SELECT DISTINCT c FROM sbtest1 WHERE id BETWEEN 1 AND 1+99 ORDER BY c | 0 | 0 |
| 5504 | root | localhost | sbtest | Query |     14 | statistics   | SELECT c FROM sbtest1 WHERE id=1 | 0 | 0 |
| 5505 | root | localhost | sbtest | Query |     14 | updating     | UPDATE sbtest1 SET k=k+1 WHERE id=1 | 0 | 0 |
| 5506 | root | localhost | sbtest | Query |    236 | updating     | DELETE FROM sbtest1 WHERE id=1 | 0 | 0 |
| 5507 | root | localhost | sbtest | Query |     14 | statistics   | SELECT c FROM sbtest1 WHERE id=1 | 0 | 0 |
| 5508 | root | localhost | sbtest | Query |     14 | statistics   | SELECT c FROM sbtest1 WHERE id BETWEEN 1 AND 1+99 | 0 | 0 |
| 5509 | root | localhost | sbtest | Query |     14 | statistics   | SELECT c FROM sbtest1 WHERE id=1 | 0 | 0 |
| 5510 | root | localhost | sbtest | Query |     14 | updating     | UPDATE sbtest1 SET c='80873149502-45132831358-41942500598-17481512835-07042367094-39557981480-593123 | 0 | 0 |
| 5511 | root | localhost | sbtest | Query |    236 | updating     | UPDATE sbtest1 SET k=k+1 WHERE id=18 | 0 | 1 |
| 5512 | root | localhost | sbtest | Query |     14 | statistics   | SELECT c FROM sbtest1 WHERE id=7 | 0 | 0 |
| 6009 | root | localhost | sbtest | Query | 195527 | Sending data | select avg(k) from sbtest1 | 0 | 0 |

How do I know it is an UNDO-space-related issue in this case? Because it ends up taking the majority of the buffer pool:

mysql> select page_type,count(*) from INNODB_BUFFER_PAGE group by page_type;
+-------------------+----------+
| page_type         | count(*) |
+-------------------+----------+
| EXTENT_DESCRIPTOR |        1 |
| FILE_SPACE_HEADER |        1 |
| IBUF_BITMAP       |      559 |
| IBUF_INDEX        |      855 |
| INDEX             |     2186 |
| INODE             |        1 |
| SYSTEM            |      128 |
| UNDO_LOG          |   382969 |
| UNKNOWN           |     6508 |
+-------------------+----------+
9 rows in set (1.04 sec)

And it does so in a very crazy way – when there is almost no work being done, the UNDO_LOG content of the buffer pool grows very rapidly, while when we’re getting some work done the INDEX-type pages take a lot more space. To me this seems like, as the index scan progresses, it touches some hot rows and some not-so-hot ones, which contain fewer row versions and so do not put much pressure on the undo space.

Take Away: Now you might argue that this given workload and situation is rather artificial and rather narrow. It may well be. My main point here is: what if you’re looking at just part of your workload, such as your main short application queries, and not taking reporting or backups into account “because their performance is not important”? In that case you might be in for a big surprise. Those background activities might be taking much longer than you would expect, and in addition, they might have a much more severe impact on your main application workload, as in the case above.

The post InnoDB’s multi-versioning handling can be Achilles’ heel appeared first on MySQL Performance Blog.

Replacing a MySQL server in a non-standard location

Latest Forum Posts - December 17, 2014 - 7:17am
Good afternoon,

Is it possible to install Percona in a non-standard location? In this instance /opt/bitnami/mysql/bin?

Thank you for your help.

Query Optimization and Tuning

Latest Forum Posts - December 16, 2014 - 11:01am
Can anyone help me optimize this query?
SELECT CONCAT(CCD.CONTACT_FIRST_NAME, ' ', CCD.CONTACT_LAST_NAME) AS NAME,
       A.EMAIL_IDS,
       CCD.UNSUBSCRIBE,
       IFNULL(CSL.LOG_DATE, '') AS LOG_DATE,
       CSL.IP_ADDRESS,
       IF(CSL.BROWSER IS NULL, '', CSL.BROWSER) AS BROWSER,
       (CASE WHEN (UNSUBSCRIBE = 0 OR UNSUBSCRIBE IS NULL) THEN 'Opted In'
             ELSE (CASE WHEN (UNSUBSCRIBE = 1) THEN 'Opted Out'
                        ELSE (CASE WHEN ((UNSUBSCRIBE = 2 OR UNSUBSCRIBE = 3)
                                          AND CCD.CONTACT_ID NOT IN (SELECT CONTACT_ID FROM CM_SUBSCRIPTION_MAIL_DATA WHERE IS_MAIL_SENT = 'Y'))
                                   THEN 'Opt-In Request not Sent'
                                   ELSE 'Opt-In Pending' END) END) END) AS CURR_SUB,
       (CASE WHEN (SUBSCRIBE_FROM = 0) THEN 'Opted In'
             ELSE (CASE WHEN (SUBSCRIBE_FROM = 1) THEN 'Opted Out'
                        ELSE (CASE WHEN (SUBSCRIBE_FROM = 2 OR SUBSCRIBE_FROM = 3) THEN 'Opt-In Pending'
                                   ELSE '' END) END) END) AS SUB_FROM,
       SUBSCRIBE_FROM,
       (CASE WHEN (SUBSCRIBE_TO = 0) THEN 'Opted In'
             ELSE (CASE WHEN (SUBSCRIBE_TO = 1) THEN 'Opted Out'
                        ELSE (CASE WHEN (SUBSCRIBE_TO = 2 OR SUBSCRIBE_TO = 3) THEN 'Opt-In Pending'
                                   ELSE '' END) END) END) AS SUB_TO,
       SUBSCRIBE_TO
FROM CM_CONTACT_DETAILS CCD
LEFT JOIN ADDRESS A ON CCD.CONTACT_ID = A.FOREIGN_ID
LEFT JOIN CM_SUBSCRIPTION_LOGS CSL ON CSL.CONTACT_ID = CCD.CONTACT_ID
WHERE 1 = 1
GROUP BY CSL.LOG_DATE, CCD.CONTACT_ID
ORDER BY NAME DESC, LOG_DATE DESC
LIMIT 0, 20
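As a rough first step only (index names below are invented for illustration), EXPLAIN would show whether the join and subquery columns are indexed; if they are not, indexes along these lines are usually where tuning starts:

-- hypothetical indexes on the join/filter columns used by the query
ALTER TABLE ADDRESS ADD INDEX idx_address_foreign_id (FOREIGN_ID);
ALTER TABLE CM_SUBSCRIPTION_LOGS ADD INDEX idx_csl_contact_id (CONTACT_ID);
ALTER TABLE CM_SUBSCRIPTION_MAIL_DATA ADD INDEX idx_mail_sent_contact (IS_MAIL_SENT, CONTACT_ID);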

OpenStack Live tutorials & sessions to bring OpenStack users up to speed

Latest MySQL Performance Blog posts - December 16, 2014 - 9:28am

I attended the OpenStack Paris summit last month (Percona had a booth there). It was my first opportunity to meet face-to-face with this thriving community of developers and users. I’m proud that Percona is part of this open source family and look forward to reconnecting with many of the developers and users I met in Paris – as well as meeting new faces – at OpenStack Live in Silicon Valley April 13-14.

OpenStack summits, generally held twice a year, are the place where (for the most part) developers meet and design “in the open,” as the OpenStack organization says. OpenStack Live 2015, held in parallel with the annual Percona Live MySQL Conference and Expo, will be a unique opportunity for users and enthusiasts to learn from leading OpenStack experts in the field about top cloud strategies, improving overall cloud performance, and operational best practices for managing and optimizing OpenStack and its MySQL database core.

OpenStack Live will also provide some serious classroom-style learning. Percona announced the OpenStack Live tutorial sessions a couple of days ago. Most sessions are three hours long and, because they really are “hands-on,” require that you bring your laptop – and a power cord (not to be confused with a “power chord,” though those are also welcome).

Let’s take a closer look at the OpenStack Live tutorial sessions.

“Barbican: Securing Your Secrets.” Join Rackspace gurus Douglas Mendizábal, Chelsea Winfree and Steve Heyman on a tour through the magical world of Barbican (yes, they are dedicated members of the Barbican project).

Don’t be intimidated if you don’t have any previous experience with Barbican (and if you’ve never heard of it, all the more reason to attend!). A basic understanding of security components (such as keys and certificates) and a basic understanding of ReST is helpful, but not required.

By the end of the class you will know:
1)   Importance of secret storage
2)   How to store & retrieve secrets with Barbican
3)   How to submit an order with Barbican
4)   How to create a container
5)   Use cases for Barbican / Examples
6)   The future of Barbican – ordering SSL certs

Deploying, Configuring and Operating OpenStack Trove.” As the title suggests, these three hours focus squarely on Trove. The tutorial – led by Tesora founder & CTO Amrith Kumar, along with Doug Shelley, the company’s vice president of product development – will begin with a quick overview of OpenStack and the various services.

If you attend this tutorial you’ll actually deploy your own OpenStack environment – and create and manage a Nova (compute) instance using a command line and a graphical user interface (Horizon). And the fun continues! You’ll then install and configure Trove, and create and manage a single MySQL instance. Finally, pupils will create and operate a simple replicated MySQL instance pair and ensure that data is being properly replicated from master to slave.

“Essential DevStack.” DevStack is an opinionated script to quickly create an OpenStack development environment. It can also be used to demonstrate starting/running OpenStack services and provide examples of using them from a command line. The power of DevStack lies in a few small tricks that, once understood, can hugely improve contribution effectiveness, quality and turnaround time. This three-hour tutorial will be led by Red Hat senior software engineer Swapnil Kulkarni.

OpenStack Networking Introduction,” with PLUMgrid’s Valentina Alaria and Brendan Howes. Buckle your seat belts! Designed for IT professionals looking to expand their OpenStack “networking” (no, not the LinkedIn sort of networking) knowledge, OpenStack Networking Fundamentals will be a comprehensive and fast-paced course.

This half-day training provides an overview of OpenStack, its components and then dives deep into OpenStack Networking – the features and plugin model and its role in building an OpenStack Cloud. The training is concluded with a hands-on lab to bring all the concepts together.

OpenStack Networking (Neutron) Introduction [1 hour]
– Goal of Neutron
– Architecture of Neutron
– Plugin Architecture
– Use cases for Neutron
– What’s new in Juno & what’s planned for Kilo

OpenStack Networking (Neutron) Advanced [1 hour]
– Interaction with other OpenStack components (Compute & Storage)
– Designing Neutron for HA
– Installing Neutron
– Troubleshooting Neutron

Hands-on Lab [1 hour]
– Creation of tenant networks
– Configuration of external connectivity
– Advanced feature configuration

Percona’s director of conferences, Kortney Runyan, offered a sneak peek at the OpenStack sessions last week. Attendees of the Percona Live MySQL Conference and Expo 2015 (April 13-16, 2015) with full-pass access are also welcome to attend OpenStack Live sessions. The two conferences are running in parallel, which is very exciting since there will be crossover opportunities between them.

I hope to see you next April! And be sure to take advantage of Early Bird pricing for OpenStack Live (register here).
And if you are an organizer of an OpenStack (or MySQL) Meetup and need some financial help, Percona is happy to chip in as a sponsor. Just let me know and I’ll work with you to set that up! You can drop me a note in the comments and I’ll contact you offline.

The post OpenStack Live tutorials & sessions to bring OpenStack users up to speed appeared first on MySQL Performance Blog.

Which vision is to merge art and poetr

Latest Forum Posts - December 16, 2014 - 9:15am
http://www.invisionpower.ro/topic/49...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/

Backup fails with InnoDB: Operating system error number 0

Latest Forum Posts - December 16, 2014 - 3:19am
Hi,

I was just trying to take a hot backup with innobackupex, but I get a read error while innobackupex is trying to scan the logfiles. Strangely, though, the failure is an Error 0, which is reported as a success:

xtrabackup_55 version 1.6.3 for Percona Server 5.5.9 Linux (x86_64) (revision id: undefined)
xtrabackup: uses posix_fadvise().
xtrabackup: cd to /var/lib/mysql
xtrabackup: Target instance is assumed as followings.
xtrabackup: innodb_data_home_dir = ./
xtrabackup: innodb_data_file_path = ibdata1:10M:autoextend
xtrabackup: innodb_log_group_home_dir = ./
xtrabackup: innodb_log_files_in_group = 2
xtrabackup: innodb_log_file_size = 5242880
141216 10:59:08 InnoDB: Using Linux native AIO
141216 10:59:08 InnoDB: Warning: allocated tablespace 53, old maximum was 9
>> log scanned up to (1221137553871)
[01] Copying ./ibdata1 to /home/nick/2014-12-16_10-59-04/ibdata1
>> log scanned up to (1221137615178)
>> log scanned up to (1221137624351)
>> log scanned up to (1221137650292)
InnoDB: Error: tried to read 1048576 bytes at offset 0 534773760.
InnoDB: Was only able to read 40960.
141216 11:00:15 InnoDB: Operating system error number 0 in a file operation.
InnoDB: Error number 0 means 'Success'.
InnoDB: Some operating system error numbers are described at
InnoDB: http://dev.mysql.com/doc/refman/5.5/en/operating-system-error-codes.html
InnoDB: File operation call: 'read'.
InnoDB: Cannot continue operation.
innobackupex: Error: ibbackup child process has died at /usr/bin/innobackupex line 336.

Can anyone shed any light on what causes this? MySQL is using this InnoDB instance normally without any apparent errors, and starts up and shuts down normally when required, so I was assuming that the integrity of the data files + logs was OK, but perhaps not? Is this a known issue with xtrabackup? Is there a workaround or some way to get more information on this issue?

Thanks,

Nick

Recovering / Restoring a single database with Xtrabackup

Latest Forum Posts - December 15, 2014 - 11:23pm
Hi ,

Is it possible to recover / restore a single database using xtrabackup?
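For illustration, the restore side of a per-table approach, assuming innodb_file_per_table is enabled and the backup was prepared with --export (database and table names below are placeholders):

-- on the target server, for each table to restore:
ALTER TABLE mydb.mytable DISCARD TABLESPACE;
-- copy mytable.ibd (plus its .exp or .cfg file) from the prepared backup into the database directory, then:
ALTER TABLE mydb.mytable IMPORT TABLESPACE;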

Thanks & Regards
Raghupradeep
