
I get 403 Forbidden if I try to download Percona Monitoring Plugins

Latest Forum Posts - January 13, 2016 - 3:24pm
I use this link: https://www.percona.com/downloads/pe...s-1.1.6.tar.gz and get a "403 Forbidden" error. I'd like to test these plugins and compare them to others.

Play the Percona Powerball Pool!!

Latest MySQL Performance Blog posts - January 12, 2016 - 10:25am
The Only Sure Thing is Percona Powerball Pool

Everyone is talking about the upcoming Powerball lottery draw. 1.4 BILLION dollars!! And growing! Millions of people are imagining what they would do IF they win. It’s the stuff of dreams.

That is literally true. The chances of winning the Powerball Lottery are 1 in 292.2 million. Roughly speaking, picking the right combination of numbers is like flipping a coin and getting heads 28 times in a row (2^28 = 268,435,456, in the same ballpark as 292.2 million). You’re more likely to get struck by lightning (twice) or bitten by a shark.

Sorry.

You know what is a sure thing? Percona’s ability to optimize your database performance and increase application performance. Our Support and Percona Care consultants will give you a 1 in 1 chance of making your database run better, solving your data performance issues, and improving the performance of your applications.

However, in the spirit of the moment, Percona has bought 10 sets of Powerball numbers and posted them on Facebook, Twitter and LinkedIn. It’s the Percona Powerball Pool! Like any of the posts and share it, and you qualify for one (1) equal share of the winnings! Use #perconapowerball when you share.

Here are the numbers:

We at Percona can’t promise a huge Powerball windfall (in fact, as data experts we’re pretty sure you won’t win!), but we can promise that our consultants are experts at helping you with your full LAMP stack environments. Anything affecting your data performance – on that we can guarantee you a win!

Full rules are here.

Getting lots of deadlocks on multi-master cluster

Latest Forum Posts - January 12, 2016 - 9:39am
Hi all

ENVIRONMENT

A single 7-node Percona cluster

Datacenter1: (In use applications)
Percona node1
Percona node2
Percona node3
Percona node4

Datacenter2: (Disaster recovery only)
Percona node5
Percona node6
Percona node7


LOTS OF DEADLOCKS

We have a load-balanced 4-node web application that connects to the Percona nodes as shown below.
We are getting lots of deadlock errors: out of 500 requests, only about 80-90 succeed.

webapp1 -> Percona node1
webapp2 -> Percona node2
webapp3 -> Percona node3
webapp4 -> Percona node4


VERY FEW DEADLOCK ERRORS

If we point all webapps to a single Percona node as below, we get very few deadlocks.

webapp1 -> Percona node1
webapp2 -> Percona node1
webapp3 -> Percona node1
webapp4 -> Percona node1


Can anyone please advise how we can fix this issue? We would like to use a true multi-master Percona cluster.

Thanks
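
One common mitigation for such conflicts, as a hedged sketch assuming Percona XtraDB Cluster (where writes to the same rows on different nodes fail certification and surface to clients as deadlock errors): let each node transparently retry conflicting autocommit statements, and retry failed transactions in the application.

-- Retry autocommit statements up to 4 times after a certification conflict.
-- This helps only autocommit statements; explicit multi-statement
-- transactions still need application-side retry on deadlock error 1213.
SET GLOBAL wsrep_retry_autocommit = 4;

Pointing all writes at a single node (as in the second layout) avoids cross-node certification conflicts entirely, which is why it shows far fewer deadlocks.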




import table partition procedure

Latest Forum Posts - January 12, 2016 - 7:52am
Hi,
following the link below:
https://www.percona.com/doc/percona-...partition.html

If we want to attach the partition P2 to the table T1,
the procedure for importing a partition previously backed up with xtrabackup is:

1- Create a new table T2.
2- Attach the partition P2 to the T2 table.
3- Swap the partitions between T2 and T1.

For what reason can't we use the following?

ALTER TABLE t1 IMPORT PARTITION p2;

Thanks in advance, regards
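
For reference, a hedged sketch of the documented three-step flow on MySQL 5.6, which has no per-partition IMPORT syntax (hence the intermediate table; the names t1, t2 and p2 are from the post):

-- 1. Create a non-partitioned table with the same structure as t1.
CREATE TABLE t2 LIKE t1;
ALTER TABLE t2 REMOVE PARTITIONING;

-- 2. Attach the partition data restored by xtrabackup to t2.
ALTER TABLE t2 DISCARD TABLESPACE;
-- (copy the restored partition's .ibd file into the datadir as t2.ibd)
ALTER TABLE t2 IMPORT TABLESPACE;

-- 3. Swap the partition between t1 and t2.
ALTER TABLE t1 EXCHANGE PARTITION p2 WITH TABLE t2;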

max_allowed_packet reset

Latest Forum Posts - January 12, 2016 - 6:37am
I am having some trouble with my global max_allowed_packet being reset. The variable is set in the my.cnf file, but some process within the server, Galera, or XtraBackup periodically resets the value. I have not been able to determine from the logs where this is coming from. Anyone have a clue?
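
A minimal sanity check, assuming nothing about the culprit, is to compare the running values against my.cnf right after a reset is noticed:

-- Current global value (compare with the my.cnf setting):
SHOW GLOBAL VARIABLES LIKE 'max_allowed_packet';
-- Global and session values side by side:
SELECT @@global.max_allowed_packet, @@session.max_allowed_packet;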

Percona Server 5.6.28-76.1 is now available

Latest MySQL Performance Blog posts - January 12, 2016 - 6:30am

Percona is glad to announce the release of Percona Server 5.6.28-76.1 on January 12, 2016. Download the latest version from the Percona web site or from the Percona Software Repositories.

Based on MySQL 5.6.28, including all the bug fixes in it, Percona Server 5.6.28-76.1 is the current GA release in the Percona Server 5.6 series. Percona Server is open-source and free – and this is the latest release of our enhanced, drop-in replacement for MySQL. Complete details of this release can be found in the 5.6.28-76.1 milestone on Launchpad.

Bugs Fixed:

  • Clustering secondary index could not be created on a partitioned TokuDB table. Bug fixed #1527730 (DB-720).
  • When enabled, super-read-only option could break statement-based replication while executing a multi-table update statement on a slave. Bug fixed #1441259.
  • Running OPTIMIZE TABLE or ALTER TABLE without the ENGINE clause would silently change the table engine if the enforce_storage_engine variable was active. This could also result in system tables being changed to incompatible storage engines, breaking server operation. Bug fixed #1488055.
  • Setting the innodb_sched_priority_purge variable (available only in debug builds) while purge threads were stopped would cause a server crash. Bug fixed #1368552.
  • Small buffer pool size could cause XtraDB buffer flush thread to spin at 100% CPU. Bug fixed #1433432.
  • Enabling TokuDB with the ps_tokudb_admin script inside a Docker container would cause an error due to insufficient privileges, even when running as root. Because the check is impossible inside a Docker container, this error has been changed to a warning. Bug fixed #1520890.
  • InnoDB status would print negative values for spin rounds per wait if the wait count, although accounted as a signed 64-bit integer, did not fit into a signed 32-bit integer. Bug fixed #1527160 (upstream #79703).

Other bugs fixed: #1384595 (upstream #74579), #1384658 (upstream #74619), #1471141 (upstream #77705), #1179451, #1524763 and #1530102.

Release notes for Percona Server 5.6.28-76.1 are available in the online documentation. Please report any bugs on the Launchpad bug tracker.

Percona Server 5.5.47-37.7 is now available

Latest MySQL Performance Blog posts - January 12, 2016 - 6:10am


Percona is glad to announce the release of Percona Server 5.5.47-37.7 on January 12, 2016. Based on MySQL 5.5.47, including all the bug fixes in it, Percona Server 5.5.47-37.7 is now the current stable release in the 5.5 series.

Percona Server is open-source and free. Details of the release can be found in the 5.5.47-37.7 milestone on Launchpad. Downloads are available here and from the Percona Software Repositories.

Bugs Fixed:

  • Running OPTIMIZE TABLE or ALTER TABLE without the ENGINE clause would silently change the table engine if the enforce_storage_engine variable was active. This could also result in system tables being changed to incompatible storage engines, breaking server operation. Bug fixed #1488055.

Other bugs fixed: #1179451, #1524763, and #1530102.

Release notes for Percona Server 5.5.47-37.7 are available in our online documentation. Bugs can be reported on the Launchpad bug tracker.

Update failed: garbd pre-removal script on Ubuntu

Latest Forum Posts - January 12, 2016 - 4:05am
Hi,
I recently updated my whole Ubuntu 14.04 System. There were also some updates for Percona.

But the update of garbd failed because of these messages:

Preparing to unpack .../percona-xtradb-cluster-garbd-3.x_3.13-1.trusty_amd64.deb ...
invoke-rc.d: initscript garbd, action "stop" failed.
dpkg: warning: subprocess old pre-removal script returned error exit status 3
dpkg: trying script from the new package instead ...
invoke-rc.d: initscript garbd, action "stop" failed.
dpkg: error processing archive /var/cache/apt/archives/percona-xtradb-cluster-garbd-3.x_3.13-1.trusty_amd64.deb (--unpack):
subprocess new pre-removal script returned error exit status 3
* Garbd config /etc/default/garbd is not configured yet
Errors were encountered while processing:
/var/cache/apt/archives/percona-xtradb-cluster-garbd-3.x_3.13-1.trusty_amd64.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)

I don't use garbd on that server (so it's not running and not configured in any way). I have installed "percona-xtradb-cluster-full-56".

Any ideas how to resolve this? This situation is blocking my whole package manager.
thx.
rgds.
Michael
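
A hedged workaround sketch, assuming the only failure is the pre-removal script's "stop" action returning a non-zero status for an unconfigured garbd (the path below is the standard dpkg location; inspect the script before editing it):

# See why the pre-removal script fails:
less /var/lib/dpkg/info/percona-xtradb-cluster-garbd-3.x.prerm
# If it only fails on the "invoke-rc.d garbd stop" step, make that step
# non-fatal (e.g. append "|| true" to the stop invocation), then let apt
# finish the upgrade:
sudo apt-get -f install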

userstats (INDEX_STATISTICS, TABLE_STATISTICS) and TokuDB

Latest Forum Posts - January 12, 2016 - 2:57am
Is there a reason why TokuDB tables don't appear in INFORMATION_SCHEMA.INDEX_STATISTICS and INFORMATION_SCHEMA.TABLE_STATISTICS, while MyISAM and InnoDB tables do?

Is it possible to enable that somehow?
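
For context, a minimal sketch of how these tables are populated in Percona Server, assuming the userstat variable (whether TokuDB feeds these counters is exactly the open question here):

-- Statistics are collected only while userstat is ON:
SET GLOBAL userstat = ON;
-- After some traffic, see which tables are being counted:
SELECT TABLE_SCHEMA, TABLE_NAME, ROWS_READ
FROM INFORMATION_SCHEMA.TABLE_STATISTICS
ORDER BY ROWS_READ DESC
LIMIT 10;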

Percona Monitoring Plugins 1.1.6 release

Latest MySQL Performance Blog posts - January 11, 2016 - 10:01pm

Percona is glad to announce the release of Percona Monitoring Plugins 1.1.6.

Changelog:

  • Added new RDS instance classes to RDS scripts.
  • Added boto profile support to RDS scripts.
  • Added AWS region support and ability to specify all regions to RDS scripts.
  • Added ability to set AWS region and boto profile on data source level in Cacti.
  • Added period, average time and debug options to pmp-check-aws-rds.py.
  • Added ability to override Nginx server status URL path on data source level in Cacti.
  • Made Memcached and Redis host configurable for Cacti script.
  • Added the ability to look up the master’s server_id when using pt-heartbeat with pmp-check-mysql-replication-delay.
  • Changed how memory stats are collected by the Cacti script and pmp-check-unix-memory: /proc/meminfo is now parsed instead of running the free command. This also fixes pmp-check-unix-memory for EL7.
  • Set the default MySQL connect timeout to 5s for the Cacti script. Can be overridden in the config.
  • Fixed the InnoDB transactions count on the Cacti graph for MySQL 5.6 and higher.
  • Fixed the --login-path option in Nagios scripts when using it along with other credential options.

Thanks to contributors: David Andruczyk, Denis Baklikov, Mischa ter Smitten, Mitch Hagstrand.

The project is now fully hosted on GitHub, including issue tracking; the Launchpad project is discontinued.

A new tarball is available from the downloads area or in packages from our software repositories. The plugins are fully supported for customers with a Percona Support contract, and free installation services are provided as part of some contracts. You can find links to the documentation, forums and more at the project homepage.

About Percona Monitoring Plugins
Percona Monitoring Plugins are monitoring and graphing components designed to integrate seamlessly with widely deployed solutions such as Nagios, Cacti and Zabbix.

Bare-metal servers for button-push database-as-a-service

Latest MySQL Performance Blog posts - January 11, 2016 - 9:50am

Enterprises demand flexibility, scalability and efficiency to keep up with the demands of their customers while maintaining the bottom line. To solve this, they're turning to cloud infrastructure services to both cut costs and take advantage of cutting-edge technology innovations. Clouds have brought simplicity and ease of use to infrastructure management. However, with this ease of use often comes some sacrifice: namely, performance.

Performance degradation often stems from the introduction of virtualization and a hypervisor layer. While the hypervisor enables the flexibility and management capabilities needed to orchestrate multiple virtual machines on a single box, it also creates additional processing overhead.

Regardless, cloud servers also have huge advantages: they deploy at lightning speed and enable hassle-free private networking without the need for a private VLAN from the datacenter. They also give the customer near-instantaneous scalability without the burden of risky capital expenditures.

Bare-metal servers are one solution to this trade-off. A bare-metal server is all about plain hardware: it is a single-tenant physical server completely dedicated to a single data-intensive workload, and it prioritizes performance and reliability. A bare-metal server enables cloud services that eliminate the overhead of virtualization while retaining flexibility, scalability and efficiency.

On certain CPU-bound workloads, bare metal servers can outperform a cloud server of the same configuration by four times. Database management systems, being very sensitive to both CPU performance and IO speed, can obviously benefit from access to a bare metal environment.

Combine a bare metal server accessible via a cloud service with a high performance MySQL solution and you get all benefits of the cloud without sacrificing performance. This is an ideal solution for startups, side projects or even production applications.

In fact, this is just what we’ve done through a partnership between Percona and Servers.com: you can automatically provision Percona Server for MySQL on one of their bare-metal servers. You can learn more about this service here.

MongoDB revs you up: What storage engine is right for you? (Part 2)

Latest MySQL Performance Blog posts - January 11, 2016 - 9:48am
Differentiating Between MongoDB Storage Engines: WiredTiger

In our last post, we discussed what a storage engine is, and how you can determine the characteristics of one versus the other. From that post:

“A database storage engine is the underlying software that a DBMS uses to create, read, update and delete data from a database. The storage engine should be thought of as a “bolt on” to the database (server daemon), which controls the database’s interaction with memory and storage subsystems.”

Check out the full post here.

Generally speaking, it’s important to understand what type of work environment the database is going to interact with, and to select a storage engine that is tailored to that environment.

The last post looked at MMAPv1, the original default engine for MongoDB (through release 3.0). This post will examine the new default MongoDB engine: WiredTiger.

WiredTiger

Find it in: MongoDB or Percona builds

MongoDB, Inc. introduced WiredTiger, which provides document-level concurrency control for write operations, in MongoDB v3.0. As a result, multiple clients can now modify different documents of a collection at the same time. WiredTiger itself supports both B-trees and LSM-trees as data structures, but the MongoDB version of the engine currently implements only B-trees.

WiredTiger has a few interesting features, most notably compression, document-level locking, and index prefix compression. B-trees, due to their rigidity in disk interaction and chattiness with storage, are not typically known for their performance when used with compression. However, WiredTiger has done an excellent job of maintaining good performance with compression and gives a decent performance/compression ratio with the “snappy” compression algorithm. Be that as it may, if deeper compression is necessary, you may want to evaluate another storage engine. Index prefix compression is a unique feature that should improve the usefulness of the cache by decreasing the size of indexes in memory (especially very repetitive indexes).
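
As an illustration of the compression options discussed above, a hedged mongod.conf sketch (option names as in MongoDB 3.0's YAML configuration; the path is an example):

storage:
  dbPath: /var/lib/mongo
  engine: wiredTiger
  wiredTiger:
    collectionConfig:
      # snappy gives a decent performance/compression ratio at low CPU cost;
      # zlib compresses deeper but costs more CPU.
      blockCompressor: snappy
    indexConfig:
      # Index prefix compression shrinks indexes in memory.
      prefixCompression: true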

WiredTiger’s ideal use cases include data that is likely to stay within a few multiples of the cache size. One can also expect good performance from TTL-like workloads, especially when the data stays within the limit previously mentioned.

Conclusion

Most people don’t know that they have a choice when it comes to storage engines, and that the choice should be based on what the database workload will look like. Percona’s Vadim Tkachenko performed an excellent benchmark test comparing the performance of RocksDB, PerconaFT and WiredTiger to help differentiate between these engines.

In the next post, we’ll take a closer look at Percona’s MongoDB storage engine: PerconaFT.


Poor sysbench performance - 3 node pxc versus single node community version

Latest Forum Posts - January 11, 2016 - 9:39am
I'm trying to benchmark the performance of a 3-node PXC cluster. The sysbench result I'm getting on the 3-node PXC is much slower than the sysbench result on a single-node community-edition server; the transaction rate on the PXC is about half the rate on the single node. I expected the performance of the 3-node cluster to be better than the single node, so I suspect that I have configured the cluster and/or the test incorrectly. I've tried several different configurations, but still get similar results.

Are there any obvious problems with my setup?
Is there a guide to running sysbench on pxc?


Sysbench Results (tps):

Threads    Community-56    PXC-3node
1          178.8           122.41
8          1323.24         891.09
128        2166.06         595.68
1024       1427.13         563.8
2048       579.6           528.31
4096       42.02           304.98

3 Physical Hosts:
24-core Xeon
256 GB RAM
3TB disk

pxc version: 5.6.24-72.2-56

pxc config:
[mysqld]
datadir=/clusterdb/mysql/data
user=mysql
wsrep_provider=/usr/lib64/libgalera_smm.so
wsrep_cluster_address=gcomm://192.168.12.65,192.168.12.66,192.168.12.67
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
wsrep_node_address=192.168.12.65
wsrep_sst_method=xtrabackup-v2
wsrep_cluster_name=my_centos_cluster
wsrep_sst_auth="xxxx:xxxxx"
log-error=/clusterdb/mysql/var/log/mysqld.log

max_connections = 6000
max_prepared_stmt_count = 65536
innodb_log_file_size = 1500M

community version 5.6.26 config:
[mysqld]
user = mysql
bind-address = 0.0.0.0
port = 50000
socket = /clusterdb/mysql/56/sbtest/var/run/mysql.sock
pid-file = /clusterdb/mysql/56/sbtest/var/run/mysql.pid
datadir = /clusterdb/mysql/56/sbtest/data
tmpdir = /clusterdb/mysql/56/sbtest/tmp

log-bin = /clusterdb/mysql/56/sbtest/var/log/binlog/mysql-bin.log
expire_logs_days = 5
max_binlog_size = 100M

general-log = 0
general_log_file = /clusterdb/mysql/56/sbtest/var/log/mysql.log

log-error = /clusterdb/mysql/56/sbtest/var/log/error.log

slow-query-log = 1
slow_query_log_file = /clusterdb/mysql/56/sbtest/var/log/slow.log

relay-log = /clusterdb/mysql/56/sbtest/var/log/relaylog/mysqld-relay-bin
relay-log-index = /clusterdb/mysql/56/sbtest/var/log/relaylog/mysqld-relay-bin.index
server-id = 1

innodb_fast_shutdown = 0

max_connections = 6000
open_files_limit = 32768
max_prepared_stmt_count = 65536
table_open_cache = 8000


sysbench version: 0.5

sysbench command:
sysbench --test=oltp \
--mysql-host=mysql-cluster1,mysql-cluster2,mysql-cluster3 \
--mysql-user=xxxx --mysql-password=xxxx \
--mysql-table-engine=InnoDB \
--mysql-engine-trx=yes \
--oltp-table-size=200000000 \
--max-time=60 \
--max-requests=0 \
--num-threads=$N \
--oltp-auto-inc=off \
--oltp-test-mode=complex \
--db-driver=mysql \
run

Can't start mysql or resync SST after last update

Latest Forum Posts - January 11, 2016 - 6:31am
Hi Guys,

We just updated our mysql cluster from 5.6.26-25.12-1.precise to 5.6.27-25.13-1.precise

Packages involved:

percona-xtradb-cluster-client-5.6 (5.6.26-25.12-1.precise => 5.6.27-25.13-1.precise)
percona-xtradb-cluster-common-5.6 (5.6.26-25.12-1.precise => 5.6.27-25.13-1.precise)
percona-xtradb-cluster-galera-3.x (3.12.2-1.precise => 3.13-1.precise)
percona-xtradb-cluster-server-5.6 (5.6.26-25.12-1.precise => 5.6.27-25.13-1.precise)

Since the update, we have been able to start MySQL on 2 nodes, but the third one won't start or sync.

/etc/init.d/mysql start
* Starting MySQL (Percona XtraDB Cluster) database server mysqld

160111 15:15:03 mysqld_safe Starting mysqld daemon with databases from /srv/mysql/data
160111 15:15:03 mysqld_safe WSREP: Running position recovery with --log_error='/srv/mysql/data/wsrep_recovery.I1fypn' --pid-file='/srv/mysql/data/server2-recover.pid'
2016-01-11 15:15:05 0 [Note] /usr/sbin/mysqld (mysqld 5.6.27-76.0-56) starting as process 5251 ...
160111 15:15:48 mysqld_safe WSREP: Recovered position 51c9a3db-aeaa-11e3-a5e6-bffb1faf3524:1305553
Log of wsrep recovery (--wsrep-recover):
2016-01-11 15:15:05 5251 [Note] Plugin 'FEDERATED' is disabled.
InnoDB: Warning: innodb_log_block_size has been changed from default value 512. (###EXPERIMENTAL### operation)
InnoDB: The log block size is set to 4096.
2016-01-11 15:15:05 5251 [Note] InnoDB: Using atomics to ref count buffer pool pages
2016-01-11 15:15:05 5251 [Note] InnoDB: The InnoDB memory heap is disabled
2016-01-11 15:15:05 5251 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2016-01-11 15:15:05 5251 [Note] InnoDB: Memory barrier is not used
2016-01-11 15:15:05 5251 [Note] InnoDB: Compressed tables use zlib 1.2.3.4
2016-01-11 15:15:05 5251 [Note] InnoDB: Using Linux native AIO
2016-01-11 15:15:05 5251 [Note] InnoDB: Using CPU crc32 instructions
2016-01-11 15:15:05 5251 [Note] InnoDB: Initializing buffer pool, size = 10.0G
2016-01-11 15:15:06 5251 [Note] InnoDB: Completed initialization of buffer pool
2016-01-11 15:15:06 5251 [Note] InnoDB: Highest supported file format is Barracuda.
2016-01-11 15:15:45 5251 [Note] InnoDB: 128 rollback segment(s) are active.
2016-01-11 15:15:45 5251 [Note] InnoDB: Waiting for purge to start
2016-01-11 15:15:45 5251 [Warning] InnoDB: Setting thread 5395 nice to -10 failed, current nice 0, errno 13
2016-01-11 15:15:45 5251 [Note] InnoDB: Percona XtraDB (http://www.percona.com) 5.6.27-76.0 started; log sequence number 5680797833
2016-01-11 15:15:45 5251 [Warning] InnoDB: Skipping buffer pool dump/restore during wsrep recovery.
2016-01-11 15:15:45 5251 [Warning] InnoDB: Setting thread 5396 nice to -10 failed, current nice 0, errno 13
2016-01-11 15:15:45 5251 [Note] RSA private key file not found: /srv/mysql/data//private_key.pem. Some authentication plugins will not work.
2016-01-11 15:15:45 5251 [Note] RSA public key file not found: /srv/mysql/data//public_key.pem. Some authentication plugins will not work.
2016-01-11 15:15:45 5251 [Note] Server hostname (bind-address): '192.168.10.12'; port: 3306
2016-01-11 15:15:45 5251 [Note] - '192.168.10.12' resolves to '192.168.10.12';
2016-01-11 15:15:45 5251 [Note] Server socket created on IP: '192.168.10.12'.
2016-01-11 15:15:45 5251 [Note] WSREP: Recovered position: 51c9a3db-aeaa-11e3-a5e6-bffb1faf3524:1305553
2016-01-11 15:15:45 5251 [Note] Binlog end
2016-01-11 15:15:45 5251 [Note] Shutting down plugin 'partition'
2016-01-11 15:15:45 5251 [Note] Shutting down plugin 'BLACKHOLE'
2016-01-11 15:15:45 5251 [Note] Shutting down plugin 'ARCHIVE'
2016-01-11 15:15:45 5251 [Note] Shutting down plugin 'PERFORMANCE_SCHEMA'
2016-01-11 15:15:45 5251 [Note] Shutting down plugin 'INNODB_CHANGED_PAGES'
2016-01-11 15:15:45 5251 [Note] Shutting down plugin 'INNODB_SYS_DATAFILES'
2016-01-11 15:15:45 5251 [Note] Shutting down plugin 'INNODB_SYS_TABLESPACES'
2016-01-11 15:15:45 5251 [Note] Shutting down plugin 'INNODB_SYS_FOREIGN_COLS'
2016-01-11 15:15:45 5251 [Note] Shutting down plugin 'INNODB_SYS_FOREIGN'
2016-01-11 15:15:45 5251 [Note] Shutting down plugin 'INNODB_SYS_FIELDS'
2016-01-11 15:15:45 5251 [Note] Shutting down plugin 'INNODB_SYS_COLUMNS'
2016-01-11 15:15:45 5251 [Note] Shutting down plugin 'INNODB_SYS_INDEXES'
2016-01-11 15:15:45 5251 [Note] Shutting down plugin 'INNODB_SYS_TABLESTATS'
2016-01-11 15:15:45 5251 [Note] Shutting down plugin 'INNODB_SYS_TABLES'
2016-01-11 15:15:45 5251 [Note] Shutting down plugin 'INNODB_FT_INDEX_TABLE'
2016-01-11 15:15:45 5251 [Note] Shutting down plugin 'INNODB_FT_INDEX_CACHE'
2016-01-11 15:15:45 5251 [Note] Shutting down plugin 'INNODB_FT_CONFIG'
2016-01-11 15:15:45 5251 [Note] Shutting down plugin 'INNODB_FT_BEING_DELETED'
2016-01-11 15:15:45 5251 [Note] Shutting down plugin 'INNODB_FT_DELETED'
2016-01-11 15:15:45 5251 [Note] Shutting down plugin 'INNODB_FT_DEFAULT_STOPWORD'
2016-01-11 15:15:45 5251 [Note] Shutting down plugin 'INNODB_METRICS'
2016-01-11 15:15:45 5251 [Note] Shutting down plugin 'INNODB_BUFFER_POOL_STATS'
2016-01-11 15:15:45 5251 [Note] Shutting down plugin 'INNODB_BUFFER_PAGE_LRU'
2016-01-11 15:15:45 5251 [Note] Shutting down plugin 'INNODB_BUFFER_PAGE'
2016-01-11 15:15:45 5251 [Note] Shutting down plugin 'INNODB_CMP_PER_INDEX_RESET'
2016-01-11 15:15:45 5251 [Note] Shutting down plugin 'INNODB_CMP_PER_INDEX'
2016-01-11 15:15:45 5251 [Note] Shutting down plugin 'INNODB_CMPMEM_RESET'
2016-01-11 15:15:45 5251 [Note] Shutting down plugin 'INNODB_CMPMEM'
2016-01-11 15:15:45 5251 [Note] Shutting down plugin 'INNODB_CMP_RESET'
2016-01-11 15:15:45 5251 [Note] Shutting down plugin 'INNODB_CMP'
2016-01-11 15:15:45 5251 [Note] Shutting down plugin 'INNODB_LOCK_WAITS'
2016-01-11 15:15:45 5251 [Note] Shutting down plugin 'INNODB_LOCKS'
2016-01-11 15:15:45 5251 [Note] Shutting down plugin 'INNODB_TRX'
2016-01-11 15:15:45 5251 [Note] Shutting down plugin 'XTRADB_RSEG'
2016-01-11 15:15:45 5251 [Note] Shutting down plugin 'XTRADB_INTERNAL_HASH_TABLES'
2016-01-11 15:15:45 5251 [Note] Shutting down plugin 'XTRADB_READ_VIEW'
2016-01-11 15:15:45 5251 [Note] Shutting down plugin 'InnoDB'
2016-01-11 15:15:45 5251 [Note] InnoDB: FTS optimize thread exiting.
2016-01-11 15:15:45 5251 [Note] InnoDB: Starting shutdown...
2016-01-11 15:15:47 5251 [Note] InnoDB: Shutdown completed; log sequence number 5680801480
2016-01-11 15:15:47 5251 [Note] Shutting down plugin 'MyISAM'
2016-01-11 15:15:47 5251 [Note] Shutting down plugin 'MRG_MYISAM'
2016-01-11 15:15:47 5251 [Note] Shutting down plugin 'MEMORY'
2016-01-11 15:15:47 5251 [Note] Shutting down plugin 'CSV'
2016-01-11 15:15:47 5251 [Note] Shutting down plugin 'sha256_password'
2016-01-11 15:15:47 5251 [Note] Shutting down plugin 'mysql_old_password'
2016-01-11 15:15:47 5251 [Note] Shutting down plugin 'mysql_native_password'
2016-01-11 15:15:47 5251 [Note] Shutting down plugin 'wsrep'
2016-01-11 15:15:47 5251 [Note] Shutting down plugin 'binlog'
2016-01-11 15:15:47 5251 [Note] /usr/sbin/mysqld: Shutdown complete

2016-01-11 15:15:49 0 [Note] /usr/sbin/mysqld (mysqld 5.6.27-76.0-56) starting as process 5413 ...
2016-01-11 15:15:49 5413 [Note] WSREP: Read nil XID from storage engines, skipping position init
2016-01-11 15:15:49 5413 [Note] WSREP: wsrep_load(): loading provider library '/usr/lib/libgalera_smm.so'
2016-01-11 15:15:49 5413 [Note] WSREP: wsrep_load(): Galera 3.13(rb4bea65) by Codership Oy <info@codership.com> loaded successfully.
2016-01-11 15:15:49 5413 [Note] WSREP: CRC-32C: using hardware acceleration.
2016-01-11 15:15:49 5413 [Note] WSREP: Found saved state: 00000000-0000-0000-0000-000000000000:-1
2016-01-11 15:15:49 5413 [Note] WSREP: Passing config to GCS: base_dir = /srv/mysql/data/; base_host = 192.168.10.12; base_port = 4567; cert.log_conflicts = no; debug = no; evs.auto_evict = 0; evs.delay_margin = PT1S; evs.delayed_keep_period = PT30S; evs.inactive_check_period = PT0.5S; evs.inactive_timeout = PT15S; evs.join_retrans_period = PT1S; evs.max_install_timeouts = 3; evs.send_window = 4; evs.stats_report_period = PT1M; evs.suspect_timeout = PT5S; evs.user_send_window = 2; evs.view_forget_timeout = PT24H; gcache.dir = /srv/mysql/data/; gcache.keep_pages_count = 0; gcache.keep_pages_size = 0; gcache.mem_size = 0; gcache.name = /srv/mysql/data//galera.cache; gcache.page_size = 128M; gcache.size = 128M; gcs.fc_debug = 0; gcs.fc_factor = 1.0; gcs.fc_limit = 16; gcs.fc_master_slave = no; gcs.max_packet_size = 64500; gcs.max_throttle = 0.25; gcs.recv_q_hard_limit = 9223372036854775807; gcs.recv_q_soft_limit = 0.25; gcs.sync_donor = no; gmcast.segment = 0; gmcast.version = 0; pc.announce_timeout = PT3S; pc.checksum = false; pc.ignore_quor
2016-01-11 15:15:49 5413 [Note] WSREP: Service thread queue flushed.
2016-01-11 15:15:49 5413 [Note] WSREP: Assign initial position for certification: -1, protocol version: -1
2016-01-11 15:15:49 5413 [Note] WSREP: wsrep_sst_grab()
2016-01-11 15:15:49 5413 [Note] WSREP: Start replication
2016-01-11 15:15:49 5413 [Note] WSREP: Setting initial position to 00000000-0000-0000-0000-000000000000:-1
2016-01-11 15:15:49 5413 [Note] WSREP: protonet asio version 0
2016-01-11 15:15:49 5413 [Note] WSREP: Using CRC-32C for message checksums.
2016-01-11 15:15:49 5413 [Note] WSREP: backend: asio
2016-01-11 15:15:49 5413 [Note] WSREP: restore pc from disk successfully
2016-01-11 15:15:49 5413 [Note] WSREP: GMCast version 0
2016-01-11 15:15:49 5413 [Note] WSREP: (00000000, 'tcp://0.0.0.0:4567') listening at tcp://0.0.0.0:4567
2016-01-11 15:15:49 5413 [Note] WSREP: (00000000, 'tcp://0.0.0.0:4567') multicast: , ttl: 1
2016-01-11 15:15:49 5413 [ERROR] WSREP: failed to open gcomm backend connection: 131: invalid UUID: 00000000 (FATAL)
at gcomm/src/pc.cpp:PC():271
2016-01-11 15:15:49 5413 [ERROR] WSREP: gcs/src/gcs_core.cpp:gcs_core_open():208: Failed to open backend connection: -131 (State not recoverable)
2016-01-11 15:15:49 5413 [ERROR] WSREP: gcs/src/gcs.cpp:gcs_open():1379: Failed to open channel 'prestaserver_cluster' at 'gcomm://192.168.10.11,192.168.10.12,192.168.10.13': -131 (State not recoverable)
2016-01-11 15:15:49 5413 [ERROR] WSREP: gcs connect failed: State not recoverable
2016-01-11 15:15:49 5413 [ERROR] WSREP: wsrep::connect(gcomm://192.168.10.11,192.168.10.12,192.168.10.13) failed: 7
2016-01-11 15:15:49 5413 [ERROR] Aborting

2016-01-11 15:15:49 5413 [Note] WSREP: Service disconnected.
2016-01-11 15:15:50 5413 [Note] WSREP: Some threads may fail to exit.
2016-01-11 15:15:50 5413 [Note] Binlog end
2016-01-11 15:15:50 5413 [Note] /usr/sbin/mysqld: Shutdown complete

160111 15:15:50 mysqld_safe mysqld from pid file /srv/mysql/data/server2.pid ended
* The server quit without updating PID file (/srv/mysql/data/server2.pid).

I tried to start mysql forcing the donor to be server1, but still hit the same issue.

On the failed server:
-rw-rw---- 1 mysql mysql 104 Jan 11 15:29 grastate.dat
-rw-rw---- 1 mysql mysql 0 Jan 11 14:20 gvwstate.dat

cat grastate.dat
# GALERA saved state
version: 2.1
uuid: 00000000-0000-0000-0000-000000000000
seqno: -1
cert_index:


Any idea how to force the server to resync and start?
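
One hedged lead rather than a confirmed fix: the log prints WSREP: restore pc from disk successfully immediately before the invalid UUID: 00000000 failure, and the gvwstate.dat on the failed node is zero bytes. Galera restores its primary-component state from that file, so an empty one can feed it the nil UUID. A possible workaround sketch:

# Move the empty view-state file aside so the node rejoins from scratch.
# (grastate.dat with a zeroed uuid and seqno -1 already implies a full SST.)
mv /srv/mysql/data/gvwstate.dat /srv/mysql/data/gvwstate.dat.bak
/etc/init.d/mysql start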



Percona XtraDB Cluster 5.6.27-25.13 is now available

Latest MySQL Performance Blog posts - January 11, 2016 - 4:16am

Percona is glad to announce the new release of Percona XtraDB Cluster 5.6 on January 11, 2016. Binaries are available from the downloads area or from our software repositories.

Percona XtraDB Cluster 5.6.27-25.13 is now the current release, based on the following:

  • Percona Server 5.6.27-76.0
  • Galera Replicator 3.13

All Percona software is open-source and free, and all the details of the release can be found in the 5.6.27-25.13 milestone at Launchpad.

For more information about relevant Codership releases, see this announcement.

NOTE: Due to a new dependency on the libnuma1 package in Debian/Ubuntu, please run one of the following commands to upgrade the percona-xtradb-cluster-server-5.6 package:

  • aptitude safe-upgrade
  • apt-get dist-upgrade
  • apt-get install percona-xtradb-cluster-server-5.6

New Features:

  • There is a new script for building Percona XtraDB Cluster from source. For more information, see Compiling and Installing from Source Code.
  • wsrep_on is now a session-only variable: toggling it affects only the session/client that modifies it, not other clients connected to the same node. Trying to toggle wsrep_on in the middle of a transaction now results in an error; a transaction captures the state of wsrep_on when it starts (that is, when its first data-changing statement executes) and continues to use that value. A brief illustration follows this list.
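
A brief hedged illustration of the new semantics (the table t is made up for the example):

-- Session-only now: disabling replication here affects no other client.
SET SESSION wsrep_on = OFF;
INSERT INTO t VALUES (1);   -- not replicated to the other nodes
SET SESSION wsrep_on = ON;

-- Toggling inside a transaction is rejected:
START TRANSACTION;
INSERT INTO t VALUES (2);
SET SESSION wsrep_on = OFF; -- now raises an error; the trx keeps the value
                            -- captured at its first data-changing statement
COMMIT;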

Bugs Fixed:

  • #1261688 and #1292842: Fixed a race condition when two skipped replication transactions were rolled back, which caused [ERROR] WSREP: FSM: no such a transition ROLLED_BACK -> ROLLED_BACK with LOAD DATA INFILE.
  • #1362830: Corrected the xtrabackup-v2 script to consider only the last specified log_bin directive in my.cnf. Multiple log_bin directives caused SST to fail.
  • #1370532: Toggling wsrep_desync while the node is paused is now blocked.
  • #1404168: Removed support for the innodb_fake_changes variable.
  • #1455098: Fixed failure of LOAD DATA INFILE on partitioned tables. This was caused by the partitioned table handler disabling binary logging, so the native handler (InnoDB) failed to generate the needed binlog events, which eventually caused the statement to skip replication.
  • #1503349: garbd now uses the default port number if one is not specified in sysconfig.
  • #1505184: Corrected wsrep_sst_auth handling to ensure that the user name and password for SST are passed to XtraBackup through an internal command-line invocation. ps -ef does not list these credentials, so passing them internally is safe.
  • #1520491: FLUSH TABLES statements are not replicated any more, because replicating them led to a deadlock error (an upstream fix is pending). This change also preserves the original fix that avoids incrementing the local GTID.
  • #1528020: Fixed an async slave thread failure caused by redundant updates of the mysql.event table with the same value. Redundant updates are now avoided and are not binlogged.
  • Fixed the garb init script generating new UUIDs every time it ran. This was due to a missing base_dir configuration when garbd didn't have write access to the current working directory. garbd now tries to use the current working directory, then /var/lib/galera (like most Linux daemons); if it fails to use or create /var/lib/galera, it throws a fatal error.
  • Fixed replication of DROP TABLE statements that mix temporary and non-temporary tables (for example, DROP TABLE temp_t1, non_temp_t2), which caused an erroneous DROP TEMPORARY TABLE statement on the replicated node. This is corrected by detecting such scenarios and creating the temporary table on the replicated node, where it is then dropped by the follow-up DROP statement. The whole workload must be part of the same unit, as temporary tables are session-specific.
  • Fixed an error where a wsrep_cluster_name value longer than 32 characters caused the gmcast message to exceed its maximum length. A limit of 32 characters is now imposed on wsrep_cluster_name.
  • Added code to properly handle default values for wsrep_* variables; mishandling of the defaults previously caused an error/crash.
  • Fixed an error where a CREATE TABLE AS SELECT (CTAS) statement still tried to certify a transaction on a table without a primary key, even if certification of tables without primary keys was disabled. This was caused by CTAS setting trx_id (fake_trx_id) to execute the SELECT and failing to reset it back to -1 during the INSERT when certification is disabled.
  • Fixed a crash of INSERT ... SELECT into a MyISAM table with wsrep_replicate_myisam set to ON. This was caused by TOI being invoked twice when both source and destination tables were MyISAM.
  • Fixed a crash when caching write-set data beyond the configured limit. This was caused by the TOI flow failing to check the error resulting from limit enforcement.
  • Fixed an error when loading a MyISAM table from a schema temporary table (with wsrep_replicate_myisam set to ON). This was caused by the temporary table lookup being done using get_table_name(), which can be misleading because table_name for temporary tables is set to a temporarily generated name; the original table name is part of table_alias. The fix corrects the condition to consider both table_name and alias_name.
  • Fixed an error when changing wsrep_provider in the middle of a transaction or as part of a procedure/trigger. This is now blocked to avoid inconsistency.
  • Fixed a TOI state inconsistency caused by DELAYED INSERT on a MyISAM table (TOI_END was not called). The DELAYED qualifier is now ignored and the statement is interpreted as a normal INSERT.
  • Corrected locking semantics for FLUSH TABLES WITH READ LOCK (FTWRL): it now avoids freeing an inherited lock if a follow-up FLUSH TABLES statement fails, and only frees self-acquired locks.
  • Fixed a crash caused by GET_LOCK combined with wsrep_drupal_282555_workaround. The GET_LOCK path failed to free all instances of user-level locks after inheriting multiple-user-locks support from Percona Server. The cleanup code now removes all possible references to the locks.
  • Fixed a cluster node getting stuck in Donor/Desync state after a hard recovery, caused by an erroneous type cast in the source code.
  • Corrected DDL and DML semantics for MyISAM:
    • DDL (CREATE/DROP/TRUNCATE) on MyISAM is replicated irrespective of the wsrep_replicate_myisam value.
    • DML (INSERT/UPDATE/DELETE) on MyISAM is replicated only if wsrep_replicate_myisam is enabled.
    • SST will always transfer MyISAM tables from the donor (if any), irrespective of the wsrep_replicate_myisam value.
    • A difference in enforce_storage_engine configuration between cluster nodes may result in a different engine being picked for the same table on different nodes.
    • CREATE TABLE AS SELECT (CTAS) statements use non-TOI replication and are replicated only if an InnoDB table requiring a transaction is involved (involvement of a MyISAM table causes the CTAS statement to skip replication).

Known Issues:

  • 1330941: A conflict between wsrep_OSU_method set to RSU and wsrep_desync set to ON was not considered a bug.
  • 1443755: Causal reads introduce surprising latency in single-node clusters.
  • 1522385: Holes are introduced in the master-slave GTID ecosystem on replicated nodes if any of the cluster nodes acts as an asynchronous slave to an independent master.
  • SST fails when innodb_data_home_dir/innodb_log_home_dir are set. This is a bug in Percona XtraBackup and should be fixed in the upcoming 2.3.2 release. Until then, please use 2.2.12, which doesn't have this issue.
  • Enabling wsrep_desync (from a previous OFF state) will wait until the previous wsrep_desync=OFF operation has completed.

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system. As always, thanks for your continued support of Percona!

Ubuntu Server 14.04 LTS and Percona

Latest Forum Posts - January 11, 2016 - 1:41am
I have a fresh installation of Ubuntu Server 14.04 LTS with LAMP.
Following this guide I have installed Percona Server, and it works correctly. In fact, if I do

Code: SHOW VARIABLES LIKE "%version%"
it returns this:

and when I try
Code: SHOW ENGINES
it returns:

Do I have to install the XtraDB storage engine, or is InnoDB replaced by XtraDB? Thanks for the clarification.
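
For what it's worth, a quick hedged check: in Percona Server, XtraDB ships as the built-in InnoDB implementation (tables still show ENGINE=InnoDB), so nothing extra needs to be installed; the XtraDB build is visible in the InnoDB version string:

-- On Percona Server this reports a Percona version such as "5.6.28-76.1"
-- rather than a stock MySQL version:
SHOW GLOBAL VARIABLES LIKE 'innodb_version';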

ordering_operation: EXPLAIN FORMAT=JSON knows everything about ORDER BY processing

Latest MySQL Performance Blog posts - January 8, 2016 - 10:25pm

We’ve already discussed using the ORDER BY clause with subqueries. You can also, however, use the ORDER BY clause to sort the results by one of the columns. Actually, this is the most common way to use this clause.

Sometimes such queries require temporary tables or filesort, and a regular EXPLAIN shows this information. But it doesn’t show whether that work is needed for ORDER BY or for optimizing another part of the query.

For example, if we take a pretty simple query (select distinct last_name from employees order by last_name asc) and run EXPLAIN on it, we can see that both a temporary table and filesort were used. However, we can’t identify whether these were applied for DISTINCT, for ORDER BY, or for some other part of the query.

mysql> explain select distinct last_name from employees order by last_name asc\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: employees
   partitions: NULL
         type: ALL
possible_keys: NULL
          key: NULL
      key_len: NULL
          ref: NULL
         rows: 299379
     filtered: 100.00
        Extra: Using temporary; Using filesort
1 row in set, 1 warning (0.00 sec)

Note (Code 1003): /* select#1 */ select distinct `employees`.`employees`.`last_name` AS `last_name` from `employees`.`employees` order by `employees`.`employees`.`last_name`

EXPLAIN FORMAT=JSON tells us exactly what happened:

mysql> explain format=json select distinct last_name from employees order by last_name asc\G
*************************** 1. row ***************************
EXPLAIN: {
  "query_block": {
    "select_id": 1,
    "cost_info": {
      "query_cost": "360183.80"
    },
    "ordering_operation": {
      "using_filesort": false,
      "duplicates_removal": {
        "using_temporary_table": true,
        "using_filesort": true,
        "cost_info": {
          "sort_cost": "299379.00"
        },
        "table": {
          "table_name": "employees",
          "access_type": "ALL",
          "rows_examined_per_scan": 299379,
          "rows_produced_per_join": 299379,
          "filtered": "100.00",
          "cost_info": {
            "read_cost": "929.00",
            "eval_cost": "59875.80",
            "prefix_cost": "60804.80",
            "data_read_per_join": "13M"
          },
          "used_columns": [
            "emp_no",
            "last_name"
          ]
        }
      }
    }
  }
}
1 row in set, 1 warning (0.00 sec)

Note (Code 1003): /* select#1 */ select distinct `employees`.`employees`.`last_name` AS `last_name` from `employees`.`employees` order by `employees`.`employees`.`last_name`

In the output above you can see that ordering_operation does not use filesort:

"ordering_operation": { "using_filesort": false,

But DISTINCT does:

"duplicates_removal": { "using_temporary_table": true, "using_filesort": true,

If we remove the DISTINCT clause, we find that ORDER BY starts using filesort, but does not need to create a temporary table:

mysql> explain format=json select last_name from employees order by last_name asc\G
*************************** 1. row ***************************
EXPLAIN: {
  "query_block": {
    "select_id": 1,
    "cost_info": {
      "query_cost": "360183.80"
    },
    "ordering_operation": {
      "using_filesort": true,
      "cost_info": {
        "sort_cost": "299379.00"
      },
<rest of the output skipped>

This means that, in the first query, the sort was performed as part of the duplicate keys removal rather than as a separate step for ORDER BY.

Conclusion: EXPLAIN FORMAT=JSON provides details about ORDER BY optimization that cannot be seen with a regular EXPLAIN.

Mongorestore hangs restoring a MongoDB 2.6 dump to a PS for MongoDB 3.0.8 wiredTiger

Latest Forum Posts - January 8, 2016 - 3:20pm
Hi all,

We are trying to migrate our MongoDB databases, but we're stuck on this issue, which seems to occur randomly; sometimes it hits early, other times well beyond the halfway point of the restore process.

I found someone else posting a similar issue on stackoverflow: http://stackoverflow.com/questions/3...orestore-hangs

OS: CentOS 7 KVM-guest
My mongod.conf file is as follows:
# cat /etc/mongod.conf
Code:
# mongod.conf, Percona Server for MongoDB
# for documentation of all options, see:
#   http://docs.mongo.org/manual/reference/configuration-options/

# Where and how to store data.
storage:
  dbPath: /var/lib/mongo
  journal:
    enabled: true
#  engine: mmapv1
#  engine: PerconaFT
#  engine: rocksdb
  engine: wiredTiger
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1

# Storage engine various options
#  mmapv1:
#  wiredTiger:

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongo/mongod.log

processManagement:
  fork: true
  pidFilePath: /var/run/mongod.pid

# network interfaces
net:
  port: 27017
  bindIp: 127.0.0.1

#security:
#operationProfiling:
#replication:
#sharding:

## Enterprise-Only Options:
#auditLog:
#snmp:

As you can see, the cache is reduced to 1GB because Mongo was previously crashing back to the OS, presumably after running out of memory.

Finally: they say that sometimes a picture is worth a thousand words, so: http://i.imgur.com/p6tk0gL.png

Apache Spark with Air ontime performance data

Latest MySQL Performance Blog posts - January 7, 2016 - 5:28pm

There is a growing interest in Apache Spark, so I wanted to play with it (especially after Alexander Rubin’s Using Apache Spark post).

To start, I used the recently released Apache Spark 1.6.0 for this experiment, and I will play with the “Airlines On-Time Performance” database from http://www.transtats.bts.gov/DL_SelectFields.asp?Table_ID=236&DB_Short_Name=On-Time. You can find the scripts I used here: https://github.com/Percona-Lab/ontime-airline-performance. The uncompressed dataset is about 70GB, which is not really that huge overall, but quite convenient to play with.

As a first step, I converted it to the Parquet format. It’s a column-based format suitable for parallel processing, and it supports partitioning.

The script I used was the following:

# bin/spark-shell --packages com.databricks:spark-csv_2.11:1.3.0

val df = sqlContext.read.format("com.databricks.spark.csv").option("header", "true").option("inferSchema", "true").load("/data/opt/otp/On_Time_On_Time_Performance_*.csv")
sqlContext.setConf("spark.sql.parquet.compression.codec", "snappy")
df.write.partitionBy("Year").parquet("/data/flash/spark/otp")

Conveniently, using just two commands (three if you count setting the compression codec, “snappy” in this case) we can convert ALL of the .csv files into Parquet, in parallel.

The data size after compression is only 3.5GB, an impressive compression factor of 20x. I’m guessing the column format with repetitive data allows this compression level.

In general, Apache Spark makes it very easy to handle the Extract, Transform and Load (ETL) process.

Another one of Spark’s attractive features is that it automatically uses all CPU cores and executes complex queries in parallel (something MySQL still can’t do). So I wanted to understand how fast it can execute a query compared to MySQL, and how efficient it is in using multiple cores.

For this I decided to use a query such as:
"SELECT avg(cnt) FROM (SELECT Year,Month,COUNT(*) FROM otp WHERE DepDel15=1 GROUP BY Year,Month) t1"

Which translates to the following Spark DataFrame manipulation:

(pFile.filter("DepDel15=1").groupBy("Year","Month").count()).agg(avg("count")).show()

I should note that Spark is perfectly capable of executing this as a SQL query, but I wanted to learn more about DataFrame manipulation.

The full script I executed is:

val pFile = sqlContext.read.parquet("/mnt/i3600/spark/otp1").cache();
for( a <- 1 to 6){
  println("Try: " + a)
  val t1 = System.currentTimeMillis;
  (pFile.filter("DepDel15=1").groupBy("Year","Month").count()).agg(avg("count")).show();
  val t2 = System.currentTimeMillis;
  println("Try: " + a + " Time: " + (t2-t1))
}
exit

And I used the following command line to call the script:

for i in `seq 2 2 48` ; do bin/spark-shell --executor-cores $i -i run.scala | tee -a $i.schema.res ; done

which basically tells it to use from 2 to 48 cores (the server I use has 48 CPU cores) in steps of two.

I executed this same query six times. The first time is a cold run, and data is read from the disk. The rest are hot runs, and the query should be executed from memory (this server has 128GB of RAM, and I allocated 100GB to the Spark executor).

I measured the execution time in cold and hot runs, and how it changed as more cores were added.

There was a lot of variance in the execution time of the hot runs, so I show all the results to demonstrate any trends.

Cold runs:

More cores seem to help, but after a certain point – not so much.

Hot runs:

The best execution times occurred with 14-22 cores. Adding more cores after that seems to actually make things worse. I would guess that the data size is small enough that the communication and coordination overhead exceeded the benefits of more parallel processing cores.

Comparing to MySQL

Just to have some points for comparison, I executed the same query in MySQL 5.7 using the following table schema: https://github.com/Percona-Lab/ontime-airline-performance/blob/master/mysql/create_table.sql

The hot execution time for the same query in MySQL (which can use only one CPU core per query) is 350 seconds (350,000ms, to compare with the data on the charts) when using the table without indexes. This is about 11 times worse than the best execution time in Spark.

If we use a small trick and create a covering index in MySQL designed for this query:

"ALTER TABLE ontime ADD KEY (Year,Month,DepDel15)"

then we can improve the execution time to 90 seconds. This is still worse than Spark, but the difference is not as big. We can’t, however, create an index for each ad-hoc query, while Spark is capable of processing a variety of queries.

In conclusion, I can say that Spark is indeed an attractive option for data analytics queries
(and in fact it can do much more). It is worth keeping in mind, however, that in this experiment
it did not scale well with multiple CPU cores. I wonder if the same problem appears when we use multiple server nodes.

If you have recommendations on how I can improve the results, please post them in the comments.

Spark configuration I used (in Standalone cluster setup):

export MASTER=spark://`hostname`:7077
export SPARK_MEM=100g
export SPARK_DAEMON_MEMORY=2g
export SPARK_LOCAL_DIRS=/mnt/i3600/spark/tmp
export SPARK_WORKER_DIR=/mnt/i3600/spark/tmp


