
Error on adding services.

Latest Forum Posts - March 2, 2017 - 9:00am
I am running the PMM server inside a Docker container. The PMM client is running in another Docker container (CentOS), which is connected to a MariaDB Docker container.

The client is connected to the server, but when I add a mysql service to the client, I get this error:

Error adding MySQL metrics: "service" failed: exec: "service": executable file not found in $PATH

pmm doesn't trust my Thawte certificate

Latest Forum Posts - March 2, 2017 - 8:07am
Hello,

I have set up a pmm-server (with the 1.1.1 Docker image) with my wildcard SSL certificate bought from Thawte, plus user/password protection.
Access to the pmm-server web pages is OK and the web browser trusts the Thawte certificate, but when I try to connect a pmm-client to the pmm-server, I always get the following message:
Looks like PMM server running with self-signed SSL certificate.
Use 'pmm-admin config' with --server-insecure-ssl flag.

Why doesn't PMM trust my Thawte certificate?

PS: my pmm-client is 1.0.5, does that matter?

Regards,

Krissfr

Using Percona Toolkit pt-mongodb-query-digest

Latest MySQL Performance Blog posts - March 1, 2017 - 3:21pm

In this blog post, we’ll look at how to use the pt-mongodb-query-digest tool in Percona Toolkit 3.0.

Percona’s pt-query-digest is one of our most popular Percona Toolkit MySQL tools. It is used on a daily basis by DBAs and developers to help identify the queries consuming the most resources. It helps in finding bottlenecks and optimizing database usage. The pt-mongodb-query-digest is a similar tool for MongoDB.

About the Profiler

Before we start, remember that the MongoDB database profiler is disabled by default, and should be enabled. It can be enabled server-wide, but the full mode that logs all queries is not recommended in production unless you are using Percona Server for MongoDB 3.2 or higher. There, we added a feature that lets you set a sample rate for non-slow queries (as in MySQL) to limit the overhead this causes.

Additionally, the profiler collection is capped at 1MB per database by default. You may want to drop and recreate the system.profile collection with a size large enough to make the results useful. To do this, use:

org_prof_level = db.getProfilingLevel();
// Disable the profiler
db.setProfilingLevel(0);
db.system.profile.drop();
// Set up a 100MB profile collection (1*Math.pow(1024,2) == 1MB)
profiler_size = 100 * Math.pow(1024,2);
db.runCommand( { create: "system.profile", capped: true, size: profiler_size } );
db.setProfilingLevel(org_prof_level);

According to the documentation, to check if the profiler is enabled for the samples database, run:

`echo "db.getProfilingStatus();" | mongo localhost:17001/samples`

Remember, you need to connect to a MongoDB instance, not a mongos. The output will be something like this:

MongoDB shell version: 3.2.12
connecting to: localhost:17001/samples
{ "was" : 0, "slowms" : 100 }
bye

The value for the field “was” is 0, which means profiling is disabled. Let’s enable the profiler for the samples database.

You must enable the profiler on every MongoDB instance that holds a shard of the database. To check which instances we should enable the profiler on, I am going to use the pt-mongodb-summary tool. It shows us the information we need about our cluster:

./pt-mongodb-summary

# Instances ##############################################################################################
  PID    Host               Type                 ReplSet   Engine
11037    localhost:17001    SHARDSVR/PRIMARY     r1        wiredTiger
11065    localhost:17002    SHARDSVR/SECONDARY   r1        wiredTiger
11136    localhost:17003    SHARDSVR/SECONDARY   r1        wiredTiger
11256    localhost:17004    SHARDSVR/ARBITER     r1        wiredTiger
11291    localhost:18001    SHARDSVR/PRIMARY     r2        wiredTiger
11362    localhost:18002    SHARDSVR/SECONDARY   r2        wiredTiger
11435    localhost:18003    SHARDSVR/SECONDARY   r2        wiredTiger
11513    localhost:18004    SHARDSVR/ARBITER     r2        wiredTiger
11548    localhost:19001    CONFIGSVR            -         wiredTiger
11571    localhost:19002    CONFIGSVR            -         wiredTiger
11592    localhost:19003    CONFIGSVR            -         wiredTiger

We have mongod services running on localhost on ports 17001–17003 and 18001–18003.

Now, let’s enable the profiler for the samples database on those instances. For this example, I am going to set the profile level to “2”, to collect information about all queries.

for port in 17001 17002 17003 18001 18002 18003; do
    echo "db.setProfilingLevel(2);" | mongo localhost:${port}/samples;
done

Running pt-mongodb-query-digest

Now we are ready to get statistics about our queries. To run pt-mongodb-query-digest, we need to specify at least “host:port/database”, like:

./pt-mongodb-query-digest localhost:27017/samples

The output will be something like this (I am showing a section for only one query):

# Query 0:  0.27 QPS, ID 2c0e2f94937d6660f510adeea98618f3
# Ratio     1.00  (docs scanned/returned)
# Time range: 2017-02-22 12:27:21.004 -0300 ART to 2017-02-22 12:28:00.867 -0300 ART
# Attribute            pct     total     min      max      avg      95%    stddev   median
# ==================   ===   ========  =======  =======  =======  =======  =======  =======
# Count (docs)                    845
# Exec Time ms          99       1206        0      697        1        0       29        0
# Docs Scanned           7     594.00     0.00    75.00     0.70     0.00     7.19     0.00
# Docs Returned          7     594.00     0.00    75.00     0.70     0.00     7.19     0.00
# Bytes recv             0      8.60M   215.00    1.06M   10.17K   215.00  101.86K   215.00
# String:
# Namespaces           samples.col1
# Operation            query
# Fingerprint          user_id
# Query                {"user_id":{"$gte":3506196834,"$lt":3206379780}}

From the output, we can see that this query was seen 845 times, and the tool provides statistics for the number of documents scanned/returned by the server, the execution time, and the size of the results. The tool also provides information regarding the operation type, the fingerprint and a query example to help identify the source.

By default, the results are sorted by query count. It can be changed by setting the --order-by parameter to: count, ratio, query-time, docs-scanned or docs-returned.

A “-” in front of the field name denotes the reverse order. Example:

--order-by=-ratio

When considering which ordering to use, you need to decide whether you are looking for the most common queries (-count), the most cache-abusive queries (-docs-scanned), or the worst ratio of documents scanned to returned (-ratio). You may be tempted to use -query-time, but that ordering almost always surfaces queries that are affected by issues rather than the ones causing them.
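For example, to surface the queries with the worst scanned-to-returned ratio first, the invocation might look like this (a minimal sketch, assuming the same samples database used above):

./pt-mongodb-query-digest --order-by=-ratio localhost:27017/samples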

Conclusion

This is a new tool in the Percona Toolkit. We hope in the future we can make it grow like its big brother for MySQL (pt-query-digest). This tool helps DBAs and developers identify and solve bottlenecks, and keep servers running at top performance.

Open Source Databases on Big Machines: Disk Speed and innodb_io_capacity

Latest MySQL Performance Blog posts - March 1, 2017 - 3:00pm

In this blog post, I’ll look for the bottleneck that prevented the performance in my previous post from achieving better results.

The powerful machine I used in the tests in my previous post has a comparatively slow disk, and therefore I expected my tests would hit a point when I couldn’t increase performance further due to the disk speed.

Hardware configuration:

Processors: physical = 4, cores = 72, virtual = 144, hyperthreading = yes
Memory: 3.0T
Disk speed: about 3K IOPS
OS: CentOS 7.1.1503
File system: XFS

Versions tested and configuration: same as in the first post of this series (check the post for specifics).

Even though I expected my tests would stop scaling due to disk speed, I did not observe high IO rates in the iostat output. I had already tested with a full data set that fits in memory; in that case, write performance affects only data flushes and log writes, but we should still see a visible decrease in speed. So I decided to run the RW tests totally in memory: I created a ramdisk and put the MySQL datadir on it. Surprisingly, the results on the SSD and the ramdisk did not differ.
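As a rough sketch, a ramdisk for this kind of test might be set up with tmpfs along these lines (the size, mount point and datadir path here are hypothetical placeholders, not the exact ones used in the tests):

mkdir -p /mnt/ramdisk
mount -t tmpfs -o size=200G tmpfs /mnt/ramdisk
# stop mysqld, copy the datadir onto the ramdisk, then point datadir at it in my.cnf
cp -rp /var/lib/mysql /mnt/ramdisk/
# my.cnf: datadir = /mnt/ramdisk/mysql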

I asked my colleagues from “Postgres Professional” to test PostgreSQL with the ramdisk. They got similar results:

It’s interesting that the value of innodb_io_capacity does not have any effect in this situation. The data for the graph below was taken while I ran the tests on the ramdisk. I wanted to see if I could use this variable to control the IO activity of a disk that is extremely fast by default.
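Since innodb_io_capacity is a dynamic variable, it can be changed on the fly while a benchmark runs; a minimal sketch (the values are only examples):

mysql -e "SET GLOBAL innodb_io_capacity = 4000;"
mysql -e "SET GLOBAL innodb_io_capacity_max = 8000;"
mysql -e "SHOW GLOBAL VARIABLES LIKE 'innodb_io_capacity%';"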

This totally contradicts all my past experiences with smaller machines. Percona re-purposed the machine with a faster disk (which I used before, described in this post), so I used a similar one with slower disk speed.

Hardware configuration:

Processors: physical = 2, cores = 12, virtual = 24, hyperthreading = yes
Memory: 47.2G
Disk speed: about 3K IOPS
OS: Ubuntu 14.04.5 LTS (trusty)
File system: ext4

Again, in this case innodb_io_capacity benchmarks with a smaller number of CPU cores showed more predictable results.

Conclusion:

Both MySQL and PostgreSQL on a machine with a large number of CPU cores hit CPU resources limits before disk speed can start affecting performance. We only tested one scenario, however. With other scenarios, the results might be different.

Percona Monitoring and Management (PMM) Graphs Explained: MongoDB MMAPv1

Latest MySQL Performance Blog posts - February 28, 2017 - 3:28pm

This post is part of the series of Percona’s MongoDB 3.4 bundle release blogs. In this blog post, I hope to cover some areas to watch with Percona Monitoring and Management (PMM) when running MMAPv1. The graph examples from this article are from the MMAPv1 dashboard that will be released for the first time in PMM 1.1.2.

The MMAPv1 storage engine has existed since the very beginning of MongoDB; it was the only engine available until MongoDB 3.0 added a pluggable storage engine API. While MMAPv1 often offers good read performance, it has become famous for its poor write performance and fragmentation at scale. This means there are many areas to watch for regarding performance and monitoring.

Percona Monitoring and Management (PMM)

Percona Monitoring and Management (PMM) is an open-source platform for managing and monitoring MySQL and MongoDB. It was developed by Percona on top of open-source technology. Behind the scenes, the graphing features this article covers use Prometheus (a popular time-series data store), Grafana (a popular visualization tool), mongodb_exporter (our MongoDB database metric exporter) plus other technologies to provide database and operating system metric graphs for your database instances.

(Beware of) MMAPv1

mmap() is a system-level call that causes the operating system kernel to map on-disk files into memory while they are being read and written by a program.

As mmap() is a core feature of the Unix/Linux operating system kernel (and not the MongoDB code base), I’ve always felt that calling MMAPv1 a “storage engine” is quite misleading, although it does allow for a simpler explanation. The distinction and drawbacks of the storage logic being in the operating system kernel vs. the actual database code (like most database storage engines) becomes very important when monitoring MMAPv1.

As Unix/Linux are general-purpose operating systems that can have many processes, users and use cases, they offer limited OS-level metrics for the activity, latency and performance of mmap(). Those metrics are for the entire operating system, not just for the MongoDB processes.

mmap() uses memory from available OS-level buffers/caches for mapping the MMAPv1 data to RAM — memory that can be “stolen” away by any other operating system process that asks for it. As many deployments “micro-shard” MMAPv1 to reduce write locks, this becomes exponentially more important. If three MongoDB instances run on a single host, the kernel fights to cache and evict memory pages created by three different instances with no priority or queuing, essentially at random, creating contention. This causes inefficiencies and less-meaningful monitoring values.

When monitoring MMAPv1, you should consider MongoDB AND the operating system as one “component” more than most engines. Due to this, it is critical that a database host runs a single MongoDB instance with no other processes except database monitoring tools such as PMM’s client. This allows MongoDB to be the only user of the operating system filesystem cache that MMAPv1 relies on. This also makes OS-level memory metrics more accurate because MongoDB is the only user of memory. If you need to “micro-shard” instances, I recommend using containers (Docker or plain cgroups) or virtualization to separate your memory for each MongoDB instance, with just one MongoDB instance per container.
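As a rough sketch of that approach, each instance could be started in its own container with an explicit memory limit (the image tag, limit and paths below are hypothetical examples, not a recommendation for specific values):

docker run -d --name mongod-shard1 --memory=8g \
    -v /data/shard1:/data/db mongo:3.4 --storageEngine mmapv1
# repeat with a separate container, data path and memory limit per instance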

Locking

MMAPv1 has locks for both reads and writes. In the early days the lock was global only. Locking became per-database in v2.2 and per-collection in v3.0.

Locking is the leading cause of the performance issues we see on MMAPv1 systems, particularly write locking. To measure how much locking an MMAPv1 instance is waiting on, first we look at the “MMAPv1 Lock Ratio”:

Another important metric to watch is “MongoDB Lock Wait Time”, breaking down the amount of time operations spend waiting on locks:

Three factors in combination influence locking:

  1. Data hotspots — if every query hits the same collection or database, locking increases
  2. Query performance — a lock is held for the duration of an operation; if that operation is slow, lock time increases
  3. Volume of queries — self-explanatory

Page Faults

Page faults happen when MMAPv1 data is not available in the cache and needs to be fetched from disk. On systems whose data is smaller than memory, page faults usually occur only on reboot, or if the filesystem cache is dumped. On systems where data exceeds memory, this happens more frequently — MongoDB is asked for data not in memory.

How often this happens depends on how your application accesses your data. If it accesses new or frequently-queried data, it is more likely to be in memory. If it accesses old or infrequent data, more page faults occur.

If page faults suddenly start occurring, check to see if your data set has grown beyond the size of memory. You may be able to reduce your data set by removing fragmentation (explained later).

Journaling

As MMAPv1 eventually flushes changes to disk in batches, journaling is essential for running MongoDB with any real data integrity guarantees. As well as being included in the lock statistic graphs mentioned above, there are some good metrics for journaling (which is a heavy consumer of disk writes).

Here we have “MMAPv1 Journal Write Activity”, showing the data rates of journaling (max 19MB/sec):

“MMAPv1 Journal Commit Activity” measures the commits to the journal ops/second:

A very useful metric for write query performance is “MMAPv1 Journaling Time” (there is another graph with 99th percentile times):

This is important to watch, as write operations need to wait for a journal commit. In the above example, “write_to_journal” and “write_to_data_files” are the main metrics I tend to look at. “write_to_journal” is the rate of changes being written to the journal, and “write_to_data_files” is the rate that changes are written to on-disk data.

If you see very high journal write times, you may need faster disks or, in sharding scenarios, more shards: adding shards spreads out the disk write load.

Background Flushing

“MMAPv1 Background Flushing Time” graphs the background operation that flushes data to disk:

This process does not block the database, but does cause more disk activity.

Fragmentation

Due to the way MMAPv1 writes to disk, it creates a high rate of fragmentation (or holes) in its data files. Fragmentation slows down scan operations, wastes some filesystem cache memory and can use much more disk space than there is actual data. On many systems I’ve seen, the MMAPv1 data files on disk take up over twice the true data size.

Currently, our Percona Monitoring and Management MMAPv1 support does not track this, but we plan to add it in the future.

To track it manually, look at the output of the “.stats()” command for a given collection (replace “sbtest1” with your collection name):

> 1 - ( db.sbtest1.stats().size / db.sbtest1.stats().storageSize )
0.14085410557184752

Here we can see this collection is about 14% fragmented on disk. The most common fix is dropping and recreating the collection from a backup. Many people simply remove a replica set member, clear its data, and let it perform a new initial sync.
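To get a quick overview of more than one collection, the same calculation can be looped over every collection in a database; a minimal sketch, assuming a mongod on localhost:17001 and the samples database used earlier:

echo 'db.getCollectionNames().forEach(function(c) {
    var s = db.getCollection(c).stats();
    if (s.storageSize > 0) {
        print(c + ": " + (1 - s.size / s.storageSize));
    }
});' | mongo localhost:17001/samples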

Operating System Memory

In PMM we have graphed the operating system cached memory as it acts as the primary cache for MMAPv1:

For the most part, the “Cached” value shows the amount of memory holding cached MMAPv1 data (assuming the host is only running MongoDB).

We also graph the dirty memory pages:

It is important that dirty pages do not exceed the hard dirty page limit (which causes pauses). It is also important that dirty pages don’t accumulate (which wastes cache memory). The “soft” dirty page limit is the limit that starts dirty page cleanup without pausing.

On this host, you could probably lower the soft limit to clean up memory faster, assuming the increase in disk activity is acceptable. This topic is covered in this post: https://www.percona.com/blog/2016/08/12/tuning-linux-for-mongodb/.
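On Linux these limits are controlled by the vm.dirty_* sysctls; a minimal sketch of inspecting and lowering the soft (background) limit (the value shown is only an example, see the linked post for guidance):

sysctl vm.dirty_ratio vm.dirty_background_ratio
# lower the soft limit so dirty pages are cleaned up sooner
sysctl -w vm.dirty_background_ratio=5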

What’s Missing?

As mentioned earlier, fragmentation rates are missing for MMAPv1 (this would be a useful addition). Due to the limited nature of the metrics offered for MMAPv1, PMM probably won’t provide the same level of graphs for MMAPv1 compared to what we provide for WiredTiger or RocksDB. There will likely be fewer additions to the graphing capabilities going forward.

If you are using a highly concurrent system, we highly recommend you upgrade to WiredTiger or RocksDB (both also covered in this monitoring series). These engines provide several solutions to MMAPv1 headaches: document-level locking, built-in compression, checkpointing that causes near-zero fragmentation on disk and much-improved visibility for monitoring. We just released Percona Server for MongoDB 3.4, and it provides many exciting features (including these engines).

Look out for more monitoring posts from this series!

MySQL 5.7.17-11 and 5.7.16-10 crashing after upgrade from MySQL 5.6

Latest Forum Posts - February 28, 2017 - 11:20am
We upgraded Percona Server for MySQL from 5.6.29-76.2 to 5.7.16-10 first, but it was crashing, so we upgraded to 5.7.17-11. We see that 5.7.17-11 is also crashing.
So far we have not been able to figure out anything except that it has something to do with LOB/TEXT data.
Following is error and stack trace.

00:01:39 UTC - mysqld got signal 11 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
Attempting to collect some information that could help diagnose the problem.
As this is a crash and something is definitely wrong, the information
collection process might fail.
Please help us make Percona Server better by reporting any
bugs at http://bugs.percona.com/

key_buffer_size=268435456
read_buffer_size=1048576
max_used_connections=417
max_threads=1001
thread_count=408
connection_count=406
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 1556244 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0x7ef95c03c880
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 7ef8e6df7d40 thread_stack 0x40000
/usr/sbin/mysqld(my_print_stacktrace+0x2c)[0xed056c]
/usr/sbin/mysqld(handle_fatal_signal+0x461)[0x7a0631]
/lib64/libpthread.so.0[0x31d080f7e0]
/usr/sbin/mysqld(_ZN10Field_blob15copy_blob_valueEP11st_mem_root+0x28)[0x7e3d08]
/usr/sbin/mysqld(_Z25mysql_prepare_blob_valuesP3THDR4ListI4ItemEP11st_mem_root+0x2b8)[0xe1b468]
/usr/sbin/mysqld(_Z12write_recordP3THDP5TABLEP9COPY_INFOS4_+0x87d)[0xe1c02d]
/usr/sbin/mysqld(_ZN14Sql_cmd_insert12mysql_insertEP3THDP10TABLE_LIST+0x82d)[0xe1ca9d]
/usr/sbin/mysqld(_ZN14Sql_cmd_insert7executeEP3THD+0xc2)[0xe1d2d2]
/usr/sbin/mysqld(_Z21mysql_execute_commandP3THDb+0x18d7)[0xca9e67]
/usr/sbin/mysqld(_ZN18Prepared_statement7executeEP6Stringb+0x357)[0xcd9e57]
/usr/sbin/mysqld(_ZN18Prepared_statement12execute_loopEP6StringbPhS2_+0xca)[0xcda25a]
/usr/sbin/mysqld(_Z19mysqld_stmt_executeP3THDmmPhm+0x13b)[0xcda58b]
/usr/sbin/mysqld(_Z16dispatch_commandP3THDPK8COM_DATA19enum_server_command+0x190f)[0xcb197f]
/usr/sbin/mysqld(_Z10do_commandP3THD+0x1b7)[0xcb24a7]
/usr/sbin/mysqld(handle_connection+0x2a0)[0xd76740]
/usr/sbin/mysqld(pfs_spawn_thread+0x1b4)[0xeed474]
/lib64/libpthread.so.0[0x31d0807aa1]
/lib64/libc.so.6(clone+0x6d)[0x31d00e8aad]

Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
Query (7ef95c12bec0): is an invalid pointer
Connection ID (thread ID): 4170
Status: NOT_KILLED


After converting it to a readable format, we see this:

0xed056c my_print_stacktrace + 44
0x7a0631 handle_fatal_signal + 1121
0x31d080f7e0 _end + -829069216
0x7e3d08 Field_blob::copy_blob_value(st_mem_root*) + 40
0xe1b468 mysql_prepare_blob_values(THD*, List<Item>&, st_mem_root*) + 696
0xe1c02d write_record(THD*, TABLE*, COPY_INFO*, COPY_INFO*) + 2173
0xe1ca9d Sql_cmd_insert::mysql_insert(THD*, TABLE_LIST*) + 2093
0xe1d2d2 Sql_cmd_insert::execute(THD*) + 194
0xca9e67 mysql_execute_command(THD*, bool) + 6359
0xcd9e57 Prepared_statement::execute(String*, bool) + 855
0xcda25a Prepared_statement::execute_loop(String*, bool, unsigned char*, unsigned char*) + 202
0xcda58b mysqld_stmt_execute(THD*, unsigned long, unsigned long, unsigned char*, unsigned long) + 315
0xcb197f dispatch_command(THD*, COM_DATA const*, enum_server_command) + 6415
0xcb24a7 do_command(THD*) + 439
0xd76740 handle_connection + 672
0xeed474 pfs_spawn_thread + 436
0x31d0807aa1 _end + -829101279
0x31d00e8aad _end + -836568275

Reading the stack trace from bottom to top, we can see that MySQL is working on a record that involves BLOB data.

Has someone faced the same issue?

I have also filed a bug https://bugs.launchpad.net/percona-server/+bug/1667552 but so far no response.

Any pointer/clue highly appreciated.

Thanks

SST XtraDB Backup - Use a different tmp directory

Latest Forum Posts - February 28, 2017 - 4:30am
Hi all,

Our current tmp directory is only 5GB, and the SST backup cannot complete because the tmp directory fills up very quickly.

Setting tmpdir under [mysqld] or [xtrabackup] doesn't work for me. I have also checked the permissions. It is still using /tmp for the SST.

We are using the following version:

Server version: 5.7.16-10-57-log Percona XtraDB Cluster (GPL), Release rel10, Revision bec0879, WSREP version 27.19, wsrep_27.19

Any ideas please?

Thanks!

pmm-server's requirements

Latest Forum Posts - February 27, 2017 - 5:14pm
pmm-server (1.1.1) is running on host A, and two pmm-clients (1.1.1) run on hosts B and C.
I ran the pmm-server container with -m 512M, and after a short time a [cgroup out of memory] error occurred.
What are the requirements of pmm-server in terms of memory, CPU, etc.?

Webinar Thursday March 2, 2017: MongoDB Query Patterns

Latest MySQL Performance Blog posts - February 27, 2017 - 5:06pm

Join Percona’s Senior Technical Services Engineer Adamo Tonete on Thursday, March 2, 2017, at 11:00 a.m. PST / 2:00 p.m. EST (UTC-8) as he reviews and discusses MongoDB® query patterns.

Register Now

MongoDB is a fast and simple-to-query schema-free database. It features a smart query optimizer that tries to use the easiest data retrieval method.

In this webinar, Adamo will discuss common query operators and how to use them effectively. The webinar will cover not only common query operations, but also the best practices for their usage.

Register for the webinar here.

Adamo Tonete, Senior Technical Services Engineer

Adamo joined Percona in 2015, after working as a MongoDB/MySQL Database Administrator for three years. As the main database member of a startup, he was responsible for suggesting the best architecture and data flows for a worldwide company in a 24/7 environment. Before that, he worked as a Microsoft SQL Server DBA for a large e-commerce company, mainly on performance tuning and automation. Adamo has almost eight years of experience working as a DBA, and in the past three years, he has moved to NoSQL technologies without giving up relational databases.

MySQL Ransomware: Open Source Database Security Part 3

Latest MySQL Performance Blog posts - February 27, 2017 - 2:28pm

This blog post examines the recent MySQL® ransomware attacks, and what open source database security best practices could have prevented them.

Unless you’ve been living under a rock, you know that there has been an uptick in ransomware for MongoDB and Elasticsearch deployments. Recently, we’re seeing the same for MySQL.

Let’s look and see if this is MySQL’s fault.

Other Ransomware Targets

Let’s briefly touch on how Elasticsearch and MongoDB became easy targets…

Elasticsearch

Elasticsearch® does not implement any access control: neither authentication nor authorization. For this, you need to deploy Elastic's Shield offering. As such, if you have an Elasticsearch deployment that is addressable from the Internet, you're asking for trouble. We see many deployments have some authentication around their access, such as HTTP Basic Auth – though sadly, some don't employ authentication or network isolation. We already wrote a blog about this here.

MongoDB

MongoDB (< 2.6.0) does allow for access control through account creation, but it binds to 0.0.0.0 by default (allowing access from anywhere). This changed in versions >= 2.6.0, where the packaged /etc/mongod.conf binds to localhost by default. Often administrators don't realize this or don't know to look for it. (Using MongoDB? My colleague David Murphy wrote a post on this issue here).
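A quick way to check what a mongod is actually listening on is to look at its sockets; a minimal sketch, assuming the ss utility is available:

ss -ltnp | grep mongod
# 0.0.0.0:27017 means the instance is reachable on every interface;
# 127.0.0.1:27017 means it only accepts local connections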

We began to see incidents where both Elasticsearch and MongoDB had their datasets removed and replaced with a README/note instructing the user to pay a ransom of 0.2BTC (Bitcoin) to the specified wallet address (if they wanted their data back).

MySQL

So is this latest (and similar) attack on MySQL MySQL’s fault? We don’t think so. MySQL and Percona Server® for MySQL by default do not accept authentication from everywhere without a password for the root user.

Let’s go over the various security options MySQL has, and describe some other best practices in order to protect your environment.

Default bind_address=127.0.0.1 in Percona Server for MySQL

MySQL currently still binds to 0.0.0.0 (listening on all network interfaces) by default. However, Percona Server for MySQL and Percona XtraDB Cluster have different defaults, and bind only to 127.0.0.1:3306 in their default configuration (GitHub pull request).

Recall, if you will, CVE-2012-2122. This ALONE should be enough to ensure that you as the administrator use best practices, and ONLY allow access to the MySQL service from known good sources. Do not set up root-level or equivalent access from any host ('%' means any host is allowed). Ideally, you should only allow root access from 127.0.0.1 – or if you must, from a subset of a secured network (e.g., 10.10.0.% would only allow access from 10.10.0.0/24).
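For illustration only, a privileged account scoped to a secured subnet might be created along these lines (the user name, subnet and password are hypothetical placeholders):

mysql -e "CREATE USER 'admin'@'10.10.0.%' IDENTIFIED BY 'a-strong-unique-password';"
mysql -e "GRANT ALL PRIVILEGES ON *.* TO 'admin'@'10.10.0.%';"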

Prevent Access

Also, does the MySQL database really need a publicly accessible IP address? If you do have a valid reason for this, then you should firewall port 3306 and whitelist access only from hosts that need to access the database directly. You can easily use iptables for this.
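A minimal iptables sketch, assuming the application servers live in 10.10.0.0/24 and everything else should be rejected:

iptables -A INPUT -p tcp --dport 3306 -s 10.10.0.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 3306 -j DROP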

Default Users

MySQL DOES NOT by default create accounts that can be exploited for access. This comes later through an administrator’s lack of understanding, sadly. More often than not, the grant will look something like the following.

GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY '123456' WITH GRANT OPTION;

You may scoff at the above (and rightly so). However, don’t discount this just yet: “123456” was the MOST USED password in 2016! So it’s reasonable to assume that somewhere out there this is a reality.

Max Connection Errors

You can set max_connect_errors to a suitably low value to help mitigate a direct attack. This will not prevent a distributed attack, where many thousands of hosts are used. Network isolation is the only way to ensure mitigation against that attack vector.
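The variable can be changed at runtime or in my.cnf; a minimal sketch (the threshold is just an example):

mysql -e "SET GLOBAL max_connect_errors = 10;"
# hosts blocked after exceeding the threshold can be cleared with:
mysql -e "FLUSH HOSTS;"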

MySQL 5.7 Improvements on Security

Default Root Password

Since MySQL 5.7, a random password is generated for the only root user (root@localhost) when you install MySQL for the first time. That password is then written in the error log and has to be changed. Miguel Ángel blogged about this before.
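On a fresh install, the generated password can be pulled from the error log and then changed; a minimal sketch, assuming the default log location on RPM-based systems:

grep 'temporary password' /var/log/mysqld.log
mysql -u root -p -e "ALTER USER 'root'@'localhost' IDENTIFIED BY 'a-new-strong-password';"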

Connection Control Plugin

MySQL 5.7.17 introduced a new open source plugin called Connection Control. When enabled, it delays authentication for users that have failed to log in more than three times (by default). This is also part of Percona Server for MySQL 5.7.17.
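Before testing it, the plugin has to be installed and enabled; a minimal sketch, assuming the plugin library ships with the server (the threshold shown is the default):

mysql -e "INSTALL PLUGIN CONNECTION_CONTROL SONAME 'connection_control.so';"
mysql -e "INSTALL PLUGIN CONNECTION_CONTROL_FAILED_LOGIN_ATTEMPTS SONAME 'connection_control.so';"
mysql -e "SET GLOBAL connection_control_failed_connections_threshold = 3;"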

Here’s an example where the 4th consecutive try caused a one-second delay (default settings were used):

$ time mysql -u bleh2 -pbleh
ERROR 1045 (28000): Access denied for user 'bleh2'@'localhost' (using password: YES)
real    0m0.009s
$ time mysql -u bleh2 -pbleh
ERROR 1045 (28000): Access denied for user 'bleh2'@'localhost' (using password: YES)
real    0m0.008s
$ time mysql -u bleh2 -pbleh
ERROR 1045 (28000): Access denied for user 'bleh2'@'localhost' (using password: YES)
real    0m0.008s
$ time mysql -u bleh2 -pbleh
ERROR 1045 (28000): Access denied for user 'bleh2'@'localhost' (using password: YES)
real    0m1.008s

mysql> SELECT * FROM INFORMATION_SCHEMA.CONNECTION_CONTROL_FAILED_LOGIN_ATTEMPTS;
+---------------------+-----------------+
| USERHOST            | FAILED_ATTEMPTS |
+---------------------+-----------------+
| 'bleh2'@'localhost' |               4 |
+---------------------+-----------------+
1 row in set (0.01 sec)

Password Validation Plugin

MySQL 5.6.6 and later versions also ship with a password validation plugin, which prevents creating users with unsafe passwords (such as 123456) by ensuring passwords meet certain criteria: https://dev.mysql.com/doc/refman/5.7/en/validate-password-plugin.html
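Once the plugin is loaded, its policy can be inspected and a candidate password checked; a minimal sketch:

mysql -e "SHOW VARIABLES LIKE 'validate_password%';"
mysql -e "SELECT VALIDATE_PASSWORD_STRENGTH('123456');"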

Summary

In order to get stung, one must ignore the best practices mentioned above (which, in today's world, should take some effort). These best practices include:

  1. Don’t use a publicly accessible IP address with no firewall configured
  2. Don’t use a root@% account, or other equally privileged access account, with poor MySQL isolation
  3. Don’t configure those privileged users with a weak password, allowing for brute force attacks against the MySQL service

Hopefully, these are helpful security tips for MySQL users. Comment below!

add generated virtual column still copy data

Latest Forum Posts - February 27, 2017 - 1:59pm

When I run the query below, it takes a long time:
alter table record_log add column record_key varchar(255) AS (md5(concat(record_time,record_campId))) VIRTUAL ;
SHOW PROCESSLIST shows it copying data to a tmp table; I assumed it would be done in place without a data copy.

34 | root | localhost | record | Query | 26 | copy to tmp table | alter table record_log add column record_key varchar(32) AS (md5(concat(record_time,reco | 0 |

MySQL version is 5.7.13-6-log Percona Server (GPL), Release 6, Revision e3d58bb

Thanks