Percona build7 with latest patches


We have made new binaries for MySQL 5.0.67 build 7, which include the patches we recently announced.

The -percona release includes:

and the -percona-highperf release additionally includes

You can download RPMs for RedHat / CentOS 4.x and 5.x on x86_64, as well as binaries, sources, and patches, there.


Comments

  1. says

    Hi Vadim,

    I’m noticing some issues with the microslow_innodb.patch. I am using a long_query_time of 1000000, yet it is not logging my queries that last longer than a second. I noticed this is version 1.1 and when I was using 1.0 it worked properly. I just wanted to make sure the time is still in microseconds? Also, I noticed when I set the time to 1, then it would log the queries over 1 second in length.

    Davy

  2. dim says

    I’m having a problem compiling it under Solaris:

    cc -DHAVE_CONFIG_H -I. -I. -I.. -I./../include -I./../../include -I../../include -O -DDBUG_OFF -DDBUG_OFF -fast -DHAVE_RWLOCK_T -DDEBUG_OFF -DUNIV_SOLARIS -c os0file.c
    "os0file.c", line 1953: undefined struct/union member: io_reads
    "os0file.c", line 1954: undefined struct/union member: io_read
    "os0file.c", line 1975: undefined struct/union member: io_reads_wait_timer
    "os0file.c", line 3523: undefined struct/union member: io_reads
    "os0file.c", line 3524: undefined struct/union member: io_read
    cc: acomp failed for os0file.c

    Are there any include files missing? Anything else?
    Thanks in advance for your help!

    Rgds,
    -dim

  3. says

    dim,

    Did you use the tarball?

    mysql-5.0.67-percona-b7-src.tar.gz
    or
    mysql-5.0.67-percona-highperf-b7-src.tar.gz

    If not, I’m afraid the patches may have failed to apply.

  4. dim says

    Yes, I used the tarballs (both) – and I get the same error with each.
    BTW, I saw the same error earlier when I tried to compile build 4, so it is not a recent regression.

    Rgds,
    -dim

  5. says

    dim,

    Hmm… I don’t know why the same files cause an error on Solaris but not on Linux…

    But the struct is defined in "trx0trx.h", so if you add

    #include "trx0trx.h"

    to innobase/os/os0file.c, the compile will succeed.

    Though, the next error may then happen…

  6. dim says

    Well, it seems it was tested more with gcc than with the Sun compiler :-)
    And I can confirm there was no problem compiling on openSUSE 11.0.

    With the Sun compiler (SS12) I next got another error in the InnoDB part, but not with gcc. It’s hard to believe gcc would include a missing file on its own, so I suppose the makefiles follow a different logic when gcc is used (I see "if gcc …" all over them).

    However, even with gcc I’m now blocked by a pure C++ error in /sql:

    sql_class.h:49: error: expected constructor, destructor, or type conversion before "extern"
    sql_class.h:49: error: expected ',' or ';' before "extern"
    sql_class.h:59: error: expected constructor, destructor, or type conversion before "extern"
    sql_class.h:59: error: expected ',' or ';' before "extern"

    And it seems something goes wrong with the DECLS macros:

    #ifdef __cplusplus
    __BEGIN_DECLS
    #endif
    extern ulonglong frequency;
    #ifdef __cplusplus
    __END_DECLS
    #endif

    (same error with SS12 too)

    Any idea?..

    Rgds,
    -Dim

  7. dim says

    Just in case: is it possible the problem comes from a GCC version difference?
    On SUSE I have gcc 4.3 and on Solaris gcc 3.4.3. Have you already compiled your code with gcc 3?

    Rgds,
    -Dim

  8. says

    Dim,

    These symbols (__BEGIN_DECLS, __END_DECLS) are defined in /usr/include/sys/cdefs.h on my openSUSE 10.2 box, and that is a glibc header file.

    Hmm…

  9. dim says

    You know what?

    I’ve explicitly added to sql_class.h:

    #undef __BEGIN_DECLS
    #undef __END_DECLS
    #ifdef __cplusplus
    # define __BEGIN_DECLS extern "C" {
    # define __END_DECLS }
    #else
    # define __BEGIN_DECLS /* empty */
    # define __END_DECLS /* empty */
    #endif

    And it worked! :-))

    Then I got an error about a missing "strsep" function in sql_show.cc, so
    I found and added the following code:

    char *
    strsep(char **stringp, const char *delim)
    {
        char *res;

        if (!stringp || !*stringp || !**stringp)
            return (char *)0;

        res = *stringp;
        while (**stringp && !strchr(delim, **stringp))
            ++(*stringp);

        if (**stringp) {
            **stringp = '\0';
            ++(*stringp);
        }

        return res;
    }
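    [Editor's note: for reference, here is a standalone version of the fallback above, renamed my_strsep (a hypothetical name, to avoid clashing with a system strsep). Note one small divergence from BSD strsep: on an empty remaining string it returns NULL immediately, where BSD strsep would still return one final empty token.]

```c
/* Standalone copy of dim's strsep fallback; the body matches the fix
   posted in the comment above, with the terminator written as '\0'. */
#include <string.h>

char *my_strsep(char **stringp, const char *delim)
{
    char *res;

    /* Nothing left to split: return NULL. */
    if (!stringp || !*stringp || !**stringp)
        return (char *)0;

    res = *stringp;

    /* Advance to the first delimiter character, if any. */
    while (**stringp && !strchr(delim, **stringp))
        ++(*stringp);

    /* Terminate the token and step past the delimiter. */
    if (**stringp) {
        **stringp = '\0';
        ++(*stringp);
    }

    return res;
}
```

    Calling it repeatedly on a buffer holding "a,b,c" with delim "," yields "a", "b", "c", then NULL.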

    Hope it’s ok :-))

    Then at least the compile finished without errors! :-))

    Now I’ll recompile it again in 64-bit mode and see whether it gives me a core dump on startup :-)) and then I hope to finally be able to test it :-))

    Rgds,
    -Dim

  10. dim says

    Yasufumi,

    what would a more or less optimal my.cnf file look like for Percona MySQL to handle, let’s say, 256 aggressive users in read-only mode, and also in read/write (50/50) mode?

    my current config (for 32-bit MySQL):

    [mysqld]
    max_connections=2000
    key_buffer_size=200M
    low_priority_updates=1

    table_cache = 8000
    sort_buffer_size = 2097152

    innodb_file_per_table
    innodb_log_file_size=500M

    innodb_buffer_pool_size=1600M
    innodb_additional_mem_pool_size=20M
    innodb_log_buffer_size=8M

    innodb_checksums=0
    innodb_doublewrite=0
    innodb_support_xa=0
    innodb_thread_concurrency=0
    innodb_flush_log_at_trx_commit=0
    innodb_max_dirty_pages_pct=15

    NOTE: innodb_flush_log_at_trx_commit=0 because I’m interested first in the scalability limits (then I’ll see what kind of storage will be needed to keep redo writing at the required speed).

    Thanks a lot for your help!
    Rgds,
    -Dim

  11. dim says

    NOTE: I’m asking about the config file because "percona b7" is currently performing much worse than "standard" MySQL 5.0.67 from mysql.com…
    And I’m very interested to understand why…

    Rgds,
    -Dim

  12. Vadim says

    Dim,

    To suggest an optimal my.cnf we need to know your workload and server characteristics.
    And what exactly is much worse – can you show some numbers?

  13. says

    I was looking at the 5.1.28 Percona build, and I don’t see the INNODB_BUFFER_POOL_CONTENT table in information schema – did that come later? My "show patches" command only shows "show_patches.patch".

    I’m really looking forward to seeing a 5.1.30 percona build in the very near future, as certain people are promising this will be GA in the next couple of weeks.

  14. dim says

    Vadim,

    My DB server: 48 cores, SPARC64-VII 2500Mhz, 192GB RAM
    Injector is a similar server; Gbit connection between injector and DB;

    Workload:
    1. read-only: 2 simple selects per “transaction”
    2. read-write: 2 simple selects per “transaction” + delete/insert/update of a single row

    no logical contention between queries;
    load is progressively increased: 1, 2, 4, 8, 16, 32, 64, 128, and 256 concurrent sessions;

    The test is running currently (comparing 5.0.67, 5.1.29, 6.0.7, and 5.0.67-percona); I’ll give you real numbers by Monday.

    Rgds,
    -Dim

  15. says

    dim,

    Honestly, I think (others may disagree) the -percona version (even -percona-highperf) may always be somewhat slower than normal MySQL.
    We should choose and discard -percona patches to get more performance…
    Some of the patches must cause a performance regression.

    And,

    I think you should check the build tree of -highperf if you use it.

    innobase/ib_config.h contains
    #define HAVE_ATOMIC_BUILTINS 1
    or
    #define HAVE_ATOMIC_BUILTINS 0

    If it is "0", there is probably almost no effect from -highperf.
    It means your GCC doesn’t support the builtin atomic operations.
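    [Editor's note: a quick way to check the compiler side is to build a small probe using the GCC __sync builtins that this flag gates. This is a hypothetical standalone sketch, not the actual configure test used by the build.]

```c
/* Probe for the GCC atomic builtins behind HAVE_ATOMIC_BUILTINS.
   If this compiles and runs, gcc on this platform supports the
   __sync_* operations the -highperf build wants to use. */
#include <assert.h>

int probe_atomics(void)
{
    long counter = 0;

    __sync_fetch_and_add(&counter, 1);                    /* atomic increment: 0 -> 1 */
    assert(__sync_bool_compare_and_swap(&counter, 1, 2)); /* CAS: 1 -> 2 */
    return (int)counter;                                  /* 2 on success */
}
```

    These builtins first appeared in GCC 4.1, so on an older compiler such as the gcc 3.4.3 mentioned above this will not even compile – which would explain HAVE_ATOMIC_BUILTINS coming out as 0.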

  16. Vadim says

    Yasufumi,

    Actually I think only userstats can be the problem, as Google confirmed mutex contention there.

    But finding the regression is a task for you – I hope you will solve it :)

  17. dim says

    Yasufumi,

    I have:
    #define HAVE_ATOMIC_BUILTINS 1

    so it should be ok for -highperf effect :-)

    A few numbers so far comparing "standard" vs "percona" MySQL 5.0.67:
    – data load is at least 2x faster with "percona"
    – index creation is at least 2x faster with "percona"

    Workload tests – my initial "much worse" observation was due to a cold cache – now each test is replayed several times to compare apples to apples.

    But so far:
    – read-only test:
    – 3000 tps max for "percona"
    – 3500 tps max for "standard"

    – read+write test:
    – 2800 tps for "percona" (+ much more stable)
    – 2500 tps for "standard"

    But testing is still continuing – I’ve found MySQL handles the workload much(!) better with "innodb_thread_concurrency=8" rather than "0" – with "0" there is a lot of mutex contention during processing – it seems better to limit the number of active running threads on the InnoDB side rather than leave them in a savage battle :-)

    Rgds,
    -Dim

  18. Vadim says

    Dim,

    OK, thank you for sharing the results!
    We are looking to fix some regressions in -percona.

    I don’t think the comments here are a good place to communicate back and forth – I have some questions for you – can you drop me an email at vadim at percona?

  19. Aman Gupta says

    If I want to patch a pristine mysql src tree, in what order should I apply the patches? I am seeing failures on innodb_io_pattern.patch when applying them in the order listed above.

  20. Vadim says

    Aman,

    The order of patches is:

    show_patches.patch
    microslow_innodb.patch
    userstatv2.patch
    microsec_process.patch
    innodb_io_patches.patch
    mirror_binlog.patch
    mysqld_safe_syslog.patch
    innodb_locks_held.patch
    innodb_show_bp.patch
    innodb_check_fragmentation.patch
    innodb_io_pattern.patch
    innodb_fsync_source.patch
    innodb_show_hashed_memory.patch
    split_buf_pool_mutex_fixed_optimistic_safe.patch
    innodb_rw_lock.patch
