Assertion failure in ha_innodb.cc

  • Assertion failure in ha_innodb.cc

    I've been getting this error almost twice a week for the past month and haven't been able to track down the source. I'm assuming it's some kind of InnoDB table corruption, but I haven't been able to pin it to a specific table (we have over 100k tables).

    121202  1:38:54  InnoDB: Assertion failure in thread 140707766650624 in file ha_innodb.cc line 4220
    InnoDB: Failing assertion: share->idx_trans_tbl.index_count == mysql_num_index
    InnoDB: We intentionally generate a memory trap.
    InnoDB: Submit a detailed bug report to x
    InnoDB: If you get repeated assertion failures or crashes, even
    InnoDB: immediately after the mysqld startup, there may be
    InnoDB: corruption in the InnoDB tablespace. Please refer to
    InnoDB:
    InnoDB: about forcing recovery.
    01:38:54 UTC - mysqld got signal 6 ;
    This could be because you hit a bug. It is also possible that this binary
    or one of the libraries it was linked against is corrupt, improperly built,
    or misconfigured. This error can also be caused by malfunctioning hardware.
    We will try our best to scrape up some info that will hopefully help
    diagnose the problem, but since we have already crashed, something is
    definitely wrong and this may fail.

    key_buffer_size=33554432
    read_buffer_size=131072
    max_used_connections=136
    max_threads=500
    thread_count=16
    connection_count=16
    It is possible that mysqld could use up to
    key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 1126889 K bytes of memory
    Hope that's ok; if not, decrease some variables in the equation.

    Thread pointer: 0x88dd16c0
    Attempting backtrace. You can use the following information to find out
    where mysqld died. If you see no messages after this, something went
    terribly wrong...
    stack_bottom = 7ff91472be58 thread_stack 0x40000
    /usr/sbin/mysqld(my_print_stacktrace+0x35)[0x7a3ec5]
    /usr/sbin/mysqld(handle_fatal_signal+0x4a4)[0x67f894]
    /lib64/libpthread.so.0(+0xf4a0)[0x7ffbc66644a0]
    /lib64/libc.so.6(gsignal+0x35)[0x7ffbc581e885]
    /lib64/libc.so.6(abort+0x175)[0x7ffbc5820065]
    /usr/sbin/mysqld[0x7fbe18]
    /usr/sbin/mysqld(_ZN7handler7ha_openEP5TABLEPKcii+0x3e)[0x6818ae]
    /usr/sbin/mysqld(_ZN12ha_partition4openEPKcij+0x36d)[0x95553d]
    /usr/sbin/mysqld(_ZN7handler7ha_openEP5TABLEPKcii+0x3e)[0x6818ae]
    /usr/sbin/mysqld(_Z21open_table_from_shareP3THDP11TABLE_SHAREPKcjjjP5TABLEb+0x58c)[0x6001dc]
    /usr/sbin/mysqld(_Z10open_tableP3THDP10TABLE_LISTP11st_mem_rootP18Open_table_context+0xc53)[0x5535c3]
    /usr/sbin/mysqld(_Z11open_tablesP3THDPP10TABLE_LISTPjjP19Prelocking_strategy+0x486)[0x5542d6]
    /usr/sbin/mysqld(_Z20open_and_lock_tablesP3THDP10TABLE_LISTbjP19Prelocking_strategy+0x44)[0x554d24]
    /usr/sbin/mysqld[0x5830a4]
    /usr/sbin/mysqld(_Z21mysql_execute_commandP3THD+0x1216)[0x586a26]
    /usr/sbin/mysqld(_Z11mysql_parseP3THDPcjP12Parser_state+0x333)[0x58a323]
    /usr/sbin/mysqld(_Z16dispatch_command19enum_server_commandP3THDPcj+0x15b2)[0x58b982]
    /usr/sbin/mysqld(_Z24do_handle_one_connectionP3THD+0xd7)[0x623a97]
    /usr/sbin/mysqld(handle_one_connection+0x51)[0x623bd1]
    /lib64/libpthread.so.0(+0x77f1)[0x7ffbc665c7f1]
    /lib64/libc.so.6(clone+0x6d)[0x7ffbc58d170d]

    Trying to get some variables.
    Some pointers may be invalid and cause the dump to abort.
    Query (7ff73ac24930): is an invalid pointer
    Connection ID (thread ID): 5798290
    Status: NOT_KILLED

    The manual page at x contains
    information that should help you find out what is causing the crash.
    121202 01:38:57 mysqld_safe Number of processes running now: 0
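    From what I can tell, the failing assertion compares the number of indexes InnoDB's data dictionary holds for a table (share->idx_trans_tbl.index_count) against the number the MySQL layer sees from the .frm file (mysql_num_index), so presumably some table's two definitions disagree. Below is a rough sketch of queries I've considered for narrowing down candidates; it assumes the XtraDB INNODB_SYS_TABLES and INNODB_SYS_INDEXES information_schema tables are available with the 5.6-style column names (they should be in Percona Server 5.5), and note that the counts can legitimately differ by one for tables without an explicit primary key, since InnoDB adds a hidden GEN_CLUST_INDEX:

      -- Index count per table as seen by InnoDB's data dictionary.
      -- Treat mismatches against the MySQL-layer count below as
      -- candidates only, not a verdict (GEN_CLUST_INDEX on PK-less
      -- tables is a benign off-by-one).
      SELECT ist.NAME AS table_name,
             COUNT(isi.INDEX_ID) AS innodb_index_count
      FROM information_schema.INNODB_SYS_TABLES  ist
      JOIN information_schema.INNODB_SYS_INDEXES isi
        ON isi.TABLE_ID = ist.TABLE_ID
      GROUP BY ist.NAME;

      -- Index count per table as seen by the MySQL layer
      -- (PRIMARY counts as one index here).
      SELECT TABLE_SCHEMA, TABLE_NAME,
             COUNT(DISTINCT INDEX_NAME) AS mysql_index_count
      FROM information_schema.STATISTICS
      GROUP BY TABLE_SCHEMA, TABLE_NAME;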

    I've left the general log on a few times while the crash happened, then checked all the databases that were in use prior to the crash, and didn't find any issues.
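    For reference, the recheck was essentially a CHECK TABLE pass over everything that appeared in the general log. A sketch of how to generate one statement per InnoDB table, for anyone repeating this:

      -- Emit one CHECK TABLE statement per InnoDB table, then pipe the
      -- output back into the mysql client. With 100k+ tables this takes
      -- a long time, so batch it and run it during a quiet window.
      SELECT CONCAT('CHECK TABLE `', TABLE_SCHEMA, '`.`', TABLE_NAME, '`;') AS stmt
      FROM information_schema.TABLES
      WHERE ENGINE = 'InnoDB';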

    I've also restored from a snapshot of our other master (via ec2-consistent-snapshot) and let it catch up via replication, but I'm continuing to see the assertion failures pop up. Five days will go by without issue, with the server taking both reads and writes, but then it crashes. Then, after recovery, it will crash again just two days later. Rinse/repeat.

    Can anybody point me in the right direction? Dumping the entire data set and creating a new server is not an option, as the data is > 1.4 TB. We're running Percona Server 5.5.22 on a CentOS 6 EC2 instance.
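    If someone can help me pin the failure to a specific table, I'm assuming a no-op rebuild of just that one table would be feasible, rather than a full dump and reload. Something like the following sketch, where the table name is only a placeholder:

      -- Rebuilds one table's data, indexes, and dictionary entries in
      -- place, without touching the rest of the 1.4 TB. On 5.5 this
      -- locks the table for the duration of the rebuild. The name below
      -- is a placeholder for whichever table turns out to be bad.
      ALTER TABLE suspect_db.suspect_table ENGINE=InnoDB;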

  • #2

    It's not a known issue and looks like a bug. Can you please report it at https://bugs.launchpad.net/percona-server/5.5/+bugs with all the required information, so the developers can look into it?


    • #3
      I've created a bug here: https://bugs.launchpad.net/percona-server/+bug/1086490

      Please let me know when you guys get a chance to look at it.


      • #4

        Thanks. Our developer will look into it. You can also track progress on the bug page above.


        • #5
          We have not received any help on this issue. We are willing to pay for developer/support time to look into this bug, as it is occurring more frequently now, but we haven't received replies either to the inquiry form submitted on the main site or to our email to oncall@percona.com.

          We just need a quote for the work, and we can then arrange payment. Please let the appropriate parties know.


          • #6
            Hi Blake,

            We received both your email and your inquiry form, and we have forwarded them to our sales department. We will check with sales today and try to give you an answer as soon as possible. Thanks.