Why MySQL Stored Procedures, Functions and Triggers Are Bad For Performance

MySQL stored procedures, functions and triggers are tempting constructs for application developers. However, as I discovered, there can be an impact on database performance when using MySQL stored routines. Not being entirely sure of what I was seeing during a customer visit, I set out to create some simple tests to measure the impact of triggers on database performance. The outcome might surprise you.

Why stored routines are not optimal for performance: the short version

Recently, I worked with a customer to profile the performance of triggers and stored routines. What I learned about stored routines: “dead” code (code in a branch that will never run) can still significantly slow down the response time of a function/procedure/trigger. We need to be careful to clean up what we do not need.

Profiling MySQL stored functions

Let’s compare these four simple stored functions (in MySQL 5.7):

Function 1:

This function simply declares a variable and returns it. It is a dummy function.
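
A minimal sketch of what such a function might look like (the body below is an assumption reconstructed from this description, not the original listing):

DELIMITER $$
CREATE FUNCTION func1() RETURNS INT
BEGIN
  -- declare a variable and return it immediately; no real work is done
  DECLARE r INT DEFAULT 0;
  RETURN r;
END$$
DELIMITER ;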

Function 2:

This function calls another function, levenshtein_limit_n (which calculates the Levenshtein distance). But wait: this code will never run, because the condition IF 1=2 is never true. So, in effect, it is the same as function 1.
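
A sketch along the same lines; the exact signature of levenshtein_limit_n() is an assumption:

DELIMITER $$
CREATE FUNCTION func2() RETURNS INT
BEGIN
  DECLARE r INT DEFAULT 0;
  -- "dead" branch: 1=2 is never true, so levenshtein_limit_n() is never called
  IF 1=2 THEN
    SELECT levenshtein_limit_n('test', 'test1', 1000) INTO r;
  END IF;
  RETURN r;
END$$
DELIMITER ;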

Function 3:

Here there are four conditions, none of which will ever be true: there are four calls of “dead” code. The result of function 3 is the same as for function 2 and function 1.
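
A sketch, assuming the same dead-code branch repeated four times:

DELIMITER $$
CREATE FUNCTION func3() RETURNS INT
BEGIN
  DECLARE r INT DEFAULT 0;
  -- four "dead" branches: none of these conditions can ever be true
  IF 1=2 THEN SELECT levenshtein_limit_n('test', 'test1', 1000) INTO r; END IF;
  IF 2=3 THEN SELECT levenshtein_limit_n('test', 'test2', 1000) INTO r; END IF;
  IF 3=4 THEN SELECT levenshtein_limit_n('test', 'test3', 1000) INTO r; END IF;
  IF 4=5 THEN SELECT levenshtein_limit_n('test', 'test4', 1000) INTO r; END IF;
  RETURN r;
END$$
DELIMITER ;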

Function 4:

This is the same as function 3, but the function it calls does not exist. That does not matter, as the select does_not_exist will never run.
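
A sketch; the function is named func3_nope() to match the benchmark below, and the shape of the non-existent call is an assumption:

DELIMITER $$
CREATE FUNCTION func3_nope() RETURNS INT
BEGIN
  DECLARE r INT DEFAULT 0;
  -- the same four "dead" branches, but calling a function that does not exist;
  -- CREATE FUNCTION still succeeds because stored routine bodies are not fully
  -- resolved until execution, and these branches never execute
  IF 1=2 THEN SELECT does_not_exist('test', 'test1', 1000) INTO r; END IF;
  IF 2=3 THEN SELECT does_not_exist('test', 'test2', 1000) INTO r; END IF;
  IF 3=4 THEN SELECT does_not_exist('test', 'test3', 1000) INTO r; END IF;
  IF 4=5 THEN SELECT does_not_exist('test', 'test4', 1000) INTO r; END IF;
  RETURN r;
END$$
DELIMITER ;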

So all the functions will always return 0. We expect the performance of these functions to be the same or very similar. Surprisingly, that is not the case! To measure the performance I used the “benchmark” function to run each function 1M times.
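
A minimal sketch of that measurement, using the built-in BENCHMARK() function with the 1M iteration count mentioned above (the timings themselves will vary by system):

-- run each function one million times and compare the elapsed time
SELECT BENCHMARK(1000000, func1());
SELECT BENCHMARK(1000000, func2());
SELECT BENCHMARK(1000000, func3());
SELECT BENCHMARK(1000000, func3_nope());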

As we can see, func3() (otherwise identical to func1(), but with four dead-code calls that will never be executed) runs almost 3x slower than func1(); func3_nope() is identical to func3() in terms of response time.

Visualizing all system calls from functions

To figure out what is happening inside the function calls, I used performance_schema / sys schema to create a trace with the ps_trace_thread() procedure (the steps are listed below, with a sketch of the commands after the list):

  1. Get the thread_id for the MySQL connection.
  2. Run ps_trace_thread in another connection, passing the thread_id=49.
  3. At that point I switched to the original connection (thread_id=49) and ran the query being profiled.
  4. The sys.ps_trace_thread procedure collected the data (for 10 seconds, during which I ran select func1()), then finished its collection and created the dot file.
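
As a sketch of those steps (thread_id=49 and the 10-second collection window follow the text above; the output path and the remaining ps_trace_thread() argument values are assumptions with typical settings):

-- 1. In the connection to be traced, find its performance_schema thread id
SELECT sys.ps_thread_id(CONNECTION_ID());

-- 2. In a second connection, start the trace for that thread
--    (arguments: thread id, output .dot file, max runtime and poll interval in
--     seconds, reset collected data, auto-enable instrumentation, debug output);
--    the output path must be writable by the server (see secure_file_priv)
CALL sys.ps_trace_thread(49, '/var/lib/mysql-files/trace-func1.dot', 10, 0.1, TRUE, TRUE, FALSE);

-- 3. Back in the first connection, run the statement to be traced
SELECT func1();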

I repeated these steps for all the functions above and then created charts of the commands.

Here are the results:

Func1()

Execution map for func1()

Func2()

Execution map for func2()

Func3()

Execution map for func3()

 

As we can see, there is an sp/jump_if_not call for every “if” check, followed by an “opening tables” statement (which is quite interesting). So parsing the “IF” condition made a difference.

For MySQL 8.0, we can also look at the MySQL source code documentation for stored routines, which describes how they are implemented. It reads:

Flow Analysis Optimizations
After code is generated, the low level sp_instr instructions are optimized. The optimization focuses on two areas:

  • Dead code removal,
  • Jump shortcut resolution.
These two optimizations are performed together, as they both are a problem involving flow analysis in the graph that represents the generated code.

The code that implements these optimizations is sp_head::optimize().

However, this does not explain why it executes “opening tables”. I have filed a bug.

When slow functions actually make a difference

Well, if we do not plan to run one million of those stored function calls, we will never even notice the difference. However, where it will make a difference is … inside a trigger. Let’s say that we have a trigger on a table: every time we update that table it executes a trigger to update another field. Here is an example: let’s say we have a table called “form” and we simply need to update its creation date:
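
A minimal sketch of such a table and of the bulk update (the column names and the WHERE condition are assumptions):

CREATE TABLE form (
  form_id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  form_created_date DATETIME
) ENGINE=InnoDB;

-- the bulk update used in the examples that follow
UPDATE form SET form_created_date = NOW() WHERE form_id > 5000;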

That is good and fast. Now we create a trigger which will call our dummy func1():
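
A sketch of such a trigger (the trigger name is an assumption):

DELIMITER $$
CREATE TRIGGER form_before_update BEFORE UPDATE ON form
FOR EACH ROW
BEGIN
  DECLARE r INT DEFAULT 0;
  -- call the dummy function; it changes no data, it only burns cycles
  SET r = func1();
END$$
DELIMITER ;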

Now repeat the update. Remember: it does not change the result of the update as we do not really do anything inside the trigger.

Just adding that dummy trigger adds 2x overhead. Even the next trigger, which does not call a function at all, introduces a slowdown:
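
A sketch of such a trigger, which only declares a variable and calls nothing:

DROP TRIGGER IF EXISTS form_before_update;
DELIMITER $$
CREATE TRIGGER form_before_update BEFORE UPDATE ON form
FOR EACH ROW
BEGIN
  -- no function call at all, just a declaration
  DECLARE r INT DEFAULT 0;
END$$
DELIMITER ;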

Now, let’s use func3 (which has “dead” code but is functionally equivalent to func1):
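
A sketch, re-creating the same trigger but calling func3() instead:

DROP TRIGGER IF EXISTS form_before_update;
DELIMITER $$
CREATE TRIGGER form_before_update BEFORE UPDATE ON form
FOR EACH ROW
BEGIN
  DECLARE r INT DEFAULT 0;
  -- func3() returns the same value as func1(), but drags its dead branches along
  SET r = func3();
END$$
DELIMITER ;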

However, running the code from func3 directly inside the trigger (instead of calling a stored function) speeds up the update:
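
A sketch of the same logic inlined into the trigger body rather than wrapped in a stored function:

DROP TRIGGER IF EXISTS form_before_update;
DELIMITER $$
CREATE TRIGGER form_before_update BEFORE UPDATE ON form
FOR EACH ROW
BEGIN
  DECLARE r INT DEFAULT 0;
  -- the same "dead" branches as func3(), but without a stored function call
  IF 1=2 THEN SELECT levenshtein_limit_n('test', 'test1', 1000) INTO r; END IF;
  IF 2=3 THEN SELECT levenshtein_limit_n('test', 'test2', 1000) INTO r; END IF;
  IF 3=4 THEN SELECT levenshtein_limit_n('test', 'test3', 1000) INTO r; END IF;
  IF 4=5 THEN SELECT levenshtein_limit_n('test', 'test4', 1000) INTO r; END IF;
END$$
DELIMITER ;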

Memory allocation

Even if the code will never run, MySQL still needs to parse the stored routine (or trigger) code for every execution. This can potentially lead to a memory leak, as described in this bug.

Conclusion

Stored routines and trigger events are parsed when they are executed. Even “dead” code that will never run can significantly affect the performance of bulk operations (e.g. when the trigger fires for every row of a bulk update). That also means that disabling a trigger by setting a “flag” (e.g. if @trigger_disable = 0 then ...) can still affect the performance of bulk operations.
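
A sketch of that flag pattern (the trigger name and body are assumptions):

DROP TRIGGER IF EXISTS form_before_update;
DELIMITER $$
CREATE TRIGGER form_before_update BEFORE UPDATE ON form
FOR EACH ROW
BEGIN
  DECLARE r INT DEFAULT 0;
  -- the "disable flag" pattern: the real work is skipped when @trigger_disable = 1,
  -- but the trigger is still parsed and entered for every affected row
  IF @trigger_disable = 0 THEN
    SET r = func1();
  END IF;
END$$
DELIMITER ;

Running SET @trigger_disable = 1; before a bulk UPDATE skips the function call, but the per-row trigger overhead described above remains.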

Comments (6)

  • houndegnonm87

    Nice article, very well explained, and thanks for drawing the full picture.

    July 12, 2018 at 4:04 pm
  • dbdemon

    Interesting findings! I tried declaring the functions with the DETERMINISTIC characteristic, but it made no difference.

    I also tried reproducing your results with MariaDB 10.3 and got similar results for func1 vs func2, but func3 performed somewhat better, relatively speaking: func1 x 1M: 1.739 sec, func2 x 1M: 2.249 sec, func3 x 1M: 3.054 sec.

    I also tried reproducing the issue with actual stored procedures. The benchmark function doesn’t support calling stored procedures, so I wrote my own benchmark stored procedure. This appears to introduce some overhead compared to the built-in benchmark function, so the results are not directly comparable, but there appears to be a similar effect with stored procedures, i.e. that func2 is slower than func1, and that func3 is slower than func2.

    July 12, 2018 at 4:37 pm
  • Jerry Wilborn

    Assuming Oracle (or Percona) fixes the bug (feature?) of MySQL opening tables unnecessarily, then it sounds like there would be no performance impact in using the functions (even with never-executed branches). Right?

    Second question: What tables is it even trying to open? You’re doing a select on a function… I’m confused.

    July 13, 2018 at 12:38 pm
  • Zane

    Nice article, thank you.

    July 17, 2018 at 8:23 am
  • Alexander Rubin

    I was trying to find out how this works and found this comment in sql/sql_class.h
    (https://github.com/mysql/mysql-server/blob/4f1d7cf5fcb11a3f84cff27e37100d7295e7d5ca/sql/sql_class.h)

    /*
    Enum enum_locked_tables_mode and locked_tables_mode member are
    used to indicate whether the so-called “locked tables mode” is on,
    and what kind of mode is active.
    Locked tables mode is used when it’s necessary to open and
    lock many tables at once, for usage across multiple
    (sub-)statements.
    This may be necessary either for queries that use stored functions
    and triggers, in which case the statements inside functions and
    triggers may be executed many times, or for implementation of
    LOCK TABLES, in which case the opened tables are reused by all
    subsequent statements until a call to UNLOCK TABLES.
    The kind of locked tables mode employed for stored functions and
    triggers is also called “prelocked mode”.
    In this mode, first open_tables() call to open the tables used
    in a statement analyses all functions used by the statement
    and adds all indirectly used tables to the list of tables to
    open and lock.
    It also marks the parse tree of the statement as requiring
    prelocking. After that, lock_tables() locks the entire list
    of tables and changes THD::locked_tables_mode to LTM_PRELOCKED.
    All statements executed inside functions or triggers
    use the prelocked tables, instead of opening their own ones.
    Prelocked mode is turned off automatically once close_thread_tables()
    of the main statement is called.
    */

    August 13, 2018 at 1:54 pm
  • Tochi

    Though the article is helpful, I don’t like the subject because it seems to discourage stored procedures, functions and triggers. Triggers intrinsically impact performance, I have no doubt. Stored procedures and functions are there for good reasons. The subject should be something like “Tips that prevent stored procedures, triggers and functions from slowing down performance”.

    October 23, 2018 at 10:23 pm
