These days I’m working with a customer whose application is based entirely on stored routines on the MySQL side. Even though I haven’t worked much with stored procedures, I thought it would be a piece of cake. In the end it was, but there’s a catch.
My initial idea was simple: analyze the queries in the slow query log generated by our MySQL build running with long_query_time=0, find the slowest ones, and work on them. It didn’t quite go the way I expected.
As a showcase I created a function called “whatstheweatherlike”. Let’s call it and see what shows up in the slow query log:
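The post doesn’t include the function body itself, but the two statements later revealed by SHOW PROFILES suggest roughly this shape. A hypothetical sketch follows; the `weather` lookup table and the exact message format are assumptions, not the original code:

```sql
-- Hypothetical reconstruction of the showcase function.
-- The `weather` table (temp -> feeling) is assumed.
DELIMITER //
CREATE FUNCTION whatstheweatherlike(in_temp INT)
RETURNS VARCHAR(255)
READS SQL DATA
BEGIN
  DECLARE cur_time TIME;
  DECLARE feels VARCHAR(100);
  -- These two statements are what show up in SHOW PROFILES later on:
  SELECT CURTIME() INTO cur_time;
  SELECT feeling INTO feels FROM weather WHERE temp = in_temp;
  RETURN CONCAT('Its ', in_temp, '°C, time is ', cur_time, ', feels ', feels, '!');
END//
DELIMITER ;
```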
mysql> SELECT whatstheweatherlike(5);
+----------------------------------------------------------+
| whatstheweatherlike(5)                                   |
+----------------------------------------------------------+
| Its 5°C, time is 02:28:05, feels [like] almost summer!   |
+----------------------------------------------------------+
1 row in set (0.00 sec)

host # tail /var/log/mysql/mysql-slow.log
# Time: 090115  2:25:28
# User@Host: me[me] @ localhost []
# Thread_id: 3432  Schema: test
# Query_time: 0.000695  Lock_time: 0.000284  Rows_sent: 1  Rows_examined: 0  Rows_affected: 0  Rows_read: 1
# QC_Hit: No  Full_scan: No  Full_join: No  Tmp_table: No  Tmp_table_on_disk: No
# Filesort: No  Filesort_on_disk: No  Merge_passes: 0
# No InnoDB statistics available for this query
SELECT whatstheweatherlike(5);
Not overly useful for query optimization, is it? I could still aggregate the slow query log with mk-log-parser and find the routines MySQL spends most of its time in, but I don’t want to wade through the complicated logic of a stored procedure just to figure out which queries it executes. This is where Jeremy’s SHOW PROFILE patch comes into play. It is included in all Community-tree-based Percona MySQL builds, since it has been part of the MySQL Community version ever since 5.0.37 (note: not all Percona builds are based on the Community tree). Let me show you how this works:
mysql> set profiling=1;
Query OK, 0 rows affected (0.00 sec)

mysql> SELECT whatstheweatherlike(3);
+-----------------------------------------------------------+
| whatstheweatherlike(3)                                    |
+-----------------------------------------------------------+
| Its 3°C, time is 02:43:25, feels [like] snow is melting   |
+-----------------------------------------------------------+
1 row in set (0.00 sec)

mysql> show profiles;
+----------+------------+-------------------------------------------------------------+
| Query_ID | Duration   | Query                                                       |
+----------+------------+-------------------------------------------------------------+
|        1 | 0.00005100 | SELECT CURTIME() into time                                  |
|        2 | 0.00014100 | SELECT feeling INTO feels FROM weather WHERE temp = in_temp |
+----------+------------+-------------------------------------------------------------+
2 rows in set (0.00 sec)
So now we not only get to see which queries were executed and how much time each of them took, but we can also get a profile for each query individually. I think this is pretty cool, and it helped me a lot. Sure, you may still need to check the routine and trace its logic, so seeing the constants that were actually used rather than the variable names would be even more useful. Nevertheless, this is a great way to find out what’s happening behind the scenes so you can start digging deeper.
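Drilling into one of those statements individually uses the Query_ID from SHOW PROFILES. A brief sketch of the commands involved; the stage names mentioned in the comments are typical for SHOW PROFILE output but vary by server version:

```sql
-- Per-stage timing for the second statement from SHOW PROFILES above;
-- the output lists stages such as "Opening tables", "Sending data"
-- and "freeing items" with their durations.
SHOW PROFILE FOR QUERY 2;

-- Additional columns (CPU time, block I/O) can be requested as well:
SHOW PROFILE CPU, BLOCK IO FOR QUERY 2;
```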
On the other hand, with this kind of manual profiling you can’t really see real-time statistics: if a statement was slow because of locks, for example, you may not see that in the logs. I will ask my colleagues to check how hard it would be to implement a configurable variable for switching between routine-level and query-level logging, so that maybe one day we’ll have both ways of looking at it.