After playing a bit with INSERT … SELECT yesterday, I decided to check whether InnoDB locks are really as efficient in terms of low resource usage as they are advertised.
Let's start with a bit of background. In InnoDB, row-level locks are implemented with a special lock table, located in the buffer pool: a small lock structure is allocated per page (found via a hash on the page), and for each row locked on that page a bit can be set. In theory this can give an overhead as low as a few bits per row, so let's see how it looks in practice.
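As a rough mental model of why the overhead can be so low, consider a toy sketch of such a per-page lock structure. This is an illustration only, not InnoDB's actual layout; the `HEADER_BYTES` constant is a made-up assumption:

```python
# Toy model of a per-page row-lock structure: one struct per
# (transaction, page), holding a small fixed header plus a bitmap
# with one bit per row on the page. NOT InnoDB's real layout.

HEADER_BYTES = 64  # hypothetical fixed overhead per lock struct

def lock_struct_bytes(rows_on_page: int) -> int:
    """Bytes needed to track locks on one page for one transaction."""
    bitmap_bytes = (rows_on_page + 7) // 8  # one bit per row, rounded up
    return HEADER_BYTES + bitmap_bytes

# Locking many rows on the same page amortizes the header, so the
# per-row cost approaches a single bit.
print(lock_struct_bytes(269))   # header + 34 bitmap bytes
```

The key point this models is that the fixed header cost is paid once per page, so per-row cost shrinks the more rows on a page are locked, which is exactly what the measurements below show.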
I used the same sample table as yesterday:
CREATE TABLE `sample` (
  `i` int(10) unsigned NOT NULL auto_increment,
  `j` varchar(255) default NULL,
  PRIMARY KEY  (`i`),
  KEY `j` (`j`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8

mysql> show table status like "sample" \G
*************************** 1. row ***************************
           Name: sample
         Engine: InnoDB
        Version: 10
     Row_format: Compact
           Rows: 1638757
 Avg_row_length: 61
    Data_length: 100253696
Max_data_length: 0
   Index_length: 128974848
      Data_free: 0
 Auto_increment: 1638401
    Create_time: 2006-07-12 07:31:51
    Update_time: NULL
     Check_time: NULL
      Collation: utf8_general_ci
       Checksum: NULL
 Create_options:
        Comment: InnoDB free: 1591296 kB
1 row in set (0.27 sec)
A table with a rather small row size, containing 1638400 rows.
Now let's lock all rows in this table and see how long it takes:
mysql> begin;
Query OK, 0 rows affected (0.00 sec)

mysql> select count(i) from sample lock in share mode;
+----------+
| count(i) |
+----------+
|  1638400 |
+----------+
1 row in set (7.02 sec)
Looking at SHOW INNODB STATUS we can see:
History list length 5
Total number of lock structs in row lock hash table 6092
LIST OF TRANSACTIONS FOR EACH SESSION:
---TRANSACTION 0 42306982, ACTIVE 89 sec, process no 10099, OS thread id 1878960
6093 lock struct(s), heap size 503104
So 503104 bytes are used to lock 1638400 rows, which is less than 3 bits per locked row.
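The per-row overhead follows directly from those two numbers; a quick check:

```python
heap_bytes = 503104    # heap size reported by SHOW INNODB STATUS
rows_locked = 1638400  # rows locked by SELECT ... LOCK IN SHARE MODE

bits_per_row = heap_bytes * 8 / rows_locked
print(round(bits_per_row, 2))  # about 2.46 bits per locked row
```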
Now let's see how much exclusive locks take:
mysql> begin;
Query OK, 0 rows affected (0.00 sec)

mysql> select count(i) from sample for update;
+----------+
| count(i) |
+----------+
|  1638400 |
+----------+
1 row in set (8.60 sec)
Interestingly enough, this statement takes about 1.5 seconds longer (I repeated the runs many times to make sure the result is repeatable), so exclusive locks seem to be more expensive to set.
On the other hand, it consumes exactly the same amount of memory:
History list length 5
Total number of lock structs in row lock hash table 6092
LIST OF TRANSACTIONS FOR EACH SESSION:
---TRANSACTION 0 42306989, ACTIVE 195 sec, process no 10099, OS thread id 1878960
6093 lock struct(s), heap size 503104
MySQL thread id 7429, query id 24542 localhost root
show innodb status
Note that running the statement in consistent-read mode, which does not set any locks, is faster than either locking mode, but as we can see the locking overhead is not that large:
mysql> select count(i) from sample;
+----------+
| count(i) |
+----------+
|  1638400 |
+----------+
1 row in set (4.50 sec)
So we can lock 1638400 rows in about 2.5 seconds of extra time on this pretty old box, which means locking speed should reach a few million rows per second on modern systems.
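That 2.5-second figure is the difference between the locking and non-locking runs, which gives a lock-setting rate for this box:

```python
locked_time = 7.02  # SELECT ... LOCK IN SHARE MODE, seconds
plain_time = 4.50   # same SELECT as a plain consistent read
rows = 1638400

overhead = locked_time - plain_time  # ~2.5 s spent setting locks
rate = rows / overhead
print(int(rate))  # roughly 650,000 locks set per second
```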
So far we have measured the memory used when locking all the rows, meaning every row on each touched page gets locked. In the next test we'll lock only a small portion of the rows (about 1/16), distributed randomly across pages (column j was populated with md5(rand()) values):
mysql> begin;
Query OK, 0 rows affected (0.00 sec)

mysql> select count(*) from sample where j like "5%" lock in share mode;
+----------+
| count(*) |
+----------+
|   102216 |
+----------+
1 row in set (4.44 sec)
Looking at SHOW INNODB STATUS we can see:
Total number of lock structs in row lock hash table 440
LIST OF TRANSACTIONS FOR EACH SESSION:
---TRANSACTION 0 42306999, ACTIVE 133 sec, process no 10099, OS thread id 1878960
441 lock struct(s), heap size 44352
So we locked over 100K rows using about 44KB of memory, which is still quite efficient at less than 4 bits per locked row.
In practice this means memory consumption by row-level locks should not be a problem even for rather large databases: even a billion locked rows would take about half a GB of memory, which is a small fraction of the memory used on serious systems. Furthermore, you are unlikely to need or want to lock every row in your table/database, which makes it even less of a problem.
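The billion-row estimate can be checked with back-of-the-envelope arithmetic, extrapolating both measured per-row overheads (dense locking from the first test, sparse from the second):

```python
# Extrapolate the measured per-row lock overhead to a billion rows.
bits_per_row_dense = 503104 * 8 / 1638400   # all rows locked: ~2.5 bits
bits_per_row_sparse = 44352 * 8 / 102216    # ~1/16 of rows locked: ~3.5 bits

billion = 1_000_000_000
for bits in (bits_per_row_dense, bits_per_row_sparse):
    gb = billion * bits / 8 / 2**30
    print(f"{bits:.2f} bits/row -> {gb:.2f} GB for a billion locked rows")
```

Both cases come in under half a GB, consistent with the estimate above.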