Another 12 MySQL “recipes” for Bitrix24

In our last article, “11 MySQL Cooking Recipes in Bitrix24,” we mostly covered architectural decisions: whether it is worth using cloud services (such as Amazon RDS), which MySQL fork to choose, and so on.

Judging by the feedback, running MySQL competently in large high-load projects is a big and important topic. So we decided to go into more detail about some of the database tuning and administration nuances that we encountered while developing Bitrix24 and that we use daily.

Let me remind you once again that this article (like the previous one) is not a universal “recipe” for perfect MySQL tuning for all occasions. :) No such thing exists. :) But I sincerely believe it will help you solve specific individual problems.

And at the end of the article - a surprise for the most patient readers. :)
1. Configuring Query Cache
A huge number of articles have been written describing how the Query Cache works in MySQL and how to configure and use it.

And yet, despite this, the most common misconception among system administrators tuning a database is: “the more memory we give the cache, the better.”

This is not true.

MySQL does not handle a large Query Cache well. In practice, we have found that with query_cache_size above 512M, processes increasingly often hang for short periods in the “waiting for query cache lock” state (visible in SHOW PROCESSLIST).

Besides, if all of your queries already fit in the cache, there is no point in growing it further. RAM is precious!

Therefore, it is always important to understand what is happening in your project, and how effectively Query Cache is used with the current settings.

The key information for you is here:

mysql> SHOW STATUS LIKE 'Qcache%';
+-------------------------+----------+
| Variable_name           | Value    |
+-------------------------+----------+
| Qcache_free_blocks      | 10541    |
| Qcache_free_memory      | 36381984 |
| Qcache_hits             | 18888719 |
| Qcache_inserts          | 5677585  |
| Qcache_lowmem_prunes    | 1725258  |
| Qcache_not_cached       | 6096307  |
| Qcache_queries_in_cache | 36919    |
| Qcache_total_blocks     | 97285    |
+-------------------------+----------+
8 rows in set (0.00 sec)

The most important things here are the ratios of Qcache_hits to Qcache_inserts and of Qcache_inserts to Qcache_not_cached, as well as Qcache_lowmem_prunes (the number of queries evicted from the cache for lack of memory) and Qcache_free_memory.
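
To make this concrete, here is a small sketch (my own illustration, not from the original article) that turns the counters from the listing above into the ratios in question:

```python
# Sketch: estimating Query Cache efficiency from SHOW STATUS counters.
# The values are the sample numbers from the listing above.
status = {
    "Qcache_hits": 18888719,
    "Qcache_inserts": 5677585,
    "Qcache_not_cached": 6096307,
    "Qcache_lowmem_prunes": 1725258,
}

# Hits per insert: how many times a cached result is reused before it
# is replaced. Values well above 1 mean the cache is paying off.
hits_per_insert = status["Qcache_hits"] / status["Qcache_inserts"]

# Share of cacheable queries that were actually cached.
cached_share = status["Qcache_inserts"] / (
    status["Qcache_inserts"] + status["Qcache_not_cached"]
)

# Share of inserted entries later evicted for lack of memory.
prune_ratio = status["Qcache_lowmem_prunes"] / status["Qcache_inserts"]

print(f"hits/insert:  {hits_per_insert:.1f}")
print(f"cached share: {cached_share:.0%}")
print(f"prune ratio:  {prune_ratio:.0%}")
```

With these sample numbers each cached result is reused about 3.3 times, and roughly a third of inserted entries are later pruned: a sign worth watching, not a verdict by itself.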

It’s best not to look at these statistics only occasionally, but to keep analytics at hand. They can be collected with various monitoring tools, for example Munin.
Watching the graphs over time (say, once a day), we eventually concluded that these settings are quite sufficient for one of our servers:

query_cache_size = 128M
query_cache_limit = 2M

Of course, the picture may well be different for your project.

2. innodb_buffer_pool_size
The Buffer Pool size is one of the most important InnoDB settings. It is the size of the memory buffer InnoDB uses to cache table data and indexes (and let's recall right away that, to avoid double caching by MySQL and the operating system, it is worth setting innodb_flush_method = O_DIRECT).

On the part of the MySQL developers, setting the default to 8M is one of the greatest crimes. :)
Ideally, innodb_buffer_pool_size should be large enough for your entire database to fit in memory. At the same time, remember that the system must stay balanced on memory (we talked about this in the previous article): if you inflate innodb_buffer_pool_size so much that the whole system goes into swap, nothing good will come of it.

A good indicator of a proper setup is the Buffer pool hit rate (in the output of SHOW ENGINE INNODB STATUS):

Buffer pool hit rate 994 / 1000, young-making rate 6 / 1000 not 0 / 1000

If the value is close to "1000 / 1000", all is well. Otherwise, increase innodb_buffer_pool_size; and if there is not enough memory for that, add memory.
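
The same hit rate can also be derived from two SHOW STATUS counters; here is a minimal sketch (the counter values are hypothetical, chosen to match the listing above):

```python
# Sketch: computing the Buffer Pool hit rate from SHOW STATUS counters.
# Innodb_buffer_pool_read_requests counts all logical read requests;
# Innodb_buffer_pool_reads counts reads that had to go to disk.
# The counter values below are made up for illustration.
read_requests = 1_000_000
disk_reads = 6_000

# Same "N / 1000" form that SHOW ENGINE INNODB STATUS prints.
hit_rate = 1000 * (read_requests - disk_reads) // read_requests
print(f"Buffer pool hit rate {hit_rate} / 1000")
```

With 6,000 of a million reads missing the pool, this prints the familiar "994 / 1000".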

3. innodb_buffer_pool_instances
By default, InnoDB uses a single instance for the Buffer Pool.

However, it is possible to split it into several instances, and in some cases MySQL works with them much more efficiently.

The Buffer Pool should be divided into several instances if it exceeds 2 GB; each instance should be 1 GB or larger.

And here is an important question that confuses many: is innodb_buffer_pool_size the total pool size or the size of one instance?

The answer is right in the documentation: it is the total size. So, for example, this configuration:

innodb_buffer_pool_size = 4096M
innodb_buffer_pool_instances = 4

... means that each instance will occupy 1 GB.

Be careful to avoid incidents here: one subscriber to a MySQL DBA mailing list described his experience running MySQL with innodb_buffer_pool_size = 1024M and innodb_buffer_pool_instances = 64 (intending to allocate 64 GB), and was outraged that performance was somehow poor... :)
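
To avoid exactly that incident, it helps to keep the arithmetic in mind: the per-instance size is the total divided by the instance count. A tiny sanity check (my own sketch, not from the article):

```python
# Sketch: sanity-checking Buffer Pool instance sizing.
# innodb_buffer_pool_size is the TOTAL size; each instance
# gets total / instances.
def per_instance_mb(total_mb: int, instances: int) -> int:
    """Size of a single Buffer Pool instance, in megabytes."""
    return total_mb // instances

# The configuration from the article: 4096M total over 4 instances.
assert per_instance_mb(4096, 4) == 1024  # 1 GB each: fine

# The "incident" configuration: 1024M over 64 instances.
size = per_instance_mb(1024, 64)
print(size, "MB per instance")  # far below the 1 GB guideline
```
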
4. innodb_io_capacity
An interesting parameter which, it would seem (according to the official documentation :)), does not affect performance much; nevertheless, in practice it yields a noticeable gain when configured correctly.

innodb_io_capacity sets a limit on input-output operations (in IOPS) for those InnoDB operations that run in the background (for example, flushing pages from the Buffer Pool to disk).

The default value is 200.
Too small a value makes these background operations lag behind; too large a value flushes data from the Buffer Pool too quickly.

Ideally, you should set the value corresponding to the actual performance of your disk system (again, in IOPS).
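
As a rough sizing sketch (my own back-of-the-envelope, not from the article): if you cannot benchmark the array directly, the sustainable IOPS of a RAID can be approximated from typical per-disk figures, and innodb_io_capacity set accordingly.

```python
# Sketch: back-of-the-envelope estimate for innodb_io_capacity.
# Per-disk IOPS figures are ballpark assumptions, not measurements;
# ideally, benchmark the actual disk system instead.
PER_DISK_IOPS = {"7200rpm": 100, "15000rpm": 200, "ssd": 5000}

def estimate_io_capacity(disk_type: str, data_disks: int) -> int:
    """Approximate total IOPS, assuming I/O spreads over data disks."""
    return PER_DISK_IOPS[disk_type] * data_disks

# E.g. RAID10 of 8 x 15k rpm disks: 4 mirrored pairs carry the writes.
print(estimate_io_capacity("15000rpm", 4))
```
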

5. innodb_file_per_table
By default, InnoDB in MySQL stores the data and indexes of all tables in a single shared tablespace, the ibdata1 file.

If the innodb_file_per_table option is enabled, then a separate table_name.ibd file is created for each table, storing its data and indexes.

Does this make sense?

My personal opinion: if you use stock MySQL, you do not need it. Storing tables in separate files can reduce performance, as the number of “expensive”, resource-intensive file open operations in the system increases dramatically.

We ourselves use Percona Server. And we do use innodb_file_per_table. Here's why:

  • If in stock MySQL one or several tables get corrupted for some reason, the entire database “stands still”. In Percona you can use the option innodb_corrupt_table_action = assert, and then, with innodb_file_per_table, the “broken” table is marked as corrupted while the database as a whole keeps working.
  • Some operations that clearly lose performance with innodb_file_per_table can be “sped up”. For example, dropping tables: Percona has the option innodb_lazy_drop_table = 1, which performs such operations in the background without degrading overall system performance.
  • With innodb_file_per_table turned on, XtraBackup lets you do fast binary import / export of individual tables.

6. max_connect_errors
Another criminal :) conspiracy of the MySQL developers: the default value of max_connect_errors is 10.
This means that in any more or less active project, after any unexpected failure, even a short-lived one (say, a wrong password in the scripts or a transient network problem), the host establishing the connections will be blocked once it exceeds that number of failed connection attempts. And it stays blocked until the MySQL server is restarted or the FLUSH HOSTS command is executed.

In other words, until manual intervention (unless you have prudently set up a cron script that runs FLUSH HOSTS every few minutes :)), your project will be down. Which is unpleasant if it happens at night and you have no round-the-clock monitoring.

It is better to protect yourself in advance and set max_connect_errors to a large value. For example:

max-connect-errors = 10000

7. Temporary tables
If the amount of RAM in the system allows it, it is always better to keep temporary tables in memory.

Organizing this is easy enough. In MySQL settings:

tmpdir = /dev/shm

In the file system and partition settings (on Linux, in /etc/fstab):

# <file system>   <mount point>   <type>   <options>   <dump>   <pass>
tmpfs             /dev/shm        tmpfs    defaults    0        0

8. Size of temporary tables
There are two similar parameters that are responsible for the size of tables in memory:

max_heap_table_size = 64M
tmp_table_size = 64M

max_heap_table_size - the maximum size of MEMORY tables that a user can create.
tmp_table_size - the maximum size of an implicit temporary table that will be created in memory; anything larger goes to disk. (The effective in-memory limit is the smaller of the two values, which is why they are usually set together.)

The less disk activity, the better. So, if the amount of RAM in the system allows it (remember the memory balance), it is better to handle all temporary tables in memory.
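
Whether you are succeeding can be checked with the Created_tmp_disk_tables and Created_tmp_tables counters from SHOW GLOBAL STATUS; a sketch with made-up counter values:

```python
# Sketch: share of temporary tables that spilled to disk.
# Counters come from SHOW GLOBAL STATUS; values here are hypothetical.
created_tmp_tables = 50_000
created_tmp_disk_tables = 2_500

disk_share = created_tmp_disk_tables / created_tmp_tables
print(f"{disk_share:.0%} of temporary tables went to disk")

# A rising share suggests raising tmp_table_size / max_heap_table_size,
# or trimming the queries that build huge temporary results.
```
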

9. table_cache and table_definition_cache

table_cache = 4096
table_definition_cache = 4096

These two options control the size of the open tables cache and of the table definition cache (in newer MySQL versions, table_cache is called table_open_cache).

The right value of table_cache depends directly on the number of tables in your system, on the number of tables opened per query (tables joined via JOINs, for example), and on the number of open connections to the database.

table_definition_cache defines the size of the cache for table structures (.frm files). The more tables in the system, the larger table_definition_cache should be.
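
To judge whether table_cache is large enough, compare Open_tables with the cache size and watch how fast Opened_tables grows; a sketch with hypothetical counter values:

```python
# Sketch: estimating table cache pressure from SHOW GLOBAL STATUS.
# Opened_tables is cumulative; dividing by Uptime gives the rate at
# which tables are opened from scratch (cache misses).
# All counter values below are hypothetical.
table_cache = 4096
open_tables = 4096         # the cache is full
opened_tables = 1_250_000  # cumulative since server start
uptime_sec = 500_000

miss_rate = opened_tables / uptime_sec
print(f"{miss_rate:.1f} table opens/sec")

# A full cache plus a high, growing open rate suggests raising table_cache.
if open_tables >= table_cache and miss_rate > 1:
    print("consider increasing table_cache")
```
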

10. Fighting long queries
In the last article we already mentioned that Percona Server has a good tool for assessing overall system performance ( SELECT * FROM INFORMATION_SCHEMA.QUERY_RESPONSE_TIME ).

In addition, any MySQL can log all "slow" queries so that they can be analyzed separately.

When using Percona Server, the slow query log becomes much more informative:

log_output = FILE
slow_query_log = 1
slow_query_log_file = mysql_slow.log
long_query_time = 1

log_slow_verbosity = microtime,query_plan,innodb

All queries taking longer than 1 second are written to mysql_slow.log. Unlike the standard log, an entry looks something like this:

# Time: 120712  9:43:47
# User@Host: user[user] @  []
# Thread_id: 3513565  Schema: user  Last_errno: 0  Killed: 0
# Query_time: 1.279800  Lock_time: 0.000053  Rows_sent: 0  Rows_examined: 1  Rows_affected: 0  Rows_read: 0
# Bytes_sent: 52  Tmp_tables: 0  Tmp_disk_tables: 0  Tmp_table_sizes: 0
# InnoDB_trx_id: 33E7689B
# QC_Hit: No  Full_scan: No  Full_join: No  Tmp_table: No  Tmp_table_on_disk: No
# Filesort: No  Filesort_on_disk: No  Merge_passes: 0
#   InnoDB_IO_r_ops: 0  InnoDB_IO_r_bytes: 0  InnoDB_IO_r_wait: 0.000000
#   InnoDB_rec_lock_wait: 0.000000  InnoDB_queue_wait: 0.000000
#   InnoDB_pages_distinct: 4
UPDATE b_user_option SET `COMMON` = 'N', `VALUE` = 'a:19', `NAME` = 'openTab', `CATEGORY` = 'IM' WHERE ID=1719;

We see not only the query execution time and the number of examined rows, but much more detailed information: whether there was a full scan, whether temporary tables were used, the InnoDB state.

All this helps a lot in analyzing slow queries and debugging them.
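
The extended header is also easy to post-process; here is a minimal sketch (my own, not a Percona tool) that pulls a few fields out of such an entry:

```python
import re

# Sketch: extracting fields from a Percona extended slow-log header.
# A shortened fragment of the entry shown above.
entry = """\
# Query_time: 1.279800  Lock_time: 0.000053  Rows_sent: 0  Rows_examined: 1
# QC_Hit: No  Full_scan: No  Full_join: No  Tmp_table: No  Tmp_table_on_disk: No
"""

def parse_header(text: str) -> dict:
    """Collect 'Key: value' pairs from the comment lines."""
    fields = {}
    for key, value in re.findall(r"(\w+): (\S+)", text):
        fields[key] = value
    return fields

fields = parse_header(entry)
print(fields["Query_time"], fields["Full_scan"])  # 1.279800 No
```

Aggregating such dictionaries over the whole log (say, grouping by normalized query text) quickly shows which queries do full scans or spill temporary tables to disk.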

11. Detailed statistics without Percona
Even if you use stock MySQL, it too has good tools for debugging queries (provided, of course, that you have already "caught" and identified them :), for example with that same slow query log).

There is such a thing as profiles. Do you use them? No? You should!

mysql> SHOW PROFILES;
Empty set (0.02 sec)

mysql> SHOW PROFILE;
Empty set (0.00 sec)

We turn on profiling and look at any query:

mysql> SET profiling = 1;
Query OK, 0 rows affected (0.00 sec)

mysql> SELECT COUNT(*) FROM mysql.user;
+----------+
| COUNT(*) |
+----------+
|     3024 |
+----------+
1 row in set (0.09 sec)

mysql> SHOW PROFILES;
+----------+------------+---------------------------------+
| Query_ID | Duration   | Query                           |
+----------+------------+---------------------------------+
|        1 | 0.09104400 | SELECT COUNT(*) FROM mysql.user |
+----------+------------+---------------------------------+
1 row in set (0.00 sec)

mysql> SHOW PROFILE FOR QUERY 1;
+--------------------------------+----------+
| Status                         | Duration |
+--------------------------------+----------+
| starting                       | 0.000018 |
| Waiting for query cache lock   | 0.000004 |
| Waiting on query cache mutex   | 0.000004 |
| checking query cache for query | 0.000041 |
| checking permissions           | 0.000007 |
| Opening tables                 | 0.090854 |
| System lock                    | 0.000013 |
| init                           | 0.000012 |
| optimizing                     | 0.000007 |
| executing                      | 0.000010 |
| end                            | 0.000005 |
| query end                      | 0.000004 |
| closing tables                 | 0.000031 |
| freeing items                  | 0.000029 |
| logging slow query             | 0.000003 |
| cleaning up                    | 0.000004 |
+--------------------------------+----------+
16 rows in set (0.00 sec)

We immediately see where the bottleneck is: the network, disk work, cache use, or something else. In this example it is clearly "Opening tables".
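
Finding the bottleneck in such a profile can even be automated; a small sketch (my own) using the numbers from the listing above:

```python
# Sketch: picking the slowest stage out of SHOW PROFILE output.
# The (status, duration) pairs are taken from the listing above,
# abridged to the largest entries.
profile = [
    ("starting", 0.000018),
    ("checking query cache for query", 0.000041),
    ("Opening tables", 0.090854),
    ("System lock", 0.000013),
    ("closing tables", 0.000031),
    ("freeing items", 0.000029),
]

total = sum(d for _, d in profile)
status, duration = max(profile, key=lambda row: row[1])
print(f"bottleneck: {status} ({duration / total:.0%} of {total:.6f}s)")
```
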

12. How to use the information from a profile?

Suppose you have learned to find individual long queries (it is not that hard: the slow query log and SHOW PROCESSLIST will help), and you can correctly assess the state of the system as a whole ( SELECT * FROM INFORMATION_SCHEMA.QUERY_RESPONSE_TIME , plus external monitoring: for example, Nagios for real time, Munin for analytics). Then it is extremely important to understand what can affect performance in the system:

  • all hardware resources of the system: for a database there is no such thing as "too much" CPU, RAM, or disk;
  • locks (at the table level, more often in MyISAM; at the row level, in InnoDB);
  • internal locks (for example, "waiting for query cache lock").

With this understanding, analyzing the data from a profile or, say, from the extended slow query log, you can always correctly choose a further strategy for working with the database: in which cases to upgrade the hardware, in which to change certain settings, and where, perhaps, to reorganize the data and query structures.

* * *
Happy debugging, and successful operation of databases of any size under any load! :)
* * *
Thank you for reading to this place! :)
I hope our tips on working with MySQL will be useful to you!

And since we are talking about our experience operating MySQL in the Bitrix24 project, we would like to offer a small gift to all readers of our blog on Habr.

Register at Bitrix24 using the specified link and get twice as much disk space (10 GB) on the free plan!

If you don't yet know what Bitrix24 is, there is a detailed description on our website. :)