I've been involved in very tight optimization recently, where each percent was important, so I spent a while checking
how exactly innodb_thread_concurrency works. It was worth it: setting it properly yielded some 20% extra performance.
In theory, innodb_thread_concurrency was described as the variable that limits the number of threads that can run
inside the Innodb kernel; the rest wait in a queue until a slot is free. Based on this explanation,
the generic advice was to set this value to 2*(num_cpus+num_disks) so you do not get too many threads running
at a time or too many disk requests outstanding.
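As a quick illustration, the rule of thumb above is simple arithmetic (the CPU and disk counts below are just example numbers):

```python
# Rule-of-thumb starting value for innodb_thread_concurrency,
# per the generic advice above: 2 * (num_cpus + num_disks).
def suggested_thread_concurrency(num_cpus, num_disks):
    return 2 * (num_cpus + num_disks)

# e.g. a box with 4 CPUs and 2 disks
print(suggested_thread_concurrency(4, 2))  # prints 12
```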
It turns out that in practice it is not that simple.
The first problem is how the wait is implemented. Instead of waiting in the queue right away when no slot is available,
a thread that does not hold any locks will sleep for 50ms (recently changed to 10ms) before starting to wait.
As I understand it, this is done to give preference to threads that already hold locks when they re-enter, compared
to new transactions.
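The admission policy described above can be sketched roughly like this (an illustrative model, not the actual Innodb code; the function name and return values are my own):

```python
SLEEP_BEFORE_WAIT_MS = 50  # 50 ms originally, recently changed to 10 ms

def admission_decision(threads_inside, concurrency_limit, holds_locks):
    """Rough model of what a thread does when it tries to enter the
    Innodb kernel. Not the real implementation from srv0srv.c."""
    if threads_inside < concurrency_limit:
        return "enter"           # free slot: run inside the kernel
    if holds_locks:
        return "wait_in_queue"   # re-entering threads with locks get preference
    return "sleep_then_wait"     # new transactions sleep first, then queue

# With all slots taken, a fresh transaction sleeps before queuing
print(admission_decision(8, 8, holds_locks=False))  # prints sleep_then_wait
```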
The effect of this is that if you have a lot of short single-statement transactions and the number of connections is
significantly higher than innodb_thread_concurrency, you may get a lot of threads just sleeping before they enter, while your
system has a significant idle portion. This can be spotted by looking at SHOW INNODB STATUS: you will see a lot of threads
in the "sleeping to enter innodb queue" state and a very small number of threads inside of Innodb.
This wait is happily easy to reduce or eliminate (which is the best solution for some loads): check the srv0srv.c file
and comment out the os_wait(50000) line in the innodb_thread_concurrency implementation function.
The next interesting item that can be adjusted is the number of "tickets" a thread gets. Let's clarify what those tickets are.
It would be too expensive to try to enter the Innodb kernel for each handler operation (which can happen hundreds of times per second), so instead a thread gets a number of "entry rights" at once and only gives up its slot after it has used all of its tickets or statement processing is complete. The default value for these "tickets" is 500, which is good enough for
short-running statements. For long statements you might wish to increase it, especially if you care about throughput more
than about latency for competing short statements. It is defined as SRV_FREE_TICKETS_TO_ENTER in srv0srv.c.
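The ticket accounting can be modeled in a few lines (illustrative only; the real bookkeeping lives in srv0srv.c):

```python
def queue_reentries(handler_ops, tickets_per_entry=500):
    """Count how many times a statement must pass through the
    concurrency queue, given that each successful entry grants
    tickets_per_entry handler operations (default 500, matching
    SRV_FREE_TICKETS_TO_ENTER)."""
    tickets = 0
    entries = 0
    for _ in range(handler_ops):
        if tickets == 0:
            entries += 1               # out of tickets: re-enter the queue
            tickets = tickets_per_entry
        tickets -= 1                   # each handler op spends one ticket
    return entries

# A statement doing 1200 handler operations re-enters the queue 3 times
# with the default, but only once if the ticket count is raised to 2000.
print(queue_reentries(1200))        # prints 3
print(queue_reentries(1200, 2000))  # prints 1
```

This is why raising the ticket count helps long statements: each pass through the queue is overhead, and fewer passes means more uninterrupted work inside the kernel.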
Finally, let's look at how innodb_thread_concurrency values may affect performance. It turns out a lot depends on the workload.
There are multiple things to consider.
If you have short transactions and you're bound by log IO, you might wish to have a large innodb_thread_concurrency
(or even set it over 500, which disables the limit entirely), as this will allow group commit to work more efficiently - committing more concurrent transactions with a single log write.
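To see why this helps, here is some back-of-the-envelope arithmetic (the numbers are made up for illustration): one log write can flush every transaction that reached commit while the previous write was in flight, so log writes scale with commit groups, not with individual transactions.

```python
import math

def log_writes(commits, avg_group_size):
    """Illustrative: number of log writes (fsyncs) needed to commit
    `commits` transactions when group commit batches them
    `avg_group_size` at a time."""
    return math.ceil(commits / avg_group_size)

# 1000 commits: one write each, vs. groups of 10 concurrent commits
print(log_writes(1000, 1))   # prints 1000
print(log_writes(1000, 10))  # prints 100
```

More threads allowed inside Innodb means more commits can overlap in time, which makes the average group larger.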
If there are a lot of short-term latches (the number of OS Waits is large in SHOW INNODB STATUS) and your load is mainly
CPU bound, you might try reducing innodb_thread_concurrency; NUM_CPUS*2 could be a good value.
On the other hand, if there are a lot of row locks due to competing transactions, you might wish to use relatively larger
values to avoid excessive thread switching. If a transaction running inside the Innodb kernel has to wait on a row lock,
it has to exit the Innodb queue to let the other threads run (or unexpected deadlocks would happen). Exiting and re-entering the Innodb
kernel all the time can cause an excessive number of context switches, which wastes a lot of CPU time.
So this is an option worth experimenting with.