MariaDB Benchmarking – Low Transactions and Queries per Second

The question:

We have MariaDB 10.3 on a Red Hat server (VM) with 90 GB of RAM and 20 vCPUs. I’m trying to optimize the database. I installed sysbench for benchmarking, and after a 50-second test I got these values (with default MariaDB variables):

#Benchmarking command:
    sysbench oltp_read_write --threads=2 --report-interval=3 --histogram \
    --time=50 --table-size=1000000 --db-driver=mysql --mysql-host=firstserver \
    --mysql-db=sbtest --mysql-user=sbtest_user --mysql-password=password run

SQL statistics:
    queries performed:
        read:                            271040
        write:                           77440
        other:                           38720
        total:                           387200
    transactions:                        19360  (387.15 per sec.)
    queries:                             387200 (7742.96 per sec.)
    ignored errors:                      0      (0.00 per sec.)
    reconnects:                          0      (0.00 per sec.)

General statistics:
    total time:                          50.0054s
    total number of events:              19360

Latency (ms):
         min:                                    3.75
         avg:                                    5.16
         max:                                   26.99
         95th percentile:                        6.55
         sum:                                99964.49

Threads fairness:
    events (avg/stddev):           9680.0000/2.00

For this benchmark, innodb_buffer_pool_size was only 2 GB.
I tried to optimize the DB to get more transactions and queries per second by changing these variables:

#set buffer pool size (was 2G)
innodb_buffer_pool_size=70G

#set log file size (was 64MB)
innodb_log_file_size=2G

#set log buffer size (was 16MB)
innodb_log_buffer_size=128M

#set in-memory temporary table size (was 16MB)
tmp_table_size=64M
max_heap_table_size=64M

#set query cache (was 1MB)
query_cache_size=64M

But the results barely changed.

The Solutions:

Below are the methods you can try. The first solution is probably the best; try the others if it doesn’t work. Senior developers don’t just copy and paste – they read the methods carefully and apply them thoughtfully to each case.

Method 1

I notice you have 20 CPU cores, but you’re only running sysbench with 2 threads. You can probably get better results with more threads. A common guideline is 2× the number of CPU cores, so try at least --threads=40.
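
As a sketch of that sweep (the host, database, and credentials below are the placeholders from the question, and the thread counts are just a reasonable ladder up to and past 2× the core count):

```shell
# Print one sysbench command per thread count; pipe the output to sh to run
# the whole sweep, or paste the lines individually. Host and credentials are
# the placeholders from the question -- substitute your own.
sweep_threads() {
  for t in 2 10 20 40 80; do
    echo "sysbench oltp_read_write --threads=$t --time=50" \
         "--table-size=1000000 --db-driver=mysql --mysql-host=firstserver" \
         "--mysql-db=sbtest --mysql-user=sbtest_user --mysql-password=password run"
  done
}
sweep_threads
```

Watch where transactions per second stop increasing as the thread count grows; that knee is the current bottleneck, whether it is the server or sysbench itself.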

I have also found that sysbench itself has scalability limits (as any client application would). To get the best results, I ran multiple client hosts, all running sysbench and connecting to the same database over the network.

Increasing the buffer pool 35x would not help at all if your data is small anyway, which I suppose it is for a sysbench run. It’s like upgrading your 35,000 square foot warehouse to a million square foot warehouse to store your personal photo collection.

It’s not clear why you thought increasing the other variables would help. Did you have some measurement that pointed to those being bottlenecks? How many times did your InnoDB commits have to wait for the log buffer to flush? What percentage of your queries used temp tables but had to write them to disk instead of keeping them in memory?
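
Both of those questions can be answered from status counters before changing anything. A sketch (the SHOW GLOBAL STATUS call is shown as a comment because it needs live server credentials; the numbers fed to the helper are invented for illustration):

```shell
# Relevant counters, fetched with e.g.:
#   mysql -h firstserver -u sbtest_user -p -e "SHOW GLOBAL STATUS WHERE
#     Variable_name IN ('Innodb_log_waits','Created_tmp_tables','Created_tmp_disk_tables')"
#
# Innodb_log_waits counts commits that had to wait for a log-buffer flush:
# if it stays at 0, a bigger innodb_log_buffer_size buys nothing.
# The helper below turns the temp-table counters into the percentage that
# spilled to disk -- the number that would justify raising tmp_table_size.
tmp_disk_pct() {
  # $1 = Created_tmp_disk_tables, $2 = Created_tmp_tables
  awk -v d="$1" -v t="$2" 'BEGIN { printf "%.1f\n", (t ? 100 * d / t : 0) }'
}
tmp_disk_pct 120 4800   # invented sample counters -> 2.5 (percent)
```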

My point is that making random tuning choices to see what effect it has on the benchmark is not going to lead you to optimize your server. You’re not approaching it like a scientist or like an engineer. If one of these made a difference, how do you know which one was important? Or would you therefore make all the same changes on your production MySQL server? If so, how is that not a superstitious ritual, like throwing salt over your shoulder for good luck?

Method 2

To help other DBAs, I want to add some passages from the O’Reilly book “High Performance MySQL” to Bill’s answer:

What Not to Do

You might be expected (or believe that you’re expected) to set up a benchmark suite and “tune” your server by changing its configuration iteratively in search of optimal settings. This usually is not something we advise most people to do. It requires so much work and research, and the potential payoff is so small in most cases, that it can be a huge waste of time. You are probably better off spending that time on other things such as checking your backups, monitoring changes in query plans, and so on.

You should not “tune by ratio.” The classic “tuning ratio” is the rule of thumb that your InnoDB buffer pool hit ratio should be higher than some percentage, and you should increase the cache size if the hit rate is too low. This is very wrong advice. Regardless of what anyone tells you, the cache hit ratio has nothing to do with whether the cache is too large or too small. To begin with, the hit ratio depends on the workload—some workloads simply aren’t cacheable no matter how big the cache is—and secondly, cache hits are meaningless, for reasons we’ll explain later. It sometimes happens that when the cache is too small, the hit rate is low, and increasing the cache size increases the hit rate. However, this is an accidental correlation and does not indicate anything about performance or proper sizing of the cache.

The problem with correlations that sometimes appear to be true is that people begin to believe they will always be true. Oracle DBAs abandoned ratio-based tuning years ago, and we wish MySQL DBAs would follow their lead. We wish even more fervently that people wouldn’t write “tuning scripts” that codify these dangerous practices and teach them to thousands of people. This leads to our next suggestion of what not to do: don’t use tuning scripts! There are several very popular ones that you can find on the internet. It’s probably best to ignore them.

We also suggest that you avoid the word tuning, which we’ve used liberally in the past few paragraphs. We favor configuration or optimization instead (as long as that’s what you’re actually doing). The word tuning conjures up images of an undisciplined novice who tweaks the server and sees what happens. We suggested in the previous section that this practice is best left to those who are researching server internals. “Tuning” your server can be a stunning waste of time.

On a related topic, searching the internet for configuration advice is not always a great idea. You can find a lot of bad advice in blogs, forums, and so on. Although many experts contribute what they know online, it is not always easy to tell who is qualified. We can’t give unbiased recommendations about where to find real experts, of course. But we can say that the credible, reputable MySQL service providers are a safer bet in general than what a simple internet search turns up, because people who have happy customers are probably doing something right. Even their advice, however, can be dangerous to apply without testing and understanding, because it might have been directed at a situation that differed from yours in a way you don’t understand.

Finally, don’t believe the popular memory consumption formula—yes, the very one that MySQL itself prints out when it crashes. (We won’t repeat it here.) This formula is from an ancient time. It is not a reliable or even useful way to understand how much memory MySQL can use in the worst case. You might see some variations on this formula on the internet, too. These are similarly flawed, even though they add in more factors that the original formula doesn’t have. The truth is that you can’t put an upper bound on MySQL’s memory consumption. It is not a tightly regulated database server that controls memory allocation.
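
For concreteness (an editorial aside, not part of the book text): the “classic tuning ratio” the passage warns against is usually computed from two InnoDB counters, Innodb_buffer_pool_read_requests (logical reads) and Innodb_buffer_pool_reads (reads that missed the pool and went to disk). A sketch with invented sample values:

```shell
# Hit ratio = share of logical reads served from the buffer pool.
# Counters come from: SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';
buffer_pool_hit_ratio() {
  # $1 = Innodb_buffer_pool_read_requests, $2 = Innodb_buffer_pool_reads
  awk -v req="$1" -v disk="$2" 'BEGIN { printf "%.2f\n", 100 * (1 - disk / req) }'
}
buffer_pool_hit_ratio 1000000 500   # invented sample counters -> 99.95
```

The point of the quoted passage is that a “good” number here proves nothing on its own: a small, fully cached sysbench table will show close to 100% regardless of whether the buffer pool is 2 GB or 70 GB.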


All methods were sourced from stackoverflow.com or stackexchange.com and are licensed under CC BY-SA 2.5, CC BY-SA 3.0, or CC BY-SA 4.0.
