The question:
I have two servers:
- Server A: AMD EPYC 7371 (16c / 32t, 3.1 GHz / 3.8 GHz), 256 GB ECC 2400 MHz RAM, 2 × 960 GB NVMe SSD, MySQL 8.0.26
- Server B: Dual Intel Xeon Gold 6242R (20c / 40t, 3.1 GHz / 4.1 GHz), 384 GB ECC 2933 MHz RAM, 6 × 3.84 TB NVMe SSD, 2 × 480 GB SATA SSD
On server A, despite it being smaller, the database works well and transactions are very fast with over 200 concurrent users.
On the much larger server B, this happens: up to 50 concurrent users there is no problem and the database works well, but once the number of users goes beyond 50, the database starts to slow down.
Both servers have the same database, same tables, same stored procedures, and same applications; in practice one is a copy of the other.
How can I figure out where the problem lies?
This is the configuration file:
[client]
pipe=
socket=MYSQL
port=3306
[mysql]
no-beep
default-character-set=
server_type=1
[mysqld]
port=3306
datadir=E:/ProgramData/MySQL/MySQL Server 8.0/Data
character-set-server=
default-storage-engine=INNODB
sql-mode="STRICT_TRANS_TABLES,NO_ENGINE_SUBSTITUTION"
log-output=FILE
general-log=0
general_log_file="NS31525947.log"
slow-query-log=1
slow_query_log_file="NS31525947-slow.log"
long_query_time=10
log-error="NS31525947.err"
log-bin="NS31525947-bin"
server-id=1
lower_case_table_names=1
secure-file-priv="C:/ProgramData/MySQL/MySQL Server 8.0/Uploads"
max_connections = 500
table_open_cache=3G
tmp_table_size=3G
thread_cache_size=100
myisam_max_sort_file_size=10G
myisam_sort_buffer_size=68G
key_buffer_size=61M
read_buffer_size=23M
read_rnd_buffer_size=256K
innodb_flush_log_at_trx_commit=0
innodb_log_buffer_size=512M
innodb_buffer_pool_size=10G
innodb_log_file_size=1G
innodb_thread_concurrency=0
innodb_autoextend_increment=64
innodb_buffer_pool_instances=8
innodb_concurrency_tickets=5000
innodb_old_blocks_time=1000
innodb_stats_on_metadata=0
innodb_file_per_table=1
innodb_checksum_algorithm=0
back_log=80
flush_time=0
join_buffer_size=256K
max_allowed_packet=4M
max_connect_errors=100
open_files_limit=10000
sort_buffer_size=256K
table_definition_cache=1400
binlog_row_event_max_size=8K
sync_relay_log=10000
sync_relay_log_info=10000
loose_mysqlx_port=33060
default_authentication_plugin = mysql_native_password
I also tried copying the my.ini from server A to server B, but nothing changed.
I have also included references to the database status as it is right now. [screenshots omitted]
The Solutions:
Below are the methods you can try. The first solution is probably the best; try the others if the first one doesn't work. Senior developers aren't just copying and pasting: they read each method carefully and apply it wisely to their own case.
Method 1
table_open_cache=3G -- NO! That's files, not bytes! Drop to a few thousand
innodb_buffer_pool_size=10G -- increase to about 70% of RAM
No more than 1% of available RAM (each):
tmp_table_size=3G --> 200M
max_heap_table_size --> 200M
myisam_sort_buffer_size=68G --> 200M
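A minimal sketch of those changes, assuming MySQL 8.0 (where SET PERSIST also writes the values to mysqld-auto.cnf so they survive a restart) and server B's 384 GB of RAM; scale the sizes to your own machine:
SET PERSIST table_open_cache        = 4000;                     -- a count of cached table handles, not bytes
SET PERSIST innodb_buffer_pool_size = 256 * 1024 * 1024 * 1024; -- ~70% of 384 GB; resizable online in 8.0
SET PERSIST tmp_table_size          = 200 * 1024 * 1024;        -- ~200M, well under 1% of RAM
SET PERSIST max_heap_table_size     = 200 * 1024 * 1024;
SET PERSIST myisam_sort_buffer_size = 200 * 1024 * 1024;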
If you would like further analysis of settings, see http://mysql.rjweb.org/doc.php/mysql_analysis#tuning
That analysis will include discovering whether you are I/O-bound, hence confirming whether SATA vs NVMe is a big issue.
Speeding up queries is another way to scale; let’s analyze the SlowLog. The answer talking about SSDs (Method 2 below) leads to the need to speed up “slow” queries. Note that the slowlog measures elapsed time, hence captures disk I/O. Be sure to set long_query_time = 1.0 (or lower).
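As a sketch, you can lower the threshold at runtime; the log file path below assumes the default relative location under the datadir from the config above:
SET PERSIST slow_query_log  = ON;
SET PERSIST long_query_time = 1.0;  -- elapsed time, so disk I/O is captured
-- Summarize the log afterwards with the bundled tool, e.g.:
-- mysqldumpslow -s t "E:/ProgramData/MySQL/MySQL Server 8.0/Data/NS31525947-slow.log"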
More
Buffer_pool of 10G; data folder of 13.8GB… When a server is first started, nothing is in the buffer_pool, so queries run slower than they eventually will. This is due to I/O. (And, as already discussed, the two machines have different I/O speeds.)
Eventually the buffer_pool will fill up and further actions may (or may not, depending on the “working set size”) lead to thrashing — I/O to bump out stuff in the cache in order to load new things. This I/O will be especially problematic if queries are doing full table scans.
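One way to gauge that pressure (a sketch using standard InnoDB status counters):
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';
-- Innodb_buffer_pool_reads counts logical reads that had to go to disk;
-- Innodb_buffer_pool_read_requests counts all logical reads.
-- A rising miss ratio suggests the working set no longer fits in the pool.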
Since you have lots of RAM, you should increase innodb_buffer_pool_size. However, if your data size continues to grow, you may eventually hit the sluggishness again. You should locate any “slow” queries and consider how to avoid “full table scans” (or whatever is making them slow).
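To spot full table scans, EXPLAIN the statements that appear in the slow log. A hypothetical example (the table and column names are made up):
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
-- type: ALL with a large "rows" estimate means a full table scan;
-- an index on the filtered column is the usual fix:
-- ALTER TABLE orders ADD INDEX idx_customer (customer_id);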
Method 2
The one word that grabs my attention is SATA. SATA drives are slower than SSDs.
There is a webpage entitled Types of hard drives: SATA vs. SSD vs. NVMe
It says in part
SSD stands for Solid State Drive. These disks don’t have any moving parts. Instead, all of the data is stored on non-volatile flash memory. That means that there isn’t a needle that has to move to read or write data and that they are significantly faster than SATA drives. It’s difficult to find an exact speed because it varies by manufacturer and form factor, but even the lower-performing drives are comparable to SATA drives.
I would suggest putting everything on SSDs. I mentioned getting away from SATA in my old post MySQL/Percona Server is taking long and I’d prefer to skip all this waiting even if it means data loss
I would also suggest splitting the redo logs away from the data using separate disks (See my old post MySQL on SSD – what are the disadvantages?)
I would also set innodb_thread_concurrency to 64 (See my old post MariaDB 10.1.22 to use more RAM rather than CPU)
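A sketch of both suggestions (innodb_thread_concurrency is dynamic; innodb_log_group_home_dir is read-only, so moving the redo logs requires a my.ini edit and a restart; the path below is hypothetical):
SET PERSIST innodb_thread_concurrency = 64;  -- cap concurrently running InnoDB threads (0 = unlimited)
-- In my.ini, to place the redo logs on a separate disk:
-- innodb_log_group_home_dir = "F:/MySQL/redo"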
All methods were sourced from stackoverflow.com or stackexchange.com and are licensed under CC BY-SA 2.5, CC BY-SA 3.0, or CC BY-SA 4.0.