My site occasionally has fairly predictable bursts of traffic that push throughput to roughly 100 times normal. For example, we are going to be featured on a television show, and I expect that in the hour after it airs I'll get more than 100 times my normal traffic.
My understanding is that MySQL (InnoDB) generally keeps my data in a bunch of different places:
- RAM Buffers
- commitlog
- binary log
- actual tables
- All of the above places on my DB slave
This is too much "durability" given that I'm on an EC2 node and most of this traffic goes across the same network pipe (the file systems are network-attached). Plus the drives are just slow. The data is not high value, and I'd rather take a small chance of losing a few minutes of data than face a high probability of an outage when the crowd arrives.
During these traffic bursts I would like to do all of that I/O only if I can afford it. I'd like to just keep as much in RAM as possible (I have a fair chunk of RAM compared to the data size that would be touched over an hour). If buffers get scarce, or the I/O channel is not too overloaded, then sure, I'd like things to go to the commitlog or binary log to be sent to the slave. If, and only if, the I/O channel is not overloaded, I'd like to write back to the actual tables.
In other words, I'd like MySQL/InnoDB to use a "write back" cache algorithm rather than a "write through" cache algorithm. Can I convince it to do that?
If this is not possible, I am interested in general MySQL write-performance optimization tips. Most of the docs are about optimizing read performance, but when I get a crowd of users, I am creating accounts for all of them, so that's a write-heavy workload.
If you can live with some additional risk, these two changes will greatly increase your write performance.
- Set innodb_flush_log_at_trx_commit = 0
- Set sync_binlog = 0

In addition, your buffer pool (innodb_buffer_pool_size) should be around 70-80% of server memory. Increasing the redo log file size (innodb_log_file_size) and the log buffer size (innodb_log_buffer_size) can also help to some degree.
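As a rough sketch, these settings go in my.cnf; the sizes below are placeholders that assume a server with about 8 GB of RAM and should be adjusted to your instance:

```ini
# my.cnf -- sketch only; sizes assume ~8 GB of RAM and are illustrative
[mysqld]
# Flush the redo log to disk about once per second instead of at every commit.
# A crash can lose up to roughly the last second of committed transactions.
innodb_flush_log_at_trx_commit = 0

# Let the OS decide when the binary log hits disk.
# A crash can lose the most recent binlog events (and therefore slave updates).
sync_binlog = 0

# Keep most of the working set in memory: ~70-80% of server RAM.
innodb_buffer_pool_size = 6G

# Larger redo logs let InnoDB defer and batch background page writes.
innodb_log_file_size = 512M

# A bigger log buffer means fewer forced flushes for large transactions.
innodb_log_buffer_size = 64M
```

Note that innodb_flush_log_at_trx_commit and sync_binlog can also be changed at runtime with SET GLOBAL, so you can relax them just for the burst and set them back to 1 afterwards, whereas innodb_log_file_size requires a restart to change.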
To a large extent, InnoDB already does this.
When you commit data, it is written to the log file for recovery purposes, but the modifications to the tablespace (the data files themselves) are only applied later by a background process (checkpointing).
You can specify how many IOPS (http://en.wikipedia.org/wiki/IOPS) you want to dedicate to that background process in newer InnoDB releases (innodb_io_capacity), and, provided you set innodb_log_file_size large enough, InnoDB will simply fall behind for a while and catch back up later.
If InnoDB falls too far behind on that background work, though, it can hit sharp dips in performance when it reaches the end of the log files and has to cycle back through them. See the 'none' line on these benchmarks: http://www.mysqlperformanceblog.com/2009/09/15/which-adaptive-should-we-use/
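For illustration only (the numbers are placeholders to be tuned against what your network-attached volumes can actually sustain), the knobs mentioned above would look something like this in my.cnf:

```ini
# Illustrative values only -- tune against the real IOPS of your storage.
[mysqld]
# IOPS budget for background flushing/merging work; network-attached EBS of
# this era sustains on the order of ~100 IOPS, so keep this conservative.
innodb_io_capacity = 100

# Large redo logs give InnoDB room to fall behind on checkpointing during a
# burst and catch up later, at the cost of longer crash recovery.
innodb_log_file_size = 1G
```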