Is Your MySQL Buffer Pool Warm? Make It Sweat!

April 16th, 2013

This blog post is part one of two. The second part is here. Thank you to Peter Boros at Percona for contributing to this project and blog post.

There are numerous solutions for MySQL high availability. Many rely on MySQL’s asynchronous replication to maintain a warm standby server which is flipped into service if the active master server has an issue. At Groupon, our standard MySQL database configuration follows this active/passive pattern. There is a virtual IP that points to the active server of a pair. The passive server has mysqld running and replicating from the active master. In theory, failing over is the simple matter of moving the virtual IP. In practice, it’s slightly more complicated. To manage this process, we use a tool co-developed with Percona.
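Before the virtual IP is moved, the tooling has to confirm that the passive server is actually replicating and caught up. A minimal sketch of that kind of check using stock MySQL commands (this is illustrative, not our actual tool):

# Sketch: confirm the passive server is replicating and not lagging before failover.
mysql -e "SHOW SLAVE STATUS\G" | grep -E 'Slave_(IO|SQL)_Running|Seconds_Behind_Master'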

“Warm standby server”? Did you catch that? What does that mean? In most of these high availability solutions it means that mysqld is running on the standby server and that replication is not lagging. Unfortunately, that’s often not sufficient for failover during peak traffic. The standby server does not process query traffic except for replication. The buffer pool and adaptive hash index on the standby server will not have recently accessed pages. When it starts handling queries after failover, the lower cache hit rates can lead to outages. Specifically, at Groupon, our servers would be heavily I/O bound after failover as the buffer pool pages were loaded from disk.
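One rough way to see how cold a standby really is: compare InnoDB's buffer pool counters before and after failover. A minimal check using standard MySQL status counters (a sketch; acceptable hit rates depend on your workload):

# Sketch: rough buffer pool hit-rate check.
# Innodb_buffer_pool_reads counts logical reads that had to go to disk.
mysql -e "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%'"
# hit rate ~= 1 - Innodb_buffer_pool_reads / Innodb_buffer_pool_read_requests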

Replaying Queries

Working with Percona, we have developed a solution to keep the standby server’s caches hot. (See Peter Boros’ FOSDEM 2013 slides for information about discarded designs and benchmarks.)

Diagram of playback architecture

First, we set long_query_time to 0 in order to log every query. (See part two for handling massive slow log volume.) The slow logs are served, via HTTP, by mysql_slowlogd. This daemon is similar to running `tail -f slow.log`, except that it knows how to follow the log stream across log rotation events. On the standby server, the logs are replayed with Percona Playback by streaming the slow log from the active server.
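On the master side, turning on full query logging looks roughly like this (a sketch; the log file path is just an example and should match the file that mysql_slowlogd serves):

# Sketch: log every query to the slow log on the active master.
mysql -e "SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow.log'"
mysql -e "SET GLOBAL long_query_time = 0"
mysql -e "SET GLOBAL slow_query_log = ON"

The standby side is then a single pipeline: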

wget -q -O - http://master_server:3307/slow | percona-playback --mysql-host 127.0.0.1 --mysql-username playback --mysql-password PaSSwOrd --mysql-schema schema_name --query-log-stdin --dispatcher-plugin thread-pool --thread-pool-threads-count 100 --session-init-query "set innodb_fake_changes=1" > /var/log/playback.log 2>&1 &

Percona added a few features to Percona Playback to make it work better for this use case; you will need version 0.6 or later to get them. Be aware that playback output is quite verbose: in production you will most likely want to redirect it to /dev/null and keep a log file only for debugging purposes.

  1. Streaming logs from stdin
    Percona Playback now supports the --query-log-stdin command-line option for accepting a never-ending stream of queries to play back.
  2. Read-only playback
    Using the --session-init-query command-line option, we set innodb_fake_changes to prevent INSERTs, UPDATEs, and DELETEs from corrupting the data on the standby server (see the sketch after this list). You will need Percona Server in order to use innodb_fake_changes.
  3. Thread pool
    Percona Playback added a thread-pool dispatcher, enabled via --dispatcher-plugin thread-pool, that allows connection reuse. This is necessary when running a large stream of queries.
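As a quick sanity check that fake changes really are discarded, you can attempt a write from a session with the option enabled and confirm that nothing persisted. A minimal sketch, assuming Percona Server; schema_name, some_table, some_column, and the id value are placeholders:

# Sketch: writes from a fake-changes session should not persist (Percona Server only).
# schema_name, some_table, some_column, and id = 1 are placeholders.
mysql -e "SET SESSION innodb_fake_changes = 1; UPDATE schema_name.some_table SET some_column = some_column + 1 WHERE id = 1"
# A separate, normal session should still see the original value.
mysql -e "SELECT some_column FROM schema_name.some_table WHERE id = 1"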


We benchmarked with slow query logs captured from our production systems. We restored a production database backup to our test database so that the test database was consistent before applying the captured query traffic. This is an important step, because an update statement that matches no rows or an insert statement that hits a duplicate key error can be faster than an actual database write, which would skew the results.

The slow logs were split into chunks, each containing roughly 1M queries. We warmed the cold database with the first chunk and replayed the second chunk after the warmup.
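The splitting itself can be done with a small script. A rough sketch with awk, keying on the "# User@Host:" header line that appears once per slow log entry (chunk size and file names are illustrative):

# Sketch: split a captured slow log into chunks of roughly 1M entries.
awk 'BEGIN { chunk = 0 }
     /^# User@Host:/ { if (n++ % 1000000 == 0) chunk++ }
     { print > ("slow_chunk_" chunk ".log") }' slow.log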

Disk read I/O for chunk 1 followed by chunk 2

The y axis is logarithmic, so the difference in I/O usage is two orders of magnitude. All graphs looked like this (we did 39 measurements); the next graph shows chunk 4’s workload warmed up with chunk 3.

Disk read I/O for chunk 3 followed by chunk 4

The result is similar for every single graph: each chunk warmed up the buffer pool for the next one.

Disk read I/O for chunk 1 followed by itself

As an additional experiment, we tried replaying the same chunk again. We expected everything to be cached if we warmed the cache with the exact same data. All the graphs from these self-warming experiments look like this one: the green part of the graph lines up with the blue part.

This post has been cross-posted to the MySQL Performance Blog. Peter Boros will be at Percona Live next week and would love to see you if you are there. And, I am looking for MySQL experts to join our team at Groupon. Drop me a line if you are interested!
