Saturday, September 5, 2009

MySQL's query cache

MySQL's query cache is somewhat unique and can be very beneficial to certain applications, such as web apps that run the same queries over and over. If you're unfamiliar with the query cache, here are some key points:

  • The query cache holds SELECT statements and their result sets. By "SELECT statements" I mean the entire query is treated as one entry in the cache; subselects and queries that are part of a UNION are not cached individually.

  • This cache is optional. In fact it is off by default in most MySQL binaries. Although query_cache_type might be set to "ON", if query_cache_size=0 then the cache is effectively off.

  • This is not the only cache MySQL has. The storage engines often have lower-level caches of their own (e.g., InnoDB caches data and index pages in its buffer pool, sized with innodb_buffer_pool_size, while MyISAM caches index blocks in the key buffer, sized with key_buffer_size).

  • When enabled, the query cache sits before the parser. This means that for your query to be a cache "hit", it must match a cached query exactly (matching is byte-for-byte, so it is case-sensitive and even whitespace matters).

  • The cache will never serve stale data (unless you are running multiple mysqlds pointing to the same data directory - in that case do not use the query cache). So queries need to be invalidated when data changes. In order to make this fast and easy, it's done on a table-by-table basis; when a table changes, all queries that used that table are removed.

  • Since the query cache is before the parser, you might wonder how authorization is done. When the query was originally parsed and inserted in the cache, MySQL figured out which table(s) were used by the query (which is useful for invalidating queries, see above). By knowing the tables involved, a user's privileges can be checked. However, if a user has any column-level privileges on those tables, the query cache is not useful. I tend to avoid column-level privileges (since 5.0 you can use views instead).

  • Do not set the query cache too large (I'd say no more than 256MB) because it becomes less efficient.

  • Monitor its effectiveness: use SHOW GLOBAL STATUS to compare Qcache_hits with Com_select (which counts query cache misses). If Qcache_lowmem_prunes keeps growing, you may want to increase query_cache_size (though heed the earlier advice about not setting it too large). Lots of Qcache_inserts may indicate that queries are being invalidated and re-cached frequently. See the example after this list.

  • MySQL Cluster can use the query cache; however, each MySQL server has its own query cache. Queries are invalidated if data anywhere in the cluster changes. To incur less overhead, you can configure ndb_cache_check_time, which tells NDB how often to check the cache for invalidation.
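
Here's how I check the cache's configuration and effectiveness (these are standard SHOW commands; reading the counters works as described above):

SHOW VARIABLES LIKE 'query_cache%';    -- is the cache on, and how big is it?
SHOW GLOBAL STATUS LIKE 'Qcache%';     -- hits, inserts, lowmem prunes, free memory
SHOW GLOBAL STATUS LIKE 'Com_select';  -- counts cache misses that reached the parser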

Thursday, August 27, 2009

Redundant management nodes in MySQL Cluster

Every time I teach the MySQL Cluster architecture, someone inevitably asks, "Isn't the management node (ndb_mgmd) a single point of failure?" The short answer: no. The management node is not a SPOF because the cluster can continue running without it. However, it's inconvenient to have your management node down, because it does several things:

  • Provides status information about the cluster and lets you use the ndb_mgm client for various maintenance tasks, like taking a hot backup
  • Owns the cluster config file (therefore it must be running in order to start a node)
  • Arbitrates in case of a potential split brain
  • Logs cluster events


So while the management node can be down, it is nice to have a redundant one for failover. This is very easy to do:


  1. Add 2 [NDB_MGMD] sections to config.ini:
    [NDB_MGMD]
    #Id is required when defining multiple mgmt nodes
    Id=1
    Hostname=192.168.0.31

    [NDB_MGMD]
    Id=2
    Hostname=192.168.0.32

  2. Change the ndb-connectstring to include both IPs of the management nodes:
    [mysql_cluster]
    ndb-connectstring=192.168.0.31,192.168.0.32

  3. Make sure the config.ini is on both management nodes and that the files are identical. Start both ndb_mgmd nodes.


That's it! The management nodes will act in an active-passive fashion and fail over as necessary. Just make sure you do not run a management node on the same physical host as a data node: if that host fails, you lose a data node and the arbitrator at once, which can force the entire cluster to shut down.
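
To verify the setup, connect the ndb_mgm client using the full connectstring (the IPs below are from the example above) and check that both management nodes show as connected:

ndb_mgm -c 192.168.0.31,192.168.0.32 -e show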

Sunday, August 23, 2009

Examples of bad queries

In my years of teaching MySQL topics, I've noticed that many people write bad queries. It's not necessarily their fault. It could be that they don't understand some aspect of SQL, or that they solved the problem with the first technique that came to mind, or that their query is fine except that MySQL optimizes it poorly. I'll give a few examples.

Correlated vs. Non-Correlated subquery
For those who don't know what a correlated subquery is: it means the subquery relies on values from the outer query. If the subquery could be executed by itself, it is non-correlated. Here's an example of a non-correlated subquery. Using the `world` database, I want to return all cities that have a population larger than New York's:

SELECT name FROM City WHERE population > (SELECT population FROM City WHERE name = 'new york');

Since the subquery can be run by itself, it's non-correlated. These queries are generally okay, though if they are slow you should check to see if indexes are being used or consider re-writing as a join.

Now here's an example of a correlated subquery. The query returns the countries that have a city bigger than 9 million people:

SELECT name FROM Country AS co WHERE EXISTS (SELECT * FROM City WHERE population>9000000 AND co.code = countrycode);

Notice that the subquery references the outer query (co.code). This causes the subquery to execute for each row of the outer query. That's a lot of work! This could have been written as a non-correlated subquery:

SELECT name FROM Country WHERE code IN(SELECT countrycode FROM City WHERE population>9000000);

The problem with the above query (as you can verify with EXPLAIN) is that MySQL 5.1 executes it as a correlated subquery even though it shouldn't. This is fixed in 6.0! In the meantime, run EXPLAIN on your slow subqueries and look for "DEPENDENT SUBQUERY" - that means it's correlated. Try to re-write it. Here's the above query as a join:

SELECT DISTINCT Country.name FROM Country JOIN City ON code = countrycode WHERE City.population > 9000000;
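
You can confirm which plan you're getting with EXPLAIN; on 5.1, the IN-subquery version above shows DEPENDENT SUBQUERY in the select_type column, while the join version does not:

EXPLAIN SELECT name FROM Country WHERE code IN(SELECT countrycode FROM City WHERE population>9000000);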

The MySQL optimizer in 5.x has many more options when executing a join, such as choosing the best table order and considering all available indexes.

The point of this post is: the easiest query to write is not always the best one. If it's slow, use EXPLAIN and think about another strategy. There's more than one way to skin a cat. And remember to upgrade to the latest version of MySQL - the optimizer team is hard at work.

Sunday, August 16, 2009

Best MySQL backup strategy

There are several ways to take backups (some good, some bad, and some will depend on your situation). Here's the thought process I use for choosing a backup strategy.

  1. mysqldump

    If your data set is small (I realize "small" is a relative term; to qualify it, let's say <10GB), then mysqldump will probably work great. It's easy, it's online and it's very flexible. Just a few things mysqldump can do:
    • backup everything or just certain databases or tables
    • backup only the DDL
    • optimize the dump for a faster restore
    • make the resultant sql file more compatible with other RDBMSes
      and many more things.

    However, the most important options are related to the consistency of your backup. My favorite options are:
    • --single-transaction : this option gives a consistent backup, if (and only if) the tables use the InnoDB storage engine. If you have any MyISAM tables that receive writes, don't rely on this option when backing them up.
    • --master-data=2 : this option makes sure your dump is consistent (by locking all tables, unless you've also added --single-transaction). It also records the binary log position in the resulting dump file (=2 causes that line to be written as a comment in the dump file).


    Final note about mysqldump: keep in mind that the restore time may be significantly longer than the backup time. It will depend on several factors, for example, how many indexes you have.

  2. LVM snapshot

    For those who have larger datasets, a physical backup is the way to go. While you could take a cold backup (i.e., shut down the MySQL service, copy the data directory, restart the service), many people cannot afford the downtime. My favorite solution is snapshots. This can be hot (for InnoDB) or require a brief lock (for MyISAM). Don't forget to include all your data (including the ib_logfiles). Lenz provides a nice utility to help with this: http://www.lenzg.net/mylvmbackup/

  3. Other

    So let's say you have more than 10GB of data and you don't have the ability to use LVM. There are many other strategies that have their pros and cons. I'll quickly list some:

    • Cold backup
    • mysqlhotcopy (This applies only to MyISAM tables. It should not use the word "hot" since it is actually a "warm" backup)
    • Use replication and take cold backups on the slave
    • InnoDB HotBackup commercial product
    • Zmanda recover manager (This isn't a backup strategy per se, but it's a management console for backup and recovery)


  4. Binary log

    Regardless of which backup strategy you choose, it's important to synchronize the binary logs with your backup. This can be done by:

    • Restarting MySQL, which starts a new binary log file
    • Issuing FLUSH LOGS
    • Capturing the binary log coordinates during the backup (e.g., SHOW MASTER STATUS)


    If you choose the mysqldump approach, simply include the --master-data=2 option. If you choose an LVM snapshot, issuing FLUSH LOGS at snapshot time would do nicely (see the sketch below). You don't need to keep your binary logs forever; consider using the --expire-logs-days option or manually archiving them.
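
To make points 1 and 4 concrete, here is a minimal sketch (the backup path is made up; it assumes an all-InnoDB server):

# Consistent online dump; --master-data=2 records the binary log coordinates
# as a comment at the top of the dump file
mysqldump --single-transaction --master-data=2 --all-databases > /backups/full.sql

# Later, find where point-in-time recovery should start
grep -m 1 'CHANGE MASTER' /backups/full.sql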

Friday, August 14, 2009

Common indexing mistakes

Here's a quick list of common mistakes that I've seen regarding indexing in MySQL.

1. Not using an index. Enable your slow query log (and consider lowering long_query_time and enabling log_queries_not_using_indexes too) and watch for queries that aren't using an index.
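
For example, in my.cnf (option names as of MySQL 5.1; pick a long_query_time threshold that suits you):

[mysqld]
slow_query_log = 1
long_query_time = 1
log_queries_not_using_indexes = 1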

2. Using CREATE INDEX. While CREATE INDEX is not inherently bad, it only allows you to do one thing: add a single index to a table. It is mapped to an ALTER TABLE anyway, so you might as well use ALTER TABLE directly and get the benefit of doing multiple things in one statement (e.g., add an index and remove an index, or add 2 indexes).
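
For example, one ALTER TABLE can drop an old index and add a new one in a single table rebuild (the index name here is made up):

ALTER TABLE customer
  DROP INDEX idx_lname_old,
  ADD INDEX (lname, fname);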

3. Misusing a composite index. Take this example:
CREATE TABLE customer (
custid INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
fname VARCHAR(30),
lname VARCHAR(30),
INDEX (lname, fname)
);

The composite index on (lname, fname) can be used for searches on both lname and fname, or on lname alone. For example, these queries can use that index:
SELECT * FROM customer WHERE fname = 'Sarah' AND lname = 'Sproehnle';
SELECT * FROM customer WHERE lname = 'Sproehnle'; -- uses the first part of the index
SELECT * FROM customer WHERE lname LIKE 'Spr%'; -- uses the first part of the index
SELECT fname FROM customer WHERE lname = 'Sproehnle'; -- uses it as a covering index
SELECT * FROM customer WHERE lname = 'Sproehnle' ORDER BY fname; -- uses the index and avoids filesort

However, the following query will not use the index:
SELECT * FROM customer WHERE fname = 'Sarah';

Another mistake is to add a separate index on lname. In most cases you wouldn't want that, since the composite index can already be used for lookups on lname.

4. Using an expression on a column. I often see developers write queries that apply a function or other expression to a column in the WHERE clause. If the expression is directly on the column, an index can't be used. For example, say I want customers who joined this year; I could do this:
SELECT * FROM customer WHERE year(customer_join_date) = 2009; -- Cannot use an index on customer_join_date

If I rewrote the query I could take advantage of an index on customer_join_date:
SELECT * FROM customer WHERE customer_join_date BETWEEN '2009-01-01' AND '2009-12-31';

MySQL does not have function-based indexes. There are ways to mimic these, but I'll leave that for another blog post.

5. Appending the primary key to an index on an InnoDB table. This applies to InnoDB, not MyISAM. InnoDB uses the primary key as the row pointer in your secondary indexes, so an index on the lname column is effectively implemented as an index on (lname, custid); there's no need to append custid yourself.
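
In other words, for the customer table above, the first statement below is redundant on InnoDB; the second gives you the same thing:

ALTER TABLE customer ADD INDEX (lname, custid); -- custid would be appended anyway
ALTER TABLE customer ADD INDEX (lname);         -- already acts like (lname, custid)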

Sunday, August 2, 2009

Common mistake: looking for "random" rows

There are many wrong ways to find a "random" row. For example:

SELECT .. ORDER BY rand() LIMIT 1;


The rand() function is great, and LIMIT is phenomenal, but the above query is really, really bad. It evaluates the rand() function for every row of the table and then (if that weren't bad enough) it sorts all the rows by the result of the rand() function. Then it returns the first row. Wow. Don't make MySQL do that much work!

There are better ways to find a "random" row. Maybe in a future blog post I'll go into them, but at the very least consider generating a random id ahead of time and looking up the row with that id.
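
For example, here's one common pattern (a sketch that reuses the customer table from the indexing post; it assumes an AUTO_INCREMENT primary key and is slightly skewed toward rows that follow gaps in the id sequence):

SELECT c.*
FROM customer AS c
JOIN (SELECT FLOOR(1 + RAND() * MAX(custid)) AS rid FROM customer) AS r
  ON c.custid >= r.rid
ORDER BY c.custid
LIMIT 1;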