
Engineering
February 16, 2016

Running VACUUM FULL on RDS Postgres with Minimal Downtime

by Marcus Gartner

Why VACUUM FULL Is Useful

Automated, non-blocking VACUUMs are helpful in keeping Postgres lean and performant. However, these Roomba™-like regular cleanups can miss some types of database dust bunnies. For example, automated vacuums do not reclaim space from dropped columns. This wasted space, if large enough, can bloat Postgres' cache and hinder performance. The more rigorous VACUUM FULL rewrites every row in a table, freeing up space from columns that were dropped in the past and compacting rows onto fewer pages on disk. As a result, the same number of rows can be cached in less memory.
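To gauge how much a table might benefit, you can compare its on-disk footprint before and after the vacuum. A minimal check, using our postcards table as the example:

-- Total on-disk size of the table, including its indexes and TOAST data
SELECT pg_size_pretty(pg_total_relation_size('postcards'));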

Drawbacks

While VACUUM FULL is powerful, it has a downside: it holds an exclusive lock on the table, blocking both reads and writes. On a table with millions of rows, it can take hours to complete, creating mandatory downtime for any service relying on the table.
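You can observe this lock from another session while the vacuum runs; a minimal sketch against the pg_locks and pg_class catalogs:

-- Locks currently held on postcards; during VACUUM FULL you will see
-- an AccessExclusiveLock, which blocks all other access to the table
SELECT l.locktype, l.mode, l.granted
FROM pg_locks l
JOIN pg_class c ON c.oid = l.relation
WHERE c.relname = 'postcards';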

The popular alternative to VACUUM FULL is the pg_repack extension, which performs the same duty without holding a long exclusive lock on the table. Unfortunately, pg_repack is not an officially supported extension if you are using AWS RDS (or perhaps another hosted Postgres provider).
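For reference, where the extension can be installed, a single-table repack looks roughly like this (mydb is a placeholder database name):

-- Run once inside the target database
CREATE EXTENSION pg_repack;

Then, from the shell:

pg_repack --table postcards mydb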

VACUUM FULL With Only Partial Downtime

There is an alternative, however, that avoids blocking inserts to the table, provided your application can tolerate temporarily inaccurate reads and failed updates on existing rows. In our specific use case, we adamantly wanted to avoid downtime for inserts, but cared less about whether rows already inserted went missing temporarily. We also see very few updates to existing rows, making the solution below viable.

First, create a copy of the table you wish to VACUUM FULL. INCLUDING ALL copies the original's column defaults, constraints, and indexes, so the copy can stand in for it. Note that a serial column's default will point at the original's sequence, which conveniently prevents id collisions when the rows are merged back later.

CREATE TABLE postcards_new (
 LIKE postcards
 INCLUDING ALL
);

Then swap the names of the two tables in a transaction. Once this completes, any SELECTs or UPDATEs targeting existing rows will hit the new, empty table and find nothing, while inserts will successfully be written to it.

BEGIN;
 ALTER TABLE postcards RENAME TO postcards_vacuum;
 ALTER TABLE postcards_new RENAME TO postcards;
COMMIT;

Now that the original table is no longer being written to, run VACUUM FULL on it. I've added VERBOSE below because it prints status messages that help you track the progress of the cleanup.

VACUUM FULL VERBOSE postcards_vacuum;

Once the vacuum is complete, swap the table names again in a transaction.

BEGIN;
 ALTER TABLE postcards RENAME TO postcards_new;
 ALTER TABLE postcards_vacuum RENAME TO postcards;
COMMIT;

Finally, copy any rows that accumulated in the temporary table back into the original table.

INSERT INTO postcards
SELECT * FROM postcards_new;

Before dropping the temporary table, verify that all of its rows have been transferred to the original table.
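A simple sanity check, assuming postcards has an id primary key (adjust for your schema), is to count rows in the temporary table that are missing from the original:

-- Rows in postcards_new that never made it into postcards
SELECT count(*)
FROM postcards_new pn
WHERE NOT EXISTS (
 SELECT 1 FROM postcards p WHERE p.id = pn.id
);

Once the count comes back 0, it is safe to drop the temporary table.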

DROP TABLE postcards_new;

This solution only works for applications with requirements similar to ours. The ideal solution is to use pg_repack, but if you lack the freedom to install custom extensions, this approach is one way to avoid complete downtime.
