We're excited to announce that we recently open-sourced pg_insights, a library of SQL snippets that aid in Postgres administration. Over the past few years, we've curated these scripts from across the internet in addition to concocting our own. They have proven invaluable in helping us gauge high-level database health. This collection has played a crucial role in debugging performance issues and planning for growth across our fleet of Postgres instances, ranging from small staging databases to huge production installations. We hope that the broader community benefits from these snippets as well.
The scripts run easily in any Postgres client: just copy and paste them, unaltered, from GitHub. If you use the `psql` command-line client that ships with Postgres, you can clone the repository and run a script against your database with the `-f` flag followed by the path to the script, e.g. `psql -d your_database -f cache_hit_rate.sql`.
Note that pg_insights is not a be-all and end-all tool for monitoring Postgres health. It works best in tandem with other tools that collect live metrics, graph metrics over time, and diagnose individual slow queries and performance problems.
Let's take a look at a few of the scripts included in pg_insights.
The cache_hit_rate.sql snippet displays the percentage of block reads that hit Postgres's shared buffers, for both indices and tables. Hitting the in-memory shared buffers speeds up queries because it avoids an OS system call to fetch the blocks from disk. (Note, however, that a read that misses shared buffers may still be served from the in-memory OS page cache, so a shared-buffers miss does not necessarily mean the data is fetched from disk.)
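For a sense of where these numbers come from, here's a rough sketch of the kind of query such a script can run against Postgres's built-in statistics views (the actual cache_hit_rate.sql may look different):

```sql
-- Rough sketch, not the pg_insights script itself: shared-buffer hit rates
-- for index and table block reads, derived from the pg_statio_* views.
SELECT 'index hit rate' AS name,
       round(100.0 * sum(idx_blks_hit)
             / nullif(sum(idx_blks_hit + idx_blks_read), 0), 2) AS pct
FROM pg_statio_user_indexes
UNION ALL
SELECT 'table hit rate' AS name,
       round(100.0 * sum(heap_blks_hit)
             / nullif(sum(heap_blks_hit + heap_blks_read), 0), 2) AS pct
FROM pg_statio_user_tables;
```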
It's good to strive for very high cache hit rates; some sources suggest aiming for 99% or higher. In environments with fast, high-bandwidth disk I/O, such high hit rates may be less critical.
In the example below, index block reads and table block reads hit shared buffers 59% and 92.5% of the time, respectively.
The index_hit_rate.sql snippet shows the percentage of reads against each table that use an index, along with the approximate number of rows in the table. This information helps identify under-indexed tables.
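Under the hood, this kind of information comes from the pg_stat_user_tables view. The sketch below is only illustrative and isn't necessarily how index_hit_rate.sql is written:

```sql
-- Rough sketch: per-table index usage and approximate row counts,
-- from the pg_stat_user_tables statistics view.
SELECT relname AS table_name,
       round(100.0 * idx_scan / nullif(idx_scan + seq_scan, 0), 1) AS pct_index_used,
       n_live_tup AS approx_rows
FROM pg_stat_user_tables
ORDER BY n_live_tup DESC;
```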
It's important to note that for small tables, with a few hundred rows or fewer, Postgres may not use indices at all. Postgres smartly avoids indices in these cases because scanning every row of a small table is often faster than incurring the overhead of traversing an index. So, low index hit rates for small tables aren't a concern.
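If you're curious whether the planner really is skipping an index on one of your small tables, `EXPLAIN` will show the plan it chooses. The table and column names below are placeholders:

```sql
-- Hypothetical example: inspect the plan Postgres picks for a small table.
-- "small_lookup_table" and "id" are placeholder names.
EXPLAIN SELECT * FROM small_lookup_table WHERE id = 42;
-- For a table with only a few hundred rows, expect a Seq Scan here even
-- when an index on "id" exists.
```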
In the example below, queries against table_a and table_b use an index 100% of the time, while queries against table_c use an index 98.1% of the time. Postgres almost never relies on an index when selecting from the tiny table_d with only 251 rows.
The unused_indices.sql snippet identifies rarely used indices. It displays both the size of each index on disk, in bytes, and the number of times it has been scanned since it was created (or since Postgres stats were last reset with `select pg_stat_reset();`). This helps pinpoint indices that take up disk space (and potentially memory) but provide no benefit.
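The raw ingredients come from the pg_stat_user_indexes view; a simplified sketch (with column aliases of our own choosing, not necessarily the script's output) looks something like this:

```sql
-- Rough sketch: each index's on-disk size and the number of scans it has
-- served since it was created or statistics were last reset.
SELECT schemaname,
       relname      AS table_name,
       indexrelname AS index_name,
       pg_relation_size(indexrelid) AS index_size_bytes,
       idx_scan     AS index_scans
FROM pg_stat_user_indexes
ORDER BY idx_scan ASC, pg_relation_size(indexrelid) DESC;
```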
This example shows a 29 GB index that has never been used!
The library contains a handful of other useful scripts beyond the ones mentioned above. Try them out and let us know if they're helpful to you! We hope the collection continues to evolve and grow over time.
If you're interested in helping maintain our Postgres fleet and working on a range of challenging technical problems, you're in luck—we're hiring!