Nikolay Samokhvalov edited this page Feb 10, 2020 · 1 revision

Welcome to the postgres_dba wiki!

  1. A query is slow: it is a Seq Scan over a large number of rows because an index is missing
  • create index
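A minimal sketch of that fix (table and column names are hypothetical): confirm the Seq Scan first, then build the index with `CONCURRENTLY` so production writes are not blocked.

```sql
-- Confirm the Seq Scan and the row counts involved:
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM orders WHERE customer_id = 42;

-- Build the index without taking a write lock
-- (cannot run inside a transaction block):
CREATE INDEX CONCURRENTLY idx_orders_customer_id
  ON orders (customer_id);
```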
  1. "Postgres doesn't use my index"
  • ...
  1. "I dumped/restored a table and it's much smaller than on production"
  • make autovacuum more aggressive; repack (pg_repack?) existing, already bloated tables/indexes
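One way to make autovacuum more aggressive for a single bloated table (the table name and the exact values are assumptions to tune per workload):

```sql
-- Vacuum/analyze this table much more often than the global defaults
-- (defaults: vacuum scale factor 0.2, analyze scale factor 0.1):
ALTER TABLE orders SET (
  autovacuum_vacuum_scale_factor  = 0.01,
  autovacuum_analyze_scale_factor = 0.05
);
```

Note that autovacuum only prevents future bloat; bloat that has already accumulated has to be removed by rebuilding the table/indexes, which pg_repack can do online.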
  1. The second run of a query is much faster than the first one
  • cache optimization: ...
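One common approach here (an assumption, since the bullet above is left open) is warming the cache explicitly with the `pg_prewarm` extension, so the first run already finds the pages in `shared_buffers`:

```sql
CREATE EXTENSION IF NOT EXISTS pg_prewarm;

-- Load the table's pages into shared_buffers (hypothetical table name):
SELECT pg_prewarm('orders');
```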
  1. Cannot connect to Postgres, "FATAL: sorry, too many clients already"
  • can we define which user/app has a lot of connections?
  • are connections being closed properly?
  • check states of sessions (idle in transaction? => set up a timeout for idle-in-transaction sessions)
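The checks above can be sketched as (the timeout value is an assumption):

```sql
-- Which user/application holds the most connections, and in what state?
SELECT usename, application_name, state, count(*)
FROM pg_stat_activity
GROUP BY 1, 2, 3
ORDER BY count(*) DESC;

-- Automatically terminate sessions stuck in "idle in transaction":
ALTER SYSTEM SET idle_in_transaction_session_timeout = '5min';
SELECT pg_reload_conf();
```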
  1. Recently, the overall performance became significantly worse than in the previous months
  • the DB size grew quickly and the cache is no longer as effective? => more memory | tuning of memory-related params | partitioning | ...
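A quick way to test the "cache is no longer effective" hypothesis is the buffer cache hit ratio per database; a value that has drifted noticeably downward suggests the working set has outgrown `shared_buffers`/RAM:

```sql
SELECT datname,
       round(100.0 * blks_hit / nullif(blks_hit + blks_read, 0), 2)
         AS cache_hit_pct
FROM pg_stat_database
ORDER BY blks_read DESC;
```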
  1. "I need only a few rows from a table, there is a proper index to speed it up, but a lot of them are scanned with SeqScan or IndexScan, and I see 'Rows Removed by Filter: N', where N is a very big number"
  • random_page_cost is at its default (4)? Lower values (1-1.5) are often much better, especially on SSD storage
  • use loose index scan technique?
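A sketch of the `random_page_cost` experiment (the query and the value 1.1 are assumptions): try it per-session first, and only persist it once the plan actually improves.

```sql
-- Try a lower value in this session only:
SET random_page_cost = 1.1;
EXPLAIN (ANALYZE)
SELECT * FROM orders WHERE customer_id = 42;  -- re-check the chosen plan

-- If the planner now picks the index, persist the setting:
ALTER SYSTEM SET random_page_cost = 1.1;
SELECT pg_reload_conf();
```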
  1. Index scan is slow
  • ...
  • check the index size: is it too big (compared to the table)? => partial indexes | fewer columns | BRIN | ...
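The size check and the partial-index remedy might look like this (all object names are hypothetical):

```sql
-- Compare index size to table size:
SELECT pg_size_pretty(pg_relation_size('orders'))            AS table_size,
       pg_size_pretty(pg_relation_size('idx_orders_status')) AS index_size;

-- A partial index covering only the rows queries actually touch
-- can be dramatically smaller than a full-table index:
CREATE INDEX CONCURRENTLY idx_orders_pending
  ON orders (created_at)
  WHERE status = 'pending';
```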
  1. select ... for update is slow
  • can we use select ... for update skip locked in this case?
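The classic use case for `SKIP LOCKED` is a job queue (hypothetical table): each worker claims a batch without waiting on rows already locked by other workers.

```sql
WITH next_jobs AS (
  SELECT id
  FROM jobs
  WHERE status = 'queued'
  ORDER BY id
  LIMIT 10
  FOR UPDATE SKIP LOCKED   -- skip rows locked by concurrent workers
)
UPDATE jobs
SET status = 'processing'
WHERE id IN (SELECT id FROM next_jobs)
RETURNING id;
```

This only fits when any available row is acceptable (queues, task dispatch), not when the query must see a specific row.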
  1. The query is slow, there are huge differences between "Planned rows" and "Actual rows" in the plan
  • increase default_statistics_target or run alter table ... alter column ... set statistics for the corresponding tables/columns, then analyze?
  • use multi-column statistics?
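Both remedies sketched with hypothetical names (the target value 1000 is an assumption; `CREATE STATISTICS` requires Postgres 10+):

```sql
-- Collect more detailed statistics for one column, then refresh them:
ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 1000;
ANALYZE orders;

-- Multi-column (extended) statistics help when columns are correlated,
-- e.g. city and zip code:
CREATE STATISTICS orders_city_zip (dependencies) ON city, zip FROM orders;
ANALYZE orders;
```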
  1. A query is not so fast, there are IndexOnlyScans but with "Heap Fetches: N" where N is rather big
  • the visibility map is outdated; tune autovacuum to be more aggressive (autovacuum_vacuum_scale_factor) + perform a manual VACUUM (the visibility map is maintained by vacuum, not analyze)
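A sketch of that fix (hypothetical table name): a VACUUM refreshes the visibility map, after which Index Only Scans stop falling back to heap fetches.

```sql
-- Refresh the visibility map for the affected table:
VACUUM (VERBOSE) orders;

-- Make autovacuum process this table more frequently going forward:
ALTER TABLE orders SET (autovacuum_vacuum_scale_factor = 0.01);
```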
  1. Planning time for a query is too big
  • reduce the amount of statistics? (alter table ... alter column ... set statistics with lower values)
  • Simplify query
  1. alter table ... add column ... default ... runs quickly on dev, but hangs on prod
  • alter table ... add column without a default, then alter table ... alter column ... set default (note: on Postgres 11+ adding a column with a constant default no longer rewrites the table, so this workaround is only needed on older versions or with volatile defaults)
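The pre-Postgres-11 workaround sketched with hypothetical names: each step only takes a brief lock instead of rewriting the whole table.

```sql
-- Fast: no table rewrite, existing rows get NULL:
ALTER TABLE orders ADD COLUMN flags int;

-- Fast: the default applies to new rows only:
ALTER TABLE orders ALTER COLUMN flags SET DEFAULT 0;

-- Backfill existing rows separately, in small batches:
UPDATE orders SET flags = 0 WHERE id BETWEEN 1 AND 10000;
```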
  1. "N+1 SELECT" problem (e.g. an ORM generates N queries in a loop instead of a single query returning the whole result set)
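The usual fix on the SQL side (hypothetical posts/comments schema) is to collapse the loop into one round trip:

```sql
-- Instead of N queries in a loop:
--   SELECT * FROM comments WHERE post_id = $1;   -- repeated N times
-- fetch all the related rows at once:
SELECT * FROM comments WHERE post_id = ANY(ARRAY[1, 2, 3]);

-- or join from the parent table:
SELECT p.id, c.*
FROM posts p
JOIN comments c ON c.post_id = p.id
WHERE p.id IN (1, 2, 3);
```

On the ORM side the same effect is usually achieved with its eager-loading facility.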
  1. Logging on the Postgres server is too verbose, causing high I/O and hurting the server's performance
  • raise log_min_duration_statement, turn off auto_explain.log_timing, etc.
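A sketch of those settings (the 1s threshold is an assumption; `auto_explain` must already be loaded for its parameter to apply):

```sql
-- Log only statements slower than 1 second instead of everything:
ALTER SYSTEM SET log_min_duration_statement = '1s';

-- Per-node timing instrumentation is the expensive part of auto_explain:
ALTER SYSTEM SET auto_explain.log_timing = off;

SELECT pg_reload_conf();
```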