# Cases
Nikolay Samokhvalov edited this page Feb 10, 2020
Welcome to the postgres_dba wiki!
- A query is slow: it is a SeqScan over a large number of rows, caused by a missing index
  - create an index
  - "Postgres doesn't use my index"
  - ...
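A minimal sketch of the first case, assuming a hypothetical `orders` table filtered on `customer_id`; `CREATE INDEX CONCURRENTLY` builds the index without blocking writes (but cannot run inside a transaction block):

```sql
-- Confirm the SeqScan and the row counts first:
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM orders WHERE customer_id = 42;

-- Build the missing index without a write-blocking lock:
CREATE INDEX CONCURRENTLY ON orders (customer_id);
```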
- "I dumped/restored a table and it's much smaller than on production"
  - make autovacuum more aggressive; repack (pg_repack?) existing, already-bloated tables/indexes
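One way to make autovacuum more aggressive for a single bloated table; the table name and thresholds here are illustrative assumptions, not recommendations:

```sql
-- Vacuum/analyze after ~1% of rows change, instead of the global defaults
-- (20% / 10%):
ALTER TABLE orders SET (
  autovacuum_vacuum_scale_factor  = 0.01,
  autovacuum_analyze_scale_factor = 0.01
);
```

Bloat that already exists can then be removed online, e.g. with `pg_repack --table=orders mydb`.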
- The second run of a query is much faster than the first one
  - cache optimization: ...
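One hedged option for the cold-cache case: the `pg_prewarm` contrib module loads a relation into the buffer cache ahead of time, so the first run does not pay the cold-cache penalty (table name is hypothetical):

```sql
CREATE EXTENSION IF NOT EXISTS pg_prewarm;
SELECT pg_prewarm('orders');  -- returns the number of blocks loaded
```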
- Cannot connect to Postgres: "FATAL: sorry, too many clients already"
  - can we determine which user/app holds a lot of connections?
  - are connections being closed properly?
  - check the states of sessions (idle in transaction? => set a timeout for idle-in-transaction sessions)
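The checks above can be sketched with a query against `pg_stat_activity` plus the `idle_in_transaction_session_timeout` setting (available since Postgres 9.6); the 5-minute value is an assumption, not a recommendation:

```sql
-- Who is connected, from which app, and in what state?
SELECT usename, application_name, state, count(*)
FROM pg_stat_activity
GROUP BY 1, 2, 3
ORDER BY count(*) DESC;

-- Automatically terminate sessions stuck "idle in transaction":
ALTER SYSTEM SET idle_in_transaction_session_timeout = '5min';
SELECT pg_reload_conf();
```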
- Recently, the overall performance became significantly worse than in previous months
  - the DB size grows quickly and the cache is no longer as effective? => more memory | memory-related parameter tuning | partitioning | ...
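To check whether the cache is really becoming less effective, a rough buffer-cache hit ratio can be computed from `pg_stat_database`; the ~0.99 threshold is a common rule of thumb for OLTP workloads, not a hard rule:

```sql
SELECT sum(blks_hit)::numeric
       / nullif(sum(blks_hit) + sum(blks_read), 0) AS cache_hit_ratio
FROM pg_stat_database;
```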
- "I need only a few rows from a table, there is a proper index to speed it up, but a lot of rows are scanned with SeqScan or IndexScan, and I see 'Rows Removed by Filter: N', where N is a very big number"
  - is `random_page_cost` at its default (4)? Lower values (1-1.5) might be much better
  - use the loose index scan technique?
- Index scan is slow
  - ...
  - check the index size: is it too big (compared to the table)? => partial indexes | fewer columns | BRIN | ...
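Two of the options above sketched on a hypothetical `orders` table, assuming only `'pending'` rows are ever queried and that the table is large and roughly insert-ordered by `created_at`:

```sql
-- Partial index: covers only the rows the queries actually touch:
CREATE INDEX CONCURRENTLY ON orders (created_at)
  WHERE status = 'pending';

-- BRIN index: tiny, suited to large tables with natural physical ordering:
CREATE INDEX ON orders USING brin (created_at);

-- Compare table size vs. total index size:
SELECT pg_size_pretty(pg_relation_size('orders')) AS table_size,
       pg_size_pretty(pg_indexes_size('orders'))  AS indexes_size;
```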
- `select ... for update` is slow
  - can we use `select ... for update skip locked` in this case?
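`SKIP LOCKED` (Postgres 9.5+) is the standard queue-worker pattern: a worker grabs an unprocessed row without waiting on rows already locked by other workers. The table and columns below are hypothetical:

```sql
BEGIN;
SELECT id FROM jobs
WHERE processed = false
ORDER BY id
LIMIT 1
FOR UPDATE SKIP LOCKED;  -- locked rows are silently skipped, not waited on
-- ... process the job, set processed = true ...
COMMIT;
```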
- The query is slow, there are huge differences between "Planned rows" and "Actual rows" in the plan
  - increase `default_statistics_target`, or do `alter table ... alter column ... set statistics` for the corresponding tables/columns?
  - use multi-column (extended) statistics?
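Both options sketched on a hypothetical `orders` table; `CREATE STATISTICS` (extended, multi-column statistics) is available since Postgres 10:

```sql
-- Collect a larger histogram for one column the planner misestimates:
ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 1000;
ANALYZE orders;

-- Teach the planner about correlated columns:
CREATE STATISTICS orders_dep (dependencies)
  ON customer_id, status FROM orders;
ANALYZE orders;
```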
- A query is not so fast; there are IndexOnlyScans, but with "Heap Fetches: N" where N is rather big
  - the visibility map is outdated; tune autovacuum to be more aggressive (vacuum scale factor) + perform a manual VACUUM
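Index Only Scans fall back to heap fetches for pages not marked all-visible, and it is VACUUM that updates the visibility map. A minimal sketch, with a hypothetical table and an illustrative threshold:

```sql
-- Refresh the visibility map now:
VACUUM (VERBOSE) orders;

-- Make autovacuum vacuum this table more often than the 20% default:
ALTER TABLE orders SET (autovacuum_vacuum_scale_factor = 0.01);
```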
- Planning time for a query is too big
  - reduce the amount of statistics? (`alter table ... alter column ... set statistics` with lower values)
  - simplify the query
- `alter table ... add column ... default ...` runs quickly on dev but hangs on prod
  - `alter table ... add column` without a default, then `alter table ... alter column ... set default`
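Note that since Postgres 11, `ADD COLUMN` with a non-volatile default is a metadata-only change; the split below matters on Postgres 10 and older, where the one-step form rewrites the whole table under an `ACCESS EXCLUSIVE` lock. Table and column are hypothetical:

```sql
-- Instead of:
--   ALTER TABLE orders ADD COLUMN flag boolean DEFAULT false;  -- table rewrite

-- Fast, metadata-only steps (old rows stay NULL until backfilled):
ALTER TABLE orders ADD COLUMN flag boolean;
ALTER TABLE orders ALTER COLUMN flag SET DEFAULT false;

-- Backfill in small batches to avoid long-held locks:
UPDATE orders SET flag = false
WHERE flag IS NULL AND id < 10000;  -- repeat for subsequent id ranges
```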
- "N+1 SELECT" problem (e.g., an ORM generates N queries in a loop instead of a single query returning the whole result set)
  - rewrite the app's code (get rid of SQL inside loops)
  - https://www.sitepoint.com/silver-bullet-n1-problem/, etc. (google it)
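A sketch of the rewrite, assuming hypothetical `users` and `orders` tables; the point is one round trip instead of N:

```sql
-- N+1 pattern: the app executes this once per user id, in a loop:
--   SELECT * FROM orders WHERE user_id = $1;

-- Single round trip, ids bound as one array parameter:
SELECT * FROM orders WHERE user_id = ANY($1::int[]);

-- Or a join, letting the database do the lookup:
SELECT u.id, o.*
FROM users u
JOIN orders o ON o.user_id = u.id;
```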
- Too extensive logging on the Postgres server, leading to high I/O and affecting the server's performance
  - raise `log_min_duration_statement`, turn off `auto_explain.log_timing`, etc.
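Both knobs can be changed without a restart; the 1-second threshold below is an illustrative assumption, and `auto_explain.log_timing` only exists when the `auto_explain` module is loaded:

```sql
-- Log only statements slower than 1s:
ALTER SYSTEM SET log_min_duration_statement = '1s';

-- Stop collecting per-node timing in auto_explain output (it is costly):
ALTER SYSTEM SET auto_explain.log_timing = off;

SELECT pg_reload_conf();
```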