How dangerous is Postgres bloat?

The Question:

A table contains 300 MB of bloat, a bit less than 20% of the table's rows.
Autovacuum will clean it up in a few days, by which time it may have grown to 350-400 MB.
Disk space is not a problem.

What is the impact of this bloat on my production system? It seems it should be evicted from the cache since it is not queried, but is the bloat in RAM too?

Does it affect latency, CPU usage, or anything other than space on disk?
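
For reference, a bloat figure like the one above can be obtained with the pgstattuple contrib extension. This is a minimal sketch, assuming the extension is available and using 'mytable' as a placeholder table name:

    CREATE EXTENSION IF NOT EXISTS pgstattuple;

    -- dead_tuple_len plus free_space approximates the bloat in bytes;
    -- dead_tuple_percent and free_percent relate it to the total table size
    SELECT pg_size_pretty(dead_tuple_len + free_space) AS bloat,
           dead_tuple_percent,
           free_percent
    FROM pgstattuple('mytable');

Note that pgstattuple reads the whole table, so it is not free to run on large relations.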

The Solutions:

Below are the methods you can try. The first solution is probably the best; try the others if it doesn't work. Senior developers don't just copy and paste – they read the methods carefully and apply them judiciously to each case.

Method 1

That amount of bloat is no problem; it is the “wriggle room” a healthy table needs, since freed space inside the table is reused for new and updated rows.

In general, the impact of bloat on your database is:

  • it wastes disk space

  • it slows down sequential scans (but not index scans)

  • it wastes RAM used for caching empty space (see the sketch below for a way to check this)


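As for the RAM point, you can check what a table actually occupies in shared_buffers with the pg_buffercache contrib extension. The query below is a minimal sketch adapted from the PostgreSQL documentation; the 10-row limit and the 8 kB default block size are assumptions, not part of the original answer:

    CREATE EXTENSION IF NOT EXISTS pg_buffercache;

    -- count how many shared buffers each relation in the current database occupies
    SELECT n.nspname,
           c.relname,
           count(*)                        AS buffers,
           pg_size_pretty(count(*) * 8192) AS cached   -- assumes the default 8 kB block size
    FROM pg_buffercache b
    JOIN pg_class c
      ON b.relfilenode = pg_relation_filenode(c.oid)
     AND b.reldatabase IN (0, (SELECT oid FROM pg_database
                               WHERE datname = current_database()))
    JOIN pg_namespace n ON n.oid = c.relnamespace
    GROUP BY n.nspname, c.relname
    ORDER BY buffers DESC
    LIMIT 10;

A bloated table that is not being queried will typically show few buffers here, because pages that are never read are not kept in the cache.
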
All methods were sourced from stackoverflow.com or stackexchange.com and are licensed under CC BY-SA 2.5, CC BY-SA 3.0, or CC BY-SA 4.0.
