The question:
I have a huge table from which I cannot delete rows; I can only update the columns that store huge base64 data, setting them to null to try to release space.
So I wrote a script that sets all of those base64 images to null, expecting the space to be released after VACUUM.
The images are now null and VACUUM has run, but the table is still exactly the same size. I was pretty sure the space would be released immediately, so what am I doing wrong?
Will VACUUM FULL be able to release the space from the huge varchar data that I updated to null? (I ask because I will need to lock the table to do this, so I need to be sure.)
The dump size decreased by a factor of ten, so I expect similar behavior from the database size.
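For reference, the procedure described above boils down to something like the following sketch; the documents table and image_base64 column are hypothetical stand-ins for the real schema.

```sql
-- Null out the large base64 payloads (hypothetical table/column names):
UPDATE documents
SET image_base64 = NULL
WHERE image_base64 IS NOT NULL;

-- Plain vacuum: reclaims dead-row space for reuse inside the table,
-- but does not shrink the file on disk:
VACUUM documents;

-- Check the table's on-disk size:
SELECT pg_size_pretty(pg_total_relation_size('documents'));
```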
The Solutions:
Method 1
Short answer – Yes.
When you update rows in PostgreSQL, each UPDATE writes a new version of the row, and the old version is kept inside the table as a "dead" row.
- Plain VACUUM does not shrink the table. It makes the space occupied by these dead rows available for reuse by later inserts and updates, but the table file keeps its current size.
- VACUUM FULL rewrites the table without the dead rows. That is what releases the space back to the operating system (see the sketch after this list).
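To see the difference, here is a minimal sketch, reusing the hypothetical documents table from the question; the catalog views and commands are standard PostgreSQL.

```sql
-- Dead row versions left behind by the UPDATE show up in the statistics
-- (the counts are approximate, maintained by the stats system):
SELECT n_live_tup, n_dead_tup
FROM pg_stat_user_tables
WHERE relname = 'documents';

-- Plain VACUUM marks the dead rows' space as reusable,
-- but the file size stays (roughly) the same:
VACUUM documents;
SELECT pg_size_pretty(pg_total_relation_size('documents'));

-- VACUUM FULL rewrites the table into a new, compact file,
-- returning the freed space to the operating system:
VACUUM FULL documents;
SELECT pg_size_pretty(pg_total_relation_size('documents'));
```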
And yes: the table will be locked while this is happening (VACUUM FULL takes an ACCESS EXCLUSIVE lock, which blocks both reads and writes).
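If waiting on that lock indefinitely is a concern, one option (a sketch, not part of the original answer) is to cap the lock wait with lock_timeout, so the command fails fast instead of queueing behind other sessions:

```sql
-- VACUUM FULL needs an ACCESS EXCLUSIVE lock; while it runs, all reads
-- and writes on the table are blocked. Abort instead of waiting forever:
SET lock_timeout = '10s';
VACUUM FULL documents;  -- errors out if the lock isn't acquired within 10s
```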
See the PostgreSQL Wiki page on “Vacuum Full”.
All methods were sourced from stackoverflow.com or stackexchange.com and are licensed under CC BY-SA 2.5, CC BY-SA 3.0, and CC BY-SA 4.0.