I would like to logically replicate pg_catalog tables from many (hundreds of) databases to a single cluster so I can reliably compare schemas via query. I have tried FDW (and dblink), but found that network instability would at times leave me with unsatisfactory results. To combat that problem I tried materializing the FDW queries, but scheduling so many refreshes was a pain. I'd really rather just replicate, if at all possible.
No, that is not possible. For one thing, logical replication requires the destination table to have the same name and live in the same schema, so the catalog tables from hundreds of databases would all collide. Trigger-based replication is also not an option, because you cannot create triggers on catalog tables.

Foreign tables are your only choice. If the connection is unstable, put a materialized view on top of each foreign table and refresh it regularly; that way you always have at least the most recent snapshot available locally, even while the network is down.
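A minimal sketch of that setup for one remote database, assuming postgres_fdw; the server name, host, schema name, and credentials (`db042_srv`, `db042`, `monitor`) are placeholders you would generate per source database:

```sql
CREATE EXTENSION IF NOT EXISTS postgres_fdw;

-- One foreign server (and one local schema) per source database,
-- so identically named catalog tables don't collide.
CREATE SERVER db042_srv FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'db042.example.com', dbname 'appdb');

CREATE USER MAPPING FOR CURRENT USER SERVER db042_srv
    OPTIONS (user 'monitor', password 'secret');

CREATE SCHEMA db042;

-- Map the remote catalog table; only the columns you need for
-- schema comparison have to be declared.
CREATE FOREIGN TABLE db042.pg_class (
    relname      name,
    relnamespace oid,
    relkind      "char"
) SERVER db042_srv
  OPTIONS (schema_name 'pg_catalog', table_name 'pg_class');

-- Snapshot it locally; queries against the matview never touch
-- the network, so they survive outages.
CREATE MATERIALIZED VIEW db042.pg_class_snap AS
    SELECT * FROM db042.pg_class;

-- Run on a schedule (cron, pg_cron, etc.):
REFRESH MATERIALIZED VIEW db042.pg_class_snap;
```

If a scheduled refresh fails because the remote is unreachable, the materialized view simply keeps its previous contents, which is exactly the "most recent snapshot" behavior described above.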
All methods were sourced from stackoverflow.com or stackexchange.com and are licensed under CC BY-SA 2.5, CC BY-SA 3.0, or CC BY-SA 4.0.