[Slony1-general] Feature Idea: improve performance when a large sl_log backlog exists
Tue Nov 23 06:31:33 PST 2010
This idea is to try to deal with the following problem. A user has a multi-node cluster and needs to bring one of the nodes down for a while (say, a few days). During this time Slony collects the transactions to replicate in sl_log_1 and sl_log_2, and whichever log table is active will bloat to be very large. Generating SYNCs takes a lot longer, and reading those SYNCs for the other nodes can take a lot longer as well. Slony can get into a state where it can't keep up with, or catch up on, replication because the sl_log table is so large.

Does this problem bite people often enough in the real world for us to devote effort to fixing it?

One idea: instead of having exactly two sl_log tables, we could have a dynamic number of them. During a log switch, if no sl_log table is ready to be truncated, we would create a new one instead. This would complicate the logic where we select from sl_log_1 and UNION with sl_log_2, since that query would have to be built over however many log tables exist at the time.
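For illustration only, here is a hand-simplified sketch of the statement the remote worker effectively runs today and of how it might generalize. The real cursor carries xid-range and SYNC qualifications that are omitted here, and the column list is abbreviated:

    -- Today: the two log tables are hardwired into one statement.
    SELECT log_origin, log_actionseq, log_cmdtype, log_cmddata
      FROM sl_log_1
     WHERE log_origin = 1          -- plus xid/SYNC range filters
    UNION ALL
    SELECT log_origin, log_actionseq, log_cmdtype, log_cmddata
      FROM sl_log_2
     WHERE log_origin = 1
     ORDER BY log_actionseq;

    -- With a dynamic set of log tables, the worker would build the
    -- same statement at runtime, appending one UNION ALL arm per
    -- currently active table (sl_log_3, sl_log_4, ...), driven by
    -- some catalog of live log tables rather than hardwired names.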