Wed Jun 15 09:18:19 PDT 2005
On Tue, 2005-06-14 at 12:04 -0400, Christopher Browne wrote:
> "Tang, Jason" <jason.tang at teamuk.telstra.com> writes:
> > Would I be assuming correctly that a node can belong to more than one
> > cluster? It would mean I just have to call slonik twice with commands
> > describing a different clustername, but can reuse the node ids?
>
> That seems right, yes.
>
> The cluster-specific stuff resides in the _clustername namespace, so
> there's no reason why you couldn't have a particular database
> participating in multiple clusters.
>
> I don't think anyone has tried it, so you may uncover new ground, and
> new problems.

I'm doing it and, yes, I did uncover new ground, and new problems. ;)

The main problem is (in Slony 1.0.5):

1) The xxid datatype has problems when conversion operations are installed
   in the 2nd cluster's schema, which requires manual tweaking. (I did the
   actual adding of the 2nd cluster via pg_dump/reload, so it may be
   connected to that.)

OTOH, having many sets replicating between dissimilar databases has its
own problems:

1) EXECUTE SCRIPT often fails on the nodes where some of the
   tables/sequences/whatever are not present, requiring manual deletion of
   the event from sl_log_1 after it has been applied where it can be. This
   is due to the fact that EXECUTE SCRIPT is tied to a master node rather
   than to a set of tables, and thus it tries to EXECUTE the SCRIPT on
   _all_ other nodes, even the ones where it fails due to missing objects.

2) Switching over the master node for several sets has sync problems,
   which also need manual tweaking of sl_log_1 and other sl_* tables to
   solve.

This is on a 4-node cluster with 10 sets, each with a different master and
set of subscribers.

Sorry for not reporting it, but this has been in high-load (and
high-stress :) ) situations, and it is not easy to reproduce in a way that
provides a failing test case.

So my recommendation is rather to use one slon daemon _per_replication_set_
if you see a need to switch over some node which is master for several
dissimilar sets and possibly also a slave for some other set. I'm not yet
doing it myself, so it may have some other set of problems.

> What I would be concerned about is that this provides more
> opportunities for locks/blocks/deadlocks as you'll have more slon
> processes connecting in concurrently.

This should not be a problem, as each daemon has its own schema and
tables. It might be a problem for db schema extraction scripts, though.

--
Hannu Krosing <hannu at skype.net>
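
[Editorial sketch, not part of the original mail: a rough, untested
illustration of the setup discussed above, one node participating in two
Slony-I clusters with reused node ids, plus one slon daemon per cluster.
The cluster names, node ids, and connection strings are made up, and the
set/subscribe commands are omitted.]

    #!/bin/sh
    # Hypothetical example: the same two databases take part in two
    # independent Slony-I clusters, reusing node ids 1 and 2.  Each
    # cluster name creates its own _clustername namespace in the
    # databases, so the two configurations do not collide.

    NODE1='dbname=appdb host=db1 user=slony'
    NODE2='dbname=appdb host=db2 user=slony'

    for CLUSTER in cluster_a cluster_b; do
        slonik <<EOF
    cluster name = $CLUSTER;
    node 1 admin conninfo = '$NODE1';
    node 2 admin conninfo = '$NODE2';

    init cluster (id = 1, comment = 'node 1 in $CLUSTER');
    store node (id = 2, comment = 'node 2 in $CLUSTER');
    store path (server = 1, client = 2, conninfo = '$NODE1');
    store path (server = 2, client = 1, conninfo = '$NODE2');
    EOF
    done

    # One slon daemon per cluster name per node.  With one cluster per
    # replication set (the recommendation above), a switchover in one
    # set only involves that cluster's daemons and sl_* tables.
    slon cluster_a "$NODE1" &
    slon cluster_b "$NODE1" &
    slon cluster_a "$NODE2" &
    slon cluster_b "$NODE2" &

[Set creation and SUBSCRIBE SET would then be issued per cluster in the
same way, against whichever cluster name that set belongs to.]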