Wed Oct 17 15:45:21 PDT 2012
- Previous message: [Slony1-hackers] Failover never completes
- Next message: [Slony1-hackers] Failover never completes
- Messages sorted by: [ date ] [ thread ] [ subject ] [ author ]
On 10/17/2012 03:35 PM, Jan Wieck wrote:
> Please elaborate on those constraints. Does that mean you cannot deploy
> any binaries on an existing, running master?

Not my choice, but yes.

> If that is the case, you could deploy the 2.1.2 binaries but not use
> them yet on all replicas. Switch over to one of them (still using 2.1.0)
> to deploy the 2.1.2 on the previous master (now replica). Then use the
> regular Slony upgrade mechanism from there.

The environment is strictly controlled, and binaries are only deployed through the original RPMs in the repo when the machine was provisioned. This situation might force a reevaluation of that. But in any case this is the least of our problems, unless you can tell me that 2.1.2 won't have the same problem when we fail over.

>> At the moment we are testing with clusters that are all running 2.1.0.
>> It is in this configuration where failover is failing.
>
> People need to stop using FAILOVER when there is actually no physical
> problem with the existing master node. What you probably want to do is a
> controlled MOVE SET instead.

Not possible. We MUST fail over, because when we are all done the original master will be taken out of service. If we do a MOVE SET we cannot take the old node out of service.

>> We could possibly test a cluster with all 2.1.2, which might be
>> instructive, especially if it turns out that the problem we are running
>> into is solved in 2.1.2. However we would still have the challenge of
>> getting from existing 2.1.0 clusters to 2.1.2 clusters without excessive
>> downtime.
>
> Stopping the slon processes, running the update functions slonik script
> and starting the slon processes again doesn't take hours.

As I said, we are verboten from doing in-place upgrades of binaries. So to do this upgrade we would need to stop the entire cluster, pg_dump the master, restore to a new master, and then bring up a new cluster. That takes too long.
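For anyone following along, the MOVE SET vs. FAILOVER distinction under discussion can be sketched in slonik. The cluster name, node IDs, and conninfo strings below are placeholders, not our actual configuration:

    cluster name = testcluster;
    node 1 admin conninfo = 'dbname=test host=oldmaster user=slony';
    node 2 admin conninfo = 'dbname=test host=newmaster user=slony';

    # Controlled handover: requires the old origin to be alive and reachable.
    lock set (id = 1, origin = 1);
    move set (id = 1, old origin = 1, new origin = 2);
    wait for event (origin = 1, confirmed = 2, wait on = 1);

    # Unilateral promotion: for when the old origin is being abandoned.
    failover (id = 1, backup node = 2);
    drop node (id = 1, event node = 2);

After MOVE SET the old origin remains in the cluster as a subscriber; after FAILOVER plus DROP NODE it is gone entirely, which is why FAILOVER is what we need for decommissioning the original master.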
Joe

--
Joe Conway
credativ LLC: http://www.credativ.us
Linux, PostgreSQL, and general Open Source
Training, Service, Consulting, & 24x7 Support