Mon May 26 03:00:19 PDT 2008
I'm not sure that would work; if N1 was subscribed to S and S subscribed to M,
then the replication clusters would be separate, wouldn't they?

Can you have downtime on the subscribers? Maybe you could take down S, create a
new temporary S at the same site as M, replicate it, back up the data
directory, send it over to the site where the real S is, restore the data, and
bring S back up.

----- Original Message ----
> From: Martin Eriksson <M.Eriksson at albourne.com>
> To: slony1-general at lists.slony.info
> Sent: Monday, 26 May, 2008 8:23:47 AM
> Subject: Re: [Slony1-general] pg_dump/pg_restore keeping all slony stuff intact?
>
> That doesn't sound too encouraging!
>
> Maybe I'll try the following:
>
> I've got Master node M and Slave node S.
>
> On the machine where I've got node S, I create a new Postgres installation,
> start up a second postmaster, and create a new database on it called N1.
>
> I start up a slon daemon for N1, telling it that S is the master; since it's
> all local, it should replicate all the data across to N1 pretty fast. Then,
> once it's all synced up, I switch it over to use M as the master instead of S
> and cut S out of the cluster, leaving N1 in its place.
>
> I'd set up a new site in a similar way: create a new Postgres installation on
> a machine that already has a slave node, replicate it over the same way as
> above, then copy the whole data directory onto a portable hard drive, take it
> to the new location, copy it in, and fire it up as a new node (that would
> require some restarts of the master node, I guess, so it knows about the
> added slave), but hopefully doable?
>
> ----- Original Message -----
> From: "Glyn Astill"
> To: "Martin Eriksson" , slony1-general at lists.slony.info
> Sent: Monday, May 26, 2008 9:59:12 AM GMT +02:00 Athens, Beirut, Bucharest, Istanbul
> Subject: Re: [Slony1-general] pg_dump/pg_restore keeping all slony stuff intact?
>
> I looked into this briefly a few weeks ago.
>
> From what I could see, Slony references the OIDs of individual objects, and
> since it's not possible to maintain object OIDs across a pg_dump and
> pg_restore, the Slony schema is in effect "shagged".
>
> ----- Original Message ----
> > From: Martin Eriksson
> > To: slony1-general at lists.slony.info
> > Sent: Monday, 26 May, 2008 7:25:34 AM
> > Subject: [Slony1-general] pg_dump/pg_restore keeping all slony stuff intact?
> >
> > Hi people,
> >
> > I'm wondering if there is a way to shut down a slave Slony node, take a
> > dump of the database on that node with pg_dump, drop that database, restore
> > it with pg_restore, and then happily just fire up Slony again.
> >
> > I tried this and it did not work.
> >
> > The reason I ask is that we've got a few slave nodes distributed around the
> > world, and some of them are on rather slow links, mainly due to their
> > geographic location. The database is around 40 GB, so just re-creating a
> > node and letting it replicate across is not possible; the slave node will
> > never catch up. Replicating 40 GB across to these nodes would take several
> > days, and that is not really an option.
> >
> > We are trying to do two things:
> >
> > 1. Upgrade to Postgres 8.3.1 (currently on 8.2.4).
> > 2. Add a new slave node in a remote location (it's a lot faster to FedEx a
> >    hard drive with the dump and pg_restore it there than the full
> >    replication would ever be).
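For what it's worth, a rough slonik sketch of the plan quoted above (let N1
pull its initial copy from S over the local link, then repoint it at M) might
look something like the snippet below. The node IDs (M = 1, S = 2, N1 = 3),
the set ID, hosts, and ports are made up for illustration, and the cluster
name is only guessed from the "_db_cluster" schema in the error further down;
it also assumes node 3 has already been added with STORE NODE and STORE PATH:

    #!/bin/sh
    # Sketch only: node 1 = M, node 2 = S, node 3 = N1 (a second postmaster on
    # S's machine, here on port 5433). All names and IDs are placeholders.

    # Step 1: subscribe N1 to S, so the initial 40 GB COPY stays on the local link.
    slonik <<'EOF'
    cluster name = db_cluster;
    node 1 admin conninfo = 'dbname=db host=m-host user=slony';
    node 2 admin conninfo = 'dbname=db host=s-host user=slony';
    node 3 admin conninfo = 'dbname=db host=s-host port=5433 user=slony';
    subscribe set (id = 1, provider = 2, receiver = 3, forward = no);
    EOF

    # Step 2: once node 3 has caught up, re-issue the subscription with M as the
    # provider -- resubscribing is how a subscriber's provider gets changed.
    slonik <<'EOF'
    cluster name = db_cluster;
    node 1 admin conninfo = 'dbname=db host=m-host user=slony';
    node 2 admin conninfo = 'dbname=db host=s-host user=slony';
    node 3 admin conninfo = 'dbname=db host=s-host port=5433 user=slony';
    subscribe set (id = 1, provider = 1, receiver = 3, forward = no);
    EOF

The resubscribe in step 2 only helps if N1 is a node in the same cluster as M,
which is exactly the doubt raised at the top of this message, so treat it as a
starting point rather than a tested recipe.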
> > But as I said, I've had no luck using pg_dump and pg_restore to re-create
> > a wiped slave node :(
> >
> > What I've tried so far on a test system is:
> >
> > 1. Replicate the database (1 master, 1 slave).
> > 2. Verify replication is working correctly.
> > 3. Shut down Slony (for both slave and master, using slon_kill).
> > 4. Use "pg_dump -Z 9 -Fc -o db > /tmp/slonyDump.dmp" on the slave db
> >    (tried without the -o as well).
> > 5. Re-create the db and then load the dump into it with
> >    "pg_restore -Fc -d db /tmp/slonyDump.dmp".
> >
> > Then I try to just start it again, with slon_start for both master and
> > slave, but I get:
> >
> > 2008-05-23 18:35:30 EEST ERROR remoteWorkerThread_1: "select
> > "_db_cluster".sequenceSetValue(1,1,'581','32'); " PGRES_FATAL_ERROR
> > ERROR:  Slony-I: sequenceSetValue(): sequence 1 not found
> >
> > So maybe either my pg_dump or my pg_restore was not doing a complete job.
> >
> > I'm open to trying more or less anything, so please, if anyone has ideas
> > for a way to restore a database that will let Slony start up without
> > forcing a full replication, let me know.
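On the sequenceSetValue() error: assuming the usual Slony-I 1.x catalogs,
sl_table and sl_sequence record each replicated object's pg_class OID
(tab_reloid / seq_reloid), and after a dump/restore those OIDs no longer point
at anything, which is presumably why "sequence 1 not found" comes up. A quick
check along these lines, run with psql against the restored slave, should show
the orphaned entries (the schema name is taken from the error above, the
database name is a placeholder):

    #!/bin/sh
    # Sketch only: list Slony catalog rows whose recorded OIDs no longer match
    # a live relation in pg_class after the pg_restore.
    psql -d db <<'EOF'
    SELECT seq_id, seq_reloid
      FROM "_db_cluster".sl_sequence s
     WHERE NOT EXISTS (SELECT 1 FROM pg_class c WHERE c.oid = s.seq_reloid);

    SELECT tab_id, tab_reloid
      FROM "_db_cluster".sl_table t
     WHERE NOT EXISTS (SELECT 1 FROM pg_class c WHERE c.oid = t.tab_reloid);
    EOF

If those come back non-empty on the restored copy but empty on a freshly
subscribed node, that matches the OID explanation quoted above.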