Martin Eriksson M.Eriksson at albourne.com
Mon May 26 03:51:39 PDT 2008
Yeah, I guess that is true :/

Setting up a temp database next to the master, adding a new temp node with 8.3.1 and replicating to it, then copying the data directory onto some mobile device and taking it to the other site, installing 8.3.1 there and just dropping it into the data directory... then switching that node over to the new one and letting it catch up on the replication.
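For the "copy the data directory" step, a rough sketch of what I mean (paths and the tar-to-USB part are just placeholders; the important bits are a clean shutdown of the temp instance first, and identical 8.3.1 binaries and architecture at the other site):

# stop the temporary 8.3.1 instance cleanly so the data directory is consistent
pg_ctl -D /var/lib/postgresql/8.3/tempnode stop -m fast

# archive the whole data directory onto the portable drive
tar czf /mnt/usb/tempnode-data.tar.gz -C /var/lib/postgresql/8.3 tempnode

# at the other site, after installing the same 8.3.1 build:
tar xzf /mnt/usb/tempnode-data.tar.gz -C /var/lib/postgresql/8.3
pg_ctl -D /var/lib/postgresql/8.3/tempnode start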

It's just that when you talk to people and you mention copying the data directory, most Postgres people frown big time... and I agree it's not exactly a "nice" way of doing it, but it sort of feels like the only way currently...

We can allow for some downtime. For a specific slave node we could allow up to 48h, as other nodes can step in (it will just be slower for users), and the main master we could have down for 4-6h if properly scheduled. So at least we are not working under zero-downtime pressure; some downtime is allowed if it is managed properly.



----- Original Message -----
From: "Glyn Astill" <glynastill at yahoo.co.uk>
To: "Martin Eriksson" <M.Eriksson at albourne.com>, slony1-general at lists.slony.info
Sent: Monday, May 26, 2008 1:00:19 PM GMT +02:00 Athens, Beirut, Bucharest, Istanbul
Subject: Re: [Slony1-general] pg_dump/pg_restore keeping all slony stuff intact?

I'm not sure that would work. If N1 was subscribed to S and S was subscribed to M, then the replication clusters would be separate, wouldn't they?

Can you have downtime on the subscribers? Maybe you could take down S, create a new temporary S at the same site as M, replicate it, back up the data directory, send it over to the site where the real S is, restore the data, and bring S back up.
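Something like this slonik script could set up the temporary node beside M (node ids, conninfo strings and the second instance's port are made up for illustration; the cluster name "db_cluster" is a guess based on the error message further down):

slonik <<'EOF'
cluster name = db_cluster;
node 1 admin conninfo = 'dbname=db host=master_host port=5432';
node 3 admin conninfo = 'dbname=db host=master_host port=5433';
# register the temporary node and the paths between it and M
store node (id = 3, comment = 'temporary copy of S', event node = 1);
store path (server = 1, client = 3, conninfo = 'dbname=db host=master_host port=5432');
store path (server = 3, client = 1, conninfo = 'dbname=db host=master_host port=5433');
# subscribe it to the set so it fills up over the fast local link
subscribe set (id = 1, provider = 1, receiver = 3, forward = no);
EOF

Once it's in sync, stop the temporary postmaster, ship its data directory over, start it up at S's site, and re-run STORE PATH with the new conninfo so the other nodes can find it there.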

----- Original Message ----
> From: Martin Eriksson <M.Eriksson at albourne.com>
> To: slony1-general at lists.slony.info
> Sent: Monday, 26 May, 2008 8:23:47 AM
> Subject: Re: [Slony1-general] pg_dump/pg_restore keeping all slony stuff intact?
> 
> That doesn't sound too encouraging!
> 
> maybe I'll try the following:
> 
> I've got Master node M and Slave node S.
> 
> On the machine where I've got node S, I create a new Postgres installation and start
> up a second Postgres instance, creating a new database on it and calling it N1.
> 
> Start up a slon daemon for N1 saying that S is the master, and since it's all
> local it goes pretty fast to replicate all the data across to N1. Then when it's
> all synced up, I switch it over to use M as the master instead of S and
> cut S out of the cluster, leaving N1 in its place.
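> As a rough sketch (node ids, conninfo strings and the cluster name are
> placeholders, and the paths between M and N1 would already have to be stored),
> the switch could be something like:
> 
> slonik <<'EOF'
> cluster name = db_cluster;
> node 1 admin conninfo = 'dbname=db host=M';
> node 2 admin conninfo = 'dbname=db host=S';
> node 3 admin conninfo = 'dbname=db host=S port=5433';
> # re-subscribe N1 with M as its provider, then remove S from the cluster
> subscribe set (id = 1, provider = 1, receiver = 3, forward = no);
> drop node (id = 2, event node = 1);
> EOF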
> 
> And to set up a new site in a similar way: create a new Postgres installation on
> a machine that already has a slave node, replicate it over the same way as above,
> then just copy the whole data directory onto a portable hard drive, take it to
> the new location, copy it in and fire it up as a new node (it would require
> some restarts of the master node, I guess, so it knows about the added slave),
> but hopefully doable?
> 
> 
> ----- Original Message -----
> From: "Glyn Astill" 
> To: "Martin Eriksson" , slony1-general at lists.slony.info
> Sent: Monday, May 26, 2008 9:59:12 AM GMT +02:00 Athens, Beirut, Bucharest, Istanbul
> Subject: Re: [Slony1-general] pg_dump/pg_restore keeping all slony stuff intact?
> 
> I looked into this briefly a few weeks ago.
> 
> From what I could see, Slony references the OIDs of individual objects, and
> since it's not possible to maintain object OIDs across a pg_dump and pg_restore,
> the Slony schema is in effect "shagged".
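> For what it's worth, those OID references are visible in the Slony catalog on any
> node, e.g. (assuming the cluster is called "db_cluster", as the error message
> below suggests):
> 
> psql -d db -c "SELECT tab_id, tab_reloid FROM _db_cluster.sl_table;"
> psql -d db -c "SELECT seq_id, seq_reloid FROM _db_cluster.sl_sequence;"
> 
> Those reloid values point at pg_class rows in the old database, so after a dump
> and restore they no longer match anything.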
> 
> ----- Original Message ----
> > From: Martin Eriksson 
> > To: slony1-general at lists.slony.info
> > Sent: Monday, 26 May, 2008 7:25:34 AM
> > Subject: [Slony1-general] pg_dump/pg_restore keeping all slony stuff intact?
> > 
> > Hi people,
> > 
> > I'm wondering if there is a way to shut down a slave Slony node, take a dump
> > of the database on that node using pg_dump, drop that database, then use
> > pg_restore to restore the database, and then happily just fire up Slony again.
> > 
> > I tried this and it did not work.
> > 
> > The reason why I ask is that we've got a few slave nodes distributed around the
> > world and some nodes are on rather slow links, mainly due to their geographic
> > location. The database is around 40 gigs, so just re-creating the node and
> > letting it replicate across is just not possible; the slave node will never
> > catch up. Replicating 40 gigs across to these nodes will take several days and
> > that is not really an option.
> > 
> > We are trying to do two things:
> > 
> > 1. Upgrade to Postgres 8.3.1 (currently on 8.2.4).
> > 2. Add a new slave node in a remote location. (It's a lot faster to FedEx a
> > hard drive with the dump and pg_restore it there than the full replication
> > would ever be.)
> > 
> > But as I said, I've had no luck using pg_dump and pg_restore to re-create a
> > wiped slave node :(
> > 
> > What I've tried so far on a test system is:
> > 
> > 1. Replicate the database (1 master, 1 slave).
> > 2. Verify replication is working correctly.
> > 3. Shut down Slony (for both slave and master, using slon_kill).
> > 4. Use "pg_dump -Z 9 -Fc -o db > /tmp/slonyDump.dmp" on the slave db (tried
> > without the -o as well).
> > 5. Re-create the db and then load the dump into it with "pg_restore -Fc -d db
> > /tmp/slonyDump.dmp".
> > 
> > Then I try to just start it again, with slon_start for both master and
> > slave.
> > 
> > but I get:
> > 
> > 2008-05-23 18:35:30 EEST ERROR  remoteWorkerThread_1: "select 
> > "_db_cluster".sequenceSetValue(1,1,'581','32'); " PGRES_FATAL_ERROR ERROR:  
> > Slony-I: sequenceSetValue(): sequence 1 not found
> > 
> > So maybe either my pg_dump or my pg_restore was not doing a complete job.
> > 
> > I'm open to trying more or less anything, so please, does anyone have ideas for
> > a way to restore a database that will make Slony start up and not force a
> > full replication?
> 
> 
> 



_______________________________________________
Slony1-general mailing list
Slony1-general at lists.slony.info
http://lists.slony.info/mailman/listinfo/slony1-general

