Thu Sep 9 16:37:55 PDT 2004
On Wednesday September 8 2004 5:29, Jan Wieck wrote:
> On 9/8/2004 1:51 PM, Ed L. wrote:
> > On Monday September 6 2004 4:20, Christopher Browne wrote:
> >> And the real problem would most likely come in the initial "seeding."
> >> When you provision a new subscriber, it has to take on _all_ of the
> >> data then sitting at its provider, all in one transaction. That takes
> >> a while, across a slow link, and if that link is not sufficiently
> >> reliable, you might never get the first "sync."
> >
> > Our typical DB is around 10GB. Do I understand correctly that the
> > first seeding transfer will include all of the 10GB in one very large,
> > very long transaction on the slave? Any concerns about that much data
> > going across?
>
> Why would that concern you?

The only possible concern that comes to mind would be a very long
transaction on the master/provider. Sounds like you have no concerns
about the volume of data in the first sync?

> > And the path traveled is from provider pgsql to provider slon to
> > subscriber slon to subscriber pgsql? Any concerns there about memory
> > needs? Or is it pipelined in transfer?
>
> The subscriber slon has a DB connection to the provider and the local
> subscriber DB. It does "COPY foo TO stdout" on the provider and on the
> subscriber DB "COPY foo FROM stdin", then it forwards the entire data in
> chunks via PQgetCopyData(), PQputCopyData(). I didn't bother to
> multithread that process.

So, is the data for foo ever completely buffered in either slon process?
In other words, to sync a 10GB table, roughly how much memory would the
slon processes need?

Ed
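For readers following the thread, below is a minimal sketch of the forwarding
loop Jan describes: COPY OUT on the provider, COPY IN on the subscriber, and a
single-threaded PQgetCopyData()/PQputCopyData() loop in between. This is not
Slony-I's actual code; the table name "foo", the error handling, and the two
already-open libpq connections are placeholders for illustration.

/*
 * Sketch: stream one table from a provider connection to a subscriber
 * connection the way described above, one chunk at a time.
 */
#include <stdio.h>
#include <libpq-fe.h>

static int
copy_table(PGconn *provider, PGconn *subscriber)
{
    PGresult *res;
    char     *buf;
    int       len;

    /* Start COPY OUT on the provider ... */
    res = PQexec(provider, "COPY foo TO stdout");
    if (PQresultStatus(res) != PGRES_COPY_OUT)
    {
        fprintf(stderr, "COPY OUT failed: %s", PQerrorMessage(provider));
        PQclear(res);
        return -1;
    }
    PQclear(res);

    /* ... and COPY IN on the subscriber. */
    res = PQexec(subscriber, "COPY foo FROM stdin");
    if (PQresultStatus(res) != PGRES_COPY_IN)
    {
        fprintf(stderr, "COPY IN failed: %s", PQerrorMessage(subscriber));
        PQclear(res);
        return -1;
    }
    PQclear(res);

    /*
     * Forward the stream chunk by chunk.  libpq allocates each buffer;
     * it is freed as soon as it has been pushed to the subscriber, so
     * only one chunk is held in memory at a time.
     */
    while ((len = PQgetCopyData(provider, &buf, 0)) > 0)
    {
        if (PQputCopyData(subscriber, buf, len) != 1)
        {
            fprintf(stderr, "PQputCopyData: %s", PQerrorMessage(subscriber));
            PQfreemem(buf);
            return -1;
        }
        PQfreemem(buf);
    }
    if (len == -2)
    {
        fprintf(stderr, "PQgetCopyData: %s", PQerrorMessage(provider));
        return -1;
    }

    /* Drain the provider's end-of-COPY result. */
    res = PQgetResult(provider);
    PQclear(res);

    /* Tell the subscriber the stream is complete and check the result. */
    if (PQputCopyEnd(subscriber, NULL) != 1)
    {
        fprintf(stderr, "PQputCopyEnd: %s", PQerrorMessage(subscriber));
        return -1;
    }
    res = PQgetResult(subscriber);
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        fprintf(stderr, "COPY IN did not complete: %s",
                PQerrorMessage(subscriber));
    PQclear(res);

    return 0;
}

In this sketch only the single chunk returned by PQgetCopyData() is resident
at any moment, which is what makes the transfer pipelined rather than fully
buffered; whether slon adds any buffering of its own on top of this is a
question for the actual Slony-I source.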