Rod Taylor
Tue Mar 29 18:22:05 PST 2005
On Tue, 2005-03-29 at 11:22 -0500, Christopher Browne wrote:
> Rod Taylor wrote:
> 
> >
> >It takes me about 12 to 24 hours to copy the largest tables (that's
> >each, not collectively); so there are a number of blocks where it
> >rescans from transaction ID X to some larger number many times if I use
> >grouping sets less than 7000.

> That suggests to me that you might want to change the slon's -s and -t 
> options a bit in order to save on the work required to generate SYNCs; 
> whilst doing these subscriptions, it would probably be a _bit_ of an 
> improvement to use "-s60000" so that you'd only get one SYNC per minute.
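For concreteness, the suggested setting would look something like this on the slon command line (the cluster name and conninfo below are placeholders, and option semantics have shifted between Slony-I releases, so check the slon man page for the version actually installed):

```
# -s: interval in milliseconds at which slon considers generating a SYNC;
#     60000 means at most one SYNC per minute even on a busy node.
# -t: upper bound (ms) before a SYNC is emitted even with no activity.
# "mycluster" and the conninfo are illustrative placeholders.
slon -s60000 -t120000 mycluster "dbname=mydb host=origin.example.com user=slony"
```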

Yeah. The difficulty with that is that I have other subscribers off doing
their own thing. If I could set it on a per-set basis, that would work.

I've slowed down their sync times a little, but not enough to be
noticeable on the other machines. It's only a small section, but a
rather important one.

> If it's taking 5h to process a set of SYNCs, there is little sense in 
> generating 18K sync events during that time (e.g. - one per second).  
> Far better to generate 5x60, namely 300 of them.
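Just to make the arithmetic behind those figures explicit: at one SYNC per second, a 5-hour window accumulates 5 × 3600 events, versus 5 × 60 at one per minute:

```shell
# SYNC events generated over a 5-hour window at two intervals
echo $((5 * 3600))   # one per second -> 18000
echo $((5 * 60))     # one per minute -> 300
```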
> 
> But in view of your comments, I have bumped up the sync_group_maxsize to 
> 10000 for 1.1.
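In later Slony-I releases this limit is also exposed as a slon runtime configuration option; a hedged sketch of the corresponding slon.conf fragment (verify the option name and allowed range against the documentation for your installed version):

```
# slon.conf fragment -- option name per later Slony-I docs; the
# permitted range and default vary by release.
sync_group_maxsize=10000
```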

Excellent.

> The changes to the cleanup thread will doubtless be helpful to you; now, 
> it checks to see if old transactions are still running (e.g. - a COPY or 
> some ludicrously large SYNC group), and falls back to an ANALYZE rather 
> than VACUUM ANALYZE if the same ancient transaction is blocking vacuums.

Yes, those looked pretty good at the time, although I've not actually run
out of resources (there's always more I/O and another CPU available).

> I won't be messing around with the internals of the queries right now; 
> that would injure stability when it would be highly desirable to get a 
> 1.1.0 release out...

Agreed. Having the process take more time than necessary is far, far
better than having it fail altogether.

Thanks for all the hard work!

-- 



More information about the Slony1-general mailing list