Christopher Browne cbbrowne at ca.afilias.info
Mon Feb 18 11:39:08 PST 2008
"henry" <henry at zen.co.za> writes:
> My replication cluster periodically lags behind due to load, etc.  This is
> a problem for other systems.
>
> I'm using "slon -g256 -o1000" on the slaves to try and speed up
> replication, or to force it to replicate larger chunks of rows at a time.
>
> Is this the best way to do this, and if so, how much larger can I safely
> make those values?

There has been some discussion about raising a slon.h parameter,
SLON_DATA_FETCH_SIZE, which might be of some assistance.  That change
causes slon to process more INSERT/UPDATE/DELETE queries in a single
request.
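
Roughly, the change looks like the following; I haven't verified the
path or the default value against current sources, so treat both as
illustrative, and remember that slon has to be recompiled for it to
take effect:

    # hypothetical sketch -- path and default value are assumptions
    $ grep SLON_DATA_FETCH_SIZE src/slon/slon.h
    #define SLON_DATA_FETCH_SIZE    100
    # edit that value upward (say, to a few hundred), then rebuild:
    $ make && make install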

> If not, is there some other parameter I can set to force slony to
> replicate larger chunks of data at a time (obviously at the cost of sys
> and network load)?  ... or am I totally off the mark here?
>
> Replication lag is a real deal-killer in my case.

"Larger chunks of data" is likely to be at odds with "reducing
replication lag."

The lowest lag comes from processing only the few most recent updates
at a time.

--> The only way that you're processing more than 1 SYNC at a time is
    if replication is lagging behind.

    In that case, the two options are at odds with one another:

     - Processing a bunch of SYNCs together should churn through
       the changes quicker, but still means you're accepting a
       possibly substantial lag time.

     - Processing small numbers of SYNCs at a time means that
       replication won't catch up as quickly, but does mean
       lower lag between processing SYNCs.

    Realistically, the usual answer is to process a bunch of SYNCs to
    catch up ASAP.  But that *is* at odds with your requirement.

--> If replication is up to date, then no grouping of SYNCs is done;
    they are processed as soon as they are received.
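
To make the trade-off above concrete, here is an illustrative pair of
slon invocations for a lagging subscriber.  The cluster name and
conninfo strings are placeholders, and the -g values are examples
rather than recommendations:

    # favour catching up quickly: allow large SYNC groups
    slon -g100 mycluster "dbname=mydb host=subscriber"

    # favour low per-SYNC latency: keep groups small (catch-up is slower)
    slon -g6 mycluster "dbname=mydb host=subscriber"
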
-- 
let name="cbbrowne" and tld="linuxdatabases.info" in String.concat "@" [name;tld];;
http://linuxfinances.info/info/
Q: What does FAQ stand for?
A: We are Frequently Asked this Question, and thus far have no idea.