Cyril SCETBON scetbon at echo.fr
Fri Jul 11 10:32:56 PDT 2008
Cyril SCETBON wrote:
>
>
> Jan Wieck wrote:
>> On 7/11/2008 11:00 AM, chris wrote:
>>> Cyril SCETBON <scetbon at echo.fr> writes:
>>>> I've got a lot of updates (~1000 writes/s) on the master, and on the
>>>> nearest slave (like the others) I see a big replication lag. It seems
>>>> that fetching rows from the cursor does not take much time, but the
>>>> processing of events is not handled well.
>>>>
>>>> Here is an extract of the log showing that the proposed grouping size
>>>> is often just 3 and no greater:
>>>>
>>>> http://pastebin.com/f2a2974a
>>>>
>>>> The slon parameters used are: -g 1000 -o 0
>>>>
>>>> Any idea how to speed up the processing?
>>>
>>> There's something a bit confusing in the logs; it's not affecting how
>>> the grouping is actually working.  The "just 3" looks to be when it's
>>> evaluating sync grouping for events coming from *other* nodes than the
>>> provider, which is pretty much irrelevant since those syncs don't lead
>>> to any actual work.
>>
>> Correct. The group size of 3 or 4 is for non-origin nodes. The current
>> group size for node 1, which seems to be the only origin in the system,
>> is actually 255.
>>
>>>
>>> Looks to me like the processing of data from node #1 is working out
>>> about as can be expected on a node that is processing a lot of data.
>>> There may be relevant/material improvements to handling of this in
>>> 2.0; I don't think there's anything to be done in terms of
>>> configuration.
>>
>> I seem to remember that Slony had some problems with large numbers of
>> sets, although in this particular case that only accounts for 0.3 out
>> of the 135 seconds it takes to completely process 255 sync events.
>>
>> Anyhow, I counted 31 sets with a total of 513 tables, which show a
>> pretty strange pattern. Up to set 29, all the odd set IDs have 17
>> tables while the even set IDs have 18 tables. Is this one of those
>> misdesigned applications that creates a separate set of tables per
>> user, or the like?
> We have 30 sets (one is for a heartbeat table) so that we can spread
> them over up to three or four different masters if needed (there is one
> at this time). The number of tables comes from manual partitioning
> (lots of rows and performance issues).
>
> It does not seem to hurt performance, as you said, but do you see
> anything else?
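(For context, the -g/-o options quoted earlier in the thread are passed
on the slon command line; a typical invocation, with a hypothetical
cluster name and connection string, looks roughly like:

    slon -g 1000 -o 0 mycluster "dbname=mydb host=master.example.com"

where -g caps the number of SYNC events grouped together and -o sets the
desired sync time in milliseconds, 0 apparently disabling that
time-based adjustment.)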
If I check the different grouping sizes, I see:

count    size
-----    ----
    1     255
    1     763
    2       7
  290       3
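As a side note, the per-origin event counts can be cross-checked
directly in the Slony-I catalog; a minimal sketch, assuming the cluster
is named "mycluster" (the schema is the cluster name prefixed with an
underscore):

    SELECT ev_origin, count(*) AS events
      FROM _mycluster.sl_event
     GROUP BY ev_origin
     ORDER BY events DESC;

If node 1 is the only real origin, the other nodes should only show the
occasional SYNC, which would account for the 290 tiny groups of 3.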

If I understand correctly, you both mean that the 301.017 seconds is the
time spent processing the group of 255 events coming from node 1, and
not the 290 groups of 3 events?
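While tuning, the lag itself can also be watched from the origin through
the sl_status view; a rough sketch, again assuming a cluster named
"mycluster":

    SELECT st_received, st_lag_num_events, st_lag_time
      FROM _mycluster.sl_status;

st_lag_num_events should shrink as the grouped SYNC processing catches
up.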

>>
>>
>> Jan
>>
>

-- 
Cyril SCETBON

