hannu at skype.net hannu
Tue Oct 12 17:33:25 PDT 2004
> On 10/12/2004 10:35 AM, hannu at skype.net wrote:
>
>>> Hi again ;)
>>>
>>> Now I've got the following situation:
>>>
>>> I did some subscription changes (slony 1.0.2, pg 7.4.5) and now I have
>>> nodes 1 and 5 both subscribing to master node 2
>>>
>>> the replication is happening, but both 1 and 5 ended up with the
>>> subscription option forward = yes
>>>
>>> userdb=# select * from _userdb_cluster.sl_subscribe ;
>>>  sub_set | sub_provider | sub_receiver | sub_forward | sub_active
>>> ---------+--------------+--------------+-------------+------------
>>>        1 |            2 |            5 | t           | t
>>>        1 |            2 |            1 | t           | t
>>> (2 rows)
>>>
>>> resulting in sl_log_1 with ~3M rows
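
(For the record: the resubscription that avoids this - just a sketch,
assuming set 1 fed from node 2 - would have been a slonik command along
the lines of

    subscribe set (id = 1, provider = 2, receiver = 1, forward = no);

and the same with receiver = 5, so that the subscribers do not keep a
local copy of log data they have no downstream nodes to feed.)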
>>
>> OK, now I was at least able to convince slony to drop node 5, but I
>> still have an sl_log_1 table of 3.3M rows.
>>
>> This manifests itself as periods of tens of seconds when the system
>> slows to a crawl while slony vacuums that big table and clients time
>> out. The selects from sl_log_1 are slow as well, but at least they do
>> not affect other operations ;(
>>
>> My current workaround is to stop slony on the slaves during peak
>> periods, but what I would really like is for sl_log_1 to be trimmed
>> down.
>
> We have to work on log switching ... yes.
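
In the meantime, one rough check on why sl_log_1 never shrinks (a sketch
against the 1.0.x catalogs, with _userdb_cluster as our cluster schema):
the cleanup thread can only delete log rows that every receiver has
confirmed, so a lagging - or stale, e.g. left over from the half-dropped
node 5 - entry in sl_confirm keeps the whole backlog around:

  -- highest event confirmed per (origin, receiver) pair; assumes the
  -- stock 1.0.x sl_confirm layout
  select con_origin, con_received, max(con_seqno) as last_confirmed
    from _userdb_cluster.sl_confirm
   group by con_origin, con_received
   order by con_origin, con_received;

If one (origin, received) pair is far behind the rest, that is what is
pinning the old rows in sl_log_1.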

If the entries in _userdb_cluster.sl_log_1 are meant to live forever,
then the queries that use it need some work - running a seqscan over 3.3M
rows is not a good replication strategy ;(
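
As a quick sanity check (this is only a simplified stand-in for the real
sync query, so treat it as a sketch), an EXPLAIN of a selection by origin
at least shows whether the index on sl_log_1 gets used once the table has
grown:

  -- columns as in the stock sl_log_1; node 2 is the origin here
  explain
  select log_origin, log_xid, log_tableid, log_actionseq,
         log_cmdtype, log_cmddata
    from _userdb_cluster.sl_log_1
   where log_origin = 2;

If that still comes back as a seqscan on a 3.3M row table, either the
statistics are stale or the planner really has no usable index for these
predicates; either way the log switching mentioned above looks like the
real fix for the vacuum pain.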




