hannu at skype.net hannu
Sun Oct 17 15:52:08 PDT 2004
> On 10/17/2004 6:55 AM, hannu at skype.net wrote:
>>> On 10/12/2004 12:38 PM, hannu at skype.net wrote:
>>> Drop nodes that you cannot get back online. It is better to rebuild
>>> them from scratch than to accumulate billions of log rows.
>>
>> Removing the sl_confirm entries for already dropped nodes (on both the
>> master and the slave) fixed the problem for the master - the size of
>> sl_log_1 is now reasonable.
>>
>> Unfortunately, it did not help on the slave (sl_log_1 is at 5.5M rows).
>
> Are you sure the slave went through the cleanup procedure since?

No, I'm not sure - how can I tell?

It has been a few days since the master cleaned its log, though ...
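
I'm guessing I could check how far confirmations have advanced; a sketch,
assuming the standard sl_confirm columns (con_origin, con_received,
con_seqno):

    -- oldest and newest confirmed event per origin/receiver pair;
    -- cleanup should have trimmed sl_log_1 rows older than the oldest
    -- confirmed sequence
    select con_origin, con_received, min(con_seqno), max(con_seqno)
      from _userdb_cluster.sl_confirm
     group by con_origin, con_received;

    -- raw size of the log table
    select count(*) from _userdb_cluster.sl_log_1;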

>> I am also unable to set the subscription on the slave (node 1) to
>> forward=no using the slonik command:
>>
>>     subscribe set ( id = 1, provider = 2, receiver = 1, forward = no);
>
> This problem was fixed in release 1.0.2 ... do you mind upgrading?

AFAIK I am running 1.0.2, but I may have missed some secret incantations
when upgrading ;)
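
For completeness, what I ran is essentially this minimal slonik sketch
(the conninfo strings are placeholders, not my real ones):

    cluster name = userdb_cluster;
    node 1 admin conninfo = 'dbname=userdb host=slave-host';
    node 2 admin conninfo = 'dbname=userdb host=master-host';
    subscribe set ( id = 1, provider = 2, receiver = 1, forward = no);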


> Sure, on the subscriber you can do
>
>      select _userdb_cluster.subscribeSet(1, 2, 1, 'f');
>
>>
>> Using some sl function, or maybe just "delete from
>> _userdb_cluster.sl_log_1 where ???" and "update
>> _userdb_cluster.sl_subscribe set sub_forward = false where sub_set=1
>> and sub_provider=2 and sub_receiver=1"?
>>
>> Must the master node also be aware of the slave's state change from
>> forward=yes to forward=no?
>
> The above will do that. Just be aware that if you turn forwarding off,
> the slave not only cannot be used for cascading any more, but it will
> also refuse to take ownership of the set in a switchover or failover.

Mainly I hope this will clean the log; I can then turn forwarding back on again.
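
Once sl_log_1 shrinks I would verify the flags and then, presumably,
re-enable forwarding with the same function ('t' instead of 'f' is an
assumption on my part):

    -- check the current subscription flags
    select sub_set, sub_provider, sub_receiver, sub_forward
      from _userdb_cluster.sl_subscribe;

    -- turn forwarding back on (assuming 't' is the inverse of 'f' above)
    select _userdb_cluster.subscribeSet(1, 2, 1, 't');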

-----------------
Hannu




