Jeff Amiel jamiel at istreamimaging.com
Wed Aug 22 11:26:51 PDT 2007
Never mind...I just noticed in the logs...

NOTICE:  Slony-I: cleanup stale sl_nodelock entry for pid=9139
CONTEXT:  SQL statement "SELECT  "_replication_cluster".cleanupNodelock()"
PL/pgSQL function "cleanupevent" line 77 at perform
NOTICE:  Slony-I: cleanup stale sl_nodelock entry for pid=9150
CONTEXT:  SQL statement "SELECT  "_replication_cluster".cleanupNodelock()"
PL/pgSQL function "cleanupevent" line 77 at perform
NOTICE:  Slony-I: log switch to sl_log_2 complete - truncate sl_log_1

and now the tables are empty/nearly empty.
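To confirm, the row counts can be checked directly; a quick sketch, assuming the cluster schema name "_replication_cluster" shown in the log messages above:

```sql
-- Counts should drop sharply once the cleanup event has run
-- and the log switch has truncated the retired sl_log table.
SELECT count(*) FROM "_replication_cluster".sl_event;
SELECT count(*) FROM "_replication_cluster".sl_log_1;
SELECT count(*) FROM "_replication_cluster".sl_log_2;
```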

Good ol' Slony!

Jeff Amiel wrote:
> I recently reconfigured some replication from a single node to 2 
> subscribers.
> One of the subscribers worked very well and is now in sync.
> One of the subscribers has some network connectivity issues and has 
> been working in fits and starts (never actually completing the 
> subscription of the first set).
> So today, I removed the node from the replication cluster via pgAdmin 
> (mistake?).
>
> However, there now appear to be a LOT of stale/old entries in 
> sl_event, sl_log_1, and sl_log_2 (and wherever else) representing that 
> old node.
> I did a delete cascade on the replication schema on that dead node in 
> preparation for recreating the database from scratch, but am at a loss 
> as to how to deal with this 'bloat' on the master node side.
>
>
> How can I clean this up...and what SHOULD I have done to drop that 
> node properly?
>
>
>
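For reference, the supported way to drop a node is a slonik DROP NODE script rather than deleting it through a GUI; a minimal sketch, where the node ids and conninfo strings are placeholders (assuming node 2 is the dead subscriber and node 1 is the origin):

```
cluster name = replication_cluster;
node 1 admin conninfo = 'dbname=mydb host=master user=slony';
node 2 admin conninfo = 'dbname=mydb host=subscriber user=slony';

# Tell the surviving nodes to forget node 2; the event is
# processed on node 1 and propagated to the rest of the cluster.
drop node (id = 2, event node = 1);
```

Once the drop event has propagated, the periodic cleanup event purges that node's leftover rows from sl_event and the sl_log tables on its own, which is what the NOTICE messages in the log excerpt show happening.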



More information about the Slony1-general mailing list