Jason Culverhouse jason at merchantcircle.com
Wed Jul 29 12:13:28 PDT 2009
Hi,
I am using Slony 1.2.15 and  I have one slave that is ~2 hours behind  
or about 5000 events.

I see this in the log on node 20 where node 40 is the master and 20  
and 30 are slaves
I rearranged the servers around October of 2008, moving the master
from node 30 to node 40, and I am fairly certain that I have restarted
everything since then.

2009-07-29 11:45:55 PDT DEBUG2 remoteWorkerThread_40: current local  
log_status is 2
2009-07-29 11:45:55 PDT DEBUG2 remoteWorkerThread_40_30: current  
remote log_status = 3
2009-07-29 11:45:55 PDT DEBUG2 remoteWorkerThread_40_40: current  
remote log_status = 3

I see these notices on node 20
NOTICE:  Slony-I: log switch to sl_log_1 still in progress - sl_log_2  
not truncated
NOTICE:  Slony-I: log switch to sl_log_1 still in progress - sl_log_2  
not truncated
NOTICE:  Slony-I: log switch to sl_log_1 still in progress - sl_log_2  
not truncated
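
To confirm the switch really is stuck rather than just slow, I have
also been comparing the size of the two log tables on the lagging node
(again, '_mycluster' is just a placeholder):

    SELECT 'sl_log_1' AS log_table, count(*) FROM _mycluster.sl_log_1
    UNION ALL
    SELECT 'sl_log_2', count(*) FROM _mycluster.sl_log_2;

If both counts keep growing, that would seem to match the FAQ entry
quoted below.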

I see FETCH statements taking up 100% CPU on node 30.

Any idea on what to do here?

The only reference I could find was:

  4.6. I upgraded my cluster to Slony-I version 1.2. I'm now getting  
the following notice in the logs:
NOTICE: Slony-I: log switch to sl_log_2 still in progress - sl_log_1  
not truncated
Both sl_log_1 and sl_log_2 are continuing to grow, and sl_log_1 is  
never getting truncated. What's wrong?
This is symptomatic of the same issue as above with dropping
replication: if there are still old connections lingering that are
using old query plans that reference the old stored functions, then
inserts will continue to go to sl_log_1.
Closing those connections and opening new ones will resolve the issue.
In the longer term, there is an item on the PostgreSQL TODO list to
implement dependency checking that would flush cached query plans when
dependent objects change.
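
In case it helps anyone else, this is roughly what I plan to try,
assuming the FAQ's advice about lingering connections applies here.
The column names below are the 8.3-era ones (procpid / current_query);
newer releases renamed them:

    -- look for long-lived backends that may still hold stale cached plans
    SELECT procpid, usename, backend_start, current_query
      FROM pg_stat_activity
     ORDER BY backend_start;

    -- then have those old sessions reconnect; on 8.4 and later they
    -- can also be kicked with pg_terminate_backend(procpid)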


