Rod Taylor pg
Wed May 31 09:02:14 PDT 2006
> > The standard sync process that happens in the background on a day to day
> > basis. I don't know if it is because pg_listener passes a threshold and
> > locks get in the way or if sl_event or sl_log_1 grow too large, but at
> > some point it can take Slony significant amounts of time to replicate a
> > 'sync' event.
> > 
> > The queries that scoop data from sl_log_1 start to take a long time and
> > can pass the timeout. After that it seems to run away with itself.

> I've seen both cases happen.  With long running transactions, you pretty
> much need to eliminate the transactions and get a vacuum of pg_listener
> through.  Vacuum full of pg_listener is sometimes needed.

That is a problem, but I don't believe it is the problem here.
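
(For the record, when a stuck vacuum on pg_listener *is* the issue, the
usual drill is to hunt down the old transactions first and then vacuum.
A rough sketch from psql -- column names are the 8.1-era ones, adjust
for your version:)

```sql
-- Long-running / idle-in-transaction backends are what keep VACUUM
-- from reclaiming dead pg_listener rows; find the oldest ones.
SELECT procpid, query_start, current_query
  FROM pg_stat_activity
 ORDER BY query_start
 LIMIT 10;

-- Once those are gone, shrink the table.  VACUUM FULL takes an
-- exclusive lock, so the slons will block while it runs.
VACUUM FULL VERBOSE pg_listener;
```
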

> If sl_log_1 gets huge, clustering the table might help you but that's
> going to block access for you.  If memory serves me, there are usually

I usually bump the group size to something between 8000 and 10000 and
ensure that the Slony DB backend and the slon daemon have a dedicated
CPU each whenever sl_log_1 grows beyond a few hundred million tuples.

That's pretty rare though, and not the case here.
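
(If memory serves, the group-size bump above is just slon's -g sync
group size switch.  Roughly -- the cluster name and connect string
here are made up, and the accepted range varies by Slony version:)

```shell
# Batch more SYNC events per round trip by raising the sync group
# size (-g).  Values and connection details are illustrative only.
slon -g 8000 mycluster "dbname=mydb host=node1 user=slony"
```
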

Keep in mind that this happens in the opposite direction of data flow.
sl_log_1 is completely empty and sl_event really only has confirmation
events.


Node 1 is the data provider and node 4 is a subscriber.

Slon for Node 1 will have several hundred connections to node 4
listening for events and data originating from node 4.


Restarting slon for node 1 will clear all of the connections (this takes
some time as a result of the now very heavy contention over pg_listener).
That's why I expect it to go away after Slony 1.2 is released -- no more
pg_listener entries.

-- 