Vick Khera vivek at khera.org
Tue Nov 9 11:22:30 PST 2010
On Tue, Nov 9, 2010 at 1:54 PM, sharadov <sreddy at spark.net> wrote:
>
> We have Slony replication set up, and replication on the slave has
> fallen behind by 10 days. On investigating, I noticed that the sl_log_1 table
> has 25K records, but the sl_log_2 table has over 100 million rows and keeps
> growing. How do I go about troubleshooting this?

What is the output of

select * from _XXX.sl_status;

on your master node, where XXX is the cluster name?
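For example (a sketch only -- "mycluster" is a placeholder for your actual
cluster name), the lag columns in sl_status show how far behind each
subscriber is:

    -- run on the origin (master) node
    select st_origin,
           st_received,
           st_lag_num_events,   -- events not yet confirmed by the subscriber
           st_lag_time          -- wall-clock lag behind the origin
      from _mycluster.sl_status;

If st_lag_num_events keeps climbing, the subscriber is not consuming (or not
confirming) events, which is what lets sl_log_2 grow without bound.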

Are the slon daemons running?
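If you can't check for slon processes on the boxes directly, one rough
SQL-side check (again just a sketch, assuming the cluster is named
"mycluster") is to look at how recently each node has confirmed events:

    -- stale con_timestamp values suggest the slon for that node
    -- is not running or cannot reach the other nodes
    select con_origin, con_received, max(con_timestamp) as last_confirm
      from _mycluster.sl_confirm
     group by con_origin, con_received
     order by last_confirm;

Old timestamps for a given (origin, received) pair usually mean that node's
slon is down or stuck.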
