Rod Taylor
Thu Jul 21 15:33:03 PDT 2005
> Rod Taylor indicated that he ran into this sort of problem with his
> "Slony-I with PostgreSQL 7.2" frankenstein system; this was the case
> when catching up after replicating a Truly Enormous Set.
> 
> It would be quite interesting to see what it is that is occupying all
> that memory.  Big processes surprise me too...

I figured it out while doing the 7.4 to 8.0 upgrade with standard Slony,
since it happened there too. I thought I had sent an email on the subject,
but perhaps not.


If a table with 1 million entries has an average tuple size of 6kB, but
some tuples (say 1000 of them) are as large as 30MB, and Slony holds 100
row slots (fetch 100), then over many iterations a 30MB entry is bound to
land in each of those 100 slots at some point. Since a slot's buffer grows
to fit the largest entry it has ever held and is never shrunk back, each
slot eventually ends up 30MB in size.

30MB * 100 = 3GB
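
To act out that arithmetic, here is a minimal C sketch of the grow-only
buffer pattern I am describing (this is not Slony's actual code; NSLOTS,
store_row, and the simulated row sizes are all made up for illustration):

/* Minimal sketch of the grow-only buffer ratchet -- not Slony's code,
 * just the arithmetic above acted out.  Error checking omitted. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define NSLOTS 100

struct slot {
    char  *buf;
    size_t cap;                 /* high-water mark; never reduced */
};

static struct slot slots[NSLOTS];

/* Copy one fetched row into slot i, growing the buffer if needed. */
static void store_row(int i, const char *row, size_t len)
{
    if (len > slots[i].cap) {
        slots[i].buf = realloc(slots[i].buf, len);  /* grows ...     */
        slots[i].cap = len;                         /* ... and stays */
    }
    memcpy(slots[i].buf, row, len);
}

int main(void)
{
    size_t total = 0;
    long iter;
    int i;

    /* Simulate many fetch batches: mostly 6kB rows, with roughly one
     * row in a thousand being a 30MB outlier. */
    for (iter = 0; iter < 1000000; iter++) {
        size_t len = (rand() % 1000 == 0) ? 30u * 1024 * 1024
                                          : 6u * 1024;
        char *row = calloc(1, len);     /* stand-in for a fetched row */
        store_row((int)(iter % NSLOTS), row, len);
        free(row);
    }

    for (i = 0; i < NSLOTS; i++)
        total += slots[i].cap;
    printf("resident slot space: %zu MB\n", total / (1024 * 1024));
    return 0;
}

Left running, the printed total converges on NSLOTS * 30MB, about 3GB,
even though the average row is only 6kB. That is the ratchet I believe
we are seeing.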

I have yet to put in any effort to determine whether this is a libpq
problem or a Slony problem.
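
For reference, the fetch loop in question follows the usual libpq cursor
pattern, roughly like the sketch below (not Slony's source; the connection
string, table, and column names are placeholders):

/* Sketch of the FETCH 100 cursor pattern -- build with: cc sketch.c -lpq */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=test");
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "%s", PQerrorMessage(conn));
        return 1;
    }

    PQclear(PQexec(conn, "BEGIN"));
    PQclear(PQexec(conn, "DECLARE c CURSOR FOR "
                         "SELECT log_cmddata FROM sl_log_1"));

    for (;;) {
        /* Each FETCH materializes its whole batch in one PGresult, so
         * a batch containing a 30MB tuple needs at least 30MB here. */
        PGresult *res = PQexec(conn, "FETCH 100 FROM c");
        int i, n = PQntuples(res);

        for (i = 0; i < n; i++) {
            /* consume PQgetvalue(res, i, 0) -- if the consumer copies
             * rows into long-lived per-slot buffers, those buffers are
             * where a grow-only 3GB footprint would accumulate. */
        }
        PQclear(res);           /* libpq's copy of the batch freed here */
        if (n == 0)
            break;
    }

    PQclear(PQexec(conn, "CLOSE c"));
    PQclear(PQexec(conn, "COMMIT"));
    PQfinish(conn);
    return 0;
}

Note that even though PQclear() frees each batch, freed heap is not
necessarily returned to the OS, so large transient PGresults could
inflate the footprint from the libpq side as well. That is exactly the
ambiguity I have not yet dug into.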