Christopher Browne cbbrowne at ca.afilias.info
Thu Apr 17 10:08:27 PDT 2008
salman <salmanb at quietcaresystems.com> writes:
> At some point last night, a few of our scripts went haywire and
> generated an enormous amount of bogus data. Due to this, slony fell
> behind and now there are 13 million+ rows in the sl_log_2 table.
>
> I would like to remove the bogus data that was inserted into the log
> table and have come up with a query which should do that. If I stop
> the slon daemons, run this query, and then restart the services, will
> slony care about the missing records due to the jump in seq numbers?
>
> Is there any other table that I should update as well?

If you delete the "bogus" rows from sl_log_2, then, when the
corresponding SYNC goes looking for rows to apply, it simply won't
find them.
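
If you want to eyeball what's in there before touching anything, something
along these lines would do it.  (The schema name is _<your cluster name>;
"mycluster" and the log_tableid of 42 are made-up placeholders, and this
assumes the usual 1.2-era sl_log_2 layout.)

    -- peek at the suspect entries first
    SELECT log_xid, log_tableid, log_actionseq, log_cmdtype,
           substr(log_cmddata, 1, 60) AS cmd_snippet
      FROM _mycluster.sl_log_2
     WHERE log_tableid = 42
     ORDER BY log_actionseq
     LIMIT 50;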

Deletion of data from the log tables definitely *is* "rocket surgery";
you need to be quite careful to identify exactly the updates that you
don't want replicated.  But if you're sure you have the right ones,
there isn't much to worry about in the other tables.
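
One way to be careful about it: wrap the delete in a transaction, count
first, and only commit if the count matches what you expect.  (The WHERE
clause below is purely a placeholder; substitute whatever actually
identifies your bogus rows.)

    BEGIN;

    -- how many rows are we about to throw away?
    SELECT count(*)
      FROM _mycluster.sl_log_2
     WHERE log_tableid = 42
       AND log_cmddata LIKE '%bogus%';

    DELETE FROM _mycluster.sl_log_2
     WHERE log_tableid = 42
       AND log_cmddata LIKE '%bogus%';

    -- COMMIT only if the numbers agree; otherwise ROLLBACK
    COMMIT;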

The thing I'd be worried about is that, with 13M updates committed on
the origin, the source tables there could now differ hugely from their
equivalents on the other nodes.  By deleting those log entries rather
than letting them replicate, you may be forcing a permanent divergence
between the origin and the other nodes...

Or did the changes cancel each other out?
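
A crude first check for that sort of drift is to run the same aggregate
query on the origin and on each subscriber and compare the output
("mytable" and its key column "id" are placeholders for your real table):

    -- run on every node; any mismatch means the tables have drifted
    SELECT count(*) AS row_count,
           min(id)  AS min_id,
           max(id)  AS max_id
      FROM public.mytable;

Matching counts don't prove the contents match, of course, but a
mismatch tells you right away that you'll need to repair the data or
re-subscribe that set.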
-- 
output = ("cbbrowne" "@" "acm.org")
http://www3.sympatico.ca/cbbrowne/spreadsheets.html
"Computers in the future may weigh no more than 1.5 tons".  -- POPULAR
MECHANICS magazine forecasting the "relentless march of science" 1955

