Thu Apr 19 04:14:04 PDT 2007
- Previous message: [Slony1-general] FETCH from LOG taking too long
- Next message: [Slony1-general] FETCH from LOG taking too long
- Messages sorted by: [ date ] [ thread ] [ subject ] [ author ]
* Michal Taborsky - Internet Mall <michal.taborsky at mall.cz> [070419 13:00]:
> Hello everyone.
>
> We have a pretty simple setup. Two servers, one database replicated from master to slave, everything in one replication set, about 200 tables. The Slony version is 1.2.9, the PostgreSQL version 8.1.5.
>
> We loaded "a bit" of data into one of the tables on the master, about 6 million records, which generated 6 million sl_log_1 rows. The problem now is that the slon on the slave issues a "fetch 100 from LOG;" command which takes about 2 minutes to complete! The servers are loaded, but not overloaded (they are both dual-dual-core Xeons with 4G RAM).
>
> The replication is virtually stopped at the moment, and sl_log_1 is growing bigger and bigger, because the primary server does some OLTP work.
>
> Can anything be done to remedy this? The problem obviously is with the FETCH, but I thought there were supposed to be some indexes on that table to speed this up.

Drop the replication and restart it from scratch. Basically, Slony is prone to getting overwhelmed by too many changes in a given time interval. E.g. one "update table set count=count+1;" will generate one update per row.

> The primary server may not be taken offline. The secondary could be, but only as a last resort, if there are no other ways.

Restart the replication from scratch.

Andreas
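For reference, "restart the replication from scratch" roughly means dropping the lagging subscriber node and resubscribing it, which discards the sl_log_1 backlog and triggers a fresh COPY of every table in the set. A minimal slonik sketch follows; the cluster name, node IDs, set ID, and conninfo strings are placeholders (not from this thread), so adjust them to your setup before running anything:

    # Sketch only: cluster "mycluster", node 1 = master, node 2 = slave (assumed names).
    cluster name = mycluster;
    node 1 admin conninfo = 'dbname=app host=master user=slony';
    node 2 admin conninfo = 'dbname=app host=slave user=slony';

    # Drop the overwhelmed subscriber; this also drops its accumulated backlog.
    drop node (id = 2, event node = 1);

    # Re-create the slave node, its paths, and the subscription.
    # Subscribing causes Slony to re-COPY all tables in set 1 from scratch.
    store node (id = 2, comment = 'slave', event node = 1);
    store path (server = 1, client = 2, conninfo = 'dbname=app host=master user=slony');
    store path (server = 2, client = 1, conninfo = 'dbname=app host=slave user=slony');
    subscribe set (id = 1, provider = 1, receiver = 2, forward = no);

In practice you would run the drop and the re-subscribe as separate slonik invocations, with the slon daemons restarted in between; the exact option names can differ slightly between Slony 1.2.x releases, so check the slonik reference for your version.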