Chris Browne cbbrowne at lists.slony.info
Wed Jun 6 15:17:12 PDT 2007
Update of /home/cvsd/slony1/slony1-engine/doc/adminguide
In directory main.slony.info:/tmp/cvs-serv5758

Modified Files:
      Tag: REL_1_2_STABLE
	faq.sgml loganalysis.sgml 
Log Message:
Elaborate on documentation in a couple places


Index: faq.sgml
===================================================================
RCS file: /home/cvsd/slony1/slony1-engine/doc/adminguide/faq.sgml,v
retrieving revision 1.66.2.4
retrieving revision 1.66.2.5
diff -C2 -d -r1.66.2.4 -r1.66.2.5
*** faq.sgml	16 Mar 2007 19:01:26 -0000	1.66.2.4
--- faq.sgml	6 Jun 2007 22:17:10 -0000	1.66.2.5
***************
*** 346,349 ****
--- 346,353 ----
  could also announce an admin to take a look...  </para> </answer>
  
+ <answer><para> As of &postgres; 8.3, this should no longer be an
+ issue, as this version has code which invalidates query plans when
+ tables are altered. </para> </answer>
+ 
  </qandaentry>
  

Index: loganalysis.sgml
===================================================================
RCS file: /home/cvsd/slony1/slony1-engine/doc/adminguide/loganalysis.sgml,v
retrieving revision 1.4.2.3
retrieving revision 1.4.2.4
diff -C2 -d -r1.4.2.3 -r1.4.2.4
*** loganalysis.sgml	16 Feb 2007 23:36:18 -0000	1.4.2.3
--- loganalysis.sgml	6 Jun 2007 22:17:10 -0000	1.4.2.4
***************
*** 695,699 ****
  <listitem><para><command>ERROR: remoteWorkerThread_%d: SYNC aborted</command></para> 
  
! <para> If any errors have been encountered that haven't already aborted the <command>SYNC</command>, this catches and aborts it.</para></listitem>
  
  <listitem><para><command>DEBUG2: remoteWorkerThread_%d: new sl_rowid_seq value: %s</command></para> 
--- 695,707 ----
  <listitem><para><command>ERROR: remoteWorkerThread_%d: SYNC aborted</command></para> 
  
! <para>The <command>SYNC</command> has been aborted. The &lslon; will
! likely retry this <command>SYNC</command> soon.  If the
! <command>SYNC</command> continues to fail, there is some ongoing
! problem, and replication will likely never catch up without
! intervention.  It may be necessary to unsubscribe and resubscribe the
! affected slave set, or, if there is only one set on the slave node, it
! may be simpler to drop and recreate the slave node.  If application
! connections can be shifted over to the master during this time,
! application downtime may be avoided.  </para></listitem>
  
  <listitem><para><command>DEBUG2: remoteWorkerThread_%d: new sl_rowid_seq value: %s</command></para> 


