Chris Browne cbbrowne at lists.slony.info
Tue Mar 30 08:28:40 PDT 2010
Update of /home/cvsd/slony1/slony1-engine/doc/adminguide
In directory main.slony.info:/tmp/cvs-serv11500

Modified Files:
      Tag: REL_2_0_STABLE
	faq.sgml slon.sgml 
Log Message:
Add notes on recommending *not* suppressing WAL via synchronous_commit
GUC per discussion on list with Peter Geoghegan


Index: slon.sgml
===================================================================
RCS file: /home/cvsd/slony1/slony1-engine/doc/adminguide/slon.sgml,v
retrieving revision 1.34
retrieving revision 1.34.2.1
diff -C 2 -d -r1.34 -r1.34.2.1
*** slon.sgml	27 Mar 2008 21:00:58 -0000	1.34
--- slon.sgml	30 Mar 2010 15:28:38 -0000	1.34.2.1
***************
*** 357,361 ****
       </para>
  
!      <para> See more details on <xref linkend="slon-config-command-on-logarchive">.</para>
      </listitem>
     </varlistentry>
--- 357,361 ----
       </para>
  
!      <para> See more details on <xref linkend="slon-config-command-on-logarchive"/>.</para>
      </listitem>
     </varlistentry>

Index: faq.sgml
===================================================================
RCS file: /home/cvsd/slony1/slony1-engine/doc/adminguide/faq.sgml,v
retrieving revision 1.80.2.4
retrieving revision 1.80.2.5
diff -C 2 -d -r1.80.2.4 -r1.80.2.5
*** faq.sgml	11 Feb 2010 17:52:01 -0000	1.80.2.4
--- faq.sgml	30 Mar 2010 15:28:38 -0000	1.80.2.5
***************
*** 1338,1341 ****
--- 1338,1411 ----
  above, it could be faster to drop the node and recreate it than to let
  it catch up across a week's worth of updates...  </para> </answer>
+ </qandaentry>
+ 
+ <qandaentry>
+ 
+ <question> <para>As of version 8.3, &postgres; offers a
+ <envar>synchronous_commit</envar> parameter, a GUC which may be set
+ per-session or in <filename>postgresql.conf</filename>, and which can
+ provide some performance gain by no longer waiting for WAL to be
+ flushed to disk at commit time.  Would it be sensible to turn off
+ synchronous commit on a subscriber node?  </para>
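+ 
+ <para>For concreteness, the sort of configuration in question would be
+ something like the following on the subscriber (shown here purely for
+ illustration):</para>
+ 
+ <programlisting>
+ # postgresql.conf on the subscriber
+ # commits no longer wait for WAL to be flushed to disk
+ synchronous_commit = off
+ </programlisting>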
+ </question>
+ 
+ <answer><para> Unfortunately, there is an unpleasant failure case
+ which this would introduce. </para>
+ 
+ <para>Consider the following sequence of events:</para>
+ 
+ <itemizedlist>
+ 
+ <listitem><para> Node #2 claims to have committed up to transaction
+ T5, but only records up to T3 have actually been flushed to its
+ WAL.</para></listitem>
+ 
+ <listitem><para> Node #1, the origin (<quote>master</quote>), receives
+ confirmation that node #2 is up to date through T5.</para></listitem>
+ 
+ <listitem><para> Node #2 experiences a failure (e.g., a power
+ outage).</para></listitem>
+ 
+ </itemizedlist>
+ 
+ <para>There are now two possible outcomes: one OK, and one not so OK...</para>
+ 
+ <itemizedlist>
+ 
+ <listitem><para> OK </para>
+ 
+ <para> Node #2 gets restarted, replays WAL, knows it only has data
+ up to T3, and heads back to node #1 to ask for transaction T4 and
+ those following.</para>
+ 
+ <para> No problem.</para></listitem>
+ 
+ <listitem><para> Not so OK.</para>
+ 
+ <para> Before node #2 gets back up, node #1 has run an iteration of
+ the cleanup thread, which trims out all the data up to T5, because the
+ other nodes confirmed up to that point.</para>
+ 
+ <para> Node #2 gets restarted, replays WAL, knows it only has data
+ up to T3, and heads back to node #1 to ask for transactions T4 and
+ T5.</para>
+ 
+ <para> Oops.  Node #1 just trimmed those log entries
+ out.</para></listitem>
+ </itemizedlist>
+ 
+ <para>The race condition here is quite easy to exercise: simply delay
+ the restart of node #2 for a while, long enough for node #1 to run the
+ cleanup thread.</para>
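+ 
+ <para>The trimming decision is driven by what the other nodes have
+ confirmed; a query along the following lines (a sketch, assuming a
+ cluster named <quote>yourcluster</quote>, hence the schema
+ <envar>_yourcluster</envar>) shows, on the origin, the highest event
+ each node has confirmed:</para>
+ 
+ <programlisting>
+ -- highest event sequence number confirmed by each receiver, per origin
+ -- (replace _yourcluster with your actual cluster's schema name)
+ select con_origin, con_received, max(con_seqno)
+   from "_yourcluster".sl_confirm
+  group by con_origin, con_received;
+ </programlisting>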
+ 
+ <para>You might evade the problem somewhat by setting the &lslon; parameter
+ <xref linkend="slon-config-cleanup-interval"/> to a larger value.</para>
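+ 
+ <para>For instance, in the configuration file passed to &lslon; via
+ <option>-f</option> (a sketch; see <xref
+ linkend="slon-config-cleanup-interval"/> for the accepted values):</para>
+ 
+ <programlisting>
+ # slon.conf fragment - run the cleanup thread less frequently
+ # (value is an interval string; see the parameter's reference page)
+ cleanup_interval='1 hour'
+ </programlisting>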
+ 
+ <para>Unfortunately, whenever the outage of node #2 lasts longer than
+ that interval, the risk of losing log data re-emerges.</para>
+ 
+ </answer>
+ 
+ <answer><para> As a result, we cannot recommend running &slony1;
+ subscribers with <envar>synchronous_commit</envar> turned off,
+ suppressing the guarantee that WAL reaches disk at commit
+ time. </para> </answer>
  
  </qandaentry>


