CVS User Account cvsuser
Fri Oct 27 08:35:32 PDT 2006
Log Message:
-----------
Apply changes from CVS HEAD to 1.2 - includes:
- More release checklist items
- new SYNC slonik command and notes on its use

Tags:
----
REL_1_2_STABLE

Modified Files:
--------------
    slony1-engine/doc/adminguide:
        addthings.sgml (r1.23 -> r1.23.2.1)
        adminscripts.sgml (r1.40 -> r1.40.2.1)
        faq.sgml (r1.66.2.1 -> r1.66.2.2)
        help.sgml (r1.18 -> r1.18.2.1)
        monitoring.sgml (r1.29 -> r1.29.2.1)
        releasechecklist.sgml (r1.3 -> r1.3.2.1)
        slonik_ref.sgml (r1.61.2.1 -> r1.61.2.2)

-------------- next part --------------
Index: addthings.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/addthings.sgml,v
retrieving revision 1.23
retrieving revision 1.23.2.1
diff -Ldoc/adminguide/addthings.sgml -Ldoc/adminguide/addthings.sgml -u -w -r1.23 -r1.23.2.1
--- doc/adminguide/addthings.sgml
+++ doc/adminguide/addthings.sgml
@@ -88,12 +88,30 @@
 <listitem><para> 
 Add the table to the new set <xref linkend="stmtsetaddtable"> 
 </para></listitem>
-<listitem><para> 
-Request subscription <xref linkend="stmtsubscribeset"> for this new set. If there are several nodes, you will need to <xref linkend="stmtsubscribeset"> once for each node that should subscribe.
-</para></listitem>
-<listitem><para> 
-Once the subscriptions have all been set up so that the new set has an identical set of subscriptions to the old set, you can merge the new set in alongside the old one via <xref linkend="stmtmergeset">
-</para></listitem>
+
+<listitem><para> Request subscription <xref
+linkend="stmtsubscribeset"> for this new set. If there are several
+nodes, you will need to <xref linkend="stmtsubscribeset"> once for
+each node that should subscribe.  </para></listitem>
+
+<listitem><para> If you wish to know, deterministically, that each
+subscription has completed, you'll need to submit the following sort
+of slonik script for each subscription:
+
+<screen>
+SUBSCRIBE SET (ID=1, PROVIDER=1, RECEIVER=2);
+WAIT FOR EVENT (ORIGIN=2, CONFIRMED = 1);
+SYNC (ID = 1);
+WAIT FOR EVENT (ORIGIN=1, CONFIRMED=2);
+</screen></para>
+</listitem>
+
+<listitem><para> Once the subscriptions have all been set up so that
+the new set has an identical set of subscriptions to the old set, you
+can merge the new set in alongside the old one via <xref
+linkend="stmtmergeset">.  If you submit the request too soon, there is
+the risk that the subscription won't actually be complete on all
+nodes, and some subscribers may break down.  </para></listitem>
 </itemizedlist>
 </sect2>
 
@@ -104,9 +122,15 @@
 to the effect of <quote>How do I modify the definitions of replicated
 tables?</quote></para>
 
-<para>If you change the <quote>shape</quote> of a replicated table, this needs to take place at exactly the same point in all of the <quote>transaction streams</quote> on all nodes that are subscribed to the set containing the table.</para>
-
-<para> Thus, the way to do this is to construct an SQL script consisting of the DDL changes, and then submit that script to all of the nodes via the Slonik command <xref linkend="stmtddlscript">.</para>
+<para>If you change the <quote>shape</quote> of a replicated table,
+this needs to take place at exactly the same point in all of the
+<quote>transaction streams</quote> on all nodes that are subscribed to
+the set containing the table.</para>
+
+<para> Thus, the way to do this is to construct an SQL script
+consisting of the DDL changes, and then submit that script to all of
+the nodes via the Slonik command <xref
+linkend="stmtddlscript">.</para>
 
 <para> There are a number of <quote>sharp edges</quote> to note...</para>
 
Index: monitoring.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/monitoring.sgml,v
retrieving revision 1.29
retrieving revision 1.29.2.1
diff -Ldoc/adminguide/monitoring.sgml -Ldoc/adminguide/monitoring.sgml -u -w -r1.29 -r1.29.2.1
--- doc/adminguide/monitoring.sgml
+++ doc/adminguide/monitoring.sgml
@@ -100,8 +100,8 @@
 
 <indexterm><primary>script test_slony_state to test replication state</primary></indexterm>
 
-<para> This script is in preliminary stages, and may be used to do
-some analysis of the state of a &slony1; cluster.</para>
+<para> This script does various sorts of analysis of the state of a
+&slony1; cluster.</para>
 
 <para> You specify arguments including <option>database</option>,
 <option>host</option>, <option>user</option>,
Index: adminscripts.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/adminscripts.sgml,v
retrieving revision 1.40
retrieving revision 1.40.2.1
diff -Ldoc/adminguide/adminscripts.sgml -Ldoc/adminguide/adminscripts.sgml -u -w -r1.40 -r1.40.2.1
--- doc/adminguide/adminscripts.sgml
+++ doc/adminguide/adminscripts.sgml
@@ -1,9 +1,18 @@
 <!-- $Id$ -->
-<sect1 id="altperl">
+<sect1 id="adminscripts">
 <title>&slony1; Administration Scripts</title>
 
 <indexterm><primary>administration scripts for &slony1;</primary></indexterm>
 
+<para> A number of tools have grown up over the history of &slony1;
+to help users manage their clusters.  This section, along with the
+ones on <xref linkend="monitoring"> and <xref linkend="maintenance">,
+discusses them. </para>
+
+<sect2 id="altperl"> <title>altperl Scripts</title>
+
+<indexterm><primary>altperl scripts for &slony1;</primary></indexterm>
+
 <para>In the <filename>altperl</filename> directory in the
 <application>CVS</application> tree, there is a sizable set of
 <application>Perl</application> scripts that may be used to administer
@@ -22,7 +31,7 @@
 <emphasis>before</emphasis> submitting it to <xref
 linkend="slonik">.</para>
 
-<sect2><title>Node/Cluster Configuration - cluster.nodes</title>
+<sect3><title>Node/Cluster Configuration - cluster.nodes</title>
 <indexterm><primary>cluster.nodes - node/cluster configuration for Perl tools</primary></indexterm>
 
 <para>The UNIX environment variable <envar>SLONYNODES</envar> is used
@@ -74,8 +83,8 @@
                                         # = disable,allow,prefer,require
 );
 </programlisting>
-</sect2>
-<sect2><title>Set configuration - cluster.set1, cluster.set2</title>
+</sect3>
+<sect3><title>Set configuration - cluster.set1, cluster.set2</title>
 <indexterm><primary>cluster.set1 - replication set configuration for Perl tools</primary></indexterm>
 
 <para>The UNIX environment variable <envar>SLONYSET</envar> is used to
@@ -117,8 +126,8 @@
 <para> An array of names of sequences that are to be replicated</para>
 </listitem>
 </itemizedlist>
-</sect2>
-<sect2><title>slonik_build_env</title>
+</sect3>
+<sect3><title>slonik_build_env</title>
 <indexterm><primary>slonik_build_env</primary></indexterm>
 
 <para>Queries a database, generating output hopefully suitable for
@@ -129,99 +138,99 @@
 <listitem><para> The arrays <envar>@KEYEDTABLES</envar>,
 <envar>@SERIALTABLES</envar>, and <envar>@SEQUENCES</envar></para></listitem>
 </itemizedlist>
-</sect2>
-<sect2><title>slonik_print_preamble</title>
+</sect3>
+<sect3><title>slonik_print_preamble</title>
 
 <para>This generates just the <quote>preamble</quote> that is required
 by all slonik scripts.  In effect, this provides a
 <quote>skeleton</quote> slonik script that does not do
 anything.</para>
-</sect2>
-<sect2><title>slonik_create_set</title>
+</sect3>
+<sect3><title>slonik_create_set</title>
 
 <para>This requires <envar>SLONYSET</envar> to be set as well as
 <envar>SLONYNODES</envar>; it is used to generate the
 <command>slonik</command> script to set up a replication set
 consisting of a set of tables and sequences that are to be
 replicated.</para>
-</sect2>
-<sect2><title>slonik_drop_node</title>
+</sect3>
+<sect3><title>slonik_drop_node</title>
 
 <para>Generates Slonik script to drop a node from a &slony1; cluster.</para>
-</sect2>
-<sect2><title>slonik_drop_set</title>
+</sect3>
+<sect3><title>slonik_drop_set</title>
 
 <para>Generates Slonik script to drop a replication set
 (<emphasis>e.g.</emphasis> - set of tables and sequences) from a
 &slony1; cluster.</para>
-</sect2>
+</sect3>
 
-<sect2><title>slonik_drop_table</title>
+<sect3><title>slonik_drop_table</title>
 
 <para>Generates Slonik script to drop a table from replication.
 Requires, as input, the ID number of the table (available from table
 <envar>sl_table</envar>) that is to be dropped. </para>
-</sect2>
+</sect3>
 
-<sect2><title>slonik_execute_script</title>
+<sect3><title>slonik_execute_script</title>
 
 <para>Generates Slonik script to push DDL changes to a replication set.</para>
-</sect2>
-<sect2><title>slonik_failover</title>
+</sect3>
+<sect3><title>slonik_failover</title>
 
 <para>Generates Slonik script to request failover from a dead node to some new origin</para>
-</sect2>
-<sect2><title>slonik_init_cluster</title>
+</sect3>
+<sect3><title>slonik_init_cluster</title>
 
 <para>Generates Slonik script to initialize a whole &slony1; cluster,
 including setting up the nodes, communications paths, and the listener
 routing.</para>
-</sect2>
-<sect2><title>slonik_merge_sets</title>
+</sect3>
+<sect3><title>slonik_merge_sets</title>
 
 <para>Generates Slonik script to merge two replication sets together.</para>
-</sect2>
-<sect2><title>slonik_move_set</title>
+</sect3>
+<sect3><title>slonik_move_set</title>
 
 <para>Generates Slonik script to move the origin of a particular set to a different node.</para>
-</sect2>
-<sect2><title>replication_test</title>
+</sect3>
+<sect3><title>replication_test</title>
 
 <para>Script to test whether &slony1; is successfully replicating
 data.</para>
-</sect2>
-<sect2><title>slonik_restart_node</title>
+</sect3>
+<sect3><title>slonik_restart_node</title>
 
 <para>Generates Slonik script to request the restart of a node.  This was
 particularly useful pre-1.0.5 when nodes could get snarled up when
 slon daemons died.</para>
-</sect2>
-<sect2><title>slonik_restart_nodes</title>
+</sect3>
+<sect3><title>slonik_restart_nodes</title>
 
 <para>Generates Slonik script to restart all nodes in the cluster.  Not
 particularly useful.</para>
-</sect2>
-<sect2><title>slony_show_configuration</title>
+</sect3>
+<sect3><title>slony_show_configuration</title>
 
 <para>Displays an overview of how the environment (e.g. - <envar>SLONYNODES</envar>) is set
 to configure things.</para>
-</sect2>
-<sect2><title>slon_kill</title>
+</sect3>
+<sect3><title>slon_kill</title>
 
 <para>Kills slony watchdog and all slon daemons for the specified set.  It
 only works if those processes are running on the local host, of
 course!</para>
-</sect2>
-<sect2><title>slon_start</title>
+</sect3>
+<sect3><title>slon_start</title>
 
 <para>This starts a slon daemon for the specified cluster and node, and uses
 slon_watchdog to keep it running.</para>
-</sect2>
-<sect2><title>slon_watchdog</title>
+</sect3>
+<sect3><title>slon_watchdog</title>
 
 <para>Used by <command>slon_start</command>.</para>
 
-</sect2><sect2><title>slon_watchdog2</title>
+</sect3><sect3><title>slon_watchdog2</title>
 
 <para>This is a somewhat smarter watchdog; it monitors a particular
 &slony1; node, and restarts the slon process if it hasn't seen updates
@@ -230,36 +239,37 @@
 <para>This is helpful if there is an unreliable network connection such that
 the slon sometimes stops working without becoming aware of it.</para>
 
-</sect2>
-<sect2><title>slonik_store_node</title>
+</sect3>
+<sect3><title>slonik_store_node</title>
 
 <para>Adds a node to an existing cluster.</para>
-</sect2>
-<sect2><title>slonik_subscribe_set</title>
+</sect3>
+<sect3><title>slonik_subscribe_set</title>
 
 <para>Generates Slonik script to subscribe a particular node to a particular replication set.</para>
 
-</sect2><sect2><title>slonik_uninstall_nodes</title>
+</sect3><sect3><title>slonik_uninstall_nodes</title>
 
 <para>This goes through and drops the &slony1; schema from each node;
 use this if you want to destroy replication throughout a cluster.
 This is a <emphasis>VERY</emphasis> unsafe script!</para>
 
-</sect2><sect2><title>slonik_unsubscribe_set</title>
+</sect3><sect3><title>slonik_unsubscribe_set</title>
 
 <para>Generates Slonik script to unsubscribe a node from a replication set.</para>
 
-</sect2>
-<sect2><title>slonik_update_nodes</title>
+</sect3>
+<sect3><title>slonik_update_nodes</title>
 
 <para>Generates Slonik script to tell all the nodes to update the
 &slony1; functions.  This will typically be needed when you upgrade
 from one version of &slony1; to another.</para>
+</sect3>
 </sect2>
 
 <sect2 id="mkslonconf"><title>mkslonconf.sh</title>
 
-<indexterm><primary>script - mkslonconf.sh</primary></indexterm>
+<indexterm><primary>generating slon.conf files for &slony1;</primary></indexterm>
 
 <para> This is a shell script designed to rummage through a &slony1;
 cluster and generate a set of <filename>slon.conf</filename> files
@@ -355,8 +365,7 @@
 
 <sect2 id="launchclusters"><title> launch_clusters.sh </title>
 
-<indexterm><primary>script - launch_clusters.sh</primary></indexterm>
-
+<indexterm><primary>launching &slony1; cluster using slon.conf files</primary></indexterm>
 
 <para> This is another shell script which uses the configuration as
 set up by <filename>mkslonconf.sh</filename> and is intended to either
@@ -473,6 +482,171 @@
 </itemizedlist>
 
 </sect2>
+
+<sect2 id="configurereplication"> <title> Generating slonik scripts
+using <filename>configure-replication.sh</filename> </title>
+
+<indexterm><primary> generate slonik scripts for a cluster </primary></indexterm>
+
+<para> The <filename>tools</filename> script
+<filename>configure-replication.sh</filename> is intended to automate
+generating slonik scripts to configure replication.  This script is
+based on the configuration approach taken by the <xref
+linkend="testbed">.</para>
+
+<para> This script uses a number (possibly large, if your
+configuration needs to be particularly complex) of environment
+variables to determine the shape of the configuration of a cluster.
+It uses default values extensively, and in many cases, relatively few
+environment values need to be set in order to get a viable
+configuration. </para>
+
+<sect3><title>Global Values</title>
+
+<para> There are some values that will be used universally across a
+cluster: </para>
+
+<variablelist>
+<varlistentry><term><envar>  CLUSTER </envar></term>
+<listitem><para> Name of Slony-I cluster</para></listitem></varlistentry>
+<varlistentry><term><envar>  NUMNODES </envar></term>
+<listitem><para> Number of nodes to set up</para></listitem></varlistentry>
+
+<varlistentry><term><envar>  PGUSER </envar></term>
+<listitem><para> name of PostgreSQL superuser controlling replication</para></listitem></varlistentry>
+<varlistentry><term><envar>  PGPORT </envar></term>
+<listitem><para> default port number</para></listitem></varlistentry>
+<varlistentry><term><envar>  PGDATABASE </envar></term>
+<listitem><para> default database name</para></listitem></varlistentry>
+
+<varlistentry><term><envar>  TABLES </envar></term>
+<listitem><para> a list of fully qualified table names (<emphasis>e.g.</emphasis> - complete with
+           namespace, such as <command>public.my_table</command>)</para></listitem></varlistentry>
+<varlistentry><term><envar>  SEQUENCES </envar></term>
+<listitem><para> a list of fully qualified sequence names (<emphasis>e.g.</emphasis> - complete with
+           namespace, such as <command>public.my_sequence</command>)</para></listitem></varlistentry>
+
+</variablelist>
+
+<para>Defaults are provided for <emphasis>all</emphasis> of these
+values, so that if you run
+<filename>configure-replication.sh</filename> without setting any
+environment variables, you will get a set of slonik scripts.  They may
+not correspond, of course, to any database you actually want to
+use...</para>
+
+</sect3>
+
+<sect3><title>Node-Specific Values</title>
+
+<para>For each node, there are also four environment variables; for node 1: </para>
+<variablelist>
+<varlistentry><term><envar>  DB1 </envar></term>
+<listitem><para> database to connect to</para></listitem></varlistentry>
+<varlistentry><term><envar>  USER1 </envar></term>
+<listitem><para> superuser to connect as</para></listitem></varlistentry>
+<varlistentry><term><envar>  PORT1 </envar></term>
+<listitem><para> port</para></listitem></varlistentry>
+<varlistentry><term><envar>  HOST1 </envar></term>
+<listitem><para> host</para></listitem></varlistentry>
+</variablelist>
+
+<para> It is quite likely that <envar>DB*</envar>,
+<envar>USER*</envar>, and <envar>PORT*</envar> should be drawn from
+the global <envar>PGDATABASE</envar>, <envar>PGUSER</envar>, and
+<envar>PGPORT</envar> values above; having the discipline of that sort
+of uniformity is usually a good thing.</para>
+
+<para> In contrast, <envar>HOST*</envar> values should be set
+explicitly for <envar>HOST1</envar>, <envar>HOST2</envar>, ..., as you
+don't get much benefit from the redundancy replication provides if all
+your databases are on the same server!</para>
+
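+<para> As an illustrative sketch (the cluster name, connection
+values, and table/sequence lists here are merely examples, and the
+exact list format expected should be checked against the script
+itself):</para>
+
+<screen>
+export CLUSTER=my_cluster NUMNODES=2
+export PGUSER=postgres PGPORT=5432 PGDATABASE=mydb
+export TABLES="public.my_table public.my_other_table"
+export SEQUENCES="public.my_sequence"
+export HOST1=db1.example.com HOST2=db2.example.com
+sh configure-replication.sh
+</screen>
+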
+</sect3>
+
+<sect3><title>Resulting slonik scripts</title>
+
+<para> Slonik config files are generated in a temp directory under
+<filename>/tmp</filename>.  The usage is as follows (a sketch of
+running them appears after this list):</para>
+
+<itemizedlist>
+
+<listitem> <para><filename>preamble.slonik</filename> is a
+<quote>preamble</quote> containing connection info used by the other
+scripts.</para>
+
+<para> Verify the info in this one closely; you may want to keep it
+permanently for use with future maintenance on the
+cluster.</para></listitem>
+
+<listitem><para> <filename>create_nodes.slonik</filename></para>
+
+<para>This is the first script to run; it sets up the requested nodes
+as being &slony1; nodes, adding in some &slony1;-specific config
+tables and such.</para>
+
+<para>You can/should start slon processes any time after this step has
+run.</para></listitem>
+
+<listitem><para><filename>  store_paths.slonik</filename></para>
+
+<para> This is the second script to run; it indicates how the &lslon;s
+should intercommunicate.  It assumes that all &lslon;s can talk to all
+nodes, which may not be a valid assumption in a complexly-firewalled
+environment.  If that assumption is untrue, you will need to modify
+the script to fix the paths.</para></listitem>
+
+<listitem><para><filename>create_set.slonik</filename></para>
+
+<para> This sets up the replication set consisting of the whole bunch
+of tables and sequences that make up your application's database
+schema.</para>
+
+<para> When you run this script, all that happens is that triggers are
+added on the origin node (node #1) that start collecting updates;
+replication won't start until the <filename>subscribe_set</filename>
+scripts (step 5, below) are run...</para>
+
+<para>There are two assumptions in this script that could be
+invalidated by circumstances:</para>
+
+<itemizedlist>
+     <listitem><para> That all of the tables and sequences have been
+     included.</para>
+
+     <para> This becomes invalid if new tables get added to your
+     schema and don't get added to the <envar>TABLES</envar>
+     list.</para> </listitem>
+
+     <listitem><para> That all tables have been defined with primary
+     keys.</para>
+
+     <para> Best practice is to always have and use true primary keys.
+     If you have tables that require choosing a candidate primary key
+     or that require creating a surrogate key using <xref
+     linkend="stmttableaddkey">, you will have to modify this script
+     by hand to accommodate that. </para></listitem>
+
+</itemizedlist>
+</listitem>
+
+<listitem><para> <filename>subscribe_set_2.slonik</filename> </para>
+
+  <para> And 3, and 4, and 5, if you set the number of nodes
+  higher... </para>
+
+  <para> This is the step that <quote>fires up</quote>
+  replication.</para>
+
+  <para> The assumption that the script generator makes is that all
+  the subscriber nodes will want to subscribe directly to the origin
+  node.  If you plan to have <quote>sub-clusters,</quote> perhaps
+  where there is something of a <quote>master</quote> location at each
+  data centre, you may need to revise that.</para>
+
+  <para> The slon processes really ought to be running by the time you
+  attempt running this step.  To do otherwise would be rather
+  foolish.</para> </listitem>
+</itemizedlist>
+
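+<para> As a sketch of how these might be run (this assumes the
+preamble needs to be prepended to each generated script, and uses the
+file names described above; check the generated files themselves to
+confirm):</para>
+
+<screen>
+cd /tmp/the-directory-the-script-reports
+cat preamble.slonik create_nodes.slonik | slonik
+# start the slon daemons for each node at this point
+cat preamble.slonik store_paths.slonik | slonik
+cat preamble.slonik create_set.slonik | slonik
+cat preamble.slonik subscribe_set_2.slonik | slonik
+</screen>
+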
+</sect3>
+
+</sect2>
 </sect1>
 <!-- Keep this comment at the end of the file
 Local variables:
Index: slonik_ref.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/slonik_ref.sgml,v
retrieving revision 1.61.2.1
retrieving revision 1.61.2.2
diff -Ldoc/adminguide/slonik_ref.sgml -Ldoc/adminguide/slonik_ref.sgml -u -w -r1.61.2.1 -r1.61.2.2
--- doc/adminguide/slonik_ref.sgml
+++ doc/adminguide/slonik_ref.sgml
@@ -1257,9 +1257,14 @@
    </refsect1>
    <refsect1><title>Example</title>
     <programlisting>
-     MERGE SET ( ID = 2, 
-     ADD ID = 9999,
-     ORIGIN = 1 );
+     # Assuming that set 2 has direct subscribers on nodes 2 and 3,
+     # and that the new set to be merged in is set 9999
+     SUBSCRIBE SET (ID = 9999, PROVIDER = 1, RECEIVER = 2);
+     WAIT FOR EVENT (ORIGIN = 2, CONFIRMED = 1);
+     SYNC (ID = 1);
+     SUBSCRIBE SET (ID = 9999, PROVIDER = 1, RECEIVER = 3);
+     WAIT FOR EVENT (ORIGIN = 3, CONFIRMED = 1);
+     SYNC (ID = 1);
+     MERGE SET ( ID = 2, ADD ID = 9999, ORIGIN = 1 );
     </programlisting>
    </refsect1>
    <refsect1> <title> Locking Behaviour </title>
@@ -2070,7 +2075,19 @@
 
      <listitem><para> The fact that the request returns immediately
      even though the subscription may take considerable time to
-     complete may be a bit surprising. </para> </listitem>
+     complete may be a bit surprising. </para> 
+
+     <para> Processing of the subscription involves
+     <emphasis>two</emphasis> events: the
+     <command>SUBSCRIBE_SET</command>, initiated from the provider
+     node, and an <command>ENABLE_SUBSCRIPTION</command>, which is
+     initiated on the subscriber node.  This means that <xref
+     linkend="stmtwaitevent"> cannot directly wait for completion of a
+     subscription.  If you need to wait for completion of a
+     subscription, then what you need to do instead is to submit a
+     <xref linkend="stmtsync"> request, and wait for
+     <emphasis>that</emphasis> event.</para>
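+
+     <para> For example (the node and set numbers here are merely
+     illustrative):</para>
+    <programlisting>
+     SUBSCRIBE SET (ID = 1, PROVIDER = 1, RECEIVER = 2);
+     WAIT FOR EVENT (ORIGIN = 2, CONFIRMED = 1);
+     SYNC (ID = 1);
+     WAIT FOR EVENT (ORIGIN = 1, CONFIRMED = 2);
+    </programlisting>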
+     </listitem>
 
      <listitem><para> This command has <emphasis>two</emphasis>
      purposes; setting up subscriptions (which should be unsurprising)
@@ -2078,7 +2095,7 @@
      obvious to intuition. </para> </listitem>
 
      <listitem><para> New subscriptions are set up by using
-     <command>DELETE</command> or <command>TRUNCATE</emphasis> to
+     <command>DELETE</command> or <command>TRUNCATE</command> to
      empty the table on a subscriber.  If you created a new node by
      copying data from an existing node, it might <quote>seem
      intuitive</quote> that that data should be kept; that is not the
@@ -2822,6 +2839,48 @@
     <para> This command was introduced in &slony1; 1.1 </para>
    </refsect1>
   </refentry>
+<!-- **************************************** -->
+
+  <refentry id="stmtsync"><refmeta><refentrytitle>SYNC</refentrytitle>
+   <manvolnum>7</manvolnum></refmeta>
+
+   <refnamediv><refname>SYNC</refname>
+    
+    <refpurpose> Generate an ordinary SYNC event </refpurpose></refnamediv>
+   <refsynopsisdiv>
+    <cmdsynopsis>
+     <command>SYNC (options);</command>
+    </cmdsynopsis>
+   </refsynopsisdiv>
+   <refsect1>
+    <title>Description</title>
+    
+    <para> Generates a SYNC event on a specified node.</para>
+    
+     <variablelist>
+      <varlistentry><term><literal> ID = ival </literal></term>
+       <listitem><para> The node on which to generate the SYNC event.</para></listitem>
+       
+      </varlistentry>
+     </variablelist>
+
+   </refsect1>
+   <refsect1><title>Example</title>
+    <programlisting>
+     SUBSCRIBE SET (ID = 10, PROVIDER = 1, RECEIVER = 2);
+     WAIT FOR EVENT (ORIGIN = 2, CONFIRMED = 1);
+     SYNC (ID = 1);
+     WAIT FOR EVENT (ORIGIN = 1, CONFIRMED = 2);
+    </programlisting>
+   </refsect1>
+   <refsect1> <title> Locking Behaviour </title>
+
+    <para> No application-visible locking should take place. </para>
+   </refsect1>
+   <refsect1> <title> Version Information </title>
+    <para> This command was introduced in &slony1; 1.1.6 / 1.2.1 </para>
+   </refsect1>
+  </refentry>
 
 
  </reference>
Index: help.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/help.sgml,v
retrieving revision 1.18
retrieving revision 1.18.2.1
diff -Ldoc/adminguide/help.sgml -Ldoc/adminguide/help.sgml -u -w -r1.18 -r1.18.2.1
--- doc/adminguide/help.sgml
+++ doc/adminguide/help.sgml
@@ -8,6 +8,13 @@
 
 <itemizedlist>
 
+<listitem><para> Before submitting questions to any public forum as to
+why <quote>something mysterious</quote> has happened to your
+replication cluster, please run the <xref linkend="testslonystate">
+tool.  It may give some clues as to what is wrong, and the results are
+likely to be of some assistance in analyzing the problem. </para>
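+
+<para> As a sketch of an invocation (the script name, option syntax,
+and connection values here are illustrative; see <xref
+linkend="testslonystate"> for the full set of options):</para>
+
+<screen>
+./test_slony_state.pl --database=mydb --host=db1.example.com --user=postgres
+</screen>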
+</listitem>
+
 <listitem><para> <ulink url="http://slony.info/">http://slony.info/</ulink> - the official
 <quote>home</quote> of &slony1;</para></listitem>
 
Index: releasechecklist.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/releasechecklist.sgml,v
retrieving revision 1.3
retrieving revision 1.3.2.1
diff -Ldoc/adminguide/releasechecklist.sgml -Ldoc/adminguide/releasechecklist.sgml -u -w -r1.3 -r1.3.2.1
--- doc/adminguide/releasechecklist.sgml
+++ doc/adminguide/releasechecklist.sgml
@@ -18,9 +18,19 @@
 <listitem><para>Binary RPM packages 
 
   </para></listitem> 
-<listitem><para>Tag with the release ID. For version 1.1.2, this would be <envar>REL_1_1_2 </envar>
+
+<listitem><para>If the release is a <quote>.0</quote> one, we need to
+open a new STABLE branch.</para>
+
+<para> <command> cvs tag -b REL_1_2_STABLE</command>
+</para></listitem>
+
+<listitem><para>Tag with the release ID. For version 1.1.2, this
+would be <envar>REL_1_1_2 </envar></para>
+
+<para> <command> cvs tag REL_1_1_2 </command>
 
   </para></listitem> 
+
 <listitem><para>Check out a copy via <command>cvs export -rREL_1_1_2 </command>
 
   </para></listitem> 
@@ -43,13 +53,16 @@
 
 <para> RPM spec files used to contain release tags as well as names of tarballs which needed to be updated. As of 2005-12-13, there is less of this...   For those platforms with specific spec files such as SuSE, some editing still needs to be done. see the file(s) in the <filename>suse</filename> dir for more information </para>
 
-<para> The admin guide <filename>version.sgml</filename> file needs to contain the release name. This should not need to be touched; version.sgml is generated automatically with the release name/tag on demand. </para>
+<para> The admin guide <filename>version.sgml</filename> file needs to
+contain the release name. This should not need to be touched;
+<filename>version.sgml</filename> is generated automatically with the
+release name/tag on demand. </para>
 
 <para> It sure would be nice if more of these could be assigned automatically, somehow.
 
 </para></listitem> 
-<listitem><para>commit the new configure 
 
+<listitem><para><emphasis>Don't</emphasis> commit the new configure.
   </para></listitem> 
 
 <listitem><para>make sure that the generated files from .l and .y are
@@ -59,27 +72,44 @@
 make all && make clean</command> but that is a somewhat ugly approach.
 
 </para></listitem> 
-<listitem><para>Generate HTML tarball, and RTF/PDF, if possible... Note that the HTML version should include *.html (duh!), *.jpg, *.png, as well as *.css 
-  </para></listitem> 
 
-<listitem><para>Run <command>make clean</command> in <filename>doc/adminguide</filename> before generating the tarball in order that <filename>bookindex.sgml</filename> is NOT part of the tarball 
+<listitem><para> Make sure that generated Makefiles and such from the
+previous step(s) are removed.</para>
 
-  </para></listitem> 
-<listitem><para>Alternatively, delete <filename>doc/adminguide/bookindex.sgml </filename>
+<para> <command>make distclean</command> ought to do
+that... </para></listitem>
 
-  </para></listitem> 
-<listitem><para>Rename the directory (e.g <filename>slony1-engine</filename>) to a name based on the release, e.g. - <filename>slony1-1.1.2 </filename>
+<listitem><para>Generate HTML tarball, and RTF/PDF, if
+possible... Note that the HTML version should include *.html (duh!),
+*.jpg, *.png, as well as *.css </para></listitem>
 
-  </para></listitem> 
-<listitem><para>Generate a tarball - <command>tar tfvj slony1-1.1.2.tar.bz2 slony1-1.1.2 </command>
+<listitem><para>Run <command>make clean</command> in
+<filename>doc/adminguide</filename> before generating the tarball in
+order that <filename>bookindex.sgml</filename> is NOT part of the
+tarball </para></listitem>
 
+<listitem><para>Alternatively, delete
+<filename>doc/adminguide/bookindex.sgml </filename> </para></listitem>
+
+<listitem><para>Rename the directory (<emphasis>e.g.</emphasis> -
+<filename>slony1-engine</filename>) to a name based on the release,
+<emphasis>e.g.</emphasis> - <filename>slony1-1.1.2</filename>
   </para></listitem> 
-<listitem><para>Build the administrative documentation, and build a tarball as <filename>slony1-1.1.2-html.tar.bz2</filename></para>
- <para> Make sure that the docs are inside a subdir in the tarball.
 
+<listitem><para>Generate a tarball - <command>tar cfvj
+slony1-1.1.2.tar.bz2 slony1-1.1.2 </command> </para></listitem>
+
+<listitem><para>Build the administrative documentation, and build a
+tarball as <filename>slony1-1.1.2-html.tar.bz2</filename></para>
+
+<para> Make sure that the docs are inside a subdir in the tarball.
 </para></listitem> 
-<listitem><para>Put these tarballs in a temporary area, and notify everyone (on some mailing list???) that they should test them out ASAP based on the Standard Test Plan. 
-</para></listitem></itemizedlist>
+
+<listitem><para>Put these tarballs in a temporary area, and notify
+everyone that they should test them out ASAP based on the Standard
+Test Plan. </para></listitem>
+
+</itemizedlist>
 </article>
 <!-- Keep this comment at the end of the file
 Local variables:


