Wed Feb 9 14:42:50 PST 2005
Log Message: ----------- Updated tagging on all of the documents to resolve SGML parser errors Modified Files: -------------- slony1-engine/doc/adminguide: addthings.sgml (r1.8 -> r1.9) adminscripts.sgml (r1.13 -> r1.14) cluster.sgml (r1.7 -> r1.8) concepts.sgml (r1.8 -> r1.9) ddlchanges.sgml (r1.9 -> r1.10) defineset.sgml (r1.9 -> r1.10) dropthings.sgml (r1.9 -> r1.10) failover.sgml (r1.9 -> r1.10) faq.sgml (r1.15 -> r1.16) firstdb.sgml (r1.8 -> r1.9) help.sgml (r1.9 -> r1.10) installation.sgml (r1.8 -> r1.9) intro.sgml (r1.8 -> r1.9) legal.sgml (r1.4 -> r1.5) listenpaths.sgml (r1.10 -> r1.11) maintenance.sgml (r1.9 -> r1.10) monitoring.sgml (r1.10 -> r1.11) prerequisites.sgml (r1.9 -> r1.10) reshape.sgml (r1.9 -> r1.10) slon.sgml (r1.8 -> r1.9) slonik.sgml (r1.9 -> r1.10) slonik_ref.sgml (r1.11 -> r1.12) slony.sgml (r1.12 -> r1.13) startslons.sgml (r1.7 -> r1.8) subscribenodes.sgml (r1.8 -> r1.9) usingslonik.sgml (r1.2 -> r1.3) versionupgrade.sgml (r1.1 -> r1.2) -------------- next part -------------- Index: versionupgrade.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/versionupgrade.sgml,v retrieving revision 1.1 retrieving revision 1.2 diff -Ldoc/adminguide/versionupgrade.sgml -Ldoc/adminguide/versionupgrade.sgml -u -w -r1.1 -r1.2 --- doc/adminguide/versionupgrade.sgml +++ doc/adminguide/versionupgrade.sgml @@ -1,12 +1,12 @@ <!-- $Id$ --> -<sect1 id="versionupgrade"><title>Using <productname>Slony-I</productname> for <productname>PostgreSQL</productname> Upgrades</title> +<sect1 id="versionupgrade"><title>Using &slony1; for &postgres; Upgrades</title> <para> A number of people have found -<productname>Slony-I</productname> useful for helping perform upgrades -between major <productname>PostgreSQL</productname> releases +&slony1; useful for helping perform upgrades +between major &postgres; releases (<emphasis> e.g.</emphasis> which mandates running <application>initdb</application> to create a new database instance) -without requiring a substantial downtime. +without requiring a substantial downtime.</para> <para> The <quote>simple</quote> way that one might imagine doing such an upgrade would involve running <application>pg_dump</application> on @@ -18,43 +18,48 @@ required involves: <itemizedlist> -<listitem><para> Stop all applications that might modify the data -<listitem><para> Start the <application>pg_dump</application>, and load that into the new database -<listitem><para> Wait 40 hours for the dump/load to complete -<listitem><para> Point <quote>write</quote> applications to the new database -</itemizedlist> - -<para> And note that this led to a 40 hour outage. - -<para> <productname>Slony-I</productname> offers an opportunity to -replace that long outage with one a few minutes or even a few seconds -long. The approach taken is to create a -<productname>Slony-I</productname> replica in the new version. It is -possible that it might take much longer than 40h to create that -replica, but once it's there, it can be kept very nearly up to date. 
+<listitem><para> Stop all applications that might modify the data</para></listitem> +<listitem><para> Start the <application>pg_dump</application>, and load that into the new database</para></listitem> +<listitem><para> Wait 40 hours for the dump/load to complete</para></listitem> +<listitem><para> Point <quote>write</quote> applications to the new database</para></listitem> +</itemizedlist></para> + +<para> And note that this led to a 40 hour outage.</para> + +<para> &slony1; offers an opportunity to replace that long outage with +one a few minutes or even a few seconds long. The approach taken is +to create a &slony1; replica in the new version. It is possible that +it might take much longer than 40h to create that replica, but once +it's there, it can be kept very nearly up to date.</para> <para> When it is time to switch over to the new database, the procedure is rather less time consuming: <itemizedlist> -<listitem><para> Stop all applications that might modify the data -<listitem><para> Lock the set against client application updates using <link linkend="stmtlockset"><command>LOCK SET</command></link> -<listitem><para> Submit the Slonik command <link linkend="stmtmoveset"><command>MOVE SET</command></command> to shift the origin from the old database to the new one -<listitem><para> Point the applications at the new database -</itemizedlist> + +<listitem><para> Stop all applications that might modify the data</para></listitem> + +<listitem><para> Lock the set against client application updates using +<link linkend="stmtlockset"><command>LOCK SET</command></link></para></listitem> + +<listitem><para> Submit the Slonik command <link linkend="stmtmoveset">eset"><command>MOVE SET</command></link> to shift the +origin from the old database to the new one</para></listitem> + +<listitem><para> Point the applications at the new database</para></listitem> +</itemizedlist></para> <para> This procedure should only need to take a very short time, likely bound more by how quickly you can reconfigure your applications than anything else. If you can automate all the steps, it might take less than a second. If not, somewhere between a few seconds and a few -minutes is likely. +minutes is likely.</para> <para> Note that after the origin has been shifted, updates now flow into the <emphasis>old</emphasis> database. If you discover that due to some unforeseen, untested condition, your application is somehow unhappy connecting to the new database, you could easily use <link -linkend="stmtmoveset"><command>MOVE SET</command></command> again to -shift the origin back to the old database. 
+ linkend="stmtmoveset"><command>MOVE SET</command></link> again to +shift the origin back to the old database.</para> <para> If you consider it particularly vital to be able to shift back to the old database in its state at the time of the changeover, so as @@ -63,61 +68,58 @@ since the changeover), the following steps would accomplish that: <itemizedlist> -<listitem><para> Prepare <emphasis> two </emphasis> <productname>Slony-I</productname> replicas of the database: +<listitem><para> Prepare <emphasis> two </emphasis> &slony1; replicas +of the database: <itemizedlist> -<listitem><para> One running the new version of <productname>PostgreSQL</productname> -<listitem><para> One running the old version of <productname>PostgreSQL</productname> -</itemizedlist> +<listitem><para> One running the new version of &postgres;</para></listitem> +<listitem><para> One running the old version of &postgres;</para></listitem> +</itemizedlist></para> <para> Thus, you have <emphasis>three</emphasis> nodes, one running -the new version of <productname>PostgreSQL</productname>, and the -other two the old version. +the new version of &postgres;, and the other two the old version.</para></listitem> <listitem><para> Once they are roughly <quote>in sync</quote>, stop -all applications that might modify the data +all applications that might modify the data</para></listitem> <listitem><para> Allow them to get in sync, then <command>stop</command> the <application>slon</application> daemon that has been feeding the subscriber running the old version of -<productname>PostgreSQL</productname> +&postgres;</para> <para> You may want to use <link linkend="stmtuninstallnode"> <command>UNINSTALL NODE</command></link> to decommission this node, making it into a standalone database, or merely kill the <application>slon</application>, depending on how permanent you want -this all to be. +this all to be.</para></listitem> <listitem><para> Then use <command>MOVE SET</command> to shift the -origin, as before. +origin, as before.</para></listitem> -</itemizedlist> +</itemizedlist></para> <para> Supposing a <quote>small</quote> disaster strikes, you might recover back to the node running the old database that has been seeing updates come through; if you run into larger problems, you would have -to abandon the two nodes and go to the one which had been shut off. +to abandon the two nodes and go to the one which had been shut off.</para> <para> This isn't to say that it is routine to have the sorts of problems that would mandate such a <quote>paranoid</quote> procedure; people worried about process risk assessments can be reassured if you have choices like this. -<note><para> <productname> Slony-I </productname> does not support -versions of <productname> PostgreSQL </productname> older than 7.3.3 -because it needs namespace support that did not solidify until that -time. Rod Taylor <quote>hacked up</quote> a version of <productname> -Slony-I </productname> to work on 7.2 by allowing the <productname> -Slony-I </productname> objects to live in the global schema. He found -it pretty fiddly, and that some queries weren't very efficient (the -<productname> PostgreSQL </productname> query optimizer has improved +<note><para> &slony1; does not support versions of &postgres; older +than 7.3.3 because it needs namespace support that did not solidify +until that time. Rod Taylor <quote>hacked up</quote> a version of +&slony1; to work on 7.2 by allowing the &slony1; objects to live in +the global schema. 
He found it pretty fiddly, and that some queries +weren't very efficient (the &postgres; query optimizer has improved <emphasis>considerably</emphasis> since 7.2), but that this was more workable for him than other replication systems such as <productname>eRServer</productname>. If you desperately need that, -look for him on the <productname>PostgreSQL</productname> Hackers -mailing list. It is not anticipated that 7.2 will be supported by any -official <application><productname>Slony-I</productname></application> -release.</para> +look for him on the &postgres; Hackers mailing list. It is not +anticipated that 7.2 will be supported by any official &slony1; +release.</para></note></para> </sect1> <!-- Keep this comment at the end of the file @@ -129,8 +131,7 @@ sgml-always-quote-attributes:t sgml-indent-step:1 sgml-indent-data:t -sgml-parent-document:nil -sgml-default-dtd-file:"./reference.ced" +sgml-parent-document:"book.sgml" sgml-exposed-tags:nil sgml-local-catalogs:("/usr/lib/sgml/catalog") sgml-local-ecat-files:nil Index: slon.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/slon.sgml,v retrieving revision 1.8 retrieving revision 1.9 diff -Ldoc/adminguide/slon.sgml -Ldoc/adminguide/slon.sgml -u -w -r1.8 -r1.9 --- doc/adminguide/slon.sgml +++ doc/adminguide/slon.sgml @@ -36,7 +36,7 @@ </para> </refsect1> - <refsect1 id="R1-APP-SLON-3"> + <refsect1 id="r1-app-slon-3"> <title>Options</title> <variablelist> @@ -50,14 +50,14 @@ <para> The eight levels of logging are: <itemizedlist> - <listitem><Para>Error</Para></listitem> - <listitem><Para>Warn</Para></listitem> - <listitem><Para>Config</Para></listitem> - <listitem><Para>Info</Para></listitem> - <listitem><Para>Debug1</Para></listitem> - <listitem><Para>Debug2</Para></listitem> - <listitem><Para>Debug3</Para></listitem> - <listitem><Para>Debug4</Para></listitem> + <listitem><para>Error</para></listitem> + <listitem><para>Warn</para></listitem> + <listitem><para>Config</para></listitem> + <listitem><para>Info</para></listitem> + <listitem><para>Debug1</para></listitem> + <listitem><para>Debug2</para></listitem> + <listitem><para>Debug3</para></listitem> + <listitem><para>Debug4</para></listitem> </itemizedlist> </para> </listitem> @@ -246,7 +246,7 @@ sgml-always-quote-attributes:t sgml-indent-step:1 sgml-indent-data:t -sgml-parent-document:"slony.sgml" +sgml-parent-document:"book.sgml" sgml-exposed-tags:nil sgml-local-catalogs:"/usr/lib/sgml/catalog" sgml-local-ecat-files:nil Index: defineset.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/defineset.sgml,v retrieving revision 1.9 retrieving revision 1.10 diff -Ldoc/adminguide/defineset.sgml -Ldoc/adminguide/defineset.sgml -u -w -r1.9 -r1.10 --- doc/adminguide/defineset.sgml +++ doc/adminguide/defineset.sgml @@ -1,6 +1,6 @@ <!-- $Id$ --> <sect1 id="definingsets"> -<title>Defining Slony-I Replication Sets</title> +<title>Defining &slony1; Replication Sets</title> <para>Defining the nodes indicated the shape of the cluster of database servers; it is now time to determine what data is to be @@ -20,19 +20,18 @@ <sect2><title>Primary Keys</title> -<para><productname>Slony-I</productname> <emphasis>needs</emphasis> to -have a primary key or candidate thereof on each table that is -replicated. PK values are used as the primary identifier for each -tuple that is modified in the source system. 
There are three ways -that you can get <productname>Slony-I</productname> to use a primary -key:</para> +<para>&slony1; <emphasis>needs</emphasis> to have a primary key or +candidate thereof on each table that is replicated. PK values are +used as the primary identifier for each tuple that is modified in the +source system. There are three ways that you can get &slony1; to use +a primary key:</para> <itemizedlist> <listitem><para> If the table has a formally identified primary key, <command><link linkend="stmtsetaddtable">SET ADD TABLE</link></command> can be used without any need to reference the -primary key. <productname>Slony-I</productname> will pick up that +primary key. &slony1; will pick up that there is a primary key, and use it.</para></listitem> <listitem><para> If the table hasn't got a primary key, but has some @@ -52,25 +51,25 @@ key, as it infers the namespace from the table.</para></listitem> <listitem><para> If the table hasn't even got a candidate primary key, -you can ask <productname>Slony-I</productname> to provide one. This is done by -first using <command><link linkend="stmttableaddkey"> TABLE ADD KEY -</link> </command> to add a column populated using a -<productname>Slony-I</productname> sequence, and then having the <command> <link -linkend="stmtsetaddtable"> SET ADD TABLE</link></command> include the -directive <option>key=serial</option>, to indicate that -<productname>Slony-I</productname>'s own column should be used.</para></listitem> +you can ask &slony1; to provide one. This is done by first using +<command><link linkend="stmttableaddkey"> TABLE ADD KEY </link> +</command> to add a column populated using a &slony1; sequence, and +then having the <command> <link linkend="stmtsetaddtable"> SET ADD +TABLE</link></command> include the directive +<option>key=serial</option>, to indicate that &slony1;'s own column +should be used.</para></listitem> </itemizedlist> <para> It is not terribly important whether you pick a <quote>true</quote> primary key or a mere <quote>candidate primary key;</quote> it is, however, recommended that you have one of those -instead of having <productname>Slony-I</productname> populate the PK column for -you. If you don't have a suitable primary key, that means that the -table hasn't got any mechanism from your application's standpoint of -keeping values unique. <productname>Slony-I</productname> may therefore introduce -a new failure mode for your application, and this implies that you had -a way to enter confusing data into the database.</para> +instead of having &slony1; populate the PK column for you. If you +don't have a suitable primary key, that means that the table hasn't +got any mechanism from your application's standpoint of keeping values +unique. &slony1; may therefore introduce a new failure mode for your +application, and this implies that you had a way to enter confusing +data into the database.</para> </sect2> <sect2 id="definesets"><title>Grouping tables into sets</title> @@ -91,8 +90,9 @@ <listitem><para> Replicating a large set leads to a <link linkend="longtxnsareevil"> long running transaction </link> on the -provider node. The FAQ outlines a number of problems that result from -long running transactions that will injure system performance.</para> +provider node. 
The <link linkend="faq"> FAQ </link> outlines a number +of problems that result from long running transactions that will +injure system performance.</para> <para> If you can split a large set into several pieces, that will shorten the length of each of the transactions, lessening the degree @@ -118,7 +118,7 @@ change into place.</para> </listitem> -</itemizedlist> +</itemizedlist></para> </sect2> @@ -127,7 +127,7 @@ <para> Each time a SYNC is processed, values are recorded for <emphasis>all</emphasis> of the sequences in the set. If there are a lot of sequences, this can cause <envar>sl_seqlog</envar> to grow -rather large. +rather large.</para> <para> This points to an important difference between tables and sequences: if you add additional tables that do not see much/any @@ -137,7 +137,9 @@ the effects: <itemizedlist> -<listitem><para> A replicated table that is never updated does not introduce much work to the system. + +<listitem><para> A replicated table that is never updated does not +introduce much work to the system.</para> <para> If it is not updated, the trigger on the table on the origin never fires, and no entries are added to <envar>sl_log_1</envar>. The @@ -145,20 +147,20 @@ (<emphasis>e.g.</emphasis> in the <command>FETCH 100 FROM LOG</command> queries used to find replicatable data) as they only look for tables for which there are entries in -<envar>sl_log_1</envar>. +<envar>sl_log_1</envar>.</para></listitem> <listitem><para> In contrast, a fixed amount of work is introduced to -each SYNC by each sequence that is replicated. +each SYNC by each sequence that is replicated.</para> <para> Replicate 300 sequence and 300 rows need to be added to -<envar>sl_seqlog</envar> on a regular basis. +<envar>sl_seqlog</envar> on a regular basis.</para> <para> It is more than likely that if the value of a particular sequence hasn't changed since it was last checked, perhaps the same value need not be stored over and over; some thought needs to go into -how to do that safely. 
+how to do that safely.</para></listitem> -</itemizedlist> +</itemizedlist></para></sect2> </sect1> @@ -171,8 +173,7 @@ sgml-always-quote-attributes:t sgml-indent-step:1 sgml-indent-data:t -sgml-parent-document:nil -sgml-default-dtd-file:"./reference.ced" +sgml-parent-document:"book.sgml" sgml-exposed-tags:nil sgml-local-catalogs:("/usr/lib/sgml/catalog") sgml-local-ecat-files:nil Index: adminscripts.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/adminscripts.sgml,v retrieving revision 1.13 retrieving revision 1.14 diff -Ldoc/adminguide/adminscripts.sgml -Ldoc/adminguide/adminscripts.sgml -u -w -r1.13 -r1.14 --- doc/adminguide/adminscripts.sgml +++ doc/adminguide/adminscripts.sgml @@ -1,15 +1,16 @@ <!-- $Id$ --> <sect1 id="altperl"> -<title>Slony-I Administration Scripts</title> +<title>&slony1; Administration Scripts</title> -<para>In the <filename>altperl</filename> directory in the <application>CVS</application> -tree, there is a sizable set of <application>Perl</application> scripts that may be -used to administer a set of <productname>Slony-I</productname> instances, which -support having arbitrary numbers of nodes.</para> +<para>In the <filename>altperl</filename> directory in the +<application>CVS</application> tree, there is a sizable set of +<application>Perl</application> scripts that may be used to administer +a set of &slony1; instances, which support having arbitrary numbers of +nodes.</para> <para>Most of them generate Slonik scripts that are then to be passed on to the <link linkend="slonik"><application>slonik</application></link> utility -to be submitted to all of the <productname>Slony-I</productname> nodes in a +to be submitted to all of the &slony1; nodes in a particular cluster. At one time, this embedded running <link linkend="slonik">slonik</link> on the slonik scripts. Unfortunately, this turned out to be a pretty large calibre @@ -23,8 +24,7 @@ <para>The UNIX environment variable <envar>SLONYNODES</envar> is used to determine what Perl configuration file will be used to control the -shape of the nodes in a <productname>Slony-I</productname> -cluster.</para> +shape of the nodes in a &slony1; cluster.</para> <para>What variables are set up. <itemizedlist> @@ -35,8 +35,9 @@ <listitem><para><envar>$APACHE_ROTATOR</envar>="/opt/twcsds004/OXRS/apache/rotatelogs"; # If set, where to find Apache log rotator</para></listitem> </itemizedlist> </para> -<para> -You then define the set of nodes that are to be replicated using a set of calls to <function>add_node()</function>. + +<para> You then define the set of nodes that are to be replicated +using a set of calls to <function>add_node()</function>. 
</para> <para><command> @@ -66,8 +67,7 @@ objects will be contained in a particular replication set.</para> <para>Unlike <envar>SLONYNODES</envar>, which is essential for -<emphasis>all</emphasis> of the <link -linkend="slonik">slonik</link>-generating scripts, this only needs to +<emphasis>all</emphasis> of the <link linkend="slonik">slonik</link>-generating scripts, this only needs to be set when running <filename>create_set.pl</filename>, as that is the only script used to control what tables will be in a particular replication set.</para> @@ -83,7 +83,7 @@ <listitem><para>@PKEYEDTABLES</para> <para> An array of names of tables to be replicated that have a -defined primary key so that Slony-I can automatically select its key</para> +defined primary key so that &slony1; can automatically select its key</para> </listitem> <listitem><para>%KEYEDTABLES</para> <para> A hash table of tables to be replicated, where the hash index @@ -93,8 +93,8 @@ <listitem><para>@SERIALTABLES</para> <para> An array of names of tables to be replicated that have no -candidate for primary key. Slony-I will add a key field based on a -sequence that Slony-I generates</para> +candidate for primary key. &slony1; will add a key field based on a +sequence that &slony1; generates</para> </listitem> <listitem><para>@SEQUENCES</para> @@ -109,7 +109,8 @@ <itemizedlist> <listitem><para> a set of <function>add_node()</function> calls to configure the cluster</para></listitem> -<listitem><para> The arrays <envar>@KEYEDTABLES</envar>, <envar>@SERIALTABLES</envar>, and <envar>@SEQUENCES</envar></para></listitem> +<listitem><para> The arrays <envar>@KEYEDTABLES</envar>, +<envar>nvar>@SERIALT</envar>nvar>, and <envar>@SEQUENCES</envar></para></listitem> </itemizedlist> </sect2> <sect2><title>create_set.pl</title> @@ -121,11 +122,13 @@ </sect2> <sect2><title>drop_node.pl</title> -<para>Generates Slonik script to drop a node from a Slony-I cluster.</para> +<para>Generates Slonik script to drop a node from a &slony1; cluster.</para> </sect2> <sect2><title>drop_set.pl</title> -<para>Generates Slonik script to drop a replication set (<emphasis>e.g.</emphasis> - set of tables and sequences) from a Slony-I cluster.</para> +<para>Generates Slonik script to drop a replication set +(<emphasis>e.g.</emphasis> - set of tables and sequences) from a +&slony1; cluster.</para> </sect2> <sect2><title>failover.pl</title> @@ -133,7 +136,7 @@ </sect2> <sect2><title>init_cluster.pl</title> -<para>Generates Slonik script to initialize a whole Slony-I cluster, +<para>Generates Slonik script to initialize a whole &slony1; cluster, including setting up the nodes, communications paths, and the listener routing.</para> </sect2> @@ -147,7 +150,8 @@ </sect2> <sect2><title>replication_test.pl</title> -<para>Script to test whether Slony-I is successfully replicating data.</para> +<para>Script to test whether &slony1; is successfully replicating +data.</para> </sect2> <sect2><title>restart_node.pl</title> @@ -186,9 +190,9 @@ </sect2><sect2><title>slon_watchdog2.pl</title> -<para>This is a somewhat smarter watchdog; it monitors a particular Slony-I -node, and restarts the slon process if it hasn't seen updates go in in -20 minutes or more.</para> +<para>This is a somewhat smarter watchdog; it monitors a particular +&slony1; node, and restarts the slon process if it hasn't seen updates +go in in 20 minutes or more.</para> <para>This is helpful if there is an unreliable network connection such that the slon sometimes stops working without becoming aware of 
it.</para> @@ -200,9 +204,9 @@ </sect2><sect2><title>uninstall_nodes.pl</title> -<para>This goes through and drops the Slony-I schema from each node; use -this if you want to destroy replication throughout a cluster. This is -a VERY unsafe script!</para> +<para>This goes through and drops the &slony1; schema from each node; +use this if you want to destroy replication throughout a cluster. +This is a <emphasis>VERY</emphasis> unsafe script!</para> </sect2><sect2><title>unsubscribe_set.pl</title> @@ -212,15 +216,15 @@ <sect2><title>update_nodes.pl</title> <para>Generates Slonik script to tell all the nodes to update the -Slony-I functions. This will typically be needed when you upgrade -from one version of <productname>Slony-I</productname> to another.</para> +&slony1; functions. This will typically be needed when you upgrade +from one version of &slony1; to another.</para> </sect2> <sect2 id="regenlisten"><title>regenerate-listens.pl</title> -<para>This script connects to a <productname>Slony-I</productname> -node, and queries various tables (sl_set, sl_node, sl_subscribe, -sl_path) to compute what <command><link linkend="stmtstorelisten"> -STORE LISTEN</link></command> requests should be submitted to the +<para>This script connects to a &slony1; node, and queries various +tables (sl_set, sl_node, sl_subscribe, sl_path) to compute what +<command><link linkend="stmtstorelisten"> STORE +LISTEN</link></command> requests should be submitted to the cluster.</para> <para> See the documentation on <link linkend="autolisten">Automated @@ -238,8 +242,7 @@ sgml-always-quote-attributes:t sgml-indent-step:1 sgml-indent-data:t -sgml-parent-document:nil -sgml-default-dtd-file:"./reference.ced" +sgml-parent-document:"book.sgml" sgml-exposed-tags:nil sgml-local-catalogs:("/usr/lib/sgml/catalog") sgml-local-ecat-files:nil Index: slonik_ref.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/slonik_ref.sgml,v retrieving revision 1.11 retrieving revision 1.12 diff -Ldoc/adminguide/slonik_ref.sgml -Ldoc/adminguide/slonik_ref.sgml -u -w -r1.11 -r1.12 --- doc/adminguide/slonik_ref.sgml +++ doc/adminguide/slonik_ref.sgml @@ -83,7 +83,7 @@ <para> Those commands are grouped together into one transaction per participating node. 
</para> -<!-- ************************************************************ --> +<!-- ************************************************************ --></sect3></sect2></sect1></article> <reference id="hdrcmds"> <title>Slonik Preamble Commands</title> @@ -1909,8 +1909,7 @@ sgml-always-quote-attributes:t sgml-indent-step:1 sgml-indent-data:t -sgml-parent-document:nil -sgml-default-dtd-file:"./reference.ced" +sgml-parent-document:"book.sgml" sgml-exposed-tags:nil sgml-local-catalogs:("/usr/lib/sgml/catalog") sgml-local-ecat-files:nil Index: help.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/help.sgml,v retrieving revision 1.9 retrieving revision 1.10 diff -Ldoc/adminguide/help.sgml -Ldoc/adminguide/help.sgml -u -w -r1.9 -r1.10 --- doc/adminguide/help.sgml +++ doc/adminguide/help.sgml @@ -1,22 +1,27 @@ <!-- $Id$ --> <sect1 id="help"> -<title>More Slony-I Help</title> +<title>More &slony1; Help</title> -<para>If you are having problems with Slony-I, you have several +<para>If you are having problems with &slony1;, you have several options for help: <itemizedlist> -<listitem><para> <ulink -url="http://slony.info/">http://slony.info/</ulink> - the official -<quote>home</quote> of Slony</para></listitem> +<listitem><para> <ulink url="http://slony.info/">http://slony.info/</ulink> - the official +<quote>home</quote> of &slony1;</para></listitem> -<listitem><para> Documentation on the Slony-I Site- Check the -documentation on the Slony website: <ulink url="http://gborg.postgresql.org/project/slony1/genpage.php?howto_idx">Howto +<listitem><para> Documentation on the &slony1; Site- Check the +documentation on the Slony website: <ulink + url="http://gborg.postgresql.org/project/slony1/genpage.php?howto_idx">Howto </ulink></para></listitem> +<listitem><para> A copy of this documentation lives at <ulink + url="http://linuxdatabases.info/info/slony.html"> Christopher Browne's +Web Site </ulink>.</para></listitem> + <listitem><para> Other Documentation - There are several articles here -<ulink url="http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php#Replication"> +<ulink + url="http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php#Replication"> Varlena GeneralBits </ulink> that may be helpful.</para></listitem> <listitem><para> IRC - There are usually some people on #slony on @@ -56,8 +61,7 @@ sgml-always-quote-attributes:t sgml-indent-step:1 sgml-indent-data:t -sgml-parent-document:nil -sgml-default-dtd-file:"./reference.ced" +sgml-parent-document:"book.sgml" sgml-exposed-tags:nil sgml-local-catalogs:("/usr/lib/sgml/catalog") sgml-local-ecat-files:nil Index: cluster.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/cluster.sgml,v retrieving revision 1.7 retrieving revision 1.8 diff -Ldoc/adminguide/cluster.sgml -Ldoc/adminguide/cluster.sgml -u -w -r1.7 -r1.8 --- doc/adminguide/cluster.sgml +++ doc/adminguide/cluster.sgml @@ -1,11 +1,10 @@ <!-- $Id$ --> <sect1 id="cluster"> -<title>Defining Slony-I Clusters</title> +<title>Defining &slony1; Clusters</title> -<para>A <productname>Slony-I</productname> cluster is the basic -grouping of database instances in which replication takes place. 
It -consists of a set of <productname>PostgreSQL</productname> database -instances in which is defined a namespace specific to that +<para>A &slony1; cluster is the basic grouping of database instances +in which replication takes place. It consists of a set of &postgres; +database instances in which is defined a namespace specific to that cluster.</para> <para>Each database instance in which replication is to take place is @@ -35,8 +34,7 @@ sgml-always-quote-attributes:t sgml-indent-step:1 sgml-indent-data:t -sgml-parent-document:nil -sgml-default-dtd-file:"./reference.ced" +sgml-parent-document:"book.sgml" sgml-exposed-tags:nil sgml-local-catalogs:("/usr/lib/sgml/catalog") sgml-local-ecat-files:nil Index: concepts.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/concepts.sgml,v retrieving revision 1.8 retrieving revision 1.9 diff -Ldoc/adminguide/concepts.sgml -Ldoc/adminguide/concepts.sgml -u -w -r1.8 -r1.9 --- doc/adminguide/concepts.sgml +++ doc/adminguide/concepts.sgml @@ -1,10 +1,10 @@ <!-- $Id$ --> <sect1 id="concepts"> -<title><productname>Slony-I</productname> Concepts</title> +<title>&slony1; Concepts</title> -<para>In order to set up a set of <productname>Slony-I</productname> replicas, it is necessary to -understand the following major abstractions that it uses:</para> +<para>In order to set up a set of &slony1; replicas, it is necessary +to understand the following major abstractions that it uses:</para> <itemizedlist> <listitem><para>Cluster</para></listitem> @@ -16,20 +16,22 @@ <sect2> <title>Cluster</title> -<para>In <productname>Slony-I</productname> terms, a Cluster is a named set of PostgreSQL -database instances; replication takes place between those databases.</para> +<para>In &slony1; terms, a Cluster is a named set of &postgres; +database instances; replication takes place between those +databases.</para> <para>The cluster name is specified in each and every Slonik script via the directive:</para> <programlisting> cluster name = 'something'; </programlisting> -<para>If the Cluster name is <envar>something</envar>, then <productname>Slony-I</productname> will -create, in each database instance in the cluster, the namespace/schema <envar>_something.</envar></para> +<para>If the Cluster name is <envar>something</envar>, then &slony1; +will create, in each database instance in the cluster, the +namespace/schema <envar>_something.</envar></para> </sect2> <sect2><title>Node</title> -<para>A <productname>Slony-I</productname> Node is a named PostgreSQL database that will be participating in replication.</para> +<para>A &slony1; Node is a named &postgres; database that will be participating in replication.</para> <para>It is defined, near the beginning of each Slonik script, using the directive:</para> <programlisting> @@ -40,17 +42,17 @@ information indicates a string argument that will ultimately be passed to the <function>PQconnectdb()</function> libpq function.</para> -<para>Thus, a <productname>Slony-I</productname> cluster consists of:</para> +<para>Thus, a &slony1; cluster consists of:</para> <itemizedlist> <listitem><para> A cluster name</para></listitem> - <listitem><para> A set of <productname>Slony-I</productname> nodes, each of which has a namespace based on that cluster name</para></listitem> + <listitem><para> A set of &slony1; nodes, each of which has a namespace based on that cluster name</para></listitem> </itemizedlist> </sect2> <sect2><title> Replication Set</title> 
<para>A replication set is defined as a set of tables and sequences that are to be replicated between nodes in a -<productname>Slony-I</productname> cluster.</para> +&slony1; cluster.</para> <para>You may have several sets, and the <quote>flow</quote> of replication does not need to be identical between those sets.</para> @@ -70,10 +72,10 @@ <para>The origin node will never be considered a <quote>subscriber.</quote> (Ignoring the case where the cluster is reshaped, and the origin is expressly shifted to another node.) But -<productname>Slony-I</productname> supports the notion of cascaded -subscriptions, that is, a node that is subscribed to some set may also -behave as a <quote>provider</quote> to other nodes in the cluster for -that replication set.</para> +&slony1; supports the notion of cascaded subscriptions, that is, a +node that is subscribed to some set may also behave as a +<quote>provider</quote> to other nodes in the cluster for that +replication set.</para> </sect2> </sect1> @@ -86,8 +88,7 @@ sgml-always-quote-attributes:t sgml-indent-step:1 sgml-indent-data:t -sgml-parent-document:nil -sgml-default-dtd-file:"./reference.ced" +sgml-parent-document:"book.sgml" sgml-exposed-tags:nil sgml-local-catalogs:("/usr/lib/sgml/catalog") sgml-local-ecat-files:nil Index: maintenance.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/maintenance.sgml,v retrieving revision 1.9 retrieving revision 1.10 diff -Ldoc/adminguide/maintenance.sgml -Ldoc/adminguide/maintenance.sgml -u -w -r1.9 -r1.10 --- doc/adminguide/maintenance.sgml +++ doc/adminguide/maintenance.sgml @@ -1,20 +1,21 @@ <!-- $Id$ --> -<sect1 id="maintenance"> <title>Slony-I Maintenance</title> +<sect1 id="maintenance"> <title>&slony1; Maintenance</title> -<para><productname>Slony-I</productname> actually does most of its necessary -maintenance itself, in a <quote>cleanup</quote> thread: +<para>&slony1; actually does a lot of its necessary maintenance +itself, in a <quote>cleanup</quote> thread: <itemizedlist> <listitem><para> Deletes old data from various tables in the -<productname>Slony-I</productname> cluster's namespace, notably entries in -sl_log_1, sl_log_2 (not yet used), and sl_seqlog.</para></listitem> - -<listitem><para> Vacuum certain tables used by <productname>Slony-I</productname>. -As of 1.0.5, this includes pg_listener; in earlier versions, you must -vacuum that table heavily, otherwise you'll find replication slowing -down because <productname>Slony-I</productname> raises plenty of events, which -leads to that table having plenty of dead tuples.</para> +<productname>Slony-I</productname> cluster's namespace, notably +entries in <envar>sl_log_1</envar>, <envar>sl_log_2</envar> (not yet +used), and <envar>sl_seqlog</envar>.</para></listitem> + +<listitem><para> Vacuum certain tables used by &slony1;. As of 1.0.5, +this includes pg_listener; in earlier versions, you must vacuum that +table heavily, otherwise you'll find replication slowing down because +&slony1; raises plenty of events, which leads to that table having +plenty of dead tuples.</para> <para> In some versions (1.1, for sure; possibly 1.0.5) there is the option of not bothering to vacuum any of these tables if you are using @@ -22,13 +23,14 @@ vacuuming of these tables. Unfortunately, it has been quite possible for <application>pg_autovacuum</application> to not vacuum quite frequently enough, so you probably want to use the internal vacuums. 
-Vacuuming pg_listener <quote>too often</quote> isn't nearly as -hazardous as not vacuuming it frequently enough.</para> +Vacuuming <envar>pg_listener</envar> <quote>too often</quote> isn't +nearly as hazardous as not vacuuming it frequently enough.</para> <para>Unfortunately, if you have long-running transactions, vacuums cannot clear out dead tuples that are newer than the eldest transaction that is still running. This will most notably lead to -pg_listener growing large and will slow replication.</para></listitem> +<envar>pg_listener</envar> growing large and will slow +replication.</para></listitem> </itemizedlist> </para> @@ -44,7 +46,7 @@ </sect2> <sect2><title>Parallel to Watchdog: generate_syncs.sh</title> -<para>A new script for <productname>Slony-I</productname> 1.1 is +<para>A new script for &slony1; 1.1 is <application>generate_syncs.sh</application>, which addresses the following kind of situation.</para> @@ -57,9 +59,10 @@ survived. Your online application then saw nearly three days worth of reasonably heavy transaction load.</para> -<para>When you restart slon on Monday, it hasn't done a SYNC on the -master since Friday, so that the next <quote>SYNC set</quote> comprises all -of the updates between Friday and Monday. Yuck.</para> +<para>When you restart <application>slon</application> on Monday, it +hasn't done a SYNC on the master since Friday, so that the next +<quote>SYNC set</quote> comprises all of the updates between Friday +and Monday. Yuck.</para> <para>If you run <application>generate_syncs.sh</application> as a cron job every 20 minutes, it will force in a periodic <command>SYNC</command> on the origin, which @@ -73,15 +76,14 @@ <sect2><title>Replication Test Scripts </title> -<para> In the directory <filename>tools</filename> may be found four scripts -that may be used to do monitoring of <productname> Slony-I -</productname> instances: +<para> In the directory <filename>tools</filename> may be found four +scripts that may be used to do monitoring of &slony1; instances: <itemizedlist> -<listitem><para> <command>test_slony_replication.pl</command> is a Perl script -to which you can pass connection information to get to a -<productname>Slony-I</productname> node. It then queries <envar>sl_path</envar> and other +<listitem><para> <command>test_slony_replication.pl</command> is a +Perl script to which you can pass connection information to get to a +&slony1; node. 
It then queries <envar>sl_path</envar> and other information on that node in order to determine the shape of the requested replication set.</para> @@ -97,12 +99,11 @@ ); </programlisting></para> -<para> The last column in that table was defined by -<productname>Slony-I</productname> as one lacking a primary key...</para> +<para> The last column in that table was defined by &slony1; as one +lacking a primary key...</para> -<para> This script generates a line of output for each -<productname>Slony-I</productname> node that is active for the requested -replication set in a file called +<para> This script generates a line of output for each &slony1; node +that is active for the requested replication set in a file called <filename>cluster.fact.log</filename>.</para> <para> There is an additional <option>finalquery</option> option that allows @@ -115,8 +116,9 @@ <listitem><para><command>run_rep_tests.sh</command> is a <quote>wrapper</quote> script that runs <command>test_slony_replication.pl</command>.</para> -<para> If you have several <productname>Slony-I</productname> clusters, you might -set up configuration in this file to connect to all those clusters.</para></listitem> +<para> If you have several &slony1; clusters, you might set up +configuration in this file to connect to all those +clusters.</para></listitem> <listitem><para><command>nagios_slony_test.pl</command> is a script that was constructed to query the log files so that you might run the @@ -130,9 +132,9 @@ <application>cron</application> job run the tests and have <productname>Nagios</productname> check the results rather than having <productname>Nagios</productname> run the tests directly. The tests -can exercise the whole <productname>Slony-I</productname> cluster at -once rather than <productname>Nagios</productname> invoking updates -over and over again.</para></listitem> +can exercise the whole &slony1; cluster at once rather than +<productname>Nagios</productname> invoking updates over and over +again.</para></listitem> </itemizedlist></para> @@ -165,8 +167,7 @@ sgml-always-quote-attributes:t sgml-indent-step:1 sgml-indent-data:t -sgml-parent-document:nil -sgml-default-dtd-file:"./reference.ced" +sgml-parent-document:book.sgml sgml-exposed-tags:nil sgml-local-catalogs:("/usr/lib/sgml/catalog") sgml-local-ecat-files:nil Index: firstdb.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/firstdb.sgml,v retrieving revision 1.8 retrieving revision 1.9 diff -Ldoc/adminguide/firstdb.sgml -Ldoc/adminguide/firstdb.sgml -u -w -r1.8 -r1.9 --- doc/adminguide/firstdb.sgml +++ doc/adminguide/firstdb.sgml @@ -3,18 +3,17 @@ <para>In this example, we will be replicating a brand new pgbench database. 
The mechanics of replicating an existing database are -covered here, however we recommend that you learn how -<productname>Slony-I</productname> functions by using a fresh new -non-production database.</para> - -<para>The <productname>Slony-I</productname> replication engine is -trigger-based, allowing us to replicate databases (or portions -thereof) running under the same postmaster.</para> +covered here, however we recommend that you learn how &slony1; +functions by using a fresh new non-production database.</para> + +<para>The &slony1; replication engine is trigger-based, allowing us to +replicate databases (or portions thereof) running under the same +postmaster.</para> <para>This example will show how to replicate the pgbench database running on localhost (master) to the pgbench slave database also running on localhost (slave). We make a couple of assumptions about -your PostgreSQL configuration: +your &postgres; configuration: <itemizedlist> @@ -26,8 +25,10 @@ </itemizedlist></para> -<para> The <envar>REPLICATIONUSER</envar> needs to be a PostgreSQL superuser. -This is typically postgres or pgsql.</para> +<para> The <envar>REPLICATIONUSER</envar> needs to be a &postgres; +superuser. This is typically postgres or pgsql, although in complex +environments it is quite likely a good idea to define a &slony1; user +to distinguish between them.</para> <para>You should also set the following shell variables: @@ -55,7 +56,7 @@ </itemizedlist> </para> -<para><warning><Para> If you're changing these variables to use +<para><warning><para> If you're changing these variables to use different hosts for <envar>MASTERHOST</envar> and <envar>SLAVEHOST</envar>, be sure <emphasis>not</emphasis> to use localhost for either of them. This will result in an error similar to the following:</para> @@ -81,16 +82,18 @@ pgbench -i -s 1 -U $PGBENCHUSER -h $MASTERHOST $MASTERDBNAME </programlisting> -<para>Because <productname>Slony-I</productname> depends on the databases having the pl/pgSQL procedural -language installed, we better install it now. It is possible that you have -installed pl/pgSQL into the template1 database in which case you can skip this -step because it's already installed into the <envar>$MASTERDBNAME</envar>. +<para>Because &slony1; depends on the databases having the pl/pgSQL +procedural language installed, we better install it now. It is +possible that you have installed pl/pgSQL into the template1 database +in which case you can skip this step because it's already installed +into the <envar>$MASTERDBNAME</envar>. <programlisting> createlang plpgsql -h $MASTERHOST $MASTERDBNAME </programlisting> </para> -<para><productname>Slony-I</productname> does not yet automatically copy table definitions from a + +<para>&slony1; does not automatically copy table definitions from a master when a slave subscribes to it, so we need to import this data. We do this with <application>pg_dump</application>. @@ -99,13 +102,14 @@ </programlisting> </para> -<para>To illustrate how <productname>Slony-I</productname> allows for on the fly -replication subscription, let's start up <application>pgbench</application>. If -you run the <application>pgbench</application> application in the foreground of a -separate terminal window, you can stop and restart it with different -parameters at any time. You'll need to re-export the variables again -so they are available in this session as well. 
+<para>To illustrate how &slony1; allows for on the fly replication +subscription, let's start up <application>pgbench</application>. If +you run the <application>pgbench</application> application in the +foreground of a separate terminal window, you can stop and restart it +with different parameters at any time. You'll need to re-export the +variables again so they are available in this session as well. </para> + <para>The typical command to run <application>pgbench</application> would look like: <programlisting> @@ -121,11 +125,11 @@ <sect2><title>Configuring the Database for Replication.</title> <para>Creating the configuration tables, stored procedures, triggers -and configuration is all done through the -<link linkend="slonik"><application>slonik</application></link> tool. -It is a specialized scripting aid that mostly calls stored procedures -in the master/slave (node) databases. The script to create the initial -configuration for the simple master-slave setup of our pgbench database looks like this: +and configuration is all done through the <link linkend="slonik"><application>slonik</application></link> tool. It is +a specialized scripting aid that mostly calls stored procedures in the +master/slave (node) databases. The script to create the initial +configuration for the simple master-slave setup of our pgbench +database looks like this: <programlisting> #!/bin/sh @@ -186,14 +190,15 @@ store listen (origin=1, provider = 1, receiver =2); store listen (origin=2, provider = 2, receiver =1); _EOF_ -</programlisting> +</programlisting></para> <para>Is the <application>pgbench</application> still running? If not start it again.</para> + <para>At this point we have 2 databases that are fully prepared. One -is the master database in which <application>pgbench</application> is busy -accessing and changing rows. It's now time to start the replication -daemons.</para> +is the master database in which <application>pgbench</application> is +busy accessing and changing rows. It's now time to start the +replication daemons.</para> <para>On $MASTERHOST the command to start the replication engine is @@ -213,7 +218,7 @@ replicating any data yet. The notices you are seeing is the synchronization of cluster configurations between the 2 <application><link linkend="slon">slon</link></application> -processes. 
+processes.</para> <para>To start replicating the 4 pgbench tables (set 1) from the master (node id 1) the the slave (node id 2), execute the following @@ -302,13 +307,12 @@ </para> <para>Note that there is somewhat more sophisticated documentation of -the process in the <productname>Slony-I</productname> source code tree -in a file called +the process in the &slony1; source code tree in a file called <filename>slony-I-basic-mstr-slv.txt</filename>.</para> <para>If this script returns <command>FAILED</command> please contact the developers at <ulink url="http://slony.info/"> -http://slony.info/</ulink></para> +http://slony.info/</ulink></para></sect2> </sect1> @@ -321,8 +325,7 @@ sgml-always-quote-attributes:t sgml-indent-step:1 sgml-indent-data:t -sgml-parent-document:nil -sgml-default-dtd-file:"./reference.ced" +sgml-parent-document:"book.sgml" sgml-exposed-tags:nil sgml-local-catalogs:("/usr/lib/sgml/catalog") sgml-local-ecat-files:nil Index: prerequisites.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/prerequisites.sgml,v retrieving revision 1.9 retrieving revision 1.10 diff -Ldoc/adminguide/prerequisites.sgml -Ldoc/adminguide/prerequisites.sgml -u -w -r1.9 -r1.10 --- doc/adminguide/prerequisites.sgml +++ doc/adminguide/prerequisites.sgml @@ -1,8 +1,8 @@ <!-- $Id --> <sect1 id="requirements"> <title>System Requirements</title> <para>Any platform that can run -PostgreSQL should be able to run -<productname>Slony-I</productname>.</para> +&postgres; should be able to run +&slony1;.</para> <para>The platforms that have received specific testing at the time of this release are FreeBSD-4X-i368, FreeBSD-5X-i386, FreeBSD-5X-alpha, @@ -10,9 +10,9 @@ <trademark>Solaris</trademark>-2.8-SPARC, <trademark>Solaris</trademark>-2.9-SPARC, AIX 5.1 and OpenBSD-3.5-sparc64.</para> -<para>There have been reports of success at running -<productname>Slony-I</productname> hosts that are running PostgreSQL -on Microsoft <trademark>Windows</trademark>. At this time, the +<para>There have been reports of success at running &slony1; hosts +that are running PostgreSQL on Microsoft +<trademark>Windows</trademark>. At this time, the <quote>binary</quote> applications (<emphasis>e.g.</emphasis> - <application><link linkend="slonik">slonik</link></application>, <application><link linkend="slon">slon</link></application>) do not @@ -33,14 +33,13 @@ interested party from volunteering to do the port.</para> <sect2> -<title> Slony-I Software Dependancies</title> +<title> &slony1; Software Dependancies</title> -<para> At present, <productname>Slony-I</productname> <emphasis>as -well as <productname>PostgreSQL</productname></emphasis> need to be -able to be compiled from source at your site. +<para> At present, &slony1; <emphasis>as well as &postgres;</emphasis> +need to be able to be compiled from source at your site.</para> -<para> In order to compile <productname>Slony-I</productname>, you -need to have the following tools. +<para> In order to compile &slony1;, you need to have the following +tools. <itemizedlist> <listitem><para> GNU make. Other make programs will not work. GNU @@ -54,11 +53,10 @@ <listitem><para> You need an ISO/ANSI C compiler. Recent versions of <application>GCC</application> work.</para></listitem> -<listitem><para> You also need a recent version of PostgreSQL -<emphasis>source</emphasis>. 
<productname>Slony-I</productname> -depends on namespace support so you must have PostgreSQL version 7.3.3 -or newer to be able to build and use -<productname>Slony-I</productname>. </para></listitem> +<listitem><para> You also need a recent version of &postgres; +<emphasis>source</emphasis>. &slony1; depends on namespace support so +you must have &postgres; version 7.3.3 or newer to be able to build +and use &slony1;. </para></listitem> <listitem><para> GNU packages may be included in the standard packaging for your operating system, or you may need to look for @@ -68,8 +66,8 @@ url="ftp://ftp.gnu.org/gnu"> ftp://ftp.gnu.org/gnu </ulink> .)</para></listitem> -<listitem><para> If you need to obtain PostgreSQL source, you can -download it from your favorite PostgreSQL mirror. See <ulink +<listitem><para> If you need to obtain &postgres; source, you can +download it from your favorite &postgres; mirror. See <ulink url="http://www.postgresql.org/mirrors-www.html"> http://www.postgresql.org/mirrors-www.html </ulink> for a list.</para></listitem> </itemizedlist> </para> @@ -79,21 +77,20 @@ installation.</para> <note><para>There are changes afoot for version 1.1 that ought to make -it possible to compile <productname>Slony-I</productname> separately -from <productname>PostgreSQL</productname>, which should make it -practical for the makers of distributions of +it possible to compile &slony1; separately from &postgres;, which +should make it practical for the makers of distributions of <productname>Linux</productname> and <productname>FreeBSD</productname> to include precompiled binary -packages for <productname>Slony-I</productname>, but until that -happens, you need to be prepared to use versions of all this software -that you compile yourself.</para></note> +packages for &slony1;, but until that happens, you need to be prepared +to use versions of all this software that you compile +yourself.</para></note> </sect2> <sect2> -<title> Getting <productname>Slony-I</productname>Source</title> +<title> Getting &slony1; Source</title> -<para>You can get the <productname>Slony-I</productname> source from -<ulink url="http://developer.postgresql.org/~wieck/slony1/download/"> +<para>You can get the &slony1; source from <ulink + url="http://developer.postgresql.org/~wieck/slony1/download/"> http://developer.postgresql.org/~wieck/slony1/download/</ulink> </para> @@ -103,24 +100,24 @@ <title> Time Synchronization</title> <para> All the servers used within the replication cluster need to -have their Real Time Clocks in sync. This is to ensure that slon -doesn't generate errors with messages indicating that a subscriber is -already ahead of its provider during replication. We recommend you -have <application>ntpd</application> running on all nodes, where -subscriber nodes using the <quote>master</quote> provider host as -their time server.</para> - -<para> It is possible for <productname>Slony-I</productname> to -function even in the face of there being some time discrepancies, but -having systems <quote>in sync</quote> is usually pretty important for -distributed applications.</para> +have their Real Time Clocks in sync. This is to ensure that <link + linkend="slon"> slon </link> doesn't generate errors with messages +indicating that a subscriber is already ahead of its provider during +replication. 
We recommend you have <application>ntpd</application> +running on all nodes, where subscriber nodes using the +<quote>master</quote> provider host as their time server.</para> + +<para> It is possible for &slony1; itself to function even in the face +of there being some time discrepancies, but having systems <quote>in +sync</quote> is usually pretty important for distributed +applications.</para> <para> See <ulink url="http://www.ntp.org/"> www.ntp.org </ulink> for more details about NTP (Network Time Protocol).</para> <para> Some users have reported problems that have been traced to -their locales indicating the use of some time zone that -<productname>PostgreSQL</productname> did not recognize. +their locales indicating the use of some time zone that &postgres; did +not recognize. <itemizedlist> @@ -130,18 +127,17 @@ break.</para> <para> <command>CUT0</command> is a variant way of describing -<command>UTC</command> +<command>UTC</command></para> </listitem> <listitem><para> Some countries' timezones are not yet included in -<productname> PostgreSQL </productname>. </para></listitem> +&postgres;. </para></listitem> -</itemizedlist> +</itemizedlist></para> <para> In any case, what commonly seems to be the <quote>best -practice</quote> with <productname>Slony-I</productname> (and, for -that matter, <productname> PostgreSQL </productname>) is for the -postmaster user and/or the user under which +practice</quote> with &slony1; (and, for that matter, &postgres;) is +for the postmaster user and/or the user under which <application>slon</application> runs to use <command><envar>TZ</envar>=UTC</command> or <command><envar>TZ</envar>=GMT</command>. Those timezones are @@ -155,20 +151,17 @@ <para>It is necessary that the hosts that are to replicate between one another have <emphasis>bidirectional</emphasis> network communications -to the PostgreSQL instances. That is, if node B is replicating data +to the &postgres; instances. That is, if node B is replicating data from node A, it is necessary that there be a path from A to B and from B to A. It is recommended that, as much as possible, all nodes in a -<productname>Slony-I</productname> cluster allow this sort of -bidirection communications from any node in the cluster to any other -node in the cluster.</para> - -<para>Note that the network addresses must be consistent across all of -the nodes. Thus, if there is any need to use a <quote>public</quote> -address for a node, to allow remote/VPN access, that -<quote>public</quote> address needs to be able to be used consistently -throughout the <productname>Slony-I</productname> cluster, as the -address is propagated throughout the cluster in table -<envar>sl_path</envar>.</para> +&slony1; cluster allow this sort of bidirection communications from +any node in the cluster to any other node in the cluster.</para> + +<para>For ease of configuration, network addresses should be +consistent across all of the nodes. 
<link linkend="stmtstorepath"> +<command>STORE PATH</command> </link> does allow them to vary, but +down this road lies madness as you try to manage the multiplicity of +paths pointing to the same server.</para> <para>A possible workaround for this, in environments where firewall rules are particularly difficult to implement, may be to establish @@ -180,24 +173,22 @@ <para> Note that <application>slonik</application> and the <application>slon</application> instances need no special connections or protocols to communicate with one another; they merely need access -to the <application>PostgreSQL</application> databases, connecting as -a <quote>superuser</quote>.</para> +to the &postgres; databases, connecting as a +<quote>superuser</quote>.</para> <para> An implication of this communications model is that the entire -extended network in which a <productname>Slony-I</productname> cluster -operates must be able to be treated as being secure. If there is a -remote location where you cannot trust one of the databases that is a -<productname>Slony-I</productname> node to be considered -<quote>secure,</quote> this represents a vulnerability that can -adversely affect the security of the entire cluster. As a +extended network in which a &slony1; cluster operates must be able to +be treated as being secure. If there is a remote location where you +cannot trust one of the databases that is a &slony1; node to be +considered <quote>secure,</quote> this represents a vulnerability that +can adversely affect the security of the entire cluster. As a <quote>peer-to-peer</quote> system, <emphasis>any</emphasis> of the hosts is able to introduce replication events that will affect the entire cluster. Therefore, the security policies throughout the cluster can only be considered as stringent as those applied at the -<emphasis>weakest</emphasis> link. Running a -<productname>Slony-I</productname> node at a branch location that -can't be kept secure compromises security for the cluster as a -whole.</para> +<emphasis>weakest</emphasis> link. Running a &slony1; node at a +branch location that can't be kept secure compromises security for the +cluster as a whole.</para> <para>In the future plans is a feature whereby updates for a particular replication set would be serialized via a scheme called @@ -212,7 +203,7 @@ datagrams on avian carriers - RFC 1149.</ulink> But whatever the transmission mechanism, this will allow one way communications such that subscribers that use log shipping have no need of access to other -<productname>Slony-I</productname> nodes.</para> +&slony1; nodes.</para> </sect2> </sect1> @@ -225,8 +216,7 @@ sgml-always-quote-attributes:t sgml-indent-step:1 sgml-indent-data:t -sgml-parent-document:nil -sgml-default-dtd-file:"./reference.ced" +sgml-parent-document:"book.sgml" sgml-exposed-tags:nil sgml-local-catalogs:("/usr/lib/sgml/catalog") sgml-local-ecat-files:nil Index: subscribenodes.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/subscribenodes.sgml,v retrieving revision 1.8 retrieving revision 1.9 diff -Ldoc/adminguide/subscribenodes.sgml -Ldoc/adminguide/subscribenodes.sgml -u -w -r1.8 -r1.9 --- doc/adminguide/subscribenodes.sgml +++ doc/adminguide/subscribenodes.sgml @@ -5,7 +5,7 @@ <application><link linkend="slon"> slon </link></application> processes running for both the provider and the new subscribing node. 
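<para> Since slonik and the slon daemons only need ordinary database connections, a rough way to confirm the any-to-any access described above is to run a trivial query against every other node's conninfo from each host; the host list, database, and user below are placeholders.</para>
<programlisting>
#!/bin/sh
# Run this on each node; every host listed should answer from everywhere.
for host in db1.example.com db2.example.com db3.example.com; do
    psql -h $host -d mydb -U slony -c 'SELECT 1;' || echo "cannot reach $host"
done
</programlisting>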
If you don't have slons running, nothing will happen, and you'll beat -your head against a wall trying to figure out what is going on. +your head against a wall trying to figure out what is going on.</para> <para>Subscribing a node to a set is done by issuing the <link linkend="slonik"> slonik </link> command <command> <link @@ -89,8 +89,7 @@ sgml-always-quote-attributes:t sgml-indent-step:1 sgml-indent-data:t -sgml-parent-document:nil -sgml-default-dtd-file:"./reference.ced" +sgml-parent-document:"book.sgml" sgml-exposed-tags:nil sgml-local-catalogs:("/usr/lib/sgml/catalog") sgml-local-ecat-files:nil Index: legal.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/legal.sgml,v retrieving revision 1.4 retrieving revision 1.5 diff -Ldoc/adminguide/legal.sgml -Ldoc/adminguide/legal.sgml -u -w -r1.4 -r1.5 --- doc/adminguide/legal.sgml +++ doc/adminguide/legal.sgml @@ -57,8 +57,7 @@ sgml-always-quote-attributes:t sgml-indent-step:1 sgml-indent-data:t -sgml-parent-document:nil -sgml-default-dtd-file:"./reference.ced" +sgml-parent-document:"book.sgml" sgml-exposed-tags:nil sgml-local-catalogs:("/usr/lib/sgml/catalog") sgml-local-ecat-files:nil Index: faq.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/faq.sgml,v retrieving revision 1.15 retrieving revision 1.16 diff -Ldoc/adminguide/faq.sgml -Ldoc/adminguide/faq.sgml -u -w -r1.15 -r1.16 --- doc/adminguide/faq.sgml +++ doc/adminguide/faq.sgml @@ -34,10 +34,9 @@ </question> <answer><para>On AIX and Solaris (and possibly elsewhere), both -<productname>Slony-I</productname> <emphasis>and -<productname>PostgreSQL</productname></emphasis> must be compiled with -the <option>--enable-thread-safety</option> option. The above results -when <productname>PostgreSQL</productname> isn't so compiled.</para> +&slony1; <emphasis>and &postgres;</emphasis> must be compiled with the +<option>--enable-thread-safety</option> option. The above results +when &postgres; isn't so compiled.</para> <para>What breaks here is that the libc (threadsafe) and libpq (non-threadsafe) use different memory locations for errno, thereby @@ -63,10 +62,9 @@ <question> <para>I tried creating a CLUSTER NAME with a "-" in it. That didn't work.</para></question> -<answer><para> <productname>Slony-I</productname> uses the same rules -for unquoted identifiers as the <productname>PostgreSQL</productname> -main parser, so no, you probably shouldn't put a "-" in your -identifier name.</para> +<answer><para> &slony1; uses the same rules for unquoted identifiers +as the &postgres; main parser, so no, you probably shouldn't put a "-" +in your identifier name.</para> <para> You may be able to defeat this by putting <quote>quotes</quote> around identifier names, but it's still liable to bite you some, so this is @@ -87,12 +85,14 @@ serving this node already</para></blockquote></para></question> <answer><para> The problem is that the system table -<envar>pg_catalog.pg_listener</envar>, used by <productname>PostgreSQL</productname> to -manage event notifications, contains some entries that are pointing to -backends that no longer exist. 
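<para> The <quote>trash</quote> removal described above amounts to a query along the following lines, run while the node's slon is stopped. The column names (<envar>listenerpid</envar>, <envar>procpid</envar>) are as recalled for 7.3/7.4-era catalogs, so verify them against your installation first.</para>
<programlisting>
#!/bin/sh
# With this node's slon stopped, drop pg_listener entries whose
# backend processes no longer exist.
psql -d mydb -U postgres <<_EOF_
DELETE FROM pg_catalog.pg_listener
 WHERE listenerpid NOT IN (SELECT procpid FROM pg_stat_activity);
_EOF_
</programlisting>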
The new <link linkend="slon"> -<application>slon</application></link> instance connects to the database, and is -convinced, by the presence of these entries, that an old -<application>slon</application> is still servicing this <productname>Slony-I</productname> node.</para> +<envar>pg_catalog.pg_listener</envar>, used by +<productname>PostgreSQL</productname> to manage event notifications, +contains some entries that are pointing to backends that no longer +exist. The new <link linkend="slon"> +<application>slon</application></link> instance connects to the +database, and is convinced, by the presence of these entries, that an +old <application>slon</application> is still servicing this &slony1; +node.</para> <para> The <quote>trash</quote> in that table needs to be thrown away.</para> @@ -145,25 +145,22 @@ <answer><para> Evidently, you haven't got the <filename>xxid.so</filename> library in the <envar>$libdir</envar> -directory that the <productname>PostgreSQL</productname> instance is -using. Note that the <productname>Slony-I</productname> components -need to be installed in the <productname>PostgreSQL</productname> +directory that the &postgres; instance is +using. Note that the &slony1; components +need to be installed in the &postgres; software installation for <emphasis>each and every one</emphasis> of the nodes, not just on the origin node.</para> <para>This may also point to there being some other mismatch between -the <productname>PostgreSQL</productname> binary instance and the -<productname>Slony-I</productname> instance. If you compiled -<productname>Slony-I</productname> yourself, on a machine that may -have multiple <productname>PostgreSQL</productname> builds -<quote>lying around,</quote> it's possible that the slon or slonik -binaries are asking to load something that isn't actually in the -library directory for the <productname>PostgreSQL</productname> -database cluster that it's hitting.</para> +the &postgres; binary instance and the &slony1; instance. If you +compiled &slony1; yourself, on a machine that may have multiple +&postgres; builds <quote>lying around,</quote> it's possible that the +slon or slonik binaries are asking to load something that isn't +actually in the library directory for the &postgres; database cluster +that it's hitting.</para> <para>Long and short: This points to a need to <quote>audit</quote> -what installations of <productname>PostgreSQL</productname> and -<productname>Slony-I</productname> you have in place on the +what installations of &postgres; and &slony1; you have in place on the machine(s). Unfortunately, just about any mismatch will cause things not to link up quite right. See also <link linkend="slonyfaq02"> SlonyFAQ02 </link> concerning threading issues on Solaris ...</para> @@ -197,20 +194,20 @@ <para>Oops. What I forgot to mention, as well, was that I was trying to add <emphasis>TWO</emphasis> subscribers, concurrently.</para></question> -<answer><para> That doesn't work out: <productname>Slony-I</productname> won't work on the +<answer><para> That doesn't work out: &slony1; won't work on the <command>COPY</command> commands concurrently. See <filename>src/slon/remote_worker.c</filename>, function <function>copy_set()</function></para> <para>This has the (perhaps unfortunate) implication that you cannot -populate two slaves concurrently. 
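<para> The sort of <quote>audit</quote> suggested above can begin with <application>pg_config</application>: check which library directory the PostgreSQL build in question uses, and whether the Slony-I objects are actually installed there. Exact file names vary between versions, so treat the pattern as illustrative.</para>
<programlisting>
#!/bin/sh
# Which library directory does this PostgreSQL build use?
pg_config --pkglibdir

# Are the Slony-I shared objects (xxid and friends) present there?
ls `pg_config --pkglibdir` | egrep 'xxid|slony'
</programlisting>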
You have to subscribe one to the -set, and only once it has completed setting up the subscription -(copying table contents and such) can the second subscriber start -setting up the subscription.</para> +populate two slaves concurrently from a single provider. You have to +subscribe one to the set, and only once it has completed setting up +the subscription (copying table contents and such) can the second +subscriber start setting up the subscription.</para> <para>It could also be possible for there to be an old outstanding -transaction blocking <productname>Slony-I</productname> from processing the sync. You might want -to take a look at pg_locks to see what's up: +transaction blocking &slony1; from processing the sync. You might +want to take a look at pg_locks to see what's up: <screen> sampledb=# select * from pg_locks where transaction is not null order by transaction; @@ -233,13 +230,12 @@ setting up the first subscriber; it won't start on the second one until the first one has completed subscribing.</para> -<para>By the way, if there is more than one database on the -<productname>PostgreSQL</productname> cluster, and activity is taking -place on the OTHER database, that will lead to there being -<quote>transactions earlier than XID whatever</quote> being found to -be still in progress. The fact that it's a separate database on the -cluster is irrelevant; <productname>Slony-I</productname> will wait -until those old transactions terminate.</para> +<para>By the way, if there is more than one database on the &postgres; +cluster, and activity is taking place on the OTHER database, that will +lead to there being <quote>transactions earlier than XID +whatever</quote> being found to be still in progress. The fact that +it's a separate database on the cluster is irrelevant; &slony1; will +wait until those old transactions terminate.</para> </answer> </qandaentry> @@ -291,16 +287,16 @@ delete from _slonyschema.sl_table where tab_id = 40; </programlisting></para> -<para>The schema will obviously depend on how you defined the -<productname>Slony-I</productname> cluster. The table ID, in this -case, 40, will need to change to the ID of the table you want to have -go away.</para> - -<para> You'll have to run these three queries on all of the nodes, preferably -firstly on the origin node, so that the dropping of this propagates -properly. Implementing this via a <link linkend="slonik"> slonik -</link> statement with a new <productname>Slony-I</productname> event -would do that. Submitting the three queries using <command> <link linkend="stmtddlscript"> EXECUTE SCRIPT </link> </command> could do +<para>The schema will obviously depend on how you defined the &slony1; +cluster. The table ID, in this case, 40, will need to change to the +ID of the table you want to have go away.</para> + +<para> You'll have to run these three queries on all of the nodes, +preferably firstly on the origin node, so that the dropping of this +propagates properly. Implementing this via a <link linkend="slonik"> +slonik </link> statement with a new &slony1; event would do that. +Submitting the three queries using <command> <link +linkend="stmtddlscript"> EXECUTE SCRIPT </link> </command> could do that. 
Also possible would be to connect to each database and submit the queries by hand.</para></listitem> </itemizedlist></para> </answer> @@ -346,13 +342,12 @@ may be applied by hand to each of the nodes.</para> <para>Similarly to <command> <link linkend="stmtsetdroptable"> SET -DROP TABLE </link> </command>, this is implemented -<productname>Slony-I</productname> version 1.0.5 as <command> <link -linkend="stmtsetdropsequence"> SET DROP +DROP TABLE </link> </command>, this is implemented &slony1; version +1.0.5 as <command> <link linkend="stmtsetdropsequence"> SET DROP SEQUENCE</link></command>.</para></answer></qandaentry> <qandaentry> -<question><para><productname>Slony-I</productname>: cannot add table to currently subscribed set 1</para> +<question><para>Slony-I: cannot add table to currently subscribed set 1</para> <para> I tried to add a table to a set, and got the following message: @@ -372,8 +367,8 @@ <qandaentry id="PGLISTENERFULL"> <question><para>Some nodes start consistently falling behind</para> -<para>I have been running <productname>Slony-I</productname> on a node -for a while, and am seeing system performance suffering.</para> +<para>I have been running &slony1; on a node for a while, and am +seeing system performance suffering.</para> <para>I'm seeing long running queries of the form: <screen> @@ -393,17 +388,16 @@ <para> Slon daemons already vacuum a bunch of tables, and <filename>cleanup_thread.c</filename> contains a list of tables that -are frequently vacuumed automatically. In -<productname>Slony-I</productname> 1.0.2, <envar>pg_listener</envar> -is not included. In 1.0.5 and later, it is regularly vacuumed, so -this should cease to be a direct issue.</para> +are frequently vacuumed automatically. In &slony1; 1.0.2, +<envar>pg_listener</envar> is not included. In 1.0.5 and later, it is +regularly vacuumed, so this should cease to be a direct issue.</para> <para>There is, however, still a scenario where this will still -"bite." Vacuums cannot delete tuples that were made "obsolete" at any -time after the start time of the eldest transaction that is still -open. Long running transactions will cause trouble, and should be -avoided, even on "slave" nodes.</para> -</answer></qandaentry> +<quote>bite.</quote> Under MVCC, vacuums cannot delete tuples that +were made <quote>obsolete</quote> at any time after the start time of +the eldest transaction that is still open. 
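<para> A hedged way to hunt for the long running transactions just mentioned is to look at <envar>pg_stat_activity</envar>, using the query start time as a rough proxy for transaction age. The column names shown are the ones from releases of this vintage, and <envar>current_query</envar> is only populated when command-string statistics are enabled.</para>
<programlisting>
#!/bin/sh
# The oldest entries here are the ones most likely to be holding back
# vacuuming of sl_log_1 and pg_listener.
psql -d mydb -U postgres <<_EOF_
SELECT procpid, usename, query_start, current_query
  FROM pg_stat_activity
 ORDER BY query_start
 LIMIT 10;
_EOF_
</programlisting>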
Long running transactions +will cause trouble, and should be avoided, even on subscriber +nodes.</para> </answer></qandaentry> <qandaentry> <question><para>I started doing a backup using <application>pg_dump</application>, and suddenly Slony @@ -414,13 +408,11 @@ <listitem><para> <application>pg_dump</application>, which has taken out an <command>AccessShareLock</command> on all of the tables in the -database, including the <productname>Slony-I</productname> ones, -and</para></listitem> +database, including the &slony1; ones, and</para></listitem> -<listitem><para> A <productname>Slony-I</productname> sync event, -which wants to grab a <command>AccessExclusiveLock</command> on the -table <envar>sl_event</envar>.</para></listitem> -</itemizedlist></para> +<listitem><para> A &slony1; sync event, which wants to grab a +<command>AccessExclusiveLock</command> on the table +<envar>sl_event</envar>.</para></listitem> </itemizedlist></para> <para>The initial query that will be blocked is thus: @@ -473,25 +465,24 @@ <answer><para> You might want to take a look at the sl_log_1/sl_log_2 tables, and do a summary to see if there are any really enormous -<productname>Slony-I</productname> transactions in there. Up until at -least 1.0.2, there needs to be a slon connected to the origin in order -for <command>SYNC</command> events to be generated.</para> +&slony1; transactions in there. Up until at least 1.0.2, there needs +to be a slon connected to the origin in order for +<command>SYNC</command> events to be generated.</para> <para>If none are being generated, then all of the updates until the -next one is generated will collect into one rather enormous -<productname>Slony-I</productname> transaction.</para> +next one is generated will collect into one rather enormous &slony1; +transaction.</para> <para>Conclusion: Even if there is not going to be a subscriber around, you <emphasis>really</emphasis> want to have a <application>slon</application> running to service the origin node.</para> -<para><productname>Slony-I</productname> 1.1 provides a stored -procedure that allows <command>SYNC</command> counts to be updated on -the origin based on a <application>cron</application> job even if -there is no <link linkend="slon"> <application>slon</application></link> daemon -running.</para> -</answer></qandaentry> +<para>&slony1; 1.1 provides a stored procedure that allows +<command>SYNC</command> counts to be updated on the origin based on a +<application>cron</application> job even if there is no <link +linkend="slon"> <application>slon</application></link> daemon +running.</para> </answer></qandaentry> <qandaentry> <question><para>I pointed a subscribing node to a different provider @@ -560,9 +551,9 @@ entries. 
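<para> To watch the pg_dump-versus-sync lock conflict described above as it happens, the lock table can be examined directly. This is only a sketch: it assumes you are connected to the database holding the Slony-I schema, and it uses the pg_locks layout of releases of this vintage.</para>
<programlisting>
#!/bin/sh
# Who currently holds, or is waiting for, locks on sl_event?
psql -d mydb -U postgres <<_EOF_
SELECT l.pid, l.mode, l.granted
  FROM pg_locks l, pg_class c
 WHERE c.oid = l.relation
   AND c.relname = 'sl_event';
_EOF_
</programlisting>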
If you are still tied to earlier versions, a Perl script, <link linkend="regenlisten"> <application>regenerate-listens.pl</application> </link>, provides a -way of querying a live <productname>Slony-I</productname> instance and -generating the <link linkend="slonik"> Slonik </link> commands to -generate the listen path network.</para></listitem> +way of querying a live &slony1; instance and generating the <link +linkend="slonik"> Slonik </link> commands to generate the listen path +network.</para></listitem> </itemizedlist></para> @@ -578,7 +569,7 @@ <answer><para> This is a common scenario in versions before 1.0.5, as the <quote>clean up</quote> that takes place when purging the node does not include purging out old entries from the -<productname>Slony-I</productname> table, sl_confirm, for the recently +&slony1; table, sl_confirm, for the recently departed node.</para> <para> The node is no longer around to update confirmations of what @@ -670,11 +661,11 @@ </screen></para> <para>The transaction rolls back, and -<productname>Slony-I</productname> tries again, and again, and again. +&slony1; tries again, and again, and again. The problem is with one of the <emphasis>last</emphasis> SQL statements, the one with <command>log_cmdtype = 'I'</command>. That isn't quite obvious; what takes place is that -<productname>Slony-I</productname> groups 10 update queries together +&slony1; groups 10 update queries together to diminish the number of network round trips.</para></question> <answer><para> A <emphasis>certain</emphasis> cause for this has not @@ -687,7 +678,7 @@ set (or even the node), and restart replication from scratch on that node.</para> -<para>In <productname>Slony-I</productname> 1.0.5, the handling of +<para>In &slony1; 1.0.5, the handling of purges of sl_log_1 became more conservative, refusing to purge entries that haven't been successfully synced for at least 10 minutes on all nodes. It was not certain that that will prevent the @@ -789,7 +780,7 @@ </qandaentry> <qandaentry><question><para> What happens with rules and triggers on -<productname>Slony-I</productname>-replicated tables?</para> +&slony1;-replicated tables?</para> </question> <answer><para> Firstly, let's look at how it is handled @@ -810,7 +801,7 @@ table.</para> <para> That trigger initiates the action of logging all updates to the -table to <productname>Slony-I</productname> <envar>sl_log</envar> +table to &slony1; <envar>sl_log</envar> tables.</para></listitem> <listitem><para> On a subscriber node, this involves disabling @@ -840,7 +831,7 @@ <command>STORE TRIGGER</command> </link> enters into things.</para> <para> Simply put, this command causes -<productname>Slony-I</productname> to restore the trigger using +&slony1; to restore the trigger using <function>alterTableRestore(table id)</function>, which restores the table's OID into the <envar>pg_trigger</envar> or <envar>pg_rewrite</envar> <envar>tgrelid</envar> column on the @@ -1135,7 +1126,7 @@ and load the data back in much faster than the <command>SUBSCRIBE SET</command> runs. Why is that? </para></question> -<answer><para> <productname>Slony-I</productname> depends on there +<answer><para> &slony1; depends on there being an already existant index on the primary key, and leaves all indexes alone whilst using the <productname>PostgreSQL</productname> <command>COPY</command> command to load the data. 
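<para> To see the trigger handling described above on a particular node, the system catalog can be queried directly; <command>public.mytable</command> is a placeholder for one of your replicated tables.</para>
<programlisting>
#!/bin/sh
# Which triggers are attached to this replicated table on this node?
psql -d mydb -U postgres <<_EOF_
SELECT tgname, tgenabled
  FROM pg_trigger
 WHERE tgrelid = 'public.mytable'::regclass;
_EOF_
</programlisting>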
Further hurting @@ -1179,7 +1170,7 @@ <answer><para> The action of <command><link linkend="stmtfailover">FAILOVER</link></command> is to <emphasis>abandon</emphasis> the failed node so that no more -<productname>Slony-I</productname> activity goes to or from that node. +&slony1; activity goes to or from that node. As soon as that takes place, the failed node will progressively fall further and further out of sync. </para></answer> @@ -1193,11 +1184,11 @@ </para></answer> <answer><para> As discusssed in the section on <link -linkend="failover"> Doing switchover and failover with Slony-I</link>, -using <command><link linkend="stmtfailover">FAILOVER</link></command> -should be considered a <emphasis>last resort</emphasis> as it implies -that you are abandoning the origin node as being corrupted. -</para></answer> +linkend="failover"> Doing switchover and failover with +&slony1;</link>, using <command><link +linkend="stmtfailover">FAILOVER</link></command> should be considered +a <emphasis>last resort</emphasis> as it implies that you are +abandoning the origin node as being corrupted. </para></answer> </qandaentry> </qandaset> @@ -1205,7 +1196,7 @@ <!-- Keep this comment at the end of the file Local variables: mode:sgml sgml-omittag:nil sgml-shorttag:t sgml-minimize-attributes:nil sgml-always-quote-attributes:t -sgml-indent-step:1 sgml-indent-data:t sgml-parent-document:slony.sgml -sgml-default-dtd-file:"./reference.ced" sgml-exposed-tags:nil +sgml-indent-step:1 sgml-indent-data:t sgml-parent-document:"book.sgml" +sgml-exposed-tags:nil sgml-local-catalogs:("/usr/lib/sgml/catalog") sgml-local-ecat-files:nil End: --> Index: usingslonik.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/usingslonik.sgml,v retrieving revision 1.2 retrieving revision 1.3 diff -Ldoc/adminguide/usingslonik.sgml -Ldoc/adminguide/usingslonik.sgml -u -w -r1.2 -r1.3 --- doc/adminguide/usingslonik.sgml +++ doc/adminguide/usingslonik.sgml @@ -2,18 +2,17 @@ <sect1 id="usingslonik"> <title>Using Slonik</title> <para> It's a bit of a pain writing <application>Slonik</application> -scripts by hand, particularly as you start working with -<productname>Slony-I</productname> clusters that may be comprised of -increasing numbers of nodes and sets. Some problems that have been -noticed include the following: +scripts by hand, particularly as you start working with &slony1; +clusters that may be comprised of increasing numbers of nodes and +sets. Some problems that have been noticed include the following: <itemizedlist> -<listitem><para> If you are using <productname>Slony-I</productname> -as a <quote>master/slave</quote> replication system with one +<listitem><para> If you are using &slony1; as a +<quote>master/slave</quote> replication system with one <quote>master</quote> node and one <quote>slave</quote> node, it may be sufficiently mnemonic to call the <quote>master</quote> node 1 and -the <quote>slave</quote> node 2. +the <quote>slave</quote> node 2.</para> <para> Unfortunately, as the number of nodes increases, the mapping of IDs to nodes becomes way less obvious, particularly if you have a @@ -28,15 +27,15 @@ <listitem><para> People have observed that <application>Slonik</application> does not provide any notion of iteration. 
It is common to want to create a set of similar <link -linkend="stmtstorepath"> <command>STORE PATH</command</link> entries, + linkend="stmtstorepath"> <command>STORE PATH</command></link> entries, since, in most cases, hosts will likely access a particular server via the same host name or IP address.</para></listitem> <listitem><para> Users seem interested in wrapping everything possible in <command>TRY</command> blocks, which is regrettably -<emphasis>less</emphasis> useful than might be imagined... +<emphasis>less</emphasis> useful than might be imagined...</para></listitem> -</itemizedlist> +</itemizedlist></para> <para> These have assortedly pointed to requests for such enhancements as: @@ -58,7 +57,7 @@ Slonik scripts; it is unattractive to force yet another one on people. </para></listitem> -</itemizedlist> +</itemizedlist></para> <para> There are several ways to work around these issues that have been seen <quote>in the wild</quote>: @@ -67,7 +66,7 @@ <listitem><para> Some sort of text rewriting system such as M4 may be used to map mnemonic object names onto the perhaps-less-intuitive -numeric arrangement. +numeric arrangement.</para></listitem> <listitem><para> Embedding generation of slonik inside shell scripts</para> @@ -96,7 +95,7 @@ `m4' applications even to solve simple problems, devoting more time debugging their `m4' scripts than doing real work. Beware that `m4' may be dangerous for the health of compulsive -programmers.</para></warning> +programmers.</para></warning></para> <para> This being said, <application>m4</application> has three significant merits over other text rewriting systems (such as @@ -106,10 +105,10 @@ <listitem><para> Like slonik, m4 uses <quote>#</quote> to indicate comments, with the result that it may be quietly used to do rewrites -on slonik scripts. +on slonik scripts.</para> <para> Using <application>cpp</application> would require changing -over to use C or C++ style comments. +over to use C or C++ style comments.</para></listitem> <listitem><para> <application> m4 </application> is reasonably ubiquitous, being available in environments like @@ -122,7 +121,7 @@ <listitem><para> A <emphasis>potential</emphasis> merit over <application>cpp</application> is that <application> m4</application> can do more than just rewrite symbols. It has control structures, can -store data in variables, and can loop. +store data in variables, and can loop.</para> <para> Of course, down that road lies the addictions warned of above, as well as the complexity challenges of @@ -132,8 +131,8 @@ that be Bourne Shell, Perl, or Tcl. 
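<para> As a sketch of the sort of iteration wished for above, done in the shell rather than in Slonik itself, a symmetric mesh of <command>STORE PATH</command> statements can be generated from a short node list; the node IDs, host names, and conninfo details are all invented.</para>
<programlisting>
#!/bin/sh
CLUSTER=my_cluster
NODES="1 2 3"

(
  echo "cluster name = $CLUSTER;"
  for n in $NODES; do
    echo "node $n admin conninfo = 'dbname=mydb host=db$n.example.com user=slony';"
  done
  for server in $NODES; do
    for client in $NODES; do
      if [ $server != $client ]; then
        echo "store path (server = $server, client = $client, conninfo = 'dbname=mydb host=db$server.example.com user=slony');"
      fi
    done
  done
) | slonik
</programlisting>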
Fans of more esoteric languages like Icon, Snobol, or Scheme will have to fight their own battles to get those deemed to be reasonable choices for <quote>best -practices.</quote> -</itemizedlist> +practices.</quote></para></listitem> +</itemizedlist></para> <sect2><title> An m4 example </title> @@ -146,7 +145,7 @@ define(`node_srvrds003', `3') define(`node_srvrds007', `78') define(`ds501', `501') -</programlisting> +</programlisting></para> <para> In view of those node name definitions, you may write a Slonik script to initialize the cluster as follows, <filename>setup_cluster.slonik</filename>: @@ -163,11 +162,11 @@ store node (id=node_srvrds003, comment='Node on ds003', spool='f'); store node (id=node_srvrds007, comment='Node on ds007', spool='t'); store node (id=ds501, comment='Node on ds-501', spool='f'); -</programlisting> +</programlisting></para> -<para> You then run the rewrite rules on the script, thus: +<para> You then run the rewrite rules on the script, thus:</para> <para> -<command> % m4 cluster.m4 setup_cluster.slonik </command> +<command> % m4 cluster.m4 setup_cluster.slonik </command></para> <para> And receive the following output: <programlisting> node 1 admin conninfo 'dsn=foo'; @@ -181,7 +180,7 @@ store node (id=3, comment='Node on ds003', spool='f'); store node (id=78, comment='Node on ds007', spool='t'); store node (id=501, comment='Node on ds-501', spool='f'); -</programlisting> +</programlisting></para> <para> This makes no attempt to do anything <quote>smarter</quote>, such as to try to create the nodes via a loop that maps across a list @@ -191,10 +190,9 @@ <sect1 id="slonikshell"><title> Embedding Slonik in Shell Scripts </title> -<para> As mentioned earlier, there are numerous -<productname>Slony-I</productname> test scripts in -<filename>src/ducttape</filename> that embed the generation of Slonik -inside the shell script. +<para> As mentioned earlier, there are numerous &slony1; test scripts +in <filename>src/ducttape</filename> that embed the generation of +Slonik inside the shell script.</para> <para> They mostly <emphasis> don't </emphasis> do this in a terribly sophisticated way. 
Typically, they use the following sort of @@ -234,7 +232,7 @@ exit 1; } _EOF_ -</programlisting> +</programlisting></para> <para> A more sophisticated approach might involve defining some common components, notably the <quote>preamble</quote> that consists @@ -250,7 +248,7 @@ node 1 admin conninfo = 'dbname=$DB1'; node 2 admin conninfo = 'dbname=$DB2'; " -</programlisting> +</programlisting></para> <para> The <envar>PREAMBLE</envar> value could then be reused over and over again if the shell script invokes <command>slonik</command> @@ -287,7 +285,7 @@ exit 1; } _EOF_ -</programlisting> +</programlisting></para> <para> The script might be further enhanced to loop through the list of tables as follows: @@ -323,23 +321,23 @@ exit 1; } _EOF_ -</programlisting> +</programlisting></para> <para> That is of somewhat dubious value if you only have 4 tables, but eliminating errors resulting from enumerating large lists of configuration by hand will make this pretty valuable for the larger -examples you'll find in <quote>real life.</quote> +examples you'll find in <quote>real life.</quote></para> <para> You can do even more sophisticated things than this if your scripting language supports things like: <itemizedlist> -<listitem><para> <quote>Record</quote> data structures that allow assigning things in parallel +<listitem><para> <quote>Record</quote> data structures that allow assigning things in parallel</para></listitem> <listitem><para> Functions, procedures, or subroutines, allowing you to implement -useful functionality once, and then refer to it multiple times within a script +useful functionality once, and then refer to it multiple times within a script</para></listitem> <listitem><para> Some sort of <quote>module import</quote> system so that common functionality -can be shared across many scripts -</itemizedlist> +can be shared across many scripts</para></listitem> +</itemizedlist></para> <para> If you can depend on having <ulink url="http://www.gnu.org/software/bash/bash.html"> Bash</ulink>, <ulink @@ -347,14 +345,24 @@ url="http://www.kornshell.com/"> Korn shell</ulink> available, well, those are all shells with extensions supporting reasonably sophisticated data structures and module systems. On Linux, Bash is -fairly ubiquitous; on commercial <trademark/UNIX/, Korn shell is +fairly ubiquitous; on commercial <trademark>UNIX</trademark>, Korn shell is fairly ubiquitous; on BSD, <quote>sophisticated</quote> shells are an option rather than a default.</para> <para> At that point, it makes sense to start looking at other scripting languages, of which Perl is the most ubiquitous, being -widely available on Linux, <trademark/UNIX/, and BSD. +widely available on Linux, <trademark>UNIX</trademark>, and BSD. + +<sect1 id="noslonik"><title> Not Using Slonik - Bare Metal &slony1; +Functions </title> +<para> There are cases where it may make sense to directly use the +stored functions that implement the various pieces of &slony1;. 
+Slonik doesn't do terribly much <quote/magic;/ it is common for Slonik +commands to simply involve deciding on a node at which to apply a +command, and then submit a SQL query consisting of a call to one of +the &slony1; stored functions.</para> +</sect1> <!-- Keep this comment at the end of the file Local variables: mode:sgml @@ -364,8 +372,7 @@ sgml-always-quote-attributes:t sgml-indent-step:1 sgml-indent-data:t -sgml-parent-document:nil -sgml-default-dtd-file:"./reference.ced" +sgml-parent-document:"book.sgml" sgml-exposed-tags:nil sgml-local-catalogs:("/usr/lib/sgml/catalog") sgml-local-ecat-files:nil Index: reshape.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/reshape.sgml,v retrieving revision 1.9 retrieving revision 1.10 diff -Ldoc/adminguide/reshape.sgml -Ldoc/adminguide/reshape.sgml -u -w -r1.9 -r1.10 --- doc/adminguide/reshape.sgml +++ doc/adminguide/reshape.sgml @@ -17,9 +17,8 @@ its data from a different provider, or to change it to turn forwarding on or off. This can be accomplished by issuing the slonik <command> <link linkend="stmtsubscribeset"> SUBSCRIBE SET</link> </command> -operation with the new subscription information for the node; -<productname>Slony-I</productname> will change the -configuration.</para></listitem> +operation with the new subscription information for the node; &slony1; +will change the configuration.</para></listitem> <listitem><para> If the directions of data flows have changed, it is doubtless appropriate to issue a set of <command><link @@ -56,8 +55,7 @@ sgml-always-quote-attributes:t sgml-indent-step:1 sgml-indent-data:t -sgml-parent-document:nil -sgml-default-dtd-file:"./reference.ced" +sgml-parent-document:"book.sgml" sgml-exposed-tags:nil sgml-local-catalogs:("/usr/lib/sgml/catalog") sgml-local-ecat-files:nil Index: startslons.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/startslons.sgml,v retrieving revision 1.7 retrieving revision 1.8 diff -Ldoc/adminguide/startslons.sgml -Ldoc/adminguide/startslons.sgml -u -w -r1.7 -r1.8 --- doc/adminguide/startslons.sgml +++ doc/adminguide/startslons.sgml @@ -1,15 +1,15 @@ <!-- $Id$ --> <sect1 id="slonstart"> <title>Slon daemons</title> -<para>The programs that actually perform <productname>Slony-I</productname> replication are the +<para>The programs that actually perform &slony1; replication are the <application>slon</application> daemons.</para> <para>You need to run one <application><link linkend="slon"> slon -</link></application> instance for each node in a -<productname>Slony-I</productname> cluster, whether you consider that node a -<quote>master</quote> or a <quote>slave</quote>. Since a <command>MOVE SET</command> or -<command>FAILOVER</command> can switch the roles of nodes, slon needs to be -able to function for both providers and subscribers. It is not +</link></application> instance for each node in a &slony1; cluster, +whether you consider that node a <quote>master</quote> or a +<quote>slave</quote>. Since a <command>MOVE SET</command> or +<command>FAILOVER</command> can switch the roles of nodes, slon needs +to be able to function for both providers and subscribers. 
It is not essential that these daemons run on any particular host, but there are some principles worth considering: @@ -17,13 +17,12 @@ <listitem><para> Each <application>slon</application> needs to be able to communicate quickly with the database whose <quote>node -controller</quote> it is. Therefore, if a -<productname>Slony-I</productname> cluster runs across some form of -Wide Area Network, each slon process should run on or nearby the -databases each is controlling. If you break this rule, no particular -disaster should ensue, but the added latency introduced to monitoring -events on the slon's <quote>own node</quote> will cause it to -replicate in a <emphasis>somewhat</emphasis> less timely +controller</quote> it is. Therefore, if a &slony1; cluster runs +across some form of Wide Area Network, each slon process should run on +or nearby the databases each is controlling. If you break this rule, +no particular disaster should ensue, but the added latency introduced +to monitoring events on the slon's <quote>own node</quote> will cause +it to replicate in a <emphasis>somewhat</emphasis> less timely manner.</para></listitem> <listitem><para> The very fastest results would be achieved by having @@ -82,8 +81,7 @@ sgml-always-quote-attributes:t sgml-indent-step:1 sgml-indent-data:t -sgml-parent-document:nil -sgml-default-dtd-file:"./reference.ced" +sgml-parent-document:"book.sgml" sgml-exposed-tags:nil sgml-local-catalogs:("/usr/lib/sgml/catalog") sgml-local-ecat-files:nil Index: addthings.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/addthings.sgml,v retrieving revision 1.8 retrieving revision 1.9 diff -Ldoc/adminguide/addthings.sgml -Ldoc/adminguide/addthings.sgml -u -w -r1.8 -r1.9 --- doc/adminguide/addthings.sgml +++ doc/adminguide/addthings.sgml @@ -30,7 +30,7 @@ linkend="stmtstorelisten">STORE LISTEN</link> statements to configuration the <quote>communications network</quote> that results from that. See the section on <link linkend="listenpaths"> Listen -Paths </link> for more details on the latter. +Paths </link> for more details on the latter.</para> <para>It is suggested that you be very deliberate when adding such things. 
For instance, submitting multiple subscription requests for a @@ -57,8 +57,7 @@ sgml-always-quote-attributes:t sgml-indent-step:1 sgml-indent-data:t -sgml-parent-document:nil -sgml-default-dtd-file:"./reference.ced" +sgml-parent-document:"book.sgml" sgml-exposed-tags:nil sgml-local-catalogs:("/usr/lib/sgml/catalog") sgml-local-ecat-files:nil Index: intro.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/intro.sgml,v retrieving revision 1.8 retrieving revision 1.9 diff -Ldoc/adminguide/intro.sgml -Ldoc/adminguide/intro.sgml -u -w -r1.8 -r1.9 --- doc/adminguide/intro.sgml +++ doc/adminguide/intro.sgml @@ -1,19 +1,19 @@ <!-- $Id$ --> <sect1 id="introduction"> -<title>Introduction to <productname>Slony-I</productname></title> +<title>Introduction to &slony1;</title> <sect2><title>Why yet another replication system?</title> -<para><productname>Slony-I</productname> was born from an idea to +<para>&slony1; was born from an idea to create a replication system that was not tied to a specific version of -PostgreSQL, which is allowed to be started and stopped on an existing +&postgres;, which is allowed to be started and stopped on an existing database with out the need for a dump/reload cycle.</para> </sect2> -<sect2> <title>What <productname>Slony-I</productname> is</title> +<sect2> <title>What &slony1; is</title> -<para><productname>Slony-I</productname> is a <quote>master to +<para>&slony1; is a <quote>master to multiple slaves</quote> replication system supporting cascading and slave promotion. The big picture for the development of -<productname>Slony-I</productname> is as a master-slave system that +&slony1; is as a master-slave system that includes all features and capabilities needed to replicate large databases to a reasonably limited number of slave systems. <quote>Reasonable,</quote> in this context, is probably no more than a @@ -24,26 +24,24 @@ </link> for a further analysis of costs associated with having many nodes.</para> -<para> <productname>Slony-I</productname> is a system intended for -data centers and backup sites, where the normal mode of operation is -that all nodes are available all the time, and where all nodes can be -secured. If you have nodes that are likely to regularly drop onto and -off of the network, or have nodes that cannot be kept secure, -<productname>Slony-I</productname> is probably not the ideal -replication solution for you.</para> - -<para> Thus, examples of cases where -<productname>Slony-I</productname> probably won't work out well would -include: +<para> &slony1; is a system intended for data centers and backup +sites, where the normal mode of operation is that all nodes are +available all the time, and where all nodes can be secured. If you +have nodes that are likely to regularly drop onto and off of the +network, or have nodes that cannot be kept secure, &slony1; is +probably not the ideal replication solution for you.</para> + +<para> Thus, examples of cases where &slony1; probably won't work out +well would include: <itemizedlist> <listitem><para> Sites where connectivity is really <quote>flakey</quote> </para></listitem> -<listitem><para> Replication to nodes that are unpredictably connected. +<listitem><para> Replication to nodes that are unpredictably connected.</para> <para> Replicating a pricing database from a central server to sales staff who connect periodically to grab updates. 
</para></listitem> -</itemizedlist> +</itemizedlist></para> <para> There are plans for a <quote>file-based log shipping</quote> extension where updates would be serialized into files. Given that, @@ -51,90 +49,84 @@ of feedback between the provider node and those nodes subscribing via <quote>log shipping.</quote></para> -<para> But <productname>Slony-I</productname>, by only having a single -origin for each set, is quite unsuitable for -<emphasis>really</emphasis> asynchronous multiway replication. For -those that could use some sort of <quote>asynchronous multimaster -replication with conflict resolution</quote> akin to what is provided -by <productname> Lotus <trademark>Notes</trademark></productname> or -the <quote>syncing</quote> protocols found on PalmOS systems, you will +<para> But &slony1;, by only having a single origin for each set, is +quite unsuitable for <emphasis>really</emphasis> asynchronous multiway +replication. For those that could use some sort of +<quote>asynchronous multimaster replication with conflict +resolution</quote> akin to what is provided by <productname>Lotus +<trademark>Notes</trademark></productname> or the +<quote>syncing</quote> protocols found on PalmOS systems, you will really need to look elsewhere. These sorts of replication models are not without merit, but they represent <emphasis>different</emphasis> -replication scenarios that <productname>Slony-I</productname> does not -attempt to address.</para> +replication scenarios that &slony1; does not attempt to +address.</para> </sect2> -<sect2><title> What <productname>Slony-I</productname> is not</title> +<sect2><title> What &slony1; is not</title> + +<para>&slony1; is not a network management system.</para> -<para><productname>Slony-I</productname> is not a network management -system.</para> +<para> &slony1; does not have any functionality within it to detect a +node failure, nor to automatically promote a node to a master or other +data origin. It is quite possible that you may need to do that; that +will require that you combine some network tools that evaluate +<emphasis> to your satisfaction </emphasis> which nodes you consider +<quote>live</quote> and which nodes you consider <quote>dead</quote> +along with some local policy to determine what to do under those +circumstances. &slony1; does not dictate any of that policy to +you.</para> -<para> <productname>Slony-I</productname> does not have any -functionality within it to detect a node failure, nor to automatically -promote a node to a master or other data origin. It is quite possible -that you may need to do that; that will require that you combine some -network tools that evaluate <emphasis> to your satisfaction -</emphasis> which nodes you consider <quote>live</quote> and which -nodes you consider <quote>dead</quote> along with some local policy to -determine what to do under those circumstances. -<productname>Slony-I</productname> does not dictate any of that policy -to you.</para> - -<para><productname>Slony-I</productname> is not multi-master; it is -not a connection broker, and it doesn't make you coffee and toast in -the morning.</para> +<para>&slony1; is not multi-master; it is not a connection broker, and +it doesn't make you coffee and toast in the morning.</para> <para>That being said, the plan is for a subsequent system, <productname>Slony-II</productname>, to provide <quote>multimaster</quote> capabilities. 
But that is a separate -project, and expectations for <productname>Slony-I</productname> -should not be based on hopes for future projects.</para></sect2> +project, and expectations for &slony1; should not be based on hopes +for future projects.</para></sect2> -<sect2><title> Why doesn't <productname>Slony-I</productname> do -automatic fail-over/promotion? +<sect2><title> Why doesn't &slony1; do automatic fail-over/promotion? </title> -<para>This is the job of network monitoring software, not Slony. +<para>This is the job of network monitoring software, not &slony1;. Every site's configuration and fail-over paths are different. For example, keep-alive monitoring with redundant NIC's and intelligent HA switches that guarantee race-condition-free takeover of a network address and disconnecting the <quote>failed</quote> node will vary based on network configuration, vendor choices, and the combinations of hardware and software in use. This is clearly the realm of network -management software and not <productname>Slony-I</productname>.</para> +management software and not &slony1;.</para> <para> Furthermore, choosing what to do based on the <quote>shape</quote> of the cluster represents local business policy. -If <productname>Slony-I</productname> imposed failover policy on you, +If &slony1; imposed failover policy on you, that might conflict with business requirements, thereby making -<productname>Slony-I</productname> an unacceptable choice.</para> +&slony1; an unacceptable choice.</para> -<para>As a result, let <productname>Slony-I</productname> do what it -does best: provide database replication.</para></sect2> +<para>As a result, let &slony1; do what it does best: provide database +replication.</para></sect2> <sect2><title> Current Limitations</title> -<para><productname>Slony-I</productname> does not automatically +<para>&slony1; does not automatically propagate schema changes, nor does it have any ability to replicate large objects. There is a single common reason for these limitations, -namely that <productname>Slony-I</productname> operates using +namely that &slony1; operates using triggers, and neither schema changes nor large object operations can -raise triggers suitable to tell <productname>Slony-I</productname> +raise triggers suitable to tell &slony1; when those kinds of changes take place.</para> -<para>There is a capability for <productname>Slony-I</productname> to -propagate DDL changes if you submit them as scripts via the -<application>slonik</application> <command> <link -linkend="stmtddlscript"> EXECUTE SCRIPT </link></command> operation. -That is not <quote>automatic;</quote> you have to construct an SQL DDL -script and submit it.</para> +<para>There is a capability for &slony1; to propagate DDL changes if +you submit them as scripts via the <application>slonik</application> +<command> <link linkend="stmtddlscript"> EXECUTE SCRIPT +</link></command> operation. That is not <quote>automatic;</quote> +you have to construct an SQL DDL script and submit it.</para> <para>If you have those sorts of requirements, it may be worth -exploring the use of <application>PostgreSQL</application> 8.0 -<acronym>PITR</acronym> (Point In Time Recovery), where -<acronym>WAL</acronym> logs are replicated to remote nodes. -Unfortunately, that has two attendant limitations: +exploring the use of &postgres; 8.0 <acronym>PITR</acronym> (Point In +Time Recovery), where <acronym>WAL</acronym> logs are replicated to +remote nodes. 
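<para> For those exploring the PITR alternative just mentioned, WAL shipping in PostgreSQL 8.0 is driven by the <envar>archive_command</envar> setting. The sketch below simply appends one to <filename>postgresql.conf</filename>; the standby host, directories, and choice of <application>rsync</application> are assumptions, and the postmaster needs a reload afterwards.</para>
<programlisting>
#!/bin/sh
# %p is the path of the completed WAL segment, %f its file name.
echo "archive_command = 'rsync -a %p standby.example.com:/var/lib/pgsql/walarchive/%f'" \
    >> /var/lib/pgsql/data/postgresql.conf
pg_ctl -D /var/lib/pgsql/data reload
</programlisting>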
Unfortunately, that has two attendant limitations: <itemizedlist> @@ -205,8 +197,7 @@ sgml-always-quote-attributes:t sgml-indent-step:1 sgml-indent-data:t -sgml-parent-document:slony.sgml -sgml-default-dtd-file:"./reference.ced" +sgml-parent-document:"book.sgml" sgml-exposed-tags:nil sgml-local-catalogs:("/usr/lib/sgml/catalog") sgml-local-ecat-files:nil Index: ddlchanges.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/ddlchanges.sgml,v retrieving revision 1.9 retrieving revision 1.10 diff -Ldoc/adminguide/ddlchanges.sgml -Ldoc/adminguide/ddlchanges.sgml -u -w -r1.9 -r1.10 --- doc/adminguide/ddlchanges.sgml +++ doc/adminguide/ddlchanges.sgml @@ -8,8 +8,7 @@ get rather deranged because they disagree on how particular tables are built.</para> -<para>If you pass the changes through -<productname>Slony-I</productname> via the <command><link +<para>If you pass the changes through &slony1; via the <command><link linkend="stmtddlscript">EXECUTE SCRIPT</link></command> (slonik) /<function>ddlscript(set,script,node)</function> (stored function), this allows you to be certain that the changes take effect at the same @@ -35,10 +34,10 @@ <listitem><para>The script <emphasis>must not</emphasis> contain transaction <command>BEGIN</command> or <command>END</command> statements, as the script is already executed inside a transaction. -In <productname>PostgreSQL</productname> version 8, the introduction -of nested transactions may modify this requirement somewhat, but you -must still remain aware that the actions in the script are wrapped -inside a transaction.</para></listitem> +In &postgres; version 8, the introduction of nested transactions may +modify this requirement somewhat, but you must still remain aware that +the actions in the script are wrapped inside a +transaction.</para></listitem> <listitem><para>If there is <emphasis>anything</emphasis> broken about the script, or about how it executes on a particular node, this will @@ -61,19 +60,18 @@ risk of there being updates made that depended on the DDL changes in order to be correct.</para></listitem> -<listitem><para> When you run <command><link -linkend="stmtddlscript">EXECUTE SCRIPT</link></command>, this causes +<listitem><para> When you run <command><link linkend="stmtddlscript">EXECUTE SCRIPT</link></command>, this causes the <application>slonik</application> to request, <emphasis>for each -table in the specified set</emphasis>, an exclusive table lock. +table in the specified set</emphasis>, an exclusive table lock.</para> <para> It starts by requesting the lock, and altering the table to -remove <productname>Slony-I</productname> triggers: +remove &slony1; triggers: <screen> BEGIN; LOCK TABLE table_name; SELECT _oxrsorg.altertablerestore(tab_id);--tab_id is _slony_schema.sl_table.tab_id -</screen> +</screen></para> <para> After the script executes, each table is <quote>restored</quote> to add back either the trigger that collects @@ -82,7 +80,7 @@ <screen> SELECT _oxrsorg.altertableforreplication(tab_id);--tab_id is _slony_schema.sl_table.tab_id COMMIT; -</screen> +</screen></para> <para> On a system which is busily taking updates, it may be troublesome to <quote>get in edgewise</quote> to actually successfully @@ -94,30 +92,29 @@ <listitem><para> You may be able to <link linkend="definesets"> define replication sets </link> that consist of smaller sets of tables so that fewer locks need to be taken in order for the DDL script to make -it into place. 
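<para> Stepping back to the overall <command>EXECUTE SCRIPT</command> workflow above, a hedged sketch of submitting a DDL script through slonik follows; the cluster name, conninfo strings, set ID, and file path are placeholders, and the exact option list should be checked against the reference page for your version.</para>
<programlisting>
#!/bin/sh
slonik <<_EOF_
cluster name = my_cluster;
node 1 admin conninfo = 'dbname=mydb host=db1.example.com user=slony';
node 2 admin conninfo = 'dbname=mydb host=db2.example.com user=slony';

execute script (
    set id = 1,
    filename = '/tmp/add_column.sql',
    event node = 1
);
_EOF_
</programlisting>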
+it into place.</para> <para> If a particular DDL script only affects one table, it should be -unnecessary to lock <emphasis>all</emphasis> application tables. +unnecessary to lock <emphasis>all</emphasis> application tables.</para></listitem> <listitem><para> You may need to take a brief application outage in order to ensure that your applications are not demanding locks that will conflict with the ones you need to take in order to update the -database schema. +database schema.</para></listitem> -</itemizedlist> +</itemizedlist></para></listitem> </itemizedlist> <para>Unfortunately, this nonetheless implies that the use of the DDL facility is somewhat fragile and fairly dangerous. Making DDL changes must not be done in a sloppy or cavalier manner. If your applications -do not have fairly stable SQL schemas, then using -<productname>Slony-I</productname> for replication is likely to be -fraught with trouble and frustration.</para> - -<para>There is an article on how to manage Slony-I schema changes -here: -<ulink url="http://www.varlena.com/varlena/GeneralBits/88.php"> +do not have fairly stable SQL schemas, then using &slony1; for +replication is likely to be fraught with trouble and +frustration.</para> + +<para>There is an article on how to manage &slony1; schema changes +here: <ulink url="http://www.varlena.com/varlena/GeneralBits/88.php"> Varlena General Bits</ulink></para> <sect2><title> Changes that you might <emphasis>not</emphasis> want to @@ -130,46 +127,45 @@ <itemizedlist> -<listitem><Para> There are various sorts of objects that don't have -triggers that <productname>Slony-I</productname> -<emphasis>doesn't</emphasis> replicate, such as stored functions, and -it is quite likely to cause you grief if you propagate updates to them -associated with a replication set where <command>EXECUTE -SCRIPT</command> will lock a whole lot of tables that didn't really -need to be locked. +<listitem><para> There are various sorts of objects that don't have +triggers that &slony1; <emphasis>doesn't</emphasis> replicate, such as +stored functions, and it is quite likely to cause you grief if you +propagate updates to them associated with a replication set where +<command>EXECUTE SCRIPT</command> will lock a whole lot of tables that +didn't really need to be locked.</para> <para> If you are propagating a stored procedure that <emphasis>isn't</emphasis> used all the time (such that you'd care if it was briefly out of sync between nodes), then you could simply submit it to each node using <application>psql</application>, making -no special use of <productname>Slony-I</productname> +no special use of &slony1;.</para> <para> If it <emphasis>does</emphasis> matter that the object be propagated at the same location in the transaction stream on all the nodes, then you but no tables need to be locked, then you might create a replication set that contains <emphasis>no</emphasis> tables, subscribe all the appropriate nodes to it, and use <command>EXECUTE -SCRIPT</command>, specifying that <quote>empty set.</quote> +SCRIPT</command>, specifying that <quote>empty set.</quote></para></listitem> -<listitem><Para> You may want an extra index on some replicated -node(s) in order to improve performance there. 
+<listitem><para> You may want an extra index on some replicated +node(s) in order to improve performance there.</para> <para> For instance, a table consisting of transactions may only need indices related to referential integrity on the <quote>origin</quote> node, and maximizing performance there dictates adding no more indices than are absolutely needed. But nothing prevents you from adding additional indices to improve the performance of reports that run -against replicated nodes. +against replicated nodes.</para> <para> It would be unwise to add additional indices that <emphasis>constrain</emphasis> things on replicated nodes, as if they find problems, this leads to replication breaking down as the subscriber(s) will be unable to apply changes coming from the origin -that violate the constraints. +that violate the constraints.</para> <para> But it's no big deal to add some performance-enhancing indices. You should almost certainly <emphasis> not</emphasis> use -<command>EXECUTE SCRIPT</emphasis> to add them; that leads to some +<command>EXECUTE SCRIPT</command> to add them; that leads to some replication set locking and unlocking tables, and possibly failing to apply the event due to some locks outstanding on objects and having to retry a few times before it gets the change in. If you instead apply @@ -180,27 +176,27 @@ implicitly stop replication, while the index builds, but shouldn't cause any particular problems. If you add an index on a table that takes 20 minutes to build, replication will block for 20 minutes, but -should catch up quickly once the index is created. +should catch up quickly once the index is created.</para></listitem> -</itemizedlist> +</itemizedlist></para></sect2> <sect2><title> Testing DDL Changes </title> <para> A method for testing DDL changes has been pointed out as a -likely <quote>best practice.</quote> +likely <quote>best practice.</quote></para> <para> You <emphasis>need</emphasis> to test DDL scripts in a -non-destructive manner. +non-destructive manner.</para> <para> The problem is that if nodes are, for whatever reason, at all out of sync, replication is likely to fall over, and this takes place at what is quite likely one of the most inconvenient times, namely the -moment when you wanted it to <emphasis> work. </emphasis> +moment when you wanted it to <emphasis> work. </emphasis></para> <para> You may indeed check to see if schema scripts work well or badly, by running them by hand, against each node, adding <command> BEGIN; </command> at the beginning, and <command> ROLLBACK; </command> -at the end, so that the would-be changes roll back. +at the end, so that the would-be changes roll back.</para> <para> If this script works OK on all of the nodes, that suggests that it should work fine everywhere if executed via Slonik. If problems @@ -211,7 +207,7 @@ <warning> <para> If the SQL script contains a <command> COMMIT; </command> somewhere before the <command> ROLLBACK; </command>, that may allow changes to go in unexpectedly. 
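<para> A sketch of that non-destructive test, run against every node in turn, might look like the following; the node list, database, and user are invented, and it assumes the script contains no <command>COMMIT</command> of its own.</para>
<programlisting>
#!/bin/sh
SCRIPT=/tmp/add_column.sql

for host in db1.example.com db2.example.com db3.example.com; do
    echo "=== testing $SCRIPT against $host ==="
    ( echo "BEGIN;"; cat $SCRIPT; echo "ROLLBACK;" ) | psql -h $host -d mydb -U slony
done
</programlisting>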
</para> -</warning> +</warning></para> </sect2> </sect1> <!-- Keep this comment at the end of the file @@ -223,8 +219,7 @@ sgml-always-quote-attributes:t sgml-indent-step:1 sgml-indent-data:t -sgml-parent-document:nil -sgml-default-dtd-file:"./reference.ced" +sgml-parent-document:"book.sgml" sgml-exposed-tags:nil sgml-local-catalogs:("/usr/lib/sgml/catalog") sgml-local-ecat-files:nil Index: failover.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/failover.sgml,v retrieving revision 1.9 retrieving revision 1.10 diff -Ldoc/adminguide/failover.sgml -Ldoc/adminguide/failover.sgml -u -w -r1.9 -r1.10 --- doc/adminguide/failover.sgml +++ doc/adminguide/failover.sgml @@ -1,32 +1,31 @@ <!-- $Id$ --> <sect1 id="failover"> -<title>Doing switchover and failover with Slony-I</title> +<title>Doing switchover and failover with &slony1;</title> <sect2><title>Foreword</title> -<para> <productname>Slony-I</productname> is an asynchronous -replication system. Because of that, it is almost certain that at the -moment the current origin of a set fails, the final transactions -committed at the origin will have not yet propagated to the -subscribers. Systems are particularly likely to fail under heavy -load; that is one of the corollaries of Murphy's Law. Therefore the -principal goal is to <emphasis>prevent</emphasis> the main server from -failing. The best way to do that is frequent maintenance.</para> +<para>&slony1; is an asynchronous replication system. Because of +that, it is almost certain that at the moment the current origin of a +set fails, the final transactions committed at the origin will have +not yet propagated to the subscribers. Systems are particularly +likely to fail under heavy load; that is one of the corollaries of +Murphy's Law. Therefore the principal goal is to +<emphasis>prevent</emphasis> the main server from failing. The best +way to do that is frequent maintenance.</para> <para> Opening the case of a running server is not exactly what we should consider a <quote>professional</quote> way to do system maintenance. And interestingly, those users who found it valuable to use replication for backup and failover purposes are the very ones that have the lowest tolerance for terms like <quote>system -downtime.</quote> To help support these requirements, -<productname>Slony-I</productname> not only offers failover -capabilities, but also the notion of controlled origin -transfer.</para> +downtime.</quote> To help support these requirements, &slony1; not +only offers failover capabilities, but also the notion of controlled +origin transfer.</para> <para> It is assumed in this document that the reader is familiar with the <link linkend="slonik"> <application>slonik</application> </link> utility and knows at least how to set up a simple 2 node replication -system with <productname>Slony-I</productname>.</para></sect2> +system with &slony1;.</para></sect2> <sect2><title> Controlled Switchover</title> @@ -98,13 +97,12 @@ where it can limp along long enough to do a controlled switchover, that is <emphasis>greatly</emphasis> preferable.</para> -<para> <productname>Slony-I</productname> does not provide any -automatic detection for failed systems. Abandoning committed -transactions is a business decision that cannot be made by a database -system. If someone wants to put the commands below into a script -executed automatically from the network monitoring system, well -... 
it's <emphasis>your</emphasis> data, and it's -<emphasis>your</emphasis> failover policy. </para> +<para> &slony1; does not provide any automatic detection for failed +systems. Abandoning committed transactions is a business decision +that cannot be made by a database system. If someone wants to put the +commands below into a script executed automatically from the network +monitoring system, well ... it's <emphasis>your</emphasis> data, and +it's <emphasis>your</emphasis> failover policy. </para> <itemizedlist> @@ -117,8 +115,8 @@ <para> causes node2 to assume the ownership (origin) of all sets that have node1 as their current origin. If there should happen to be -additional nodes in the <productname>Slony-I</productname> cluster, -all direct subscribers of node1 are instructed that this is happening. +additional nodes in the &slony1; cluster, all direct subscribers of +node1 are instructed that this is happening. <application>Slonik</application> will also query all direct subscribers in order to determine out which node has the highest replication status (<emphasis>e.g.</emphasis> - the latest committed @@ -172,9 +170,9 @@ to lose that information.</para> <para> If the database is very large, it may take many hours to -recover node1 as a functioning <productname>Slony-I</productname> -node; that is another reason to consider failover as an undesirable -<quote>final resort.</quote></para> +recover node1 as a functioning &slony1; node; that is another reason +to consider failover as an undesirable <quote>final +resort.</quote></para> </sect2> @@ -188,8 +186,7 @@ sgml-always-quote-attributes:t sgml-indent-step:1 sgml-indent-data:t -sgml-parent-document:nil -sgml-default-dtd-file:"./reference.ced" +sgml-parent-document:"book.sgml" sgml-exposed-tags:nil sgml-local-catalogs:("/usr/lib/sgml/catalog") sgml-local-ecat-files:nil Index: monitoring.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/monitoring.sgml,v retrieving revision 1.10 retrieving revision 1.11 diff -Ldoc/adminguide/monitoring.sgml -Ldoc/adminguide/monitoring.sgml -u -w -r1.10 -r1.11 --- doc/adminguide/monitoring.sgml +++ doc/adminguide/monitoring.sgml @@ -2,8 +2,8 @@ <sect1 id="monitoring"> <title>Monitoring</title> -<para>Here are some of things that you may find in your -<productname>Slony-I</productname> logs, and explanations of what they mean.</para> +<para>Here are some of things that you may find in your &slony1; logs, +and explanations of what they mean.</para> <sect2><title>CONFIG notices</title> @@ -83,38 +83,37 @@ <itemizedlist> <listitem><para> Checking <envar> sl_listen </envar> for some <quote>analytically determinable</quote> problems. It lists paths -that are not covered. +that are not covered.</para></listitem> -<listitem><para> Providing a summary of events by origin node +<listitem><para> Providing a summary of events by origin node</para> -<para> If a node hasn't submitted any events in a while, that likely suggests a problem. +<para> If a node hasn't submitted any events in a while, that likely suggests a problem.</para></listitem> -<listitem><para> Summarizes the <quote>aging</quote> of table <envar>sl_confirm</envar> +<listitem><para> Summarizes the <quote>aging</quote> of table <envar>sl_confirm</envar></para> <para> If one or another of the nodes in the cluster hasn't reported back recently, that tends to lead to cleanups of tables like <envar> -sl_log_1 </envar> and <envar> sl_seqlog </envar> not taking place. 
+sl_log_1 </envar> and <envar> sl_seqlog </envar> not taking place.</para></listitem> <listitem><para> Summarizes what transactions have been running for a -long time +long time</para> <para> This only works properly if the statistics collector is configured to collect command strings, as controlled by the option <option> stats_command_string = true </option> in <filename> -postgresql.conf </filename>. +postgresql.conf </filename>.</para> <para> If you have broken applications that hold connections open, -this will find them. +this will find them.</para> <para> If you have broken applications that hold connections open, -that has several unsalutory effects as <link -linkend="longtxnsareevil"> described in the FAQ</link>. +that has several unsalutory effects as <link linkend="longtxnsareevil"> described in the FAQ</link>.</para></listitem> -</itemizedlist> +</itemizedlist></para> <para> The script does not yet do much in the way of diagnosis work; it should be enhanced to be able to, based on some parameterization, -notify someone of those problems it encounters. +notify someone of those problems it encounters.</para> </sect2> @@ -128,8 +127,7 @@ sgml-always-quote-attributes:t sgml-indent-step:1 sgml-indent-data:t -sgml-parent-document:nil -sgml-default-dtd-file:"./reference.ced" +sgml-parent-document:"book.sgml" sgml-exposed-tags:nil sgml-local-catalogs:("/usr/lib/sgml/catalog") sgml-local-ecat-files:nil Index: installation.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/installation.sgml,v retrieving revision 1.8 retrieving revision 1.9 diff -Ldoc/adminguide/installation.sgml -Ldoc/adminguide/installation.sgml -u -w -r1.8 -r1.9 --- doc/adminguide/installation.sgml +++ doc/adminguide/installation.sgml @@ -1,10 +1,9 @@ <!-- $Id$ --> -<article id="installation"> -<title>Slony-I Installation</title> -<sect1> -<title>Slony-I Installation</title> -<para>You should have obtained the <productname>Slony-I</productname> source from -the previous step. Unpack it.</para> +<sect1 id="installation"> +<title>&slony1; Installation</title> + +<para>You should have obtained the &slony1; source from the previous +step. Unpack it.</para> <screen> gunzip slony.tar.gz; @@ -12,7 +11,7 @@ </screen> <para> This will create a directory under the current directory with -the <productname>Slony-I</productname> sources. Head into that that directory for the rest of +the &slony1; sources. Head into that that directory for the rest of the installation procedure.</para> <sect2> @@ -23,7 +22,7 @@ ./configure --with-pgsourcetree=/whereever/the/source/is gmake all; gmake install </screen> -</para> +</para></sect2> <sect2> <title>Configuration</title> @@ -32,11 +31,11 @@ source tree for your system. This is done by running the <application>configure</application> script. In early versions, <application>configure</application> needed to know where your -<productname>PostgreSQL</productname> source tree is, is done with the +&postgres; source tree is, is done with the <option>--with-pgsourcetree=</option> option. 
As of version 1.1, -<productname>Slony-I</productname> is configured by pointing it to the various -library, binary, and include directories; for a full list of these -options, use the command <command>./configure --help</command> +&slony1; is configured by pointing it to the various library, binary, +and include directories; for a full list of these options, use the +command <command>./configure --help</command> </para> </sect2> @@ -49,11 +48,12 @@ <para>This script will run a number of tests to guess values for various dependent variables and try to detect some quirks of your -system. <productname>Slony-I</productname> is known to need a modified version of libpq on +system. &slony1; is known to need a modified version of libpq on specific platforms such as Solaris2.X on SPARC this patch can be found at <ulink url="http://developer.postgresql.org/~wieck/slony1/download/threadsafe-libpq-742.diff.gz"> -http://developer.postgresql.org/~wieck/slony1/download/threadsafe-libpq-742.diff.gz</ulink></para> +http://developer.postgresql.org/~wieck/slony1/download/threadsafe-libpq-742.diff.gz +</ulink></para> </sect2> @@ -78,9 +78,9 @@ </sect2> <sect2> -<title> Installing Slony-I</title> +<title> Installing &slony1;</title> -<para> To install Slony-I, enter +<para> To install &slony1;, enter <command> gmake install @@ -88,7 +88,7 @@ <para>This will install files into postgresql install directory as specified by the <option>--prefix</option> option used in the -PostgreSQL configuration. Make sure you have appropriate permissions +&postgres; configuration. Make sure you have appropriate permissions to write into that area. Normally you need to do this either as root or as the postgres user. </para> </sect2> @@ -103,8 +103,7 @@ sgml-always-quote-attributes:t sgml-indent-step:1 sgml-indent-data:t -sgml-parent-document:slony.sgml -sgml-default-dtd-file:"./reference.ced" +sgml-parent-document:"book.sgml" sgml-exposed-tags:nil sgml-local-catalogs:("/usr/lib/sgml/catalog") sgml-local-ecat-files:nil Index: slonik.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/slonik.sgml,v retrieving revision 1.9 retrieving revision 1.10 diff -Ldoc/adminguide/slonik.sgml -Ldoc/adminguide/slonik.sgml -u -w -r1.9 -r1.10 --- doc/adminguide/slonik.sgml +++ doc/adminguide/slonik.sgml @@ -89,7 +89,7 @@ sgml-always-quote-attributes:t sgml-indent-step:1 sgml-indent-data:t -sgml-parent-document:"slony.sgml" +sgml-parent-document:"book.sgml" sgml-exposed-tags:nil sgml-local-catalogs:"/usr/lib/sgml/catalog" sgml-local-ecat-files:nil Index: dropthings.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/dropthings.sgml,v retrieving revision 1.9 retrieving revision 1.10 diff -Ldoc/adminguide/dropthings.sgml -Ldoc/adminguide/dropthings.sgml -u -w -r1.9 -r1.10 --- doc/adminguide/dropthings.sgml +++ doc/adminguide/dropthings.sgml @@ -1,8 +1,8 @@ <!-- $Id$ --> -<sect1 id="dropthings"> <title>Dropping things from Slony Replication</title> +<sect1 id="dropthings"> <title>Dropping things from &slony1; Replication</title> <para>There are several things you might want to do involving dropping -things from <productname>Slony-I</productname> replication.</para> +things from &slony1; replication.</para> <sect2><title>Dropping A Whole Node</title> @@ -11,12 +11,11 @@ linkend="stmtdropnode">DROP NODE</link></command> should do the trick.</para> -<para>This will lead to 
<productname>Slony-I</productname> dropping -the triggers (generally that deny the ability to update data), -restoring the <quote>native</quote> triggers, dropping the schema used -by <productname>Slony-I</productname>, and the <link linkend="slon"> -<command>slon</command> </link> process for that node terminating -itself.</para> +<para>This will lead to &slony1; dropping the triggers (generally that +deny the ability to update data), restoring the <quote>native</quote> +triggers, dropping the schema used by &slony1;, and the <link + linkend="slon"> <command>slon</command> </link> process for that node +terminating itself.</para> <para>As a result, the database should be available for whatever use your application makes of the database.</para> @@ -28,9 +27,9 @@ the node that you attempt to drop, so there is a bit of a failsafe to protect you from errors.</para> -<para><link linkend="FAQ17">sl_log_1 isn't getting purged</link> +<para><link linkend="faq17">sl_log_1 isn't getting purged</link> documents some extra maintenance that may need to be done on -sl_confirm if you are running versions prior to 1.0.5.</para> +sl_confirm if you are running versions prior to 1.0.5.</para></sect2> <sect2><title>Dropping An Entire Set</title> @@ -40,12 +39,11 @@ use.</para> <para>Much as with <command><link linkend="stmtdropnode">DROP NODE -</link></command>, this leads to <productname>Slony-I</productname> -dropping the <productname>Slony-I</productname> triggers on the tables -and restoring <quote>native</quote> triggers. One difference is that -this takes place on <emphasis>all</emphasis> nodes in the cluster, -rather than on just one node. Another difference is that this does -not clear out the <productname>Slony-I</productname> cluster's +</link></command>, this leads to &slony1; dropping the &slony1; +triggers on the tables and restoring <quote>native</quote> triggers. +One difference is that this takes place on <emphasis>all</emphasis> +nodes in the cluster, rather than on just one node. Another +difference is that this does not clear out the &slony1; cluster's namespace, as there might be other sets being serviced.</para> <para>This operation is quite a bit more dangerous than <command> @@ -56,14 +54,16 @@ isn't anything to prevent potentially career-limiting <quote>unfortunate results.</quote> Handle with care...</para> </sect2> + <sect2><title>Unsubscribing One Node From One Set</title> <para>The <command><link linkend="stmtunsubscribeset">UNSUBSCRIBE SET</link></command> operation is a little less invasive than either <command><link linkend="stmtdropset">DROP SET</link></command> or -<command><link linkend="stmtdropnode">DROP NODE</link></command>; it involves dropping -<productname>Slony-I</productname> triggers and restoring -<quote>native</quote> triggers on one node, for one replication set.</para> +<command><link linkend="stmtdropnode">DROP NODE</link></command>; it +involves dropping &slony1; triggers and restoring +<quote>native</quote> triggers on one node, for one replication +set.</para> <para>Much like with <command><link linkend="stmtdropnode">DROP NODE</link></command>, this operation will fail if there is a node subscribing to the set on this node. @@ -73,16 +73,15 @@ on</quote> will require that the node copy in a <emphasis>full</emphasis> fresh set of the data on a provider. 
The fact that the data was recently being replicated isn't good enough; -<productname>Slony-I</productname> will expect to refresh the data -from scratch.</para> +&slony1; will expect to refresh the data from scratch.</para> </warning> </para> </sect2> <sect2><title> Dropping A Table From A Set</title> -<para>In <productname>Slony-I</productname> 1.0.5 and above, there is -a Slonik command <command><link linkend="stmtsetdroptable">SET DROP +<para>In &slony1; 1.0.5 and above, there is a Slonik command +<command><link linkend="stmtsetdroptable">SET DROP TABLE</link></command> that allows dropping a single table from replication without forcing the user to drop the entire replication set.</para> @@ -100,20 +99,18 @@ </programlisting> </para> -<para>The schema will obviously depend on how you defined the -<productname>Slony-I</productname> cluster. The table ID, in this -case, 40, will need to change to the ID of the table you want to have -go away.</para> +<para>The schema will obviously depend on how you defined the &slony1; +cluster. The table ID, in this case, 40, will need to change to the +ID of the table you want to have go away.</para> <para>You'll have to run these three queries on all of the nodes, preferably firstly on the origin node, so that the dropping of this -propagates properly. Implementing this via a <link linkend="slonik">slonik</link> -statement with a new <productname>Slony-I</productname> -event would do that. Submitting the three queries using -<command><link linkend="stmtddlscript">EXECUTE SCRIPT</link></command> could do that; -see <link linkend="ddlchanges">Database Schema Changes</link> for more details. -Also possible would be to connect to each database and submit the queries by -hand.</para> +propagates properly. Implementing this via a <link linkend="slonik">slonik</link> statement with a new &slony1; event +would do that. Submitting the three queries using <command><link + linkend="stmtddlscript">EXECUTE SCRIPT</link></command> could do that; +see <link linkend="ddlchanges">Database Schema Changes</link> for more +details. 
Also possible would be to connect to each database and +submit the queries by hand.</para> </sect2> <sect2><title>Dropping A Sequence From A Set</title> @@ -126,9 +123,9 @@ <para>If you are running an earlier version, here are instructions as to how to drop sequences:</para> -<para>The data that needs to be deleted to stop <productname>Slony-I</productname> -from continuing to replicate the two sequences identified with -Sequence IDs 93 and 59 are thus: +<para>The data that needs to be deleted to stop &slony1; from +continuing to replicate the two sequences identified with Sequence IDs +93 and 59 are thus: <programlisting> delete from _oxrsorg.sl_seqlog where seql_seqid in (93, 59); @@ -152,8 +149,7 @@ sgml-always-quote-attributes:t sgml-indent-step:1 sgml-indent-data:t -sgml-parent-document:nil -sgml-default-dtd-file:"./reference.ced" +sgml-parent-document:"book.sgml" sgml-exposed-tags:nil sgml-local-catalogs:("/usr/lib/sgml/catalog") sgml-local-ecat-files:nil Index: listenpaths.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/listenpaths.sgml,v retrieving revision 1.10 retrieving revision 1.11 diff -Ldoc/adminguide/listenpaths.sgml -Ldoc/adminguide/listenpaths.sgml -u -w -r1.10 -r1.11 --- doc/adminguide/listenpaths.sgml +++ doc/adminguide/listenpaths.sgml @@ -1,7 +1,7 @@ <!-- $Id$ --> -<sect1 id="listenpaths"><title><productname>Slony-I</productname> listen paths</title> +<sect1 id="listenpaths"><title>&slony1; listen paths</title> <note><para> If you are running version -<productname>Slony-I</productname> 1.1 or later it should be +&slony1; 1.1 or later it should be <emphasis>completely unnecessary</emphasis> to read this section as it introduces a way to automatically manage this part of its configuration. For earlier versions, however, it is needful.</para> @@ -33,17 +33,17 @@ <para>On one occasion, I had a need to drop a subscriber node (#2) and recreate it. That node was the data provider for another subscriber (#3) that was, in effect, a <quote>cascaded slave.</quote> Dropping -the subscriber node initially didn't work, as <link -linkend="slonik"><command>slonik</command></link> informed me that +the subscriber node initially didn't work, as <link linkend="slonik"><command>slonik</command></link> informed me that there was a dependant node. I repointed the dependant node to the <quote>master</quote> node for the subscription set, which, for a while, replicated without difficulties.</para> -<para>I then dropped the subscription on <quote>node 2</quote>, and started -resubscribing it. that raised the <productname>Slony-I</productname> -<command>set_subscription</command> event, which started copying tables. at -that point in time, events stopped propagating to <quote>node 3</quote>, and -while it was in perfectly ok shape, no events were making it to it.</para> +<para>I then dropped the subscription on <quote>node 2</quote>, and +started resubscribing it. that raised the &slony1; +<command>set_subscription</command> event, which started copying +tables. 
at that point in time, events stopped propagating to +<quote>node 3</quote>, and while it was in perfectly ok shape, no +events were making it to it.</para> <para>The problem was that node #3 was expecting to receive events from node #2, which was busy processing the <command>set_subscription</command> @@ -129,7 +129,7 @@ <para>The tool <filename>init_cluster.pl</filename> in the <filename>altperl</filename> scripts produces optimized listener networks in both the tabular form shown above as well as in the form -of <link linkend="Slonik"> <application>slonik</application></link> +of <link linkend="slonik"> <application>slonik</application></link> statements.</para> <para>There are three <quote>thorns</quote> in this set of roses: @@ -139,9 +139,8 @@ <listitem><para> If you change the shape of the node set, so that the nodes subscribe differently to things, you need to drop sl_listen entries and create new ones to indicate the new preferred paths -between nodes. Until <productname>Slony-I</productname>, there is no -automated way at this point to do this -<quote>reshaping</quote>.</para></listitem> +between nodes. Until &slony1;, there is no automated way at this +point to do this <quote>reshaping</quote>.</para></listitem> <listitem><para> If you <emphasis>don't</emphasis> change the sl_listen entries, events will likely continue to propagate so long as @@ -164,15 +163,15 @@ <quote>firewall</quote> nodes that can talk between the subnets. cut out those nodes and the subnets stop communicating.</para></listitem> -</itemizedlist> +</itemizedlist></para> </sect2> <sect2 id="autolisten"><title>Automated Listen Path Generation</title> -<para> In <productname>Slony-I</productname> version 1.1, a heuristic -scheme is introduced to automatically generate listener entries. This -happens, in order, based on three data sources: +<para> In &slony1; version 1.1, a heuristic scheme is introduced to +automatically generate listener entries. 
This happens, in order, +based on three data sources: <itemizedlist> @@ -197,12 +196,12 @@ <function>RebuildListenEntries()</function> will be called to revise the listener paths.</para> -<para> If you are running an earlier version of -<productname>Slony-I</productname>, you may want to take a look at -<link linkend="regenlisten"><application>regenerate-listens.pl</application></link>, a Perl -script which duplicates the functionality of the stored procedure in -the form of a script that generates the <link linkend="slonik"><command>slonik</command> -</link> requests to generate the listener paths.</para></sect2> +<para> If you are running an earlier version of &slony1;, you may want +to take a look at <link linkend="regenlisten"><application>regenerate-listens.pl</application></link>, +a Perl script which duplicates the functionality of the stored +procedure in the form of a script that generates the <link + linkend="slonik"><command>slonik</command> </link> requests to +generate the listener paths.</para></sect2> </sect1> <!-- Keep this comment at the end of the file @@ -214,8 +213,7 @@ sgml-always-quote-attributes:t sgml-indent-step:1 sgml-indent-data:t -sgml-parent-document:nil -sgml-default-dtd-file:"./reference.ced" +sgml-parent-document:"book.sgml" sgml-exposed-tags:nil sgml-local-catalogs:("/usr/lib/sgml/catalog") sgml-local-ecat-files:nil Index: slony.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/slony.sgml,v retrieving revision 1.12 retrieving revision 1.13 diff -Ldoc/adminguide/slony.sgml -Ldoc/adminguide/slony.sgml -u -w -r1.12 -r1.13 --- doc/adminguide/slony.sgml +++ doc/adminguide/slony.sgml @@ -6,12 +6,14 @@ <!entity % filelist SYSTEM "filelist.sgml"> %filelist; <!entity reference SYSTEM "reference.sgml"> + <!ENTITY slony1 "<PRODUCTNAME>Slony-I</PRODUCTNAME>"> + <!ENTITY postgres "<PRODUCTNAME>PostgreSQL</PRODUCTNAME>"> ]> <book id="slony"> - <title><productname>Slony-I</productname> &version; Documentation</title> + <title>&slony1; &version; Documentation</title> <bookinfo> - <corpauthor>The Slony Global Development Group</corpauthor> + <corpauthor>The &postgres; Global Development Group</corpauthor> <author> <firstname>Christopher</firstname> <surname>Browne</surname> @@ -20,7 +22,7 @@ </bookinfo> <article id="slonyintro"> - <title><productname>Slony-I</productname> Introduction</title> + <title>&slony1; Introduction</title> &intro; &prerequisites; @@ -76,7 +78,7 @@ </part> <part id="commandreferencec"> - <title>Core <productname>Slony-I</productname> Programs</title> + <title>Core &slony1; Programs</title> &slon; &slonik; &usingslonik; @@ -95,8 +97,7 @@ sgml-always-quote-attributes:t sgml-indent-step:1 sgml-indent-data:t -sgml-parent-document:nil -sgml-default-dtd-file:"./reference.ced" +sgml-parent-document:"book.sgml" sgml-exposed-tags:nil sgml-local-catalogs:("/usr/lib/sgml/catalog") sgml-local-ecat-files:nil