Fri Dec 10 18:42:11 PST 2004
Log Message:
-----------
Wiki data transformed to DocBook/SGML
Modified Files:
--------------
slony1-engine/doc/adminguide:
Makefile (r1.1 -> r1.2)
SlonyAddThings.txt (r1.1 -> r1.2)
SlonyAdministrationScripts.txt (r1.2 -> r1.3)
SlonyConcepts.txt (r1.1 -> r1.2)
SlonyDDLChanges.txt (r1.2 -> r1.3)
SlonyDefineCluster.txt (r1.1 -> r1.2)
SlonyDefineSet.txt (r1.1 -> r1.2)
SlonyDropThings.txt (r1.1 -> r1.2)
SlonyFAQ.txt (r1.1 -> r1.2)
SlonyFAQ01.txt (r1.1 -> r1.2)
SlonyFAQ02.txt (r1.1 -> r1.2)
SlonyFAQ03.txt (r1.1 -> r1.2)
SlonyFAQ04.txt (r1.1 -> r1.2)
SlonyFAQ05.txt (r1.1 -> r1.2)
SlonyFAQ06.txt (r1.1 -> r1.2)
SlonyFAQ07.txt (r1.1 -> r1.2)
SlonyFAQ08.txt (r1.1 -> r1.2)
SlonyFAQ09.txt (r1.1 -> r1.2)
SlonyFAQ10.txt (r1.1 -> r1.2)
SlonyFAQ11.txt (r1.1 -> r1.2)
SlonyFAQ12.txt (r1.1 -> r1.2)
SlonyFAQ13.txt (r1.1 -> r1.2)
SlonyFAQ14.txt (r1.1 -> r1.2)
SlonyFAQ15.txt (r1.1 -> r1.2)
SlonyFAQ16.txt (r1.1 -> r1.2)
SlonyFAQ17.txt (r1.1 -> r1.2)
SlonyFAQ18.txt (r1.1 -> r1.2)
SlonyHandlingFailover.txt (r1.1 -> r1.2)
SlonyHelp.txt (r1.1 -> r1.2)
SlonyHowtoFirstTry.txt (r1.1 -> r1.2)
SlonyIAdministration.txt (r1.2 -> r1.3)
SlonyInstallation.txt (r1.2 -> r1.3)
SlonyIntroduction.txt (r1.1 -> r1.2)
SlonyListenPaths.txt (r1.1 -> r1.2)
SlonyListenerCosts.txt (r1.1 -> r1.2)
SlonyMaintenance.txt (r1.2 -> r1.3)
SlonyPrerequisites.txt (r1.1 -> r1.2)
SlonyReshapingCluster.txt (r1.1 -> r1.2)
SlonySlonConfiguration.txt (r1.1 -> r1.2)
SlonySlonik.txt (r1.1 -> r1.2)
SlonyStartSlons.txt (r1.2 -> r1.3)
addthings.sgml (r1.1 -> r1.2)
adminscripts.sgml (r1.1 -> r1.2)
cluster.sgml (r1.1 -> r1.2)
concepts.sgml (r1.1 -> r1.2)
ddlchanges.sgml (r1.1 -> r1.2)
defineset.sgml (r1.1 -> r1.2)
dropthings.sgml (r1.1 -> r1.2)
failover.sgml (r1.1 -> r1.2)
filelist.sgml (r1.1 -> r1.2)
firstdb.sgml (r1.1 -> r1.2)
help.sgml (r1.1 -> r1.2)
installation.sgml (r1.1 -> r1.2)
intro.sgml (r1.1 -> r1.2)
legal.sgml (r1.1 -> r1.2)
listenpaths.sgml (r1.1 -> r1.2)
maintenance.sgml (r1.1 -> r1.2)
monitoring.sgml (r1.1 -> r1.2)
prerequisites.sgml (r1.1 -> r1.2)
reshape.sgml (r1.1 -> r1.2)
slonconfig.sgml (r1.1 -> r1.2)
slonik.sgml (r1.1 -> r1.2)
slony.sgml (r1.1 -> r1.2)
startslons.sgml (r1.1 -> r1.2)
subscribenodes.sgml (r1.1 -> r1.2)
Added Files:
-----------
slony1-engine/doc/adminguide:
addthings.html (r1.1)
altperl.html (r1.1)
cluster.html (r1.1)
concepts.html (r1.1)
ddlchanges.html (r1.1)
dropthings.html (r1.1)
failover.html (r1.1)
faq.html (r1.1)
firstdb.html (r1.1)
help.html (r1.1)
installation.html (r1.1)
listenpaths.html (r1.1)
maintenance.html (r1.1)
monitoring.html (r1.1)
mysheet.dsl (r1.1)
requirements.html (r1.1)
reshape.html (r1.1)
slonconfig.html (r1.1)
slonik.html (r1.1)
slonstart.html (r1.1)
slony.html (r1.1)
subscribenodes.html (r1.1)
t24.html (r1.1)
x267.html (r1.1)
x931.html (r1.1)
-------------- next part --------------
--- /dev/null
+++ doc/adminguide/reshape.html
@@ -0,0 +1,174 @@
+<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN""http://www.w3.org/TR/html4/loose.dtd">
+<HTML
+><HEAD
+><TITLE
+>Reshaping a Cluster</TITLE
+><META
+NAME="GENERATOR"
+CONTENT="Modular DocBook HTML Stylesheet Version 1.79"><LINK
+REV="MADE"
+HREF="mailto:cbbrowne at gmail.com"><LINK
+REL="HOME"
+TITLE="Slony-I 1.1 Administration"
+HREF="slony.html"><LINK
+REL="UP"
+HREF="t24.html"><LINK
+REL="PREVIOUS"
+TITLE="Slony-I Maintenance"
+HREF="maintenance.html"><LINK
+REL="NEXT"
+TITLE="Doing switchover and failover with Slony-I"
+HREF="failover.html"><LINK
+REL="STYLESHEET"
+TYPE="text/css"
+HREF="stdstyle.css"><META
+HTTP-EQUIV="Content-Type"></HEAD
+><BODY
+CLASS="SECT1"
+BGCOLOR="#FFFFFF"
+TEXT="#000000"
+LINK="#0000FF"
+VLINK="#840084"
+ALINK="#0000FF"
+><DIV
+CLASS="NAVHEADER"
+><TABLE
+SUMMARY="Header navigation table"
+WIDTH="100%"
+BORDER="0"
+CELLPADDING="0"
+CELLSPACING="0"
+><TR
+><TH
+COLSPAN="3"
+ALIGN="center"
+>Slony-I 1.1 Administration</TH
+></TR
+><TR
+><TD
+WIDTH="10%"
+ALIGN="left"
+VALIGN="bottom"
+><A
+HREF="maintenance.html"
+ACCESSKEY="P"
+>Prev</A
+></TD
+><TD
+WIDTH="80%"
+ALIGN="center"
+VALIGN="bottom"
+></TD
+><TD
+WIDTH="10%"
+ALIGN="right"
+VALIGN="bottom"
+><A
+HREF="failover.html"
+ACCESSKEY="N"
+>Next</A
+></TD
+></TR
+></TABLE
+><HR
+ALIGN="LEFT"
+WIDTH="100%"></DIV
+><DIV
+CLASS="SECT1"
+><H1
+CLASS="SECT1"
+><A
+NAME="RESHAPE"
+>14. Reshaping a Cluster</A
+></H1
+><P
+>If you rearrange the nodes so that they serve different purposes, the subscriptions will likely need to change as well. </P
+><P
+>This will require doing several things:
+<P
+></P
+><UL
+><LI
+><P
+> If you want a node that is a subscriber to become the "master" provider for a particular replication set, you will have to issue the slonik MOVE SET operation to change that "master" provider node. </P
+></LI
+><LI
+><P
+> You may subsequently, or instead, wish to modify the subscriptions of other nodes. You might want to modify a node to get its data from a different provider, or to change it to turn forwarding on or off. This can be accomplished by issuing the slonik SUBSCRIBE SET operation with the new subscription information for the node; Slony-I will change the configuration. </P
+></LI
+><LI
+><P
+> If the directions of data flows have changed, it is doubtless appropriate to issue a set of DROP LISTEN operations to drop out obsolete paths between nodes and SET LISTEN to add the new ones. At present, this is not changed automatically; at some point, MOVE SET and SUBSCRIBE SET might change the paths as a side-effect. See SlonyListenPaths for more information about this. In version 1.1 and later, it is likely that the generation of sl_listen entries will be entirely automated, where they will be regenerated when changes are made to sl_path or sl_subscribe, thereby making it unnecessary to even think about SET LISTEN. </P
+></LI
+></UL
+></P
+><P
+> The "altperl" toolset includes an "init_cluster.pl" script that is quite up to the task of creating the new SET LISTEN commands, but it isn't smart enough to know which listener paths should be dropped.
+
+ </P
+></DIV
+><DIV
+CLASS="NAVFOOTER"
+><HR
+ALIGN="LEFT"
+WIDTH="100%"><TABLE
+SUMMARY="Footer navigation table"
+WIDTH="100%"
+BORDER="0"
+CELLPADDING="0"
+CELLSPACING="0"
+><TR
+><TD
+WIDTH="33%"
+ALIGN="left"
+VALIGN="top"
+><A
+HREF="maintenance.html"
+ACCESSKEY="P"
+>Prev</A
+></TD
+><TD
+WIDTH="34%"
+ALIGN="center"
+VALIGN="top"
+><A
+HREF="slony.html"
+ACCESSKEY="H"
+>Home</A
+></TD
+><TD
+WIDTH="33%"
+ALIGN="right"
+VALIGN="top"
+><A
+HREF="failover.html"
+ACCESSKEY="N"
+>Next</A
+></TD
+></TR
+><TR
+><TD
+WIDTH="33%"
+ALIGN="left"
+VALIGN="top"
+>Slony-I Maintenance</TD
+><TD
+WIDTH="34%"
+ALIGN="center"
+VALIGN="top"
+><A
+HREF="t24.html"
+ACCESSKEY="U"
+>Up</A
+></TD
+><TD
+WIDTH="33%"
+ALIGN="right"
+VALIGN="top"
+>Doing switchover and failover with Slony-I</TD
+></TR
+></TABLE
+></DIV
+></BODY
+></HTML
+>
\ No newline at end of file
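As an aside, the reshaping steps that reshape.html describes above reduce to a fairly short slonik script. The sketch below is illustrative only: the cluster name, node numbers and conninfo strings are invented, and it assumes set 1 is moving its origin from node 1 to node 2 while node 3 is re-pointed at the new provider. It uses the STORE LISTEN command for what the text loosely calls "SET LISTEN".

    #!/bin/sh
    # Hypothetical reshaping script; names, node ids and conninfo are examples.
    slonik <<_EOF_
    cluster name = testcluster;
    node 1 admin conninfo = 'dbname=testdb host=server1 user=slony';
    node 2 admin conninfo = 'dbname=testdb host=server2 user=slony';
    node 3 admin conninfo = 'dbname=testdb host=server3 user=slony';

    # Make node 2 the new origin ("master" provider) of set 1
    lock set (id = 1, origin = 1);
    move set (id = 1, old origin = 1, new origin = 2);

    # Re-point node 3 at the new provider, keeping forwarding on
    subscribe set (id = 1, provider = 2, receiver = 3, forward = yes);

    # Listener paths are not adjusted automatically yet; do it by hand
    drop listen (origin = 1, provider = 1, receiver = 3);
    store listen (origin = 1, provider = 2, receiver = 3);
    _EOF_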
--- /dev/null
+++ doc/adminguide/requirements.html
@@ -0,0 +1,482 @@
+<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN""http://www.w3.org/TR/html4/loose.dtd">
+<HTML
+><HEAD
+><TITLE
+> Requirements</TITLE
+><META
+NAME="GENERATOR"
+CONTENT="Modular DocBook HTML Stylesheet Version 1.79"><LINK
+REV="MADE"
+HREF="mailto:cbbrowne at gmail.com"><LINK
+REL="HOME"
+TITLE="Slony-I 1.1 Administration"
+HREF="slony.html"><LINK
+REL="UP"
+HREF="t24.html"><LINK
+REL="PREVIOUS"
+HREF="t24.html"><LINK
+REL="NEXT"
+TITLE=" Slony-I Installation"
+HREF="installation.html"><LINK
+REL="STYLESHEET"
+TYPE="text/css"
+HREF="stdstyle.css"><META
+HTTP-EQUIV="Content-Type"></HEAD
+><BODY
+CLASS="SECT1"
+BGCOLOR="#FFFFFF"
+TEXT="#000000"
+LINK="#0000FF"
+VLINK="#840084"
+ALINK="#0000FF"
+><DIV
+CLASS="NAVHEADER"
+><TABLE
+SUMMARY="Header navigation table"
+WIDTH="100%"
+BORDER="0"
+CELLPADDING="0"
+CELLSPACING="0"
+><TR
+><TH
+COLSPAN="3"
+ALIGN="center"
+>Slony-I 1.1 Administration</TH
+></TR
+><TR
+><TD
+WIDTH="10%"
+ALIGN="left"
+VALIGN="bottom"
+><A
+HREF="t24.html"
+ACCESSKEY="P"
+>Prev</A
+></TD
+><TD
+WIDTH="80%"
+ALIGN="center"
+VALIGN="bottom"
+></TD
+><TD
+WIDTH="10%"
+ALIGN="right"
+VALIGN="bottom"
+><A
+HREF="installation.html"
+ACCESSKEY="N"
+>Next</A
+></TD
+></TR
+></TABLE
+><HR
+ALIGN="LEFT"
+WIDTH="100%"></DIV
+><DIV
+CLASS="SECT1"
+><H1
+CLASS="SECT1"
+><A
+NAME="REQUIREMENTS"
+>2. Requirements</A
+></H1
+><P
+>Any platform that can run PostgreSQL should be able to run
+Slony-I. </P
+><P
+>The platforms that have received specific testing at the time of
+this release are FreeBSD-4X-i368, FreeBSD-5X-i386, FreeBSD-5X-alpha,
+osX-10.3, Linux-2.4X-i386 Linux-2.6X-i386 Linux-2.6X-amd64,
+<SPAN
+CLASS="TRADEMARK"
+>Solaris</SPAN
+>™-2.8-SPARC, <SPAN
+CLASS="TRADEMARK"
+>Solaris</SPAN
+>™-2.9-SPARC, AIX 5.1
+and OpenBSD-3.5-sparc64. </P
+><P
+>There have been reports of success at running Slony-I hosts that
+are running PostgreSQL on Microsoft <SPAN
+CLASS="TRADEMARK"
+>Windows</SPAN
+>™. At this
+time, the <SPAN
+CLASS="QUOTE"
+>"binary"</SPAN
+> applications (<SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>e.g.</I
+></SPAN
+> -
+<B
+CLASS="APPLICATION"
+>slonik</B
+>, <B
+CLASS="APPLICATION"
+>slon</B
+>) do not run on
+<SPAN
+CLASS="TRADEMARK"
+>Windows</SPAN
+>™, but a <B
+CLASS="APPLICATION"
+>slon</B
+> running on one of the
+Unix-like systems has no reason to have difficulty connecting to a
+PostgreSQL instance running on <SPAN
+CLASS="TRADEMARK"
+>Windows</SPAN
+>™. </P
+><P
+> It ought to be possible to port <B
+CLASS="APPLICATION"
+>slon</B
+>
+and <B
+CLASS="APPLICATION"
+>slonik</B
+> to run on
+<SPAN
+CLASS="TRADEMARK"
+>Windows</SPAN
+>™; the conspicuous challenge is that of having
+a POSIX-like <TT
+CLASS="FILENAME"
+>pthreads</TT
+> implementation for
+<B
+CLASS="APPLICATION"
+>slon</B
+>, as it uses that to have multiple
+threads of execution. There are reports of there being a
+<TT
+CLASS="FILENAME"
+>pthreads</TT
+> library for
+<SPAN
+CLASS="TRADEMARK"
+>Windows</SPAN
+>™, so nothing should prevent some
+interested party from volunteering to do the port.</P
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN109"
+>2.1. Software needed</A
+></H2
+><P
+><P
+></P
+><UL
+><LI
+><P
+> GNU make. Other make programs will not work. GNU
+make is often installed under the name <TT
+CLASS="COMMAND"
+>gmake</TT
+>; this document
+will therefore always refer to it by that name. (On Linux-based
+systems GNU make is typically the default make, and is called
+<TT
+CLASS="COMMAND"
+>make</TT
+>) To test to see if your make is GNU make enter
+<TT
+CLASS="COMMAND"
+>make --version</TT
+>. Version 3.76 or later will suffice; previous
+versions may not. </P
+></LI
+><LI
+><P
+> You need an ISO/ANSI C compiler. Recent versions of
+<B
+CLASS="APPLICATION"
+>GCC</B
+> work. </P
+></LI
+><LI
+><P
+> You also need a recent version of PostgreSQL
+<SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>source</I
+></SPAN
+>. Slony-I depends on namespace support so you must
+have version 7.3.3 or newer to be able to build and use Slony-I. Rod
+Taylor has <SPAN
+CLASS="QUOTE"
+>"hacked up"</SPAN
+> a version of Slony-I that works with
+version 7.2; if you desperately need that, look for him on the <A
+HREF="http://www.postgresql.org/lists.html"
+TARGET="_top"
+> PostgreSQL Hackers mailing
+list</A
+>. It is not anticipated that 7.2 will be supported by any
+official <B
+CLASS="APPLICATION"
+>Slony-I</B
+> release. </P
+></LI
+><LI
+><P
+> GNU packages may be included in the standard
+packaging for your operating system, or you may need to look for
+source code at your local GNU mirror (see <A
+HREF="http://www.gnu.org/order/ftp.html"
+TARGET="_top"
+>http://www.gnu.org/order/ftp.html</A
+> for a list) or at <A
+HREF="ftp://ftp.gnu.org/gnu"
+TARGET="_top"
+> ftp://ftp.gnu.org/gnu</A
+> .) </P
+></LI
+><LI
+><P
+> If you need to obtain PostgreSQL source, you can
+download it from your favorite PostgreSQL mirror (see <A
+HREF="http://www.postgresql.org/mirrors-www.html"
+TARGET="_top"
+>http://www.postgresql.org/mirrors-www.html </A
+> for a list), or
+via <A
+HREF="http://bt.postgresql.org/"
+TARGET="_top"
+> BitTorrent</A
+>.</P
+></LI
+></UL
+> </P
+><P
+>Also check to make sure you have sufficient disk space. You
+will need approximately 5MB for the source tree during build and
+installation. </P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN136"
+>2.2. Getting Slony-I Source</A
+></H2
+><P
+>You can get the Slony-I source from <A
+HREF="http://developer.postgresql.org/~wieck/slony1/download/"
+TARGET="_top"
+>http://developer.postgresql.org/~wieck/slony1/download/</A
+></P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN140"
+>2.3. Time Synchronization</A
+></H2
+><P
+> All the servers used within the replication cluster need to
+have their Real Time Clocks in sync. This is to ensure that slon
+doesn't error out with messages indicating that a slave is already ahead of
+the master during replication. We recommend you have ntpd running on
+all nodes, with subscriber nodes using the <SPAN
+CLASS="QUOTE"
+>"master"</SPAN
+> provider
+node as their time server. </P
+><P
+> It is possible for Slony-I to function even in the face of
+there being some time discrepancies, but having systems <SPAN
+CLASS="QUOTE"
+>"in
+sync"</SPAN
+> is usually pretty important for distributed applications. </P
+><P
+> See <A
+HREF="http://www.ntp.org/"
+TARGET="_top"
+> www.ntp.org </A
+> for
+more details about NTP (Network Time Protocol). </P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN148"
+>2.4. Network Connectivity</A
+></H2
+><P
+>It is necessary that the hosts that are to replicate between one
+another have <SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>bidirectional</I
+></SPAN
+> network communications to the
+PostgreSQL instances. That is, if node B is replicating data from
+node A, it is necessary that there be a path from A to B and from B to
+A. It is recommended that all nodes in a Slony-I cluster allow this
+sort of bidirectional communications from any node in the cluster to any
+other node in the cluster. </P
+><P
+>Note that the network addresses need to be consistent across all
+of the nodes. Thus, if there is any need to use a <SPAN
+CLASS="QUOTE"
+>"public"</SPAN
+>
+address for a node, to allow remote/VPN access, that <SPAN
+CLASS="QUOTE"
+>"public"</SPAN
+>
+address needs to be able to be used consistently throughout the
+Slony-I cluster, as the address is propagated throughout the cluster
+in table <CODE
+CLASS="ENVAR"
+>sl_path</CODE
+>. </P
+><P
+>A possible workaround for this, in environments where firewall
+rules are particularly difficult to implement, may be to establish
+SSH Tunnels that are created on each host that allow remote access
+through IP address 127.0.0.1, with a different port for each
+destination. </P
+><P
+> Note that <B
+CLASS="APPLICATION"
+>slonik</B
+> and the <B
+CLASS="APPLICATION"
+>slon</B
+>
+instances need no special connections or protocols to communicate with
+one another; they just need to be able to get access to the
+<B
+CLASS="APPLICATION"
+>PostgreSQL</B
+> databases, connecting as a <SPAN
+CLASS="QUOTE"
+>"superuser"</SPAN
+>. </P
+><P
+> An implication of the communications model is that the entire
+extended network in which a Slony-I cluster operates must be able to
+be treated as being secure. If there is a remote location where you
+cannot trust the Slony-I node to be considered <SPAN
+CLASS="QUOTE"
+>"secured,"</SPAN
+> this
+represents a vulnerability that adversely affects the security of the entire
+cluster. In effect, the security policies throughout the cluster can
+only be considered as stringent as those applied at the
+<SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>weakest</I
+></SPAN
+> link. Running a full-blown Slony-I node at a
+branch location that can't be kept secure compromises security for the
+cluster. </P
+><P
+>Among the future plans is a feature whereby updates for a
+particular replication set would be serialized via a scheme called
+<SPAN
+CLASS="QUOTE"
+>"log shipping."</SPAN
+> The data stored in sl_log_1 and sl_log_2 would
+be written out to log files on disk. These files could be transmitted
+in any manner desired, whether via scp, FTP, burning them onto
+DVD-ROMs and mailing them, or even by recording them on a USB
+<SPAN
+CLASS="QUOTE"
+>"flash device"</SPAN
+> and attaching them to birds, allowing a sort of
+<SPAN
+CLASS="QUOTE"
+>"avian transmission protocol."</SPAN
+> This will allow one way
+communications so that <SPAN
+CLASS="QUOTE"
+>"subscribers"</SPAN
+> that use log shipping would
+have no need for access to other Slony-I nodes.</P
+></DIV
+></DIV
+><DIV
+CLASS="NAVFOOTER"
+><HR
+ALIGN="LEFT"
+WIDTH="100%"><TABLE
+SUMMARY="Footer navigation table"
+WIDTH="100%"
+BORDER="0"
+CELLPADDING="0"
+CELLSPACING="0"
+><TR
+><TD
+WIDTH="33%"
+ALIGN="left"
+VALIGN="top"
+><A
+HREF="t24.html"
+ACCESSKEY="P"
+>Prev</A
+></TD
+><TD
+WIDTH="34%"
+ALIGN="center"
+VALIGN="top"
+><A
+HREF="slony.html"
+ACCESSKEY="H"
+>Home</A
+></TD
+><TD
+WIDTH="33%"
+ALIGN="right"
+VALIGN="top"
+><A
+HREF="installation.html"
+ACCESSKEY="N"
+>Next</A
+></TD
+></TR
+><TR
+><TD
+WIDTH="33%"
+ALIGN="left"
+VALIGN="top"
+></TD
+><TD
+WIDTH="34%"
+ALIGN="center"
+VALIGN="top"
+><A
+HREF="t24.html"
+ACCESSKEY="U"
+>Up</A
+></TD
+><TD
+WIDTH="33%"
+ALIGN="right"
+VALIGN="top"
+>Slony-I Installation</TD
+></TR
+></TABLE
+></DIV
+></BODY
+></HTML
+>
\ No newline at end of file
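A few quick shell checks cover the requirements listed in requirements.html above; this is only a sketch and assumes gmake, gcc, pg_config and ntpq are already on the PATH (note that GNU make reports its version via the --version flag).

    #!/bin/sh
    # Sanity-check the build prerequisites described above.
    gmake --version | head -n 1   # want GNU Make 3.76 or later
    gcc --version | head -n 1     # any recent ISO/ANSI C compiler
    pg_config --version           # PostgreSQL 7.3.3 or newer is needed to build Slony-I
    df -k .                       # about 5MB is needed for the source tree
    ntpq -p                       # confirm ntpd is running and the clocks are in sync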
Index: adminscripts.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/adminscripts.sgml,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/adminscripts.sgml -Ldoc/adminguide/adminscripts.sgml -u -w -r1.1 -r1.2
--- doc/adminguide/adminscripts.sgml
+++ doc/adminguide/adminscripts.sgml
@@ -1,20 +1,20 @@
-<article id="altperl"> <title/Slony Administration Scripts
+<sect1 id="altperl"><title/ Slony-I Administration Scripts/
<para>In the "altperl" directory in the CVS tree, there is a sizable set of Perl scripts that may be used to administer a set of Slony-I instances, which support having arbitrary numbers of nodes.
<para>Most of them generate Slonik scripts that are then to be passed on to the slonik utility to be submitted to all of the Slony-I nodes in a particular cluster. At one time, this embedded running slonik on the slonik scripts. Unfortunately, this turned out to be a pretty large calibre "foot gun," as minor typos on the command line led, on a couple of occasions, to pretty calamitous actions, so the behaviour has been changed so that the scripts simply submit output to standard output. An administrator should review the slonik script before submitting it to Slonik.
-<sect1><title> Node/Cluster Configuration - cluster.nodes</title>
+<sect2><title> Node/Cluster Configuration - cluster.nodes</title>
<para>The UNIX environment variable <envar/SLONYNODES/ is used to determine what Perl configuration file will be used to control the shape of the nodes in a Slony-I cluster.
<para>What variables are set up...
<itemizedlist>
-<listitem><Para> $SETNAME=orglogs; # What is the name of the replication set?
-<listitem><Para> $LOGDIR='/opt/OXRS/log/LOGDBS'; # What is the base directory for logs?
-<listitem><Para> $SLON_BIN_PATH='/opt/dbs/pgsql74/bin'; # Where to look for slony binaries
-<listitem><Para> $APACHE_ROTATOR="/opt/twcsds004/OXRS/apache/rotatelogs"; # If set, where to find Apache log rotator
+<listitem><Para> <envar/$SETNAME/=orglogs; # What is the name of the replication set?
+<listitem><Para> <envar/$LOGDIR/='/opt/OXRS/log/LOGDBS'; # What is the base directory for logs?
+<listitem><Para> <envar/$SLON_BIN_PATH/='/opt/dbs/pgsql74/bin'; # Where to look for slony binaries
+<listitem><Para> <envar/$APACHE_ROTATOR/="/opt/twcsds004/OXRS/apache/rotatelogs"; # If set, where to find Apache log rotator
</itemizedlist>
<para>You then define the set of nodes that are to be replicated using a set of calls to <function/add_node()/.
@@ -24,18 +24,12 @@
</command></para>
<para>The set of parameters for <function/add_node()/ are thus:
-<command>
- my %PARAMS = (host=> undef, # Host name
- dbname => 'template1', # database name
- port => 5432, # Port number
- user => 'postgres', # user to connect as
- node => undef, # node number
- password => undef, # password for user
- parent => 1, # which node is parent to this node
- noforward => undef # shall this node be set up to forward results?
- );
-</command>
-<sect1><title> Set configuration - cluster.set1, cluster.set2</title>
+
+<para> <literallayout><literal remap="tt"><inlinegraphic
+fileref="params.txt"
+format="linespecific"></literal></literallayout></para>
+
+<sect2><title> Set configuration - cluster.set1, cluster.set2</title>
<para>The UNIX environment variable <envar/SLONYSET/ is used to determine what Perl configuration file will be used to determine what objects will be contained in a particular replication set.
@@ -50,7 +44,7 @@
<listitem><Para> @SEQUENCES An array of names of sequences that are to be replicated
</itemizedlist>
-<sect1><title/ build_env.pl/
+<sect2><title/ build_env.pl/
<para>Queries a database, generating output hopefully suitable for
<filename/slon.env/ consisting of:
@@ -60,78 +54,78 @@
<listitem><Para> The arrays <envar/@KEYEDTABLES/, <envar/@SERIALTABLES/, and <envar/@SEQUENCES/
</itemizedlist>
-<sect1><title/ create_set.pl/
+<sect2><title/ create_set.pl/
<para>This requires <envar/SLONYSET/ to be set as well as <envar/SLONYNODES/; it is used to
generate the Slonik script to set up a replication set consisting of a
set of tables and sequences that are to be replicated.
-<sect1><title/ drop_node.pl/
+<sect2><title/ drop_node.pl/
<para>Generates Slonik script to drop a node from a Slony-I cluster.
-<sect1><title/ drop_set.pl/
+<sect2><title/ drop_set.pl/
<para>Generates Slonik script to drop a replication set (<emphasis/e.g./ - set of tables and sequences) from a Slony-I cluster.
-<sect1><title/ failover.pl/
+<sect2><title/ failover.pl/
<para>Generates Slonik script to request failover from a dead node to some new origin
-<sect1><title/ init_cluster.pl/
+<sect2><title/ init_cluster.pl/
<para>Generates Slonik script to initialize a whole Slony-I cluster,
including setting up the nodes, communications paths, and the listener
routing.
-<sect1><title/ merge_sets.pl/
+<sect2><title/ merge_sets.pl/
<para>Generates Slonik script to merge two replication sets together.
-<sect1><title/ move_set.pl/
+<sect2><title/ move_set.pl/
<para>Generates Slonik script to move the origin of a particular set to a different node.
-<sect1><title/ replication_test.pl/
+<sect2><title/ replication_test.pl/
<para>Script to test whether Slony-I is successfully replicating data.
-<sect1><title/ restart_node.pl/
+<sect2><title/ restart_node.pl/
<para>Generates Slonik script to request the restart of a node. This was
particularly useful pre-1.0.5 when nodes could get snarled up when
slon daemons died.
-<sect1><title/ restart_nodes.pl/
+<sect2><title/ restart_nodes.pl/
<para>Generates Slonik script to restart all nodes in the cluster. Not
particularly useful...
-<sect1><title/ show_configuration.pl/
+<sect2><title/ show_configuration.pl/
<para>Displays an overview of how the environment (e.g. - <envar/SLONYNODES/) is set
to configure things.
-<sect1><title/ slon_kill.pl/
+<sect2><title/ slon_kill.pl/
<para>Kills slony watchdog and all slon daemons for the specified set. It
only works if those processes are running on the local host, of
course!
-<sect1><title/ slon_pushsql.pl/
+<sect2><title/ slon_pushsql.pl/
<para>Generates Slonik script to push DDL changes to a replication set.
-<sect1><title/ slon_start.pl/
+<sect2><title/ slon_start.pl/
<para>This starts a slon daemon for the specified cluster and node, and uses
slon_watchdog.pl to keep it running.
-<sect1><title/ slon_watchdog.pl/
+<sect2><title/ slon_watchdog.pl/
<para>Used by slon_start.pl...
-<sect1><title/ slon_watchdog2.pl/
+<sect2><title/ slon_watchdog2.pl/
<para>This is a somewhat smarter watchdog; it monitors a particular Slony-I
node, and restarts the slon process if it hasn't seen updates go in in
@@ -140,29 +134,26 @@
<para>This is helpful if there is an unreliable network connection such that
the slon sometimes stops working without becoming aware of it...
-<sect1><title/ subscribe_set.pl/
+<sect2><title/ subscribe_set.pl/
<para>Generates Slonik script to subscribe a particular node to a particular replication set.
-<sect1><title/ uninstall_nodes.pl/
+<sect2><title/ uninstall_nodes.pl/
<para>This goes through and drops the Slony-I schema from each node; use
this if you want to destroy replication throughout a cluster. This is
a VERY unsafe script!
-<sect1><title/ unsubscribe_set.pl/
+<sect2><title/ unsubscribe_set.pl/
<para>Generates Slonik script to unsubscribe a node from a replication set.
-<sect1><title/ update_nodes.pl/
+<sect2><title/ update_nodes.pl/
<para>Generates Slonik script to tell all the nodes to update the Slony-I
functions. This will typically be needed when you upgrade from one
version of Slony-I to another.
-
-
-</article>
<!-- Keep this comment at the end of the file
Local variables:
mode:sgml
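To illustrate the workflow the adminscripts.sgml hunks above describe (the scripts now write slonik to standard output instead of running it), a hypothetical shell session might look like the following; the configuration file paths are invented, and each script's exact arguments should be checked against its usage message.

    #!/bin/sh
    # The altperl scripts read their cluster/set definitions from these files.
    SLONYNODES=/etc/slony/cluster.nodes; export SLONYNODES
    SLONYSET=/etc/slony/cluster.set1;    export SLONYSET

    # Each script emits a slonik script on stdout; capture it for review...
    ./init_cluster.pl > init_cluster.slonik
    ./create_set.pl   > create_set.slonik

    # ...and only submit it to slonik once it has been looked over.
    slonik < init_cluster.slonik
    slonik < create_set.slonik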
Index: SlonyConcepts.txt
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/SlonyConcepts.txt,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/SlonyConcepts.txt -Ldoc/adminguide/SlonyConcepts.txt -u -w -r1.1 -r1.2
--- doc/adminguide/SlonyConcepts.txt
+++ doc/adminguide/SlonyConcepts.txt
@@ -1,51 +1 @@
-%META:TOPICINFO{author="guest" date="1098328042" format="1.0" version="1.2"}%
-%META:TOPICPARENT{name="SlonyIAdministration"}%
----++ Slony-I Concepts
-
-In order to set up a set of Slony-I replicas, it is necessary to understand the following major abstractions that it uses:
-
- * Cluster
- * Node
- * Replication Set
- * Provider and Subscriber
-
----+++ Cluster
-
-In Slony-I terms, a Cluster is a named set of PostgreSQL database instances; replication takes place between those databases.
-
-The cluster name is specified in each and every Slonik script via the directive:
-<verbatim>
-cluster name = 'something';
-</verbatim>
-
-If the Cluster name is 'something', then Slony-I will create, in each database instance in the cluster, the namespace/schema '_something'.
-
----+++ Node
-
-A Slony-I Node is a named PostgreSQL database that will be participating in replication.
-
-It is defined, near the beginning of each Slonik script, using the directive:
-<verbatim>
- NODE 1 ADMIN CONNINFO = 'dbname=testdb host=server1 user=slony';
-</verbatim>
-
-The CONNINFO information indicates a string argument that will ultimately be passed to the PQconnectdb() libpq function.
-
-Thus, a Slony-I cluster consists of:
- * A cluster name
- * A set of Slony-I nodes, each of which has a namespace based on that cluster name
-
----+++ Replication Set
-
-A replication set is defined as a set of tables and sequences that are to be replicated between nodes in a Slony-I cluster.
-
-You may have several sets, and the "flow" of replication does not need to be identical between those sets.
-
----+++ Provider and Subscriber
-
-Each replication set has some "master" node, which winds up being the _only_ place where user applications are permitted to modify data in the tables that are being replicated. That "master" may be considered the master "provider node;" it is the main place from which data is provided.
-
-Other nodes in the cluster will subscribe to the replication set, indicating that they want to receive the data.
-
-The "master" node will never be considered a "subscriber." But Slony-I supports the notion of cascaded subscriptions, that is, a node that is subscribed to the "master" may also behave as a "provider" to other nodes in the cluster.
-
+Moved to SGML
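Tying together the two directives quoted in the removed text above (and noting that a cluster named "something" gives every node a "_something" schema), a purely illustrative check might be the following; names and conninfo are examples only.

    #!/bin/sh
    slonik <<_EOF_
    cluster name = something;
    node 1 admin conninfo = 'dbname=testdb host=server1 user=slony';
    echo 'cluster "something" uses schema _something on each node';
    _EOF_

    # Once the cluster has been initialized, the schema is visible via:
    psql -h server1 -U slony -d testdb \
         -c "select nspname from pg_namespace where nspname = '_something';"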
Index: SlonyFAQ.txt
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/SlonyFAQ.txt,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/SlonyFAQ.txt -Ldoc/adminguide/SlonyFAQ.txt -u -w -r1.1 -r1.2
--- doc/adminguide/SlonyFAQ.txt
+++ doc/adminguide/SlonyFAQ.txt
@@ -1,26 +1 @@
-%META:TOPICINFO{author="guest" date="1099972260" format="1.0" version="1.3"}%
-%META:TOPICPARENT{name="SlonyIAdministration"}%
----++ Frequently Asked Questions
-
-Not all of these are "frequently asked;" some represent _trouble found that seemed worth documenting_.
-
-Note that numerous of these items are version-specific, describing how to deal with error conditions that are dealt with automatically by later releases. In particular, version 1.0.5 resolved numerous issues where changing replication configuration didn't quite work entirely right. Many of these items should therefore ultimately get reorganized to indicate version specificity. There are quite enough issues with 1.0.2 that people should probably prefer to use 1.0.5 unless there's very special reason to stay with 1.0.2 for the time being.
-
- * SlonyFAQ01 I looked for the _clustername namespace, and it wasn't there.
- * SlonyFAQ02 Some events moving around, but no replication
- * SlonyFAQ03 Cluster name with "-" in it
- * SlonyFAQ04 slon does not restart after crash
- * SlonyFAQ05 ps finds passwords on command line
- * SlonyFAQ06 Slonik fails - cannot load PostgreSQL library - PGRES_FATAL_ERROR load '$libdir/xxid';
- * SlonyFAQ07 Table indexes with FQ namespace names
- * SlonyFAQ08 Subscription fails "transaction still in progress"
- * SlonyFAQ09 ERROR: duplicate key violates unique constraint "sl_table-pkey"
- * SlonyFAQ10 I need to drop a table from a replication set
- * SlonyFAQ11 I need to drop a sequence from a replication set
- * SlonyFAQ12 Slony-I: cannot add table to currently subscribed set 1
- * SlonyFAQ13 Some nodes start consistently falling behind
- * SlonyFAQ14 I started doing a backup using pg_dump, and suddenly Slony stops
- * SlonyFAQ15 The slons spent the weekend out of commission [for some reason], and it's taking a long time to get a sync through.
- * SlonyFAQ16 I pointed a subscribing node to a different parent and it stopped replicating
- * SlonyFAQ17 After dropping a node, sl_log_1 isn't getting purged out anymore
- * SlonyFAQ18 Replication Fails - Unique Constraint Violation
+Moved to SGML
Index: maintenance.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/maintenance.sgml,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/maintenance.sgml -Ldoc/adminguide/maintenance.sgml -u -w -r1.1 -r1.2
--- doc/adminguide/maintenance.sgml
+++ doc/adminguide/maintenance.sgml
@@ -1,23 +1,32 @@
-<article id="maintenance"> <title/Slony-I Maintenance/
+<sect1 id="maintenance"> <title/Slony-I Maintenance/
<para>Slony-I actually does most of its necessary maintenance itself, in a "cleanup" thread:
+
<itemizedlist>
- <Listitem><para> Deletes old data from various tables in the Slony-I cluster's namespace, notably entries in sl_log_1, sl_log_2 (not yet used), and sl_seqlog.
- <listitem><Para> Vacuum certain tables used by Slony-I. As of 1.0.5, this includes pg_listener; in earlier versions, you must vacuum that table heavily, otherwise you'll find replication slowing down because Slony-I raises plenty of events, which leads to that table having plenty of dead tuples.
+ <Listitem><para> Deletes old data from various tables in the
+ Slony-I cluster's namespace, notably entries in sl_log_1,
+ sl_log_2 (not yet used), and sl_seqlog.
+
+ <listitem><Para> Vacuum certain tables used by Slony-I. As of
+ 1.0.5, this includes pg_listener; in earlier versions, you
+ must vacuum that table heavily, otherwise you'll find
+ replication slowing down because Slony-I raises plenty of
+ events, which leads to that table having plenty of dead
+ tuples.
<para> In some versions (1.1, for sure; possibly 1.0.5) there is the option of not bothering to vacuum any of these tables if you are using something like pg_autovacuum to handle vacuuming of these tables. Unfortunately, it has been quite possible for pg_autovacuum to not vacuum quite frequently enough, so you probably want to use the internal vacuums. Vacuuming pg_listener "too often" isn't nearly as hazardous as not vacuuming it frequently enough.
<para>Unfortunately, if you have long-running transactions, vacuums cannot clear out dead tuples that are newer than the eldest transaction that is still running. This will most notably lead to pg_listener growing large and will slow replication.
</itemizedlist>
-<sect1><title/ Watchdogs: Keeping Slons Running/
+<sect2><title/ Watchdogs: Keeping Slons Running/
<para>There are a couple of "watchdog" scripts available that monitor things, and restart the slon processes should they happen to die for some reason, such as a network "glitch" that causes loss of connectivity.
<para>You might want to run them...
-<sect1><title/Alternative to Watchdog: generate_syncs.sh/
+<sect2><title/Alternative to Watchdog: generate_syncs.sh/
<para>A new script for Slony-I 1.1 is "generate_syncs.sh", which addresses the following kind of situation.
@@ -31,7 +40,7 @@
<para>Note that if SYNCs <emphasis/are/ running regularly, this script won't bother doing anything.
-<sect1><title/ Log Files/
+<sect2><title/ Log Files/
<para>Slon daemons generate some more-or-less verbose log files, depending on what debugging level is turned on. You might assortedly wish to:
<itemizedlist>
@@ -39,7 +48,7 @@
<listitem><Para> Purge out old log files, periodically.
</itemizedlist>
-</article>
+
<!-- Keep this comment at the end of the file
Local variables:
mode:sgml
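For the "Log Files" item above, a minimal sketch of running slon behind Apache's rotatelogs could look like this; the cluster name, conninfo and paths are all assumptions.

    #!/bin/sh
    CLUSTER=testcluster
    CONNINFO='dbname=testdb host=server1 user=slony'
    LOGDIR=/var/log/slony

    # Rotate the slon log daily so no single file grows too large.
    slon $CLUSTER "$CONNINFO" 2>&1 | \
        /usr/local/apache/bin/rotatelogs $LOGDIR/slon_node1_%Y%m%d.log 86400 &

    # Periodically purge old logs, e.g. from cron:
    #   find /var/log/slony -name 'slon_node1_*.log' -mtime +30 -exec rm {} \;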
Index: SlonyFAQ09.txt
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/SlonyFAQ09.txt,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/SlonyFAQ09.txt -Ldoc/adminguide/SlonyFAQ09.txt -u -w -r1.1 -r1.2
--- doc/adminguide/SlonyFAQ09.txt
+++ doc/adminguide/SlonyFAQ09.txt
@@ -1,16 +1 @@
-%META:TOPICINFO{author="guest" date="1098326891" format="1.0" version="1.1"}%
-%META:TOPICPARENT{name="SlonyFAQ"}%
----++ ERROR: duplicate key violates unique constraint "sl_table-pkey"
-
-I tried setting up a second replication set, and got the following error:
-
-<verbatim>
-<stdin>:9: Could not create subscription set 2 for oxrslive!
-<stdin>:11: PGRES_FATAL_ERROR select "_oxrslive".setAddTable(2, 1, 'public.replic_test', 'replic_test__Slony-I_oxrslive_rowID_key', 'Table public.replic_test without primary key'); - ERROR: duplicate key violates unique constraint "sl_table-pkey"
-CONTEXT: PL/pgSQL function "setaddtable_int" line 71 at SQL statement
-</verbatim>
-
-The table IDs used in SET ADD TABLE are required to be unique ACROSS
-ALL SETS. Thus, you can't restart numbering at 1 for a second set; if
-you are numbering them consecutively, a subsequent set has to start
-with IDs after where the previous set(s) left off.
+Moved to SGML
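To make the numbering rule above concrete: if set 1 already used table ids 1 through 5, a second set has to keep counting from 6. A hypothetical sketch follows; the cluster and table names are taken loosely from the error message above and are examples only.

    #!/bin/sh
    slonik <<_EOF_
    cluster name = oxrslive;
    node 1 admin conninfo = 'dbname=oxrslive host=server1 user=slony';

    create set (id = 2, origin = 1, comment = 'second replication set');

    # Table ids are unique across ALL sets, so continue after set 1's ids.
    set add table (set id = 2, origin = 1, id = 6,
                   fully qualified name = 'public.replic_test',
                   comment = 'first table of set 2; ids 1-5 belong to set 1');
    _EOF_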
--- /dev/null
+++ doc/adminguide/help.html
@@ -0,0 +1,200 @@
+<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN""http://www.w3.org/TR/html4/loose.dtd">
+<HTML
+><HEAD
+><TITLE
+> More Slony-I Help </TITLE
+><META
+NAME="GENERATOR"
+CONTENT="Modular DocBook HTML Stylesheet Version 1.79"><LINK
+REV="MADE"
+HREF="mailto:cbbrowne at gmail.com"><LINK
+REL="HOME"
+TITLE="Slony-I 1.1 Administration"
+HREF="slony.html"><LINK
+REL="UP"
+HREF="t24.html"><LINK
+REL="PREVIOUS"
+TITLE="Replicating Your First Database"
+HREF="firstdb.html"><LINK
+REL="NEXT"
+TITLE=" Other Information Sources"
+HREF="x931.html"><LINK
+REL="STYLESHEET"
+TYPE="text/css"
+HREF="stdstyle.css"><META
+HTTP-EQUIV="Content-Type"></HEAD
+><BODY
+CLASS="SECT1"
+BGCOLOR="#FFFFFF"
+TEXT="#000000"
+LINK="#0000FF"
+VLINK="#840084"
+ALINK="#0000FF"
+><DIV
+CLASS="NAVHEADER"
+><TABLE
+SUMMARY="Header navigation table"
+WIDTH="100%"
+BORDER="0"
+CELLPADDING="0"
+CELLSPACING="0"
+><TR
+><TH
+COLSPAN="3"
+ALIGN="center"
+>Slony-I 1.1 Administration</TH
+></TR
+><TR
+><TD
+WIDTH="10%"
+ALIGN="left"
+VALIGN="bottom"
+><A
+HREF="firstdb.html"
+ACCESSKEY="P"
+>Prev</A
+></TD
+><TD
+WIDTH="80%"
+ALIGN="center"
+VALIGN="bottom"
+></TD
+><TD
+WIDTH="10%"
+ALIGN="right"
+VALIGN="bottom"
+><A
+HREF="x931.html"
+ACCESSKEY="N"
+>Next</A
+></TD
+></TR
+></TABLE
+><HR
+ALIGN="LEFT"
+WIDTH="100%"></DIV
+><DIV
+CLASS="SECT1"
+><H1
+CLASS="SECT1"
+><A
+NAME="HELP"
+>21. More Slony-I Help</A
+></H1
+><P
+>If you are having problems with Slony-I, you have several options for help:
+<P
+></P
+><UL
+><LI
+><P
+> <A
+HREF="http://slony.info/"
+TARGET="_top"
+>http://slony.info/</A
+> - the official "home" of Slony </P
+></LI
+><LI
+><P
+> Documentation on the Slony-I Site - Check the documentation on the Slony website: <A
+HREF="http://gborg.postgresql.org/project/slony1/genpage.php?howto_idx"
+TARGET="_top"
+>Howto </A
+></P
+></LI
+><LI
+><P
+> Other Documentation - There are several articles here <A
+HREF="http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php#Replication"
+TARGET="_top"
+> Varlena GeneralBits </A
+> that may be helpful. </P
+></LI
+><LI
+><P
+> IRC - There are usually some people on #slony on irc.freenode.net who may be able to answer some of your questions. There is also a bot named "rtfm_please" that you may want to chat with.</P
+></LI
+><LI
+><P
+> Mailing lists - The answer to your problem may exist in the Slony1-general mailing list archives, or you may choose to ask your question on the Slony1-general mailing list. The mailing list archives, and instructions for joining the list may be found <A
+HREF="http://gborg.postgresql.org/mailman/listinfo/slony1"
+TARGET="_top"
+>here. </A
+> </P
+></LI
+><LI
+><P
+> If your Russian is much better than your English, then <A
+HREF="http://kirov.lug.ru/wiki/Slony"
+TARGET="_top"
+> KirovOpenSourceCommunity: Slony</A
+> may be the place to go.</P
+></LI
+></UL
+> </P
+></DIV
+><DIV
+CLASS="NAVFOOTER"
+><HR
+ALIGN="LEFT"
+WIDTH="100%"><TABLE
+SUMMARY="Footer navigation table"
+WIDTH="100%"
+BORDER="0"
+CELLPADDING="0"
+CELLSPACING="0"
+><TR
+><TD
+WIDTH="33%"
+ALIGN="left"
+VALIGN="top"
+><A
+HREF="firstdb.html"
+ACCESSKEY="P"
+>Prev</A
+></TD
+><TD
+WIDTH="34%"
+ALIGN="center"
+VALIGN="top"
+><A
+HREF="slony.html"
+ACCESSKEY="H"
+>Home</A
+></TD
+><TD
+WIDTH="33%"
+ALIGN="right"
+VALIGN="top"
+><A
+HREF="x931.html"
+ACCESSKEY="N"
+>Next</A
+></TD
+></TR
+><TR
+><TD
+WIDTH="33%"
+ALIGN="left"
+VALIGN="top"
+>Replicating Your First Database</TD
+><TD
+WIDTH="34%"
+ALIGN="center"
+VALIGN="top"
+><A
+HREF="t24.html"
+ACCESSKEY="U"
+>Up</A
+></TD
+><TD
+WIDTH="33%"
+ALIGN="right"
+VALIGN="top"
+>Other Information Sources</TD
+></TR
+></TABLE
+></DIV
+></BODY
+></HTML
+>
\ No newline at end of file
Index: prerequisites.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/prerequisites.sgml,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/prerequisites.sgml -Ldoc/adminguide/prerequisites.sgml -u -w -r1.1 -r1.2
--- doc/adminguide/prerequisites.sgml
+++ doc/adminguide/prerequisites.sgml
@@ -1,57 +1,115 @@
-<sect1><title/ Requirements/
+<sect1 id="requirements"><title/ Requirements/
-<para>Any platform that can run PostgreSQL should be able to run Slony-I.
+<para>Any platform that can run PostgreSQL should be able to run
+Slony-I.
-<para>The platforms that have received specific testing at the time of this release are
-FreeBSD-4X-i368, FreeBSD-5X-i386, FreeBSD-5X-alpha, osX-10.3, Linux-2.4X-i386
-Linux-2.6X-i386 Linux-2.6X-amd64, Solaris-2.8-SPARC, Solaris-2.9-SPARC, AIX 5.1 and
-OpenBSD-3.5-sparc64.
-
-<para>There have been reports of success at running Slony-I hosts that are running PostgreSQL on Microsoft Windows(tm). At this time, the "binary" applications (e.g. - slonik, slon) do not run on Windows(tm), but a slon running on one of the Unix-like systems has no reason to have difficulty connect to a PostgreSQL instance running on Windows(tm).
-
-<para> It ought to be possible to port slon and slonik to run on Windows; the conspicuous challenge is of having a POSIX-like pthreads implementation for slon, as it uses that to have multiple threads of execution. There are reports of there being a pthreads library for Windows(tm), so nothing should prevent some interested party from volunteering to do the port.
+<para>The platforms that have received specific testing at the time of
+this release are FreeBSD-4X-i368, FreeBSD-5X-i386, FreeBSD-5X-alpha,
+osX-10.3, Linux-2.4X-i386 Linux-2.6X-i386 Linux-2.6X-amd64,
+<trademark/Solaris/-2.8-SPARC, <trademark/Solaris/-2.9-SPARC, AIX 5.1
+and OpenBSD-3.5-sparc64.
+
+<para>There have been reports of success at running Slony-I hosts that
+are running PostgreSQL on Microsoft <trademark/Windows/. At this
+time, the <quote/binary/ applications (<emphasis/e.g./ -
+<application/slonik/, <application/slon/) do not run on
+<trademark/Windows/, but a <application/slon/ running on one of the
+Unix-like systems has no reason to have difficulty connecting to a
+PostgreSQL instance running on <trademark/Windows/.
+
+<para> It ought to be possible to port <application>slon</application>
+and <application>slonik</application> to run on
+<trademark>Windows</trademark>; the conspicuous challenge is that of having
+a POSIX-like <filename>pthreads</filename> implementation for
+<application>slon</application>, as it uses that to have multiple
+threads of execution. There are reports of there being a
+<filename>pthreads</filename> library for
+<trademark>Windows</trademark>, so nothing should prevent some
+interested party from volunteering to do the port.</para>
<sect2><title/ Software needed/
<para>
<itemizedlist>
- <listitem><Para> GNU make. Other make programs will not work. GNU make is often installed under the name gmake; this document will therefore always refer to it by that name. (On Linux-based systems GNU make is typically the default make, and is called "make") To test to see if your make is GNU make enter "make version." Version 3.76 or later will suffice; previous versions may not.
-
- <listitem><Para> You need an ISO/ANSI C compiler. Recent versions of GCC work.
-
- <listitem><Para> You also need a recent version of PostgreSQL *source*. Slony-I depends on namespace support so you must have version 7.3 or newer to be able to build and use Slony-I. Rod Taylor has "hacked up" a version of Slony-I that works with version 7.2; if you desperately need that, look for him on the PostgreSQL Hackers mailing list. It is not anticipated that 7.2 will be supported by any official Slony-I release.
- <listitem><Para> GNU packages may be included in the standard packaging for your operating system, or you may need to look for source code at your local GNU mirror (see <ulink url="http://www.gnu.org/order/ftp.html"> http://www.gnu.org/order/ftp.html</ulink> for a list) or at <ulink url="ftp://ftp.gnu.org/gnu"> ftp://ftp.gnu.org/gnu</ulink> .)
-
- <listitem><Para> If you need to obtain PostgreSQL source, you can download it from your favorite PostgreSQL mirror (see <ulink url="http://www.postgresql.org/mirrors-www.html"> http://www.postgresql.org/mirrors-www.html </ulink> for a list), or via <ulink url="http://bt.postgresql.org/"> BitTorrent</ulinK>.
+<listitem><Para> GNU make. Other make programs will not work. GNU
+make is often installed under the name <command/gmake/; this document
+will therefore always refer to it by that name. (On Linux-based
+systems GNU make is typically the default make, and is called
+<command/make/) To test to see if your make is GNU make enter
+<command/make --version/. Version 3.76 or later will suffice; previous
+versions may not.
+
+<listitem><Para> You need an ISO/ANSI C compiler. Recent versions of
+<application/GCC/ work.
+
+<listitem><Para> You also need a recent version of PostgreSQL
+<emphasis/source/. Slony-I depends on namespace support so you must
+have version 7.3.3 or newer to be able to build and use Slony-I. Rod
+Taylor has <quote/hacked up/ a version of Slony-I that works with
+version 7.2; if you desperately need that, look for him on the <ulink
+url="http://www.postgresql.org/lists.html"> PostgreSQL Hackers mailing
+list</ulink>. It is not anticipated that 7.2 will be supported by any
+official <application/Slony-I/ release.
+
+<listitem><Para> GNU packages may be included in the standard
+packaging for your operating system, or you may need to look for
+source code at your local GNU mirror (see <ulink
+url="http://www.gnu.org/order/ftp.html">
+http://www.gnu.org/order/ftp.html</ulink> for a list) or at <ulink
+url="ftp://ftp.gnu.org/gnu"> ftp://ftp.gnu.org/gnu</ulink> .)
+
+<listitem><Para> If you need to obtain PostgreSQL source, you can
+download it from your favorite PostgreSQL mirror (see <ulink
+url="http://www.postgresql.org/mirrors-www.html">
+http://www.postgresql.org/mirrors-www.html </ulink> for a list), or
+via <ulink url="http://bt.postgresql.org/"> BitTorrent</ulink>.
</itemizedlist>
-<para>Also check to make sure you have sufficient disk space. You will need
-approximately 5MB for the source tree during build and installation.
+<para>Also check to make sure you have sufficient disk space. You
+will need approximately 5MB for the source tree during build and
+installation.
<sect2><title/ Getting Slony-I Source/
<para>You can get the Slony-I source from <ulink
url="http://developer.postgresql.org/~wieck/slony1/download/">
-http://developer.postgresql.org/~wieck/slony1/download/</ulinK>
+http://developer.postgresql.org/~wieck/slony1/download/</ulink>
+</para>
-<sect2><title/ Time Synchronization/
+</sect2>
-<para> All the servers used within the replication cluster need to have their Real Time Clocks in sync. This is to ensure that slon doesn't error with messages indicating that slave is already ahead of the master during replication. We recommend you have ntpd running on all nodes, with subscriber nodes using the "master" provider node as their time server.
+<sect2><title/ Time Synchronization/
-<para> It is possible for Slony-I to function even in the face of there being some time discrepancies, but having systems "in sync" is usually pretty important for distributed applications.
+<para> All the servers used within the replication cluster need to
+have their Real Time Clocks in sync. This is to ensure that slon
+doesn't error out with messages indicating that a slave is already ahead of
+the master during replication. We recommend you have ntpd running on
+all nodes, with subscriber nodes using the <quote/master/ provider
+node as their time server.
+
+<para> It is possible for Slony-I to function even in the face of
+there being some time discrepancies, but having systems <quote/in
+sync/ is usually pretty important for distributed applications.
<Para> See <ulink url="http://www.ntp.org/"> www.ntp.org </ulink> for
more details about NTP (Network Time Protocol).
<sect2><title/ Network Connectivity/
-<para>It is necessary that the hosts that are to replicate between one another have <emphasis/bidirectional/ network communications to the PostgreSQL instances. That is, if node B is replicating data from node A, it is necessary that there be a path from A to B and from B to A. It is recommended that all nodes in a Slony-I cluster allow this sort of bidirection communications from any node in the cluster to any other node in the cluster.
+<para>It is necessary that the hosts that are to replicate between one
+another have <emphasis/bidirectional/ network communications to the
+PostgreSQL instances. That is, if node B is replicating data from
+node A, it is necessary that there be a path from A to B and from B to
+A. It is recommended that all nodes in a Slony-I cluster allow this
+sort of bidirectional communications from any node in the cluster to any
+other node in the cluster.
<para>Note that the network addresses need to be consistent across all
-of the nodes. Thus, if there is any need to use a "public" address
-for a node, to allow remote/VPN access, that "public" address needs to
-be able to be used consistently throughout the Slony-I cluster, as the
-address is propagated throughout the cluster in table <envar/sl_path/.
+of the nodes. Thus, if there is any need to use a <quote/public/
+address for a node, to allow remote/VPN access, that <quote/public/
+address needs to be able to be used consistently throughout the
+Slony-I cluster, as the address is propagated throughout the cluster
+in table <envar/sl_path/.
<para>A possible workaround for this, in environments where firewall
rules are particularly difficult to implement, may be to establish
@@ -59,29 +117,47 @@
through IP address 127.0.0.1, with a different port for each
destination.
-<para> Note that slonik and the slon instances need no special
-connections to communicate with one another; they just need to be able
-to get access to the PostgreSQL databases.
-
-<para> An implication of the communications model is that the extended
-network in which a Slony-I cluster operates must be able to be treated
-as being secure. If there is a remote location where you cannot trust
-the Slony-I node to be considered "secured," this represents a
-vulnerability that affects <emphasis/all/ the nodes throughout the
+<para> Note that <application/slonik/ and the <application/slon/
+instances need no special connections or protocols to communicate with
+one another; they just need to be able to get access to the
+<application/PostgreSQL/ databases, connecting as a <quote/superuser/.
+
+<para> An implication of the communications model is that the entire
+extended network in which a Slony-I cluster operates must be able to
+be treated as being secure. If there is a remote location where you
+cannot trust the Slony-I node to be considered <quote/secured,/ this
+represents a vulnerability that adversely affects the security of the entire
cluster. In effect, the security policies throughout the cluster can
only be considered as stringent as those applied at the
<emphasis/weakest/ link. Running a full-blown Slony-I node at a
-branch location that can't be kept secure compromises security for
-<emphasis/every node/ in the cluster.
+branch location that can't be kept secure compromises security for the
+cluster.
<para>In the future plans is a feature whereby updates for a
particular replication set would be serialized via a scheme called
-"log shipping." The data stored in sl_log_1 and sl_log_2 would be
-written out to log files on disk. These files could be transmitted in
-any manner desired, whether via scp, FTP, burning them onto DVD-ROMs
-and mailing them, or even by recording them on a USB "flash device"
-and attaching them to birds, allowing a sort of "avian transmission
-protocol." This will allow one way communications so that
-"subscribers" that use log shipping would have no need for access to
-other Slony-I nodes.
-
+<quote/log shipping./ The data stored in sl_log_1 and sl_log_2 would
+be written out to log files on disk. These files could be transmitted
+in any manner desired, whether via scp, FTP, burning them onto
+DVD-ROMs and mailing them, or even by recording them on a USB
+<quote/flash device/ and attaching them to birds, allowing a sort of
+<quote/avian transmission protocol./ This will allow one way
+communications so that <quote/subscribers/ that use log shipping would
+have no need for access to other Slony-I nodes.
+</sect1>
+
+<!-- Keep this comment at the end of the file
+Local variables:
+mode:sgml
+sgml-omittag:nil
+sgml-shorttag:t
+sgml-minimize-attributes:nil
+sgml-always-quote-attributes:t
+sgml-indent-step:1
+sgml-indent-data:t
+sgml-parent-document:nil
+sgml-default-dtd-file:"./reference.ced"
+sgml-exposed-tags:nil
+sgml-local-catalogs:("/usr/lib/sgml/catalog")
+sgml-local-ecat-files:nil
+End:
+-->
\ No newline at end of file
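The SSH-tunnel workaround mentioned in the prerequisites.sgml hunks above can be sketched as follows; the host name, local port and account are assumptions.

    #!/bin/sh
    # Forward local port 15432 to PostgreSQL (5432) on the remote node.
    ssh -N -L 15432:127.0.0.1:5432 slony@remote-node &

    # slon/slonik on this host would then use a conninfo such as:
    #   'dbname=testdb host=127.0.0.1 port=15432 user=slony'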
Index: SlonyIAdministration.txt
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/SlonyIAdministration.txt,v
retrieving revision 1.2
retrieving revision 1.3
diff -Ldoc/adminguide/SlonyIAdministration.txt -Ldoc/adminguide/SlonyIAdministration.txt -u -w -r1.2 -r1.3
--- doc/adminguide/SlonyIAdministration.txt
+++ doc/adminguide/SlonyIAdministration.txt
@@ -1,32 +1 @@
-%META:TOPICINFO{author="guest" date="1100276070" format="1.0" version="1.7"}%
-%META:TOPICPARENT{name="WebHome"}%
----++ Slony-I Replication
-
-Slony-I is a replication system for use with [[http://www.postgresql.org PostgreSQL]].
-
- * SlonyIntroduction - Introduction to Slony - how it works (triggers) what it can do; what it cannot do
- * SlonyPrerequisites - Software and network stuff you need as prerequisites
- * SlonyInstallation - Compiling it, installing and making sure all components are there
- * SlonySlonik - an introduction to what Slonik is about
- * SlonyConcepts - Clusters, nodes, sets
- * SlonyDefineCluster - defining the network of nodes (includes some "best practices" on numbering them)
- * SlonyDefineSet - defining sets of tables/sequences to be replicated (make sure YOU define a primary key!)
- * SlonyAdministrationScripts - the "altperl" tools
- * SlonyStartSlons - About the slon daemons - where should slon run?
- * SlonySlonConfiguration - what are the options, how should they be chosen
- * SlonySubscribeNodes
- * SlonyMonitoring - what to expect in the logs, scripts to monitor things
- * SlonyMaintenance - making sure things are vacuumed; making sure slons run; reading logs
- * SlonyReshapingCluster - moving nodes, moving master between nodes
- * SlonyHandlingFailover - Switchover and Failover
- * SlonyListenPaths - care and feeding of sl_listen
- * SlonyAddThings - adding tables, sequences, nodes
- * SlonyDropThings - dropping tables, sequences, nodes
- * SlonyDDLChanges - changing database schema
- * SlonyHowtoFirstTry - Replicating Your First Database
- * SlonyHelp - where to go for more help
- * SlonyFAQ - Frequently Asked Questions
-
-Note to would-be authors: If you wish to add additional sections, please feel free. It would be preferable if new material maintained the prefix "Slony" as my backup scripts look specifically for that file prefix.
-
-This set of documentation will, once it is more complete, be transformed into DocBook and added into the Slony-I documentation tree. As a result, please contribute only if you are willing for it to be added there under the traditional BSD-style license used by PostgreSQL...
+Moved to SGML
Index: SlonyMaintenance.txt
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/SlonyMaintenance.txt,v
retrieving revision 1.2
retrieving revision 1.3
diff -Ldoc/adminguide/SlonyMaintenance.txt -Ldoc/adminguide/SlonyMaintenance.txt -u -w -r1.2 -r1.3
--- doc/adminguide/SlonyMaintenance.txt
+++ doc/adminguide/SlonyMaintenance.txt
@@ -1,41 +1 @@
-%META:TOPICINFO{author="guest" date="1100558890" format="1.0" version="1.5"}%
-%META:TOPICPARENT{name="SlonyIAdministration"}%
----++ Slony-I Maintenance
-
-Slony-I actually does most of its necessary maintenance itself, in a "cleanup" thread:
-
- * Deletes old data from various tables in the Slony-I cluster's namespace, notably entries in sl_log_1, sl_log_2 (not yet used), and sl_seqlog.
-
- * Vacuum certain tables used by Slony-I. As of 1.0.5, this includes pg_listener; in earlier versions, you must vacuum that table heavily, otherwise you'll find replication slowing down because Slony-I raises plenty of events, which leads to that table having plenty of dead tuples.
-
- In some versions (1.1, for sure; possibly 1.0.5) there is the option of not bothering to vacuum any of these tables if you are using something like pg_autovacuum to handle vacuuming of these tables. Unfortunately, it has been quite possible for pg_autovacuum to not vacuum quite frequently enough, so you probably want to use the internal vacuums. Vacuuming pg_listener "too often" isn't nearly as hazardous as not vacuuming it frequently enough.
-
- Unfortunately, if you have long-running transactions, vacuums cannot clear out dead tuples that are newer than the eldest transaction that is still running. This will most notably lead to pg_listener growing large and will slow replication.
-
----++ Watchdogs: Keeping Slons Running
-
-There are a couple of "watchdog" scripts available that monitor things, and restart the slon processes should they happen to die for some reason, such as a network "glitch" that causes loss of connectivity.
-
-You might want to run them...
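-
-If you would rather roll your own, the core idea fits in a few lines of shell; this is only a sketch (the bundled slon_watchdog.pl scripts do rather more, and the cluster name, conninfo, and log path below are placeholders):
-
-<verbatim>
-#!/bin/sh
-# restart slon if it is no longer running (assumes a single slon on this box)
-CLUSTERNAME=slony_example
-CONNINFO="dbname=pgbench host=localhost user=pgsql"
-while true; do
-    if ! ps -ef | grep '[s]lon ' >/dev/null; then
-        slon $CLUSTERNAME "$CONNINFO" >>/var/log/slon.log 2>&1 &
-    fi
-    sleep 30
-done
-</verbatim>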
-
----++ Alternative to Watchdog: generate_syncs.sh
-
-A new script for Slony-I 1.1 is "generate_syncs.sh", which addresses the following kind of situation.
-
-Supposing you have some possibly-flakey slon daemon that might not run all the time, you might return from a weekend away only to discover the following situation...
-
-On Friday night, something went "bump" and, while the database came back up, none of the slon daemons survived. Your online application then saw nearly three days' worth of heavy transactions.
-
-When you restart slon on Monday, it hasn't done a SYNC on the master since Friday, so that the next "SYNC set" comprises all of the updates between Friday and Monday. Yuck.
-
-If you run generate_syncs.sh as a cron job every 20 minutes, it will force in a periodic SYNC on the "master" server, which means that between Friday and Monday, the numerous updates are split into more than 100 syncs, which can be applied incrementally, making the cleanup a lot less unpleasant.
-
-Note that if SYNCs _are_ running regularly, this script won't bother doing anything.
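-
-A crontab entry for that might look something like this (the install path and log location are, of course, site-specific):
-
-<verbatim>
-# force a SYNC on the origin every 20 minutes if none has happened lately
-0,20,40 * * * * /usr/local/slony1/bin/generate_syncs.sh >>/var/log/slony1/generate_syncs.log 2>&1
-</verbatim>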
-
----++ Log Files
-
-Slon daemons generate more-or-less verbose log files, depending on what debugging level is turned on. You may wish to do some combination of the following:
-
- * Use a log rotator like Apache rotatelogs to have a sequence of log files so that no one of them gets too big;
-
- * Purge out old log files, periodically.
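-
-For the rotatelogs approach, slon's output can simply be piped through rotatelogs when the daemon is started; a sketch (the rotatelogs path, log directory, and 24-hour rotation interval are placeholders):
-
-<verbatim>
-slon $CLUSTERNAME "dbname=$MASTERDBNAME user=$REPLICATIONUSER host=$MASTERHOST" 2>&1 | \
-    /usr/local/apache/bin/rotatelogs /var/log/slony1/node1_%Y%m%d.log 86400 &
-</verbatim>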
+Moved to SGML
Index: SlonyFAQ02.txt
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/SlonyFAQ02.txt,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/SlonyFAQ02.txt -Ldoc/adminguide/SlonyFAQ02.txt -u -w -r1.1 -r1.2
--- doc/adminguide/SlonyFAQ02.txt
+++ doc/adminguide/SlonyFAQ02.txt
@@ -1,20 +1 @@
-%META:TOPICINFO{author="guest" date="1099541541" format="1.0" version="1.2"}%
-%META:TOPICPARENT{name="SlonyFAQ"}%
----++ Some events moving around, but no replication
-
-Slony logs might look like the following:
-
-<verbatim>
-DEBUG1 remoteListenThread_1: connected to 'host=host004 dbname=pgbenchrep user=postgres port=5432'
-ERROR remoteListenThread_1: "select ev_origin, ev_seqno, ev_timestamp, ev_minxid, ev_maxxid, ev_xip, ev_type, ev_data1, ev_data2, ev_data3, ev_data4, ev_data5, ev_data6, ev_data7, ev_data8 from "_pgbenchtest".sl_event e where (e.ev_origin = '1' and e.ev_seqno > '1') order by e.ev_origin, e.ev_seqno" - could not receive data from server: Operation now in progress
-</verbatim>
-
-On AIX and Solaris (and possibly elsewhere), both Slony-I _and PostgreSQL_ must be compiled with the --enable-thread-safety option. The above error results when PostgreSQL isn't so compiled.
-
-What breaks here is that the libc (threadsafe) and libpq (non-threadsafe) use different memory locations for errno, thereby leading to the request failing.
-
-Problems like this crop up with dismaying regularity on AIX and Solaris; it may take something of an "object code audit" to make sure that _ALL_ of the necessary components have been compiled and linked with --enable-thread-safety.
-
-For instance, I once ran into the problem when LD_LIBRARY_PATH had been set, on Solaris, to point to libraries from an old PostgreSQL compile.
-That meant that even though the database had been compiled with --enable-thread-safety, and slon had been compiled against that, slon was being dynamically linked to the "bad old thread-unsafe version," so slon didn't work. It wasn't clear that this was the case until I ran "ldd" against slon.
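-
-A quick way to audit this (a sketch; the slon path is a placeholder, and not every pg_config version knows --configure) is to check how PostgreSQL was built and which libpq slon will actually load:
-
-<verbatim>
-# how was PostgreSQL configured?
-pg_config --configure | grep -- --enable-thread-safety
-
-# which libpq will slon be dynamically linked against at runtime?
-ldd /usr/local/pgsql/bin/slon | grep libpq
-</verbatim>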
-
+Moved to SGML
Index: SlonyListenerCosts.txt
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/SlonyListenerCosts.txt,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/SlonyListenerCosts.txt -Ldoc/adminguide/SlonyListenerCosts.txt -u -w -r1.1 -r1.2
--- doc/adminguide/SlonyListenerCosts.txt
+++ doc/adminguide/SlonyListenerCosts.txt
@@ -1,14 +1 @@
-%META:TOPICINFO{author="guest" date="1099541111" format="1.0" version="1.1"}%
-%META:TOPICPARENT{name="SlonyIntroduction"}%
----++ Slony-I Communications Costs
-
-The cost of communications grows in a quadratic fashion in several directions as the number of replication nodes in a cluster increases. Note the following relationships:
-
- * It is necessary to have a sl_path entry allowing connection from each node to every other node. Most will normally not need to be used for a given replication configuration, but this means that there need to be n(n-1) paths. It is probable that there will be considerable repetition of entries, since the path to "node n" is likely to be the same from everywhere in the network.
-
- * It is similarly necessary to have a sl_listen entry indicating how data flows from every node to every other node. This again requires configuring n(n-1) "listener paths."
-
- * Each SYNC applied needs to be reported back to all of the other nodes participating in the set so that the nodes all know that it is safe to purge sl_log_1 and sl_log_2 data, as any "forwarding" node may take over as "master" at any time. One might expect SYNC messages to need to travel through n/2 nodes to get propagated to their destinations; this means that each SYNC is expected to get transmitted n(n/2) times. Again, this points to a quadratic growth in communications costs as the number of nodes increases.
-
-This all points to it being a bad idea to let the number of nodes, and hence the communications network, grow large. Up to a half dozen nodes seems pretty reasonable; every time the number of nodes doubles, communications overheads can be expected to roughly quadruple.
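-
-To put numbers to that: with 6 nodes there are 6 x 5 = 30 paths and 30 listener entries, and each SYNC is expected to be transmitted on the order of 6 x 3 = 18 times; at 12 nodes that becomes 132 paths/listeners and roughly 72 transmissions per SYNC, i.e. about four times the overhead for twice the nodes.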
-
+Moved to SGML
--- /dev/null
+++ doc/adminguide/slony.html
@@ -0,0 +1,685 @@
+<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN""http://www.w3.org/TR/html4/loose.dtd">
+<HTML
+><HEAD
+><TITLE
+>Slony-I 1.1 Administration</TITLE
+><META
+NAME="GENERATOR"
+CONTENT="Modular DocBook HTML Stylesheet Version 1.79"><LINK
+REV="MADE"
+HREF="mailto:cbbrowne at gmail.com"><LINK
+REL="NEXT"
+HREF="t24.html"><LINK
+REL="STYLESHEET"
+TYPE="text/css"
+HREF="stdstyle.css"><META
+HTTP-EQUIV="Content-Type"></HEAD
+><BODY
+CLASS="BOOK"
+BGCOLOR="#FFFFFF"
+TEXT="#000000"
+LINK="#0000FF"
+VLINK="#840084"
+ALINK="#0000FF"
+><DIV
+CLASS="BOOK"
+><A
+NAME="SLONY"
+></A
+><DIV
+CLASS="TITLEPAGE"
+><H1
+CLASS="TITLE"
+><A
+NAME="SLONY"
+>Slony-I 1.1 Administration</A
+></H1
+><H3
+CLASS="CORPAUTHOR"
+>The Slony Global Development Group</H3
+><H3
+CLASS="AUTHOR"
+><A
+NAME="AEN5"
+></A
+>Christopher Browne</H3
+><P
+CLASS="COPYRIGHT"
+>Copyright © 2004-2005 The PostgreSQL Global Development Group</P
+><DIV
+CLASS="LEGALNOTICE"
+><A
+NAME="LEGALNOTICE"
+></A
+><P
+><B
+>Legal Notice</B
+></P
+><P
+> <SPAN
+CLASS="PRODUCTNAME"
+>PostgreSQL</SPAN
+> is Copyright © 2004-2005
+ by the PostgreSQL Global Development Group and is distributed under
+ the terms of the license of the University of California below.
+ </P
+><P
+> Permission to use, copy, modify, and distribute this software and
+ its documentation for any purpose, without fee, and without a
+ written agreement is hereby granted, provided that the above
+ copyright notice and this paragraph and the following two paragraphs
+ appear in all copies.
+ </P
+><P
+> IN NO EVENT SHALL THE UNIVERSITY OF CALIFORNIA BE LIABLE TO ANY
+ PARTY FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL
+ DAMAGES, INCLUDING LOST PROFITS, ARISING OUT OF THE USE OF THIS
+ SOFTWARE AND ITS DOCUMENTATION, EVEN IF THE UNIVERSITY OF CALIFORNIA
+ HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ </P
+><P
+> THE UNIVERSITY OF CALIFORNIA SPECIFICALLY DISCLAIMS ANY WARRANTIES,
+ INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+ MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE SOFTWARE
+ PROVIDED HEREUNDER IS ON AN <SPAN
+CLASS="QUOTE"
+>"AS-IS"</SPAN
+> BASIS, AND THE
+ UNIVERSITY OF CALIFORNIA HAS NO OBLIGATIONS TO PROVIDE MAINTENANCE,
+ SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.
+ </P
+><P
+> Note that <SPAN
+CLASS="TRADEMARK"
+>UNIX</SPAN
+>™ is a registered trademark of The
+ Open Group. <SPAN
+CLASS="TRADEMARK"
+>Windows</SPAN
+>™ is a registered trademark of
+ Microsoft Corporation in the United States and other countries.
+ <SPAN
+CLASS="TRADEMARK"
+>Solaris</SPAN
+>™ is a registered trademark of Sun Microsystems,
+ Inc. <SPAN
+CLASS="TRADEMARK"
+>Linux</SPAN
+>™ is a trademark of Linus Torvalds. </P
+></DIV
+><HR></DIV
+><DIV
+CLASS="TOC"
+><DL
+><DT
+><B
+>Table of Contents</B
+></DT
+><DT
+><A
+HREF="t24.html"
+></A
+></DT
+><DD
+><DL
+><DT
+>1. <A
+HREF="t24.html#INTRODUCTION"
+>Introduction to Slony-I</A
+></DT
+><DD
+><DL
+><DT
+>1.1. <A
+HREF="t24.html#AEN28"
+>Why yet another replication system?</A
+></DT
+><DT
+>1.2. <A
+HREF="t24.html#AEN31"
+>What Slony-I is</A
+></DT
+><DT
+>1.3. <A
+HREF="t24.html#AEN42"
+>Slony-I is not</A
+></DT
+><DT
+>1.4. <A
+HREF="t24.html#AEN48"
+>Why doesn't Slony-I do automatic fail-over/promotion?</A
+></DT
+><DT
+>1.5. <A
+HREF="t24.html#AEN53"
+>Current Limitations</A
+></DT
+><DT
+>1.6. <A
+HREF="t24.html#SLONYLISTENERCOSTS"
+>Slony-I Communications
+Costs</A
+></DT
+></DL
+></DD
+><DT
+>2. <A
+HREF="requirements.html"
+>Requirements</A
+></DT
+><DD
+><DL
+><DT
+>2.1. <A
+HREF="requirements.html#AEN109"
+>Software needed</A
+></DT
+><DT
+>2.2. <A
+HREF="requirements.html#AEN136"
+>Getting Slony-I Source</A
+></DT
+><DT
+>2.3. <A
+HREF="requirements.html#AEN140"
+>Time Synchronization</A
+></DT
+><DT
+>2.4. <A
+HREF="requirements.html#AEN148"
+>Network Connectivity</A
+></DT
+></DL
+></DD
+><DT
+>3. <A
+HREF="installation.html"
+>Slony-I Installation</A
+></DT
+><DD
+><DL
+><DT
+>3.1. <A
+HREF="installation.html#AEN176"
+>Short Version</A
+></DT
+><DT
+>3.2. <A
+HREF="installation.html#AEN182"
+>Configuration</A
+></DT
+><DT
+>3.3. <A
+HREF="installation.html#AEN185"
+>Example</A
+></DT
+><DT
+>3.4. <A
+HREF="installation.html#AEN191"
+>Build</A
+></DT
+><DT
+>3.5. <A
+HREF="installation.html#AEN198"
+>Installing Slony-I</A
+></DT
+></DL
+></DD
+><DT
+>4. <A
+HREF="slonik.html"
+>Slonik</A
+></DT
+><DD
+><DL
+><DT
+>4.1. <A
+HREF="slonik.html#AEN206"
+>Introduction</A
+></DT
+><DT
+>4.2. <A
+HREF="slonik.html#AEN209"
+>General outline</A
+></DT
+></DL
+></DD
+><DT
+>5. <A
+HREF="concepts.html"
+>Slony-I Concepts</A
+></DT
+><DD
+><DL
+><DT
+>5.1. <A
+HREF="concepts.html#AEN231"
+>Cluster</A
+></DT
+><DT
+>5.2. <A
+HREF="concepts.html#AEN237"
+>Node</A
+></DT
+><DT
+>5.3. <A
+HREF="concepts.html#AEN250"
+>Replication Set</A
+></DT
+><DT
+>5.4. <A
+HREF="concepts.html#AEN254"
+>Provider and Subscriber</A
+></DT
+></DL
+></DD
+><DT
+>6. <A
+HREF="cluster.html"
+>Defining Slony-I Clusters</A
+></DT
+><DT
+>7. <A
+HREF="x267.html"
+>Defining Slony-I Replication Sets</A
+></DT
+><DD
+><DL
+><DT
+>7.1. <A
+HREF="x267.html#AEN278"
+>Primary Keys</A
+></DT
+><DT
+>7.2. <A
+HREF="x267.html#AEN303"
+>Grouping tables into sets</A
+></DT
+></DL
+></DD
+><DT
+>8. <A
+HREF="altperl.html"
+>Slony-I Administration Scripts</A
+></DT
+><DD
+><DL
+><DT
+>8.1. <A
+HREF="altperl.html#AEN314"
+>Node/Cluster Configuration - cluster.nodes</A
+></DT
+><DT
+>8.2. <A
+HREF="altperl.html#AEN342"
+>Set configuration - cluster.set1, cluster.set2</A
+></DT
+><DT
+>8.3. <A
+HREF="altperl.html#AEN362"
+>build_env.pl</A
+></DT
+><DT
+>8.4. <A
+HREF="altperl.html#AEN375"
+>create_set.pl</A
+></DT
+><DT
+>8.5. <A
+HREF="altperl.html#AEN380"
+>drop_node.pl</A
+></DT
+><DT
+>8.6. <A
+HREF="altperl.html#AEN383"
+>drop_set.pl</A
+></DT
+><DT
+>8.7. <A
+HREF="altperl.html#AEN387"
+>failover.pl</A
+></DT
+><DT
+>8.8. <A
+HREF="altperl.html#AEN390"
+>init_cluster.pl</A
+></DT
+><DT
+>8.9. <A
+HREF="altperl.html#AEN393"
+>merge_sets.pl</A
+></DT
+><DT
+>8.10. <A
+HREF="altperl.html#AEN396"
+>move_set.pl</A
+></DT
+><DT
+>8.11. <A
+HREF="altperl.html#AEN399"
+>replication_test.pl</A
+></DT
+><DT
+>8.12. <A
+HREF="altperl.html#AEN402"
+>restart_node.pl</A
+></DT
+><DT
+>8.13. <A
+HREF="altperl.html#AEN405"
+>restart_nodes.pl</A
+></DT
+><DT
+>8.14. <A
+HREF="altperl.html#AEN408"
+>show_configuration.pl</A
+></DT
+><DT
+>8.15. <A
+HREF="altperl.html#AEN412"
+>slon_kill.pl</A
+></DT
+><DT
+>8.16. <A
+HREF="altperl.html#AEN415"
+>slon_pushsql.pl</A
+></DT
+><DT
+>8.17. <A
+HREF="altperl.html#AEN418"
+>slon_start.pl</A
+></DT
+><DT
+>8.18. <A
+HREF="altperl.html#AEN421"
+>slon_watchdog.pl</A
+></DT
+><DT
+>8.19. <A
+HREF="altperl.html#AEN424"
+>slon_watchdog2.pl</A
+></DT
+><DT
+>8.20. <A
+HREF="altperl.html#AEN428"
+>subscribe_set.pl</A
+></DT
+><DT
+>8.21. <A
+HREF="altperl.html#AEN431"
+>uninstall_nodes.pl</A
+></DT
+><DT
+>8.22. <A
+HREF="altperl.html#AEN434"
+>unsubscribe_set.pl</A
+></DT
+><DT
+>8.23. <A
+HREF="altperl.html#AEN437"
+>update_nodes.pl</A
+></DT
+></DL
+></DD
+><DT
+>9. <A
+HREF="slonstart.html"
+>Slon daemons</A
+></DT
+><DT
+>10. <A
+HREF="slonconfig.html"
+>Slon Configuration Options</A
+></DT
+><DT
+>11. <A
+HREF="subscribenodes.html"
+>Subscribing Nodes</A
+></DT
+><DT
+>12. <A
+HREF="monitoring.html"
+>Monitoring</A
+></DT
+><DD
+><DL
+><DT
+>12.1. <A
+HREF="monitoring.html#AEN565"
+>CONFIG notices</A
+></DT
+><DT
+>12.2. <A
+HREF="monitoring.html#AEN571"
+>DEBUG Notices</A
+></DT
+></DL
+></DD
+><DT
+>13. <A
+HREF="maintenance.html"
+>Slony-I Maintenance</A
+></DT
+><DD
+><DL
+><DT
+>13.1. <A
+HREF="maintenance.html#AEN587"
+>Watchdogs: Keeping Slons Running</A
+></DT
+><DT
+>13.2. <A
+HREF="maintenance.html#AEN591"
+>Alternative to Watchdog: generate_syncs.sh</A
+></DT
+><DT
+>13.3. <A
+HREF="maintenance.html#AEN600"
+>Log Files</A
+></DT
+></DL
+></DD
+><DT
+>14. <A
+HREF="reshape.html"
+>Reshaping a Cluster</A
+></DT
+><DT
+>15. <A
+HREF="failover.html"
+>Doing switchover and failover with Slony-I</A
+></DT
+><DD
+><DL
+><DT
+>15.1. <A
+HREF="failover.html#AEN622"
+>Foreword</A
+></DT
+><DT
+>15.2. <A
+HREF="failover.html#AEN627"
+>Switchover</A
+></DT
+><DT
+>15.3. <A
+HREF="failover.html#AEN642"
+>Failover</A
+></DT
+><DT
+>15.4. <A
+HREF="failover.html#AEN659"
+>After failover, getting back node1</A
+></DT
+></DL
+></DD
+><DT
+>16. <A
+HREF="listenpaths.html"
+>Slony Listen Paths</A
+></DT
+><DD
+><DL
+><DT
+>16.1. <A
+HREF="listenpaths.html#AEN666"
+>How Listening Can Break</A
+></DT
+><DT
+>16.2. <A
+HREF="listenpaths.html#AEN672"
+>How The Listen Configuration Should Look</A
+></DT
+><DT
+>16.3. <A
+HREF="listenpaths.html#AEN697"
+>Open Question</A
+></DT
+><DT
+>16.4. <A
+HREF="listenpaths.html#AEN700"
+>Generating listener entries via heuristics</A
+></DT
+></DL
+></DD
+><DT
+>17. <A
+HREF="addthings.html"
+>Adding Things to Replication</A
+></DT
+><DT
+>18. <A
+HREF="dropthings.html"
+>Dropping things from Slony Replication</A
+></DT
+><DD
+><DL
+><DT
+>18.1. <A
+HREF="dropthings.html#AEN729"
+>Dropping A Whole Node</A
+></DT
+><DT
+>18.2. <A
+HREF="dropthings.html#AEN737"
+>Dropping An Entire Set</A
+></DT
+><DT
+>18.3. <A
+HREF="dropthings.html#AEN750"
+>Unsubscribing One Node From One Set</A
+></DT
+><DT
+>18.4. <A
+HREF="dropthings.html#AEN762"
+>Dropping A Table From A Set</A
+></DT
+><DT
+>18.5. <A
+HREF="dropthings.html#AEN773"
+>Dropping A Sequence From A Set</A
+></DT
+></DL
+></DD
+><DT
+>19. <A
+HREF="ddlchanges.html"
+>Database Schema Changes (DDL)</A
+></DT
+><DT
+>20. <A
+HREF="firstdb.html"
+>Replicating Your First Database</A
+></DT
+><DD
+><DL
+><DT
+>20.1. <A
+HREF="firstdb.html#AEN867"
+>Creating the pgbenchuser</A
+></DT
+><DT
+>20.2. <A
+HREF="firstdb.html#AEN871"
+>Preparing the databases</A
+></DT
+><DT
+>20.3. <A
+HREF="firstdb.html#AEN885"
+>Configuring the Database for Replication.</A
+></DT
+></DL
+></DD
+><DT
+>21. <A
+HREF="help.html"
+>More Slony-I Help</A
+></DT
+><DT
+>22. <A
+HREF="x931.html"
+>Other Information Sources</A
+></DT
+></DL
+></DD
+><DT
+><A
+HREF="faq.html"
+></A
+></DT
+></DL
+></DIV
+></DIV
+><DIV
+CLASS="NAVFOOTER"
+><HR
+ALIGN="LEFT"
+WIDTH="100%"><TABLE
+SUMMARY="Footer navigation table"
+WIDTH="100%"
+BORDER="0"
+CELLPADDING="0"
+CELLSPACING="0"
+><TR
+><TD
+WIDTH="33%"
+ALIGN="left"
+VALIGN="top"
+> </TD
+><TD
+WIDTH="34%"
+ALIGN="center"
+VALIGN="top"
+> </TD
+><TD
+WIDTH="33%"
+ALIGN="right"
+VALIGN="top"
+><A
+HREF="t24.html"
+ACCESSKEY="N"
+>Next</A
+></TD
+></TR
+><TR
+><TD
+WIDTH="33%"
+ALIGN="left"
+VALIGN="top"
+> </TD
+><TD
+WIDTH="34%"
+ALIGN="center"
+VALIGN="top"
+> </TD
+><TD
+WIDTH="33%"
+ALIGN="right"
+VALIGN="top"
+></TD
+></TR
+></TABLE
+></DIV
+></BODY
+></HTML
+>
\ No newline at end of file
--- /dev/null
+++ doc/adminguide/ddlchanges.html
@@ -0,0 +1,273 @@
+<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN""http://www.w3.org/TR/html4/loose.dtd">
+<HTML
+><HEAD
+><TITLE
+>Database Schema Changes (DDL)</TITLE
+><META
+NAME="GENERATOR"
+CONTENT="Modular DocBook HTML Stylesheet Version 1.79"><LINK
+REV="MADE"
+HREF="mailto:cbbrowne at gmail.com"><LINK
+REL="HOME"
+TITLE="Slony-I 1.1 Administration"
+HREF="slony.html"><LINK
+REL="UP"
+HREF="t24.html"><LINK
+REL="PREVIOUS"
+TITLE=" Dropping things from Slony Replication"
+HREF="dropthings.html"><LINK
+REL="NEXT"
+TITLE="Replicating Your First Database"
+HREF="firstdb.html"><LINK
+REL="STYLESHEET"
+TYPE="text/css"
+HREF="stdstyle.css"><META
+HTTP-EQUIV="Content-Type"></HEAD
+><BODY
+CLASS="SECT1"
+BGCOLOR="#FFFFFF"
+TEXT="#000000"
+LINK="#0000FF"
+VLINK="#840084"
+ALINK="#0000FF"
+><DIV
+CLASS="NAVHEADER"
+><TABLE
+SUMMARY="Header navigation table"
+WIDTH="100%"
+BORDER="0"
+CELLPADDING="0"
+CELLSPACING="0"
+><TR
+><TH
+COLSPAN="3"
+ALIGN="center"
+>Slony-I 1.1 Administration</TH
+></TR
+><TR
+><TD
+WIDTH="10%"
+ALIGN="left"
+VALIGN="bottom"
+><A
+HREF="dropthings.html"
+ACCESSKEY="P"
+>Prev</A
+></TD
+><TD
+WIDTH="80%"
+ALIGN="center"
+VALIGN="bottom"
+></TD
+><TD
+WIDTH="10%"
+ALIGN="right"
+VALIGN="bottom"
+><A
+HREF="firstdb.html"
+ACCESSKEY="N"
+>Next</A
+></TD
+></TR
+></TABLE
+><HR
+ALIGN="LEFT"
+WIDTH="100%"></DIV
+><DIV
+CLASS="SECT1"
+><H1
+CLASS="SECT1"
+><A
+NAME="DDLCHANGES"
+>19. Database Schema Changes (DDL)</A
+></H1
+><P
+>When changes are made to the database schema, <SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>e.g.</I
+></SPAN
+> -
+adding fields to a table, it is necessary for this to be handled
+rather carefully, otherwise different nodes may get rather deranged
+because they disagree on how particular tables are built. </P
+><P
+>If you pass the changes through Slony-I via the <TT
+CLASS="COMMAND"
+>EXECUTE
+SCRIPT</TT
+> (slonik) / <CODE
+CLASS="FUNCTION"
+>ddlscript(set,script,node)</CODE
+> (stored
+function), this allows you to be certain that the changes take effect
+at the same point in the transaction streams on all of the nodes.
+That may not be too important if you can take something of an outage
+to do schema changes, but if you want to do upgrades that take place
+while transactions are still firing their way through your systems,
+it's necessary. </P
+><P
+>It's worth making a couple of comments on <SPAN
+CLASS="QUOTE"
+>"special things"</SPAN
+>
+about <TT
+CLASS="COMMAND"
+>EXECUTE SCRIPT</TT
+>:
+
+<P
+></P
+><UL
+><LI
+><P
+> The script must not contain transaction
+<TT
+CLASS="COMMAND"
+>BEGIN</TT
+> or <TT
+CLASS="COMMAND"
+>END</TT
+> statements, as the script is already
+executed inside a transaction. In PostgreSQL version 8, the
+introduction of nested transactions may change this requirement
+somewhat, but you must still remain aware that the actions in the
+script are wrapped inside a transaction. </P
+></LI
+><LI
+><P
+> If there is <SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>anything</I
+></SPAN
+> broken about the
+script, or about how it executes on a particular node, this will cause
+the slon daemon for that node to panic and crash. If you restart the
+node, it will, more likely than not, try to <SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>repeat</I
+></SPAN
+> the DDL
+script, which will, almost certainly, fail the second time just as it
+did the first time. I have found this scenario to lead to a need to
+go to the <SPAN
+CLASS="QUOTE"
+>"master"</SPAN
+> node to delete the event to stop it from
+continuing to fail. </P
+></LI
+><LI
+><P
+> For slon to, at that point, <SPAN
+CLASS="QUOTE"
+>"panic"</SPAN
+> is probably
+the <SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>correct</I
+></SPAN
+> answer, as it allows the DBA to head over to
+the database node that is broken, and manually fix things before
+cleaning out the defective event and restarting slon. You can be
+certain that the updates made <SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>after</I
+></SPAN
+> the DDL change on the
+provider node are queued up, waiting to head to the subscriber. You
+don't run the risk of there being updates made that depended on the
+DDL changes in order to be correct. </P
+></LI
+></UL
+> </P
+><P
+>Unfortunately, this nonetheless implies that the use of the DDL
+facility is somewhat fragile and dangerous. Making DDL changes should
+not be done in a sloppy or cavalier manner. If your applications do
+not have fairly stable SQL schemas, then using Slony-I for replication
+is likely to be fraught with trouble and frustration. </P
+><P
+>There is an article on how to manage Slony schema changes here:
+<A
+HREF="http://www.varlena.com/varlena/GeneralBits/88.php"
+TARGET="_top"
+>Varlena General Bits</A
+>
+
+
+ </P
+></DIV
+><DIV
+CLASS="NAVFOOTER"
+><HR
+ALIGN="LEFT"
+WIDTH="100%"><TABLE
+SUMMARY="Footer navigation table"
+WIDTH="100%"
+BORDER="0"
+CELLPADDING="0"
+CELLSPACING="0"
+><TR
+><TD
+WIDTH="33%"
+ALIGN="left"
+VALIGN="top"
+><A
+HREF="dropthings.html"
+ACCESSKEY="P"
+>Prev</A
+></TD
+><TD
+WIDTH="34%"
+ALIGN="center"
+VALIGN="top"
+><A
+HREF="slony.html"
+ACCESSKEY="H"
+>Home</A
+></TD
+><TD
+WIDTH="33%"
+ALIGN="right"
+VALIGN="top"
+><A
+HREF="firstdb.html"
+ACCESSKEY="N"
+>Next</A
+></TD
+></TR
+><TR
+><TD
+WIDTH="33%"
+ALIGN="left"
+VALIGN="top"
+>Dropping things from Slony Replication</TD
+><TD
+WIDTH="34%"
+ALIGN="center"
+VALIGN="top"
+><A
+HREF="t24.html"
+ACCESSKEY="U"
+>Up</A
+></TD
+><TD
+WIDTH="33%"
+ALIGN="right"
+VALIGN="top"
+>Replicating Your First Database</TD
+></TR
+></TABLE
+></DIV
+></BODY
+></HTML
+>
\ No newline at end of file
--- /dev/null
+++ doc/adminguide/cluster.html
@@ -0,0 +1,170 @@
+<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN""http://www.w3.org/TR/html4/loose.dtd">
+<HTML
+><HEAD
+><TITLE
+>Defining Slony-I Clusters</TITLE
+><META
+NAME="GENERATOR"
+CONTENT="Modular DocBook HTML Stylesheet Version 1.79"><LINK
+REV="MADE"
+HREF="mailto:cbbrowne at gmail.com"><LINK
+REL="HOME"
+TITLE="Slony-I 1.1 Administration"
+HREF="slony.html"><LINK
+REL="UP"
+HREF="t24.html"><LINK
+REL="PREVIOUS"
+TITLE="Slony-I Concepts"
+HREF="concepts.html"><LINK
+REL="NEXT"
+TITLE="Defining Slony-I Replication Sets"
+HREF="x267.html"><LINK
+REL="STYLESHEET"
+TYPE="text/css"
+HREF="stdstyle.css"><META
+HTTP-EQUIV="Content-Type"></HEAD
+><BODY
+CLASS="SECT1"
+BGCOLOR="#FFFFFF"
+TEXT="#000000"
+LINK="#0000FF"
+VLINK="#840084"
+ALINK="#0000FF"
+><DIV
+CLASS="NAVHEADER"
+><TABLE
+SUMMARY="Header navigation table"
+WIDTH="100%"
+BORDER="0"
+CELLPADDING="0"
+CELLSPACING="0"
+><TR
+><TH
+COLSPAN="3"
+ALIGN="center"
+>Slony-I 1.1 Administration</TH
+></TR
+><TR
+><TD
+WIDTH="10%"
+ALIGN="left"
+VALIGN="bottom"
+><A
+HREF="concepts.html"
+ACCESSKEY="P"
+>Prev</A
+></TD
+><TD
+WIDTH="80%"
+ALIGN="center"
+VALIGN="bottom"
+></TD
+><TD
+WIDTH="10%"
+ALIGN="right"
+VALIGN="bottom"
+><A
+HREF="x267.html"
+ACCESSKEY="N"
+>Next</A
+></TD
+></TR
+></TABLE
+><HR
+ALIGN="LEFT"
+WIDTH="100%"></DIV
+><DIV
+CLASS="SECT1"
+><H1
+CLASS="SECT1"
+><A
+NAME="CLUSTER"
+>6. Defining Slony-I Clusters</A
+></H1
+><P
+>A Slony-I cluster is the basic grouping of database instances in
+which replication takes place. It consists of a set of PostgreSQL
+database instances in which is defined a namespace specific to that
+cluster. </P
+><P
+>Each database instance in which replication is to take place is
+identified by a node number. </P
+><P
+>For a simple install, it may be reasonable for the "master" to
+be node #1, and for the "slave" to be node #2. </P
+><P
+>Some planning should be done, in more complex cases, to ensure that the numbering system is kept sane, lest the administrators be driven insane. The node numbers should be chosen to somehow correspond to the shape of the environment, as opposed to (say) the order in which nodes were initialized. </P
+><P
+>It may be that in version 1.1, nodes will also have a "name"
+attribute, so that they may be given more mnemonic names. In that
+case, the node numbers can be cryptic; it will be the node name that
+is used to organize the cluster.
+
+
+ </P
+></DIV
+><DIV
+CLASS="NAVFOOTER"
+><HR
+ALIGN="LEFT"
+WIDTH="100%"><TABLE
+SUMMARY="Footer navigation table"
+WIDTH="100%"
+BORDER="0"
+CELLPADDING="0"
+CELLSPACING="0"
+><TR
+><TD
+WIDTH="33%"
+ALIGN="left"
+VALIGN="top"
+><A
+HREF="concepts.html"
+ACCESSKEY="P"
+>Prev</A
+></TD
+><TD
+WIDTH="34%"
+ALIGN="center"
+VALIGN="top"
+><A
+HREF="slony.html"
+ACCESSKEY="H"
+>Home</A
+></TD
+><TD
+WIDTH="33%"
+ALIGN="right"
+VALIGN="top"
+><A
+HREF="x267.html"
+ACCESSKEY="N"
+>Next</A
+></TD
+></TR
+><TR
+><TD
+WIDTH="33%"
+ALIGN="left"
+VALIGN="top"
+>Slony-I Concepts</TD
+><TD
+WIDTH="34%"
+ALIGN="center"
+VALIGN="top"
+><A
+HREF="t24.html"
+ACCESSKEY="U"
+>Up</A
+></TD
+><TD
+WIDTH="33%"
+ALIGN="right"
+VALIGN="top"
+>Defining Slony-I Replication Sets</TD
+></TR
+></TABLE
+></DIV
+></BODY
+></HTML
+>
\ No newline at end of file
Index: SlonyFAQ07.txt
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/SlonyFAQ07.txt,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/SlonyFAQ07.txt -Ldoc/adminguide/SlonyFAQ07.txt -u -w -r1.1 -r1.2
--- doc/adminguide/SlonyFAQ07.txt
+++ doc/adminguide/SlonyFAQ07.txt
@@ -1,12 +1 @@
-%META:TOPICINFO{author="guest" date="1098326722" format="1.0" version="1.1"}%
-%META:TOPICPARENT{name="SlonyFAQ"}%
----++ Table indexes with FQ namespace names
-
-<verbatim>
-set add table (set id = 1, origin = 1, id = 27, full qualified name = 'nspace.some_table', key = 'key_on_whatever',
- comment = 'Table some_table in namespace nspace with a candidate primary key');
-</verbatim>
-
-If you have
- key = 'nspace.key_on_whatever'
-the request will FAIL.
+Moved to SGML
Index: SlonyIntroduction.txt
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/SlonyIntroduction.txt,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/SlonyIntroduction.txt -Ldoc/adminguide/SlonyIntroduction.txt -u -w -r1.1 -r1.2
--- doc/adminguide/SlonyIntroduction.txt
+++ doc/adminguide/SlonyIntroduction.txt
@@ -1,49 +1 @@
-%META:TOPICINFO{author="guest" date="1099539982" format="1.0" version="1.6"}%
-%META:TOPICPARENT{name="SlonyIAdministration"}%
----++ An Introduction to Slony-I
-
----+++ Why yet another replication system?
-
-Slony-I was born from an idea to create a replication system that was not tied
-to a specific version of PostgreSQL, and which could be started and stopped on
-an existing database without the need for a dump/reload cycle.
-
----+++ What Slony-I is
-
-Slony-I is a "master to multiple slaves" replication system supporting cascading and slave promotion. The big picture for the development of Slony-I is as a master-slave system that includes all features and capabilities needed to replicate large databases to a reasonably limited number of slave systems. "Reasonable," in this context, is probably no more than a few dozen servers. If the number of servers grows beyond that, the cost of communications becomes prohibitively high. SlonyListenerCosts
-
-Slony-I is a system intended for data centers and backup sites, where the normal mode of operation is that all nodes are available all the time, and where all nodes can be secured. If you have nodes that are likely to regularly drop onto and off of the network, or have nodes that cannot be kept secure, Slony-I may not be the ideal replication solution for you.
-
-There are plans for a "file-based log shipping" extension where updates would be serialized into files. Given that, log files could be distributed by any means desired without any need of feedback between the provider node and those nodes subscribing via "log shipping."
-
----+++ What Slony-I is not
-
-Slony-I is not a network management system.
-
-Slony-I does not have any functionality within it to detect a node failure, or automatically promote a node to a master or other data origin.
-
-Slony-I is not multi-master; it's not a connection broker, and it doesn't make you coffee and toast in the morning.
-
-(That being said, the plan is for a subsequent system, Slony-II, to provide "multimaster" capabilities, and be "bootstrapped" using Slony-I. But that is a separate project, and expectations for Slony-I should not be based on hopes for future projects.)
-
----+++ Why doesn't Slony-I do automatic fail-over/promotion?
-
-This is the job of network monitoring software, not Slony. Every site's configuration and fail-over path is different. For example, keep-alive
-monitoring with redundant NICs, and intelligent HA switches that guarantee race-condition-free takeover of a network address and disconnection of the
-"failed" node, vary with every network setup, vendor choice, and hardware/software combination. This is clearly the realm of network management software and not
-Slony-I.
-
-Let Slony-I do what it does best: provide database replication.
-
----+++ Current Limitations
-
-Slony-I does not automatically propagate schema changes, nor does it have any ability to replicate large objects. There is a single common reason for these limitations, namely that Slony-I operates using triggers, and neither schema changes nor large object operations can raise triggers suitable to tell Slony-I when those kinds of changes take place.
-
-There is a capability for Slony-I to propagate DDL changes if you submit them as scripts via the slonik EXECUTE SCRIPT operation. That is not "automatic;" you have to construct an SQL DDL script and submit it.
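-
-In outline, such a submission looks something like this (only a sketch; the cluster name, conninfo variables, set id, and script path are placeholders):
-
-<verbatim>
-#!/bin/sh
-slonik <<_EOF_
-    cluster name = $CLUSTERNAME;
-    node 1 admin conninfo = 'dbname=$MASTERDBNAME host=$MASTERHOST user=$REPLICATIONUSER';
-    node 2 admin conninfo = 'dbname=$SLAVEDBNAME host=$SLAVEHOST user=$REPLICATIONUSER';
-    # apply the DDL script at the same point in the event stream on every node
-    execute script (set id = 1, filename = '/tmp/add_column.sql', event node = 1);
-_EOF_
-</verbatim>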
-
-If you have those sorts of requirements, it may be worth exploring the use of PostgreSQL 8.0 PITR (Point In Time Recovery), where WAL logs are replicated to remote nodes. Unfortunately, that has two attendant limitations:
-
- * PITR replicates _all_ changes in _all_ databases; you cannot exclude data that isn't relevant;
- * A PITR replica remains dormant until you apply logs and start up the database. You cannot use the database and apply updates simultaneously. It is like having a "standby server" which cannot be used without it ceasing to be "standby."
-
-There are a number of distinct models for database replication; it is impossible for one replication system to be all things to all people.
+Moved to SGML
Index: SlonyDefineCluster.txt
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/SlonyDefineCluster.txt,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/SlonyDefineCluster.txt -Ldoc/adminguide/SlonyDefineCluster.txt -u -w -r1.1 -r1.2
--- doc/adminguide/SlonyDefineCluster.txt
+++ doc/adminguide/SlonyDefineCluster.txt
@@ -1,13 +1 @@
-%META:TOPICINFO{author="guest" date="1097952385" format="1.0" version="1.4"}%
-%META:TOPICPARENT{name="SlonyIAdministration"}%
----++ Defining Slony-I Clusters
-
-A Slony-I cluster is the basic grouping of database instances in which replication takes place. It consists of a set of PostgreSQL database instances in which is defined a namespace specific to that cluster.
-
-Each database instance in which replication is to take place is identified by a node number.
-
-For a simple install, it may be reasonable for the "master" to be node #1, and for the "slave" to be node #2.
-
-Some planning should be done, in more complex cases, to ensure that the numbering system is kept sane, lest the administrators be driven insane. The node numbers should be chosen to somehow correspond to the shape of the environment, as opposed to (say) the order in which nodes were initialized.
-
-It may be that in version 1.1, nodes will also have a "name" attribute, so that they may be given more mnemonic names. In that case, the node numbers can be cryptic; it will be the node name that is used to organize the cluster.
+Moved to SGML
Index: SlonyFAQ08.txt
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/SlonyFAQ08.txt,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/SlonyFAQ08.txt -Ldoc/adminguide/SlonyFAQ08.txt -u -w -r1.1 -r1.2
--- doc/adminguide/SlonyFAQ08.txt
+++ doc/adminguide/SlonyFAQ08.txt
@@ -1,57 +1 @@
-%META:TOPICINFO{author="guest" date="1098326832" format="1.0" version="1.1"}%
-%META:TOPICPARENT{name="SlonyFAQ"}%
----++ Subscription fails "transaction still in progress"
-
-I'm trying to get a slave subscribed, and get the following
-messages in the logs:
-
-<verbatim>
-DEBUG1 copy_set 1
-DEBUG1 remoteWorkerThread_1: connected to provider DB
-WARN remoteWorkerThread_1: transactions earlier than XID 127314958 are still in progress
-WARN remoteWorkerThread_1: data copy for set 1 failed - sleep 60 seconds
-</verbatim>
-
-Oops. What I forgot to mention, as well, was that I was trying to add
-TWO subscribers, concurrently.
-
-That doesn't work out: Slony-I won't work on the COPY commands
-concurrently. See src/slon/remote_worker.c, function copy_set()
-
-This has the (perhaps unfortunate) implication that you cannot
-populate two slaves concurrently. You have to subscribe one to the
-set, and only once it has completed setting up the subscription
-(copying table contents and such) can the second subscriber start
-setting up the subscription.
-
-It could also be possible for there to be an old outstanding
-transaction blocking Slony-I from processing the sync. You might want
-to take a look at pg_locks to see what's up:
-
-<verbatim>
-sampledb=# select * from pg_locks where transaction is not null order by transaction;
- relation | database | transaction | pid | mode | granted
-----------+----------+-------------+---------+---------------+---------
- | | 127314921 | 2605100 | ExclusiveLock | t
- | | 127326504 | 5660904 | ExclusiveLock | t
-(2 rows)
-</verbatim>
-
-See? 127314921 is indeed older than 127314958, and it's still running.
-
-<verbatim>
-$ ps -aef | egrep '[2]605100'
-postgres 2605100 205018 0 18:53:43 pts/3 3:13 postgres: postgres sampledb localhost COPY
-</verbatim>
-
-This happens to be a COPY transaction involved in setting up the
-subscription for one of the nodes. All is well; the system is busy
-setting up the first subscriber; it won't start on the second one
-until the first one has completed subscribing.
-
-By the way, if there is more than one database on the PostgreSQL
-cluster, and activity is taking place on the OTHER database, that will
-lead to there being "transactions earlier than XID whatever" being
-found to be still in progress. The fact that it's a separate database
-on the cluster is irrelevant; Slony-I will wait until those old
-transactions terminate.
+Moved to SGML
Index: SlonyDefineSet.txt
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/SlonyDefineSet.txt,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/SlonyDefineSet.txt -Ldoc/adminguide/SlonyDefineSet.txt -u -w -r1.1 -r1.2
--- doc/adminguide/SlonyDefineSet.txt
+++ doc/adminguide/SlonyDefineSet.txt
@@ -1,37 +1 @@
-%META:TOPICINFO{author="guest" date="1097702299" format="1.0" version="1.3"}%
-%META:TOPICPARENT{name="SlonyIAdministration"}%
----++ Defining Slony-I Replication Sets
-
-Defining the nodes indicated the shape of the cluster of database servers; it is now time to determine what data is to be copied between them. The groups of data that are copied are defined as "sets."
-
-A replication set consists of the following:
-
- * Keys on tables that are to be replicated that have no suitable primary key
-
- * Tables that are to be replicated
-
- * Sequences that are to be replicated
-
----+++ Primary Keys
-
-Slony-I *needs* to have a primary key on each table that is replicated. PK values are used as the primary identifier for each tuple that is modified in the source system. There are three ways that you can get Slony-I to use a primary key:
-
- * If the table has a formally identified primary key, SET ADD TABLE can be used without any need to reference the primary key. Slony-I will pick up that there is a primary key, and use it.
-
- * If the table hasn't got a primary key, but has some *candidate* primary key, that is, some index on a combination of fields that is UNIQUE and NOT NULL, then you can specify the key, as in
-
-<verbatim>
- SET ADD TABLE (set id = 1, origin = 1, id = 42, full qualified name = 'public.this_table', key = 'this_by_that', comment='this_table has this_by_that as a candidate primary key');
-</verbatim>
-
- Notice that while you need to specify the namespace for the table, you must /not/ specify the namespace for the key, as Slony-I infers the key's namespace from the table.
-
- * If the table hasn't even got a candidate primary key, you can ask Slony-I to provide one. This is done by first using TABLE ADD KEY to add a column populated using a Slony-I sequence, and then having the SET ADD TABLE include the directive key=serial, to indicate that Slony-I's own column should be used.
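-
- Put together, the two steps look something like this (the table, node, and id values are only placeholders):
-
-<verbatim>
-  table add key (node id = 1, fully qualified name = 'public.history');
-  set add table (set id = 1, origin = 1, id = 4, fully qualified name = 'public.history', comment = 'history table', key = serial);
-</verbatim>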
-
-It is not terribly important whether you pick a "true" primary key or a mere "candidate primary key;" it is, however, recommended that you have one of those rather than having Slony-I populate the PK column for you. If you don't have a suitable primary key, that means the table has no mechanism, from your application's standpoint, for keeping values unique; Slony-I may therefore introduce a new failure mode for your application, and the absence of a key also implies that your application already had a way of entering ambiguous data into the database.
-
----+++ Grouping tables into sets
-
-It will be vital to group tables together into a single set if those tables are related via foreign key constraints. If tables that are thus related are _not_ replicated together, you'll find yourself in trouble if you switch the "master provider" from one node to another, and discover that the new "master" can't be updated properly because it is missing the contents of dependent tables.
-
-If a database schema has been designed cleanly, it is likely that replication sets will be virtually synonymous with namespaces. All of the tables and sequences in a particular namespace will be sufficiently related that you will want to replicate them all. Conversely, tables found in different schemas will likely NOT be related, and therefore should be replicated in separate sets.
+Moved to SGML
Index: SlonyHowtoFirstTry.txt
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/SlonyHowtoFirstTry.txt,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/SlonyHowtoFirstTry.txt -Ldoc/adminguide/SlonyHowtoFirstTry.txt -u -w -r1.1 -r1.2
--- doc/adminguide/SlonyHowtoFirstTry.txt
+++ doc/adminguide/SlonyHowtoFirstTry.txt
@@ -1,282 +1 @@
-%META:TOPICINFO{author="guest" date="1098196188" format="1.0" version="1.3"}%
-%META:TOPICPARENT{name="SlonyIAdministration"}%
----++ Replicating Your First Database
-
-In this example, we will be replicating a brand new pgbench database. The
-mechanics of replicating an existing database are covered here; however, we
-recommend that you learn how Slony-I functions by using a fresh new
-non-production database.
-
-The Slony-I replication engine is trigger-based, allowing us to replicate
-databases (or portions thereof) running under the same postmaster.
-
-This example will show how to replicate the pgbench database running on
-localhost (master) to the pgbench slave database also running on localhost
-(slave). We make a couple of assumptions about your PostgreSQL configuration:
-
- 1 You have tcpip_socket=true in your postgresql.conf and
- 1 You have enabled access in your cluster(s) via pg_hba.conf
-
-The REPLICATIONUSER needs to be a PostgreSQL superuser. This is typically
-postgres or pgsql.
-
-You should also set the following shell variables:
-
-<verbatim>
-CLUSTERNAME=slony_example
-MASTERDBNAME=pgbench
-SLAVEDBNAME=pgbenchslave
-MASTERHOST=localhost
-SLAVEHOST=localhost
-REPLICATIONUSER=pgsql
-PGBENCHUSER=pgbench
-</verbatim>
-Here are a couple of examples for setting variables in common shells:
-
- * bash, sh, ksh
-<verbatim>
- export CLUSTERNAME=slony_example
-</verbatim>
- * (t)csh:
-<verbatim>
- setenv CLUSTERNAME slony_example
-</verbatim>
-
------
-*Gotcha:* If you're changing these variables to use different hosts for MASTERHOST and SLAVEHOST, be sure _not_ to use localhost for either of them. This will result in an error similar to the following:
-
-<verbatim>
-ERROR remoteListenThread_1: db_getLocalNodeId() returned 2 - wrong database?
-</verbatim>
-
--- Main.SteveSimms - 16 Oct 2004
------
-
----+++ Creating the pgbenchuser
-
-<verbatim>
-createuser -A -D $PGBENCHUSER
-</verbatim>
-
----+++ Preparing the databases
-
-<verbatim>
-createdb -O $PGBENCHUSER -h $MASTERHOST $MASTERDBNAME
-createdb -O $PGBENCHUSER -h $SLAVEHOST $SLAVEDBNAME
-
-pgbench -i -s 1 -U $PGBENCHUSER -h $MASTERHOST $MASTERDBNAME
-</verbatim>
-
-Because Slony-I depends on the databases having the pl/pgSQL procedural
-language installed, we had better install it now. It is possible that you have
-installed pl/pgSQL into the template1 database, in which case you can skip this
-step, because it will then already be present in $MASTERDBNAME.
-
-<verbatim>
-createlang plpgsql -h $MASTERHOST $MASTERDBNAME
-</verbatim>
-
-Slony-I does not yet automatically copy table definitions from a master when a
-slave subscribes to it, so we need to import this data. We do this with
-pg_dump.
-
-<verbatim>
-pg_dump -s -U $REPLICATIONUSER -h $MASTERHOST $MASTERDBNAME | psql -U $REPLICATIONUSER -h $SLAVEHOST $SLAVEDBNAME
-</verbatim>
-
-To illustrate how Slony-I allows for on-the-fly replication subscription, let's
-start up pgbench. If you run the pgbench application in the foreground of a
-separate terminal window, you can stop and restart it with different
-parameters at any time. You'll need to re-export the variables so they
-are available in this session as well.
-
-The typical command to run pgbench would look like:
-
-<verbatim>
-pgbench -s 1 -c 5 -t 1000 -U $PGBENCHUSER -h $MASTERHOST $MASTERDBNAME
-</verbatim>
-
-This will run pgbench with 5 concurrent clients each processing 1000
-transactions against the pgbench database running on localhost as the pgbench
-user.
-
----+++ Configuring the Database for Replication.
-
-Creating the configuration tables, stored procedures, triggers and
-configuration is all done through the slonik tool. It is a specialized
-scripting aid that mostly calls stored procedures in the master/slave (node)
-databases. The script to create the initial configuration for the simple
-master-slave setup of our pgbench database looks like this:
-
-<verbatim>
-#!/bin/sh
-
-slonik <<_EOF_
- #--
- # define the namespace the replication system uses in our example it is
- # slony_example
- #--
- cluster name = $CLUSTERNAME;
-
- #--
- # admin conninfo's are used by slonik to connect to the nodes one for each
- # node on each side of the cluster, the syntax is that of PQconnectdb in
- # the C-API
- # --
- node 1 admin conninfo = 'dbname=$MASTERDBNAME host=$MASTERHOST user=$REPLICATIONUSER';
- node 2 admin conninfo = 'dbname=$SLAVEDBNAME host=$SLAVEHOST user=$REPLICATIONUSER';
-
- #--
- # init the first node. Its id MUST be 1. This creates the schema
- # _$CLUSTERNAME containing all replication system specific database
- # objects.
-
- #--
- init cluster ( id=1, comment = 'Master Node');
-
- #--
- # Because the history table does not have a primary key or other unique
- # constraint that could be used to identify a row, we need to add one.
- # The following command adds a bigint column named
- # _Slony-I_$CLUSTERNAME_rowID to the table. It will have a default value
- # of nextval('_$CLUSTERNAME.s1_rowid_seq'), and have UNIQUE and NOT NULL
- # constraints applied. All existing rows will be initialized with a
- # number
- #--
- table add key (node id = 1, fully qualified name = 'public.history');
-
- #--
- # Slony-I organizes tables into sets. The smallest unit a node can
- # subscribe is a set. The following commands create one set containing
- # all 4 pgbench tables. The master or origin of the set is node 1.
- #--
- create set (id=1, origin=1, comment='All pgbench tables');
- set add table (set id=1, origin=1, id=1, fully qualified name = 'public.accounts', comment='accounts table');
- set add table (set id=1, origin=1, id=2, fully qualified name = 'public.branches', comment='branches table');
- set add table (set id=1, origin=1, id=3, fully qualified name = 'public.tellers', comment='tellers table');
- set add table (set id=1, origin=1, id=4, fully qualified name = 'public.history', comment='history table', key = serial);
-
- #--
- # Create the second node (the slave) tell the 2 nodes how to connect to
- # each other and how they should listen for events.
- #--
-
- store node (id=2, comment = 'Slave node');
- store path (server = 1, client = 2, conninfo='dbname=$MASTERDBNAME host=$MASTERHOST user=$REPLICATIONUSER');
- store path (server = 2, client = 1, conninfo='dbname=$SLAVEDBNAME host=$SLAVEHOST user=$REPLICATIONUSER');
- store listen (origin=1, provider = 1, receiver =2);
- store listen (origin=2, provider = 2, receiver =1);
-_EOF_
-</verbatim>
-
-
-Is pgbench still running? If not, start it again.
-
-At this point we have 2 databases that are fully prepared. One is the master
-database in which pgbench is busy accessing and changing rows. It's now time
-to start the replication daemons.
-
-On $MASTERHOST the command to start the replication engine is
-
-<verbatim>
-slon $CLUSTERNAME "dbname=$MASTERDBNAME user=$REPLICATIONUSER host=$MASTERHOST"
-</verbatim>
-
-Likewise we start the replication system on node 2 (the slave)
-
-<verbatim>
-slon $CLUSTERNAME "dbname=$SLAVEDBNAME user=$REPLICATIONUSER host=$SLAVEHOST"
-</verbatim>
-
-Even though we have the slon running on both the master and slave and they are
-both spitting out diagnostics and other messages, we aren't replicating any
-data yet. The notices you are seeing is the synchronization of cluster
-configurations between the 2 slon processes.
-
-To start replicating the 4 pgbench tables (set 1) from the master (node id 1)
-to the slave (node id 2), execute the following script.
-
-<verbatim>
-#!/bin/sh
-slonik <<_EOF_
- # ----
- # This defines which namespace the replication system uses
- # ----
- cluster name = $CLUSTERNAME;
-
- # ----
- # Admin conninfo's are used by the slonik program to connect
- # to the node databases. So these are the PQconnectdb arguments
- # that connect from the administrators workstation (where
- # slonik is executed).
- # ----
- node 1 admin conninfo = 'dbname=$MASTERDBNAME host=$MASTERHOST user=$REPLICATIONUSER';
- node 2 admin conninfo = 'dbname=$SLAVEDBNAME host=$SLAVEHOST user=$REPLICATIONUSER';
-
- # ----
- # Node 2 subscribes set 1
- # ----
- subscribe set ( id = 1, provider = 1, receiver = 2, forward = no);
-_EOF_
-</verbatim>
-
-
-Any second here, the replication daemon on $SLAVEHOST will start to copy the
-current content of all 4 replicated tables. While doing so, of course, the
-pgbench application will continue to modify the database. When the copy
-process is finished, the replication daemon on $SLAVEHOST will start to catch
-up by applying the accumulated replication log. It will do this in little
-steps, 10 seconds' worth of application work at a time. Depending on the
-performance of the two systems involved, the sizing of the two databases, the
-actual transaction load and how well the two databases are tuned and
-maintained, this catchup process can be a matter of minutes, hours, or
-eons.
-
-You have now successfully set up your first basic master/slave replication
-system, and, once the slave has caught up, the 2 databases contain identical
-data. That's the theory. In practice, it's good to check that the datasets
-are in fact the same.
-
-The following script will create ordered dumps of the 2 databases and compare
-them. Make sure that pgbench has completed its testing, and that your slon
-sessions have caught up.
-
-<verbatim>
-#!/bin/sh
-echo -n "**** comparing sample1 ... "
-psql -U $REPLICATIONUSER -h $MASTERHOST $MASTERDBNAME >dump.tmp.1.$$ <<_EOF_
- select 'accounts:'::text, aid, bid, abalance, filler
- from accounts order by aid;
- select 'branches:'::text, bid, bbalance, filler
- from branches order by bid;
- select 'tellers:'::text, tid, bid, tbalance, filler
- from tellers order by tid;
- select 'history:'::text, tid, bid, aid, delta, mtime, filler,
- "_Slony-I_${CLUSTERNAME}_rowID"
- from history order by "_Slony-I_${CLUSTERNAME}_rowID";
-_EOF_
-psql -U $REPLICATIONUSER -h $SLAVEHOST $SLAVEDBNAME >dump.tmp.2.$$ <<_EOF_
- select 'accounts:'::text, aid, bid, abalance, filler
- from accounts order by aid;
- select 'branches:'::text, bid, bbalance, filler
- from branches order by bid;
- select 'tellers:'::text, tid, bid, tbalance, filler
- from tellers order by tid;
- select 'history:'::text, tid, bid, aid, delta, mtime, filler,
- "_Slony-I_${CLUSTERNAME}_rowID"
- from history order by "_Slony-I_${CLUSTERNAME}_rowID";
-_EOF_
-
-if diff dump.tmp.1.$$ dump.tmp.2.$$ >$CLUSTERNAME.diff ; then
- echo "success - databases are equal."
- rm dump.tmp.?.$$
- rm $CLUSTERNAME.diff
-else
- echo "FAILED - see $CLUSTERNAME.diff for database differences"
-fi
-</verbatim>
-
-Note that there is somewhat more sophisticated documentation of the process in the Slony-I source code tree in a file called slony-I-basic-mstr-slv.txt.
-
-If this script returns "FAILED", please contact the developers at
-[[http://slony.org/]]
+Moved to SGML
Index: SlonyFAQ16.txt
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/SlonyFAQ16.txt,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/SlonyFAQ16.txt -Ldoc/adminguide/SlonyFAQ16.txt -u -w -r1.1 -r1.2
--- doc/adminguide/SlonyFAQ16.txt
+++ doc/adminguide/SlonyFAQ16.txt
@@ -1,50 +1 @@
-%META:TOPICINFO{author="guest" date="1098327540" format="1.0" version="1.1"}%
-%META:TOPICPARENT{name="SlonyFAQ"}%
----++ I pointed a subscribing node to a different parent and it stopped replicating
-
-We noticed this happening when we wanted to re-initialize a node,
-where we had configuration thus:
-
- * Node 1 - master
- * Node 2 - child of node 1 - the node we're reinitializing
- * Node 3 - child of node 2 - node that should keep replicating
-
-The subscription for node 3 was changed to have node 1 as provider,
-and we did DROP SET/SUBSCRIBE SET for node 2 to get it repopulating.
-
-Unfortunately, replication suddenly stopped to node 3.
-
-The problem was that there was not a suitable set of "listener paths"
-in sl_listen to allow the events from node 1 to propagate to node 3.
-The events were going through node 2, and blocking behind the
-SUBSCRIBE SET event that node 2 was working on.
-
-The following slonik script dropped out the listen paths where node 3
-had to go through node 2, and added in direct listens between nodes 1
-and 3.
-
-<verbatim>
-cluster name = oxrslive;
- node 1 admin conninfo='host=32.85.68.220 dbname=oxrslive user=postgres port=5432';
- node 2 admin conninfo='host=32.85.68.216 dbname=oxrslive user=postgres port=5432';
- node 3 admin conninfo='host=32.85.68.244 dbname=oxrslive user=postgres port=5432';
- node 4 admin conninfo='host=10.28.103.132 dbname=oxrslive user=postgres port=5432';
-try {
- store listen (origin = 1, receiver = 3, provider = 1);
- store listen (origin = 3, receiver = 1, provider = 3);
- drop listen (origin = 1, receiver = 3, provider = 2);
- drop listen (origin = 3, receiver = 1, provider = 2);
-}
-</verbatim>
-Immediately after this script was run, SYNC events started propagating
-again to node 3.
-
-This points out two principles:
-
- * If you have multiple nodes, and cascaded subscribers, you need to be quite careful in populating the STORE LISTEN entries, and in
- modifying them if the structure of the replication "tree" changes.
-
- * Version 1.1 probably ought to provide better tools to help manage this.
-
-The issues of "listener paths" are discussed further at SlonyListenPaths
-
+Moved to SGML
Index: SlonyFAQ13.txt
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/SlonyFAQ13.txt,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/SlonyFAQ13.txt -Ldoc/adminguide/SlonyFAQ13.txt -u -w -r1.1 -r1.2
--- doc/adminguide/SlonyFAQ13.txt
+++ doc/adminguide/SlonyFAQ13.txt
@@ -1,23 +1 @@
-%META:TOPICINFO{author="guest" date="1099542268" format="1.0" version="1.2"}%
-%META:TOPICPARENT{name="SlonyFAQ"}%
----++ Some nodes start consistently falling behind
-
-I have been running Slony-I on a node for a while, and am seeing
-system performance suffering.
-
-I'm seeing long running queries of the form:
-<verbatim>
- fetch 100 from LOG;
-</verbatim>
-
-This is characteristic of pg_listener (which is the table containing
-NOTIFY data) having plenty of dead tuples in it. That makes NOTIFY
-events take a long time, and causes the affected node to gradually
-fall further and further behind.
-
-You quite likely need to do a VACUUM FULL on pg_listener, to vigorously clean it out, and need to vacuum pg_listener really frequently. Once every five minutes would likely be AOK.
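-
-For instance (a sketch; substitute your own database name and superuser):
-
-<verbatim>
-# one-time aggressive cleanup
-psql -U postgres -d sampledb -c 'vacuum full verbose analyze pg_listener;'
-
-# and then something like this from cron, every five minutes or so
-psql -U postgres -d sampledb -c 'vacuum analyze pg_listener;'
-</verbatim>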
-
-Slon daemons already vacuum a bunch of tables, and cleanup_thread.c contains a list of tables that are frequently vacuumed automatically. In Slony-I 1.0.2, pg_listener is not included. In 1.0.5 and later, it is regularly vacuumed, so this should cease to be a direct issue.
-
-There is, however, still a scenario where this will "bite." Vacuums cannot delete tuples that were made "obsolete" at any time after the start time of the eldest transaction that is still open. Long-running transactions will cause trouble, and should be avoided, even on "slave" nodes.
-
+Moved to SGML
Index: SlonyFAQ17.txt
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/SlonyFAQ17.txt,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/SlonyFAQ17.txt -Ldoc/adminguide/SlonyFAQ17.txt -u -w -r1.1 -r1.2
--- doc/adminguide/SlonyFAQ17.txt
+++ doc/adminguide/SlonyFAQ17.txt
@@ -1,20 +1 @@
-%META:TOPICINFO{author="guest" date="1099632840" format="1.0" version="1.1"}%
-%META:TOPICPARENT{name="SlonyFAQ"}%
----++ sl_log_1 grows after dropping a node
-
-After dropping a node, sl_log_1 isn't getting purged out anymore.
-
-This is a common scenario in versions before 1.0.5, as the "clean up" that takes place when purging the node does not include purging out old entries from the Slony-I table, sl_confirm, for the recently departed node.
-
-The node is no longer around to update confirmations of what syncs have been applied on it, and therefore the cleanup thread that purges log entries thinks that it can't safely delete entries newer than the final sl_confirm entry, which rather curtails the ability to purge out old logs.
-
-In version 1.0.5, the "drop node" function purges out entries in sl_confirm for the departing node. In earlier versions, this needs to be done manually. Supposing the node number is 3, then the query would be:
-
-<verbatim>
-delete from _namespace.sl_confirm where con_origin = 3 or con_received = 3;
-</verbatim>
-
-General "being careful" dictates starting with a BEGIN, looking at the contents of sl_confirm before, ensuring that only the expected records are purged, and then, only after that, confirming the change with a COMMIT. If you delete confirm entries for the wrong node, that could ruin your whole day.
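-
-In other words, something along these lines, with _namespace and the node number adjusted to match your cluster; only run the DELETE and COMMIT if the SELECT shows just the rows you expect to purge:
-
-<verbatim>
-begin;
-select con_origin, con_received, count(*)
-    from _namespace.sl_confirm
-    where con_origin = 3 or con_received = 3
-    group by con_origin, con_received;
-delete from _namespace.sl_confirm where con_origin = 3 or con_received = 3;
-commit;
-</verbatim>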
-
-You'll need to run this on each node that remains...
-
+Moved to SGML
--- /dev/null
+++ doc/adminguide/maintenance.html
@@ -0,0 +1,252 @@
+<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN""http://www.w3.org/TR/html4/loose.dtd">
+<HTML
+><HEAD
+><TITLE
+>Slony-I Maintenance</TITLE
+><META
+NAME="GENERATOR"
+CONTENT="Modular DocBook HTML Stylesheet Version 1.79"><LINK
+REV="MADE"
+HREF="mailto:cbbrowne at gmail.com"><LINK
+REL="HOME"
+TITLE="Slony-I 1.1 Administration"
+HREF="slony.html"><LINK
+REL="UP"
+HREF="t24.html"><LINK
+REL="PREVIOUS"
+TITLE="Monitoring"
+HREF="monitoring.html"><LINK
+REL="NEXT"
+TITLE="Reshaping a Cluster"
+HREF="reshape.html"><LINK
+REL="STYLESHEET"
+TYPE="text/css"
+HREF="stdstyle.css"><META
+HTTP-EQUIV="Content-Type"></HEAD
+><BODY
+CLASS="SECT1"
+BGCOLOR="#FFFFFF"
+TEXT="#000000"
+LINK="#0000FF"
+VLINK="#840084"
+ALINK="#0000FF"
+><DIV
+CLASS="NAVHEADER"
+><TABLE
+SUMMARY="Header navigation table"
+WIDTH="100%"
+BORDER="0"
+CELLPADDING="0"
+CELLSPACING="0"
+><TR
+><TH
+COLSPAN="3"
+ALIGN="center"
+>Slony-I 1.1 Administration</TH
+></TR
+><TR
+><TD
+WIDTH="10%"
+ALIGN="left"
+VALIGN="bottom"
+><A
+HREF="monitoring.html"
+ACCESSKEY="P"
+>Prev</A
+></TD
+><TD
+WIDTH="80%"
+ALIGN="center"
+VALIGN="bottom"
+></TD
+><TD
+WIDTH="10%"
+ALIGN="right"
+VALIGN="bottom"
+><A
+HREF="reshape.html"
+ACCESSKEY="N"
+>Next</A
+></TD
+></TR
+></TABLE
+><HR
+ALIGN="LEFT"
+WIDTH="100%"></DIV
+><DIV
+CLASS="SECT1"
+><H1
+CLASS="SECT1"
+><A
+NAME="MAINTENANCE"
+>13. Slony-I Maintenance</A
+></H1
+><P
+>Slony-I actually does most of its necessary maintenance itself, in a "cleanup" thread:
+
+<P
+></P
+><UL
+><LI
+><P
+> Deletes old data from various tables in the
+ Slony-I cluster's namespace, notably entries in sl_log_1,
+ sl_log_2 (not yet used), and sl_seqlog.
+
+ </P
+></LI
+><LI
+><P
+> Vacuums certain tables used by Slony-I. As of
+ 1.0.5, this includes pg_listener; in earlier versions, you
+ must vacuum that table heavily, otherwise you'll find
+ replication slowing down because Slony-I raises plenty of
+ events, which leads to that table having plenty of dead
+ tuples.
+
+ </P
+><P
+> In some versions (1.1, for sure; possibly 1.0.5) there is the option of not bothering to vacuum any of these tables if you are using something like pg_autovacuum to handle vacuuming of these tables. Unfortunately, it has been quite possible for pg_autovacuum to not vacuum quite frequently enough, so you probably want to use the internal vacuums. Vacuuming pg_listener "too often" isn't nearly as hazardous as not vacuuming it frequently enough.
+
+ </P
+><P
+>Unfortunately, if you have long-running transactions, vacuums cannot clear out dead tuples that are newer than the eldest transaction that is still running. This will most notably lead to pg_listener growing large and will slow replication. </P
+></LI
+></UL
+></P
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN587"
+>13.1. Watchdogs: Keeping Slons Running</A
+></H2
+><P
+>There are a couple of "watchdog" scripts available that monitor things, and restart the slon processes should they happen to die for some reason, such as a network "glitch" that causes loss of connectivity. </P
+><P
+>You might want to run them... </P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN591"
+>13.2. Alternative to Watchdog: generate_syncs.sh</A
+></H2
+><P
+>A new script for Slony-I 1.1 is "generate_syncs.sh", which addresses the following kind of situation. </P
+><P
+>Supposing you have some possibly-flakey slon daemon that might not run all the time, you might return from a weekend away only to discover the following situation... </P
+><P
+>On Friday night, something went "bump" and while the database came back up, none of the slon daemons survived. Your online application then saw nearly three days worth of heavy transactions. </P
+><P
+>When you restart slon on Monday, it hasn't done a SYNC on the master since Friday, so that the next "SYNC set" comprises all of the updates between Friday and Monday. Yuck. </P
+><P
+>If you run generate_syncs.sh as a cron job every 20 minutes, it will force in a periodic SYNC on the "master" server, which means that between Friday and Monday, the numerous updates are split into more than 100 syncs, which can be applied incrementally, making the cleanup a lot less unpleasant. </P
+><P
+>Note that if SYNCs <SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>are</I
+></SPAN
+> running regularly, this script won't bother doing anything. </P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN600"
+>13.3. Log Files</A
+></H2
+><P
+>Slon daemons generate some more-or-less verbose log files, depending on what debugging level is turned on. You might assortedly wish to:
+<P
+></P
+><UL
+><LI
+><P
+> Use a log rotator like Apache rotatelogs to have a sequence of log files so that no one of them gets too big;
+
+ </P
+></LI
+><LI
+><P
+> Purge out old log files, periodically.</P
+></LI
+></UL
+>
+
+
+
+ </P
+></DIV
+></DIV
+><DIV
+CLASS="NAVFOOTER"
+><HR
+ALIGN="LEFT"
+WIDTH="100%"><TABLE
+SUMMARY="Footer navigation table"
+WIDTH="100%"
+BORDER="0"
+CELLPADDING="0"
+CELLSPACING="0"
+><TR
+><TD
+WIDTH="33%"
+ALIGN="left"
+VALIGN="top"
+><A
+HREF="monitoring.html"
+ACCESSKEY="P"
+>Prev</A
+></TD
+><TD
+WIDTH="34%"
+ALIGN="center"
+VALIGN="top"
+><A
+HREF="slony.html"
+ACCESSKEY="H"
+>Home</A
+></TD
+><TD
+WIDTH="33%"
+ALIGN="right"
+VALIGN="top"
+><A
+HREF="reshape.html"
+ACCESSKEY="N"
+>Next</A
+></TD
+></TR
+><TR
+><TD
+WIDTH="33%"
+ALIGN="left"
+VALIGN="top"
+>Monitoring</TD
+><TD
+WIDTH="34%"
+ALIGN="center"
+VALIGN="top"
+><A
+HREF="t24.html"
+ACCESSKEY="U"
+>Up</A
+></TD
+><TD
+WIDTH="33%"
+ALIGN="right"
+VALIGN="top"
+>Reshaping a Cluster</TD
+></TR
+></TABLE
+></DIV
+></BODY
+></HTML
+>
\ No newline at end of file
Index: SlonyFAQ11.txt
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/SlonyFAQ11.txt,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/SlonyFAQ11.txt -Ldoc/adminguide/SlonyFAQ11.txt -u -w -r1.1 -r1.2
--- doc/adminguide/SlonyFAQ11.txt
+++ doc/adminguide/SlonyFAQ11.txt
@@ -1,37 +1 @@
-%META:TOPICINFO{author="guest" date="1099542075" format="1.0" version="1.2"}%
-%META:TOPICPARENT{name="SlonyFAQ"}%
----++ I need to drop a sequence from a replication set
-
-If you are running 1.0.5 or later, there is a SET DROP SEQUENCE
-command in Slonik to allow you to do this, parallelling SET DROP
-TABLE.
-
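-As a rough sketch of how that might look under 1.0.5 or later (the usual slonik preamble is assumed, and the origin node and sequence id here are only examples):
-
-<verbatim>
- set drop sequence (origin = 1, id = 93);
-</verbatim>
-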
-If you are running 1.0.2 or earlier, the process is a bit more manual.
-
-Suppose we want to get rid of the two sequences listed below,
-whois_cachemgmt_seq and epp_whoi_cach_seq_; we start by finding their
-seq_id values.
-
-<verbatim>
-oxrsorg=# select * from _oxrsorg.sl_sequence where seq_id in (93,59);
- seq_id | seq_reloid | seq_set | seq_comment
---------+------------+---------+-------------------------------------
- 93 | 107451516 | 1 | Sequence public.whois_cachemgmt_seq
- 59 | 107451860 | 1 | Sequence public.epp_whoi_cach_seq_
-(2 rows)
-</verbatim>
-
-The entries that need to be deleted to stop Slony-I from continuing to
-replicate these sequences are:
-
-<verbatim>
-delete from _oxrsorg.sl_seqlog where seql_seqid in (93, 59);
-delete from _oxrsorg.sl_sequence where seq_id in (93,59);
-</verbatim>
-
-Those two queries could be submitted to all of the nodes via
-ddlscript() / EXECUTE SCRIPT, thus eliminating the sequence everywhere
-"at once." Or they may be applied by hand to each of the nodes.
-
-Similarly to SET DROP TABLE, this should be in place for Slony-I version
-1.0.5 as SET DROP SEQUENCE.
+Moved to SGML
Index: SlonySlonik.txt
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/SlonySlonik.txt,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/SlonySlonik.txt -Ldoc/adminguide/SlonySlonik.txt -u -w -r1.1 -r1.2
--- doc/adminguide/SlonySlonik.txt
+++ doc/adminguide/SlonySlonik.txt
@@ -1,17 +1 @@
-%META:TOPICINFO{author="guest" date="1097691381" format="1.0" version="1.4"}%
-%META:TOPICPARENT{name="SlonyIAdministration"}%
-*Introduction*
-
-Slonik is a command line utility designed specifically to set up and modify configurations of the Slony-I replication system.
-
-*General outline*
-
-The slonik command line utility is intended to be embedded in shell scripts; it reads commands from files or stdin.
-
-It reads a set of Slonik statements, which are written in a scripting language with syntax similar to that of SQL, and performs the set of configuration changes on the slony nodes specified in the script.
-
-Nearly all of the real configuration work is actually done by calling stored procedures after loading the Slony-I support base into a database. Slonik was created because these stored procedures have special requirements as to on which particular node in the replication system they are called. The absence of named parameters for stored procedures makes it rather hard to do this from the psql prompt, and psql lacks the ability to maintain multiple connections with open transactions to multiple databases.
-
-The format of the Slonik "language" is very similar to that of SQL, and the parser is based on a similar set of formatting rules for such things as numbers and strings. Note that slonik is declarative, using literal values throughout, and does _not_ have the notion of variables. It is anticipated that Slonik scripts will typically be *generated* by scripts, such as Bash or Perl, and these sorts of scripting languages already have perfectly good ways of managing variables.
-
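-As a rough illustrative sketch (the cluster name, conninfo strings, and node numbers here are invented; substitute your own), a slonik script generally starts with a preamble naming the cluster and the admin connections, followed by the configuration statements:
-
-<verbatim>
- cluster name = testcluster;
- node 1 admin conninfo = 'dbname=mydb host=server1 user=slony';
- node 2 admin conninfo = 'dbname=mydb host=server2 user=slony';
-
- init cluster (id = 1, comment = 'Origin node');
- store node (id = 2, comment = 'Subscriber node');
- store path (server = 1, client = 2, conninfo = 'dbname=mydb host=server1 user=slony');
- store path (server = 2, client = 1, conninfo = 'dbname=mydb host=server2 user=slony');
-</verbatim>
-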
-A detailed list of Slonik commands can be found here: [[http://gborg.postgresql.org/project/slony1/genpage.php?slonik_commands][http://gborg.postgresql.org/project/slony1/genpage.php?slonik_commands]]
+Moved to SGML
Index: filelist.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/filelist.sgml,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/filelist.sgml -Ldoc/adminguide/filelist.sgml -u -w -r1.1 -r1.2
--- doc/adminguide/filelist.sgml
+++ doc/adminguide/filelist.sgml
@@ -21,6 +21,7 @@
<!entity ddlchanges SYSTEM "ddlchanges.sgml">
<!entity firstdb SYSTEM "firstdb.sgml">
<!entity help SYSTEM "help.sgml">
+<!entity faq SYSTEM "faq.sgml">
Index: SlonyHandlingFailover.txt
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/SlonyHandlingFailover.txt,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/SlonyHandlingFailover.txt -Ldoc/adminguide/SlonyHandlingFailover.txt -u -w -r1.1 -r1.2
--- doc/adminguide/SlonyHandlingFailover.txt
+++ doc/adminguide/SlonyHandlingFailover.txt
@@ -1,129 +1 @@
-%META:TOPICINFO{author="guest" date="1097850667" format="1.0" version="1.1"}%
-%META:TOPICPARENT{name="SlonyIAdministration"}%
----++ Doing switchover and failover with Slony-I
-
----+++ Foreword
-
- Slony-I is an asynchronous replication system. Because of that, it
- is almost certain that at the moment the current origin of a set
- fails, the last transactions committed have not propagated to the
- subscribers yet. They always fail under heavy load, and you know
- it. Thus the goal is to prevent the main server from failing.
- The best way to do that is frequent maintenance.
-
- Opening the case of a running server is not exactly what we
- all consider professional system maintenance. And interestingly,
- those users who use replication for backup and failover
- purposes are usually the ones that have a very low tolerance for
- words like "downtime". To meet these requirements, Slony-I has
- not only failover capabilities, but controlled master role transfer
- features too.
-
- It is assumed in this document that the reader is familiar with
- the slonik utility and knows at least how to set up a simple
- 2 node replication system with Slony-I.
-
----+++ Switchover
-
- We assume a current "origin" as node1 (AKA master) with one
- "subscriber" as node2 (AKA slave). A web application on a third
- server is accessing the database on node1. Both databases are
- up and running and replication is more or less in sync.
-
- * Step 1)
-
- * At the time of this writing, switchover to another server requires the application to reconnect to the database. So in order to avoid any complications, we simply shut down the web server. Users who use pg_pool for the application's database connections can shut down the pool only.
-
- * Step 2)
-
- * A small slonik script executes the following commands:
-<verbatim>
- lock set (id = 1, origin = 1);
- wait for event (origin = 1, confirmed = 2);
- move set (id = 1, old origin = 1, new origin = 2);
- wait for event (origin = 1, confirmed = 2);
-</verbatim>
-
- After these commands, the origin (master role) of data set 1
- is now on node2. It is not simply transferred. It is done
- in a fashion so that node1 is now a fully synchronized subscriber
- actively replicating the set. So the two nodes completely switched
- roles.
-
- * Step 3)
-
- * After reconfiguring the web application (or pgpool) to connect to the database on node2 instead, the web server is restarted and resumes normal operation.
-
- Done in one shell script that does the shutdown, slonik, config
- file moves, and startup all together, this entire procedure
- takes less than 10 seconds.
-
- It is now possible to simply shutdown node1 and do whatever is
- required. When node1 is restarted later, it will start replicating
- again and eventually catch up after a while. At this point the
- whole procedure is executed with exchanged node IDs and the
- original configuration is restored.
-
-
----+++ Failover
-
- Because of the possibility of missing not-yet-replicated
- transactions that are committed, failover is the worst thing
- that can happen in a master-slave replication scenario. If there
- is any possibility of bringing back the failed server, even if only
- for a few minutes, we strongly recommend that you follow the
- switchover procedure above.
-
- Slony does not provide any automatic detection for failed systems.
- Abandoning committed transactions is a business decision that
- cannot be made by a database. If someone wants to put the
- commands below into a script executed automatically from the
- network monitoring system, well ... it's your data.
-
- * Step 1)
-
- * The slonik command
-<verbatim>
- failover (id = 1, backup node = 2);
-</verbatim>
-
- causes node2 to assume the ownership (origin) of all sets that
- have node1 as their current origin. In case there are more
- nodes, all direct subscribers of node1 are instructed that
- this is happening. Slonik would also query all direct subscribers
- to figure out which node has the highest replication status
- (latest committed transaction) for each set, and the configuration
- would be changed in a way that node2 first applies those last
- minute changes before actually allowing write access to the
- tables.
-
- In addition, all nodes that subscribed directly from node1 will
- now use node2 as data provider for the set. This means that
- after the failover command succeeded, no node in the entire
- replication setup will receive anything from node1 any more.
-
- * Step 2)
-
- Reconfigure and restart the application (or pgpool) to cause it
- to reconnect to node2.
-
- * Step 3)
-
- After the failover is complete and node2 accepts write operations
- against the tables, remove all remnants of node1's configuration
- information with the slonik command
-
-<verbatim>
- drop node (id = 1, event node = 2);
-</verbatim>
-
----+++ After failover, getting back node1
-
- After the above failover, the data stored on node1 must be
- considered out of sync with the rest of the nodes. Therefore, the
- only way to get node1 back and transfer the master role to it is
- to rebuild it from scratch as a slave, let it catch up and then
- follow the switchover procedure.
-
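-As a rough sketch of that rebuild in slonik terms (node numbers, set id, and conninfo strings here are only examples, and the usual preamble is assumed), once node1's database has been recreated and Slony-I reinstalled on it:
-
-<verbatim>
- store node (id = 1, comment = 'returning node');
- store path (server = 1, client = 2, conninfo = 'dbname=mydb host=server1 user=slony');
- store path (server = 2, client = 1, conninfo = 'dbname=mydb host=server2 user=slony');
- subscribe set (id = 1, provider = 2, receiver = 1, forward = yes);
-</verbatim>
-
-Once node1 has caught up, the switchover procedure described above (lock set / wait for event / move set) can be used to hand the origin role back to it.
-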
-Based on [[http://gborg.postgresql.org/project/slony1/genpage.php?howto_failover]]
-
+Moved to SGML
--- /dev/null
+++ doc/adminguide/failover.html
@@ -0,0 +1,352 @@
+<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN""http://www.w3.org/TR/html4/loose.dtd">
+<HTML
+><HEAD
+><TITLE
+>Doing switchover and failover with Slony-I</TITLE
+><META
+NAME="GENERATOR"
+CONTENT="Modular DocBook HTML Stylesheet Version 1.79"><LINK
+REV="MADE"
+HREF="mailto:cbbrowne at gmail.com"><LINK
+REL="HOME"
+TITLE="Slony-I 1.1 Administration"
+HREF="slony.html"><LINK
+REL="UP"
+HREF="t24.html"><LINK
+REL="PREVIOUS"
+TITLE="Reshaping a Cluster"
+HREF="reshape.html"><LINK
+REL="NEXT"
+TITLE=" Slony Listen Paths"
+HREF="listenpaths.html"><LINK
+REL="STYLESHEET"
+TYPE="text/css"
+HREF="stdstyle.css"><META
+HTTP-EQUIV="Content-Type"></HEAD
+><BODY
+CLASS="SECT1"
+BGCOLOR="#FFFFFF"
+TEXT="#000000"
+LINK="#0000FF"
+VLINK="#840084"
+ALINK="#0000FF"
+><DIV
+CLASS="NAVHEADER"
+><TABLE
+SUMMARY="Header navigation table"
+WIDTH="100%"
+BORDER="0"
+CELLPADDING="0"
+CELLSPACING="0"
+><TR
+><TH
+COLSPAN="3"
+ALIGN="center"
+>Slony-I 1.1 Administration</TH
+></TR
+><TR
+><TD
+WIDTH="10%"
+ALIGN="left"
+VALIGN="bottom"
+><A
+HREF="reshape.html"
+ACCESSKEY="P"
+>Prev</A
+></TD
+><TD
+WIDTH="80%"
+ALIGN="center"
+VALIGN="bottom"
+></TD
+><TD
+WIDTH="10%"
+ALIGN="right"
+VALIGN="bottom"
+><A
+HREF="listenpaths.html"
+ACCESSKEY="N"
+>Next</A
+></TD
+></TR
+></TABLE
+><HR
+ALIGN="LEFT"
+WIDTH="100%"></DIV
+><DIV
+CLASS="SECT1"
+><H1
+CLASS="SECT1"
+><A
+NAME="FAILOVER"
+>15. Doing switchover and failover with Slony-I</A
+></H1
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN622"
+>15.1. Foreword</A
+></H2
+><P
+> Slony-I is an asynchronous replication system. Because of that, it
+ is almost certain that at the moment the current origin of a set
+ fails, the last transactions committed have not propagated to the
+ subscribers yet. They always fail under heavy load, and you know
+ it. Thus the goal is to prevent the main server from failing.
+ The best way to do that is frequent maintenance.</P
+><P
+> Opening the case of a running server is not exactly what we
+ all consider professional system maintenance. And interestingly,
+ those users who use replication for backup and failover
+ purposes are usually the ones that have a very low tolerance for
+ words like "downtime". To meet these requirements, Slony-I has
+ not only failover capabilities, but controlled master role transfer
+ features too. </P
+><P
+> It is assumed in this document that the reader is familiar with
+ the slonik utility and knows at least how to set up a simple
+ 2 node replication system with Slony-I. </P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN627"
+>15.2. Switchover</A
+></H2
+><P
+> We assume a current "origin" as node1 (AKA master) with one
+ "subscriber" as node2 (AKA slave). A web application on a third
+ server is accessing the database on node1. Both databases are
+ up and running and replication is more or less in sync.
+
+<P
+></P
+><UL
+><LI
+><P
+> At the time of this writing, switchover to another server requires the application to reconnect to the database. So in order to avoid any complications, we simply shut down the web server. Users who use pg_pool for the application's database connections can shut down the pool only.
+
+
+ </P
+></LI
+><LI
+><P
+> A small slonik script executes the following commands:</P
+><P
+><TT
+CLASS="COMMAND"
+> lock set (id = 1, origin = 1);
+ wait for event (origin = 1, confirmed = 2);
+ move set (id = 1, old origin = 1, new origin = 2);
+ wait for event (origin = 1, confirmed = 2); </TT
+> </P
+><P
+> After these commands, the origin (master role) of data set 1
+ is now on node2. It is not simply transferred. It is done
+ in a fashion so that node1 is now a fully synchronized subscriber
+ actively replicating the set. So the two nodes completely switched
+ roles. </P
+></LI
+><LI
+><P
+> After reconfiguring the web application (or pgpool) to connect to the database on node2 instead, the web server is restarted and resumes normal operation. </P
+><P
+> Done in one shell script that does the shutdown, slonik, config
+ file moves, and startup all together, this entire procedure
+ takes less than 10 seconds. </P
+></LI
+></UL
+></P
+><P
+> It is now possible to simply shutdown node1 and do whatever is
+ required. When node1 is restarted later, it will start replicating
+ again and eventually catch up after a while. At this point the
+ whole procedure is executed with exchanged node IDs and the
+ original configuration is restored. </P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN642"
+>15.3. Failover</A
+></H2
+><P
+> Because of the possibility of missing not-yet-replicated
+ transactions that are committed, failover is the worst thing
+ that can happen in a master-slave replication scenario. If there
+ is any possibility of bringing back the failed server, even if only
+ for a few minutes, we strongly recommend that you follow the
+ switchover procedure above.
+ </P
+><P
+> Slony does not provide any automatic detection for failed systems.
+ Abandoning committed transactions is a business decision that
+ cannot be made by a database. If someone wants to put the
+ commands below into a script executed automatically from the
+ network monitoring system, well ... it's your data.
+
+<P
+></P
+><UL
+><LI
+><P
+> The slonik command</P
+><P
+><TT
+CLASS="COMMAND"
+> failover (id = 1, backup node = 2);</TT
+> </P
+><P
+> causes node2 to assume the ownership (origin) of all sets that
+ have node1 as their current origin. In case there are more
+ nodes, all direct subscribers of node1 are instructed that
+ this is happening. Slonik would also query all direct subscribers
+ to figure out which node has the highest replication status
+ (latest committed transaction) for each set, and the configuration
+ would be changed in a way that node2 first applies those last
+ minute changes before actually allowing write access to the
+ tables.
+ </P
+><P
+> In addition, all nodes that subscribed directly from node1 will
+ now use node2 as data provider for the set. This means that
+ after the failover command succeeded, no node in the entire
+ replication setup will receive anything from node1 any more. </P
+></LI
+><LI
+><P
+> Reconfigure and restart the application (or pgpool) to cause it
+ to reconnect to node2.
+ </P
+></LI
+><LI
+><P
+> After the failover is complete and node2 accepts write operations
+ against the tables, remove all remnants of node1's configuration
+ information with the slonik command
+ </P
+><P
+><TT
+CLASS="COMMAND"
+> drop node (id = 1, event node = 2);</TT
+></P
+></LI
+></UL
+> </P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN659"
+>15.4. After failover, getting back node1</A
+></H2
+><P
+> After the above failover, the data stored on node1 must be
+ considered out of sync with the rest of the nodes. Therefore, the
+ only way to get node1 back and transfer the master role to it is
+ to rebuild it from scratch as a slave, let it catch up and then
+ follow the switchover procedure.
+
+
+ </P
+></DIV
+></DIV
+><DIV
+CLASS="NAVFOOTER"
+><HR
+ALIGN="LEFT"
+WIDTH="100%"><TABLE
+SUMMARY="Footer navigation table"
+WIDTH="100%"
+BORDER="0"
+CELLPADDING="0"
+CELLSPACING="0"
+><TR
+><TD
+WIDTH="33%"
+ALIGN="left"
+VALIGN="top"
+><A
+HREF="reshape.html"
+ACCESSKEY="P"
+>Prev</A
+></TD
+><TD
+WIDTH="34%"
+ALIGN="center"
+VALIGN="top"
+><A
+HREF="slony.html"
+ACCESSKEY="H"
+>Home</A
+></TD
+><TD
+WIDTH="33%"
+ALIGN="right"
+VALIGN="top"
+><A
+HREF="listenpaths.html"
+ACCESSKEY="N"
+>Next</A
+></TD
+></TR
+><TR
+><TD
+WIDTH="33%"
+ALIGN="left"
+VALIGN="top"
+>Reshaping a Cluster</TD
+><TD
+WIDTH="34%"
+ALIGN="center"
+VALIGN="top"
+><A
+HREF="t24.html"
+ACCESSKEY="U"
+>Up</A
+></TD
+><TD
+WIDTH="33%"
+ALIGN="right"
+VALIGN="top"
+>Slony Listen Paths</TD
+></TR
+></TABLE
+></DIV
+></BODY
+></HTML
+>
\ No newline at end of file
Index: subscribenodes.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/subscribenodes.sgml,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/subscribenodes.sgml -Ldoc/adminguide/subscribenodes.sgml -u -w -r1.1 -r1.2
--- doc/adminguide/subscribenodes.sgml
+++ doc/adminguide/subscribenodes.sgml
@@ -1,4 +1,4 @@
-<article id="subscribenodes"> <title/ Subscribing Nodes/
+<sect1 id="subscribenodes"> <title/ Subscribing Nodes/
<para>Before you subscribe a node to a set, be sure that you have slons running for both the master and the new subscribing node. If you don't have slons running, nothing will happen, and you'll beat your head against a wall trying to figure out what's going on.
@@ -51,7 +51,6 @@
<para>A final test is to insert a row into a table on the master node, and to see if the row is copied to the new subscriber.
-</article>
<!-- Keep this comment at the end of the file
Local variables:
mode:sgml
Index: SlonyStartSlons.txt
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/SlonyStartSlons.txt,v
retrieving revision 1.2
retrieving revision 1.3
diff -Ldoc/adminguide/SlonyStartSlons.txt -Ldoc/adminguide/SlonyStartSlons.txt -u -w -r1.2 -r1.3
--- doc/adminguide/SlonyStartSlons.txt
+++ doc/adminguide/SlonyStartSlons.txt
@@ -1,20 +1 @@
-%META:TOPICINFO{author="guest" date="1100276940" format="1.0" version="1.2"}%
-%META:TOPICPARENT{name="SlonyIAdministration"}%
----++ Slon Daemons
-
-The programs that actually perform Slony-I replication are the "slon" daemons.
-
-You need to run one "slon" instance for each node in a Slony-I cluster, whether you consider that node a "master" or a "slave." Since a MOVE SET or FAILOVER can switch the roles of nodes, slon needs to be able to function for both providers and subscribers. It is not essential that these daemons run on any particular host, but there are some principles worth considering:
-
- * Each slon needs to be able to communicate quickly with the database whose "node controller" it is. Therefore, if a Slony-I cluster runs across some form of Wide Area Network, each slon process should run on or nearby the databases each is controlling. If you break this rule, no particular disaster should ensue, but the added latency introduced to monitoring events on the slon's "own node" will cause it to replicate in a _somewhat_ less timely manner.
-
- * The fastest results would be achieved by having each slon run on the database server that it is servicing. If it runs somewhere within a fast local network, performance will not be noticeably degraded.
-
- * It is an attractive idea to run many of the slon processes for a cluster on one machine, as this makes it easy to monitor them both in terms of log files and process tables from one location. This eliminates the need to login to several hosts in order to look at log files or to restart slon instances.
-
-There are two "watchdog" scripts currently available:
-
- * tools/altperl/slon_watchdog.pl - an "early" version that basically wraps a loop around the invocation of slon, restarting any time it falls over
- * tools/altperl/slon_watchdog2.pl - a somewhat more intelligent version that periodically polls the database, checking to see if a SYNC has taken place recently. We have had VPN connections that occasionally fall over without signalling the application, so that the slon stops working, but doesn't actually die; this polling accounts for that...
-
-The "slon_watchdog2.pl" script is probably _usually_ the preferable thing to run. It is not preferable to run it whilst subscribing a very large replication set where it is expected to take many hours to do the initial COPY SET. The problem that will come up in that case is that it will figure that since it hasn't done a SYNC in 2 hours, something's broken and it needs to restart slon, thereby restarting the COPY SET event.
+Moved to SGML
--- /dev/null
+++ doc/adminguide/t24.html
@@ -0,0 +1,524 @@
+<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN""http://www.w3.org/TR/html4/loose.dtd">
+<HTML
+><HEAD
+><TITLE
+>Slony-I Administration</TITLE
+><META
+NAME="GENERATOR"
+CONTENT="Modular DocBook HTML Stylesheet Version 1.79"><LINK
+REV="MADE"
+HREF="mailto:cbbrowne at gmail.com"><LINK
+REL="HOME"
+TITLE="Slony-I 1.1 Administration"
+HREF="slony.html"><LINK
+REL="PREVIOUS"
+TITLE="Slony-I 1.1 Administration"
+HREF="slony.html"><LINK
+REL="NEXT"
+TITLE=" Requirements"
+HREF="requirements.html"><LINK
+REL="STYLESHEET"
+TYPE="text/css"
+HREF="stdstyle.css"><META
+HTTP-EQUIV="Content-Type"></HEAD
+><BODY
+CLASS="ARTICLE"
+BGCOLOR="#FFFFFF"
+TEXT="#000000"
+LINK="#0000FF"
+VLINK="#840084"
+ALINK="#0000FF"
+><DIV
+CLASS="NAVHEADER"
+><TABLE
+SUMMARY="Header navigation table"
+WIDTH="100%"
+BORDER="0"
+CELLPADDING="0"
+CELLSPACING="0"
+><TR
+><TH
+COLSPAN="3"
+ALIGN="center"
+>Slony-I 1.1 Administration</TH
+></TR
+><TR
+><TD
+WIDTH="10%"
+ALIGN="left"
+VALIGN="bottom"
+><A
+HREF="slony.html"
+ACCESSKEY="P"
+>Prev</A
+></TD
+><TD
+WIDTH="80%"
+ALIGN="center"
+VALIGN="bottom"
+></TD
+><TD
+WIDTH="10%"
+ALIGN="right"
+VALIGN="bottom"
+><A
+HREF="requirements.html"
+ACCESSKEY="N"
+>Next</A
+></TD
+></TR
+></TABLE
+><HR
+ALIGN="LEFT"
+WIDTH="100%"></DIV
+><DIV
+CLASS="ARTICLE"
+><DIV
+CLASS="TITLEPAGE"
+><H1
+CLASS="TITLE"
+><A
+NAME="AEN24"
+>Slony-I Administration</A
+></H1
+><HR></DIV
+><DIV
+CLASS="TOC"
+><DL
+><DT
+><B
+>Table of Contents</B
+></DT
+><DT
+>1. <A
+HREF="t24.html#INTRODUCTION"
+>Introduction to Slony-I</A
+></DT
+><DT
+>2. <A
+HREF="requirements.html"
+>Requirements</A
+></DT
+><DT
+>3. <A
+HREF="installation.html"
+>Slony-I Installation</A
+></DT
+><DT
+>4. <A
+HREF="slonik.html"
+>Slonik</A
+></DT
+><DT
+>5. <A
+HREF="concepts.html"
+>Slony-I Concepts</A
+></DT
+><DT
+>6. <A
+HREF="cluster.html"
+>Defining Slony-I Clusters</A
+></DT
+><DT
+>7. <A
+HREF="x267.html"
+>Defining Slony-I Replication Sets</A
+></DT
+><DT
+>8. <A
+HREF="altperl.html"
+>Slony-I Administration Scripts</A
+></DT
+><DT
+>9. <A
+HREF="slonstart.html"
+>Slon daemons</A
+></DT
+><DT
+>10. <A
+HREF="slonconfig.html"
+>Slon Configuration Options</A
+></DT
+><DT
+>11. <A
+HREF="subscribenodes.html"
+>Subscribing Nodes</A
+></DT
+><DT
+>12. <A
+HREF="monitoring.html"
+>Monitoring</A
+></DT
+><DT
+>13. <A
+HREF="maintenance.html"
+>Slony-I Maintenance</A
+></DT
+><DT
+>14. <A
+HREF="reshape.html"
+>Reshaping a Cluster</A
+></DT
+><DT
+>15. <A
+HREF="failover.html"
+>Doing switchover and failover with Slony-I</A
+></DT
+><DT
+>16. <A
+HREF="listenpaths.html"
+>Slony Listen Paths</A
+></DT
+><DT
+>17. <A
+HREF="addthings.html"
+>Adding Things to Replication</A
+></DT
+><DT
+>18. <A
+HREF="dropthings.html"
+>Dropping things from Slony Replication</A
+></DT
+><DT
+>19. <A
+HREF="ddlchanges.html"
+>Database Schema Changes (DDL)</A
+></DT
+><DT
+>20. <A
+HREF="firstdb.html"
+>Replicating Your First Database</A
+></DT
+><DT
+>21. <A
+HREF="help.html"
+>More Slony-I Help</A
+></DT
+><DT
+>22. <A
+HREF="x931.html"
+>Other Information Sources</A
+></DT
+></DL
+></DIV
+><DIV
+CLASS="SECT1"
+><H1
+CLASS="SECT1"
+><A
+NAME="INTRODUCTION"
+>1. Introduction to Slony-I</A
+></H1
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN28"
+>1.1. Why yet another replication system?</A
+></H2
+><P
+>Slony-I was born from an idea to create a replication system that was not tied
+to a specific version of PostgreSQL, and which could be started and stopped on
+an existing database without the need for a dump/reload cycle.</P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN31"
+>1.2. What Slony-I is</A
+></H2
+><P
+>Slony-I is a <SPAN
+CLASS="QUOTE"
+>"master to multiple slaves"</SPAN
+> replication
+system supporting cascading and slave promotion. The big picture for
+the development of Slony-I is as a master-slave system that includes
+all features and capabilities needed to replicate large databases to a
+reasonably limited number of slave systems. <SPAN
+CLASS="QUOTE"
+>"Reasonable,"</SPAN
+> in this
+context, is probably no more than a few dozen servers. If the number
+of servers grows beyond that, the cost of communications becomes
+prohibitively high.</P
+><P
+> See also <A
+HREF="t24.html#SLONYLISTENERCOSTS"
+> SlonyListenerCosts</A
+> for a further analysis.</P
+><P
+> Slony-I is a system intended for data centers and backup sites,
+where the normal mode of operation is that all nodes are available all
+the time, and where all nodes can be secured. If you have nodes that
+are likely to regularly drop onto and off of the network, or have
+nodes that cannot be kept secure, Slony-I may not be the ideal
+replication solution for you.</P
+><P
+> There are plans for a <SPAN
+CLASS="QUOTE"
+>"file-based log shipping"</SPAN
+>
+extension where updates would be serialized into files. Given that,
+log files could be distributed by any means desired without any need
+of feedback between the provider node and those nodes subscribing via
+<SPAN
+CLASS="QUOTE"
+>"log shipping."</SPAN
+></P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN42"
+>1.3. Slony-I is not</A
+></H2
+><P
+>Slony-I is not a network management system.</P
+><P
+> Slony-I does not have any functionality within it to detect a
+node failure, or automatically promote a node to a master or other
+data origin.</P
+><P
+>Slony-I is not multi-master; it's not a connection broker, and
+it doesn't make you coffee and toast in the morning.</P
+><P
+>(That being said, the plan is for a subsequent system, Slony-II,
+to provide "multimaster" capabilities, and be "bootstrapped" using
+Slony-I. But that is a separate project, and expectations for Slony-I
+should not be based on hopes for future projects.)</P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN48"
+>1.4. Why doesn't Slony-I do automatic fail-over/promotion?</A
+></H2
+><P
+>This is the job of network monitoring software, not Slony.
+Every site's configuration and fail-over path is different. For
+example, keep-alive monitoring with redundant NIC's and intelligent HA
+switches that guarantee race-condition-free takeover of a network
+address and disconnecting the <SPAN
+CLASS="QUOTE"
+>"failed"</SPAN
+> node vary in every
+network setup, vendor choice, hardware/software combination. This is
+clearly the realm of network management software and not
+Slony-I.</P
+><P
+>Let Slony-I do what it does best: provide database replication.</P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN53"
+>1.5. Current Limitations</A
+></H2
+><P
+>Slony-I does not automatically propagate schema changes, nor
+does it have any ability to replicate large objects. There is a
+single common reason for these limitations, namely that Slony-I
+operates using triggers, and neither schema changes nor large object
+operations can raise triggers suitable to tell Slony-I when those
+kinds of changes take place.</P
+><P
+>There is a capability for Slony-I to propagate DDL changes if
+you submit them as scripts via the <B
+CLASS="APPLICATION"
+>slonik</B
+>
+<TT
+CLASS="COMMAND"
+>EXECUTE SCRIPT</TT
+> operation. That is not
+<SPAN
+CLASS="QUOTE"
+>"automatic;"</SPAN
+> you have to construct an SQL DDL script and submit
+it.</P
+><P
+>If you have those sorts of requirements, it may be worth
+exploring the use of <B
+CLASS="APPLICATION"
+>PostgreSQL</B
+> 8.0 PITR (Point In Time
+Recovery), where <ACRONYM
+CLASS="ACRONYM"
+>WAL</ACRONYM
+> logs are replicated to remote
+nodes. Unfortunately, that has two attendant limitations:
+
+<P
+></P
+><UL
+><LI
+><P
+> PITR replicates <SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>all</I
+></SPAN
+> changes in
+<SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>all</I
+></SPAN
+> databases; you cannot exclude data that isn't
+relevant;</P
+></LI
+><LI
+><P
+> A PITR replica remains dormant until you apply logs
+and start up the database. You cannot use the database and apply
+updates simultaneously. It is like having a <SPAN
+CLASS="QUOTE"
+>"standby
+server"</SPAN
+> which cannot be used without it ceasing to be
+<SPAN
+CLASS="QUOTE"
+>"standby."</SPAN
+></P
+></LI
+></UL
+></P
+><P
+>There are a number of distinct models for database replication;
+it is impossible for one replication system to be all things to all
+people.</P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="SLONYLISTENERCOSTS"
+>1.6. Slony-I Communications
+Costs</A
+></H2
+><P
+>The cost of communications grows in a quadratic fashion in
+several directions as the number of replication nodes in a cluster
+increases. Note the following relationships:
+
+<P
+></P
+><UL
+><LI
+><P
+> It is necessary to have a sl_path entry allowing
+connection from each node to every other node. Most will normally not
+need to be used for a given replication configuration, but this means
+that there needs to be n(n-1) paths. It is probable that there will
+be considerable repetition of entries, since the path to "node n" is
+likely to be the same from everywhere in the network.</P
+></LI
+><LI
+><P
+> It is similarly necessary to have a sl_listen entry
+indicating how data flows from every node to every other node. This
+again requires configuring n(n-1) "listener paths."</P
+></LI
+><LI
+><P
+> Each SYNC applied needs to be reported back to all of
+the other nodes participating in the set so that the nodes all know
+that it is safe to purge sl_log_1 and sl_log_2 data, as any
+<SPAN
+CLASS="QUOTE"
+>"forwarding"</SPAN
+> node could potentially take over as <SPAN
+CLASS="QUOTE"
+>"master"</SPAN
+>
+at any time. One might expect SYNC messages to need to travel through
+n/2 nodes to get propagated to their destinations; this means that
+each SYNC is expected to get transmitted n(n/2) times. Again, this
+points to a quadratic growth in communications costs as the number of
+nodes increases.</P
+></LI
+></UL
+></P
+><P
+>This points to it being a bad idea to let the number of nodes, and
+hence the communications network, grow particularly large.
+Up to a half dozen nodes seems pretty reasonable; every time the
+number of nodes doubles, this can be expected to quadruple
+communications overheads.</P
+></DIV
+></DIV
+></DIV
+><DIV
+CLASS="NAVFOOTER"
+><HR
+ALIGN="LEFT"
+WIDTH="100%"><TABLE
+SUMMARY="Footer navigation table"
+WIDTH="100%"
+BORDER="0"
+CELLPADDING="0"
+CELLSPACING="0"
+><TR
+><TD
+WIDTH="33%"
+ALIGN="left"
+VALIGN="top"
+><A
+HREF="slony.html"
+ACCESSKEY="P"
+>Prev</A
+></TD
+><TD
+WIDTH="34%"
+ALIGN="center"
+VALIGN="top"
+><A
+HREF="slony.html"
+ACCESSKEY="H"
+>Home</A
+></TD
+><TD
+WIDTH="33%"
+ALIGN="right"
+VALIGN="top"
+><A
+HREF="requirements.html"
+ACCESSKEY="N"
+>Next</A
+></TD
+></TR
+><TR
+><TD
+WIDTH="33%"
+ALIGN="left"
+VALIGN="top"
+>Slony-I 1.1 Administration</TD
+><TD
+WIDTH="34%"
+ALIGN="center"
+VALIGN="top"
+> </TD
+><TD
+WIDTH="33%"
+ALIGN="right"
+VALIGN="top"
+>Requirements</TD
+></TR
+></TABLE
+></DIV
+></BODY
+></HTML
+>
\ No newline at end of file
Index: addthings.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/addthings.sgml,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/addthings.sgml -Ldoc/adminguide/addthings.sgml -u -w -r1.1 -r1.2
--- doc/adminguide/addthings.sgml
+++ doc/adminguide/addthings.sgml
@@ -1,23 +1,37 @@
-<article id="addingthings"> <title/ Adding Things to Replication/
+<sect1 id="addthings"> <title/ Adding Things to Replication/
-<para>You may discover that you have missed replicating things that you wish you were replicating.
+<para>You may discover that you have missed replicating things that
+you wish you were replicating.
<para>This can be fairly easily remedied.
-<para>You cannot directly use SET ADD TABLE or SET ADD SEQUENCE in
-order to add tables and sequences to a replication set that is
-presently replicating; you must instead create a new replication set.
-Once it is identically subscribed (e.g. - the set of subscribers is
-<emphasis/identical/ to that for the set it is to merge with), the
-sets may be merged together using MERGE SET.
+<para>You cannot directly use <command/SET ADD TABLE/ or <command/SET
+ADD SEQUENCE/ in order to add tables and sequences to a replication
+set that is presently replicating; you must instead create a new
+replication set. Once it is identically subscribed (e.g. - the set of
+subscribers is <emphasis/identical/ to that for the set it is to merge
+with), the sets may be merged together using <command/MERGE SET/.
+
+<para>Up to and including 1.0.2, there is a potential problem where if
+<command/MERGE SET/ is issued when other subscription-related events
+are pending, it is possible for things to get pretty confused on the
+nodes where other things were pending. This problem was resolved in
+1.0.5.
+
+<para>It is suggested that you be very deliberate when adding such
+things. For instance, submitting multiple subscription requests for a
+particular set in one Slonik script often turns out quite badly. If
+it is truly necessary to automate this, you'll probably want to submit
+<command/WAIT FOR EVENT/ requests in between subscription requests in
+order that the Slonik script wait for one subscription to complete
+processing before requesting the next one.
+
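+<para>As a rough sketch (the set and node numbers, the table name,
+and the usual cluster preamble are only examples here), creating and
+then merging such a set might look like:
+
+<para><command>
+create set (id = 2, origin = 1, comment = 'tables added later');
+set add table (set id = 2, origin = 1, id = 10,
+               fully qualified name = 'public.newtable');
+subscribe set (id = 2, provider = 1, receiver = 2, forward = yes);
+wait for event (origin = 1, confirmed = 2);
+subscribe set (id = 2, provider = 1, receiver = 3, forward = yes);
+wait for event (origin = 1, confirmed = 3);
+merge set (id = 1, add id = 2, origin = 1);
+</command>
+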
+<para>But in general, it is likely to be easier to cope with complex
+node reconfigurations by making sure that one change has been
+successfully processed before going on to the next. It's way easier
+to fix one thing that has broken than the interaction of five things
+that have broken.
-<para>Up to and including 1.0.2, there is a potential problem where if MERGE_SET is issued when other subscription-related events are pending, it is possible for things to get pretty confused on the nodes where other things were pending. This problem was resolved in 1.0.5.
-
-<para>It is suggested that you be very deliberate when adding such things. For instance, submitting multiple subscription requests for a particular set in one Slonik script often turns out quite badly. If it is truly necessary to automate this, you'll probably want to submit WAIT FOR EVENT requests in between subscription requests in order that the Slonik script wait for one subscription to complete processing before requesting the next one.
-
-<para>But in general, it is likely to be easier to cope with complex node reconfigurations by making sure that one change has been successfully processed before going on to the next. It's way easier to fix one thing that has broken than the interaction of five things that have broken.
-
-</article>
<!-- Keep this comment at the end of the file
Local variables:
mode:sgml
Index: SlonyFAQ06.txt
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/SlonyFAQ06.txt,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/SlonyFAQ06.txt -Ldoc/adminguide/SlonyFAQ06.txt -u -w -r1.1 -r1.2
--- doc/adminguide/SlonyFAQ06.txt
+++ doc/adminguide/SlonyFAQ06.txt
@@ -1,21 +1 @@
-%META:TOPICINFO{author="guest" date="1099541323" format="1.0" version="1.2"}%
-%META:TOPICPARENT{name="SlonyFAQ"}%
----++ Slonik fails - cannot load PostgreSQL library - PGRES_FATAL_ERROR load '$libdir/xxid';
-
- When I run the sample setup script I get an error message similar
-to:
-
-<verbatim>
-<stdin>:64: PGRES_FATAL_ERROR load '$libdir/xxid'; - ERROR: LOAD:
-could not open file '$libdir/xxid': No such file or directory
-</verbatim>
-
-Evidently, you haven't got the xxid.so library in the $libdir directory that the PostgreSQL instance is using. Note that the Slony-I components need to be installed in the PostgreSQL software installation for _each and every one_ of the nodes, not just on the "master node."
-
-This may also point to there being some other mismatch between the PostgreSQL binary instance and the Slony-I instance. If you compiled
-Slony-I yourself, on a machine that may have multiple PostgreSQL builds "lying around," it's possible that the slon or slonik binaries
-are asking to load something that isn't actually in the library directory for the PostgreSQL database cluster that it's hitting.
-
-Long and short: This points to a need to "audit" what installations of PostgreSQL and Slony you have in place on the machine(s).
-Unfortunately, just about any mismatch will cause things not to link up quite right. See also SlonyFAQ02 concerning threading issues on Solaris
-...
+Moved to SGML
Index: SlonyReshapingCluster.txt
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/SlonyReshapingCluster.txt,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/SlonyReshapingCluster.txt -Ldoc/adminguide/SlonyReshapingCluster.txt -u -w -r1.1 -r1.2
--- doc/adminguide/SlonyReshapingCluster.txt
+++ doc/adminguide/SlonyReshapingCluster.txt
@@ -1,15 +1 @@
-%META:TOPICINFO{author="guest" date="1099633739" format="1.0" version="1.2"}%
-%META:TOPICPARENT{name="SlonyIAdministration"}%
----++ Reshaping A Cluster
-
-If you rearrange the nodes so that they serve different purposes, this will likely lead to the subscribers changing a bit.
-
-This will require doing several things:
-
- * If you want a node that is a subscriber to become the "master" provider for a particular replication set, you will have to issue the slonik MOVE SET operation to change that "master" provider node.
-
- * You may subsequently, or instead, wish to modify the subscriptions of other nodes. You might want to modify a node to get its data from a different provider, or to change it to turn forwarding on or off. This can be accomplished by issuing the slonik SUBSCRIBE SET operation with the new subscription information for the node; Slony-I will change the configuration.
-
- * If the directions of data flows have changed, it is doubtless appropriate to issue a set of DROP LISTEN operations to drop out obsolete paths between nodes and SET LISTEN to add the new ones. At present, this is not changed automatically; at some point, MOVE SET and SUBSCRIBE SET might change the paths as a side-effect. See SlonyListenPaths for more information about this. In version 1.1 and later, it is likely that the generation of sl_listen entries will be entirely automated, where they will be regenerated when changes are made to sl_path or sl_subscribe, thereby making it unnecessary to even think about SET LISTEN.
-
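-As a rough slonik sketch of the first two operations above (set and node numbers are only examples, for a cluster where set 1 moves from node 1 to node 3 and node 2 is then fed from node 3):
-
-<verbatim>
- lock set (id = 1, origin = 1);
- wait for event (origin = 1, confirmed = 3);
- move set (id = 1, old origin = 1, new origin = 3);
- subscribe set (id = 1, provider = 3, receiver = 2, forward = yes);
-</verbatim>
-
-The corresponding STORE LISTEN / DROP LISTEN statements depend on the new shape of the data flows; see SlonyListenPaths for how to work them out.
-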
-The "altperl" toolset includes a "init_cluster.pl" script that is quite up to the task of creating the new SET LISTEN commands; it isn't smart enough to know what listener paths should be dropped.
+Moved to SGML
Index: SlonyAdministrationScripts.txt
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/SlonyAdministrationScripts.txt,v
retrieving revision 1.2
retrieving revision 1.3
diff -Ldoc/adminguide/SlonyAdministrationScripts.txt -Ldoc/adminguide/SlonyAdministrationScripts.txt -u -w -r1.2 -r1.3
--- doc/adminguide/SlonyAdministrationScripts.txt
+++ doc/adminguide/SlonyAdministrationScripts.txt
@@ -1,159 +1 @@
-%META:TOPICINFO{author="guest" date="1101222000" format="1.0" version="1.3"}%
-%META:TOPICPARENT{name="SlonyIAdministration"}%
----++ Slony Administration Scripts
-
-In the "altperl" directory in the CVS tree, there is a sizable set of Perl scripts that may be used to administer a set of Slony-I instances, which support having arbitrary numbers of nodes.
-
-Most of them generate Slonik scripts that are then to be passed on to the slonik utility to be submitted to all of the Slony-I nodes in a particular cluster. At one time, these scripts also ran slonik on the generated scripts themselves. Unfortunately, this turned out to be a pretty large calibre "foot gun," as minor typos on the command line led, on a couple of occasions, to pretty calamitous actions, so the behaviour has been changed so that the scripts simply write their output to standard output. An administrator should review the generated script before submitting it to slonik.
-
----+++ Node/Cluster Configuration - cluster.nodes
-
-The UNIX environment variable SLONYNODES is used to determine what Perl configuration file will be used to control the shape of the nodes in a Slony-I cluster.
-
-What variables are set up...
-
- * $SETNAME=orglogs; # What is the name of the replication set?
- * $LOGDIR='/opt/OXRS/log/LOGDBS'; # What is the base directory for logs?
- * $SLON_BIN_PATH='/opt/dbs/pgsql74/bin'; # Where to look for slony binaries
- * $APACHE_ROTATOR="/opt/twcsds004/OXRS/apache/rotatelogs"; # If set, where to find Apache log rotator
-
-You then define the set of nodes that are to be replicated using a set of calls to add_node().
-<verbatim>
- add_node (host => '10.20.30.40', dbname => 'orglogs', port => 5437,
- user => 'postgres', node => 4, parent => 1);
-</verbatim>
-
-The set of parameters for add_node() are thus:
-<verbatim>
- my %PARAMS = (host=> undef, # Host name
- dbname => 'template1', # database name
- port => 5432, # Port number
- user => 'postgres', # user to connect as
- node => undef, # node number
- password => undef, # password for user
- parent => 1, # which node is parent to this node
- noforward => undef # shall this node be set up to forward results?
- );
-</verbatim>
----+++ Set configuration - cluster.set1, cluster.set2
-
-The UNIX environment variable SLONYSET is used to determine what Perl configuration file will be used to determine what objects will be contained in a particular replication set.
-
-Unlike SLONYNODES, which is essential for ALL of the slonik-generating scripts, this only needs to be set when running create_set.pl, as that is the only script used to control what tables will be in a particular replication set.
-
-What variables are set up...
-
- * $TABLE_ID = 44; Each table must be identified by a unique number; this variable controls where numbering starts
- * @PKEYEDTABLES An array of names of tables to be replicated that have a defined primary key so that Slony-I can automatically select its key
- * %KEYEDTABLES A hash table of tables to be replicated, where the hash index is the table name, and the hash value is the name of a unique not null index suitable as a "candidate primary key."
- * @SERIALTABLES An array of names of tables to be replicated that have no candidate for primary key. Slony-I will add a key field based on a sequence that Slony-I generates
- * @SEQUENCES An array of names of sequences that are to be replicated
-
----+++ build_env.pl
-
-Queries a database, generating output hopefully suitable for
-"slon.env" consisting of:
-
- * a set of add_node() calls to configure the cluster
- * The arrays @KEYEDTABLES, @SERIALTABLES, and @SEQUENCES
-
----+++ create_set.pl
-
-This requires SLONYSET to be set as well as SLONYNODES; it is used to
-generate the Slonik script to set up a replication set consisting of a
-set of tables and sequences that are to be replicated.
-
----+++ drop_node.pl
-
-Generates Slonik script to drop a node from a Slony-I cluster.
-
----+++ drop_set.pl
-
-Generates Slonik script to drop a replication set (e.g. - set of tables and sequences) from a Slony-I cluster.
-
----+++ failover.pl
-
-Generates Slonik script to request failover from a dead node to some new origin
-
----+++ init_cluster.pl
-
-Generates Slonik script to initialize a whole Slony-I cluster,
-including setting up the nodes, communications paths, and the listener
-routing.
-
----+++ merge_sets.pl
-
-Generates Slonik script to merge two replication sets together.
-
----+++ move_set.pl
-
-Generates Slonik script to move the origin of a particular set to a different node.
-
----+++ replication_test.pl
-
-Script to test whether Slony-I is successfully replicating data.
-
----+++ restart_node.pl
-
-Generates Slonik script to request the restart of a node. This was
-particularly useful pre-1.0.5 when nodes could get snarled up when
-slon daemons died.
-
----+++ restart_nodes.pl
-
-Generates Slonik script to restart all nodes in the cluster. Not
-particularly useful...
-
----+++ show_configuration.pl
-
-Displays an overview of how the environment (e.g. - SLONYNODES) is set
-to configure things.
-
----+++ slon_kill.pl
-
-Kills slony watchdog and all slon daemons for the specified set. It
-only works if those processes are running on the local host, of
-course!
-
----+++ slon_pushsql.pl
-
-Generates Slonik script to push DDL changes to a replication set.
-
----+++ slon_start.pl
-
-This starts a slon daemon for the specified cluster and node, and uses
-slon_watchdog.pl to keep it running.
-
----+++ slon_watchdog.pl
-
-Used by slon_start.pl...
-
----+++ slon_watchdog2.pl
-
-This is a somewhat smarter watchdog; it monitors a particular Slony-I
-node, and restarts the slon process if it hasn't seen updates go in in
-20 minutes or more.
-
-This is helpful if there is an unreliable network connection such that
-the slon sometimes stops working without becoming aware of it...
-
----+++ subscribe_set.pl
-
-Generates Slonik script to subscribe a particular node to a particular replication set.
-
----+++ uninstall_nodes.pl
-
-This goes through and drops the Slony-I schema from each node; use
-this if you want to destroy replication throughout a cluster. This is
-a VERY unsafe script!
-
----+++ unsubscribe_set.pl
-
-Generates Slonik script to unsubscribe a node from a replication set.
-
----+++ update_nodes.pl
-
-Generates Slonik script to tell all the nodes to update the Slony-I
-functions. This will typically be needed when you upgrade from one
-version of Slony-I to another.
-
+Moved to SGML
Index: failover.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/failover.sgml,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/failover.sgml -Ldoc/adminguide/failover.sgml -u -w -r1.1 -r1.2
--- doc/adminguide/failover.sgml
+++ doc/adminguide/failover.sgml
@@ -1,6 +1,6 @@
-<article id="failover"> <title/Doing switchover and failover with Slony-I/
+<sect1 id="failover"> <title/Doing switchover and failover with Slony-I/
-<sect1><title/Foreword/
+<sect2><title/Foreword/
<para> Slony-I is an asynchronous replication system. Because of that, it
is almost certain that at the moment the current origin of a set
@@ -21,7 +21,7 @@
the slonik utility and knows at least how to set up a simple
2 node replication system with Slony-I.
-<sect1><title/ Switchover
+<sect2><title/ Switchover
<para> We assume a current "origin" as node1 (AKA master) with one
"subscriber" as node2 (AKA slave). A web application on a third
@@ -60,7 +60,7 @@
whole procedure is executed with exchanged node IDs and the
original configuration is restored.
-<sect1><title/ Failover/
+<sect2><title/ Failover/
<para> Because of the possibility of missing not-yet-replicated
transactions that are committed, failover is the worst thing
@@ -110,14 +110,14 @@
</command>
</itemizedlist>
-<sect1><title/After failover, getting back node1/
+<sect2><title/After failover, getting back node1/
<para>
After the above failover, the data stored on node1 must be
considered out of sync with the rest of the nodes. Therefore, the
only way to get node1 back and transfer the master role to it is
to rebuild it from scratch as a slave, let it catch up and then
follow the switchover procedure.
-</article>
+
<!-- Keep this comment at the end of the file
Local variables:
mode:sgml
Index: monitoring.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/monitoring.sgml,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/monitoring.sgml -Ldoc/adminguide/monitoring.sgml -u -w -r1.1 -r1.2
--- doc/adminguide/monitoring.sgml
+++ doc/adminguide/monitoring.sgml
@@ -1,8 +1,8 @@
-<article id="monitoring"> <title/Monitoring/
+<sect1 id="monitoring"> <title/Monitoring/
<para>Here are some of the things that you may find in your Slony logs, and explanations of what they mean.
-<sect1><title/CONFIG notices/
+<sect2><title/CONFIG notices/
<para>These entries are pretty straightforward. They are informative messages about your configuration.
@@ -18,7 +18,7 @@
CONFIG main: configuration complete - starting threads
</command>
-<sect1><title/DEBUG Notices/
+<sect2><title/DEBUG Notices/
<para>Debug notices are always prefaced by the name of the thread that the notice originates from. You will see messages from the following threads:
@@ -35,3 +35,20 @@
about how the threads work, what to expect in the logs after you run a
slonik command...
+<!-- Keep this comment at the end of the file
+Local variables:
+mode:sgml
+sgml-omittag:nil
+sgml-shorttag:t
+sgml-minimize-attributes:nil
+sgml-always-quote-attributes:t
+sgml-indent-step:1
+sgml-indent-data:t
+sgml-parent-document:nil
+sgml-default-dtd-file:"./reference.ced"
+sgml-exposed-tags:nil
+sgml-local-catalogs:("/usr/lib/sgml/catalog")
+sgml-local-ecat-files:nil
+End:
+-->
+
Index: installation.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/installation.sgml,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/installation.sgml -Ldoc/adminguide/installation.sgml -u -w -r1.1 -r1.2
--- doc/adminguide/installation.sgml
+++ doc/adminguide/installation.sgml
@@ -1,70 +1,71 @@
-<article id="installation"> <title/ Slony-I Installation
+<sect1 id="installation"> <title> Slony-I Installation</title>
-<para>You should have obtained the Slony-I source from the previous step. Unpack it.
+<para>You should have obtained the Slony-I source from the previous step. Unpack it.</para>
<para><command>
gunzip slony.tar.gz;
tar xf slony.tar
-</command>
+</command></para>
<para> This will create a directory Slony-I under the current
directory with the Slony-I sources. Head into that directory for
-the rest of the installation procedure.
+the rest of the installation procedure.</para>
-<sect1><title/ Short Version/
+<sect2><title> Short Version</title>
<para>
-<command>./configure --with-pgsourcetree=/whereever/the/source/is </command>
-<para> <command> gmake all; gmake install </command>
+<command>./configure --with-pgsourcetree=/wherever/the/source/is </command></para>
+<para> <command> gmake all; gmake install </command></para></sect2>
-<sect1><title/ Configuration/
+<sect2><title> Configuration</title>
<para>The first step of the installation procedure is to configure the source tree
for your system. This is done by running the configure script. Configure
needs to know where your PostgreSQL source tree is; this is done with the
---with-pgsourcetree= option.
+--with-pgsourcetree= option.</para></sect2>
-<sect1><title/ Example/
+<sect2><title> Example</title>
<para> <command>
./configure --with-pgsourcetree=/usr/local/src/postgresql-7.4.3
-</command>
+</command></para>
<para>This script will run a number of tests to guess values for
various dependent variables and try to detect some quirks of your
system. Slony-I is known to need a modified version of libpq on
specific platforms such as Solaris2.X on SPARC; this patch can be found
-at <ulink url=
-"http://developer.postgresql.org/~wieck/slony1/download/threadsafe-libpq-742.diff.gz">
-http://developer.postgresql.org/~wieck/slony1/download/threadsafe-libpq-742.diff.gz</ulinK>
+at <ulink url="http://developer.postgresql.org/~wieck/slony1/download/threadsafe-libpq-742.diff.gz">
+http://developer.postgresql.org/~wieck/slony1/download/threadsafe-libpq-742.diff.gz</ulink></para></sect2>
-<sect1><title/ Build/
+<sect2><title> Build</title>
<para>To start the build process, type
<command>
gmake all
-</command>
+</command></para>
-<para> Be sure to use GNU make; on BSD systems, it is called gmake; on Linux, GNU make is typically the native "make", so the name of the command you type in may vary somewhat. The build may take anywhere from a few seconds to 2 minutes depending on how fast your hardware is at compiling things. The last line displayed should be
+<para> Be sure to use GNU make; on BSD systems, it is called gmake; on Linux, GNU make is typically the native "make", so the name of the command you type in may vary somewhat. The build may take anywhere from a few seconds to 2 minutes depending on how fast your hardware is at compiling things. The last line displayed should be</para>
<para> <command>
All of Slony-I is successfully made. Ready to install.
-</command>
+</command></para></sect2>
-<sect1><title/ Installing Slony-I/
+<sect2><title> Installing Slony-I</title>
<para> To install Slony-I, enter
<command>
gmake install
-</command>
+</command></para>
<para>This will install files into the PostgreSQL install directory as
-specified by the --prefix option used in the PostgreSQL configuration.
-Make sure you have appropriate permissions to write into that area.
-Normally you need to do this either as root or as the postgres user.
+specified by the <option>--prefix</option> option used in the PostgreSQL
+configuration. Make sure you have appropriate permissions to write
+into that area. Normally you need to do this either as root or as the
+postgres user.
+</para></sect2></sect1>
<!-- Keep this comment at the end of the file
Local variables:
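
Taken together, the build steps documented in the installation.sgml hunk above amount to the following shell session. This is a sketch only: the PostgreSQL source path comes from the example in the text, and the name of the unpacked directory varies by release.

gunzip slony.tar.gz
tar xf slony.tar
cd slony1-*                     # the directory created by tar; its name varies
./configure --with-pgsourcetree=/usr/local/src/postgresql-7.4.3
gmake all                       # GNU make; plain "make" on most Linux systems
gmake install                   # as root or as the postgres user
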
Index: SlonyFAQ05.txt
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/SlonyFAQ05.txt,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/SlonyFAQ05.txt -Ldoc/adminguide/SlonyFAQ05.txt -u -w -r1.1 -r1.2
--- doc/adminguide/SlonyFAQ05.txt
+++ doc/adminguide/SlonyFAQ05.txt
@@ -1,9 +1 @@
-%META:TOPICINFO{author="guest" date="1098326597" format="1.0" version="1.1"}%
-%META:TOPICPARENT{name="SlonyFAQ"}%
----++ ps finds passwords on command line
-
- If I run a "ps" command, I, and everyone else, can see passwords
-on the command line.
-
-Take the passwords out of the Slony configuration, and put them into
-$(HOME)/.pgpass.
+Moved to SGML
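
The fix described in the removed FAQ entry above (moving passwords off the slon command line) relies on libpq's password file. A sample entry, with placeholder host, database, user, and password; the file must be mode 0600 or libpq will ignore it:

# $HOME/.pgpass -- format is hostname:port:database:username:password
node1.example.com:5432:mydb:slony:secretpassword
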
--- /dev/null
+++ doc/adminguide/slonstart.html
@@ -0,0 +1,308 @@
+<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN""http://www.w3.org/TR/html4/loose.dtd">
+<HTML
+><HEAD
+><TITLE
+>Slon daemons</TITLE
+><META
+NAME="GENERATOR"
+CONTENT="Modular DocBook HTML Stylesheet Version 1.79"><LINK
+REV="MADE"
+HREF="mailto:cbbrowne at gmail.com"><LINK
+REL="HOME"
+TITLE="Slony-I 1.1 Administration"
+HREF="slony.html"><LINK
+REL="UP"
+HREF="t24.html"><LINK
+REL="PREVIOUS"
+TITLE=" Slony-I Administration Scripts"
+HREF="altperl.html"><LINK
+REL="NEXT"
+TITLE="Slon Configuration Options"
+HREF="slonconfig.html"><LINK
+REL="STYLESHEET"
+TYPE="text/css"
+HREF="stdstyle.css"><META
+HTTP-EQUIV="Content-Type"></HEAD
+><BODY
+CLASS="SECT1"
+BGCOLOR="#FFFFFF"
+TEXT="#000000"
+LINK="#0000FF"
+VLINK="#840084"
+ALINK="#0000FF"
+><DIV
+CLASS="NAVHEADER"
+><TABLE
+SUMMARY="Header navigation table"
+WIDTH="100%"
+BORDER="0"
+CELLPADDING="0"
+CELLSPACING="0"
+><TR
+><TH
+COLSPAN="3"
+ALIGN="center"
+>Slony-I 1.1 Administration</TH
+></TR
+><TR
+><TD
+WIDTH="10%"
+ALIGN="left"
+VALIGN="bottom"
+><A
+HREF="altperl.html"
+ACCESSKEY="P"
+>Prev</A
+></TD
+><TD
+WIDTH="80%"
+ALIGN="center"
+VALIGN="bottom"
+></TD
+><TD
+WIDTH="10%"
+ALIGN="right"
+VALIGN="bottom"
+><A
+HREF="slonconfig.html"
+ACCESSKEY="N"
+>Next</A
+></TD
+></TR
+></TABLE
+><HR
+ALIGN="LEFT"
+WIDTH="100%"></DIV
+><DIV
+CLASS="SECT1"
+><H1
+CLASS="SECT1"
+><A
+NAME="SLONSTART"
+>9. Slon daemons</A
+></H1
+><P
+>The programs that actually perform Slony-I replication are the
+<B
+CLASS="APPLICATION"
+>slon</B
+> daemons. </P
+><P
+>You need to run one <B
+CLASS="APPLICATION"
+>slon</B
+> instance for each node in
+a Slony-I cluster, whether you consider that node a <SPAN
+CLASS="QUOTE"
+>"master"</SPAN
+> or
+a <SPAN
+CLASS="QUOTE"
+>"slave."</SPAN
+> Since a <TT
+CLASS="COMMAND"
+>MOVE SET</TT
+> or <TT
+CLASS="COMMAND"
+>FAILOVER</TT
+> can
+switch the roles of nodes, slon needs to be able to function for both
+providers and subscribers. It is not essential that these daemons run
+on any particular host, but there are some principles worth
+considering:
+
+<P
+></P
+><UL
+><LI
+><P
+> Each slon needs to be able to communicate quickly
+with the database whose <SPAN
+CLASS="QUOTE"
+>"node controller"</SPAN
+> it is. Therefore, if
+a Slony-I cluster runs across some form of Wide Area Network, each
+slon process should run on or near the database it is
+controlling. If you break this rule, no particular disaster should
+ensue, but the added latency introduced to monitoring events on the
+slon's <SPAN
+CLASS="QUOTE"
+>"own node"</SPAN
+> will cause it to replicate in a
+<SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>somewhat</I
+></SPAN
+> less timely manner. </P
+></LI
+><LI
+><P
+> The fastest results would be achieved by having each
+slon run on the database server that it is servicing. If it runs
+somewhere within a fast local network, performance will not be
+noticeably degraded. </P
+></LI
+><LI
+><P
+> It is an attractive idea to run many of the
+<B
+CLASS="APPLICATION"
+>slon</B
+> processes for a cluster on one machine, as this
+makes it easy to monitor them both in terms of log files and process
+tables from one location. This eliminates the need to log in to
+several hosts in order to look at log files or to restart <B
+CLASS="APPLICATION"
+>slon</B
+>
+instances. </P
+></LI
+></UL
+> </P
+><P
+>There are two <SPAN
+CLASS="QUOTE"
+>"watchdog"</SPAN
+> scripts currently available:
+
+<P
+></P
+><UL
+><LI
+><P
+> <TT
+CLASS="FILENAME"
+>tools/altperl/slon_watchdog.pl</TT
+> -
+an <SPAN
+CLASS="QUOTE"
+>"early"</SPAN
+> version that basically wraps a loop around the
+invocation of <B
+CLASS="APPLICATION"
+>slon</B
+>, restarting any time it falls over </P
+></LI
+><LI
+><P
+> <TT
+CLASS="FILENAME"
+>tools/altperl/slon_watchdog2.pl</TT
+>
+- a somewhat more intelligent version that periodically polls the
+database, checking to see if a <TT
+CLASS="COMMAND"
+>SYNC</TT
+> has taken place
+recently. We have had VPN connections that occasionally fall over
+without signalling the application, so that the <B
+CLASS="APPLICATION"
+>slon</B
+>
+stops working, but doesn't actually die; this polling addresses that
+issue. </P
+></LI
+></UL
+> </P
+><P
+>The <TT
+CLASS="FILENAME"
+>slon_watchdog2.pl</TT
+> script is probably
+<SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>usually</I
+></SPAN
+> the preferable thing to run. It was at one point
+not preferable to run it whilst subscribing a very large replication
+set where it is expected to take many hours to do the initial
+<TT
+CLASS="COMMAND"
+>COPY SET</TT
+>. The problem that came up in that case was that it
+figured that since it hadn't done a <TT
+CLASS="COMMAND"
+>SYNC</TT
+> in 2 hours,
+something was broken, requiring a restart of slon, thereby restarting the
+<TT
+CLASS="COMMAND"
+>COPY SET</TT
+> event. More recently, the script has been changed
+to detect <TT
+CLASS="COMMAND"
+>COPY SET</TT
+> in progress.
+
+
+ </P
+></DIV
+><DIV
+CLASS="NAVFOOTER"
+><HR
+ALIGN="LEFT"
+WIDTH="100%"><TABLE
+SUMMARY="Footer navigation table"
+WIDTH="100%"
+BORDER="0"
+CELLPADDING="0"
+CELLSPACING="0"
+><TR
+><TD
+WIDTH="33%"
+ALIGN="left"
+VALIGN="top"
+><A
+HREF="altperl.html"
+ACCESSKEY="P"
+>Prev</A
+></TD
+><TD
+WIDTH="34%"
+ALIGN="center"
+VALIGN="top"
+><A
+HREF="slony.html"
+ACCESSKEY="H"
+>Home</A
+></TD
+><TD
+WIDTH="33%"
+ALIGN="right"
+VALIGN="top"
+><A
+HREF="slonconfig.html"
+ACCESSKEY="N"
+>Next</A
+></TD
+></TR
+><TR
+><TD
+WIDTH="33%"
+ALIGN="left"
+VALIGN="top"
+>Slony-I Administration Scripts</TD
+><TD
+WIDTH="34%"
+ALIGN="center"
+VALIGN="top"
+><A
+HREF="t24.html"
+ACCESSKEY="U"
+>Up</A
+></TD
+><TD
+WIDTH="33%"
+ALIGN="right"
+VALIGN="top"
+>Slon Configuration Options</TD
+></TR
+></TABLE
+></DIV
+></BODY
+></HTML
+>
\ No newline at end of file
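
Starting the per-node daemons described in slonstart.html above is a matter of running slon with the cluster name and a conninfo string for that node's database. A sketch, with placeholder cluster name, conninfo strings, and log locations:

# one slon per node; adjust cluster name, conninfo, and log paths
nohup slon mycluster "dbname=mydb host=node1 user=slony" > /var/log/slony/node1.log 2>&1 &
nohup slon mycluster "dbname=mydb host=node2 user=slony" > /var/log/slony/node2.log 2>&1 &
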
--- /dev/null
+++ doc/adminguide/listenpaths.html
@@ -0,0 +1,354 @@
+<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN""http://www.w3.org/TR/html4/loose.dtd">
+<HTML
+><HEAD
+><TITLE
+> Slony Listen Paths</TITLE
+><META
+NAME="GENERATOR"
+CONTENT="Modular DocBook HTML Stylesheet Version 1.79"><LINK
+REV="MADE"
+HREF="mailto:cbbrowne at gmail.com"><LINK
+REL="HOME"
+TITLE="Slony-I 1.1 Administration"
+HREF="slony.html"><LINK
+REL="UP"
+HREF="t24.html"><LINK
+REL="PREVIOUS"
+TITLE="Doing switchover and failover with Slony-I"
+HREF="failover.html"><LINK
+REL="NEXT"
+TITLE=" Adding Things to Replication"
+HREF="addthings.html"><LINK
+REL="STYLESHEET"
+TYPE="text/css"
+HREF="stdstyle.css"><META
+HTTP-EQUIV="Content-Type"></HEAD
+><BODY
+CLASS="SECT1"
+BGCOLOR="#FFFFFF"
+TEXT="#000000"
+LINK="#0000FF"
+VLINK="#840084"
+ALINK="#0000FF"
+><DIV
+CLASS="NAVHEADER"
+><TABLE
+SUMMARY="Header navigation table"
+WIDTH="100%"
+BORDER="0"
+CELLPADDING="0"
+CELLSPACING="0"
+><TR
+><TH
+COLSPAN="3"
+ALIGN="center"
+>Slony-I 1.1 Administration</TH
+></TR
+><TR
+><TD
+WIDTH="10%"
+ALIGN="left"
+VALIGN="bottom"
+><A
+HREF="failover.html"
+ACCESSKEY="P"
+>Prev</A
+></TD
+><TD
+WIDTH="80%"
+ALIGN="center"
+VALIGN="bottom"
+></TD
+><TD
+WIDTH="10%"
+ALIGN="right"
+VALIGN="bottom"
+><A
+HREF="addthings.html"
+ACCESSKEY="N"
+>Next</A
+></TD
+></TR
+></TABLE
+><HR
+ALIGN="LEFT"
+WIDTH="100%"></DIV
+><DIV
+CLASS="SECT1"
+><H1
+CLASS="SECT1"
+><A
+NAME="LISTENPATHS"
+>16. Slony Listen Paths</A
+></H1
+><P
+>If you have more than two or three nodes, and any degree of usage of cascaded subscribers (e.g., subscribers that are subscribing through a subscriber node), you will have to be fairly careful about the configuration of "listen paths" via the Slonik STORE LISTEN and DROP LISTEN statements that control the contents of the table sl_listen. </P
+><P
+>The "listener" entries in this table control where each node expects to listen in order to get events propagated from other nodes. You might think that nodes only need to listen to the "parent" from whom they are getting updates, but in reality, they need to be able to receive messages from _all_ nodes in order to be able to conclude that SYNCs have been received everywhere, and that, therefore, entries in sl_log_1 and sl_log_2 have been applied everywhere, and can therefore be purged. </P
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN666"
+>16.1. How Listening Can Break</A
+></H2
+><P
+>On one occasion, I had a need to drop a subscriber node (#2) and recreate it. That node was the data provider for another subscriber (#3) that was, in effect, a "cascaded slave." Dropping the subscriber node initially didn't work, as slonik informed me that there was a dependent node. I repointed the dependent node to the "master" node for the subscription set, which, for a while, replicated without difficulties. </P
+><P
+>I then dropped the subscription on "node 2," and started resubscribing it. That raised the Slony-I SET_SUBSCRIPTION event, which started copying tables. At that point in time, events stopped propagating to "node 3," and although that node was in perfectly good shape, no events were reaching it. </P
+><P
+>The problem was that node #3 was expecting to receive events from node #2, which was busy processing the SET_SUBSCRIPTION event, and was not passing anything else on. </P
+><P
+>We dropped the listener rules that caused node #3 to listen to node 2, replacing them with rules where it expected its events to come from node #1 (the "master" provider node for the replication set). At that moment, "as if by magic," node #3 started replicating again, as it discovered a place to get SYNC events. </P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN672"
+>16.2. How The Listen Configuration Should Look</A
+></H2
+><P
+>The simple cases tend to be simple to cope with. We'll look at a fairly complex set of nodes. </P
+><P
+>Consider a set of nodes, 1 thru 6, where 1 is the "master," where 2-4 subscribe directly to the master, and where 5 subscribes to 2, and 6 subscribes to 5. </P
+><P
+>Here is a "listener network" that indicates where each node should listen for messages coming from each other node: </P
+><P
+><TT
+CLASS="COMMAND"
+> 1| 2| 3| 4| 5| 6|
+--------------------------------------------
+ 1 0 2 3 4 2 2
+ 2 1 0 1 1 5 5
+ 3 1 1 0 1 1 1
+ 4 1 1 1 0 1 1
+ 5 2 2 2 2 0 6
+ 6 5 5 5 5 5 0 </TT
+> </P
+><P
+>Row 2 indicates all of the listen rules for node 2; it gets events for nodes 1, 3, and 4 through node 1, and gets events for nodes 5 and 6 from node 5. </P
+><P
+>The row of 5's at the bottom, for node 6, indicates that node 6 listens to node 5 to get events from nodes 1-5. </P
+><P
+>The set of slonik STORE LISTEN statements to express this "listener network" is as follows: </P
+><P
+><TT
+CLASS="COMMAND"
+> store listen (origin = 1, receiver = 2, provider = 1);
+
+ store listen (origin = 1, receiver = 3, provider = 1);
+
+ store listen (origin = 1, receiver = 4, provider = 1);
+
+ store listen (origin = 1, receiver = 5, provider = 2);
+
+ store listen (origin = 1, receiver = 6, provider = 5);
+
+ store listen (origin = 2, receiver = 1, provider = 2);
+
+ store listen (origin = 2, receiver = 3, provider = 1);
+
+ store listen (origin = 2, receiver = 4, provider = 1);
+
+ store listen (origin = 2, receiver = 5, provider = 2);
+
+ store listen (origin = 2, receiver = 6, provider = 5);
+
+ store listen (origin = 3, receiver = 1, provider = 3);
+
+ store listen (origin = 3, receiver = 2, provider = 1);
+
+ store listen (origin = 3, receiver = 4, provider = 1);
+
+ store listen (origin = 3, receiver = 5, provider = 2);
+
+ store listen (origin = 3, receiver = 6, provider = 5);
+
+ store listen (origin = 4, receiver = 1, provider = 4);
+
+ store listen (origin = 4, receiver = 2, provider = 1);
+
+ store listen (origin = 4, receiver = 3, provider = 1);
+
+ store listen (origin = 4, receiver = 5, provider = 2);
+
+ store listen (origin = 4, receiver = 6, provider = 5);
+
+ store listen (origin = 5, receiver = 1, provider = 2);
+
+ store listen (origin = 5, receiver = 2, provider = 5);
+
+ store listen (origin = 5, receiver = 3, provider = 1);
+
+ store listen (origin = 5, receiver = 4, provider = 1);
+
+ store listen (origin = 5, receiver = 6, provider = 5);
+
+ store listen (origin = 6, receiver = 1, provider = 2);
+
+ store listen (origin = 6, receiver = 2, provider = 5);
+
+ store listen (origin = 6, receiver = 3, provider = 1);
+
+ store listen (origin = 6, receiver = 4, provider = 1);
+
+ store listen (origin = 6, receiver = 5, provider = 6); </TT
+> </P
+><P
+>These listen statements are read as follows: </P
+><P
+>When on the "receiver" node, look to the "provider" node to provide events coming from the "origin" node. </P
+><P
+>The tool "init_cluster.pl" in the "altperl" scripts produces optimized listener networks in both the tabular form shown above as well as in the form of Slonik statements. </P
+><P
+>There are three "thorns" in this set of roses:
+<P
+></P
+><UL
+><LI
+><P
+> If you change the shape of the node set, so that the nodes subscribe differently to things, you need to drop sl_listen entries and create new ones to indicate the new preferred paths between nodes. There is no automated way at this point to do this "reshaping."
+
+ </P
+></LI
+><LI
+><P
+> If you don't change the sl_listen entries, events will likely continue to propagate so long as all of the nodes continue to run well. The problem will only be noticed when a node is taken down, "orphaning" any nodes that are listening through it.
+
+ </P
+></LI
+><LI
+><P
+> You might have multiple replication sets that have different shapes for their respective trees of subscribers. There won't be a single "best" listener configuration in that case.
+
+
+
+ </P
+></LI
+><LI
+><P
+> In order for there to be an sl_listen path, there must be a series of sl_path entries connecting the origin to the receiver. This means that if the contents of sl_path do not express a "connected" network of nodes, then some nodes will not be reachable. This would typically happen, in practice, when you have two sets of nodes, one in one subnet, and another in another subnet, where there are only a couple of "firewall" nodes that can talk between the subnets. Cut out those nodes and the subnets stop communicating. </P
+></LI
+></UL
+> </P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN697"
+>16.3. Open Question</A
+></H2
+><P
+>I am not certain what happens if you have multiple listen path entries for one path, that is, if you set up entries allowing a node to listen to multiple receivers to get events from a particular origin. Further commentary on that would be appreciated! </P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN700"
+>16.4. Generating listener entries via heuristics</A
+></H2
+><P
+>It ought to be possible to generate sl_listen entries dynamically, based on the following heuristics. Hopefully this will take place in version 1.1, eliminating the need to configure this by hand. </P
+><P
+>Configuration will (tentatively) be controlled based on two data sources:
+<P
+></P
+><UL
+><LI
+><P
+> sl_subscribe entries are the first, most vital control as to what listens to what; we know there must be a "listen" entry for a subscriber node to listen to its provider for events from the provider, and there should be direct "listening" taking place between subscriber and provider.
+
+ </P
+></LI
+><LI
+><P
+> sl_path entries are the second indicator; if sl_subscribe has not already indicated "how to listen," then a node may listen directly to the event's origin if there is a suitable sl_path entry.
+
+ </P
+></LI
+><LI
+><P
+> If there is no guidance thus far based on the above data sources, then nodes can listen indirectly if there is an sl_path entry that points to a suitable sl_listen entry... </P
+></LI
+></UL
+> </P
+><P
+> A stored procedure would run on each node, rewriting sl_listen
+each time sl_subscribe or sl_path are modified.
+
+
+ </P
+></DIV
+></DIV
+><DIV
+CLASS="NAVFOOTER"
+><HR
+ALIGN="LEFT"
+WIDTH="100%"><TABLE
+SUMMARY="Footer navigation table"
+WIDTH="100%"
+BORDER="0"
+CELLPADDING="0"
+CELLSPACING="0"
+><TR
+><TD
+WIDTH="33%"
+ALIGN="left"
+VALIGN="top"
+><A
+HREF="failover.html"
+ACCESSKEY="P"
+>Prev</A
+></TD
+><TD
+WIDTH="34%"
+ALIGN="center"
+VALIGN="top"
+><A
+HREF="slony.html"
+ACCESSKEY="H"
+>Home</A
+></TD
+><TD
+WIDTH="33%"
+ALIGN="right"
+VALIGN="top"
+><A
+HREF="addthings.html"
+ACCESSKEY="N"
+>Next</A
+></TD
+></TR
+><TR
+><TD
+WIDTH="33%"
+ALIGN="left"
+VALIGN="top"
+>Doing switchover and failover with Slony-I</TD
+><TD
+WIDTH="34%"
+ALIGN="center"
+VALIGN="top"
+><A
+HREF="t24.html"
+ACCESSKEY="U"
+>Up</A
+></TD
+><TD
+WIDTH="33%"
+ALIGN="right"
+VALIGN="top"
+>Adding Things to Replication</TD
+></TR
+></TABLE
+></DIV
+></BODY
+></HTML
+>
\ No newline at end of file
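
To see what listen network a node is actually using, the sl_listen table described in listenpaths.html above can be queried directly on that node. A sketch, assuming the cluster is named mycluster (so its namespace is _mycluster):

-- Which provider does each receiver listen to for events from each origin?
select li_origin, li_provider, li_receiver
  from _mycluster.sl_listen
 order by li_receiver, li_origin;
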
--- /dev/null
+++ doc/adminguide/slonik.html
@@ -0,0 +1,214 @@
+<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN""http://www.w3.org/TR/html4/loose.dtd">
+<HTML
+><HEAD
+><TITLE
+>Slonik</TITLE
+><META
+NAME="GENERATOR"
+CONTENT="Modular DocBook HTML Stylesheet Version 1.79"><LINK
+REV="MADE"
+HREF="mailto:cbbrowne at gmail.com"><LINK
+REL="HOME"
+TITLE="Slony-I 1.1 Administration"
+HREF="slony.html"><LINK
+REL="UP"
+HREF="t24.html"><LINK
+REL="PREVIOUS"
+TITLE=" Slony-I Installation"
+HREF="installation.html"><LINK
+REL="NEXT"
+TITLE="Slony-I Concepts"
+HREF="concepts.html"><LINK
+REL="STYLESHEET"
+TYPE="text/css"
+HREF="stdstyle.css"><META
+HTTP-EQUIV="Content-Type"></HEAD
+><BODY
+CLASS="SECT1"
+BGCOLOR="#FFFFFF"
+TEXT="#000000"
+LINK="#0000FF"
+VLINK="#840084"
+ALINK="#0000FF"
+><DIV
+CLASS="NAVHEADER"
+><TABLE
+SUMMARY="Header navigation table"
+WIDTH="100%"
+BORDER="0"
+CELLPADDING="0"
+CELLSPACING="0"
+><TR
+><TH
+COLSPAN="3"
+ALIGN="center"
+>Slony-I 1.1 Administration</TH
+></TR
+><TR
+><TD
+WIDTH="10%"
+ALIGN="left"
+VALIGN="bottom"
+><A
+HREF="installation.html"
+ACCESSKEY="P"
+>Prev</A
+></TD
+><TD
+WIDTH="80%"
+ALIGN="center"
+VALIGN="bottom"
+></TD
+><TD
+WIDTH="10%"
+ALIGN="right"
+VALIGN="bottom"
+><A
+HREF="concepts.html"
+ACCESSKEY="N"
+>Next</A
+></TD
+></TR
+></TABLE
+><HR
+ALIGN="LEFT"
+WIDTH="100%"></DIV
+><DIV
+CLASS="SECT1"
+><H1
+CLASS="SECT1"
+><A
+NAME="SLONIK"
+>4. Slonik</A
+></H1
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN206"
+>4.1. Introduction</A
+></H2
+><P
+>Slonik is a command line utility designed specifically to setup
+and modify configurations of the Slony-I replication system.</P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN209"
+>4.2. General outline</A
+></H2
+><P
+>The slonik command line utility is intended to be embedded in
+shell scripts; it reads commands from files or stdin.</P
+><P
+>It reads a set of Slonik statements, which are written in a
+scripting language with syntax similar to that of SQL, and performs
+the set of configuration changes on the slony nodes specified in the
+script.</P
+><P
+>Nearly all of the real configuration work is actually done by
+calling stored procedures after loading the Slony-I support base into
+a database. Slonik was created because these stored procedures have
+special requirements as to which particular node in the replication
+system they must be called on. The absence of named parameters for stored
+procedures makes it rather hard to do this from the psql prompt, and
+psql lacks the ability to maintain multiple connections with open
+transactions to multiple databases.</P
+><P
+>The format of the Slonik "language" is very similar to that of
+SQL, and the parser is based on a similar set of formatting rules for
+such things as numbers and strings. Note that slonik is declarative,
+using literal values throughout, and does <SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>not</I
+></SPAN
+> have the
+notion of variables. It is anticipated that Slonik scripts will
+typically be <SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>generated</I
+></SPAN
+> by scripts, such as Bash or Perl,
+and these sorts of scripting languages already have perfectly good
+ways of managing variables.</P
+><P
+>A detailed list of Slonik commands can be found here: <A
+HREF="http://gborg.postgresql.org/project/slony1/genpage.php?slonik_commands"
+TARGET="_top"
+>slonik commands </A
+></P
+></DIV
+></DIV
+><DIV
+CLASS="NAVFOOTER"
+><HR
+ALIGN="LEFT"
+WIDTH="100%"><TABLE
+SUMMARY="Footer navigation table"
+WIDTH="100%"
+BORDER="0"
+CELLPADDING="0"
+CELLSPACING="0"
+><TR
+><TD
+WIDTH="33%"
+ALIGN="left"
+VALIGN="top"
+><A
+HREF="installation.html"
+ACCESSKEY="P"
+>Prev</A
+></TD
+><TD
+WIDTH="34%"
+ALIGN="center"
+VALIGN="top"
+><A
+HREF="slony.html"
+ACCESSKEY="H"
+>Home</A
+></TD
+><TD
+WIDTH="33%"
+ALIGN="right"
+VALIGN="top"
+><A
+HREF="concepts.html"
+ACCESSKEY="N"
+>Next</A
+></TD
+></TR
+><TR
+><TD
+WIDTH="33%"
+ALIGN="left"
+VALIGN="top"
+>Slony-I Installation</TD
+><TD
+WIDTH="34%"
+ALIGN="center"
+VALIGN="top"
+><A
+HREF="t24.html"
+ACCESSKEY="U"
+>Up</A
+></TD
+><TD
+WIDTH="33%"
+ALIGN="right"
+VALIGN="top"
+>Slony-I Concepts</TD
+></TR
+></TABLE
+></DIV
+></BODY
+></HTML
+>
\ No newline at end of file
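
As slonik.html above notes, slonik is normally embedded in a shell script and fed statements on stdin. A minimal sketch; the cluster name and conninfo strings are placeholders, and the store path statement is just one example of a configuration statement:

#!/bin/sh
slonik <<_EOF_
# preamble: name the cluster and tell slonik how to reach each node
cluster name = mycluster;
node 1 admin conninfo = 'dbname=mydb host=node1 user=slony';
node 2 admin conninfo = 'dbname=mydb host=node2 user=slony';

# configuration statements follow, e.g. defining a communication path
store path (server = 1, client = 2,
            conninfo = 'dbname=mydb host=node1 user=slony', connretry = 10);
_EOF_
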
Index: dropthings.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/dropthings.sgml,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/dropthings.sgml -Ldoc/adminguide/dropthings.sgml -u -w -r1.1 -r1.2
--- doc/adminguide/dropthings.sgml
+++ doc/adminguide/dropthings.sgml
@@ -1,8 +1,8 @@
-<article id="dropthings"> <title/ Dropping things from Slony Replication/
+<sect1 id="dropthings"> <title/ Dropping things from Slony Replication/
<para>There are several things you might want to do involving dropping things from Slony-I replication.
-<sect1><title/ Dropping A Whole Node/
+<sect2><title/ Dropping A Whole Node/
<para>If you wish to drop an entire node from replication, the Slonik command DROP NODE should do the trick.
@@ -16,31 +16,51 @@
<para>SlonyFAQ17 documents some extra maintenance that may need to be done on sl_confirm if you are running versions prior to 1.0.5.
-<sect1><title/ Dropping An Entire Set/
+<sect2><title/ Dropping An Entire Set/
-<para>If you wish to stop replicating a particular replication set, the Slonik command DROP SET is what you need to use.
+<para>If you wish to stop replicating a particular replication set,
+the Slonik command <command/DROP SET/ is what you need to use.
-<para>Much as with DROP NODE, this leads to Slony-I dropping the Slony-I triggers on the tables and restoring "native" triggers. One difference is that this takes place on *all* nodes in the cluster, rather than on just one node. Another difference is that this does not clear out the Slony-I cluster's namespace, as there might be other sets being serviced.
-
-<para>This operation is quite a bit more dangerous than DROP NODE, as there _isn't_ the same sort of "failsafe." If you tell DROP SET to drop the _wrong_ set, there isn't anything to prevent "unfortunate results."
-
-<sect1><title/ Unsubscribing One Node From One Set/
-
-<para>The UNSUBSCRIBE SET operation is a little less invasive than either DROP SET or DROP NODE; it involves dropping Slony-I triggers and restoring "native" triggers on one node, for one replication set.
-
-<para>Much like with DROP NODE, this operation will fail if there is a node subscribing to the set on this node.
-
-<sect1><title/ Warning!!!/
-
-<para>For all of the above operations, "turning replication back on" will require that the node copy in a <emphasis/full/ fresh set of the data on a provider. The fact that the data was recently being replicated isn't good enough; Slony-I will expect to refresh the data from scratch.
-
-<sect1><title/ Dropping A Table From A Set/
-
-<para>In Slony 1.0.5 and above, there is a Slonik command SET DROP TABLE that allows dropping a single table from replication without forcing the user to drop the entire replication set.
-
-<para>If you are running an earlier version, there is a "hack" to do this:
-
-<para>You can fiddle this by hand by finding the table ID for the table you want to get rid of, which you can find in sl_table, and then run the following three queries, on each host:
+<para>Much as with <command/DROP NODE/, this leads to Slony-I dropping
+the Slony-I triggers on the tables and restoring <quote/native/
+triggers. One difference is that this takes place on <emphasis/all/
+nodes in the cluster, rather than on just one node. Another
+difference is that this does not clear out the Slony-I cluster's
+namespace, as there might be other sets being serviced.
+
+<para>This operation is quite a bit more dangerous than <command/DROP
+NODE/, as there <emphasis/isn't/ the same sort of "failsafe." If you
+tell <command/DROP SET/ to drop the <emphasis/wrong/ set, there isn't
+anything to prevent "unfortunate results."
+
+<sect2><title/ Unsubscribing One Node From One Set/
+
+<para>The <command/UNSUBSCRIBE SET/ operation is a little less
+invasive than either <command/DROP SET/ or <command/DROP NODE/; it
+involves dropping Slony-I triggers and restoring "native" triggers on
+one node, for one replication set.
+
+<para>Much like with <command/DROP NODE/, this operation will fail if there is a node subscribing to the set on this node.
+
+<warning>
+<para>For all of the above operations, <quote/turning replication back
+on/ will require that the node copy in a <emphasis/full/ fresh set of
+the data on a provider. The fact that the data was recently being
+replicated isn't good enough; Slony-I will expect to refresh the data
+from scratch.
+</warning>
+
+<sect2><title/ Dropping A Table From A Set/
+
+<para>In Slony 1.0.5 and above, there is a Slonik command <command/SET
+DROP TABLE/ that allows dropping a single table from replication
+without forcing the user to drop the entire replication set.
+
+<para>If you are running an earlier version, there is a <quote/hack/ to do this:
+
+<para>You can fiddle this by hand by finding the table ID for the
+table you want to get rid of, which you can find in sl_table, and then
+run the following three queries, on each host:
<para><command>
select _slonyschema.alterTableRestore(40);
@@ -48,7 +68,9 @@
delete from _slonyschema.sl_table where tab_id = 40;
</command>
-<para>The schema will obviously depend on how you defined the Slony-I cluster. The table ID, in this case, 40, will need to change to the ID of the table you want to have go away.
+<para>The schema will obviously depend on how you defined the Slony-I
+cluster. The table ID, in this case, 40, will need to change to the
+ID of the table you want to have go away.
<para>You'll have to run these three queries on all of the nodes,
preferably first on the "master" node, so that the dropping of this
@@ -57,13 +79,17 @@
EXECUTE SCRIPT could do that. Also possible would be to connect to
each database and submit the queries by hand.
-<sect1><title/ Dropping A Sequence From A Set/
+<sect2><title/ Dropping A Sequence From A Set/
-<para>Just as with SET DROP TABLE, version 1.0.5 introduces the operation SET DROP SEQUENCE.
+<para>Just as with <command/SET DROP TABLE/, version 1.0.5 introduces
+the operation <command/SET DROP SEQUENCE/.
-<para>If you are running an earlier version, here are instructions as to how to drop sequences:
+<para>If you are running an earlier version, here are instructions as
+to how to drop sequences:
-<para>The data that needs to be deleted to stop Slony from continuing to replicate the two sequences identified with Sequence IDs 93 and 59 are thus:
+<para>The data that needs to be deleted to stop Slony from continuing
+to replicate the two sequences identified with Sequence IDs 93 and 59
+is as follows:
<para><command>
delete from _oxrsorg.sl_seqlog where seql_seqid in (93, 59);
@@ -75,7 +101,6 @@
the sequence everywhere "at once." Or they may be applied by hand to
each of the nodes.
-</article>
<!-- Keep this comment at the end of the file
Local variables:
mode:sgml
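
For Slony-I 1.0.5 and later, the single-table and single-sequence cases described in the dropthings.sgml hunk above are handled by slonik rather than by hand-editing the catalogs. A sketch, assuming the usual cluster/conninfo preamble, a table carrying Slony-I ID 40, a sequence carrying ID 93, and set members originating on node 1:

# remove one table, then one sequence, from replication
set drop table (origin = 1, id = 40);
set drop sequence (origin = 1, id = 93);
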
--- /dev/null
+++ doc/adminguide/x267.html
@@ -0,0 +1,317 @@
+<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN""http://www.w3.org/TR/html4/loose.dtd">
+<HTML
+><HEAD
+><TITLE
+>Defining Slony-I Replication Sets</TITLE
+><META
+NAME="GENERATOR"
+CONTENT="Modular DocBook HTML Stylesheet Version 1.79"><LINK
+REV="MADE"
+HREF="mailto:cbbrowne at gmail.com"><LINK
+REL="HOME"
+TITLE="Slony-I 1.1 Administration"
+HREF="slony.html"><LINK
+REL="UP"
+HREF="t24.html"><LINK
+REL="PREVIOUS"
+TITLE="Defining Slony-I Clusters"
+HREF="cluster.html"><LINK
+REL="NEXT"
+TITLE=" Slony-I Administration Scripts"
+HREF="altperl.html"><LINK
+REL="STYLESHEET"
+TYPE="text/css"
+HREF="stdstyle.css"><META
+HTTP-EQUIV="Content-Type"></HEAD
+><BODY
+CLASS="SECT1"
+BGCOLOR="#FFFFFF"
+TEXT="#000000"
+LINK="#0000FF"
+VLINK="#840084"
+ALINK="#0000FF"
+><DIV
+CLASS="NAVHEADER"
+><TABLE
+SUMMARY="Header navigation table"
+WIDTH="100%"
+BORDER="0"
+CELLPADDING="0"
+CELLSPACING="0"
+><TR
+><TH
+COLSPAN="3"
+ALIGN="center"
+>Slony-I 1.1 Administration</TH
+></TR
+><TR
+><TD
+WIDTH="10%"
+ALIGN="left"
+VALIGN="bottom"
+><A
+HREF="cluster.html"
+ACCESSKEY="P"
+>Prev</A
+></TD
+><TD
+WIDTH="80%"
+ALIGN="center"
+VALIGN="bottom"
+></TD
+><TD
+WIDTH="10%"
+ALIGN="right"
+VALIGN="bottom"
+><A
+HREF="altperl.html"
+ACCESSKEY="N"
+>Next</A
+></TD
+></TR
+></TABLE
+><HR
+ALIGN="LEFT"
+WIDTH="100%"></DIV
+><DIV
+CLASS="SECT1"
+><H1
+CLASS="SECT1"
+><A
+NAME="AEN267"
+>7. Defining Slony-I Replication Sets</A
+></H1
+><P
+>Defining the nodes indicated the shape of the cluster of
+database servers; it is now time to determine what data is to be
+copied between them. The groups of data that are copied are defined
+as "sets." </P
+><P
+>A replication set consists of the following:
+<P
+></P
+><UL
+><LI
+><P
+> Keys on tables that are to be replicated that have no
+suitable primary key</P
+></LI
+><LI
+><P
+> Tables that are to be replicated</P
+></LI
+><LI
+><P
+> Sequences that are to be
+replicated</P
+></LI
+></UL
+> </P
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN278"
+>7.1. Primary Keys</A
+></H2
+><P
+>Slony-I <SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>needs</I
+></SPAN
+> to have a primary key on each
+table that is replicated. PK values are used as the primary
+identifier for each tuple that is modified in the source system.
+There are three ways that you can get Slony-I to use a primary key:
+
+<P
+></P
+><UL
+><LI
+><P
+> If the table has a formally identified primary key,
+<TT
+CLASS="COMMAND"
+>SET ADD TABLE</TT
+> can be used without any need to reference the
+primary key. <B
+CLASS="APPLICATION"
+>Slony-I</B
+> will pick up that there is a
+primary key, and use it. </P
+></LI
+><LI
+><P
+> If the table hasn't got a primary key, but has some
+<SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>candidate</I
+></SPAN
+> primary key, that is, some index on a combination
+of fields that is UNIQUE and NOT NULL, then you can specify the key,
+as in </P
+><P
+><TT
+CLASS="COMMAND"
+>SET ADD TABLE (set id = 1, origin = 1, id = 42, fully qualified name = 'public.this_table', key = 'this_by_that', comment='this_table has this_by_that as a candidate primary key');</TT
+> </P
+><P
+> Notice that while you need to specify the namespace for the table, you must <SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>not</I
+></SPAN
+> specify the namespace for the key, as it infers the namespace from the table. </P
+></LI
+><LI
+><P
+> If the table hasn't even got a candidate primary key,
+you can ask Slony-I to provide one. This is done by first using
+<TT
+CLASS="COMMAND"
+>TABLE ADD KEY</TT
+> to add a column populated using a Slony-I
+sequence, and then having the <TT
+CLASS="COMMAND"
+>SET ADD TABLE</TT
+> include the
+directive <CODE
+CLASS="OPTION"
+>key=serial</CODE
+>, to indicate that
+<B
+CLASS="APPLICATION"
+>Slony-I</B
+>'s own column should be
+used.</P
+></LI
+></UL
+> </P
+><P
+> It is not terribly important whether you pick a
+<SPAN
+CLASS="QUOTE"
+>"true"</SPAN
+> primary key or a mere <SPAN
+CLASS="QUOTE"
+>"candidate primary
+key;"</SPAN
+> it is, however, recommended that you have one of those
+instead of having Slony-I populate the PK column for you. If you
+don't have a suitable primary key, that means that the table hasn't
+got any mechanism from your application's standpoint of keeping values
+unique. Slony-I may therefore introduce a new failure mode for your
+application, and it also means your application could already have
+entered ambiguous data into the database. </P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN303"
+>7.2. Grouping tables into sets</A
+></H2
+><P
+> It will be vital to group tables together into a single set if
+those tables are related via foreign key constraints. If tables that
+are thus related are <SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>not</I
+></SPAN
+> replicated together,
+you'll find yourself in trouble if you switch the <SPAN
+CLASS="QUOTE"
+>"master
+provider"</SPAN
+> from one node to another, and discover that the new
+<SPAN
+CLASS="QUOTE"
+>"master"</SPAN
+> can't be updated properly because it is missing the
+contents of dependent tables. </P
+><P
+> If a database schema has been designed cleanly, it is likely
+that replication sets will be virtually synonymous with namespaces.
+All of the tables and sequences in a particular namespace will be
+sufficiently related that you will want to replicate them all.
+Conversely, tables found in different schemas will likely NOT be
+related, and therefore should be replicated in separate sets.
+
+
+ </P
+></DIV
+></DIV
+><DIV
+CLASS="NAVFOOTER"
+><HR
+ALIGN="LEFT"
+WIDTH="100%"><TABLE
+SUMMARY="Footer navigation table"
+WIDTH="100%"
+BORDER="0"
+CELLPADDING="0"
+CELLSPACING="0"
+><TR
+><TD
+WIDTH="33%"
+ALIGN="left"
+VALIGN="top"
+><A
+HREF="cluster.html"
+ACCESSKEY="P"
+>Prev</A
+></TD
+><TD
+WIDTH="34%"
+ALIGN="center"
+VALIGN="top"
+><A
+HREF="slony.html"
+ACCESSKEY="H"
+>Home</A
+></TD
+><TD
+WIDTH="33%"
+ALIGN="right"
+VALIGN="top"
+><A
+HREF="altperl.html"
+ACCESSKEY="N"
+>Next</A
+></TD
+></TR
+><TR
+><TD
+WIDTH="33%"
+ALIGN="left"
+VALIGN="top"
+>Defining Slony-I Clusters</TD
+><TD
+WIDTH="34%"
+ALIGN="center"
+VALIGN="top"
+><A
+HREF="t24.html"
+ACCESSKEY="U"
+>Up</A
+></TD
+><TD
+WIDTH="33%"
+ALIGN="right"
+VALIGN="top"
+>Slony-I Administration Scripts</TD
+></TR
+></TABLE
+></DIV
+></BODY
+></HTML
+>
\ No newline at end of file
Index: slony.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/slony.sgml,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/slony.sgml -Ldoc/adminguide/slony.sgml -u -w -r1.1 -r1.2
--- doc/adminguide/slony.sgml
+++ doc/adminguide/slony.sgml
@@ -12,13 +12,14 @@
]>
<book id="slony">
- <title>Slony 1.1 Documentation</title>
+ <title>Slony-I 1.1 Administration</title>
<bookinfo>
- <corpauthor>The PostgreSQL Global Development Group</corpauthor>
+ <corpauthor>The Slony Global Development Group</corpauthor>
+ <author> <firstname>Christopher</firstname> <surname>Browne</surname> </author>
&legal;
</bookinfo>
-<part id="overview"> <title/Overview/
+<article> <title>Slony-I Administration</title>
&intro;
&prerequisites;
@@ -27,9 +28,6 @@
&concepts;
&cluster;
&defineset;
-</part>
-
-<part id="tools"> <title/Tools/
&adminscripts;
&startslons;
&slonconfig;
@@ -45,10 +43,15 @@
&firstdb;
&help;
-</part>
+</article>
+
+<article id="faq"><title> FAQ </title>
+
+<para> Not all of these are, strictly speaking, <quote/frequently
+asked;/ some represent <emphasis/trouble found that seemed worth
+documenting/.
-<!-- <part id="appendixes"> -->
-<!-- <title>Appendixes</title> -->
+ &faq;
<!-- &errcodes; -->
Index: defineset.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/defineset.sgml,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/defineset.sgml -Ldoc/adminguide/defineset.sgml -u -w -r1.1 -r1.2
--- doc/adminguide/defineset.sgml
+++ doc/adminguide/defineset.sgml
@@ -1,43 +1,83 @@
-<article id="defineset"> <title/Defining Slony-I Replication Sets/
+<sect1> <title>Defining Slony-I Replication Sets</title>
-
-<para>Defining the nodes indicated the shape of the cluster of database servers; it is now time to determine what data is to be copied between them. The groups of data that are copied are defined as "sets."
+<para>Defining the nodes indicated the shape of the cluster of
+database servers; it is now time to determine what data is to be
+copied between them. The groups of data that are copied are defined
+as "sets."
<para>A replication set consists of the following:
<itemizedlist>
-<listitem><para> Keys on tables that are to be replicated that have no suitable primary key
-<listitem><para> Tables that are to be replicated
+<listitem><para> Keys on tables that are to be replicated that have no
+suitable primary key</para></listitem>
+
+<listitem><para> Tables that are to be replicated</para></listitem>
-<listitem><para> Sequences that are to be replicated
+<listitem><para> Sequences that are to be
+replicated</para></listitem>
</itemizedlist>
-<sect1><title/ Primary Keys/
+<sect2><title> Primary Keys</title>
-<para>Slony-I <emphasis/needs/ to have a primary key on each table that is replicated. PK values are used as the primary identifier for each tuple that is modified in the source system. There are three ways that you can get Slony-I to use a primary key:
+<para>Slony-I <emphasis>needs</emphasis> to have a primary key on each
+table that is replicated. PK values are used as the primary
+identifier for each tuple that is modified in the source system.
+There are three ways that you can get Slony-I to use a primary key:
<itemizedlist>
-<listitem><para> If the table has a formally identified primary key, SET ADD TABLE can be used without any need to reference the primary key. Slony-I will pick up that there is a primary key, and use it.
-<listitem><para> If the table hasn't got a primary key, but has some *candidate* primary key, that is, some index on a combination of fields that is UNIQUE and NOT NULL, then you can specify the key, as in
+<listitem><para> If the table has a formally identified primary key,
+<command/SET ADD TABLE/ can be used without any need to reference the
+primary key. <application/Slony-I/ will pick up that there is a
+primary key, and use it.
+
+<listitem><para> If the table hasn't got a primary key, but has some
+<emphasis>candidate</emphasis> primary key, that is, some index on a combination
+of fields that is UNIQUE and NOT NULL, then you can specify the key,
+as in
<para><command>SET ADD TABLE (set id = 1, origin = 1, id = 42, fully qualified name = 'public.this_table', key = 'this_by_that', comment='this_table has this_by_that as a candidate primary key');
</command>
-<para> Notice that while you need to specify the namespace for the table, you must <emphasis/not/ specify the namespace for the key, as it infers the namespace from the table.
+<para> Notice that while you need to specify the namespace for the table, you must <emphasis>not</emphasis> specify the namespace for the key, as it infers the namespace from the table.
-<listitem><para> If the table hasn't even got a candidate primary key, you can ask Slony-I to provide one. This is done by first using TABLE ADD KEY to add a column populated using a Slony-I sequence, and then having the SET ADD TABLE include the directive key=serial, to indicate that Slony-I's own column should be used.
+<listitem><para> If the table hasn't even got a candidate primary key,
+you can ask Slony-I to provide one. This is done by first using
+<command/TABLE ADD KEY/ to add a column populated using a Slony-I
+sequence, and then having the <command/SET ADD TABLE/ include the
+directive <option/key=serial/, to indicate that
+<application/Slony-I/'s own column should be
+used.</para></listitem>
</itemizedlist>
-<para> It is not terribly important whether you pick a "true" primary key or a mere "candidate primary key;" it is, however, recommended that you have one of those instead of having Slony-I populate the PK column for you. If you don't have a suitable primary key, that means that the table hasn't got any mechanism from your application's standpoint of keeping values unique. Slony-I may therefore introduce a new failure mode for your application, and this implies that you had a way to enter confusing data into the database.
-
-<sect1><title/ Grouping tables into sets/
-
-<para> It will be vital to group tables together into a single set if those tables are related via foreign key constraints. If tables that are thus related are <emphasis/not/ replicated together, you'll find yourself in trouble if you switch the "master provider" from one node to another, and discover that the new "master" can't be updated properly because it is missing the contents of dependent tables.
-<para> If a database schema has been designed cleanly, it is likely that replication sets will be virtually synonymous with namespaces. All of the tables and sequences in a particular namespace will be sufficiently related that you will want to replicate them all. Conversely, tables found in different schemas will likely NOT be related, and therefore should be replicated in separate sets.
+<para> It is not terribly important whether you pick a
+<quote>true</quote> primary key or a mere <quote>candidate primary
+key;</quote> it is, however, recommended that you have one of those
+instead of having Slony-I populate the PK column for you. If you
+don't have a suitable primary key, that means that the table hasn't
+got any mechanism from your application's standpoint of keeping values
+unique. Slony-I may therefore introduce a new failure mode for your
+application, and it also means your application could already have
+entered ambiguous data into the database.
+
+<sect2><title> Grouping tables into sets</title>
+
+<para> It will be vital to group tables together into a single set if
+those tables are related via foreign key constraints. If tables that
+are thus related are <emphasis>not</emphasis> replicated together,
+you'll find yourself in trouble if you switch the <quote/master
+provider/ from one node to another, and discover that the new
+<quote/master/ can't be updated properly because it is missing the
+contents of dependent tables.
+
+<para> If a database schema has been designed cleanly, it is likely
+that replication sets will be virtually synonymous with namespaces.
+All of the tables and sequences in a particular namespace will be
+sufficiently related that you will want to replicate them all.
+Conversely, tables found in different schemas will likely NOT be
+related, and therefore should be replicated in separate sets.
-</article>
<!-- Keep this comment at the end of the file
Local variables:
mode:sgml
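
A sketch of the set definition described in the defineset.sgml hunk above, covering both a table with a real primary key and the key = serial case. The IDs, table names, and comments are placeholders, and the keyword spellings should be checked against the slonik reference; the usual cluster/conninfo preamble is assumed.

create set (id = 1, origin = 1, comment = 'example replication set');

# table with a formally identified primary key: just add it
set add table (set id = 1, origin = 1, id = 1,
               fully qualified name = 'public.accounts',
               comment = 'accounts table');

# table with no usable key: have Slony-I add one, then reference it
table add key (node id = 1, fully qualified name = 'public.history');
set add table (set id = 1, origin = 1, id = 2,
               fully qualified name = 'public.history',
               key = serial, comment = 'history table, Slony-added key');
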
Index: SlonyFAQ18.txt
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/SlonyFAQ18.txt,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/SlonyFAQ18.txt -Ldoc/adminguide/SlonyFAQ18.txt -u -w -r1.1 -r1.2
--- doc/adminguide/SlonyFAQ18.txt
+++ doc/adminguide/SlonyFAQ18.txt
@@ -1,42 +1 @@
-%META:TOPICINFO{author="guest" date="1099973679" format="1.0" version="1.1"}%
-%META:TOPICPARENT{name="SlonyFAQ"}%
----++ Replication Fails - Unique Constraint Violation
-
-Replication has been running for a while, successfully, when a node encounters a "glitch," and replication logs are filled with repetitions of the following:
-
-<verbatim>
-DEBUG2 remoteWorkerThread_1: syncing set 2 with 5 table(s) from provider 1
-DEBUG2 remoteWorkerThread_1: syncing set 1 with 41 table(s) from provider 1
-DEBUG2 remoteWorkerThread_1: syncing set 5 with 1 table(s) from provider 1
-DEBUG2 remoteWorkerThread_1: syncing set 3 with 1 table(s) from provider 1
-DEBUG2 remoteHelperThread_1_1: 0.135 seconds delay for first row
-DEBUG2 remoteHelperThread_1_1: 0.343 seconds until close cursor
-ERROR remoteWorkerThread_1: "insert into "_oxrsapp".sl_log_1 (log_origin, log_xid, log_tableid, log_actionseq, log_cmdtype, log_cmddata) values ('1', '919151224', '34', '35090538', 'D', '_rserv_ts=''9275244''');
-delete from only public.epp_domain_host where _rserv_ts='9275244';insert into "_oxrsapp".sl_log_1 (log_origin, log_xid, log_tableid, log_actionseq, log_cmdtype, log_cmddata) values ('1', '919151224', '34', '35090539', 'D', '_rserv_ts=''9275245''');
-delete from only public.epp_domain_host where _rserv_ts='9275245';insert into "_oxrsapp".sl_log_1 (log_origin, log_xid, log_tableid, log_actionseq, log_cmdtype, log_cmddata) values ('1', '919151224', '26', '35090540', 'D', '_rserv_ts=''24240590''');
-delete from only public.epp_domain_contact where _rserv_ts='24240590';insert into "_oxrsapp".sl_log_1 (log_origin, log_xid, log_tableid, log_actionseq, log_cmdtype, log_cmddata) values ('1', '919151224', '26', '35090541', 'D', '_rserv_ts=''24240591''');
-delete from only public.epp_domain_contact where _rserv_ts='24240591';insert into "_oxrsapp".sl_log_1 (log_origin, log_xid, log_tableid, log_actionseq, log_cmdtype, log_cmddata) values ('1', '919151224', '26', '35090542', 'D', '_rserv_ts=''24240589''');
-delete from only public.epp_domain_contact where _rserv_ts='24240589';insert into "_oxrsapp".sl_log_1 (log_origin, log_xid, log_tableid, log_actionseq, log_cmdtype, log_cmddata) values ('1', '919151224', '11', '35090543', 'D', '_rserv_ts=''36968002''');
-delete from only public.epp_domain_status where _rserv_ts='36968002';insert into "_oxrsapp".sl_log_1 (log_origin, log_xid, log_tableid, log_actionseq, log_cmdtype, log_cmddata) values ('1', '919151224', '11', '35090544', 'D', '_rserv_ts=''36968003''');
-delete from only public.epp_domain_status where _rserv_ts='36968003';insert into "_oxrsapp".sl_log_1 (log_origin, log_xid, log_tableid, log_actionseq, log_cmdtype, log_cmddata) values ('1', '919151224', '24', '35090549', 'I', '(contact_id,status,reason,_rserv_ts) values (''6972897'',''64'','''',''31044208'')');
-insert into public.contact_status (contact_id,status,reason,_rserv_ts) values ('6972897','64','','31044208');insert into "_oxrsapp".sl_log_1 (log_origin, log_xid, log_tableid, log_actionseq, log_cmdtype, log_cmddata) values ('1', '919151224', '24', '35090550', 'D', '_rserv_ts=''18139332''');
-delete from only public.contact_status where _rserv_ts='18139332';insert into "_oxrsapp".sl_log_1 (log_origin, log_xid, log_tableid, log_actionseq, log_cmdtype, log_cmddata) values ('1', '919151224', '24', '35090551', 'D', '_rserv_ts=''18139333''');
-delete from only public.contact_status where _rserv_ts='18139333';" ERROR: duplicate key violates unique constraint "contact_status_pkey"
- - qualification was:
-ERROR remoteWorkerThread_1: SYNC aborted
-</verbatim>
-
-The transaction rolls back, and Slony-I tries again, and again, and again. The problem is with one of the _last_ SQL statements, the one with log_cmdtype = 'I'. That isn't quite obvious; what takes place is that Slony-I groups 10 update queries together to diminish the number of network round trips.
-
-A _certain_ cause for this has not yet been arrived at. The factors that *appear* to go together to contribute to this scenario are as follows:
-
- * The "glitch" seems to coincide with some sort of outage; it has been observed both in cases where databases were suffering from periodic "SIG 11" problems, where backends were falling over, as well as when temporary network failure seemed likely.
-
- * The scenario seems to involve a delete transaction having been missed by Slony-I.
-
-By the time we notice that there is a problem, the missed delete transaction has been cleaned out of sl_log_1, so there is no recovery possible.
-
-What is necessary, at this point, is to drop the replication set (or even the node), and restart replication from scratch on that node.
-
-In Slony-I 1.0.5, the handling of purges of sl_log_1 are rather more conservative, refusing to purge entries that haven't been successfully synced for at least 10 minutes on all nodes. It is not certain that that will prevent the "glitch" from taking place, but it seems likely that it will leave enough sl_log_1 data to be able to do something about recovering from the condition or at least diagnosing it more exactly. And perhaps the problem is that sl_log_1 was being purged too aggressively, and this will resolve the issue completely.
-
+Moved to SGML
Index: SlonyDDLChanges.txt
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/SlonyDDLChanges.txt,v
retrieving revision 1.2
retrieving revision 1.3
diff -Ldoc/adminguide/SlonyDDLChanges.txt -Ldoc/adminguide/SlonyDDLChanges.txt -u -w -r1.2 -r1.3
--- doc/adminguide/SlonyDDLChanges.txt
+++ doc/adminguide/SlonyDDLChanges.txt
@@ -1,20 +1 @@
-%META:TOPICINFO{author="guest" date="1100558820" format="1.0" version="1.3"}%
-%META:TOPICPARENT{name="SlonyIAdministration"}%
----+++ Database Schema Changes (DDL)
-
-When changes are made to the database schema, e.g. adding fields to a table, it is necessary for this to be handled rather carefully, otherwise different nodes may get rather deranged because they disagree on how particular tables are built.
-
-If you pass the changes through Slony-I via the EXECUTE SCRIPT (slonik) / ddlscript(set,script,node) (stored function), this allows you to be certain that the changes take effect at the same point in the transaction streams on all of the nodes. That may not be too important if you can take something of an outage to do schema changes, but if you want to do upgrades that take place while transactions are still firing their way through your systems, it's necessary.
-
-It's worth making a couple of comments on "special things" about EXECUTE SCRIPT:
-
- * The script must not contain transaction BEGIN or END statements, as the script is already executed inside a transaction. In PostgreSQL version 8, the introduction of nested transactions may change this requirement somewhat, but you must still remain aware that the actions in the script are wrapped inside a transaction.
-
- * If there is _anything_ broken about the script, or about how it executes on a particular node, this will cause the slon daemon for that node to panic and crash. If you restart the node, it will, more likely than not, try to *repeat* the DDL script, which will, almost certainly, fail the second time just as it did the first time. I have found this scenario to lead to a need to go to the "master" node to delete the event to stop it from continuing to fail.
-
- * For slon to, at that point, "panic" is probably the _correct_ answer, as it allows the DBA to head over to the database node that is broken, and manually fix things before cleaning out the defective event and restarting slon. You can be certain that the updates made _after_ the DDL change on the provider node are queued up, waiting to head to the subscriber. You don't run the risk of there being updates made that depended on the DDL changes in order to be correct.
-
-Unfortunately, this nonetheless implies that the use of the DDL facility is somewhat fragile and dangerous. Making DDL changes should not be done in a sloppy or cavalier manner. If your applications do not have fairly stable SQL schemas, then using Slony-I for replication is likely to be fraught with trouble and frustration.
-
-There is an article on how to manage Slony schema changes here: [[http://www.varlena.com/varlena/GeneralBits/88.php][http://www.varlena.com/varlena/GeneralBits/88.php]]
-
+Moved to SGML
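
The EXECUTE SCRIPT mechanism described in the removed text above is driven from slonik. A sketch, with placeholder cluster name, conninfo strings, and script path; as noted above, the script file must not contain BEGIN or END statements:

cluster name = mycluster;
node 1 admin conninfo = 'dbname=mydb host=node1 user=slony';
node 2 admin conninfo = 'dbname=mydb host=node2 user=slony';

execute script (
    set id = 1,                       # the set the affected tables belong to
    filename = '/tmp/add_column.sql', # the DDL script to run everywhere
    event node = 1                    # the node on which to run it first
);
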
Index: SlonyInstallation.txt
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/SlonyInstallation.txt,v
retrieving revision 1.2
retrieving revision 1.3
diff -Ldoc/adminguide/SlonyInstallation.txt -Ldoc/adminguide/SlonyInstallation.txt -u -w -r1.2 -r1.3
--- doc/adminguide/SlonyInstallation.txt
+++ doc/adminguide/SlonyInstallation.txt
@@ -1,64 +1 @@
-%META:TOPICINFO{author="guest" date="1101234514" format="1.0" version="1.4"}%
-%META:TOPICPARENT{name="SlonyIAdministration"}%
----++ Slony-I Installation
-
-You should have obtained the Slony-I source from the previous step. Unpack it.
-
-<verbatim>
-gunzip slony.tar.gz
-tar xf slony.tar
-</verbatim>
-
-This will create a directory Slony-I under the current directory with the Slony-I sources. Head into that directory for the rest of the installation procedure.
-
----+++ Short Version
-
-<verbatim>
-./configure --with-pgsourcetree=/whereever/the/source/is
-gmake all
-gmake install
-</verbatim>
-
----+++ Configuration
-
-The first step of the installation procedure is to configure the source tree
-for your system. This is done by running the configure script. Configure
-needs to know where your PostgreSQL source tree is; this is done with the
---with-pgsourcetree= option.
-
----+++ Example
-
-<verbatim>
-./configure --with-pgsourcetree=/usr/local/src/postgresql-7.4.3
-</verbatim>
-
-This script will run a number of tests to guess values for various dependent
-variables and try to detect some quirks of your system. Slony-I is known to
-need a modified version of libpq on specific platforms, such as Solaris2.X on
-SPARC; this patch can be found at [[http://developer.postgresql.org/~wieck/slony1/download/threadsafe-libpq-742.diff.gz][http://developer.postgresql.org/~wieck/slony1/download/threadsafe-libpq-742.diff.gz]].
-
----+++ Build
-
-To start the build process, type
-
-<verbatim>
-gmake all
-</verbatim>
-
-Be sure to use GNU make; on BSD systems, it is called gmake; on Linux, GNU make is typically the native "make", so the name of the command you type in may vary somewhat. The build may take anywhere from a few seconds to 2 minutes depending on how fast your hardware is at compiling things. The last line displayed should be
-<verbatim>
-All of Slony-I is successfully made. Ready to install.
-</verbatim>
-
----+++ Installing Slony-I
-
-To install Slony-I, enter
-
-<verbatim>
-gmake install
-</verbatim>
-
-This will install files into the PostgreSQL install directory as specified by the
---prefix option used in the PostgreSQL configuration. Make sure you have
-appropriate permissions to write into that area. Normally you need to do this either
-as root or as the postgres user.
+Moved to SGML
Index: SlonyFAQ01.txt
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/SlonyFAQ01.txt,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/SlonyFAQ01.txt -Ldoc/adminguide/SlonyFAQ01.txt -u -w -r1.1 -r1.2
--- doc/adminguide/SlonyFAQ01.txt
+++ doc/adminguide/SlonyFAQ01.txt
@@ -1,12 +1 @@
-%META:TOPICINFO{author="guest" date="1098326312" format="1.0" version="1.1"}%
-%META:TOPICPARENT{name="SlonyFAQ"}%
----++ I looked for the _clustername namespace, and it wasn't there.
-
-If the DSNs are wrong, then slon instances can't connect to the nodes.
-
-This will generally lead to nodes remaining entirely untouched.
-
-Recheck the connection configuration. By the way, since slon links to
-libpq, password information may also be read from $HOME/.pgpass,
-which can partially fill in correct (or incorrect) authentication information there.
-
+Moved to SGML
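For reference, libpq reads $HOME/.pgpass one line at a time in the colon-separated format hostname:port:database:username:password (with "*" matching any value in a field), so a stray entry there can silently supply the wrong password. The hosts, database, user, and password below are purely illustrative:

<verbatim>
masterhost:5432:pgbench:slony:examplepassword
*:5432:*:slony:examplepassword
</verbatim>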
--- /dev/null
+++ doc/adminguide/x931.html
@@ -0,0 +1,160 @@
+<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN""http://www.w3.org/TR/html4/loose.dtd">
+<HTML
+><HEAD
+><TITLE
+> Other Information Sources</TITLE
+><META
+NAME="GENERATOR"
+CONTENT="Modular DocBook HTML Stylesheet Version 1.79"><LINK
+REV="MADE"
+HREF="mailto:cbbrowne at gmail.com"><LINK
+REL="HOME"
+TITLE="Slony-I 1.1 Administration"
+HREF="slony.html"><LINK
+REL="UP"
+HREF="t24.html"><LINK
+REL="PREVIOUS"
+TITLE=" More Slony-I Help "
+HREF="help.html"><LINK
+REL="NEXT"
+HREF="faq.html"><LINK
+REL="STYLESHEET"
+TYPE="text/css"
+HREF="stdstyle.css"><META
+HTTP-EQUIV="Content-Type"></HEAD
+><BODY
+CLASS="SECT1"
+BGCOLOR="#FFFFFF"
+TEXT="#000000"
+LINK="#0000FF"
+VLINK="#840084"
+ALINK="#0000FF"
+><DIV
+CLASS="NAVHEADER"
+><TABLE
+SUMMARY="Header navigation table"
+WIDTH="100%"
+BORDER="0"
+CELLPADDING="0"
+CELLSPACING="0"
+><TR
+><TH
+COLSPAN="3"
+ALIGN="center"
+>Slony-I 1.1 Administration</TH
+></TR
+><TR
+><TD
+WIDTH="10%"
+ALIGN="left"
+VALIGN="bottom"
+><A
+HREF="help.html"
+ACCESSKEY="P"
+>Prev</A
+></TD
+><TD
+WIDTH="80%"
+ALIGN="center"
+VALIGN="bottom"
+></TD
+><TD
+WIDTH="10%"
+ALIGN="right"
+VALIGN="bottom"
+><A
+HREF="faq.html"
+ACCESSKEY="N"
+>Next</A
+></TD
+></TR
+></TABLE
+><HR
+ALIGN="LEFT"
+WIDTH="100%"></DIV
+><DIV
+CLASS="SECT1"
+><H1
+CLASS="SECT1"
+><A
+NAME="AEN931"
+>22. Other Information Sources</A
+></H1
+><P
+></P
+><UL
+><LI
+><P
+> <A
+HREF="http://comstar.dotgeek.org/postgres/slony-config/"
+TARGET="_top"
+> slony-config</A
+> - A Perl tool for configuring Slony nodes using config files in an XML-based format that the tool transforms into a Slonik script </P
+></LI
+></UL
+></DIV
+><DIV
+CLASS="NAVFOOTER"
+><HR
+ALIGN="LEFT"
+WIDTH="100%"><TABLE
+SUMMARY="Footer navigation table"
+WIDTH="100%"
+BORDER="0"
+CELLPADDING="0"
+CELLSPACING="0"
+><TR
+><TD
+WIDTH="33%"
+ALIGN="left"
+VALIGN="top"
+><A
+HREF="help.html"
+ACCESSKEY="P"
+>Prev</A
+></TD
+><TD
+WIDTH="34%"
+ALIGN="center"
+VALIGN="top"
+><A
+HREF="slony.html"
+ACCESSKEY="H"
+>Home</A
+></TD
+><TD
+WIDTH="33%"
+ALIGN="right"
+VALIGN="top"
+><A
+HREF="faq.html"
+ACCESSKEY="N"
+>Next</A
+></TD
+></TR
+><TR
+><TD
+WIDTH="33%"
+ALIGN="left"
+VALIGN="top"
+>More Slony-I Help</TD
+><TD
+WIDTH="34%"
+ALIGN="center"
+VALIGN="top"
+><A
+HREF="t24.html"
+ACCESSKEY="U"
+>Up</A
+></TD
+><TD
+WIDTH="33%"
+ALIGN="right"
+VALIGN="top"
+></TD
+></TR
+></TABLE
+></DIV
+></BODY
+></HTML
+>
\ No newline at end of file
Index: slonconfig.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/slonconfig.sgml,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/slonconfig.sgml -Ldoc/adminguide/slonconfig.sgml -u -w -r1.1 -r1.2
--- doc/adminguide/slonconfig.sgml
+++ doc/adminguide/slonconfig.sgml
@@ -1,4 +1,4 @@
-<article id="slonconfig"> <title/Slon Configuration Options/
+<sect1 id="slonconfig"> <title/Slon Configuration Options/
<para>Slon parameters:
@@ -65,7 +65,6 @@
<para>The location of the slon configuration file.
</itemizedlist>
-</article>
<!-- Keep this comment at the end of the file
Local variables:
mode:sgml
Index: firstdb.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/firstdb.sgml,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/firstdb.sgml -Ldoc/adminguide/firstdb.sgml -u -w -r1.1 -r1.2
--- doc/adminguide/firstdb.sgml
+++ doc/adminguide/firstdb.sgml
@@ -1,22 +1,25 @@
-<article id="firstdb"><title/Replicating Your First Database/
+<sect1 id="firstdb"><title/Replicating Your First Database/
<para>In this example, we will be replicating a brand new pgbench database. The
mechanics of replicating an existing database are covered here, however we
recommend that you learn how Slony-I functions by using a fresh new
non-production database.
-<para>The Slony-I replication engine is trigger-based, allowing us to replicate
-databases (or portions thereof) running under the same postmaster.
+<para>The Slony-I replication engine is trigger-based, allowing us to
+replicate databases (or portions thereof) running under the same
+postmaster.
+
+<para>This example will show how to replicate the pgbench database
+running on localhost (master) to the pgbench slave database also
+running on localhost (slave). We make a couple of assumptions about
+your PostgreSQL configuration:
-<para>This example will show how to replicate the pgbench database running on
-localhost (master) to the pgbench slave database also running on localhost
-(slave). We make a couple of assumptions about your PostgreSQL configuration:
<itemizedlist>
- <listitem><para> You have tcpip_socket=true in your postgresql.conf and
- <listitem><para> You have enabled access in your cluster(s) via pg_hba.conf
+ <listitem><para> You have <option/tcpip_socket=true/ in your <filename/postgresql.conf/ and
+ <listitem><para> You have enabled access in your cluster(s) via <filename/pg_hba.conf/
</itemizedlist>
-<para> The REPLICATIONUSER needs to be a PostgreSQL superuser. This is typically
+<para> The <envar/REPLICATIONUSER/ needs to be a PostgreSQL superuser. This is typically
postgres or pgsql.
<para>You should also set the following shell variables:
@@ -39,23 +42,23 @@
<command/setenv CLUSTERNAME slony_example/
</itemizedlist>
-<para><warning><Para> If you're changing these variables to use different
-hosts for MASTERHOST and SLAVEHOST, be sure <emphasis/not/ to use
-localhost for either of them. This will result in an error similar to
-the following:
+<para><warning><Para> If you're changing these variables to use
+different hosts for <envar/MASTERHOST/ and <envar/SLAVEHOST/, be sure
+<emphasis/not/ to use localhost for either of them. This will result
+in an error similar to the following:
<para><command>
ERROR remoteListenThread_1: db_getLocalNodeId() returned 2 - wrong database?
</command>
</warning></para>
-<sect1><title/ Creating the pgbenchuser/
+<sect2><title/ Creating the pgbenchuser/
<para><command>
createuser -A -D $PGBENCHUSER
</command>
-<sect1><title/ Preparing the databases/
+<sect2><title/ Preparing the databases/
<para><command>
createdb -O $PGBENCHUSER -h $MASTERHOST $MASTERDBNAME
@@ -96,7 +99,7 @@
transactions against the pgbench database running on localhost as the pgbench
user.
-<sect1><title/ Configuring the Database for Replication./
+<sect2><title/ Configuring the Database for Replication./
<para>Creating the configuration tables, stored procedures, triggers and
configuration is all done through the slonik tool. It is a specialized
@@ -276,7 +279,7 @@
<para>If this script returns "FAILED" please contact the developers at
<ulink url="http://slony.org/"> http://slony.org/</ulink>
-</article>
+
<!-- Keep this comment at the end of the file
Local variables:
mode:sgml
--- /dev/null
+++ doc/adminguide/concepts.html
@@ -0,0 +1,278 @@
+<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN""http://www.w3.org/TR/html4/loose.dtd">
+<HTML
+><HEAD
+><TITLE
+>Slony-I Concepts</TITLE
+><META
+NAME="GENERATOR"
+CONTENT="Modular DocBook HTML Stylesheet Version 1.79"><LINK
+REV="MADE"
+HREF="mailto:cbbrowne at gmail.com"><LINK
+REL="HOME"
+TITLE="Slony-I 1.1 Administration"
+HREF="slony.html"><LINK
+REL="UP"
+HREF="t24.html"><LINK
+REL="PREVIOUS"
+TITLE="Slonik"
+HREF="slonik.html"><LINK
+REL="NEXT"
+TITLE="Defining Slony-I Clusters"
+HREF="cluster.html"><LINK
+REL="STYLESHEET"
+TYPE="text/css"
+HREF="stdstyle.css"><META
+HTTP-EQUIV="Content-Type"></HEAD
+><BODY
+CLASS="SECT1"
+BGCOLOR="#FFFFFF"
+TEXT="#000000"
+LINK="#0000FF"
+VLINK="#840084"
+ALINK="#0000FF"
+><DIV
+CLASS="NAVHEADER"
+><TABLE
+SUMMARY="Header navigation table"
+WIDTH="100%"
+BORDER="0"
+CELLPADDING="0"
+CELLSPACING="0"
+><TR
+><TH
+COLSPAN="3"
+ALIGN="center"
+>Slony-I 1.1 Administration</TH
+></TR
+><TR
+><TD
+WIDTH="10%"
+ALIGN="left"
+VALIGN="bottom"
+><A
+HREF="slonik.html"
+ACCESSKEY="P"
+>Prev</A
+></TD
+><TD
+WIDTH="80%"
+ALIGN="center"
+VALIGN="bottom"
+></TD
+><TD
+WIDTH="10%"
+ALIGN="right"
+VALIGN="bottom"
+><A
+HREF="cluster.html"
+ACCESSKEY="N"
+>Next</A
+></TD
+></TR
+></TABLE
+><HR
+ALIGN="LEFT"
+WIDTH="100%"></DIV
+><DIV
+CLASS="SECT1"
+><H1
+CLASS="SECT1"
+><A
+NAME="CONCEPTS"
+>5. Slony-I Concepts</A
+></H1
+><P
+>In order to set up a set of Slony-I replicas, it is necessary to understand the following major abstractions that it uses:
+
+<P
+></P
+><UL
+><LI
+><P
+> Cluster
+ </P
+></LI
+><LI
+><P
+> Node
+ </P
+></LI
+><LI
+><P
+> Replication Set
+ </P
+></LI
+><LI
+><P
+> Provider and Subscriber</P
+></LI
+></UL
+> </P
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN231"
+>5.1. Cluster</A
+></H2
+><P
+>In Slony-I terms, a Cluster is a named set of PostgreSQL database instances; replication takes place between those databases. </P
+><P
+>The cluster name is specified in each and every Slonik script via the directive:
+<TT
+CLASS="COMMAND"
+>cluster name = 'something';</TT
+> </P
+><P
+>If the Cluster name is 'something', then Slony-I will create, in each database instance in the cluster, the namespace/schema '_something'. </P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN237"
+>5.2. Node</A
+></H2
+><P
+>A Slony-I Node is a named PostgreSQL database that will be participating in replication. </P
+><P
+>It is defined, near the beginning of each Slonik script, using the directive:
+<TT
+CLASS="COMMAND"
+> NODE 1 ADMIN CONNINFO = 'dbname=testdb host=server1 user=slony';</TT
+> </P
+><P
+>The CONNINFO information indicates a string argument that will ultimately be passed to the <CODE
+CLASS="FUNCTION"
+>PQconnectdb()</CODE
+> libpq function. </P
+><P
+>Thus, a Slony-I cluster consists of:
+<P
+></P
+><UL
+><LI
+><P
+> A cluster name
+ </P
+></LI
+><LI
+><P
+> A set of Slony-I nodes, each of which has a namespace based on that cluster name</P
+></LI
+></UL
+> </P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN250"
+>5.3. Replication Set</A
+></H2
+><P
+>A replication set is defined as a set of tables and sequences that are to be replicated between nodes in a Slony-I cluster. </P
+><P
+>You may have several sets, and the "flow" of replication does not need to be identical between those sets. </P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN254"
+>5.4. Provider and Subscriber</A
+></H2
+><P
+>Each replication set has some "master" node, which winds up
+being the <SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>only</I
+></SPAN
+> place where user applications are permitted
+to modify data in the tables that are being replicated. That "master"
+may be considered the master "provider node;" it is the main place
+from which data is provided. </P
+><P
+>Other nodes in the cluster will subscribe to the replication
+set, indicating that they want to receive the data. </P
+><P
+>The "master" node will never be considered a "subscriber." But
+Slony-I supports the notion of cascaded subscriptions, that is, a node
+that is subscribed to the "master" may also behave as a "provider" to
+other nodes in the cluster.
+
+
+ </P
+></DIV
+></DIV
+><DIV
+CLASS="NAVFOOTER"
+><HR
+ALIGN="LEFT"
+WIDTH="100%"><TABLE
+SUMMARY="Footer navigation table"
+WIDTH="100%"
+BORDER="0"
+CELLPADDING="0"
+CELLSPACING="0"
+><TR
+><TD
+WIDTH="33%"
+ALIGN="left"
+VALIGN="top"
+><A
+HREF="slonik.html"
+ACCESSKEY="P"
+>Prev</A
+></TD
+><TD
+WIDTH="34%"
+ALIGN="center"
+VALIGN="top"
+><A
+HREF="slony.html"
+ACCESSKEY="H"
+>Home</A
+></TD
+><TD
+WIDTH="33%"
+ALIGN="right"
+VALIGN="top"
+><A
+HREF="cluster.html"
+ACCESSKEY="N"
+>Next</A
+></TD
+></TR
+><TR
+><TD
+WIDTH="33%"
+ALIGN="left"
+VALIGN="top"
+>Slonik</TD
+><TD
+WIDTH="34%"
+ALIGN="center"
+VALIGN="top"
+><A
+HREF="t24.html"
+ACCESSKEY="U"
+>Up</A
+></TD
+><TD
+WIDTH="33%"
+ALIGN="right"
+VALIGN="top"
+>Defining Slony-I Clusters</TD
+></TR
+></TABLE
+></DIV
+></BODY
+></HTML
+>
\ No newline at end of file
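Tying the cluster and node abstractions above together, the preamble of a typical slonik script simply combines the two directives quoted in that section; the cluster name, database, user, and hosts here are the same illustrative values used above (server2 is a second, hypothetical host):

<verbatim>
cluster name = 'something';
node 1 admin conninfo = 'dbname=testdb host=server1 user=slony';
node 2 admin conninfo = 'dbname=testdb host=server2 user=slony';
</verbatim>

With that preamble in place, Slony-I will use the schema _something in each of the two databases.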
Index: Makefile
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/Makefile,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/Makefile -Ldoc/adminguide/Makefile -u -w -r1.1 -r1.2
--- doc/adminguide/Makefile
+++ doc/adminguide/Makefile
@@ -86,7 +86,8 @@
html: slony.sgml $(ALLSGML) stylesheet.dsl
@rm -f *.html
- $(JADE) $(JADEFLAGS) $(SPFLAGS) $(SGMLINCLUDE) $(CATALOG) -d stylesheet.dsl -ioutput-html -t sgml $<
+ $(JADE) -ioutput.html -d mysheet.dsl -t sgml $<
+ #$(JADE) $(JADEFLAGS) $(SPFLAGS) $(SGMLINCLUDE) $(CATALOG) -d mysheet.dsl -ioutput-html -t sgml $<
ifeq ($(vpath_build), yes)
@cp $(srcdir)/stylesheet.css .
endif
Index: ddlchanges.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/ddlchanges.sgml,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/ddlchanges.sgml -Ldoc/adminguide/ddlchanges.sgml -u -w -r1.1 -r1.2
--- doc/adminguide/ddlchanges.sgml
+++ doc/adminguide/ddlchanges.sgml
@@ -1,26 +1,61 @@
-<article id="ddlchanges"> <title/Database Schema Changes (DDL)
+<sect1 id="ddlchanges"> <title/Database Schema Changes (DDL)/
-<para>When changes are made to the database schema, e.g. adding fields to a table, it is necessary for this to be handled rather carefully, otherwise different nodes may get rather deranged because they disagree on how particular tables are built.
+<para>When changes are made to the database schema, <emphasis/e.g./ -
+adding fields to a table, it is necessary for this to be handled
+rather carefully, otherwise different nodes may get rather deranged
+because they disagree on how particular tables are built.
+
+<para>If you pass the changes through Slony-I via the <command/EXECUTE
+SCRIPT/ (slonik) / <function/ddlscript(set,script,node)/ (stored
+function), this allows you to be certain that the changes take effect
+at the same point in the transaction streams on all of the nodes.
+That may not be too important if you can take something of an outage
+to do schema changes, but if you want to do upgrades that take place
+while transactions are still firing their way through your systems,
+it's necessary.
-<para>If you pass the changes through Slony-I via the EXECUTE SCRIPT (slonik) / <function/ddlscript(set,script,node)/ (stored function), this allows you to be certain that the changes take effect at the same point in the transaction streams on all of the nodes. That may not be too important if you can take something of an outage to do schema changes, but if you want to do upgrades that take place while transactions are still firing their way through your systems, it's necessary.
+<para>It's worth making a couple of comments on <quote/special things/
+about <command/EXECUTE SCRIPT/:
-<para>It's worth making a couple of comments on "special things" about EXECUTE SCRIPT:
<itemizedlist>
-<Listitem><Para> The script must not contain transaction BEGIN or END statements, as the script is already executed inside a transaction. In PostgreSQL version 8, the introduction of nested transactions may change this requirement somewhat, but you must still remain aware that the actions in the script are wrapped inside a transaction.
-<Listitem><Para> If there is <emphasis/anything/ broken about the script, or about how it executes on a particular node, this will cause the slon daemon for that node to panic and crash. If you restart the node, it will, more likely than not, try to <emphasis/repeat/ the DDL script, which will, almost certainly, fail the second time just as it did the first time. I have found this scenario to lead to a need to go to the "master" node to delete the event to stop it from continuing to fail.
-
-<Listitem><Para> For slon to, at that point, "panic" is probably the <emphasis/correct/ answer, as it allows the DBA to head over to the database node that is broken, and manually fix things before cleaning out the defective event and restarting slon. You can be certain that the updates made <emphasis/after/ the DDL change on the provider node are queued up, waiting to head to the subscriber. You don't run the risk of there being updates made that depended on the DDL changes in order to be correct.
+<Listitem><Para> The script must not contain transaction
+<command/BEGIN/ or <command/END/ statements, as the script is already
+executed inside a transaction. In PostgreSQL version 8, the
+introduction of nested transactions may change this requirement
+somewhat, but you must still remain aware that the actions in the
+script are wrapped inside a transaction.
+
+<Listitem><Para> If there is <emphasis/anything/ broken about the
+script, or about how it executes on a particular node, this will cause
+the slon daemon for that node to panic and crash. If you restart the
+node, it will, more likely than not, try to <emphasis/repeat/ the DDL
+script, which will, almost certainly, fail the second time just as it
+did the first time. I have found this scenario to lead to a need to
+go to the <quote/master/ node to delete the event to stop it from
+continuing to fail.
+
+<Listitem><Para> For slon to, at that point, <quote/panic/ is probably
+the <emphasis/correct/ answer, as it allows the DBA to head over to
+the database node that is broken, and manually fix things before
+cleaning out the defective event and restarting slon. You can be
+certain that the updates made <emphasis/after/ the DDL change on the
+provider node are queued up, waiting to head to the subscriber. You
+don't run the risk of there being updates made that depended on the
+DDL changes in order to be correct.
</itemizedlist>
-<para>Unfortunately, this nonetheless implies that the use of the DDL facility is somewhat fragile and dangerous. Making DDL changes should not be done in a sloppy or cavalier manner. If your applications do not have fairly stable SQL schemas, then using Slony-I for replication is likely to be fraught with trouble and frustration.
+<para>Unfortunately, this nonetheless implies that the use of the DDL
+facility is somewhat fragile and dangerous. Making DDL changes should
+not be done in a sloppy or cavalier manner. If your applications do
+not have fairly stable SQL schemas, then using Slony-I for replication
+is likely to be fraught with trouble and frustration.
<para>There is an article on how to manage Slony schema changes here:
<ulink url="http://www.varlena.com/varlena/GeneralBits/88.php">
Varlena General Bits</ulink>
-</article>
<!-- Keep this comment at the end of the file
Local variables:
mode:sgml
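To make the EXECUTE SCRIPT discussion above concrete, a DDL change is normally submitted through a slonik script along the following lines; the cluster name, conninfo, set id, and file name are hypothetical, and the file must contain only the schema-changing statements, with no BEGIN or END:

<verbatim>
cluster name = 'something';
node 1 admin conninfo = 'dbname=testdb host=server1 user=slony';

execute script (
    set id = 1,
    filename = '/tmp/add_column.sql',
    event node = 1
);
</verbatim>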
--- /dev/null
+++ doc/adminguide/dropthings.html
@@ -0,0 +1,388 @@
+<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN""http://www.w3.org/TR/html4/loose.dtd">
+<HTML
+><HEAD
+><TITLE
+> Dropping things from Slony Replication</TITLE
+><META
+NAME="GENERATOR"
+CONTENT="Modular DocBook HTML Stylesheet Version 1.79"><LINK
+REV="MADE"
+HREF="mailto:cbbrowne at gmail.com"><LINK
+REL="HOME"
+TITLE="Slony-I 1.1 Administration"
+HREF="slony.html"><LINK
+REL="UP"
+HREF="t24.html"><LINK
+REL="PREVIOUS"
+TITLE=" Adding Things to Replication"
+HREF="addthings.html"><LINK
+REL="NEXT"
+TITLE="Database Schema Changes (DDL)"
+HREF="ddlchanges.html"><LINK
+REL="STYLESHEET"
+TYPE="text/css"
+HREF="stdstyle.css"><META
+HTTP-EQUIV="Content-Type"></HEAD
+><BODY
+CLASS="SECT1"
+BGCOLOR="#FFFFFF"
+TEXT="#000000"
+LINK="#0000FF"
+VLINK="#840084"
+ALINK="#0000FF"
+><DIV
+CLASS="NAVHEADER"
+><TABLE
+SUMMARY="Header navigation table"
+WIDTH="100%"
+BORDER="0"
+CELLPADDING="0"
+CELLSPACING="0"
+><TR
+><TH
+COLSPAN="3"
+ALIGN="center"
+>Slony-I 1.1 Administration</TH
+></TR
+><TR
+><TD
+WIDTH="10%"
+ALIGN="left"
+VALIGN="bottom"
+><A
+HREF="addthings.html"
+ACCESSKEY="P"
+>Prev</A
+></TD
+><TD
+WIDTH="80%"
+ALIGN="center"
+VALIGN="bottom"
+></TD
+><TD
+WIDTH="10%"
+ALIGN="right"
+VALIGN="bottom"
+><A
+HREF="ddlchanges.html"
+ACCESSKEY="N"
+>Next</A
+></TD
+></TR
+></TABLE
+><HR
+ALIGN="LEFT"
+WIDTH="100%"></DIV
+><DIV
+CLASS="SECT1"
+><H1
+CLASS="SECT1"
+><A
+NAME="DROPTHINGS"
+>18. Dropping things from Slony Replication</A
+></H1
+><P
+>There are several things you might want to do involving dropping things from Slony-I replication. </P
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN729"
+>18.1. Dropping A Whole Node</A
+></H2
+><P
+>If you wish to drop an entire node from replication, the Slonik command DROP NODE should do the trick. </P
+><P
+>This will lead to Slony-I dropping the triggers (generally those that deny the ability to update data), restoring the "native" triggers, dropping the schema used by Slony-I, and the slon process for that node terminating itself. </P
+><P
+>As a result, the database should be available for whatever use your application makes of the database. </P
+><P
+>This is a pretty major operation, with considerable potential to cause substantial destruction; make sure you drop the right node! </P
+><P
+>The operation will fail if there are any nodes subscribing to the node that you attempt to drop, so there is a bit of a failsafe. </P
+><P
+>SlonyFAQ17 documents some extra maintenance that may need to be done on sl_confirm if you are running versions prior to 1.0.5. </P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN737"
+>18.2. Dropping An Entire Set</A
+></H2
+><P
+>If you wish to stop replicating a particular replication set,
+the Slonik command <TT
+CLASS="COMMAND"
+>DROP SET</TT
+> is what you need to use. </P
+><P
+>Much as with <TT
+CLASS="COMMAND"
+>DROP NODE</TT
+>, this leads to Slony-I dropping
+the Slony-I triggers on the tables and restoring <SPAN
+CLASS="QUOTE"
+>"native"</SPAN
+>
+triggers. One difference is that this takes place on <SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>all</I
+></SPAN
+>
+nodes in the cluster, rather than on just one node. Another
+difference is that this does not clear out the Slony-I cluster's
+namespace, as there might be other sets being serviced. </P
+><P
+>This operation is quite a bit more dangerous than <TT
+CLASS="COMMAND"
+>DROP
+NODE</TT
+>, as there <SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>isn't</I
+></SPAN
+> the same sort of "failsafe." If you
+tell <TT
+CLASS="COMMAND"
+>DROP SET</TT
+> to drop the <SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>wrong</I
+></SPAN
+> set, there isn't
+anything to prevent "unfortunate results." </P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN750"
+>18.3. Unsubscribing One Node From One Set</A
+></H2
+><P
+>The <TT
+CLASS="COMMAND"
+>UNSUBSCRIBE SET</TT
+> operation is a little less
+invasive than either <TT
+CLASS="COMMAND"
+>DROP SET</TT
+> or <TT
+CLASS="COMMAND"
+>DROP NODE</TT
+>; it
+involves dropping Slony-I triggers and restoring "native" triggers on
+one node, for one replication set. </P
+><P
+>Much like with <TT
+CLASS="COMMAND"
+>DROP NODE</TT
+>, this operation will fail if there is a node subscribing to the set on this node.
+
+<DIV
+CLASS="WARNING"
+><P
+></P
+><TABLE
+CLASS="WARNING"
+WIDTH="100%"
+BORDER="0"
+><TR
+><TD
+WIDTH="25"
+ALIGN="CENTER"
+VALIGN="TOP"
+><IMG
+SRC="./images/warning.gif"
+HSPACE="5"
+ALT="Warning"></TD
+><TD
+ALIGN="LEFT"
+VALIGN="TOP"
+><P
+>For all of the above operations, <SPAN
+CLASS="QUOTE"
+>"turning replication back
+on"</SPAN
+> will require that the node copy in a <SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>full</I
+></SPAN
+> fresh set of
+the data on a provider. The fact that the data was recently being
+replicated isn't good enough; Slony-I will expect to refresh the data
+from scratch.</P
+></TD
+></TR
+></TABLE
+></DIV
+> </P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN762"
+>18.4. Dropping A Table From A Set</A
+></H2
+><P
+>In Slony 1.0.5 and above, there is a Slonik command <TT
+CLASS="COMMAND"
+>SET
+DROP TABLE</TT
+> that allows dropping a single table from replication
+without forcing the user to drop the entire replication set. </P
+><P
+>If you are running an earlier version, there is a <SPAN
+CLASS="QUOTE"
+>"hack"</SPAN
+> to do this: </P
+><P
+>You can fiddle this by hand by finding the table ID for the
+table you want to get rid of, which you can find in sl_table, and then
+running the following three queries, on each host: </P
+><P
+><TT
+CLASS="COMMAND"
+> select _slonyschema.alterTableRestore(40);
+ select _slonyschema.tableDropKey(40);
+ delete from _slonyschema.sl_table where tab_id = 40;</TT
+> </P
+><P
+>The schema will obviously depend on how you defined the Slony-I
+cluster. The table ID, in this case, 40, will need to change to the
+ID of the table you want to have go away. </P
+><P
+>You'll have to run these three queries on all of the nodes,
+preferably firstly on the "master" node, so that the dropping of this
+propagates properly. Implementing this via a Slonik statement with a
+new Slony event would do that. Submitting the three queries using
+EXECUTE SCRIPT could do that. Also possible would be to connect to
+each database and submit the queries by hand. </P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN773"
+>18.5. Dropping A Sequence From A Set</A
+></H2
+><P
+>Just as with <TT
+CLASS="COMMAND"
+>SET DROP TABLE</TT
+>, version 1.0.5 introduces
+the operation <TT
+CLASS="COMMAND"
+>SET DROP SEQUENCE</TT
+>. </P
+><P
+>If you are running an earlier version, here are instructions as
+to how to drop sequences: </P
+><P
+>The data that needs to be deleted to stop Slony from continuing
+to replicate the two sequences identified with Sequence IDs 93 and 59
+are thus: </P
+><P
+><TT
+CLASS="COMMAND"
+>delete from _oxrsorg.sl_seqlog where seql_seqid in (93, 59);
+
+delete from _oxrsorg.sl_sequence where seq_id in (93,59); </TT
+> </P
+><P
+> Those two queries could be submitted to all of the nodes via
+<CODE
+CLASS="FUNCTION"
+>ddlscript()</CODE
+> / <TT
+CLASS="COMMAND"
+>EXECUTE SCRIPT</TT
+>, thus eliminating
+the sequence everywhere "at once." Or they may be applied by hand to
+each of the nodes.
+
+
+
+ </P
+></DIV
+></DIV
+><DIV
+CLASS="NAVFOOTER"
+><HR
+ALIGN="LEFT"
+WIDTH="100%"><TABLE
+SUMMARY="Footer navigation table"
+WIDTH="100%"
+BORDER="0"
+CELLPADDING="0"
+CELLSPACING="0"
+><TR
+><TD
+WIDTH="33%"
+ALIGN="left"
+VALIGN="top"
+><A
+HREF="addthings.html"
+ACCESSKEY="P"
+>Prev</A
+></TD
+><TD
+WIDTH="34%"
+ALIGN="center"
+VALIGN="top"
+><A
+HREF="slony.html"
+ACCESSKEY="H"
+>Home</A
+></TD
+><TD
+WIDTH="33%"
+ALIGN="right"
+VALIGN="top"
+><A
+HREF="ddlchanges.html"
+ACCESSKEY="N"
+>Next</A
+></TD
+></TR
+><TR
+><TD
+WIDTH="33%"
+ALIGN="left"
+VALIGN="top"
+>Adding Things to Replication</TD
+><TD
+WIDTH="34%"
+ALIGN="center"
+VALIGN="top"
+><A
+HREF="t24.html"
+ACCESSKEY="U"
+>Up</A
+></TD
+><TD
+WIDTH="33%"
+ALIGN="right"
+VALIGN="top"
+>Database Schema Changes (DDL)</TD
+></TR
+></TABLE
+></DIV
+></BODY
+></HTML
+>
\ No newline at end of file
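For reference, the slonik counterparts of the drop operations described above look roughly as follows; the node, set, table, and sequence ids are hypothetical, and the SET DROP TABLE / SET DROP SEQUENCE forms require Slony-I 1.0.5 or later:

<verbatim>
drop node (id = 2, event node = 1);
drop set (id = 1, origin = 1);
unsubscribe set (id = 1, receiver = 3);
set drop table (origin = 1, id = 40);
set drop sequence (origin = 1, id = 93);
</verbatim>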
Index: SlonyFAQ12.txt
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/SlonyFAQ12.txt,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/SlonyFAQ12.txt -Ldoc/adminguide/SlonyFAQ12.txt -u -w -r1.1 -r1.2
--- doc/adminguide/SlonyFAQ12.txt
+++ doc/adminguide/SlonyFAQ12.txt
@@ -1,15 +1 @@
-%META:TOPICINFO{author="guest" date="1098327131" format="1.0" version="1.1"}%
-%META:TOPICPARENT{name="SlonyFAQ"}%
----++ Slony-I: cannot add table to currently subscribed set 1
-
- I tried to add a table to a set, and got the following message:
-
-<verbatim>
- Slony-I: cannot add table to currently subscribed set 1
-</verbatim>
-
-You cannot add tables to sets that already have subscribers.
-
-The workaround to this is to create ANOTHER set, add the new tables to
-that new set, subscribe the same nodes subscribing to "set 1" to the
-new set, and then merge the sets together.
+Moved to SGML
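A rough slonik sketch of the workaround just described (create a second set, subscribe it identically, then merge) might look like this; the set, table, and node ids are hypothetical:

<verbatim>
create set (id = 2, origin = 1, comment = 'tables added later');
set add table (set id = 2, origin = 1, id = 10,
               fully qualified name = 'public.newtable');
subscribe set (id = 2, provider = 1, receiver = 2, forward = yes);
merge set (id = 1, add id = 2, origin = 1);
</verbatim>

The merge should only be issued once the subscription of the new set has completed on every node that subscribes to set 1.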
--- /dev/null
+++ doc/adminguide/subscribenodes.html
@@ -0,0 +1,214 @@
+<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN""http://www.w3.org/TR/html4/loose.dtd">
+<HTML
+><HEAD
+><TITLE
+> Subscribing Nodes</TITLE
+><META
+NAME="GENERATOR"
+CONTENT="Modular DocBook HTML Stylesheet Version 1.79"><LINK
+REV="MADE"
+HREF="mailto:cbbrowne at gmail.com"><LINK
+REL="HOME"
+TITLE="Slony-I 1.1 Administration"
+HREF="slony.html"><LINK
+REL="UP"
+HREF="t24.html"><LINK
+REL="PREVIOUS"
+TITLE="Slon Configuration Options"
+HREF="slonconfig.html"><LINK
+REL="NEXT"
+TITLE="Monitoring"
+HREF="monitoring.html"><LINK
+REL="STYLESHEET"
+TYPE="text/css"
+HREF="stdstyle.css"><META
+HTTP-EQUIV="Content-Type"></HEAD
+><BODY
+CLASS="SECT1"
+BGCOLOR="#FFFFFF"
+TEXT="#000000"
+LINK="#0000FF"
+VLINK="#840084"
+ALINK="#0000FF"
+><DIV
+CLASS="NAVHEADER"
+><TABLE
+SUMMARY="Header navigation table"
+WIDTH="100%"
+BORDER="0"
+CELLPADDING="0"
+CELLSPACING="0"
+><TR
+><TH
+COLSPAN="3"
+ALIGN="center"
+>Slony-I 1.1 Administration</TH
+></TR
+><TR
+><TD
+WIDTH="10%"
+ALIGN="left"
+VALIGN="bottom"
+><A
+HREF="slonconfig.html"
+ACCESSKEY="P"
+>Prev</A
+></TD
+><TD
+WIDTH="80%"
+ALIGN="center"
+VALIGN="bottom"
+></TD
+><TD
+WIDTH="10%"
+ALIGN="right"
+VALIGN="bottom"
+><A
+HREF="monitoring.html"
+ACCESSKEY="N"
+>Next</A
+></TD
+></TR
+></TABLE
+><HR
+ALIGN="LEFT"
+WIDTH="100%"></DIV
+><DIV
+CLASS="SECT1"
+><H1
+CLASS="SECT1"
+><A
+NAME="SUBSCRIBENODES"
+>11. Subscribing Nodes</A
+></H1
+><P
+>Before you subscribe a node to a set, be sure that you have slons running for both the master and the new subscribing node. If you don't have slons running, nothing will happen, and you'll beat your head against a wall trying to figure out what's going on. </P
+><P
+>Subscribing a node to a set is done by issuing the slonik command "subscribe set". It may seem tempting to try to subscribe several nodes to a set within the same try block like this: </P
+><P
+> <TT
+CLASS="COMMAND"
+>try {
+ echo 'Subscribing sets';
+ subscribe set (id = 1, provider=1, receiver=2, forward=yes);
+ subscribe set (id = 1, provider=1, receiver=3, forward=yes);
+ subscribe set (id = 1, provider=1, receiver=4, forward=yes);
+} on error {
+ echo 'Could not subscribe the sets!';
+ exit -1;
+}</TT
+> </P
+><P
+> You are just asking for trouble if you try to subscribe sets like that. The proper procedure is to subscribe one node at a time, and to check the logs and databases before you move onto subscribing the next node to the set. It is also worth noting that success within the above slonik try block does not imply that nodes 2, 3, and 4 have all been successfully subscribed. It merely guarantees that the slonik commands were received by the slon running on the master node. </P
+><P
+>A typical sort of problem that will arise is that a cascaded
+subscriber is looking for a provider that is not ready yet. In that
+failure case, that subscriber node will <SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>never</I
+></SPAN
+> pick up the
+subscriber. It will get "stuck" waiting for a past event to take
+place. The other nodes will be convinced that it is successfully
+subscribed (because no error report ever made it back to them); a
+request to unsubscribe the node will be "blocked" because the node is
+stuck on the attempt to subscribe it. </P
+><P
+>When you subscribe a node to a set, you should see something like this in your slony logs for the master node: </P
+><P
+> <TT
+CLASS="COMMAND"
+>DEBUG2 remoteWorkerThread_3: Received event 3,1059 SUBSCRIBE_SET</TT
+> </P
+><P
+>You should also start seeing log entries like this in the slony logs for the subscribing node: </P
+><P
+><TT
+CLASS="COMMAND"
+>DEBUG2 remoteWorkerThread_1: copy table public.my_table</TT
+> </P
+><P
+>It may take some time for larger tables to be copied from the master node to the new subscriber. If you check the pg_stat_activity table on the master node, you should see a query that is copying the table to stdout. </P
+><P
+>The table sl_subscribe on both the master, and the new subscriber should have entries for the new subscription: </P
+><P
+><TT
+CLASS="COMMAND"
+> sub_set | sub_provider | sub_receiver | sub_forward | sub_active
+---------+--------------+--------------+-------------+------------
+ 1 | 1 | 2 | t | t</TT
+> </P
+><P
+>A final test is to insert a row into a table on the master node, and to see if the row is copied to the new subscriber.
+
+
+
+ </P
+></DIV
+><DIV
+CLASS="NAVFOOTER"
+><HR
+ALIGN="LEFT"
+WIDTH="100%"><TABLE
+SUMMARY="Footer navigation table"
+WIDTH="100%"
+BORDER="0"
+CELLPADDING="0"
+CELLSPACING="0"
+><TR
+><TD
+WIDTH="33%"
+ALIGN="left"
+VALIGN="top"
+><A
+HREF="slonconfig.html"
+ACCESSKEY="P"
+>Prev</A
+></TD
+><TD
+WIDTH="34%"
+ALIGN="center"
+VALIGN="top"
+><A
+HREF="slony.html"
+ACCESSKEY="H"
+>Home</A
+></TD
+><TD
+WIDTH="33%"
+ALIGN="right"
+VALIGN="top"
+><A
+HREF="monitoring.html"
+ACCESSKEY="N"
+>Next</A
+></TD
+></TR
+><TR
+><TD
+WIDTH="33%"
+ALIGN="left"
+VALIGN="top"
+>Slon Configuration Options</TD
+><TD
+WIDTH="34%"
+ALIGN="center"
+VALIGN="top"
+><A
+HREF="t24.html"
+ACCESSKEY="U"
+>Up</A
+></TD
+><TD
+WIDTH="33%"
+ALIGN="right"
+VALIGN="top"
+>Monitoring</TD
+></TR
+></TABLE
+></DIV
+></BODY
+></HTML
+>
\ No newline at end of file
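Following the advice above to subscribe one node at a time, a more cautious script issues a single statement per run, for example (same hypothetical node numbers as in that section):

<verbatim>
subscribe set (id = 1, provider = 1, receiver = 2, forward = yes);
</verbatim>

Only after the copy for receiver 2 has finished (sl_subscribe shows sub_active = 't' and the slon logs have quieted down) would the same statement be repeated for receivers 3 and 4.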
Index: SlonyFAQ10.txt
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/SlonyFAQ10.txt,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/SlonyFAQ10.txt -Ldoc/adminguide/SlonyFAQ10.txt -u -w -r1.1 -r1.2
--- doc/adminguide/SlonyFAQ10.txt
+++ doc/adminguide/SlonyFAQ10.txt
@@ -1,28 +1 @@
-%META:TOPICINFO{author="guest" date="1099542035" format="1.0" version="1.2"}%
-%META:TOPICPARENT{name="SlonyFAQ"}%
----++ I need to drop a table from a replication set
-
-This can be accomplished several ways, not all equally desirable ;-).
-
- * You could drop the whole replication set, and recreate it with just the tables that you need. Alas, that means recopying a whole lot of data, and kills the usability of the cluster on the rest of the set while that's happening.
-
- * If you are running 1.0.5 or later, there is the command SET DROP TABLE, which will "do the trick."
-
- * If you are still using 1.0.1 or 1.0.2, the _essential_ functionality of SET DROP TABLE involves the functionality in droptable_int(). You can fiddle this by hand by finding the table ID for the table you want to get rid of, which you can find in sl_table, and then run the following three queries, on each host:
-
-<verbatim>
- select _slonyschema.alterTableRestore(40);
- select _slonyschema.tableDropKey(40);
- delete from _slonyschema.sl_table where tab_id = 40;
-</verbatim>
-
-The schema will obviously depend on how you defined the Slony-I
-cluster. The table ID, in this case, 40, will need to change to the
-ID of the table you want to have go away.
-
-You'll have to run these three queries on all of the nodes, preferably
-firstly on the "master" node, so that the dropping of this propagates
-properly. Implementing this via a SLONIK statement with a new Slony
-event would do that. Submitting the three queries using EXECUTE
-SCRIPT could do that. Also possible would be to connect to each
-database and submit the queries by hand.
+Moved to SGML
Index: SlonyListenPaths.txt
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/SlonyListenPaths.txt,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/SlonyListenPaths.txt -Ldoc/adminguide/SlonyListenPaths.txt -u -w -r1.1 -r1.2
--- doc/adminguide/SlonyListenPaths.txt
+++ doc/adminguide/SlonyListenPaths.txt
@@ -1,110 +1 @@
-%META:TOPICINFO{author="guest" date="1099870294" format="1.0" version="1.7"}%
-%META:TOPICPARENT{name="SlonyIAdministration"}%
----++ Slony Listen Paths
-
-If you have more than two or three nodes, and any degree of usage of cascaded subscribers (_e.g._ - subscribers that are subscribing through a subscriber node), you will have to be fairly careful about the configuration of "listen paths" via the Slonik STORE LISTEN and DROP LISTEN statements that control the contents of the table sl_listen.
-
-The "listener" entries in this table control where each node expects to listen in order to get events propagated from other nodes. You might think that nodes only need to listen to the "parent" from whom they are getting updates, but in reality, they need to be able to receive messages from _all_ nodes in order to be able to conclude that SYNCs have been received everywhere, and that, therefore, entries in sl_log_1 and sl_log_2 have been applied everywhere, and can therefore be purged.
-
----+++ How Listening Can Break
-
-On one occasion, I had a need to drop a subscriber node (#2) and recreate it. That node was the data provider for another subscriber (#3) that was, in effect, a "cascaded slave." Dropping the subscriber node initially didn't work, as slonik informed me that there was a dependant node. I repointed the dependant node to the "master" node for the subscription set, which, for a while, replicated without difficulties.
-
-I then dropped the subscription on "node 2," and started resubscribing it. That raised the Slony-I SET_SUBSCRIPTION event, which started copying tables. At that point in time, events stopped propagating to "node 3," and while it was in perfectly OK shape, no events were making it to it.
-
-The problem was that node #3 was expecting to receive events from node #2, which was busy processing the SET_SUBSCRIPTION event, and was not passing anything else on.
-
-We dropped the listener rules that caused node #3 to listen to node 2, replacing them with rules where it expected its events to come from node #1 (the "master" provider node for the replication set). At that moment, "as if by magic," node #3 started replicating again, as it discovered a place to get SYNC events.
-
----+++ How The Listen Configuration Should Look
-
-The simple cases tend to be simple to cope with. We'll look at a fairly complex set of nodes.
-
-Consider a set of nodes, 1 thru 6, where 1 is the "master," where 2-4 subscribe directly to the master, and where 5 subscribes to 2, and 6 subscribes to 5.
-
-Here is a "listener network" that indicates where each node should listen for messages coming from each other node:
-
-<verbatim>
- 1| 2| 3| 4| 5| 6|
---------------------------------------------
- 1 0 2 3 4 2 2
- 2 1 0 1 1 5 5
- 3 1 1 0 1 1 1
- 4 1 1 1 0 1 1
- 5 2 2 2 2 0 6
- 6 5 5 5 5 5 0
-</verbatim>
-
-Row 2 indicates all of the listen rules for node 2; it gets events for nodes 1, 3, and 4 through node 1, and gets events for nodes 5 and 6 from node 5.
-
-The row of 5's at the bottom, for node 6, indicates that node 6 listens to node 5 to get events from nodes 1-5.
-
-The set of slonik STORE LISTEN statements to express this "listener network" is as follows:
-
-<verbatim>
- store listen (origin = 1, receiver = 2, provider = 1);
- store listen (origin = 1, receiver = 3, provider = 1);
- store listen (origin = 1, receiver = 4, provider = 1);
- store listen (origin = 1, receiver = 5, provider = 2);
- store listen (origin = 1, receiver = 6, provider = 5);
- store listen (origin = 2, receiver = 1, provider = 2);
- store listen (origin = 2, receiver = 3, provider = 1);
- store listen (origin = 2, receiver = 4, provider = 1);
- store listen (origin = 2, receiver = 5, provider = 2);
- store listen (origin = 2, receiver = 6, provider = 5);
- store listen (origin = 3, receiver = 1, provider = 3);
- store listen (origin = 3, receiver = 2, provider = 1);
- store listen (origin = 3, receiver = 4, provider = 1);
- store listen (origin = 3, receiver = 5, provider = 2);
- store listen (origin = 3, receiver = 6, provider = 5);
- store listen (origin = 4, receiver = 1, provider = 4);
- store listen (origin = 4, receiver = 2, provider = 1);
- store listen (origin = 4, receiver = 3, provider = 1);
- store listen (origin = 4, receiver = 5, provider = 2);
- store listen (origin = 4, receiver = 6, provider = 5);
- store listen (origin = 5, receiver = 1, provider = 2);
- store listen (origin = 5, receiver = 2, provider = 5);
- store listen (origin = 5, receiver = 3, provider = 1);
- store listen (origin = 5, receiver = 4, provider = 1);
- store listen (origin = 5, receiver = 6, provider = 5);
- store listen (origin = 6, receiver = 1, provider = 2);
- store listen (origin = 6, receiver = 2, provider = 5);
- store listen (origin = 6, receiver = 3, provider = 1);
- store listen (origin = 6, receiver = 4, provider = 1);
- store listen (origin = 6, receiver = 5, provider = 6);
-</verbatim>
-
-How we read these listen statements is thus...
-
-When on the "receiver" node, look to the "provider" node to provide events coming from the "origin" node.
-
-The tool "init_cluster.pl" in the "altperl" scripts produces optimized listener networks both in the tabular form shown above and in the form of Slonik statements.
-
-There are four "thorns" in this set of roses:
-
- * If you change the shape of the node set, so that the nodes subscribe differently to things, you need to drop sl_listen entries and create new ones to indicate the new preferred paths between nodes. There is no automated way at this point to do this "reshaping."
-
- * If you _don't_ change the sl_listen entries, events will likely continue to propagate so long as all of the nodes continue to run well. The problem will only be noticed when a node is taken down, "orphaning" any nodes that are listening through it.
-
- * You might have multiple replication sets that have _different_ shapes for their respective trees of subscribers. There won't be a single "best" listener configuration in that case.
-
- * In order for there to be an sl_listen path, there _must_ be a series of sl_path entries connecting the origin to the receiver. This means that if the contents of sl_path do not express a "connected" network of nodes, then some nodes will not be reachable. This would typically happen, in practice, when you have two sets of nodes, one in one subnet, and another in another subnet, where there are only a couple of "firewall" nodes that can talk between the subnets. Cut out those nodes and the subnets stop communicating.
-
----+++ Open Question
-
-I am not certain what happens if you have multiple listen path entries for one path, that is, if you set up entries allowing a node to listen to multiple receivers to get events from a particular origin. Further commentary on that would be appreciated!
-
----+++ Generating listener entries via heuristics
-
-It ought to be possible to generate sl_listen entries dynamically, based on the following heuristics. Hopefully this will take place in version 1.1, eliminating the need to configure this by hand.
-
-Configuration will (tentatively) be controlled based on two data sources:
-
- * sl_subscribe entries are the first, most vital control as to what listens to what; we know there must be a "listen" entry for a subscriber node to listen to its provider for events from the provider, and there should be direct "listening" taking place between subscriber and provider.
-
- * sl_path entries are the second indicator; if sl_subscribe has not already indicated "how to listen," then a node may listen directly to the event's origin if there is a suitable sl_path entry
-
- * If there is no guidance thus far based on the above data sources, then nodes can listen indirectly if there is an sl_path entry that points to a suitable sl_listen entry...
-
-A stored procedure would run on each node, rewriting sl_listen each time sl_subscribe or sl_path are modified.
-
+Moved to SGML
Index: SlonyAddThings.txt
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/SlonyAddThings.txt,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/SlonyAddThings.txt -Ldoc/adminguide/SlonyAddThings.txt -u -w -r1.1 -r1.2
--- doc/adminguide/SlonyAddThings.txt
+++ doc/adminguide/SlonyAddThings.txt
@@ -1,11 +1 @@
-%META:TOPICINFO{author="guest" date="1099541886" format="1.0" version="1.4"}%
-%META:TOPICPARENT{name="SlonyIAdministration"}%
----++ Adding Things to Replication
-
-You may discover that you have missed replicating things that you wish you were replicating.
-
-This can be fairly easily remedied.
-
-You cannot directly use SET ADD TABLE or SET ADD SEQUENCE in order to add tables and sequences to a replication set that is presently replicating; you must instead create a new replication set. Once it is identically subscribed (e.g. - the set of subscribers is _identical_ to that for the set it is to merge with), the sets may be merged together using MERGE SET.
-
-Up to and including 1.0.2, there is a potential problem where if MERGE_SET is issued when other subscription-related events are pending, it is possible for things to get pretty confused on the nodes where other things were pending. This problem was resolved in 1.0.5.
+Moved to SGML
Index: SlonyPrerequisites.txt
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/SlonyPrerequisites.txt,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/SlonyPrerequisites.txt -Ldoc/adminguide/SlonyPrerequisites.txt -u -w -r1.1 -r1.2
--- doc/adminguide/SlonyPrerequisites.txt
+++ doc/adminguide/SlonyPrerequisites.txt
@@ -1,54 +1 @@
-%META:TOPICINFO{author="guest" date="1098242633" format="1.0" version="1.5"}%
-%META:TOPICPARENT{name="SlonyIAdministration"}%
----++ Requirements:
-
-Any platform that can run PostgreSQL should be able to run Slony-I.
-
-The platforms that have received specific testing at the time of this release are
-FreeBSD-4X-i386, FreeBSD-5X-i386, FreeBSD-5X-alpha, osX-10.3, Linux-2.4X-i386,
-Linux-2.6X-i386, Linux-2.6X-amd64, Solaris-2.8-SPARC, Solaris-2.9-SPARC, AIX 5.1 and
-OpenBSD-3.5-sparc64.
-
-There have been reports of success at running Slony-I hosts that are running PostgreSQL on Microsoft Windows(tm). At this time, the "binary" applications (e.g. - slonik, slon) do not run on Windows(tm), but a slon running on one of the Unix-like systems has no reason to have difficulty connecting to a PostgreSQL instance running on Windows(tm).
-
-It ought to be possible to port slon and slonik to run on Windows; the conspicuous challenge is having a POSIX-like pthreads implementation for slon, as it uses that to have multiple threads of execution. There are reports of a pthreads library for Windows(tm), so nothing should prevent some interested party from volunteering to do the port.
-
----+++ Software needed
-
- * GNU make. Other make programs will not work. GNU make is often installed under the name gmake; this document will therefore always refer to it by that name. (On Linux-based systems GNU make is typically the default make, and is called "make".) To test whether your make is GNU make, enter "make --version". Version 3.76 or later will suffice; previous versions may not.
-
- * You need an ISO/ANSI C compiler. Recent versions of GCC work.
-
- * You also need a recent version of PostgreSQL *source*. Slony-I depends on namespace support so you must have version 7.3 or newer to be able to build and use Slony-I. Rod Taylor has "hacked up" a version of Slony-I that works with version 7.2; if you desperately need that, look for him on the PostgreSQL Hackers mailing list. It is not anticipated that 7.2 will be supported by any official Slony-I release.
-
- * GNU packages may be included in the standard packaging for your operating system, or you may need to look for source code at your local GNU mirror (see [[http://www.gnu.org/order/ftp.html][http://www.gnu.org/order/ftp.html]] for a list) or at [[ftp://ftp.gnu.org/gnu][ftp://ftp.gnu.org/gnu]].
-
- * If you need to obtain PostgreSQL source, you can download it from your favorite PostgreSQL mirror (see [[http://www.postgresql.org/mirrors-www.html][http://www.postgresql.org/mirrors-www.html]] for a list), or via [[http://bt.postgresql.org/][BitTorrent]].
-
-Also check to make sure you have sufficient disk space. You will need
-approximately 5MB for the source tree during build and installation.
-
----+++ Getting Slony-I Source
-
-You can get the Slony-I source from [[http://developer.postgresql.org/~wieck/slony1/download/][http://developer.postgresql.org/~wieck/slony1/download/]]
-
----+++ Time Synchronization
-
-All the servers used within the replication cluster need to have their Real Time Clocks in sync. This is to ensure that slon doesn't error with messages indicating that the slave is already ahead of the master during replication. We recommend you have ntpd running on all nodes, with subscriber nodes using the "master" provider node as their time server.
-
-It is possible for Slony-I to function even in the face of there being some time discrepancies, but having systems "in sync" is usually pretty important for distributed applications.
-
----+++ Network Connectivity
-
-It is necessary that the hosts that are to replicate between one another have _bidirectional_ network communications to the PostgreSQL instances. That is, if node B is replicating data from node A, it is necessary that there be a path from A to B and from B to A. It is recommended that all nodes in a Slony-I cluster allow this sort of bidirectional communications from any node in the cluster to any other node in the cluster.
-
-Note that the network addresses need to be consistent across all of the nodes. Thus, if there is any need to use a "public" address for a node, to allow remote/VPN access, that "public" address needs to be able to be used consistently throughout the Slony-I cluster, as the address is propagated throughout the cluster in table sl_path.
-
-A possible workaround for this, in environments where firewall rules are particularly difficult to implement, may be to establish SSHTunnels that are created on each host that allow remote access through IP address 127.0.0.1, with a different port for each destination.
-
-Note that slonik and the slon instances need no special connections to communicate with one another; they just need to be able to get access to the PostgreSQL databases.
-
-An implication of the communications model is that the extended network in which a Slony-I cluster operates must be able to be treated as being secure. If there is a remote location where you cannot trust the Slony-I node to be considered "secured," this represents a vulnerability that affects _all_ the nodes throughout the cluster. In effect, the security policies throughout the cluster can only be considered as stringent as those applied at the _weakest_ link. Running a full-blown Slony-I node at a branch location that can't be kept secure compromises security for *every node* in the cluster.
-
-In the future plans is a feature whereby updates for a particular replication set would be serialized via a scheme called "log shipping." The data stored in sl_log_1 and sl_log_2 would be written out to log files on disk. These files could be transmitted in any manner desired, whether via scp, FTP, burning them onto DVD-ROMs and mailing them, or even by recording them on a USB "flash device" and attaching them to birds, allowing a sort of "avian transmission protocol." This will allow one way communications so that "subscribers" that use log shipping would have no need for access to other Slony-I nodes.
-
+Moved to SGML
Index: listenpaths.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/listenpaths.sgml,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/listenpaths.sgml -Ldoc/adminguide/listenpaths.sgml -u -w -r1.1 -r1.2
--- doc/adminguide/listenpaths.sgml
+++ doc/adminguide/listenpaths.sgml
@@ -1,10 +1,10 @@
-<article id="listenpaths"> <title/ Slony Listen Paths
+<sect1 id="listenpaths"> <title/ Slony Listen Paths/
<para>If you have more than two or three nodes, and any degree of usage of cascaded subscribers (_e.g._ - subscribers that are subscribing through a subscriber node), you will have to be fairly careful about the configuration of "listen paths" via the Slonik STORE LISTEN and DROP LISTEN statements that control the contents of the table sl_listen.
<para>The "listener" entries in this table control where each node expects to listen in order to get events propagated from other nodes. You might think that nodes only need to listen to the "parent" from whom they are getting updates, but in reality, they need to be able to receive messages from _all_ nodes in order to be able to conclude that SYNCs have been received everywhere, and that, therefore, entries in sl_log_1 and sl_log_2 have been applied everywhere, and can therefore be purged.
-<sect1><title/ How Listening Can Break
+<sect2><title/ How Listening Can Break/
<para>On one occasion, I had a need to drop a subscriber node (#2) and recreate it. That node was the data provider for another subscriber (#3) that was, in effect, a "cascaded slave." Dropping the subscriber node initially didn't work, as slonik informed me that there was a dependant node. I repointed the dependant node to the "master" node for the subscription set, which, for a while, replicated without difficulties.
@@ -14,7 +14,7 @@
<para>We dropped the listener rules that caused node #3 to listen to node 2, replacing them with rules where it expected its events to come from node #1 (the "master" provider node for the replication set). At that moment, "as if by magic," node #3 started replicating again, as it discovered a place to get SYNC events.
-<sect1><title/How The Listen Configuration Should Look/
+<sect2><title/How The Listen Configuration Should Look/
<para>The simple cases tend to be simple to cope with. We'll look at a fairly complex set of nodes.
@@ -89,11 +89,11 @@
<listitem><para> In order for there to be an sl_listen path, there _must_ be a series of sl_path entries connecting the origin to the receiver. This means that if the contents of sl_path do not express a "connected" network of nodes, then some nodes will not be reachable. This would typically happen, in practice, when you have two sets of nodes, one in one subnet, and another in another subnet, where there are only a couple of "firewall" nodes that can talk between the subnets. Cut out those nodes and the subnets stop communicating.
</itemizedlist>
-<sect1><title/Open Question/
+<sect2><title/Open Question/
<para>I am not certain what happens if you have multiple listen path entries for one path, that is, if you set up entries allowing a node to listen to multiple receivers to get events from a particular origin. Further commentary on that would be appreciated!
-<sect1><title/ Generating listener entries via heuristics/
+<sect2><title/ Generating listener entries via heuristics/
<para>It ought to be possible to generate sl_listen entries dynamically, based on the following heuristics. Hopefully this will take place in version 1.1, eliminating the need to configure this by hand.
@@ -109,8 +109,6 @@
<para> A stored procedure would run on each node, rewriting sl_listen
each time sl_subscribe or sl_path are modified.
-
-</article>
<!-- Keep this comment at the end of the file
Local variables:
mode:sgml
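A minimal slonik sketch of the listener repair described above (repointing node #3 so it gets origin #1's events from provider #1 instead of provider #2); the cluster name and conninfo strings are placeholders, not values taken from this repository:

#!/bin/sh
# Sketch only: node numbers follow the #1/#2/#3 example above;
# cluster name and conninfo strings are placeholders.
slonik <<_EOF_
cluster name = mycluster;
node 1 admin conninfo = 'dbname=mydb host=node1host user=slony';
node 3 admin conninfo = 'dbname=mydb host=node3host user=slony';

# drop the rule that made node 3 expect origin 1's events via node 2
drop listen (origin = 1, provider = 2, receiver = 3);
# and have it listen to the origin directly
store listen (origin = 1, provider = 1, receiver = 3);
_EOF_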
--- /dev/null
+++ doc/adminguide/firstdb.html
@@ -0,0 +1,770 @@
+<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
+<HTML
+><HEAD
+><TITLE
+>Replicating Your First Database</TITLE
+><META
+NAME="GENERATOR"
+CONTENT="Modular DocBook HTML Stylesheet Version 1.79"><LINK
+REV="MADE"
+HREF="mailto:cbbrowne at gmail.com"><LINK
+REL="HOME"
+TITLE="Slony-I 1.1 Administration"
+HREF="slony.html"><LINK
+REL="UP"
+HREF="t24.html"><LINK
+REL="PREVIOUS"
+TITLE="Database Schema Changes (DDL)"
+HREF="ddlchanges.html"><LINK
+REL="NEXT"
+TITLE=" More Slony-I Help "
+HREF="help.html"><LINK
+REL="STYLESHEET"
+TYPE="text/css"
+HREF="stdstyle.css"><META
+HTTP-EQUIV="Content-Type"></HEAD
+><BODY
+CLASS="SECT1"
+BGCOLOR="#FFFFFF"
+TEXT="#000000"
+LINK="#0000FF"
+VLINK="#840084"
+ALINK="#0000FF"
+><DIV
+CLASS="NAVHEADER"
+><TABLE
+SUMMARY="Header navigation table"
+WIDTH="100%"
+BORDER="0"
+CELLPADDING="0"
+CELLSPACING="0"
+><TR
+><TH
+COLSPAN="3"
+ALIGN="center"
+>Slony-I 1.1 Administration</TH
+></TR
+><TR
+><TD
+WIDTH="10%"
+ALIGN="left"
+VALIGN="bottom"
+><A
+HREF="ddlchanges.html"
+ACCESSKEY="P"
+>Prev</A
+></TD
+><TD
+WIDTH="80%"
+ALIGN="center"
+VALIGN="bottom"
+></TD
+><TD
+WIDTH="10%"
+ALIGN="right"
+VALIGN="bottom"
+><A
+HREF="help.html"
+ACCESSKEY="N"
+>Next</A
+></TD
+></TR
+></TABLE
+><HR
+ALIGN="LEFT"
+WIDTH="100%"></DIV
+><DIV
+CLASS="SECT1"
+><H1
+CLASS="SECT1"
+><A
+NAME="FIRSTDB"
+>20. Replicating Your First Database</A
+></H1
+><P
+>In this example, we will be replicating a brand new pgbench database. The
+mechanics of replicating an existing database are covered here; however, we
+recommend that you learn how Slony-I functions by using a fresh,
+non-production database. </P
+><P
+>The Slony-I replication engine is trigger-based, allowing us to
+replicate databases (or portions thereof) running under the same
+postmaster. </P
+><P
+>This example will show how to replicate the pgbench database
+running on localhost (master) to the pgbench slave database also
+running on localhost (slave). We make a couple of assumptions about
+your PostgreSQL configuration:
+
+<P
+></P
+><UL
+><LI
+><P
+> You have <CODE
+CLASS="OPTION"
+>tcpip_socket=true</CODE
+> in your <TT
+CLASS="FILENAME"
+>postgresql.conf</TT
+> and
+ </P
+></LI
+><LI
+><P
+> You have enabled access in your cluster(s) via <TT
+CLASS="FILENAME"
+>pg_hba.conf</TT
+></P
+></LI
+></UL
+> </P
+><P
+> The <CODE
+CLASS="ENVAR"
+>REPLICATIONUSER</CODE
+> needs to be a PostgreSQL superuser. This is typically
+postgres or pgsql. </P
+><P
+>You should also set the following shell variables:
+
+<P
+></P
+><UL
+><LI
+><P
+> <CODE
+CLASS="ENVAR"
+>CLUSTERNAME</CODE
+>=slony_example</P
+></LI
+><LI
+><P
+> <CODE
+CLASS="ENVAR"
+>MASTERDBNAME</CODE
+>=pgbench</P
+></LI
+><LI
+><P
+> <CODE
+CLASS="ENVAR"
+>SLAVEDBNAME</CODE
+>=pgbenchslave</P
+></LI
+><LI
+><P
+> <CODE
+CLASS="ENVAR"
+>MASTERHOST</CODE
+>=localhost</P
+></LI
+><LI
+><P
+> <CODE
+CLASS="ENVAR"
+>SLAVEHOST</CODE
+>=localhost</P
+></LI
+><LI
+><P
+> <CODE
+CLASS="ENVAR"
+>REPLICATIONUSER</CODE
+>=pgsql</P
+></LI
+><LI
+><P
+> <CODE
+CLASS="ENVAR"
+>PGBENCHUSER</CODE
+>=pgbench</P
+></LI
+></UL
+></P
+><P
+>Here are a couple of examples for setting variables in common shells:
+
+<P
+></P
+><UL
+><LI
+><P
+> bash, sh, ksh
+ <TT
+CLASS="COMMAND"
+>export CLUSTERNAME=slony_example</TT
+></P
+></LI
+><LI
+><P
+> (t)csh:
+ <TT
+CLASS="COMMAND"
+>setenv CLUSTERNAME slony_example</TT
+></P
+></LI
+></UL
+> </P
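Pulled together, the whole variable setup for a bash/sh/ksh session looks like this sketch (using the example values above):

#!/bin/sh
# Example values from the list above, all in one place.
export CLUSTERNAME=slony_example
export MASTERDBNAME=pgbench
export SLAVEDBNAME=pgbenchslave
export MASTERHOST=localhost
export SLAVEHOST=localhost
export REPLICATIONUSER=pgsql
export PGBENCHUSER=pgbench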
+><P
+><DIV
+CLASS="WARNING"
+><P
+></P
+><TABLE
+CLASS="WARNING"
+WIDTH="100%"
+BORDER="0"
+><TR
+><TD
+WIDTH="25"
+ALIGN="CENTER"
+VALIGN="TOP"
+><IMG
+SRC="./images/warning.gif"
+HSPACE="5"
+ALT="Warning"></TD
+><TD
+ALIGN="LEFT"
+VALIGN="TOP"
+><P
+> If you're changing these variables to use
+different hosts for <CODE
+CLASS="ENVAR"
+>MASTERHOST</CODE
+> and <CODE
+CLASS="ENVAR"
+>SLAVEHOST</CODE
+>, be sure
+<SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>not</I
+></SPAN
+> to use localhost for either of them. This will result
+in an error similar to the following: </P
+><P
+><TT
+CLASS="COMMAND"
+>ERROR remoteListenThread_1: db_getLocalNodeId() returned 2 - wrong database? </TT
+></P
+></TD
+></TR
+></TABLE
+></DIV
+></P
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN867"
+>20.1. Creating the pgbenchuser</A
+></H2
+><P
+><TT
+CLASS="COMMAND"
+>createuser -A -D $PGBENCHUSER</TT
+> </P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN871"
+>20.2. Preparing the databases</A
+></H2
+><P
+><TT
+CLASS="COMMAND"
+>createdb -O $PGBENCHUSER -h $MASTERHOST $MASTERDBNAME
+createdb -O $PGBENCHUSER -h $SLAVEHOST $SLAVEDBNAME
+pgbench -i -s 1 -U $PGBENCHUSER -h $MASTERHOST $MASTERDBNAME</TT
+> </P
+><P
+>Because Slony-I depends on the databases having the pl/pgSQL procedural
+language installed, we had better install it now. If you have already
+installed pl/pgSQL into the template1 database, you can skip this
+step, because it will already be present in $MASTERDBNAME.
+
+<TT
+CLASS="COMMAND"
+> createlang plpgsql -h $MASTERHOST $MASTERDBNAME </TT
+>
+ </P
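If you are not sure whether pl/pgSQL is already there, one way to check is to list the languages installed in the database; a sketch (add -U if your setup needs it):

# createlang -l lists the procedural languages installed in a database
createlang -l -h $MASTERHOST $MASTERDBNAME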
+><P
+>Slony-I does not yet automatically copy table definitions from a master when a
+
+slave subscribes to it, so we need to import this data. We do this with
+
+pg_dump.
+
+ </P
+><P
+><TT
+CLASS="COMMAND"
+> pg_dump -s -U $REPLICATIONUSER -h $MASTERHOST $MASTERDBNAME | psql -U $REPLICATIONUSER -h $SLAVEHOST $SLAVEDBNAME </TT
+>
+
+ </P
+><P
+>To illustrate how Slony-I allows for on-the-fly replication subscription, let's
+
+start up pgbench. If you run the pgbench application in the foreground of a
+
+separate terminal window, you can stop and restart it with different
+
+parameters at any time. You'll need to re-export the variables so they
+
+are available in this session as well.
+
+ </P
+><P
+>The typical command to run pgbench would look like:
+
+ </P
+><P
+><TT
+CLASS="COMMAND"
+> pgbench -s 1 -c 5 -t 1000 -U $PGBENCHUSER -h $MASTERHOST $MASTERDBNAME </TT
+>
+
+ </P
+><P
+>This will run pgbench with 5 concurrent clients each processing 1000
+
+transactions against the pgbench database running on localhost as the pgbench
+
+user.
+
+ </P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN885"
+>20.3. Configuring the Database for Replication.</A
+></H2
+><P
+>Creating the configuration tables, stored procedures, triggers and
+
+configuration is all done through the slonik tool. It is a specialized
+
+scripting aid that mostly calls stored procedures in the master/slave (node)
+
+databases. The script to create the initial configuration for the simple
+
+master-slave setup of our pgbench database looks like this:
+
+ </P
+><P
+><TT
+CLASS="COMMAND"
+> #!/bin/sh
+
+
+
+slonik <<_EOF_
+
+ #--
+
+ # define the namespace the replication system uses in our example it is
+
+ # slony_example
+
+ #--
+
+ cluster name = $CLUSTERNAME;
+
+
+
+ #--
+
+ # admin conninfo's are used by slonik to connect to the nodes one for each
+
+ # node on each side of the cluster, the syntax is that of PQconnectdb in
+
+ # the C-API
+
+ # --
+
+ node 1 admin conninfo = 'dbname=$MASTERDBNAME host=$MASTERHOST user=$REPLICATIONUSER';
+
+ node 2 admin conninfo = 'dbname=$SLAVEDBNAME host=$SLAVEHOST user=$REPLICATIONUSER';
+
+
+
+ #--
+
+ # init the first node. Its id MUST be 1. This creates the schema
+
+ # _$CLUSTERNAME containing all replication system specific database
+
+ # objects.
+
+
+
+ #--
+
+ init cluster ( id=1, comment = 'Master Node');
+
+
+
+ #--
+
+ # Because the history table does not have a primary key or other unique
+
+ # constraint that could be used to identify a row, we need to add one.
+
+ # The following command adds a bigint column named
+
+ # _Slony-I_$CLUSTERNAME_rowID to the table. It will have a default value
+
+ # of nextval('_$CLUSTERNAME.s1_rowid_seq'), and have UNIQUE and NOT NULL
+
+ # constraints applied. All existing rows will be initialized with a
+
+ # number
+
+ #--
+
+ table add key (node id = 1, fully qualified name = 'public.history');
+
+
+
+ #--
+
+ # Slony-I organizes tables into sets. The smallest unit a node can
+
+ # subscribe is a set. The following commands create one set containing
+
+ # all 4 pgbench tables. The master or origin of the set is node 1.
+
+ #--
+
+ create set (id=1, origin=1, comment='All pgbench tables');
+
+ set add table (set id=1, origin=1, id=1, fully qualified name = 'public.accounts', comment='accounts table');
+
+ set add table (set id=1, origin=1, id=2, fully qualified name = 'public.branches', comment='branches table');
+
+ set add table (set id=1, origin=1, id=3, fully qualified name = 'public.tellers', comment='tellers table');
+
+ set add table (set id=1, origin=1, id=4, fully qualified name = 'public.history', comment='history table', key = serial);
+
+
+
+ #--
+
+ # Create the second node (the slave) tell the 2 nodes how to connect to
+
+ # each other and how they should listen for events.
+
+ #--
+
+
+
+ store node (id=2, comment = 'Slave node');
+
+ store path (server = 1, client = 2, conninfo='dbname=$MASTERDBNAME host=$MASTERHOST user=$REPLICATIONUSER');
+
+ store path (server = 2, client = 1, conninfo='dbname=$SLAVEDBNAME host=$SLAVEHOST user=$REPLICATIONUSER');
+
+ store listen (origin=1, provider = 1, receiver =2);
+
+ store listen (origin=2, provider = 2, receiver =1);
+
+_EOF_ </TT
+>
+
+
+
+ </P
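As a quick sanity check, the script should have left a _$CLUSTERNAME schema behind with the two nodes registered in sl_node; a sketch of a query to confirm that:

# Expect to see the nodes defined above (1 = Master Node, 2 = Slave node).
psql -h $MASTERHOST -U $REPLICATIONUSER $MASTERDBNAME \
  -c "select no_id, no_comment from \"_$CLUSTERNAME\".sl_node;"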
+><P
+>Is pgbench still running? If not, start it again.
+
+ </P
+><P
+>At this point we have 2 databases that are fully prepared. One is the master
+
+database in which pgbench is busy accessing and changing rows. It's now time
+
+to start the replication daemons.
+
+ </P
+><P
+>On $MASTERHOST the command to start the replication engine is
+
+ </P
+><P
+><TT
+CLASS="COMMAND"
+> slon $CLUSTERNAME "dbname=$MASTERDBNAME user=$REPLICATIONUSER host=$MASTERHOST" </TT
+>
+ </P
+><P
+>Likewise we start the replication system on node 2 (the slave)
+ </P
+><P
+><TT
+CLASS="COMMAND"
+> slon $CLUSTERNAME "dbname=$SLAVEDBNAME user=$REPLICATIONUSER host=$SLAVEHOST" </TT
+>
+ </P
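If you would rather not tie up two terminal sessions, a common variation is to background each slon and capture its output; a sketch (the log file locations are only examples):

# Run each slon in the background, keeping its diagnostics in a log file.
slon $CLUSTERNAME "dbname=$MASTERDBNAME user=$REPLICATIONUSER host=$MASTERHOST" \
  > /tmp/slon_node1.log 2>&1 &
slon $CLUSTERNAME "dbname=$SLAVEDBNAME user=$REPLICATIONUSER host=$SLAVEHOST" \
  > /tmp/slon_node2.log 2>&1 &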
+><P
+>Even though we have slon running on both the master and slave, and they are
+
+both spitting out diagnostics and other messages, we aren't replicating any
+
+data yet. The notices you are seeing are the synchronization of cluster
+
+configurations between the 2 slon processes.
+ </P
+><P
+>To start replicating the 4 pgbench tables (set 1) from the master (node id 1)
+
+to the slave (node id 2), execute the following script.
+
+ </P
+><P
+><TT
+CLASS="COMMAND"
+> #!/bin/sh
+
+slonik <<_EOF_
+
+ # ----
+
+ # This defines which namespace the replication system uses
+
+ # ----
+
+ cluster name = $CLUSTERNAME;
+
+
+
+ # ----
+
+ # Admin conninfo's are used by the slonik program to connect
+
+ # to the node databases. So these are the PQconnectdb arguments
+
+ # that connect from the administrators workstation (where
+
+ # slonik is executed).
+
+ # ----
+
+ node 1 admin conninfo = 'dbname=$MASTERDBNAME host=$MASTERHOST user=$REPLICATIONUSER';
+
+ node 2 admin conninfo = 'dbname=$SLAVEDBNAME host=$SLAVEHOST user=$REPLICATIONUSER';
+
+
+
+ # ----
+
+ # Node 2 subscribes set 1
+
+ # ----
+
+ subscribe set ( id = 1, provider = 1, receiver = 2, forward = no);
+
+_EOF_ </TT
+>
+ </P
+><P
+>Any second here, the replication daemon on $SLAVEHOST will start to copy the
+
+current content of all 4 replicated tables. While doing so, of course, the
+
+pgbench application will continue to modify the database. When the copy
+
+process is finished, the replication daemon on $SLAVEHOST will start to catch
+
+up by applying the accumulated replication log. It will do this in little
+
+steps, 10 seconds worth of application work at a time. Depending on the
+
+performance of the two systems involved, the sizing of the two databases, the
+
+actual transaction load and how well the two databases are tuned and
+
+maintained, this catchup process can be a matter of minutes, hours, or
+
+eons.
+
+ </P
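One rough way to watch that catch-up, assuming the cluster schema is _$CLUSTERNAME, is to look at the event sequence numbers the subscriber has confirmed; a sketch:

# max(con_seqno) per (origin, receiver) pair creeps upward as the slave catches up.
psql -h $MASTERHOST -U $REPLICATIONUSER $MASTERDBNAME -c "
select con_origin, con_received, max(con_seqno) as last_confirmed
  from \"_$CLUSTERNAME\".sl_confirm
 group by con_origin, con_received;"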
+><P
+>You have now successfully set up your first basic master/slave replication
+
+system, and, once the slave has caught up, the 2 databases contain identical
+
+data. That's the theory. In practice, it's good to check that the datasets
+
+are in fact the same.
+
+ </P
+><P
+>The following script will create ordered dumps of the 2 databases and compare
+
+them. Make sure that pgbench has completed its testing, and that your slon
+
+sessions have caught up.
+
+ </P
+><P
+><TT
+CLASS="COMMAND"
+> #!/bin/sh
+
+echo -n "**** comparing sample1 ... "
+
+psql -U $REPLICATIONUSER -h $MASTERHOST $MASTERDBNAME >dump.tmp.1.$$ <<_EOF_
+
+ select 'accounts:'::text, aid, bid, abalance, filler
+
+ from accounts order by aid;
+
+ select 'branches:'::text, bid, bbalance, filler
+
+ from branches order by bid;
+
+ select 'tellers:'::text, tid, bid, tbalance, filler
+
+ from tellers order by tid;
+
+ select 'history:'::text, tid, bid, aid, delta, mtime, filler,
+
+ "_Slony-I_${CLUSTERNAME}_rowID"
+
+ from history order by "_Slony-I_${CLUSTERNAME}_rowID";
+
+_EOF_
+
+psql -U $REPLICATIONUSER -h $SLAVEHOST $SLAVEDBNAME >dump.tmp.2.$$ <<_EOF_
+
+ select 'accounts:'::text, aid, bid, abalance, filler
+
+ from accounts order by aid;
+
+ select 'branches:'::text, bid, bbalance, filler
+
+ from branches order by bid;
+
+ select 'tellers:'::text, tid, bid, tbalance, filler
+
+ from tellers order by tid;
+
+ select 'history:'::text, tid, bid, aid, delta, mtime, filler,
+
+ "_Slony-I_${CLUSTERNAME}_rowID"
+
+ from history order by "_Slony-I_${CLUSTERNAME}_rowID";
+
+_EOF_
+
+
+
+if diff dump.tmp.1.$$ dump.tmp.2.$$ >$CLUSTERNAME.diff ; then
+
+ echo "success - databases are equal."
+
+ rm dump.tmp.?.$$
+
+ rm $CLUSTERNAME.diff
+
+else
+
+ echo "FAILED - see $CLUSTERNAME.diff for database differences"
+
+fi </TT
+>
+
+ </P
+><P
+>Note that there is somewhat more sophisticated documentation of the process in the Slony-I source code tree in a file called slony-I-basic-mstr-slv.txt.
+
+ </P
+><P
+>If this script returns "FAILED" please contact the developers at
+<A
+HREF="http://slony.org/"
+TARGET="_top"
+> http://slony.org/</A
+>
+
+
+
+
+ </P
+></DIV
+></DIV
+><DIV
+CLASS="NAVFOOTER"
+><HR
+ALIGN="LEFT"
+WIDTH="100%"><TABLE
+SUMMARY="Footer navigation table"
+WIDTH="100%"
+BORDER="0"
+CELLPADDING="0"
+CELLSPACING="0"
+><TR
+><TD
+WIDTH="33%"
+ALIGN="left"
+VALIGN="top"
+><A
+HREF="ddlchanges.html"
+ACCESSKEY="P"
+>Prev</A
+></TD
+><TD
+WIDTH="34%"
+ALIGN="center"
+VALIGN="top"
+><A
+HREF="slony.html"
+ACCESSKEY="H"
+>Home</A
+></TD
+><TD
+WIDTH="33%"
+ALIGN="right"
+VALIGN="top"
+><A
+HREF="help.html"
+ACCESSKEY="N"
+>Next</A
+></TD
+></TR
+><TR
+><TD
+WIDTH="33%"
+ALIGN="left"
+VALIGN="top"
+>Database Schema Changes (DDL)</TD
+><TD
+WIDTH="34%"
+ALIGN="center"
+VALIGN="top"
+><A
+HREF="t24.html"
+ACCESSKEY="U"
+>Up</A
+></TD
+><TD
+WIDTH="33%"
+ALIGN="right"
+VALIGN="top"
+>More Slony-I Help</TD
+></TR
+></TABLE
+></DIV
+></BODY
+></HTML
+>
\ No newline at end of file
--- /dev/null
+++ doc/adminguide/slonconfig.html
@@ -0,0 +1,304 @@
+<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
+<HTML
+><HEAD
+><TITLE
+>Slon Configuration Options</TITLE
+><META
+NAME="GENERATOR"
+CONTENT="Modular DocBook HTML Stylesheet Version 1.79"><LINK
+REV="MADE"
+HREF="mailto:cbbrowne at gmail.com"><LINK
+REL="HOME"
+TITLE="Slony-I 1.1 Administration"
+HREF="slony.html"><LINK
+REL="UP"
+HREF="t24.html"><LINK
+REL="PREVIOUS"
+TITLE="Slon daemons"
+HREF="slonstart.html"><LINK
+REL="NEXT"
+TITLE=" Subscribing Nodes"
+HREF="subscribenodes.html"><LINK
+REL="STYLESHEET"
+TYPE="text/css"
+HREF="stdstyle.css"><META
+HTTP-EQUIV="Content-Type"></HEAD
+><BODY
+CLASS="SECT1"
+BGCOLOR="#FFFFFF"
+TEXT="#000000"
+LINK="#0000FF"
+VLINK="#840084"
+ALINK="#0000FF"
+><DIV
+CLASS="NAVHEADER"
+><TABLE
+SUMMARY="Header navigation table"
+WIDTH="100%"
+BORDER="0"
+CELLPADDING="0"
+CELLSPACING="0"
+><TR
+><TH
+COLSPAN="3"
+ALIGN="center"
+>Slony-I 1.1 Administration</TH
+></TR
+><TR
+><TD
+WIDTH="10%"
+ALIGN="left"
+VALIGN="bottom"
+><A
+HREF="slonstart.html"
+ACCESSKEY="P"
+>Prev</A
+></TD
+><TD
+WIDTH="80%"
+ALIGN="center"
+VALIGN="bottom"
+></TD
+><TD
+WIDTH="10%"
+ALIGN="right"
+VALIGN="bottom"
+><A
+HREF="subscribenodes.html"
+ACCESSKEY="N"
+>Next</A
+></TD
+></TR
+></TABLE
+><HR
+ALIGN="LEFT"
+WIDTH="100%"></DIV
+><DIV
+CLASS="SECT1"
+><H1
+CLASS="SECT1"
+><A
+NAME="SLONCONFIG"
+>10. Slon Configuration Options</A
+></H1
+><P
+>Slon parameters: </P
+><P
+> usage: slon [options] clustername conninfo </P
+><P
+><TT
+CLASS="COMMAND"
+>Options:
+-d debuglevel verbosity of logging (1..8)
+-s milliseconds SYNC check interval (default 10000)
+-t milliseconds SYNC interval timeout (default 60000)
+-g num maximum SYNC group size (default 6)
+-c num how often to vacuum in cleanup cycles
+-p filename slon pid file
+-f filename slon configuration file</TT
+>
+
+<P
+></P
+><UL
+><LI
+><P
+><CODE
+CLASS="OPTION"
+>-d</CODE
+> </P
+><P
+>The eight levels of logging are:
+<P
+></P
+><UL
+><LI
+><P
+>Error</P
+></LI
+><LI
+><P
+>Warn</P
+></LI
+><LI
+><P
+>Config</P
+></LI
+><LI
+><P
+>Info</P
+></LI
+><LI
+><P
+>Debug1</P
+></LI
+><LI
+><P
+>Debug2</P
+></LI
+><LI
+><P
+>Debug3</P
+></LI
+><LI
+><P
+>Debug4</P
+></LI
+></UL
+>
+ </P
+></LI
+><LI
+><P
+><CODE
+CLASS="OPTION"
+>-s</CODE
+> </P
+><P
+>A SYNC event will be sent at least this often, regardless of whether update activity is detected. </P
+><P
+>Short sync times keep the master on a "short leash," updating the slaves more frequently. If you have replicated sequences that are frequently updated _without_ there being tables that are affected, this keeps there from being times when only sequences are updated, and therefore <SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>no</I
+></SPAN
+> syncs take place. </P
+><P
+>Longer sync times allow there to be fewer events, which allows somewhat better efficiency. </P
+></LI
+><LI
+><P
+><CODE
+CLASS="OPTION"
+>-t</CODE
+> </P
+><P
+>The time before the SYNC check interval times out. </P
+></LI
+><LI
+><P
+><CODE
+CLASS="OPTION"
+>-g</CODE
+> </P
+><P
+>Number of SYNC events to try to cram together. The default is 6, which is probably suitable for small systems that can devote only very limited bits of memory to slon. If you have plenty of memory, it would be reasonable to increase this, as it will increase the amount of work done in each transaction, and will allow a subscriber that is behind by a lot to catch up more quickly. </P
+><P
+>Slon processes usually stay pretty small; even with a large value for this option, slon would be expected to grow to only a few MB in size. </P
+></LI
+><LI
+><P
+><CODE
+CLASS="OPTION"
+>-c</CODE
+> </P
+><P
+>How often to vacuum (<SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>i.e.</I
+></SPAN
+> - how many cleanup cycles to run before vacuuming). </P
+><P
+>Set this to zero to disable slon-initiated vacuuming. If you are using something like <B
+CLASS="APPLICATION"
+>pg_autovacuum</B
+> to initiate vacuums, you may not need for slon to initiate vacuums itself. If you are not, there are some tables Slony-I uses that collect a <SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>lot</I
+></SPAN
+> of dead tuples that should be vacuumed frequently. </P
+></LI
+><LI
+><P
+><CODE
+CLASS="OPTION"
+>-p</CODE
+> </P
+><P
+> The location of the PID file for the slon process. </P
+></LI
+><LI
+><P
+><CODE
+CLASS="OPTION"
+>-f</CODE
+> </P
+><P
+>The location of the slon configuration file.</P
+></LI
+></UL
+>
+
+
+ </P
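Putting several of these options together, an invocation might look like the following sketch; the values and the pid-file path are only examples, not recommendations:

# Debug level 2, 10s SYNC check interval, larger SYNC groups, vacuum every
# 10 cleanup cycles, and a pid file so a watchdog can find the process.
slon -d 2 -s 10000 -g 20 -c 10 -p /var/run/slon.node1.pid \
  $CLUSTERNAME "dbname=$MASTERDBNAME user=$REPLICATIONUSER host=$MASTERHOST"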
+></DIV
+><DIV
+CLASS="NAVFOOTER"
+><HR
+ALIGN="LEFT"
+WIDTH="100%"><TABLE
+SUMMARY="Footer navigation table"
+WIDTH="100%"
+BORDER="0"
+CELLPADDING="0"
+CELLSPACING="0"
+><TR
+><TD
+WIDTH="33%"
+ALIGN="left"
+VALIGN="top"
+><A
+HREF="slonstart.html"
+ACCESSKEY="P"
+>Prev</A
+></TD
+><TD
+WIDTH="34%"
+ALIGN="center"
+VALIGN="top"
+><A
+HREF="slony.html"
+ACCESSKEY="H"
+>Home</A
+></TD
+><TD
+WIDTH="33%"
+ALIGN="right"
+VALIGN="top"
+><A
+HREF="subscribenodes.html"
+ACCESSKEY="N"
+>Next</A
+></TD
+></TR
+><TR
+><TD
+WIDTH="33%"
+ALIGN="left"
+VALIGN="top"
+>Slon daemons</TD
+><TD
+WIDTH="34%"
+ALIGN="center"
+VALIGN="top"
+><A
+HREF="t24.html"
+ACCESSKEY="U"
+>Up</A
+></TD
+><TD
+WIDTH="33%"
+ALIGN="right"
+VALIGN="top"
+>Subscribing Nodes</TD
+></TR
+></TABLE
+></DIV
+></BODY
+></HTML
+>
\ No newline at end of file
Index: SlonyDropThings.txt
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/SlonyDropThings.txt,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/SlonyDropThings.txt -Ldoc/adminguide/SlonyDropThings.txt -u -w -r1.1 -r1.2
--- doc/adminguide/SlonyDropThings.txt
+++ doc/adminguide/SlonyDropThings.txt
@@ -1,72 +1 @@
-%META:TOPICINFO{author="guest" date="1099633479" format="1.0" version="1.3"}%
-%META:TOPICPARENT{name="SlonyIAdministration"}%
----++ Dropping things from Slony Replication
-
-There are several things you might want to do involving dropping things from Slony-I replication.
-
----+++ Dropping A Whole Node
-
-If you wish to drop an entire node from replication, the Slonik command DROP NODE should do the trick.
-
-This will lead to Slony-I dropping the triggers (generally that deny the ability to update data), restoring the "native" triggers, dropping the schema used by Slony-I, and the slon process for that node terminating itself.
-
-As a result, the database should be available for whatever use your application makes of the database.
-
-This is a pretty major operation, with considerable potential to cause substantial destruction; make sure you drop the right node!
-
-The operation will fail if there are any nodes subscribing to the node that you attempt to drop, so there is a bit of failsafe.
-
-SlonyFAQ17 documents some extra maintenance that may need to be done on sl_confirm if you are running versions prior to 1.0.5.
-
----+++ Dropping An Entire Set
-
-If you wish to stop replicating a particular replication set, the Slonik command DROP SET is what you need to use.
-
-Much as with DROP NODE, this leads to Slony-I dropping the Slony-I triggers on the tables and restoring "native" triggers. One difference is that this takes place on *all* nodes in the cluster, rather than on just one node. Another difference is that this does not clear out the Slony-I cluster's namespace, as there might be other sets being serviced.
-
-This operation is quite a bit more dangerous than DROP NODE, as there _isn't_ the same sort of "failsafe." If you tell DROP SET to drop the _wrong_ set, there isn't anything to prevent "unfortunate results."
-
----+++ Unsubscribing One Node From One Set
-
-The UNSUBSCRIBE SET operation is a little less invasive than either DROP SET or DROP NODE; it involves dropping Slony-I triggers and restoring "native" triggers on one node, for one replication set.
-
-Much like with DROP NODE, this operation will fail if there is a node subscribing to the set on this node.
-
----+++ Warning!!!
-
-For all of the above operations, "turning replication back on" will require that the node copy in a *full* fresh set of the data on a provider. The fact that the data was recently being replicated isn't good enough; Slony-I will expect to refresh the data from scratch.
-
----+++ Dropping A Table From A Set
-
-In Slony 1.0.5 and above, there is a Slonik command SET DROP TABLE that allows dropping a single table from replication without forcing the user to drop the entire replication set.
-
-If you are running an earlier version, there is a "hack" to do this:
-
-You can fiddle this by hand by finding the table ID for the table you want to get rid of, which you can find in sl_table, and then run the following three queries, on each host:
-
-<verbatim>
- select _slonyschema.alterTableRestore(40);
- select _slonyschema.tableDropKey(40);
- delete from _slonyschema.sl_table where tab_id = 40;
-</verbatim>
-
-The schema will obviously depend on how you defined the Slony-I cluster. The table ID, in this case, 40, will need to change to the ID of the table you want to have go away.
-
-You'll have to run these three queries on all of the nodes, preferably firstly on the "master" node, so that the dropping of this propagates
-properly. Implementing this via a Slonik statement with a new Slony event would do that. Submitting the three queries using EXECUTE
-SCRIPT could do that. Also possible would be to connect to each database and submit the queries by hand.
-
----+++ Dropping A Sequence From A Set
-
-Just as with SET DROP TABLE, version 1.0.5 introduces the operation SET DROP SEQUENCE.
-
-If you are running an earlier version, here are instructions as to how to drop sequences:
-
-The data that needs to be deleted to stop Slony from continuing to replicate the two sequences identified with Sequence IDs 93 and 59 are thus:
-
-<verbatim>
-delete from _oxrsorg.sl_seqlog where seql_seqid in (93, 59);
-delete from _oxrsorg.sl_sequence where seq_id in (93,59);
-</verbatim>
-
-Those two queries could be submitted to all of the nodes via ddlscript() / EXECUTE SCRIPT, thus eliminating the sequence everywhere "at once." Or they may be applied by hand to each of the nodes.
+Moved to SGML
Index: help.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/help.sgml,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/help.sgml -Ldoc/adminguide/help.sgml -u -w -r1.1 -r1.2
--- doc/adminguide/help.sgml
+++ doc/adminguide/help.sgml
@@ -1,4 +1,4 @@
-<article id="help"> <title/ More Slony-I Help /
+<sect1 id="help"> <title/ More Slony-I Help /
<para>If you are having problems with Slony-I, you have several options for help:
<itemizedlist>
<listitem><Para> <ulink url="http://slony.info/">http://slony.info/</ulink> - the official "home" of Slony
@@ -18,7 +18,6 @@
</itemizedlist>
-</article>
<!-- Keep this comment at the end of the file
Local variables:
mode:sgml
Index: cluster.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/cluster.sgml,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/cluster.sgml -Ldoc/adminguide/cluster.sgml -u -w -r1.1 -r1.2
--- doc/adminguide/cluster.sgml
+++ doc/adminguide/cluster.sgml
@@ -1,4 +1,4 @@
-<article id="cluster"> <title/Defining Slony-I Clusters/
+<sect1 id="cluster"> <title/Defining Slony-I Clusters/
<para>A Slony-I cluster is the basic grouping of database instances in
which replication takes place. It consists of a set of PostgreSQL
@@ -18,7 +18,6 @@
case, the node numbers can be cryptic; it will be the node name that
is used to organize the cluster.
-</article>
<!-- Keep this comment at the end of the file
Local variables:
mode:sgml
--- /dev/null
+++ doc/adminguide/mysheet.dsl
@@ -0,0 +1,286 @@
+<!DOCTYPE style-sheet PUBLIC "-//James Clark//DTD DSSSL Style Sheet//EN" [
+<!-- /usr/lib/sgml/stylesheet/dsssl/docbook/nwalsh/print/docbook.dsl -->
+<!-- <!ENTITY mydbstyle SYSTEM
+ "/usr/lib/sgml/stylesheet/dsssl/docbook/nwalsh/print/docbook.dsl"
+ CDATA DSSSL> -->
+<!ENTITY docbook.dsl PUBLIC "-//Norman Walsh//DOCUMENT DocBook HTML Stylesheet//EN" CDATA DSSSL>
+<!ENTITY % output.html "IGNORE">
+<!ENTITY % output.print "IGNORE">
+<!ENTITY % lang.ja "IGNORE">
+<!ENTITY % lang.ja.dsssl "IGNORE">
+<![ %output.html; [
+<!ENTITY docbook.dsl PUBLIC "-//Norman Walsh//DOCUMENT DocBook HTML Stylesheet//EN" CDATA DSSSL>
+]]>
+<![ %output.print; [
+<!ENTITY docbook.dsl PUBLIC "-//Norman Walsh//DOCUMENT DocBook Print Stylesheet//EN" CDATA DSSSL>
+]]>
+]>
+
+<style-sheet>
+ <style-specification id="html" use="docbook">
+ <style-specification-body>
+ <!-- Locatization -->
+ <![ %lang.ja; [
+ <![ %lang.ja.dsssl; [
+ (define %gentext-language% "ja")
+ ]]>
+ (define %html-header-tags%
+ '(("META" ("HTTP-EQUIV" "Content-Type")
+ ("CONTENT" "text/html; charset=EUC-JP"))
+ ;;; ("BASE" ("HREF" "http://linuxdatabases.info/info/index.html"))))
+ ;;; ("BASE" ("HREF" "http://www3.sympatico.ca/cbbrowne/index.html"))))
+ )
+ ]]>
+
+ <!-- HTML only .................................................... -->
+
+ <![ %output.html; [
+ <!-- Configure the stylesheet using documented variables -->
+
+ (define %gentext-nav-use-tables% #t)
+
+ (define %html-ext% ".html")
+
+ (define %shade-verbatim% #t)
+
+
+
+;(define (graphic-attrs imagefile instance-alt)
+; (let* ((grove
+; (sgml-parse image-library-filename))
+; (my-node-list-first (lambda (x) (if x (node-list-first x) #f)))
+; (imagelib
+; (node-property 'document-element
+; (node-property 'grove-root grove)))
+; (images
+; (select-elements (children imagelib) "image"))
+; (image
+; (let loop ((imglist images))
+; (if (and imglist (node-list-empty? imglist))
+; #f
+; (if (equal? (attribute-string
+; "filename"
+; (node-list-first imglist))
+; imagefile)
+; (node-list-first imglist)
+; (loop (node-list-rest imglist))))))
+; (prop
+; (if image
+; (select-elements (children image) "properties")
+; #f))
+; (metas
+; (if prop
+; (select-elements (children prop) "meta")
+; #f))
+; (attrs
+; (let loop ((meta metas) (attrlist '()))
+; (if (and meta (node-list-empty? meta))
+; attrlist
+; (if (equal? (attribute-string
+; "imgattr"
+; (node-list-first meta))
+; "yes")
+; (loop (node-list-rest meta)
+; (append attrlist
+; (list
+; (list
+; (attribute-string
+; "name"
+; (node-list-first meta))
+; (attribute-string
+; "content"
+; (node-list-first meta))))))
+; (loop (node-list-rest meta) attrlist)))))
+; (width (attribute-string "width" prop))
+; (height (attribute-string "height" prop))
+; (alttext (select-elements (children image) "alttext"))
+; (alt (if instance-alt
+; instance-alt
+; (if (and alttext (node-list-empty? alttext))
+; #f
+; (data alttext)))))
+; (if (or width height alt (not (null? attrs)))
+; (append
+; attrs
+; (if width (list (list "WIDTH" width)) '())
+; (if height (list (list "HEIGHT" height)) '())
+; (if alttext (list (list "ALT" alt)) '()))
+; '())))
+
+ (define %use-id-as-filename% #t)
+ ;; Use ID attributes as name for component HTML files?
+
+ ;; Name for the root HTML document
+ (define %root-filename% #f)
+
+ ;; Write a manifest?
+ (define html-manifest #f)
+
+ (define %author-othername-in-middle% #t)
+ (define %admon-graphics% #t)
+ (define %admon-graphics-path% "./images/")
+ (define %body-attr%
+ (list
+ (list "BGCOLOR" "#FFFFFF")
+ (list "TEXT" "#000000")
+ (list "LINK" "#0000FF")
+ (list "VLINK" "#840084")
+ (list "ALINK" "#0000FF")))
+
+ (define %html-header-tags%
+ '(("META" ("HTTP-EQUIV" "Content-Type"))
+ ;;; ("BASE" ("HREF" "http://linuxfinances.info/info/index.html"))))
+ ))
+
+ (define %html40% #t)
+ (define %stylesheet% "stdstyle.css")
+ (define %stylesheet-type% "text/css")
+ (define %css-decoration% #t)
+ (define %css-liststyle-alist%
+ '(("bullet" "disc") ("box" "square")))
+ (define %spacingparas% #t)
+ (define %link-mailto-url% "mailto:cbbrowne at gmail.com")
+ (define %generate-article-toc% #t)
+ (define %generate-reference-toc% #t)
+ (define %generate-reference-toc-on-titlepage% #t)
+ (define %annotate-toc% #t)
+ (define %generate-set-titlepage% #t)
+ (define %generate-book-titlepage% #t)
+ (define %generate-part-titlepage% #t)
+ (define %generate-reference-titlepage% #t)
+ (define %generate-article-titlepage% #t)
+ (define %emphasis-propagates-style% #t)
+
+ <!-- Add in image-library -->
+ (define image-library #t)
+ (define image-library-filename "imagelib/imagelib.xml")
+
+ <!-- Understand <segmentedlist> and related elements. Simpleminded,
+ and only works for the HTML output. -->
+
+ (element segmentedlist
+ (make element gi: "TABLE"
+ (process-children)))
+
+ (element seglistitem
+ (make element gi: "TR"
+ (process-children)))
+
+ (element seg
+ (make element gi: "TD"
+ attributes: '(("VALIGN" "TOP"))
+ (process-children)))
+
+ ]]>
+
+ <!-- Print only ................................................... -->
+ <![ %output.print; [
+
+ ]]>
+
+ <!-- Both sets of stylesheets ..................................... -->
+
+ (define %section-autolabel%
+ #t)
+
+ (define %may-format-variablelist-as-table%
+ #f)
+
+ (define %indent-programlisting-lines%
+ " ")
+
+ (define %indent-screen-lines%
+ " ")
+
+ <!-- Slightly deeper customisations -->
+
+ <!-- I want things marked up with 'sgmltag' eg.,
+
+ <para>You can use <sgmltag>para</sgmltag> to indicate
+ paragraphs.</para>
+
+ to automatically have the opening and closing braces inserted,
+ and it should be in a mono-spaced font. -->
+
+ (element sgmltag ($mono-seq$
+ (make sequence
+ (literal "<")
+ (process-children)
+ (literal ">"))))
+
+ <!-- John Fieber's 'instant' translation specification had
+ '<command>' rendered in a mono-space font, and '<application>'
+ rendered in bold.
+
+ Norm's stylesheet doesn't do this (although '<command>' is
+ rendered in bold).
+
+ Configure the stylesheet to behave more like John's. -->
+
+ (element command ($mono-seq$))
+
+ (element application ($bold-seq$))
+
+ <!-- Warnings and cautions are put in boxed tables to make them stand
+ out. The same effect can be better achieved using CSS or similar,
+ so have them treated the same as <important>, <note>, and <tip>
+ -->
+ (element warning ($admonition$))
+ (element (warning title) (empty-sosofo))
+ (element (warning para) ($admonpara$))
+ (element (warning simpara) ($admonpara$))
+ (element caution ($admonition$))
+ (element (caution title) (empty-sosofo))
+ (element (caution para) ($admonpara$))
+ (element (caution simpara) ($admonpara$))
+
+ (define en-warning-label-title-sep ": ")
+ (define en-caution-label-title-sep ": ")
+
+ <!-- Tell the stylesheet about our local customisations -->
+
+ (element hostid ($mono-seq$))
+ (element username ($mono-seq$))
+ (element devicename ($mono-seq$))
+ (element maketarget ($mono-seq$))
+ (element makevar ($mono-seq$))
+
+ <!-- QAndASet ..................................................... -->
+
+ <!-- Default to labelling Q/A with Q: and A: -->
+
+ (define (qanda-defaultlabel)
+ (normalize "qanda"))
+
+ <!-- For the HTML version, display the questions in a bigger, bolder
+ font. -->
+
+ <![ %output.html [
+ ;; Custom function because of gripes about node-list-first
+
+ (element question
+ (let* ((chlist (children (current-node)))
+ (firstch (node-list-first chlist))
+ (restch (node-list-rest chlist)))
+ (make element gi: "DIV"
+ attributes: (list (list "CLASS" (gi)))
+ (make element gi: "P"
+ (make element gi: "BIG"
+ (make element gi: "A"
+ attributes: (list
+ (list "NAME" (element-id)))
+ (empty-sosofo))
+ (make element gi: "B"
+ (literal (question-answer-label
+ (current-node)) " ")
+ (process-node-list (children firstch)))))
+ (process-node-list restch))))
+ ]]>
+
+ </style-specification-body>
+ </style-specification>
+ <external-specification id="docbook" document="docbook.dsl">
+</style-sheet>
+
+
+<!-- /usr/lib/sgml/stylesheet/dsssl/docbook/nwalsh/html/dbparam.dsl -->
--- /dev/null
+++ doc/adminguide/monitoring.html
@@ -0,0 +1,204 @@
+<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
+<HTML
+><HEAD
+><TITLE
+>Monitoring</TITLE
+><META
+NAME="GENERATOR"
+CONTENT="Modular DocBook HTML Stylesheet Version 1.79"><LINK
+REV="MADE"
+HREF="mailto:cbbrowne at gmail.com"><LINK
+REL="HOME"
+TITLE="Slony-I 1.1 Administration"
+HREF="slony.html"><LINK
+REL="UP"
+HREF="t24.html"><LINK
+REL="PREVIOUS"
+TITLE=" Subscribing Nodes"
+HREF="subscribenodes.html"><LINK
+REL="NEXT"
+TITLE="Slony-I Maintenance"
+HREF="maintenance.html"><LINK
+REL="STYLESHEET"
+TYPE="text/css"
+HREF="stdstyle.css"><META
+HTTP-EQUIV="Content-Type"></HEAD
+><BODY
+CLASS="SECT1"
+BGCOLOR="#FFFFFF"
+TEXT="#000000"
+LINK="#0000FF"
+VLINK="#840084"
+ALINK="#0000FF"
+><DIV
+CLASS="NAVHEADER"
+><TABLE
+SUMMARY="Header navigation table"
+WIDTH="100%"
+BORDER="0"
+CELLPADDING="0"
+CELLSPACING="0"
+><TR
+><TH
+COLSPAN="3"
+ALIGN="center"
+>Slony-I 1.1 Administration</TH
+></TR
+><TR
+><TD
+WIDTH="10%"
+ALIGN="left"
+VALIGN="bottom"
+><A
+HREF="subscribenodes.html"
+ACCESSKEY="P"
+>Prev</A
+></TD
+><TD
+WIDTH="80%"
+ALIGN="center"
+VALIGN="bottom"
+></TD
+><TD
+WIDTH="10%"
+ALIGN="right"
+VALIGN="bottom"
+><A
+HREF="maintenance.html"
+ACCESSKEY="N"
+>Next</A
+></TD
+></TR
+></TABLE
+><HR
+ALIGN="LEFT"
+WIDTH="100%"></DIV
+><DIV
+CLASS="SECT1"
+><H1
+CLASS="SECT1"
+><A
+NAME="MONITORING"
+>12. Monitoring</A
+></H1
+><P
+>Here are some of the things that you may find in your Slony logs, and explanations of what they mean. </P
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN565"
+>12.1. CONFIG notices</A
+></H2
+><P
+>These entries are pretty straightforward. They are informative messages about your configuration. </P
+><P
+>Here are some typical entries that you will probably run into in your logs: </P
+><P
+><TT
+CLASS="COMMAND"
+>CONFIG main: local node id = 1
+CONFIG main: loading current cluster configuration
+CONFIG storeNode: no_id=3 no_comment='Node 3'
+CONFIG storePath: pa_server=5 pa_client=1 pa_conninfo="host=127.0.0.1 dbname=foo user=postgres port=6132" pa_connretry=10
+CONFIG storeListen: li_origin=3 li_receiver=1 li_provider=3
+CONFIG storeSet: set_id=1 set_origin=1 set_comment='Set 1'
+CONFIG main: configuration complete - starting threads</TT
+> </P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN571"
+>12.2. DEBUG Notices</A
+></H2
+><P
+>Debug notices are always prefaced by the name of the thread that the notice originates from. You will see messages from the following threads: </P
+><P
+><TT
+CLASS="COMMAND"
+>localListenThread: This is the local thread that listens for events on the local node.
+remoteWorkerThread_X: The thread processing remote events.
+remoteListenThread_X: Listens for events on a remote node database.
+cleanupThread: Takes care of things like vacuuming, cleaning out the confirm and event tables, and deleting logs.
+syncThread: Generates sync events.</TT
+> </P
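Since each message is prefixed with its level and the thread it came from, a simple filter is often enough to pull out the interesting lines; a sketch (the log path is an assumption and depends on how slon was started):

# Show only the more serious messages from a slon log.
grep -E 'ERROR|WARN' /var/log/slony/node1.log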
+><P
+> WriteMe: I can't decide the format for the rest of this. I
+think maybe there should be a "how it works" page, explaining more
+about how the threads work, what to expect in the logs after you run a
+slonik command...
+
+
+
+ </P
+></DIV
+></DIV
+><DIV
+CLASS="NAVFOOTER"
+><HR
+ALIGN="LEFT"
+WIDTH="100%"><TABLE
+SUMMARY="Footer navigation table"
+WIDTH="100%"
+BORDER="0"
+CELLPADDING="0"
+CELLSPACING="0"
+><TR
+><TD
+WIDTH="33%"
+ALIGN="left"
+VALIGN="top"
+><A
+HREF="subscribenodes.html"
+ACCESSKEY="P"
+>Prev</A
+></TD
+><TD
+WIDTH="34%"
+ALIGN="center"
+VALIGN="top"
+><A
+HREF="slony.html"
+ACCESSKEY="H"
+>Home</A
+></TD
+><TD
+WIDTH="33%"
+ALIGN="right"
+VALIGN="top"
+><A
+HREF="maintenance.html"
+ACCESSKEY="N"
+>Next</A
+></TD
+></TR
+><TR
+><TD
+WIDTH="33%"
+ALIGN="left"
+VALIGN="top"
+>Subscribing Nodes</TD
+><TD
+WIDTH="34%"
+ALIGN="center"
+VALIGN="top"
+><A
+HREF="t24.html"
+ACCESSKEY="U"
+>Up</A
+></TD
+><TD
+WIDTH="33%"
+ALIGN="right"
+VALIGN="top"
+>Slony-I Maintenance</TD
+></TR
+></TABLE
+></DIV
+></BODY
+></HTML
+>
\ No newline at end of file
Index: SlonyFAQ03.txt
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/SlonyFAQ03.txt,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/SlonyFAQ03.txt -Ldoc/adminguide/SlonyFAQ03.txt -u -w -r1.1 -r1.2
--- doc/adminguide/SlonyFAQ03.txt
+++ doc/adminguide/SlonyFAQ03.txt
@@ -1,14 +1 @@
-%META:TOPICINFO{author="guest" date="1098326455" format="1.0" version="1.1"}%
-%META:TOPICPARENT{name="SlonyFAQ"}%
----++ Cluster name with "-" in it
-
-I tried creating a CLUSTER NAME with a "-" in it. That didn't work.
-
-Slony-I uses the same rules for unquoted identifiers as the PostgreSQL
-main parser, so no, you probably shouldn't put a "-" in your
-identifier name.
-
-You may be able to defeat this by putting "quotes" around identifier
-names, but it's liable to bite you some, so this is something that is
-probably not worth working around.
-
+Moved to SGML
Index: concepts.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/concepts.sgml,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/concepts.sgml -Ldoc/adminguide/concepts.sgml -u -w -r1.1 -r1.2
--- doc/adminguide/concepts.sgml
+++ doc/adminguide/concepts.sgml
@@ -1,4 +1,4 @@
-<article id="concepts"> <title/Slony-I Concepts/
+<sect1 id="concepts"> <title/Slony-I Concepts/
<para>In order to set up a set of Slony-I replicas, it is necessary to understand the following major abstractions that it uses:
@@ -10,7 +10,7 @@
<listitem><Para> Provider and Subscriber
</itemizedlist>
-<sect1><title/Cluster/
+<sect2><title/Cluster/
<para>In Slony-I terms, a Cluster is a named set of PostgreSQL database instances; replication takes place between those databases.
@@ -21,7 +21,7 @@
<para>If the Cluster name is 'something', then Slony-I will create, in each database instance in the cluster, the namespace/schema '_something'.
-<sect1><title/ Node/
+<sect2><title/ Node/
<para>A Slony-I Node is a named PostgreSQL database that will be participating in replication.
@@ -38,13 +38,13 @@
<listitem><Para> A set of Slony-I nodes, each of which has a namespace based on that cluster name
</itemizedlist>
-<sect1><title/ Replication Set/
+<sect2><title/ Replication Set/
<para>A replication set is defined as a set of tables and sequences that are to be replicated between nodes in a Slony-I cluster.
<para>You may have several sets, and the "flow" of replication does not need to be identical between those sets.
-<sect1><title/ Provider and Subscriber/
+<sect2><title/ Provider and Subscriber/
<para>Each replication set has some "master" node, which winds up
being the <emphasis/only/ place where user applications are permitted
@@ -60,7 +60,6 @@
that is subscribed to the "master" may also behave as a "provider" to
other nodes in the cluster.
-</article>
<!-- Keep this comment at the end of the file
Local variables:
mode:sgml
--- /dev/null
+++ doc/adminguide/faq.html
@@ -0,0 +1,1476 @@
+<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
+<HTML
+><HEAD
+><TITLE
+> FAQ </TITLE
+><META
+NAME="GENERATOR"
+CONTENT="Modular DocBook HTML Stylesheet Version 1.79"><LINK
+REV="MADE"
+HREF="mailto:cbbrowne at gmail.com"><LINK
+REL="HOME"
+TITLE="Slony-I 1.1 Administration"
+HREF="slony.html"><LINK
+REL="PREVIOUS"
+TITLE=" Other Information Sources"
+HREF="x931.html"><LINK
+REL="STYLESHEET"
+TYPE="text/css"
+HREF="stdstyle.css"><META
+HTTP-EQUIV="Content-Type"></HEAD
+><BODY
+CLASS="ARTICLE"
+BGCOLOR="#FFFFFF"
+TEXT="#000000"
+LINK="#0000FF"
+VLINK="#840084"
+ALINK="#0000FF"
+><DIV
+CLASS="NAVHEADER"
+><TABLE
+SUMMARY="Header navigation table"
+WIDTH="100%"
+BORDER="0"
+CELLPADDING="0"
+CELLSPACING="0"
+><TR
+><TH
+COLSPAN="3"
+ALIGN="center"
+>Slony-I 1.1 Administration</TH
+></TR
+><TR
+><TD
+WIDTH="10%"
+ALIGN="left"
+VALIGN="bottom"
+><A
+HREF="x931.html"
+ACCESSKEY="P"
+>Prev</A
+></TD
+><TD
+WIDTH="80%"
+ALIGN="center"
+VALIGN="bottom"
+></TD
+><TD
+WIDTH="10%"
+ALIGN="right"
+VALIGN="bottom"
+> </TD
+></TR
+></TABLE
+><HR
+ALIGN="LEFT"
+WIDTH="100%"></DIV
+><DIV
+CLASS="ARTICLE"
+><DIV
+CLASS="TITLEPAGE"
+><H1
+CLASS="TITLE"
+><A
+NAME="FAQ"
+>FAQ</A
+></H1
+><HR></DIV
+><P
+> Not all of these are, strictly speaking, <SPAN
+CLASS="QUOTE"
+>"frequently
+asked;"</SPAN
+> some represent <SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>trouble found that seemed worth
+documenting</I
+></SPAN
+>.
+
+ </P
+><DIV
+CLASS="QANDASET"
+><DL
+><DT
+>Q: <A
+HREF="faq.html#AEN944"
+>I looked for the <CODE
+CLASS="ENVAR"
+>_clustername</CODE
+> namespace, and it wasn't there.</A
+></DT
+><DT
+>Q: <A
+HREF="faq.html#AEN955"
+>Some events moving around, but no replication </A
+></DT
+></DL
+><DIV
+CLASS="QANDAENTRY"
+><DIV
+CLASS="QUESTION"
+><P
+><BIG
+><A
+NAME="AEN944"
+></A
+><B
+>Q: I looked for the <CODE
+CLASS="ENVAR"
+>_clustername</CODE
+> namespace, and it wasn't there.</B
+></BIG
+></P
+></DIV
+><DIV
+CLASS="ANSWER"
+><P
+><B
+>A: </B
+> If the DSNs are wrong, then slon instances can't connect to the nodes. </P
+><P
+>This will generally lead to nodes remaining entirely untouched. </P
+><P
+>Recheck the connection configuration. By the way, since
+<B
+CLASS="APPLICATION"
+>slon</B
+> links to libpq, you could have password information
+stored in <TT
+CLASS="FILENAME"
+> <CODE
+CLASS="ENVAR"
+>$HOME</CODE
+>/.pgpass</TT
+>,
+which may be supplying part of the authentication information there, correctly or not.</P
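A sketch of rechecking that configuration by hand, reusing the shell variables from the examples earlier in this guide:

# If this fails, slon will fail with the same connection parameters too.
psql -h $MASTERHOST -U $REPLICATIONUSER -d $MASTERDBNAME -c 'select 1;'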
+></DIV
+></DIV
+><DIV
+CLASS="QANDAENTRY"
+><DIV
+CLASS="QUESTION"
+><P
+><BIG
+><A
+NAME="AEN955"
+></A
+><B
+>Q: Some events moving around, but no replication </B
+></BIG
+></P
+><P
+> Slony logs might look like the following:
+
+<TT
+CLASS="COMMAND"
+>DEBUG1 remoteListenThread_1: connected to 'host=host004 dbname=pgbenchrep user=postgres port=5432'
+ERROR remoteListenThread_1: "select ev_origin, ev_seqno, ev_timestamp, ev_minxid, ev_maxxid, ev_xip, ev_type, ev_data1, ev_data2, ev_data3, ev_data4, ev_data5, ev_data6, ev_data7, ev_data8 from "_pgbenchtest".sl_event e where (e.ev_origin = '1' and e.ev_seqno > '1') order by e.ev_origin, e.ev_seqno" - could not receive data from server: Operation now in progress</TT
+> </P
+></DIV
+><DIV
+CLASS="ANSWER"
+><P
+><B
+>A: </B
+>On AIX and Solaris (and possibly elsewhere), both Slony-I <SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>and PostgreSQL</I
+></SPAN
+> must be compiled with the <CODE
+CLASS="OPTION"
+>--enable-thread-safety</CODE
+> option. The above results when PostgreSQL isn't so compiled. </P
+><P
+>What breaks here is that the libc (threadsafe) and libpq (non-threadsafe) use different memory locations for errno, thereby leading to the request failing. </P
+><P
+>Problems like this crop up with disadmirable regularity on AIX
+and Solaris; it may take something of an <SPAN
+CLASS="QUOTE"
+>"object code audit"</SPAN
+> to
+make sure that <SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>ALL</I
+></SPAN
+> of the necessary components have been
+compiled and linked with <CODE
+CLASS="OPTION"
+>--enable-thread-safety</CODE
+>. </P
+><P
+>For instance, I ran into the problem once when
+<CODE
+CLASS="ENVAR"
+>LD_LIBRARY_PATH</CODE
+> had been set, on Solaris, to point to
+libraries from an old PostgreSQL compile. That meant that even though
+the database <SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>had</I
+></SPAN
+> been compiled with
+<CODE
+CLASS="OPTION"
+>--enable-thread-safety</CODE
+>, and <B
+CLASS="APPLICATION"
+>slon</B
+> had been
+compiled against that, <B
+CLASS="APPLICATION"
+>slon</B
+> was being dynamically linked
+to the <SPAN
+CLASS="QUOTE"
+>"bad old thread-unsafe version,"</SPAN
+> so slon didn't work. It
+wasn't clear that this was the case until I ran <TT
+CLASS="COMMAND"
+>ldd</TT
+> against
+<B
+CLASS="APPLICATION"
+>slon</B
+>. </P
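On platforms that provide ldd, that check can be done in one line; a sketch (the slon path may differ on your system):

# Confirm which libpq the slon binary actually resolves at run time.
ldd `which slon` | grep libpq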
+><DIV
+CLASS="QANDAENTRY"
+><DIV
+CLASS="QUESTION"
+><P
+><BIG
+><A
+NAME="AEN978"
+></A
+><B
+>Q: I tried creating a CLUSTER NAME with a "-" in it.
+That didn't work. </B
+></BIG
+></P
+></DIV
+><DIV
+CLASS="ANSWER"
+><P
+><B
+>A: </B
+> Slony-I uses the same rules for unquoted identifiers as the PostgreSQL
+main parser, so no, you probably shouldn't put a "-" in your
+identifier name. </P
+><P
+> You may be able to defeat this by putting "quotes" around
+identifier names, but it's liable to bite you some, so this is
+something that is probably not worth working around. </P
+><DIV
+CLASS="QANDAENTRY"
+><DIV
+CLASS="QUESTION"
+><P
+><BIG
+><A
+NAME="AEN984"
+></A
+><B
+>Q: slon does not restart after crash </B
+></BIG
+></P
+><P
+> After an immediate stop of PostgreSQL (simulating a system crash), a
+tuple with relname='_${cluster_name}_Restart' remains in
+pg_catalog.pg_listener. slon doesn't start because it thinks another
+process is already serving the cluster on this node. What can I do?
+The tuples can't be dropped from this relation.
+><P
+> The logs claim that "Another slon daemon is serving this node already" </P
+></DIV
+><DIV
+CLASS="ANSWER"
+><P
+><B
+>A: </B
+>It's handy to keep a slonik script like the following one around to
+run in such cases: </P
+><P
+><TT
+CLASS="COMMAND"
+>twcsds004[/opt/twcsds004/OXRS/slony-scripts]$ cat restart_org.slonik
+cluster name = oxrsorg ;
+node 1 admin conninfo = 'host=32.85.68.220 dbname=oxrsorg user=postgres port=5532';
+node 2 admin conninfo = 'host=32.85.68.216 dbname=oxrsorg user=postgres port=5532';
+node 3 admin conninfo = 'host=32.85.68.244 dbname=oxrsorg user=postgres port=5532';
+node 4 admin conninfo = 'host=10.28.103.132 dbname=oxrsorg user=postgres port=5532';
+restart node 1;
+restart node 2;
+restart node 3;
+restart node 4;</TT
+> </P
+><P
+> <TT
+CLASS="COMMAND"
+>restart node n</TT
+> cleans up dead notifications so that you can restart the node. </P
+><P
+>As of version 1.0.5, the startup process of slon looks for this
+condition, and automatically cleans it up. </P
+><DIV
+CLASS="QANDAENTRY"
+><DIV
+CLASS="QUESTION"
+><P
+><BIG
+><A
+NAME="AEN996"
+></A
+><B
+>Q: ps finds passwords on command line </B
+></BIG
+></P
+><P
+> If I run a <TT
+CLASS="COMMAND"
+>ps</TT
+> command, I, and everyone else, can see passwords
+on the command line. </P
+></DIV
+><DIV
+CLASS="ANSWER"
+><P
+><B
+>A: </B
+>Take the passwords out of the Slony configuration, and put them into
+<TT
+CLASS="FILENAME"
+><CODE
+CLASS="ENVAR"
+>$HOME</CODE
+>/.pgpass.</TT
+> </P
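For reference, each .pgpass line has the form host:port:database:username:password, and libpq will only use the file if it is readable by the owner alone. A sketch, with the port and password as placeholders:

# Append an entry for the replication user and lock the file down.
echo "$MASTERHOST:5432:$MASTERDBNAME:$REPLICATIONUSER:secret" >> $HOME/.pgpass
chmod 600 $HOME/.pgpass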
+><DIV
+CLASS="QANDAENTRY"
+><DIV
+CLASS="QUESTION"
+><P
+><BIG
+><A
+NAME="AEN1005"
+></A
+><B
+>Q: Slonik fails - cannot load PostgreSQL library - <TT
+CLASS="COMMAND"
+>PGRES_FATAL_ERROR load '$libdir/xxid';</TT
+> </B
+></BIG
+></P
+><P
+> When I run the sample setup script I get an error message similar
+to:
+
+<TT
+CLASS="COMMAND"
+>stdin:64: PGRES_FATAL_ERROR load '$libdir/xxid'; - ERROR: LOAD:
+could not open file '$libdir/xxid': No such file or directory</TT
+> </P
+></DIV
+><DIV
+CLASS="ANSWER"
+><P
+><B
+>A: </B
+> Evidently, you haven't got the <TT
+CLASS="FILENAME"
+>xxid.so</TT
+>
+library in the <CODE
+CLASS="ENVAR"
+>$libdir</CODE
+> directory that the PostgreSQL instance
+is using. Note that the Slony-I components need to be installed in
+the PostgreSQL software installation for <SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>each and every one</I
+></SPAN
+>
+of the nodes, not just on the <SPAN
+CLASS="QUOTE"
+>"master node."</SPAN
+> </P
+><P
+>This may also point to there being some other mismatch between
+the PostgreSQL binary instance and the Slony-I instance. If you
+compiled Slony-I yourself, on a machine that may have multiple
+PostgreSQL builds <SPAN
+CLASS="QUOTE"
+>"lying around,"</SPAN
+> it's possible that the slon or
+slonik binaries are asking to load something that isn't actually in
+the library directory for the PostgreSQL database cluster that it's
+hitting. </P
+><P
+>Long and short: This points to a need to <SPAN
+CLASS="QUOTE"
+>"audit"</SPAN
+> what
+installations of PostgreSQL and Slony you have in place on the
+machine(s). Unfortunately, just about any mismatch will cause things
+not to link up quite right. See also <A
+HREF="faq.html#SLONYFAQ02"
+>SlonyFAQ02 </A
+> concerning threading issues on Solaris ... </P
+><DIV
+CLASS="QANDAENTRY"
+><DIV
+CLASS="QUESTION"
+><P
+><BIG
+><A
+NAME="AEN1022"
+></A
+><B
+>Q: Table indexes with FQ namespace names
+
+<TT
+CLASS="COMMAND"
+>set add table (set id = 1, origin = 1, id = 27, fully qualified name = 'nspace.some_table', key = 'key_on_whatever',
+ comment = 'Table some_table in namespace nspace with a candidate primary key');</TT
+> </B
+></BIG
+></P
+></DIV
+><DIV
+CLASS="ANSWER"
+><P
+><B
+>A: </B
+> If you have <TT
+CLASS="COMMAND"
+> key = 'nspace.key_on_whatever'</TT
+>
+the request will <SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>FAIL</I
+></SPAN
+>. </P
+><DIV
+CLASS="QANDAENTRY"
+><DIV
+CLASS="QUESTION"
+><P
+><BIG
+><A
+NAME="AEN1030"
+></A
+><B
+>Q: I'm trying to get a slave subscribed, and get the following
+messages in the logs:
+
+<TT
+CLASS="COMMAND"
+>DEBUG1 copy_set 1
+DEBUG1 remoteWorkerThread_1: connected to provider DB
+WARN remoteWorkerThread_1: transactions earlier than XID 127314958 are still in progress
+WARN remoteWorkerThread_1: data copy for set 1 failed - sleep 60 seconds</TT
+> </B
+></BIG
+></P
+><P
+>Oops. What I forgot to mention, as well, was that I was trying
+to add <SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>TWO</I
+></SPAN
+> subscribers, concurrently. </P
+></DIV
+><DIV
+CLASS="ANSWER"
+><P
+><B
+>A: </B
+> That doesn't work out: Slony-I won't work on the
+<TT
+CLASS="COMMAND"
+>COPY</TT
+> commands concurrently. See
+<TT
+CLASS="FILENAME"
+>src/slon/remote_worker.c</TT
+>, function
+<CODE
+CLASS="FUNCTION"
+>copy_set()</CODE
+> </P
+><P
+>This has the (perhaps unfortunate) implication that you cannot
+populate two slaves concurrently. You have to subscribe one to the
+set, and only once it has completed setting up the subscription
+(copying table contents and such) can the second subscriber start
+setting up the subscription. </P
+><P
+>It could also be possible for there to be an old outstanding
+transaction blocking Slony-I from processing the sync. You might want
+to take a look at pg_locks to see what's up:
+
+<TT
+CLASS="COMMAND"
+>sampledb=# select * from pg_locks where transaction is not null order by transaction;
+ relation | database | transaction | pid | mode | granted
+----------+----------+-------------+---------+---------------+---------
+ | | 127314921 | 2605100 | ExclusiveLock | t
+ | | 127326504 | 5660904 | ExclusiveLock | t
+(2 rows)</TT
+> </P
+><P
+>See? 127314921 is indeed older than 127314958, and it's still running.
+
+<TT
+CLASS="COMMAND"
+>$ ps -aef | egrep '[2]605100'
+
+postgres 2605100 205018 0 18:53:43 pts/3 3:13 postgres: postgres sampledb localhost COPY </TT
+> </P
+><P
+>This happens to be a <TT
+CLASS="COMMAND"
+>COPY</TT
+> transaction involved in setting up the
+subscription for one of the nodes. All is well; the system is busy
+setting up the first subscriber; it won't start on the second one
+until the first one has completed subscribing. </P
+><P
+>By the way, if there is more than one database on the PostgreSQL
+cluster, and activity is taking place on the OTHER database, that will
+lead to there being <SPAN
+CLASS="QUOTE"
+>"transactions earlier than XID whatever"</SPAN
+> being
+found to be still in progress. The fact that it's a separate database
+on the cluster is irrelevant; Slony-I will wait until those old
+transactions terminate.</P
+><DIV
+CLASS="QANDAENTRY"
+><DIV
+CLASS="QUESTION"
+><P
+><BIG
+><A
+NAME="AEN1050"
+></A
+><B
+>Q: ERROR: duplicate key violates unique constraint "sl_table-pkey" </B
+></BIG
+></P
+><P
+>I tried setting up a second replication set, and got the following error:
+
+<TT
+CLASS="COMMAND"
+>stdin:9: Could not create subscription set 2 for oxrslive!
+stdin:11: PGRES_FATAL_ERROR select "_oxrslive".setAddTable(2, 1, 'public.replic_test', 'replic_test__Slony-I_oxrslive_rowID_key', 'Table public.replic_test without primary key'); - ERROR: duplicate key violates unique constraint "sl_table-pkey"
+CONTEXT: PL/pgSQL function "setaddtable_int" line 71 at SQL statement</TT
+> </P
+></DIV
+><DIV
+CLASS="ANSWER"
+><P
+><B
+>A: </B
+>The table IDs used in SET ADD TABLE are required to be unique <SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>ACROSS
+ALL SETS</I
+></SPAN
+>. Thus, you can't restart numbering at 1 for a second set; if
+you are numbering them consecutively, a subsequent set has to start
+with IDs after where the previous set(s) left off.</P
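+><P
+>For instance, if the tables in set 1 used IDs 1 through 42, the first
+table of a second set might be added as in the following sketch (the
+ID and table name here are hypothetical):
+
+<TT
+CLASS="COMMAND"
+># hypothetical ID and table name; continue numbering after set 1
+set add table (set id = 2, origin = 1, id = 43, full qualified name = 'public.another_table', comment = 'first table of set 2');</TT
+> </P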
+><DIV
+CLASS="QANDAENTRY"
+><DIV
+CLASS="QUESTION"
+><P
+><BIG
+><A
+NAME="AEN1058"
+></A
+><B
+>Q: I need to drop a table from a replication set</B
+></BIG
+></P
+></DIV
+><DIV
+CLASS="ANSWER"
+><P
+><B
+>A: </B
+>This can be accomplished several ways, not all equally desirable ;-).
+
+<P
+></P
+><UL
+><LI
+><P
+> You could drop the whole replication set, and recreate it with just the tables that you need. Alas, that means recopying a whole lot of data, and kills the usability of the cluster on the rest of the set while that's happening. </P
+></LI
+><LI
+><P
+> If you are running 1.0.5 or later, there is the command SET DROP TABLE, which will "do the trick." </P
+></LI
+><LI
+><P
+> If you are still using 1.0.1 or 1.0.2, the _essential_ functionality of SET DROP TABLE involves the functionality in droptable_int(). You can fiddle this by hand by finding the table ID for the table you want to get rid of, which you can find in sl_table, and then run the following three queries, on each host:
+
+<TT
+CLASS="COMMAND"
+> select _slonyschema.alterTableRestore(40);
+ select _slonyschema.tableDropKey(40);
+ delete from _slonyschema.sl_table where tab_id = 40;</TT
+> </P
+><P
+>The schema will obviously depend on how you defined the Slony-I
+cluster. The table ID, in this case, 40, will need to change to the
+ID of the table you want to have go away.
+
+You'll have to run these three queries on all of the nodes, preferably
+starting with the "master" node, so that the removal propagates
+properly. Submitting the three queries using EXECUTE SCRIPT would
+accomplish that as a new Slony event; alternatively, you can connect
+to each database and submit the queries by hand.</P
+></LI
+></UL
+></P
+><DIV
+CLASS="QANDAENTRY"
+><DIV
+CLASS="QUESTION"
+><P
+><BIG
+><A
+NAME="AEN1072"
+></A
+><B
+>Q: I need to drop a sequence from a replication set </B
+></BIG
+></P
+></DIV
+><DIV
+CLASS="ANSWER"
+><P
+><B
+>A: </B
+></P
+><P
+>If you are running 1.0.5 or later, there is a
+<TT
+CLASS="COMMAND"
+>SET DROP SEQUENCE</TT
+> command in Slonik to allow you to do this,
+parallelling <TT
+CLASS="COMMAND"
+>SET DROP TABLE.</TT
+> </P
+><P
+>If you are running 1.0.2 or earlier, the process is a bit more manual. </P
+><P
+>Supposing I want to get rid of the two sequences listed below,
+<CODE
+CLASS="ENVAR"
+>whois_cachemgmt_seq</CODE
+> and <CODE
+CLASS="ENVAR"
+>epp_whoi_cach_seq_</CODE
+>, we start
+by needing the <CODE
+CLASS="ENVAR"
+>seq_id</CODE
+> values.
+
+<TT
+CLASS="COMMAND"
+>oxrsorg=# select * from _oxrsorg.sl_sequence where seq_id in (93,59);
+ seq_id | seq_reloid | seq_set | seq_comment
+--------+------------+---------+-------------------------------------
+ 93 | 107451516 | 1 | Sequence public.whois_cachemgmt_seq
+ 59 | 107451860 | 1 | Sequence public.epp_whoi_cach_seq_
+(2 rows)</TT
+> </P
+><P
+>The data that needs to be deleted to stop Slony from continuing to
+replicate these are thus:
+
+<TT
+CLASS="COMMAND"
+>delete from _oxrsorg.sl_seqlog where seql_seqid in (93, 59);
+delete from _oxrsorg.sl_sequence where seq_id in (93,59);</TT
+> </P
+><P
+>Those two queries could be submitted to all of the nodes via
+<CODE
+CLASS="FUNCTION"
+>ddlscript()</CODE
+> / <TT
+CLASS="COMMAND"
+>EXECUTE SCRIPT</TT
+>, thus eliminating
+the sequence everywhere <SPAN
+CLASS="QUOTE"
+>"at once."</SPAN
+> Or they may be applied by
+hand to each of the nodes. </P
+><P
+>Similarly to <TT
+CLASS="COMMAND"
+>SET DROP TABLE</TT
+>, this should be in place for Slony-I version
+1.0.5 as <TT
+CLASS="COMMAND"
+>SET DROP SEQUENCE.</TT
+></P
+><DIV
+CLASS="QANDAENTRY"
+><DIV
+CLASS="QUESTION"
+><P
+><BIG
+><A
+NAME="AEN1095"
+></A
+><B
+>Q: Slony-I: cannot add table to currently subscribed set 1 </B
+></BIG
+></P
+><P
+> I tried to add a table to a set, and got the following message:
+
+<TT
+CLASS="COMMAND"
+> Slony-I: cannot add table to currently subscribed set 1</TT
+> </P
+></DIV
+><DIV
+CLASS="ANSWER"
+><P
+><B
+>A: </B
+> You cannot add tables to sets that already have
+subscribers. </P
+><P
+>The workaround to this is to create <SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>ANOTHER</I
+></SPAN
+> set, add
+the new tables to that new set, subscribe the same nodes subscribing
+to "set 1" to the new set, and then merge the sets together. </P
+><DIV
+CLASS="QANDAENTRY"
+><DIV
+CLASS="QUESTION"
+><P
+><BIG
+><A
+NAME="AEN1104"
+></A
+><B
+>Q: Some nodes start consistently falling behind </B
+></BIG
+></P
+><P
+>I have been running Slony-I on a node for a while, and am seeing
+system performance suffering. </P
+><P
+>I'm seeing long running queries of the form:
+<TT
+CLASS="COMMAND"
+> fetch 100 from LOG;</TT
+> </P
+></DIV
+><DIV
+CLASS="ANSWER"
+><P
+><B
+>A: </B
+> This is characteristic of pg_listener (which is the table containing
+<TT
+CLASS="COMMAND"
+>NOTIFY</TT
+> data) having plenty of dead tuples in it. That makes <TT
+CLASS="COMMAND"
+>NOTIFY</TT
+>
+events take a long time, and causes the affected node to gradually
+fall further and further behind. </P
+><P
+>You quite likely need to do a <TT
+CLASS="COMMAND"
+>VACUUM FULL</TT
+> on <CODE
+CLASS="ENVAR"
+>pg_listener</CODE
+>, to vigorously clean it out, and need to vacuum <CODE
+CLASS="ENVAR"
+>pg_listener</CODE
+> really frequently. Once every five minutes would likely be AOK. </P
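+><P
+>For instance, the one-time cleanup could be done by hand via psql
+(the database name here is hypothetical), with a plain <TT
+CLASS="COMMAND"
+>vacuum pg_listener;</TT
+> scheduled every few minutes thereafter via cron or similar:
+
+<TT
+CLASS="COMMAND"
+>sampledb=# vacuum full verbose pg_listener;  -- "sampledb" is a placeholder database name</TT
+> </P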
+><P
+> Slon daemons already vacuum a bunch of tables, and
+<TT
+CLASS="FILENAME"
+>cleanup_thread.c</TT
+> contains a list of tables that are
+frequently vacuumed automatically. In Slony-I 1.0.2,
+<CODE
+CLASS="ENVAR"
+>pg_listener</CODE
+> is not included. In 1.0.5 and later, it is
+regularly vacuumed, so this should cease to be a direct issue. </P
+><P
+>There is, however, still a scenario where this will
+"bite." Vacuums cannot delete tuples that were made "obsolete" at any
+time after the start time of the eldest transaction that is still
+open. Long running transactions will cause trouble, and should be
+avoided, even on "slave" nodes. </P
+><DIV
+CLASS="QANDAENTRY"
+><DIV
+CLASS="QUESTION"
+><P
+><BIG
+><A
+NAME="AEN1122"
+></A
+><B
+>Q: I started doing a backup using pg_dump, and suddenly Slony stops </B
+></BIG
+></P
+></DIV
+><DIV
+CLASS="ANSWER"
+><P
+><B
+>A: </B
+>Ouch. What happens here is a conflict between:
+<P
+></P
+><UL
+><LI
+><P
+> <B
+CLASS="APPLICATION"
+>pg_dump</B
+>, which has taken out an <TT
+CLASS="COMMAND"
+>AccessShareLock</TT
+> on all of the tables in the database, including the Slony-I ones, and </P
+></LI
+><LI
+><P
+> A Slony-I sync event, which wants to grab a <TT
+CLASS="COMMAND"
+>AccessExclusiveLock</TT
+> on the table <CODE
+CLASS="ENVAR"
+>sl_event</CODE
+>.</P
+></LI
+></UL
+> </P
+><P
+>The initial query that will be blocked is thus:
+
+<TT
+CLASS="COMMAND"
+> select "_slonyschema".createEvent('_slonyschema, 'SYNC', NULL); </TT
+> </P
+><P
+>(You can see this in <CODE
+CLASS="ENVAR"
+>pg_stat_activity</CODE
+>, if you have query
+display turned on in <TT
+CLASS="FILENAME"
+>postgresql.conf</TT
+>) </P
+><P
+>The actual query combination that is causing the lock is from
+the function <CODE
+CLASS="FUNCTION"
+>Slony_I_ClusterStatus()</CODE
+>, found in
+<TT
+CLASS="FILENAME"
+>slony1_funcs.c</TT
+>, and is localized in the code that does:
+
+<TT
+CLASS="COMMAND"
+> LOCK TABLE %s.sl_event;
+ INSERT INTO %s.sl_event (...stuff...)
+ SELECT currval('%s.sl_event_seq');</TT
+> </P
+><P
+>The <TT
+CLASS="COMMAND"
+>LOCK</TT
+> statement will sit there and wait until <TT
+CLASS="COMMAND"
+>pg_dump</TT
+> (or whatever else has pretty much any kind of access lock on <CODE
+CLASS="ENVAR"
+>sl_event</CODE
+>) completes. </P
+><P
+>Every subsequent query submitted that touches <CODE
+CLASS="ENVAR"
+>sl_event</CODE
+> will block behind the <CODE
+CLASS="FUNCTION"
+>createEvent</CODE
+> call. </P
+><P
+>There are a number of possible answers to this:
+<P
+></P
+><UL
+><LI
+><P
+> Have pg_dump specify the schema dumped using
+--schema=whatever, and don't try dumping the cluster's schema (see the sketch after this list). </P
+></LI
+><LI
+><P
+> It would be nice to add an "--exclude-schema" option
+to pg_dump to exclude the Slony cluster schema. Maybe in 8.0 or
+8.1... </P
+></LI
+><LI
+><P
+>Note that 1.0.5 uses a more precise lock that is less
+exclusive that alleviates this problem.</P
+></LI
+></UL
+></P
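+><P
+>For example, the first of those workarounds might look like the
+following sketch (the database, schema, and cluster names are
+hypothetical):
+
+<TT
+CLASS="COMMAND"
+>$ pg_dump --schema=public sampledb > sampledb-public.sql
+# the Slony-I cluster schema (e.g. "_oxrslive") is not dumped, so sl_event is not locked</TT
+> </P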
+><DIV
+CLASS="QANDAENTRY"
+><DIV
+CLASS="QUESTION"
+><P
+><BIG
+><A
+NAME="AEN1160"
+></A
+><B
+>Q: The slons spent the weekend out of commission [for
+some reason], and it's taking a long time to get a sync through. </B
+></BIG
+></P
+></DIV
+><DIV
+CLASS="ANSWER"
+><P
+><B
+>A: </B
+>You might want to take a look at the sl_log_1/sl_log_2 tables, and do
+a summary to see if there are any really enormous Slony-I transactions
+in there. Up until at least 1.0.2, there needs to be a slon connected
+to the master in order for <TT
+CLASS="COMMAND"
+>SYNC</TT
+> events to be generated. </P
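+><P
+>One quick way to do such a summary (the cluster schema name, here
+_slonyschema, depends on how you defined your cluster):
+
+<TT
+CLASS="COMMAND"
+>-- replace _slonyschema with your cluster's schema
+select log_xid, count(*) from _slonyschema.sl_log_1
+group by log_xid order by count(*) desc limit 10;</TT
+> </P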
+><P
+>If none are being generated, then all of the updates until the next
+one is generated will collect into one rather enormous Slony-I
+transaction. </P
+><P
+>Conclusion: Even if there is not going to be a subscriber around, you
+<SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>really</I
+></SPAN
+> want to have a slon running to service the <SPAN
+CLASS="QUOTE"
+>"master"</SPAN
+> node. </P
+><P
+>Some future version (probably 1.1) may provide a way for
+<TT
+CLASS="COMMAND"
+>SYNC</TT
+> counts to be updated on the master by the stored
+function that is invoked by the table triggers. </P
+><DIV
+CLASS="QANDAENTRY"
+><DIV
+CLASS="QUESTION"
+><P
+><BIG
+><A
+NAME="AEN1172"
+></A
+><B
+>Q: I pointed a subscribing node to a different parent and it stopped replicating </B
+></BIG
+></P
+></DIV
+><DIV
+CLASS="ANSWER"
+><P
+><B
+>A: </B
+>We noticed this happening when we wanted to re-initialize a node,
+where the configuration was as follows:
+
+<P
+></P
+><UL
+><LI
+><P
+> Node 1 - master</P
+></LI
+><LI
+><P
+> Node 2 - child of node 1 - the node we're reinitializing</P
+></LI
+><LI
+><P
+> Node 3 - child of node 2 - node that should keep replicating</P
+></LI
+></UL
+> </P
+><P
+>The subscription for node 3 was changed to have node 1 as
+provider, and we did <TT
+CLASS="COMMAND"
+>DROP SET</TT
+>/<TT
+CLASS="COMMAND"
+>SUBSCRIBE SET</TT
+> for
+node 2 to get it repopulating. </P
+><P
+>Unfortunately, replication suddenly stopped to node 3. </P
+><P
+>The problem was that there was not a suitable set of <SPAN
+CLASS="QUOTE"
+>"listener paths"</SPAN
+>
+in sl_listen to allow the events from node 1 to propagate to node 3.
+The events were going through node 2, and blocking behind the
+<TT
+CLASS="COMMAND"
+>SUBSCRIBE SET</TT
+> event that node 2 was working on. </P
+><P
+>The following slonik script dropped out the listen paths where node 3
+had to go through node 2, and added in direct listens between nodes 1
+and 3.
+
+<TT
+CLASS="COMMAND"
+>cluster name = oxrslive;
+ node 1 admin conninfo='host=32.85.68.220 dbname=oxrslive user=postgres port=5432';
+ node 2 admin conninfo='host=32.85.68.216 dbname=oxrslive user=postgres port=5432';
+ node 3 admin conninfo='host=32.85.68.244 dbname=oxrslive user=postgres port=5432';
+ node 4 admin conninfo='host=10.28.103.132 dbname=oxrslive user=postgres port=5432';
+try {
+ store listen (origin = 1, receiver = 3, provider = 1);
+ store listen (origin = 3, receiver = 1, provider = 3);
+ drop listen (origin = 1, receiver = 3, provider = 2);
+ drop listen (origin = 3, receiver = 1, provider = 2);
+}</TT
+> </P
+><P
+>Immediately after this script was run, <TT
+CLASS="COMMAND"
+>SYNC</TT
+> events started propagating
+again to node 3.
+
+This points out two principles:
+<P
+></P
+><UL
+><LI
+><P
+> If you have multiple nodes, and cascaded subscribers,
+you need to be quite careful in populating the <TT
+CLASS="COMMAND"
+>STORE LISTEN</TT
+>
+entries, and in modifying them if the structure of the replication
+"tree" changes. </P
+></LI
+><LI
+><P
+> Version 1.1 probably ought to provide better tools to
+help manage this. </P
+></LI
+></UL
+> </P
+><P
+>The issues of "listener paths" are discussed further at <A
+HREF="listenpaths.html"
+> Slony Listen Paths </A
+> </P
+><DIV
+CLASS="QANDAENTRY"
+><DIV
+CLASS="QUESTION"
+><P
+><BIG
+><A
+NAME="AEN1203"
+></A
+><B
+>Q: After dropping a node, sl_log_1 isn't getting purged out anymore. </B
+></BIG
+></P
+></DIV
+><DIV
+CLASS="ANSWER"
+><P
+><B
+>A: </B
+> This is a common scenario in versions before 1.0.5, as
+the "clean up" that takes place when purging the node does not include
+purging out old entries from the Slony-I table, sl_confirm, for the
+recently departed node. </P
+><P
+> The node is no longer around to update confirmations of what
+syncs have been applied on it, and therefore the cleanup thread that
+purges log entries thinks that it can't safely delete entries newer
+than the final sl_confirm entry, which rather curtails the ability to
+purge out old logs. </P
+><P
+>Diagnosis: Run the following query to see if there are any
+"phantom/obsolete/blocking" sl_confirm entries:
+
+<TT
+CLASS="COMMAND"
+>oxrsbar=# select * from _oxrsbar.sl_confirm where con_origin not in (select no_id from _oxrsbar.sl_node) or con_received not in (select no_id from _oxrsbar.sl_node);
+ con_origin | con_received | con_seqno | con_timestamp
+------------+--------------+-----------+----------------------------
+ 4 | 501 | 83999 | 2004-11-09 19:57:08.195969
+ 1 | 2 | 3345790 | 2004-11-14 10:33:43.850265
+ 2 | 501 | 102718 | 2004-11-14 10:33:47.702086
+ 501 | 2 | 6577 | 2004-11-14 10:34:45.717003
+ 4 | 5 | 83999 | 2004-11-14 21:11:11.111686
+ 4 | 3 | 83999 | 2004-11-24 16:32:39.020194
+(6 rows)</TT
+> </P
+><P
+>In version 1.0.5, the "drop node" function purges out entries in
+sl_confirm for the departing node. In earlier versions, this needs to
+be done manually. Supposing the node number is 3, then the query
+would be:
+
+<TT
+CLASS="COMMAND"
+>delete from _namespace.sl_confirm where con_origin = 3 or con_received = 3;</TT
+> </P
+><P
+>Alternatively, to go after <SPAN
+CLASS="QUOTE"
+>"all phantoms,"</SPAN
+> you could use
+<TT
+CLASS="COMMAND"
+>oxrsbar=# delete from _oxrsbar.sl_confirm where con_origin not in (select no_id from _oxrsbar.sl_node) or con_received not in (select no_id from _oxrsbar.sl_node);
+DELETE 6</TT
+> </P
+><P
+>General "due diligence" dictates starting with a
+<TT
+CLASS="COMMAND"
+>BEGIN</TT
+>, looking at the contents of sl_confirm beforehand,
+ensuring that only the expected records are purged, and then, only
+after that, confirming the change with a <TT
+CLASS="COMMAND"
+>COMMIT</TT
+>. If you
+delete confirm entries for the wrong node, that could ruin your whole
+day. </P
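+><P
+>A sketch of that procedure, using the _oxrsbar schema from the
+example above and a departing node 3:
+
+<TT
+CLASS="COMMAND"
+>begin;
+select * from _oxrsbar.sl_confirm where con_origin = 3 or con_received = 3;
+-- inspect the output; if only rows for the departed node show up...
+delete from _oxrsbar.sl_confirm where con_origin = 3 or con_received = 3;
+commit;  -- or rollback; if anything looked wrong</TT
+> </P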
+><P
+>You'll need to run this on each node that remains... </P
+><P
+>Note that in 1.0.5, this is no longer an issue at all, as it purges unneeded entries from sl_confirm in two places:
+<P
+></P
+><UL
+><LI
+><P
+> At the time a node is dropped</P
+></LI
+><LI
+><P
+> At the start of each "cleanupEvent" run, which is the event in which old data is purged from sl_log_1 and sl_seqlog</P
+></LI
+></UL
+> </P
+><DIV
+CLASS="QANDAENTRY"
+><DIV
+CLASS="QUESTION"
+><P
+><BIG
+><A
+NAME="AEN1226"
+></A
+><B
+>Q: Replication Fails - Unique Constraint Violation
+ </B
+></BIG
+></P
+><P
+>Replication has been running for a while, successfully, when a node encounters a "glitch," and replication logs are filled with repetitions of the following:
+
+<TT
+CLASS="COMMAND"
+>DEBUG2 remoteWorkerThread_1: syncing set 2 with 5 table(s) from provider 1
+DEBUG2 remoteWorkerThread_1: syncing set 1 with 41 table(s) from provider 1
+DEBUG2 remoteWorkerThread_1: syncing set 5 with 1 table(s) from provider 1
+DEBUG2 remoteWorkerThread_1: syncing set 3 with 1 table(s) from provider 1
+DEBUG2 remoteHelperThread_1_1: 0.135 seconds delay for first row
+DEBUG2 remoteHelperThread_1_1: 0.343 seconds until close cursor
+ERROR remoteWorkerThread_1: "insert into "_oxrsapp".sl_log_1 (log_origin, log_xid, log_tableid, log_actionseq, log_cmdtype, log_cmddata) values ('1', '919151224', '34', '35090538', 'D', '_rserv_ts=''9275244''');
+delete from only public.epp_domain_host where _rserv_ts='9275244';insert into "_oxrsapp".sl_log_1 (log_origin, log_xid, log_tableid, log_actionseq, log_cmdtype, log_cmddata) values ('1', '919151224', '34', '35090539', 'D', '_rserv_ts=''9275245''');
+delete from only public.epp_domain_host where _rserv_ts='9275245';insert into "_oxrsapp".sl_log_1 (log_origin, log_xid, log_tableid, log_actionseq, log_cmdtype, log_cmddata) values ('1', '919151224', '26', '35090540', 'D', '_rserv_ts=''24240590''');
+delete from only public.epp_domain_contact where _rserv_ts='24240590';insert into "_oxrsapp".sl_log_1 (log_origin, log_xid, log_tableid, log_actionseq, log_cmdtype, log_cmddata) values ('1', '919151224', '26', '35090541', 'D', '_rserv_ts=''24240591''');
+delete from only public.epp_domain_contact where _rserv_ts='24240591';insert into "_oxrsapp".sl_log_1 (log_origin, log_xid, log_tableid, log_actionseq, log_cmdtype, log_cmddata) values ('1', '919151224', '26', '35090542', 'D', '_rserv_ts=''24240589''');
+delete from only public.epp_domain_contact where _rserv_ts='24240589';insert into "_oxrsapp".sl_log_1 (log_origin, log_xid, log_tableid, log_actionseq, log_cmdtype, log_cmddata) values ('1', '919151224', '11', '35090543', 'D', '_rserv_ts=''36968002''');
+delete from only public.epp_domain_status where _rserv_ts='36968002';insert into "_oxrsapp".sl_log_1 (log_origin, log_xid, log_tableid, log_actionseq, log_cmdtype, log_cmddata) values ('1', '919151224', '11', '35090544', 'D', '_rserv_ts=''36968003''');
+delete from only public.epp_domain_status where _rserv_ts='36968003';insert into "_oxrsapp".sl_log_1 (log_origin, log_xid, log_tableid, log_actionseq, log_cmdtype, log_cmddata) values ('1', '919151224', '24', '35090549', 'I', '(contact_id,status,reason,_rserv_ts) values (''6972897'',''64'','''',''31044208'')');
+insert into public.contact_status (contact_id,status,reason,_rserv_ts) values ('6972897','64','','31044208');insert into "_oxrsapp".sl_log_1 (log_origin, log_xid, log_tableid, log_actionseq, log_cmdtype, log_cmddata) values ('1', '919151224', '24', '35090550', 'D', '_rserv_ts=''18139332''');
+delete from only public.contact_status where _rserv_ts='18139332';insert into "_oxrsapp".sl_log_1 (log_origin, log_xid, log_tableid, log_actionseq, log_cmdtype, log_cmddata) values ('1', '919151224', '24', '35090551', 'D', '_rserv_ts=''18139333''');
+delete from only public.contact_status where _rserv_ts='18139333';" ERROR: duplicate key violates unique constraint "contact_status_pkey"
+ - qualification was:
+ERROR remoteWorkerThread_1: SYNC aborted</TT
+> </P
+><P
+>The transaction rolls back, and Slony-I tries again, and again,
+and again. The problem is with one of the <SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>last</I
+></SPAN
+> SQL statements, the
+one with <TT
+CLASS="COMMAND"
+>log_cmdtype = 'I'</TT
+>. That isn't quite obvious; what takes
+place is that Slony-I groups 10 update queries together to diminish
+the number of network round trips. </P
+></DIV
+><DIV
+CLASS="ANSWER"
+><P
+><B
+>A: </B
+></P
+><P
+> A <SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>certain</I
+></SPAN
+> cause for this has not yet been
+identified. The factors that <SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>appear</I
+></SPAN
+> to go together to contribute
+to this scenario are as follows:
+
+<P
+></P
+><UL
+><LI
+><P
+> The "glitch" seems to coincide with some sort of
+outage; it has been observed both in cases where databases were
+suffering from periodic "SIG 11" problems, where backends were falling
+over, as well as when temporary network failure seemed likely. </P
+></LI
+><LI
+><P
+> The scenario seems to involve a delete transaction having been missed by Slony-I. </P
+></LI
+></UL
+> </P
+><P
+>By the time we notice that there is a problem, the missed delete
+transaction has been cleaned out of sl_log_1, so there is no recovery
+possible. </P
+><P
+>What is necessary, at this point, is to drop the replication set
+(or even the node), and restart replication from scratch on that node. </P
+><P
+>In Slony-I 1.0.5, the handling of purges of sl_log_1 is rather
+more conservative, refusing to purge entries that haven't been
+successfully synced for at least 10 minutes on all nodes. It is not
+certain that that will prevent the "glitch" from taking place, but it
+seems likely that it will leave enough sl_log_1 data to be able to do
+something about recovering from the condition or at least diagnosing
+it more exactly. And perhaps the problem is that sl_log_1 was being
+purged too aggressively, and this will resolve the issue completely. </P
+><DIV
+CLASS="QANDAENTRY"
+><DIV
+CLASS="QUESTION"
+><P
+><BIG
+><A
+NAME="AEN1247"
+></A
+><B
+>Q: If you have a slonik script something like this, it will hang on you and never complete, because you can't have "wait for event" inside a try block. A try block is executed as one transaction, so the event that you're waiting for will never arrive.
+
+<TT
+CLASS="COMMAND"
+>try {
+ echo 'Moving set 1 to node 3';
+ lock set (id=1, origin=1);
+ echo 'Set locked';
+ wait for event (origin = 1, confirmed = 3);
+ echo 'Moving set';
+ move set (id=1, old origin=1, new origin=3);
+ echo 'Set moved - waiting for event to be confirmed by node 3';
+ wait for event (origin = 1, confirmed = 3);
+ echo 'Confirmed';
+} on error {
+ echo 'Could not move set for cluster foo';
+ unlock set (id=1, origin=1);
+ exit -1;
+}</TT
+></B
+></BIG
+></P
+></DIV
+><DIV
+CLASS="ANSWER"
+><P
+><B
+>A: </B
+> You must not invoke <TT
+CLASS="COMMAND"
+>wait for event</TT
+> inside a <SPAN
+CLASS="QUOTE"
+>"try"</SPAN
+> block. </P
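+><P
+>A sketch of a corrected script, with the waits pulled outside the try
+blocks (same hypothetical cluster, set, and node numbers as in the
+question):
+
+<TT
+CLASS="COMMAND"
+># waits moved out of the try blocks; adjust ids and nodes to your cluster
+try {
+  echo 'Moving set 1 to node 3';
+  lock set (id=1, origin=1);
+  echo 'Set locked';
+} on error {
+  echo 'Could not lock set 1';
+  exit -1;
+}
+wait for event (origin = 1, confirmed = 3);
+echo 'Moving set';
+try {
+  move set (id=1, old origin=1, new origin=3);
+} on error {
+  echo 'Could not move set for cluster foo';
+  unlock set (id=1, origin=1);
+  exit -1;
+}
+echo 'Set moved - waiting for event to be confirmed by node 3';
+wait for event (origin = 1, confirmed = 3);
+echo 'Confirmed';</TT
+> </P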
+></DIV
+></DIV
+></DIV
+></DIV
+></DIV
+></DIV
+></DIV
+></DIV
+></DIV
+></DIV
+></DIV
+></DIV
+></DIV
+></DIV
+></DIV
+></DIV
+></DIV
+></DIV
+></DIV
+></DIV
+></DIV
+></DIV
+></DIV
+></DIV
+></DIV
+></DIV
+></DIV
+></DIV
+></DIV
+></DIV
+></DIV
+></DIV
+></DIV
+></DIV
+></DIV
+></DIV
+></DIV
+></DIV
+><DIV
+CLASS="NAVFOOTER"
+><HR
+ALIGN="LEFT"
+WIDTH="100%"><TABLE
+SUMMARY="Footer navigation table"
+WIDTH="100%"
+BORDER="0"
+CELLPADDING="0"
+CELLSPACING="0"
+><TR
+><TD
+WIDTH="33%"
+ALIGN="left"
+VALIGN="top"
+><A
+HREF="x931.html"
+ACCESSKEY="P"
+>Prev</A
+></TD
+><TD
+WIDTH="34%"
+ALIGN="center"
+VALIGN="top"
+><A
+HREF="slony.html"
+ACCESSKEY="H"
+>Home</A
+></TD
+><TD
+WIDTH="33%"
+ALIGN="right"
+VALIGN="top"
+> </TD
+></TR
+><TR
+><TD
+WIDTH="33%"
+ALIGN="left"
+VALIGN="top"
+>Other Information Sources</TD
+><TD
+WIDTH="34%"
+ALIGN="center"
+VALIGN="top"
+> </TD
+><TD
+WIDTH="33%"
+ALIGN="right"
+VALIGN="top"
+> </TD
+></TR
+></TABLE
+></DIV
+></BODY
+></HTML
+>
\ No newline at end of file
--- /dev/null
+++ doc/adminguide/altperl.html
@@ -0,0 +1,590 @@
+<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN""http://www.w3.org/TR/html4/loose.dtd">
+<HTML
+><HEAD
+><TITLE
+> Slony-I Administration Scripts</TITLE
+><META
+NAME="GENERATOR"
+CONTENT="Modular DocBook HTML Stylesheet Version 1.79"><LINK
+REV="MADE"
+HREF="mailto:cbbrowne at gmail.com"><LINK
+REL="HOME"
+TITLE="Slony-I 1.1 Administration"
+HREF="slony.html"><LINK
+REL="UP"
+HREF="t24.html"><LINK
+REL="PREVIOUS"
+TITLE="Defining Slony-I Replication Sets"
+HREF="x267.html"><LINK
+REL="NEXT"
+TITLE="Slon daemons"
+HREF="slonstart.html"><LINK
+REL="STYLESHEET"
+TYPE="text/css"
+HREF="stdstyle.css"><META
+HTTP-EQUIV="Content-Type"></HEAD
+><BODY
+CLASS="SECT1"
+BGCOLOR="#FFFFFF"
+TEXT="#000000"
+LINK="#0000FF"
+VLINK="#840084"
+ALINK="#0000FF"
+><DIV
+CLASS="NAVHEADER"
+><TABLE
+SUMMARY="Header navigation table"
+WIDTH="100%"
+BORDER="0"
+CELLPADDING="0"
+CELLSPACING="0"
+><TR
+><TH
+COLSPAN="3"
+ALIGN="center"
+>Slony-I 1.1 Administration</TH
+></TR
+><TR
+><TD
+WIDTH="10%"
+ALIGN="left"
+VALIGN="bottom"
+><A
+HREF="x267.html"
+ACCESSKEY="P"
+>Prev</A
+></TD
+><TD
+WIDTH="80%"
+ALIGN="center"
+VALIGN="bottom"
+></TD
+><TD
+WIDTH="10%"
+ALIGN="right"
+VALIGN="bottom"
+><A
+HREF="slonstart.html"
+ACCESSKEY="N"
+>Next</A
+></TD
+></TR
+></TABLE
+><HR
+ALIGN="LEFT"
+WIDTH="100%"></DIV
+><DIV
+CLASS="SECT1"
+><H1
+CLASS="SECT1"
+><A
+NAME="ALTPERL"
+>8. Slony-I Administration Scripts</A
+></H1
+><P
+>In the "altperl" directory in the CVS tree, there is a sizable set of Perl scripts that may be used to administer a set of Slony-I instances, which support having arbitrary numbers of nodes. </P
+><P
+>Most of them generate Slonik scripts that are then passed on to the slonik utility to be submitted to all of the Slony-I nodes in a particular cluster. At one time, the scripts also ran slonik on the generated output themselves. Unfortunately, that turned out to be a pretty large calibre "foot gun," as minor typos on the command line led, on a couple of occasions, to pretty calamitous actions, so the behaviour has been changed so that the scripts simply write their output to standard output. An administrator should review the generated script before submitting it to slonik. </P
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN314"
+>8.1. Node/Cluster Configuration - cluster.nodes</A
+></H2
+><P
+>The UNIX environment variable <CODE
+CLASS="ENVAR"
+>SLONYNODES</CODE
+> is used to determine what Perl configuration file will be used to control the shape of the nodes in a Slony-I cluster. </P
+><P
+>What variables are set up...
+<P
+></P
+><UL
+><LI
+><P
+> <CODE
+CLASS="ENVAR"
+>$SETNAME</CODE
+>=orglogs; # What is the name of the replication set? </P
+></LI
+><LI
+><P
+> <CODE
+CLASS="ENVAR"
+>$LOGDIR</CODE
+>='/opt/OXRS/log/LOGDBS'; # What is the base directory for logs? </P
+></LI
+><LI
+><P
+> <CODE
+CLASS="ENVAR"
+>$SLON_BIN_PATH</CODE
+>='/opt/dbs/pgsql74/bin'; # Where to look for slony binaries </P
+></LI
+><LI
+><P
+> <CODE
+CLASS="ENVAR"
+>$APACHE_ROTATOR</CODE
+>="/opt/twcsds004/OXRS/apache/rotatelogs"; # If set, where to find Apache log rotator</P
+></LI
+></UL
+> </P
+><P
+>You then define the set of nodes that are to be replicated using a set of calls to <CODE
+CLASS="FUNCTION"
+>add_node()</CODE
+>.</P
+><P
+><TT
+CLASS="COMMAND"
+> add_node (host => '10.20.30.40', dbname => 'orglogs', port => 5437,
+ user => 'postgres', node => 4, parent => 1);</TT
+></P
+><P
+>The set of parameters for <CODE
+CLASS="FUNCTION"
+>add_node()</CODE
+> are thus: </P
+><P
+> <P
+CLASS="LITERALLAYOUT"
+><TT
+CLASS="LITERAL"
+>my %PARAMS = (host=> undef, # Host name
+ dbname => 'template1', # database name
+ port => 5432, # Port number
+ user => 'postgres', # user to connect as
+ node => undef, # node number
+ password => undef, # password for user
+ parent => 1, # which node is parent to this node
+ noforward => undef # shall this node be set up to forward results?
+);</TT
+></P
+></P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN342"
+>8.2. Set configuration - cluster.set1, cluster.set2</A
+></H2
+><P
+>The UNIX environment variable <CODE
+CLASS="ENVAR"
+>SLONYSET</CODE
+> is used to determine what Perl configuration file will be used to determine what objects will be contained in a particular replication set. </P
+><P
+>Unlike <CODE
+CLASS="ENVAR"
+>SLONYNODES</CODE
+>, which is essential for <SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>all</I
+></SPAN
+> of the slonik-generating scripts, this only needs to be set when running <TT
+CLASS="FILENAME"
+>create_set.pl</TT
+>, as that is the only script used to control what tables will be in a particular replication set. </P
+><P
+>What variables are set up...
+<P
+></P
+><UL
+><LI
+><P
+> $TABLE_ID = 44; Each table must be identified by a unique number; this variable controls where numbering starts</P
+></LI
+><LI
+><P
+> @PKEYEDTABLES An array of names of tables to be replicated that have a defined primary key so that Slony-I can automatically select its key</P
+></LI
+><LI
+><P
+> %KEYEDTABLES A hash table of tables to be replicated, where the hash index is the table name, and the hash value is the name of a unique not null index suitable as a "candidate primary key."</P
+></LI
+><LI
+><P
+> @SERIALTABLES An array of names of tables to be replicated that have no candidate for primary key. Slony-I will add a key field based on a sequence that Slony-I generates</P
+></LI
+><LI
+><P
+> @SEQUENCES An array of names of sequences that are to be replicated</P
+></LI
+></UL
+> </P
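+><P
+>A minimal sketch of such a file (the table and sequence names below
+are entirely hypothetical):
+
+<TT
+CLASS="COMMAND"
+># hypothetical table and sequence names
+$TABLE_ID = 1;   # table numbering for this set starts here
+@PKEYEDTABLES = ("public.customers", "public.orders");
+%KEYEDTABLES = ("public.order_lines" => "order_lines_altkey");
+@SERIALTABLES = ("public.audit_trail");
+@SEQUENCES = ("public.orders_id_seq");</TT
+> </P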
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN362"
+>8.3. build_env.pl</A
+></H2
+><P
+>Queries a database, generating output hopefully suitable for
+<TT
+CLASS="FILENAME"
+>slon.env</TT
+> consisting of:
+<P
+></P
+><UL
+><LI
+><P
+> a set of <CODE
+CLASS="FUNCTION"
+>add_node()</CODE
+> calls to configure the cluster</P
+></LI
+><LI
+><P
+> The arrays <CODE
+CLASS="ENVAR"
+>@KEYEDTABLES</CODE
+>, <CODE
+CLASS="ENVAR"
+>@SERIALTABLES</CODE
+>, and <CODE
+CLASS="ENVAR"
+>@SEQUENCES</CODE
+></P
+></LI
+></UL
+> </P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN375"
+>8.4. create_set.pl</A
+></H2
+><P
+>This requires <CODE
+CLASS="ENVAR"
+>SLONYSET</CODE
+> to be set as well as <CODE
+CLASS="ENVAR"
+>SLONYNODES</CODE
+>; it is used to
+generate the Slonik script to set up a replication set consisting of a
+set of tables and sequences that are to be replicated. </P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN380"
+>8.5. drop_node.pl</A
+></H2
+><P
+>Generates Slonik script to drop a node from a Slony-I cluster. </P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN383"
+>8.6. drop_set.pl</A
+></H2
+><P
+>Generates Slonik script to drop a replication set (<SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>e.g.</I
+></SPAN
+> - set of tables and sequences) from a Slony-I cluster. </P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN387"
+>8.7. failover.pl</A
+></H2
+><P
+>Generates Slonik script to request failover from a dead node to some new origin </P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN390"
+>8.8. init_cluster.pl</A
+></H2
+><P
+>Generates Slonik script to initialize a whole Slony-I cluster,
+including setting up the nodes, communications paths, and the listener
+routing. </P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN393"
+>8.9. merge_sets.pl</A
+></H2
+><P
+>Generates Slonik script to merge two replication sets together. </P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN396"
+>8.10. move_set.pl</A
+></H2
+><P
+>Generates Slonik script to move the origin of a particular set to a different node. </P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN399"
+>8.11. replication_test.pl</A
+></H2
+><P
+>Script to test whether Slony-I is successfully replicating data. </P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN402"
+>8.12. restart_node.pl</A
+></H2
+><P
+>Generates Slonik script to request the restart of a node. This was
+particularly useful pre-1.0.5 when nodes could get snarled up when
+slon daemons died. </P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN405"
+>8.13. restart_nodes.pl</A
+></H2
+><P
+>Generates Slonik script to restart all nodes in the cluster. Not
+particularly useful... </P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN408"
+>8.14. show_configuration.pl</A
+></H2
+><P
+>Displays an overview of how the environment (e.g. - <CODE
+CLASS="ENVAR"
+>SLONYNODES</CODE
+>) is set
+to configure things. </P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN412"
+>8.15. slon_kill.pl</A
+></H2
+><P
+>Kills slony watchdog and all slon daemons for the specified set. It
+only works if those processes are running on the local host, of
+course! </P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN415"
+>8.16. slon_pushsql.pl</A
+></H2
+><P
+>Generates Slonik script to push DDL changes to a replication set. </P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN418"
+>8.17. slon_start.pl</A
+></H2
+><P
+>This starts a slon daemon for the specified cluster and node, and uses
+slon_watchdog.pl to keep it running. </P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN421"
+>8.18. slon_watchdog.pl</A
+></H2
+><P
+>Used by slon_start.pl... </P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN424"
+>8.19. slon_watchdog2.pl</A
+></H2
+><P
+>This is a somewhat smarter watchdog; it monitors a particular Slony-I
+node, and restarts the slon process if it hasn't seen updates applied in
+20 minutes or more. </P
+><P
+>This is helpful if there is an unreliable network connection such that
+the slon sometimes stops working without becoming aware of it... </P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN428"
+>8.20. subscribe_set.pl</A
+></H2
+><P
+>Generates Slonik script to subscribe a particular node to a particular replication set. </P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN431"
+>8.21. uninstall_nodes.pl</A
+></H2
+><P
+>This goes through and drops the Slony-I schema from each node; use
+this if you want to destroy replication throughout a cluster. This is
+a VERY unsafe script! </P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN434"
+>8.22. unsubscribe_set.pl</A
+></H2
+><P
+>Generates Slonik script to unsubscribe a node from a replication set.
+ </P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN437"
+>8.23. update_nodes.pl</A
+></H2
+><P
+>Generates Slonik script to tell all the nodes to update the Slony-I
+functions. This will typically be needed when you upgrade from one
+version of Slony-I to another. </P
+></DIV
+></DIV
+><DIV
+CLASS="NAVFOOTER"
+><HR
+ALIGN="LEFT"
+WIDTH="100%"><TABLE
+SUMMARY="Footer navigation table"
+WIDTH="100%"
+BORDER="0"
+CELLPADDING="0"
+CELLSPACING="0"
+><TR
+><TD
+WIDTH="33%"
+ALIGN="left"
+VALIGN="top"
+><A
+HREF="x267.html"
+ACCESSKEY="P"
+>Prev</A
+></TD
+><TD
+WIDTH="34%"
+ALIGN="center"
+VALIGN="top"
+><A
+HREF="slony.html"
+ACCESSKEY="H"
+>Home</A
+></TD
+><TD
+WIDTH="33%"
+ALIGN="right"
+VALIGN="top"
+><A
+HREF="slonstart.html"
+ACCESSKEY="N"
+>Next</A
+></TD
+></TR
+><TR
+><TD
+WIDTH="33%"
+ALIGN="left"
+VALIGN="top"
+>Defining Slony-I Replication Sets</TD
+><TD
+WIDTH="34%"
+ALIGN="center"
+VALIGN="top"
+><A
+HREF="t24.html"
+ACCESSKEY="U"
+>Up</A
+></TD
+><TD
+WIDTH="33%"
+ALIGN="right"
+VALIGN="top"
+>Slon daemons</TD
+></TR
+></TABLE
+></DIV
+></BODY
+></HTML
+>
\ No newline at end of file
--- /dev/null
+++ doc/adminguide/addthings.html
@@ -0,0 +1,205 @@
+<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN""http://www.w3.org/TR/html4/loose.dtd">
+<HTML
+><HEAD
+><TITLE
+> Adding Things to Replication</TITLE
+><META
+NAME="GENERATOR"
+CONTENT="Modular DocBook HTML Stylesheet Version 1.79"><LINK
+REV="MADE"
+HREF="mailto:cbbrowne at gmail.com"><LINK
+REL="HOME"
+TITLE="Slony-I 1.1 Administration"
+HREF="slony.html"><LINK
+REL="UP"
+HREF="t24.html"><LINK
+REL="PREVIOUS"
+TITLE=" Slony Listen Paths"
+HREF="listenpaths.html"><LINK
+REL="NEXT"
+TITLE=" Dropping things from Slony Replication"
+HREF="dropthings.html"><LINK
+REL="STYLESHEET"
+TYPE="text/css"
+HREF="stdstyle.css"><META
+HTTP-EQUIV="Content-Type"></HEAD
+><BODY
+CLASS="SECT1"
+BGCOLOR="#FFFFFF"
+TEXT="#000000"
+LINK="#0000FF"
+VLINK="#840084"
+ALINK="#0000FF"
+><DIV
+CLASS="NAVHEADER"
+><TABLE
+SUMMARY="Header navigation table"
+WIDTH="100%"
+BORDER="0"
+CELLPADDING="0"
+CELLSPACING="0"
+><TR
+><TH
+COLSPAN="3"
+ALIGN="center"
+>Slony-I 1.1 Administration</TH
+></TR
+><TR
+><TD
+WIDTH="10%"
+ALIGN="left"
+VALIGN="bottom"
+><A
+HREF="listenpaths.html"
+ACCESSKEY="P"
+>Prev</A
+></TD
+><TD
+WIDTH="80%"
+ALIGN="center"
+VALIGN="bottom"
+></TD
+><TD
+WIDTH="10%"
+ALIGN="right"
+VALIGN="bottom"
+><A
+HREF="dropthings.html"
+ACCESSKEY="N"
+>Next</A
+></TD
+></TR
+></TABLE
+><HR
+ALIGN="LEFT"
+WIDTH="100%"></DIV
+><DIV
+CLASS="SECT1"
+><H1
+CLASS="SECT1"
+><A
+NAME="ADDTHINGS"
+>17. Adding Things to Replication</A
+></H1
+><P
+>You may discover that you have missed replicating things that
+you wish you were replicating. </P
+><P
+>This can be fairly easily remedied. </P
+><P
+>You cannot directly use <TT
+CLASS="COMMAND"
+>SET ADD TABLE</TT
+> or <TT
+CLASS="COMMAND"
+>SET
+ADD SEQUENCE</TT
+> in order to add tables and sequences to a replication
+set that is presently replicating; you must instead create a new
+replication set. Once it is identically subscribed (e.g. - the set of
+subscribers is <SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>identical</I
+></SPAN
+> to that for the set it is to merge
+with), the sets may be merged together using <TT
+CLASS="COMMAND"
+>MERGE SET</TT
+>. </P
+><P
+>Up to and including 1.0.2, there is a potential problem where if
+<TT
+CLASS="COMMAND"
+>MERGE_SET</TT
+> is issued when other subscription-related events
+are pending, it is possible for things to get pretty confused on the
+nodes where other things were pending. This problem was resolved in
+1.0.5. </P
+><P
+>It is suggested that you be very deliberate when adding such
+things. For instance, submitting multiple subscription requests for a
+particular set in one Slonik script often turns out quite badly. If
+it is truly necessary to automate this, you'll probably want to submit
+<TT
+CLASS="COMMAND"
+>WAIT FOR EVENT</TT
+> requests in between subscription requests in
+order that the Slonik script wait for one subscription to complete
+processing before requesting the next one. </P
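+><P
+>A minimal sketch of that pattern (the set and node numbers are
+hypothetical; adjust to your cluster):
+
+<TT
+CLASS="COMMAND"
+># hypothetical set and node numbers
+subscribe set (id = 999, provider = 1, receiver = 2, forward = yes);
+wait for event (origin = 1, confirmed = 2);
+subscribe set (id = 999, provider = 1, receiver = 3, forward = yes);
+wait for event (origin = 1, confirmed = 3);</TT
+> </P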
+><P
+>But in general, it is likely to be easier to cope with complex
+node reconfigurations by making sure that one change has been
+successfully processed before going on to the next. It's way easier
+to fix one thing that has broken than the interaction of five things
+that have broken. </P
+></DIV
+><DIV
+CLASS="NAVFOOTER"
+><HR
+ALIGN="LEFT"
+WIDTH="100%"><TABLE
+SUMMARY="Footer navigation table"
+WIDTH="100%"
+BORDER="0"
+CELLPADDING="0"
+CELLSPACING="0"
+><TR
+><TD
+WIDTH="33%"
+ALIGN="left"
+VALIGN="top"
+><A
+HREF="listenpaths.html"
+ACCESSKEY="P"
+>Prev</A
+></TD
+><TD
+WIDTH="34%"
+ALIGN="center"
+VALIGN="top"
+><A
+HREF="slony.html"
+ACCESSKEY="H"
+>Home</A
+></TD
+><TD
+WIDTH="33%"
+ALIGN="right"
+VALIGN="top"
+><A
+HREF="dropthings.html"
+ACCESSKEY="N"
+>Next</A
+></TD
+></TR
+><TR
+><TD
+WIDTH="33%"
+ALIGN="left"
+VALIGN="top"
+>Slony Listen Paths</TD
+><TD
+WIDTH="34%"
+ALIGN="center"
+VALIGN="top"
+><A
+HREF="t24.html"
+ACCESSKEY="U"
+>Up</A
+></TD
+><TD
+WIDTH="33%"
+ALIGN="right"
+VALIGN="top"
+>Dropping things from Slony Replication</TD
+></TR
+></TABLE
+></DIV
+></BODY
+></HTML
+>
\ No newline at end of file
Index: legal.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/legal.sgml,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/legal.sgml -Ldoc/adminguide/legal.sgml -u -w -r1.1 -r1.2
--- doc/adminguide/legal.sgml
+++ doc/adminguide/legal.sgml
@@ -41,6 +41,12 @@
SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.
</para>
+ <para> Note that <trademark/UNIX/ is a registered trademark of The
+ Open Group. <trademark/Windows/ is a registered trademark of
+ Microsoft Corporation in the United States and other countries.
+ <trademark/Solaris/ is a registered trademark of Sun Microsystems,
+ Inc. <trademark/Linux/ is a trademark of Linus Torvalds.
+
</legalnotice>
<!-- Keep this comment at the end of the file
Index: SlonySlonConfiguration.txt
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/SlonySlonConfiguration.txt,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/SlonySlonConfiguration.txt -Ldoc/adminguide/SlonySlonConfiguration.txt -u -w -r1.1 -r1.2
--- doc/adminguide/SlonySlonConfiguration.txt
+++ doc/adminguide/SlonySlonConfiguration.txt
@@ -1,61 +1 @@
-%META:TOPICINFO{author="guest" date="1097952716" format="1.0" version="1.6"}%
-%META:TOPICPARENT{name="SlonyIAdministration"}%
-Slon parameters:
-
-usage: slon [options] clustername conninfo
-
-<verbatim>
-Options:
--d <debuglevel> verbosity of logging (1..8)
--s <milliseconds> SYNC check interval (default 10000)
--t <milliseconds> SYNC interval timeout (default 60000)
--g <num> maximum SYNC group size (default 6)
--c <num> how often to vacuum in cleanup cycles
--p <filename> slon pid file
--f <filename> slon configuration file
-</verbatim>
-
-*-d*
-<verbatim>
-The eight levels of logging are:
-- Error
-- Warn
-- Config
-- Info
-- Debug1
-- Debug2
-- Debug3
-- Debug4
-</verbatim>
-
-*-s*
-
-A SYNC event will be sent at least this often, regardless of whether update activity is detected.
-
-Short sync times keep the master on a "short leash," updating the slaves more frequently. If you have replicated sequences that are frequently updated _without_ there being tables that are affected, this keeps there from being times when only sequences are updated, and therefore _no_ syncs take place.
-
-Longer sync times allow there to be fewer events, which allows somewhat better efficiency.
-
-*-t*
-
-The time before the SYNC check interval times out.
-
-*-g*
-
-Number of SYNC events to try to cram together. The default is 6, which is probably suitable for small systems that can devote only very limited bits of memory to slon. If you have plenty of memory, it would be reasonable to increase this, as it will increase the amount of work done in each transaction, and will allow a subscriber that is behind by a lot to catch up more quickly.
-
-Slon processes usually stay pretty small; even with large value for this option, slon would be expected to only grow to a few MB in size.
-
-*-c*
-
-How often to vacuum (_e.g._ - how many cleanup cycles to run before vacuuming).
-
-Set this to zero to disable slon-initiated vacuuming. If you are using something like pg_autovacuum to initiate vacuums, you may not need for slon to initiate vacuums itself. If you are not, there are some tables Slony-I uses that collect a LOT of dead tuples that should be vacuumed frequently.
-
-*-p*
-
-The location of the PID file for the slon process.
-
-*-f*
-
-The location of the slon configuration file.
+Moved to SGML
Index: reshape.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/reshape.sgml,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/reshape.sgml -Ldoc/adminguide/reshape.sgml -u -w -r1.1 -r1.2
--- doc/adminguide/reshape.sgml
+++ doc/adminguide/reshape.sgml
@@ -1,4 +1,4 @@
-<article id="reshape"> <title/Reshaping a Cluster/
+<sect1 id="reshape"> <title/Reshaping a Cluster/
<para>If you rearrange the nodes so that they serve different purposes, this will likely lead to the subscribers changing a bit.
@@ -14,7 +14,6 @@
</itemizedlist>
<para> The "altperl" toolset includes a "init_cluster.pl" script that is quite up to the task of creating the new SET LISTEN commands; it isn't smart enough to know what listener paths should be dropped.
-</article>
<!-- Keep this comment at the end of the file
Local variables:
mode:sgml
Index: startslons.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/startslons.sgml,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/startslons.sgml -Ldoc/adminguide/startslons.sgml -u -w -r1.1 -r1.2
--- doc/adminguide/startslons.sgml
+++ doc/adminguide/startslons.sgml
@@ -1,26 +1,69 @@
-<article id="slonstart"> <title/Slon daemons/
+<sect1 id="slonstart"> <title/Slon daemons/
-<para>The programs that actually perform Slony-I replication are the "slon" daemons.
+<para>The programs that actually perform Slony-I replication are the
+<application/slon/ daemons.
+
+<para>You need to run one <application/slon/ instance for each node in
+a Slony-I cluster, whether you consider that node a <quote/master/ or
+a <quote/slave./ Since a <command/MOVE SET/ or <command/FAILOVER/ can
+switch the roles of nodes, slon needs to be able to function for both
+providers and subscribers. It is not essential that these daemons run
+on any particular host, but there are some principles worth
+considering:
-<para>You need to run one "slon" instance for each node in a Slony-I cluster, whether you consider that node a "master" or a "slave." Since a MOVE SET or FAILOVER can switch the roles of nodes, slon needs to be able to function for both providers and subscribers. It is not essential that these daemons run on any particular host, but there are some principles worth considering:
<itemizedlist>
-<listitem><Para> Each slon needs to be able to communicate quickly with the database whose "node controller" it is. Therefore, if a Slony-I cluster runs across some form of Wide Area Network, each slon process should run on or nearby the databases each is controlling. If you break this rule, no particular disaster should ensue, but the added latency introduced to monitoring events on the slon's "own node" will cause it to replicate in a _somewhat_ less timely manner.
-<listitem><Para> The fastest results would be achieved by having each slon run on the database server that it is servicing. If it runs somewhere within a fast local network, performance will not be noticeably degraded.
+<listitem><Para> Each slon needs to be able to communicate quickly
+with the database whose <quote/node controller/ it is. Therefore, if
+a Slony-I cluster runs across some form of Wide Area Network, each
+slon process should run on or nearby the databases each is
+controlling. If you break this rule, no particular disaster should
+ensue, but the added latency introduced to monitoring events on the
+slon's <quote/own node/ will cause it to replicate in a
+<emphasis/somewhat/ less timely manner.
+
+<listitem><Para> The fastest results would be achieved by having each
+slon run on the database server that it is servicing. If it runs
+somewhere within a fast local network, performance will not be
+noticeably degraded.
+
+<listitem><Para> It is an attractive idea to run many of the
+<application/slon/ processes for a cluster on one machine, as this
+makes it easy to monitor them both in terms of log files and process
+tables from one location. This eliminates the need to login to
+several hosts in order to look at log files or to restart <application/slon/
+instances.
-<listitem><Para> It is an attractive idea to run many of the slon processes for a cluster on one machine, as this makes it easy to monitor them both in terms of log files and process tables from one location. This eliminates the need to login to several hosts in order to look at log files or to restart slon instances.
</itemizedlist>
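+
+<para>A typical deployment thus runs one <application/slon/ per node,
+invoked along the lines of the following sketch (the cluster name and
+conninfo are hypothetical): <command>slon -d 2 mycluster 'dbname=mydb
+host=node1 user=slony'</command>.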
-<para>There are two "watchdog" scripts currently available:
+<para>There are two <quote/watchdog/ scripts currently available:
+
<itemizedlist>
-<listitem><Para> tools/altperl/slon_watchdog.pl - an "early" version that basically wraps a loop around the invocation of slon, restarting any time it falls over
-<listitem><Para> tools/altperl/slon_watchdog2.pl - a somewhat more intelligent version that periodically polls the database, checking to see if a SYNC has taken place recently. We have had VPN connections that occasionally fall over without signalling the application, so that the slon stops working, but doesn't actually die; this polling accounts for that...
+<listitem><Para> <filename>tools/altperl/slon_watchdog.pl</filename> -
+an <quote/early/ version that basically wraps a loop around the
+invocation of <application/slon/, restarting any time it falls over
+
+<listitem><Para> <filename>tools/altperl/slon_watchdog2.pl</filename>
+- a somewhat more intelligent version that periodically polls the
+database, checking to see if a <command/SYNC/ has taken place
+recently. We have had VPN connections that occasionally fall over
+without signalling the application, so that the <application/slon/
+stops working, but doesn't actually die; this polling addresses that
+issue.
+
</itemizedlist>
-<para>The <filename/slon_watchdog2.pl/ script is probably <emphasis/usually/ the preferable thing to run. It was at one point not preferable to run it whilst subscribing a very large replication set where it is expected to take many hours to do the initial <command/COPY SET/. The problem that came up in that case was that it figured that since it hasn't done a <command/SYNC/ in 2 hours, something was broken requiring restarting slon, thereby restarting the <command/COPY SET/ event. More recently, the script has been changed to detect <command/COPY SET/ in progress.
+<para>The <filename/slon_watchdog2.pl/ script is probably
+<emphasis/usually/ the preferable thing to run. It was at one point
+not preferable to run it whilst subscribing a very large replication
+set where it is expected to take many hours to do the initial
+<command/COPY SET/. The problem that came up in that case was that it
+figured that since it hadn't done a <command/SYNC/ in 2 hours,
+something was broken requiring restarting slon, thereby restarting the
+<command/COPY SET/ event. More recently, the script has been changed
+to detect <command/COPY SET/ in progress.
-</article>
<!-- Keep this comment at the end of the file
Local variables:
mode:sgml
Index: intro.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/intro.sgml,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/intro.sgml -Ldoc/adminguide/intro.sgml -u -w -r1.1 -r1.2
--- doc/adminguide/intro.sgml
+++ doc/adminguide/intro.sgml
@@ -1,90 +1,147 @@
-<article id="slonyintroduction"> <title/Introduction to Slony-I/
+<sect1 id="introduction"> <title>Introduction to Slony-I</title>
-<sect1> <title>Why yet another replication system? </title>
+<sect2> <title>Why yet another replication system? </title>
<para>Slony-I was born from an idea to create a replication system that was not tied
to a specific version of PostgreSQL, which is allowed to be started and stopped on
-an existing database with out the need for a dump/reload cycle.
+an existing database without the need for a dump/reload cycle.</para></sect2>
-<sect1> <title/What Slony-I is/
+<sect2> <title>What Slony-I is</title>
-<para>Slony-I is a <quote/master to multiple slaves/ replication
+<para>Slony-I is a <quote>master to multiple slaves</quote> replication
system supporting cascading and slave promotion. The big picture for
the development of Slony-I is as a master-slave system that includes
all features and capabilities needed to replicate large databases to a
-reasonably limited number of slave systems. "Reasonable," in this
+reasonably limited number of slave systems. <quote>Reasonable,</quote> in this
context, is probably no more than a few dozen servers. If the number
of servers grows beyond that, the cost of communications becomes
-prohibitively high.
+prohibitively high.</para>
-<!-- See also <link linkend="SlonyListenerCosts"> SlonyListenerCosts </link> -->
+<para> See also <link linkend="slonylistenercosts"> SlonyListenerCosts
+</link> for a further analysis.</para>
<para> Slony-I is a system intended for data centers and backup sites,
where the normal mode of operation is that all nodes are available all
the time, and where all nodes can be secured. If you have nodes that
are likely to regularly drop onto and off of the network, or have
nodes that cannot be kept secure, Slony-I may not be the ideal
-replication solution for you.
+replication solution for you.</para>
-<para> There are plans for a <quote/file-based log shipping/ extension where updates would be serialized into files. Given that, log files could be distributed by any means desired without any need of feedback between the provider node and those nodes subscribing via "log shipping."
+<para> There are plans for a <quote>file-based log shipping</quote>
+extension where updates would be serialized into files. Given that,
+log files could be distributed by any means desired without any need
+of feedback between the provider node and those nodes subscribing via
+<quote>log shipping.</quote></para></sect2>
-<sect1><title/ Slony-I is not/
+<sect2><title> Slony-I is not</title>
-<para>Slony-I is not a network management system.
+<para>Slony-I is not a network management system.</para>
<para> Slony-I does not have any functionality within it to detect a
node failure, or automatically promote a node to a master or other
-data origin.
+data origin.</para>
<para>Slony-I is not multi-master; it's not a connection broker, and
-it doesn't make you coffee and toast in the morning.
+it doesn't make you coffee and toast in the morning.</para>
<para>(That being said, the plan is for a subsequent system, Slony-II,
to provide "multimaster" capabilities, and be "bootstrapped" using
Slony-I. But that is a separate project, and expectations for Slony-I
-should not be based on hopes for future projects.)
+should not be based on hopes for future projects.)</para></sect2>
-<sect1><title> Why doesn't Slony-I do automatic fail-over/promotion?
+<sect2><title> Why doesn't Slony-I do automatic fail-over/promotion?
</title>
<para>This is the job of network monitoring software, not Slony.
Every site's configuration and fail-over path is different. For
example, keep-alive monitoring with redundant NIC's and intelligent HA
switches that guarantee race-condition-free takeover of a network
-address and disconnecting the "failed" node vary in every network
-setup, vendor choice, hardware/software combination. This is clearly
-the realm of network management software and not Slony-I.
+address and disconnecting the <quote>failed</quote> node vary in every
+network setup, vendor choice, hardware/software combination. This is
+clearly the realm of network management software and not
+Slony-I.</para>
-<para>Let Slony-I do what it does best: provide database replication.
+<para>Let Slony-I do what it does best: provide database replication.</para></sect2>
-<sect1><title/ Current Limitations/
+<sect2><title> Current Limitations</title>
<para>Slony-I does not automatically propagate schema changes, nor
does it have any ability to replicate large objects. There is a
single common reason for these limitations, namely that Slony-I
operates using triggers, and neither schema changes nor large object
operations can raise triggers suitable to tell Slony-I when those
-kinds of changes take place.
+kinds of changes take place.</para>
<para>There is a capability for Slony-I to propagate DDL changes if
-you submit them as scripts via the slonik <command/EXECUTE SCRIPT/
-operation. That is not "automatic;" you have to construct an SQL DDL
-script and submit it.
+you submit them as scripts via the <application>slonik</application>
+<command>EXECUTE SCRIPT</command> operation. That is not
+<quote>automatic;</quote> you have to construct an SQL DDL script and submit
+it.</para>
<para>If you have those sorts of requirements, it may be worth
-exploring the use of PostgreSQL 8.0 PITR (Point In Time Recovery),
-where WAL logs are replicated to remote nodes. Unfortunately, that
-has two attendant limitations:
+exploring the use of <application>PostgreSQL</application> 8.0 PITR (Point In Time
+Recovery), where <acronym>WAL</acronym> logs are replicated to remote
+nodes. Unfortunately, that has two attendant limitations:
<itemizedlist>
- <listitem><para> PITR replicates <emphasis/all/ changes in <emphasis/all/ databases; you cannot exclude data that isn't relevant;
- <listitem><para> A PITR replica remains dormant until you apply logs and start up the database. You cannot use the database and apply updates simultaneously. It is like having a <quote/standby server/ which cannot be used without it ceasing to be <quote/standby./
-</itemizedlist>
+<listitem><para> PITR replicates <emphasis>all</emphasis> changes in
+<emphasis>all</emphasis> databases; you cannot exclude data that isn't
+relevant;</para></listitem>
+
+<listitem><para> A PITR replica remains dormant until you apply logs
+and start up the database. You cannot use the database and apply
+updates simultaneously. It is like having a <quote>standby
+server</quote> which cannot be used without it ceasing to be
+<quote>standby.</quote></para></listitem>
+
+</itemizedlist></para>
+
+<para>There are a number of distinct models for database replication;
+it is impossible for one replication system to be all things to all
+people.</para></sect2>
+
+<sect2 id="slonylistenercosts"><title> Slony-I Communications
+Costs</title>
+
+<para>The cost of communications grows in a quadratic fashion in
+several directions as the number of replication nodes in a cluster
+increases. Note the following relationships:
-<para>There are a number of distinct models for database replication; it is impossible for one replication system to be all things to all people.
+<itemizedlist>
+
+<listitem><para> It is necessary to have a sl_path entry allowing
+connection from each node to every other node. Most will normally not
+need to be used for a given replication configuration, but this means
+that there needs to be n(n-1) paths. It is probable that there will
+be considerable repetition of entries, since the path to "node n" is
+likely to be the same from everywhere in the network.</para></listitem>
+
+<listitem><para> It is similarly necessary to have a sl_listen entry
+indicating how data flows from every node to every other node. This
+again requires configuring n(n-1) "listener paths."</para></listitem>
+
+<listitem><para> Each SYNC applied needs to be reported back to all of
+the other nodes participating in the set so that the nodes all know
+that it is safe to purge sl_log_1 and sl_log_2 data, as any
+<quote>forwarding</quote> node could potentially take over as <quote>master</quote>
+at any time. One might expect SYNC messages to need to travel through
+n/2 nodes to get propagated to their destinations; this means that
+each SYNC is expected to get transmitted n(n/2) times. Again, this
+points to a quadratic growth in communications costs as the number of
+nodes increases.</para></listitem>
+
+</itemizedlist></para>
+
+<para>This means that clusters with many nodes quickly build up an
+unwieldy communications network. Up to a half dozen nodes seems
+reasonable; each time the number of nodes doubles, communications
+overhead can be expected to roughly quadruple.</para>
+</sect2>
+</sect1>
<!-- Keep this comment at the end of the file
Local variables:
mode:sgml
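To make the quadratic growth described in the hunk above concrete, here is a minimal shell sketch (the node counts are arbitrary examples) that evaluates the n(n-1) path count and the roughly n*(n/2) SYNC transmissions:

    # Show how path counts and SYNC transmissions grow as the node count doubles.
    # The formulas n(n-1) and n*(n/2) come straight from the text above.
    for n in 3 6 12; do
        echo "nodes=$n  paths=$((n * (n - 1)))  sync transmissions=$((n * n / 2))"
    done

Doubling from 6 to 12 nodes takes the path count from 30 to 132 and the SYNC transmissions from 18 to 72, roughly the four-fold growth the text warns about.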
Index: SlonyHelp.txt
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/SlonyHelp.txt,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/SlonyHelp.txt -Ldoc/adminguide/SlonyHelp.txt -u -w -r1.1 -r1.2
--- doc/adminguide/SlonyHelp.txt
+++ doc/adminguide/SlonyHelp.txt
@@ -1,14 +1 @@
-%META:TOPICINFO{author="guest" date="1099494380" format="1.0" version="1.5"}%
-%META:TOPICPARENT{name="SlonyIAdministration"}%
-If you are having problems with Slony-I, you have several options for help:
-
- * [[http://slony.info/][http://slony.info/]] - the official "home" of Slony
- * Documentation on the Slony-I Site- Check the documentation on the Slony website: [[http://gborg.postgresql.org/project/slony1/genpage.php?howto_idx][http://gborg.postgresql.org/project/slony1/genpage.php?howto_idx]]
- * Other Documentation - There are several articles here ([[http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php#Replication][http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php#Replication]] that may be helpful.
- * IRC - There are usually some people on #slony on irc.freenode.net who may be able to answer some of your questions. There is also a bot named "rtfm_please" that you may want to chat with.
- * Mailing lists - The answer to your problem may exist in the Slony1-general mailing list archives, or you may choose to ask your question on the Slony1-general mailing list. The mailing list archives, and instructions for joining the list may be found here: [[http://gborg.postgresql.org/mailman/listinfo/slony1][http://gborg.postgresql.org/mailman/listinfo/slony1]]
- * If your Russian is much better than your English, then [[http://kirov.lug.ru/wiki/Slony][KirovOpenSourceCommunity: Slony]] may be the place to go
-
----+++ Other Information Sources
-
- * [[http://comstar.dotgeek.org/postgres/slony-config/][slony-config]] - A Perl tool for configuring Slony nodes using config files in an XML-based format that the tool transforms into a Slonik script
+Moved to SGML
--- /dev/null
+++ doc/adminguide/installation.html
@@ -0,0 +1,266 @@
+<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN""http://www.w3.org/TR/html4/loose.dtd">
+<HTML
+><HEAD
+><TITLE
+> Slony-I Installation</TITLE
+><META
+NAME="GENERATOR"
+CONTENT="Modular DocBook HTML Stylesheet Version 1.79"><LINK
+REV="MADE"
+HREF="mailto:cbbrowne at gmail.com"><LINK
+REL="HOME"
+TITLE="Slony-I 1.1 Administration"
+HREF="slony.html"><LINK
+REL="UP"
+HREF="t24.html"><LINK
+REL="PREVIOUS"
+TITLE=" Requirements"
+HREF="requirements.html"><LINK
+REL="NEXT"
+TITLE="Slonik"
+HREF="slonik.html"><LINK
+REL="STYLESHEET"
+TYPE="text/css"
+HREF="stdstyle.css"><META
+HTTP-EQUIV="Content-Type"></HEAD
+><BODY
+CLASS="SECT1"
+BGCOLOR="#FFFFFF"
+TEXT="#000000"
+LINK="#0000FF"
+VLINK="#840084"
+ALINK="#0000FF"
+><DIV
+CLASS="NAVHEADER"
+><TABLE
+SUMMARY="Header navigation table"
+WIDTH="100%"
+BORDER="0"
+CELLPADDING="0"
+CELLSPACING="0"
+><TR
+><TH
+COLSPAN="3"
+ALIGN="center"
+>Slony-I 1.1 Administration</TH
+></TR
+><TR
+><TD
+WIDTH="10%"
+ALIGN="left"
+VALIGN="bottom"
+><A
+HREF="requirements.html"
+ACCESSKEY="P"
+>Prev</A
+></TD
+><TD
+WIDTH="80%"
+ALIGN="center"
+VALIGN="bottom"
+></TD
+><TD
+WIDTH="10%"
+ALIGN="right"
+VALIGN="bottom"
+><A
+HREF="slonik.html"
+ACCESSKEY="N"
+>Next</A
+></TD
+></TR
+></TABLE
+><HR
+ALIGN="LEFT"
+WIDTH="100%"></DIV
+><DIV
+CLASS="SECT1"
+><H1
+CLASS="SECT1"
+><A
+NAME="INSTALLATION"
+>3. Slony-I Installation</A
+></H1
+><P
+>You should have obtained the Slony-I source from the previous step. Unpack it.</P
+><P
+><TT
+CLASS="COMMAND"
+>gunzip slony.tar.gz;
+tar xf slony.tar</TT
+></P
+><P
+> This will create a directory Slony-I under the current
+directory with the Slony-I sources. Head into that directory for
+the rest of the installation procedure.</P
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN176"
+>3.1. Short Version</A
+></H2
+><P
+><TT
+CLASS="COMMAND"
+>./configure --with-pgsourcetree=/wherever/the/source/is </TT
+></P
+><P
+> <TT
+CLASS="COMMAND"
+> gmake all; gmake install </TT
+></P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN182"
+>3.2. Configuration</A
+></H2
+><P
+>The first step of the installation procedure is to configure the source tree
+for your system. This is done by running the configure script. Configure
+needs to know where your PostgreSQL source tree is; this is done with the
+--with-pgsourcetree= option.</P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN185"
+>3.3. Example</A
+></H2
+><P
+> <TT
+CLASS="COMMAND"
+>./configure --with-pgsourcetree=/usr/local/src/postgresql-7.4.3</TT
+></P
+><P
+>This script will run a number of tests to guess values for
+various dependent variables and try to detect some quirks of your
+system. Slony-I is known to need a modified version of libpq on
+specific platforms such as Solaris2.X on SPARC; the patch can be found
+at <A
+HREF="http://developer.postgresql.org/~wieck/slony1/download/threadsafe-libpq-742.diff.gz"
+TARGET="_top"
+>http://developer.postgresql.org/~wieck/slony1/download/threadsafe-libpq-742.diff.gz</A
+></P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN191"
+>3.4. Build</A
+></H2
+><P
+>To start the build process, type
+
+<TT
+CLASS="COMMAND"
+>gmake all</TT
+></P
+><P
+> Be sure to use GNU make; on BSD systems, it is called gmake; on Linux, GNU make is typically the native "make", so the name of the command you type in may vary somewhat. The build may take anywhere from a few seconds to 2 minutes depending on how fast your hardware is at compiling things. The last line displayed should be</P
+><P
+> <TT
+CLASS="COMMAND"
+>All of Slony-I is successfully made. Ready to install.</TT
+></P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN198"
+>3.5. Installing Slony-I</A
+></H2
+><P
+> To install Slony-I, enter
+
+<TT
+CLASS="COMMAND"
+>gmake install</TT
+></P
+><P
+>This will install files into the PostgreSQL install directory as
+specified by the <CODE
+CLASS="OPTION"
+>--prefix</CODE
+> option used in the PostgreSQL
+configuration. Make sure you have appropriate permissions to write
+into that area. Normally you need to do this either as root or as the
+postgres user.</P
+></DIV
+></DIV
+><DIV
+CLASS="NAVFOOTER"
+><HR
+ALIGN="LEFT"
+WIDTH="100%"><TABLE
+SUMMARY="Footer navigation table"
+WIDTH="100%"
+BORDER="0"
+CELLPADDING="0"
+CELLSPACING="0"
+><TR
+><TD
+WIDTH="33%"
+ALIGN="left"
+VALIGN="top"
+><A
+HREF="requirements.html"
+ACCESSKEY="P"
+>Prev</A
+></TD
+><TD
+WIDTH="34%"
+ALIGN="center"
+VALIGN="top"
+><A
+HREF="slony.html"
+ACCESSKEY="H"
+>Home</A
+></TD
+><TD
+WIDTH="33%"
+ALIGN="right"
+VALIGN="top"
+><A
+HREF="slonik.html"
+ACCESSKEY="N"
+>Next</A
+></TD
+></TR
+><TR
+><TD
+WIDTH="33%"
+ALIGN="left"
+VALIGN="top"
+>Requirements</TD
+><TD
+WIDTH="34%"
+ALIGN="center"
+VALIGN="top"
+><A
+HREF="t24.html"
+ACCESSKEY="U"
+>Up</A
+></TD
+><TD
+WIDTH="33%"
+ALIGN="right"
+VALIGN="top"
+>Slonik</TD
+></TR
+></TABLE
+></DIV
+></BODY
+></HTML
+>
\ No newline at end of file
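Taken together, the steps documented in the generated page above amount to a short shell sequence; this is only a sketch, and the unpacked directory name is an assumption (use whatever tar actually creates):

    # Unpack the sources, point configure at the PostgreSQL source tree,
    # then build and install with GNU make (gmake on BSD systems).
    gunzip slony.tar.gz
    tar xf slony.tar
    cd slony1-engine      # assumed name of the unpacked directory
    ./configure --with-pgsourcetree=/usr/local/src/postgresql-7.4.3
    gmake all
    gmake install         # needs write access to the PostgreSQL install area (root or postgres)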
Index: SlonyFAQ04.txt
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/SlonyFAQ04.txt,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/SlonyFAQ04.txt -Ldoc/adminguide/SlonyFAQ04.txt -u -w -r1.1 -r1.2
--- doc/adminguide/SlonyFAQ04.txt
+++ doc/adminguide/SlonyFAQ04.txt
@@ -1,31 +1 @@
-%META:TOPICINFO{author="guest" date="1099542005" format="1.0" version="1.2"}%
-%META:TOPICPARENT{name="SlonyFAQ"}%
----++ slon does not restart after crash
-
-After an immediate stop of postgresql (simulation of system crash)
-in pg_catalog.pg_listener a tuple with
-relname='_${cluster_name}_Restart' exists. slon doesn't start cause it
-thinks another process is serving the cluster on this node. What can
-I do? The tuples can't be dropped from this relation.
-
-The logs claim that "Another slon daemon is serving this node already"
-
-It's handy to keep a slonik script like the following one around to
-run in such cases:
-
-<verbatim>
-twcsds004[/opt/twcsds004/OXRS/slony-scripts]$ cat restart_org.slonik
-cluster name = oxrsorg ;
-node 1 admin conninfo = 'host=32.85.68.220 dbname=oxrsorg user=postgres port=5532';
-node 2 admin conninfo = 'host=32.85.68.216 dbname=oxrsorg user=postgres port=5532';
-node 3 admin conninfo = 'host=32.85.68.244 dbname=oxrsorg user=postgres port=5532';
-node 4 admin conninfo = 'host=10.28.103.132 dbname=oxrsorg user=postgres port=5532';
-restart node 1;
-restart node 2;
-restart node 3;
-restart node 4;
-</verbatim>
-
-'restart node n' cleans up dead notifications so that you can restart the node.
-
-As of version 1.0.5, the startup process of slon looks for this condition, and automatically cleans it up.
+Moved to SGML
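For anyone on a pre-1.0.5 release, the stale notification described in the removed text above can be inspected by hand; a minimal sketch, assuming the oxrsorg cluster/database names from the example script and the pre-8.4 pg_catalog.pg_listener layout:

    # Look for the leftover '_<cluster>_Restart' entry that makes slon
    # believe another daemon is already serving the node; slonik's
    # 'restart node' command clears it.
    psql -d oxrsorg -c "SELECT relname, listenerpid FROM pg_catalog.pg_listener WHERE relname = '_oxrsorg_Restart';"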
Index: SlonyFAQ14.txt
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/SlonyFAQ14.txt,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/SlonyFAQ14.txt -Ldoc/adminguide/SlonyFAQ14.txt -u -w -r1.1 -r1.2
--- doc/adminguide/SlonyFAQ14.txt
+++ doc/adminguide/SlonyFAQ14.txt
@@ -1,37 +1 @@
-%META:TOPICINFO{author="guest" date="1099541409" format="1.0" version="1.2"}%
-%META:TOPICPARENT{name="SlonyFAQ"}%
----++ I started doing a backup using pg_dump, and suddenly Slony stops
-
-Ouch. What happens here is a conflict between:
-
- * pg_dump, which has taken out an AccessShareLock on all of the tables in the database, including the Slony-I ones, and
-
- * A Slony-I sync event, which wants to grab a AccessExclusiveLock on the table sl_event.
-
-The initial query that will be blocked is thus:
-
-<verbatim>
- select "_slonyschema".createEvent('_slonyschema, 'SYNC', NULL);
-</verbatim>
-
-(You can see this in pg_stat_activity, if you have query display turned on in postgresql.conf)
-
-The actual query combination that is causing the lock is from the function Slony_I_ClusterStatus(), found in slony1_funcs.c, and is localized in the code that does:
-
-<verbatim>
- LOCK TABLE %s.sl_event;
- INSERT INTO %s.sl_event (...stuff...)
- SELECT currval('%s.sl_event_seq');
-</verbatim>
-
-The LOCK statement will sit there and wait until pg_dump (or whatever else has pretty much any kind of access lock on sl_event) completes.
-
-Every subsequent query submitted that touches sl_event will block behind the createEvent call.
-
-There are a number of possible answers to this:
-
- * Have pg_dump specify the schema dumped using --schema=whatever, and don't try dumping the cluster's schema.
-
- * It would be nice to add an "--exclude-schema" option to pg_dump to exclude the Slony cluster schema. Maybe in 8.0 or 8.1...
-
-Note that 1.0.5 uses a more precise lock that is less exclusive that relieves this problem.
+Moved to SGML
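The first workaround listed above maps directly onto a pg_dump invocation; the schema and database names here are placeholders rather than values from the FAQ:

    # Dump only the application schema so the Slony-I cluster schema
    # (and therefore sl_event) is never locked by the backup.
    pg_dump --schema=public -f app_backup.sql mydatabase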
Index: slonik.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/slonik.sgml,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/slonik.sgml -Ldoc/adminguide/slonik.sgml -u -w -r1.1 -r1.2
--- doc/adminguide/slonik.sgml
+++ doc/adminguide/slonik.sgml
@@ -1,11 +1,11 @@
-<article id="slonik"> <title>Slonik</title>
+<sect1 id="slonik"> <title>Slonik</title>
-<sect1><title/Introduction/
+<sect2><title/Introduction/
<para>Slonik is a command line utility designed specifically to setup
and modify configurations of the Slony-I replication system.</para>
-<sect1><title/General outline/
+<sect2><title/General outline/
<para>The slonik command line utility is supposed to be used embedded
into shell scripts and reads commands from files or stdin.</para>
Index: SlonyFAQ15.txt
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/SlonyFAQ15.txt,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/SlonyFAQ15.txt -Ldoc/adminguide/SlonyFAQ15.txt -u -w -r1.1 -r1.2
--- doc/adminguide/SlonyFAQ15.txt
+++ doc/adminguide/SlonyFAQ15.txt
@@ -1,20 +1 @@
-%META:TOPICINFO{author="guest" date="1098327360" format="1.0" version="1.1"}%
-%META:TOPICPARENT{name="SlonyFAQ"}%
----++ The slons spent the weekend out of commission [for some reason], and it's taking a long time to get a sync through.
-
-You might want to take a look at the sl_log_1/sl_log_2 tables, and do
-a summary to see if there are any really enormous Slony-I transactions
-in there. Up until at least 1.0.2, there needs to be a slon connected
-to the master in order for SYNC events to be generated.
-
-If none are being generated, then all of the updates until the next
-one is generated will collect into one rather enormous Slony-I
-transaction.
-
-Conclusion: Even if there is not going to be a subscriber around, you
-_really_ want to have a slon running to service the "master" node.
-
-Some future version (probably 1.1) may provide a way for SYNC counts
-to be updated on the master by the stored function that is invoked by
-the table triggers.
-
+Moved to SGML
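The summary suggested in the removed text above can be done with a query along these lines; a minimal sketch, assuming a cluster schema named _oxrsorg (as in the earlier restart example) and the sl_log_1 column names of the 1.0 series:

    # Rank originating transactions by row count to spot an enormous
    # accumulated Slony-I transaction in the replication log.
    psql -d oxrsorg -c "SELECT log_origin, log_xid, count(*) AS n_rows FROM \"_oxrsorg\".sl_log_1 GROUP BY log_origin, log_xid ORDER BY count(*) DESC LIMIT 10;"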