Fri Apr 15 23:27:09 PDT 2005
- Previous message: [Slony1-commit] By cbbrowne: Documentation on how to upgrade from an older version of
- Next message: [Slony1-commit] By cbbrowne: Updates to debian packaging configuration based on comments
- Messages sorted by: [ date ] [ thread ] [ subject ] [ author ]
Log Message:
-----------
Updates to documentation to make the introductory sections reflect
what has become true about Slony-I implementation (e.g. - it now
has log shipping; building documentation requires DocBook tools;
better sample of ./configure parameters).
Modified Files:
--------------
slony1-engine/doc/adminguide:
concepts.sgml (r1.12 -> r1.13)
faq.sgml (r1.30 -> r1.31)
installation.sgml (r1.10 -> r1.11)
intro.sgml (r1.14 -> r1.15)
logshipping.sgml (r1.8 -> r1.9)
prerequisites.sgml (r1.13 -> r1.14)
-------------- next part --------------
Index: concepts.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/concepts.sgml,v
retrieving revision 1.12
retrieving revision 1.13
diff -Ldoc/adminguide/concepts.sgml -Ldoc/adminguide/concepts.sgml -u -w -r1.12 -r1.13
--- doc/adminguide/concepts.sgml
+++ doc/adminguide/concepts.sgml
@@ -11,6 +11,8 @@
<listitem><para>Node</para></listitem>
<listitem><para>Replication Set</para></listitem>
<listitem><para>Origin, Providers and Subscribers</para></listitem>
+ <listitem><para>slon daemons</para></listitem>
+ <listitem><para>slonik configuration processor</para></listitem>
</itemizedlist>
<sect2>
@@ -38,15 +40,16 @@
<primary>node</primary>
</indexterm>
-<para>A &slony1; Node is a named &postgres; database that will be participating in replication.</para>
+<para>A &slony1; Node is a named &postgres; database that will be
+participating in replication.</para>
<para>It is defined, near the beginning of each Slonik script, using the directive:</para>
<programlisting>
NODE 1 ADMIN CONNINFO = 'dbname=testdb host=server1 user=slony';
</programlisting>
-<para>The <xref linkend="admconninfo"> information indicates a string
-argument that will ultimately be passed to the
+<para>The <xref linkend="admconninfo"> information indicates database
+connection information that will ultimately be passed to the
<function>PQconnectdb()</function> libpq function.</para>
<para>Thus, a &slony1; cluster consists of:</para>
@@ -62,8 +65,7 @@
</indexterm>
<para>A replication set is defined as a set of tables and sequences
-that are to be replicated between nodes in a
-&slony1; cluster.</para>
+that are to be replicated between nodes in a &slony1; cluster.</para>
<para>You may have several sets, and the <quote>flow</quote> of
replication does not need to be identical between those sets.</para>
@@ -100,6 +102,42 @@
<quote>provider</quote> to other nodes in the cluster for that
replication set.</para>
</sect2>
+
+<sect2><title>slon Daemon</title>
+
+<para>For each node in the cluster, there will be a <xref
+linkend="slon"> process to manage replication activity for that node.
+</para>
+
+<para> <xref linkend="slon"> is a program implemented in C that
+processes replication events. There are two main sorts of events:</para>
+
+<itemizedlist>
+
+<listitem><para> Configuration events</para>
+
+<para> These normally occur when a <xref linkend="slonik"> script is
+run, and submit updates to the configuration of the cluster. </para>
+</listitem>
+
+<listitem><para> <command>SYNC</command> events </para>
+
+<para> Updates to the tables that are replicated are grouped together
+into <command>SYNC</command>s; these groups of transactions are
+applied together to the subscriber nodes. </para>
+</listitem>
+</itemizedlist>
+</sect2>
+
+<sect2><title>slonik Configuration Processor</title>
+
+<para> The <xref linkend="slonik"> command processor processes scripts
+in a <quote>little language</quote> that are used to submit events to
+update the configuration of a &slony1; cluster. This includes such
+things as adding and removing nodes, modifying communications paths,
+and adding or removing subscriptions.
+</para>
+</sect2>
</sect1>
<!-- Keep this comment at the end of the file
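As an illustration of the <quote>little language</quote> described in the slonik section above, here is a minimal sketch of such a script. The cluster name, node ids, and conninfo strings are hypothetical, and exact statement syntax varies between Slony-I versions; each statement submits a configuration event of the kind the slon daemons then process:

```
cluster name = testcluster;

node 1 admin conninfo = 'dbname=testdb host=server1 user=slony';
node 2 admin conninfo = 'dbname=testdb host=server2 user=slony';

init cluster (id = 1, comment = 'origin node');
store node (id = 2, comment = 'subscriber node');
store path (server = 1, client = 2,
    conninfo = 'dbname=testdb host=server1 user=slony');
store path (server = 2, client = 1,
    conninfo = 'dbname=testdb host=server2 user=slony');
```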
Index: installation.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/installation.sgml,v
retrieving revision 1.10
retrieving revision 1.11
diff -Ldoc/adminguide/installation.sgml -Ldoc/adminguide/installation.sgml -u -w -r1.10 -r1.11
--- doc/adminguide/installation.sgml
+++ doc/adminguide/installation.sgml
@@ -19,7 +19,17 @@
<para>
<screen>
-./configure --with-pgsourcetree=/whereever/the/source/is
+PGMAIN=/usr/local/pgsql746-freebsd-2005-04-01 \
+./configure \
+ --prefix=$PGMAIN \
+ --bindir=$PGMAIN/bin \
+ --datadir=$PGMAIN/share \
+ --libdir=$PGMAIN/lib \
+ --with-pgconfigdir=$PGMAIN/bin \
+ --with-pgbindir=$PGMAIN/bin \
+ --with-pgincludedir=$PGMAIN/include \
+ --with-pglibdir=$PGMAIN/lib \
+ --with-pgsharedir=$PGMAIN/share
gmake all; gmake install
</screen>
</para></sect2>
@@ -31,11 +41,14 @@
source tree for your system. This is done by running the
<application>configure</application> script. In early versions,
<application>configure</application> needed to know where your
-&postgres; source tree is, is done with the
-<option>--with-pgsourcetree=</option> option. As of version 1.1,
-&slony1; is configured by pointing it to the various library, binary,
-and include directories; for a full list of these options, use the
-command <command>./configure --help</command>
+&postgres; source tree is, which was done with the
+<option>--with-pgsourcetree=</option> option. As of version 1.1, this
+is no longer necessary, as &slony1; now includes, within its own code
+base, the parts needed for platform portability. It needs to
+reference only those parts of &postgres; that are actually part of
+the installation. Therefore, &slony1; is configured by pointing it to
+the various library, binary, and include directories. For a full list
+of these options, use the command <command>./configure --help</command>.
</para>
</sect2>
@@ -43,20 +56,32 @@
<title>Example</title>
<screen>
-./configure --with-pgsourcetree=/usr/local/src/postgresql-7.4.6
+PGMAIN=/opt/dbs/pgsql746-aix-2005-04-01 \
+./configure \
+ --prefix=$PGMAIN \
+ --bindir=$PGMAIN/bin \
+ --datadir=$PGMAIN/share \
+ --libdir=$PGMAIN/lib \
+ --with-pgconfigdir=$PGMAIN/bin \
+ --with-pgbindir=$PGMAIN/bin \
+ --with-pgincludedir=$PGMAIN/include \
+ --with-pglibdir=$PGMAIN/lib \
+ --with-pgsharedir=$PGMAIN/share
</screen>
-<para>This script will run a number of tests to guess values for
-various dependent variables and try to detect some quirks of your
-system. &slony1; is known to need a modified version of libpq on
-specific platforms such as Solaris2.X on SPARC this patch can be found
-at <ulink
- url="http://developer.postgresql.org/~wieck/slony1/download/threadsafe-libpq-742.diff.gz">
+<para>The <application>configure</application> script will run a
+number of tests to guess values for various dependent variables and
+try to detect some quirks of your system. &slony1; is known to need a
+modified version of <application>libpq</application> on specific
+platforms such as Solaris2.X on SPARC. The patch for libpq version
+7.4.2 can be found at <ulink id="threadpatch" url=
+"http://developer.postgresql.org/~wieck/slony1/download/threadsafe-libpq-742.diff.gz">
http://developer.postgresql.org/~wieck/slony1/download/threadsafe-libpq-742.diff.gz
-</ulink></para>
+</ulink>. Similar patches may need to be constructed for other
+versions; see the FAQ entry on <link linkend="threadsafety"> thread
+safety </link>. </para>
</sect2>
-
<sect2>
<title>Build</title>
@@ -67,11 +92,14 @@
</screen></para>
<para> Be sure to use GNU make; on BSD systems, it is called
-<application>gmake</application>; on Linux, GNU make is typically the native
-<application>make</application>, so the name of the command you type in may vary
-somewhat. The build may take anywhere from a few seconds to 2 minutes
-depending on how fast your hardware is at compiling things. The last
-line displayed should be</para>
+<application>gmake</application>; on Linux, GNU make is typically the
+<quote>native</quote> <application>make</application>, so the name of
+the command you type in may be either <command>make</command> or
+<command>gmake</command>. On other platforms, you may need additional
+packages, or may even need to install GNU make from scratch. The
+build may take anywhere from a few seconds to 2 minutes depending on
+how fast your hardware is at compiling things. The last line
+displayed should be</para>
<para> <command> All of Slony-I is successfully made. Ready to
install. </command></para>
@@ -88,7 +116,7 @@
<para>This will install files into the postgresql install directory as
specified by the <option>--prefix</option> option used in the
-&postgres; configuration. Make sure you have appropriate permissions
+&postgres; installation. Make sure you have appropriate permissions
to write into that area. Normally you need to do this either as root
or as the postgres user. </para>
</sect2>
Index: intro.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/intro.sgml,v
retrieving revision 1.14
retrieving revision 1.15
diff -Ldoc/adminguide/intro.sgml -Ldoc/adminguide/intro.sgml -u -w -r1.14 -r1.15
--- doc/adminguide/intro.sgml
+++ doc/adminguide/intro.sgml
@@ -1,13 +1,6 @@
<!-- $Id$ -->
<sect1 id="introduction">
<title>Introduction to &slony1;</title>
-<sect2><title>Why yet another replication system?</title>
-
-<para>&slony1; was born from an idea to
-create a replication system that was not tied to a specific version of
-&postgres;, which is allowed to be started and stopped on an existing
-database with out the need for a dump/reload cycle.</para> </sect2>
-
<sect2> <title>What &slony1; is</title>
<para>&slony1; is a <quote>master to
@@ -42,11 +35,11 @@
staff who connect periodically to grab updates. </para></listitem>
</itemizedlist></para>
-<para> There are plans for a <quote>file-based log shipping</quote>
-extension where updates would be serialized into files. Given that,
-log files could be distributed by any means desired without any need
-of feedback between the provider node and those nodes subscribing via
-<quote>log shipping.</quote></para>
+<para> There is also a <link linkend="logshipping">file-based log
+shipping</link> extension where updates would be serialized into
+files. Given that, log files could be distributed by any means
+desired without any need of feedback between the provider node and
+those nodes subscribing via <quote>log shipping.</quote></para>
<para> But &slony1;, by only having a single origin for each set, is
quite unsuitable for <emphasis>really</emphasis> asynchronous multiway
@@ -62,65 +55,79 @@
</sect2>
+<sect2><title>Why yet another replication system?</title>
+
+<para>&slony1; was born from an idea to create a replication system
+that was not tied to a specific version of &postgres;, and that could
+be started and stopped on an existing database without the need for a
+dump/reload cycle.</para> </sect2>
+
<sect2><title> What &slony1; is not</title>
-<para>&slony1; is not a network management system.</para>
+<itemizedlist>
+<listitem><para>&slony1; is not a network management system.</para></listitem>
-<para> &slony1; does not have any functionality within it to detect a
+<listitem><para> &slony1; does not have any functionality within it to detect a
node failure, nor to automatically promote a node to a master or other
-data origin. It is quite possible that you may need to do that; that
+data origin.</para>
+
+<para> It is quite possible that you may need to do that; that
will require that you combine some network tools that evaluate
<emphasis> to your satisfaction </emphasis> which nodes you consider
<quote>live</quote> and which nodes you consider <quote>dead</quote>
along with some local policy to determine what to do under those
circumstances. &slony1; does not dictate any of that policy to
-you.</para>
+you.</para></listitem>
-<para>&slony1; is not multi-master; it is not a connection broker, and
-it doesn't make you coffee and toast in the morning.</para>
+<listitem><para>&slony1; is not multi-master; it is not a connection broker, and
+it doesn't make you coffee and toast in the morning.</para></listitem>
-<para>That being said, the plan is for a subsequent system,
-<productname>Slony-II</productname>, to provide
-<quote>multimaster</quote> capabilities. But that is a separate
-project, and expectations for &slony1; should not be based on hopes
-for future projects.</para></sect2>
+</itemizedlist>
+
+<para>All that being said, there are tools available to help with some
+of these things, and there is a plan under way for a subsequent
+system, <productname>Slony-II</productname>, to provide
+<quote>multimaster</quote> capabilities. But that represents a
+different, separate project, being implemented in a rather different
+fashion than &slony1;, and expectations for &slony1; should not be
+based on hopes for future projects.</para></sect2>
<sect2><title> Why doesn't &slony1; do automatic fail-over/promotion?
</title>
-<para>This is the job of network monitoring software, not &slony1;.
-Every site's configuration and fail-over paths are different. For
-example, keep-alive monitoring with redundant NIC's and intelligent HA
-switches that guarantee race-condition-free takeover of a network
-address and disconnecting the <quote>failed</quote> node will vary
-based on network configuration, vendor choices, and the combinations
-of hardware and software in use. This is clearly the realm of network
+<para>That is properly the responsibility of network monitoring
+software, not &slony1;. The configuration, fail-over paths, and
+preferred policies will be different for each site. For example,
+keep-alive monitoring with redundant NIC's and intelligent HA switches
+that guarantee race-condition-free takeover of a network address and
+disconnecting the <quote>failed</quote> node will vary based on
+network configuration, vendor choices, and the combinations of
+hardware and software in use. This is clearly the realm of network
management software and not &slony1;.</para>
<para> Furthermore, choosing what to do based on the
<quote>shape</quote> of the cluster represents local business policy.
-If &slony1; imposed failover policy on you,
-that might conflict with business requirements, thereby making
-&slony1; an unacceptable choice.</para>
+If &slony1; imposed failover policy on you, that might conflict with
+business requirements, thereby making &slony1; an unacceptable
+choice.</para>
<para>As a result, let &slony1; do what it does best: provide database
replication.</para></sect2>
<sect2><title> Current Limitations</title>
-<para>&slony1; does not automatically
-propagate schema changes, nor does it have any ability to replicate
-large objects. There is a single common reason for these limitations,
-namely that &slony1; operates using
-triggers, and neither schema changes nor large object operations can
-raise triggers suitable to tell &slony1;
-when those kinds of changes take place.</para>
+<para>&slony1; does not automatically propagate schema changes, nor
+does it have any ability to replicate large objects. There is a
+single common reason for these limitations, namely that &slony1;
+operates using triggers, and neither schema changes nor large object
+operations can raise triggers suitable to tell &slony1; when those
+kinds of changes take place.</para>
<para>There is a capability for &slony1; to propagate DDL changes if
you submit them as scripts via the <application>slonik</application>
<xref linkend="stmtddlscript"> operation. That is not
<quote>automatic;</quote> you have to construct an SQL DDL script and
-submit it.</para>
+submit it, and there are a number of further caveats.</para>
<para>If you have those sorts of requirements, it may be worth
exploring the use of &postgres; 8.0 <acronym>PITR</acronym> (Point In
@@ -156,9 +163,10 @@
<itemizedlist>
<listitem><para> It is necessary to have <xref
- linkend="table.sl-listen"> entries allowing connection from each node to every
-other node. Most will normally not need to be very heavily, but it
-still means that there needs to be n(n-1) paths. </para></listitem>
+linkend="table.sl-listen"> entries allowing connection from each node
+to every other node. Most will normally not be used very heavily, but
+it still means that there need to be n(n-1) paths.
+</para></listitem>
<listitem><para> Each SYNC applied needs to be reported back to all of
the other nodes participating in the set so that the nodes all know
Index: prerequisites.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/prerequisites.sgml,v
retrieving revision 1.13
retrieving revision 1.14
diff -Ldoc/adminguide/prerequisites.sgml -Ldoc/adminguide/prerequisites.sgml -u -w -r1.13 -r1.14
--- doc/adminguide/prerequisites.sgml
+++ doc/adminguide/prerequisites.sgml
@@ -7,8 +7,9 @@
<para>The platforms that have received specific testing at the time of
this release are FreeBSD-4X-i386, FreeBSD-5X-i386, FreeBSD-5X-alpha,
osX-10.3, Linux-2.4X-i386, Linux-2.6X-i386, Linux-2.6X-amd64,
-<trademark>Solaris</trademark>-2.8-SPARC, <trademark>Solaris</trademark>-2.9-SPARC, AIX 5.1
-and OpenBSD-3.5-sparc64.</para>
+<trademark>Solaris</trademark>-2.8-SPARC,
+<trademark>Solaris</trademark>-2.9-SPARC, AIX 5.1 and
+OpenBSD-3.5-sparc64.</para>
<para>There have been reports of success at running &slony1; hosts
that are running PostgreSQL on Microsoft
@@ -68,7 +69,16 @@
download it from your favorite &postgres; mirror. See <ulink
url="http://www.postgresql.org/mirrors-www.html">
http://www.postgresql.org/mirrors-www.html </ulink> for a
-list.</para></listitem> </itemizedlist> </para>
+list.</para></listitem>
+
+<listitem><para> This documentation is written in SGML using <ulink
+url="http://docbook.com/"> DocBook </ulink>, and may be processed into
+numerous formats including HTML, RTF, and PDF using tools in the
+<ulink url="http://docbook.sourceforge.net/"> DocBook Open Repository
+</ulink> along with <ulink url="http://openjade.sourceforge.net/">
+OpenJade</ulink>. </para></listitem>
+
+</itemizedlist> </para>
<para>Also check to make sure you have sufficient disk space. You
will need approximately 5MB for the source tree during build and
@@ -149,13 +159,13 @@
<para>It is necessary that the hosts that are to replicate between one
another have <emphasis>bidirectional</emphasis> network communications
-to the &postgres; instances. That is, if node B is replicating data
-from node A, it is necessary that there be a path from A to B and from
-B to A. It is recommended that, as much as possible, all nodes in a
-&slony1; cluster allow this sort of bidirection communications from
-any node in the cluster to any other node in the cluster.</para>
+between the &postgres; instances. That is, if node B is replicating
+data from node A, it is necessary that there be a path from A to B and
+from B to A. It is recommended that, as much as possible, all nodes
+in a &slony1; cluster allow this sort of bidirectional communications
+from any node in the cluster to any other node in the cluster.</para>
-<para>For ease of configuration, network addresses should be
+<para>For ease of configuration, network addresses should ideally be
consistent across all of the nodes. <xref linkend="stmtstorepath">
does allow them to vary, but down this road lies madness as you try to
manage the multiplicity of paths pointing to the same server.</para>
@@ -188,19 +198,19 @@
branch location that can't be kept secure compromises security for the
cluster as a whole.</para>
-<para>In the future plans is a feature whereby updates for a
-particular replication set would be serialized via a scheme called
-<quote>log shipping.</quote> The data stored in
-<envar>sl_log_1</envar> and <envar>sl_log_2</envar> would be written
-out to log files on disk. These files could be transmitted in any
+<para>New in &slony1; version 1.1 is a feature whereby updates for a
+particular replication set may be serialized via a scheme called <link
+linkend="logshipping">log shipping</link>. The data stored in
+<envar>sl_log_1</envar> and <envar>sl_log_2</envar> is also written
+out to log files on disk. These files may then be transmitted in any
manner desired, whether via scp, FTP, burning them onto DVD-ROMs and
mailing them, or, at the frivolous end of the spectrum, by recording
them on a USB <quote>flash device</quote> and attaching them to birds,
allowing some equivalent to <ulink
url="http://www.faqs.org/rfcs/rfc1149.html"> transmission of IP
datagrams on avian carriers - RFC 1149.</ulink> But whatever the
-transmission mechanism, this will allow one way communications such
-that subscribers that use log shipping have no need of access to other
+transmission mechanism, this allows one way communications such that
+subscribers that use log shipping have no need of access to other
&slony1; nodes.</para>
</sect2>
Index: logshipping.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/logshipping.sgml,v
retrieving revision 1.8
retrieving revision 1.9
diff -Ldoc/adminguide/logshipping.sgml -Ldoc/adminguide/logshipping.sgml -u -w -r1.8 -r1.9
--- doc/adminguide/logshipping.sgml
+++ doc/adminguide/logshipping.sgml
@@ -58,24 +58,29 @@
subscription set generated?</para>
</question>
-<answer> <para> Any <link linkend="slon"> slon </link> node can
-generate them by adding the <option>-a</option> option.</para>
+<answer> <para> Any <link linkend="slon"> slon </link> subscriber node
+can generate them by adding the <option>-a</option> option.</para>
+
+<note><para> Notice that this implies that in order to use log
+shipping, you must have at least one subscriber node. </para></note>
</answer>
+
</qandaentry>
-<qandaentry>
-<question> <para> What takes place when a failover/MOVE SET takes place?</para></question>
+
+<qandaentry> <question> <para> What takes place when a <xref
+linkend="stmtfailover">/ <xref linkend="stmtmoveset"> takes
+place?</para></question>
<answer><para> Nothing special. So long as the archiving node remains
a subscriber, it will continue to generate logs.</para></answer>
</qandaentry>
-<qandaentry>
-<question> <para> What if we run out of <quote>spool space</quote>?
-</para></question>
+<qandaentry> <question> <para> What if we run out of <quote>spool
+space</quote>? </para></question>
-<answer><para> The node will stop accepting SYNCs until this problem
-is alleviated. The database being subscribed to will also fall
-behind. </para></answer>
+<answer><para> The node will stop accepting <command>SYNC</command>s
+until this problem is alleviated. The database being subscribed to
+will also fall behind. </para></answer>
</qandaentry>
<qandaentry>
@@ -89,8 +94,8 @@
</link></application> for the subscriber node with logging turned on.
At any point after that, you can run
<application>slony1_dump.sh</application>, which will pull the state
-of that subscriber as of some SYNC event. Once the dump completes,
-all the SYNC logs generated from the time that dump
+of that subscriber as of some <command>SYNC</command> event. Once the dump completes,
+all the <command>SYNC</command> logs generated from the time that dump
<emphasis>started</emphasis> may be added to the dump in order to get
a <quote>log shipping subscriber.</quote> </para></answer>
</qandaentry>
@@ -114,14 +119,14 @@
things out if there are multiple replication sets. </para></answer>
<answer><para> The <quote>log shipping node</quote> presently tracks
-only SYNC events. This should be sufficient to cope with
+only <command>SYNC</command> events. This should be sufficient to cope with
<emphasis>some</emphasis> changes in cluster configuration, but not
others. </para>
<para> Log shipping does <emphasis>not</emphasis> process certain
additional events, with the implication that the introduction of any
of the following events can invalidate the relationship between the
-SYNCs and the dump created using
+<command>SYNC</command>s and the dump created using
<application>slony1_dump.sh</application> so that you'll likely need
to rerun <application>slony1_dump.sh</application>:
@@ -200,11 +205,11 @@
<itemizedlist>
<listitem><para> You <emphasis>don't</emphasis> want to blindly apply
-SYNC files because any given SYNC file may <emphasis>not</emphasis> be
+<command>SYNC</command> files because any given <command>SYNC</command> file may <emphasis>not</emphasis> be
the right one. If it's wrong, then the result will be that the call
to <function> setsyncTracking_offline() </function> will fail, and
your <application> psql</application> session will <command> ABORT
-</command>, and then run through the remainder of that SYNC file
+</command>, and then run through the remainder of that <command>SYNC</command> file
looking for a <command>COMMIT</command> or <command>ROLLBACK</command>
so that it can try to move on to the next transaction.</para>
@@ -226,10 +231,11 @@
trying the next file.</para></listitem>
<listitem><para> If the <function> setsyncTracking_offline()
-</function> call succeeds, then you have the right next SYNC file, and
-should apply it. You should probably <command>ROLLBACK</command> the
-transaction, and then use <application>psql</application> to apply the
-entire file full of updates.</para></listitem>
+</function> call succeeds, then you have the right next
+<command>SYNC</command> file, and should apply it. You should
+probably <command>ROLLBACK</command> the transaction, and then use
+<application>psql</application> to apply the entire file full of
+updates.</para></listitem>
</itemizedlist></para>
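The apply procedure sketched in the list above can be expressed, loosely, as a shell loop. The spool directory, the file naming scheme, and the psql invocation are all assumptions (the real tooling may differ), and the actual apply step is left as a comment:

```shell
#!/bin/sh
# Sketch of the offline SYNC-apply loop described above.  The spool
# directory and file-name pattern are assumptions; archive files are
# assumed to carry zero-padded counters, so that lexical sort order
# is also apply order.
SPOOLDIR=${SPOOLDIR:-/var/spool/slony1}

list_archives() {
    # list pending SYNC archive files in apply order
    ls "$1"/slony1_log_*.sql 2>/dev/null | sort
}

apply_archive() {
    # The real apply step would be something like:
    #   psql -v ON_ERROR_STOP=1 -f "$1" "$SUBSCRIBER_DB"
    # If setsyncTracking_offline() aborts because this is not the
    # right next SYNC file, the session rolls through to COMMIT or
    # ROLLBACK, and one retries with the next candidate file.
    echo "would apply: $1"
}

for f in $(list_archives "$SPOOLDIR"); do
    apply_archive "$f" || break
done
```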
Index: faq.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/faq.sgml,v
retrieving revision 1.30
retrieving revision 1.31
diff -Ldoc/adminguide/faq.sgml -Ldoc/adminguide/faq.sgml -u -w -r1.30 -r1.31
--- doc/adminguide/faq.sgml
+++ doc/adminguide/faq.sgml
@@ -38,7 +38,7 @@
</answer>
</qandaentry>
-<qandaentry id="SlonyFAQ02">
+<qandaentry id="threadsafety">
<question><para> Some events are moving around, but no replication is
taking place.</para>
@@ -68,14 +68,19 @@
<para>For instance, I once ran into the problem that
<envar>LD_LIBRARY_PATH</envar> had been set, on Solaris, to point to
-libraries from an old <productname>PostgreSQL</productname> compile. That meant
-that even though the database <emphasis>had</emphasis> been compiled with
-<option>--enable-thread-safety</option>, and <application>slon</application> had been
-compiled against that, <application>slon</application> was being dynamically linked
-to the <quote>bad old thread-unsafe version,</quote> so slon didn't work. It
-wasn't clear that this was the case until I ran <command>ldd</command> against
-<application>slon</application>.</para>
-</answer></qandaentry>
+libraries from an old &postgres; compile. That meant that even though
+the database <emphasis>had</emphasis> been compiled with
+<option>--enable-thread-safety</option>, and
+<application>slon</application> had been compiled against that,
+<application>slon</application> was being dynamically linked to the
+<quote>bad old thread-unsafe version,</quote> so slon didn't work. It
+wasn't clear that this was the case until I ran <command>ldd</command>
+against <application>slon</application>.</para> </answer>
+
+<answer><para> Note that with libpq version 7.4.2, on Solaris, a
+further patch <xref linkend="threadpatch"> was required. </para>
+</answer>
+</qandaentry>
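The ldd check mentioned in this answer can be scripted as a small diagnostic. This is only a sketch, not part of Slony-I itself, and the path to the slon binary is an assumption:

```shell
# Sketch: report which libpq a slon binary is dynamically linked
# against, so a stale LD_LIBRARY_PATH entry (pointing at an old,
# thread-unsafe libpq) becomes visible.
libpq_lines() {
    # $1: output of `ldd <binary>`; print any libpq entries
    printf '%s\n' "$1" | grep libpq
}

# typical use (the slon path is hypothetical):
#   libpq_lines "$(ldd /usr/local/pgsql/bin/slon)"
```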
<qandaentry>
<question> <para>I tried creating a CLUSTER NAME with a "-" in it.
@@ -179,9 +184,9 @@
<para>Long and short: This points to a need to <quote>audit</quote>
what installations of &postgres; and &slony1; you have in place on the
machine(s). Unfortunately, just about any mismatch will cause things
-not to link up quite right. See also <link linkend="slonyfaq02">
-SlonyFAQ02 </link> concerning threading issues on Solaris ...</para>
-</answer></qandaentry>
+not to link up quite right. See also <link linkend="threadsafety">
+thread safety </link> concerning threading issues on Solaris
+...</para> </answer></qandaentry>
<qandaentry>
<question><para>Table indexes with FQ namespace names
@@ -1244,7 +1249,7 @@
</question>
<answer><para> There are two notable areas of
-<productname>PostgreSQL</productname> that cache query plans and OIDs:</para>
+&postgres; that cache query plans and OIDs:</para>
<itemizedlist>
<listitem><para> Prepared statements</para></listitem>
<listitem><para> pl/pgSQL functions</para></listitem>