Sun.betty alanxzq
Mon Dec 20 07:41:15 PST 2004
Hello Slony-I everybody:
                  How are you!
Thank you for the answer. I still have some questions about Slony-I failover.
The answer I am referring to said:
You cannot "recover" the node 1 cluster after dropping the node.
If you want node 1 to join the cluster again, you will have to recreate the node from scratch.
About this I have some questions.
Question 1: how do I create one master with multiple slaves?
Here is my thinking, but I do not know whether it is right.
In slony1-1.0.5\doc\howto\slony-I-basic-mstr-slv.txt a cluster is set up with
two nodes (node1 and node2). If a user wants to set up a more complex
application structure, one master with two slaves, I think the process and
steps would be:
1) first create the relation between node1 and node2 (cluster name: cluster-1);
2) then create the relation between node1 and node3
                      (a new cluster name: cluster-2).
The scripts would look like this
(pay attention to the differences between the two scripts; at least that is my understanding, but I do not know whether it is right):
createFirstCluster.sh
#!/bin/sh
CLUSTERNAME=cluster-1
MASTERDBNAME=db2
SLAVEDBNAME=db2
MASTERHOST=10.10.11.12
SLAVEHOST=10.10.11.13
REPLICATIONUSER=pgsql
PGBENCHUSER=pgbench
slonik <<_EOF_
 #--
    # define the namespace the replication system uses; in our example it is
    # cluster-1
 #--
 cluster name = $CLUSTERNAME;
 #--
    # admin conninfo's are used by slonik to connect to the nodes one for each
    # node on each side of the cluster, the syntax is that of PQconnectdb in
    # the C-API
 # --
 node 1 admin conninfo = 'dbname=$MASTERDBNAME host=$MASTERHOST user=$REPLICATIONUSER';
 node 2 admin conninfo = 'dbname=$SLAVEDBNAME host=$SLAVEHOST user=$REPLICATIONUSER';
 #--
    # init the first node.  Its id MUST be 1.  This creates the schema
    # _$CLUSTERNAME containing all replication system specific database
    # objects.
 #--
 init cluster ( id=1, comment = 'Master Node');
 
 #--
    # Because the history table does not have a primary key or other unique
    # constraint that could be used to identify a row, we need to add one.
    # The following command adds a bigint column named
    # _Slony-I_$CLUSTERNAME_rowID to the table.  It will have a default value
    # of nextval('_$CLUSTERNAME.s1_rowid_seq'), and have UNIQUE and NOT NULL
    # constraints applied.  All existing rows will be initialized with a
    # number
 #--
 table add key (node id = 1, fully qualified name = 'public.history');
 #--
    # Slony-I organizes tables into sets.  The smallest unit a node can
    # subscribe is a set.  The following commands create one set containing
    # all 4 pgbench tables.  The master or origin of the set is node 1.
    # you need to have a set add table() for each table you wish to replicate
 #--
 create set (id=1, origin=1, comment='All pgbench tables');
 set add table (set id=1, origin=1, id=1, fully qualified name = 'public.accounts', comment='accounts table');
 set add table (set id=1, origin=1, id=2, fully qualified name = 'public.branches', comment='branches table');
 set add table (set id=1, origin=1, id=3, fully qualified name = 'public.tellers', comment='tellers table');
 set add table (set id=1, origin=1, id=4, fully qualified name = 'public.history', comment='history table', key = serial);
 #--
    # Create the second node (the slave), tell the 2 nodes how to connect to
    # each other and how they should listen for events.
 #--
 store node (id=2, comment = 'Slave node');
 store path (server = 1, client = 2, conninfo='dbname=$MASTERDBNAME host=$MASTERHOST user=$REPLICATIONUSER');
 store path (server = 2, client = 1, conninfo='dbname=$SLAVEDBNAME host=$SLAVEHOST user=$REPLICATIONUSER');
 store listen (origin=1, provider = 1, receiver =2);
 store listen (origin=2, provider = 2, receiver =1);
_EOF_
-----------------------------------------------------------------------------------------------
createSecondCluster.sh
#!/bin/sh
CLUSTERNAME=cluster-2
MASTERDBNAME=db2
SLAVEDBNAME=db2
MASTERHOST=10.10.11.12
SLAVEHOST=10.10.11.14
REPLICATIONUSER=pgsql
PGBENCHUSER=pgbench
slonik <<_EOF_
 #--
    # define the namespace the replication system uses; in our example it is
    # cluster-2
 #--
 cluster name = $CLUSTERNAME;
 #--
    # admin conninfo's are used by slonik to connect to the nodes one for each
    # node on each side of the cluster, the syntax is that of PQconnectdb in
    # the C-API
 # --
 node 1 admin conninfo = 'dbname=$MASTERDBNAME host=$MASTERHOST user=$REPLICATIONUSER';
 node 3 admin conninfo = 'dbname=$SLAVEDBNAME host=$SLAVEHOST user=$REPLICATIONUSER';
 #--
    # init the first node.  Its id MUST be 1.  This creates the schema
    # _$CLUSTERNAME containing all replication system specific database
    # objects.
 #--
 init cluster ( id=1, comment = 'Master Node');
 
 #--
    # Because the history table does not have a primary key or other unique
    # constraint that could be used to identify a row, we need to add one.
    # The following command adds a bigint column named
    # _Slony-I_$CLUSTERNAME_rowID to the table.  It will have a default value
    # of nextval('_$CLUSTERNAME.s1_rowid_seq'), and have UNIQUE and NOT NULL
    # constraints applied.  All existing rows will be initialized with a
    # number
 #--
 table add key (node id = 1, fully qualified name = 'public.history');
 #--
    # Slony-I organizes tables into sets.  The smallest unit a node can
    # subscribe is a set.  The following commands create one set containing
    # all 4 pgbench tables.  The master or origin of the set is node 1.
    # you need to have a set add table() for each table you wish to replicate
 #--
 create set (id=1, origin=1, comment='All pgbench tables');
 set add table (set id=1, origin=1, id=1, fully qualified name = 'public.accounts', comment='accounts table');
 set add table (set id=1, origin=1, id=2, fully qualified name = 'public.branches', comment='branches table');
 set add table (set id=1, origin=1, id=3, fully qualified name = 'public.tellers', comment='tellers table');
 set add table (set id=1, origin=1, id=4, fully qualified name = 'public.history', comment='history table', key = serial);
 #--
    # Create the third node (the second slave), tell the 2 nodes how to connect
    # to each other and how they should listen for events.
 #--
 store node (id=3, comment = 'Slave node-2');
 store path (server = 1, client = 3, conninfo='dbname=$MASTERDBNAME host=$MASTERHOST user=$REPLICATIONUSER');
 store path (server = 3, client = 1, conninfo='dbname=$SLAVEDBNAME host=$SLAVEHOST user=$REPLICATIONUSER');
 store listen (origin=1, provider = 1, receiver =3);
 store listen (origin=3, provider = 3, receiver =1);
_EOF_
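
(For comparison, I also wondered whether both slaves could instead live in one single cluster, by simply storing node 3 in cluster-1 with the same kind of slonik commands. A rough sketch of what I imagine, but I do not know which way is right:)

addSecondSlaveToFirstCluster.sh
#!/bin/sh
CLUSTERNAME=cluster-1
MASTERDBNAME=db2
SLAVE2DBNAME=db2
MASTERHOST=10.10.11.12
SLAVE2HOST=10.10.11.14
REPLICATIONUSER=pgsql
slonik <<_EOF_
 cluster name = $CLUSTERNAME;
 node 1 admin conninfo = 'dbname=$MASTERDBNAME host=$MASTERHOST user=$REPLICATIONUSER';
 node 3 admin conninfo = 'dbname=$SLAVE2DBNAME host=$SLAVE2HOST user=$REPLICATIONUSER';
 #--
    # add the second slave as node 3 of the existing cluster instead of
    # creating a second cluster (paths and listens between nodes 2 and 3
    # are omitted in this sketch)
 #--
 store node (id=3, comment = 'Slave node 2');
 store path (server = 1, client = 3, conninfo='dbname=$MASTERDBNAME host=$MASTERHOST user=$REPLICATIONUSER');
 store path (server = 3, client = 1, conninfo='dbname=$SLAVE2DBNAME host=$SLAVE2HOST user=$REPLICATIONUSER');
 store listen (origin=1, provider = 1, receiver =3);
 store listen (origin=3, provider = 3, receiver =1);
_EOF_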
 
Question 2: if the above thinking is right, and a cluster consists of a pair of nodes and one node is dropped, why can the cluster name not be reused?
When a node of a cluster has been dropped, is the whole cluster useless? When I try to recreate the cluster with the old cluster name,
executing the script above shows this error:
<stdin>:6: Error: namespace "_golf" already exists in database of node 1
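
(I guess the leftover "_golf" schema would have to be removed from node 1 before the same cluster name can be used again. A sketch of what I might try, with the conninfo taken from the scripts above, though I do not know whether it is the right approach:)

#!/bin/sh
slonik <<_EOF_
 cluster name = golf;
 node 1 admin conninfo = 'dbname=db2 host=10.10.11.12 user=pgsql';
 #--
    # remove the leftover Slony-I schema, triggers and so on from node 1
    # (as a last resort one could also run: drop schema "_golf" cascade;  in psql)
 #--
 uninstall node (id = 1);
_EOF_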

Thank you very much for your answer!

Christopher Browne <cbbrowne at ca.afilias.info> wrote:
Sun.betty wrote:

> Slony-I team:
> How do you do!
> In the course of using Slony-I, I have met some questions. If anyone
> knows the answer to a question, please tell me. Thank you very much
> for your help!
> I read slony1-1.0.5\slony1-1.0.5\doc\howto\slony-I-failover.txt
> carefully and practiced it.
> Now I have some questions:
> Question 1:
> I start the slon process with the command
> [slon golf "dbname=dbname user=username host=hostip port=serverport"]
> How do I stop the slon process, other than using the [ps] command to find
> the pid and then [kill pid]? Is there a stop command for slon?

Newer versions of slon support a "-p" option which puts the PID into a file.

If you're not running something new enough (and that may be new to CVS
HEAD), then using ps to search for the pid is the way to go.
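
For example, a minimal sketch (assuming a slon new enough to accept -p; the pid file path here is only an illustration):

#!/bin/sh
# start slon and have it record its PID in a file
slon -p /tmp/slon-golf.pid golf "dbname=dbname user=username host=hostip port=serverport" &

# later, stop the daemon cleanly by signalling the recorded PID
kill `cat /tmp/slon-golf.pid`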

> Question 2:
> Practicing the Failover chapter, I used [drop node (id = 1, event node = 2);]
> and found that node1's slon process was stopped and the cluster was deleted.
> If I want to recover the cluster of node1, how do I do it?

You cannot "recover" the node 1 cluster after dropping the node.

When the node is dropped, Slony-I drops the namespace, triggers, and
such, which erases nearly all traces of the node's existence. It does
this sufficiently thoroughly that you cannot recover it.

You will have to recreate the node from scratch, which will involve
copying over all of the replicated data.
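
A rough sketch of what that recreation might look like with slonik (not a tested recipe; it assumes the cluster is named golf, node 2 is the surviving origin, the rebuilt machine comes back as a new node id 3, and the conninfo strings are placeholders):

#!/bin/sh
slonik <<_EOF_
 cluster name = golf;
 node 2 admin conninfo = 'dbname=... host=... user=...';
 node 3 admin conninfo = 'dbname=... host=... user=...';
 # register the rebuilt machine as a brand-new node
 # (its tables must already exist, since Slony-I does not copy the schema)
 store node (id = 3, comment = 'Rebuilt former master', event node = 2);
 store path (server = 2, client = 3, conninfo = 'dbname=... host=... user=...');
 store path (server = 3, client = 2, conninfo = 'dbname=... host=... user=...');
 store listen (origin = 2, provider = 2, receiver = 3);
 store listen (origin = 3, provider = 3, receiver = 2);
 # subscribing is what copies all of the replicated data over again
 subscribe set (id = 1, provider = 2, receiver = 3, forward = no);
_EOF_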



