Mon Oct 10 18:28:29 PDT 2005
- Previous message: [Slony1-general] Call for 1.1.2 tests
- Next message: [Slony1-general] Help needed for Slony-I's RPM build
- Messages sorted by: [ date ] [ thread ] [ subject ] [ author ]
I am having two problems implementing Slony-I across a network and was hoping someone could help me out. The first problem is that slonik on the master complains that there is no admin conninfo for the slave node.

I have the following environment variables set on both the master and the slave:

CLUSTERNAME=slony_rep
MASTERDBNAME=important_db
SLAVEDBNAME=important_db
MASTERHOST=mas_serv.ournet.com
SLAVEHOST=slv_serv.ournet.com
REPLICATIONUSER=postgres

The following is the script for the master:
=======================================================================
#!/bin/sh
slonik <<_EOF_
# Define the namespace the replication system uses; in our example it is
# slony_example.
cluster name = $CLUSTERNAME;

# Admin conninfo's are used by slonik to connect to the nodes, one for
# each node on each side of the cluster. The syntax is that of
# PQconnectdb in the C API.
node 1 admin conninfo = 'dbname=$MASTERDBNAME host=$MASTERHOST user=$REPLICATIONUSER';
node 2 admin conninfo = 'dbname=$SLAVEDBNAME host=$SLAVEHOST user=$REPLICATIONUSER';

# Init the first node. Its id MUST be 1. This creates the schema
# _$CLUSTERNAME containing all replication system specific database
# objects.
init cluster ( id=1, comment = 'Master Node');

# Because the history table does not have a primary key or other unique
# constraint that could be used to identify a row, we need to add one.
# The following command adds a bigint column named
# _Slony-I_$CLUSTERNAME_rowID to the table. It will have a default value
# of nextval('_$CLUSTERNAME.s1_rowid_seq'), and have UNIQUE and NOT NULL
# constraints applied. All existing rows will be initialized with a
# number.
table add key (node id = 1, fully qualified name = 'important_db.slony_test');
table add key (node id = 2, fully qualified name = 'important_db.slony_test');

# Slony-I organizes tables into sets. The smallest unit a node can
# subscribe to is a set. The following commands create one set containing
# all 4 pgbench tables. The master or origin of the set is node 1.
create set (id=1, origin=1, comment='All pgbench tables');
set add table (set id=1, origin=1, id=1, fully qualified name = 'radius.slony_test', comment='slony test table');

# Create the second node (the slave), tell the 2 nodes how to connect to
# each other, and how they should listen for events.
store node (id=2, comment = 'Slave node');
store path (server = 1, client = 2, conninfo='dbname=$MASTERDBNAME host=$MASTERHOST user=$REPLICATIONUSER port=$PGPORT');
store path (server = 2, client = 1, conninfo='dbname=$SLAVEDBNAME host=$SLAVEHOST user=$REPLICATIONUSER port=$PGPORT');
store listen (origin=1, provider = 1, receiver =2);
store listen (origin=2, provider = 2, receiver =1);
_EOF_
=======================================================================

Problem 1. The script fails with:

[postgres at tts postgres]$ ./slony_test.sh
<stdin>:15: Error: namespace "_slony_rep" already exists in database of node 1
<stdin>:15: ERROR: no admin conninfo for node 152637712

I do not understand why the connect info does not work, since the full DNS name is specified.

On the slave, I have the following script:
=======================================================================
#!/bin/sh
slonik <<_EOF_
# ----
# This defines which namespace the replication system uses
# ----
cluster name = $CLUSTERNAME;

# ----
# Admin conninfo's are used by the slonik program to connect
# to the node databases. So these are the PQconnectdb arguments
# that connect from the administrator's workstation (where
# slonik is executed).
# ----
node 1 admin conninfo = 'dbname=$MASTERDBNAME host=$MASTERHOST user=$REPLICATIONUSER';
node 2 admin conninfo = 'dbname=$SLAVEDBNAME host=$SLAVEHOST user=$REPLICATIONUSER';

init cluster ( id=2, comment = 'Slave Node');

# ----
# Add the key to be identical to the master table
# ----
table add key (node id = 2, fully qualified name = 'important_db.slony_test');

# --
# Create the second node (the slave), tell the 2 nodes how to connect to
# each other, and how they should listen for events.
# --
store node (id=1, comment = 'Master node');
store node (id=2, comment = 'Slave node');
store path (server = 1, client = 2, conninfo='dbname=$MASTERDBNAME host=$MASTERHOST user=$REPLICATIONUSER');
store path (server = 2, client = 1, conninfo='dbname=$SLAVEDBNAME host=$SLAVEHOST user=$REPLICATIONUSER');
store listen (origin=1, provider = 1, receiver =2);

# ----
# Node 2 subscribes set 1
# ----
subscribe set ( id = 1, provider = 1, receiver = 2, forward = no);
_EOF_
=======================================================================

Problem 2. The script fails with:

[postgres at nas postgres]$ ./slony_slv_tst.sh
<stdin>:25: Error: ev_origin for store_node cannot be the new node
<stdin>:15: loading of file /usr/local/pgsql/share//xxid.v73.sql: PGRES_FATAL_ERROR ERROR: current transaction is aborted, queries ignored until end of transaction block
ERROR: current transaction is aborted, queries ignored until end of transaction block
<stdin>:15: ERROR: no admin conninfo for node 156827920

I'm not really sure what I am doing wrong. Your input would be greatly appreciated.
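For reference, here is a minimal single-script sketch of what I now understand the docs to intend: all of the setup is run once, from one admin workstation, with init cluster issued only against node 1 and the slave created by store node rather than a second init cluster. This is untested on my side, and the table names are just the ones from my environment above:

=======================================================================
#!/bin/sh
slonik <<_EOF_
cluster name = $CLUSTERNAME;
node 1 admin conninfo = 'dbname=$MASTERDBNAME host=$MASTERHOST user=$REPLICATIONUSER';
node 2 admin conninfo = 'dbname=$SLAVEDBNAME host=$SLAVEHOST user=$REPLICATIONUSER';

# init cluster runs exactly once, against node 1 only. The slave database
# gets no separate init cluster; store node below sets it up.
init cluster ( id=1, comment = 'Master Node');

# Add a surrogate key on every node that will hold the table.
table add key (node id = 1, fully qualified name = 'important_db.slony_test');
table add key (node id = 2, fully qualified name = 'important_db.slony_test');

create set (id=1, origin=1, comment='test set');
set add table (set id=1, origin=1, id=1, fully qualified name = 'important_db.slony_test', comment='slony test table');

# Create the slave node and the paths in both directions.
store node (id=2, comment = 'Slave node');
store path (server = 1, client = 2, conninfo='dbname=$MASTERDBNAME host=$MASTERHOST user=$REPLICATIONUSER');
store path (server = 2, client = 1, conninfo='dbname=$SLAVEDBNAME host=$SLAVEHOST user=$REPLICATIONUSER');
store listen (origin=1, provider = 1, receiver =2);
store listen (origin=2, provider = 2, receiver =1);

subscribe set ( id = 1, provider = 1, receiver = 2, forward = no);
_EOF_
=======================================================================

If this is right, it would explain both errors: "namespace _slony_rep already exists" because init cluster was re-run against an already-initialized node, and "ev_origin for store_node cannot be the new node" because the slave script runs store node against the node it is creating. The subscribe set step may also need to wait until the slon daemons are running on both nodes; corrections welcome.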