Sung Hsin Lei sungh.lei at gmail.com
Thu Feb 4 07:24:11 PST 2016
One more question,

After I re-created node 3 and ran (on the replicated db):


slon slony_Securithor2 "dbname = dbNAME user = slonyuser password = slonPASS port = 5432"


I get:


2016-02-04 17:15:05 GTB Standard Time FATAL  main: Node is not initialized properly - sleep 10s


slon then stops after 10 seconds. Any idea what happened?
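
Is there something on node 3 I should check? For instance (a sketch, assuming the schema is the cluster name with a leading underscore, i.e. "_slony_securithor2" for the command above):

-- run against the node-3 database: list the nodes it knows about
SELECT no_id, no_active, no_comment FROM "_slony_securithor2".sl_node;

-- which node id does Slony-I consider local here? slon looks this up at startup
SELECT "_slony_securithor2".getLocalNodeId('_slony_securithor2');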

Thanks again.

On Thu, Feb 4, 2016 at 9:48 AM, Sung Hsin Lei <sungh.lei at gmail.com> wrote:

> yes... that's it!!
>
> On Thu, Feb 4, 2016 at 8:58 AM, Tignor, Tom <ttignor at akamai.com> wrote:
>
>>
>> If I’m reading right, did you run the drop node op at some point on node
>> 1 and see it succeed? If it did, the sl_node table on each other node in
>> the cluster (save perhaps node 3) should show it gone.
>> If that’s the case, your cluster is fine and you can just run ‘DROP
>> SCHEMA mycluster CASCADE’ on node 3 and then retry your store node script.
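>>
>> For this cluster that would be the following, as a sketch (run it only on
>> the node-3 database; a Slony-I cluster's schema is the cluster name with a
>> leading underscore):
>>
>> -- removes every Slony-I object for this cluster from node 3
>> DROP SCHEMA "_slony_cluster" CASCADE;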
>>
>> Tom    :-)
>>
>>
>> From: Sung Hsin Lei <sungh.lei at gmail.com>
>> Date: Wednesday, February 3, 2016 at 11:37 PM
>> To: slony <slony1-general at lists.slony.info>
>> Subject: [Slony1-general] Cannot fully drop slony node
>>
>> Hey guys,
>>
>> I have a cluster with 3 nodes. On the main db, I run the following script:
>>
>>
>> cluster name = slony_cluster;
>>
>> node 1 admin conninfo = 'dbname = dbNAME host = localhost user = slonyuser password = slonPASS port = 5432';
>> node 3 admin conninfo = 'dbname = dbNAME host = 172.16.10.4 user = slonyuser password = slonPASS port = 5432';
>>
>> DROP NODE ( ID = 3, EVENT NODE = 1 );
>>
>>
>>
>> I open pgAdmin on the main db and I don't see node 3 anymore. However,
>> when I open pgAdmin on the replicated db, I still see node 3. The
>> replicated db is the one associated with node 3. I run the above script
>> again on the replicated db but get the following error:
>>
>>
>> C:\Program Files\PostgreSQL\9.3\bin>slonik drop.txt
>> debug: waiting for 3,5000000004 on 1
>> drop.txt:4: PGRES_FATAL_ERROR lock table "_slony_securithor2".sl_event_lock, "_slony_cluster".sl_config_lock;select "_slony_securithor2".dropNode(ARRAY[3]);
>>   - ERROR:  Slony-I: DROP_NODE cannot initiate on the dropped node
>>
>>
>> Now I need to set up another node, which must have id=3. I run a script
>> on the main db (the one where pgAdmin does not show a node 3). The
>> following is the script that I used to set up the node and the error that I get:
>>
>>
>> cluster name = slony_cluster;
>>
>> node 1 admin conninfo = 'dbname = dbNAME host = localhost user = slonyuser password = slonPASS port = 5432';
>> node 3 admin conninfo = 'dbname = dbNAME host = 172.16.10.4 user = slonyuser password = slonPASS port = 5432';
>>
>> store node (id=3, comment = 'Slave node 3', event node=1);
>> store path (server = 1, client = 3, conninfo = 'dbname=dbNAME host=172.16.10.3 user=slonyuser password=slonPASS port=5432');
>> store path (server = 3, client = 1, conninfo = 'dbname=dbNAME host=172.16.10.4 user=slonyuser password=slonPASS port=5432');
>>
>> subscribe set ( id = 1, provider = 1, receiver = 3, forward = no);
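>>
>> Side question: should there be a "wait for event" between the store path
>> commands and the subscribe, along these lines? A sketch, assuming the slon
>> daemons for nodes 1 and 3 are already running:
>>
>> wait for event (origin = 1, confirmed = all, wait on = 1);
>> subscribe set (id = 1, provider = 1, receiver = 3, forward = no);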
>>
>>
>>
>>
>>
>> C:\Program Files\PostgreSQL\9.3\bin>slonik create.txt
>> drop.txt:6: Error: namespace "_slony_cluster" already exists in database of node 3
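>>
>> To see what is actually left behind on node 3, I list the Slony-I cluster
>> schemas in that database (a sketch; the backslash keeps LIKE from treating
>> the underscore as a wildcard):
>>
>> SELECT nspname FROM pg_namespace WHERE nspname LIKE '\_slony%';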
>>
>>
>>
>> Is there another way to drop nodes? Can I recover from this without
>> dropping the cluster and restarting from scratch?
>>
>>
>> Thanks.
>>
>
>