Steve Singer ssinger at ca.afilias.info
Fri Oct 8 11:23:01 PDT 2010
On 10-10-08 02:14 PM, Gurjeet Singh wrote:
> Slony version 2.0.3
>
> I have a 2-node Slony cluster and I wish to clean up everything
> (manually) if it is detected that one of the nodes has failed.
>
> So, is just stopping the slon daemons and then executing 'uninstall
> node' for the remaining node enough to clean up everything?
>
> $ ./slon_kill
>
> $ ( ./slonik_print_preamble && echo 'uninstall node ( id = 2 ); ' ) | slonik
>
> I looked at slonik_uninstall_node() in slonik.c and at the
> _cluster_name.uninstallNode() plpgsql function. All I could see is
> that slonik_uninstall_node() calls the plpgsql function and then issues
> 'drop schema _cluster_name cascade;'. The plpgsql function just issues
> a 'lock table _cluster_name.sl_config_lock'.
>
> So I don't see a problem with performing the above two commands to
> clean up and then configuring the replication setup from scratch.
>
> Any objections?

This should be fine.

Doing the 'uninstall node' on node 2 will remove Slony from node 2
(assuming it is your node 1 that has failed).
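
For the archives, spelled out, the whole cleanup might look something
like this (just a sketch; the cluster name 'mycluster' and the conninfo
string are placeholders, adjust them for your setup):

$ ./slon_kill

$ slonik <<_EOF_
cluster name = mycluster;
node 2 admin conninfo = 'dbname=mydb host=node2 user=slony';
uninstall node ( id = 2 );
_EOF_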

Also, when you reinstall your cluster I wouldn't put 2.0.3 back on.
You would be much better off with 2.0.4 or 2.0.5; 2.0.3 introduced a
serious regression.
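
And if you ever need to do the cleanup without slonik, the manual
equivalent on the surviving node is essentially the drop you found in
slonik.c (again just a sketch, assuming the cluster is named
'mycluster', so the schema is _mycluster):

$ psql -d mydb -c 'drop schema _mycluster cascade;'

The cascade takes the Slony functions and tables, and the log triggers
that depend on them, with it, after which you can configure replication
from scratch.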

>
> Regards,
> --
> gurjeet.singh
> @ EnterpriseDB - The Enterprise Postgres Company
> http://www.EnterpriseDB.com
>
> singh.gurjeet@{ gmail | yahoo }.com
> Twitter/Skype: singh_gurjeet
>
> Mail sent from my BlackLaptop device