Kamran mkazeem
Mon Oct 11 13:42:23 PDT 2004
Hello,
Thank you very much for the prompt reply. And your solution worked, too!

I have a few further questions, if you would like to answer them.

1) Do we absolutely need to have a primary key in each table that is
to be replicated, or will a simple UNIQUE and/or NOT NULL key be
enough? Furthermore, I have a few tables in my DB with fields that act
as unique keys (based on sequences), but they are not defined
explicitly as PRIMARY KEY, UNIQUE, or NOT NULL. Will they satisfy
Slony's primary key requirement? I do not want to declare such fields
as primary keys explicitly, because I would have to make major
adjustments throughout my entire application (written in VB) to handle
the errors that come back once a PRIMARY KEY is violated, even though
the fields are of course serving that purpose. In other words, we
prevent duplicate entries in such fields programmatically.
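
For concreteness, a minimal sketch of the kind of table I mean (the
names here are made up):

    CREATE SEQUENCE orders_id_seq;
    CREATE TABLE orders (
        -- populated from the sequence, and kept unique by the VB
        -- application, but not declared PRIMARY KEY, UNIQUE, or NOT NULL
        id    integer DEFAULT nextval('orders_id_seq'),
        descr text
    );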

2) Do we need to run the replicate.sh script, which I referred to in a
previous mail, as a cron job, or is it OK to run it just once at
system boot time?
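
What I have in mind by "cron job" is an entry like this (the interval
and path are made up):

    # re-run replicate.sh every five minutes
    */5 * * * * /usr/local/bin/replicate.sh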

3) How can I run slon as a daemon? At the moment I run it at the
command prompt, and it just sits in the foreground after printing some
messages.
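
Would something like the following be the right approach (using the
cluster name and connection settings from your example; the log path
is made up)?

    nohup slon transtest "dbname=transtest host=host1 user=postgres" \
        >> /var/log/slon_node1.log 2>&1 &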

4) What does this message mean? I see it every time I run the slon
processes:


PL/pgSQL function "determineattkindserial" line 52 at execute statement



Thank you for your time and effort. I appreciate it.

Best,
Kamran



On Mon, 2004-10-11 at 10:26, cbbrowne at ca.afilias.info wrote:
> > My questions are: a) Where am I going wrong?
> 
> What is wrong is that you need a much fuller set of paths and listen
> settings.  If there are 3 nodes, the paths and listener settings need to
> look more like the following:
> 
> cluster name = transtest;
>  node 1 admin conninfo='host=host1 dbname=transtest user=postgres
> port=5432 password=postgres';
>  node 2 admin conninfo='host=host2 dbname=transtest user=postgres
> port=5432 password=postgres';
>  node 3 admin conninfo='host=host3 dbname=transtest user=postgres
> port=5432 password=postgres';
> 
>       store path (server = 1, client = 2, conninfo = 'host=host1
> dbname=transtest user=postgres port=5432 password=postgres');
>       store path (server = 2, client = 1, conninfo = 'host=host2
> dbname=transtest user=postgres port=5432 password=postgres');
>       store path (server = 1, client = 3, conninfo = 'host=host1
> dbname=transtest user=postgres port=5432 password=postgres');
>       store path (server = 3, client = 1, conninfo = 'host=host3
> dbname=transtest user=postgres port=5432 password=postgres');
>       store path (server = 2, client = 3, conninfo = 'host=host2
> dbname=transtest user=postgres port=5432 password=postgres');
>       store path (server = 3, client = 2, conninfo = 'host=host3
> dbname=transtest user=postgres port=5432 password=postgres');
> 
> ## and listeners...
> 
>       store listen (origin = 1, receiver = 2, provider = 1);
>       store listen (origin = 1, receiver = 3, provider = 1);
>       store listen (origin = 2, receiver = 1, provider = 2);
>       store listen (origin = 2, receiver = 3, provider = 1);
>       store listen (origin = 3, receiver = 1, provider = 3);
>       store listen (origin = 3, receiver = 2, provider = 1);
> 
> You're missing the communications between nodes 2 and 3, and that's
> probably what has gone wrong.
> 
> If you take a look at the tables sl_path and sl_listen, on all three
> nodes, you'll probably find that instead of having 6 entries in each table
> (which is what you need), node 1 only has 4, and nodes 2 and 3 may only
> have 2.  There should be 6 in each.
> 
> Even though nodes 2 and 3 aren't directly speaking, they still need to be
> on "speaking terms" in order that SYNCs make it back and forth so all the
> nodes can know when to purge out sl_log_? entries.
> 
> Revise the paths and listens and see if that takes you some steps ahead...
> 
> 

