Glyn Astill glynastill at yahoo.co.uk
Tue Jul 1 06:56:57 PDT 2008
Looks like you are running slons for both servers on each server.

You only need 1 slon per server.

So on the master (192.168.99.23) just run the slon for the master, and
on the slave (192.168.99.134) just run the slon for the slave.

Or of course you can run both slons on the master and none on the slave, both on the slave and none on the master, or both on a totally separate machine...
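
For example, reusing the conninfo strings from your ps output below (a sketch of the idea rather than a drop-in init script):

# on the master, 192.168.99.23 - only the slon for node 1
/usr/bin/slon -s 1000 -d2 bijayant "host=192.168.99.23 dbname=bijayant user=bijayant port=5432 password=bijayant"

# on the slave, 192.168.99.134 - only the slon for node 2
/usr/bin/slon -s 1000 -d2 bijayant "host=192.168.99.134 dbname=bijayant user=bijayant port=5432 password=bijayant"

Or, since you're using the altperl tools, running slon_start for just node1 on the master and just node2 on the slave should amount to the same thing.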


----- Original Message ----
> From: bijayant kumar <bijayant4u at yahoo.com>
> To: Martin Eriksson <m.eriksson at albourne.com>; Glyn Astill <glynastill at yahoo.co.uk>
> Cc: slony1-general at lists.slony.info
> Sent: Tuesday, 1 July, 2008 2:08:37 PM
> Subject: Re: [Slony1-general] Node is not initialized properly
> 
> On the Master Server (192.168.99.23)
> 
> bijayant ~ # ps aux | grep slon
> 
> root      3956  0.0  0.1   2116   692 pts/2    S    18:27   0:00 /usr/bin/slon -s 1000 -d2 bijayant host=192.168.99.23 dbname=bijayant user=bijayant port=5432 password=bijayant
> 
> root      3961  0.0  0.1  51424   864 pts/2    Sl   18:27   0:00 /usr/bin/slon -s 1000 -d2 bijayant host=192.168.99.23 dbname=bijayant user=bijayant port=5432 password=bijayant
> 
> root      3968  0.0  0.4   3612  2188 pts/2    S    18:27   0:00 /usr/local/bin/perl /usr/bin/slon_watchdog --config=/etc/slon_tools.conf node1 30
> 
> root      3998  0.0  0.1   2120   692 pts/2    S    18:28   0:00 /usr/bin/slon -s 1000 -d2 bijayant host=192.168.99.134 dbname=bijayant user=bijayant port=5432 password=bijayant
> 
> root      4006  0.0  0.1  51428   868 pts/2    Sl   18:28   0:00 /usr/bin/slon -s 1000 -d2 bijayant host=192.168.99.134 dbname=bijayant user=bijayant port=5432 password=bijayant
> 
> root      4008  0.0  0.4   3616  2192 pts/2    S    18:28   0:00 /usr/local/bin/perl /usr/bin/slon_watchdog --config=/etc/slon_tools.conf node2 30
> 
> On server vi /etc/slon_tools.conf
> if ($ENV{"SLONYNODES"}) {
>     require $ENV{"SLONYNODES"};
> } else {
>     $CLUSTER_NAME = 'bijayant';
>     $LOGDIR = '/var/log/slony';
>     $MASTERNODE = 1;
>     add_node(node     => 1,
>              host     => '192.168.99.23',
>              dbname   => 'bijayant',
>              port     => 5432,
>              user     => 'bijayant',
>              password => 'bijayant');
> 
>     add_node(node     => 2,
>              host     => '192.168.99.134',
>              dbname   => 'bijayant',
>              port     => 5432,
>              user     => 'bijayant',
>              password => 'bijayant',
>              parent => 1
>              );
> }
> 
> $SLONY_SETS = {
>     # A unique name for the set
>     "set1" => {
>         # The set_id, also unique
>         "set_id" => 1,
>         # "origin" => 1,
>         # foldCase => 0,
> #       "table_id"    => 1,
> #       "sequence_id" => 1,
>        "pkeyedtables" => [
>                            'public.kavach',
>                            ],
>         },
> };
> 
> if ($ENV{"SLONYSET"}) {
>     require $ENV{"SLONYSET"};
> }
> 
> 1;
> 
> vi /etc/conf.d/slony1
> USER=postgres
> CLUSTER=bijayant
> DBUSER=bijayant
> DBNAME=bijayant
> DBHOST=192.168.99.23
> LOGFILE=/var/lib/postgresql/data/slony1.log
> LOGLEVEL=1
> 
> 
> And on the Slave Server (192.168.99.134) the file is exactly the same.
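
(Note that both copies point DBHOST at 192.168.99.23. If the Gentoo init script uses DBHOST as the host the local slon connects to - an assumption on my part about /etc/init.d/slony1 - then the slave's copy would presumably need to point at the slave itself, roughly:

# sketch of /etc/conf.d/slony1 on the slave, 192.168.99.134
USER=postgres
CLUSTER=bijayant
DBUSER=bijayant
DBNAME=bijayant
DBHOST=192.168.99.134
LOGFILE=/var/lib/postgresql/data/slony1.log
LOGLEVEL=1

so that each box only starts the slon for its own node.)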
> 
> Bijayant Kumar
> 
> 
> --- On Tue, 1/7/08, Glyn Astill wrote:
> 
> > From: Glyn Astill 
> > Subject: Re: [Slony1-general] Node is not initialized properly
> > To: bijayant4u at yahoo.com, "Martin Eriksson" 
> > Cc: slony1-general at lists.slony.info
> > Date: Tuesday, 1 July, 2008, 6:26 PM
> > Check what slons are running against each database. You
> > should have a slon running for each machine, so if you are
> > running one slon on each server you should have the
> > conninfo parameter configured in slon.conf, or be passing
> > it at the command line. A ps -ax should show what server
> > the slons are acting on.
> > 
> > 
> > 
> > ----- Original Message ----
> > > From: bijayant kumar 
> > > To: Martin Eriksson ;
> > Glyn Astill 
> > > Cc: slony1-general at lists.slony.info
> > > Sent: Tuesday, 1 July, 2008 1:45:11 PM
> > > Subject: Re: [Slony1-general] Node is not initialized properly
> > > 
> > > Hello to list,
> > > 
> > > This problem is solved now. I have re-installed postgresql and slony1 and the problem is gone. Now I have another problem. When I run slony on the Master Server (192.168.99.23), I can see in the logs
> > > 
> > > 2008-07-01 18:09:50 IST CONFIG enableNode: no_id=2
> > > 2008-07-01 18:09:50 IST DEBUG1 remoteWorkerThread_2: thread starts
> > > 2008-07-01 18:09:50 IST DEBUG1 remoteListenThread_2: thread starts
> > > 2008-07-01 18:09:50 IST DEBUG1 main: running scheduler mainloop
> > > 2008-07-01 18:09:50 IST DEBUG1 cleanupThread: thread starts
> > > 2008-07-01 18:09:50 IST DEBUG1 syncThread: thread starts
> > > 2008-07-01 18:09:50 IST DEBUG1 remoteListenThread_2: connected to 'host=192.168.99.134 dbname=bijayant user=bijayant port=5432 password=bijayant'
> > > 
> > > But when I run slony on the second node, that is on the Slave server (192.168.99.134), I get
> > > 
> > > 2008-07-01 18:03:07 IST FATAL  Do you already have a slon running against this node?
> > > 2008-07-01 18:03:07 IST FATAL  Or perhaps a residual idle backend connection from a dead slon?
> > > 2008-07-01 18:03:07 IST DEBUG2 slon_abort() from pid=7871
> > > 2008-07-01 18:03:07 IST DEBUG1 slon: shutdown requested
> > > 2008-07-01 18:03:07 IST DEBUG2 slon: notify worker process to shutdown
> > > 2008-07-01 18:03:27 IST DEBUG1 slon: child termination timeout - kill child
> > > 2008-07-01 18:03:27 IST DEBUG2 slon: child terminated status: 9; pid: 7871, current worker pid: 7871
> > > 2008-07-01 18:03:27 IST DEBUG1 slon: done
> > > 2008-07-01 18:03:27 IST DEBUG2 slon: remove pid file
> > > 2008-07-01 18:03:27 IST DEBUG2 slon: exit(0)
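
(That FATAL normally means just what it says: another slon, or an idle backend left behind by one, is already attached to that node's database. A quick way to check both possibilities on the slave; a sketch, and the pg_stat_activity column names are the pre-9.2 ones used by PostgreSQL 8.x:

ps ax | grep slon
psql -U bijayant -d bijayant -c "SELECT procpid, usename, current_query FROM pg_stat_activity;"

If a stray slon or its leftover connection shows up, stop it before restarting the slave's slon.)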
> > > 
> > > 
> > > I have the exact same configuration (like a xerox copy) on both the servers. Is that creating the problem? Should I change the parameters (/etc/slon_tools.conf) according to Master/Slave?
> > > 
> > > Please help me, I think I am very near to replicating my first ever database.
> > > 
> > > Bijayant Kumar
> > > 
> > > 
> > > --- On Tue, 1/7/08, bijayant kumar wrote:
> > > 
> > > > From: bijayant kumar 
> > > > Subject: Re: [Slony1-general] Node is not initialized properly
> > > > To: "Martin Eriksson" , "Glyn Astill" 
> > > > Cc: slony1-general at lists.slony.info
> > > > Date: Tuesday, 1 July, 2008, 3:05 PM
> > > > Hi,
> > > > The user "bijayant" is a super user. I have created this user like this:
> > > > 
> > > > postgres at bijayant ~ $ createuser bijayant
> > > > Shall the new user be allowed to create databases? (y/n) y
> > > > Shall the new user be allowed to create more new users? (y/n) y
> > > > CREATE USER
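
(Whether those two "y" answers actually made the role a superuser depends on the PostgreSQL version; a direct way to check, and to fix it if needed, run as the postgres user - shown here as a sketch:

psql -d bijayant -c "SELECT usename, usesuper FROM pg_user WHERE usename = 'bijayant';"
psql -d bijayant -c "ALTER USER bijayant WITH SUPERUSER;"    # syntax for 8.1 and later

If usesuper comes back as "t", the account is already a superuser and this isn't the problem.)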
> > > > 
> > > > It means that bijayant is a super user, right? Or am I doing some silly thing? If I use the user pgsql instead, what do I use for the password?
> > > > 
> > > > bijayant ~ # pg_config --libdir
> > > > /usr/lib
> > > > 
> > > > bijayant ~ # pg_config --pkglibdir
> > > > /usr/lib/postgresql
> > > > 
> > > > On the Master Database server the xxid.so file is in /usr/local/pgsql/lib/xxid.so. I have copied this file to /usr/lib/postgresql/ and /usr/lib/ also. But no luck.
> > > > 
> > > > On the Slave Server, it is in /usr/lib64/postgresql/xxid.so.
> > > > 
> > > > Is that creating the problem? How do I resolve it? I have installed postgres and slony1 with the emerge utility of gentoo.
> > > > 
> > > > Thanks & Regards,
> > > > 
> > > > Bijayant Kumar
> > > > 
> > > > 
> > > > --- On Tue, 1/7/08, Glyn Astill wrote:
> > > > 
> > > > > From: Glyn Astill 
> > > > > Subject: Re: [Slony1-general] Node is not initialized properly
> > > > > To: bijayant4u at yahoo.com, "Martin Eriksson" 
> > > > > Cc: slony1-general at lists.slony.info
> > > > > Date: Tuesday, 1 July, 2008, 2:22 PM
> > > > > I'd be checking to see where xxid.so was on the system, and if I had an older version lurking somewhere.
> > > > > 
> > > > > I'd also make sure bijayant was a database superuser.
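
(A quick way to compare where the backend will look for it against where the file actually lives; just a sketch, and the find can take a while:

pg_config --pkglibdir                  # directory the server resolves $libdir to
find / -name 'xxid.so' 2>/dev/null     # every copy of xxid.so on the box

The copy Slony installs needs to sit in the --pkglibdir directory of the PostgreSQL installation the database is actually running from.)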
> > > > > 
> > > > > 
> > > > > 
> > > > > ----- Original Message ----
> > > > > > From: bijayant kumar 
> > > > > > To: Martin Eriksson 
> > > > > > Cc: slony1-general at lists.slony.info
> > > > > > Sent: Tuesday, 1 July, 2008 8:00:19 AM
> > > > > > Subject: Re: [Slony1-general] Node is not initialized properly
> > > > > > 
> > > > > > Thanks to all for the reply. Now I think that I am coming closer to running slonik. When I did as suggested by all:
> > > > > > 
> > > > > > bijayant ~ # slonik_init_cluster --config /etc/slon_tools.conf | slonik
> > > > > > 
> > > > > > :6: PGRES_FATAL_ERROR load '$libdir/xxid';  - ERROR:  could not access file "$libdir/xxid": No such file or directory
> > > > > > 
> > > > > > :6: Error: the extension for the xxid data type cannot be loaded in database 'host=192.168.99.23 dbname=bijayant user=bijayant port=5432 password=bijayant'
> > > > > > 
> > > > > > :6: ERROR: no admin conninfo for node 134598992
> > > > > > 
> > > > > > The first two lines of the error I could not understand. The "admin conninfo" error: I think this parameter should be present in the conf file, right? But in my case what should be there, as I am already giving the username and password to connect to the postgresql database?
> > > > > > 
> > > > > > In this problem thread one gentleman has asked me to check whether the "_bijayant" schema is created or not. I am sorry to ask, but I really don't know about this. I have only created a user, database and table in the postgresql database, nothing else. When we create a database, the schema is also created automatically, right?
> > > > > > 
> > > > > > Please suggest what I should do next. Sorry, but I am very new to slonik and databases. I am trying hard to understand the concept.
> > > > > > 
> > > > > > Thanks & Regards,
> > > > > > 
> > > > > > Bijayant Kumar
> > > > > > 
> > > > > > 
> > > > > > --- On Mon, 30/6/08, Martin Eriksson wrote:
> > > > > > 
> > > > > > > From: Martin Eriksson 
> > > > > > > Subject: Re: [Slony1-general] Node is not initialized properly
> > > > > > > To: 
> > > > > > > Cc: slony1-general at lists.slony.info
> > > > > > > Date: Monday, 30 June, 2008, 8:34 PM
> > > > > > > If you only run it like that, it only prints what it will execute; to actually execute this you need to pipe (|) it to the slonik app.
> > > > > > > 
> > > > > > > eg. /data/pgsql/slony/slonik_init_cluster --config | /data/pgsql/bin/slonik
> > > > > > > 
> > > > > > > All the scripts starting with "slonik_" don't actually do anything themselves; they are just a way to format a command correctly for the slonik parser, which actually does the work.
> > > > > > > 
> > > > > > > 
> > > > > > > 
> > > > > > > bijayant kumar wrote:
> > > > > > > > Thanks for the reply. I executed the command "slonik_init_cluster". It gives output like:
> > > > > > > >
> > > > > > > > # INIT CLUSTER
> > > > > > > > cluster name = bijayant;
> > > > > > > >  node 1 admin conninfo='host=192.168.99.23 dbname=bijayant user=bijayant port=5432 password=bijayant';
> > > > > > > >  node 2 admin conninfo='host=192.168.99.134 dbname=bijayant user=bijayant port=5432 password=bijayant';
> > > > > > > >   init cluster (id = 1, comment = 'Node 1 - bijayant at 192.168.99.23');
> > > > > > > >
> > > > > > > > # STORE NODE
> > > > > > > >   store node (id = 2, event node = 1, comment = 'Node 2 - bijayant at 192.168.99.134');
> > > > > > > >   echo 'Set up replication nodes';
> > > > > > > >
> > > > > > > > # STORE PATH
> > > > > > > >   echo 'Next: configure paths for each node/origin';
> > > > > > > >   store path (server = 1, client = 2, conninfo = 'host=192.168.99.23 dbname=bijayant user=bijayant port=5432 password=bijayant');
> > > > > > > >   store path (server = 2, client = 1, conninfo = 'host=192.168.99.134 dbname=bijayant user=bijayant port=5432 password=bijayant');
> > > > > > > >   echo 'Replication nodes prepared';
> > > > > > > >   echo 'Please start a slon replication daemon for each node';
> > > > > > > >
> > > > > > > > After that I ran the slon daemon and got the error mentioned.
> > > > > > > >
> > > > > > > > Bijayant Kumar
> > > > > > > >
> > > > > > > >
> > > > > > > > --- On Mon, 30/6/08, Martin Eriksson wrote:
> > > > > > > >
> > > > > > > >> From: Martin Eriksson 
> > > > > > > >> Subject: Re: [Slony1-general] Node is not initialized properly
> > > > > > > >> To: 
> > > > > > > >> Cc: slony1-general at lists.slony.info
> > > > > > > >> Date: Monday, 30 June, 2008, 7:18 PM
> > > > > > > >> Very basic, but I saw no mention of it in the e-mail:
> > > > > > > >>
> > > > > > > >> I assume you ran "slonik_init_cluster" before trying to start the slon daemons? As that does the slony setup on each of the dbs.
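
(For reference, the usual altperl sequence is roughly the following; a sketch assuming /etc/slon_tools.conf is the config on both boxes and set 1 / nodes 1 and 2 as above:

slonik_init_cluster --config=/etc/slon_tools.conf | slonik        # run once, from either box
slon_start --config=/etc/slon_tools.conf node1                    # on the master
slon_start --config=/etc/slon_tools.conf node2                    # on the slave
slonik_create_set --config=/etc/slon_tools.conf 1 | slonik        # define set 1
slonik_subscribe_set --config=/etc/slon_tools.conf 1 2 | slonik   # subscribe node 2 to set 1

The order matters less than the fact that each script's output has to be piped into slonik, as Martin explains above.)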
> > > > > > > >>
> > > > > > > >>
> > > > > > > >>
> > > > > > > >> bijayant kumar wrote:
> > > > > > > >>> Hello list,
> > > > > > > >>>
> > > > > > > >>> I am a very new user of slony1 so please forgive me if I am asking a very stupid question.
> > > > > > > >>> I have installed Postgresql and Slony1 on two gentoo machines. Postgresql is working fine, no problem at all. I want to use Slony1 to replicate the two databases across the systems. To understand the slony1 concept I made a test database with one single table and only one entry in it. But when I start slony1 I get errors like
> > > > > > > >>>
> > > > > > > >>> 2008-06-30 15:54:55 IST ERROR  cannot get sl_local_node_id - ERROR:  schema "_bijayant" does not exist
> > > > > > > >>> 2008-06-30 15:54:55 IST FATAL  main: Node is not initialized properly - sleep 10s
> > > > > > > >>>
> > > > > > > >>> Now I am giving here my configuration details.
> > > > > > > >>>
> > > > > > > >>> /* Master Server */
> > > > > > > >>> IP Address 192.168.99.23
> > > > > > > >>> Database name bijayant
> > > > > > > >>> Table name kavach
> > > > > > > >>>
> > > > > > > >>> bijayant=# select * from kavach;
> > > > > > > >>>  id |   name   | designation |   address
> > > > > > > >>> ----+----------+-------------+-------------
> > > > > > > >>>   1 | Bijayant | consultant  | Lakkasandra
> > > > > > > >>> (1 row)
> > > > > > > >>>
> > > > > > > >>>
> > > > > > > >>> vi /etc/slon_tools.conf
> > > > > > > >>>
> > > > > > > >>> if ($ENV{"SLONYNODES"}) {
> > > > > > > >>>     require $ENV{"SLONYNODES"};
> > > > > > > >>> } else {
> > > > > > > >>>     $CLUSTER_NAME = 'bijayant';
> > > > > > > >>>     $LOGDIR = '/var/log/slony';
> > > > > > > >>>     $MASTERNODE = 1;
> > > > > > > >>>     add_node(node     => 1,
> > > > > > > >>>              host     => '192.168.99.23',
> > > > > > > >>>              dbname   => 'bijayant',
> > > > > > > >>>              port     => 5432,
> > > > > > > >>>              user     => 'bijayant',
> > > > > > > >>>              password => 'bijayant');
> > > > > > > >>>
> > > > > > > >>>     add_node(node     => 2,
> > > > > > > >>>              host     => '192.168.99.134',
> > > > > > > >>>              dbname   => 'bijayant',
> > > > > > > >>>              port     => 5432,
> > > > > > > >>>              user     => 'bijayant',
> > > > > > > >>>              password => 'bijayant',
> > > > > > > >>>              parent   => 1
> > > > > > > >>>              );
> > > > > > > >>> }
> > > > > > > >>>
> > > > > > > >>> $SLONY_SETS = {
> > > > > > > >>>     "set1" => {
> > > > > > > >>>         "set_id" => 1,
> > > > > > > >>>         "pkeyedtables" => [
> > > > > > > >>>             'public.kavach',
> > > > > > > >>>             ],
> > > > > > > >>>         },
> > > > > > > >>> };
> > > > > > > >>>
> > > > > > > >>> if ($ENV{"SLONYSET"}) {
> > > > > > > >>>     require $ENV{"SLONYSET"};
> > > > > > > >>> }
> > > > > > > >>>
> > > > > > > >>> 1;
> > > > > > > >>>
> > > > > > > >>> vi /etc/conf.d/slony1
> > > > > > > >>> USER=postgres
> > > > > > > >>> CLUSTER=bijayant
> > > > > > > >>> DBUSER=bijayant
> > > > > > > >>> DBNAME=bijayant
> > > > > > > >>> DBHOST=192.168.99.23
> > > > > > > >>> LOGFILE=/var/lib/postgresql/data/slony1.log
> > > > > > > >>> LOGLEVEL=4
> > > > > > > >>>
> > > > > > > >>> /* On the Slave Server */
> > > > > > > >>>
> > > > > > > >>> IP Address 192.168.99.134
> > > > > > > >>> Database name bijayant
> > > > > > > >>> Table name kavach
> > > > > > > >>>
> > > > > > > >>> vi /etc/slon_tools.conf
> > > > > > > >>>
> > > > > > > >>> if ($ENV{"SLONYNODES"}) {
> > > > > > > >>>     require $ENV{"SLONYNODES"};
> > > > > > > >>> } else {
> > > > > > > >>>     $CLUSTER_NAME = 'bijayant';
> > > > > > > >>>     $LOGDIR = '/var/log/slony';
> > > > > > > >>>     # SYNC check interval (slon -s option)
> > > > > > > >>>     # $SYNC_CHECK_INTERVAL = 1000;
> > > > > > > >>>     $MASTERNODE = 1;
> > > > > > > >>>     add_node(node     => 1,
> > > > > > > >>>              host     => '192.168.99.23',
> > > > > > > >>>              dbname   => 'bijayant',
> > > > > > > >>>              port     => 5432,
> > > > > > > >>>              user     => 'bijayant',
> > > > > > > >>>              password => 'bijayant');
> > > > > > > >>>
> > > > > > > >>>     add_node(node     => 2,
> > > > > > > >>>              host     => '192.168.99.134',
> > > > > > > >>>              dbname   => 'bijayant',
> > > > > > > >>>              port     => 5432,
> > > > > > > >>>              user     => 'bijayant',
> > > > > > > >>>              password => 'bijayant',
> > > > > > > >>>              parent   => 1
> > > > > > > >>>              );
> > > > > > > >>> }
> > > > > > > >>> $SLONY_SETS = {
> > > > > > > >>>     "set1" => {
> > > > > > > >>>         "set_id" => 1,
> > > > > > > >>>         "pkeyedtables" => [
> > > > > > > >>>             'public.kavach',
> > > > > > > >>>             ],
> > > > > > > >>>         },
> > > > > > > >>> };
> > > > > > > >>>
> > > > > > > >>> if ($ENV{"SLONYSET"}) {
> > > > > > > >>>     require $ENV{"SLONYSET"};
> > > > > > > >>> }
> > > > > > > >>>
> > > > > > > >>> 1;
> > > > > > > >>>
> > > > > > > >>> vi /etc/conf.d/slony1
> > > > > > > >>> USER=postgres
> > > > > > > >>> CLUSTER=bijayant
> > > > > > > >>> DBUSER=bijayant
> > > > > > > >>> DBNAME=bijayant
> > > > > > > >>> DBHOST=192.168.99.23
> > > > > > > >>> LOGFILE=/var/lib/postgresql/data/slony1.log
> > > > > > > >>> LOGLEVEL=4
> > > > > > > >>>
> > > > > > > >>> When I start slony1 (/etc/init.d/slony1 start) on both the machines it generates lots of logs with error lines like
> > > > > > > >>>
> > > > > > > >>> 2008-06-30 15:54:55 IST DEBUG2 slon_retry() from pid=19007
> > > > > > > >>> 2008-06-30 15:54:55 IST DEBUG1 slon: retry requested
> > > > > > > >>> 2008-06-30 15:54:55 IST DEBUG2 slon: notify worker process to shutdown
> > > > > > > >>> 2008-06-30 15:54:55 IST DEBUG2 slon: child terminated status: 0; pid: 19007, current worker pid: 19007
> > > > > > > >>> 2008-06-30 15:54:55 IST DEBUG1 slon: restart of worker
> > > > > > > >>> 2008-06-30 15:54:55 IST CONFIG main: slon version 1.2.10 starting up
> > > > > > > >>> 2008-06-30 15:54:55 IST DEBUG2 slon: watchdog process started
> > > > > > > >>> 2008-06-30 15:54:55 IST DEBUG2 slon: watchdog ready - pid = 19005
> > > > > > > >>> 2008-06-30 15:54:55 IST DEBUG2 slon: worker process created - pid = 19034
> > > > > > > >>> 2008-06-30 15:54:55 IST ERROR  cannot get sl_local_node_id - ERROR:  schema "_bijayant" does not exist
> > > > > > > >>> 2008-06-30 15:54:55 IST FATAL  main: Node is not initialized properly - sleep 10s
> > > > > > > >>> I am sure that I am not understanding some basic things about slony1. Can anybody help me to understand the logic? I will be very thankful to you all. Please tell me what I am doing wrong here and what I should do. I have read the documentation at the website but am not able to understand it fully. Please help me out.
> > > > > > > >>>
> > > > > > > >>> Thanks & Regards,
> > > > > > > >>> Bijayant Kumar
> > > > > > > >>>




