Sean Staats sean at ftdna.com
Tue Jun 2 08:30:45 PDT 2009
I created a new replication cluster.  It turns out that starting the 
table IDs at id=1 and the sequence IDs at id=1001 didn't make any 
difference, as Slony gave me the same error (sequence ID 1001 has already 
been assigned).  Increasing the log verbosity to 4 doesn't produce any 
more useful debugging information.  Time for another approach.
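For reference, this is roughly the shape of the slonik commands I mean
(a sketch only; the conninfo strings and the table name are simplified
placeholders, and the sequence name is the one from the log quoted below):

    # Sketch only; conninfo strings and the table name are placeholders.
    cluster name = finch_cluster_1;
    node 1 admin conninfo = 'dbname=finch host=db1 user=slony';
    node 2 admin conninfo = 'dbname=finch host=db2 user=slony';

    create set (id = 1, origin = 1, comment = 'finch tables and sequences');

    # Table IDs start at 1 ...
    set add table (set id = 1, origin = 1, id = 1,
        fully qualified name = 'finch.contig');

    # ... and sequence IDs start at 1001, yet setAddSequence_int() still
    # reports the sequence ID as already assigned.
    set add sequence (set id = 1, origin = 1, id = 1001,
        fully qualified name = 'finch.contig_sequence_id_seq');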

Would it make sense to create 2 different sets - one to replicate the 
tables and one to replicate the sequences?  Is there a downside to this 
kind of workaround?
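Concretely, I'm picturing something along these lines (again just a
sketch; everything except the sequence name from the quoted log is a
placeholder):

    # Hypothetical two-set layout: set 1 carries only tables, set 2 only
    # sequences.  Placeholder names throughout except the quoted sequence.
    create set (id = 1, origin = 1, comment = 'tables only');
    set add table (set id = 1, origin = 1, id = 1,
        fully qualified name = 'finch.contig');

    create set (id = 2, origin = 1, comment = 'sequences only');
    set add sequence (set id = 2, origin = 1, id = 1001,
        fully qualified name = 'finch.contig_sequence_id_seq');

    subscribe set (id = 1, provider = 1, receiver = 2, forward = no);
    subscribe set (id = 2, provider = 1, receiver = 2, forward = no);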

Sean


Sean wrote:
> Thanks for all the responses.  To quickly answer Stuart's question...  
> Yes, it keeps looping.  Also, it doesn't appear that the system is 
> running out of disk space.  The root partition still has 27G free and 
> the partition that contains the postgresql data has 261G free.  I'll 
> investigate Bug #62 in the meantime.  Just to help with 
> troubleshooting, I'll build another replication cluster and start the 
> table IDs at 1 and the sequence IDs at 1001.
>
> Cheers!
> Sean
>
> Stuart Bishop wrote:
>> On Sat, May 30, 2009 at 3:53 AM, Sean Staats <sean at ftdna.com> wrote:
>>  
>>> The initial replication of all 165 tables appears to succeed.  When 
>>> slony
>>> tries to replicate the first sequence, the following error is reported:
>>> log snippet...
>>>     
>>
>>  
>>> 2009-05-29 15:39:06 CDT ERROR  remoteWorkerThread_1: "select
>>> "_finch_cluster_1".setAddSequence_int(1, 1,
>>> '"finch"."contig_sequence_id_seq"', '')" PGRES_FATAL_ERROR
>>> ERROR:  Slony-I: setAddSequence_int(): sequence ID 1 has already been assigned
>>> 2009-05-29 15:39:06 CDT WARN   remoteWorkerThread_1: data copy for set 1
>>> failed - sleep 60 seconds
>>> WARNING:  there is no transaction in progress
>>>     
>>
>> Does it keep looping?
>>
>> You might be hitting Bug #62 (
>> http://www.slony.info/bugzilla/show_bug.cgi?id=62 ). It's repeatable if
>> you run out of disk space during replication, but there might be other
>> triggers too. If you search the archives, there is a thread about the
>> same error message roughly every month or two.
>>
>>   
>


