Christopher Browne cbbrowne at ca.afilias.info
Sun Sep 2 16:32:17 PDT 2007
"Radim Kolar SF.NET" <hsn at sendmail.cz> writes:
>> I don't think this will be material; the improvement is primarily in going
>> from 1 to 100; the improvement from going to (say) 1000 is small, in
>> comparison.  And I think 1.2 improved this in the logic that deals with large
>> tuples...
> The problem for me is not big tuples but lots of small ones. Tracing the
> network showed that fetching 100 rows needs just 11 network packets of
> 1500 bytes. My network is fast (> 100 Mbit) but has a high RTT (~270 ms);
> bandwidth is not a problem and packet loss is < 0.1%. Switching from
> cursor fetching to COPY would provide a major speedup.
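A back-of-envelope sketch of why batch size matters on a high-RTT link: when
bandwidth is not the bottleneck, total time is roughly round trips times RTT.
(The function name and the one-FETCH-per-round-trip model are assumptions for
illustration, not anything Slony actually implements.)

```python
# Rough model: total time ~ round_trips * RTT, assuming one FETCH per
# round trip and bandwidth/packet loss are negligible (as stated above).
def transfer_time(total_rows, rows_per_fetch, rtt_seconds):
    round_trips = -(-total_rows // rows_per_fetch)  # ceiling division
    return round_trips * rtt_seconds

# 1,000,000 rows over a 270 ms RTT link:
for batch in (1, 100, 1000):
    print(f"{batch:>5} rows/fetch -> ~{transfer_time(1_000_000, batch, 0.270):,.0f} s")
```

Under this model, going from 1 to 100 rows per fetch saves hours, while going
from 100 to 1000 saves minutes, which matches the earlier point that most of
the win is in the first step up from 1.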

The thing is, COPY can't do the job.  COPY is, in general, only any good
for the initial subscription.  After that, you've got a mix of traffic:
INSERTs, UPDATEs, and DELETEs.

In principle, if you had a whole bunch of INSERT statements
referencing a single table, that could be turned into a COPY.  In
principle.

Mind you, if your application does flurries of updates to various
tables, those INSERTs won't all be in sequence, so you certainly won't
get a COPY out of it.
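The coalescing idea above can be sketched as follows: collapse each maximal
run of INSERTs against a single table into one COPY batch, and notice how an
interleaved statement on another table splits the run. (The event-tuple shape
and names here are hypothetical, purely to illustrate the point, not Slony's
actual log format.)

```python
from itertools import groupby

# Hypothetical replication log: (operation, table, row) in commit order.
log = [
    ("INSERT", "orders", (1, "a")),
    ("INSERT", "orders", (2, "b")),
    ("UPDATE", "customers", (7, "x")),  # interleaved update breaks the run
    ("INSERT", "orders", (3, "c")),
]

def coalesce(events):
    """Turn each maximal run of >1 INSERTs on one table into a COPY batch;
    everything else passes through unchanged. Commit order is preserved,
    so the UPDATE above splits what could otherwise be one COPY of 3 rows."""
    out = []
    for (op, table), run in groupby(events, key=lambda e: (e[0], e[1])):
        run = list(run)
        if op == "INSERT" and len(run) > 1:
            out.append(("COPY", table, [row for _, _, row in run]))
        else:
            out.extend(run)
    return out

for stmt in coalesce(log):
    print(stmt)
```

Only the first two INSERTs coalesce; the trailing one stays a plain INSERT
because reordering it past the UPDATE could change the result.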

>> The "better" answer would involve trying to do "peephole optimization"
>> on the query stream so as to group together related updates. 
> yes, all high-end replicators for Oracle are using it.

That seems like an assumption, to me.  I don't expect that Oracle
replicators are similar enough for the analogy to hold terribly well.
-- 
output = ("cbbrowne" "@" "linuxdatabases.info")
http://linuxfinances.info/info/spiritual.html
Microsoft Outlook: Deploying Viruses Has Never Been This Easy!


More information about the Slony1-hackers mailing list