dinesh kumar dineshkumar02 at gmail.com
Fri Jun 8 14:30:47 PDT 2012
Hi Steve,

Thank you for the e-mail.

On Sat, Jun 9, 2012 at 1:39 AM, Steve Singer <ssinger at ca.afilias.info> wrote:

> On 12-06-08 02:50 PM, dinesh kumar wrote:
>
>> Thank you so much, Steve.
>>
>> We are using Slony 2.1.x, and we are still at the initial stage: Slony is
>> still copying the data from the source host to the destination host.
>>
>> >> Has your initial copy_set finished?
>>
>> This is exactly where we are facing the problem: Slony is struggling to
>> copy the data. It went well up to 140 GB, but from 141 GB onwards the copy
>> slowed down, and now it is extremely slow.
>>
>> Let me know if you need any details of the configuration we have used.
>>
>>
> Are you doing this copy over a WAN?
>

We are doing the copy over a LAN. We got a very good transfer rate up to a
certain point, but after some time (I'm not sure of the exact time) the copy
slowed down.


> How long is this taking? How long does it normally take you to load a
> pg_dump of your 3TB database on this hardware?
>
Currently it's taking one hour to copy each 1 GB. At that rate the full 3 TB
would take roughly 3,000 hours, i.e. more than four months. We haven't tested
loading a pg_dump of the 3 TB database on this hardware because of storage
constraints.



> Can you tell what the bottleneck is? Are you IO bound on the slave? Are
> you CPU bound on the slave?
>
We don't see any performance issues: I/O is really good on production (the
primary cluster, which holds the 3 TB) and the CPU load is normal.

> There isn't a lot of slony configuration that controls the initial sync,
> which is done by COPY.  There are a bunch of parameters that control the
> behaviour when processing normal SYNC events after your initial
> subscription, but I don't think those would affect the initial copy.
>

We think the problem is in the database itself. We have around five to eight
tables of roughly 400 GB each, and they carry a lot of bloat. We suspect
Slony is spending its time skipping over the dead rows to reach the live
ones, and that the live rows are scattered across many of the tables' data
files. I think this might be the root cause.
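
For reference, this is roughly how we are checking the dead-row counts on the
primary; it is just a sketch using the standard pg_stat_user_tables view, and
the LIMIT is arbitrary:

-- List the tables with the most dead rows, plus their on-disk size.
SELECT schemaname, relname,
       n_live_tup, n_dead_tup,
       pg_size_pretty(pg_total_relation_size(relid)) AS total_size
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;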

We have just stopped the existing Slony setup; we will vacuum the database
first and then start the subscription again.
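
Our rough plan for the vacuum is below; the table name is just a placeholder,
and we are aware that a plain VACUUM only marks dead rows as reusable, while
VACUUM FULL rewrites the table and needs an exclusive lock:

-- Reclaim dead rows and refresh statistics (does not shrink the files).
VACUUM VERBOSE ANALYZE public.big_table_1;

-- If we can afford the exclusive lock, rewrite the table to remove the bloat.
VACUUM FULL VERBOSE public.big_table_1;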

Could you tell us which Slony event configuration parameters we should tune?
We will then try them on our side with a sample setup.
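
For context, these are the SYNC-related slon options we believe exist (names
taken from the slon documentation as we understand it) and would start
experimenting with on the sample setup; the values below are placeholders in
a slon config file, not recommendations:

# Placeholder slon.conf sketch for the sample setup.
cluster_name='mycluster'                              # placeholder
conn_info='host=slavehost dbname=proddb user=slony'   # placeholder
sync_interval=2000        # ms; how often the origin's slon checks whether to generate a SYNC
sync_group_maxsize=20     # max number of SYNC events grouped into one transaction
desired_sync_time=60000   # ms; target duration slon aims for when grouping SYNCs
vac_frequency=3           # cleanup cycles between vacuums of Slony's internal tables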

If we configure the replication sets so that set 1 has 10 tables of about
400 GB and set 2 has another 10 tables of about 400 GB, is that a good
approach? Which option is better?
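
Something like the slonik sketch below is what we have in mind; the cluster
name, conninfo strings, and table names are all placeholders:

# Placeholder slonik script: two sets of big tables, subscribed one after the other.
cluster name = mycluster;
node 1 admin conninfo = 'dbname=proddb host=primaryhost user=slony';
node 2 admin conninfo = 'dbname=proddb host=slavehost user=slony';

create set (id = 1, origin = 1, comment = 'big tables, batch 1');
set add table (set id = 1, origin = 1, id = 1,
               fully qualified name = 'public.big_table_1');
# ... "set add table" repeated for the other nine tables in batch 1 ...

create set (id = 2, origin = 1, comment = 'big tables, batch 2');
set add table (set id = 2, origin = 1, id = 11,
               fully qualified name = 'public.big_table_11');
# ... repeated for the other nine tables in batch 2 ...

subscribe set (id = 1, provider = 1, receiver = 2, forward = no);
# subscribe set 2 once the copy for set 1 has completed
subscribe set (id = 2, provider = 1, receiver = 2, forward = no);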

Does Slony finish copying set 1 before it moves on to set 2, or does it
switch back and forth between the sets?

Kindly advise us.

Thank you,
Dinesh

>
>> Thanks again for your response.
>>
>> Best Regards,
>> Dinesh
>>
>>
>>
>> On Sat, Jun 9, 2012 at 12:12 AM, Steve Singer <ssinger at ca.afilias.info> wrote:
>>
>>    On 12-06-08 02:26 PM, dinesh kumar wrote:
>>
>>        Hi All,
>>
>>        Can I get any Slony recommendations for replicating 3 TB of data
>>        with Slony? I mean, slon configuration parameters.
>>
>>        As of now, it has copied 145 GB of data from the primary to the
>>        slave, and from here onwards it seems to be running very slowly.
>>        There are no errors in the slony logs or pg_logs, and the slon
>>        process is running with its default configuration parameters.
>>
>>
>>    You don't tell us what version of slony you're running. I would
>>    recommend you run at least 2.1.x.
>>
>>    1.2.x and 2.0.x have a performance bug that might mean that slony
>>    can fall so far behind on the initial sync that it might never
>>    catch up.
>>
>>
>>
>>
>>        Your help is highly appreciated.
>>
>>        Thank You,
>>        Dinesh
>>
>>
>>
>