David Parker dparker
Fri Dec 3 22:14:46 PST 2004
After successfully putting it off for a while, I'm finally having to confront the need to replicate a database that has large objects in it. In one of our schemas, we support application objects that are essentially software agents, which get represented in the database as some metadata plus an oid field that stores binary content, e.g., a shared library .so file.

I know there's no support in slony for replicating large objects, primarily, as I understand it, because it's not possible to create triggers on the pg_largeobject catalog table (there may well be other issues).

First of all, has anybody been thinking about this? Has any experimenting been done?

Given a table like:

create table agent
(
   id  int,
   name varchar(64),
   description varchar(255),
   ...
   data oid
);

I was thinking something like this might work:

1) remote_worker.c's sync_event, for a given table insert/update, detects the
presence of an oid field. This would require inspecting more schema metadata
about the tables in the set, of course. I'm already on shaky ground here, because
I don't fully understand yet how this sync_event works.

2) for the given oid field, the worker reads that lob from the provider db

3) writes the lob into the receiver db

4) swizzles the oid in the replicated data from the provider's local oid to the
receiver's local oid.

This results in data that is not completely identical because the lob oids are different, which
might well be a problem. I also haven't thought about the initial copy_set, though I guess
it would do basically the same thing.

I'm sure there are a hundred holes in this, so I'd appreciate comments. Or if it's completely impossible then somebody can put me out of my misery immediately....

Thanks.

- DAP
----------------------------------------------------------------------------------
David Parker    Tazz Networks    (401) 709-5130