4.5. Log Shipping - Slony-I with Files

Slony-I has the ability to serialize updates into log files that can be kept in a spool directory.

The spool files can then be transferred via whatever means is desired to a "slave system," whether that be via FTP, rsync, or perhaps even by pushing them onto a 1GB "USB key" to be sent to the destination by clipping it to the ankle of some sort of "avian transport" system.

There are plenty of neat things you can do with a data stream in this form, such as feeding a replica that has no direct network connection to the rest of the cluster.

4.5.1. Setting up Log Shipping

Setting up log shipping requires that you already have a replication cluster set up with at least two nodes, and that the tables and sequences you want to ship data for are part of replication sets that the subscriber is subscribed to.
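For example, log shipping is enabled by starting the subscriber's slon with an archive directory. A minimal sketch, in which the cluster name, connection string, and directory are placeholders:

    # start the subscriber's slon in log shipping mode; the -a option
    # names the directory where SYNC archive files will be written
    slon -a /var/spool/slony/archive mycluster \
         "dbname=subscriberdb host=subscriber-host user=slony"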

Slon will generate a file in the archive directory for each SYNC event that it processes. These files contain COPY statements that insert data into the sl_log_archive table. A trigger installed by slony1_dump.sh will intercept these inserts and make the appropriate changes to your replicated tables.
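The exact contents of these files are generated by slon and vary between versions; the following is only an abbreviated sketch of their shape, with the cluster name "_mycluster" as a placeholder:

    -- illustrative sketch of a generated archive file, not verbatim slon output
    COPY "_mycluster".sl_log_archive FROM STDIN;
    ...tab-separated log rows...
    \.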

Slon will generate log shipping files that contain data for all tables in every replication set that the node is subscribed to. There is no support in slon for limiting the log shipping files to only some of those replication sets. If slon is unable to create the log shipping files, for example because you have run out of disk space, then slon will stop processing new SYNC events and replication for that subscriber will fall behind.

4.5.2. Applying Log Files

The .SQL files that slon places in the archive directory contain SQL commands that should be executed on the log shipping target. These files need to be applied in the proper order; the file name of the next SQL file to apply is based on information contained in the sl_archive_tracking table. slony_logshipping is a daemon that will monitor an archive directory and apply the updates in the proper order.
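If the daemon is not available, the same files can in principle be applied by hand. A minimal sketch, assuming that lexical file name order matches generation order and that the target database is named targetdb:

    # apply pending archive files in name order; stop at the first
    # failure so that no file is applied out of order
    for f in /var/spool/slony/archive/*.sql; do
        psql -v ON_ERROR_STOP=1 -d targetdb -f "$f" || break
    done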

Each .SQL file contains an SQL COPY command that copies the data into the sl_log_archive table, where a trigger instead performs the proper action on the target database. The slony1_dump.sh script will create the sl_log_archive table and set up the trigger.
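A possible way to initialize the target, assuming PostgreSQL environment variables that point at the subscriber node and a target database named targetdb (the cluster name "mycluster" and the exact invocation are assumptions, not verbatim from the script):

    # dump the subscriber's current state, including the sl_log_archive
    # table and its trigger, then load it into the log shipping target
    PGDATABASE=subscriberdb PGHOST=subscriber-host \
        slony1_dump.sh mycluster > slony1_dump.sql
    psql -d targetdb -f slony1_dump.sql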

4.5.3. Converting SQL commands from COPY to INSERT/UPDATE/DELETE

Prior to Slony-I 2.2, the SQL files generated for log shipping contained INSERT/UPDATE/DELETE statements. As of Slony-I 2.2, the log shipping files contain COPY statements, which should result in better performance. If you need the old style of SQL files with INSERT/UPDATE/DELETE statements, the script tools/logshipping_toinsert.pl can be used to convert the COPY-style log shipping files. INSERT/UPDATE/DELETE statements should be easier to apply against databases other than PostgreSQL, or in environments where you can't create the sl_log_archive table.
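For example, something along these lines, assuming the script takes an archive file as an argument and writes the converted statements to standard output (the file names here are placeholders):

    # convert one COPY-style archive file into INSERT/UPDATE/DELETE form;
    # the exact invocation may differ between Slony-I versions
    perl tools/logshipping_toinsert.pl archive_file.sql > converted.sql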

4.5.4. Cluster Reconfiguration Events

Most cluster reconfiguration events, such as adding or dropping nodes or paths, do not affect the nodes receiving the log files and are ignored by log shipping. However, the following cluster configuration events are supported.