[Zope-CMF] backup strategies

Toby Dickenson tdickenson@geminidataloggers.com
Fri, 7 Feb 2003 20:35:38 +0000

On Friday 07 February 2003 6:59 pm, sean.upton@uniontrib.com wrote:

> Heartbeat also manages resources with init-like scripts; when the takeover
> starts, it would start up a ZSS process on a replicated DirectoryStorage
> after taking over the IP.  For safety, you would likely want to kill the
> primary server to keep it from replicating to the backup after the
> takeover. You could do this by using a power device (STONITH: Shoot The
> Other Node In The Head).

STONITH shouldn't be necessary for safety if you are using DirectoryStorage's 
new replication tool. File locking prevents a replication from taking place 
while the replica storage is live, and revision checking in the replication 
tool will permanently block replication if a write transaction happens inside 
the live replica, even once the file locks have gone.
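The two safeguards above can be sketched roughly as follows. This is an illustrative sketch only, not DirectoryStorage's actual code; the lock and revision file names (`storage.lock`, `last_revision`) and the `replication_allowed` function are hypothetical.

```python
import fcntl
import os
import tempfile

LOCK_NAME = "storage.lock"       # hypothetical: held while the replica is live
REVISION_NAME = "last_revision"  # hypothetical: id of the last write transaction

def replication_allowed(storage_dir, source_revision):
    """Refuse to replicate if the replica is live (its lock is held), or if
    it has ever seen a write transaction of its own (its revision is ahead
    of the source's)."""
    lock_file = open(os.path.join(storage_dir, LOCK_NAME), "a")
    try:
        # A live storage holds an exclusive lock; fail fast if we cannot get it.
        fcntl.flock(lock_file, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except OSError:
        return False
    try:
        rev_path = os.path.join(storage_dir, REVISION_NAME)
        if os.path.exists(rev_path):
            with open(rev_path) as f:
                replica_revision = int(f.read().strip())
            # A write inside the live replica bumps its revision past the
            # source's; block replication permanently in that case.
            if replica_revision > source_revision:
                return False
        return True
    finally:
        fcntl.flock(lock_file, fcntl.LOCK_UN)
        lock_file.close()
```

The key property is that the revision check outlives the lock: even after the replica shuts down and releases its lock, a divergent revision still blocks replication.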

Is ClientStorage reliably safe in this context? I suspect it may be possible 
for it to get confused if the replica storage comes up missing the last 
couple of transactions. I have been planning some experiments in this area. 
There is an easy solution to any such problem: ClientStorage needs to trigger 
a full Zope shutdown and restart when it loses its connection to the ZEO 
server.
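That restart-on-disconnect idea could be wired up roughly like this. Everything here is hypothetical: ClientStorage provides no such hook itself, and `make_zeo_watchdog`, `is_connected`, and `restart` are placeholder names for whatever polling and process-restart mechanism a deployment actually uses.

```python
def make_zeo_watchdog(is_connected, restart):
    """Return a poll function to be called periodically. On the first poll
    that finds the ZEO connection gone, it invokes `restart` (which would
    exec a fresh Zope process) exactly once, and reports whether a restart
    has been triggered."""
    state = {"restarted": False}

    def poll():
        if not state["restarted"] and not is_connected():
            state["restarted"] = True
            restart()
        return state["restarted"]

    return poll
```

Restarting the whole process, rather than letting ClientStorage reconnect, guarantees no stale cached state survives a replica that came back up missing transactions.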

> > Is there any way for example to *test* Data.fs when you back it up (to be
> > sure that you are not backing up corrupt data)?
> there's a utility in utilities/ZODBTools/fstest.py that checks
> for errors. Run a cron job that runs this tool and mails you the
> result.
> There is also another utility, fscheck.py that gives more extensive
> reports, and IIRC is new to Zope 2.6.

And for DirectoryStorage, checkds.py
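The cron-job approach suggested above can be wrapped in a few lines. This is a sketch, not part of any of those tools: `run_integrity_check` is a hypothetical helper, and the fstest.py path shown in the comment is whatever your Zope tree actually uses; the mailing step is left to the cron wrapper.

```python
import subprocess
import sys

def run_integrity_check(cmd):
    """Run a storage checker (fstest.py, fscheck.py, or checkds.py) and
    return (passed, combined output). A cron wrapper would mail the output,
    or at least mail it when passed is False."""
    proc = subprocess.run(cmd, stdout=subprocess.PIPE,
                          stderr=subprocess.STDOUT, text=True)
    return proc.returncode == 0, proc.stdout

# Example invocation (path is illustrative):
# ok, report = run_integrity_check(
#     [sys.executable, "utilities/ZODBTools/fstest.py", "var/Data.fs"])
```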

Toby Dickenson