[Checkins] SVN: gocept.zeoraid/trunk/ Updated general documentation.
Christian Theune
ct at gocept.com
Fri May 2 09:48:21 EDT 2008
Log message for revision 86094:
Updated general documentation.
Documented how to make ZEORaid itself reliable.
Changed:
U gocept.zeoraid/trunk/INSTALL.txt
U gocept.zeoraid/trunk/KNOWNBUGS.txt
U gocept.zeoraid/trunk/README.txt
U gocept.zeoraid/trunk/TODO.txt
U gocept.zeoraid/trunk/base.cfg
U gocept.zeoraid/trunk/src/gocept/zeoraid/storage.py
-=-
Modified: gocept.zeoraid/trunk/INSTALL.txt
===================================================================
--- gocept.zeoraid/trunk/INSTALL.txt 2008-05-02 13:37:16 UTC (rev 86093)
+++ gocept.zeoraid/trunk/INSTALL.txt 2008-05-02 13:48:21 UTC (rev 86094)
@@ -1,23 +1,20 @@
-==================
-Installing ZEORaid
-==================
+==========================
+ZEORaid quick installation
+==========================
-Note: These are preliminary instructions for installing the test releases for
-ZEORaid. The actual deployment mechanisms are not yet in place and will be
-documented in this file in the future.
+Note: For real deployments, please consult the deployment documentation in
+the `doc` directory.
-
Quick installation
==================
-This installation procedure will install a ZEORaid server connected to 2 local ZEO
-servers.
+This installation procedure will install two ZEORaid servers connected to two
+local ZEO servers, using the default `zc.buildout` configuration defined in
+the ZEORaid package.
-You can tweak the settings in `parts/*` to
-
1. Check out the ZEORaid code and buildout:
- $ svn co svn://svn.zope.org/repos/main/gocept.zeoraid/tags/1.0a1 gocept.zeoraid
+ $ svn co svn://svn.zope.org/repos/main/gocept.zeoraid/tags/1.0b1 gocept.zeoraid
2. Copy the `buildout.cfg.example` file to `buildout.cfg`:
@@ -30,17 +27,32 @@
4. Start the servers:
- $ bin/server1 start
- $ bin/server2 start
- $ bin/zeoraid start
+ $ bin/zeo1 start
+ $ bin/zeo2 start
+ $ bin/zeoraid1 start
+ $ bin/zeoraid2 start
+You can then connect any ZODB client (e.g. Zope 2.8 or later) to *both*
+ZEORaid servers. An excerpt from zope.conf might look like this:
+
+ <zeoclient>
+ server localhost:8100
+ server localhost:8101
+ storage main
+ </zeoclient>
+
+You can now disable any single one of the four components (zeo1, zeo2,
+zeoraid1, zeoraid2) and your application will stay alive. After shutting down
+zeo1 or zeo2 you have to trigger the corresponding recovery using the
+`zeoraid1-main-manage` or `zeoraid2-main-manage` script before taking down
+the other ZEO server as well.
+
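+For example, to take zeo1 down for maintenance and bring it back into the
+array, a session might look like this (the script name follows the example
+buildout; the backend storage name `zeo1` is an assumption, adjust both to
+your configuration):
+
+ $ bin/zeo1 stop
+ $ bin/zeo1 start
+ $ bin/zeoraid1-main-manage recover zeo1
+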
Run the tests
=============
You might want to run the test suite to make sure that ZEORaid works as
expected on your platform:
- $ bin/test -vv --all
+ $ bin/test
If you see *any* test failures please report them as bugs.
@@ -50,15 +62,15 @@
The bug tracker lives at http://bugs.launchpad.net/gocept.zeoraid
-Please file bugs there and provide tracebacks.
+Please file bugs there and provide tracebacks if possible.
Customizing your configuration
==============================
-The parts directory contains the configuration for ZEORaid and the two ZEO
-servers. Those appear like normal ZEO server configurations and can be tweaked
-by you to experiment with different settings.
+The `parts` directory contains the configuration for the two ZEORaid servers
+and the two ZEO servers. These look like normal ZEO server configurations and
+can be tweaked to experiment with different settings.
Note that those settings are overridden when re-running buildout.
@@ -70,21 +82,16 @@
release.
- Keep backups of your data. ZEORaid has good unit test coverage but hasn't
- seen live action yet, so keep this in mind.
+ seen much real-world use yet, so keep this in mind.
Accessing the management script
===============================
-The management script is still a bit rough. Here's how you call it:
+For each ZEORaid server a management script is generated that you can call:
- $ bin/client src/gocept/zeoraid/manage.py <command> [args]
+ $ bin/zeoraid1-main-manage
-The available commands are:
+The scripts support the `--help` option to find out what they can do:
- details
- status
- disable [storage]
- recover [storage]
-
-The script currently always expects ZEORaid to run on localhost:8100.
+ $ bin/zeoraid1-main-manage --help
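+
+A typical status query might then look like this (`status` is one of the
+commands known from the 1.0a releases; `--help` shows the authoritative
+list):
+
+ $ bin/zeoraid1-main-manage status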
Modified: gocept.zeoraid/trunk/KNOWNBUGS.txt
===================================================================
--- gocept.zeoraid/trunk/KNOWNBUGS.txt 2008-05-02 13:37:16 UTC (rev 86093)
+++ gocept.zeoraid/trunk/KNOWNBUGS.txt 2008-05-02 13:48:21 UTC (rev 86094)
@@ -2,7 +2,16 @@
Known bugs and issues
=====================
+Restrictions when running parallel ZEORaid servers
+==================================================
+Some actions within ZEORaid must not be performed in parallel with other
+actions (for example packing while a recovery is running, or multiple
+simultaneous recoveries).
+
+When running parallel ZEORaid servers this restriction is not enforced
+across the individual servers, so you have to take care manually that such
+actions do not overlap.
+
Compatibility with ZODB `versions`
==================================
@@ -11,4 +20,3 @@
ZEORaid has API-compatibility with those releases of the ZODB that provide and
use versions but will actively prevent any usage of the version features.
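
The prevention is roughly of this shape (a minimal sketch, not the actual
code from `storage.py`; the signature is the versions-era ZODB storage API):

    from ZODB.POSException import Unsupported

    def store(self, oid, serial, data, version, transaction):
        # Reject any attempt to use the versions feature.
        if version:
            raise Unsupported('ZEORaid does not support ZODB versions.')
        # ... otherwise delegate to the backend storages ...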
-
Modified: gocept.zeoraid/trunk/README.txt
===================================================================
--- gocept.zeoraid/trunk/README.txt 2008-05-02 13:37:16 UTC (rev 86093)
+++ gocept.zeoraid/trunk/README.txt 2008-05-02 13:48:21 UTC (rev 86094)
@@ -1,9 +1,9 @@
-================
-ZEO RAID storage
-================
+===============
+ZEORaid storage
+===============
-The ZEO RAID storage is a storage intended to make ZEO installations more
-reliable by applying techniques as used in harddisk RAID solutions.
+The ZEORaid storage is a storage intended to make ZEO installations more
+reliable by applying techniques as used in hard disk RAID solutions.
The implementation is intended to make use of as much existing infrastructure
as possible and provide a seamless and simple experience when setting up a
@@ -14,68 +14,37 @@
The RAID storage
================
-The ZEO RAID storage is a proxy storage that works like a RAID controller by
+The ZEORaid storage is a proxy storage that works like a RAID controller by
creating a redundant array of ZEO servers. The redundancy is similar to RAID
-level 1 except that each ZEO server keeps a complete copy of the database.
+level 1.
-Therefore, up to N-1 out of N ZEO servers can fail without interrupting.
+Therefore, up to N-1 out of N ZEO servers can fail without interrupting
+the service.
It is intended that any storage can be used as a backend storage for a RAID
storage, although typically a ClientStorage will be the direct backend.
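
The core proxying idea, reduced to a minimal sketch (hypothetical names; the
real implementation in `storage.py` additionally handles degradation, locking
and recovery):

    class RAIDStorage(object):
        """RAID-1-style proxy: write to all backends, read from one."""

        def __init__(self, backends):
            # Each backend keeps a complete copy of the database.
            self.backends = backends

        def load(self, oid, version=''):
            # Reads can be served by any single optimal backend.
            return self.backends[0].load(oid, version)

        def store(self, oid, serial, data, version, transaction):
            # Writes have to reach every backend to keep all copies
            # identical.
            for backend in self.backends:
                backend.store(oid, serial, data, version, transaction)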
-The ZEO RAID server
-===================
+The ZEORaid server
+==================
The RAID storage could (in theory) be used directly from a Zope server.
However, to achieve real reliability, the RAID has to run as a storage for
multiple Zope servers, like a normal ZEO setup does.
For this, we leverage the normal ZEO server implementation and simply use a
-RAID storage instead of a FileStorage. The system architecture looks like
-this::
+RAID storage instead of a FileStorage. To achieve full reliability, you can
+install multiple ZEORaid servers with identical configuration::
- [ ZEO 1 ] [ ZEO 2 ] ... [ ZEO N ]
- \ | /
- \ | /
- \ | /
- \ | /
- \ | /
- \ | /
- [ ZEO RAID ]
- / | \
- / | \
- / | \
- / | \
- / | \
- / | \
- [ Zope 1 ] [ Zope 2 ] ... [ Zope N]
+  [ Zope 1 ]                 [ ZEORaid 1 ]                 [ ZEO 1 ]
+  [ Zope 2 ] talk to all --> [ ZEORaid 2 ] talk to all --> [ ZEO 2 ]
+     ...                         ...                          ...
+  [ Zope N ]                 [ ZEORaid N ]                 [ ZEO N ]
+
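+Each ZEORaid server is an ordinary ZEO server that is configured with a RAID
+storage instead of a FileStorage. A sketch of such a `zeo.conf` (the
+`raidstorage` section name and the addresses are assumptions for
+illustration; see the deployment documentation in the `doc` directory for
+the authoritative syntax)::
+
+  %import gocept.zeoraid
+
+  <zeo>
+    address 8100
+  </zeo>
+
+  <raidstorage main>
+    <zeoclient zeo1>
+      server localhost:9100
+      storage main
+    </zeoclient>
+    <zeoclient zeo2>
+      server localhost:9101
+      storage main
+    </zeoclient>
+  </raidstorage>
+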
ZEO RAID servers maintain a list of all the optimal, degraded and recovering
-storages and provide an extended ZEO rpc API to allow querying the RAID status
+storages and provide an extended Storage API to allow querying the RAID status
and disabling and recovering storages at runtime.
-Making the RAID server reliable
-===============================
-
-The RAID server itself remains the last single point of failure in the system.
-This problem is solved as the RAID server does not maintain any persistent
-state (except the configuration data: it's listening ip and port and the list
-of storages).
-
-The RAID server can be made reliable by providing a hot-spare server using
-existing HA tools (taking over the IP when a host goes down) and the existing
-ZEO ClientStorage behaviour.
-
-The RAID server is capable of deriving the status of all storages after
-startup so the hot-spare server does not have to get updated information
-before switching on. One drawback here: if all storages become corrupt at the
-same time, the RAID server will happily pick up the storage with the newest
-last transaction and use it as the optimal storage.
-
-To avoid this, we'd have to create a well known OID (os something similar) to
-annotate a storage with its status. This would mean that storages would have
-to be initialized as a RAID backend though and can't be easily migrated.
-
Development
===========
Modified: gocept.zeoraid/trunk/TODO.txt
===================================================================
--- gocept.zeoraid/trunk/TODO.txt 2008-05-02 13:37:16 UTC (rev 86093)
+++ gocept.zeoraid/trunk/TODO.txt 2008-05-02 13:48:21 UTC (rev 86094)
@@ -34,9 +34,6 @@
- Allow online reconfiguration of the RAID storage (e.g. for adding or
removing new backend servers).
-- Document how to make the RAID server itself reliable (leveraging the
- statelessness of ZEORaid and a hot standby).
-
- Verify parallel/backend invalidations + optimize invalidations that they get
passed on only once.
Modified: gocept.zeoraid/trunk/base.cfg
===================================================================
--- gocept.zeoraid/trunk/base.cfg 2008-05-02 13:37:16 UTC (rev 86093)
+++ gocept.zeoraid/trunk/base.cfg 2008-05-02 13:48:21 UTC (rev 86094)
@@ -6,7 +6,7 @@
[test]
recipe = zc.recipe.testrunner
eggs = gocept.zeoraid
-defaults = ['-vv']
+defaults = ['-vv', '--all']
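+# With these defaults a plain `bin/test` runs verbosely and includes tests
+# at all levels, matching the former `bin/test -vv --all` invocation.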
# Some demo parts
[storage1]
Modified: gocept.zeoraid/trunk/src/gocept/zeoraid/storage.py
===================================================================
--- gocept.zeoraid/trunk/src/gocept/zeoraid/storage.py 2008-05-02 13:37:16 UTC (rev 86093)
+++ gocept.zeoraid/trunk/src/gocept/zeoraid/storage.py 2008-05-02 13:48:21 UTC (rev 86094)
@@ -565,7 +565,7 @@
def _open_storage(self, name):
assert name not in self.storages, "Storage %s already opened" % name
storage = self.openers[name].open()
- assert hasattr(storage, 'supportsUndo') and storage.supportsUndo()
+ #assert hasattr(storage, 'supportsUndo') and storage.supportsUndo()
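+ # Undo support is no longer asserted for backend storages.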
self.storages[name] = storage
def _degrade_storage(self, name, fail=True):