[Checkins] SVN: gocept.zeoraid/trunk/ Rename ROADMAP to TODO according to sponsoring agreement.

Christian Theune ct at gocept.com
Wed Apr 9 03:57:14 EDT 2008


Log message for revision 85179:
  Rename ROADMAP to TODO according to sponsoring agreement.
  

Changed:
  D   gocept.zeoraid/trunk/ROADMAP.txt
  A   gocept.zeoraid/trunk/TODO.txt

-=-
Deleted: gocept.zeoraid/trunk/ROADMAP.txt
===================================================================
--- gocept.zeoraid/trunk/ROADMAP.txt	2008-04-09 07:56:14 UTC (rev 85178)
+++ gocept.zeoraid/trunk/ROADMAP.txt	2008-04-09 07:57:14 UTC (rev 85179)
@@ -1,101 +0,0 @@
-====
-TODO
-====
-
-1.0
-===
-
-Stabilization
--------------
-
- - Check edge cases for locking on all methods so that degrading a storage
-   works under all circumstances.
-
- - The second pass of the recovery isn't thread safe. Ensure that only one
-   recovery can run at a time. (This is probably a good idea anyway because of
-   IO load.)
-
- - Make sure that opening a ZEO client doesn't block forever. (E.g. by using a
-   custom opener that sets 'wait' to True and the timeout to 10 seconds.)
-
-   Workaround: do this by using "wait off" or setting the timeout in
-   the RAID server config.
-
- - Run some manual tests for weird situations, high load, ...
-
- - Compatibility with which ZODB clients and which ZEO servers? Best would be to
-   support Zope 2.9 and Zope 3.4.
-
- - Re-check API usage and definition for ZODB 3.8 as our base.
-
- - Ensure that blob-caching parameters are equal for all client storages.
-
- - Provide a RAID-aware blob storage implementation that ignores requests on a
-   shared file system that were handled already and are consistent.
-
- - Disallow packing while a storage is recovering.
-
- - Disallow recovering multiple storages in parallel.
-
- - manager client: set up correctly so that invalidations do not trigger errors
-
- - manager client: provide the current recovery message for a storage that is
-   recovering
-
-- Allow zeoraid to start up `quickly` even when a backend ZEO server is not
-  available (thread-parallelize the opening of storages?)
-
-- The exception branch for ClientDisconnected (StorageError) is not tested
-  during unit/functional tests (__apply_storage)
-
-Feature-completeness
---------------------
-
- - Rebuild/recover with blobs!
-
- - Create a limit for the transaction rate when recovering so that the
-   recovery doesn't clog up the live servers.
-
- - Support Undo
-
-Cleanup
--------
-
- - Offer the `read only` option through the ZConfig schema.
-
- - Remove print statements and provide logging.
-
- - XXX pack may never be run while a storage is recovering.
-
- - XXX Blobs: Hard links created for the multiple backend storages need to be tracked
-   and cleaned up.
-
-Future
-======
-
-- Support packing?
-
-- Windows support
-
-- Make writing to multiple storages asynchronous or at least truly parallel.
-
-- Make the read requests come from different backends to optimize caching and
-  distribute IO load.
-
-  Beware of the hard-coded priority queue during packing.
-
-- Allow adding and removing new backend servers while running.
-
-- Incremental backup.
-
-- Asynchronous writes to off-site servers.
-
-- Better performance for reading (distribute read load)
-
-- Verify parallel/backend invalidations and optimize invalidations so
-  that they get passed on only once.
-
-- FileIterator may be (very) slow when re-initializing at later points in a
-
-- Guarantee a recovery rate larger than the rate of new commits using a credit
-  point system.

Copied: gocept.zeoraid/trunk/TODO.txt (from rev 85178, gocept.zeoraid/trunk/ROADMAP.txt)
===================================================================
--- gocept.zeoraid/trunk/TODO.txt	                        (rev 0)
+++ gocept.zeoraid/trunk/TODO.txt	2008-04-09 07:57:14 UTC (rev 85179)
@@ -0,0 +1,101 @@
+====
+TODO
+====
+
+1.0
+===
+
+Stabilization
+-------------
+
+ - Check edge cases for locking on all methods so that degrading a storage
+   works under all circumstances.
+
+ - The second pass of the recovery isn't thread safe. Ensure that only one
+   recovery can run at a time. (This is probably a good idea anyway because of
+   IO load.)
+
+ - Make sure that opening a ZEO client doesn't block forever. (E.g. by using a
+   custom opener that sets 'wait' to True and the timeout to 10 seconds.)
+
+   Workaround: do this by using "wait off" or setting the timeout in
+   the RAID server config.
+
+ - Run some manual tests for weird situations, high load, ...
+
+ - Compatibility with which ZODB clients and which ZEO servers? Best would be to
+   support Zope 2.9 and Zope 3.4.
+
+ - Re-check API usage and definition for ZODB 3.8 as our base.
+
+ - Ensure that blob-caching parameters are equal for all client storages.
+
+ - Provide a RAID-aware blob storage implementation that ignores requests on a
+   shared file system that were handled already and are consistent.
+
+ - Disallow packing while a storage is recovering.
+
+ - Disallow recovering multiple storages in parallel.
+
+ - manager client: set up correctly so that invalidations do not trigger errors
+
+ - manager client: provide the current recovery message for a storage that is
+   recovering
+
+- Allow zeoraid to start up `quickly` even when a backend ZEO server is not
+  available (thread-parallelize the opening of storages?)
+
+- The exception branch for ClientDisconnected (StorageError) is not tested
+  during unit/functional tests (__apply_storage)
+
+Feature-completeness
+--------------------
+
+ - Rebuild/recover with blobs!
+
+ - Create a limit for the transaction rate when recovering so that the
+   recovery doesn't clog up the live servers.
+
+ - Support Undo
+
+Cleanup
+-------
+
+ - Offer the `read only` option through the ZConfig schema.
+
+ - Remove print statements and provide logging.
+
+ - XXX pack may never be run while a storage is recovering.
+
+ - XXX Blobs: Hard links created for the multiple backend storages need to be tracked
+   and cleaned up.
+
+Future
+======
+
+- Support packing?
+
+- Windows support
+
+- Make writing to multiple storages asynchronous or at least truly parallel.
+
+- Make the read requests come from different backends to optimize caching and
+  distribute IO load.
+
+  Beware of the hard-coded priority queue during packing.
+
+- Allow adding and removing new backend servers while running.
+
+- Incremental backup.
+
+- Asynchronous writes to off-site servers.
+
+- Better performance for reading (distribute read load)
+
+- Verify parallel/backend invalidations and optimize invalidations so
+  that they get passed on only once.
+
+- FileIterator may be (very) slow when re-initializing at later points in a
+
+- Guarantee a recovery rate larger than the rate of new commits using a credit
+  point system.

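For reference, the item "Make sure that opening a ZEO client doesn't block forever" boils down to bounding the connect wait for each backend. Below is a minimal sketch, assuming the stock ZEO.ClientStorage constructor arguments from ZODB 3.8; the helper name and the host/port are placeholders, and the zeoraid-specific opener mentioned in the item does not exist yet.

    # Sketch only: open a backend ZEO client with a bounded startup wait so
    # that an unreachable backend cannot block the RAID server forever.
    from ZEO.ClientStorage import ClientStorage

    def open_backend(address, timeout=10):
        # wait=True blocks until the connection is established; wait_timeout
        # bounds that wait.  wait=False ("wait off" in the client config)
        # returns immediately and connects in the background instead.
        return ClientStorage(address, wait=True, wait_timeout=timeout)

    storage = open_backend(('backend1.example.com', 8100))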

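Similarly, the item "Create a limit for the transaction rate when recovering" could start out as a simple throttle that the recovery loop calls once per copied transaction. This is a hypothetical sketch only; the class and the usage below are invented and nothing like them exists in gocept.zeoraid yet.

    import time

    class RecoveryThrottle:
        """Keep the average recovery rate at or below a configured limit."""

        def __init__(self, max_txn_per_second):
            self.min_interval = 1.0 / max_txn_per_second
            self.last = 0.0

        def wait(self):
            # Sleep just long enough so that consecutive transactions are
            # at least min_interval seconds apart.
            delay = self.min_interval - (time.time() - self.last)
            if delay > 0:
                time.sleep(delay)
            self.last = time.time()

    # Hypothetical use inside the recovery loop:
    #     throttle = RecoveryThrottle(max_txn_per_second=50)
    #     for txn in backend_iterator:
    #         throttle.wait()
    #         copy_transaction(txn)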
