[Checkins] SVN: gocept.zeoraid/trunk/TODO.txt Cleanup and reformatting.
Christian Theune
ct at gocept.com
Wed Apr 9 04:06:07 EDT 2008
Log message for revision 85181:
Cleanup and reformatting.
Changed:
U gocept.zeoraid/trunk/TODO.txt
-=-
Modified: gocept.zeoraid/trunk/TODO.txt
===================================================================
--- gocept.zeoraid/trunk/TODO.txt 2008-04-09 08:04:54 UTC (rev 85180)
+++ gocept.zeoraid/trunk/TODO.txt 2008-04-09 08:06:06 UTC (rev 85181)
@@ -8,94 +8,81 @@
Stabilization
-------------
- - Check edge cases for locking on all methods so that degrading a storage
- works under all circumstances.
+- Check edge cases for locking on all methods so that degrading a storage
+ works under all circumstances.
- - The second pass of the recovery isn't thread safe. Ensure that only one
- recovery can run at a time. (This is probably a good idea anyway because of
- IO load.)
+- The second pass of the recovery isn't thread safe. Ensure that only one
+ recovery can run at a time. (This is probably a good idea anyway because of
+ IO load.)
- - Make sure that opening a ZEO client doesn't block forever. (E.g. by using a
- custom opener that sets 'wait' to True and timeout to 10 seconds )
+- Run some manual tests for weird situations, high load, ...
- Workaround: do this by using "wait off" or setting the timeout in
- the RAID server config.
+- Compatibility with which ZODB clients and which ZEO servers? Best would be
+ to support Zope 2.9 and Zope 3.4.
- - Run some manual tests for weird situations, high load, ...
+- Re-check API usage and definition for ZODB 3.8 as our base.
- - Compatibility to which ZODB clients and which ZEO servers? Best would be to
- support Zope 2.9 and Zope 3.4.
+- Ensure that blob-caching parameters are equal for all ClientStorages.
- - Re-check API usage and definition for ZODB 3.8 as our base.
+- Provide a RAID-aware blob storage implementation that ignores requests on a
+ shared file system that were handled already and are consistent.
- - Ensure that blob-caching parameters are equal for all clientstorages
+- Disallow packing while a storage is recovering.
- - Provide RAID-aware blob storage implementation that ignores requests on a
- shared file system that were handled already and are consistent.
+- Disallow recovering multiple storages in parallel.
- - Disallow packing while a storage is recovering.
+- Manager client: set up correctly so that invalidations do not trigger
+ errors.
- - Disallow recovering multiple storages in parallel.
-
- - manager client: setup correctly so that invalidations do not trigger errors
-
- - manager client: provide current recovery message for a storage that is
+- Manager client: provide current recovery message for a storage that is
recovering
-- allow zeoraid to startup `quickly` even when a backend zeo server is not
+- Allow ZEORaid to start up `quickly` even when a backend ZEO server is not
available (thread-parallelize the opening of storages?)
- The exception branch for ClientDisconnected (StorageError) is not tested
during unit/functional tests (__apply_storage)
+- Pack may never be run while a storage is recovering.
+
+- Blobs: Hard links created for the multiple backend storages need to be
+ tracked and cleaned up.
+
+- Windows support
+
Feature-completeness
--------------------
- - Rebuild/recover with blobs!
+- Recovery with blob support.
- - Create a limit for the transaction rate when recovering so that the
- recovery doesn't clog up the live servers.
+- Support undo.
- - Support Undo
-
Cleanup
-------
- - Offer the `read only` option through ZConfig schema
+- Offer the `read only` option on RAIDStorage through the ZConfig schema.
- - Remove print statements and provide logging.
+- Remove print statements and provide logging.
- - XXX pack may never be run while a storage is recovering.
-
- - XXX Blobs: Hard links created for the multiple backend storages need to be tracked
- and cleaned up.
-
Future
======
-- Support packing?
+- Allow asynchronous backend storages (e.g. for off-site replication).
-- Windows support
+- Make write access to backend storages parallel (for better write
+ performance).
-- make writing to multiple storages asynchronous or at least truly parallel
+- Balance read requests over varying backends to optimize caching and
+ distribute IO load. (Beware of the hard-coded priority queue that we use for
+ packing.)
-- Make the read requests come from different backends to optimize caching and
- distribute IO load.
+- Allow online reconfiguration of the RAID storage (e.g. adding or removing
+ backend servers).
- beware of hard-coded priority queue during packing
+- Document how to make the RAID server itself reliable (leveraging the
+ statelessness of ZEORaid and a hot standby).
-- Allow adding and removing new backend servers while running.
+- Verify parallel/backend invalidations and optimize invalidations so that
+ they get passed on only once.
-- Incremental backup.
-
-- Asynchronuous write to off-site servers
-
-- Better performance for reading (distribute read load)
-
-- Verify parallel/backend invalidations + optimize invalidations
- that they get passed on only once.
-
-- FileIterator may be (very) slow when re-initializing at later points in a
-
-- Guarantee a recovery rate larger than the rate of new commits using a credit
- point system.
+- Guarantee a recovery rate larger than the rate of new commits (one idea is
+ to use a "credit point system").
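
As a footnote to the removed item about a custom ZEO opener: bounding the
connect wait would look roughly like the sketch below. This is only an
illustration, assuming ZEO's ClientStorage accepts the `wait` and
`wait_timeout` keyword arguments; the address and the 10 second timeout are
example values.

    # Illustrative sketch: open a backend ZEO client without blocking
    # forever.  Assumes ClientStorage's `wait`/`wait_timeout` keywords;
    # the address and timeout below are example values.
    from ZEO.ClientStorage import ClientStorage

    def open_backend(addr=('localhost', 8100), timeout=10):
        # wait=True blocks until connected, wait_timeout caps that wait,
        # so an unreachable backend fails fast instead of hanging startup.
        return ClientStorage(addr, wait=True, wait_timeout=timeout)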
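Likewise, "ensure that only one recovery can run at a time" and "disallow
recovering multiple storages in parallel" amount to guarding the recovery
entry point with a single process-wide lock. A minimal sketch with
hypothetical names (not ZEORaid's actual API):

    import threading

    # Hypothetical sketch: serialize recovery so that at most one storage
    # recovers at a time.  `_recovery_lock` and `recover_storage` are
    # illustrative names only.
    _recovery_lock = threading.Lock()

    def recover_storage(name):
        if not _recovery_lock.acquire(False):  # non-blocking acquire
            raise RuntimeError('another recovery is already running')
        try:
            pass  # ... run both recovery passes for `name` here ...
        finally:
            _recovery_lock.release()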