[Zodb-checkins] SVN: ZODB/branches/jim-zeo-registerdb/ merged trunk changes since branch

Jim Fulton jim at zope.com
Wed May 9 16:45:52 EDT 2007


Log message for revision 75658:
  merged trunk changes since branch

Changed:
  U   ZODB/branches/jim-zeo-registerdb/HISTORY.txt
  U   ZODB/branches/jim-zeo-registerdb/NEWS.txt
  U   ZODB/branches/jim-zeo-registerdb/src/BTrees/Interfaces.py
  U   ZODB/branches/jim-zeo-registerdb/src/BTrees/__init__.py
  U   ZODB/branches/jim-zeo-registerdb/src/BTrees/tests/testBTrees.py
  U   ZODB/branches/jim-zeo-registerdb/src/BTrees/tests/testConflict.py
  U   ZODB/branches/jim-zeo-registerdb/src/ZEO/ClientStorage.py
  U   ZODB/branches/jim-zeo-registerdb/src/ZEO/ServerStub.py
  U   ZODB/branches/jim-zeo-registerdb/src/ZEO/StorageServer.py
  A   ZODB/branches/jim-zeo-registerdb/src/ZEO/interfaces.py
  U   ZODB/branches/jim-zeo-registerdb/src/ZEO/tests/Cache.py
  U   ZODB/branches/jim-zeo-registerdb/src/ZEO/tests/testConversionSupport.py
  U   ZODB/branches/jim-zeo-registerdb/src/ZEO/tests/testZEO.py
  U   ZODB/branches/jim-zeo-registerdb/src/ZODB/BaseStorage.py
  U   ZODB/branches/jim-zeo-registerdb/src/ZODB/Blobs/Blob.py
  U   ZODB/branches/jim-zeo-registerdb/src/ZODB/Blobs/BlobStorage.py
  U   ZODB/branches/jim-zeo-registerdb/src/ZODB/Blobs/__init__.py
  U   ZODB/branches/jim-zeo-registerdb/src/ZODB/Connection.py
  U   ZODB/branches/jim-zeo-registerdb/src/ZODB/DB.py
  U   ZODB/branches/jim-zeo-registerdb/src/ZODB/DemoStorage.py
  U   ZODB/branches/jim-zeo-registerdb/src/ZODB/FileStorage/FileStorage.py
  U   ZODB/branches/jim-zeo-registerdb/src/ZODB/FileStorage/fsdump.py
  U   ZODB/branches/jim-zeo-registerdb/src/ZODB/MappingStorage.py
  U   ZODB/branches/jim-zeo-registerdb/src/ZODB/broken.py
  U   ZODB/branches/jim-zeo-registerdb/src/ZODB/interfaces.py
  U   ZODB/branches/jim-zeo-registerdb/src/ZODB/tests/BasicStorage.py
  U   ZODB/branches/jim-zeo-registerdb/src/ZODB/tests/IteratorStorage.py
  U   ZODB/branches/jim-zeo-registerdb/src/ZODB/tests/MTStorage.py
  U   ZODB/branches/jim-zeo-registerdb/src/ZODB/tests/ReadOnlyStorage.py
  U   ZODB/branches/jim-zeo-registerdb/src/ZODB/tests/RevisionStorage.py
  U   ZODB/branches/jim-zeo-registerdb/src/ZODB/tests/Synchronization.py
  U   ZODB/branches/jim-zeo-registerdb/src/ZODB/tests/TransactionalUndoStorage.py
  U   ZODB/branches/jim-zeo-registerdb/src/ZODB/tests/TransactionalUndoVersionStorage.py
  U   ZODB/branches/jim-zeo-registerdb/src/ZODB/tests/VersionStorage.py
  U   ZODB/branches/jim-zeo-registerdb/src/ZODB/tests/testConnection.py
  U   ZODB/branches/jim-zeo-registerdb/src/ZODB/tests/testConnectionSavepoint.py
  U   ZODB/branches/jim-zeo-registerdb/src/ZODB/tests/testDB.py
  U   ZODB/branches/jim-zeo-registerdb/src/ZODB/tests/testDemoStorage.py
  D   ZODB/branches/jim-zeo-registerdb/src/ZODB/tests/testSubTransaction.py
  U   ZODB/branches/jim-zeo-registerdb/src/ZODB/tests/testZODB.py
  U   ZODB/branches/jim-zeo-registerdb/src/ZODB/tests/test_storage.py
  U   ZODB/branches/jim-zeo-registerdb/src/transaction/_manager.py
  U   ZODB/branches/jim-zeo-registerdb/src/transaction/_transaction.py
  U   ZODB/branches/jim-zeo-registerdb/src/transaction/interfaces.py
  U   ZODB/branches/jim-zeo-registerdb/src/transaction/tests/test_transaction.py

-=-
Modified: ZODB/branches/jim-zeo-registerdb/HISTORY.txt
===================================================================
--- ZODB/branches/jim-zeo-registerdb/HISTORY.txt	2007-05-09 18:41:39 UTC (rev 75657)
+++ ZODB/branches/jim-zeo-registerdb/HISTORY.txt	2007-05-09 20:45:50 UTC (rev 75658)
@@ -1,43 +1,132 @@
-
-What's new in ZODB3 3.8.0?
+What's new in ZODB3 3.7.0
 ==========================
-Release date: ???
+Release date: 2007-04-20
 
-- Added support for Blobs. 
+Packaging
+---------
 
+- (3.7.0b3) ZODB is now packaged without its dependencies
+
+  ZODB no longer includes copies of dependencies such as 
+  ZConfig, zope.interface and so on.  It now treats these as
+  dependencies.  If ZODB is installed with easy_install or
+  zc.buildout, the dependencies will be installed automatically.
+
+
+- (3.7.0b3) ZODB is now a buildout
+
+  ZODB checkouts are now built and tested using zc.buildout.
+
+- (3.7b4) Added logic to avoid spurious errors from the logging system
+  on exit.
+
+- (3.7b2) Removed the "sync" mode for ClientStorage.  
+
+  Previously, a ClientStorage could be in either "sync" mode or "async"
+  mode.  Now there is just "async" mode.  There is now a dedicated
+  asyncore main loop for ZEO clients.
+
+  Applications no longer need to run an asyncore main loop to cause
+  client storages to run in async mode.  Even if an application runs an
+  asyncore main loop, it is independent of the loop used by client
+  storages. 
+
+  This addresses a test failure on Mac OS X,
+  http://www.zope.org/Collectors/Zope3-dev/650, that I believe was due
+  to a bug in sync mode. Some asyncore-based code was being called from
+  multiple threads when it didn't expect to be.
+
+  Converting to always-async mode revealed some bugs that weren't caught
+  before because the tests ran in sync mode.  These bugs could
+  explain some problems we've seen at times with clients taking a long
+  time to reconnect after a disconnect.
+
+  Added a partial heartbeat to try to detect lost connections that
+  aren't otherwise caught,
+  http://mail.zope.org/pipermail/zodb-dev/2005-June/008951.html, by
+  periodically writing to all connections during periods of inactivity.
+
+Connection management
+---------------------
+
+- (3.7a1) When more than ``pool_size`` connections have been closed,
+  ``DB`` forgets the excess (over ``pool_size``) connections closed first.
+  Python's cyclic garbage collection can take "a long time" to reclaim them
+  (and may in fact never reclaim them if application code keeps strong
+  references to them), but such forgotten connections can never be opened
+  again, so their caches are now cleared at the time ``DB`` forgets them.
+  Most applications won't notice a difference, but applications that open
+  many connections, and/or store many large objects in connection caches,
+  and/or store limited resources (such as RDB connections) in connection
+  caches may benefit.
+
 BTrees
 ------
 
-- Added support for 64-bit integer BTrees as separate types.  
+- Support for 64-bit integer keys and values has been provided as a
+  compile-time option for the "I" BTrees (e.g. IIBTree).
 
-  (For now, we're retaining compile-time support for making the
-   regular integer BTrees 64-bit.)
+Documentation
+-------------
 
-- Normalize names in modules so that BTrees, Buckets, Sets, and TreeSets can
-  all be accessed with those names in the modules (e.g.,
-  BTrees.IOBTree.BTree).  This is in addition to the older names (e.g.,
-  BTrees.IOBTree.IOBTree).  This allows easier drop-in replacement, which can
-  especially be simplify code for packages that want to support both 32-bit and
-  64-bit BTrees.
+- (3.7a1) Thanks to Stephan Richter for converting many of the doctest
+  files to ReST format.  These are now chapters in the Zope 3 apidoc too.
 
-- Describe the interfaces for each module and actually declare the interfaces
-  for each.
+IPersistent
+-----------
 
-- Fix module references so klass.__module__ points to the Python wrapper
-  module, not the C extension.
+- (3.7a1) The documentation for ``_p_oid`` now specifies the concrete
+  type of oids (in short, an oid is either None or a non-empty string).
 
+Testing
+-------
 
-What's new in ZODB3 3.7.0?
-==========================
-Release date: ???
+- (3.7b2) Fixed test-runner output truncation.
 
+  A bug was fixed in the test runner that caused result summaries to be
+  omitted when running on Windows.
+
+Tools
+-----
+
+- (3.7a1) The changeover from zLOG to the logging module means that some
+  tools need to perform minimal logging configuration themselves. Changed
+  the zeoup script to do so and thus enable it to emit error messages.
+
 BTrees
 ------
 
-- Support for 64-bit integer keys and values has been provided as a
-  compile-time option.
+- (3.7a1) Suppressed warnings about signedness of characters when
+  compiling under GCC 4.0.x.  See http://www.zope.org/Collectors/Zope/2027.
 
+Connection
+----------
 
+- (3.7a1) An optimization for loading non-current data (MVCC) was
+  inadvertently disabled in ``_setstate()``; this has been repaired.
+
+persistent
+----------
+
+- (3.7a1) Suppressed warnings about signedness of characters when
+  compiling under GCC 4.0.x.  See http://www.zope.org/Collectors/Zope/2027.
+
+- (3.7a1) PersistentMapping was inadvertently pickling volatile attributes
+  (http://www.zope.org/Collectors/Zope/2052).
+
+After Commit hooks
+------------------
+
+- (3.7a1) Transaction objects have a new method,
+  ``addAfterCommitHook(hook, *args, **kws)``.  Hook functions
+  registered with a transaction are called after the transaction
+  commits or aborts. For example, one might want to launch
+  non-transactional or asynchronous code after a successful or aborted
+  commit. See ``test_afterCommitHook()`` in
+  ``transaction/tests/test_transaction.py`` for a tutorial doctest,
+  and the ``ITransaction`` interface for details.
+
+
 What's new in ZODB3 3.6.2?
 ==========================
 Release date: 15-July-2006

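The HISTORY entry above describes the new after-commit hook API.  A
minimal sketch of using it, assuming the default thread-local transaction
manager; the hook name ``report`` is just an illustration.  The hook's
first argument is the commit status (True on success, False on abort):

    import transaction

    def report(status):
        # status is True if the commit succeeded, False if it aborted.
        print 'commit finished, success =', status

    txn = transaction.get()
    txn.addAfterCommitHook(report)

    # The hook fires once this transaction commits or aborts; it is not
    # re-registered for later transactions.
    transaction.commit()
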
Modified: ZODB/branches/jim-zeo-registerdb/NEWS.txt
===================================================================
--- ZODB/branches/jim-zeo-registerdb/NEWS.txt	2007-05-09 18:41:39 UTC (rev 75657)
+++ ZODB/branches/jim-zeo-registerdb/NEWS.txt	2007-05-09 20:45:50 UTC (rev 75658)
@@ -1,5 +1,5 @@
-What's new on ZODB 3.8a1?
-=========================
+What's new on ZODB 3.8.0
+========================
 
 ZEO
 ---
@@ -49,127 +49,26 @@
   ZODB now handles (reasonably) large binary objects efficiently. Useful to
   use from a few kilobytes to at least multiple hundred megabytes.
 
-
-What's new on ZODB 3.7.0b4?
-===========================
-
-Packaging
----------
-
-- (3.7.0b3) ZODB is now packaged without it's dependencies
-
-  ZODB no longer includes copies of dependencies such as 
-  ZConfig, zope.interface and so on.  It now treats these as
-  dependencies.  If ZODB is installed with easy_install or
-  zc.buildout, the dependencies will be installed automatically.
-
-
-- (3.7.0b3) ZODB is now a buildout
-
-  ZODB checkouts are now built and tested using zc.buildout.
-
-ClientStorage
--------------
-
-- (3.7b4) Added logic to avoid spurious errors from the logging system
-  on exit.
-
-- (3.7b2) Removed the "sync" mode for ClientStorage.  
-
-  Previously, a ClientStorage could be in either "sync" mode or "async"
-  mode.  Now there is just "async" mode.  There is now a dedicicated
-  asyncore main loop dedicated to ZEO clients.
-
-  Applications no-longer need to run an asyncore main loop to cause
-  client storages to run in async mode.  Even if an application runs an
-  asyncore main loop, it is independent of the loop used by client
-  storages. 
-
-  This addresses a test failure on Mac OS X,
-  http://www.zope.org/Collectors/Zope3-dev/650, that I believe was due
-  to a bug in sync mode. Some asyncore-based code was being called from
-  multiple threads that didn't expect to be.
-
-  Converting to always-async mode revealed some bugs that weren't caught
-  before because the tests ran in sync mode.  These problems could
-  explain some problems we've seen at times with clients taking a long
-  time to reconnect after a disconnect.
-
-  Added a partial heart beat to try to detect lost connections that
-  aren't otherwise caught,
-  http://mail.zope.org/pipermail/zodb-dev/2005-June/008951.html, by
-  perioidically writing to all connections during periods of inactivity.
-
-Connection management
----------------------
-
-- (3.7a1) When more than ``pool_size`` connections have been closed,
-  ``DB`` forgets the excess (over ``pool_size``) connections closed first.
-  Python's cyclic garbage collection can take "a long time" to reclaim them
-  (and may in fact never reclaim them if application code keeps strong
-  references to them), but such forgotten connections can never be opened
-  again, so their caches are now cleared at the time ``DB`` forgets them.
-  Most applications won't notice a difference, but applications that open
-  many connections, and/or store many large objects in connection caches,
-  and/or store limited resources (such as RDB connections) in connection
-  caches may benefit.
-
-Documentation
--------------
-
-- (3.7a1) Thanks to Stephan Richter for converting many of the doctest
-  files to ReST format.  These are now chapters in the Zope 3 apidoc too.
-
-IPersistent
------------
-
-- (3.7a1) The documentation for ``_p_oid`` now specifies the concrete
-  type of oids (in short, an oid is either None or a non-empty string).
-
-Testing
--------
-
-- (3.7b2) Fixed test-runner output truncation.
-
-  A bug was fixed in the test runner that caused result summaries to be
-  omitted when running on Windows.
-
-Tools
------
-
-- (3.7a1) The changeover from zLOG to the logging module means that some
-  tools need to perform minimal logging configuration themselves. Changed
-  the zeoup script to do so and thus enable it to emit error messages.
-
 BTrees
 ------
 
-- (3.7a1) Suppressed warnings about signedness of characters when
-  compiling under GCC 4.0.x.  See http://www.zope.org/Collectors/Zope/2027.
+- (3.8a1) Added support for 64-bit integer BTrees as separate types.  
 
-Connection
-----------
+  (For now, we're retaining compile-time support for making the regular
+  integer BTrees 64-bit.)
 
-- (3.7a1) An optimization for loading non-current data (MVCC) was
-  inadvertently disabled in ``_setstate()``; this has been repaired.
+- (3.8a1) Normalize names in modules so that BTrees, Buckets, Sets, and
+  TreeSets can all be accessed with those names in the modules (e.g.,
+  BTrees.IOBTree.BTree).  This is in addition to the older names (e.g.,
+  BTrees.IOBTree.IOBTree).  This allows easier drop-in replacement, which
+  can especially simplify code for packages that want to support both
+  32-bit and 64-bit BTrees.
 
-persistent
-----------
+- (3.8a1) Describe the interfaces for each module and actually declare
+  the interfaces for each.
 
-- (3.7a1) Suppressed warnings about signedness of characters when
-  compiling under GCC 4.0.x.  See http://www.zope.org/Collectors/Zope/2027.
+- (3.8a1) Fix module references so klass.__module__ points to the Python
+  wrapper module, not the C extension.
 
-- (3.7a1) PersistentMapping was inadvertently pickling volatile attributes
-  (http://www.zope.org/Collectors/Zope/2052).
-
-After Commit hooks
-------------------
-
-- (3.7a1) Transaction objects have a new method,
-  ``addAfterCommitHook(hook, *args, **kws)``.  Hook functions
-  registered with a transaction are called after the transaction
-  commits or aborts. For example, one might want to launch non
-  transactional or asynchrnonous code after a successful, or aborted,
-  commit. See ``test_afterCommitHook()`` in
-  ``transaction/tests/test_transaction.py`` for a tutorial doctest,
-  and the ``ITransaction`` interface for details.
+- (3.8a1) Introduce module families to group all 32-bit and all 64-bit
+  modules.

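The NEWS entries above describe the new module families.  A short sketch
of writing key-width-agnostic code on top of them; ``make_index`` is a
hypothetical helper, while ``family32``/``family64`` and the normalized
``BTree`` name come from the changes described above:

    import BTrees

    def make_index(family=BTrees.family32):
        # family.IO is IOBTree for family32 and LOBTree for family64; the
        # normalized name ``BTree`` works for either module.
        return family.IO.BTree()

    small = make_index()                   # 32-bit integer keys
    large = make_index(BTrees.family64)    # 64-bit integer keys
    large[BTrees.family64.maxint] = 'fits'
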
Modified: ZODB/branches/jim-zeo-registerdb/src/BTrees/Interfaces.py
===================================================================
--- ZODB/branches/jim-zeo-registerdb/src/BTrees/Interfaces.py	2007-05-09 18:41:39 UTC (rev 75657)
+++ ZODB/branches/jim-zeo-registerdb/src/BTrees/Interfaces.py	2007-05-09 20:45:50 UTC (rev 75658)
@@ -440,11 +440,25 @@
         linear-time pass.
         """
 
+class IBTreeFamily(Interface):
+    """the 64-bit or 32-bit family"""
+    IO = Attribute('The IIntegerObjectBTreeModule for this family')
+    OI = Attribute('The IObjectIntegerBTreeModule for this family')
+    II = Attribute('The IIntegerIntegerBTreeModule for this family')
+    IF = Attribute('The IIntegerFloatBTreeModule for this family')
+    OO = Attribute('The IObjectObjectBTreeModule for this family')
+    maxint = Attribute('The maximum integer storable in this family')
+    minint = Attribute('The minimum integer storable in this family')
+
+
 class IIntegerObjectBTreeModule(IBTreeModule, IMerge):
     """keys, or set values, are integers; values are objects.
     
     describes IOBTree and LOBTree"""
+    
+    family = Attribute('The IBTreeFamily of this module')
 
+
 class IObjectIntegerBTreeModule(IBTreeModule, IIMerge):
     """keys, or set values, are objects; values are integers.
     
@@ -452,12 +466,18 @@
     object id)!  Homogenous key types recommended.
     
     describes OIBTree and LOBTree"""
+    
+    family = Attribute('The IBTreeFamily of this module')
 
+
 class IIntegerIntegerBTreeModule(IBTreeModule, IIMerge, IMergeIntegerKey):
     """keys, or set values, are integers; values are also integers.
     
     describes IIBTree and LLBTree"""
+    
+    family = Attribute('The IBTreeFamily of this module')
 
+
 class IObjectObjectBTreeModule(IBTreeModule, IMerge):
     """keys, or set values, are objects; values are also objects.
     
@@ -466,11 +486,18 @@
     
     describes OOBTree"""
 
+    # Note that there's no ``family`` attribute; all families include
+    # the OO flavor of BTrees.
+
+
 class IIntegerFloatBTreeModule(IBTreeModule, IMerge):
     """keys, or set values, are integers; values are floats.
     
     describes IFBTree and LFBTree"""
+    
+    family = Attribute('The IBTreeFamily of this module')
 
+
 ###############################################################
 # IMPORTANT NOTE
 #

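The ``family`` attribute declared on the integer modules above lets
generic code discover the key range it is working with.  A small
illustration; ``clamp_key`` is a hypothetical helper:

    import BTrees

    def clamp_key(module, key):
        # Each integer-keyed module points at its IBTreeFamily, so generic
        # code can clamp keys to the range the underlying C type accepts.
        family = module.family
        return max(family.minint, min(family.maxint, key))

    print clamp_key(BTrees.IOBTree, 2 ** 40)   # clamped to 2**31 - 1
    print clamp_key(BTrees.LOBTree, 2 ** 40)   # unchanged under family64
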
Modified: ZODB/branches/jim-zeo-registerdb/src/BTrees/__init__.py
===================================================================
--- ZODB/branches/jim-zeo-registerdb/src/BTrees/__init__.py	2007-05-09 18:41:39 UTC (rev 75657)
+++ ZODB/branches/jim-zeo-registerdb/src/BTrees/__init__.py	2007-05-09 20:45:50 UTC (rev 75658)
@@ -1 +1,69 @@
-# This is a Python package.
+#############################################################################
+#
+# Copyright (c) 2007 Zope Corporation and Contributors.
+# All Rights Reserved.
+#
+# This software is subject to the provisions of the Zope Public License,
+# Version 2.1 (ZPL).  A copy of the ZPL should accompany this distribution.
+# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
+# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
+# FOR A PARTICULAR PURPOSE.
+#
+#############################################################################
+
+import zope.interface
+import BTrees.Interfaces
+
+
+class _Family(object):
+    zope.interface.implements(BTrees.Interfaces.IBTreeFamily)
+
+    from BTrees import OOBTree as OO
+
+class _Family32(_Family):
+    from BTrees import OIBTree as OI
+    from BTrees import IIBTree as II
+    from BTrees import IOBTree as IO
+    from BTrees import IFBTree as IF
+
+    maxint = int(2**31-1)
+    minint = -maxint - 1
+
+    def __reduce__(self):
+        return _family32, ()
+
+class _Family64(_Family):
+    from BTrees import OLBTree as OI
+    from BTrees import LLBTree as II
+    from BTrees import LOBTree as IO
+    from BTrees import LFBTree as IF
+
+    maxint = 2**63-1
+    minint = -maxint - 1
+
+    def __reduce__(self):
+        return _family64, ()
+
+def _family32():
+    return family32
+_family32.__safe_for_unpickling__ = True
+
+def _family64():
+    return family64
+_family64.__safe_for_unpickling__ = True
+
+
+family32 = _Family32()
+family64 = _Family64()
+
+
+BTrees.family64.IO.family = family64
+BTrees.family64.OI.family = family64
+BTrees.family64.IF.family = family64
+BTrees.family64.II.family = family64
+
+BTrees.family32.IO.family = family32
+BTrees.family32.OI.family = family32
+BTrees.family32.IF.family = family32
+BTrees.family32.II.family = family32

Modified: ZODB/branches/jim-zeo-registerdb/src/BTrees/tests/testBTrees.py
===================================================================
--- ZODB/branches/jim-zeo-registerdb/src/BTrees/tests/testBTrees.py	2007-05-09 18:41:39 UTC (rev 75657)
+++ ZODB/branches/jim-zeo-registerdb/src/BTrees/tests/testBTrees.py	2007-05-09 20:45:50 UTC (rev 75658)
@@ -11,7 +11,9 @@
 # FOR A PARTICULAR PURPOSE
 #
 ##############################################################################
+import pickle
 import random
+import StringIO
 from unittest import TestCase, TestSuite, TextTestRunner, makeSuite
 from types import ClassType
 import zope.interface.verify
@@ -26,16 +28,7 @@
 from BTrees.LFBTree import LFBTree, LFBucket, LFSet, LFTreeSet
 from BTrees.OLBTree import OLBTree, OLBucket, OLSet, OLTreeSet
 
-import BTrees.OOBTree
-import BTrees.IOBTree
-import BTrees.IIBTree
-import BTrees.IFBTree
-import BTrees.OIBTree
-import BTrees.LOBTree
-import BTrees.LLBTree
-import BTrees.LFBTree
-import BTrees.OLBTree
-import BTrees.Interfaces
+import BTrees
 
 from BTrees.IIBTree import using64bits
 from BTrees.check import check
@@ -1676,6 +1669,112 @@
         self.assert_(
             zope.interface.verify.verifyObject(self.iface, self.module))
 
+    def testFamily(self):
+        if self.prefix == 'OO':
+            self.assert_(
+                getattr(self.module, 'family', self) is self)
+        elif 'L' in self.prefix:
+            self.assert_(self.module.family is BTrees.family64)
+        elif 'I' in self.prefix:
+            self.assert_(self.module.family is BTrees.family32)
+
+class FamilyTest(TestCase):
+    def test32(self):
+        self.assert_(
+            zope.interface.verify.verifyObject(
+                BTrees.Interfaces.IBTreeFamily, BTrees.family32))
+        self.assertEquals(
+            BTrees.family32.IO, BTrees.IOBTree)
+        self.assertEquals(
+            BTrees.family32.OI, BTrees.OIBTree)
+        self.assertEquals(
+            BTrees.family32.II, BTrees.IIBTree)
+        self.assertEquals(
+            BTrees.family32.IF, BTrees.IFBTree)
+        self.assertEquals(
+            BTrees.family32.OO, BTrees.OOBTree)
+        s = IOTreeSet()
+        s.insert(BTrees.family32.maxint)
+        self.assert_(BTrees.family32.maxint in s)
+        s = IOTreeSet()
+        s.insert(BTrees.family32.minint)
+        self.assert_(BTrees.family32.minint in s)
+        s = IOTreeSet()
+        # this next bit illustrates an, um, "interesting feature".  If
+        # the characteristics change to match the 64 bit version, please
+        # feel free to change.
+        big = BTrees.family32.maxint + 1
+        if isinstance(big, long):
+            self.assertRaises(TypeError, s.insert, big)
+            self.assertRaises(TypeError, s.insert, BTrees.family32.minint - 1)
+        else: # 64 bit Python
+            s.insert(BTrees.family32.maxint + 1)
+            self.assert_(BTrees.family32.maxint + 1 not in s)
+            # yeah, it's len of 1 now...don't look...don't look...
+            s = IOTreeSet()
+            s.insert(BTrees.family32.minint - 1)
+            self.assert_(BTrees.family32.minint - 1 not in s)
+        self.check_pickling(BTrees.family32)
+
+    def test64(self):
+        self.assert_(
+            zope.interface.verify.verifyObject(
+                BTrees.Interfaces.IBTreeFamily, BTrees.family64))
+        self.assertEquals(
+            BTrees.family64.IO, BTrees.LOBTree)
+        self.assertEquals(
+            BTrees.family64.OI, BTrees.OLBTree)
+        self.assertEquals(
+            BTrees.family64.II, BTrees.LLBTree)
+        self.assertEquals(
+            BTrees.family64.IF, BTrees.LFBTree)
+        self.assertEquals(
+            BTrees.family64.OO, BTrees.OOBTree)
+        s = LOTreeSet()
+        s.insert(BTrees.family64.maxint)
+        self.assert_(BTrees.family64.maxint in s)
+        s = LOTreeSet()
+        s.insert(BTrees.family64.minint)
+        self.assert_(BTrees.family64.minint in s)
+        s = LOTreeSet()
+        self.assertRaises(ValueError, s.insert, BTrees.family64.maxint + 1)
+        self.assertRaises(ValueError, s.insert, BTrees.family64.minint - 1)
+        self.check_pickling(BTrees.family64)
+
+    def check_pickling(self, family):
+        # The "family" objects are singletons; they can be pickled and
+        # unpickled, and the same instances will always be returned on
+        # unpickling, whether from the same unpickler or different
+        # unpicklers.
+        s = pickle.dumps((family, family))
+        (f1, f2) = pickle.loads(s)
+        self.failUnless(f1 is family)
+        self.failUnless(f2 is family)
+
+        # Using a single memo across multiple pickles:
+        sio = StringIO.StringIO()
+        p = pickle.Pickler(sio)
+        p.dump(family)
+        p.dump([family])
+        u = pickle.Unpickler(StringIO.StringIO(sio.getvalue()))
+        f1 = u.load()
+        f2, = u.load()
+        self.failUnless(f1 is family)
+        self.failUnless(f2 is family)
+
+        # Using separate memos for each pickle:
+        sio = StringIO.StringIO()
+        p = pickle.Pickler(sio)
+        p.dump(family)
+        p.clear_memo()
+        p.dump([family])
+        u = pickle.Unpickler(StringIO.StringIO(sio.getvalue()))
+        f1 = u.load()
+        f2, = u.load()
+        self.failUnless(f1 is family)
+        self.failUnless(f2 is family)
+
+
 def test_suite():
     s = TestSuite()
 
@@ -1723,6 +1822,7 @@
         DegenerateBTree,
         TestCmpError,
         BugFixes,
+        FamilyTest,
         ):
         s.addTest(makeSuite(klass))
 

Modified: ZODB/branches/jim-zeo-registerdb/src/BTrees/tests/testConflict.py
===================================================================
--- ZODB/branches/jim-zeo-registerdb/src/BTrees/tests/testConflict.py	2007-05-09 18:41:39 UTC (rev 75657)
+++ ZODB/branches/jim-zeo-registerdb/src/BTrees/tests/testConflict.py	2007-05-09 20:45:50 UTC (rev 75658)
@@ -399,11 +399,13 @@
         # Invoke conflict resolution by committing a transaction.
         self.openDB()
 
-        r1 = self.db.open().root()
+        tm1 = transaction.TransactionManager()
+        r1 = self.db.open(transaction_manager=tm1).root()
         r1["t"] = self.t
-        transaction.commit()
+        tm1.commit()
 
-        r2 = self.db.open(synch=False).root()
+        tm2 = transaction.TransactionManager()
+        r2 = self.db.open(transaction_manager=tm2).root()
         copy = r2["t"]
         # Make sure all of copy is loaded.
         list(copy.values())
@@ -433,7 +435,7 @@
         self.assertEqual(state[0][3], 75)
         self.assertEqual(state[0][5], 120)
 
-        transaction.commit()
+        tm1.commit()
 
         # In the other transaction, add 3 values near the tail end of bucket1.
         # This doesn't cause a split.
@@ -451,8 +453,7 @@
         self.assertEqual(state[0][1], 60)
         self.assertEqual(state[0][3], 120)
 
-        self.assertRaises(ConflictError, transaction.commit)
-        transaction.abort()   # horrible things happen w/o this
+        self.assertRaises(ConflictError, tm2.commit)
 
     def testEmptyBucketConflict(self):
         # Tests that an emptied bucket *created by* conflict resolution is
@@ -476,11 +477,13 @@
         # Invoke conflict resolution by committing a transaction.
         self.openDB()
 
-        r1 = self.db.open().root()
+        tm1 = transaction.TransactionManager()
+        r1 = self.db.open(transaction_manager=tm1).root()
         r1["t"] = self.t
-        transaction.commit()
+        tm1.commit()
 
-        r2 = self.db.open(synch=False).root()
+        tm2 = transaction.TransactionManager()
+        r2 = self.db.open(transaction_manager=tm2).root()
         copy = r2["t"]
         # Make sure all of copy is loaded.
         list(copy.values())
@@ -502,7 +505,7 @@
         self.assertEqual(state[0][1], 60)
         self.assertEqual(state[0][3], 120)
 
-        transaction.commit()
+        tm1.commit()
 
         # In the other transaction, delete the other half of bucket 1.
         b = copy
@@ -523,8 +526,7 @@
         # create an "insane" BTree (a legit BTree cannot contain an empty
         # bucket -- it contains NULL pointers the BTree code doesn't
         # expect, and segfaults result).
-        self.assertRaises(ConflictError, transaction.commit)
-        transaction.abort()   # horrible things happen w/o this
+        self.assertRaises(ConflictError, tm2.commit)
 
 
     def testEmptyBucketNoConflict(self):
@@ -599,12 +601,14 @@
     def testThreeEmptyBucketsNoSegfault(self):
         self.openDB()
 
-        r1 = self.db.open().root()
+        tm1 = transaction.TransactionManager()
+        r1 = self.db.open(transaction_manager=tm1).root()
         self.assertEqual(len(self.t), 0)
         r1["t"] = b = self.t  # an empty tree
-        transaction.commit()
+        tm1.commit()
 
-        r2 = self.db.open(synch=False).root()
+        tm2 = transaction.TransactionManager()
+        r2 = self.db.open(transaction_manager=tm2).root()
         copy = r2["t"]
         # Make sure all of copy is loaded.
         list(copy.values())
@@ -612,15 +616,14 @@
         # In one transaction, add and delete a key.
         b[2] = 2
         del b[2]
-        transaction.commit()
+        tm1.commit()
 
         # In the other transaction, also add and delete a key.
         b = copy
         b[1] = 1
         del b[1]
         # If the commit() segfaults, the C code is still wrong for this case.
-        self.assertRaises(ConflictError, transaction.commit)
-        transaction.abort()
+        self.assertRaises(ConflictError, tm2.commit)
 
     def testCantResolveBTreeConflict(self):
         # Test that a conflict involving two different changes to
@@ -646,11 +649,13 @@
 
         # Set up database connections to provoke conflict.
         self.openDB()
-        r1 = self.db.open().root()
+        tm1 = transaction.TransactionManager()
+        r1 = self.db.open(transaction_manager=tm1).root()
         r1["t"] = self.t
-        transaction.commit()
+        tm1.commit()
 
-        r2 = self.db.open(synch=False).root()
+        tm2 = transaction.TransactionManager()
+        r2 = self.db.open(transaction_manager=tm2).root()
         copy = r2["t"]
         # Make sure all of copy is loaded.
         list(copy.values())
@@ -662,16 +667,15 @@
 
         for k in range(200, 300, 4):
             self.t[k] = k
-        transaction.commit()
+        tm1.commit()
 
         for k in range(0, 60, 4):
             del copy[k]
 
         try:
-            transaction.commit()
+            tm2.commit()
         except ConflictError, detail:
             self.assert_(str(detail).startswith('database conflict error'))
-            transaction.abort()
         else:
             self.fail("expected ConflictError")
 
@@ -700,11 +704,13 @@
 
         # Set up database connections to provoke conflict.
         self.openDB()
-        r1 = self.db.open().root()
+        tm1 = transaction.TransactionManager()
+        r1 = self.db.open(transaction_manager=tm1).root()
         r1["t"] = self.t
-        transaction.commit()
+        tm1.commit()
 
-        r2 = self.db.open(synch=False).root()
+        tm2 = transaction.TransactionManager()
+        r2 = self.db.open(transaction_manager=tm2).root()
         copy = r2["t"]
         # Make sure all of copy is loaded.
         list(copy.values())
@@ -716,15 +722,14 @@
 
         for k in range(0, 60, 4):
             del self.t[k]
-        transaction.commit()
+        tm1.commit()
 
         copy[1] = 1
 
         try:
-            transaction.commit()
+            tm2.commit()
         except ConflictError, detail:
             self.assert_(str(detail).startswith('database conflict error'))
-            transaction.abort()
         else:
             self.fail("expected ConflictError")
 
@@ -733,11 +738,13 @@
         for i in range(0, 200, 4):
             b[i] = i
 
-        r1 = self.db.open().root()
+        tm1 = transaction.TransactionManager()
+        r1 = self.db.open(transaction_manager=tm1).root()
         r1["t"] = b
-        transaction.commit()
+        tm1.commit()
 
-        r2 = self.db.open(synch=False).root()
+        tm2 = transaction.TransactionManager()
+        r2 = self.db.open(transaction_manager=tm2).root()
         copy = r2["t"]
         # Make sure all of copy is loaded.
         list(copy.values())
@@ -747,15 +754,14 @@
         # Now one transaction empties the first bucket, and another adds a
         # key to the first bucket.
         b[1] = 1
-        transaction.commit()
+        tm1.commit()
 
         for k in range(0, 60, 4):
             del copy[k]
         try:
-            transaction.commit()
+            tm2.commit()
         except ConflictError, detail:
             self.assert_(str(detail).startswith('database conflict error'))
-            transaction.abort()
         else:
             self.fail("expected ConflictError")
 

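The test changes above replace the ``synch=False`` connections with
explicit transaction managers.  A minimal sketch of that pattern outside
the test suite (storage choice and keys are arbitrary):

    import transaction
    from ZODB import DB
    from ZODB.MappingStorage import MappingStorage

    db = DB(MappingStorage())

    # Each "user" gets its own transaction manager, so committing one
    # connection's work does not commit or abort the other's.
    tm1 = transaction.TransactionManager()
    tm2 = transaction.TransactionManager()
    r1 = db.open(transaction_manager=tm1).root()
    r2 = db.open(transaction_manager=tm2).root()

    r1['x'] = 1
    tm1.commit()      # commits only the first connection's changes
    tm2.abort()       # discards only work done through the second
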
Modified: ZODB/branches/jim-zeo-registerdb/src/ZEO/ClientStorage.py
===================================================================
--- ZODB/branches/jim-zeo-registerdb/src/ZEO/ClientStorage.py	2007-05-09 18:41:39 UTC (rev 75657)
+++ ZODB/branches/jim-zeo-registerdb/src/ZEO/ClientStorage.py	2007-05-09 20:45:50 UTC (rev 75658)
@@ -266,8 +266,7 @@
         self._pickler = None
 
         self._info = {'length': 0, 'size': 0, 'name': 'ZEO Client',
-                      'supportsUndo':0, 'supportsVersions': 0,
-                      'supportsTransactionalUndo': 0}
+                      'supportsUndo':0, 'supportsVersions': 0}
 
         self._tbuf = self.TransactionBufferClass()
         self._db = None
@@ -670,10 +669,6 @@
         """Storage API: return whether we support versions."""
         return self._info['supportsVersions']
 
-    def supportsTransactionalUndo(self):
-        """Storage API: return whether we support transactional undo."""
-        return self._info['supportsTransactionalUndo']
-
     def isReadOnly(self):
         """Storage API: return whether we are in read-only mode."""
         if self._is_read_only:
@@ -732,15 +727,15 @@
         return self._server.history(oid, version, length)
 
     def record_iternext(self, next=None):
-        """Storage API: get the mext database record.
+        """Storage API: get the next database record.
 
         This is part of the conversion-support API.
         """
         return self._server.record_iternext(next)
 
-    def getSerial(self, oid):
+    def getTid(self, oid):
         """Storage API: return current serial number for oid."""
-        return self._server.getSerial(oid)
+        return self._server.getTid(oid)
 
     def loadSerial(self, oid, serial):
         """Storage API: load a historical revision of an object."""

Modified: ZODB/branches/jim-zeo-registerdb/src/ZEO/ServerStub.py
===================================================================
--- ZODB/branches/jim-zeo-registerdb/src/ZEO/ServerStub.py	2007-05-09 18:41:39 UTC (rev 75657)
+++ ZODB/branches/jim-zeo-registerdb/src/ZEO/ServerStub.py	2007-05-09 20:45:50 UTC (rev 75658)
@@ -177,13 +177,12 @@
         return self.rpc.call('zeoLoad', oid)
 
     ##
-    # Return current data for oid in version, the tid of the transaction that
-    # wrote the most recent revision, and the name of the version for the
-    # data returned.  Versions make this hard to understand; in particular,
-    # the version string returned may not equal the version string passed
-    # in, and that's "a feature" I don't understand.  Similarly, the tid
-    # returned is the tid of the most recent revision of oid, and that may
-    # not equal the tid of the transaction that wrote the data returned.
+    
+    # Return current data for oid in version, the tid of the
+    # transaction that wrote the most recent revision, and the name of
+    # the version for the data returned.  Note that if the object
+    # wasn't modified in the version, then the non-version data is
+    # returned and the returned version is an empty string.
     # @param oid object id
     # @param version string, name of version
     # @defreturn 3-tuple
@@ -275,8 +274,8 @@
     def loadBlob(self, oid, serial, version, offset):
         return self.rpc.call('loadBlob', oid, serial, version, offset)
 
-    def getSerial(self, oid):
-        return self.rpc.call('getSerial', oid)
+    def getTid(self, oid):
+        return self.rpc.call('getTid', oid)
 
     def loadSerial(self, oid, serial):
         return self.rpc.call('loadSerial', oid, serial)

Modified: ZODB/branches/jim-zeo-registerdb/src/ZEO/StorageServer.py
===================================================================
--- ZODB/branches/jim-zeo-registerdb/src/ZEO/StorageServer.py	2007-05-09 18:41:39 UTC (rev 75657)
+++ ZODB/branches/jim-zeo-registerdb/src/ZEO/StorageServer.py	2007-05-09 20:45:50 UTC (rev 75658)
@@ -22,11 +22,12 @@
 
 import asyncore
 import cPickle
+import logging
 import os
 import sys
 import threading
 import time
-import logging
+import warnings
 
 import transaction
 
@@ -142,8 +143,8 @@
     def __repr__(self):
         tid = self.transaction and repr(self.transaction.id)
         if self.storage:
-            stid = (self.storage._transaction and
-                    repr(self.storage._transaction.id))
+            stid = (self.tpc_transaction() and
+                    repr(self.tpc_transaction().id))
         else:
             stid = None
         name = self.__class__.__name__
@@ -153,34 +154,64 @@
         log(msg, level=level, label=self.log_label, exc_info=exc_info)
 
     def setup_delegation(self):
-        """Delegate several methods to the storage"""
-        self.versionEmpty = self.storage.versionEmpty
-        self.versions = self.storage.versions
-        self.getSerial = self.storage.getSerial
-        self.history = self.storage.history
-        self.load = self.storage.load
-        self.loadSerial = self.storage.loadSerial
-        self.modifiedInVersion = self.storage.modifiedInVersion
-        record_iternext = getattr(self.storage, 'record_iternext', None)
+        """Delegate several methods to the storage
+        """
+
+        storage = self.storage
+
+        info = self.get_info()
+        if info['supportsVersions']:
+            self.versionEmpty = storage.versionEmpty
+            self.versions = storage.versions
+            self.modifiedInVersion = storage.modifiedInVersion
+        else:
+            self.versionEmpty = lambda version: True
+            self.versions = lambda max=None: ()
+            self.modifiedInVersion = lambda oid: ''
+            def commitVersion(*a, **k):
+                raise NotImplementedError
+            self.commitVersion = self.abortVersion = commitVersion
+
+        if not info['supportsUndo']:
+            self.undoLog = self.undoInfo = lambda *a,**k: ()
+            def undo(*a, **k):
+                raise NotImplementedError
+            self.undo = undo
+
+        self.getTid = storage.getTid
+        self.history = storage.history
+        self.load = storage.load
+        self.loadSerial = storage.loadSerial
+        record_iternext = getattr(storage, 'record_iternext', None)
         if record_iternext is not None:
             self.record_iternext = record_iternext
 
         try:
-            fn = self.storage.getExtensionMethods
+            fn = storage.getExtensionMethods
         except AttributeError:
-            # We must be running with a ZODB which
-            # predates adding getExtensionMethods to
-            # BaseStorage. Eventually this try/except
-            # can be removed
-            pass
+            pass # no extension methods
         else:
             d = fn()
             self._extensions.update(d)
-            for name in d.keys():
+            for name in d:
                 assert not hasattr(self, name)
-                setattr(self, name, getattr(self.storage, name))
-        self.lastTransaction = self.storage.lastTransaction
+                setattr(self, name, getattr(storage, name))
+        self.lastTransaction = storage.lastTransaction
 
+        try:
+            self.tpc_transaction = storage.tpc_transaction
+        except AttributeError:
+            if hasattr(storage, '_transaction'):
+                log("Storage %r doesn't have a tpc_transaction method.\n"
+                    "See ZEO.interfaces.IServeable.\n"
+                    "Falling back to using the _transaction attribute, "
+                    "which is icky.",
+                    logging.ERROR)
+                self.tpc_transaction = lambda : storage._transaction
+            else:
+                raise
+                
+
     def _check_tid(self, tid, exc=None):
         if self.read_only:
             raise ReadOnlyError()
@@ -241,11 +272,27 @@
                                                                    self)
 
     def get_info(self):
-        return {'length': len(self.storage),
-                'size': self.storage.getSize(),
-                'name': self.storage.getName(),
-                'supportsUndo': self.storage.supportsUndo(),
-                'supportsVersions': self.storage.supportsVersions(),
+        storage = self.storage
+
+        try:
+            supportsVersions = storage.supportsVersions
+        except AttributeError:
+            supportsVersions = False
+        else:
+            supportsVersions = supportsVersions()
+
+        try:
+            supportsUndo = storage.supportsUndo
+        except AttributeError:
+            supportsUndo = False
+        else:
+            supportsUndo = supportsUndo()
+
+        return {'length': len(storage),
+                'size': storage.getSize(),
+                'name': storage.getName(),
+                'supportsUndo': supportsUndo,
+                'supportsVersions': supportsVersions,
                 'extensionMethods': self.getExtensionMethods(),
                 'supports_record_iternext': hasattr(self, 'record_iternext'),
                 }
@@ -260,8 +307,15 @@
 
     def loadEx(self, oid, version):
         self.stats.loads += 1
-        return self.storage.loadEx(oid, version)
+        if version:
+            oversion = self.storage.modifiedInVersion(oid)
+            if oversion == version:
+                data, serial = self.storage.load(oid, version)
+                return data, serial, version
 
+        data, serial = self.storage.load(oid, '')
+        return data, serial, ''
+
     def loadBefore(self, oid, tid):
         self.stats.loads += 1
         return self.storage.loadBefore(oid, tid)
@@ -293,7 +347,7 @@
 
     def verify(self, oid, version, tid):
         try:
-            t = self.storage.getTid(oid)
+            t = self.getTid(oid)
         except KeyError:
             self.client.invalidateVerify((oid, ""))
         else:
@@ -310,7 +364,7 @@
             self.verifying = 1
             self.stats.verifying_clients += 1
         try:
-            os = self.storage.getTid(oid)
+            os = self.getTid(oid)
         except KeyError:
             self.client.invalidateVerify((oid, ''))
             # It's not clear what we should do now.  The KeyError
@@ -625,7 +679,7 @@
     def _wait(self, thunk):
         # Wait for the storage lock to be acquired.
         self._thunk = thunk
-        if self.storage._transaction:
+        if self.tpc_transaction():
             d = Delay()
             self.storage._waiting.append((d, self))
             self.log("Transaction blocked waiting for storage. "

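The ``get_info()`` change above probes optional storage methods rather
than requiring them.  A rough, stand-alone sketch of that probing pattern
(``TinyStorage`` and ``_optional_flag`` are hypothetical):

    def _optional_flag(storage, name):
        # Mirrors the try/except in get_info(): a missing method simply
        # means the feature is unsupported.
        try:
            method = getattr(storage, name)
        except AttributeError:
            return False
        return method()

    class TinyStorage(object):
        # Provides neither supportsUndo nor supportsVersions.
        pass

    print _optional_flag(TinyStorage(), 'supportsUndo')      # False
    print _optional_flag(TinyStorage(), 'supportsVersions')  # False
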
Copied: ZODB/branches/jim-zeo-registerdb/src/ZEO/interfaces.py (from rev 75657, ZODB/trunk/src/ZEO/interfaces.py)

Modified: ZODB/branches/jim-zeo-registerdb/src/ZEO/tests/Cache.py
===================================================================
--- ZODB/branches/jim-zeo-registerdb/src/ZEO/tests/Cache.py	2007-05-09 18:41:39 UTC (rev 75657)
+++ ZODB/branches/jim-zeo-registerdb/src/ZEO/tests/Cache.py	2007-05-09 20:45:50 UTC (rev 75658)
@@ -34,11 +34,6 @@
             info = self._storage.undoInfo(0, 20)
         tid = info[0]['id']
 
-        # We may need to bail at this point if the storage doesn't
-        # support transactional undo
-        if not self._storage.supportsTransactionalUndo():
-            return
-
         # Now start an undo transaction
         t = Transaction()
         t.note('undo1')

Modified: ZODB/branches/jim-zeo-registerdb/src/ZEO/tests/testConversionSupport.py
===================================================================
--- ZODB/branches/jim-zeo-registerdb/src/ZEO/tests/testConversionSupport.py	2007-05-09 18:41:39 UTC (rev 75657)
+++ ZODB/branches/jim-zeo-registerdb/src/ZEO/tests/testConversionSupport.py	2007-05-09 20:45:50 UTC (rev 75658)
@@ -17,10 +17,10 @@
 class FakeStorageBase:
 
     def __getattr__(self, name):
-        if name in ('versionEmpty', 'versions', 'getSerial',
+        if name in ('versionEmpty', 'versions', 'getTid',
                     'history', 'load', 'loadSerial', 'modifiedInVersion',
                     'lastTransaction', 'getSize', 'getName', 'supportsUndo',
-                    'supportsVersions'):
+                    'supportsVersions', 'tpc_transaction'):
            return lambda *a, **k: None
         raise AttributeError(name)
 

Modified: ZODB/branches/jim-zeo-registerdb/src/ZEO/tests/testZEO.py
===================================================================
--- ZODB/branches/jim-zeo-registerdb/src/ZEO/tests/testZEO.py	2007-05-09 18:41:39 UTC (rev 75657)
+++ ZODB/branches/jim-zeo-registerdb/src/ZEO/tests/testZEO.py	2007-05-09 20:45:50 UTC (rev 75658)
@@ -222,6 +222,36 @@
     def getConfig(self):
         return """<mappingstorage 1/>"""
 
+class DemoStorageTests(
+    GenericTests,
+    Cache.StorageWithCache,
+    VersionStorage.VersionStorage,
+    ):
+
+    def getConfig(self):
+        return """
+        <demostorage 1>
+          <filestorage 1>
+             path %s
+          </filestorage>
+        </demostorage>
+        """ % tempfile.mktemp()
+
+    def checkLoadBeforeVersion(self):
+        # Doesn't implement loadBefore, except as a kind of place holder.
+        pass
+    
+    # the next three pack tests depend on undo
+
+    def checkPackVersionReachable(self):
+        pass
+
+    def checkPackVersions(self):
+        pass
+
+    def checkPackVersionsInPast(self):
+        pass
+
 class HeartbeatTests(ZEO.tests.ConnectionTests.CommonSetupTearDown):
     """Make sure a heartbeat is being sent and that it does no harm
 
@@ -782,7 +812,7 @@
     """
 
 
-test_classes = [FileStorageTests, MappingStorageTests,
+test_classes = [FileStorageTests, MappingStorageTests, DemoStorageTests,
                 BlobAdaptedFileStorageTests, BlobWritableCacheTests]
 
 

Modified: ZODB/branches/jim-zeo-registerdb/src/ZODB/BaseStorage.py
===================================================================
--- ZODB/branches/jim-zeo-registerdb/src/ZODB/BaseStorage.py	2007-05-09 18:41:39 UTC (rev 75657)
+++ ZODB/branches/jim-zeo-registerdb/src/ZODB/BaseStorage.py	2007-05-09 20:45:50 UTC (rev 75658)
@@ -54,21 +54,7 @@
     If it stores multiple revisions, it should implement
     loadSerial()
     loadBefore()
-    iterator()
 
-    If the subclass wants to implement undo, it should implement the
-    multiple revision methods and:
-    undo()
-    undoInfo()
-    undoLog()
-
-    If the subclass wants to implement versions, it must implement:
-    abortVersion()
-    commitVersion()
-    modifiedInVersion()
-    versionEmpty()
-    versions()
-
     Each storage will have two locks that are accessed via lock
     acquire and release methods bound to the instance.  (Yuck.)
     _lock_acquire / _lock_release (reentrant)
@@ -112,22 +98,6 @@
         else:
             self._oid = oid
 
-    def abortVersion(self, src, transaction):
-        if transaction is not self._transaction:
-            raise POSException.StorageTransactionError(self, transaction)
-        return self._tid, []
-
-    def commitVersion(self, src, dest, transaction):
-        if transaction is not self._transaction:
-            raise POSException.StorageTransactionError(self, transaction)
-        return self._tid, []
-
-    def close(self):
-        pass
-
-    def cleanup(self):
-        pass
-
     def sortKey(self):
         """Return a string that can be used to sort storage instances.
 
@@ -144,11 +114,8 @@
         return len(self)*300 # WAG!
 
     def history(self, oid, version, length=1, filter=None):
-        pass
+        return ()
 
-    def modifiedInVersion(self, oid):
-        return ''
-
     def new_oid(self):
         if self._is_read_only:
             raise POSException.ReadOnlyError()
@@ -183,12 +150,6 @@
     def isReadOnly(self):
         return self._is_read_only
 
-    def supportsUndo(self):
-        return 0
-
-    def supportsVersions(self):
-        return 0
-
     def tpc_abort(self, transaction):
         self._lock_acquire()
         try:
@@ -243,6 +204,9 @@
         finally:
             self._lock_release()
 
+    def tpc_transaction(self):
+        return self._transaction
+
     def _begin(self, tid, u, d, e):
         """Subclasses should redefine this to supply transaction start actions.
         """
@@ -292,28 +256,17 @@
         """
         pass
 
-    def undo(self, transaction_id, txn):
-        if self._is_read_only:
-            raise POSException.ReadOnlyError()
-        raise POSException.UndoError('non-undoable transaction')
-
-    def undoLog(self, first, last, filter=None):
-        return ()
-
-    def versionEmpty(self, version):
-        return 1
-
-    def versions(self, max=None):
-        return ()
-
-    def pack(self, t, referencesf):
-        if self._is_read_only:
-            raise POSException.ReadOnlyError()
-
-    def getSerial(self, oid):
+    def getTid(self, oid):
         self._lock_acquire()
         try:
-            v = self.modifiedInVersion(oid)
+            v = ''
+            try:
+                supportsVersions = self.supportsVersions
+            except AttributeError:
+                pass
+            else:
+                if supportsVersions():
+                    v = self.modifiedInVersion(oid)
             pickledata, serial = self.load(oid, v)
             return serial
         finally:
@@ -325,120 +278,78 @@
 
     def loadBefore(self, oid, tid):
         """Return most recent revision of oid before tid committed."""
+        return None
 
-        # Unsure: Is it okay for loadBefore() to return current data?
-        # There doesn't seem to be a good reason to forbid it, even
-        # though the typical use of this method will never find
-        # current data.  But maybe we should call it loadByTid()?
-
-        n = 2
-        start_time = None
-        end_time = None
-        while start_time is None:
-            # The history() approach is a hack, because the dict
-            # returned by history() doesn't contain a tid.  It
-            # contains a serialno, which is often the same, but isn't
-            # required to be.  We'll pretend it is for now.
-
-            # A second problem is that history() doesn't say anything
-            # about whether the transaction status.  If it falls before
-            # the pack time, we can't honor the MVCC request.
-
-            # Note: history() returns the most recent record first.
-
-            # TODO: The filter argument to history() only appears to be
-            # supported by FileStorage.  Perhaps it shouldn't be used.
-            L = self.history(oid, "", n, lambda d: not d["version"])
-            if not L:
-                return
-            for d in L:
-                if d["serial"] < tid:
-                    start_time = d["serial"]
-                    break
-                else:
-                    end_time = d["serial"]
-            if len(L) < n:
-                break
-            n *= 2
-        if start_time is None:
-            return None
-        data = self.loadSerial(oid, start_time)
-        return data, start_time, end_time
-
-    def getExtensionMethods(self):
-        """getExtensionMethods
-
-        This returns a dictionary whose keys are names of extra methods
-        provided by this storage. Storage proxies (such as ZEO) should
-        call this method to determine the extra methods that they need
-        to proxy in addition to the standard storage methods.
-        Dictionary values should be None; this will be a handy place
-        for extra marshalling information, should we need it
-        """
-        return {}
-
     def copyTransactionsFrom(self, other, verbose=0):
         """Copy transactions from another storage.
 
         This is typically used for converting data from one storage to
         another.  `other` must have an .iterator() method.
         """
-        _ts=None
-        ok=1
-        preindex={};
-        preget=preindex.get   # waaaa
-        # restore() is a new storage API method which has an identical
-        # signature to store() except that it does not return anything.
-        # Semantically, restore() is also identical to store() except that it
-        # doesn't do the ConflictError or VersionLockError consistency
-        # checks.  The reason to use restore() over store() in this method is
-        # that store() cannot be used to copy transactions spanning a version
-        # commit or abort, or over transactional undos.
-        #
-        # We'll use restore() if it's available, otherwise we'll fall back to
-        # using store().  However, if we use store, then
-        # copyTransactionsFrom() may fail with VersionLockError or
-        # ConflictError.
-        restoring = hasattr(self, 'restore')
-        fiter = other.iterator()
-        for transaction in fiter:
-            tid=transaction.tid
-            if _ts is None:
-                _ts=TimeStamp(tid)
+        copy(other, self, verbose)
+
+def copy(source, dest, verbose=0):
+    """Copy transactions from a source to a destination storage
+
+    This is typically used for converting data from one storage to
+    another.  `source` must have an .iterator() method.
+    """
+    _ts = None
+    ok = 1
+    preindex = {};
+    preget = preindex.get
+    # restore() is a new storage API method which has an identical
+    # signature to store() except that it does not return anything.
+    # Semantically, restore() is also identical to store() except that it
+    # doesn't do the ConflictError or VersionLockError consistency
+    # checks.  The reason to use restore() over store() in this method is
+    # that store() cannot be used to copy transactions spanning a version
+    # commit or abort, or over transactional undos.
+    #
+    # We'll use restore() if it's available, otherwise we'll fall back to
+    # using store().  However, if we use store, then
+    # copyTransactionsFrom() may fail with VersionLockError or
+    # ConflictError.
+    restoring = hasattr(dest, 'restore')
+    fiter = source.iterator()
+    for transaction in fiter:
+        tid = transaction.tid
+        if _ts is None:
+            _ts = TimeStamp(tid)
+        else:
+            t = TimeStamp(tid)
+            if t <= _ts:
+                if ok: print ('Time stamps out of order %s, %s' % (_ts, t))
+                ok = 0
+                _ts = t.laterThan(_ts)
+                tid = `_ts`
             else:
-                t=TimeStamp(tid)
-                if t <= _ts:
-                    if ok: print ('Time stamps out of order %s, %s' % (_ts, t))
-                    ok=0
-                    _ts=t.laterThan(_ts)
-                    tid=`_ts`
-                else:
-                    _ts = t
-                    if not ok:
-                        print ('Time stamps back in order %s' % (t))
-                        ok=1
+                _ts = t
+                if not ok:
+                    print ('Time stamps back in order %s' % (t))
+                    ok = 1
 
+        if verbose:
+            print _ts
+
+        dest.tpc_begin(transaction, tid, transaction.status)
+        for r in transaction:
+            oid = r.oid
             if verbose:
-                print _ts
+                print oid_repr(oid), r.version, len(r.data)
+            if restoring:
+                dest.restore(oid, r.tid, r.data, r.version,
+                             r.data_txn, transaction)
+            else:
+                pre = preget(oid, None)
+                s = dest.store(oid, pre, r.data, r.version, transaction)
+                preindex[oid] = s
 
-            self.tpc_begin(transaction, tid, transaction.status)
-            for r in transaction:
-                oid=r.oid
-                if verbose:
-                    print oid_repr(oid), r.version, len(r.data)
-                if restoring:
-                    self.restore(oid, r.tid, r.data, r.version,
-                                 r.data_txn, transaction)
-                else:
-                    pre=preget(oid, None)
-                    s=self.store(oid, pre, r.data, r.version, transaction)
-                    preindex[oid]=s
+        dest.tpc_vote(transaction)
+        dest.tpc_finish(transaction)
 
-            self.tpc_vote(transaction)
-            self.tpc_finish(transaction)
+    fiter.close()
 
-        fiter.close()
-
 class TransactionRecord:
     """Abstract base class for iterator protocol"""
 

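For orientation, a minimal sketch (not part of this checkin) of how the new
module-level copy() helper -- or the copyTransactionsFrom() method that now
delegates to it -- might be used to convert one storage to another; the file
names are hypothetical:

    from ZODB.FileStorage import FileStorage
    from ZODB.BaseStorage import copy

    source = FileStorage('old-Data.fs', read_only=True)
    dest = FileStorage('new-Data.fs')
    try:
        # Equivalent to dest.copyTransactionsFrom(source); uses restore()
        # when the destination provides it, store() otherwise.
        copy(source, dest, verbose=1)
    finally:
        source.close()
        dest.close()
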
Modified: ZODB/branches/jim-zeo-registerdb/src/ZODB/Blobs/Blob.py
===================================================================
--- ZODB/branches/jim-zeo-registerdb/src/ZODB/Blobs/Blob.py	2007-05-09 18:41:39 UTC (rev 75657)
+++ ZODB/branches/jim-zeo-registerdb/src/ZODB/Blobs/Blob.py	2007-05-09 20:45:50 UTC (rev 75658)
@@ -38,12 +38,14 @@
 
 
 class Blob(Persistent):
+    """A BLOB supports efficient handling of large data within ZODB."""
 
     zope.interface.implements(IBlob)
 
     # Binding this to an attribute allows overriding it in the unit tests
     if sys.platform == 'win32':
-        _os_link = lambda self, src, dst: win32file.CreateHardLink(dst, src, None)
+        _os_link = lambda self, src, dst: win32file.CreateHardLink(dst, src,
+                                                                   None)
     else:
         _os_link = os.link
 
@@ -92,7 +94,8 @@
 
             if self._p_blob_uncommitted is None:
                 # Create a new working copy
-                uncommitted = BlobFile(self._create_uncommitted_file(), mode, self)
+                uncommitted = BlobFile(self._create_uncommitted_file(),
+                                       mode, self)
                 # NOTE: _p_blob data appears by virtue of Connection._setstate
                 utils.cp(file(self._p_blob_data), uncommitted)
                 uncommitted.seek(0)
@@ -153,8 +156,8 @@
             if os.path.exists(target):
                 os.unlink(target)
 
-            # If there was a file moved aside, bring it back including the pointer to
-            # the uncommitted file.
+            # If there was a file moved aside, bring it back including the
+            # pointer to the uncommitted file.
             if previous_uncommitted:
                 os.rename(target_aside, target)
                 self._p_blob_uncommitted = target
@@ -179,7 +182,8 @@
         return self._p_blob_uncommitted or self._p_blob_data
 
     def _create_uncommitted_file(self):
-        assert self._p_blob_uncommitted is None, "Uncommitted file already exists."
+        assert self._p_blob_uncommitted is None, (
+            "Uncommitted file already exists.")
         tempdir = os.environ.get('ZODB_BLOB_TEMPDIR', tempfile.gettempdir())
         self._p_blob_uncommitted = utils.mktemp(dir=tempdir)
         return self._p_blob_uncommitted
@@ -276,8 +280,8 @@
     def _remove_uncommitted_data(self):
         self.blob._p_blob_clear()
         self.fhrefs.map(lambda fhref: fhref.close())
-        if self.blob._p_blob_uncommitted is not None and \
-           os.path.exists(self.blob._p_blob_uncommitted):
+        if (self.blob._p_blob_uncommitted is not None and
+            os.path.exists(self.blob._p_blob_uncommitted)):
             os.unlink(self.blob._p_blob_uncommitted)
             self.blob._p_blob_uncommitted = None
 

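A minimal sketch of the Blob API exercised by the code reformatted above,
assuming a connection obtained from a blob-enabled database (the variable
`connection` below is a hypothetical placeholder for such a connection):

    import transaction
    from ZODB.Blobs.Blob import Blob

    blob = Blob()
    f = blob.open('w')        # working copy backed by an uncommitted file
    f.write('a large chunk of data')
    f.close()

    connection.root()['blob'] = blob   # assumes an open, blob-capable connection
    transaction.commit()
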
Modified: ZODB/branches/jim-zeo-registerdb/src/ZODB/Blobs/BlobStorage.py
===================================================================
--- ZODB/branches/jim-zeo-registerdb/src/ZODB/Blobs/BlobStorage.py	2007-05-09 18:41:39 UTC (rev 75657)
+++ ZODB/branches/jim-zeo-registerdb/src/ZODB/Blobs/BlobStorage.py	2007-05-09 20:45:50 UTC (rev 75658)
@@ -41,7 +41,7 @@
 
     # Proxies can't have a __dict__ so specifying __slots__ here allows
     # us to have instance attributes explicitly on the proxy.
-    __slots__ = ('fshelper', 'dirty_oids')
+    __slots__ = ('fshelper', 'dirty_oids', '_BlobStorage__supportsUndo')
 
     def __new__(self, base_directory, storage):
         return SpecificationDecoratorBase.__new__(self, storage)
@@ -53,6 +53,13 @@
         self.fshelper.create()
         self.fshelper.checkSecure()
         self.dirty_oids = []
+        try:
+            supportsUndo = storage.supportsUndo
+        except AttributeError:
+            supportsUndo = False
+        else:
+            supportsUndo = supportsUndo()
+        self.__supportsUndo = supportsUndo
 
     @non_overridable
     def __repr__(self):
@@ -184,7 +191,7 @@
         # perform a pack on blob data
         self._lock_acquire()
         try:
-            if unproxied.supportsUndo():
+            if self.__supportsUndo:
                 self._packUndoing(packtime, referencesf)
             else:
                 self._packNonUndoing(packtime, referencesf)

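A sketch, under stated assumptions, of how the cached flag gets used: undo
support of the wrapped storage is probed once at construction time, so pack()
no longer calls through the proxy on every invocation. Paths are hypothetical:

    import time
    from ZODB.FileStorage import FileStorage
    from ZODB.Blobs.BlobStorage import BlobStorage
    from ZODB.serialize import referencesf

    base = FileStorage('Data.fs')                  # supportsUndo() -> True
    blob_storage = BlobStorage('/tmp/blobs', base)
    # Because the wrapped storage advertised undo support when the proxy was
    # built, pack() takes the _packUndoing() branch.
    blob_storage.pack(time.time(), referencesf)
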
Modified: ZODB/branches/jim-zeo-registerdb/src/ZODB/Blobs/__init__.py
===================================================================
--- ZODB/branches/jim-zeo-registerdb/src/ZODB/Blobs/__init__.py	2007-05-09 18:41:39 UTC (rev 75657)
+++ ZODB/branches/jim-zeo-registerdb/src/ZODB/Blobs/__init__.py	2007-05-09 20:45:50 UTC (rev 75658)
@@ -1 +1 @@
-# python package
+"""The ZODB Blob package."""

Modified: ZODB/branches/jim-zeo-registerdb/src/ZODB/Connection.py
===================================================================
--- ZODB/branches/jim-zeo-registerdb/src/ZODB/Connection.py	2007-05-09 18:41:39 UTC (rev 75657)
+++ ZODB/branches/jim-zeo-registerdb/src/ZODB/Connection.py	2007-05-09 20:45:50 UTC (rev 75658)
@@ -88,9 +88,6 @@
         # Multi-database support
         self.connections = {self._db.database_name: self}
 
-        self._synch = None
-        self._mvcc = None
-
         self._version = version
         self._normal_storage = self._storage = db._storage
         self.new_oid = db._storage.new_oid
@@ -231,6 +228,9 @@
         if obj is not None:
             return obj
 
+        # This appears to be an MVCC violation because we are loading
+        # the most recent data when perhaps we shouldn't. The key is
+        # that we are only creating a ghost!
         p, serial = self._storage.load(oid, self._version)
         obj = self._reader.getGhost(p)
 
@@ -281,9 +281,8 @@
 
         self._debug_info = ()
 
-        if self._synch:
+        if self._opened:
             self.transaction_manager.unregisterSynch(self)
-            self._synch = None
 
         if primary:
             for connection in self.connections.values():
@@ -345,7 +344,7 @@
         if connection is None:
             new_con = self._db.databases[database_name].open(
                 transaction_manager=self.transaction_manager,
-                mvcc=self._mvcc, version=self._version, synch=self._synch,
+                version=self._version,
                 )
             self.connections.update(new_con.connections)
             new_con.connections = self.connections
@@ -450,8 +449,6 @@
     def _tpc_cleanup(self):
         """Performs cleanup operations to support tpc_finish and tpc_abort."""
         self._conflicts.clear()
-        if not self._synch:
-            self._flush_invalidations()
         self._needs_to_join = True
         self._registered_objects = []
         self._creating.clear()
@@ -646,12 +643,6 @@
         # return an exception object and expect that the Connection
         # will raise the exception.
 
-        # When commit_sub() exceutes a store, there is no need to
-        # update the _p_changed flag, because the subtransaction
-        # tpc_vote() calls already did this.  The change=1 argument
-        # exists to allow commit_sub() to avoid setting the flag
-        # again.
-
         # When conflict resolution occurs, the object state held by
         # the connection does not match what is written to the
         # database.  Invalidate the object here to guarantee that
@@ -863,12 +854,12 @@
         providedBy = getattr(obj, '__providedBy__', None)
         if providedBy is not None and IBlob in providedBy:
             obj._p_blob_uncommitted = None
-            obj._p_blob_data = \
-                    self._storage.loadBlob(obj._p_oid, serial, self._version)
+            obj._p_blob_data = self._storage.loadBlob(
+                obj._p_oid, serial, self._version)
 
     def _load_before_or_conflict(self, obj):
         """Load non-current state for obj or raise ReadConflictError."""
-        if not (self._mvcc and self._setstate_noncurrent(obj)):
+        if not ((not self._version) and self._setstate_noncurrent(obj)):
             self._register(obj)
             self._conflicts[obj._p_oid] = True
             raise ReadConflictError(object=obj)
@@ -968,8 +959,7 @@
         # return a list of [ghosts....not recently used.....recently used]
         return everything.items() + items
 
-    def open(self, transaction_manager=None, mvcc=True, synch=True,
-             delegate=True):
+    def open(self, transaction_manager=None, delegate=True):
         """Register odb, the DB that this Connection uses.
 
         This method is called by the DB every time a Connection
@@ -981,21 +971,12 @@
 
         Parameters:
         odb: database that owns the Connection
-        mvcc: boolean indicating whether MVCC is enabled
         transaction_manager: transaction manager to use.  None means
             use the default transaction manager.
-        synch: boolean indicating whether Connection should
         register for afterCompletion() calls.
         """
 
-        # TODO:  Why do we go to all the trouble of setting _db and
-        # other attributes on open and clearing them on close?
-        # A Connection is only ever associated with a single DB
-        # and Storage.
-
         self._opened = time()
-        self._synch = synch
-        self._mvcc = mvcc and not self._version
 
         if transaction_manager is None:
             transaction_manager = transaction.manager
@@ -1008,8 +989,7 @@
         else:
             self._flush_invalidations()
 
-        if synch:
-            transaction_manager.registerSynch(self)
+        transaction_manager.registerSynch(self)
 
         if self._cache is not None:
             self._cache.incrgc() # This is a good time to do some GC
@@ -1018,7 +998,7 @@
             # delegate open to secondary connections
             for connection in self.connections.values():
                 if connection is not self:
-                    connection.open(transaction_manager, mvcc, synch, False)
+                    connection.open(transaction_manager, False)
 
     def _resetCache(self):
         """Creates a new cache, discarding the old one.
@@ -1113,7 +1093,7 @@
         src.reset(*state)
 
     def _commit_savepoint(self, transaction):
-        """Commit all changes made in subtransactions and begin 2-phase commit
+        """Commit all changes made in savepoints and begin 2-phase commit
         """
         src = self._savepoint_storage
         self._storage = self._normal_storage
@@ -1145,7 +1125,7 @@
         src.close()
 
     def _abort_savepoint(self):
-        """Discard all subtransaction data."""
+        """Discard all savepoint data."""
         src = self._savepoint_storage
         self._storage = self._normal_storage
         self._savepoint_storage = None
@@ -1195,11 +1175,19 @@
     def __init__(self, base_version, storage):
         self._storage = storage
         for method in (
-            'getName', 'new_oid', 'modifiedInVersion', 'getSize',
-            'undoLog', 'versionEmpty', 'sortKey', 'loadBefore',
+            'getName', 'new_oid', 'getSize', 'sortKey', 'loadBefore',
             ):
             setattr(self, method, getattr(storage, method))
 
+        try:
+            supportsVersions = storage.supportsVersions
+        except AttributeError:
+            pass
+        else:
+            if supportsVersions():
+                self.modifiedInVersion = storage.modifiedInVersion
+                self.versionEmpty = storage.versionEmpty
+
         self._base_version = base_version
         tmpdir = os.environ.get('ZODB_BLOB_TEMPDIR')
         if tmpdir is None:

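The renamed savepoint helpers above back the public savepoint API; a minimal
sketch (not from this checkin) of that flow, using MappingStorage purely for
illustration:

    import transaction
    from ZODB import DB
    from ZODB.MappingStorage import MappingStorage

    conn = DB(MappingStorage()).open()
    root = conn.root()
    root['n'] = 1
    sp = transaction.get().savepoint()  # changes parked in savepoint storage
    root['n'] = 2
    sp.rollback()                       # discards work after the savepoint
    assert root['n'] == 1
    transaction.commit()
    conn.close()
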
Modified: ZODB/branches/jim-zeo-registerdb/src/ZODB/DB.py
===================================================================
--- ZODB/branches/jim-zeo-registerdb/src/ZODB/DB.py	2007-05-09 18:41:39 UTC (rev 75657)
+++ ZODB/branches/jim-zeo-registerdb/src/ZODB/DB.py	2007-05-09 20:45:50 UTC (rev 75658)
@@ -15,6 +15,8 @@
 
 $Id$"""
 
+import warnings
+
 import cPickle, cStringIO, sys
 import threading
 from time import time, ctime
@@ -31,6 +33,7 @@
 
 import transaction
 
+
 logger = logging.getLogger('ZODB.DB')
 
 class _ConnectionPool(object):
@@ -77,23 +80,28 @@
         # a list (we push only "on the right", but may pop from both ends).
         self.available = []
 
-    # Change our belief about the expected maximum # of live connections.
-    # If the pool_size is smaller than the current value, this may discard
-    # the oldest available connections.
     def set_pool_size(self, pool_size):
+        """Change our belief about the expected maximum # of live connections.
+
+        If the pool_size is smaller than the current value, this may discard
+        the oldest available connections.
+        """
         self.pool_size = pool_size
         self._reduce_size()
 
-    # Register a new available connection.  We must not know about c already.
-    # c will be pushed onto the available stack even if we're over the
-    # pool size limit.
     def push(self, c):
+        """Register a new available connection.
+
+        We must not know about c already. c will be pushed onto the available
+        stack even if we're over the pool size limit.
+        """
         assert c not in self.all
         assert c not in self.available
         self._reduce_size(strictly_less=True)
         self.all.add(c)
         self.available.append(c)
-        n, limit = len(self.all), self.pool_size
+        n = len(self.all)
+        limit = self.pool_size
         if n > limit:
             reporter = logger.warn
             if n > 2 * limit:
@@ -101,20 +109,25 @@
             reporter("DB.open() has %s open connections with a pool_size "
                      "of %s", n, limit)
 
-    # Reregister an available connection formerly obtained via pop().  This
-    # pushes it on the stack of available connections, and may discard
-    # older available connections.
     def repush(self, c):
+        """Reregister an available connection formerly obtained via pop().
+
+        This pushes it on the stack of available connections, and may discard
+        older available connections.
+        """
         assert c in self.all
         assert c not in self.available
         self._reduce_size(strictly_less=True)
         self.available.append(c)
 
-    # Throw away the oldest available connections until we're under our
-    # target size (strictly_less=False) or no more than that (strictly_less=
-    # True, the default).
     def _reduce_size(self, strictly_less=False):
-        target = self.pool_size - bool(strictly_less)
+        """Throw away the oldest available connections until we have no
+        more than our target size (strictly_less=False, the default) or
+        strictly fewer than that (strictly_less=True).
+        """
+        target = self.pool_size
+        if strictly_less:
+            target -= 1
         while len(self.available) > target:
             c = self.available.pop(0)
             self.all.remove(c)
@@ -132,11 +145,13 @@
             # now, and `c` would be left in a user-visible crazy state.
             c._resetCache()
 
-    # Pop an available connection and return it, or return None if none are
-    # available.  In the latter case, the caller should create a new
-    # connection, register it via push(), and call pop() again.  The
-    # caller is responsible for serializing this sequence.
     def pop(self):
+        """Pop an available connection and return it.
+
+        Return None if none are available - in this case, the caller should
+        create a new connection, register it via push(), and call pop() again.
+        The caller is responsible for serializing this sequence.
+        """
         result = None
         if self.available:
             result = self.available.pop()
@@ -145,8 +160,8 @@
             assert result in self.all
         return result
 
-    # For every live connection c, invoke f(c).
     def map(self, f):
+        """For every live connection c, invoke f(c)."""
         self.all.map(f)
 
 class DB(object):
@@ -227,8 +242,6 @@
         self._version_pool_size = version_pool_size
         self._version_cache_size = version_cache_size
 
-        self._miv_cache = {}
-
         # Setup storage
         self._storage=storage
         self.references = ZODB.serialize.referencesf
@@ -237,8 +250,14 @@
         except TypeError:
             storage.registerDB(self, None) # Backward compat
             
-        if not hasattr(storage, 'tpc_vote'):
+        if (not hasattr(storage, 'tpc_vote')) and not storage.isReadOnly():
+            warnings.warn(
+                "Storage doesn't have a tpc_vote and this violates "
+                "the storage API. Violently monkeypatching in a do-nothing "
+                "tpc_vote.",
+                DeprecationWarning, 2)
             storage.tpc_vote = lambda *args: None
+
         try:
             storage.load(z64, '')
         except KeyError:
@@ -268,14 +287,46 @@
                              database_name)
         databases[database_name] = self
 
-        # Pass through methods:
-        for m in ['history', 'supportsUndo', 'supportsVersions', 'undoLog',
-                  'versionEmpty', 'versions']:
-            setattr(self, m, getattr(storage, m))
+        self._setupUndoMethods()
+        self._setupVersionMethods()
+        self.history = storage.history
 
-        if hasattr(storage, 'undoInfo'):
-            self.undoInfo = storage.undoInfo
+    def _setupUndoMethods(self):
+        storage = self._storage
+        try:
+            self.supportsUndo = storage.supportsUndo
+        except AttributeError:
+            self.supportsUndo = lambda : False
 
+        if self.supportsUndo():
+            self.undoLog = storage.undoLog
+            if hasattr(storage, 'undoInfo'):
+                self.undoInfo = storage.undoInfo
+        else:
+            self.undoLog = self.undoInfo = lambda *a,**k: ()
+            def undo(*a, **k):
+                raise NotImplementedError
+            self.undo = undo
+
+    def _setupVersionMethods(self):
+        storage = self._storage
+        try:
+            self.supportsVersions = storage.supportsVersions
+        except AttributeError:
+            self.supportsVersions = lambda : False
+
+        if self.supportsVersions():
+            self.versionEmpty = storage.versionEmpty
+            self.versions = storage.versions
+            self.modifiedInVersion = storage.modifiedInVersion
+        else:
+            self.versionEmpty = lambda version: True
+            self.versions = lambda max=None: ()
+            self.modifiedInVersion = lambda oid: ''
+            def commitVersion(*a, **k):
+                raise NotImplementedError
+            self.commitVersion = self.abortVersion = commitVersion
+
     # This is called by Connection.close().
     def _returnToPool(self, connection):
         """Return a connection to the pool.
@@ -319,6 +370,10 @@
             self._r()
 
     def abortVersion(self, version, txn=None):
+        warnings.warn(
+            "Versions are deprecated and will become unsupported "
+            "in ZODB 3.9",
+            DeprecationWarning, 2)            
         if txn is None:
             txn = transaction.get()
         txn.register(AbortVersion(self, version))
@@ -436,6 +491,10 @@
         self._storage.close()
 
     def commitVersion(self, source, destination='', txn=None):
+        warnings.warn(
+            "Versions are deprecated and will become unsupported "
+            "in ZODB 3.9",
+            DeprecationWarning, 2)            
         if txn is None:
             txn = transaction.get()
         txn.register(CommitVersion(self, source, destination))
@@ -456,9 +515,17 @@
         return self._storage.getSize()
 
     def getVersionCacheSize(self):
+        warnings.warn(
+            "Versions are deprecated and will become unsupported "
+            "in ZODB 3.9",
+            DeprecationWarning, 2)            
         return self._version_cache_size
 
     def getVersionPoolSize(self):
+        warnings.warn(
+            "Versions are deprecated and will become unsupported "
+            "in ZODB 3.9",
+            DeprecationWarning, 2)            
         return self._version_pool_size
 
     def invalidate(self, tid, oids, connection=None, version=''):
@@ -471,12 +538,6 @@
         """
         if connection is not None:
             version = connection._version
-        # Update modified in version cache
-        for oid in oids:
-            h = hash(oid) % 131
-            o = self._miv_cache.get(h, None)
-            if o is not None and o[0]==oid:
-                del self._miv_cache[h]
 
         # Notify connections.
         def inval(c):
@@ -487,25 +548,13 @@
 
     def invalidateCache(self):
         """Invalidate each of the connection caches
-        """        
-        self._miv_cache.clear()
+        """
         self._connectionMap(lambda c: c.invalidateCache())
 
-    def modifiedInVersion(self, oid):
-        h = hash(oid) % 131
-        cache = self._miv_cache
-        o = cache.get(h, None)
-        if o and o[0] == oid:
-            return o[1]
-        v = self._storage.modifiedInVersion(oid)
-        cache[h] = oid, v
-        return v
-
     def objectCount(self):
         return len(self._storage)
 
-    def open(self, version='', mvcc=True,
-             transaction_manager=None, synch=True):
+    def open(self, version='', transaction_manager=None):
         """Return a database Connection for use by application code.
 
         The optional `version` argument can be used to specify that a
@@ -518,13 +567,19 @@
         :Parameters:
           - `version`: the "version" that all changes will be made
              in, defaults to no version.
-          - `mvcc`: boolean indicating whether MVCC is enabled
           - `transaction_manager`: transaction manager to use.  None means
              use the default transaction manager.
-          - `synch`: boolean indicating whether Connection should
-             register for afterCompletion() calls.
         """
 
+        if version:
+            if not self.supportsVersions():
+                raise ValueError(
+                    "Versions are not supported by this database.")
+            warnings.warn(
+                "Versions are deprecated and will become unsupported "
+                "in ZODB 3.9",
+                DeprecationWarning, 2)            
+
         self._a()
         try:
             # pool <- the _ConnectionPool for this version
@@ -550,7 +605,7 @@
             assert result is not None
 
             # Tell the connection it belongs to self.
-            result.open(transaction_manager, mvcc, synch)
+            result.open(transaction_manager)
 
             # A good time to do some cache cleanup.
             self._connectionMap(lambda c: c.cacheGC())
@@ -638,6 +693,10 @@
             self._r()
 
     def setVersionCacheSize(self, size):
+        warnings.warn(
+            "Versions are deprecated and will become unsupported "
+            "in ZODB 3.9",
+            DeprecationWarning, 2)            
         self._a()
         try:
             self._version_cache_size = size
@@ -654,6 +713,10 @@
         self._reset_pool_sizes(size, for_versions=False)
 
     def setVersionPoolSize(self, size):
+        warnings.warn(
+            "Versions are deprecated and will become unsupported "
+            "in ZODB 3.9",
+            DeprecationWarning, 2)            
         self._version_pool_size = size
         self._reset_pool_sizes(size, for_versions=True)
 
@@ -687,17 +750,18 @@
             txn = transaction.get()
         txn.register(TransactionalUndo(self, id))
 
-    def versionEmpty(self, version):
-        return self._storage.versionEmpty(version)
 
-
-
 resource_counter_lock = threading.Lock()
 resource_counter = 0
 
 class ResourceManager(object):
     """Transaction participation for a version or undo resource."""
 
+    # XXX This implementation is broken.  Subclasses invalidate oids
+    # in their commit calls. Invalidations should not be sent until
+    # tpc_finish is called.  In fact, invalidations should be sent to
+    # the db *while* tpc_finish is being called on the storage.
+
     def __init__(self, db):
         self._db = db
         # Delegate the actual 2PC methods to the storage
@@ -729,10 +793,10 @@
     # argument to the methods below is self.
 
     def abort(self, obj, txn):
-        pass
+        raise NotImplementedError
 
     def commit(self, obj, txn):
-        pass
+        raise NotImplementedError
 
 class CommitVersion(ResourceManager):
 
@@ -742,6 +806,7 @@
         self._dest = dest
 
     def commit(self, ob, t):
+        # XXX see XXX in ResourceManager
         dest = self._dest
         tid, oids = self._db._storage.commitVersion(self._version,
                                                     self._dest,
@@ -760,6 +825,7 @@
         self._version = version
 
     def commit(self, ob, t):
+        # XXX see XXX in ResourceManager
         tid, oids = self._db._storage.abortVersion(self._version, t)
         self._db.invalidate(tid,
                             dict.fromkeys(oids, 1),
@@ -772,5 +838,6 @@
         self._tid = tid
 
     def commit(self, ob, t):
+        # XXX see XXX in ResourceManager
         tid, oids = self._db._storage.undo(self._tid, t)
         self._db.invalidate(tid, dict.fromkeys(oids, 1))

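A short sketch of what client code now sees from these DB changes: undo and
version support are probed on the storage, and opening a version against a
storage without version support is rejected. MappingStorage is used here
purely for illustration:

    from ZODB import DB
    from ZODB.MappingStorage import MappingStorage

    db = DB(MappingStorage())
    db.supportsUndo()        # false value: MappingStorage has no undo
    db.supportsVersions()    # false value: no version support either
    try:
        db.open(version='some-version')
    except ValueError:
        pass   # "Versions are not supported by this database."
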
Modified: ZODB/branches/jim-zeo-registerdb/src/ZODB/DemoStorage.py
===================================================================
--- ZODB/branches/jim-zeo-registerdb/src/ZODB/DemoStorage.py	2007-05-09 18:41:39 UTC (rev 75657)
+++ ZODB/branches/jim-zeo-registerdb/src/ZODB/DemoStorage.py	2007-05-09 20:45:50 UTC (rev 75658)
@@ -117,10 +117,16 @@
         self._quota = quota
         self._ltid = None
         self._clear_temp()
-        if base is not None and base.versions():
-            raise POSException.StorageError(
-                "Demo base storage has version data")
 
+        try:
+            versions = base.versions
+        except AttributeError:
+            pass
+        else:
+            if base.versions():
+                raise POSException.StorageError(
+                    "Demo base storage has version data")
+
     # When DemoStorage needs to create a new oid, and there is a base
     # storage, it must use that storage's new_oid() method.  Else
     # DemoStorage may end up assigning "new" oids that are already in use
@@ -212,37 +218,32 @@
         finally:
             self._lock_release()
 
-    def loadEx(self, oid, version):
+    def load(self, oid, version):
         self._lock_acquire()
         try:
             try:
                 oid, pre, vdata, p, tid = self._index[oid]
             except KeyError:
                 if self._base:
-                    return self._base.load(oid, '')
+                    return self._base.load(oid, version)
                 raise KeyError(oid)
 
-            ver = ""
             if vdata:
                 oversion, nv = vdata
                 if oversion != version:
                     if nv:
                         # Return the current txn's tid with the non-version
                         # data.
-                        oid, pre, vdata, p, skiptid = nv
+                        p = nv[3]
                     else:
                         raise KeyError(oid)
-                ver = oversion
 
             if p is None:
                 raise KeyError(oid)
 
-            return p, tid, ver
+            return p, tid
         finally: self._lock_release()
 
-    def load(self, oid, version):
-        return self.loadEx(oid, version)[:2]
-
     def modifiedInVersion(self, oid):
         self._lock_acquire()
         try:
@@ -560,3 +561,11 @@
                 o.append('    %s: %s' % (oid_repr(oid), r))
 
         return '\n'.join(o)
+
+    def cleanup(self):
+        if self._base is not None:
+            self._base.cleanup()
+
+    def close(self):
+        if self._base is not None:
+            self._base.close()

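A sketch of layering DemoStorage over a base, which the relaxed version check
and the new close()/cleanup() delegation above are meant to support; the file
name is hypothetical:

    from ZODB.FileStorage import FileStorage
    from ZODB.DemoStorage import DemoStorage

    base = FileStorage('Data.fs', read_only=True)
    demo = DemoStorage(base=base)   # writes stay in the in-memory layer
    # ... use demo with a DB ...
    demo.close()                    # now also closes the wrapped base storage
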
Modified: ZODB/branches/jim-zeo-registerdb/src/ZODB/FileStorage/FileStorage.py
===================================================================
--- ZODB/branches/jim-zeo-registerdb/src/ZODB/FileStorage/FileStorage.py	2007-05-09 18:41:39 UTC (rev 75657)
+++ ZODB/branches/jim-zeo-registerdb/src/ZODB/FileStorage/FileStorage.py	2007-05-09 20:45:50 UTC (rev 75658)
@@ -515,32 +515,6 @@
         except TypeError:
             raise TypeError("invalid oid %r" % (oid,))
 
-    def loadEx(self, oid, version):
-        # A variant of load() that also returns the version string.
-        # ZEO wants this for managing its cache.
-        self._lock_acquire()
-        try:
-            pos = self._lookup_pos(oid)
-            h = self._read_data_header(pos, oid)
-            if h.version and h.version != version:
-                # Return data and tid from pnv (non-version data).
-                # If we return the old record's transaction id, then
-                # it will look to the cache like old data is current.
-                # The tid for the current data must always be greater
-                # than any non-current data.
-                data = self._loadBack_impl(oid, h.pnv)[0]
-                return data, h.tid, ""
-            if h.plen:
-                data = self._file.read(h.plen)
-                return data, h.tid, h.version
-            else:
-                # Get the data from the backpointer, but tid from
-                # currnt txn.
-                data = self._loadBack_impl(oid, h.back)[0]
-                return data, h.tid, h.version
-        finally:
-            self._lock_release()
-
     def load(self, oid, version):
         """Return pickle data and serial number."""
         self._lock_acquire()
@@ -940,9 +914,6 @@
             self._file.truncate(self._pos)
             self._nextpos=0
 
-    def supportsTransactionalUndo(self):
-        return 1
-
     def _undoDataInfo(self, oid, pos, tpos):
         """Return the tid, data pointer, data, and version for the oid
         record at pos"""
@@ -974,17 +945,17 @@
             result = self._get_cached_tid(oid)
             if result is None:
                 pos = self._lookup_pos(oid)
-                result = self._getTid(oid, pos)
+                h = self._read_data_header(pos, oid)
+                if h.plen == 0 and h.back == 0:
+                    # Undone creation
+                    raise POSKeyError(oid)
+                else:
+                    result = h.tid
+                    self._oid2tid[oid] = result
             return result
         finally:
             self._lock_release()
 
-    def _getTid(self, oid, pos):
-        self._file.seek(pos)
-        h = self._file.read(16)
-        assert oid == h[:8]
-        return h[8:]
-
     def _getVersion(self, oid, pos):
         h = self._read_data_header(pos, oid)
         if h.version:
@@ -1440,7 +1411,10 @@
         except ValueError: # "empty tree" error
             next_oid = None
 
-        data, tid = self.load(oid, "") # ignore versions
+        # ignore versions
+        # XXX if the object was created in a version, this will fail.
+        data, tid = self.load(oid, "")
+
         return oid, tid, data, next_oid
 
 

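With loadEx() gone, callers use plain load(), which returns just the data
record and the committing transaction id. A trivial sketch, where `storage`
stands in for an already-open FileStorage:

    from ZODB.utils import z64

    data, tid = storage.load(z64, '')   # current data for the root object
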
Modified: ZODB/branches/jim-zeo-registerdb/src/ZODB/FileStorage/fsdump.py
===================================================================
--- ZODB/branches/jim-zeo-registerdb/src/ZODB/FileStorage/fsdump.py	2007-05-09 18:41:39 UTC (rev 75657)
+++ ZODB/branches/jim-zeo-registerdb/src/ZODB/FileStorage/fsdump.py	2007-05-09 20:45:50 UTC (rev 75658)
@@ -14,8 +14,8 @@
 import struct
 
 from ZODB.FileStorage import FileIterator
-from ZODB.FileStorage.format \
-     import TRANS_HDR, TRANS_HDR_LEN, DATA_HDR, DATA_HDR_LEN
+from ZODB.FileStorage.format import TRANS_HDR, TRANS_HDR_LEN, DATA_HDR, DATA_HDR_LEN
+from ZODB.FileStorage.format import DATA_HDR_LEN
 from ZODB.TimeStamp import TimeStamp
 from ZODB.utils import u64, get_pickle_metadata
 from ZODB.tests.StorageTestBase import zodb_unpickle
@@ -24,13 +24,13 @@
     iter = FileIterator(path)
     for i, trans in enumerate(iter):
         if with_offset:
-            print >> file, "Trans #%05d tid=%016x time=%s offset=%d" % \
-                  (i, u64(trans.tid), TimeStamp(trans.tid), trans._pos)
+            print >> file, ("Trans #%05d tid=%016x time=%s offset=%d" %
+                  (i, u64(trans.tid), TimeStamp(trans.tid), trans._pos))
         else:
-            print >> file, "Trans #%05d tid=%016x time=%s" % \
-                  (i, u64(trans.tid), TimeStamp(trans.tid))
-        print >> file, "    status=%r user=%r description=%r" % \
-              (trans.status, trans.user, trans.description)
+            print >> file, ("Trans #%05d tid=%016x time=%s" %
+                  (i, u64(trans.tid), TimeStamp(trans.tid)))
+        print >> file, ("    status=%r user=%r description=%r" %
+              (trans.status, trans.user, trans.description))
 
         for j, rec in enumerate(trans):
             if rec.data is None:
@@ -53,8 +53,8 @@
             else:
                 bp = ""
 
-            print >> file, "  data #%05d oid=%016x%s%s class=%s%s" % \
-                  (j, u64(rec.oid), version, size, fullclass, bp)
+            print >> file, ("  data #%05d oid=%016x%s%s class=%s%s" %
+                  (j, u64(rec.oid), version, size, fullclass, bp))
     iter.close()
 
 def fmt(p64):
@@ -123,8 +123,8 @@
             version = self.file.read(vlen)
             print >> self.dest, "version: %r" % version
             print >> self.dest, "non-version data offset: %d" % u64(pnv)
-            print >> self.dest, \
-                  "previous version data offset: %d" % u64(sprevdata)
+            print >> self.dest, ("previous version data offset: %d" %
+                                 u64(sprevdata))
         print >> self.dest, "len(data): %d" % dlen
         self.file.read(dlen)
         if not dlen:

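The dump routine reformatted above is typically driven like this (the file
name is hypothetical; the keyword arguments mirror the parameters the
function is shown using):

    import sys
    from ZODB.FileStorage.fsdump import fsdump

    fsdump('Data.fs', file=sys.stdout, with_offset=True)
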
Modified: ZODB/branches/jim-zeo-registerdb/src/ZODB/MappingStorage.py
===================================================================
--- ZODB/branches/jim-zeo-registerdb/src/ZODB/MappingStorage.py	2007-05-09 18:41:39 UTC (rev 75657)
+++ ZODB/branches/jim-zeo-registerdb/src/ZODB/MappingStorage.py	2007-05-09 20:45:50 UTC (rev 75658)
@@ -63,11 +63,6 @@
         finally:
             self._lock_release()
 
-    def loadEx(self, oid, version):
-        # Since we don't support versions, just tack the empty version
-        # string onto load's result.
-        return self.load(oid, version) + ("",)
-
     def getTid(self, oid):
         self._lock_acquire()
         try:
@@ -139,3 +134,9 @@
                      (u64(oid), TimeStamp(r[:8]), repr(r[8:])))
 
         return '\n'.join(o)
+
+    def cleanup(self):
+        pass
+
+    def close(self):
+        pass

Modified: ZODB/branches/jim-zeo-registerdb/src/ZODB/broken.py
===================================================================
--- ZODB/branches/jim-zeo-registerdb/src/ZODB/broken.py	2007-05-09 18:41:39 UTC (rev 75657)
+++ ZODB/branches/jim-zeo-registerdb/src/ZODB/broken.py	2007-05-09 20:45:50 UTC (rev 75658)
@@ -281,11 +281,11 @@
 
         and persistent broken objects aren't directly picklable:
 
-          >>> a.__reduce__()
+          >>> a.__reduce__()    # doctest: +NORMALIZE_WHITESPACE
           Traceback (most recent call last):
           ...
-          BrokenModified: """ \
-        r"""<persistent broken not.there.Atall instance '\x00\x00\x00\x00****'>
+          BrokenModified: 
+          <persistent broken not.there.Atall instance '\x00\x00\x00\x00****'>
 
         but you can get their state:
 

Modified: ZODB/branches/jim-zeo-registerdb/src/ZODB/interfaces.py
===================================================================
--- ZODB/branches/jim-zeo-registerdb/src/ZODB/interfaces.py	2007-05-09 18:41:39 UTC (rev 75657)
+++ ZODB/branches/jim-zeo-registerdb/src/ZODB/interfaces.py	2007-05-09 20:45:50 UTC (rev 75658)
@@ -18,6 +18,7 @@
 
 from zope.interface import Interface, Attribute
 
+
 class IConnection(Interface):
     """Connection to ZODB for loading and storing objects.
 
@@ -289,7 +290,6 @@
         This invalidates *all* objects in the cache. If the connection
         is open, subsequent reads will fail until a new transaction
         begins or until the connection is reopened.
-        
         """
 
 class IStorageDB(Interface):
@@ -345,37 +345,6 @@
     TODO: This interface is incomplete.
     """
 
-## __init__ methods don't belong in interfaces:
-##
-##     def __init__(storage,
-##                  pool_size=7,
-##                  cache_size=400,
-##                  version_pool_size=3,
-##                  version_cache_size=100,
-##                  database_name='unnamed',
-##                  databases=None,
-##                  ):
-##         """Create an object database.
-
-##         storage: the storage used by the database, e.g. FileStorage
-##         pool_size: expected maximum number of open connections
-##         cache_size: target size of Connection object cache, in number of
-##             objects
-##         version_pool_size: expected maximum number of connections (per
-##             version)
-##         version_cache_size: target size of Connection object cache for
-##              version connections, in number of objects
-##         database_name: when using a multi-database, the name of this DB
-##             within the database group.  It's a (detected) error if databases
-##             is specified too and database_name is already a key in it.
-##             This becomes the value of the DB's database_name attribute.
-##         databases: when using a multi-database, a mapping to use as the
-##             binding of this DB's .databases attribute.  It's intended
-##             that the second and following DB's added to a multi-database
-##             pass the .databases attribute set on the first DB added to the
-##             collection.
-##         """
-
     databases = Attribute("""\
         A mapping from database name to DB (database) object.
 
@@ -386,119 +355,449 @@
         entry.
         """)
 
-    def invalidateCache():
-        """Invalidate all objects in the database object caches
+    def open(version='', transaction_manager=None):
+        """Return an IConnection object for use by application code.
 
-        invalidateCache will be called on each of the database's connections.
+        version: the "version" that all changes will be made
+            in, defaults to no version.
+        transaction_manager: transaction manager to use.  None means
+            use the default transaction manager.
+
+        Note that the connection pool is managed as a stack, to
+        increase the likelihood that the connection's cache will
+        include useful objects.
         """
 
+    # TODO: Should this method be moved into some subinterface?
+    def pack(t=None, days=0):
+        """Pack the storage, deleting unused object revisions.
+
+        A pack is always performed relative to a particular time, by
+        default the current time.  All object revisions that are not
+        reachable as of the pack time are deleted from the storage.
+
+        The cost of this operation varies by storage, but it is
+        usually an expensive operation.
+
+        There are two optional arguments that can be used to set the
+        pack time: t, pack time in seconds since the epoch, and days,
+        the number of days to subtract from t or from the current
+        time if t is not specified.
+        """
+
+    # TODO: Should this method be moved into some subinterface?
+    def undo(id, txn=None):
+        """Undo a transaction identified by id.
+
+        A transaction can be undone if all of the objects involved in
+        the transaction were not modified subsequently, if any
+        modifications can be resolved by conflict resolution, or if
+        subsequent changes resulted in the same object state.
+
+        The value of id should be generated by calling undoLog()
+        or undoInfo().  The value of id is not the same as a
+        transaction id used by other methods; it is unique to undo().
+
+        id: a storage-specific transaction identifier
+        txn: transaction context to use for undo().
+            By default, uses the current transaction.
+        """
+
+    def close():
+        """Close the database and its underlying storage.
+
+        It is important to close the database, because the storage may
+        flush in-memory data structures to disk when it is closed.
+        Leaving the storage open when the process exits can cause the
+        next open to be slow.
+
+        What effect does closing the database have on existing
+        connections?  Technically, they remain open, but their storage
+        is closed, so they stop behaving usefully.  Perhaps close()
+        should also close all the Connections.
+        """
+
 class IStorage(Interface):
     """A storage is responsible for storing and retrieving data of objects.
     """
 
-## What follows is the union of methods found across various storage
-## implementations.  Exactly what "the storage API" is and means has
-## become fuzzy over time.  Methods should be uncommented here, or
-## even deleted, as the storage API regains a crisp definition.
+    def close():
+        """Close the storage.
+        """
 
-##    def load(oid, version):
-##        """TODO"""
-##
-##    def close():
-##        """TODO"""
-##
-##    def cleanup():
-##        """TODO"""
-##
-##    def lastSerial():
-##        """TODO"""
-##
-##    def lastTransaction():
-##        """TODO"""
-##
-##    def lastTid(oid):
-##        """Return last serialno committed for object oid."""
-##
-##    def loadSerial(oid, serial):
-##        """TODO"""
-##
-##    def loadBefore(oid, tid):
-##        """TODO"""
-##
-##    def iterator(start=None, stop=None):
-##        """TODO"""
-##
-##    def sortKey():
-##        """TODO"""
-##
-##    def getName():
-##        """TODO"""
-##
-##    def getSize():
-##        """TODO"""
-##
-##    def history(oid, version, length=1, filter=None):
-##        """TODO"""
-##
-##    def new_oid():
-##        """TODO"""
-##
-##    def set_max_oid(possible_new_max_oid):
-##        """TODO"""
-##
-##    def registerDB(db):
-##        """TODO"""
-##
-##    def isReadOnly():
-##        """TODO"""
-##
-##    def supportsUndo():
-##        """TODO"""
-##
-##    def supportsVersions():
-##        """TODO"""
-##
-##    def tpc_abort(transaction):
-##        """TODO"""
-##
-##    def tpc_begin(transaction):
-##        """TODO"""
-##
-##    def tpc_vote(transaction):
-##        """TODO"""
-##
-##    def tpc_finish(transaction, f=None):
-##        """TODO"""
-##
-##    def getSerial(oid):
-##        """TODO"""
-##
-##    def loadSerial(oid, serial):
-##        """TODO"""
-##
-##    def loadBefore(oid, tid):
-##        """TODO"""
-##
-##    def getExtensionMethods():
-##        """TODO"""
-##
-##    def copyTransactionsFrom():
-##        """TODO"""
-##
-##    def store(oid, oldserial, data, version, transaction):
-##        """
-##
-##        may return the new serial or not
-##        """
+    def getName():
+        """The name of the storage
 
+        The format and interpretation of this name is storage
+        dependent. It could be a file name, a database name, etc.
+
+        This is used solely for informational purposes.
+        """
+
+    def getSize():
+        """An approximate size of the database, in bytes.
+        
+        This is used solely for informational purposes.
+        """
+
+    def history(oid, version, size=1):
+        """Return a sequence of history information dictionaries.
+
+        Up to size objects (including no objects) may be returned.
+        
+        The information provides a log of the changes made to the
+        object. Data are reported in reverse chronological order.
+
+        Each dictionary has the following keys:
+
+        time
+            UTC seconds since the epoch (as in time.time) that the
+            object revision was committed.
+        tid
+            The transaction identifier of the transaction that
+            committed the version.
+        version
+            The version that the revision is in.  If the storage
+            doesn't support versions, then this must be an empty
+            string.
+        user_name
+            The user identifier, if any (or an empty string) of the
+            user on whose behalf the revision was committed.
+        description
+            The transaction description for the transaction that
+            committed the revision.
+        size
+            The size of the revision data record.
+
+        If the transaction had extension items, then these items are
+        also included if they don't conflict with the keys above.
+        """
+
+    def isReadOnly():
+        """Test whether a storage allows committing new transactions
+
+        For a given storage instance, this method always returns the
+        same value.  Read-only-ness is a static property of a storage.
+        """
+
+    def lastTransaction():
+        """Return the id of the last committed transaction
+        """
+
+    def __len__():
+        """The approximate number of objects in the storage
+        
+        This is used solely for informational purposes.
+        """
+
+    def load(oid, version):
+        """Load data for an object id and version
+
+        A data record and serial are returned.  The serial is a
+        transaction identifier of the transaction that wrote the data
+        record.
+
+        A POSKeyError is raised if there is no record for the object
+        id and version.
+
+        Storages that don't support versions must ignore the version
+        argument.
+        """
+
+    def loadBefore(oid, tid):
+        """Load the object data written before a transaction id
+
+        If there is no data record for the object before the given
+        transaction id, then None is returned; otherwise three values are
+        returned:
+
+        - The data record
+
+        - The transaction id of the data record
+
+        - The transaction id of the following revision, if any, or None.
+        """
+
+    def loadSerial(oid, serial):
+        """Load the object record for the given transaction id
+
+        If a matching data record can be found, it is returned,
+        otherwise, POSKeyError is raised.
+        """
+
+    def new_oid():
+        """Allocate a new object id.
+
+        The object id returned is reserved at least as long as the
+        storage is opened.
+
+        The return value is a string.
+        """
+
+    def pack(pack_time, referencesf):
+        """Pack the storage
+
+        It is up to the storage to interpret this call; however, the
+        general idea is that the storage frees space by:
+
+        - discarding object revisions that were old and not current as of the
+          given pack time.
+
+        - garbage collecting objects that aren't reachable from the
+          root object via revisions remaining after discarding
+          revisions that were not current as of the pack time.
+
+        The pack time is given as a UTC time in seconds since the
+        epoch.
+
+        The second argument is a function that should be used to
+        extract object references from database records.  This is
+        needed to determine which objects are referenced from object
+        revisions.
+        """
+
+    def registerDB(db):
+        """Register an IStorageDB.
+
+        Note that, for historical reasons, an implementation may
+        require a second argument; however, if required, None will
+        be passed as the second argument.
+        """
+
+    def sortKey():
+        """Sort key used to order distributed transactions
+
+        When a transaction involves multiple storages, 2-phase commit
+        operations are applied in sort-key order.  This must be unique
+        among storages used in a transaction. Obviously, the storage
+        can't assure this, but it should construct the sort key so it
+        has a reasonable chance of being unique.
+        """
+
+    def store(oid, serial, data, version, transaction):
+        """Store data for the object id, oid.
+
+        Arguments:
+
+        oid
+            The object identifier.  This is either a string
+            consisting of 8 nulls or a string previously returned by
+            new_oid. 
+
+        serial
+            The serial of the data that was read when the object was
+            loaded from the database.  If the object was created in
+            the current transaction this will be a string consisting
+            of 8 nulls.
+
+        data
+            The data record. This is opaque to the storage.
+
+        version
+            The version to store the data in.  If the storage doesn't
+            support versions, this should be an empty string and the
+            storage is allowed to ignore it.
+
+        transaction
+            A transaction object.  This should match the current
+            transaction for the storage, set by tpc_begin.
+
+        The new serial for the object is returned, but not necessarily
+        immediately.  It may be returned directly, or in a subsequent
+        store or tpc_vote call.
+
+        The return value may be:
+
+        - None
+
+        - A new serial (string) for the object, or
+
+        - An iterable of object-id and serial pairs giving new serials
+          for objects.
+        """
+
+    def tpc_abort(transaction):
+        """Abort the transaction.
+
+        Any changes made by the transaction are discarded.
+
+        This call is ignored if the storage is not participating in
+        two-phase commit or if the given transaction is not the same
+        as the transaction the storage is committing.
+        """
+
+    def tpc_begin(transaction):
+        """Begin the two-phase commit process.
+
+        If storage is already participating in a two-phase commit
+        using the same transaction, the call is ignored.
+
+        If the storage is already participating in a two-phase commit
+        using a different transaction, the call blocks until the
+        current transaction ends (commits or aborts).
+        """
+
+    def tpc_finish(transaction, func = lambda: None):
+        """Finish the transaction, making any transaction changes permanent.
+
+        Changes must be made permanent at this point.
+
+        This call is ignored if the storage isn't participating in
+        two-phase commit or if it is committing a different
+        transaction.  Failure of this method is extremely serious.
+        """
+
+    def tpc_vote(transaction):
+        """Provide a storage with an opportunity to veto a transaction
+
+        This call is ignored if the storage isn't participating in
+        two-phase commit or if it is committing a different
+        transaction.  Failure of this method is extremely serious.
+
+        If a transaction can be committed by a storage, then the
+        method should return.  If a transaction cannot be committed,
+        then an exception should be raised.  If this method returns
+        without an error, then there must not be an error if
+        tpc_finish or tpc_abort is called subsequently.
+
+        The return value can be either None or a sequence of object-id
+        and serial pairs giving new serials for objects whose ids were
+        passed to previous store calls in the same transaction.
+        After the tpc_vote call, new serials must have been returned,
+        either from tpc_vote or store for objects passed to store.
+        """
+
+class IStorageRestoreable(IStorage):
+
+    def tpc_begin(transaction, tid=None):
+        """Begin the two-phase commit process.
+
+        If storage is already participating in a two-phase commit
+        using the same transaction, the call is ignored.
+
+        If the storage is already participating in a two-phase commit
+        using a different transaction, the call blocks until the
+        current transaction ends (commits or aborts).
+
+        If a transaction id is given, then the transaction will use
+        the given id rather than generating a new id.  This is used
+        when copying already committed transactions from another
+        storage.
+        """
+
+        # Note that the current implementation also accepts a status.
+        # This is an artifact of:
+        # - Earlier use of an undo status to undo revisions in place,
+        #   and,
+        # - Incorrect pack garbage-collection algorithms (possibly
+        #   including the existing FileStorage implementation), that
+        #   failed to take into account records after the pack time.
+        
+
+    def restore(oid, serial, data, version, prev_txn, transaction):
+        """Write data already committed in a separate database
+
+        The restore method is used when copying data from one database
+        to a replica of the database.  It differs from store in that
+        the data have already been committed, so there is no check for
+        conflicts and no new transaction is used for the data.
+
+        Arguments:
+
+        oid
+             The object id for the record
+        
+        serial
+             The transaction identifier that originally committed this object.
+
+        data
+             The record data.  This will be None if the transaction
+             undid the creation of the object.
+
+        version
+             The version identifier for the record
+
+        prev_txn
+             The identifier of a previous transaction that held the
+             object data.  The target storage can sometimes use this
+             as a hint to save space.
+
+        transaction
+             The current transaction.
+
+        Nothing is returned.
+        """
+
+class IStorageRecordInformation(Interface):
+    """Provide information about a single storage record
+    """
+
+    oid = Attribute("The object id")
+    version = Attribute("The version")
+    data = Attribute("The data record")
+
+class IStorageTransactionInformation(Interface):
+    """Provide information about a storage transaction
+    """
+
+    tid = Attribute("Transaction id")
+    status = Attribute("Transaction Status") # XXX what are valid values?
+    user = Attribute("Transaction user")
+    description = Attribute("Transaction Description")
+    extension = Attribute("Transaction extension data")
+
+    def __iter__():
+        """Return an iterable of IStorageRecordInformation objects
+        """
+
+class IStorageIteration(Interface):
+    """API for iterating over the contents of a storage
+
+    Note that this is a future API.  Some storages now provide an
+    approximation of this.
+
+    """
+
+    def iterator(start=None, stop=None):
+        """Return an IStorageTransactionInformation iterator.
+
+        An IStorageTransactionInformation iterator is returned for
+        iterating over the transactions in the storage.
+
+        If the start argument is not None, then iteration will start
+        with the first transaction whose identifier is greater than or
+        equal to start.
+
+        If the stop argument is not None, then iteration will end with
+        the last transaction whose identifier is less than or equal to
+        stop.
+
+        """
+
 class IStorageUndoable(IStorage):
     """A storage supporting transactional undo.
     """
 
-    def undo(transaction_id, txn):
-        """TODO"""
+    def supportsUndo():
+        """Return True, indicating that the storage supports undo.
+        """
 
-    def undoLog(first, last, filter=(lambda desc: True)):
+    def undo(transaction_id, transaction):
+        """Undo the transaction corresponding to the given transaction id.
+
+        The transaction id is a value returned from undoInfo or
+        undoLog, which may not be a stored transaction identifier as
+        used elsewhere in the storage APIs.
+
+        This method must only be called in the first phase of
+        two-phase commit (after tpc_begin but before tpc_vote). It
+        returns a serial (transaction id) and a sequence of object ids
+        for objects affected by the transaction.
+
+        """
+        # Used by DB (Actually, by TransactionalUndo)
+
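
For illustration only, the calling pattern described above might look like this at the storage level (normally DB and TransactionalUndo drive it); the 'id' key is the form undoInfo/undoLog entries usually take, not something this interface spells out:

    import transaction

    t = transaction.Transaction()
    info = storage.undoInfo()            # most recent undoable transactions
    storage.tpc_begin(t)
    serial, oids = storage.undo(info[0]['id'], t)
    storage.tpc_vote(t)
    storage.tpc_finish(t)
    # oids lists the objects whose current state was changed by the undo.
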
+    def undoLog(first, last, filter=None):
         """Return a sequence of descriptions for undoable transactions.
 
         Application code should call undoLog() on a DB instance instead of on
@@ -551,8 +850,9 @@
                      could be gotten by passing the positive first-last for
                      `last` instead.
         """
+        # DB pass-through
 
-    def undoInfo(first, last, specification=None):
+    def undoInfo(first=0, last=-20, specification=None):
         """Return a sequence of descriptions for undoable transactions.
 
         This is like `undoLog()`, except for the `specification` argument.
@@ -567,30 +867,21 @@
         ZEO client to its ZEO server (while a ZEO client ignores any `filter`
         argument passed to `undoLog()`).
         """
+        # DB pass-through
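
At the application level the same information is reached through the DB object, roughly as follows, assuming db is an open ZODB.DB instance and that DB.undo(id) arranges for the undo to be carried out when the surrounding transaction commits:

    import transaction

    log = db.undoLog(0, 20)              # descriptions of recent transactions
    db.undo(log[0]['id'])                # schedule undo of the most recent one
    transaction.commit()                 # the undo takes effect here
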
 
-    def pack(t, referencesf):
-        """TODO"""
 
-class IStorageVersioning(IStorage):
-    """A storage supporting versions.
-    """
+class IStorageCurrentRecordIteration(IStorage):
 
-## What follows is the union of methods found across various version storage
-## implementations.  Exactly what "the storage API" is and means has
-## become fuzzy over time.  Methods should be uncommented here, or
-## even deleted, as the storage API regains a crisp definition.
+    def record_iternext(next=None):
+        """Iterate over the records in a storage
 
-##    def abortVersion(src, transaction):
-##        """TODO"""
-##
-##    def commitVersion(src, dest, transaction):
-##        """TODO"""
-##
-##    def modifiedInVersion(oid):
-##        """TODO"""
-##
-##    def versionEmpty(version):
-##        """TODO"""
-##
-##    def versions(max=None):
-##        """TODO"""
+        Use like this:
+
+            >>> next = None
+            >>> while 1:
+            ...     oid, tid, data, next = storage.record_iternext(next)
+            ...     # do things with oid, tid, and data
+            ...     if next is None:
+            ...         break
+        
+        """

Modified: ZODB/branches/jim-zeo-registerdb/src/ZODB/tests/BasicStorage.py
===================================================================
--- ZODB/branches/jim-zeo-registerdb/src/ZODB/tests/BasicStorage.py	2007-05-09 18:41:39 UTC (rev 75657)
+++ ZODB/branches/jim-zeo-registerdb/src/ZODB/tests/BasicStorage.py	2007-05-09 20:45:50 UTC (rev 75658)
@@ -28,8 +28,6 @@
 
 ZERO = '\0'*8
 
-
-
 class BasicStorage:
     def checkBasics(self):
         t = transaction.Transaction()
@@ -46,22 +44,24 @@
             self._storage.store,
             0, 0, 0, 0, transaction.Transaction())
 
-        try:
-            self._storage.abortVersion('dummy', transaction.Transaction())
-        except (POSException.StorageTransactionError,
-                POSException.VersionCommitError):
-            pass # test passed ;)
-        else:
-            assert 0, "Should have failed, invalid transaction."
+        if self.__supportsVersions():
+            try:
+                self._storage.abortVersion(
+                    'dummy', transaction.Transaction())
+            except (POSException.StorageTransactionError,
+                    POSException.VersionCommitError):
+                pass # test passed ;)
+            else:
+                assert 0, "Should have failed, invalid transaction."
 
-        try:
-            self._storage.commitVersion('dummy', 'dummer',
-                                        transaction.Transaction())
-        except (POSException.StorageTransactionError,
-                POSException.VersionCommitError):
-            pass # test passed ;)
-        else:
-            assert 0, "Should have failed, invalid transaction."
+            try:
+                self._storage.commitVersion('dummy', 'dummer',
+                                            transaction.Transaction())
+            except (POSException.StorageTransactionError,
+                    POSException.VersionCommitError):
+                pass # test passed ;)
+            else:
+                assert 0, "Should have failed, invalid transaction."
 
         self.assertRaises(
             POSException.StorageTransactionError,
@@ -69,6 +69,15 @@
             0, 1, 2, 3, transaction.Transaction())
         self._storage.tpc_abort(t)
 
+    def __supportsVersions(self):
+        storage = self._storage
+        try:
+            supportsVersions = storage.supportsVersions
+        except AttributeError:
+            return False
+        else:
+            return supportsVersions()
+
     def checkSerialIsNoneForInitialRevision(self):
         eq = self.assertEqual
         oid = self._storage.new_oid()
@@ -107,9 +116,10 @@
         eq(zodb_unpickle(data), MinPO(21))
 
     def checkNonVersionModifiedInVersion(self):
-        oid = self._storage.new_oid()
-        self._dostore(oid=oid)
-        self.assertEqual(self._storage.modifiedInVersion(oid), '')
+        if self.__supportsVersions():
+            oid = self._storage.new_oid()
+            self._dostore(oid=oid)
+            self.assertEqual(self._storage.modifiedInVersion(oid), '')
 
     def checkConflicts(self):
         oid = self._storage.new_oid()
@@ -161,19 +171,19 @@
         revid4 = self._dostore(oid2, revid=revid2, data=p52)
         noteq(revid3, revid4)
 
-    def checkGetSerial(self):
-        if not hasattr(self._storage, 'getSerial'):
+    def checkGetTid(self):
+        if not hasattr(self._storage, 'getTid'):
             return
         eq = self.assertEqual
         p41, p42 = map(MinPO, (41, 42))
         oid = self._storage.new_oid()
-        self.assertRaises(KeyError, self._storage.getSerial, oid)
+        self.assertRaises(KeyError, self._storage.getTid, oid)
         # Now store a revision
         revid1 = self._dostore(oid, data=p41)
-        eq(revid1, self._storage.getSerial(oid))
+        eq(revid1, self._storage.getTid(oid))
         # And another one
         revid2 = self._dostore(oid, revid=revid1, data=p42)
-        eq(revid2, self._storage.getSerial(oid))
+        eq(revid2, self._storage.getTid(oid))
 
     def checkTwoArgBegin(self):
         # Unsure: how standard is three-argument tpc_begin()?
@@ -212,10 +222,3 @@
         self._storage.store(oid, ZERO, zodb_pickle(MinPO(5)), '', t)
         self._storage.tpc_vote(t)
         self._storage.tpc_finish(t)
-
-    def checkGetExtensionMethods(self):
-        m = self._storage.getExtensionMethods()
-        self.assertEqual(type(m),type({}))
-        for k,v in m.items():
-            self.assertEqual(v,None)
-            self.assert_(callable(getattr(self._storage,k)))

Modified: ZODB/branches/jim-zeo-registerdb/src/ZODB/tests/IteratorStorage.py
===================================================================
--- ZODB/branches/jim-zeo-registerdb/src/ZODB/tests/IteratorStorage.py	2007-05-09 18:41:39 UTC (rev 75657)
+++ ZODB/branches/jim-zeo-registerdb/src/ZODB/tests/IteratorStorage.py	2007-05-09 20:45:50 UTC (rev 75658)
@@ -87,11 +87,6 @@
                 pass
 
     def checkUndoZombieNonVersion(self):
-        if not hasattr(self._storage, 'supportsTransactionalUndo'):
-            return
-        if not self._storage.supportsTransactionalUndo():
-            return
-
         oid = self._storage.new_oid()
         revid = self._dostore(oid, data=MinPO(94))
         # Get the undo information
@@ -149,10 +144,10 @@
         finally:
             self._storage.tpc_finish(t)
 
-    def checkLoadEx(self):
+    def checkLoad_was_checkLoadEx(self):
         oid = self._storage.new_oid()
         self._dostore(oid, data=42)
-        data, tid, ver = self._storage.loadEx(oid, "")
+        data, tid = self._storage.load(oid, "")
         self.assertEqual(zodb_unpickle(data), MinPO(42))
         match = False
         for txn in self._storage.iterator():
@@ -162,6 +157,7 @@
                     match = True
         if not match:
             self.fail("Could not find transaction with matching id")
+ 
 
 
 class ExtendedIteratorStorage(IteratorCompare):

Modified: ZODB/branches/jim-zeo-registerdb/src/ZODB/tests/MTStorage.py
===================================================================
--- ZODB/branches/jim-zeo-registerdb/src/ZODB/tests/MTStorage.py	2007-05-09 18:41:39 UTC (rev 75657)
+++ ZODB/branches/jim-zeo-registerdb/src/ZODB/tests/MTStorage.py	2007-05-09 20:45:50 UTC (rev 75658)
@@ -162,9 +162,25 @@
     def runtest(self):
         # pick some other storage ops to execute, depending in part
         # on the features provided by the storage.
-        names = ["do_load", "do_modifiedInVersion"]
-        if self.storage.supportsUndo():
-            names += ["do_loadSerial", "do_undoLog", "do_iterator"]
+        names = ["do_load"]
+
+        storage = self.storage
+        try:
+            supportsVersions = storage.supportsVersions
+        except AttributeError:
+            pass
+        else:
+            if supportsVersions():
+                names.append("do_modifiedInVersion")
+
+        try:
+            supportsUndo = storage.supportsUndo
+        except AttributeError:
+            pass
+        else:
+            if supportsUndo():
+                names += ["do_loadSerial", "do_undoLog", "do_iterator"]
+
         ops = [getattr(self, meth) for meth in names]
         assert ops, "Didn't find any storage ops in %s" % self.storage
         # do a store to guarantee there's at least one oid in self.oids

Modified: ZODB/branches/jim-zeo-registerdb/src/ZODB/tests/ReadOnlyStorage.py
===================================================================
--- ZODB/branches/jim-zeo-registerdb/src/ZODB/tests/ReadOnlyStorage.py	2007-05-09 18:41:39 UTC (rev 75657)
+++ ZODB/branches/jim-zeo-registerdb/src/ZODB/tests/ReadOnlyStorage.py	2007-05-09 20:45:50 UTC (rev 75658)
@@ -59,6 +59,5 @@
         self.assertRaises(ReadOnlyError, self._storage.store,
                           '\000' * 8, None, '', '', t)
 
-        if self._storage.supportsTransactionalUndo():
-            self.assertRaises(ReadOnlyError, self._storage.undo,
-                              '\000' * 8, t)
+        self.assertRaises(ReadOnlyError, self._storage.undo,
+                          '\000' * 8, t)

Modified: ZODB/branches/jim-zeo-registerdb/src/ZODB/tests/RevisionStorage.py
===================================================================
--- ZODB/branches/jim-zeo-registerdb/src/ZODB/tests/RevisionStorage.py	2007-05-09 18:41:39 UTC (rev 75657)
+++ ZODB/branches/jim-zeo-registerdb/src/ZODB/tests/RevisionStorage.py	2007-05-09 20:45:50 UTC (rev 75658)
@@ -52,7 +52,7 @@
             snooze()
             snooze()
             revid = self._dostore(oid, revid, data=MinPO(i))
-            revs.append(self._storage.loadEx(oid, ""))
+            revs.append(self._storage.load(oid, ""))
 
         prev = u64(revs[0][1])
         for i in range(1, 10):
@@ -123,10 +123,10 @@
             # Always undo the most recent txn, so the value will
             # alternate between 3 and 4.
             self._undo(tid, [oid], note="undo %d" % i)
-            revs.append(self._storage.loadEx(oid, ""))
+            revs.append(self._storage.load(oid, ""))
 
         prev_tid = None
-        for i, (data, tid, ver) in enumerate(revs):
+        for i, (data, tid) in enumerate(revs):
             t = self._storage.loadBefore(oid, p64(u64(tid) + 1))
             self.assertEqual(data, t[0])
             self.assertEqual(tid, t[1])

Modified: ZODB/branches/jim-zeo-registerdb/src/ZODB/tests/Synchronization.py
===================================================================
--- ZODB/branches/jim-zeo-registerdb/src/ZODB/tests/Synchronization.py	2007-05-09 18:41:39 UTC (rev 75657)
+++ ZODB/branches/jim-zeo-registerdb/src/ZODB/tests/Synchronization.py	2007-05-09 20:45:50 UTC (rev 75658)
@@ -84,21 +84,33 @@
         self.assertRaises(StorageTransactionError, callable, *args)
         self._storage.tpc_abort(t)
 
+    def __supportsVersions(self):
+        storage = self._storage
+        try:
+            supportsVersions = storage.supportsVersions
+        except AttributeError:
+            return False
+        return supportsVersions()
+
     def checkAbortVersionNotCommitting(self):
-        self.verifyNotCommitting(self._storage.abortVersion,
-                                 VERSION, Transaction())
+        if self.__supportsVersions():
+            self.verifyNotCommitting(self._storage.abortVersion,
+                                     VERSION, Transaction())
 
     def checkAbortVersionWrongTrans(self):
-        self.verifyWrongTrans(self._storage.abortVersion,
-                              VERSION, Transaction())
+        if self.__supportsVersions():
+            self.verifyWrongTrans(self._storage.abortVersion,
+                                  VERSION, Transaction())
 
     def checkCommitVersionNotCommitting(self):
-        self.verifyNotCommitting(self._storage.commitVersion,
-                                 VERSION, "", Transaction())
+        if self.__supportsVersions():
+            self.verifyNotCommitting(self._storage.commitVersion,
+                                     VERSION, "", Transaction())
 
     def checkCommitVersionWrongTrans(self):
-        self.verifyWrongTrans(self._storage.commitVersion,
-                              VERSION, "", Transaction())
+        if self.__supportsVersions():
+            self.verifyWrongTrans(self._storage.commitVersion,
+                                  VERSION, "", Transaction())
 
 
     def checkStoreNotCommitting(self):

Modified: ZODB/branches/jim-zeo-registerdb/src/ZODB/tests/TransactionalUndoStorage.py
===================================================================
--- ZODB/branches/jim-zeo-registerdb/src/ZODB/tests/TransactionalUndoStorage.py	2007-05-09 18:41:39 UTC (rev 75657)
+++ ZODB/branches/jim-zeo-registerdb/src/ZODB/tests/TransactionalUndoStorage.py	2007-05-09 20:45:50 UTC (rev 75658)
@@ -143,7 +143,7 @@
         eq(zodb_unpickle(data), MinPO(23))
         self._iterate()
 
-    def checkCreationUndoneGetSerial(self):
+    def checkCreationUndoneGetTid(self):
         # create an object
         oid = self._storage.new_oid()
         self._dostore(oid, data=MinPO(23))
@@ -156,9 +156,9 @@
         self._storage.undo(tid, t)
         self._storage.tpc_vote(t)
         self._storage.tpc_finish(t)
-        # Check that calling getSerial on an uncreated object raises a KeyError
+        # Check that calling getTid on an uncreated object raises a KeyError
         # The current version of FileStorage fails this test
-        self.assertRaises(KeyError, self._storage.getSerial, oid)
+        self.assertRaises(KeyError, self._storage.getTid, oid)
 
     def checkUndoCreationBranch1(self):
         eq = self.assertEqual

Modified: ZODB/branches/jim-zeo-registerdb/src/ZODB/tests/TransactionalUndoVersionStorage.py
===================================================================
--- ZODB/branches/jim-zeo-registerdb/src/ZODB/tests/TransactionalUndoVersionStorage.py	2007-05-09 18:41:39 UTC (rev 75657)
+++ ZODB/branches/jim-zeo-registerdb/src/ZODB/tests/TransactionalUndoVersionStorage.py	2007-05-09 20:45:50 UTC (rev 75658)
@@ -22,6 +22,7 @@
 from ZODB.tests.MinPO import MinPO
 from ZODB.tests.StorageTestBase import zodb_unpickle
 
+from ZODB.tests.VersionStorage import loadEx
 
 class TransactionalUndoVersionStorage:
 
@@ -132,16 +133,15 @@
         self.assertEqual(load_value(oid1), 0)
         self.assertEqual(load_value(oid1, version), 2)
 
-        data, tid, ver = self._storage.loadEx(oid1, "")
+        data, tid = self._storage.load(oid1, "")
         # After undoing the version commit, the non-version data
         # once again becomes the non-version data from 'create1'.
         self.assertEqual(tid, self._storage.lastTransaction())
-        self.assertEqual(ver, "")
 
         # The current version data comes from an undo record, which
         # means that it gets data via the backpointer but tid from the
         # current txn.
-        data, tid, ver = self._storage.loadEx(oid1, version)
+        data, tid, ver = loadEx(self._storage, oid1, version)
         self.assertEqual(ver, version)
         self.assertEqual(tid, self._storage.lastTransaction())
 

Modified: ZODB/branches/jim-zeo-registerdb/src/ZODB/tests/VersionStorage.py
===================================================================
--- ZODB/branches/jim-zeo-registerdb/src/ZODB/tests/VersionStorage.py	2007-05-09 18:41:39 UTC (rev 75657)
+++ ZODB/branches/jim-zeo-registerdb/src/ZODB/tests/VersionStorage.py	2007-05-09 20:45:50 UTC (rev 75658)
@@ -16,7 +16,7 @@
 Any storage that supports versions should be able to pass all these tests.
 """
 
-import time
+import time, warnings
 
 import transaction
 from transaction import Transaction
@@ -27,6 +27,19 @@
 from ZODB.tests.StorageTestBase import zodb_unpickle, snooze
 from ZODB import DB
 
+def loadEx(storage, oid, version):
+    v = storage.modifiedInVersion(oid)
+    if v == version:
+        data, serial = storage.load(oid, version)
+        return data, serial, version
+    else:
+        data, serial = storage.load(oid, '')
+        return data, serial, ''
+        
+warnings.filterwarnings(
+    'ignore', message='Versions are deprecated', module=__name__)
+
+
 class VersionStorage:
 
     def checkCommitVersionSerialno(self):
@@ -46,7 +59,7 @@
         revid1 = self._dostore(oid, data=MinPO(12))
         revid2 = self._dostore(oid, revid=revid1, data=MinPO(13),
                                version="version")
-        data, tid, ver = self._storage.loadEx(oid, "version")
+        data, tid, ver = loadEx(self._storage, oid, "version")
         self.assertEqual(revid2, tid)
         self.assertEqual(zodb_unpickle(data), MinPO(13))
         oids = self._abortVersion("version")
@@ -55,7 +68,7 @@
         # use repr() to avoid getting binary data in a traceback on error
         self.assertNotEqual(revid1, revid3)
         self.assertNotEqual(revid2, revid3)
-        data, tid, ver = self._storage.loadEx(oid, "")
+        data, tid = self._storage.load(oid, "")
         self.assertEqual(revid3, tid)
         self.assertEqual(zodb_unpickle(data), MinPO(12))
         self.assertEqual(tid, self._storage.lastTransaction())
@@ -80,13 +93,13 @@
         eq(zodb_unpickle(data), MinPO(12))
         data, vrevid = self._storage.load(oid, version)
         eq(zodb_unpickle(data), MinPO(15))
-        if hasattr(self._storage, 'getSerial'):
-            s = self._storage.getSerial(oid)
+        if hasattr(self._storage, 'getTid'):
+            s = self._storage.getTid(oid)
             eq(s, max(revid, vrevid))
-        data, tid, ver = self._storage.loadEx(oid, version)
+        data, tid, ver = loadEx(self._storage, oid, version)
         eq(zodb_unpickle(data), MinPO(15))
         eq(tid, revid2)
-        data, tid, ver = self._storage.loadEx(oid, "other version")
+        data, tid, ver = loadEx(self._storage, oid, "other version")
         eq(zodb_unpickle(data), MinPO(12))
         eq(tid, revid2)
         # loadSerial returns non-version data
@@ -186,7 +199,7 @@
         eq = self.assertEqual
         oid, version = self._setup_version()
 
-        # Not sure I can write a test for getSerial() in the
+        # Not sure I can write a test for getTid() in the
         # presence of aborted versions, because FileStorage and
         # Berkeley storage give a different answer. I think Berkeley
         # is right and FS is wrong.
@@ -202,7 +215,7 @@
         # after a version is aborted.
         oid, version = self._setup_version()
         self._abortVersion(version)
-        data, tid, ver = self._storage.loadEx(oid, "")
+        data, tid, ver = loadEx(self._storage, oid, "")
         # write a new revision of oid so that the aborted-version txn
         # is not current
         self._dostore(oid, revid=tid, data=MinPO(17))
@@ -219,11 +232,9 @@
         self._storage.tpc_begin(t)
 
         # And try to abort the empty version
-        if (hasattr(self._storage, 'supportsTransactionalUndo') and
-                self._storage.supportsTransactionalUndo()):
-            self.assertRaises(POSException.VersionError,
-                              self._storage.abortVersion,
-                              '', t)
+        self.assertRaises(POSException.VersionError,
+                          self._storage.abortVersion,
+                          '', t)
 
         # But now we really try to abort the version
         tid, oids = self._storage.abortVersion(version, t)
@@ -235,9 +246,6 @@
         eq(zodb_unpickle(data), MinPO(51))
 
     def checkCommitVersionErrors(self):
-        if not (hasattr(self._storage, 'supportsTransactionalUndo') and
-                self._storage.supportsTransactionalUndo()):
-            return
         eq = self.assertEqual
         oid1, version1 = self._setup_version('one')
         data, revid1 = self._storage.load(oid1, version1)

Modified: ZODB/branches/jim-zeo-registerdb/src/ZODB/tests/testConnection.py
===================================================================
--- ZODB/branches/jim-zeo-registerdb/src/ZODB/tests/testConnection.py	2007-05-09 18:41:39 UTC (rev 75657)
+++ ZODB/branches/jim-zeo-registerdb/src/ZODB/tests/testConnection.py	2007-05-09 20:45:50 UTC (rev 75658)
@@ -24,12 +24,6 @@
 from ZODB.tests.warnhook import WarningsHook
 from zope.interface.verify import verifyObject
 
-# deprecated37  remove when subtransactions go away
-# Don't complain about subtxns in these tests.
-warnings.filterwarnings("ignore",
-                        ".*\nsubtransactions are deprecated",
-                        DeprecationWarning, __name__)
-
 class ConnectionDotAdd(unittest.TestCase):
 
     def setUp(self):
@@ -292,35 +286,6 @@
         10
         >>> cn.close(); cn2.close()
 
-        Bug:  We weren't catching the case where the only changes pending
-        were in a subtransaction.
-        >>> cn = db.open()
-        >>> cn.root()['a'] = 100
-        >>> transaction.commit(True)
-        >>> cn.close()  # this was succeeding
-        Traceback (most recent call last):
-          ...
-        ConnectionStateError: Cannot close a connection joined to a transaction
-
-        Again this leaves the connection as it was.
-        >>> transaction.commit()
-        >>> cn2 = db.open()
-        >>> cn2.root()['a']
-        100
-        >>> cn.close(); cn2.close()
-
-        Make sure we can still close a connection after aborting a pending
-        subtransaction.
-        >>> cn = db.open()
-        >>> cn.root()['a'] = 1000
-        >>> transaction.commit(True)
-        >>> cn.root()['a']
-        1000
-        >>> transaction.abort()
-        >>> cn.root()['a']
-        100
-        >>> cn.close()
-
         >>> db.close()
         """
 

Modified: ZODB/branches/jim-zeo-registerdb/src/ZODB/tests/testConnectionSavepoint.py
===================================================================
--- ZODB/branches/jim-zeo-registerdb/src/ZODB/tests/testConnectionSavepoint.py	2007-05-09 18:41:39 UTC (rev 75657)
+++ ZODB/branches/jim-zeo-registerdb/src/ZODB/tests/testConnectionSavepoint.py	2007-05-09 20:45:50 UTC (rev 75658)
@@ -54,11 +54,8 @@
 The problem was that we were effectively commiting the object twice --
 when commiting the current data and when committing the savepoint.
 The fix was to first make a new savepoint to move new changes to the
-savepoint storage and *then* to commit the savepoint storage. (This is
-similar to the strategy that was used for subtransactions prior to
-savepoints.)
+savepoint storage and *then* to commit the savepoint storage.
 
-
     >>> import ZODB.tests.util
     >>> db = ZODB.tests.util.DB()
     >>> connection = db.open()
@@ -90,8 +87,7 @@
     """\
 Although the interface doesn't guarantee this internal detail, making a
 savepoint should do incremental gc on connection memory caches.  Indeed,
-one traditional use for savepoints (started by the older, related
-"subtransaction commit" idea) is simply to free memory space midstream
+one traditional use for savepoints is simply to free memory space midstream
 during a long transaction.  Before ZODB 3.4.2, making a savepoint failed
 to trigger cache gc, and this test verifies that it now does.
 

Modified: ZODB/branches/jim-zeo-registerdb/src/ZODB/tests/testDB.py
===================================================================
--- ZODB/branches/jim-zeo-registerdb/src/ZODB/tests/testDB.py	2007-05-09 18:41:39 UTC (rev 75657)
+++ ZODB/branches/jim-zeo-registerdb/src/ZODB/tests/testDB.py	2007-05-09 20:45:50 UTC (rev 75658)
@@ -35,6 +35,8 @@
         self.__path = os.path.abspath('test.fs')
         store = ZODB.FileStorage.FileStorage(self.__path)
         self.db = ZODB.DB(store)
+        warnings.filterwarnings(
+            'ignore', message='Versions are deprecated', module=__name__)
 
     def tearDown(self):
         self.db.close()

Modified: ZODB/branches/jim-zeo-registerdb/src/ZODB/tests/testDemoStorage.py
===================================================================
--- ZODB/branches/jim-zeo-registerdb/src/ZODB/tests/testDemoStorage.py	2007-05-09 18:41:39 UTC (rev 75657)
+++ ZODB/branches/jim-zeo-registerdb/src/ZODB/tests/testDemoStorage.py	2007-05-09 20:45:50 UTC (rev 75658)
@@ -13,8 +13,12 @@
 ##############################################################################
 import unittest
 
-from ZODB.tests import StorageTestBase, BasicStorage, \
-     VersionStorage, Synchronization
+import transaction
+from ZODB.DB import DB
+import ZODB.utils
+import ZODB.DemoStorage
+from ZODB.tests import StorageTestBase, BasicStorage, VersionStorage
+from ZODB.tests import Synchronization
 
 class DemoStorageTests(StorageTestBase.StorageTestBase,
                        BasicStorage.BasicStorage,
@@ -23,7 +27,6 @@
                        ):
 
     def setUp(self):
-        import ZODB.DemoStorage
         self._storage = ZODB.DemoStorage.DemoStorage()
 
     def tearDown(self):
@@ -54,7 +57,14 @@
     def checkPackVersionsInPast(self):
         pass
 
+    def checkLoadDelegation(self):
+        # Minimal test of loadEx w/o version -- ironically
+        db = DB(self._storage) # creates object 0. :)
+        s2 = ZODB.DemoStorage.DemoStorage(base=self._storage)
+        self.assertEqual(s2.load(ZODB.utils.z64, ''),
+                         self._storage.load(ZODB.utils.z64, ''))
 
+
 class DemoStorageWrappedBase(DemoStorageTests):
 
     def setUp(self):

Deleted: ZODB/branches/jim-zeo-registerdb/src/ZODB/tests/testSubTransaction.py
===================================================================
--- ZODB/branches/jim-zeo-registerdb/src/ZODB/tests/testSubTransaction.py	2007-05-09 18:41:39 UTC (rev 75657)
+++ ZODB/branches/jim-zeo-registerdb/src/ZODB/tests/testSubTransaction.py	2007-05-09 20:45:50 UTC (rev 75658)
@@ -1,179 +0,0 @@
-##############################################################################
-#
-# Copyright (c) 2004 Zope Corporation and Contributors.
-# All Rights Reserved.
-#
-# This software is subject to the provisions of the Zope Public License,
-# Version 2.1 (ZPL).  A copy of the ZPL should accompany this distribution.
-# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
-# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
-# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
-# FOR A PARTICULAR PURPOSE.
-#
-##############################################################################
-"""
-ZODB subtransaction tests
-=========================
-
-Subtransactions are deprecated.  First we install a hook, to verify that
-deprecation warnings are generated.
-
->>> hook = WarningsHook()
->>> hook.install()
-
-Subtransactions are provided by a generic transaction interface, but
-only supported by ZODB.  These tests verify that some of the important
-cases work as expected.
-
->>> import transaction
->>> from ZODB import DB
->>> from ZODB.tests.test_storage import MinimalMemoryStorage
->>> from ZODB.tests.MinPO import MinPO
-
-First create a few objects in the database root with a normal commit.
-We're going to make a series of modifications to these objects.
-
->>> db = DB(MinimalMemoryStorage())
->>> cn = db.open()
->>> rt = cn.root()
->>> def init():
-...     global a, b, c
-...     a = rt["a"] = MinPO("a0")
-...     b = rt["b"] = MinPO("b0")
-...     c = rt["c"] = MinPO("c0")
-...     transaction.commit()
->>> init()
-
-We'll also open a second database connection and use it to verify that
-the intermediate results of subtransactions are not visible to other
-connections.
-
->>> cn2 = db.open(synch=False)
->>> rt2 = cn2.root()
->>> shadow_a = rt2["a"]
->>> shadow_b = rt2["b"]
->>> shadow_c = rt2["c"]
-
-Subtransaction commit
----------------------
-
-We'll make a series of modifications in subtransactions.
-
->>> a.value = "a1"
->>> b.value = "b1"
->>> transaction.commit(1)
->>> a.value, b.value
-('a1', 'b1')
->>> shadow_a.value, shadow_b.value
-('a0', 'b0')
-
-The subtransaction commit should have generated a deprecation wng:
-
->>> len(hook.warnings)
-1
->>> message, category, filename, lineno = hook.warnings[0]
->>> print message
-This will be removed in ZODB 3.7:
-subtransactions are deprecated; use transaction.savepoint() instead of \
-transaction.commit(1)
->>> category.__name__
-'DeprecationWarning'
->>> hook.clear()
-
->>> a.value = "a2"
->>> c.value = "c1"
->>> transaction.commit(1)
->>> a.value, c.value
-('a2', 'c1')
->>> shadow_a.value, shadow_c.value
-('a0', 'c0')
-
->>> a.value = "a3"
->>> transaction.commit(1)
->>> a.value
-'a3'
->>> shadow_a.value
-'a0'
-
->>> transaction.commit()
-
->>> a.value, b.value, c.value
-('a3', 'b1', 'c1')
-
-Subtransaction with nested abort
---------------------------------
-
->>> init()
->>> a.value = "a1"
->>> transaction.commit(1)
-
->>> b.value = "b1"
->>> transaction.commit(1)
-
-A sub-transaction abort will undo current changes, reverting to the
-database state as of the last sub-transaction commit.  There is
-(apparently) no way to abort an already-committed subtransaction.
-
->>> c.value = "c1"
->>> transaction.abort(1)
->>> a.value, b.value, c.value
-('a1', 'b1', 'c0')
-
-The subtxn abort should also have generated a deprecation warning:
-
->>> len(hook.warnings)
-1
->>> message, category, filename, lineno = hook.warnings[0]
->>> print message
-This will be removed in ZODB 3.7:
-subtransactions are deprecated; use sp.rollback() instead of \
-transaction.abort(1), where `sp` is the corresponding savepoint \
-captured earlier
->>> category.__name__
-'DeprecationWarning'
->>> hook.clear()
-
-
-Multiple aborts have no extra effect.
-
->>> transaction.abort(1)
->>> a.value, b.value, c.value
-('a1', 'b1', 'c0')
-
->>> transaction.commit()
->>> a.value, b.value, c.value
-('a1', 'b1', 'c0')
-
-Subtransaction with top-level abort
------------------------------------
-
->>> init()
->>> a.value = "a1"
->>> transaction.commit(1)
-
->>> b.value = "b1"
->>> transaction.commit(1)
-
-A sub-transaction abort will undo current changes, reverting to the
-database state as of the last sub-transaction commit.  There is
-(apparently) no way to abort an already-committed subtransaction.
-
->>> c.value = "c1"
->>> transaction.abort(1)
-
->>> transaction.abort()
->>> a.value, b.value, c.value
-('a0', 'b0', 'c0')
-
-We have to uninstall the hook so that other warnings don't get lost.
-
->>> len(hook.warnings)  # we don't expect we captured other warnings
-0
->>> hook.uninstall()
-"""
-
-from ZODB.tests.warnhook import WarningsHook
-from zope.testing import doctest
-
-def test_suite():
-    return doctest.DocTestSuite()

Modified: ZODB/branches/jim-zeo-registerdb/src/ZODB/tests/testZODB.py
===================================================================
--- ZODB/branches/jim-zeo-registerdb/src/ZODB/tests/testZODB.py	2007-05-09 18:41:39 UTC (rev 75657)
+++ ZODB/branches/jim-zeo-registerdb/src/ZODB/tests/testZODB.py	2007-05-09 20:45:50 UTC (rev 75658)
@@ -16,6 +16,7 @@
 
 import ZODB
 import ZODB.FileStorage
+import ZODB.MappingStorage
 from ZODB.POSException import ReadConflictError, ConflictError
 from ZODB.POSException import TransactionFailedError
 from ZODB.tests.warnhook import WarningsHook
@@ -24,10 +25,9 @@
 from persistent.mapping import PersistentMapping
 import transaction
 
-# deprecated37  remove when subtransactions go away
-# Don't complain about subtxns in these tests.
+# deprecated39  remove when versions go away
 warnings.filterwarnings("ignore",
-                        ".*\nsubtransactions are deprecated",
+                        "Versions are deprecated",
                         DeprecationWarning, __name__)
 
 class P(Persistent):
@@ -213,178 +213,6 @@
             conn1.close()
             conn2.close()
 
-    def checkReadConflict(self):
-        self.obj = P()
-        self.readConflict()
-
-    def readConflict(self, shouldFail=True):
-        # Two transactions run concurrently.  Each reads some object,
-        # then one commits and the other tries to read an object
-        # modified by the first.  This read should fail with a conflict
-        # error because the object state read is not necessarily
-        # consistent with the objects read earlier in the transaction.
-
-        tm1 = transaction.TransactionManager()
-        conn = self._db.open(mvcc=False, transaction_manager=tm1)
-        r1 = conn.root()
-        r1["p"] = self.obj
-        self.obj.child1 = P()
-        tm1.get().commit()
-
-        # start a new transaction with a new connection
-        tm2 = transaction.TransactionManager()
-        cn2 = self._db.open(mvcc=False, transaction_manager=tm2)
-        # start a new transaction with the other connection
-        r2 = cn2.root()
-
-        self.assertEqual(r1._p_serial, r2._p_serial)
-
-        self.obj.child2 = P()
-        tm1.get().commit()
-
-        # resume the transaction using cn2
-        obj = r2["p"]
-        # An attempt to access obj should fail, because r2 was read
-        # earlier in the transaction and obj was modified by the othe
-        # transaction.
-        if shouldFail:
-            self.assertRaises(ReadConflictError, lambda: obj.child1)
-            # And since ReadConflictError was raised, attempting to commit
-            # the transaction should re-raise it.  checkNotIndependent()
-            # failed this part of the test for a long time.
-            self.assertRaises(ReadConflictError, tm2.get().commit)
-
-            # And since that commit failed, trying to commit again should
-            # fail again.
-            self.assertRaises(TransactionFailedError, tm2.get().commit)
-            # And again.
-            self.assertRaises(TransactionFailedError, tm2.get().commit)
-            # Etc.
-            self.assertRaises(TransactionFailedError, tm2.get().commit)
-
-        else:
-            # make sure that accessing the object succeeds
-            obj.child1
-        tm2.get().abort()
-
-    def checkReadConflictIgnored(self):
-        # Test that an application that catches a read conflict and
-        # continues can not commit the transaction later.
-        root = self._db.open(mvcc=False).root()
-        root["real_data"] = real_data = PersistentMapping()
-        root["index"] = index = PersistentMapping()
-
-        real_data["a"] = PersistentMapping({"indexed_value": 0})
-        real_data["b"] = PersistentMapping({"indexed_value": 1})
-        index[1] = PersistentMapping({"b": 1})
-        index[0] = PersistentMapping({"a": 1})
-        transaction.commit()
-
-        # load some objects from one connection
-        tm = transaction.TransactionManager()
-        cn2 = self._db.open(mvcc=False, transaction_manager=tm)
-        r2 = cn2.root()
-        real_data2 = r2["real_data"]
-        index2 = r2["index"]
-
-        real_data["b"]["indexed_value"] = 0
-        del index[1]["b"]
-        index[0]["b"] = 1
-        transaction.commit()
-
-        del real_data2["a"]
-        try:
-            del index2[0]["a"]
-        except ReadConflictError:
-            # This is the crux of the text.  Ignore the error.
-            pass
-        else:
-            self.fail("No conflict occurred")
-
-        # real_data2 still ready to commit
-        self.assert_(real_data2._p_changed)
-
-        # index2 values not ready to commit
-        self.assert_(not index2._p_changed)
-        self.assert_(not index2[0]._p_changed)
-        self.assert_(not index2[1]._p_changed)
-
-        self.assertRaises(ReadConflictError, tm.get().commit)
-        self.assertRaises(TransactionFailedError, tm.get().commit)
-        tm.get().abort()
-
-    def checkIndependent(self):
-        self.obj = Independent()
-        self.readConflict(shouldFail=False)
-
-    def checkNotIndependent(self):
-        self.obj = DecoyIndependent()
-        self.readConflict()
-
-    def checkSubtxnCommitDoesntGetInvalidations(self):
-        # Prior to ZODB 3.2.9 and 3.4, Connection.tpc_finish() processed
-        # invalidations even for a subtxn commit.  This could make
-        # inconsistent state visible after a subtxn commit.  There was a
-        # suspicion that POSKeyError was possible as a result, but I wasn't
-        # able to construct a case where that happened.
-
-        # Set up the database, to hold
-        # root --> "p" -> value = 1
-        #      --> "q" -> value = 2
-        tm1 = transaction.TransactionManager()
-        conn = self._db.open(transaction_manager=tm1)
-        r1 = conn.root()
-        p = P()
-        p.value = 1
-        r1["p"] = p
-        q = P()
-        q.value = 2
-        r1["q"] = q
-        tm1.commit()
-
-        # Now txn T1 changes p.value to 3 locally (subtxn commit).
-        p.value = 3
-        tm1.commit(True)
-
-        # Start new txn T2 with a new connection.
-        tm2 = transaction.TransactionManager()
-        cn2 = self._db.open(transaction_manager=tm2)
-        r2 = cn2.root()
-        p2 = r2["p"]
-        self.assertEqual(p._p_oid, p2._p_oid)
-        # T2 shouldn't see T1's change of p.value to 3, because T1 didn't
-        # commit yet.
-        self.assertEqual(p2.value, 1)
-        # Change p.value to 4, and q.value to 5.  Neither should be visible
-        # to T1, because T1 is still in progress.
-        p2.value = 4
-        q2 = r2["q"]
-        self.assertEqual(q._p_oid, q2._p_oid)
-        self.assertEqual(q2.value, 2)
-        q2.value = 5
-        tm2.commit()
-
-        # Back to T1.  p and q still have the expected values.
-        rt = conn.root()
-        self.assertEqual(rt["p"].value, 3)
-        self.assertEqual(rt["q"].value, 2)
-
-        # Now do another subtxn commit in T1.  This shouldn't change what
-        # T1 sees for p and q.
-        rt["r"] = P()
-        tm1.commit(True)
-
-        # Doing that subtxn commit in T1 should not process invalidations
-        # from T2's commit.  p.value should still be 3 here (because that's
-        # what T1 subtxn-committed earlier), and q.value should still be 2.
-        # Prior to ZODB 3.2.9 and 3.4, q.value was 5 here.
-        rt = conn.root()
-        try:
-            self.assertEqual(rt["p"].value, 3)
-            self.assertEqual(rt["q"].value, 2)
-        finally:
-            tm1.abort()
-
     def checkSavepointDoesntGetInvalidations(self):
         # Prior to ZODB 3.2.9 and 3.4, Connection.tpc_finish() processed
         # invalidations even for a subtxn commit.  This could make
@@ -451,49 +279,6 @@
         finally:
             tm1.abort()
 
-    def checkReadConflictErrorClearedDuringAbort(self):
-        # When a transaction is aborted, the "memory" of which
-        # objects were the cause of a ReadConflictError during
-        # that transaction should be cleared.
-        root = self._db.open(mvcc=False).root()
-        data = PersistentMapping({'d': 1})
-        root["data"] = data
-        transaction.commit()
-
-        # Provoke a ReadConflictError.
-        tm2 = transaction.TransactionManager()
-        cn2 = self._db.open(mvcc=False, transaction_manager=tm2)
-        r2 = cn2.root()
-        data2 = r2["data"]
-
-        data['d'] = 2
-        transaction.commit()
-
-        try:
-            data2['d'] = 3
-        except ReadConflictError:
-            pass
-        else:
-            self.fail("No conflict occurred")
-
-        # Explicitly abort cn2's transaction.
-        tm2.get().abort()
-
-        # cn2 should retain no memory of the read conflict after an abort(),
-        # but 3.2.3 had a bug wherein it did.
-        data_conflicts = data._p_jar._conflicts
-        data2_conflicts = data2._p_jar._conflicts
-        self.failIf(data_conflicts)
-        self.failIf(data2_conflicts)  # this used to fail
-
-        # And because of that, we still couldn't commit a change to data2['d']
-        # in the new transaction.
-        cn2.sync()  # process the invalidation for data2['d']
-        data2['d'] = 3
-        tm2.get().commit()  # 3.2.3 used to raise ReadConflictError
-
-        cn2.close()
-
     def checkTxnBeginImpliesAbort(self):
         # begin() should do an abort() first, if needed.
         cn = self._db.open()
@@ -504,23 +289,14 @@
         rt = cn.root()
         self.assertRaises(KeyError, rt.__getitem__, 'a')
 
-        # A longstanding bug:  this didn't work if changes were only in
-        # subtransactions.
         transaction.begin()
         rt = cn.root()
-        rt['a'] = 2
-        transaction.commit(1)
-
-        transaction.begin()
-        rt = cn.root()
         self.assertRaises(KeyError, rt.__getitem__, 'a')
 
-        # One more time, mixing "top level" and subtransaction changes.
+        # One more time.
         transaction.begin()
         rt = cn.root()
         rt['a'] = 3
-        transaction.commit(1)
-        rt['b'] = 4
 
         transaction.begin()
         rt = cn.root()
@@ -536,7 +312,6 @@
         # rest of this test was tossed.
 
     def checkFailingCommitSticks(self):
-        # See also checkFailingSubtransactionCommitSticks.
         # See also checkFailingSavepointSticks.
         cn = self._db.open()
         rt = cn.root()
@@ -582,93 +357,6 @@
 
         cn.close()
 
-    def checkFailingSubtransactionCommitSticks(self):
-        cn = self._db.open()
-        rt = cn.root()
-        rt['a'] = 1
-        transaction.commit(True)
-        self.assertEqual(rt['a'], 1)
-
-        rt['b'] = 2
-
-        # Make a jar that raises PoisonedError when a subtxn commit is done.
-        poisoned = PoisonedJar(break_savepoint=True)
-        transaction.get().join(poisoned)
-        # We're using try/except here instead of assertRaises so that this
-        # module's attempt to suppress subtransaction deprecation wngs
-        # works.
-        try:
-            transaction.commit(True)
-        except PoisonedError:
-            pass
-        else:
-            self.fail("expected PoisonedError")
-        # Trying to subtxn-commit again fails too.
-        try:
-            transaction.commit(True)
-        except TransactionFailedError:
-            pass
-        else:
-            self.fail("expected TransactionFailedError")
-        try:
-            transaction.commit(True)
-        except TransactionFailedError:
-            pass
-        else:
-            self.fail("expected TransactionFailedError")
-        # Top-level commit also fails.
-        self.assertRaises(TransactionFailedError, transaction.commit)
-
-        # The changes to rt['a'] and rt['b'] are lost.
-        self.assertRaises(KeyError, rt.__getitem__, 'a')
-        self.assertRaises(KeyError, rt.__getitem__, 'b')
-
-        # Trying to modify an object also fails, because Transaction.join()
-        # also raises TransactionFailedError.
-        self.assertRaises(TransactionFailedError, rt.__setitem__, 'b', 2)
-
-        # Clean up via abort(), and try again.
-        transaction.abort()
-        rt['a'] = 1
-        transaction.commit()
-        self.assertEqual(rt['a'], 1)
-
-        # Cleaning up via begin() should also work.
-        rt['a'] = 2
-        transaction.get().join(poisoned)
-        try:
-            transaction.commit(True)
-        except PoisonedError:
-            pass
-        else:
-            self.fail("expected PoisonedError")
-        # Trying to subtxn-commit again fails too.
-        try:
-            transaction.commit(True)
-        except TransactionFailedError:
-            pass
-        else:
-            self.fail("expected TransactionFailedError")
-
-        # The change to rt['a'] is lost.
-        self.assertEqual(rt['a'], 1)
-        # Trying to modify an object also fails.
-        self.assertRaises(TransactionFailedError, rt.__setitem__, 'b', 2)
-
-        # Clean up via begin(), and try again.
-        transaction.begin()
-        rt['a'] = 2
-        transaction.commit(True)
-        self.assertEqual(rt['a'], 2)
-        transaction.get().commit()
-
-        cn2 = self._db.open()
-        rt = cn.root()
-        self.assertEqual(rt['a'], 2)
-
-        cn.close()
-        cn2.close()
-
     def checkFailingSavepointSticks(self):
         cn = self._db.open()
         rt = cn.root()
@@ -762,8 +450,163 @@
             transaction.abort()
             conn.close()
 
-        
+class ReadConflictTests(unittest.TestCase):
 
+    def setUp(self):
+        self._storage = ZODB.MappingStorage.MappingStorage()
+
+    def readConflict(self, shouldFail=True):
+        # Two transactions run concurrently.  Each reads some object,
+        # then one commits and the other tries to read an object
+        # modified by the first.  This read should fail with a conflict
+        # error because the object state read is not necessarily
+        # consistent with the objects read earlier in the transaction.
+
+        tm1 = transaction.TransactionManager()
+        conn = self._db.open(transaction_manager=tm1)
+        r1 = conn.root()
+        r1["p"] = self.obj
+        self.obj.child1 = P()
+        tm1.get().commit()
+
+        # start a new transaction with a new connection
+        tm2 = transaction.TransactionManager()
+        cn2 = self._db.open(transaction_manager=tm2)
+        # start a new transaction with the other connection
+        r2 = cn2.root()
+
+        self.assertEqual(r1._p_serial, r2._p_serial)
+
+        self.obj.child2 = P()
+        tm1.get().commit()
+
+        # resume the transaction using cn2
+        obj = r2["p"]
+        # An attempt to access obj should fail, because r2 was read
+        # earlier in the transaction and obj was modified by the other
+        # transaction.
+        if shouldFail:
+            self.assertRaises(ReadConflictError, lambda: obj.child1)
+            # And since ReadConflictError was raised, attempting to commit
+            # the transaction should re-raise it.  checkNotIndependent()
+            # failed this part of the test for a long time.
+            self.assertRaises(ReadConflictError, tm2.get().commit)
+
+            # And since that commit failed, trying to commit again should
+            # fail again.
+            self.assertRaises(TransactionFailedError, tm2.get().commit)
+            # And again.
+            self.assertRaises(TransactionFailedError, tm2.get().commit)
+            # Etc.
+            self.assertRaises(TransactionFailedError, tm2.get().commit)
+
+        else:
+            # make sure that accessing the object succeeds
+            obj.child1
+        tm2.get().abort()
+
+
+    def checkReadConflict(self):
+        self.obj = P()
+        self.readConflict()
+
+    def checkIndependent(self):
+        self.obj = Independent()
+        self.readConflict(shouldFail=False)
+
+    def checkNotIndependent(self):
+        self.obj = DecoyIndependent()
+        self.readConflict()
+    
+    def checkReadConflictIgnored(self):
+        # Test that an application that catches a read conflict and
+        # continues can not commit the transaction later.
+        root = self._db.open().root()
+        root["real_data"] = real_data = PersistentMapping()
+        root["index"] = index = PersistentMapping()
+
+        real_data["a"] = PersistentMapping({"indexed_value": 0})
+        real_data["b"] = PersistentMapping({"indexed_value": 1})
+        index[1] = PersistentMapping({"b": 1})
+        index[0] = PersistentMapping({"a": 1})
+        transaction.commit()
+
+        # load some objects from one connection
+        tm = transaction.TransactionManager()
+        cn2 = self._db.open(transaction_manager=tm)
+        r2 = cn2.root()
+        real_data2 = r2["real_data"]
+        index2 = r2["index"]
+
+        real_data["b"]["indexed_value"] = 0
+        del index[1]["b"]
+        index[0]["b"] = 1
+        transaction.commit()
+
+        del real_data2["a"]
+        try:
+            del index2[0]["a"]
+        except ReadConflictError:
+            # This is the crux of the test.  Ignore the error.
+            pass
+        else:
+            self.fail("No conflict occurred")
+
+        # real_data2 still ready to commit
+        self.assert_(real_data2._p_changed)
+
+        # index2 values not ready to commit
+        self.assert_(not index2._p_changed)
+        self.assert_(not index2[0]._p_changed)
+        self.assert_(not index2[1]._p_changed)
+
+        self.assertRaises(ReadConflictError, tm.get().commit)
+        self.assertRaises(TransactionFailedError, tm.get().commit)
+        tm.get().abort()
+
+    def checkReadConflictErrorClearedDuringAbort(self):
+        # When a transaction is aborted, the "memory" of which
+        # objects were the cause of a ReadConflictError during
+        # that transaction should be cleared.
+        root = self._db.open().root()
+        data = PersistentMapping({'d': 1})
+        root["data"] = data
+        transaction.commit()
+
+        # Provoke a ReadConflictError.
+        tm2 = transaction.TransactionManager()
+        cn2 = self._db.open(transaction_manager=tm2)
+        r2 = cn2.root()
+        data2 = r2["data"]
+
+        data['d'] = 2
+        transaction.commit()
+
+        try:
+            data2['d'] = 3
+        except ReadConflictError:
+            pass
+        else:
+            self.fail("No conflict occurred")
+
+        # Explicitly abort cn2's transaction.
+        tm2.get().abort()
+
+        # cn2 should retain no memory of the read conflict after an abort(),
+        # but 3.2.3 had a bug wherein it did.
+        data_conflicts = data._p_jar._conflicts
+        data2_conflicts = data2._p_jar._conflicts
+        self.failIf(data_conflicts)
+        self.failIf(data2_conflicts)  # this used to fail
+
+        # And because of that, we still couldn't commit a change to data2['d']
+        # in the new transaction.
+        cn2.sync()  # process the invalidation for data2['d']
+        data2['d'] = 3
+        tm2.get().commit()  # 3.2.3 used to raise ReadConflictError
+
+        cn2.close()
+
 class PoisonedError(Exception):
     pass
 
@@ -778,8 +621,6 @@
     def sortKey(self):
         return str(id(self))
 
-    # A way that used to poison a subtransaction commit.  With the current
-    # implementation of subtxns, pass break_savepoint=True instead.
     def tpc_begin(self, *args):
         if self.break_tpc_begin:
             raise PoisonedError("tpc_begin fails")

Modified: ZODB/branches/jim-zeo-registerdb/src/ZODB/tests/test_storage.py
===================================================================
--- ZODB/branches/jim-zeo-registerdb/src/ZODB/tests/test_storage.py	2007-05-09 18:41:39 UTC (rev 75657)
+++ ZODB/branches/jim-zeo-registerdb/src/ZODB/tests/test_storage.py	2007-05-09 20:45:50 UTC (rev 75658)
@@ -74,19 +74,16 @@
     def _clear_temp(self):
         pass
 
-    def loadEx(self, oid, version):
+    def load(self, oid, version):
         self._lock_acquire()
         try:
             assert not version
             tid = self._cur[oid]
-            self.hook(oid, tid, version)
-            return self._index[(oid, tid)], tid, ""
+            self.hook(oid, tid, '')
+            return self._index[(oid, tid)], tid
         finally:
             self._lock_release()
 
-    def load(self, oid, version):
-        return self.loadEx(oid, version)[:2]
-
     def _begin(self, tid, u, d, e):
         self._txn = Transaction(tid)
 
@@ -145,6 +142,11 @@
         finally:
             self._lock_release()
 
+    def close(self):
+        pass
+
+    cleanup = close
+
 class MinimalTestSuite(StorageTestBase.StorageTestBase,
                        BasicStorage.BasicStorage,
                        MTStorage.MTStorage,

Modified: ZODB/branches/jim-zeo-registerdb/src/transaction/_manager.py
===================================================================
--- ZODB/branches/jim-zeo-registerdb/src/transaction/_manager.py	2007-05-09 18:41:39 UTC (rev 75657)
+++ ZODB/branches/jim-zeo-registerdb/src/transaction/_manager.py	2007-05-09 20:45:50 UTC (rev 75658)
@@ -89,24 +89,11 @@
     def doom(self):
         return self.get().doom()
 
-    def commit(self, sub=_marker):
-        if sub is _marker:
-            sub = None
-        else:
-            deprecated37("subtransactions are deprecated; use "
-                         "transaction.savepoint() instead of "
-                         "transaction.commit(1)")
-        return self.get().commit(sub, deprecation_wng=False)
+    def commit(self):
+        return self.get().commit()
 
-    def abort(self, sub=_marker):
-        if sub is _marker:
-            sub = None
-        else:
-            deprecated37("subtransactions are deprecated; use "
-                         "sp.rollback() instead of "
-                         "transaction.abort(1), where `sp` is the "
-                         "corresponding savepoint captured earlier")
-        return self.get().abort(sub, deprecation_wng=False)
+    def abort(self):
+        return self.get().abort()
 
     def savepoint(self, optimistic=False):
         return self.get().savepoint(optimistic)

Modified: ZODB/branches/jim-zeo-registerdb/src/transaction/_transaction.py
===================================================================
--- ZODB/branches/jim-zeo-registerdb/src/transaction/_transaction.py	2007-05-09 18:41:39 UTC (rev 75657)
+++ ZODB/branches/jim-zeo-registerdb/src/transaction/_transaction.py	2007-05-09 20:45:50 UTC (rev 75658)
@@ -27,54 +27,6 @@
 for backwards compatibility.  It takes a persistent object and
 registers its _p_jar attribute.  TODO: explain adapter
 
-Subtransactions
----------------
-
-Note: Subtransactions are deprecated!  Use savepoint/rollback instead.
-
-A subtransaction applies the transaction notion recursively.  It
-allows a set of modifications within a transaction to be committed or
-aborted as a group.  A subtransaction is a strictly local activity;
-its changes are not visible to any other database connection until the
-top-level transaction commits.  In addition to its use to organize a
-large transaction, subtransactions can be used to optimize memory use.
-ZODB must keep modified objects in memory until a transaction commits
-and it can write the changes to the storage.  A subtransaction uses a
-temporary disk storage for its commits, allowing modified objects to
-be flushed from memory when the subtransaction commits.
-
-The commit() and abort() methods take an optional subtransaction
-argument that defaults to false.  If it is a true, the operation is
-performed on a subtransaction.
-
-Subtransactions add a lot of complexity to the transaction
-implementation.  Some resource managers support subtransactions, but
-they are not required to.  (ZODB Connection is the only standard
-resource manager that supports subtransactions.)  Resource managers
-that do support subtransactions implement abort_sub() and commit_sub()
-methods and support a second argument to tpc_begin().
-
-The second argument to tpc_begin() indicates that a subtransaction
-commit is beginning (if it is true).  In a subtransaction, there is no
-tpc_vote() call, because sub-transactions don't need 2-phase commit.
-If a sub-transaction abort or commit fails, we can abort the outer
-transaction.  The tpc_finish() or tpc_abort() call applies just to
-that subtransaction.
-
-Once a resource manager is involved in a subtransaction, all
-subsequent transactions will be treated as subtransactions until
-abort_sub() or commit_sub() is called.  abort_sub() will undo all the
-changes of the subtransactions.  commit_sub() will begin a top-level
-transaction and store all the changes from subtransactions.  After
-commit_sub(), the transaction must still call tpc_vote() and
-tpc_finish().
-
-If the resource manager does not support subtransactions, nothing
-happens when the subtransaction commits.  Instead, the resource
-manager is put on a list of managers to commit when the actual
-top-level transaction commits.  If this happens, it will not be
-possible to abort subtransactions.
-
 Two-phase commit
 ----------------
 
@@ -89,21 +41,6 @@
     3. tpc_vote(txn)
     4. tpc_finish(txn)
 
-Subtransaction commit
----------------------
-
-Note: Subtransactions are deprecated!
-
-When a subtransaction commits, the protocol is different.
-
-1. tpc_begin() is passed a second argument, which indicates that a
-   subtransaction is being committed.
-2. tpc_vote() is not called.
-
-Once a subtransaction has been committed, the top-level transaction
-commit will start with a commit_sub() call instead of a tpc_begin()
-call.
-
 Before-commit hook
 ------------------
 
@@ -212,9 +149,6 @@
     # savepoint to its index (see above).
     _savepoint2index = None
 
-    # Remember the savepoint for the last subtransaction.
-    _subtransaction_savepoint = None
-
     # Meta data.  ._extension is also metadata, but is initialized to an
     # emtpy dict in __init__.
     user = ""
@@ -372,29 +306,13 @@
             assert id(obj) not in map(id, adapter.objects)
             adapter.objects.append(obj)
 
-    def commit(self, subtransaction=_marker, deprecation_wng=True):
+    def commit(self):
         if self.status is Status.DOOMED:
             raise interfaces.DoomedTransaction()
 
-        if subtransaction is _marker:
-            subtransaction = 0
-        elif deprecation_wng:
-            deprecated37("subtransactions are deprecated; instead of "
-                         "transaction.commit(1), use "
-                         "transaction.savepoint(optimistic=True) in "
-                         "contexts where a subtransaction abort will never "
-                         "occur, or sp=transaction.savepoint() if later "
-                         "rollback is possible and then sp.rollback() "
-                         "instead of transaction.abort(1)")
-
         if self._savepoint2index:
             self._invalidate_all_savepoints()
 
-        if subtransaction:
-            # TODO deprecate subtransactions
-            self._subtransaction_savepoint = self.savepoint(optimistic=True)
-            return
-
         if self.status is Status.COMMITFAILED:
             self._prior_operation_failed() # doesn't return
 
@@ -546,28 +464,7 @@
                 self.log.error("Error in tpc_abort() on manager %s",
                                rm, exc_info=sys.exc_info())
 
-    def abort(self, subtransaction=_marker, deprecation_wng=True):
-        if subtransaction is _marker:
-            subtransaction = 0
-        elif deprecation_wng:
-            deprecated37("subtransactions are deprecated; use "
-                         "sp.rollback() instead of "
-                         "transaction.abort(1), where `sp` is the "
-                         "corresponding savepoint captured earlier")
-
-        if subtransaction:
-            # TODO deprecate subtransactions.
-            if not self._subtransaction_savepoint:
-                raise interfaces.InvalidSavepointRollbackError
-            if self._subtransaction_savepoint.valid:
-                self._subtransaction_savepoint.rollback()
-                # We're supposed to be able to call abort(1) multiple
-                # times without additional effect, so mark the subtxn
-                # savepoint invalid now.
-                self._subtransaction_savepoint.transaction = None
-                assert not self._subtransaction_savepoint.valid
-            return
-
+    def abort(self):
         if self._savepoint2index:
             self._invalidate_all_savepoints()
 

Modified: ZODB/branches/jim-zeo-registerdb/src/transaction/interfaces.py
===================================================================
--- ZODB/branches/jim-zeo-registerdb/src/transaction/interfaces.py	2007-05-09 18:41:39 UTC (rev 75657)
+++ ZODB/branches/jim-zeo-registerdb/src/transaction/interfaces.py	2007-05-09 20:45:50 UTC (rev 75658)
@@ -202,8 +202,8 @@
         hooks.  Applications should take care to avoid creating infinite loops
         by recursively registering hooks.
 
-        Hooks are called only for a top-level commit.  A subtransaction
-        commit does not call any hooks.  If the transaction is aborted, hooks
+        Hooks are called only for a top-level commit.  A savepoint
+        does not call any hooks.  If the transaction is aborted, hooks
         are not called, and are discarded.  Calling a hook "consumes" its
         registration too:  hook registrations do not persist across
         transactions.  If it's desired to call the same hook on every
@@ -231,8 +231,8 @@
         hooks.  Applications should take care to avoid creating infinite loops
         by recursively registering hooks.
 
-        Hooks are called only for a top-level commit.  A subtransaction
-        commit or savepoint creation does not call any hooks.  If the
+        Hooks are called only for a top-level commit.  A savepoint
+        creation does not call any hooks.  If the
         transaction is aborted, hooks are not called, and are discarded.
         Calling a hook "consumes" its registration too:  hook registrations
         do not persist across transactions.  If it's desired to call the same
@@ -269,8 +269,8 @@
          hooks.  Applications should take care to avoid creating infinite loops
          by recursively registering hooks.
          
-         Hooks are called only for a top-level commit.  A subtransaction
-         commit or savepoint creation does not call any hooks.  Calling a
+         Hooks are called only for a top-level commit.  A savepoint
+         creation does not call any hooks.  Calling a
          hook "consumes" its registration:  hook registrations do not
          persist across transactions.  If it's desired to call the same
          hook on every transaction commit, then addAfterCommitHook() must be

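The reworded hook documentation is easy to check: hooks registered on a transaction
ignore savepoint creation and run only at the real commit.  A small sketch, with
made-up hook and variable names:

    import transaction

    fired = []

    def hook(*args, **kws):
        fired.append(args)

    t = transaction.begin()
    t.addBeforeCommitHook(hook, ('before',), {})
    t.addAfterCommitHook(hook, ('after',), {})

    sp = t.savepoint()          # no hooks run here
    assert fired == []

    t.commit()                  # both hooks run now
    assert fired == [('before',), (True, 'after')]

The after-commit hook receives the commit status (True on success) ahead of the
registered arguments, as the doctests below also show.
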
Modified: ZODB/branches/jim-zeo-registerdb/src/transaction/tests/test_transaction.py
===================================================================
--- ZODB/branches/jim-zeo-registerdb/src/transaction/tests/test_transaction.py	2007-05-09 18:41:39 UTC (rev 75657)
+++ ZODB/branches/jim-zeo-registerdb/src/transaction/tests/test_transaction.py	2007-05-09 20:45:50 UTC (rev 75658)
@@ -46,12 +46,6 @@
 from ZODB.utils import positive_id
 from ZODB.tests.warnhook import WarningsHook
 
-# deprecated37  remove when subtransactions go away
-# Don't complain about subtxns in these tests.
-warnings.filterwarnings("ignore",
-                        ".*\nsubtransactions are deprecated",
-                        DeprecationWarning, __name__)
-
 class TransactionTests(unittest.TestCase):
 
     def setUp(self):
@@ -114,38 +108,7 @@
         assert self.nosub1._p_jar.ctpc_finish == 0
         assert self.nosub1._p_jar.cabort == 1
 
-    def BUGtestNSJSubTransactionCommitAbort(self):
-        """
-        this reveals a bug in transaction.py
-        the nosub jar should not have tpc_finish
-        called on it till the containing txn
-        ends.
 
-        sub calling method commit
-        nosub calling method tpc_begin
-        sub calling method tpc_finish
-        nosub calling method tpc_finish
-        nosub calling method abort
-        sub calling method abort_sub
-        """
-
-        self.sub1.modify(tracing='sub')
-        self.nosub1.modify(tracing='nosub')
-
-        self.transaction_manager.commit(1)
-
-        assert self.sub1._p_jar.ctpc_finish == 1
-
-        # bug, non sub trans jars are getting finished
-        # in a subtrans
-        assert self.nosub1._p_jar.ctpc_finish == 0
-
-        self.transaction_manager.abort()
-
-        assert self.nosub1._p_jar.cabort == 1
-        assert self.sub1._p_jar.cabort_sub == 1
-
-
     ### Failure Mode Tests
     #
     # ok now we do some more interesting
@@ -158,7 +121,7 @@
 
     def testExceptionInAbort(self):
 
-        self.sub1._p_jar = SubTransactionJar(errors='abort')
+        self.sub1._p_jar = BasicJar(errors='abort')
 
         self.nosub1.modify()
         self.sub1.modify(nojar=1)
@@ -173,7 +136,7 @@
 
     def testExceptionInCommit(self):
 
-        self.sub1._p_jar = SubTransactionJar(errors='commit')
+        self.sub1._p_jar = BasicJar(errors='commit')
 
         self.nosub1.modify()
         self.sub1.modify(nojar=1)
@@ -188,7 +151,7 @@
 
     def testExceptionInTpcVote(self):
 
-        self.sub1._p_jar = SubTransactionJar(errors='tpc_vote')
+        self.sub1._p_jar = BasicJar(errors='tpc_vote')
 
         self.nosub1.modify()
         self.sub1.modify(nojar=1)
@@ -214,7 +177,7 @@
         sub calling method tpc_abort
         nosub calling method tpc_abort
         """
-        self.sub1._p_jar = SubTransactionJar(errors='tpc_begin')
+        self.sub1._p_jar = BasicJar(errors='tpc_begin')
 
         self.nosub1.modify()
         self.sub1.modify(nojar=1)
@@ -228,8 +191,7 @@
         assert self.sub1._p_jar.ctpc_abort == 1
 
     def testExceptionInTpcAbort(self):
-        self.sub1._p_jar = SubTransactionJar(
-                                errors=('tpc_abort', 'tpc_vote'))
+        self.sub1._p_jar = BasicJar(errors=('tpc_abort', 'tpc_vote'))
 
         self.nosub1.modify()
         self.sub1.modify(nojar=1)
@@ -283,9 +245,9 @@
     def modify(self, nojar=0, tracing=0):
         if not nojar:
             if self.nost:
-                self._p_jar = NoSubTransactionJar(tracing=tracing)
+                self._p_jar = BasicJar(tracing=tracing)
             else:
-                self._p_jar = SubTransactionJar(tracing=tracing)
+                self._p_jar = BasicJar(tracing=tracing)
         self.transaction_manager.get().join(self._p_jar)
 
 class TestTxnException(Exception):
@@ -350,19 +312,6 @@
         self.check('tpc_finish')
         self.ctpc_finish += 1
 
-class SubTransactionJar(BasicJar):
-
-    def abort_sub(self, txn):
-        self.check('abort_sub')
-        self.cabort_sub = 1
-
-    def commit_sub(self, txn):
-        self.check('commit_sub')
-        self.ccommit_sub = 1
-
-class NoSubTransactionJar(BasicJar):
-    pass
-
 class HoserJar(BasicJar):
 
     # The HoserJars coordinate their actions via the class variable
@@ -482,17 +431,13 @@
       >>> log
       []
 
-    The hook is only called for a full commit, not for a savepoint or
-    subtransaction.
+    The hook is only called for a full commit, not for a savepoint.
 
       >>> t = transaction.begin()
       >>> t.beforeCommitHook(hook, 'A', kw1='B')
       >>> dummy = t.savepoint()
       >>> log
       []
-      >>> t.commit(subtransaction=True)
-      >>> log
-      []
       >>> t.commit()
       >>> log
       ["arg 'A' kw1 'B' kw2 'no_kw2'"]
@@ -632,17 +577,13 @@
       >>> log
       []
 
-    The hook is only called for a full commit, not for a savepoint or
-    subtransaction.
+    The hook is only called for a full commit, not for a savepoint.
 
       >>> t = transaction.begin()
       >>> t.addBeforeCommitHook(hook, 'A', dict(kw1='B'))
       >>> dummy = t.savepoint()
       >>> log
       []
-      >>> t.commit(subtransaction=True)
-      >>> log
-      []
       >>> t.commit()
       >>> log
       ["arg 'A' kw1 'B' kw2 'no_kw2'"]
@@ -810,17 +751,13 @@
       >>> log
       []
 
-    The hook is only called after a full commit, not for a savepoint or
-    subtransaction.
+    The hook is only called after a full commit, not for a savepoint.
 
       >>> t = transaction.begin()
       >>> t.addAfterCommitHook(hook, 'A', dict(kw1='B'))
       >>> dummy = t.savepoint()
       >>> log
       []
-      >>> t.commit(subtransaction=True)
-      >>> log
-      []
       >>> t.commit()
       >>> log
       ["True arg 'A' kw1 'B' kw2 'no_kw2'"]
@@ -844,7 +781,7 @@
       >>> class CommitFailure(Exception):
       ...     pass
       >>> class FailingDataManager:
-      ...     def tpc_begin(self, txn, sub=False):
+      ...     def tpc_begin(self, txn):
       ...         raise CommitFailure
       ...     def abort(self, txn):
       ...         pass

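The trimmed FailingDataManager doctest also illustrates the failure path under the
narrowed protocol: tpc_begin() takes only the transaction, and an exception raised
there surfaces from commit().  A self-contained variant of that pattern (sortKey()
is added only so the class can coexist with other joined resources):

    import transaction

    class CommitFailure(Exception):
        pass

    class FailingDataManager:
        def tpc_begin(self, txn):      # no subtransaction flag any more
            raise CommitFailure

        def abort(self, txn):
            pass

        def sortKey(self):
            return 'failing-dm'

    t = transaction.begin()
    t.join(FailingDataManager())
    try:
        t.commit()
    except CommitFailure:
        pass

    transaction.abort()    # clear the failed transaction before starting new work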

