[Zope-Coders] Who killed the Zope HEAD?

Barry A. Warsaw barry@zope.com
Tue, 19 Mar 2002 22:39:34 -0500


>>>>> "SH" == Shane Hathaway <shane@zope.com> writes:

    SH> Well, it turns out I already had all the latest stable code:
    SH> Python 2.1.2, BerkeleyDB 3.3.11 with both patches, and bsddb3
    SH> 3.3.0.  The freeze still happens.

    SH> So I did some further investigation.  The freeze happens in
    SH> bsddb3storage/Full.py, line 1103, which reads:

    SH>      self._metadata.delete(key)

    SH> This is called in the middle of _dopack().  I thought that
    SH> maybe the _metadata table was being locked by a function
    SH> calling this routine, and that the type of lock being used was
    SH> not reentrant.  Sure enough, _dopack() gets a cursor for the
    SH> _metadata table, which it uses before calling _zaprevision(),
    SH> but it actually finishes with it first.  So I added a "del c"
    SH> in the middle of _dopack(), and the tests passed again.

Shane!  Thanks so much for debugging this one!  Indeed, the hang in
checkPackAllRevision() is reproducible for me too.
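
For anyone who wants to see the interaction in isolation, here's a
rough sketch of why an open cursor can block a delete on the same
table.  The env path, keys, and flags are made up for illustration --
this isn't the Full.py setup, just the shape of the problem:

    import os
    from bsddb3 import db

    home = '/tmp/cursor-demo'            # illustrative env directory
    if not os.path.exists(home):
        os.mkdir(home)

    env = db.DBEnv()
    env.open(home, db.DB_CREATE | db.DB_INIT_MPOOL | db.DB_INIT_LOCK)

    table = db.DB(env)
    table.open('metadata', db.DB_BTREE, db.DB_CREATE)
    table.put('key1', 'data1')

    c = table.cursor()
    c.set_range('key1')      # the cursor now holds a lock on that page
    # table.delete('key1')   # a delete here runs under a different
                             # locker, so it can block on the cursor's
                             # lock -- the same shape as the hang in
                             # _dopack()
    c.close()                # release the cursor (and its lock) first
    table.delete('key1')     # ...and the delete goes through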

You're right that the _metadata cursor used in _dopack() was not being
closed properly.  However, instead of deleting `c' and implicitly
freeing that resource, I think it's better to wrap the use of the
cursor in a try/finally that ensures the cursor is closed (similar to
the idiom used elsewhere in the file).  See the attached patch; with
it applied, a complete run of the unit tests against a fresh cvs up
passes for me.  Please try it, and if it works for you (guys), I'll
commit the change.  (It's really just a 3-line change, but that's
masked by the re-indentation.)
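
Distilled down (this is a paraphrase of the patch with the loop body
elided, not the literal code), the shape of the change is just:

    c = self._metadata.cursor()
    try:
        # ... walk the table, collecting packable revisions ...
        pass
    finally:
        # Close the cursor no matter what, so its locks are released
        # before anything else touches _metadata.
        c.close()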

-Barry

-------------------- snip snip --------------------
Index: Full.py
===================================================================
RCS file: /cvs-repository/StandaloneZODB/bsddb3Storage/bsddb3Storage/Full.py,v
retrieving revision 1.39
diff -u -r1.39 Full.py
--- Full.py	12 Feb 2002 22:33:09 -0000	1.39
+++ Full.py	20 Mar 2002 03:36:17 -0000
@@ -1152,35 +1152,40 @@
         # values which are a list of revisions (oid+tid) that can be packed.
         packablerevs = {}
         c = self._metadata.cursor()
-        # BAW: can two threads be packing at the same time?  If so, we need to
-        # handle that.  If not, we should enforce that with a pack-lock.
-        for oid in self._serials.keys():
-            try:
-                rec = c.set_range(oid+packtid)
-                # The one just before this should be the largest record less
-                # than or equal to the key, i.e. the object revision just
-                # before the given pack time.
-                rec = c.prev()
-            except db.DBNotFoundError:
-                # Perhaps the last record in the database is the last one
-                # containing this oid?
-                rec = c.last()
-            # Now move backwards in time to look at all the revisions of this
-            # object.  All but the current one are packable, unless the object
-            # isn't reachable from the root, in which case, all its revisions
-            # are packable.
-            while rec:
-                key, data = rec
-                rec = c.prev()
-                # Make sure we're still looking at revisions for this object
-                if oid <> key[:8]:
-                    break
-                if not reachables.has_key(oid):
-                    packablerevs.setdefault(oid, []).append(key)
-                # Otherwise, if this isn't the current revision for this
-                # object, then it's packable.
-                elif self._serials[oid] <> key[8:]:
-                    packablerevs.setdefault(oid, []).append(key)
+        try:
+            # BAW: can two threads be packing at the same time?  If so, we
+            # need to handle that.  If not, we should enforce that with a
+            # pack-lock.
+            for oid in self._serials.keys():
+                try:
+                    rec = c.set_range(oid+packtid)
+                    # The one just before this should be the largest record
+                    # less than or equal to the key, i.e. the object revision
+                    # just before the given pack time.
+                    rec = c.prev()
+                except db.DBNotFoundError:
+                    # Perhaps the last record in the database is the last one
+                    # containing this oid?
+                    rec = c.last()
+                # Now move backwards in time to look at all the revisions of
+                # this object.  All but the current one are packable, unless
+                # the object isn't reachable from the root, in which case, all
+                # its revisions are packable.
+                while rec:
+                    key, data = rec
+                    rec = c.prev()
+                    # Make sure we're still looking at revisions for this
+                    # object
+                    if oid <> key[:8]:
+                        break
+                    if not reachables.has_key(oid):
+                        packablerevs.setdefault(oid, []).append(key)
+                    # Otherwise, if this isn't the current revision for this
+                    # object, then it's packable.
+                    elif self._serials[oid] <> key[8:]:
+                        packablerevs.setdefault(oid, []).append(key)
+        finally:
+            c.close()
         # We now have all the packable revisions we're going to handle.  For
         # each object with revisions that we're going to pack away, acquire
         # the storage lock so we can do that without fear of trampling by
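
Aside, for anyone reading the patch who hasn't seen the idiom: the
set_range()/prev() dance in that loop is just the usual Berkeley DB
way of landing on the record just before a given key.  A standalone
sketch, with a made-up helper name and none of the oid bookkeeping,
looks like this:

    from bsddb3 import db

    def rec_just_before(table, key):
        # Return the record just before the first record >= `key', or
        # the last record in the table if no record is >= `key'.
        # Sketch only -- the real loop in _dopack() also checks that
        # the record still belongs to the same oid.
        c = table.cursor()
        try:
            try:
                c.set_range(key)        # first record >= key
                rec = c.prev()          # step back one
            except db.DBNotFoundError:
                rec = c.last()          # key is past the end
            return rec
        finally:
            c.close()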