[Checkins] SVN: Sandbox/poster/zc.blist/trunk/s checkpoint. seems to be getting pretty robust.

Gary Poster gary at zope.com
Mon May 19 21:03:21 EDT 2008


Log message for revision 86846:
  checkpoint.  seems to be getting pretty robust.

Changed:
  U   Sandbox/poster/zc.blist/trunk/setup.py
  U   Sandbox/poster/zc.blist/trunk/src/zc/blist/README.txt
  U   Sandbox/poster/zc.blist/trunk/src/zc/blist/__init__.py
  U   Sandbox/poster/zc.blist/trunk/src/zc/blist/testing.py
  U   Sandbox/poster/zc.blist/trunk/src/zc/blist/tests.py

-=-
Modified: Sandbox/poster/zc.blist/trunk/setup.py
===================================================================
--- Sandbox/poster/zc.blist/trunk/setup.py	2008-05-19 22:01:21 UTC (rev 86845)
+++ Sandbox/poster/zc.blist/trunk/setup.py	2008-05-20 01:03:19 UTC (rev 86846)
@@ -72,7 +72,7 @@
 
 setup(
     name='zc.blist',
-    version='1.0a1',
+    version='1.0b1',
     packages=find_packages('src'),
     package_dir={'':'src'},
     zip_safe=False,

Modified: Sandbox/poster/zc.blist/trunk/src/zc/blist/README.txt
===================================================================
--- Sandbox/poster/zc.blist/trunk/src/zc/blist/README.txt	2008-05-19 22:01:21 UTC (rev 86845)
+++ Sandbox/poster/zc.blist/trunk/src/zc/blist/README.txt	2008-05-20 01:03:19 UTC (rev 86846)
@@ -12,8 +12,8 @@
 individual buckets. This means that, for small changes in large sequences, the
 sequence could be a big win. For instance, an ordered BTree-based container
 might want to store order in a sequence, so that moves only cause a bucket or
-two-- around 50 strings or less--to be rewritten in the database, rather than
-the entire contents (which might be thousands of strings or more).
+two--around 50 strings or fewer--to be rewritten in the database, rather than
+the entire contents (which might be thousands of strings, for instance).
 
 If the sequence is most often completely rearranged, the complexity of the code
 in this package is not desirable.  It only makes sense if changes most
@@ -26,35 +26,136 @@
 Another downside is the corollary of the bucket advantage listed initially:
 with more persistent objects, iterating over it will fill a lot of ZODB's
 object cache (which is based on the number of objects cached, rather than the
-size).  Specify a big cache if you are using these to store a lot of data and
-are frequently iterating or changing.
+size). Consider specifying a big object cache if you are using these to store a
+lot of data and are frequently iterating or changing.
 
 These sequences return slices as iterators, and add some helpful iteration
-methods.  __eq__ compares identity.
+methods.  They also provide a ``copy`` method that makes a cheap copy of the
+blist: the copy shares all buckets and indexes until a write happens, at
+which point the affected indexes and buckets are copied and then mutated.
 
-Other than that, these sequences work like a standard Python sequence. Therefore
-this file is not geared towards users of the data structure but implementers.
+We'll take a quick look at how these differences work, then describe the
+implementation's basic mechanism, and close with a brief discussion of its
+performance characteristics in the abstract.
 
+==============================
+Differences from Python's List
+==============================
+
+Slices are Iterators
+====================
+
+This doesn't need much discussion.  Getting slices of all sorts returns
+iterators.
+
+    >>> from zc.blist import BList
+    >>> l = BList(range(1000))
+    >>> l[345:351] # doctest: +ELLIPSIS
+    <generator object at ...>
+    >>> list(l[345:351])
+    [345, 346, 347, 348, 349, 350]
+
+    >>> l[351:345:-1] # doctest: +ELLIPSIS
+    <generator object at ...>
+    >>> list(l[351:345:-1])
+    [351, 350, 349, 348, 347, 346]
+
+    >>> l[345:351:2] # doctest: +ELLIPSIS
+    <generator object at ...>
+    >>> list(l[345:351:2])
+    [345, 347, 349]
+
+Additional Iteration Methods
+============================
+
+``iterReversed`` lets you iterate efficiently over the list in reverse order,
+starting from a given point if desired.  It is used for slices that proceed
+with a step of -1.
+
+    >>> i = l.iterReversed()
+    >>> i.next()
+    999
+    >>> i.next()
+    998
+    >>> list(i)[-10:]
+    [9, 8, 7, 6, 5, 4, 3, 2, 1, 0]
+
+``iterSlice`` lets you iterate over the list with a slice.  It is equivalent
+to using a slice with ``__getitem__``.
+
+    >>> i = l.iterSlice(345, 351, 2)
+    >>> i # doctest: +ELLIPSIS
+    <generator object at ...>
+    >>> list(i)
+    [345, 347, 349]
+
+Cheap ``copy``
+==============
+
+The ``copy`` method produces a cheap copy of the given blist.  All buckets
+and indexes are shared until a change is made to either side.  Copies can
+safely be made of other copies.
+
+    >>> c = l.copy()
+    >>> l == c
+    True
+    >>> list(c) == list(l)
+    True
+    >>> del c[10:]
+    >>> list(l) == range(1000)
+    True
+    >>> list(c)
+    [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
+    >>> l == c
+    False
+    >>> c2 = c.copy()
+    >>> c2 == c
+    True
+    >>> list(c2)
+    [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
+
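The copy-on-write sharing behind ``copy`` can be sketched in plain Python. This is a hedged illustration only: the class and method names (``SharedBucket``, ``COWList``, ``_mutable``, ``set``, ``tolist``) are invented for this sketch, and zc.blist actually tracks sharing per persistent bucket/index through a separate ``Collections`` object.

```python
# Conceptual sketch of copy-on-write bucket sharing (invented names,
# not zc.blist's real API).  Each bucket records which lists share it;
# a write to a shared bucket copies it first, so copies stay cheap
# until they diverge.

class SharedBucket(object):
    def __init__(self, items):
        self.items = list(items)
        self.owners = set()            # ids of lists sharing this bucket

class COWList(object):
    def __init__(self, buckets):
        self.buckets = list(buckets)
        for b in self.buckets:
            b.owners.add(id(self))

    def copy(self):
        # cheap: the new list shares every existing bucket
        return COWList(self.buckets)

    def _mutable(self, i):
        b = self.buckets[i]
        if len(b.owners) > 1:          # shared: copy before mutating
            b.owners.discard(id(self))
            b = SharedBucket(b.items)
            b.owners.add(id(self))
            self.buckets[i] = b
        return b

    def set(self, bucket_i, pos, value):
        self._mutable(bucket_i).items[pos] = value

    def tolist(self):
        return [v for b in self.buckets for v in b.items]

a = COWList([SharedBucket([0, 1]), SharedBucket([2, 3])])
b = a.copy()           # no bucket data is copied yet
b.set(0, 0, 99)        # only the first bucket is copied, for b
print(a.tolist())      # [0, 1, 2, 3]  -- a is untouched
print(b.tolist())      # [99, 1, 2, 3]
```

The second bucket is still shared after the write, which is the property that makes ``copy`` cheap for large sequences with localized changes.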
 =========
 Mechanism
 =========
 
 In its implementation, the sequence is an adapted B+ tree. Positions serve as
 keys, but the keys in each bucket or branch restart at 0. For instance, a
 perfectly-balanced bucket
-sequence with 16 items, and 3 in a bucket or branch, would have "keys" like
-this::
+sequence with 16 items, and a limit of 3 entries in a bucket or branch, would
+have "keys" like this. In the diagram, the top three rows are indexes, and the
+bottom row consists of buckets.
 
         0           8
      0     4     0     4
     0 2   0 2   0 2   0 2
-   01 01 01 01 01 01 01 01 
+   01 01 01 01 01 01 01 01
 
+So, for instance, you would get the value at position 5 using this process:
+
+- In the top index (the top row, with keys of 0 and 8), find the largest key
+  that is not greater than the desired position, and use its associated value
+  (an index or bucket; here, the index represented by the first pair of 0 and
+  4 in the second row) for the next step. The largest key not greater than 5
+  is 0. Subtract that key from the position to get the position for the next
+  step: 5 - 0 = 5.
+
+- The next index has keys of 0 and 4. The largest key not greater than 5 is
+  4. Use the child associated with the 4 key for the next step (the second
+  pair of 0 and 2 in the third row), and subtract the key (4) from the
+  position (5) to get the position for the next step: 5 - 4 = 1.
+
+- The next index (the second pair of 0 and 2 in the third row) is asked for
+  position 1. The largest key not greater than 1 is 0, which points to the
+  third pair of 0 1 in the last row. The new position is 1 - 0 = 1.
+
+- Finally, position 1 in that bottom-row bucket holds the actual desired
+  value.
+
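The lookup steps above can be sketched in plain Python. This is an illustration of the mechanism only, with invented names (``find``, ``tree``); it models indexes as dicts and buckets as lists rather than using zc.blist's actual classes.

```python
# Illustrative sketch of the B+-tree position lookup described above.
# Indexes are {key: child} dicts; buckets are plain lists.  At each
# index, pick the largest key not greater than the position, then
# subtract it, because child keys restart at 0.

def find(node, position):
    """Descend from the top index to the value at `position`."""
    while isinstance(node, dict):              # still at an index
        key = max(k for k in node if k <= position)
        position -= key                        # child positions restart at 0
        node = node[key]
    return node[position]                      # reached a bucket

# The 16-item tree from the diagram, with at most 3 entries per node:
def bucket(a, b):
    return [a, b]

tree = {0: {0: {0: bucket(0, 1), 2: bucket(2, 3)},
            4: {0: bucket(4, 5), 2: bucket(6, 7)}},
        8: {0: {0: bucket(8, 9), 2: bucket(10, 11)},
            4: {0: bucket(12, 13), 2: bucket(14, 15)}}}

print(find(tree, 5))   # follows keys 0 -> 4 -> 0, then bucket position 1: 5
```

Running ``find(tree, i)`` for every i in 0..15 returns i, matching the walkthrough.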
 This arrangement minimizes the changes to keys necessary when a new value is
-inserted low in the sequence: ignoring balancing the tree, only subsequent
-siblings and subsequent siblings of parents must be adjusted.  For instance,
-inserting a new value in the 0 position of the bucketsequence described above
-(the worst case for the algorithm, in terms of the number of objects touched)
-would result in the following tree::
+inserted low in the sequence: ignoring balancing the tree, only parents and
+their subsequent siblings must be adjusted. For instance, inserting a new value
+in the 0 position of the bucket sequence described above (the worst case for
+algorithm, in terms of the number of objects touched) would result in the
+following tree::
 
         0           9
      0     5     0     4
@@ -91,7 +192,7 @@
 
 - ``count``, ``__eq__``, and other methods load all buckets and items, and are
   brute force, and in Python. In contrast, lists and tuples will load all
-  items, and is brute force in C.
+  items (worse), and are brute force in C (better, but not algorithmically).
 
 The Bad
 =======
@@ -99,5 +200,7 @@
 - This will create a lot of Persistent objects for one blist, which may cause
   cache eviction problems depending on circumstances and usage.
 
-- Did I mention that this was in Python, not C?  That's fixable, at least.
+- Did I mention that this was in Python, not C?  That's fixable, at least, and
+  in fact doesn't appear to be too problematic at the moment, at least for the
+  author's usage.
 

Modified: Sandbox/poster/zc.blist/trunk/src/zc/blist/__init__.py
===================================================================
--- Sandbox/poster/zc.blist/trunk/src/zc/blist/__init__.py	2008-05-19 22:01:21 UTC (rev 86845)
+++ Sandbox/poster/zc.blist/trunk/src/zc/blist/__init__.py	2008-05-20 01:03:19 UTC (rev 86846)
@@ -62,8 +62,6 @@
     del l[:count]
     return res
 
-# Bucket and Index extend this
-
 class Collections(persistent.Persistent):
     # separate persistent object so a change does not necessitate rewriting
     # bucket or index
@@ -79,15 +77,18 @@
 
     def remove(self, collection):
         res = []
-        found = False
+        found = 0
         for coll in self._collections:
             if coll is not collection:
                 res.append(coll)
             else:
                 assert not found
-                found = True
+                found += 1
+        if not found:
+            raise ValueError('blist programmer error: collection not found')
         self._collections = tuple(res)
 
+
     def __len__(self):
         return len(self._collections)
 
@@ -103,6 +104,8 @@
     def __nonzero__(self):
         return bool(self._collections)
 
+# Bucket and Index extend this
+
 class AbstractData(persistent.Persistent):
     def __init__(self, collection, identifier=None, previous=None, next=None,
                  parent=None):
@@ -236,7 +239,10 @@
         raise ValueError('value not found; likely programmer error')
 
     def contained_len(self, collection):
-        val = self.maxKey()
+        try:
+            val = self.maxKey()
+        except ValueError:
+            return 0
         return val + collection._mapping[self[val]].contained_len(collection)
 
     @method
@@ -316,11 +322,13 @@
 
 class BList(persistent.Persistent):
 
+    family = BTrees.family32 # don't support 64 yet because need another Index
+    # class
+
     def __init__(self, vals=None,
-                 bucket_size=30, index_size=10, family=BTrees.family32):
+                 bucket_size=30, index_size=10):
         self.bucket_size = bucket_size
         self.index_size = index_size
-        self.family = family
         self._mapping = self.family.IO.BTree()
         self._top_index = 0
         self._mapping[self._top_index] = Bucket(self, self._top_index)
@@ -491,7 +499,9 @@
         length = len(self)
         self[length:length] = iterable
 
-    __iadd__ = extend
+    def __iadd__(self, iterable):
+        self.extend(iterable)
+        return self
 
     def pop(self, index=-1):
         res = self[index]
@@ -505,49 +515,48 @@
     def reverse(self):
         self[:] = reversed(self)
 
-    def sort(self, cmpfunc=None):
+    def sort(self, cmp=None, key=None, reverse=False):
         vals = list(self)
-        vals.sort(cmpfunc)
-        self[:] = l
+        vals.sort(cmp=cmp, key=key, reverse=reverse)
+        self[:] = vals
 
     # __setitem__ helpers
 
     def _reindex(self, start_bucket, stop_bucket_id, recurse=False):
-        if start_bucket is None:
+        if start_bucket is None or start_bucket.identifier == self._top_index:
             return
-        bucket = self._mapping[start_bucket.identifier]
-        k = None
-        stopped = False
-        while bucket is not None and bucket.identifier != self._top_index:
-            stopped = stopped or bucket.identifier == stop_bucket_id
-            parent = self._mapping[bucket.parent]
-            if k is None:
-                k = parent.index(bucket.identifier)
-                if k == parent.minKey() and k > 0:
-                    parent = self._mutable(parent)
-                    del parent[k]
-                    k = 0
-                    parent[k] = bucket.identifier
-            v = bucket.contained_len(self)
-            try:
-                next = parent.minKey(k+1)
-            except ValueError:
-                k = None
-                if recurse:
-                    self._reindex(
-                        parent, self._mapping[stop_bucket_id].parent, recurse)
-                if stopped:
-                    break
-            else:
-                k += v
-                if next != k:
-                    b = parent[next]
-                    parent = self._mutable(parent)
-                    del parent[next]
-                    parent[k] = b
-            bucket = bucket.next
-            if bucket is not None:
-                bucket = self._mapping[bucket]
+        orig_parent = parent = self._mapping[
+            self._mapping[start_bucket.identifier].parent]
+        stopped = found = False
+        new = {}
+        while not stopped:
+            found = found or parent.minKey() != 0
+            next = 0
+            changed = False
+            for k, v in parent.items():
+                if not found:
+                    new[k] = v
+                    if v == start_bucket.identifier:
+                        found = True
+                        next = k + self._mapping[v].contained_len(self)
+                else:
+                    new[next] = v
+                    changed = changed or k != next
+                    next = next + self._mapping[v].contained_len(self)
+                stopped = stopped or v == stop_bucket_id
+            if changed:
+                parent = self._mutable(parent)
+                parent.clear()
+                parent.update(new)
+            if not stopped:
+                if parent.next is None:
+                    stopped = True
+                else:
+                    parent = self._mapping[parent.next]
+                    new.clear()
+        if recurse:
+            self._reindex(
+                orig_parent, self._mapping[stop_bucket_id].parent, recurse)
 
     def _mutable(self, bucket):
         if bucket.shared:
@@ -599,6 +608,8 @@
                 # specifying the step.
 
         start, stop, stride = index.indices(length)
+        if start > stop and stride > 0 or start < stop and stride < 0:
+            stop = start # that's the way the Python list works.
         if index.step is None:
             # delete and/or insert range; bucket arrangement may change
             value = list(value) # we actually do mutate this, so a list is
@@ -607,6 +618,9 @@
             len_value = len(value)
             if start == 0 and stop == length and stride == 1:
                 # shortcut: clear out everything
+                for data in self._mapping.values():
+                    if data.shared:
+                        data.collections.remove(self)
                 self._mapping.clear()
                 self._top_index = 0
                 start_bucket = self._mapping[
@@ -626,6 +640,10 @@
                         if removed:
                             bucket = self._mutable(bucket)
                             del bucket[ix:stop_ix]
+                        elif (stop_bucket_id != start_bucket_id and
+                              bucket.previous is not None):
+                            # TODO this logic could maybe be moved earlier?
+                            stop_bucket_id = bucket.previous
                         break
                     removed = bucket[ix:]
                     if removed:
@@ -646,7 +664,7 @@
                     bucket = self._mutable(bucket)
                     bucket[ix:ix] = items
                     if bucket.identifier == stop_bucket_id or not value:
-                        stop_ix = len(items)
+                        stop_ix = len(items) + ix
                         break
                     bucket = self._mapping[bucket.next]
                     ix = 0
@@ -716,11 +734,14 @@
 
                     if ((bucket is stop_bucket or not len_stop) and
                         bucket.previous is None and stop_bucket.next is None):
+                        bucket = self._mutable(bucket)
                         if bucket.identifier != self._top_index:
                             # get rid of unnecessary parents
                             p = bucket
                             while p.identifier != self._top_index:
                                 p = self._mapping[p.parent]
+                                if p.shared:
+                                    p.collections.remove(self)
                                 del self._mapping[p.identifier]
                             # this is now the top of the tree
                             self._top_index = bucket.identifier 
@@ -885,7 +906,8 @@
                         previous=old_bucket.identifier, next=old_bucket.next)
                     old_bucket.next = bucket.identifier
                     if bucket.next is not None:
-                        self._mapping[bucket.next].previous = bucket.identifier
+                        self._mutable(self._mapping[bucket.next]).previous = (
+                            bucket.identifier)
                     created.append(bucket)
                     length = self.bucket_size
 
@@ -908,7 +930,8 @@
 
             if not created:
                 self._reindex(
-                    self._mapping[start_bucket_id], bucket.identifier)
+                    self._mapping[start_bucket_id], bucket.identifier,
+                    recurse=True)
                 return
 
             value = created
@@ -951,20 +974,21 @@
                 length = fill_maximum - len(bucket)
                 while value:
                     for o in shift_sequence(value, length):
-                        assert not o.shared # these are newly created
+                        o = self._mutable(o)
                         bucket[ix] = o.identifier
                         o.parent = bucket.identifier
                         ix += o.contained_len(self)
-                    # we don't necessarily need to fix parents--
-                    # we'll get to them above
+                    # we don't need to fix parents--we'll get to them above
                     if value:
                         identifier = self._generateId()
+                        previous = self._mutable(bucket)
                         bucket = self._mapping[identifier] = Index(
                             self, identifier,
-                            previous=bucket.identifier, next=bucket.next)
-                        self._mapping[bucket.previous].next = identifier
+                            previous=previous.identifier, next=previous.next)
+                        previous.next = identifier
                         if bucket.next is not None:
-                            self._mapping[bucket.next].previous = identifier
+                            next = self._mutable(self._mapping[bucket.next])
+                            next.previous = identifier
                         created.append(bucket)
                         length = fill_maximum
                         ix = 0
@@ -995,7 +1019,8 @@
             value_ct = 0
             for v in value:
                 value_ct += 1
-                if index >= stop:
+                if (stride > 0 and index >= stop or
+                    stride < 0 and index <= stop):
                     error = ValueError(
                         'attempt to assign sequence of at least size %d '
                         'to extended slice of size %d' % (
@@ -1003,6 +1028,7 @@
                     break
                 bucket, ix = self._get_bucket(index)
                 old = bucket[ix]
+                bucket = self._mutable(bucket)
                 bucket[ix] = v
                 changed.append((bucket, ix, old))
                 index += stride
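
The stride-aware stop check added in the hunk above mirrors CPython's own semantics for extended-slice assignment, which can be confirmed with plain lists (no zc.blist required):

```python
# Extended-slice assignment (an explicit step) requires the value to
# have exactly as many items as the slice selects, in either direction.

l = list(range(10))
l[1:7:2] = ['a', 'b', 'c']         # 3 positions: 1, 3, 5
assert l == [0, 'a', 2, 'b', 4, 'c', 6, 7, 8, 9]

l = list(range(10))
l[8:2:-2] = ['x', 'y', 'z']        # 3 positions: 8, 6, 4
assert l == [0, 1, 2, 3, 'z', 5, 'y', 7, 'x', 9]

# Wrong lengths raise ValueError, the same error the code above raises:
try:
    l[1:7:2] = ['too', 'many', 'items', 'here']
except ValueError:
    pass
else:
    raise AssertionError('expected ValueError')
```

Checking both stride directions (``stride > 0 and index >= stop`` or ``stride < 0 and index <= stop``) is what makes negative-step assignments report the size mismatch correctly.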

Modified: Sandbox/poster/zc.blist/trunk/src/zc/blist/testing.py
===================================================================
--- Sandbox/poster/zc.blist/trunk/src/zc/blist/testing.py	2008-05-19 22:01:21 UTC (rev 86845)
+++ Sandbox/poster/zc.blist/trunk/src/zc/blist/testing.py	2008-05-20 01:03:19 UTC (rev 86846)
@@ -32,10 +32,12 @@
     
 def matches(b, result):
     assert list(b) == result, repr(list(b)) + ' != ' + repr(result)
+    assert len(b) == len(result)
     res = []
-    for i in range(len(b)):
-        res.append(b[i])
-    assert res == result, repr(res) + ' != ' + repr(result)
+    bad = [(i, (b_i, r_i)) for (i, b_i, r_i) in
+           ((i, b[i], result[i]) for i in range(len(b)))
+           if b_i != r_i]
+    assert not bad, 'getitems do not match on these indices: ' + repr(bad)
     # we'll check the buckets internally while we are here
     assert b.data.parent is None
     assert b.data.previous is None and b.data.next is None

Modified: Sandbox/poster/zc.blist/trunk/src/zc/blist/tests.py
===================================================================
--- Sandbox/poster/zc.blist/trunk/src/zc/blist/tests.py	2008-05-19 22:01:21 UTC (rev 86845)
+++ Sandbox/poster/zc.blist/trunk/src/zc/blist/tests.py	2008-05-20 01:03:19 UTC (rev 86846)
@@ -13,9 +13,15 @@
 ##############################################################################
 import os
 import unittest
+import random
+import sys
+import traceback
 
 from zope.testing import doctest
 
+import zc.blist
+import zc.blist.testing
+
 def by_hand_regression_test():
     """
     ==================
@@ -164,6 +170,13 @@
         ...     assert matches(b, old_comparison)
         ...     return checkCopies(b, new)
         ...
+
+    So, ``checkCopies`` and ``check`` return three lists: the bucket
+    identifiers that are only in b, the bucket identifiers that are only in
+    new, and the bucket identifiers that are in both, but different. Initially,
+    all three lists are empty, because the two blists share all buckets and
+    indexes.
+
         >>> check()
         ([], [], [])
         >>> del new[4]
@@ -189,25 +202,225 @@
         >>> del new[::3]
         >>> del comparison[::3]
         >>> [len(v) for v in check()]
-        [34, 26, 32]
+        [35, 26, 28]
         >>> del new[::2]
         >>> del comparison[::2]
         >>> [len(v) for v in check()]
-        [55, 19, 11]
+        [56, 19, 10]
         >>> del new[-1:5:-2]
         >>> del comparison[-1:5:-2]
         >>> [len(v) for v in check()]
-        [55, 6, 11]
-    
-    XXX lots more to test, especially exceptions.
+        [56, 7, 10]
     """
 
+class Canary(object):
+    def __init__(self, comp, blist = None):
+        if blist is None:
+            blist = zc.blist.BList(comp)
+        self.blist = blist
+        self.comp = comp
+        self.ops = [getattr(self, n) for n in dir(self) if n.startswith('t_')]
+        self.empty_ops = [getattr(self, n) for n in dir(self) if n.startswith('e_')]
+        self.ops.extend(self.empty_ops)
+
+    def __call__(self, count=100):
+        for i in range(count):
+            if len(self.comp):
+                c = random.choice(self.ops)
+            else:
+                c = random.choice(self.empty_ops)
+            self.run(c)
+
+    def run(self, call):
+        orig = self.blist.copy()
+        orig_comp = self.comp[:]
+        try:
+            zc.blist.testing.checkCopies(self.blist, orig) # initial
+            args = call()
+            zc.blist.testing.matches(self.blist, self.comp)
+            zc.blist.testing.matches(orig, orig_comp)
+            return zc.blist.testing.checkCopies(self.blist, orig) # post
+        except Exception, e:
+            traceback.print_exc()
+            import pdb; pdb.post_mortem(sys.exc_info()[2])
+            raise
+
+    def _getval(self):
+        return random.randint(-sys.maxint, sys.maxint)
+
+    def _getloc(self, adjustment=-1):
+        max = len(self.comp)+adjustment
+        if max <= 1:
+            return 0
+        return random.randint(0, max)
+
+    def _get_start_stop(self):
+        if not self.comp:
+            return 0, 0
+        max = len(self.comp)-1
+        start = random.randint(0, max)
+        if start == max:
+            stop = start
+        else:
+            stop = random.randint(start, max)
+        if random.choice((True, False)):
+            start -= len(self.comp)
+        if random.choice((True, False)):
+            stop -= len(self.comp)
+        return start, stop
+
+    def _get_start_stop_step(self):
+        if not self.comp:
+            return 0, 0, 1
+        max = len(self.comp)-1
+        start = random.randint(0, max)
+        step = 1
+        if start == max:
+            stop = start
+        else:
+            stop = random.randint(start, max)
+            if stop-start > 1:
+                step = random.randint(1, stop-start)
+        if random.choice((True, False)):
+            start -= len(self.comp)
+        if random.choice((True, False)):
+            stop -= len(self.comp)
+        if random.choice((True, False)):
+            stop, start = start, stop
+            step = -step
+        return start, stop, step
+
+    def _getvals(self):
+        return [self._getval() for i in range(random.randint(1, 100))]
+
+    def e_append(self):
+        val = self._getval()
+        self.comp.append(val)
+        self.blist.append(val)
+        return (val,)
+
+    def e_extend(self):
+        new = self._getvals()
+        self.comp.extend(new)
+        self.blist.extend(new)
+        return (new,)
+
+    def e_iadd(self):
+        new = self._getvals()
+        self.comp += new
+        self.blist += new
+        return (new,)
+
+    def e_insert(self):
+        val = self._getval()
+        location = self._getloc(0) # can insert after last item
+        self.comp.insert(location, val)
+        self.blist.insert(location, val)
+        return location, val
+
+    def t_delitem(self):
+        location = self._getloc()
+        del self.comp[location]
+        del self.blist[location]
+        return (location,)
+
+    def t_delslice(self):
+        start, stop = self._get_start_stop()
+        del self.comp[start:stop]
+        del self.blist[start:stop]
+        return start, stop
+
+    def t_delslice_step(self):
+        start, stop, step = self._get_start_stop_step()
+        del self.comp[start:stop:step]
+        del self.blist[start:stop:step]
+        return start, stop, step
+
+    def e_delslice_noop(self):
+        stop, start = self._get_start_stop()
+        del self.comp[start:stop]
+        del self.blist[start:stop]
+        return start, stop
+
+    def e_delslice_step_noop(self):
+        stop, start, step = self._get_start_stop_step()
+        del self.comp[start:stop:step]
+        del self.blist[start:stop:step]
+        return start, stop, step
+
+    def t_pop(self):
+        location = self._getloc()
+        assert self.comp.pop(location) == self.blist.pop(location)
+        return (location,)
+
+    def t_remove(self):
+        val = self.comp[self._getloc()]
+        self.comp.remove(val)
+        self.blist.remove(val)
+        return (val,)
+
+    def t_reverse(self):
+        self.comp.reverse()
+        self.blist.reverse()
+        return ()
+
+    def t_sort(self):
+        self.comp.sort()
+        self.blist.sort()
+        return ()
+
+    def t_sort_cmp(self):
+        self.comp.sort(lambda s, o: cmp(str(o), str(s))) # reverse, by
+        self.blist.sort(lambda s, o: cmp(str(o), str(s))) # string
+        return ()
+
+    def t_sort_key(self):
+        self.comp.sort(key=lambda v: str(v))
+        self.blist.sort(key=lambda v: str(v))
+        return ()
+
+    def t_sort_reverse(self):
+        self.comp.sort(reverse = True)
+        self.blist.sort(reverse = True)
+        return ()
+
+    def t_setitem(self):
+        location = self._getloc()
+        val = self._getval()
+        self.comp[location] = val
+        self.blist[location] = val
+        return location, val
+
+    def e_setitem_slice(self):
+        start, stop = self._get_start_stop()
+        vals = self._getvals()
+        self.comp[start:stop] = vals
+        self.blist[start:stop] = vals
+        return start, stop, vals
+
+    def e_setitem_slice_step(self):
+        start, stop, step = self._get_start_stop_step()
+        vals = [self._getval() for i in range(len(self.comp[start:stop:step]))]
+        self.comp[start:stop:step] = vals
+        self.blist[start:stop:step] = vals
+        return start, stop, step, vals
+
+class CanaryTestCase(unittest.TestCase):
+    def test_empty_canary(self):
+        c = Canary([])
+        c()
+
+    def test_small_bucket_empty_canary(self):
+        c = Canary([], zc.blist.BList(bucket_size=5, index_size=4))
+        c()
+
 def test_suite():
     return unittest.TestSuite((
         doctest.DocFileSuite(
             'README.txt',
             optionflags=doctest.INTERPRET_FOOTNOTES),
-        doctest.DocTestSuite()
+        doctest.DocTestSuite(),
+        unittest.TestLoader().loadTestsFromTestCase(CanaryTestCase),
         ))
 
 


