[Checkins] SVN: Sandbox/poster/zc.blist/trunk/ initial check in. needs much better tests, but it is a start, and I have been sitting on this way too long.

Gary Poster gary at zope.com
Wed Apr 30 23:53:09 EDT 2008


Log message for revision 85946:
  initial check in.  needs much better tests, but it is a start, and I have been sitting on this way too long.

Changed:
  _U  Sandbox/poster/zc.blist/trunk/
  A   Sandbox/poster/zc.blist/trunk/CHANGES.txt
  A   Sandbox/poster/zc.blist/trunk/DEVELOP.txt
  A   Sandbox/poster/zc.blist/trunk/LICENSE.txt
  A   Sandbox/poster/zc.blist/trunk/buildout.cfg
  A   Sandbox/poster/zc.blist/trunk/setup.py
  A   Sandbox/poster/zc.blist/trunk/src/
  A   Sandbox/poster/zc.blist/trunk/src/zc/
  A   Sandbox/poster/zc.blist/trunk/src/zc/__init__.py
  A   Sandbox/poster/zc.blist/trunk/src/zc/blist/
  A   Sandbox/poster/zc.blist/trunk/src/zc/blist/CHANGES.txt
  A   Sandbox/poster/zc.blist/trunk/src/zc/blist/README.txt
  A   Sandbox/poster/zc.blist/trunk/src/zc/blist/TODO.txt
  A   Sandbox/poster/zc.blist/trunk/src/zc/blist/__init__.py
  A   Sandbox/poster/zc.blist/trunk/src/zc/blist/tests.py

-=-

Property changes on: Sandbox/poster/zc.blist/trunk
___________________________________________________________________
Name: svn:ignore
   + develop-eggs
bin
parts
.installed.cfg
dist
TEST_THIS_REST_BEFORE_REGISTERING.txt
*.kpf

Name: svn:externals
   + bootstrap svn://svn.zope.org/repos/main/zc.buildout/trunk/bootstrap


Added: Sandbox/poster/zc.blist/trunk/CHANGES.txt
===================================================================
--- Sandbox/poster/zc.blist/trunk/CHANGES.txt	                        (rev 0)
+++ Sandbox/poster/zc.blist/trunk/CHANGES.txt	2008-05-01 03:53:08 UTC (rev 85946)
@@ -0,0 +1 @@
+Please see CHANGES.txt in the package.


Property changes on: Sandbox/poster/zc.blist/trunk/CHANGES.txt
___________________________________________________________________
Name: svn:eol-style
   + native

Added: Sandbox/poster/zc.blist/trunk/DEVELOP.txt
===================================================================
--- Sandbox/poster/zc.blist/trunk/DEVELOP.txt	                        (rev 0)
+++ Sandbox/poster/zc.blist/trunk/DEVELOP.txt	2008-05-01 03:53:08 UTC (rev 85946)
@@ -0,0 +1,26 @@
+To develop this package from source:
+
+- check out the software from the repository
+
+- ``cd`` to the checkout
+
+- Ideally with a clean, non-system python, run
+  ``python bootstrap/bootstrap.py``
+
+- run ``./bin/buildout``
+
+To run tests, run ``./bin/test``.
+
+Changes should be documented in CHANGES.txt *in the package*.
+
+Before making a release that registers the software with PyPI, run the following:
+
+- ``./bin/py setup.py``
+
+This then creates a file with the following silly name:
+``TEST_THIS_REST_BEFORE_REGISTERING.txt``
+
+As the name suggests, test the file in a ReST tool to make sure docutils
+parses it correctly.
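+
+For instance, one way to run that check directly with docutils (an
+illustrative sketch only; ``publish_string`` is docutils' generic entry
+point, and ``halt_level=2`` makes it raise on warnings as well as errors)::
+
+    import docutils.core
+
+    text = open('TEST_THIS_REST_BEFORE_REGISTERING.txt').read()
+    # raises docutils.utils.SystemMessage if the ReST does not parse cleanly
+    docutils.core.publish_string(text, settings_overrides={'halt_level': 2})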
+
+Once this works, go ahead and ``./bin/py setup.py sdist register upload``.


Property changes on: Sandbox/poster/zc.blist/trunk/DEVELOP.txt
___________________________________________________________________
Name: svn:eol-style
   + native

Added: Sandbox/poster/zc.blist/trunk/LICENSE.txt
===================================================================
--- Sandbox/poster/zc.blist/trunk/LICENSE.txt	                        (rev 0)
+++ Sandbox/poster/zc.blist/trunk/LICENSE.txt	2008-05-01 03:53:08 UTC (rev 85946)
@@ -0,0 +1,54 @@
+Zope Public License (ZPL) Version 2.1
+-------------------------------------
+
+A copyright notice accompanies this license document that
+identifies the copyright holders.
+
+This license has been certified as open source. It has also
+been designated as GPL compatible by the Free Software
+Foundation (FSF).
+
+Redistribution and use in source and binary forms, with or
+without modification, are permitted provided that the
+following conditions are met:
+
+1. Redistributions in source code must retain the
+   accompanying copyright notice, this list of conditions,
+   and the following disclaimer.
+
+2. Redistributions in binary form must reproduce the accompanying
+   copyright notice, this list of conditions, and the
+   following disclaimer in the documentation and/or other
+   materials provided with the distribution.
+
+3. Names of the copyright holders must not be used to
+   endorse or promote products derived from this software
+   without prior written permission from the copyright
+   holders.
+
+4. The right to distribute this software or to use it for
+   any purpose does not give you the right to use
+   Servicemarks (sm) or Trademarks (tm) of the copyright
+   holders. Use of them is covered by separate agreement
+   with the copyright holders.
+
+5. If any files are modified, you must cause the modified
+   files to carry prominent notices stating that you changed
+   the files and the date of any change.
+
+Disclaimer
+
+  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS ``AS IS''
+  AND ANY EXPRESSED OR IMPLIED WARRANTIES, INCLUDING, BUT
+  NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY
+  AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.  IN
+  NO EVENT SHALL THE COPYRIGHT HOLDERS BE
+  LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+  EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+  LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+  HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+  CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
+  OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+  SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
+  DAMAGE.
\ No newline at end of file


Property changes on: Sandbox/poster/zc.blist/trunk/LICENSE.txt
___________________________________________________________________
Name: svn:eol-style
   + native

Added: Sandbox/poster/zc.blist/trunk/buildout.cfg
===================================================================
--- Sandbox/poster/zc.blist/trunk/buildout.cfg	                        (rev 0)
+++ Sandbox/poster/zc.blist/trunk/buildout.cfg	2008-05-01 03:53:08 UTC (rev 85946)
@@ -0,0 +1,21 @@
+[buildout]
+parts =
+    interpreter
+    test
+
+develop = .
+
+find-links =
+    http://download.zope.org/distribution
+
+[test]
+recipe = zc.recipe.testrunner
+eggs = zc.blist
+defaults = '--tests-pattern ^[fn]?tests --exit-with-status -1'.split()
+working-directory = ${buildout:directory}
+
+[interpreter]
+recipe = zc.recipe.egg
+eggs = zc.blist
+       docutils
+interpreter = py


Property changes on: Sandbox/poster/zc.blist/trunk/buildout.cfg
___________________________________________________________________
Name: svn:eol-style
   + native

Added: Sandbox/poster/zc.blist/trunk/setup.py
===================================================================
--- Sandbox/poster/zc.blist/trunk/setup.py	                        (rev 0)
+++ Sandbox/poster/zc.blist/trunk/setup.py	2008-05-01 03:53:08 UTC (rev 85946)
@@ -0,0 +1,96 @@
+##############################################################################
+#
+# Copyright (c) 2006-2008 Zope Corporation and Contributors.
+# All Rights Reserved.
+#
+# This software is subject to the provisions of the Zope Public License,
+# Version 2.1 (ZPL).  A copy of the ZPL should accompany this distribution.
+# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
+# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
+# FOR A PARTICULAR PURPOSE.
+#
+##############################################################################
+import os
+from setuptools import setup, find_packages
+
+# generic helpers primarily for the long_description
+try:
+    import docutils
+except ImportError:
+    import warnings
+    def validateReST(text):
+        warnings.warn('docutils is not installed; skipping ReST validation')
+        return ''
+else:
+    import docutils.utils
+    import docutils.parsers.rst
+    import StringIO
+    def validateReST(text):
+        doc = docutils.utils.new_document('validator')
+        # our desired settings
+        doc.reporter.halt_level = 5
+        doc.reporter.report_level = 1
+        stream = doc.reporter.stream = StringIO.StringIO()
+        # docutils buglets (?)
+        doc.settings.tab_width = 2
+        doc.settings.pep_references = doc.settings.rfc_references = False
+        doc.settings.trim_footnote_reference_space = None
+        # and we're off...
+        parser = docutils.parsers.rst.Parser()
+        parser.parse(text, doc)
+        return stream.getvalue()
+
+def text(*args, **kwargs):
+    # note: distutils explicitly disallows unicode for setup values :-/
+    # http://docs.python.org/dist/meta-data.html
+    tmp = []
+    for a in args:
+        if a.endswith('.txt'):
+            f = open(os.path.join(*a.split('/')))
+            tmp.append(f.read())
+            f.close()
+            tmp.append('\n\n')
+        else:
+            tmp.append(a)
+    if len(tmp) == 1:
+        res = tmp[0]
+    else:
+        res = ''.join(tmp)
+    out = kwargs.get('out')
+    if out is True:
+        out = 'TEST_THIS_REST_BEFORE_REGISTERING.txt'
+    if out:
+        f = open(out, 'w')
+        f.write(res)
+        f.close()
+        report = validateReST(res)
+        if report:
+            print report
+            raise ValueError('ReST validation error')
+    return res
+# end helpers; below this line should be code custom to this package
+
+setup(
+    name='zc.blist',
+    version='1.0a1',
+    packages=find_packages('src'),
+    package_dir={'':'src'},
+    zip_safe=False,
+    author='Gary Poster',
+    author_email='gary at zope.com',
+    description='ZODB-friendly BTree-based list implementation',
+    long_description=text(
+        'src/zc/blist/README.txt',
+        "=======\nChanges\n=======\n\n",
+        'src/zc/blist/CHANGES.txt',
+        out=True),
+    license='ZPL',
+    install_requires=[
+        'ZODB3 >= 3.7',
+        'rwproperty',
+        'setuptools',
+        
+        'zope.testing',
+        ],
+    include_package_data=True,
+    )

Added: Sandbox/poster/zc.blist/trunk/src/zc/__init__.py
===================================================================
--- Sandbox/poster/zc.blist/trunk/src/zc/__init__.py	                        (rev 0)
+++ Sandbox/poster/zc.blist/trunk/src/zc/__init__.py	2008-05-01 03:53:08 UTC (rev 85946)
@@ -0,0 +1,20 @@
+##############################################################################
+#
+# Copyright (c) 2006-2008 Zope Corporation and Contributors.
+# All Rights Reserved.
+#
+# This software is subject to the provisions of the Zope Public License,
+# Version 2.1 (ZPL).  A copy of the ZPL should accompany this distribution.
+# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
+# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
+# FOR A PARTICULAR PURPOSE.
+#
+##############################################################################
+# this is a namespace package
+try:
+    import pkg_resources
+    pkg_resources.declare_namespace(__name__)
+except ImportError:
+    import pkgutil
+    __path__ = pkgutil.extend_path(__path__, __name__)

Added: Sandbox/poster/zc.blist/trunk/src/zc/blist/CHANGES.txt
===================================================================
--- Sandbox/poster/zc.blist/trunk/src/zc/blist/CHANGES.txt	                        (rev 0)
+++ Sandbox/poster/zc.blist/trunk/src/zc/blist/CHANGES.txt	2008-05-01 03:53:08 UTC (rev 85946)
@@ -0,0 +1,4 @@
+1.0 (unreleased)
+================
+
+Initial release.


Property changes on: Sandbox/poster/zc.blist/trunk/src/zc/blist/CHANGES.txt
___________________________________________________________________
Name: svn:eol-style
   + native

Added: Sandbox/poster/zc.blist/trunk/src/zc/blist/README.txt
===================================================================
--- Sandbox/poster/zc.blist/trunk/src/zc/blist/README.txt	                        (rev 0)
+++ Sandbox/poster/zc.blist/trunk/src/zc/blist/README.txt	2008-05-01 03:53:08 UTC (rev 85946)
@@ -0,0 +1,267 @@
+~~~~~~~~
+zc.blist
+~~~~~~~~
+
+.. contents::
+
+========
+Overview
+========
+
+The sequence in this package has a list-like API, but it stores its values in
+individual buckets.  This means that, for small changes in large sequences,
+it can be much cheaper to store than a normal list.  For instance, an ordered
+BTree-based container might want to keep its order in such a sequence, so
+that a move only causes a bucket or two (around 50 strings or fewer) to be
+rewritten in the database, rather than the entire contents (which might be
+thousands of strings or more).
+
+If the sequence is most often completely rearranged, the complexity of the
+code in this package is not worth it.  It only makes sense if the most
+frequent changes are fairly small.
+
+One downside is that reading and writing are more work than with a normal
+list.  If this package were to gain real traction, writing some or all of it
+in C would probably help.  Even so, it still seems pretty snappy.
+
+Another downside is a corollary of the bucket advantage described above:
+because there are more persistent objects, iterating over the sequence will
+fill a lot of ZODB's object cache (which limits the number of objects cached,
+rather than their size).  Specify a big cache if you use these sequences to
+store a lot of data and frequently iterate over or change them.
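+
+For instance, when opening the database directly, something like the
+following raises the per-connection object cache limit (a sketch only;
+``cache_size`` counts objects, not bytes, and the path and number are merely
+illustrative)::
+
+    from ZODB.DB import DB
+    from ZODB.FileStorage import FileStorage
+
+    storage = FileStorage('Data.fs')
+    # allow far more objects in each connection's cache than the default
+    db = DB(storage, cache_size=100000)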
+
+These sequences return slices as iterators, and add some helpful iteration
+methods.
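+
+For a quick taste of the API, here is a small, illustrative session.  The
+tiny bucket and index sizes are chosen only for the example; note that the
+slice comes back as an iterator, so we wrap it in ``list``.
+
+    >>> import zc.blist
+    >>> quick = zc.blist.BList(range(6), bucket_size=5, index_size=4)
+    >>> len(quick)
+    6
+    >>> quick[2]
+    2
+    >>> list(quick[1:4])
+    [1, 2, 3]
+    >>> quick.append(6)
+    >>> list(quick)
+    [0, 1, 2, 3, 4, 5, 6]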
+
+Other than that, these sequences work like standard Python sequences, so the
+rest of this file is geared less toward users of the data structure than
+toward its implementers.
+
+=========
+Mechanism
+=========
+
+In its implementation, the sequence is an adapted B+ tree.  Positions in the
+sequence are the keys, but each bucket or branch numbers its keys from 0,
+relative to its own start.  For instance, a perfectly-balanced bucket
+sequence with 16 items and at most three entries per bucket or branch would
+have "keys" like this::
+
+        0           8
+     0     4     0     4
+    0 2   0 2   0 2   0 2
+   01 01 01 01 01 01 01 01 
+
+This arrangement minimizes the key changes needed when a new value is
+inserted low in the sequence: ignoring tree rebalancing, only subsequent
+siblings, and subsequent siblings of parents, must be adjusted.  For
+instance, inserting a new value at position 0 of the bucket sequence
+described above (the worst case for the algorithm in terms of the number of
+objects touched) would result in the following tree::
+
+        0           9
+     0     5     0     4
+    0  3   0 2   0 2   0 2
+   012 01 01 01 01 01 01 01
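+
+To make the key arrangement concrete, here is a sketch of how a position is
+resolved against such per-node keys.  It mirrors what the implementation's
+``_get_bucket`` helper does, except that plain dictionaries stand in for the
+real BTree-based index nodes.
+
+    >>> def resolve(node, position):
+    ...     # at each index node, take the largest key <= position, descend
+    ...     # into that child, and make the position child-relative
+    ...     while isinstance(node, dict):
+    ...         key = max(k for k in node if k <= position)
+    ...         node, position = node[key], position - key
+    ...     return node, position
+    >>> tree = {0: {0: list('ab'), 2: list('cd')},
+    ...         4: {0: list('ef'), 2: list('gh')}}
+    >>> resolve(tree, 5)
+    (['e', 'f'], 1)
+
+Position 5 in the flattened sequence ``abcdefgh`` is ``f``, which is indeed
+item 1 of the third bucket.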
+
+===========================
+Performance Characteristics
+===========================
+
+The Good
+========
+
+- ``__getitem__`` is efficient, loading no unnecessary buckets.  It handles
+  slices pretty well too, not even loading intermediate buckets that the
+  slice skips over.  Slices currently return iterables rather than lists;
+  this may switch to a view of some sort.  All that should be assumed right
+  now is that you can iterate over the result of a slice.
+
+- ``__setitem__`` and the other write methods do a pretty good job of loading
+  buckets efficiently and writing only what they need to.  They support full
+  Python slice semantics.
+
+- ``copy`` is cheap: it reuses buckets and indexes so that new inner
+  components are created lazily when they mutate.
+
+- While ``__contains__``, ``__iter__``, ``index``, and other methods are
+  brute force and written in Python, they may not need to load all buckets
+  and items, whereas a normal list or tuple always loads everything.  See
+  also ``iter``, ``iterReversed``, and ``iterSlice``; a short illustration
+  follows this list.
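+
+A brief illustration of those helpers, with deliberately small buckets (the
+values are only for the example).
+
+    >>> import zc.blist
+    >>> demo = zc.blist.BList(range(10), bucket_size=5, index_size=4)
+    >>> list(demo.iter(7))
+    [7, 8, 9]
+    >>> list(demo.iterReversed())
+    [9, 8, 7, 6, 5, 4, 3, 2, 1, 0]
+    >>> list(demo.iterSlice(1, 9, 3))
+    [1, 4, 7]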
+
+The So-So
+=========
+
+- ``count``, ``__eq__``, and other methods load all buckets and items, and
+  are brute force in Python.  Lists and tuples also load all of their items,
+  but their equivalents are brute force in C.
+
+The Bad
+=======
+
+- This will create a lot of Persistent objects for one blist, which can cause
+  cache eviction problems.
+
+==================
+Regression Testing
+==================
+
+We'll use a ``matches`` function to compare a bucket sequence with a standard
+Python list to which the same modifications have been made.  It also checks
+bucket health [#matches]_.
+
+    >>> import zc.blist
+    >>> b = zc.blist.BList(bucket_size=5, index_size=4) # we want > 3 so min is > 1
+    >>> matches(b, [])
+    True
+    >>> b.append(0)
+    >>> matches(b, [0])
+    True
+    >>> del b[0]
+    >>> matches(b, [])
+    True
+    >>> b.extend(range(10))
+    >>> comparison = range(10)
+    >>> matches(b, comparison)
+    True
+    >>> b.reverse()
+    >>> comparison.reverse()
+    >>> matches(b, comparison)
+    True
+    >>> for i in range(10):
+    ...     b[i] = i+10
+    ...     comparison[i] = i+10
+    ...
+    >>> matches(b, comparison)
+    True
+    >>> b[5:10] = [9, 8, 7, 6, 5]
+    >>> comparison[5:10] = [9, 8, 7, 6, 5]
+    >>> matches(b, comparison)
+    True
+    >>> b[0:0] = [-3, -2, -1]
+    >>> comparison[0:0] = [-3, -2, -1]
+    >>> matches(b, comparison)
+    True
+    >>> b.extend(range(90, 100))
+    >>> comparison.extend(range(90,100))
+    >>> matches(b, comparison)
+    True
+    >>> b[10:10] = range(20, 90)
+    >>> comparison[10:10] = range(20, 90)
+    >>> matches(b, comparison)
+    True
+    >>> b[b.index(82)]
+    82
+    >>> del b[:4]
+    >>> del comparison[:4]
+    >>> matches(b, comparison)
+    True
+    >>> comparison[2:10:2] = [100, 102, 104, 106]
+    >>> b[2:10:2] = [100, 102, 104, 106]
+    >>> matches(b, comparison)
+    True
+    >>> del b[1:88]
+    >>> del comparison[1:88]
+    >>> matches(b, comparison)
+    True
+    >>> list(b[:])
+    [11, 99]
+    >>> b[0] = 0
+    >>> b[2] = 100
+    >>> b[3] = 101
+    >>> b[4] = 102
+    >>> matches(b, [0, 99, 100, 101, 102])
+    True
+
+Switching two values is most efficiently done with slice notation.
+
+    >>> b[:] = range(1000)
+    >>> b[5:996:990] = (b[995], b[5])
+    >>> list(b[:7])
+    [0, 1, 2, 3, 4, 995, 6]
+    >>> list(b[994:])
+    [994, 5, 996, 997, 998, 999]
+    >>> comparison = range(1000)
+    >>> comparison[5] = 995
+    >>> comparison[995] = 5
+    >>> matches(b, comparison)
+    True
+
+We'll test some of the other methods.
+
+    >>> b.pop(995) == comparison.pop(995)
+    True
+    >>> matches(b, comparison)
+    True
+    >>> b.insert(995, 5)
+    >>> comparison.insert(995, 5)
+    >>> matches(b, comparison)
+    True
+
+These are some more stress and regression tests.
+
+    >>> del b[900:]
+    >>> del comparison[900:]
+    >>> matches(b, comparison)
+    True
+
+    >>> del comparison[::2]
+    >>> del b[::2] # 1
+    >>> matches(b, comparison)
+    True
+    >>> del b[::2] # 2
+    >>> del comparison[::2]
+    >>> matches(b, comparison)
+    True
+    >>> del b[::2] # 3
+    >>> del comparison[::2]
+    >>> matches(b, comparison)
+    True
+
+XXX lots more to test, especially exceptions.
+
+.. [#matches] 
+    >>> def checkIndex(ix, b, previous, previous_ix=0):
+    ...     computed = 0
+    ...     if len(previous) < previous_ix+1:
+    ...         previous.append(None)
+    ...         assert len(previous) >= previous_ix + 1
+    ...     # assert isinstance(ix, zc.blist.Index)
+    ...     assert b.data is ix or len(ix) <= b.index_size
+    ...     assert b.data is ix or len(ix) >= b.index_size // 2
+    ...     assert ix.minKey() == 0
+    ...     for k, v in ix.items():
+    ...         v = b._mapping[v]
+    ...         assert computed == k
+    ...         assert v.parent == ix.identifier
+    ...         p = previous[previous_ix]
+    ...         if p is not None:
+    ...             p = p.identifier
+    ...         assert v.previous == p
+    ...         assert (v.previous is None or
+    ...                 previous[previous_ix].next == v.identifier)
+    ...         assert (v.next is None or
+    ...                 b._mapping[v.next].previous == v.identifier)
+    ...         computed += v.contained_len(b)
+    ...         if isinstance(v, zc.blist.Index):
+    ...             checkIndex(v, b, previous, previous_ix+1)
+    ...         else:
+    ...             assert isinstance(v, zc.blist.Bucket)
+    ...             assert len(v) <= b.bucket_size
+    ...             assert len(v) >= b.bucket_size // 2
+    ...         previous[previous_ix] = v
+    ...     
+    >>> def matches(b, result):
+    ...     assert list(b) == result, repr(list(b)) + ' != ' + repr(result)
+    ...     res = []
+    ...     for i in range(len(b)):
+    ...         res.append(b[i])
+    ...     assert res == result, repr(res) + ' != ' + repr(result)
+    ...     #assert set(result) == set(b.children)
+    ...     #children_errors = [(k, [b._mapping.get(i) for i in v]) for k, v in
+    ...     #                   b.children.items() if len(v) != 1]
+    ...     #assert children_errors == [], repr(children_errors)
+    ...     # we'll check the buckets internally while we are here
+    ...     assert b.data.parent is None
+    ...     assert b.data.previous is None and b.data.next is None
+    ...     if isinstance(b.data, zc.blist.Index):
+    ...         checkIndex(b.data, b, [None])
+    ...     return True
+    ...


Property changes on: Sandbox/poster/zc.blist/trunk/src/zc/blist/README.txt
___________________________________________________________________
Name: svn:eol-style
   + native

Added: Sandbox/poster/zc.blist/trunk/src/zc/blist/TODO.txt
===================================================================
--- Sandbox/poster/zc.blist/trunk/src/zc/blist/TODO.txt	                        (rev 0)
+++ Sandbox/poster/zc.blist/trunk/src/zc/blist/TODO.txt	2008-05-01 03:53:08 UTC (rev 85946)
@@ -0,0 +1,3 @@
+- need much better tests
+
+- no tests at all for lazy ``copy`` right now, which is a delicate dance!


Property changes on: Sandbox/poster/zc.blist/trunk/src/zc/blist/TODO.txt
___________________________________________________________________
Name: svn:eol-style
   + native

Added: Sandbox/poster/zc.blist/trunk/src/zc/blist/__init__.py
===================================================================
--- Sandbox/poster/zc.blist/trunk/src/zc/blist/__init__.py	                        (rev 0)
+++ Sandbox/poster/zc.blist/trunk/src/zc/blist/__init__.py	2008-05-01 03:53:08 UTC (rev 85946)
@@ -0,0 +1,1009 @@
+##############################################################################
+#
+# Copyright (c) 2007-2008 Zope Corporation and Contributors.
+# All Rights Reserved.
+#
+# This software is subject to the provisions of the Zope Public License,
+# Version 2.1 (ZPL).  A copy of the ZPL should accompany this distribution.
+# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
+# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
+# FOR A PARTICULAR PURPOSE.
+#
+##############################################################################
+import sys
+import random
+
+import persistent
+import persistent.list
+import BTrees.OOBTree
+import BTrees.Length
+import rwproperty
+
+def method(f):
+    def wrapper(self, *args, **kwargs):
+        if self.shared:
+            raise RuntimeError('cannot mutate shared object')
+        return f(self, *args, **kwargs)
+    return wrapper
+
+class setproperty(rwproperty.rwproperty):
+
+    @staticmethod
+    def createProperty(func):
+        return property(None, method(func))
+
+    @staticmethod
+    def enhanceProperty(oldprop, func):
+        return property(oldprop.fget, method(func), oldprop.fdel)
+
+def supercall(name):
+    sys._getframe(1).f_locals[name] = method(
+        lambda self, *args, **kwargs: getattr(
+            super(self.__class__, self), name)(*args, **kwargs))
+
+def makeProperty(name, default=None):
+    protected = '_z_%s__' % name
+    def setprop(self, value):
+        if self.shared:
+            raise RuntimeError('cannot mutate shared object')
+        if value is not None and not isinstance(value, (int, long)):
+            raise TypeError(value)
+        setattr(self, protected, value)
+    sys._getframe(1).f_locals[name] = property(
+        lambda self: getattr(self, protected, default), setprop)
+
+# helpers
+
+def shift_sequence(l, count):
+    res = l[:count]
+    del l[:count]
+    return res
+
+# Bucket and Index extend this
+
+class Collections(persistent.Persistent):
+    # separate persistent object so a change does not necessitate rewriting
+    # bucket or index
+    def __init__(self, *collections):
+        self._collections = collections
+
+    def __iter__(self):
+        return iter(self._collections)
+
+    def add(self, collection):
+        if collection not in self._collections:
+            self._collections += (collection,)
+
+    def remove(self, collection):
+        res = []
+        found = False
+        for coll in self._collections:
+            if coll is not collection:
+                res.append(coll)
+            else:
+                assert not found
+                found = True
+        self._collections = tuple(res)
+
+    def __len__(self):
+        return len(self._collections)
+
+    def __getitem__(self, key):
+        return self._collections.__getitem__(key)
+
+class AbstractData(persistent.Persistent):
+    def __init__(self, collection, identifier=None, previous=None, next=None,
+                 parent=None):
+        self.collections = Collections(collection)
+        self.identifier = identifier
+        self.previous = previous
+        self.next = next
+        self.parent = parent
+
+    @property
+    def collection(self):
+        if len(self.collections) > 1:
+            raise ValueError('ambiguous')
+        return self.collections[0]
+
+    @property
+    def shared(self):
+        return len(self.collections) > 1
+    
+    makeProperty('identifier')
+    makeProperty('previous')
+    makeProperty('next')
+    makeProperty('parent')
+
+    # the other shared "interface" bits are `contained_len`, `clear`, `copy`,
+    # and `index`
+
+class Bucket(persistent.list.PersistentList, AbstractData):
+    """Buckets hold blocks of data from from the collection."""
+
+    def __init__(self, collection,
+                 identifier, previous=None, next=None, parent=None, vals=None):
+        AbstractData.__init__(
+            self, collection, identifier, previous, next, parent)
+        persistent.list.PersistentList.__init__(self, vals)
+
+    def __getslice__(self, i, j):
+        return self.data[i:j]
+
+    def contained_len(self, collection):
+        return len(self)
+
+    @method
+    def clear(self):
+        del self[:]
+
+    def copy(self, collection):
+        assert self.shared, 'only copy shared object'
+        self.collections.remove(collection)
+        return Bucket(
+            collection, self.identifier, self.previous, self.next,
+            self.parent, self)
+
+    @method
+    def balance(self, right):
+        len_left = len(self)
+        len_right = len(right)
+        move_index = (len_left + len_right) // 2
+        right = self.collection._mutable(right)
+        if len_left > len_right:
+            # move some right
+            moved = self[move_index:]
+            right[0:0] = moved
+            del self[move_index:]
+        else:
+            # move some left
+            move_index -= len_left
+            moved = right[:move_index]
+            self.extend(moved)
+            del right[:move_index]
+
+    @method
+    def rotate(self, right):
+        if len(self) + len(right) > self.collection.bucket_size:
+            self.balance(right)
+        else:
+            moved = right[:]
+            self.extend(moved)
+            right = self.collection._mutable(right)
+            del right[:]
+
+    @method
+    def rotateRight(self, right):
+        if len(self) + len(right) > self.collection.bucket_size:
+            self.balance(right)
+        else:
+            moved = self[:]
+            right = self.collection._mutable(right)
+            right[0:0] = moved
+            del self[:]
+        
+
+    supercall('__setitem__')
+    supercall('__delitem__')
+    supercall('__setslice__')
+    supercall('__delslice__')
+    supercall('__iadd__')
+    supercall('__imul__')
+    supercall('append')
+    supercall('insert')
+    supercall('pop')
+    supercall('remove')
+    supercall('reverse')
+    supercall('sort')
+    supercall('extend')
+
+class Index(BTrees.family32.II.Bucket, AbstractData):
+    """Indexes index buckets and sub-indexes."""
+
+    supercall('clear')
+    supercall('update')
+    supercall('__setitem__')
+    supercall('__delitem__')
+    supercall('setdefault')
+    supercall('pop')
+
+    def __init__(self, collection,
+                 identifier, previous=None, next=None, parent=None):
+        AbstractData.__init__(
+            self, collection, identifier, previous, next, parent)
+
+    def index(self, other):
+        for k, v in self.items():
+            if v == other:
+                return k
+        raise ValueError('value not found; likely programmer error')
+
+    def contained_len(self, collection):
+        val = self.maxKey()
+        return val + collection._mapping[self[val]].contained_len(collection)
+
+    @method
+    def balance(self, right):
+        len_left = len(self)
+        len_right = len(right)
+        move_index = (len_left + len_right) // 2
+        right = self.collection._mutable(right)
+        if len_left > len_right:
+            # move some right
+            items = list(self.items()[move_index:])
+            zero = items[0][0] # this will be index 0 on the right
+            offset = (items[-1][0] + # this is offset for current right values
+                      self.collection._mapping[items[-1][1]].contained_len(
+                        self.collection) - zero)
+            for k, o in reversed(right.items()):
+                right[offset+k] = o
+                del right[k]
+            for k, o in items:
+                right[k-zero] = o
+                del self[k]
+                self.collection._mutable(
+                    self.collection._mapping[o]).parent = right.identifier
+        else:
+            # move some left
+            move_index -= len_left
+            items = list(right.items()[:move_index])
+            offset = self.contained_len(self.collection)
+            for k, o in items:
+                self[offset+k] = o
+                del right[k]
+                self.collection._mutable(
+                    self.collection._mapping[o]).parent = self.identifier
+            offset = (items[-1][0] +
+                      self.collection._mapping[items[-1][1]].contained_len(
+                        self.collection))
+            for k, o in list(right.items()):
+                del right[k]
+                right[k-offset] = o
+
+    @method
+    def rotate(self, right):
+        if len(self) + len(right) > self.collection.index_size:
+            self.balance(right)
+        else:
+            offset = self.contained_len(self.collection)
+            for k, o in list(right.items()):
+                self[offset+k] = o
+                self.collection._mutable(
+                    self.collection._mapping[o]).parent = self.identifier
+            right = self.collection._mutable(right)
+            right.clear()
+
+    @method
+    def rotateRight(self, right):
+        if len(self) + len(right) > self.collection.index_size:
+            self.balance(right)
+        else:
+            offset = self.contained_len(self.collection)
+            right = self.collection._mutable(right)
+            for k, o in reversed(right.items()):
+                right[offset+k] = o
+                del right[k]
+            for k, o in self.items():
+                right[k] = o
+                self.collection._mutable(
+                    self.collection._mapping[o]).parent = right.identifier
+            self.clear()
+
+    def copy(self, collection):
+        assert self.shared, 'only copy shared object'
+        self.collections.remove(collection)
+        res = Index(collection, self.identifier, self.previous, self.next,
+                    self.parent)
+        res.update(self)
+        return res
+
+class BList(persistent.Persistent):
+
+    def __init__(self, vals=None,
+                 bucket_size=30, index_size=10, family=BTrees.family32):
+        self.bucket_size = bucket_size
+        self.index_size = index_size
+        self.family = family
+        self._mapping = self.family.IO.BTree()
+        self._top_index = 0
+        self._mapping[self._top_index] = Bucket(self, self._top_index)
+        if vals is not None:
+            self.extend(vals)
+
+    def copy(self):
+        res = self.__class__.__new__(self.__class__)
+        res.bucket_size = self.bucket_size
+        res.index_size = self.index_size
+        res.family = self.family
+        res._mapping = self.family.IO.BTree()
+        res._top_index = self._top_index
+        res._mapping.update(self._mapping)
+        for v in self._mapping.values():
+            v.collections.add(res)
+        return res
+
+    @property
+    def data(self):
+        return self._mapping[self._top_index]
+
+    # Read API
+
+    def __contains__(self, value):
+        # this potentially loads all buckets and items from ZODB.  Then again,
+        # standard list or tuple *will* load all items.
+        for item in self:
+            if value == item:
+                return True
+        return False
+
+    def __len__(self):
+        return self.data.contained_len(self)
+
+    def count(self, value):
+        # whee!  Let's load everything!
+        ct = 0
+        for item in self:
+            if value == item:
+                ct += 1
+        return ct
+
+    def _get_bucket(self, index):
+        bucket = self.data
+        ix = index
+        while not isinstance(bucket, Bucket):
+            key = bucket.maxKey(ix)
+            bucket = self._mapping[bucket[key]]
+            ix -= key
+        return bucket, ix
+
+    def iter(self, start=0):
+        length = len(self)
+        if start < 0:
+            start += length
+            if start < 0:
+                raise IndexError('list index out of range')
+        if length > start:
+            bucket, ix = self._get_bucket(start)
+            for v in bucket[ix:]:
+                yield v
+            bucket_ix = bucket.next
+            while bucket_ix is not None:
+                bucket = self._mapping[bucket_ix]
+                for v in bucket:
+                    yield v
+                bucket_ix = bucket.next
+
+    def iterReversed(self, start=-1):
+        length = len(self)
+        if start < 0:
+            start += length
+            if start < 0:
+                raise IndexError('list index out of range')
+        if length > start:
+            bucket, ix = self._get_bucket(start)
+            for v in reversed(bucket[:ix+1]):
+                yield v
+            bucket_ix = bucket.previous
+            while bucket_ix is not None:
+                bucket = self._mapping[bucket_ix]
+                for v in reversed(bucket):
+                    yield v
+                bucket_ix = bucket.previous
+
+    def iterSlice(self, start=0, stop=None, stride=None):
+        if isinstance(start, slice):
+            if stop is not None or stride is not None:
+                raise ValueError(
+                    'cannot pass slice with additional stop or stride')
+        else:
+            start = slice(start, stop, stride)
+        start, stop, stride = start.indices(len(self))
+        if stride == 1:
+            ix = start
+            i = self.iter(start)
+            while ix < stop:
+                yield i.next()
+                ix += 1
+        elif stride == -1:
+            ix = start
+            i = self.iterReversed(start)
+            while ix > stop:
+                yield i.next()
+                ix -= 1
+        else:
+            if stride < 0:
+                condition = lambda begin, end: begin > end
+            else:
+                condition = lambda begin, end: begin < end
+            ix = start
+            while condition(ix, stop):
+                bucket, i = self._get_bucket(ix)
+                yield bucket[i]
+                ix += stride
+
+    def __iter__(self):
+        return self.iter()
+
+    def index(self, value, start=0, stop=None):
+        for ct, item in enumerate(self.iterSlice(start, stop)):
+            if item == value:
+                return start + ct
+        raise ValueError('.index(x): x not in collection')
+
+    def __getitem__(self, index):
+        if isinstance(index, slice):
+            return self.iterSlice(index) # XXX return view?
+        length = len(self)
+        if index < 0:
+            index += length
+            if index < 0:
+                raise IndexError('list index out of range')
+        elif index >= length:
+            raise IndexError('list index out of range')
+        bucket, ix = self._get_bucket(index)
+        return bucket[ix]
+
+    # Write API
+    
+    # Everything relies on __setitem__ to reduce duplicated logic
+
+    def append(self, value):
+        self[len(self)] = value
+
+    def insert(self, index, value):
+        self[index:index] = (value,)
+
+    def __delitem__(self, index):
+        if not isinstance(index, slice):
+            if index < 0:
+                index += len(self)
+            if index < 0 or index >= len(self):
+                raise IndexError('list assignment index out of range')
+            index = slice(index, index+1)
+        elif index.step == 1:
+            index = slice(index.start, index.stop)
+        elif index.step is not None:
+            start, stop, stride = index.indices(len(self))
+            if stride < 0:
+                ix = range(start, stop, stride)
+            else:
+                ix = reversed(range(start, stop, stride))
+            for i in ix:
+                self.__setitem__(slice(i, i+1), ())
+            return
+        self.__setitem__(index, ())
+
+    def extend(self, iterable):
+        length = len(self)
+        self[length:length] = iterable
+
+    def __iadd__(self, iterable):
+        self.extend(iterable)
+        return self # in-place add must return self, not None
+
+    def pop(self, index=-1):
+        if index < 0:
+            # normalize negative indexes so the slice below removes the item
+            index += len(self)
+        res = self[index]
+        self[index:index+1] = ()
+        return res
+
+    def remove(self, item):
+        index = self.index(item)
+        self[index:index+1] = ()
+
+    def reverse(self):
+        self[:] = reversed(self)
+
+    def sort(self, cmpfunc=None):
+        vals = list(self)
+        vals.sort(cmpfunc)
+        self[:] = vals
+
+    # __setitem__ helpers
+
+    def _reindex(self, start_bucket, stop_bucket, recurse=False):
+        bucket = start_bucket
+        k = None
+        stopped = False
+        while bucket is not None and bucket.identifier != self._top_index:
+            stopped = stopped or bucket is stop_bucket
+            parent = self._mapping[bucket.parent]
+            if k is None:
+                k = parent.index(bucket.identifier)
+                if k == parent.minKey() and k > 0:
+                    parent = self._mutable(parent)
+                    del parent[k]
+                    k = 0
+                    parent[k] = bucket.identifier
+            v = bucket.contained_len(self)
+            try:
+                next = parent.minKey(k+1)
+            except ValueError:
+                k = None
+                if recurse:
+                    self._reindex(parent, self._mapping[stop_bucket.parent],
+                                  recurse)
+                if stopped:
+                    break
+            else:
+                k += v
+                if next != k:
+                    b = parent[next]
+                    parent = self._mutable(parent)
+                    del parent[next]
+                    parent[k] = b
+            bucket = bucket.next
+            if bucket is not None:
+                bucket = self._mapping[bucket]
+
+    def _mutable(self, bucket):
+        if bucket.shared:
+            bucket = bucket.copy(self)
+            self._mapping[bucket.identifier] = bucket
+        return bucket
+
+    _v_nextid = None
+
+    def _generateId(self):
+        # taken from zope.app.intid code
+        """Generate an id which is not yet taken.
+
+        This tries to allocate sequential ids so they fall into the
+        same BTree bucket, and randomizes if it stumbles upon a
+        used one.
+        """
+        while True:
+            if self._v_nextid is None:
+                self._v_nextid = random.randrange(
+                    self.family.minint, self.family.maxint)
+            uid = self._v_nextid
+            self._v_nextid += 1
+            if uid not in self._mapping:
+                return uid
+            self._v_nextid = None
+
+    # __setitem__ itself: the workhorse
+
+    def __setitem__(self, index, value):
+        length = len(self)
+        # To reduce the amount of duplicated code, everything is based on
+        # slices. Either you are replacing specific items (index is an integer
+        # less than len or a slice with an explicit step) or deleting/inserting
+        # ranges (index is an integer equal to len or a slice with an implicit
+        # step of 1).  We convert integer requests to slice requests here.
+        if not isinstance(index, slice):
+            value = (value,)
+            if index == length:
+                index = slice(length, length)
+            elif index > length:
+                raise IndexError('list assignment index out of range')
+            elif index == -1:
+                index = slice(length-1, length, 1) # we specify a step to use
+                # the second, "replace values" code path below, rather than
+                # the first "range" code path.
+            else:
+                index = slice(index, index+1, 1) # same reason as above for
+                # specifying the step.
+
+        start, stop, stride = index.indices(length)
+        if index.step is None:
+            # delete and/or insert range; bucket arrangement may change
+            value = list(value) # we actually do mutate this, so a list is
+            # intentional.  Even if the original value is a list, we don't want
+            # to mutate the original.
+            len_value = len(value)
+            if start == 0 and stop == length and stride == 1:
+                # shortcut: clear out everything
+                self._mapping.clear()
+                self._top_index = 0
+                start_bucket = self._mapping[
+                    self._top_index] = Bucket(self, self._top_index)
+                start_ix = 0
+            elif stop != start:
+                # we're supposed to delete
+                start_bucket, start_ix = self._get_bucket(start)
+                stop_bucket, stop_ix = self._get_bucket(stop)
+                bucket = start_bucket
+                ix = start_ix
+                while True:
+                    if bucket is stop_bucket:
+                        removed = bucket[ix:stop_ix]
+                        if removed:
+                            bucket = self._mutable(bucket)
+                            del bucket[ix:stop_ix]
+                        break
+                    removed = bucket[ix:]
+                    if removed:
+                        bucket = self._mutable(bucket)
+                        del bucket[ix:]
+                    bucket_ix = bucket.next
+                    if bucket_ix is None:
+                        break
+                    bucket = self._mapping[bucket_ix]
+                    ix = 0
+                bucket = start_bucket
+                ix = start_ix
+                # populate old buckets with new values, until we are out of
+                # new values or old buckets
+                while value:
+                    items = shift_sequence(
+                        value, self.bucket_size - len(bucket))
+                    bucket[ix:ix] = items
+                    if bucket is stop_bucket or not value:
+                        stop_ix = ix + len(items) # just after the new items
+                        break
+                    bucket = self._mapping[bucket.next]
+                    ix = 0
+
+                # we've deleted values, and may have replaced some,
+                # and now we need to see if we need to rearrange
+                # buckets because they are smaller than the fill ratio
+                # allows.  We do this even if we have more values to
+                # insert so that the insert code can begin from a sane
+                # state; this is an obvious possible optimization point,
+                # therefore (though other optimizations may be better choices).
+
+                # The algorithm has us first try to balance across
+                # siblings, and then clean up the parents.  Typically
+                # B+ tree algorithm descriptions go
+                # one-item-at-a-time, while we may have swaths of
+                # changes to which we need to adjust.
+                #
+                # Key adjustments are different than the standard B+
+                # tree story because this is a sequence, and our keys
+                # are indices that we need to adjust to accomodate the
+                # deletions.  This means siblings to all of our
+                # parents, walking up the tree.  The "swaths of
+                # changes" also makes this a bit tricky.
+
+                fill_maximum = self.bucket_size
+                fill_minimum = fill_maximum // 2
+
+                original_stop_bucket = stop_bucket
+
+                while start_bucket is not None:
+
+                    # We'll get the buckets rotated so that any
+                    # bucket that has members will be above the fill ratio
+                    # (unless it is the only bucket).
+                    #
+                    # `bucket` is the last bucket we might have put
+                    # anything in to; we'll want to look at it and the
+                    # `stop_bucket` (if different) to see if we need to
+                    # adjust.
+                    #
+                    # if bucket and stop_bucket are different and
+                    # stop_bucket is not empty and either are below the
+                    # fill_minimum...
+                        # if the combination is less than the fill_maximum,
+                        # put in bucket and empty stop_bucket
+                        # else redistribute across so both are above
+                        # fill_minimum
+
+                    len_bucket = len(bucket)
+                    len_stop = len(stop_bucket)
+                    if (bucket is not stop_bucket and
+                        len_stop and (
+                            len_bucket < fill_minimum or
+                            len_stop < fill_minimum)):
+                        bucket.rotate(stop_bucket)
+                        len_bucket = len(bucket)
+                        len_stop = len(stop_bucket)
+
+                    # if (bucket is stop_bucket or stop_bucket is empty)
+                    # and bucket.previous is None and stop_bucket.next is
+                    # None, shortcut: just make sure this is the top
+                    # bucket and break.
+
+                    if ((bucket is stop_bucket or not len_stop) and
+                        bucket.previous is None and stop_bucket.next is None):
+                        if self.data is not bucket:
+                            self._mapping.clear()
+                            self._mapping[bucket.identifier] = bucket
+                            self._top_index = bucket.identifier 
+                            bucket.parent = None
+                        else:
+                            assert bucket.parent is None
+                        bucket.next = None
+                        stop_bucket = None
+                        break
+
+                    # now these are the possible states:
+                    # - bucket is stop_bucket and is empty
+                    # - bucket is stop_bucket and is too small
+                    # - bucket is stop_bucket and is ok
+                    # - bucket and stop_bucket are both empty
+                    # - bucket is ok and stop_bucket is empty
+                    # - bucket is too small and stop_bucket is empty
+                    # - bucket is ok and stop_bucket is ok
+                    #
+                    # Therefore, 
+                    # - if the stop_bucket is ok or the bucket is empty,
+                    #   we're ok with this step, and can move on to
+                    #   adjusting the indexes and pointers.
+                    # - otherwise the bucket is too small, and there is
+                    #   another bucket to rotate with.  Find the bucket
+                    #   and adjust so that no non-empty buckets are
+                    #   beneath the fill_minimum.  Make sure to adjust the
+                    #   start_bucket or stop_bucket to include the altered
+                    #   bucket.
+
+                    if len_bucket < fill_minimum:
+                        previous = bucket.previous
+                        next = stop_bucket.next
+                        assert previous is not None or next is not None
+                        assert bucket is stop_bucket or not len_stop
+                        if next is not None:
+                            next = self._mapping[next]
+                        if (next is None or
+                            previous is not None and
+                            len(next) + len_bucket > fill_maximum):
+                            # work with previous
+                            previous = self._mapping[previous]
+                            previous.rotate(bucket)
+                            if bucket is start_bucket:
+                                bucket = start_bucket = previous
+                            if not bucket:
+                                bucket = previous
+                                assert bucket
+                        else:
+                            # work with next
+                            bucket.rotateRight(next)
+                            stop_bucket = next
+
+                    # OK, now we need to adjust pointers and get rid of
+                    # empty buckets.  We'll go level-by-level.
+
+                    reindex_start = start_bucket
+
+                    b = bucket
+                    while b is not None:
+                        next = b.next
+                        if next is not None:
+                            next = self._mapping[next]
+                        if not b: # it is empty
+                            parent = self._mapping[b.parent]
+                            ix = parent.index(b.identifier)
+                            del parent[ix]
+                            if b.previous is not None:
+                                self._mapping[b.previous].next = b.next
+                            if next is not None: # next defined at loop start
+                                next.previous = b.previous
+                            if b is reindex_start:
+                                reindex_start = next
+                            del self._mapping[b.identifier]
+                        if b is stop_bucket:
+                            break
+                        b = next
+
+                    self._reindex(reindex_start, stop_bucket)
+
+                    # now we get ready for the next round...
+
+                    start_bucket = start_bucket.parent
+                    if start_bucket is not None:
+                        start_bucket = self._mapping[start_bucket]
+                    stop_bucket = stop_bucket.parent
+                    if stop_bucket is not None:
+                        stop_bucket = self._mapping[stop_bucket]
+                    bucket = bucket.parent
+                    if bucket is not None:
+                        bucket = self._mapping[bucket]
+                    fill_maximum = self.index_size
+                    fill_minimum = fill_maximum // 2
+                    
+                assert stop_bucket is None
+
+                if not value:
+                    return # we're done; don't fall through to add story
+                else:
+                    # we've replaced old values with new, but there are
+                    # some more left.  we'll set things up so the
+                    # standard insert story should work for the remaining
+                    # values.
+                    start_bucket = original_stop_bucket
+                    start_ix = stop_ix
+                    # ...now continue with add story
+
+            else:
+                start_bucket, start_ix = self._get_bucket(start)
+            # this is the add story.
+
+            # So, we have a start_bucket and a start_ix: we're supposed
+            # to insert the values in i at start_ix in start_bucket.
+            if not value:
+                return
+            fill_maximum = self.bucket_size
+            fill_minimum = fill_maximum // 2
+
+            # Clean out the ones after start_ix in the start_bucket, if
+            # any.  
+
+            moved = start_bucket[start_ix:]
+            value.extend(moved)
+            start_bucket = self._mutable(start_bucket)
+            del start_bucket[start_ix:]
+            ix = start_ix
+            bucket = start_bucket
+            created = []
+
+            # Start filling at the ix.  Fill until we reached len
+            # or until i is empty.  Make new buckets, remembering them in
+            # a list, and fill them until i is empty, and then continue
+            # with the removed ones from the start_bucket.  If the last
+            # bucket is too small, merge or rotate as appropriate.
+
+            length = fill_maximum - len(bucket)
+            while value:
+                added = shift_sequence(value, length)
+                bucket.extend(added)
+                if value:
+                    old_bucket = bucket
+                    identifier = self._generateId()
+                    bucket = self._mapping[identifier] = Bucket(
+                        self, identifier,
+                        previous=old_bucket.identifier, next=old_bucket.next)
+                    old_bucket.next = bucket.identifier
+                    if bucket.next is not None:
+                        self._mapping[bucket.next].previous = bucket.identifier
+                    created.append(bucket)
+                    length = self.bucket_size
+
+            if (bucket.identifier != self._top_index and
+                len(bucket) < fill_minimum):
+                # this should only be able to happen when a previous bucket
+                # is already filled. It's simplest, then, to just split the
+                # contents of the previous bucket and this one--that way
+                # there's not any empty bucket to have to handle.
+                previous = self._mapping[bucket.previous]
+                assert len(previous) + len(bucket) >= 2 * fill_minimum
+                previous.balance(bucket)
+
+            # Now we have to insert any new buckets in the parents.  We
+            # again fill the parents, creating and remembering as
+            # necessary, and rotating at the end.  We keep on walking up
+            # until the list of new buckets is empty.  If we reach the top,
+            # we add a level at the top and continue.
+
+            if not created:
+                self._reindex(start_bucket, bucket)
+                return
+
+            value = created
+            fill_maximum = self.index_size
+            fill_minimum = fill_maximum // 2
+
+            while value:
+                if start_bucket.identifier == self._top_index: # the top
+                    assert start_bucket.parent is None
+                    self._top_index = identifier = self._generateId()
+                    parent = self._mapping[identifier] = Index(
+                        self, identifier, parent=None)
+                    parent[0] = start_bucket.identifier
+                    start_bucket.parent = parent.identifier
+                    start_ix = 0
+                    bucket = start_bucket = parent
+                else:
+                    parent = self._mapping[start_bucket.parent]
+                    start_ix = parent.index(start_bucket.identifier)
+                    bucket = start_bucket = parent
+                    value.extend(
+                        self._mapping[i] for i in
+                        start_bucket.values(start_ix, excludemin=True))
+                    for k in tuple(
+                        start_bucket.keys(start_ix, excludemin=True)):
+                        del start_bucket[k]
+                ix = start_ix + self._mapping[
+                    start_bucket[start_ix]].contained_len(self)
+                created = []
+
+                # Start filling at the ix. Fill until we reached len or
+                # until i is empty. Make new buckets, remembering them in a
+                # list, and fill them until i is empty, and then continue
+                # with the removed ones from the start_bucket. If the last
+                # bucket is too small, merge or rotate as appropriate.
+
+                length = fill_maximum - len(bucket)
+                while value:
+                    for o in shift_sequence(value, length):
+                        bucket[ix] = o.identifier
+                        o.parent = bucket.identifier
+                        ix += o.contained_len(self)
+                    # we don't necessarily need to fix parents--
+                    # we'll get to them above
+                    if value:
+                        identifier = self._generateId()
+                        bucket = self._mapping[identifier] = Index(
+                            self, identifier,
+                            previous=bucket.identifier, next=bucket.next)
+                        self._mapping[bucket.previous].next = identifier
+                        if bucket.next is not None:
+                            self._mapping[bucket.next].previous = identifier
+                        created.append(bucket)
+                        length = fill_maximum
+                        ix = 0
+
+                if (bucket.identifier != self._top_index and
+                    len(bucket) < fill_minimum):
+                    # this should only be able to happen when a previous
+                    # bucket is already filled. It's simplest, then, to
+                    # just split the contents of the previous bucket and
+                    # this one--that way there's not any empty bucket to
+                    # have to handle.
+                    assert (len(self._mapping[bucket.previous]) + len(bucket) >=
+                            2 * fill_minimum)
+                    self._mapping[bucket.previous].balance(bucket)
+                value = created
+            if start_bucket.identifier != self._top_index:
+                # we need to correct the indices of the parents.
+                self._reindex(start_bucket, bucket, recurse=True)
+
+        else:
+            # replace one set with a set of equal length
+            changed = []
+            index = start
+            removed = {}
+            problems = set()
+            added = {}
+            error = None
+            value_ct = 0
+            for v in value:
+                value_ct += 1
+                if index >= stop:
+                    error = ValueError(
+                        'attempt to assign sequence of at least size %d '
+                        'to extended slice of size %d' % (
+                            value_ct, (stop - start) / stride))
+                    break
+                bucket, ix = self._get_bucket(index)
+                old = bucket[ix]
+                if old in removed:
+                    removed[old].add(bucket)
+                else:
+                    removed[old] = set((bucket,))
+                if v in added:
+                    added[v].add(bucket.identifier)
+                else:
+                    added[v] = set((bucket.identifier,))
+                bucket[ix] = v
+                changed.append((bucket, ix, old))
+                index += stride
+            else:
+                if value_ct < (stop - start) / stride:
+                    error = ValueError(
+                            'attempt to assign sequence of size %d to '
+                            'extended slice of size %d' % (
+                            value_ct, (stop - start) / stride))
+            if not error and problems:
+                problems.difference_update(removed)
+                if problems:
+                    error = ValueError('item(s) already in collection',
+                        problems)
+            if error:
+                for bucket, ix, old in changed:
+                    bucket[ix] = old
+                raise error
+
+#    I want eq and ne but don't care much about the rest
+
+    def __eq__(self, other):
+        return isinstance(other, BList) and tuple(self) == tuple(other)
+
+    def __ne__(self, other):
+        return not self.__eq__(other)
+
+#     def __lt__(self, other):
+#         pass
+# 
+#     def __gt__(self, other):
+#         pass
+# 
+#     def __le__(self, other):
+#         pass
+# 
+#     def __ge__(self, other):
+#         pass
+
+#     def __add__(self, other):
+#         pass
+# 
+#     def __mul__(self, other):
+#         pass
+# 
+#     def __rmul__(self, other):
+#         pass

Added: Sandbox/poster/zc.blist/trunk/src/zc/blist/tests.py
===================================================================
--- Sandbox/poster/zc.blist/trunk/src/zc/blist/tests.py	                        (rev 0)
+++ Sandbox/poster/zc.blist/trunk/src/zc/blist/tests.py	2008-05-01 03:53:08 UTC (rev 85946)
@@ -0,0 +1,28 @@
+##############################################################################
+#
+# Copyright (c) 2007-2008 Zope Corporation and Contributors.
+# All Rights Reserved.
+#
+# This software is subject to the provisions of the Zope Public License,
+# Version 2.1 (ZPL).  A copy of the ZPL should accompany this distribution.
+# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
+# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
+# FOR A PARTICULAR PURPOSE.
+#
+##############################################################################
+import unittest
+
+from zope.testing import doctest
+
+def test_suite():
+    return unittest.TestSuite((
+        doctest.DocFileSuite(
+            'README.txt',
+            optionflags=doctest.INTERPRET_FOOTNOTES),
+        ))
+
+
+if __name__ == '__main__':
+    unittest.main(defaultTest='test_suite')


