[Zope] BoboPOS: Large Dictionary questions

Brian Kelley kelley@bioreason.com
Fri, 19 Feb 1999 14:07:03 -0700


Hello all.
Prompted by a recent comp.lang.python post, "Large Dictionaries", I have
been looking at BoboPOS as a potential solution.

I have been trying to muck through the available documentation to arrive
at some simple, minimal-code versions of basic tasks (a sort of
mini-howto for large dictionaries).

Basic Task #1
Store/Retrieve simple picklable objects

from BoboPOS import PickleDictionary
from BoboPOS import Persistence

pd = PickleDictionary("/tmp/pickle_dict")

# begin a transaction
get_transaction().begin()
pd[1] = "This is a test for entry 1"
pd[2] = "This is a test for entry 2"
# commit the changes to the database
get_transaction().commit()
del pd

# did this work?  (reopen with the same path we wrote to)
pd = PickleDictionary("/tmp/pickle_dict")
print pd[1]
print pd[2]
del pd

The above code works, though it is perhaps not minimal.

The problem I am having is that the above implementation never trims its
cache: every object I store seems to stay in memory.

Suppose the transaction is in a loop:

for i in range(10000):
    pd[i] = "Entry for key int(%i)"%i

The memory used by this implementation ends up the same as the size of
the "pickle_dict" file.
I have looked through the source code and noticed that what I might
actually want to use is a Transactional Pickle Dictionary, which
apparently does call the internal jar.cache.minimize().  What confuses
me is that cache.minimize() appears to be called only from the abort()
function.
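To make the cache.minimize() idea concrete, here is a toy write-through dictionary, emphatically not BoboPOS code, that clears ("minimizes") its in-memory cache whenever it passes a size limit. Memory stays bounded while every entry remains retrievable from disk; this is the behaviour I would like the pickle dictionary to have:

```python
import os
import pickle
import tempfile

class BoundedCacheDict:
    """Toy persistent dict: every write is appended to a pickle log on
    disk; reads come from a cache that is cleared ("minimized") when it
    grows past cache_limit.  Illustration only -- not BoboPOS."""

    def __init__(self, path, cache_limit=1000):
        self.path = path
        self.cache_limit = cache_limit
        self.cache = {}
        open(self.path, "ab").close()   # make sure the log file exists

    def __setitem__(self, key, value):
        with open(self.path, "ab") as f:
            pickle.dump((key, value), f)     # write through to disk
        self.cache[key] = value
        if len(self.cache) > self.cache_limit:
            self.cache.clear()               # the "minimize" step

    def __getitem__(self, key):
        if key in self.cache:
            return self.cache[key]
        # Cache miss: scan the log (slow, but memory stays bounded).
        value, found = None, False
        with open(self.path, "rb") as f:
            while True:
                try:
                    k, v = pickle.load(f)
                except EOFError:
                    break
                if k == key:
                    value, found = v, True   # last write wins
        if not found:
            raise KeyError(key)
        self.cache[key] = value
        return value

path = os.path.join(tempfile.mkdtemp(), "toy_dict")
d = BoundedCacheDict(path, cache_limit=100)
for i in range(1000):
    d[i] = "Entry for key %d" % i

assert len(d.cache) <= 101               # memory stays bounded
assert d[5] == "Entry for key 5"         # evicted entry still on disk
```

A real implementation would evict selectively (LRU) rather than clearing wholesale, and would tie the flush to transaction boundaries, but the shape of the problem is the same.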

The question is:
How do I make large pickle dictionaries cache?

Of course, this particular thread might be put to good use in a how-to.
If such a beast already exists, please point me in the right direction.
In any case, I will probably post the results of this query to
comp.lang.python as a service to the community and to promote Zope.

Thanks.

--
Brian Kelley          w 505 995-8188
Bioreason, Inc        f 505 995-8186
309 Johnson Av
Santa Fe, NM 87501    kelley@bioreason.com