[ZODB-Dev] large C extension objects / MemoryError

Andrew Dalke <dalke@dalkescientific.com>
Mon, 29 Oct 2001 02:31:44 -0700


Me:
>I guess the next step is to try a FileStorage to see if that does
>anything different.  (It shouldn't, IMO.)  I think if I keep
>(re)packing the data I can keep the file under the 2GB limit.

It turns out that the test set, when packed, needs only 182MB for
the FileStorage, plus 3.5MB for the .index file and 1MB for the
.tmp file.  I don't know if my 2GB problem came from a larger test
data set or if pack really had that much overhead.  (I doubt the
latter.)
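
For reference, the setup is roughly this ('Data.fs' is a stand-in
path; the pack-to-now call is what got the file down to 182MB):

    import ZODB
    from ZODB.FileStorage import FileStorage

    # Open the storage and wrap it in a database.
    storage = FileStorage('Data.fs')
    db = ZODB.DB(storage)

    # Pack to the current time, discarding old object revisions.
    db.pack()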

In any case, I still get the same error -- scanning through all
the primary entries in the ZODB causes the system to run out of
memory.  I have 1 GB of RAM, and I can watch the process size grow
and grow until it hits the limit and halts with a MemoryError.
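
The scan itself is essentially just this (simplified;
'primary_entries' and process() are stand-ins for my actual root
key and per-entry work):

    conn = db.open()
    root = conn.root()
    entries = root['primary_entries']

    # Each access unghosts an object and pulls it, plus whatever
    # subobjects it references, into the connection's pickle cache.
    for key in entries.keys():
        entry = entries[key]
        process(entry)      # read-only; nothing is modified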

In both FileStorage and bsddb3Storage, I can watch cacheSize()
grow by about 6 to 9 elements after each retrieval.  (The objects
don't all have the same number of fields in them; I assume the
difference in the growth of the number of cached objects comes
from that.)
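
Those numbers come from polling the DB between retrievals, roughly:

    # Total number of non-ghost objects in the connections' caches.
    print db.cacheSize()

    # Per-connection breakdown, if more detail is useful.
    print db.cacheDetailSize()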

I still suspect the cache isn't being maintained properly.
At what point should the cacheSize() value decrease?

I don't know what else to do to test my hypothesis.  Any
more suggestions?  Possible workarounds?
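
One workaround I may try, using the same stand-in names as above
(untested, and passing a smaller cache_size argument to ZODB.DB
might do the same job):

    i = 0
    for key in entries.keys():
        entry = entries[key]
        process(entry)
        entry._p_deactivate()       # turn the entry back into a ghost
        i = i + 1
        if i % 100 == 0:
            # Ask the cache to shed everything it can.
            conn.cacheMinimize()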

                    Andrew
                    dalke@dalkescientific.com