[ZODB-Dev] cache not minimized at transaction boundaries?

Chris Withers chris at simplistix.co.uk
Thu Jan 26 13:52:21 EST 2006


Hi All,

This is with whatever ZODB ships with Zope 2.8.5...

I have a Stepper (zopectl run on steroids) job that deals with lots of 
big objects.

After processing each one, Stepper does a transaction.get().commit(). I 
thought this was enough to keep the object cache at a sane size; however, 
the job kept bombing out with MemoryErrors, and sure enough it was using 
2 or 3 gigs of memory when that happened.
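
For context, the loop is roughly this shape (process() and 
objects_to_process() are illustrative stand-ins, not the real Stepper 
code):

import transaction

for obj in objects_to_process():    # ~60,000 large persistent objects
    process(obj)                    # read/mutate the big object
    transaction.get().commit()      # commit after each one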

I fiddled about with the gc module and found that, sure enough, objects 
were being kept in memory. At a guess, I inserted something close to the 
following:

obj._p_jar.db().cacheMinimize()

...after every 5,000 objects were processed (there are 60,000 objects in 
total).
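
In loop form, the workaround amounts to something like this (same 
illustrative names as above; only the cacheMinimize() call is the actual 
line I added):

import transaction

count = 0
for obj in objects_to_process():
    process(obj)
    transaction.get().commit()
    count += 1
    if count % 5000 == 0:
        # explicitly flush the connection's object cache
        obj._p_jar.db().cacheMinimize()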

Lo and behold, memory usage became sane.

Why is this step necessary? I thought transaction.get().commit() every 
so often was enough to sort out the cache...

cheers,

Chris

-- 
Simplistix - Content Management, Zope & Python Consulting
            - http://www.simplistix.co.uk

