[ZODB-Dev] cache not minimized at transaction boundaries?

Chris Withers chris at simplistix.co.uk
Mon Jan 30 16:40:12 EST 2006


Hi Tim,

Tim Peters wrote:
> Do:
> 
>     import ZODB
>     print ZODB.__version__
> 
> to find out.

Good to know, thanks...

>>I have a Stepper (zopectl run on steroids) job that deals with lots of
>>big objects.
> 
> Can you quantify this?

60,000 File objects on the order of 2 MB each.

> It does not do cacheMinimize().  It tries to reduce the memory cache to the
> target number of objects specified for that cache, which is not at all the
> same as cache minimization (which latter shoots for a target size of 0).
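A toy model of that distinction may help. The class and method names below are hypothetical, not ZODB internals; it only illustrates the difference between pruning a cache down to a target object count (what happens at transaction boundaries) and minimizing it (a target of 0):

```python
# Toy illustration: count-targeted pruning vs. full minimization.
# Names are invented for this sketch, not taken from ZODB's source.
from collections import OrderedDict

class ToyCache:
    def __init__(self, target_count):
        self.target_count = target_count
        self._objects = OrderedDict()  # oid -> object, oldest first

    def store(self, oid, obj):
        self._objects[oid] = obj
        self._objects.move_to_end(oid)  # mark as most recently used

    def gc(self):
        """Evict least-recently-used objects down to target_count."""
        while len(self._objects) > self.target_count:
            self._objects.popitem(last=False)

    def minimize(self):
        """Evict everything -- i.e. a target size of 0."""
        self._objects.clear()

cache = ToyCache(target_count=3)
for oid in range(10):
    cache.store(oid, object())

cache.gc()
assert len(cache._objects) == 3   # pruned to target, not emptied

cache.minimize()
assert len(cache._objects) == 0   # minimization shoots for zero
```

The point of the sketch: after a normal transaction-boundary sweep the cache can still legitimately hold its full target number of objects, which is why RAM use scales with the product described below.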
> Whether that's "sane" or not depends on the product of:
> 
>     the cache's target number of objects
> 
> times:
> 
>     "the average" byte size of an object

Ah, that'll do it; I wondered why it was only this step that was 
hurting. My guess is that our cache size settings, combined with lots 
of max-sized PData objects, lead to the RAM blowup...
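A back-of-the-envelope check of that product makes the blowup plausible. The 400-object target below is an assumed cache setting for illustration, not a figure from this thread; the ~2 MB average comes from the File objects mentioned above:

```python
# Rough worst-case RAM held by one ZODB connection cache:
#   target number of cached objects * average object size.
target_objects = 400                  # assumed cache target; adjust to taste
avg_object_bytes = 2 * 1024 * 1024    # ~2 MB per object, per the thread

worst_case_bytes = target_objects * avg_object_bytes
print(worst_case_bytes / 2**20, "MB")   # -> 800.0 MB
```

With objects this large, even a modest object-count target translates into hundreds of megabytes of live cache, which is exactly the failure mode being discussed.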

...oh well, if only the ZODB cache was RAM-usage-based rather than 
object-count based ;-)

thanks for the info!

Chris

-- 
Simplistix - Content Management, Zope & Python Consulting
            - http://www.simplistix.co.uk
