[ZODB-Dev] Re: From memory Problems to Disk Problems :-(

Barry A. Warsaw barry@zope.com
Mon, 5 Nov 2001 12:03:31 -0500


>>>>> "CW" == Chris Withers <chrisw@nipltd.com> writes:

    CW> Well, moving from FileStorage to BerkeleyStorage hasn't solved
    CW> the problem, but it has made it different :-S

    CW> The import script only uses 100MB-odd of RAM now, but
    CW> something the BDB tables are doing is causing the cache
    CW> to jump up to 536MB :-(

Which cache are you talking about?

    CW> How can indexing objects cause this to happen?  What should I
    CW> be doing?  I'm currently indexing the objects in batches of
    CW> 50 before committing; each commit takes roughly 25 minutes
    CW> (!!!), so I'm getting a little concerned.

Have you read the tuning pages for BerkeleyDB?  What have you tried?
Remember that BerkeleyDB's own default cache size is 256KB, which is
going to be too small for any real Berkeley-based storage.
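
For example, one way to raise the cache size is to drop a DB_CONFIG
file into the storage's environment directory before opening it.
This is only a sketch -- the 128MB figure below is an arbitrary
assumption, not a recommendation -- but the line would look like:

    set_cachesize 0 134217728 1

The arguments are <gbytes> <bytes> <ncaches>, so that's 0GB plus
134217728 bytes in a single cache region; check the Sleepycat tuning
docs for the exact syntax your version supports.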

    CW> How can I check how much data is being written to the ODB?
    CW> What should I be looking to optimise with for BDB? Lots of
    CW> small objects or fewer big objects?  I'm only using the BTrees
    CW> module here and I'm getting to a point of despair with using
    CW> it...

I'm not sure.  Sleepycat claims that BerkeleyDB handles big objects
just fine.  I suppose I'd err on the side of lots of smaller commits.
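
If you want to watch what BerkeleyDB itself is doing, Sleepycat ships
a db_stat utility; e.g. "db_stat -m -h <envdir>" dumps the memory
pool (cache) statistics for the environment.  The exact flags depend
on your BerkeleyDB version, so treat that as a pointer rather than a
recipe.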
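
Something along these lines is what I have in mind.  It's strictly a
sketch: the environment path, the 'catalog' object, and the
index_object() call are stand-ins for whatever your import script
actually does.  The point is committing every N objects and trimming
the pickle cache between batches:

    # Rough sketch only: the env path, 'catalog', and index_object()
    # are assumptions standing in for your real setup, and the import
    # assumes the bsddb3Storage package layout of the day.
    import ZODB
    from bsddb3Storage.Full import Full

    def index_in_batches(objects, batch=50):
        storage = Full('/var/zodb/env')       # hypothetical path
        db = ZODB.DB(storage)
        conn = db.open()
        catalog = conn.root()['catalog']      # hypothetical object
        for i in range(len(objects)):
            catalog.index_object(objects[i])  # placeholder call
            if (i + 1) % batch == 0:
                get_transaction().commit()    # ZODB 3 spelling
                conn.cacheMinimize()          # shrink the pickle cache
        get_transaction().commit()            # commit the final batch
        conn.close()
        db.close()

Smaller batches trade a little commit overhead for a lot of memory
headroom.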

-Barry