[ZODB-Dev] Packing BIG fat Data.fs

Chris Withers chrisw@nipltd.com
Thu, 30 Aug 2001 20:39:10 +0100


Jim Fulton wrote:
> 
> > How much RAM per MB of Data.fs would you expect a pack to use?
> 
> It's not RAM per MB, it's RAM per object. I'd say that you need about
> 300 bytes per object in the database, this is due to some in-memory
> indexes that FileStorage uses. 

Ah, now that could explain it, since this Data.fs had bucket loads of objects or,
to re-arrange that, loads of bucket objects ;-) Of the 8GB I was trying to pack,
probably 6GB was ZCatalog indexes...
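To put some rough numbers on that (300 bytes/object is Jim's figure above; the
20 million object count is pure guesswork on my part):

    bytes_per_object = 300            # Jim's ballpark for FileStorage's in-memory index
    n_objects = 20 * 1000 * 1000      # guess: 20 million small objects (catalog buckets etc.)
    ram_mb = bytes_per_object * n_objects / (1024.0 * 1024.0)
    # ~5722 MB -- on the same order as the catalog data itself

If that guess is anywhere near right, the pack alone would want several GB of
RAM, which would square with what I saw.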

> The Berkeley database Full storage
> that Barry Warsaw put together doesn't use in-memory indexes, and
> should use a lot less memory.

Cool. As I hinted at in another post, I'm fuzzy on why it was felt there was a
need to involve an external database engine in Zope's soon-to-be preferred
storage. Is Berkeley an RDBMS? (sorry for my ignorance) Are the reasons for the
choice documented anywhere?

> Now, both storages load one record at a time into
> memory, so if you had a *really* big record, you could use a lot
> of memory, 

Well, if the import of a 1.5GB .zexp counts as a single record, then yeah, I'd
say I had a really big record. 5 or 6 of them, to be precise ;-)

I'm still curious as to why an import which raised an exception (as the first 4
or 5 did) would result in the Data.fs growing so much. I thought aborted
transactions were either chopped off the end of the Data.fs or never added in
the first place?

> but then it's hard to believe you had enough memory
> to write the record in the first place.

...it took about 6-8 hrs and ended up with some pretty hot hard disks ;-)

cheers,

Chris