[ZODB-Dev] Packing BIG fat Data.fs

Hannu Krosing hannu@tm.ee
Fri, 31 Aug 2001 11:56:51 +0200


Chris Withers wrote:
> 
> Jim Fulton wrote:
> >
> > > How much RAM per MB of Data.fs would you expect a pack to use?
> >
> > It's not RAM per MB, it's RAM per object. I'd say that you need about
> > 300 bytes per object in the database; this is due to some in-memory
> > indexes that FileStorage uses.
> 
> Ah, now that could explain it since this Data.fs had bucket loads of object, or,
> to re-arrange that, loads of bucket objects ;-) Of the 8GB I was trying to pack,
> probably 6GB of it was ZCatalog indexes...
> 
> > The Berkeley database Full storage
> > that Barry Warsaw put together doesn't use in-memory indexes, and
> > should use a lot less memory.
> 
> Cool. As I hinted at in another post, I'm fuzzy on why it was felt there was a
> need to involve an external database engine in Zope's soon-to-be preferred
> storage? Is Berkeley an RDBMS? (sorry for my ignorance)
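Jim's ~300-bytes-per-object figure makes it easy to ballpark the RAM a pack
will need. A back-of-the-envelope sketch (the 1 KB average object size is an
assumption for illustration, not a measurement):

```python
# Rough pack-memory estimate from the ~300 bytes/object figure in this thread.
BYTES_PER_OBJECT_INDEX = 300       # in-memory FileStorage index cost per object
AVG_OBJECT_SIZE = 1024             # ASSUMED average stored object size on disk

datafs_bytes = 8 * 1024**3         # an 8 GB Data.fs, as in Chris's case
n_objects = datafs_bytes // AVG_OBJECT_SIZE
index_ram = n_objects * BYTES_PER_OBJECT_INDEX

print(f"~{n_objects:,} objects -> ~{index_ram / 1024**3:.2f} GB of index RAM")
```

With these assumptions an 8 GB Data.fs holds roughly 8 million objects and the
in-memory index alone needs a couple of gigabytes, which fits the pain Chris
describes; a catalog-heavy database (many small buckets) pushes the object
count, and hence the RAM, much higher.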

Berkeley DB is a liftout of early PostgreSQL indexing code, for doing
things that don't need a full RDBMS :)

It is not an RDBMS, but a fast system for retrieving data by its key.
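That key/value style of access can be sketched in a few lines; Python's
stdlib dbm module is used here purely for illustration (Berkeley DB is one of
the engines it can sit on top of, depending on the build):

```python
# Minimal key/value storage sketch: store and fetch a value by key, no SQL.
import dbm
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo")
with dbm.open(path, "c") as db:            # "c" creates the file if needed
    db[b"oid-0001"] = b"pickled object state"   # store under a key
    value = db[b"oid-0001"]                     # fast lookup by that key

print(value)
```

The storage layer only ever needs "give me the record for this key", which is
exactly the level of functionality a ZODB storage wants.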

> Are the reasons for the choice documented anywhere?

I don't know about documentation, but to me it seems the best choice:
it has just the right level of functionality, lots of field testing
(even Netscape browsers use it for storage ;), and good speed.

---------------
Hannu