[Zope-dev] 60GB Data.fs?

Erik Enge erik@thingamy.net
Wed, 6 Jun 2001 10:12:57 +0200 (CEST)


[no cross-posting, please]

On Wed, 6 Jun 2001, Bjorn Stabell wrote:

> We're planning a Yahoo! Clubs like system that should scale to about
> 30,000 users.  Assuming about 3,000 groups and 20MB per group (group
> functionality includes photo albums), gives a database size of 60GB.
> Assuming on average 3,000 users per day, 20 page views per users,
> gives about 60,000 page views (not a lot, but if it's all dynamically
> generated?).
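The arithmetic in that estimate checks out; a quick back-of-the-envelope in Python (the peak-to-average factor is my own assumption, purely illustrative):

```python
# Back-of-the-envelope check of the quoted capacity estimate.
groups = 3_000
mb_per_group = 20
db_size_gb = groups * mb_per_group / 1_000   # 60.0 GB (decimal GB)

daily_users = 3_000
views_per_user = 20
daily_views = daily_users * views_per_user   # 60,000 page views/day

# Spread evenly that is under one view per second, but real traffic
# peaks several times higher, so size dynamic rendering for the peak,
# not the average.  The factor of 5 here is an assumed ratio.
peak_factor = 5
peak_views_per_sec = daily_views / 86_400 * peak_factor

print(db_size_gb, daily_views, round(peak_views_per_sec, 2))
```

So even with a generous peak factor you're looking at only a handful of dynamic renders per second; the database size, not the request rate, is the scary number.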

You're going to need some serious hardware for that.  You could do a lot
with that setup (ZEO, an RDBMS, distributed application programming),
but I don't have much experience to share there.  In a scenario where
each box (if you have several) has its own 60GB Data.fs, I'd be worried
about disk activity for one.  It seems to me (with my petty 1GB Data.fs)
that it's the disks, rather than ZODB itself, that slow things down.

> At this scale, how would ZODB hold up with respect to memory use and
> speed?  I've heard rumors that it loads an index into memory on
> start-up.

I'm running a 1GB Data.fs with CompressedStorage here, and start-up
takes probably about 3-5 minutes on a 1GHz machine with 1GB of RAM.  I
keep banging my head against it, but it just won't run faster.
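The rumor is right, for what it's worth: FileStorage keeps an in-memory index mapping every object id to its file offset, and after an unclean shutdown it rescans the whole Data.fs to rebuild it (a clean shutdown saves it to Data.fs.index).  A rough sketch of what that index might cost in RAM; the average object size and per-entry overhead below are guesses, not measurements:

```python
# Rough estimate of FileStorage's in-memory oid->offset index size.
# Both constants are assumptions for illustration, not measured values.
avg_object_size = 4 * 1024        # assume ~4 KB per stored object
bytes_per_index_entry = 100       # assumed dict-entry overhead

def index_ram_mb(datafs_gb):
    """Estimate index RAM (MB) for a Data.fs of the given size in GiB."""
    objects = datafs_gb * 1024**3 / avg_object_size
    return objects * bytes_per_index_entry / 1024**2

print(round(index_ram_mb(1)))     # my 1GB Data.fs: ~25 MB of index
print(round(index_ram_mb(60)))    # a 60GB Data.fs: ~1500 MB of index
```

Under those assumptions a 60GB Data.fs would want on the order of 1.5GB of RAM just for the index, before any object cache, which is worth budgeting for.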

Let us know how that project progresses, will you? :)