[ZODB-Dev] ZEO bandwidth requirements?

Jim Fulton jim@zope.com
Sat, 27 Oct 2001 12:16:43 -0400


"Barry A. Warsaw" wrote:
> 
> >>>>> "JF" == Jim Fulton <jim@zope.com> writes:
> 
>     JF> Oracle storage benefits a lot from compressing pickles. I
>     JF> suspect that the same *might* be true for the Berkeley
>     JF> storage.  It's tempting to send compressed data through ZEO to
>     JF> the storage, so that compression and decompression is done on
>     JF> the client, however, currently the Berkeley storage would have
>     JF> to decompress the pickle to get object references.
> 
> Right.  If we really wanted to explore this with Berkeley we'd have to
> add the object references to a table (which is something we talked
> about early on).
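
For concreteness, a rough (untested) sketch of the idea being quoted,
assuming zlib for the compression and ZODB's referencesf helper for
pulling oids out of a pickle; compress_record and references_from are
just illustrative names:

    import zlib
    # ZODB's helper that scans a pickle for the oids it references
    from ZODB.referencesf import referencesf

    def compress_record(pickle_data):
        # Client side: compress the pickle before it goes over the wire to ZEO.
        return zlib.compress(pickle_data)

    def references_from(stored_data):
        # Storage side: referencesf only understands real pickles, so a
        # storage that keeps compressed records has to decompress each one
        # before it can see what the record points to.
        return referencesf(zlib.decompress(stored_data))

A separate references table, as suggested above, would let the storage
skip that decompress step when all it needs is the references.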

Or make the BerkeleyStorage uncompress the pickles when getting the
references (it would still store compressed data, which might help
with the Berkeley cache requirements, though it might not). Given the
large cache that the Berkeley storage needs, I'm wary of adding
another table to the mix.
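
Roughly (and again untested), that variant might look like the
following; CompressedStoreMixin, _referenced, and _put_record are
made-up names standing in for the real Berkeley storage plumbing, and
the usual locking, version handling, and serial bookkeeping are left
out:

    import zlib
    from ZODB.referencesf import referencesf

    class CompressedStoreMixin:

        def store(self, oid, serial, data, version, transaction):
            # 'data' arrives already compressed by the ZEO client.
            # Decompress only long enough to record the outgoing
            # references, then store the compressed bytes as-is.
            for ref_oid in referencesf(zlib.decompress(data)):
                self._referenced(oid, ref_oid)      # hypothetical bookkeeping hook
            self._put_record(oid, serial, data, version)  # hypothetical record writer
            # A real store() would also return the new serial here.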

Jim

--
Jim Fulton           mailto:jim@zope.com       Python Powered!        
CTO                  (888) 344-4332            http://www.python.org  
Zope Corporation     http://www.zope.com       http://www.zope.org