[ZODB-Dev] Packing BIG fat Data.fs

Jim Fulton jim@zope.com
Mon, 13 Aug 2001 11:43:03 -0400


Jeremy Hylton wrote:
> 
> >>>>> "CW" == Chris Withers <chrisw@nipltd.com> writes:
> 
>   CW> Hi. How do you guys suggest I go about packing a Data.fs that
>   CW> has ballooned to over 8GB? (thanks to the dodgy import problems
>   CW> I mentioned here and on zope-dev)
> 
>   CW> I tried doing a simple pack on a 1GHz processor machine with
>   CW> 1GB RAM. It quickly became memory bound and I expect the disk
>   CW> was happily thrashing away...
> 
>   CW> Is this to be expected? If so, what should I do about it?
> 
> Here's an off-the-wall suggestion: Can you export the Data.fs into a
> Packless bsddb3Storage?  Then export it back to a FileStorage?

That would lose historical data, which might be bad.

Alternatively, you could use the iterator interface
(see BaseStorage.copyTransactionsFrom) to move the transaction data
into the Full Berkeley Database storage and pack that. The Full storage
should use less memory for packing. You could then use the same
interface to move the data back.
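
For concreteness, here's a rough sketch of that copy-pack-copy dance.
The copyTransactionsFrom method and FileStorage.pack signature are real,
but the bsddb3Storage import path and the Full() constructor argument
below are assumptions about the current layout, so check them against
your installation:

    # Sketch only: the Full import path and its argument are guesses.
    import time
    from ZODB.FileStorage import FileStorage
    from ZODB.referencesf import referencesf
    from bsddb3Storage.Full import Full      # assumed module path

    # Copy every transaction, history included, out of the fat FileStorage.
    src = FileStorage('Data.fs', read_only=1)
    full = Full('Data-full')                 # assumed: a storage name argument
    full.copyTransactionsFrom(src)
    src.close()

    # Pack the Berkeley storage; pack() takes a pack time and the
    # reference-finding function, the same as FileStorage.pack.
    full.pack(time.time(), referencesf)

    # Copy the packed transactions back into a fresh FileStorage.
    dst = FileStorage('Data-packed.fs')
    dst.copyTransactionsFrom(full)
    full.close()
    dst.close()

Because copyTransactionsFrom replays whole transactions, the round trip
keeps the undo/history information that an export/import would lose.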

Jim

--
Jim Fulton           mailto:jim@digicool.com   Python Powered!        
Technical Director   (888) 344-4332            http://www.python.org  
Zope Corporation     http://www.zope.com       http://www.zope.org