[ZODB-Dev] OverflowError (ZEO ClientCache)

Matthew Kozloski matthew.kozloski@strus.com
Fri, 21 Mar 2003 15:29:31 -0500


Thanks for getting back to me.  I thought it was funny too - the
UpdateSupport product runs a Python script against an imported structure
(which is almost 2GB now).  So, as part of its routine, it creates files
greater than 2GB (the data for that single object really does grow that
large).

This works fine outside of ZEO - if I run against a plain-vanilla Data.fs,
I can handle these files with no problem.

I am going to play with the ClientCache.py file.  My Python is compiled
with large-file support, and I currently have several large (greater than
4GB) data stores in ZEO.
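
Just so I'm clear on the failure mode before I touch anything: on a 32-bit
build, plain int arithmetic tops out at sys.maxint (2**31 - 1), so a byte
counter crossing 2GB overflows.  Here is a minimal sketch of the kind of
change I have in mind (the function and names are mine, not the actual
ClientCache code):

    import sys

    def check_size(current_size, added):
        # On Python 2.1, plain int addition past sys.maxint raises
        # "OverflowError: integer addition" -- the same error as in the
        # traceback quoted below.  Coercing one operand to long keeps
        # the running total a long.
        return long(current_size) + added

    print repr(check_size(sys.maxint, 1))   # 2147483648L, no OverflowError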

How does Python 2.2.2 run with Zope/ZEO (and the Zope products)?  I am not
really up to speed on why Zope runs on 2.1.3.
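
Part of why I ask: if I follow Guido's point below, 2.2 promotes an
overflowing int to a long automatically, which would make the checkSize
arithmetic a non-issue.  A quick check I plan to run on both interpreters
(illustrative only):

    import sys

    try:
        x = sys.maxint + 1
    except OverflowError:
        # Python 2.1.3 lands here: "OverflowError: integer addition"
        print "int addition overflows on this interpreter"
    else:
        # Python 2.2.2 lands here: the result is quietly a long
        print "int addition promotes to long:", repr(x)   # 2147483648L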


Thanks - Matt
----- Original Message -----
From: "Guido van Rossum" <guido@python.org>
To: "Kozloski, Matthew" <matthew.kozloski@strus.com>
Cc: <zodb-dev@zope.org>
Sent: Friday, March 21, 2003 3:12 PM
Subject: Re: [ZODB-Dev] OverflowError (ZEO ClientCache)


> > Here is my traceback:
> >
> > 2003-03-11T13:55:29 PANIC(300) ZODB A storage error occurred in the last
> > phase of a two-phase commit.  This shouldn't happen.
> > Traceback (innermost last):
> >   File /usr/local/Zope-2.6.1/lib/python/ZODB/Transaction.py, line 356, in _finish_one
> >   File /usr/local/Zope-2.6.1/lib/python/ZODB/Connection.py, line 692, in tpc_finish
> >   File /usr/local/Zope-2.6.1/lib/python/ZEO/ClientStorage.py, line 681, in tpc_finish
> >     (Object: thd@localhost:9900)
> >   File /usr/local/Zope-2.6.1/lib/python/ZEO/ClientStorage.py, line 691, in _update_cache
> >     (Object: thd@localhost:9900)
> >   File /usr/local/Zope-2.6.1/lib/python/ZEO/ClientCache.py, line 479, in checkSize
> > OverflowError: integer addition
> >
> > This seems to happen when my cache file approaches 2GB.  Now, I never
> > changed any of the ClientCache settings, so I am unsure why the cache
> > reaches that level.  It happens when I update a large data set using the
> > UpdateSupport product.
>
> If your cache file actually grows to (close to) 2GB with the default
> settings, you're in trouble -- that can only happen when the data for
> a single object is that large.  Is that the case?  I don't know what
> UpdateSupport is -- does it have a way to break down object size?
>
> > Any thoughts?  I think ClientCache is missing an 'L' somewhere (a long
> > suffix on one of the size literals).
>
> Yeah, *and* your Python should be compiled for large files of course.
>
> I'd expect that if you really can't avoid the cache growing this big,
> you'd have to add conversion to longs in a number of places -- or you
> could switch to Python 2.2.2, which automatically switches to longs
> when necessary, rather than raising OverflowError.
>
> --Guido van Rossum (home page: http://www.python.org/~guido/)
>