[ZODB-Dev] OverflowError (ZEO ClientCache)

Guido van Rossum guido@python.org
Fri, 21 Mar 2003 15:12:03 -0500


> Here is my traceback:
> 
> 2003-03-11T13:55:29 PANIC(300) ZODB A storage error occurred in the last
> phase of a two-phase commit.  This shouldn't happen.
> Traceback (innermost last):
>   File /usr/local/Zope-2.6.1/lib/python/ZODB/Transaction.py, line 356, in
> _finish_one
>   File /usr/local/Zope-2.6.1/lib/python/ZODB/Connection.py, line 692, in
> tpc_finish
>   File /usr/local/Zope-2.6.1/lib/python/ZEO/ClientStorage.py, line 681, in
> tpc_finish
>     (Object: thd@localhost:9900)
>   File /usr/local/Zope-2.6.1/lib/python/ZEO/ClientStorage.py, line 691, in
> _update_cache
>     (Object: thd@localhost:9900)
>   File /usr/local/Zope-2.6.1/lib/python/ZEO/ClientCache.py, line 479, in
> checkSize
> OverflowError: integer addition
> 
> This seems to happen when my cache file approaches 2GB.  Now, I never
> changed any of the ClientCache settings, so I am unsure why the cache
> reaches that level.  It happens when I update a large data set using the
> UpdateSupport product.

If your cache file actually grows to (close to) 2GB with the default
settings, you're in trouble -- that can only happen when the data for
a single object is that large.  Is that the case?  I don't know what
UpdateSupport is -- does it have a way to break down object size?
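
If you want to check for yourself, here is a rough, untested sketch
that scans the server's Data.fs for the largest stored pickle.  It
assumes the FileIterator interface that fsdump uses in ZODB 3.1, and
the path is just a placeholder, so adjust for your setup:

    from ZODB.FileStorage import FileIterator

    def largest_record(path):
        # Walk every transaction and every data record in the storage
        # file, remembering the biggest pickle seen so far.
        biggest_oid, biggest_size = None, 0
        for txn in FileIterator(path):
            for rec in txn:
                size = len(rec.data or '')
                if size > biggest_size:
                    biggest_oid, biggest_size = rec.oid, size
        return biggest_oid, biggest_size

    oid, size = largest_record('/path/to/Data.fs')
    print 'largest record: oid %r, %d bytes' % (oid, size)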

> Any thoughts?  I think ClientCache is missing an 'L' somewhere
> (a long literal for the size).

Yeah, *and* your Python should be compiled with large file support, of
course.
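
One rough way to check the large-file part from a Python prompt is to
try seeking past the 2GB mark in a scratch file -- a quick sketch, with
the temp file name made up:

    import os

    TWO_GB = 2147483648L   # 2**31, written as a long literal
    path = '/tmp/largefile-test'

    f = open(path, 'wb')
    try:
        # Without large file support, offsets beyond 2**31-1 are
        # rejected with an OverflowError or IOError.
        try:
            f.seek(TWO_GB)
            print 'large file support looks OK'
        except (IOError, OverflowError):
            print 'this Python build lacks large file support'
    finally:
        f.close()
        os.remove(path)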

I'd expect that if you really can't avoid the cache growing this big,
you'd have to add conversions to long in quite a number of places --
or you could switch to Python 2.2.2, which automatically switches to
longs when necessary instead of raising OverflowError.
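
For what it's worth, the difference is easy to see at the interpreter
prompt.  On a 32-bit Python 2.1 the plain-int addition below fails
exactly the way your traceback does, while making one operand a long
(the 'L' fix) keeps working:

    import sys

    print sys.maxint          # 2147483647 on a 32-bit build
    print sys.maxint + 1L     # long operand, so the result is 2147483648L

    try:
        # Plain ints on 2.1; on 2.2.2 this just prints a long instead.
        print sys.maxint + 1
    except OverflowError:
        print 'integer addition overflowed, as in the ClientCache traceback'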

--Guido van Rossum (home page: http://www.python.org/~guido/)