[ZODB-Dev] URGENT: ZODB down - Important Software Application at CERN

Pedro Ferreira jose.pedro.ferreira at cern.ch
Tue May 26 05:17:02 EDT 2009


Tres Seaver wrote:
> Pedro Ferreira wrote:
>
> > Thanks a lot for your help. In fact, it was a matter of increasing the
> > maximum recursion limit.
> > There's still an unsolved issue, though. Each time we try to recover a
> > backup using repozo, we get a CRC error. Is this normal? Has it happened
> > to anyone?
>
> I don't recall anything like that.  Can you provide a traceback?
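(As an aside, the recursion limit change mentioned above was nothing more
than a sys.setrecursionlimit() call at startup; a minimal sketch, where
10000 is only an illustrative value, not necessarily what we run with:)

    import sys

    # The default limit (1000 on Python 2.5) was too low for our deeply
    # nested object graph; 10000 here is purely illustrative.
    sys.setrecursionlimit(10000)
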
pferreir at pcituds01 ~/projects/indico/db $ repozo -Rv -r /afs/cern.ch/project/indico/db_backups/indico/ -D 2009-05-23-21-31-33 -o data.fs
looking for files between last full backup and 2009-05-23-21-31-33...
files needed to recover state as of 2009-05-23-21-31-33:
        /afs/cern.ch/project/indico/db_backups/indico/2009-05-23-21-31-33.fsz
Recovering file to data.fs
Traceback (most recent call last):
  File "/usr/bin/repozo", line 8, in <module>
    load_entry_point('ZODB3==3.8.1', 'console_scripts', 'repozo')()
  File "//usr/lib/python2.5/site-packages/ZODB3-3.8.1-py2.5-linux-i686.egg/ZODB/scripts/repozo.py", line 513, in main
    do_recover(options)
  File "//usr/lib/python2.5/site-packages/ZODB3-3.8.1-py2.5-linux-i686.egg/ZODB/scripts/repozo.py", line 501, in do_recover
    reposz, reposum = concat(repofiles, outfp)
  File "//usr/lib/python2.5/site-packages/ZODB3-3.8.1-py2.5-linux-i686.egg/ZODB/scripts/repozo.py", line 263, in concat
    bytesread += dofile(func, ifp)
  File "//usr/lib/python2.5/site-packages/ZODB3-3.8.1-py2.5-linux-i686.egg/ZODB/scripts/repozo.py", line 200, in dofile
    data = fp.read(todo)
  File "/usr/lib/python2.5/gzip.py", line 227, in read
    self._read(readsize)
  File "/usr/lib/python2.5/gzip.py", line 292, in _read
    self._read_eof()
  File "/usr/lib/python2.5/gzip.py", line 311, in _read_eof
    raise IOError, "CRC check failed"
IOError: CRC check failed
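
A quick way to tell whether the .fsz file itself is corrupt, rather than
repozo mishandling it, is to run it through the gzip module directly
(gzip -t <file> from the shell does the same check); a minimal sketch,
pointing at the same backup file listed above:

    import gzip

    # Read the compressed backup end to end; the gzip module verifies the
    # CRC stored in the trailer when it reaches EOF, so a damaged file
    # raises the same IOError here, independently of repozo.
    path = ('/afs/cern.ch/project/indico/db_backups/indico/'
            '2009-05-23-21-31-33.fsz')
    f = gzip.open(path, 'rb')
    try:
        while f.read(1 << 20):
            pass
    finally:
        f.close()
    print 'CRC OK'

If that raises the same IOError, the backup file was already damaged when
it was written (or copied), and repozo is just the messenger.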

>
> > I guess we have a very large database, compared to what is normal in
> > ZODB applications.
>
> Not really:  I know of clients whose databases routinely grow much larger
> than yours (15 Gb, packing down to 6 Gb, right?)
>
> > We were wondering if there's any way to optimize the size
> > (and performance) of such a large database, through the removal of
> > unused objects and useless data. We perform packs on a weekly basis, but
> > we're not sure if this is enough, or if there are other ways of
> > lightening up the DB. Any recommendations regarding this point?
>
> Without knowing anything about the application:
>
> - Check that it is not holding onto "old" data inappropriately
>   (e.g., maintaining lots of "archival" versions of content).
>
Yes, I think we can make some improvements there. We actually store some
deleted content as a safeguard, so we're considering a major cleanup
operation.
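
Once that cleanup happens, the space only really comes back after a pack;
a minimal sketch of an offline pack against a plain FileStorage (the path
and the 7-day retention are placeholders, and it assumes the application /
ZEO server is stopped while it runs):

    from ZODB.FileStorage import FileStorage
    from ZODB import DB

    # Open the storage directly and discard object revisions older than
    # the given number of days; the freed space is reclaimed on disk.
    storage = FileStorage('/path/to/Data.fs')
    db = DB(storage)
    db.pack(days=7)
    db.close()
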
> - Check into the catalog / index usage:  you may be able to slow
>   the growth by batching updates, especially to text indexes.
>
I'm not sure I understand this one... we're not using ZCatalog or
full-text indexes, though...

Thanks!

Pedro

-- 

José Pedro Ferreira
(Software Developer, Indico Project)

IT-UDS-AVC
CERN
Geneva, Switzerland

Office 513-R-042
Tel. +41 22 76 77159

