[ZODB-Dev] RelStorage and overgrowing database.

Jürgen Herrmann Juergen.Herrmann at XLhost.de
Wed Feb 6 11:23:14 UTC 2013


On 03.02.2013 03:18, Shane Hathaway wrote:
> On 02/02/2013 04:13 PM, Juan A. Diaz wrote:
>> 2013/2/2 Shane Hathaway <shane at hathawaymix.org>:
>>> On 02/01/2013 09:08 PM, Juan A. Diaz wrote:
>> Do you think that adding an option to zodbpack to truncate these
>> tables after the pack could be a good idea?
>
> The object_ref table is intended to help the next pack run quickly,
> but I suppose it might make sense to clear it anyway with an option.
>
>>> If your database grows by 2.2 GB per day, it's not surprising that
>>> the database is 15 GB.  With drive and RAM sizes today, 15 GB
>>> doesn't sound like a problem to me... unless it's on a Raspberry
>>> Pi. :-)
>>
>> Yes, but after the pack the size of the objects that remain in the
>> database is only about 200 MB.
>>
>> Also, 15 GB, as you say, is not a really big database these days,
>> but we are synchronizing our databases through a low-bandwidth
>> channel across various datacenters, and in some cases recovering
>> the database from a failure in the sync process is a real pain!
>> Do you think it would be safe not to replicate those tables? Are
>> there other tables that maybe we don't need to replicate?
>
> You never need to replicate the MyISAM tables.  Only the InnoDB
> tables (object_state, current_object, transaction) need to be
> replicated.
>
> Shane

Hi!

I think this is not entirely correct. I ran into problems several
times when the new_oid table was emptied! Maybe Shane can confirm this?
(It results in read conflict errors.)

Then I'd like to talk a little about my current RelStorage setup here:
It's backed by MySQL in a history-preserving setup. Recently one of
our DBs started to grow very quickly, and its object_state.ibd (InnoDB)
file is just over 86 GB as of today. Packing now fails because MySQL
cannot complete the sorts on the object_ref table. object_ref is also
very big (36 GB MYD file, 25 GB MYI file). I took a backup of the DB
and let zodbconvert convert it back to a FileStorage; the resulting
file is 6 GB (!). I will pack it and see how big it is then. I will
also investigate how big on disk this DB would be when stored in
PostgreSQL. This situation poses another problem for us: using
zodbconvert to convert this mess to a FileStorage takes just over an
hour when writing to a ramdisk. I suspect converting to PostgreSQL
will take more than 10 hours, which is unacceptable for us, as this
is a live database and cannot be offline for more than 2-3 hours at
night. So we will have to look into a special zodbconvert that uses
a two-step process (rough sketches below):
1. import transactions into the new storage from a MySQL DB backup
2. import the "rest" of the transactions that occurred after the
    backup was made from the "live" database (which is offline during
    that time, of course)
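
For step 1 I imagine something along these lines (an untested sketch
only; the connection parameters and paths are just placeholders, and
the destination could equally well be a PostgreSQL RelStorage instead
of a FileStorage):

    # step 1: bulk copy from a restored MySQL backup into the new storage
    from ZODB.FileStorage import FileStorage
    from relstorage.adapters.mysql import MySQLAdapter
    from relstorage.options import Options
    from relstorage.storage import RelStorage

    options = Options(keep_history=True)
    adapter = MySQLAdapter(db='zodb_backup', user='zodb', passwd='...',
                           options=options)
    source = RelStorage(adapter, options=options)
    dest = FileStorage('/mnt/ramdisk/Data.fs')

    # copyTransactionsFrom() is the same call zodbconvert uses
    # internally; it replays every transaction from source into dest.
    dest.copyTransactionsFrom(source)
    source.close()
    dest.close()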

Looking at zodbconvert, which uses copyTransactionsFrom(), I think
this should be possible, but up to now I did not investigate further.
Maybe Shane could confirm this? Maybe this could also be turned into
a neat way of getting incremental backups out of ZODBs in general?
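
For step 2 (and the incremental backup idea) I picture a loop like
this; again only a sketch, assuming the ZODB 3.x iterator()/restore()
API (the exact restore() signature differs between ZODB versions, and
copy_transactions_after() is just a name I made up):

    from ZODB.utils import p64, u64

    def copy_transactions_after(source, dest, last_tid):
        """Replay every transaction from source whose tid is greater
        than last_tid (the last tid already present in dest)."""
        start = p64(u64(last_tid) + 1)
        for txn in source.iterator(start):
            # replay the transaction record by record via restore(),
            # which keeps the original tids and back-pointers
            dest.tpc_begin(txn, txn.tid, txn.status)
            for record in txn:
                dest.restore(record.oid, record.tid, record.data,
                             record.version, record.data_txn, txn)
            dest.tpc_vote(txn)
            dest.tpc_finish(txn)

Started from dest.lastTransaction(), the same loop would copy only the
transactions newer than the backup, which is basically an incremental
backup.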

best regards,
Jürgen Herrmann
-- 
>> XLhost.de ® - web hosting from supersmall to eXtra Large <<

XLhost.de GmbH
Jürgen Herrmann, Managing Director
Boelckestrasse 21, 93051 Regensburg, Germany

Managing Director: Jürgen Herrmann
Registered under: HRB9918
VAT ID: DE245931218

Phone: +49 (0)800 XLHOSTDE [0800 95467833]
Fax:  +49 (0)800 95467830
Web:  http://www.XLhost.de

