[ZODB-Dev] performance of chunked data over ZEO

Dieter Maurer dieter at handshake.de
Mon Nov 29 15:49:05 EST 2004


Andreas Jung wrote at 2004-11-28 20:56 +0100:
>Inside an application that stores large amounts of data in the ZODB as a
>linked structure (very similar to the Pdata magic in OFS.Image), I
>discovered that the chunk size has a significant influence on performance.
>Using a chunk size of 16k (which is also the default in OFS.Image) I got
>the expected transfer rate of about 2-3 MB/second. Increasing the chunk
>size to 64k caused a performance drop to about 100-150 KB/second. Any
>ideas why increasing the chunk size causes such a dramatic performance
>loss?
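For readers unfamiliar with the Pdata approach referred to above: it stores a large blob as a linked list of fixed-size chunks, each a separate persistent object. A minimal sketch in plain Python (class and function names are illustrative, not the actual OFS.Image API; the real Pdata chunks subclass Persistent so each node is a separate ZODB record):

```python
class Chunk:
    """One node in a linked list of data chunks, modeled on OFS.Image's
    Pdata. In Zope each such node would be a separate persistent object."""
    def __init__(self, data):
        self.data = data
        self.next = None


def store_chunked(data, chunk_size=1 << 14):
    """Split `data` into a chain of chunks of at most `chunk_size` bytes
    (16k default, matching OFS.Image) and return the head of the chain."""
    head = tail = Chunk(data[:chunk_size])
    for offset in range(chunk_size, len(data), chunk_size):
        node = Chunk(data[offset:offset + chunk_size])
        tail.next = node
        tail = node
    return head


def read_chunked(head):
    """Walk the chain and reassemble the original bytes."""
    parts = []
    node = head
    while node is not None:
        parts.append(node.data)
        node = node.next
    return b"".join(parts)
```

With a ZEO-backed storage, each node in the chain costs a separate object load, which is why the chunk size matters for throughput.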

I could not reproduce the behaviour. Instead, I found:

wrote 512 chunks of size 16384 in 2.845490 s
read 512 chunks of size 16384 in 1.630589 s
wrote 256 chunks of size 32768 in 2.813680 s
read 256 chunks of size 32768 in 1.068197 s
wrote 128 chunks of size 65536 in 3.180180 s
read 128 chunks of size 65536 in 0.852019 s
wrote 64 chunks of size 131072 in 2.747044 s
read 64 chunks of size 131072 in 0.815273 s
wrote 32 chunks of size 262144 in 2.980384 s
read 32 chunks of size 262144 in 0.796395 s

This means read time decreases as the chunks get bigger.
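Note that every sample above moves the same 8 MB in total (e.g. 512 x 16384 bytes), so what varies is the per-chunk overhead: fewer, larger chunks mean fewer round trips. A rough sketch of such a measurement loop, using an in-memory dict as a stand-in for the ZEO-backed storage (the actual test stored each chunk as a separate ZODB object over a fresh ZEO connection):

```python
import time

TOTAL = 8 * 1024 * 1024  # every sample transfers the same 8 MB


def benchmark(chunk_size, storage):
    """Write then read TOTAL bytes in chunks of `chunk_size`.
    `storage` is a dict standing in for the object store; with ZEO,
    each store/load would be a separate client-server round trip."""
    count = TOTAL // chunk_size
    payload = b"x" * chunk_size

    t0 = time.perf_counter()
    for i in range(count):
        storage[i] = payload                       # one store per chunk
    write_time = time.perf_counter() - t0

    t0 = time.perf_counter()
    data = b"".join(storage[i] for i in range(count))  # one load per chunk
    read_time = time.perf_counter() - t0

    assert len(data) == TOTAL
    return count, write_time, read_time


for size in (16384, 32768, 65536, 131072, 262144):
    count, w, r = benchmark(size, {})
    print("wrote %d chunks of size %d in %f s" % (count, size, w))
    print("read %d chunks of size %d in %f s" % (count, size, r))
```

The dict stand-in will not show the network effect, of course; it only illustrates the shape of the measurement.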


This measurement opened a new ZEO connection for each
sample, so it effectively used no cache
(neither the ZEO client cache nor the ZODB object cache).

-- 
Dieter
