[ZODB-Dev] Making ZODB / ZEO faster

Jim Fulton jim at zope.com
Fri Dec 4 12:17:10 EST 2009


On Fri, Dec 4, 2009 at 9:41 AM, Erik Dahl <edahl at zenoss.com> wrote:
> Guys,
>
> We have a product written in Python using ZODB/ZEO and I would like to
> improve the speed of the database in general.  Below are some things I
> have seen that I would like to improve; some I understand and some I don't.

How have you "seen" these? Do you have evidence that they are
affecting your applications specifically?

> 1. Loading of largish pickles (not too large; one object had a list with
> around 20K references in it) can be very slow.  OK, what size pickle?
> Not 100% sure; is there a way to get the pickle size of a ZODB
> persistent object?  I lamely tried to pickle one of our persistent
> objects and of course it blew up with max recursion because it went
> beyond the normal bounds of a ZODB persistent pickle.  There must be a
> way to do this though, right?

You can get the pickle and thus the pickle size for an object fairly
straightforwardly.

  # load() returns the object's current pickle and its serial
  p, s = ob._p_jar.db().storage.load(ob._p_oid, '')
  print len(p)

Pickling is actually pretty fast.
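
If you also want to account for the objects a pickle references (your
20K-reference case), here is a rough sketch, not production code; it
uses ZODB.serialize.referencesf to pull the referenced oids out of a
pickle, and the function name is just illustrative:

  from ZODB.serialize import referencesf

  def pickle_sizes(ob):
      storage = ob._p_jar.db().storage
      data, serial = storage.load(ob._p_oid, '')
      total = len(data)
      for oid in referencesf(data):
          child, child_serial = storage.load(oid, '')
          total += len(child)
      # (own pickle size, size including directly referenced pickles)
      return len(data), total

  print pickle_sizes(ob)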

By list, do you mean a Python list (or persistent list)?
When such an object is loaded, the database has to instantiate all of
the objects it references, which might be time consuming.  In general,
I'd avoid large lists or persistent lists, especially if they are
updated a lot.

If you then try to access all of those objects at once, each one will
have to be loaded from the database, and that can be very
time consuming.
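
If you do need a collection that large, a BTree generally behaves better
than one big list: the data is spread over many small bucket pickles, so
updates rewrite only a bucket or two and lookups load only the buckets
they touch.  A minimal sketch (the 'objects' iterable and the 'id'
attribute are just illustrative):

  from BTrees.OOBTree import OOBTree

  index = OOBTree()
  for ob in objects:        # stands in for your ~20K referenced objects
      index[ob.id] = ob     # assumes each object has a unique, orderable id

  # a point lookup loads only the few buckets along the search path
  some_ob = index.get('some-id')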

> 2. Writing lots of objects.  I know that zodb wasn't written for this
> type of use case but we have backed into it.

While some relational databases have greater transaction throughput,
you can get fair write throughput with ZODB.
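
For bulk writes, throughput usually improves a good deal if you commit
in batches rather than one transaction per object.  A hedged sketch (the
'records' input and the 'items' container are made up for illustration):

  import transaction

  for i, record in enumerate(records):
      root['items'][record.key] = record   # root is an open connection's root
      if i % 1000 == 999:
          transaction.commit()             # amortize commit overhead
  transaction.commit()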

>  We can have many (~30 is
> that a lot?)

That's on the order of what we have.

> zodb clients and as a result large numbers of cache
> invalidations can be sent when a write occurs.  Could invalidation
> performance / cache refresh be an issue?

I don't think so. Invalidations aren't that large and are sent asynchronously.

> 3. DB hot spots.  Of course we see conflict errors when there are lots
> of writes to the db from different clients that touch the same
> object.  We haven't done a bunch of optimization work here but I'm
> thinking of moving all indexing out to a separate client/process that
> reads off a queue to find objects to index.  I'm guessing the indexes
> are a hotspot (haven't tested this out much though I guess b-tree's
> buckets should alleviate this problem some).  (is there a persistent
> queue around?)

Hotspots are an issue in any database.  It is generally worth
refactoring the application to avoid them. :)
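
As for a persistent queue: zc.queue provides one (its CompositeQueue
variant is designed to reduce write conflicts between producers).  A
rough sketch of the producer/consumer split you describe; everything
except the zc.queue API is illustrative:

  from zc.queue import Queue

  # producer side (your normal clients); root is a connection root
  queue = root.get('index_queue')
  if queue is None:
      queue = root['index_queue'] = Queue()
  queue.put(ob._p_oid)        # just record what needs (re)indexing

  # consumer side (a dedicated indexing client)
  while len(queue):
      oid = queue.pull()      # removes and returns the first item
      reindex(oid)            # hypothetical indexing function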

> Anyway these are some things that come to mind when I think of
> performance issues.  I have the thought that many could be made better
> with faster ZEO I/O.  Does this seem like a good assumption?  If so
> what could we do to make ZEO faster?

There are two major issues with ZEO that I'm aware of:

- Each object load requires a round trip.  If you have a list
  of 20,000 objects and you want to iterate over them, that
  will require 20,000 separate object load requests, each requiring
  a network round trip (see the rough timing sketch after this list).

  We've discussed schemes to give the database hints to pre-fetch
  objects, thus avoiding serial round trips.

- ZEO servers are currently single threaded. This can hurt a lot if you
  have many clients, and it prevents you from taking advantage of
  multi-spindle storage systems.
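
To see the per-object round-trip cost for yourself, here is a rough
timing sketch (the 'big_list' name is illustrative):

  import time

  start = time.time()
  for ob in big_list:       # e.g. your list of ~20,000 persistent objects
      ob._p_activate()      # forces a load (one ZEO request) if ob is a ghost
  print time.time() - start, 'seconds'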

I'll note that if you have a large database, a substantial amount of time
can be spent doing disk IO.  In a recent test we performed with a 500GB
database, disk access accounted for ~15 milliseconds of a ~16 millisecond
object load request. IOW, disk access times swamped network access times,
which in my measurements were on the order of 600-1000 microseconds.

> Questions:
>
> * We use a FileStorage; are there faster ones?

Not that I'm aware of.

> Can this be a bottleneck?

Not in and of itself AFAIK.  Your disk configuration
can matter a lot. How big is your database file?

The biggest issues with file storage for large databases are:

- packing can take a long time and can affect database
  performance a great deal (see the small pack sketch below).

- shutdown and startup can take a long time to write and read
  the index file. God help you if you have to open a large database
  without an up-to-date index.

This is why I am (yet again) exploring a BerkeleyDB-based
storage implementation.
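
On the packing point, at least the pack can be driven from a small
maintenance script or cron job.  A sketch, assuming a ZEO server on
localhost:8100 (the address and retention window are made up):

  from ZEO.ClientStorage import ClientStorage
  from ZODB.DB import DB

  storage = ClientStorage(('localhost', 8100))
  db = DB(storage)
  db.pack(days=7)   # discard non-current revisions older than a week
  db.close()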

> * Is the ZEO protocol inefficient?

No, or at least not enough to matter.

> * Is the ZEO server just plain slow?

To some degree, yes, mainly because it is single threaded.

See my jim-thready-zeo2 branch.  I suspect that Python's GIL
will always put ZEO at a disadvantage relative to a server
that can take advantage of multiple processors.

You can help by testing this. :)

> Thoughts I have that may have no impact.
>
> * rewrite ZEO or parts of it in C
> * write a C based storage

I don't think either of these will help in any significant way.

Jim

-- 
Jim Fulton

