[ZODB-Dev] [Proposal] Size controlled ZODB cache
tino at wildenhain.de
Fri Jun 3 16:48:03 EDT 2005
Am Donnerstag, den 02.06.2005, 19:28 +0200 schrieb Dieter Maurer:
> Currently, the ZODB cache can only be controlled via the maximal number
> of objects. This makes configuration complex as the actual limiting
> factor is the amount of available RAM and it is very difficult to
> estimate the size of the objects in the cache.
> I therefore propose the implementation of cache replacement policies
> based on the estimated size of its objects.
> I propose to use the pickle size as the size estimate.
> The connection could store the pickle size in the object
> as "_p_size" (and may call a hook function "_p_estimateSize",
> if it is defined -- but I do not think we need this).
> I am aware that the actual size of an object may significantly
> differ from its pickle size, but usually, they will at least
> be in the same order.
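The estimate Dieter describes could be sketched roughly as follows; the helper name `estimate_size` and the `Record` class are purely illustrative (only `_p_estimateSize` appears in the mail, and even that is tentative there):

```python
import pickle

def estimate_size(obj):
    """Estimate an object's memory cost by its pickle size.

    Illustrative helper, not ZODB API: prefer a user-supplied
    `_p_estimateSize` hook if the object defines one, otherwise
    fall back to the length of the object's pickle.
    """
    hook = getattr(obj, "_p_estimateSize", None)
    if hook is not None:
        return hook()
    return len(pickle.dumps(obj))

class Record:
    """Hypothetical persistent object holding some payload."""
    def __init__(self, payload):
        self.payload = payload

# The pickle is at least as large as the payload it carries,
# so it serves as an order-of-magnitude size estimate.
size = estimate_size(Record("x" * 1000))
```

As the mail notes, the pickle length can differ considerably from the real in-memory footprint, but it is cheap to obtain (the connection already has the pickle at load/store time) and usually in the right order of magnitude.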
> As additional limiting parameters, I propose "MAX_OBJECT_SIZE" and "MAX_TOTAL_SIZE".
> Objects with size >= "MAX_OBJECT_SIZE" are invalidated at the next
> possible time (at a transaction boundary) before other potential
> invalidations are considered.
> The purpose of the limit is to prevent a single (or a few) large objects
> from flushing large amounts of small objects. Such large objects
> are managed in a special (doubly linked) list in order to quickly locate them.
> After large objects are flushed, the replacement policy works
> as it does now. However, beside the number of objects, their
> total estimated size is accumulated. As soon as
> either the "MAX_OBJECT_NUMBER" or "MAX_TOTAL_SIZE" is reached,
> the remaining objects are invalidated (as far as possible).
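The policy described above could look roughly like this. The parameter names (`MAX_OBJECT_SIZE`, `MAX_OBJECT_NUMBER`, `MAX_TOTAL_SIZE`) come from the mail; everything else -- the class, its methods, and the use of an ordered mapping instead of the doubly linked lists a C implementation would use -- is an assumption for illustration:

```python
from collections import OrderedDict

class SizeAwareCache:
    """Sketch of the proposed size-controlled cache policy.

    Objects whose estimated size reaches max_object_size go on a
    separate list and are flushed first at a transaction boundary;
    the rest follow LRU order until both the object-count and the
    total-estimated-size limits hold.
    """

    def __init__(self, max_object_number, max_total_size, max_object_size):
        self.max_object_number = max_object_number
        self.max_total_size = max_total_size
        self.max_object_size = max_object_size
        self._lru = OrderedDict()    # oid -> size, least recently used first
        self._large = OrderedDict()  # oversized objects, flushed first
        self._total = 0              # accumulated estimated size of _lru

    def store(self, oid, size):
        self.invalidate(oid)
        if size >= self.max_object_size:
            self._large[oid] = size
        else:
            self._lru[oid] = size
            self._total += size

    def invalidate(self, oid):
        if oid in self._large:
            del self._large[oid]
        elif oid in self._lru:
            self._total -= self._lru.pop(oid)

    def access(self, oid):
        # Touching an object moves it to the most-recently-used end.
        if oid in self._lru:
            self._lru.move_to_end(oid)

    def flush(self):
        """Run at a transaction boundary: drop oversized objects first,
        then evict LRU entries until both limits are satisfied."""
        self._large.clear()
        while self._lru and (len(self._lru) > self.max_object_number
                             or self._total > self.max_total_size):
            _, size = self._lru.popitem(last=False)
            self._total -= size
```

With, say, `max_object_number=3`, `max_total_size=100`, and `max_object_size=50`, storing objects of sizes 10, 20, 60, 30, and 30 and then flushing drops the 60-byte object outright and evicts the least recently used small object to get back under the count limit.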
It would be nice to optionally have the possibility to define
a finer-grained cache policy (e.g. reserve 80% of cache
memory for objects <5k and 20% for all the others - or something
like that - much like altq does for IP-traffic shaping).
E.g. some clean hooks to cache store management and cache
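The altq-style partitioning suggested here could be sketched as two size classes with independent budgets; the 5 KiB threshold and the 80/20 split come from the example above, while the class and everything else is an assumed illustration, not a proposed API:

```python
from collections import OrderedDict

class PartitionedCache:
    """Sketch: split the size budget into classes (here 80% for
    objects under small_limit bytes, 20% for the rest), each
    evicted independently in LRU order, loosely analogous to
    altq assigning bandwidth shares to traffic classes."""

    def __init__(self, total_budget, small_limit=5 * 1024, small_share=0.8):
        self.small_limit = small_limit
        self.budgets = {"small": total_budget * small_share,
                        "large": total_budget * (1 - small_share)}
        self.parts = {"small": OrderedDict(), "large": OrderedDict()}
        self.totals = {"small": 0, "large": 0}

    def _class(self, size):
        return "small" if size < self.small_limit else "large"

    def store(self, oid, size):
        cls = self._class(size)
        part = self.parts[cls]
        if oid in part:
            self.totals[cls] -= part.pop(oid)
        part[oid] = size
        self.totals[cls] += size
        # Evict LRU entries of the same class until its budget holds,
        # so large objects can never displace the small-object share.
        while self.totals[cls] > self.budgets[cls]:
            _, evicted = part.popitem(last=False)
            self.totals[cls] -= evicted
```

The point of the partition is isolation: evictions in one class never touch the other, so a burst of large objects cannot flush the reserved small-object share.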