[Zodb-checkins] SVN: ZODB/branches/3.4/ The cache simulation seems good enough to be useful now,

Tim Peters tim.one at comcast.net
Tue Jul 26 16:03:28 EDT 2005


Log message for revision 37432:
  The cache simulation seems good enough to be useful now,
  although the toy app I wrote to generate a 500MB trace
  file doesn't use MVCC in an essential way (some of the
  MVCC simulation code is nevertheless exercised, since
  an invalidation of current data in the presence of MVCC
  always creates a cache record for the newly non-current
  revision).
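  The invalidation path mentioned above can be sketched in a few
  lines.  This is a simplified, hypothetical model of the simulator's
  bookkeeping -- the names `current` and `noncurrent` mirror the maps
  in simul.py, but this is not the actual code:

```python
# Hypothetical sketch: when current data for an oid is invalidated by
# transaction `tid`, the formerly current revision becomes non-current
# data, valid over the half-open range [start_tid, tid).
def invalidate(current, noncurrent, oid, tid):
    start_tid = current.pop(oid, None)
    if start_tid is None:
        return  # no current data cached for this oid; nothing to do
    # Record the old revision's validity range as non-current data.
    noncurrent.setdefault(oid, []).append((start_tid, tid))

current = {b'oid1': 10}   # oid -> tid of cached current revision
noncurrent = {}           # oid -> list of (start_tid, end_tid) ranges
invalidate(current, noncurrent, b'oid1', 15)
assert current == {}
assert noncurrent == {b'oid1': [(10, 15)]}
```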
  
  Still puzzling over what to do when the trace file records
  a load hit but the simulated cache gets a miss.  The
  old simulation code seemed to assume that a store for the same
  oid would show up in the trace file next, and it could get
  the info it needed about the missing object from the store
  trace.  But that isn't true:  precisely because the load was
  a hit in the trace file, the object isn't going to be stored
  again "soon" in the trace file.
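  The asymmetry can be illustrated with a toy replay loop (hypothetical
  names, not the real simul.py logic): a trace *miss* is followed by a
  store record the simulator can learn from, but a trace *hit* carries
  no data with which to repair a simulated eviction:

```python
# Simplified, hypothetical sketch of the problem described above.  A
# real cache repairs a miss by fetching from the ZEO server; the
# simulator has no server, so once it has evicted an object, a trace
# record that was a load hit gives it nothing (no size, no tid) with
# which to re-create the entry.
def replay_load(sim_cache, oid, trace_hit, fetch_info=None):
    """Return 'hit' or 'miss' for the simulated cache.

    fetch_info is the (size, tid) a real miss would obtain from the
    server; it's only available when the trace recorded a miss,
    because only then does a store record follow in the trace.
    """
    if oid in sim_cache:
        return 'hit'
    if not trace_hit and fetch_info is not None:
        sim_cache[oid] = fetch_info  # the trailing store repairs it
    return 'miss'

cache = {}
# Trace hit, simulated miss: nothing to repair with -- stays a miss.
assert replay_load(cache, b'oid1', trace_hit=True) == 'miss'
assert replay_load(cache, b'oid1', trace_hit=True) == 'miss'
# Trace miss: the store record that follows lets the simulator repair.
assert replay_load(cache, b'oid2', trace_hit=False,
                   fetch_info=(128, 5)) == 'miss'
assert replay_load(cache, b'oid2', trace_hit=True) == 'hit'
```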
  
  Here are some actual-vs-simulated hit rate results, for a
  20MB cache, with a trace file covering about 9 million
  loads, over 3 ZEO client (re)starts:
  
  actual   simulated
  ------   ---------
    93.1        92.7
    79.8        79.0
    68.0        69.1
        
    81.4        81.1  overall
  
  Since the simulated hit rates are both higher and lower
  than the actual hit rates, that argues against a gross
  systematic bias in the simulation (although there may
  be several systematic biases in opposite directions).
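  For reference, each hit rate above is 100 * hits / loads, and the
  overall figure is computed from the summed counts, so restarts with
  more loads carry more weight.  A quick check with invented
  per-restart load counts (the real counts aren't shown here, so the
  overall figure below won't match the 81.4 above):

```python
# (hits, loads) per client restart; the counts are made up for
# illustration.  The hit percentages match the "actual" column above.
segments = [(931, 1000), (798, 1000), (680, 1000)]
rates = [100.0 * h / n for h, n in segments]
overall = (100.0 * sum(h for h, _ in segments)
           / sum(n for _, n in segments))
assert abs(rates[0] - 93.1) < 1e-9
assert abs(overall - 80.3) < 1e-9  # equal weights, unlike the real data
```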
  

Changed:
  U   ZODB/branches/3.4/doc/ZEO/trace.txt
  U   ZODB/branches/3.4/src/ZEO/cache.py
  U   ZODB/branches/3.4/src/ZEO/simul.py

-=-
Modified: ZODB/branches/3.4/doc/ZEO/trace.txt
===================================================================
--- ZODB/branches/3.4/doc/ZEO/trace.txt	2005-07-26 16:32:55 UTC (rev 37431)
+++ ZODB/branches/3.4/doc/ZEO/trace.txt	2005-07-26 20:03:26 UTC (rev 37432)
@@ -14,22 +14,22 @@
 ----------------------
 
 To enable cache tracing, you must use a persistent cache (specify a ``client``
-name), and set the environment variable ZEO_CACHE_TRACE.  The path to the
-trace file is derived from the path to the persistent cache file by appending
-".trace".  If the file doesn't exist, ZEO will try to create it.  If the file
-does exist, it's opened for appending (previous trace information is not
-overwritten).  If there are problems with the file, a warning message is
-logged.  To start or stop tracing, the ZEO client process (typically a Zope
-application server) must be restarted.
+name), and set the environment variable ZEO_CACHE_TRACE to a non-empty
+value.  The path to the trace file is derived from the path to the persistent
+cache file by appending ".trace".  If the file doesn't exist, ZEO will try to
+create it.  If the file does exist, it's opened for appending (previous trace
+information is not overwritten).  If there are problems with the file, a
+warning message is logged.  To start or stop tracing, the ZEO client process
+(typically a Zope application server) must be restarted.
 
 The trace file can grow pretty quickly; on a moderately loaded server, we
-observed it growing by 5 MB per hour.  The file consists of binary records,
+observed it growing by 7 MB per hour.  The file consists of binary records,
 each 34 bytes long if 8-byte oids are in use; a detailed description of the
 record lay-out is given in stats.py.  No sensitive data is logged:  data
-record sizes and binary object and transaction ids are logged, but no
-information about object types or names, user names, version names,
-transaction comments, access paths, or machine information such as machine
-name or IP address.
+record sizes (but not data records), and binary object and transaction ids
+are logged, but no object pickles, object types or names, user names,
+transaction comments, access paths, or machine information (such as machine
+name or IP address) are logged.
 
 Analyzing a Cache Trace
 -----------------------
@@ -40,31 +40,29 @@
 indicating client restarts, followed by a more detailed summary of overall
 statistics.
 
-The most important statistic is probably the "hit rate", a percentage
-indicating how many requests to load an object could be satisfied from
-the cache.  Hit rates around 70% are good.  90% is probably close to
-the theoretical maximum.  If you see a hit rate under 60% you can
-probably improve the cache performance (and hence your Zope
-application server's performance) by increasing the ZEO cache size.
-This is normally configured using cache_size key in the ``zeoclient``
-section of your configuration file.  The default cache size is 20 MB, which
-is very small.
+The most important statistic is the "hit rate", a percentage indicating how
+many requests to load an object could be satisfied from the cache.  Hit rates
+around 70% are good.  90% is excellent.  If you see a hit rate under 60% you
+can probably improve the cache performance (and hence your Zope application
+server's performance) by increasing the ZEO cache size.  This is normally
+configured using key ``cache_size`` in the ``zeoclient`` section of your
+configuration file.  The default cache size is 20 MB, which is very small.
 
 The stats.py tool shows its command line syntax when invoked without
-arguments.  The tracefile argument can be a gzipped file if it has a
-.gz extension.  It will read from stdin (assuming uncompressed data)
-if the tracefile argument is '-'.
+arguments.  The tracefile argument can be a gzipped file if it has a .gz
+extension.  It will read from stdin (assuming uncompressed data) if the
+tracefile argument is '-'.
 
 Simulating Different Cache Sizes
 --------------------------------
 
-Based on a cache trace file, you can make a prediction of how well the
-cache might do with a different cache size.  The simul.py tool runs an
-accurate simulation of the ZEO client cache implementation based upon
-the events read from a trace file.  A new simulation is started each
-time the trace file records a client restart event; if a trace file
-contains more than one restart event, a separate line is printed for
-each simulation, and line with overall statistics is added at the end.
+Based on a cache trace file, you can make a prediction of how well the cache
+might do with a different cache size.  The simul.py tool runs a simulation of
+the ZEO client cache implementation based upon the events read from a trace
+file.  A new simulation is started each time the trace file records a client
+restart event; if a trace file contains more than one restart event, a
+separate line is printed for each simulation, and a line with overall
+statistics is added at the end.
 
 Example, assuming the trace file is in /tmp/cachetrace.log::
 
@@ -98,3 +96,26 @@
 The simul.py tool also supports simulating different cache
 strategies.  Since none of these are implemented, these are not
 further documented here.
+
+Simulation Limitations
+----------------------
+
+The cache simulation is an approximation, and the actual hit rate may be
+higher or lower than the simulated result.  These are some factors that
+inhibit exact simulation:
+
+- The simulator doesn't try to emulate versions.  If the trace file contains
+  loads and stores of objects in versions, the simulator treats them as if
+  they were loads and stores of non-version data.
+
+- Each time a load of an object O in the trace file was a cache hit, but the
+  simulated cache has evicted O, the simulated cache has no way to repair its
+  knowledge about O.  This is more frequent when simulating caches smaller
+  than the cache used to produce the trace file.  When a real cache suffers a
+  cache miss, it asks the ZEO server for the needed information about O, and
+  saves O in the client cache.  The simulated cache doesn't have a ZEO server
+  to ask, and O continues to be absent in the simulated cache.  Further
+  requests for O will continue to be simulated cache misses, although in a
+  real cache they'll likely be cache hits.  On the other hand, the
+  simulated cache doesn't need to evict any objects to make room for O, so it
+  may enjoy further cache hits on objects a real cache would need to evict.

Modified: ZODB/branches/3.4/src/ZEO/cache.py
===================================================================
--- ZODB/branches/3.4/src/ZEO/cache.py	2005-07-26 16:32:55 UTC (rev 37431)
+++ ZODB/branches/3.4/src/ZEO/cache.py	2005-07-26 20:03:26 UTC (rev 37432)
@@ -210,7 +210,7 @@
         # than any comparable non-None object in recent Pythons.
         i = bisect.bisect_left(L, (tid, None))
         # Now L[i-1] < (tid, None) < L[i], and the start_tid for everything in
-        # L[:i} is < tid, and the start_tid for everything in L[i:] is >= tid.
+        # L[:i] is < tid, and the start_tid for everything in L[i:] is >= tid.
         # Therefore the largest start_tid < tid must be at L[i-1].  If i is 0,
         # there is no start_tid < tid:  we don't have any data old enough.
         if i == 0:

Modified: ZODB/branches/3.4/src/ZEO/simul.py
===================================================================
--- ZODB/branches/3.4/src/ZEO/simul.py	2005-07-26 16:32:55 UTC (rev 37431)
+++ ZODB/branches/3.4/src/ZEO/simul.py	2005-07-26 20:03:26 UTC (rev 37432)
@@ -14,19 +14,10 @@
 ##############################################################################
 """Cache simulation.
 
-Usage: simul.py [-bflyz] [-s size] tracefile
+Usage: simul.py [-s size] tracefile
 
-Use one of -b, -f, -l, -y or -z select the cache simulator:
--b: buddy system allocator
--f: simple free list allocator
--l: idealized LRU (no allocator)
--y: variation on the existing ZEO cache that copies to current file
--z: existing ZEO cache (default)
-
 Options:
 -s size: cache size in MB (default 20 MB)
-
-Note: the buddy system allocator rounds the cache size up to a power of 2
 """
 
 import sys
@@ -47,7 +38,7 @@
     # Parse options.
     MB = 1024**2
     cachelimit = 20*MB
-    simclass = ZEOCacheSimulation
+    simclass = CircularCacheSimulation
     try:
         opts, args = getopt.getopt(sys.argv[1:], "bflyz2cOaTUs:")
     except getopt.error, msg:
@@ -140,12 +131,11 @@
         if len(oid) < oidlen:
             break
         # Decode the code.
-        dlen, version, code, current = (code & 0x7fffff00,
-                                        code & 0x80,
-                                        code & 0x7e,
-                                        code & 0x01)
+        dlen, version, code = (code & 0x7fffff00,
+                               code & 0x80,
+                               code & 0x7e)
         # And pass it to the simulation.
-        sim.event(ts, dlen, version, code, current, oid, start_tid, end_tid)
+        sim.event(ts, dlen, version, code, oid, start_tid, end_tid)
 
     f.close()
     # Finish simulation.
@@ -162,7 +152,6 @@
     The standard event() method calls these additional methods:
     write(), load(), inval(), report(), restart(); the standard
     finish() method also calls report().
-
     """
 
     def __init__(self, cachelimit):
@@ -187,7 +176,7 @@
         self.writes = 0
         self.ts0 = None
 
-    def event(self, ts, dlen, _version, code, _current, oid,
+    def event(self, ts, dlen, _version, code, oid,
               start_tid, end_tid):
         # Record first and last timestamp seen.
         if self.ts0 is None:
@@ -198,9 +187,11 @@
 
         # Simulate cache behavior.  Caution:  the codes in the trace file
         # record whether the actual cache missed or hit on each load, but
-        # that bears no relationship to whether the simulated cache will
-        # hit or miss.
-        action = code & 0x70  # ignore high bit (version flag)
+        # that bears no necessary relationship to whether the simulated cache
+        # will hit or miss.  Relatedly, if the actual cache needed to store
+        # an object, the simulated cache may not need to (it may already
+        # have the data).
+        action = code & 0x70
         if action == 0x20:
             # Load.
             self.loads += 1
@@ -282,12 +273,34 @@
                            for name in self.extras])
             print (self.format + " OVERALL") % args
 
+
+# For use in CircularCacheSimulation.
+class CircularCacheEntry(object):
+    __slots__ = (# object key:  an (oid, start_tid) pair, where
+                 # start_tid is the tid of the transaction that created
+                 # this revision of oid
+                 'key',
+
+                 # tid of transaction that created the next revision;
+                 # z64 iff this is the current revision
+                 'end_tid',
+
+                 # Offset from start of file to the object's data
+                 # record; this includes all overhead bytes (status
+                 # byte, size bytes, etc).
+                 'offset',
+                )
+
+    def __init__(self, key, end_tid, offset):
+        self.key = key
+        self.end_tid = end_tid
+        self.offset = offset
+
 from ZEO.cache import ZEC3_HEADER_SIZE
-# An Entry just wraps a (key, offset) pair.  A key is in turn an
-# (oid, tid) pair.
-from ZEO.cache import Entry
 
 class CircularCacheSimulation(Simulation):
+    """Simulate the ZEO 3.0a cache."""
+
     # The cache is managed as a single file with a pointer that
     # goes around the file, circularly, forever.  New objects
     # are written at the current pointer, evicting whatever was
@@ -300,14 +313,16 @@
         from BTrees.OIBTree import OIBTree
 
         Simulation.__init__(self, cachelimit)
-        self.total_evicts = 0
+        self.total_evicts = 0  # number of cache evictions
+
         # Current offset in file.
         self.offset = ZEC3_HEADER_SIZE
-        # Map offset in file to (size, Entry) pair, or to (size, None) if
-        # the offset starts a free block.
+
+        # Map offset in file to (size, CircularCacheEntry) pair, or to
+        # (size, None) if the offset starts a free block.
         self.filemap = {ZEC3_HEADER_SIZE: (self.cachelimit - ZEC3_HEADER_SIZE,
                                            None)}
-        # Map key to Entry.  A key is an (oid, tid) pair.
+        # Map key to CircularCacheEntry.  A key is an (oid, tid) pair.
         self.key2entry = {}
 
         # Map oid to tid of current revision.
@@ -348,7 +363,7 @@
                 self._cache_miss(oid, tid)
             return
 
-        # May or may not be trying to load current revisiion.
+        # May or may not be trying to load current revision.
         cur_tid = self.current.get(oid)
         if cur_tid == tid:
             self.hits += 1
@@ -375,9 +390,10 @@
         self.total_hits += 1
 
     def _cache_miss(self, oid, tid, HUGE64='\xff'*8):
+        return
         have_data = False
         if tid == z64:
-            # Miss on current data.  Find the most revision we ever saw.
+            # Miss on current data.  Find the most recent revision we ever saw.
             items = self.key2size.items(min=(oid, z64), max=(oid, HUGE64))
             if items:
                 (oid, tid), size = items[-1]  # most recent
@@ -393,10 +409,11 @@
             # Pretend the cache miss was followed by a store.
             self.writes += 1
             self.total_writes += 1
-            self.add(oid, tid, size)
+            self.add(oid, size, tid)
 
     # (oid, tid) is in the cache.  Remove it:  take it out of key2entry,
-    # and in `filemap` mark the space it occupied as being free.
+    # and in `filemap` mark the space it occupied as being free.  The
+    # caller is responsible for removing it from `current` or `noncurrent`.
     def _remove(self, oid, tid):
         key = oid, tid
         e = self.key2entry.pop(key)
@@ -416,12 +433,13 @@
 
     def inval(self, oid, tid):
         if tid == z64:
-            # This is part of startup cache verification.
+            # This is part of startup cache verification:  forget everything
+            # about this oid.
             self._remove_noncurrent_revisions(oid)
 
         cur_tid = self.current.get(oid)
         if cur_tid is None:
-            # We don't have current data, so nothing to do.
+            # We don't have current data, so nothing more to do.
             return
 
         # We had current data for oid, but no longer.
@@ -433,10 +451,15 @@
             self._remove(oid, cur_tid)
             return
 
-        # Add the validty range to the list of non-current data for oid.
+        # Our current data becomes non-current data.
+        # Add the validity range to the list of non-current data for oid.
         assert cur_tid < tid
         L = self.noncurrent.setdefault(oid, [])
         bisect.insort_left(L, (cur_tid, tid))
+        # Update the end of oid's validity range in its CircularCacheEntry.
+        e = self.key2entry[oid, cur_tid]
+        assert e.end_tid == z64
+        e.end_tid = tid
 
     def write(self, oid, size, start_tid, end_tid):
         if end_tid == z64:
@@ -447,7 +470,7 @@
             self.key2size[oid, start_tid] = size
             self.writes += 1
             self.total_writes += 1
-            self.add(oid, start_tid, size)
+            self.add(oid, size, start_tid)
             return
         # Storing non-current revision.
         L = self.noncurrent.setdefault(oid, [])
@@ -458,14 +481,17 @@
         self.key2size[(oid, start_tid)] = size
         self.writes += 1
         self.total_writes += 1
-        self.add(oid, start_tid, size)
+        self.add(oid, size, start_tid, end_tid)
 
-    def add(self, oid, tid, size):
+    # Add `oid` to the cache, evicting objects as needed to make room.
+    # This updates `filemap` and `key2entry`; it's the caller's
+    # responsibility to update `current` or `noncurrent` appropriately.
+    def add(self, oid, size, start_tid, end_tid=z64):
+        key = oid, start_tid
+        assert key not in self.key2entry
         size += self.overhead
         avail = self.makeroom(size)
-        key = oid, tid
-        assert key not in self.key2entry
-        e = Entry(key, self.offset)
+        e = CircularCacheEntry(key, end_tid, self.offset)
         self.filemap[self.offset] = size, e
         self.key2entry[key] = e
         self.offset += size
@@ -474,8 +500,13 @@
         if excess:
             self.filemap[self.offset] = excess, None
 
+    # Evict enough objects to make at least `need` contiguous bytes, starting
+    # at `self.offset`, available.  Evicted objects are removed from
+    # `filemap`, `key2entry`, `current` and `noncurrent`.  The caller is
+    # responsible for adding new entries to `filemap` to account for all
+    # the freed bytes, and for advancing `self.offset`.  The number of bytes
+    # freed is the return value, and will be >= need.
     def makeroom(self, need):
-        # Evict enough objects to make the necessary space available.
         if self.offset + need > self.cachelimit:
             self.offset = ZEC3_HEADER_SIZE
         pos = self.offset
@@ -487,12 +518,18 @@
                 self.dump()
                 raise
             del self.filemap[pos]
-            if e:
+            if e:   # there is an object here (else it's already free space)
                 self.evicts += 1
                 self.total_evicts += 1
                 assert pos == e.offset
                 _e = self.key2entry.pop(e.key)
                 assert e is _e
+                oid, start_tid = e.key
+                if e.end_tid == z64:
+                    del self.current[oid]
+                else:
+                    L = self.noncurrent[oid]
+                    L.remove((start_tid, e.end_tid))
             need -= size
             pos += size
         return pos - self.offset  # total number of bytes freed
@@ -531,8 +568,16 @@
             v = self.filemap[k]
             print k, v[0], repr(v[1])
 
+#############################################################################
+# CAUTION:  It's most likely that none of the simulators below this
+# point work anymore.  A great many changes were needed to teach
+# CircularCacheSimulation (above) about MVCC, including method signature
+# changes and cache file format changes, and none of the other simulator
+# classes were changed.
+#############################################################################
+
 class ZEOCacheSimulation(Simulation):
-    """Simulate the ZEO 1.0 and 2.0cache behavior.
+    """Simulate the ZEO 1.0 and 2.0 cache behavior.
 
     This assumes the cache is not persistent (we don't know how to
     simulate cache validation.)


