[Zodb-checkins] SVN: ZODB/trunk/src/ Previously, database connections were managed as a stack. This

Jim Fulton jim at zope.com
Sat Jan 3 16:47:56 EST 2009


Log message for revision 94495:
  Previously, database connections were managed as a stack.  This
  tended to cause the same connection(s) to be used over and over.
  For example, the most used connection would typically be the only
  connection used.  In some rare situations, extra connections could
  be opened and end up on the top of the stack, causing extreme memory
  wastage.  Now, when connections are placed on the stack, they sink
  below existing connections that have more active objects.
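
For illustration (not part of the commit), here is a minimal
standalone sketch of the sinking insertion described above, using
(timestamp, active, name) tuples in place of real connections:

    import time

    def sink_append(available, name, active):
        # Insert so the new entry sits below any entries whose caches
        # hold strictly more active (non-ghost) objects; the largest
        # caches stay nearest the top of the stack (the list's end).
        i = len(available)
        while i and available[i - 1][1] > active:
            i -= 1
        available.insert(i, (time.time(), active, name))

    available = []
    for name, active in [("c1", 11), ("c2", 6), ("c3", 1)]:
        sink_append(available, name, active)

    print([name for _, active, name in available])
    # ['c3', 'c2', 'c1'] -- the end of the list is handed out first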
  

Changed:
  U   ZODB/trunk/src/CHANGES.txt
  U   ZODB/trunk/src/ZODB/DB.py
  U   ZODB/trunk/src/ZODB/tests/dbopen.txt

-=-
Modified: ZODB/trunk/src/CHANGES.txt
===================================================================
--- ZODB/trunk/src/CHANGES.txt	2009-01-03 21:47:55 UTC (rev 94494)
+++ ZODB/trunk/src/CHANGES.txt	2009-01-03 21:47:56 UTC (rev 94495)
@@ -55,6 +55,14 @@
   starting point.  This enhancement makes it practical to take
   advantage of the new storage server invalidation-age option.
 
+- Previously, database connections were managed as a stack.  This
+  tended to cause the same connection(s) to be used over and over.
+  For example, the most used connection would typically be the only
+  connection used.  In some rare situations, extra connections could
+  be opened and end up on the top of the stack, causing extreme memory
+  wastage.  Now, when connections are placed on the stack, they sink
+  below existing connections that have more active objects.
+
 3.9.0a8 (2008-12-15)
 ====================
 

Modified: ZODB/trunk/src/ZODB/DB.py
===================================================================
--- ZODB/trunk/src/ZODB/DB.py	2009-01-03 21:47:55 UTC (rev 94494)
+++ ZODB/trunk/src/ZODB/DB.py	2009-01-03 21:47:56 UTC (rev 94495)
@@ -122,6 +122,21 @@
         # in this stack.
         self.available = []
 
+    def _append(self, c):
+        available = self.available
+        cactive = c._cache.cache_non_ghost_count
+        if (available and
+            (available[-1][1]._cache.cache_non_ghost_count > cactive)
+            ):
+            i = len(available) - 1
+            while (i and
+                   (available[i-1][1]._cache.cache_non_ghost_count > cactive)
+                   ):
+                i -= 1
+            available.insert(i, (time.time(), c))
+        else:
+            available.append((time.time(), c))
+
     def push(self, c):
         """Register a new available connection.
 
@@ -132,7 +147,7 @@
         assert c not in self.available
         self._reduce_size(strictly_less=True)
         self.all.add(c)
-        self.available.append((time.time(), c))
+        self._append(c)
         n = len(self.all)
         limit = self.size
         if n > limit:
@@ -151,7 +166,7 @@
         assert c in self.all
         assert c not in self.available
         self._reduce_size(strictly_less=True)
-        self.available.append((time.time(), c))
+        self._append(c)
 
     def _reduce_size(self, strictly_less=False):
         """Throw away the oldest available connections until we're under our
@@ -210,7 +225,7 @@
 
     def availableGC(self):
         """Perform garbage collection on available connections.
-        
+
         If a connection is no longer viable because it has timed out, it is
         garbage collected."""
         threshhold = time.time() - self.timeout
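
For context, a minimal sketch (an illustration of the assumed
behavior, not the committed code) of the timeout check availableGC
performs on the (timestamp, connection) entries used above; entries
older than the pool's timeout are dropped:

    import time

    def available_gc(available, timeout):
        # Discard available connections whose close timestamps fall
        # before the timeout threshold.
        threshold = time.time() - timeout
        available[:] = [(t, c) for (t, c) in available if t >= threshold]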

Modified: ZODB/trunk/src/ZODB/tests/dbopen.txt
===================================================================
--- ZODB/trunk/src/ZODB/tests/dbopen.txt	2009-01-03 21:47:55 UTC (rev 94494)
+++ ZODB/trunk/src/ZODB/tests/dbopen.txt	2009-01-03 21:47:56 UTC (rev 94495)
@@ -173,6 +173,47 @@
     >>> len(pool.available), len(pool.all)
     (0, 3)
 
+It's a bit more complicated though.  The connection pool tries to keep
+connections with larger caches at the top of the stack.  It does this
+by having connections with smaller caches "sink" below connections
+with larger caches when they are closed.
+
+To see this, we'll add some objects to the caches:
+
+    >>> for i in range(10):
+    ...     c1.root()[i] = c1.root().__class__()
+    >>> import transaction
+    >>> transaction.commit()
+    >>> c1._cache.cache_non_ghost_count
+    11
+
+    >>> for i in range(5):
+    ...     _ = len(c2.root()[i])
+    >>> c2._cache.cache_non_ghost_count
+    6
+
+Now, we'll close the connections and get them back:
+
+    >>> c1.close()
+    >>> c2.close()
+    >>> c3.close()
+
+We closed c3 last, but c1 is the biggest, so we get c1 on the next
+open:
+
+    >>> db.open() is c1
+    True
+
+Similarly, c2 is the next biggest, so we get that next:
+
+    >>> db.open() is c2
+    True
+
+and finally c3:
+
+    >>> db.open() is c3
+    True
+
 What about the 3 in pool.all?  We've seen that closing connections doesn't
 reduce pool.all, and it would be bad if DB kept connections alive forever.
 
@@ -295,8 +336,13 @@
 
 Now open more connections so that the total exceeds pool_size (2):
 
-    >>> conn1 = db.open()
-    >>> conn2 = db.open()
+    >>> conn1 = db.open(); _ = conn1.root()['a']
+    >>> conn2 = db.open(); _ = conn2.root()['a']
+
+Note that we accessed an object in each new connection so that their
+caches would be the same size as conn0's; that way, when they are
+closed, they don't sink below conn0.
+
     >>> pool = db.pool
     >>> len(pool.all), len(pool.available)  # all Connections are in use
     (3, 0)


