[Checkins] SVN: relstorage/trunk/ Merged 1.1 branch as of r88795

Shane Hathaway shane at hathawaymix.org
Mon Jul 28 13:26:42 EDT 2008


Log message for revision 88881:
  Merged 1.1 branch as of r88795
  

Changed:
  D   relstorage/trunk/CHANGELOG.txt
  A   relstorage/trunk/CHANGES.txt
  D   relstorage/trunk/MANIFEST
  U   relstorage/trunk/README.txt
  U   relstorage/trunk/notes/migrate-1.0.1.txt
  U   relstorage/trunk/poll-invalidation-1-zodb-3-7-1.patch
  U   relstorage/trunk/poll-invalidation-1-zodb-3-8-0.patch
  U   relstorage/trunk/relstorage/adapters/common.py
  U   relstorage/trunk/relstorage/adapters/mysql.py
  U   relstorage/trunk/relstorage/adapters/oracle.py
  U   relstorage/trunk/relstorage/adapters/postgresql.py
  U   relstorage/trunk/relstorage/component.xml
  U   relstorage/trunk/relstorage/config.py
  U   relstorage/trunk/relstorage/relstorage.py
  U   relstorage/trunk/relstorage/tests/reltestbase.py
  U   relstorage/trunk/setup.py

-=-
Deleted: relstorage/trunk/CHANGELOG.txt
===================================================================
--- relstorage/trunk/CHANGELOG.txt	2008-07-28 16:58:03 UTC (rev 88880)
+++ relstorage/trunk/CHANGELOG.txt	2008-07-28 17:26:36 UTC (rev 88881)
@@ -1,173 +0,0 @@
-
-RelStorage 1.1b2
-
-- Made the MySQL locks database-specific rather than server-wide.  This is
-  important for multi-database configurations.
-
-- In the PostgreSQL adapter, made the pack lock fall back to table locking
-  rather than advisory locks for PostgreSQL 8.1.
-
-
-RelStorage 1.1b1
-
-- Fixed the use of setup.py without setuptools.  Thanks to Chris Withers.
-
-- Fixed type coercion of the transaction extension field.  This fixes
-  an issue with converting databases.  Thanks to Kevin Smith for
-  discovering this.
-
-- Added logging to the pack code to help diagnose performance issues.
-
-- Additions to the object_ref table are now periodically committed
-  during pre_pack so that the work is not lost if pre_pack fails.
-
-- Modified the pack code to pack one transaction at a time and
-  release the commit lock frequently.  This should help large pack
-  operations.
-
-- Fixed buildout-based installation of the zodbconvert script.  Thanks to
-  Jim Fulton.
-
-
-RelStorage 1.0.1
-
-- The speedtest script failed if run on a test database that has no tables.
-  Now the script creates the tables if needed.  Thanks to Flavio Coelho
-  for discovering this.
-
-- Reworked the auto-reconnect logic so that applications never see
-  temporary database disconnects if possible.  Thanks to Rigel Di Scala
-  for pointing out this issue.
-
-- Improved the log messages explaining database connection failures.
-
-- Moved poll_invalidations to the common adapter base class, reducing the
-  amount of code to maintain.
-
-
-RelStorage 1.0
-
-- Added a utility for converting between storages called zodbconvert.
-
-
-RelStorage 1.0c1
-
-- The previous fix for non-ASCII characters was incorrect.  Now transaction
-  metadata is stored as raw bytes.  A schema migration is required; see
-  notes/migrate-1.0-beta.txt.
-
-- Integrated setuptools and made an egg.
-
-
-RelStorage 1.0 beta
-
-- Renamed to reflect expanding database support.
-
-- Added support for Oracle 10g.
-
-- Major overhaul with many scalability and reliability improvements,
-  particularly in the area of packing.
-
-- Moved to svn.zope.org and switched to ZPL 2.1.
-
-- Made two-phase commit optional in both Oracle and PostgreSQL.  They
-  both use commit_lock in such a way that the commit is not likely to
-  fail in the second phase.
-
-- Switched most database transaction isolation levels from serializable
-  to read committed.  It turns out that commit_lock already provides
-  the serializability guarantees we need, so it is safe to take advantage
-  of the potential speed gains.  The one major exception is the load
-  connection, which requires an unchanging view of the database.
-
-- Stored objects are now buffered in a database table rather than a file.
-
-- Stopped using the LISTEN and NOTIFY statements in PostgreSQL since
-  they are not strictly transactional in the sense we require.
-
-- Started using a prepared statement in PostgreSQL for getting the
-  newest transaction ID quickly.
-
-- Removed the code in the Oracle adapter for retrying connection attempts.
-  (It is better to just reconfigure Oracle.)
-
-- Added support for MySQL 5.0.
-
-- Added the poll_interval option.  It reduces the frequency of database
-  polls, but it also increases the potential for conflict errors on
-  servers with high write volume.
-
-- Implemented the storage iterator protocol, making it possible to copy
-  transactions to and from FileStorage and other RelStorage instances.
-
-- Fixed a bug that caused OIDs to be reused after importing transactions.
-  Added a corresponding test.
-
-- Made it possible to disable garbage collection during packing.
-  Exposed the option in zope.conf.
-
-- Valery Suhomlinov discovered a problem with non-ASCII data in transaction
-  metadata.  The problem has been fixed for all supported databases.
-
-
-PGStorage 0.4
-
-- Began using the PostgreSQL LISTEN and NOTIFY statements as a shortcut
-  for invalidation polling.
-
-- Removed the commit_order code.  The commit_order idea was intended to
-  allow concurrent commits, but that idea is a little too ambitious while
-  other more important ideas are being tested.  Something like it may
-  come later.
-
-- Improved connection management: only one database connection is
-  held continuously open per storage instance.
-
-- Reconnect to the database automatically.
-
-- Removed test mode.
-
-- Switched from using a ZODB.Connection subclass to a ZODB patch.  The
-  Connection class changes in subtle ways too often to subclass reliably;
-  a patch is much safer.
-
-- PostgreSQL 8.1 is now a dependency because PGStorage uses two phase commit.
-
-- Fixed an undo bug.  Symptom: attempting to examine the undo log revealed
-  broken pickles.  Cause: the extension field was not being wrapped in
-  psycopg2.Binary upon insert.  Solution: used psycopg2.Binary.
-  Unfortunately, this doesn't fix existing transactions people have
-  committed.  If anyone has any data to keep, fixing the old transactions
-  should be easy.
-
-- Moved from a private CVS repository to Sourceforge.
-  See http://pgstorage.sourceforge.net .  Also switched to the MIT license.
-
-- David Pratt added a basic getSize() implementation so that the Zope
-  management interface displays an estimate of the size of the database.
-
-- Turned PGStorage into a top-level package.  Python generally makes
-  top-level packages easier to install.
-
-
-PGStorage 0.3
-
-- Made compatible with Zope 3, although an undo bug apparently remains.
-
-
-PGStorage 0.2
-
-- Fixed concurrent commits, which were generating deadlocks.  Fixed by
-  adding a special table, "commit_lock", which is used for
-  synchronizing increments of commit_seq (but only at final commit.)
-  If you are upgrading from version 0.1, you need to change your
-  database using the 'psql' prompt:
-
-    create table commit_lock ();
-
-- Added speed tests and an OpenDocument spreadsheet comparing
-  FileStorage / ZEO with PGStorage.  PGStorage wins at reading objects
-  and writing a lot of small transactions, while FileStorage / ZEO
-  wins at writing big transactions.  Interestingly, they tie when
-  writing a RAM disk.
-

Copied: relstorage/trunk/CHANGES.txt (from rev 88795, relstorage/branches/1.1/CHANGES.txt)
===================================================================
--- relstorage/trunk/CHANGES.txt	                        (rev 0)
+++ relstorage/trunk/CHANGES.txt	2008-07-28 17:26:36 UTC (rev 88881)
@@ -0,0 +1,202 @@
+
+Next Release
+
+- Normalized poll-invalidation patches as Solaris' patch command would not
+  accept the current format. The patches now apply with:
+  patch -d lib/python/ZODB -p0 < poll-invalidation-1-zodb-3-X-X.patch
+
+- In MySQL, use DROP TABLE IF EXISTS instead of TRUNCATE to clear 'temp_store'
+  because:
+  - TRUNCATE has one page of caveats in the MySQL documentation.
+  - TEMPORARY TABLEs have half a page of caveats when it comes to
+    replication.
+  - The end result is that 'temp_store' may not exist on the
+    replication slave at the exact same time(s) it exists on the
+    master.
+
+RelStorage 1.1c1
+
+- Added optional memcache integration.  This is useful when the connection
+  to the relational database has high latency.
+
+- Made it possible to set the pack and memcache options in zope.conf.
+
+- Log more info when a KeyError occurs within RelStorage.
+
+
+RelStorage 1.1b2
+
+- Made the MySQL locks database-specific rather than server-wide.  This is
+  important for multi-database configurations.
+
+- In the PostgreSQL adapter, made the pack lock fall back to table locking
+  rather than advisory locks for PostgreSQL 8.1.
+
+- Changed a query for following object references (used during packing)
+  to work around a MySQL performance bug.  Thanks to Anton Stonor for
+  discovering this.
+
+
+RelStorage 1.1b1
+
+- Fixed the use of setup.py without setuptools.  Thanks to Chris Withers.
+
+- Fixed type coercion of the transaction extension field.  This fixes
+  an issue with converting databases.  Thanks to Kevin Smith for
+  discovering this.
+
+- Added logging to the pack code to help diagnose performance issues.
+
+- Additions to the object_ref table are now periodically committed
+  during pre_pack so that the work is not lost if pre_pack fails.
+
+- Modified the pack code to pack one transaction at a time and
+  release the commit lock frequently.  This should help large pack
+  operations.
+
+- Fixed buildout-based installation of the zodbconvert script.  Thanks to
+  Jim Fulton.
+
+
+RelStorage 1.0.1
+
+- The speedtest script failed if run on a test database that has no tables.
+  Now the script creates the tables if needed.  Thanks to Flavio Coelho
+  for discovering this.
+
+- Reworked the auto-reconnect logic so that applications never see
+  temporary database disconnects if possible.  Thanks to Rigel Di Scala
+  for pointing out this issue.
+
+- Improved the log messages explaining database connection failures.
+
+- Moved poll_invalidations to the common adapter base class, reducing the
+  amount of code to maintain.
+
+
+RelStorage 1.0
+
+- Added a utility for converting between storages called zodbconvert.
+
+
+RelStorage 1.0c1
+
+- The previous fix for non-ASCII characters was incorrect.  Now transaction
+  metadata is stored as raw bytes.  A schema migration is required; see
+  notes/migrate-1.0-beta.txt.
+
+- Integrated setuptools and made an egg.
+
+
+RelStorage 1.0 beta
+
+- Renamed to reflect expanding database support.
+
+- Added support for Oracle 10g.
+
+- Major overhaul with many scalability and reliability improvements,
+  particularly in the area of packing.
+
+- Moved to svn.zope.org and switched to ZPL 2.1.
+
+- Made two-phase commit optional in both Oracle and PostgreSQL.  They
+  both use commit_lock in such a way that the commit is not likely to
+  fail in the second phase.
+
+- Switched most database transaction isolation levels from serializable
+  to read committed.  It turns out that commit_lock already provides
+  the serializability guarantees we need, so it is safe to take advantage
+  of the potential speed gains.  The one major exception is the load
+  connection, which requires an unchanging view of the database.
+
+- Stored objects are now buffered in a database table rather than a file.
+
+- Stopped using the LISTEN and NOTIFY statements in PostgreSQL since
+  they are not strictly transactional in the sense we require.
+
+- Started using a prepared statement in PostgreSQL for getting the
+  newest transaction ID quickly.
+
+- Removed the code in the Oracle adapter for retrying connection attempts.
+  (It is better to just reconfigure Oracle.)
+
+- Added support for MySQL 5.0.
+
+- Added the poll_interval option.  It reduces the frequency of database
+  polls, but it also increases the potential for conflict errors on
+  servers with high write volume.
+
+- Implemented the storage iterator protocol, making it possible to copy
+  transactions to and from FileStorage and other RelStorage instances.
+
+- Fixed a bug that caused OIDs to be reused after importing transactions.
+  Added a corresponding test.
+
+- Made it possible to disable garbage collection during packing.
+  Exposed the option in zope.conf.
+
+- Valery Suhomlinov discovered a problem with non-ASCII data in transaction
+  metadata.  The problem has been fixed for all supported databases.
+
+
+PGStorage 0.4
+
+- Began using the PostgreSQL LISTEN and NOTIFY statements as a shortcut
+  for invalidation polling.
+
+- Removed the commit_order code.  The commit_order idea was intended to
+  allow concurrent commits, but that idea is a little too ambitious while
+  other more important ideas are being tested.  Something like it may
+  come later.
+
+- Improved connection management: only one database connection is
+  held continuously open per storage instance.
+
+- Reconnect to the database automatically.
+
+- Removed test mode.
+
+- Switched from using a ZODB.Connection subclass to a ZODB patch.  The
+  Connection class changes in subtle ways too often to subclass reliably;
+  a patch is much safer.
+
+- PostgreSQL 8.1 is now a dependency because PGStorage uses two phase commit.
+
+- Fixed an undo bug.  Symptom: attempting to examine the undo log revealed
+  broken pickles.  Cause: the extension field was not being wrapped in
+  psycopg2.Binary upon insert.  Solution: used psycopg2.Binary.
+  Unfortunately, this doesn't fix existing transactions people have
+  committed.  If anyone has any data to keep, fixing the old transactions
+  should be easy.
+
+- Moved from a private CVS repository to Sourceforge.
+  See http://pgstorage.sourceforge.net .  Also switched to the MIT license.
+
+- David Pratt added a basic getSize() implementation so that the Zope
+  management interface displays an estimate of the size of the database.
+
+- Turned PGStorage into a top-level package.  Python generally makes
+  top-level packages easier to install.
+
+
+PGStorage 0.3
+
+- Made compatible with Zope 3, although an undo bug apparently remains.
+
+
+PGStorage 0.2
+
+- Fixed concurrent commits, which were generating deadlocks.  Fixed by
+  adding a special table, "commit_lock", which is used for
+  synchronizing increments of commit_seq (but only at final commit.)
+  If you are upgrading from version 0.1, you need to change your
+  database using the 'psql' prompt:
+
+    create table commit_lock ();
+
+- Added speed tests and an OpenDocument spreadsheet comparing
+  FileStorage / ZEO with PGStorage.  PGStorage wins at reading objects
+  and writing a lot of small transactions, while FileStorage / ZEO
+  wins at writing big transactions.  Interestingly, they tie when
+  writing to a RAM disk.
+

Deleted: relstorage/trunk/MANIFEST
===================================================================
--- relstorage/trunk/MANIFEST	2008-07-28 16:58:03 UTC (rev 88880)
+++ relstorage/trunk/MANIFEST	2008-07-28 17:26:36 UTC (rev 88881)
@@ -1,21 +0,0 @@
-CHANGELOG.txt
-MANIFEST.in
-README.txt
-poll-invalidation-1-zodb-3-7-1.patch
-poll-invalidation-1-zodb-3-8-0.patch
-setup.py
-relstorage/__init__.py
-relstorage/component.xml
-relstorage/config.py
-relstorage/relstorage.py
-relstorage/adapters/__init__.py
-relstorage/adapters/common.py
-relstorage/adapters/mysql.py
-relstorage/adapters/oracle.py
-relstorage/adapters/postgresql.py
-relstorage/tests/__init__.py
-relstorage/tests/reltestbase.py
-relstorage/tests/speedtest.py
-relstorage/tests/testmysql.py
-relstorage/tests/testoracle.py
-relstorage/tests/testpostgresql.py

Modified: relstorage/trunk/README.txt
===================================================================
--- relstorage/trunk/README.txt	2008-07-28 16:58:03 UTC (rev 88880)
+++ relstorage/trunk/README.txt	2008-07-28 17:26:36 UTC (rev 88881)
@@ -139,7 +139,7 @@
     .. _migrate-1.0-beta.txt:
 http://svn.zope.org/*checkout*/relstorage/trunk/notes/migrate-1.0-beta.txt
 
-  To migrate from version 1.0.1 to version 1.1b1, see:
+  To migrate from version 1.0.1 to version 1.1, see:
 
     migrate-1.0.1.txt_
 
@@ -150,37 +150,85 @@
 Optional Features
 =================
 
-  poll-interval
+    Specify these options in zope.conf.
 
-    This option is useful if you need to reduce database traffic.  If set,
-RelStorage will poll the database for changes less often.  A setting of 1 to 5
-seconds should be sufficient for most systems.  Fractional seconds are allowed.
+    poll-interval
 
-    While this setting should not affect database integrity, it increases the
-probability of basing transactions on stale data, leading to conflicts.  Thus a
-nonzero setting can hurt the performance of servers with high write volume.
+        Defer polling the database for the specified maximum time interval.
+        Set to 0 (the default) to always poll.  Fractional seconds are
+        allowed.
 
-    To enable this feature, add a line similar to "poll-interval 2" inside a
-<relstorage> section of zope.conf.
+        Use this to lighten the database load on servers with high read
+        volume and low write volume.  A setting of 1-5 seconds is sufficient
+        for most systems.
 
-  pack-gc
+        While this setting should not affect database integrity,
+        it increases the probability of basing transactions on stale data,
+        leading to conflicts.  Thus a nonzero setting can hurt
+        the performance of servers with high write volume.
 
-    If pack-gc is false, pack operations do not perform garbage collection. 
-Garbage collection is enabled by default.
+    pack-gc
 
-    If garbage collection is disabled, pack operations keep at least one
-revision of every object.  With garbage collection disabled, the pack code does
-not need to follow object references, making packing conceivably much faster. 
-However, some of that benefit may be lost due to an ever increasing number of
-unused objects.
+        If pack-gc is false, pack operations do not perform
+        garbage collection.  Garbage collection is enabled by default.
 
-    Disabling garbage collection is also a hack that ensures inter-database
-references never break.
+        If garbage collection is disabled, pack operations keep at least one
+        revision of every object.  With garbage collection disabled, the
+        pack code does not need to follow object references, making
+        packing conceivably much faster.  However, some of that benefit
+        may be lost due to an ever increasing number of unused objects.
 
-    To disable garbage collection, add the line "pack-gc no" inside a
-<relstorage> section of zope.conf.
+        Disabling garbage collection is also a hack that ensures
+        inter-database references never break.
 
+    pack-batch-timeout
 
+        Packing occurs in batches of transactions; this specifies the
+        timeout in seconds for each batch.  Note that some database
+        configurations have unpredictable I/O performance
+        and might stall much longer than the timeout.
+        The default timeout is 5.0 seconds.
+
+    pack-duty-cycle
+
+        After each batch, the pack code pauses for a time to
+        allow concurrent transactions to commit.  The pack-duty-cycle
+        specifies what fraction of time should be spent on packing.
+        For example, if the duty cycle is 0.75, then 75% of the time
+        will be spent packing: a 6 second pack batch
+        will be followed by a 2 second delay.  The duty cycle should
+        be greater than 0.0 and less than or equal to 1.0.  Specify
+        1.0 for no delay between batches.
+
+        The default is 0.5.  Raise it to finish packing faster; lower it
+        to reduce the effect of packing on transaction commit performance.
+
+    pack-max-delay
+
+        This specifies a maximum delay between pack batches.  Sometimes
+        the database takes an extra long time to finish a pack batch; at
+        those times it is useful to cap the delay imposed by the
+        pack-duty-cycle.  The default is 20 seconds.
+
+    cache-servers
+
+        Specifies a list of memcache servers.  Enabling memcache integration
+        is useful if the connection to the relational database has high
+        latency and the connection to memcache has significantly lower
+        latency.  On the other hand, if the connection to the relational
+        database already has low latency, memcache integration may actually
+        hurt overall performance.
+
+        Provide a list of host:port pairs, separated by whitespace.
+        "127.0.0.1:11211" is a common setting.  The default is to disable
+        memcache integration.
+
+    cache-module-name
+
+        Specifies which Python memcache module to use.  The default is
+        "memcache", a pure Python module.  An alternative module is
+        "cmemcache".  This setting has no effect unless cache-servers is set.
+
 Development
 ===========
 

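The README text above tells readers to specify these options in zope.conf but stops short of showing a full section. A minimal sketch follows; the option names come from the README and component.xml in this commit, while the %import line, the <zodb_db> wrapper, and the nested <mysql> adapter keys (db, user, passwd) are illustrative assumptions rather than part of this change:

    %import relstorage

    <zodb_db main>
      mount-point /
      <relstorage>
        poll-interval 2
        pack-gc no
        pack-duty-cycle 0.75
        cache-servers 127.0.0.1:11211
        cache-module-name memcache
        <mysql>
          db zodb
          user zodbuser
          passwd CHANGEME
        </mysql>
      </relstorage>
    </zodb_db>

Only the options a deployment actually needs have to be listed; anything omitted falls back to the defaults described above.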
Modified: relstorage/trunk/notes/migrate-1.0.1.txt
===================================================================
--- relstorage/trunk/notes/migrate-1.0.1.txt	2008-07-28 16:58:03 UTC (rev 88880)
+++ relstorage/trunk/notes/migrate-1.0.1.txt	2008-07-28 17:26:36 UTC (rev 88881)
@@ -1,5 +1,5 @@
 
-Migrating from version 1.0.1 to version 1.1b1
+Migrating from RelStorage version 1.0.1 to version 1.1
 
 PostgreSQL:
 

Modified: relstorage/trunk/poll-invalidation-1-zodb-3-7-1.patch
===================================================================
--- relstorage/trunk/poll-invalidation-1-zodb-3-7-1.patch	2008-07-28 16:58:03 UTC (rev 88880)
+++ relstorage/trunk/poll-invalidation-1-zodb-3-7-1.patch	2008-07-28 17:26:36 UTC (rev 88881)
@@ -1,7 +1,8 @@
-diff -r 34747fbd09ec Connection.py
---- a/Connection.py	Tue Nov 20 21:57:31 2007 -0700
-+++ b/Connection.py	Fri Jan 11 21:19:00 2008 -0700
-@@ -75,8 +75,14 @@ class Connection(ExportImport, object):
+Index: Connection.py
+===================================================================
+--- Connection.py	(revision 87280)
++++ Connection.py	(working copy)
+@@ -75,8 +75,14 @@
          """Create a new Connection."""
  
          self._db = db
@@ -18,7 +19,7 @@
          self._savepoint_storage = None
  
          self.transaction_manager = self._synch = self._mvcc = None
-@@ -170,6 +176,12 @@ class Connection(ExportImport, object):
+@@ -170,6 +176,12 @@
          # Multi-database support
          self.connections = {self._db.database_name: self}
  
@@ -31,7 +32,7 @@
  
      def add(self, obj):
          """Add a new object 'obj' to the database and assign it an oid."""
-@@ -267,6 +279,11 @@ class Connection(ExportImport, object):
+@@ -267,6 +279,11 @@
              self.transaction_manager.unregisterSynch(self)
              self._synch = None
  
@@ -43,7 +44,7 @@
          if primary:
              for connection in self.connections.values():
                  if connection is not self:
-@@ -295,6 +312,10 @@ class Connection(ExportImport, object):
+@@ -295,6 +312,10 @@
  
      def invalidate(self, tid, oids):
          """Notify the Connection that transaction 'tid' invalidated oids."""
@@ -54,7 +55,7 @@
          self._inv_lock.acquire()
          try:
              if self._txn_time is None:
-@@ -438,8 +459,23 @@ class Connection(ExportImport, object):
+@@ -438,8 +459,23 @@
          self._registered_objects = []
          self._creating.clear()
  
@@ -78,10 +79,11 @@
          self._inv_lock.acquire()
          try:
              # Non-ghostifiable objects may need to read when they are
-diff -r 34747fbd09ec DB.py
---- a/DB.py	Tue Nov 20 21:57:31 2007 -0700
-+++ b/DB.py	Wed Nov 28 18:33:12 2007 -0700
-@@ -260,6 +260,10 @@ class DB(object):
+Index: DB.py
+===================================================================
+--- DB.py	(revision 87280)
++++ DB.py	(working copy)
+@@ -260,6 +260,10 @@
              storage.store(z64, None, file.getvalue(), '', t)
              storage.tpc_vote(t)
              storage.tpc_finish(t)

Modified: relstorage/trunk/poll-invalidation-1-zodb-3-8-0.patch
===================================================================
--- relstorage/trunk/poll-invalidation-1-zodb-3-8-0.patch	2008-07-28 16:58:03 UTC (rev 88880)
+++ relstorage/trunk/poll-invalidation-1-zodb-3-8-0.patch	2008-07-28 17:26:36 UTC (rev 88881)
@@ -1,7 +1,8 @@
-diff -r 72bf2fd94b66 src/ZODB/Connection.py
---- a/src/ZODB/Connection.py	Wed Jan 30 23:23:05 2008 -0700
-+++ b/src/ZODB/Connection.py	Wed Jan 30 23:38:51 2008 -0700
-@@ -90,8 +90,15 @@ class Connection(ExportImport, object):
+Index: Connection.py
+===================================================================
+--- Connection.py	(revision 87666)
++++ Connection.py	(working copy)
+@@ -90,8 +90,15 @@
          self.connections = {self._db.database_name: self}
  
          self._version = version
@@ -19,7 +20,7 @@
          self._savepoint_storage = None
  
          # Do we need to join a txn manager?
-@@ -151,6 +158,12 @@ class Connection(ExportImport, object):
+@@ -151,6 +158,12 @@
          # in the cache on abort and in other connections on finish.
          self._modified = []
  
@@ -32,7 +33,7 @@
  
          # _invalidated queues invalidate messages delivered from the DB
          # _inv_lock prevents one thread from modifying the set while
-@@ -297,6 +310,11 @@ class Connection(ExportImport, object):
+@@ -297,6 +310,11 @@
          if self._opened:
              self.transaction_manager.unregisterSynch(self)
  
@@ -44,7 +45,7 @@
          if primary:
              for connection in self.connections.values():
                  if connection is not self:
-@@ -328,6 +346,10 @@ class Connection(ExportImport, object):
+@@ -328,6 +346,10 @@
  
      def invalidate(self, tid, oids):
          """Notify the Connection that transaction 'tid' invalidated oids."""
@@ -55,7 +56,7 @@
          self._inv_lock.acquire()
          try:
              if self._txn_time is None:
-@@ -469,8 +491,23 @@ class Connection(ExportImport, object):
+@@ -469,8 +491,23 @@
          self._registered_objects = []
          self._creating.clear()
  
@@ -79,10 +80,11 @@
          self._inv_lock.acquire()
          try:
              # Non-ghostifiable objects may need to read when they are
-diff -r 72bf2fd94b66 src/ZODB/DB.py
---- a/src/ZODB/DB.py	Wed Jan 30 23:23:05 2008 -0700
-+++ b/src/ZODB/DB.py	Wed Jan 30 23:38:51 2008 -0700
-@@ -284,6 +284,10 @@ class DB(object):
+Index: DB.py
+===================================================================
+--- DB.py	(revision 87666)
++++ DB.py	(working copy)
+@@ -284,6 +284,10 @@
              storage.store(z64, None, file.getvalue(), '', t)
              storage.tpc_vote(t)
              storage.tpc_finish(t)

Modified: relstorage/trunk/relstorage/adapters/common.py
===================================================================
--- relstorage/trunk/relstorage/adapters/common.py	2008-07-28 16:58:03 UTC (rev 88880)
+++ relstorage/trunk/relstorage/adapters/common.py	2008-07-28 17:26:36 UTC (rev 88881)
@@ -98,6 +98,16 @@
             WHERE tid = %(tid)s
             LIMIT 1
             """,
+
+        'prepack_follow_child_refs': """
+            UPDATE pack_object SET keep = %(TRUE)s
+            WHERE keep = %(FALSE)s
+                AND zoid IN (
+                    SELECT DISTINCT to_zoid
+                    FROM object_ref
+                        JOIN temp_pack_visit USING (zoid)
+                )
+            """,
     }
 
 
@@ -382,7 +392,7 @@
         return self.open()
 
 
-    def pre_pack(self, pack_tid, get_references, gc):
+    def pre_pack(self, pack_tid, get_references, options):
         """Decide what to pack.
 
         tid specifies the most recent transaction to pack.
@@ -390,15 +400,16 @@
         get_references is a function that accepts a pickled state and
         returns a set of OIDs that state refers to.
 
-        gc is a boolean indicating whether to run garbage collection.
-        If gc is false, at least one revision of every object is kept,
-        even if nothing refers to it.  Packing with gc disabled can be
+        options is an instance of relstorage.Options.
+        The options.pack_gc flag indicates whether to run garbage collection.
+        If pack_gc is false, at least one revision of every object is kept,
+        even if nothing refers to it.  Packing with pack_gc disabled can be
         much faster.
         """
         conn, cursor = self.open_for_pre_pack()
         try:
             try:
-                if gc:
+                if options.pack_gc:
                     log.info("pre_pack: start with gc enabled")
                     self._pre_pack_with_gc(
                         conn, cursor, pack_tid, get_references)
@@ -413,7 +424,8 @@
                 self._run_script_stmt(cursor, stmt)
                 to_remove = 0
 
-                if gc:
+                if options.pack_gc:
+                    # Pack objects with the keep flag set to false.
                     stmt = """
                     INSERT INTO pack_state (tid, zoid)
                     SELECT tid, zoid
@@ -427,6 +439,7 @@
                         pack_tid})
                     to_remove += cursor.rowcount
 
+                # Pack object states with the keep flag set to true.
                 stmt = """
                 INSERT INTO pack_state (tid, zoid)
                 SELECT tid, zoid
@@ -490,6 +503,10 @@
     def _pre_pack_with_gc(self, conn, cursor, pack_tid, get_references):
         """Determine what to pack, with garbage collection.
         """
+        stmt = self._scripts['create_temp_pack_visit']
+        if stmt:
+            self._run_script(cursor, stmt)
+
         log.info("pre_pack: following references after the pack point")
         # Fill object_ref with references from object states
         # in transactions that will not be packed.
@@ -513,33 +530,27 @@
 
         -- Keep objects that have been revised since pack_tid.
         UPDATE pack_object SET keep = %(TRUE)s
-        WHERE keep = %(FALSE)s
-            AND zoid IN (
-                SELECT zoid
-                FROM current_object
-                WHERE tid > %(pack_tid)s
-            );
+        WHERE zoid IN (
+            SELECT zoid
+            FROM current_object
+            WHERE tid > %(pack_tid)s
+        );
 
         -- Keep objects that are still referenced by object states in
         -- transactions that will not be packed.
         UPDATE pack_object SET keep = %(TRUE)s
-        WHERE keep = %(FALSE)s
-            AND zoid IN (
-                SELECT to_zoid
-                FROM object_ref
-                WHERE tid > %(pack_tid)s
-            );
+        WHERE zoid IN (
+            SELECT to_zoid
+            FROM object_ref
+            WHERE tid > %(pack_tid)s
+        );
         """
         self._run_script(cursor, stmt, {'pack_tid': pack_tid})
 
-        stmt = self._scripts['create_temp_pack_visit']
-        if stmt:
-            self._run_script(cursor, stmt)
-
         # Each of the packable objects to be kept might
         # refer to other objects.  If some of those references
-        # include objects currently set to be removed, keep
-        # those objects as well.  Do this
+        # include objects currently set to be removed, mark
+        # the referenced objects to be kept as well.  Do this
         # repeatedly until all references have been satisfied.
         pass_num = 1
         while True:
@@ -588,16 +599,8 @@
 
             # Visit the children of all parent objects that were
             # just visited.
-            stmt = """
-            UPDATE pack_object SET keep = %(TRUE)s
-            WHERE keep = %(FALSE)s
-                AND zoid IN (
-                    SELECT DISTINCT to_zoid
-                    FROM object_ref
-                        JOIN temp_pack_visit USING (zoid)
-                )
-            """
-            self._run_script_stmt(cursor, stmt)
+            stmt = self._scripts['prepack_follow_child_refs']
+            self._run_script(cursor, stmt)
             found_count = cursor.rowcount
 
             log.debug("pre_pack: found %d more referenced object(s) in "
@@ -724,8 +727,7 @@
         pass
 
 
-    def pack(self, pack_tid, batch_timeout=5.0, delay_ratio=1.0,
-            max_delay=20.0):
+    def pack(self, pack_tid, options):
         """Pack.  Requires populated pack tables."""
 
         # Read committed mode is sufficient.
@@ -757,17 +759,21 @@
                 for tid, packed, has_removable in tid_rows:
                     self._pack_transaction(
                         cursor, pack_tid, tid, packed, has_removable)
-                    if time.time() >= start + batch_timeout:
+                    if time.time() >= start + options.pack_batch_timeout:
                         # commit the work done so far and release the
                         # commit lock for a short time
                         conn.commit()
                         self._release_commit_lock(cursor)
-                        # Add a delay.
+                        # Add a delay based on the configured duty cycle.
                         elapsed = time.time() - start
-                        delay = min(max_delay, elapsed * delay_ratio)
-                        if delay > 0:
-                            log.debug('pack: sleeping %.4g second(s)', delay)
-                            time.sleep(delay)
+                        duty_cycle = options.pack_duty_cycle
+                        if duty_cycle > 0.0 and duty_cycle < 1.0:
+                            delay = min(options.pack_max_delay,
+                                elapsed * (1.0 / duty_cycle - 1.0))
+                            if delay > 0:
+                                log.debug('pack: sleeping %.4g second(s)',
+                                    delay)
+                                time.sleep(delay)
                         self._hold_commit_lock(cursor)
                         start = time.time()
 
@@ -916,8 +922,8 @@
 
         # Get the list of changed OIDs and return it.
         stmt = """
-        SELECT DISTINCT zoid
-        FROM object_state
+        SELECT zoid
+        FROM current_object
         WHERE tid > %(tid)s
         """
         if ignore_tid is None:

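The pack loop in common.py now derives its inter-batch sleep from options.pack_duty_cycle instead of the old delay_ratio argument. A minimal standalone sketch of that arithmetic (function name assumed, defaults taken from the Options class added in this revision) shows how a 0.75 duty cycle turns a 6 second batch into a 2 second pause, matching the README example:

    def pack_delay(elapsed, pack_duty_cycle=0.5, pack_max_delay=20.0):
        """Return how long to sleep after a pack batch that took `elapsed` seconds."""
        if 0.0 < pack_duty_cycle < 1.0:
            return min(pack_max_delay, elapsed * (1.0 / pack_duty_cycle - 1.0))
        return 0.0  # a duty cycle of 1.0 (or out of range) means no delay

    print(pack_delay(6.0, pack_duty_cycle=0.75))   # 2.0
    print(pack_delay(6.0))                          # 6.0 with the default 0.5 cycle
    print(pack_delay(60.0))                         # capped at 20.0 by pack_max_delay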
Modified: relstorage/trunk/relstorage/adapters/mysql.py
===================================================================
--- relstorage/trunk/relstorage/adapters/mysql.py	2008-07-28 16:58:03 UTC (rev 88880)
+++ relstorage/trunk/relstorage/adapters/mysql.py	2008-07-28 17:26:36 UTC (rev 88881)
@@ -63,6 +63,22 @@
 class MySQLAdapter(Adapter):
     """MySQL adapter for RelStorage."""
 
+    _scripts = Adapter._scripts.copy()
+    # work around a MySQL performance bug
+    # See: http://mail.zope.org/pipermail/zodb-dev/2008-May/011880.html
+    #      http://bugs.mysql.com/bug.php?id=28257
+    _scripts['prepack_follow_child_refs'] = """
+    UPDATE pack_object SET keep = %(TRUE)s
+    WHERE keep = %(FALSE)s
+        AND zoid IN (
+            SELECT * FROM (
+                SELECT DISTINCT to_zoid
+                FROM object_ref
+                    JOIN temp_pack_visit USING (zoid)
+            ) AS child_zoids
+        )
+    """
+
     def __init__(self, **params):
         self._params = params.copy()
         self._params['use_unicode'] = True
@@ -264,6 +280,21 @@
         # do later
         return 0
 
+    def get_current_tid(self, cursor, oid):
+        """Returns the current integer tid for an object.
+
+        oid is an integer.  Returns None if object does not exist.
+        """
+        cursor.execute("""
+        SELECT tid
+        FROM current_object
+        WHERE zoid = %s
+        """, (oid,))
+        if cursor.rowcount:
+            assert cursor.rowcount == 1
+            return cursor.fetchone()[0]
+        return None
+
     def load_current(self, cursor, oid):
         """Returns the current pickle and integer tid for an object.
 
@@ -367,11 +398,19 @@
             self.close(conn, cursor)
             raise
 
+    def _restart_temp_table(self, cursor):
+        """Restart the temporary table for storing objects"""
+        stmt = """
+        DROP TEMPORARY TABLE IF EXISTS temp_store
+        """
+        cursor.execute(stmt)
+        self._make_temp_table(cursor)
+
     def restart_store(self, cursor):
         """Reuse a store connection."""
         try:
             cursor.connection.rollback()
-            cursor.execute("TRUNCATE temp_store")
+            self._restart_temp_table(cursor)
         except (MySQLdb.OperationalError, MySQLdb.InterfaceError), e:
             raise StorageError(e)
 

Modified: relstorage/trunk/relstorage/adapters/oracle.py
===================================================================
--- relstorage/trunk/relstorage/adapters/oracle.py	2008-07-28 16:58:03 UTC (rev 88880)
+++ relstorage/trunk/relstorage/adapters/oracle.py	2008-07-28 17:26:36 UTC (rev 88881)
@@ -68,6 +68,9 @@
             FROM object_state
             WHERE tid = %(tid)s
             """,
+
+        'prepack_follow_child_refs':
+            Adapter._scripts['prepack_follow_child_refs'],
     }
 
     def __init__(self, user, password, dsn, twophase=False, arraysize=64):
@@ -348,6 +351,20 @@
         # May not be possible without access to the dba_* objects
         return 0
 
+    def get_current_tid(self, cursor, oid):
+        """Returns the current integer tid for an object.
+
+        oid is an integer.  Returns None if object does not exist.
+        """
+        cursor.execute("""
+        SELECT tid
+        FROM current_object
+        WHERE zoid = :1
+        """, (oid,))
+        for (tid,) in cursor:
+            return tid
+        return None
+
     def load_current(self, cursor, oid):
         """Returns the current pickle and integer tid for an object.
 

Modified: relstorage/trunk/relstorage/adapters/postgresql.py
===================================================================
--- relstorage/trunk/relstorage/adapters/postgresql.py	2008-07-28 16:58:03 UTC (rev 88880)
+++ relstorage/trunk/relstorage/adapters/postgresql.py	2008-07-28 17:26:36 UTC (rev 88881)
@@ -263,6 +263,21 @@
         finally:
             self.close(conn, cursor)
 
+    def get_current_tid(self, cursor, oid):
+        """Returns the current integer tid for an object.
+
+        oid is an integer.  Returns None if object does not exist.
+        """
+        cursor.execute("""
+        SELECT tid
+        FROM current_object
+        WHERE zoid = %s
+        """, (oid,))
+        if cursor.rowcount:
+            assert cursor.rowcount == 1
+            return cursor.fetchone()[0]
+        return None
+
     def load_current(self, cursor, oid):
         """Returns the current pickle and integer tid for an object.
 

Modified: relstorage/trunk/relstorage/component.xml
===================================================================
--- relstorage/trunk/relstorage/component.xml	2008-07-28 16:58:03 UTC (rev 88880)
+++ relstorage/trunk/relstorage/component.xml	2008-07-28 17:26:36 UTC (rev 88881)
@@ -55,6 +55,59 @@
         inter-database references never break.
       </description>
     </key>
+    <key name="pack-batch-timeout" datatype="float" required="no">
+      <description>
+        Packing occurs in batches of transactions; this specifies the
+        timeout in seconds for each batch.  Note that some database
+        configurations have unpredictable I/O performance
+        and might stall much longer than the timeout.
+        The default timeout is 5.0 seconds.
+      </description>
+    </key>
+    <key name="pack-duty-cycle" datatype="float" required="no">
+      <description>
+        After each batch, the pack code pauses for a time to
+        allow concurrent transactions to commit.  The pack-duty-cycle
+        specifies what fraction of time should be spent on packing.
+        For example, if the duty cycle is 0.75, then 75% of the time
+        will be spent packing: a 6 second pack batch
+        will be followed by a 2 second delay.  The duty cycle should
+        be greater than 0.0 and less than or equal to 1.0.  Specify
+        1.0 for no delay between batches.
+
+        The default is 0.5.  Raise it to finish packing faster; lower it
+        to reduce the effect of packing on transaction commit performance.
+      </description>
+    </key>
+    <key name="pack-max-delay" datatype="float" required="no">
+      <description>
+        This specifies a maximum delay between pack batches.  Sometimes
+        the database takes an extra long time to finish a pack batch; at
+        those times it is useful to cap the delay imposed by the
+        pack-duty-cycle.  The default is 20 seconds.
+      </description>
+    </key>
+    <key name="cache-servers" datatype="string" required="no">
+      <description>
+        Specifies a list of memcache servers.  Enabling memcache integration
+        is useful if the connection to the relational database has high
+        latency and the connection to memcache has significantly lower
+        latency.  On the other hand, if the connection to the relational
+        database already has low latency, memcache integration may actually
+        hurt overall performance.
+
+        Provide a list of host:port pairs, separated by whitespace.
+        "127.0.0.1:11211" is a common setting.  The default is to disable
+        memcache integration.
+      </description>
+    </key>
+    <key name="cache-module-name" datatype="string" required="no">
+      <description>
+        Specifies which Python memcache module to use.  The default is
+        "memcache", a pure Python module.  An alternative module is
+        "cmemcache".  This setting has no effect unless cache-servers is set.
+      </description>
+    </key>
   </sectiontype>
 
   <sectiontype name="postgresql" implements="relstorage.adapter"

Modified: relstorage/trunk/relstorage/config.py
===================================================================
--- relstorage/trunk/relstorage/config.py	2008-07-28 16:58:03 UTC (rev 88880)
+++ relstorage/trunk/relstorage/config.py	2008-07-28 17:26:36 UTC (rev 88881)
@@ -15,7 +15,7 @@
 
 from ZODB.config import BaseConfig
 
-from relstorage import RelStorage
+from relstorage import RelStorage, Options
 
 
 class RelStorageFactory(BaseConfig):
@@ -23,9 +23,13 @@
     def open(self):
         config = self.config
         adapter = config.adapter.open()
+        options = Options()
+        for key in options.__dict__.keys():
+            value = getattr(config, key, None)
+            if value is not None:
+                setattr(options, key, value)
         return RelStorage(adapter, name=config.name, create=config.create,
-            read_only=config.read_only, poll_interval=config.poll_interval,
-            pack_gc=config.pack_gc)
+            read_only=config.read_only, options=options)
 
 
 class PostgreSQLAdapterFactory(BaseConfig):

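The rewritten RelStorageFactory.open() replaces per-keyword plumbing with a generic copy loop over Options.__dict__. A reduced standalone sketch of that loop follows; the FakeSection class and its values are hypothetical stand-ins for the parsed <relstorage> section that ZConfig normally supplies, with dashed keys such as pack-duty-cycle exposed as underscored attributes:

    class Options:
        """Reduced copy of the defaults defined in relstorage.py in this revision."""
        def __init__(self):
            self.pack_gc = True
            self.pack_duty_cycle = 0.5
            self.cache_servers = ()

    class FakeSection:
        """Hypothetical parsed <relstorage> section; unset keys read as None."""
        pack_duty_cycle = 0.75
        cache_servers = '127.0.0.1:11211'

    config = FakeSection()
    options = Options()
    for key in options.__dict__.keys():
        value = getattr(config, key, None)
        if value is not None:
            setattr(options, key, value)

    # pack_duty_cycle and cache_servers are overridden; pack_gc keeps its default.
    assert options.pack_gc is True and options.pack_duty_cycle == 0.75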
Modified: relstorage/trunk/relstorage/relstorage.py
===================================================================
--- relstorage/trunk/relstorage/relstorage.py	2008-07-28 16:58:03 UTC (rev 88880)
+++ relstorage/trunk/relstorage/relstorage.py	2008-07-28 17:26:36 UTC (rev 88881)
@@ -43,15 +43,17 @@
     """Storage to a relational database, based on invalidation polling"""
 
     def __init__(self, adapter, name=None, create=True,
-            read_only=False, poll_interval=0, pack_gc=True):
+            read_only=False, options=None):
         if name is None:
             name = 'RelStorage on %s' % adapter.__class__.__name__
 
         self._adapter = adapter
         self._name = name
         self._is_read_only = read_only
-        self._poll_interval = poll_interval
-        self._pack_gc = pack_gc
+        if options is None:
+            options = Options()
+        self._options = options
+        self._cache_client = None
 
         if create:
             self._adapter.prepare_schema()
@@ -91,7 +93,18 @@
         # _max_new_oid is the highest OID provided by new_oid()
         self._max_new_oid = 0
 
+        # set _cache_client
+        if options.cache_servers:
+            module_name = options.cache_module_name
+            module = __import__(module_name, {}, {}, ['Client'])
+            servers = options.cache_servers
+            if isinstance(servers, basestring):
+                servers = servers.split()
+            self._cache_client = module.Client(servers)
+        else:
+            self._cache_client = None
 
+
     def _open_load_connection(self):
         """Open the load connection to the database.  Return nothing."""
         conn, cursor = self._adapter.open_for_load()
@@ -173,6 +186,9 @@
         """
         self._adapter.zap_all()
         self._rollback_load_connection()
+        cache = self._cache_client
+        if cache is not None:
+            cache.flush_all()
 
     def close(self):
         """Close the connections to the database."""
@@ -210,24 +226,89 @@
         """Return database size in bytes"""
         return self._adapter.get_db_size()
 
+    def _get_oid_cache_key(self, oid_int):
+        """Return the cache key for finding the current tid.
+
+        This is overridden by BoundRelStorage.
+        """
+        return None
+
+    def _log_keyerror(self, oid_int, reason):
+        """Log just before raising KeyError in load().
+
+        KeyErrors in load() are generally not supposed to happen,
+        so this is a good place to gather information.
+        """
+        cursor = self._load_cursor
+        adapter = self._adapter
+        msg = ["Storage KeyError on oid %d: %s" % (oid_int, reason)]
+        rows = adapter.iter_transactions(cursor)
+        row = None
+        for row in rows:
+            # just get the first row
+            break
+        if not row:
+            msg.append("No transactions exist")
+        else:
+            msg.append("Current transaction is %d" % row[0])
+
+        rows = adapter.iter_object_history(cursor, oid_int)
+        tids = []
+        for row in rows:
+            tids.append(row[0])
+            if len(tids) >= 10:
+                break
+        msg.append("Recent object tids: %s" % repr(tids))
+        log.warning('; '.join(msg))
+
     def load(self, oid, version):
+        oid_int = u64(oid)
+        cache = self._cache_client
+
         self._lock_acquire()
         try:
             if not self._load_transaction_open:
                 self._restart_load()
             cursor = self._load_cursor
-            state, tid_int = self._adapter.load_current(cursor, u64(oid))
+            if cache is None:
+                state, tid_int = self._adapter.load_current(cursor, oid_int)
+            else:
+                # get tid_int from the cache or the database
+                cachekey = self._get_oid_cache_key(oid_int)
+                if cachekey:
+                    tid_int = cache.get(cachekey)
+                if not cachekey or not tid_int:
+                    tid_int = self._adapter.get_current_tid(
+                        cursor, oid_int)
+                    if cachekey and tid_int is not None:
+                        cache.set(cachekey, tid_int)
+                if tid_int is None:
+                    self._log_keyerror(oid_int, "no tid found(1)")
+                    raise KeyError(oid)
+
+                # get state from the cache or the database
+                cachekey = 'state:%d:%d' % (oid_int, tid_int)
+                state = cache.get(cachekey)
+                if not state:
+                    state = self._adapter.load_revision(
+                        cursor, oid_int, tid_int)
+                    if state:
+                        state = str(state)
+                        cache.set(cachekey, state)
         finally:
             self._lock_release()
+
         if tid_int is not None:
             if state:
                 state = str(state)
             if not state:
                 # This can happen if something attempts to load
                 # an object whose creation has been undone.
+                self._log_keyerror(oid_int, "creation has been undone")
                 raise KeyError(oid)
             return state, p64(tid_int)
         else:
+            self._log_keyerror(oid_int, "no tid found(2)")
             raise KeyError(oid)
 
     def loadEx(self, oid, version):
@@ -237,6 +318,15 @@
 
     def loadSerial(self, oid, serial):
         """Load a specific revision of an object"""
+        oid_int = u64(oid)
+        tid_int = u64(serial)
+        cache = self._cache_client
+        if cache is not None:
+            cachekey = 'state:%d:%d' % (oid_int, tid_int)
+            state = cache.get(cachekey)
+            if state:
+                return state
+
         self._lock_acquire()
         try:
             if self._store_cursor is not None:
@@ -247,17 +337,20 @@
                 if not self._load_transaction_open:
                     self._restart_load()
                 cursor = self._load_cursor
-            state = self._adapter.load_revision(cursor, u64(oid), u64(serial))
-            if state is not None:
-                state = str(state)
-                if not state:
-                    raise POSKeyError(oid)
-                return state
-            else:
-                raise KeyError(oid)
+            state = self._adapter.load_revision(cursor, oid_int, tid_int)
         finally:
             self._lock_release()
 
+        if state is not None:
+            state = str(state)
+            if not state:
+                raise POSKeyError(oid)
+            if cache is not None:
+                cache.set(cachekey, state)
+            return state
+        else:
+            raise KeyError(oid)
+
     def loadBefore(self, oid, tid):
         """Return the most recent revision of oid before tid committed."""
         oid_int = u64(oid)
@@ -702,22 +795,6 @@
             self._lock_release()
 
 
-    def set_pack_gc(self, pack_gc):
-        """Configures whether garbage collection during packing is enabled.
-
-        Garbage collection is enabled by default.  If GC is disabled,
-        packing keeps at least one revision of every object.
-        With GC disabled, the pack code does not need to follow object
-        references, making packing conceivably much faster.
-        However, some of that benefit may be lost due to an ever
-        increasing number of unused objects.
-
-        Disabling garbage collection is also a hack that ensures
-        inter-database references never break.
-        """
-        self._pack_gc = pack_gc
-
-
     def pack(self, t, referencesf):
         if self._is_read_only:
             raise POSException.ReadOnlyError()
@@ -751,10 +828,10 @@
                 # In pre_pack, the adapter fills tables with
                 # information about what to pack.  The adapter
                 # should not actually pack anything yet.
-                adapter.pre_pack(tid_int, get_references, self._pack_gc)
+                adapter.pre_pack(tid_int, get_references, self._options)
 
                 # Now pack.
-                adapter.pack(tid_int)
+                adapter.pack(tid_int, self._options)
                 self._after_pack()
             finally:
                 adapter.release_pack_lock(lock_cursor)
@@ -785,14 +862,20 @@
         # self._zodb_conn = zodb_conn
         RelStorage.__init__(self, adapter=parent._adapter, name=parent._name,
             create=False, read_only=parent._is_read_only,
-            poll_interval=parent._poll_interval, pack_gc=parent._pack_gc)
+            options=parent._options)
         # _prev_polled_tid contains the tid at the previous poll
         self._prev_polled_tid = None
         self._poll_at = 0
 
+    def _get_oid_cache_key(self, oid_int):
+        my_tid = self._prev_polled_tid
+        if my_tid is None:
+            return None
+        return 'tid:%d:%d' % (oid_int, my_tid)
+
     def connection_closing(self):
         """Release resources."""
-        if not self._poll_interval:
+        if not self._options.poll_interval:
             self._rollback_load_connection()
         # else keep the load transaction open so that it's possible
         # to ignore the next poll.
@@ -818,7 +901,7 @@
             if self._closed:
                 return {}
 
-            if self._poll_interval:
+            if self._options.poll_interval:
                 now = time.time()
                 if self._load_transaction_open and now < self._poll_at:
                     # It's not yet time to poll again.  The previous load
@@ -826,7 +909,7 @@
                     # ignore this poll.
                     return {}
                 # else poll now after resetting the timeout
-                self._poll_at = now + self._poll_interval
+                self._poll_at = now + self._options.poll_interval
 
             self._restart_load()
             conn = self._load_conn
@@ -949,3 +1032,14 @@
         else:
             self.data = None
 
+
+class Options:
+    """Options for tuning RelStorage."""
+    def __init__(self):
+        self.poll_interval = 0
+        self.pack_gc = True
+        self.pack_batch_timeout = 5.0
+        self.pack_duty_cycle = 0.5
+        self.pack_max_delay = 20.0
+        self.cache_servers = ()  # ['127.0.0.1:11211']
+        self.cache_module_name = 'memcache'

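When cache-servers is configured, load() now resolves objects through two memcache lookups before touching the database. A minimal sketch of that path follows (the cached_load helper, adapter, and cursor names are stand-ins for the storage's internals; the key formats and adapter calls mirror the relstorage.py changes above):

    import memcache  # the default cache-module-name in this revision

    def cached_load(adapter, cursor, oid_int, prev_polled_tid, cache):
        """Sketch of the cache-aware path through RelStorage.load()."""
        # Level 1: find the current tid for the oid.  The key includes the
        # tid seen at the last poll, so it changes when newer transactions
        # appear and stale entries stop being consulted.
        tid_key = 'tid:%d:%d' % (oid_int, prev_polled_tid)
        tid_int = cache.get(tid_key)
        if not tid_int:
            tid_int = adapter.get_current_tid(cursor, oid_int)
            if tid_int is not None:
                cache.set(tid_key, tid_int)
        if tid_int is None:
            raise KeyError(oid_int)

        # Level 2: fetch the pickle for (oid, tid).  A committed (oid, tid)
        # pair never changes, so this entry is safe to cache indefinitely.
        state_key = 'state:%d:%d' % (oid_int, tid_int)
        state = cache.get(state_key)
        if not state:
            state = adapter.load_revision(cursor, oid_int, tid_int)
            if state:
                state = str(state)
                cache.set(state_key, state)
        return state, tid_int

    cache = memcache.Client(['127.0.0.1:11211'])  # matches the README example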
Modified: relstorage/trunk/relstorage/tests/reltestbase.py
===================================================================
--- relstorage/trunk/relstorage/tests/reltestbase.py	2008-07-28 16:58:03 UTC (rev 88880)
+++ relstorage/trunk/relstorage/tests/reltestbase.py	2008-07-28 17:26:36 UTC (rev 88881)
@@ -260,7 +260,7 @@
     def checkPollInterval(self):
         # Verify the poll_interval parameter causes RelStorage to
         # delay invalidation polling.
-        self._storage._poll_interval = 3600 
+        self._storage._poll_interval = 3600
         db = DB(self._storage)
         try:
             c1 = db.open()
@@ -446,7 +446,7 @@
             db.close()
 
     def checkPackGCDisabled(self):
-        self._storage.set_pack_gc(False)
+        self._storage._options.pack_gc = False
         self.checkPackGC(gc_enabled=False)
 
 
@@ -526,5 +526,3 @@
 
     def new_dest(self):
         return self._dst
-
-

Modified: relstorage/trunk/setup.py
===================================================================
--- relstorage/trunk/setup.py	2008-07-28 16:58:03 UTC (rev 88880)
+++ relstorage/trunk/setup.py	2008-07-28 17:26:36 UTC (rev 88881)
@@ -27,7 +27,7 @@
 with RelStorage.
 """
 
-VERSION = "1.1b1"
+VERSION = "1.1c1"
 
 classifiers = """\
 Development Status :: 4 - Beta


