[ZODB-Dev] SQL/ZODB integration (was Re: ZODB and new style classes?)

Phillip J. Eby pje@telecommunity.com
Mon, 01 Jul 2002 09:52:20 -0400


At 09:29 AM 7/1/02 -0400, Jim Fulton wrote:
>Jeremy Hylton wrote:
>..
>>You put the smiley there, so I think you've got the right idea, but I feel
>>like I should clarify.  There's no problem with using ZODB 4, but I wouldn't
>>trust it for production code and I certainly wouldn't distribute code using
>>it to other people yet.  I'm very happy to have non-Zope users experiment
>>with and test the ZODB 4 code.  There are a lot of internal changes, and we
>>have a number of significant API changes planned for the near future.  It's
>>very helpful to get feedback from a wide range of users.
>
>I'll add that I also want to do a major refactoring of some of the APIs
>to make them cleaner and usable for non-ZODB Python persistence projects.
>I saw a number of cool presentations at EuroPython on RDBMS persistence
>systems, and it would be really cool if these could share persistence and
>transaction frameworks with ZODB.

For what it's worth, I've successfully designed (and have started 
implementing) a comprehensive framework for SQL/LDAP/anything-backed 
persistence using the ZODB Persistence and Transaction packages/interfaces 
exactly as they are now, so I'm not sure how much refactoring is really 
required.  ;)

The work I've done so far supports things like foreign keys and inverse 
foreign key reference sets, cross-database references (including support 
for automatic thunk generation), updates in guaranteed referential 
integrity order, early flushing (pre-commit) of updates when a query would 
otherwise read stale data, and the ability for multi-row queries to "push" 
their data to pre-load ghost objects' state, so the state doesn't have to 
be queried again when a result row is later accessed in object form.
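
To give a feel for that last "push" feature, the rough idea is that as a 
multi-row query walks its result set, it hands each row's data straight to 
the data manager, so the corresponding ghost is already filled in by the 
time anyone touches it.  In very rough pseudo-Python (every name here is a 
placeholder I'm using for illustration, not the framework's actual API):

def run_query(jar, sql, params=()):
    # Illustrative sketch of the "push" idea only; 'get_ghost' and
    # 'preloadState' are made-up names, not real methods.
    for oid, name, email in jar.db.execute(sql, params):
        ob = jar.get_ghost(oid)       # ghost may already be in the cache
        jar.preloadState(ob, {'name': name, 'email': email})
        yield ob                      # no second query when 'ob' is used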

To implement the above features, you simply subclass the basic data 
manager and implement from one to six methods, depending on which features 
you want to support.  One of the methods is only needed for cross-database 
thunking, and another is only needed if you have to use a loaded state to 
determine what class a ghost should be created as.  So the usual methods 
to implement are just load(), save(), new(), and defaultState().  The base 
class handles 99% of the boilerplate persistence management, including the 
complete IPersistDataManager and Transaction.IDataManager interfaces.
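
For illustration, a minimal SQL-backed subclass might look roughly like 
the sketch below.  The base-class stand-in, the method signatures, and the 
schema are placeholders I'm using just for the example; only the four 
method names come from the design itself:

import sqlite3

class DataManagerBase:
    """Stand-in for the framework's basic data manager base class."""

class ContactJar(DataManagerBase):
    """Toy data manager that persists 'contact' objects to one table."""

    def __init__(self, path):
        self.db = sqlite3.connect(path)

    def defaultState(self, oid):
        # State handed to a brand-new object before its first save().
        return {'name': '', 'email': ''}

    def load(self, oid):
        # Fetch the object's state from the underlying table.
        row = self.db.execute(
            "SELECT name, email FROM contacts WHERE id = ?", (oid,)
        ).fetchone()
        return {'name': row[0], 'email': row[1]}

    def save(self, oid, state):
        # Write changed state back to the table at commit time.
        self.db.execute(
            "UPDATE contacts SET name = ?, email = ? WHERE id = ?",
            (state['name'], state['email'], oid),
        )

    def new(self, state):
        # Allocate an oid (here, a fresh primary key) for a new object.
        cursor = self.db.execute(
            "INSERT INTO contacts (name, email) VALUES (?, ?)",
            (state['name'], state['email']),
        )
        return cursor.lastrowid

Everything else (joining transactions, ghosting, change tracking, and so 
on) would come from the base class.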

Anyway, if you're interested, I've written some 80K+ of text on the core 
design and concepts for its use in postings at:

http://www.eby-sarna.com/pipermail/transwarp/2002-June/thread.html#135

Most of the detailed design for the data manager is in the "Basic 'storage 
jar' design" thread.  Some of the other messages involve terminology that 
is specific to PEAK or TransWarp, but the basics should be clear 
nonetheless, especially in the back-and-forth between Roche' Compaan and me 
about how the design is supposed to work.