[Grok-dev] Re: STORM howto

Martijn Faassen faassen at startifact.com
Mon Mar 17 06:06:43 EDT 2008


Laurence Rowe wrote:
> Martijn Faassen wrote:
[snip]
>> You do generate the schemas in memory and then write them out? What 
>> prompted the decision to write them out?
> 
> So I could see them and then more easily know how to subclass them - for 
> instance I needed to change the types of some fields (e.g. choice 
> instead of int, validators). Actually, recently I've moved away a bit 
> from the whole SQL first approach, mainly because I've changed 
> databases twice in the past month.

Interesting. I've had some discussions with the Storm people, and 
they're heavily in favor of the SQL first approach, as they feel no 
schema generation tool could create schemas that are really right (let 
alone accepted by DBAs, if any are involved). My perspective is that 
there are situations (smaller applications, no DBAs involved, 
development just starting) where it'd be nicer to have the schema 
generated from Python, so it's a use case I'd like to support at least 
to some level.

Seeing that you started from the SQL first approach and are now moving 
in the other direction, what is your reasoning on this issue?

I think Grok should end up supporting both scenarios in some fashion, 
though that's a bit too wishy-washy to explain nicely.

Concerning generated schemas, I agree there are complexities involved 
when generating them at run-time. We need to play around with this a 
bit to see what's possible; I generally try to avoid code generation.
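To make "generated from Python" concrete, here is a minimal stdlib-only 
sketch of rendering DDL from a Python description. The Column and 
create_table_ddl names are hypothetical stand-ins, not Storm's (or 
SQLAlchemy's) actual API:

```python
# Hypothetical sketch: generate CREATE TABLE DDL from Python column
# descriptions. Names here are illustrative, not a real library's API.

class Column:
    def __init__(self, name, sql_type, primary_key=False):
        self.name = name
        self.sql_type = sql_type
        self.primary_key = primary_key

def create_table_ddl(table_name, columns):
    """Render a CREATE TABLE statement from column descriptions."""
    parts = []
    for col in columns:
        part = "%s %s" % (col.name, col.sql_type)
        if col.primary_key:
            part += " PRIMARY KEY"
        parts.append(part)
    return "CREATE TABLE %s (%s)" % (table_name, ", ".join(parts))

ddl = create_table_ddl("person", [
    Column("id", "INTEGER", primary_key=True),
    Column("name", "VARCHAR(255)"),
])
```

Whether such output is "really right" for a DBA is of course exactly 
the Storm people's objection; the sketch only shows the no-DBA, 
small-application end of the trade-off.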

> Starting now I would probably hack around with the declarative stuff to 
> work out how to annotate the field info to the Column definitions and 
> autogenerate if not supplied.

I'm not sure I follow you entirely, and I'm quite curious about this 
approach. :) Annotate what field info how? Zope 3 schema field info or 
something else?
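Guessing at what you mean, here is a stdlib-only sketch of one way to 
annotate field info onto column definitions and autogenerate a default 
when none is supplied. Everything here (DEFAULT_FIELDS, the Column 
class, the field names) is hypothetical, not your actual code:

```python
# Hypothetical sketch: attach (Zope 3 schema style) field info to a
# column definition, autogenerating a default field from the column's
# type when no annotation is supplied. All names are illustrative.

# Assumed mapping from a column's Python type to a default field kind.
DEFAULT_FIELDS = {"int": "Int", "str": "TextLine"}

class Column:
    def __init__(self, name, py_type, field=None):
        self.name = name
        self.py_type = py_type
        # Use the explicit annotation if supplied, else autogenerate.
        self.field = field if field is not None else DEFAULT_FIELDS[py_type]

columns = [
    Column("age", "int"),                    # autogenerated -> "Int"
    Column("status", "int", field="Choice"), # override: Choice, not Int
]
fields = dict((c.name, c.field) for c in columns)
```

That would cover the "choice instead of int" case you mention without 
subclassing a written-out schema.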

[snip]
>> [collective.lead]
>>> strictly zodb3 is not required, only transaction. The elro-tpc branch
>>> will become 2.0 fairly soon. It pulls together some work done by
>>> bitranch using scoped sessions and supporting the MyClass.query magic
>>> and automatically associating new instances with sessions as well as
>>> adding twophase and savepoint support. I just need to do a bit of
>>> tidying up before I merge to trunk.
>>
>> I'm not sure what features you're referring to here; I'm not that 
>> familiar with SQLAlchemy. Are scoped sessions and MyClass.query magic 
>> both SQLAlchemy features? Would you recommend a new project starts 
>> with this branch?
> 
> ScopedSessions provide per thread sessions and are a good thing. 
> Randomly they also seem to provide the option (using a special 
> session.mapper) to annotate your original class with information about 
> which metadata/session they belong to. Thinking again, I don't 
> recommend you use them. Too much magic. I've moved it to an experimental 
> module for now.

So ScopedSessions are good, but the special mapper story is too much 
magic? Is that the MyClass.query magic you're talking about?
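For anyone following along: the per-thread idea behind SQLAlchemy's 
ScopedSession can be sketched with just the stdlib. The Session class 
below is a stand-in, not the real SQLAlchemy one:

```python
import threading

# Stdlib-only sketch of the per-thread session idea behind SQLAlchemy's
# ScopedSession. Session is a stand-in, not the real SQLAlchemy class.

class Session(object):
    pass

class ScopedSession(object):
    """Hand each calling thread its own lazily-created Session."""
    def __init__(self, factory):
        self._factory = factory
        self._local = threading.local()  # one slot per thread

    def __call__(self):
        if not hasattr(self._local, "session"):
            self._local.session = self._factory()
        return self._local.session

scoped = ScopedSession(Session)
same = scoped() is scoped()  # repeated calls in one thread share

other = []
worker = threading.Thread(target=lambda: other.append(scoped()))
worker.start()
worker.join()
different = other[0] is not scoped()  # another thread gets its own
```

The session.mapper trick goes a step further by wiring class-level 
query access onto your mapped classes, which is presumably the "too 
much magic" part.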

> The more I think of it, I would limit the IDatabase interface to 
> session, engine and connection. It seems that the declarative approach 
> is to keep metadata separate. This should allow for having multiple 
> sites with their own databases using the same code for mappers.

Where would one keep metadata around in this case?
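To sketch one possible answer (all names below are illustrative 
stand-ins, not collective.lead's real API): keep the metadata at module 
level, shared by all sites, while each site's IDatabase-like utility 
owns only its engine, connection and session:

```python
# Illustrative stand-ins only: module-level metadata shared by every
# site, while each site's database utility is limited to
# engine/connection/session, as proposed for IDatabase.

SHARED_METADATA = {"tables": ("person", "address")}  # one copy for all sites

class Database(object):
    """Per-site utility: just engine, connection and session."""
    def __init__(self, url):
        self.engine = "engine(%s)" % url  # stand-in for a real engine

    def connection(self):
        return "connection via %s" % self.engine

    def session(self):
        return "session via %s" % self.engine

site_a = Database("postgresql://site-a")
site_b = Database("postgresql://site-b")
# Both sites reuse the same mapper metadata, but not the same engine.
```

That would indeed let multiple sites with their own databases share the 
same mapper code.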

[collective.tin]

[an overview]
> Having said that, if you just want one folder with items from a table 
> in it, then start again; tin is just too complex if you only want to 
> handle that use case. The complexity and lines of code come not from 
> the zope2 compatibility (that is all fairly well contained) but from 
> the support for pluggable strategies for naming and versioning. It's 
> designed to be flexible enough for a SQL first approach where you 
> don't own the database you are interfacing with.

That's interesting, thanks for that overview of the code. I think it 
would be worthwhile to consider a strategy for moving the 50 lines of 
Zope 2 integration code out of the package into another one. I always 
worry when packages start to import Plone: even with the good intention 
not to rely on it everywhere, compatibility code tends to spread around 
if you don't put a clear separation in place.

[snip]
> There is certainly a lack of documentation on it. Unfortunately I think 
> I'm too bogged down in my project to do much about it at the moment. I'm 
> very willing to help anyone else who wants to make use of it though. I'm 
> normally on irc during the day.

We'll be around, I'm sure. :)

> SQLAlchemy is very well documented though, and extremely powerful. The 
> whole SQL dialect abstraction layer is wonderful and saves your bacon 
> when someone decides to move your project from PostgreSQL to Oracle to 
> SQL Server.

Yes, the SQLAlchemy documentation is indeed much appreciated!

Regards,

Martijn


