[Checkins] SVN: mongopersist/tags/0.7.2/ Release 0.7.2
Andrey Lebedev
cvs-admin at zope.org
Thu Apr 19 11:41:31 UTC 2012
Log message for revision 125175:
Release 0.7.2
Changed:
A mongopersist/tags/0.7.2/
D mongopersist/tags/0.7.2/CHANGES.txt
A mongopersist/tags/0.7.2/CHANGES.txt
D mongopersist/tags/0.7.2/setup.py
A mongopersist/tags/0.7.2/setup.py
D mongopersist/tags/0.7.2/src/mongopersist/zope/container.py
A mongopersist/tags/0.7.2/src/mongopersist/zope/container.py
-=-
Deleted: mongopersist/tags/0.7.2/CHANGES.txt
===================================================================
--- mongopersist/trunk/CHANGES.txt 2012-04-18 14:04:35 UTC (rev 125172)
+++ mongopersist/tags/0.7.2/CHANGES.txt 2012-04-19 11:41:27 UTC (rev 125175)
@@ -1,268 +0,0 @@
-=======
-CHANGES
-=======
-
-0.7.2 (unreleased)
-------------------
-
-- ...
-
-0.7.1 (2012-04-13)
-------------------
-
-- Performance: Improved the profiler a bit by allowing modification of
- records to be disabled as well.
-
-- Performance: Added caching of ``_m_jar`` lookups in Mongo Containers, since
- the computation turned out to be significantly expensive.
-
-- Performance: Use lazy hash computation for DBRef. Also, disable support for
- arbitrary keyword arguments. This makes roughly a 2-4% difference in object
- loading time.
-
-- Bug: An error occurred when ``_py_serial`` was missing. This was possible
- due to a bug in version 0.6. The fix also protects against third-party
- software that is not aware of our meta-data.
-
-- Performance: Switched to ``repoze.lru`` (from ``lru``), which is much
- faster.
-
-- Performance: To avoid excessive hash computations, we now use the hash of
- the ``DBRef`` references as cache keys.
-
-- Bug: ``ObjectId`` ids are not guaranteed to be unique across
- collections. Thus they are a bad key for global caches. So we use full
- ``DBRef`` references instead.
-
-0.7.0 (2012-04-02)
-------------------
-
-- Feature: A new ``IConflictHandler`` interface now controls all aspects of
- conflict resolution. The following implementations are provided:
-
- * ``NoCheckConflictHandler``: This handler does nothing and when used, the
- system behaves as before when the ``detect_conflicts`` flag was set to
- ``False``.
-
- * ``SimpleSerialConflictHandler``: This handler uses serial numbers on each
- document to keep track of versions and then to detect conflicts. When a
- conflict is detected, a ``ConflictError`` is raised. This handler is
- identical to ``detect_conflicts`` being set to ``True``.
-
- * ``ResolvingSerialConflictHandler``: Another serial handler, but it has the
- ability to resolve a conflict. For this to happen, a persistent object
- must implement ``_p_resolveConflict(orig_state, cur_state, new_state)``,
- which returns the new, merged state. (Experimental)
-
- As a result, the ``detect_conflicts`` flag of the data manager was removed
- and replaced with the ``conflict_handler`` attribute. One can pass in the
- ``conflict_handler_factory`` to the data manager constructor. The factory
- needs to expect one argument, the data manager.
-
-- Feature: The new ``IdNamesMongoContainer`` class uses the natural Mongo
- ObjectId as the name/key for the items in the container. No more messing
- around with inventing or generating a name. Of course, if you specified
- ``None`` as a key in the past, it already used the object id, but it was
- stored again in the mapping key field. Now the object id is used directly
- everywhere.
-
-- Feature: Whenever ``setattr()`` is called on a persistent object, it is
- marked as changed even if the new value equals the old one. To minimize
- writes to MongoDB, the latest database state is compared to the new state
- and the new state is only written when changes are detected. A flag called
- ``serialize.IGNORE_IDENTICAL_DOCUMENTS`` (default: ``True``) is used to
- control the feature. (Experimental)
-
-- Feature: ``ConflictError`` now has a much more meaningful API. Instead of
- just referencing the object and different serials, it now actually has the
- original, current and new state documents.
-
-- Feature: Conflicts are now detected while aborting a transaction. The
- implemented policy will not reset the document state, if a conflict is
- detected.
-
-- Feature: Provide a flag to turn on MongoDB access logging. The flag is false
- by default, since access logging is very expensive.
-
-- Feature: Added transaction ID to LoggingDecorator.
-
-- Feature: Added a little script to test performance. It is not very
- sophisticated, but it is sufficient for a first round of optimizations.
-
-- Feature: Massively improved performance on all levels. This was mainly
- accomplished by removing unnecessary database accesses, better caching and
- more efficient algorithms. This results in speedups between 4-25 times.
-
- - When resolving the path to a class, the result is now cached. More
- importantly, lookup failures are also cached, mapping path ->
- ``None``. This is important, since an optimization in the ``resolve()``
- method causes a lot of failing lookups.
-
- - When resolving the dbref to a type, we try to resolve the dbref early
- using the document, if we know that the documents within the collection
- store their type path. This avoids frequent queries of the name map
- collection when it is not needed.
-
- - When getting the object document to read the class path, it will now read
- the entire document and store it in the ``_latest_states`` dictionary, so
- that other code may pick it up and use it. This should avoid superfluous
- reads from MongoDB.
-
- - Drastically improved performance for collections that store only one type
- of object and where the documents do not store the type (i.e. it is
- stored in the name map collection).
-
- - The Mongo Container fast load via find() did not work correctly, since
- setstate() did not change the state from ghost to active and thus the
- state was loaded again from MongoDB and set on the object. Now we use the
- new ``_latest_states`` cache to lookup a document when ``setstate()`` is
- called through the proper channels. Now this "fast load" method truly
- causes O(1) database lookups.
-
- - Implemented several more mapping methods for the Mongo Container, so that
- all methods getting the full list of items are fast now.
-
- - Whenever the Mongo Object Id is used as a hash key, use the hash of the id
- instead. The ``__cmp__()`` method of the ``ObjectId`` class is way too
- slow.
-
- - Cache collection name lookup from objects in the ``ObjectWriter`` class.
-
-- Bug: We have seen several occasions in production where we suddenly lost
- some state in some documents, which prevented the objects from being
- loaded again. The cause was that the ``_original_states`` attribute did not
- store the raw MongoDB document, but a modified one. Since those states are
- used during abort to reset the state, however, the modified document got
- stored making the affected objects inaccessible.
-
-- Bug: When a transaction was aborted, the states of all *loaded* objects were
- reset. Now, only *modified* object states are reset. This should drastically
- lower problems (by the ratio of read over modified objects) due to lack of
- full MVCC.
-
-- Bug: When looking for an item by key/name (``find_*()`` methods), you would
- never get the right object back, but the first one found in the
- database. This was due to clobbering the search filter with more general
- parameters.
-
-
-0.6.1 (2012-03-28)
-------------------
-
-- Feature: Added quite detailed debug logging around collection methods.
-
-0.6.0 (2012-03-12)
-------------------
-
-- Feature: Switched to optimistic data dumping, which approaches transactions
- by dumping early and as the data comes. All changes are undone when the
- transaction fails/aborts. See ``optimistic-data-dumping.txt`` for
- details. Here are some of the new features:
-
- * Data manager keeps track of all original docs before their objects are
- modified, so any change can be done.
-
- * Added an API to data manager (``DataManager.insert(obj)``) to insert an
- object in the database.
-
- * Added an API to data manager (``DataManager.remove(obj)``) to remove an
- object from the database.
-
- * Data can be flushed to Mongo (``DataManager.flush()``) at any point of the
- transaction retaining the ability to completely undo all changes. Flushing
- features the following characteristics:
-
- + During a given transaction, we guarantee that the user will always receive
- the same Python object. This requires that flush does not reset the object
- cache.
-
- + The ``_p_serial`` is increased by one. (Automatically done in object
- writer.)
-
- + The object is removed from the registered objects and the ``_p_changed``
- flag is set to ``False``.
-
- + Before flushing, potential conflicts are detected.
-
- * Implemented a flushing policy: Changes are always flushed before any query
- is made. A simple wrapper for the ``pymongo`` collection
- (``CollectionWrapper``) ensures that flush is called before the relevant
- method calls. Two new API methods ``DataManager.get_collection(db_name,
- coll_name)`` and ``DataManager.get_collection_from_object(obj)``
- allow one to quickly get a wrapped collection.
-
-- Feature: Renamed ``processSpec()`` to ``process_spec()`` to adhere to
- package naming convention.
-
-- Feature: Created a ``ProcessSpecDecorator`` that is used in the
- ``CollectionWrapper`` class to process the specs of the ``find()``,
- ``find_one()`` and ``find_and_modify()`` collection methods.
-
-- Feature: The ``MongoContainer`` class now removes objects from the database
- upon container removal if ``_m_remove_documents`` is ``True``. The default
- is ``True``.
-
-- Feature: When adding an item to ``MongoContainer`` and the key is ``None``,
- then the OID is chosen as the key. Ids are perfect keys, because they are
- guaranteed to be unique within the collection.
-
-- Feature: Since people did not like the setitem with ``None`` key
- implementation, I also added the ``MongoContainer.add(value, key=None)``
- method, which makes specifying the key optional. The default implementation
- is to use the OID, if the key is ``None``.
-
-- Feature: Removed ``fields`` argument from the ``MongoContainer.find(...)``
- and ``MongoContainer.find_one(...)`` methods, since it was not used.
-
-- Feature: If a container has N items, it took N+1 queries to load the list of
- items completely. This was due to one query returning all DBRefs and then
- using one query to load the state for each. Now, the first query loads all
- full states and uses an extension to ``DataManager.setstate(obj, doc=None)``
- to load the state of the object with the previously queried data.
-
-- Feature: Changed ``MongoContainer.get_collection()`` to return a
- ``CollectionWrapper`` instance.
-
-
-0.5.5 (2012-03-09)
-------------------
-
-- Feature: Moved the ZODB dependency to the test dependencies.
-
-- Bug: When an object has a SimpleContainer as attribute, then simply loading
- this object would cause it to be written at the end of the transaction. The
- culprit was a persistent dictionary containing the SimpleContainer
- state. This dictionary got modified during state load and caused it to be
- registered as a changed object and it was marked as a ``_p_mongo_sub_object``
- and had the original object as ``_p_mongo_doc_object``.
-
-
-0.5.4 (2012-03-05)
-------------------
-
-- Feature: Added a hook via the IMongoSpecProcessor adapter that gets called
- before each find to process/log spec.
-
-0.5.3 (2012-01-16)
-------------------
-
-- Bug: ``MongoContainer`` did not emit any Zope container or lifecycle
- events. This has been fixed by using the ``zope.container.contained``
- helper functions.
-
-0.5.2 (2012-01-13)
-------------------
-
-- Feature: Added an interface for the ``MongoContainer`` class describing the
- additional attributes and methods.
-
-0.5.1 (2011-12-22)
-------------------
-
-- Bug: The ``MongoContainer`` class did not implement the ``IContainer``
- interface.
-
-0.5.0 (2011-11-04)
-------------------
-
-- Initial Release
Copied: mongopersist/tags/0.7.2/CHANGES.txt (from rev 125174, mongopersist/trunk/CHANGES.txt)
===================================================================
--- mongopersist/tags/0.7.2/CHANGES.txt (rev 0)
+++ mongopersist/tags/0.7.2/CHANGES.txt 2012-04-19 11:41:27 UTC (rev 125175)
@@ -0,0 +1,270 @@
+=======
+CHANGES
+=======
+
+0.7.2 (2012-04-19)
+------------------
+
+- Bug: Avoid caching ``MongoDataManager`` instances in the Mongo container,
+ which could create multiple data managers within a single transaction in
+ multi-threaded environments. Cache the ``IMongoDataManagerProvider``
+ instead.
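The threading rationale behind this fix can be sketched as follows (the classes below are illustrative stand-ins, not mongopersist's actual implementation):

```python
# Hypothetical sketch: caching a provider is safe because it hands out a
# per-thread data manager, whereas caching one concrete data manager
# would share it across threads and transactions.
import threading

class DataManagerProvider(object):
    """Stand-in for an IMongoDataManagerProvider utility."""
    def __init__(self):
        self._local = threading.local()

    def get(self):
        # One data manager per thread; object() stands in for a real
        # MongoDataManager.
        if not hasattr(self._local, 'dm'):
            self._local.dm = object()
        return self._local.dm

provider = DataManagerProvider()   # safe to cache on the container
dm_here = provider.get()

other = []
t = threading.Thread(target=lambda: other.append(provider.get()))
t.start()
t.join()
# Repeated get() calls in this thread return the same data manager;
# the other thread received a distinct one.
```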
+
+0.7.1 (2012-04-13)
+------------------
+
+- Performance: Improved the profiler a bit by allowing modification of
+ records to be disabled as well.
+
+- Performance: Added caching of ``_m_jar`` lookups in Mongo Containers, since
+ the computation turned out to be significantly expensive.
+
+- Performance: Use lazy hash computation for DBRef. Also, disable support for
+ arbitrary keyword arguments. This makes roughly a 2-4% difference in object
+ loading time.
+
+- Bug: An error occurred when ``_py_serial`` was missing. This was possible
+ due to a bug in version 0.6. The fix also protects against third-party
+ software that is not aware of our meta-data.
+
+- Performance: Switched to ``repoze.lru`` (from ``lru``), which is much
+ faster.
+
+- Performance: To avoid excessive hash computations, we now use the hash of
+ the ``DBRef`` references as cache keys.
+
+- Bug: ``ObjectId`` ids are not guaranteed to be unique across
+ collections. Thus they are a bad key for global caches. So we use full
+ ``DBRef`` references instead.
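The cache-key point can be illustrated with plain dicts (the collection and database names here are invented):

```python
# Illustrative only: a bare ObjectId-style id cannot distinguish
# documents from different collections, so a DBRef-like
# (collection, database, id) tuple is the safe cache key.
oid = '4f900735e138232d2e000000'          # same id in two collections
cache = {}
cache[('people', 'mydb', oid)] = {'name': 'Bob'}
cache[('tasks', 'mydb', oid)] = {'title': 'release 0.7.2'}
# Keying on oid alone would have clobbered one entry; the DBRef-style
# key keeps both.
```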
+
+0.7.0 (2012-04-02)
+------------------
+
+- Feature: A new ``IConflictHandler`` interface now controls all aspects of
+ conflict resolution. The following implementations are provided:
+
+ * ``NoCheckConflictHandler``: This handler does nothing and when used, the
+ system behaves as before when the ``detect_conflicts`` flag was set to
+ ``False``.
+
+ * ``SimpleSerialConflictHandler``: This handler uses serial numbers on each
+ document to keep track of versions and then to detect conflicts. When a
+ conflict is detected, a ``ConflictError`` is raised. This handler is
+ identical to ``detect_conflicts`` being set to ``True``.
+
+ * ``ResolvingSerialConflictHandler``: Another serial handler, but it has the
+ ability to resolve a conflict. For this to happen, a persistent object
+ must implement ``_p_resolveConflict(orig_state, cur_state, new_state)``,
+ which returns the new, merged state. (Experimental)
+
+ As a result, the ``detect_conflicts`` flag of the data manager was removed
+ and replaced with the ``conflict_handler`` attribute. One can pass in the
+ ``conflict_handler_factory`` to the data manager constructor. The factory
+ needs to expect one argument, the data manager.
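A minimal sketch of what implementing ``_p_resolveConflict`` might look like (the ``Counter`` class and its ``value`` field are invented for illustration; state documents are plain dicts):

```python
class Counter(object):
    # Hypothetical persistent object that can merge concurrent
    # increments instead of raising a ConflictError.
    def _p_resolveConflict(self, orig_state, cur_state, new_state):
        # Apply both deltas to the original value and return the
        # merged state document.
        merged = dict(new_state)
        merged['value'] = (cur_state['value'] + new_state['value']
                           - orig_state['value'])
        return merged

# Original value 5; a concurrent transaction committed 7 (+2) while
# ours wants to write 8 (+3); the merge yields 10.
merged = Counter()._p_resolveConflict(
    {'value': 5}, {'value': 7}, {'value': 8})
```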
+
+- Feature: The new ``IdNamesMongoContainer`` class uses the natural Mongo
+ ObjectId as the name/key for the items in the container. No more messing
+ around with inventing or generating a name. Of course, if you specified
+ ``None`` as a key in the past, it already used the object id, but it was
+ stored again in the mapping key field. Now the object id is used directly
+ everywhere.
+
+- Feature: Whenever ``setattr()`` is called on a persistent object, it is
+ marked as changed even if the new value equals the old one. To minimize
+ writes to MongoDB, the latest database state is compared to the new state
+ and the new state is only written when changes are detected. A flag called
+ ``serialize.IGNORE_IDENTICAL_DOCUMENTS`` (default: ``True``) is used to
+ control the feature. (Experimental)
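The write-avoidance check boils down to a document comparison; a sketch under that assumption (the function name is invented):

```python
# Hypothetical sketch of the skip-identical-writes check: compare the
# freshly serialized state with the latest known database state and
# only write when they differ.
IGNORE_IDENTICAL_DOCUMENTS = True

def needs_write(latest_doc, new_doc):
    if IGNORE_IDENTICAL_DOCUMENTS and latest_doc == new_doc:
        return False   # nothing changed; skip the MongoDB write
    return True

# setattr() marked the object changed, but the serialized state is equal:
unchanged = needs_write({'name': 'Bob'}, {'name': 'Bob'})
changed = needs_write({'name': 'Bob'}, {'name': 'Alice'})
```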
+
+- Feature: ``ConflictError`` now has a much more meaningful API. Instead of
+ just referencing the object and different serials, it now actually has the
+ original, current and new state documents.
+
+- Feature: Conflicts are now detected while aborting a transaction. The
+ implemented policy will not reset the document state, if a conflict is
+ detected.
+
+- Feature: Provide a flag to turn on MongoDB access logging. The flag is false
+ by default, since access logging is very expensive.
+
+- Feature: Added transaction ID to LoggingDecorator.
+
+- Feature: Added a little script to test performance. It is not very
+ sophisticated, but it is sufficient for a first round of optimizations.
+
+- Feature: Massively improved performance on all levels. This was mainly
+ accomplished by removing unnecessary database accesses, better caching and
+ more efficient algorithms. This results in speedups between 4-25 times.
+
+ - When resolving the path to a class, the result is now cached. More
+ importantly, lookup failures are also cached, mapping path ->
+ ``None``. This is important, since an optimization in the ``resolve()``
+ method causes a lot of failing lookups.
+
+ - When resolving the dbref to a type, we try to resolve the dbref early
+ using the document, if we know that the documents within the collection
+ store their type path. This avoids frequent queries of the name map
+ collection when it is not needed.
+
+ - When getting the object document to read the class path, it will now read
+ the entire document and store it in the ``_latest_states`` dictionary, so
+ that other code may pick it up and use it. This should avoid superfluous
+ reads from MongoDB.
+
+ - Drastically improved performance for collections that store only one type
+ of object and where the documents do not store the type (i.e. it is
+ stored in the name map collection).
+
+ - The Mongo Container fast load via find() did not work correctly, since
+ setstate() did not change the state from ghost to active and thus the
+ state was loaded again from MongoDB and set on the object. Now we use the
+ new ``_latest_states`` cache to lookup a document when ``setstate()`` is
+ called through the proper channels. Now this "fast load" method truly
+ causes O(1) database lookups.
+
+ - Implemented several more mapping methods for the Mongo Container, so that
+ all methods getting the full list of items are fast now.
+
+ - Whenever the Mongo Object Id is used as a hash key, use the hash of the id
+ instead. The ``__cmp__()`` method of the ``ObjectId`` class is way too
+ slow.
+
+ - Cache collection name lookup from objects in the ``ObjectWriter`` class.
+
+- Bug: We have seen several occasions in production where we suddenly lost
+ some state in some documents, which prevented the objects from being
+ loaded again. The cause was that the ``_original_states`` attribute did not
+ store the raw MongoDB document, but a modified one. Since those states are
+ used during abort to reset the state, however, the modified document got
+ stored making the affected objects inaccessible.
+
+- Bug: When a transaction was aborted, the states of all *loaded* objects were
+ reset. Now, only *modified* object states are reset. This should drastically
+ lower problems (by the ratio of read over modified objects) due to lack of
+ full MVCC.
+
+- Bug: When looking for an item by key/name (``find_*()`` methods), you would
+ never get the right object back, but the first one found in the
+ database. This was due to clobbering the search filter with more general
+ parameters.
+
+
+0.6.1 (2012-03-28)
+------------------
+
+- Feature: Added quite detailed debug logging around collection methods.
+
+0.6.0 (2012-03-12)
+------------------
+
+- Feature: Switched to optimistic data dumping, which approaches transactions
+ by dumping early and as the data comes. All changes are undone when the
+ transaction fails/aborts. See ``optimistic-data-dumping.txt`` for
+ details. Here are some of the new features:
+
+ * Data manager keeps track of all original docs before their objects are
+ modified, so any change can be done.
+
+ * Added an API to data manager (``DataManager.insert(obj)``) to insert an
+ object in the database.
+
+ * Added an API to data manager (``DataManager.remove(obj)``) to remove an
+ object from the database.
+
+ * Data can be flushed to Mongo (``DataManager.flush()``) at any point of the
+ transaction retaining the ability to completely undo all changes. Flushing
+ features the following characteristics:
+
+ + During a given transaction, we guarantee that the user will always receive
+ the same Python object. This requires that flush does not reset the object
+ cache.
+
+ + The ``_p_serial`` is increased by one. (Automatically done in object
+ writer.)
+
+ + The object is removed from the registered objects and the ``_p_changed``
+ flag is set to ``False``.
+
+ + Before flushing, potential conflicts are detected.
+
+ * Implemented a flushing policy: Changes are always flushed before any query
+ is made. A simple wrapper for the ``pymongo`` collection
+ (``CollectionWrapper``) ensures that flush is called before the relevant
+ method calls. Two new API methods ``DataManager.get_collection(db_name,
+ coll_name)`` and ``DataManager.get_collection_from_object(obj)``
+ allow one to quickly get a wrapped collection.
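The flush-before-query policy can be sketched as a small wrapper (the fake data manager and collection below are stand-ins, not mongopersist's or pymongo's classes):

```python
# Hypothetical sketch: intercept query methods and flush pending
# changes before delegating to the wrapped collection.
class FlushingCollection(object):
    QUERY_METHODS = frozenset(['find', 'find_one', 'find_and_modify'])

    def __init__(self, datamanager, collection):
        self._dm = datamanager
        self._collection = collection

    def __getattr__(self, name):
        attr = getattr(self._collection, name)
        if name in self.QUERY_METHODS:
            def flushing(*args, **kw):
                self._dm.flush()   # write pending state before reading
                return attr(*args, **kw)
            return flushing
        return attr

class FakeDataManager(object):
    def __init__(self):
        self.flushes = 0
    def flush(self):
        self.flushes += 1

class FakeCollection(object):
    def find(self):
        return [{'_id': 1}]

dm = FakeDataManager()
wrapped = FlushingCollection(dm, FakeCollection())
docs = wrapped.find()   # triggers exactly one flush first
```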
+
+- Feature: Renamed ``processSpec()`` to ``process_spec()`` to adhere to
+ package naming convention.
+
+- Feature: Created a ``ProcessSpecDecorator`` that is used in the
+ ``CollectionWrapper`` class to process the specs of the ``find()``,
+ ``find_one()`` and ``find_and_modify()`` collection methods.
+
+- Feature: The ``MongoContainer`` class now removes objects from the database
+ upon container removal if ``_m_remove_documents`` is ``True``. The default
+ is ``True``.
+
+- Feature: When adding an item to ``MongoContainer`` and the key is ``None``,
+ then the OID is chosen as the key. Ids are perfect keys, because they are
+ guaranteed to be unique within the collection.
+
+- Feature: Since people did not like the setitem with ``None`` key
+ implementation, I also added the ``MongoContainer.add(value, key=None)``
+ method, which makes specifying the key optional. The default implementation
+ is to use the OID, if the key is ``None``.
+
+- Feature: Removed ``fields`` argument from the ``MongoContainer.find(...)``
+ and ``MongoContainer.find_one(...)`` methods, since it was not used.
+
+- Feature: If a container has N items, it took N+1 queries to load the list of
+ items completely. This was due to one query returning all DBRefs and then
+ using one query to load the state for each. Now, the first query loads all
+ full states and uses an extension to ``DataManager.setstate(obj, doc=None)``
+ to load the state of the object with the previously queried data.
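The query-count difference is easy to simulate (the store below is a stand-in, not pymongo):

```python
# Hypothetical simulation: loading N items naively costs N+1 queries,
# while fetching the full documents up front costs exactly one.
class FakeStore(object):
    def __init__(self, docs):
        self.docs = docs
        self.queries = 0

    def find(self):            # returns full documents
        self.queries += 1
        return list(self.docs.values())

    def find_one(self, _id):   # one query per document
        self.queries += 1
        return self.docs[_id]

docs = {i: {'_id': i} for i in range(3)}

naive = FakeStore(docs)
refs = [d['_id'] for d in naive.find()]          # 1 query for the refs
states = [naive.find_one(r) for r in refs]       # N more queries

fast = FakeStore(docs)
states_fast = fast.find()                        # everything in one query
```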
+
+- Feature: Changed ``MongoContainer.get_collection()`` to return a
+ ``CollectionWrapper`` instance.
+
+
+0.5.5 (2012-03-09)
+------------------
+
+- Feature: Moved the ZODB dependency to the test dependencies.
+
+- Bug: When an object has a SimpleContainer as attribute, then simply loading
+ this object would cause it to be written at the end of the transaction. The
+ culprit was a persistent dictionary containing the SimpleContainer
+ state. This dictionary got modified during state load and caused it to be
+ registered as a changed object and it was marked as a ``_p_mongo_sub_object``
+ and had the original object as ``_p_mongo_doc_object``.
+
+
+0.5.4 (2012-03-05)
+------------------
+
+- Feature: Added a hook via the IMongoSpecProcessor adapter that gets called
+ before each find to process/log spec.
+
+0.5.3 (2012-01-16)
+------------------
+
+- Bug: ``MongoContainer`` did not emit any Zope container or lifecycle
+ events. This has been fixed by using the ``zope.container.contained``
+ helper functions.
+
+0.5.2 (2012-01-13)
+------------------
+
+- Feature: Added an interface for the ``MongoContainer`` class describing the
+ additional attributes and methods.
+
+0.5.1 (2011-12-22)
+------------------
+
+- Bug: The ``MongoContainer`` class did not implement the ``IContainer``
+ interface.
+
+0.5.0 (2011-11-04)
+------------------
+
+- Initial Release
Deleted: mongopersist/tags/0.7.2/setup.py
===================================================================
--- mongopersist/trunk/setup.py 2012-04-18 14:04:35 UTC (rev 125172)
+++ mongopersist/tags/0.7.2/setup.py 2012-04-19 11:41:27 UTC (rev 125175)
@@ -1,60 +0,0 @@
-"""Setup
-"""
-import os
-from setuptools import setup, find_packages
-
-def read(*rnames):
- text = open(os.path.join(os.path.dirname(__file__), *rnames)).read()
- return unicode(text, 'utf-8').encode('ascii', 'xmlcharrefreplace')
-
-setup (
- name='mongopersist',
- version='0.7.2.dev0',
- author = "Stephan Richter",
- author_email = "stephan.richter at gmail.com",
- description = "Mongo Persistence Backend",
- long_description=(
- read('src', 'mongopersist', 'README.txt')
- + '\n\n' +
- read('CHANGES.txt')
- ),
- license = "ZPL 2.1",
- keywords = "mongo persistent ",
- classifiers = [
- 'Development Status :: 4 - Beta',
- 'Intended Audience :: Developers',
- 'Programming Language :: Python',
- 'Programming Language :: Python :: 2',
- 'Framework :: ZODB',
- 'License :: OSI Approved :: Zope Public License',
- 'Natural Language :: English',
- 'Operating System :: OS Independent'],
- packages = find_packages('src'),
- package_dir = {'':'src'},
- extras_require = dict(
- test = (
- 'zope.app.testing',
- 'zope.testing',
- 'ZODB3',
- ),
- zope = (
- 'rwproperty',
- 'zope.container',
- ),
- ),
- install_requires = [
- 'transaction >=1.1.0',
- 'repoze.lru',
- 'pymongo',
- 'setuptools',
- 'zope.dottedname',
- 'zope.interface',
- 'zope.exceptions >=3.7.1', # required for extract_stack
- ],
- include_package_data = True,
- zip_safe = False,
- entry_points = '''
- [console_scripts]
- profile = mongopersist.performance:main
- ''',
- )
Copied: mongopersist/tags/0.7.2/setup.py (from rev 125174, mongopersist/trunk/setup.py)
===================================================================
--- mongopersist/tags/0.7.2/setup.py (rev 0)
+++ mongopersist/tags/0.7.2/setup.py 2012-04-19 11:41:27 UTC (rev 125175)
@@ -0,0 +1,60 @@
+"""Setup
+"""
+import os
+from setuptools import setup, find_packages
+
+def read(*rnames):
+ text = open(os.path.join(os.path.dirname(__file__), *rnames)).read()
+ return unicode(text, 'utf-8').encode('ascii', 'xmlcharrefreplace')
+
+setup (
+ name='mongopersist',
+ version='0.7.2',
+ author = "Stephan Richter",
+ author_email = "stephan.richter at gmail.com",
+ description = "Mongo Persistence Backend",
+ long_description=(
+ read('src', 'mongopersist', 'README.txt')
+ + '\n\n' +
+ read('CHANGES.txt')
+ ),
+ license = "ZPL 2.1",
+ keywords = "mongo persistent ",
+ classifiers = [
+ 'Development Status :: 4 - Beta',
+ 'Intended Audience :: Developers',
+ 'Programming Language :: Python',
+ 'Programming Language :: Python :: 2',
+ 'Framework :: ZODB',
+ 'License :: OSI Approved :: Zope Public License',
+ 'Natural Language :: English',
+ 'Operating System :: OS Independent'],
+ packages = find_packages('src'),
+ package_dir = {'':'src'},
+ extras_require = dict(
+ test = (
+ 'zope.app.testing',
+ 'zope.testing',
+ 'ZODB3',
+ ),
+ zope = (
+ 'rwproperty',
+ 'zope.container',
+ ),
+ ),
+ install_requires = [
+ 'transaction >=1.1.0',
+ 'repoze.lru',
+ 'pymongo',
+ 'setuptools',
+ 'zope.dottedname',
+ 'zope.interface',
+ 'zope.exceptions >=3.7.1', # required for extract_stack
+ ],
+ include_package_data = True,
+ zip_safe = False,
+ entry_points = '''
+ [console_scripts]
+ profile = mongopersist.performance:main
+ ''',
+ )
Deleted: mongopersist/tags/0.7.2/src/mongopersist/zope/container.py
===================================================================
--- mongopersist/trunk/src/mongopersist/zope/container.py 2012-04-18 14:04:35 UTC (rev 125172)
+++ mongopersist/tags/0.7.2/src/mongopersist/zope/container.py 2012-04-19 11:41:27 UTC (rev 125175)
@@ -1,321 +0,0 @@
-##############################################################################
-#
-# Copyright (c) 2011 Zope Foundation and Contributors.
-# All Rights Reserved.
-#
-# This software is subject to the provisions of the Zope Public License,
-# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
-# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
-# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
-# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
-# FOR A PARTICULAR PURPOSE.
-#
-##############################################################################
-"""Mongo Persistence Zope Containers"""
-import UserDict
-import persistent
-import pymongo.dbref
-import pymongo.objectid
-import zope.component
-from bson.errors import InvalidId
-from rwproperty import getproperty, setproperty
-from zope.container import contained, sample
-from zope.container.interfaces import IContainer
-
-from mongopersist import interfaces, serialize
-from mongopersist.zope import interfaces as zinterfaces
-
-class MongoContained(contained.Contained):
-
- @getproperty
- def __name__(self):
- return getattr(self, '_v_key', None)
- @setproperty
- def __name__(self, value):
- setattr(self, '_v_key', value)
-
- @getproperty
- def __parent__(self):
- return getattr(self, '_v_parent', None)
- @setproperty
- def __parent__(self, value):
- setattr(self, '_v_parent', value)
-
-
-class SimpleMongoContainer(sample.SampleContainer, persistent.Persistent):
- _m_remove_documents = True
-
- def __getstate__(self):
- state = super(SimpleMongoContainer, self).__getstate__()
- state['data'] = state.pop('_SampleContainer__data')
- return state
-
- def __setstate__(self, state):
- # Mongopersist always reads a dictionary as persistent dictionary. And
- # modifying this dictionary will cause the persistence mechanism to
- # kick in. So we create a new object that we can easily modify without
- # harm.
- state = dict(state)
- state['_SampleContainer__data'] = state.pop('data', {})
- super(SimpleMongoContainer, self).__setstate__(state)
-
- def __getitem__(self, key):
- obj = super(SimpleMongoContainer, self).__getitem__(key)
- obj._v_key = key
- obj._v_parent = self
- return obj
-
- def get(self, key, default=None):
- '''See interface `IReadContainer`'''
- obj = super(SimpleMongoContainer, self).get(key, default)
- if obj is not default:
- obj._v_key = key
- obj._v_parent = self
- return obj
-
- def items(self):
- items = super(SimpleMongoContainer, self).items()
- for key, obj in items:
- obj._v_key = key
- obj._v_parent = self
- return items
-
- def values(self):
- return [v for k, v in self.items()]
-
- def __setitem__(self, key, obj):
- super(SimpleMongoContainer, self).__setitem__(key, obj)
- self._p_changed = True
-
- def __delitem__(self, key):
- obj = self[key]
- super(SimpleMongoContainer, self).__delitem__(key)
- if self._m_remove_documents:
- self._p_jar.remove(obj)
- self._p_changed = True
-
-
-class MongoContainer(contained.Contained,
- persistent.Persistent,
- UserDict.DictMixin):
- zope.interface.implements(IContainer, zinterfaces.IMongoContainer)
- _m_database = None
- _m_collection = None
- _m_mapping_key = 'key'
- _m_parent_key = 'parent'
- _m_remove_documents = True
-
- def __init__(self, collection=None, database=None,
- mapping_key=None, parent_key=None):
- if collection:
- self._m_collection = collection
- if database:
- self._m_database = database
- if mapping_key is not None:
- self._m_mapping_key = mapping_key
- if parent_key is not None:
- self._m_parent_key = parent_key
-
- @property
- def _m_jar(self):
- if not hasattr(self, '_v_m_jar'):
- # If the container is in a Mongo storage hierarchy, then getting
- # the datamanager is easy, otherwise we do an adapter lookup.
- if interfaces.IMongoDataManager.providedBy(self._p_jar):
- self._v_m_jar = self._p_jar
- else:
- provider = zope.component.getUtility(
- interfaces.IMongoDataManagerProvider)
- self._v_m_jar = provider.get()
- return self._v_m_jar
-
- def get_collection(self):
- db_name = self._m_database or self._m_jar.default_database
- return self._m_jar.get_collection(db_name, self._m_collection)
-
- def _m_get_parent_key_value(self):
- if getattr(self, '_p_jar', None) is None:
- raise ValueError('_p_jar not found.')
- if interfaces.IMongoDataManager.providedBy(self._p_jar):
- return self
- else:
- return 'zodb-'+''.join("%02x" % ord(x) for x in self._p_oid).strip()
-
- def _m_get_items_filter(self):
- filter = {}
- # Make sure that we only look through objects that have the mapping
- # key. Objects not having the mapping key cannot be part of the
- # collection.
- if self._m_mapping_key is not None:
- filter[self._m_mapping_key] = {'$exists': True}
- if self._m_parent_key is not None:
- gs = self._m_jar._writer.get_state
- filter[self._m_parent_key] = gs(self._m_get_parent_key_value())
- return filter
-
- def _m_add_items_filter(self, filter):
- for key, value in self._m_get_items_filter().items():
- if key not in filter:
- filter[key] = value
-
- def _locate(self, obj, doc):
- obj._v_key = doc[self._m_mapping_key]
- obj._v_parent = self
-
- def _load_one(self, doc):
- # Create a DBRef object and then load the full state of the object.
- dbref = pymongo.dbref.DBRef(
- self._m_collection, doc['_id'],
- self._m_database or self._m_jar.default_database)
- # Stick the doc into the _latest_states:
- self._m_jar._latest_states[dbref] = doc
- obj = self._m_jar.load(dbref)
- self._locate(obj, doc)
- return obj
-
- def __getitem__(self, key):
- filter = self._m_get_items_filter()
- filter[self._m_mapping_key] = key
- obj = self.find_one(filter)
- if obj is None:
- raise KeyError(key)
- return obj
-
- def _real_setitem(self, key, value):
- # Setting these attributes by itself causes _p_changed to be set to True.
- if self._m_mapping_key is not None:
- setattr(value, self._m_mapping_key, key)
- if self._m_parent_key is not None:
- setattr(value, self._m_parent_key, self._m_get_parent_key_value())
-
- def __setitem__(self, key, value):
- # Make sure the value is in the database, since we might want to use
- # its oid.
- if value._p_oid is None:
- self._m_jar.insert(value)
- # When the key is None, we use the object id as the name.
- if key is None:
- key = unicode(value._p_oid.id)
- # We want to be as close as possible to using the Zope semantics.
- contained.setitem(self, self._real_setitem, key, value)
-
- def add(self, value, key=None):
- # We are already supporting ``None`` valued keys, which prompts the key
- # to be the OID. But people felt that a more explicit interface would
- # be better in this case.
- self[key] = value
-
- def __delitem__(self, key):
- value = self[key]
- # First remove the parent and name from the object.
- if self._m_mapping_key is not None:
- delattr(value, self._m_mapping_key)
- if self._m_parent_key is not None:
- delattr(value, self._m_parent_key)
- # Let's now remove the object from the database.
- if self._m_remove_documents:
- self._m_jar.remove(value)
- # Send the uncontained event.
- contained.uncontained(value, self, key)
-
- def __contains__(self, key):
- return self.raw_find_one(
- {self._m_mapping_key: key}, fields=()) is not None
-
- def __iter__(self):
- result = self.raw_find(
- {self._m_mapping_key: {'$ne': None}}, fields=(self._m_mapping_key,))
- for doc in result:
- yield doc[self._m_mapping_key]
-
- def keys(self):
- return list(self.__iter__())
-
- def iteritems(self):
- result = self.raw_find()
- for doc in result:
- obj = self._load_one(doc)
- yield doc[self._m_mapping_key], obj
-
- def raw_find(self, spec=None, *args, **kwargs):
- if spec is None:
- spec = {}
- self._m_add_items_filter(spec)
- coll = self.get_collection()
- return coll.find(spec, *args, **kwargs)
-
- def find(self, spec=None, *args, **kwargs):
- # Search for matching objects.
- result = self.raw_find(spec, *args, **kwargs)
- for doc in result:
- obj = self._load_one(doc)
- yield obj
-
- def raw_find_one(self, spec_or_id=None, *args, **kwargs):
- if spec_or_id is None:
- spec_or_id = {}
- if not isinstance(spec_or_id, dict):
- spec_or_id = {'_id': spec_or_id}
- self._m_add_items_filter(spec_or_id)
- coll = self.get_collection()
- return coll.find_one(spec_or_id, *args, **kwargs)
-
- def find_one(self, spec_or_id=None, *args, **kwargs):
- doc = self.raw_find_one(spec_or_id, *args, **kwargs)
- if doc is None:
- return None
- return self._load_one(doc)
-
-
-class IdNamesMongoContainer(MongoContainer):
- """A container that uses the Mongo ObjectId as the name/key."""
- _m_mapping_key = None
-
-
- @property
- def _m_remove_documents(self):
- # Objects must be removed, since removing the _id of a document is not
- # allowed.
- return True
-
- def _locate(self, obj, doc):
- obj._v_key = unicode(doc['_id'])
- obj._v_parent = self
-
- def __getitem__(self, key):
- try:
- id = pymongo.objectid.ObjectId(key)
- except InvalidId:
- raise KeyError(key)
- filter = self._m_get_items_filter()
- filter['_id'] = id
- obj = self.find_one(filter)
- if obj is None:
- raise KeyError(key)
- return obj
-
- def __contains__(self, key):
- try:
- id = pymongo.objectid.ObjectId(key)
- except InvalidId:
- return False
- return self.raw_find_one({'_id': id}, fields=()) is not None
-
- def __iter__(self):
- result = self.raw_find(fields=None)
- for doc in result:
- yield unicode(doc['_id'])
-
- def iteritems(self):
- result = self.raw_find()
- for doc in result:
- obj = self._load_one(doc)
- yield unicode(doc['_id']), obj
-
-
-class AllItemsMongoContainer(MongoContainer):
- _m_parent_key = None
-
-
-class SubDocumentMongoContainer(MongoContained, MongoContainer):
- _p_mongo_sub_object = True
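The ``IdNamesMongoContainer`` methods in the deleted file above follow a key-validation pattern worth noting: a string key is parsed into a native id first, and a parse failure is mapped to ``KeyError`` (or ``False`` for ``__contains__``) so malformed keys behave like missing keys. A minimal sketch of that pattern, independent of pymongo (``InvalidId``, ``parse_id`` and ``IdKeyedContainer`` are illustrative stand-ins, not mongopersist API):

```python
class InvalidId(ValueError):
    """Stand-in for bson.errors.InvalidId: key cannot be parsed."""


def parse_id(key):
    # Stand-in for pymongo.objectid.ObjectId(key): accept only a
    # 24-character lowercase hex string, as ObjectId does.
    if len(key) != 24 or any(c not in '0123456789abcdef' for c in key):
        raise InvalidId(key)
    return key


class IdKeyedContainer:
    """Container keyed by parsed ids, mirroring IdNamesMongoContainer."""

    def __init__(self, docs):
        self._docs = docs  # maps id -> document

    def __getitem__(self, key):
        try:
            oid = parse_id(key)
        except InvalidId:
            # A malformed key is indistinguishable from a missing one.
            raise KeyError(key)
        try:
            return self._docs[oid]
        except KeyError:
            raise KeyError(key)

    def __contains__(self, key):
        try:
            oid = parse_id(key)
        except InvalidId:
            return False
        return oid in self._docs
```

This keeps the mapping protocol honest: callers never see a parser exception leak out of ``__getitem__`` or ``__contains__``.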
Copied: mongopersist/tags/0.7.2/src/mongopersist/zope/container.py (from rev 125173, mongopersist/trunk/src/mongopersist/zope/container.py)
===================================================================
--- mongopersist/tags/0.7.2/src/mongopersist/zope/container.py (rev 0)
+++ mongopersist/tags/0.7.2/src/mongopersist/zope/container.py 2012-04-19 11:41:27 UTC (rev 125175)
@@ -0,0 +1,322 @@
+##############################################################################
+#
+# Copyright (c) 2011 Zope Foundation and Contributors.
+# All Rights Reserved.
+#
+# This software is subject to the provisions of the Zope Public License,
+# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
+# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
+# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
+# FOR A PARTICULAR PURPOSE.
+#
+##############################################################################
+"""Mongo Persistence Zope Containers"""
+import UserDict
+import persistent
+import pymongo.dbref
+import pymongo.objectid
+import zope.component
+import zope.interface
+from bson.errors import InvalidId
+from rwproperty import getproperty, setproperty
+from zope.container import contained, sample
+from zope.container.interfaces import IContainer
+
+from mongopersist import interfaces, serialize
+from mongopersist.zope import interfaces as zinterfaces
+
+class MongoContained(contained.Contained):
+
+ @getproperty
+ def __name__(self):
+ return getattr(self, '_v_key', None)
+ @setproperty
+ def __name__(self, value):
+ setattr(self, '_v_key', value)
+
+ @getproperty
+ def __parent__(self):
+ return getattr(self, '_v_parent', None)
+ @setproperty
+ def __parent__(self, value):
+ setattr(self, '_v_parent', value)
+
+
+class SimpleMongoContainer(sample.SampleContainer, persistent.Persistent):
+ _m_remove_documents = True
+
+ def __getstate__(self):
+ state = super(SimpleMongoContainer, self).__getstate__()
+ state['data'] = state.pop('_SampleContainer__data')
+ return state
+
+ def __setstate__(self, state):
+ # Mongopersist always reads a dictionary as persistent dictionary. And
+ # modifying this dictionary will cause the persistence mechanism to
+ # kick in. So we create a new object that we can easily modify without
+ # harm.
+ state = dict(state)
+ state['_SampleContainer__data'] = state.pop('data', {})
+ super(SimpleMongoContainer, self).__setstate__(state)
+
+ def __getitem__(self, key):
+ obj = super(SimpleMongoContainer, self).__getitem__(key)
+ obj._v_key = key
+ obj._v_parent = self
+ return obj
+
+ def get(self, key, default=None):
+ '''See interface `IReadContainer`'''
+ obj = super(SimpleMongoContainer, self).get(key, default)
+ if obj is not default:
+ obj._v_key = key
+ obj._v_parent = self
+ return obj
+
+ def items(self):
+ items = super(SimpleMongoContainer, self).items()
+ for key, obj in items:
+ obj._v_key = key
+ obj._v_parent = self
+ return items
+
+ def values(self):
+ return [v for k, v in self.items()]
+
+ def __setitem__(self, key, obj):
+ super(SimpleMongoContainer, self).__setitem__(key, obj)
+ self._p_changed = True
+
+ def __delitem__(self, key):
+ obj = self[key]
+ super(SimpleMongoContainer, self).__delitem__(key)
+ if self._m_remove_documents:
+ self._p_jar.remove(obj)
+ self._p_changed = True
+
+
+class MongoContainer(contained.Contained,
+ persistent.Persistent,
+ UserDict.DictMixin):
+ zope.interface.implements(IContainer, zinterfaces.IMongoContainer)
+ _m_database = None
+ _m_collection = None
+ _m_mapping_key = 'key'
+ _m_parent_key = 'parent'
+ _m_remove_documents = True
+
+ def __init__(self, collection=None, database=None,
+ mapping_key=None, parent_key=None):
+ if collection:
+ self._m_collection = collection
+ if database:
+ self._m_database = database
+ if mapping_key is not None:
+ self._m_mapping_key = mapping_key
+ if parent_key is not None:
+ self._m_parent_key = parent_key
+
+ @property
+ def _m_jar(self):
+ if not hasattr(self, '_v_mdmp'):
+ # If the container is in a Mongo storage hierarchy, then getting
+ # the datamanager is easy, otherwise we do an adapter lookup.
+ if interfaces.IMongoDataManager.providedBy(self._p_jar):
+ return self._p_jar
+
+ # cache result of expensive component lookup
+ self._v_mdmp = zope.component.getUtility(
+ interfaces.IMongoDataManagerProvider)
+
+ return self._v_mdmp.get()
+
+ def get_collection(self):
+ db_name = self._m_database or self._m_jar.default_database
+ return self._m_jar.get_collection(db_name, self._m_collection)
+
+ def _m_get_parent_key_value(self):
+ if getattr(self, '_p_jar', None) is None:
+ raise ValueError('_p_jar not found.')
+ if interfaces.IMongoDataManager.providedBy(self._p_jar):
+ return self
+ else:
+ return 'zodb-'+''.join("%02x" % ord(x) for x in self._p_oid).strip()
+
+ def _m_get_items_filter(self):
+ filter = {}
+ # Make sure that we only look through objects that have the mapping
+ # key. Objects not having the mapping key cannot be part of the
+ # collection.
+ if self._m_mapping_key is not None:
+ filter[self._m_mapping_key] = {'$exists': True}
+ if self._m_parent_key is not None:
+ gs = self._m_jar._writer.get_state
+ filter[self._m_parent_key] = gs(self._m_get_parent_key_value())
+ return filter
+
+ def _m_add_items_filter(self, filter):
+ for key, value in self._m_get_items_filter().items():
+ if key not in filter:
+ filter[key] = value
+
+ def _locate(self, obj, doc):
+ obj._v_key = doc[self._m_mapping_key]
+ obj._v_parent = self
+
+ def _load_one(self, doc):
+ # Create a DBRef object and then load the full state of the object.
+ dbref = pymongo.dbref.DBRef(
+ self._m_collection, doc['_id'],
+ self._m_database or self._m_jar.default_database)
+ # Stick the doc into the _latest_states:
+ self._m_jar._latest_states[dbref] = doc
+ obj = self._m_jar.load(dbref)
+ self._locate(obj, doc)
+ return obj
+
+ def __getitem__(self, key):
+ filter = self._m_get_items_filter()
+ filter[self._m_mapping_key] = key
+ obj = self.find_one(filter)
+ if obj is None:
+ raise KeyError(key)
+ return obj
+
+ def _real_setitem(self, key, value):
+ # Setting these attributes by itself causes _p_changed to be set to True.
+ if self._m_mapping_key is not None:
+ setattr(value, self._m_mapping_key, key)
+ if self._m_parent_key is not None:
+ setattr(value, self._m_parent_key, self._m_get_parent_key_value())
+
+ def __setitem__(self, key, value):
+ # Make sure the value is in the database, since we might want to use
+ # its oid.
+ if value._p_oid is None:
+ self._m_jar.insert(value)
+ # When the key is None, we use the object id as the name.
+ if key is None:
+ key = unicode(value._p_oid.id)
+ # We want to be as close as possible to using the Zope semantics.
+ contained.setitem(self, self._real_setitem, key, value)
+
+ def add(self, value, key=None):
+ # We are already supporting ``None`` valued keys, which prompts the key
+ # to be the OID. But people felt that a more explicit interface would
+ # be better in this case.
+ self[key] = value
+
+ def __delitem__(self, key):
+ value = self[key]
+ # First remove the parent and name from the object.
+ if self._m_mapping_key is not None:
+ delattr(value, self._m_mapping_key)
+ if self._m_parent_key is not None:
+ delattr(value, self._m_parent_key)
+ # Let's now remove the object from the database.
+ if self._m_remove_documents:
+ self._m_jar.remove(value)
+ # Send the uncontained event.
+ contained.uncontained(value, self, key)
+
+ def __contains__(self, key):
+ return self.raw_find_one(
+ {self._m_mapping_key: key}, fields=()) is not None
+
+ def __iter__(self):
+ result = self.raw_find(
+ {self._m_mapping_key: {'$ne': None}}, fields=(self._m_mapping_key,))
+ for doc in result:
+ yield doc[self._m_mapping_key]
+
+ def keys(self):
+ return list(self.__iter__())
+
+ def iteritems(self):
+ result = self.raw_find()
+ for doc in result:
+ obj = self._load_one(doc)
+ yield doc[self._m_mapping_key], obj
+
+ def raw_find(self, spec=None, *args, **kwargs):
+ if spec is None:
+ spec = {}
+ self._m_add_items_filter(spec)
+ coll = self.get_collection()
+ return coll.find(spec, *args, **kwargs)
+
+ def find(self, spec=None, *args, **kwargs):
+ # Search for matching objects.
+ result = self.raw_find(spec, *args, **kwargs)
+ for doc in result:
+ obj = self._load_one(doc)
+ yield obj
+
+ def raw_find_one(self, spec_or_id=None, *args, **kwargs):
+ if spec_or_id is None:
+ spec_or_id = {}
+ if not isinstance(spec_or_id, dict):
+ spec_or_id = {'_id': spec_or_id}
+ self._m_add_items_filter(spec_or_id)
+ coll = self.get_collection()
+ return coll.find_one(spec_or_id, *args, **kwargs)
+
+ def find_one(self, spec_or_id=None, *args, **kwargs):
+ doc = self.raw_find_one(spec_or_id, *args, **kwargs)
+ if doc is None:
+ return None
+ return self._load_one(doc)
+
+
+class IdNamesMongoContainer(MongoContainer):
+ """A container that uses the Mongo ObjectId as the name/key."""
+ _m_mapping_key = None
+
+
+ @property
+ def _m_remove_documents(self):
+ # Objects must be removed, since removing the _id of a document is not
+ # allowed.
+ return True
+
+ def _locate(self, obj, doc):
+ obj._v_key = unicode(doc['_id'])
+ obj._v_parent = self
+
+ def __getitem__(self, key):
+ try:
+ id = pymongo.objectid.ObjectId(key)
+ except InvalidId:
+ raise KeyError(key)
+ filter = self._m_get_items_filter()
+ filter['_id'] = id
+ obj = self.find_one(filter)
+ if obj is None:
+ raise KeyError(key)
+ return obj
+
+ def __contains__(self, key):
+ try:
+ id = pymongo.objectid.ObjectId(key)
+ except InvalidId:
+ return False
+ return self.raw_find_one({'_id': id}, fields=()) is not None
+
+ def __iter__(self):
+ result = self.raw_find(fields=None)
+ for doc in result:
+ yield unicode(doc['_id'])
+
+ def iteritems(self):
+ result = self.raw_find()
+ for doc in result:
+ obj = self._load_one(doc)
+ yield unicode(doc['_id']), obj
+
+
+class AllItemsMongoContainer(MongoContainer):
+ _m_parent_key = None
+
+
+class SubDocumentMongoContainer(MongoContained, MongoContainer):
+ _p_mongo_sub_object = True
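The ``_m_jar`` property in the copied file implements the caching the 0.7.1 changelog mentions ("caching of ``_m_jar`` lookups"): the expensive component lookup runs once and is memoized on a volatile ``_v_*`` attribute, which ZODB-style persistence discards when the object is deactivated, so the cache cannot outlive the object's in-memory state. A minimal sketch of that strategy, with an illustrative ``lookup`` callable standing in for ``zope.component.getUtility``:

```python
class Container:
    """Sketch of the volatile-attribute memoization used by _m_jar."""

    def __init__(self, lookup):
        self._lookup = lookup  # stand-in for the utility lookup

    @property
    def jar(self):
        if not hasattr(self, '_v_provider'):
            # The expensive lookup happens only on the first access;
            # later accesses hit the cached volatile attribute.
            self._v_provider = self._lookup()
        return self._v_provider.get()
```

Note that the property caches the provider, not the jar itself, so each access still asks the provider for the current data manager.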