[Checkins] SVN: zc.async/trunk/s add context-aware zope 3 job; add setUp and tearDown hooks in job; make some small text cleanups.

Gary Poster gary at zope.com
Thu Jul 3 20:45:24 EDT 2008


Log message for revision 87992:
  add context-aware zope 3 job; add setUp and tearDown hooks in job; make some small text cleanups.

Changed:
  U   zc.async/trunk/setup.py
  U   zc.async/trunk/src/zc/async/CHANGES.txt
  U   zc.async/trunk/src/zc/async/README.txt
  U   zc.async/trunk/src/zc/async/TODO.txt
  U   zc.async/trunk/src/zc/async/dispatcher.py
  U   zc.async/trunk/src/zc/async/job.py
  A   zc.async/trunk/src/zc/async/tips.txt
  U   zc.async/trunk/src/zc/async/utils.py
  A   zc.async/trunk/src/zc/async/z3.py
  A   zc.async/trunk/src/zc/async/z3.txt
  U   zc.async/trunk/src/zc/async/z3tests.py

-=-
Modified: zc.async/trunk/setup.py
===================================================================
--- zc.async/trunk/setup.py	2008-07-04 00:29:45 UTC (rev 87991)
+++ zc.async/trunk/setup.py	2008-07-04 00:45:24 UTC (rev 87992)
@@ -71,7 +71,7 @@
 
 setup(
     name='zc.async',
-    version='1.3a6',
+    version='1.3a7',
     packages=find_packages('src'),
     package_dir={'':'src'},
     zip_safe=False,
@@ -110,6 +110,9 @@
     extras_require={
         'z3':[
             'zc.z3monitor',
+            'zope.security',
+            'zope.app.security',
+            'zope.app.component',
             'simplejson',
             ]},
     include_package_data=True,

Modified: zc.async/trunk/src/zc/async/CHANGES.txt
===================================================================
--- zc.async/trunk/src/zc/async/CHANGES.txt	2008-07-04 00:29:45 UTC (rev 87991)
+++ zc.async/trunk/src/zc/async/CHANGES.txt	2008-07-04 00:45:24 UTC (rev 87992)
@@ -1,8 +1,29 @@
 1.3 (unreleased)
 ================
 
+- added ``setUp`` and ``tearDown`` hooks to the Job class so that code can run
+  before and after the main job's code.  The output of ``setUp`` is passed as
+  an argument to ``tearDown`` so that one can pass state to the other, if
+  needed.  ``setUp`` is run immediately before the actual job call.
+  ``tearDown`` runs after the transaction is committed, or after it is aborted
+  if there was a failure.  A retry requested by a retry policy causes the
+  methods to be run again.  A failure in ``setUp`` is considered to be a
+  failure in the job, as far as the retry policy is concerned (i.e., the job
+  calls the retry policy's ``jobError`` method).  If ``setUp`` fails, the job
+  is not called, but ``tearDown`` is.  A failure in ``tearDown`` is logged at
+  the critical level, but processing then continues.
+
+- using the new ``setUp`` and ``tearDown`` hooks, added a Zope 3-specific Job
+  subclass (see zc.async.z3.Job) that remembers the zope.app.component site and
+  interaction participants when instantiated. These can be mutated. Then, when
+  the job is run, the ``setUp`` sets up the site and a security interaction
+  with the old participants, and then the ``tearDown`` tears it all down after
+  the transaction has committed.
+
 - changed retry policy logs to "WARNING" level, from "INFO" level.
 
+- changed many dispatcher errors to "CRITICAL" level from "ERROR" level.
+
 - added "CRITICAL" level logs for "other" commit retries on the
   RetryCommonForever retry policy.
 
@@ -17,9 +38,17 @@
 
 - remove odd full-path self-references within the utils module.
 
+- renamed ``zc.async.utils.try_transaction_five_times`` to
+  ``zc.async.utils.try_five_times``.
+
 - doc improvements and fixes (thanks to Zvezdan Petkovic and Gintautas
   Miliauskas).
 
+- the ``z3`` "extra" distutils target now explicitly depends on zope.security,
+  zope.app.security, and zope.app.component.  This almost certainly does not
+  increase the practical dependencies of the ``z3`` extras, but it does reflect
+  new direct dependencies of the z3-specific modules in the package.
+
 1.2 (2008-06-20)
 ================
 

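The hook contract described in the CHANGES entry above can be sketched in plain
Python. This is an illustrative stand-in only, assuming nothing beyond the
described contract: ``MiniJob`` is a hypothetical class, not the real
``zc.async.job.Job``, which also manages transactions, retry policies, and
failure logging.

```python
# Minimal sketch of the setUp/tearDown hook contract described above.
# MiniJob is a hypothetical stand-in, NOT zc.async.job.Job.

class MiniJob:
    def __init__(self, callable):
        self.callable = callable
        self.torn_down_with = None

    def setUp(self):
        # runs immediately before the job call; its return value is
        # handed to tearDown so the two hooks can share state
        return {'resource': 'opened'}

    def tearDown(self, setup_info):
        # runs after the job (in the real class, after the commit or
        # abort), even when the job itself raised
        self.torn_down_with = setup_info

    def __call__(self):
        setup_info = self.setUp()
        try:
            return self.callable()
        finally:
            self.tearDown(setup_info)

job = MiniJob(lambda: 6 * 7)
print(job())               # 42
print(job.torn_down_with)  # {'resource': 'opened'}
```

The key design point mirrored here is that ``setUp``'s output travels to
``tearDown``, so paired resources (a site, an interaction, a connection) can be
restored without extra instance state.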
Modified: zc.async/trunk/src/zc/async/README.txt
===================================================================
--- zc.async/trunk/src/zc/async/README.txt	2008-07-04 00:29:45 UTC (rev 87991)
+++ zc.async/trunk/src/zc/async/README.txt	2008-07-04 00:45:24 UTC (rev 87992)
@@ -344,29 +344,27 @@
 Overview
 --------
 
-The result of a call to `put` returns an IJob.  The
-job represents the pending result.  This object has a lot of
-functionality that's explored in other documents in this package, and
-demonstrated a bit below, but here's a summary.
+A call to ``put`` returns an ``IJob``. The job represents the pending result.
+This object has a lot of functionality that's explored in other documents in
+this package, and demonstrated a bit below, but here's a summary.
 
-- You can introspect, and even modify, the call and its
-  arguments.
+- You can introspect, and even modify, the call and its arguments.
 
-- You can specify that the job should be run serially with others
-  of a given identifier.
+- You can specify that the job should be run serially with others of a given
+  identifier.
 
-- You can specify other calls that should be made on the basis of the
-  result of this call.
+- You can specify other calls that should be made on the basis of the result of
+  this call.
 
-- You can persist a reference to it, and periodically (after syncing
-  your connection with the database, which happens whenever you begin or
-  commit a transaction) check its `state` to see if it is equal to
-  zc.async.interfaces.COMPLETED.  When it is, the call has run to
-  completion, either to success or an exception.
+- You can persist a reference to it, and periodically (after syncing your
+  connection with the database, which happens whenever you begin or commit a
+  transaction) check its ``status`` to see if it is equal to
+  ``zc.async.interfaces.COMPLETED``. When it is, the call has run to completion,
+  either to success or an exception.
 
-- You can look at the result of the call (once COMPLETED).  It might be
-  the result you expect, or a zc.twist.Failure, which is a
-  subclass of twisted.python.failure.Failure, way to safely communicate
+- You can look at the result of the call (once ``COMPLETED``). It might be the
+  result you expect, or a ``zc.twist.Failure``, a subclass of
+  ``twisted.python.failure.Failure``, which is a way to safely communicate
   exceptions across connections and machines and processes.
 
 -------
@@ -374,7 +372,7 @@
 -------
 
 So here's a simple story.  What if you want to get a result back from a
-call?  Look at the job.result after the call is COMPLETED.
+call?  Look at the job.result after the call is ``COMPLETED``.
 
     >>> def imaginaryNetworkCall():
     ...     # let's imagine this makes a network call...
@@ -397,7 +395,7 @@
 Closures
 --------
 
-What's more, you can pass a Job to the `put` call.  This means that you
+What's more, you can pass a Job to the ``put`` call.  This means that you
 aren't constrained to simply having simple non-argument calls performed
 asynchronously, but you can pass a job with a call, arguments, and
 keyword arguments--effectively, a kind of closure.  Here's a quick example.
@@ -875,10 +873,10 @@
 .. Footnotes ..
 .. ......... ..
 
-.. [#async_history] The first generation, zasync, had the following goals:
+.. [#async_history] The first generation, ``zasync``, had the following goals:
 
-    - be scalable, so that another process or machine could do the
-      asynchronous work;
+    - be scalable, so that another process or machine could do the asynchronous
+      work;
 
     - support lengthy jobs outside of the ZODB;
 
@@ -886,8 +884,8 @@
 
     - be recoverable, so that crashes would not lose work;
 
-    - be discoverable, so that logs and web interfaces give a view into
-      the work being done asynchronously;
+    - be discoverable, so that logs and web interfaces give a view into the
+      work being done asynchronously;
 
     - be easily extendible, to do new jobs; and
 
@@ -895,114 +893,105 @@
 
     It met its goals well in some areas and adequately in others.
 
-    Based on experience with the first generation, this second
-    generation identifies several areas of improvement from the first
-    design, and adds several goals.
+    Based on experience with the first generation, this second generation
+    identifies several areas of improvement from the first design, and adds
+    several goals.
 
     - Improvements
 
       * More carefully delineate the roles of the comprising components.
 
-        The zasync design has three main components, as divided by their
-        roles: persistent deferreds, now called jobs; job queues (the
-        original zasync's "asynchronous call manager"); and dispatchers
-        (the original zasync ZEO client).  The zasync 1.x design
-        blurred the lines between the three components such that the
-        component parts could only be replaced with difficulty, if at
-        all. A goal for the 2.x design is to clearly define the role for
-        each of three components such that, for instance, a user of a
-        queue does not need to know about the dispatcher or the agents.
+        The zc.async design has three main components, as divided by their
+        roles: persistent deferreds, now called jobs; job queues (the original
+        zasync's "asynchronous call manager"); and dispatchers (the original
+        zasync ZEO client). The zasync 1.x design blurred the lines between the
+        three components such that the component parts could only be replaced
+        with difficulty, if at all. A goal for the 2.x design is to clearly
+        define the role for each of the three components such that, for
+        instance, a user of a queue does not need to know about the dispatcher
+        or the agents.
 
       * Improve scalability of asynchronous workers.
 
-        The 1.x line was initially designed for a single asynchronous
-        worker, which could be put on another machine thanks to ZEO.
-        Tarek Ziade of Nuxeo wrote zasyncdispatcher, which allowed
-        multiple asynchronous workers to accept work, allowing multiple
-        processes and multiple machines to divide and conquer. It worked
-        around the limitations of the original zasync design to provide
-        even more scalability. However, it was forced to divide up work
-        well before a given worker looks at the queue.
+        The 1.x line was initially designed for a single asynchronous worker,
+        which could be put on another machine thanks to ZEO. Tarek Ziade of
+        Nuxeo wrote zasyncdispatcher, which allowed multiple asynchronous
+        workers to accept work, allowing multiple processes and multiple
+        machines to divide and conquer. It worked around the limitations of the
+        original zasync design to provide even more scalability. However, it
+        was forced to divide up work well before a given worker looks at the
+        queue.
 
-        While dividing work earlier allows guesses and heuristics a
-        chance to predict what worker might be more free in the future,
-        a more reliable approach is to let the worker gauge whether it
-        should take a job at the time the job is taken. Perhaps the
-        worker will choose based on the worker's load, or other
-        concurrent jobs in the process, or other details. A goal for the
-        2.x line is to more directly support this type of scalability.
+        While dividing work earlier allows guesses and heuristics a chance to
+        predict what worker might be more free in the future, a more reliable
+        approach is to let the worker gauge whether it should take a job at the
+        time the job is taken. Perhaps the worker will choose based on the
+        worker's load, or other concurrent jobs in the process, or other
+        details. A goal for the 2.x line is to more directly support this type
+        of scalability.
 
       * Improve scalability of registering jobs.
 
-        The 1.x line initially wasn't concerned about very many
-        concurrent asynchronous requests.  When this situation was
-        encountered, it caused ConflictErrors between the worker process
-        reading the deferred queue and the code that was adding the
-        deferreds.  Thanks to Nuxeo, this problem was addressed in the
-        1.x line.  A goal for the new version is to include and improve
-        upon the 1.x solution.
+        The 1.x line initially wasn't concerned about very many concurrent
+        asynchronous requests. When this situation was encountered, it caused
+        ConflictErrors between the worker process reading the deferred queue
+        and the code that was adding the deferreds. Thanks to Nuxeo, this
+        problem was addressed in the 1.x line. A goal for the new version is to
+        include and improve upon the 1.x solution.
 
       * Make it even simpler to provide new jobs.
 
-        In the first version, `plugins` performed jobs.  They had a
-        specific API and they had to be configured.  A goal for the new
-        version is to require no specific API for jobs, and to not
-        require any configuration.
+        In the first version, `plugins` performed jobs. They had a specific API
+        and they had to be configured. A goal for the new version is to require
+        no specific API for jobs, and to not require any configuration.
 
       * Improve report information, especially through the web.
 
-        The component that the first version of zasync provided to do
-        the asynchronous work, the zasync client, provided very verbose
-        logs of the jobs done, but they were hard to read and also did
-        not have a through- the-web parallel.  Two goals for the new
-        version are to improve the usefulness of the filesystem logs and
-        to include more complete through-the-web visibility of the
-        status of the provided asynchronous clients.
+        The component that the first version of zasync provided to do the
+        asynchronous work, the zasync client, provided very verbose logs of the
+        jobs done, but they were hard to read and also did not have a through-
+        the-web parallel. Two goals for the new version are to improve the
+        usefulness of the filesystem logs and to include more complete through-
+        the-web visibility of the status of the provided asynchronous clients.
 
       * Make it easier to configure and start, especially for small
         deployments.
 
-        A significant barrier to experimentation and deployment of the
-        1.x line was the difficulty in configuration.  The 1.x line
-        relied on ZConfig for zasync client configuration, demanding
-        non-extensible similar-yet-subtly-different .conf files like the
-        Zope conf files. The 2.x line plans to provide code that Zope 3
-        can configure to run in the same process as a standard Zope 3
-        application.  This means that development instances can start a
-        zasync quickly and easily.  It also means that processes can be
-        reallocated on the fly during production use, so that a machine
-        being used as a zasync process can quickly be converted to a web
-        server, if needed, and vice versa.  It further means that the
-        Zope web server can be used for through-the-web reports of the
-        current zasync process state.
+        A significant barrier to experimentation and deployment of the 1.x line
+        was the difficulty in configuration. The 1.x line relied on ZConfig for
+        zasync client configuration, demanding non-extensible
+        similar-yet-subtly-different .conf files like the Zope conf files. The
+        2.x line provides code that Zope 3 can configure to run in the same
+        process as a standard Zope 3 application. This means that development
+        instances can start a zasync quickly and easily. It also means that
+        processes can be reallocated on the fly during production use, so that
+        a machine being used as a zasync process can quickly be converted to a
+        web server, if needed, and vice versa.
 
     - New goals
 
-      * Support intermediate return calls so that jobs can report back
-        how they are doing.
+      * Support intermediate return calls so that jobs can report back how they
+        are doing.
 
-        A frequent request from users of zasync 1.x was the ability for
-        a long- running asynchronous process to report back progress to
-        the original requester.  The 2.x line addresses this with three
-        changes:
+        A frequent request from users of zasync 1.x was the ability for a long-
+        running asynchronous process to report back progress to the original
+        requester. The 2.x line addresses this with three changes:
 
         + jobs are annotatable;
 
-        + jobs should not be modified in an asynchronous
-          worker that does work (though they may be read);
+        + jobs should not be modified in an asynchronous worker that does work
+          (though they may be read);
 
-        + jobs can request another job in a synchronous process
-          that annotates the job with progress status or other
-          information.
+        + jobs can request another job in a synchronous process that annotates
+          the job with progress status or other information.
 
-        Because of relatively recent changes in ZODB--multi version
-        concurrency control--this simple pattern should not generate
-        conflict errors.
+        Because of relatively recent changes in ZODB--multi version concurrency
+        control--this simple pattern should not generate conflict errors.
 
       * Support time-delayed calls.
 
-        Retries and other use cases make time-delayed deferred calls
-        desirable. The new design supports these sort of calls.
+        Retries and other use cases make time-delayed deferred calls desirable.
+        The new design supports these sorts of calls.
 
 .. [#identifying_agent] The combination of a queue name plus a
     dispatcher UUID plus an agent name uniquely identifies an agent.
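The status-polling pattern described in the README overview above can be
sketched as follows. ``FakeJob``, ``PENDING``, and the ``COMPLETED`` constant
are illustrative stand-ins; the real values live in ``zc.async.interfaces`` and
on the job objects returned by ``put``.

```python
# Sketch of the polling pattern from the overview: check the job's status
# until it is COMPLETED, then read its result (which may be a Failure).
# All names here are hypothetical stand-ins for zc.async equivalents.

COMPLETED = 'completed-status'
PENDING = 'pending-status'

class FakeJob:
    def __init__(self):
        self.status = PENDING
        self.result = None

    def _finish(self, result):
        # what the dispatcher would do when the call completes
        self.status = COMPLETED
        self.result = result

job = FakeJob()
job._finish(42)  # pretend the dispatcher ran the job

# after syncing the connection (begin/commit), check the status:
if job.status == COMPLETED:
    print(job.result)  # 42
```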

Modified: zc.async/trunk/src/zc/async/TODO.txt
===================================================================
--- zc.async/trunk/src/zc/async/TODO.txt	2008-07-04 00:29:45 UTC (rev 87991)
+++ zc.async/trunk/src/zc/async/TODO.txt	2008-07-04 00:45:24 UTC (rev 87992)
@@ -9,41 +9,8 @@
 - show how to use with collapsing jobs (hint to future self: use external queue
   to put in work, and have job(s) just pull what they can see from queue)
 
-- write tips and tricks
-
-  * avoid long transactions if possible.  really avoid long transactions
-    involving frequently written objects.  Discuss ramifications and
-    strategies, such as doing big work in one job, then in callback schedule
-    actually writing the data into the hotspot.
-
-  * in zope.app.testing.functional tests, zc.async doesn't do well being
-    started in a layer's setup because then it is associated with the
-    wrapped layer DB, and the test is associated with the DemoStorage wrapper,
-    so that the test can see what zc.async does, but zc.async can't see what
-    the test does.  The current workaround is to start the dispatcher in the
-    test or the test set up (but, again, *not* The layer set up).
-
-  * In tests, don't check to see if poll is activated until after the first
-    poll. Try ``zc.async.testing.get_poll(zc.async.dispatcher.get(), 0)``, for
-    instance.
-
-  * In tests, be aware that DemoStorage does not support mvcc and does not
-    support conflict resolution, so you may experience ConflictError (write and
-    particularly read) problems with it that you will not experience as much,
-    or at all, with a storage that supports those features such as FileStorage.
-    Notice that all of the tests in this package use FileStorage.
-
-  * callbacks should be very, very quick, and very reliable.  If you want to do
-    something that might take a while, put another job in the queue
-
 - custom retry policies, particularly for non-transactional tasks;
 
 - changing the default retry policy, per-process and per-agent; and
 
 - changing the default log level for queue jobs, callback jobs, and per-job.
-
-
-For some other package, maybe:
-
-- TTW Management and logging views, as in zasync (see goals in the "History"
-  section of the README).

Modified: zc.async/trunk/src/zc/async/dispatcher.py
===================================================================
--- zc.async/trunk/src/zc/async/dispatcher.py	2008-07-04 00:29:45 UTC (rev 87991)
+++ zc.async/trunk/src/zc/async/dispatcher.py	2008-07-04 00:45:24 UTC (rev 87992)
@@ -347,14 +347,14 @@
             'in queue %s (oid %d)' % (
                 self.UUID, agent.name, ZODB.utils.u64(agent._p_oid),
                 agent.queue.name, ZODB.utils.u64(agent.queue._p_oid)))
-        res = zc.async.utils.try_transaction_five_times(
+        res = zc.async.utils.try_five_times(
             agent.claimJob, identifier, transaction)
         if isinstance(res, twisted.python.failure.Failure):
             identifier = 'stashing failure on agent %s (oid %s)' % (
                 agent.name, ZODB.utils.u64(agent._p_oid))
             def setFailure():
                 agent.failure = res
-            zc.async.utils.try_transaction_five_times(
+            zc.async.utils.try_five_times(
                 setFailure, identifier, transaction)
         return res
 
@@ -393,7 +393,7 @@
                                 return False
                         da.activate()
                         return True
-                    if zc.async.utils.try_transaction_five_times(
+                    if zc.async.utils.try_five_times(
                         activate, identifier, transaction) is True:
                         self._activated.add(queue._p_oid)
                     else:
@@ -454,7 +454,7 @@
                                 (job._p_oid, dbname, info))
                             job = self._getJob(agent)
                 identifier = 'committing ping for UUID %s' % (self.UUID,)
-                zc.async.utils.try_transaction_five_times(
+                zc.async.utils.try_five_times(
                     lambda: queue.dispatchers.ping(self.UUID), identifier,
                     transaction)
                 if len(pools) > len(queue_info):
@@ -554,7 +554,7 @@
                             da = queue.dispatchers.get(self.UUID)
                             if da is not None and da.activated:
                                 da.deactivate()
-                zc.async.utils.try_transaction_five_times(
+                zc.async.utils.try_five_times(
                     deactivate_das, identifier, transaction)
             finally:
                 transaction.abort()

Modified: zc.async/trunk/src/zc/async/job.py
===================================================================
--- zc.async/trunk/src/zc/async/job.py	2008-07-04 00:29:45 UTC (rev 87991)
+++ zc.async/trunk/src/zc/async/job.py	2008-07-04 00:45:24 UTC (rev 87992)
@@ -221,6 +221,14 @@
         self.callbacks = zc.queue.PersistentQueue()
         self.annotations = BTrees.OOBTree.OOBTree()
 
+    def setUp(self):
+        # a hook (see z3.py, for instance) used in __call__
+        pass
+
+    def tearDown(self, setup_info):
+        # a hook (see z3.py, for instance) used in __call__
+        pass
+
     @property
     def active_start(self):
         return self._active_start
@@ -492,13 +500,20 @@
         res = None
         while 1:
             try:
+                setup_info = self.setUp()
                 res = self.callable(*effective_args, **effective_kwargs)
             except zc.async.utils.EXPLOSIVE_ERRORS:
                 tm.abort()
+                zc.async.utils.try_five_times(
+                    lambda: self.tearDown(setup_info),
+                    'tearDown for %r' % self, tm, commit=False)
                 raise
             except:
                 res = zc.twist.Failure()
                 tm.abort()
+                zc.async.utils.try_five_times(
+                    lambda: self.tearDown(setup_info),
+                    'tearDown for %r' % self, tm, commit=False)
                 retry = self._getRetry('jobError', tm, res, data_cache)
                 if isinstance(retry, (datetime.timedelta, datetime.datetime)):
                     identifier = (
@@ -516,10 +531,16 @@
                 callback = self._set_result(res, tm, data_cache)
             except zc.async.utils.EXPLOSIVE_ERRORS:
                 tm.abort()
+                zc.async.utils.try_five_times(
+                    lambda: self.tearDown(setup_info),
+                    'tearDown for %r' % self, tm, commit=False)
                 raise
             except:
                 failure = zc.twist.Failure()
                 tm.abort()
+                zc.async.utils.try_five_times(
+                    lambda: self.tearDown(setup_info),
+                    'tearDown for %r' % self, tm, commit=False)
                 retry = self._getRetry('commitError', tm, failure, data_cache)
                 if isinstance(retry, (datetime.timedelta, datetime.datetime)):
                     identifier = (
@@ -560,6 +581,10 @@
                 identifier = 'storing failure at commit of %r' % (self,)
                 zc.async.utils.never_fail(complete, identifier, tm)
                 callback = True
+            else:
+                zc.async.utils.try_five_times(
+                    lambda: self.tearDown(setup_info),
+                    'tearDown for %r' % self, tm, commit=False)
             if callback:
                 self._log_completion(res)
                 identifier = 'performing callbacks of %r' % (self,)

Added: zc.async/trunk/src/zc/async/tips.txt
===================================================================
--- zc.async/trunk/src/zc/async/tips.txt	                        (rev 0)
+++ zc.async/trunk/src/zc/async/tips.txt	2008-07-04 00:45:24 UTC (rev 87992)
@@ -0,0 +1,31 @@
+===============
+Tips and Tricks
+===============
+
+General
+=======
+
+* Avoid long transactions if possible.  Really avoid long transactions
+  involving frequently written objects.  Discuss ramifications and
+  strategies, such as doing the big work in one job, then scheduling a
+  callback that actually writes the data into the hotspot. XXX
+
+* Callbacks should be very, very quick, and very reliable.  If you want to do
+  something that might take a while, put another job in the queue.
+
+Testing
+=======
+
+* In tests, don't check to see whether the poll is activated until after the
+  first poll.  Try ``zc.async.testing.get_poll(zc.async.dispatcher.get(), 0)``,
+  for instance.
+
+* In tests, be aware that DemoStorage does not support MVCC and does not
+  support conflict resolution, so you may experience ConflictError (write and
+  particularly read) problems with it that you will not experience as much,
+  or at all, with a storage, such as FileStorage, that supports those
+  features.  Notice that all of the tests in this package use FileStorage.
+
+.. insert the z3.txt document here
+
+.. insert the ftesting.txt document here.


Property changes on: zc.async/trunk/src/zc/async/tips.txt
___________________________________________________________________
Name: svn:eol-style
   + native
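The callback tip in tips.txt above can be sketched like this. ``FakeQueue`` is
a hypothetical stand-in for a zc.async queue (the real queue's ``put`` returns
an ``IJob`` and persists it); the point is only that the callback itself stays
trivial and defers the slow work.

```python
# Sketch of the tip above: keep the callback quick and reliable, and
# enqueue the slow work as a separate job.  FakeQueue is a hypothetical
# stand-in for a zc.async queue.

class FakeQueue:
    def __init__(self):
        self.jobs = []

    def put(self, callable):
        # the real put returns an IJob wrapping the callable
        self.jobs.append(callable)
        return callable

queue = FakeQueue()

def slow_postprocessing(result):
    # imagine expensive work here
    return result * 2

def quick_callback(result):
    # fast and reliable: just schedule the heavy part as a new job
    queue.put(lambda: slow_postprocessing(result))
    return result

quick_callback(21)
print(queue.jobs[0]())  # 42
```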

Modified: zc.async/trunk/src/zc/async/utils.py
===================================================================
--- zc.async/trunk/src/zc/async/utils.py	2008-07-04 00:29:45 UTC (rev 87991)
+++ zc.async/trunk/src/zc/async/utils.py	2008-07-04 00:45:24 UTC (rev 87992)
@@ -301,18 +301,19 @@
         else:
             return res
 
-def try_transaction_five_times(call, identifier, tm):
+def try_five_times(call, identifier, tm, commit=True):
     ct = 0
     res = None
     while 1:
         try:
             res = call()
-            tm.commit()
+            if commit:
+                tm.commit()
         except ZODB.POSException.TransactionError:
             tm.abort()
             ct += 1
             if ct >= 5:
-                log.error('Five consecutive transaction errors while %s',
+                log.critical('Five consecutive transaction errors while %s',
                           identifier, exc_info=True)
                 res = zc.twist.Failure()
             else:
@@ -322,6 +323,6 @@
             raise
         except:
             tm.abort()
-            log.error('Error while %s', identifier, exc_info=True)
+            log.critical('Error while %s', identifier, exc_info=True)
             res = zc.twist.Failure()
         return res
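The renamed ``try_five_times`` helper above can be mimicked without ZODB. In
this hedged sketch, ``TransientError`` stands in for
``ZODB.POSException.TransactionError``, and a plain ``commit`` callable stands
in for the transaction manager with the new ``commit=True`` switch.

```python
# Pure-Python sketch of the try_five_times retry pattern above.
# TransientError is a stand-in for ZODB.POSException.TransactionError;
# abort handling is simplified away.  Up to five attempts, then give up
# with a critical-style message, matching the log-level change above.

class TransientError(Exception):
    pass

def try_five_times(call, identifier, commit=None):
    ct = 0
    while True:
        try:
            res = call()
            if commit is not None:
                commit()  # only commit when asked, like commit=True
        except TransientError:
            ct += 1
            if ct >= 5:
                print('CRITICAL: five consecutive transaction errors '
                      'while %s' % identifier)
                return None
            continue  # retry
        return res

attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        raise TransientError()
    return 'ok'

print(try_five_times(flaky, 'demo'))  # ok
```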

Added: zc.async/trunk/src/zc/async/z3.py
===================================================================
--- zc.async/trunk/src/zc/async/z3.py	                        (rev 0)
+++ zc.async/trunk/src/zc/async/z3.py	2008-07-04 00:45:24 UTC (rev 87992)
@@ -0,0 +1,50 @@
+# code specific to use within Zope 3
+
+import zope.component
+import zope.interface
+import zope.security.interfaces
+import zope.security.management
+import zope.app.component.hooks
+import zope.app.security.interfaces
+
+import zc.async.job
+
+
+class Participation(object):
+    zope.interface.implements(zope.security.interfaces.IParticipation)
+    interaction = principal = None
+
+    def __init__(self, principal):
+        self.principal = principal
+
+class Job(zc.async.job.Job):
+    # a job that examines the site and interaction participants when it is
+    # created, and reestablishes them when run, tearing down as necessary.
+
+    site = None
+    participants = ()
+
+    def __init__(self, *args, **kwargs):
+        super(Job, self).__init__(*args, **kwargs)
+        site = zope.app.component.hooks.getSite()
+        self.site = site
+        interaction = zope.security.management.queryInteraction()
+        if interaction is not None:
+            self.participants = tuple(
+                participation.principal.id for participation in
+                interaction.participations)
+
+    def setUp(self):
+        old_site = zope.app.component.hooks.getSite()
+        zope.app.component.hooks.setSite(self.site)
+        if self.participants:
+            auth = zope.component.getUtility(
+                zope.app.security.interfaces.IAuthentication)
+            zope.security.management.newInteraction(
+                *(Participation(auth.getPrincipal(principal_id)) for
+                  principal_id in self.participants))
+        return old_site
+
+    def tearDown(self, setup_info):
+        zope.app.component.hooks.setSite(setup_info)
+        zope.security.management.endInteraction()
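The save/restore pattern the hooks implement above can be mimicked without any
Zope packages. In this sketch, ``hooks`` is a hypothetical stand-in for
``zope.app.component.hooks`` (a module-level "current site"), and
``ContextJob`` compresses what ``zc.async.z3.Job`` does for the site (the
interaction handling is omitted).

```python
# Sketch of the site save/restore pattern used by setUp/tearDown above.
# hooks and ContextJob are hypothetical stand-ins; no Zope required.

class hooks:
    _site = None

    @classmethod
    def getSite(cls):
        return cls._site

    @classmethod
    def setSite(cls, site):
        cls._site = site

class ContextJob:
    def __init__(self, callable):
        self.callable = callable
        self.site = hooks.getSite()  # remember the site at instantiation

    def setUp(self):
        old_site = hooks.getSite()   # returned so tearDown can restore it
        hooks.setSite(self.site)
        return old_site

    def tearDown(self, old_site):
        hooks.setSite(old_site)

    def __call__(self):
        old_site = self.setUp()
        try:
            return self.callable()
        finally:
            self.tearDown(old_site)

hooks.setSite('my-site')
job = ContextJob(lambda: hooks.getSite())  # sees the remembered site
hooks.setSite(None)
print(job())            # my-site
print(hooks.getSite())  # None (restored)
```

Note that in the real class ``tearDown`` runs only after the transaction has
committed, which the simple ``finally`` here does not capture.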

Added: zc.async/trunk/src/zc/async/z3.txt
===================================================================
--- zc.async/trunk/src/zc/async/z3.txt	                        (rev 0)
+++ zc.async/trunk/src/zc/async/z3.txt	2008-07-04 00:45:24 UTC (rev 87992)
@@ -0,0 +1,167 @@
+----------------------------
+The Context-Aware Zope 3 Job
+----------------------------
+
+If you use Zope 3, sometimes you want async jobs that have local sites and
+security set up. ``zc.async.z3.Job`` is a subclass of the main
+``zc.async.job.Job`` implementation that leverages the ``setUp`` and
+``tearDown`` hooks to accomplish this.
+
+When instantiated, it records the site and the ids of the principals in the
+security context. These values can be mutated. When the job runs, it sets the
+context up for the job's code, and tears it down after the work has been
+committed (or aborted, if there was a failure). This can be very convenient for
+jobs that care about site-based component registries, or that care about the
+participants in zope.security interactions.
+
+This is different from a ``try: finally:`` wrapper around your main code that
+does the work, both because it is handled for you transparently, and because
+the context is cleaned up *after* the job's main transaction is committed. This
+means that code that expects a security or site context during a
+pre-transaction hook will be satisfied.
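The ordering described above can be sketched roughly as follows. This is an editorial sketch with hypothetical class and method names, not the actual ``zc.async.job.Job`` implementation (which also handles retries, failures, and transaction aborts); it only illustrates that ``setUp``'s return value is handed to ``tearDown``, and that cleanup happens after the commit:

```python
class HookedJob(object):
    # Hypothetical sketch of the setUp/tearDown contract.
    def setUp(self):
        return None                   # state handed to tearDown

    def tearDown(self, setup_info):
        pass

    def do_work(self):                # stand-in for the job's callable
        return 'result'

    def commit(self):                 # stand-in for the transaction commit
        pass

    def __call__(self):
        setup_info = self.setUp()
        try:
            result = self.do_work()
            self.commit()             # context is still set up here, so
                                      # pre-commit hooks see site/security
        finally:
            self.tearDown(setup_info) # cleaned up only after commit/abort
        return result


class ContextJob(HookedJob):
    # Records the order of calls to illustrate the flow.
    def __init__(self):
        self.calls = []

    def setUp(self):
        self.calls.append('setUp')
        return 'old-context'          # e.g. the previously active site

    def commit(self):
        self.calls.append('commit')

    def tearDown(self, setup_info):
        self.calls.append('tearDown(%s)' % setup_info)
```

Running ``ContextJob()()`` shows the commit happening before the teardown, with ``tearDown`` receiving what ``setUp`` returned.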
+
+For instance, let's imagine we have a database, and we establish a local site
+and an interaction with a request. [#zope3job_database_setup]_ [#empty_setup]_
+Unfortunately, this is a lot of setup. [#zope3job_setup]_
+
+    >>> import zope.app.component.hooks
+    >>> zope.app.component.hooks.setSite(site)
+    >>> import zope.security.management
+    >>> import zc.async.z3
+    >>> zope.security.management.newInteraction(
+    ...     zc.async.z3.Participation(mickey)) # usually would be a request
+
+Now we create a new job.
+
+    >>> def reportOnContext():
+    ...     print (zope.app.component.hooks.getSite().__class__.__name__,
+    ...             tuple(p.principal.id for p in
+    ...             zope.security.management.getInteraction().participations))
+    >>> j = root['j'] = zc.async.z3.Job(reportOnContext)
+
+The ids of the principals in the participations in the current interaction
+are in a ``participants`` tuple.  The site is on the job's ``site`` attribute.
+
+    >>> j.participants
+    ('mickey',)
+    >>> j.site is site
+    True
+
+If we end the interaction, clear the local site, and run the job, the output
+of the wrapped callable (``reportOnContext`` above) shows that the context was
+correctly in place.
+
+    >>> zope.security.management.endInteraction()
+    >>> zope.app.component.hooks.setSite(None)
+    >>> transaction.commit()
+    >>> j()
+    ('StubSite', ('mickey',))
+
+However, now the site and interaction are empty.
+
+    >>> print zope.security.management.queryInteraction()
+    None
+    >>> print zope.app.component.hooks.getSite()
+    None
+
+As mentioned, the context will be maintained through the transaction's commit.
+Let's illustrate.
+
+    >>> import zc.async
+    >>> import transaction.interfaces
+    >>> def setTransactionHook():
+    ...     t = transaction.interfaces.ITransactionManager(j).get()
+    ...     t.addBeforeCommitHook(reportOnContext)
+    ...
+    >>> zope.app.component.hooks.setSite(site)
+    >>> zope.security.management.newInteraction(
+    ...     zc.async.z3.Participation(mickey), zc.async.z3.Participation(jack),
+    ...     zc.async.z3.Participation(foo)) # >1 == rare but possible scenario
+    >>> j = root['j'] = zc.async.z3.Job(setTransactionHook)
+    >>> j.participants
+    ('mickey', 'jack', 'foo')
+    >>> j.site is site
+    True
+
+    >>> zope.security.management.endInteraction()
+    >>> zope.app.component.hooks.setSite(None)
+    >>> transaction.commit()
+    >>> j()
+    ('StubSite', ('mickey', 'jack', 'foo'))
+
+    >>> print zope.security.management.queryInteraction()
+    None
+    >>> print zope.app.component.hooks.getSite()
+    None
+
+
+.. [#zope3job_database_setup]
+
+    >>> from ZODB.tests.util import DB
+    >>> db = DB()
+    >>> conn = db.open()
+    >>> root = conn.root()
+
+    >>> import zc.async.configure
+    >>> zc.async.configure.base()
+
+    >>> import zc.async.testing
+    >>> zc.async.testing.setUpDatetime() # pins datetimes
+
+.. [#empty_setup] Without a site or an interaction, you can still instantiate
+    and run the job normally.
+
+    >>> import zc.async.z3
+    >>> import operator
+    >>> j = root['j'] = zc.async.z3.Job(operator.mul, 6, 7)
+    >>> j.participants
+    ()
+    >>> print j.site
+    None
+    >>> import transaction
+    >>> transaction.commit()
+    >>> j()
+    42
+
+.. [#zope3job_setup] To do this, we need to set up the zope.app.component
+    hooks, create a site, set up an authentication utility, and create some
+    principals that the authentication utility can return.
+
+    >>> import zope.app.component.hooks
+    >>> zope.app.component.hooks.setHooks()
+
+    >>> import zope.app.component.site
+    >>> import persistent
+    >>> class StubSite(persistent.Persistent,
+    ...                zope.app.component.site.SiteManagerContainer):
+    ...     pass
+    >>> site = root['site'] = StubSite()
+    >>> sm = zope.app.component.site.LocalSiteManager(site)
+    >>> site.setSiteManager(sm)
+
+    >>> import zope.security.interfaces
+    >>> import zope.app.security.interfaces
+    >>> import zope.interface
+    >>> import zope.location
+    >>> class StubPrincipal(object):
+    ...     zope.interface.implements(zope.security.interfaces.IPrincipal)
+    ...     def __init__(self, identifier, title, description=''):
+    ...         self.id = identifier
+    ...         self.title = title
+    ...         self.description = description
+    ...
+    >>> class StubPersistentAuth(persistent.Persistent,
+    ...                          zope.location.Location):
+    ...     zope.interface.implements(
+    ...         zope.app.security.interfaces.IAuthentication)
+    ...     _mapping = {'foo': 'Foo Fighter',
+    ...                 'jack': 'Jack, Giant Killer',
+    ...                 'mickey': 'Mickey Mouse'}
+    ...     def getPrincipal(self, principal_id):
+    ...         return StubPrincipal(principal_id, self._mapping[principal_id])
+    ...
+    >>> auth = StubPersistentAuth()
+    >>> sm.registerUtility(auth, zope.app.security.interfaces.IAuthentication)
+    >>> transaction.commit()
+    >>> mickey = auth.getPrincipal('mickey')
+    >>> jack = auth.getPrincipal('jack')
+    >>> foo = auth.getPrincipal('foo')


Property changes on: zc.async/trunk/src/zc/async/z3.txt
___________________________________________________________________
Name: svn:eol-style
   + native

Modified: zc.async/trunk/src/zc/async/z3tests.py
===================================================================
--- zc.async/trunk/src/zc/async/z3tests.py	2008-07-04 00:29:45 UTC (rev 87991)
+++ zc.async/trunk/src/zc/async/z3tests.py	2008-07-04 00:45:24 UTC (rev 87992)
@@ -16,8 +16,8 @@
 from zope.testing import doctest, module
 import zope.component.testing
 import zc.ngi.async # to quiet the thread complaints from the testing
-# infrastructure, because there is no API way to stop the z3monitor server or
-# the zc.ngi.async thread. :-(
+# infrastructure, because there is no API way as of this writing to stop the
+# z3monitor server or the zc.ngi.async thread. :-(
 
 import zc.async.tests
 
@@ -39,6 +39,7 @@
     return unittest.TestSuite((
         doctest.DocFileSuite(
             'monitor.txt',
+            'z3.txt',
             setUp=setUp, tearDown=zc.async.tests.modTearDown,
             optionflags=doctest.INTERPRET_FOOTNOTES),
         doctest.DocFileSuite(


