[Checkins] SVN: zc.async/branches/dev/ ready for a merge to trunk and a 1.0, I guess

Gary Poster gary at zope.com
Wed Apr 9 23:12:28 EDT 2008


Log message for revision 85211:
  ready for a merge to trunk and a 1.0, I guess

Changed:
  A   zc.async/branches/dev/CHANGES.txt
  U   zc.async/branches/dev/buildout.cfg
  U   zc.async/branches/dev/setup.py
  A   zc.async/branches/dev/src/zc/async/CHANGES.txt
  U   zc.async/branches/dev/src/zc/async/README.txt
  U   zc.async/branches/dev/src/zc/async/README_2.txt
  U   zc.async/branches/dev/src/zc/async/README_3.txt
  U   zc.async/branches/dev/src/zc/async/TODO.txt
  U   zc.async/branches/dev/src/zc/async/dispatcher.txt
  U   zc.async/branches/dev/src/zc/async/monitor.txt
  U   zc.async/branches/dev/src/zc/async/subscribers.txt

-=-
Added: zc.async/branches/dev/CHANGES.txt
===================================================================
--- zc.async/branches/dev/CHANGES.txt	                        (rev 0)
+++ zc.async/branches/dev/CHANGES.txt	2008-04-10 03:12:27 UTC (rev 85211)
@@ -0,0 +1 @@
+Please see CHANGES.txt in the zc.async package.
\ No newline at end of file


Property changes on: zc.async/branches/dev/CHANGES.txt
___________________________________________________________________
Name: svn:eol-style
   + native

Modified: zc.async/branches/dev/buildout.cfg
===================================================================
--- zc.async/branches/dev/buildout.cfg	2008-04-09 23:20:11 UTC (rev 85210)
+++ zc.async/branches/dev/buildout.cfg	2008-04-10 03:12:27 UTC (rev 85211)
@@ -10,8 +10,6 @@
 find-links =
     http://download.zope.org/distribution
 
-extensions = zc.buildoutsftp
-
 [test]
 recipe = zc.recipe.testrunner
 eggs = zc.async

Modified: zc.async/branches/dev/setup.py
===================================================================
--- zc.async/branches/dev/setup.py	2008-04-09 23:20:11 UTC (rev 85210)
+++ zc.async/branches/dev/setup.py	2008-04-10 03:12:27 UTC (rev 85211)
@@ -2,16 +2,27 @@
 
 from setuptools import setup, find_packages
 
+long_description = (
+    open('src/zc/async/README.txt').read() + "\n" +
+    open('src/zc/async/README_2.txt').read() + "\n" +
+    open('src/zc/async/README_3.txt').read() +
+    "\n\n=======\nChanges\n=======\n\n" +
+    open('src/zc/async/CHANGES.txt').read() + "\n")
+
+f = open('TEST_THIS_REST_BEFORE_REGISTERING.txt', 'w')
+f.write(long_description)
+f.close()
+
 setup(
     name='zc.async',
-    version='1.0b1',
+    version='1.0',
     packages=find_packages('src'),
     package_dir={'':'src'},
     zip_safe=False,
     author='Zope Project',
     author_email='zope-dev at zope.org',
     description='Perform durable tasks asynchronously',
-    long_description=open('src/zc/async/README.txt').read(),
+    long_description=long_description,
     license='ZPL',
     install_requires=[
         'ZODB3',
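
A minimal sketch of how the generated ``TEST_THIS_REST_BEFORE_REGISTERING.txt``
might be checked with docutils before running ``setup.py register`` (the helper
below is hypothetical and not part of this commit):

    # check_rest.py -- hypothetical helper, not part of this commit
    import docutils.core

    f = open('TEST_THIS_REST_BEFORE_REGISTERING.txt')
    source = f.read()
    f.close()
    # Render to HTML, halting on warnings or worse; markup problems at that
    # level are roughly what would make PyPI fall back to plain text.
    docutils.core.publish_string(
        source, writer_name='html',
        settings_overrides={'halt_level': 2, 'report_level': 2})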

Added: zc.async/branches/dev/src/zc/async/CHANGES.txt
===================================================================
--- zc.async/branches/dev/src/zc/async/CHANGES.txt	                        (rev 0)
+++ zc.async/branches/dev/src/zc/async/CHANGES.txt	2008-04-10 03:12:27 UTC (rev 85211)
@@ -0,0 +1,4 @@
+1.0 (2008-04-09)
+================
+
+Initial release.
\ No newline at end of file


Property changes on: zc.async/branches/dev/src/zc/async/CHANGES.txt
___________________________________________________________________
Name: svn:eol-style
   + native

Modified: zc.async/branches/dev/src/zc/async/README.txt
===================================================================
--- zc.async/branches/dev/src/zc/async/README.txt	2008-04-09 23:20:11 UTC (rev 85210)
+++ zc.async/branches/dev/src/zc/async/README.txt	2008-04-10 03:12:27 UTC (rev 85211)
@@ -40,8 +40,8 @@
   quick fix to move the work out-of-band.
 
 Many of these core use cases involve end-users being able to start potentially
-expensive processes, on demand.  None of them are explicitly about scheduled
-tasks, though scheduled tasks can benefit from this package.
+expensive processes, on demand. Basic scheduled tasks are also provided by this
+package, though you must arrange any recurrence yourself.
 
 Multiple processes can claim and perform jobs.  Jobs can be (manually)
 decomposed for serial or parallel processing of the component parts.
@@ -57,9 +57,9 @@
 Design Overview
 ===============
 
------
-Usage
------
+---------------
+Overview: Usage
+---------------
 
 Looking at the design from the perspective of regular usage, your code
 obtains a ``queue``, which is a place to queue jobs to be performed
@@ -79,15 +79,15 @@
 is complete; or it can be configured to do additional work when it
 completes.
 
----------
-Mechanism
----------
+-------------------
+Overview: Mechanism
+-------------------
 
 Multiple processes, typically spread across multiple machines, can use
 ZEO to connect to the queue and claim and perform work.  As with other
 collections of processes that share a database with ZEO, these processes
 generally should share the same software (though some variations on this
-constraint might be theoretically possible).
+constraint should be possible).
 
 A process that should claim and perform work, in addition to a database
 connection and the necessary software, needs a ``dispatcher`` with a
@@ -127,9 +127,8 @@
 Reading More
 ============
 
-This document continues on with four other main sections: `Usage`_,
-`Configuration Without Zope 3`_, `Configuration With Zope 3`_, and
-`Tips and Tricks`.
+This document continues on with three other main sections: `Usage`_,
+`Configuration without Zope 3`_, and `Configuration with Zope 3`_.
 
 Other documents in the package are primarily geared as maintainer
 documentation, though the author has tried to make them readable and
@@ -154,7 +153,7 @@
 dispatchers, reactors and agents all waiting to fulfill jobs placed into
 the queue.  We start with a connection object, ``conn``, and some
 convenience functions introduced along the way that help us simulate
-time passing and work being done[#usageSetUp]_.
+time passing and work being done [#usageSetUp]_.
 
 -------------------
 Obtaining the queue
@@ -162,8 +161,8 @@
 
 First, how do we get the queue?  Your installation may have some
 conveniences.  For instance, the Zope 3 configuration described below
-makes it possible to get the primary queue with a call to
-``zope.component.getUtility(zc.async.interfaces.IQueue)``.
+makes it possible to get the primary queue with an adaptation call like
+``zc.async.interfaces.IQueue(a_persistent_object_with_db_connection)``.
 
 But failing that, queues are always expected to be in a zc.async.queue.Queues
 mapping found off the ZODB root in a key defined by the constant
@@ -178,11 +177,10 @@
     >>> isinstance(queues, zc.async.queue.Queues)
     True
 
-As the name implies, ``queues`` is a collection of queues.  It's
-possible to have multiple queues, as a tool to distribute and control
-work.  We will assume a convention of a queue being available in the ''
-(empty string).  This is followed in the Zope 3 configuration discussed
-below.
+As the name implies, ``queues`` is a collection of queues. As discussed later,
+it's possible to have multiple queues, as a tool to distribute and control
+work. We will assume a convention of a queue being available in the '' (empty
+string).
 
     >>> queues.keys()
     ['']
@@ -202,6 +200,10 @@
     >>> import transaction
     >>> transaction.commit()
 
+Note that this won't really work in an interactive session: the callable needs
+to be picklable, as discussed above, so ``send_message`` would need to be
+a module global, for instance.
+
 The ``put`` returned a job.  Now we need to wait for the job to be
 performed.  We would normally do this by really waiting.  For our
 examples, we will use a helper function called ``wait_for`` to wait for
@@ -213,8 +215,8 @@
 We also could have used the method of a persistent object.  Here's another
 quick example.
 
-First we define a simple persistent.Persistent subclass and put it in the
-database[#commit_for_multidatabase]_.
+First we define a simple persistent.Persistent subclass and put an instance of
+it in the database [#commit_for_multidatabase]_.
 
     >>> import persistent
     >>> class Demo(persistent.Persistent):
@@ -238,9 +240,10 @@
 
 The method was called, and the persistent object modified!
 
-To reiterate, only persistent callables and the methods of persistent
-objects can be used.  This rules out, for instance, closures.  As we'll
-see below, the job instance can help us out there.
+To reiterate, only picklable callables such as global functions and the
+methods of persistent objects can be used. This rules out, for instance,
+lambdas and other functions created dynamically. As we'll see below, the job
+instance can help us out there somewhat by offering closure-like features.
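
For instance (a rough sketch in the spirit of the surrounding doctests, not one
of them), instead of capturing an argument in a closure, you can bind it to a
``zc.async.job.Job``, just as the ``operator.mul`` examples later in this
checkin do:

    >>> import operator
    >>> import zc.async.job
    >>> job = queue.put(zc.async.job.Job(operator.mul, 6, 7))
    >>> transaction.commit()

The callable and its arguments are stored on the job, so nothing needs to be
captured in a closure.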
 
 ---------------
 Scheduled Calls
@@ -275,9 +278,9 @@
     True
 
 If you set a time that has already passed, it will be run as if it had
-been set to run as soon as possible[#already_passed]_...unless the job
+been set to run as soon as possible [#already_passed]_...unless the job
 has already timed out, in which case the job fails with an
-abort[#already_passed_timed_out]_.
+abort [#already_passed_timed_out]_.
 
 The queue's `put` method is the essential API.  Other methods are used
 to introspect, but are not needed for basic usage.
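
For example, here is a rough sketch of the scheduling spelling (both ``pytz``
and the ``begin_after`` keyword name are assumptions here; neither is shown in
this excerpt):

    >>> import datetime
    >>> import pytz
    >>> one_hour = datetime.timedelta(hours=1)
    >>> job = queue.put(  # ``begin_after`` is an assumed keyword name
    ...     send_message, begin_after=datetime.datetime.now(pytz.UTC) + one_hour)
    >>> transaction.commit()
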
@@ -295,9 +298,9 @@
 The result of a call to `put` returns an IJob.  The
 job represents the pending result.  This object has a lot of
 functionality that's explored in other documents in this package, and
-demostrated a bit below, but here's a summary.  
+demonstrated a bit below, but here's a summary.  
 
-- You can introspect it to look at, and even modify, the call and its
+- You can introspect, and even modify, the call and its
   arguments.
 
 - You can specify that the job should be run serially with others
@@ -405,15 +408,16 @@
 ---------
 
 You can register callbacks to handle the result of a job, whether a
-Failure or another result.  These callbacks can be thought of as the
-"except" "else" or "finally" clauses of a "try" statement.  Each
-callback receives the job's current result as input, and its output
-becomes the job's new result (and therefore the input of the next
-callback, if any).
+Failure or another result.
 
-Note that, during execution of a callback, there is no guarantee that
+Note that, unlike callbacks on a Twisted deferred, these callbacks do not
+change the result of the original job. Since callbacks are jobs, you can chain
+results, but generally callbacks for the same job all get the same result as
+input.
+
+Also note that, during execution of a callback, there is no guarantee that
 the callback will be processed on the same machine as the main call.  Also,
-the ``local`` functions will not work.
+some of the ``local`` functions, discussed below, will not work as desired.
 
 Here's a simple example of reacting to a success.
 
@@ -448,8 +452,20 @@
     >>> callback2.result
     'I handled a name error: SCRIBBLED'
 
+Advanced Techniques and Tools
+=============================
+
+**Important**
+
+The job and its functionality described above are the core zc.async tools.
+
+The following are advanced techniques and tools of varying complexity. You can
+use zc.async very productively without ever understanding or using them. If
+they do not make sense to you yet, please just move on for now.
+
+--------------
 zc.async.local
-==============
+--------------
 
 Jobs always run their callables in a thread, within the context of a
 connection to the ZODB. The callables have access to five special
@@ -468,7 +484,7 @@
     middle of other work.
     
     As a simple rule, only send immutable objects like strings or
-    numbers as values[#setLiveAnnotation]_.
+    numbers as values [#setLiveAnnotation]_.
 
 ``zc.async.local.getLiveAnnotation(name, default=None, timeout=0, poll=1, job=None)``
     The ``getLiveAnnotation`` tells the agent to get an annotation for a job,
@@ -477,15 +493,15 @@
     middle of other work.  
     
     As a simple rule, only ask for annotation values that will be
-    immutable objects like strings or numbers[#getLiveAnnotation]_.
+    immutable objects like strings or numbers [#getLiveAnnotation]_.
 
-    If the ``timeout`` argument is set to a positive float or int, the
-    function will wait that at least that number of seconds until an
-    annotation of the given name is available. Otherwise, it will return
-    the ``default`` if the name is not present in the annotations.   The
-    ``poll`` argument specifies approximately how often to poll for the
-    annotation, in seconds, though as the timeout period approaches the
-    next poll will be min(poll, remaining seconds to timeout).
+    If the ``timeout`` argument is set to a positive float or int, the function
+    will wait at least that number of seconds until an annotation of the
+    given name is available. Otherwise, it will return the ``default`` if the
+    name is not present in the annotations. The ``poll`` argument specifies
+    approximately how often to poll for the annotation, in seconds (to be more
+    precise, a subsequent poll will be min(poll, remaining seconds until
+    timeout) seconds away).
 
 ``zc.async.local.getReactor()``
     The ``getReactor`` function returns the job's dispatcher's reactor.  The
@@ -542,8 +558,9 @@
 [#stats_2]_ ``getReactor`` and ``getDispatcher`` are for advanced use
 cases and are not explored further here.
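
As a rough sketch (not one of the package's doctests) of how the live
annotation functions above might be used to report progress from inside a job
callable (the exact ``setLiveAnnotation`` signature is an assumption based on
the description above):

    >>> def chunked_work():
    ...     # runs in a worker thread; zc.async.local is the same name used
    ...     # throughout this document
    ...     for i in range(10):
    ...         # ... do one chunk of the real work here ...
    ...         # (name, value) signature assumed from the description above
    ...         zc.async.local.setLiveAnnotation('chunks done', i + 1)
    ...     return 'finished'
    ...

Another job, or monitoring code, could then poll for that value with the
``timeout`` and ``poll`` arguments described above, for example
``zc.async.local.getLiveAnnotation('chunks done', default=0, timeout=5)``.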
 
+----------
 Job Quotas
-==========
+----------
 
 One class of asynchronous jobs is ideally serialized.  For instance,
 you may want to reduce or eliminate the chance of conflict errors when
@@ -633,8 +650,15 @@
 This may be valuable when a needed resource is only available in limited
 numbers at a time.
 
+Note that, while quotas are valuable tools for doing serialized work such as
+updating a text index, other optimization features sometimes useful for this
+sort of task, such as collapsing similar jobs, are not provided directly by
+this package. This functionality could be trivially built on top of zc.async,
+however [#idea_for_collapsing_jobs]_.
+
+--------------
 Returning Jobs
-==============
+--------------
 
 Our examples so far have done work directly.  What if the job wants to
 orchestrate other work?  One way this can be done is to return another
@@ -644,7 +668,6 @@
 other jobs; and to make parts of a job that can be parallelized available
 to more workers.
 
----------------
 Serialized Work
 ---------------
 
@@ -678,7 +701,6 @@
 separation of code: dividing code that does work from code that
 orchestrates the jobs.  We'll see an example of the idea below.
 
------------------
 Parallelized Work
 -----------------
 
@@ -722,8 +744,6 @@
     ...     job.args.append(result)
     ...     if len(job.args) == 3: # all results are in
     ...         zc.async.local.getJob().queue.put(job)
-    ...     return result # unnecessary; just keeps this job's result
-    ...     # from changing
     ...
     >>> def main_job():
     ...     job = zc.async.job.Job(post_process)
@@ -734,6 +754,10 @@
     ...     return job
     ...
 
+That may be a bit mind-blowing at first.  The key point is that, because
+``main_job`` returns a job, the result of the returned (``post_process``) job
+becomes the result of ``main_job`` once that returned job is done.
+
 Now we'll put this in and let it cook.
 
     >>> job = queue.put(main_job)
@@ -750,13 +774,12 @@
 
 Ta-da!
 
-A further polish to this solution would eliminate the chance for conflict
-errors by making the callbacks put their work into jobs with
-serialized_ids.  You'd also probably want to deal with the possibility of
-one or more of the jobs generating a Failure.
+For real-world usage, you'd also probably want to deal with the possibility of
+one or more of the jobs generating a Failure, among other edge cases.
 
+-------------------
 Returning Deferreds
-===================
+-------------------
 
 What if you want to do work that doesn't require a ZODB connection?  You
 can also return a Twisted deferred (twisted.internet.defer.Deferred).
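
Here is a minimal sketch (not one of the package's doctests) of a job callable
returning a deferred; how the deferred eventually fires is left abstract:

    >>> import twisted.internet.defer
    >>> def non_zodb_work():
    ...     d = twisted.internet.defer.Deferred()
    ...     # something reactor-driven, needing no ZODB connection, must
    ...     # eventually call d.callback(result) or d.errback(failure)
    ...     return d
    ...
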
@@ -787,9 +810,11 @@
 Conclusion
 ==========
 
-This concludes our discussion of zc.async usage.  The next section shows
-how to configure zc.async without Zope 3[#stop_reactor]_.
+This concludes our discussion of zc.async usage. The `next section`_ shows how
+to configure zc.async without Zope 3 [#stop_usage_reactor]_.
 
+.. _next section: `Configuration without Zope 3`_
+
 .. ......... ..
 .. Footnotes ..
 .. ......... ..
@@ -1030,7 +1055,7 @@
 
 .. [#commit_for_multidatabase] We commit before we do the next step as a
     good practice, in case the queue is from a different database than
-    the root.  See the `Tips and Tricks`_ section for a discussion about
+    the root.  See the configuration sections for a discussion about
     why putting the queue in another database might be a good idea. 
     
     Rather than committing the transaction,
@@ -1169,8 +1194,15 @@
      >>> info['poll id'] is not None
      True
 
-.. [#stop_reactor] 
+.. [#idea_for_collapsing_jobs] For instance, here is one approach.  Imagine
+    you are queueing jobs that index documents. If the queue already holds
+    other requests to index the same document, the job could simply walk the
+    queue and remove (``pull``) those similar tasks, perhaps aggregating any
+    necessary data. Since the jobs are serialized by a quota, no other worker
+    should be trying to work on those jobs.
 
+.. [#stop_usage_reactor] 
+
     >>> pprint.pprint(dispatcher.getStatistics()) # doctest: +ELLIPSIS
     {'failed': 2,
      'longest active': None,

Modified: zc.async/branches/dev/src/zc/async/README_2.txt
===================================================================
--- zc.async/branches/dev/src/zc/async/README_2.txt	2008-04-09 23:20:11 UTC (rev 85210)
+++ zc.async/branches/dev/src/zc/async/README_2.txt	2008-04-10 03:12:27 UTC (rev 85211)
@@ -1,19 +1,14 @@
 ============================
-Configuration Without Zope 3
+Configuration without Zope 3
 ============================
 
 This section discusses setting up zc.async without Zope 3. Since Zope 3 is
 ill-defined, we will be more specific: this describes setting up zc.async
 without ZCML, without any zope.app packages, and with as few dependencies as
-possible. A casual way of describing the dependencies is "ZODB and
-zope.component"[#specific_dependencies]_.
+possible. A casual way of describing the dependencies is "ZODB, Twisted and
+zope.component," though we directly depend on some smaller packages and
+indirectly on others [#specific_dependencies]_.
 
-The next section, `Configuration With Zope 3`_, still tries to limit
-dependencies, but includes both ZCML and indirect and direct dependencies on a
-few "zope.app.*" packages like zope.app.appsetup. It still is minimal enough
-that someone wanting to avoid ZCML, for instance, might still find valuable
-information.
-
 You may have one or two kinds of configurations for your software using
 zc.async. The simplest approach is to have all processes able both to put items
 in queues, and to perform them with a dispatcher. You can then use on-the-fly
@@ -46,13 +41,13 @@
 --------------------------------
 
 The required registrations can be installed for you by the
-``zc.async.configure.base`` function. Most other documents in this package,
-such as those in the "Usage" section (found in README.txt), use this in their
+``zc.async.configure.base`` function. Most other examples in this package,
+such as those in the `Usage`_ section, use this in their
 test setup. 
 
-**Again, for a quick start, you might just want to use the helper
-``zc.async.configure.base`` function, and move on to the ``Required ZODB Set
-Up``_ section below.**
+Again, for a quick start, you might just want to use the helper
+``zc.async.configure.base`` function, and move on to the `Required ZODB Set
+Up`_ section below.
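
For reference, that quick start is just the call below (a sketch assuming, per
the description above, that ``base`` needs no arguments):

    >>> import zc.async.configure
    >>> zc.async.configure.base()  # assumed to need no arguments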
 
 Here, though, we will go over each required registration to briefly explain
 what they are.
@@ -104,9 +99,9 @@
 
 The UUID we register here is a UUID of the instance, which is expected
 to uniquely identify the process when in production. It is stored in
-the file specified by the ZC_ASYNC_UUID environment variable (or in
+the file specified by the ``ZC_ASYNC_UUID`` environment variable (or in
 ``os.path.join(os.getcwd(), 'uuid.txt')`` if this is not specified, for easy
-experimentation).
+initial experimentation with the package).
 
     >>> import uuid
     >>> import os
@@ -180,11 +175,11 @@
 
 All we must have for a client to be able to put jobs in a queue is...a queue.
 
-For a quick start, the ``zc.async.subscribers`` module provides some a subscriber to
+For a quick start, the ``zc.async.subscribers`` module provides a subscriber to
 a DatabaseOpened event that does the right dance. See
 ``multidb_queue_installer`` and ``queue_installer`` in that module, and you can
-see that in use in the Zope 3 configuration section (in README_3). For now,
-though, we're taking things step by step and explaining what's going on.
+see that in use in `Configuration with Zope 3`_. For now, though, we're taking
+things step by step and explaining what's going on.
 
 Dispatchers look for queues in a mapping off the root of the database in 
 a key defined as a constant: zc.async.interfaces.KEY.  This mapping should
@@ -235,22 +230,22 @@
 should know these characteristics:
 
 - you cannot add a job with a quota name that is not defined in the
-  queue[#undefined_quota_name]_;
+  queue [#undefined_quota_name]_;
 
 - you cannot add a quota name to a job in a queue if the quota name is not
-  defined in the queue[#no_mutation_to_undefined]_;
+  defined in the queue [#no_mutation_to_undefined]_;
 
-- you can create and remove quotas on the queue[#create_remove_quotas]_;
+- you can create and remove quotas on the queue [#create_remove_quotas]_;
 
 - you can remove quotas if pending jobs have their quota names--the quota name
-  is then ignored[#remove_quotas]_;
+  is then ignored [#remove_quotas]_;
 
-- quotas default to a size of 1[#default_size]_;
+- quotas default to a size of 1 [#default_size]_;
 
-- this can be changed at creation or later[#change_size]_; and
+- this can be changed at creation or later [#change_size]_; and
 
 - decreasing the size of a quota while the old quota size is filled will
-  not affect the currently running jobs[#decreasing_affects_future]_.
+  not affect the currently running jobs [#decreasing_affects_future]_.
 
 Multiple Queues
 ---------------
@@ -284,15 +279,15 @@
 You often want to daemonize your software, so that you can restart it if
 there's a problem, keep track of it and monitor it, and so on.  ZDaemon
 (http://pypi.python.org/pypi/zdaemon) and Supervisor (http://supervisord.org/)
-are two fairly simple ways of doing this for both client and client/server
-processes.  If your main application can be packaged as a setuptools
-distribution (egg or source release or even development egg) then you can
-have your main application as a zc.async client and your dispatchers running
-a separate zc.async-only main loop that simply includes your main application
-as a dependency, so the necessary software is around.  You may have to do a
-bit more configuration on the client/server side to mimic global registries
-such as zope.component registrations and so on between the client and the
-client/servers, but this shouldn't be too bad.
+are two fairly simple-to-use ways of doing this for both client and
+client/server processes. If your main application can be packaged as a
+setuptools distribution (egg or source release or even development egg) then
+you can have your main application as a zc.async client and your dispatchers
+running a separate zc.async-only main loop that simply includes your main
+application as a dependency, so the necessary software is around. You may have
+to do a bit more configuration on the client/server side to mimic global
+registries such as zope.component registrations and so on between the client
+and the client/servers, but this shouldn't be too bad.
 
 UUID File Location
 ------------------
@@ -327,7 +322,7 @@
 need to set up and start a reactor and dispatcher; configure agents as desired
 to get the dispatcher to do some work; and optionally configure logging.
 
-For a quick start, the ``zc.async.subscribers`` module have some conveniences
+For a quick start, the ``zc.async.subscribers`` module has some conveniences
 to start a threaded reactor and dispatcher, and to install agents.  You might
 want to look at those to get started.  They are also used in the Zope 3
 configuration (README_3).  Meanwhile, this document continues to go
@@ -374,11 +369,13 @@
 you would like to see a minimal contract.
 
 Configuring the basics is fairly simple, as we'll see in a moment.  The
-trickiest part is to handle signals cleanly.  Here we install signal
-handlers in the main thread using ``reactor._handleSignals``.  This may
-work in some real-world applications, but if your application already
-needs to handle signals you may need a more careful approach. Again, see
-``zc.async.subscribers`` for some options you can explore.
+trickiest part is to handle signals cleanly. It is also optional! The
+dispatcher will eventually figure out that there was not a clean shutdown
+before and take care of it. Here, though, essentially as an optimization, we
+install signal handlers in the main thread using ``reactor._handleSignals``.
+``reactor._handleSignals`` may work in some real-world applications, but if
+your application already needs to handle signals you may need a more careful
+approach. Again, see ``zc.async.subscribers`` for some options you can explore.
 
     >>> import twisted.internet.selectreactor
     >>> reactor = twisted.internet.selectreactor.SelectReactor()
@@ -428,7 +425,7 @@
     ['main']
     >>> agent = dispatcher_agents['main']
 
-Now we have our agent!  But...what is it[#stop_reactor]_?
+Now we have our agent!  But...what is it [#stop_config_reactor]_?
 
 ------
 Agents
@@ -509,8 +506,9 @@
 you need.  Be sure to auto-rotate the trace logs.
 
 The package supports monitoring using zc.z3monitor, but using this package
-includes more Zope 3 dependencies, so it is not included here.  If you would
-like to use it, see monitor.txt and README_3.
+includes more Zope 3 dependencies, so it is not included here. If you would
+like to use it, see monitor.txt in the package and our next section:
+`Configuration with Zope 3`_.
 
     >>> reactor.stop()
 
@@ -583,6 +581,13 @@
     - zope.testing
         Testing extensions and helpers.
 
+    The next section, `Configuration with Zope 3`_, still tries to limit
+    dependencies--we only rely on additional packages zc.z3monitor, simplejson,
+    and zope.app.appsetup ourselves--but as of this writing zope.app.appsetup
+    ends up dragging in a large chunk of zope.app.* packages. Hopefully that
+    will be refactored in Zope itself, and our full Zope 3 configuration can
+    benefit from the reduced indirect dependencies.
+
 .. [#undefined_quota_name]
 
     >>> import operator
@@ -721,8 +726,8 @@
     ...         assert False, 'no poll!'
     ... 
 
-.. [#stop_reactor] We don't want the live dispatcher for our demos, actually.
-    See dispatcher.txt to see the live dispatcher actually in use.
+.. [#stop_config_reactor] We don't want the live dispatcher for our demos,
+    actually.  See dispatcher.txt to see the live dispatcher actually in use.
 
     >>> reactor.callFromThread(reactor.stop)
     >>> for i in range(30):

Modified: zc.async/branches/dev/src/zc/async/README_3.txt
===================================================================
--- zc.async/branches/dev/src/zc/async/README_3.txt	2008-04-09 23:20:11 UTC (rev 85210)
+++ zc.async/branches/dev/src/zc/async/README_3.txt	2008-04-10 03:12:27 UTC (rev 85211)
@@ -1,5 +1,5 @@
 =========================
-Configuration With Zope 3
+Configuration with Zope 3
 =========================
 
 Our last main section can be the shortest yet, both because we've already
@@ -16,13 +16,13 @@
 For a client/server combination, use zcml that is something like the
 basic_dispatcher_policy.zcml, make sure you have access to the database with
 the queues, configure logging and monitoring as desired, configure the
-ZC_ASYNC_UUID environmental variable in zdaemon.conf if you are in production,
-and start up! Getting started is really pretty easy. You can even start a
-dispatcher-only version by not starting any servers in zcml.
+``ZC_ASYNC_UUID`` environmental variable in zdaemon.conf if you are in
+production, and start up! Getting started is really pretty easy. You can even
+start a dispatcher-only version by not starting any servers in zcml.
 
 We'll look at this by making a zope.conf-alike and a site.zcml-alike.  We'll
 need a place to put some files, so we'll use a temporary directory.  This, and
-the comments in the files that we set up. are the primary differences between
+the comments in the files that we set up, are the primary differences between
 our examples and a real set up.
 
 So, without further ado, here is the text of our zope.conf-alike, and of our
@@ -105,7 +105,7 @@
 (Other tools, such as supervisor, also can work, of course; their spellings are
 different and are "left as an exercise to the reader" at the moment.)
 
-We'll do that "by hand":
+We'll do that by hand:
 
     >>> os.environ['ZC_ASYNC_UUID'] = os.path.join(dir, 'uuid.txt')
 
@@ -134,7 +134,7 @@
 comes from code in subscribers.py, which can be adjusted easily.
 
 If we process these files, and wait for a poll, we've got a working
-set up[#process]_.
+set up [#process]_.
 
     >>> import zc.async.dispatcher
     >>> dispatcher = zc.async.dispatcher.get()
@@ -218,6 +218,9 @@
     >>> import shutil
     >>> shutil.rmtree(dir)
 
+Hopefully zc.async will be an easy-to-configure, easy-to-use, and useful tool
+for you! Good luck!
+
 .. ......... ..
 .. Footnotes ..
 .. ......... ..

Modified: zc.async/branches/dev/src/zc/async/TODO.txt
===================================================================
--- zc.async/branches/dev/src/zc/async/TODO.txt	2008-04-09 23:20:11 UTC (rev 85210)
+++ zc.async/branches/dev/src/zc/async/TODO.txt	2008-04-10 03:12:27 UTC (rev 85211)
@@ -1,16 +1,14 @@
-- make trace log only write changes or occasionally
-- make data structure for old polls more efficient
-- make period for polls and jobs shorter
-- make failures reduce to small size (eliminate stack?)
-- make full tracebacks write to log, or at least an option
 - Write the z3monitor tests.
-- Write a stress test.
+- See if combined README + README_2 + README_3 makes a comprehensible document
 
 For future versions:
 
-- TTW Management and logging views, as in zasync (see goals in the "History"
-  section of the README).
-- Write a Zope 3 request/context munger that sets security context and site
-  based on current values.
 - queues should be pluggable like agent with filter
 - show how to broadcast, maybe add conveniences
+- show how to use with collapsing jobs (hint to future self: use external queue
+  to put in work, and have job(s) just pull what they can see from queue)
+
+For some other package, maybe:
+
+- TTW Management and logging views, as in zasync (see goals in the "History"
+  section of the README).
\ No newline at end of file

Modified: zc.async/branches/dev/src/zc/async/dispatcher.txt
===================================================================
--- zc.async/branches/dev/src/zc/async/dispatcher.txt	2008-04-09 23:20:11 UTC (rev 85210)
+++ zc.async/branches/dev/src/zc/async/dispatcher.txt	2008-04-10 03:12:27 UTC (rev 85211)
@@ -42,7 +42,8 @@
 your own reactor.
 
 For this example, we will use the twisted select reactor running in another
-thread.  
+thread.  Other doctests in this package use a test reactor defined in the
+zc.async.testing module, as an example of a write-your-own reactor.
 
 To instantiate the dispatcher, we need a reactor and a db.  We'll create it,
 then start it off in the thread.  We'll assume we already have the db and the
@@ -159,8 +160,8 @@
     >>> evs[0].object._p_oid == queue.dispatchers[dispatcher.UUID]._p_oid
     True
 
-- Lastly, the dispatcher made its first ping.  A ping means that the
-  dispatcher changes a datetime to record that it is alive.  
+- The dispatcher made its first ping. A ping means that the dispatcher changes
+  a datetime to record that it is alive. 
 
     >>> queue.dispatchers[dispatcher.UUID].last_ping is not None
     True
@@ -177,6 +178,17 @@
     >>> queue.dispatchers[dispatcher.UUID].ping_death_interval
     datetime.timedelta(0, 60)
 
+- We have some log entries.  (We're using some magic log handlers inserted by
+  setup code in tests.py here.)
+  
+    >>> print event_logs # doctest: +ELLIPSIS
+    zc.async.events INFO
+      attempting to activate dispatcher ...
+
+    >>> print trace_logs # doctest: +ELLIPSIS
+    zc.async.trace DEBUG
+      poll ...
+
 So the dispatcher is running now.  It still won't do any jobs until we tell
 it the kind of jobs it should perform.  Let's add in our default agent, with
 the default configuration that it is willing to perform any job.
@@ -234,9 +246,35 @@
                    'new jobs': [('\x00...', 'unnamed')],
                    'size': 3}}}
 
-[#getPollInfo]_ Notice our ``new jobs`` has a value in it now.  We can get
-some information about that job from the dispatcher.
+We also have some log entries.
 
+    >>> info = debug = None
+    >>> for r in reversed(trace_logs.records):
+    ...     if info is None and r.levelname == 'INFO':
+    ...         info = r
+    ...     elif debug is None and r.levelname == 'DEBUG':
+    ...         debug = r
+    ...     elif info is not None and debug is not None:
+    ...         break
+    ... else:
+    ...     assert False, 'could not find'
+    ...
+    >>> print info.getMessage() # doctest: +ELLIPSIS +NORMALIZE_WHITESPACE
+    <zc.async.job.Job (oid ..., db 'unnamed')
+     ``<built-in function mul>(14, 3)``> succeeded in thread ... with result:
+    42
+
+    >>> print debug.getMessage() # doctest: +ELLIPSIS +NORMALIZE_WHITESPACE
+    poll ...:
+      {'':
+        {'main':
+          {'active jobs': [], 'error': None,
+           'new jobs': [('\x...', 'unnamed')], 'len': 0, 'size': 3}}}
+    
+
+[#getPollInfo]_ Notice that ``new jobs``, in both the poll and the log, now has
+a value in it. We can get some information about that job from the dispatcher.
+
     >>> info = dispatcher.getJobInfo(*poll['']['main']['new jobs'][0])
     >>> pprint.pprint(info)
     ... # doctest: +ELLIPSIS
@@ -253,7 +291,8 @@
      >>> info['poll id'] is not None
      True
 
-Notice that the result is a repr.
+Notice that the result is a repr.  If this had been a failure, it would have
+been a (very) verbose traceback [#show_error]_.
 
 As seen in other documents in zc.async, the job can also be a method of a
 persistent object, affecting a persistent object.
@@ -311,18 +350,20 @@
     >>> wait_for_result(job4)
     "reply is 'HIYA'.  Locally it is MISSING."
 
-We can analyze the work the dispatcher has done.  the records for this
-generally only go back about a day.
+We can analyze the work the dispatcher has done. The records for this generally
+only go back about ten or twelve minutes--just enough to get a feel for the
+current health of the dispatcher. Use the log if you want longer-term
+analysis.
 
     >>> pprint.pprint(dispatcher.getStatistics()) # doctest: +ELLIPSIS
-    {'failed': 0,
+    {'failed': 1,
      'longest active': None,
-     'longest failed': None,
+     'longest failed': ('\x00...', 'unnamed'),
      'longest successful': ('\x00...', 'unnamed'),
      'shortest active': None,
-     'shortest failed': None,
+     'shortest failed': ('\x00...', 'unnamed'),
      'shortest successful': ('\x00...', 'unnamed'),
-     'started': 5,
+     'started': 6,
      'statistics end': datetime.datetime(...),
      'statistics start': datetime.datetime(...),
      'successful': 5,
@@ -414,6 +455,39 @@
     ...     ) is poll
     True
 
+.. [#show_error] OK, so you want to see a verbose traceback?  OK, you asked
+    for it. We're eliding more than 90% of this, and this is a small one,
+    believe it or not. Rotate your logs!
+    
+    Notice that all of the values in the logs are reprs.
+
+    >>> bad_job = queue.put(
+    ...     zc.async.job.Job(operator.mul, 14, None))
+    >>> transaction.commit()
+
+    >>> wait_for_result(bad_job)
+    <zc.twist.Failure exceptions.TypeError>
+    
+    >>> for r in reversed(trace_logs.records):
+    ...     if r.levelname == 'ERROR':
+    ...         break
+    ... else:
+    ...     assert False, 'could not find log'
+    ...
+    >>> print r.getMessage() # doctest: +ELLIPSIS +NORMALIZE_WHITESPACE
+    <zc.async.job.Job (oid ..., db 'unnamed')
+     ``<built-in function mul>(14, None)``> failed in thread ... with result:
+    *--- Failure #... (pickled) ---
+    .../zc/async/job.py:...: _call_with_retry(...)
+     [ Locals ]...
+     ( Globals )...
+    .../zc/async/job.py:...: <lambda>(...)
+     [ Locals ]...
+     ( Globals )...
+    exceptions.TypeError: unsupported operand type(s) for *: 'int' and 'NoneType'
+    *--- End of Failure #... ---
+    <BLANKLINE>
+
 .. [#wait_for_annotation]
 
     >>> def wait_for_annotation(job, name):

Modified: zc.async/branches/dev/src/zc/async/monitor.txt
===================================================================
--- zc.async/branches/dev/src/zc/async/monitor.txt	2008-04-09 23:20:11 UTC (rev 85210)
+++ zc.async/branches/dev/src/zc/async/monitor.txt	2008-04-10 03:12:27 UTC (rev 85211)
@@ -1,7 +1,7 @@
 Monitoring Dispatchers
 ======================
 
-A process's zc.async dispatcher[#setUp]_ can be monitored in-process via
+A process's zc.async dispatcher [#setUp]_ can be monitored in-process via
 zc.z3monitor plugins.  Let's imagine we have a connection over which we
 can send text messages to the monitor server [#z3monitor_setup]_.
 

Modified: zc.async/branches/dev/src/zc/async/subscribers.txt
===================================================================
--- zc.async/branches/dev/src/zc/async/subscribers.txt	2008-04-09 23:20:11 UTC (rev 85210)
+++ zc.async/branches/dev/src/zc/async/subscribers.txt	2008-04-10 03:12:27 UTC (rev 85211)
@@ -1,6 +1,6 @@
 The subscribers module provides several conveniences for starting and
 configuring zc.async.  Let's assume we have a database and all of the
-necessary adapters and utilities registered[#setUp]_.
+necessary adapters and utilities registered [#setUp]_.
 
 The first helper we'll discuss is ``threaded_dispatcher_installer``.  This can be
 used as a subscriber to a DatabaseOpened event, as defined by zope.app.appsetup


