[Zope3-checkins] SVN: zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner Moved the testrunner into its own package and moved the tests along.

Christian Theune ct at gocept.com
Sat May 3 09:25:03 EDT 2008


Log message for revision 86207:
  Moved the testrunner into its own package and moved the tests along.
  
  

Changed:
  A   zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/
  A   zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/__init__.py
  A   zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-arguments.txt
  A   zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-colors.txt
  A   zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-coverage-win32.txt
  A   zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-coverage.txt
  A   zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-debugging-layer-setup.test
  A   zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-debugging.txt
  A   zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-edge-cases.txt
  A   zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-errors.txt
  A   zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-ex/
  A   zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-ex-pp-lib/
  A   zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-ex-pp-products/
  A   zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-gc.txt
  A   zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-knit.txt
  A   zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-layers-api.txt
  A   zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-layers-ntd.txt
  A   zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-layers.txt
  A   zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-leaks-err.txt
  A   zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-leaks.txt
  A   zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-profiling-cprofiler.txt
  A   zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-profiling.txt
  A   zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-progress.txt
  A   zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-repeat.txt
  A   zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-simple.txt
  A   zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-test-selection.txt
  A   zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-verbose.txt
  A   zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-wo-source.txt
  A   zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner.txt
  D   zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-arguments.txt
  D   zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-colors.txt
  D   zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-coverage-win32.txt
  D   zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-coverage.txt
  D   zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-debugging-layer-setup.test
  D   zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-debugging.txt
  D   zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-edge-cases.txt
  D   zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-errors.txt
  D   zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-ex/
  D   zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-ex-pp-lib/
  D   zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-ex-pp-products/
  D   zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-gc.txt
  D   zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-knit.txt
  D   zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-layers-api.txt
  D   zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-layers-ntd.txt
  D   zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-layers.txt
  D   zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-leaks-err.txt
  D   zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-leaks.txt
  D   zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-profiling-cprofiler.txt
  D   zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-profiling.txt
  D   zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-progress.txt
  D   zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-repeat.txt
  D   zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-simple.txt
  D   zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-test-selection.txt
  D   zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-verbose.txt
  D   zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-wo-source.txt
  D   zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner.py
  D   zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner.txt

-=-
Copied: zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/__init__.py (from rev 86204, zope.testing/trunk/src/zope/testing/testrunner.py)
===================================================================
--- zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/__init__.py	                        (rev 0)
+++ zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/__init__.py	2008-05-03 13:25:02 UTC (rev 86207)
@@ -0,0 +1,2816 @@
+##############################################################################
+#
+# Copyright (c) 2004-2006 Zope Corporation and Contributors.
+# All Rights Reserved.
+#
+# This software is subject to the provisions of the Zope Public License,
+# Version 2.1 (ZPL).  A copy of the ZPL should accompany this distribution.
+# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
+# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
+# FOR A PARTICULAR PURPOSE.
+#
+##############################################################################
+"""Test runner
+
+$Id$
+"""
+
+# Too bad: For now, we depend on zope.testing.  This is because
+# we want to use the latest, greatest doctest, which zope.testing
+# provides.  Then again, zope.testing is generally useful.
+
+import gc
+import glob
+import logging
+import optparse
+import os
+import errno
+import pdb
+import re
+import cStringIO
+import sys
+import tempfile
+import threading
+import time
+import trace
+import traceback
+import types
+import unittest
+
+from zope.testing import doctest
+
+
+available_profilers = {}
+
+try:
+    import cProfile
+    import pstats
+except ImportError:
+    pass
+else:
+    class CProfiler(object):
+        """cProfiler"""
+        def __init__(self, filepath):
+            self.filepath = filepath
+            self.profiler = cProfile.Profile()
+            self.enable = self.profiler.enable
+            self.disable = self.profiler.disable
+
+        def finish(self):
+            self.profiler.dump_stats(self.filepath)
+
+        def loadStats(self, prof_glob):
+            stats = None
+            for file_name in glob.glob(prof_glob):
+                if stats is None:
+                    stats = pstats.Stats(file_name)
+                else:
+                    stats.add(file_name)
+            return stats
+
+    available_profilers['cProfile'] = CProfiler
+
+# some Linux distributions don't include the profiler, which hotshot uses
+try:
+    import hotshot
+    import hotshot.stats
+except ImportError:
+    pass
+else:
+    class HotshotProfiler(object):
+        """hotshot interface"""
+
+        def __init__(self, filepath):
+            self.profiler = hotshot.Profile(filepath)
+            self.enable = self.profiler.start
+            self.disable = self.profiler.stop
+
+        def finish(self):
+            self.profiler.close()
+
+        def loadStats(self, prof_glob):
+            stats = None
+            for file_name in glob.glob(prof_glob):
+                loaded = hotshot.stats.load(file_name)
+                if stats is None:
+                    stats = loaded
+                else:
+                    stats.add(loaded)
+            return stats
+
+    available_profilers['hotshot'] = HotshotProfiler
+
+
+real_pdb_set_trace = pdb.set_trace
+
+# For some reason, the doctest module resets the trace callable at arbitrary
+# points, which disables coverage. Simply disallow clearing the trace function;
+# a real trace function can still be set, so debugging keeps working.
+osettrace = sys.settrace
+def settrace(trace):
+    if trace is None:
+        return
+    osettrace(trace)
+
+class TestIgnore:
+
+    def __init__(self, options):
+        self._test_dirs = [self._filenameFormat(d[0]) + os.path.sep
+                           for d in test_dirs(options, {})]
+        self._ignore = {}
+        self._ignored = self._ignore.get
+
+    def names(self, filename, modulename):
+        # Special case: Modules generated from text files; i.e. doctests
+        if modulename == '<string>':
+            return True
+        filename = self._filenameFormat(filename)
+        ignore = self._ignored(filename)
+        if ignore is None:
+            ignore = True
+            if filename is not None:
+                for d in self._test_dirs:
+                    if filename.startswith(d):
+                        ignore = False
+                        break
+            self._ignore[filename] = ignore
+        return ignore
+
+    def _filenameFormat(self, filename):
+        return os.path.abspath(filename)
+
+if sys.platform == 'win32':
+    # On win32, the drive letter can be passed to `names` with differing
+    # case, which can make e.g. the coverage tool skip entire files.
+    # _filenameFormat makes sure that the drive letter and filename are
+    # lowercased, although trace coverage still has problems with lowercase
+    # drive letters when determining the dotted module name.
+    OldTestIgnore = TestIgnore
+
+    class TestIgnore(OldTestIgnore):
+        def _filenameFormat(self, filename):
+            return os.path.normcase(os.path.abspath(filename))
+
+class TestTrace(trace.Trace):
+    """Simple tracer.
+
+    >>> tracer = TestTrace(None, count=False, trace=False)
+
+    Simple rules for use: you can't stop the tracer if it hasn't been
+    started, and you can't start it if it is already started:
+
+    >>> tracer.stop()
+    Traceback (most recent call last):
+        File 'testrunner.py'
+    AssertionError: can't stop if not started
+
+    >>> tracer.start()
+    >>> tracer.start()
+    Traceback (most recent call last):
+        File 'testrunner.py'
+    AssertionError: can't start if already started
+
+    >>> tracer.stop()
+    >>> tracer.stop()
+    Traceback (most recent call last):
+        File 'testrunner.py'
+    AssertionError: can't stop if not started
+    """
+
+    def __init__(self, options, **kw):
+        trace.Trace.__init__(self, **kw)
+        if options is not None:
+            self.ignore = TestIgnore(options)
+        self.started = False
+
+    def start(self):
+        assert not self.started, "can't start if already started"
+        if not self.donothing:
+            sys.settrace = settrace
+            sys.settrace(self.globaltrace)
+            threading.settrace(self.globaltrace)
+        self.started = True
+
+    def stop(self):
+        assert self.started, "can't stop if not started"
+        if not self.donothing:
+            sys.settrace = osettrace
+            sys.settrace(None)
+            threading.settrace(None)
+        self.started = False
+
+class EndRun(Exception):
+    """Indicate that the existing run call should stop
+
+    Used to prevent additional test output after post-mortem debugging.
+    """
+
+def strip_py_ext(options, path):
+    """Return path without its .py (or .pyc or .pyo) extension, or None.
+
+    If options.usecompiled is false:
+        If path ends with ".py", the path without the extension is returned.
+        Else None is returned.
+
+    If options.usecompiled is true:
+        If Python is running with -O, a .pyo extension is also accepted.
+        If Python is running without -O, a .pyc extension is also accepted.
+    """
+    if path.endswith(".py"):
+        return path[:-3]
+    if options.usecompiled:
+        if __debug__:
+            # Python is running without -O.
+            ext = ".pyc"
+        else:
+            # Python is running with -O.
+            ext = ".pyo"
+        if path.endswith(ext):
+            return path[:-len(ext)]
+    return None
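
A minimal sketch of the behaviour described above (editorial illustration, not
part of the diff; FakeOptions is a stand-in for the real options object):

    >>> class FakeOptions(object):
    ...     usecompiled = False
    >>> strip_py_ext(FakeOptions(), 'src/zope/testing/tests.py')
    'src/zope/testing/tests'
    >>> print strip_py_ext(FakeOptions(), 'README.txt')
    None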
+
+def contains_init_py(options, fnamelist):
+    """Return true iff fnamelist contains a suitable spelling of __init__.py.
+
+    If options.usecompiled is false, this is so iff "__init__.py" is in
+    the list.
+
+    If options.usecompiled is true, then "__init__.pyo" is also acceptable
+    if Python is running with -O, and "__init__.pyc" is also acceptable if
+    Python is running without -O.
+    """
+    if "__init__.py" in fnamelist:
+        return True
+    if options.usecompiled:
+        if __debug__:
+            # Python is running without -O.
+            return "__init__.pyc" in fnamelist
+        else:
+            # Python is running with -O.
+            return "__init__.pyo" in fnamelist
+    return False
+
+
+doctest_template = """
+File "%s", line %s, in %s
+
+%s
+Want:
+%s
+Got:
+%s
+"""
+
+
+def tigetnum(attr, default=None):
+    """Return a value from the terminfo database.
+
+    Terminfo is used on Unix-like systems to report various terminal attributes
+    (such as width, height or the number of supported colors).
+
+    Returns ``default`` when the ``curses`` module is not available, or when
+    sys.stdout is not a terminal.
+    """
+    try:
+        import curses
+    except ImportError:
+        # avoid reimporting a broken module in python 2.3
+        sys.modules['curses'] = None
+    else:
+        try:
+            curses.setupterm()
+        except (curses.error, TypeError):
+            # You get curses.error when $TERM is set to an unknown name
+            # You get TypeError when sys.stdout is not a real file object
+            # (e.g. in unit tests that use various wrappers).
+            pass
+        else:
+            return curses.tigetnum(attr)
+    return default
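
A usage sketch (illustration only, not part of this revision): this helper is
how OutputFormatter.compute_max_width() below reads the terminal width,
falling back to the supplied default when curses or a real terminal is
unavailable:

    >>> width = tigetnum('cols', 80)   # 80 if the terminfo lookup fails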
+
+
+class OutputFormatter(object):
+    """Test runner output formatter."""
+
+    # Implementation note: be careful about printing stuff to sys.stderr.
+    # It is used for interprocess communication between the parent and the
+    # child test runner, when you run some test layers in a subprocess.
+    # resume_layer() reassigns sys.stderr for this reason, but be careful
+    # and don't store the original one in __init__ or something.
+
+    max_width = 80
+
+    def __init__(self, options):
+        self.options = options
+        self.last_width = 0
+        self.compute_max_width()
+
+    progress = property(lambda self: self.options.progress)
+    verbose = property(lambda self: self.options.verbose)
+
+    def compute_max_width(self):
+        """Try to determine the terminal width."""
+        # Note that doing this every time is more test friendly.
+        self.max_width = tigetnum('cols', self.max_width)
+
+    def getShortDescription(self, test, room):
+        """Return a description of a test that fits in ``room`` characters."""
+        room -= 1
+        s = str(test)
+        if len(s) > room:
+            pos = s.find(" (")
+            if pos >= 0:
+                w = room - (pos + 5)
+                if w < 1:
+                    # first portion (test method name) is too long
+                    s = s[:room-3] + "..."
+                else:
+                    pre = s[:pos+2]
+                    post = s[-w:]
+                    s = "%s...%s" % (pre, post)
+            else:
+                w = room - 4
+                s = '... ' + s[-w:]
+
+        return ' ' + s[:room]
+
+    def info(self, message):
+        """Print an informative message."""
+        print message
+
+    def info_suboptimal(self, message):
+        """Print an informative message about losing some of the features.
+
+        For example, when you run some tests in a subprocess, you lose the
+        ability to use the debugger.
+        """
+        print message
+
+    def error(self, message):
+        """Report an error."""
+        print message
+
+    def error_with_banner(self, message):
+        """Report an error with a big ASCII banner."""
+        print
+        print '*'*70
+        self.error(message)
+        print '*'*70
+        print
+
+    def profiler_stats(self, stats):
+        """Report profiler stats."""
+        stats.print_stats(50)
+
+    def import_errors(self, import_errors):
+        """Report test-module import errors (if any)."""
+        if import_errors:
+            print "Test-module import failures:"
+            for error in import_errors:
+                self.print_traceback("Module: %s\n" % error.module,
+                                     error.exc_info),
+            print
+
+    def tests_with_errors(self, errors):
+        """Report names of tests with errors (if any)."""
+        if errors:
+            print
+            print "Tests with errors:"
+            for test, exc_info in errors:
+                print "  ", test
+
+    def tests_with_failures(self, failures):
+        """Report names of tests with failures (if any)."""
+        if failures:
+            print
+            print "Tests with failures:"
+            for test, exc_info in failures:
+                print "  ", test
+
+    def modules_with_import_problems(self, import_errors):
+        """Report names of modules with import problems (if any)."""
+        if import_errors:
+            print
+            print "Test-modules with import problems:"
+            for test in import_errors:
+                print "  " + test.module
+
+    def format_seconds(self, n_seconds):
+        """Format a time in seconds."""
+        if n_seconds >= 60:
+            n_minutes, n_seconds = divmod(n_seconds, 60)
+            return "%d minutes %.3f seconds" % (n_minutes, n_seconds)
+        else:
+            return "%.3f seconds" % n_seconds
+
+    def format_seconds_short(self, n_seconds):
+        """Format a time in seconds (short version)."""
+        return "%.3f s" % n_seconds
+
+    def summary(self, n_tests, n_failures, n_errors, n_seconds):
+        """Summarize the results of a single test layer."""
+        print ("  Ran %s tests with %s failures and %s errors in %s."
+               % (n_tests, n_failures, n_errors,
+                  self.format_seconds(n_seconds)))
+
+    def totals(self, n_tests, n_failures, n_errors, n_seconds):
+        """Summarize the results of all layers."""
+        print ("Total: %s tests, %s failures, %s errors in %s."
+               % (n_tests, n_failures, n_errors,
+                  self.format_seconds(n_seconds)))
+
+    def list_of_tests(self, tests, layer_name):
+        """Report a list of test names."""
+        print "Listing %s tests:" % layer_name
+        for test in tests:
+            print ' ', test
+
+    def garbage(self, garbage):
+        """Report garbage generated by tests."""
+        if garbage:
+            print "Tests generated new (%d) garbage:" % len(garbage)
+            print garbage
+
+    def test_garbage(self, test, garbage):
+        """Report garbage generated by a test."""
+        if garbage:
+            print "The following test left garbage:"
+            print test
+            print garbage
+
+    def test_threads(self, test, new_threads):
+        """Report threads left behind by a test."""
+        if new_threads:
+            print "The following test left new threads behind:"
+            print test
+            print "New thread(s):", new_threads
+
+    def refcounts(self, rc, prev):
+        """Report a change in reference counts."""
+        print "  sys refcount=%-8d change=%-6d" % (rc, rc - prev)
+
+    def detailed_refcounts(self, track, rc, prev):
+        """Report a change in reference counts, with extra detail."""
+        print ("  sum detail refcount=%-8d"
+               " sys refcount=%-8d"
+               " change=%-6d"
+               % (track.n, rc, rc - prev))
+        track.output()
+
+    def start_set_up(self, layer_name):
+        """Report that we're setting up a layer.
+
+        The next output operation should be stop_set_up().
+        """
+        print "  Set up %s" % layer_name,
+        sys.stdout.flush()
+
+    def stop_set_up(self, seconds):
+        """Report that we've set up a layer.
+
+        Should be called right after start_set_up().
+        """
+        print "in %s." % self.format_seconds(seconds)
+
+    def start_tear_down(self, layer_name):
+        """Report that we're tearing down a layer.
+
+        The next output operation should be stop_tear_down() or
+        tear_down_not_supported().
+        """
+        print "  Tear down %s" % layer_name,
+        sys.stdout.flush()
+
+    def stop_tear_down(self, seconds):
+        """Report that we've tore down a layer.
+
+        Should be called right after start_tear_down().
+        """
+        print "in %s." % self.format_seconds(seconds)
+
+    def tear_down_not_supported(self):
+        """Report that we could not tear down a layer.
+
+        Should be called right after start_tear_down().
+        """
+        print "... not supported"
+
+    def start_test(self, test, tests_run, total_tests):
+        """Report that we're about to run a test.
+
+        The next output operation should be test_success(), test_error(), or
+        test_failure().
+        """
+        self.test_width = 0
+        if self.progress:
+            if self.last_width:
+                sys.stdout.write('\r' + (' ' * self.last_width) + '\r')
+
+            s = "    %d/%d (%.1f%%)" % (tests_run, total_tests,
+                                        tests_run * 100.0 / total_tests)
+            sys.stdout.write(s)
+            self.test_width += len(s)
+            if self.verbose == 1:
+                room = self.max_width - self.test_width - 1
+                s = self.getShortDescription(test, room)
+                sys.stdout.write(s)
+                self.test_width += len(s)
+
+        elif self.verbose == 1:
+            sys.stdout.write('.' * test.countTestCases())
+
+        if self.verbose > 1:
+            s = str(test)
+            sys.stdout.write(' ')
+            sys.stdout.write(s)
+            self.test_width += len(s) + 1
+
+        sys.stdout.flush()
+
+    def test_success(self, test, seconds):
+        """Report that a test was successful.
+
+        Should be called right after start_test().
+
+        The next output operation should be stop_test().
+        """
+        if self.verbose > 2:
+            s = " (%s)" % self.format_seconds_short(seconds)
+            sys.stdout.write(s)
+            self.test_width += len(s) + 1
+
+    def test_error(self, test, seconds, exc_info):
+        """Report that an error occurred while running a test.
+
+        Should be called right after start_test().
+
+        The next output operation should be stop_test().
+        """
+        if self.verbose > 2:
+            print " (%s)" % self.format_seconds_short(seconds)
+        print
+        self.print_traceback("Error in test %s" % test, exc_info)
+        self.test_width = self.last_width = 0
+
+    def test_failure(self, test, seconds, exc_info):
+        """Report that a test failed.
+
+        Should be called right after start_test().
+
+        The next output operation should be stop_test().
+        """
+        if self.verbose > 2:
+            print " (%s)" % self.format_seconds_short(seconds)
+        print
+        self.print_traceback("Failure in test %s" % test, exc_info)
+        self.test_width = self.last_width = 0
+
+    def print_traceback(self, msg, exc_info):
+        """Report an error with a traceback."""
+        print
+        print msg
+        print self.format_traceback(exc_info)
+
+    def format_traceback(self, exc_info):
+        """Format the traceback."""
+        v = exc_info[1]
+        if isinstance(v, doctest.DocTestFailureException):
+            tb = v.args[0]
+        elif isinstance(v, doctest.DocTestFailure):
+            tb = doctest_template % (
+                v.test.filename,
+                v.test.lineno + v.example.lineno + 1,
+                v.test.name,
+                v.example.source,
+                v.example.want,
+                v.got,
+                )
+        else:
+            tb = "".join(traceback.format_exception(*exc_info))
+        return tb
+
+    def stop_test(self, test):
+        """Clean up the output state after a test."""
+        if self.progress:
+            self.last_width = self.test_width
+        elif self.verbose > 1:
+            print
+        sys.stdout.flush()
+
+    def stop_tests(self):
+        """Clean up the output state after a collection of tests."""
+        if self.progress and self.last_width:
+            sys.stdout.write('\r' + (' ' * self.last_width) + '\r')
+        if self.verbose == 1 or self.progress:
+            print
+
+
+class ColorfulOutputFormatter(OutputFormatter):
+    """Output formatter that uses ANSI color codes.
+
+    Like syntax highlighting in your text editor, colorizing
+    test failures helps the developer.
+    """
+
+    # These colors are carefully chosen to have enough contrast
+    # on terminals with both black and white background.
+    colorscheme = {'normal': 'normal',
+                   'default': 'default',
+                   'info': 'normal',
+                   'suboptimal-behaviour': 'magenta',
+                   'error': 'brightred',
+                   'number': 'green',
+                   'slow-test': 'brightmagenta',
+                   'ok-number': 'green',
+                   'error-number': 'brightred',
+                   'filename': 'lightblue',
+                   'lineno': 'lightred',
+                   'testname': 'lightcyan',
+                   'failed-example': 'cyan',
+                   'expected-output': 'green',
+                   'actual-output': 'red',
+                   'character-diffs': 'magenta',
+                   'diff-chunk': 'magenta',
+                   'exception': 'red'}
+
+    # Map prefix character to color in diff output.  This handles ndiff and
+    # udiff correctly, but not cdiff.  In cdiff we ought to highlight '!' as
+    # expected-output until we see a '-', then highlight '!' as actual-output,
+    # until we see a '*', then switch back to highlighting '!' as
+    # expected-output.  Nevertheless, colorized cdiffs are reasonably readable,
+    # so I'm not going to fix this.
+    #   -- mgedmin
+    diff_color = {'-': 'expected-output',
+                  '+': 'actual-output',
+                  '?': 'character-diffs',
+                  '@': 'diff-chunk',
+                  '*': 'diff-chunk',
+                  '!': 'actual-output',}
+
+    prefixes = [('dark', '0;'),
+                ('light', '1;'),
+                ('bright', '1;'),
+                ('bold', '1;'),]
+
+    colorcodes = {'default': 0, 'normal': 0,
+                  'black': 30,
+                  'red': 31,
+                  'green': 32,
+                  'brown': 33, 'yellow': 33,
+                  'blue': 34,
+                  'magenta': 35,
+                  'cyan': 36,
+                  'grey': 37, 'gray': 37, 'white': 37}
+
+    slow_test_threshold = 10.0 # seconds
+
+    def color_code(self, color):
+        """Convert a color description (e.g. 'lightgray') to a terminal code."""
+        prefix_code = ''
+        for prefix, code in self.prefixes:
+            if color.startswith(prefix):
+                color = color[len(prefix):]
+                prefix_code = code
+                break
+        color_code = self.colorcodes[color]
+        return '\033[%s%sm' % (prefix_code, color_code)
+
+    def color(self, what):
+        """Pick a named color from the color scheme"""
+        return self.color_code(self.colorscheme[what])
+
+    def colorize(self, what, message, normal='normal'):
+        """Wrap message in color."""
+        return self.color(what) + message + self.color(normal)
+
+    def error_count_color(self, n):
+        """Choose a color for the number of errors."""
+        if n:
+            return self.color('error-number')
+        else:
+            return self.color('ok-number')
+
+    def info(self, message):
+        """Print an informative message."""
+        print self.colorize('info', message)
+
+    def info_suboptimal(self, message):
+        """Print an informative message about losing some of the features.
+
+        For example, when you run some tests in a subprocess, you lose the
+        ability to use the debugger.
+        """
+        print self.colorize('suboptimal-behaviour', message)
+
+    def error(self, message):
+        """Report an error."""
+        print self.colorize('error', message)
+
+    def error_with_banner(self, message):
+        """Report an error with a big ASCII banner."""
+        print
+        print self.colorize('error', '*'*70)
+        self.error(message)
+        print self.colorize('error', '*'*70)
+        print
+
+    def tear_down_not_supported(self):
+        """Report that we could not tear down a layer.
+
+        Should be called right after start_tear_down().
+        """
+        print "...", self.colorize('suboptimal-behaviour', "not supported")
+
+    def format_seconds(self, n_seconds, normal='normal'):
+        """Format a time in seconds."""
+        if n_seconds >= 60:
+            n_minutes, n_seconds = divmod(n_seconds, 60)
+            return "%s minutes %s seconds" % (
+                        self.colorize('number', '%d' % n_minutes, normal),
+                        self.colorize('number', '%.3f' % n_seconds, normal))
+        else:
+            return "%s seconds" % (
+                        self.colorize('number', '%.3f' % n_seconds, normal))
+
+    def format_seconds_short(self, n_seconds):
+        """Format a time in seconds (short version)."""
+        if n_seconds >= self.slow_test_threshold:
+            color = 'slow-test'
+        else:
+            color = 'number'
+        return self.colorize(color, "%.3f s" % n_seconds)
+
+    def summary(self, n_tests, n_failures, n_errors, n_seconds):
+        """Summarize the results."""
+        sys.stdout.writelines([
+            self.color('info'), '  Ran ',
+            self.color('number'), str(n_tests),
+            self.color('info'), ' tests with ',
+            self.error_count_color(n_failures), str(n_failures),
+            self.color('info'), ' failures and ',
+            self.error_count_color(n_errors), str(n_errors),
+            self.color('info'), ' errors in ',
+            self.format_seconds(n_seconds, 'info'), '.',
+            self.color('normal'), '\n'])
+
+    def totals(self, n_tests, n_failures, n_errors, n_seconds):
+        """Report totals (number of tests, failures, and errors)."""
+        sys.stdout.writelines([
+            self.color('info'), 'Total: ',
+            self.color('number'), str(n_tests),
+            self.color('info'), ' tests, ',
+            self.error_count_color(n_failures), str(n_failures),
+            self.color('info'), ' failures, ',
+            self.error_count_color(n_errors), str(n_errors),
+            self.color('info'), ' errors in ',
+            self.format_seconds(n_seconds, 'info'), '.',
+            self.color('normal'), '\n'])
+
+    def print_traceback(self, msg, exc_info):
+        """Report an error with a traceback."""
+        print
+        print self.colorize('error', msg)
+        v = exc_info[1]
+        if isinstance(v, doctest.DocTestFailureException):
+            self.print_doctest_failure(v.args[0])
+        elif isinstance(v, doctest.DocTestFailure):
+            # I don't think these are ever used... -- mgedmin
+            tb = self.format_traceback(exc_info)
+            print tb
+        else:
+            tb = self.format_traceback(exc_info)
+            self.print_colorized_traceback(tb)
+
+    def print_doctest_failure(self, formatted_failure):
+        """Report a doctest failure.
+
+        ``formatted_failure`` is a string -- that's what
+        DocTestSuite/DocFileSuite gives us.
+        """
+        color_of_indented_text = 'normal'
+        colorize_diff = False
+        for line in formatted_failure.splitlines():
+            if line.startswith('File '):
+                m = re.match(r'File "(.*)", line (\d*), in (.*)$', line)
+                if m:
+                    filename, lineno, test = m.groups()
+                    sys.stdout.writelines([
+                        self.color('normal'), 'File "',
+                        self.color('filename'), filename,
+                        self.color('normal'), '", line ',
+                        self.color('lineno'), lineno,
+                        self.color('normal'), ', in ',
+                        self.color('testname'), test,
+                        self.color('normal'), '\n'])
+                else:
+                    print line
+            elif line.startswith('    '):
+                if colorize_diff and len(line) > 4:
+                    color = self.diff_color.get(line[4], color_of_indented_text)
+                    print self.colorize(color, line)
+                else:
+                    print self.colorize(color_of_indented_text, line)
+            else:
+                colorize_diff = False
+                if line.startswith('Failed example'):
+                    color_of_indented_text = 'failed-example'
+                elif line.startswith('Expected:'):
+                    color_of_indented_text = 'expected-output'
+                elif line.startswith('Got:'):
+                    color_of_indented_text = 'actual-output'
+                elif line.startswith('Exception raised:'):
+                    color_of_indented_text = 'exception'
+                elif line.startswith('Differences '):
+                    color_of_indented_text = 'normal'
+                    colorize_diff = True
+                else:
+                    color_of_indented_text = 'normal'
+                print line
+        print
+
+    def print_colorized_traceback(self, formatted_traceback):
+        """Report a test failure.
+
+        ``formatted_traceback`` is a string.
+        """
+        for line in formatted_traceback.splitlines():
+            if line.startswith('  File'):
+                m = re.match(r'  File "(.*)", line (\d*), in (.*)$', line)
+                if m:
+                    filename, lineno, test = m.groups()
+                    sys.stdout.writelines([
+                        self.color('normal'), '  File "',
+                        self.color('filename'), filename,
+                        self.color('normal'), '", line ',
+                        self.color('lineno'), lineno,
+                        self.color('normal'), ', in ',
+                        self.color('testname'), test,
+                        self.color('normal'), '\n'])
+                else:
+                    print line
+            elif line.startswith('    '):
+                print self.colorize('failed-example', line)
+            elif line.startswith('Traceback (most recent call last)'):
+                print line
+            else:
+                print self.colorize('exception', line)
+        print
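
A small sketch of how the color tables above combine (editorial illustration,
not part of the diff; passing None for options is fine here because these
helpers never touch it): the 'bright' prefix maps to the '1;' prefix code and
'red' to 31.

    >>> formatter = ColorfulOutputFormatter(None)
    >>> formatter.color_code('brightred')
    '\x1b[1;31m'
    >>> formatter.colorize('error', 'oops')
    '\x1b[1;31moops\x1b[0m'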
+
+
+def run(defaults=None, args=None):
+    if args is None:
+        args = sys.argv[:]
+
+    # Set the default logging policy.
+    # XXX There are no tests for this logging behavior.
+    # It's not at all clear that the test runner should be doing this.
+    configure_logging()
+
+    # Control reporting flags during run
+    old_reporting_flags = doctest.set_unittest_reportflags(0)
+
+    # Check to see if we are being run as a subprocess. If we are,
+    # then use the resume-layer and defaults passed in.
+    if len(args) > 1 and args[1] == '--resume-layer':
+        args.pop(1)
+        resume_layer = args.pop(1)
+        resume_number = int(args.pop(1))
+        defaults = []
+        while len(args) > 1 and args[1] == '--default':
+            args.pop(1)
+            defaults.append(args.pop(1))
+
+        sys.stdin = FakeInputContinueGenerator()
+    else:
+        resume_layer = resume_number = None
+
+    options = get_options(args, defaults)
+    if options.fail:
+        return True
+
+    output = options.output
+
+    options.testrunner_defaults = defaults
+    options.resume_layer = resume_layer
+    options.resume_number = resume_number
+
+    # Make sure we start with real pdb.set_trace.  This is needed
+    # to make tests of the test runner work properly. :)
+    pdb.set_trace = real_pdb_set_trace
+
+    if (options.profile
+        and sys.version_info[:3] <= (2,4,1)
+        and __debug__):
+        output.error('Because of a bug in Python < 2.4.1, profiling '
+                     'during tests requires the -O option be passed to '
+                     'Python (not the test runner).')
+        sys.exit()
+
+    if options.coverage:
+        tracer = TestTrace(options, trace=False, count=True)
+        tracer.start()
+    else:
+        tracer = None
+
+    if options.profile:
+        prof_prefix = 'tests_profile.'
+        prof_suffix = '.prof'
+        prof_glob = prof_prefix + '*' + prof_suffix
+
+        # if we are going to be profiling, and this isn't a subprocess,
+        # clean up any stale results files
+        if not options.resume_layer:
+            for file_name in glob.glob(prof_glob):
+                os.unlink(file_name)
+
+        # set up the output file
+        oshandle, file_path = tempfile.mkstemp(prof_suffix, prof_prefix, '.')
+        profiler = available_profilers[options.profile](file_path)
+        profiler.enable()
+
+    try:
+        try:
+            failed = not run_with_options(options)
+        except EndRun:
+            failed = True
+    finally:
+        if tracer:
+            tracer.stop()
+        if options.profile:
+            profiler.disable()
+            profiler.finish()
+            # We must explicitly close the handle that mkstemp returned;
+            # otherwise, on Windows, the cleanup loop above fails on the next
+            # run when it tries to unlink a still-open file.
+            os.close(oshandle)
+
+    if options.profile and not options.resume_layer:
+        stats = profiler.loadStats(prof_glob)
+        stats.sort_stats('cumulative', 'calls')
+        output.profiler_stats(stats)
+
+    if tracer:
+        coverdir = os.path.join(os.getcwd(), options.coverage)
+        r = tracer.results()
+        r.write_results(summary=True, coverdir=coverdir)
+
+    doctest.set_unittest_reportflags(old_reporting_flags)
+
+    if failed and options.exitwithstatus:
+        sys.exit(1)
+
+    return failed
+
+def run_with_options(options, found_suites=None):
+    """Find and run tests
+
+    Passing a list of suites using the found_suites parameter will cause
+    that list of suites to be used instead of attempting to load them from
+    the filesystem. This is useful for unit testing the test runner.
+
+    Returns True if all tests passed, or False if there were any failures
+    of any kind.
+    """
+
+    global _layer_name_cache
+    _layer_name_cache = {} # Reset to enforce test isolation
+
+    output = options.output
+
+    if options.resume_layer:
+        original_stderr = sys.stderr
+        sys.stderr = sys.stdout
+    elif options.verbose:
+        if options.all:
+            msg = "Running tests at all levels"
+        else:
+            msg = "Running tests at level %d" % options.at_level
+        output.info(msg)
+
+
+    old_threshold = gc.get_threshold()
+    if options.gc:
+        if len(options.gc) > 3:
+            output.error("Too many --gc options")
+            sys.exit(1)
+        if options.gc[0]:
+            output.info("Cyclic garbage collection threshold set to: %s" %
+                        repr(tuple(options.gc)))
+        else:
+            output.info("Cyclic garbage collection is disabled.")
+
+        gc.set_threshold(*options.gc)
+
+    old_flags = gc.get_debug()
+    if options.gc_option:
+        new_flags = 0
+        for op in options.gc_option:
+            new_flags |= getattr(gc, op)
+        gc.set_debug(new_flags)
+
+    old_reporting_flags = doctest.set_unittest_reportflags(0)
+    reporting_flags = 0
+    if options.ndiff:
+        reporting_flags = doctest.REPORT_NDIFF
+    if options.udiff:
+        if reporting_flags:
+            output.error("Can only give one of --ndiff, --udiff, or --cdiff")
+            sys.exit(1)
+        reporting_flags = doctest.REPORT_UDIFF
+    if options.cdiff:
+        if reporting_flags:
+            output.error("Can only give one of --ndiff, --udiff, or --cdiff")
+            sys.exit(1)
+        reporting_flags = doctest.REPORT_CDIFF
+    if options.report_only_first_failure:
+        reporting_flags |= doctest.REPORT_ONLY_FIRST_FAILURE
+
+    if reporting_flags:
+        doctest.set_unittest_reportflags(reporting_flags)
+    else:
+        doctest.set_unittest_reportflags(old_reporting_flags)
+
+
+    # Add directories to the path
+    for path in options.path:
+        if path not in sys.path:
+            sys.path.append(path)
+
+    remove_stale_bytecode(options)
+
+    tests_by_layer_name = find_tests(options, found_suites)
+
+    ran = 0
+    failures = []
+    errors = []
+    nlayers = 0
+    import_errors = tests_by_layer_name.pop(None, None)
+
+    output.import_errors(import_errors)
+
+    if 'unit' in tests_by_layer_name:
+        tests = tests_by_layer_name.pop('unit')
+        if (not options.non_unit) and not options.resume_layer:
+            if options.layer:
+                should_run = False
+                for pat in options.layer:
+                    if pat('unit'):
+                        should_run = True
+                        break
+            else:
+                should_run = True
+
+            if should_run:
+                if options.list_tests:
+                    output.list_of_tests(tests, 'unit')
+                else:
+                    output.info("Running unit tests:")
+                    nlayers += 1
+                    ran += run_tests(options, tests, 'unit', failures, errors)
+
+    setup_layers = {}
+
+    layers_to_run = list(ordered_layers(tests_by_layer_name))
+    if options.resume_layer is not None:
+        layers_to_run = [
+            (layer_name, layer, tests)
+            for (layer_name, layer, tests) in layers_to_run
+            if layer_name == options.resume_layer
+        ]
+    elif options.layer:
+        layers_to_run = [
+            (layer_name, layer, tests)
+            for (layer_name, layer, tests) in layers_to_run
+            if filter(None, [pat(layer_name) for pat in options.layer])
+        ]
+
+
+    if options.list_tests:
+        for layer_name, layer, tests in layers_to_run:
+            output.list_of_tests(tests, layer_name)
+        return True
+
+    start_time = time.time()
+
+    for layer_name, layer, tests in layers_to_run:
+        nlayers += 1
+        try:
+            ran += run_layer(options, layer_name, layer, tests,
+                             setup_layers, failures, errors)
+        except CanNotTearDown:
+            setup_layers = None
+            if not options.resume_layer:
+                ran += resume_tests(options, layer_name, layers_to_run,
+                                    failures, errors)
+                break
+
+    if setup_layers:
+        if options.resume_layer is None:
+            output.info("Tearing down left over layers:")
+        tear_down_unneeded(options, (), setup_layers, True)
+
+    total_time = time.time() - start_time
+
+    if options.resume_layer:
+        sys.stdout.close()
+        # Communicate with the parent.  The protocol is obvious:
+        print >> original_stderr, ran, len(failures), len(errors)
+        for test, exc_info in failures:
+            print >> original_stderr, ' '.join(str(test).strip().split('\n'))
+        for test, exc_info in errors:
+            print >> original_stderr, ' '.join(str(test).strip().split('\n'))
+
+    else:
+        if options.verbose:
+            output.tests_with_errors(errors)
+            output.tests_with_failures(failures)
+
+        if nlayers != 1:
+            output.totals(ran, len(failures), len(errors), total_time)
+
+        output.modules_with_import_problems(import_errors)
+
+    doctest.set_unittest_reportflags(old_reporting_flags)
+
+    if options.gc_option:
+        gc.set_debug(old_flags)
+
+    if options.gc:
+        gc.set_threshold(*old_threshold)
+
+    return not bool(import_errors or failures or errors)
+
+
+def run_tests(options, tests, name, failures, errors):
+    repeat = options.repeat or 1
+    repeat_range = iter(range(repeat))
+    ran = 0
+
+    output = options.output
+
+    gc.collect()
+    lgarbage = len(gc.garbage)
+
+    sumrc = 0
+    if options.report_refcounts:
+        if options.verbose:
+            track = TrackRefs()
+        rc = sys.gettotalrefcount()
+
+    for iteration in repeat_range:
+        if repeat > 1:
+            output.info("Iteration %d" % (iteration + 1))
+
+        if options.verbose > 0 or options.progress:
+            output.info('  Running:')
+        result = TestResult(options, tests, layer_name=name)
+
+        t = time.time()
+
+        if options.post_mortem:
+            # post-mortem debugging
+            for test in tests:
+                if result.shouldStop:
+                    break
+                result.startTest(test)
+                state = test.__dict__.copy()
+                try:
+                    try:
+                        test.debug()
+                    except KeyboardInterrupt:
+                        raise
+                    except:
+                        result.addError(
+                            test,
+                            sys.exc_info()[:2] + (sys.exc_info()[2].tb_next, ),
+                            )
+                    else:
+                        result.addSuccess(test)
+                finally:
+                    result.stopTest(test)
+                test.__dict__.clear()
+                test.__dict__.update(state)
+
+        else:
+            # normal
+            for test in tests:
+                if result.shouldStop:
+                    break
+                state = test.__dict__.copy()
+                test(result)
+                test.__dict__.clear()
+                test.__dict__.update(state)
+
+        t = time.time() - t
+        output.stop_tests()
+        failures.extend(result.failures)
+        errors.extend(result.errors)
+        output.summary(result.testsRun, len(result.failures), len(result.errors), t)
+        ran = result.testsRun
+
+        gc.collect()
+        if len(gc.garbage) > lgarbage:
+            output.garbage(gc.garbage[lgarbage:])
+            lgarbage = len(gc.garbage)
+
+        if options.report_refcounts:
+
+            # If we are being tested, we don't want stdout itself to
+            # foul up the numbers. :)
+            try:
+                sys.stdout.getvalue()
+            except AttributeError:
+                pass
+
+            prev = rc
+            rc = sys.gettotalrefcount()
+            if options.verbose:
+                track.update()
+                if iteration > 0:
+                    output.detailed_refcounts(track, rc, prev)
+                else:
+                    track.delta = None
+            elif iteration > 0:
+                output.refcounts(rc, prev)
+
+    return ran
+
+def run_layer(options, layer_name, layer, tests, setup_layers,
+              failures, errors):
+
+    output = options.output
+    gathered = []
+    gather_layers(layer, gathered)
+    needed = dict([(l, 1) for l in gathered])
+    if options.resume_number != 0:
+        output.info("Running %s tests:" % layer_name)
+    tear_down_unneeded(options, needed, setup_layers)
+
+    if options.resume_layer is not None:
+        output.info_suboptimal("  Running in a subprocess.")
+
+    try:
+        setup_layer(options, layer, setup_layers)
+    except EndRun:
+        raise
+    except Exception:
+        f = cStringIO.StringIO()
+        traceback.print_exc(file=f)
+        output.error(f.getvalue())
+        errors.append((SetUpLayerFailure(), sys.exc_info()))
+        return 0
+    else:
+        return run_tests(options, tests, layer_name, failures, errors)
+
+class SetUpLayerFailure(unittest.TestCase):
+    def runTest(self):
+        "Layer set up failure."
+
+def resume_tests(options, layer_name, layers, failures, errors):
+    output = options.output
+    layers = [l for (l, _, _) in layers]
+    layers = layers[layers.index(layer_name):]
+    rantotal = 0
+    resume_number = 0
+    for layer_name in layers:
+        args = [sys.executable,
+                options.original_testrunner_args[0],
+                '--resume-layer', layer_name, str(resume_number),
+                ]
+        resume_number += 1
+        for d in options.testrunner_defaults:
+            args.extend(['--default', d])
+
+        args.extend(options.original_testrunner_args[1:])
+
+        # this is because of a bug in Python (http://www.python.org/sf/900092)
+        if (options.profile == 'hotshot'
+            and sys.version_info[:3] <= (2,4,1)):
+            args.insert(1, '-O')
+
+        if sys.platform.startswith('win'):
+            args = args[0] + ' ' + ' '.join([
+                ('"' + a.replace('\\', '\\\\').replace('"', '\\"') + '"')
+                for a in args[1:]
+                ])
+
+        subin, subout, suberr = os.popen3(args)
+        while True:
+            try:
+                for l in subout:
+                    sys.stdout.write(l)
+            except IOError, e:
+                if e.errno == errno.EINTR:
+                    # If the subprocess dies before we finish reading its
+                    # output, a SIGCHLD signal can interrupt the reading.
+                    # The correct thing to do in that case is to retry.
+                    continue
+                output.error("Error reading subprocess output for %s" % layer_name)
+                output.info(str(e))
+            else:
+                break
+
+        line = suberr.readline()
+        try:
+            ran, nfail, nerr = map(int, line.strip().split())
+        except KeyboardInterrupt:
+            raise
+        except:
+            raise SubprocessError(line+suberr.read())
+
+        while nfail > 0:
+            nfail -= 1
+            failures.append((suberr.readline().strip(), None))
+        while nerr > 0:
+            nerr -= 1
+            errors.append((suberr.readline().strip(), None))
+
+        rantotal += ran
+
+    return rantotal
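
For orientation (an editorial sketch, not in the diff): the child process
writes a single summary line to stderr, which the parent parses exactly as
above, followed by one line per failure and one per error:

    >>> line = '27 1 0\n'
    >>> map(int, line.strip().split())
    [27, 1, 0]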
+
+
+class SubprocessError(Exception):
+    """An error occurred when running a subprocess
+    """
+
+class CanNotTearDown(Exception):
+    "Couldn't tear down a test"
+
+def tear_down_unneeded(options, needed, setup_layers, optional=False):
+    # Tear down any layers not needed for these tests. The unneeded
+    # layers might interfere.
+    unneeded = [l for l in setup_layers if l not in needed]
+    unneeded = order_by_bases(unneeded)
+    unneeded.reverse()
+    output = options.output
+    for l in unneeded:
+        output.start_tear_down(name_from_layer(l))
+        t = time.time()
+        try:
+            if hasattr(l, 'tearDown'):
+                l.tearDown()
+        except NotImplementedError:
+            output.tear_down_not_supported()
+            if not optional:
+                raise CanNotTearDown(l)
+        else:
+            output.stop_tear_down(time.time() - t)
+        del setup_layers[l]
+
+
+cant_pm_in_subprocess_message = """
+Can't post-mortem debug when running a layer as a subprocess!
+Try running layer %r by itself.
+"""
+
+def setup_layer(options, layer, setup_layers):
+    assert layer is not object
+    output = options.output
+    if layer not in setup_layers:
+        for base in layer.__bases__:
+            if base is not object:
+                setup_layer(options, base, setup_layers)
+        output.start_set_up(name_from_layer(layer))
+        t = time.time()
+        if hasattr(layer, 'setUp'):
+            try:
+                layer.setUp()
+            except Exception:
+                if options.post_mortem:
+                    if options.resume_layer:
+                        options.output.error_with_banner(
+                            cant_pm_in_subprocess_message
+                            % options.resume_layer)
+                        raise
+                    else:
+                        post_mortem(sys.exc_info())
+                else:
+                    raise
+
+        output.stop_set_up(time.time() - t)
+        setup_layers[layer] = 1
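
A rough sketch of the base-first ordering produced by the recursion above
(editorial illustration, not part of this changeset; BaseLayer and AppLayer
are hypothetical layers):

    >>> class BaseLayer(object):
    ...     @classmethod
    ...     def setUp(cls):
    ...         print 'base set up'
    >>> class AppLayer(BaseLayer):
    ...     @classmethod
    ...     def setUp(cls):
    ...         print 'app set up'

Calling setup_layer(options, AppLayer, {}) recurses into BaseLayer first (and
skips object), so 'base set up' is printed before 'app set up', interleaved
with the runner's own "Set up ..." progress output.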
+
+def dependencies(bases, result):
+    for base in bases:
+        result[base] = 1
+        dependencies(base.__bases__, result)
+
+class TestResult(unittest.TestResult):
+
+    def __init__(self, options, tests, layer_name=None):
+        unittest.TestResult.__init__(self)
+        self.options = options
+        # Calculate our list of relevant layers we need to call testSetUp
+        # and testTearDown on.
+        if layer_name != 'unit':
+            layers = []
+            gather_layers(layer_from_name(layer_name), layers)
+            self.layers = order_by_bases(layers)
+        else:
+            self.layers = []
+        count = 0
+        for test in tests:
+            count += test.countTestCases()
+        self.count = count
+
+    def testSetUp(self):
+        """A layer may define a setup method to be called before each
+        individual test.
+        """
+        for layer in self.layers:
+            if hasattr(layer, 'testSetUp'):
+                layer.testSetUp()
+
+    def testTearDown(self):
+        """A layer may define a teardown method to be called after each
+           individual test.
+
+           This is useful for clearing the state of global
+           resources or resetting external systems such as relational
+           databases or daemons.
+        """
+        for layer in self.layers[-1::-1]:
+            if hasattr(layer, 'testTearDown'):
+                layer.testTearDown()
+
+    def startTest(self, test):
+        self.testSetUp()
+        unittest.TestResult.startTest(self, test)
+        testsRun = self.testsRun - 1 # subtract the one the base class added
+        count = test.countTestCases()
+        self.testsRun = testsRun + count
+
+        self.options.output.start_test(test, self.testsRun, self.count)
+
+        self._threads = threading.enumerate()
+        self._start_time = time.time()
+
+    def addSuccess(self, test):
+        t = max(time.time() - self._start_time, 0.0)
+        self.options.output.test_success(test, t)
+
+    def addError(self, test, exc_info):
+        self.options.output.test_error(test, time.time() - self._start_time,
+                                       exc_info)
+
+        unittest.TestResult.addError(self, test, exc_info)
+
+        if self.options.post_mortem:
+            if self.options.resume_layer:
+                self.options.output.error_with_banner("Can't post-mortem debug"
+                                                      " when running a layer"
+                                                      " as a subprocess!")
+            else:
+                post_mortem(exc_info)
+
+    def addFailure(self, test, exc_info):
+        self.options.output.test_failure(test, time.time() - self._start_time,
+                                         exc_info)
+
+        unittest.TestResult.addFailure(self, test, exc_info)
+
+        if self.options.post_mortem:
+            # XXX: mgedmin: why isn't there a resume_layer check here like
+            # in addError?
+            post_mortem(exc_info)
+
+    def stopTest(self, test):
+        self.testTearDown()
+        self.options.output.stop_test(test)
+
+        if gc.garbage:
+            self.options.output.test_garbage(test, gc.garbage)
+            # TODO: Perhaps eat the garbage here, so that the garbage isn't
+            #       printed for every subsequent test.
+
+        # Did the test leave any new threads behind?
+        new_threads = [t for t in threading.enumerate()
+                         if (t.isAlive()
+                             and
+                             t not in self._threads)]
+        if new_threads:
+            self.options.output.test_threads(test, new_threads)
+
+
+class FakeInputContinueGenerator:
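+    """Fake stdin that always answers 'c' (continue).
+
+    Intended to stand in for sys.stdin while a layer is run in a
+    subprocess, so that a stray pdb.set_trace() cannot hang the run; the
+    banner below explains why interactive debugging is unavailable there.
+    """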
+
+    def readline(self):
+        print 'c\n'
+        print '*'*70
+        print ("Can't use pdb.set_trace when running a layer"
+               " as a subprocess!")
+        print '*'*70
+        print
+        return 'c\n'
+
+
+def post_mortem(exc_info):
+    err = exc_info[1]
+    if isinstance(err, (doctest.UnexpectedException, doctest.DocTestFailure)):
+
+        if isinstance(err, doctest.UnexpectedException):
+            exc_info = err.exc_info
+
+            # Print out location info if the error was in a doctest
+            if exc_info[2].tb_frame.f_code.co_filename == '<string>':
+                print_doctest_location(err)
+
+        else:
+            print_doctest_location(err)
+            # Hm, we have a DocTestFailure exception.  We need to
+            # generate our own traceback
+            try:
+                exec ('raise ValueError'
+                      '("Expected and actual output are different")'
+                      ) in err.test.globs
+            except:
+                exc_info = sys.exc_info()
+
+    print "%s:" % (exc_info[0], )
+    print exc_info[1]
+    pdb.post_mortem(exc_info[2])
+    raise EndRun
+
+def print_doctest_location(err):
+    # This mimics pdb's output, which gives way cool results in emacs :)
+    filename = err.test.filename
+    if filename.endswith('.pyc'):
+        filename = filename[:-1]
+    print "> %s(%s)_()" % (filename, err.test.lineno+err.example.lineno+1)
+
+def ordered_layers(tests_by_layer_name):
+    layer_names = dict([(layer_from_name(layer_name), layer_name)
+                        for layer_name in tests_by_layer_name])
+    for layer in order_by_bases(layer_names):
+        layer_name = layer_names[layer]
+        yield layer_name, layer, tests_by_layer_name[layer_name]
+
+def gather_layers(layer, result):
+    if layer is not object:
+        result.append(layer)
+    for b in layer.__bases__:
+        gather_layers(b, result)
+
+def layer_from_name(layer_name):
+    """Return the layer for the corresponding layer_name by discovering
+       and importing the necessary module if necessary.
+
+       Note that a name -> layer cache is maintained by name_from_layer
+       to allow locating layers in cases where it would otherwise be
+       impossible.
+    """
+    global _layer_name_cache
+    if _layer_name_cache.has_key(layer_name):
+        return _layer_name_cache[layer_name]
+    layer_names = layer_name.split('.')
+    layer_module, module_layer_name = layer_names[:-1], layer_names[-1]
+    return getattr(import_name('.'.join(layer_module)), module_layer_name)
+
+def order_by_bases(layers):
+    """Order the layers from least to most specific (bottom to top)
+    """
+    named_layers = [(name_from_layer(layer), layer) for layer in layers]
+    named_layers.sort()
+    named_layers.reverse()
+    gathered = []
+    for name, layer in named_layers:
+        gather_layers(layer, gathered)
+    gathered.reverse()
+    seen = {}
+    result = []
+    for layer in gathered:
+        if layer not in seen:
+            seen[layer] = 1
+            if layer in layers:
+                result.append(layer)
+    return result
+
+_layer_name_cache = {}
+
+def name_from_layer(layer):
+    """Determine a name for the Layer using the namespace to avoid conflicts.
+
+    We also cache a name -> layer mapping to enable layer_from_name to work
+    in cases where the layer cannot be imported (such as layers defined
+    in doctests)
+    """
+    if layer.__module__ == '__builtin__':
+        name = layer.__name__
+    else:
+        name = layer.__module__ + '.' + layer.__name__
+    _layer_name_cache[name] = layer
+    return name
+
+def find_tests(options, found_suites=None):
+    """Creates a dictionary mapping layer name to a suite of tests to be run
+    in that layer.
+
+    Passing a list of suites using the found_suites parameter will cause
+    that list of suites to be used instead of attempting to load them from
+    the filesystem. This is useful for unit testing the test runner.
+    """
+    suites = {}
+    if found_suites is None:
+        found_suites = find_suites(options)
+    for suite in found_suites:
+        for test, layer_name in tests_from_suite(suite, options):
+            suite = suites.get(layer_name)
+            if not suite:
+                suite = suites[layer_name] = unittest.TestSuite()
+            suite.addTest(test)
+    return suites
+
+def tests_from_suite(suite, options, dlevel=1, dlayer='unit'):
+    """Returns a sequence of (test, layer_name)
+
+    The tree of suites is recursively visited, with the most specific
+    layer taking precedence. So if a TestCase with a layer of 'foo' is
+    contained in a TestSuite with a layer of 'bar', the test case would be
+    returned with 'foo' as the layer.
+
+    Tests are also filtered out based on the test level and test selection
+    filters stored in the options.
+    """
+    level = getattr(suite, 'level', dlevel)
+    layer = getattr(suite, 'layer', dlayer)
+    if not isinstance(layer, basestring):
+        layer = name_from_layer(layer)
+
+    if isinstance(suite, unittest.TestSuite):
+        for possible_suite in suite:
+            for r in tests_from_suite(possible_suite, options, level, layer):
+                yield r
+    elif isinstance(suite, StartUpFailure):
+        yield (suite, None)
+    else:
+        if level <= options.at_level:
+            for pat in options.test:
+                if pat(str(suite)):
+                    yield (suite, layer)
+                    break
+
+
+def find_suites(options):
+    for fpath, package in find_test_files(options):
+        for (prefix, prefix_package) in options.prefix:
+            if fpath.startswith(prefix) and package == prefix_package:
+                # strip prefix, strip .py suffix and convert separator to dots
+                noprefix = fpath[len(prefix):]
+                noext = strip_py_ext(options, noprefix)
+                assert noext is not None
+                module_name = noext.replace(os.path.sep, '.')
+                if package:
+                    module_name = package + '.' + module_name
+
+                for filter in options.module:
+                    if filter(module_name):
+                        break
+                else:
+                    continue
+
+                try:
+                    module = import_name(module_name)
+                except KeyboardInterrupt:
+                    raise
+                except:
+                    suite = StartUpFailure(
+                        options, module_name,
+                        sys.exc_info()[:2]
+                        + (sys.exc_info()[2].tb_next.tb_next,),
+                        )
+                else:
+                    try:
+                        suite = getattr(module, options.suite_name)()
+                        if isinstance(suite, unittest.TestSuite):
+                            check_suite(suite, module_name)
+                        else:
+                            raise TypeError(
+                                "Invalid test_suite, %r, in %s"
+                                % (suite, module_name)
+                                )
+                    except KeyboardInterrupt:
+                        raise
+                    except:
+                        suite = StartUpFailure(
+                            options, module_name, sys.exc_info()[:2]+(None,))
+
+
+                yield suite
+                break
+
+
+def check_suite(suite, module_name):
+    """Check for bad tests in a test suite.
+
+    "Bad tests" are those that do not inherit from unittest.TestCase.
+
+    Note that this function is pointless on Python 2.5, because unittest itself
+    checks for this in TestSuite.addTest.  It is, however, useful on earlier
+    Pythons.
+    """
+    for x in suite:
+        if isinstance(x, unittest.TestSuite):
+            check_suite(x, module_name)
+        elif not isinstance(x, unittest.TestCase):
+            raise TypeError(
+                "Invalid test, %r,\nin test_suite from %s"
+                % (x, module_name)
+                )
+
+
+class StartUpFailure(unittest.TestCase):
+    """Empty test case added to the test suite to indicate import failures."""
+
+    def __init__(self, options, module, exc_info):
+        if options.post_mortem:
+            post_mortem(exc_info)
+        self.module = module
+        self.exc_info = exc_info
+
+
+def find_test_files(options):
+    found = {}
+    for f, package in find_test_files_(options):
+        if f not in found:
+            found[f] = 1
+            yield f, package
+
+identifier = re.compile(r'[_a-zA-Z]\w*$').match
+def find_test_files_(options):
+    tests_pattern = options.tests_pattern
+    test_file_pattern = options.test_file_pattern
+
+    # If options.usecompiled, we can accept .pyc or .pyo files instead
+    # of .py files.  We'd rather use a .py file if one exists.  `root2ext`
+    # maps a test file path, sans extension, to the path with the best
+    # extension found (.py if it exists, else .pyc or .pyo).
+    # Note that "py" < "pyc" < "pyo", so if more than one extension is
+    # found, the lexicographically smaller one is best.
+
+    # Found a new test file, in directory `dirname`.  `noext` is the
+    # file name without an extension, and `withext` is the file name
+    # with its extension.
+    def update_root2ext(dirname, noext, withext):
+        key = os.path.join(dirname, noext)
+        new = os.path.join(dirname, withext)
+        if key in root2ext:
+            root2ext[key] = min(root2ext[key], new)
+        else:
+            root2ext[key] = new
+
+    for (p, package) in test_dirs(options, {}):
+        for dirname, dirs, files in walk_with_symlinks(options, p):
+            if dirname != p and not contains_init_py(options, files):
+                continue    # not a plausible test directory
+            root2ext = {}
+            dirs[:] = filter(identifier, dirs)
+            d = os.path.split(dirname)[1]
+            if tests_pattern(d) and contains_init_py(options, files):
+                # tests directory
+                for file in files:
+                    noext = strip_py_ext(options, file)
+                    if noext and test_file_pattern(noext):
+                        update_root2ext(dirname, noext, file)
+
+            for file in files:
+                noext = strip_py_ext(options, file)
+                if noext and tests_pattern(noext):
+                    update_root2ext(dirname, noext, file)
+
+            winners = root2ext.values()
+            winners.sort()
+            for file in winners:
+                yield file, package
+
+def walk_with_symlinks(options, dir):
+    # TODO -- really should have a test of this that uses symlinks;
+    #         this is hard on a number of levels ...
+    for dirpath, dirs, files in os.walk(dir):
+        dirs.sort()
+        files.sort()
+        dirs[:] = [d for d in dirs if d not in options.ignore_dir]
+        yield (dirpath, dirs, files)
+        for d in dirs:
+            p = os.path.join(dirpath, d)
+            if os.path.islink(p):
+                for sdirpath, sdirs, sfiles in walk_with_symlinks(options, p):
+                    yield (sdirpath, sdirs, sfiles)
+
+compiled_suffixes = '.pyc', '.pyo'
+def remove_stale_bytecode(options):
+    if options.keepbytecode:
+        return
+    for (p, _) in options.test_path:
+        for dirname, dirs, files in walk_with_symlinks(options, p):
+            for file in files:
+                if file[-4:] in compiled_suffixes and file[:-1] not in files:
+                    fullname = os.path.join(dirname, file)
+                    options.output.info("Removing stale bytecode file %s"
+                                        % fullname)
+                    os.unlink(fullname)
+
+
+def test_dirs(options, seen):
+    if options.package:
+        for p in options.package:
+            p = import_name(p)
+            for p in p.__path__:
+                p = os.path.abspath(p)
+                if p in seen:
+                    continue
+                for (prefix, package) in options.prefix:
+                    if p.startswith(prefix) or p == prefix[:-1]:
+                        seen[p] = 1
+                        yield p, package
+                        break
+    else:
+        for dpath in options.test_path:
+            yield dpath
+
+
+def import_name(name):
+    __import__(name)
+    return sys.modules[name]
+
+def configure_logging():
+    """Initialize the logging module."""
+    import logging.config
+
+    # Get the log.ini file from the current directory instead of
+    # possibly buried in the build directory.  TODO: This isn't
+    # perfect because if log.ini specifies a log file, it'll be
+    # relative to the build directory.  Hmm...
+
+    logini = os.path.abspath("log.ini")
+    if os.path.exists(logini):
+        logging.config.fileConfig(logini)
+    else:
+        # If there's no log.ini, cause the logging package to be
+        # silent during testing.
+        root = logging.getLogger()
+        root.addHandler(NullHandler())
+        logging.basicConfig()
+
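+    # The LOGGING environment variable, if set, gives the root logger's
+    # level as an integer, e.g. LOGGING=10 corresponds to logging.DEBUG.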
+    if os.environ.has_key("LOGGING"):
+        level = int(os.environ["LOGGING"])
+        logging.getLogger().setLevel(level)
+
+class NullHandler(logging.Handler):
+    """Logging handler that drops everything on the floor.
+
+    We require silence in the test environment.  Hush.
+    """
+
+    def emit(self, record):
+        pass
+
+
+class TrackRefs(object):
+    """Object to track reference counts across test runs."""
+
+    def __init__(self):
+        self.type2count = {}
+        self.type2all = {}
+        self.delta = None
+        self.n = 0
+        self.update()
+        self.delta = None
+
+    def update(self):
+        gc.collect()
+        obs = sys.getobjects(0)
+        type2count = {}
+        type2all = {}
+        n = 0
+        for o in obs:
+            if type(o) is str and o == '<dummy key>':
+                # avoid dictionary madness
+                continue
+
+            all = sys.getrefcount(o) - 3
+            n += all
+
+            t = type(o)
+            if t is types.InstanceType:
+                t = o.__class__
+
+            if t in type2count:
+                type2count[t] += 1
+                type2all[t] += all
+            else:
+                type2count[t] = 1
+                type2all[t] = all
+
+
+        ct = [(
+               type_or_class_title(t),
+               type2count[t] - self.type2count.get(t, 0),
+               type2all[t] - self.type2all.get(t, 0),
+               )
+              for t in type2count.iterkeys()]
+        ct += [(
+                type_or_class_title(t),
+                - self.type2count[t],
+                - self.type2all[t],
+                )
+               for t in self.type2count.iterkeys()
+               if t not in type2count]
+        ct.sort()
+        self.delta = ct
+        self.type2count = type2count
+        self.type2all = type2all
+        self.n = n
+
+
+    def output(self):
+        printed = False
+        s1 = s2 = 0
+        for t, delta1, delta2 in self.delta:
+            if delta1 or delta2:
+                if not printed:
+                    print (
+                        '    Leak details, changes in instances and refcounts'
+                        ' by type/class:')
+                    print "    %-55s %6s %6s" % ('type/class', 'insts', 'refs')
+                    print "    %-55s %6s %6s" % ('-' * 55, '-----', '----')
+                    printed = True
+                print "    %-55s %6d %6d" % (t, delta1, delta2)
+                s1 += delta1
+                s2 += delta2
+
+        if printed:
+            print "    %-55s %6s %6s" % ('-' * 55, '-----', '----')
+            print "    %-55s %6s %6s" % ('total', s1, s2)
+
+
+        self.delta = None
+
+def type_or_class_title(t):
+    module = getattr(t, '__module__', '__builtin__')
+    if module == '__builtin__':
+        return t.__name__
+    return "%s.%s" % (module, t.__name__)
+
+
+###############################################################################
+# Command-line UI
+
+parser = optparse.OptionParser("Usage: %prog [options] [MODULE] [TEST]")
+
+######################################################################
+# Searching and filtering
+
+searching = optparse.OptionGroup(parser, "Searching and filtering", """\
+Options in this group are used to define which tests to run.
+""")
+
+searching.add_option(
+    '--package', '--dir', '-s', action="append", dest='package',
+    help="""\
+Search the given package's directories for tests.  This can be
+specified more than once to run tests in multiple parts of the source
+tree.  For example, if refactoring interfaces, you don't want to see
+the way you have broken setups for tests in other packages. You *just*
+want to run the interface tests.
+
+Packages are supplied as dotted names.  For compatibility with the old
+test runner, forward and backward slashes in package names are
+converted to dots.
+
+(In the special case of packages spread over multiple directories,
+only directories within the test search path are searched. See the
+--path option.)
+
+""")
+
+searching.add_option(
+    '--module', '-m', action="append", dest='module',
+    help="""\
+Specify a test-module filter as a regular expression.  This is a
+case-sensitive regular expression, used in search (not match) mode, to
+limit which test modules are searched for tests.  The regular
+expressions are checked against dotted module names.  In an extension
+of Python regexp notation, a leading "!" is stripped and causes the
+sense of the remaining regexp to be negated (so "!bc" matches any
+string that does not match "bc", and vice versa).  The option can be
+given multiple times to specify multiple test-module filters.  Test
+modules matching any of the filters are searched.  If no test-module
+filter is specified,
+then all test modules are used.
+""")
+
+searching.add_option(
+    '--test', '-t', action="append", dest='test',
+    help="""\
+Specify a test filter as a regular expression.  This is a
+case-sensitive regular expression, used in search (not match) mode, to
+limit which tests are run.  In an extension of Python regexp notation,
+a leading "!" is stripped and causes the sense of the remaining regexp
+to be negated (so "!bc" matches any string that does not match "bc",
+and vice versa).  The option can be given multiple times to specify
+multiple test filters.
+Tests matching any of the test filters are included.  If no test
+filter is specified, then all tests are run.
+""")
+
+searching.add_option(
+    '--unit', '-u', action="store_true", dest='unit',
+    help="""\
+Run only unit tests, ignoring any layer options.
+""")
+
+searching.add_option(
+    '--non-unit', '-f', action="store_true", dest='non_unit',
+    help="""\
+Run tests other than unit tests.
+""")
+
+searching.add_option(
+    '--layer', action="append", dest='layer',
+    help="""\
+Specify a test layer to run.  The option can be given multiple times
+to specify more than one layer.  If not specified, all layers are run.
+It is common for the running script to provide default values for this
+option.  Layers are specified as regular expressions, used in search
+mode, against the dotted names of objects that define a layer.  In an
+extension of Python regexp notation, a leading "!" is stripped and
+causes the sense of the remaining regexp to be negated (so "!bc"
+matches any string that does not match "bc", and vice versa).  The
+layer named 'unit' is reserved for unit tests; note, however, the
+--unit and --non-unit options.
+""")
+
+searching.add_option(
+    '-a', '--at-level', type='int', dest='at_level',
+    help="""\
+Run the tests at the given level.  Any test at a level at or below
+this is run, any test at a level above this is not run.  Level 0
+runs all tests.
+""")
+
+searching.add_option(
+    '--all', action="store_true", dest='all',
+    help="Run tests at all levels.")
+
+searching.add_option(
+    '--list-tests', action="store_true", dest='list_tests', default=False,
+    help="List all tests that matched your filters.  Do not run any tests.")
+
+parser.add_option_group(searching)
+
+######################################################################
+# Reporting
+
+reporting = optparse.OptionGroup(parser, "Reporting", """\
+Reporting options control basic aspects of test-runner output.
+""")
+
+reporting.add_option(
+    '--verbose', '-v', action="count", dest='verbose',
+    help="""\
+Make output more verbose.  The option can be given multiple times to
+increase the verbosity level further.
+""")
+
+reporting.add_option(
+    '--quiet', '-q', action="store_true", dest='quiet',
+    help="""\
+Make the output minimal, overriding any verbosity options.
+""")
+
+reporting.add_option(
+    '--progress', '-p', action="store_true", dest='progress',
+    help="""\
+Output progress status.
+""")
+
+reporting.add_option(
+    '--no-progress', action="store_false", dest='progress',
+    help="""\
+Do not output progress status.  This is the default, but can be used to
+counter a previous use of --progress or -p.
+""")
+
+# We use a noop callback because the actual processing will be done in the
+# get_options function, but we want optparse to generate appropriate help info
+# for us, so we add an option anyway.
+reporting.add_option(
+    '--auto-progress', action="callback", callback=lambda *args: None,
+    help="""\
+Output progress status, but only when stdout is a terminal.
+""")
+
+reporting.add_option(
+    '--color', '-c', action="store_true", dest='color',
+    help="""\
+Colorize the output.
+""")
+
+reporting.add_option(
+    '--no-color', '-C', action="store_false", dest='color',
+    help="""\
+Do not colorize the output.  This is the default, but can be used to
+counter a previous use of --color or -c.
+""")
+
+# We use a noop callback because the actual processing will be done in the
+# get_options function, but we want optparse to generate appropriate help info
+# for us, so we add an option anyway.
+reporting.add_option(
+    '--auto-color', action="callback", callback=lambda *args: None,
+    help="""\
+Colorize the output, but only when stdout is a terminal.
+""")
+
+reporting.add_option(
+    '--slow-test', type='float', dest='slow_test_threshold',
+    metavar='N', default=10,
+    help="""\
+With -c and -vvv, highlight tests that take longer than N seconds (default:
+%default).
+""")
+
+reporting.add_option(
+    '-1', '--hide-secondary-failures',
+    action="store_true", dest='report_only_first_failure',
+    help="""\
+Report only the first failure in a doctest. (Examples after the
+failure are still executed, in case they do any cleanup.)
+""")
+
+reporting.add_option(
+    '--show-secondary-failures',
+    action="store_false", dest='report_only_first_failure',
+    help="""\
+Report all failures in a doctest.  This is the default, but can
+be used to counter a default use of -1 or --hide-secondary-failures.
+""")
+
+reporting.add_option(
+    '--ndiff', action="store_true", dest="ndiff",
+    help="""\
+When there is a doctest failure, show it as a diff using the ndiff.py utility.
+""")
+
+reporting.add_option(
+    '--udiff', action="store_true", dest="udiff",
+    help="""\
+When there is a doctest failure, show it as a unified diff.
+""")
+
+reporting.add_option(
+    '--cdiff', action="store_true", dest="cdiff",
+    help="""\
+When there is a doctest failure, show it as a context diff.
+""")
+
+parser.add_option_group(reporting)
+
+######################################################################
+# Analysis
+
+analysis = optparse.OptionGroup(parser, "Analysis", """\
+Analysis options provide tools for analysing test output.
+""")
+
+
+analysis.add_option(
+    '--post-mortem', '-D', action="store_true", dest='post_mortem',
+    help="Enable post-mortem debugging of test failures"
+    )
+
+
+analysis.add_option(
+    '--gc', '-g', action="append", dest='gc', type="int",
+    help="""\
+Set the garbage collector generation threshold.  This can be used
+to stress memory and gc correctness.  Some crashes are only
+reproducible when the threshold is set to 1 (aggressive garbage
+collection).  Do "--gc 0" to disable garbage collection altogether.
+
+The --gc option can be given up to 3 times to supply the three Python
+gc threshold settings (see gc.set_threshold).
+
+""")
+
+analysis.add_option(
+    '--gc-option', '-G', action="append", dest='gc_option', type="choice",
+    choices=['DEBUG_STATS', 'DEBUG_COLLECTABLE', 'DEBUG_UNCOLLECTABLE',
+             'DEBUG_INSTANCES', 'DEBUG_OBJECTS', 'DEBUG_SAVEALL',
+             'DEBUG_LEAK'],
+    help="""\
+Set a Python gc-module debug flag.  This option can be used more than
+once to set multiple flags.
+""")
+
+analysis.add_option(
+    '--repeat', '-N', action="store", type="int", dest='repeat',
+    help="""\
+Repeat the tests the given number of times.  This option is used to
+make sure that tests leave their environment in the state they found
+it and, with the --report-refcounts option, to look for memory leaks.
+""")
+
+analysis.add_option(
+    '--report-refcounts', '-r', action="store_true", dest='report_refcounts',
+    help="""\
+After each run of the tests, output a report summarizing changes in
+refcounts by object type.  This option requires that Python was
+built with the --with-pydebug configure option.
+""")
+
+analysis.add_option(
+    '--coverage', action="store", type='string', dest='coverage',
+    help="""\
+Perform code-coverage analysis, saving trace data to the directory
+with the given name.  A code coverage summary is printed to standard
+out.
+""")
+
+analysis.add_option(
+    '--profile', action="store", dest='profile', type="choice",
+    choices=available_profilers.keys(),
+    help="""\
+Run the tests under cProfiler or hotshot and display the top 50 stats, sorted
+by cumulative time and number of calls.
+""")
+
+def do_pychecker(*args):
+    if not os.environ.get("PYCHECKER"):
+        os.environ["PYCHECKER"] = "-q"
+    import pychecker.checker
+
+analysis.add_option(
+    '--pychecker', action="callback", callback=do_pychecker,
+    help="""\
+Run the tests under pychecker.
+""")
+
+parser.add_option_group(analysis)
+
+######################################################################
+# Setup
+
+setup = optparse.OptionGroup(parser, "Setup", """\
+Setup options are normally supplied by the testrunner script, although
+they can be overridden by users.
+""")
+
+setup.add_option(
+    '--path', action="append", dest='path',
+    help="""\
+Specify a path to be added to Python's search path.  This option can
+be used multiple times to specify multiple search paths.  The path is
+usually specified by the test-runner script itself, rather than by
+users of the script, although it can be overridden by users.  Only
+tests found in the path will be run.
+
+This option also specifies directories to be searched for tests.
+See the search_directory.
+""")
+
+setup.add_option(
+    '--test-path', action="append", dest='test_path',
+    help="""\
+Specify a path to be searched for tests, but not added to the Python
+search path.  This option can be used multiple times to specify
+multiple search paths.  The path is usually specified by the
+test-runner script itself, rather than by users of the script,
+although it can be overridden by users.  Only tests found in the path
+will be run.
+""")
+
+setup.add_option(
+    '--package-path', action="append", dest='package_path', nargs=2,
+    help="""\
+Specify a path to be searched for tests, but not added to the Python
+search path.  Also specify a package for files found in this path.
+This is used to deal with directories that are stitched into packages
+that are not otherwise searched for tests.
+
+This option takes 2 arguments.  The first is a path name. The second is
+the package name.
+
+This option can be used multiple times to specify
+multiple search paths.  The path is usually specified by the
+test-runner script itself, rather than by users of the script,
+although it can be overridden by users.  Only tests found in the path
+will be run.
+""")
+
+setup.add_option(
+    '--tests-pattern', action="store", dest='tests_pattern',
+    help="""\
+The test runner looks for modules containing tests.  It uses this
+pattern to identify these modules.  The modules may be either packages
+or python files.
+
+If a test module is a package, it uses the value given by the
+test-file-pattern to identify python files within the package
+containing tests.
+""")
+
+setup.add_option(
+    '--suite-name', action="store", dest='suite_name',
+    help="""\
+Specify the name of the object in each test_module that contains the
+module's test suite.
+""")
+
+setup.add_option(
+    '--test-file-pattern', action="store", dest='test_file_pattern',
+    help="""\
+Specify a pattern for identifying python files within a tests package.
+See the documentation for the --tests-pattern option.
+""")
+
+setup.add_option(
+    '--ignore_dir', action="append", dest='ignore_dir',
+    help="""\
+Specify the name of a directory to ignore when looking for tests.
+""")
+
+parser.add_option_group(setup)
+
+######################################################################
+# Other
+
+other = optparse.OptionGroup(parser, "Other", "Other options")
+
+other.add_option(
+    '--keepbytecode', '-k', action="store_true", dest='keepbytecode',
+    help="""\
+Normally, the test runner scans the test paths and the test
+directories looking for and deleting pyc or pyo files without
+corresponding py files.  This is to prevent spurious test failures due
+to finding compiled modules where source modules have been deleted.
+This scan can be time consuming.  Using this option disables this
+scan.  If you know you haven't removed any modules since last running
+the tests, using this option can make the test run go much faster.
+""")
+
+other.add_option(
+    '--usecompiled', action="store_true", dest='usecompiled',
+    help="""\
+Normally, a package must contain an __init__.py file, and only .py files
+can contain test code.  When this option is specified, compiled Python
+files (.pyc and .pyo) can be used instead:  a directory containing
+__init__.pyc or __init__.pyo is also considered to be a package, and if
+file XYZ.py contains tests but is absent while XYZ.pyc or XYZ.pyo exists
+then the compiled files will be used.  This is necessary when running
+tests against a tree where the .py files have been removed after
+compilation to .pyc/.pyo.  Use of this option implies --keepbytecode.
+""")
+
+other.add_option(
+    '--exit-with-status', action="store_true", dest='exitwithstatus',
+    help="""\
+Return an error exit status if the tests failed.  This can be useful for
+an invoking process that wants to monitor the result of a test run.
+""")
+
+parser.add_option_group(other)
+
+######################################################################
+# Command-line processing
+
+def compile_filter(pattern):
+    if pattern.startswith('!'):
+        pattern = re.compile(pattern[1:]).search
+        return (lambda s: not pattern(s))
+    return re.compile(pattern).search
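+
+# For example, compile_filter('!bc') returns a predicate that is true only
+# for strings containing no match for 'bc', while compile_filter('bc')
+# returns the plain regular-expression search predicate.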
+
+def merge_options(options, defaults):
+    odict = options.__dict__
+    for name, value in defaults.__dict__.items():
+        if (value is not None) and (odict[name] is None):
+            odict[name] = value
+
+default_setup_args = [
+    '--tests-pattern', '^tests$',
+    '--at-level', '1',
+    '--ignore', '.svn',
+    '--ignore', 'CVS',
+    '--ignore', '{arch}',
+    '--ignore', '.arch-ids',
+    '--ignore', '_darcs',
+    '--test-file-pattern', '^test',
+    '--suite-name', 'test_suite',
+    ]
+
+
+def terminal_has_colors():
+    """Determine whether the terminal supports colors.
+
+    Some terminals (e.g. the emacs built-in one) don't.
+    """
+    return tigetnum('colors', -1) >= 8
+
+
+def get_options(args=None, defaults=None):
+    # Because we want to inspect stdout and decide to colorize or not, we
+    # replace the --auto-color option with the appropriate --color or
+    # --no-color option.  That way the subprocess doesn't have to decide (which
+    # it would do incorrectly anyway because stdout would be a pipe).
+    def apply_auto_color(args):
+        if args and '--auto-color' in args:
+            if sys.stdout.isatty() and terminal_has_colors():
+                colorization = '--color'
+            else:
+                colorization = '--no-color'
+
+            args[:] = [arg.replace('--auto-color', colorization)
+                       for arg in args]
+
+    # The comment of apply_auto_color applies here as well
+    def apply_auto_progress(args):
+        if args and '--auto-progress' in args:
+            if sys.stdout.isatty():
+                progress = '--progress'
+            else:
+                progress = '--no-progress'
+
+            args[:] = [arg.replace('--auto-progress', progress)
+                       for arg in args]
+
+    apply_auto_color(args)
+    apply_auto_color(defaults)
+    apply_auto_progress(args)
+    apply_auto_progress(defaults)
+
+    default_setup, _ = parser.parse_args(default_setup_args)
+    assert not _
+    if defaults:
+        defaults, _ = parser.parse_args(defaults)
+        assert not _
+        merge_options(defaults, default_setup)
+    else:
+        defaults = default_setup
+
+    if args is None:
+        args = sys.argv
+
+    original_testrunner_args = args
+    args = args[1:]
+
+    options, positional = parser.parse_args(args)
+    merge_options(options, defaults)
+    options.original_testrunner_args = original_testrunner_args
+
+    if options.color:
+        options.output = ColorfulOutputFormatter(options)
+        options.output.slow_test_threshold = options.slow_test_threshold
+    else:
+        options.output = OutputFormatter(options)
+
+    options.fail = False
+
+    if positional:
+        module_filter = positional.pop(0)
+        if module_filter != '.':
+            if options.module:
+                options.module.append(module_filter)
+            else:
+                options.module = [module_filter]
+
+        if positional:
+            test_filter = positional.pop(0)
+            if options.test:
+                options.test.append(test_filter)
+            else:
+                options.test = [test_filter]
+
+            if positional:
+                parser.error("Too many positional arguments")
+
+    options.ignore_dir = dict([(d,1) for d in options.ignore_dir])
+    options.test_file_pattern = re.compile(options.test_file_pattern).search
+    options.tests_pattern = re.compile(options.tests_pattern).search
+    options.test = map(compile_filter, options.test or ('.',))
+    options.module = map(compile_filter, options.module or ('.',))
+
+    options.path = map(os.path.abspath, options.path or ())
+    options.test_path = map(os.path.abspath, options.test_path or ())
+    options.test_path += options.path
+
+    options.test_path = ([(path, '') for path in options.test_path]
+                         +
+                         [(os.path.abspath(path), package)
+                          for (path, package) in options.package_path or ()
+                          ])
+
+    if options.package:
+        pkgmap = dict(options.test_path)
+        options.package = [normalize_package(p, pkgmap)
+                           for p in options.package]
+
+    options.prefix = [(path + os.path.sep, package)
+                      for (path, package) in options.test_path]
+    if options.all:
+        options.at_level = sys.maxint
+
+    if options.unit and options.non_unit:
+        # The test runner interprets this as "run only those tests that are
+        # both unit and non-unit at the same time".  The user, however, wants
+        # to run both unit and non-unit tests.  Disable the filtering so that
+        # the user will get what she wants:
+        options.unit = options.non_unit = False
+
+    if options.unit:
+        options.layer = ['unit']
+    if options.layer:
+        options.layer = map(compile_filter, options.layer)
+
+    options.layer = options.layer and dict([(l, 1) for l in options.layer])
+
+    if options.usecompiled:
+        options.keepbytecode = options.usecompiled
+
+    if options.quiet:
+        options.verbose = 0
+
+    if options.report_refcounts and options.repeat < 2:
+        print """\
+        You must use the --repeat (-N) option to specify a repeat
+        count greater than 1 when using the --report-refcounts (-r)
+        option.
+        """
+        options.fail = True
+        return options
+
+
+    if options.report_refcounts and not hasattr(sys, "gettotalrefcount"):
+        print """\
+        The Python you are running was not configured
+        with --with-pydebug. This is required to use
+        the --report-refcounts option.
+        """
+        options.fail = True
+        return options
+
+    return options
+
+def normalize_package(package, package_map={}):
+    r"""Normalize package name passed to the --package option.
+
+        >>> normalize_package('zope.testing')
+        'zope.testing'
+
+    Converts path names into package names for compatibility with the old
+    test runner.
+
+        >>> normalize_package('zope/testing')
+        'zope.testing'
+        >>> normalize_package('zope/testing/')
+        'zope.testing'
+        >>> normalize_package('zope\\testing')
+        'zope.testing'
+
+    Can use a map of absolute pathnames to package names
+
+        >>> a = os.path.abspath
+        >>> normalize_package('src/zope/testing/',
+        ...                   {a('src'): ''})
+        'zope.testing'
+        >>> normalize_package('src/zope_testing/',
+        ...                   {a('src/zope_testing'): 'zope.testing'})
+        'zope.testing'
+        >>> normalize_package('src/zope_something/tests',
+        ...                   {a('src/zope_something'): 'zope.something',
+        ...                    a('src'): ''})
+        'zope.something.tests'
+
+    """
+    package = package.replace('\\', '/')
+    if package.endswith('/'):
+        package = package[:-1]
+    bits = package.split('/')
+    for n in range(len(bits), 0, -1):
+        pkg = package_map.get(os.path.abspath('/'.join(bits[:n])))
+        if pkg is not None:
+            bits = bits[n:]
+            if pkg:
+                bits = [pkg] + bits
+            return '.'.join(bits)
+    return package.replace('/', '.')
+
+# Command-line UI
+###############################################################################
+
+###############################################################################
+# Install 2.4 TestSuite __iter__ into earlier versions
+
+if sys.version_info < (2, 4):
+    def __iter__(suite):
+        return iter(suite._tests)
+    unittest.TestSuite.__iter__ = __iter__
+    del __iter__
+
+# Install 2.4 TestSuite __iter__ into earlier versions
+###############################################################################
+
+###############################################################################
+# Test the testrunner
+
+def test_suite():
+
+    from zope.testing import renormalizing
+    checker = renormalizing.RENormalizing([
+        # 2.5 changed the way pdb reports exceptions
+        (re.compile(r"<class 'exceptions.(\w+)Error'>:"),
+                    r'exceptions.\1Error:'),
+
+        (re.compile('^> [^\n]+->None$', re.M), '> ...->None'),
+        (re.compile(r"<module>"),(r'?')),
+        (re.compile(r"<type 'exceptions.(\w+)Error'>:"),
+                    r'exceptions.\1Error:'),
+        (re.compile("'[A-Za-z]:\\\\"), "'"), # hopefully, we'll make Windows happy
+        (re.compile(r'\\\\'), '/'), # more Windows happiness
+        (re.compile(r'\\'), '/'), # even more Windows happiness
+        (re.compile('/r'), '\\\\r'), # undo damage from previous
+        (re.compile(r'\r'), '\\\\r\n'),
+        (re.compile(r'\d+[.]\d\d\d seconds'), 'N.NNN seconds'),
+        (re.compile(r'\d+[.]\d\d\d s'), 'N.NNN s'),
+        (re.compile(r'\d+[.]\d\d\d{'), 'N.NNN{'),
+        (re.compile('( |")[^\n]+testrunner-ex'), r'\1testrunner-ex'),
+        (re.compile('( |")[^\n]+testrunner.py'), r'\1testrunner.py'),
+        (re.compile(r'> [^\n]*(doc|unit)test[.]py\(\d+\)'),
+                    r'\1test.py(NNN)'),
+        (re.compile(r'[.]py\(\d+\)'), r'.py(NNN)'),
+        (re.compile(r'[.]py:\d+'), r'.py:NNN'),
+        (re.compile(r' line \d+,', re.IGNORECASE), r' Line NNN,'),
+        (re.compile(r' line {([a-z]+)}\d+{', re.IGNORECASE), r' Line {\1}NNN{'),
+
+        # omit traceback entries for unittest.py or doctest.py from
+        # output:
+        (re.compile(r'^ +File "[^\n]*(doc|unit)test.py", [^\n]+\n[^\n]+\n',
+                    re.MULTILINE),
+         r''),
+        (re.compile(r'^{\w+} +File "{\w+}[^\n]*(doc|unit)test.py{\w+}", [^\n]+\n[^\n]+\n',
+                    re.MULTILINE),
+         r''),
+        (re.compile('^> [^\n]+->None$', re.M), '> ...->None'),
+        (re.compile('import pdb; pdb'), 'Pdb()'), # Py 2.3
+        ])
+
+    def setUp(test):
+        test.globs['saved-sys-info'] = (
+            sys.path[:],
+            sys.argv[:],
+            sys.modules.copy(),
+            gc.get_threshold(),
+            )
+        test.globs['this_directory'] = os.path.split(__file__)[0]
+        test.globs['testrunner_script'] = __file__
+
+    def tearDown(test):
+        sys.path[:], sys.argv[:] = test.globs['saved-sys-info'][:2]
+        gc.set_threshold(*test.globs['saved-sys-info'][3])
+        sys.modules.clear()
+        sys.modules.update(test.globs['saved-sys-info'][2])
+
+    suites = [
+        doctest.DocFileSuite(
+        'testrunner-arguments.txt',
+        'testrunner-coverage.txt',
+        'testrunner-debugging-layer-setup.test',
+        'testrunner-debugging.txt',
+        'testrunner-edge-cases.txt',
+        'testrunner-errors.txt',
+        'testrunner-layers-ntd.txt',
+        'testrunner-layers.txt',
+        'testrunner-layers-api.txt',
+        'testrunner-progress.txt',
+        'testrunner-colors.txt',
+        'testrunner-simple.txt',
+        'testrunner-test-selection.txt',
+        'testrunner-verbose.txt',
+        'testrunner-wo-source.txt',
+        'testrunner-repeat.txt',
+        'testrunner-gc.txt',
+        'testrunner-knit.txt',
+        setUp=setUp, tearDown=tearDown,
+        optionflags=doctest.ELLIPSIS+doctest.NORMALIZE_WHITESPACE,
+        checker=checker),
+        doctest.DocTestSuite()
+        ]
+
+    if sys.platform == 'win32':
+        suites.append(
+            doctest.DocFileSuite(
+            'testrunner-coverage-win32.txt',
+            setUp=setUp, tearDown=tearDown,
+            optionflags=doctest.ELLIPSIS+doctest.NORMALIZE_WHITESPACE,
+            checker=checker))
+
+    # Python <= 2.4.1 had a bug that prevented hotshot from running in
+    # non-optimize mode
+    if sys.version_info[:3] > (2,4,1) or not __debug__:
+        # some Linux distributions don't include the profiling module (which
+        # hotshot.stats depends on)
+        try:
+            import hotshot.stats
+        except ImportError:
+            pass
+        else:
+            suites.append(
+                doctest.DocFileSuite(
+                    'testrunner-profiling.txt',
+                    setUp=setUp, tearDown=tearDown,
+                    optionflags=doctest.ELLIPSIS+doctest.NORMALIZE_WHITESPACE,
+                    checker = renormalizing.RENormalizing([
+                        (re.compile(r'tests_profile[.]\S*[.]prof'),
+                         'tests_profile.*.prof'),
+                        ]),
+                    )
+                )
+        try:
+            import cProfile
+            import pstats
+        except ImportError:
+            pass
+        else:
+            suites.append(
+                doctest.DocFileSuite(
+                    'testrunner-profiling-cprofiler.txt',
+                    setUp=setUp, tearDown=tearDown,
+                    optionflags=doctest.ELLIPSIS+doctest.NORMALIZE_WHITESPACE,
+                    checker = renormalizing.RENormalizing([
+                        (re.compile(r'tests_profile[.]\S*[.]prof'),
+                         'tests_profile.*.prof'),
+                        ]),
+                    )
+                )
+
+
+    if hasattr(sys, 'gettotalrefcount'):
+        suites.append(
+            doctest.DocFileSuite(
+            'testrunner-leaks.txt',
+            setUp=setUp, tearDown=tearDown,
+            optionflags=doctest.ELLIPSIS+doctest.NORMALIZE_WHITESPACE,
+            checker = renormalizing.RENormalizing([
+              (re.compile(r'\d+[.]\d\d\d seconds'), 'N.NNN seconds'),
+              (re.compile(r'sys refcount=\d+ +change=\d+'),
+               'sys refcount=NNNNNN change=NN'),
+              (re.compile(r'sum detail refcount=\d+ +'),
+               'sum detail refcount=NNNNNN '),
+              (re.compile(r'total +\d+ +\d+'),
+               'total               NNNN    NNNN'),
+              (re.compile(r"^ +(int|type) +-?\d+ +-?\d+ *\n", re.M),
+               ''),
+              ]),
+
+            )
+        )
+    else:
+        suites.append(
+            doctest.DocFileSuite(
+            'testrunner-leaks-err.txt',
+            setUp=setUp, tearDown=tearDown,
+            optionflags=doctest.ELLIPSIS+doctest.NORMALIZE_WHITESPACE,
+            checker=checker,
+            )
+        )
+
+
+    return unittest.TestSuite(suites)
+
+def main():
+    default = [
+        '--path', os.path.split(sys.argv[0])[0],
+        '--tests-pattern', '^testrunner$',
+        ]
+    run(default)
+
+if __name__ == '__main__':
+
+    # if --resume-layer is in the arguments, we are being run from the
+    # test runner's own tests.  We need to adjust the path in hopes of
+    # not picking up a different version installed in the system Python.
+    if len(sys.argv) > 1 and sys.argv[1] == '--resume-layer':
+        sys.path.insert(0,
+            os.path.split(
+                os.path.split(
+                    os.path.split(
+                        os.path.abspath(sys.argv[0])
+                        )[0]
+                    )[0]
+                )[0]
+            )
+
+    # Hm, when run as a script, we need to import the testrunner under
+    # its own name, so that the imported flavor has the right
+    # real_pdb_set_trace.
+    import zope.testing.testrunner
+    from zope.testing import doctest
+
+    main()
+
+# Test the testrunner
+###############################################################################
+
+# Delay import to give main an opportunity to fix up the path if
+# necessary
+from zope.testing import doctest

Copied: zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-arguments.txt (from rev 86204, zope.testing/trunk/src/zope/testing/testrunner-arguments.txt)
===================================================================
--- zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-arguments.txt	                        (rev 0)
+++ zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-arguments.txt	2008-05-03 13:25:02 UTC (rev 86207)
@@ -0,0 +1,27 @@
+Passing arguments explicitly
+============================
+
+In most of the examples here, we set up `sys.argv`.  In normal usage,
+the testrunner just uses `sys.argv`.  It is possible to pass arguments
+explicitly.
+
+    >>> import os.path
+    >>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
+    >>> defaults = [
+    ...     '--path', directory_with_tests,
+    ...     '--tests-pattern', '^sampletestsf?$',
+    ...     ]
+    >>> from zope.testing import testrunner
+    >>> testrunner.run(defaults, 'test --layer 111'.split())
+    Running samplelayers.Layer111 tests:
+      Set up samplelayers.Layerx in N.NNN seconds.
+      Set up samplelayers.Layer1 in N.NNN seconds.
+      Set up samplelayers.Layer11 in N.NNN seconds.
+      Set up samplelayers.Layer111 in N.NNN seconds.
+      Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
+    Tearing down left over layers:
+      Tear down samplelayers.Layer111 in N.NNN seconds.
+      Tear down samplelayers.Layerx in N.NNN seconds.
+      Tear down samplelayers.Layer11 in N.NNN seconds.
+      Tear down samplelayers.Layer1 in N.NNN seconds.
+    False

Copied: zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-colors.txt (from rev 86204, zope.testing/trunk/src/zope/testing/testrunner-colors.txt)
===================================================================
--- zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-colors.txt	                        (rev 0)
+++ zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-colors.txt	2008-05-03 13:25:02 UTC (rev 86207)
@@ -0,0 +1,372 @@
+Colorful output
+===============
+
+If you're on a Unix-like system, you can ask for colorized output.  The test
+runner emits terminal control sequences to highlight important pieces of
+information (such as the names of failing tests) in different colors.
+
+    >>> import os.path, sys
+    >>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
+    >>> defaults = [
+    ...     '--path', directory_with_tests,
+    ...     '--tests-pattern', '^sampletestsf?$',
+    ...     ]
+
+    >>> from zope.testing import testrunner
+
+Since it wouldn't be a good idea to have terminal control characters in a
+test file, let's wrap sys.stdout in a simple terminal interpreter
+
+    >>> import re
+    >>> class Terminal(object):
+    ...     _color_regexp = re.compile('\033[[]([0-9;]*)m')
+    ...     _colors = {'0': 'normal', '1': 'bold', '30': 'black', '31': 'red',
+    ...                '32': 'green', '33': 'yellow', '34': 'blue',
+    ...                '35': 'magenta', '36': 'cyan', '37': 'grey'}
+    ...     def __init__(self, stream):
+    ...         self._stream = stream
+    ...     def __getattr__(self, attr):
+    ...         return getattr(self._stream, attr)
+    ...     def isatty(self):
+    ...         return True
+    ...     def write(self, text):
+    ...         if '\033[' in text:
+    ...             text = self._color_regexp.sub(self._color, text)
+    ...         self._stream.write(text)
+    ...     def writelines(self, lines):
+    ...         for line in lines:
+    ...             self.write(line)
+    ...     def _color(self, match):
+    ...         colorstring = '{'
+    ...         for number in match.group(1).split(';'):
+    ...             colorstring += self._colors.get(number, '?')
+    ...         return colorstring + '}'
+
+    >>> real_stdout = sys.stdout
+    >>> sys.stdout = Terminal(sys.stdout)
+
+
+Successful test
+---------------
+
+A successful test run soothes the developer with warm green colors:
+
+    >>> sys.argv = 'test --layer 122 -c'.split()
+    >>> testrunner.run(defaults)
+    {normal}Running samplelayers.Layer122 tests:{normal}
+      Set up samplelayers.Layer1 in {green}0.000{normal} seconds.
+      Set up samplelayers.Layer12 in {green}0.000{normal} seconds.
+      Set up samplelayers.Layer122 in {green}0.000{normal} seconds.
+    {normal}  Ran {green}34{normal} tests with {green}0{normal} failures and {green}0{normal} errors in {green}0.007{normal} seconds.{normal}
+    {normal}Tearing down left over layers:{normal}
+      Tear down samplelayers.Layer122 in {green}0.000{normal} seconds.
+      Tear down samplelayers.Layer12 in {green}0.000{normal} seconds.
+      Tear down samplelayers.Layer1 in {green}0.000{normal} seconds.
+    False
+
+
+Failed test
+-----------
+
+A failed test run highlights the failures in red:
+
+    >>> sys.argv = 'test -c --tests-pattern ^sampletests(f|_e|_f)?$ '.split()
+    >>> testrunner.run(defaults)
+    {normal}Running unit tests:{normal}
+    <BLANKLINE>
+    <BLANKLINE>
+    {boldred}Failure in test eek (sample2.sampletests_e){normal}
+    Failed doctest test for sample2.sampletests_e.eek
+      File "testrunner-ex/sample2/sampletests_e.py", line 28, in eek
+    <BLANKLINE>
+    ----------------------------------------------------------------------
+    {normal}File "{boldblue}testrunner-ex/sample2/sampletests_e.py{normal}", line {boldred}30{normal}, in {boldcyan}sample2.sampletests_e.eek{normal}
+    Failed example:
+    {cyan}    f(){normal}
+    Exception raised:
+    {red}    Traceback (most recent call last):{normal}
+    {red}      File ".../doctest.py", line 1356, in __run{normal}
+    {red}        compileflags, 1) in test.globs{normal}
+    {red}      File "<doctest sample2.sampletests_e.eek[0]>", line 1, in ?{normal}
+    {red}        f(){normal}
+    {red}      File "testrunner-ex/sample2/sampletests_e.py", line 19, in f{normal}
+    {red}        g(){normal}
+    {red}      File "testrunner-ex/sample2/sampletests_e.py", line 24, in g{normal}
+    {red}        x = y + 1{normal}
+    {red}    NameError: global name 'y' is not defined{normal}
+    <BLANKLINE>
+    <BLANKLINE>
+    <BLANKLINE>
+    {boldred}Error in test test3 (sample2.sampletests_e.Test){normal}
+    Traceback (most recent call last):
+    {normal}  File "{boldblue}unittest.py{normal}", line {boldred}260{normal}, in {boldcyan}run{normal}
+    {cyan}    testMethod(){normal}
+    {normal}  File "{boldblue}testrunner-ex/sample2/sampletests_e.py{normal}", line {boldred}43{normal}, in {boldcyan}test3{normal}
+    {cyan}    f(){normal}
+    {normal}  File "{boldblue}testrunner-ex/sample2/sampletests_e.py{normal}", line {boldred}19{normal}, in {boldcyan}f{normal}
+    {cyan}    g(){normal}
+    {normal}  File "{boldblue}testrunner-ex/sample2/sampletests_e.py{normal}", line {boldred}24{normal}, in {boldcyan}g{normal}
+    {cyan}    x = y + 1{normal}
+    {red}NameError: global name 'y' is not defined{normal}
+    <BLANKLINE>
+    <BLANKLINE>
+    <BLANKLINE>
+    {boldred}Failure in test testrunner-ex/sample2/e.txt{normal}
+    Failed doctest test for e.txt
+      File "testrunner-ex/sample2/e.txt", line 0
+    <BLANKLINE>
+    ----------------------------------------------------------------------
+    {normal}File "{boldblue}testrunner-ex/sample2/e.txt{normal}", line {boldred}4{normal}, in {boldcyan}e.txt{normal}
+    Failed example:
+    {cyan}    f(){normal}
+    Exception raised:
+    {red}    Traceback (most recent call last):{normal}
+    {red}      File ".../doctest.py", line 1356, in __run{normal}
+    {red}        compileflags, 1) in test.globs{normal}
+    {red}      File "<doctest e.txt[1]>", line 1, in ?{normal}
+    {red}        f(){normal}
+    {red}      File "<doctest e.txt[0]>", line 2, in f{normal}
+    {red}        return x{normal}
+    {red}    NameError: global name 'x' is not defined{normal}
+    <BLANKLINE>
+    <BLANKLINE>
+    <BLANKLINE>
+    {boldred}Failure in test test (sample2.sampletests_f.Test){normal}
+    Traceback (most recent call last):
+    {normal}  File "{boldblue}unittest.py{normal}", line {boldred}260{normal}, in {boldcyan}run{normal}
+    {cyan}    testMethod(){normal}
+    {normal}  File "{boldblue}testrunner-ex/sample2/sampletests_f.py{normal}", line {boldred}21{normal}, in {boldcyan}test{normal}
+    {cyan}    self.assertEqual(1,0){normal}
+    {normal}  File "{boldblue}unittest.py{normal}", line {boldred}333{normal}, in {boldcyan}failUnlessEqual{normal}
+    {cyan}    raise self.failureException, \{normal}
+    {red}AssertionError: 1 != 0{normal}
+    <BLANKLINE>
+    {normal}  Ran {green}200{normal} tests with {boldred}3{normal} failures and {boldred}1{normal} errors in {green}0.045{normal} seconds.{normal}
+    {normal}Running samplelayers.Layer1 tests:{normal}
+      Set up samplelayers.Layer1 in {green}0.000{normal} seconds.
+    {normal}  Ran {green}9{normal} tests with {green}0{normal} failures and {green}0{normal} errors in {green}0.001{normal} seconds.{normal}
+    {normal}Running samplelayers.Layer11 tests:{normal}
+      Set up samplelayers.Layer11 in {green}0.000{normal} seconds.
+    {normal}  Ran {green}34{normal} tests with {green}0{normal} failures and {green}0{normal} errors in {green}0.007{normal} seconds.{normal}
+    {normal}Running samplelayers.Layer111 tests:{normal}
+      Set up samplelayers.Layerx in {green}0.000{normal} seconds.
+      Set up samplelayers.Layer111 in {green}0.000{normal} seconds.
+    {normal}  Ran {green}34{normal} tests with {green}0{normal} failures and {green}0{normal} errors in {green}0.008{normal} seconds.{normal}
+    {normal}Running samplelayers.Layer112 tests:{normal}
+      Tear down samplelayers.Layer111 in {green}0.000{normal} seconds.
+      Set up samplelayers.Layer112 in {green}0.000{normal} seconds.
+    {normal}  Ran {green}34{normal} tests with {green}0{normal} failures and {green}0{normal} errors in {green}0.008{normal} seconds.{normal}
+    {normal}Running samplelayers.Layer12 tests:{normal}
+      Tear down samplelayers.Layer112 in {green}0.000{normal} seconds.
+      Tear down samplelayers.Layerx in {green}0.000{normal} seconds.
+      Tear down samplelayers.Layer11 in {green}0.000{normal} seconds.
+      Set up samplelayers.Layer12 in {green}0.000{normal} seconds.
+    {normal}  Ran {green}34{normal} tests with {green}0{normal} failures and {green}0{normal} errors in {green}0.007{normal} seconds.{normal}
+    {normal}Running samplelayers.Layer121 tests:{normal}
+      Set up samplelayers.Layer121 in {green}0.000{normal} seconds.
+    {normal}  Ran {green}34{normal} tests with {green}0{normal} failures and {green}0{normal} errors in {green}0.007{normal} seconds.{normal}
+    {normal}Running samplelayers.Layer122 tests:{normal}
+      Tear down samplelayers.Layer121 in {green}0.000{normal} seconds.
+      Set up samplelayers.Layer122 in {green}0.000{normal} seconds.
+    {normal}  Ran {green}34{normal} tests with {green}0{normal} failures and {green}0{normal} errors in {green}0.008{normal} seconds.{normal}
+    {normal}Tearing down left over layers:{normal}
+      Tear down samplelayers.Layer122 in {green}0.000{normal} seconds.
+      Tear down samplelayers.Layer12 in {green}0.000{normal} seconds.
+      Tear down samplelayers.Layer1 in {green}0.000{normal} seconds.
+    {normal}Total: {green}413{normal} tests, {boldred}3{normal} failures, {boldred}1{normal} errors in {green}0.023{normal} seconds.{normal}
+    True
+
+
+Doctest failures
+----------------
+
+The expected and actual outputs of failed doctests are shown in different
+colors:
+
+    >>> sys.argv = 'test --tests-pattern ^pledge$ -c'.split()
+    >>> _ = testrunner.run(defaults)
+    {normal}Running unit tests:{normal}
+    <BLANKLINE>
+    <BLANKLINE>
+    {boldred}Failure in test pledge (pledge){normal}
+    Failed doctest test for pledge.pledge
+      File "testrunner-ex/pledge.py", line 24, in pledge
+    <BLANKLINE>
+    ----------------------------------------------------------------------
+    {normal}File testrunner-ex/pledge.py{normal}", line {boldred}26{normal}, in {boldcyan}pledge.pledge{normal}
+    Failed example:
+    {cyan}    print pledge_template % ('and earthling', 'planet'),{normal}
+    Expected:
+    {green}    I give my pledge, as an earthling,{normal}
+    {green}    to save, and faithfully, to defend from waste,{normal}
+    {green}    the natural resources of my planet.{normal}
+    {green}    It's soils, minerals, forests, waters, and wildlife.{normal}
+    Got:
+    {red}    I give my pledge, as and earthling,{normal}
+    {red}    to save, and faithfully, to defend from waste,{normal}
+    {red}    the natural resources of my planet.{normal}
+    {red}    It's soils, minerals, forests, waters, and wildlife.{normal}
+    <BLANKLINE>
+    {normal}  Ran {green}1{normal} tests with {boldred}1{normal} failures and {green}0{normal} errors in {green}0.002{normal} seconds.{normal}
+
+Diffs are highlighted so you can easily tell the context and the mismatches
+apart:
+
+    >>> sys.argv = 'test --tests-pattern ^pledge$ --ndiff -c'.split()
+    >>> _ = testrunner.run(defaults)
+    {normal}Running unit tests:{normal}
+    <BLANKLINE>
+    <BLANKLINE>
+    {boldred}Failure in test pledge (pledge){normal}
+    Failed doctest test for pledge.pledge
+      File "testrunner-ex/pledge.py", line 24, in pledge
+    <BLANKLINE>
+    ----------------------------------------------------------------------
+    {normal}File testrunner-ex/pledge.py{normal}", line {boldred}26{normal}, in {boldcyan}pledge.pledge{normal}
+    Failed example:
+    {cyan}    print pledge_template % ('and earthling', 'planet'),{normal}
+    Differences (ndiff with -expected +actual):
+    {green}    - I give my pledge, as an earthling,{normal}
+    {red}    + I give my pledge, as and earthling,{normal}
+    {magenta}    ?                        +{normal}
+    {normal}      to save, and faithfully, to defend from waste,{normal}
+    {normal}      the natural resources of my planet.{normal}
+    {normal}      It's soils, minerals, forests, waters, and wildlife.{normal}
+    <BLANKLINE>
+    {normal}  Ran {green}1{normal} tests with {boldred}1{normal} failures and {green}0{normal} errors in {green}0.003{normal} seconds.{normal}
+
+
+Timing individual tests
+-----------------------
+
+At very high verbosity levels, you can see the time taken by each test:
+
+    >>> sys.argv = 'test -u -t test_one.TestNotMuch -c -vvv'.split()
+    >>> testrunner.run(defaults)
+    {normal}Running tests at level 1{normal}
+    {normal}Running unit tests:{normal}
+    {normal}  Running:{normal}
+     test_1 (sample1.sampletests.test_one.TestNotMuch) ({green}N.NNN s{normal})
+     test_2 (sample1.sampletests.test_one.TestNotMuch) ({green}N.NNN s{normal})
+     test_3 (sample1.sampletests.test_one.TestNotMuch) ({green}N.NNN s{normal})
+     test_1 (sampletests.test_one.TestNotMuch) ({green}N.NNN s{normal})
+     test_2 (sampletests.test_one.TestNotMuch) ({green}N.NNN s{normal})
+     test_3 (sampletests.test_one.TestNotMuch) ({green}N.NNN s{normal})
+    {normal}  Ran {green}6{normal} tests with {green}0{normal} failures and {green}0{normal} errors in {green}N.NNN{normal} seconds.{normal}
+    False
+
+If we had very slow tests, we would see their times highlighted in a
+different color. Instead of creating a test that actually takes ten seconds,
+let's lower the slow-test threshold in the test runner to 0 seconds so that
+all of the tests count as slow.
+
+    >>> sys.argv = 'test -u -t test_one.TestNotMuch -c -vvv --slow-test 0'.split()
+    >>> testrunner.run(defaults)
+    {normal}Running tests at level 1{normal}
+    {normal}Running unit tests:{normal}
+    {normal}  Running:{normal}
+     test_1 (sample1.sampletests.test_one.TestNotMuch) ({boldmagenta}N.NNN s{normal})
+     test_2 (sample1.sampletests.test_one.TestNotMuch) ({boldmagenta}N.NNN s{normal})
+     test_3 (sample1.sampletests.test_one.TestNotMuch) ({boldmagenta}N.NNN s{normal})
+     test_1 (sampletests.test_one.TestNotMuch) ({boldmagenta}N.NNN s{normal})
+     test_2 (sampletests.test_one.TestNotMuch) ({boldmagenta}N.NNN s{normal})
+     test_3 (sampletests.test_one.TestNotMuch) ({boldmagenta}N.NNN s{normal})
+    {normal}  Ran {green}6{normal} tests with {green}0{normal} failures and {green}0{normal} errors in {green}N.NNN{normal} seconds.{normal}
+    False
+
+
+Disabling colors
+----------------
+
+If -c or --color has been provided earlier on the command line (perhaps by a
+test runner wrapper script), but no colorized output is desired, the -C or
+--no-color option will disable colorized output:
+
+    >>> sys.argv = 'test --layer 122 -c -C'.split()
+    >>> testrunner.run(defaults)
+    Running samplelayers.Layer122 tests:
+      Set up samplelayers.Layer1 in 0.000 seconds.
+      Set up samplelayers.Layer12 in 0.000 seconds.
+      Set up samplelayers.Layer122 in 0.000 seconds.
+      Ran 34 tests with 0 failures and 0 errors in 0.007 seconds.
+    Tearing down left over layers:
+      Tear down samplelayers.Layer122 in 0.000 seconds.
+      Tear down samplelayers.Layer12 in 0.000 seconds.
+      Tear down samplelayers.Layer1 in 0.000 seconds.
+    False
+
+    >>> sys.argv = 'test --layer 122 -c --no-color'.split()
+    >>> testrunner.run(defaults)
+    Running samplelayers.Layer122 tests:
+      Set up samplelayers.Layer1 in 0.000 seconds.
+      Set up samplelayers.Layer12 in 0.000 seconds.
+      Set up samplelayers.Layer122 in 0.000 seconds.
+      Ran 34 tests with 0 failures and 0 errors in 0.007 seconds.
+    Tearing down left over layers:
+      Tear down samplelayers.Layer122 in 0.000 seconds.
+      Tear down samplelayers.Layer12 in 0.000 seconds.
+      Tear down samplelayers.Layer1 in 0.000 seconds.
+    False
+
+
+Autodetecting colors
+--------------------
+
+The --auto-color option will determine if stdout is a terminal that supports
+colors, and only enable colorized output if so.  Our ``Terminal`` wrapper
+pretends it is a terminal, but the curses module will realize it isn't:
+
+    >>> sys.argv = 'test --layer 122 --auto-color'.split()
+    >>> testrunner.run(defaults)
+    Running samplelayers.Layer122 tests:
+      Set up samplelayers.Layer1 in 0.000 seconds.
+      Set up samplelayers.Layer12 in 0.000 seconds.
+      Set up samplelayers.Layer122 in 0.000 seconds.
+      Ran 34 tests with 0 failures and 0 errors in 0.007 seconds.
+    Tearing down left over layers:
+      Tear down samplelayers.Layer122 in 0.000 seconds.
+      Tear down samplelayers.Layer12 in 0.000 seconds.
+      Tear down samplelayers.Layer1 in 0.000 seconds.
+    False
+
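+The kind of check --auto-color performs can be sketched roughly as follows
+(an illustrative sketch only, not the actual testrunner code; the
+``supports_color`` helper is hypothetical):
+
+    def supports_color(stream):
+        # The stream must be a real terminal ...
+        if not hasattr(stream, 'isatty') or not stream.isatty():
+            return False
+        # ... and curses must report that the terminal supports colors.
+        try:
+            import curses
+            curses.setupterm()
+            return curses.tigetnum('colors') > 0
+        except Exception:
+            return False
+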
+We can fake a color-capable terminal by installing a stub ``curses`` module:
+
+    >>> class FakeCurses(object):
+    ...     class error(Exception):
+    ...         pass
+    ...     def setupterm(self):
+    ...         pass
+    ...     def tigetnum(self, attr):
+    ...         return dict(colors=8).get(attr, -2)
+    >>> sys.modules['curses'] = FakeCurses()
+
+    >>> sys.argv = 'test --layer 122 --auto-color'.split()
+    >>> testrunner.run(defaults)
+    {normal}Running samplelayers.Layer122 tests:{normal}
+      Set up samplelayers.Layer1 in {green}0.000{normal} seconds.
+      Set up samplelayers.Layer12 in {green}0.000{normal} seconds.
+      Set up samplelayers.Layer122 in {green}0.000{normal} seconds.
+    {normal}  Ran {green}34{normal} tests with {green}0{normal} failures and {green}0{normal} errors in {green}0.007{normal} seconds.{normal}
+    {normal}Tearing down left over layers:{normal}
+      Tear down samplelayers.Layer122 in {green}0.000{normal} seconds.
+      Tear down samplelayers.Layer12 in {green}0.000{normal} seconds.
+      Tear down samplelayers.Layer1 in {green}0.000{normal} seconds.
+    False
+
+    >>> del sys.modules['curses']
+
+The real stdout is not a terminal in a doctest:
+
+    >>> sys.stdout = real_stdout
+
+    >>> sys.argv = 'test --layer 122 --auto-color'.split()
+    >>> testrunner.run(defaults)
+    Running samplelayers.Layer122 tests:
+      Set up samplelayers.Layer1 in 0.000 seconds.
+      Set up samplelayers.Layer12 in 0.000 seconds.
+      Set up samplelayers.Layer122 in 0.000 seconds.
+      Ran 34 tests with 0 failures and 0 errors in 0.007 seconds.
+    Tearing down left over layers:
+      Tear down samplelayers.Layer122 in 0.000 seconds.
+      Tear down samplelayers.Layer12 in 0.000 seconds.
+      Tear down samplelayers.Layer1 in 0.000 seconds.
+    False

Copied: zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-coverage-win32.txt (from rev 86204, zope.testing/trunk/src/zope/testing/testrunner-coverage-win32.txt)
===================================================================
--- zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-coverage-win32.txt	                        (rev 0)
+++ zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-coverage-win32.txt	2008-05-03 13:25:02 UTC (rev 86207)
@@ -0,0 +1,31 @@
+Code Coverage
+=============
+
+On Windows, drive names can appear in either upper or lower case, and both
+forms can be passed to TestIgnore.names.
+Watch out for the case of the R drive!
+
+  >>> class WinOptions(object):
+  ...   package = None
+  ...   test_path = [('r:\\winproject\\src\\blah\\foo', ''),
+  ...                ('R:\\winproject\\src\\blah\\bar', '')]
+
+  >>> from zope.testing import testrunner
+  >>> ignore = testrunner.TestIgnore(WinOptions())
+  >>> ignore._test_dirs
+  ['r:\\winproject\\src\\blah\\foo\\', 'R:\\winproject\\src\\blah\\bar\\']
+
+We can now ask whether a particular module should be ignored:
+
+  >>> ignore.names('r:\\winproject\\src\\blah\\foo\\baz.py', 'baz')
+  False
+  >>> ignore.names('R:\\winproject\\src\\blah\\foo\\baz.py', 'baz')
+  False
+  >>> ignore.names('r:\\winproject\\src\\blah\\bar\\zab.py', 'zab')
+  False
+  >>> ignore.names('R:\\winproject\\src\\blah\\bar\\zab.py', 'zab')
+  False
+  >>> ignore.names('r:\\winproject\\src\\blah\\hello.py', 'hello')
+  True
+  >>> ignore.names('R:\\winproject\\src\\blah\\hello.py', 'hello')
+  True
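+
+Evidently both spellings of the drive letter are treated as equal.  A
+minimal sketch of the kind of case normalization assumed here (illustrative
+only; ``covers`` is a hypothetical helper, not the actual TestIgnore code):
+
+    import os.path
+
+    def covers(filename, test_dir):
+        # os.path.normcase lower-cases the whole path on Windows, so
+        # 'r:\\winproject\\...' and 'R:\\winproject\\...' compare equal;
+        # on other platforms it leaves the path unchanged.
+        return os.path.normcase(filename).startswith(
+            os.path.normcase(test_dir))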

Copied: zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-coverage.txt (from rev 86204, zope.testing/trunk/src/zope/testing/testrunner-coverage.txt)
===================================================================
--- zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-coverage.txt	                        (rev 0)
+++ zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-coverage.txt	2008-05-03 13:25:02 UTC (rev 86207)
@@ -0,0 +1,126 @@
+Code Coverage
+=============
+
+If the --coverage option is used, test coverage reports will be generated.  The
+directory name given as the parameter will be used to hold the reports.
+
+
+    >>> import os.path, sys
+    >>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
+    >>> defaults = [
+    ...     '--path', directory_with_tests,
+    ...     '--tests-pattern', '^sampletestsf?$',
+    ...     ]
+
+    >>> sys.argv = 'test --coverage=coverage_dir'.split()
+
+    >>> from zope.testing import testrunner
+    >>> testrunner.run(defaults)
+    Running unit tests:
+      Ran 192 tests with 0 failures and 0 errors in 0.125 seconds.
+    Running samplelayers.Layer1 tests:
+      Set up samplelayers.Layer1 in 0.000 seconds.
+      Ran 9 tests with 0 failures and 0 errors in 0.003 seconds.
+    Running samplelayers.Layer11 tests:
+      Set up samplelayers.Layer11 in 0.000 seconds.
+      Ran 34 tests with 0 failures and 0 errors in 0.029 seconds.
+    Running samplelayers.Layer111 tests:
+      Set up samplelayers.Layerx in 0.000 seconds.
+      Set up samplelayers.Layer111 in 0.000 seconds.
+      Ran 34 tests with 0 failures and 0 errors in 0.024 seconds.
+    Running samplelayers.Layer112 tests:
+      Tear down samplelayers.Layer111 in 0.000 seconds.
+      Set up samplelayers.Layer112 in 0.000 seconds.
+      Ran 34 tests with 0 failures and 0 errors in 0.024 seconds.
+    Running samplelayers.Layer12 tests:
+      Tear down samplelayers.Layer112 in 0.000 seconds.
+      Tear down samplelayers.Layerx in 0.000 seconds.
+      Tear down samplelayers.Layer11 in 0.000 seconds.
+      Set up samplelayers.Layer12 in 0.000 seconds.
+      Ran 34 tests with 0 failures and 0 errors in 0.026 seconds.
+    Running samplelayers.Layer121 tests:
+      Set up samplelayers.Layer121 in 0.000 seconds.
+      Ran 34 tests with 0 failures and 0 errors in 0.025 seconds.
+    Running samplelayers.Layer122 tests:
+      Tear down samplelayers.Layer121 in 0.000 seconds.
+      Set up samplelayers.Layer122 in 0.000 seconds.
+      Ran 34 tests with 0 failures and 0 errors in 0.025 seconds.
+    Tearing down left over layers:
+      Tear down samplelayers.Layer122 in 0.000 seconds.
+      Tear down samplelayers.Layer12 in 0.000 seconds.
+      Tear down samplelayers.Layer1 in 0.000 seconds.
+    Total: 405 tests, 0 failures, 0 errors in N.NNN seconds.
+    lines   cov%   module   (path)
+    ...
+       52    92%   sample1.sampletests.test1   (testrunner-ex/sample1/sampletests/test1.py)
+       78    94%   sample1.sampletests.test11   (testrunner-ex/sample1/sampletests/test11.py)
+       78    94%   sample1.sampletests.test111   (testrunner-ex/sample1/sampletests/test111.py)
+       78    94%   sample1.sampletests.test112   (testrunner-ex/sample1/sampletests/test112.py)
+       78    94%   sample1.sampletests.test12   (testrunner-ex/sample1/sampletests/test12.py)
+       78    94%   sample1.sampletests.test121   (testrunner-ex/sample1/sampletests/test121.py)
+       78    94%   sample1.sampletests.test122   (testrunner-ex/sample1/sampletests/test122.py)
+    ...
+    False
+
+The directory specified with the --coverage option will have been created and
+will hold the coverage reports.
+
+    >>> os.path.exists('coverage_dir')
+    True
+    >>> os.listdir('coverage_dir')
+    [...]
+
+(We should clean up after ourselves.)
+
+    >>> import shutil
+    >>> shutil.rmtree('coverage_dir')
+
+
+Ignoring Tests
+--------------
+
+The ``trace`` module supports ignoring directories and modules based on the
+test selection: only directories selected for testing should report
+coverage. The test runner provides a custom implementation of the relevant
+API.
+
+The ``TestIgnore`` class, which manages the ignoring, is initialized with
+the command line options. It uses the options to determine the directories
+that should be covered.
+
+  >>> class FauxOptions(object):
+  ...   package = None
+  ...   test_path = [('/myproject/src/blah/foo', ''),
+  ...                ('/myproject/src/blah/bar', '')]
+
+  >>> from zope.testing import testrunner
+  >>> ignore = testrunner.TestIgnore(FauxOptions())
+  >>> ignore._test_dirs
+  ['/myproject/src/blah/foo/', '/myproject/src/blah/bar/']
+
+We can now ask whether a particular module should be ignored:
+
+  >>> ignore.names('/myproject/src/blah/foo/baz.py', 'baz')
+  False
+  >>> ignore.names('/myproject/src/blah/bar/mine.py', 'mine')
+  False
+  >>> ignore.names('/myproject/src/blah/foo/__init__.py', 'foo')
+  False
+  >>> ignore.names('/myproject/src/blah/hello.py', 'hello')
+  True
+
+During a test run, modules are sometimes created from text strings. Those
+should *always* be ignored:
+
+  >>> ignore.names('/myproject/src/blah/hello.txt', '<string>')
+  True
+
+To make this check fast, the class implements a cache. In an early
+implementation the result was cached by the module name, which was a
+problem, since many modules share the same (plain, not dotted) name. So a
+module in a tested directory is not ignored just because a module of the
+same name in an ignored directory was ignored first (a sketch of such a
+cache follows the examples):
+
+  >>> ignore.names('/myproject/src/blah/module.py', 'module')
+  True
+  >>> ignore.names('/myproject/src/blah/foo/module.py', 'module')
+  False
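+
+A minimal sketch of a path-keyed cache of the kind described above (an
+illustration only; the class and attribute names are hypothetical and not
+taken from zope.testing):
+
+    class CachingIgnore(object):
+
+        def __init__(self, test_dirs):
+            self._test_dirs = test_dirs
+            self._cache = {}  # keyed by file path, not by module name
+
+        def names(self, filename, modulename):
+            if modulename == '<string>':
+                # Modules compiled from text strings are always ignored.
+                return True
+            if filename in self._cache:
+                return self._cache[filename]
+            ignored = True
+            for test_dir in self._test_dirs:
+                if filename.startswith(test_dir):
+                    ignored = False
+                    break
+            self._cache[filename] = ignored
+            return ignored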

Copied: zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-debugging-layer-setup.test (from rev 86204, zope.testing/trunk/src/zope/testing/testrunner-debugging-layer-setup.test)
===================================================================
--- zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-debugging-layer-setup.test	                        (rev 0)
+++ zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-debugging-layer-setup.test	2008-05-03 13:25:02 UTC (rev 86207)
@@ -0,0 +1,133 @@
+Post-mortem debugging also works when there is a failure in layer
+setup.
+
+    >>> import os, shutil, sys, tempfile
+    >>> tdir = tempfile.mkdtemp()
+    >>> dir = os.path.join(tdir, 'TESTS-DIR')
+    >>> os.mkdir(dir)
+    >>> open(os.path.join(dir, 'tests.py'), 'w').write(
+    ... '''
+    ... import doctest
+    ...
+    ... class Layer:
+    ...     @classmethod
+    ...     def setUp(self):
+    ...         x = 1
+    ...         raise ValueError
+    ...     
+    ... def a_test():
+    ...     """
+    ...     >>> None
+    ...     """
+    ... def test_suite():
+    ...     suite = doctest.DocTestSuite()
+    ...     suite.layer = Layer
+    ...     return suite
+    ... 
+    ... ''')
+    
+    >>> class Input:
+    ...     def __init__(self, src):
+    ...         self.lines = src.split('\n')
+    ...     def readline(self):
+    ...         line = self.lines.pop(0)
+    ...         print line
+    ...         return line+'\n'
+
+    >>> real_stdin = sys.stdin
+    >>> if sys.version_info[:2] == (2, 3):
+    ...     sys.stdin = Input('n\np x\nc')
+    ... else:
+    ...     sys.stdin = Input('p x\nc')
+
+    >>> sys.argv = [testrunner_script]
+    >>> import zope.testing.testrunner
+    >>> try:
+    ...     zope.testing.testrunner.run(['--path', dir, '-D'])
+    ... finally: sys.stdin = real_stdin
+    ... # doctest: +ELLIPSIS
+    Running tests.Layer tests:
+      Set up tests.Layer exceptions.ValueError:
+    <BLANKLINE>
+    > ...tests.py(8)setUp()
+    -> raise ValueError
+    (Pdb) p x
+    1
+    (Pdb) c
+    True
+
+Note that post-mortem debugging doesn't work when the layer is run in
+a subprocess:
+
+    >>> if sys.version_info[:2] == (2, 3):
+    ...     sys.stdin = Input('n\np x\nc')
+    ... else:
+    ...     sys.stdin = Input('p x\nc')
+
+    >>> open(os.path.join(dir, 'tests2.py'), 'w').write(
+    ... '''
+    ... import doctest, unittest
+    ...
+    ... class Layer1:
+    ...     @classmethod
+    ...     def setUp(self):
+    ...         pass
+    ...
+    ...     @classmethod
+    ...     def tearDown(self):
+    ...         raise NotImplementedError
+    ...
+    ... class Layer2:
+    ...     @classmethod
+    ...     def setUp(self):
+    ...         x = 1
+    ...         raise ValueError
+    ...     
+    ... def a_test():
+    ...     """
+    ...     >>> None
+    ...     """
+    ... def test_suite():
+    ...     suite1 = doctest.DocTestSuite()
+    ...     suite1.layer = Layer1
+    ...     suite2 = doctest.DocTestSuite()
+    ...     suite2.layer = Layer2
+    ...     return unittest.TestSuite((suite1, suite2))
+    ... 
+    ... ''')
+
+    >>> try:
+    ...     zope.testing.testrunner.run(
+    ...       ['--path', dir, '-Dvv', '--tests-pattern', 'tests2'])
+    ... finally: sys.stdin = real_stdin
+    ... # doctest: +ELLIPSIS
+    Running tests at level 1
+    Running tests2.Layer1 tests:
+      Set up tests2.Layer1 in 0.000 seconds.
+      Running:
+     a_test (tests2)
+      Ran 1 tests with 0 failures and 0 errors in 0.001 seconds.
+    Running tests2.Layer2 tests:
+      Tear down tests2.Layer1 ... not supported
+      Running in a subprocess.
+      Set up tests2.Layer2
+    **********************************************************************
+    <BLANKLINE>
+    Can't post-mortem debug when running a layer as a subprocess!
+    Try running layer 'tests2.Layer2' by itself.
+    <BLANKLINE>
+    **********************************************************************
+    <BLANKLINE>
+    Traceback (most recent call last):
+    ...
+        raise ValueError
+    ValueError
+    <BLANKLINE>
+    <BLANKLINE>
+    Tests with errors:
+       runTest (__main__.SetUpLayerFailure)
+    Total: 1 tests, 0 failures, 1 errors in 0.210 seconds.
+    True
+
+    >>> shutil.rmtree(tdir)
+

Copied: zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-debugging.txt (from rev 86204, zope.testing/trunk/src/zope/testing/testrunner-debugging.txt)
===================================================================
--- zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-debugging.txt	                        (rev 0)
+++ zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-debugging.txt	2008-05-03 13:25:02 UTC (rev 86207)
@@ -0,0 +1,118 @@
+Debugging
+=========
+
+The testrunner module supports post-mortem debugging and debugging
+using `pdb.set_trace`.  Let's look first at using `pdb.set_trace`.
+To demonstrate this, we'll provide input via helper Input objects:
+
+    >>> class Input:
+    ...     def __init__(self, src):
+    ...         self.lines = src.split('\n')
+    ...     def readline(self):
+    ...         line = self.lines.pop(0)
+    ...         print line
+    ...         return line+'\n'
+
+If a test or code called by a test calls pdb.set_trace, then the
+runner will enter pdb at that point:
+
+    >>> import os.path, sys
+    >>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
+    >>> from zope.testing import testrunner
+    >>> defaults = [
+    ...     '--path', directory_with_tests,
+    ...     '--tests-pattern', '^sampletestsf?$',
+    ...     ]
+
+    >>> real_stdin = sys.stdin
+    >>> if sys.version_info[:2] == (2, 3):
+    ...     sys.stdin = Input('n\np x\nc')
+    ... else:
+    ...     sys.stdin = Input('p x\nc')
+
+    >>> sys.argv = ('test -ssample3 --tests-pattern ^sampletests_d$'
+    ...             ' -t set_trace1').split()
+    >>> try: testrunner.run(defaults)
+    ... finally: sys.stdin = real_stdin
+    ... # doctest: +ELLIPSIS
+    Running unit tests:...
+    > testrunner-ex/sample3/sampletests_d.py(27)test_set_trace1()
+    -> y = x
+    (Pdb) p x
+    1
+    (Pdb) c
+      Ran 1 tests with 0 failures and 0 errors in 0.001 seconds.
+    False
+
+Note that, prior to Python 2.4, calling pdb.set_trace caused pdb to
+break in the pdb.set_trace function.  It was necessary to use 'next'
+or 'up' to get to the application code that called pdb.set_trace.  In
+Python 2.4, pdb.set_trace causes pdb to stop right after the call to
+pdb.set_trace.
+
+You can also do post-mortem debugging, using the --post-mortem (-D)
+option:
+
+    >>> sys.stdin = Input('p x\nc')
+    >>> sys.argv = ('test -ssample3 --tests-pattern ^sampletests_d$'
+    ...             ' -t post_mortem1 -D').split()
+    >>> try: testrunner.run(defaults)
+    ... finally: sys.stdin = real_stdin
+    ... # doctest: +NORMALIZE_WHITESPACE +REPORT_NDIFF
+    Running unit tests:
+    <BLANKLINE>
+    <BLANKLINE>
+    Error in test test_post_mortem1 (sample3.sampletests_d.TestSomething)
+    Traceback (most recent call last):
+      File "testrunner-ex/sample3/sampletests_d.py",
+              line 34, in test_post_mortem1
+        raise ValueError
+    ValueError
+    <BLANKLINE>
+    exceptions.ValueError:
+    <BLANKLINE>
+    > testrunner-ex/sample3/sampletests_d.py(34)test_post_mortem1()
+    -> raise ValueError
+    (Pdb) p x
+    1
+    (Pdb) c
+    True
+
+Note that the test runner exits after post-mortem debugging.
+
+In the example above, we debugged an error.  Failures are actually
+converted to errors and can be debugged the same way:
+
+    >>> sys.stdin = Input('up\np x\np y\nc')
+    >>> sys.argv = ('test -ssample3 --tests-pattern ^sampletests_d$'
+    ...             ' -t post_mortem_failure1 -D').split()
+    >>> try: testrunner.run(defaults)
+    ... finally: sys.stdin = real_stdin
+    ... # doctest: +NORMALIZE_WHITESPACE +REPORT_NDIFF
+    Running unit tests:
+    <BLANKLINE>
+    <BLANKLINE>
+    Error in test test_post_mortem_failure1 (sample3.sampletests_d.TestSomething)
+    Traceback (most recent call last):
+      File ".../unittest.py",  line 252, in debug
+        getattr(self, self.__testMethodName)()
+      File "testrunner-ex/sample3/sampletests_d.py",
+        line 42, in test_post_mortem_failure1
+        self.assertEqual(x, y)
+      File ".../unittest.py", line 302, in failUnlessEqual
+        raise self.failureException, \
+    AssertionError: 1 != 2
+    <BLANKLINE>
+    exceptions.AssertionError:
+    1 != 2
+    > .../unittest.py(302)failUnlessEqual()
+    -> ...
+    (Pdb) up
+    > testrunner-ex/sample3/sampletests_d.py(42)test_post_mortem_failure1()
+    -> self.assertEqual(x, y)
+    (Pdb) p x
+    1
+    (Pdb) p y
+    2
+    (Pdb) c
+    True

Copied: zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-edge-cases.txt (from rev 86204, zope.testing/trunk/src/zope/testing/testrunner-edge-cases.txt)
===================================================================
--- zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-edge-cases.txt	                        (rev 0)
+++ zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-edge-cases.txt	2008-05-03 13:25:02 UTC (rev 86207)
@@ -0,0 +1,476 @@
+testrunner Edge Cases
+=====================
+
+This document has some edge-case examples to test various aspects of
+the test runner.
+
+Separating Python path and test directories
+-------------------------------------------
+
+The --path option defines a directory to be searched for tests *and* a
+directory to be added to Python's search path.  The --test-path option
+can be used when you want to set a test search path without also
+affecting the Python path:
+
+    >>> import os, sys
+    >>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
+
+    >>> from zope.testing import testrunner
+
+    >>> defaults = [
+    ...     '--test-path', directory_with_tests,
+    ...     '--tests-pattern', '^sampletestsf?$',
+    ...     ]
+    >>> sys.argv = ['test']
+    >>> testrunner.run(defaults)
+    ... # doctest: +ELLIPSIS
+    Test-module import failures:
+    <BLANKLINE>
+    Module: sampletestsf
+    <BLANKLINE>
+    ImportError: No module named sampletestsf
+    ...
+
+    >>> sys.path.append(directory_with_tests)
+    >>> sys.argv = ['test']
+    >>> testrunner.run(defaults)
+    ... # doctest: +ELLIPSIS
+    Running unit tests:
+      Ran 192 tests with 0 failures and 0 errors in 0.028 seconds.
+    Running samplelayers.Layer1 tests:
+      Set up samplelayers.Layer1 in 0.000 seconds.
+      Ran 9 tests with 0 failures and 0 errors in 0.000 seconds.
+    ...
+
+Debugging Edge Cases
+--------------------
+
+    >>> class Input:
+    ...     def __init__(self, src):
+    ...         self.lines = src.split('\n')
+    ...     def readline(self):
+    ...         line = self.lines.pop(0)
+    ...         print line
+    ...         return line+'\n'
+
+    >>> real_stdin = sys.stdin
+
+Using pdb.set_trace in a function called by an ordinary test:
+
+    >>> if sys.version_info[:2] == (2, 3):
+    ...     sys.stdin = Input('n\np x\nc')
+    ... else:
+    ...     sys.stdin = Input('p x\nc')
+    >>> sys.argv = ('test -ssample3 --tests-pattern ^sampletests_d$'
+    ...             ' -t set_trace2').split()
+    >>> try: testrunner.run(defaults)
+    ... finally: sys.stdin = real_stdin
+    ... # doctest: +ELLIPSIS
+    Running unit tests:...
+    > testrunner-ex/sample3/sampletests_d.py(47)f()
+    -> y = x
+    (Pdb) p x
+    1
+    (Pdb) c
+      Ran 1 tests with 0 failures and 0 errors in 0.001 seconds.
+    False
+
+Using pdb.set_trace in a function called by a doctest in a doc string:
+
+    >>> sys.stdin = Input('n\np x\nc')
+    >>> sys.argv = ('test -ssample3 --tests-pattern ^sampletests_d$'
+    ...             ' -t set_trace4').split()
+    >>> try: testrunner.run(defaults)
+    ... finally: sys.stdin = real_stdin
+    Running unit tests:
+    --Return--
+    > doctest.py(351)set_trace()->None
+    -> pdb.Pdb.set_trace(self)
+    (Pdb) n
+    > testrunner-ex/sample3/sampletests_d.py(42)f()
+    -> y = x
+    (Pdb) p x
+    1
+    (Pdb) c
+      Ran 1 tests with 0 failures and 0 errors in 0.002 seconds.
+    False
+
+Using pdb.set_trace in a docstring-based doctest:
+
+    >>> sys.stdin = Input('n\np x\nc')
+    >>> sys.argv = ('test -ssample3 --tests-pattern ^sampletests_d$'
+    ...             ' -t set_trace3').split()
+    >>> try: testrunner.run(defaults)
+    ... finally: sys.stdin = real_stdin
+    Running unit tests:
+    --Return--
+    > doctest.py(351)set_trace()->None
+    -> pdb.Pdb.set_trace(self)
+    (Pdb) n
+    > <doctest sample3.sampletests_d.set_trace3[1]>(3)...()
+    -> y = x
+    (Pdb) p x
+    1
+    (Pdb) c
+      Ran 1 tests with 0 failures and 0 errors in 0.002 seconds.
+    False
+
+Using pdb.set_trace in a doc file:
+
+
+    >>> sys.stdin = Input('n\np x\nc')
+    >>> sys.argv = ('test -ssample3 --tests-pattern ^sampletests_d$'
+    ...             ' -t set_trace5').split()
+    >>> try: testrunner.run(defaults)
+    ... finally: sys.stdin = real_stdin
+    Running unit tests:
+    --Return--
+    > doctest.py(351)set_trace()->None
+    -> pdb.Pdb.set_trace(self)
+    (Pdb) n
+    > <doctest set_trace5.txt[1]>(3)...()
+    -> y = x
+    (Pdb) p x
+    1
+    (Pdb) c
+      Ran 1 tests with 0 failures and 0 errors in 0.002 seconds.
+    False
+
+
+Using pdb.set_trace in a function called by a doctest in a doc file:
+
+
+    >>> sys.stdin = Input('n\np x\nc')
+    >>> sys.argv = ('test -ssample3 --tests-pattern ^sampletests_d$'
+    ...             ' -t set_trace6').split()
+    >>> try: testrunner.run(defaults)
+    ... finally: sys.stdin = real_stdin
+    Running unit tests:
+    --Return--
+    > doctest.py(351)set_trace()->None
+    -> pdb.Pdb.set_trace(self)
+    (Pdb) n
+    > testrunner-ex/sample3/sampletests_d.py(42)f()
+    -> y = x
+    (Pdb) p x
+    1
+    (Pdb) c
+      Ran 1 tests with 0 failures and 0 errors in 0.002 seconds.
+    False
+
+Post-mortem debugging function called from ordinary test:
+
+    >>> sys.stdin = Input('p x\nc')
+    >>> sys.argv = ('test -ssample3 --tests-pattern ^sampletests_d$'
+    ...             ' -t post_mortem2 -D').split()
+    >>> try: testrunner.run(defaults)
+    ... finally: sys.stdin = real_stdin
+    ... # doctest: +NORMALIZE_WHITESPACE
+    Running unit tests:
+    <BLANKLINE>
+    <BLANKLINE>
+    Error in test test_post_mortem2 (sample3.sampletests_d.TestSomething)
+    Traceback (most recent call last):
+      File "testrunner-ex/sample3/sampletests_d.py",
+           line 37, in test_post_mortem2
+        g()
+      File "testrunner-ex/sample3/sampletests_d.py", line 46, in g
+        raise ValueError
+    ValueError
+    <BLANKLINE>
+    exceptions.ValueError:
+    <BLANKLINE>
+    > testrunner-ex/sample3/sampletests_d.py(46)g()
+    -> raise ValueError
+    (Pdb) p x
+    1
+    (Pdb) c
+    True
+
+
+Post-mortem debugging docstring-based doctest:
+
+    >>> sys.stdin = Input('p x\nc')
+    >>> sys.argv = ('test -ssample3 --tests-pattern ^sampletests_d$'
+    ...             ' -t post_mortem3 -D').split()
+    >>> try: testrunner.run(defaults)
+    ... finally: sys.stdin = real_stdin
+    ... # doctest: +NORMALIZE_WHITESPACE
+    Running unit tests:
+    <BLANKLINE>
+    <BLANKLINE>
+    Error in test post_mortem3 (sample3.sampletests_d)
+    Traceback (most recent call last):
+      File "zope/testing/doctest.py", line 2276, in debug
+        runner.run(self._dt_test)
+      File "zope/testing/doctest.py", line 1731, in run
+        r = DocTestRunner.run(self, test, compileflags, out, False)
+      File "zope/testing/doctest.py", line 1389, in run
+        return self.__run(test, compileflags, out)
+      File "zope/testing/doctest.py", line 1310, in __run
+        exc_info)
+      File "zope/testing/doctest.py", line 1737, in report_unexpected_exception
+        raise UnexpectedException(test, example, exc_info)
+    UnexpectedException:
+       from testrunner-ex/sample3/sampletests_d.py:61 (2 examples)>
+    <BLANKLINE>
+    exceptions.ValueError:
+    <BLANKLINE>
+    > <doctest sample3.sampletests_d.post_mortem3[1]>(1)...()
+    (Pdb) p x
+    1
+    (Pdb) c
+    True
+
+Post-mortem debugging function called from docstring-based doctest:
+
+    >>> sys.stdin = Input('p x\nc')
+    >>> sys.argv = ('test -ssample3 --tests-pattern ^sampletests_d$'
+    ...             ' -t post_mortem4 -D').split()
+    >>> try: testrunner.run(defaults)
+    ... finally: sys.stdin = real_stdin
+    ... # doctest: +NORMALIZE_WHITESPACE
+    Running unit tests:
+    <BLANKLINE>
+    <BLANKLINE>
+    Error in test post_mortem4 (sample3.sampletests_d)
+    Traceback (most recent call last):
+      File "zope/testing/doctest.py", line 2276, in debug
+        runner.run(self._dt_test)
+      File "zope/testing/doctest.py", line 1731, in run
+        r = DocTestRunner.run(self, test, compileflags, out, False)
+      File "zope/testing/doctest.py", line 1389, in run
+        return self.__run(test, compileflags, out)
+      File "zope/testing/doctest.py", line 1310, in __run
+        exc_info)
+      File "zope/testing/doctest.py", line 1737, in report_unexpected_exception
+        raise UnexpectedException(test, example, exc_info)
+    UnexpectedException: testrunner-ex/sample3/sampletests_d.py:67 (1 example)>
+    <BLANKLINE>
+    exceptions.ValueError:
+    <BLANKLINE>
+    > testrunner-ex/sample3/sampletests_d.py(46)g()
+    -> raise ValueError
+    (Pdb) p x
+    1
+    (Pdb) c
+    True
+
+Post-mortem debugging file-based doctest:
+
+    >>> sys.stdin = Input('p x\nc')
+    >>> sys.argv = ('test -ssample3 --tests-pattern ^sampletests_d$'
+    ...             ' -t post_mortem5 -D').split()
+    >>> try: testrunner.run(defaults)
+    ... finally: sys.stdin = real_stdin
+    ... # doctest: +NORMALIZE_WHITESPACE
+    Running unit tests:
+    <BLANKLINE>
+    <BLANKLINE>
+    Error in test zope/testing/testrunner-ex/sample3/post_mortem5.txt
+    Traceback (most recent call last):
+      File "zope/testing/doctest.py", line 2276, in debug
+        runner.run(self._dt_test)
+      File "zope/testing/doctest.py", line 1731, in run
+        r = DocTestRunner.run(self, test, compileflags, out, False)
+      File "zope/testing/doctest.py", line 1389, in run
+        return self.__run(test, compileflags, out)
+      File "zope/testing/doctest.py", line 1310, in __run
+        exc_info)
+      File "zope/testing/doctest.py", line 1737, in report_unexpected_exception
+        raise UnexpectedException(test, example, exc_info)
+    UnexpectedException: testrunner-ex/sample3/post_mortem5.txt:0 (2 examples)>
+    <BLANKLINE>
+    exceptions.ValueError:
+    <BLANKLINE>
+    > <doctest post_mortem5.txt[1]>(1)...()
+    (Pdb) p x
+    1
+    (Pdb) c
+    True
+
+
+Post-mortem debugging function called from file-based doctest:
+
+    >>> sys.stdin = Input('p x\nc')
+    >>> sys.argv = ('test -ssample3 --tests-pattern ^sampletests_d$'
+    ...             ' -t post_mortem6 -D').split()
+    >>> try: testrunner.run(defaults)
+    ... finally: sys.stdin = real_stdin
+    ... # doctest: +NORMALIZE_WHITESPACE
+    Running unit tests:
+    <BLANKLINE>
+    <BLANKLINE>
+    Error in test zope/testing/testrunner-ex/sample3/post_mortem6.txt
+    Traceback (most recent call last):
+      File "zope/testing/doctest.py", line 2276, in debug
+        runner.run(self._dt_test)
+      File "zope/testing/doctest.py", line 1731, in run
+        r = DocTestRunner.run(self, test, compileflags, out, False)
+      File "zope/testing/doctest.py", line 1389, in run
+        return self.__run(test, compileflags, out)
+      File "zope/testing/doctest.py", line 1310, in __run
+        exc_info)
+      File "zope/testing/doctest.py", line 1737, in report_unexpected_exception
+        raise UnexpectedException(test, example, exc_info)
+    UnexpectedException: testrunner-ex/sample3/post_mortem6.txt:0 (2 examples)>
+    <BLANKLINE>
+    exceptions.ValueError:
+    <BLANKLINE>
+    > testrunner-ex/sample3/sampletests_d.py(46)g()
+    -> raise ValueError
+    (Pdb) p x
+    1
+    (Pdb) c
+    True
+
+Post-mortem debugging of a docstring doctest failure:
+
+    >>> sys.stdin = Input('p x\nc')
+    >>> sys.argv = ('test -ssample3 --tests-pattern ^sampletests_d$'
+    ...             ' -t post_mortem_failure2 -D').split()
+    >>> try: testrunner.run(defaults)
+    ... finally: sys.stdin = real_stdin
+    ... # doctest: +NORMALIZE_WHITESPACE
+    Running unit tests:
+    <BLANKLINE>
+    <BLANKLINE>
+    Error in test post_mortem_failure2 (sample3.sampletests_d)
+    <BLANKLINE>
+    File "testrunner-ex/sample3/sampletests_d.py",
+                   line 81, in sample3.sampletests_d.post_mortem_failure2
+    <BLANKLINE>
+    x
+    <BLANKLINE>
+    Want:
+    2
+    <BLANKLINE>
+    Got:
+    1
+    <BLANKLINE>
+    <BLANKLINE>
+    > testrunner-ex/sample3/sampletests_d.py(81)_()
+    exceptions.ValueError:
+    Expected and actual output are different
+    > <string>(1)...()
+    (Pdb) p x
+    1
+    (Pdb) c
+    True
+
+
+Post-mortem debugging of a docfile doctest failure:
+
+    >>> sys.stdin = Input('p x\nc')
+    >>> sys.argv = ('test -ssample3 --tests-pattern ^sampletests_d$'
+    ...             ' -t post_mortem_failure.txt -D').split()
+    >>> try: testrunner.run(defaults)
+    ... finally: sys.stdin = real_stdin
+    ... # doctest: +NORMALIZE_WHITESPACE
+    Running unit tests:
+    <BLANKLINE>
+    <BLANKLINE>
+    Error in test /home/jim/z3/zope.testing/src/zope/testing/testrunner-ex/sample3/post_mortem_failure.txt
+    <BLANKLINE>
+    File "testrunner-ex/sample3/post_mortem_failure.txt",
+                                      line 2, in post_mortem_failure.txt
+    <BLANKLINE>
+    x
+    <BLANKLINE>
+    Want:
+    2
+    <BLANKLINE>
+    Got:
+    1
+    <BLANKLINE>
+    <BLANKLINE>
+    > testrunner-ex/sample3/post_mortem_failure.txt(2)_()
+    exceptions.ValueError:
+    Expected and actual output are different
+    > <string>(1)...()
+    (Pdb) p x
+    1
+    (Pdb) c
+    True
+
+Post-mortem debugging with triple verbosity:
+
+    >>> sys.argv = 'test --layer samplelayers.Layer1$ -vvv -D'.split()
+    >>> testrunner.run(defaults)
+    Running tests at level 1
+    Running samplelayers.Layer1 tests:
+      Set up samplelayers.Layer1 in 0.000 seconds.
+      Running:
+        test_x1 (sampletestsf.TestA1) (0.000 s)
+        test_y0 (sampletestsf.TestA1) (0.000 s)
+        test_z0 (sampletestsf.TestA1) (0.000 s)
+        test_x0 (sampletestsf.TestB1) (0.000 s)
+        test_y1 (sampletestsf.TestB1) (0.000 s)
+        test_z0 (sampletestsf.TestB1) (0.000 s)
+        test_1 (sampletestsf.TestNotMuch1) (0.000 s)
+        test_2 (sampletestsf.TestNotMuch1) (0.000 s)
+        test_3 (sampletestsf.TestNotMuch1) (0.000 s)
+      Ran 9 tests with 0 failures and 0 errors in 0.001 seconds.
+    Tearing down left over layers:
+      Tear down samplelayers.Layer1 in 0.000 seconds.
+    False
+
+Test Suites with None for suites or tests
+-----------------------------------------
+
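+If a test module's ``test_suite`` function returns ``None`` instead of a
+test suite, the module cannot be loaded and is reported as an import
+failure.  A module of that shape would look roughly like this (a
+hypothetical sketch; the actual sample modules are not shown in this diff):
+
+    def test_suite():
+        # Returning None instead of a unittest.TestSuite triggers the
+        # "Invalid test_suite, None" import failure reported below.
+        return None
+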
+    >>> sys.argv = ['test',
+    ...             '--tests-pattern', '^sampletests_none_suite$',
+    ...     ]
+    >>> testrunner.run(defaults)
+    Test-module import failures:
+    <BLANKLINE>
+    Module: sample1.sampletests_none_suite
+    <BLANKLINE>
+    TypeError: Invalid test_suite, None, in sample1.sampletests_none_suite
+    <BLANKLINE>
+    <BLANKLINE>
+    Total: 0 tests, 0 failures, 0 errors in N.NNN seconds.
+    <BLANKLINE>
+    Test-modules with import problems:
+      sample1.sampletests_none_suite
+    True
+
+
+    >>> sys.argv = ['test',
+    ...             '--tests-pattern', '^sampletests_none_test$',
+    ...     ]
+    >>> testrunner.run(defaults)
+    Test-module import failures:
+    <BLANKLINE>
+    Module: sample1.sampletests_none_test
+    <BLANKLINE>
+    TypeError: ...
+    <BLANKLINE>
+    <BLANKLINE>
+    Total: 0 tests, 0 failures, 0 errors in N.NNN seconds.
+    <BLANKLINE>
+    Test-modules with import problems:
+      sample1.sampletests_none_test
+    True
+
+You must use --repeat with --report-refcounts
+---------------------------------------------
+
+It is an error to specify --report-refcounts (-r) without also specifying a
+repeat count greater than 1:
+
+    >>> sys.argv = 'test -r'.split()
+    >>> testrunner.run(defaults)
+            You must use the --repeat (-N) option to specify a repeat
+            count greater than 1 when using the --report_refcounts (-r)
+            option.
+    <BLANKLINE>
+    True
+
+    >>> sys.argv = 'test -r -N1'.split()
+    >>> testrunner.run(defaults)
+            You must use the --repeat (-N) option to specify a repeat
+            count greater than 1 when using the --report_refcounts (-r)
+            option.
+    <BLANKLINE>
+    True

Copied: zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-errors.txt (from rev 86204, zope.testing/trunk/src/zope/testing/testrunner-errors.txt)
===================================================================
--- zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-errors.txt	                        (rev 0)
+++ zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-errors.txt	2008-05-03 13:25:02 UTC (rev 86207)
@@ -0,0 +1,856 @@
+Errors and Failures
+===================
+
+Let's look at tests that have errors and failures. First we need to make a
+temporary copy of the entire testing directory (except .svn files, which may
+be read-only):
+
+    >>> import os.path, sys, tempfile, shutil
+    >>> tmpdir = tempfile.mkdtemp()
+    >>> directory_with_tests = os.path.join(tmpdir, 'testrunner-ex')
+    >>> source = os.path.join(this_directory, 'testrunner-ex')
+    >>> n = len(source) + 1
+    >>> for root, dirs, files in os.walk(source):
+    ...     dirs[:] = [d for d in dirs if d != ".svn"] # prune cruft
+    ...     os.mkdir(os.path.join(directory_with_tests, root[n:]))
+    ...     for f in files:
+    ...         shutil.copy(os.path.join(root, f),
+    ...                     os.path.join(directory_with_tests, root[n:], f))
+    
+    >>> from zope.testing import testrunner
+    >>> defaults = [
+    ...     '--path', directory_with_tests,
+    ...     '--tests-pattern', '^sampletestsf?$',
+    ...     ]
+
+    >>> sys.argv = 'test --tests-pattern ^sampletests(f|_e|_f)?$ '.split()
+    >>> testrunner.run(defaults)
+    ... # doctest: +NORMALIZE_WHITESPACE
+    Running unit tests:
+    <BLANKLINE>
+    <BLANKLINE>
+    Failure in test eek (sample2.sampletests_e)
+    Failed doctest test for sample2.sampletests_e.eek
+      File "testrunner-ex/sample2/sampletests_e.py", line 28, in eek
+    <BLANKLINE>
+    ----------------------------------------------------------------------
+    File "testrunner-ex/sample2/sampletests_e.py", line 30, in sample2.sampletests_e.eek
+    Failed example:
+        f()
+    Exception raised:
+        Traceback (most recent call last):
+          File ".../doctest.py", line 1256, in __run
+            compileflags, 1) in test.globs
+          File "<doctest sample2.sampletests_e.eek[0]>", line 1, in ?
+            f()
+          File "testrunner-ex/sample2/sampletests_e.py", line 19, in f
+            g()
+          File "testrunner-ex/sample2/sampletests_e.py", line 24, in g
+            x = y + 1
+        NameError: global name 'y' is not defined
+    <BLANKLINE>
+    <BLANKLINE>
+    <BLANKLINE>
+    Error in test test3 (sample2.sampletests_e.Test)
+    Traceback (most recent call last):
+      File "testrunner-ex/sample2/sampletests_e.py", line 43, in test3
+        f()
+      File "testrunner-ex/sample2/sampletests_e.py", line 19, in f
+        g()
+      File "testrunner-ex/sample2/sampletests_e.py", line 24, in g
+        x = y + 1
+    NameError: global name 'y' is not defined
+    <BLANKLINE>
+    <BLANKLINE>
+    <BLANKLINE>
+    Failure in test testrunner-ex/sample2/e.txt
+    Failed doctest test for e.txt
+      File "testrunner-ex/sample2/e.txt", line 0
+    <BLANKLINE>
+    ----------------------------------------------------------------------
+    File "testrunner-ex/sample2/e.txt", line 4, in e.txt
+    Failed example:
+        f()
+    Exception raised:
+        Traceback (most recent call last):
+          File ".../doctest.py", line 1256, in __run
+            compileflags, 1) in test.globs
+          File "<doctest e.txt[1]>", line 1, in ?
+            f()
+          File "<doctest e.txt[0]>", line 2, in f
+            return x
+        NameError: global name 'x' is not defined
+    <BLANKLINE>
+    <BLANKLINE>
+    <BLANKLINE>
+    Failure in test test (sample2.sampletests_f.Test)
+    Traceback (most recent call last):
+      File "testrunner-ex/sample2/sampletests_f.py", line 21, in test
+        self.assertEqual(1,0)
+      File "/usr/local/python/2.3/lib/python2.3/unittest.py", line 302, in failUnlessEqual
+        raise self.failureException, \
+    AssertionError: 1 != 0
+    <BLANKLINE>
+      Ran 200 tests with 3 failures and 1 errors in 0.038 seconds.
+    Running samplelayers.Layer1 tests:
+      Set up samplelayers.Layer1 in 0.000 seconds.
+      Ran 9 tests with 0 failures and 0 errors in 0.000 seconds.
+    Running samplelayers.Layer11 tests:
+      Set up samplelayers.Layer11 in 0.000 seconds.
+      Ran 34 tests with 0 failures and 0 errors in 0.007 seconds.
+    Running samplelayers.Layer111 tests:
+      Set up samplelayers.Layerx in 0.000 seconds.
+      Set up samplelayers.Layer111 in 0.000 seconds.
+      Ran 34 tests with 0 failures and 0 errors in 0.007 seconds.
+    Running samplelayers.Layer112 tests:
+      Tear down samplelayers.Layer111 in 0.000 seconds.
+      Set up samplelayers.Layer112 in 0.000 seconds.
+      Ran 34 tests with 0 failures and 0 errors in 0.006 seconds.
+    Running samplelayers.Layer12 tests:
+      Tear down samplelayers.Layer112 in 0.000 seconds.
+      Tear down samplelayers.Layerx in 0.000 seconds.
+      Tear down samplelayers.Layer11 in 0.000 seconds.
+      Set up samplelayers.Layer12 in 0.000 seconds.
+      Ran 34 tests with 0 failures and 0 errors in 0.007 seconds.
+    Running samplelayers.Layer121 tests:
+      Set up samplelayers.Layer121 in 0.000 seconds.
+      Ran 34 tests with 0 failures and 0 errors in 0.006 seconds.
+    Running samplelayers.Layer122 tests:
+      Tear down samplelayers.Layer121 in 0.000 seconds.
+      Set up samplelayers.Layer122 in 0.000 seconds.
+      Ran 34 tests with 0 failures and 0 errors in 0.007 seconds.
+    Tearing down left over layers:
+      Tear down samplelayers.Layer122 in 0.000 seconds.
+      Tear down samplelayers.Layer12 in 0.000 seconds.
+      Tear down samplelayers.Layer1 in 0.000 seconds.
+    Total: 413 tests, 3 failures, 1 errors in N.NNN seconds.
+    True
+
+We see that we get an error report and a traceback for the failing
+test.  In addition, the test runner returned True, indicating that
+there was an error.
+
+If we ask for verbosity, the dotted output will be interrupted, and there
+will be a summary of the errors at the end of the test run:
+
+    >>> sys.argv = 'test --tests-pattern ^sampletests(f|_e|_f)?$ -uv'.split()
+    >>> testrunner.run(defaults)
+    ... # doctest: +NORMALIZE_WHITESPACE +REPORT_NDIFF
+    Running tests at level 1
+    Running unit tests:
+      Running:
+     .................................................................................................
+    <BLANKLINE>
+    Failure in test eek (sample2.sampletests_e)
+    Failed doctest test for sample2.sampletests_e.eek
+      File "testrunner-ex/sample2/sampletests_e.py", line 28, in eek
+    <BLANKLINE>
+    ----------------------------------------------------------------------
+    File "testrunner-ex/sample2/sampletests_e.py", line 30,
+        in sample2.sampletests_e.eek
+    Failed example:
+        f()
+    Exception raised:
+        Traceback (most recent call last):
+          File ".../doctest.py", line 1256, in __run
+            compileflags, 1) in test.globs
+          File "<doctest sample2.sampletests_e.eek[0]>", line 1, in ?
+            f()
+          File "testrunner-ex/sample2/sampletests_e.py", line 19, in f
+            g()
+          File "testrunner-ex/sample2/sampletests_e.py", line 24, in g
+            x = y + 1
+        NameError: global name 'y' is not defined
+    <BLANKLINE>
+    ...
+    <BLANKLINE>
+    <BLANKLINE>
+    Error in test test3 (sample2.sampletests_e.Test)
+    Traceback (most recent call last):
+      File "testrunner-ex/sample2/sampletests_e.py", line 43, in test3
+        f()
+      File "testrunner-ex/sample2/sampletests_e.py", line 19, in f
+        g()
+      File "testrunner-ex/sample2/sampletests_e.py", line 24, in g
+        x = y + 1
+    NameError: global name 'y' is not defined
+    <BLANKLINE>
+    ...
+    <BLANKLINE>
+    Failure in test testrunner-ex/sample2/e.txt
+    Failed doctest test for e.txt
+      File "testrunner-ex/sample2/e.txt", line 0
+    <BLANKLINE>
+    ----------------------------------------------------------------------
+    File "testrunner-ex/sample2/e.txt", line 4, in e.txt
+    Failed example:
+        f()
+    Exception raised:
+        Traceback (most recent call last):
+          File ".../doctest.py", line 1256, in __run
+            compileflags, 1) in test.globs
+          File "<doctest e.txt[1]>", line 1, in ?
+            f()
+          File "<doctest e.txt[0]>", line 2, in f
+            return x
+        NameError: global name 'x' is not defined
+    <BLANKLINE>
+    .
+    <BLANKLINE>
+    Failure in test test (sample2.sampletests_f.Test)
+    Traceback (most recent call last):
+      File "testrunner-ex/sample2/sampletests_f.py", line 21, in test
+        self.assertEqual(1,0)
+      File ".../unittest.py", line 302, in failUnlessEqual
+        raise self.failureException, \
+    AssertionError: 1 != 0
+    <BLANKLINE>
+    ................................................................................................
+    <BLANKLINE>
+      Ran 200 tests with 3 failures and 1 errors in 0.040 seconds.
+    <BLANKLINE>
+    Tests with errors:
+       test3 (sample2.sampletests_e.Test)
+    <BLANKLINE>
+    Tests with failures:
+       eek (sample2.sampletests_e)
+       testrunner-ex/sample2/e.txt
+       test (sample2.sampletests_f.Test)
+    True
+
+Similarly for progress output, the progress ticker will be interrupted:
+
+    >>> sys.argv = ('test --tests-pattern ^sampletests(f|_e|_f)?$ -u -ssample2'
+    ...             ' -p').split()
+    >>> testrunner.run(defaults)
+    ... # doctest: +NORMALIZE_WHITESPACE +REPORT_NDIFF
+    Running unit tests:
+      Running:
+        1/56 (1.8%)
+    <BLANKLINE>
+    Failure in test eek (sample2.sampletests_e)
+    Failed doctest test for sample2.sampletests_e.eek
+      File "testrunner-ex/sample2/sampletests_e.py", line 28, in eek
+    <BLANKLINE>
+    ----------------------------------------------------------------------
+    File "testrunner-ex/sample2/sampletests_e.py", line 30,
+           in sample2.sampletests_e.eek
+    Failed example:
+        f()
+    Exception raised:
+        Traceback (most recent call last):
+          File ".../doctest.py", line 1256, in __run
+            compileflags, 1) in test.globs
+          File "<doctest sample2.sampletests_e.eek[0]>", line 1, in ?
+            f()
+          File "testrunner-ex/sample2/sampletests_e.py", line 19, in f
+            g()
+          File "testrunner-ex/sample2/sampletests_e.py", line 24, in g
+            x = y + 1
+        NameError: global name 'y' is not defined
+    <BLANKLINE>
+        2/56 (3.6%)\r
+                   \r
+        3/56 (5.4%)\r
+                   \r
+        4/56 (7.1%)
+    <BLANKLINE>
+    Error in test test3 (sample2.sampletests_e.Test)
+    Traceback (most recent call last):
+      File "testrunner-ex/sample2/sampletests_e.py", line 43, in test3
+        f()
+      File "testrunner-ex/sample2/sampletests_e.py", line 19, in f
+        g()
+      File "testrunner-ex/sample2/sampletests_e.py", line 24, in g
+        x = y + 1
+    NameError: global name 'y' is not defined
+    <BLANKLINE>
+        5/56 (8.9%)\r
+                   \r
+        6/56 (10.7%)\r
+                    \r
+        7/56 (12.5%)
+    <BLANKLINE>
+    Failure in test testrunner-ex/sample2/e.txt
+    Failed doctest test for e.txt
+      File "testrunner-ex/sample2/e.txt", line 0
+    <BLANKLINE>
+    ----------------------------------------------------------------------
+    File "testrunner-ex/sample2/e.txt", line 4, in e.txt
+    Failed example:
+        f()
+    Exception raised:
+        Traceback (most recent call last):
+          File ".../doctest.py", line 1256, in __run
+            compileflags, 1) in test.globs
+          File "<doctest e.txt[1]>", line 1, in ?
+            f()
+          File "<doctest e.txt[0]>", line 2, in f
+            return x
+        NameError: global name 'x' is not defined
+    <BLANKLINE>
+        8/56 (14.3%)
+    <BLANKLINE>
+    Failure in test test (sample2.sampletests_f.Test)
+    Traceback (most recent call last):
+      File "testrunner-ex/sample2/sampletests_f.py", line 21, in test
+        self.assertEqual(1,0)
+      File ".../unittest.py", line 302, in failUnlessEqual
+        raise self.failureException, \
+    AssertionError: 1 != 0
+    <BLANKLINE>
+        9/56 (16.1%)\r
+                    \r
+        10/56 (17.9%)\r
+                     \r
+        11/56 (19.6%)\r
+                     \r
+        12/56 (21.4%)\r
+                     \r
+        13/56 (23.2%)\r
+                     \r
+        14/56 (25.0%)\r
+                     \r
+        15/56 (26.8%)\r
+                     \r
+        16/56 (28.6%)\r
+                     \r
+        17/56 (30.4%)\r
+                     \r
+        18/56 (32.1%)\r
+                     \r
+        19/56 (33.9%)\r
+                     \r
+        20/56 (35.7%)\r
+                     \r
+        24/56 (42.9%)\r
+                     \r
+        25/56 (44.6%)\r
+                     \r
+        26/56 (46.4%)\r
+                     \r
+        27/56 (48.2%)\r
+                     \r
+        28/56 (50.0%)\r
+                     \r
+        29/56 (51.8%)\r
+                     \r
+        30/56 (53.6%)\r
+                     \r
+        31/56 (55.4%)\r
+                     \r
+        32/56 (57.1%)\r
+                     \r
+        33/56 (58.9%)\r
+                     \r
+        34/56 (60.7%)\r
+                     \r
+        35/56 (62.5%)\r
+                     \r
+        36/56 (64.3%)\r
+                     \r
+        40/56 (71.4%)\r
+                     \r
+        41/56 (73.2%)\r
+                     \r
+        42/56 (75.0%)\r
+                     \r
+        43/56 (76.8%)\r
+                     \r
+        44/56 (78.6%)\r
+                     \r
+        45/56 (80.4%)\r
+                     \r
+        46/56 (82.1%)\r
+                     \r
+        47/56 (83.9%)\r
+                     \r
+        48/56 (85.7%)\r
+                     \r
+        49/56 (87.5%)\r
+                     \r
+        50/56 (89.3%)\r
+                     \r
+        51/56 (91.1%)\r
+                     \r
+        52/56 (92.9%)\r
+                     \r
+        56/56 (100.0%)\r
+                      \r
+    <BLANKLINE>
+      Ran 56 tests with 3 failures and 1 errors in 0.054 seconds.
+    True
+
+If you also want a summary of errors at the end, ask for verbosity as well
+as progress output.
+
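+For example, something like the following would do it (a sketch, not
+run here; it reuses the defaults list and sample tests from above and
+combines the progress (-p) and verbosity (-v) switches):
+
+    sys.argv = 'test --tests-pattern ^sampletests_e$ -p -v'.split()
+    testrunner.run(defaults)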
+
+Suppressing multiple doctest errors
+-----------------------------------
+
+Often, when a doctest example fails, the failure will cause later
+examples in the same test to fail.  Each failure is reported:
+
+    >>> sys.argv = 'test --tests-pattern ^sampletests_1$'.split()
+    >>> testrunner.run(defaults) # doctest: +NORMALIZE_WHITESPACE
+    Running unit tests:
+    <BLANKLINE>
+    <BLANKLINE>
+    Failure in test eek (sample2.sampletests_1)
+    Failed doctest test for sample2.sampletests_1.eek
+      File "testrunner-ex/sample2/sampletests_1.py", line 17, in eek
+    <BLANKLINE>
+    ----------------------------------------------------------------------
+    File "testrunner-ex/sample2/sampletests_1.py", line 19,
+         in sample2.sampletests_1.eek
+    Failed example:
+        x = y
+    Exception raised:
+        Traceback (most recent call last):
+          File ".../doctest.py", line 1256, in __run
+            compileflags, 1) in test.globs
+          File "<doctest sample2.sampletests_1.eek[0]>", line 1, in ?
+            x = y
+        NameError: name 'y' is not defined
+    ----------------------------------------------------------------------
+    File "testrunner-ex/sample2/sampletests_1.py", line 21,
+         in sample2.sampletests_1.eek
+    Failed example:
+        x
+    Exception raised:
+        Traceback (most recent call last):
+          File ".../doctest.py", line 1256, in __run
+            compileflags, 1) in test.globs
+          File "<doctest sample2.sampletests_1.eek[1]>", line 1, in ?
+            x
+        NameError: name 'x' is not defined
+    ----------------------------------------------------------------------
+    File "testrunner-ex/sample2/sampletests_1.py", line 24,
+         in sample2.sampletests_1.eek
+    Failed example:
+        z = x + 1
+    Exception raised:
+        Traceback (most recent call last):
+          File ".../doctest.py", line 1256, in __run
+            compileflags, 1) in test.globs
+          File "<doctest sample2.sampletests_1.eek[2]>", line 1, in ?
+            z = x + 1
+        NameError: name 'x' is not defined
+    <BLANKLINE>
+      Ran 1 tests with 1 failures and 0 errors in 0.002 seconds.
+    True
+
+This can be a bit confusing, especially when there are enough tests
+that they scroll off a screen.  Often you just want to see the first
+failure.  This can be accomplished with the -1 option (for "just show
+me the first failed example in a doctest" :)
+
+    >>> sys.argv = 'test --tests-pattern ^sampletests_1$ -1'.split()
+    >>> testrunner.run(defaults) # doctest:
+    Running unit tests:
+    <BLANKLINE>
+    <BLANKLINE>
+    Failure in test eek (sample2.sampletests_1)
+    Failed doctest test for sample2.sampletests_1.eek
+      File "testrunner-ex/sample2/sampletests_1.py", line 17, in eek
+    <BLANKLINE>
+    ----------------------------------------------------------------------
+    File "testrunner-ex/sample2/sampletests_1.py", line 19,
+         in sample2.sampletests_1.eek
+    Failed example:
+        x = y
+    Exception raised:
+        Traceback (most recent call last):
+          File ".../doctest.py", line 1256, in __run
+            compileflags, 1) in test.globs
+          File "<doctest sample2.sampletests_1.eek[0]>", line 1, in ?
+            x = y
+        NameError: name 'y' is not defined
+    <BLANKLINE>
+      Ran 1 tests with 1 failures and 0 errors in 0.001 seconds.
+    True
+
+The --hide-secondary-failures option is an alias for -1:
+
+    >>> sys.argv = (
+    ...     'test --tests-pattern ^sampletests_1$'
+    ...     ' --hide-secondary-failures'
+    ...     ).split()
+    >>> testrunner.run(defaults) # doctest:
+    Running unit tests:
+    <BLANKLINE>
+    <BLANKLINE>
+    Failure in test eek (sample2.sampletests_1)
+    Failed doctest test for sample2.sampletests_1.eek
+      File "testrunner-ex/sample2/sampletests_1.py", line 17, in eek
+    <BLANKLINE>
+    ----------------------------------------------------------------------
+    File "testrunner-ex/sample2/sampletests_1.py", line 19,
+         in sample2.sampletests_1.eek
+    Failed example:
+        x = y
+    Exception raised:
+        Traceback (most recent call last):
+          File ".../doctest.py", line 1256, in __run
+            compileflags, 1) in test.globs
+          File "<doctest sample2.sampletests_1.eek[0]>", line 1, in ?
+            x = y
+        NameError: name 'y' is not defined
+    <BLANKLINE>
+      Ran 1 tests with 1 failures and 0 errors in 0.001 seconds.
+    True
+
+The --show-secondary-failures option counteracts -1 (or its alias),
+causing the second and subsequent errors to be shown.  This is useful
+if a test script provides -1 by inserting it ahead of the command-line
+options in sys.argv (a sketch of such a script follows the example
+below).
+
+    >>> sys.argv = (
+    ...     'test --tests-pattern ^sampletests_1$'
+    ...     ' --hide-secondary-failures --show-secondary-failures'
+    ...     ).split()
+    >>> testrunner.run(defaults) # doctest: +NORMALIZE_WHITESPACE
+    Running unit tests:
+    <BLANKLINE>
+    <BLANKLINE>
+    Failure in test eek (sample2.sampletests_1)
+    Failed doctest test for sample2.sampletests_1.eek
+      File "testrunner-ex/sample2/sampletests_1.py", line 17, in eek
+    <BLANKLINE>
+    ----------------------------------------------------------------------
+    File "testrunner-ex/sample2/sampletests_1.py", line 19,
+         in sample2.sampletests_1.eek
+    Failed example:
+        x = y
+    Exception raised:
+        Traceback (most recent call last):
+          File ".../doctest.py", line 1256, in __run
+            compileflags, 1) in test.globs
+          File "<doctest sample2.sampletests_1.eek[0]>", line 1, in ?
+            x = y
+        NameError: name 'y' is not defined
+    ----------------------------------------------------------------------
+    File "testrunner-ex/sample2/sampletests_1.py", line 21,
+         in sample2.sampletests_1.eek
+    Failed example:
+        x
+    Exception raised:
+        Traceback (most recent call last):
+          File ".../doctest.py", line 1256, in __run
+            compileflags, 1) in test.globs
+          File "<doctest sample2.sampletests_1.eek[1]>", line 1, in ?
+            x
+        NameError: name 'x' is not defined
+    ----------------------------------------------------------------------
+    File "testrunner-ex/sample2/sampletests_1.py", line 24,
+         in sample2.sampletests_1.eek
+    Failed example:
+        z = x + 1
+    Exception raised:
+        Traceback (most recent call last):
+          File ".../doctest.py", line 1256, in __run
+            compileflags, 1) in test.globs
+          File "<doctest sample2.sampletests_1.eek[2]>", line 1, in ?
+            z = x + 1
+        NameError: name 'x' is not defined
+    <BLANKLINE>
+      Ran 1 tests with 1 failures and 0 errors in 0.002 seconds.
+    True
+
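+A test script that wants to hide secondary failures by default, while
+still letting users pass --show-secondary-failures on the command line,
+might look something like this (a sketch; the path and test pattern are
+made-up values):
+
+    # runtests.py -- hypothetical wrapper around the test runner
+    from zope.testing import testrunner
+
+    # '-1' goes in the defaults, ahead of the real command-line options
+    # in sys.argv, so a user-supplied --show-secondary-failures wins.
+    defaults = ['--path', 'src', '--tests-pattern', '^tests$', '-1']
+    testrunner.run(defaults)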
+
+Getting diff output for doctest failures
+----------------------------------------
+
+If a doctest has large expected and actual output, it can be hard to
+spot where the two differ.  The --ndiff, --udiff, and --cdiff options
+can be used to get diff output of various kinds.
+
+    >>> sys.argv = 'test --tests-pattern ^pledge$'.split()
+    >>> _ = testrunner.run(defaults)
+    Running unit tests:
+    <BLANKLINE>
+    <BLANKLINE>
+    Failure in test pledge (pledge)
+    Failed doctest test for pledge.pledge
+      File "testrunner-ex/pledge.py", line 24, in pledge
+    <BLANKLINE>
+    ----------------------------------------------------------------------
+    File "testrunner-ex/pledge.py", line 26, in pledge.pledge
+    Failed example:
+        print pledge_template % ('and earthling', 'planet'),
+    Expected:
+        I give my pledge, as an earthling,
+        to save, and faithfully, to defend from waste,
+        the natural resources of my planet.
+        It's soils, minerals, forests, waters, and wildlife.
+    Got:
+        I give my pledge, as and earthling,
+        to save, and faithfully, to defend from waste,
+        the natural resources of my planet.
+        It's soils, minerals, forests, waters, and wildlife.
+    <BLANKLINE>
+      Ran 1 tests with 1 failures and 0 errors in 0.002 seconds.
+
+Here, the actual output uses the word "and" rather than the word "an",
+but it's a bit hard to pick this out.  We can use the various diff
+outputs to see this better. We could modify the test to ask for diff
+output, but it's easier to use one of the diff options.
+
+The --ndiff option requests a diff using Python's ndiff utility. This
+is the only method that marks differences within lines as well as
+across lines. For example, if a line of expected output contains digit
+1 where actual output contains letter l, a line is inserted with a
+caret marking the mismatching column positions.
+
+    >>> sys.argv = 'test --tests-pattern ^pledge$ --ndiff'.split()
+    >>> _ = testrunner.run(defaults)
+    Running unit tests:
+    <BLANKLINE>
+    <BLANKLINE>
+    Failure in test pledge (pledge)
+    Failed doctest test for pledge.pledge
+      File "testrunner-ex/pledge.py", line 24, in pledge
+    <BLANKLINE>
+    ----------------------------------------------------------------------
+    File "testrunner-ex/pledge.py", line 26, in pledge.pledge
+    Failed example:
+        print pledge_template % ('and earthling', 'planet'),
+    Differences (ndiff with -expected +actual):
+        - I give my pledge, as an earthling,
+        + I give my pledge, as and earthling,
+        ?                        +
+          to save, and faithfully, to defend from waste,
+          the natural resources of my planet.
+          It's soils, minerals, forests, waters, and wildlife.
+    <BLANKLINE>
+      Ran 1 tests with 1 failures and 0 errors in 0.003 seconds.
+
+The --udiff option requests a standard "unified" diff:
+
+    >>> sys.argv = 'test --tests-pattern ^pledge$ --udiff'.split()
+    >>> _ = testrunner.run(defaults)
+    Running unit tests:
+    <BLANKLINE>
+    <BLANKLINE>
+    Failure in test pledge (pledge)
+    Failed doctest test for pledge.pledge
+      File "testrunner-ex/pledge.py", line 24, in pledge
+    <BLANKLINE>
+    ----------------------------------------------------------------------
+    File "testrunner-ex/pledge.py", line 26, in pledge.pledge
+    Failed example:
+        print pledge_template % ('and earthling', 'planet'),
+    Differences (unified diff with -expected +actual):
+        @@ -1,3 +1,3 @@
+        -I give my pledge, as an earthling,
+        +I give my pledge, as and earthling,
+         to save, and faithfully, to defend from waste,
+         the natural resources of my planet.
+    <BLANKLINE>
+      Ran 1 tests with 1 failures and 0 errors in 0.002 seconds.
+
+The --cdiff option requests a standard "context" diff:
+
+    >>> sys.argv = 'test --tests-pattern ^pledge$ --cdiff'.split()
+    >>> _ = testrunner.run(defaults)
+    Running unit tests:
+    <BLANKLINE>
+    <BLANKLINE>
+    Failure in test pledge (pledge)
+    Failed doctest test for pledge.pledge
+      File "testrunner-ex/pledge.py", line 24, in pledge
+    <BLANKLINE>
+    ----------------------------------------------------------------------
+    File "testrunner-ex/pledge.py", line 26, in pledge.pledge
+    Failed example:
+        print pledge_template % ('and earthling', 'planet'),
+    Differences (context diff with expected followed by actual):
+        ***************
+        *** 1,3 ****
+        ! I give my pledge, as an earthling,
+          to save, and faithfully, to defend from waste,
+          the natural resources of my planet.
+        --- 1,3 ----
+        ! I give my pledge, as and earthling,
+          to save, and faithfully, to defend from waste,
+          the natural resources of my planet.
+    <BLANKLINE>
+      Ran 1 tests with 1 failures and 0 errors in 0.002 seconds.
+
+
+Testing-Module Import Errors
+----------------------------
+
+If there are errors when importing a test module, these errors are
+reported.  To illustrate this, we need a module with a syntax error.
+(Such a module used to be checked in to the project, but it also ended
+up in distributions of projects using zope.testing, and distutils
+complained about the syntax error when it compiled Python files during
+installation of those projects.)  So first we create the module here:
+
+    >>> badsyntax_path = os.path.join(directory_with_tests,
+    ...                               "sample2", "sampletests_i.py")
+    >>> f = open(badsyntax_path, "w")
+    >>> print >> f, "importx unittest"  # syntax error
+    >>> f.close()
+
+Then run the tests:
+
+    >>> sys.argv = ('test --tests-pattern ^sampletests(f|_i)?$ --layer 1 '
+    ...            ).split()
+    >>> testrunner.run(defaults)
+    ... # doctest: +NORMALIZE_WHITESPACE
+    Test-module import failures:
+    <BLANKLINE>
+    Module: sample2.sampletests_i
+    <BLANKLINE>
+      File "testrunner-ex/sample2/sampletests_i.py", line 1
+        importx unittest
+                       ^
+    SyntaxError: invalid syntax
+    <BLANKLINE>
+    <BLANKLINE>
+    Module: sample2.sample21.sampletests_i
+    <BLANKLINE>
+    Traceback (most recent call last):
+      File "testrunner-ex/sample2/sample21/sampletests_i.py", line 15, in ?
+        import zope.testing.huh
+    ImportError: No module named huh
+    <BLANKLINE>
+    <BLANKLINE>
+    Module: sample2.sample22.sampletests_i
+    <BLANKLINE>
+    AttributeError: 'module' object has no attribute 'test_suite'
+    <BLANKLINE>
+    <BLANKLINE>
+    Module: sample2.sample23.sampletests_i
+    <BLANKLINE>
+    Traceback (most recent call last):
+      File "testrunner-ex/sample2/sample23/sampletests_i.py", line 18, in ?
+        class Test(unittest.TestCase):
+      File "testrunner-ex/sample2/sample23/sampletests_i.py", line 23, in Test
+        raise TypeError('eek')
+    TypeError: eek
+    <BLANKLINE>
+    <BLANKLINE>
+    Running samplelayers.Layer1 tests:
+      Set up samplelayers.Layer1 in 0.000 seconds.
+      Ran 9 tests with 0 failures and 0 errors in 0.000 seconds.
+    Running samplelayers.Layer11 tests:
+      Set up samplelayers.Layer11 in 0.000 seconds.
+      Ran 34 tests with 0 failures and 0 errors in 0.007 seconds.
+    Running samplelayers.Layer111 tests:
+      Set up samplelayers.Layerx in 0.000 seconds.
+      Set up samplelayers.Layer111 in 0.000 seconds.
+      Ran 34 tests with 0 failures and 0 errors in 0.007 seconds.
+    Running samplelayers.Layer112 tests:
+      Tear down samplelayers.Layer111 in 0.000 seconds.
+      Set up samplelayers.Layer112 in 0.000 seconds.
+      Ran 34 tests with 0 failures and 0 errors in 0.007 seconds.
+    Running samplelayers.Layer12 tests:
+      Tear down samplelayers.Layer112 in 0.000 seconds.
+      Tear down samplelayers.Layerx in 0.000 seconds.
+      Tear down samplelayers.Layer11 in 0.000 seconds.
+      Set up samplelayers.Layer12 in 0.000 seconds.
+      Ran 34 tests with 0 failures and 0 errors in 0.007 seconds.
+    Running samplelayers.Layer121 tests:
+      Set up samplelayers.Layer121 in 0.000 seconds.
+      Ran 34 tests with 0 failures and 0 errors in 0.007 seconds.
+    Running samplelayers.Layer122 tests:
+      Tear down samplelayers.Layer121 in 0.000 seconds.
+      Set up samplelayers.Layer122 in 0.000 seconds.
+      Ran 34 tests with 0 failures and 0 errors in 0.006 seconds.
+    Tearing down left over layers:
+      Tear down samplelayers.Layer122 in 0.000 seconds.
+      Tear down samplelayers.Layer12 in 0.000 seconds.
+      Tear down samplelayers.Layer1 in 0.000 seconds.
+    Total: 213 tests, 0 failures, 0 errors in N.NNN seconds.
+    <BLANKLINE>
+    Test-modules with import problems:
+      sample2.sampletests_i
+      sample2.sample21.sampletests_i
+      sample2.sample22.sampletests_i
+      sample2.sample23.sampletests_i
+    True
+
+
+Unicode Errors
+--------------
+
+There was a bug that prevented decent error reporting when one result
+contained unicode and another did not:
+
+    >>> sys.argv = 'test --tests-pattern ^unicode$ -u'.split()
+    >>> testrunner.run(defaults) # doctest: +REPORT_NDIFF
+    Running unit tests:
+    <BLANKLINE>
+    <BLANKLINE>
+    Failure testrunner-ex/unicode.txt
+    Failed doctest test for unicode.txt
+     testrunner-ex/unicode.txt", line 0
+    <BLANKLINE>
+    ----------------------------------------------------------------------
+    File testrunner-ex/unicode.txt", Line NNN, in unicode.txt
+    Failed example:
+        print get_unicode()
+    Expected:
+        oink
+    Got:
+        foo — bar
+    ----------------------------------------------------------------------
+    File testrunner-ex/unicode.txt", Line NNN, in unicode.txt
+    Failed example:
+        'xyz'
+    Expected:
+        123
+    Got:
+        'xyz'
+    <BLANKLINE>
+      Ran 3 tests with 1 failures and 0 errors in N.NNN seconds.
+    True
+
+ 
+Reporting Errors to Calling Processes
+-------------------------------------
+
+The testrunner can return an error status, indicating that the tests
+failed.  This can be useful for an invoking process that wants to
+monitor the result of a test run.
+
+To use, specify the argument "--exit-with-status".
+
+    >>> sys.argv = (
+    ...     'test --exit-with-status --tests-pattern ^sampletests_1$'.split())
+    >>> try:
+    ...     testrunner.run(defaults)
+    ... except SystemExit, e:
+    ...     print 'exited with code', e.code
+    ... else:
+    ...     print 'sys.exit was not called'
+    ... # doctest: +ELLIPSIS
+    Running unit tests:
+    ...
+      Ran 1 tests with 1 failures and 0 errors in 0.002 seconds.
+    exited with code 1
+
+If the tests pass, sys.exit is not called:
+
+    >>> sys.argv = (
+    ...     'test --exit-with-status --tests-pattern ^sampletests$'.split())
+    >>> try:
+    ...     testrunner.run(defaults)
+    ... except SystemExit, e2:
+    ...     print 'oops'
+    ... else:
+    ...     print 'sys.exit was not called'
+    ... # doctest: +ELLIPSIS
+    Running unit tests:
+    ...
+    Total: 364 tests, 0 failures, 0 errors in N.NNN seconds.
+    ...
+    sys.exit was not called
+
+And remove the temporary directory:
+
+    >>> shutil.rmtree(tmpdir)

Copied: zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-ex (from rev 86204, zope.testing/trunk/src/zope/testing/testrunner-ex)

Copied: zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-ex-pp-lib (from rev 86204, zope.testing/trunk/src/zope/testing/testrunner-ex-pp-lib)

Copied: zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-ex-pp-products (from rev 86204, zope.testing/trunk/src/zope/testing/testrunner-ex-pp-products)

Copied: zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-gc.txt (from rev 86204, zope.testing/trunk/src/zope/testing/testrunner-gc.txt)
===================================================================
--- zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-gc.txt	                        (rev 0)
+++ zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-gc.txt	2008-05-03 13:25:02 UTC (rev 86207)
@@ -0,0 +1,82 @@
+Garbage Collection Control
+==========================
+
+When you have problems that seem to be caused by memory-management
+errors, it can be helpful to adjust Python's cyclic garbage collector
+or to get garbage collection statistics.  The --gc option can be used
+for this purpose.
+
+If you think you are getting a test failure due to a garbage
+collection problem, you can try disabling garbage collection by
+using the --gc option with a value of zero.
+
+    >>> import os.path, sys
+    >>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
+    >>> defaults = ['--path', directory_with_tests]
+
+    >>> from zope.testing import testrunner
+    
+    >>> sys.argv = 'test --tests-pattern ^gc0$ --gc 0 -vv'.split()
+    >>> _ = testrunner.run(defaults)
+    Running tests at level 1
+    Cyclic garbage collection is disabled.
+    Running unit tests:
+      Running:
+        make_sure_gc_is_disabled (gc0)
+      Ran 1 tests with 0 failures and 0 errors in 0.001 seconds.
+
+Alternatively, if you think you are having a garbage collection
+related problem, you can cause garbage collection to happen more often
+by providing a low threshold:
+    
+    >>> sys.argv = 'test --tests-pattern ^gc1$ --gc 1 -vv'.split()
+    >>> _ = testrunner.run(defaults)
+    Running tests at level 1
+    Cyclic garbage collection threshold set to: (1,)
+    Running unit tests:
+      Running:
+        make_sure_gc_threshold_is_one (gc1)
+      Ran 1 tests with 0 failures and 0 errors in 0.001 seconds.
+
+You can specify up to 3 --gc options to set each of the 3 gc threshold
+values:
+
+    
+    >>> sys.argv = ('test --tests-pattern ^gcset$ --gc 701 --gc 11 --gc 9 -vv'
+    ...             .split())
+    >>> _ = testrunner.run(defaults)
+    Running tests at level 1
+    Cyclic garbage collection threshold set to: (701, 11, 9)
+    Running unit tests:
+      Running:
+        make_sure_gc_threshold_is_701_11_9 (gcset)
+      Ran 1 tests with 0 failures and 0 errors in 0.001 seconds.
+
+Garbage Collection Statistics
+-----------------------------
+
+You can enable gc debugging statistics using the --gc-options (-G)
+option.  You should provide names of one or more of the flags
+described in the library documentation for the gc module.
+
+The output statistics are written to standard error.  
+
+    >>> from StringIO import StringIO
+    >>> err = StringIO()
+    >>> stderr = sys.stderr
+    >>> sys.stderr = err
+    >>> sys.argv = ('test --tests-pattern ^gcstats$ -G DEBUG_STATS'
+    ...             ' -G DEBUG_COLLECTABLE -vv'
+    ...             .split())
+    >>> _ = testrunner.run(defaults)
+    Running tests at level 1
+    Running unit tests:
+      Running:
+        generate_some_gc_statistics (gcstats)
+      Ran 1 tests with 0 failures and 0 errors in 0.006 seconds.
+
+    >>> sys.stderr = stderr
+
+    >>> print err.getvalue()        # doctest: +ELLIPSIS
+    gc: collecting generation ...
+        

Copied: zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-knit.txt (from rev 86204, zope.testing/trunk/src/zope/testing/testrunner-knit.txt)
===================================================================
--- zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-knit.txt	                        (rev 0)
+++ zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-knit.txt	2008-05-03 13:25:02 UTC (rev 86207)
@@ -0,0 +1,105 @@
+Knitting in extra package directories
+=====================================
+
+Python packages have __path__ variables that can be manipulated to add
+extra directories containing software used in the packages.  The
+testrunner needs to be given extra information about this sort of
+situation.
+
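+For instance, a package can knit an extra directory into itself by
+extending its __path__ from its __init__.py.  A rough sketch (the
+directory name here is made up; the example packages below may do this
+differently):
+
+    # products/__init__.py -- sketch of knitting in another directory
+    import os
+    __path__.append(
+        os.path.join(os.path.dirname(__file__), 'extra-products'))
+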
+Let's look at an example.  The testrunner-ex-pp-lib directory is a
+directory that we want to add to the Python path, but that we don't
+want to search for tests.  It has a sample4 package and a products
+subpackage.  The products subpackage adds the testrunner-ex-pp-products
+directory to its __path__.  We want to run tests from the
+testrunner-ex-pp-products directory.  When we import these tests, we
+need to import them from the sample4.products package.  We can't use
+the --path option to name testrunner-ex-pp-products: it isn't enough to
+add the containing directory to the test path, because then we wouldn't
+be able to determine the package name properly.  We might be able to
+use the --package option to run the tests from the sample4.products
+package, but we also want to run tests in testrunner-ex that aren't in
+this package.
+
+We can use the --package-path option in this case.  The --package-path
+option is like the --test-path option in that it defines a path to be
+searched for tests without affecting the Python path.  It differs in
+that it supplies a package name that is used as a prefix when importing
+any modules found.  The --package-path option takes *two* arguments: a
+file path and a package name, in that order, as in the example below.
+
+    >>> import os.path, sys
+    >>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
+    >>> sys.path.append(os.path.join(this_directory, 'testrunner-ex-pp-lib'))
+    >>> defaults = [
+    ...     '--path', directory_with_tests,
+    ...     '--tests-pattern', '^sampletestsf?$',
+    ...     '--package-path',
+    ...     os.path.join(this_directory, 'testrunner-ex-pp-products'),
+    ...     'sample4.products',
+    ...     ]
+
+    >>> from zope.testing import testrunner
+    
+    >>> sys.argv = 'test --layer Layer111 -vv'.split()
+    >>> _ = testrunner.run(defaults)
+    Running tests at level 1
+    Running samplelayers.Layer111 tests:
+      Set up samplelayers.Layerx in 0.000 seconds.
+      Set up samplelayers.Layer1 in 0.000 seconds.
+      Set up samplelayers.Layer11 in 0.000 seconds.
+      Set up samplelayers.Layer111 in 0.000 seconds.
+      Running:
+        test_x1 (sample1.sampletests.test111.TestA)
+        test_y0 (sample1.sampletests.test111.TestA)
+        ...
+        test_y0 (sampletests.test111)
+        test_z1 (sampletests.test111)
+        testrunner-ex/sampletests/../sampletestsl.txt
+        test_extra_test_in_products (sample4.products.sampletests.Test)
+        test_another_test_in_products (sample4.products.more.sampletests.Test)
+      Ran 36 tests with 0 failures and 0 errors in 0.008 seconds.
+    Tearing down left over layers:
+      Tear down samplelayers.Layer111 in 0.000 seconds.
+      Tear down samplelayers.Layerx in 0.000 seconds.
+      Tear down samplelayers.Layer11 in 0.000 seconds.
+      Tear down samplelayers.Layer1 in 0.000 seconds.
+
+In the example, the last test, test_extra_test_in_products, came from
+the products directory.  As usual, we can select the knit-in packages
+or individual packages within knit-in packages:
+
+    >>> sys.argv = 'test --package sample4.products -vv'.split()
+    >>> _ = testrunner.run(defaults)
+    Running tests at level 1
+    Running samplelayers.Layer111 tests:
+      Set up samplelayers.Layerx in 0.000 seconds.
+      Set up samplelayers.Layer1 in 0.000 seconds.
+      Set up samplelayers.Layer11 in 0.000 seconds.
+      Set up samplelayers.Layer111 in 0.000 seconds.
+      Running:
+        test_extra_test_in_products (sample4.products.sampletests.Test)
+        test_another_test_in_products (sample4.products.more.sampletests.Test)
+      Ran 2 tests with 0 failures and 0 errors in 0.000 seconds.
+    Tearing down left over layers:
+      Tear down samplelayers.Layer111 in 0.000 seconds.
+      Tear down samplelayers.Layerx in 0.000 seconds.
+      Tear down samplelayers.Layer11 in 0.000 seconds.
+      Tear down samplelayers.Layer1 in 0.000 seconds.
+
+    >>> sys.argv = 'test --package sample4.products.more -vv'.split()
+    >>> _ = testrunner.run(defaults)
+    Running tests at level 1
+    Running samplelayers.Layer111 tests:
+      Set up samplelayers.Layerx in 0.000 seconds.
+      Set up samplelayers.Layer1 in 0.000 seconds.
+      Set up samplelayers.Layer11 in 0.000 seconds.
+      Set up samplelayers.Layer111 in 0.000 seconds.
+      Running:
+        test_another_test_in_products (sample4.products.more.sampletests.Test)
+      Ran 1 tests with 0 failures and 0 errors in 0.000 seconds.
+    Tearing down left over layers:
+      Tear down samplelayers.Layer111 in 0.000 seconds.
+      Tear down samplelayers.Layerx in 0.000 seconds.
+      Tear down samplelayers.Layer11 in 0.000 seconds.
+      Tear down samplelayers.Layer1 in 0.000 seconds.
+

Copied: zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-layers-api.txt (from rev 86204, zope.testing/trunk/src/zope/testing/testrunner-layers-api.txt)
===================================================================
--- zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-layers-api.txt	                        (rev 0)
+++ zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-layers-api.txt	2008-05-03 13:25:02 UTC (rev 86207)
@@ -0,0 +1,267 @@
+Layers
+======
+
+A Layer is an object providing setUp and tearDown methods used to set
+up and tear down the environment the layer provides.  It may also
+provide testSetUp and testTearDown methods used to reset that
+environment between individual tests.
+
+Layers are generally implemented as classes using class methods.
+
+>>> class BaseLayer:
+...     def setUp(cls):
+...         log('BaseLayer.setUp')
+...     setUp = classmethod(setUp)
+...
+...     def tearDown(cls):
+...         log('BaseLayer.tearDown')
+...     tearDown = classmethod(tearDown)
+...
+...     def testSetUp(cls):
+...         log('BaseLayer.testSetUp')
+...     testSetUp = classmethod(testSetUp)
+...
+...     def testTearDown(cls):
+...         log('BaseLayer.testTearDown')
+...     testTearDown = classmethod(testTearDown)
+...
+
+Layers can extend other layers. Note that they do not explicitly
+invoke the setup and teardown methods of other layers - the test runner
+does this for us in order to minimize the number of invocations.
+
+>>> class TopLayer(BaseLayer):
+...     def setUp(cls):
+...         log('TopLayer.setUp')
+...     setUp = classmethod(setUp)
+...
+...     def tearDown(cls):
+...         log('TopLayer.tearDown')
+...     tearDown = classmethod(tearDown)
+...
+...     def testSetUp(cls):
+...         log('TopLayer.testSetUp')
+...     testSetUp = classmethod(testSetUp)
+...
+...     def testTearDown(cls):
+...         log('TopLayer.testTearDown')
+...     testTearDown = classmethod(testTearDown)
+...
+
+Tests or test suites specify what layer they need by storing a reference
+in the 'layer' attribute.
+
+>>> import unittest
+>>> class TestSpecifyingBaseLayer(unittest.TestCase):
+...     'This TestCase explicitly specifies its layer'
+...     layer = BaseLayer
+...     name = 'TestSpecifyingBaseLayer' # For testing only
+...
+...     def setUp(self):
+...         log('TestSpecifyingBaseLayer.setUp')
+...
+...     def tearDown(self):
+...         log('TestSpecifyingBaseLayer.tearDown')
+...
+...     def test1(self):
+...         log('TestSpecifyingBaseLayer.test1')
+...
+...     def test2(self):
+...         log('TestSpecifyingBaseLayer.test2')
+...
+>>> class TestSpecifyingNoLayer(unittest.TestCase):
+...     'This TestCase specifies no layer'
+...     name = 'TestSpecifyingNoLayer' # For testing only
+...     def setUp(self):
+...         log('TestSpecifyingNoLayer.setUp')
+...
+...     def tearDown(self):
+...         log('TestSpecifyingNoLayer.tearDown')
+...
+...     def test1(self):
+...         log('TestSpecifyingNoLayer.test')
+...
+...     def test2(self):
+...         log('TestSpecifyingNoLayer.test')
+...
+
+Create a TestSuite containing two test suites, one for each of
+TestSpecifyingBaseLayer and TestSpecifyingNoLayer.
+
+>>> umbrella_suite = unittest.TestSuite()
+>>> umbrella_suite.addTest(unittest.makeSuite(TestSpecifyingBaseLayer))
+>>> no_layer_suite = unittest.makeSuite(TestSpecifyingNoLayer)
+>>> umbrella_suite.addTest(no_layer_suite)
+
+Before we can run the tests, we need to set up some helpers.
+
+>>> from zope.testing import testrunner
+>>> from zope.testing.loggingsupport import InstalledHandler
+>>> import logging
+>>> log_handler = InstalledHandler('zope.testing.tests')
+>>> def log(msg):
+...     logging.getLogger('zope.testing.tests').info(msg)
+>>> def fresh_options():
+...     options = testrunner.get_options(['--test-filter', '.*'])
+...     options.resume_layer = None
+...     options.resume_number = 0
+...     return options
+
+Now we run the tests.  Note that BaseLayer was not set up for the
+TestSpecifyingNoLayer tests; it was only set up and torn down around
+the TestSpecifyingBaseLayer tests.
+
+>>> succeeded = testrunner.run_with_options(fresh_options(), [umbrella_suite])
+Running unit tests:
+    Ran 2 tests with 0 failures and 0 errors in N.NNN seconds.
+    Set up BaseLayer in N.NNN seconds.
+    Ran 2 tests with 0 failures and 0 errors in N.NNN seconds.
+Tearing down left over layers:
+    Tear down BaseLayer in N.NNN seconds.
+Total: 4 tests, 0 failures, 0 errors in N.NNN seconds.
+
+Now let's specify a layer on the suite containing TestSpecifyingNoLayer
+and run the tests again.  This demonstrates the other method of
+specifying a layer, and it is generally how you specify what layer
+doctests need (a sketch of that pattern follows the example below).
+
+>>> no_layer_suite.layer = BaseLayer
+>>> succeeded = testrunner.run_with_options(fresh_options(), [umbrella_suite])
+  Set up BaseLayer in N.NNN seconds.
+  Ran 4 tests with 0 failures and 0 errors in N.NNN seconds.
+Tearing down left over layers:
+  Tear down BaseLayer in N.NNN seconds.
+
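+In a real test module, this is typically done from a test_suite
+function, along these lines (a sketch; the file name is made up, and
+BaseLayer stands in for whatever layer the doctests need):
+
+    import doctest
+
+    def test_suite():
+        suite = doctest.DocFileSuite('example.txt')
+        suite.layer = BaseLayer
+        return suite
+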
+Clear our logged output, as we want to inspect it shortly.
+
+>>> log_handler.clear()
+
+Now let's also specify a layer in the TestSpecifyingNoLayer class and
+rerun the tests.  This demonstrates that the most specific layer is
+used.  It also shows the behavior of nested layers: because TopLayer
+extends BaseLayer, both the BaseLayer and TopLayer environments are set
+up when the TestSpecifyingNoLayer tests are run.
+
+>>> TestSpecifyingNoLayer.layer = TopLayer
+>>> succeeded = testrunner.run_with_options(fresh_options(), [umbrella_suite])
+  Set up BaseLayer in N.NNN seconds.
+  Ran 2 tests with 0 failures and 0 errors in N.NNN seconds.
+  Set up TopLayer in N.NNN seconds.
+  Ran 2 tests with 0 failures and 0 errors in N.NNN seconds.
+Tearing down left over layers:
+  Tear down TopLayer in N.NNN seconds.
+  Tear down BaseLayer in N.NNN seconds.
+Total: 4 tests, 0 failures, 0 errors in N.NNN seconds.
+
+If we inspect our trace of what methods got called in what order, we can
+see that the layer setup and teardown methods only got called once. We can
+also see that the layer's test setup and teardown methods got called for
+each test using that layer in the right order.
+
+>>> def report():
+...     for record in log_handler.records:
+...         print record.getMessage()
+>>> report()
+BaseLayer.setUp
+BaseLayer.testSetUp
+TestSpecifyingBaseLayer.setUp
+TestSpecifyingBaseLayer.test1
+TestSpecifyingBaseLayer.tearDown
+BaseLayer.testTearDown
+BaseLayer.testSetUp
+TestSpecifyingBaseLayer.setUp
+TestSpecifyingBaseLayer.test2
+TestSpecifyingBaseLayer.tearDown
+BaseLayer.testTearDown
+TopLayer.setUp
+BaseLayer.testSetUp
+TopLayer.testSetUp
+TestSpecifyingNoLayer.setUp
+TestSpecifyingNoLayer.test
+TestSpecifyingNoLayer.tearDown
+TopLayer.testTearDown
+BaseLayer.testTearDown
+BaseLayer.testSetUp
+TopLayer.testSetUp
+TestSpecifyingNoLayer.setUp
+TestSpecifyingNoLayer.test
+TestSpecifyingNoLayer.tearDown
+TopLayer.testTearDown
+BaseLayer.testTearDown
+TopLayer.tearDown
+BaseLayer.tearDown
+
+Now let's stack a few more layers to ensure that our setUp and tearDown
+methods are called in the correct order.
+
+>>> from zope.testing.testrunner import name_from_layer
+>>> class A(object):
+...     def setUp(cls):
+...         log('%s.setUp' % name_from_layer(cls))
+...     setUp = classmethod(setUp)
+...
+...     def tearDown(cls):
+...         log('%s.tearDown' % name_from_layer(cls))
+...     tearDown = classmethod(tearDown)
+...
+...     def testSetUp(cls):
+...         log('%s.testSetUp' % name_from_layer(cls))
+...     testSetUp = classmethod(testSetUp)
+...
+...     def testTearDown(cls):
+...         log('%s.testTearDown' % name_from_layer(cls))
+...     testTearDown = classmethod(testTearDown)
+...         
+>>> class B(A): pass
+>>> class C(B): pass
+>>> class D(A): pass
+>>> class E(D): pass
+>>> class F(C,E): pass
+
+>>> class DeepTest(unittest.TestCase):
+...     layer = F
+...     def test(self):
+...         pass
+>>> suite = unittest.makeSuite(DeepTest)
+>>> log_handler.clear()
+>>> succeeded = testrunner.run_with_options(fresh_options(), [suite])
+  Set up A in 0.000 seconds.
+  Set up B in 0.000 seconds.
+  Set up C in 0.000 seconds.
+  Set up D in 0.000 seconds.
+  Set up E in 0.000 seconds.
+  Set up F in 0.000 seconds.
+  Ran 1 tests with 0 failures and 0 errors in 0.003 seconds.
+Tearing down left over layers:
+  Tear down F in 0.000 seconds.
+  Tear down E in 0.000 seconds.
+  Tear down D in 0.000 seconds.
+  Tear down C in 0.000 seconds.
+  Tear down B in 0.000 seconds.
+  Tear down A in 0.000 seconds.
+
+>>> report()
+A.setUp
+B.setUp
+C.setUp
+D.setUp
+E.setUp
+F.setUp
+A.testSetUp
+B.testSetUp
+C.testSetUp
+D.testSetUp
+E.testSetUp
+F.testSetUp
+F.testTearDown
+E.testTearDown
+D.testTearDown
+C.testTearDown
+B.testTearDown
+A.testTearDown
+F.tearDown
+E.tearDown
+D.tearDown
+C.tearDown
+B.tearDown
+A.tearDown
+

Copied: zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-layers-ntd.txt (from rev 86204, zope.testing/trunk/src/zope/testing/testrunner-layers-ntd.txt)
===================================================================
--- zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-layers-ntd.txt	                        (rev 0)
+++ zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-layers-ntd.txt	2008-05-03 13:25:02 UTC (rev 86207)
@@ -0,0 +1,240 @@
+Layers that can't be torn down
+==============================
+
+A layer can have a tearDown method that raises NotImplementedError.
+If this is the case and there are no remaining tests to run, the test
+runner will just note that the tear down couldn't be done:
+
+    >>> import os.path, sys
+    >>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
+    >>> from zope.testing import testrunner
+    >>> defaults = [
+    ...     '--path', directory_with_tests,
+    ...     '--tests-pattern', '^sampletestsf?$',
+    ...     ]
+
+    >>> sys.argv = 'test -ssample2 --tests-pattern sampletests_ntd$'.split()
+    >>> testrunner.run(defaults)
+    Running sample2.sampletests_ntd.Layer tests:
+      Set up sample2.sampletests_ntd.Layer in 0.000 seconds.
+      Ran 1 tests with 0 failures and 0 errors in 0.000 seconds.
+    Tearing down left over layers:
+      Tear down sample2.sampletests_ntd.Layer ... not supported
+    False
+
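+Such a layer looks roughly like this (a sketch; the real
+sample2.sampletests_ntd.Layer may differ in detail):
+
+    class Layer:
+        def setUp(cls):
+            pass
+        setUp = classmethod(setUp)
+
+        def tearDown(cls):
+            raise NotImplementedError
+        tearDown = classmethod(tearDown)
+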
+If the tearDown method raises NotImplementedError and there are remaining
+layers to run, the test runner will restart itself as a new process,
+resuming tests where it left off:
+
+    >>> sys.argv = [testrunner_script, '--tests-pattern', 'sampletests_ntd$']
+    >>> testrunner.run(defaults)
+    Running sample1.sampletests_ntd.Layer tests:
+      Set up sample1.sampletests_ntd.Layer in N.NNN seconds.
+      Ran 1 tests with 0 failures and 0 errors in N.NNN seconds.
+    Running sample2.sampletests_ntd.Layer tests:
+      Tear down sample1.sampletests_ntd.Layer ... not supported
+      Running in a subprocess.
+      Set up sample2.sampletests_ntd.Layer in N.NNN seconds.
+      Ran 1 tests with 0 failures and 0 errors in N.NNN seconds.
+      Tear down sample2.sampletests_ntd.Layer ... not supported
+    Running sample3.sampletests_ntd.Layer tests:
+      Running in a subprocess.
+      Set up sample3.sampletests_ntd.Layer in N.NNN seconds.
+    <BLANKLINE>
+    <BLANKLINE>
+    Error in test test_error1 (sample3.sampletests_ntd.TestSomething)
+    Traceback (most recent call last):
+     testrunner-ex/sample3/sampletests_ntd.py", Line NNN, in test_error1
+        raise TypeError("Can we see errors")
+    TypeError: Can we see errors
+    <BLANKLINE>
+    <BLANKLINE>
+    <BLANKLINE>
+    Error in test test_error2 (sample3.sampletests_ntd.TestSomething)
+    Traceback (most recent call last):
+     testrunner-ex/sample3/sampletests_ntd.py", Line NNN, in test_error2
+        raise TypeError("I hope so")
+    TypeError: I hope so
+    <BLANKLINE>
+    <BLANKLINE>
+    <BLANKLINE>
+    Failure in test test_fail1 (sample3.sampletests_ntd.TestSomething)
+    Traceback (most recent call last):
+     testrunner-ex/sample3/sampletests_ntd.py", Line NNN, in test_fail1
+        self.assertEqual(1, 2)
+    AssertionError: 1 != 2
+    <BLANKLINE>
+    <BLANKLINE>
+    <BLANKLINE>
+    Failure in test test_fail2 (sample3.sampletests_ntd.TestSomething)
+    Traceback (most recent call last):
+     testrunner-ex/sample3/sampletests_ntd.py", Line NNN, in test_fail2
+        self.assertEqual(1, 3)
+    AssertionError: 1 != 3
+    <BLANKLINE>
+      Ran 6 tests with 2 failures and 2 errors in N.NNN seconds.
+      Tear down sample3.sampletests_ntd.Layer ... not supported
+    Total: 8 tests, 2 failures, 2 errors in N.NNN seconds.
+    True
+
+In the example above, some of the tests run in a subprocess had errors
+and failures.  They were displayed as usual, and the failure and error
+statistics were updated accordingly.
+
+Note that debugging doesn't work when running tests in a subprocess:
+
+    >>> sys.argv = [testrunner_script, '--tests-pattern', 'sampletests_ntd$',
+    ...             '-D', ]
+    >>> testrunner.run(defaults)
+    Running sample1.sampletests_ntd.Layer tests:
+      Set up sample1.sampletests_ntd.Layer in N.NNN seconds.
+      Ran 1 tests with 0 failures and 0 errors in N.NNN seconds.
+    Running sample2.sampletests_ntd.Layer tests:
+      Tear down sample1.sampletests_ntd.Layer ... not supported
+      Running in a subprocess.
+      Set up sample2.sampletests_ntd.Layer in N.NNN seconds.
+      Ran 1 tests with 0 failures and 0 errors in N.NNN seconds.
+      Tear down sample2.sampletests_ntd.Layer ... not supported
+    Running sample3.sampletests_ntd.Layer tests:
+      Running in a subprocess.
+      Set up sample3.sampletests_ntd.Layer in N.NNN seconds.
+    <BLANKLINE>
+    <BLANKLINE>
+    Error in test test_error1 (sample3.sampletests_ntd.TestSomething)
+    Traceback (most recent call last):
+     testrunner-ex/sample3/sampletests_ntd.py", Line NNN, in test_error1
+        raise TypeError("Can we see errors")
+    TypeError: Can we see errors
+    <BLANKLINE>
+    <BLANKLINE>
+    **********************************************************************
+    Can't post-mortem debug when running a layer as a subprocess!
+    **********************************************************************
+    <BLANKLINE>
+    <BLANKLINE>
+    <BLANKLINE>
+    Error in test test_error2 (sample3.sampletests_ntd.TestSomething)
+    Traceback (most recent call last):
+     testrunner-ex/sample3/sampletests_ntd.py", Line NNN, in test_error2
+        raise TypeError("I hope so")
+    TypeError: I hope so
+    <BLANKLINE>
+    <BLANKLINE>
+    **********************************************************************
+    Can't post-mortem debug when running a layer as a subprocess!
+    **********************************************************************
+    <BLANKLINE>
+    <BLANKLINE>
+    <BLANKLINE>
+    Error in test test_fail1 (sample3.sampletests_ntd.TestSomething)
+    Traceback (most recent call last):
+     testrunner-ex/sample3/sampletests_ntd.py", Line NNN, in test_fail1
+        self.assertEqual(1, 2)
+    AssertionError: 1 != 2
+    <BLANKLINE>
+    <BLANKLINE>
+    **********************************************************************
+    Can't post-mortem debug when running a layer as a subprocess!
+    **********************************************************************
+    <BLANKLINE>
+    <BLANKLINE>
+    <BLANKLINE>
+    Error in test test_fail2 (sample3.sampletests_ntd.TestSomething)
+    Traceback (most recent call last):
+     testrunner-ex/sample3/sampletests_ntd.py", Line NNN, in test_fail2
+        self.assertEqual(1, 3)
+    AssertionError: 1 != 3
+    <BLANKLINE>
+    <BLANKLINE>
+    **********************************************************************
+    Can't post-mortem debug when running a layer as a subprocess!
+    **********************************************************************
+    <BLANKLINE>
+      Ran 6 tests with 0 failures and 4 errors in N.NNN seconds.
+      Tear down sample3.sampletests_ntd.Layer ... not supported
+    Total: 8 tests, 0 failures, 4 errors in N.NNN seconds.
+    True
+
+Similarly, pdb.set_trace doesn't work when running tests in a layer
+that is run as a subprocess:
+
+    >>> sys.argv = [testrunner_script, '--tests-pattern', 'sampletests_ntds']
+    >>> testrunner.run(defaults)
+    Running sample1.sampletests_ntds.Layer tests:
+      Set up sample1.sampletests_ntds.Layer in 0.000 seconds.
+      Ran 1 tests with 0 failures and 0 errors in 0.000 seconds.
+    Running sample2.sampletests_ntds.Layer tests:
+      Tear down sample1.sampletests_ntds.Layer ... not supported
+      Running in a subprocess.
+      Set up sample2.sampletests_ntds.Layer in 0.000 seconds.
+    --Return--
+    > testrunner-ex/sample2/sampletests_ntds.py(37)test_something()->None
+    -> import pdb; pdb.set_trace()
+    (Pdb) c
+    <BLANKLINE>
+    **********************************************************************
+    Can't use pdb.set_trace when running a layer as a subprocess!
+    **********************************************************************
+    <BLANKLINE>
+    --Return--
+    > testrunner-ex/sample2/sampletests_ntds.py(40)test_something2()->None
+    -> import pdb; pdb.set_trace()
+    (Pdb) c
+    <BLANKLINE>
+    **********************************************************************
+    Can't use pdb.set_trace when running a layer as a subprocess!
+    **********************************************************************
+    <BLANKLINE>
+    --Return--
+    > testrunner-ex/sample2/sampletests_ntds.py(43)test_something3()->None
+    -> import pdb; pdb.set_trace()
+    (Pdb) c
+    <BLANKLINE>
+    **********************************************************************
+    Can't use pdb.set_trace when running a layer as a subprocess!
+    **********************************************************************
+    <BLANKLINE>
+    --Return--
+    > testrunner-ex/sample2/sampletests_ntds.py(46)test_something4()->None
+    -> import pdb; pdb.set_trace()
+    (Pdb) c
+    <BLANKLINE>
+    **********************************************************************
+    Can't use pdb.set_trace when running a layer as a subprocess!
+    **********************************************************************
+    <BLANKLINE>
+    --Return--
+    > testrunner-ex/sample2/sampletests_ntds.py(52)f()->None
+    -> import pdb; pdb.set_trace()
+    (Pdb) c
+    <BLANKLINE>
+    **********************************************************************
+    Can't use pdb.set_trace when running a layer as a subprocess!
+    **********************************************************************
+    <BLANKLINE>
+    --Return--
+    > doctest.py(351)set_trace()->None
+    -> pdb.Pdb.set_trace(self)
+    (Pdb) c
+    <BLANKLINE>
+    **********************************************************************
+    Can't use pdb.set_trace when running a layer as a subprocess!
+    **********************************************************************
+    <BLANKLINE>
+    --Return--
+    > doctest.py(351)set_trace()->None
+    -> pdb.Pdb.set_trace(self)
+    (Pdb) c
+    <BLANKLINE>
+    **********************************************************************
+    Can't use pdb.set_trace when running a layer as a subprocess!
+    **********************************************************************
+    <BLANKLINE>
+      Ran 7 tests with 0 failures and 0 errors in 0.008 seconds.
+      Tear down sample2.sampletests_ntds.Layer ... not supported
+    Total: 8 tests, 0 failures, 0 errors in N.NNN seconds.
+    False
+
+If you want to use pdb from a test in a layer that is run as a
+subprocess, then rerun the test runner selecting *just* that layer so
+that it's not run as a subprocess.
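+
+For example, something along these lines (a sketch, not run here; the
+layer name is taken from the output above):
+
+    sys.argv = [testrunner_script, '--tests-pattern', 'sampletests_ntds',
+                '--layer', 'sample2.sampletests_ntds.Layer']
+    testrunner.run(defaults)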

Copied: zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-layers.txt (from rev 86204, zope.testing/trunk/src/zope/testing/testrunner-layers.txt)
===================================================================
--- zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-layers.txt	                        (rev 0)
+++ zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-layers.txt	2008-05-03 13:25:02 UTC (rev 86207)
@@ -0,0 +1,117 @@
+Layer Selection
+===============
+
+We can select which layers to run using the --layer option:
+
+    >>> import os.path, sys
+    >>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
+    >>> defaults = [
+    ...     '--path', directory_with_tests,
+    ...     '--tests-pattern', '^sampletestsf?$',
+    ...     ]
+
+    >>> sys.argv = 'test --layer 112 --layer unit'.split()
+    >>> from zope.testing import testrunner
+    >>> testrunner.run(defaults)
+    Running unit tests:
+      Ran 192 tests with 0 failures and 0 errors in N.NNN seconds.
+    Running samplelayers.Layer112 tests:
+      Set up samplelayers.Layerx in N.NNN seconds.
+      Set up samplelayers.Layer1 in N.NNN seconds.
+      Set up samplelayers.Layer11 in N.NNN seconds.
+      Set up samplelayers.Layer112 in N.NNN seconds.
+      Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
+    Tearing down left over layers:
+      Tear down samplelayers.Layer112 in N.NNN seconds.
+      Tear down samplelayers.Layerx in N.NNN seconds.
+      Tear down samplelayers.Layer11 in N.NNN seconds.
+      Tear down samplelayers.Layer1 in N.NNN seconds.
+    Total: 226 tests, 0 failures, 0 errors in N.NNN seconds.
+    False
+
+We can also specify that we want to run only the unit tests:
+
+    >>> sys.argv = 'test -u'.split()
+    >>> testrunner.run(defaults)
+    Running unit tests:
+      Ran 192 tests with 0 failures and 0 errors in 0.033 seconds.
+    False
+
+Or that we want to run all of the tests except for the unit tests:
+
+    >>> sys.argv = 'test -f'.split()
+    >>> testrunner.run(defaults)
+    Running samplelayers.Layer1 tests:
+      Set up samplelayers.Layer1 in N.NNN seconds.
+      Ran 9 tests with 0 failures and 0 errors in N.NNN seconds.
+    Running samplelayers.Layer11 tests:
+      Set up samplelayers.Layer11 in N.NNN seconds.
+      Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
+    Running samplelayers.Layer111 tests:
+      Set up samplelayers.Layerx in N.NNN seconds.
+      Set up samplelayers.Layer111 in N.NNN seconds.
+      Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
+    Running samplelayers.Layer112 tests:
+      Tear down samplelayers.Layer111 in N.NNN seconds.
+      Set up samplelayers.Layer112 in N.NNN seconds.
+      Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
+    Running samplelayers.Layer12 tests:
+      Tear down samplelayers.Layer112 in N.NNN seconds.
+      Tear down samplelayers.Layerx in N.NNN seconds.
+      Tear down samplelayers.Layer11 in N.NNN seconds.
+      Set up samplelayers.Layer12 in N.NNN seconds.
+      Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
+    Running samplelayers.Layer121 tests:
+      Set up samplelayers.Layer121 in N.NNN seconds.
+      Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
+    Running samplelayers.Layer122 tests:
+      Tear down samplelayers.Layer121 in N.NNN seconds.
+      Set up samplelayers.Layer122 in N.NNN seconds.
+      Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
+    Tearing down left over layers:
+      Tear down samplelayers.Layer122 in N.NNN seconds.
+      Tear down samplelayers.Layer12 in N.NNN seconds.
+      Tear down samplelayers.Layer1 in N.NNN seconds.
+    Total: 213 tests, 0 failures, 0 errors in N.NNN seconds.
+    False
+
+Or we can explicitly say that we want both unit and non-unit tests.
+
+    >>> sys.argv = 'test -uf'.split()
+    >>> testrunner.run(defaults)
+    Running unit tests:
+      Ran 192 tests with 0 failures and 0 errors in 0.033 seconds.
+    Running samplelayers.Layer1 tests:
+      Set up samplelayers.Layer1 in N.NNN seconds.
+      Ran 9 tests with 0 failures and 0 errors in N.NNN seconds.
+    Running samplelayers.Layer11 tests:
+      Set up samplelayers.Layer11 in N.NNN seconds.
+      Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
+    Running samplelayers.Layer111 tests:
+      Set up samplelayers.Layerx in N.NNN seconds.
+      Set up samplelayers.Layer111 in N.NNN seconds.
+      Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
+    Running samplelayers.Layer112 tests:
+      Tear down samplelayers.Layer111 in N.NNN seconds.
+      Set up samplelayers.Layer112 in N.NNN seconds.
+      Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
+    Running samplelayers.Layer12 tests:
+      Tear down samplelayers.Layer112 in N.NNN seconds.
+      Tear down samplelayers.Layerx in N.NNN seconds.
+      Tear down samplelayers.Layer11 in N.NNN seconds.
+      Set up samplelayers.Layer12 in N.NNN seconds.
+      Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
+    Running samplelayers.Layer121 tests:
+      Set up samplelayers.Layer121 in N.NNN seconds.
+      Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
+    Running samplelayers.Layer122 tests:
+      Tear down samplelayers.Layer121 in N.NNN seconds.
+      Set up samplelayers.Layer122 in N.NNN seconds.
+      Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
+    Tearing down left over layers:
+      Tear down samplelayers.Layer122 in N.NNN seconds.
+      Tear down samplelayers.Layer12 in N.NNN seconds.
+      Tear down samplelayers.Layer1 in N.NNN seconds.
+    Total: 405 tests, 0 failures, 0 errors in N.NNN seconds.
+    False
+

Copied: zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-leaks-err.txt (from rev 86204, zope.testing/trunk/src/zope/testing/testrunner-leaks-err.txt)
===================================================================
--- zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-leaks-err.txt	                        (rev 0)
+++ zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-leaks-err.txt	2008-05-03 13:25:02 UTC (rev 86207)
@@ -0,0 +1,22 @@
+Debugging Memory Leaks without a debug build of Python 
+======================================================
+
+To use the --report-refcounts (-r) to detect or debug memory leaks,
+you must have a debug build of Python. Without a debug build, you will
+get an error message:
+
+    >>> import os.path, sys
+    >>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
+    >>> defaults = [
+    ...     '--path', directory_with_tests,
+    ...     '--tests-pattern', '^sampletestsf?$',
+    ...     ]
+
+    >>> from zope.testing import testrunner
+    
+    >>> sys.argv = 'test -r -N6'.split()
+    >>> _ = testrunner.run(defaults)
+            The Python you are running was not configured
+            with --with-pydebug. This is required to use
+            the --report-refcounts option.
+    <BLANKLINE>

Copied: zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-leaks.txt (from rev 86204, zope.testing/trunk/src/zope/testing/testrunner-leaks.txt)
===================================================================
--- zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-leaks.txt	                        (rev 0)
+++ zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-leaks.txt	2008-05-03 13:25:02 UTC (rev 86207)
@@ -0,0 +1,218 @@
+Debugging Memory Leaks
+======================
+
+The --report-refcounts (-r) option can be used with the --repeat (-N)
+option to detect and diagnose memory leaks.  To use this option, you
+must configure Python with the --with-pydebug option. (On Unix, pass
+this option to configure and then build Python.)
+
+    >>> import os.path, sys
+    >>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
+    >>> defaults = [
+    ...     '--path', directory_with_tests,
+    ...     '--tests-pattern', '^sampletestsf?$',
+    ...     ]
+
+    >>> from zope.testing import testrunner
+    
+    >>> sys.argv = 'test --layer Layer11$ --layer Layer12$ -N4 -r'.split()
+    >>> _ = testrunner.run(defaults)
+    Running samplelayers.Layer11 tests:
+      Set up samplelayers.Layer1 in 0.000 seconds.
+      Set up samplelayers.Layer11 in 0.000 seconds.
+    Iteration 1
+      Ran 34 tests with 0 failures and 0 errors in 0.013 seconds.
+    Iteration 2
+      Ran 34 tests with 0 failures and 0 errors in 0.012 seconds.
+      sys refcount=100401   change=0     
+    Iteration 3
+      Ran 34 tests with 0 failures and 0 errors in 0.012 seconds.
+      sys refcount=100401   change=0     
+    Iteration 4
+      Ran 34 tests with 0 failures and 0 errors in 0.013 seconds.
+      sys refcount=100401   change=0     
+    Running samplelayers.Layer12 tests:
+      Tear down samplelayers.Layer11 in 0.000 seconds.
+      Set up samplelayers.Layer12 in 0.000 seconds.
+    Iteration 1
+      Ran 34 tests with 0 failures and 0 errors in 0.013 seconds.
+    Iteration 2
+      Ran 34 tests with 0 failures and 0 errors in 0.012 seconds.
+      sys refcount=100411   change=0     
+    Iteration 3
+      Ran 34 tests with 0 failures and 0 errors in 0.012 seconds.
+      sys refcount=100411   change=0     
+    Iteration 4
+      Ran 34 tests with 0 failures and 0 errors in 0.012 seconds.
+      sys refcount=100411   change=0     
+    Tearing down left over layers:
+      Tear down samplelayers.Layer12 in 0.000 seconds.
+      Tear down samplelayers.Layer1 in 0.000 seconds.
+    Total: 68 tests, 0 failures, 0 errors in N.NNN seconds.
+
+Each layer is repeated the requested number of times.  For each
+iteration after the first, the system refcount and the change in
+system refcount are shown. The system refcount is the total of all
+refcounts in the system.  When the refcount on any object changes, the
+system refcount changes by the same amount.  Tests that don't leak
+show zero change in system refcount.
+
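+Roughly speaking, the per-iteration figures come from the extra
+bookkeeping a debug build provides.  A sketch of the idea (our own
+illustration, not the testrunner's actual code):
+
+    import sys
+
+    def run_iteration(run_tests, previous_total):
+        # sys.gettotalrefcount exists only in --with-pydebug builds.
+        run_tests()
+        total = sys.gettotalrefcount()
+        # The change is only reported from the second iteration on.
+        return total, total - previous_total
+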
+Let's look at an example test that leaks:
+
+    >>> sys.argv = 'test --tests-pattern leak -N4 -r'.split()
+    >>> _ = testrunner.run(defaults)
+    Running unit tests:
+    Iteration 1
+      Ran 1 tests with 0 failures and 0 errors in 0.000 seconds.
+    Iteration 2
+      Ran 1 tests with 0 failures and 0 errors in 0.000 seconds.
+      sys refcount=92506    change=12
+    Iteration 3
+      Ran 1 tests with 0 failures and 0 errors in 0.000 seconds.
+      sys refcount=92513    change=12
+    Iteration 4
+      Ran 1 tests with 0 failures and 0 errors in 0.000 seconds.
+      sys refcount=92520    change=12
+
+Here we see that the system refcount is increasing.  If we specify a
+verbosity greater than one, we can get details broken out by object
+type (or class):
+
+    >>> sys.argv = 'test --tests-pattern leak -N5 -r -v'.split()
+    >>> _ = testrunner.run(defaults)
+    Running tests at level 1
+    Running unit tests:
+    Iteration 1
+      Running:
+        .
+      Ran 1 tests with 0 failures and 0 errors in 0.000 seconds.
+    Iteration 2
+      Running:
+        .
+      Ran 1 tests with 0 failures and 0 errors in 0.000 seconds.
+      sum detail refcount=95832    sys refcount=105668   change=16    
+        Leak details, changes in instances and refcounts by type/class:
+        type/class                                               insts   refs
+        -------------------------------------------------------  -----   ----
+        classobj                                                     0      1
+        dict                                                         2      2
+        float                                                        1      1
+        int                                                          2      2
+        leak.ClassicLeakable                                         1      1
+        leak.Leakable                                                1      1
+        str                                                          0      4
+        tuple                                                        1      1
+        type                                                         0      3
+        -------------------------------------------------------  -----   ----
+        total                                                        8     16
+    Iteration 3
+      Running:
+        .
+      Ran 1 tests with 0 failures and 0 errors in 0.000 seconds.
+      sum detail refcount=95844    sys refcount=105680   change=12    
+        Leak details, changes in instances and refcounts by type/class:
+        type/class                                               insts   refs
+        -------------------------------------------------------  -----   ----
+        classobj                                                     0      1
+        dict                                                         2      2
+        float                                                        1      1
+        int                                                         -1      0
+        leak.ClassicLeakable                                         1      1
+        leak.Leakable                                                1      1
+        str                                                          0      4
+        tuple                                                        1      1
+        type                                                         0      1
+        -------------------------------------------------------  -----   ----
+        total                                                        5     12
+    Iteration 4
+      Running:
+        .
+      Ran 1 tests with 0 failures and 0 errors in 0.000 seconds.
+      sum detail refcount=95856    sys refcount=105692   change=12    
+        Leak details, changes in instances and refcounts by type/class:
+        type/class                                               insts   refs
+        -------------------------------------------------------  -----   ----
+        classobj                                                     0      1
+        dict                                                         2      2
+        float                                                        1      1
+        leak.ClassicLeakable                                         1      1
+        leak.Leakable                                                1      1
+        str                                                          0      4
+        tuple                                                        1      1
+        type                                                         0      1
+        -------------------------------------------------------  -----   ----
+        total                                                        6     12
+    Iteration 5
+      Running:
+        .
+      Ran 1 tests with 0 failures and 0 errors in 0.000 seconds.
+      sum detail refcount=95868    sys refcount=105704   change=12    
+        Leak details, changes in instances and refcounts by type/class:
+        type/class                                               insts   refs
+        -------------------------------------------------------  -----   ----
+        classobj                                                     0      1
+        dict                                                         2      2
+        float                                                        1      1
+        leak.ClassicLeakable                                         1      1
+        leak.Leakable                                                1      1
+        str                                                          0      4
+        tuple                                                        1      1
+        type                                                         0      1
+        -------------------------------------------------------  -----   ----
+        total                                                        6     12
+
+It is instructive to analyze the results in some detail.  The test
+being run was designed to intentionally leak:
+
+    class ClassicLeakable:
+        def __init__(self):
+            self.x = 'x'
+
+    class Leakable(object):
+        def __init__(self):
+            self.x = 'x'
+
+    leaked = []
+
+    class TestSomething(unittest.TestCase):
+
+        def testleak(self):
+            leaked.append((ClassicLeakable(), Leakable(), time.time()))
+
+Let's go through this by type.
+
+float, leak.ClassicLeakable, leak.Leakable, and tuple
+    We leak one of these every time.  This is to be expected because
+    we are adding one of these to the list every time.
+
+str
+    We don't leak any instances, but we leak 4 references. These are
+    due to the instance attributes and values.
+
+dict
+    We leak 2 of these, one for each ClassicLeakable and Leakable
+    instance. 
+
+classobj
+    We increase the number of classobj references by one each time
+    because each ClassicLeakable instance holds a reference to its
+    class.  Each new instance increases the references to its class,
+    which increases the total number of references to classic classes
+    (classobj instances).
+
+type
+    For most iterations, we increase the number of type references by
+    one for the same reason we increase the number of classobj
+    references by one.  The increase of 3 type references in the
+    second iteration is puzzling, but illustrates that this sort of
+    data is often puzzling.
+
+int
+    The change in the number of int instances and references in this
+    example is a side effect of the statistics being gathered.  Lots
+    of integers are created while gathering the memory statistics used
+    here.
+
+The summary statistics include the sum of the detail refcounts.  (Note
+that this sum is less than the system refcount.  This is because the
+detailed analysis doesn't inspect every object. Not all objects in the
+system are returned by sys.getobjects.)
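+
+The per-type breakdown above is the kind of tally one can build from
+the debug build's sys.getobjects.  A rough sketch (our own
+illustration, not the testrunner's implementation):
+
+    import sys
+
+    def count_by_type():
+        counts = {}
+        # sys.getobjects(0) returns the objects the debug build is
+        # tracking; it is only available with --with-pydebug.
+        for ob in sys.getobjects(0):
+            name = type(ob).__name__
+            counts[name] = counts.get(name, 0) + 1
+        return counts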

Copied: zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-profiling-cprofiler.txt (from rev 86204, zope.testing/trunk/src/zope/testing/testrunner-profiling-cprofiler.txt)
===================================================================
--- zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-profiling-cprofiler.txt	                        (rev 0)
+++ zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-profiling-cprofiler.txt	2008-05-03 13:25:02 UTC (rev 86207)
@@ -0,0 +1,54 @@
+Profiling
+=========
+
+The testrunner includes the ability to profile the test execution with cProfile
+via the `--profile=cProfile` option::
+
+    >>> import os.path, sys
+    >>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
+    >>> sys.path.append(directory_with_tests)
+
+    >>> defaults = [
+    ...     '--path', directory_with_tests,
+    ...     '--tests-pattern', '^sampletestsf?$',
+    ...     ]
+
+    >>> sys.argv = [testrunner_script, '--profile=cProfile']
+
+When the tests are run, we get profiling output::
+
+    >>> from zope.testing import testrunner
+    >>> testrunner.run(defaults)
+    Running unit tests:
+    ...
+    Running samplelayers.Layer1 tests:
+    ...
+    Running samplelayers.Layer11 tests:
+    ...
+    Total: ... tests, 0 failures, 0 errors in ... seconds.
+    ...
+       ncalls  tottime  percall  cumtime  percall filename:lineno(function)...
+
+Profiling also works across layers::
+
+    >>> sys.argv = [testrunner_script, '-ssample2', '--profile=cProfile', 
+    ...             '--tests-pattern', 'sampletests_ntd']
+    >>> testrunner.run(defaults)
+    Running...
+      Tear down ... not supported...
+       ncalls  tottime  percall  cumtime  percall filename:lineno(function)...
+
+The testrunner creates temporary files containing cProfile profiler
+data::
+
+    >>> import glob
+    >>> files = list(glob.glob('tests_profile.*.prof'))
+    >>> files.sort()
+    >>> files
+    ['tests_profile.cZj2jt.prof', 'tests_profile.yHD-so.prof']
+
+It deletes these when rerun.  We'll delete these ourselves::
+
+    >>> import os
+    >>> for f in files:
+    ...     os.unlink(f)
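+
+The saved files are ordinary cProfile dumps, so they can also be
+inspected directly with pstats.  A sketch (the filename below is made
+up):
+
+    import pstats
+
+    stats = pstats.Stats('tests_profile.XXXXXX.prof')
+    stats.sort_stats('cumulative')
+    # Show the ten entries with the largest cumulative time.
+    stats.print_stats(10)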

Copied: zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-profiling.txt (from rev 86204, zope.testing/trunk/src/zope/testing/testrunner-profiling.txt)
===================================================================
--- zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-profiling.txt	                        (rev 0)
+++ zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-profiling.txt	2008-05-03 13:25:02 UTC (rev 86207)
@@ -0,0 +1,54 @@
+Profiling
+=========
+
+The testrunner includes the ability to profile the test execution with
+hotshot via the --profile=hotshot option.
+
+    >>> import os.path, sys
+    >>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
+    >>> sys.path.append(directory_with_tests)
+
+    >>> defaults = [
+    ...     '--path', directory_with_tests,
+    ...     '--tests-pattern', '^sampletestsf?$',
+    ...     ]
+
+    >>> sys.argv = [testrunner_script, '--profile=hotshot']
+
+When the tests are run, we get profiling output.
+
+    >>> from zope.testing import testrunner
+    >>> testrunner.run(defaults)
+    Running unit tests:
+    ...
+    Running samplelayers.Layer1 tests:
+    ...
+    Running samplelayers.Layer11 tests:
+    ...
+    Total: ... tests, 0 failures, 0 errors in ... seconds.
+    ...
+       ncalls  tottime  percall  cumtime  percall filename:lineno(function)...
+
+Profiling also works across layers.
+
+    >>> sys.argv = [testrunner_script, '-ssample2', '--profile=hotshot', 
+    ...             '--tests-pattern', 'sampletests_ntd']
+    >>> testrunner.run(defaults)
+    Running...
+      Tear down ... not supported...
+       ncalls  tottime  percall  cumtime  percall filename:lineno(function)...
+
+The testrunner creates temporary files containing hotshot profiler
+data:
+
+    >>> import glob
+    >>> files = list(glob.glob('tests_profile.*.prof'))
+    >>> files.sort()
+    >>> files
+    ['tests_profile.cZj2jt.prof', 'tests_profile.yHD-so.prof']
+
+It deletes these when rerun.  We'll delete these ourselves:
+
+    >>> import os
+    >>> for f in files:
+    ...     os.unlink(f)
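+
+Hotshot dumps can likewise be loaded into pstats for inspection.  A
+sketch (again, the filename is made up):
+
+    import hotshot.stats
+
+    stats = hotshot.stats.load('tests_profile.XXXXXX.prof')
+    stats.sort_stats('time', 'calls')
+    stats.print_stats(20)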

Copied: zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-progress.txt (from rev 86204, zope.testing/trunk/src/zope/testing/testrunner-progress.txt)
===================================================================
--- zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-progress.txt	                        (rev 0)
+++ zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-progress.txt	2008-05-03 13:25:02 UTC (rev 86207)
@@ -0,0 +1,428 @@
+Test Progress
+=============
+
+If the --progress (-p) option is used, progress information is printed and
+a carriage return (rather than a new-line) is printed between
+detail lines.  Let's look at the effect of --progress (-p) at different
+levels of verbosity.
+
+    >>> import os.path, sys
+    >>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
+    >>> defaults = [
+    ...     '--path', directory_with_tests,
+    ...     '--tests-pattern', '^sampletestsf?$',
+    ...     ]
+
+    >>> sys.argv = 'test --layer 122 -p'.split()
+    >>> from zope.testing import testrunner
+    >>> testrunner.run(defaults)
+    Running samplelayers.Layer122 tests:
+      Set up samplelayers.Layer1 in 0.000 seconds.
+      Set up samplelayers.Layer12 in 0.000 seconds.
+      Set up samplelayers.Layer122 in 0.000 seconds.
+      Running:
+        1/34 (2.9%)\r
+                   \r
+        2/34 (5.9%)\r
+                   \r
+        3/34 (8.8%)\r
+                   \r
+        4/34 (11.8%)\r
+                    \r
+        5/34 (14.7%)\r
+                    \r
+        6/34 (17.6%)\r
+                    \r
+        7/34 (20.6%)\r
+                    \r
+        8/34 (23.5%)\r
+                    \r
+        9/34 (26.5%)\r
+                    \r
+        10/34 (29.4%)\r
+                     \r
+        11/34 (32.4%)\r
+                     \r
+        12/34 (35.3%)\r
+                     \r
+        17/34 (50.0%)\r
+                     \r
+        18/34 (52.9%)\r
+                     \r
+        19/34 (55.9%)\r
+                     \r
+        20/34 (58.8%)\r
+                     \r
+        21/34 (61.8%)\r
+                     \r
+        22/34 (64.7%)\r
+                     \r
+        23/34 (67.6%)\r
+                     \r
+        24/34 (70.6%)\r
+                     \r
+        25/34 (73.5%)\r
+                     \r
+        26/34 (76.5%)\r
+                     \r
+        27/34 (79.4%)\r
+                     \r
+        28/34 (82.4%)\r
+                     \r
+        29/34 (85.3%)\r
+                     \r
+        34/34 (100.0%)\r
+                      \r
+    <BLANKLINE>
+      Ran 34 tests with 0 failures and 0 errors in 0.008 seconds.
+    Tearing down left over layers:
+      Tear down samplelayers.Layer122 in 0.000 seconds.
+      Tear down samplelayers.Layer12 in 0.000 seconds.
+      Tear down samplelayers.Layer1 in 0.000 seconds.
+    False
+
+(Note that, in the examples above and below, we show "\r" followed by
+new lines where carriage returns would appear in actual output.)
+
+Using a single level of verbosity causes test descriptions to be
+output, but only if they fit in the terminal width.  The default
+width, when the terminal width can't be determined, is 80:
+
+>>> sys.argv = 'test --layer 122 -pv'.split()
+>>> testrunner.run(defaults)
+Running tests at level 1
+Running samplelayers.Layer122 tests:
+  Set up samplelayers.Layer1 in 0.000 seconds.
+  Set up samplelayers.Layer12 in 0.000 seconds.
+  Set up samplelayers.Layer122 in 0.000 seconds.
+  Running:
+    1/34 (2.9%) test_x1 (sample1.sampletests.test122.TestA)\r
+                                                           \r
+    2/34 (5.9%) test_y0 (sample1.sampletests.test122.TestA)\r
+                                                           \r
+    3/34 (8.8%) test_z0 (sample1.sampletests.test122.TestA)\r
+                                                           \r
+    4/34 (11.8%) test_x0 (sample1.sampletests.test122.TestB)\r
+                                                            \r
+    5/34 (14.7%) test_y1 (sample1.sampletests.test122.TestB)\r
+                                                            \r
+    6/34 (17.6%) test_z0 (sample1.sampletests.test122.TestB)\r
+                                                            \r
+    7/34 (20.6%) test_1 (sample1.sampletests.test122.TestNotMuch)\r
+                                                                 \r
+    8/34 (23.5%) test_2 (sample1.sampletests.test122.TestNotMuch)\r
+                                                                 \r
+    9/34 (26.5%) test_3 (sample1.sampletests.test122.TestNotMuch)\r
+                                                                 \r
+    10/34 (29.4%) test_x0 (sample1.sampletests.test122)\r
+                                                       \r
+    11/34 (32.4%) test_y0 (sample1.sampletests.test122)\r
+                                                       \r
+    12/34 (35.3%) test_z1 (sample1.sampletests.test122)\r
+                                                       \r
+    17/34 (50.0%) ... /testrunner-ex/sample1/sampletests/../../sampletestsl.txt\r
+                                                                               \r
+    18/34 (52.9%) test_x1 (sampletests.test122.TestA)\r
+                                                     \r
+    19/34 (55.9%) test_y0 (sampletests.test122.TestA)\r
+                                                     \r
+    20/34 (58.8%) test_z0 (sampletests.test122.TestA)\r
+                                                     \r
+    21/34 (61.8%) test_x0 (sampletests.test122.TestB)\r
+                                                     \r
+    22/34 (64.7%) test_y1 (sampletests.test122.TestB)\r
+                                                     \r
+    23/34 (67.6%) test_z0 (sampletests.test122.TestB)\r
+                                                     \r
+    24/34 (70.6%) test_1 (sampletests.test122.TestNotMuch)\r
+                                                          \r
+    25/34 (73.5%) test_2 (sampletests.test122.TestNotMuch)\r
+                                                          \r
+    26/34 (76.5%) test_3 (sampletests.test122.TestNotMuch)\r
+                                                          \r
+    27/34 (79.4%) test_x0 (sampletests.test122)\r
+                                               \r
+    28/34 (82.4%) test_y0 (sampletests.test122)\r
+                                               \r
+    29/34 (85.3%) test_z1 (sampletests.test122)\r
+                                               \r
+    34/34 (100.0%) ... pe/testing/testrunner-ex/sampletests/../sampletestsl.txt\r
+                                                                               \r
+<BLANKLINE>
+  Ran 34 tests with 0 failures and 0 errors in 0.008 seconds.
+Tearing down left over layers:
+  Tear down samplelayers.Layer122 in 0.000 seconds.
+  Tear down samplelayers.Layer12 in 0.000 seconds.
+  Tear down samplelayers.Layer1 in 0.000 seconds.
+False
+
+The terminal width is determined using the curses module.  To see
+that, we'll provide a fake curses module:
+
+    >>> class FakeCurses:
+    ...     def setupterm(self):
+    ...         pass
+    ...     def tigetnum(self, ignored):
+    ...         return 60
+    >>> old_curses = sys.modules.get('curses')
+    >>> sys.modules['curses'] = FakeCurses()
+    >>> testrunner.run(defaults)
+    Running tests at level 1
+    Running samplelayers.Layer122 tests:
+      Set up samplelayers.Layer1 in 0.000 seconds.
+      Set up samplelayers.Layer12 in 0.000 seconds.
+      Set up samplelayers.Layer122 in 0.000 seconds.
+      Running:
+        1/34 (2.9%) test_x1 (sample1.sampletests.test122.TestA)\r
+                                                               \r
+        2/34 (5.9%) test_y0 (sample1.sampletests.test122.TestA)\r
+                                                               \r
+        3/34 (8.8%) test_z0 (sample1.sampletests.test122.TestA)\r
+                                                               \r
+        4/34 (11.8%) test_x0 (...le1.sampletests.test122.TestB)\r
+                                                               \r
+        5/34 (14.7%) test_y1 (...le1.sampletests.test122.TestB)\r
+                                                               \r
+        6/34 (17.6%) test_z0 (...le1.sampletests.test122.TestB)\r
+                                                               \r
+        7/34 (20.6%) test_1 (...ampletests.test122.TestNotMuch)\r
+                                                               \r
+        8/34 (23.5%) test_2 (...ampletests.test122.TestNotMuch)\r
+                                                               \r
+        9/34 (26.5%) test_3 (...ampletests.test122.TestNotMuch)\r
+                                                               \r
+        10/34 (29.4%) test_x0 (sample1.sampletests.test122)\r
+                                                           \r
+        11/34 (32.4%) test_y0 (sample1.sampletests.test122)\r
+                                                           \r
+        12/34 (35.3%) test_z1 (sample1.sampletests.test122)\r
+                                                           \r
+        17/34 (50.0%) ... e1/sampletests/../../sampletestsl.txt\r
+                                                               \r
+        18/34 (52.9%) test_x1 (sampletests.test122.TestA)\r
+                                                         \r
+        19/34 (55.9%) test_y0 (sampletests.test122.TestA)\r
+                                                         \r
+        20/34 (58.8%) test_z0 (sampletests.test122.TestA)\r
+                                                         \r
+        21/34 (61.8%) test_x0 (sampletests.test122.TestB)\r
+                                                         \r
+        22/34 (64.7%) test_y1 (sampletests.test122.TestB)\r
+                                                         \r
+        23/34 (67.6%) test_z0 (sampletests.test122.TestB)\r
+                                                         \r
+        24/34 (70.6%) test_1 (sampletests.test122.TestNotMuch)\r
+                                                              \r
+        25/34 (73.5%) test_2 (sampletests.test122.TestNotMuch)\r
+                                                              \r
+        26/34 (76.5%) test_3 (sampletests.test122.TestNotMuch)\r
+                                                              \r
+        27/34 (79.4%) test_x0 (sampletests.test122)\r
+                                                   \r
+        28/34 (82.4%) test_y0 (sampletests.test122)\r
+                                                   \r
+        29/34 (85.3%) test_z1 (sampletests.test122)\r
+                                                   \r
+        34/34 (100.0%) ... r-ex/sampletests/../sampletestsl.txt\r
+                                                               \r
+    <BLANKLINE>
+      Ran 34 tests with 0 failures and 0 errors in 0.008 seconds.
+    Tearing down left over layers:
+      Tear down samplelayers.Layer122 in 0.000 seconds.
+      Tear down samplelayers.Layer12 in 0.000 seconds.
+      Tear down samplelayers.Layer1 in 0.000 seconds.
+    False
+
+    >>> sys.modules['curses'] = old_curses
+
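+The real curses module is queried in much the same way as the fake
+one above.  A sketch of such a query (our own, with the fallback to
+80 when the terminal can't be interrogated):
+
+    import curses
+
+    try:
+        curses.setupterm()
+        cols = curses.tigetnum('cols')
+        if cols <= 0:
+            cols = 80
+    except Exception:
+        cols = 80
+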
+If a second or third level of verbosity is added, we get additional
+information.
+
+    >>> sys.argv = 'test --layer 122 -pvv -t !txt'.split()
+    >>> testrunner.run(defaults)
+    Running tests at level 1
+    Running samplelayers.Layer122 tests:
+      Set up samplelayers.Layer1 in 0.000 seconds.
+      Set up samplelayers.Layer12 in 0.000 seconds.
+      Set up samplelayers.Layer122 in 0.000 seconds.
+      Running:
+        1/24 (4.2%) test_x1 (sample1.sampletests.test122.TestA)\r
+                                                              \r
+        2/24 (8.3%) test_y0 (sample1.sampletests.test122.TestA)\r
+                                                              \r
+        3/24 (12.5%) test_z0 (sample1.sampletests.test122.TestA)\r
+                                                               \r
+        4/24 (16.7%) test_x0 (sample1.sampletests.test122.TestB)\r
+                                                               \r
+        5/24 (20.8%) test_y1 (sample1.sampletests.test122.TestB)\r
+                                                               \r
+        6/24 (25.0%) test_z0 (sample1.sampletests.test122.TestB)\r
+                                                               \r
+        7/24 (29.2%) test_1 (sample1.sampletests.test122.TestNotMuch)\r
+                                                                    \r
+        8/24 (33.3%) test_2 (sample1.sampletests.test122.TestNotMuch)\r
+                                                                    \r
+        9/24 (37.5%) test_3 (sample1.sampletests.test122.TestNotMuch)\r
+                                                                    \r
+        10/24 (41.7%) test_x0 (sample1.sampletests.test122)\r
+                                                          \r
+        11/24 (45.8%) test_y0 (sample1.sampletests.test122)\r
+                                                          \r
+        12/24 (50.0%) test_z1 (sample1.sampletests.test122)\r
+                                                          \r
+        13/24 (54.2%) test_x1 (sampletests.test122.TestA)\r
+                                                        \r
+        14/24 (58.3%) test_y0 (sampletests.test122.TestA)\r
+                                                        \r
+        15/24 (62.5%) test_z0 (sampletests.test122.TestA)\r
+                                                        \r
+        16/24 (66.7%) test_x0 (sampletests.test122.TestB)\r
+                                                        \r
+        17/24 (70.8%) test_y1 (sampletests.test122.TestB)\r
+                                                        \r
+        18/24 (75.0%) test_z0 (sampletests.test122.TestB)\r
+                                                        \r
+        19/24 (79.2%) test_1 (sampletests.test122.TestNotMuch)\r
+                                                             \r
+        20/24 (83.3%) test_2 (sampletests.test122.TestNotMuch)\r
+                                                             \r
+        21/24 (87.5%) test_3 (sampletests.test122.TestNotMuch)\r
+                                                             \r
+        22/24 (91.7%) test_x0 (sampletests.test122)\r
+                                                  \r
+        23/24 (95.8%) test_y0 (sampletests.test122)\r
+                                                  \r
+        24/24 (100.0%) test_z1 (sampletests.test122)\r
+                                                   \r
+    <BLANKLINE>
+      Ran 24 tests with 0 failures and 0 errors in 0.006 seconds.
+    Tearing down left over layers:
+      Tear down samplelayers.Layer122 in 0.000 seconds.
+      Tear down samplelayers.Layer12 in 0.000 seconds.
+      Tear down samplelayers.Layer1 in 0.000 seconds.
+    False
+
+Note that, in this example, we used a test-selection pattern starting
+with '!' to exclude tests containing the string "txt".
+
+    >>> sys.argv = 'test --layer 122 -pvvv -t!(txt|NotMuch)'.split()
+    >>> testrunner.run(defaults)
+    Running tests at level 1
+    Running samplelayers.Layer122 tests:
+      Set up samplelayers.Layer1 in 0.000 seconds.
+      Set up samplelayers.Layer12 in 0.000 seconds.
+      Set up samplelayers.Layer122 in 0.000 seconds.
+      Running:
+        1/18 (5.6%) test_x1 (sample1.sampletests.test122.TestA) (0.000 s)\r
+                                                                          \r
+        2/18 (11.1%) test_y0 (sample1.sampletests.test122.TestA) (0.000 s)\r
+                                                                           \r
+        3/18 (16.7%) test_z0 (sample1.sampletests.test122.TestA) (0.000 s)\r
+                                                                           \r
+        4/18 (22.2%) test_x0 (sample1.sampletests.test122.TestB) (0.000 s)\r
+                                                                           \r
+        5/18 (27.8%) test_y1 (sample1.sampletests.test122.TestB) (0.000 s)\r
+                                                                           \r
+        6/18 (33.3%) test_z0 (sample1.sampletests.test122.TestB) (0.000 s)\r
+                                                                           \r
+        7/18 (38.9%) test_x0 (sample1.sampletests.test122) (0.001 s)\r
+                                                                     \r
+        8/18 (44.4%) test_y0 (sample1.sampletests.test122) (0.001 s)\r
+                                                                     \r
+        9/18 (50.0%) test_z1 (sample1.sampletests.test122) (0.001 s)\r
+                                                                     \r
+        10/18 (55.6%) test_x1 (sampletests.test122.TestA) (0.000 s)\r
+                                                                    \r
+        11/18 (61.1%) test_y0 (sampletests.test122.TestA) (0.000 s)\r
+                                                                    \r
+        12/18 (66.7%) test_z0 (sampletests.test122.TestA) (0.000 s)\r
+                                                                    \r
+        13/18 (72.2%) test_x0 (sampletests.test122.TestB) (0.000 s)\r
+                                                                    \r
+        14/18 (77.8%) test_y1 (sampletests.test122.TestB) (0.000 s)\r
+                                                                    \r
+        15/18 (83.3%) test_z0 (sampletests.test122.TestB) (0.000 s)\r
+                                                                    \r
+        16/18 (88.9%) test_x0 (sampletests.test122) (0.001 s)\r
+                                                              \r
+        17/18 (94.4%) test_y0 (sampletests.test122) (0.001 s)\r
+                                                              \r
+        18/18 (100.0%) test_z1 (sampletests.test122) (0.001 s)\r
+                                                               \r
+    <BLANKLINE>
+      Ran 18 tests with 0 failures and 0 errors in 0.006 seconds.
+    Tearing down left over layers:
+      Tear down samplelayers.Layer122 in 0.000 seconds.
+      Tear down samplelayers.Layer12 in 0.000 seconds.
+      Tear down samplelayers.Layer1 in 0.000 seconds.
+    False
+
+In this example, we also excluded tests with "NotMuch" in their names.
+
+Unfortunately, the time data above doesn't buy us much because, in
+practice, the line is cleared before there is time to see the
+times. :/
+
+
+Autodetecting progress
+----------------------
+
+The --auto-progress option determines whether stdout is a terminal and
+enables progress output only if it is.
+
+Let's pretend we have a terminal:
+
+    >>> class Terminal(object):
+    ...     def __init__(self, stream):
+    ...         self._stream = stream
+    ...     def __getattr__(self, attr):
+    ...         return getattr(self._stream, attr)
+    ...     def isatty(self):
+    ...         return True
+    >>> real_stdout = sys.stdout
+    >>> sys.stdout = Terminal(sys.stdout)
+
+    >>> sys.argv = 'test -u -t test_one.TestNotMuch --auto-progress'.split()
+    >>> testrunner.run(defaults)
+    Running unit tests:
+      Running:
+        1/6 (16.7%)\r
+                   \r
+        2/6 (33.3%)\r
+                   \r
+        3/6 (50.0%)\r
+                   \r
+        4/6 (66.7%)\r
+                   \r
+        5/6 (83.3%)\r
+                   \r
+        6/6 (100.0%)\r
+                    \r
+    <BLANKLINE>
+      Ran 6 tests with 0 failures and 0 errors in 0.000 seconds.
+    False
+
+Let's stop pretending:
+
+    >>> sys.stdout = real_stdout
+
+    >>> sys.argv = 'test -u -t test_one.TestNotMuch --auto-progress'.split()
+    >>> testrunner.run(defaults)
+    Running unit tests:
+      Ran 6 tests with 0 failures and 0 errors in 0.000 seconds.
+    False
+
+
+Disabling progress indication
+-----------------------------
+
+If -p or --progress has already been provided on the command line
+(perhaps by a wrapper script) but you do not want progress indication,
+you can switch it off with --no-progress:
+
+    >>> sys.argv = 'test -u -t test_one.TestNotMuch -p --no-progress'.split()
+    >>> testrunner.run(defaults)
+    Running unit tests:
+      Ran 6 tests with 0 failures and 0 errors in 0.000 seconds.
+    False
+

Copied: zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-repeat.txt (from rev 86204, zope.testing/trunk/src/zope/testing/testrunner-repeat.txt)
===================================================================
--- zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-repeat.txt	                        (rev 0)
+++ zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-repeat.txt	2008-05-03 13:25:02 UTC (rev 86207)
@@ -0,0 +1,46 @@
+Repeating Tests
+===============
+
+The --repeat option can be used to repeat tests some number of times.
+Repeating tests is useful to help make sure that tests clean up after
+themselves.
+
+    >>> import os.path, sys
+    >>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
+    >>> defaults = [
+    ...     '--path', directory_with_tests,
+    ...     '--tests-pattern', '^sampletestsf?$',
+    ...     ]
+
+    >>> sys.argv = 'test --layer 112 --layer unit --repeat 3'.split()
+    >>> from zope.testing import testrunner
+    >>> testrunner.run(defaults)
+    Running unit tests:
+    Iteration 1
+      Ran 192 tests with 0 failures and 0 errors in 0.054 seconds.
+    Iteration 2
+      Ran 192 tests with 0 failures and 0 errors in 0.054 seconds.
+    Iteration 3
+      Ran 192 tests with 0 failures and 0 errors in 0.052 seconds.
+    Running samplelayers.Layer112 tests:
+      Set up samplelayers.Layerx in 0.000 seconds.
+      Set up samplelayers.Layer1 in 0.000 seconds.
+      Set up samplelayers.Layer11 in 0.000 seconds.
+      Set up samplelayers.Layer112 in 0.000 seconds.
+    Iteration 1
+      Ran 34 tests with 0 failures and 0 errors in 0.010 seconds.
+    Iteration 2
+      Ran 34 tests with 0 failures and 0 errors in 0.010 seconds.
+    Iteration 3
+      Ran 34 tests with 0 failures and 0 errors in 0.010 seconds.
+    Tearing down left over layers:
+      Tear down samplelayers.Layer112 in 0.000 seconds.
+      Tear down samplelayers.Layerx in 0.000 seconds.
+      Tear down samplelayers.Layer11 in 0.000 seconds.
+      Tear down samplelayers.Layer1 in 0.000 seconds.
+    Total: 226 tests, 0 failures, 0 errors in N.NNN seconds.
+    False
+
+The tests are repeated by layer.  Layers are set up and torn down only
+once.
+ 

Copied: zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-simple.txt (from rev 86204, zope.testing/trunk/src/zope/testing/testrunner-simple.txt)
===================================================================
--- zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-simple.txt	                        (rev 0)
+++ zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-simple.txt	2008-05-03 13:25:02 UTC (rev 86207)
@@ -0,0 +1,95 @@
+Simple Usage
+============
+
+The test runner consists of an importable module.  The test runner is
+used by providing scripts that import and invoke the `run` function
+from the module.  The `testrunner` module is controlled via
+command-line options.  Test scripts provide base and default options
+by passing a list of default command-line options that are processed
+before the user-supplied command-line options.
+
+Typically, a test script does two things:
+
+- Adds the directory containing the zope package to the Python
+  path.
+
+- Calls the test runner with default arguments and arguments supplied
+  to the script.
+
+  Normally, it just passes default/setup arguments.  The test runner
+  uses `sys.argv` to get the user's input.
+
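+A minimal sketch of such a script (the paths and patterns below are
+illustrative assumptions, not part of this package):
+
+    #!/usr/bin/env python
+    import os.path, sys
+
+    here = os.path.dirname(os.path.abspath(__file__))
+    # Make the package under test importable (layout assumed for the
+    # sketch).
+    sys.path.insert(0, os.path.join(here, 'src'))
+
+    from zope.testing import testrunner
+
+    defaults = [
+        '--path', os.path.join(here, 'src'),
+        '--tests-pattern', '^tests$',
+        ]
+
+    if __name__ == '__main__':
+        # run() returns a true value when there were failures or errors.
+        sys.exit(testrunner.run(defaults))
+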
+The testrunner-ex subdirectory contains a number of sample packages
+with tests.  Let's run the tests found there.  First, though, we'll
+set up our default options:
+
+    >>> import os.path
+    >>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
+    >>> defaults = [
+    ...     '--path', directory_with_tests,
+    ...     '--tests-pattern', '^sampletestsf?$',
+    ...     ]
+
+The default options are used by a script to customize the test runner
+for a particular application.  In this case, we use two options:
+
+path
+  Set the path where the test runner should look for tests.  This path
+  is also added to the Python path.
+
+tests-pattern
+  Tell the test runner how to recognize modules or packages containing
+  tests.
+
+Now, if we run the tests, without any other options:
+
+    >>> from zope.testing import testrunner
+    >>> import sys
+    >>> sys.argv = ['test']
+    >>> testrunner.run(defaults)
+    Running unit tests:
+      Ran 192 tests with 0 failures and 0 errors in N.NNN seconds.
+    Running samplelayers.Layer1 tests:
+      Set up samplelayers.Layer1 in N.NNN seconds.
+      Ran 9 tests with 0 failures and 0 errors in N.NNN seconds.
+    Running samplelayers.Layer11 tests:
+      Set up samplelayers.Layer11 in N.NNN seconds.
+      Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
+    Running samplelayers.Layer111 tests:
+      Set up samplelayers.Layerx in N.NNN seconds.
+      Set up samplelayers.Layer111 in N.NNN seconds.
+      Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
+    Running samplelayers.Layer112 tests:
+      Tear down samplelayers.Layer111 in N.NNN seconds.
+      Set up samplelayers.Layer112 in N.NNN seconds.
+      Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
+    Running samplelayers.Layer12 tests:
+      Tear down samplelayers.Layer112 in N.NNN seconds.
+      Tear down samplelayers.Layerx in N.NNN seconds.
+      Tear down samplelayers.Layer11 in N.NNN seconds.
+      Set up samplelayers.Layer12 in N.NNN seconds.
+      Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
+    Running samplelayers.Layer121 tests:
+      Set up samplelayers.Layer121 in N.NNN seconds.
+      Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
+    Running samplelayers.Layer122 tests:
+      Tear down samplelayers.Layer121 in N.NNN seconds.
+      Set up samplelayers.Layer122 in N.NNN seconds.
+      Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
+    Tearing down left over layers:
+      Tear down samplelayers.Layer122 in N.NNN seconds.
+      Tear down samplelayers.Layer12 in N.NNN seconds.
+      Tear down samplelayers.Layer1 in N.NNN seconds.
+    Total: 405 tests, 0 failures, 0 errors in N.NNN seconds.
+    False
+
+we see the normal testrunner output, which summarizes the tests run for
+each layer.  For each layer, we see what layers had to be torn down or
+set up to run the layer and we see the number of tests run, with
+results.
+
+The test runner returns a boolean indicating whether there were
+errors.  In this example, there were no errors, so it returned False.
+
+(Of course, the times shown in these examples are just examples.
+Times will vary depending on system speed.)

Copied: zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-test-selection.txt (from rev 86204, zope.testing/trunk/src/zope/testing/testrunner-test-selection.txt)
===================================================================
--- zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-test-selection.txt	                        (rev 0)
+++ zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-test-selection.txt	2008-05-03 13:25:02 UTC (rev 86207)
@@ -0,0 +1,564 @@
+Test Selection
+==============
+
+We've already seen that we can select tests by layer.  There are three
+other ways we can select tests.  We can select tests by package:
+
+    >>> import os.path, sys
+    >>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
+    >>> defaults = [
+    ...     '--path', directory_with_tests,
+    ...     '--tests-pattern', '^sampletestsf?$',
+    ...     ]
+
+    >>> sys.argv = 'test --layer 122 -ssample1 -vv'.split()
+    >>> from zope.testing import testrunner
+    >>> testrunner.run(defaults)
+    Running tests at level 1
+    Running samplelayers.Layer122 tests:
+      Set up samplelayers.Layer1 in 0.000 seconds.
+      Set up samplelayers.Layer12 in 0.000 seconds.
+      Set up samplelayers.Layer122 in 0.000 seconds.
+      Running:
+        test_x1 (sample1.sampletests.test122.TestA)
+        test_y0 (sample1.sampletests.test122.TestA)
+        test_z0 (sample1.sampletests.test122.TestA)
+        test_x0 (sample1.sampletests.test122.TestB)
+        test_y1 (sample1.sampletests.test122.TestB)
+        test_z0 (sample1.sampletests.test122.TestB)
+        test_1 (sample1.sampletests.test122.TestNotMuch)
+        test_2 (sample1.sampletests.test122.TestNotMuch)
+        test_3 (sample1.sampletests.test122.TestNotMuch)
+        test_x0 (sample1.sampletests.test122)
+        test_y0 (sample1.sampletests.test122)
+        test_z1 (sample1.sampletests.test122)
+        testrunner-ex/sample1/sampletests/../../sampletestsl.txt
+      Ran 17 tests with 0 failures and 0 errors in 0.005 seconds.
+    Tearing down left over layers:
+      Tear down samplelayers.Layer122 in 0.000 seconds.
+      Tear down samplelayers.Layer12 in 0.000 seconds.
+      Tear down samplelayers.Layer1 in 0.000 seconds.
+    False
+
+You can specify multiple packages:
+
+    >>> sys.argv = 'test -u  -vv -ssample1 -ssample2'.split()
+    >>> testrunner.run(defaults) 
+    Running tests at level 1
+    Running unit tests:
+      Running:
+        test_x1 (sample1.sampletestsf.TestA)
+        test_y0 (sample1.sampletestsf.TestA)
+        test_z0 (sample1.sampletestsf.TestA)
+        test_x0 (sample1.sampletestsf.TestB)
+        test_y1 (sample1.sampletestsf.TestB)
+        test_z0 (sample1.sampletestsf.TestB)
+        test_1 (sample1.sampletestsf.TestNotMuch)
+        test_2 (sample1.sampletestsf.TestNotMuch)
+        test_3 (sample1.sampletestsf.TestNotMuch)
+        test_x0 (sample1.sampletestsf)
+        test_y0 (sample1.sampletestsf)
+        test_z1 (sample1.sampletestsf)
+        testrunner-ex/sample1/../sampletests.txt
+        test_x1 (sample1.sample11.sampletests.TestA)
+        test_y0 (sample1.sample11.sampletests.TestA)
+        test_z0 (sample1.sample11.sampletests.TestA)
+        test_x0 (sample1.sample11.sampletests.TestB)
+        test_y1 (sample1.sample11.sampletests.TestB)
+        test_z0 (sample1.sample11.sampletests.TestB)
+        test_1 (sample1.sample11.sampletests.TestNotMuch)
+        test_2 (sample1.sample11.sampletests.TestNotMuch)
+        test_3 (sample1.sample11.sampletests.TestNotMuch)
+        test_x0 (sample1.sample11.sampletests)
+        test_y0 (sample1.sample11.sampletests)
+        test_z1 (sample1.sample11.sampletests)
+        testrunner-ex/sample1/sample11/../../sampletests.txt
+        test_x1 (sample1.sample13.sampletests.TestA)
+        test_y0 (sample1.sample13.sampletests.TestA)
+        test_z0 (sample1.sample13.sampletests.TestA)
+        test_x0 (sample1.sample13.sampletests.TestB)
+        test_y1 (sample1.sample13.sampletests.TestB)
+        test_z0 (sample1.sample13.sampletests.TestB)
+        test_1 (sample1.sample13.sampletests.TestNotMuch)
+        test_2 (sample1.sample13.sampletests.TestNotMuch)
+        test_3 (sample1.sample13.sampletests.TestNotMuch)
+        test_x0 (sample1.sample13.sampletests)
+        test_y0 (sample1.sample13.sampletests)
+        test_z1 (sample1.sample13.sampletests)
+        testrunner-ex/sample1/sample13/../../sampletests.txt
+        test_x1 (sample1.sampletests.test1.TestA)
+        test_y0 (sample1.sampletests.test1.TestA)
+        test_z0 (sample1.sampletests.test1.TestA)
+        test_x0 (sample1.sampletests.test1.TestB)
+        test_y1 (sample1.sampletests.test1.TestB)
+        test_z0 (sample1.sampletests.test1.TestB)
+        test_1 (sample1.sampletests.test1.TestNotMuch)
+        test_2 (sample1.sampletests.test1.TestNotMuch)
+        test_3 (sample1.sampletests.test1.TestNotMuch)
+        test_x0 (sample1.sampletests.test1)
+        test_y0 (sample1.sampletests.test1)
+        test_z1 (sample1.sampletests.test1)
+        testrunner-ex/sample1/sampletests/../../sampletests.txt
+        test_x1 (sample1.sampletests.test_one.TestA)
+        test_y0 (sample1.sampletests.test_one.TestA)
+        test_z0 (sample1.sampletests.test_one.TestA)
+        test_x0 (sample1.sampletests.test_one.TestB)
+        test_y1 (sample1.sampletests.test_one.TestB)
+        test_z0 (sample1.sampletests.test_one.TestB)
+        test_1 (sample1.sampletests.test_one.TestNotMuch)
+        test_2 (sample1.sampletests.test_one.TestNotMuch)
+        test_3 (sample1.sampletests.test_one.TestNotMuch)
+        test_x0 (sample1.sampletests.test_one)
+        test_y0 (sample1.sampletests.test_one)
+        test_z1 (sample1.sampletests.test_one)
+        testrunner-ex/sample1/sampletests/../../sampletests.txt
+        test_x1 (sample2.sample21.sampletests.TestA)
+        test_y0 (sample2.sample21.sampletests.TestA)
+        test_z0 (sample2.sample21.sampletests.TestA)
+        test_x0 (sample2.sample21.sampletests.TestB)
+        test_y1 (sample2.sample21.sampletests.TestB)
+        test_z0 (sample2.sample21.sampletests.TestB)
+        test_1 (sample2.sample21.sampletests.TestNotMuch)
+        test_2 (sample2.sample21.sampletests.TestNotMuch)
+        test_3 (sample2.sample21.sampletests.TestNotMuch)
+        test_x0 (sample2.sample21.sampletests)
+        test_y0 (sample2.sample21.sampletests)
+        test_z1 (sample2.sample21.sampletests)
+        testrunner-ex/sample2/sample21/../../sampletests.txt
+        test_x1 (sample2.sampletests.test_1.TestA)
+        test_y0 (sample2.sampletests.test_1.TestA)
+        test_z0 (sample2.sampletests.test_1.TestA)
+        test_x0 (sample2.sampletests.test_1.TestB)
+        test_y1 (sample2.sampletests.test_1.TestB)
+        test_z0 (sample2.sampletests.test_1.TestB)
+        test_1 (sample2.sampletests.test_1.TestNotMuch)
+        test_2 (sample2.sampletests.test_1.TestNotMuch)
+        test_3 (sample2.sampletests.test_1.TestNotMuch)
+        test_x0 (sample2.sampletests.test_1)
+        test_y0 (sample2.sampletests.test_1)
+        test_z1 (sample2.sampletests.test_1)
+        testrunner-ex/sample2/sampletests/../../sampletests.txt
+        test_x1 (sample2.sampletests.testone.TestA)
+        test_y0 (sample2.sampletests.testone.TestA)
+        test_z0 (sample2.sampletests.testone.TestA)
+        test_x0 (sample2.sampletests.testone.TestB)
+        test_y1 (sample2.sampletests.testone.TestB)
+        test_z0 (sample2.sampletests.testone.TestB)
+        test_1 (sample2.sampletests.testone.TestNotMuch)
+        test_2 (sample2.sampletests.testone.TestNotMuch)
+        test_3 (sample2.sampletests.testone.TestNotMuch)
+        test_x0 (sample2.sampletests.testone)
+        test_y0 (sample2.sampletests.testone)
+        test_z1 (sample2.sampletests.testone)
+        testrunner-ex/sample2/sampletests/../../sampletests.txt
+      Ran 128 tests with 0 failures and 0 errors in 0.025 seconds.
+    False
+
+You can specify directory names instead of packages (useful for
+tab-completion):
+
+    >>> subdir = os.path.join(directory_with_tests, 'sample1')
+    >>> sys.argv = ('test --layer 122 -s %s -vv' % subdir).split()
+    >>> from zope.testing import testrunner
+    >>> testrunner.run(defaults)
+    Running tests at level 1
+    Running samplelayers.Layer122 tests:
+      Set up samplelayers.Layer1 in 0.000 seconds.
+      Set up samplelayers.Layer12 in 0.000 seconds.
+      Set up samplelayers.Layer122 in 0.000 seconds.
+      Running:
+        test_x1 (sample1.sampletests.test122.TestA)
+        test_y0 (sample1.sampletests.test122.TestA)
+        test_z0 (sample1.sampletests.test122.TestA)
+        test_x0 (sample1.sampletests.test122.TestB)
+        test_y1 (sample1.sampletests.test122.TestB)
+        test_z0 (sample1.sampletests.test122.TestB)
+        test_1 (sample1.sampletests.test122.TestNotMuch)
+        test_2 (sample1.sampletests.test122.TestNotMuch)
+        test_3 (sample1.sampletests.test122.TestNotMuch)
+        test_x0 (sample1.sampletests.test122)
+        test_y0 (sample1.sampletests.test122)
+        test_z1 (sample1.sampletests.test122)
+        testrunner-ex/sample1/sampletests/../../sampletestsl.txt
+      Ran 17 tests with 0 failures and 0 errors in 0.005 seconds.
+    Tearing down left over layers:
+      Tear down samplelayers.Layer122 in 0.000 seconds.
+      Tear down samplelayers.Layer12 in 0.000 seconds.
+      Tear down samplelayers.Layer1 in 0.000 seconds.
+    False
+
+We can select by test module name using the --module (-m) option:
+
+    >>> sys.argv = 'test -u  -vv -ssample1 -m_one -mtest1'.split()
+    >>> testrunner.run(defaults)
+    Running tests at level 1
+    Running unit tests:
+      Running:
+        test_x1 (sample1.sampletests.test1.TestA)
+        test_y0 (sample1.sampletests.test1.TestA)
+        test_z0 (sample1.sampletests.test1.TestA)
+        test_x0 (sample1.sampletests.test1.TestB)
+        test_y1 (sample1.sampletests.test1.TestB)
+        test_z0 (sample1.sampletests.test1.TestB)
+        test_1 (sample1.sampletests.test1.TestNotMuch)
+        test_2 (sample1.sampletests.test1.TestNotMuch)
+        test_3 (sample1.sampletests.test1.TestNotMuch)
+        test_x0 (sample1.sampletests.test1)
+        test_y0 (sample1.sampletests.test1)
+        test_z1 (sample1.sampletests.test1)
+        testrunner-ex/sample1/sampletests/../../sampletests.txt
+        test_x1 (sample1.sampletests.test_one.TestA)
+        test_y0 (sample1.sampletests.test_one.TestA)
+        test_z0 (sample1.sampletests.test_one.TestA)
+        test_x0 (sample1.sampletests.test_one.TestB)
+        test_y1 (sample1.sampletests.test_one.TestB)
+        test_z0 (sample1.sampletests.test_one.TestB)
+        test_1 (sample1.sampletests.test_one.TestNotMuch)
+        test_2 (sample1.sampletests.test_one.TestNotMuch)
+        test_3 (sample1.sampletests.test_one.TestNotMuch)
+        test_x0 (sample1.sampletests.test_one)
+        test_y0 (sample1.sampletests.test_one)
+        test_z1 (sample1.sampletests.test_one)
+        testrunner-ex/sample1/sampletests/../../sampletests.txt
+      Ran 32 tests with 0 failures and 0 errors in 0.008 seconds.
+    False
+
+and by test within the module using the --test (-t) option:
+
+    >>> sys.argv = 'test -u  -vv -ssample1 -m_one -mtest1 -tx0 -ty0'.split()
+    >>> testrunner.run(defaults)
+    Running tests at level 1
+    Running unit tests:
+      Running:
+        test_y0 (sample1.sampletests.test1.TestA)
+        test_x0 (sample1.sampletests.test1.TestB)
+        test_x0 (sample1.sampletests.test1)
+        test_y0 (sample1.sampletests.test1)
+        test_y0 (sample1.sampletests.test_one.TestA)
+        test_x0 (sample1.sampletests.test_one.TestB)
+        test_x0 (sample1.sampletests.test_one)
+        test_y0 (sample1.sampletests.test_one)
+      Ran 8 tests with 0 failures and 0 errors in 0.003 seconds.
+    False
+
+
+    >>> sys.argv = 'test -u  -vv -ssample1 -ttxt'.split()
+    >>> testrunner.run(defaults)
+    Running tests at level 1
+    Running unit tests:
+      Running:
+        testrunner-ex/sample1/../sampletests.txt
+        testrunner-ex/sample1/sample11/../../sampletests.txt
+        testrunner-ex/sample1/sample13/../../sampletests.txt
+        testrunner-ex/sample1/sampletests/../../sampletests.txt
+        testrunner-ex/sample1/sampletests/../../sampletests.txt
+      Ran 20 tests with 0 failures and 0 errors in 0.004 seconds.
+    False
+
+The --module and --test options take regular expressions.  If the
+regular expressions specified begin with '!', then tests that don't
+match the regular expression are selected:
+
+    >>> sys.argv = 'test -u  -vv -ssample1 -m!sample1[.]sample1'.split()
+    >>> testrunner.run(defaults) 
+    Running tests at level 1
+    Running unit tests:
+      Running:
+        test_x1 (sample1.sampletestsf.TestA)
+        test_y0 (sample1.sampletestsf.TestA)
+        test_z0 (sample1.sampletestsf.TestA)
+        test_x0 (sample1.sampletestsf.TestB)
+        test_y1 (sample1.sampletestsf.TestB)
+        test_z0 (sample1.sampletestsf.TestB)
+        test_1 (sample1.sampletestsf.TestNotMuch)
+        test_2 (sample1.sampletestsf.TestNotMuch)
+        test_3 (sample1.sampletestsf.TestNotMuch)
+        test_x0 (sample1.sampletestsf)
+        test_y0 (sample1.sampletestsf)
+        test_z1 (sample1.sampletestsf)
+        testrunner-ex/sample1/../sampletests.txt
+        test_x1 (sample1.sampletests.test1.TestA)
+        test_y0 (sample1.sampletests.test1.TestA)
+        test_z0 (sample1.sampletests.test1.TestA)
+        test_x0 (sample1.sampletests.test1.TestB)
+        test_y1 (sample1.sampletests.test1.TestB)
+        test_z0 (sample1.sampletests.test1.TestB)
+        test_1 (sample1.sampletests.test1.TestNotMuch)
+        test_2 (sample1.sampletests.test1.TestNotMuch)
+        test_3 (sample1.sampletests.test1.TestNotMuch)
+        test_x0 (sample1.sampletests.test1)
+        test_y0 (sample1.sampletests.test1)
+        test_z1 (sample1.sampletests.test1)
+        testrunner-ex/sample1/sampletests/../../sampletests.txt
+        test_x1 (sample1.sampletests.test_one.TestA)
+        test_y0 (sample1.sampletests.test_one.TestA)
+        test_z0 (sample1.sampletests.test_one.TestA)
+        test_x0 (sample1.sampletests.test_one.TestB)
+        test_y1 (sample1.sampletests.test_one.TestB)
+        test_z0 (sample1.sampletests.test_one.TestB)
+        test_1 (sample1.sampletests.test_one.TestNotMuch)
+        test_2 (sample1.sampletests.test_one.TestNotMuch)
+        test_3 (sample1.sampletests.test_one.TestNotMuch)
+        test_x0 (sample1.sampletests.test_one)
+        test_y0 (sample1.sampletests.test_one)
+        test_z1 (sample1.sampletests.test_one)
+        testrunner-ex/sample1/sampletests/../../sampletests.txt
+      Ran 48 tests with 0 failures and 0 errors in 0.017 seconds.
+    False
+
+Module and test filters can also be given as positional arguments:
+
+
+    >>> sys.argv = 'test -u  -vv -ssample1 !sample1[.]sample1'.split()
+    >>> testrunner.run(defaults) 
+    Running tests at level 1
+    Running unit tests:
+      Running:
+        test_x1 (sample1.sampletestsf.TestA)
+        test_y0 (sample1.sampletestsf.TestA)
+        test_z0 (sample1.sampletestsf.TestA)
+        test_x0 (sample1.sampletestsf.TestB)
+        test_y1 (sample1.sampletestsf.TestB)
+        test_z0 (sample1.sampletestsf.TestB)
+        test_1 (sample1.sampletestsf.TestNotMuch)
+        test_2 (sample1.sampletestsf.TestNotMuch)
+        test_3 (sample1.sampletestsf.TestNotMuch)
+        test_x0 (sample1.sampletestsf)
+        test_y0 (sample1.sampletestsf)
+        test_z1 (sample1.sampletestsf)
+        testrunner-ex/sample1/../sampletests.txt
+        test_x1 (sample1.sampletests.test1.TestA)
+        test_y0 (sample1.sampletests.test1.TestA)
+        test_z0 (sample1.sampletests.test1.TestA)
+        test_x0 (sample1.sampletests.test1.TestB)
+        test_y1 (sample1.sampletests.test1.TestB)
+        test_z0 (sample1.sampletests.test1.TestB)
+        test_1 (sample1.sampletests.test1.TestNotMuch)
+        test_2 (sample1.sampletests.test1.TestNotMuch)
+        test_3 (sample1.sampletests.test1.TestNotMuch)
+        test_x0 (sample1.sampletests.test1)
+        test_y0 (sample1.sampletests.test1)
+        test_z1 (sample1.sampletests.test1)
+        testrunner-ex/sample1/sampletests/../../sampletests.txt
+        test_x1 (sample1.sampletests.test_one.TestA)
+        test_y0 (sample1.sampletests.test_one.TestA)
+        test_z0 (sample1.sampletests.test_one.TestA)
+        test_x0 (sample1.sampletests.test_one.TestB)
+        test_y1 (sample1.sampletests.test_one.TestB)
+        test_z0 (sample1.sampletests.test_one.TestB)
+        test_1 (sample1.sampletests.test_one.TestNotMuch)
+        test_2 (sample1.sampletests.test_one.TestNotMuch)
+        test_3 (sample1.sampletests.test_one.TestNotMuch)
+        test_x0 (sample1.sampletests.test_one)
+        test_y0 (sample1.sampletests.test_one)
+        test_z1 (sample1.sampletests.test_one)
+        testrunner-ex/sample1/sampletests/../../sampletests.txt
+      Ran 48 tests with 0 failures and 0 errors in 0.017 seconds.
+    False
+
+    >>> sys.argv = 'test -u  -vv -ssample1 . txt'.split()
+    >>> testrunner.run(defaults)
+    Running tests at level 1
+    Running unit tests:
+      Running:
+        testrunner-ex/sample1/../sampletests.txt
+        testrunner-ex/sample1/sample11/../../sampletests.txt
+        testrunner-ex/sample1/sample13/../../sampletests.txt
+        testrunner-ex/sample1/sampletests/../../sampletests.txt
+        testrunner-ex/sample1/sampletests/../../sampletests.txt
+      Ran 20 tests with 0 failures and 0 errors in 0.004 seconds.
+    False
+
+Sometimes, there are tests that you don't want to run by default.
+For example, you might have tests that take a long time.  Tests can
+have a level attribute.  If no level is specified, a level of 1 is
+assumed and, by default, only tests at level one are run.  To run
+tests at a higher level, use the --at-level (-a) option to specify a
+higher level.  For example, with the following options:
+
+
+    >>> sys.argv = 'test -u  -vv -t test_y1 -t test_y0'.split()
+    >>> testrunner.run(defaults)
+    Running tests at level 1
+    Running unit tests:
+      Running:
+        test_y0 (sampletestsf.TestA)
+        test_y1 (sampletestsf.TestB)
+        test_y0 (sampletestsf)
+        test_y0 (sample1.sampletestsf.TestA)
+        test_y1 (sample1.sampletestsf.TestB)
+        test_y0 (sample1.sampletestsf)
+        test_y0 (sample1.sample11.sampletests.TestA)
+        test_y1 (sample1.sample11.sampletests.TestB)
+        test_y0 (sample1.sample11.sampletests)
+        test_y0 (sample1.sample13.sampletests.TestA)
+        test_y1 (sample1.sample13.sampletests.TestB)
+        test_y0 (sample1.sample13.sampletests)
+        test_y0 (sample1.sampletests.test1.TestA)
+        test_y1 (sample1.sampletests.test1.TestB)
+        test_y0 (sample1.sampletests.test1)
+        test_y0 (sample1.sampletests.test_one.TestA)
+        test_y1 (sample1.sampletests.test_one.TestB)
+        test_y0 (sample1.sampletests.test_one)
+        test_y0 (sample2.sample21.sampletests.TestA)
+        test_y1 (sample2.sample21.sampletests.TestB)
+        test_y0 (sample2.sample21.sampletests)
+        test_y0 (sample2.sampletests.test_1.TestA)
+        test_y1 (sample2.sampletests.test_1.TestB)
+        test_y0 (sample2.sampletests.test_1)
+        test_y0 (sample2.sampletests.testone.TestA)
+        test_y1 (sample2.sampletests.testone.TestB)
+        test_y0 (sample2.sampletests.testone)
+        test_y0 (sample3.sampletests.TestA)
+        test_y1 (sample3.sampletests.TestB)
+        test_y0 (sample3.sampletests)
+        test_y0 (sampletests.test1.TestA)
+        test_y1 (sampletests.test1.TestB)
+        test_y0 (sampletests.test1)
+        test_y0 (sampletests.test_one.TestA)
+        test_y1 (sampletests.test_one.TestB)
+        test_y0 (sampletests.test_one)
+      Ran 36 tests with 0 failures and 0 errors in 0.009 seconds.
+    False
+
+
+We ran 36 tests.  If we specify a level of 2, we get some
+additional tests:
+
+    >>> sys.argv = 'test -u  -vv -a 2 -t test_y1 -t test_y0'.split()
+    >>> testrunner.run(defaults)
+    Running tests at level 2
+    Running unit tests:
+      Running:
+        test_y0 (sampletestsf.TestA)
+        test_y0 (sampletestsf.TestA2)
+        test_y1 (sampletestsf.TestB)
+        test_y0 (sampletestsf)
+        test_y0 (sample1.sampletestsf.TestA)
+        test_y1 (sample1.sampletestsf.TestB)
+        test_y0 (sample1.sampletestsf)
+        test_y0 (sample1.sample11.sampletests.TestA)
+        test_y1 (sample1.sample11.sampletests.TestB)
+        test_y1 (sample1.sample11.sampletests.TestB2)
+        test_y0 (sample1.sample11.sampletests)
+        test_y0 (sample1.sample13.sampletests.TestA)
+        test_y1 (sample1.sample13.sampletests.TestB)
+        test_y0 (sample1.sample13.sampletests)
+        test_y0 (sample1.sampletests.test1.TestA)
+        test_y1 (sample1.sampletests.test1.TestB)
+        test_y0 (sample1.sampletests.test1)
+        test_y0 (sample1.sampletests.test_one.TestA)
+        test_y1 (sample1.sampletests.test_one.TestB)
+        test_y0 (sample1.sampletests.test_one)
+        test_y0 (sample2.sample21.sampletests.TestA)
+        test_y1 (sample2.sample21.sampletests.TestB)
+        test_y0 (sample2.sample21.sampletests)
+        test_y0 (sample2.sampletests.test_1.TestA)
+        test_y1 (sample2.sampletests.test_1.TestB)
+        test_y0 (sample2.sampletests.test_1)
+        test_y0 (sample2.sampletests.testone.TestA)
+        test_y1 (sample2.sampletests.testone.TestB)
+        test_y0 (sample2.sampletests.testone)
+        test_y0 (sample3.sampletests.TestA)
+        test_y1 (sample3.sampletests.TestB)
+        test_y0 (sample3.sampletests)
+        test_y0 (sampletests.test1.TestA)
+        test_y1 (sampletests.test1.TestB)
+        test_y0 (sampletests.test1)
+        test_y0 (sampletests.test_one.TestA)
+        test_y1 (sampletests.test_one.TestB)
+        test_y0 (sampletests.test_one)
+      Ran 38 tests with 0 failures and 0 errors in 0.009 seconds.
+    False
+
+We can use the --all option to run tests at all levels:
+
+    >>> sys.argv = 'test -u  -vv --all -t test_y1 -t test_y0'.split()
+    >>> testrunner.run(defaults)
+    Running tests at all levels
+    Running unit tests:
+      Running:
+        test_y0 (sampletestsf.TestA)
+        test_y0 (sampletestsf.TestA2)
+        test_y1 (sampletestsf.TestB)
+        test_y0 (sampletestsf)
+        test_y0 (sample1.sampletestsf.TestA)
+        test_y1 (sample1.sampletestsf.TestB)
+        test_y0 (sample1.sampletestsf)
+        test_y0 (sample1.sample11.sampletests.TestA)
+        test_y0 (sample1.sample11.sampletests.TestA3)
+        test_y1 (sample1.sample11.sampletests.TestB)
+        test_y1 (sample1.sample11.sampletests.TestB2)
+        test_y0 (sample1.sample11.sampletests)
+        test_y0 (sample1.sample13.sampletests.TestA)
+        test_y1 (sample1.sample13.sampletests.TestB)
+        test_y0 (sample1.sample13.sampletests)
+        test_y0 (sample1.sampletests.test1.TestA)
+        test_y1 (sample1.sampletests.test1.TestB)
+        test_y0 (sample1.sampletests.test1)
+        test_y0 (sample1.sampletests.test_one.TestA)
+        test_y1 (sample1.sampletests.test_one.TestB)
+        test_y0 (sample1.sampletests.test_one)
+        test_y0 (sample2.sample21.sampletests.TestA)
+        test_y1 (sample2.sample21.sampletests.TestB)
+        test_y0 (sample2.sample21.sampletests)
+        test_y0 (sample2.sampletests.test_1.TestA)
+        test_y1 (sample2.sampletests.test_1.TestB)
+        test_y0 (sample2.sampletests.test_1)
+        test_y0 (sample2.sampletests.testone.TestA)
+        test_y1 (sample2.sampletests.testone.TestB)
+        test_y0 (sample2.sampletests.testone)
+        test_y0 (sample3.sampletests.TestA)
+        test_y1 (sample3.sampletests.TestB)
+        test_y0 (sample3.sampletests)
+        test_y0 (sampletests.test1.TestA)
+        test_y1 (sampletests.test1.TestB)
+        test_y0 (sampletests.test1)
+        test_y0 (sampletests.test_one.TestA)
+        test_y1 (sampletests.test_one.TestB)
+        test_y0 (sampletests.test_one)
+      Ran 39 tests with 0 failures and 0 errors in 0.009 seconds.
+    False
+
+
+Listing Selected Tests
+----------------------
+
+When you're trying to figure out why the test you want is not matched by the
+pattern you specified, it is convenient to see which tests match your
+specifications.
+
+    >>> sys.argv = 'test --all -m sample1 -t test_y0 --list-tests'.split()
+    >>> testrunner.run(defaults)
+    Listing unit tests:
+      test_y0 (sample1.sampletestsf.TestA)
+      test_y0 (sample1.sampletestsf)
+      test_y0 (sample1.sample11.sampletests.TestA)
+      test_y0 (sample1.sample11.sampletests.TestA3)
+      test_y0 (sample1.sample11.sampletests)
+      test_y0 (sample1.sample13.sampletests.TestA)
+      test_y0 (sample1.sample13.sampletests)
+      test_y0 (sample1.sampletests.test1.TestA)
+      test_y0 (sample1.sampletests.test1)
+      test_y0 (sample1.sampletests.test_one.TestA)
+      test_y0 (sample1.sampletests.test_one)
+    Listing samplelayers.Layer11 tests:
+      test_y0 (sample1.sampletests.test11.TestA)
+      test_y0 (sample1.sampletests.test11)
+    Listing samplelayers.Layer111 tests:
+      test_y0 (sample1.sampletests.test111.TestA)
+      test_y0 (sample1.sampletests.test111)
+    Listing samplelayers.Layer112 tests:
+      test_y0 (sample1.sampletests.test112.TestA)
+      test_y0 (sample1.sampletests.test112)
+    Listing samplelayers.Layer12 tests:
+      test_y0 (sample1.sampletests.test12.TestA)
+      test_y0 (sample1.sampletests.test12)
+    Listing samplelayers.Layer121 tests:
+      test_y0 (sample1.sampletests.test121.TestA)
+      test_y0 (sample1.sampletests.test121)
+    Listing samplelayers.Layer122 tests:
+      test_y0 (sample1.sampletests.test122.TestA)
+      test_y0 (sample1.sampletests.test122)
+    False
+

Copied: zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-verbose.txt (from rev 86204, zope.testing/trunk/src/zope/testing/testrunner-verbose.txt)
===================================================================
--- zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-verbose.txt	                        (rev 0)
+++ zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-verbose.txt	2008-05-03 13:25:02 UTC (rev 86207)
@@ -0,0 +1,148 @@
+Verbose Output
+==============
+
+Normally, we just get a summary.  We can use the -v option to get
+increasingly detailed information.
+
+If we use a single --verbose (-v) option, we get a dot printed for each
+test:
+
+    >>> import os.path, sys
+    >>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
+    >>> defaults = [
+    ...     '--path', directory_with_tests,
+    ...     '--tests-pattern', '^sampletestsf?$',
+    ...     ]
+    >>> sys.argv = 'test --layer 122 -v'.split()
+    >>> from zope.testing import testrunner
+    >>> testrunner.run(defaults)
+    Running tests at level 1
+    Running samplelayers.Layer122 tests:
+      Set up samplelayers.Layer1 in 0.000 seconds.
+      Set up samplelayers.Layer12 in 0.000 seconds.
+      Set up samplelayers.Layer122 in 0.000 seconds.
+      Running:
+        ..................................
+      Ran 34 tests with 0 failures and 0 errors in 0.007 seconds.
+    Tearing down left over layers:
+      Tear down samplelayers.Layer122 in 0.000 seconds.
+      Tear down samplelayers.Layer12 in 0.000 seconds.
+      Tear down samplelayers.Layer1 in 0.000 seconds.
+    False
+
+If there are more than 50 tests, the dots are printed in groups of
+50:
+
+    >>> sys.argv = 'test -uv'.split()
+    >>> testrunner.run(defaults)
+    Running tests at level 1
+    Running unit tests:
+      Running:
+    ................................................................................................................................................................................................
+      Ran 192 tests with 0 failures and 0 errors in 0.035 seconds.
+    False
+
+If the --verbose (-v) option is used twice, then the name and location of
+each test are printed as it is run:
+
+    >>> sys.argv = 'test --layer 122 -vv'.split()
+    >>> testrunner.run(defaults)
+    Running tests at level 1
+    Running samplelayers.Layer122 tests:
+      Set up samplelayers.Layer1 in 0.000 seconds.
+      Set up samplelayers.Layer12 in 0.000 seconds.
+      Set up samplelayers.Layer122 in 0.000 seconds.
+      Running:
+        test_x1 (sample1.sampletests.test122.TestA)
+        test_y0 (sample1.sampletests.test122.TestA)
+        test_z0 (sample1.sampletests.test122.TestA)
+        test_x0 (sample1.sampletests.test122.TestB)
+        test_y1 (sample1.sampletests.test122.TestB)
+        test_z0 (sample1.sampletests.test122.TestB)
+        test_1 (sample1.sampletests.test122.TestNotMuch)
+        test_2 (sample1.sampletests.test122.TestNotMuch)
+        test_3 (sample1.sampletests.test122.TestNotMuch)
+        test_x0 (sample1.sampletests.test122)
+        test_y0 (sample1.sampletests.test122)
+        test_z1 (sample1.sampletests.test122)
+        testrunner-ex/sample1/sampletests/../../sampletestsl.txt
+        test_x1 (sampletests.test122.TestA)
+        test_y0 (sampletests.test122.TestA)
+        test_z0 (sampletests.test122.TestA)
+        test_x0 (sampletests.test122.TestB)
+        test_y1 (sampletests.test122.TestB)
+        test_z0 (sampletests.test122.TestB)
+        test_1 (sampletests.test122.TestNotMuch)
+        test_2 (sampletests.test122.TestNotMuch)
+        test_3 (sampletests.test122.TestNotMuch)
+        test_x0 (sampletests.test122)
+        test_y0 (sampletests.test122)
+        test_z1 (sampletests.test122)
+        testrunner-ex/sampletests/../sampletestsl.txt
+      Ran 34 tests with 0 failures and 0 errors in 0.009 seconds.
+    Tearing down left over layers:
+      Tear down samplelayers.Layer122 in 0.000 seconds.
+      Tear down samplelayers.Layer12 in 0.000 seconds.
+      Tear down samplelayers.Layer1 in 0.000 seconds.
+    False
+
+If the --verbose (-v) option is used three times, then individual
+test-execution times are printed:
+
+    >>> sys.argv = 'test --layer 122 -vvv'.split()
+    >>> testrunner.run(defaults)
+    Running tests at level 1
+    Running samplelayers.Layer122 tests:
+      Set up samplelayers.Layer1 in 0.000 seconds.
+      Set up samplelayers.Layer12 in 0.000 seconds.
+      Set up samplelayers.Layer122 in 0.000 seconds.
+      Running:
+        test_x1 (sample1.sampletests.test122.TestA) (0.000 s)
+        test_y0 (sample1.sampletests.test122.TestA) (0.000 s)
+        test_z0 (sample1.sampletests.test122.TestA) (0.000 s)
+        test_x0 (sample1.sampletests.test122.TestB) (0.000 s)
+        test_y1 (sample1.sampletests.test122.TestB) (0.000 s)
+        test_z0 (sample1.sampletests.test122.TestB) (0.000 s)
+        test_1 (sample1.sampletests.test122.TestNotMuch) (0.000 s)
+        test_2 (sample1.sampletests.test122.TestNotMuch) (0.000 s)
+        test_3 (sample1.sampletests.test122.TestNotMuch) (0.000 s)
+        test_x0 (sample1.sampletests.test122) (0.001 s)
+        test_y0 (sample1.sampletests.test122) (0.001 s)
+        test_z1 (sample1.sampletests.test122) (0.001 s)
+        testrunner-ex/sample1/sampletests/../../sampletestsl.txt (0.001 s)
+        test_x1 (sampletests.test122.TestA) (0.000 s)
+        test_y0 (sampletests.test122.TestA) (0.000 s)
+        test_z0 (sampletests.test122.TestA) (0.000 s)
+        test_x0 (sampletests.test122.TestB) (0.000 s)
+        test_y1 (sampletests.test122.TestB) (0.000 s)
+        test_z0 (sampletests.test122.TestB) (0.000 s)
+        test_1 (sampletests.test122.TestNotMuch) (0.000 s)
+        test_2 (sampletests.test122.TestNotMuch) (0.000 s)
+        test_3 (sampletests.test122.TestNotMuch) (0.000 s)
+        test_x0 (sampletests.test122) (0.001 s)
+        test_y0 (sampletests.test122) (0.001 s)
+        test_z1 (sampletests.test122) (0.001 s)
+        testrunner-ex/sampletests/../sampletestsl.txt (0.001 s)
+      Ran 34 tests with 0 failures and 0 errors in 0.009 seconds.
+    Tearing down left over layers:
+      Tear down samplelayers.Layer122 in 0.000 seconds.
+      Tear down samplelayers.Layer12 in 0.000 seconds.
+      Tear down samplelayers.Layer1 in 0.000 seconds.
+    False
+
+Quiet output
+------------
+
+The --quiet (-q) option cancels all verbose options.  It's useful when
+the default verbosity is non-zero:
+
+    >>> defaults = [
+    ...     '--path', directory_with_tests,
+    ...     '--tests-pattern', '^sampletestsf?$',
+    ...     '-v'
+    ...     ]
+    >>> sys.argv = 'test -q -u'.split()
+    >>> testrunner.run(defaults)
+    Running unit tests:
+      Ran 192 tests with 0 failures and 0 errors in 0.034 seconds.
+    False

Copied: zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-wo-source.txt (from rev 86204, zope.testing/trunk/src/zope/testing/testrunner-wo-source.txt)
===================================================================
--- zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-wo-source.txt	                        (rev 0)
+++ zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner-wo-source.txt	2008-05-03 13:25:02 UTC (rev 86207)
@@ -0,0 +1,89 @@
+Running Without Source Code
+===========================
+
+The ``--usecompiled`` option allows running tests in a tree without .py
+source code, provided compiled .pyc or .pyo files exist (without
+``--usecompiled``, .py files are necessary).
+
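+A hedged aside (not executed as part of this document): bytecode files
+can be produced ahead of time with the standard library's ``compileall``
+module, and importing a module also writes its ``.pyc`` file as a side
+effect.  The path in the sketch below is only a placeholder:
+
+    import compileall
+    compileall.compile_dir('path/to/tree')  # writes a .pyc next to each .py
+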
+We have a very simple directory tree, under ``usecompiled/``, to test
+this.  Because we're going to delete its .py files, we want to work
+in a copy of that:
+
+    >>> import os.path, shutil, sys, tempfile
+    >>> directory_with_tests = tempfile.mkdtemp()
+
+    >>> NEWNAME = "unlikely_package_name"
+    >>> src = os.path.join(this_directory, 'testrunner-ex', 'usecompiled')
+    >>> os.path.isdir(src)
+    True
+    >>> dst = os.path.join(directory_with_tests, NEWNAME)
+    >>> os.path.isdir(dst)
+    False
+
+We have to use our own copying code to avoid copying read-only SVN files
+that can't be deleted later:
+
+    >>> n = len(src) + 1
+    >>> for root, dirs, files in os.walk(src):
+    ...     dirs[:] = [d for d in dirs if d == "package"] # prune cruft
+    ...     os.mkdir(os.path.join(dst, root[n:]))
+    ...     for f in files:
+    ...         shutil.copy(os.path.join(root, f),
+    ...                     os.path.join(dst, root[n:], f))
+
+Now run the tests in the copy:
+
+    >>> from zope.testing import testrunner
+
+    >>> mydefaults = [
+    ...     '--path', directory_with_tests,
+    ...     '--tests-pattern', '^compiletest$',
+    ...     '--package', NEWNAME,
+    ...     '-vv',
+    ...     ]
+    >>> sys.argv = ['test']
+    >>> testrunner.run(mydefaults)
+    Running tests at level 1
+    Running unit tests:
+      Running:
+        test1 (unlikely_package_name.compiletest.Test)
+        test2 (unlikely_package_name.compiletest.Test)
+        test1 (unlikely_package_name.package.compiletest.Test)
+        test2 (unlikely_package_name.package.compiletest.Test)
+      Ran 4 tests with 0 failures and 0 errors in N.NNN seconds.
+    False
+
+If we delete the source files, it's normally a disaster:  the test runner
+doesn't believe any test files, or even packages, exist.  Note that we pass
+``--keepbytecode`` this time, because otherwise the test runner would
+delete the compiled Python files too:
+
+    >>> for root, dirs, files in os.walk(dst):
+    ...    for f in files:
+    ...        if f.endswith(".py"):
+    ...            os.remove(os.path.join(root, f))
+    >>> testrunner.run(mydefaults, ["test", "--keepbytecode"])
+    Running tests at level 1
+    Total: 0 tests, 0 failures, 0 errors in N.NNN seconds.
+    False
+
+Finally, passing ``--usecompiled`` asks the test runner to treat .pyc
+and .pyo files as adequate replacements for .py files.  Note that the
+output is the same as when running with .py source above.  The absence
+of "removing stale bytecode ..." messages shows that ``--usecompiled``
+also implies ``--keepbytecode``:
+
+    >>> testrunner.run(mydefaults, ["test", "--usecompiled"])
+    Running tests at level 1
+    Running unit tests:
+      Running:
+        test1 (unlikely_package_name.compiletest.Test)
+        test2 (unlikely_package_name.compiletest.Test)
+        test1 (unlikely_package_name.package.compiletest.Test)
+        test2 (unlikely_package_name.package.compiletest.Test)
+      Ran 4 tests with 0 failures and 0 errors in N.NNN seconds.
+    False
+
+Remove the temporary directory:
+
+    >>> shutil.rmtree(directory_with_tests)

Copied: zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner.txt (from rev 86204, zope.testing/trunk/src/zope/testing/testrunner.txt)
===================================================================
--- zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner.txt	                        (rev 0)
+++ zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner/testrunner.txt	2008-05-03 13:25:02 UTC (rev 86207)
@@ -0,0 +1,69 @@
+Test Runner
+===========
+
+The testrunner module is used to run automated tests defined using the
+unittest framework.  Its primary feature is that it *finds* tests by
+searching directory trees.  It doesn't require the manual
+concatenation of specific test suites.  It is highly customizable and
+should be usable with any project.  In addition to finding and running
+tests, it provides the following features:
+
+- Test filtering using specifications of:
+
+  o test packages within a larger tree
+
+  o regular expression patterns for test modules
+
+  o regular expression patterns for individual tests
+
+- Organization of tests into levels and layers
+
+  Sometimes, tests take so long to run that you don't want to run them
+  on every run of the test runner.  Tests can be defined at different
+  levels.  The test runner can be configured to only run tests at a
+  specific level or below by default.  Command-line options can be
+  used to specify a higher level to use for a specific run, or to run
+  all tests.  Individual tests or test suites can specify their level
+  via a 'level' attribute, where levels are integers increasing from 1
+  (see the sketch after this feature list).
+
+  Most tests are unit tests.  They don't depend on other facilities, or
+  they set up whatever dependencies they have themselves.  For larger
+  applications, it's useful to specify common facilities that a large
+  number of tests share.  Making each test set up and tear down these
+  facilities is both inefficient and inconvenient.  For this reason,
+  we've introduced the concept of layers, based on the idea of layered
+  application architectures.  Software built for a layer should be
+  able to depend on the facilities of lower layers already being set
+  up.  For example, Zope defines a component architecture.  Much Zope
+  software depends on that architecture.  We should be able to treat
+  the component architecture as a layer that we set up once and reuse.
+  Similarly, Zope application software should be able to depend on the
+  Zope application server without having to set it up in each test.
+
+  The test runner introduces test layers, which are objects that can
+  set up environments for tests within the layers to use.  A layer is
+  set up before running the tests in it.  Individual tests or test
+  suites can define a layer by defining a `layer` attribute, which is
+  a test layer.
+
+- Reporting
+
+  - progress meter
+
+  - summaries of tests run
+
+- Analysis of test execution
+
+  - post-mortem debugging of test failures
+
+  - memory leaks
+
+  - code coverage
+
+  - source analysis using pychecker
+
+  - memory errors
+
+  - execution times
+
+  - profiling
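+
+As a rough sketch (the names below are made up for illustration and are
+not part of the example tree used by these documents), a test module
+might combine a level and a layer like this:
+
+    import unittest
+
+    class SharedLayer:
+        # Set up and tear down run once around every suite that
+        # declares ``layer = SharedLayer``.
+        @classmethod
+        def setUp(cls):
+            pass
+
+        @classmethod
+        def tearDown(cls):
+            pass
+
+    class ExpensiveTests(unittest.TestCase):
+        def test_something_slow(self):
+            self.assertEqual(sum(range(10)), 45)
+
+    def test_suite():
+        suite = unittest.makeSuite(ExpensiveTests)
+        suite.level = 2            # skipped unless a higher level or --all is requested
+        suite.layer = SharedLayer  # run with SharedLayer set up around it
+        return suite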

Deleted: zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-arguments.txt
===================================================================
--- zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-arguments.txt	2008-05-03 13:24:29 UTC (rev 86206)
+++ zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-arguments.txt	2008-05-03 13:25:02 UTC (rev 86207)
@@ -1,27 +0,0 @@
-Passing arguments explicitly
-============================
-
-In most of the examples here, we set up `sys.argv`.  In normal usage,
-the testrunner just uses `sys.argv`.  It is possible to pass arguments
-explicitly.
-
-    >>> import os.path
-    >>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
-    >>> defaults = [
-    ...     '--path', directory_with_tests,
-    ...     '--tests-pattern', '^sampletestsf?$',
-    ...     ]
-    >>> from zope.testing import testrunner
-    >>> testrunner.run(defaults, 'test --layer 111'.split())
-    Running samplelayers.Layer111 tests:
-      Set up samplelayers.Layerx in N.NNN seconds.
-      Set up samplelayers.Layer1 in N.NNN seconds.
-      Set up samplelayers.Layer11 in N.NNN seconds.
-      Set up samplelayers.Layer111 in N.NNN seconds.
-      Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
-    Tearing down left over layers:
-      Tear down samplelayers.Layer111 in N.NNN seconds.
-      Tear down samplelayers.Layerx in N.NNN seconds.
-      Tear down samplelayers.Layer11 in N.NNN seconds.
-      Tear down samplelayers.Layer1 in N.NNN seconds.
-    False

Deleted: zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-colors.txt
===================================================================
--- zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-colors.txt	2008-05-03 13:24:29 UTC (rev 86206)
+++ zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-colors.txt	2008-05-03 13:25:02 UTC (rev 86207)
@@ -1,372 +0,0 @@
-Colorful output
-===============
-
-If you're on a Unix-like system, you can ask for colorized output.  The test
-runner emits terminal control sequences to highlight important pieces of
-information (such as the names of failing tests) in different colors.
-
-    >>> import os.path, sys
-    >>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
-    >>> defaults = [
-    ...     '--path', directory_with_tests,
-    ...     '--tests-pattern', '^sampletestsf?$',
-    ...     ]
-
-    >>> from zope.testing import testrunner
-
-Since it wouldn't be a good idea to have terminal control characters in a
-test file, let's wrap sys.stdout in a simple terminal interpreter:
-
-    >>> import re
-    >>> class Terminal(object):
-    ...     _color_regexp = re.compile('\033[[]([0-9;]*)m')
-    ...     _colors = {'0': 'normal', '1': 'bold', '30': 'black', '31': 'red',
-    ...                '32': 'green', '33': 'yellow', '34': 'blue',
-    ...                '35': 'magenta', '36': 'cyan', '37': 'grey'}
-    ...     def __init__(self, stream):
-    ...         self._stream = stream
-    ...     def __getattr__(self, attr):
-    ...         return getattr(self._stream, attr)
-    ...     def isatty(self):
-    ...         return True
-    ...     def write(self, text):
-    ...         if '\033[' in text:
-    ...             text = self._color_regexp.sub(self._color, text)
-    ...         self._stream.write(text)
-    ...     def writelines(self, lines):
-    ...         for line in lines:
-    ...             self.write(line)
-    ...     def _color(self, match):
-    ...         colorstring = '{'
-    ...         for number in match.group(1).split(';'):
-    ...             colorstring += self._colors.get(number, '?')
-    ...         return colorstring + '}'
-
-    >>> real_stdout = sys.stdout
-    >>> sys.stdout = Terminal(sys.stdout)
-
-
-Successful test
----------------
-
-A successful test run soothes the developer with warm green colors:
-
-    >>> sys.argv = 'test --layer 122 -c'.split()
-    >>> testrunner.run(defaults)
-    {normal}Running samplelayers.Layer122 tests:{normal}
-      Set up samplelayers.Layer1 in {green}0.000{normal} seconds.
-      Set up samplelayers.Layer12 in {green}0.000{normal} seconds.
-      Set up samplelayers.Layer122 in {green}0.000{normal} seconds.
-    {normal}  Ran {green}34{normal} tests with {green}0{normal} failures and {green}0{normal} errors in {green}0.007{normal} seconds.{normal}
-    {normal}Tearing down left over layers:{normal}
-      Tear down samplelayers.Layer122 in {green}0.000{normal} seconds.
-      Tear down samplelayers.Layer12 in {green}0.000{normal} seconds.
-      Tear down samplelayers.Layer1 in {green}0.000{normal} seconds.
-    False
-
-
-Failed test
------------
-
-A failed test run highlights the failures in red:
-
-    >>> sys.argv = 'test -c --tests-pattern ^sampletests(f|_e|_f)?$ '.split()
-    >>> testrunner.run(defaults)
-    {normal}Running unit tests:{normal}
-    <BLANKLINE>
-    <BLANKLINE>
-    {boldred}Failure in test eek (sample2.sampletests_e){normal}
-    Failed doctest test for sample2.sampletests_e.eek
-      File "testrunner-ex/sample2/sampletests_e.py", line 28, in eek
-    <BLANKLINE>
-    ----------------------------------------------------------------------
-    {normal}File "{boldblue}testrunner-ex/sample2/sampletests_e.py{normal}", line {boldred}30{normal}, in {boldcyan}sample2.sampletests_e.eek{normal}
-    Failed example:
-    {cyan}    f(){normal}
-    Exception raised:
-    {red}    Traceback (most recent call last):{normal}
-    {red}      File ".../doctest.py", line 1356, in __run{normal}
-    {red}        compileflags, 1) in test.globs{normal}
-    {red}      File "<doctest sample2.sampletests_e.eek[0]>", line 1, in ?{normal}
-    {red}        f(){normal}
-    {red}      File "testrunner-ex/sample2/sampletests_e.py", line 19, in f{normal}
-    {red}        g(){normal}
-    {red}      File "testrunner-ex/sample2/sampletests_e.py", line 24, in g{normal}
-    {red}        x = y + 1{normal}
-    {red}    NameError: global name 'y' is not defined{normal}
-    <BLANKLINE>
-    <BLANKLINE>
-    <BLANKLINE>
-    {boldred}Error in test test3 (sample2.sampletests_e.Test){normal}
-    Traceback (most recent call last):
-    {normal}  File "{boldblue}unittest.py{normal}", line {boldred}260{normal}, in {boldcyan}run{normal}
-    {cyan}    testMethod(){normal}
-    {normal}  File "{boldblue}testrunner-ex/sample2/sampletests_e.py{normal}", line {boldred}43{normal}, in {boldcyan}test3{normal}
-    {cyan}    f(){normal}
-    {normal}  File "{boldblue}testrunner-ex/sample2/sampletests_e.py{normal}", line {boldred}19{normal}, in {boldcyan}f{normal}
-    {cyan}    g(){normal}
-    {normal}  File "{boldblue}testrunner-ex/sample2/sampletests_e.py{normal}", line {boldred}24{normal}, in {boldcyan}g{normal}
-    {cyan}    x = y + 1{normal}
-    {red}NameError: global name 'y' is not defined{normal}
-    <BLANKLINE>
-    <BLANKLINE>
-    <BLANKLINE>
-    {boldred}Failure in test testrunner-ex/sample2/e.txt{normal}
-    Failed doctest test for e.txt
-      File "testrunner-ex/sample2/e.txt", line 0
-    <BLANKLINE>
-    ----------------------------------------------------------------------
-    {normal}File "{boldblue}testrunner-ex/sample2/e.txt{normal}", line {boldred}4{normal}, in {boldcyan}e.txt{normal}
-    Failed example:
-    {cyan}    f(){normal}
-    Exception raised:
-    {red}    Traceback (most recent call last):{normal}
-    {red}      File ".../doctest.py", line 1356, in __run{normal}
-    {red}        compileflags, 1) in test.globs{normal}
-    {red}      File "<doctest e.txt[1]>", line 1, in ?{normal}
-    {red}        f(){normal}
-    {red}      File "<doctest e.txt[0]>", line 2, in f{normal}
-    {red}        return x{normal}
-    {red}    NameError: global name 'x' is not defined{normal}
-    <BLANKLINE>
-    <BLANKLINE>
-    <BLANKLINE>
-    {boldred}Failure in test test (sample2.sampletests_f.Test){normal}
-    Traceback (most recent call last):
-    {normal}  File "{boldblue}unittest.py{normal}", line {boldred}260{normal}, in {boldcyan}run{normal}
-    {cyan}    testMethod(){normal}
-    {normal}  File "{boldblue}testrunner-ex/sample2/sampletests_f.py{normal}", line {boldred}21{normal}, in {boldcyan}test{normal}
-    {cyan}    self.assertEqual(1,0){normal}
-    {normal}  File "{boldblue}unittest.py{normal}", line {boldred}333{normal}, in {boldcyan}failUnlessEqual{normal}
-    {cyan}    raise self.failureException, \{normal}
-    {red}AssertionError: 1 != 0{normal}
-    <BLANKLINE>
-    {normal}  Ran {green}200{normal} tests with {boldred}3{normal} failures and {boldred}1{normal} errors in {green}0.045{normal} seconds.{normal}
-    {normal}Running samplelayers.Layer1 tests:{normal}
-      Set up samplelayers.Layer1 in {green}0.000{normal} seconds.
-    {normal}  Ran {green}9{normal} tests with {green}0{normal} failures and {green}0{normal} errors in {green}0.001{normal} seconds.{normal}
-    {normal}Running samplelayers.Layer11 tests:{normal}
-      Set up samplelayers.Layer11 in {green}0.000{normal} seconds.
-    {normal}  Ran {green}34{normal} tests with {green}0{normal} failures and {green}0{normal} errors in {green}0.007{normal} seconds.{normal}
-    {normal}Running samplelayers.Layer111 tests:{normal}
-      Set up samplelayers.Layerx in {green}0.000{normal} seconds.
-      Set up samplelayers.Layer111 in {green}0.000{normal} seconds.
-    {normal}  Ran {green}34{normal} tests with {green}0{normal} failures and {green}0{normal} errors in {green}0.008{normal} seconds.{normal}
-    {normal}Running samplelayers.Layer112 tests:{normal}
-      Tear down samplelayers.Layer111 in {green}0.000{normal} seconds.
-      Set up samplelayers.Layer112 in {green}0.000{normal} seconds.
-    {normal}  Ran {green}34{normal} tests with {green}0{normal} failures and {green}0{normal} errors in {green}0.008{normal} seconds.{normal}
-    {normal}Running samplelayers.Layer12 tests:{normal}
-      Tear down samplelayers.Layer112 in {green}0.000{normal} seconds.
-      Tear down samplelayers.Layerx in {green}0.000{normal} seconds.
-      Tear down samplelayers.Layer11 in {green}0.000{normal} seconds.
-      Set up samplelayers.Layer12 in {green}0.000{normal} seconds.
-    {normal}  Ran {green}34{normal} tests with {green}0{normal} failures and {green}0{normal} errors in {green}0.007{normal} seconds.{normal}
-    {normal}Running samplelayers.Layer121 tests:{normal}
-      Set up samplelayers.Layer121 in {green}0.000{normal} seconds.
-    {normal}  Ran {green}34{normal} tests with {green}0{normal} failures and {green}0{normal} errors in {green}0.007{normal} seconds.{normal}
-    {normal}Running samplelayers.Layer122 tests:{normal}
-      Tear down samplelayers.Layer121 in {green}0.000{normal} seconds.
-      Set up samplelayers.Layer122 in {green}0.000{normal} seconds.
-    {normal}  Ran {green}34{normal} tests with {green}0{normal} failures and {green}0{normal} errors in {green}0.008{normal} seconds.{normal}
-    {normal}Tearing down left over layers:{normal}
-      Tear down samplelayers.Layer122 in {green}0.000{normal} seconds.
-      Tear down samplelayers.Layer12 in {green}0.000{normal} seconds.
-      Tear down samplelayers.Layer1 in {green}0.000{normal} seconds.
-    {normal}Total: {green}413{normal} tests, {boldred}3{normal} failures, {boldred}1{normal} errors in {green}0.023{normal} seconds.{normal}
-    True
-
-
-Doctest failures
-----------------
-
-The expected and actual outputs of failed doctests are shown in different
-colors:
-
-    >>> sys.argv = 'test --tests-pattern ^pledge$ -c'.split()
-    >>> _ = testrunner.run(defaults)
-    {normal}Running unit tests:{normal}
-    <BLANKLINE>
-    <BLANKLINE>
-    {boldred}Failure in test pledge (pledge){normal}
-    Failed doctest test for pledge.pledge
-      File "testrunner-ex/pledge.py", line 24, in pledge
-    <BLANKLINE>
-    ----------------------------------------------------------------------
-    {normal}File testrunner-ex/pledge.py{normal}", line {boldred}26{normal}, in {boldcyan}pledge.pledge{normal}
-    Failed example:
-    {cyan}    print pledge_template % ('and earthling', 'planet'),{normal}
-    Expected:
-    {green}    I give my pledge, as an earthling,{normal}
-    {green}    to save, and faithfully, to defend from waste,{normal}
-    {green}    the natural resources of my planet.{normal}
-    {green}    It's soils, minerals, forests, waters, and wildlife.{normal}
-    Got:
-    {red}    I give my pledge, as and earthling,{normal}
-    {red}    to save, and faithfully, to defend from waste,{normal}
-    {red}    the natural resources of my planet.{normal}
-    {red}    It's soils, minerals, forests, waters, and wildlife.{normal}
-    <BLANKLINE>
-    {normal}  Ran {green}1{normal} tests with {boldred}1{normal} failures and {green}0{normal} errors in {green}0.002{normal} seconds.{normal}
-
-Diffs are highlighted so you can easily tell the context and the mismatches
-apart:
-
-    >>> sys.argv = 'test --tests-pattern ^pledge$ --ndiff -c'.split()
-    >>> _ = testrunner.run(defaults)
-    {normal}Running unit tests:{normal}
-    <BLANKLINE>
-    <BLANKLINE>
-    {boldred}Failure in test pledge (pledge){normal}
-    Failed doctest test for pledge.pledge
-      File "testrunner-ex/pledge.py", line 24, in pledge
-    <BLANKLINE>
-    ----------------------------------------------------------------------
-    {normal}File testrunner-ex/pledge.py{normal}", line {boldred}26{normal}, in {boldcyan}pledge.pledge{normal}
-    Failed example:
-    {cyan}    print pledge_template % ('and earthling', 'planet'),{normal}
-    Differences (ndiff with -expected +actual):
-    {green}    - I give my pledge, as an earthling,{normal}
-    {red}    + I give my pledge, as and earthling,{normal}
-    {magenta}    ?                        +{normal}
-    {normal}      to save, and faithfully, to defend from waste,{normal}
-    {normal}      the natural resources of my planet.{normal}
-    {normal}      It's soils, minerals, forests, waters, and wildlife.{normal}
-    <BLANKLINE>
-    {normal}  Ran {green}1{normal} tests with {boldred}1{normal} failures and {green}0{normal} errors in {green}0.003{normal} seconds.{normal}
-
-
-Timing individual tests
------------------------
-
-At very high verbosity levels, you can see the time taken by each test:
-
-    >>> sys.argv = 'test -u -t test_one.TestNotMuch -c -vvv'.split()
-    >>> testrunner.run(defaults)
-    {normal}Running tests at level 1{normal}
-    {normal}Running unit tests:{normal}
-    {normal}  Running:{normal}
-     test_1 (sample1.sampletests.test_one.TestNotMuch) ({green}N.NNN s{normal})
-     test_2 (sample1.sampletests.test_one.TestNotMuch) ({green}N.NNN s{normal})
-     test_3 (sample1.sampletests.test_one.TestNotMuch) ({green}N.NNN s{normal})
-     test_1 (sampletests.test_one.TestNotMuch) ({green}N.NNN s{normal})
-     test_2 (sampletests.test_one.TestNotMuch) ({green}N.NNN s{normal})
-     test_3 (sampletests.test_one.TestNotMuch) ({green}N.NNN s{normal})
-    {normal}  Ran {green}6{normal} tests with {green}0{normal} failures and {green}0{normal} errors in {green}N.NNN{normal} seconds.{normal}
-    False
-
-If we had very slow tests we would see their times highlighted in a different color.
-Instead of creating a test that waits 10 seconds, let's lower the slow test threshold
-in the test runner to 0 seconds to make all of the tests seem slow.
-
-    >>> sys.argv = 'test -u -t test_one.TestNotMuch -c -vvv --slow-test 0'.split()
-    >>> testrunner.run(defaults)
-    {normal}Running tests at level 1{normal}
-    {normal}Running unit tests:{normal}
-    {normal}  Running:{normal}
-     test_1 (sample1.sampletests.test_one.TestNotMuch) ({boldmagenta}N.NNN s{normal})
-     test_2 (sample1.sampletests.test_one.TestNotMuch) ({boldmagenta}N.NNN s{normal})
-     test_3 (sample1.sampletests.test_one.TestNotMuch) ({boldmagenta}N.NNN s{normal})
-     test_1 (sampletests.test_one.TestNotMuch) ({boldmagenta}N.NNN s{normal})
-     test_2 (sampletests.test_one.TestNotMuch) ({boldmagenta}N.NNN s{normal})
-     test_3 (sampletests.test_one.TestNotMuch) ({boldmagenta}N.NNN s{normal})
-    {normal}  Ran {green}6{normal} tests with {green}0{normal} failures and {green}0{normal} errors in {green}N.NNN{normal} seconds.{normal}
-    False
-
-
-Disabling colors
-----------------
-
-If -c or --color has been previously provided on the command line (perhaps by
-a test runner wrapper script), but no colorized output is desired, the -C or
---no-color options will disable colorized output:
-
-    >>> sys.argv = 'test --layer 122 -c -C'.split()
-    >>> testrunner.run(defaults)
-    Running samplelayers.Layer122 tests:
-      Set up samplelayers.Layer1 in 0.000 seconds.
-      Set up samplelayers.Layer12 in 0.000 seconds.
-      Set up samplelayers.Layer122 in 0.000 seconds.
-      Ran 34 tests with 0 failures and 0 errors in 0.007 seconds.
-    Tearing down left over layers:
-      Tear down samplelayers.Layer122 in 0.000 seconds.
-      Tear down samplelayers.Layer12 in 0.000 seconds.
-      Tear down samplelayers.Layer1 in 0.000 seconds.
-    False
-
-    >>> sys.argv = 'test --layer 122 -c --no-color'.split()
-    >>> testrunner.run(defaults)
-    Running samplelayers.Layer122 tests:
-      Set up samplelayers.Layer1 in 0.000 seconds.
-      Set up samplelayers.Layer12 in 0.000 seconds.
-      Set up samplelayers.Layer122 in 0.000 seconds.
-      Ran 34 tests with 0 failures and 0 errors in 0.007 seconds.
-    Tearing down left over layers:
-      Tear down samplelayers.Layer122 in 0.000 seconds.
-      Tear down samplelayers.Layer12 in 0.000 seconds.
-      Tear down samplelayers.Layer1 in 0.000 seconds.
-    False
-
-
-Autodetecting colors
---------------------
-
-The --auto-color option will determine if stdout is a terminal that supports
-colors, and only enable colorized output if so.  Our ``Terminal`` wrapper
-pretends it is a terminal, but the curses module will realize it isn't:
-
-    >>> sys.argv = 'test --layer 122 --auto-color'.split()
-    >>> testrunner.run(defaults)
-    Running samplelayers.Layer122 tests:
-      Set up samplelayers.Layer1 in 0.000 seconds.
-      Set up samplelayers.Layer12 in 0.000 seconds.
-      Set up samplelayers.Layer122 in 0.000 seconds.
-      Ran 34 tests with 0 failures and 0 errors in 0.007 seconds.
-    Tearing down left over layers:
-      Tear down samplelayers.Layer122 in 0.000 seconds.
-      Tear down samplelayers.Layer12 in 0.000 seconds.
-      Tear down samplelayers.Layer1 in 0.000 seconds.
-    False
-
-We can fake it:
-
-    >>> class FakeCurses(object):
-    ...     class error(Exception):
-    ...         pass
-    ...     def setupterm(self):
-    ...         pass
-    ...     def tigetnum(self, attr):
-    ...         return dict(colors=8).get(attr, -2)
-    >>> sys.modules['curses'] = FakeCurses()
-
-    >>> sys.argv = 'test --layer 122 --auto-color'.split()
-    >>> testrunner.run(defaults)
-    {normal}Running samplelayers.Layer122 tests:{normal}
-      Set up samplelayers.Layer1 in {green}0.000{normal} seconds.
-      Set up samplelayers.Layer12 in {green}0.000{normal} seconds.
-      Set up samplelayers.Layer122 in {green}0.000{normal} seconds.
-    {normal}  Ran {green}34{normal} tests with {green}0{normal} failures and {green}0{normal} errors in {green}0.007{normal} seconds.{normal}
-    {normal}Tearing down left over layers:{normal}
-      Tear down samplelayers.Layer122 in {green}0.000{normal} seconds.
-      Tear down samplelayers.Layer12 in {green}0.000{normal} seconds.
-      Tear down samplelayers.Layer1 in {green}0.000{normal} seconds.
-    False
-
-    >>> del sys.modules['curses']
-
-The real stdout is not a terminal in a doctest:
-
-    >>> sys.stdout = real_stdout
-
-    >>> sys.argv = 'test --layer 122 --auto-color'.split()
-    >>> testrunner.run(defaults)
-    Running samplelayers.Layer122 tests:
-      Set up samplelayers.Layer1 in 0.000 seconds.
-      Set up samplelayers.Layer12 in 0.000 seconds.
-      Set up samplelayers.Layer122 in 0.000 seconds.
-      Ran 34 tests with 0 failures and 0 errors in 0.007 seconds.
-    Tearing down left over layers:
-      Tear down samplelayers.Layer122 in 0.000 seconds.
-      Tear down samplelayers.Layer12 in 0.000 seconds.
-      Tear down samplelayers.Layer1 in 0.000 seconds.
-    False

Deleted: zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-coverage-win32.txt
===================================================================
--- zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-coverage-win32.txt	2008-05-03 13:24:29 UTC (rev 86206)
+++ zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-coverage-win32.txt	2008-05-03 13:25:02 UTC (rev 86207)
@@ -1,31 +0,0 @@
-Code Coverage
-=============
-
-On Windows, drive names can be upper or lower case; these can be
-randomly passed to TestIgnore.names.
-Watch out for the case of the R drive!
-
-  >>> class WinOptions(object):
-  ...   package = None
-  ...   test_path = [('r:\\winproject\\src\\blah\\foo', ''),
-  ...                ('R:\\winproject\\src\\blah\\bar', '')]
-
-  >>> from zope.testing import testrunner
-  >>> ignore = testrunner.TestIgnore(WinOptions())
-  >>> ignore._test_dirs
-  ['r:\\winproject\\src\\blah\\foo\\', 'R:\\winproject\\src\\blah\\bar\\']
-
-We can now ask whether a particular module should be ignored:
-
-  >>> ignore.names('r:\\winproject\\src\\blah\\foo\\baz.py', 'baz')
-  False
-  >>> ignore.names('R:\\winproject\\src\\blah\\foo\\baz.py', 'baz')
-  False
-  >>> ignore.names('r:\\winproject\\src\\blah\\bar\\zab.py', 'zab')
-  False
-  >>> ignore.names('R:\\winproject\\src\\blah\\bar\\zab.py', 'zab')
-  False
-  >>> ignore.names('r:\\winproject\\src\\blah\\hello.py', 'hello')
-  True
-  >>> ignore.names('R:\\winproject\\src\\blah\\hello.py', 'hello')
-  True

Deleted: zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-coverage.txt
===================================================================
--- zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-coverage.txt	2008-05-03 13:24:29 UTC (rev 86206)
+++ zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-coverage.txt	2008-05-03 13:25:02 UTC (rev 86207)
@@ -1,126 +0,0 @@
-Code Coverage
-=============
-
-If the --coverage option is used, test coverage reports will be generated.  The
-directory name given as the parameter will be used to hold the reports.
-
-
-    >>> import os.path, sys
-    >>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
-    >>> defaults = [
-    ...     '--path', directory_with_tests,
-    ...     '--tests-pattern', '^sampletestsf?$',
-    ...     ]
-
-    >>> sys.argv = 'test --coverage=coverage_dir'.split()
-
-    >>> from zope.testing import testrunner
-    >>> testrunner.run(defaults)
-    Running unit tests:
-      Ran 192 tests with 0 failures and 0 errors in 0.125 seconds.
-    Running samplelayers.Layer1 tests:
-      Set up samplelayers.Layer1 in 0.000 seconds.
-      Ran 9 tests with 0 failures and 0 errors in 0.003 seconds.
-    Running samplelayers.Layer11 tests:
-      Set up samplelayers.Layer11 in 0.000 seconds.
-      Ran 34 tests with 0 failures and 0 errors in 0.029 seconds.
-    Running samplelayers.Layer111 tests:
-      Set up samplelayers.Layerx in 0.000 seconds.
-      Set up samplelayers.Layer111 in 0.000 seconds.
-      Ran 34 tests with 0 failures and 0 errors in 0.024 seconds.
-    Running samplelayers.Layer112 tests:
-      Tear down samplelayers.Layer111 in 0.000 seconds.
-      Set up samplelayers.Layer112 in 0.000 seconds.
-      Ran 34 tests with 0 failures and 0 errors in 0.024 seconds.
-    Running samplelayers.Layer12 tests:
-      Tear down samplelayers.Layer112 in 0.000 seconds.
-      Tear down samplelayers.Layerx in 0.000 seconds.
-      Tear down samplelayers.Layer11 in 0.000 seconds.
-      Set up samplelayers.Layer12 in 0.000 seconds.
-      Ran 34 tests with 0 failures and 0 errors in 0.026 seconds.
-    Running samplelayers.Layer121 tests:
-      Set up samplelayers.Layer121 in 0.000 seconds.
-      Ran 34 tests with 0 failures and 0 errors in 0.025 seconds.
-    Running samplelayers.Layer122 tests:
-      Tear down samplelayers.Layer121 in 0.000 seconds.
-      Set up samplelayers.Layer122 in 0.000 seconds.
-      Ran 34 tests with 0 failures and 0 errors in 0.025 seconds.
-    Tearing down left over layers:
-      Tear down samplelayers.Layer122 in 0.000 seconds.
-      Tear down samplelayers.Layer12 in 0.000 seconds.
-      Tear down samplelayers.Layer1 in 0.000 seconds.
-    Total: 405 tests, 0 failures, 0 errors in N.NNN seconds.
-    lines   cov%   module   (path)
-    ...
-       52    92%   sample1.sampletests.test1   (testrunner-ex/sample1/sampletests/test1.py)
-       78    94%   sample1.sampletests.test11   (testrunner-ex/sample1/sampletests/test11.py)
-       78    94%   sample1.sampletests.test111   (testrunner-ex/sample1/sampletests/test111.py)
-       78    94%   sample1.sampletests.test112   (testrunner-ex/sample1/sampletests/test112.py)
-       78    94%   sample1.sampletests.test12   (testrunner-ex/sample1/sampletests/test12.py)
-       78    94%   sample1.sampletests.test121   (testrunner-ex/sample1/sampletests/test121.py)
-       78    94%   sample1.sampletests.test122   (testrunner-ex/sample1/sampletests/test122.py)
-    ...
-    False
-
-The directory specified with the --coverage option will have been created and
-will hold the coverage reports.
-
-    >>> os.path.exists('coverage_dir')
-    True
-    >>> os.listdir('coverage_dir')
-    [...]
-
-(We should clean up after ourselves.)
-
-    >>> import shutil
-    >>> shutil.rmtree('coverage_dir')
-
-
-Ignoring Tests
---------------
-
-The ``trace`` module supports ignoring directories and modules based on the
-test selection. Only directories selected for testing should report coverage.
-The test runner provides a custom implementation of the relevant API.
-
-The ``TestIgnore`` class, which manages the ignoring, is initialized by
-passing the command line options. It uses the options to determine the
-directories that should be covered.
-
-  >>> class FauxOptions(object):
-  ...   package = None
-  ...   test_path = [('/myproject/src/blah/foo', ''),
-  ...                ('/myproject/src/blah/bar', '')]
-
-  >>> from zope.testing import testrunner
-  >>> ignore = testrunner.TestIgnore(FauxOptions())
-  >>> ignore._test_dirs
-  ['/myproject/src/blah/foo/', '/myproject/src/blah/bar/']
-
-We can now ask whether a particular module should be ignored:
-
-  >>> ignore.names('/myproject/src/blah/foo/baz.py', 'baz')
-  False
-  >>> ignore.names('/myproject/src/blah/bar/mine.py', 'mine')
-  False
-  >>> ignore.names('/myproject/src/blah/foo/__init__.py', 'foo')
-  False
-  >>> ignore.names('/myproject/src/blah/hello.py', 'hello')
-  True
-
-When running the test runner, modules are sometimes created from text
-strings. Those should *always* be ignored:
-
-  >>> ignore.names('/myproject/src/blah/hello.txt', '<string>')
-  True
-
-To make this check fast, the class implements a cache. In an early
-implementation, the result was cached by the module name, which was a problem,
-since a lot of modules carry the same name (not the Python dotted name
-here!). So the fact that a module has the same name in an ignored and in a
-tested directory does not mean it is always ignored:
-
-  >>> ignore.names('/myproject/src/blah/module.py', 'module')
-  True
-  >>> ignore.names('/myproject/src/blah/foo/module.py', 'module')
-  False

Deleted: zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-debugging-layer-setup.test
===================================================================
--- zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-debugging-layer-setup.test	2008-05-03 13:24:29 UTC (rev 86206)
+++ zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-debugging-layer-setup.test	2008-05-03 13:25:02 UTC (rev 86207)
@@ -1,133 +0,0 @@
-Post-mortem debugging also works when there is a failure in layer
-setup.
-
-    >>> import os, shutil, sys, tempfile
-    >>> tdir = tempfile.mkdtemp()
-    >>> dir = os.path.join(tdir, 'TESTS-DIR')
-    >>> os.mkdir(dir)
-    >>> open(os.path.join(dir, 'tests.py'), 'w').write(
-    ... '''
-    ... import doctest
-    ...
-    ... class Layer:
-    ...     @classmethod
-    ...     def setUp(self):
-    ...         x = 1
-    ...         raise ValueError
-    ...     
-    ... def a_test():
-    ...     """
-    ...     >>> None
-    ...     """
-    ... def test_suite():
-    ...     suite = doctest.DocTestSuite()
-    ...     suite.layer = Layer
-    ...     return suite
-    ... 
-    ... ''')
-    
-    >>> class Input:
-    ...     def __init__(self, src):
-    ...         self.lines = src.split('\n')
-    ...     def readline(self):
-    ...         line = self.lines.pop(0)
-    ...         print line
-    ...         return line+'\n'
-
-    >>> real_stdin = sys.stdin
-    >>> if sys.version_info[:2] == (2, 3):
-    ...     sys.stdin = Input('n\np x\nc')
-    ... else:
-    ...     sys.stdin = Input('p x\nc')
-
-    >>> sys.argv = [testrunner_script]
-    >>> import zope.testing.testrunner
-    >>> try:
-    ...     zope.testing.testrunner.run(['--path', dir, '-D'])
-    ... finally: sys.stdin = real_stdin
-    ... # doctest: +ELLIPSIS
-    Running tests.Layer tests:
-      Set up tests.Layer exceptions.ValueError:
-    <BLANKLINE>
-    > ...tests.py(8)setUp()
-    -> raise ValueError
-    (Pdb) p x
-    1
-    (Pdb) c
-    True
-
-Note that post-mortem debugging doesn't work when the layer is run in
-a subprocess:
-
-    >>> if sys.version_info[:2] == (2, 3):
-    ...     sys.stdin = Input('n\np x\nc')
-    ... else:
-    ...     sys.stdin = Input('p x\nc')
-
-    >>> open(os.path.join(dir, 'tests2.py'), 'w').write(
-    ... '''
-    ... import doctest, unittest
-    ...
-    ... class Layer1:
-    ...     @classmethod
-    ...     def setUp(self):
-    ...         pass
-    ...
-    ...     @classmethod
-    ...     def tearDown(self):
-    ...         raise NotImplementedError
-    ...
-    ... class Layer2:
-    ...     @classmethod
-    ...     def setUp(self):
-    ...         x = 1
-    ...         raise ValueError
-    ...     
-    ... def a_test():
-    ...     """
-    ...     >>> None
-    ...     """
-    ... def test_suite():
-    ...     suite1 = doctest.DocTestSuite()
-    ...     suite1.layer = Layer1
-    ...     suite2 = doctest.DocTestSuite()
-    ...     suite2.layer = Layer2
-    ...     return unittest.TestSuite((suite1, suite2))
-    ... 
-    ... ''')
-
-    >>> try:
-    ...     zope.testing.testrunner.run(
-    ...       ['--path', dir, '-Dvv', '--tests-pattern', 'tests2'])
-    ... finally: sys.stdin = real_stdin
-    ... # doctest: +ELLIPSIS
-    Running tests at level 1
-    Running tests2.Layer1 tests:
-      Set up tests2.Layer1 in 0.000 seconds.
-      Running:
-     a_test (tests2)
-      Ran 1 tests with 0 failures and 0 errors in 0.001 seconds.
-    Running tests2.Layer2 tests:
-      Tear down tests2.Layer1 ... not supported
-      Running in a subprocess.
-      Set up tests2.Layer2
-    **********************************************************************
-    <BLANKLINE>
-    Can't post-mortem debug when running a layer as a subprocess!
-    Try running layer 'tests2.Layer2' by itself.
-    <BLANKLINE>
-    **********************************************************************
-    <BLANKLINE>
-    Traceback (most recent call last):
-    ...
-        raise ValueError
-    ValueError
-    <BLANKLINE>
-    <BLANKLINE>
-    Tests with errors:
-       runTest (__main__.SetUpLayerFailure)
-    Total: 1 tests, 0 failures, 1 errors in 0.210 seconds.
-    True
-
-    >>> shutil.rmtree(tdir)
-
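What the -D option arranges amounts to running the failing code under a
post-mortem hook.  A minimal sketch of the idea, using a hypothetical
helper name (the runner's actual internals differ):

    import pdb
    import sys

    def run_with_post_mortem(setup):
        # Run a callable (e.g. a layer's setUp); if it raises, drop into
        # pdb on the traceback before letting the error propagate.
        try:
            setup()
        except Exception:
            pdb.post_mortem(sys.exc_info()[2])
            raise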

Deleted: zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-debugging.txt
===================================================================
--- zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-debugging.txt	2008-05-03 13:24:29 UTC (rev 86206)
+++ zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-debugging.txt	2008-05-03 13:25:02 UTC (rev 86207)
@@ -1,118 +0,0 @@
-Debugging
-=========
-
-The testrunner module supports post-mortem debugging and debugging
-using `pdb.set_trace`.  Let's look first at using `pdb.set_trace`.
-To demonstrate this, we'll provide input via helper Input objects:
-
-    >>> class Input:
-    ...     def __init__(self, src):
-    ...         self.lines = src.split('\n')
-    ...     def readline(self):
-    ...         line = self.lines.pop(0)
-    ...         print line
-    ...         return line+'\n'
-
-If a test or code called by a test calls pdb.set_trace, then the
-runner will enter pdb at that point:
-
-    >>> import os.path, sys
-    >>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
-    >>> from zope.testing import testrunner
-    >>> defaults = [
-    ...     '--path', directory_with_tests,
-    ...     '--tests-pattern', '^sampletestsf?$',
-    ...     ]
-
-    >>> real_stdin = sys.stdin
-    >>> if sys.version_info[:2] == (2, 3):
-    ...     sys.stdin = Input('n\np x\nc')
-    ... else:
-    ...     sys.stdin = Input('p x\nc')
-
-    >>> sys.argv = ('test -ssample3 --tests-pattern ^sampletests_d$'
-    ...             ' -t set_trace1').split()
-    >>> try: testrunner.run(defaults)
-    ... finally: sys.stdin = real_stdin
-    ... # doctest: +ELLIPSIS
-    Running unit tests:...
-    > testrunner-ex/sample3/sampletests_d.py(27)test_set_trace1()
-    -> y = x
-    (Pdb) p x
-    1
-    (Pdb) c
-      Ran 1 tests with 0 failures and 0 errors in 0.001 seconds.
-    False
-
-Note that, prior to Python 2.4, calling pdb.set_trace caused pdb to
-break in the pdb.set_trace function.  It was necessary to use 'next'
-or 'up' to get to the application code that called pdb.set_trace.  In
-Python 2.4, pdb.set_trace causes pdb to stop right after the call to
-pdb.set_trace.
-
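The test driven above is, roughly, an ordinary unittest method that calls
pdb.set_trace somewhere in its body.  A sketch reconstructed from the
output shown (the real sampletests_d.py may differ in detail):

    import pdb
    import unittest

    class TestSomething(unittest.TestCase):

        def test_set_trace1(self):
            x = 1
            pdb.set_trace()   # execution stops here under the test runner
            y = x             # the pdb prompt above shows this line next
            self.assertEqual(y, 1)
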
-You can also do post-mortem debugging, using the --post-mortem (-D)
-option:
-
-    >>> sys.stdin = Input('p x\nc')
-    >>> sys.argv = ('test -ssample3 --tests-pattern ^sampletests_d$'
-    ...             ' -t post_mortem1 -D').split()
-    >>> try: testrunner.run(defaults)
-    ... finally: sys.stdin = real_stdin
-    ... # doctest: +NORMALIZE_WHITESPACE +REPORT_NDIFF
-    Running unit tests:
-    <BLANKLINE>
-    <BLANKLINE>
-    Error in test test_post_mortem1 (sample3.sampletests_d.TestSomething)
-    Traceback (most recent call last):
-      File "testrunner-ex/sample3/sampletests_d.py",
-              line 34, in test_post_mortem1
-        raise ValueError
-    ValueError
-    <BLANKLINE>
-    exceptions.ValueError:
-    <BLANKLINE>
-    > testrunner-ex/sample3/sampletests_d.py(34)test_post_mortem1()
-    -> raise ValueError
-    (Pdb) p x
-    1
-    (Pdb) c
-    True
-
-Note that the test runner exits after post-mortem debugging.
-
-In the example above, we debugged an error.  Failures are actually
-converted to errors and can be debugged the same way:
-
-    >>> sys.stdin = Input('up\np x\np y\nc')
-    >>> sys.argv = ('test -ssample3 --tests-pattern ^sampletests_d$'
-    ...             ' -t post_mortem_failure1 -D').split()
-    >>> try: testrunner.run(defaults)
-    ... finally: sys.stdin = real_stdin
-    ... # doctest: +NORMALIZE_WHITESPACE +REPORT_NDIFF
-    Running unit tests:
-    <BLANKLINE>
-    <BLANKLINE>
-    Error in test test_post_mortem_failure1 (sample3.sampletests_d.TestSomething)
-    Traceback (most recent call last):
-      File ".../unittest.py",  line 252, in debug
-        getattr(self, self.__testMethodName)()
-      File "testrunner-ex/sample3/sampletests_d.py",
-        line 42, in test_post_mortem_failure1
-        self.assertEqual(x, y)
-      File ".../unittest.py", line 302, in failUnlessEqual
-        raise self.failureException, \
-    AssertionError: 1 != 2
-    <BLANKLINE>
-    exceptions.AssertionError:
-    1 != 2
-    > .../unittest.py(302)failUnlessEqual()
-    -> ...
-    (Pdb) up
-    > testrunner-ex/sample3/sampletests_d.py(42)test_post_mortem_failure1()
-    -> self.assertEqual(x, y)
-    (Pdb) p x
-    1
-    (Pdb) p y
-    2
-    (Pdb) c
-    True

Deleted: zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-edge-cases.txt
===================================================================
--- zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-edge-cases.txt	2008-05-03 13:24:29 UTC (rev 86206)
+++ zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-edge-cases.txt	2008-05-03 13:25:02 UTC (rev 86207)
@@ -1,476 +0,0 @@
-testrunner Edge Cases
-=====================
-
-This document has some edge-case examples to test various aspects of
-the test runner.
-
-Separating Python path and test directories
--------------------------------------------
-
-The --path option defines a directory to be searched for tests *and* a
-directory to be added to Python's search path.  The --test-path option
-can be used when you want to set a test search path without also
-affecting the Python path:
-
-    >>> import os, sys
-    >>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
-
-    >>> from zope.testing import testrunner
-
-    >>> defaults = [
-    ...     '--test-path', directory_with_tests,
-    ...     '--tests-pattern', '^sampletestsf?$',
-    ...     ]
-    >>> sys.argv = ['test']
-    >>> testrunner.run(defaults)
-    ... # doctest: +ELLIPSIS
-    Test-module import failures:
-    <BLANKLINE>
-    Module: sampletestsf
-    <BLANKLINE>
-    ImportError: No module named sampletestsf
-    ...
-
-    >>> sys.path.append(directory_with_tests)
-    >>> sys.argv = ['test']
-    >>> testrunner.run(defaults)
-    ... # doctest: +ELLIPSIS
-    Running unit tests:
-      Ran 192 tests with 0 failures and 0 errors in 0.028 seconds.
-    Running samplelayers.Layer1 tests:
-      Set up samplelayers.Layer1 in 0.000 seconds.
-      Ran 9 tests with 0 failures and 0 errors in 0.000 seconds.
-    ...
-
-Debugging Edge Cases
---------------------
-
-    >>> class Input:
-    ...     def __init__(self, src):
-    ...         self.lines = src.split('\n')
-    ...     def readline(self):
-    ...         line = self.lines.pop(0)
-    ...         print line
-    ...         return line+'\n'
-
-    >>> real_stdin = sys.stdin
-
-Using pdb.set_trace in a function called by an ordinary test:
-
-    >>> if sys.version_info[:2] == (2, 3):
-    ...     sys.stdin = Input('n\np x\nc')
-    ... else:
-    ...     sys.stdin = Input('p x\nc')
-    >>> sys.argv = ('test -ssample3 --tests-pattern ^sampletests_d$'
-    ...             ' -t set_trace2').split()
-    >>> try: testrunner.run(defaults)
-    ... finally: sys.stdin = real_stdin
-    ... # doctest: +ELLIPSIS
-    Running unit tests:...
-    > testrunner-ex/sample3/sampletests_d.py(47)f()
-    -> y = x
-    (Pdb) p x
-    1
-    (Pdb) c
-      Ran 1 tests with 0 failures and 0 errors in 0.001 seconds.
-    False
-
-Using pdb.set_trace in a function called by a doctest in a doc string:
-
-    >>> sys.stdin = Input('n\np x\nc')
-    >>> sys.argv = ('test -ssample3 --tests-pattern ^sampletests_d$'
-    ...             ' -t set_trace4').split()
-    >>> try: testrunner.run(defaults)
-    ... finally: sys.stdin = real_stdin
-    Running unit tests:
-    --Return--
-    > doctest.py(351)set_trace()->None
-    -> pdb.Pdb.set_trace(self)
-    (Pdb) n
-    > testrunner-ex/sample3/sampletests_d.py(42)f()
-    -> y = x
-    (Pdb) p x
-    1
-    (Pdb) c
-      Ran 1 tests with 0 failures and 0 errors in 0.002 seconds.
-    False
-
-Using pdb in a docstring-based doctest:
-
-    >>> sys.stdin = Input('n\np x\nc')
-    >>> sys.argv = ('test -ssample3 --tests-pattern ^sampletests_d$'
-    ...             ' -t set_trace3').split()
-    >>> try: testrunner.run(defaults)
-    ... finally: sys.stdin = real_stdin
-    Running unit tests:
-    --Return--
-    > doctest.py(351)set_trace()->None
-    -> pdb.Pdb.set_trace(self)
-    (Pdb) n
-    > <doctest sample3.sampletests_d.set_trace3[1]>(3)...()
-    -> y = x
-    (Pdb) p x
-    1
-    (Pdb) c
-      Ran 1 tests with 0 failures and 0 errors in 0.002 seconds.
-    False
-
-Using pdb.set_trace in a doc file:
-
-
-    >>> sys.stdin = Input('n\np x\nc')
-    >>> sys.argv = ('test -ssample3 --tests-pattern ^sampletests_d$'
-    ...             ' -t set_trace5').split()
-    >>> try: testrunner.run(defaults)
-    ... finally: sys.stdin = real_stdin
-    Running unit tests:
-    --Return--
-    > doctest.py(351)set_trace()->None
-    -> pdb.Pdb.set_trace(self)
-    (Pdb) n
-    > <doctest set_trace5.txt[1]>(3)...()
-    -> y = x
-    (Pdb) p x
-    1
-    (Pdb) c
-      Ran 1 tests with 0 failures and 0 errors in 0.002 seconds.
-    False
-
-
-Using pdb.set_trace in a function called by a doctest in a doc file:
-
-
-    >>> sys.stdin = Input('n\np x\nc')
-    >>> sys.argv = ('test -ssample3 --tests-pattern ^sampletests_d$'
-    ...             ' -t set_trace6').split()
-    >>> try: testrunner.run(defaults)
-    ... finally: sys.stdin = real_stdin
-    Running unit tests:
-    --Return--
-    > doctest.py(351)set_trace()->None
-    -> pdb.Pdb.set_trace(self)
-    (Pdb) n
-    > testrunner-ex/sample3/sampletests_d.py(42)f()
-    -> y = x
-    (Pdb) p x
-    1
-    (Pdb) c
-      Ran 1 tests with 0 failures and 0 errors in 0.002 seconds.
-    False
-
-Post-mortem debugging function called from ordinary test:
-
-    >>> sys.stdin = Input('p x\nc')
-    >>> sys.argv = ('test -ssample3 --tests-pattern ^sampletests_d$'
-    ...             ' -t post_mortem2 -D').split()
-    >>> try: testrunner.run(defaults)
-    ... finally: sys.stdin = real_stdin
-    ... # doctest: +NORMALIZE_WHITESPACE
-    Running unit tests:
-    <BLANKLINE>
-    <BLANKLINE>
-    Error in test test_post_mortem2 (sample3.sampletests_d.TestSomething)
-    Traceback (most recent call last):
-      File "testrunner-ex/sample3/sampletests_d.py",
-           line 37, in test_post_mortem2
-        g()
-      File "testrunner-ex/sample3/sampletests_d.py", line 46, in g
-        raise ValueError
-    ValueError
-    <BLANKLINE>
-    exceptions.ValueError:
-    <BLANKLINE>
-    > testrunner-ex/sample3/sampletests_d.py(46)g()
-    -> raise ValueError
-    (Pdb) p x
-    1
-    (Pdb) c
-    True
-
-
-Post-mortem debugging docstring-based doctest:
-
-    >>> sys.stdin = Input('p x\nc')
-    >>> sys.argv = ('test -ssample3 --tests-pattern ^sampletests_d$'
-    ...             ' -t post_mortem3 -D').split()
-    >>> try: testrunner.run(defaults)
-    ... finally: sys.stdin = real_stdin
-    ... # doctest: +NORMALIZE_WHITESPACE
-    Running unit tests:
-    <BLANKLINE>
-    <BLANKLINE>
-    Error in test post_mortem3 (sample3.sampletests_d)
-    Traceback (most recent call last):
-      File "zope/testing/doctest.py", line 2276, in debug
-        runner.run(self._dt_test)
-      File "zope/testing/doctest.py", line 1731, in run
-        r = DocTestRunner.run(self, test, compileflags, out, False)
-      File "zope/testing/doctest.py", line 1389, in run
-        return self.__run(test, compileflags, out)
-      File "zope/testing/doctest.py", line 1310, in __run
-        exc_info)
-      File "zope/testing/doctest.py", line 1737, in report_unexpected_exception
-        raise UnexpectedException(test, example, exc_info)
-    UnexpectedException:
-       from testrunner-ex/sample3/sampletests_d.py:61 (2 examples)>
-    <BLANKLINE>
-    exceptions.ValueError:
-    <BLANKLINE>
-    > <doctest sample3.sampletests_d.post_mortem3[1]>(1)...()
-    (Pdb) p x
-    1
-    (Pdb) c
-    True
-
-Post-mortem debugging function called from docstring-based doctest:
-
-    >>> sys.stdin = Input('p x\nc')
-    >>> sys.argv = ('test -ssample3 --tests-pattern ^sampletests_d$'
-    ...             ' -t post_mortem4 -D').split()
-    >>> try: testrunner.run(defaults)
-    ... finally: sys.stdin = real_stdin
-    ... # doctest: +NORMALIZE_WHITESPACE
-    Running unit tests:
-    <BLANKLINE>
-    <BLANKLINE>
-    Error in test post_mortem4 (sample3.sampletests_d)
-    Traceback (most recent call last):
-      File "zope/testing/doctest.py", line 2276, in debug
-        runner.run(self._dt_test)
-      File "zope/testing/doctest.py", line 1731, in run
-        r = DocTestRunner.run(self, test, compileflags, out, False)
-      File "zope/testing/doctest.py", line 1389, in run
-        return self.__run(test, compileflags, out)
-      File "zope/testing/doctest.py", line 1310, in __run
-        exc_info)
-      File "zope/testing/doctest.py", line 1737, in report_unexpected_exception
-        raise UnexpectedException(test, example, exc_info)
-    UnexpectedException: testrunner-ex/sample3/sampletests_d.py:67 (1 example)>
-    <BLANKLINE>
-    exceptions.ValueError:
-    <BLANKLINE>
-    > testrunner-ex/sample3/sampletests_d.py(46)g()
-    -> raise ValueError
-    (Pdb) p x
-    1
-    (Pdb) c
-    True
-
-Post-mortem debugging file-based doctest:
-
-    >>> sys.stdin = Input('p x\nc')
-    >>> sys.argv = ('test -ssample3 --tests-pattern ^sampletests_d$'
-    ...             ' -t post_mortem5 -D').split()
-    >>> try: testrunner.run(defaults)
-    ... finally: sys.stdin = real_stdin
-    ... # doctest: +NORMALIZE_WHITESPACE
-    Running unit tests:
-    <BLANKLINE>
-    <BLANKLINE>
-    Error in test zope/testing/testrunner-ex/sample3/post_mortem5.txt
-    Traceback (most recent call last):
-      File "zope/testing/doctest.py", line 2276, in debug
-        runner.run(self._dt_test)
-      File "zope/testing/doctest.py", line 1731, in run
-        r = DocTestRunner.run(self, test, compileflags, out, False)
-      File "zope/testing/doctest.py", line 1389, in run
-        return self.__run(test, compileflags, out)
-      File "zope/testing/doctest.py", line 1310, in __run
-        exc_info)
-      File "zope/testing/doctest.py", line 1737, in report_unexpected_exception
-        raise UnexpectedException(test, example, exc_info)
-    UnexpectedException: testrunner-ex/sample3/post_mortem5.txt:0 (2 examples)>
-    <BLANKLINE>
-    exceptions.ValueError:
-    <BLANKLINE>
-    > <doctest post_mortem5.txt[1]>(1)...()
-    (Pdb) p x
-    1
-    (Pdb) c
-    True
-
-
-Post-mortem debugging function called from file-based doctest:
-
-    >>> sys.stdin = Input('p x\nc')
-    >>> sys.argv = ('test -ssample3 --tests-pattern ^sampletests_d$'
-    ...             ' -t post_mortem6 -D').split()
-    >>> try: testrunner.run(defaults)
-    ... finally: sys.stdin = real_stdin
-    ... # doctest: +NORMALIZE_WHITESPACE
-    Running unit tests:
-    <BLANKLINE>
-    <BLANKLINE>
-    Error in test zope/testing/testrunner-ex/sample3/post_mortem6.txt
-    Traceback (most recent call last):
-      File "zope/testing/doctest.py", line 2276, in debug
-        runner.run(self._dt_test)
-      File "zope/testing/doctest.py", line 1731, in run
-        r = DocTestRunner.run(self, test, compileflags, out, False)
-      File "zope/testing/doctest.py", line 1389, in run
-        return self.__run(test, compileflags, out)
-      File "zope/testing/doctest.py", line 1310, in __run
-        exc_info)
-      File "zope/testing/doctest.py", line 1737, in report_unexpected_exception
-        raise UnexpectedException(test, example, exc_info)
-    UnexpectedException: testrunner-ex/sample3/post_mortem6.txt:0 (2 examples)>
-    <BLANKLINE>
-    exceptions.ValueError:
-    <BLANKLINE>
-    > testrunner-ex/sample3/sampletests_d.py(46)g()
-    -> raise ValueError
-    (Pdb) p x
-    1
-    (Pdb) c
-    True
-
-Post-mortem debugging of a docstring doctest failure:
-
-    >>> sys.stdin = Input('p x\nc')
-    >>> sys.argv = ('test -ssample3 --tests-pattern ^sampletests_d$'
-    ...             ' -t post_mortem_failure2 -D').split()
-    >>> try: testrunner.run(defaults)
-    ... finally: sys.stdin = real_stdin
-    ... # doctest: +NORMALIZE_WHITESPACE
-    Running unit tests:
-    <BLANKLINE>
-    <BLANKLINE>
-    Error in test post_mortem_failure2 (sample3.sampletests_d)
-    <BLANKLINE>
-    File "testrunner-ex/sample3/sampletests_d.py",
-                   line 81, in sample3.sampletests_d.post_mortem_failure2
-    <BLANKLINE>
-    x
-    <BLANKLINE>
-    Want:
-    2
-    <BLANKLINE>
-    Got:
-    1
-    <BLANKLINE>
-    <BLANKLINE>
-    > testrunner-ex/sample3/sampletests_d.py(81)_()
-    exceptions.ValueError:
-    Expected and actual output are different
-    > <string>(1)...()
-    (Pdb) p x
-    1
-    (Pdb) c
-    True
-
-
-Post-mortem debugging of a docfile doctest failure:
-
-    >>> sys.stdin = Input('p x\nc')
-    >>> sys.argv = ('test -ssample3 --tests-pattern ^sampletests_d$'
-    ...             ' -t post_mortem_failure.txt -D').split()
-    >>> try: testrunner.run(defaults)
-    ... finally: sys.stdin = real_stdin
-    ... # doctest: +NORMALIZE_WHITESPACE
-    Running unit tests:
-    <BLANKLINE>
-    <BLANKLINE>
-    Error in test /home/jim/z3/zope.testing/src/zope/testing/testrunner-ex/sample3/post_mortem_failure.txt
-    <BLANKLINE>
-    File "testrunner-ex/sample3/post_mortem_failure.txt",
-                                      line 2, in post_mortem_failure.txt
-    <BLANKLINE>
-    x
-    <BLANKLINE>
-    Want:
-    2
-    <BLANKLINE>
-    Got:
-    1
-    <BLANKLINE>
-    <BLANKLINE>
-    > testrunner-ex/sample3/post_mortem_failure.txt(2)_()
-    exceptions.ValueError:
-    Expected and actual output are different
-    > <string>(1)...()
-    (Pdb) p x
-    1
-    (Pdb) c
-    True
-
-Post-mortem debugging with triple verbosity:
-
-    >>> sys.argv = 'test --layer samplelayers.Layer1$ -vvv -D'.split()
-    >>> testrunner.run(defaults)
-    Running tests at level 1
-    Running samplelayers.Layer1 tests:
-      Set up samplelayers.Layer1 in 0.000 seconds.
-      Running:
-        test_x1 (sampletestsf.TestA1) (0.000 s)
-        test_y0 (sampletestsf.TestA1) (0.000 s)
-        test_z0 (sampletestsf.TestA1) (0.000 s)
-        test_x0 (sampletestsf.TestB1) (0.000 s)
-        test_y1 (sampletestsf.TestB1) (0.000 s)
-        test_z0 (sampletestsf.TestB1) (0.000 s)
-        test_1 (sampletestsf.TestNotMuch1) (0.000 s)
-        test_2 (sampletestsf.TestNotMuch1) (0.000 s)
-        test_3 (sampletestsf.TestNotMuch1) (0.000 s)
-      Ran 9 tests with 0 failures and 0 errors in 0.001 seconds.
-    Tearing down left over layers:
-      Tear down samplelayers.Layer1 in 0.000 seconds.
-    False
-
-Test Suites with None for suites or tests
------------------------------------------
-
-    >>> sys.argv = ['test',
-    ...             '--tests-pattern', '^sampletests_none_suite$',
-    ...     ]
-    >>> testrunner.run(defaults)
-    Test-module import failures:
-    <BLANKLINE>
-    Module: sample1.sampletests_none_suite
-    <BLANKLINE>
-    TypeError: Invalid test_suite, None, in sample1.sampletests_none_suite
-    <BLANKLINE>
-    <BLANKLINE>
-    Total: 0 tests, 0 failures, 0 errors in N.NNN seconds.
-    <BLANKLINE>
-    Test-modules with import problems:
-      sample1.sampletests_none_suite
-    True
-
-
-    >>> sys.argv = ['test',
-    ...             '--tests-pattern', '^sampletests_none_test$',
-    ...     ]
-    >>> testrunner.run(defaults)
-    Test-module import failures:
-    <BLANKLINE>
-    Module: sample1.sampletests_none_test
-    <BLANKLINE>
-    TypeError: ...
-    <BLANKLINE>
-    <BLANKLINE>
-    Total: 0 tests, 0 failures, 0 errors in N.NNN seconds.
-    <BLANKLINE>
-    Test-modules with import problems:
-      sample1.sampletests_none_test
-    True
-
-You must use --repeat with --report-refcounts
----------------------------------------------
-
-It is an error to specify --report-refcounts (-r) without specifying a
-repeat count greater than 1:
-
-    >>> sys.argv = 'test -r'.split()
-    >>> testrunner.run(defaults)
-            You must use the --repeat (-N) option to specify a repeat
-            count greater than 1 when using the --report_refcounts (-r)
-            option.
-    <BLANKLINE>
-    True
-
-    >>> sys.argv = 'test -r -N1'.split()
-    >>> testrunner.run(defaults)
-            You must use the --repeat (-N) option to specify a repeat
-            count greater than 1 when using the --report_refcounts (-r)
-            option.
-    <BLANKLINE>
-    True

Deleted: zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-errors.txt
===================================================================
--- zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-errors.txt	2008-05-03 13:24:29 UTC (rev 86206)
+++ zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-errors.txt	2008-05-03 13:25:02 UTC (rev 86207)
@@ -1,856 +0,0 @@
-Errors and Failures
-===================
-
-Let's look at tests that have errors and failures.  First, we need to make a
-temporary copy of the entire testing directory (except .svn files, which may
-be read-only):
-
-    >>> import os.path, sys, tempfile, shutil
-    >>> tmpdir = tempfile.mkdtemp()
-    >>> directory_with_tests = os.path.join(tmpdir, 'testrunner-ex')
-    >>> source = os.path.join(this_directory, 'testrunner-ex')
-    >>> n = len(source) + 1
-    >>> for root, dirs, files in os.walk(source):
-    ...     dirs[:] = [d for d in dirs if d != ".svn"] # prune cruft
-    ...     os.mkdir(os.path.join(directory_with_tests, root[n:]))
-    ...     for f in files:
-    ...         shutil.copy(os.path.join(root, f),
-    ...                     os.path.join(directory_with_tests, root[n:], f))
-    
-    >>> from zope.testing import testrunner
-    >>> defaults = [
-    ...     '--path', directory_with_tests,
-    ...     '--tests-pattern', '^sampletestsf?$',
-    ...     ]
-
-    >>> sys.argv = 'test --tests-pattern ^sampletests(f|_e|_f)?$ '.split()
-    >>> testrunner.run(defaults)
-    ... # doctest: +NORMALIZE_WHITESPACE
-    Running unit tests:
-    <BLANKLINE>
-    <BLANKLINE>
-    Failure in test eek (sample2.sampletests_e)
-    Failed doctest test for sample2.sampletests_e.eek
-      File "testrunner-ex/sample2/sampletests_e.py", line 28, in eek
-    <BLANKLINE>
-    ----------------------------------------------------------------------
-    File "testrunner-ex/sample2/sampletests_e.py", line 30, in sample2.sampletests_e.eek
-    Failed example:
-        f()
-    Exception raised:
-        Traceback (most recent call last):
-          File ".../doctest.py", line 1256, in __run
-            compileflags, 1) in test.globs
-          File "<doctest sample2.sampletests_e.eek[0]>", line 1, in ?
-            f()
-          File "testrunner-ex/sample2/sampletests_e.py", line 19, in f
-            g()
-          File "testrunner-ex/sample2/sampletests_e.py", line 24, in g
-            x = y + 1
-        NameError: global name 'y' is not defined
-    <BLANKLINE>
-    <BLANKLINE>
-    <BLANKLINE>
-    Error in test test3 (sample2.sampletests_e.Test)
-    Traceback (most recent call last):
-      File "testrunner-ex/sample2/sampletests_e.py", line 43, in test3
-        f()
-      File "testrunner-ex/sample2/sampletests_e.py", line 19, in f
-        g()
-      File "testrunner-ex/sample2/sampletests_e.py", line 24, in g
-        x = y + 1
-    NameError: global name 'y' is not defined
-    <BLANKLINE>
-    <BLANKLINE>
-    <BLANKLINE>
-    Failure in test testrunner-ex/sample2/e.txt
-    Failed doctest test for e.txt
-      File "testrunner-ex/sample2/e.txt", line 0
-    <BLANKLINE>
-    ----------------------------------------------------------------------
-    File "testrunner-ex/sample2/e.txt", line 4, in e.txt
-    Failed example:
-        f()
-    Exception raised:
-        Traceback (most recent call last):
-          File ".../doctest.py", line 1256, in __run
-            compileflags, 1) in test.globs
-          File "<doctest e.txt[1]>", line 1, in ?
-            f()
-          File "<doctest e.txt[0]>", line 2, in f
-            return x
-        NameError: global name 'x' is not defined
-    <BLANKLINE>
-    <BLANKLINE>
-    <BLANKLINE>
-    Failure in test test (sample2.sampletests_f.Test)
-    Traceback (most recent call last):
-      File "testrunner-ex/sample2/sampletests_f.py", line 21, in test
-        self.assertEqual(1,0)
-      File "/usr/local/python/2.3/lib/python2.3/unittest.py", line 302, in failUnlessEqual
-        raise self.failureException, \
-    AssertionError: 1 != 0
-    <BLANKLINE>
-      Ran 200 tests with 3 failures and 1 errors in 0.038 seconds.
-    Running samplelayers.Layer1 tests:
-      Set up samplelayers.Layer1 in 0.000 seconds.
-      Ran 9 tests with 0 failures and 0 errors in 0.000 seconds.
-    Running samplelayers.Layer11 tests:
-      Set up samplelayers.Layer11 in 0.000 seconds.
-      Ran 34 tests with 0 failures and 0 errors in 0.007 seconds.
-    Running samplelayers.Layer111 tests:
-      Set up samplelayers.Layerx in 0.000 seconds.
-      Set up samplelayers.Layer111 in 0.000 seconds.
-      Ran 34 tests with 0 failures and 0 errors in 0.007 seconds.
-    Running samplelayers.Layer112 tests:
-      Tear down samplelayers.Layer111 in 0.000 seconds.
-      Set up samplelayers.Layer112 in 0.000 seconds.
-      Ran 34 tests with 0 failures and 0 errors in 0.006 seconds.
-    Running samplelayers.Layer12 tests:
-      Tear down samplelayers.Layer112 in 0.000 seconds.
-      Tear down samplelayers.Layerx in 0.000 seconds.
-      Tear down samplelayers.Layer11 in 0.000 seconds.
-      Set up samplelayers.Layer12 in 0.000 seconds.
-      Ran 34 tests with 0 failures and 0 errors in 0.007 seconds.
-    Running samplelayers.Layer121 tests:
-      Set up samplelayers.Layer121 in 0.000 seconds.
-      Ran 34 tests with 0 failures and 0 errors in 0.006 seconds.
-    Running samplelayers.Layer122 tests:
-      Tear down samplelayers.Layer121 in 0.000 seconds.
-      Set up samplelayers.Layer122 in 0.000 seconds.
-      Ran 34 tests with 0 failures and 0 errors in 0.007 seconds.
-    Tearing down left over layers:
-      Tear down samplelayers.Layer122 in 0.000 seconds.
-      Tear down samplelayers.Layer12 in 0.000 seconds.
-      Tear down samplelayers.Layer1 in 0.000 seconds.
-    Total: 413 tests, 3 failures, 1 errors in N.NNN seconds.
-    True
-
-We see that we get an error report and a traceback for the failing
-test.  In addition, the test runner returned True, indicating that
-there was an error.
-
-If we ask for verbosity, the dotted output will be interrupted, and
-there'll be a summary of the errors at the end of the test run:
-
-    >>> sys.argv = 'test --tests-pattern ^sampletests(f|_e|_f)?$ -uv'.split()
-    >>> testrunner.run(defaults)
-    ... # doctest: +NORMALIZE_WHITESPACE +REPORT_NDIFF
-    Running tests at level 1
-    Running unit tests:
-      Running:
-     .................................................................................................
-    <BLANKLINE>
-    Failure in test eek (sample2.sampletests_e)
-    Failed doctest test for sample2.sampletests_e.eek
-      File "testrunner-ex/sample2/sampletests_e.py", line 28, in eek
-    <BLANKLINE>
-    ----------------------------------------------------------------------
-    File "testrunner-ex/sample2/sampletests_e.py", line 30,
-        in sample2.sampletests_e.eek
-    Failed example:
-        f()
-    Exception raised:
-        Traceback (most recent call last):
-          File ".../doctest.py", line 1256, in __run
-            compileflags, 1) in test.globs
-          File "<doctest sample2.sampletests_e.eek[0]>", line 1, in ?
-            f()
-          File "testrunner-ex/sample2/sampletests_e.py", line 19, in f
-            g()
-          File "testrunner-ex/sample2/sampletests_e.py", line 24, in g
-            x = y + 1
-        NameError: global name 'y' is not defined
-    <BLANKLINE>
-    ...
-    <BLANKLINE>
-    <BLANKLINE>
-    Error in test test3 (sample2.sampletests_e.Test)
-    Traceback (most recent call last):
-      File "testrunner-ex/sample2/sampletests_e.py", line 43, in test3
-        f()
-      File "testrunner-ex/sample2/sampletests_e.py", line 19, in f
-        g()
-      File "testrunner-ex/sample2/sampletests_e.py", line 24, in g
-        x = y + 1
-    NameError: global name 'y' is not defined
-    <BLANKLINE>
-    ...
-    <BLANKLINE>
-    Failure in test testrunner-ex/sample2/e.txt
-    Failed doctest test for e.txt
-      File "testrunner-ex/sample2/e.txt", line 0
-    <BLANKLINE>
-    ----------------------------------------------------------------------
-    File "testrunner-ex/sample2/e.txt", line 4, in e.txt
-    Failed example:
-        f()
-    Exception raised:
-        Traceback (most recent call last):
-          File ".../doctest.py", line 1256, in __run
-            compileflags, 1) in test.globs
-          File "<doctest e.txt[1]>", line 1, in ?
-            f()
-          File "<doctest e.txt[0]>", line 2, in f
-            return x
-        NameError: global name 'x' is not defined
-    <BLANKLINE>
-    .
-    <BLANKLINE>
-    Failure in test test (sample2.sampletests_f.Test)
-    Traceback (most recent call last):
-      File "testrunner-ex/sample2/sampletests_f.py", line 21, in test
-        self.assertEqual(1,0)
-      File ".../unittest.py", line 302, in failUnlessEqual
-        raise self.failureException, \
-    AssertionError: 1 != 0
-    <BLANKLINE>
-    ................................................................................................
-    <BLANKLINE>
-      Ran 200 tests with 3 failures and 1 errors in 0.040 seconds.
-    <BLANKLINE>
-    Tests with errors:
-       test3 (sample2.sampletests_e.Test)
-    <BLANKLINE>
-    Tests with failures:
-       eek (sample2.sampletests_e)
-       testrunner-ex/sample2/e.txt
-       test (sample2.sampletests_f.Test)
-    True
-
-Similarly for progress output, the progress ticker will be interrupted:
-
-    >>> sys.argv = ('test --tests-pattern ^sampletests(f|_e|_f)?$ -u -ssample2'
-    ...             ' -p').split()
-    >>> testrunner.run(defaults)
-    ... # doctest: +NORMALIZE_WHITESPACE +REPORT_NDIFF
-    Running unit tests:
-      Running:
-        1/56 (1.8%)
-    <BLANKLINE>
-    Failure in test eek (sample2.sampletests_e)
-    Failed doctest test for sample2.sampletests_e.eek
-      File "testrunner-ex/sample2/sampletests_e.py", line 28, in eek
-    <BLANKLINE>
-    ----------------------------------------------------------------------
-    File "testrunner-ex/sample2/sampletests_e.py", line 30,
-           in sample2.sampletests_e.eek
-    Failed example:
-        f()
-    Exception raised:
-        Traceback (most recent call last):
-          File ".../doctest.py", line 1256, in __run
-            compileflags, 1) in test.globs
-          File "<doctest sample2.sampletests_e.eek[0]>", line 1, in ?
-            f()
-          File "testrunner-ex/sample2/sampletests_e.py", line 19, in f
-            g()
-          File "testrunner-ex/sample2/sampletests_e.py", line 24, in g
-            x = y + 1
-        NameError: global name 'y' is not defined
-    <BLANKLINE>
-        2/56 (3.6%)\r
-                   \r
-        3/56 (5.4%)\r
-                   \r
-        4/56 (7.1%)
-    <BLANKLINE>
-    Error in test test3 (sample2.sampletests_e.Test)
-    Traceback (most recent call last):
-      File "testrunner-ex/sample2/sampletests_e.py", line 43, in test3
-        f()
-      File "testrunner-ex/sample2/sampletests_e.py", line 19, in f
-        g()
-      File "testrunner-ex/sample2/sampletests_e.py", line 24, in g
-        x = y + 1
-    NameError: global name 'y' is not defined
-    <BLANKLINE>
-        5/56 (8.9%)\r
-                   \r
-        6/56 (10.7%)\r
-                    \r
-        7/56 (12.5%)
-    <BLANKLINE>
-    Failure in test testrunner-ex/sample2/e.txt
-    Failed doctest test for e.txt
-      File "testrunner-ex/sample2/e.txt", line 0
-    <BLANKLINE>
-    ----------------------------------------------------------------------
-    File "testrunner-ex/sample2/e.txt", line 4, in e.txt
-    Failed example:
-        f()
-    Exception raised:
-        Traceback (most recent call last):
-          File ".../doctest.py", line 1256, in __run
-            compileflags, 1) in test.globs
-          File "<doctest e.txt[1]>", line 1, in ?
-            f()
-          File "<doctest e.txt[0]>", line 2, in f
-            return x
-        NameError: global name 'x' is not defined
-    <BLANKLINE>
-        8/56 (14.3%)
-    <BLANKLINE>
-    Failure in test test (sample2.sampletests_f.Test)
-    Traceback (most recent call last):
-      File "testrunner-ex/sample2/sampletests_f.py", line 21, in test
-        self.assertEqual(1,0)
-      File ".../unittest.py", line 302, in failUnlessEqual
-        raise self.failureException, \
-    AssertionError: 1 != 0
-    <BLANKLINE>
-        9/56 (16.1%)\r
-                    \r
-        10/56 (17.9%)\r
-                     \r
-        11/56 (19.6%)\r
-                     \r
-        12/56 (21.4%)\r
-                     \r
-        13/56 (23.2%)\r
-                     \r
-        14/56 (25.0%)\r
-                     \r
-        15/56 (26.8%)\r
-                     \r
-        16/56 (28.6%)\r
-                     \r
-        17/56 (30.4%)\r
-                     \r
-        18/56 (32.1%)\r
-                     \r
-        19/56 (33.9%)\r
-                     \r
-        20/56 (35.7%)\r
-                     \r
-        24/56 (42.9%)\r
-                     \r
-        25/56 (44.6%)\r
-                     \r
-        26/56 (46.4%)\r
-                     \r
-        27/56 (48.2%)\r
-                     \r
-        28/56 (50.0%)\r
-                     \r
-        29/56 (51.8%)\r
-                     \r
-        30/56 (53.6%)\r
-                     \r
-        31/56 (55.4%)\r
-                     \r
-        32/56 (57.1%)\r
-                     \r
-        33/56 (58.9%)\r
-                     \r
-        34/56 (60.7%)\r
-                     \r
-        35/56 (62.5%)\r
-                     \r
-        36/56 (64.3%)\r
-                     \r
-        40/56 (71.4%)\r
-                     \r
-        41/56 (73.2%)\r
-                     \r
-        42/56 (75.0%)\r
-                     \r
-        43/56 (76.8%)\r
-                     \r
-        44/56 (78.6%)\r
-                     \r
-        45/56 (80.4%)\r
-                     \r
-        46/56 (82.1%)\r
-                     \r
-        47/56 (83.9%)\r
-                     \r
-        48/56 (85.7%)\r
-                     \r
-        49/56 (87.5%)\r
-                     \r
-        50/56 (89.3%)\r
-                     \r
-        51/56 (91.1%)\r
-                     \r
-        52/56 (92.9%)\r
-                     \r
-        56/56 (100.0%)\r
-                      \r
-    <BLANKLINE>
-      Ran 56 tests with 3 failures and 1 errors in 0.054 seconds.
-    True
-
-If you also want a summary of errors at the end, ask for verbosity as well
-as progress output.
-
-
-Suppressing multiple doctest errors
------------------------------------
-
-Often, when a doctest example fails, the failure will cause later
-examples in the same test to fail.  Each failure is reported:
-
-    >>> sys.argv = 'test --tests-pattern ^sampletests_1$'.split()
-    >>> testrunner.run(defaults) # doctest: +NORMALIZE_WHITESPACE
-    Running unit tests:
-    <BLANKLINE>
-    <BLANKLINE>
-    Failure in test eek (sample2.sampletests_1)
-    Failed doctest test for sample2.sampletests_1.eek
-      File "testrunner-ex/sample2/sampletests_1.py", line 17, in eek
-    <BLANKLINE>
-    ----------------------------------------------------------------------
-    File "testrunner-ex/sample2/sampletests_1.py", line 19,
-         in sample2.sampletests_1.eek
-    Failed example:
-        x = y
-    Exception raised:
-        Traceback (most recent call last):
-          File ".../doctest.py", line 1256, in __run
-            compileflags, 1) in test.globs
-          File "<doctest sample2.sampletests_1.eek[0]>", line 1, in ?
-            x = y
-        NameError: name 'y' is not defined
-    ----------------------------------------------------------------------
-    File "testrunner-ex/sample2/sampletests_1.py", line 21,
-         in sample2.sampletests_1.eek
-    Failed example:
-        x
-    Exception raised:
-        Traceback (most recent call last):
-          File ".../doctest.py", line 1256, in __run
-            compileflags, 1) in test.globs
-          File "<doctest sample2.sampletests_1.eek[1]>", line 1, in ?
-            x
-        NameError: name 'x' is not defined
-    ----------------------------------------------------------------------
-    File "testrunner-ex/sample2/sampletests_1.py", line 24,
-         in sample2.sampletests_1.eek
-    Failed example:
-        z = x + 1
-    Exception raised:
-        Traceback (most recent call last):
-          File ".../doctest.py", line 1256, in __run
-            compileflags, 1) in test.globs
-          File "<doctest sample2.sampletests_1.eek[2]>", line 1, in ?
-            z = x + 1
-        NameError: name 'x' is not defined
-    <BLANKLINE>
-      Ran 1 tests with 1 failures and 0 errors in 0.002 seconds.
-    True
-
-This can be a bit confusing, especially when there are enough tests
-that they scroll off a screen.  Often you just want to see the first
-failure.  This can be accomplished with the -1 option (for "just show
-me the first failed example in a doctest" :)
-
-    >>> sys.argv = 'test --tests-pattern ^sampletests_1$ -1'.split()
-    >>> testrunner.run(defaults) # doctest:
-    Running unit tests:
-    <BLANKLINE>
-    <BLANKLINE>
-    Failure in test eek (sample2.sampletests_1)
-    Failed doctest test for sample2.sampletests_1.eek
-      File "testrunner-ex/sample2/sampletests_1.py", line 17, in eek
-    <BLANKLINE>
-    ----------------------------------------------------------------------
-    File "testrunner-ex/sample2/sampletests_1.py", line 19,
-         in sample2.sampletests_1.eek
-    Failed example:
-        x = y
-    Exception raised:
-        Traceback (most recent call last):
-          File ".../doctest.py", line 1256, in __run
-            compileflags, 1) in test.globs
-          File "<doctest sample2.sampletests_1.eek[0]>", line 1, in ?
-            x = y
-        NameError: name 'y' is not defined
-    <BLANKLINE>
-      Ran 1 tests with 1 failures and 0 errors in 0.001 seconds.
-    True
-
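Outside the test runner, the same behaviour is available from the standard
doctest module through its REPORT_ONLY_FIRST_FAILURE option flag.  A small
self-contained sketch (independent of zope.testing):

    import doctest

    def eek_like():
        """
        >>> x = y      # first failure: y is not defined
        >>> x          # secondary failure, suppressed by the flag below
        """

    # Only the first failing example of the doctest is reported.
    doctest.run_docstring_examples(
        eek_like, {}, optionflags=doctest.REPORT_ONLY_FIRST_FAILURE)
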
-The --hide-secondary-failures option is an alias for -1:
-
-    >>> sys.argv = (
-    ...     'test --tests-pattern ^sampletests_1$'
-    ...     ' --hide-secondary-failures'
-    ...     ).split()
-    >>> testrunner.run(defaults) # doctest:
-    Running unit tests:
-    <BLANKLINE>
-    <BLANKLINE>
-    Failure in test eek (sample2.sampletests_1)
-    Failed doctest test for sample2.sampletests_1.eek
-      File "testrunner-ex/sample2/sampletests_1.py", line 17, in eek
-    <BLANKLINE>
-    ----------------------------------------------------------------------
-    File "testrunner-ex/sample2/sampletests_1.py", line 19,
-         in sample2.sampletests_1.eek
-    Failed example:
-        x = y
-    Exception raised:
-        Traceback (most recent call last):
-          File ".../doctest.py", line 1256, in __run
-            compileflags, 1) in test.globs
-          File "<doctest sample2.sampletests_1.eek[0]>", line 1, in ?
-            x = y
-        NameError: name 'y' is not defined
-    <BLANKLINE>
-      Ran 1 tests with 1 failures and 0 errors in 0.001 seconds.
-    True
-
-The --show-secondary-failures option counteracts -1 (or its alias),
-causing the second and subsequent errors to be shown.  This is useful
-if -1 is provided by a test script by inserting it ahead of
-command-line options in sys.argv.
-
-    >>> sys.argv = (
-    ...     'test --tests-pattern ^sampletests_1$'
-    ...     ' --hide-secondary-failures --show-secondary-failures'
-    ...     ).split()
-    >>> testrunner.run(defaults) # doctest: +NORMALIZE_WHITESPACE
-    Running unit tests:
-    <BLANKLINE>
-    <BLANKLINE>
-    Failure in test eek (sample2.sampletests_1)
-    Failed doctest test for sample2.sampletests_1.eek
-      File "testrunner-ex/sample2/sampletests_1.py", line 17, in eek
-    <BLANKLINE>
-    ----------------------------------------------------------------------
-    File "testrunner-ex/sample2/sampletests_1.py", line 19,
-         in sample2.sampletests_1.eek
-    Failed example:
-        x = y
-    Exception raised:
-        Traceback (most recent call last):
-          File ".../doctest.py", line 1256, in __run
-            compileflags, 1) in test.globs
-          File "<doctest sample2.sampletests_1.eek[0]>", line 1, in ?
-            x = y
-        NameError: name 'y' is not defined
-    ----------------------------------------------------------------------
-    File "testrunner-ex/sample2/sampletests_1.py", line 21,
-         in sample2.sampletests_1.eek
-    Failed example:
-        x
-    Exception raised:
-        Traceback (most recent call last):
-          File ".../doctest.py", line 1256, in __run
-            compileflags, 1) in test.globs
-          File "<doctest sample2.sampletests_1.eek[1]>", line 1, in ?
-            x
-        NameError: name 'x' is not defined
-    ----------------------------------------------------------------------
-    File "testrunner-ex/sample2/sampletests_1.py", line 24,
-         in sample2.sampletests_1.eek
-    Failed example:
-        z = x + 1
-    Exception raised:
-        Traceback (most recent call last):
-          File ".../doctest.py", line 1256, in __run
-            compileflags, 1) in test.globs
-          File "<doctest sample2.sampletests_1.eek[2]>", line 1, in ?
-            z = x + 1
-        NameError: name 'x' is not defined
-    <BLANKLINE>
-      Ran 1 tests with 1 failures and 0 errors in 0.002 seconds.
-    True
-
-
-Getting diff output for doctest failures
-----------------------------------------
-
-If a doctest has large expected and actual output, it can be hard to
-see where they differ.  The --ndiff,
---udiff, and --cdiff options can be used to get diff output of various
-kinds.
-
-    >>> sys.argv = 'test --tests-pattern ^pledge$'.split()
-    >>> _ = testrunner.run(defaults)
-    Running unit tests:
-    <BLANKLINE>
-    <BLANKLINE>
-    Failure in test pledge (pledge)
-    Failed doctest test for pledge.pledge
-      File "testrunner-ex/pledge.py", line 24, in pledge
-    <BLANKLINE>
-    ----------------------------------------------------------------------
-    File "testrunner-ex/pledge.py", line 26, in pledge.pledge
-    Failed example:
-        print pledge_template % ('and earthling', 'planet'),
-    Expected:
-        I give my pledge, as an earthling,
-        to save, and faithfully, to defend from waste,
-        the natural resources of my planet.
-        It's soils, minerals, forests, waters, and wildlife.
-    Got:
-        I give my pledge, as and earthling,
-        to save, and faithfully, to defend from waste,
-        the natural resources of my planet.
-        It's soils, minerals, forests, waters, and wildlife.
-    <BLANKLINE>
-      Ran 1 tests with 1 failures and 0 errors in 0.002 seconds.
-
-Here, the actual output uses the word "and" rather than the word "an",
-but it's a bit hard to pick this out.  We can use the various diff
-outputs to see this better. We could modify the test to ask for diff
-output, but it's easier to use one of the diff options.
-
-The --ndiff option requests a diff using Python's ndiff utility. This
-is the only method that marks differences within lines as well as
-across lines. For example, if a line of expected output contains digit
-1 where actual output contains letter l, a line is inserted with a
-caret marking the mismatching column positions.
-
-    >>> sys.argv = 'test --tests-pattern ^pledge$ --ndiff'.split()
-    >>> _ = testrunner.run(defaults)
-    Running unit tests:
-    <BLANKLINE>
-    <BLANKLINE>
-    Failure in test pledge (pledge)
-    Failed doctest test for pledge.pledge
-      File "testrunner-ex/pledge.py", line 24, in pledge
-    <BLANKLINE>
-    ----------------------------------------------------------------------
-    File "testrunner-ex/pledge.py", line 26, in pledge.pledge
-    Failed example:
-        print pledge_template % ('and earthling', 'planet'),
-    Differences (ndiff with -expected +actual):
-        - I give my pledge, as an earthling,
-        + I give my pledge, as and earthling,
-        ?                        +
-          to save, and faithfully, to defend from waste,
-          the natural resources of my planet.
-          It's soils, minerals, forests, waters, and wildlife.
-    <BLANKLINE>
-      Ran 1 tests with 1 failures and 0 errors in 0.003 seconds.
-
-The --udiff option requests a standard "unified" diff:
-
-    >>> sys.argv = 'test --tests-pattern ^pledge$ --udiff'.split()
-    >>> _ = testrunner.run(defaults)
-    Running unit tests:
-    <BLANKLINE>
-    <BLANKLINE>
-    Failure in test pledge (pledge)
-    Failed doctest test for pledge.pledge
-      File "testrunner-ex/pledge.py", line 24, in pledge
-    <BLANKLINE>
-    ----------------------------------------------------------------------
-    File "testrunner-ex/pledge.py", line 26, in pledge.pledge
-    Failed example:
-        print pledge_template % ('and earthling', 'planet'),
-    Differences (unified diff with -expected +actual):
-        @@ -1,3 +1,3 @@
-        -I give my pledge, as an earthling,
-        +I give my pledge, as and earthling,
-         to save, and faithfully, to defend from waste,
-         the natural resources of my planet.
-    <BLANKLINE>
-      Ran 1 tests with 1 failures and 0 errors in 0.002 seconds.
-
-The --cdiff option requests a standard "context" diff:
-
-    >>> sys.argv = 'test --tests-pattern ^pledge$ --cdiff'.split()
-    >>> _ = testrunner.run(defaults)
-    Running unit tests:
-    <BLANKLINE>
-    <BLANKLINE>
-    Failure in test pledge (pledge)
-    Failed doctest test for pledge.pledge
-      File "testrunner-ex/pledge.py", line 24, in pledge
-    <BLANKLINE>
-    ----------------------------------------------------------------------
-    File "testrunner-ex/pledge.py", line 26, in pledge.pledge
-    Failed example:
-        print pledge_template % ('and earthling', 'planet'),
-    Differences (context diff with expected followed by actual):
-        ***************
-        *** 1,3 ****
-        ! I give my pledge, as an earthling,
-          to save, and faithfully, to defend from waste,
-          the natural resources of my planet.
-        --- 1,3 ----
-        ! I give my pledge, as and earthling,
-          to save, and faithfully, to defend from waste,
-          the natural resources of my planet.
-    <BLANKLINE>
-      Ran 1 tests with 1 failures and 0 errors in 0.002 seconds.
-
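The three formats correspond to doctest's REPORT_NDIFF, REPORT_UDIFF and
REPORT_CDIFF option flags, which use the standard difflib module under the
hood.  For reference, a minimal difflib sketch producing comparable output:

    import difflib

    expected = ['I give my pledge, as an earthling,\n']
    actual   = ['I give my pledge, as and earthling,\n']

    # ndiff marks intra-line differences with a '?' caret line.
    print ''.join(difflib.ndiff(expected, actual))

    # Unified and context diffs show whole changed lines plus context.
    print ''.join(difflib.unified_diff(expected, actual))
    print ''.join(difflib.context_diff(expected, actual))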
-
-Testing-Module Import Errors
-----------------------------
-
-If there are errors when importing a test module, these errors are
-reported.  In order to illustrate a module with a syntax error, we create
-one now:  this module used to be checked in to the project, but then it was
-included in distributions of projects using zope.testing too, and distutils
-complained about the syntax error when it compiled Python files during
-installation of such projects.  So first we create a module with bad syntax:
-
-    >>> badsyntax_path = os.path.join(directory_with_tests,
-    ...                               "sample2", "sampletests_i.py")
-    >>> f = open(badsyntax_path, "w")
-    >>> print >> f, "importx unittest"  # syntax error
-    >>> f.close()
-
-Then run the tests:
-
-    >>> sys.argv = ('test --tests-pattern ^sampletests(f|_i)?$ --layer 1 '
-    ...            ).split()
-    >>> testrunner.run(defaults)
-    ... # doctest: +NORMALIZE_WHITESPACE
-    Test-module import failures:
-    <BLANKLINE>
-    Module: sample2.sampletests_i
-    <BLANKLINE>
-      File "testrunner-ex/sample2/sampletests_i.py", line 1
-        importx unittest
-                       ^
-    SyntaxError: invalid syntax
-    <BLANKLINE>
-    <BLANKLINE>
-    Module: sample2.sample21.sampletests_i
-    <BLANKLINE>
-    Traceback (most recent call last):
-      File "testrunner-ex/sample2/sample21/sampletests_i.py", line 15, in ?
-        import zope.testing.huh
-    ImportError: No module named huh
-    <BLANKLINE>
-    <BLANKLINE>
-    Module: sample2.sample22.sampletests_i
-    <BLANKLINE>
-    AttributeError: 'module' object has no attribute 'test_suite'
-    <BLANKLINE>
-    <BLANKLINE>
-    Module: sample2.sample23.sampletests_i
-    <BLANKLINE>
-    Traceback (most recent call last):
-      File "testrunner-ex/sample2/sample23/sampletests_i.py", line 18, in ?
-        class Test(unittest.TestCase):
-      File "testrunner-ex/sample2/sample23/sampletests_i.py", line 23, in Test
-        raise TypeError('eek')
-    TypeError: eek
-    <BLANKLINE>
-    <BLANKLINE>
-    Running samplelayers.Layer1 tests:
-      Set up samplelayers.Layer1 in 0.000 seconds.
-      Ran 9 tests with 0 failures and 0 errors in 0.000 seconds.
-    Running samplelayers.Layer11 tests:
-      Set up samplelayers.Layer11 in 0.000 seconds.
-      Ran 34 tests with 0 failures and 0 errors in 0.007 seconds.
-    Running samplelayers.Layer111 tests:
-      Set up samplelayers.Layerx in 0.000 seconds.
-      Set up samplelayers.Layer111 in 0.000 seconds.
-      Ran 34 tests with 0 failures and 0 errors in 0.007 seconds.
-    Running samplelayers.Layer112 tests:
-      Tear down samplelayers.Layer111 in 0.000 seconds.
-      Set up samplelayers.Layer112 in 0.000 seconds.
-      Ran 34 tests with 0 failures and 0 errors in 0.007 seconds.
-    Running samplelayers.Layer12 tests:
-      Tear down samplelayers.Layer112 in 0.000 seconds.
-      Tear down samplelayers.Layerx in 0.000 seconds.
-      Tear down samplelayers.Layer11 in 0.000 seconds.
-      Set up samplelayers.Layer12 in 0.000 seconds.
-      Ran 34 tests with 0 failures and 0 errors in 0.007 seconds.
-    Running samplelayers.Layer121 tests:
-      Set up samplelayers.Layer121 in 0.000 seconds.
-      Ran 34 tests with 0 failures and 0 errors in 0.007 seconds.
-    Running samplelayers.Layer122 tests:
-      Tear down samplelayers.Layer121 in 0.000 seconds.
-      Set up samplelayers.Layer122 in 0.000 seconds.
-      Ran 34 tests with 0 failures and 0 errors in 0.006 seconds.
-    Tearing down left over layers:
-      Tear down samplelayers.Layer122 in 0.000 seconds.
-      Tear down samplelayers.Layer12 in 0.000 seconds.
-      Tear down samplelayers.Layer1 in 0.000 seconds.
-    Total: 213 tests, 0 failures, 0 errors in N.NNN seconds.
-    <BLANKLINE>
-    Test-modules with import problems:
-      sample2.sampletests_i
-      sample2.sample21.sampletests_i
-      sample2.sample22.sampletests_i
-      sample2.sample23.sampletests_i
-    True
-
-
-Unicode Errors
---------------
-
-There was a bug that prevented decent error reporting when one result
-contained unicode and another did not:
-
-    >>> sys.argv = 'test --tests-pattern ^unicode$ -u'.split()
-    >>> testrunner.run(defaults) # doctest: +REPORT_NDIFF
-    Running unit tests:
-    <BLANKLINE>
-    <BLANKLINE>
-    Failure testrunner-ex/unicode.txt
-    Failed doctest test for unicode.txt
-     testrunner-ex/unicode.txt", line 0
-    <BLANKLINE>
-    ----------------------------------------------------------------------
-    File testrunner-ex/unicode.txt", Line NNN, in unicode.txt
-    Failed example:
-        print get_unicode()
-    Expected:
-        oink
-    Got:
-        foo — bar
-    ----------------------------------------------------------------------
-    File testrunner-ex/unicode.txt", Line NNN, in unicode.txt
-    Failed example:
-        'xyz'
-    Expected:
-        123
-    Got:
-        'xyz'
-    <BLANKLINE>
-      Ran 3 tests with 1 failures and 0 errors in N.NNN seconds.
-    True
-
- 
-Reporting Errors to Calling Processes
--------------------------------------
-
-The testrunner can return an error status, indicating that the tests
-failed.  This can be useful for an invoking process that wants to
-monitor the result of a test run.
-
-To use it, specify the "--exit-with-status" argument.
-
-    >>> sys.argv = (
-    ...     'test --exit-with-status --tests-pattern ^sampletests_1$'.split())
-    >>> try:
-    ...     testrunner.run(defaults)
-    ... except SystemExit, e:
-    ...     print 'exited with code', e.code
-    ... else:
-    ...     print 'sys.exit was not called'
-    ... # doctest: +ELLIPSIS
-    Running unit tests:
-    ...
-      Ran 1 tests with 1 failures and 0 errors in 0.002 seconds.
-    exited with code 1
-
-A passing test run does not call sys.exit.
-
-    >>> sys.argv = (
-    ...     'test --exit-with-status --tests-pattern ^sampletests$'.split())
-    >>> try:
-    ...     testrunner.run(defaults)
-    ... except SystemExit, e2:
-    ...     print 'oops'
-    ... else:
-    ...     print 'sys.exit was not called'
-    ... # doctest: +ELLIPSIS
-    Running unit tests:
-    ...
-    Total: 364 tests, 0 failures, 0 errors in N.NNN seconds.
-    ...
-    sys.exit was not called
-
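In essence the option maps the run's boolean failure result onto a process
exit status.  A sketch of the idea, with a hypothetical helper name (not
the runner's actual code):

    import sys

    def exit_if_failed(failed, exit_with_status):
        # With --exit-with-status, a failing run terminates the process
        # with a non-zero status so a calling script can detect it.
        if exit_with_status and failed:
            sys.exit(1)
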
-And remove the temporary directory:
-
-    >>> shutil.rmtree(tmpdir)

Deleted: zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-gc.txt
===================================================================
--- zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-gc.txt	2008-05-03 13:24:29 UTC (rev 86206)
+++ zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-gc.txt	2008-05-03 13:25:02 UTC (rev 86207)
@@ -1,82 +0,0 @@
-Garbage Collection Control
-==========================
-
-When you run into problems that seem to be caused by memory-management
-errors, it can be helpful to adjust Python's cyclic garbage collector
-or to get garbage collection statistics.  The --gc option can be used
-for this purpose.
-
-If you think you are getting a test failure due to a garbage
-collection problem, you can try disabling garbage collection by
-using the --gc option with a value of zero.
-
-    >>> import os.path, sys
-    >>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
-    >>> defaults = ['--path', directory_with_tests]
-
-    >>> from zope.testing import testrunner
-    
-    >>> sys.argv = 'test --tests-pattern ^gc0$ --gc 0 -vv'.split()
-    >>> _ = testrunner.run(defaults)
-    Running tests at level 1
-    Cyclic garbage collection is disabled.
-    Running unit tests:
-      Running:
-        make_sure_gc_is_disabled (gc0)
-      Ran 1 tests with 0 failures and 0 errors in 0.001 seconds.
-
-Alternatively, if you think you are having a garbage collection
-related problem, you can cause garbage collection to happen more often
-by providing a low threshold:
-    
-    >>> sys.argv = 'test --tests-pattern ^gc1$ --gc 1 -vv'.split()
-    >>> _ = testrunner.run(defaults)
-    Running tests at level 1
-    Cyclic garbage collection threshold set to: (1,)
-    Running unit tests:
-      Running:
-        make_sure_gc_threshold_is_one (gc1)
-      Ran 1 tests with 0 failures and 0 errors in 0.001 seconds.
-
-You can specify up to 3 --gc options to set each of the 3 gc threshold
-values:
-
-    
-    >>> sys.argv = ('test --tests-pattern ^gcset$ --gc 701 --gc 11 --gc 9 -vv'
-    ...             .split())
-    >>> _ = testrunner.run(defaults)
-    Running tests at level 1
-    Cyclic garbage collection threshold set to: (701, 11, 9)
-    Running unit tests:
-      Running:
-        make_sure_gc_threshold_is_701_11_9 (gcset)
-      Ran 1 tests with 0 failures and 0 errors in 0.001 seconds.
-
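The behaviour shown corresponds closely to direct calls into the standard
gc module.  A sketch of the idea (not the runner's actual code):

    import gc

    def apply_gc_thresholds(values):
        # --gc 0 disables cyclic garbage collection entirely.
        if values and values[0] == 0:
            gc.disable()
        # Otherwise the values become the collector's thresholds, e.g.
        # --gc 701 --gc 11 --gc 9  ->  gc.set_threshold(701, 11, 9)
        elif values:
            gc.set_threshold(*values)

    apply_gc_thresholds([701, 11, 9])
    print gc.get_threshold()          # prints (701, 11, 9)
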
-Garbage Collection Statistics
------------------------------
-
-You can enable gc debugging statistics using the --gc-options (-G)
-option.  You should provide names of one or more of the flags
-described in the library documentation for the gc module.
-
-The output statistics are written to standard error.  
-
-    >>> from StringIO import StringIO
-    >>> err = StringIO()
-    >>> stderr = sys.stderr
-    >>> sys.stderr = err
-    >>> sys.argv = ('test --tests-pattern ^gcstats$ -G DEBUG_STATS'
-    ...             ' -G DEBUG_COLLECTABLE -vv'
-    ...             .split())
-    >>> _ = testrunner.run(defaults)
-    Running tests at level 1
-    Running unit tests:
-      Running:
-        generate_some_gc_statistics (gcstats)
-      Ran 1 tests with 0 failures and 0 errors in 0.006 seconds.
-
-    >>> sys.stderr = stderr
-
-    >>> print err.getvalue()        # doctest: +ELLIPSIS
-    gc: collecting generation ...
-        
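-The flag names given to -G are the names of constants in the gc
-module.  Roughly, the run above amounts to a call like this (a sketch,
-not the test runner's actual code):
-
-    import gc
-
-    gc.set_debug(gc.DEBUG_STATS | gc.DEBUG_COLLECTABLE)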

Deleted: zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-knit.txt
===================================================================
--- zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-knit.txt	2008-05-03 13:24:29 UTC (rev 86206)
+++ zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-knit.txt	2008-05-03 13:25:02 UTC (rev 86207)
@@ -1,105 +0,0 @@
-Knitting in extra package directories
-=====================================
-
-Python packages have __path__ variables that can be manipulated to add
-extra directories containing software used in the packages.  The
-testrunner needs to be given extra information about this sort of
-situation.
-
-Let's look at an example.  The testrunner-ex-pp-lib directory
-is a directory that we want to add to the Python path, but that we
-don't want to search for tests.  It has a sample4 package and a
-products subpackage.  The products subpackage adds the
-testrunner-ex-pp-products directory to its __path__.  We want to run
-tests from the testrunner-ex-pp-products directory.  When we import
-these tests, we need to import them from the sample4.products package.
-We can't use the --path option to name testrunner-ex-pp-products.
-It isn't enough to add the containing directory to the test path
-because then we wouldn't be able to determine the package name
-properly.  We might be able to use the --package option to run the
-tests from the sample4/products package, but we want to run tests in
-testrunner-ex that aren't in this package.
-
-We can use the --package-path option in this case.  The --package-path
-option is like the --test-path option in that it defines a path to be
-searched for tests without affecting the Python path.  It differs in
-that it is supplied a package name that is added as a prefix when
-importing any modules found.  The --package-path option takes *two*
-arguments, a file path and a package name.
-
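-For background, the knitting itself is done by the package: its
-__init__.py appends the extra directory to __path__.  A minimal sketch
-of what sample4/products/__init__.py might look like (an illustration
-only, not the actual file):
-
-    # sample4/products/__init__.py -- illustrative sketch
-    import os
-
-    # Knit the extra directory into this package so that modules found
-    # there can be imported as sample4.products.<name>.
-    _here = os.path.dirname(os.path.abspath(__file__))
-    __path__.append(os.path.join(_here, os.pardir, os.pardir, os.pardir,
-                                 'testrunner-ex-pp-products'))
-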
-    >>> import os.path, sys
-    >>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
-    >>> sys.path.append(os.path.join(this_directory, 'testrunner-ex-pp-lib'))
-    >>> defaults = [
-    ...     '--path', directory_with_tests,
-    ...     '--tests-pattern', '^sampletestsf?$',
-    ...     '--package-path',
-    ...     os.path.join(this_directory, 'testrunner-ex-pp-products'),
-    ...     'sample4.products',
-    ...     ]
-
-    >>> from zope.testing import testrunner
-    
-    >>> sys.argv = 'test --layer Layer111 -vv'.split()
-    >>> _ = testrunner.run(defaults)
-    Running tests at level 1
-    Running samplelayers.Layer111 tests:
-      Set up samplelayers.Layerx in 0.000 seconds.
-      Set up samplelayers.Layer1 in 0.000 seconds.
-      Set up samplelayers.Layer11 in 0.000 seconds.
-      Set up samplelayers.Layer111 in 0.000 seconds.
-      Running:
-        test_x1 (sample1.sampletests.test111.TestA)
-        test_y0 (sample1.sampletests.test111.TestA)
-        ...
-        test_y0 (sampletests.test111)
-        test_z1 (sampletests.test111)
-        testrunner-ex/sampletests/../sampletestsl.txt
-        test_extra_test_in_products (sample4.products.sampletests.Test)
-        test_another_test_in_products (sample4.products.more.sampletests.Test)
-      Ran 36 tests with 0 failures and 0 errors in 0.008 seconds.
-    Tearing down left over layers:
-      Tear down samplelayers.Layer111 in 0.000 seconds.
-      Tear down samplelayers.Layerx in 0.000 seconds.
-      Tear down samplelayers.Layer11 in 0.000 seconds.
-      Tear down samplelayers.Layer1 in 0.000 seconds.
-
-In the example, the last two tests, test_extra_test_in_products and
-test_another_test_in_products, came from the products directory.  As
-usual, we can select the knit-in packages or individual packages
-within knit-in packages:
-
-    >>> sys.argv = 'test --package sample4.products -vv'.split()
-    >>> _ = testrunner.run(defaults)
-    Running tests at level 1
-    Running samplelayers.Layer111 tests:
-      Set up samplelayers.Layerx in 0.000 seconds.
-      Set up samplelayers.Layer1 in 0.000 seconds.
-      Set up samplelayers.Layer11 in 0.000 seconds.
-      Set up samplelayers.Layer111 in 0.000 seconds.
-      Running:
-        test_extra_test_in_products (sample4.products.sampletests.Test)
-        test_another_test_in_products (sample4.products.more.sampletests.Test)
-      Ran 2 tests with 0 failures and 0 errors in 0.000 seconds.
-    Tearing down left over layers:
-      Tear down samplelayers.Layer111 in 0.000 seconds.
-      Tear down samplelayers.Layerx in 0.000 seconds.
-      Tear down samplelayers.Layer11 in 0.000 seconds.
-      Tear down samplelayers.Layer1 in 0.000 seconds.
-
-    >>> sys.argv = 'test --package sample4.products.more -vv'.split()
-    >>> _ = testrunner.run(defaults)
-    Running tests at level 1
-    Running samplelayers.Layer111 tests:
-      Set up samplelayers.Layerx in 0.000 seconds.
-      Set up samplelayers.Layer1 in 0.000 seconds.
-      Set up samplelayers.Layer11 in 0.000 seconds.
-      Set up samplelayers.Layer111 in 0.000 seconds.
-      Running:
-        test_another_test_in_products (sample4.products.more.sampletests.Test)
-      Ran 1 tests with 0 failures and 0 errors in 0.000 seconds.
-    Tearing down left over layers:
-      Tear down samplelayers.Layer111 in 0.000 seconds.
-      Tear down samplelayers.Layerx in 0.000 seconds.
-      Tear down samplelayers.Layer11 in 0.000 seconds.
-      Tear down samplelayers.Layer1 in 0.000 seconds.
-

Deleted: zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-layers-api.txt
===================================================================
--- zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-layers-api.txt	2008-05-03 13:24:29 UTC (rev 86206)
+++ zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-layers-api.txt	2008-05-03 13:25:02 UTC (rev 86207)
@@ -1,267 +0,0 @@
-Layers
-======
-
-A Layer is an object providing setUp and tearDown methods used to set
-up and tear down the environment provided by the layer. It may also
-provide testSetUp and testTearDown methods used to reset the
-environment provided by the layer between each test.
-
-Layers are generally implemented as classes using class methods.
-
->>> class BaseLayer:
-...     def setUp(cls):
-...         log('BaseLayer.setUp')
-...     setUp = classmethod(setUp)
-...
-...     def tearDown(cls):
-...         log('BaseLayer.tearDown')
-...     tearDown = classmethod(tearDown)
-...
-...     def testSetUp(cls):
-...         log('BaseLayer.testSetUp')
-...     testSetUp = classmethod(testSetUp)
-...
-...     def testTearDown(cls):
-...         log('BaseLayer.testTearDown')
-...     testTearDown = classmethod(testTearDown)
-...
-
-Layers can extend other layers. Note that they do not explicitly
-invoke the setup and teardown methods of other layers - the test runner
-does this for us in order to minimize the number of invocations.
-
->>> class TopLayer(BaseLayer):
-...     def setUp(cls):
-...         log('TopLayer.setUp')
-...     setUp = classmethod(setUp)
-...
-...     def tearDown(cls):
-...         log('TopLayer.tearDown')
-...     tearDown = classmethod(tearDown)
-...
-...     def testSetUp(cls):
-...         log('TopLayer.testSetUp')
-...     testSetUp = classmethod(testSetUp)
-...
-...     def testTearDown(cls):
-...         log('TopLayer.testTearDown')
-...     testTearDown = classmethod(testTearDown)
-...
-
-Tests or test suites specify what layer they need by storing a reference
-in the 'layer' attribute.
-
->>> import unittest
->>> class TestSpecifyingBaseLayer(unittest.TestCase):
-...     'This TestCase explicitly specifies its layer'
-...     layer = BaseLayer
-...     name = 'TestSpecifyingBaseLayer' # For testing only
-...
-...     def setUp(self):
-...         log('TestSpecifyingBaseLayer.setUp')
-...
-...     def tearDown(self):
-...         log('TestSpecifyingBaseLayer.tearDown')
-...
-...     def test1(self):
-...         log('TestSpecifyingBaseLayer.test1')
-...
-...     def test2(self):
-...         log('TestSpecifyingBaseLayer.test2')
-...
->>> class TestSpecifyingNoLayer(unittest.TestCase):
-...     'This TestCase specifies no layer'
-...     name = 'TestSpecifyingNoLayer' # For testing only
-...     def setUp(self):
-...         log('TestSpecifyingNoLayer.setUp')
-...
-...     def tearDown(self):
-...         log('TestSpecifyingNoLayer.tearDown')
-...
-...     def test1(self):
-...         log('TestSpecifyingNoLayer.test')
-...
-...     def test2(self):
-...         log('TestSpecifyingNoLayer.test')
-...
-
-Create a TestSuite containing two test suites, one for each of
-TestSpecifyingBaseLayer and TestSpecifyingNoLayer.
-
->>> umbrella_suite = unittest.TestSuite()
->>> umbrella_suite.addTest(unittest.makeSuite(TestSpecifyingBaseLayer))
->>> no_layer_suite = unittest.makeSuite(TestSpecifyingNoLayer)
->>> umbrella_suite.addTest(no_layer_suite)
-
-Before we can run the tests, we need to set up some helpers.
-
->>> from zope.testing import testrunner
->>> from zope.testing.loggingsupport import InstalledHandler
->>> import logging
->>> log_handler = InstalledHandler('zope.testing.tests')
->>> def log(msg):
-...     logging.getLogger('zope.testing.tests').info(msg)
->>> def fresh_options():
-...     options = testrunner.get_options(['--test-filter', '.*'])
-...     options.resume_layer = None
-...     options.resume_number = 0
-...     return options
-
-Now we run the tests. Note that BaseLayer was not set up when the
-TestSpecifyingNoLayer tests were run; it was only set up and torn down
-around the TestSpecifyingBaseLayer tests.
-
->>> succeeded = testrunner.run_with_options(fresh_options(), [umbrella_suite])
-Running unit tests:
-    Ran 2 tests with 0 failures and 0 errors in N.NNN seconds.
-    Set up BaseLayer in N.NNN seconds.
-    Ran 2 tests with 0 failures and 0 errors in N.NNN seconds.
-Tearing down left over layers:
-    Tear down BaseLayer in N.NNN seconds.
-Total: 4 tests, 0 failures, 0 errors in N.NNN seconds.
-
-Now let's specify a layer in the suite containing TestSpecifyingNoLayer
-and run the tests again. This demonstrates the other method of specifying
-a layer. This is generally how you specify what layer doctests need.
-
->>> no_layer_suite.layer = BaseLayer
->>> succeeded = testrunner.run_with_options(fresh_options(), [umbrella_suite])
-  Set up BaseLayer in N.NNN seconds.
-  Ran 4 tests with 0 failures and 0 errors in N.NNN seconds.
-Tearing down left over layers:
-  Tear down BaseLayer in N.NNN seconds.
-
-Clear our logged output, as we want to inspect it shortly.
-
->>> log_handler.clear()
-
-Now let's also specify a layer in the TestSpecifyingNoLayer class and rerun
-the tests. This demonstrates that the most specific layer is used. It also
-shows the behavior of nested layers - because TopLayer extends BaseLayer,
-both the BaseLayer and TopLayer environments are set up when the
-TestSpecifyingNoLayer tests are run.
-
->>> TestSpecifyingNoLayer.layer = TopLayer
->>> succeeded = testrunner.run_with_options(fresh_options(), [umbrella_suite])
-  Set up BaseLayer in N.NNN seconds.
-  Ran 2 tests with 0 failures and 0 errors in N.NNN seconds.
-  Set up TopLayer in N.NNN seconds.
-  Ran 2 tests with 0 failures and 0 errors in N.NNN seconds.
-Tearing down left over layers:
-  Tear down TopLayer in N.NNN seconds.
-  Tear down BaseLayer in N.NNN seconds.
-Total: 4 tests, 0 failures, 0 errors in N.NNN seconds.
-
-If we inspect our trace of what methods got called in what order, we can
-see that the layer setup and teardown methods only got called once. We can
-also see that the layer's test setup and teardown methods got called for
-each test using that layer in the right order.
-
->>> def report():
-...     for record in log_handler.records:
-...         print record.getMessage()
->>> report()
-BaseLayer.setUp
-BaseLayer.testSetUp
-TestSpecifyingBaseLayer.setUp
-TestSpecifyingBaseLayer.test1
-TestSpecifyingBaseLayer.tearDown
-BaseLayer.testTearDown
-BaseLayer.testSetUp
-TestSpecifyingBaseLayer.setUp
-TestSpecifyingBaseLayer.test2
-TestSpecifyingBaseLayer.tearDown
-BaseLayer.testTearDown
-TopLayer.setUp
-BaseLayer.testSetUp
-TopLayer.testSetUp
-TestSpecifyingNoLayer.setUp
-TestSpecifyingNoLayer.test
-TestSpecifyingNoLayer.tearDown
-TopLayer.testTearDown
-BaseLayer.testTearDown
-BaseLayer.testSetUp
-TopLayer.testSetUp
-TestSpecifyingNoLayer.setUp
-TestSpecifyingNoLayer.test
-TestSpecifyingNoLayer.tearDown
-TopLayer.testTearDown
-BaseLayer.testTearDown
-TopLayer.tearDown
-BaseLayer.tearDown
-
-Now let's stack a few more layers to ensure that our setUp and tearDown
-methods are called in the correct order.
-
->>> from zope.testing.testrunner import name_from_layer
->>> class A(object):
-...     def setUp(cls):
-...         log('%s.setUp' % name_from_layer(cls))
-...     setUp = classmethod(setUp)
-...
-...     def tearDown(cls):
-...         log('%s.tearDown' % name_from_layer(cls))
-...     tearDown = classmethod(tearDown)
-...
-...     def testSetUp(cls):
-...         log('%s.testSetUp' % name_from_layer(cls))
-...     testSetUp = classmethod(testSetUp)
-...
-...     def testTearDown(cls):
-...         log('%s.testTearDown' % name_from_layer(cls))
-...     testTearDown = classmethod(testTearDown)
-...         
->>> class B(A): pass
->>> class C(B): pass
->>> class D(A): pass
->>> class E(D): pass
->>> class F(C,E): pass
-
->>> class DeepTest(unittest.TestCase):
-...     layer = F
-...     def test(self):
-...         pass
->>> suite = unittest.makeSuite(DeepTest)
->>> log_handler.clear()
->>> succeeded = testrunner.run_with_options(fresh_options(), [suite])
-  Set up A in 0.000 seconds.
-  Set up B in 0.000 seconds.
-  Set up C in 0.000 seconds.
-  Set up D in 0.000 seconds.
-  Set up E in 0.000 seconds.
-  Set up F in 0.000 seconds.
-  Ran 1 tests with 0 failures and 0 errors in 0.003 seconds.
-Tearing down left over layers:
-  Tear down F in 0.000 seconds.
-  Tear down E in 0.000 seconds.
-  Tear down D in 0.000 seconds.
-  Tear down C in 0.000 seconds.
-  Tear down B in 0.000 seconds.
-  Tear down A in 0.000 seconds.
-
->>> report()
-A.setUp
-B.setUp
-C.setUp
-D.setUp
-E.setUp
-F.setUp
-A.testSetUp
-B.testSetUp
-C.testSetUp
-D.testSetUp
-E.testSetUp
-F.testSetUp
-F.testTearDown
-E.testTearDown
-D.testTearDown
-C.testTearDown
-B.testTearDown
-A.testTearDown
-F.tearDown
-E.tearDown
-D.tearDown
-C.tearDown
-B.tearDown
-A.tearDown
-

Deleted: zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-layers-ntd.txt
===================================================================
--- zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-layers-ntd.txt	2008-05-03 13:24:29 UTC (rev 86206)
+++ zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-layers-ntd.txt	2008-05-03 13:25:02 UTC (rev 86207)
@@ -1,240 +0,0 @@
-Layers that can't be torn down
-==============================
-
-A layer can have a tearDown method that raises NotImplementedError.
-If this is the case and there are no remaining tests to run, the test
-runner will just note that the tear down couldn't be done:
-
-    >>> import os.path, sys
-    >>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
-    >>> from zope.testing import testrunner
-    >>> defaults = [
-    ...     '--path', directory_with_tests,
-    ...     '--tests-pattern', '^sampletestsf?$',
-    ...     ]
-
-    >>> sys.argv = 'test -ssample2 --tests-pattern sampletests_ntd$'.split()
-    >>> testrunner.run(defaults)
-    Running sample2.sampletests_ntd.Layer tests:
-      Set up sample2.sampletests_ntd.Layer in 0.000 seconds.
-      Ran 1 tests with 0 failures and 0 errors in 0.000 seconds.
-    Tearing down left over layers:
-      Tear down sample2.sampletests_ntd.Layer ... not supported
-    False
-
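-Such a layer typically looks something like the following sketch
-(illustrative only; the real sample layers live in the testrunner-ex
-directory):
-
-    class Layer:
-
-        def setUp(cls):
-            # Acquire some resource that cannot sensibly be released.
-            pass
-        setUp = classmethod(setUp)
-
-        def tearDown(cls):
-            # Tell the test runner that this layer cannot be torn down;
-            # the runner then notes it, or resumes the remaining layers
-            # in a subprocess as shown below.
-            raise NotImplementedError
-        tearDown = classmethod(tearDown)
-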
-If the tearDown method raises NotImplementedError and there are remaining
-layers to run, the test runner will restart itself as a new process,
-resuming tests where it left off:
-
-    >>> sys.argv = [testrunner_script, '--tests-pattern', 'sampletests_ntd$']
-    >>> testrunner.run(defaults)
-    Running sample1.sampletests_ntd.Layer tests:
-      Set up sample1.sampletests_ntd.Layer in N.NNN seconds.
-      Ran 1 tests with 0 failures and 0 errors in N.NNN seconds.
-    Running sample2.sampletests_ntd.Layer tests:
-      Tear down sample1.sampletests_ntd.Layer ... not supported
-      Running in a subprocess.
-      Set up sample2.sampletests_ntd.Layer in N.NNN seconds.
-      Ran 1 tests with 0 failures and 0 errors in N.NNN seconds.
-      Tear down sample2.sampletests_ntd.Layer ... not supported
-    Running sample3.sampletests_ntd.Layer tests:
-      Running in a subprocess.
-      Set up sample3.sampletests_ntd.Layer in N.NNN seconds.
-    <BLANKLINE>
-    <BLANKLINE>
-    Error in test test_error1 (sample3.sampletests_ntd.TestSomething)
-    Traceback (most recent call last):
-     testrunner-ex/sample3/sampletests_ntd.py", Line NNN, in test_error1
-        raise TypeError("Can we see errors")
-    TypeError: Can we see errors
-    <BLANKLINE>
-    <BLANKLINE>
-    <BLANKLINE>
-    Error in test test_error2 (sample3.sampletests_ntd.TestSomething)
-    Traceback (most recent call last):
-     testrunner-ex/sample3/sampletests_ntd.py", Line NNN, in test_error2
-        raise TypeError("I hope so")
-    TypeError: I hope so
-    <BLANKLINE>
-    <BLANKLINE>
-    <BLANKLINE>
-    Failure in test test_fail1 (sample3.sampletests_ntd.TestSomething)
-    Traceback (most recent call last):
-     testrunner-ex/sample3/sampletests_ntd.py", Line NNN, in test_fail1
-        self.assertEqual(1, 2)
-    AssertionError: 1 != 2
-    <BLANKLINE>
-    <BLANKLINE>
-    <BLANKLINE>
-    Failure in test test_fail2 (sample3.sampletests_ntd.TestSomething)
-    Traceback (most recent call last):
-     testrunner-ex/sample3/sampletests_ntd.py", Line NNN, in test_fail2
-        self.assertEqual(1, 3)
-    AssertionError: 1 != 3
-    <BLANKLINE>
-      Ran 6 tests with 2 failures and 2 errors in N.NNN seconds.
-      Tear down sample3.sampletests_ntd.Layer ... not supported
-    Total: 8 tests, 2 failures, 2 errors in N.NNN seconds.
-    True
-
-In the example above, some of the tests run as a subprocess had errors
-and failures. They were displayed as usual, and the failure and error
-statistics were updated as usual.
-
-Note that debugging doesn't work when running tests in a subprocess:
-
-    >>> sys.argv = [testrunner_script, '--tests-pattern', 'sampletests_ntd$',
-    ...             '-D', ]
-    >>> testrunner.run(defaults)
-    Running sample1.sampletests_ntd.Layer tests:
-      Set up sample1.sampletests_ntd.Layer in N.NNN seconds.
-      Ran 1 tests with 0 failures and 0 errors in N.NNN seconds.
-    Running sample2.sampletests_ntd.Layer tests:
-      Tear down sample1.sampletests_ntd.Layer ... not supported
-      Running in a subprocess.
-      Set up sample2.sampletests_ntd.Layer in N.NNN seconds.
-      Ran 1 tests with 0 failures and 0 errors in N.NNN seconds.
-      Tear down sample2.sampletests_ntd.Layer ... not supported
-    Running sample3.sampletests_ntd.Layer tests:
-      Running in a subprocess.
-      Set up sample3.sampletests_ntd.Layer in N.NNN seconds.
-    <BLANKLINE>
-    <BLANKLINE>
-    Error in test test_error1 (sample3.sampletests_ntd.TestSomething)
-    Traceback (most recent call last):
-     testrunner-ex/sample3/sampletests_ntd.py", Line NNN, in test_error1
-        raise TypeError("Can we see errors")
-    TypeError: Can we see errors
-    <BLANKLINE>
-    <BLANKLINE>
-    **********************************************************************
-    Can't post-mortem debug when running a layer as a subprocess!
-    **********************************************************************
-    <BLANKLINE>
-    <BLANKLINE>
-    <BLANKLINE>
-    Error in test test_error2 (sample3.sampletests_ntd.TestSomething)
-    Traceback (most recent call last):
-     testrunner-ex/sample3/sampletests_ntd.py", Line NNN, in test_error2
-        raise TypeError("I hope so")
-    TypeError: I hope so
-    <BLANKLINE>
-    <BLANKLINE>
-    **********************************************************************
-    Can't post-mortem debug when running a layer as a subprocess!
-    **********************************************************************
-    <BLANKLINE>
-    <BLANKLINE>
-    <BLANKLINE>
-    Error in test test_fail1 (sample3.sampletests_ntd.TestSomething)
-    Traceback (most recent call last):
-     testrunner-ex/sample3/sampletests_ntd.py", Line NNN, in test_fail1
-        self.assertEqual(1, 2)
-    AssertionError: 1 != 2
-    <BLANKLINE>
-    <BLANKLINE>
-    **********************************************************************
-    Can't post-mortem debug when running a layer as a subprocess!
-    **********************************************************************
-    <BLANKLINE>
-    <BLANKLINE>
-    <BLANKLINE>
-    Error in test test_fail2 (sample3.sampletests_ntd.TestSomething)
-    Traceback (most recent call last):
-     testrunner-ex/sample3/sampletests_ntd.py", Line NNN, in test_fail2
-        self.assertEqual(1, 3)
-    AssertionError: 1 != 3
-    <BLANKLINE>
-    <BLANKLINE>
-    **********************************************************************
-    Can't post-mortem debug when running a layer as a subprocess!
-    **********************************************************************
-    <BLANKLINE>
-      Ran 6 tests with 0 failures and 4 errors in N.NNN seconds.
-      Tear down sample3.sampletests_ntd.Layer ... not supported
-    Total: 8 tests, 0 failures, 4 errors in N.NNN seconds.
-    True
-
-Similarly, pdb.set_trace doesn't work when running tests in a layer
-that is run as a subprocess:
-
-    >>> sys.argv = [testrunner_script, '--tests-pattern', 'sampletests_ntds']
-    >>> testrunner.run(defaults)
-    Running sample1.sampletests_ntds.Layer tests:
-      Set up sample1.sampletests_ntds.Layer in 0.000 seconds.
-      Ran 1 tests with 0 failures and 0 errors in 0.000 seconds.
-    Running sample2.sampletests_ntds.Layer tests:
-      Tear down sample1.sampletests_ntds.Layer ... not supported
-      Running in a subprocess.
-      Set up sample2.sampletests_ntds.Layer in 0.000 seconds.
-    --Return--
-    > testrunner-ex/sample2/sampletests_ntds.py(37)test_something()->None
-    -> import pdb; pdb.set_trace()
-    (Pdb) c
-    <BLANKLINE>
-    **********************************************************************
-    Can't use pdb.set_trace when running a layer as a subprocess!
-    **********************************************************************
-    <BLANKLINE>
-    --Return--
-    > testrunner-ex/sample2/sampletests_ntds.py(40)test_something2()->None
-    -> import pdb; pdb.set_trace()
-    (Pdb) c
-    <BLANKLINE>
-    **********************************************************************
-    Can't use pdb.set_trace when running a layer as a subprocess!
-    **********************************************************************
-    <BLANKLINE>
-    --Return--
-    > testrunner-ex/sample2/sampletests_ntds.py(43)test_something3()->None
-    -> import pdb; pdb.set_trace()
-    (Pdb) c
-    <BLANKLINE>
-    **********************************************************************
-    Can't use pdb.set_trace when running a layer as a subprocess!
-    **********************************************************************
-    <BLANKLINE>
-    --Return--
-    > testrunner-ex/sample2/sampletests_ntds.py(46)test_something4()->None
-    -> import pdb; pdb.set_trace()
-    (Pdb) c
-    <BLANKLINE>
-    **********************************************************************
-    Can't use pdb.set_trace when running a layer as a subprocess!
-    **********************************************************************
-    <BLANKLINE>
-    --Return--
-    > testrunner-ex/sample2/sampletests_ntds.py(52)f()->None
-    -> import pdb; pdb.set_trace()
-    (Pdb) c
-    <BLANKLINE>
-    **********************************************************************
-    Can't use pdb.set_trace when running a layer as a subprocess!
-    **********************************************************************
-    <BLANKLINE>
-    --Return--
-    > doctest.py(351)set_trace()->None
-    -> pdb.Pdb.set_trace(self)
-    (Pdb) c
-    <BLANKLINE>
-    **********************************************************************
-    Can't use pdb.set_trace when running a layer as a subprocess!
-    **********************************************************************
-    <BLANKLINE>
-    --Return--
-    > doctest.py(351)set_trace()->None
-    -> pdb.Pdb.set_trace(self)
-    (Pdb) c
-    <BLANKLINE>
-    **********************************************************************
-    Can't use pdb.set_trace when running a layer as a subprocess!
-    **********************************************************************
-    <BLANKLINE>
-      Ran 7 tests with 0 failures and 0 errors in 0.008 seconds.
-      Tear down sample2.sampletests_ntds.Layer ... not supported
-    Total: 8 tests, 0 failures, 0 errors in N.NNN seconds.
-    False
-
-If you want to use pdb from a test in a layer that is run as a
-subprocess, then rerun the test runner selecting *just* that layer so
-that it's not run as a subprocess.

Deleted: zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-layers.txt
===================================================================
--- zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-layers.txt	2008-05-03 13:24:29 UTC (rev 86206)
+++ zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-layers.txt	2008-05-03 13:25:02 UTC (rev 86207)
@@ -1,117 +0,0 @@
-Layer Selection
-===============
-
-We can select which layers to run using the --layer option:
-
-    >>> import os.path, sys
-    >>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
-    >>> defaults = [
-    ...     '--path', directory_with_tests,
-    ...     '--tests-pattern', '^sampletestsf?$',
-    ...     ]
-
-    >>> sys.argv = 'test --layer 112 --layer unit'.split()
-    >>> from zope.testing import testrunner
-    >>> testrunner.run(defaults)
-    Running unit tests:
-      Ran 192 tests with 0 failures and 0 errors in N.NNN seconds.
-    Running samplelayers.Layer112 tests:
-      Set up samplelayers.Layerx in N.NNN seconds.
-      Set up samplelayers.Layer1 in N.NNN seconds.
-      Set up samplelayers.Layer11 in N.NNN seconds.
-      Set up samplelayers.Layer112 in N.NNN seconds.
-      Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
-    Tearing down left over layers:
-      Tear down samplelayers.Layer112 in N.NNN seconds.
-      Tear down samplelayers.Layerx in N.NNN seconds.
-      Tear down samplelayers.Layer11 in N.NNN seconds.
-      Tear down samplelayers.Layer1 in N.NNN seconds.
-    Total: 226 tests, 0 failures, 0 errors in N.NNN seconds.
-    False
-
-We can also specify that we want to run only the unit tests:
-
-    >>> sys.argv = 'test -u'.split()
-    >>> testrunner.run(defaults)
-    Running unit tests:
-      Ran 192 tests with 0 failures and 0 errors in 0.033 seconds.
-    False
-
-Or that we want to run all of the tests except for the unit tests:
-
-    >>> sys.argv = 'test -f'.split()
-    >>> testrunner.run(defaults)
-    Running samplelayers.Layer1 tests:
-      Set up samplelayers.Layer1 in N.NNN seconds.
-      Ran 9 tests with 0 failures and 0 errors in N.NNN seconds.
-    Running samplelayers.Layer11 tests:
-      Set up samplelayers.Layer11 in N.NNN seconds.
-      Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
-    Running samplelayers.Layer111 tests:
-      Set up samplelayers.Layerx in N.NNN seconds.
-      Set up samplelayers.Layer111 in N.NNN seconds.
-      Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
-    Running samplelayers.Layer112 tests:
-      Tear down samplelayers.Layer111 in N.NNN seconds.
-      Set up samplelayers.Layer112 in N.NNN seconds.
-      Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
-    Running samplelayers.Layer12 tests:
-      Tear down samplelayers.Layer112 in N.NNN seconds.
-      Tear down samplelayers.Layerx in N.NNN seconds.
-      Tear down samplelayers.Layer11 in N.NNN seconds.
-      Set up samplelayers.Layer12 in N.NNN seconds.
-      Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
-    Running samplelayers.Layer121 tests:
-      Set up samplelayers.Layer121 in N.NNN seconds.
-      Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
-    Running samplelayers.Layer122 tests:
-      Tear down samplelayers.Layer121 in N.NNN seconds.
-      Set up samplelayers.Layer122 in N.NNN seconds.
-      Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
-    Tearing down left over layers:
-      Tear down samplelayers.Layer122 in N.NNN seconds.
-      Tear down samplelayers.Layer12 in N.NNN seconds.
-      Tear down samplelayers.Layer1 in N.NNN seconds.
-    Total: 213 tests, 0 failures, 0 errors in N.NNN seconds.
-    False
-
-Or we can explicitly say that we want both unit and non-unit tests.
-
-    >>> sys.argv = 'test -uf'.split()
-    >>> testrunner.run(defaults)
-    Running unit tests:
-      Ran 192 tests with 0 failures and 0 errors in 0.033 seconds.
-    Running samplelayers.Layer1 tests:
-      Set up samplelayers.Layer1 in N.NNN seconds.
-      Ran 9 tests with 0 failures and 0 errors in N.NNN seconds.
-    Running samplelayers.Layer11 tests:
-      Set up samplelayers.Layer11 in N.NNN seconds.
-      Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
-    Running samplelayers.Layer111 tests:
-      Set up samplelayers.Layerx in N.NNN seconds.
-      Set up samplelayers.Layer111 in N.NNN seconds.
-      Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
-    Running samplelayers.Layer112 tests:
-      Tear down samplelayers.Layer111 in N.NNN seconds.
-      Set up samplelayers.Layer112 in N.NNN seconds.
-      Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
-    Running samplelayers.Layer12 tests:
-      Tear down samplelayers.Layer112 in N.NNN seconds.
-      Tear down samplelayers.Layerx in N.NNN seconds.
-      Tear down samplelayers.Layer11 in N.NNN seconds.
-      Set up samplelayers.Layer12 in N.NNN seconds.
-      Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
-    Running samplelayers.Layer121 tests:
-      Set up samplelayers.Layer121 in N.NNN seconds.
-      Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
-    Running samplelayers.Layer122 tests:
-      Tear down samplelayers.Layer121 in N.NNN seconds.
-      Set up samplelayers.Layer122 in N.NNN seconds.
-      Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
-    Tearing down left over layers:
-      Tear down samplelayers.Layer122 in N.NNN seconds.
-      Tear down samplelayers.Layer12 in N.NNN seconds.
-      Tear down samplelayers.Layer1 in N.NNN seconds.
-    Total: 405 tests, 0 failures, 0 errors in N.NNN seconds.
-    False
-

Deleted: zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-leaks-err.txt
===================================================================
--- zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-leaks-err.txt	2008-05-03 13:24:29 UTC (rev 86206)
+++ zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-leaks-err.txt	2008-05-03 13:25:02 UTC (rev 86207)
@@ -1,22 +0,0 @@
-Debugging Memory Leaks without a debug build of Python 
-======================================================
-
-To use the --report-refcounts (-r) option to detect or debug memory
-leaks, you must have a debug build of Python. Without a debug build,
-you will get an error message:
-
-    >>> import os.path, sys
-    >>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
-    >>> defaults = [
-    ...     '--path', directory_with_tests,
-    ...     '--tests-pattern', '^sampletestsf?$',
-    ...     ]
-
-    >>> from zope.testing import testrunner
-    
-    >>> sys.argv = 'test -r -N6'.split()
-    >>> _ = testrunner.run(defaults)
-            The Python you are running was not configured
-            with --with-pydebug. This is required to use
-            the --report-refcounts option.
-    <BLANKLINE>

Deleted: zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-leaks.txt
===================================================================
--- zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-leaks.txt	2008-05-03 13:24:29 UTC (rev 86206)
+++ zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-leaks.txt	2008-05-03 13:25:02 UTC (rev 86207)
@@ -1,218 +0,0 @@
-Debugging Memory Leaks
-======================
-
-The --report-refcounts (-r) option can be used with the --repeat (-N)
-option to detect and diagnose memory leaks.  To use this option, you
-must configure Python with the --with-pydebug option. (On Unix, pass
-this option to configure and then build Python.)
-
-    >>> import os.path, sys
-    >>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
-    >>> defaults = [
-    ...     '--path', directory_with_tests,
-    ...     '--tests-pattern', '^sampletestsf?$',
-    ...     ]
-
-    >>> from zope.testing import testrunner
-    
-    >>> sys.argv = 'test --layer Layer11$ --layer Layer12$ -N4 -r'.split()
-    >>> _ = testrunner.run(defaults)
-    Running samplelayers.Layer11 tests:
-      Set up samplelayers.Layer1 in 0.000 seconds.
-      Set up samplelayers.Layer11 in 0.000 seconds.
-    Iteration 1
-      Ran 34 tests with 0 failures and 0 errors in 0.013 seconds.
-    Iteration 2
-      Ran 34 tests with 0 failures and 0 errors in 0.012 seconds.
-      sys refcount=100401   change=0     
-    Iteration 3
-      Ran 34 tests with 0 failures and 0 errors in 0.012 seconds.
-      sys refcount=100401   change=0     
-    Iteration 4
-      Ran 34 tests with 0 failures and 0 errors in 0.013 seconds.
-      sys refcount=100401   change=0     
-    Running samplelayers.Layer12 tests:
-      Tear down samplelayers.Layer11 in 0.000 seconds.
-      Set up samplelayers.Layer12 in 0.000 seconds.
-    Iteration 1
-      Ran 34 tests with 0 failures and 0 errors in 0.013 seconds.
-    Iteration 2
-      Ran 34 tests with 0 failures and 0 errors in 0.012 seconds.
-      sys refcount=100411   change=0     
-    Iteration 3
-      Ran 34 tests with 0 failures and 0 errors in 0.012 seconds.
-      sys refcount=100411   change=0     
-    Iteration 4
-      Ran 34 tests with 0 failures and 0 errors in 0.012 seconds.
-      sys refcount=100411   change=0     
-    Tearing down left over layers:
-      Tear down samplelayers.Layer12 in 0.000 seconds.
-      Tear down samplelayers.Layer1 in 0.000 seconds.
-    Total: 68 tests, 0 failures, 0 errors in N.NNN seconds.
-
-Each layer is repeated the requested number of times.  For each
-iteration after the first, the system refcount and the change in the
-system refcount are shown. The system refcount is the total of all
-refcounts in the system.  When a refcount on any object is changed,
-the system refcount is changed by the same amount.  Tests that don't
-leak show zero change in the system refcount.
-
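-In a debug build, a total of this kind can be read directly with
-sys.gettotalrefcount().  A minimal sketch (debug builds only, and not
-the test runner's own code):
-
-    import sys
-
-    before = sys.gettotalrefcount()
-    leaked = []                      # create an object and hold a reference
-    after = sys.gettotalrefcount()
-    print after - before             # the change in total references
-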
-Let's look at an example test that leaks:
-
-    >>> sys.argv = 'test --tests-pattern leak -N4 -r'.split()
-    >>> _ = testrunner.run(defaults)
-    Running unit tests:
-    Iteration 1
-      Ran 1 tests with 0 failures and 0 errors in 0.000 seconds.
-    Iteration 2
-      Ran 1 tests with 0 failures and 0 errors in 0.000 seconds.
-      sys refcount=92506    change=12
-    Iteration 3
-      Ran 1 tests with 0 failures and 0 errors in 0.000 seconds.
-      sys refcount=92513    change=12
-    Iteration 4
-      Ran 1 tests with 0 failures and 0 errors in 0.000 seconds.
-      sys refcount=92520    change=12
-
-Here we see that the system refcount is increasing.  If we specify a
-verbosity greater than one, we can get details broken out by object
-type (or class):
-
-    >>> sys.argv = 'test --tests-pattern leak -N5 -r -v'.split()
-    >>> _ = testrunner.run(defaults)
-    Running tests at level 1
-    Running unit tests:
-    Iteration 1
-      Running:
-        .
-      Ran 1 tests with 0 failures and 0 errors in 0.000 seconds.
-    Iteration 2
-      Running:
-        .
-      Ran 1 tests with 0 failures and 0 errors in 0.000 seconds.
-      sum detail refcount=95832    sys refcount=105668   change=16    
-        Leak details, changes in instances and refcounts by type/class:
-        type/class                                               insts   refs
-        -------------------------------------------------------  -----   ----
-        classobj                                                     0      1
-        dict                                                         2      2
-        float                                                        1      1
-        int                                                          2      2
-        leak.ClassicLeakable                                         1      1
-        leak.Leakable                                                1      1
-        str                                                          0      4
-        tuple                                                        1      1
-        type                                                         0      3
-        -------------------------------------------------------  -----   ----
-        total                                                        8     16
-    Iteration 3
-      Running:
-        .
-      Ran 1 tests with 0 failures and 0 errors in 0.000 seconds.
-      sum detail refcount=95844    sys refcount=105680   change=12    
-        Leak details, changes in instances and refcounts by type/class:
-        type/class                                               insts   refs
-        -------------------------------------------------------  -----   ----
-        classobj                                                     0      1
-        dict                                                         2      2
-        float                                                        1      1
-        int                                                         -1      0
-        leak.ClassicLeakable                                         1      1
-        leak.Leakable                                                1      1
-        str                                                          0      4
-        tuple                                                        1      1
-        type                                                         0      1
-        -------------------------------------------------------  -----   ----
-        total                                                        5     12
-    Iteration 4
-      Running:
-        .
-      Ran 1 tests with 0 failures and 0 errors in 0.000 seconds.
-      sum detail refcount=95856    sys refcount=105692   change=12    
-        Leak details, changes in instances and refcounts by type/class:
-        type/class                                               insts   refs
-        -------------------------------------------------------  -----   ----
-        classobj                                                     0      1
-        dict                                                         2      2
-        float                                                        1      1
-        leak.ClassicLeakable                                         1      1
-        leak.Leakable                                                1      1
-        str                                                          0      4
-        tuple                                                        1      1
-        type                                                         0      1
-        -------------------------------------------------------  -----   ----
-        total                                                        6     12
-    Iteration 5
-      Running:
-        .
-      Ran 1 tests with 0 failures and 0 errors in 0.000 seconds.
-      sum detail refcount=95868    sys refcount=105704   change=12    
-        Leak details, changes in instances and refcounts by type/class:
-        type/class                                               insts   refs
-        -------------------------------------------------------  -----   ----
-        classobj                                                     0      1
-        dict                                                         2      2
-        float                                                        1      1
-        leak.ClassicLeakable                                         1      1
-        leak.Leakable                                                1      1
-        str                                                          0      4
-        tuple                                                        1      1
-        type                                                         0      1
-        -------------------------------------------------------  -----   ----
-        total                                                        6     12
-
-It is instructive to analyze the results in some detail.  The test
-being run was designed to intentionally leak:
-
-    class ClassicLeakable:
-        def __init__(self):
-            self.x = 'x'
-
-    class Leakable(object):
-        def __init__(self):
-            self.x = 'x'
-
-    leaked = []
-
-    class TestSomething(unittest.TestCase):
-
-        def testleak(self):
-            leaked.append((ClassicLeakable(), Leakable(), time.time()))
-
-Let's go through this by type.
-
-float, leak.ClassicLeakable, leak.Leakable, and tuple
-    We leak one of these every time.  This is to be expected because
-    we are adding one of these to the list every time.
-
-str
-    We don't leak any instances, but we leak 4 references. These are
-    due to the instance attributes and values.
-
-dict
-    We leak 2 of these, one for each ClassicLeakable and Leakable
-    instance. 
-
-classobj
-    We increase the number of classobj instance references by one each
-    time because each ClassicLeakable instance has a reference to its
-    class.  Each new instance increases the references to its class,
-    which increases the total number of references to classic classes
-    (classobj instances).
-
-type
-    For most iterations, we increase the number of type references by
-    one for the same reason we increase the number of classobj
-    references by one.  The increase of the number of type references
-    by 3 in the second iteration is puzzling, but illustrates that
-    this sort of data is often puzzling.
-
-int
-    The change in the number of int instances and references in this
-    example is a side effect of the statistics being gathered.  Lots
-    of integers are created to keep track of the memory statistics
-    gathered here.
-
-The summary statistics include the sum of the detail refcounts.  (Note
-that this sum is less than the system refcount.  This is because the
-detailed analysis doesn't inspect every object. Not all objects in the
-system are returned by sys.getobjects.)

Deleted: zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-profiling-cprofiler.txt
===================================================================
--- zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-profiling-cprofiler.txt	2008-05-03 13:24:29 UTC (rev 86206)
+++ zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-profiling-cprofiler.txt	2008-05-03 13:25:02 UTC (rev 86207)
@@ -1,54 +0,0 @@
-Profiling
-=========
-
-The testrunner includes the ability to profile the test execution with cProfile
-via the `--profile=cProfile` option::
-
-    >>> import os.path, sys
-    >>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
-    >>> sys.path.append(directory_with_tests)
-
-    >>> defaults = [
-    ...     '--path', directory_with_tests,
-    ...     '--tests-pattern', '^sampletestsf?$',
-    ...     ]
-
-    >>> sys.argv = [testrunner_script, '--profile=cProfile']
-
-When the tests are run, we get profiling output::
-
-    >>> from zope.testing import testrunner
-    >>> testrunner.run(defaults)
-    Running unit tests:
-    ...
-    Running samplelayers.Layer1 tests:
-    ...
-    Running samplelayers.Layer11 tests:
-    ...
-    Total: ... tests, 0 failures, 0 errors in ... seconds.
-    ...
-       ncalls  tottime  percall  cumtime  percall filename:lineno(function)...
-
-Profiling also works across layers::
-
-    >>> sys.argv = [testrunner_script, '-ssample2', '--profile=cProfile', 
-    ...             '--tests-pattern', 'sampletests_ntd']
-    >>> testrunner.run(defaults)
-    Running...
-      Tear down ... not supported...
-       ncalls  tottime  percall  cumtime  percall filename:lineno(function)...
-
-The testrunner creates temporary files containing cProfile profiler
-data::
-
-    >>> import glob
-    >>> files = list(glob.glob('tests_profile.*.prof'))
-    >>> files.sort()
-    >>> files
-    ['tests_profile.cZj2jt.prof', 'tests_profile.yHD-so.prof']
-
-It deletes these when rerun.  We'll delete these ourselves::
-
-    >>> import os
-    >>> for f in files:
-    ...     os.unlink(f)

Deleted: zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-profiling.txt
===================================================================
--- zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-profiling.txt	2008-05-03 13:24:29 UTC (rev 86206)
+++ zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-profiling.txt	2008-05-03 13:25:02 UTC (rev 86207)
@@ -1,54 +0,0 @@
-Profiling
-=========
-
-The testrunner includes the ability to profile the test execution with hotshot
-via the --profile option.
-
-    >>> import os.path, sys
-    >>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
-    >>> sys.path.append(directory_with_tests)
-
-    >>> defaults = [
-    ...     '--path', directory_with_tests,
-    ...     '--tests-pattern', '^sampletestsf?$',
-    ...     ]
-
-    >>> sys.argv = [testrunner_script, '--profile=hotshot']
-
-When the tests are run, we get profiling output.
-
-    >>> from zope.testing import testrunner
-    >>> testrunner.run(defaults)
-    Running unit tests:
-    ...
-    Running samplelayers.Layer1 tests:
-    ...
-    Running samplelayers.Layer11 tests:
-    ...
-    Total: ... tests, 0 failures, 0 errors in ... seconds.
-    ...
-       ncalls  tottime  percall  cumtime  percall filename:lineno(function)...
-
-Profiling also works across layers.
-
-    >>> sys.argv = [testrunner_script, '-ssample2', '--profile=hotshot', 
-    ...             '--tests-pattern', 'sampletests_ntd']
-    >>> testrunner.run(defaults)
-    Running...
-      Tear down ... not supported...
-       ncalls  tottime  percall  cumtime  percall filename:lineno(function)...
-
-The testrunner creates temporary files containing hotshot profiler
-data:
-
-    >>> import glob
-    >>> files = list(glob.glob('tests_profile.*.prof'))
-    >>> files.sort()
-    >>> files
-    ['tests_profile.cZj2jt.prof', 'tests_profile.yHD-so.prof']
-
-It deletes these when rerun.  We'll delete these ourselves:
-
-    >>> import os
-    >>> for f in files:
-    ...     os.unlink(f)

Deleted: zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-progress.txt
===================================================================
--- zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-progress.txt	2008-05-03 13:24:29 UTC (rev 86206)
+++ zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-progress.txt	2008-05-03 13:25:02 UTC (rev 86207)
@@ -1,428 +0,0 @@
-Test Progress
-=============
-
-If the --progress (-p) option is used, progress information is printed and
-a carriage return (rather than a new-line) is printed between
-detail lines.  Let's look at the effect of --progress (-p) at different
-levels of verbosity.
-
-    >>> import os.path, sys
-    >>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
-    >>> defaults = [
-    ...     '--path', directory_with_tests,
-    ...     '--tests-pattern', '^sampletestsf?$',
-    ...     ]
-
-    >>> sys.argv = 'test --layer 122 -p'.split()
-    >>> from zope.testing import testrunner
-    >>> testrunner.run(defaults)
-    Running samplelayers.Layer122 tests:
-      Set up samplelayers.Layer1 in 0.000 seconds.
-      Set up samplelayers.Layer12 in 0.000 seconds.
-      Set up samplelayers.Layer122 in 0.000 seconds.
-      Running:
-        1/34 (2.9%)\r
-                   \r
-        2/34 (5.9%)\r
-                   \r
-        3/34 (8.8%)\r
-                   \r
-        4/34 (11.8%)\r
-                    \r
-        5/34 (14.7%)\r
-                    \r
-        6/34 (17.6%)\r
-                    \r
-        7/34 (20.6%)\r
-                    \r
-        8/34 (23.5%)\r
-                    \r
-        9/34 (26.5%)\r
-                    \r
-        10/34 (29.4%)\r
-                     \r
-        11/34 (32.4%)\r
-                     \r
-        12/34 (35.3%)\r
-                     \r
-        17/34 (50.0%)\r
-                     \r
-        18/34 (52.9%)\r
-                     \r
-        19/34 (55.9%)\r
-                     \r
-        20/34 (58.8%)\r
-                     \r
-        21/34 (61.8%)\r
-                     \r
-        22/34 (64.7%)\r
-                     \r
-        23/34 (67.6%)\r
-                     \r
-        24/34 (70.6%)\r
-                     \r
-        25/34 (73.5%)\r
-                     \r
-        26/34 (76.5%)\r
-                     \r
-        27/34 (79.4%)\r
-                     \r
-        28/34 (82.4%)\r
-                     \r
-        29/34 (85.3%)\r
-                     \r
-        34/34 (100.0%)\r
-                      \r
-    <BLANKLINE>
-      Ran 34 tests with 0 failures and 0 errors in 0.008 seconds.
-    Tearing down left over layers:
-      Tear down samplelayers.Layer122 in 0.000 seconds.
-      Tear down samplelayers.Layer12 in 0.000 seconds.
-      Tear down samplelayers.Layer1 in 0.000 seconds.
-    False
-
-(Note that, in the examples above and below, we show "\r" followed by
-new lines where carriage returns would appear in actual output.)
-
-Using a single level of verbosity causes test descriptions to be
-output, but only if they fit in the terminal width.  The default
-width, when the terminal width can't be determined, is 80:
-
->>> sys.argv = 'test --layer 122 -pv'.split()
->>> testrunner.run(defaults)
-Running tests at level 1
-Running samplelayers.Layer122 tests:
-  Set up samplelayers.Layer1 in 0.000 seconds.
-  Set up samplelayers.Layer12 in 0.000 seconds.
-  Set up samplelayers.Layer122 in 0.000 seconds.
-  Running:
-    1/34 (2.9%) test_x1 (sample1.sampletests.test122.TestA)\r
-                                                           \r
-    2/34 (5.9%) test_y0 (sample1.sampletests.test122.TestA)\r
-                                                           \r
-    3/34 (8.8%) test_z0 (sample1.sampletests.test122.TestA)\r
-                                                           \r
-    4/34 (11.8%) test_x0 (sample1.sampletests.test122.TestB)\r
-                                                            \r
-    5/34 (14.7%) test_y1 (sample1.sampletests.test122.TestB)\r
-                                                            \r
-    6/34 (17.6%) test_z0 (sample1.sampletests.test122.TestB)\r
-                                                            \r
-    7/34 (20.6%) test_1 (sample1.sampletests.test122.TestNotMuch)\r
-                                                                 \r
-    8/34 (23.5%) test_2 (sample1.sampletests.test122.TestNotMuch)\r
-                                                                 \r
-    9/34 (26.5%) test_3 (sample1.sampletests.test122.TestNotMuch)\r
-                                                                 \r
-    10/34 (29.4%) test_x0 (sample1.sampletests.test122)\r
-                                                       \r
-    11/34 (32.4%) test_y0 (sample1.sampletests.test122)\r
-                                                       \r
-    12/34 (35.3%) test_z1 (sample1.sampletests.test122)\r
-                                                       \r
-    17/34 (50.0%) ... /testrunner-ex/sample1/sampletests/../../sampletestsl.txt\r
-                                                                               \r
-    18/34 (52.9%) test_x1 (sampletests.test122.TestA)\r
-                                                     \r
-    19/34 (55.9%) test_y0 (sampletests.test122.TestA)\r
-                                                     \r
-    20/34 (58.8%) test_z0 (sampletests.test122.TestA)\r
-                                                     \r
-    21/34 (61.8%) test_x0 (sampletests.test122.TestB)\r
-                                                     \r
-    22/34 (64.7%) test_y1 (sampletests.test122.TestB)\r
-                                                     \r
-    23/34 (67.6%) test_z0 (sampletests.test122.TestB)\r
-                                                     \r
-    24/34 (70.6%) test_1 (sampletests.test122.TestNotMuch)\r
-                                                          \r
-    25/34 (73.5%) test_2 (sampletests.test122.TestNotMuch)\r
-                                                          \r
-    26/34 (76.5%) test_3 (sampletests.test122.TestNotMuch)\r
-                                                          \r
-    27/34 (79.4%) test_x0 (sampletests.test122)\r
-                                               \r
-    28/34 (82.4%) test_y0 (sampletests.test122)\r
-                                               \r
-    29/34 (85.3%) test_z1 (sampletests.test122)\r
-                                               \r
-    34/34 (100.0%) ... pe/testing/testrunner-ex/sampletests/../sampletestsl.txt\r
-                                                                               \r
-<BLANKLINE>
-  Ran 34 tests with 0 failures and 0 errors in 0.008 seconds.
-Tearing down left over layers:
-  Tear down samplelayers.Layer122 in 0.000 seconds.
-  Tear down samplelayers.Layer12 in 0.000 seconds.
-  Tear down samplelayers.Layer1 in 0.000 seconds.
-False
-
-The terminal width is determined using the curses module.  To see
-that, we'll provide a fake curses module:
-
-    >>> class FakeCurses:
-    ...     def setupterm(self):
-    ...         pass
-    ...     def tigetnum(self, ignored):
-    ...         return 60
-    >>> old_curses = sys.modules.get('curses')
-    >>> sys.modules['curses'] = FakeCurses()
-    >>> testrunner.run(defaults)
-    Running tests at level 1
-    Running samplelayers.Layer122 tests:
-      Set up samplelayers.Layer1 in 0.000 seconds.
-      Set up samplelayers.Layer12 in 0.000 seconds.
-      Set up samplelayers.Layer122 in 0.000 seconds.
-      Running:
-        1/34 (2.9%) test_x1 (sample1.sampletests.test122.TestA)\r
-                                                               \r
-        2/34 (5.9%) test_y0 (sample1.sampletests.test122.TestA)\r
-                                                               \r
-        3/34 (8.8%) test_z0 (sample1.sampletests.test122.TestA)\r
-                                                               \r
-        4/34 (11.8%) test_x0 (...le1.sampletests.test122.TestB)\r
-                                                               \r
-        5/34 (14.7%) test_y1 (...le1.sampletests.test122.TestB)\r
-                                                               \r
-        6/34 (17.6%) test_z0 (...le1.sampletests.test122.TestB)\r
-                                                               \r
-        7/34 (20.6%) test_1 (...ampletests.test122.TestNotMuch)\r
-                                                               \r
-        8/34 (23.5%) test_2 (...ampletests.test122.TestNotMuch)\r
-                                                               \r
-        9/34 (26.5%) test_3 (...ampletests.test122.TestNotMuch)\r
-                                                               \r
-        10/34 (29.4%) test_x0 (sample1.sampletests.test122)\r
-                                                           \r
-        11/34 (32.4%) test_y0 (sample1.sampletests.test122)\r
-                                                           \r
-        12/34 (35.3%) test_z1 (sample1.sampletests.test122)\r
-                                                           \r
-        17/34 (50.0%) ... e1/sampletests/../../sampletestsl.txt\r
-                                                               \r
-        18/34 (52.9%) test_x1 (sampletests.test122.TestA)\r
-                                                         \r
-        19/34 (55.9%) test_y0 (sampletests.test122.TestA)\r
-                                                         \r
-        20/34 (58.8%) test_z0 (sampletests.test122.TestA)\r
-                                                         \r
-        21/34 (61.8%) test_x0 (sampletests.test122.TestB)\r
-                                                         \r
-        22/34 (64.7%) test_y1 (sampletests.test122.TestB)\r
-                                                         \r
-        23/34 (67.6%) test_z0 (sampletests.test122.TestB)\r
-                                                         \r
-        24/34 (70.6%) test_1 (sampletests.test122.TestNotMuch)\r
-                                                              \r
-        25/34 (73.5%) test_2 (sampletests.test122.TestNotMuch)\r
-                                                              \r
-        26/34 (76.5%) test_3 (sampletests.test122.TestNotMuch)\r
-                                                              \r
-        27/34 (79.4%) test_x0 (sampletests.test122)\r
-                                                   \r
-        28/34 (82.4%) test_y0 (sampletests.test122)\r
-                                                   \r
-        29/34 (85.3%) test_z1 (sampletests.test122)\r
-                                                   \r
-        34/34 (100.0%) ... r-ex/sampletests/../sampletestsl.txt\r
-                                                               \r
-    <BLANKLINE>
-      Ran 34 tests with 0 failures and 0 errors in 0.008 seconds.
-    Tearing down left over layers:
-      Tear down samplelayers.Layer122 in 0.000 seconds.
-      Tear down samplelayers.Layer12 in 0.000 seconds.
-      Tear down samplelayers.Layer1 in 0.000 seconds.
-    False
-
-    >>> sys.modules['curses'] = old_curses
-
-If a second or third level of verbosity is added, we get additional
-information.
-
-    >>> sys.argv = 'test --layer 122 -pvv -t !txt'.split()
-    >>> testrunner.run(defaults)
-    Running tests at level 1
-    Running samplelayers.Layer122 tests:
-      Set up samplelayers.Layer1 in 0.000 seconds.
-      Set up samplelayers.Layer12 in 0.000 seconds.
-      Set up samplelayers.Layer122 in 0.000 seconds.
-      Running:
-        1/24 (4.2%) test_x1 (sample1.sampletests.test122.TestA)\r
-                                                              \r
-        2/24 (8.3%) test_y0 (sample1.sampletests.test122.TestA)\r
-                                                              \r
-        3/24 (12.5%) test_z0 (sample1.sampletests.test122.TestA)\r
-                                                               \r
-        4/24 (16.7%) test_x0 (sample1.sampletests.test122.TestB)\r
-                                                               \r
-        5/24 (20.8%) test_y1 (sample1.sampletests.test122.TestB)\r
-                                                               \r
-        6/24 (25.0%) test_z0 (sample1.sampletests.test122.TestB)\r
-                                                               \r
-        7/24 (29.2%) test_1 (sample1.sampletests.test122.TestNotMuch)\r
-                                                                    \r
-        8/24 (33.3%) test_2 (sample1.sampletests.test122.TestNotMuch)\r
-                                                                    \r
-        9/24 (37.5%) test_3 (sample1.sampletests.test122.TestNotMuch)\r
-                                                                    \r
-        10/24 (41.7%) test_x0 (sample1.sampletests.test122)\r
-                                                          \r
-        11/24 (45.8%) test_y0 (sample1.sampletests.test122)\r
-                                                          \r
-        12/24 (50.0%) test_z1 (sample1.sampletests.test122)\r
-                                                          \r
-        13/24 (54.2%) test_x1 (sampletests.test122.TestA)\r
-                                                        \r
-        14/24 (58.3%) test_y0 (sampletests.test122.TestA)\r
-                                                        \r
-        15/24 (62.5%) test_z0 (sampletests.test122.TestA)\r
-                                                        \r
-        16/24 (66.7%) test_x0 (sampletests.test122.TestB)\r
-                                                        \r
-        17/24 (70.8%) test_y1 (sampletests.test122.TestB)\r
-                                                        \r
-        18/24 (75.0%) test_z0 (sampletests.test122.TestB)\r
-                                                        \r
-        19/24 (79.2%) test_1 (sampletests.test122.TestNotMuch)\r
-                                                             \r
-        20/24 (83.3%) test_2 (sampletests.test122.TestNotMuch)\r
-                                                             \r
-        21/24 (87.5%) test_3 (sampletests.test122.TestNotMuch)\r
-                                                             \r
-        22/24 (91.7%) test_x0 (sampletests.test122)\r
-                                                  \r
-        23/24 (95.8%) test_y0 (sampletests.test122)\r
-                                                  \r
-        24/24 (100.0%) test_z1 (sampletests.test122)\r
-                                                   \r
-    <BLANKLINE>
-      Ran 24 tests with 0 failures and 0 errors in 0.006 seconds.
-    Tearing down left over layers:
-      Tear down samplelayers.Layer122 in 0.000 seconds.
-      Tear down samplelayers.Layer12 in 0.000 seconds.
-      Tear down samplelayers.Layer1 in 0.000 seconds.
-    False
-
-Note that, in this example, we used a test-selection pattern starting
-with '!' to exclude tests containing the string "txt".
-
-    >>> sys.argv = 'test --layer 122 -pvvv -t!(txt|NotMuch)'.split()
-    >>> testrunner.run(defaults)
-    Running tests at level 1
-    Running samplelayers.Layer122 tests:
-      Set up samplelayers.Layer1 in 0.000 seconds.
-      Set up samplelayers.Layer12 in 0.000 seconds.
-      Set up samplelayers.Layer122 in 0.000 seconds.
-      Running:
-        1/18 (5.6%) test_x1 (sample1.sampletests.test122.TestA) (0.000 s)\r
-                                                                          \r
-        2/18 (11.1%) test_y0 (sample1.sampletests.test122.TestA) (0.000 s)\r
-                                                                           \r
-        3/18 (16.7%) test_z0 (sample1.sampletests.test122.TestA) (0.000 s)\r
-                                                                           \r
-        4/18 (22.2%) test_x0 (sample1.sampletests.test122.TestB) (0.000 s)\r
-                                                                           \r
-        5/18 (27.8%) test_y1 (sample1.sampletests.test122.TestB) (0.000 s)\r
-                                                                           \r
-        6/18 (33.3%) test_z0 (sample1.sampletests.test122.TestB) (0.000 s)\r
-                                                                           \r
-        7/18 (38.9%) test_x0 (sample1.sampletests.test122) (0.001 s)\r
-                                                                     \r
-        8/18 (44.4%) test_y0 (sample1.sampletests.test122) (0.001 s)\r
-                                                                     \r
-        9/18 (50.0%) test_z1 (sample1.sampletests.test122) (0.001 s)\r
-                                                                     \r
-        10/18 (55.6%) test_x1 (sampletests.test122.TestA) (0.000 s)\r
-                                                                    \r
-        11/18 (61.1%) test_y0 (sampletests.test122.TestA) (0.000 s)\r
-                                                                    \r
-        12/18 (66.7%) test_z0 (sampletests.test122.TestA) (0.000 s)\r
-                                                                    \r
-        13/18 (72.2%) test_x0 (sampletests.test122.TestB) (0.000 s)\r
-                                                                    \r
-        14/18 (77.8%) test_y1 (sampletests.test122.TestB) (0.000 s)\r
-                                                                    \r
-        15/18 (83.3%) test_z0 (sampletests.test122.TestB) (0.000 s)\r
-                                                                    \r
-        16/18 (88.9%) test_x0 (sampletests.test122) (0.001 s)\r
-                                                              \r
-        17/18 (94.4%) test_y0 (sampletests.test122) (0.001 s)\r
-                                                              \r
-        18/18 (100.0%) test_z1 (sampletests.test122) (0.001 s)\r
-                                                               \r
-    <BLANKLINE>
-      Ran 18 tests with 0 failures and 0 errors in 0.006 seconds.
-    Tearing down left over layers:
-      Tear down samplelayers.Layer122 in 0.000 seconds.
-      Tear down samplelayers.Layer12 in 0.000 seconds.
-      Tear down samplelayers.Layer1 in 0.000 seconds.
-    False
-
-In this example, we also excluded tests with "NotMuch" in their names.
-
-Unfortunately, the timing data above doesn't buy us much because, in
-practice, each line is cleared before there is time to read the
-times. :/
-
-
-Autodetecting progress
-----------------------
-
-The --auto-progress option determines whether stdout is a terminal and
-enables progress output only if it is.
-
-Let's pretend we have a terminal:
-
-    >>> class Terminal(object):
-    ...     def __init__(self, stream):
-    ...         self._stream = stream
-    ...     def __getattr__(self, attr):
-    ...         return getattr(self._stream, attr)
-    ...     def isatty(self):
-    ...         return True
-    >>> real_stdout = sys.stdout
-    >>> sys.stdout = Terminal(sys.stdout)
-
-    >>> sys.argv = 'test -u -t test_one.TestNotMuch --auto-progress'.split()
-    >>> testrunner.run(defaults)
-    Running unit tests:
-      Running:
-        1/6 (16.7%)\r
-                   \r
-        2/6 (33.3%)\r
-                   \r
-        3/6 (50.0%)\r
-                   \r
-        4/6 (66.7%)\r
-                   \r
-        5/6 (83.3%)\r
-                   \r
-        6/6 (100.0%)\r
-                    \r
-    <BLANKLINE>
-      Ran 6 tests with 0 failures and 0 errors in 0.000 seconds.
-    False
-
-Let's stop pretending:
-
-    >>> sys.stdout = real_stdout
-
-    >>> sys.argv = 'test -u -t test_one.TestNotMuch --auto-progress'.split()
-    >>> testrunner.run(defaults)
-    Running unit tests:
-      Ran 6 tests with 0 failures and 0 errors in 0.000 seconds.
-    False
-
-
-Disabling progress indication
------------------------------
-
-If -p or --progress has already been provided on the command line (perhaps
-by a wrapper script) but you do not want progress indication, you can
-switch it off with --no-progress:
-
-    >>> sys.argv = 'test -u -t test_one.TestNotMuch -p --no-progress'.split()
-    >>> testrunner.run(defaults)
-    Running unit tests:
-      Ran 6 tests with 0 failures and 0 errors in 0.000 seconds.
-    False
-

Deleted: zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-repeat.txt
===================================================================
--- zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-repeat.txt	2008-05-03 13:24:29 UTC (rev 86206)
+++ zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-repeat.txt	2008-05-03 13:25:02 UTC (rev 86207)
@@ -1,46 +0,0 @@
-Repeating Tests
-===============
-
-The --repeat option can be used to repeat tests some number of times.
-Repeating tests is useful to help make sure that tests clean up after
-themselves.
-
-    >>> import os.path, sys
-    >>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
-    >>> defaults = [
-    ...     '--path', directory_with_tests,
-    ...     '--tests-pattern', '^sampletestsf?$',
-    ...     ]
-
-    >>> sys.argv = 'test --layer 112 --layer unit --repeat 3'.split()
-    >>> from zope.testing import testrunner
-    >>> testrunner.run(defaults)
-    Running unit tests:
-    Iteration 1
-      Ran 192 tests with 0 failures and 0 errors in 0.054 seconds.
-    Iteration 2
-      Ran 192 tests with 0 failures and 0 errors in 0.054 seconds.
-    Iteration 3
-      Ran 192 tests with 0 failures and 0 errors in 0.052 seconds.
-    Running samplelayers.Layer112 tests:
-      Set up samplelayers.Layerx in 0.000 seconds.
-      Set up samplelayers.Layer1 in 0.000 seconds.
-      Set up samplelayers.Layer11 in 0.000 seconds.
-      Set up samplelayers.Layer112 in 0.000 seconds.
-    Iteration 1
-      Ran 34 tests with 0 failures and 0 errors in 0.010 seconds.
-    Iteration 2
-      Ran 34 tests with 0 failures and 0 errors in 0.010 seconds.
-    Iteration 3
-      Ran 34 tests with 0 failures and 0 errors in 0.010 seconds.
-    Tearing down left over layers:
-      Tear down samplelayers.Layer112 in 0.000 seconds.
-      Tear down samplelayers.Layerx in 0.000 seconds.
-      Tear down samplelayers.Layer11 in 0.000 seconds.
-      Tear down samplelayers.Layer1 in 0.000 seconds.
-    Total: 226 tests, 0 failures, 0 errors in N.NNN seconds.
-    False
-
-The tests are repeated by layer.  Layers are set up and torn down only
-once.
- 

Deleted: zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-simple.txt
===================================================================
--- zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-simple.txt	2008-05-03 13:24:29 UTC (rev 86206)
+++ zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-simple.txt	2008-05-03 13:25:02 UTC (rev 86207)
@@ -1,95 +0,0 @@
-Simple Usage
-============
-
-The test runner is an importable module.  It is used by providing scripts
-that import and invoke its `run` method.  The `testrunner` module is
-controlled via command-line options.  Test scripts supply base and default
-options by passing a list of default command-line options that are
-processed before the user-supplied command-line options.
-
-Typically, a test script does two things:
-
-- Adds the directory containing the zope package to the Python
-  path.
-
-- Calls the test runner with default arguments and arguments supplied
-  to the script.
-
-  Normally, it just passes default/setup arguments.  The test runner
-  uses `sys.argv` to get the user's input.  A minimal sketch of such a
-  script is shown below.
-
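-Here is a minimal sketch of such a script (the paths, tests pattern, and
-layout are hypothetical, and this block is not executed as part of this
-document):
-
-    import os.path
-    import sys
-
-    from zope.testing import testrunner
-
-    here = os.path.dirname(os.path.abspath(__file__))
-
-    # Make the packages under test importable (hypothetical layout).
-    sys.path.insert(0, os.path.join(here, 'src'))
-
-    defaults = [
-        '--path', os.path.join(here, 'src'),
-        '--tests-pattern', '^tests$',
-        ]
-
-    if __name__ == '__main__':
-        # run() reads further options from sys.argv and returns True
-        # if there were failures or errors.
-        sys.exit(testrunner.run(defaults))
-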
-The testrunner-ex subdirectory contains a number of sample packages
-with tests.  Let's run the tests found there.  First, though, we'll set
-up our default options:
-
-    >>> import os.path
-    >>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
-    >>> defaults = [
-    ...     '--path', directory_with_tests,
-    ...     '--tests-pattern', '^sampletestsf?$',
-    ...     ]
-
-The default options are used by a script to customize the test runner
-for a particular application.  In this case, we use two options:
-
-path
-  Set the path where the test runner should look for tests.  This path
-  is also added to the Python path.
-
-tests-pattern
-  Tell the test runner how to recognize modules or packages containing
-  tests.
-
-Now, if we run the tests, without any other options:
-
-    >>> from zope.testing import testrunner
-    >>> import sys
-    >>> sys.argv = ['test']
-    >>> testrunner.run(defaults)
-    Running unit tests:
-      Ran 192 tests with 0 failures and 0 errors in N.NNN seconds.
-    Running samplelayers.Layer1 tests:
-      Set up samplelayers.Layer1 in N.NNN seconds.
-      Ran 9 tests with 0 failures and 0 errors in N.NNN seconds.
-    Running samplelayers.Layer11 tests:
-      Set up samplelayers.Layer11 in N.NNN seconds.
-      Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
-    Running samplelayers.Layer111 tests:
-      Set up samplelayers.Layerx in N.NNN seconds.
-      Set up samplelayers.Layer111 in N.NNN seconds.
-      Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
-    Running samplelayers.Layer112 tests:
-      Tear down samplelayers.Layer111 in N.NNN seconds.
-      Set up samplelayers.Layer112 in N.NNN seconds.
-      Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
-    Running samplelayers.Layer12 tests:
-      Tear down samplelayers.Layer112 in N.NNN seconds.
-      Tear down samplelayers.Layerx in N.NNN seconds.
-      Tear down samplelayers.Layer11 in N.NNN seconds.
-      Set up samplelayers.Layer12 in N.NNN seconds.
-      Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
-    Running samplelayers.Layer121 tests:
-      Set up samplelayers.Layer121 in N.NNN seconds.
-      Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
-    Running samplelayers.Layer122 tests:
-      Tear down samplelayers.Layer121 in N.NNN seconds.
-      Set up samplelayers.Layer122 in N.NNN seconds.
-      Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
-    Tearing down left over layers:
-      Tear down samplelayers.Layer122 in N.NNN seconds.
-      Tear down samplelayers.Layer12 in N.NNN seconds.
-      Tear down samplelayers.Layer1 in N.NNN seconds.
-    Total: 405 tests, 0 failures, 0 errors in N.NNN seconds.
-    False
-
-we see the normal testrunner output, which summarizes the tests run for
-each layer.  For each layer, we see what layers had to be torn down or
-set up to run the layer and we see the number of tests run, with
-results.
-
-The test runner returns a boolean indicating whether there were
-errors.  In this example, there were no errors, so it returned False.
-
-(Of course, the times shown in these examples are just examples.
-Times will vary depending on system speed.)

Deleted: zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-test-selection.txt
===================================================================
--- zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-test-selection.txt	2008-05-03 13:24:29 UTC (rev 86206)
+++ zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-test-selection.txt	2008-05-03 13:25:02 UTC (rev 86207)
@@ -1,564 +0,0 @@
-Test Selection
-==============
-
-We've already seen that we can select tests by layer.  There are three
-other ways we can select tests.  We can select tests by package:
-
-    >>> import os.path, sys
-    >>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
-    >>> defaults = [
-    ...     '--path', directory_with_tests,
-    ...     '--tests-pattern', '^sampletestsf?$',
-    ...     ]
-
-    >>> sys.argv = 'test --layer 122 -ssample1 -vv'.split()
-    >>> from zope.testing import testrunner
-    >>> testrunner.run(defaults)
-    Running tests at level 1
-    Running samplelayers.Layer122 tests:
-      Set up samplelayers.Layer1 in 0.000 seconds.
-      Set up samplelayers.Layer12 in 0.000 seconds.
-      Set up samplelayers.Layer122 in 0.000 seconds.
-      Running:
-        test_x1 (sample1.sampletests.test122.TestA)
-        test_y0 (sample1.sampletests.test122.TestA)
-        test_z0 (sample1.sampletests.test122.TestA)
-        test_x0 (sample1.sampletests.test122.TestB)
-        test_y1 (sample1.sampletests.test122.TestB)
-        test_z0 (sample1.sampletests.test122.TestB)
-        test_1 (sample1.sampletests.test122.TestNotMuch)
-        test_2 (sample1.sampletests.test122.TestNotMuch)
-        test_3 (sample1.sampletests.test122.TestNotMuch)
-        test_x0 (sample1.sampletests.test122)
-        test_y0 (sample1.sampletests.test122)
-        test_z1 (sample1.sampletests.test122)
-        testrunner-ex/sample1/sampletests/../../sampletestsl.txt
-      Ran 17 tests with 0 failures and 0 errors in 0.005 seconds.
-    Tearing down left over layers:
-      Tear down samplelayers.Layer122 in 0.000 seconds.
-      Tear down samplelayers.Layer12 in 0.000 seconds.
-      Tear down samplelayers.Layer1 in 0.000 seconds.
-    False
-
-You can specify multiple packages:
-
-    >>> sys.argv = 'test -u  -vv -ssample1 -ssample2'.split()
-    >>> testrunner.run(defaults) 
-    Running tests at level 1
-    Running unit tests:
-      Running:
-        test_x1 (sample1.sampletestsf.TestA)
-        test_y0 (sample1.sampletestsf.TestA)
-        test_z0 (sample1.sampletestsf.TestA)
-        test_x0 (sample1.sampletestsf.TestB)
-        test_y1 (sample1.sampletestsf.TestB)
-        test_z0 (sample1.sampletestsf.TestB)
-        test_1 (sample1.sampletestsf.TestNotMuch)
-        test_2 (sample1.sampletestsf.TestNotMuch)
-        test_3 (sample1.sampletestsf.TestNotMuch)
-        test_x0 (sample1.sampletestsf)
-        test_y0 (sample1.sampletestsf)
-        test_z1 (sample1.sampletestsf)
-        testrunner-ex/sample1/../sampletests.txt
-        test_x1 (sample1.sample11.sampletests.TestA)
-        test_y0 (sample1.sample11.sampletests.TestA)
-        test_z0 (sample1.sample11.sampletests.TestA)
-        test_x0 (sample1.sample11.sampletests.TestB)
-        test_y1 (sample1.sample11.sampletests.TestB)
-        test_z0 (sample1.sample11.sampletests.TestB)
-        test_1 (sample1.sample11.sampletests.TestNotMuch)
-        test_2 (sample1.sample11.sampletests.TestNotMuch)
-        test_3 (sample1.sample11.sampletests.TestNotMuch)
-        test_x0 (sample1.sample11.sampletests)
-        test_y0 (sample1.sample11.sampletests)
-        test_z1 (sample1.sample11.sampletests)
-        testrunner-ex/sample1/sample11/../../sampletests.txt
-        test_x1 (sample1.sample13.sampletests.TestA)
-        test_y0 (sample1.sample13.sampletests.TestA)
-        test_z0 (sample1.sample13.sampletests.TestA)
-        test_x0 (sample1.sample13.sampletests.TestB)
-        test_y1 (sample1.sample13.sampletests.TestB)
-        test_z0 (sample1.sample13.sampletests.TestB)
-        test_1 (sample1.sample13.sampletests.TestNotMuch)
-        test_2 (sample1.sample13.sampletests.TestNotMuch)
-        test_3 (sample1.sample13.sampletests.TestNotMuch)
-        test_x0 (sample1.sample13.sampletests)
-        test_y0 (sample1.sample13.sampletests)
-        test_z1 (sample1.sample13.sampletests)
-        testrunner-ex/sample1/sample13/../../sampletests.txt
-        test_x1 (sample1.sampletests.test1.TestA)
-        test_y0 (sample1.sampletests.test1.TestA)
-        test_z0 (sample1.sampletests.test1.TestA)
-        test_x0 (sample1.sampletests.test1.TestB)
-        test_y1 (sample1.sampletests.test1.TestB)
-        test_z0 (sample1.sampletests.test1.TestB)
-        test_1 (sample1.sampletests.test1.TestNotMuch)
-        test_2 (sample1.sampletests.test1.TestNotMuch)
-        test_3 (sample1.sampletests.test1.TestNotMuch)
-        test_x0 (sample1.sampletests.test1)
-        test_y0 (sample1.sampletests.test1)
-        test_z1 (sample1.sampletests.test1)
-        testrunner-ex/sample1/sampletests/../../sampletests.txt
-        test_x1 (sample1.sampletests.test_one.TestA)
-        test_y0 (sample1.sampletests.test_one.TestA)
-        test_z0 (sample1.sampletests.test_one.TestA)
-        test_x0 (sample1.sampletests.test_one.TestB)
-        test_y1 (sample1.sampletests.test_one.TestB)
-        test_z0 (sample1.sampletests.test_one.TestB)
-        test_1 (sample1.sampletests.test_one.TestNotMuch)
-        test_2 (sample1.sampletests.test_one.TestNotMuch)
-        test_3 (sample1.sampletests.test_one.TestNotMuch)
-        test_x0 (sample1.sampletests.test_one)
-        test_y0 (sample1.sampletests.test_one)
-        test_z1 (sample1.sampletests.test_one)
-        testrunner-ex/sample1/sampletests/../../sampletests.txt
-        test_x1 (sample2.sample21.sampletests.TestA)
-        test_y0 (sample2.sample21.sampletests.TestA)
-        test_z0 (sample2.sample21.sampletests.TestA)
-        test_x0 (sample2.sample21.sampletests.TestB)
-        test_y1 (sample2.sample21.sampletests.TestB)
-        test_z0 (sample2.sample21.sampletests.TestB)
-        test_1 (sample2.sample21.sampletests.TestNotMuch)
-        test_2 (sample2.sample21.sampletests.TestNotMuch)
-        test_3 (sample2.sample21.sampletests.TestNotMuch)
-        test_x0 (sample2.sample21.sampletests)
-        test_y0 (sample2.sample21.sampletests)
-        test_z1 (sample2.sample21.sampletests)
-        testrunner-ex/sample2/sample21/../../sampletests.txt
-        test_x1 (sample2.sampletests.test_1.TestA)
-        test_y0 (sample2.sampletests.test_1.TestA)
-        test_z0 (sample2.sampletests.test_1.TestA)
-        test_x0 (sample2.sampletests.test_1.TestB)
-        test_y1 (sample2.sampletests.test_1.TestB)
-        test_z0 (sample2.sampletests.test_1.TestB)
-        test_1 (sample2.sampletests.test_1.TestNotMuch)
-        test_2 (sample2.sampletests.test_1.TestNotMuch)
-        test_3 (sample2.sampletests.test_1.TestNotMuch)
-        test_x0 (sample2.sampletests.test_1)
-        test_y0 (sample2.sampletests.test_1)
-        test_z1 (sample2.sampletests.test_1)
-        testrunner-ex/sample2/sampletests/../../sampletests.txt
-        test_x1 (sample2.sampletests.testone.TestA)
-        test_y0 (sample2.sampletests.testone.TestA)
-        test_z0 (sample2.sampletests.testone.TestA)
-        test_x0 (sample2.sampletests.testone.TestB)
-        test_y1 (sample2.sampletests.testone.TestB)
-        test_z0 (sample2.sampletests.testone.TestB)
-        test_1 (sample2.sampletests.testone.TestNotMuch)
-        test_2 (sample2.sampletests.testone.TestNotMuch)
-        test_3 (sample2.sampletests.testone.TestNotMuch)
-        test_x0 (sample2.sampletests.testone)
-        test_y0 (sample2.sampletests.testone)
-        test_z1 (sample2.sampletests.testone)
-        testrunner-ex/sample2/sampletests/../../sampletests.txt
-      Ran 128 tests with 0 failures and 0 errors in 0.025 seconds.
-    False
-
-You can specify directory names instead of packages (useful for
-tab-completion):
-
-    >>> subdir = os.path.join(directory_with_tests, 'sample1')
-    >>> sys.argv = ('test --layer 122 -s %s -vv' % subdir).split()
-    >>> from zope.testing import testrunner
-    >>> testrunner.run(defaults)
-    Running tests at level 1
-    Running samplelayers.Layer122 tests:
-      Set up samplelayers.Layer1 in 0.000 seconds.
-      Set up samplelayers.Layer12 in 0.000 seconds.
-      Set up samplelayers.Layer122 in 0.000 seconds.
-      Running:
-        test_x1 (sample1.sampletests.test122.TestA)
-        test_y0 (sample1.sampletests.test122.TestA)
-        test_z0 (sample1.sampletests.test122.TestA)
-        test_x0 (sample1.sampletests.test122.TestB)
-        test_y1 (sample1.sampletests.test122.TestB)
-        test_z0 (sample1.sampletests.test122.TestB)
-        test_1 (sample1.sampletests.test122.TestNotMuch)
-        test_2 (sample1.sampletests.test122.TestNotMuch)
-        test_3 (sample1.sampletests.test122.TestNotMuch)
-        test_x0 (sample1.sampletests.test122)
-        test_y0 (sample1.sampletests.test122)
-        test_z1 (sample1.sampletests.test122)
-        testrunner-ex/sample1/sampletests/../../sampletestsl.txt
-      Ran 17 tests with 0 failures and 0 errors in 0.005 seconds.
-    Tearing down left over layers:
-      Tear down samplelayers.Layer122 in 0.000 seconds.
-      Tear down samplelayers.Layer12 in 0.000 seconds.
-      Tear down samplelayers.Layer1 in 0.000 seconds.
-    False
-
-We can select by test module name using the --module (-m) option:
-
-    >>> sys.argv = 'test -u  -vv -ssample1 -m_one -mtest1'.split()
-    >>> testrunner.run(defaults)
-    Running tests at level 1
-    Running unit tests:
-      Running:
-        test_x1 (sample1.sampletests.test1.TestA)
-        test_y0 (sample1.sampletests.test1.TestA)
-        test_z0 (sample1.sampletests.test1.TestA)
-        test_x0 (sample1.sampletests.test1.TestB)
-        test_y1 (sample1.sampletests.test1.TestB)
-        test_z0 (sample1.sampletests.test1.TestB)
-        test_1 (sample1.sampletests.test1.TestNotMuch)
-        test_2 (sample1.sampletests.test1.TestNotMuch)
-        test_3 (sample1.sampletests.test1.TestNotMuch)
-        test_x0 (sample1.sampletests.test1)
-        test_y0 (sample1.sampletests.test1)
-        test_z1 (sample1.sampletests.test1)
-        testrunner-ex/sample1/sampletests/../../sampletests.txt
-        test_x1 (sample1.sampletests.test_one.TestA)
-        test_y0 (sample1.sampletests.test_one.TestA)
-        test_z0 (sample1.sampletests.test_one.TestA)
-        test_x0 (sample1.sampletests.test_one.TestB)
-        test_y1 (sample1.sampletests.test_one.TestB)
-        test_z0 (sample1.sampletests.test_one.TestB)
-        test_1 (sample1.sampletests.test_one.TestNotMuch)
-        test_2 (sample1.sampletests.test_one.TestNotMuch)
-        test_3 (sample1.sampletests.test_one.TestNotMuch)
-        test_x0 (sample1.sampletests.test_one)
-        test_y0 (sample1.sampletests.test_one)
-        test_z1 (sample1.sampletests.test_one)
-        testrunner-ex/sample1/sampletests/../../sampletests.txt
-      Ran 32 tests with 0 failures and 0 errors in 0.008 seconds.
-    False
-
-and by test within the module using the --test (-t) option:
-
-    >>> sys.argv = 'test -u  -vv -ssample1 -m_one -mtest1 -tx0 -ty0'.split()
-    >>> testrunner.run(defaults)
-    Running tests at level 1
-    Running unit tests:
-      Running:
-        test_y0 (sample1.sampletests.test1.TestA)
-        test_x0 (sample1.sampletests.test1.TestB)
-        test_x0 (sample1.sampletests.test1)
-        test_y0 (sample1.sampletests.test1)
-        test_y0 (sample1.sampletests.test_one.TestA)
-        test_x0 (sample1.sampletests.test_one.TestB)
-        test_x0 (sample1.sampletests.test_one)
-        test_y0 (sample1.sampletests.test_one)
-      Ran 8 tests with 0 failures and 0 errors in 0.003 seconds.
-    False
-
-Here, the -t pattern selects the doctest files, whose names contain "txt":
-
-    >>> sys.argv = 'test -u  -vv -ssample1 -ttxt'.split()
-    >>> testrunner.run(defaults)
-    Running tests at level 1
-    Running unit tests:
-      Running:
-        testrunner-ex/sample1/../sampletests.txt
-        testrunner-ex/sample1/sample11/../../sampletests.txt
-        testrunner-ex/sample1/sample13/../../sampletests.txt
-        testrunner-ex/sample1/sampletests/../../sampletests.txt
-        testrunner-ex/sample1/sampletests/../../sampletests.txt
-      Ran 20 tests with 0 failures and 0 errors in 0.004 seconds.
-    False
-
-The --module and --test options take regular expressions.  If a
-specified regular expression begins with '!', then tests that don't
-match it are selected:
-
-    >>> sys.argv = 'test -u  -vv -ssample1 -m!sample1[.]sample1'.split()
-    >>> testrunner.run(defaults) 
-    Running tests at level 1
-    Running unit tests:
-      Running:
-        test_x1 (sample1.sampletestsf.TestA)
-        test_y0 (sample1.sampletestsf.TestA)
-        test_z0 (sample1.sampletestsf.TestA)
-        test_x0 (sample1.sampletestsf.TestB)
-        test_y1 (sample1.sampletestsf.TestB)
-        test_z0 (sample1.sampletestsf.TestB)
-        test_1 (sample1.sampletestsf.TestNotMuch)
-        test_2 (sample1.sampletestsf.TestNotMuch)
-        test_3 (sample1.sampletestsf.TestNotMuch)
-        test_x0 (sample1.sampletestsf)
-        test_y0 (sample1.sampletestsf)
-        test_z1 (sample1.sampletestsf)
-        testrunner-ex/sample1/../sampletests.txt
-        test_x1 (sample1.sampletests.test1.TestA)
-        test_y0 (sample1.sampletests.test1.TestA)
-        test_z0 (sample1.sampletests.test1.TestA)
-        test_x0 (sample1.sampletests.test1.TestB)
-        test_y1 (sample1.sampletests.test1.TestB)
-        test_z0 (sample1.sampletests.test1.TestB)
-        test_1 (sample1.sampletests.test1.TestNotMuch)
-        test_2 (sample1.sampletests.test1.TestNotMuch)
-        test_3 (sample1.sampletests.test1.TestNotMuch)
-        test_x0 (sample1.sampletests.test1)
-        test_y0 (sample1.sampletests.test1)
-        test_z1 (sample1.sampletests.test1)
-        testrunner-ex/sample1/sampletests/../../sampletests.txt
-        test_x1 (sample1.sampletests.test_one.TestA)
-        test_y0 (sample1.sampletests.test_one.TestA)
-        test_z0 (sample1.sampletests.test_one.TestA)
-        test_x0 (sample1.sampletests.test_one.TestB)
-        test_y1 (sample1.sampletests.test_one.TestB)
-        test_z0 (sample1.sampletests.test_one.TestB)
-        test_1 (sample1.sampletests.test_one.TestNotMuch)
-        test_2 (sample1.sampletests.test_one.TestNotMuch)
-        test_3 (sample1.sampletests.test_one.TestNotMuch)
-        test_x0 (sample1.sampletests.test_one)
-        test_y0 (sample1.sampletests.test_one)
-        test_z1 (sample1.sampletests.test_one)
-        testrunner-ex/sample1/sampletests/../../sampletests.txt
-      Ran 48 tests with 0 failures and 0 errors in 0.017 seconds.
-    False
-
-Module and test filters can also be given as positional arguments:
-
-
-    >>> sys.argv = 'test -u  -vv -ssample1 !sample1[.]sample1'.split()
-    >>> testrunner.run(defaults) 
-    Running tests at level 1
-    Running unit tests:
-      Running:
-        test_x1 (sample1.sampletestsf.TestA)
-        test_y0 (sample1.sampletestsf.TestA)
-        test_z0 (sample1.sampletestsf.TestA)
-        test_x0 (sample1.sampletestsf.TestB)
-        test_y1 (sample1.sampletestsf.TestB)
-        test_z0 (sample1.sampletestsf.TestB)
-        test_1 (sample1.sampletestsf.TestNotMuch)
-        test_2 (sample1.sampletestsf.TestNotMuch)
-        test_3 (sample1.sampletestsf.TestNotMuch)
-        test_x0 (sample1.sampletestsf)
-        test_y0 (sample1.sampletestsf)
-        test_z1 (sample1.sampletestsf)
-        testrunner-ex/sample1/../sampletests.txt
-        test_x1 (sample1.sampletests.test1.TestA)
-        test_y0 (sample1.sampletests.test1.TestA)
-        test_z0 (sample1.sampletests.test1.TestA)
-        test_x0 (sample1.sampletests.test1.TestB)
-        test_y1 (sample1.sampletests.test1.TestB)
-        test_z0 (sample1.sampletests.test1.TestB)
-        test_1 (sample1.sampletests.test1.TestNotMuch)
-        test_2 (sample1.sampletests.test1.TestNotMuch)
-        test_3 (sample1.sampletests.test1.TestNotMuch)
-        test_x0 (sample1.sampletests.test1)
-        test_y0 (sample1.sampletests.test1)
-        test_z1 (sample1.sampletests.test1)
-        testrunner-ex/sample1/sampletests/../../sampletests.txt
-        test_x1 (sample1.sampletests.test_one.TestA)
-        test_y0 (sample1.sampletests.test_one.TestA)
-        test_z0 (sample1.sampletests.test_one.TestA)
-        test_x0 (sample1.sampletests.test_one.TestB)
-        test_y1 (sample1.sampletests.test_one.TestB)
-        test_z0 (sample1.sampletests.test_one.TestB)
-        test_1 (sample1.sampletests.test_one.TestNotMuch)
-        test_2 (sample1.sampletests.test_one.TestNotMuch)
-        test_3 (sample1.sampletests.test_one.TestNotMuch)
-        test_x0 (sample1.sampletests.test_one)
-        test_y0 (sample1.sampletests.test_one)
-        test_z1 (sample1.sampletests.test_one)
-        testrunner-ex/sample1/sampletests/../../sampletests.txt
-      Ran 48 tests with 0 failures and 0 errors in 0.017 seconds.
-    False
-
-    >>> sys.argv = 'test -u  -vv -ssample1 . txt'.split()
-    >>> testrunner.run(defaults)
-    Running tests at level 1
-    Running unit tests:
-      Running:
-        testrunner-ex/sample1/../sampletests.txt
-        testrunner-ex/sample1/sample11/../../sampletests.txt
-        testrunner-ex/sample1/sample13/../../sampletests.txt
-        testrunner-ex/sample1/sampletests/../../sampletests.txt
-        testrunner-ex/sample1/sampletests/../../sampletests.txt
-      Ran 20 tests with 0 failures and 0 errors in 0.004 seconds.
-    False
-
-Sometimes, there are tests that you don't want to run by default.
-For example, you might have tests that take a long time.  Tests can
-have a level attribute.  If no level is specified, a level of 1 is
-assumed and, by default, only tests at level 1 are run.  To run
-tests at a higher level, use the --at-level (-a) option to specify a
-higher level.  For example, with the following options:
-
-
-    >>> sys.argv = 'test -u  -vv -t test_y1 -t test_y0'.split()
-    >>> testrunner.run(defaults)
-    Running tests at level 1
-    Running unit tests:
-      Running:
-        test_y0 (sampletestsf.TestA)
-        test_y1 (sampletestsf.TestB)
-        test_y0 (sampletestsf)
-        test_y0 (sample1.sampletestsf.TestA)
-        test_y1 (sample1.sampletestsf.TestB)
-        test_y0 (sample1.sampletestsf)
-        test_y0 (sample1.sample11.sampletests.TestA)
-        test_y1 (sample1.sample11.sampletests.TestB)
-        test_y0 (sample1.sample11.sampletests)
-        test_y0 (sample1.sample13.sampletests.TestA)
-        test_y1 (sample1.sample13.sampletests.TestB)
-        test_y0 (sample1.sample13.sampletests)
-        test_y0 (sample1.sampletests.test1.TestA)
-        test_y1 (sample1.sampletests.test1.TestB)
-        test_y0 (sample1.sampletests.test1)
-        test_y0 (sample1.sampletests.test_one.TestA)
-        test_y1 (sample1.sampletests.test_one.TestB)
-        test_y0 (sample1.sampletests.test_one)
-        test_y0 (sample2.sample21.sampletests.TestA)
-        test_y1 (sample2.sample21.sampletests.TestB)
-        test_y0 (sample2.sample21.sampletests)
-        test_y0 (sample2.sampletests.test_1.TestA)
-        test_y1 (sample2.sampletests.test_1.TestB)
-        test_y0 (sample2.sampletests.test_1)
-        test_y0 (sample2.sampletests.testone.TestA)
-        test_y1 (sample2.sampletests.testone.TestB)
-        test_y0 (sample2.sampletests.testone)
-        test_y0 (sample3.sampletests.TestA)
-        test_y1 (sample3.sampletests.TestB)
-        test_y0 (sample3.sampletests)
-        test_y0 (sampletests.test1.TestA)
-        test_y1 (sampletests.test1.TestB)
-        test_y0 (sampletests.test1)
-        test_y0 (sampletests.test_one.TestA)
-        test_y1 (sampletests.test_one.TestB)
-        test_y0 (sampletests.test_one)
-      Ran 36 tests with 0 failures and 0 errors in 0.009 seconds.
-    False
-
-
-That ran 36 tests.  If we specify a level of 2, we get some
-additional tests:
-
-    >>> sys.argv = 'test -u  -vv -a 2 -t test_y1 -t test_y0'.split()
-    >>> testrunner.run(defaults)
-    Running tests at level 2
-    Running unit tests:
-      Running:
-        test_y0 (sampletestsf.TestA)
-        test_y0 (sampletestsf.TestA2)
-        test_y1 (sampletestsf.TestB)
-        test_y0 (sampletestsf)
-        test_y0 (sample1.sampletestsf.TestA)
-        test_y1 (sample1.sampletestsf.TestB)
-        test_y0 (sample1.sampletestsf)
-        test_y0 (sample1.sample11.sampletests.TestA)
-        test_y1 (sample1.sample11.sampletests.TestB)
-        test_y1 (sample1.sample11.sampletests.TestB2)
-        test_y0 (sample1.sample11.sampletests)
-        test_y0 (sample1.sample13.sampletests.TestA)
-        test_y1 (sample1.sample13.sampletests.TestB)
-        test_y0 (sample1.sample13.sampletests)
-        test_y0 (sample1.sampletests.test1.TestA)
-        test_y1 (sample1.sampletests.test1.TestB)
-        test_y0 (sample1.sampletests.test1)
-        test_y0 (sample1.sampletests.test_one.TestA)
-        test_y1 (sample1.sampletests.test_one.TestB)
-        test_y0 (sample1.sampletests.test_one)
-        test_y0 (sample2.sample21.sampletests.TestA)
-        test_y1 (sample2.sample21.sampletests.TestB)
-        test_y0 (sample2.sample21.sampletests)
-        test_y0 (sample2.sampletests.test_1.TestA)
-        test_y1 (sample2.sampletests.test_1.TestB)
-        test_y0 (sample2.sampletests.test_1)
-        test_y0 (sample2.sampletests.testone.TestA)
-        test_y1 (sample2.sampletests.testone.TestB)
-        test_y0 (sample2.sampletests.testone)
-        test_y0 (sample3.sampletests.TestA)
-        test_y1 (sample3.sampletests.TestB)
-        test_y0 (sample3.sampletests)
-        test_y0 (sampletests.test1.TestA)
-        test_y1 (sampletests.test1.TestB)
-        test_y0 (sampletests.test1)
-        test_y0 (sampletests.test_one.TestA)
-        test_y1 (sampletests.test_one.TestB)
-        test_y0 (sampletests.test_one)
-      Ran 38 tests with 0 failures and 0 errors in 0.009 seconds.
-    False
-
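-A test advertises its level by setting a ``level`` attribute on the test
-case class.  A minimal sketch (the class and test names are hypothetical,
-and this block is not executed as part of this document):
-
-    import unittest
-
-    class TestExpensive(unittest.TestCase):
-
-        # Only collected when --at-level 2 (or higher) or --all is used.
-        level = 2
-
-        def test_big_computation(self):
-            self.assertEqual(2 + 2, 4)
-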
-We can use the --all option to run tests at all levels:
-
-    >>> sys.argv = 'test -u  -vv --all -t test_y1 -t test_y0'.split()
-    >>> testrunner.run(defaults)
-    Running tests at all levels
-    Running unit tests:
-      Running:
-        test_y0 (sampletestsf.TestA)
-        test_y0 (sampletestsf.TestA2)
-        test_y1 (sampletestsf.TestB)
-        test_y0 (sampletestsf)
-        test_y0 (sample1.sampletestsf.TestA)
-        test_y1 (sample1.sampletestsf.TestB)
-        test_y0 (sample1.sampletestsf)
-        test_y0 (sample1.sample11.sampletests.TestA)
-        test_y0 (sample1.sample11.sampletests.TestA3)
-        test_y1 (sample1.sample11.sampletests.TestB)
-        test_y1 (sample1.sample11.sampletests.TestB2)
-        test_y0 (sample1.sample11.sampletests)
-        test_y0 (sample1.sample13.sampletests.TestA)
-        test_y1 (sample1.sample13.sampletests.TestB)
-        test_y0 (sample1.sample13.sampletests)
-        test_y0 (sample1.sampletests.test1.TestA)
-        test_y1 (sample1.sampletests.test1.TestB)
-        test_y0 (sample1.sampletests.test1)
-        test_y0 (sample1.sampletests.test_one.TestA)
-        test_y1 (sample1.sampletests.test_one.TestB)
-        test_y0 (sample1.sampletests.test_one)
-        test_y0 (sample2.sample21.sampletests.TestA)
-        test_y1 (sample2.sample21.sampletests.TestB)
-        test_y0 (sample2.sample21.sampletests)
-        test_y0 (sample2.sampletests.test_1.TestA)
-        test_y1 (sample2.sampletests.test_1.TestB)
-        test_y0 (sample2.sampletests.test_1)
-        test_y0 (sample2.sampletests.testone.TestA)
-        test_y1 (sample2.sampletests.testone.TestB)
-        test_y0 (sample2.sampletests.testone)
-        test_y0 (sample3.sampletests.TestA)
-        test_y1 (sample3.sampletests.TestB)
-        test_y0 (sample3.sampletests)
-        test_y0 (sampletests.test1.TestA)
-        test_y1 (sampletests.test1.TestB)
-        test_y0 (sampletests.test1)
-        test_y0 (sampletests.test_one.TestA)
-        test_y1 (sampletests.test_one.TestB)
-        test_y0 (sampletests.test_one)
-      Ran 39 tests with 0 failures and 0 errors in 0.009 seconds.
-    False
-
-
-Listing Selected Tests
-----------------------
-
-When you're trying to figure out why the test you want is not matched by the
-pattern you specified, it is convenient to see which tests match your
-specifications.
-
-    >>> sys.argv = 'test --all -m sample1 -t test_y0 --list-tests'.split()
-    >>> testrunner.run(defaults)
-    Listing unit tests:
-      test_y0 (sample1.sampletestsf.TestA)
-      test_y0 (sample1.sampletestsf)
-      test_y0 (sample1.sample11.sampletests.TestA)
-      test_y0 (sample1.sample11.sampletests.TestA3)
-      test_y0 (sample1.sample11.sampletests)
-      test_y0 (sample1.sample13.sampletests.TestA)
-      test_y0 (sample1.sample13.sampletests)
-      test_y0 (sample1.sampletests.test1.TestA)
-      test_y0 (sample1.sampletests.test1)
-      test_y0 (sample1.sampletests.test_one.TestA)
-      test_y0 (sample1.sampletests.test_one)
-    Listing samplelayers.Layer11 tests:
-      test_y0 (sample1.sampletests.test11.TestA)
-      test_y0 (sample1.sampletests.test11)
-    Listing samplelayers.Layer111 tests:
-      test_y0 (sample1.sampletests.test111.TestA)
-      test_y0 (sample1.sampletests.test111)
-    Listing samplelayers.Layer112 tests:
-      test_y0 (sample1.sampletests.test112.TestA)
-      test_y0 (sample1.sampletests.test112)
-    Listing samplelayers.Layer12 tests:
-      test_y0 (sample1.sampletests.test12.TestA)
-      test_y0 (sample1.sampletests.test12)
-    Listing samplelayers.Layer121 tests:
-      test_y0 (sample1.sampletests.test121.TestA)
-      test_y0 (sample1.sampletests.test121)
-    Listing samplelayers.Layer122 tests:
-      test_y0 (sample1.sampletests.test122.TestA)
-      test_y0 (sample1.sampletests.test122)
-    False
-

Deleted: zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-verbose.txt
===================================================================
--- zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-verbose.txt	2008-05-03 13:24:29 UTC (rev 86206)
+++ zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-verbose.txt	2008-05-03 13:25:02 UTC (rev 86207)
@@ -1,148 +0,0 @@
-Verbose Output
-==============
-
-Normally, we just get a summary.  We can repeat the -v option to get
-increasingly detailed information.
-
-If we use a single --verbose (-v) option, we get a dot printed for each
-test:
-
-    >>> import os.path, sys
-    >>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
-    >>> defaults = [
-    ...     '--path', directory_with_tests,
-    ...     '--tests-pattern', '^sampletestsf?$',
-    ...     ]
-    >>> sys.argv = 'test --layer 122 -v'.split()
-    >>> from zope.testing import testrunner
-    >>> testrunner.run(defaults)
-    Running tests at level 1
-    Running samplelayers.Layer122 tests:
-      Set up samplelayers.Layer1 in 0.000 seconds.
-      Set up samplelayers.Layer12 in 0.000 seconds.
-      Set up samplelayers.Layer122 in 0.000 seconds.
-      Running:
-        ..................................
-      Ran 34 tests with 0 failures and 0 errors in 0.007 seconds.
-    Tearing down left over layers:
-      Tear down samplelayers.Layer122 in 0.000 seconds.
-      Tear down samplelayers.Layer12 in 0.000 seconds.
-      Tear down samplelayers.Layer1 in 0.000 seconds.
-    False
-
-If there are more than 50 tests, the dots are printed in groups of
-50:
-
-    >>> sys.argv = 'test -uv'.split()
-    >>> testrunner.run(defaults)
-    Running tests at level 1
-    Running unit tests:
-      Running:
-    ................................................................................................................................................................................................
-      Ran 192 tests with 0 failures and 0 errors in 0.035 seconds.
-    False
-
-If the --verbose (-v) option is used twice, then the name and location of
-each test is printed as it is run:
-
-    >>> sys.argv = 'test --layer 122 -vv'.split()
-    >>> testrunner.run(defaults)
-    Running tests at level 1
-    Running samplelayers.Layer122 tests:
-      Set up samplelayers.Layer1 in 0.000 seconds.
-      Set up samplelayers.Layer12 in 0.000 seconds.
-      Set up samplelayers.Layer122 in 0.000 seconds.
-      Running:
-        test_x1 (sample1.sampletests.test122.TestA)
-        test_y0 (sample1.sampletests.test122.TestA)
-        test_z0 (sample1.sampletests.test122.TestA)
-        test_x0 (sample1.sampletests.test122.TestB)
-        test_y1 (sample1.sampletests.test122.TestB)
-        test_z0 (sample1.sampletests.test122.TestB)
-        test_1 (sample1.sampletests.test122.TestNotMuch)
-        test_2 (sample1.sampletests.test122.TestNotMuch)
-        test_3 (sample1.sampletests.test122.TestNotMuch)
-        test_x0 (sample1.sampletests.test122)
-        test_y0 (sample1.sampletests.test122)
-        test_z1 (sample1.sampletests.test122)
-        testrunner-ex/sample1/sampletests/../../sampletestsl.txt
-        test_x1 (sampletests.test122.TestA)
-        test_y0 (sampletests.test122.TestA)
-        test_z0 (sampletests.test122.TestA)
-        test_x0 (sampletests.test122.TestB)
-        test_y1 (sampletests.test122.TestB)
-        test_z0 (sampletests.test122.TestB)
-        test_1 (sampletests.test122.TestNotMuch)
-        test_2 (sampletests.test122.TestNotMuch)
-        test_3 (sampletests.test122.TestNotMuch)
-        test_x0 (sampletests.test122)
-        test_y0 (sampletests.test122)
-        test_z1 (sampletests.test122)
-        testrunner-ex/sampletests/../sampletestsl.txt
-      Ran 34 tests with 0 failures and 0 errors in 0.009 seconds.
-    Tearing down left over layers:
-      Tear down samplelayers.Layer122 in 0.000 seconds.
-      Tear down samplelayers.Layer12 in 0.000 seconds.
-      Tear down samplelayers.Layer1 in 0.000 seconds.
-    False
-
-If the --verbose (-v) option is used three times, then individual
-test-execution times are printed:
-
-    >>> sys.argv = 'test --layer 122 -vvv'.split()
-    >>> testrunner.run(defaults)
-    Running tests at level 1
-    Running samplelayers.Layer122 tests:
-      Set up samplelayers.Layer1 in 0.000 seconds.
-      Set up samplelayers.Layer12 in 0.000 seconds.
-      Set up samplelayers.Layer122 in 0.000 seconds.
-      Running:
-        test_x1 (sample1.sampletests.test122.TestA) (0.000 s)
-        test_y0 (sample1.sampletests.test122.TestA) (0.000 s)
-        test_z0 (sample1.sampletests.test122.TestA) (0.000 s)
-        test_x0 (sample1.sampletests.test122.TestB) (0.000 s)
-        test_y1 (sample1.sampletests.test122.TestB) (0.000 s)
-        test_z0 (sample1.sampletests.test122.TestB) (0.000 s)
-        test_1 (sample1.sampletests.test122.TestNotMuch) (0.000 s)
-        test_2 (sample1.sampletests.test122.TestNotMuch) (0.000 s)
-        test_3 (sample1.sampletests.test122.TestNotMuch) (0.000 s)
-        test_x0 (sample1.sampletests.test122) (0.001 s)
-        test_y0 (sample1.sampletests.test122) (0.001 s)
-        test_z1 (sample1.sampletests.test122) (0.001 s)
-        testrunner-ex/sample1/sampletests/../../sampletestsl.txt (0.001 s)
-        test_x1 (sampletests.test122.TestA) (0.000 s)
-        test_y0 (sampletests.test122.TestA) (0.000 s)
-        test_z0 (sampletests.test122.TestA) (0.000 s)
-        test_x0 (sampletests.test122.TestB) (0.000 s)
-        test_y1 (sampletests.test122.TestB) (0.000 s)
-        test_z0 (sampletests.test122.TestB) (0.000 s)
-        test_1 (sampletests.test122.TestNotMuch) (0.000 s)
-        test_2 (sampletests.test122.TestNotMuch) (0.000 s)
-        test_3 (sampletests.test122.TestNotMuch) (0.000 s)
-        test_x0 (sampletests.test122) (0.001 s)
-        test_y0 (sampletests.test122) (0.001 s)
-        test_z1 (sampletests.test122) (0.001 s)
-        testrunner-ex/sampletests/../sampletestsl.txt (0.001 s)
-      Ran 34 tests with 0 failures and 0 errors in 0.009 seconds.
-    Tearing down left over layers:
-      Tear down samplelayers.Layer122 in 0.000 seconds.
-      Tear down samplelayers.Layer12 in 0.000 seconds.
-      Tear down samplelayers.Layer1 in 0.000 seconds.
-    False
-
-Quiet output
-------------
-
-The --quiet (-q) option cancels all verbose options.  It's useful when
-the default verbosity is non-zero:
-
-    >>> defaults = [
-    ...     '--path', directory_with_tests,
-    ...     '--tests-pattern', '^sampletestsf?$',
-    ...     '-v'
-    ...     ]
-    >>> sys.argv = 'test -q -u'.split()
-    >>> testrunner.run(defaults)
-    Running unit tests:
-      Ran 192 tests with 0 failures and 0 errors in 0.034 seconds.
-    False

Deleted: zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-wo-source.txt
===================================================================
--- zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-wo-source.txt	2008-05-03 13:24:29 UTC (rev 86206)
+++ zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner-wo-source.txt	2008-05-03 13:25:02 UTC (rev 86207)
@@ -1,89 +0,0 @@
-Running Without Source Code
-===========================
-
-The ``--usecompiled`` option allows running tests in a tree without .py
-source code, provided compiled .pyc or .pyo files exist (without
-``--usecompiled``, .py files are necessary).
-
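-Compiled files can be produced ahead of time with the standard library's
-``compileall`` module.  An illustrative sketch (the path is hypothetical,
-and this block is not executed as part of this document):
-
-    import compileall
-
-    compileall.compile_dir('path/to/tree')
-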
-We have a very simple directory tree, under ``usecompiled/``, to test
-this.  Because we're going to delete its .py files, we want to work
-in a copy of that:
-
-    >>> import os.path, shutil, sys, tempfile
-    >>> directory_with_tests = tempfile.mkdtemp()
-
-    >>> NEWNAME = "unlikely_package_name"
-    >>> src = os.path.join(this_directory, 'testrunner-ex', 'usecompiled')
-    >>> os.path.isdir(src)
-    True
-    >>> dst = os.path.join(directory_with_tests, NEWNAME)
-    >>> os.path.isdir(dst)
-    False
-
-We have to use our own copying code to avoid copying read-only SVN files
-that can't be deleted later:
-
-    >>> n = len(src) + 1
-    >>> for root, dirs, files in os.walk(src):
-    ...     dirs[:] = [d for d in dirs if d == "package"] # prune cruft
-    ...     os.mkdir(os.path.join(dst, root[n:]))
-    ...     for f in files:
-    ...         shutil.copy(os.path.join(root, f),
-    ...                     os.path.join(dst, root[n:], f))
-
-Now run the tests in the copy:
-
-    >>> from zope.testing import testrunner
-
-    >>> mydefaults = [
-    ...     '--path', directory_with_tests,
-    ...     '--tests-pattern', '^compiletest$',
-    ...     '--package', NEWNAME,
-    ...     '-vv',
-    ...     ]
-    >>> sys.argv = ['test']
-    >>> testrunner.run(mydefaults)
-    Running tests at level 1
-    Running unit tests:
-      Running:
-        test1 (unlikely_package_name.compiletest.Test)
-        test2 (unlikely_package_name.compiletest.Test)
-        test1 (unlikely_package_name.package.compiletest.Test)
-        test2 (unlikely_package_name.package.compiletest.Test)
-      Ran 4 tests with 0 failures and 0 errors in N.NNN seconds.
-    False
-
-If we delete the source files, it's normally a disaster:  the test runner
-doesn't believe any test files, or even packages, exist.  Note that we pass
-``--keepbytecode`` this time, because otherwise the test runner would
-delete the compiled Python files too:
-
-    >>> for root, dirs, files in os.walk(dst):
-    ...    for f in files:
-    ...        if f.endswith(".py"):
-    ...            os.remove(os.path.join(root, f))
-    >>> testrunner.run(mydefaults, ["test", "--keepbytecode"])
-    Running tests at level 1
-    Total: 0 tests, 0 failures, 0 errors in N.NNN seconds.
-    False
-
-Finally, passing ``--usecompiled`` asks the test runner to treat .pyc
-and .pyo files as adequate replacements for .py files.  Note that the
-output is the same as when running with .py source above.  The absence
-of "removing stale bytecode ..." messages shows that ``--usecompiled``
-also implies ``--keepbytecode``:
-
-    >>> testrunner.run(mydefaults, ["test", "--usecompiled"])
-    Running tests at level 1
-    Running unit tests:
-      Running:
-        test1 (unlikely_package_name.compiletest.Test)
-        test2 (unlikely_package_name.compiletest.Test)
-        test1 (unlikely_package_name.package.compiletest.Test)
-        test2 (unlikely_package_name.package.compiletest.Test)
-      Ran 4 tests with 0 failures and 0 errors in N.NNN seconds.
-    False
-
-Remove the temporary directory:
-
-    >>> shutil.rmtree(directory_with_tests)

Deleted: zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner.py
===================================================================
--- zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner.py	2008-05-03 13:24:29 UTC (rev 86206)
+++ zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner.py	2008-05-03 13:25:02 UTC (rev 86207)
@@ -1,2816 +0,0 @@
-##############################################################################
-#
-# Copyright (c) 2004-2006 Zope Corporation and Contributors.
-# All Rights Reserved.
-#
-# This software is subject to the provisions of the Zope Public License,
-# Version 2.1 (ZPL).  A copy of the ZPL should accompany this distribution.
-# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
-# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
-# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
-# FOR A PARTICULAR PURPOSE.
-#
-##############################################################################
-"""Test runner
-
-$Id$
-"""
-
-# Too bad: For now, we depend on zope.testing.  This is because
-# we want to use the latest, greatest doctest, which zope.testing
-# provides.  Then again, zope.testing is generally useful.
-
-import gc
-import glob
-import logging
-import optparse
-import os
-import errno
-import pdb
-import re
-import cStringIO
-import sys
-import tempfile
-import threading
-import time
-import trace
-import traceback
-import types
-import unittest
-
-
-available_profilers = {}
-
-try:
-    import cProfile
-    import pstats
-except ImportError:
-    pass
-else:
-    class CProfiler(object):
-        """cProfiler"""
-        def __init__(self, filepath):
-            self.filepath = filepath
-            self.profiler = cProfile.Profile()
-            self.enable = self.profiler.enable
-            self.disable = self.profiler.disable
-
-        def finish(self):
-            self.profiler.dump_stats(self.filepath)
-
-        def loadStats(self, prof_glob):
-            stats = None
-            for file_name in glob.glob(prof_glob):
-                if stats is None:
-                    stats = pstats.Stats(file_name)
-                else:
-                    stats.add(file_name)
-            return stats
-
-    available_profilers['cProfile'] = CProfiler
-
-# some Linux distributions don't include the profiler, which hotshot uses
-try:
-    import hotshot
-    import hotshot.stats
-except ImportError:
-    pass
-else:
-    class HotshotProfiler(object):
-        """hotshot interface"""
-
-        def __init__(self, filepath):
-            self.profiler = hotshot.Profile(filepath)
-            self.enable = self.profiler.start
-            self.disable = self.profiler.stop
-
-        def finish(self):
-            self.profiler.close()
-
-        def loadStats(self, prof_glob):
-            stats = None
-            for file_name in glob.glob(prof_glob):
-                loaded = hotshot.stats.load(file_name)
-                if stats is None:
-                    stats = loaded
-                else:
-                    stats.add(loaded)
-            return stats
-
-    available_profilers['hotshot'] = HotshotProfiler
-
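A minimal sketch of how these registrations are consumed (``run()`` below does
``available_profilers[options.profile](file_path)``), assuming the moved
package still exposes ``available_profilers``, that cProfile is importable,
and with a made-up stats file name:

    from zope.testing.testrunner import available_profilers

    profiler = available_profilers['cProfile']('example.prof')
    profiler.enable()
    sum(range(10000))            # the code being profiled would go here
    profiler.disable()
    profiler.finish()            # dumps the stats to example.prof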
-
-real_pdb_set_trace = pdb.set_trace
-
-# For some reason, the doctest module resets the trace function at random,
-# which disables coverage collection. Simply disallow code from clearing the
-# trace function; a real trace function can still be set, so debugging works.
-osettrace = sys.settrace
-def settrace(trace):
-    if trace is None:
-        return
-    osettrace(trace)
-
-class TestIgnore:
-
-    def __init__(self, options):
-        self._test_dirs = [self._filenameFormat(d[0]) + os.path.sep
-                           for d in test_dirs(options, {})]
-        self._ignore = {}
-        self._ignored = self._ignore.get
-
-    def names(self, filename, modulename):
-        # Special case: Modules generated from text files; i.e. doctests
-        if modulename == '<string>':
-            return True
-        filename = self._filenameFormat(filename)
-        ignore = self._ignored(filename)
-        if ignore is None:
-            ignore = True
-            if filename is not None:
-                for d in self._test_dirs:
-                    if filename.startswith(d):
-                        ignore = False
-                        break
-            self._ignore[filename] = ignore
-        return ignore
-
-    def _filenameFormat(self, filename):
-        return os.path.abspath(filename)
-
-if sys.platform == 'win32':
-    # On win32 the drive letter can be passed to `names` with differing case,
-    # which can make the coverage profiler skip entire files.
-    # _filenameFormat makes sure that drive letters and file names get
-    # lowercased, although trace coverage still has problems with lowercase
-    # drive letters when determining the dotted module name.
-    OldTestIgnore = TestIgnore
-
-    class TestIgnore(OldTestIgnore):
-        def _filenameFormat(self, filename):
-            return os.path.normcase(os.path.abspath(filename))
-
-class TestTrace(trace.Trace):
-    """Simple tracer.
-
-    >>> tracer = TestTrace(None, count=False, trace=False)
-
-    Simple rules for use: you can't stop the tracer if it is not started,
-    and you can't start the tracer if it is already started:
-
-    >>> tracer.stop()
-    Traceback (most recent call last):
-        File 'testrunner.py'
-    AssertionError: can't stop if not started
-
-    >>> tracer.start()
-    >>> tracer.start()
-    Traceback (most recent call last):
-        File 'testrunner.py'
-    AssertionError: can't start if already started
-
-    >>> tracer.stop()
-    >>> tracer.stop()
-    Traceback (most recent call last):
-        File 'testrunner.py'
-    AssertionError: can't stop if not started
-    """
-
-    def __init__(self, options, **kw):
-        trace.Trace.__init__(self, **kw)
-        if options is not None:
-            self.ignore = TestIgnore(options)
-        self.started = False
-
-    def start(self):
-        assert not self.started, "can't start if already started"
-        if not self.donothing:
-            sys.settrace = settrace
-            sys.settrace(self.globaltrace)
-            threading.settrace(self.globaltrace)
-        self.started = True
-
-    def stop(self):
-        assert self.started, "can't stop if not started"
-        if not self.donothing:
-            sys.settrace = osettrace
-            sys.settrace(None)
-            threading.settrace(None)
-        self.started = False
-
-class EndRun(Exception):
-    """Indicate that the current run() call should stop
-
-    Used to prevent additional test output after post-mortem debugging.
-    """
-
-def strip_py_ext(options, path):
-    """Return path without its .py (or .pyc or .pyo) extension, or None.
-
-    If options.usecompiled is false:
-        If path ends with ".py", the path without the extension is returned.
-        Else None is returned.
-
-    If options.usecompiled is true:
-        If Python is running with -O, a .pyo extension is also accepted.
-        If Python is running without -O, a .pyc extension is also accepted.
-    """
-    if path.endswith(".py"):
-        return path[:-3]
-    if options.usecompiled:
-        if __debug__:
-            # Python is running without -O.
-            ext = ".pyc"
-        else:
-            # Python is running with -O.
-            ext = ".pyo"
-        if path.endswith(ext):
-            return path[:-len(ext)]
-    return None
-
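A minimal sketch of the contract described above, assuming the function is
importable from the moved package; ``FakeOptions`` and the file names are made
up, and a normal (non ``-O``) interpreter is assumed, so ``.pyc`` is the
accepted compiled extension:

    from zope.testing.testrunner import strip_py_ext

    class FakeOptions(object):
        usecompiled = True

    options = FakeOptions()
    print strip_py_ext(options, 'tests.py')      # 'tests'
    print strip_py_ext(options, 'tests.pyc')     # 'tests' (only with usecompiled)
    print strip_py_ext(options, 'README.txt')    # None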
-def contains_init_py(options, fnamelist):
-    """Return true iff fnamelist contains a suitable spelling of __init__.py.
-
-    If options.usecompiled is false, this is so iff "__init__.py" is in
-    the list.
-
-    If options.usecompiled is true, then "__init__.pyo" is also acceptable
-    if Python is running with -O, and "__init__.pyc" is also acceptable if
-    Python is running without -O.
-    """
-    if "__init__.py" in fnamelist:
-        return True
-    if options.usecompiled:
-        if __debug__:
-            # Python is running without -O.
-            return "__init__.pyc" in fnamelist
-        else:
-            # Python is running with -O.
-            return "__init__.pyo" in fnamelist
-    return False
-
-
-doctest_template = """
-File "%s", line %s, in %s
-
-%s
-Want:
-%s
-Got:
-%s
-"""
-
-
-def tigetnum(attr, default=None):
-    """Return a value from the terminfo database.
-
-    Terminfo is used on Unix-like systems to report various terminal attributes
-    (such as width, height or the number of supported colors).
-
-    Returns ``default`` when the ``curses`` module is not available, or when
-    sys.stdout is not a terminal.
-    """
-    try:
-        import curses
-    except ImportError:
-        # avoid reimporting a broken module in python 2.3
-        sys.modules['curses'] = None
-    else:
-        try:
-            curses.setupterm()
-        except (curses.error, TypeError):
-            # You get curses.error when $TERM is set to an unknown name
-            # You get TypeError when sys.stdout is not a real file object
-            # (e.g. in unit tests that use various wrappers).
-            pass
-        else:
-            return curses.tigetnum(attr)
-    return default
-
-
-class OutputFormatter(object):
-    """Test runner output formatter."""
-
-    # Implementation note: be careful about printing stuff to sys.stderr.
-    # It is used for interprocess communication between the parent and the
-    # child test runner, when you run some test layers in a subprocess.
-    # resume_layer() reassigns sys.stderr for this reason, but be careful
-    # and don't store the original one in __init__ or something.
-
-    max_width = 80
-
-    def __init__(self, options):
-        self.options = options
-        self.last_width = 0
-        self.compute_max_width()
-
-    progress = property(lambda self: self.options.progress)
-    verbose = property(lambda self: self.options.verbose)
-
-    def compute_max_width(self):
-        """Try to determine the terminal width."""
-        # Note that doing this every time is more test-friendly.
-        self.max_width = tigetnum('cols', self.max_width)
-
-    def getShortDescription(self, test, room):
-        """Return a description of a test that fits in ``room`` characters."""
-        room -= 1
-        s = str(test)
-        if len(s) > room:
-            pos = s.find(" (")
-            if pos >= 0:
-                w = room - (pos + 5)
-                if w < 1:
-                    # first portion (test method name) is too long
-                    s = s[:room-3] + "..."
-                else:
-                    pre = s[:pos+2]
-                    post = s[-w:]
-                    s = "%s...%s" % (pre, post)
-            else:
-                w = room - 4
-                s = '... ' + s[-w:]
-
-        return ' ' + s[:room]
-
-    def info(self, message):
-        """Print an informative message."""
-        print message
-
-    def info_suboptimal(self, message):
-        """Print an informative message about losing some of the features.
-
-        For example, when you run some tests in a subprocess, you lose the
-        ability to use the debugger.
-        """
-        print message
-
-    def error(self, message):
-        """Report an error."""
-        print message
-
-    def error_with_banner(self, message):
-        """Report an error with a big ASCII banner."""
-        print
-        print '*'*70
-        self.error(message)
-        print '*'*70
-        print
-
-    def profiler_stats(self, stats):
-        """Report profiler stats."""
-        stats.print_stats(50)
-
-    def import_errors(self, import_errors):
-        """Report test-module import errors (if any)."""
-        if import_errors:
-            print "Test-module import failures:"
-            for error in import_errors:
-                self.print_traceback("Module: %s\n" % error.module,
-                                     error.exc_info),
-            print
-
-    def tests_with_errors(self, errors):
-        """Report names of tests with errors (if any)."""
-        if errors:
-            print
-            print "Tests with errors:"
-            for test, exc_info in errors:
-                print "  ", test
-
-    def tests_with_failures(self, failures):
-        """Report names of tests with failures (if any)."""
-        if failures:
-            print
-            print "Tests with failures:"
-            for test, exc_info in failures:
-                print "  ", test
-
-    def modules_with_import_problems(self, import_errors):
-        """Report names of modules with import problems (if any)."""
-        if import_errors:
-            print
-            print "Test-modules with import problems:"
-            for test in import_errors:
-                print "  " + test.module
-
-    def format_seconds(self, n_seconds):
-        """Format a time in seconds."""
-        if n_seconds >= 60:
-            n_minutes, n_seconds = divmod(n_seconds, 60)
-            return "%d minutes %.3f seconds" % (n_minutes, n_seconds)
-        else:
-            return "%.3f seconds" % n_seconds
-
-    def format_seconds_short(self, n_seconds):
-        """Format a time in seconds (short version)."""
-        return "%.3f s" % n_seconds
-
-    def summary(self, n_tests, n_failures, n_errors, n_seconds):
-        """Summarize the results of a single test layer."""
-        print ("  Ran %s tests with %s failures and %s errors in %s."
-               % (n_tests, n_failures, n_errors,
-                  self.format_seconds(n_seconds)))
-
-    def totals(self, n_tests, n_failures, n_errors, n_seconds):
-        """Summarize the results of all layers."""
-        print ("Total: %s tests, %s failures, %s errors in %s."
-               % (n_tests, n_failures, n_errors,
-                  self.format_seconds(n_seconds)))
-
-    def list_of_tests(self, tests, layer_name):
-        """Report a list of test names."""
-        print "Listing %s tests:" % layer_name
-        for test in tests:
-            print ' ', test
-
-    def garbage(self, garbage):
-        """Report garbage generated by tests."""
-        if garbage:
-            print "Tests generated new (%d) garbage:" % len(garbage)
-            print garbage
-
-    def test_garbage(self, test, garbage):
-        """Report garbage generated by a test."""
-        if garbage:
-            print "The following test left garbage:"
-            print test
-            print garbage
-
-    def test_threads(self, test, new_threads):
-        """Report threads left behind by a test."""
-        if new_threads:
-            print "The following test left new threads behind:"
-            print test
-            print "New thread(s):", new_threads
-
-    def refcounts(self, rc, prev):
-        """Report a change in reference counts."""
-        print "  sys refcount=%-8d change=%-6d" % (rc, rc - prev)
-
-    def detailed_refcounts(self, track, rc, prev):
-        """Report a change in reference counts, with extra detail."""
-        print ("  sum detail refcount=%-8d"
-               " sys refcount=%-8d"
-               " change=%-6d"
-               % (track.n, rc, rc - prev))
-        track.output()
-
-    def start_set_up(self, layer_name):
-        """Report that we're setting up a layer.
-
-        The next output operation should be stop_set_up().
-        """
-        print "  Set up %s" % layer_name,
-        sys.stdout.flush()
-
-    def stop_set_up(self, seconds):
-        """Report that we've set up a layer.
-
-        Should be called right after start_set_up().
-        """
-        print "in %s." % self.format_seconds(seconds)
-
-    def start_tear_down(self, layer_name):
-        """Report that we're tearing down a layer.
-
-        The next output operation should be stop_tear_down() or
-        tear_down_not_supported().
-        """
-        print "  Tear down %s" % layer_name,
-        sys.stdout.flush()
-
-    def stop_tear_down(self, seconds):
-        """Report that we've torn down a layer.
-
-        Should be called right after start_tear_down().
-        """
-        print "in %s." % self.format_seconds(seconds)
-
-    def tear_down_not_supported(self):
-        """Report that we could not tear down a layer.
-
-        Should be called right after start_tear_down().
-        """
-        print "... not supported"
-
-    def start_test(self, test, tests_run, total_tests):
-        """Report that we're about to run a test.
-
-        The next output operation should be test_success(), test_error(), or
-        test_failure().
-        """
-        self.test_width = 0
-        if self.progress:
-            if self.last_width:
-                sys.stdout.write('\r' + (' ' * self.last_width) + '\r')
-
-            s = "    %d/%d (%.1f%%)" % (tests_run, total_tests,
-                                        tests_run * 100.0 / total_tests)
-            sys.stdout.write(s)
-            self.test_width += len(s)
-            if self.verbose == 1:
-                room = self.max_width - self.test_width - 1
-                s = self.getShortDescription(test, room)
-                sys.stdout.write(s)
-                self.test_width += len(s)
-
-        elif self.verbose == 1:
-            sys.stdout.write('.' * test.countTestCases())
-
-        if self.verbose > 1:
-            s = str(test)
-            sys.stdout.write(' ')
-            sys.stdout.write(s)
-            self.test_width += len(s) + 1
-
-        sys.stdout.flush()
-
-    def test_success(self, test, seconds):
-        """Report that a test was successful.
-
-        Should be called right after start_test().
-
-        The next output operation should be stop_test().
-        """
-        if self.verbose > 2:
-            s = " (%s)" % self.format_seconds_short(seconds)
-            sys.stdout.write(s)
-            self.test_width += len(s) + 1
-
-    def test_error(self, test, seconds, exc_info):
-        """Report that an error occurred while running a test.
-
-        Should be called right after start_test().
-
-        The next output operation should be stop_test().
-        """
-        if self.verbose > 2:
-            print " (%s)" % self.format_seconds_short(seconds)
-        print
-        self.print_traceback("Error in test %s" % test, exc_info)
-        self.test_width = self.last_width = 0
-
-    def test_failure(self, test, seconds, exc_info):
-        """Report that a test failed.
-
-        Should be called right after start_test().
-
-        The next output operation should be stop_test().
-        """
-        if self.verbose > 2:
-            print " (%s)" % self.format_seconds_short(seconds)
-        print
-        self.print_traceback("Failure in test %s" % test, exc_info)
-        self.test_width = self.last_width = 0
-
-    def print_traceback(self, msg, exc_info):
-        """Report an error with a traceback."""
-        print
-        print msg
-        print self.format_traceback(exc_info)
-
-    def format_traceback(self, exc_info):
-        """Format the traceback."""
-        v = exc_info[1]
-        if isinstance(v, doctest.DocTestFailureException):
-            tb = v.args[0]
-        elif isinstance(v, doctest.DocTestFailure):
-            tb = doctest_template % (
-                v.test.filename,
-                v.test.lineno + v.example.lineno + 1,
-                v.test.name,
-                v.example.source,
-                v.example.want,
-                v.got,
-                )
-        else:
-            tb = "".join(traceback.format_exception(*exc_info))
-        return tb
-
-    def stop_test(self, test):
-        """Clean up the output state after a test."""
-        if self.progress:
-            self.last_width = self.test_width
-        elif self.verbose > 1:
-            print
-        sys.stdout.flush()
-
-    def stop_tests(self):
-        """Clean up the output state after a collection of tests."""
-        if self.progress and self.last_width:
-            sys.stdout.write('\r' + (' ' * self.last_width) + '\r')
-        if self.verbose == 1 or self.progress:
-            print
-
-
-class ColorfulOutputFormatter(OutputFormatter):
-    """Output formatter that uses ANSI color codes.
-
-    Like syntax highlighting in your text editor, colorizing
-    test failures helps the developer.
-    """
-
-    # These colors are carefully chosen to have enough contrast
-    # on terminals with both black and white background.
-    colorscheme = {'normal': 'normal',
-                   'default': 'default',
-                   'info': 'normal',
-                   'suboptimal-behaviour': 'magenta',
-                   'error': 'brightred',
-                   'number': 'green',
-                   'slow-test': 'brightmagenta',
-                   'ok-number': 'green',
-                   'error-number': 'brightred',
-                   'filename': 'lightblue',
-                   'lineno': 'lightred',
-                   'testname': 'lightcyan',
-                   'failed-example': 'cyan',
-                   'expected-output': 'green',
-                   'actual-output': 'red',
-                   'character-diffs': 'magenta',
-                   'diff-chunk': 'magenta',
-                   'exception': 'red'}
-
-    # Map prefix character to color in diff output.  This handles ndiff and
-    # udiff correctly, but not cdiff.  In cdiff we ought to highlight '!' as
-    # expected-output until we see a '-', then highlight '!' as actual-output,
-    # until we see a '*', then switch back to highlighting '!' as
-    # expected-output.  Nevertheless, colorized cdiffs are reasonably readable,
-    # so I'm not going to fix this.
-    #   -- mgedmin
-    diff_color = {'-': 'expected-output',
-                  '+': 'actual-output',
-                  '?': 'character-diffs',
-                  '@': 'diff-chunk',
-                  '*': 'diff-chunk',
-                  '!': 'actual-output',}
-
-    prefixes = [('dark', '0;'),
-                ('light', '1;'),
-                ('bright', '1;'),
-                ('bold', '1;'),]
-
-    colorcodes = {'default': 0, 'normal': 0,
-                  'black': 30,
-                  'red': 31,
-                  'green': 32,
-                  'brown': 33, 'yellow': 33,
-                  'blue': 34,
-                  'magenta': 35,
-                  'cyan': 36,
-                  'grey': 37, 'gray': 37, 'white': 37}
-
-    slow_test_threshold = 10.0 # seconds
-
-    def color_code(self, color):
-        """Convert a color description (e.g. 'lightgray') to a terminal code."""
-        prefix_code = ''
-        for prefix, code in self.prefixes:
-            if color.startswith(prefix):
-                color = color[len(prefix):]
-                prefix_code = code
-                break
-        color_code = self.colorcodes[color]
-        return '\033[%s%sm' % (prefix_code, color_code)
-
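Worked example of the lookup above: for ``'brightred'`` the ``'bright'`` prefix
contributes ``'1;'`` and ``colorcodes['red']`` is 31, so the escape sequence
comes out as:

    print repr('\033[%s%sm' % ('1;', 31))    # '\x1b[1;31m', i.e. bold red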
-    def color(self, what):
-        """Pick a named color from the color scheme"""
-        return self.color_code(self.colorscheme[what])
-
-    def colorize(self, what, message, normal='normal'):
-        """Wrap message in color."""
-        return self.color(what) + message + self.color(normal)
-
-    def error_count_color(self, n):
-        """Choose a color for the number of errors."""
-        if n:
-            return self.color('error-number')
-        else:
-            return self.color('ok-number')
-
-    def info(self, message):
-        """Print an informative message."""
-        print self.colorize('info', message)
-
-    def info_suboptimal(self, message):
-        """Print an informative message about losing some of the features.
-
-        For example, when you run some tests in a subprocess, you lose the
-        ability to use the debugger.
-        """
-        print self.colorize('suboptimal-behaviour', message)
-
-    def error(self, message):
-        """Report an error."""
-        print self.colorize('error', message)
-
-    def error_with_banner(self, message):
-        """Report an error with a big ASCII banner."""
-        print
-        print self.colorize('error', '*'*70)
-        self.error(message)
-        print self.colorize('error', '*'*70)
-        print
-
-    def tear_down_not_supported(self):
-        """Report that we could not tear down a layer.
-
-        Should be called right after start_tear_down().
-        """
-        print "...", self.colorize('suboptimal-behaviour', "not supported")
-
-    def format_seconds(self, n_seconds, normal='normal'):
-        """Format a time in seconds."""
-        if n_seconds >= 60:
-            n_minutes, n_seconds = divmod(n_seconds, 60)
-            return "%s minutes %s seconds" % (
-                        self.colorize('number', '%d' % n_minutes, normal),
-                        self.colorize('number', '%.3f' % n_seconds, normal))
-        else:
-            return "%s seconds" % (
-                        self.colorize('number', '%.3f' % n_seconds, normal))
-
-    def format_seconds_short(self, n_seconds):
-        """Format a time in seconds (short version)."""
-        if n_seconds >= self.slow_test_threshold:
-            color = 'slow-test'
-        else:
-            color = 'number'
-        return self.colorize(color, "%.3f s" % n_seconds)
-
-    def summary(self, n_tests, n_failures, n_errors, n_seconds):
-        """Summarize the results."""
-        sys.stdout.writelines([
-            self.color('info'), '  Ran ',
-            self.color('number'), str(n_tests),
-            self.color('info'), ' tests with ',
-            self.error_count_color(n_failures), str(n_failures),
-            self.color('info'), ' failures and ',
-            self.error_count_color(n_errors), str(n_errors),
-            self.color('info'), ' errors in ',
-            self.format_seconds(n_seconds, 'info'), '.',
-            self.color('normal'), '\n'])
-
-    def totals(self, n_tests, n_failures, n_errors, n_seconds):
-        """Report totals (number of tests, failures, and errors)."""
-        sys.stdout.writelines([
-            self.color('info'), 'Total: ',
-            self.color('number'), str(n_tests),
-            self.color('info'), ' tests, ',
-            self.error_count_color(n_failures), str(n_failures),
-            self.color('info'), ' failures, ',
-            self.error_count_color(n_errors), str(n_errors),
-            self.color('info'), ' errors in ',
-            self.format_seconds(n_seconds, 'info'), '.',
-            self.color('normal'), '\n'])
-
-    def print_traceback(self, msg, exc_info):
-        """Report an error with a traceback."""
-        print
-        print self.colorize('error', msg)
-        v = exc_info[1]
-        if isinstance(v, doctest.DocTestFailureException):
-            self.print_doctest_failure(v.args[0])
-        elif isinstance(v, doctest.DocTestFailure):
-            # I don't think these are ever used... -- mgedmin
-            tb = self.format_traceback(exc_info)
-            print tb
-        else:
-            tb = self.format_traceback(exc_info)
-            self.print_colorized_traceback(tb)
-
-    def print_doctest_failure(self, formatted_failure):
-        """Report a doctest failure.
-
-        ``formatted_failure`` is a string -- that's what
-        DocTestSuite/DocFileSuite gives us.
-        """
-        color_of_indented_text = 'normal'
-        colorize_diff = False
-        for line in formatted_failure.splitlines():
-            if line.startswith('File '):
-                m = re.match(r'File "(.*)", line (\d*), in (.*)$', line)
-                if m:
-                    filename, lineno, test = m.groups()
-                    sys.stdout.writelines([
-                        self.color('normal'), 'File "',
-                        self.color('filename'), filename,
-                        self.color('normal'), '", line ',
-                        self.color('lineno'), lineno,
-                        self.color('normal'), ', in ',
-                        self.color('testname'), test,
-                        self.color('normal'), '\n'])
-                else:
-                    print line
-            elif line.startswith('    '):
-                if colorize_diff and len(line) > 4:
-                    color = self.diff_color.get(line[4], color_of_indented_text)
-                    print self.colorize(color, line)
-                else:
-                    print self.colorize(color_of_indented_text, line)
-            else:
-                colorize_diff = False
-                if line.startswith('Failed example'):
-                    color_of_indented_text = 'failed-example'
-                elif line.startswith('Expected:'):
-                    color_of_indented_text = 'expected-output'
-                elif line.startswith('Got:'):
-                    color_of_indented_text = 'actual-output'
-                elif line.startswith('Exception raised:'):
-                    color_of_indented_text = 'exception'
-                elif line.startswith('Differences '):
-                    color_of_indented_text = 'normal'
-                    colorize_diff = True
-                else:
-                    color_of_indented_text = 'normal'
-                print line
-        print
-
-    def print_colorized_traceback(self, formatted_traceback):
-        """Report a test failure.
-
-        ``formatted_traceback`` is a string.
-        """
-        for line in formatted_traceback.splitlines():
-            if line.startswith('  File'):
-                m = re.match(r'  File "(.*)", line (\d*), in (.*)$', line)
-                if m:
-                    filename, lineno, test = m.groups()
-                    sys.stdout.writelines([
-                        self.color('normal'), '  File "',
-                        self.color('filename'), filename,
-                        self.color('normal'), '", line ',
-                        self.color('lineno'), lineno,
-                        self.color('normal'), ', in ',
-                        self.color('testname'), test,
-                        self.color('normal'), '\n'])
-                else:
-                    print line
-            elif line.startswith('    '):
-                print self.colorize('failed-example', line)
-            elif line.startswith('Traceback (most recent call last)'):
-                print line
-            else:
-                print self.colorize('exception', line)
-        print
-
-
-def run(defaults=None, args=None):
-    if args is None:
-        args = sys.argv[:]
-
-    # Set the default logging policy.
-    # XXX There are no tests for this logging behavior.
-    # It's not at all clear that the test runner should be doing this.
-    configure_logging()
-
-    # Control reporting flags during run
-    old_reporting_flags = doctest.set_unittest_reportflags(0)
-
-    # Check to see if we are being run as a subprocess. If we are,
-    # then use the resume-layer and defaults passed in.
-    if len(args) > 1 and args[1] == '--resume-layer':
-        args.pop(1)
-        resume_layer = args.pop(1)
-        resume_number = int(args.pop(1))
-        defaults = []
-        while len(args) > 1 and args[1] == '--default':
-            args.pop(1)
-            defaults.append(args.pop(1))
-
-        sys.stdin = FakeInputContinueGenerator()
-    else:
-        resume_layer = resume_number = None
-
-    options = get_options(args, defaults)
-    if options.fail:
-        return True
-
-    output = options.output
-
-    options.testrunner_defaults = defaults
-    options.resume_layer = resume_layer
-    options.resume_number = resume_number
-
-    # Make sure we start with real pdb.set_trace.  This is needed
-    # to make tests of the test runner work properly. :)
-    pdb.set_trace = real_pdb_set_trace
-
-    if (options.profile
-        and sys.version_info[:3] <= (2,4,1)
-        and __debug__):
-        output.error('Because of a bug in Python < 2.4.1, profiling '
-                     'during tests requires the -O option be passed to '
-                     'Python (not the test runner).')
-        sys.exit()
-
-    if options.coverage:
-        tracer = TestTrace(options, trace=False, count=True)
-        tracer.start()
-    else:
-        tracer = None
-
-    if options.profile:
-        prof_prefix = 'tests_profile.'
-        prof_suffix = '.prof'
-        prof_glob = prof_prefix + '*' + prof_suffix
-
-        # if we are going to be profiling, and this isn't a subprocess,
-        # clean up any stale results files
-        if not options.resume_layer:
-            for file_name in glob.glob(prof_glob):
-                os.unlink(file_name)
-
-        # set up the output file
-        oshandle, file_path = tempfile.mkstemp(prof_suffix, prof_prefix, '.')
-        profiler = available_profilers[options.profile](file_path)
-        profiler.enable()
-
-    try:
-        try:
-            failed = not run_with_options(options)
-        except EndRun:
-            failed = True
-    finally:
-        if tracer:
-            tracer.stop()
-        if options.profile:
-            profiler.disable()
-            profiler.finish()
-            # We must explicitly close the handle mkstemp returned, else on
-            # Windows this dies the next time around just above due to an
-            # attempt to unlink a still-open file.
-            os.close(oshandle)
-
-    if options.profile and not options.resume_layer:
-        stats = profiler.loadStats(prof_glob)
-        stats.sort_stats('cumulative', 'calls')
-        output.profiler_stats(stats)
-
-    if tracer:
-        coverdir = os.path.join(os.getcwd(), options.coverage)
-        r = tracer.results()
-        r.write_results(summary=True, coverdir=coverdir)
-
-    doctest.set_unittest_reportflags(old_reporting_flags)
-
-    if failed and options.exitwithstatus:
-        sys.exit(1)
-
-    return failed
-
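For orientation, a minimal invocation of ``run()`` in the style of the
doctests in this package; the defaults list and command line below are
illustrative only:

    from zope.testing import testrunner

    defaults = ['--path', 'src', '--tests-pattern', '^tests$']
    testrunner.run(defaults, ['test', '-vv'])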
-def run_with_options(options, found_suites=None):
-    """Find and run tests
-
-    Passing a list of suites using the found_suites parameter will cause
-    that list of suites to be used instead of attempting to load them from
-    the filesystem. This is useful for unit testing the test runner.
-
-    Returns True if all tests passed, or False if there were any failures
-    of any kind.
-    """
-
-    global _layer_name_cache
-    _layer_name_cache = {} # Reset to enforce test isolation
-
-    output = options.output
-
-    if options.resume_layer:
-        original_stderr = sys.stderr
-        sys.stderr = sys.stdout
-    elif options.verbose:
-        if options.all:
-            msg = "Running tests at all levels"
-        else:
-            msg = "Running tests at level %d" % options.at_level
-        output.info(msg)
-
-
-    old_threshold = gc.get_threshold()
-    if options.gc:
-        if len(options.gc) > 3:
-            output.error("Too many --gc options")
-            sys.exit(1)
-        if options.gc[0]:
-            output.info("Cyclic garbage collection threshold set to: %s" %
-                        repr(tuple(options.gc)))
-        else:
-            output.info("Cyclic garbage collection is disabled.")
-
-        gc.set_threshold(*options.gc)
-
-    old_flags = gc.get_debug()
-    if options.gc_option:
-        new_flags = 0
-        for op in options.gc_option:
-            new_flags |= getattr(gc, op)
-        gc.set_debug(new_flags)
-
-    old_reporting_flags = doctest.set_unittest_reportflags(0)
-    reporting_flags = 0
-    if options.ndiff:
-        reporting_flags = doctest.REPORT_NDIFF
-    if options.udiff:
-        if reporting_flags:
-            output.error("Can only give one of --ndiff, --udiff, or --cdiff")
-            sys.exit(1)
-        reporting_flags = doctest.REPORT_UDIFF
-    if options.cdiff:
-        if reporting_flags:
-            output.error("Can only give one of --ndiff, --udiff, or --cdiff")
-            sys.exit(1)
-        reporting_flags = doctest.REPORT_CDIFF
-    if options.report_only_first_failure:
-        reporting_flags |= doctest.REPORT_ONLY_FIRST_FAILURE
-
-    if reporting_flags:
-        doctest.set_unittest_reportflags(reporting_flags)
-    else:
-        doctest.set_unittest_reportflags(old_reporting_flags)
-
-
-    # Add directories to the path
-    for path in options.path:
-        if path not in sys.path:
-            sys.path.append(path)
-
-    remove_stale_bytecode(options)
-
-    tests_by_layer_name = find_tests(options, found_suites)
-
-    ran = 0
-    failures = []
-    errors = []
-    nlayers = 0
-    import_errors = tests_by_layer_name.pop(None, None)
-
-    output.import_errors(import_errors)
-
-    if 'unit' in tests_by_layer_name:
-        tests = tests_by_layer_name.pop('unit')
-        if (not options.non_unit) and not options.resume_layer:
-            if options.layer:
-                should_run = False
-                for pat in options.layer:
-                    if pat('unit'):
-                        should_run = True
-                        break
-            else:
-                should_run = True
-
-            if should_run:
-                if options.list_tests:
-                    output.list_of_tests(tests, 'unit')
-                else:
-                    output.info("Running unit tests:")
-                    nlayers += 1
-                    ran += run_tests(options, tests, 'unit', failures, errors)
-
-    setup_layers = {}
-
-    layers_to_run = list(ordered_layers(tests_by_layer_name))
-    if options.resume_layer is not None:
-        layers_to_run = [
-            (layer_name, layer, tests)
-            for (layer_name, layer, tests) in layers_to_run
-            if layer_name == options.resume_layer
-        ]
-    elif options.layer:
-        layers_to_run = [
-            (layer_name, layer, tests)
-            for (layer_name, layer, tests) in layers_to_run
-            if filter(None, [pat(layer_name) for pat in options.layer])
-        ]
-
-
-    if options.list_tests:
-        for layer_name, layer, tests in layers_to_run:
-            output.list_of_tests(tests, layer_name)
-        return True
-
-    start_time = time.time()
-
-    for layer_name, layer, tests in layers_to_run:
-        nlayers += 1
-        try:
-            ran += run_layer(options, layer_name, layer, tests,
-                             setup_layers, failures, errors)
-        except CanNotTearDown:
-            setup_layers = None
-            if not options.resume_layer:
-                ran += resume_tests(options, layer_name, layers_to_run,
-                                    failures, errors)
-                break
-
-    if setup_layers:
-        if options.resume_layer == None:
-            output.info("Tearing down left over layers:")
-        tear_down_unneeded(options, (), setup_layers, True)
-
-    total_time = time.time() - start_time
-
-    if options.resume_layer:
-        sys.stdout.close()
-        # Communicate with the parent: first a line with the counts (tests
-        # run, failures, errors), then one line per failure and one per error.
-        print >> original_stderr, ran, len(failures), len(errors)
-        for test, exc_info in failures:
-            print >> original_stderr, ' '.join(str(test).strip().split('\n'))
-        for test, exc_info in errors:
-            print >> original_stderr, ' '.join(str(test).strip().split('\n'))
-
-    else:
-        if options.verbose:
-            output.tests_with_errors(errors)
-            output.tests_with_failures(failures)
-
-        if nlayers != 1:
-            output.totals(ran, len(failures), len(errors), total_time)
-
-        output.modules_with_import_problems(import_errors)
-
-    doctest.set_unittest_reportflags(old_reporting_flags)
-
-    if options.gc_option:
-        gc.set_debug(old_flags)
-
-    if options.gc:
-        gc.set_threshold(*old_threshold)
-
-    return not bool(import_errors or failures or errors)
-
-
-def run_tests(options, tests, name, failures, errors):
-    repeat = options.repeat or 1
-    repeat_range = iter(range(repeat))
-    ran = 0
-
-    output = options.output
-
-    gc.collect()
-    lgarbage = len(gc.garbage)
-
-    sumrc = 0
-    if options.report_refcounts:
-        if options.verbose:
-            track = TrackRefs()
-        rc = sys.gettotalrefcount()
-
-    for iteration in repeat_range:
-        if repeat > 1:
-            output.info("Iteration %d" % (iteration + 1))
-
-        if options.verbose > 0 or options.progress:
-            output.info('  Running:')
-        result = TestResult(options, tests, layer_name=name)
-
-        t = time.time()
-
-        if options.post_mortem:
-            # post-mortem debugging
-            for test in tests:
-                if result.shouldStop:
-                    break
-                result.startTest(test)
-                state = test.__dict__.copy()
-                try:
-                    try:
-                        test.debug()
-                    except KeyboardInterrupt:
-                        raise
-                    except:
-                        result.addError(
-                            test,
-                            sys.exc_info()[:2] + (sys.exc_info()[2].tb_next, ),
-                            )
-                    else:
-                        result.addSuccess(test)
-                finally:
-                    result.stopTest(test)
-                test.__dict__.clear()
-                test.__dict__.update(state)
-
-        else:
-            # normal
-            for test in tests:
-                if result.shouldStop:
-                    break
-                state = test.__dict__.copy()
-                test(result)
-                test.__dict__.clear()
-                test.__dict__.update(state)
-
-        t = time.time() - t
-        output.stop_tests()
-        failures.extend(result.failures)
-        errors.extend(result.errors)
-        output.summary(result.testsRun, len(result.failures), len(result.errors), t)
-        ran = result.testsRun
-
-        gc.collect()
-        if len(gc.garbage) > lgarbage:
-            output.garbage(gc.garbage[lgarbage:])
-            lgarbage = len(gc.garbage)
-
-        if options.report_refcounts:
-
-            # If we are being tested, we don't want stdout itself to
-            # foul up the numbers. :)
-            try:
-                sys.stdout.getvalue()
-            except AttributeError:
-                pass
-
-            prev = rc
-            rc = sys.gettotalrefcount()
-            if options.verbose:
-                track.update()
-                if iteration > 0:
-                    output.detailed_refcounts(track, rc, prev)
-                else:
-                    track.delta = None
-            elif iteration > 0:
-                output.refcounts(rc, prev)
-
-    return ran
-
-def run_layer(options, layer_name, layer, tests, setup_layers,
-              failures, errors):
-
-    output = options.output
-    gathered = []
-    gather_layers(layer, gathered)
-    needed = dict([(l, 1) for l in gathered])
-    if options.resume_number != 0:
-        output.info("Running %s tests:" % layer_name)
-    tear_down_unneeded(options, needed, setup_layers)
-
-    if options.resume_layer != None:
-        output.info_suboptimal( "  Running in a subprocess.")
-
-    try:
-        setup_layer(options, layer, setup_layers)
-    except EndRun:
-        raise
-    except Exception:
-        f = cStringIO.StringIO()
-        traceback.print_exc(file=f)
-        output.error(f.getvalue())
-        errors.append((SetUpLayerFailure(), sys.exc_info()))
-        return 0
-    else:
-        return run_tests(options, tests, layer_name, failures, errors)
-
-class SetUpLayerFailure(unittest.TestCase):
-    def runTest(self):
-        "Layer set up failure."
-
-def resume_tests(options, layer_name, layers, failures, errors):
-    output = options.output
-    layers = [l for (l, _, _) in layers]
-    layers = layers[layers.index(layer_name):]
-    rantotal = 0
-    resume_number = 0
-    for layer_name in layers:
-        args = [sys.executable,
-                options.original_testrunner_args[0],
-                '--resume-layer', layer_name, str(resume_number),
-                ]
-        resume_number += 1
-        for d in options.testrunner_defaults:
-            args.extend(['--default', d])
-
-        args.extend(options.original_testrunner_args[1:])
-
-        # this is because of a bug in Python (http://www.python.org/sf/900092)
-        if (options.profile == 'hotshot'
-            and sys.version_info[:3] <= (2,4,1)):
-            args.insert(1, '-O')
-
-        if sys.platform.startswith('win'):
-            args = args[0] + ' ' + ' '.join([
-                ('"' + a.replace('\\', '\\\\').replace('"', '\\"') + '"')
-                for a in args[1:]
-                ])
-
-        subin, subout, suberr = os.popen3(args)
-        while True:
-            try:
-                for l in subout:
-                    sys.stdout.write(l)
-            except IOError, e:
-                if e.errno == errno.EINTR:
-                    # If the subprocess dies before we finish reading its
-                    # output, a SIGCHLD signal can interrupt the reading.
-                # The correct thing to do in that case is to retry.
-                    continue
-                output.error("Error reading subprocess output for %s" % layer_name)
-                output.info(str(e))
-            else:
-                break
-
-        line = suberr.readline()
-        try:
-            ran, nfail, nerr = map(int, line.strip().split())
-        except KeyboardInterrupt:
-            raise
-        except:
-            raise SubprocessError(line+suberr.read())
-
-        while nfail > 0:
-            nfail -= 1
-            failures.append((suberr.readline().strip(), None))
-        while nerr > 0:
-            nerr -= 1
-            errors.append((suberr.readline().strip(), None))
-
-        rantotal += ran
-
-    return rantotal
-
-
-class SubprocessError(Exception):
-    """An error occurred when running a subprocess
-    """
-
-class CanNotTearDown(Exception):
-    "Couldn't tear down a layer"
-
-def tear_down_unneeded(options, needed, setup_layers, optional=False):
-    # Tear down any layers not needed for these tests. The unneeded
-    # layers might interfere.
-    unneeded = [l for l in setup_layers if l not in needed]
-    unneeded = order_by_bases(unneeded)
-    unneeded.reverse()
-    output = options.output
-    for l in unneeded:
-        output.start_tear_down(name_from_layer(l))
-        t = time.time()
-        try:
-            if hasattr(l, 'tearDown'):
-                l.tearDown()
-        except NotImplementedError:
-            output.tear_down_not_supported()
-            if not optional:
-                raise CanNotTearDown(l)
-        else:
-            output.stop_tear_down(time.time() - t)
-        del setup_layers[l]
-
-
-cant_pm_in_subprocess_message = """
-Can't post-mortem debug when running a layer as a subprocess!
-Try running layer %r by itself.
-"""
-
-def setup_layer(options, layer, setup_layers):
-    assert layer is not object
-    output = options.output
-    if layer not in setup_layers:
-        for base in layer.__bases__:
-            if base is not object:
-                setup_layer(options, base, setup_layers)
-        output.start_set_up(name_from_layer(layer))
-        t = time.time()
-        if hasattr(layer, 'setUp'):
-            try:
-                layer.setUp()
-            except Exception:
-                if options.post_mortem:
-                    if options.resume_layer:
-                        options.output.error_with_banner(
-                            cant_pm_in_subprocess_message
-                            % options.resume_layer)
-                        raise
-                    else:
-                        post_mortem(sys.exc_info())
-                else:
-                    raise
-                    
-        output.stop_set_up(time.time() - t)
-        setup_layers[layer] = 1
-
-def dependencies(bases, result):
-    for base in bases:
-        result[base] = 1
-        dependencies(base.__bases__, result)
-
-class TestResult(unittest.TestResult):
-
-    def __init__(self, options, tests, layer_name=None):
-        unittest.TestResult.__init__(self)
-        self.options = options
-        # Calculate our list of relevant layers we need to call testSetUp
-        # and testTearDown on.
-        if layer_name != 'unit':
-            layers = []
-            gather_layers(layer_from_name(layer_name), layers)
-            self.layers = order_by_bases(layers)
-        else:
-            self.layers = []
-        count = 0
-        for test in tests:
-            count += test.countTestCases()
-        self.count = count
-
-    def testSetUp(self):
-        """A layer may define a setup method to be called before each
-        individual test.
-        """
-        for layer in self.layers:
-            if hasattr(layer, 'testSetUp'):
-                layer.testSetUp()
-
-    def testTearDown(self):
-        """A layer may define a teardown method to be called after each
-           individual test.
-
-           This is useful for clearing the state of global
-           resources or resetting external systems such as relational
-           databases or daemons.
-        """
-        for layer in self.layers[-1::-1]:
-            if hasattr(layer, 'testTearDown'):
-                layer.testTearDown()
-
-    def startTest(self, test):
-        self.testSetUp()
-        unittest.TestResult.startTest(self, test)
-        testsRun = self.testsRun - 1 # subtract the one the base class added
-        count = test.countTestCases()
-        self.testsRun = testsRun + count
-
-        self.options.output.start_test(test, self.testsRun, self.count)
-
-        self._threads = threading.enumerate()
-        self._start_time = time.time()
-
-    def addSuccess(self, test):
-        t = max(time.time() - self._start_time, 0.0)
-        self.options.output.test_success(test, t)
-
-    def addError(self, test, exc_info):
-        self.options.output.test_error(test, time.time() - self._start_time,
-                                       exc_info)
-
-        unittest.TestResult.addError(self, test, exc_info)
-
-        if self.options.post_mortem:
-            if self.options.resume_layer:
-                self.options.output.error_with_banner("Can't post-mortem debug"
-                                                      " when running a layer"
-                                                      " as a subprocess!")
-            else:
-                post_mortem(exc_info)
-
-    def addFailure(self, test, exc_info):
-        self.options.output.test_failure(test, time.time() - self._start_time,
-                                         exc_info)
-
-        unittest.TestResult.addFailure(self, test, exc_info)
-
-        if self.options.post_mortem:
-            # XXX: mgedmin: why isn't there a resume_layer check here like
-            # in addError?
-            post_mortem(exc_info)
-
-    def stopTest(self, test):
-        self.testTearDown()
-        self.options.output.stop_test(test)
-
-        if gc.garbage:
-            self.options.output.test_garbage(test, gc.garbage)
-            # TODO: Perhaps eat the garbage here, so that the garbage isn't
-            #       printed for every subsequent test.
-
-        # Did the test leave any new threads behind?
-        new_threads = [t for t in threading.enumerate()
-                         if (t.isAlive()
-                             and
-                             t not in self._threads)]
-        if new_threads:
-            self.options.output.test_threads(test, new_threads)
-
-
-class FakeInputContinueGenerator:
-
-    def readline(self):
-        print  'c\n'
-        print '*'*70
-        print ("Can't use pdb.set_trace when running a layer"
-               " as a subprocess!")
-        print '*'*70
-        print
-        return 'c\n'
-
-
-def post_mortem(exc_info):
-    err = exc_info[1]
-    if isinstance(err, (doctest.UnexpectedException, doctest.DocTestFailure)):
-
-        if isinstance(err, doctest.UnexpectedException):
-            exc_info = err.exc_info
-
-            # Print out location info if the error was in a doctest
-            if exc_info[2].tb_frame.f_code.co_filename == '<string>':
-                print_doctest_location(err)
-
-        else:
-            print_doctest_location(err)
-            # Hm, we have a DocTestFailure exception.  We need to
-            # generate our own traceback
-            try:
-                exec ('raise ValueError'
-                      '("Expected and actual output are different")'
-                      ) in err.test.globs
-            except:
-                exc_info = sys.exc_info()
-
-    print "%s:" % (exc_info[0], )
-    print exc_info[1]
-    pdb.post_mortem(exc_info[2])
-    raise EndRun
-
-def print_doctest_location(err):
-    # This mimics pdb's output, which gives way cool results in emacs :)
-    filename = err.test.filename
-    if filename.endswith('.pyc'):
-        filename = filename[:-1]
-    print "> %s(%s)_()" % (filename, err.test.lineno+err.example.lineno+1)
-
-def ordered_layers(tests_by_layer_name):
-    layer_names = dict([(layer_from_name(layer_name), layer_name)
-                        for layer_name in tests_by_layer_name])
-    for layer in order_by_bases(layer_names):
-        layer_name = layer_names[layer]
-        yield layer_name, layer, tests_by_layer_name[layer_name]
-
-def gather_layers(layer, result):
-    if layer is not object:
-        result.append(layer)
-    for b in layer.__bases__:
-        gather_layers(b, result)
-
-def layer_from_name(layer_name):
-    """Return the layer for the given layer_name, importing the module that
-       defines it if necessary.
-
-       Note that a name -> layer cache is maintained by name_from_layer
-       to allow locating layers in cases where it would otherwise be
-       impossible.
-    """
-    global _layer_name_cache
-    if _layer_name_cache.has_key(layer_name):
-        return _layer_name_cache[layer_name]
-    layer_names = layer_name.split('.')
-    layer_module, module_layer_name = layer_names[:-1], layer_names[-1]
-    return getattr(import_name('.'.join(layer_module)), module_layer_name)
-
-def order_by_bases(layers):
-    """Order the layers from least to most specific (bottom to top)
-    """
-    named_layers = [(name_from_layer(layer), layer) for layer in layers]
-    named_layers.sort()
-    named_layers.reverse()
-    gathered = []
-    for name, layer in named_layers:
-        gather_layers(layer, gathered)
-    gathered.reverse()
-    seen = {}
-    result = []
-    for layer in gathered:
-        if layer not in seen:
-            seen[layer] = 1
-            if layer in layers:
-                result.append(layer)
-    return result
-
-_layer_name_cache = {}
-
-def name_from_layer(layer):
-    """Determine a name for the Layer using the namespace to avoid conflicts.
-
-    We also cache a name -> layer mapping to enable layer_from_name to work
-    in cases where the layer cannot be imported (such as layers defined
-    in doctests)
-    """
-    if layer.__module__ == '__builtin__':
-        name = layer.__name__
-    else:
-        name = layer.__module__ + '.' + layer.__name__
-    _layer_name_cache[name] = layer
-    return name
-
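A minimal sketch of the naming rule and of the cache round trip, assuming both
helpers are importable from the moved package; the layer class is made up:

    from zope.testing.testrunner import name_from_layer, layer_from_name

    class AppLayer(object):
        pass                       # stand-in layer; real layers define setUp etc.

    name = name_from_layer(AppLayer)          # e.g. '__main__.AppLayer'
    print layer_from_name(name) is AppLayer   # True, found via the cache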
-def find_tests(options, found_suites=None):
-    """Creates a dictionary mapping layer name to a suite of tests to be run
-    in that layer.
-
-    Passing a list of suites using the found_suites parameter will cause
-    that list of suites to be used instead of attempting to load them from
-    the filesystem. This is useful for unit testing the test runner.
-    """
-    suites = {}
-    if found_suites is None:
-        found_suites = find_suites(options)
-    for suite in found_suites:
-        for test, layer_name in tests_from_suite(suite, options):
-            suite = suites.get(layer_name)
-            if not suite:
-                suite = suites[layer_name] = unittest.TestSuite()
-            suite.addTest(test)
-    return suites
-
-def tests_from_suite(suite, options, dlevel=1, dlayer='unit'):
-    """Returns a sequence of (test, layer_name)
-
-    The tree of suites is recursively visited, with the most specific
-    layer taking precedence. So if a TestCase with a layer of 'foo' is
-    contained in a TestSuite with a layer of 'bar', the test case would be
-    returned with 'foo' as the layer.
-
-    Tests are also filtered out based on the test level and test selection
-    filters stored in the options.
-    """
-    level = getattr(suite, 'level', dlevel)
-    layer = getattr(suite, 'layer', dlayer)
-    if not isinstance(layer, basestring):
-        layer = name_from_layer(layer)
-
-    if isinstance(suite, unittest.TestSuite):
-        for possible_suite in suite:
-            for r in tests_from_suite(possible_suite, options, level, layer):
-                yield r
-    elif isinstance(suite, StartUpFailure):
-        yield (suite, None)
-    else:
-        if level <= options.at_level:
-            for pat in options.test:
-                if pat(str(suite)):
-                    yield (suite, layer)
-                    break
-
-
-def find_suites(options):
-    for fpath, package in find_test_files(options):
-        for (prefix, prefix_package) in options.prefix:
-            if fpath.startswith(prefix) and package == prefix_package:
-                # strip prefix, strip .py suffix and convert separator to dots
-                noprefix = fpath[len(prefix):]
-                noext = strip_py_ext(options, noprefix)
-                assert noext is not None
-                module_name = noext.replace(os.path.sep, '.')
-                if package:
-                    module_name = package + '.' + module_name
-
-                for filter in options.module:
-                    if filter(module_name):
-                        break
-                else:
-                    continue
-
-                try:
-                    module = import_name(module_name)
-                except KeyboardInterrupt:
-                    raise
-                except:
-                    suite = StartUpFailure(
-                        options, module_name,
-                        sys.exc_info()[:2]
-                        + (sys.exc_info()[2].tb_next.tb_next,),
-                        )
-                else:
-                    try:
-                        suite = getattr(module, options.suite_name)()
-                        if isinstance(suite, unittest.TestSuite):
-                            check_suite(suite, module_name)
-                        else:
-                            raise TypeError(
-                                "Invalid test_suite, %r, in %s"
-                                % (suite, module_name)
-                                )
-                    except KeyboardInterrupt:
-                        raise
-                    except:
-                        suite = StartUpFailure(
-                            options, module_name, sys.exc_info()[:2]+(None,))
-
-
-                yield suite
-                break
-
-
-def check_suite(suite, module_name):
-    """Check for bad tests in a test suite.
-
-    "Bad tests" are those that do not inherit from unittest.TestCase.
-
-    Note that this function is pointless on Python 2.5, because unittest itself
-    checks for this in TestSuite.addTest.  It is, however, useful on earlier
-    Pythons.
-    """
-    for x in suite:
-        if isinstance(x, unittest.TestSuite):
-            check_suite(x, module_name)
-        elif not isinstance(x, unittest.TestCase):
-            raise TypeError(
-                "Invalid test, %r,\nin test_suite from %s"
-                % (x, module_name)
-                )
-
-
-
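For example, a test_suite() that sneaks a non-TestCase object into its suite would be rejected; a small hypothetical sketch:

    import unittest

    bad = unittest.TestSuite()
    bad._tests.append(object())        # bypass addTest, which newer unittest versions check themselves
    try:
        check_suite(bad, 'some.module')
    except TypeError, e:
        print e                        # Invalid test, <object ...>, in test_suite from some.module
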
-
-class StartUpFailure(unittest.TestCase):
-    """Empty test case added to the test suite to indicate import failures."""
-
-    def __init__(self, options, module, exc_info):
-        if options.post_mortem:
-            post_mortem(exc_info)
-        self.module = module
-        self.exc_info = exc_info
-
-
-def find_test_files(options):
-    found = {}
-    for f, package in find_test_files_(options):
-        if f not in found:
-            found[f] = 1
-            yield f, package
-
-identifier = re.compile(r'[_a-zA-Z]\w*$').match
-def find_test_files_(options):
-    tests_pattern = options.tests_pattern
-    test_file_pattern = options.test_file_pattern
-
-    # If options.usecompiled, we can accept .pyc or .pyo files instead
-    # of .py files.  We'd rather use a .py file if one exists.  `root2ext`
-    # maps a test file path, sans extension, to the path with the best
-    # extension found (.py if it exists, else .pyc or .pyo).
-    # Note that "py" < "pyc" < "pyo", so if more than one extension is
-    # found, the lexicographically smaller one is best.
-
-    # Found a new test file, in directory `dirname`.  `noext` is the
-    # file name without an extension, and `withext` is the file name
-    # with its extension.
-    def update_root2ext(dirname, noext, withext):
-        key = os.path.join(dirname, noext)
-        new = os.path.join(dirname, withext)
-        if key in root2ext:
-            root2ext[key] = min(root2ext[key], new)
-        else:
-            root2ext[key] = new
-
-    for (p, package) in test_dirs(options, {}):
-        for dirname, dirs, files in walk_with_symlinks(options, p):
-            if dirname != p and not contains_init_py(options, files):
-                continue    # not a plausible test directory
-            root2ext = {}
-            dirs[:] = filter(identifier, dirs)
-            d = os.path.split(dirname)[1]
-            if tests_pattern(d) and contains_init_py(options, files):
-                # tests directory
-                for file in files:
-                    noext = strip_py_ext(options, file)
-                    if noext and test_file_pattern(noext):
-                        update_root2ext(dirname, noext, file)
-
-            for file in files:
-                noext = strip_py_ext(options, file)
-                if noext and tests_pattern(noext):
-                    update_root2ext(dirname, noext, file)
-
-            winners = root2ext.values()
-            winners.sort()
-            for file in winners:
-                yield file, package
-
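The "best extension" choice described in the comment above relies on nothing more than lexicographic string ordering:

    >>> min('tests.py', 'tests.pyc')
    'tests.py'
    >>> min('tests.pyc', 'tests.pyo')
    'tests.pyc'
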
-def walk_with_symlinks(options, dir):
-    # TODO -- really should have test of this that uses symlinks
-    #         this is hard on a number of levels ...
-    for dirpath, dirs, files in os.walk(dir):
-        dirs.sort()
-        files.sort()
-        dirs[:] = [d for d in dirs if d not in options.ignore_dir]
-        yield (dirpath, dirs, files)
-        for d in dirs:
-            p = os.path.join(dirpath, d)
-            if os.path.islink(p):
-                for sdirpath, sdirs, sfiles in walk_with_symlinks(options, p):
-                    yield (sdirpath, sdirs, sfiles)
-
-compiled_sufixes = '.pyc', '.pyo'
-def remove_stale_bytecode(options):
-    if options.keepbytecode:
-        return
-    for (p, _) in options.test_path:
-        for dirname, dirs, files in walk_with_symlinks(options, p):
-            for file in files:
-                if file[-4:] in compiled_sufixes and file[:-1] not in files:
-                    fullname = os.path.join(dirname, file)
-                    options.output.info("Removing stale bytecode file %s"
-                                        % fullname)
-                    os.unlink(fullname)
-
-
-def test_dirs(options, seen):
-    if options.package:
-        for p in options.package:
-            p = import_name(p)
-            for p in p.__path__:
-                p = os.path.abspath(p)
-                if p in seen:
-                    continue
-                for (prefix, package) in options.prefix:
-                    if p.startswith(prefix) or p == prefix[:-1]:
-                        seen[p] = 1
-                        yield p, package
-                        break
-    else:
-        for dpath in options.test_path:
-            yield dpath
-
-
-def import_name(name):
-    __import__(name)
-    return sys.modules[name]
-
-def configure_logging():
-    """Initialize the logging module."""
-    import logging.config
-
-    # Get the log.ini file from the current directory instead of
-    # possibly buried in the build directory.  TODO: This isn't
-    # perfect because if log.ini specifies a log file, it'll be
-    # relative to the build directory.  Hmm...
-
-    logini = os.path.abspath("log.ini")
-    if os.path.exists(logini):
-        logging.config.fileConfig(logini)
-    else:
-        # If there's no log.ini, cause the logging package to be
-        # silent during testing.
-        root = logging.getLogger()
-        root.addHandler(NullHandler())
-        logging.basicConfig()
-
-    if os.environ.has_key("LOGGING"):
-        level = int(os.environ["LOGGING"])
-        logging.getLogger().setLevel(level)
-
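For reference, a minimal log.ini in the standard logging.config.fileConfig format might look like the following (a generic sketch, not a file shipped with the test runner):

    [loggers]
    keys = root

    [handlers]
    keys = console

    [formatters]
    keys = plain

    [logger_root]
    level = WARNING
    handlers = console

    [handler_console]
    class = StreamHandler
    level = NOTSET
    formatter = plain
    args = (sys.stderr,)

    [formatter_plain]
    format = %(name)s %(levelname)s: %(message)s
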
-class NullHandler(logging.Handler):
-    """Logging handler that drops everything on the floor.
-
-    We require silence in the test environment.  Hush.
-    """
-
-    def emit(self, record):
-        pass
-
-
-class TrackRefs(object):
-    """Object to track reference counts across test runs."""
-
-    def __init__(self):
-        self.type2count = {}
-        self.type2all = {}
-        self.delta = None
-        self.n = 0
-        self.update()
-        self.delta = None
-
-    def update(self):
-        gc.collect()
-        obs = sys.getobjects(0)
-        type2count = {}
-        type2all = {}
-        n = 0
-        for o in obs:
-            if type(o) is str and o == '<dummy key>':
-                # avoid dictionary madness
-                continue
-
-            all = sys.getrefcount(o) - 3
-            n += all
-
-            t = type(o)
-            if t is types.InstanceType:
-                t = o.__class__
-
-            if t in type2count:
-                type2count[t] += 1
-                type2all[t] += all
-            else:
-                type2count[t] = 1
-                type2all[t] = all
-
-
-        ct = [(
-               type_or_class_title(t),
-               type2count[t] - self.type2count.get(t, 0),
-               type2all[t] - self.type2all.get(t, 0),
-               )
-              for t in type2count.iterkeys()]
-        ct += [(
-                type_or_class_title(t),
-                - self.type2count[t],
-                - self.type2all[t],
-                )
-               for t in self.type2count.iterkeys()
-               if t not in type2count]
-        ct.sort()
-        self.delta = ct
-        self.type2count = type2count
-        self.type2all = type2all
-        self.n = n
-
-
-    def output(self):
-        printed = False
-        s1 = s2 = 0
-        for t, delta1, delta2 in self.delta:
-            if delta1 or delta2:
-                if not printed:
-                    print (
-                        '    Leak details, changes in instances and refcounts'
-                        ' by type/class:')
-                    print "    %-55s %6s %6s" % ('type/class', 'insts', 'refs')
-                    print "    %-55s %6s %6s" % ('-' * 55, '-----', '----')
-                    printed = True
-                print "    %-55s %6d %6d" % (t, delta1, delta2)
-                s1 += delta1
-                s2 += delta2
-
-        if printed:
-            print "    %-55s %6s %6s" % ('-' * 55, '-----', '----')
-            print "    %-55s %6s %6s" % ('total', s1, s2)
-
-
-        self.delta = None
-
-def type_or_class_title(t):
-    module = getattr(t, '__module__', '__builtin__')
-    if module == '__builtin__':
-        return t.__name__
-    return "%s.%s" % (module, t.__name__)
-
-
-###############################################################################
-# Command-line UI
-
-parser = optparse.OptionParser("Usage: %prog [options] [MODULE] [TEST]")
-
-######################################################################
-# Searching and filtering
-
-searching = optparse.OptionGroup(parser, "Searching and filtering", """\
-Options in this group are used to define which tests to run.
-""")
-
-searching.add_option(
-    '--package', '--dir', '-s', action="append", dest='package',
-    help="""\
-Search the given package's directories for tests.  This can be
-specified more than once to run tests in multiple parts of the source
-tree.  For example, if refactoring interfaces, you don't want to see
-the way you have broken setups for tests in other packages. You *just*
-want to run the interface tests.
-
-Packages are supplied as dotted names.  For compatibility with the old
-test runner, forward and backward slashes in package names are
-converted to dots.
-
-(In the special case of packages spread over multiple directories,
-only directories within the test search path are searched. See the
---path option.)
-
-""")
-
-searching.add_option(
-    '--module', '-m', action="append", dest='module',
-    help="""\
-Specify a test-module filter as a regular expression.  This is a
-case-sensitive regular expression, used in search (not match) mode, to
-limit which test modules are searched for tests.  The regular
-expressions are checked against dotted module names.  In an extension
-of Python regexp notation, a leading "!" is stripped and causes the
-sense of the remaining regexp to be negated (so "!bc" matches any
-string that does not match "bc", and vice versa).  The option can be
-specified multiple times to give multiple test-module filters.  Test
-modules matching any of the filters are searched.  If no test-module
-filter is specified, then all test modules are used.
-""")
-
-searching.add_option(
-    '--test', '-t', action="append", dest='test',
-    help="""\
-Specify a test filter as a regular expression.  This is a
-case-sensitive regular expression, used in search (not match) mode, to
-limit which tests are run.  In an extension of Python regexp notation,
-a leading "!" is stripped and causes the sense of the remaining regexp
-to be negated (so "!bc" matches any string that does not match "bc",
-and vice versa).  The option can be specified multiple times to give
-multiple test filters.  Tests matching any of them are included.  If no test
-filter is specified, then all tests are run.
-""")
-
-searching.add_option(
-    '--unit', '-u', action="store_true", dest='unit',
-    help="""\
-Run only unit tests, ignoring any layer options.
-""")
-
-searching.add_option(
-    '--non-unit', '-f', action="store_true", dest='non_unit',
-    help="""\
-Run tests other than unit tests.
-""")
-
-searching.add_option(
-    '--layer', action="append", dest='layer',
-    help="""\
-Specify a test layer to run.  The option can be given multiple times
-to specify more than one layer.  If not specified, all layers are run.
-It is common for the running script to provide default values for this
-option.  Layers are specified as regular expressions, used in search
-mode, for dotted names of objects that define a layer.  In an
-extension of Python regexp notation, a leading "!" is stripped and
-causes the sense of the remaining regexp to be negated (so "!bc"
-matches any string that does not match "bc", and vice versa).  The
-layer named 'unit' is reserved for unit tests; see also the --unit
-and --non-unit options.
-""")
-
-searching.add_option(
-    '-a', '--at-level', type='int', dest='at_level',
-    help="""\
-Run the tests at the given level.  Any test at a level at or below
-this is run, any test at a level above this is not run.  Level 0
-runs all tests.
-""")
-
-searching.add_option(
-    '--all', action="store_true", dest='all',
-    help="Run tests at all levels.")
-
-searching.add_option(
-    '--list-tests', action="store_true", dest='list_tests', default=False,
-    help="List all tests that matched your filters.  Do not run any tests.")
-
-parser.add_option_group(searching)
-
-######################################################################
-# Reporting
-
-reporting = optparse.OptionGroup(parser, "Reporting", """\
-Reporting options control basic aspects of test-runner output
-""")
-
-reporting.add_option(
-    '--verbose', '-v', action="count", dest='verbose',
-    help="""\
-Make output more verbose.  Each use of this option increments the
-verbosity level.
-""")
-
-reporting.add_option(
-    '--quiet', '-q', action="store_true", dest='quiet',
-    help="""\
-Make the output minimal, overriding any verbosity options.
-""")
-
-reporting.add_option(
-    '--progress', '-p', action="store_true", dest='progress',
-    help="""\
-Output progress status.
-""")
-
-reporting.add_option(
-    '--no-progress',action="store_false", dest='progress',
-    help="""\
-Do not output progress status.  This is the default, but can be used to
-counter a previous use of --progress or -p.
-""")
-
-# We use a noop callback because the actual processing will be done in the
-# get_options function, but we want optparse to generate appropriate help info
-# for us, so we add an option anyway.
-reporting.add_option(
-    '--auto-progress', action="callback", callback=lambda *args: None,
-    help="""\
-Output progress status, but only when stdout is a terminal.
-""")
-
-reporting.add_option(
-    '--color', '-c', action="store_true", dest='color',
-    help="""\
-Colorize the output.
-""")
-
-reporting.add_option(
-    '--no-color', '-C', action="store_false", dest='color',
-    help="""\
-Do not colorize the output.  This is the default, but can be used to
-counter a previous use of --color or -c.
-""")
-
-# We use a noop callback because the actual processing will be done in the
-# get_options function, but we want optparse to generate appropriate help info
-# for us, so we add an option anyway.
-reporting.add_option(
-    '--auto-color', action="callback", callback=lambda *args: None,
-    help="""\
-Colorize the output, but only when stdout is a terminal.
-""")
-
-reporting.add_option(
-    '--slow-test', type='float', dest='slow_test_threshold',
-    metavar='N', default=10,
-    help="""\
-With -c and -vvv, highlight tests that take longer than N seconds (default:
-%default).
-""")
-
-reporting.add_option(
-    '-1', '--hide-secondary-failures',
-    action="store_true", dest='report_only_first_failure',
-    help="""\
-Report only the first failure in a doctest. (Examples after the
-failure are still executed, in case they do any cleanup.)
-""")
-
-reporting.add_option(
-    '--show-secondary-failures',
-    action="store_false", dest='report_only_first_failure',
-    help="""\
-Report all failures in a doctest.  This is the default, but can
-be used to counter a default use of -1 or --hide-secondary-failures.
-""")
-
-reporting.add_option(
-    '--ndiff', action="store_true", dest="ndiff",
-    help="""\
-When there is a doctest failure, show it as a diff using the ndiff.py utility.
-""")
-
-reporting.add_option(
-    '--udiff', action="store_true", dest="udiff",
-    help="""\
-When there is a doctest failure, show it as a unified diff.
-""")
-
-reporting.add_option(
-    '--cdiff', action="store_true", dest="cdiff",
-    help="""\
-When there is a doctest failure, show it as a context diff.
-""")
-
-parser.add_option_group(reporting)
-
-######################################################################
-# Analysis
-
-analysis = optparse.OptionGroup(parser, "Analysis", """\
-Analysis options provide tools for analysing test output.
-""")
-
-
-analysis.add_option(
-    '--post-mortem', '-D', action="store_true", dest='post_mortem',
-    help="Enable post-mortem debugging of test failures"
-    )
-
-
-analysis.add_option(
-    '--gc', '-g', action="append", dest='gc', type="int",
-    help="""\
-Set the garbage collector generation threshold.  This can be used
-to stress memory and gc correctness.  Some crashes are only
-reproducible when the threshold is set to 1 (aggressive garbage
-collection).  Do "--gc 0" to disable garbage collection altogether.
-
-The --gc option can be used up to 3 times to specify up to 3 of the 3
-Python gc_threshold settings.
-
-""")
-
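The option processing itself happens elsewhere in the runner; as a rough sketch of the semantics described here, the corresponding calls in Python's gc module are (values are illustrative):

    import gc

    gc.set_threshold(1)             # like "--gc 1": collect very aggressively
    gc.set_threshold(0)             # like "--gc 0": automatic collection is disabled
    gc.set_threshold(700, 10, 10)   # three values, as when --gc is given three times
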
-analysis.add_option(
-    '--gc-option', '-G', action="append", dest='gc_option', type="choice",
-    choices=['DEBUG_STATS', 'DEBUG_COLLECTABLE', 'DEBUG_UNCOLLECTABLE',
-             'DEBUG_INSTANCES', 'DEBUG_OBJECTS', 'DEBUG_SAVEALL',
-             'DEBUG_LEAK'],
-    help="""\
-Set a Python gc-module debug flag.  This option can be used more than
-once to set multiple flags.
-""")
-
-analysis.add_option(
-    '--repeat', '-N', action="store", type="int", dest='repeat',
-    help="""\
-Repeat the tests the given number of times.  This option is used to
-make sure that tests leave their environment in the state they found
-it and, with the --report-refcounts option, to look for memory leaks.
-""")
-
-analysis.add_option(
-    '--report-refcounts', '-r', action="store_true", dest='report_refcounts',
-    help="""\
-After each run of the tests, output a report summarizing changes in
-refcounts by object type.  This option requires that Python was
-built with the --with-pydebug option to configure.
-""")
-
-analysis.add_option(
-    '--coverage', action="store", type='string', dest='coverage',
-    help="""\
-Perform code-coverage analysis, saving trace data to the directory
-with the given name.  A code coverage summary is printed to standard
-out.
-""")
-
-analysis.add_option(
-    '--profile', action="store", dest='profile', type="choice",
-    choices=available_profilers.keys(),
-    help="""\
-Run the tests under cProfile or hotshot and display the top 50 stats, sorted
-by cumulative time and number of calls.
-""")
-
-def do_pychecker(*args):
-    if not os.environ.get("PYCHECKER"):
-        os.environ["PYCHECKER"] = "-q"
-    import pychecker.checker
-
-analysis.add_option(
-    '--pychecker', action="callback", callback=do_pychecker,
-    help="""\
-Run the tests under pychecker
-""")
-
-parser.add_option_group(analysis)
-
-######################################################################
-# Setup
-
-setup = optparse.OptionGroup(parser, "Setup", """\
-Setup options are normally supplied by the testrunner script, although
-they can be overridden by users.
-""")
-
-setup.add_option(
-    '--path', action="append", dest='path',
-    help="""\
-Specify a path to be added to Python's search path.  This option can
-be used multiple times to specify multiple search paths.  The path is
-usually specified by the test-runner script itself, rather than by
-users of the script, although it can be overridden by users.  Only
-tests found in the path will be run.
-
-This option also specifies directories to be searched for tests
-(see also the --test-path option).
-""")
-
-setup.add_option(
-    '--test-path', action="append", dest='test_path',
-    help="""\
-Specify a path to be searched for tests, but not added to the Python
-search path.  This option can be used multiple times to specify
-multiple search paths.  The path is usually specified by the
-test-runner script itself, rather than by users of the script,
-although it can be overridden by users.  Only tests found in the path
-will be run.
-""")
-
-setup.add_option(
-    '--package-path', action="append", dest='package_path', nargs=2,
-    help="""\
-Specify a path to be searched for tests, but not added to the Python
-search path.  Also specify a package for files found in this path.
-This is used to deal with directories that are stitched into packages
-that are not otherwise searched for tests.
-
-This option takes 2 arguments.  The first is a path name. The second is
-the package name.
-
-This option can be used multiple times to specify
-multiple search paths.  The path is usually specified by the
-test-runner script itself, rather than by users of the script,
-although it can be overridden by users.  Only tests found in the path
-will be run.
-""")
-
-setup.add_option(
-    '--tests-pattern', action="store", dest='tests_pattern',
-    help="""\
-The test runner looks for modules containing tests.  It uses this
-pattern to identify these modules.  The modules may be either packages
-or python files.
-
-If a test module is a package, it uses the value given by the
-test-file-pattern to identify python files within the package
-containing tests.
-""")
-
-setup.add_option(
-    '--suite-name', action="store", dest='suite_name',
-    help="""\
-Specify the name of the object in each test_module that contains the
-module's test suite.
-""")
-
-setup.add_option(
-    '--test-file-pattern', action="store", dest='test_file_pattern',
-    help="""\
-Specify a pattern for identifying python files within a tests package.
-See the documentation for the --tests-pattern option.
-""")
-
-setup.add_option(
-    '--ignore_dir', action="append", dest='ignore_dir',
-    help="""\
-Specifies the name of a directory to ignore when looking for tests.
-""")
-
-parser.add_option_group(setup)
-
-######################################################################
-# Other
-
-other = optparse.OptionGroup(parser, "Other", "Other options")
-
-other.add_option(
-    '--keepbytecode', '-k', action="store_true", dest='keepbytecode',
-    help="""\
-Normally, the test runner scans the test paths and the test
-directories looking for and deleting pyc or pyo files without
-corresponding py files.  This is to prevent spurious test failures due
-to finding compiled modules where source modules have been deleted.
-This scan can be time consuming.  Using this option disables this
-scan.  If you know you haven't removed any modules since last running
-the tests, this option can make the test run go much faster.
-""")
-
-other.add_option(
-    '--usecompiled', action="store_true", dest='usecompiled',
-    help="""\
-Normally, a package must contain an __init__.py file, and only .py files
-can contain test code.  When this option is specified, compiled Python
-files (.pyc and .pyo) can be used instead:  a directory containing
-__init__.pyc or __init__.pyo is also considered to be a package, and if
-file XYZ.py contains tests but is absent while XYZ.pyc or XYZ.pyo exists
-then the compiled files will be used.  This is necessary when running
-tests against a tree where the .py files have been removed after
-compilation to .pyc/.pyo.  Use of this option implies --keepbytecode.
-""")
-
-other.add_option(
-    '--exit-with-status', action="store_true", dest='exitwithstatus',
-    help="""\
-Return an error exit status if the tests failed.  This can be useful for
-an invoking process that wants to monitor the result of a test run.
-""")
-
-parser.add_option_group(other)
-
-######################################################################
-# Command-line processing
-
-def compile_filter(pattern):
-    if pattern.startswith('!'):
-        pattern = re.compile(pattern[1:]).search
-        return (lambda s: not pattern(s))
-    return re.compile(pattern).search
-
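The "!" negation described in the --module and --test help can be seen directly:

    >>> accept = compile_filter('bc')
    >>> bool(accept('abcd')), bool(accept('xyz'))
    (True, False)
    >>> reject = compile_filter('!bc')
    >>> bool(reject('abcd')), bool(reject('xyz'))
    (False, True)
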
-def merge_options(options, defaults):
-    odict = options.__dict__
-    for name, value in defaults.__dict__.items():
-        if (value is not None) and (odict[name] is None):
-            odict[name] = value
-
-default_setup_args = [
-    '--tests-pattern', '^tests$',
-    '--at-level', '1',
-    '--ignore', '.svn',
-    '--ignore', 'CVS',
-    '--ignore', '{arch}',
-    '--ignore', '.arch-ids',
-    '--ignore', '_darcs',
-    '--test-file-pattern', '^test',
-    '--suite-name', 'test_suite',
-    ]
-
-
-def terminal_has_colors():
-    """Determine whether the terminal supports colors.
-
-    Some terminals (e.g. the emacs built-in one) don't.
-    """
-    return tigetnum('colors', -1) >= 8
-
-
-def get_options(args=None, defaults=None):
-    # Because we want to inspect stdout and decide to colorize or not, we
-    # replace the --auto-color option with the appropriate --color or
-    # --no-color option.  That way the subprocess doesn't have to decide (which
-    # it would do incorrectly anyway because stdout would be a pipe).
-    def apply_auto_color(args):
-        if args and '--auto-color' in args:
-            if sys.stdout.isatty() and terminal_has_colors():
-                colorization = '--color'
-            else:
-                colorization = '--no-color'
-
-            args[:] = [arg.replace('--auto-color', colorization)
-                       for arg in args]
-
-    # The comment of apply_auto_color applies here as well
-    def apply_auto_progress(args):
-        if args and '--auto-progress' in args:
-            if sys.stdout.isatty():
-                progress = '--progress'
-            else:
-                progress = '--no-progress'
-
-            args[:] = [arg.replace('--auto-progress', progress)
-                       for arg in args]
-
-    apply_auto_color(args)
-    apply_auto_color(defaults)
-    apply_auto_progress(args)
-    apply_auto_progress(defaults)
-
-    default_setup, _ = parser.parse_args(default_setup_args)
-    assert not _
-    if defaults:
-        defaults, _ = parser.parse_args(defaults)
-        assert not _
-        merge_options(defaults, default_setup)
-    else:
-        defaults = default_setup
-
-    if args is None:
-        args = sys.argv
-
-    original_testrunner_args = args
-    args = args[1:]
-
-    options, positional = parser.parse_args(args)
-    merge_options(options, defaults)
-    options.original_testrunner_args = original_testrunner_args
-
-    if options.color:
-        options.output = ColorfulOutputFormatter(options)
-        options.output.slow_test_threshold = options.slow_test_threshold
-    else:
-        options.output = OutputFormatter(options)
-
-    options.fail = False
-
-    if positional:
-        module_filter = positional.pop(0)
-        if module_filter != '.':
-            if options.module:
-                options.module.append(module_filter)
-            else:
-                options.module = [module_filter]
-
-        if positional:
-            test_filter = positional.pop(0)
-            if options.test:
-                options.test.append(test_filter)
-            else:
-                options.test = [test_filter]
-
-            if positional:
-                parser.error("Too many positional arguments")
-
-    options.ignore_dir = dict([(d,1) for d in options.ignore_dir])
-    options.test_file_pattern = re.compile(options.test_file_pattern).search
-    options.tests_pattern = re.compile(options.tests_pattern).search
-    options.test = map(compile_filter, options.test or ('.'))
-    options.module = map(compile_filter, options.module or ('.'))
-
-    options.path = map(os.path.abspath, options.path or ())
-    options.test_path = map(os.path.abspath, options.test_path or ())
-    options.test_path += options.path
-
-    options.test_path = ([(path, '') for path in options.test_path]
-                         +
-                         [(os.path.abspath(path), package)
-                          for (path, package) in options.package_path or ()
-                          ])
-
-    if options.package:
-        pkgmap = dict(options.test_path)
-        options.package = [normalize_package(p, pkgmap)
-                           for p in options.package]
-
-    options.prefix = [(path + os.path.sep, package)
-                      for (path, package) in options.test_path]
-    if options.all:
-        options.at_level = sys.maxint
-
-    if options.unit and options.non_unit:
-        # The test runner interprets this as "run only those tests that are
-        # both unit and non-unit at the same time".  The user, however, wants
-        # to run both unit and non-unit tests.  Disable the filtering so that
-        # the user will get what she wants:
-        options.unit = options.non_unit = False
-
-    if options.unit:
-        options.layer = ['unit']
-    if options.layer:
-        options.layer = map(compile_filter, options.layer)
-
-    options.layer = options.layer and dict([(l, 1) for l in options.layer])
-
-    if options.usecompiled:
-        options.keepbytecode = options.usecompiled
-
-    if options.quiet:
-        options.verbose = 0
-
-    if options.report_refcounts and options.repeat < 2:
-        print """\
-        You must use the --repeat (-N) option to specify a repeat
-        count greater than 1 when using the --report-refcounts (-r)
-        option.
-        """
-        options.fail = True
-        return options
-
-
-    if options.report_refcounts and not hasattr(sys, "gettotalrefcount"):
-        print """\
-        The Python you are running was not configured
-        with --with-pydebug. This is required to use
-        the --report-refcounts option.
-        """
-        options.fail = True
-        return options
-
-    return options
-
-def normalize_package(package, package_map={}):
-    r"""Normalize package name passed to the --package option.
-
-        >>> normalize_package('zope.testing')
-        'zope.testing'
-
-    Converts path names into package names for compatibility with the old
-    test runner.
-
-        >>> normalize_package('zope/testing')
-        'zope.testing'
-        >>> normalize_package('zope/testing/')
-        'zope.testing'
-        >>> normalize_package('zope\\testing')
-        'zope.testing'
-
-    Can use a map of absolute pathnames to package names
-
-        >>> a = os.path.abspath
-        >>> normalize_package('src/zope/testing/',
-        ...                   {a('src'): ''})
-        'zope.testing'
-        >>> normalize_package('src/zope_testing/',
-        ...                   {a('src/zope_testing'): 'zope.testing'})
-        'zope.testing'
-        >>> normalize_package('src/zope_something/tests',
-        ...                   {a('src/zope_something'): 'zope.something',
-        ...                    a('src'): ''})
-        'zope.something.tests'
-
-    """
-    package = package.replace('\\', '/')
-    if package.endswith('/'):
-        package = package[:-1]
-    bits = package.split('/')
-    for n in range(len(bits), 0, -1):
-        pkg = package_map.get(os.path.abspath('/'.join(bits[:n])))
-        if pkg is not None:
-            bits = bits[n:]
-            if pkg:
-                bits = [pkg] + bits
-            return '.'.join(bits)
-    return package.replace('/', '.')
-
-# Command-line UI
-###############################################################################
-
-###############################################################################
-# Install 2.4 TestSuite __iter__ into earlier versions
-
-if sys.version_info < (2, 4):
-    def __iter__(suite):
-        return iter(suite._tests)
-    unittest.TestSuite.__iter__ = __iter__
-    del __iter__
-
-# Install 2.4 TestSuite __iter__ into earlier versions
-###############################################################################
-
-###############################################################################
-# Test the testrunner
-
-def test_suite():
-
-    import renormalizing
-    checker = renormalizing.RENormalizing([
-        # 2.5 changed the way pdb reports exceptions
-        (re.compile(r"<class 'exceptions.(\w+)Error'>:"),
-                    r'exceptions.\1Error:'),
-
-        (re.compile('^> [^\n]+->None$', re.M), '> ...->None'),
-        (re.compile(r"<module>"),(r'?')),
-        (re.compile(r"<type 'exceptions.(\w+)Error'>:"),
-                    r'exceptions.\1Error:'),
-        (re.compile("'[A-Za-z]:\\\\"), "'"), # hopefully, we'll make Windows happy
-        (re.compile(r'\\\\'), '/'), # more Windows happiness
-        (re.compile(r'\\'), '/'), # even more Windows happiness
-        (re.compile('/r'), '\\\\r'), # undo damage from previous
-        (re.compile(r'\r'), '\\\\r\n'),
-        (re.compile(r'\d+[.]\d\d\d seconds'), 'N.NNN seconds'),
-        (re.compile(r'\d+[.]\d\d\d s'), 'N.NNN s'),
-        (re.compile(r'\d+[.]\d\d\d{'), 'N.NNN{'),
-        (re.compile('( |")[^\n]+testrunner-ex'), r'\1testrunner-ex'),
-        (re.compile('( |")[^\n]+testrunner.py'), r'\1testrunner.py'),
-        (re.compile(r'> [^\n]*(doc|unit)test[.]py\(\d+\)'),
-                    r'\1test.py(NNN)'),
-        (re.compile(r'[.]py\(\d+\)'), r'.py(NNN)'),
-        (re.compile(r'[.]py:\d+'), r'.py:NNN'),
-        (re.compile(r' line \d+,', re.IGNORECASE), r' Line NNN,'),
-        (re.compile(r' line {([a-z]+)}\d+{', re.IGNORECASE), r' Line {\1}NNN{'),
-
-        # omit traceback entries for unittest.py or doctest.py from
-        # output:
-        (re.compile(r'^ +File "[^\n]*(doc|unit)test.py", [^\n]+\n[^\n]+\n',
-                    re.MULTILINE),
-         r''),
-        (re.compile(r'^{\w+} +File "{\w+}[^\n]*(doc|unit)test.py{\w+}", [^\n]+\n[^\n]+\n',
-                    re.MULTILINE),
-         r''),
-        (re.compile('^> [^\n]+->None$', re.M), '> ...->None'),
-        (re.compile('import pdb; pdb'), 'Pdb()'), # Py 2.3
-        ])
-
-    def setUp(test):
-        test.globs['saved-sys-info'] = (
-            sys.path[:],
-            sys.argv[:],
-            sys.modules.copy(),
-            gc.get_threshold(),
-            )
-        test.globs['this_directory'] = os.path.split(__file__)[0]
-        test.globs['testrunner_script'] = __file__
-
-    def tearDown(test):
-        sys.path[:], sys.argv[:] = test.globs['saved-sys-info'][:2]
-        gc.set_threshold(*test.globs['saved-sys-info'][3])
-        sys.modules.clear()
-        sys.modules.update(test.globs['saved-sys-info'][2])
-
-    suites = [
-        doctest.DocFileSuite(
-        'testrunner-arguments.txt',
-        'testrunner-coverage.txt',
-        'testrunner-debugging-layer-setup.test',
-        'testrunner-debugging.txt',
-        'testrunner-edge-cases.txt',
-        'testrunner-errors.txt',
-        'testrunner-layers-ntd.txt',
-        'testrunner-layers.txt',
-        'testrunner-layers-api.txt',
-        'testrunner-progress.txt',
-        'testrunner-colors.txt',
-        'testrunner-simple.txt',
-        'testrunner-test-selection.txt',
-        'testrunner-verbose.txt',
-        'testrunner-wo-source.txt',
-        'testrunner-repeat.txt',
-        'testrunner-gc.txt',
-        'testrunner-knit.txt',
-        setUp=setUp, tearDown=tearDown,
-        optionflags=doctest.ELLIPSIS+doctest.NORMALIZE_WHITESPACE,
-        checker=checker),
-        doctest.DocTestSuite()
-        ]
-
-    if sys.platform == 'win32':
-        suites.append(
-            doctest.DocFileSuite(
-            'testrunner-coverage-win32.txt',
-            setUp=setUp, tearDown=tearDown,
-            optionflags=doctest.ELLIPSIS+doctest.NORMALIZE_WHITESPACE,
-            checker=checker))
-
-    # Python <= 2.4.1 had a bug that prevented hotshot from running in
-    # non-optimize mode
-    if sys.version_info[:3] > (2,4,1) or not __debug__:
-        # some Linux distributions don't include the profiling module (which
-        # hotshot.stats depends on)
-        try:
-            import hotshot.stats
-        except ImportError:
-            pass
-        else:
-            suites.append(
-                doctest.DocFileSuite(
-                    'testrunner-profiling.txt',
-                    setUp=setUp, tearDown=tearDown,
-                    optionflags=doctest.ELLIPSIS+doctest.NORMALIZE_WHITESPACE,
-                    checker = renormalizing.RENormalizing([
-                        (re.compile(r'tests_profile[.]\S*[.]prof'),
-                         'tests_profile.*.prof'),
-                        ]),
-                    )
-                )
-        try:
-            import cProfile
-            import pstats
-        except ImportError:
-            pass
-        else:
-            suites.append(
-                doctest.DocFileSuite(
-                    'testrunner-profiling-cprofiler.txt',
-                    setUp=setUp, tearDown=tearDown,
-                    optionflags=doctest.ELLIPSIS+doctest.NORMALIZE_WHITESPACE,
-                    checker = renormalizing.RENormalizing([
-                        (re.compile(r'tests_profile[.]\S*[.]prof'),
-                         'tests_profile.*.prof'),
-                        ]),
-                    )
-                )
-
-
-    if hasattr(sys, 'gettotalrefcount'):
-        suites.append(
-            doctest.DocFileSuite(
-            'testrunner-leaks.txt',
-            setUp=setUp, tearDown=tearDown,
-            optionflags=doctest.ELLIPSIS+doctest.NORMALIZE_WHITESPACE,
-            checker = renormalizing.RENormalizing([
-              (re.compile(r'\d+[.]\d\d\d seconds'), 'N.NNN seconds'),
-              (re.compile(r'sys refcount=\d+ +change=\d+'),
-               'sys refcount=NNNNNN change=NN'),
-              (re.compile(r'sum detail refcount=\d+ +'),
-               'sum detail refcount=NNNNNN '),
-              (re.compile(r'total +\d+ +\d+'),
-               'total               NNNN    NNNN'),
-              (re.compile(r"^ +(int|type) +-?\d+ +-?\d+ *\n", re.M),
-               ''),
-              ]),
-
-            )
-        )
-    else:
-        suites.append(
-            doctest.DocFileSuite(
-            'testrunner-leaks-err.txt',
-            setUp=setUp, tearDown=tearDown,
-            optionflags=doctest.ELLIPSIS+doctest.NORMALIZE_WHITESPACE,
-            checker=checker,
-            )
-        )
-
-
-    return unittest.TestSuite(suites)
-
-def main():
-    default = [
-        '--path', os.path.split(sys.argv[0])[0],
-        '--tests-pattern', '^testrunner$',
-        ]
-    run(default)
-
-if __name__ == '__main__':
-
-    # if --resume-layer is in the arguments, we are being run from the
-    # test runner's own tests.  We need to adjust the path in hopes of
-    # not getting a different version installed in the system python.
-    if len(sys.argv) > 1 and sys.argv[1] == '--resume-layer':
-        sys.path.insert(0,
-            os.path.split(
-                os.path.split(
-                    os.path.split(
-                        os.path.abspath(sys.argv[0])
-                        )[0]
-                    )[0]
-                )[0]
-            )
-
-    # Hm, when run as a script, we need to import the testrunner under
-    # its own name, so that the imported flavor has the right
-    # real_pdb_set_trace.
-    import zope.testing.testrunner
-    from zope.testing import doctest
-
-    main()
-
-# Test the testrunner
-###############################################################################
-
-# Delay import to give main an opportunity to fix up the path if
-# necessary
-from zope.testing import doctest

Deleted: zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner.txt
===================================================================
--- zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner.txt	2008-05-03 13:24:29 UTC (rev 86206)
+++ zope.testing/branches/ctheune-cleanup/src/zope/testing/testrunner.txt	2008-05-03 13:25:02 UTC (rev 86207)
@@ -1,69 +0,0 @@
-Test Runner
-===========
-
-The testrunner module is used to run automated tests defined using the
-unittest framework.  Its primary feature is that it *finds* tests by
-searching directory trees.  It doesn't require the manual
-concatenation of specific test suites.  It is highly customizable and
-should be usable with any project.  In addition to finding and running
-tests, it provides the following additional features:
-
-- Test filtering using specifications of:
-
-  o test packages within a larger tree
-
-  o regular expression patterns for test modules
-
-  o regular expression patterns for individual tests
-
-- Organization of tests into levels and layers
-
-  Sometimes, tests take so long to run that you don't want to run them
-  on every run of the test runner.  Tests can be defined at different
-  levels.  The test runner can be configured to only run tests at a
-  specific level or below by default.  Command-line options can be
-  used to specify a minimum level to use for a specific run, or to run
-  all tests.  Individual tests or test suites can specify their level
-  via a 'level' attribute, where levels are integers increasing from 1.
-
-  Most tests are unit tests.  They don't depend on other facilities, or
-  set up whatever dependencies they have.  For larger applications,
-  it's useful to specify common facilities that a large number of
-  tests share.  Making each test set up and tear down these
-  facilities is both inefficient and inconvenient.  For this reason,
-  we've introduced the concept of layers, based on the idea of layered
-  application architectures.  Software built for a layer should be
-  able to depend on the facilities of lower layers already being set
-  up.  For example, Zope defines a component architecture.  Much Zope
-  software depends on that architecture.  We should be able to treat
-  the component architecture as a layer that we set up once and reuse.
-  Similarly, Zope application software should be able to depend on the
-  Zope application server without having to set it up in each test.
-
-  The test runner introduces test layers, which are objects that can
-  set up environments for tests within the layers to use.  A layer is
-  set up before running the tests in it.  Individual tests or test
-  suites can define a layer by defining a `layer` attribute, which is
-  a test layer (a short sketch of such a layer follows this list).
-
-- Reporting
-
-  - progress meter
-
-  - summaries of tests run
-
-- Analysis of test execution
-
-  - post-mortem debugging of test failures
-
-  - memory leaks
-
-  - code coverage
-
-  - source analysis using pychecker
-
-  - memory errors
-
-  - execution times
-
-  - profiling
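
A minimal sketch of the levels-and-layers machinery described above (the layer class and the names used here are hypothetical, not part of the test runner):

    import unittest

    class DatabaseLayer(object):
        # shared fixture, set up once for all tests that declare this layer
        @classmethod
        def setUp(cls):
            cls.connection = object()   # e.g. open an expensive resource once
        @classmethod
        def tearDown(cls):
            del cls.connection

    class QueryTests(unittest.TestCase):
        layer = DatabaseLayer   # run inside the layer's environment
        level = 2               # only run with --at-level 2 or higher, or --all

        def test_query(self):
            self.failUnless(DatabaseLayer.connection is not None)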



More information about the Zope3-Checkins mailing list