[Zope3-checkins] SVN: zope.testing/trunk/src/zope/testing/testrunner Added support for identifying memory leaks. (See

Jim Fulton jim at zope.com
Fri Oct 28 10:44:39 EDT 2005


Log message for revision 39690:
  Added support for identifying memory leaks. (See
  testrunner-leaks.txt.)
  
  Added support for diff output of doctest failures.  (See
  testrunner-errors.txt.)
  

Changed:
  U   zope.testing/trunk/src/zope/testing/testrunner-edge-cases.txt
  U   zope.testing/trunk/src/zope/testing/testrunner-errors.txt
  A   zope.testing/trunk/src/zope/testing/testrunner-ex/leak.py
  A   zope.testing/trunk/src/zope/testing/testrunner-ex/pledge.py
  A   zope.testing/trunk/src/zope/testing/testrunner-leaks-err.txt
  A   zope.testing/trunk/src/zope/testing/testrunner-leaks.txt
  U   zope.testing/trunk/src/zope/testing/testrunner-repeat.txt
  U   zope.testing/trunk/src/zope/testing/testrunner.py
  U   zope.testing/trunk/src/zope/testing/testrunner.txt

-=-
Modified: zope.testing/trunk/src/zope/testing/testrunner-edge-cases.txt
===================================================================
--- zope.testing/trunk/src/zope/testing/testrunner-edge-cases.txt	2005-10-28 13:15:07 UTC (rev 39689)
+++ zope.testing/trunk/src/zope/testing/testrunner-edge-cases.txt	2005-10-28 14:44:39 UTC (rev 39690)
@@ -453,3 +453,25 @@
     Test-modules with import problems:
       sample1.sampletests_none_test
     True
+
+You must use --repeat with --report-refcounts
+---------------------------------------------
+
+It is an error to specify --report-refcounts (-r) without specifying a
+repeat count greater than 1:
+
+    >>> sys.argv = 'test -r'.split() 
+    >>> testrunner.run(defaults)
+            You must use the --repeat (-N) option to specify a repeat
+            count greater than 1 when using the --report_refcounts (-r)
+            option.
+    <BLANKLINE>
+    True
+
+    >>> sys.argv = 'test -r -N1'.split() 
+    >>> testrunner.run(defaults)
+            You must use the --repeat (-N) option to specify a repeat
+            count greater than 1 when using the --report_refcounts (-r)
+            option.
+    <BLANKLINE>
+    True
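The consistency check exercised above can be sketched, outside the
testrunner, as a small validator.  This is a hypothetical illustration
(the function name and signature are invented); the real testrunner
prints the message and returns True rather than returning a string:

```python
def check_refcount_options(report_refcounts, repeat):
    """Return an error message if -r is used without -N > 1, else None.

    Hypothetical sketch of the option check described above; names and
    return convention are illustrative, not the testrunner's actual API.
    """
    if report_refcounts and repeat < 2:
        return ("You must use the --repeat (-N) option to specify a repeat "
                "count greater than 1 when using the --report_refcounts (-r) "
                "option.")
    return None
```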

Modified: zope.testing/trunk/src/zope/testing/testrunner-errors.txt
===================================================================
--- zope.testing/trunk/src/zope/testing/testrunner-errors.txt	2005-10-28 13:15:07 UTC (rev 39689)
+++ zope.testing/trunk/src/zope/testing/testrunner-errors.txt	2005-10-28 14:44:39 UTC (rev 39690)
@@ -514,13 +514,13 @@
       Ran 1 tests with 1 failures and 0 errors in 0.002 seconds.
     True
 
-This can be a bid confusing, especially when there are enough tests
+This can be a bit confusing, especially when there are enough tests
 that they scroll off a screen.  Often you just want to see the first
 failure.  This can be accomplished with the -1 option (for "just show
 me the first failed example in a doctest" :)
 
     >>> sys.argv = 'test --tests-pattern ^sampletests_1$ -1'.split()
-    >>> testrunner.run(defaults) # doctest: +NORMALIZE_WHITESPACE
+    >>> testrunner.run(defaults) # doctest:
     Running unit tests:
     <BLANKLINE>
     <BLANKLINE>
@@ -544,7 +544,127 @@
       Ran 1 tests with 1 failures and 0 errors in 0.001 seconds.
     True
 
+Getting diff output for doctest failures
+----------------------------------------
 
+If a doctest has large expected and actual outputs, it can be hard to
+see how they differ.  The --ndiff, --udiff, and --cdiff options can be
+used to get diff output of various kinds.
+
+    >>> sys.argv = 'test --tests-pattern ^pledge$'.split()
+    >>> _ = testrunner.run(defaults)
+    Running unit tests:
+    <BLANKLINE>
+    <BLANKLINE>
+    Failure in test pledge (pledge)
+    Failed doctest test for pledge.pledge
+      File "testrunner-ex/pledge.py", line 24, in pledge
+    <BLANKLINE>
+    ----------------------------------------------------------------------
+    File "testrunner-ex/pledge.py", line 26, in pledge.pledge
+    Failed example:
+        print pledge_template % ('and earthling', 'planet'),
+    Expected:
+        I give my pledge, as an earthling,
+        to save, and faithfully, to defend from waste,
+        the natural resources of my planet.
+        It's soils, minerals, forests, waters, and wildlife.
+    Got:
+        I give my pledge, as and earthling,
+        to save, and faithfully, to defend from waste,
+        the natural resources of my planet.
+        It's soils, minerals, forests, waters, and wildlife.
+    <BLANKLINE>
+      Ran 1 tests with 1 failures and 0 errors in 0.002 seconds.
+
+Here, the actual output uses the word "and" rather than the word "an",
+but it's a bit hard to pick this out.  We can use the various diff
+outputs to see this better. We could modify the test to ask for diff
+output, but it's easier to use one of the diff options.
+
+The --ndiff option requests a diff using Python's ndiff utility. This
+is the only method that marks differences within lines as well as
+across lines. For example, if a line of expected output contains digit
+1 where actual output contains letter l, a line is inserted with a
+caret marking the mismatching column positions.
+
+    >>> sys.argv = 'test --tests-pattern ^pledge$ --ndiff'.split()
+    >>> _ = testrunner.run(defaults)
+    Running unit tests:
+    <BLANKLINE>
+    <BLANKLINE>
+    Failure in test pledge (pledge)
+    Failed doctest test for pledge.pledge
+      File "testrunner-ex/pledge.py", line 24, in pledge
+    <BLANKLINE>
+    ----------------------------------------------------------------------
+    File "testrunner-ex/pledge.py", line 26, in pledge.pledge
+    Failed example:
+        print pledge_template % ('and earthling', 'planet'),
+    Differences (ndiff with -expected +actual):
+        - I give my pledge, as an earthling,
+        + I give my pledge, as and earthling,
+        ?                        +
+          to save, and faithfully, to defend from waste,
+          the natural resources of my planet.
+          It's soils, minerals, forests, waters, and wildlife.
+    <BLANKLINE>
+      Ran 1 tests with 1 failures and 0 errors in 0.003 seconds.
+
+The --udiff option requests a standard "unified" diff:
+
+    >>> sys.argv = 'test --tests-pattern ^pledge$ --udiff'.split()
+    >>> _ = testrunner.run(defaults)
+    Running unit tests:
+    <BLANKLINE>
+    <BLANKLINE>
+    Failure in test pledge (pledge)
+    Failed doctest test for pledge.pledge
+      File "testrunner-ex/pledge.py", line 24, in pledge
+    <BLANKLINE>
+    ----------------------------------------------------------------------
+    File "testrunner-ex/pledge.py", line 26, in pledge.pledge
+    Failed example:
+        print pledge_template % ('and earthling', 'planet'),
+    Differences (unified diff with -expected +actual):
+        @@ -1,3 +1,3 @@
+        -I give my pledge, as an earthling,
+        +I give my pledge, as and earthling,
+         to save, and faithfully, to defend from waste,
+         the natural resources of my planet.
+    <BLANKLINE>
+      Ran 1 tests with 1 failures and 0 errors in 0.002 seconds.
+
+The --cdiff option requests a standard "context" diff:
+
+    >>> sys.argv = 'test --tests-pattern ^pledge$ --cdiff'.split()
+    >>> _ = testrunner.run(defaults)
+    Running unit tests:
+    <BLANKLINE>
+    <BLANKLINE>
+    Failure in test pledge (pledge)
+    Failed doctest test for pledge.pledge
+      File "testrunner-ex/pledge.py", line 24, in pledge
+    <BLANKLINE>
+    ----------------------------------------------------------------------
+    File "testrunner-ex/pledge.py", line 26, in pledge.pledge
+    Failed example:
+        print pledge_template % ('and earthling', 'planet'),
+    Differences (context diff with expected followed by actual):
+        ***************
+        *** 1,3 ****
+        ! I give my pledge, as an earthling,
+          to save, and faithfully, to defend from waste,
+          the natural resources of my planet.
+        --- 1,3 ----
+        ! I give my pledge, as and earthling,
+          to save, and faithfully, to defend from waste,
+          the natural resources of my planet.
+    <BLANKLINE>
+      Ran 1 tests with 1 failures and 0 errors in 0.002 seconds.
+
+
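The three diff styles shown above come straight from Python's difflib
module, which doctest uses to render these reports.  A minimal
standalone sketch of the same expected-vs-actual comparison (plain
difflib, no doctest involved):

```python
import difflib

expected = "I give my pledge, as an earthling,\n"
actual = "I give my pledge, as and earthling,\n"

# ndiff marks intraline differences with '?' guide lines (the caret/plus
# markers under the changed columns).
ndiff_lines = list(difflib.ndiff([expected], [actual]))

# unified_diff and context_diff report whole-line changes only.
udiff_lines = list(difflib.unified_diff([expected], [actual],
                                        fromfile='expected',
                                        tofile='actual'))
cdiff_lines = list(difflib.context_diff([expected], [actual],
                                        fromfile='expected',
                                        tofile='actual'))

print(''.join(ndiff_lines), end='')
```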
 Testing-Module Import Errors
 ----------------------------
 

Added: zope.testing/trunk/src/zope/testing/testrunner-ex/leak.py
===================================================================
--- zope.testing/trunk/src/zope/testing/testrunner-ex/leak.py	2005-10-28 13:15:07 UTC (rev 39689)
+++ zope.testing/trunk/src/zope/testing/testrunner-ex/leak.py	2005-10-28 14:44:39 UTC (rev 39690)
@@ -0,0 +1,39 @@
+##############################################################################
+#
+# Copyright (c) 2003 Zope Corporation and Contributors.
+# All Rights Reserved.
+#
+# This software is subject to the provisions of the Zope Public License,
+# Version 2.0 (ZPL).  A copy of the ZPL should accompany this distribution.
+# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
+# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
+# FOR A PARTICULAR PURPOSE.
+#
+##############################################################################
+
+import unittest, sys, time
+
+class ClassicLeakable:
+    def __init__(self):
+        self.x = 'x'
+
+class Leakable(object):
+    def __init__(self):
+        self.x = 'x'
+
+leaked = []
+
+class TestSomething(unittest.TestCase):
+
+    def testleak(self):
+        leaked.append((ClassicLeakable(), Leakable(), time.time()))
+
+def test_suite():
+    suite = unittest.TestSuite()
+    suite.addTest(unittest.makeSuite(TestSomething))
+    return suite
+
+
+if __name__ == '__main__':
+    unittest.main()


Property changes on: zope.testing/trunk/src/zope/testing/testrunner-ex/leak.py
___________________________________________________________________
Name: svn:keywords
   + Id
Name: svn:eol-style
   + native

Added: zope.testing/trunk/src/zope/testing/testrunner-ex/pledge.py
===================================================================
--- zope.testing/trunk/src/zope/testing/testrunner-ex/pledge.py	2005-10-28 13:15:07 UTC (rev 39689)
+++ zope.testing/trunk/src/zope/testing/testrunner-ex/pledge.py	2005-10-28 14:44:39 UTC (rev 39690)
@@ -0,0 +1,35 @@
+##############################################################################
+#
+# Copyright (c) 2004 Zope Corporation and Contributors.
+# All Rights Reserved.
+#
+# This software is subject to the provisions of the Zope Public License,
+# Version 2.0 (ZPL).  A copy of the ZPL should accompany this distribution.
+# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
+# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
+# FOR A PARTICULAR PURPOSE.
+#
+##############################################################################
+import unittest
+from zope.testing import doctest
+
+pledge_template = """\
+I give my pledge, as %s,
+to save, and faithfully, to defend from waste,
+the natural resources of my %s.
+It's soils, minerals, forests, waters, and wildlife.
+"""
+
+def pledge():
+    """
+    >>> print pledge_template % ('and earthling', 'planet'),
+    I give my pledge, as an earthling,
+    to save, and faithfully, to defend from waste,
+    the natural resources of my planet.
+    It's soils, minerals, forests, waters, and wildlife.
+    """
+
+def test_suite():
+    return doctest.DocTestSuite()
+


Property changes on: zope.testing/trunk/src/zope/testing/testrunner-ex/pledge.py
___________________________________________________________________
Name: svn:keywords
   + Id
Name: svn:eol-style
   + native

Added: zope.testing/trunk/src/zope/testing/testrunner-leaks-err.txt
===================================================================
--- zope.testing/trunk/src/zope/testing/testrunner-leaks-err.txt	2005-10-28 13:15:07 UTC (rev 39689)
+++ zope.testing/trunk/src/zope/testing/testrunner-leaks-err.txt	2005-10-28 14:44:39 UTC (rev 39690)
@@ -0,0 +1,25 @@
+Test Runner
+===========
+
+Debugging Memory Leaks without a debug build of Python 
+------------------------------------------------------
+
+To use the --report-refcounts (-r) option to detect or debug memory
+leaks, you must have a debug build of Python. Without a debug build,
+you will get an error message:
+
+    >>> import os.path, sys
+    >>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
+    >>> defaults = [
+    ...     '--path', directory_with_tests,
+    ...     '--tests-pattern', '^sampletestsf?$',
+    ...     ]
+
+    >>> from zope.testing import testrunner
+    
+    >>> sys.argv = 'test -r -N6'.split()
+    >>> _ = testrunner.run(defaults)
+            The Python you are running was not configured
+            with --with-pydebug. This is required to use
+            the --report-refcounts option.
+    <BLANKLINE>


Property changes on: zope.testing/trunk/src/zope/testing/testrunner-leaks-err.txt
___________________________________________________________________
Name: svn:eol-style
   + native

Added: zope.testing/trunk/src/zope/testing/testrunner-leaks.txt
===================================================================
--- zope.testing/trunk/src/zope/testing/testrunner-leaks.txt	2005-10-28 13:15:07 UTC (rev 39689)
+++ zope.testing/trunk/src/zope/testing/testrunner-leaks.txt	2005-10-28 14:44:39 UTC (rev 39690)
@@ -0,0 +1,221 @@
+Test Runner
+===========
+
+Debugging Memory Leaks
+----------------------
+
+The --report-refcounts (-r) option can be used with the --repeat (-N)
+option to detect and diagnose memory leaks.  To use this option, you
+must configure Python with the --with-pydebug option. (On Unix, pass
+this option to configure and then build Python.)
+
+    >>> import os.path, sys
+    >>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
+    >>> defaults = [
+    ...     '--path', directory_with_tests,
+    ...     '--tests-pattern', '^sampletestsf?$',
+    ...     ]
+
+    >>> from zope.testing import testrunner
+    
+    >>> sys.argv = 'test --layer Layer11$ --layer Layer12$ -N4 -r'.split()
+    >>> _ = testrunner.run(defaults)
+    Running samplelayers.Layer11 tests:
+      Set up samplelayers.Layer1 in 0.000 seconds.
+      Set up samplelayers.Layer11 in 0.000 seconds.
+    Iteration 1
+      Ran 34 tests with 0 failures and 0 errors in 0.013 seconds.
+    Iteration 2
+      Ran 34 tests with 0 failures and 0 errors in 0.012 seconds.
+      sys refcount=100401   change=0     
+    Iteration 3
+      Ran 34 tests with 0 failures and 0 errors in 0.012 seconds.
+      sys refcount=100401   change=0     
+    Iteration 4
+      Ran 34 tests with 0 failures and 0 errors in 0.013 seconds.
+      sys refcount=100401   change=0     
+    Running samplelayers.Layer12 tests:
+      Tear down samplelayers.Layer11 in 0.000 seconds.
+      Set up samplelayers.Layer12 in 0.000 seconds.
+    Iteration 1
+      Ran 34 tests with 0 failures and 0 errors in 0.013 seconds.
+    Iteration 2
+      Ran 34 tests with 0 failures and 0 errors in 0.012 seconds.
+      sys refcount=100411   change=0     
+    Iteration 3
+      Ran 34 tests with 0 failures and 0 errors in 0.012 seconds.
+      sys refcount=100411   change=0     
+    Iteration 4
+      Ran 34 tests with 0 failures and 0 errors in 0.012 seconds.
+      sys refcount=100411   change=0     
+    Tearing down left over layers:
+      Tear down samplelayers.Layer12 in 0.000 seconds.
+      Tear down samplelayers.Layer1 in 0.000 seconds.
+    Total: 68 tests, 0 failures, 0 errors
+
+Each layer is repeated the requested number of times.  For each
+iteration after the first, the system refcount and the change in system
+refcount are shown. The system refcount is the total of all refcounts in
+the system.  When the refcount on any object changes, the system
+refcount changes by the same amount.  Tests that don't leak show
+zero change in the system refcount.
+
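On a pydebug build the total described above can be queried directly; a
minimal sketch, guarded so it also runs on a normal build (where, as the
runner's error message notes, the attribute simply doesn't exist):

```python
import sys

# sys.gettotalrefcount exists only in a --with-pydebug build; a normal
# build has no such attribute, which is exactly what the runner checks.
get_total = getattr(sys, 'gettotalrefcount', None)

if get_total is None:
    print('not a pydebug build; --report-refcounts would be refused')
else:
    before = get_total()
    junk = [object() for _ in range(1000)]   # allocate some objects
    print('change while alive =', get_total() - before)
```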
+Let's look at an example test that leaks:
+
+    >>> sys.argv = 'test --tests-pattern leak -N4 -r'.split()
+    >>> _ = testrunner.run(defaults)
+    Running unit tests:
+    Iteration 1
+      Ran 1 tests with 0 failures and 0 errors in 0.000 seconds.
+    Iteration 2
+      Ran 1 tests with 0 failures and 0 errors in 0.000 seconds.
+      sys refcount=92506    change=12
+    Iteration 3
+      Ran 1 tests with 0 failures and 0 errors in 0.000 seconds.
+      sys refcount=92513    change=12
+    Iteration 4
+      Ran 1 tests with 0 failures and 0 errors in 0.000 seconds.
+      sys refcount=92520    change=12
+
+Here we see that the system refcount is increasing.  If we specify a
+verbosity greater than one, we can get details broken out by object
+type (or class):
+
+    >>> sys.argv = 'test --tests-pattern leak -N5 -r -v'.split()
+    >>> _ = testrunner.run(defaults)
+    Running tests at level 1
+    Running unit tests:
+    Iteration 1
+      Running:
+        .
+      Ran 1 tests with 0 failures and 0 errors in 0.000 seconds.
+    Iteration 2
+      Running:
+        .
+      Ran 1 tests with 0 failures and 0 errors in 0.000 seconds.
+      sum detail refcount=95832    sys refcount=105668   change=16    
+        Leak details, changes in instances and refcounts by type/class:
+        type/class                                               insts   refs
+        -------------------------------------------------------  -----   ----
+        classobj                                                     0      1
+        dict                                                         2      2
+        float                                                        1      1
+        int                                                          2      2
+        leak.ClassicLeakable                                         1      1
+        leak.Leakable                                                1      1
+        str                                                          0      4
+        tuple                                                        1      1
+        type                                                         0      3
+        -------------------------------------------------------  -----   ----
+        total                                                        8     16
+    Iteration 3
+      Running:
+        .
+      Ran 1 tests with 0 failures and 0 errors in 0.000 seconds.
+      sum detail refcount=95844    sys refcount=105680   change=12    
+        Leak details, changes in instances and refcounts by type/class:
+        type/class                                               insts   refs
+        -------------------------------------------------------  -----   ----
+        classobj                                                     0      1
+        dict                                                         2      2
+        float                                                        1      1
+        int                                                         -1      0
+        leak.ClassicLeakable                                         1      1
+        leak.Leakable                                                1      1
+        str                                                          0      4
+        tuple                                                        1      1
+        type                                                         0      1
+        -------------------------------------------------------  -----   ----
+        total                                                        5     12
+    Iteration 4
+      Running:
+        .
+      Ran 1 tests with 0 failures and 0 errors in 0.000 seconds.
+      sum detail refcount=95856    sys refcount=105692   change=12    
+        Leak details, changes in instances and refcounts by type/class:
+        type/class                                               insts   refs
+        -------------------------------------------------------  -----   ----
+        classobj                                                     0      1
+        dict                                                         2      2
+        float                                                        1      1
+        leak.ClassicLeakable                                         1      1
+        leak.Leakable                                                1      1
+        str                                                          0      4
+        tuple                                                        1      1
+        type                                                         0      1
+        -------------------------------------------------------  -----   ----
+        total                                                        6     12
+    Iteration 5
+      Running:
+        .
+      Ran 1 tests with 0 failures and 0 errors in 0.000 seconds.
+      sum detail refcount=95868    sys refcount=105704   change=12    
+        Leak details, changes in instances and refcounts by type/class:
+        type/class                                               insts   refs
+        -------------------------------------------------------  -----   ----
+        classobj                                                     0      1
+        dict                                                         2      2
+        float                                                        1      1
+        leak.ClassicLeakable                                         1      1
+        leak.Leakable                                                1      1
+        str                                                          0      4
+        tuple                                                        1      1
+        type                                                         0      1
+        -------------------------------------------------------  -----   ----
+        total                                                        6     12
+
+It is instructive to analyze the results in some detail.  The test
+being run was designed to intentionally leak:
+
+    class ClassicLeakable:
+        def __init__(self):
+            self.x = 'x'
+
+    class Leakable(object):
+        def __init__(self):
+            self.x = 'x'
+
+    leaked = []
+
+    class TestSomething(unittest.TestCase):
+
+        def testleak(self):
+            leaked.append((ClassicLeakable(), Leakable(), time.time()))
+
+Let's go through this by type.
+
+float, leak.ClassicLeakable, leak.Leakable, and tuple
+    We leak one of these every time.  This is to be expected because
+    we are adding one of these to the list every time.
+
+str
+    We don't leak any instances, but we leak 4 references. These are
+    due to the instance attributes and values.
+
+dict
+    We leak 2 of these, one for each ClassicLeakable and Leakable
+    instance. 
+
+classobj
+    We increase the number of classobj references by one each time
+    because each ClassicLeakable instance holds a reference to its
+    class.  Each new instance increases the refcount of its class,
+    which increases the total number of references to classic classes
+    (classobj instances).
+
+type
+    For most iterations, we increase the number of type references by
+    one for the same reason we increase the number of classobj
+    references by one.  The increase of 3 in the number of type
+    references in the second iteration is puzzling, but illustrates
+    that this sort of data is often puzzling.
+
+int
+    The change in the number of int instances and references in this
+    example is a side effect of the statistics being gathered.  Lots
+    of integers are created in the course of gathering the memory
+    statistics reported here.
+
+The summary statistics include the sum of the detail refcounts.  (Note
+that this sum is less than the system refcount.  This is because the
+detailed analysis doesn't inspect every object. Not all objects in the
+system are returned by sys.getobjects.)


Property changes on: zope.testing/trunk/src/zope/testing/testrunner-leaks.txt
___________________________________________________________________
Name: svn:eol-style
   + native

Modified: zope.testing/trunk/src/zope/testing/testrunner-repeat.txt
===================================================================
--- zope.testing/trunk/src/zope/testing/testrunner-repeat.txt	2005-10-28 13:15:07 UTC (rev 39689)
+++ zope.testing/trunk/src/zope/testing/testrunner-repeat.txt	2005-10-28 14:44:39 UTC (rev 39690)
@@ -41,7 +41,7 @@
       Tear down samplelayers.Layerx in 0.000 seconds.
       Tear down samplelayers.Layer11 in 0.000 seconds.
       Tear down samplelayers.Layer1 in 0.000 seconds.
-    Total: 678 tests, 0 failures, 0 errors
+    Total: 226 tests, 0 failures, 0 errors
     False
 
 The tests are repeated by layer.  Layers are set up and torn down only

Modified: zope.testing/trunk/src/zope/testing/testrunner.py
===================================================================
--- zope.testing/trunk/src/zope/testing/testrunner.py	2005-10-28 13:15:07 UTC (rev 39689)
+++ zope.testing/trunk/src/zope/testing/testrunner.py	2005-10-28 14:44:39 UTC (rev 39690)
@@ -32,6 +32,7 @@
 import time
 import trace
 import traceback
+import types
 import unittest
 
 # some Linux distributions don't include the profiler, which hotshot uses
@@ -147,6 +148,9 @@
         resume_layer = None
 
     options = get_options(args, defaults)
+    if options.fail:
+        return True
+    
     options.testrunner_defaults = defaults
     options.resume_layer = resume_layer
 
@@ -241,7 +245,7 @@
     old_threshold = gc.get_threshold()
     if options.gc:
         if len(options.gc) > 3:
-            print >> sys.stderr, "Too many --gc options"
+            print "Too many --gc options"
             sys.exit(1)
         if options.gc[0]:
             print ("Cyclic garbage collection threshold set to: %s" %
@@ -258,6 +262,29 @@
             new_flags |= getattr(gc, op)
         gc.set_debug(new_flags)
 
+    old_reporting_flags = doctest.set_unittest_reportflags(0)
+    reporting_flags = 0
+    if options.ndiff:
+        reporting_flags = doctest.REPORT_NDIFF
+    if options.udiff:
+        if reporting_flags:
+            print "Can only give one of --ndiff, --udiff, or --cdiff"
+            sys.exit(1)
+        reporting_flags = doctest.REPORT_UDIFF
+    if options.cdiff:
+        if reporting_flags:
+            print "Can only give one of --ndiff, --udiff, or --cdiff"
+            sys.exit(1)
+        reporting_flags = doctest.REPORT_CDIFF
+    if options.report_only_first_failure:
+        reporting_flags |= doctest.REPORT_ONLY_FIRST_FAILURE
+        
+    if reporting_flags:
+        doctest.set_unittest_reportflags(reporting_flags)
+    else:
+        doctest.set_unittest_reportflags(old_reporting_flags)
+
+
     # Add directories to the path
     for path in options.path:
         if path not in sys.path:
@@ -365,6 +392,8 @@
             for test in import_errors:
                 print "  " + test.module
 
+    doctest.set_unittest_reportflags(old_reporting_flags)
+
     if options.gc_option:
         gc.set_debug(old_flags)
         
@@ -375,8 +404,19 @@
 
 def run_tests(options, tests, name, failures, errors):
     repeat = options.repeat or 1
+    repeat_range = iter(range(repeat))
     ran = 0
-    for i in range(repeat):
+
+    gc.collect()
+    lgarbage = len(gc.garbage)
+
+    sumrc = 0
+    if options.report_refcounts:
+        if options.verbose:
+            track = TrackRefs()
+        rc = sys.gettotalrefcount()
+        
+    for i in repeat_range:
         if repeat > 1:
             print "Iteration", i+1
 
@@ -385,9 +425,9 @@
         if options.verbose == 1 and not options.progress:
             print '    ',
         result = TestResult(options, tests)
+        
         t = time.time()
 
-
         if options.post_mortem:
             # post-mortem debugging
             for test in tests:
@@ -418,8 +458,41 @@
             "  Ran %s tests with %s failures and %s errors in %.3f seconds." %
             (result.testsRun, len(result.failures), len(result.errors), t)
             )
-        ran += result.testsRun
+        ran = result.testsRun
 
+        gc.collect()
+        if len(gc.garbage) > lgarbage:
+            print ("Tests generated new (%d) garbage:"
+                   % (len(gc.garbage)-lgarbage))
+            print gc.garbage[lgarbage:]
+            lgarbage = len(gc.garbage)
+
+        if options.report_refcounts:
+
+            # If we are being tested, we don't want stdout itself to
+            # foul up the numbers. :)
+            try:
+                sys.stdout.getvalue()
+            except AttributeError:
+                pass
+            
+            prev = rc
+            rc = sys.gettotalrefcount()
+            if options.verbose:
+                track.update()
+                if i:
+                    print (" "
+                           " sum detail refcount=%-8d"
+                           " sys refcount=%-8d"
+                           " change=%-6d"
+                           % (track.n, rc, rc - prev))
+                    if options.verbose:
+                        track.output()
+                else:
+                    track.delta = None
+            elif i:
+                print "  sys refcount=%-8d change=%-6d" % (rc, rc - prev)
+
     return ran
 
 def run_layer(options, layer_name, layer, tests, setup_layers,
@@ -983,7 +1056,93 @@
         pass
 
 
+class TrackRefs(object):
+    """Object to track reference counts across test runs."""
 
+    def __init__(self):
+        self.type2count = {}
+        self.type2all = {}
+        self.delta = None
+        self.n = 0
+        self.update()
+        self.delta = None
+
+    def update(self):
+        gc.collect()
+        obs = sys.getobjects(0)
+        type2count = {}
+        type2all = {}
+        n = 0
+        for o in obs:
+            if type(o) is str and o == '<dummy key>':
+                # avoid dictionary madness
+                continue
+
+            all = sys.getrefcount(o) - 3
+            n += all
+
+            t = type(o)
+            if t is types.InstanceType:
+                t = o.__class__
+
+            if t in type2count:
+                type2count[t] += 1
+                type2all[t] += all
+            else:
+                type2count[t] = 1
+                type2all[t] = all
+
+
+        ct = [(
+               type_or_class_title(t),
+               type2count[t] - self.type2count.get(t, 0),
+               type2all[t] - self.type2all.get(t, 0),
+               )
+              for t in type2count.iterkeys()]
+        ct += [(
+                type_or_class_title(t),
+                - self.type2count[t],
+                - self.type2all[t],
+                )
+               for t in self.type2count.iterkeys()
+               if t not in type2count]
+        ct.sort()
+        self.delta = ct
+        self.type2count = type2count
+        self.type2all = type2all
+        self.n = n
+
+
+    def output(self):
+        printed = False
+        s1 = s2 = 0
+        for t, delta1, delta2 in self.delta:
+            if delta1 or delta2:
+                if not printed:
+                    print (
+                        '    Leak details, changes in instances and refcounts'
+                        ' by type/class:')
+                    print "    %-55s %6s %6s" % ('type/class', 'insts', 'refs')
+                    print "    %-55s %6s %6s" % ('-' * 55, '-----', '----')
+                    printed = True
+                print "    %-55s %6d %6d" % (t, delta1, delta2)
+                s1 += delta1
+                s2 += delta2
+
+        if printed:
+            print "    %-55s %6s %6s" % ('-' * 55, '-----', '----')
+            print "    %-55s %6s %6s" % ('total', s1, s2)
+
+
+        self.delta = None
+
+def type_or_class_title(t):
+    module = getattr(t, '__module__', '__builtin__')
+    if module == '__builtin__':
+        return t.__name__
+    return "%s.%s" % (module, t.__name__)
+
+
 ###############################################################################
 # Command-line UI
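The hunk above snapshots the object population and reports per-type deltas in instance counts and refcounts between test repetitions. A minimal modern-Python sketch of the same counting technique follows; the function names are illustrative, not the committed code, and the `-3` adjustment (as in the original) is an assumption that discounts references created by the measurement itself (the loop variable, the `getrefcount` argument, and the containing list):

```python
import sys

def count_by_type(objects):
    """Tally instance counts and summed refcounts per type.

    Mirrors the technique in the hunk above: sys.getrefcount() is
    reduced by a constant to discount references introduced by the
    measurement machinery itself.
    """
    type2count = {}
    type2all = {}
    for obj in objects:
        refs = sys.getrefcount(obj) - 3
        t = type(obj)
        type2count[t] = type2count.get(t, 0) + 1
        type2all[t] = type2all.get(t, 0) + refs
    return type2count, type2all

def delta(old_counts, new_counts):
    """Per-type change between two snapshots.

    Types that vanished since the old snapshot contribute a
    negative delta, matching the second comprehension above.
    """
    keys = set(old_counts) | set(new_counts)
    return {k: new_counts.get(k, 0) - old_counts.get(k, 0) for k in keys}
```

A stable (non-shrinking) positive delta for some type across repetitions is the leak signal the report is designed to surface.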
 
@@ -1109,17 +1268,31 @@
 Output progress status
 """)
 
-def report_only_first_failure(*args):
-    old = doctest.set_unittest_reportflags(0)
-    doctest.set_unittest_reportflags(old | doctest.REPORT_ONLY_FIRST_FAILURE)
-
 reporting.add_option(
-    '-1', action="callback", callback=report_only_first_failure,
+    '-1', action="store_true", dest='report_only_first_failure',
     help="""\
 Report only the first failure in a doctest. (Examples after the
 failure are still executed, in case they do any cleanup.)
 """)
 
+reporting.add_option(
+    '--ndiff', action="store_true", dest="ndiff",
+    help="""\
+When there is a doctest failure, show it as a diff using the ndiff.py utility.
+""")
+
+reporting.add_option(
+    '--udiff', action="store_true", dest="udiff",
+    help="""\
+When there is a doctest failure, show it as a unified diff.
+""")
+
+reporting.add_option(
+    '--cdiff', action="store_true", dest="cdiff",
+    help="""\
+When there is a doctest failure, show it as a context diff.
+""")
+
 parser.add_option_group(reporting)
 
 ######################################################################
@@ -1160,24 +1333,15 @@
 """)
 
 analysis.add_option(
-    '--repeat', action="store", type="int", dest='repeat',
+    '--repeat', '-N', action="store", type="int", dest='repeat',
     help="""\
 Repeat the tests the given number of times.  This option is used to
 make sure that tests leave their environment in the state they found
-it and, with the --refcount option to look for memory leaks.
+it and, with the --report-refcounts option, to look for memory leaks.
 """)
 
-def refcount_available(*args):
-    if not hasattr(sys, "gettotalrefcount"):
-        raise optparse.OptionValueError("""\
-The Python you are running was not configured with --with-pydebug.
-This is required to use the --refount option.
-""")
-
 analysis.add_option(
-    '--refcount',
-    action="callback", callback=refcount_available,
-    dest='refcount',
+    '--report-refcounts', '-r', action="store_true", dest='report_refcounts',
     help="""\
 After each run of the tests, output a report summarizing changes in
 refcounts by object type.  This option requires that Python was
@@ -1356,6 +1520,8 @@
     merge_options(options, defaults)
     options.original_testrunner_args = original_testrunner_args
 
+    options.fail = False
+
     if positional:
         module_filter = positional.pop()
         if module_filter != '.':
@@ -1400,6 +1566,25 @@
     if options.quiet:
         options.verbose = 0
 
+    if options.report_refcounts and options.repeat < 2:
+        print """\
+        You must use the --repeat (-N) option to specify a repeat
+        count greater than 1 when using the --report_refcounts (-r)
+        option.
+        """
+        options.fail = True
+        return options
+
+
+    if options.report_refcounts and not hasattr(sys, "gettotalrefcount"):
+        print """\
+        The Python you are running was not configured
+        with --with-pydebug. This is required to use
+        the --report-refcounts option.
+        """
+        options.fail = True
+        return options
+
     return options
 
 # Command-line UI
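The validation added above enforces two preconditions: refcount reporting is only meaningful when the tests run more than once, and it needs a `--with-pydebug` build of Python, which is what provides `sys.gettotalrefcount()`. A small sketch of the same guard logic; the function name and message wording here are illustrative:

```python
import sys

def validate_refcount_options(report_refcounts, repeat):
    """Return an error message for an invalid option combination, or
    None if the options are acceptable.  Mirrors the checks added in
    the hunk above."""
    if not report_refcounts:
        return None
    if repeat < 2:
        return ('--report-refcounts (-r) requires a repeat count '
                'greater than 1; use --repeat (-N) to set one.')
    if not hasattr(sys, 'gettotalrefcount'):
        return ('--report-refcounts requires a Python interpreter '
                'configured with --with-pydebug.')
    return None
```

Returning early with `options.fail = True`, as the hunk does, lets the caller report the problem without raising from inside option parsing.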
@@ -1453,7 +1638,7 @@
             sys.path[:],
             sys.argv[:],
             sys.modules.copy(),
-            gc.get_threshold()
+            gc.get_threshold(),
             )
         test.globs['this_directory'] = os.path.split(__file__)[0]
         test.globs['testrunner_script'] = __file__
@@ -1464,7 +1649,8 @@
         sys.modules.clear()
         sys.modules.update(test.globs['saved-sys-info'][2])
 
-    suite = doctest.DocFileSuite(
+    suites = [
+        doctest.DocFileSuite(
         'testrunner-arguments.txt',
         'testrunner-coverage.txt',
         'testrunner-debugging.txt',
@@ -1482,6 +1668,7 @@
         setUp=setUp, tearDown=tearDown,
         optionflags=doctest.ELLIPSIS+doctest.NORMALIZE_WHITESPACE,
         checker=checker)
+        ]
 
 # Python <= 2.4.1 had a bug that prevented hotshot from running in
     # non-optimize mode
@@ -1493,13 +1680,46 @@
         except ImportError:
             pass
         else:
-            suite = unittest.TestSuite([suite, doctest.DocFileSuite(
-                'profiling.txt',
-                setUp=setUp, tearDown=tearDown,
-                optionflags=doctest.ELLIPSIS+doctest.NORMALIZE_WHITESPACE,
-                checker=checker)])
+            suites.append(
+                doctest.DocFileSuite(
+                    'profiling.txt',
+                    setUp=setUp, tearDown=tearDown,
+                    optionflags=doctest.ELLIPSIS+doctest.NORMALIZE_WHITESPACE,
+                    checker=checker,
+                    )
+                )
+            
+    if hasattr(sys, 'gettotalrefcount'):
+        suites.append(
+            doctest.DocFileSuite(
+            'testrunner-leaks.txt',
+            setUp=setUp, tearDown=tearDown,
+            optionflags=doctest.ELLIPSIS+doctest.NORMALIZE_WHITESPACE,
+            checker = renormalizing.RENormalizing([
+              (re.compile(r'\d+[.]\d\d\d seconds'), 'N.NNN seconds'),
+              (re.compile(r'sys refcount=\d+ +change=\d+'),
+               'sys refcount=NNNNNN change=NN'),
+              (re.compile(r'sum detail refcount=\d+ +'),
+               'sum detail refcount=NNNNNN '),
+              (re.compile(r'total +\d+ +\d+'),
+               'total               NNNN    NNNN'),
+              (re.compile(r"^ +(int|type) +-?\d+ +-?\d+ *\n", re.M),
+               ''),
+              ]),
+            
+            )
+        )
+    else:
+        suites.append(
+            doctest.DocFileSuite(
+            'testrunner-leaks-err.txt',
+            setUp=setUp, tearDown=tearDown,
+            optionflags=doctest.ELLIPSIS+doctest.NORMALIZE_WHITESPACE,
+            checker=checker,
+            )
+        )            
 
-    return suite
+    return unittest.TestSuite(suites)
 
 def main():
     default = [
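The leak-test suite above is wired up with a `RENormalizing` checker built from (regex, replacement) pairs, so that run-dependent numbers in the expected doctest output compare stably across runs. A stdlib-only sketch of that normalization idea, using two of the patterns from the hunk above (the `normalize` helper is illustrative; the real checker applies the substitutions during doctest output comparison):

```python
import re

# Volatile numbers in test-runner output are rewritten to fixed
# placeholders before comparison, as the RENormalizing checker does.
TRANSFORMS = [
    (re.compile(r'\d+[.]\d\d\d seconds'), 'N.NNN seconds'),
    (re.compile(r'sys refcount=\d+ +change=\d+'),
     'sys refcount=NNNNNN change=NN'),
]

def normalize(text):
    """Apply each substitution in order and return the stabilized text."""
    for pattern, replacement in TRANSFORMS:
        text = pattern.sub(replacement, text)
    return text
```

Branching on `hasattr(sys, 'gettotalrefcount')`, as the hunk does, means a normal build exercises the error path (`testrunner-leaks-err.txt`) while a pydebug build exercises the full leak report (`testrunner-leaks.txt`).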
@@ -1529,6 +1749,7 @@
     # real_pdb_set_trace.
     import zope.testing.testrunner
     from zope.testing import doctest
+    
     main()
 
 # Test the testrunner

Modified: zope.testing/trunk/src/zope/testing/testrunner.txt
===================================================================
--- zope.testing/trunk/src/zope/testing/testrunner.txt	2005-10-28 13:15:07 UTC (rev 39689)
+++ zope.testing/trunk/src/zope/testing/testrunner.txt	2005-10-28 14:44:39 UTC (rev 39690)
@@ -83,4 +83,5 @@
 - `Running Without Source Code <testrunner-wo-source.txt>`_
 - `Repeating Tests <testrunner-looping.txt>`_
 - `Garbage Collection Control and Statistics <testrunner-gc.txt>`_
+- `Debugging Memory Leaks <testrunner-leaks.txt>`_
 - `Edge Cases <testrunner-edge-cases.txt>`_


