[Checkins] SVN: z3c.vcsync/trunk/ Clean up conflict resolution by letting the root of the checkout be

Martijn Faassen faassen at infrae.com
Fri May 30 12:26:53 EDT 2008


Log message for revision 87063:
  Clean up conflict resolution by letting the root of the checkout be
  equivalent to the root of the state. This allows for a special 'found'
  container to be part of the rest of the state.
  

Changed:
  U   z3c.vcsync/trunk/CHANGES.txt
  U   z3c.vcsync/trunk/src/z3c/vcsync/README.txt
  U   z3c.vcsync/trunk/src/z3c/vcsync/conflict.txt
  U   z3c.vcsync/trunk/src/z3c/vcsync/internal.txt
  U   z3c.vcsync/trunk/src/z3c/vcsync/svn.py
  U   z3c.vcsync/trunk/src/z3c/vcsync/vc.py

-=-
Modified: z3c.vcsync/trunk/CHANGES.txt
===================================================================
--- z3c.vcsync/trunk/CHANGES.txt	2008-05-30 09:32:44 UTC (rev 87062)
+++ z3c.vcsync/trunk/CHANGES.txt	2008-05-30 16:26:51 UTC (rev 87063)
@@ -4,8 +4,17 @@
 0.13 (unreleased)
 -----------------
 
-* ...
+* The root directory of the checkout is now truly equivalent to the
+  root object of the state. This means that if the SVN checkout
+  content is to remain the same, the state root to synchronize should
+  be one level higher (the parent of the current state root).
 
+* Conflict resolution has been cleaned up. When a conflict occurs, the
+  other half of the conflict (the one not resolved) is moved into the
+  ``found`` directory, which is created in the checkout root. This is
+  also represented in the state container object and is synchronized
+  like any other content.
+
 0.12 (2008-05-16)
 -----------------
 

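The new layout can be illustrated with a small sketch. This is not
part of the changeset itself; it only restates, with the ``Container``,
``Item`` and ``TestState`` helpers from ``z3c.vcsync.tests``, the
structure that the updated doctests below assume::

  from z3c.vcsync.tests import Container, Item, TestState

  # The state root now corresponds directly to the checkout root: it
  # holds a 'data' container (the actual application content) and a
  # 'found' container (content rescued during conflict resolution).
  root = Container()
  root.__name__ = 'root'
  root['data'] = data = Container()
  root['found'] = Container()
  data['foo'] = Item(payload=1)

  # The state to synchronize is built from the root container, one
  # level higher than the old 'data'-only state.
  state = TestState(root)
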
Modified: z3c.vcsync/trunk/src/z3c/vcsync/README.txt
===================================================================
--- z3c.vcsync/trunk/src/z3c/vcsync/README.txt	2008-05-30 09:32:44 UTC (rev 87062)
+++ z3c.vcsync/trunk/src/z3c/vcsync/README.txt	2008-05-30 16:26:51 UTC (rev 87063)
@@ -49,8 +49,13 @@
 -----
  
 Content to synchronize is represented by an object that provides
-``IState``. The following methods need to be implemented:
+``IState``. A state represents a container object, which should
+contain a ``data`` object (a container that contains the actual data
+to be synchronized) and a ``found`` object (a container that contains
+objects that would otherwise be lost during conflict resolution).
 
+The following methods need to be implemented:
+
 * ``get_revision_nr()``: return the last revision number that the
    application was synchronized with. The state typically stores this
    on the application object.
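
Only the first of these methods is visible in this hunk; the rest of
the interface falls outside the diff. Purely as a hedged sketch, a
minimal state object consistent with the names used elsewhere in this
changeset (``root``, ``removed_paths``, ``objects()``) might look
roughly like this::

  class SketchState(object):
      """Hypothetical IState-style object; attribute and method names
      are taken only from the doctests in this changeset, not from the
      real z3c.vcsync interface definition."""

      def __init__(self, root):
          self.root = root          # container holding 'data' and 'found'
          self._revision_nr = 0
          self.removed_paths = []   # paths removed since the last sync

      def get_revision_nr(self):
          # the last revision number the application synchronized with
          return self._revision_nr

      def objects(self, revision_nr):
          # objects changed since revision_nr; returning more objects
          # than strictly necessary is allowed (see internal.txt below)
          def walk(obj):
              yield obj
              values = getattr(obj, 'values', None)
              if values is not None:
                  for value in values():
                      for sub in walk(value):
                          yield sub
          return list(walk(self.root))
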
@@ -113,10 +118,17 @@
 
 It is a class that implements enough of the dictionary API and
 implements the ``IContainer`` interface. A normal Zope 3 folder or
-Grok container will also work. Let's now set up the tree::
+Grok container will also work. 
 
-  >>> data = Container()
-  >>> data.__name__ = 'root'
+Let's create a container now::
+
+  >>> root = Container()
+  >>> root.__name__ = 'root'
+
+The container has two subcontainers (``data`` and ``found``)::
+
+  >>> root['data'] = data = Container()
+  >>> root['found'] = Container()
   >>> data['foo'] = Item(payload=1)
   >>> data['bar'] = Item(payload=2)
   >>> data['sub'] = Container()
@@ -129,7 +141,7 @@
 Now that we have an implementation of ``IState`` that works for our
 state, let's create our ``state`` object::
 
-  >>> state = TestState(data)
+  >>> state = TestState(root)
 
 Reading from and writing to the filesystem
 ------------------------------------------
@@ -198,6 +210,10 @@
   >>> from z3c.vcsync.svn import SvnCheckout
   >>> checkout = SvnCheckout(wc)
 
+The root directory of the working copy will be synchronized with the
+root container of the state. The checkout will therefore contain
+``data`` and ``found`` sub-directories.
+
 Constructing the synchronizer
 -----------------------------
 
@@ -245,19 +261,19 @@
 files on the filesystem::
 
   >>> pretty_paths(wc.listdir())
-  ['root']
-  >>> pretty_paths(wc.join('root').listdir())
-  ['root/bar.test', 'root/foo.test', 'root/sub']
-  >>> pretty_paths(wc.join('root').join('sub').listdir())
-  ['root/sub/qux.test']
+  ['data']
+  >>> pretty_paths(wc.join('data').listdir())
+  ['data/bar.test', 'data/foo.test', 'data/sub']
+  >>> pretty_paths(wc.join('data').join('sub').listdir())
+  ['data/sub/qux.test']
 
 The ``.test`` files have the payload data we expect::
   
-  >>> print wc.join('root').join('foo.test').read()
+  >>> print wc.join('data').join('foo.test').read()
   1
-  >>> print wc.join('root').join('bar.test').read()
+  >>> print wc.join('data').join('bar.test').read()
   2
-  >>> print wc.join('root').join('sub').join('qux.test').read()
+  >>> print wc.join('data').join('sub').join('qux.test').read()
   3
 
 Synchronization back into objects
@@ -268,9 +284,9 @@
 
 We have a second, empty tree that we will load objects into::
 
-  >>> data2 = Container()
-  >>> data2.__name__ = 'root'
-  >>> state2 = TestState(data2)
+  >>> root2 = Container()
+  >>> root2.__name__ = 'root'
+  >>> state2 = TestState(root2)
 
 We make another checkout of the repository::
 
@@ -295,13 +311,16 @@
 
 The state of objects in the tree must now mirror that of the original state::
 
-  >>> sorted(data2.keys())    
-  ['bar', 'foo', 'root', 'sub']
+  >>> sorted(root2.keys())    
+  ['data']
 
+  >>> sorted(root2['data'].keys())
+  ['bar', 'foo', 'sub']
+
 Now we will change some of these objects, and synchronize again::
 
-  >>> data2['bar'].payload = 20
-  >>> data2['sub']['qux'].payload = 30
+  >>> root2['data']['bar'].payload = 20
+  >>> root2['data']['sub']['qux'].payload = 30
   >>> info2 = s2.sync("synchronize")
 
 We can now synchronize the original tree again::
@@ -311,9 +330,9 @@
 
 We should see the changes reflected into the original tree::
 
-  >>> data['bar'].payload
+  >>> root['data']['bar'].payload
   20
-  >>> data['sub']['qux'].payload
+  >>> root['data']['sub']['qux'].payload
   30
 
 More information

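Pulling the pieces of this README together, the overall flow is
roughly the following sketch. Here ``wc`` is assumed to be a prepared
SVN working copy path, ``state`` an ``IState`` object as constructed
above, and ``Synchronizer`` is assumed to live in ``z3c.vcsync.vc``,
as the vc.py changes below suggest::

  from z3c.vcsync.svn import SvnCheckout
  from z3c.vcsync.vc import Synchronizer

  checkout = SvnCheckout(wc)            # wc: py.path to a working copy
  synchronizer = Synchronizer(checkout, state)

  # sync() drives the whole round trip: state changes are saved into
  # the checkout, conflicts are resolved into 'found', and changes
  # made by others are loaded back into the state.
  info = synchronizer.sync("synchronize")
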
Modified: z3c.vcsync/trunk/src/z3c/vcsync/conflict.txt
===================================================================
--- z3c.vcsync/trunk/src/z3c/vcsync/conflict.txt	2008-05-30 09:32:44 UTC (rev 87062)
+++ z3c.vcsync/trunk/src/z3c/vcsync/conflict.txt	2008-05-30 16:26:51 UTC (rev 87063)
@@ -27,8 +27,9 @@
 
 Let's set up a simple tree::
 
-  >>> data1 = Container()
-  >>> data1.__name__ = 'root'
+  >>> root1 = Container()
+  >>> root1.__name__ = 'root'
+  >>> root1['data'] = data1 = Container()
   >>> data1['bar'] = Item(payload=1)
   >>> data1['sub'] = Container()
   >>> data1['sub']['qux'] = Item(payload=3)
@@ -37,7 +38,7 @@
 each user). Let's represent this tree in ``state1`` first::
 
   >>> from z3c.vcsync.tests import TestState
-  >>> state1 = TestState(data1)
+  >>> state1 = TestState(root1)
 
 Let's make sure we can save and load the objects by grokking the
 right serializers, parser and factories::
@@ -82,9 +83,9 @@
 
 And synchronize it back into another tree::
 
-  >>> data2 = Container()
-  >>> data2.__name__ = 'root'
-  >>> state2 = TestState(data2)
+  >>> root2 = Container()
+  >>> root2.__name__ = 'root'
+  >>> state2 = TestState(root2)
 
   >>> import py
   >>> wc2 = py.test.ensuretemp('wc2')
@@ -95,6 +96,10 @@
   >>> current_synchronizer = s2
   >>> info = s2.sync("Synchronize")
 
+We should now have a ``data`` folder in ``root2`` as well::
+
+  >>> data2 = root2['data']
+
 File conflicts
 --------------
 
@@ -123,6 +128,14 @@
   >>> data1['bar'].payload
   200
 
+The other version of the conflicting object is not gone. It is stored
+under a special ``found`` directory. Here we can see the conflicting
+value that was stored by the second tree::
+
+  >>> found1 = root1['found']
+  >>> found1['data']['bar'].payload
+  250
+  
 When we synchronize from the second tree again, we will see the
 resolved value appear as well::
 
@@ -131,20 +144,11 @@
   >>> data2['bar'].payload
   200
 
-The other version of the conflicting object is not gone. It is stored
-under a special ``found`` directory. We'll synchronize this as well::
+The other version of the conflicting object is also available to the
+second hierarchy::
 
-  >>> found_data = Container()
-  >>> found_data.__name__ = 'found'
-  >>> found_state = TestState(found_data)
-  >>> found_s = Synchronizer(checkout1, found_state)
-  >>> current_synchronizer = found_s
-  >>> info = found_s.sync("synchronize")
-
-We see the conflicting value that was stored by the second tree in
-here::
-
-  >>> found_data['root']['bar'].payload
+  >>> found2 = root2['found']
+  >>> found2['data']['bar'].payload
   250
 
 Conflicts in subdirectories should also be resolved properly::
@@ -166,9 +170,7 @@
 The found version in this case will reside in the same subdirectory,
 ``sub``::
 
-  >>> current_synchronizer = found_s
-  >>> info = found_s.sync("Synchronize")
-  >>> found_data['root']['sub']['qux'].payload
+  >>> found2['data']['sub']['qux'].payload
   36
 
 Re-occurrence of a conflict
@@ -194,9 +196,8 @@
 The ``found`` directory will contain the other part of the conflict
 (having overwritten the previous value)::
 
-  >>> current_synchronizer = found_s
-  >>> info = found_s.sync("synchronize")
-  >>> found_data['root']['bar'].payload
+  >>> found1 = root1['found']
+  >>> found1['data']['bar'].payload
   251
 
 Conflicting file conflicts
@@ -207,26 +208,22 @@
 time, another user creates a conflict that causes this file to be
 re-created?
 
-Let's synchronize the found directory for the second user::
+Let's make sure the second user also has the same content::
 
-  >>> found_data2 = Container()
-  >>> found_data2.__name__ = 'found'
-  >>> found_state2 = TestState(found_data2)
-  >>> found_s2 = Synchronizer(checkout2, found_state2)
-  >>> current_synchronizer2 = found_s2
-  >>> info = found_s2.sync("synchronize")
+  >>> current_synchronizer = s2
+  >>> info = s2.sync("Synchronize")
 
-We currently already have a conflicting object in ``found``::
+We already have a conflicting object in ``found2``::
 
-  >>> found_data2['root']['bar'].payload
+  >>> found2['data']['bar'].payload
   251
 
 Now the user throws away the ``bar`` object from ``found``::
 
   >>> from z3c.vcsync.vc import get_object_path
-  >>> found_state2._removed.append(
-  ...    get_object_path(found_data2, found_data2['root']['bar']))
-  >>> del found_data2['root']['bar']
+  >>> state2._removed.append(
+  ...    get_object_path(root2, found2['data']['bar']))
+  >>> del found2['data']['bar']
 
 Now let's generate a conflict on ``bar`` again::
 
@@ -238,18 +235,10 @@
   >>> current_synchronizer = s1
   >>> info = s1.sync("Synchronize")
 
-We synchronize the throwing away of ``bar`` in ``found``, generating
-a potential conflict in the ``found`` directory::
-
-  >>> current_synchronizer = found_s2
-  >>> 'bar' in found_data2['root']
-  False
-  >>> info = found_s2.sync("synchronize")
-
 The result should be that the found object is there, updated to the
 new conflict::
 
-  >>> found_data2['root']['bar'].payload
+  >>> found1['data']['bar'].payload
   252
 
 Folder conflicts
@@ -257,8 +246,8 @@
 
 Let's now examine a case of a conflict in case of containers.
 
-A user (we'll call him ``user1``) creates a new container in ``data``
-with some content in it, and synchronize it::
+A user (``user1``) creates a new container in ``data`` with some
+content in it, and synchronizes it::
 
   >>> current_synchronizer = s1
   >>> data1['folder'] = Container()
@@ -275,10 +264,20 @@
 causing ``folder`` to be gone in SVN::
 
   >>> current_synchronizer = s1
-  >>> state1._removed.append(get_object_path(data1, data1['folder']))
-  >>> del data1['folder']
+  >>> state1._removed.append(get_object_path(root1, root1['data']['folder']))
+  >>> del root1['data']['folder']
   >>> info = s1.sync("Synchronize")
 
+It's really gone now::
+
+  >>> 'folder' in root1['data']
+  False
+
+It's also gone in SVN::
+
+  >>> s1.checkout.path.join('data').join('folder').check()
+  False
+
 Meanwhile, ``user2`` happily alters data in ``folder`` by changing
 ``content`` in instance 2::
 
@@ -289,7 +288,8 @@
 
   >>> info = s2.sync("Synchronize")
 
-All changes ``user2`` made are now gone, as ``folder`` is gone::
+Since ``folder`` was previously removed, all changes ``user2`` made
+to it are now gone::
 
   >>> 'folder' in data2
   False
@@ -297,10 +297,7 @@
 The folder with its content can however be retrieved in the found data
 section::
 
-  >>> found_s = Synchronizer(checkout1, found_state)
-  >>> current_synchronizer = found_s
-  >>> info = found_s.sync("synchronize")
-  >>> found_data['root']['folder']['content'].payload
+  >>> found2['data']['folder']['content'].payload
   15
 
 Conflicting directory conflicts

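To summarize the conflict handling exercised in conflict.txt: the copy
that wins stays in place under ``data``, while the losing copy
reappears at the mirrored location under ``found``. A small sketch of
the resulting working-copy layout (hypothetical paths, using the same
``py.path`` API the tests use)::

  import py

  wc = py.path.local('/tmp/example-checkout')    # assumed checkout root

  resolved = wc.join('data', 'bar.test')         # keeps the local value
  rescued = wc.join('found', 'data', 'bar.test') # receives the other value

  # After resolve() both files exist in the checkout; the 'found' copy
  # is committed and then synchronized into root['found'] like any
  # other content.
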
Modified: z3c.vcsync/trunk/src/z3c/vcsync/internal.txt
===================================================================
--- z3c.vcsync/trunk/src/z3c/vcsync/internal.txt	2008-05-30 09:32:44 UTC (rev 87062)
+++ z3c.vcsync/trunk/src/z3c/vcsync/internal.txt	2008-05-30 16:26:51 UTC (rev 87063)
@@ -68,8 +68,9 @@
 items and sub-containers in it::
 
   >>> from z3c.vcsync.tests import Container
-  >>> data = Container()
-  >>> data.__name__ = 'root'
+  >>> root = Container()
+  >>> root.__name__ = 'root'
+  >>> root['data'] = data = Container()
   >>> data['foo'] = Item(payload=1)
   >>> data['bar'] = Item(payload=2)
   >>> data['sub'] = Container()
@@ -91,14 +92,14 @@
 We also have a test state representing the object data::
 
   >>> from z3c.vcsync.tests import TestAllState
-  >>> state = TestAllState(data)
+  >>> state = TestAllState(root)
 
 The test state will always return a list of all objects. We pass in
 ``None`` for the revision_nr here, as the ``TestAllState`` ignores this
 information anyway::
 
   >>> sorted([obj.__name__ for obj in state.objects(None)])
-  ['bar', 'foo', 'qux', 'root', 'sub']
+  ['bar', 'data', 'foo', 'qux', 'root', 'sub']
 
 Now let's synchronize. For this, we need a synchronizer initialized
 with the checkout and the state::
@@ -112,34 +113,40 @@
   >>> s.save(None)
 
 The filesystem should now contain the right objects. Everything is
-always saved in a directory called ``root``:
+saved in a directory called ``data``::
  
-  >>> root = testpath.join('root')
-  >>> root.check(dir=True)
+  >>> root_dir = testpath
+  >>> data_dir = root_dir.join('data')
+  >>> root_dir.check(dir=True)
   True
 
 This root directory should contain the right objects::
 
-  >>> sorted([entry.basename for entry in root.listdir()])
+  >>> sorted([entry.basename for entry in root_dir.listdir()])
+  ['data']
+
+The ``data`` subdirectory should contain the right sub-objects::
+
+  >>> sorted([entry.basename for entry in data_dir.listdir()])
   ['bar.test', 'foo.test', 'sub']
 
 We expect the right contents in ``bar.test`` and ``foo.test``::
 
-  >>> root.join('bar.test').read()
+  >>> data_dir.join('bar.test').read()
   '2\n'
-  >>> root.join('foo.test').read()
+  >>> data_dir.join('foo.test').read()
   '1\n'
 
 ``sub`` is a container so should be represented as a directory::
 
-  >>> sub_path = root.join('sub')
-  >>> sub_path.check(dir=True)
+  >>> sub_dir = data_dir.join('sub')
+  >>> sub_dir.check(dir=True)
   True
 
-  >>> sorted([entry.basename for entry in sub_path.listdir()])
+  >>> sorted([entry.basename for entry in sub_dir.listdir()])
   ['qux.test']
 
-  >>> sub_path.join('qux.test').read()
+  >>> sub_dir.join('qux.test').read()
   '3\n'
 
 Modifying an existing checkout
@@ -159,7 +166,7 @@
 do this manually here, though in a real application typically you
 would subscribe to the ``IObjectRemovedEvent``.
 
-  >>> removed_paths = ['/root/bar']
+  >>> removed_paths = ['/data/bar']
   >>> state.removed_paths = removed_paths
 
 The added object always will return with ``objects``, but in your
@@ -171,12 +178,12 @@
 
 We expect the ``hoi.test`` file to be added::
 
-  >>> root.join('hoi.test').read()
+  >>> data_dir.join('hoi.test').read()
   '4\n'
 
 We also expect the ``bar.test`` file to be removed::
 
-  >>> root.join('bar.test').check()
+  >>> data_dir.join('bar.test').check()
   False
 
 Modifying an existing checkout, some edge cases
@@ -192,7 +199,7 @@
 was removed with a new container with the same name, so we have to
 remember this::
 
-  >>> removed_paths = ['/root/hoi']
+  >>> removed_paths = ['/data/hoi']
   >>> state.removed_paths = removed_paths
 
 We put some things into the new container::
@@ -206,12 +213,12 @@
 
 Let's check the filesystem state::
 
-  >>> sorted([entry.basename for entry in root.listdir()])
+  >>> sorted([entry.basename for entry in data_dir.listdir()])
   ['foo.test', 'hoi', 'sub']
 
 We expect ``hoi`` to contain ``something.test``::
 
-  >>> hoi_path = root.join('hoi')
+  >>> hoi_path = data_dir.join('hoi')
   >>> something_path = hoi_path.join('something.test')
   >>> something_path.read()
   '15\n'
@@ -225,12 +232,12 @@
 
 This means we need to mark the path to the container to be removed::
 
-  >>> removed_paths = ['/root/hoi']
+  >>> removed_paths = ['/data/hoi']
   >>> state.removed_paths = removed_paths
 
 We expect to see a ``hoi.test`` but no ``hoi`` directory anymore::
 
-  >>> sorted([entry.basename for entry in root.listdir()])
+  >>> sorted([entry.basename for entry in data_dir.listdir()])
   ['foo.test', 'hoi.test', 'sub']
 
 Note: creating a container with the name ``hoi.test`` (using the
@@ -242,7 +249,7 @@
 avoid creating any directories with a postfix in use by items. The
 following should be forbidden::
 
-  data['hoi.test'] = Container()
+  data['hoi.test'] = Container() # XXX forbidden
 
 multiple object types
 ---------------------
@@ -275,7 +282,7 @@
 
 We need to mark this removal in our ``removed_paths`` list::
 
-  >>> state.removed_paths = ['/root/hoi']
+  >>> state.removed_paths = ['/data/hoi']
 
 We then introduce the new ``hoi``::
 
@@ -287,19 +294,19 @@
 
 We expect to see a ``hoi.other`` item now::
 
-  >>> sorted([entry.basename for entry in root.listdir()])
+  >>> sorted([entry.basename for entry in data_dir.listdir()])
   ['foo.test', 'hoi.other', 'sub']
 
 Let's change the object back again::
 
   >>> del data['hoi']
-  >>> state.removed_paths = ['/root/hoi']
+  >>> state.removed_paths = ['/data/hoi']
   >>> data['hoi'] = Item(payload=16)
   >>> s.save(None)
 
 We expect to see a ``hoi.test`` item again::
 
-  >>> sorted([entry.basename for entry in root.listdir()])
+  >>> sorted([entry.basename for entry in data_dir.listdir()])
   ['foo.test', 'hoi.test', 'sub']
 
 loading a checkout state into python objects
@@ -366,8 +373,9 @@
 purposes of this test we maintain it manually. In this case,
 everything is added so appears in the files list::
 
-  >>> checkout._files = [root.join('foo.test'), root.join('hoi.test'),
-  ...   root.join('sub'), root.join('sub', 'qux.test')]
+  >>> checkout._files = [data_dir, data_dir.join('foo.test'), 
+  ...   data_dir.join('hoi.test'), data_dir.join('sub'), 
+  ...   data_dir.join('sub', 'qux.test')]
 
 Nothing was removed::
 
@@ -390,23 +398,26 @@
 We expect the proper objects to be in the new container::
 
   >>> sorted(container2.keys())
+  ['data']
+  >>> data2 = container2['data']
+  >>> sorted(data2.keys())
   ['foo', 'hoi', 'sub']
 
 We check whether the items contains the right information::
 
-  >>> isinstance(container2['foo'], Item)
+  >>> isinstance(data2['foo'], Item)
   True
-  >>> container2['foo'].payload
+  >>> data2['foo'].payload
   1
-  >>> isinstance(container2['hoi'], Item)
+  >>> isinstance(data2['hoi'], Item)
   True
-  >>> container2['hoi'].payload
+  >>> data2['hoi'].payload
   16
-  >>> isinstance(container2['sub'], Container)
+  >>> isinstance(data2['sub'], Container)
   True
-  >>> sorted(container2['sub'].keys())
+  >>> sorted(data2['sub'].keys())
   ['qux']
-  >>> container2['sub']['qux'].payload
+  >>> data2['sub']['qux'].payload
   3
 
 version control changes a file
@@ -422,7 +433,7 @@
 simulate what might happen during a version control system ``update``
 operation. Let's define one here that modifies text in a file::
 
-  >>> hoi_path = root.join('hoi.test')
+  >>> hoi_path = data_dir.join('hoi.test')
   >>> def update_function():
   ...    hoi_path.write('200\n')
   >>> checkout.update_function = update_function
@@ -442,7 +453,7 @@
  
 We expect the ``hoi`` object to be modified::
 
-  >>> container2['hoi'].payload
+  >>> data2['hoi'].payload
   200
 
 version control adds a file
@@ -450,7 +461,7 @@
 
 We update our checkout again and cause a file to be added::
 
-  >>> hallo = root.join('hallo.test').ensure()
+  >>> hallo = data_dir.join('hallo.test').ensure()
   >>> def update_function():
   ...   hallo.write('300\n')
   >>> checkout.update_function = update_function
@@ -468,7 +479,7 @@
  
 We expect there to be a new object ``hallo``::
 
-  >>> 'hallo' in container2.keys()
+  >>> 'hallo' in data2.keys()
   True
 
 version control removes a file
@@ -477,7 +488,7 @@
 We update our checkout and cause a file to be removed::
 
   >>> def update_function():
-  ...   root.join('hallo.test').remove()
+  ...   data_dir.join('hallo.test').remove()
   >>> checkout.update_function = update_function
 
   >>> checkout.up()
@@ -493,7 +504,7 @@
 
 We expect the object ``hallo`` to be gone again::
 
-  >>> 'hallo' in container2.keys()
+  >>> 'hallo' in data2.keys()
   False
 
 version control adds a directory
@@ -502,7 +513,7 @@
 We update our checkout and cause a directory (with a file inside) to be
 added::
 
-  >>> newdir_path = root.join('newdir')
+  >>> newdir_path = data_dir.join('newdir')
   >>> def update_function():
   ...   newdir_path.ensure(dir=True)
   ...   newfile_path = newdir_path.join('newfile.test').ensure()
@@ -519,11 +530,11 @@
 Reloading this will cause a new container to exist::
 
   >>> dummy = s.load(None)
-  >>> 'newdir' in container2.keys()
+  >>> 'newdir' in data2.keys()
   True
-  >>> isinstance(container2['newdir'], Container)
+  >>> isinstance(data2['newdir'], Container)
   True
-  >>> container2['newdir']['newfile'].payload
+  >>> data2['newdir']['newfile'].payload
   400
 
 version control removes a directory
@@ -548,7 +559,7 @@
 
 Reloading this will cause the new container to be gone again::
 
-  >>> 'newdir' in container2.keys()
+  >>> 'newdir' in data2.keys()
   False
 
 version control changes a file into a directory
@@ -557,7 +568,7 @@
 Some sequence of actions by other users has caused a name that previously
 referred to a file to now refer to a directory::
 
-  >>> hoi_path2 = root.join('hoi')
+  >>> hoi_path2 = data_dir.join('hoi')
   >>> def update_function():
   ...   hoi_path.remove()
   ...   hoi_path2.ensure(dir=True)
@@ -575,9 +586,9 @@
 Reloading this will cause a new container to be there instead of the file::
 
   >>> dummy = s.load(None)
-  >>> isinstance(container2['hoi'], Container)
+  >>> isinstance(data2['hoi'], Container)
   True
-  >>> container2['hoi']['some'].payload
+  >>> data2['hoi']['some'].payload
   1000
 
 version control changes a directory into a file
@@ -588,7 +599,7 @@
 
   >>> def update_function():
   ...   hoi_path2.remove()
-  ...   hoi_path = root.join('hoi.test').ensure()
+  ...   hoi_path = data_dir.join('hoi.test').ensure()
   ...   hoi_path.write('2000\n')
   >>> checkout.update_function = update_function
 
@@ -603,9 +614,9 @@
 container::
 
   >>> dummy = s.load(None)
-  >>> isinstance(container2['hoi'], Item)
+  >>> isinstance(data2['hoi'], Item)
   True
-  >>> container2['hoi'].payload
+  >>> data2['hoi'].payload
   2000
 
 version control changes a file into one with a different file type
@@ -642,7 +653,7 @@
 Now we define an update function that replaces ``hoi.test`` with
 ``hoi.test2``::
 
-  >>> hoi_path3 = root.join('hoi.test2')
+  >>> hoi_path3 = data_dir.join('hoi.test2')
   >>> def update_function():
   ...    hoi_path.remove()
   ...    hoi_path3.ensure()
@@ -659,17 +670,17 @@
 type::
 
   >>> dummy = s.load(None)
-  >>> isinstance(container2['hoi'], Item2)
+  >>> isinstance(data2['hoi'], Item2)
   True
-  >>> container2['hoi'].payload
+  >>> data2['hoi'].payload
   1936
 
 Let's restore the original ``hoi.test`` object::
  
   >>> hoi_path3.remove()
   >>> hoi_path.write('2000\n')
-  >>> del container2['hoi']
-  >>> container2['hoi'] = Item(2000)
+  >>> del data2['hoi']
+  >>> data2['hoi'] = Item(2000)
 
 Complete synchronization
 ------------------------
@@ -677,12 +688,12 @@
 Let's now exercise the ``sync`` method directly. First we'll modify
 the payload of the ``hoi`` item::
 
-  >>> container2['hoi'].payload = 3000
+  >>> data2['hoi'].payload = 3000
  
 Next, we will add a new ``alpha`` file to the checkout when we do an
 ``up()``, so again we simulate the actions of our version control system::
 
-  >>> alpha_path = root.join('alpha.test').ensure()
+  >>> alpha_path = data_dir.join('alpha.test').ensure()
   >>> def update_function():
   ...   alpha_path.write('4000\n')
   >>> checkout.update_function = update_function
@@ -717,7 +728,7 @@
 someone else)::
 
   >>> info.files_changed()
-  [local('.../root/alpha.test')]
+  [local('.../data/alpha.test')]
 
 We removed no objects from our database since the last update::
 
@@ -728,16 +739,16 @@
 all objects here (returning more objects is allowed)::
 
   >>> info.objects_changed()
-  ['/root/foo', '/root/hoi', '/root', '/root/sub/qux', '/root/sub']
+  ['/', '/data/foo', '/data/hoi', '/data', '/data/sub/qux', '/data/sub']
 
 We expect the checkout to reflect the changed state of the ``hoi`` object::
 
-  >>> root.join('hoi.test').read()
+  >>> data_dir.join('hoi.test').read()
   '3000\n'
 
 We also expect the database to reflect the creation of the new
 ``alpha`` object::
 
-  >>> container2['alpha'].payload
+  >>> data2['alpha'].payload
   4000
 

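internal.txt maintains ``removed_paths`` by hand and notes that a real
application would typically subscribe to ``IObjectRemovedEvent``
instead. A hedged sketch of such a subscriber, reusing
``get_object_path`` from ``z3c.vcsync.vc``; the state lookup is a
hypothetical application helper, not part of this changeset::

  import grok
  from zope.interface import Interface
  from zope.lifecycleevent.interfaces import IObjectRemovedEvent
  from z3c.vcsync.vc import get_object_path

  @grok.subscribe(Interface, IObjectRemovedEvent)
  def record_removal(obj, event):
      # get_current_state() is assumed to return the application's
      # IState object (exposing .root and a removed-paths list).
      state = get_current_state()
      # Depending on event ordering, event.oldParent/oldName may be
      # needed instead of walking obj.__parent__ here.
      state.removed_paths.append(get_object_path(state.root, obj))
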
Modified: z3c.vcsync/trunk/src/z3c/vcsync/svn.py
===================================================================
--- z3c.vcsync/trunk/src/z3c/vcsync/svn.py	2008-05-30 09:32:44 UTC (rev 87062)
+++ z3c.vcsync/trunk/src/z3c/vcsync/svn.py	2008-05-30 16:26:51 UTC (rev 87063)
@@ -39,6 +39,7 @@
         self._updated_revision_nr = None
         
     def resolve(self):
+        self._found_files = set()
         self._resolve_helper(self.path)
 
     def commit(self, message):
@@ -70,7 +71,9 @@
         save_path = found.join(*rel_path.split(path.sep))
         save_path.ensure()
         save_path.write(content)
-
+        for part in save_path.parts():
+            self._found_files.add(part)
+        
     def _found_container(self, path):
         """Store conflicting/lost container in found directory.
 
@@ -82,9 +85,12 @@
         save_path = found.join(*rel_path.split(path.sep))
         py.path.local(path.strpath).copy(save_path)
         save_path.add()
+        for part in save_path.parts():
+            self._found_files.add(part)
         for new_path in save_path.visit():
             new_path.add()
-            
+            self._found_files.add(new_path)
+
     def _update_files(self, revision_nr):
         """Go through svn log and update self._files and self._removed.
         """
@@ -103,8 +109,7 @@
             logs = []
         checkout_path = self._checkout_path()
         files, removed = self._info_from_logs(logs, checkout_path)
-
-        self._files = files
+        self._files = files.union(self._found_files)
         self._removed = removed
         self._updated_revision_nr = new_revision_nr
 
@@ -129,30 +134,27 @@
                     files.add(path)                
         return files, removed
 
+    def _resolve_path(self, path):
+        # resolve any direct conflicts
+        for conflict in path.status().conflict:
+            mine, other = conflict_info(conflict)
+            conflict.write(mine.read())
+            self._found(conflict, other.read())
+            conflict._svn('resolved')
+        # move any status unknown directories away
+        for unknown in path.status().unknown:
+            if unknown.check(dir=True):
+                self._found_container(unknown)
+                unknown.remove()
+
     def _resolve_helper(self, path):
+        if not path.check(dir=True):
+            return
+        # resolve paths in this dir
+        self._resolve_path(path)
         for p in path.listdir():
-            if not p.check(dir=True):
-                continue
-            try:
-                # resolve any direct conflicts
-                for conflict in p.status().conflict:
-                    mine, other = conflict_info(conflict)
-                    conflict.write(mine.read())
-                    self._found(conflict, other.read())
-                    conflict._svn('resolved')
-                # move any status unknown directories away
-                for unknown in p.status().unknown:
-                    if unknown.check(dir=True):
-                        self._found_container(unknown)
-                        unknown.remove()
-            # XXX This is a horrible hack to skip status of R. This
-            # is not supported by Py 0.9.0, and raises a NotImplementedError.
-            # This has been fixed on the trunk of Py.
-            # When we upgrade to a new release of Py this can go away
-            except NotImplementedError:
-                pass
             self._resolve_helper(p)
-    
+
 def conflict_info(conflict):
     path = conflict.dirpath()
     name = conflict.basename

Modified: z3c.vcsync/trunk/src/z3c/vcsync/vc.py
===================================================================
--- z3c.vcsync/trunk/src/z3c/vcsync/vc.py	2008-05-30 09:32:44 UTC (rev 87062)
+++ z3c.vcsync/trunk/src/z3c/vcsync/vc.py	2008-05-30 16:26:51 UTC (rev 87063)
@@ -50,7 +50,6 @@
     rel_path = path.relto(root_path)
     steps = rel_path.split(os.path.sep)
     steps = [step for step in steps if step != '']
-    steps = steps[1:]
     obj = root
     for step in steps:
         name, ex = os.path.splitext(step)
@@ -74,7 +73,7 @@
     steps = [step for step in steps if step != '']
     if not steps:
         return None
-    steps = steps[1:-1]
+    steps = steps[:-1]
     obj = root
     for step in steps:
         try:
@@ -89,10 +88,8 @@
     Given state root container and obj, return internal path to this obj.
     """
     steps = []
-    while True:
+    while obj is not root:
         steps.append(obj.__name__)
-        if obj is root:
-            break
         obj = obj.__parent__
     steps.reverse()
     return '/' + '/'.join(steps)
@@ -182,7 +179,8 @@
         # now save all files that have been modified/added
         root = self.state.root
         for obj in self.state.objects(revision_nr):
-            IDump(obj).save(self._get_container_path(root, obj))
+            if obj is not root:
+                IDump(obj).save(self._get_container_path(root, obj))
 
     def load(self, revision_nr):
         # remove all objects that have been removed in the checkout
@@ -226,9 +224,11 @@
     
     def _get_container_path(self, root, obj):
         steps = []
+        assert root is not obj, "No container exists for the root"
+        obj = obj.__parent__
         while obj is not root:
+            steps.append(obj.__name__)
             obj = obj.__parent__
-            steps.append(obj.__name__)
         steps.reverse()
         return self.checkout.path.join(*steps)
 

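The net effect of dropping the leading path segment handling in vc.py
is that internal paths now map one-to-one onto the checkout, without a
``root`` prefix. A worked illustration, assuming the ``root`` and
``synchronizer`` objects from the sketches above::

  from z3c.vcsync.vc import get_object_path

  # Assume root['data']['sub']['qux'] exists and the checkout is at <wc>.
  qux = root['data']['sub']['qux']

  get_object_path(root, qux)
  # -> '/data/sub/qux'   (the root's own name is no longer included)

  synchronizer._get_container_path(root, qux)
  # -> <wc>/data/sub     (the directory into which qux.test is dumped)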

