[ZODB-Dev] Big OOBTree --> memory exhausted

Toby Dickenson tdickenson@geminidataloggers.com
Thu, 31 Jan 2002 11:57:47 +0000


On Thu, 31 Jan 2002 11:50:17 +0100, Thomas Guettler
<zopestoller@thomas-guettler.de> wrote:

>    def relate(self, obj1, obj2):
>        """Relate two objects. Both arguments can be lists.
>        If they are lists each item in obj1 gets related to each item
>        in obj2"""
>        if type(obj1) not in ( type(()), type([]) ): obj1=[obj1]
>        if type(obj2) not in ( type(()), type([]) ): obj2=[obj2]
>        i=0
>        counter=0
>        for o1 in obj1:
>            for o2 in obj2:
>                self.relate_one_direction(o1, o2)
>                self.relate_one_direction(o2, o1)
>                i=i+1
>                if i>10000:
>                    get_transaction().commit()

You have committed the transaction, which is good. You should be
seeing your on-disk database growing.

Those new and modified objects now have their state on disk, not just
in memory, so they *can* be deactivated. However, objects don't get
deactivated on their own; you have to tickle the garbage collector.
You need to add

 self._p_jar.cacheGC()
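
For example, here is a rough sketch based on the relate() loop from
your post (I have made the counter reset at each checkpoint, rather
than letting it commit on every pass once it exceeds 10000;
relate_one_direction is your own method):

    def relate(self, obj1, obj2):
        """Relate two objects. Both arguments can be lists.
        If they are lists each item in obj1 gets related to each item
        in obj2"""
        if type(obj1) not in (type(()), type([])): obj1 = [obj1]
        if type(obj2) not in (type(()), type([])): obj2 = [obj2]
        i = 0
        for o1 in obj1:
            for o2 in obj2:
                self.relate_one_direction(o1, o2)
                self.relate_one_direction(o2, o1)
                i = i + 1
                if i >= 10000:
                    i = 0
                    # commit, so the new and modified objects have
                    # their state on disk and *can* be deactivated
                    get_transaction().commit()
                    # tickle the cache so it actually deactivates them
                    self._p_jar.cacheGC()
        # commit whatever is left over from the last partial batch
        get_transaction().commit()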

For more information on this solution, check the archives for a thread
involving me and ChrisW about memory leaks.

Note that the current garbage collector is not very good at supporting
memory-intensive operations such as this. You might find that adding
cacheGC() is enough to slow down memory growth but not stop it. I have
an experimental implementation at
http://www.zope.org/Members/htrd/cache which will behave much better.
I would appreciate any feedback if you have a chance to try it.



Toby Dickenson
tdickenson@geminidataloggers.com