Hi there, this is with Zope 2.6.2.
I want to offer Plone's "Quick undo" to users only in some parts of a site.
When I test whether a user is allowed to "Undo changes" with the following expression, which checks which roles are granted a permission:
[ s for s in map(lambda x : len(x['selected']) and x['name'], context.rolesOfPermission('Undo changes')) if s]
I do not get the inherited roles, but only the ones assigned to the folder I perform the test on. If I do the same for "View", I also get the inherited roles.
Shouldn't rolesOfPermission also return the inherited roles?
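For reference, the same check can be written more directly as a list comprehension. This is a plain-Python sketch: it assumes, as the expression above does, that rolesOfPermission() returns dicts with 'name' and 'selected' keys, where 'selected' is non-empty when the role has the permission.

```python
# Sketch: collect the names of roles that have a permission "selected".
# The record shape (dicts with 'name' and 'selected') is taken from the
# expression quoted above; 'selected' is assumed to be a non-empty
# string when the role is granted the permission, and '' otherwise.
def roles_with_permission(role_records):
    return [r['name'] for r in role_records if r['selected']]

# Mocked-up records for illustration (not real Zope output):
records = [
    {'name': 'Manager', 'selected': 'SELECTED'},
    {'name': 'Anonymous', 'selected': ''},
    {'name': 'Owner', 'selected': 'SELECTED'},
]
print(roles_with_permission(records))  # ['Manager', 'Owner']
```

In Zope this would be called as roles_with_permission(context.rolesOfPermission('Undo changes')), avoiding the len()/and trick in the original lambda.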
Robert
robert rottermann wrote at 2004-2-25 08:12 +0100:
... [ s for s in map(lambda x : len(x['selected']) and x['name'], context.rolesOfPermission('Undo changes')) if s]
I do not get the inherited roles, but only the ones assigned to the folder I perform the test on. If I do the same for "View", I also get the inherited roles.
Are you sure? It may only look that way, because you may be getting a default (see "AccessControl.Permission.Permission.getRoles", last lines).
Shouldn't rolesOfPermission also return the inherited roles?
I do not think so.
Hi Zopers,
The memory use of my Zope instance (2.6.2 with Python 2.1 on FreeBSD 4) increases over time until the whole thing hits the limit (at 512 MB) and restarts. I have installed LeakFinder, but there doesn't seem to be an obvious memory leak.
From what I've learned from LeakFinder, it seems that the Catalog is the problem, or rather, my use of the catalog. We have 50,000 objects in the Catalog, which are instances of ZClasses. These ZClasses sometimes have rather large string/text properties and some DateTime() properties. When memory use approaches 500 MB, I typically see around 100,000 instances of BTrees._IOBTree.IOBucket, which seems like a lot considering that we have 50,000 page views a day and this usually occurs after an hour or two. There are also around 50,000 DateTime() instances and quite a few OFS.Image.Image instances (maybe 15,000). And yes, our pages do use the Catalog extensively, but this seems hard to avoid.
How do I find where this memory goes? The pages do not take long to load (2 seconds max), and CPU usage is not so high that I'd suspect some kind of deadlock. Also, Zope never seems to release memory; it just takes more and more, even if I flush the cache and/or check the "perform garbage collection" box in LeakFinder.
Douwe Osinga http://www.world66.com
Douwe Osinga wrote at 2004-3-2 17:47 +0100:
... The memory use of my Zope instance (2.6.2 with Python 2.1 on FreeBSD 4) increases over time until the whole thing hits the limit (at 512 MB) and restarts. I have installed LeakFinder, but there doesn't seem to be an obvious memory leak.
From what I've learned from LeakFinder, it seems that the Catalog is the problem, or rather, my use of the catalog. We have 50,000 objects in the Catalog, which are instances of ZClasses. These ZClasses sometimes have rather large string/text properties and some DateTime() properties. When memory use approaches 500 MB, I typically see around 100,000 instances of BTrees._IOBTree.IOBucket,
Many of them are likely "ZCatalog" metadata blocks. They can become huge monsters.
Unless you do statistics or present huge numbers of hits in a single page, you do not need metadata. Access the object instead (via "proxy.getObject()").
Under no circumstances put large fields in the metadata table.
Also remove the "bobobase_modification_time" from this table.
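To illustrate the advice above in plain Python (no Zope required): a catalog "brain" should carry only small metadata plus a way to load the real object on demand, which is the pattern behind proxy.getObject(). The classes and field names here are made up for the sketch.

```python
# Illustrative sketch of "access the object instead of the metadata".
# FakeObject stands in for a cataloged ZClass instance; Brain stands in
# for a lightweight catalog record. Both are hypothetical names.
class FakeObject:
    def __init__(self, text):
        self.text = text  # a potentially large field, kept OFF the brain

class Brain:
    """Lightweight catalog record: holds an id only, no large fields."""
    def __init__(self, oid, store):
        self.oid = oid
        self._store = store
    def getObject(self):
        # Load the full object only when it is actually needed,
        # instead of duplicating its large fields in the metadata table.
        return self._store[self.oid]

store = {1: FakeObject('a very large body ...')}
brain = Brain(1, store)
print(brain.getObject().text)
```

The point of the pattern: a search result set of thousands of brains stays small, and only the handful of objects you actually display get loaded.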
... How do I find where this memory goes? The pages do not take long to load (2 seconds max), and CPU usage is not so high that I'd suspect some kind of deadlock. Also, Zope never seems to release memory; it just takes more and more, even if I flush the cache and/or check the "perform garbage collection" box in LeakFinder.
Older versions of Zope had a "cacheExtremeDetail" page in "Control_Panel --> Database". This was lost with Zope 2.7.
But "ZODB.DB" still has the respective method "cacheExtremeDetail". It tells you which objects (given by their "oid" and class) are in the various caches. Given the "oid" you can find the actual objects (although the analysis is very tedious).
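A first step in that tedious analysis is usually to group the cache records by class to see which type dominates. The sketch below assumes each record returned by cacheExtremeDetail() is a dict with a 'klass' (class name) key; the exact record format is an assumption here, so check it against your Zope version.

```python
# Sketch: summarize cache contents by class. Assumes each record is a
# dict containing at least a 'klass' key with the class name -- an
# assumption about cacheExtremeDetail()'s record format.
def count_by_class(detail):
    counts = {}
    for rec in detail:
        klass = rec['klass']
        counts[klass] = counts.get(klass, 0) + 1
    # Largest consumers first.
    return sorted(counts.items(), key=lambda kv: -kv[1])

# Mocked-up detail records for illustration (not real ZODB output):
detail = [
    {'oid': 1, 'klass': 'BTrees._IOBTree.IOBucket'},
    {'oid': 2, 'klass': 'BTrees._IOBTree.IOBucket'},
    {'oid': 3, 'klass': 'OFS.Image.Image'},
]
print(count_by_class(detail))
# [('BTrees._IOBTree.IOBucket', 2), ('OFS.Image.Image', 1)]
```

A class that tops this list far out of proportion to your object counts (like the IOBucket numbers reported above) is where to dig further.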
Hi,
[stuff deleted where I describe that my zope memory use seems to
grow unreasonably (at least, that's what I think) with lots of BTrees._IOBTree.IOBucket objects]
Many of them are likely "ZCatalog" metadata blocks. They can become huge monsters.
Unless you do statistics or present huge numbers of hits in a single page, you do not need metadata. Access the object instead (via "proxy.getObject()").
Under no circumstances put large fields in the metadata table.
Also remove the "bobobase_modification_time" from this table.
Tweaking how the catalog is used and what's in it helps marginally. Thanks for the tips. However, I still have memory problems. For example:
I've packed the database; it is now ~300 MB. If I try to export the main object, i.e. the database containing all our travel information, Zope runs out of memory, gobbling up the 512 MB of RAM I have per process in a few minutes. I then cleared the catalog, and now the zexp export sometimes (but not always) makes it.
Also, I reduced the number of threads and the number of objects per thread, but this doesn't seem to help.
Douwe Osinga
Douwe Osinga wrote:
I've packed the database; it is now ~300 MB. If I try to export the main object, i.e. the database containing all our travel information, Zope runs out of memory, gobbling up the 512 MB of RAM I have per process in a few minutes. I then cleared the catalog, and now the zexp export sometimes (but not always) makes it.
Well yes, export is a memory-intensive process ;-)
Why are you looking to do such an export?
cheers,
Chris