[ZODB-Dev] what's the latest on zodb/zeo+memcached?

Claudiu Saftoiu csaftoiu at gmail.com
Tue Jan 15 19:08:34 UTC 2013


On Tue, Jan 15, 2013 at 2:07 PM, Leonardo Santagada <santagada at gmail.com> wrote:

>
>
>
> On Tue, Jan 15, 2013 at 3:10 PM, Jim Fulton <jim at zope.com> wrote:
>
>> On Tue, Jan 15, 2013 at 12:00 PM, Claudiu Saftoiu <csaftoiu at gmail.com>
>> wrote:
>> > Hello all,
>> >
>> > I'm looking to speed up my server, and it seems memcached would be a
>> > good way to do it - at least for the `Catalog`. (I've already put the
>> > catalog in a separate ZODB with a separate ZEO server, with persistent
>> > client caching enabled, and it still doesn't run as nicely as I'd
>> > like...)
>> >
>> > I've googled around a bit and found nothing definitive, though...
>> > what's the best way to combine ZODB/ZEO + memcached as of now?
>>
>> My opinion is that a distributed memcached isn't a big enough win,
>> but this likely depends on your use cases.
>>
>> We (ZC) took a different approach.  If there is a reasonable way
>> to classify your corpus by URL (or other request parameter),
>> then check out zc.resumelb.  This fit our use cases well.
>>
>
> Maybe I don't understand ZODB correctly, but if the catalog is small
> enough to fit in memory, wouldn't it be much faster to just cache the
> whole catalog on the clients? Then, at least for catalog searches,
> everything runs about as fast as plain in-memory Python objects.
> Memcache would add an extra serialize/deserialize step (plus network
> I/O, plus context switches).
>
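For reference, the persistent client-side caching mentioned above is typically enabled in the ZEO client section of the ZODB configuration. A minimal sketch, assuming a dedicated catalog ZEO server on localhost:8101 and a writable cache directory (both hypothetical values):

```
%import ZEO

<zodb catalog>
  <zeoclient>
    server localhost:8101
    # naming the client enables a persistent on-disk cache that
    # survives restarts of the client process
    client catalog
    var /var/cache/zeo
    # large enough to hold the whole catalog, so reads rarely
    # go back to the ZEO server
    cache-size 512MB
  </zeoclient>
</zodb>
```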

That would be fine, actually. Is there a way to explicitly tell ZODB/ZEO to
load an entire object graph and keep it in the cache? I also want it to
survive a connection restart, but I think I've already accomplished that
with persistent client-side caching.
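There is no single "pin this object" call in ZODB, but one common trick is to warm the cache by walking the object graph and activating every persistent object; once loaded, the state sits in the in-memory pickle cache and, with a persistent ZEO client cache configured, on disk as well. A hedged sketch, not a ZODB API: `preload` and the `get_children` hook are hypothetical names, and how to enumerate sub-objects depends on your data structures (for a catalog, e.g., its indexes and their BTrees):

```python
def preload(root, get_children):
    """Walk a persistent object graph, activating each object so its
    state is loaded into the ZODB/ZEO caches.

    `get_children(obj)` must return the persistent sub-objects of
    `obj` (hypothetical hook; supply one for your data structures).
    Returns the number of objects activated.
    """
    seen = set()
    stack = [root]
    loaded = 0
    while stack:
        obj = stack.pop()
        # use the ZODB oid when present, else identity, to avoid cycles
        key = getattr(obj, '_p_oid', None) or id(obj)
        if key in seen:
            continue
        seen.add(key)
        activate = getattr(obj, '_p_activate', None)
        if activate is not None:
            activate()  # forces the object's pickled state to load
            loaded += 1
        stack.extend(get_children(obj))
    return loaded
```

Run once after connecting (e.g. `preload(catalog, ...)`). Note the object cache is an LRU bounded by the connection's cache size, so other traffic can still evict entries; sizing the cache larger than the catalog is what keeps searches in memory.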

