[Zope] large images to a database via zope.

marc lindahl <marc@bowery.com>
Fri, 13 Apr 2001 13:02:13 -0400


> Speed is probably not the biggest issue with storing large objects in
> the ZODB. What makes it less practical is the undo support. Every time
> you change a property on one of these large objects, the entire object
> is replicated. This can quickly eat up space.

For a static image store, like a library, I don't see the problem.  Just
'pack', right?  In fact with a library-like store there shouldn't be much
modification anyway.
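
For the record, here's a minimal sketch of what a pack looks like against a
standalone FileStorage (assuming direct ZODB access; from within Zope you'd
normally pack via the Control Panel instead):

    import time
    from ZODB import FileStorage, DB

    # Open the storage behind Data.fs and wrap it in a database object.
    storage = FileStorage.FileStorage('Data.fs')
    db = DB(storage)

    # Discard object revisions (and their undo records) older than one
    # day; the file shrinks by roughly the space those revisions used.
    db.pack(time.time() - 86400)
    db.close()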

From what I understand of LocalFS or ExtImage, every time you access the
object pointing to it, it reads the whole file.  Is that correct?  Because
it seems like that would slow down searches (e.g. searching by author).
> Also, there are some issues with Data.fs files when they grow to 2GB in
> size (both OS and Python). This can happen rather quickly if you have
> many large objects stored that are changed. There are ways around this,
> but they take some effort.

I understand the 2GB limit applies only to older Linux releases, right?  Like
RH6?  Red Hat 7 doesn't have this file size limit, and other Linux
distributions (like Debian) don't have it either.  Is there a file size limit
imposed by Python itself?  Anyway, there's this:
http://www.zope.org/Members/hathawsh/PartitionedFileStorage
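
As for Python itself: as far as I know the interpreter has no limit of its
own, but a build without large file support can fail once file offsets pass
2**31 - 1, even on a filesystem that allows bigger files.  A quick
(hypothetical) probe, which creates a sparse file rather than actually
writing 2GB to disk:

    import tempfile

    f = tempfile.TemporaryFile()
    try:
        # Seeking past 2**31 - 1 and writing one byte forces both the
        # interpreter and the filesystem to handle a >2GB offset; the
        # result is a sparse file, so no real 2GB hits the disk.
        f.seek(2**31 + 1)
        f.write(b'x')
        print('large file support OK')
    except (IOError, OverflowError):
        print('no large file support (2GB limit applies)')
    f.close()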