[Zope] [PROBLEM] ftping large files to ZOPE

alan runyan runyaga@thisbox.com
Thu, 22 Feb 2001 18:07:02 -0600


Zope 2.3.0 (binary release, python 1.5.2, linux2-x86)
1.5.2 (#10, Dec 6 1999, 12:16:27) [GCC 2.7.2.3]

I've uploaded pretty large files into Zope and always had fairly decent
success, but I just tried to upload a 150MB file, and these are the console logs:

2001-02-22T23:26:44 INFO(0) ZServer Successful login.
------
2001-02-22T23:26:48 PROBLEM(100) ZServer unhandled connect event
------
2001-02-22T23:26:50 PROBLEM(100) ZServer unhandled connect event
------
2001-02-23T00:45:00 ERROR(200) ZServer uncaptured python exception, closing
channel <zope_ftp_channel connected 208.123.214.200:25172 at 87b8bd8>
(socket.error:
(9, 'Bad file descriptor')
[/home/zope/Zope-2.3.0-linux2-x86/ZServer/medusa/asynchat.py|initiate_send|211]
[/home/zope/Zope-2.3.0-linux2-x86/ZServer/medusa/asyncore.py|send|282])

Zope appears to keep all the incoming xfer bytes in memory and only writes
them out to disk once the xfer completes.  On my first attempt to upload this
file, I was playing around and tried to upload a smaller file to Zope in
another directory, and the original large-file connection broke.  So I
decided to just leave it alone until the file xfer completed.  When I got
home the file was still xfering, and I watched the memory creep up to
roughly the file size.  It got to about 175MB and then something really odd
happened: the CPU went from ~1% idle to 60% idle and the memory started
ballooning (scaring me, because I only had ~400MB of total memory on the
box).  It got to around 380MB and then the CPU went back to idle; I never
ran out of virtual memory.  When I logged into Zope, my file hadn't saved.  :(
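For comparison, here's a minimal sketch of the chunked-spooling approach that avoids this memory behavior -- reading the upload in fixed-size pieces and writing each piece to disk as it arrives, so peak memory stays near the chunk size rather than the file size.  This is purely illustrative; the function name and chunk size are my own, and it's not what Zope's FTP channel actually does.

```python
import io
import tempfile

CHUNK = 64 * 1024  # 64KB pieces, instead of buffering the whole body


def spool_to_disk(src, dst, chunk_size=CHUNK):
    """Copy a stream to disk in fixed-size chunks.

    Peak memory stays around chunk_size no matter how large the
    upload is -- unlike accumulating the entire transfer in memory
    and writing it out only at the end.
    """
    total = 0
    while True:
        piece = src.read(chunk_size)
        if not piece:
            break
        dst.write(piece)
        total += len(piece)
    return total


# Simulate a 1MB "upload" spooled to a temporary file.
payload = b"x" * (1024 * 1024)
with tempfile.TemporaryFile() as spool:
    written = spool_to_disk(io.BytesIO(payload), spool)
    print(written)  # 1048576
```

With a 150MB transfer, the same loop would still only ever hold one 64KB chunk at a time.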

I've uploaded large files (I believe larger than 100MB) before, but that was
on a local network and I didn't have a problem.  'In the wild', though, it
didn't work as I expected.  Does anyone have any insight?  Is uploading
files >XXX bytes just not a good idea?  Is this commonly known?