[Zope-Checkins] CVS: Zope/ZServer/medusa - asyncore.py:1.14.2.1 asynchat.py:1.17.76.1

Brian Lloyd brian@digicool.com
Thu, 13 Dec 2001 15:29:38 -0500


Update of /cvs-repository/Zope/ZServer/medusa
In directory cvs.zope.org:/tmp/cvs-serv21165/medusa

Modified Files:
      Tag: Zope-2_5-branch
	asynchat.py 
Added Files:
      Tag: Zope-2_5-branch
	asyncore.py 
Log Message:
Changes to address asyncore problems. We now include the version of 
asyncore / asynchat from the Python 2.2 distribution, and do some hackery 
in ZServer to ensure that the bundled versions are used instead of the 
built-in ones in Python 2.1.
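The "hackery" amounts to seeding sys.modules before anything imports the
standard-library copies.  A minimal sketch of the idea (hypothetical names;
the actual ZServer code is not part of this checkin):

```python
import sys
import types

# Stand-in for the bundled copy; in Zope this would be the module at
# ZServer/medusa/asyncore.py.  `bundled_asyncore` and the BUNDLED flag
# are purely illustrative.
bundled_asyncore = types.ModuleType('asyncore')
bundled_asyncore.BUNDLED = 1

# Seed sys.modules before anything does `import asyncore`; later imports
# then resolve to the bundled copy instead of the built-in one.
sys.modules['asyncore'] = bundled_asyncore

import asyncore  # resolves to bundled_asyncore via the sys.modules entry
```

The same aliasing would be done for asynchat; the one hard requirement is
that the sys.modules entry be installed before any other code imports the
standard-library module.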


=== Added File Zope/ZServer/medusa/asyncore.py === (445/545 lines abridged)
# -*- Mode: Python -*-
#   Id: asyncore.py,v 2.51 2000/09/07 22:29:26 rushing Exp
#   Author: Sam Rushing <rushing@nightmare.com>

# ======================================================================
# Copyright 1996 by Sam Rushing
#
#                         All Rights Reserved
#
# Permission to use, copy, modify, and distribute this software and
# its documentation for any purpose and without fee is hereby
# granted, provided that the above copyright notice appear in all
# copies and that both that copyright notice and this permission
# notice appear in supporting documentation, and that the name of Sam
# Rushing not be used in advertising or publicity pertaining to
# distribution of the software without specific, written prior
# permission.
#
# SAM RUSHING DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE,
# INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS, IN
# NO EVENT SHALL SAM RUSHING BE LIABLE FOR ANY SPECIAL, INDIRECT OR
# CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS
# OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT,
# NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
# CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
# ======================================================================

"""Basic infrastructure for asynchronous socket service clients and servers.

There are only two ways to have a program on a single processor do "more
than one thing at a time".  Multi-threaded programming is the simplest and
most popular way to do it, but there is another very different technique
that lets you have nearly all the advantages of multi-threading without
actually using multiple threads.  It's really only practical if your
program is largely I/O bound.  If your program is CPU bound, then
pre-emptive scheduled threads are probably what you really need.  Network
servers are rarely CPU-bound, however.

If your operating system supports the select() system call in its I/O
library (and nearly all do), then you can use it to juggle multiple
communication channels at once; doing other work while your I/O is taking
place in the "background."  Although this strategy can seem strange and
complex, especially at first, it is in many ways easier to understand and
control than multi-threaded programming. The module documented here solves
many of the difficult problems for you, making the task of building
sophisticated high-performance network servers and clients a snap.
"""

import exceptions
import select

[-=- -=- -=- 445 lines omitted -=- -=- -=-]

# After a little research (reading man pages on various unixen, and
# digging through the linux kernel), I've determined that select()
# isn't meant for doing asynchronous file i/o.
# Heartening, though - reading linux/mm/filemap.c shows that linux
# supports asynchronous read-ahead.  So _MOST_ of the time, the data
# will be sitting in memory for us already when we go to read it.
#
# What other OS's (besides NT) support async file i/o?  [VMS?]
#
# Regardless, this is useful for pipes, and stdin/stdout...

import os
if os.name == 'posix':
    import fcntl

    class file_wrapper:
        # here we override just enough to make a file
        # look like a socket for the purposes of asyncore.
        def __init__ (self, fd):
            self.fd = fd

        def recv (self, *args):
            return apply (os.read, (self.fd,)+args)

        def send (self, *args):
            return apply (os.write, (self.fd,)+args)

        read = recv
        write = send

        def close (self):
            return os.close (self.fd)

        def fileno (self):
            return self.fd

    class file_dispatcher (dispatcher):
        def __init__ (self, fd):
            dispatcher.__init__ (self)
            self.connected = 1
            # set it to non-blocking mode
            flags = fcntl.fcntl (fd, fcntl.F_GETFL, 0)
            flags = flags | os.O_NONBLOCK
            fcntl.fcntl (fd, fcntl.F_SETFL, flags)
            self.set_file (fd)

        def set_file (self, fd):
            self._fileno = fd
            self.socket = file_wrapper (fd)
            self.add_channel()


=== Zope/ZServer/medusa/asynchat.py 1.17 => 1.17.76.1 ===
-#	$Id$
-#	Author: Sam Rushing <rushing@nightmare.com>
+#       Id: asynchat.py,v 2.26 2000/09/07 22:29:26 rushing Exp
+#       Author: Sam Rushing <rushing@nightmare.com>
 
 # ======================================================================
 # Copyright 1996 by Sam Rushing
-# 
+#
 #                         All Rights Reserved
-# 
+#
 # Permission to use, copy, modify, and distribute this software and
 # its documentation for any purpose and without fee is hereby
 # granted, provided that the above copyright notice appear in all
@@ -15,7 +15,7 @@
 # Rushing not be used in advertising or publicity pertaining to
 # distribution of the software without specific, written prior
 # permission.
-# 
+#
 # SAM RUSHING DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE,
 # INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS, IN
 # NO EVENT SHALL SAM RUSHING BE LIABLE FOR ANY SPECIAL, INDIRECT OR
@@ -25,7 +25,7 @@
 # CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
 # ======================================================================
 
-"""A class supporting chat-style (command/response) protocols.
+r"""A class supporting chat-style (command/response) protocols.
 
 This class adds support for 'chat' style protocols - where one side
 sends a 'command', and the other sends a response (examples would be
@@ -48,59 +48,58 @@
 
 import socket
 import asyncore
-import string
 
 class async_chat (asyncore.dispatcher):
     """This is an abstract class.  You must derive from this class, and add
     the two methods collect_incoming_data() and found_terminator()"""
-    
+
     # these are overridable defaults
-    
-    ac_in_buffer_size	= 4096
-    ac_out_buffer_size	= 4096
-    
+
+    ac_in_buffer_size       = 4096
+    ac_out_buffer_size      = 4096
+
     def __init__ (self, conn=None):
         self.ac_in_buffer = ''
         self.ac_out_buffer = ''
         self.producer_fifo = fifo()
         asyncore.dispatcher.__init__ (self, conn)
-        
+
     def set_terminator (self, term):
         "Set the input delimiter.  Can be a fixed string of any length, an integer, or None"
         self.terminator = term
-        
+
     def get_terminator (self):
         return self.terminator
-        
-        # grab some more data from the socket,
-        # throw it to the collector method,
-        # check for the terminator,
-        # if found, transition to the next state.
-        
+
+    # grab some more data from the socket,
+    # throw it to the collector method,
+    # check for the terminator,
+    # if found, transition to the next state.
+
     def handle_read (self):
-    
+
         try:
             data = self.recv (self.ac_in_buffer_size)
         except socket.error, why:
             self.handle_error()
             return
-            
+
         self.ac_in_buffer = self.ac_in_buffer + data
-        
+
         # Continue to search for self.terminator in self.ac_in_buffer,
         # while calling self.collect_incoming_data.  The while loop
         # is necessary because we might read several data+terminator
         # combos with a single recv(1024).
-        
+
         while self.ac_in_buffer:
             lb = len(self.ac_in_buffer)
             terminator = self.get_terminator()
             if terminator is None:
-                    # no terminator, collect it all
+                # no terminator, collect it all
                 self.collect_incoming_data (self.ac_in_buffer)
                 self.ac_in_buffer = ''
             elif type(terminator) == type(0):
-                    # numeric terminator
+                # numeric terminator
                 n = terminator
                 if lb < n:
                     self.collect_incoming_data (self.ac_in_buffer)
@@ -112,71 +111,71 @@
                     self.terminator = 0
                     self.found_terminator()
             else:
-                    # 3 cases:
-                    # 1) end of buffer matches terminator exactly:
-                    #    collect data, transition
-                    # 2) end of buffer matches some prefix:
-                    #    collect data to the prefix
-                    # 3) end of buffer does not match any prefix:
-                    #    collect data
+                # 3 cases:
+                # 1) end of buffer matches terminator exactly:
+                #    collect data, transition
+                # 2) end of buffer matches some prefix:
+                #    collect data to the prefix
+                # 3) end of buffer does not match any prefix:
+                #    collect data
                 terminator_len = len(terminator)
-                index = string.find (self.ac_in_buffer, terminator)
+                index = self.ac_in_buffer.find(terminator)
                 if index != -1:
-                        # we found the terminator
+                    # we found the terminator
                     if index > 0:
-                            # don't bother reporting the empty string (source of subtle bugs)
+                        # don't bother reporting the empty string (source of subtle bugs)
                         self.collect_incoming_data (self.ac_in_buffer[:index])
                     self.ac_in_buffer = self.ac_in_buffer[index+terminator_len:]
                     # This does the Right Thing if the terminator is changed here.
                     self.found_terminator()
                 else:
-                        # check for a prefix of the terminator
+                    # check for a prefix of the terminator
                     index = find_prefix_at_end (self.ac_in_buffer, terminator)
                     if index:
                         if index != lb:
-                                # we found a prefix, collect up to the prefix
+                            # we found a prefix, collect up to the prefix
                             self.collect_incoming_data (self.ac_in_buffer[:-index])
                             self.ac_in_buffer = self.ac_in_buffer[-index:]
                         break
                     else:
-                            # no prefix, collect it all
+                        # no prefix, collect it all
                         self.collect_incoming_data (self.ac_in_buffer)
                         self.ac_in_buffer = ''
-                        
+
     def handle_write (self):
         self.initiate_send ()
-        
+
     def handle_close (self):
         self.close()
-        
+
     def push (self, data):
         self.producer_fifo.push (simple_producer (data))
         self.initiate_send()
-        
+
     def push_with_producer (self, producer):
         self.producer_fifo.push (producer)
         self.initiate_send()
-        
+
     def readable (self):
         "predicate for inclusion in the readable for select()"
         return (len(self.ac_in_buffer) <= self.ac_in_buffer_size)
-        
+
     def writable (self):
         "predicate for inclusion in the writable for select()"
         # return len(self.ac_out_buffer) or len(self.producer_fifo) or (not self.connected)
         # this is about twice as fast, though not as clear.
         return not (
-                (self.ac_out_buffer is '') and
+                (self.ac_out_buffer == '') and
                 self.producer_fifo.is_empty() and
                 self.connected
                 )
-        
+
     def close_when_done (self):
         "automatically close this channel once the outgoing queue is empty"
         self.producer_fifo.push (None)
-        
-        # refill the outgoing buffer by calling the more() method
-        # of the first producer in the queue
+
+    # refill the outgoing buffer by calling the more() method
+    # of the first producer in the queue
     def refill_buffer (self):
         _string_type = type('')
         while 1:
@@ -201,38 +200,38 @@
                     self.producer_fifo.pop()
             else:
                 return
-                
+
     def initiate_send (self):
         obs = self.ac_out_buffer_size
         # try to refill the buffer
         if (len (self.ac_out_buffer) < obs):
             self.refill_buffer()
-            
+
         if self.ac_out_buffer and self.connected:
-                # try to send the buffer
+            # try to send the buffer
             try:
                 num_sent = self.send (self.ac_out_buffer[:obs])
                 if num_sent:
                     self.ac_out_buffer = self.ac_out_buffer[num_sent:]
-                    
+
             except socket.error, why:
                 self.handle_error()
                 return
-                
+
     def discard_buffers (self):
-            # Emergencies only!
+        # Emergencies only!
         self.ac_in_buffer = ''
         self.ac_out_buffer = ''
         while self.producer_fifo:
             self.producer_fifo.pop()
-            
-            
+
+
 class simple_producer:
 
     def __init__ (self, data, buffer_size=512):
         self.data = data
         self.buffer_size = buffer_size
-        
+
     def more (self):
         if len (self.data) > self.buffer_size:
             result = self.data[:self.buffer_size]
@@ -242,26 +241,26 @@
             result = self.data
             self.data = ''
             return result
-            
+
 class fifo:
     def __init__ (self, list=None):
         if not list:
             self.list = []
         else:
             self.list = list
-            
+
     def __len__ (self):
         return len(self.list)
-        
+
     def is_empty (self):
         return self.list == []
-        
+
     def first (self):
         return self.list[0]
-        
+
     def push (self, data):
         self.list.append (data)
-        
+
     def pop (self):
         if self.list:
             result = self.list[0]
@@ -269,24 +268,26 @@
             return (1, result)
         else:
             return (0, None)
-            
-            # Given 'haystack', see if any prefix of 'needle' is at its end.  This
-            # assumes an exact match has already been checked.  Return the number of
-            # characters matched.
-            # for example:
-            # f_p_a_e ("qwerty\r", "\r\n") => 1
-            # f_p_a_e ("qwertydkjf", "\r\n") => 0
-            # f_p_a_e ("qwerty\r\n", "\r\n") => <undefined>
-            
-            # this could maybe be made faster with a computed regex?
-            # [answer: no; circa Python-2.0, Jan 2001]
-            # new python:   28961/s
-            # old python:   18307/s
-            # re:           12820/s
-            # regex:        14035/s
-            
+
+# Given 'haystack', see if any prefix of 'needle' is at its end.  This
+# assumes an exact match has already been checked.  Return the number of
+# characters matched.
+# for example:
+# f_p_a_e ("qwerty\r", "\r\n") => 1
+# f_p_a_e ("qwerty\r\n", "\r\n") => <undefined> (exact match assumed checked already)
+# f_p_a_e ("qwertydkjf", "\r\n") => 0
+
+# this could maybe be made faster with a computed regex?
+# [answer: no; circa Python-2.0, Jan 2001]
+# python:    18307/s
+# re:        12820/s
+# regex:     14035/s
+
 def find_prefix_at_end (haystack, needle):
-    l = len(needle) - 1
-    while l and not haystack.endswith(needle[:l]):
-        l -= 1
-    return l
+    nl = len(needle)
+    result = 0
+    for i in range (1,nl):
+        if haystack[-(nl-i):] == needle[:(nl-i)]:
+            result = nl-i
+            break
+    return result
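The well-defined examples in the comment block can be checked directly.
This is a self-contained copy of the new find_prefix_at_end; note that the
loop never tests the full needle, so ruling out an exact terminator match
at the end of the haystack remains the caller's job:

```python
def find_prefix_at_end(haystack, needle):
    # How many leading characters of `needle` sit at the very end of
    # `haystack`?  Try the longest proper prefix first, then shorter
    # ones; return 0 when none match.
    nl = len(needle)
    result = 0
    for i in range(1, nl):
        if haystack[-(nl - i):] == needle[:(nl - i)]:
            result = nl - i
            break
    return result
```

For the two defined cases from the comment: a trailing "\r" against "\r\n"
matches one character, while "qwertydkjf" matches none.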