[Zope] ZSERVER, THREADS, PERFORMANCE & FUTURE PERFORMANCE

Sam Gendler sgendler@teknolojix.com
Thu, 11 Nov 1999 13:09:52 -0800


>
>
> This time I changed the concurrency (-c , -t is timelimit) on zeus_benc side
>  -c 1  - 41 TPS
>  -c 2  and up ~ 35 TPS and max 4 threads showing as being used in top.
>
> the top lines of top looked like this on -t 30 -c 100
>
>

If even going to 2 threads on the client side causes a 20% degradation in
performance, there has got to be something wrong with the threading model in the
server.  I will look at it as I get a chance, but I am in the midst of building a
huge application with a deadline of Dec 20, so I will not have much time until
then.  However, if someone could do the timings that I requested, I can certainly
help diagnose the problem.  The fact that it gets such poor performance with even a
single thread is quite bothersome.  A perl script should be able to do better than
that, quite frankly.  It tends to suggest something systemic, like a lookup that is
taking too long, or one of the ugly TCP side effects that I mentioned.

A couple of things to look for (things that have bitten apache and others over the
years)...

Make sure you send the largest possible packets.  If the client receives a 'small
packet' (I can't remember the criteria, although 1/2 maximum segment size rings a
bell), it will delay the ACK of that packet for 200ms while it waits for data to go
back the other way.  Since this is a web client, there is no data to go back the
other way, so there is a 200ms delay after every small packet from the server.  On
most systems, you can set TCP_NODELAY on the socket to disable the Nagle algorithm
and avoid the delay, but linux didn't support that option, last time I checked.
Many servers started out sending each HTTP header as a separate packet, or else
sending the headers as one packet and the body as another.  Either way, you will
usually trigger this behaviour.  Simply buffering the output is enough to prevent
the problem.
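To make the buffering idea concrete, here is a minimal sketch in Python (the helper name and response pieces are hypothetical, not Zope's actual API): join the status line, every header, and the body into a single buffer, then hand the kernel one write instead of one write per header line.

```python
import socket  # only needed for the TCP_NODELAY note below


def build_response(status_line, header_lines, body):
    """Buffer the status line, all headers, and the body into one bytes
    object so the kernel can emit full-sized packets instead of one tiny
    packet per header line (hypothetical helper, for illustration)."""
    # HTTP uses CRLF line endings; the empty element yields the blank
    # line that separates headers from body.
    parts = [status_line] + header_lines + [b"", body]
    return b"\r\n".join(parts)


response = build_response(
    b"HTTP/1.0 200 OK",
    [b"Content-Type: text/html", b"Content-Length: 5"],
    b"hello",
)
# a single sock.sendall(response) replaces a send() per header line;
# where the platform supports it, disabling Nagle helps as well:
#   sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
```

The point is not the helper itself but the single write: one large buffer lets the stack coalesce the reply into maximum-size segments, so no small packet sits waiting on a delayed ACK.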

If you are seeing latency between starting to receive the headers and finishing the
headers, or between receiving the headers and starting to receive the body, this is
probably the culprit.  Latency before receiving anything could be just about
anywhere, however.
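One way to check for that kind of gap is to time the headers and the body separately from a plain socket client. This is a self-contained sketch, not a Zope test: the toy server (which deliberately pauses between headers and body) and both function names are made up for the demonstration.

```python
import socket
import threading
import time


def serve_once(ports, body_delay):
    """Toy server for the measurement: accept one connection, send the
    headers immediately, pause, then send the body."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    ports.append(srv.getsockname()[1])   # publish the chosen port
    srv.listen(1)
    conn, _ = srv.accept()
    conn.recv(1024)                       # read (and ignore) the request
    conn.sendall(b"HTTP/1.0 200 OK\r\nContent-Length: 2\r\n\r\n")
    time.sleep(body_delay)                # the gap we want to observe
    conn.sendall(b"ok")
    conn.close()
    srv.close()


def header_body_gap(port):
    """Return (seconds until headers, seconds until body) for one request."""
    t0 = time.monotonic()
    c = socket.create_connection(("127.0.0.1", port))
    c.sendall(b"GET / HTTP/1.0\r\n\r\n")
    c.recv(4096)                          # headers arrive first
    t_headers = time.monotonic() - t0
    c.recv(4096)                          # body arrives after the gap
    t_body = time.monotonic() - t0
    c.close()
    return t_headers, t_body


ports = []
t = threading.Thread(target=serve_once, args=(ports, 0.25))
t.start()
while not ports:
    time.sleep(0.01)
t_headers, t_body = header_body_gap(ports[0])
t.join()
```

Pointed at a real server instead of the toy one, a consistent ~200ms difference between `t_headers` and `t_body` (or between header lines) is the signature of the small-packet/delayed-ACK interaction described above.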

The fact that going to 2 client threads hurts performance really points at the
thread scheduler itself.  It almost looks as though the server is trying to
handle all requests with the same thread.  Someone may want to check that the
server is actually doing what it is supposed to be doing.
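A quick way to sanity-check that idea, independent of Zope, is to record which thread serves each concurrent request. This sketch uses a plain thread pool as a stand-in for the server's worker threads; the handler and names are hypothetical.

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor


def handle(request_id, seen):
    """Stand-in for a request handler: note which thread served us."""
    seen.add(threading.current_thread().name)
    time.sleep(0.05)        # simulate work so requests overlap
    return request_id


seen = set()
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(lambda i: handle(i, seen), range(8)))

# A healthy multithreaded server spreads overlapping requests across
# workers; if len(seen) == 1, every request ran on a single thread,
# which is exactly the pathology suspected above.
```

Instrumenting the real server's handler the same way (logging the thread name per request) would show directly whether concurrent clients are being serialized onto one thread.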

Just my $.02

--sam