[Checkins] SVN: zodbshootout/ Created the zodbshootout package.
Shane Hathaway
shane at hathawaymix.org
Tue Nov 17 03:42:10 EST 2009
Log message for revision 105749:
Created the zodbshootout package.
Changed:
A zodbshootout/
A zodbshootout/trunk/
A zodbshootout/trunk/README.txt
A zodbshootout/trunk/buildout-oracle.cfg
A zodbshootout/trunk/buildout.cfg
A zodbshootout/trunk/etc/
A zodbshootout/trunk/etc/fs-sample.conf
A zodbshootout/trunk/etc/my.cnf.in
A zodbshootout/trunk/etc/oracle-sample.conf
A zodbshootout/trunk/etc/remote-sample.conf
A zodbshootout/trunk/etc/sample.conf
A zodbshootout/trunk/etc/zeo.conf.in
A zodbshootout/trunk/mysql-no-read-etc.patch
A zodbshootout/trunk/reset
A zodbshootout/trunk/setup.py
A zodbshootout/trunk/src/
A zodbshootout/trunk/src/zodbshootout/
A zodbshootout/trunk/src/zodbshootout/__init__.py
A zodbshootout/trunk/src/zodbshootout/fork.py
A zodbshootout/trunk/src/zodbshootout/main.py
-=-
Added: zodbshootout/trunk/README.txt
===================================================================
--- zodbshootout/trunk/README.txt (rev 0)
+++ zodbshootout/trunk/README.txt 2009-11-17 08:42:10 UTC (rev 105749)
@@ -0,0 +1,187 @@
+zodbshootout
+------------
+
+This application measures and compares the performance of various
+ZODB storages and configurations. It is derived from the RelStorage
+speedtest script, but this version allows more types of storages and
+configurations, provides more measurements, and produces numbers that
+are easier to interpret.
+
+Although you can ``easy_install`` this package, the best way to get
+started is to follow the directions below to set up a complete testing
+environment with sample tests.
+
+How to set up ``zodbshootout`` using Buildout
+---------------------------------------------
+
+First, be sure you have certain packages installed so you can compile
+software. Ubuntu and Debian users should do this (tested with Ubuntu
+8.04, Ubuntu 9.10, Debian Etch, and Debian Lenny)::
+
+ $ sudo apt-get install build-essential python-dev
+ $ sudo apt-get install ncurses-dev libevent-dev libreadline-dev zlib1g-dev
+
+Download the ``zodbshootout`` tar file. Unpack it and change to its
+top level directory::
+
+ $ tar xvzf zodbshootout-*.tar.gz
+ $ cd zodbshootout-*
+
+Set up that same directory as a partly isolated Python environment
+using ``virtualenv``::
+
+ $ virtualenv --no-site-packages .
+
+Install Buildout in that environment, which will create a script
+named ``bin/buildout``::
+
+ $ bin/easy_install zc.buildout
+
+Make sure you have adequate space in your temporary directory (normally
+``/tmp``) to compile MySQL and PostgreSQL. You may want to switch to a
+different temporary directory by setting the ``TMPDIR`` environment
+variable::
+
+ $ TMPDIR=/var/tmp
+ $ export TMPDIR
+
+Run Buildout. Buildout will follow the instructions specified by
+``buildout.cfg`` to download, compile, and initialize versions of MySQL
+and PostgreSQL. It will also install several other Python packages.
+This may take a half hour or more the first time::
+
+ $ bin/buildout
+
+If that command fails, first check for missing dependencies. The
+dependencies are listed above. To retry, just run ``bin/buildout``
+again.
+
+Once Buildout completes successfully, start the test environment
+using Supervisord::
+
+ $ bin/supervisord
+
+Confirm that Supervisor started all processes::
+
+ $ bin/supervisorctl status
+
+If all processes are running, the test environment is now ready. Run
+a sample test::
+
+ $ bin/zodbshootout etc/sample.conf
+
+The ``sample.conf`` test compares the performance of RelStorage with
+MySQL and PostgreSQL, along with FileStorage behind ZEO, where the
+client and server are located on the same computer.
+
+See also ``remote-sample.conf``, which tests database speed over a
+network link. Set up ``remote-sample.conf`` by building
+``zodbshootout`` on two networked computers, then point the client at
+the server by changing the ``%define host`` line at the top of
+``remote-sample.conf``. The ``etc`` directory contains other sample
+database configurations.
+
+Running ``zodbshootout``
+------------------------
+
+The ``zodbshootout`` script accepts the name of a database
+configuration file. The configuration file contains a list of databases
+to test, in ZConfig format. The script deletes all data from each of
+the databases, then writes and reads the databases while taking
+measurements. Finally, the script produces a tabular summary of
+objects written or read per second in each configuration.
+
+**Repeated Warning**: ``zodbshootout`` deletes all data from all
+databases specified in the configuration file. Do not configure it to
+open production databases!
+
+The ``zodbshootout`` script accepts the following options.
+
+* ``-n`` (``--object-counts``) specifies how many persistent objects to
+ write or read per transaction. The default is 1000. An interesting
+ value to use is 1, causing the test to primarily measure the speed of
+ opening connections and committing transactions.
+
+* ``-c`` (``--concurrency``) specifies how many tests to run in
+ parallel. The default is 2. Each of the concurrent tests runs in a
+ separate process to prevent contention over the CPython global
+ interpreter lock. In single-host configurations, the measured
+ throughput should increase with the concurrency level, up to the
+ number of CPU cores in the computer. In more complex configurations,
+ performance will be limited by other factors such as network latency.
+
+* ``-p`` (``--profile``) enables the Python profiler while running the
+ tests and outputs a profile for each test in the specified directory.
+ Note that the profiler typically slows the measured database speed
+ considerably. This option is intended to help developers discover
+ performance bottlenecks.
+
+You should write a configuration file that models your intended
+database and network configuration. Running ``zodbshootout`` may reveal
+configuration optimizations that would significantly increase your
+application's performance.
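The per-process concurrency model described for ``-c`` above can be sketched with the standard ``multiprocessing`` module. This is a stand-in illustration, not zodbshootout's actual test code; a real worker would open a database connection and time its transactions::

```python
# Sketch: one process per concurrent test, so CPU-bound work is not
# serialized by the CPython global interpreter lock.
from multiprocessing import Process, Queue

def worker(n, out):
    # Stand-in for one test run; a real run would time database
    # transactions instead of doing arithmetic.
    out.put((n, sum(i * i for i in range(1000))))

def run_concurrently(concurrency):
    out = Queue()
    procs = [Process(target=worker, args=(n, out))
             for n in range(concurrency)]
    for p in procs:
        p.start()
    # Collect one result per child, then reap the processes.
    results = dict(out.get() for _ in procs)
    for p in procs:
        p.join()
    return results

results = run_concurrently(2)
```

Because each worker is a separate OS process, adding workers can scale throughput with CPU cores, which a thread pool under CPython could not.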
+
+Interpreting the Results
+------------------------
+
+The table below shows typical output of running ``zodbshootout`` with
+``etc/sample.conf`` on a dual core, 2.1 GHz laptop::
+
+ "Transaction", postgresql, mysql, mysql_mc, zeo_fs
+ "Write 1000 Objects", 6346, 9441, 8229, 4965
+ "Read 1000 Warm Objects", 5091, 6224, 21603, 1950
+ "Read 1000 Cold Objects", 5030, 10867, 5224, 1932
+ "Read 1000 Hot Objects", 36440, 38322, 38197, 38166
+ "Read 1000 Steamin' Objects", 4773034, 3909861, 3490163, 4254936
+
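The summary rows are comma-separated, so they are easy to load for further analysis. A minimal parsing sketch (the sample rows are copied from the table above)::

```python
import csv
from io import StringIO

# Two rows of the sample output shown above.
sample = '''"Transaction", postgresql, mysql, mysql_mc, zeo_fs
"Write 1000 Objects", 6346, 9441, 8229, 4965
"Read 1000 Warm Objects", 5091, 6224, 21603, 1950
'''

# skipinitialspace drops the space that follows each comma.
rows = list(csv.reader(StringIO(sample), skipinitialspace=True))
header, data = rows[0], rows[1:]
# Map each transaction label to its per-database objects/second figures.
rates = dict((row[0], [int(v) for v in row[1:]]) for row in data)
```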
+``zodbshootout`` runs five kinds of tests for each database. For each
+test, ``zodbshootout`` instructs all processes to perform similar
+transactions concurrently, computes the average duration of the
+concurrent transactions, takes the fastest timing of three test runs,
+and derives how many objects per second the database is capable of
+writing or reading under the given conditions.
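The arithmetic described above can be sketched as follows; the timing values are illustrative, not real measurements::

```python
def objects_per_second(runs, objects_per_txn):
    # Each run yields one duration per concurrent process: average the
    # concurrent durations within a run, take the fastest of the runs,
    # then derive how many objects per second that represents.
    best = min(sum(durations) / len(durations) for durations in runs)
    return objects_per_txn / best

# Three repetitions with two concurrent processes each (seconds).
rate = objects_per_second([[0.21, 0.19], [0.20, 0.24], [0.18, 0.22]], 1000)
```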
+
+* Write objects
+
+ ``zodbshootout`` begins a transaction, adds the specified number of
+ persistent objects to a ``PersistentMapping``, and commits the
+ transaction.
+
+* Read warm objects
+
+ In a different process, without clearing any caches,
+ ``zodbshootout`` reads all of the objects just written. This test
+ favors databases that use either a persistent cache or a cache
+ shared by multiple processes (such as memcached).
+
+* Read cold objects
+
+ In the same process as was used for reading warm objects,
+ ``zodbshootout`` clears all ZODB caches (the pickle cache, the ZEO
+ cache, and/or memcached) then reads all of the objects written by
+ the write test. This test favors databases that read objects
+ quickly, independently of caching.
+
+* Read hot objects
+
+ In the same process as was used for reading cold objects,
+ ``zodbshootout`` clears only the in-memory ZODB caches (the pickle
+ cache) then reads all of the objects written by the write test.
+ This test favors databases that have a process-specific cache.
+
+* Read steamin' objects
+
+ In the same process as was used for reading hot objects,
+ ``zodbshootout`` once again reads all of the objects written by the
+ write test. This test favors databases that take advantage of the
+ ZODB pickle cache. As can be seen from the sample output above,
+ accessing an object from the ZODB pickle cache is much faster
+ than any operation that requires network access or unpickling.
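The four read tests differ only in which cache layers still hold the data when the reads begin. A toy model of that layering (hypothetical names, not zodbshootout code)::

```python
class TinyDB:
    # Two cache layers over durable storage: a per-process "pickle
    # cache" and a shared cache (standing in for memcached or a
    # persistent ZEO cache).
    def __init__(self, data):
        self.storage = dict(data)
        self.shared_cache = {}
        self.pickle_cache = {}

    def read(self, oid):
        # Check the fastest layer first, fall back to storage.
        for layer in (self.pickle_cache, self.shared_cache):
            if oid in layer:
                return layer[oid]
        value = self.storage[oid]          # slowest path: hit storage
        self.shared_cache[oid] = value
        self.pickle_cache[oid] = value
        return value

db = TinyDB({1: 'a', 2: 'b'})
db.read(1)                   # cold: must go to storage
db.pickle_cache.clear()      # the hot test clears only this layer
db.read(1)                   # hot: served from the shared cache
```

Clearing every layer models the cold test; clearing only the pickle cache models the hot test; clearing nothing models warm and steamin' reads.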
+
+Known Issues
+------------
+
+This application seems to freeze with Python versions before 2.6, most
+likely due to an issue in the backported version of the
+``multiprocessing`` module. Assistance in finding a resolution would be
+greatly appreciated.
Added: zodbshootout/trunk/buildout-oracle.cfg
===================================================================
--- zodbshootout/trunk/buildout-oracle.cfg (rev 0)
+++ zodbshootout/trunk/buildout-oracle.cfg 2009-11-17 08:42:10 UTC (rev 105749)
@@ -0,0 +1,24 @@
+
+# To include support for Oracle 10g XE in zodbshootout, use:
+# bin/buildout -c buildout-oracle.cfg
+
+[buildout]
+extends = buildout.cfg
+parts =
+ cx_Oracle
+ ${buildout:base-parts}
+oracle_home = /usr/lib/oracle/xe/app/oracle/product/10.2.0/server
+
+[zodbshootout]
+eggs += cx_Oracle
+initialization =
+ import os
+ os.environ['ORACLE_HOME'] = '${buildout:oracle_home}'
+
+[cx_Oracle]
+recipe = zc.recipe.egg:custom
+environment = oracle-env
+rpath = ${buildout:oracle_home}/lib
+
+[oracle-env]
+ORACLE_HOME = ${buildout:oracle_home}
Added: zodbshootout/trunk/buildout.cfg
===================================================================
--- zodbshootout/trunk/buildout.cfg (rev 0)
+++ zodbshootout/trunk/buildout.cfg 2009-11-17 08:42:10 UTC (rev 105749)
@@ -0,0 +1,182 @@
+
+# Note: you need at least the following packages to build this.
+#
+# build-essential (for gcc, make, etc.)
+# python-dev
+# ncurses-dev
+# libevent-dev
+# libreadline-dev
+# zlib1g-dev
+#
+# Be sure you have lots of TMPDIR space. 1 GB recommended. You
+# might want to "export TMPDIR=/var/tmp".
+#
+# Confirm that name lookups to 'localhost' are working.
+
+[buildout]
+develop = .
+base-parts =
+ postgresql
+ postgresqlinit
+ psycopg2
+ mysqlconf
+ mysql
+ mysqlinit
+ MySQL-python
+ memcached
+ libmemcached
+ pylibmc
+ zeoconf
+ zeoserver
+ pidproxy
+ supervisor
+ zodbshootout
+parts = ${buildout:base-parts}
+find-links = http://packages.willowrise.org/
+
+[postgresql]
+recipe = zc.recipe.cmmi
+url = ftp://ftp.postgresql.org/pub/source/v8.4.1/postgresql-8.4.1.tar.bz2
+md5sum = f2015af17bacbbfe140daf0d1067f9c9
+extra_options =
+ --with-pgport=24004
+
+[postgresqlinit]
+recipe = iw.recipe.cmd
+on_install = true
+on_update = true
+datadir = ${buildout:directory}/var/postgresql
+cmds =
+ test -e ${buildout:directory}/bin/psql || \
+ ln -s ${postgresql:location}/bin/psql ${buildout:directory}/bin/psql
+ test -e ${postgresqlinit:datadir} && exit 0
+ ${postgresql:location}/bin/initdb ${postgresqlinit:datadir}
+ ${postgresql:location}/bin/postgres --single -D ${postgresqlinit:datadir} \
+ template1 << EOF
+ CREATE USER relstoragetest WITH PASSWORD 'relstoragetest';
+ CREATE DATABASE relstoragetest OWNER relstoragetest;
+ CREATE DATABASE relstoragetest_hf OWNER relstoragetest;
+ EOF
+ echo 'host all relstoragetest 0.0.0.0/0 md5' \
+ >> ${postgresqlinit:datadir}/pg_hba.conf
+ echo "listen_addresses = '*'" >> ${postgresqlinit:datadir}/postgresql.conf
+
+[psycopg2]
+recipe = zc.recipe.egg:custom
+environment = psycopg2-env
+rpath = ${postgresql:location}/lib
+
+[psycopg2-env]
+# This is needed to help psycopg2 find the pg_config script
+PATH=${postgresql:location}/bin:%(PATH)s
+
+
+[mysqlconf]
+recipe = collective.recipe.template
+input = ${buildout:directory}/etc/my.cnf.in
+output = ${buildout:directory}/etc/my.cnf
+datadir = ${buildout:directory}/var/mysql
+logdir = ${buildout:directory}/var/log
+port = 24002
+
+[mysql]
+recipe = zc.recipe.cmmi
+url = http://mysql.mirrors.pair.com/Downloads/MySQL-5.1/mysql-5.1.40.tar.gz
+md5sum = 32e7373c16271606007374396e6742ad
+extra_options =
+ --localstatedir=${mysqlconf:datadir}
+ --sysconfdir=${buildout:directory}/etc
+ --with-unix-socket-path=${mysqlconf:datadir}/mysqld.sock
+ --with-plugins=innobase,myisam
+# This MySQL instance should not load configuration from /etc
+patch = ${buildout:directory}/mysql-no-read-etc.patch
+
+[mysqlinit]
+recipe = iw.recipe.cmd
+on_install = true
+on_update = true
+cmds =
+ test -e ${buildout:directory}/bin/mysql || \
+ ln -s ${mysql:location}/bin/mysql ${buildout:directory}/bin/mysql
+ test -e ${mysqlconf:datadir}/relstoragetest_hf && exit 0
+ mkdir -p ${mysqlconf:datadir}
+ ${mysql:location}/bin/mysql_install_db
+ ${mysql:location}/bin/mysqld_safe &
+ sleep 5
+ ${buildout:directory}/bin/mysql -u root << EOF
+ CREATE USER 'relstoragetest'@'localhost' IDENTIFIED BY 'relstoragetest';
+ CREATE USER 'relstoragetest'@'%' IDENTIFIED BY 'relstoragetest';
+ CREATE DATABASE relstoragetest;
+ GRANT ALL ON relstoragetest.* TO 'relstoragetest'@'localhost';
+ GRANT ALL ON relstoragetest.* TO 'relstoragetest'@'%';
+ CREATE DATABASE relstoragetest_hf;
+ GRANT ALL ON relstoragetest_hf.* TO 'relstoragetest'@'localhost';
+ GRANT ALL ON relstoragetest_hf.* TO 'relstoragetest'@'%';
+ FLUSH PRIVILEGES;
+ EOF
+ kill `cat ${mysqlconf:datadir}/mysqld.pid`
+
+[MySQL-python]
+recipe = zc.recipe.egg:custom
+find-links = http://packages.willowrise.org/
+environment = MySQL-python-env
+rpath = ${mysql:location}/lib/mysql
+
+[MySQL-python-env]
+# This is needed to help MySQL-python find the mysql_config script
+PATH=${mysql:location}/bin:%(PATH)s
+
+[memcached]
+recipe = zc.recipe.cmmi
+url = http://memcached.googlecode.com/files/memcached-1.4.3.tar.gz
+md5sum = 83c6cc6bad9612536b5acbbbddab3eb3
+
+[libmemcached]
+recipe = zc.recipe.cmmi
+url = http://download.tangent.org/libmemcached-0.35.tar.gz
+md5sum = 1fd295009451933ac837a49265d702da
+extra_options = --without-memcached
+
+[pylibmc]
+recipe = zc.recipe.egg:custom
+environment = pylibmc-env
+rpath = ${libmemcached:location}/lib
+
+[pylibmc-env]
+LIBMEMCACHED_DIR=${libmemcached:location}
+
+[zeoconf]
+recipe = collective.recipe.template
+input = ${buildout:directory}/etc/zeo.conf.in
+output = ${buildout:directory}/etc/zeo.conf
+port = 24003
+
+[zeoserver]
+recipe = zc.recipe.egg
+eggs = ZODB3
+scripts = runzeo
+
+[pidproxy]
+recipe = zc.recipe.egg
+eggs = supervisor
+scripts = pidproxy
+
+[supervisor]
+recipe = collective.recipe.supervisor
+port = 127.0.0.1:24001
+serverurl = http://127.0.0.1:24001
+programs =
+ 10 postgresql ${postgresql:location}/bin/postgres [-D ${postgresqlinit:datadir}] ${postgresql:location} true
+ 20 mysql ${buildout:directory}/bin/pidproxy [${mysqlconf:datadir}/mysqld.pid ${mysql:location}/bin/mysqld_safe] ${mysql:location} true
+ 30 zeo ${buildout:directory}/bin/runzeo [-C ${buildout:directory}/etc/zeo.conf] ${buildout:directory} true
+ 40 memcached ${memcached:location}/bin/memcached [-s ${buildout:directory}/var/memcached.sock] ${memcached:location} true
+
+[zodbshootout]
+recipe = zc.recipe.egg
+eggs =
+ zodbshootout
+ RelStorage
+ MySQL-python
+ psycopg2
+ pylibmc
+interpreter = py
Added: zodbshootout/trunk/etc/fs-sample.conf
===================================================================
--- zodbshootout/trunk/etc/fs-sample.conf (rev 0)
+++ zodbshootout/trunk/etc/fs-sample.conf 2009-11-17 08:42:10 UTC (rev 105749)
@@ -0,0 +1,55 @@
+
+# This configuration compares a database running raw FileStorage
+# (no ZEO) and a database running FileStorage behind ZEO with a
+# persistent ZEO cache against some other databases.
+#
+# *This test can only run with a concurrency level of 1!*
+
+%import relstorage
+
+<zodb fs>
+ <filestorage>
+ path var/Data2.fs
+ </filestorage>
+</zodb>
+
+<zodb zeofs_pcache>
+ <zeoclient>
+ server localhost:24003
+ client 0
+ var var
+ cache-size 200000000
+ </zeoclient>
+</zodb>
+
+<zodb zeo_fs>
+ <zeoclient>
+ server localhost:24003
+ </zeoclient>
+</zodb>
+
+<zodb mysql_hf>
+ <relstorage>
+ keep-history false
+ poll-interval 5
+ <mysql>
+ db relstoragetest_hf
+ user relstoragetest
+ passwd relstoragetest
+ </mysql>
+ </relstorage>
+</zodb>
+
+<zodb mysql_hf_mc>
+ <relstorage>
+ keep-history false
+ poll-interval 5
+ cache-module-name relstorage.pylibmc_wrapper
+ cache-servers var/memcached.sock
+ <mysql>
+ db relstoragetest_hf
+ user relstoragetest
+ passwd relstoragetest
+ </mysql>
+ </relstorage>
+</zodb>
Added: zodbshootout/trunk/etc/my.cnf.in
===================================================================
--- zodbshootout/trunk/etc/my.cnf.in (rev 0)
+++ zodbshootout/trunk/etc/my.cnf.in 2009-11-17 08:42:10 UTC (rev 105749)
@@ -0,0 +1,89 @@
+#
+# The MySQL database server configuration file.
+#
+[client]
+socket = ${datadir}/mysqld.sock
+port = ${port}
+
+[mysqld_safe]
+socket = ${datadir}/mysqld.sock
+nice = 0
+
+[mysqld]
+#
+# * Basic Settings
+#
+
+socket = ${datadir}/mysqld.sock
+port = ${port}
+pid-file = ${datadir}/mysqld.pid
+basedir = ${mysql:location}
+datadir = ${datadir}
+tmpdir = /tmp
+skip-external-locking
+bind-address = 0.0.0.0
+
+#
+# * Fine Tuning
+#
+key_buffer = 16M
+max_allowed_packet = 32M
+thread_stack = 128K
+thread_cache_size = 8
+myisam-recover = BACKUP
+#max_connections = 100
+#table_cache = 64
+#thread_concurrency = 10
+
+#
+# * Query Cache Configuration
+#
+query_cache_limit = 1M
+query_cache_size = 16M
+
+#
+# * Logging and Replication
+#
+# Both locations get rotated by the cron job.
+# Be aware that this log type is a performance killer.
+#log = ${logdir}/mysql.log
+#
+# Error logging goes to syslog. This is a Debian improvement :)
+#
+# Here you can see queries with especially long duration
+#log_slow_queries = ${logdir}/mysql-slow.log
+#long_query_time = 2
+#log-queries-not-using-indexes
+#
+# The following can be used as easy-to-replay backup logs or for replication.
+#server-id = 1
+#log_bin = ${datadir}/mysql-bin.log
+#binlog_format = ROW
+expire_logs_days = 10
+max_binlog_size = 100M
+sync_binlog = 1
+#binlog_do_db = include_database_name
+#binlog_ignore_db = include_database_name
+
+#
+# * InnoDB
+#
+innodb_data_file_path = ibdata1:10M:autoextend
+innodb_buffer_pool_size=64M
+innodb_log_file_size=16M
+innodb_log_buffer_size=8M
+innodb_flush_log_at_trx_commit=1
+innodb_file_per_table
+innodb_locks_unsafe_for_binlog=1
+
+
+[mysqldump]
+quick
+quote-names
+max_allowed_packet = 32M
+
+[mysql]
+#no-auto-rehash # faster start of mysql but no tab completion
+
+[isamchk]
+key_buffer = 16M
Added: zodbshootout/trunk/etc/oracle-sample.conf
===================================================================
--- zodbshootout/trunk/etc/oracle-sample.conf (rev 0)
+++ zodbshootout/trunk/etc/oracle-sample.conf 2009-11-17 08:42:10 UTC (rev 105749)
@@ -0,0 +1,40 @@
+
+# This configuration compares Oracle, PostgreSQL, MySQL, and ZEO.
+
+<zodb oracle_hf>
+ <relstorage>
+ keep-history false
+ <oracle>
+ dsn XE
+ user relstoragetest_hf
+ password relstoragetest
+ </oracle>
+ </relstorage>
+</zodb>
+
+<zodb postgresql_hf>
+ <relstorage>
+ keep-history false
+ <postgresql>
+ dsn dbname='relstoragetest_hf' user='relstoragetest' password='relstoragetest'
+ </postgresql>
+ </relstorage>
+</zodb>
+
+<zodb mysql_hf>
+ <relstorage>
+ keep-history false
+ <mysql>
+ db relstoragetest_hf
+ user relstoragetest
+ passwd relstoragetest
+ </mysql>
+ </relstorage>
+</zodb>
+
+<zodb zeo_fs>
+ <zeoclient>
+ server localhost:24003
+ </zeoclient>
+</zodb>
+
Added: zodbshootout/trunk/etc/remote-sample.conf
===================================================================
--- zodbshootout/trunk/etc/remote-sample.conf (rev 0)
+++ zodbshootout/trunk/etc/remote-sample.conf 2009-11-17 08:42:10 UTC (rev 105749)
@@ -0,0 +1,68 @@
+
+# This configuration compares the performance of networked databases
+# based on history-free PostgreSQL (with and without memcached),
+# MySQL (with and without memcached), and ZEO.
+
+# Change the line below to match the IP address of the server to test
+# against.
+%define host 192.168.1.10
+
+%import relstorage
+
+<zodb postgresql_hf>
+ <relstorage>
+ keep-history false
+ poll-interval 2
+ <postgresql>
+ dsn dbname='relstoragetest_hf' user='relstoragetest' password='relstoragetest' host='$host' port='24004'
+ </postgresql>
+ </relstorage>
+</zodb>
+
+<zodb postgresql_hf_mc>
+ <relstorage>
+ keep-history true
+ poll-interval 2
+ cache-module-name relstorage.pylibmc_wrapper
+ cache-servers var/memcached.sock
+ <postgresql>
+ dsn dbname='relstoragetest' user='relstoragetest' password='relstoragetest' host='$host' port='24004'
+ </postgresql>
+ </relstorage>
+</zodb>
+
+<zodb mysql_hf>
+ <relstorage>
+ keep-history false
+ poll-interval 2
+ <mysql>
+ db relstoragetest_hf
+ user relstoragetest
+ passwd relstoragetest
+ host $host
+ port 24002
+ </mysql>
+ </relstorage>
+</zodb>
+
+<zodb mysql_hf_mc>
+ <relstorage>
+ keep-history false
+ poll-interval 2
+ cache-module-name relstorage.pylibmc_wrapper
+ cache-servers var/memcached.sock
+ <mysql>
+ db relstoragetest_hf
+ user relstoragetest
+ passwd relstoragetest
+ host $host
+ port 24002
+ </mysql>
+ </relstorage>
+</zodb>
+
+<zodb zeo_fs>
+ <zeoclient>
+ server $host:24003
+ </zeoclient>
+</zodb>
Added: zodbshootout/trunk/etc/sample.conf
===================================================================
--- zodbshootout/trunk/etc/sample.conf (rev 0)
+++ zodbshootout/trunk/etc/sample.conf 2009-11-17 08:42:10 UTC (rev 105749)
@@ -0,0 +1,45 @@
+
+# This configuration compares the performance of local databases
+# based on PostgreSQL, MySQL (with and without memcached), and ZEO.
+
+%import relstorage
+
+<zodb postgresql>
+ <relstorage>
+ poll-interval 5
+ <postgresql>
+ dsn dbname='relstoragetest' user='relstoragetest' password='relstoragetest'
+ </postgresql>
+ </relstorage>
+</zodb>
+
+<zodb mysql>
+ <relstorage>
+ poll-interval 5
+ <mysql>
+ db relstoragetest
+ user relstoragetest
+ passwd relstoragetest
+ </mysql>
+ </relstorage>
+</zodb>
+
+<zodb mysql_mc>
+ <relstorage>
+ poll-interval 5
+ cache-module-name relstorage.pylibmc_wrapper
+ cache-servers var/memcached.sock
+ <mysql>
+ db relstoragetest
+ user relstoragetest
+ passwd relstoragetest
+ </mysql>
+ </relstorage>
+</zodb>
+
+<zodb zeo_fs>
+ <zeoclient>
+ server localhost:24003
+ </zeoclient>
+</zodb>
+
Added: zodbshootout/trunk/etc/zeo.conf.in
===================================================================
--- zodbshootout/trunk/etc/zeo.conf.in (rev 0)
+++ zodbshootout/trunk/etc/zeo.conf.in 2009-11-17 08:42:10 UTC (rev 105749)
@@ -0,0 +1,12 @@
+<zeo>
+ address ${port}
+ read-only false
+ invalidation-queue-size 100
+ pid-filename ${buildout:directory}/var/zeo.pid
+ # monitor-address PORT
+ # transaction-timeout SECONDS
+</zeo>
+
+<filestorage 1>
+ path ${buildout:directory}/var/Data.fs
+</filestorage>
Added: zodbshootout/trunk/mysql-no-read-etc.patch
===================================================================
--- zodbshootout/trunk/mysql-no-read-etc.patch (rev 0)
+++ zodbshootout/trunk/mysql-no-read-etc.patch 2009-11-17 08:42:10 UTC (rev 105749)
@@ -0,0 +1,12 @@
+--- mysys/default.c.orig 2009-09-28 15:44:21.000000000 -0600
++++ mysys/default.c 2009-09-28 15:44:57.000000000 -0600
+@@ -1123,9 +1123,6 @@
+
+ #else
+
+- errors += add_directory(alloc, "/etc/", dirs);
+- errors += add_directory(alloc, "/etc/mysql/", dirs);
+-
+ #if defined(DEFAULT_SYSCONFDIR)
+ if (DEFAULT_SYSCONFDIR[0])
+ errors += add_directory(alloc, DEFAULT_SYSCONFDIR, dirs);
Added: zodbshootout/trunk/reset
===================================================================
--- zodbshootout/trunk/reset (rev 0)
+++ zodbshootout/trunk/reset 2009-11-17 08:42:10 UTC (rev 105749)
@@ -0,0 +1,8 @@
+#!/bin/sh
+bin/supervisorctl shutdown
+sleep 3
+rm -rf var/mysql
+bin/buildout -No
+sleep 3
+bin/supervisord
+bin/supervisorctl status
Property changes on: zodbshootout/trunk/reset
___________________________________________________________________
Added: svn:executable
+
Added: zodbshootout/trunk/setup.py
===================================================================
--- zodbshootout/trunk/setup.py (rev 0)
+++ zodbshootout/trunk/setup.py 2009-11-17 08:42:10 UTC (rev 105749)
@@ -0,0 +1,52 @@
+##############################################################################
+#
+# Copyright (c) 2009 Zope Foundation and Contributors.
+# All Rights Reserved.
+#
+# This software is subject to the provisions of the Zope Public License,
+# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
+# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
+# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
+# FOR A PARTICULAR PURPOSE.
+#
+##############################################################################
+"""A ZODB performance test"""
+
+version = "0.1dev"
+
+from setuptools import setup, find_packages
+import os
+import sys
+
+install_requires=[
+ 'setuptools',
+ 'ZODB3',
+]
+
+if sys.version_info < (2, 6):
+ install_requires.append('multiprocessing')
+
+def read_file(*path):
+ base_dir = os.path.dirname(__file__)
+ return open(os.path.join(base_dir, *tuple(path))).read()
+
+setup(
+ name='zodbshootout',
+ version = version,
+ description = __doc__,
+ long_description = read_file("README.txt"),
+ keywords = 'ZODB ZEO RelStorage',
+ author = 'Shane Hathaway',
+ license = 'ZPL',
+ packages = find_packages('src'),
+ package_dir = {'': 'src'},
+ namespace_packages = [],
+ include_package_data = True,
+ platforms = 'Any',
+ zip_safe = False,
+ install_requires=install_requires,
+ entry_points = {'console_scripts': [
+ 'zodbshootout = zodbshootout.main:main',
+ ]},
+)
Added: zodbshootout/trunk/src/zodbshootout/__init__.py
===================================================================
--- zodbshootout/trunk/src/zodbshootout/__init__.py (rev 0)
+++ zodbshootout/trunk/src/zodbshootout/__init__.py 2009-11-17 08:42:10 UTC (rev 105749)
@@ -0,0 +1 @@
+
Added: zodbshootout/trunk/src/zodbshootout/fork.py
===================================================================
--- zodbshootout/trunk/src/zodbshootout/fork.py (rev 0)
+++ zodbshootout/trunk/src/zodbshootout/fork.py 2009-11-17 08:42:10 UTC (rev 105749)
@@ -0,0 +1,150 @@
+##############################################################################
+#
+# Copyright (c) 2008 Zope Foundation and Contributors.
+# All Rights Reserved.
+#
+# This software is subject to the provisions of the Zope Public License,
+# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
+# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
+# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
+# FOR A PARTICULAR PURPOSE.
+#
+##############################################################################
+"""Multiprocessing utilities.
+"""
+
+from multiprocessing import Process
+from multiprocessing import Queue
+from Queue import Empty
+import sys
+import time
+
+# message_delay contains the maximum expected message delay. If a message
+# takes longer than this to deliver to a child process, synchronized
+# execution will fail. OTOH, this delays test execution, so it should
+# be reasonably short.
+message_delay = 0.5
+
+
+class ChildProcessError(Exception):
+ """A child process failed"""
+
+
+def run_in_child(func, *args, **kw):
+ """Call a function in a child process. Don't return anything.
+
+ Raises an exception if the child process fails.
+ """
+ p = Process(target=func, args=args, kwargs=kw)
+ p.start()
+ p.join()
+ if p.exitcode:
+ raise ChildProcessError(
+ "process running %r failed with exit code %d" % (func, p.exitcode))
+
+
+class Child(object):
+
+ def __init__(self, child_num, parent_queue, func, param):
+ self.child_num = child_num
+ self.parent_queue = parent_queue
+ self.func = func
+ self.param = param
+ self.process = Process(target=self.run)
+ self.child_queue = Queue()
+
+ def run(self):
+ try:
+ res = self.func(self.param, self.sync)
+ except (SystemExit, KeyboardInterrupt):
+ return
+ except Exception, e:
+ self.parent_queue.put((
+ self.child_num, 'exception', '%s: %s' % (type(e), str(e))))
+ else:
+ self.parent_queue.put((self.child_num, 'ok', res))
+
+ def sync(self):
+ self.parent_queue.put((self.child_num, 'sync', None))
+ resume_time = self.child_queue.get()
+ now = time.time()
+ if now > resume_time:
+ raise AssertionError(
+ "Resume time has already passed (%fs too late). Consider "
+ "increasing 'message_delay', which is currently set to %f."
+ % (now - resume_time, message_delay))
+ # sleep until the resume time is near
+ delay = resume_time - time.time() - 0.1
+ if delay > 0:
+ time.sleep(delay)
+ # busy wait until the exact resume time
+ while time.time() < resume_time:
+ pass
+
+
+def distribute(func, param_iter):
+ """Call a function in separate processes concurrently.
+
+ param_iter is an iterator that provides the first parameter for
+ each function call. The second parameter for each call is a "sync"
+ function. The sync function pauses execution, then resumes all
+ processes at the same time. It is expected that all child processes
+ will call the sync function the same number of times.
+
+ The results of calling the function are appended to a list, which
+ is returned once all functions have returned. If any function
+ raises an error, this raises AssertionError.
+ """
+ children = {}
+ parent_queue = Queue()
+ for child_num, param in enumerate(param_iter):
+ child = Child(child_num, parent_queue, func, param)
+ children[child_num] = child
+ for child in children.itervalues():
+ child.process.start()
+
+ try:
+ results = []
+ sync_waiting = set(children)
+
+ while children:
+
+ try:
+ child_num, msg, arg = parent_queue.get(timeout=1)
+ except Empty:
+ # While we're waiting, see if any children have died.
+ for child in children.itervalues():
+ if not child.process.is_alive():
+ raise ChildProcessError(
+ "process running %r failed with exit code %d" % (
+ child.func, child.process.exitcode))
+ continue
+
+ if msg == 'ok':
+ results.append(arg)
+ child = children[child_num]
+ child.process.join()
+ del children[child_num]
+ elif msg == 'exception':
+ raise AssertionError(arg)
+ elif msg == 'sync':
+ sync_waiting.remove(child_num)
+ else:
+ raise AssertionError("unknown message: %s" % msg)
+
+ if not sync_waiting:
+ # All children have called sync(), so tell them
+ # to resume shortly and set up for another sync.
+ resume_time = time.time() + message_delay
+ for child in children.itervalues():
+ child.child_queue.put(resume_time)
+ sync_waiting = set(children)
+
+ return results
+
+ finally:
+ for child in children.itervalues():
+ child.process.terminate()
+ child.process.join()
+ parent_queue.close()
Added: zodbshootout/trunk/src/zodbshootout/main.py
===================================================================
--- zodbshootout/trunk/src/zodbshootout/main.py (rev 0)
+++ zodbshootout/trunk/src/zodbshootout/main.py 2009-11-17 08:42:10 UTC (rev 105749)
@@ -0,0 +1,362 @@
+##############################################################################
+#
+# Copyright (c) 2009 Zope Foundation and Contributors.
+# All Rights Reserved.
+#
+# This software is subject to the provisions of the Zope Public License,
+# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
+# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
+# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
+# FOR A PARTICULAR PURPOSE.
+#
+##############################################################################
+"""Compare the speed of different ZODB storages.
+
+Opens the databases specified by a ZConfig file.
+
+Splits into many processes to avoid contention over the global
+interpreter lock.
+"""
+
+from persistent import Persistent
+from persistent.mapping import PersistentMapping
+from StringIO import StringIO
+from zodbshootout.fork import ChildProcessError
+from zodbshootout.fork import distribute
+from zodbshootout.fork import run_in_child
+import optparse
+import os
+import sys
+import time
+import transaction
+import warnings
+import ZConfig
+
+warnings.filterwarnings("ignore", "the sha module is deprecated",
+ DeprecationWarning)
+
+debug = False
+repetitions = 3
+max_attempts = 20
+
+schema_xml = """
+<schema>
+ <import package="ZODB"/>
+ <multisection type="ZODB.database" name="*" attribute="databases" />
+</schema>
+"""
+
+class PObject(Persistent):
+ """A trivial persistent object"""
+ attr = 1
+
+class SpeedTest:
+
+ def __init__(self, concurrency, objects_per_txn, profile_dir=None):
+ self.concurrency = concurrency
+ self.objects_per_txn = objects_per_txn
+ self.data_to_store = dict(
+ (n, PObject()) for n in range(objects_per_txn))
+ self.profile_dir = profile_dir
+ self.contender_name = None
+ self.rep = 0 # repetition number
+
+ def populate(self, db_factory):
+ db = db_factory()
+ conn = db.open()
+ root = conn.root()
+
+ # clear the database
+ root['speedtest'] = None
+ transaction.commit()
+ db.pack()
+
+ # put a tree in the database
+ root['speedtest'] = t = PersistentMapping()
+ for i in range(self.concurrency):
+ t[i] = PersistentMapping()
+ transaction.commit()
+ conn.close()
+ db.close()
+ if debug:
+ print >> sys.stderr, 'Populated storage.'
+
+ def write_test(self, db_factory, n, sync):
+ db = db_factory()
+
+ def do_write():
+ start = time.time()
+ conn = db.open()
+ root = conn.root()
+ m = root['speedtest'][n]
+ m.update(self.data_to_store)
+ transaction.commit()
+ conn.close()
+ end = time.time()
+ return end - start
+
+ # Prime the database's connection pool outside the timed region.
+ db.open().close()
+ sync()
+ t = self._execute(do_write, 'write', n)
+
+ # Brief settle time before closing the database.
+ time.sleep(.1)
+ db.close()
+ return t
+
+ def read_test(self, db_factory, n, sync):
+ db = db_factory()
+ db.setCacheSize(len(self.data_to_store)+400)
+
+ def do_read():
+ start = time.time()
+ conn = db.open()
+ got = 0
+ for obj in conn.root()['speedtest'][n].itervalues():
+ got += obj.attr
+ del obj
+ if got != self.objects_per_txn:
+ raise AssertionError('data mismatch')
+ conn.close()
+ end = time.time()
+ return end - start
+
+ # Prime the database's connection pool outside the timed region.
+ db.open().close()
+ sync()
+ warm = self._execute(do_read, 'warm', n)
+
+ # Clear all caches
+ conn = db.open()
+ conn.cacheMinimize()
+ storage = conn._storage
+ if hasattr(storage, '_cache'):
+ storage._cache.clear()
+ conn.close()
+
+ sync()
+ cold = self._execute(do_read, 'cold', n)
+
+ conn = db.open()
+ conn.cacheMinimize()
+ conn.close()
+
+ sync()
+ hot = self._execute(do_read, 'hot', n)
+ sync()
+ steamin = self._execute(do_read, 'steamin', n)
+
+ db.close()
+ return warm, cold, hot, steamin
+
+ def _execute(self, func, phase_name, n):
+ if not self.profile_dir:
+ return func()
+ basename = '%s-%s-%d-%02d-%d' % (
+ self.contender_name, phase_name, self.objects_per_txn, n, self.rep)
+ txt_fn = os.path.join(self.profile_dir, basename + ".txt")
+ prof_fn = os.path.join(self.profile_dir, basename + ".prof")
+ import cProfile
+ output = []
+ d = {'_func': func, '_output': output}
+ cProfile.runctx("_output.append(_func())", d, d, prof_fn)
+ res = output[0]
+ from pstats import Stats
+ f = open(txt_fn, 'w')
+ st = Stats(prof_fn, stream=f)
+ st.strip_dirs()
+ st.sort_stats('cumulative')
+ st.print_stats()
+ f.close()
+ return res
+
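The `_execute` method above uses a small trick worth noting: `cProfile.runctx` profiles a *statement*, not a call, so the profiled function's return value is smuggled out through a mutable list placed in the execution namespace, and the binary profile is then rendered to a sorted text report with `pstats`. A self-contained sketch of the same pattern (the sample workload and file names are made up):

```python
import cProfile
import os
import tempfile
from pstats import Stats


def profiled_call(func, prof_fn, txt_fn):
    """Profile func(), write binary and text reports, return func's result."""
    output = []
    context = {'_func': func, '_output': output}
    # runctx executes the statement string in the given namespace and
    # writes the raw profile data to prof_fn.
    cProfile.runctx('_output.append(_func())', context, context, prof_fn)
    with open(txt_fn, 'w') as f:
        st = Stats(prof_fn, stream=f)
        st.strip_dirs()
        st.sort_stats('cumulative')
        st.print_stats()
    return output[0]


def sample():
    return sum(range(1000))


if __name__ == '__main__':
    d = tempfile.mkdtemp()
    result = profiled_call(sample,
                           os.path.join(d, 'sample.prof'),
                           os.path.join(d, 'sample.txt'))
    print(result)  # 499500
```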
+ def run(self, db_factory, contender_name, rep):
+ """Run a write and read test.
+
+ Returns the mean time per transaction for 4 phases:
+ write, cold read, hot read, and steamin' read.
+ """
+ self.contender_name = contender_name
+ self.rep = rep
+
+ run_in_child(self.populate, db_factory)
+
+ def write(n, sync):
+ return self.write_test(db_factory, n, sync)
+ def read(n, sync):
+ return self.read_test(db_factory, n, sync)
+
+ r = range(self.concurrency)
+ write_times = distribute(write, r)
+ read_times = distribute(read, r)
+ warm_times = [t[0] for t in read_times]
+ cold_times = [t[1] for t in read_times]
+ hot_times = [t[2] for t in read_times]
+ steamin_times = [t[3] for t in read_times]
+ return (
+ sum(write_times) / self.concurrency,
+ sum(warm_times) / self.concurrency,
+ sum(cold_times) / self.concurrency,
+ sum(hot_times) / self.concurrency,
+ sum(steamin_times) / self.concurrency,
+ )
+
+
+def align_columns(rows):
+ """Format a list of rows as CSV with aligned columns.
+ """
+ col_widths = []
+ for col in zip(*rows):
+ col_widths.append(max(len(value) for value in col))
+ for row_num, row in enumerate(rows):
+ line = []
+ last_col = len(row) - 1
+ for col_num, (width, value) in enumerate(zip(col_widths, row)):
+ space = ' ' * (width - len(value))
+ if row_num == 0:
+ if col_num == last_col:
+ line.append(value)
+ else:
+ line.append('%s, %s' % (value, space))
+ elif col_num == last_col:
+ if col_num == 0:
+ line.append(value)
+ else:
+ line.append('%s%s' % (space, value))
+ else:
+ if col_num == 0:
+ line.append('%s, %s' % (value, space))
+ else:
+ line.append('%s%s, ' % (space, value))
+ yield ''.join(line)
+
+
+def main(argv=None):
+ if argv is None:
+ argv = sys.argv[1:]
+
+ parser = optparse.OptionParser(usage='%prog [options] config_file')
+ parser.add_option(
+ "-n", "--object-counts", dest="counts", default="1000",
+ help="Object counts to use, separated by commas (default 1000)",
+ )
+ parser.add_option(
+ "-c", "--concurrency", dest="concurrency", default="2",
+ help="Concurrency levels to use, separated by commas (default 2)",
+ )
+ parser.add_option(
+ "-p", "--profile", dest="profile_dir", default="",
+ help="Profile all tests and output results to the specified directory",
+ )
+
+ options, args = parser.parse_args(argv)
+ if len(args) != 1:
+ parser.error("exactly one database configuration file is required")
+ conf_fn = args[0]
+
+ object_counts = [int(x.strip())
+ for x in options.counts.split(',')]
+ concurrency_levels = [int(x.strip())
+ for x in options.concurrency.split(',')]
+ profile_dir = options.profile_dir
+ if profile_dir and not os.path.exists(profile_dir):
+ os.makedirs(profile_dir)
+
+ schema = ZConfig.loadSchemaFile(StringIO(schema_xml))
+ config, handler = ZConfig.loadConfig(schema, conf_fn)
+ contenders = [(db.name, db) for db in config.databases]
+
+ # results: {(objects_per_txn, concurrency, contender, phase): [time]}
+ results = {}
+ for objects_per_txn in object_counts:
+ for concurrency in concurrency_levels:
+ for contender_name, db in contenders:
+ for phase in range(5):
+ key = (objects_per_txn, concurrency,
+ contender_name, phase)
+ results[key] = []
+
+ try:
+ for objects_per_txn in object_counts:
+ for concurrency in concurrency_levels:
+ speedtest = SpeedTest(concurrency, objects_per_txn, profile_dir)
+ for contender_name, db in contenders:
+ print >> sys.stderr, (
+ 'Testing %s with objects_per_txn=%d and concurrency=%d'
+ % (contender_name, objects_per_txn, concurrency))
+ db_factory = db.open
+ key = (objects_per_txn, concurrency, contender_name)
+
+ for rep in range(repetitions):
+ for attempt in range(max_attempts):
+ msg = ' Running %d/%d...' % (rep + 1, repetitions)
+ if attempt > 0:
+ msg += ' (attempt %d)' % (attempt + 1)
+ print >> sys.stderr, msg,
+ try:
+ times = speedtest.run(
+ db_factory, contender_name, rep)
+ except ChildProcessError:
+ if attempt >= max_attempts - 1:
+ raise
+ else:
+ break
+ msg = (
+ 'write %6.4fs, warm %6.4fs, cold %6.4fs, '
+ 'hot %6.4fs, steamin %6.4fs'
+ % times)
+ print >> sys.stderr, msg
+ for i in range(5):
+ results[key + (i,)].append(times[i])
+
+ # The finally clause causes test results to print even if the tests
+ # stop early.
+ finally:
+
+ # show the results in CSV format
+ print >> sys.stderr
+ print >> sys.stderr, (
+ 'Results show objects written or read per second. '
+ 'Best of %d repetitions.' % repetitions)
+
+ txn_descs = (
+ "Write %d Objects",
+ "Read %d Warm Objects",
+ "Read %d Cold Objects",
+ "Read %d Hot Objects",
+ "Read %d Steamin' Objects",
+ )
+
+ for concurrency in concurrency_levels:
+ print
+ print '** concurrency=%d **' % concurrency
+
+ rows = []
+ row = ['"Transaction"']
+ for contender_name, db in contenders:
+ row.append(contender_name)
+ rows.append(row)
+
+ for phase in range(5):
+ for objects_per_txn in object_counts:
+ desc = txn_descs[phase] % objects_per_txn
+ if objects_per_txn == 1:
+ # Strip the trailing 's' to singularize "Objects".
+ desc = desc[:-1]
+ row = ['"%s"' % desc]
+ for contender_name, db in contenders:
+ key = (objects_per_txn, concurrency,
+ contender_name, phase)
+ times = results[key]
+ if times:
+ count = (
+ concurrency * objects_per_txn / min(times))
+ row.append('%d' % count)
+ else:
+ row.append('?')
+ rows.append(row)
+
+ for line in align_columns(rows):
+ print line
+
+
+if __name__ == '__main__':
+ main()
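Each CSV cell produced above is a throughput figure: for every (object count, concurrency, contender, phase) combination the script keeps the fastest of the repetitions and reports `concurrency * objects_per_txn / min(times)` objects per second. A worked example with hypothetical timings:

```python
def throughput(concurrency, objects_per_txn, times):
    """Objects per second, using the best (fastest) repetition."""
    return concurrency * objects_per_txn / min(times)


# Hypothetical timings: 2 concurrent processes, 1000 objects per
# transaction, three repetitions of one phase.  The best repetition
# (0.25s) moved 2 * 1000 objects, giving 8000 objects/sec.
print('%d' % throughput(2, 1000, [0.40, 0.25, 0.31]))  # 8000
```

Using the minimum rather than the mean deliberately discounts one-off stalls (GC pauses, cache misses on first contact), which is why the harness retries and repeats each measurement.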