"Fossies" - the Fresh Open Source Software Archive

Member "httperf-0.9.0/README" (26 Apr 2007, 17579 Bytes) of package /linux/www/old/httperf-0.9.0.tar.gz:


As a special service "Fossies" has tried to format the requested text file into HTML format (style: standard) with prefixed line numbers. Alternatively you can here view or download the uninterpreted source code file.

-*-Mode: outline-*-

* Building httperf

This release of httperf uses the standard GNU configuration
mechanism.  The following steps can be used to build it:

	$ mkdir build
	$ cd build
	$ SRCDIR/configure
	$ make
	$ make install

In this example, SRCDIR refers to the httperf source directory.  The
last step may have to be executed as "root".

To build httperf with debug support turned on, invoke configure with
option "--enable-debug".

By default, the httperf binary is installed in /usr/local/bin/httperf
and the man-page is installed in /usr/local/man/man1/httperf.  You can
change these defaults by passing appropriate options to the
"configure" script.  See "configure --help" for details.

This release of httperf has preliminary SSL support.  To enable it,
you need to have OpenSSL (http://www.openssl.org/) already installed
on your system.  The configure script assumes that the OpenSSL header
files and libraries can be found in standard locations (e.g.,
/usr/include and /usr/lib).  If the files are in a different place,
you need to tell the configure script where to find them.  This can be
done by setting environment variables CPPFLAGS and LDFLAGS before
invoking "configure".  For example, if the SSL header files are
installed in /usr/local/ssl/include and the SSL libraries are
installed in /usr/local/ssl/lib, then the environment variables should
be set like this:

	CPPFLAGS="-I/usr/local/ssl/include"
	LDFLAGS="-L/usr/local/ssl/lib"

With these settings in place, "configure" can be invoked as usual and
SSL should now be found.  If SSL has been detected, the following
three checks should be answered with "yes":

	checking for main in -lcrypto... yes
	checking for SSL_version in -lssl... yes
		:
	checking for openssl/ssl.h... yes

Note: you may have to delete "config.cache" to ensure that "configure"
re-evaluates those checks after changing the settings of the
environment variables.
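
For example, assuming a Bourne-style shell and the SSL paths used
above, removing any stale cache and re-running configure from the
build directory might look like this:

	$ rm -f config.cache
	$ CPPFLAGS="-I/usr/local/ssl/include" LDFLAGS="-L/usr/local/ssl/lib" \
	  SRCDIR/configure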

  WARNING:
	httperf uses a deterministic seed for the random number
	generator used by SSL.  Thus, the SSL encrypted data is
	likely to be easy to crack.  In other words, do not assume
	that SSL data transferred when using httperf is (well)
	encrypted!

This release of httperf has been tested under the following operating systems:
HP-UX 11i (64-bit PA-RISC and IA-64)
Red Hat Enterprise Linux AS (AMD64 and IA-64)
SUSE Linux 10.1 (i386)
openSUSE 10.2 (i386)
OpenBSD 4.0 (i386)
FreeBSD 6.0 (AMD64)
Solaris 8 (UltraSparc 64-bit)

It should be straightforward to build httperf on other platforms; please report
any build problems to the mailing list along with the platform specifications.

* Mailing list

A mailing list has been set up to encourage discussions among the
httperf user community.  This list is managed by majordomo.  To
subscribe to the list, send a mail containing the body:

	subscribe httperf

to majordomo@linux.hpl.hp.com.  To post an article to the list, send
it directly to httperf@linux.hpl.hp.com.
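
For instance, with a conventional command-line mail client (exact
syntax varies between systems), the subscription request could be sent
like this:

	$ echo "subscribe httperf" | mail majordomo@linux.hpl.hp.com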

* Running httperf

IMPORTANT: It is crucial to run just one copy of httperf per client
machine.  httperf sucks up all available CPU time on a machine.  It is
therefore important not to run any other (CPU-intensive) tasks on a
client machine while httperf is running.  httperf is a CPU hog to
ensure that it can generate the desired workload with good accuracy,
so do not try to change this without fully understanding what the
issues are.

** Examples

The simplest way to invoke httperf is with a command line of the form:

 httperf --server wailua --port 6800

This command results in httperf attempting to make one request for URL
http://wailua:6800/.  After the reply is received, the performance
statistics are printed and the client exits (the statistics are
explained below).

A list of all available options can be obtained by specifying the
--help option (all option names can be abbreviated as long as they
remain unambiguous).

A more realistic test case might be to issue 100 HTTP requests at a
rate of 10 requests per second.  This can be achieved by additionally
specifying the --num-conns and --rate options.  When specifying the
--rate option, it's generally a good idea to also specify a timeout
value using the --timeout option.  In the example below, a timeout of
one second is specified (the ramifications of this option will be
explained later):

 httperf --server wailua --port 6800 --num-conns 100 --rate 10 --timeout 1

The performance statistics printed by httperf at the end of the test
might look like this:

    Total: connections 100 requests 100 replies 100 test-duration 9.905 s

    Connection rate: 10.1 conn/s (99.1 ms/conn, <=1 concurrent connections)
    Connection time [ms]: min 4.6 avg 5.6 max 19.9 median 4.5 stddev 2.0
    Connection time [ms]: connect 1.4
    Connection length [replies/conn]: 1.000

    Request rate: 10.1 req/s (99.1 ms/req)
    Request size [B]: 57.0

    Reply rate [replies/s]: min 10.0 avg 10.0 max 10.0 stddev 0.0 (1 samples)
    Reply time [ms]: response 4.1 transfer 0.0
    Reply size [B]: header 219.0 content 204.0 footer 0.0 (total 423.0)
    Reply status: 1xx=0 2xx=100 3xx=0 4xx=0 5xx=0

    CPU time [s]: user 2.71 system 7.08 (user 27.4% system 71.5% total 98.8%)
    Net I/O: 4.7 KB/s (0.0*10^6 bps)

    Errors: total 0 client-timo 0 socket-timo 0 connrefused 0 connreset 0
    Errors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0

There are six groups of statistics: overall results ("Total"),
connection-related results ("Connection"), results relating to the
issuing of HTTP requests ("Request"), results relating to the replies
received from the server ("Reply"), miscellaneous results relating to
the CPU time and network bandwidth used, and, finally, a summary of
errors encountered ("Errors").  Let's discuss each in turn:

** "Total" Results

The "Total" line summarizes how many TCP connections were initiated by
the client, how many requests it sent, how many replies it received,
and what the total test duration was.  The line below shows that 100
connections were initiated, 100 requests were performed and 100
replies were received.  It also shows that the total test duration was
9.905 seconds, meaning that the average request rate was almost
exactly 10 requests per second.

    Total: connections 100 requests 100 replies 100 test-duration 9.905 s

** "Connection" Results

These results convey information related to the TCP connections that
are used to communicate with the web server.

Specifically, the line below shows that new connections were initiated
at a rate of 10.1 connections per second.  This rate corresponds to a
period of 99.1 milliseconds per connection.  Finally, the last number
shows that at most one connection was open to the server at any given
time.

    Connection rate: 10.1 conn/s (99.1 ms/conn, <=1 concurrent connections)
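
As a rough sanity check, both figures follow from the "Total" line
above (100 connections over a test duration of 9.905 seconds):

	$ awk 'BEGIN { printf "%.2f conn/s (%.2f ms/conn)\n", 100/9.905, 1000*9.905/100 }'
	10.10 conn/s (99.05 ms/conn)

which matches the reported 10.1 conn/s and 99.1 ms/conn up to rounding
(the 9.905 second duration is itself a rounded figure).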

The next line in the output gives lifetime statistics for successful
connections.  The lifetime of a connection is the time between when a
TCP connection was initiated and when it was closed.  A connection is
considered successful if it had at least one request that resulted in
a reply from the server.  The line shown below indicates that the
minimum ("min") connection lifetime was 4.6 milliseconds, the average
("avg") lifetime was 5.6 milliseconds, the maximum ("max") was 19.9
milliseconds, the median ("median") lifetime was 4.5 milliseconds, and
that the standard deviation of the lifetimes was 2.0 milliseconds.

    Connection time [ms]: min 4.6 avg 5.6 max 19.9 median 4.5 stddev 2.0

To compute the median time, httperf collects a histogram of connection
lifetimes.  The granularity of this histogram is currently 1
millisecond and the maximum connection lifetime that can be
accommodated by the histogram is 100 seconds (these numbers can be
changed by editing macros BIN_WIDTH and MAX_LIFETIME in stat/basic.c).
This implies that the granularity of the median time is 1 millisecond
and that at least 50% of the lifetime samples must be shorter than 100
seconds for the reported median to be accurate.
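
To locate those macros before rebuilding, something along these lines
(run against the httperf source tree, with SRCDIR as before) should
work:

	$ grep -nE 'BIN_WIDTH|MAX_LIFETIME' SRCDIR/stat/basic.c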

The next statistic in this section is the average time it took to
establish a TCP connection to the server (all successful TCP
connection establishments are counted, even for connections that may
have failed eventually).  The line below shows that, on average, it
took 1.4 milliseconds to establish a connection.

    Connection time [ms]: connect 1.4

The final line in this section gives the average number of replies
that were received per connection.  With regular HTTP/1.0, this value
is at most 1.0 (when there are no failures), but with HTTP Keep-Alives
or HTTP/1.1 persistent connections, this value can be arbitrarily
high, indicating that the same connection was used to receive multiple
responses.

    Connection length [replies/conn]: 1.000
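
For instance, an invocation along the following lines (using the
--num-calls option to issue several calls per connection) would be
expected to push this value towards 10, provided the server supports
persistent connections:

 httperf --server wailua --port 6800 --num-conns 100 --num-calls 10 --rate 10 --timeout 1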

** "Request" Results

The first line in the "Request"-related results gives the rate at
which HTTP requests were issued and the period-length that the rate
corresponds to.  In the example below, the request rate was 10.1
requests per second, which corresponds to 99.1 milliseconds per
request.

    Request rate: 10.1 req/s (99.1 ms/req)

As long as no persistent connections are employed, the "Request"
results are typically very similar or identical to the "Connection"
results.  However, when persistent connections are used, several
requests can be issued on a single connection, in which case the
results would be different.

The next line gives the average size of the HTTP request in bytes.  In
the line shown below, the average request size was 57 bytes.

    Request size [B]: 57.0


** "Reply" Results

For simple measurements, the section with the "Reply" results is
probably the most interesting one.  The first line gives statistics on
the reply rate:

    Reply rate [replies/s]: min 10.0 avg 10.0 max 10.0 stddev 0.0 (1 samples)

The line above indicates that the minimum ("min"), average ("avg"),
and maximum ("max") reply rate was ten replies per second.  Given
these numbers, the standard deviation is, of course, zero.  The last
number shows that only one reply rate sample was acquired.  The
present version of httperf collects one rate sample about once every
five seconds.  To obtain a meaningful standard deviation, it is
recommended to run each test long enough so that at least thirty
samples are obtained---this would correspond to a test duration of at
least 150 seconds, or two and a half minutes.

The next line gives information on how long it took for the server to
respond and how long it took to receive the reply.  The line below
shows that 4.1 milliseconds elapsed between sending the first byte of
the request and receiving the first byte of the reply.  The time to
"transfer", or read, the reply was too short to be measured, so it
shows up as zero (as we'll see below, the entire reply fit into a
single TCP segment, which is why the transfer time was measured as
zero).

    Reply time [ms]: response 4.1 transfer 0.0

Next follow some statistics on the size of the reply---all numbers are
reported in bytes.  Specifically, the average length of reply headers,
the average length of the content, and the average length of reply
footers are given (HTTP/1.1 uses footers to realize the "chunked"
transfer encoding).  For convenience, the average total number of
bytes in the replies is also given.  In the example below, the average
header length ("header") was 219 bytes, the average content length
("content") was 204 bytes, and there were no footers ("footer"),
yielding a total reply length of 423 bytes on average.

    Reply size [B]: header 219.0 content 204.0 footer 0.0 (total 423.0)

The final piece in this section is a histogram of the status codes
received in the replies.  The example below shows that all 100 replies
were "successful" replies as they contained a status code of 200
(presumably):

    Reply status: 1xx=0 2xx=100 3xx=0 4xx=0 5xx=0


** Miscellaneous Results

This section starts with a summary of the CPU time the client
consumed.  The line below shows that 2.71 seconds were spent executing
in user mode ("user"), 7.08 seconds were spent executing in system
mode ("system") and that this corresponds to 27.4% user mode execution
and 71.5% system execution.  The total utilization was almost exactly
100%, which is expected given that httperf is a CPU hog:

    CPU time [s]: user 2.71 system 7.08 (user 27.4% system 71.5% total 98.8%)

Note that any time the total CPU utilization is significantly less
than 100%, some other processes must have been running on the client
machine while httperf was executing.  This makes it likely that the
results are "polluted" and the test should be rerun.

The next line gives the average network throughput in kilobytes per
second (where a kilobyte is 1024 bytes) and in megabits per second
(where a megabit is 10^6 bits).  The line below shows an average
network bandwidth of about 4.7 kilobytes per second.  The megabits per
second figure shows up as zero due to rounding.

    Net I/O: 4.7 KB/s (0.0*10^6 bps)
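
As a rough cross-check, the throughput follows from the average
request size (57 bytes), the average total reply size (423 bytes), and
the connection rate (10.1 connections per second) reported above:

	$ awk 'BEGIN { printf "%.1f KB/s\n", (57 + 423) * 10.1 / 1024 }'
	4.7 KB/s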

The network bandwidth is computed from the number of bytes sent and
received on TCP connections.  This means that it accounts for the
network payload only (i.e., it doesn't account for protocol headers)
and does not take into account retransmissions that may occur at the
TCP level.

** "Errors"

The final section contains statistics on the errors that occurred
during the test.  The "total" figure shows the total number of errors
that occurred.  The two lines below show that in our example run there
were no errors:

    Errors: total 0 client-timo 0 socket-timo 0 connrefused 0 connreset 0
    Errors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0

The meaning of each error is described below:

 total:
	The sum of all following error counts.

 client-timo:
	Each time a request is made to the server, a watchdog timer
	is started.  If no (partial) response is received by the time
	the watchdog timer expires, httperf times out that request
	and increments this error counter.  This is the most common
	error when driving a server into overload.

 socket-timo:
	The number of times a TCP connection failed with a
	socket-level timeout (ETIMEDOUT).

 connrefused:
	The number of times a TCP connection attempt failed with
	a "connection refused by server" error (ECONNREFUSED).

 connreset:
	The number of times a TCP connection failed due to a reset
	(close) by the server.

 fd-unavail:
	The number of times the httperf client was out of file
	descriptors.  Whenever this count is bigger than zero, the
	test results are meaningless because the client was
	overloaded (see the discussion on setting --timeout below).

 addrunavail:
	The number of times the client was out of TCP port numbers
	(EADDRNOTAVAIL).  This error should never occur.  If it
	does, the results should be discarded.

 ftab-full:
	The number of times the system's file descriptor table
	was full.  Again, this error should never occur.  If it
	does, the results should be discarded.

 other:
	The number of times other errors occurred.  Whenever this
	occurs, it is necessary to track down the actual error
	reason.  This can be done by compiling httperf with
	debug support and specifying option --debug 1.
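
For example, assuming httperf was built with --enable-debug as
described above, a run along these lines would print additional
diagnostic output:

 httperf --server wailua --port 6800 --num-conns 100 --rate 10 --timeout 1 --debug 1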


** Selecting appropriate timeout values

Since the client machine has only a limited set of resources
available, it cannot sustain arbitrarily high HTTP request rates.  One
limit is that there are only roughly 60,000 TCP port numbers that can
be in use at any given time.  Since, on HP-UX, it takes one minute for
a TCP connection to be fully closed (leave the TIME_WAIT state), the
maximum rate a client can sustain is about 1,000 requests per second
(roughly 60,000 ports divided by the 60-second TIME_WAIT period).

The actual sustainable rate is typically lower than this because
before running out of TCP ports, a client is likely to run out of file
descriptors (one file descriptor is required per open TCP connection).
By default, HP-UX 10.20 allows 1024 file descriptors per process.
Without a watchdog timer, httperf could potentially quickly use up all
available file descriptors, at which point it could not induce any new
load on the server (this would primarily happen when the server is
overloaded).  To avoid this problem, httperf requires that the web
server respond within the time specified by option --timeout.  If it
does not respond within that time, the client considers the connection
to be "dead" and closes it (and increments the "client-timo" error
count).  The only exception to this rule is that after sending a
request, httperf allows the server to take some additional time before
it starts responding (to accommodate HTTP requests that take a long
time to complete on the server).  This additional time is called the
"server think time" and can be specified by option --think-timeout.
By default, this additional think time is zero, so by default the
server has to be able to respond within the time allowed by the
--timeout option.
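
For example, to allow the server up to two additional seconds of think
time on top of a one-second timeout (the two-second value is purely
illustrative), one could run:

 httperf --server wailua --port 6800 --num-conns 100 --rate 10 --timeout 1 --think-timeout 2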

In practice, we found that with a --timeout value of 1 second, an HP
9000/735 machine running HP-UX 10.20 can sustain a rate of about 700
connections per second before it starts to run out of file descriptors
(the exact rate depends, of course, on a number of factors).  To
achieve web server loads bigger than that, it is necessary to employ
several independent machines, each running one copy of httperf (see
the example below).  A timeout of one second effectively means that
"slow" connections will typically time out before TCP even gets a
chance to retransmit (the initial retransmission timeout is on the
order of 3 seconds).  This is usually OK, except that one should keep
in mind that it has the effect of truncating the connection lifetime
distribution.
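
As a sketch of such a multi-machine setup (the client host names
client1 through client3 are hypothetical, and the per-client rate of
700 connections per second is taken from the figure above), the load
generators could be started from a central machine roughly like this:

	# start one httperf per client machine, in parallel
	for host in client1 client2 client3; do
	    ssh $host httperf --server wailua --port 6800 \
	        --num-conns 7000 --rate 700 --timeout 1 &
	done
	wait   # wait for all clients to finish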