"Fossies" - the Fresh Open Source Software Archive

Member "stress-ng-0.13.05/stress-ng.1" (11 Oct 2021, 189376 Bytes) of package /linux/privat/stress-ng-0.13.05.tar.xz:


As a special service "Fossies" has tried to format the requested text file into HTML format (style: standard) with prefixed line numbers. Alternatively you can here view or download the uninterpreted source code file.

    1 .\"                                      Hey, EMACS: -*- nroff -*-
    2 .\" First parameter, NAME, should be all caps
    3 .\" Second parameter, SECTION, should be 1-8, maybe w/ subsection
    4 .\" other parameters are allowed: see man(7), man(1)
    5 .TH STRESS-NG 1 "Oct 11, 2021"
    6 .\" Please adjust this date whenever revising the manpage.
    7 .\"
    8 .\" Some roff macros, for reference:
    9 .\" .nh        disable hyphenation
   10 .\" .hy        enable hyphenation
   11 .\" .ad l      left justify
   12 .\" .ad b      justify to both left and right margins
   13 .\" .nf        disable filling
   14 .\" .fi        enable filling
   15 .\" .br        insert line break
   16 .\" .sp <n>    insert n+1 empty lines
   17 .\" for manpage-specific macros, see man(7)
   18 .\"
   19 .\" left margin - right margin minus a fudge factor
   20 .SH NAME
   21 stress\-ng \- a tool to load and stress a computer system
   22 .sp 1
   23 .SH SYNOPSIS
   24 .B stress\-ng
   25 [\fIOPTION \fR[\fIARG\fR]] ...
   26 .sp 1
   27 .SH DESCRIPTION
   28 stress\-ng will stress test a computer system in various selectable ways. It
   29 was designed to exercise various physical subsystems of a computer as well
   30 as the various operating system kernel interfaces.
   31 stress\-ng also has a wide range of CPU specific stress tests that exercise
   32 floating point, integer, bit manipulation and control flow.
   33 .PP
   34 stress\-ng was originally intended to make a machine work hard and trip
   35 hardware issues such as thermal overruns as well as operating
   36 system bugs that only occur when a system is being thrashed hard. Use
   37 stress\-ng with caution as some of the tests can make a system run hot
   38 on poorly designed hardware and also can cause excessive system thrashing
   39 which may be difficult to stop.
   40 .PP
   41 stress\-ng can also measure test throughput rates; this can be
   42 useful to observe performance changes across different
   43 operating system releases or types of hardware. However, it has never been
   44 intended to be used as a precise benchmark test suite, so do NOT use it
   45 in this manner.
   46 .PP
   47 Running stress\-ng with root privileges will adjust out of memory settings
   48 on Linux systems to make the stressors unkillable in low memory situations,
   49 so use this judiciously.  With the appropriate privilege, stress\-ng allows
   50 the ionice class and ionice levels to be adjusted; again, this should be
   51 used with care.
   52 .PP
   53 One can specify the number of processes to invoke per type of stress test;
   54 specifying a zero value will select the number of processors
   55 available as defined by sysconf(_SC_NPROCESSORS_CONF); if that can't be
   56 determined then the number of online CPUs is used.  If the value is less
   57 than zero then the number of online CPUs is used.
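.PP
As a purely illustrative example (the choice of stressor and the 60 second
timeout are arbitrary), the following starts one cpu stressor per configured
processor by using a zero instance count:
.IP
stress\-ng \-\-cpu 0 \-\-timeout 60s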
   58 .SH OPTIONS
   59 .PP
   60 .B General stress\-ng control options:
   61 .TP
   62 .B \-\-abort
   63 this option will force all running stressors to abort (terminate) if any
   64 other stressor terminates prematurely because of a failure.
   65 .TP
   66 .B \-\-aggressive
   67 enables more file, cache and memory aggressive options. This may slow tests
   68 down, increase latencies and reduce the number of bogo ops as well as changing
   69 the balance of user time vs system time used depending on the type of stressor
   70 being used.
   71 .TP
   72 .B \-a N, \-\-all N, \-\-parallel N
   73 start N instances of all stressors in parallel. If N is less than zero, then
   74 the number of CPUs online is used for the number of instances.  If N is zero,
   75 then the number of configured CPUs in the system is used.
   76 .TP
   77 .B \-b N, \-\-backoff N
   78 wait N microseconds between the start of each stress worker process. This
   79 allows one to ramp up the stress tests over time.
   80 .TP
   81 .B \-\-class name
   82 specify the class of stressors to run. Stressors are classified into one or
   83 more of the following classes: cpu, cpu-cache, device, io, interrupt,
   84 filesystem, memory, network, os, pipe, scheduler and vm.  Some stressors fall
   85 into just one class. For example the 'get' stressor is just in the 'os'
   86 class. Other stressors fall into more than one class, for example,
   87 the 'lsearch' stressor falls into the 'cpu', 'cpu-cache' and 'memory' classes
   88 as it exercises all these three.  Selecting a specific class will run all
   89 the stressors that fall into that class only when run with the \-\-sequential
   90 option.
   91 
   92 Specifying a name followed by a question mark (for example \-\-class vm?) will
   93 print out all the stressors in that specific class.
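For example, to run just the stressors in the vm class sequentially (the
60 second timeout here is purely illustrative):
.IP
stress\-ng \-\-class vm \-\-sequential 1 \-\-timeout 60s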
   94 .TP
   95 .B \-n, \-\-dry\-run
   96 parse options, but do not run stress tests. A no-op.
   97 .TP
   98 .B \-\-ftrace
   99 enable kernel function call tracing (Linux only).  This will use the
  100 kernel debugfs ftrace mechanism to record all the kernel functions
  101 used on the system while stress-ng is running.  This is only as accurate
  102 as the kernel ftrace output, so there may be some variability in the
  103 data reported.
  104 .TP
  105 .B \-h, \-\-help
  106 show help.
  107 .TP
  108 .B \-\-ignite\-cpu
  109 alter kernel controls to try to maximize CPU performance. This requires root
  110 privilege to alter various /sys interface controls.  Currently this only
  111 works for Intel P-State enabled x86 systems on Linux.
  112 .TP
  113 .B \-\-ionice\-class class
  114 specify ionice class (only on Linux). Can be idle (default), besteffort, be,
  115 realtime, rt.
  116 .TP
  117 .B \-\-ionice\-level level
  118 specify ionice level (only on Linux). For idle, 0 is the only possible
  119 option. For besteffort or realtime values 0 (highest priority) to 7 (lowest
  120 priority). See ionice(1) for more details.
  121 .TP
  122 .B \-\-iostat S
  123 every S seconds show I/O statistics on the device that stores the stress-ng
  124 temporary files. This is either the device of the current working directory
  125 or the \-\-temp\-path specified path. Currently a Linux only option.
  126 The fields output are:
  127 .TS
  128 expand;
  129 lB lB lB
  130 l l s.
  131 Column Heading	Explanation
  132 T{
  133 Inflight
  134 T}	T{
  135 number of I/O requests that have been issued to
  136 the device driver but have not yet completed
  137 T}
  138 T{
  139 Rd K/s
  140 T}	T{
  141 read rate in 1024 bytes per second
  142 T}
  143 T{
  144 Wr K/s
  145 T}	T{
  146 write rate in 1024 bytes per second
  147 T}
  148 T{
  149 Dscd K/s
  150 T}	T{
  151 discard rate in 1024 bytes per second
  152 T}
  153 T{
  154 Rd/s
  155 T}	T{
  156 reads per second
  157 T}
  158 T{
  159 Wr/s
  160 T}	T{
  161 writes per second
  162 T}
  163 T{
  164 Dscd/s
  165 T}	T{
  166 discards per second
  167 T}
  168 .TE
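For example, to show these I/O statistics every 5 seconds during an
illustrative 60 second run of one copy\-file stressor:
.IP
stress\-ng \-\-copy\-file 1 \-\-iostat 5 \-\-timeout 60s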
  169 .TP
  170 .B \-\-job jobfile
  171 run stressors using a jobfile.  The jobfile is essentially a file containing
  172 stress-ng options (without the leading \-\-) with one option per line. Lines
  173 may have comments with comment text preceded by the # character. A simple
  174 example is as follows:
  175 .PP
  176 .RS
  177 .nf
  178 run sequential   # run stressors sequentially
  179 verbose          # verbose output
  180 metrics-brief    # show metrics at end of run
  181 timeout 60s      # stop each stressor after 60 seconds
  182 #
  183 # vm stressor options:
  184 #
  185 vm 2             # 2 vm stressors
  186 vm-bytes 128M    # 128MB available memory
  187 vm-keep          # keep vm mapping
  188 vm-populate      # populate memory
  189 #
  190 # memcpy stressor options:
  191 #
  192 memcpy 5         # 5 memcpy stressors
  193 .fi
  194 .RE
  195 .RS
  196 .PP
  197 The job file introduces the run command that specifies how to run the
  198 stressors:
  199 .PP
  200 run sequential \- run stressors sequentially
  201 .br
  202 run parallel \- run stressors together in parallel
  203 .PP
  204 Note that 'run parallel' is the default.
  205 .RE
  206 .TP
  207 .B \-k, \-\-keep\-name
  208 by default, stress\-ng will attempt to change the name of the stress
  209 processes according to their functionality; this option disables this and
  210 keeps the process name the same as that of the parent process, that is,
  211 stress\-ng.
  212 .TP
  213 .B \-\-log\-brief
  214 by default stress\-ng will report the name of the program, the message type
  215 and the process id as a prefix to all output. The \-\-log\-brief option will
  216 output messages without these fields to produce a less verbose output.
  217 .TP
  218 .B \-\-log\-file filename
  219 write messages to the specified log file.
  220 .TP
  221 .B \-\-maximize
  222 overrides the default stressor settings and instead sets these to the maximum
  223 settings allowed.  These defaults can always be overridden by the per stressor
  224 settings options if required.
  225 .TP
  226 .B \-\-max\-fd N
  227 set the maximum limit on file descriptors (value or a % of system allowed
  228 maximum).  By default, stress-ng can use all the available file descriptors;
  229 this option sets the limit in the range from 10 up to the maximum limit of
  230 RLIMIT_NOFILE.  One can use a % setting too, e.g. 50% is half the maximum
  231 allowed file descriptors.  Note that stress-ng will use about 5 of the
  232 available file descriptors so take this into consideration when using this
  233 setting.
  234 .TP
  235 .B \-\-metrics
  236 output number of bogo operations in total performed by the stress processes.
  237 Note that these are not a reliable metric of performance or throughput and
  238 have not been designed to be used for benchmarking whatsoever. The metrics are
  239 just a useful way to observe how a system behaves when under various kinds of
  240 load.
  241 .RS
  242 .PP
  243 The following columns of information are output:
  244 .TS
  245 expand;
  246 lB lB lB
  247 l l s.
  248 Column Heading	Explanation
  249 T{
  250 bogo ops
  251 T}	T{
  252 number of iterations of the stressor during the run. This is a metric of
  253 how much overall "work" has been achieved in bogo operations.
  254 T}
  255 T{
  256 real time (secs)
  257 T}	T{
  258 average wall clock duration (in seconds) of the stressor. This is the total
  259 wall clock time of all the instances of that particular stressor divided by
  260 the number of these stressors being run.
  261 T}
  262 T{
  263 usr time (secs)
  264 T}	T{
  265 total user time (in seconds) consumed running all the instances of the
  266 stressor.
  267 T}
  268 T{
  269 sys time (secs)
  270 T}	T{
  271 total system time (in seconds) consumed running all the instances of the
  272 stressor.
  273 T}
  274 T{
  275 bogo ops/s (real time)
  276 T}	T{
  277 total bogo operations per second based on wall clock run time. The wall clock
  278 time reflects the apparent run time. The more processors one has on a system
  279 the more the work load can be distributed onto these and hence the wall clock
  280 time will reduce and the bogo ops rate will increase.  This is essentially
  281 the "apparent" bogo ops rate of the system.
  282 T}
  283 T{
  284 bogo ops/s (usr+sys time)
  285 T}	T{
  286 total bogo operations per second based on cumulative user and system time.
  287 This is the real bogo ops rate of the system taking into consideration the
  288 actual execution time of the stressor across all the processors.
  289 Generally this will decrease as one adds more concurrent stressors due to
  290 contention on cache, memory, execution units, buses and I/O devices.
  291 T}
  292 T{
  293 CPU used per instance (%)
  294 T}	T{
  295 total percentage of CPU used divided by number of stressor instances. 100%
  296 is 1 full CPU. Some stressors run multiple threads so it is possible to have
  297 a figure greater than 100%.
  298 T}
  299 .TE
  300 .RE
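For example, to report these metrics after an illustrative 60 second run of
4 cpu stressor instances:
.IP
stress\-ng \-\-cpu 4 \-\-timeout 60s \-\-metrics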
  301 .TP
  302 .B \-\-metrics\-brief
  303 show shorter list of stressor metrics (no CPU used per instance).
  304 .TP
  305 .B \-\-minimize
  306 overrides the default stressor settings and instead sets these to the minimum
  307 settings allowed.  These defaults can always be overridden by the per stressor
  308 settings options if required.
  309 .TP
  310 .B \-\-no\-madvise
  311 from version 0.02.26 stress\-ng automatically calls madvise(2) with random
  312 advice options before each mmap and munmap to stress the vm subsystem a
  313 little harder. The \-\-no\-madvise option turns this default off.
  314 .TP
  315 .B \-\-no\-oom\-adjust
  316 disable any form of out-of-memory score adjustments, keep the system defaults.
  317 Normally stress-ng will adjust the out-of-memory scores on stressors to try
  318 to create more memory pressure. This option disables the adjustments.
  319 .TP
  320 .B \-\-no\-rand\-seed
  321 Do not seed the stress-ng pseudo-random number generator with a quasi random
  322 start seed, but instead seed it with constant values. This forces tests to
  323 run each time using the same start conditions which can be useful when one
  324 requires reproducible stress tests.
  325 .TP
  326 .B \-\-oomable
  327 Do not respawn a stressor if it gets killed by the Out-of-Memory (OOM) killer.
  328 The default behaviour is to restart a new instance of a stressor if the kernel
  329 OOM killer terminates the process. This option disables this default
  330 behaviour.
  331 .TP
  332 .B \-\-page\-in
  333 touch allocated pages that are not in core, forcing them to be paged back in.
  334 This is a useful option to force all the allocated pages to be paged in when
  335 using the bigheap, mmap and vm stressors.  It will severely degrade
  336 performance when the memory in the system is less than the allocated buffer
  337 sizes.  This uses mincore(2) to determine the pages that are not in core and
  338 hence need touching to page them back in.
  339 .TP
  340 .B \-\-pathological
  341 enable stressors that are known to hang systems.  Some stressors can quickly
  342 consume resources in such a way that they can rapidly hang a system before
  343 the kernel can OOM kill them. These stressors are not enabled by default,
  344 this option enables them, but you probably don't want to do this. You have
  345 been warned.
  346 .TP
  347 .B \-\-perf
  348 measure processor and system activity using perf events. Linux only and
  349 caveat emptor, according to perf_event_open(2): "Always double-check your
  350 results! Various generalized events have had wrong values.".  Note that
  351 with Linux 4.7 one needs to have CAP_SYS_ADMIN capabilities for this
  352 option to work, or adjust  /proc/sys/kernel/perf_event_paranoid to below
  353 2 to use this without CAP_SYS_ADMIN.
  354 .TP
  355 .B \-q, \-\-quiet
  356 do not show any output.
  357 .TP
  358 .B \-r N, \-\-random N
  359 start N random stress workers. If N is 0, then the number of configured
  360 processors is used for N.
  361 .TP
  362 .B \-\-sched scheduler
  363 select the named scheduler (only on Linux). To see the list of available
  364 schedulers use: stress\-ng \-\-sched which
  365 .TP
  366 .B \-\-sched\-prio prio
  367 select the scheduler priority level (only on Linux). If the scheduler does
  368 not support this then the default priority level of 0 is chosen.
  369 .TP
  370 .B \-\-sched\-period period
  371 select the period parameter for deadline scheduler (only on Linux). Default
  372 value is 0 (in nanoseconds).
  373 .TP
  374 .B \-\-sched\-runtime runtime
  375 select the runtime parameter for deadline scheduler (only on Linux). Default
  376 value is 99999 (in nanoseconds).
  377 .TP
  378 .B \-\-sched\-deadline deadline
  379 select the deadline parameter for deadline scheduler (only on Linux). Default
  380 value is 100000 (in nanoseconds).
  381 .TP
  382 .B \-\-sched\-reclaim
  383 use cpu bandwidth reclaim feature for deadline scheduler (only on Linux).
  384 .TP
  385 .B \-\-seed N
  386 set the random number generator seed with a 64 bit value. Allows stressors to
  387 use the same random number generator sequences on each invocation.
  388 .TP
  389 .B \-\-sequential N
  390 sequentially run all the stressors one by one for a default of 60 seconds. The
  391 number of instances of each of the individual stressors to be started is N.  If
  392 N is less than zero, then the number of CPUs online is used for the number
  393 of instances.  If N is zero, then the number of CPUs in the system is used.
  394 Use the \-\-timeout option to specify the duration to run each stressor.
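For example, to run two instances of each stressor one stressor at a time,
each for a purely illustrative 30 seconds, with brief metrics at the end:
.IP
stress\-ng \-\-sequential 2 \-\-timeout 30s \-\-metrics\-brief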
  395 .TP
  396 .B \-\-skip\-silent
  397 silence messages that report that a stressor has been skipped because it
  398 requires features not supported by the system, such as unimplemented system
  399 calls, missing resources or processor specific features.
  400 .TP
  401 .B \-\-smart
  402 scan the block devices for changes in S.M.A.R.T. statistics (Linux only). This
  403 requires root privileges to read the Self-Monitoring, Analysis and Reporting
  404 Technology data from all block devices and will report any changes in the
  405 statistics. One caveat is that device manufacturers provide different sets
  406 of data; the exact meaning of the data can be vague and the data may be
  407 inaccurate.
  408 .TP
  409 .B \-\-stressors
  410 output the names of the available stressors.
  411 .TP
  412 .B \-\-syslog
  413 log output (except for verbose \-v messages) to the syslog.
  414 .TP
  415 .B \-\-taskset list
  416 set CPU affinity based on the list of CPUs provided; stress-ng is bound to
  417 just use these CPUs (Linux only). The CPUs to be used are specified by a
  418 comma separated list of CPUs (0 to N-1). One can specify a range of CPUs
  419 using '-', for example: \-\-taskset 0,2-3,6,7-11
  420 .TP
  421 .B \-\-temp\-path path
  422 specify a path for stress\-ng temporary directories and temporary files;
  423 the default path is the current working directory.  This path must have
  424 read and write access for the stress-ng stress processes.
  425 .TP
  426 .B \-\-thermalstat S
  427 every S seconds show CPU and thermal load statistics. This option shows
  428 average CPU frequency in GHz (average of online-CPUs), load averages (1 minute,
  429 5 minutes and 15 minutes) and available thermal zone temperatures in degrees
  430 Centigrade.
  431 .TP
  432 .B \-\-thrash
  433 This can only be used when running on Linux and with root privilege. This
  434 option starts a background thrasher process that works through all the
  435 processes on a system and tries to page as many pages in the processes
  436 as possible. It also periodically drops the page cache and frees reclaimable
  437 slab objects. This will cause a considerable amount of
  438 thrashing of swap on an over-committed system.
  439 .TP
  440 .B \-t T, \-\-timeout T
  441 run each stress test for at least T seconds. One can also specify the units
  442 of time in seconds, minutes, hours, days or years with the suffix s, m, h,
  443 d or y. Each stressor will be sent a SIGALRM signal at the timeout time; however,
  444 if the stress test is swapped out, in a non-interruptible system call or
  445 performing clean up (such as removing hundreds of test files) it may take a
  446 while to finally terminate.  A 0 timeout will run stress-ng forever with
  447 no timeout.
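For example, an illustrative 2 minute run of 8 cpu stressor instances can be
specified using the minutes suffix:
.IP
stress\-ng \-\-cpu 8 \-\-timeout 2m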
  448 .TP
  449 .B \-\-timestamp
  450 add a timestamp in hours, minutes, seconds and hundredths of a second to the
  451 log output.
  452 .TP
  453 .B \-\-timer\-slack N
  454 adjust the per process timer slack to N nanoseconds (Linux only). Increasing
  455 the timer slack allows the kernel to coalesce timer events by adding some
  456 fuzziness to timer expiration times and hence reduce wakeups.  Conversely,
  457 decreasing the timer slack will increase wakeups.  A value of 0 for the
  458 timer-slack will set the system default of 50,000 nanoseconds.
  459 .TP
  460 .B \-\-times
  461 show the cumulative user and system times of all the child processes at the
  462 end of the stress run.  The percentage of utilisation of available CPU time is
  463 also calculated from the number of on-line CPUs in the system.
  464 .TP
  465 .B \-\-tz
  466 collect temperatures from the available thermal zones on the machine (Linux
  467 only).  Some devices may have one or more thermal zones, whereas others may
  468 have none.
  469 .TP
  470 .B \-v, \-\-verbose
  471 show all debug, warnings and normal information output.
  472 .TP
  473 .B \-\-verify
  474 verify results when a test is run. This is not available on all tests. This
  475 will sanity check the computations or memory contents from a test run and
  476 report to stderr any unexpected failures.
  477 .TP
  478 .B \-V, \-\-version
  479 show version of stress-ng, version of toolchain used to build stress-ng
  480 and system information.
  481 .TP
  482 .B \-\-vmstat S
  483 every S seconds show statistics about processes, memory, paging, block I/O,
  484 interrupts, context switches, disks and cpu activity.  The output is similar
  485 to that of the vmstat(8) utility. Currently a Linux only option.
  486 .TP
  487 .B \-x, \-\-exclude list
  488 specify a list of one or more stressors to exclude (that is, do not run them).
  489 This is useful to exclude specific stressors when one selects many stressors
  490 to run using the \-\-class, \-\-sequential, \-\-all and \-\-random
  491 options. For example, to run the cpu class stressors concurrently and exclude the
  492 numa and search stressors:
  493 .IP
  494 stress\-ng \-\-class cpu \-\-all 1 \-x numa,bsearch,hsearch,lsearch
  495 .TP
  496 .B \-Y, \-\-yaml filename
  497 output gathered statistics to a YAML formatted file named 'filename'.
  498 .br
  499 .sp 2
  500 .PP
  501 .B Stressor specific options:
  502 .TP
  503 .B \-\-access N
  504 start N workers that work through various settings of file mode bits
  505 (read, write, execute) for the file owner and check that the user permissions
  506 of the file as reported by access(2) and faccessat(2) are sane.
  507 .TP
  508 .B \-\-access\-ops N
  509 stop access workers after N bogo access sanity checks.
  510 .TP
  511 .B \-\-affinity N
  512 start N workers that run 16 processes that rapidly change CPU affinity
  513 (only on Linux). Rapidly switching CPU affinity can contribute to
  514 poor cache behaviour and high context switch rate.
  515 .TP
  516 .B \-\-affinity\-ops N
  517 stop affinity workers after N bogo affinity operations. Note
  518 that the counters across the 16 processes are not locked to improve affinity
  519 test rates so the final number of bogo-ops will be equal to or more than the
  520 specified ops stop threshold because of racy unlocked bogo-op counting.
  521 .TP
  522 .B \-\-affinity\-delay N
  523 delay for N nanoseconds before changing affinity to the next CPU.
  524 The delay will spin on CPU scheduling yield operations for N nanoseconds
  525 before the process is moved to another CPU. The default is 0 nanoseconds.
  526 .TP
  527 .B \-\-affinity\-pin
  528 pin all the 16 per stressor processes to a CPU. All 16 processes follow the
  529 CPU chosen by the main parent stressor, forcing heavy per CPU loading.
  530 .TP
  531 .B \-\-affinity\-rand
  532 switch CPU affinity randomly rather than the default of sequentially.
  533 .TP
  534 .B \-\-affinity\-sleep N
  535 sleep for N nanoseconds before changing affinity to the next CPU.
  536 .TP
  537 .B \-\-af\-alg N
  538 start N workers that exercise the AF_ALG socket domain by hashing and encrypting
  539 various sized random messages. This exercises the available hashes, ciphers,
  540 rng and aead crypto engines in the Linux kernel.
  541 .TP
  542 .B \-\-af\-alg\-ops N
  543 stop af\-alg workers after N AF_ALG messages are hashed.
  544 .TP
  545 .B \-\-af\-alg\-dump
  546 dump the internal list representing cryptographic algorithms
  547 parsed from the /proc/crypto file to standard output (stdout).
  548 .TP
  549 .B \-\-aio N
  550 start N workers that issue multiple small asynchronous I/O writes and reads on
  551 a relatively small temporary file using the POSIX aio interface.  This will
  552 just hit the file system cache and soak up a lot of user and kernel time in
  553 issuing and handling I/O requests.  By default, each worker process will
  554 handle 16 concurrent I/O requests.
  555 .TP
  556 .B \-\-aio\-ops N
  557 stop POSIX asynchronous I/O workers after N bogo asynchronous I/O requests.
  558 .TP
  559 .B \-\-aio\-requests N
  560 specify the number of POSIX asynchronous I/O requests each worker should issue,
  561 the default is 16; 1 to 4096 are allowed.
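For example, an illustrative 60 second run of 4 aio stressors, each issuing
32 concurrent I/O requests:
.IP
stress\-ng \-\-aio 4 \-\-aio\-requests 32 \-\-timeout 60s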
  562 .TP
  563 .B \-\-aiol N
  564 start N workers that issue multiple 4K random asynchronous I/O writes using
  565 the Linux aio system calls io_setup(2), io_submit(2), io_getevents(2) and
  566 io_destroy(2).  By default, each worker process will handle 16 concurrent I/O
  567 requests.
  568 .TP
  569 .B \-\-aiol\-ops N
  570 stop Linux asynchronous I/O workers after N bogo asynchronous I/O requests.
  571 .TP
  572 .B \-\-aiol\-requests N
  573 specify the number of Linux asynchronous I/O requests each worker should issue,
  574 the default is 16; 1 to 4096 are allowed.
  575 .TP
  576 .B \-\-alarm N
  577 start N workers that exercise alarm(2) with MAXINT, 0 and random alarm and
  578 sleep delays that get prematurely interrupted. Before each alarm is scheduled
  579 any previous pending alarms are cancelled with zero second alarm calls.
  580 .TP
  581 .B \-\-alarm\-ops N
  582 stop after N alarm bogo operations.
  583 .TP
  584 .B \-\-apparmor N
  585 start N workers that exercise various parts of the AppArmor interface. Currently
  586 one needs root permission to run this particular test. Only available
  587 on Linux systems with AppArmor support and requires the CAP_MAC_ADMIN capability.
  588 .TP
  589 .B \-\-apparmor\-ops N
  590 stop the AppArmor workers after N bogo operations.
  591 .TP
  592 .B \-\-atomic N
  593 start N workers that exercise various GCC __atomic_*() built in operations
  594 on 8, 16, 32 and 64 bit integers that are shared among the N workers. This
  595 stressor is only available for builds using GCC 4.7.4 or higher. The stressor
  596 forces many front end cache stalls and cache references.
  597 .TP
  598 .B \-\-atomic\-ops N
  599 stop the atomic workers after N bogo atomic operations.
  600 .TP
  601 .B \-\-bad\-altstack N
  602 start N workers that create broken alternative signal stacks for SIGSEGV
  603 and SIGBUS handling that in turn create secondary SIGSEGV/SIGBUS errors.
  604 A variety of randomly selected nefarious methods are used to create the stacks:
  605 .PP
  606 .RS
  607 .PD 0
  608 .IP \(bu 2
  609 Unmapping the alternative signal stack, before triggering the signal handling.
  610 .IP \(bu 2
  611 Changing the alternative signal stack to being read only, write only or execute only.
  612 .IP \(bu 2
  613 Using a NULL alternative signal stack.
  614 .IP \(bu 2
  615 Using the signal handler object as the alternative signal stack.
  616 .IP \(bu 2
  617 Unmapping the alternative signal stack during execution of the signal handler.
  618 .IP \(bu 2
  619 Using a read-only text segment for the alternative signal stack.
  620 .IP \(bu 2
  621 Using an undersized alternative signal stack.
  622 .IP \(bu 2
  623 Using the VDSO as an alternative signal stack.
  624 .IP \(bu 2
  625 Using an alternative stack mapped onto /dev/zero.
  626 .IP \(bu 2
  627 Using an alternative stack mapped to a zero sized temporary file to generate a SIGBUS error.
  628 .PD
  629 .RE
  630 .TP
  631 .B \-\-bad\-altstack\-ops N
  632 stop the bad alternative stack stressors after N SIGSEGV bogo operations.
  633 .TP
  635 .B \-\-bad\-ioctl N
  636 start N workers that perform a range of illegal read ioctls (using _IOR) across the
  637 device drivers. This exercises page size, 64 bit, 32 bit, 16 bit and 8 bit reads as
  638 well as NULL addresses, non-readable pages and PROT_NONE mapped pages. Currently only
  639 for Linux and requires the \-\-pathological option.
  640 .TP
  641 .B \-\-bad\-ioctl\-ops N
  642 stop the bad ioctl stressors after N bogo ioctl operations.
  643 .TP
  644 .B \-B N, \-\-bigheap N
  645 start N workers that grow their heaps by reallocating memory. If the out of
  646 memory killer (OOM) on Linux kills the worker or the allocation fails then the
  647 allocating process starts all over again.  Note that the OOM adjustment for the
  648 worker is set so that the OOM killer will treat these workers as the first
  649 candidate processes to kill.
  650 .TP
  651 .B \-\-bigheap\-ops N
  652 stop the big heap workers after N bogo allocation operations are completed.
  653 .TP
  654 .B \-\-bigheap\-growth N
  655 specify amount of memory to grow heap by per iteration. Size can be from 4K to
  656 64MB. Default is 64K.
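For example, an illustrative 60 second run of 4 bigheap stressors growing
their heaps in 1M steps:
.IP
stress\-ng \-\-bigheap 4 \-\-bigheap\-growth 1M \-\-timeout 60s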
  657 .TP
  658 .B \-\-binderfs N
  659 start N workers that mount, exercise and unmount binderfs. The binder control
  660 device is exercised with 256 sequential BINDER_CTL_ADD ioctl calls per loop.
  661 .TP
  662 .B \-\-binderfs\-ops N
  663 stop after N binderfs cycles.
  664 .TP
  665 .B \-\-bind\-mount N
  666 start N workers that repeatedly bind mount / to / inside a user namespace. This
  667 can consume resources rapidly, forcing out of memory situations. Do not use this
  668 stressor unless you want to risk hanging your machine.
  669 .TP
  670 .B \-\-bind\-mount\-ops N
  671 stop after N bind mount bogo operations.
  672 .TP
  673 .B \-\-branch N
  674 start N workers that randomly jump to 256 randomly selected locations and
  675 hence exercise the CPU branch prediction logic.
  676 .TP
  677 .B \-\-branch\-ops N
  678 stop the branch stressors after N jumps.
  679 .TP
  680 .B \-\-brk N
  681 start N workers that grow the data segment by one page at a time using multiple
  682 brk(2) calls. Each successfully allocated new page is touched to ensure it is
  683 resident in memory.  If an out of memory condition occurs then the test will
  684 reset the data segment to the point before it started and repeat the data
  685 segment resizing over again.  The process adjusts the out of memory setting so
  686 that it may be killed by the out of memory (OOM) killer before other processes.
  687 If it is killed by the OOM killer then it will be automatically re-started by
  688 a monitoring parent process.
  689 .TP
  690 .B \-\-brk\-ops N
  691 stop the brk workers after N bogo brk operations.
  692 .TP
  693 .B \-\-brk\-mlock
  694 attempt to mlock future brk pages into memory causing more memory pressure. If
  695 mlock(MCL_FUTURE) is implemented then this will stop new brk pages from being
  696 swapped out.
  697 .TP
  698 .B \-\-brk\-notouch
  699 do not touch each newly allocated data segment page. This disables the default
  700 of touching each newly allocated page and hence avoids the kernel
  701 necessarily backing the page with real physical memory.
  702 .TP
  703 .B \-\-bsearch N
  704 start N workers that binary search a sorted array of 32 bit integers using
  705 bsearch(3). By default, there are 65536 elements in the array.  This is a
  706 useful method to exercise random access of memory and processor cache.
  707 .TP
  708 .B \-\-bsearch\-ops N
  709 stop the bsearch worker after N bogo bsearch operations are completed.
  710 .TP
  711 .B \-\-bsearch\-size N
  712 specify the size (number of 32 bit integers) in the array to bsearch. Size can
  713 be from 1K to 4M.
  714 .TP
  715 .B \-C N, \-\-cache N
  716 start N workers that perform random widespread memory reads and writes to
  717 thrash the CPU cache.  The code does not intelligently determine the CPU cache
  718 configuration and so it may be sub-optimal in producing hit-miss read/write
  719 activity for some processors.
  720 .TP
  721 .B \-\-cache\-fence
  722 force write serialization on each store operation (x86 only). This is a no-op
  723 for non-x86 architectures.
  724 .TP
  725 .B \-\-cache\-flush
  726 force flush cache on each store operation (x86 only). This is a no-op for
  727 non-x86 architectures.
  728 .TP
  729 .B \-\-cache\-level N
  730 specify level of cache to exercise (1=L1 cache, 2=L2 cache, 3=L3/LLC cache (the default)).
  731 If the cache hierarchy cannot be determined, built-in defaults will apply.
  732 .TP
  733 .B \-\-cache\-no\-affinity
  734 do not change processor affinity when
  735 .B \-\-cache
  736 is in effect.
  737 .TP
  738 .B \-\-cache\-sfence
  739 force write serialization on each store operation using the sfence instruction
  740 (x86 only). This is a no-op for non-x86 architectures.
  741 .TP
  742 .B \-\-cache\-ops N
  743 stop cache thrash workers after N bogo cache thrash operations.
  744 .TP
  745 .B \-\-cache\-prefetch
  746 force read prefetch on next read address on architectures that support
  747 prefetching.
  748 .TP
  749 .B \-\-cache\-ways N
  750 specify the number of cache ways to exercise. This allows a subset of
  751 the overall cache size to be exercised.
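For example, an illustrative 60 second run of 2 cache stressors exercising
the last level cache with cache flushing on each store:
.IP
stress\-ng \-\-cache 2 \-\-cache\-level 3 \-\-cache\-flush \-\-timeout 60s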
  752 .TP
  753 .B \-\-cap N
  754 start N workers that read per process capabilities via calls to capget(2)
  755 (Linux only).
  756 .TP
  757 .B \-\-cap\-ops N
  758 stop after N cap bogo operations.
  759 .TP
  760 .B \-\-chattr N
  761 start N workers that attempt to exercise file attributes via the
  762 EXT2_IOC_SETFLAGS ioctl. This is intentionally racy and
  763 exercises a range of chattr attributes by enabling and disabling them on
  764 a file shared amongst the N chattr stressor processes. (Linux only).
  765 .TP
  766 .B \-\-chattr\-ops N
  767 stop after N chattr bogo operations.
  768 .TP
  769 .B \-\-chdir N
  770 start N workers that change directory between directories using chdir(2).
  771 .TP
  772 .B \-\-chdir\-ops N
  773 stop after N chdir bogo operations.
  774 .TP
  775 .B \-\-chdir\-dirs N
  776 exercise chdir on N directories. The default is 8192 directories; this allows
  777 64 to 65536 directories to be used instead.
  778 .TP
  779 .B \-\-chmod N
  780 start N workers that change the file mode bits via chmod(2) and fchmod(2) on
  781 the same file. The greater the value of N, the more contention on the
  782 single file.  The stressor will work through all the combinations of mode bits.
  783 .TP
  784 .B \-\-chmod\-ops N
  785 stop after N chmod bogo operations.
  786 .TP
  787 .B \-\-chown N
  788 start N workers that exercise chown(2) on the same file. The greater the
  789 value of N, the more contention on the single file.
  790 .TP
  791 .B \-\-chown\-ops N
  792 stop the chown workers after N bogo chown(2) operations.
  793 .TP
  794 .B \-\-chroot N
  795 start N workers that exercise chroot(2) on various valid and invalid
  796 chroot paths. Only available on Linux systems and requires the CAP_SYS_ADMIN
  797 capability.
  798 .TP
  799 .B \-\-chroot\-ops N
  800 stop the chroot workers after N bogo chroot(2) operations.
  801 .TP
  802 .B \-\-clock N
  803 start N workers exercising clocks and POSIX timers. For all known clock types
  804 this will exercise clock_getres(2), clock_gettime(2) and clock_nanosleep(2).
  805 For all known timers it will create a 50000ns timer and busy poll this until
  806 it expires.  This stressor will cause frequent context switching.
  807 .TP
  808 .B \-\-clock\-ops N
  809 stop clock stress workers after N bogo operations.
  810 .TP
  811 .B \-\-clone N
  812 start N workers that create clones (via the clone(2) and clone3() system calls).
  813 This will rapidly try to create a default of 8192 clones that immediately die
  814 and wait in a zombie state until they are reaped.  Once the maximum number of
  815 clones is reached (or clone fails because one has reached the maximum allowed)
  816 the oldest clone thread is reaped and a new clone is then created in a first-in
  817 first-out manner, and then repeated.  A random clone flag is selected for each
  818 clone to try to exercise different clone operations.  The clone stressor is a Linux
  819 only option.
  820 .TP
  821 .B \-\-clone\-ops N
  822 stop clone stress workers after N bogo clone operations.
  823 .TP
  824 .B \-\-clone\-max N
  825 try to create as many as N clone threads. This may not be reached if the system
  826 limit is less than N.
  827 .TP
  828 .B \-\-close N
  829 start N workers that try to force race conditions on closing opened file
  830 descriptors.  These file descriptors have been opened in various ways to try
  831 and exercise different kernel close handlers.
  832 .TP
  833 .B \-\-close\-ops N
  834 stop close workers after N bogo close operations.
  835 .TP
  836 .B \-\-context N
  837 start N workers that run three threads that use swapcontext(3) to implement the
  838 thread-to-thread context switching. This exercises rapid process context saving
  839 and restoring and is bandwidth limited by register and memory save and restore
  840 rates.
  841 .TP
  842 .B \-\-context\-ops N
  843 stop context workers after N bogo context switches.  In this stressor, 1 bogo
  844 op is equivalent to 1000 swapcontext calls.
  845 .TP
  846 .B \-\-copy\-file N
  847 start N stressors that copy a file using the Linux copy_file_range(2) system
  848 call. 2MB chunks of data are copied from random locations in one file to
  849 random locations in a destination file.  By default, the files are 256 MB in
  850 size. Data is sync'd to the filesystem after each copy_file_range(2) call.
  851 .TP
  852 .B \-\-copy\-file\-ops N
  853 stop after N copy_file_range() calls.
  854 .TP
  855 .B \-\-copy\-file\-bytes N
  856 copy file size, the default is 256 MB. One can specify the size as % of free
  857 space on the file system or in units of Bytes, KBytes, MBytes and GBytes using
  858 the suffix b, k, m or g.
  859 .TP
  860 .B \-c N, \-\-cpu N
  861 start N workers exercising the CPU by sequentially working through all the
  862 different CPU stress methods. Instead of exercising all the CPU stress methods,
  863 one can specify a specific CPU stress method with the \-\-cpu\-method option.
  864 .TP
  865 .B \-\-cpu\-ops N
  866 stop cpu stress workers after N bogo operations.
  867 .TP
  868 .B \-l P, \-\-cpu\-load P
  869 load CPU with P percent loading for the CPU stress workers. 0 is effectively a
  870 sleep (no load) and 100 is full loading.  The loading loop is broken into
  871 compute time (load%) and sleep time (100% - load%). Accuracy depends on the
  872 overall load of the processor and the responsiveness of the scheduler, so the
  873 actual load may be different from the desired load.  Note that the number of
  874 bogo CPU operations may not be linearly scaled with the load as some systems
  875 employ CPU frequency scaling and so heavier loads produce an increased CPU
  876 frequency and greater CPU bogo operations.
  877 
  878 Note: This option only applies to the \-\-cpu stressor option and not to
  879 all of the cpu class of stressors.
  880 .TP
  881 .B \-\-cpu\-load\-slice S
  882 note \- this option is only useful when \-\-cpu\-load is less than 100%. The
  883 CPU load is broken into multiple busy and idle cycles. Use this option to
  884 specify the duration of a busy time slice.  A negative value for S specifies
  885 the number of iterations to run before idling the CPU (e.g. -30 invokes 30
  886 iterations of a CPU stress loop).  A zero value selects a random busy time
  887 between 0 and 0.5 seconds.  A positive value for S specifies the number of
  888 milliseconds to run before idling the CPU (e.g. 100 keeps the CPU busy for
  889 0.1 seconds).  Specifying small values for S leads to small time slices and
  890 smoother scheduling.  Setting \-\-cpu\-load as a relatively low value and
  891 \-\-cpu\-load\-slice to be large will cycle the CPU between long idle and
  892 busy cycles and exercise different CPU frequencies.  The thermal range of
  893 the CPU is also cycled, so this is a good mechanism to exercise the scheduler,
  894 frequency scaling and passive/active thermal cooling mechanisms.
  895 
  896 Note: This option only applies to the \-\-cpu stressor option and not to
  897 all of the cpu class of stressors.
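For example, an illustrative run that holds 2 cpu stressors at roughly 60%
load using 200 millisecond busy slices:
.IP
stress\-ng \-\-cpu 2 \-\-cpu\-load 60 \-\-cpu\-load\-slice 200 \-\-timeout 60s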
  898 .TP
  899 .B \-\-cpu\-method method
  900 specify a cpu stress method. By default, all the stress methods are exercised
  901 sequentially, however one can specify just one method to be used if required.
  902 Available cpu stress methods are described as follows:
  903 .TS
  904 expand;
  905 lB2 lB lB
  906 l l s.
  907 Method	Description
  908 all	T{
  909 iterate over all the below cpu stress methods
  910 T}
  911 ackermann	T{
  912 Ackermann function: compute A(3, 7), where:
  913  A(m, n) = n + 1 if m = 0;
  914  A(m - 1, 1) if m > 0 and n = 0;
  915  A(m - 1, A(m, n - 1)) if m > 0 and n > 0
  916 T}
  917 apery	T{
  918 calculate Apery's constant \[*z](3); the sum of 1/(n \[ua] 3) to a precision of 1.0x10\[ua]14
  919 T}
  920 bitops	T{
  921 various bit operations from bithack, namely: reverse bits, parity check, bit
  922 count, round to nearest power of 2
  923 T}
  924 callfunc	T{
  925 recursively call 8 argument C function to a depth of 1024 calls and unwind
  926 T}
  927 cfloat	T{
  928 1000 iterations of a mix of floating point complex operations
  929 T}
  930 cdouble	T{
  931 1000 iterations of a mix of double floating point complex operations
  932 T}
  933 clongdouble	T{
  934 1000 iterations of a mix of long double floating point complex operations
  935 T}
  936 collatz	T{
  937 compute the 1348 steps in the collatz sequence starting from number 989345275647,
  938 where f(n) = n / 2 (for even n) and f(n) = 3n + 1 (for odd n).
  939 T}
  940 correlate	T{
  941 perform an 8192 \(mu 512 correlation of random doubles
  942 T}
  943 cpuid	T{
  944 fetch cpu specific information using the cpuid instruction (x86 only)
  945 T}
  946 crc16	T{
  947 compute 1024 rounds of CCITT CRC16 on random data
  948 T}
  949 decimal32	T{
  950 1000 iterations of a mix of 32 bit decimal floating point operations (GCC only)
  951 T}
  952 decimal64	T{
  953 1000 iterations of a mix of 64 bit decimal floating point operations (GCC only)
  954 T}
  955 decimal128	T{
  956 1000 iterations of a mix of 128 bit decimal floating point operations (GCC
  957 only)
  958 T}
  959 dither	T{
  960 Floyd–Steinberg dithering of a 1024 \(mu 768 random image from 8 bits down to
  961 1 bit of depth
  962 T}
  963 div16	T{
  964 50,000 16 bit unsigned integer divisions
  965 T}
  966 div32	T{
  967 50,000 32 bit unsigned integer divisions
  968 T}
  969 div64	T{
  970 50,000 64 bit unsigned integer divisions
  971 T}
  972 djb2a	T{
  973 128 rounds of hash DJB2a (Dan Bernstein hash using the xor variant) on 128 to
  974 1 bytes of random strings
  975 T}
  976 double	T{
  977 1000 iterations of a mix of double precision floating point operations
  978 T}
  979 euler	T{
  980 compute e using n \[eq] (1 + (1 \[di] n)) \[ua] n
  981 T}
  982 explog	T{
  983 iterate on n \[eq] exp(log(n) \[di] 1.00002)
  984 T}
  985 factorial	T{
  986 find factorials from 1..150 using Stirling's and Ramanujan's approximations
  987 T}
  988 fibonacci	T{
  989 compute Fibonacci sequence of 0, 1, 1, 2, 3, 5, 8...
  990 T}
  991 fft	T{
  992 4096 sample Fast Fourier Transform
  993 T}
  994 fletcher16	T{
  995 1024 rounds of a naive implementation of a 16 bit Fletcher's checksum
  996 T}
  997 float	T{
  998 1000 iterations of a mix of floating point operations
  999 T}
 1000 float16	T{
 1001 1000 iterations of a mix of 16 bit floating point operations
 1002 T}
 1003 float32	T{
 1004 1000 iterations of a mix of 32 bit floating point operations
 1005 T}
 1006 float64	T{
 1007 1000 iterations of a mix of 64 bit floating point operations
 1008 T}
 1009 float80	T{
 1010 1000 iterations of a mix of 80 bit floating point operations
 1011 T}
 1012 float128	T{
 1013 1000 iterations of a mix of 128 bit floating point operations
 1014 T}
 1015 floatconversion	T{
 1016 perform 65536 iterations of floating point conversions between
 1017 float, double and long double floating point variables.
 1018 T}
 1019 fnv1a	T{
 1020 128 rounds of hash FNV-1a (Fowler–Noll–Vo hash using the xor then multiply
 1021 variant) on 128 to 1 bytes of random strings
 1022 T}
 1023 gamma	T{
 1024 calculate the Euler\-Mascheroni constant \(*g using the limiting difference
 1025 between the harmonic series (1 + 1/2 + 1/3 + 1/4 + 1/5 ... + 1/n) and the
 1026 natural logarithm ln(n), for n = 80000.
 1027 T}
 1028 gcd	T{
 1029 compute GCD of integers
 1030 T}
 1031 gray	T{
 1032 calculate binary to gray code and gray code back to binary for integers
 1033 from 0 to 65535
 1034 T}
 1035 hamming	T{
 1036 compute Hamming H(8,4) codes on 262144 lots of 4 bit data. This turns 4 bit
 1037 data into 8 bit Hamming code containing 4 parity bits. For data bits d1..d4,
 1038 parity bits are computed as:
 1039   p1 = d2 + d3 + d4
 1040   p2 = d1 + d3 + d4
 1041   p3 = d1 + d2 + d4
 1042   p4 = d1 + d2 + d3
 1043 T}
 1044 hanoi	T{
 1045 solve a 21 disc Towers of Hanoi stack using the recursive solution
 1046 T}
 1047 hyperbolic	T{
 1048 compute sinh(\(*h) \(mu cosh(\(*h) + sinh(2\(*h) + cosh(3\(*h) for float,
 1049 double and long double hyperbolic sine and cosine functions where \(*h = 0
 1050 to 2\(*p in 1500 steps
 1051 T}
 1052 idct	T{
 1053 8 \(mu 8 IDCT (Inverse Discrete Cosine Transform).
 1054 T}
 1055 int8	T{
 1056 1000 iterations of a mix of 8 bit integer operations.
 1057 T}
 1058 int16	T{
 1059 1000 iterations of a mix of 16 bit integer operations.
 1060 T}
 1061 int32	T{
 1062 1000 iterations of a mix of 32 bit integer operations.
 1063 T}
 1064 int64	T{
 1065 1000 iterations of a mix of 64 bit integer operations.
 1066 T}
 1067 int128	T{
 1068 1000 iterations of a mix of 128 bit integer operations (GCC only).
 1069 T}
 1070 int32float	T{
 1071 1000 iterations of a mix of 32 bit integer and floating point operations.
 1072 T}
 1073 int32double	T{
 1074 1000 iterations of a mix of 32 bit integer and double precision floating point
 1075 operations.
 1076 T}
 1077 int32longdouble	T{
 1078 1000 iterations of a mix of 32 bit integer and long double precision floating
 1079 point operations.
 1080 T}
 1081 int64float	T{
 1082 1000 iterations of a mix of 64 bit integer and floating point operations.
 1083 T}
 1084 int64double	T{
 1085 1000 iterations of a mix of 64 bit integer and double precision floating point
 1086 operations.
 1087 T}
 1088 int64longdouble	T{
 1089 1000 iterations of a mix of 64 bit integer and long double precision floating
 1090 point operations.
 1091 T}
 1092 int128float	T{
 1093 1000 iterations of a mix of 128 bit integer and floating point operations
 1094 (GCC only).
 1095 T}
 1096 int128double	T{
 1097 1000 iterations of a mix of 128 bit integer and double precision floating point
 1098 operations (GCC only).
 1099 T}
 1100 int128longdouble	T{
 1101 1000 iterations of a mix of 128 bit integer and long double precision floating
 1102 point operations (GCC only).
 1103 T}
 1104 int128decimal32	T{
 1105 1000 iterations of a mix of 128 bit integer and 32 bit decimal floating point
 1106 operations (GCC only).
 1107 T}
 1108 int128decimal64	T{
 1109 1000 iterations of a mix of 128 bit integer and 64 bit decimal floating point
 1110 operations (GCC only).
 1111 T}
 1112 int128decimal128	T{
 1113 1000 iterations of a mix of 128 bit integer and 128 bit decimal floating point
 1114 operations (GCC only).
 1115 T}
 1116 intconversion	T{
 1117 perform 65536 iterations of integer conversions between
 1118 int16, int32 and int64 variables.
 1119 T}
 1120 ipv4checksum	T{
 1121 compute 1024 rounds of the 16 bit ones' complement IPv4 checksum.
 1122 T}
 1123 jenkin	T{
 1124 Jenkin's integer hash on 128 rounds of 128..1 bytes of random data.
 1125 T}
 1126 jmp	T{
 1127 Simple unoptimised compare >, <, == and jmp branching.
 1128 T}
 1129 lfsr32	T{
 1130 16384 iterations of a 32 bit Galois linear feedback shift register using
 1131 the polynomial x\[ua]32 + x\[ua]31 + x\[ua]29 + x + 1. This generates a
 1132 ring of 2\[ua]32 - 1 unique values (all 32 bit values except for 0).
 1133 T}
 1134 ln2	T{
 1135 compute ln(2) based on series:
 1136  1 - 1/2 + 1/3 - 1/4 + 1/5 - 1/6 ...
 1137 T}
 1138 longdouble	T{
 1139 1000 iterations of a mix of long double precision floating point operations.
 1140 T}
 1141 loop	T{
 1142 simple empty loop.
 1143 T}
 1144 matrixprod	T{
 1145 matrix product of two 128 \(mu 128 matrices of double floats. Testing on 64
 1146 bit x86 hardware shows that this provides a good mix of memory, cache and
 1147 floating point operations and is probably the best CPU method to use to make
 1148 a CPU run hot.
 1149 T}
 1150 murmur3_32	T{
 1151 murmur3_32 hash (Austin Appleby's Murmur3 hash, 32 bit variant) on 128
 1152 rounds of 128..1 bytes of random data.
 1153 T}
 1154 nhash	T{
 1155 exim's nhash on 128 rounds of 128..1 bytes of random data.
 1156 T}
 1157 nsqrt	T{
 1158 compute sqrt() of long doubles using Newton-Raphson.
 1159 T}
 1160 omega	T{
 1161 compute the omega constant defined by \(*We\[ua]\(*W = 1 using efficient
 1162 iteration of \(*Wn+1 = (1 + \(*Wn) / (1 + e\[ua]\(*Wn).
 1163 T}
 1164 parity	T{
 1165 compute parity using various methods from the Stanford Bit Twiddling Hacks.
 1166 Methods employed are: the na\[:i]ve way, the na\[:i]ve way with the Brian
 1167 Kernighan bit counting optimisation, the multiply way, the parallel way,
 1168 the lookup table ways (2 variations) and using the __builtin_parity function.
 1169 T}
 1170 phi	T{
 1171 compute the Golden Ratio \(*f using series.
 1172 T}
 1173 pi	T{
 1174 compute \(*p using the Srinivasa Ramanujan fast convergence algorithm.
 1175 T}
 1176 pjw	T{
 1177 128 rounds of hash pjw function on 128 to 1 bytes of random strings.
 1178 T}
 1179 prime	T{
 1180 find the first 10000 prime numbers using a slightly optimised brute
 1181 force na\[:i]ve trial division search.
 1182 T}
 1183 psi	T{
 1184 compute \(*q (the reciprocal Fibonacci constant) using the sum of the
 1185 reciprocals of the Fibonacci numbers.
 1186 T}
 1187 queens	T{
 1188 compute all the solutions of the classic 8 queens problem, generalized to board sizes 1..11.
 1189 T}
 1190 rand	T{
 1191 16384 iterations of rand(), where rand is the MWC pseudo
 1192 random number generator.
 1193 The MWC random function concatenates two 16 bit multiply\-with\-carry
 1194 generators:
 1195  x(n) = 36969 \(mu x(n - 1) + carry,
 1196  y(n) = 18000 \(mu y(n - 1) + carry mod 2 \[ua] 16
 1197 .sp 1
 1198 and has period of around 2 \[ua] 60.
 1199 T}
 1200 rand48	T{
 1201 16384 iterations of drand48(3) and lrand48(3).
 1202 T}
 1203 rgb	T{
 1204 convert RGB to YUV and back to RGB (CCIR 601).
 1205 T}
 1206 sdbm	T{
 1207 128 rounds of hash sdbm (as used in the SDBM database and GNU awk) on 128 to
 1208 1 bytes of random strings.
 1209 T}
 1210 sieve	T{
 1211 find the first 10000 prime numbers using the sieve of Eratosthenes.
 1212 T}
 1213 stats	T{
 1214 calculate minimum, maximum, arithmetic mean, geometric mean, harmonic mean
 1215 and standard deviation on 250 randomly generated positive double precision
 1216 values.
 1217 T}
 1218 sqrt	T{
 1219 compute sqrt(rand()), where rand is the MWC pseudo random number generator.
 1220 T}
 1221 trig	T{
 1222 compute sin(\(*h) \(mu cos(\(*h) + sin(2\(*h) + cos(3\(*h) for float, double
 1223 and long double sine and cosine functions where \(*h = 0 to 2\(*p in 1500 steps.
 1224 T}
 1225 union	T{
 1226 perform integer arithmetic on a mix of bit fields in a C union.  This exercises
 1227 how well the compiler and CPU can perform integer bit field loads and stores.
 1228 T}
 1229 zeta	T{
 1230 compute the Riemann Zeta function \[*z](s) for s = 2.0..10.0
 1231 T}
 1232 .TE
 1233 .RS
 1234 .PP
 1235 Note that some of these methods try to exercise the CPU with computations found
 1236 in some real world use cases. However, the code has not been optimised on a
 1237 per-architecture basis, so may be sub-optimal compared to hand-optimised code
 1238 used in some applications.  They do try to represent the typical instruction
 1239 mixes found in these use cases.
 1240 .RE
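For example, an illustrative 60 second run of 4 cpu stressors restricted to
the matrixprod method, with brief metrics at the end:
.IP
stress\-ng \-\-cpu 4 \-\-cpu\-method matrixprod \-\-metrics\-brief \-\-timeout 60s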
 1241 .TP
 1242 .B \-\-cpu\-online N
 1243 start N workers that put randomly selected CPUs offline and online. This Linux
 1244 only stressor requires root privilege to perform this action. By default the
 1245 first CPU (CPU 0) is never offlined as this has been found to be problematic
 1246 on some systems and can result in a shutdown.
 1247 .TP
 1248 .B \-\-cpu\-online\-all
 1249 The default is to never offline the first CPU.  This option will offline and
 1250 online all the CPUs including CPU 0. This may cause some systems to shut down.
 1251 .TP
 1252 .B \-\-cpu\-online\-ops N
 1253 stop after N offline/online operations.
 1254 .TP
 1255 .B \-\-crypt N
 1256 start N workers that encrypt a 16 character random password using crypt(3).
 1257 The password is encrypted using MD5, SHA-256 and SHA-512 encryption methods.
 1258 .TP
 1259 .B \-\-crypt\-ops N
 1260 stop after N bogo encryption operations.
 1261 .TP
 1262 .B \-\-cyclic N
 1263 start N workers that exercise the real time FIFO or Round Robin schedulers
 1264 with cyclic nanosecond sleeps. Normally one would just use 1 worker instance
 1265 with this stressor to get reliable statistics.  This stressor measures the
 1266 first 10 thousand latencies and calculates the mean, mode, minimum, maximum
 1267 latencies along with various latency percentiles for just the first
 1268 cyclic stressor instance. One has to run this stressor with CAP_SYS_NICE
 1269 capability to enable the real time scheduling policies. The FIFO scheduling
 1270 policy is the default.
 1271 .TP
 1272 .B \-\-cyclic\-ops N
 1273 stop after N sleeps.
 1274 .TP
 1275 .B \-\-cyclic\-dist N
 1276 calculate and print a latency distribution with the interval of N nanoseconds.
 1277 This is helpful to see where the latencies are clustering.
 1278 .TP
 1279 .B \-\-cyclic\-method [ clock_ns | itimer | poll | posix_ns | pselect | usleep ]
 1280 specify the cyclic method to be used, the default is clock_ns. The available
 1281 cyclic methods are as follows:
 1282 .TS
 1283 expand;
 1284 lB2 lB lB
 1285 l l s.
 1286 Method	Description
 1287 clock_ns	T{
 1288 sleep for the specified time using the clock_nanosleep(2) high
 1289 resolution nanosleep and the CLOCK_REALTIME real time clock.
 1290 T}
 1291 itimer	T{
 1292 wakeup a paused process with a CLOCK_REALTIME itimer signal.
 1293 T}
 1294 poll	T{
 1295 delay for the specified time using a poll delay loop that checks
 1296 for time changes using clock_gettime(2) on the CLOCK_REALTIME clock.
 1297 T}
 1298 posix_ns	T{
 1299 sleep for the specified time using the POSIX nanosleep(2) high
 1300 resolution nanosleep.
 1301 T}
 1302 pselect	T{
 1303 sleep for the specified time using pselect(2) with null file descriptors.
 1304 T}
 1305 usleep	T{
 1306 sleep to the nearest microsecond using usleep(2).
 1307 T}
 1308 .TE
 1309 .TP
 1310 .B \-\-cyclic\-policy [ fifo | rr ]
 1311 specify the desired real time scheduling policy, fifo (first-in, first-out)
 1312 or rr (round robin).
 1313 .TP
 1314 .B \-\-cyclic\-prio P
 1315 specify the scheduling priority P. Range from 1 (lowest) to 100 (highest).
 1316 .TP
 1317 .B \-\-cyclic\-sleep N
 1318 sleep for N nanoseconds per test cycle using clock_nanosleep(2) with the
 1319 CLOCK_REALTIME timer. Range from 1 to 1000000000 nanoseconds.
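For example, an illustrative 60 second run of a single cyclic stressor using
the fifo policy at priority 80 with a 1000 nanosecond latency distribution
interval (this requires the CAP_SYS_NICE capability):
.IP
stress\-ng \-\-cyclic 1 \-\-cyclic\-policy fifo \-\-cyclic\-prio 80 \-\-cyclic\-dist 1000 \-\-timeout 60s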
 1320 .TP
 1321 .B \-\-daemon N
 1322 start N workers that each create a daemon that dies immediately after creating
 1323 another daemon and so on. This effectively works through the process table with
 1324 short lived processes that do not have a parent and are waited for by init.
 1325 This puts pressure on init to do rapid child reaping.  The daemon processes
 1326 perform the usual mix of calls to turn into typical UNIX daemons, so this
 1327 artificially mimics very heavy daemon system stress.
 1328 .TP
 1329 .B \-\-daemon\-ops N
 1330 stop daemon workers after N daemons have been created.
 1331 .TP
 1332 .B \-\-dccp N
 1333 start N workers that send and receive data using the Datagram Congestion
 1334 Control Protocol (DCCP) (RFC4340). This involves a pair of client/server
 1335 processes performing rapid connect, send and receives and disconnects on
 1336 the local host.
 1337 .TP
 1338 .B \-\-dccp\-domain D
 1339 specify the domain to use, the default is ipv4. Currently ipv4 and ipv6
 1340 are supported.
 1341 .TP
 1342 .B \-\-dccp\-port P
 1343 start DCCP at port P. For N dccp worker processes, ports P to P + N - 1
 1344 are used.
 1345 .TP
 1346 .B \-\-dccp\-ops N
 1347 stop dccp stress workers after N bogo operations.
 1348 .TP
 1349 .B \-\-dccp\-opts [ send | sendmsg | sendmmsg ]
 1350 by default, messages are sent using send(2). This option allows one to specify
 1351 the sending method using send(2), sendmsg(2) or sendmmsg(2).  Note that
 1352 sendmmsg is only available for Linux systems that support this system call.
 1353 .TP
 1354 .B \-D N, \-\-dentry N
 1355 start N workers that create and remove directory entries.  This should create
 1356 file system meta data activity. The directory entry names are suffixed by a
 1357 gray-code encoded number to try to mix up the hashing of the namespace.
 1358 .TP
 1359 .B \-\-dentry\-ops N
 1360 stop dentry thrash workers after N bogo dentry operations.
 1361 .TP
 1362 .B \-\-dentry\-order [ forward | reverse | stride | random ]
 1363 specify unlink order of dentries, can be one of forward, reverse, stride
 1364 or random.
 1365 By default, dentries are unlinked in random order.  The forward
 1366 order will unlink them from first to last, reverse order will unlink
 1367 them from last to first, stride order will unlink them by stepping
 1368 around order in a quasi-random pattern and random order will randomly
 1369 select one of forward, reverse or stride orders.
 1370 .TP
 1371 .B \-\-dentries N
 1372 create N dentries per dentry thrashing loop, default is 2048.
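.RS
For example, an illustrative invocation (worker count, dentry count and
unlink order are arbitrary example values) that creates 4096 dentries per
loop in each of four workers and unlinks them in stride order:
.sp 1
.nf
stress\-ng \-\-dentry 4 \-\-dentries 4096 \-\-dentry\-order stride \-t 60s
.fi
.RE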
 1373 .TP
 1374 .B \-\-dev N
 1375 start N workers that exercise the /dev devices. Each worker runs 5
 1376 concurrent threads that perform open(2), fstat(2), lseek(2), poll(2),
 1377 fcntl(2), mmap(2), munmap(2), fsync(2) and close(2) on each device.
 1378 Note that watchdog devices are not exercised.
 1379 .TP
 1380 .B \-\-dev\-ops N
 1381 stop dev workers after N bogo device exercising operations.
 1382 .TP
 1383 .B \-\-dev\-file filename
 1384 specify the device file to exercise, for example, /dev/null. By default
 1385 the stressor will work through all the device files it can find, however,
 1386 this option allows a single device file to be exercised.
 1387 .TP
 1388 .B \-\-dev\-shm N
 1389 start N workers that fallocate large files in /dev/shm and then mmap
 1390 these into memory and touch all the pages. This exercises pages being
 1391 moved to/from the buffer cache. Linux only.
 1392 .TP
 1393 .B \-\-dev\-shm\-ops N
 1394 stop after N bogo allocation and mmap /dev/shm operations.
 1395 .TP
 1396 .B \-\-dir N
 1397 start N workers that create and remove directories using mkdir and rmdir.
 1398 .TP
 1399 .B \-\-dir\-ops N
 1400 stop directory thrash workers after N bogo directory operations.
 1401 .TP
 1402 .B \-\-dir\-dirs N
 1403 exercise dir on N directories. The default is 8192 directories; this option
 1404 allows 64 to 65536 directories to be used instead.
 1405 .TP
 1406 .B \-\-dirdeep N
 1407 start N workers that create a depth-first tree of directories to a maximum
 1408 depth as limited by PATH_MAX or ENAMETOOLONG (whichever occurs first).
 1409 By default, each level of the tree contains one directory, but this can
 1410 be increased to a maximum of 10 sub-trees using the \-\-dirdeep\-dir option.
 1411 To stress inode creation, a symlink and a hardlink to a file at the root
 1412 of the tree are created at each level.
 1413 .TP
 1414 .B \-\-dirdeep\-ops N
 1415 stop directory depth workers after N bogo directory operations.
 1416 .TP
 1417 .B \-\-dirdeep\-dirs N
 1418 create N directories at each tree level. The default is just 1 but can be
 1419 increased to a maximum of 10 per level.
 1420 .TP
 1421 .B \-\-dirdeep\-inodes N
 1422 consume up to N inodes per dirdeep stressor while creating directories and
 1423 links. The value N can be the number of inodes or a percentage of the total
 1424 available free inodes on the filesystem being used.
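.RS
For example, an illustrative invocation (the sub-tree count, inode limit and
timeout are arbitrary example values) that builds trees with 4 directories
per level and caps inode consumption at 10% of the free inodes:
.sp 1
.nf
stress\-ng \-\-dirdeep 1 \-\-dirdeep\-dirs 4 \-\-dirdeep\-inodes 10% \-t 60s
.fi
.RE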
 1425 .TP
 1426 .B \-\-dirmany N
 1427 start N stressors that create as many empty files in a directory as possible
 1428 and then remove them. The file creation phase stops when an error occurs
 1429 (for example, out of inodes, too many files, quota reached, etc.) and then
 1430 the files are removed. This cycles until the run time is reached or the
 1431 file creation count bogo-ops metric is reached. This is a much faster and
 1432 lighter weight directory exercising stressor compared to the dentry stressor.
 1433 .TP
 1434 .B \-\-dirmany\-ops N
 1435 stop dirmany stressors after N empty files have been created.
 1436 .TP
 1437 .B \-\-dnotify N
 1438 start N workers performing file system activities such as making/deleting
 1439 files/directories, renaming files, etc. to stress exercise the various dnotify
 1440 events (Linux only).
 1441 .TP
 1442 .B \-\-dnotify\-ops N
 1443 stop dnotify stress workers after N dnotify bogo operations.
 1444 .TP
 1445 .B \-\-dup N
 1446 start N workers that perform dup(2) and then close(2) operations on /dev/zero.
 1447 The maximum opens at one time is system defined, so the test will run up to
 1448 this maximum, or 65536 open file descriptors, whichever comes first.
 1449 .TP
 1450 .B \-\-dup\-ops N
 1451 stop the dup stress workers after N bogo open operations.
 1452 .TP
 1453 .B \-\-dynlib N
 1454 start N workers that dynamically load and unload various shared libraries. This
 1455 exercises memory mapping and dynamic code loading and symbol lookups. See
 1456 dlopen(3) for more details of this mechanism.
 1457 .TP
 1458 .B \-\-dynlib\-ops N
 1459 stop workers after N bogo load/unload cycles.
 1460 .TP
 1461 .B \-\-efivar N
 1462 start N workers that exercise the Linux /sys/firmware/efi/vars interface by
 1463 reading the EFI variables. This is a Linux only stress test for platforms
 1464 that support the EFI vars interface and requires the CAP_SYS_ADMIN
 1465 capability.
 1466 .TP
 1467 .B \-\-efivar-ops N
 1468 stop the efivar stressors after N EFI variable read operations.
 1469 .TP
 1470 .B \-\-enosys N
 1471 start N workers that exercise non-functional system call numbers. This calls
 1472 a wide range of system call numbers to see if it can break a system where these
 1473 are not wired up correctly.  It also keeps track of system calls that exist
 1474 (ones that don't return ENOSYS) so that it can focus on purely finding and
 1475 exercising non-functional system calls. This stressor exercises system calls
 1476 from 0 to __NR_syscalls + 1024, random system calls constrained within the
 1477 ranges of 0 to 2^8, 2^16, 2^24, 2^32, 2^40, 2^48, 2^56 and 2^64 bits,
 1478 high system call numbers and various other bit patterns to try to get good wide
 1479 coverage. To keep the environment clean, each system call being tested runs
 1480 in a child process with reduced capabilities.
 1481 .TP
 1482 .B \-\-enosys\-ops N
 1483 stop after N bogo enosys system call attempts.
 1484 .TP
 1485 .B \-\-env N
 1486 start N workers that create numerous large environment variables to try to
 1487 trigger out of memory conditions using setenv(3).  If ENOMEM occurs then the
 1488 environment is emptied and another memory filling retry occurs.  The process
 1489 is restarted if it is killed by the Out Of Memory (OOM) killer.
 1490 .TP
 1491 .B \-\-env\-ops N
 1492 stop after N bogo setenv/unsetenv attempts.
 1493 .TP
 1494 .B \-\-epoll N
 1495 start N workers that perform various epoll related socket stress activity using
 1496 epoll_wait(2) to monitor and handle new connections. This involves
 1497 client/server processes performing rapid connect, send/receives and disconnects
 1498 on the local host.  Using epoll allows a large number of connections to be
 1499 efficiently handled, however, this can lead to the connection table filling up
 1500 and blocking further socket connections, hence impacting the epoll bogo op
 1501 stats.  For ipv4 and ipv6 domains, multiple servers are spawned on multiple
 1502 ports. The epoll stressor is for Linux only.
 1503 .TP
 1504 .B \-\-epoll\-domain D
 1505 specify the domain to use, the default is unix (aka local). Currently ipv4,
 1506 ipv6 and unix are supported.
 1507 .TP
 1508 .B \-\-epoll\-port P
 1509 start at socket port P. For N epoll worker processes, ports P to P + (N * 4) - 1
 1510 are used for the ipv4 and ipv6 domains and ports P to P + N - 1 are used for
 1511 the unix domain.
 1512 .TP
 1513 .B \-\-epoll\-ops N
 1514 stop epoll workers after N bogo operations.
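.RS
For example, an illustrative invocation (worker count, starting port and
timeout are arbitrary example values) that exercises epoll over the ipv4
domain starting at port 6000:
.sp 1
.nf
stress\-ng \-\-epoll 2 \-\-epoll\-domain ipv4 \-\-epoll\-port 6000 \-t 60s
.fi
.RE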
 1515 .TP
 1516 .B \-\-eventfd N
 1517 start N parent and child worker processes that read and write 8 byte event
 1518 messages between them via the eventfd mechanism (Linux only).
 1519 .TP
 1520 .B \-\-eventfd\-ops N
 1521 stop eventfd workers after N bogo operations.
 1522 .TP
 1523 .B \-\-eventfd\-nonblock N
 1524 enable EFD_NONBLOCK to allow non-blocking operation on the event file descriptor.
 1525 This will cause reads and writes to return with EAGAIN rather than blocking, hence
 1526 causing a high rate of polling I/O.
 1527 .TP
 1528 .B \-\-exec N
 1529 start N workers continually forking children that exec stress-ng and then exit
 1530 almost immediately. If a system has pthread support then 1 in 4 of the execs
 1531 will be from inside a pthread to exercise exec'ing from inside a pthread
 1532 context.
 1533 .TP
 1534 .B \-\-exec\-ops N
 1535 stop exec stress workers after N bogo operations.
 1536 .TP
 1537 .B \-\-exec\-max P
 1538 create P child processes that exec stress-ng and then wait for them to exit per
 1539 iteration. The default is just 1; higher values will create many temporary
 1540 zombie processes that are waiting to be reaped. One can potentially fill up the
 1541 process table using high values for \-\-exec\-max and \-\-exec.
 1542 .TP
 1543 .B \-\-exit\-group N
 1544 start N workers that create 16 pthreads and terminate the pthreads and
 1545 the controlling child process using exit_group(2). (Linux only stressor).
 1546 .TP
 1547 .B \-\-exit\-group\-ops N
 1548 stop after N iterations of pthread creation and deletion loops.
 1549 .TP
 1550 .B \-F N, \-\-fallocate N
 1551 start N workers continually fallocating (preallocating file space) and
 1552 ftruncating (file truncating) temporary files.  If the file is larger than the
 1553 free space, fallocate will produce an ENOSPC error which is ignored by this
 1554 stressor.
 1555 .TP
 1556 .B \-\-fallocate\-bytes N
 1557 allocated file size, the default is 1 GB. One can specify the size as % of free
 1558 space on the file system or in units of Bytes, KBytes, MBytes and GBytes using
 1559 the suffix b, k, m or g.
 1560 .TP
 1561 .B \-\-fallocate\-ops N
 1562 stop fallocate stress workers after N bogo fallocate operations.
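.RS
For example, an illustrative invocation (worker count and file size are
arbitrary example values) that preallocates and truncates 2 GB temporary
files in each of four workers:
.sp 1
.nf
stress\-ng \-\-fallocate 4 \-\-fallocate\-bytes 2g \-t 60s
.fi
.RE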
 1563 .TP
 1564 .B \-\-fanotify N
 1565 start N workers performing file system activities such as creating, opening,
 1566 writing, reading and unlinking files to exercise the fanotify event monitoring
 1567 interface (Linux only). Each stressor runs a child process to generate file
 1568 events and a parent process to read file events using fanotify. Has to be run
 1569 with CAP_SYS_ADMIN capability.
 1570 .TP
 1571 .B \-\-fanotify-ops N
 1572 stop fanotify stress workers after N bogo fanotify events.
 1573 .TP
 1574 .B \-\-fault N
 1575 start N workers that generate minor and major page faults.
 1576 .TP
 1577 .B \-\-fault\-ops N
 1578 stop the page fault workers after N bogo page fault operations.
 1579 .TP
 1580 .B \-\-fcntl N
 1581 start N workers that perform fcntl(2) calls with various commands.  The
 1582 exercised commands (if available) are: F_DUPFD, F_DUPFD_CLOEXEC, F_GETFD,
 1583 F_SETFD, F_GETFL, F_SETFL, F_GETOWN, F_SETOWN, F_GETOWN_EX, F_SETOWN_EX,
 1584 F_GETSIG, F_SETSIG, F_GETLK, F_SETLK, F_SETLKW, F_OFD_GETLK, F_OFD_SETLK
 1585 and F_OFD_SETLKW.
 1586 .TP
 1587 .B \-\-fcntl\-ops N
 1588 stop the fcntl workers after N bogo fcntl operations.
 1589 .TP
 1590 .B \-\-fiemap N
 1591 start N workers that each create a file with many randomly changing extents
 1592 and have 4 child processes per worker that gather the extent information using
 1593 the FS_IOC_FIEMAP ioctl(2).
 1594 .TP
 1595 .B \-\-fiemap\-ops N
 1596 stop after N fiemap bogo operations.
 1597 .TP
 1598 .B \-\-fiemap\-bytes N
 1599 specify the size of the fiemap'd file in bytes.  One can specify the size
 1600 as % of free space on the file system or in units of Bytes, KBytes, MBytes
 1601 and GBytes using the suffix b, k, m or g.  Larger files will contain more
 1602 extents, causing more stress when gathering extent information.
 1603 .TP
 1604 .B \-\-fifo N
 1605 start N workers that exercise a named pipe by transmitting 64 bit integers.
 1606 .TP
 1607 .B \-\-fifo-ops N
 1608 stop fifo workers after N bogo pipe write operations.
 1609 .TP
 1610 .B \-\-fifo-readers N
 1611 for each worker, create N fifo reader workers that read
 1612 the named pipe using simple blocking reads.
 1613 .TP
 1614 .B \-\-file\-ioctl N
 1615 start N workers that exercise various file specific ioctl(2) calls. This will
 1616 attempt to use the FIONBIO, FIOQSIZE, FIGETBSZ, FIOCLEX, FIONCLEX,
 1617 FIOASYNC, FIFREEZE, FITHAW, FICLONE, FICLONERANGE, FIONREAD,
 1618 FIONWRITE and FS_IOC_RESVSP ioctls if these are defined.
 1619 .TP
 1620 .B \-\-file\-ioctl\-ops N
 1621 stop file\-ioctl workers after N file ioctl bogo operations.
 1622 .TP
 1623 .B \-\-filename N
 1624 start N workers that exercise file creation using various length filenames
 1625 containing a range of allowed filename characters.  This will try to see if
 1626 it can exceed the file system allowed filename length as well as test
 1627 various filename lengths between 1 and the maximum allowed by the file system.
 1628 .TP
 1629 .B \-\-filename-ops N
 1630 stop filename workers after N bogo filename tests.
 1631 .TP
 1632 .B \-\-filename-opts opt
 1633 use characters in the filename based on option 'opt'. Valid options are:
 1634 .TS
 1635 expand;
 1636 lB lB lB lB
 1637 l l s s.
 1638 Option	Description
 1639 probe	T{
 1640 default option, probe the file system for valid allowed characters in a file name
 1641 and use these
 1642 T}
 1643 posix	T{
 1644 use characters as specified by The Open Group Base Specifications Issue 7,
 1645 POSIX.1-2008, 3.278 Portable Filename Character Set
 1646 T}
 1647 ext	T{
 1648 use characters allowed by the ext2, ext3, ext4 file systems, namely any 8
 1649 bit character apart from NUL and /
 1650 T}
 1651 .TE
 1652 .TP
 1653 .B \-\-flock N
 1654 start N workers locking on a single file.
 1655 .TP
 1656 .B \-\-flock\-ops N
 1657 stop flock stress workers after N bogo flock operations.
 1658 .TP
 1659 .B \-f N, \-\-fork N
 1660 start N workers continually forking children that immediately exit.
 1661 .TP
 1662 .B \-\-fork\-ops N
 1663 stop fork stress workers after N bogo operations.
 1664 .TP
 1665 .B \-\-fork\-max P
 1666 create P child processes and then wait for them to exit per iteration. The
 1667 default is just 1; higher values will create many temporary zombie processes
 1668 that are waiting to be reaped. One can potentially fill up the process
 1669 table using high values for \-\-fork\-max and \-\-fork.
 1670 .TP
 1671 .B \-\-fork\-vm
 1672 enable detrimental performance virtual memory advice using madvise on
 1673 all pages of the forked process. Where possible this will try to set
 1674 every page in the new process using the madvise MADV_MERGEABLE,
 1675 MADV_WILLNEED, MADV_HUGEPAGE and MADV_RANDOM flags. Linux only.
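.RS
For example, an illustrative invocation (worker count and children per
iteration are arbitrary example values) that forks and reaps 64 children
per iteration in each of four fork workers:
.sp 1
.nf
stress\-ng \-\-fork 4 \-\-fork\-max 64 \-t 60s
.fi
.RE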
 1676 .TP
 1677 .B \-\-fp\-error N
 1678 start N workers that generate floating point exceptions. Computations are
 1679 performed to force and check for the FE_DIVBYZERO, FE_INEXACT, FE_INVALID,
 1680 FE_OVERFLOW and FE_UNDERFLOW exceptions.  EDOM and ERANGE errors are also
 1681 checked.
 1682 .TP
 1683 .B \-\-fp\-error\-ops N
 1684 stop after N bogo floating point exceptions.
 1685 .TP
 1686 .B \-\-fpunch N
 1687 start N workers that punch and fill holes in a 16 MB file using five
 1688 concurrent processes per stressor exercising the same file. Where
 1689 available, this uses fallocate(2) FALLOC_FL_KEEP_SIZE,
 1690 FALLOC_FL_PUNCH_HOLE, FALLOC_FL_ZERO_RANGE, FALLOC_FL_COLLAPSE_RANGE
 1691 and FALLOC_FL_INSERT_RANGE to make and fill holes across the file
 1692 and break it into multiple extents.
 1693 .TP
 1694 .B \-\-fpunch\-ops N
 1695 stop fpunch workers after N punch and fill bogo operations.
 1696 .TP
 1697 .B \-\-fstat N
 1698 start N workers fstat'ing files in a directory (default is /dev).
 1699 .TP
 1700 .B \-\-fstat\-ops N
 1701 stop fstat stress workers after N bogo fstat operations.
 1702 .TP
 1703 .B \-\-fstat\-dir directory
 1704 specify the directory to fstat to override the default of /dev.
 1705 All the files in the directory will be fstat'd repeatedly.
 1706 .TP
 1707 .B \-\-full N
 1708 start N workers that exercise /dev/full.  This attempts to write to
 1709 the device (which should always get error ENOSPC), to read from the device
 1710 (which should always return a buffer of zeros) and to seek randomly on the
 1711 device (which should always succeed).  (Linux only).
 1712 .TP
 1713 .B \-\-full\-ops N
 1714 stop the stress full workers after N bogo I/O operations.
 1715 .TP
 1716 .B \-\-funccall N
 1717 start N workers that call functions of 1 through to 9 arguments. By default
 1718 functions with uint64_t arguments are called, however, this can be changed
 1719 using the \-\-funccall\-method option.
 1720 .TP
 1721 .B \-\-funccall\-ops N
 1722 stop the funccall workers after N bogo function call operations. Each bogo
 1723 operation is 1000 calls of functions of 1 through to 9 arguments of the chosen
 1724 argument type.
 1725 .TP
 1726 .B \-\-funccall\-method method
 1727 specify the method of funccall argument type to be used. The
 1728 default is uint64_t but can be one of bool, uint8, uint16, uint32, uint64,
 1729 uint128, float, double, longdouble, cfloat (complex float),
 1730 cdouble (complex double), clongdouble (complex long double), float16,
 1731 float32, float64, float80, float128, decimal32, decimal64 and decimal128.
 1732 Note that some of these types are only available with specific architectures
 1733 and compiler versions.
 1734 .TP
 1735 .B \-\-funcret N
 1736 start N workers that pass and return by value various small to large data
 1737 types.
 1738 .TP
 1739 .B \-\-funcret\-ops N
 1740 stop the funcret workers after N bogo function call operations.
 1741 .TP
 1742 .B \-\-funcret\-method method
 1743 specify the method of funcret argument type to be used. The
 1744 default is uint64_t but can be one of uint8 uint16 uint32 uint64 uint128
 1745 float double longdouble float80 float128 decimal32 decimal64 decimal128
 1746 uint8x32 uint8x128 uint64x128.
 1747 .TP
 1748 .B \-\-futex N
 1749 start N workers that rapidly exercise the futex system call. Each worker has
 1750 two processes, a futex waiter and a futex waker. The waiter waits with a very
 1751 small timeout to stress the timeout and rapid polled futex waiting. This is a
 1752 Linux specific stress option.
 1753 .TP
 1754 .B \-\-futex\-ops N
 1755 stop futex workers after N bogo successful futex wait operations.
 1756 .TP
 1757 .B \-\-get N
 1758 start N workers that call system calls that fetch data from the kernel,
 1759 currently these are: getpid, getppid, getcwd, getgid, getegid, getuid,
 1760 getgroups, getpgrp, getpgid, getpriority, getresgid, getresuid, getrlimit,
 1761 prlimit, getrusage, getsid, gettid, getcpu, gettimeofday, uname, adjtimex,
 1762 sysfs.  Some of these system calls are OS specific.
 1763 .TP
 1764 .B \-\-get\-ops N
 1765 stop get workers after N bogo get operations.
 1766 .TP
 1767 .B \-\-getdent N
 1768 start N workers that recursively read directories /proc, /dev/, /tmp, /sys
 1769 and /run using getdents and getdents64 (Linux only).
 1770 .TP
 1771 .B \-\-getdent\-ops N
 1772 stop getdent workers after N getdent bogo operations.
 1773 .TP
 1774 .B \-\-getrandom N
 1775 start N workers that get 8192 random bytes from the /dev/urandom pool using
 1776 the getrandom(2) system call (Linux) or getentropy(2) (OpenBSD).
 1777 .TP
 1778 .B \-\-getrandom\-ops N
 1779 stop getrandom workers after N bogo get operations.
 1780 .TP
 1781 .B \-\-handle N
 1782 start N workers that exercise the name_to_handle_at(2) and open_by_handle_at(2)
 1783 system calls. (Linux only).
 1784 .TP
 1785 .B \-\-handle\-ops N
 1786 stop after N handle bogo operations.
 1787 .TP
 1788 .B \-d N, \-\-hdd N
 1789 start N workers continually writing, reading and removing temporary files. The
 1790 default mode is to stress test sequential writes and reads.  With
 1791 the \-\-aggressive option enabled without any \-\-hdd\-opts options the
 1792 hdd stressor will work through all the \-\-hdd\-opts options one by one to
 1793 cover a range of I/O options.
 1794 .TP
 1795 .B \-\-hdd\-bytes N
 1796 write N bytes for each hdd process, the default is 1 GB. One can specify the
 1797 size as % of free space on the file system or in units of Bytes, KBytes, MBytes
 1798 and GBytes using the suffix b, k, m or g.
 1799 .TP
 1800 .B \-\-hdd\-opts list
 1801 specify various stress test options as a comma separated list. Options are as
 1802 follows:
 1803 .TS
 1804 expand;
 1805 lB lB lB lB
 1806 l l s s.
 1807 Option	Description
 1808 direct	T{
 1809 try to minimize cache effects of the I/O. File I/O writes are performed
 1810 directly from user space buffers and synchronous transfer is also attempted.
 1811 To guarantee synchronous I/O, also use the sync option.
 1812 T}
 1813 dsync	T{
 1814 ensure output has been transferred to underlying hardware and file metadata
 1815 has been updated (using the O_DSYNC open flag). This is equivalent to each
 1816 write(2) being followed by a call to fdatasync(2). See also the fdatasync
 1817 option.
 1818 T}
 1819 fadv\-dontneed	T{
 1820 advise kernel to expect the data will not be accessed in the near future.
 1821 T}
 1822 fadv\-noreuse	T{
 1823 advise kernel to expect the data to be accessed only once.
 1824 T}
 1825 fadv\-normal	T{
 1826 advise kernel there are no explicit access pattern for the data. This is the
 1827 default advice assumption.
 1828 T}
 1829 fadv\-rnd	T{
 1830 advise kernel to expect random access patterns for the data.
 1831 T}
 1832 fadv\-seq	T{
 1833 advise kernel to expect sequential access patterns for the data.
 1834 T}
 1835 fadv\-willneed	T{
 1836 advise kernel to expect the data to be accessed in the near future.
 1837 T}
 1838 fsync	T{
 1839 flush all modified in-core data after each write to the output device using an
 1840 explicit fsync(2) call.
 1841 T}
 1842 fdatasync	T{
 1843 similar to fsync, but do not flush the modified metadata unless metadata is
 1844 required for later data reads to be handled correctly. This uses an explicit
 1845 fdatasync(2) call.
 1846 T}
 1847 iovec	T{
 1848 use readv/writev multiple buffer I/Os rather than read/write. Instead of 1
 1849 read/write operation, the buffer is broken into an iovec of 16 buffers.
 1850 T}
 1851 noatime	T{
 1852 do not update the file last access timestamp, this can reduce metadata writes.
 1853 T}
 1854 sync	T{
 1855 ensure output has been transferred to underlying hardware (using the O_SYNC
 1856 open flag). This is equivalent to each write(2) being followed by a call to
 1857 fsync(2). See also the fsync option.
 1858 T}
 1859 rd\-rnd	T{
 1860 read data randomly. By default, written data is not read back, however, this
 1861 option will force it to be read back randomly.
 1862 T}
 1863 rd\-seq	T{
 1864 read data sequentially. By default, written data is not read back, however,
 1865 this option will force it to be read back sequentially.
 1866 T}
 1867 syncfs	T{
 1868 write all buffered modifications of file metadata and data on the filesystem
 1869 that contains the hdd worker files.
 1870 T}
 1871 utimes	T{
 1872 force update of file timestamp which may increase metadata writes.
 1873 T}
 1874 wr\-rnd	T{
 1875 write data randomly. The wr\-seq option cannot be used at the same time.
 1876 T}
 1877 wr\-seq	T{
 1878 write data sequentially. This is the default if no write modes are specified.
 1879 T}
 1880 .TE
 1881 .RE
 1882 .PP
 1883 Note that some of these options are mutually exclusive, for example, there can
 1884 be only one method of writing or reading.  Also, fadvise flags may be mutually
 1885 exclusive, for example fadv-willneed cannot be used with fadv-dontneed.
 1886 .TP
 1887 .B \-\-hdd\-ops N
 1888 stop hdd stress workers after N bogo operations.
 1889 .TP
 1890 .B \-\-hdd\-write\-size N
 1891 specify size of each write in bytes. Size can be from 1 byte to 4MB.
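.RS
For example, an illustrative invocation (worker count, sizes, options and
timeout are arbitrary example values) that writes 4 GB per worker randomly
in 4096 byte writes with an fsync after each write:
.sp 1
.nf
stress\-ng \-\-hdd 2 \-\-hdd\-bytes 4g \-\-hdd\-opts wr\-rnd,fsync \e
     \-\-hdd\-write\-size 4096 \-t 2m
.fi
.RE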
 1892 .TP
 1893 .B \-\-heapsort N
 1894 start N workers that sort 32 bit integers using the BSD heapsort.
 1895 .TP
 1896 .B \-\-heapsort\-ops N
 1897 stop heapsort stress workers after N bogo heapsorts.
 1898 .TP
 1899 .B \-\-heapsort\-size N
 1900 specify number of 32 bit integers to sort, default is 262144 (256 \(mu 1024).
 1901 .TP
 1902 .B \-\-hrtimers N
 1903 start N workers that exercise high resolution timers at a high frequency. Each
 1904 stressor starts 32 processes that run with random timer intervals of 0..499999
 1905 nanoseconds. Running this stressor with appropriate privilege will run these
 1906 with the SCHED_RR policy.
 1907 .TP
 1908 .B \-\-hrtimers\-ops N
 1909 stop hrtimers stressors after N timer event bogo operations.
 1910 .TP
 1911 .B \-\-hsearch N
 1912 start N workers that search an 80% full hash table using hsearch(3). By default,
 1913 there are 8192 elements inserted into the hash table.  This is a useful method
 1914 to exercise access of memory and processor cache.
 1915 .TP
 1916 .B \-\-hsearch\-ops N
 1917 stop the hsearch workers after N bogo hsearch operations are completed.
 1918 .TP
 1919 .B \-\-hsearch\-size N
 1920 specify the number of hash entries to be inserted into the hash table. Size can
 1921 be from 1K to 4M.
 1922 .TP
 1923 .B \-\-icache N
 1924 start N workers that stress the instruction cache by forcing instruction cache
 1925 reloads.  This is achieved by modifying an instruction cache line,  causing
 1926 the processor to reload it when we call a function inside it. Currently
 1927 only verified and enabled for Intel x86 CPUs.
 1928 .TP
 1929 .B \-\-icache\-ops N
 1930 stop the icache workers after N bogo icache operations are completed.
 1931 .TP
 1932 .B \-\-icmp\-flood N
 1933 start N workers that flood localhost with randomly sized ICMP ping packets.
 1934 This stressor requires the CAP_NET_RAW capability.
 1935 .TP
 1936 .B \-\-icmp\-flood\-ops N
 1937 stop icmp flood workers after N ICMP ping packets have been sent.
 1938 .TP
 1939 .B \-\-idle\-scan N
 1940 start N workers that scan the idle page bitmap across a range of physical
 1941 pages. This sets and checks for idle pages via the idle page tracking
 1942 interface /sys/kernel/mm/page_idle/bitmap.  This is for Linux only.
 1943 .TP
 1944 .B \-\-idle\-scan\-ops N
 1945 stop after N bogo page scan operations. Currently one bogo page scan
 1946 operation is equivalent to setting and checking 64 physical pages.
 1947 .TP
 1948 .B \-\-idle\-page N
 1949 start N workers that walk through every page exercising the Linux
 1950 /sys/kernel/mm/page_idle/bitmap interface. Requires CAP_SYS_RESOURCE
 1951 capability.
 1952 .TP
 1953 .B \-\-idle\-page\-ops N
 1954 stop after N bogo idle page operations.
 1955 .TP
 1956 .B \-\-inode-flags N
 1957 start N workers that exercise inode flags using the FS_IOC_GETFLAGS and
 1958 FS_IOC_SETFLAGS ioctl(2). This attempts to apply all the available inode
 1959 flags onto a directory and file even if the underlying file system may not
 1960 support these flags (errors are just ignored).  Each worker runs 4 threads
 1961 that exercise the flags on the same directory and file to try to force
 1962 races. This is a Linux only stressor, see ioctl_iflags(2) for more details.
 1963 .TP
 1964 .B \-\-inode-flags-ops N
 1965 stop the inode-flags workers after N ioctl flag setting attempts.
 1966 .TP
 1967 .B \-\-inotify N
 1968 start N workers performing file system activities such as making/deleting
 1969 files/directories, moving files, etc. to stress exercise the various inotify
 1970 events (Linux only).
 1971 .TP
 1972 .B \-\-inotify\-ops N
 1973 stop inotify stress workers after N inotify bogo operations.
 1974 .TP
 1975 .B \-i N, \-\-io N
 1976 start N workers continuously calling sync(2) to commit buffer cache to disk.
 1977 This can be used in conjunction with the \-\-hdd options.
 1978 .TP
 1979 .B \-\-io\-ops N
 1980 stop io stress workers after N bogo operations.
 1981 .TP
 1982 .B \-\-iomix N
 1983 start N workers that perform a mix of sequential, random and memory mapped
 1984 read/write operations as well as forced sync'ing and (if run as root)
 1985 cache dropping.  Multiple child processes are spawned to all share a single
 1986 file and perform different I/O operations on the same file.
 1987 .TP
 1988 .B \-\-iomix\-bytes N
 1989 write N bytes for each iomix worker process, the default is 1 GB. One can
 1990 specify the size as % of free space on the file system or in units of Bytes,
 1991 KBytes, MBytes and GBytes using the suffix b, k, m or g.
 1992 .TP
 1993 .B \-\-iomix\-ops N
 1994 stop iomix stress workers after N bogo iomix I/O operations.
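.RS
For example, an illustrative invocation (worker count, size and timeout are
arbitrary example values) that runs two iomix workers, each using 10% of
the free file system space:
.sp 1
.nf
stress\-ng \-\-iomix 2 \-\-iomix\-bytes 10% \-t 2m
.fi
.RE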
 1995 .TP
 1996 .B \-\-ioport N
 1997 start N workers that perform bursts of 16 reads and 16 writes of ioport 0x80
 1998 (x86 Linux systems only).  I/O performed on x86 platforms on port 0x80 will
 1999 cause delays on the CPU performing the I/O.
 2000 .TP
 2001 .B \-\-ioport\-ops N
 2002 stop the ioport stressors after N bogo I/O operations
 2003 .TP
 2004 .B \-\-ioport\-opts [ in | out | inout ]
 2005 specify whether port reads (in), port writes (out) or both reads and writes
 2006 (inout) are to be performed.  The default is both in and out.
 2007 .TP
 2008 .B \-\-ioprio N
 2009 start N workers that exercise the ioprio_get(2) and ioprio_set(2) system calls
 2010 (Linux only).
 2011 .TP
 2012 .B \-\-ioprio\-ops N
 2013 stop after N io priority bogo operations.
 2014 .TP
 2015 .B \-\-io\-uring N
 2016 start N workers that perform iovec write and read I/O operations using the
 2017 Linux io-uring interface. On each bogo-loop 1024 \(mu 512 byte writes and
 2018 1024 \(mu 512 byte reads are performed on a temporary file.
 2019 .TP
 2020 .B \-\-io\-uring\-ops N
 2021 stop after N rounds of write and reads.
 2022 .TP
 2023 .B \-\-ipsec\-mb N
 2024 start N workers that perform cryptographic processing using the highly
 2025 optimized Intel Multi-Buffer Crypto for IPsec library. Depending on the
 2026 features available, SSE, AVX, AVX2 and AVX512 CPU features will be used
 2027 on data encrypted by SHA, DES, CMAC, CTR, HMAC MD5, HMAC SHA1 and
 2028 HMAC SHA512 cryptographic routines. This is only available for modern
 2029 x86-64 Intel CPUs.
 2030 .TP
 2031 .B \-\-ipsec\-mb\-ops N
 2032 stop after N rounds of processing of data using the cryptographic
 2033 routines.
 2034 .TP
 2035 .B \-\-ipsec\-mb\-feature [ sse | avx | avx2 | avx512 ]
 2036 use only the specified processor CPU feature. By default, all the available
 2037 features for the CPU are exercised.
 2038 .TP
 2039 .B \-\-itimer N
 2040 start N workers that exercise the system interval timers. This sets up an
 2041 ITIMER_PROF itimer that generates a SIGPROF signal.  The default frequency for
 2042 the itimer is 1 MHz, however, the Linux kernel will set this to be no more than
 2043 the jiffy setting, hence high frequency SIGPROF signals are not normally
 2044 possible.  A busy loop spins on getitimer(2) calls to consume CPU and hence
 2045 decrement the itimer based on amount of time spent in CPU and system time.
 2046 .TP
 2047 .B \-\-itimer\-ops N
 2048 stop itimer stress workers after N bogo itimer SIGPROF signals.
 2049 .TP
 2050 .B \-\-itimer\-freq F
 2051 run itimer at F Hz; range from 1 to 1000000 Hz. Normally the highest frequency
 2052 is limited by the number of jiffy ticks per second, so running above 1000 Hz
 2053 is difficult to attain in practice.
 2054 .TP
 2055 .B \-\-itimer\-rand
 2056 select an interval timer frequency based around the interval timer
 2057 frequency +/- 12.5% random jitter. This tries to force more variability in
 2058 the timer interval to make the scheduling less predictable.
 2059 .TP
 2060 .B \-\-judy N
 2061 start N workers that insert, search and delete 32 bit integers in a Judy
 2062 array using a predictable yet sparse array index. By default,
 2063 there are 131072 integers used in the Judy array.  This is a useful method
 2064 to exercise random access of memory and processor cache.
 2065 .TP
 2066 .B \-\-judy\-ops N
 2067 stop the judy workers after N bogo judy operations are completed.
 2068 .TP
 2069 .B \-\-judy\-size N
 2070 specify the size (number of 32 bit integers) in the Judy array to exercise.
 2071 Size can be from 1K to 4M 32 bit integers.
 2072 .TP
 2073 .B \-\-kcmp N
 2074 start N workers that use kcmp(2) to compare parent and child processes to
 2075 determine if they share kernel resources. Supported only for Linux and
 2076 requires CAP_SYS_PTRACE capability.
 2077 .TP
 2078 .B \-\-kcmp\-ops N
 2079 stop kcmp workers after N bogo kcmp operations.
 2080 .TP
 2081 .B \-\-key N
 2082 start N workers that create and manipulate keys using add_key(2) and
 2083 keyctl(2). As many keys are created as the per user limit allows and then the
 2084 following keyctl commands are exercised on each key: KEYCTL_SET_TIMEOUT,
 2085 KEYCTL_DESCRIBE, KEYCTL_UPDATE, KEYCTL_READ, KEYCTL_CLEAR and
 2086 KEYCTL_INVALIDATE.
 2087 .TP
 2088 .B \-\-key\-ops N
 2089 stop key workers after N bogo key operations.
 2090 .TP
 2091 .B \-\-kill N
 2092 start N workers sending SIGUSR1 kill signals to a SIG_IGN signal handler
 2093 in the stressor and SIGUSR1 kill signals to a child stressor with a SIGUSR1
 2094 handler. Most of the process time will end up in kernel space.
 2095 .TP
 2096 .B \-\-kill\-ops N
 2097 stop kill workers after N bogo kill operations.
 2098 .TP
 2099 .B \-\-klog N
 2100 start N workers exercising the kernel syslog(2) system call.  This will
 2101 attempt to read the kernel log with various sized read buffers. Linux only.
 2102 .TP
 2103 .B \-\-klog\-ops N
 2104 stop klog workers after N syslog operations.
 2105 .TP
 2106 .B \-\-l1cache N
 2107 start N workers that exercise the CPU level 1 cache with reads and writes. A cache
 2108 aligned buffer that is twice the level 1 cache size is read and then written
 2109 in level 1 cache set sized steps over each level 1 cache set. This is designed
 2110 to exercise cache block evictions. The bogo-op count measures the number of
 2111 million cache lines touched.  Where possible, the level 1 cache geometry is
 2112 determined from the kernel, however, this is not possible on some architectures
 2113 or kernels, so one may need to specify these manually. One can specify 3 out
 2114 of the 4 cache geometric parameters; these are as follows:
 2115 .TP
 2116 .B \-\-l1cache-line-size N
 2117 specify the level 1 cache line size (in bytes)
 2118 .TP
 2119 .B \-\-l1cache-sets N
 2120 specify the number of level 1 cache sets
 2121 .TP
 2122 .B \-\-l1cache-size N
 2123 specify the level 1 cache size (in bytes)
 2124 .TP
 2125 .B \-\-l1cache-ways N
 2126 specify the number of level 1 cache ways
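.RS
For example, an illustrative invocation specifying 3 of the 4 geometric
parameters; the values describe a hypothetical 32K, 8 way, 64 set, 64 byte
line level 1 cache and should be replaced with the geometry of the target
CPU if it cannot be determined from the kernel:
.sp 1
.nf
stress\-ng \-\-l1cache 1 \-\-l1cache\-line\-size 64 \-\-l1cache\-sets 64 \e
     \-\-l1cache\-ways 8 \-t 60s
.fi
.RE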
 2127 .TP
 2128 .B \-\-landlock N
 2129 start N workers that exercise Linux 5.13 landlocking. A range of
 2130 landlock_create_ruleset flags are exercised with a read only file rule
 2131 to see if a directory can be accessed and a read-write file create can
 2132 be blocked. Each ruleset attempt is exercised in a new child context and
 2133 this is the limiting factor on the speed of the stressor.
 2134 .TP
 2135 .B \-\-landlock-ops N
 2136 stop the landlock stressors after N landlock ruleset bogo operations.
 2137 .TP
 2138 .B \-\-lease N
 2139 start N workers locking, unlocking and breaking leases via the fcntl(2)
 2140 F_SETLEASE operation. The parent processes continually lock and unlock a lease
 2141 on a file while a user selectable number of child processes open the file with
 2142 a non-blocking open to generate SIGIO lease breaking notifications to the
 2143 parent.  This stressor is only available if F_SETLEASE, F_WRLCK and F_UNLCK
 2144 support is provided by fcntl(2).
 2145 .TP
 2146 .B \-\-lease\-ops N
 2147 stop lease workers after N bogo operations.
 2148 .TP
 2149 .B \-\-lease\-breakers N
 2150 start N lease breaker child processes per lease worker.  Normally one child is
 2151 plenty to force many SIGIO lease breaking notification signals to the parent,
 2152 however, this option allows one to specify more child processes if required.
 2153 .TP
 2154 .B \-\-link N
 2155 start N workers creating and removing hardlinks.
 2156 .TP
 2157 .B \-\-link\-ops N
 2158 stop link stress workers after N bogo operations.
 2159 .TP
 2160 .B \-\-list N
 2161 start N workers that exercise list data structures. The default is
 2162 to add, find and remove 5,000 64 bit integers into circleq (doubly
 2163 linked circle queue), list (doubly linked list), slist (singly
 2164 linked list), slistt (singly linked list using tail), stailq (singly
 2165 linked tail queue) and tailq (doubly linked tail queue) lists. The
 2166 intention of this stressor is to exercise memory and cache with the
 2167 various list operations.
 2168 .TP
 2169 .B \-\-list\-ops N
 2170 stop list stressors after N bogo ops. A bogo op covers the addition,
 2171 finding and removing all the items into the list(s).
 2172 .TP
 2173 .B \-\-list\-size N
 2174 specify the size of the list, where N is the number of 64 bit integers
 2175 to be added into the list.
 2176 .TP
 2177 .B \-\-list\-method [ all | circleq | list | slist | stailq | tailq ]
 2178 specify the list to be used. By default, all the list methods are
 2179 used (the 'all' option).
 2180 .TP
 2181 .B \-\-loadavg N
 2182 start N workers that attempt to create thousands of pthreads that run
 2183 at the lowest nice priority to force very high load averages. Linux
 2184 systems will also perform some I/O writes as pending I/O is also
 2185 factored into system load accounting.
 2186 .TP
 2187 .B \-\-loadavg\-ops N
 2188 stop loadavg workers after N bogo scheduling yields by the pthreads
 2189 have been reached.
 2190 .TP
 2191 .B \-\-lockbus N
 2192 start N workers that rapidly lock and increment 64 bytes of randomly chosen
 2193 memory from a 16MB mmap'd region (Intel x86 and ARM CPUs only).  This will
 2194 cause cacheline misses and stalling of CPUs.
 2195 .TP
 2196 .B \-\-lockbus-ops N
 2197 stop lockbus workers after N bogo operations.
 2198 .TP
 2199 .B \-\-locka N
 2200 start N workers that randomly lock and unlock regions of a file using the
 2201 POSIX advisory locking mechanism (see fcntl(2), F_SETLK, F_GETLK). Each
 2202 worker creates a 1024 KB file and attempts to hold a maximum of 1024
 2203 concurrent locks with a child process that also tries to hold 1024
 2204 concurrent locks. Old locks are unlocked on a first-in, first-out basis.
 2205 .TP
 2206 .B \-\-locka\-ops N
 2207 stop locka workers after N bogo locka operations.
 2208 .TP
 2209 .B \-\-lockf N
 2210 start N workers that randomly lock and unlock regions of a file using the
 2211 POSIX lockf(3) locking mechanism. Each worker creates a 64 KB file and
 2212 attempts to hold a maximum of 1024 concurrent locks with a child process
 2213 that also tries to hold 1024 concurrent locks. Old locks are unlocked on
 2214 a first-in, first-out basis.
 2215 .TP
 2216 .B \-\-lockf\-ops N
 2217 stop lockf workers after N bogo lockf operations.
 2218 .TP
 2219 .B \-\-lockf\-nonblock
 2220 instead of using blocking F_LOCK lockf(3) commands, use non-blocking F_TLOCK
 2221 commands and re-try if the lock failed.  This creates extra system call
 2222 overhead and CPU utilisation as the number of lockf workers increases and
 2223 should increase locking contention.
 2224 .TP
 2225 .B \-\-lockofd N
 2226 start N workers that randomly lock and unlock regions of a file using the
 2227 Linux open file description locks (see fcntl(2), F_OFD_SETLK, F_OFD_GETLK).
 2228 Each worker creates a 1024 KB file and attempts to hold a maximum of 1024
 2229 concurrent locks with a child process that also tries to hold 1024
 2230 concurrent locks. Old locks are unlocked on a first-in, first-out basis.
 2231 .TP
 2232 .B \-\-lockofd\-ops N
 2233 stop lockofd workers after N bogo lockofd operations.
 2234 .TP
 2235 .B \-\-longjmp N
 2236 start N workers that exercise setjmp(3)/longjmp(3) by rapid looping on
 2237 longjmp calls.
 2238 .TP
 2239 .B \-\-longjmp-ops N
 2240 stop longjmp stress workers after N bogo longjmp operations (1 bogo op is 1000
 2241 longjmp calls).
 2242 .TP
 2243 .B \-\-loop N
 2244 start N workers that exercise the loopback control device. This creates 2MB
 2245 loopback devices, expands them to 4MB, performs some loopback status information
 2246 get and set operations and then destroys them. Linux only and requires
 2247 CAP_SYS_ADMIN capability.
 2248 .TP
 2249 .B \-\-loop\-ops N
 2250 stop after N bogo loopback creation/deletion operations.
 2251 .TP
 2252 .B \-\-lsearch N
 2253 start N workers that linear search an unsorted array of 32 bit integers using
 2254 lsearch(3). By default, there are 8192 elements in the array.  This is a
 2255 useful method to exercise sequential access of memory and processor cache.
 2256 .TP
 2257 .B \-\-lsearch\-ops N
 2258 stop the lsearch workers after N bogo lsearch operations are completed.
 2259 .TP
 2260 .B \-\-lsearch\-size N
 2261 specify the size (number of 32 bit integers) in the array to lsearch. Size can
 2262 be from 1K to 4M.
 2263 .TP
 2264 .B \-\-madvise N
 2265 start N workers that apply random madvise(2) advice settings on pages of
 2266 a 4MB file backed shared memory mapping.
 2267 .TP
 2268 .B \-\-madvise\-ops N
 2269 stop madvise stressors after N bogo madvise operations.
 2270 .TP
 2271 .B \-\-malloc N
 2272 start N workers continuously calling malloc(3), calloc(3), realloc(3) and
 2273 free(3). By default, up to 65536 allocations can be active at any point, but
 2274 this can be altered with the \-\-malloc\-max option.  Allocation, reallocation
 2275 and freeing are chosen at random; 50% of the time memory is allocated (via
 2276 malloc, calloc or realloc) and 50% of the time allocations are freed.
 2277 Allocation sizes are also random, with the maximum allocation size controlled
 2278 by the \-\-malloc\-bytes option, the default size being 64K.  The worker is
 2279 re-started if it is killed by the out of memory (OOM) killer.
 2280 .TP
 2281 .B \-\-malloc\-bytes N
 2282 maximum per allocation/reallocation size. Allocations are randomly selected
 2283 from 1 to N bytes. One can specify the size as % of total available memory
 2284 or in units of Bytes, KBytes, MBytes and GBytes using the suffix b, k, m or
 2285 g.  Large allocation sizes cause the memory allocator to use mmap(2) rather
 2286 than expanding the heap using brk(2).
 2287 .TP
 2288 .B \-\-malloc\-max N
 2289 maximum number of active allocations allowed. Allocations are chosen at random
 2290 and placed in an allocation slot. Because there is about a 50%/50% split between
 2291 allocation and freeing, typically half of the allocation slots are in use at
 2292 any one time.
 2293 .TP
 2294 .B \-\-malloc\-ops N
 2295 stop after N malloc bogo operations. One bogo operation relates to a
 2296 successful malloc(3), calloc(3) or realloc(3).
 2297 .TP
 2298 .B \-\-malloc\-pthreads N
 2299 specify number of malloc stressing concurrent pthreads to run. The default is
 2300 0 (just one main process, no pthreads). This option will do nothing if pthreads
 2301 are not supported.
 2302 .TP
 2303 .B \-\-malloc\-thresh N
 2304 specify the threshold where malloc uses mmap(2) instead of sbrk(2) to allocate
 2305 more memory. This is only available on systems that provide the GNU C
 2306 mallopt(3) tuning function.
 2307 .TP
 2308 .B \-\-malloc\-touch
 2309 touch every allocated page to force pages to be populated in memory. This will
 2310 increase the memory pressure and exercise the virtual memory harder. By default
 2311 the malloc stressor will madvise pages into memory or use mincore to check for
 2312 non-resident memory pages and try to force them into memory; this option
 2313 aggressively forces pages to be memory resident.
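.RS
For example, an illustrative invocation (worker count, allocation size,
slot count and pthread count are arbitrary example values) that runs four
malloc workers, each with two helper pthreads, allocations of up to 1 MB,
up to 4096 active allocations and page touching enabled:
.sp 1
.nf
stress\-ng \-\-malloc 4 \-\-malloc\-bytes 1m \-\-malloc\-max 4096 \e
     \-\-malloc\-pthreads 2 \-\-malloc\-touch \-t 60s
.fi
.RE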
 2314 .TP
 2315 .B \-\-matrix N
 2316 start N workers that perform various matrix operations on floating point
 2317 values. Testing on 64 bit x86 hardware shows that this provides a good
 2318 mix of memory, cache and floating point operations and is an excellent way
 2319 to make a CPU run hot.
 2320 
 2321 By default, this will exercise all the matrix stress methods one by
 2322 one.  One can specify a specific matrix stress method with the
 2323 \-\-matrix\-method option.
 2324 .TP
 2325 .B \-\-matrix\-ops N
 2326 stop matrix stress workers after N bogo operations.
 2327 .TP
 2328 .B \-\-matrix\-method method
 2329 specify a matrix stress method. Available matrix stress methods are described
 2330 as follows:
 2331 .TS
 2332 expand;
 2333 lB2 lB lB lB
 2334 l l s s.
 2335 Method	Description
 2336 all	T{
 2337 iterate over all the below matrix stress methods
 2338 T}
 2339 add	T{
 2340 add two N \(mu N matrices
 2341 T}
 2342 copy	T{
 2343 copy one N \(mu N matrix to another
 2344 T}
 2345 div	T{
 2346 divide an N \(mu N matrix by a scalar
 2347 T}
 2348 frobenius	T{
 2349 Frobenius product of two N \(mu N matrices
 2350 T}
 2351 hadamard	T{
 2352 Hadamard product of two N \(mu N matrices
 2353 T}
 2354 identity	T{
 2355 create an N \(mu N identity matrix
 2356 T}
 2357 mean	T{
 2358 arithmetic mean of two N \(mu N matrices
 2359 T}
 2360 mult	T{
 2361 multiply an N \(mu N matrix by a scalar
 2362 T}
 2363 negate	T{
 2364 negate an N \(mu N matrix
 2365 T}
 2366 prod	T{
 2367 product of two N \(mu N matrices
 2368 T}
 2369 sub	T{
 2370 subtract one N \(mu N matrix from another N \(mu N matrix
 2371 T}
 2372 square	T{
 2373 multiply an N \(mu N matrix by itself
 2374 T}
 2375 trans	T{
 2376 transpose an N \(mu N matrix
 2377 T}
 2378 zero	T{
 2379 zero an N \(mu N matrix
 2380 T}
 2381 .TE
 2382 .TP
 2383 .B \-\-matrix\-size N
 2384 specify the N \(mu N size of the matrices.  Smaller values result in a
 2385 floating point compute throughput bound stressor, whereas large values result
 2386 in a cache and/or memory bandwidth bound stressor.
 2387 .TP
 2388 .B \-\-matrix\-yx
 2389 perform matrix operations in order y by x rather than the default x by y. This
 2390 is suboptimal ordering compared to the default and will perform more data
 2391 cache stalls.
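.RS
For example, an illustrative invocation (worker count, method and matrix
size are arbitrary example values) that runs two matrix workers using the
prod method on 256 \(mu 256 matrices:
.sp 1
.nf
stress\-ng \-\-matrix 2 \-\-matrix\-method prod \-\-matrix\-size 256 \-t 60s
.fi
.RE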
 2392 .TP
 2393 .B \-\-matrix-3d N
 2394 start N workers that perform various 3D matrix operations on floating point
 2395 values. Testing on 64 bit x86 hardware shows that this provides a good
 2396 mix of memory, cache and floating point operations and is an excellent way
 2397 to make a CPU run hot.
 2398 
 2399 By default, this will exercise all the 3D matrix stress methods one by
 2400 one.  One can specify a specific 3D matrix stress method with the
 2401 \-\-matrix\-3d\-method option.
 2402 .TP
 2403 .B \-\-matrix\-3d\-ops N
 2404 stop the 3D matrix stress workers after N bogo operations.
 2405 .TP
 2406 .B \-\-matrix\-3d\-method method
 2407 specify a 3D matrix stress method. Available 3D matrix stress methods are described
 2408 as follows:
 2409 .TS
 2410 expand;
 2411 lB2 lB lB lB
 2412 l l s s.
 2413 Method	Description
 2414 all	T{
 2415 iterate over all the below matrix stress methods
 2416 T}
 2417 add	T{
 2418 add two N \(mu N \(mu N matrices
 2419 T}
 2420 copy	T{
 2421 copy one N \(mu N \(mu N matrix to another
 2422 T}
 2423 div	T{
 2424 divide an N \(mu N \(mu N matrix by a scalar
 2425 T}
 2426 frobenius	T{
 2427 Frobenius product of two N \(mu N \(mu N matrices
 2428 T}
 2429 hadamard	T{
 2430 Hadamard product of two N \(mu N \(mu N matrices
 2431 T}
 2432 identity	T{
 2433 create an N \(mu N \(mu N identity matrix
 2434 T}
 2435 mean	T{
 2436 arithmetic mean of two N \(mu N \(mu N matrices
 2437 T}
 2438 mult	T{
 2439 multiply an N \(mu N \(mu N matrix by a scalar
 2440 T}
 2441 negate	T{
 2442 negate an N \(mu N \(mu N matrix
 2443 T}
 2444 sub	T{
 2445 subtract one N \(mu N \(mu N matrix from another N \(mu N \(mu N matrix
 2446 T}
 2447 trans	T{
 2448 transpose an N \(mu N \(mu N matrix
 2449 T}
 2450 zero	T{
 2451 zero an N \(mu N \(mu N matrix
 2452 T}
 2453 .TE
 2454 .TP
 2455 .B \-\-matrix\-3d\-size N
 2456 specify the N \(mu N \(mu N size of the matrices.  Smaller values result in a
 2457 floating point compute throughput bound stressor, whereas large values result
 2458 in a cache and/or memory bandwidth bound stressor.
 2459 .TP
 2460 .B \-\-matrix\-3d\-zyx
 2461 perform matrix operations in order z by y by x rather than the default
 2462 x by y by z. This is suboptimal ordering compared to the default and will
 2463 perform more data cache stalls.
 2464 .TP
 2465 .B \-\-mcontend N
 2466 start N workers that produce memory contention read/write patterns. Each
 2467 stressor runs with 5 threads that read and write to two different mappings
 2468 of the same underlying physical page. Various caching operations are also
 2469 exercised to cause sub-optimal memory access patterns.  The threads also
 2470 randomly change CPU affinity to exercise CPU and memory migration stress.
 2471 .TP
 2472 .B \-\-mcontend\-ops N
 2473 stop mcontend stressors after N bogo read/write operations.
 2474 .TP
 2475 .B \-\-membarrier N
 2476 start N workers that exercise the membarrier system call (Linux only).
 2477 .TP
 2478 .B \-\-membarrier\-ops N
 2479 stop membarrier stress workers after N bogo membarrier operations.
 2480 .TP
 2481 .B \-\-memcpy N
 2482 start N workers that copy 2MB of data from a shared region to a buffer using
 2483 memcpy(3) and then move the data in the buffer with memmove(3) with 3
 2484 different alignments. This will exercise processor cache and system memory.
 2485 .TP
 2486 .B \-\-memcpy\-ops N
 2487 stop memcpy stress workers after N bogo memcpy operations.
 2488 .TP
 2489 .B \-\-memcpy\-method [ all | libc | builtin | naive ]
 2490 specify a memcpy copying method. Available memcpy methods are described
 2491 as follows:
 2492 .TS
 2493 expand;
 2494 lB2 lB lB lB
 2495 l l s s.
 2496 Method	Description
 2497 all	T{
 2498 use libc, builtin and naive methods
 2499 T}
 2500 libc	T{
 2501 use libc memcpy and memmove functions, this is the default
 2502 T}
 2503 builtin	T{
 2504 use the compiler built in optimized memcpy and memmove functions
 2505 T}
 2506 naive	T{
 2507 use naive byte by byte copying and memory moving built with default
 2508 compiler optimization flags
 2509 T}
 2510 naive_o0	T{
 2511 use unoptimized naive byte by byte copying and memory moving
 2512 T}
 2513 naive_o3	T{
 2514 use optimized naive byte by byte copying and memory moving built with -O3
 2515 optimization and where possible use CPU specific optimizations
 2516 T}
 2517 .TE
 2518 .TP
 2519 .B \-\-memfd N
 2520 start N workers that create allocations of 1024 pages using memfd_create(2)
 2521 and ftruncate(2) for allocation and mmap(2) to map the allocation into the
 2522 process address space.  (Linux only).
 2523 .TP
 2524 .B \-\-memfd\-bytes N
 2525 allocate N bytes per memfd stress worker, the default is 256MB. One can specify
 2526 the size as % of total available memory or in units of Bytes, KBytes, MBytes
 2527 and GBytes using the suffix b, k, m or g.
 2528 .TP
 2529 .B \-\-memfd\-fds N
 2530 create N memfd file descriptors, the default is 256. One can select 8 to 4096
 2531 memfd file descriptors with this option.
 2532 .TP
 2533 .B \-\-memfd\-ops N
 2534 stop after N memfd_create(2) bogo operations.
 2535 .TP
 2536 .B \-\-memhotplug N
 2537 start N workers that offline and online memory hotplug regions. Linux only
 2538 and requires CAP_SYS_ADMIN capabilities.
 2539 .TP
 2540 .B \-\-memhotplug\-ops N
 2541 stop memhotplug stressors after N memory offline and online bogo operations.
 2542 .TP
 2543 .B \-\-memrate N
 2544 start N workers that exercise a buffer with 64, 32, 16 and 8 bit reads and
 2545 writes.  This memory stressor allows one to also specify the maximum read
 2546 and write rates. The stressors will run at maximum speed if no read or
 2547 write rates are specified.
 2548 .TP
 2549 .B \-\-memrate\-ops N
 2550 stop after N bogo memrate operations.
 2551 .TP
 2552 .B \-\-memrate\-bytes N
 2553 specify the size of the memory buffer being exercised. The default size
 2554 is 256MB. One can specify the size in units of Bytes, KBytes, MBytes and
 2555 GBytes using the suffix b, k, m or g.
 2556 .TP
 2557 .B \-\-memrate\-rd\-mbs N
 2558 specify the maximum allowed read rate in MB/sec. The actual read rate
 2559 is dependent on scheduling jitter and memory accesses from other running
 2560 processes.
 2561 .TP
 2562 .B \-\-memrate\-wr\-mbs N
 2563 specify the maximum allowed write rate in MB/sec. The actual write rate
 2564 is dependent on scheduling jitter and memory accesses from other running
 2565 processes.
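.RS
For example, an illustrative invocation (buffer size and rate caps are
arbitrary example values) that caps reads and writes at 1000 MB/sec per
worker on a 512 MB buffer:
.sp 1
.nf
stress\-ng \-\-memrate 2 \-\-memrate\-bytes 512m \-\-memrate\-rd\-mbs 1000 \e
     \-\-memrate\-wr\-mbs 1000 \-t 60s
.fi
.RE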
 2566 .TP
 2567 .B \-\-memthrash N
 2568 start N workers that thrash and exercise a 16MB buffer in various ways to
 2569 try and trip thermal overrun.  Each stressor will start 1 or more threads.
 2570 The number of threads is chosen so that there will be at least 1 thread
 2571 per CPU. Note that the optimal choice for N is a value that divides into
 2572 the number of CPUs.
 2573 .TP
 2574 .B \-\-memthrash-ops N
 2575 stop after N memthrash bogo operations.
 2576 .TP
 2577 .B \-\-memthrash\-method method
 2578 specify a memthrash stress method. Available memthrash stress methods are described
 2579 as follows:
 2580 .TS
 2581 expand;
 2582 lB2 lB lB lB
 2583 l l s s.
 2584 Method	Description
 2585 all	T{
 2586 iterate over all the below memthrash methods
 2587 T}
 2588 chunk1	T{
 2589 memset 1 byte chunks of random data into random locations
 2590 T}
 2591 chunk8	T{
 2592 memset 8 byte chunks of random data into random locations
 2593 T}
 2594 chunk64	T{
 2595 memset 64 byte chunks of random data into random locations
 2596 T}
 2597 chunk256	T{
 2598 memset 256 byte chunks of random data into random locations
 2599 T}
 2600 chunkpage	T{
 2601 memset page size chunks of random data into random locations
 2602 T}
 2603 flip	T{
 2604 flip (invert) all bits in random locations
 2605 T}
 2606 flush	T{
 2607 flush cache line in random locations
 2608 T}
 2609 lock	T{
 2610 lock randomly choosing locations (Intel x86 and ARM CPUs only)
 2611 T}
 2612 matrix	T{
 2613 treat memory as a 2 \(mu 2 matrix and swap random elements
 2614 T}
 2615 memmove	T{
 2616 copy all the data in buffer to the next memory location
 2617 T}
 2618 memset	T{
 2619 memset the memory with random data
 2620 T}
 2621 mfence	T{
 2622 stores with write serialization
 2623 T}
 2624 prefetch	T{
 2625 prefetch data at random memory locations
 2626 T}
 2627 random	T{
 2628 randomly run any of the memthrash methods except for 'random' and 'all'
 2629 T}
 2630 spinread	T{
 2631 spin loop read the same random location 2^19 times
 2632 T}
 2633 spinwrite	T{
 2634 spin loop write the same random location 2^19 times
 2635 T}
 2636 swap	T{
 2637 step through memory swapping bytes in steps of 65 and 129 byte strides
 2638 T}
 2639 .TE
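.RS
For example, an illustrative invocation (worker count and method are
arbitrary example values; on a 4 CPU system a worker count of 4 divides
evenly into the number of CPUs):
.sp 1
.nf
stress\-ng \-\-memthrash 4 \-\-memthrash\-method flip \-t 60s
.fi
.RE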
 2640 .TP
 2641 .B -\-mergesort N
 2642 start N workers that sort 32 bit integers using the BSD mergesort.
 2643 .TP
 2644 .B \-\-mergesort\-ops N
 2645 stop mergesort stress workers after N bogo mergesorts.
 2646 .TP
 2647 .B \-\-mergesort\-size N
 2648 specify number of 32 bit integers to sort, default is 262144 (256 \(mu 1024).
 2649 .TP
 2650 .B \-\-mincore N
 2651 start N workers that walk through all of memory 1 page at a time checking if
 2652 the page is mapped and also resident in memory using mincore(2). It also
 2653 maps and unmaps a page to check if the page is mapped or not using mincore(2).
 2654 .TP
 2655 .B \-\-mincore\-ops N
 2656 stop after N mincore bogo operations. One mincore bogo op is equivalent to
 2657 300 mincore(2) calls.
 2658 .TP
 2659 .B \-\-mincore\-random
 2660 instead of walking through pages sequentially, select pages at random. The
 2661 chosen address is iterated over by shifting it right one place and checked by
 2662 mincore until the address is less or equal to the page size.
 2663 .TP
 2664 .B \-\-misaligned N
 2665 start N workers that perform misaligned reads and writes. By default, this
 2666 will exercise 128 bit misaligned reads and writes in 8 x 16 bits, 4 x 32 bits,
 2667 2 x 64 bits and 1 x 128 bits at the start of a page boundary, at the end
 2668 of a page boundary and over a cache boundary. Misaligned reads and writes
 2669 operate at a 1 byte offset from the natural alignment of the data
 2670 type. On some architectures this can cause SIGBUS, SIGILL or SIGSEGV; these are
 2671 handled and the misaligned stressor method causing the error is disabled.
 2672 .TP
 2673 .B \-\-misaligned\-ops N
 2674 stop after N misaligned bogo operations. A misaligned bogo op is equivalent
 2675 to 65536 x 128 bit reads or writes.
 2676 .TP
 2677 .B \-\-misaligned\-method M
 2678 Available misaligned stress methods are described as follows:
 2679 .TS
 2680 expand;
 2681 lB2 lB lB lB
 2682 l l s s.
 2683 Method	Description
 2684 all	iterate over all the following misaligned methods
 2685 int16rd	8 x 16 bit integer reads
 2686 int16wr	8 x 16 bit integer writes
 2687 int16inc	8 x 16 bit integer increments
 2688 int16atomic	8 x 16 bit atomic integer increments
 2689 int32rd	4 x 32 bit integer reads
 2690 int32wr	4 x 32 bit integer writes
 2691 int32inc	4 x 32 bit integer increments
 2692 int32atomic	4 x 32 bit atomic integer increments
 2693 int64rd	2 x 64 bit integer reads
 2694 int64wr	2 x 64 bit integer writes
 2695 int64inc	2 x 64 bit integer increments
 2696 int64atomic	2 x 64 bit atomic integer increments
 2697 int128rd	1 x 128 bit integer reads
 2698 int128wr	1 x 128 bit integer writes
 2699 int128inc	1 x 128 bit integer increments
 2700 int128atomic	1 x 128 bit atomic integer increments
 2701 .TE
 2702 .PP
 2703 Note that some of these options (128 bit integer and/or atomic operations) may
 2704 not be available on some systems.
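.RS
.PP
The kind of access the int16rd/int16wr methods perform can be sketched in
C as follows (illustrative only, not the stress\-ng implementation; on
strict alignment architectures such an access may raise SIGBUS, as noted
above):
.nf
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* 64 bit aligned backing store, accessed 1 byte off alignment */
    static uint64_t buffer[16];
    uint16_t *ptr16 = (uint16_t *)((uint8_t *)buffer + 1);
    int i;

    for (i = 0; i < 8; i++)         /* 8 x 16 bit misaligned writes */
        ptr16[i] = (uint16_t)i;
    for (i = 0; i < 8; i++)         /* 8 x 16 bit misaligned reads */
        printf("%u ", (unsigned int)ptr16[i]);
    printf("\en");
    return EXIT_SUCCESS;
}
.fi
.RE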
 2705 .TP
 2706 .B \-\-mknod N
 2707 start N workers that create and remove fifos, empty files and named sockets
 2708 using mknod and unlink.
 2709 .TP
 2710 .B \-\-mknod\-ops N
 2711 stop mknod stress workers after N bogo mknod operations.
 2712 .TP
 2713 .B \-\-mlock N
 2714 start N workers that lock and unlock memory mapped pages using mlock(2),
 2715 munlock(2), mlockall(2) and munlockall(2). This is achieved by the mapping of
 2716 three contiguous pages and then locking the second page, hence ensuring
 2717 non-contiguous pages are locked. This is then repeated until the maximum
 2718 allowed mlocks or a maximum of 262144 mappings are made.  Next, all future
 2719 mappings are mlocked and the worker attempts to map 262144 pages, then all
 2720 pages are munlocked and the pages are unmapped.
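.RS
.PP
The non\-contiguous locking described above boils down to mapping three
pages and locking only the middle one; a minimal C sketch (illustrative
only, error handling trimmed):
.nf
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const size_t page = (size_t)sysconf(_SC_PAGESIZE);
    unsigned char *map;

    /* three contiguous pages... */
    map = mmap(NULL, 3 * page, PROT_READ | PROT_WRITE,
               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (map == MAP_FAILED)
        return EXIT_FAILURE;

    /* ...lock just the middle one; repeating this over many such
       mappings ensures the locked pages are non-contiguous */
    if (mlock(map + page, page) < 0) {
        (void)munmap(map, 3 * page);
        return EXIT_FAILURE;
    }
    (void)munlock(map + page, page);
    (void)munmap(map, 3 * page);
    return EXIT_SUCCESS;
}
.fi
.RE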
 2721 .TP
 2722 .B \-\-mlock\-ops N
 2723 stop after N mlock bogo operations.
 2724 .TP
 2725 .B \-\-mlockmany N
 2726 start N workers that fork off a default of 1024 child processes in total;
 2727 each child will attempt to anonymously mmap and mlock the maximum allowed
 2728 mlockable memory size.  The stress test attempts to avoid swapping by
 2729 tracking low memory and swap allocations (but some swapping may occur). Once
 2730 either the maximum number of child processes is reached or all mlockable in-core
 2731 memory is locked then child processes are killed and the stress test is
 2732 repeated.
 2733 .TP
 2734 .B \-\-mlockmany\-ops N
 2735 stop after N mlockmany (mmap and mlock) operations.
 2736 .TP
 2737 .B \-\-mlockmany\-procs N
 2738 set the number of child processes to create per stressor. The default is to
 2739 start a maximum of 1024 child processes in total across all the stressors. This
 2740 option allows the setting of N child processes per stressor.
 2741 .TP
 2742 .B \-\-mmap N
 2743 start N workers continuously calling mmap(2)/munmap(2).  The initial mapping
 2744 is a large chunk (size specified by \-\-mmap\-bytes) followed by pseudo-random
 2745 4K unmappings, then pseudo-random 4K mappings, and then linear 4K unmappings.
 2746 Note that this can cause systems to trip the kernel OOM killer on Linux
 2747 systems if there is not enough physical memory and swap available.  The
 2748 MAP_POPULATE option is used to populate pages into memory on systems that
 2749 support this.  By default, anonymous mappings are used, however, the
 2750 \-\-mmap\-file and \-\-mmap\-async options allow one to perform file based
 2751 mappings if desired.
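.RS
.PP
The map\-then\-punch\-holes pattern can be sketched as follows (a
simplified, illustrative C example, not the stress\-ng implementation;
the 256 page mapping size is arbitrary):
.nf
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

#ifndef MAP_POPULATE
#define MAP_POPULATE 0              /* not supported everywhere */
#endif

int main(void)
{
    const size_t page = (size_t)sysconf(_SC_PAGESIZE);
    const size_t pages = 256;
    unsigned char *map;
    size_t i;

    map = mmap(NULL, pages * page, PROT_READ | PROT_WRITE,
               MAP_PRIVATE | MAP_ANONYMOUS | MAP_POPULATE, -1, 0);
    if (map == MAP_FAILED)
        return EXIT_FAILURE;

    /* unmap pseudo-randomly chosen pages, leaving holes behind */
    for (i = 0; i < pages / 2; i++)
        (void)munmap(map + ((size_t)rand() % pages) * page, page);

    /* then unmap whatever is left, page by page */
    for (i = 0; i < pages; i++)
        (void)munmap(map + i * page, page);
    return EXIT_SUCCESS;
}
.fi
.RE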
 2752 .TP
 2753 .B \-\-mmap\-ops N
 2754 stop mmap stress workers after N bogo operations.
 2755 .TP
 2756 .B \-\-mmap\-async
 2757 enable file based memory mapping and use asynchronous msync'ing on each page,
 2758 see \-\-mmap\-file.
 2759 .TP
 2760 .B \-\-mmap\-bytes N
 2761 allocate N bytes per mmap stress worker, the default is 256MB. One can specify
 2762 the size as % of total available memory or in units of Bytes, KBytes, MBytes
 2763 and GBytes using the suffix b, k, m or g.
 2764 .TP
 2765 .B \-\-mmap\-file
 2766 enable file based memory mapping and by default use synchronous msync'ing on
 2767 each page.
 2768 .TP
 2769 .B \-\-mmap\-mmap2
 2770 use mmap2 for 4K page aligned offsets if mmap2 is available, otherwise fall back
 2771 to mmap.
 2772 .TP
 2773 .B \-\-mmap\-mprotect
 2774 change protection settings on each page of memory.  Each time a page or a
 2775 group of pages are mapped or remapped then this option will make the pages
 2776 read-only, write-only, exec-only, and read-write.
 2777 .TP
 2778 .B \-\-mmap\-odirect
 2779 enable file based memory mapping and use O_DIRECT direct I/O.
 2780 .TP
 2781 .B \-\-mmap\-osync
 2782 enable file based memory mapping and use O_SYNC synchronous I/O
 2783 integrity completion.
 2784 .TP
 2785 .B \-\-mmapaddr N
 2786 start N workers that memory map pages at a random memory location that is
 2787 not already mapped.  On 64 bit machines the random address is a randomly
 2788 chosen 32 bit or 64 bit address. If the mapping works, a second page is
 2789 memory mapped from the first mapped address. The stressor exercises
 2790 mmap/munmap, mincore and segfault handling.
 2791 .TP
 2792 .B \-\-mmapaddr\-ops N
 2793 stop after N random address mmap bogo operations.
 2794 .TP
 2795 .B \-\-mmapfork N
 2796 start N workers that each fork off 32 child processes, each of which tries to
 2797 allocate some of the free memory left in the system (trying to avoid
 2798 any swapping).  The child processes then hint that the allocation will be
 2799 needed with madvise(2) and then memset it to zero and hint that it is no longer
 2800 needed with madvise before exiting.  This produces significant amounts of VM
 2801 activity, a lot of cache misses and minimal swapping.
 2802 .TP
 2803 .B \-\-mmapfork\-ops N
 2804 stop after N mmapfork bogo operations.
 2805 .TP
 2806 .B \-\-mmapfixed N
 2807 start N workers that perform fixed address allocations from the top virtual
 2808 address down to 128K.  The allocated sizes are from 1 page to 8 pages and
 2809 various random mmap flags are used: MAP_SHARED/MAP_PRIVATE, MAP_LOCKED,
 2810 MAP_NORESERVE and MAP_POPULATE. If successfully mapped then the allocation
 2811 is remapped to an address that is several pages higher in memory. Mappings
 2812 and remappings are madvised with random madvise options to further exercise
 2813 the mappings.
 2814 .TP
 2815 .B \-\-mmapfixed\-ops N
 2816 stop after N mmapfixed memory mapping bogo operations.
 2817 .TP
 2818 .B \-\-mmaphuge N
 2819 start N workers that attempt to mmap a set of huge pages and large huge
 2820 page sized mappings. Successful mappings are madvised with MADV_NOHUGEPAGE
 2821 and MADV_HUGEPAGE settings and then 1/64th of the normal small page size pages
 2822 are touched. Finally, an attempt to unmap a small page size page at the
 2823 end of the mapping is made (these may fail on huge pages) before the set
 2824 of pages are unmapped. By default 8192 mappings are attempted per round
 2825 of mappings or until swapping is detected.
 2826 .TP
 2827 .B \-\-mmaphuge\-ops N
 2828 stop after N mmaphuge bogo operations
 2829 .TP
 2830 .B \-\-mmaphuge\-mmaps N
 2831 set the number of huge page mappings to attempt in each round of mappings. The
 2832 default is 8192 mappings.
 2833 .TP
 2834 .B \-\-mmapmany N
 2835 start N workers that attempt to create the maximum allowed per-process memory
 2836 mappings. This is achieved by mapping 3 contiguous pages and then unmapping the
 2837 middle page hence splitting the mapping into two. This is then repeated until
 2838 the maximum allowed mappings or a maximum of 262144 mappings are made.
 2839 .TP
 2840 .B \-\-mmapmany\-ops N
 2841 stop after N mmapmany bogo operations
 2842 .TP
 2843 .B \-\-mq N
 2844 start N sender and receiver processes that continually send and receive
 2845 messages using POSIX message queues. (Linux only).
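.RS
.PP
A minimal POSIX message queue round trip looks like the following C
sketch (illustrative only, not the stress\-ng implementation; the queue
name /mq_example is arbitrary and the program may need to be linked with
\-lrt on older systems):
.nf
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    struct mq_attr attr;
    char msg[64] = "hello", buf[64];
    unsigned int prio;
    mqd_t mq;

    memset(&attr, 0, sizeof(attr));
    attr.mq_maxmsg = 10;            /* cf. the default --mq-size */
    attr.mq_msgsize = sizeof(msg);

    mq = mq_open("/mq_example", O_CREAT | O_RDWR, 0600, &attr);
    if (mq == (mqd_t)-1)
        return EXIT_FAILURE;

    (void)mq_send(mq, msg, sizeof(msg), 1);
    if (mq_receive(mq, buf, sizeof(buf), &prio) >= 0)
        printf("received '%s' (priority %u)\en", buf, prio);

    (void)mq_close(mq);
    (void)mq_unlink("/mq_example");
    return EXIT_SUCCESS;
}
.fi
.RE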
 2846 .TP
 2847 .B \-\-mq\-ops N
 2848 stop after N bogo POSIX message send operations completed.
 2849 .TP
 2850 .B \-\-mq\-size N
 2851 specify size of POSIX message queue. The default size is 10 messages and on most
 2852 Linux systems this is the maximum allowed size for normal users. If the given
 2853 size is greater than the allowed message queue size then a warning is issued
 2854 and the maximum allowed size is used instead.
 2855 .TP
 2856 .B \-\-mremap N
 2857 start N workers continuously calling mmap(2), mremap(2) and munmap(2).  The
 2858 initial anonymous mapping is a large chunk (size specified by
 2859 \-\-mremap\-bytes) and then iteratively halved in size by remapping all the
 2860 way down to a page size and then back up to the original size.  This worker
 2861 is only available for Linux.
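.RS
.PP
The halve\-and\-grow remapping can be sketched in C as follows
(illustrative only, not the stress\-ng implementation; a 1024 page
initial mapping is used here rather than the \-\-mremap\-bytes default):
.nf
#define _GNU_SOURCE
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const size_t page = (size_t)sysconf(_SC_PAGESIZE);
    const size_t max = 1024 * page;
    size_t size = max;
    void *map;

    map = mmap(NULL, size, PROT_READ | PROT_WRITE,
               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (map == MAP_FAILED)
        return EXIT_FAILURE;

    /* iteratively halve the mapping down to a single page... */
    while (size > page) {
        map = mremap(map, size, size / 2, MREMAP_MAYMOVE);
        if (map == MAP_FAILED)
            return EXIT_FAILURE;
        size /= 2;
    }
    /* ...then grow it back up to the original size */
    map = mremap(map, size, max, MREMAP_MAYMOVE);
    if (map == MAP_FAILED)
        return EXIT_FAILURE;
    (void)munmap(map, max);
    return EXIT_SUCCESS;
}
.fi
.RE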
 2862 .TP
 2863 .B \-\-mremap\-ops N
 2864 stop mremap stress workers after N bogo operations.
 2865 .TP
 2866 .B \-\-mremap\-bytes N
 2867 initially allocate N bytes per remap stress worker, the default is 256MB. One
 2868 can specify the size in units of Bytes, KBytes, MBytes and GBytes using the
 2869 suffix b, k, m or g.
 2870 .TP
 2871 .B \-\-mremap\-mlock
 2872 attempt to mlock remapped pages into memory prohibiting them from being
 2873 paged out.  This is a no-op if mlock(2) is not available.
 2874 .TP
 2875 .B \-\-msg N
 2876 start N sender and receiver processes that continually send and receive
 2877 messages using System V message IPC.
 2878 .TP
 2879 .B \-\-msg\-ops N
 2880 stop after N bogo message send operations completed.
 2881 .TP
 2882 .B \-\-msg\-types N
 2883 select the number of message types (mtype) to use. By default, msgsnd sends
 2884 messages with an mtype of 1; this option allows one to send message types
 2885 in the range 1..N to exercise the message queue receive ordering. This will
 2886 also impact throughput performance.
 2887 .TP
 2888 .B \-\-msync N
 2889 start N stressors that msync data from a file backed memory mapping from
 2890 memory back to the file and msync modified data from the file back to the
 2891 mapped memory. This exercises the msync(2) MS_SYNC and MS_INVALIDATE sync
 2892 operations.
 2893 .TP
 2894 .B \-\-msync\-ops N
 2895 stop after N msync bogo operations completed.
 2896 .TP
 2897 .B \-\-msync\-bytes N
 2898 allocate N bytes for the memory mapped file, the default is 256MB. One
 2899 can specify the size as % of total available memory or in units of Bytes,
 2900 KBytes, MBytes and GBytes using the suffix b, k, m or g.
 2901 .TP
 2902 .B \-\-munmap N
 2903 start N stressors that exercise unmapping of shared non-executable mapped
 2904 regions of child processes (Linux only). The shared memory regions are unmapped page
 2905 by page with a prime sized stride that creates many temporary mapping holes.
 2906 Once the unmappings are complete the child will exit and a new one is started.
 2907 Note that this may trigger segmentation faults in the child process, these
 2908 are handled where possible by forcing the child process to call _exit(2).
 2909 .TP
 2910 .B \-\-munmap\-ops N
 2911 stop after N page unmappings.
 2912 .TP
 2913 .B \-\-nanosleep N
 2914 start N workers that each run 256 pthreads that call nanosleep with random
 2915 delays from 1 to 2^18 nanoseconds. This should exercise the high resolution
 2916 timers and scheduler.
 2917 .TP
 2918 .B \-\-nanosleep\-ops N
 2919 stop the nanosleep stressor after N bogo nanosleep operations.
 2920 .TP
 2921 .B \-\-netdev N
 2922 start N workers that exercise various netdevice ioctl commands across
 2923 all the available network devices. The ioctls exercised by this stressor
 2924 are as follows: SIOCGIFCONF, SIOCGIFINDEX, SIOCGIFNAME, SIOCGIFFLAGS,
 2925 SIOCGIFADDR, SIOCGIFNETMASK, SIOCGIFMETRIC, SIOCGIFMTU, SIOCGIFHWADDR,
 2926 SIOCGIFMAP and SIOCGIFTXQLEN. See netdevice(7) for more details of these
 2927 ioctl commands.
 2928 .TP
 2929 .B \-\-netdev\-ops N
 2930 stop after N netdev bogo operations completed.
 2931 .TP
 2932 .B \-\-netlink\-proc N
 2933 start N workers that spawn child processes and monitor fork/exec/exit
 2934 process events via the proc netlink connector. Each event received is counted
 2935 as a bogo op. This stressor can only be run on Linux and requires
 2936 CAP_NET_ADMIN capability.
 2937 .TP
 2938 .B \-\-netlink\-proc\-ops N
 2939 stop the proc netlink connector stressors after N bogo ops.
 2940 .TP
 2941 .B \-\-netlink\-task N
 2942 start N workers that collect task statistics via the netlink taskstats
 2943 interface.  This stressor can only be run on Linux and requires
 2944 CAP_NET_ADMIN capability.
 2945 .TP
 2946 .B \-\-netlink\-task\-ops N
 2947 stop the taskstats netlink connector stressors after N bogo ops.
 2948 .TP
 2949 .B \-\-nice N
 2950 start N cpu consuming workers that exercise the available nice levels. Each
 2951 iteration forks off a child process that runs through all the nice levels
 2952 running a busy loop for 0.1 seconds per level and then exits.
 2953 .TP
 2954 .B \-\-nice\-ops N
 2955 stop after N bogo nice loops.
 2956 .TP
 2957 .B \-\-nop N
 2958 start N workers that consume cpu cycles issuing no-op instructions. This
 2959 stressor is available if the assembler supports the "nop" instruction.
 2960 .TP
 2961 .B \-\-nop\-ops N
 2962 stop nop workers after N no-op bogo operations. Each bogo-operation is
 2963 equivalent to 256 loops of 256 no-op instructions.
 2964 .TP
 2965 .B \-\-nop\-instr INSTR
 2966 use alternative nop instruction INSTR. For x86 CPUs INSTR can be one
 2967 of nop, pause, nop2 (2 byte nop) through to nop11 (11 byte nop). For
 2968 ARM CPUs, INSTR can be one of nop and yield. For other processors, INSTR
 2969 is only nop. If the chosen INSTR generates a SIGILL signal, then the
 2970 stressor falls back to the vanilla nop instruction.
 2971 .TP
 2972 .B \-\-null N
 2973 start N workers writing to /dev/null.
 2974 .TP
 2975 .B \-\-null\-ops N
 2976 stop null stress workers after N /dev/null bogo write operations.
 2977 .TP
 2978 .B \-\-numa N
 2979 start N workers that migrate stressors and a 4MB memory mapped buffer around
 2980 all the available NUMA nodes.  This uses migrate_pages(2) to move the stressors
 2981 and mbind(2) and move_pages(2) to move the pages of the mapped buffer. After
 2982 each move, the buffer is written to force activity over the bus which results in
 2983 cache misses.  This test will only run on hardware with NUMA enabled and more
 2984 than 1 NUMA node.
 2985 .TP
 2986 .B \-\-numa\-ops N
 2987 stop NUMA stress workers after N bogo NUMA operations.
 2988 .TP
 2989 .B \-\-oom\-pipe N
 2990 start N workers that create as many pipes as allowed and exercise expanding
 2991 and shrinking the pipes from the largest pipe size down to a page size. Data
 2992 is written into the pipes and read out again to fill the pipe buffers. With
 2993 the \-\-aggressive mode enabled the data is not read out when the pipes are
 2994 shrunk, causing the kernel to OOM processes aggressively.  Running many
 2995 instances of this stressor will force the kernel to OOM processes due to the
 2996 many large pipe buffer allocations.
 2997 .TP
 2998 .B \-\-oom\-pipe\-ops N
 2999 stop after N bogo pipe expand/shrink operations.
 3000 .TP
 3001 .B \-\-opcode N
 3002 start N workers that fork off children that execute randomly generated
 3003 executable code.  This will generate issues such as illegal instructions,
 3004 bus errors, segmentation faults, traps and floating point errors that are
 3005 handled gracefully by the stressor.
 3006 .TP
 3007 .B \-\-opcode\-ops N
 3008 stop after N attempts to execute illegal code.
 3009 .TP
 3010 .B \-\-opcode\-method [ inc | mixed | random | text ]
 3011 select the opcode generation method.  By default, random bytes are used to
 3012 generate the executable code. This option allows one to select one of the
 3013 following methods:
 3014 .TS
 3015 expand;
 3016 lBw(8n) lB lB
 3017 l l s.
 3018 Method	Description
 3019 inc	T{
 3020 use incrementing 32 bit opcode patterns from 0x00000000 to 0xffffffff inclusive.
 3021 T}
 3022 mixed	T{
 3023 use a mix of incrementing 32 bit opcode patterns and random 32 bit opcode patterns that
 3024 are also inverted, encoded with gray encoding and bit reversed.
 3025 T}
 3026 random	T{
 3027 generate opcodes using random bytes from a mwc random generator.
 3028 T}
 3029 text	T{
 3030 copies random chunks of code from the stress-ng text segment and randomly flips
 3031 single bits in a random choice of 1/8th of the code.
 3032 T}
 3033 .TE
 3034 .TP
 3035 .B \-o N, \-\-open N
 3036 start N workers that perform open(2) and then close(2) operations on
 3037 /dev/zero. The maximum opens at one time is system defined, so the test will
 3038 run up to this maximum, or 65536 open file descriptors, whichever comes first.
 3039 .TP
 3040 .B \-\-open\-ops N
 3041 stop the open stress workers after N bogo open operations.
 3042 .TP
 3043 .B \-\-open\-fd
 3044 run a child process that scans /proc/$PID/fd and attempts to open the files
 3045 that the stressor has opened. This exercises racing open/close operations
 3046 on the proc interface.
 3047 .TP
 3048 .B \-\-pci N
 3049 exercise PCI sysfs by running N workers that read data (and mmap/unmap
 3050 PCI config or PCI resource files). Linux only. Running as root will allow
 3051 config and resource mmappings to be read and exercises PCI I/O mapping.
 3052 .TP
 3053 .B \-\-pci\-ops N
 3054 stop pci stress workers after N PCI subdirectory exercising operations.
 3055 .TP
 3056 .B \-\-personality N
 3057 start N workers that attempt to set personality and get all the available
 3058 personality types (process execution domain types) via the personality(2)
 3059 system call. (Linux only).
 3060 .TP
 3061 .B \-\-personality\-ops N
 3062 stop personality stress workers after N bogo personality operations.
 3063 .TP
 3064 .B \-\-physpage N
 3065 start N workers that use /proc/self/pagemap and /proc/kpagecount to determine
 3066 the physical page and page count of a virtual mapped page and a page that is
 3067 shared among all the stressors. Linux only and requires the CAP_SYS_ADMIN
 3068 capability.
 3069 .TP
 3070 .B \-\-physpage\-ops N
 3071 stop physpage stress workers after N bogo physical address lookups.
 3072 .TP
 3073 .B \-\-pidfd N
 3074 start N workers that exercise signal sending via the pidfd_send_signal system call.
 3075 This stressor creates child processes and checks if they exist and can be
 3076 stopped, restarted and killed using the pidfd_send_signal system call.
 3077 .TP
 3078 .B \-\-pidfd\-ops N
 3079 stop pidfd stress workers after N child processes have been created, tested
 3080 and killed with pidfd_send_signal.
 3081 .TP
 3082 .B \-\-ping\-sock N
 3083 start N workers that send small randomized ICMP messages to the localhost
 3084 across a range of ports (1024..65535) using a "ping" socket with an AF_INET
 3085 domain, a SOCK_DGRAM socket type and an IPPROTO_ICMP protocol.
 3086 .TP
 3087 .B \-\-ping\-sock\-ops N
 3088 stop the ping\-sock stress workers after N ICMP messages are sent.
 3089 .TP
 3090 .B \-p N, \-\-pipe N
 3091 start N workers that perform large pipe writes and reads to exercise pipe I/O.
 3092 This exercises memory write and reads as well as context switching.  Each
 3093 worker has two processes, a reader and a writer.
 3094 .TP
 3095 .B \-\-pipe\-ops N
 3096 stop pipe stress workers after N bogo pipe write operations.
 3097 .TP
 3098 .B \-\-pipe\-data\-size N
 3099 specifies the size in bytes of each write to the pipe (range from 4 bytes
 3100 to 4096 bytes). Setting a small data size will cause more writes to be
 3101 buffered in the pipe, hence reducing the context switch rate between the
 3102 pipe writer and pipe reader processes. Default size is the page size.
 3103 .TP
 3104 .B \-\-pipe\-size N
 3105 specifies the size of the pipe in bytes (for systems that support the
 3106 F_SETPIPE_SZ fcntl() command). Setting a small pipe size will cause the pipe
 3107 to fill and block more frequently, hence increasing the context switch rate
 3108 between the pipe writer and the pipe reader processes. Default size is 512
 3109 bytes.
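.RS
.PP
Pipe resizing uses the fcntl(2) F_SETPIPE_SZ command; a minimal
Linux\-only C sketch (illustrative only, not the stress\-ng
implementation; note the kernel rounds the requested size up to at
least a page):
.nf
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    int fds[2];

    if (pipe(fds) < 0)
        return EXIT_FAILURE;

    /* a small pipe fills and blocks more often, which drives up the
       context switch rate between the reader and the writer */
    if (fcntl(fds[1], F_SETPIPE_SZ, 4096) < 0)
        perror("F_SETPIPE_SZ");
    printf("pipe size is now %d bytes\en", fcntl(fds[1], F_GETPIPE_SZ));

    (void)close(fds[0]);
    (void)close(fds[1]);
    return EXIT_SUCCESS;
}
.fi
.RE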
 3110 .TP
 3111 .B \-\-pipeherd N
 3112 start N workers that pass a 64 bit token counter to/from 100 child processes
 3113 over a shared pipe. This forces a high context switch rate and can trigger
 3114 a "thundering herd" of wakeups on processes that are blocked on pipe waits.
 3115 .TP
 3116 .B \-\-pipeherd\-ops N
 3117 stop pipeherd stress workers after N bogo pipe write operations.
 3118 .TP
 3119 .B \-\-pipeherd\-yield
 3120 force a scheduling yield after each write, this increases the context
 3121 switch rate.
 3122 .TP
 3123 .B \-\-pkey N
 3124 start N workers that change memory protection using a protection key (pkey) and
 3125 the pkey_mprotect call (Linux only). This will try to allocate a pkey and
 3126 use this for the page protection, however, if this fails then the special
 3127 pkey -1 will be used (and the kernel will use the normal mprotect mechanism
 3128 instead).  Various page protection mixes of read/write/exec/none will
 3129 be cycled through on randomly chosen pre-allocated pages.
 3130 .TP
 3131 .B \-\-pkey\-ops N
 3132 stop after N pkey_mprotect page protection cycles.
 3133 .TP
 3134 .B \-P N, \-\-poll N
 3135 start N workers that perform zero timeout polling via the poll(2), ppoll(2),
 3136 select(2), pselect(2) and sleep(3) calls. This wastes system and user time
 3137 doing nothing.
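.RS
.PP
A zero timeout poll returns immediately whether or not any file
descriptor is ready, which is what makes this stressor spin; a minimal C
sketch (illustrative only; the loop count is arbitrary):
.nf
#include <poll.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    struct pollfd pfd;
    int i;

    pfd.fd = STDIN_FILENO;
    pfd.events = POLLIN;

    for (i = 0; i < 1000000; i++) {
        pfd.revents = 0;
        /* a timeout of 0 makes poll return at once, so the loop
           burns user and system time doing no useful work */
        if (poll(&pfd, 1, 0) < 0)
            return EXIT_FAILURE;
    }
    printf("completed 1000000 zero timeout polls\en");
    return EXIT_SUCCESS;
}
.fi
.RE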
 3138 .TP
 3139 .B \-\-poll\-ops N
 3140 stop poll stress workers after N bogo poll operations.
 3141 .TP
 3142 .B \-\-poll\-fds N
 3143 specify the number of file descriptors to poll/ppoll/select/pselect on.
 3144 The maximum number for select/pselect is limited by FD_SETSIZE and the
 3145 upper maximum is also limited by the maximum number of pipe open descriptors
 3146 allowed.
 3147 .TP
 3148 .B \-\-prctl N
 3149 start N workers that exercise the majority of the prctl(2) system call
 3150 options. Each batch of prctl calls is performed inside a new child process
 3151 to ensure the limit of prctl is contained inside a new process every time.
 3152 Some prctl options are architecture specific, however, this stressor will
 3153 exercise these even if they are not implemented.
 3154 .TP
 3155 .B \-\-prctl\-ops N
 3156 stop prctl workers after N batches of prctl calls
 3157 .TP
 3158 .B \-\-prefetch N
 3159 start N workers that benchmark prefetch and non-prefetch reads of an L3
 3160 cache sized buffer. The buffer is read with loops of 8 \(mu 64 bit reads
 3161 per iteration. In the prefetch cases, data is prefetched ahead of the
 3162 current read position by various sized offsets, from 64 bytes to 8K
 3163 to find the best memory read throughput. The stressor reports the
 3164 non-prefetch read rate and the best prefetched read rate. It also reports
 3165 the prefetch offset and an estimate of the amount of time between the
 3166 prefetch issue and the actual memory read operation. These statistics
 3167 will vary from run-to-run due to system noise and CPU frequency scaling.
 3168 .TP
 3169 .B \-\-prefetch-ops N
 3170 stop prefetch stressors after N benchmark operations
 3171 .TP
 3172 .B \-\-prefetch-l3-size N
 3173 specify the size of the L3 cache
 3174 .TP
 3175 .B \-\-procfs N
 3176 start N workers that read files from /proc and recursively read files from
 3177 /proc/self (Linux only).
 3178 .TP
 3179 .B \-\-procfs\-ops N
 3180 stop procfs reading after N bogo read operations. Note, since the number of
 3181 entries may vary between kernels, this bogo ops metric is probably very
 3182 misleading.
 3183 .TP
 3184 .B \-\-pthread N
 3185 start N workers that iteratively create and terminate multiple pthreads
 3186 (the default is 1024 pthreads per worker). In each iteration, each newly
 3187 created pthread waits until the worker has created all the pthreads and then
 3188 they all terminate together.
 3189 .TP
 3190 .B \-\-pthread\-ops N
 3191 stop pthread workers after N bogo pthread create operations.
 3192 .TP
 3193 .B \-\-pthread\-max N
 3194 create N pthreads per worker. If the product of the number of pthreads by the
 3195 number of workers is greater than the soft limit of allowed pthreads then the
 3196 maximum is re-adjusted down to the maximum allowed.
 3197 .TP
 3198 .B \-\-ptrace N
 3199 start N workers that fork and trace system calls of a child process using
 3200 ptrace(2).
 3201 .TP
 3202 .B \-\-ptrace\-ops N
 3203 stop ptracer workers after N bogo system calls are traced.
 3204 .TP
 3205 .B \-\-pty N
 3206 start N workers that repeatedly attempt to open pseudoterminals and
 3207 perform various pty ioctls upon the ptys before closing them.
 3208 .TP
 3209 .B \-\-pty\-ops N
 3210 stop pty workers after N pty bogo operations.
 3211 .TP
 3212 .B \-\-pty\-max N
 3213 try to open a maximum of N pseudoterminals, the default is 65536. The allowed
 3214 range of this setting is 8..65536.
 3215 .TP
 3216 .B \-Q, \-\-qsort N
 3217 start N workers that sort 32 bit integers using qsort.
 3218 .TP
 3219 .B \-\-qsort\-ops N
 3220 stop qsort stress workers after N bogo qsorts.
 3221 .TP
 3222 .B \-\-qsort\-size N
 3223 specify number of 32 bit integers to sort, default is 262144 (256 \(mu 1024).
 3224 .TP
 3225 .B \-\-quota N
 3226 start N workers that exercise the Q_GETQUOTA, Q_GETFMT, Q_GETINFO, Q_GETSTATS
 3227 and Q_SYNC quotactl(2) commands on all the available mounted block based file
 3228 systems. Requires CAP_SYS_ADMIN capability to run.
 3229 .TP
 3230 .B \-\-quota\-ops N
 3231 stop quota stress workers after N bogo quotactl operations.
 3232 .TP
 3233 .B \-\-radixsort N
 3234 start N workers that sort random 8 byte strings using radixsort.
 3235 .TP
 3236 .B \-\-radixsort\-ops N
 3237 stop radixsort stress workers after N bogo radixsorts.
 3238 .TP
 3239 .B \-\-radixsort\-size N
 3240 specify number of strings to sort, default is 262144 (256 \(mu 1024).
 3241 .TP
 3242 .B \-\-ramfs N
 3243 start N workers mounting a memory based file system using ramfs and
 3244 tmpfs (Linux only). This alternates between mounting and umounting a
 3245 ramfs or tmpfs file system using the traditional mount(2) and
 3246 umount(2) system call as well as the newer Linux 5.2 fsopen(2),
 3247 fsmount(2), fsconfig(2) and move_mount(2) system calls if they
 3248 are available. The default ram file system size is 2MB.
 3249 .TP
 3250 .B \-\-ramfs\-ops N
 3251 stop after N ramfs mount operations.
 3252 .TP
 3253 .B \-\-ramfs\-size N
 3254 set the ramfs size (must be multiples of the page size).
 3255 .TP
 3256 .B \-\-rawdev N
 3257 start N workers that read the underlying raw drive device using direct
 3258 IO reads. The device (with minor number 0) that stores the current working
 3259 directory is the raw device to be read by the stressor.  The read size is
 3260 exactly the size of the underlying device block size.  By default, this
 3261 stressor will exercise all of the rawdev methods (see the
 3262 \-\-rawdev\-method option). This is a Linux only stressor and requires
 3263 root privilege to be able to read the raw device.
 3264 .TP
 3265 .B \-\-rawdev\-ops N
 3266 stop the rawdev stress workers after N raw device read bogo operations.
 3267 .TP
 3268 .B \-\-rawdev\-method M
 3269 Available rawdev stress methods are described as follows:
 3270 .TS
 3271 expand;
 3272 lB2 lB lB lB
 3273 l l s s.
 3274 Method	Description
 3275 all	T{
 3276 iterate over all the rawdev stress methods as listed below:
 3277 T}
 3278 sweep	T{
 3279 repeatedly read across the raw device from the 0th block to the end block in steps
 3280 of the number of blocks on the device / 128 and back to the start again.
 3281 T}
 3282 wiggle	T{
 3283 repeatedly read across the raw device in 128 evenly sized steps with each step reading
 3284 1024 blocks backwards from each step.
 3285 T}
 3286 ends	T{
 3287 repeatedly read the first 128 and last 128 blocks of the raw device,
 3288 alternating between the start and the end of the device.
 3289 T}
 3290 random	T{
 3291 repeatedly read 256 random blocks
 3292 T}
 3293 burst	T{
 3294 repeatedly read 256 sequential blocks starting from a random block on the raw device.
 3295 T}
 3296 .TE
 3297 .TP
 3298 .B \-\-rawsock N
 3299 start N workers that send and receive packet data using raw sockets on the
 3300 localhost. Requires CAP_NET_RAW to run.
 3301 .TP
 3302 .B \-\-rawsock-ops N
 3303 stop rawsock workers after N packets are received.
 3304 .TP
 3305 .B \-\-rawpkt N
 3306 start N workers that send and receive ethernet packets
 3307 using raw packets on the localhost via the loopback device. Requires
 3308 CAP_NET_RAW to run.
 3309 .TP
 3310 .B \-\-rawpkt\-ops N
 3311 stop rawpkt workers after N packets from the sender process are received.
 3312 .TP
 3313 .B \-\-rawpkt\-port N
 3314 start at port P. For N rawpkt worker processes, ports P to (P * 4) - 1
 3315 are used. The default starting port is port 14000.
 3316 .TP
 3317 .B \-\-rawudp N
 3318 start N workers that send and receive UDP packets using raw sockets on the
 3319 localhost. Requires CAP_NET_RAW to run.
 3320 .TP
 3321 .B \-\-rawudp\-ops N
 3322 stop rawudp workers after N packets are received.
 3323 .TP
 3324 .B \-\-rawudp\-port N
 3325 start at port P. For N rawudp worker processes, ports P to (P * 4) - 1
 3326 are used. The default starting port is port 13000.
 3327 .TP
 3328 .B \-\-rdrand N
 3329 start N workers that read a random number from an on-chip random number generator.
 3330 This uses the rdrand instruction on Intel processors or the darn instruction
 3331 on Power9 processors.
 3332 .TP
 3333 .B \-\-rdrand\-ops N
 3334 stop rdrand stress workers after N bogo rdrand operations (1 bogo op = 2048
 3335 random bits successfully read).
 3336 .TP
 3337 .B \-\-readahead N
 3338 start N workers that randomly seek and perform 4096 byte read/write I/O
 3339 operations on a file with readahead. The default file size is 64 MB.  Readaheads
 3340 and reads are batched into 16 readaheads and then 16 reads.
 3341 .TP
 3342 .B \-\-readahead\-bytes N
 3343 set the size of readahead file, the default is 1 GB. One can specify the size
 3344 as % of free space on the file system or in units of Bytes, KBytes, MBytes and
 3345 GBytes using the suffix b, k, m or g.
 3346 .TP
 3347 .B \-\-readahead\-ops N
 3348 stop readahead stress workers after N bogo read operations.
 3349 .TP
 3350 .B \-\-reboot N
 3351 start N workers that exercise the reboot(2) system call. When possible, it
 3352 will create a process in a PID namespace and perform a reboot power off command
 3353 that should shut down the process.  Also, the stressor exercises invalid
 3354 reboot magic values and invalid reboots when there are insufficient privileges
 3355 that will not actually reboot the system.
 3356 .TP
 3357 .B \-\-reboot\-ops N
 3358 stop the reboot stress workers after N bogo reboot cycles.
 3359 .TP
 3360 .B \-\-remap N
 3361 start N workers that map 512 pages and re-order these pages using the
 3362 deprecated system call remap_file_pages(2). Several page re-orderings are
 3363 exercised: forward, reverse, random and many pages to 1 page.
 3364 .TP
 3365 .B \-\-remap\-ops N
 3366 stop after N remapping bogo operations.
 3367 .TP
 3368 .B \-R N, \-\-rename N
 3369 start N workers that each create a file and then repeatedly rename it.
 3370 .TP
 3371 .B \-\-rename\-ops N
 3372 stop rename stress workers after N bogo rename operations.
 3373 .TP
 3374 .B \-\-resources N
 3375 start N workers that consume various system resources. Each worker will spawn
 3376 1024 child processes that iterate 1024 times consuming shared memory, heap,
 3377 stack, temporary files and various file descriptors (eventfds, memoryfds,
 3378 userfaultfds, pipes and sockets).
 3379 .TP
 3380 .B \-\-resources\-ops N
 3381 stop after N resource child forks.
 3382 .TP
 3383 .B \-\-revio N
 3384 start N workers continually writing in reverse position order to temporary
 3385 files. The default mode is to stress test reverse position ordered writes
 3386 with randomly sized sparse holes between each write.  With
 3387 the \-\-aggressive option enabled without any \-\-revio\-opts options the
 3388 revio stressor will work through all the \-\-revio\-opt options one by one to
 3389 cover a range of I/O options.
 3390 .TP
 3391 .B \-\-revio\-bytes N
 3392 write N bytes for each revio process, the default is 1 GB. One can specify the
 3393 size as % of free space on the file system or in units of Bytes, KBytes, MBytes
 3394 and GBytes using the suffix b, k, m or g.
 3395 .TP
 3396 .B \-\-revio\-opts list
 3397 specify various stress test options as a comma separated list. Options are the
 3398 same as \-\-hdd\-opts but without the iovec option.
 3399 .TP
 3400 .B \-\-revio\-ops N
 3401 stop revio stress workers after N bogo operations.
 3402 .TP
 3403 .B \-\-revio\-write\-size N
 3404 specify size of each write in bytes. Size can be from 1 byte to 4MB.
 3405 .TP
 3406 .B \-\-rlimit N
 3407 start N workers that exceed CPU and file size resource limits, generating
 3408 SIGXCPU and SIGXFSZ signals.
 3409 .TP
 3410 .B \-\-rlimit\-ops N
 3411 stop after N bogo resource limited SIGXCPU and SIGXFSZ signals have been caught.
 3412 .TP
 3413 .B \-\-rmap N
 3414 start N workers that exercise the VM reverse-mapping. This creates 16 processes
 3415 per worker that write/read multiple file-backed memory mappings. There are 64
 3416 lots of 4 page mappings made onto the file, with each mapping overlapping the
 3417 previous by 3 pages and at least 1 page of non-mapped memory between each
 3418 of the mappings. Data is synchronously msync'd to the file 1 in every
 3419 256 iterations in a random manner.
 3420 .TP
 3421 .B \-\-rmap\-ops N
 3422 stop after N bogo rmap memory writes/reads.
 3423 .TP
 3424 .B \-\-rseq N
 3425 start N workers that exercise restartable sequences via the rseq(2) system
 3426 call.  This loops over a long duration critical section that is likely to
 3427 be interrupted.  An rseq abort handler keeps count of the number of
 3428 interruptions and a SIGSEGV handler also tracks any failed rseq aborts that
 3429 can occur if there is a mismatch in an rseq check signature. Linux only.
 3430 .TP
 3431 .B \-\-rseq\-ops N
 3432 stop after N bogo rseq operations. Each bogo rseq operation is equivalent
 3433 to 10000 iterations over a long duration rseq handled critical section.
 3434 .TP
 3435 .B \-\-rtc N
 3436 start N workers that exercise the real time clock (RTC) interfaces via /dev/rtc
 3437 and /sys/class/rtc/rtc0. No destructive writes (modifications) are performed on
 3438 the RTC. This is a Linux only stressor.
 3439 .TP
 3440 .B \-\-rtc\-ops N
 3441 stop after N bogo RTC interface accesses.
 3442 .TP
 3443 .B \-\-schedpolicy N
 3444 start N workers that set the worker to various available scheduling
 3445 policies out of SCHED_OTHER, SCHED_BATCH, SCHED_IDLE, SCHED_FIFO,
 3446 SCHED_RR and SCHED_DEADLINE.  For the real time scheduling policies a
 3447 random sched priority is selected between the minimum and maximum
 3448 scheduling priority settings.
 3449 .TP
 3450 .B \-\-schedpolicy\-ops N
 3451 stop after N bogo scheduling policy changes.
 3452 .TP
 3453 .B \-\-sctp N
 3454 start N workers that perform network sctp stress activity using the Stream
 3455 Control Transmission Protocol (SCTP).  This involves client/server processes
 3456 performing rapid connect, send/receives and disconnects on the local host.
 3457 .TP
 3458 .B \-\-sctp\-domain D
 3459 specify the domain to use, the default is ipv4. Currently ipv4 and ipv6
 3460 are supported.
 3461 .TP
 3462 .B \-\-sctp\-ops N
 3463 stop sctp workers after N bogo operations.
 3464 .TP
 3465 .B \-\-sctp\-port P
 3466 start at sctp port P. For N sctp worker processes, ports P to (P * 4) - 1
 3467 are used for ipv4, ipv6 domains and ports P to P - 1 are used for the unix
 3468 domain.
 3469 .TP
 3470 .B \-\-seal N
 3471 start N workers that exercise the fcntl(2) SEAL commands on a small anonymous
 3472 file created using memfd_create(2).  After each SEAL command is issued the
 3473 stressor also sanity checks if the seal operation has sealed the file correctly.
 3474 (Linux only).
 3475 .TP
 3476 .B \-\-seal\-ops N
 3477 stop after N bogo seal operations.
 3478 .TP
 3479 .B \-\-seccomp N
 3480 start N workers that exercise Secure Computing system call filtering. Each
 3481 worker creates child processes that write a short message to /dev/null and then
 3482 exits. 2% of the child processes have a seccomp filter that disallows
 3483 the write system call and hence are killed by seccomp with a SIGSYS.  Note
 3484 that this stressor can generate many audit log messages each time the child is
 3485 killed.  Requires CAP_SYS_ADMIN to run.
 3486 .TP
 3487 .B \-\-seccomp-ops N
 3488 stop seccomp stress workers after N seccomp filter tests.
 3489 .TP
 3490 .B \-\-secretmem N
 3491 start N workers that mmap pages using file mapping off a memfd_secret file
 3492 descriptor. Each stress loop iteration will expand the mappable region by 3
 3493 pages using ftruncate and mmap and touch the pages. The pages are then
 3494 fragmented by unmapping the middle page and then unmapping the first and
 3495 last pages. This tries to force page fragmentation and also trigger out of
 3496 memory (OOM) kills of the stressor when the secret memory is exhausted.
 3497 Note this is a Linux 5.11+ only stressor and the kernel needs to be booted
 3498 with "secretmem=" option to allocate a secret memory reservation.
 3499 .TP
 3500 .B \-\-secretmem-ops N
 3501 stop secretmem stress workers after N stress loop iterations.
 3502 .TP
 3503 .B \-\-seek N
 3504 start N workers that randomly seek and perform 512 byte read/write I/O
 3505 operations on a file. The default file size is 16 GB.
 3506 .TP
 3507 .B \-\-seek\-ops N
 3508 stop seek stress workers after N bogo seek operations.
 3509 .TP
 3510 .B \-\-seek\-punch
 3511 punch randomly located 8K holes into the file to cause more extents to force
 3512 a more demanding seek stressor (Linux only).
 3513 .TP
 3514 .B \-\-seek\-size N
 3515 specify the size of the file in bytes. Small file sizes allow the I/O to occur
 3516 in the cache, causing greater CPU load. Large file sizes force more I/O
 3517 operations to the drive, causing more wait time and more I/O on the drive. One can
 3518 specify the size in units of Bytes, KBytes, MBytes and GBytes using the suffix
 3519 b, k, m or g.
 3520 .TP
 3521 .B \-\-sem N
 3522 start N workers that perform POSIX semaphore wait and post operations. By
 3523 default, a parent and 4 children are started per worker to provide some
 3524 contention on the semaphore. This stresses fast semaphore operations and
 3525 produces rapid context switching.
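.RS
.PP
The fast wait/post pattern between a parent and a child can be sketched
as follows (illustrative C only, not the stress\-ng implementation; the
iteration count is arbitrary and older glibc needs linking with
\-pthread):
.nf
#include <semaphore.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    sem_t *sem;
    int i;

    /* process shared semaphore placed in anonymous shared memory */
    sem = mmap(NULL, sizeof(*sem), PROT_READ | PROT_WRITE,
               MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (sem == MAP_FAILED || sem_init(sem, 1, 0) < 0)
        return EXIT_FAILURE;

    if (fork() == 0) {
        for (i = 0; i < 1000; i++)
            sem_wait(sem);          /* child: wait */
        _exit(0);
    }
    for (i = 0; i < 1000; i++)
        sem_post(sem);              /* parent: post */
    (void)wait(NULL);
    (void)sem_destroy(sem);
    return EXIT_SUCCESS;
}
.fi
.RE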
 3526 .TP
 3527 .B \-\-sem\-ops N
 3528 stop semaphore stress workers after N bogo semaphore operations.
 3529 .TP
 3530 .B \-\-sem\-procs N
 3531 start N child workers per worker to provide contention on the semaphore, the
 3532 default is 4 and a maximum of 64 are allowed.
 3533 .TP
 3534 .B \-\-sem\-sysv N
 3535 start N workers that perform System V semaphore wait and post operations. By
 3536 default, a parent and 4 children are started per worker to provide some
 3537 contention on the semaphore. This stresses fast semaphore operations and
 3538 produces rapid context switching.
 3539 .TP
 3540 .B \-\-sem\-sysv\-ops N
 3541 stop semaphore stress workers after N bogo System V semaphore operations.
 3542 .TP
 3543 .B \-\-sem\-sysv\-procs N
 3544 start N child processes per worker to provide contention on the System V
 3545 semaphore, the default is 4 and a maximum of 64 are allowed.
 3546 .TP
 3547 .B \-\-sendfile N
 3548 start N workers that send an empty file to /dev/null. This operation spends
 3549 nearly all the time in the kernel.  The default sendfile size is 4MB.  The
 3550 sendfile options are for Linux only.
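.RS
.PP
The in\-kernel copy that sendfile(2) performs can be sketched as follows
(illustrative Linux C example, not the stress\-ng implementation; the
input file name is taken from the command line):
.nf
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/sendfile.h>
#include <sys/stat.h>

int main(int argc, char **argv)
{
    struct stat sb;
    off_t offset = 0;
    ssize_t n;
    int in, out;

    if (argc < 2)
        return EXIT_FAILURE;
    in = open(argv[1], O_RDONLY);
    out = open("/dev/null", O_WRONLY);
    if (in < 0 || out < 0 || fstat(in, &sb) < 0)
        return EXIT_FAILURE;

    /* the copy happens entirely in the kernel, no user space buffer */
    n = sendfile(out, in, &offset, (size_t)sb.st_size);
    printf("sent %zd bytes\en", n);
    return EXIT_SUCCESS;
}
.fi
.RE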
 3551 .TP
 3552 .B \-\-sendfile\-ops N
 3553 stop sendfile workers after N sendfile bogo operations.
 3554 .TP
 3555 .B \-\-sendfile\-size S
 3556 specify the size to be copied with each sendfile call. The default size is
 3557 4MB. One can specify the size in units of Bytes, KBytes, MBytes and GBytes
 3558 using the suffix b, k, m or g.
 3559 .TP
 3560 .B \-\-session N
 3561 start N workers that create child and grandchild processes that set and
 3562 get their session ids. 25% of the grandchild processes are not waited for
 3563 by the child to create orphaned sessions that need to be reaped by init.
 3564 .TP
 3565 .B \-\-session\-ops N
 3566 stop session workers after N child processes are spawned and reaped.
 3567 .TP
 3568 .B \-\-set N
 3569 start N workers that call system calls that try to set data in the kernel,
 3570 currently these are: setgid, sethostname, setpgid, setpgrp, setuid,
 3571 setgroups, setreuid, setregid, setresuid, setresgid and setrlimit.
 3572 Some of these system calls are OS specific.
 3573 .TP
 3574 .B \-\-set\-ops N
 3575 stop set workers after N bogo set operations.
 3576 .TP
 3577 .B \-\-shellsort N
 3578 start N workers that sort 32 bit integers using shellsort.
 3579 .TP
 3580 .B \-\-shellsort\-ops N
 3581 stop shellsort stress workers after N bogo shellsorts.
 3582 .TP
 3583 .B \-\-shellsort\-size N
 3584 specify number of 32 bit integers to sort, default is 262144 (256 \(mu 1024).
 3585 .TP
 3586 .B \-\-shm N
 3587 start N workers that open and allocate shared memory objects using the POSIX
 3588 shared memory interfaces.  By default, the test will repeatedly create and
 3589 destroy 32 shared memory objects, each of which is 8MB in size.
 3590 .TP
 3591 .B \-\-shm\-ops N
 3592 stop after N POSIX shared memory create and destroy bogo operations are
 3593 complete.
 3594 .TP
 3595 .B \-\-shm\-bytes N
 3596 specify the size of the POSIX shared memory objects to be created. One can
 3597 specify the size as % of total available memory or in units of Bytes, KBytes,
 3598 MBytes and GBytes using the suffix b, k, m or g.
 3599 .TP
 3600 .B \-\-shm\-objs N
 3601 specify the number of shared memory objects to be created.
 3602 .TP
 3603 .B \-\-shm\-sysv N
 3604 start N workers that allocate shared memory using the System V shared memory
 3605 interface.  By default, the test will repeatedly create and destroy 8 shared
 3606 memory segments, each of which is 8MB in size.
 3607 .TP
 3608 .B \-\-shm\-sysv\-ops N
 3609 stop after N shared memory create and destroy bogo operations are complete.
 3610 .TP
 3611 .B \-\-shm\-sysv\-bytes N
 3612 specify the size of the shared memory segment to be created. One can specify
 3613 the size as % of total available memory or in units of Bytes, KBytes, MBytes
 3614 and GBytes using the suffix b, k, m or g.
 3615 .TP
 3616 .B \-\-shm\-sysv\-segs N
 3617 specify the number of shared memory segments to be created. The default is
 3618 8 segments.
 3619 .TP
 3620 .B \-\-sigabrt N
 3621 start N workers that create children that are killed by SIGABRT signals or
 3622 by calling abort(3).
 3623 .TP
 3624 .B \-\-sigabrt\-ops N
 3625 stop the sigabrt workers after N SIGABRT signals are successfully handled.
 3626 .TP
 3627 .B \-\-sigchld N
 3628 start N workers that create children to generate SIGCHLD signals. This exercises
 3629 children that exit (CLD_EXITED), get killed (CLD_KILLED), get stopped
 3630 (CLD_STOPPED) or continued (CLD_CONTINUED).
 3631 .TP
 3632 .B \-\-sigchld\-ops N
 3633 stop the sigchld workers after N SIGCHLD signals are successfully handled.
 3634 .TP
 3635 .B \-\-sigfd N
 3636 start N workers that generate SIGRT signals that are handled by reads by a child
 3637 process using a file descriptor set up using signalfd(2).  (Linux only). This
 3638 will generate a heavy context switch load when all CPUs are fully loaded.
 3639 .TP
 3640 .B \-\-sigfd\-ops N
 3641 stop sigfd workers after N bogo SIGUSR1 signals are sent.
 3642 .TP
 3643 .B \-\-sigfpe N
 3644 start N workers that rapidly cause division by zero SIGFPE faults.
 3645 .TP
 3646 .B \-\-sigfpe\-ops N
 3647 stop sigfpe stress workers after N bogo SIGFPE faults.
 3648 .TP
 3649 .B \-\-sigio N
 3650 start N workers that read data from a child process via a pipe and generate
 3651 SIGIO signals. This exercises asynchronous I/O via SIGIO.
 3652 .TP
 3653 .B \-\-sigio\-ops N
 3654 stop sigio stress workers after handling N SIGIO signals.
 3655 .TP
 3656 .B \-\-signal N
 3657 start N workers that exercise the signal system call using three different signal
 3658 handlers, SIG_IGN (ignore), a SIGCHLD handler and SIG_DFL (default action).
 3659 For the SIGCHLD handler, the stressor sends itself a SIGCHLD signal and checks
 3660 if it has been handled. For other handlers, the stressor checks that the
 3661 SIGCHLD handler has not been called.  This stress test calls the signal system
 3662 call directly when possible and will try to avoid the C library attempt to
 3663 replace signal with the more modern sigaction system call.
 3664 .TP
 3665 .B \-\-signal\-ops N
 3666 stop signal stress workers after N rounds of signal handler setting.
 3667 .TP
 3668 .B \-\-signest N
 3669 start N workers that exercise nested signal handling. A signal is raised and
 3670 inside the signal handler a different signal is raised, working through a
 3671 list of signals to exercise. An alternative signal stack is used that is
 3672 large enough to handle all the nested signal calls.  The \-v option will
 3673 log the approximate size of the stack required and the average stack size
 3674 per nested call.
 3675 .TP
 3676 .B \-\-signest\-ops N
 3677 stop after handling N nested signals.
 3678 .TP
 3679 .B \-\-sigpending N
 3680 start N workers that check if SIGUSR1 signals are pending. This stressor masks
 3681 SIGUSR1, generates a SIGUSR1 signal and uses sigpending(2) to see if the signal
 3682 is pending. Then it unmasks the signal and checks if the signal is no longer
 3683 pending.
 3684 .TP
 3685 .B \-\-sigpending-ops N
 3686 stop sigpending stress workers after N bogo sigpending pending/unpending checks.
 3687 .TP
 3688 .B \-\-sigpipe N
 3689 start N workers that repeatedly spawn off a child process that exits before the
 3690 parent can complete a pipe write, causing a SIGPIPE signal.  The child
 3691 process is spawned using clone(2) if it is available or using the slower
 3692 fork(2) instead.
 3693 .TP
 3694 .B \-\-sigpipe\-ops N
 3695 stop N workers after N SIGPIPE signals have been caught and handled.
 3696 .TP
 3697 .B \-\-sigq N
 3698 start N workers that rapidly send SIGUSR1 signals using sigqueue(3) to child
 3699 processes that wait for the signal via sigwaitinfo(2).
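.RS
.PP
The sigqueue/sigwaitinfo hand\-off can be sketched in C as follows
(illustrative only, not the stress\-ng implementation; the queued value
of 42 is arbitrary):
.nf
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    union sigval val;
    sigset_t set;
    pid_t pid;

    sigemptyset(&set);
    sigaddset(&set, SIGUSR1);
    sigprocmask(SIG_BLOCK, &set, NULL); /* block so it can be waited for */

    pid = fork();
    if (pid == 0) {
        siginfo_t info;

        if (sigwaitinfo(&set, &info) == SIGUSR1)
            printf("child got value %d\en", info.si_value.sival_int);
        _exit(0);
    }
    val.sival_int = 42;
    (void)sigqueue(pid, SIGUSR1, val);  /* queue the signal + value */
    (void)wait(NULL);
    return EXIT_SUCCESS;
}
.fi
.RE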
 3700 .TP
 3701 .B \-\-sigq\-ops N
 3702 stop sigq stress workers after N bogo signal send operations.
 3703 .TP
 3704 .B \-\-sigrt N
 3705 start N workers that each create child processes to handle SIGRTMIN to
 3706 SIGRTMAX real time signals. The parent sends each child process an RT signal
 3707 via sigqueue(2) and the child process waits for this via sigwaitinfo(2).
 3708 When the child receives the signal it then sends a RT signal to one of the
 3709 other child processes also via sigqueue(2).
 3710 .TP
 3711 .B \-\-sigrt\-ops N
 3712 stop sigrt stress workers after N bogo sigqueue signal send operations.
 3713 .TP
 3714 .B \-\-sigsegv N
 3715 start N workers that rapidly create and catch segmentation faults.
 3716 .TP
 3717 .B \-\-sigsegv\-ops N
 3718 stop sigsegv stress workers after N bogo segmentation faults.
 3719 .TP
 3720 .B \-\-sigsuspend N
 3721 start N workers that each spawn off 4 child processes that wait for a SIGUSR1
 3722 signal from the parent using sigsuspend(2). The parent sends SIGUSR1 signals
 3723 to each child in rapid succession.  Each sigsuspend wakeup is counted as one
 3724 bogo operation.
 3725 .TP
 3726 .B \-\-sigsuspend-ops N
 3727 stop sigsuspend stress workers after N bogo sigsuspend wakeups.
 3728 .TP
 3729 .B \-\-sigtrap N
 3730 start N workers that exercise the SIGTRAP signal. For systems that support
 3731 SIGTRAP, the signal is generated using raise(SIGTRAP). On x86 Linux systems only,
 3732 the SIGTRAP is also generated by an int 3 instruction.
 3733 .TP
 3734 .B \-\-sigtrap-ops N
 3735 stop sigtrap stress workers after N SIGTRAPs have been handled.
 3736 .TP
 3737 .B \-\-skiplist N
 3738 start N workers that store and then search for integers using a skiplist.
 3739 By default, 65536 integers are added and searched.  This is a useful method
 3740 to exercise random access of memory and processor cache.
 3741 .TP
 3742 .B \-\-skiplist\-ops N
 3743 stop the skiplist worker after N skiplist store and search cycles are completed.
 3744 .TP
 3745 .B \-\-skiplist\-size N
 3746 specify the size (number of integers) to store and search in the skiplist. Size can
 3747 be from 1K to 4M.
 3748 .TP
 3749 .B \-\-sleep N
 3750 start N workers that spawn off multiple threads that each perform multiple
 3751 sleeps of ranges 1us to 0.1s.  This creates multiple context switches and
 3752 timer interrupts.
 3753 .TP
 3754 .B \-\-sleep\-ops N
 3755 stop after N sleep bogo operations.
 3756 .TP
 3757 .B \-\-sleep\-max P
 3758 start P threads per worker. The default is 1024, the maximum allowed is
 3759 30000.
 3760 .TP
 3761 .B \-\-smi N
 3762 start N workers that attempt to generate system management interrupts (SMIs)
 3763 into the x86 ring -2 system management mode (SMM) by exercising the advanced
 3764 power management (APM) port 0xb2. This requires the \-\-pathological option and
 3765 root privilege and is only implemented on x86 Linux platforms. This probably
 3766 does not work in a virtualized environment.  The stressor will attempt to
 3767 determine the time stolen by SMIs with some naive benchmarking.
 3768 .TP
 3769 .B \-\-smi\-ops N
 3770 stop after N attempts to trigger the SMI.
 3771 .TP
 3772 .B \-S N, \-\-sock N
 3773 start N workers that perform various socket stress activity. This involves a
 3774 pair of client/server processes performing rapid connect, send and receives
 3775 and disconnects on the local host.
 3776 .TP
 3777 .B \-\-sock\-domain D
 3778 specify the domain to use, the default is ipv4. Currently ipv4, ipv6 and unix
 3779 are supported.
 3780 .TP
 3781 .B \-\-sock\-nodelay
 3782 This disables the TCP Nagle algorithm, so data segments are always sent
 3783 as soon as possible.  This stops data from being buffered before being
 3784 transmitted, hence resulting in poorer network utilisation and more context
 3785 switches between the sender and receiver.
 3786 .TP
 3787 .B \-\-sock\-port P
 3788 start at socket port P. For N socket worker processes, ports P to P + N - 1 are
 3789 used.
 3790 .TP
 3791 .B \-\-sock\-protocol P
 3792 Use the specified protocol P, default is tcp. Options are tcp and mptcp (if
 3793 supported by the operating system).
 3794 .TP
 3795 .B \-\-sock\-ops N
 3796 stop socket stress workers after N bogo operations.
 3797 .TP
 3798 .B \-\-sock\-opts [ random | send | sendmsg | sendmmsg ]
 3799 by default, messages are sent using send(2). This option allows one to specify
 3800 the sending method using send(2), sendmsg(2), sendmmsg(2) or a random selection
 3801 of one of these 3 on each iteration.  Note that sendmmsg is only available for
 3802 Linux systems that support this system call.
 3803 .TP
 3804 .B \-\-sock\-type [ stream | seqpacket ]
 3805 specify the socket type to use. The default type is stream. seqpacket currently
 3806 only works for the unix socket domain.
 3807 .TP
 3808 .B \-\-sock\-zerocopy
 3809 enable zerocopy for send and recv calls if MSG_ZEROCOPY is supported.
 3810 .TP
 3811 .B \-\-sockabuse N
 3812 start N workers that abuse a socket file descriptor with various file based
 3813 system calls that don't normally act on sockets. The kernel should handle these
 3814 illegal and unexpected calls gracefully.
 3815 .TP
 3816 .B \-\-sockabuse\-ops N
 3817 stop after N iterations of the socket abusing stressor loop.
 3818 .TP
 3819 .B \-\-sockdiag N
 3820 start N workers that exercise the Linux sock_diag netlink socket diagnostics
 3821 (Linux only).  This currently requests diagnostics using UDIAG_SHOW_NAME,
 3822 UDIAG_SHOW_VFS, UDIAG_SHOW_PEER, UDIAG_SHOW_ICONS, UDIAG_SHOW_RQLEN and
 3823 UDIAG_SHOW_MEMINFO for the AF_UNIX family of socket connections.
 3824 .TP
 3825 .B \-\-sockdiag\-ops N
 3826 stop after receiving N sock_diag diagnostic messages.
 3827 .TP
 3828 .B \-\-sockfd N
 3829 start N workers that pass file descriptors over a UNIX domain socket using the
 3830 CMSG(3) ancillary data mechanism. For each worker, a pair of client/server
 3831 processes is created; the server opens as many file descriptors on /dev/null
 3832 as possible and passes these over the socket to a client that reads them from
 3833 the CMSG data and immediately closes the files.
 3834 .TP
 3835 .B \-\-sockfd\-ops N
 3836 stop sockfd stress workers after N bogo operations.
 3837 .TP
 3838 .B \-\-sockfd\-port P
 3839 start at socket port P. For N socket worker processes, ports P to P + N - 1 are
 3840 used.
 3841 .TP
 3842 .B \-\-sockmany N
 3843 start N workers that use a client process to attempt to open as many as 100000
 3844 TCP/IP socket connections to a server on port 10000.
 3845 .TP
 3846 .B \-\-sockmany\-ops N
 3847 stop after N connections.
 3848 .TP
 3849 .B \-\-sockpair N
 3850 start N workers that perform socket pair I/O read/writes. This involves a pair
 3851 of client/server processes performing randomly sized socket I/O operations.
 3852 .TP
 3853 .B \-\-sockpair\-ops N
 3854 stop socket pair stress workers after N bogo operations.
 3855 .TP
 3856 .B \-\-softlockup N
 3857 start N workers that flip between the "real-time" SCHED_FIFO and SCHED_RR
 3858 scheduling policies at the highest priority to force softlockups. This can
 3859 only be run with CAP_SYS_NICE capability and for best results the number of
 3860 stressors should be at least the number of online CPUs. Once running, this is
 3861 practically impossible to stop and it will force softlockup issues and may
 3862 trigger watchdog timeout reboots.
 3863 .TP
 3864 .B \-\-softlockup\-ops N
 3865 stop softlockup stress workers after N bogo scheduler policy changes.
 3866 .TP
 3867 .B \-\-spawn N
 3868 start N workers that continually spawn children using posix_spawn(3) that exec
 3869 stress-ng and then exit almost immediately. Currently Linux only.
 3870 .TP
 3871 .B \-\-spawn\-ops N
 3872 stop spawn stress workers after N bogo spawns.
 3873 .TP
 3874 .B \-\-splice N
 3875 start N workers that move data from /dev/zero to /dev/null through a pipe without any copying
 3876 between kernel address space and user address space using splice(2). This is
 3877 only available for Linux.
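.RS
.PP
The zero\-copy move uses a pipe as the intermediate buffer; a minimal
Linux C sketch (illustrative only, not the stress\-ng implementation; a
single 64K splice rather than the \-\-splice\-bytes controlled loop):
.nf
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    int fd_zero, fd_null, fds[2];
    ssize_t n;

    fd_zero = open("/dev/zero", O_RDONLY);
    fd_null = open("/dev/null", O_WRONLY);
    if (fd_zero < 0 || fd_null < 0 || pipe(fds) < 0)
        return EXIT_FAILURE;

    /* no user space copying: data moves /dev/zero -> pipe -> /dev/null */
    n = splice(fd_zero, NULL, fds[1], NULL, 65536, SPLICE_F_MOVE);
    if (n > 0)
        n = splice(fds[0], NULL, fd_null, NULL, (size_t)n, SPLICE_F_MOVE);
    printf("spliced %zd bytes\en", n);
    return EXIT_SUCCESS;
}
.fi
.RE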
 3878 .TP
 3879 .B \-\-splice-ops N
 3880 stop after N bogo splice operations.
 3881 .TP
 3882 .B \-\-splice-bytes N
 3883 transfer N bytes per splice call, the default is 64K. One can specify the size
 3884 as % of total available memory or in units of Bytes, KBytes, MBytes and GBytes
 3885 using the suffix b, k, m or g.
 3886 .TP
 3887 .B \-\-stack N
 3888 start N workers that rapidly cause and catch stack overflows by use of
 3889 large recursive stack allocations.  Much like the brk stressor, this can eat
 3890 up pages rapidly and may trigger the kernel OOM killer on the process,
 3891 however, the killed stressor is respawned again by a monitoring parent
 3892 process.
 3893 .TP
 3894 .B \-\-stack\-fill
 3895 the default action is to touch the lowest page on each stack allocation. This
 3896 option touches all the pages by filling the new stack allocation with zeros
 3897 which forces physical pages to be allocated and hence is more aggressive.
 3898 .TP
 3899 .B \-\-stack\-mlock
 3900 attempt to mlock stack pages into memory prohibiting them from being
 3901 paged out.  This is a no-op if mlock(2) is not available.
 3902 .TP
 3903 .B \-\-stack\-ops N
 3904 stop stack stress workers after N bogo stack overflows.
 3905 .TP
 3906 .B \-\-stackmmap N
 3907 start N workers that use a 2MB stack that is memory mapped onto a temporary
 3908 file. A recursive function works down the stack and flushes dirty stack pages
 3909 back to the memory mapped file using msync(2) until the end of the stack is
 3910 reached (stack overflow). This exercises dirty page and stack exception handling.
 3911 .TP
 3912 .B \-\-stackmmap\-ops N
 3913 stop workers after N stack overflows have occurred.
 3914 .TP
 3915 .B \-\-str N
 3916 start N workers that exercise various libc string functions on random strings.
 3917 .TP
 3918 .B \-\-str-method strfunc
 3919 select a specific libc string function to stress. Available string functions to
 3920 stress are: all, index, rindex, strcasecmp, strcat, strchr, strcoll, strcmp,
 3921 strcpy, strlen, strncasecmp, strncat, strncmp, strrchr and strxfrm.  See
 3922 string(3) for more information on these string functions.  The 'all' method is
 3923 the default and will exercise all the string methods.
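.RS
.PP
For example, a possible invocation (the method, worker count and bogo op limit
shown are arbitrary choices):
.nf
# 4 string workers exercising just strcpy, stopping after 1000000 bogo ops
stress\-ng \-\-str 4 \-\-str\-method strcpy \-\-str\-ops 1000000
.fi
.RE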
 3924 .TP
 3925 .B \-\-str-ops N
 3926 stop after N bogo string operations.
 3927 .TP
 3928 .B \-\-stream N
 3929 start N workers exercising a memory bandwidth stressor loosely based on the
 3930 STREAM "Sustainable Memory Bandwidth in High Performance Computers" benchmarking
 3931 tool by John D. McCalpin, Ph.D.  This stressor allocates buffers that are at
 3932 least 4 times the size of the CPU L2 cache and continually performs rounds of
 3933 the following computations on large arrays of double precision floating point numbers:
 3934 .TS
 3935 expand;
 3936 lB2 lB lB
 3937 l l s.
 3938 Operation	Description
 3939 copy	T{
 3940 c[i] = a[i]
 3941 T}
 3942 scale	T{
 3943 b[i] = scalar * c[i]
 3944 T}
 3945 add	T{
 3946 c[i] = a[i] + b[i]
 3947 T}
 3948 triad	T{
 3949 a[i] = b[i] + (c[i] * scalar)
 3950 T}
 3951 .TE
 3952 .RS
 3953 .PP
 3954 Since this is loosely based on a variant of the STREAM benchmark code,
 3955 DO NOT submit results based on this stressor; it is intended in stress-ng
 3956 just to stress memory and compute and NOT intended for accurate tuned or
 3957 non-tuned STREAM benchmarking whatsoever.  Use the official STREAM
 3958 benchmarking tool if you desire accurate and standardised STREAM benchmarks.
 3959 .RE
 3960 .TP
 3961 .B \-\-stream\-ops N
 3962 stop after N stream bogo operations, where a bogo operation is one round
 3963 of copy, scale, add and triad operations.
 3964 .TP
 3965 .B \-\-stream\-index N
 3966 specify number of stream indices used to index into the data arrays a, b and
 3967 c.  This adds indirection into the data lookup by using randomly shuffled
 3968 indexing into the three data arrays. Level 0 (no indexing) is the default,
 3969 and 3 is where all 3 arrays are indexed via 3 different randomly shuffled
 3970 indexes. The higher the index setting, the more impact this has on L1, L2
 3971 and L3 caching and hence forces higher memory read/write latencies.
 3972 .TP
 3973 .B \-\-stream\-l3\-size N
 3974 Specify the CPU Level 3 cache size in bytes.  One can specify the size in
 3975 units of Bytes, KBytes, MBytes and GBytes using the suffix b, k, m or g.
 3976 If the L3 cache size is not provided, then stress-ng will attempt to
 3977 determine the cache size, and failing this, will default the size to 4MB.
 3978 .TP
 3979 .B \-\-stream\-madvise [ hugepage | nohugepage | normal ]
 3980 Specify the madvise options used on the memory mapped buffer used in the
 3981 stream stressor. Non-linux systems will only have the 'normal' madvise
 3982 advice. The default is 'normal'.
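.RS
.PP
An illustrative invocation combining the stream options above; the L3 size
shown is an arbitrary value and should reflect the actual CPU Level 3 cache
size:
.nf
# 4 stream workers, 3-way indexed access, 16MB assumed L3 cache, 1 minute
stress\-ng \-\-stream 4 \-\-stream\-index 3 \-\-stream\-l3\-size 16m \-t 1m \-\-metrics\-brief
.fi
.RE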
 3983 .TP
 3984 .B \-\-swap N
 3985 start N workers that add and remove small randomly sized swap partitions
 3986 (Linux only).  Note that if too many swap partitions are added then the
 3987 stressors may exit with exit code 3 (not enough resources).  Requires
 3988 CAP_SYS_ADMIN to run.
 3989 .TP
 3990 .B \-\-swap\-ops N
 3991 stop the swap workers after N swapon/swapoff iterations.
 3992 .TP
 3993 .B \-s N, \-\-switch N
 3994 start N workers that send messages via pipe to a child to force context
 3995 switching.
 3996 .TP
 3997 .B \-\-switch\-ops N
 3998 stop context switching workers after N bogo operations.
 3999 .TP
 4000 .B \-\-switch\-freq F
 4001 run the context switching at the frequency of F context switches per
 4002 second. Note that the specified switch rate may not be achieved
 4003 because of CPU speed and memory bandwidth limitations.
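.RS
.PP
For example (the worker count, switch rate and duration are arbitrary
illustrative values):
.nf
# 8 context switch workers targeting 100000 switches per second for 1 minute
stress\-ng \-\-switch 8 \-\-switch\-freq 100000 \-t 1m
.fi
.RE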
 4004 .TP
 4005 .B \-\-symlink N
 4006 start N workers creating and removing symbolic links.
 4007 .TP
 4008 .B \-\-symlink\-ops N
 4009 stop symlink stress workers after N bogo operations.
 4010 .TP
 4011 .B \-\-sync\-file N
 4012 start N workers that perform a range of data syncs across a file using
 4013 sync_file_range(2).  Three mixes of syncs are performed: from the start to the end
 4014 of the file, from the end of the file to the start, and a random mix. A random
 4015 selection of valid sync types are used, covering the SYNC_FILE_RANGE_WAIT_BEFORE,
 4016 SYNC_FILE_RANGE_WRITE and SYNC_FILE_RANGE_WAIT_AFTER flag bits.
 4017 .TP
 4018 .B \-\-sync\-file\-ops N
 4019 stop sync\-file workers after N bogo sync operations.
 4020 .TP
 4021 .B \-\-sync\-file\-bytes N
 4022 specify the size of the file to be sync'd. One can specify the size as % of free
 4023 space on the file system or in units of Bytes, KBytes, MBytes and GBytes using the
 4024 suffix b, k, m or g.
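.RS
.PP
An illustrative example (file size and duration are arbitrary values):
.nf
# 2 sync-file workers syncing ranges of a 256MB file for 2 minutes
stress\-ng \-\-sync\-file 2 \-\-sync\-file\-bytes 256m \-t 2m
.fi
.RE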
 4025 .TP
 4026 .B \-\-sysbadaddr N
 4027 start N workers that pass bad addresses to system calls to exercise bad address
 4028 and fault handling. The addresses used are null pointers, read only pages,
 4029 write only pages, unmapped addresses, text only pages, unaligned addresses and top of
 4030 memory addresses.
 4031 .TP
 4032 .B \-\-sysbadaddr\-ops N
 4033 stop the sysbadaddr stressors after N bogo system calls.
 4034 .TP
 4035 .B \-\-sysinfo N
 4036 start N workers that continually read system and process specific information.
 4037 This reads the process user and system times using the times(2) system call.
 4038 For Linux systems, it also reads overall system statistics using the sysinfo(2)
 4039 system call and also the file system statistics for all mounted file systems
 4040 using statfs(2).
 4041 .TP
 4042 .B \-\-sysinfo\-ops N
 4043 stop the sysinfo workers after N bogo operations.
 4044 .TP
 4045 .B \-\-sysinval N
 4046 start N workers that exercise system calls in random order with permutations
 4047 of invalid arguments to force kernel error handling checks. The stress test
 4048 autodetects system calls that cause processes to crash or exit prematurely
 4049 and will blocklist these after several repeated breakages. System call
 4050 arguments that cause system calls to work successfully are also detected and
 4051 blocklisted.  Linux only.
 4052 .TP
 4053 .B \-\-sysinval-ops N
 4054 stop sysinval workers after N system call attempts.
 4055 .TP
 4056 .B \-\-sysfs N
 4057 start N workers that recursively read files from /sys (Linux only).  This may
 4058 cause specific kernel drivers to emit messages into the kernel log.
 4059 .TP
 4060 .B \-\-sysfs\-ops N
 4061 stop sysfs reading after N bogo read operations. Note, since the number of
 4062 entries may vary between kernels, this bogo ops metric is probably very
 4063 misleading.
 4064 .TP
 4065 .B \-\-tee N
 4066 start N workers that move data from a writer process to a reader process
 4067 through pipes and to /dev/null without any copying between kernel address
 4068 space and user address space using tee(2). This is only available for Linux.
 4069 .TP
 4070 .B \-\-tee-ops N
 4071 stop after N bogo tee operations.
 4072 .TP
 4073 .B \-T N, \-\-timer N
 4074 start N workers creating timer events at a default rate of 1 MHz (Linux only);
 4075 this can create many thousands of timer clock interrupts. Each timer event
 4076 is caught by a signal handler and counted as a bogo timer op.
 4077 .TP
 4078 .B \-\-timer\-ops N
 4079 stop timer stress workers after N bogo timer events (Linux only).
 4080 .TP
 4081 .B \-\-timer\-freq F
 4082 run timers at F Hz; range from 1 to 1000000000 Hz (Linux only). By selecting
 4083 an appropriate frequency stress\-ng can generate hundreds of thousands of
 4084 interrupts per second.  Note: it is also worth using \-\-timer\-slack 0 for
 4085 high frequencies to stop the kernel from coalescing timer events.
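.RS
.PP
As an illustrative example only (the worker count, frequency and duration are
arbitrary), the timer options above may be combined as follows:
.nf
# 4 timer workers at 100 kHz with zero timer slack for 1 minute
stress\-ng \-\-timer 4 \-\-timer\-freq 100000 \-\-timer\-slack 0 \-t 1m
.fi
.RE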
 4086 .TP
 4087 .B \-\-timer\-rand
 4088 select a timer frequency based around the timer frequency +/- 12.5% random
 4089 jitter. This tries to force more variability in the timer interval to make the
 4090 scheduling less predictable.
 4091 .TP
 4092 .B \-\-timerfd N
 4093 start N workers creating timerfd events at a default rate of 1 MHz (Linux
 4094 only); this can create many thousands of timer clock events. Timer events
 4095 are waited for on the timer file descriptor using select(2) and then read and
 4096 counted as a bogo timerfd op.
 4097 .TP
 4098 .B \-\-timerfd\-ops N
 4099 stop timerfd stress workers after N bogo timerfd events (Linux only).
 4100 .TP
 4101 .B \-\-timerfd\-fds N
 4102 try to use a maximum of N timerfd file descriptors per stressor.
 4103 .TP
 4104 .B \-\-timerfd\-freq F
 4105 run timers at F Hz; range from 1 to 1000000000 Hz (Linux only). By selecting
 4106 an appropriate frequency stress\-ng can generate hundreds of thousands of
 4107 interrupts per second.
 4108 .TP
 4109 .B \-\-timerfd\-rand
 4110 select a timerfd frequency based around the timer frequency +/- 12.5% random
 4111 jitter. This tries to force more variability in the timer interval to make the
 4112 scheduling less predictable.
 4113 .TP
 4114 .B \-\-tlb\-shootdown N
 4115 start N workers that force Translation Lookaside Buffer (TLB) shootdowns.
 4116 This is achieved by creating up to 16 child processes that all share a
 4117 region of memory and these processes are spread amongst the available
 4118 CPUs.  The processes adjust the page mapping settings causing TLBs to
 4119 be force flushed on the other processors, causing the TLB shootdowns.
 4120 .TP
 4121 .B \-\-tlb\-shootdown\-ops N
 4122 stop after N bogo TLB shootdown operations are completed.
 4123 .TP
 4124 .B \-\-tmpfs N
 4125 start N workers that create a temporary file on an available tmpfs
 4126 file system and perform various file based mmap operations upon it.
 4127 .TP
 4128 .B \-\-tmpfs\-ops N
 4129 stop tmpfs stressors after N bogo mmap operations.
 4130 .TP
 4131 .B \-\-tmpfs\-mmap\-async
 4132 enable file based memory mapping and use asynchronous msync'ing on each page,
 4133 see \-\-tmpfs\-mmap\-file.
 4134 .TP
 4135 .B \-\-tmpfs\-mmap\-file
 4136 enable tmpfs file based memory mapping and by default use synchronous
 4137 msync'ing on each page.
 4138 .TP
 4139 .B \-\-tree N
 4140 start N workers that exercise tree data structures. The default is
 4141 to add, find and remove 250,000 64 bit integers in AVL (avl),
 4142 Red-Black (rb), Splay (splay) and binary trees.  The intention of
 4143 this stressor is to exercise memory and cache with the various tree
 4144 operations.
 4145 .TP
 4146 .B \-\-tree\-ops N
 4147 stop tree stressors after N bogo ops. A bogo op covers adding,
 4148 finding and removing all the items in the tree(s).
 4149 .TP
 4150 .B \-\-tree\-size N
 4151 specify the size of the tree, where N is the number of 64 bit integers
 4152 to be added into the tree.
 4153 .TP
 4154 .B \-\-tree\-method [ all | avl | binary | rb | splay ]
 4155 specify the tree to be used. By default, both the rb and splay trees
 4156 are used (the 'all' option).
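.RS
.PP
For example, a possible invocation (the method, tree size and duration shown
are arbitrary choices):
.nf
# 2 tree workers using an AVL tree of 1000000 integers for 2 minutes
stress\-ng \-\-tree 2 \-\-tree\-method avl \-\-tree\-size 1000000 \-t 2m
.fi
.RE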
 4157 .TP
 4158 .B \-\-tsc N
 4159 start N workers that read the Time Stamp Counter (TSC) 256 times per loop
 4160 iteration (bogo operation).  This exercises the tsc instruction for x86,
 4161 the mftb instruction for ppc64 and the rdcycle instruction for RISC-V.
 4162 .TP
 4163 .B \-\-tsc\-ops N
 4164 stop the tsc workers after N bogo operations are completed.
 4165 .TP
 4166 .B \-\-tsearch N
 4167 start N workers that insert, search and delete 32 bit integers on a binary
 4168 tree using tsearch(3), tfind(3) and tdelete(3). By default, there are 65536
 4169 randomized integers used in the tree.  This is a useful method to exercise
 4170 random access of memory and processor cache.
 4171 .TP
 4172 .B \-\-tsearch\-ops N
 4173 stop the tsearch workers after N bogo tree operations are completed.
 4174 .TP
 4175 .B \-\-tsearch\-size N
 4176 specify the size (number of 32 bit integers) in the array to tsearch. Size
 4177 can be from 1K to 4M.
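.RS
.PP
An illustrative example (array size and duration are arbitrary values):
.nf
# 4 tsearch workers on an array of 1000000 integers for 1 minute
stress\-ng \-\-tsearch 4 \-\-tsearch\-size 1000000 \-t 1m
.fi
.RE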
 4178 .TP
 4179 .B \-\-tun N
 4180 start N workers that create a network tunnel device, send and receive
 4181 packets over the tunnel using UDP and then destroy it. A new random
 4182 192.168.*.* IPv4 address is used each time a tunnel is created.
 4183 .TP
 4184 .B \-\-tun\-ops N
 4185 stop after N iterations of creating/sending/receiving/destroying a tunnel.
 4186 .TP
 4187 .B \-\-tun\-tap
 4188 use a network tap device using layer 2 frames (bridging) rather than a tun device
 4189 for layer 3 raw packets (tunnelling).
 4190 .TP
 4191 .B \-\-udp N
 4192 start N workers that transmit data using UDP. This involves a pair of
 4193 client/server processes performing rapid connect, send, receive and
 4194 disconnect operations on the local host.
 4195 .TP
 4196 .B \-\-udp\-domain D
 4197 specify the domain to use, the default is ipv4. Currently ipv4, ipv6 and unix
 4198 are supported.
 4199 .TP
 4200 .B \-\-udp\-lite
 4201 use the UDP-Lite (RFC 3828) protocol (only for ipv4 and ipv6 domains).
 4202 .TP
 4203 .B \-\-udp\-ops N
 4204 stop udp stress workers after N bogo operations.
 4205 .TP
 4206 .B \-\-udp\-port P
 4207 start at port P. For N udp worker processes, ports P to P + N - 1 are used.
 4208 By default, ports 7000 upwards are used.
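.RS
.PP
As an illustrative example only (the worker count and duration are arbitrary),
the udp options above may be combined as follows:
.nf
# 4 UDP workers using UDP-Lite over IPv6 for 30 seconds
stress\-ng \-\-udp 4 \-\-udp\-domain ipv6 \-\-udp\-lite \-t 30s
.fi
.RE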
 4209 .TP
 4210 .B \-\-udp\-flood N
 4211 start N workers that attempt to flood the host with UDP packets to random
 4212 ports. The IP address of the packets is currently not spoofed. This is only
 4213 available on systems that support AF_PACKET.
 4214 .TP
 4215 .B \-\-udp\-flood\-domain D
 4216 specify the domain to use, the default is ipv4. Currently ipv4 and ipv6 are
 4217 supported.
 4218 .TP
 4219 .B \-\-udp\-flood\-ops N
 4220 stop udp-flood stress workers after N bogo operations.
 4221 .TP
 4222 .B \-\-unshare N
 4223 start N workers that each fork off 32 child processes, each of which exercises
 4224 the unshare(2) system call by disassociating parts of the process execution
 4225 context. (Linux only).
 4226 .TP
 4227 .B \-\-unshare\-ops N
 4228 stop after N bogo unshare operations.
 4229 .TP
 4230 .B \-\-uprobe N
 4231 start N workers that trace the entry to libc function getpid() using the
 4232 Linux uprobe kernel tracing mechanism. This requires the CAP_SYS_ADMIN
 4233 capability and a modern Linux uprobe capable kernel.
 4234 .TP
 4235 .B \-\-uprobe\-ops N
 4236 stop uprobe tracing after N trace events of the function that is being traced.
 4237 .TP
 4238 .B \-u N, \-\-urandom N
 4239 start N workers reading /dev/urandom (Linux only). This will load the kernel
 4240 random number source.
 4241 .TP
 4242 .B \-\-urandom\-ops N
 4243 stop urandom stress workers after N urandom bogo read operations (Linux only).
 4244 .TP
 4245 .B \-\-userfaultfd N
 4246 start N workers that generate write page faults on a small anonymously mapped
 4247 memory region and handle these faults using the user space fault handling via
 4248 the userfaultfd mechanism.  This will generate a large quantity of major page
 4249 faults and also context switches during the handling of the page faults.
 4250 (Linux only).
 4251 .TP
 4252 .B \-\-userfaultfd-ops N
 4253 stop userfaultfd stress workers after N page faults.
 4254 .TP
 4255 .B \-\-userfaultfd-bytes N
 4256 mmap N bytes per userfaultfd worker to page fault on, the default is 16MB.
 4257 One can specify the size as % of total available memory or in units of Bytes,
 4258 KBytes, MBytes and GBytes using the suffix b, k, m or g.
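.RS
.PP
An illustrative example (the mapping size and duration are arbitrary values):
.nf
# 2 userfaultfd workers faulting on 64MB mappings for 1 minute
stress\-ng \-\-userfaultfd 2 \-\-userfaultfd\-bytes 64m \-t 1m
.fi
.RE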
 4259 .TP
 4260 .B \-\-utime N
 4261 start N workers updating file timestamps. This is mainly CPU bound when the
 4262 default is used as the system flushes metadata changes only periodically.
 4263 .TP
 4264 .B \-\-utime\-ops N
 4265 stop utime stress workers after N utime bogo operations.
 4266 .TP
 4267 .B \-\-utime\-fsync
 4268 force metadata changes on each file timestamp update to be flushed to disk.
 4269 This forces the test to become I/O bound and will result in many dirty metadata
 4270 writes.
 4271 .TP
 4272 .B \-\-vdso N
 4273 start N workers that repeatedly call each of the system call functions in the
 4274 vDSO (virtual dynamic shared object).  The vDSO is a shared library that the
 4275 kernel maps into the address space of all user-space applications to allow
 4276 fast access to kernel data for some system calls without the need to
 4277 perform an expensive system call.
 4278 .TP
 4279 .B \-\-vdso\-ops N
 4280 stop after N vDSO function calls.
 4281 .TP
 4282 .B \-\-vdso\-func F
 4283 Instead of calling all the vDSO functions, just call the vDSO function F. The
 4284 functions depend on the kernel being used, but are typically clock_gettime,
 4285 getcpu, gettimeofday and time.
 4286 .TP
 4287 .B \-\-vecmath N
 4288 start N workers that perform various unsigned integer math operations on
 4289 various 128 bit vectors. A mix of vector math operations are performed on the
 4290 following vectors: 16 \(mu 8 bits, 8 \(mu 16 bits, 4 \(mu 32 bits, 2 \(mu 64
 4291 bits. The metrics produced by this mix depend on the processor architecture
 4292 and the vector math optimisations produced by the compiler.
 4293 .TP
 4294 .B \-\-vecmath\-ops N
 4295 stop after N bogo vector integer math operations.
 4296 .TP
 4297 .B \-\-verity N
 4298 start N workers that exercise read-only file based authenticity protection
 4299 using the verity ioctls FS_IOC_ENABLE_VERITY and FS_IOC_MEASURE_VERITY.
 4300 This requires file systems with verity support (currently ext4 and f2fs
 4301 on Linux) with the verity feature enabled. The test attempts to create
 4302 a small file with multiple small extents and enables verity on the file
 4303 and verifies it. It also checks to see if the file has verity enabled
 4304 with the FS_VERITY_FL bit set on the file flags.
 4305 .TP
 4306 .B \-\-verity\-ops N
 4307 stop the verity workers after N file create, enable verity, check verity
 4308 and unlink cycles.
 4309 .TP
 4310 .B \-\-vfork N
 4311 start N workers continually vforking children that immediately exit.
 4312 .TP
 4313 .B \-\-vfork\-ops N
 4314 stop vfork stress workers after N bogo operations.
 4315 .TP
 4316 .B \-\-vfork\-max P
 4317 create P processes and then wait for them to exit per iteration. The default
 4318 is just 1; higher values will create many temporary zombie processes that are
 4319 waiting to be reaped. One can potentially fill up the process table using
 4320 high values for \-\-vfork\-max and \-\-vfork.
 4321 .TP
 4322 .B \-\-vfork\-vm
 4323 enable detrimental performance virtual memory advice using madvise on
 4324 all pages of the vforked process. Where possible this will try to set
 4325 every page in the new process using the madvise MADV_MERGEABLE,
 4326 MADV_WILLNEED, MADV_HUGEPAGE and MADV_RANDOM flags. Linux only.
 4327 .TP
 4328 .B \-\-vforkmany N
 4329 start N workers that spawn off a chain of vfork children until the process
 4330 table fills up and/or vfork fails.  vfork can rapidly create child processes
 4331 and the parent process has to wait until the child dies, so this stressor
 4332 rapidly fills up the process table.
 4333 .TP
 4334 .B \-\-vforkmany\-ops N
 4335 stop vforkmany stressors after N vforks have been made.
 4336 .TP
 4337 .B \-\-vforkmany\-vm
 4338 enable detrimental performance virtual memory advice using madvise on
 4339 all pages of the vforked process. Where possible this will try to set
 4340 every page in the new process using the madvise MADV_MERGEABLE,
 4341 MADV_WILLNEED, MADV_HUGEPAGE and MADV_RANDOM flags. Linux only.
 4342 .TP
 4343 .B \-m N, \-\-vm N
 4344 start N workers continuously calling mmap(2)/munmap(2) and writing to the
 4345 allocated memory. Note that this can cause systems to trip the kernel OOM
 4346 killer on Linux systems if there is not enough physical memory and swap
 4347 available.
 4348 .TP
 4349 .B \-\-vm\-bytes N
 4350 mmap N bytes per vm worker, the default is 256MB. One can specify the size
 4351 as % of total available memory or in units of Bytes, KBytes, MBytes and GBytes
 4352 using the suffix b, k, m or g.
 4353 .TP
 4354 .B \-\-vm\-ops N
 4355 stop vm workers after N bogo operations.
 4356 .TP
 4357 .B \-\-vm\-hang N
 4358 sleep N seconds before unmapping memory; the default is no sleep.
 4359 Explicitly specifying 0 will do an infinite wait.
 4360 .TP
 4361 .B \-\-vm\-keep
 4362 do not continually unmap and map memory, just keep on re-writing to it.
 4363 .TP
 4364 .B \-\-vm\-locked
 4365 Lock the pages of the mapped region into memory using mmap MAP_LOCKED (since
 4366 Linux 2.5.37).  This is similar to locking memory as described in mlock(2).
 4367 .TP
 4368 .B \-\-vm\-madvise advice
 4369 Specify the madvise 'advice' option used on the memory mapped regions used in
 4370 the vm stressor. Non-linux systems will only have the 'normal' madvise
 4371 advice, linux systems support 'dontneed', 'hugepage', 'mergeable',
 4372 'nohugepage', 'normal', 'random', 'sequential', 'unmergeable'
 4373 and 'willneed' advice. If this option is not used then the default is to pick
 4374 random madvise advice for each mmap call. See madvise(2) for more details.
 4375 .TP
 4376 .B \-\-vm\-method m
 4377 specify a vm stress method. By default, all the stress methods are exercised
 4378 sequentially, however one can specify just one method to be used if required.
 4379 Each of the vm workers has 3 phases:
 4380 .RS
 4381 .PP
 4382 1. Initialised. The anonymously mapped memory region is set to a known pattern.
 4383 .PP
 4384 2. Exercised. Memory is modified in a known predictable way. Some vm workers
 4385 alter memory sequentially, some use small or large strides to step along memory.
 4386 .PP
 4387 3. Checked. The modified memory is checked to see if it matches the expected
 4388 result.
 4389 .PP
 4390 The vm methods containing 'prime' in their name have a stride of the largest
 4391 prime less than 2^64, allowing them to thoroughly step through memory and
 4392 touch all locations just once while also avoiding touching memory cells
 4393 next to each other. This strategy exercises the cache and page non-locality.
 4394 .PP
 4395 Since the memory being exercised is virtually mapped, there is no
 4396 guarantee of touching page addresses in any particular physical order.  These
 4397 workers should not be used to test that all the system's memory is working
 4398 correctly either; use tools such as memtest86 instead.
 4399 .PP
 4400 The vm stress methods are intended to exercise memory in ways to possibly find
 4401 memory issues and to try to force thermal errors.
 4402 .PP
 4403 Available vm stress methods are described as follows:
 4404 .TS
 4405 expand;
 4406 lB2 lB lB lB
 4407 l l s s.
 4408 Method	Description
 4409 all	T{
 4410 iterate over all the vm stress methods as listed below.
 4411 T}
 4412 flip	T{
 4413 sequentially work through memory 8 times, each time flipping (inverting) just
 4414 one bit in memory. This will effectively invert each byte in 8 passes.
 4415 T}
 4416 galpat-0	T{
 4417 galloping pattern zeros. This sets all bits to 0 and flips just 1 in 4096 bits
 4418 to 1. It then checks to see if the 1s are pulled down to 0 by their neighbours
 4419 or if the neighbours have been pulled up to 1.
 4420 T}
 4421 galpat-1	T{
 4422 galloping pattern ones. This sets all bits to 1 and flips just 1 in 4096 bits
 4423 to 0. It then checks to see if the 0s are pulled up to 1 by their neighbours
 4424 or if the neighbours have been pulled down to 0.
 4425 T}
 4426 gray	T{
 4427 fill the memory with sequential gray codes (these only change 1 bit at a time
 4428 between adjacent bytes) and then check if they are set correctly.
 4429 T}
 4430 incdec	T{
 4431 work sequentially through memory twice, the first pass increments each byte by
 4432 a specific value and the second pass decrements each byte back to the original
 4433 start value. The increment/decrement value changes on each invocation of the
 4434 stressor.
 4435 T}
 4436 inc-nybble	T{
 4437 initialise memory to a set value (that changes on each invocation of the
 4438 stressor) and then sequentially work through each byte incrementing the bottom
 4439 4 bits by 1 and the top 4 bits by 15.
 4440 T}
 4441 rand-set	T{
 4442 sequentially work through memory in 64 bit chunks setting bytes in the chunk
 4443 to the same 8 bit random value.  The random value changes on each chunk.
 4444 Check that the values have not changed.
 4445 T}
 4446 rand-sum	T{
 4447 sequentially set all memory to random values and then summate the number of
 4448 bits that have changed from the original set values.
 4449 T}
 4450 read64	T{
 4451 sequentially read memory using 32 x 64 bit reads per bogo loop. Each loop
 4452 equates to one bogo operation.  This exercises raw memory reads.
 4453 T}
 4454 ror	T{
 4455 fill memory with a random pattern and then sequentially rotate 64 bits of
 4456 memory right by one bit, then check the final load/rotate/stored values.
 4457 T}
 4458 swap	T{
 4459 fill memory in 64 byte chunks with random patterns. Then swap each 64 byte chunk
 4460 with a randomly chosen chunk. Finally, reverse the swap to put the chunks back
 4461 to their original place and check if the data is correct. This exercises
 4462 adjacent and random memory load/stores.
 4463 T}
 4464 move-inv	T{
 4465 sequentially fill memory 64 bits at a time with random values, and
 4466 then check if the memory is set correctly.  Next, sequentially invert each 64
 4467 bit pattern and again check if the memory is set as expected.
 4468 T}
 4469 modulo-x	T{
 4470 fill memory over 23 iterations. Each iteration starts one byte further along
 4471 from the start of the memory and steps along in 23 byte strides. In each
 4472 stride, the first byte is set to a random pattern and all other bytes are set
 4473 to the inverse.  Then it checks to see if the first byte contains the expected
 4474 random pattern. This exercises cache store/reads as well as seeing if
 4475 neighbouring cells influence each other.
 4476 T}
 4477 mscan	T{
 4478 fill each bit in each byte with 1s then check these are set, fill each bit
 4479 in each byte with 0s and check these are clear.
 4480 T}
 4481 prime-0	T{
 4482 iterate 8 times by stepping through memory in very large prime strides clearing
 4483 just one bit at a time in every byte. Then check to see if all bits are set to
 4484 zero.
 4485 T}
 4486 prime-1	T{
 4487 iterate 8 times by stepping through memory in very large prime strides setting
 4488 just one bit at a time in every byte. Then check to see if all bits are set to
 4489 one.
 4490 T}
 4491 prime-gray-0	T{
 4492 first step through memory in very large prime strides clearing just one bit
 4493 (based on a gray code) in every byte. Next, repeat this but clear the other
 4494 7 bits. Then check to see if all bits are set to zero.
 4495 T}
 4496 prime-gray-1	T{
 4497 first step through memory in very large prime strides setting just one bit
 4498 (based on a gray code) in every byte. Next, repeat this but set the other 7
 4499 bits. Then check to see if all bits are set to one.
 4500 T}
 4501 rowhammer	T{
 4502 try to force memory corruption using the rowhammer memory stressor. This
 4503 fetches two 32 bit integers from memory and forces a cache flush on the two
 4504 addresses multiple times. This has been known to force bit flipping on some
 4505 hardware, especially with lower frequency memory refresh cycles.
 4506 T}
 4507 walk-0d	T{
 4508 for each byte in memory, walk through each data line setting them to low (and
 4509 the others are set high) and check that the written value is as expected. This
 4510 checks if any data lines are stuck.
 4511 T}
 4512 walk-1d	T{
 4513 for each byte in memory, walk through each data line setting them to high (and
 4514 the others are set low) and check that the written value is as expected. This
 4515 checks if any data lines are stuck.
 4516 T}
 4517 walk-0a	T{
 4518 in the given memory mapping, work through a range of specially chosen addresses
 4519 working through address lines to see if any address lines are stuck low. This
 4520 works best with physical memory addressing, however, exercising these virtual
 4521 addresses has some value too.
 4522 T}
 4523 walk-1a	T{
 4524 in the given memory mapping, work through a range of specially chosen addresses
 4525 working through address lines to see if any address lines are stuck high. This
 4526 works best with physical memory addressing, however, exercising these virtual
 4527 addresses has some value too.
 4528 T}
 4529 write64	T{
 4530 sequentially write memory using 32 x 64 bit writes per bogo loop. Each loop
 4531 equates to one bogo operation.  This exercises raw memory writes.  Note that
 4532 memory writes are not checked at the end of each test iteration.
 4533 T}
 4534 zero-one	T{
 4535 set all memory bits to zero and then check if any bits are not zero. Next, set
 4536 all the memory bits to one and check if any bits are not one.
 4537 T}
 4538 .TE
 4539 .RE
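.RS
.PP
As an illustrative example only (the sizes, method and duration are arbitrary
values), the vm options above may be combined as follows:
.nf
# 4 vm workers, 1GB each, using the 'flip' method, keeping the mapping
stress\-ng \-\-vm 4 \-\-vm\-bytes 1g \-\-vm\-method flip \-\-vm\-keep \-t 5m
.fi
.RE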
 4540 .TP
 4541 .B \-\-vm\-populate
 4542 populate (prefault) page tables for the memory mappings; this can stress
 4543 swapping. Only available on systems that support MAP_POPULATE (since Linux
 4544 2.5.46).
 4545 .TP
 4546 .B \-\-vm\-addr N
 4547 start N workers that exercise virtual memory addressing using various
 4548 methods to walk through a memory mapped address range. This will exercise
 4549 mapped private addresses from 8MB to 64MB per worker and try to generate
 4550 cache and TLB inefficient addressing patterns. Each method will set the
 4551 memory to a random pattern in a write phase and then sanity check this
 4552 in a read phase.
 4553 .TP
 4554 .B \-\-vm\-addr\-ops N
 4555 stop the vm\-addr workers after N bogo addressing passes.
 4556 .TP
 4557 .B \-\-vm\-addr\-method M
 4558 specify a vm address stress method. By default, all the stress methods are exercised
 4559 sequentially, however one can specify just one method to be used if required.
 4560 .RS
 4561 .PP
 4562 Available vm address stress methods are described as follows:
 4563 .TS
 4564 expand;
 4565 lB2 lB lB lB
 4566 l l s s.
 4567 Method	Description
 4568 all	T{
 4569 iterate over all the vm stress methods as listed below.
 4570 T}
 4571 pwr2	T{
 4572 work through memory addresses in steps of powers of two.
 4573 T}
 4574 pwr2inv	T{
 4575 like pwr2, but with all the relevant address bits inverted.
 4576 T}
 4577 gray	T{
 4578 work through memory with gray coded addresses so that each
 4579 change of address just changes 1 bit compared to the previous
 4580 address.
 4581 T}
 4582 grayinv	T{
 4583 like gray, but with all the relevant address bits inverted,
 4584 hence all bits change apart from 1 in the address range.
 4585 T}
 4586 rev	T{
 4587 work through the address range with the bits in the address
 4588 range reversed.
 4589 T}
 4590 revinv	T{
 4591 like rev, but with all the relevant address bits inverted.
 4592 T}
 4593 inc	T{
 4594 work through the address range forwards sequentially, byte
 4595 by byte.
 4596 T}
 4597 incinv 	T{
 4598 like inc, but with all the relevant address bits inverted.
 4599 T}
 4600 dec	T{
 4601 work through the address range backwards sequentially, byte
 4602 by byte.
 4603 T}
 4604 decinv	T{
 4605 like dec, but with all the relevant address bits inverted.
 4606 T}
 4607 .TE
 4608 .RE
 4609 .TP
 4610 .B \-\-vm\-rw N
 4611 start N workers that transfer memory to/from a parent/child using
 4612 process_vm_writev(2) and process_vm_readv(2). This feature is only
 4613 supported on Linux.  Memory transfers are only verified if the \-\-verify
 4614 option is enabled.
 4615 .TP
 4616 .B \-\-vm\-rw\-ops N
 4617 stop vm\-rw workers after N memory read/writes.
 4618 .TP
 4619 .B \-\-vm\-rw\-bytes N
 4620 mmap N bytes per vm\-rw worker, the default is 16MB. One can specify the size
 4621 as % of total available memory or in units of Bytes, KBytes, MBytes and GBytes
 4622 using the suffix b, k, m or g.
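.RS
.PP
For example (the mapping size and duration below are arbitrary illustrative
values):
.nf
# 2 vm-rw workers on 64MB mappings with verification for 1 minute
stress\-ng \-\-vm\-rw 2 \-\-vm\-rw\-bytes 64m \-\-verify \-t 1m
.fi
.RE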
 4623 .TP
 4624 .B \-\-vm\-segv N
 4625 start N workers that create a child process that unmaps its address space
 4626 causing a SIGSEGV on return from the unmap.
 4627 .TP
 4628 .B \-\-vm\-segv\-ops N
 4629 stop after N bogo vm\-segv SIGSEGV faults.
 4630 .TP
 4631 .B \-\-vm\-splice N
 4632 start N workers that move data from memory to /dev/null through a pipe without
 4633 any copying between kernel address space and user address space using
 4634 vmsplice(2) and splice(2). This is only available for Linux.
 4635 .TP
 4636 .B \-\-vm\-splice-ops N
 4637 stop after N bogo vm\-splice operations.
 4638 .TP
 4639 .B \-\-vm\-splice-bytes N
 4640 transfer N bytes per vmsplice call, the default is 64K. One can specify the
 4641 size as % of total available memory or in units of Bytes, KBytes, MBytes and
 4642 GBytes using the suffix b, k, m or g.
 4643 .TP
 4644 .B \-\-wait N
 4645 start N workers that spawn off two children; one spins in a pause(2) loop, the
 4646 other continually stops and continues the first. The controlling process waits
 4647 on the first child to be resumed by the delivery of SIGCONT using waitpid(2)
 4648 and waitid(2).
 4649 .TP
 4650 .B \-\-wait\-ops N
 4651 stop after N bogo wait operations.
 4652 .TP
 4653 .B \-\-watchdog N
 4654 start N workers that exercise the /dev/watchdog watchdog interface by
 4655 opening it, performing various watchdog specific ioctl(2) commands on the
 4656 device and closing it.  Before closing, the special watchdog magic close
 4657 message is written to the device to try and force it to never trip a
 4658 watchdog reboot after the stressor has been run.  Note that this stressor
 4659 needs to be run as root with the \-\-pathological option and is only
 4660 available on Linux.
 4661 .TP
 4662 .B \-\-watchdog\-ops N
 4663 stop after N bogo operations on the watchdog device.
 4664 .TP
 4665 .B \-\-wcs N
 4666 start N workers that exercise various libc wide character string functions on
 4667 random strings.
 4668 .TP
 4669 .B \-\-wcs-method wcsfunc
 4670 select a specific libc wide character string function to stress. Available
 4671 string functions to stress are: all, wcscasecmp, wcscat, wcschr, wcscoll,
 4672 wcscmp, wcscpy, wcslen, wcsncasecmp, wcsncat, wcsncmp, wcsrchr and wcsxfrm.
 4673 The 'all' method is the default and will exercise all the string methods.
 4674 .TP
 4675 .B \-\-wcs-ops N
 4676 stop after N bogo wide character string operations.
 4677 .TP
 4678 .B \-\-x86syscall N
 4679 start N workers that repeatedly exercise the x86-64 syscall instruction to
 4680 call the getcpu(2), gettimeofday(2) and time(2) system calls using the Linux
 4681 vsyscall handler. Only for Linux.
 4682 .TP
 4683 .B \-\-x86syscall\-ops N
 4684 stop after N x86syscall system calls.
 4685 .TP
 4686 .B \-\-x86syscall\-func F
 4687 Instead of exercising the 3 syscall system calls, just call the syscall
 4688 function F. The function F must be one of getcpu, gettimeofday and time.
 4689 .TP
 4690 .B \-\-xattr N
 4691 start N workers that create, update and delete batches of extended attributes
 4692 on a file.
 4693 .TP
 4694 .B \-\-xattr\-ops N
 4695 stop after N bogo extended attribute operations.
 4696 .TP
 4697 .B \-y N, \-\-yield N
 4698 start N workers that call sched_yield(2). This stressor ensures that at
 4699 least 2 child processes per CPU exercise sched_yield(2) no matter how
 4700 many workers are specified, thus always ensuring rapid context switching.
 4701 .TP
 4702 .B \-\-yield\-ops N
 4703 stop yield stress workers after N sched_yield(2) bogo operations.
 4704 .TP
 4705 .B \-\-zero N
 4706 start N workers reading /dev/zero.
 4707 .TP
 4708 .B \-\-zero\-ops N
 4709 stop zero stress workers after N /dev/zero bogo read operations.
 4710 .TP
 4711 .B \-\-zlib N
 4712 start N workers compressing and decompressing random data using zlib. Each
 4713 worker has two processes, one that compresses random data and pipes it to
 4714 another process that decompresses the data. This stressor exercises CPU,
 4715 cache and memory.
 4716 .TP
 4717 .B \-\-zlib\-ops N
 4718 stop after N bogo compression operations, each bogo compression operation
 4719 is a compression of 64K of random data at the highest compression level.
 4720 .TP
 4721 .B \-\-zlib\-level L
 4722 specify the compression level (0..9), where 0 = no compression, 1 = fastest
 4723 compression and 9 = best compression.
 4724 .TP
 4725 .B \-\-zlib\-method method
 4726 specify the type of random data to send to the zlib library.  By default,
 4727 the data stream is created from a random selection of the different data
 4728 generation processes.  However one can specify just one method to be used if required.
 4729 Available zlib data generation methods are described as follows:
 4730 .TS
 4731 expand;
 4732 lB2 lB lB lB
 4733 l l s s.
 4734 Method	Description
 4735 00ff	T{
 4736 randomly distributed 0x00 and 0xFF values.
 4737 T}
 4738 ascii01	T{
 4739 randomly distributed ASCII 0 and 1 characters.
 4740 T}
 4741 asciidigits	T{
 4742 randomly distributed ASCII digits in the range of 0 and 9.
 4743 T}
 4744 bcd	T{
 4745 packed binary coded decimals, 0..99 packed into 2 4-bit nybbles.
 4746 T}
 4747 binary	T{
 4748 32 bit random numbers.
 4749 T}
 4750 brown	T{
 4751 8 bit brown noise (Brownian motion/Random Walk noise).
 4752 T}
 4753 double	T{
 4754 double precision floating point numbers from sin(\(*h).
 4755 T}
 4756 fixed	T{
 4757 data stream is repeated 0x04030201.
 4758 T}
 4759 gray	T{
 4760 16 bit gray codes generated from an incrementing counter.
 4761 T}
 4762 latin	T{
 4763 Random latin sentences from a sample of Lorem Ipsum text.
 4764 T}
 4765 logmap	T{
 4766 Values generated from a logistic map of the equation
 4767 \[*X]n+1 = r \(mu  \[*X]n \(mu (1 - \[*X]n) where r > \[~~] 3.56994567
 4768 to produce chaotic data. The values are scaled by a large arbitrary
 4769 value and the lower 8 bits of this value are compressed.
 4770 T}
 4771 lfsr32	T{
 4772 Values generated from a 32 bit Galois linear feedback shift register using
 4773 the polynomial  x\[ua]32 + x\[ua]31 + x\[ua]29 + x + 1. This generates a
 4774 ring of  2\[ua]32 - 1 unique values (all 32 bit values except for 0).
 4775 T}
 4776 lrand48	T{
 4777 Uniformly distributed pseudo-random 32 bit values generated from lrand48(3).
 4778 T}
 4779 morse	T{
 4780 Morse code generated from random latin sentences from a sample of Lorem Ipsum text.
 4781 T}
 4782 nybble	T{
 4783 randomly distributed bytes in the range of 0x00 to 0x0f.
 4784 T}
 4785 objcode	T{
 4786 object code selected from a random start point in the stress-ng text segment.
 4787 T}
 4788 parity	T{
 4789 7 bit binary data with 1 parity bit.
 4790 T}
 4791 pink	T{
 4792 pink noise in the range 0..255 generated using the Gardner method with
 4793 the McCartney selection tree optimization. Pink noise is where the power
 4794 spectral density is inversely proportional to the frequency of the signal
 4795 and hence is slightly compressible.
 4796 T}
 4797 random	T{
 4798 segments of the data stream are created by randomly calling the different data generation
 4799 methods.
 4800 T}
 4801 rarely1	T{
 4802 data that has a single 1 in every 32 bits, randomly located.
 4803 T}
 4804 rarely0	T{
 4805 data that has a single 0 in every 32 bits, randomly located.
 4806 T}
 4807 text	T{
 4808 random ASCII text.
 4809 T}
 4810 utf8	T{
 4811 random 8 bit data encoded to UTF-8.
 4812 T}
 4813 zero	T{
 4814 all zeros, compresses very easily.
 4815 T}
 4816 .TE
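.RS
.PP
As an illustrative example only (the method, level and duration are arbitrary
choices), the zlib options above may be combined as follows:
.nf
# 2 zlib workers compressing random ASCII text at level 9 for 1 minute
stress\-ng \-\-zlib 2 \-\-zlib\-method text \-\-zlib\-level 9 \-t 1m
.fi
.RE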
 4817 .TP
 4818 .B \-\-zlib\-window-bits W
 4819 specify the window bits used to set the history buffer size. The value is
 4820 specified as the base two logarithm of the buffer size (e.g. value 9 is 2^9 =
 4821 512 bytes).
 4822 Default is 15.
 4823 .PP
 4824 .RS
 4825 .nf
 4826 Values:
 4827 -8 to -15: raw deflate format
 4828    8 to 15: zlib format
 4829   24 to 31: gzip format
 4830   40 to 47: inflate auto format detection using zlib deflate format
 4831 .fi
 4832 .RE
 4833 .TP
 4834 .B \-\-zlib\-mem-level L
 4835 specify the reserved compression state memory for zlib.
 4836 Default is 8.
 4837 .PP
 4838 .RS
 4839 .nf
 4840 Values:
 4841 1 = minimum memory usage
 4842 9 = maximum memory usage
 4843 .fi
 4844 .RE
 4845 .TP
 4846 .B \-\-zlib\-strategy S
 4847 specifies the strategy to use when deflating data. This is used to tune the
 4848 compression algorithm.
 4849 Default is 0.
 4850 .PP
 4851 .RS
 4852 .nf
 4853 Values:
 4854 0: used for normal data (Z_DEFAULT_STRATEGY)
 4855 1: for data generated by a filter or predictor (Z_FILTERED)
 4856 2: forces huffman encoding (Z_HUFFMAN_ONLY)
 4857 3: Limit match distances to one run-length-encoding (Z_RLE)
 4858 4: prevents dynamic huffman codes (Z_FIXED)
 4859 .fi
 4860 .RE
 4861 .TP
 4862 .B \-\-zlib\-stream-bytes S
 4863 specify the number of bytes to deflate until deflate should finish the block
 4864 and return with Z_STREAM_END. One can specify the size in units of Bytes,
 4865 KBytes, MBytes and GBytes using the suffix b, k, m or g.
 4866 Default is 0 which creates an endless stream until the stressor ends.
 4867 .PP
 4868 .RS
 4869 .nf
 4870 Values:
 4871 0: creates an endless deflate stream until stressor stops
 4872 n: creates a stream of n bytes over and over again.
 4873    Each block will be closed with Z_STREAM_END.
 4874 .fi
 4875 .RE
 4876 .TP
 4878 .B \-\-zombie N
 4879 start N workers that create zombie processes. This will rapidly try to create
 4880 a default of 8192 child processes that immediately die and wait in a zombie
 4881 state until they are reaped.  Once the maximum number of processes is reached
 4882 (or fork fails because one has reached the maximum allowed number of children)
 4883 the oldest child is reaped and a new process is then created in a first-in
 4884 first-out manner, and then repeated.
 4885 .TP
 4886 .B \-\-zombie\-ops N
 4887 stop zombie stress workers after N bogo zombie operations.
 4888 .TP
 4889 .B \-\-zombie\-max N
 4890 try to create as many as N zombie processes. This may not be reached if the
 4891 system limit is less than N.
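.RS
.PP
An illustrative example (the zombie limit and duration are arbitrary values):
.nf
# 1 zombie worker creating up to 100000 zombie processes for 1 minute
stress\-ng \-\-zombie 1 \-\-zombie\-max 100000 \-t 1m
.fi
.RE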
 4892 .LP
 4893 .SH EXAMPLES
 4894 .LP
 4895 stress\-ng \-\-vm 8 \-\-vm\-bytes 80% -t 1h
 4896 .IP
 4897 run 8 virtual memory stressors that combined use 80% of the available memory
 4898 for 1 hour. Thus each stressor uses 10% of the available memory.
 4899 .LP
 4900 stress\-ng \-\-cpu 4 \-\-io 2 \-\-vm 1 \-\-vm\-bytes 1G \-\-timeout 60s
 4901 .IP
 4902 runs for 60 seconds with 4 cpu stressors, 2 io stressors and 1 vm stressor
 4903 using 1GB of virtual memory.
 4904 .LP
 4905 stress\-ng \-\-iomix 2 \-\-iomix\-bytes 10% -t 10m
 4906 .IP
 4907 runs 2 instances of the mixed I/O stressors using a total of 10% of the
 4908 available file system space for 10 minutes. Each stressor will use 5% of the
 4909 available file system space.
 4910 .LP
 4911 stress\-ng \-\-cyclic 1 \-\-cyclic\-dist 2500 \-\-cyclic\-method clock_ns \-\-cyclic\-prio 100 \-\-cyclic\-sleep 10000 \-\-hdd 0 -t 1m
 4912 .IP
 4913 measures real time scheduling latencies created by the hdd stressor. This
 4914 uses the high resolution nanosecond clock to measure latencies during
 4915 sleeps of 10,000 nanoseconds. At the end of 1 minute of stressing, the
 4916 latency distribution with 2500 ns intervals will be displayed. NOTE: this
 4917 must be run with the CAP_SYS_NICE capability to enable the real time scheduling
 4918 to get accurate measurements.
 4919 .LP
 4920 stress\-ng \-\-cpu 8 \-\-cpu\-ops 800000
 4921 .IP
 4922 runs 8 cpu stressors and stops after 800000 bogo operations.
 4923 .LP
 4924 stress\-ng \-\-sequential 2 \-\-timeout 2m \-\-metrics
 4925 .IP
 4926 run 2 simultaneous instances of all the stressors sequentially one by one,
 4927 each for 2 minutes and summarise with performance metrics at the end.
 4928 .LP
 4929 stress\-ng \-\-cpu 4 \-\-cpu-method fft \-\-cpu-ops 10000 \-\-metrics\-brief
 4930 .IP
 4931 run 4 FFT cpu stressors, stop after 10000 bogo operations and produce a
 4932 summary just for the FFT results.
 4933 .LP
 4934 stress\-ng \-\-cpu -1 \-\-cpu-method all \-t 1h \-\-cpu\-load 90
 4935 .IP
 4936 run cpu stressors on all online CPUs working through all the available CPU
 4937 stressors for 1 hour, loading the CPUs at 90% load capacity.
 4938 .LP
 4939 stress\-ng \-\-cpu 0 \-\-cpu-method all \-t 20m
 4940 .IP
 4941 run cpu stressors on all configured CPUs working through all the available CPU
 4942 stressors for 20 minutes.
 4943 .LP
 4944 stress\-ng \-\-all 4 \-\-timeout 5m
 4945 .IP
 4946 run 4 instances of all the stressors for 5 minutes.
 4947 .LP
 4948 stress\-ng \-\-random 64
 4949 .IP
 4950 run 64 stressors that are randomly chosen from all the available stressors.
 4951 .LP
 4952 stress\-ng \-\-cpu 64 \-\-cpu\-method all \-\-verify \-t 10m \-\-metrics\-brief
 4953 .IP
 4954 run 64 instances of all the different cpu stressors and verify that the
 4955 computations are correct for 10 minutes with a bogo operations summary at the
 4956 end.
 4957 .LP
 4958 stress\-ng \-\-sequential -1 \-t 10m
 4959 .IP
 4960 run all the stressors one by one for 10 minutes, with the number of instances
 4961 of each stressor matching the number of online CPUs.
 4962 .LP
 4963 stress\-ng \-\-sequential 8 \-\-class io \-t 5m \-\-times
 4964 .IP
 4965 run all the stressors in the io class one by one for 5 minutes each, with 8
 4966 instances of each stressor running concurrently and show overall time
 4967 utilisation statistics at the end of the run.
 4968 .LP
 4969 stress\-ng \-\-all -1 \-\-maximize \-\-aggressive
 4970 .IP
 4971 run all the stressors (1 instance of each per online CPU) simultaneously, maximize
 4972 the settings (memory sizes, file allocations, etc.) and select the most
 4973 demanding/aggressive options.
 4974 .LP
 4975 stress\-ng \-\-random 32 \-x numa,hdd,key
 4976 .IP
 4977 run 32 randomly selected stressors and exclude the numa, hdd and key stressors.
 4978 .LP
 4979 stress\-ng \-\-sequential 4 \-\-class vm \-\-exclude bigheap,brk,stack
 4980 .IP
 4981 run 4 instances of the VM stressors one after another, excluding the
 4982 bigheap, brk and stack stressors.
 4983 .LP
 4984 stress\-ng \-\-taskset 0,2-3 \-\-cpu 3
 4985 .IP
 4986 run 3 instances of the CPU stressor and pin them to CPUs 0, 2 and 3.
 4987 .SH EXIT STATUS
 4988 .TS
 4989 cBw(10) lBx
 4990 c l.
 4991 Status	Description
 4992 0	T{
 4993 Success.
 4994 T}
 4995 1	T{
 4996 Error; incorrect user options or a fatal resource issue in the stress-ng
 4997 stressor harness (for example, out of memory).
 4998 T}
 4999 2	T{
 5000 One or more stressors failed.
 5001 T}
 5002 3	T{
 5003 One or more stressors failed to initialise because of lack of resources,
 5004 for example ENOMEM (no memory), ENOSPC (no space on file system) or a
 5005 missing or unimplemented system call.
 5006 T}
 5007 4	T{
 5008 One or more stressors were not implemented on a specific architecture
 5009 or operating system.
 5010 T}
 5011 5	T{
 5012 A stressor has been killed by an unexpected signal.
 5013 T}
 5014 6	T{
 5015 A stressor exited by exit(2) which was not expected and timing metrics
 5016 could not be gathered.
 5017 T}
 5018 7	T{
 5019 The bogo ops metrics may be untrustworthy. This is most likely to occur when
 5020 a stress test is terminated during the update of a bogo-ops counter such
 5021 as when it has been OOM killed. A less likely reason is that the counter
 5022 ready indicator has been corrupted.
 5023 T}
 5024 .TE
 5025 .SH BUGS
 5026 File bug reports at:
 5027   https://launchpad.net/ubuntu/+source/stress\-ng/+filebug
 5028 .SH SEE ALSO
 5029 .BR cpuburn (1),
 5030 .BR perf (1),
 5031 .BR stress (1),
 5032 .BR taskset (1)
 5033 .SH AUTHOR
 5034 stress\-ng was written by Colin King <colin.king@canonical.com> and
 5035 is a clean room re-implementation and extension of the original
 5036 stress tool by Amos Waterland. Thanks also for
 5037 contributions from Abdul Haleem, Adrian Ratiu, André Wild, Baruch Siach,
 5038 Carlos Santos, Christian Ehrhardt, Chunyu Hu, Danilo Krummrich,
 5039 David Turner, Dominik B Czarnota, Fabien Malfoy, Fabrice Fontaine,
 5040 Helmut Grohne, James Hunt, James Wang, Jianshen Liu, Jim Rowan,
 5041 Joseph DeVincentis, Khalid Elmously, Khem Raj, Luca Pizzamiglio,
 5042 Luis Henriques, Manoj Iyer, Matthew Tippett, Mauricio Faria de Oliveira,
 5043 Maxime Chevallier, Piyush Goyal, Ralf Ramsauer, Rob Colclaser,
 5044 Thadeu Lima de Souza Cascardo, Thia Wyrod, Tim Gardner, Tim Orling,
 5045 Tommi Rantala, Witold Baryluk, Zhiyi Sun and others.
 5046 .SH NOTES
 5047 Sending a SIGALRM, SIGINT or SIGHUP to stress-ng causes it to
 5048 terminate all the stressor processes and ensures temporary files and
 5049 shared memory segments are removed cleanly.
 5050 .PP
 5051 Sending a SIGUSR2 to stress-ng will dump out the current load average
 5052 and memory statistics.
 5053 .PP
 5054 Note that the stress\-ng cpu, io, vm and hdd tests are different
 5055 implementations of the original stress
 5056 tests and hence may produce different stress characteristics.
 5057 stress\-ng does not support any GPU stress tests.
 5058 .PP
 5059 The bogo operations metrics may change with each release because of bug
 5060 fixes to the code, new features, compiler optimisations or changes in system
 5061 call performance.
 5062 .SH COPYRIGHT
 5063 Copyright \(co 2013-2021 Canonical Ltd.
 5064 .br
 5065 This is free software; see the source for copying conditions.  There is NO
 5066 warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.