"Fossies" - the Fresh Open Source Software Archive

# stress-ng-0.09.56/example-jobs/vm.job

#
# vm class stressors:
#   various options have been commented out; one can remove the
#   preceding comment character to enable these options if required.

#
# run the following tests in parallel or sequentially
#
run sequential
# run parallel
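#
# this whole job file can be passed to stress-ng with the --job
# option, for example:
#   stress-ng --job vm.job
#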

#
# aggressive:
#   enables more file, cache and memory aggressive options. This may
#   slow tests down, increase latencies and reduce the number of
#   bogo ops as well as changing the balance of user time vs system
#   time used depending on the type of stressor being used.
#
# aggressive

#
# ignite-cpu:
#   alter kernel controls to try to maximize CPU performance. This
#   requires root privilege to alter various /sys interface controls.
#   Currently this only works for Intel P-State enabled x86 systems
#   on Linux.
#
# ignite-cpu

#
# keep-name:
#   by default, stress-ng will attempt to change the name of the
#   stress processes according to their functionality; this option
#   disables this and keeps the process name the same as that of the
#   parent process, that is, stress-ng.
#
# keep-name

#
# metrics-brief:
#   enable metrics and only output metrics that are non-zero.
#
metrics-brief
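# metrics		# alternatively, show all metrics, including zero values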

#
# verbose:
#   show all debug, warning and normal information output.
#
verbose

#
# run each of the tests for 60 seconds
#   stop the stress tests after N seconds. One can also specify the
#   units of time in seconds, minutes, hours, days or years with the
#   suffix s, m, h, d or y.
#
timeout 60s
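# timeout 2m		# for example, the limit could also be given in minutes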

#
# per stressor options start here
#


#
# bigheap stressor options:
#   start N workers that grow their heaps by reallocating memory. If
#   the out of memory killer (OOM) on Linux kills the worker or the
#   allocation fails then the allocating process starts all over
#   again. Note that the OOM adjustment for the worker is set so
#   that the OOM killer will treat these workers as the first
#   candidate processes to kill.
#
bigheap 0		# 0 means 1 stressor per CPU
# bigheap-ops 1000000	# stop after 1000000 bogo ops
# bigheap-growth 64K	# grow heap by 64K each loop iteration
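#
# for reference, the bigheap entries above roughly correspond to the
# equivalent command line options:
#   stress-ng --bigheap 0 --bigheap-ops 1000000 --bigheap-growth 64K
#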

#
# brk stressor options:
#   start N workers that grow the data segment by one page at a time
#   using multiple brk(2) calls. Each successfully allocated new
#   page is touched to ensure it is resident in memory. If an out
#   of memory condition occurs then the test will reset the data
#   segment to the point before it started and repeat the data
#   segment resizing over again. The process adjusts the out of
#   memory setting so that it may be killed by the out of memory
#   (OOM) killer before other processes. If it is killed by the OOM
#   killer then it will be automatically re-started by a monitoring
#   parent process.
#
brk 0			# 0 means 1 stressor per CPU
# brk-ops 1000000	# stop after 1000000 bogo ops
# brk-notouch		# don't touch allocated pages

#
# madvise stressor options:
#   start N workers that apply random madvise(2) advise settings on
#   pages of a 4MB file backed shared memory mapping.
#
madvise 0		# 0 means 1 stressor per CPU
# madvise-ops 1000000	# stop after 1000000 bogo ops

#
# malloc stressor options:
#   start N workers continuously calling malloc(3), calloc(3),
#   realloc(3) and free(3). By default, up to 65536 allocations can
#   be active at any point, but this can be altered with the
#   --malloc-max option. Allocation, reallocation and freeing are
#   chosen at random; 50% of the time memory is allocated (via
#   malloc, calloc or realloc) and 50% of the time allocations are
#   free'd. Allocation sizes are also random, with the maximum
#   allocation size controlled by the --malloc-bytes option, the
#   default size being 64K. The worker is re-started if it is killed
#   by the out of memory (OOM) killer.
#
malloc 0		# 0 means 1 stressor per CPU
# malloc-bytes 64K	# maximum allocation chunk size
# malloc-max 65536	# maximum number of allocations of chunks
# malloc-ops 1000000	# stop after 1000000 bogo ops
# malloc-thresh 1M	# use mmap when allocation exceeds this size
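#
# note that with the defaults above a single malloc worker could, in
# the worst case, hold about 65536 x 64K = 4G of allocations at once
#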

#
# mlock stressor options:
#   start N workers that lock and unlock memory mapped pages using
#   mlock(2), munlock(2), mlockall(2) and munlockall(2). This is
#   achieved by mapping three contiguous pages and then locking the
#   second page, hence ensuring non-contiguous pages are locked.
#   This is then repeated until the maximum allowed mlocks or a
#   maximum of 262144 mappings are made. Next, all future mappings
#   are mlocked and the worker attempts to map 262144 pages, then
#   all pages are munlocked and the pages are unmapped.
#
mlock 0			# 0 means 1 stressor per CPU
# mlock-ops 1000000	# stop after 1000000 bogo ops

#
# mmap stressor options:
#   start N workers continuously calling mmap(2)/munmap(2). The
#   initial mapping is a large chunk (size specified by --mmap-bytes)
#   followed by pseudo-random 4K unmappings, then pseudo-random 4K
#   mappings, and then linear 4K unmappings. Note that this can
#   cause systems to trip the kernel OOM killer on Linux if
#   insufficient physical memory and swap are available. The
#   MAP_POPULATE option is used to populate pages into memory on
#   systems that support this. By default, anonymous mappings are
#   used, however, the --mmap-file and --mmap-async options allow
#   one to perform file based mappings if desired.
#
mmap 0			# 0 means 1 stressor per CPU
# mmap-ops 1000000	# stop after 1000000 bogo ops
# mmap-async		# msync on each page when using file mmaps
# mmap-bytes 256M	# allocate 256M per mmap stressor
# mmap-file		# enable file based memory mapping
# mmap-mprotect		# twiddle page protection settings

#
# mmapfork stressor options:
#   start N workers that each fork off 32 child processes, each of
#   which tries to allocate some of the free memory left in the
#   system (while trying to avoid any swapping). The child processes
#   then hint that the allocation will be needed with madvise(2),
#   memset it to zero and hint that it is no longer needed with
#   madvise before exiting. This produces significant amounts of VM
#   activity and a lot of cache misses, with minimal swapping.
#
mmapfork 0		# 0 means 1 stressor per CPU
# mmapfork-ops 1000000	# stop after 1000000 bogo ops

#
# mmapmany stressor options:
#   start N workers that attempt to create the maximum allowed
#   per-process memory mappings. This is achieved by mapping 3
#   contiguous pages and then unmapping the middle page, hence
#   splitting the mapping into two. This is then repeated until the
#   maximum allowed mappings or a maximum of 262144 mappings are
#   made.
#
mmapmany 0		# 0 means 1 stressor per CPU
# mmapmany-ops 1000000	# stop after 1000000 bogo ops

#
# mremap stressor options:
#   start N workers continuously calling mmap(2), mremap(2) and
#   munmap(2). The initial anonymous mapping is a large chunk (size
#   specified by --mremap-bytes) and then iteratively halved in size
#   by remapping all the way down to a page size and then back up to
#   the original size. This worker is only available for Linux.
#
mremap 0		# 0 means 1 stressor per CPU
# mremap-ops 1000000	# stop after 1000000 bogo ops
# mremap-bytes 256M	# allocate 256M per mremap stressor

#
# msync stressor options:
#   start N stressors that msync data from a file backed memory
#   mapping from memory back to the file and msync modified data
#   from the file back to the mapped memory. This exercises the
#   msync(2) MS_SYNC and MS_INVALIDATE sync operations.
#
msync 0			# 0 means 1 stressor per CPU
# msync-ops 1000000	# stop after 1000000 bogo ops
# msync-bytes 256M	# allocate 256M per msync stressor

#
# shm stressor options:
#   start N workers that open and allocate shared memory objects
#   using the POSIX shared memory interfaces. By default, the test
#   will repeatedly create and destroy 32 shared memory objects,
#   each of which is 8MB in size.
#
shm 0			# 0 means 1 stressor per CPU
# shm-ops 1000000	# stop after 1000000 bogo ops
# shm-bytes 8M		# size of each shared memory object
# shm-objs 32		# number of shared memory objects created

#
# shm-sysv stressor options:
#   start N workers that allocate shared memory using the System V
#   shared memory interface. By default, the test will repeatedly
#   create and destroy 8 shared memory segments, each of which is
#   8MB in size.
#
shm-sysv 0		# 0 means 1 stressor per CPU
# shm-sysv-ops 1000000	# stop after 1000000 bogo ops
# shm-sysv-bytes 8M	# size of each shared memory segment
# shm-sysv-segs 32	# number of shared memory segments created

#
# stack stressor options:
#   start N workers that rapidly cause and catch stack overflows by
#   use of alloca(3).
#
stack 0			# 0 means 1 stressor per CPU
# stack-ops 1000000	# stop after 1000000 bogo ops
# stack-fill		# zero stack to force pages in

#
# stackmmap stressor options:
#   start N workers that use a 2MB stack that is memory mapped onto
#   a temporary file. A recursive function works down the stack and
#   flushes dirty stack pages back to the memory mapped file using
#   msync(2) until the end of the stack is reached (stack overflow).
#   This exercises dirty page and stack exception handling.
#
stackmmap 0		# 0 means 1 stressor per CPU
# stackmmap-ops 1000000	# stop after 1000000 bogo ops

#
# tmpfs stressor options:
#   start N workers that create a temporary file on an available
#   tmpfs file system and perform various file based mmap operations
#   upon it.
#
tmpfs 0			# 0 means 1 stressor per CPU
# tmpfs-ops 1000000	# stop after 1000000 bogo ops

#
# userfaultfd stressor options:
#   start N workers that generate write page faults on a small
#   anonymously mapped memory region and handle these faults using
#   user space fault handling via the userfaultfd mechanism. This
#   will generate a large quantity of major page faults and also
#   context switches during the handling of the page faults. (Linux
#   only).
#
userfaultfd 0		# 0 means 1 stressor per CPU
# userfaultfd-ops 1000000 # stop after 1000000 bogo ops
# userfaultfd-bytes 16M	# size of memory mapped region to fault

#
# vm stressor options:
#   start N workers continuously calling mmap(2)/munmap(2) and
#   writing to the allocated memory. Note that this can cause
#   systems to trip the kernel OOM killer on Linux if insufficient
#   physical memory and swap are available.
#
vm 0			# 0 means 1 stressor per CPU
# vm-ops 1000000	# stop after 1000000 bogo ops
# vm-bytes 256M		# size of each vm mmapping
# vm-hang 0		# sleep 0 seconds before unmapping
# vm-keep		# don't keep unmapping and remapping
# vm-locked		# lock pages into memory using MAP_LOCKED
# vm-method all		# vm data exercising method; use all types
# vm-populate		# populate (prefault) pages into memory
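#
# vm-bytes can also be expressed as a percentage of the available
# memory, for example:
# vm-bytes 40%		# size each vm mapping as 40% of available memory
#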

#
# vm-rw stressor options:
#   start N workers that transfer memory to/from a parent/child
#   using process_vm_writev(2) and process_vm_readv(2). This
#   feature is only supported on Linux. Memory transfers are only
#   verified if the --verify option is enabled.
#
vm-rw 0			# 0 means 1 stressor per CPU
# vm-rw-ops 1000000	# stop after 1000000 bogo ops
# vm-rw-bytes 16M	# size of each mmap'd region per stressor
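#
# memory transfers are only checked if verification is enabled, for
# example by also adding the global option:
# verify
#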

#
# vm-splice stressor options:
#   move data from memory to /dev/null through a pipe without any
#   copying between kernel address space and user address space
#   using vmsplice(2) and splice(2). This is only available for
#   Linux.
#
vm-splice 0		# 0 means 1 stressor per CPU
# vm-splice-ops 1000000	# stop after 1000000 bogo ops
# vm-splice-bytes 64K	# transfer 64K per vmsplice call
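
#
# alternatively, the vm class stressors can be run directly from the
# command line with something like:
#   stress-ng --class vm --sequential 0 --timeout 60s --metrics-brief
#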