Member "haproxy-2.0.0/doc/management.txt" (16 Jun 2019, 156334 Bytes) of package /linux/misc/haproxy-2.0.0.tar.gz:


                             ------------------------
                             HAProxy Management Guide
                             ------------------------
                                   version 2.0

This document describes how to start, stop, manage, and troubleshoot HAProxy,
as well as some known limitations and traps to avoid. It does not describe how
to configure it (for this please read configuration.txt).

Note to documentation contributors :
    This document is formatted with 80 columns per line, with an even number of
    spaces for indentation and without tabs. Please follow these rules strictly
    so that it remains easily printable everywhere. If you add sections, please
    update the summary below for easier searching.

Summary
-------

1.    Prerequisites
2.    Quick reminder about HAProxy's architecture
3.    Starting HAProxy
4.    Stopping and restarting HAProxy
5.    File-descriptor limitations
6.    Memory management
7.    CPU usage
8.    Logging
9.    Statistics and monitoring
9.1.      CSV format
9.2.      Typed output format
9.3.      Unix Socket commands
9.4.      Master CLI
10.   Tricks for easier configuration management
11.   Well-known traps to avoid
12.   Debugging and performance issues
13.   Security considerations

1. Prerequisites
----------------

In this document it is assumed that the reader has sufficient administration
skills on a UNIX-like operating system, uses the shell on a daily basis and is
familiar with troubleshooting utilities such as strace and tcpdump.

2. Quick reminder about HAProxy's architecture
----------------------------------------------

HAProxy is a multi-threaded, event-driven, non-blocking daemon. This means it
uses event multiplexing to schedule all of its activities instead of relying on
the system to schedule between multiple activities. Most of the time it runs as
a single process, so the output of "ps aux" on a system will report only one
"haproxy" process, unless a soft reload is in progress and an older process is
finishing its job in parallel to the new one. It is thus always easy to trace
its activity using the strace utility. In order to scale with the number of
available processors, by default haproxy will start one worker thread per
processor it is allowed to run on. Unless explicitly configured differently,
the incoming traffic is spread over all these threads, all running the same
event loop. Great care is taken to limit inter-thread dependencies to the
strict minimum, so as to try to achieve near-linear scalability. This has some
impacts such as the fact that a given connection is served by a single thread.
Thus in order to use all available processing capacity, it is needed to have at
least as many connections as there are threads, which is almost always granted.

HAProxy is designed to isolate itself into a chroot jail during startup, where
it cannot perform any file-system access at all. This is also true for the
libraries it depends on (eg: libc, libssl, etc). The immediate effect is that
a running process will not be able to reload a configuration file to apply
changes, instead a new process will be started using the updated configuration
file. Some other less obvious effects are that some timezone files or resolver
files the libc might attempt to access at run time will not be found, though
this should generally not happen as they're not needed after startup. A nice
consequence of this principle is that the HAProxy process is totally stateless,
and no cleanup is needed after it's killed, so any killing method that works
will do the right thing.

HAProxy doesn't write log files, but it relies on the standard syslog protocol
to send logs to a remote server (which is often located on the same system).

HAProxy uses its internal clock to enforce timeouts. It is derived from the
system's time, but with unexpected drift corrected. This is done by limiting
the time spent waiting in poll() for an event, and measuring the time it really
took. In practice it never waits more than one second. This explains why, when
running strace over a completely idle process, periodic calls to poll() (or any
of its variants) surrounded by two gettimeofday() calls are noticed. They are
normal, completely harmless and so cheap that the load they imply is totally
undetectable at the system scale, so there's nothing abnormal there. Example :

  16:35:40.002320 gettimeofday({1442759740, 2605}, NULL) = 0
  16:35:40.002942 epoll_wait(0, {}, 200, 1000) = 0
  16:35:41.007542 gettimeofday({1442759741, 7641}, NULL) = 0
  16:35:41.007998 gettimeofday({1442759741, 8114}, NULL) = 0
  16:35:41.008391 epoll_wait(0, {}, 200, 1000) = 0
  16:35:42.011313 gettimeofday({1442759742, 11411}, NULL) = 0

HAProxy is a TCP proxy, not a router. It deals with established connections that
have been validated by the kernel, and not with packets of any form nor with
sockets in other states (eg: no SYN_RECV nor TIME_WAIT), though their existence
may prevent it from binding a port. It relies on the system to accept incoming
connections and to initiate outgoing connections. An immediate effect of this is
that there is no relation between packets observed on the two sides of a
forwarded connection, which can be of different sizes, numbers and even
families. Since a connection may only be accepted from a socket in LISTEN
state, all the sockets it is listening to are necessarily visible using the
"netstat" utility to show listening sockets. Example :

  # netstat -ltnp
  Active Internet connections (only servers)
  Proto Recv-Q Send-Q Local Address    Foreign Address   State    PID/Program name
  tcp        0      0     *                 LISTEN   1629/sshd
  tcp        0      0    *                 LISTEN   2847/haproxy
  tcp        0      0     *                 LISTEN   2847/haproxy

3. Starting HAProxy
-------------------

HAProxy is started by invoking the "haproxy" program with a number of arguments
passed on the command line. The actual syntax is :

  $ haproxy [<options>]*

where [<options>]* is any number of options. An option always starts with '-'
followed by one or more letters, and possibly followed by one or multiple extra
arguments. Without any option, HAProxy displays the help page with a reminder
about supported options. Available options may vary slightly based on the
operating system. A fair number of these options overlap with an equivalent one
in the "global" section. In this case, the command line always has precedence
over the configuration file, so that the command line can be used to quickly
enforce some settings without touching the configuration files. The current
list of options is :

  -- <cfgfile>* : all the arguments following "--" are paths to configuration
    file/directory to be loaded and processed in the declaration order. It is
    mostly useful when relying on the shell to load many files that are
    numerically ordered. See also "-f". The difference between "--" and "-f" is
    that one "-f" must be placed before each file name, while a single "--" is
    needed before all file names. Both options can be used together, the
    command line ordering still applies. When more than one file is specified,
    each file must start on a section boundary, so the first keyword of each
    file must be one of "global", "defaults", "peers", "listen", "frontend",
    "backend", and so on. A file cannot contain just a server list for example.

  -f <cfgfile|cfgdir> : adds <cfgfile> to the list of configuration files to be
    loaded. If <cfgdir> is a directory, all the files (and only files) it
    contains are added in lexical order (using LC_COLLATE=C) to the list of
    configuration files to be loaded ; only files with ".cfg" extension are
    added, only non hidden files (not prefixed with ".") are added.
    Configuration files are loaded and processed in their declaration order.
    This option may be specified multiple times to load multiple files. See
    also "--". The difference between "--" and "-f" is that one "-f" must be
    placed before each file name, while a single "--" is needed before all file
    names. Both options can be used together, the command line ordering still
    applies. When more than one file is specified, each file must start on a
    section boundary, so the first keyword of each file must be one of
    "global", "defaults", "peers", "listen", "frontend", "backend", and so on.
    A file cannot contain just a server list for example.

  -C <dir> : changes to directory <dir> before loading configuration
    files. This is useful when using relative paths. Beware when using
    wildcards after "--", which are in fact expanded by the shell before
    starting haproxy.

  -D : start as a daemon. The process detaches from the current terminal after
    forking, and errors are not reported anymore in the terminal. It is
    equivalent to the "daemon" keyword in the "global" section of the
    configuration. It is recommended to always force it in any init script so
    that a faulty configuration doesn't prevent the system from booting.

  -L <name> : change the local peer name to <name>, which defaults to the local
    hostname. This is used only with peers replication. You can use the
    variable $HAPROXY_LOCALPEER in the configuration file to reference the
    peer name.

  -N <limit> : sets the default per-proxy maxconn to <limit> instead of the
    builtin default value (usually 2000). Only useful for debugging.

  -V : enable verbose mode (disables quiet mode). Reverts the effect of "-q" or
    "quiet".

  -W : master-worker mode. It is equivalent to the "master-worker" keyword in
    the "global" section of the configuration. This mode will launch a "master"
    which will monitor the "workers". Using this mode, you can reload HAProxy
    directly by sending a SIGUSR2 signal to the master. The master-worker mode
    is compatible with either the foreground or daemon mode. It is
    recommended to use this mode with multiprocess and systemd.

  -Ws : master-worker mode with support of `notify` type of systemd service.
    This option is only available when HAProxy was built with `USE_SYSTEMD`
    build option enabled.

  -c : only performs a check of the configuration files and exits before trying
    to bind. The exit status is zero if everything is OK, or non-zero if an
    error is encountered.

  -d : enable debug mode. This disables daemon mode, forces the process to stay
    in foreground and to show incoming and outgoing events. It is equivalent to
    the "global" section's "debug" keyword. It must never be used in an init
    script.

  -dG : disable use of getaddrinfo() to resolve host names into addresses. It
    can be used when suspecting that getaddrinfo() doesn't work as expected.
    This option was made available because many bogus implementations of
    getaddrinfo() exist on various systems and cause anomalies that are
    difficult to troubleshoot.

  -dM[<byte>] : forces memory poisoning, which means that each and every
    memory region allocated with malloc() or pool_alloc() will be filled with
    <byte> before being passed to the caller. When <byte> is not specified, it
    defaults to 0x50 ('P'). While this slightly slows down operations, it is
    useful to reliably trigger issues resulting from missing initializations in
    the code that cause random crashes. Note that -dM0 has the effect of
    turning any malloc() into a calloc(). In any case if a bug appears or
    disappears when using this option it means there is a bug in haproxy, so
    please report it.

  -dS : disable use of the splice() system call. It is equivalent to the
    "global" section's "nosplice" keyword. This may be used when splice() is
    suspected to behave improperly or to cause performance issues, or when
    using strace to see the forwarded data (which do not appear when using
    splice()).

  -dV : disable SSL verify on the server side. It is equivalent to having
    "ssl-server-verify none" in the "global" section. This is useful when
    trying to reproduce production issues out of the production
    environment. Never use this in an init script as it degrades SSL security
    to the servers.

  -db : disable background mode and multi-process mode. The process remains in
    foreground. It is mainly used during development or during small tests, as
    Ctrl-C is enough to stop the process. Never use it in an init script.

  -de : disable the use of the "epoll" poller. It is equivalent to the "global"
    section's keyword "noepoll". It is mostly useful when suspecting a bug
    related to this poller. On systems supporting epoll, the fallback will
    generally be the "poll" poller.

  -dk : disable the use of the "kqueue" poller. It is equivalent to the
    "global" section's keyword "nokqueue". It is mostly useful when suspecting
    a bug related to this poller. On systems supporting kqueue, the fallback
    will generally be the "poll" poller.

  -dp : disable the use of the "poll" poller. It is equivalent to the "global"
    section's keyword "nopoll". It is mostly useful when suspecting a bug
    related to this poller. On systems supporting poll, the fallback will
    generally be the "select" poller, which cannot be disabled and is limited
    to 1024 file descriptors.

  -dr : ignore server address resolution failures. It is very common when
    validating a configuration out of production not to have access to the same
    resolvers and to fail on server address resolution, making it difficult to
    test a configuration. This option simply appends the "none" method to the
    list of address resolution methods for all servers, ensuring that even if
    the libc fails to resolve an address, the startup sequence is not
    interrupted.

  -m <limit> : limit the total allocatable memory to <limit> megabytes across
    all processes. This may cause some connection refusals or some slowdowns
    depending on the amount of memory needed for normal operations. This is
    mostly used to force the processes to work in a constrained resource usage
    scenario. It is important to note that the memory is not shared between
    processes, so in a multi-process scenario, this value is first divided by
    global.nbproc before forking.

  -n <limit> : limits the per-process connection limit to <limit>. This is
    equivalent to the global section's keyword "maxconn". It has precedence
    over this keyword. This may be used to quickly force lower limits to avoid
    a service outage on systems where resource limits are too low.

  -p <file> : write all processes' pids into <file> during startup. This is
    equivalent to the "global" section's keyword "pidfile". The file is opened
    before entering the chroot jail, and after doing the chdir() implied by
    "-C". Each pid appears on its own line.

  -q : set "quiet" mode. This disables some messages during the configuration
    parsing and during startup. It can be used in combination with "-c" to
    just check if a configuration file is valid or not.

  -S <bind>[,bind_options...]: in master-worker mode, bind a master CLI, which
    allows access to every process, running or leaving ones.
    For security reasons, it is recommended to bind the master CLI to a local
    UNIX socket. The bind options are the same as the keyword "bind" in
    the configuration file with words separated by commas instead of spaces.

    Note that this socket can't be used to retrieve the listening sockets from
    an old process during a seamless reload.

  -sf <pid>* : send the "finish" signal (SIGUSR1) to older processes after boot
    completion to ask them to finish what they are doing and to leave. <pid>
    is a list of pids to signal (one per argument). The list ends on any
    option starting with a "-". It is not a problem if the list of pids is
    empty, so that it can be built on the fly based on the result of a command
    like "pidof" or "pgrep".

  -st <pid>* : send the "terminate" signal (SIGTERM) to older processes after
    boot completion to terminate them immediately without finishing what they
    were doing. <pid> is a list of pids to signal (one per argument). The list
    ends on any option starting with a "-". It is not a problem if the list
    of pids is empty, so that it can be built on the fly based on the result of
    a command like "pidof" or "pgrep".

  -v : report the version and build date.

  -vv : display the version, build options, libraries versions and usable
    pollers. This output is systematically requested when filing a bug report.

  -x <unix_socket> : connect to the specified socket and try to retrieve any
    listening sockets from the old process, and use them instead of trying to
    bind new ones. This is useful to avoid missing any new connection when
    reloading the configuration on Linux. The capability must be enabled on the
    stats socket using "expose-fd listeners" in your configuration.

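
As a side note on the "-m" option above, the limit is divided by global.nbproc
before forking, so each process only gets its share. A minimal sketch of that
split, using purely illustrative values (1024 MB total, 4 processes):

```shell
# Sketch of the "-m" per-process split described above: the total limit is
# divided by global.nbproc before forking. All values here are examples.
total_mb=1024        # total limit passed with "-m"
nbproc=4             # global.nbproc from the configuration
per_process_mb=$((total_mb / nbproc))
echo "each of the $nbproc processes may allocate up to $per_process_mb MB"
```
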
A safe way to start HAProxy from an init file consists in forcing the daemon
mode, storing existing pids to a pid file and using this pid file to notify
older processes to finish before leaving :

   haproxy -f /etc/haproxy.cfg \
           -D -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)

When the configuration is split into a few specific files (eg: tcp vs http),
it is recommended to use the "-f" option :

   haproxy -f /etc/haproxy/global.cfg -f /etc/haproxy/stats.cfg \
           -f /etc/haproxy/default-tcp.cfg -f /etc/haproxy/tcp.cfg \
           -f /etc/haproxy/default-http.cfg -f /etc/haproxy/http.cfg \
           -D -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)

When an unknown number of files is expected, such as customer-specific files,
it is recommended to assign them a name starting with a fixed-size sequence
number and to use "--" to load them, possibly after loading some defaults :

   haproxy -f /etc/haproxy/global.cfg -f /etc/haproxy/stats.cfg \
           -f /etc/haproxy/default-tcp.cfg -f /etc/haproxy/tcp.cfg \
           -f /etc/haproxy/default-http.cfg -f /etc/haproxy/http.cfg \
           -D -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid) \
           -f /etc/haproxy/default-customers.cfg -- /etc/haproxy/customers/*

Sometimes a failure to start may happen for whatever reason. Then it is
important to verify if the version of HAProxy you are invoking is the expected
version and if it supports the features you are expecting (eg: SSL, PCRE,
compression, Lua, etc). This can be verified using "haproxy -vv". Some
important information such as certain build options, the target system and
the versions of the libraries being used are reported there. It is also what
you will systematically be asked for when posting a bug report :

  $ haproxy -vv
  HA-Proxy version 1.6-dev7-a088d3-4 2015/10/08
  Copyright 2000-2015 Willy Tarreau <willy@haproxy.org>

  Build options :
    TARGET  = linux2628
    CPU     = generic
    CC      = gcc
    CFLAGS  = -pg -O0 -g -fno-strict-aliasing -Wdeclaration-after-statement \
              -DBUFSIZE=8030 -DMAXREWRITE=1030 -DSO_MARK=36 -DTCP_REPAIR=19

  Default settings :
    maxconn = 2000, bufsize = 8030, maxrewrite = 1030, maxpollevents = 200

  Encrypted password support via crypt(3): yes
  Built with zlib version : 1.2.6
  Compression algorithms supported : identity("identity"), deflate("deflate"), \
                                     raw-deflate("deflate"), gzip("gzip")
  Built with OpenSSL version : OpenSSL 1.0.1o 12 Jun 2015
  Running on OpenSSL version : OpenSSL 1.0.1o 12 Jun 2015
  OpenSSL library supports TLS extensions : yes
  OpenSSL library supports SNI : yes
  OpenSSL library supports prefer-server-ciphers : yes
  Built with PCRE version : 8.12 2011-01-15
  PCRE library supports JIT : no (USE_PCRE_JIT not set)
  Built with Lua version : Lua 5.3.1
  Built with transparent proxy support using: IP_TRANSPARENT IP_FREEBIND

  Available polling systems :
        epoll : pref=300,  test result OK
         poll : pref=200,  test result OK
       select : pref=150,  test result OK
  Total: 3 (3 usable), will use epoll.

The relevant information that many non-developer users can verify here are :
  - the version : 1.6-dev7-a088d3-4 above means the code is currently at commit
    ID "a088d3" which is the 4th one after official version "1.6-dev7".
    Version 1.6-dev7 would show as "1.6-dev7-8c1ad7". What matters here is in
    fact "1.6-dev7". This is the 7th development version of what will become
    version 1.6 in the future. A development version is not suitable for use in
    production (unless you know exactly what you are doing). A stable version
    will show as a 3-numbers version, such as "1.5.14-16f863", indicating the
    14th level of fix on top of version 1.5. This is a production-ready version.

  - the release date : 2015/10/08. It is represented in the universal
    year/month/day format. Here this means October 8th, 2015. Given that stable
    releases are issued every few months (1-2 months at the beginning, sometimes
    6 months once the product becomes very stable), if you're seeing an old date
    here, it means you're probably affected by a number of bugs or security
    issues that have since been fixed and that it might be worth checking on the
    official site.

  - build options : they are relevant to people who build their packages
    themselves, they can explain why things are not behaving as expected. For
    example the development version above was built for Linux 2.6.28 or later,
    targeting a generic CPU (no CPU-specific optimizations), and lacks any
    code optimization (-O0), so it will perform poorly.

  - libraries versions : zlib version is reported as found in the library
    itself. In general zlib is considered a very stable product and upgrades
    are almost never needed. OpenSSL reports two versions, the version used at
    build time and the one being used, as found on the system. These ones may
    differ by the last letter but never by the numbers. The build date is also
    reported because most OpenSSL bugs are security issues and need to be taken
    seriously, so this library absolutely needs to be kept up to date. Seeing a
    4-months old version here is highly suspicious and indeed an update was
    missed. PCRE provides very fast regular expressions and is highly
    recommended. Some of its extensions such as JIT are not present in all
    versions and are still young, so some people prefer not to build with them,
    which is why the build status is reported as well. Regarding the Lua
    scripting language, HAProxy expects version 5.3, which is very young since
    it was released shortly before HAProxy 1.6. It is important to check on
    the Lua web site if some fixes are proposed for this branch.

  - Available polling systems will affect the process's scalability when
    dealing with more than about one thousand concurrent connections. These
    ones are only available when the correct system was indicated in the TARGET
    variable during the build. The "epoll" mechanism is highly recommended on
    Linux, and the kqueue mechanism is highly recommended on BSD. Lacking them
    will result in poll() or even select() being used, causing a high CPU usage
    when dealing with a lot of connections.

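
The version-number convention explained above (base version, trailing commit
ID, optional patch count) can be sketched as a tiny helper. The sed expression
below is an illustrative assumption, not an official HAProxy tool:

```shell
# Sketch: recover the part of the version that matters, as explained above,
# by stripping the trailing 6-hex-digit commit ID and optional patch count.
strip_commit() {
    echo "$1" | sed -E 's/-[0-9a-f]{6}(-[0-9]+)?$//'
}
strip_commit "1.6-dev7-a088d3-4"     # development version -> 1.6-dev7
strip_commit "1.5.14-16f863"         # stable version      -> 1.5.14
```
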
4. Stopping and restarting HAProxy
----------------------------------

HAProxy supports a graceful and a hard stop. The hard stop is simple: when the
SIGTERM signal is sent to the haproxy process, it immediately quits and all
established connections are closed. The graceful stop is triggered when the
SIGUSR1 signal is sent to the haproxy process. It consists in only unbinding
from listening ports, but continuing to process existing connections until
they close. Once the last connection is closed, the process leaves.

The hard stop method is used for the "stop" or "restart" actions of the service
management script. The graceful stop is used for the "reload" action which
tries to seamlessly reload a new configuration in a new process.

Both of these signals may be sent by the new haproxy process itself during a
reload or restart, so that they are sent at the latest possible moment and only
if absolutely required. This is what is performed by the "-st" (hard) and "-sf"
(graceful) options respectively.

In master-worker mode, it is not needed to start a new haproxy process in
order to reload the configuration. The master process reacts to the SIGUSR2
signal by reexecuting itself with the -sf parameter followed by the PIDs of
the workers. The master will then parse the configuration file and fork new
workers.

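
The signals described so far can be summarized in a small helper. This mapping
is taken directly from this section; the function itself is an illustration
(actually sending the signal, e.g. with "kill -USR2 <master pid>", is left to
the caller):

```shell
# Sketch mapping management actions to the signals this section describes.
# This helper is an illustration, not part of HAProxy itself.
signal_for_action() {
    case "$1" in
        reload)   echo USR2 ;;  # master-worker: master re-execs with -sf
        graceful) echo USR1 ;;  # finish existing connections, then leave
        stop)     echo TERM ;;  # hard stop, all connections closed
        *)        return 1 ;;
    esac
}
```
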
To understand better how these signals are used, it is important to understand
the whole restart mechanism.

First, an existing haproxy process is running. The administrator uses a system
specific command such as "/etc/init.d/haproxy reload" to indicate he wants to
take the new configuration file into effect. What happens then is the following.
First, the service script (/etc/init.d/haproxy or equivalent) will verify that
the configuration file parses correctly using "haproxy -c". After that it will
try to start haproxy with this configuration file, using "-st" or "-sf".

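
The check-then-start sequence just described can be sketched as a shell
function. Only the exit status of "haproxy -c" (zero on a valid configuration)
is documented behaviour; the paths and messages below are assumptions:

```shell
# Sketch of a "reload" action: validate the new configuration first, then
# start a new process that asks the old one to finish gracefully ("-sf").
# Paths are assumptions; adapt them to your system.
reload_haproxy() {
    cfg="$1" pidfile="$2"
    if ! haproxy -c -q -f "$cfg"; then
        echo "configuration check failed, not reloading" >&2
        return 1
    fi
    # $(cat ...) is deliberately unquoted so multiple pids split into words
    haproxy -f "$cfg" -D -p "$pidfile" -sf $(cat "$pidfile" 2>/dev/null)
}
```
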
Then HAProxy tries to bind to all listening ports. If some fatal errors happen
(eg: address not present on the system, permission denied), the process quits
with an error. If a socket binding fails because a port is already in use, then
the process will first send a SIGTTOU signal to all the pids specified in the
"-st" or "-sf" pid list. This is what is called the "pause" signal. It instructs
all existing haproxy processes to temporarily stop listening to their ports so
that the new process can try to bind again. During this time, the old process
continues to process existing connections. If the binding still fails (because
for example a port is shared with another daemon), then the new process sends a
SIGTTIN signal to the old processes to instruct them to resume operations just
as if nothing happened. The old processes will then restart listening to the
ports and continue to accept connections. Note that this mechanism is system
dependent and some operating systems may not support it in multi-process mode.

If the new process manages to bind correctly to all ports, then it sends either
the SIGTERM (hard stop in case of "-st") or the SIGUSR1 (graceful stop in case
of "-sf") to all processes to notify them that it is now in charge of operations
and that the old processes will have to leave, either immediately or once they
have finished their job.

It is important to note that during this timeframe, there are two small windows
of a few milliseconds each where it is possible that a few connection failures
will be noticed during high loads. Typically observed failure rates are around
1 failure during a reload operation every 10000 new connections per second,
which means that a heavily loaded site running at 30000 new connections per
second may see about 3 failed connections upon every reload. The two situations
where this happens are :

  - if the new process fails to bind due to the presence of the old process,
    it will first have to go through the SIGTTOU+SIGTTIN sequence, which
    typically lasts about one millisecond for a few tens of frontends, and
    during which some ports will not be bound to the old process and not yet
    bound to the new one. HAProxy works around this on systems that support the
    SO_REUSEPORT socket option, as it allows the new process to bind without
    first asking the old one to unbind. Most BSD systems have supported it
    almost forever. Linux supported it in version 2.0 and dropped it around
    2.2, but some patches were floating around by then. It was reintroduced in
    kernel 3.9, so if you are observing a connection failure rate above the one
    mentioned above, please ensure that your kernel is 3.9 or newer, or that
    relevant patches were backported to your kernel (less likely).

  - when the old processes close the listening ports, the kernel may not always
    redistribute any pending connection that was remaining in the socket's
    backlog. Under high loads, a SYN packet may arrive just before the socket
    is closed, and will lead to an RST packet being sent to the client. In some
    critical environments where even one drop is not acceptable, these ones are
    sometimes dealt with using firewall rules to block SYN packets during the
    reload, forcing the client to retransmit. This is totally system-dependent,
    as some systems might be able to visit other listening queues and avoid
    this RST. A second case concerns the ACK from the client on a local socket
    that was in SYN_RECV state just before the close. This ACK will lead to an
    RST packet while the haproxy process is still not aware of it. This one is
    harder to get rid of, though the firewall filtering rules mentioned above
    will work well if applied one second or so before restarting the process.

  523 For the vast majority of users, such drops will never happen since they don't
  524 have enough load to trigger the race conditions. And for most high traffic
  525 users, the failure rate remains well within the noise margin, provided that
  526 at least SO_REUSEPORT is properly supported on their systems.
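
A purely illustrative sketch of the firewall workaround described above, for
Linux with iptables (the port, timing and reload command are assumptions to
adapt to your environment; this requires root privileges) :

```
  # hypothetical sketch: drop incoming SYNs on port 80 around the reload so
  # clients silently retransmit instead of receiving an RST
  iptables -I INPUT -p tcp --dport 80 --syn -j DROP
  sleep 1                              # applied one second or so beforehand
  systemctl reload haproxy             # or any other reload method
  iptables -D INPUT -p tcp --dport 80 --syn -j DROP
```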
  529 5. File-descriptor limitations
  530 ------------------------------
  532 In order to ensure that all incoming connections will successfully be served,
  533 HAProxy computes at load time the total number of file descriptors that will be
  534 needed during the process's life. A regular Unix process is generally granted
  535 1024 file descriptors by default, and a privileged process can raise this limit
  536 itself. This is one reason for starting HAProxy as root and letting it adjust
  537 the limit. The default limit of 1024 file descriptors roughly allows about 500
  538 concurrent connections to be processed. The computation is based on the global
  539 maxconn parameter which limits the total number of connections per process, the
  540 number of listeners, the number of servers which have a health check enabled,
  541 the agent checks, the peers, the loggers and possibly a few other technical
  542 requirements. A rough estimate of this number consists in doubling the
  543 maxconn value and adding a few tens to get the approximate number of file
  544 descriptors needed.
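
As a hedged illustration of this rule of thumb (haproxy performs the exact
computation itself; the margin of 50 below is just an arbitrary example of
"a few tens") :

```shell
# rough estimate only: two file descriptors per connection, plus a few tens
# for listeners, health checks, logs and other internal needs
maxconn=20000
estimate=$((2 * maxconn + 50))
echo "$estimate"    # prints 40050
```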
  546 Originally HAProxy did not know how to compute this value, and it was necessary
  547 to pass the value using the "ulimit-n" setting in the global section. This
  548 explains why even today a lot of configurations are seen with this setting
  549 present. Unfortunately it was often miscalculated, resulting in connection
  550 failures when approaching maxconn instead of throttling incoming connections
  551 while waiting for the needed resources. For this reason it is important to
  552 remove any vestigial "ulimit-n" setting that can remain from very old versions.
  554 Raising the number of file descriptors to accept even moderate loads is
  555 mandatory but comes with some OS-specific adjustments. First, the select()
  556 polling system is limited to 1024 file descriptors. In fact on Linux it used
  557 to be capable of handling more, but since certain OSes ship with excessively
  558 restrictive SELinux policies forbidding the use of select() with more than
  559 1024 file descriptors, HAProxy now refuses to start in this case in order to
  560 avoid any issue at run time. On all supported operating systems, poll() is
  561 available and will not suffer from this limitation. It is automatically picked
  562 so there is nothing to do to get a working configuration. But poll() becomes
  563 very slow when the number of file descriptors increases. While HAProxy does its
  564 best to limit this performance impact (eg: via the use of the internal file
  565 descriptor cache and batched processing), a good rule of thumb is that using
  566 poll() with more than a thousand concurrent connections will use a lot of CPU.
  568 For Linux systems based on kernels 2.6 and above, the epoll() system call will
  569 be used. It's a much more scalable mechanism relying on callbacks in the kernel
  570 that guarantee a constant wake up time regardless of the number of registered
  571 monitored file descriptors. It is automatically used where detected, provided
  572 that HAProxy was built for one of the Linux flavors. Its presence and
  573 support can be verified using "haproxy -vv".
  575 For BSD systems which support it, kqueue() is available as an alternative. It
  576 is much faster than poll() and even slightly faster than epoll() thanks to its
  577 batched handling of changes. At least FreeBSD and OpenBSD support it. Just like
  578 with Linux's epoll(), its support and availability are reported in the output
  579 of "haproxy -vv".
  581 Having a good poller is one thing, but it is mandatory that the process can
  582 reach the limits. When HAProxy starts, it immediately sets the new process's
  583 file descriptor limits and verifies if it succeeds. In case of failure, it
  584 reports it before forking so that the administrator can see the problem. As
  585 long as the process is started as root, there should be no reason for this
  586 setting to fail. However, it can fail if the process is started by an
  587 unprivileged user. If there is a compelling reason for *not* starting haproxy
  588 as root (eg: started by end users, or by a per-application account), then the
  589 file descriptor limit can be raised by the system administrator for this
  590 specific user. The effectiveness of the setting can be verified by issuing
  591 "ulimit -n" from the user's command line. It should reflect the new limit.
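
On Linux systems using PAM, raising this limit for a specific user is
typically done under /etc/security/limits.d/ ; the fragment below is a
hypothetical sketch assuming a dedicated "haproxy" user and an illustrative
value :

```
# hypothetical /etc/security/limits.d/haproxy.conf
haproxy  soft  nofile  100000
haproxy  hard  nofile  100000
```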
  593 Warning: when an unprivileged user's limits are changed in this user's account,
  594 it is fairly common that these values are only considered when the user logs in
  595 and not in scripts run at system boot time nor in crontabs. This is
  596 totally dependent on the operating system, so remember to check "ulimit -n"
  597 before starting haproxy when running this way. The general advice is never to
  598 start haproxy as an unprivileged user for production purposes. Another good
  599 reason is that running unprivileged prevents haproxy from enabling some
  600 security protections.
  601 Once it is certain that the system will allow the haproxy process to use the
  602 requested number of file descriptors, two new system-specific limits may be
  603 encountered. The first one is the system-wide file descriptor limit, which is
  604 the total number of file descriptors opened on the system, covering all
  605 processes. When this limit is reached, accept() or socket() will typically
  606 return ENFILE. The second one is the per-process hard limit on the number of
  607 file descriptors, which prevents setrlimit() from going higher. Both are very
  608 dependent on the operating system. On Linux, the system limit is set at boot
  609 based on the amount of memory. It can be changed with the "fs.file-max" sysctl.
  610 And the per-process hard limit is set to 1048576 by default, but it can be
  611 changed using the "fs.nr_open" sysctl.
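
On Linux, both limits can thus be raised through these two sysctls ; a
hedged sketch, with purely illustrative values :

```
# hypothetical /etc/sysctl.conf additions ; apply with "sysctl -p"
fs.file-max = 2097152
fs.nr_open = 2097152
```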
  613 File descriptor limitations may be observed on a running process when they are
  614 set too low. The strace utility will report that accept() and socket() return
  615 "-1 EMFILE" when the process's limits have been reached. In this case, simply
  616 raising the "ulimit-n" value (or removing it) will solve the problem. If these
  617 system calls return "-1 ENFILE" then it means that the kernel's limits have
  618 been reached and that something must be done on a system-wide parameter. These
  619 troubles must absolutely be addressed, as they result in high CPU usage (when
  620 accept() fails) and failed connections that are generally visible to the user.
  621 One solution also consists in lowering the global maxconn value to enforce
  622 serialization, and possibly to disable HTTP keep-alive to force connections
  623 to be released and reused faster.
  626 6. Memory management
  627 --------------------
  629 HAProxy uses a simple and fast pool-based memory management. Since it relies on
  630 a small number of different object types, it's much more efficient to pick new
  631 objects from a pool which already contains objects of the appropriate size than
  632 to call malloc() for each different size. The pools are organized as a stack or
  633 LIFO, so that newly allocated objects are taken from recently released objects
  634 still hot in the CPU caches. Pools of similar sizes are merged together, in
  635 order to limit memory fragmentation.
  637 By default, since the focus is on performance, each released object is put
  638 back into the pool it came from, and allocated objects are never freed since
  639 they are expected to be reused very soon.
  641 On the CLI, it is possible to check how memory is being used in pools thanks to
  642 the "show pools" command :
  644   > show pools
  645   Dumping pools usage. Use SIGQUIT to flush them.
  646     - Pool cache_st (16 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 1 users, @0x9ccc40=03 [SHARED]
  647     - Pool pipe (32 bytes) : 5 allocated (160 bytes), 5 used, 0 failures, 2 users, @0x9ccac0=00 [SHARED]
  648     - Pool comp_state (48 bytes) : 3 allocated (144 bytes), 3 used, 0 failures, 5 users, @0x9cccc0=04 [SHARED]
  649     - Pool filter (64 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 3 users, @0x9ccbc0=02 [SHARED]
  650     - Pool vars (80 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 2 users, @0x9ccb40=01 [SHARED]
  651     - Pool uniqueid (128 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 2 users, @0x9cd240=15 [SHARED]
  652     - Pool task (144 bytes) : 55 allocated (7920 bytes), 55 used, 0 failures, 1 users, @0x9cd040=11 [SHARED]
  653     - Pool session (160 bytes) : 1 allocated (160 bytes), 1 used, 0 failures, 1 users, @0x9cd140=13 [SHARED]
  654     - Pool h2s (208 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 2 users, @0x9ccec0=08 [SHARED]
  655     - Pool h2c (288 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 1 users, @0x9cce40=07 [SHARED]
  656     - Pool spoe_ctx (304 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 2 users, @0x9ccf40=09 [SHARED]
  657     - Pool connection (400 bytes) : 2 allocated (800 bytes), 2 used, 0 failures, 1 users, @0x9cd1c0=14 [SHARED]
  658     - Pool hdr_idx (416 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 1 users, @0x9cd340=17 [SHARED]
  659     - Pool dns_resolut (480 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 1 users, @0x9ccdc0=06 [SHARED]
  660     - Pool dns_answer_ (576 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 1 users, @0x9ccd40=05 [SHARED]
  661     - Pool stream (960 bytes) : 1 allocated (960 bytes), 1 used, 0 failures, 1 users, @0x9cd0c0=12 [SHARED]
  662     - Pool requri (1024 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 1 users, @0x9cd2c0=16 [SHARED]
  663     - Pool buffer (8030 bytes) : 3 allocated (24090 bytes), 2 used, 0 failures, 1 users, @0x9cd3c0=18 [SHARED]
  664     - Pool trash (8062 bytes) : 1 allocated (8062 bytes), 1 used, 0 failures, 1 users, @0x9cd440=19
  665   Total: 19 pools, 42296 bytes allocated, 34266 used.
  667 The pool name is only indicative, it's the name of the first object type using
  668 this pool. The size in parentheses is the object size for objects in this pool.
  669 Object sizes are always rounded up to the closest multiple of 16 bytes. The
  670 number of objects currently allocated and the equivalent number of bytes are
  671 reported so that it is easy to know which pool is responsible for the highest
  672 memory usage. The number of objects currently in use is reported as well in the
  673 "used" field. The difference between "allocated" and "used" corresponds to the
  674 objects that have been freed and are available for immediate use. The address
  675 at the end of the line is the pool's address, and the following number is the
  676 pool index when it exists, or is reported as -1 if no index was assigned.
  678 It is possible to limit the amount of memory allocated per process using the
  679 "-m" command line option, followed by a number of megabytes. It covers all of
  680 the process's addressable space, so that includes memory used by some libraries
  681 as well as the stack, but it is a reliable limit when building a
  682 resource-constrained system. It works the same way as "ulimit -v" on systems
  683 which have it, or "ulimit -d" for the other ones.
  685 If a memory allocation fails due to the memory limit being reached or because
  686 the system doesn't have enough memory, then haproxy will first start to
  687 free all available objects from all pools before attempting to allocate memory
  688 again. This mechanism of releasing unused memory can be triggered by sending
  689 the signal SIGQUIT to the haproxy process. When doing so, the pools state prior
  690 to the flush will also be reported to stderr when the process runs in
  691 foreground.
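
A minimal sketch of triggering this flush by hand, assuming a single running
haproxy process and that pidof is available :

```
  # send SIGQUIT to flush the pools ; in foreground mode the pools state is
  # also dumped to stderr
  kill -QUIT "$(pidof haproxy)"
```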
  693 During a reload operation, the process switched to the graceful stop state also
  694 automatically performs some flushes after releasing any connection so that all
  695 possible memory is released to save it for the new process.
  698 7. CPU usage
  699 ------------
  701 HAProxy normally spends most of its time in the system and a smaller part in
  702 userland. A finely tuned 3.5 GHz CPU can sustain about 80000 end-to-end
  703 connection setups and closes per second at 100% CPU on a single core. When one
  704 core is saturated, typical figures are :
  705   - 95% system, 5% user for long TCP connections or large HTTP objects
  706   - 85% system and 15% user for short TCP connections or small HTTP objects in
  707     close mode
  708   - 70% system and 30% user for small HTTP objects in keep-alive mode
  710 The amount of rule processing and regular expressions will increase the
  711 userland part. The presence of firewall rules, connection tracking, or complex
  712 routing tables in the system will instead increase the system part.
  714 On most systems, the CPU time observed during network transfers can be cut in 4
  715 parts :
  716   - the interrupt part, which concerns all the processing performed upon I/O
  717     receipt, before the target process is even known. Typically Rx packets are
  718     accounted for in interrupt. On some systems such as Linux where interrupt
  719     processing may be deferred to a dedicated thread, it can appear as softirq,
  720     and the thread is called ksoftirqd/0 (for CPU 0). The CPU taking care of
  721     this load is generally defined by the hardware settings, though in the case
  722     of softirq it is often possible to remap the processing to another CPU.
  723     This interrupt part will often be perceived as parasitic since it's not
  724     associated with any process, but it actually is some processing being done
  725     to prepare the work for the process.
  727   - the system part, which concerns all the processing done using kernel code
  728     called from userland. System calls are accounted as system for example. All
  729     synchronously delivered Tx packets will be accounted for as system time. If
  730     some packets have to be deferred due to queues filling up, they may then be
  731     processed in interrupt context later (eg: upon receipt of an ACK opening a
  732     TCP window).
  734   - the user part, which exclusively runs application code in userland. HAProxy
  735     runs entirely in this part, though it makes heavy use of system calls.
  736     Rules processing, regular expressions, compression, encryption all add to
  737     the user portion of CPU consumption.
  739   - the idle part, which is what the CPU does when there is nothing to do. For
  740     example HAProxy waits for an incoming connection, or waits for some data to
  741     leave, meaning the system is waiting for an ACK from the client to push
  742     this data.
  744 In practice regarding HAProxy's activity, it is in general reasonably accurate
  745 (though not strictly exact) to consider that interrupt/softirq are caused by Rx
  746 processing in kernel drivers, that user-land is caused by layer 7 processing
  747 in HAProxy, and that system time is caused by network processing on the Tx
  748 path.
  750 Since HAProxy runs around an event loop, it waits for new events using poll()
  751 (or any alternative) and processes all these events as fast as possible before
  752 going back to poll() waiting for new events. It measures the time spent waiting
  753 in poll() compared to the time spent processing events. The ratio of
  754 polling time vs total time is called the "idle" time, it's the amount of time
  755 spent waiting for something to happen. This ratio is reported in the stats page
  756 on the "idle" line, or "Idle_pct" on the CLI. When it's close to 100%, it means
  757 the load is extremely low. When it's close to 0%, it means that there is
  758 constantly some activity. While it cannot be very accurate on an overloaded
  759 system due to other processes possibly preempting the CPU from the haproxy
  760 process, it still provides a good estimate about how HAProxy considers it is
  761 working : if the load is low and the idle ratio is low as well, it may indicate
  762 that HAProxy has a lot of work to do, possibly due to very expensive rules that
  763 have to be processed. Conversely, if HAProxy indicates the idle is close to
  764 100% while things are slow, it means that it cannot do anything to speed things
  765 up because it is already waiting for incoming data to process. In the example
  766 below, haproxy is completely idle :
  768   $ echo "show info" | socat - /var/run/haproxy.sock | grep ^Idle
  769   Idle_pct: 100
  771 When the idle ratio starts to become very low, it is important to tune the
  772 system and place processes and interrupts correctly to save the most possible
  773 CPU resources for all tasks. If a firewall is present, it may be worth trying
  774 to disable it or to tune it to ensure it is not responsible for a large part
  775 of the performance limitation. It's worth noting that unloading a stateful
  776 firewall generally reduces both the amount of interrupt/softirq and of system
  777 usage since such firewalls act both on the Rx and the Tx paths. On Linux,
  778 unloading the nf_conntrack and ip_conntrack modules will show whether there is
  779 anything to gain. If so, then the module runs with default settings and you'll
  780 have to figure out how to tune it for better performance. In general this
  781 consists in considerably increasing the hash table size. On FreeBSD, the
  782 command "pfctl -d" will disable the "pf" firewall and its stateful engine at
  783 the same time.
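
If the conntrack module must remain loaded, its hash table size can be raised
at module load time ; a hedged sketch for Linux (the value is purely
illustrative, and the exact parameter may vary across kernel versions) :

```
# hypothetical /etc/modprobe.d/conntrack.conf
options nf_conntrack hashsize=1048576
```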
  784 If it is observed that a lot of time is spent in interrupt/softirq, it is
  785 important to ensure that they don't run on the same CPU. Most systems tend to
  786 pin the tasks on the CPU where they receive the network traffic because for
  787 certain workloads it improves things. But with heavily network-bound workloads
  788 it is the opposite as the haproxy process will have to fight against its kernel
  789 counterpart. Pinning haproxy to one CPU core and the interrupts to another one,
  790 all sharing the same L3 cache tends to noticeably increase network performance
  791 because in practice the amount of work for haproxy and the network stack are
  792 quite close, so they can almost fill an entire CPU each. On Linux this is done
  793 using taskset (for haproxy) or using cpu-map (from the haproxy config), and the
  794 interrupts are assigned under /proc/irq. Many network interfaces support
  795 multiple queues and multiple interrupts. In general it helps to spread them
  796 across a small number of CPU cores provided they all share the same L3 cache.
  797 Please always stop irqbalance, which tends to do the worst possible thing on
  798 such workloads.
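
A sketch of such a manual placement on Linux (the CPU numbers and the IRQ
number are hypothetical ; check /proc/interrupts for your NIC's real IRQ,
and note these commands require root privileges) :

```
  # pin the haproxy process to CPU 1
  taskset -pc 1 "$(pidof haproxy)"
  # pin the NIC's interrupt (here IRQ 24) to CPU 2, sharing the same L3 cache
  echo 4 > /proc/irq/24/smp_affinity    # hex bitmask: 4 = CPU 2
```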
  800 For CPU-bound workloads consisting in a lot of SSL traffic or a lot of
  801 compression, it may be worth using multiple processes dedicated to certain
  802 tasks, though there is no universal rule here and experimentation will have to
  803 be performed.
  805 In order to increase the CPU capacity, it is possible to make HAProxy run as
  806 several processes, using the "nbproc" directive in the global section. There
  807 are some limitations though :
  808   - health checks are run per process, so the target servers will get as many
  809     checks as there are running processes ;
  810   - maxconn values and queues are per-process so the correct value must be set
  811     to avoid overloading the servers ;
  812   - outgoing connections should not use port ranges, to prevent conflicts ;
  813   - stick-tables are per process and are not shared between processes ;
  814   - each peers section may only run on a single process at a time ;
  815   - the CLI operations will only act on a single process at a time.
  817 With this in mind, it appears that the easiest setup often consists in having
  818 a first layer running on multiple processes and in charge of the heavy
  819 processing, passing the traffic to a second layer running in a single process.
  820 This mechanism is suited to SSL and compression which are the two CPU-heavy
  821 features. Instances can easily be chained over UNIX sockets (which are cheaper
  822 than TCP sockets and do not waste ports), using the proxy protocol, which is
  823 useful to pass client information to the next stage. When doing so, it is
  824 generally a good idea to bind all the single-process tasks to process number 1
  825 and extra tasks to next processes, as this will make it easier to generate
  826 similar configurations for different machines.
  828 On Linux versions 3.9 and above, running HAProxy in multi-process mode is much
  829 more efficient when each process uses a distinct listening socket on the same
  830 IP:port ; this will make the kernel evenly distribute the load across all
  831 processes instead of waking them all up. Please check the "process" option of
  832 the "bind" keyword lines in the configuration manual for more information.
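
A hedged configuration sketch of this distribution (the frontend name and the
process count are illustrative ; see the configuration manual for the exact
"bind ... process" syntax) :

```
    global
        nbproc 4

    frontend fe_main
        # one listening socket per process on the same IP:port so that the
        # kernel spreads incoming connections across processes (Linux 3.9+)
        bind :80 process 1
        bind :80 process 2
        bind :80 process 3
        bind :80 process 4
```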
  835 8. Logging
  836 ----------
  838 For logging, HAProxy always relies on a syslog server since it does not perform
  839 any file-system access. The standard way of using it is to send logs over UDP
  840 to the log server (by default on port 514). Very commonly this points to
  841 where the local syslog daemon is running, but it's also used over the
  842 network to log to a central server. The central server provides additional
  843 benefits especially in active-active scenarios where it is desirable to keep
  844 the logs merged in arrival order. HAProxy may also make use of a UNIX socket to
  845 send its logs to the local syslog daemon, but it is not recommended at all,
  846 because if the syslog server is restarted while haproxy runs, the socket will
  847 be replaced and new logs will be lost. Since HAProxy will be isolated inside a
  848 chroot jail, it will not have the ability to reconnect to the new socket. It
  849 has also been observed in the field that the log buffers in use on UNIX
  850 sockets are very small and lead to lost messages even at very light loads.
  851 This can still be fine for testing, however.
  853 It is recommended to add the following directive to the "global" section to
  854 make HAProxy log to the local daemon using facility "local0" :
  856       log local0
  858 and then to add the following one to each "defaults" section or to each frontend
  859 and backend section :
  861       log global
  863 This way, all logs will be centralized through the global definition of where
  864 the log server is.
  866 Some syslog daemons do not listen to UDP traffic by default, so depending on
  867 the daemon being used, the syntax to enable this will vary :
  869   - on sysklogd, you need to pass argument "-r" on the daemon's command line
  870     so that it listens to a UDP socket for "remote" logs ; note that there is
  871     no way to limit it to a single address, so it will also receive logs
  872     from remote systems ;
  874   - on rsyslogd, the following lines must be added to the configuration file :
  876       $ModLoad imudp
  877       $UDPServerAddress *
  878       $UDPServerRun 514
  880   - on syslog-ng, a new source can be created the following way, it then needs
  881     to be added as a valid source in one of the "log" directives :
  883       source s_udp {
  884         udp(ip(0.0.0.0) port(514));
  885       };
  887 Please consult your syslog daemon's manual for more information. If no logs are
  888 seen in the system's log files, please consider the following tests :
  890   - restart haproxy. Each frontend and backend logs one line indicating it's
  891     starting. If these logs are received, it means logs are working.
  893   - run "strace -tt -s100 -etrace=sendmsg -p <haproxy's pid>" and perform some
  894     activity that you expect to be logged. You should see the log messages
  895     being sent using sendmsg() there. If they don't appear, restart haproxy
  896     under strace. If you still see no logs, it definitely means
  897     that something is wrong in your configuration.
  899   - run tcpdump to watch for port 514, for example on the loopback interface if
  900     the traffic is being sent locally : "tcpdump -As0 -ni lo port 514". If the
  901     packets are seen there, it is proof that they are sent, and the syslogd
  902     daemon itself then needs troubleshooting.
  904 While traffic logs are sent from the frontends (where the incoming connections
  905 are accepted), backends also need to be able to send logs in order to report a
  906 server state change following a health check. Please consult HAProxy's
  907 configuration manual for more information regarding all possible log settings.
  909 It is convenient to choose a facility that is not used by other daemons.
  910 HAProxy examples often suggest "local0" for traffic logs and "local1" for
  911 admin logs because they're never seen in the field. A single facility would
  912 be enough as well.
  912 Having separate logs is convenient for log analysis, but it's also important to
  913 remember that logs may sometimes convey confidential information, and as such
  914 they must not be mixed with other logs that may accidentally be handed out to
  915 unauthorized people.
  917 For in-field troubleshooting without impacting the server's capacity too much,
  918 it is recommended to make use of the "halog" utility provided with HAProxy.
  919 This is sort of a grep-like utility designed to process HAProxy log files at
  920 a very fast data rate. Typical figures range between 1 and 2 GB of logs per
  921 second. It is capable of extracting only certain logs (eg: search for some
  922 classes of HTTP status codes, connection termination status, search by response
  923 time ranges, look for errors only), count lines, limit the output to a number
  924 of lines, and perform some more advanced statistics such as sorting servers
  925 by response time or error counts, sorting URLs by time or count, sorting client
  926 addresses by access count, and so on. It is pretty convenient to quickly spot
  927 anomalies such as a bot looping on the site, and block them.
  930 9. Statistics and monitoring
  931 ----------------------------
  933 It is possible to query HAProxy about its status. The most commonly used
  934 mechanism is the HTTP statistics page. This page also exposes an alternative
  935 CSV output format for monitoring tools. The same format is provided on the
  936 Unix socket.
  939 9.1. CSV format
  940 ---------------
  942 The statistics may be consulted either from the unix socket or from the HTTP
  943 page. Both means provide a CSV format whose fields follow. The first line
  944 begins with a sharp ('#') and has one word per comma-delimited field which
  945 represents the title of the column. All other lines starting at the second one
  946 use a classical CSV format using a comma as the delimiter, and the double quote
  947 ('"') as an optional text delimiter, but only if the enclosed text is ambiguous
  948 (if it contains a quote or a comma). The double-quote character ('"') in the
  949 text is doubled ('""'), which is the format that most tools recognize. Please
  950 do not insert any column before these ones in order not to break tools which
  951 use hard-coded column positions.
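
As a hedged illustration of these conventions, the sample below mimics a
heavily truncated "show stat" output (real output has many more columns, and
the field positions here are not the real ones) and skips the '#' header line
while splitting on commas :

```shell
# hypothetical, truncated sample ; real data would come from something like:
#   echo "show stat" | socat stdio /var/run/haproxy.sock
sample='# pxname,svname,scur,status
www,FRONTEND,42,OPEN
www,srv1,13,UP'

# print proxy name, service name and status, skipping the header line
printf '%s\n' "$sample" | awk -F',' '!/^#/ { print $1, $2, $4 }'
# prints:
#   www FRONTEND OPEN
#   www srv1 UP
```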
  953 In brackets after each field name are the types which may have a value for
  954 that field. The types are L (Listeners), F (Frontends), B (Backends), and
  955 S (Servers).
  957   0. pxname [LFBS]: proxy name
  958   1. svname [LFBS]: service name (FRONTEND for frontend, BACKEND for backend,
  959      any name for server/listener)
  960   2. qcur [..BS]: current queued requests. For the backend this reports the
  961      number queued without a server assigned.
  962   3. qmax [..BS]: max value of qcur
  963   4. scur [LFBS]: current sessions
  964   5. smax [LFBS]: max sessions
  965   6. slim [LFBS]: configured session limit
  966   7. stot [LFBS]: cumulative number of sessions
  967   8. bin [LFBS]: bytes in
  968   9. bout [LFBS]: bytes out
  969  10. dreq [LFB.]: requests denied because of security concerns.
  970      - For tcp this is because of a matched tcp-request content rule.
  971      - For http this is because of a matched http-request or tarpit rule.
  972  11. dresp [LFBS]: responses denied because of security concerns.
  973      - For http this is because of a matched http-request rule, or
  974        "option checkcache".
  975  12. ereq [LF..]: request errors. Some of the possible causes are:
  976      - early termination from the client, before the request has been sent.
  977      - read error from the client
  978      - client timeout
  979      - client closed connection
  980      - various bad requests from the client.
  981      - request was tarpitted.
  982  13. econ [..BS]: number of requests that encountered an error trying to
  983      connect to a backend server. The backend stat is the sum of the stat
  984      for all servers of that backend, plus any connection errors not
  985      associated with a particular server (such as the backend having no
  986      active servers).
  987  14. eresp [..BS]: response errors. srv_abrt will be counted here also.
  988      Some other errors are:
  989      - write error on the client socket (won't be counted for the server stat)
  990      - failure applying filters to the response.
  991  15. wretr [..BS]: number of times a connection to a server was retried.
  992  16. wredis [..BS]: number of times a request was redispatched to another
  993      server. The server value counts the number of times that server was
  994      switched away from.
  995  17. status [LFBS]: status (UP/DOWN/NOLB/MAINT/MAINT(via)/MAINT(resolution)...)
  996  18. weight [..BS]: total weight (backend), server weight (server)
  997  19. act [..BS]: number of active servers (backend), server is active (server)
  998  20. bck [..BS]: number of backup servers (backend), server is backup (server)
  999  21. chkfail [...S]: number of failed checks. (Only counts checks failed when
 1000      the server is up.)
 1001  22. chkdown [..BS]: number of UP->DOWN transitions. The backend counter counts
 1002      transitions to the whole backend being down, rather than the sum of the
 1003      counters for each server.
 1004  23. lastchg [..BS]: number of seconds since the last UP<->DOWN transition
 1005  24. downtime [..BS]: total downtime (in seconds). The value for the backend
 1006      is the downtime for the whole backend, not the sum of the server downtime.
 1007  25. qlimit [...S]: configured maxqueue for the server, or nothing if the
 1008      value is 0 (default, meaning no limit)
 1009  26. pid [LFBS]: process id (0 for first instance, 1 for second, ...)
 1010  27. iid [LFBS]: unique proxy id
 1011  28. sid [L..S]: server id (unique inside a proxy)
 1012  29. throttle [...S]: current throttle percentage for the server, when
 1013      slowstart is active, or no value if not in slowstart.
 1014  30. lbtot [..BS]: total number of times a server was selected, either for new
 1015      sessions, or when re-dispatching. The server counter is the number
 1016      of times that server was selected.
 1017  31. tracked [...S]: id of proxy/server if tracking is enabled.
 1018  32. type [LFBS]: (0=frontend, 1=backend, 2=server, 3=socket/listener)
 1019  33. rate [.FBS]: number of sessions per second over last elapsed second
 1020  34. rate_lim [.F..]: configured limit on new sessions per second
 1021  35. rate_max [.FBS]: max number of new sessions per second
 1022  36. check_status [...S]: status of last health check, one of:
 1023         UNK     -> unknown
 1024         INI     -> initializing
 1025         SOCKERR -> socket error
 1026         L4OK    -> check passed on layer 4, no upper layers testing enabled
 1027         L4TOUT  -> layer 1-4 timeout
 1028         L4CON   -> layer 1-4 connection problem, for example
 1029                    "Connection refused" (tcp rst) or "No route to host" (icmp)
 1030         L6OK    -> check passed on layer 6
 1031         L6TOUT  -> layer 6 (SSL) timeout
 1032         L6RSP   -> layer 6 invalid response - protocol error
 1033         L7OK    -> check passed on layer 7
 1034         L7OKC   -> check conditionally passed on layer 7, for example 404 with
 1035                    disable-on-404
 1036         L7TOUT  -> layer 7 (HTTP/SMTP) timeout
 1037         L7RSP   -> layer 7 invalid response - protocol error
 1038         L7STS   -> layer 7 response error, for example HTTP 5xx
 1039      Notice: If a check is currently running, the last known status will be
 1040      reported, prefixed with "* ". e. g. "* L7OK".
 1041  37. check_code [...S]: layer5-7 code, if available
38. check_duration [...S]: time in ms taken to finish last health check
39. hrsp_1xx [.FBS]: http responses with 1xx code
40. hrsp_2xx [.FBS]: http responses with 2xx code
41. hrsp_3xx [.FBS]: http responses with 3xx code
42. hrsp_4xx [.FBS]: http responses with 4xx code
43. hrsp_5xx [.FBS]: http responses with 5xx code
44. hrsp_other [.FBS]: http responses with other codes (protocol error)
45. hanafail [...S]: failed health checks details
46. req_rate [.F..]: HTTP requests per second over the last elapsed second
47. req_rate_max [.F..]: max number of HTTP requests per second observed
48. req_tot [.FB.]: total number of HTTP requests received
49. cli_abrt [..BS]: number of data transfers aborted by the client
50. srv_abrt [..BS]: number of data transfers aborted by the server
    (inc. in eresp)
51. comp_in [.FB.]: number of HTTP response bytes fed to the compressor
52. comp_out [.FB.]: number of HTTP response bytes emitted by the compressor
53. comp_byp [.FB.]: number of bytes that bypassed the HTTP compressor
    (CPU/BW limit)
54. comp_rsp [.FB.]: number of HTTP responses that were compressed
55. lastsess [..BS]: number of seconds since last session assigned to
    server/backend
56. last_chk [...S]: last health check contents or textual error
57. last_agt [...S]: last agent check contents or textual error
58. qtime [..BS]: the average queue time in ms over the 1024 last requests
59. ctime [..BS]: the average connect time in ms over the 1024 last requests
60. rtime [..BS]: the average response time in ms over the 1024 last requests
    (0 for TCP)
61. ttime [..BS]: the average total session time in ms over the 1024 last
    requests
62. agent_status [...S]: status of last agent check, one of:
       UNK     -> unknown
       INI     -> initializing
       SOCKERR -> socket error
       L4OK    -> check passed on layer 4, no upper layers testing enabled
       L4TOUT  -> layer 1-4 timeout
       L4CON   -> layer 1-4 connection problem, for example
                  "Connection refused" (tcp rst) or "No route to host" (icmp)
       L7OK    -> agent reported "up"
       L7STS   -> agent reported "fail", "stop", or "down"
63. agent_code [...S]: numeric code reported by agent if any (unused for now)
64. agent_duration [...S]: time in ms taken to finish last check
65. check_desc [...S]: short human-readable description of check_status
66. agent_desc [...S]: short human-readable description of agent_status
67. check_rise [...S]: server's "rise" parameter used by checks
68. check_fall [...S]: server's "fall" parameter used by checks
69. check_health [...S]: server's health check value between 0 and rise+fall-1
70. agent_rise [...S]: agent's "rise" parameter, normally 1
71. agent_fall [...S]: agent's "fall" parameter, normally 1
72. agent_health [...S]: agent's health parameter, between 0 and rise+fall-1
73. addr [L..S]: address:port or "unix". IPv6 has brackets around the address.
74. cookie [..BS]: server's cookie value or backend's cookie name
75. mode [LFBS]: proxy mode (tcp, http, health, unknown)
76. algo [..B.]: load balancing algorithm
77. conn_rate [.F..]: number of connections over the last elapsed second
78. conn_rate_max [.F..]: highest known conn_rate
79. conn_tot [.F..]: cumulative number of connections
80. intercepted [.FB.]: cum. number of intercepted requests (monitor, stats)
81. dcon [LF..]: requests denied by "tcp-request connection" rules
82. dses [LF..]: requests denied by "tcp-request session" rules
83. wrew [LFBS]: cumulative number of failed header rewriting warnings
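As a sketch (not part of haproxy itself), the CSV output described above can be
parsed by field name rather than by position, which keeps scripts working when
new fields are appended; the two-line sample below is hypothetical:

```python
import csv
import io

def parse_show_stat(raw):
    """Parse the CSV output of "show stat" into a list of dicts.

    The first line starts with "# " followed by the comma-separated field
    names; every following non-empty line is one frontend/backend/server
    entry ending with a trailing comma. Looking fields up by name (e.g.
    "status" or "wretr") is more robust than using positions, since new
    fields may be appended in future versions."""
    header_and_rows = raw.lstrip("# ")      # drop the leading "# " marker
    reader = csv.DictReader(io.StringIO(header_and_rows))
    return [row for row in reader if row.get("pxname")]

# Hypothetical two-line excerpt of a "show stat" dump:
sample = ("# pxname,svname,status,wretr,\n"
          "www,srv1,UP,3,\n")
rows = parse_show_stat(sample)
# rows[0]["status"] == "UP", rows[0]["wretr"] == "3"
```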
9.2. Typed output format
------------------------

Both "show info" and "show stat" support a mode where each output value comes
with its type and sufficient information to know how the value is supposed to
be aggregated between processes and how it evolves.

In all cases, the output consists of a single value per line, with all the
information split into fields delimited by colons (':').

The first column designates the object or metric being dumped. Its format is
specific to the command producing this output and will not be described in this
section. Usually it will consist of a series of identifiers and field names.
The second column contains 3 characters respectively indicating the origin, the
nature and the scope of the value being reported. The first character (the
origin) indicates where the value was extracted from. Possible characters are :

  M   The value is a metric. It is valid at one instant and may change
      depending on its nature.

  S   The value is a status. It represents a discrete value which by definition
      cannot be aggregated. It may be the status of a server ("UP" or "DOWN"),
      the PID of the process, etc.

  K   The value is a sorting key. It represents an identifier which may be used
      to group some values together because it is unique among its class. All
      internal identifiers are keys. Some names can be listed as keys if they
      are unique (eg: a frontend name is unique). In general keys come from the
      configuration, even though some of them may automatically be assigned. For
      most purposes keys may be considered as equivalent to configuration.

  C   The value comes from the configuration. Certain configuration values make
      sense on the output, for example a concurrent connection limit or a cookie
      name. By definition these values are the same in all processes started
      from the same configuration file.

  P   The value comes from the product itself. There are very few such values,
      the most common use is to report the product name, version and release
      date. These elements are also the same between all processes.
The second character (the nature) indicates the nature of the information
carried by the field in order to let an aggregator decide on what operation to
use to aggregate multiple values. Possible characters are :

  A   The value represents an age since a last event. This is a bit different
      from a duration in that an age is automatically computed based on the
      current date. A typical example is how long ago the last session happened
      on a server. Ages are generally aggregated by taking the minimum value
      and do not need to be stored.

  a   The value represents an already averaged value. The average response times
      and server weights are of this nature. Averages can typically be averaged
      between processes.

  C   The value represents a cumulative counter. Such measures perpetually
      increase until they wrap around. Some monitoring protocols need to tell
      the difference between a counter and a gauge to report a different type.
      In general counters may simply be summed since they represent events or
      volumes. Examples of metrics of this nature are connection counts or byte
      counts.

  D   The value represents a duration for a status. There are a few usages of
      this, most of them include the time taken by the last health check and
      the time a server has spent down. Durations are generally not summed,
      most of the time the maximum will be retained to compute an SLA.

  G   The value represents a gauge. It's a measure at one instant. The memory
      usage or the current number of active connections are of this nature.
      Metrics of this type are typically summed during aggregation.

  L   The value represents a limit (generally a configured one). By nature,
      limits are harder to aggregate since they are specific to the point where
      they were retrieved. In certain situations they may be summed or be kept
      separate.

  M   The value represents a maximum. In general it will apply to a gauge and
      keep the highest known value. An example of such a metric could be the
      maximum amount of concurrent connections that was encountered in the
      product's life time. To correctly aggregate maxima, you are supposed to
      output a range going from the maximum of all maxima to the sum of all
      of them. There is indeed no way to know if they were encountered
      simultaneously or not.

  m   The value represents a minimum. In general it will apply to a gauge and
      keep the lowest known value. An example of such a metric could be the
      minimum amount of free memory pools that was encountered in the product's
      life time. To correctly aggregate minima, you are supposed to output a
      range going from the minimum of all minima to the sum of all of them.
      There is indeed no way to know if they were encountered simultaneously
      or not.

  N   The value represents a name, so it is a string. It is used to report
      proxy names, server names and cookie names. Names have configuration or
      keys as their origin and are supposed to be the same among all processes.

  O   The value represents a free text output. Outputs from various commands,
      returns from health checks, node descriptions are of such nature.

  R   The value represents an event rate. It's a measure at one instant. It is
      quite similar to a gauge except that the recipient knows that this measure
      moves slowly and may decide not to keep all values. An example of such a
      metric is the measured amount of connections per second. Metrics of this
      type are typically summed during aggregation.

  T   The value represents a date or time. A field emitting the current date
      would be of this type. The method to aggregate such information is left
      as an implementation choice. For now no field uses this type.
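As an illustration of the rule given above for the 'M' (maximum) nature, a set
of per-process maxima can only be aggregated into a range, since there is no
way to know whether the peaks occurred simultaneously. A minimal sketch:

```python
def aggregate_maxima(per_process_maxima):
    """Aggregate per-process maxima of a gauge, as described for the 'M'
    nature: the true global maximum lies somewhere between the highest
    individual maximum (lower bound, reached by at least one process) and
    the sum of all maxima (upper bound, if every process peaked at the
    same moment)."""
    return max(per_process_maxima), sum(per_process_maxima)

# Three processes reporting peak concurrent connections of 120, 80 and 100:
low, high = aggregate_maxima([120, 80, 100])
# the global peak was somewhere between 120 and 300
```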
The third character (the scope) indicates what extent the value reflects. Some
elements may be per process while others may be per configuration or per system.
The distinction is important to know whether or not a single value should be
kept during aggregation or if values have to be aggregated. The following
characters are currently supported :

  C   The value is valid for a whole cluster of nodes, which is the set of nodes
      communicating over the peers protocol. An example could be the amount of
      entries present in a stick table that is replicated with other peers. At
      the moment no metric uses this scope.

  P   The value is valid only for the process reporting it. Most metrics use
      this scope.

  S   The value is valid for the whole service, which is the set of processes
      started together from the same configuration file. All metrics originating
      from the configuration use this scope. Some other metrics may use it as
      well for some shared resources (eg: shared SSL cache statistics).

  s   The value is valid for the whole system, such as the system's hostname,
      current date or resource usage. At the moment this scope is not used by
      any metric.
Consumers of this information will generally have enough with these 3
characters to determine how to accurately report aggregated information across
multiple processes.

The third column indicates the type of the field, among "s32" (signed 32-bit
integer), "s64" (signed 64-bit integer), "u32" (unsigned 32-bit integer),
"u64" (unsigned 64-bit integer) and "str" (string). It is important to know
the type before parsing the value in order to properly read it. For example,
a string containing only digits is still a string and not an integer (eg: an
error code extracted by a check).

Then the fourth column is the value itself, encoded according to its type.
Strings are dumped as-is immediately after the colon without any leading space.
If a string contains a colon, it will appear normally. This means that the
output should not be exclusively split around colons or some check outputs
or server addresses might be truncated.
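Putting the four columns together, a line of typed output can be split with at
most three colon splits, so that values which themselves contain colons (check
outputs, server addresses) survive intact. The sketch below uses made-up field
names and values; real first-column formats depend on the command:

```python
def parse_typed_line(line):
    """Split one line of "show info typed"/"show stat typed" output into
    (name, {origin, nature, scope}, type, value). Only the first three
    colons are delimiters: the value may contain colons, so we split at
    most three times."""
    name, ons, ftype, value = line.split(":", 3)
    meta = {"origin": ons[0], "nature": ons[1], "scope": ons[2]}
    if ftype in ("s32", "s64", "u32", "u64"):
        value = int(value)                  # numeric types are integers
    return name, meta, ftype, value

# Hypothetical lines (field names and values are illustrative only):
name, meta, ftype, value = parse_typed_line("7.CurrConns.0:MGP:u32:42")
# value == 42, meta == {"origin": "M", "nature": "G", "scope": "P"}
addr = parse_typed_line("2.srv.1.addr:KNS:str:[2001:db8::1]:8080")
# addr[3] == "[2001:db8::1]:8080" -- colons in the value are preserved
```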
9.3. Unix Socket commands
-------------------------

The stats socket is not enabled by default. In order to enable it, it is
necessary to add one line in the global section of the haproxy configuration.
A second line is recommended to set a larger timeout, always appreciated when
issuing commands by hand :

    global
        stats socket /var/run/haproxy.sock mode 600 level admin
        stats timeout 2m

It is also possible to add multiple instances of the stats socket by repeating
the line, and make them listen to a TCP port instead of a UNIX socket. This is
never done by default because this is dangerous, but can be handy in some
situations :

    global
        stats socket /var/run/haproxy.sock mode 600 level admin
        stats socket ipv4@ level admin
        stats timeout 2m

To access the socket, an external utility such as "socat" is required. Socat is
a swiss-army knife to connect anything to anything. We use it to connect
terminals to the socket, or a couple of stdin/stdout pipes to it for scripts.
The two main syntaxes we'll use are the following :

    # socat /var/run/haproxy.sock stdio
    # socat /var/run/haproxy.sock readline

The first one is used with scripts. It is possible to send the output of a
script to haproxy, and pass haproxy's output to another script. That's useful
for retrieving counters or attack traces for example.

The second one is only useful for issuing commands by hand. It has the benefit
that the terminal is handled by the readline library which supports line
editing and history, which is very convenient when issuing repeated commands
(eg: watch a counter).
The socket supports two operation modes :
  - interactive
  - non-interactive

The non-interactive mode is the default when socat connects to the socket. In
this mode, a single line may be sent. It is processed as a whole, responses are
sent back, and the connection closes after the end of the response. This is the
mode that scripts and monitoring tools use. It is possible to send multiple
commands in this mode, but they need to be delimited by a semi-colon (';').
For example :

    # echo "show info;show stat;show table" | socat /var/run/haproxy stdio

If a semi-colon or a backslash has to appear inside a command (eg: in a
value), it must be preceded by a backslash ('\').

The interactive mode displays a prompt ('>') and waits for commands to be
entered on the line, then processes them, and displays the prompt again to wait
for a new command. This mode is entered via the "prompt" command which must be
sent on the first line in non-interactive mode. The mode is a flip switch : if
"prompt" is sent in interactive mode, it is disabled and the connection closes
after processing the last command of the same line.

For this reason, when debugging by hand, it's quite common to start with the
"prompt" command :

   # socat /var/run/haproxy readline
   prompt
   > show info
   ...
   >

Since multiple commands may be issued at once, haproxy uses the empty line as a
delimiter to mark an end of output for each command, and takes care of ensuring
that no command can emit an empty line on output. A script can thus easily
parse the output even when multiple commands were pipelined on a single line.
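The empty-line delimiter makes the non-interactive mode easy to script. The
sketch below (the socket path and commands in the usage line are illustrative)
pipelines commands over the UNIX socket and splits the combined response; the
splitting helper is pure so it can be shown on its own:

```python
import socket

def split_responses(raw):
    """Split the combined output of pipelined commands on the empty line
    that terminates each command's output. Since no command may emit an
    empty line itself, the split is unambiguous."""
    return [part for part in raw.split("\n\n") if part]

def send_cli(path, *commands):
    """Send commands in non-interactive mode: they are joined with ';',
    sent as a single line, and the connection closes once the whole
    response has been delivered."""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.sendall((";".join(commands) + "\n").encode())
    chunks = []
    while True:
        chunk = s.recv(4096)
        if not chunk:            # haproxy closed: response is complete
            break
        chunks.append(chunk)
    s.close()
    return split_responses(b"".join(chunks).decode())

# e.g. info, stat = send_cli("/var/run/haproxy.sock", "show info", "show stat")
```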
Some commands may take an optional payload. To add one to a command, the first
line needs to end with the "<<\n" pattern. The next lines will be treated as
the payload and can contain as many lines as needed. To validate a command with
a payload, it needs to end with an empty line.

Limitations do exist: the length of the whole buffer passed to the CLI must
not be greater than tune.bufsize and the pattern "<<" must not be glued to the
last word of the line.

When entering a payload while in interactive mode, the prompt will change from
"> " to "+ ".
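For scripts, the payload rules above (first line ending with " <<", one payload
line per entry, a terminating empty line) can be captured in a small helper;
the "#-1" map reference in the usage line is illustrative:

```python
def payload_command(command, payload_lines):
    """Build a CLI command carrying a payload: the first line must end
    with "<<" separated from the last word by a space, the payload lines
    follow, and an empty line validates the command. The whole buffer
    must stay below tune.bufsize."""
    return command + " <<\n" + "\n".join(payload_lines) + "\n\n"

# e.g. refilling a map after "clear map" (the #-1 reference is illustrative):
msg = payload_command("add map #-1", ["key1 value1", "key2 value2"])
# msg == "add map #-1 <<\nkey1 value1\nkey2 value2\n\n"
```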
It is important to understand that when multiple haproxy processes are started
on the same sockets, any process may pick up the request and will output its
own stats.
The list of commands currently supported on the stats socket is provided below.
If an unknown command is sent, haproxy displays the usage message which lists
all supported commands. Some commands support a more complex syntax; generally
the error message will explain which part of the command is invalid when this
happens.

Some commands require a higher level of privilege to work. If you do not have
enough privilege, you will get an error "Permission denied". Please check
the "level" option of the "bind" keyword lines in the configuration manual
for more information.
add acl <acl> <pattern>
  Add an entry into the acl <acl>. <acl> is the #<id> or the <file> returned by
  "show acl". This command does not verify if the entry already exists. This
  command cannot be used if the reference <acl> is a file also used with a map.
  In this case, you must use the command "add map" in place of "add acl".

add map <map> <key> <value>
add map <map> <payload>
  Add an entry into the map <map> to associate the value <value> to the key
  <key>. This command does not verify if the entry already exists. It is
  mainly used to fill a map after a clear operation. Note that if the reference
  <map> is a file shared with an acl, the acl will also contain the new
  pattern entry. Using the payload syntax it is possible to add multiple
  key/value pairs by entering them on separate lines. On each new line, the
  first word is the key and the rest of the line is considered to be the value,
  which can even contain spaces.

  Example:

    # socat /tmp/sock1 -
    prompt

    > add map #-1 <<
    + key1 value1
    + key2 value2 with spaces
    + key3 value3 also with spaces
    + key4 value4

    >
clear counters
  Clear the max values of the statistics counters in each proxy (frontend &
  backend) and in each server. The accumulated counters are not affected. The
  internal activity counters reported by "show activity" are also reset. This
  can be used to get clean counters after an incident, without having to
  restart nor to clear traffic counters. This command is restricted and can
  only be issued on sockets configured for levels "operator" or "admin".

clear counters all
  Clear all statistics counters in each proxy (frontend & backend) and in each
  server. This has the same effect as restarting. This command is restricted
  and can only be issued on sockets configured for level "admin".

clear acl <acl>
  Remove all entries from the acl <acl>. <acl> is the #<id> or the <file>
  returned by "show acl". Note that if the reference <acl> is a file and is
  shared with a map, this map will also be cleared.

clear map <map>
  Remove all entries from the map <map>. <map> is the #<id> or the <file>
  returned by "show map". Note that if the reference <map> is a file and is
  shared with an acl, this acl will also be cleared.

clear table <table> [ data.<type> <operator> <value> ] | [ key <key> ]
  Remove entries from the stick-table <table>.

  This is typically used to unblock some users complaining they have been
  abusively denied access to a service, but this can also be used to clear some
  stickiness entries matching a server that is going to be replaced (see "show
  table" below for details). Note that sometimes, removal of an entry will be
  refused because it is currently tracked by a session. Retrying a few seconds
  later after the session ends is usually enough.

  If no option arguments are given, all entries will be removed.

  When the "data." form is used, entries matching a filter applied using the
  stored data (see "stick-table" in section 4.2) are removed. A stored data
  type must be specified in <type>, and this data type must be stored in the
  table otherwise an error is reported. The data is compared according to
  <operator> with the 64-bit integer <value>. Operators are the same as with
  the ACLs :

    - eq : match entries whose data is equal to this value
    - ne : match entries whose data is not equal to this value
    - le : match entries whose data is less than or equal to this value
    - ge : match entries whose data is greater than or equal to this value
    - lt : match entries whose data is less than this value
    - gt : match entries whose data is greater than this value

  When the key form is used, the entry <key> is removed. The key must be of the
  same type as the table, which currently is limited to IPv4, IPv6, integer and
  string.

  Example :
        $ echo "show table http_proxy" | socat stdio /tmp/sock1
    >>> # table: http_proxy, type: ip, size:204800, used:2
    >>> 0x80e6a4c: key= use=0 exp=3594729 gpc0=0 conn_rate(30000)=1  \
          bytes_out_rate(60000)=187
    >>> 0x80e6a80: key= use=0 exp=3594740 gpc0=1 conn_rate(30000)=10 \
          bytes_out_rate(60000)=191

        $ echo "clear table http_proxy key" | socat stdio /tmp/sock1

        $ echo "show table http_proxy" | socat stdio /tmp/sock1
    >>> # table: http_proxy, type: ip, size:204800, used:1
    >>> 0x80e6a80: key= use=0 exp=3594740 gpc0=1 conn_rate(30000)=10 \
          bytes_out_rate(60000)=191
        $ echo "clear table http_proxy data.gpc0 eq 1" | socat stdio /tmp/sock1
        $ echo "show table http_proxy" | socat stdio /tmp/sock1
    >>> # table: http_proxy, type: ip, size:204800, used:1
debug dev <command> [args]*
  Call a developer-specific command. Only supported when haproxy is built with
  DEBUG_DEV defined. Supported commands are then listed in the help message.
  All of these commands require admin privileges, and must never appear on a
  production system as most of them are unsafe and dangerous.

del acl <acl> [<key>|#<ref>]
  Delete all the acl entries from the acl <acl> corresponding to the key <key>.
  <acl> is the #<id> or the <file> returned by "show acl". If <ref> is used,
  this command deletes only the listed reference. The reference can be found by
  listing the contents of the acl. Note that if the reference <acl> is a file
  shared with a map, the entry will also be deleted in the map.

del map <map> [<key>|#<ref>]
  Delete all the map entries from the map <map> corresponding to the key <key>.
  <map> is the #<id> or the <file> returned by "show map". If <ref> is used,
  this command deletes only the listed reference. The reference can be found by
  listing the contents of the map. Note that if the reference <map> is a file
  shared with an acl, the entry will also be deleted in the acl.
disable agent <backend>/<server>
  Mark the auxiliary agent check as temporarily stopped.

  In the case where an agent check is being run as an auxiliary check, due
  to the agent-check parameter of a server directive, new checks are only
  initialized when the agent is in the enabled state. Thus, disable agent
  will prevent any new agent checks from being initiated until the agent is
  re-enabled using enable agent.

  When an agent is disabled the processing of an auxiliary agent check that
  was initiated while the agent was set as enabled is as follows: All
  results that would alter the weight, specifically "drain" or a weight
  returned by the agent, are ignored. The processing of the agent check is
  otherwise unchanged.

  The motivation for this feature is to allow the weight changing effects
  of the agent checks to be paused, to allow the weight of a server to be
  configured using set weight without being overridden by the agent.

  This command is restricted and can only be issued on sockets configured for
  level "admin".
disable dynamic-cookie backend <backend>
  Disable the generation of dynamic cookies for the backend <backend>.

disable frontend <frontend>
  Mark the frontend as temporarily stopped. This corresponds to the mode which
  is used during a soft restart : the frontend releases the port but can be
  enabled again if needed. This should be used with care as some non-Linux OSes
  are unable to enable it back. This is intended to be used in environments
  where stopping a proxy is not even imaginable but a misconfigured proxy must
  be fixed. That way it's possible to release the port and bind it into another
  process to restore operations. The frontend will appear with status "STOP"
  on the stats page.

  The frontend may be specified either by its name or by its numeric ID,
  prefixed with a sharp ('#').

  This command is restricted and can only be issued on sockets configured for
  level "admin".

disable health <backend>/<server>
  Mark the primary health check as temporarily stopped. This will disable
  sending of health checks, and the last health check result will be ignored.
  The server will be in unchecked state and considered UP unless an auxiliary
  agent check forces it down.

  This command is restricted and can only be issued on sockets configured for
  level "admin".

disable server <backend>/<server>
  Mark the server DOWN for maintenance. In this mode, no more checks will be
  performed on the server until it leaves maintenance.
  If the server is tracked by other servers, those servers will be set to DOWN
  during the maintenance.

  In the statistics page, a server DOWN for maintenance will appear with a
  "MAINT" status, its tracking servers with the "MAINT(via)" one.

  Both the backend and the server may be specified either by their name or by
  their numeric ID, prefixed with a sharp ('#').

  This command is restricted and can only be issued on sockets configured for
  level "admin".
enable agent <backend>/<server>
  Resume an auxiliary agent check that was temporarily stopped.

  See "disable agent" for details of the effect of temporarily starting
  and stopping an auxiliary agent.

  This command is restricted and can only be issued on sockets configured for
  level "admin".

enable dynamic-cookie backend <backend>
  Enable the generation of dynamic cookies for the backend <backend>.
  A secret key must also be provided.

enable frontend <frontend>
  Resume a frontend which was temporarily stopped. It is possible that some of
  the listening ports won't be able to bind anymore (eg: if another process
  took them since the 'disable frontend' operation). If this happens, an error
  is displayed. Some operating systems might not be able to resume a frontend
  which was disabled.

  The frontend may be specified either by its name or by its numeric ID,
  prefixed with a sharp ('#').

  This command is restricted and can only be issued on sockets configured for
  level "admin".

enable health <backend>/<server>
  Resume a primary health check that was temporarily stopped. This will enable
  sending of health checks again. Please see "disable health" for details.

  This command is restricted and can only be issued on sockets configured for
  level "admin".

enable server <backend>/<server>
  If the server was previously marked as DOWN for maintenance, this marks the
  server UP and checks are re-enabled.

  Both the backend and the server may be specified either by their name or by
  their numeric ID, prefixed with a sharp ('#').

  This command is restricted and can only be issued on sockets configured for
  level "admin".
 1585 get map <map> <value>
 1586 get acl <acl> <value>
 1587   Lookup the value <value> in the map <map> or in the ACL <acl>. <map> or <acl>
 1588   are the #<id> or the <file> returned by "show map" or "show acl". This command
 1589   returns all the matching patterns associated with this map. This is useful for
 1590   debugging maps and ACLs. The output format is composed by one line par
 1591   matching type. Each line is composed by space-delimited series of words.
 1593   The first two words are:
 1595      <match method>:   The match method applied. It can be "found", "bool",
 1596                        "int", "ip", "bin", "len", "str", "beg", "sub", "dir",
 1597                        "dom", "end" or "reg".
 1599      <match result>:   The result. Can be "match" or "no-match".
 1601   The following words are returned only if the pattern matches an entry.
 1603      <index type>:     "tree" or "list". The internal lookup algorithm.
 1605      <case>:           "case-insensitive" or "case-sensitive". The
 1606                        interpretation of the case.
 1608      <entry matched>:  match="<entry>". Return the matched pattern. It is
 1609                        useful with regular expressions.
 1611   The last two words show the returned value and its type. In the "acl"
 1612   case, the pattern has no associated value.
 1614      return=nothing:        Nothing is returned because this is not a map.
 1615      return="<value>":      The value returned in the string format.
 1616      return=cannot-display: The value cannot be converted as string.
 1618      type="<type>":         The type of the returned sample.
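  As a quick illustration, a lookup can be issued through the stats socket like
  this (the map file name, socket path and the sample answer shown in the
  comment are hypothetical):

```shell
SOCK=/var/run/haproxy.sock                # assumed stats socket path
cmd='get map /etc/haproxy/hosts.map'

# Against a live process, an answer following the format described above
# could look like:
#   str match tree case-insensitive match="" return="bk_app" type="str"
# echo "$cmd" | socat stdio "$SOCK"
echo "$cmd"
```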
 1620 get weight <backend>/<server>
 1621   Report the current weight and the initial weight of server <server> in
 1622   backend <backend> or an error if either doesn't exist. The initial weight is
 1623   the one that appears in the configuration file. Both are normally equal
 1624   unless the current weight has been changed. Both the backend and the server
 1625   may be specified either by their name or by their numeric ID, prefixed with a
 1626   sharp ('#').
 1628 help
 1629   Print the list of known keywords and their basic usage. The same help screen
 1630   is also displayed for unknown commands.
 1632 prompt
 1633   Toggle the prompt at the beginning of the line and enter or leave interactive
 1634   mode. In interactive mode, the connection is not closed after a command
 1635   completes. Instead, the prompt will appear again, indicating to the user
 1636   that the interpreter is waiting for a new command. The prompt consists of a
 1637   right angle bracket followed by a space "> ". This mode is particularly
 1638   convenient for periodically checking information such as stats or errors.
 1639   It is also a good idea to enter interactive mode before issuing a "help"
 1640   command.
 1642 quit
 1643   Close the connection when in interactive mode.
 1645 set dynamic-cookie-key backend <backend> <value>
 1646   Modify the secret key used to generate the dynamic persistent cookies.
 1647   This will break the existing sessions.
 1649 set map <map> [<key>|#<ref>] <value>
 1650   Modify the value corresponding to each key <key> in a map <map>. <map> is the
 1651   #<id> or <file> returned by "show map". If the <ref> is used in place of
 1652   <key>, only the entry pointed by <ref> is changed. The new value is <value>.
 1654 set maxconn frontend <frontend> <value>
 1655   Dynamically change the specified frontend's maxconn setting. Any positive
 1656   value is allowed including zero, but setting values larger than the global
 1657   maxconn does not make much sense. If the limit is increased and connections
 1658   were pending, they will immediately be accepted. If it is lowered to a value
 1659   below the current number of connections, acceptance of new connections will
 1660   be delayed until the threshold is reached. The frontend may be specified by
 1661   either its name or its numeric ID prefixed with a sharp ('#').
 1663 set maxconn server <backend/server> <value>
 1664   Dynamically change the specified server's maxconn setting. Any positive
 1665   value is allowed including zero, but setting values larger than the global
 1666   maxconn does not make much sense.
 1668 set maxconn global <maxconn>
 1669   Dynamically change the global maxconn setting within the range defined by the
 1670   initial global maxconn setting. If it is increased and connections were
 1671   pending, they will immediately be accepted. If it is lowered to a value below
 1672   the current number of connections, acceptance of new connections will be
 1673   delayed until the threshold is reached. A value of zero restores the initial
 1674   setting.
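  For example, to temporarily throttle the process and later restore its
  initial limit (the socket path is an assumption), the sequence could be:

```shell
SOCK=/var/run/haproxy.sock        # assumed stats socket path
lower='set maxconn global 1000'   # lower the limit within the initial range
restore='set maxconn global 0'    # zero restores the initial global maxconn

# echo "$lower"   | socat stdio "$SOCK"
# ... reduced load period ...
# echo "$restore" | socat stdio "$SOCK"
echo "$lower"
```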
 1676 set profiling { tasks } { auto | on | off }
 1677   Enables or disables CPU profiling for the indicated subsystem. This is
 1678   equivalent to setting or clearing the "profiling" settings in the "global"
 1679   section of the configuration file. Please also see "show profiling".
 1681 set rate-limit connections global <value>
 1682   Change the process-wide connection rate limit, which is set by the global
 1683   'maxconnrate' setting. A value of zero disables the limitation. This limit
 1684   applies to all frontends and the change has an immediate effect. The value
 1685   is passed in number of connections per second.
 1687 set rate-limit http-compression global <value>
 1688   Change the maximum input compression rate, which is set by the global
 1689   'maxcomprate' setting. A value of zero disables the limitation. The value is
 1690   passed in number of kilobytes per second. The value is available in the "show
 1691   info" on the line "CompressBpsRateLim" in bytes.
 1693 set rate-limit sessions global <value>
 1694   Change the process-wide session rate limit, which is set by the global
 1695   'maxsessrate' setting. A value of zero disables the limitation. This limit
 1696   applies to all frontends and the change has an immediate effect. The value
 1697   is passed in number of sessions per second.
 1699 set rate-limit ssl-sessions global <value>
 1700   Change the process-wide SSL session rate limit, which is set by the global
 1701   'maxsslrate' setting. A value of zero disables the limitation. This limit
 1702   applies to all frontends and the change has an immediate effect. The value
 1703   is passed in number of sessions per second sent to the SSL stack. It applies
 1704   before the handshake in order to protect the stack against handshake abuses.
 1706 set server <backend>/<server> addr <ip4 or ip6 address> [port <port>]
 1707   Replace the current IP address of a server by the one provided.
 1708   Optionally, the port can be changed using the 'port' parameter.
 1709   Note that changing the port also supports switching from/to port mapping
 1710   (notation with +X or -Y), but only if a health check port is configured.
 1712 set server <backend>/<server> agent [ up | down ]
 1713   Force a server's agent to a new state. This can be useful to immediately
 1714   switch a server's state regardless of slow agent checks, for example.
 1715   Note that the change is propagated to tracking servers if any.
 1717 set server <backend>/<server> agent-addr <addr>
 1718   Change the address used for the server's agent checks. This allows
 1719   migrating agent checks to another address at runtime. Either an IP address
 1720   or a hostname may be specified; a hostname will be resolved.
 1722 set server <backend>/<server> agent-send <value>
 1723   Change the string sent to the agent check target. This allows updating the
 1724   string while changing the server address, so that the two stay consistent.
 1726 set server <backend>/<server> health [ up | stopping | down ]
 1727   Force a server's health to a new state. This can be useful to immediately
 1728   switch a server's state regardless of slow health checks, for example.
 1729   Note that the change is propagated to tracking servers if any.
 1731 set server <backend>/<server> check-port <port>
 1732   Change the port used for health checking to <port>.
 1734 set server <backend>/<server> state [ ready | drain | maint ]
 1735   Force a server's administrative state to a new state. This can be useful to
 1736   disable load balancing and/or any traffic to a server. Setting the state to
 1737   "ready" puts the server in normal mode, and the command is the equivalent of
 1738   the "enable server" command. Setting the state to "maint" disables any traffic
 1739   to the server as well as any health checks. This is the equivalent of the
 1740   "disable server" command. Setting the mode to "drain" only removes the server
 1741   from load balancing but still allows it to be checked and to accept new
 1742   persistent connections. Changes are propagated to tracking servers if any.
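  A common maintenance sequence using these states (the backend/server names
  and socket path below are assumptions) looks like:

```shell
SOCK=/var/run/haproxy.sock          # assumed stats socket path
SRV=bk_app/srv1                     # hypothetical <backend>/<server> pair

drain="set server $SRV state drain" # stop balancing; checks/persistence go on
maint="set server $SRV state maint" # stop all traffic and health checks
ready="set server $SRV state ready" # return to normal operation

# echo "$drain" | socat stdio "$SOCK"   # then maint, then ready when done
echo "$drain"
```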
 1744 set server <backend>/<server> weight <weight>[%]
 1745   Change a server's weight to the value passed in argument. This is the exact
 1746   equivalent of the "set weight" command below.
 1748 set server <backend>/<server> fqdn <FQDN>
 1749   Change a server's FQDN to the value passed in argument. This requires the
 1750   internal run-time DNS resolver to be configured and enabled for this server.
 1752 set severity-output [ none | number | string ]
 1753   Change the severity output format of this stats socket connection for the
 1754   duration of the current session.
 1756 set ssl ocsp-response <response | payload>
 1757   This command is used to update an OCSP Response for a certificate (see "crt"
 1758   on "bind" lines). Same controls are performed as during the initial loading of
 1759   the response. The <response> must be passed as a base64 encoded string of the
 1760   DER encoded response from the OCSP server. This command is not supported with
 1761   BoringSSL.
 1763   Example:
 1764     openssl ocsp -issuer issuer.pem -cert server.pem \
 1765                  -host ocsp.issuer.com:80 -respout resp.der
 1766     echo "set ssl ocsp-response $(base64 -w 10000 resp.der)" | \
 1767                  socat stdio /var/run/haproxy.stat
 1769     using the payload syntax:
 1770     echo -e "set ssl ocsp-response <<\n$(base64 resp.der)\n" | \
 1771                  socat stdio /var/run/haproxy.stat
 1773 set ssl tls-key <id> <tlskey>
 1774   Set the next TLS key for the <id> listener to <tlskey>. This key becomes the
 1775   ultimate key, while the penultimate one is used for encryption (others just
 1776   decrypt). The oldest TLS key present is overwritten. <id> is either a numeric
 1777   #<id> or <file> returned by "show tls-keys". <tlskey> is a base64 encoded 48
 1778   or 80 bytes TLS ticket key (ex. openssl rand 80 | openssl base64 -A).
 1780 set table <table> key <key> [data.<data_type> <value>]*
 1781   Create or update a stick-table entry in the table. If the key is not present,
 1782   an entry is inserted. See stick-table in section 4.2 to find all possible
 1783   values for <data_type>. The most likely use consists in dynamically entering
 1784   entries for source IP addresses, with a flag in gpc0 to dynamically block an
 1785   IP address or affect its quality of service. It is possible to pass multiple
 1786   data_types in a single call.
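  For instance, to flag a source address in a stick table so that the
  configuration can act on gpc0 (the table name, key and socket path below
  are hypothetical):

```shell
SOCK=/var/run/haproxy.sock                    # assumed stats socket path
cmd='set table st_src_track key data.gpc0 1'  # hypothetical table and key

# A configuration could then act on the flag, e.g.:
#   tcp-request connection reject if { src_get_gpc0(st_src_track) gt 0 }
# echo "$cmd" | socat stdio "$SOCK"
echo "$cmd"
```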
 1788 set timeout cli <delay>
 1789   Change the CLI interface timeout for the current connection. This can be
 1790   useful during long debugging sessions where the user keeps checking some
 1791   indicators without being disconnected. The delay is passed in seconds.
 1793 set weight <backend>/<server> <weight>[%]
 1794   Change a server's weight to the value passed in argument. If the value ends
 1795   with the '%' sign, then the new weight will be relative to the initially
 1796   configured weight.  Absolute weights are permitted between 0 and 256.
 1797   Relative weights must be positive, and the resulting absolute weight is
 1798   capped at 256.  Servers which are part of a farm running a static
 1799   load-balancing algorithm have stricter limitations because the weight
 1800   cannot change once set. Thus for these servers, the only accepted values
 1801   are 0 and 100% (or 0 and the initial weight). Changes take effect
 1802   immediately, though certain LB algorithms require a certain number of
 1803   requests before taking the change into account. A typical usage of this
 1804   command is to disable a server during an update by setting its weight to
 1805   zero, then to enable it again after the update by setting it back to 100%.
 1806   This command is restricted and can only be issued on sockets configured
 1807   for level "admin". Both the backend and the server may be specified either
 1808   by their name or by their numeric ID, prefixed with a sharp ('#').
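  The typical update sequence described above can be scripted as follows
  (the backend/server names and socket path are assumptions):

```shell
SOCK=/var/run/haproxy.sock          # assumed stats socket path
SRV=bk_app/srv1                     # hypothetical <backend>/<server> pair

drain="set weight $SRV 0"           # take the server out of rotation
restore="set weight $SRV 100%"      # return to the initial weight

# echo "$drain"   | socat stdio "$SOCK"
# ... perform the update on the server ...
# echo "$restore" | socat stdio "$SOCK"
echo "$restore"
```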
 1810 show acl [<acl>]
 1811   Dump info about acl converters. Without argument, the list of all available
 1812   acls is returned. If an <acl> is specified, its contents are dumped. <acl>
 1813   is the #<id> or <file>. The dump format is the same as for maps, even for
 1814   the sample value. The data returned are not a list of available ACLs, but
 1815   the list of all patterns composing any ACL. Many of these patterns can be
 1816   shared with maps.
 1818 show backend
 1819   Dump the list of backends available in the running process.
 1821 show cli level
 1822   Display the CLI level of the current CLI session. The result could be
 1823   'admin', 'operator' or 'user'. See also the 'operator' and 'user' commands.
 1825   Example :
 1827     $ socat /tmp/sock1 readline
 1828     prompt
 1829     > operator
 1830     > show cli level
 1831     operator
 1832     > user
 1833     > show cli level
 1834     user
 1835     > operator
 1836     Permission denied
 1838 operator
 1839   Decrease the CLI level of the current CLI session to operator. It cannot
 1840   be increased afterwards. See also "show cli level".
 1842 user
 1843   Decrease the CLI level of the current CLI session to user. It cannot be
 1844   increased afterwards. See also "show cli level".
 1846 show activity
 1847   Reports some counters about internal events that will help developers and
 1848   more generally people who know haproxy well enough to narrow down the causes
 1849   of reports of abnormal behaviours. A typical example would be a properly
 1850   running process never sleeping and eating 100% of the CPU. The output fields
 1851   will be made of one line per metric, and per-thread counters on the same
 1852   line. These counters are 32-bit and will wrap during the process' life, which
 1853   is not a problem since calls to this command will typically be performed
 1854   twice. The fields are purposely not documented so that their exact meaning is
 1855   verified in the code where the counters are fed. These values are also reset
 1856   by the "clear counters" command.
 1858 show cli sockets
 1859   List CLI sockets. The output format is composed of 3 fields separated by
 1860   spaces. The first field is the socket address; it can be a unix socket, an
 1861   ipv4 address:port pair or an ipv6 one. Sockets of other types won't be
 1862   dumped. The second field describes the level of the socket: 'admin', 'user'
 1863   or 'operator'. The last field lists the processes on which the socket is
 1864   bound, separated by commas; it can be numbers or 'all'.
 1866   Example :
 1868      $ echo 'show cli sockets' | socat stdio /tmp/sock1
 1869      # socket lvl processes
 1870      /tmp/sock1 admin all
 1871 user 2,3,4
 1872 user 2
 1873      [::1]:9999 operator 2
 1875 show cache
 1876   List the configured caches and the objects stored in each cache tree.
 1878   $ echo 'show cache' | socat stdio /tmp/sock1
 1879   0x7f6ac6c5b03a: foobar (shctx:0x7f6ac6c5b000, available blocks:3918)
 1880          1          2             3                             4
 1882   1. pointer to the cache structure
 1883   2. cache name
 1884   3. pointer to the mmap area (shctx)
 1885   4. number of blocks available for reuse in the shctx
 1887   0x7f6ac6c5b4cc hash:286881868 size:39114 (39 blocks), refcount:9, expire:237
 1888            1               2            3        4            5           6
 1890   1. pointer to the cache entry
 1891   2. first 32 bits of the hash
 1892   3. size of the object in bytes
 1893   4. number of blocks used for the object
 1894   5. number of transactions using the entry
 1895   6. expiration time, can be negative if already expired
 1897 show env [<name>]
 1898   Dump one or all environment variables known by the process. Without any
 1899   argument, all variables are dumped. With an argument, only the specified
 1900   variable is dumped if it exists. Otherwise "Variable not found" is emitted.
 1901   Variables are dumped in the same format as they are stored or returned by the
 1902   "env" utility, that is, "<name>=<value>". This can be handy when debugging
 1903   certain configuration files making heavy use of environment variables to
 1904   ensure that they contain the expected values. This command is restricted and
 1905   can only be issued on sockets configured for levels "operator" or "admin".
 1907 show errors [<iid>|<proxy>] [request|response]
 1908   Dump last known request and response errors collected by frontends and
 1909   backends. If <iid> is specified, the dump is limited to errors concerning
 1910   the frontend or backend whose ID is <iid>. Proxy ID "-1" will cause
 1911   all instances to be dumped. If a proxy name is specified instead, its ID
 1912   will be used as the filter. If "request" or "response" is added after the
 1913   proxy name or ID, only request or response errors will be dumped. This
 1914   command is restricted and can only be issued on sockets configured for
 1915   levels "operator" or "admin".
 1917   The errors which may be collected are the last request and response errors
 1918   caused by protocol violations, often due to invalid characters in header
 1919   names. The report precisely indicates what exact character violated the
 1920   protocol. Other important information such as the exact date the error was
 1921   detected, frontend and backend names, the server name (when known), the
 1922   internal session ID and the source address which has initiated the session
 1923   are reported too.
 1925   All characters are returned, and non-printable characters are encoded. The
 1926   most common ones (\t = 9, \n = 10, \r = 13 and \e = 27) are encoded as one
 1927   letter following a backslash. The backslash itself is encoded as '\\' to
 1928   avoid confusion. Other non-printable characters are encoded '\xNN' where
 1929   NN is the two-digit hexadecimal representation of the character's ASCII
 1930   code.
 1932   Lines are prefixed with the position of their first character, starting at 0
 1933   for the beginning of the buffer. At most one input line is printed per line,
 1934   and large lines will be broken into multiple consecutive output lines so that
 1935   the output never goes beyond 79 characters wide. It is easy to detect if a
 1936   line was broken, because it will not end with '\n' and the next line's offset
 1937   will be followed by a '+' sign, indicating it is a continuation of the
 1938   previous line.
 1940   Example :
 1941         $ echo "show errors -1 response" | socat stdio /tmp/sock1
 1942     >>> [04/Mar/2009:15:46:56.081] backend http-in (#2) : invalid response
 1943           src, session #54, frontend fe-eth0 (#1), server s2 (#1)
 1944           response length 213 bytes, error at position 23:
 1946           00000  HTTP/1.0 200 OK\r\n
 1947           00017  header/bizarre:blah\r\n
 1948           00038  Location: blah\r\n
 1949           00054  Long-line: this is a very long line which should b
 1950           00104+ e broken into multiple lines on the output buffer,
 1951           00154+  otherwise it would be too large to print in a ter
 1952           00204+ minal\r\n
 1953           00211  \r\n
 1955     In the example above, we see that the backend "http-in" which has internal
 1956     ID 2 has blocked an invalid response from its server s2 which has internal
 1957   ID 1. The request was on session 54 initiated by source and
 1958     received by frontend fe-eth0 whose ID is 1. The total response length was
 1959     213 bytes when the error was detected, and the error was at byte 23. This
 1960     is the slash ('/') in header name "header/bizarre", which is not a valid
 1961     HTTP character for a header name.
 1963 show fd [<fd>]
 1964   Dump the list of either all open file descriptors or just the one number <fd>
 1965   if specified. This is only aimed at developers who need to observe internal
 1966   states in order to debug complex issues such as abnormal CPU usages. One fd
 1967   is reported per line, and for each of them, its state in the poller using
 1968   upper case letters for enabled flags and lower case for disabled flags, using
 1969   "P" for "polled", "R" for "ready", "A" for "active", the events status using
 1970   "H" for "hangup", "E" for "error", "O" for "output", "P" for "priority" and
 1971   "I" for "input", a few other flags like "N" for "new" (just added into the fd
 1972   cache), "U" for "updated" (received an update in the fd cache), "L" for
 1973   "linger_risk", "C" for "cloned", then the cached entry position, the pointer
 1974   to the internal owner, the pointer to the I/O callback and its name when
 1975   known. When the owner is a connection, the connection flags, and the target
 1976   are reported (frontend, proxy or server). When the owner is a listener, the
 1977   listener's state and its frontend are reported. There is no point in using
 1978   this command without a good knowledge of the internals. It's worth noting
 1979   that the output format may evolve over time so this output must not be parsed
 1980   by tools designed to be durable.
 1982 show info [typed|json]
 1983   Dump info about haproxy status on current process. If "typed" is passed as an
 1984   optional argument, field numbers, names and types are emitted as well so that
 1985   external monitoring products can easily retrieve, possibly aggregate, then
 1986   report information found in fields they don't know. Each field is dumped on
 1987   its own line. If "json" is passed as an optional argument then
 1988   information provided by "typed" output is provided in JSON format as a
 1989   list of JSON objects. By default, the format contains only two columns
 1990   delimited by a colon (':'). The left one is the field name and the right
 1991   one is the value.  It is very important to note that in typed output
 1992   format, the dump for a single object is contiguous so that there is no
 1993   need for a consumer to store everything at once.
 1995   When using the typed output format, each line is made of 4 columns delimited
 1996   by colons (':'). The first column is a dot-delimited series of 3 elements. The
 1997   first element is the numeric position of the field in the list (starting at
 1998   zero). This position shall not change over time, but holes are to be expected,
 1999   depending on build options or if some fields are deleted in the future. The
 2000   second element is the field name as it appears in the default "show info"
 2001   output. The third element is the relative process number starting at 1.
 2003   The rest of the line starting after the first colon follows the "typed output
 2004   format" described in the section above. In short, the second column (after the
 2005   first ':') indicates the origin, nature and scope of the variable. The third
 2006   column indicates the type of the field, among "s32", "s64", "u32", "u64" and
 2007   "str". Then the fourth column is the value itself, which the consumer knows
 2008   how to parse thanks to column 3 and how to process thanks to column 2.
 2010   Thus the overall line format in typed mode is :
 2012       <field_pos>.<field_name>.<process_num>:<tags>:<type>:<value>
 2014   Example :
 2016       > show info
 2017       Name: HAProxy
 2018       Version: 1.7-dev1-de52ea-146
 2019       Release_date: 2016/03/11
 2020       Nbproc: 1
 2021       Process_num: 1
 2022       Pid: 28105
 2023       Uptime: 0d 0h00m04s
 2024       Uptime_sec: 4
 2025       Memmax_MB: 0
 2026       PoolAlloc_MB: 0
 2027       PoolUsed_MB: 0
 2028       PoolFailed: 0
 2029       (...)
 2031       > show info typed
 2032       0.Name.1:POS:str:HAProxy
 2033       1.Version.1:POS:str:1.7-dev1-de52ea-146
 2034       2.Release_date.1:POS:str:2016/03/11
 2035       3.Nbproc.1:CGS:u32:1
 2036       4.Process_num.1:KGP:u32:1
 2037       5.Pid.1:SGP:u32:28105
 2038       6.Uptime.1:MDP:str:0d 0h00m08s
 2039       7.Uptime_sec.1:MDP:u32:8
 2040       8.Memmax_MB.1:CLP:u32:0
 2041       9.PoolAlloc_MB.1:MGP:u32:0
 2042       10.PoolUsed_MB.1:MGP:u32:0
 2043       11.PoolFailed.1:MCP:u32:0
 2044       (...)
 2046   In the typed format, the presence of the process ID at the end of the
 2047   first column makes it very easy to visually aggregate outputs from
 2048   multiple processes.
 2049   Example :
 2051       $ ( echo show info typed | socat /var/run/haproxy.sock1 ;    \
 2052           echo show info typed | socat /var/run/haproxy.sock2 ) |  \
 2053         sort -t . -k 1,1n -k 2,2 -k 3,3n
 2054       0.Name.1:POS:str:HAProxy
 2055       0.Name.2:POS:str:HAProxy
 2056       1.Version.1:POS:str:1.7-dev1-868ab3-148
 2057       1.Version.2:POS:str:1.7-dev1-868ab3-148
 2058       2.Release_date.1:POS:str:2016/03/11
 2059       2.Release_date.2:POS:str:2016/03/11
 2060       3.Nbproc.1:CGS:u32:2
 2061       3.Nbproc.2:CGS:u32:2
 2062       4.Process_num.1:KGP:u32:1
 2063       4.Process_num.2:KGP:u32:2
 2064       5.Pid.1:SGP:u32:30120
 2065       5.Pid.2:SGP:u32:30121
 2066       6.Uptime.1:MDP:str:0d 0h01m28s
 2067       6.Uptime.2:MDP:str:0d 0h01m28s
 2068       (...)
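  Since the typed format is strictly colon-delimited with a dotted first
  column, it lends itself to simple post-processing. The sketch below extracts
  one metric per process from a captured sample (the sample lines are modeled
  on the examples above, with invented values):

```shell
# Sample "show info typed" lines captured from two processes.
sample='6.Uptime.1:MDP:str:0d 0h01m28s
7.Uptime_sec.1:MDP:u32:88
7.Uptime_sec.2:MDP:u32:88'

# Split the dotted first column, keep only Uptime_sec, print per-process values.
result=$(printf '%s\n' "$sample" | awk -F: '
    split($1, a, ".") && a[2] == "Uptime_sec" { print "process " a[3] ": " $4 "s" }')
printf '%s\n' "$result"
```

  Against a live process, the sample would instead be captured with e.g.
  "echo show info typed | socat stdio /var/run/haproxy.sock".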
 2070   The format of JSON output is described in a schema which may be output
 2071   using "show schema json".
 2073   The JSON output contains no extra whitespace in order to reduce the
 2074   volume of output. For human consumption passing the output through a
 2075   pretty printer may be helpful. Example :
 2077   $ echo "show info json" | socat /var/run/haproxy.sock stdio | \
 2078     python -m json.tool
 2087 show map [<map>]
 2088   Dump info about map converters. Without argument, the list of all available
 2089   maps is returned. If a <map> is specified, its contents are dumped. <map> is
 2090   the #<id> or <file>. The first column is a unique identifier. It can be used
 2091   as reference for the operation "del map" and "set map". The second column is
 2092   the pattern and the third column is the sample if available. The data returned
 2093   are not directly a list of available maps, but are the list of all patterns
 2094   composing any map. Many of these patterns can be shared with ACLs.
 2096 show peers [<peers section>]
 2097   Dump info about the peers configured in "peers" sections. Without argument,
 2098   the peers belonging to all the "peers" sections are listed. If
 2099   <peers section> is specified, only the information about the peers belonging
 2100   to this "peers" section is dumped.
 2102   Here are two examples of outputs where the hostA, hostB and hostC peers
 2103   belong to the "sharedlb" peers section. Only hostA and hostB are connected,
 2104   and only hostA has sent data to hostB.
 2106   $ echo "show peers" | socat - /tmp/hostA
 2107   0x55deb0224320: [15/Apr/2019:11:28:01] id=sharedlb state=0 flags=0x3 \
 2108     resync_timeout=<PAST> task_calls=45122
 2109       0x55deb022b540: id=hostC(remote) addr= status=CONN \
 2110         reconnect=4s confirm=0
 2111         flags=0x0
 2112       0x55deb022a440: id=hostA(local) addr= status=NONE \
 2113         reconnect=<NEVER> confirm=0
 2114         flags=0x0
 2115       0x55deb0227d70: id=hostB(remote) addr= status=ESTA
 2116         reconnect=2s confirm=0
 2117         flags=0x20000200 appctx:0x55deb028fba0 st0=7 st1=0 task_calls=14456 \
 2118           state=EST
 2119         xprt=RAW src= addr=
 2120         remote_table:0x55deb0224a10 id=stkt local_id=1 remote_id=1
 2121         last_local_table:0x55deb0224a10 id=stkt local_id=1 remote_id=1
 2122         shared tables:
 2123           0x55deb0224a10 local_id=1 remote_id=1 flags=0x0 remote_data=0x65
 2124             last_acked=0 last_pushed=3 last_get=0 teaching_origin=0 update=3
 2125             table:0x55deb022d6a0 id=stkt update=3 localupdate=3 \
 2126               commitupdate=3 syncing=0
 2128   $ echo "show peers" | socat - /tmp/hostB
 2129   0x55871b5ab320: [15/Apr/2019:11:28:03] id=sharedlb state=0 flags=0x3 \
 2130     resync_timeout=<PAST> task_calls=3
 2131       0x55871b5b2540: id=hostC(remote) addr= status=CONN \
 2132         reconnect=3s confirm=0
 2133         flags=0x0
 2134       0x55871b5b1440: id=hostB(local) addr= status=NONE \
 2135         reconnect=<NEVER> confirm=0
 2136         flags=0x0
 2137       0x55871b5aed70: id=hostA(remote) addr= status=ESTA \
 2138         reconnect=2s confirm=0
 2139         flags=0x20000200 appctx:0x7fa46800ee00 st0=7 st1=0 task_calls=62356 \
 2140           state=EST
 2141         remote_table:0x55871b5ab960 id=stkt local_id=1 remote_id=1
 2142         last_local_table:0x55871b5ab960 id=stkt local_id=1 remote_id=1
 2143         shared tables:
 2144           0x55871b5ab960 local_id=1 remote_id=1 flags=0x0 remote_data=0x65
 2145             last_acked=3 last_pushed=0 last_get=3 teaching_origin=0 update=0
 2146             table:0x55871b5b46a0 id=stkt update=1 localupdate=0 \
 2147               commitupdate=0 syncing=0
 2149 show pools
 2150   Dump the status of internal memory pools. This is useful to track memory
 2151   usage when suspecting a memory leak for example. It does exactly the same
 2152   as the SIGQUIT when running in foreground except that it does not flush
 2153   the pools.
 2155 show profiling
 2156   Dumps the current profiling settings, one per line, as well as the command
 2157   needed to change them.
 2159 show servers state [<backend>]
 2160   Dump the state of the servers found in the running configuration. A backend
 2161   name or identifier may be provided to limit the output to this backend only.
 2163   The dump has the following format:
 2164    - first line contains the format version (1 in this specification);
 2165    - second line contains the column headers, prefixed by a sharp ('#');
 2166    - third line and next ones contain data;
 2167    - each line starting by a sharp ('#') is considered as a comment.
 2169   Since multiple versions of the output may co-exist, below is the list of
 2170   fields and their order per file format version :
 2171    1:
 2172      be_id:                       Backend unique id.
 2173      be_name:                     Backend label.
 2174      srv_id:                      Server unique id (in the backend).
 2175      srv_name:                    Server label.
 2176      srv_addr:                    Server IP address.
 2177      srv_op_state:                Server operational state (UP/DOWN/...).
 2178                                     0 = SRV_ST_STOPPED
 2179                                       The server is down.
 2180                                     1 = SRV_ST_STARTING
 2181                                       The server is warming up (up but
 2182                                       throttled).
 2183                                     2 = SRV_ST_RUNNING
 2184                                       The server is fully up.
 2185                                     3 = SRV_ST_STOPPING
 2186                                       The server is up but soft-stopping
 2187                                       (eg: 404).
 2188      srv_admin_state:             Server administrative state (MAINT/DRAIN/...).
 2189                                   The state is actually a mask of values :
 2190                                     0x01 = SRV_ADMF_FMAINT
 2191                                       The server was explicitly forced into
 2192                                       maintenance.
 2193                                     0x02 = SRV_ADMF_IMAINT
 2194                                       The server has inherited the maintenance
 2195                                       status from a tracked server.
 2196                                     0x04 = SRV_ADMF_CMAINT
 2197                                       The server is in maintenance because of
 2198                                       the configuration.
 2199                                     0x08 = SRV_ADMF_FDRAIN
 2200                                       The server was explicitly forced into
 2201                                       drain state.
 2202                                     0x10 = SRV_ADMF_IDRAIN
 2203                                       The server has inherited the drain status
 2204                                       from a tracked server.
 2205                                     0x20 = SRV_ADMF_RMAINT
 2206                                       The server is in maintenance because of an
 2207                                       IP address resolution failure.
 2208                                     0x40 = SRV_ADMF_HMAINT
 2209                                       The server FQDN was set from stats socket.
 2211      srv_uweight:                 User visible server's weight.
 2212      srv_iweight:                 Server's initial weight.
 2213      srv_time_since_last_change:  Time since last operational change.
 2214      srv_check_status:            Last health check status.
 2215      srv_check_result:            Last check result (FAILED/PASSED/...).
 2216                                     0 = CHK_RES_UNKNOWN
 2217                                       Initialized to this by default.
 2218                                     1 = CHK_RES_NEUTRAL
 2219                                       Valid check but no status information.
 2220                                     2 = CHK_RES_FAILED
 2221                                       Check failed.
 2222                                     3 = CHK_RES_PASSED
 2223                                       Check succeeded and server is fully up
 2224                                       again.
 2225                                     4 = CHK_RES_CONDPASS
 2226                                       Check reports the server doesn't want new
 2227                                       sessions.
 2228      srv_check_health:            Checks rise / fall current counter.
 2229      srv_check_state:             State of the check (ENABLED/PAUSED/...).
 2230                                   The state is actually a mask of values :
 2231                                     0x01 = CHK_ST_INPROGRESS
 2232                                       A check is currently running.
 2233                                     0x02 = CHK_ST_CONFIGURED
 2234                                       This check is configured and may be
 2235                                       enabled.
 2236                                     0x04 = CHK_ST_ENABLED
 2237                                       This check is currently administratively
 2238                                       enabled.
 2239                                     0x08 = CHK_ST_PAUSED
 2240                                       Checks are paused because of maintenance
 2241                                       (health only).
 2242      srv_agent_state:             State of the agent check (ENABLED/PAUSED/...).
 2243                                   This state uses the same mask values as
 2244                                   "srv_check_state", adding this specific one :
 2245                                     0x10 = CHK_ST_AGENT
 2246                                       Check is an agent check (otherwise it's a
 2247                                       health check).
 2248      bk_f_forced_id:              Flag to know if the backend ID is forced by
 2249                                   configuration.
 2250      srv_f_forced_id:             Flag to know if the server's ID is forced by
 2251                                   configuration.
 2252      srv_fqdn:                    Server FQDN.
 2253      srv_port:                    Server port.
 2254      srvrecord:                   DNS SRV record associated to this SRV.
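  Since several of the state fields above ("srv_admin_state", "srv_check_state",
  "srv_agent_state") are bit masks, a consumer has to decode them. A minimal
  Python sketch (a hypothetical helper, not part of HAProxy) for the
  administrative state could look like this:

```python
# Hypothetical consumer-side helper: decode the "srv_admin_state" bit mask
# reported by "show servers state" into symbolic flag names.

SRV_ADMIN_FLAGS = {
    0x01: "SRV_ADMF_FMAINT",   # explicitly forced into maintenance
    0x02: "SRV_ADMF_IMAINT",   # maintenance inherited from a tracked server
    0x04: "SRV_ADMF_CMAINT",   # maintenance set by the configuration
    0x08: "SRV_ADMF_FDRAIN",   # explicitly forced into drain state
    0x10: "SRV_ADMF_IDRAIN",   # drain inherited from a tracked server
    0x20: "SRV_ADMF_RMAINT",   # maintenance due to DNS resolution failure
    0x40: "SRV_ADMF_HMAINT",   # FQDN set from the stats socket
}

def decode_admin_state(mask):
    """Return the names of all admin-state flags set in <mask>."""
    return [name for bit, name in sorted(SRV_ADMIN_FLAGS.items()) if mask & bit]

print(decode_admin_state(0x09))  # forced maintenance + forced drain
```

  The same table-driven approach applies to "srv_check_state" and
  "srv_agent_state" with the CHK_ST_* values listed above.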
 2256 show sess
 2257   Dump all known sessions. Avoid doing this on slow connections as this can
 2258   be huge. This command is restricted and can only be issued on sockets
 2259   configured for levels "operator" or "admin".
 2261 show sess <id>
 2262   Display a lot of internal information about the specified session identifier.
 2263   This identifier is the first field at the beginning of the lines in the dumps
 2264   of "show sess" (it corresponds to the session pointer). This information is
 2265   useless to most users but may be used by haproxy developers to troubleshoot a
 2266   complex bug. The output format is intentionally not documented so that it can
 2267   freely evolve depending on demands. You may find a description of all fields
 2268   returned in src/dumpstats.c
 2270   The special id "all" dumps the states of all sessions, which must be avoided
 2271   as much as possible as it is highly CPU intensive and can take a lot of time.
 2273 show stat [{<iid>|<proxy>} <type> <sid>] [typed|json]
 2274   Dump statistics using the CSV format, or using the extended typed output
 2275   format described in the section above if "typed" is passed after the other
 2276   arguments, or in JSON if "json" is passed after the other arguments. By
 2277   passing <iid>, <type> and <sid>, it is possible to dump only selected
 2278   items :
 2279     - <iid> is a proxy ID, -1 to dump everything. Alternatively, a proxy name
 2280       <proxy> may be specified. In this case, this proxy's ID will be used as
 2281       the ID selector.
 2282     - <type> selects the type of dumpable objects : 1 for frontends, 2 for
 2283        backends, 4 for servers, -1 for everything. These values can be ORed,
 2284        for example:
 2285           1 + 2     = 3   -> frontend + backend.
 2286           1 + 2 + 4 = 7   -> frontend + backend + server.
 2287     - <sid> is a server ID, -1 to dump everything from the selected proxy.
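  The bit-wise combination of <type> values can be illustrated as follows
  (a plain illustration; the constant names are made up):

```python
# The dumpable-object types are bit flags that can be ORed together to
# select several kinds of objects at once.
FRONTEND, BACKEND, SERVER = 1, 2, 4

print(FRONTEND | BACKEND)           # 3 -> frontends + backends
print(FRONTEND | BACKEND | SERVER)  # 7 -> everything
```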
 2289   Example :
 2290         $ echo "show info;show stat" | socat stdio unix-connect:/tmp/sock1
 2291     >>> Name: HAProxy
 2292         Version: 1.4-dev2-49
 2293         Release_date: 2009/09/23
 2294         Nbproc: 1
 2295         Process_num: 1
 2296         (...)
 2298         # pxname,svname,qcur,qmax,scur,smax,slim,stot,bin,bout,dreq,  (...)
 2299         stats,FRONTEND,,,0,0,1000,0,0,0,0,0,0,,,,,OPEN,,,,,,,,,1,1,0, (...)
 2300         stats,BACKEND,0,0,0,0,1000,0,0,0,0,0,,0,0,0,0,UP,0,0,0,,0,250,(...)
 2301         (...)
 2302         www1,BACKEND,0,0,0,0,1000,0,0,0,0,0,,0,0,0,0,UP,1,1,0,,0,250, (...)
 2304         $
 2306   In this example, two commands have been issued at once. That way it's easy to
 2307   find which process the stats apply to in multi-process mode. This is not
 2308   needed in the typed output format as the process number is reported on each
 2309   line.  Notice the empty line after the information output which marks the end
 2310   of the first block.  A similar empty line appears at the end of the second
 2311   block (stats) so that the reader knows the output has not been truncated.
 2313   When "typed" is specified, the output format is more suitable to monitoring
 2314   tools because it provides numeric positions and indicates the type of each
 2315   output field. Each value stands on its own line with process number, element
 2316   number, nature, origin and scope. This same format is available via the HTTP
 2317   stats by passing ";typed" after the URI. It is very important to note that in
 2318   typed output format, the dump for a single object is contiguous so that there
 2319   is no need for a consumer to store everything at once.
 2321   When using the typed output format, each line is made of 4 columns delimited
 2322   by colons (':'). The first column is a dot-delimited series of 6 elements. The
 2323   first element is a letter indicating the type of the object being described.
 2324   At the moment the following object types are known : 'F' for a frontend, 'B'
 2325   for a backend, 'L' for a listener, and 'S' for a server. The second element
 2326   is a positive integer representing the unique identifier of
 2327   the proxy the object belongs to. It is equivalent to the "iid" column of the
 2328   CSV output and matches the value in front of the optional "id" directive found
 2329   in the frontend or backend section. The third element is a positive integer
 2330   containing the unique object identifier inside the proxy, and corresponds to
 2331   the "sid" column of the CSV output. ID 0 is reported when dumping a frontend
 2332   or a backend. For a listener or a server, this corresponds to their respective
 2333   ID inside the proxy. The fourth element is the numeric position of the field
 2334   in the list (starting at zero). This position shall not change over time, but
 2335   holes are to be expected, depending on build options or if some fields are
 2336   deleted in the future. The fifth element is the field name as it appears in
 2337   the CSV output. The sixth element is a positive integer and is the relative
 2338   process number starting at 1.
 2340   The rest of the line starting after the first colon follows the "typed output
 2341   format" described in the section above. In short, the second column (after the
 2342   first ':') indicates the origin, nature and scope of the variable. The third
 2343   column indicates the type of the field, among "s32", "s64", "u32", "u64" and
 2344   "str". Then the fourth column is the value itself, which the consumer knows
 2345   how to parse thanks to column 3 and how to process thanks to column 2.
 2347   Thus the overall line format in typed mode is :
 2349       <obj>.<px_id>.<id>.<fpos>.<fname>.<process_num>:<tags>:<type>:<value>
 2351   Here's an example of typed output format :
 2353         $ echo "show stat typed" | socat stdio unix-connect:/tmp/sock1
 2354         F.2.0.0.pxname.1:MGP:str:private-frontend
 2355         F.2.0.1.svname.1:MGP:str:FRONTEND
 2356         F.2.0.8.bin.1:MGP:u64:0
 2357         F.2.0.9.bout.1:MGP:u64:0
 2358         F.2.0.40.hrsp_2xx.1:MGP:u64:0
 2359         L.2.1.0.pxname.1:MGP:str:private-frontend
 2360         L.2.1.1.svname.1:MGP:str:sock-1
 2361         L.2.1.17.status.1:MGP:str:OPEN
 2362         L.2.1.73.addr.1:MGP:str:
 2363         S.3.13.60.rtime.1:MCP:u32:0
 2364         S.3.13.61.ttime.1:MCP:u32:0
 2365         S.3.13.62.agent_status.1:MGP:str:L4TOUT
 2366         S.3.13.64.agent_duration.1:MGP:u64:2001
 2367         S.3.13.65.check_desc.1:MCP:str:Layer4 timeout
 2368         S.3.13.66.agent_desc.1:MCP:str:Layer4 timeout
 2369         S.3.13.67.check_rise.1:MCP:u32:2
 2370         S.3.13.68.check_fall.1:MCP:u32:3
 2371         S.3.13.69.check_health.1:SGP:u32:0
 2372         S.3.13.70.agent_rise.1:MaP:u32:1
 2373         S.3.13.71.agent_fall.1:SGP:u32:1
 2374         S.3.13.72.agent_health.1:SGP:u32:1
 2375         S.3.13.73.addr.1:MCP:str:
 2376         S.3.13.75.mode.1:MAP:str:http
 2377         B.3.0.0.pxname.1:MGP:str:private-backend
 2378         B.3.0.1.svname.1:MGP:str:BACKEND
 2379         B.3.0.2.qcur.1:MGP:u32:0
 2380         B.3.0.3.qmax.1:MGP:u32:0
 2381         B.3.0.4.scur.1:MGP:u32:0
 2382         B.3.0.5.smax.1:MGP:u32:0
 2383         B.3.0.6.slim.1:MGP:u32:1000
 2384         B.3.0.55.lastsess.1:MMP:s32:-1
 2385         (...)
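  A consumer-side parser for such lines might look like the following Python
  sketch (hypothetical code, assuming well-formed input as shown above):

```python
# Parse one line of "show stat typed" output:
#   <obj>.<px_id>.<id>.<fpos>.<fname>.<process_num>:<tags>:<type>:<value>

def parse_typed_line(line):
    ident, tags, ftype, value = line.split(":", 3)
    obj, px_id, oid, fpos, fname, proc = ident.split(".", 5)
    # Convert numeric values according to the declared field type.
    if ftype in ("s32", "s64", "u32", "u64"):
        value = int(value)
    return {
        "obj": obj, "px_id": int(px_id), "id": int(oid),
        "fpos": int(fpos), "fname": fname, "process": int(proc),
        "tags": tags, "type": ftype, "value": value,
    }

rec = parse_typed_line("B.3.0.6.slim.1:MGP:u32:1000")
print(rec["fname"], rec["value"])
```

  Note that splitting the value column with a limit of 3 preserves string
  values which may themselves contain colons.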
 2387   In the typed format, the presence of the process ID at the end of the
 2388   first column makes it very easy to visually aggregate outputs from
 2389   multiple processes, as shown in the example below where each line appears
 2390   once per process :
 2392         $ ( echo show stat typed | socat /var/run/haproxy.sock1 - ; \
 2393             echo show stat typed | socat /var/run/haproxy.sock2 - ) | \
 2394           sort -t . -k 1,1 -k 2,2n -k 3,3n -k 4,4n -k 5,5 -k 6,6n
 2395         B.3.0.0.pxname.1:MGP:str:private-backend
 2396         B.3.0.0.pxname.2:MGP:str:private-backend
 2397         B.3.0.1.svname.1:MGP:str:BACKEND
 2398         B.3.0.1.svname.2:MGP:str:BACKEND
 2399         B.3.0.2.qcur.1:MGP:u32:0
 2400         B.3.0.2.qcur.2:MGP:u32:0
 2401         B.3.0.3.qmax.1:MGP:u32:0
 2402         B.3.0.3.qmax.2:MGP:u32:0
 2403         B.3.0.4.scur.1:MGP:u32:0
 2404         B.3.0.4.scur.2:MGP:u32:0
 2405         B.3.0.5.smax.1:MGP:u32:0
 2406         B.3.0.5.smax.2:MGP:u32:0
 2407         B.3.0.6.slim.1:MGP:u32:1000
 2408         B.3.0.6.slim.2:MGP:u32:1000
 2409         (...)
 2411   The format of JSON output is described in a schema which may be output
 2412   using "show schema json".
 2414   The JSON output contains no extra whitespace in order to reduce the
 2415   volume of output. For human consumption passing the output through a
 2416   pretty printer may be helpful. Example :
 2418   $ echo "show stat json" | socat /var/run/haproxy.sock stdio | \
 2419     python -m json.tool
 2428 show stat resolvers [<resolvers section id>]
 2429   Dump statistics for the given resolvers section, or all resolvers sections
 2430   if no section is supplied.
 2432   For each name server, the following counters are reported:
 2433     sent: number of DNS requests sent to this server
 2434     valid: number of DNS valid responses received from this server
 2435     update: number of DNS responses used to update the server's IP address
 2436     cname: number of CNAME responses
 2437     cname_error: CNAME errors encountered with this server
 2438     any_err: number of empty responses (i.e. server does not support ANY type)
 2439     nx: number of non-existent domain responses received from this server
 2440     timeout: number of times this server did not answer in time
 2441     refused: number of requests refused by this server
 2442     other: number of any other DNS errors
 2443     invalid: number of invalid DNS responses (from a protocol point of view)
 2444     too_big: number of responses that were too large
 2445     outdated: number of responses that arrived too late (after another server)
 2447 show table
 2448   Dump general information on all known stick-tables. Their name is returned
 2449   (the name of the proxy which holds them), their type (currently zero, always
 2450   IP), their size in maximum possible number of entries, and the number of
 2451   entries currently in use.
 2453   Example :
 2454         $ echo "show table" | socat stdio /tmp/sock1
 2455     >>> # table: front_pub, type: ip, size:204800, used:171454
 2456     >>> # table: back_rdp, type: ip, size:204800, used:0
 2458 show table <name> [ data.<type> <operator> <value> ] | [ key <key> ]
 2459   Dump contents of stick-table <name>. In this mode, a first line of generic
 2460   information about the table is reported as with "show table", then all
 2461   entries are dumped. Since this can be quite heavy, it is possible to specify
 2462   a filter in order to select which entries to display.
 2464   When the "data." form is used the filter applies to the stored data (see
 2465   "stick-table" in section 4.2).  A stored data type must be specified
 2466   in <type>, and this data type must be stored in the table otherwise an
 2467   error is reported. The data is compared according to <operator> with the
 2468   64-bit integer <value>.  Operators are the same as with the ACLs :
 2470     - eq : match entries whose data is equal to this value
 2471     - ne : match entries whose data is not equal to this value
 2472     - le : match entries whose data is less than or equal to this value
 2473     - ge : match entries whose data is greater than or equal to this value
 2474     - lt : match entries whose data is less than this value
 2475     - gt : match entries whose data is greater than this value
 2478   When the key form is used the entry <key> is shown.  The key must be of the
 2479   same type as the table, which currently is limited to IPv4, IPv6, integer,
 2480   and string.
 2482   Example :
 2483         $ echo "show table http_proxy" | socat stdio /tmp/sock1
 2484     >>> # table: http_proxy, type: ip, size:204800, used:2
 2485     >>> 0x80e6a4c: key=127.0.0.1 use=0 exp=3594729 gpc0=0 conn_rate(30000)=1  \
 2486           bytes_out_rate(60000)=187
 2487     >>> 0x80e6a80: key=127.0.0.2 use=0 exp=3594740 gpc0=1 conn_rate(30000)=10 \
 2488           bytes_out_rate(60000)=191
 2490         $ echo "show table http_proxy data.gpc0 gt 0" | socat stdio /tmp/sock1
 2491     >>> # table: http_proxy, type: ip, size:204800, used:2
 2492     >>> 0x80e6a80: key=127.0.0.2 use=0 exp=3594740 gpc0=1 conn_rate(30000)=10 \
 2493           bytes_out_rate(60000)=191
 2495         $ echo "show table http_proxy data.conn_rate gt 5" | \
 2496             socat stdio /tmp/sock1
 2497     >>> # table: http_proxy, type: ip, size:204800, used:2
 2498     >>> 0x80e6a80: key=127.0.0.2 use=0 exp=3594740 gpc0=1 conn_rate(30000)=10 \
 2499           bytes_out_rate(60000)=191
 2501         $ echo "show table http_proxy key 127.0.0.2" | \
 2502             socat stdio /tmp/sock1
 2503     >>> # table: http_proxy, type: ip, size:204800, used:2
 2504     >>> 0x80e6a80: key=127.0.0.2 use=0 exp=3594740 gpc0=1 conn_rate(30000)=10 \
 2505           bytes_out_rate(60000)=191
 2507   When the data criterion applies to a dynamic value dependent on time such as
 2508   a bytes rate, the value is dynamically computed during the evaluation of the
 2509   entry in order to decide whether it has to be dumped or not. This means that
 2510   such a filter could match for some time then not match anymore because as
 2511   time goes, the average event rate drops.
 2513   It is possible to use this to extract lists of IP addresses abusing the
 2514   service, in order to monitor them or even blacklist them in a firewall.
 2515   Example :
 2516         $ echo "show table http_proxy data.gpc0 gt 0" \
 2517           | socat stdio /tmp/sock1 \
 2518           | fgrep 'key=' | cut -d' ' -f2 | cut -d= -f2 > abusers-ip.txt
 2519           ( or | awk '/key/{ print a[split($2,a,"=")]; }' )
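  The same extraction can be sketched in Python (a hypothetical helper; the
  sample dump below uses a documentation address, not real captured data):

```python
# Pull the key of every dumped entry out of a "show table" dump, e.g. to
# build a list of abusive source addresses.

def extract_keys(dump):
    keys = []
    for line in dump.splitlines():
        for field in line.split():
            if field.startswith("key="):
                keys.append(field[len("key="):])
    return keys

sample = (
    "# table: http_proxy, type: ip, size:204800, used:2\n"
    "0x80e6a80: key=192.0.2.7 use=0 exp=3594740 gpc0=1 conn_rate(30000)=10\n"
)
print(extract_keys(sample))
```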
 2521 show threads
 2522   Dumps some internal states and structures for each thread, which may be useful
 2523   to help developers understand a problem. The output tries to be readable by
 2524   showing one block per thread. When haproxy is built with USE_THREAD_DUMP=1,
 2525   an advanced dump mechanism involving thread signals is used so that each
 2526   thread can dump its own state in turn. Without this option, the thread
 2527   processing the command shows all its details but the other ones are less
 2528   detailed. A star ('*') is displayed in front of the thread handling the
 2529   command. A right angle bracket ('>') may also be displayed in front of
 2530   threads which didn't make any progress since last invocation of this command,
 2531   indicating a bug in the code which must absolutely be reported. When this
 2532   happens between two threads it usually indicates a deadlock. If a thread is
 2533   alone, it's a different bug like a corrupted list. In all cases the process
 2534   is not fully functional anymore and needs to be restarted.
 2536   The output format is purposely not documented so that it can easily evolve as
 2537   new needs are identified, without having to maintain any form of backwards
 2538   compatibility, and just like with "show activity", the values are meaningless
 2539   without the code at hand.
 2541 show tls-keys [id|*]
 2542   Dump all loaded TLS ticket keys references. The TLS ticket key reference ID
 2543   and the file from which the keys have been loaded is shown. Both of those
 2544   can be used to update the TLS keys using "set ssl tls-key". If an ID is
 2545   specified as parameter, it will dump the tickets; with '*' it will dump every
 2546   key from every reference.
 2548 show schema json
 2549   Dump the schema used for the output of "show info json" and "show stat json".
 2551   The output contains no extra whitespace in order to reduce its volume.
 2552   For human consumption passing the output through a pretty printer may be
 2553   helpful. Example :
 2555   $ echo "show schema json" | socat /var/run/haproxy.sock stdio | \
 2556     python -m json.tool
 2558   The schema follows "JSON Schema" (json-schema.org) and accordingly
 2559   verifiers may be used to verify the output of "show info json" and "show
 2560   stat json" against the schema.
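  As a minimal sketch of consuming such compact JSON with only the standard
  library (a real verifier would use a JSON Schema implementation; the payload
  below is a made-up fragment, not HAProxy's actual schema):

```python
import json

# Parsing raises ValueError on malformed output, which is already a useful
# first-level sanity check before full schema validation.
compact = '{"field":{"pos":6,"name":"slim"},"value":{"type":"u32","value":1000}}'
doc = json.loads(compact)

# Pretty-printing is what piping through "python -m json.tool" does.
print(json.dumps(doc, indent=4))
```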
 2563 shutdown frontend <frontend>
 2564   Completely delete the specified frontend. All the ports it was bound to will
 2565   be released. It will not be possible to enable the frontend anymore after
 2566   this operation. This is intended to be used in environments where stopping a
 2567   proxy is not even imaginable but a misconfigured proxy must be fixed. That
 2568   way it's possible to release the port and bind it into another process to
 2569   restore operations. The frontend will not appear at all on the stats page
 2570   once it is terminated.
 2572   The frontend may be specified either by its name or by its numeric ID,
 2573   prefixed with a sharp ('#').
 2575   This command is restricted and can only be issued on sockets configured for
 2576   level "admin".
 2578 shutdown session <id>
 2579   Immediately terminate the session matching the specified session identifier.
 2580   This identifier is the first field at the beginning of the lines in the dumps
 2581   of "show sess" (it corresponds to the session pointer). This can be used to
 2582   terminate a long-running session without waiting for a timeout or when an
 2583   endless transfer is ongoing. Such terminated sessions are reported with a 'K'
 2584   flag in the logs.
 2586 shutdown sessions server <backend>/<server>
 2587   Immediately terminate all the sessions attached to the specified server. This
 2588   can be used to terminate long-running sessions after a server is put into
 2589   maintenance mode, for instance. Such terminated sessions are reported with a
 2590   'K' flag in the logs.
 2593 9.4. Master CLI
 2594 ---------------
 2596 The master CLI is a socket bound to the master process in master-worker mode.
 2597 This CLI gives access to the unix socket commands in every running or leaving
 2598 process and allows a basic supervision of those processes.
 2600 The master CLI is configurable only from the haproxy program arguments with
 2601 the -S option. This option also takes bind options separated by commas.
 2603 Example:
 2605    # haproxy -W -S -f test1.cfg
 2606    # haproxy -Ws -S /tmp/master-socket,uid,1000,gid,1000,mode,600 -f test1.cfg
 2607    # haproxy -W -S /tmp/master-socket,level,user -f test1.cfg
 2609 The master CLI introduces a new 'show proc' command to supervise the
 2610 processes:
 2612 Example:
 2614   $ echo 'show proc' | socat /var/run/haproxy-master.sock -
 2615   #<PID>          <type>          <relative PID>  <reloads>       <uptime>        <version>
 2616   1162            master          0               5               0d00h02m07s     2.0-dev7-0124c9-7
 2617   # workers
 2618   1271            worker          1               0               0d00h00m00s     2.0-dev7-0124c9-7
 2619   1272            worker          2               0               0d00h00m00s     2.0-dev7-0124c9-7
 2620   # old workers
 2621   1233            worker          [was: 1]        3               0d00h00m43s     2.0-dev3-6019f6-289
 2624 In this example, the master has been reloaded 5 times but one of the old
 2625 workers is still running and has survived 3 reloads. You could access the CLI
 2626 this worker to understand what's going on.
 2628 When the prompt is enabled (via the "prompt" command), the context the CLI is
 2629 working on is displayed in the prompt. The master is identified by the "master"
 2630 string, and other processes are identified with their PID. In case the last
 2631 reload failed, the master prompt will be changed to "master[ReloadFailed]>" so
 2632 that it becomes visible that the process is still running on the previous
 2633 configuration and that the new configuration is not operational.
 2635 The master CLI uses a special prefix notation to access the multiple
 2636 processes. This notation is easily identifiable as it begins by a @.
 2638 A @ prefix can be followed by a relative process number or by an exclamation
 2639 point and a PID (e.g. @1 or @!1271). A @ alone can be used to specify the
 2640 master. Leaving processes are only accessible by their PID, as relative
 2641 process numbers only apply to the current processes.
 2643 Examples:
 2645   $ socat /var/run/haproxy-master.sock readline
 2646   prompt
 2647   master> @1 show info; @2 show info
 2648   [...]
 2649   Process_num: 1
 2650   Pid: 1271
 2651   [...]
 2652   Process_num: 2
 2653   Pid: 1272
 2654   [...]
 2655   master>
 2657   $ echo '@!1271 show info; @!1272 show info' | socat /var/run/haproxy-master.sock -
 2658   [...]
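The prefix notation can also be generated programmatically; a hypothetical
Python helper might build such one-liners as follows:

```python
# Build master-CLI commands using the '@' prefix notation described above:
# @<n> for a relative process number, @!<pid> for a PID, '@' for the master.

def prefixed(target, command):
    if target == "master":
        return "@ " + command
    if isinstance(target, int):            # relative process number
        return "@%d %s" % (target, command)
    return "@!%s %s" % (target, command)   # e.g. "1271" -> PID addressing

line = "; ".join(prefixed(t, "show info") for t in (1, 2))
print(line)
```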
 2660 A prefix can also be used alone as a command, in which case every subsequent
 2661 command is sent to the specified process.
 2663 Examples:
 2665   $ socat /var/run/haproxy-master.sock readline
 2666   prompt
 2667   master> @1
 2668   1271> show info
 2669   [...]
 2670   1271> show stat
 2671   [...]
 2672   1271> @
 2673   master>
 2675   $ echo '@1; show info; show stat; @2; show info; show stat' | socat /var/run/haproxy-master.sock -
 2676   [...]
 2678 You can also reload the HAProxy master process with the "reload" command which
 2679 does the same as a `kill -USR2` on the master process, provided that the user
 2680 has at least "operator" or "admin" privileges.
 2682 Example:
 2684   $ echo "reload" | socat /var/run/haproxy-master.sock -
 2686 Note that a reload will close the connection to the master CLI.
 2689 10. Tricks for easier configuration management
 2690 ----------------------------------------------
 2692 It is very common that two HAProxy nodes constituting a cluster share exactly
 2693 the same configuration modulo a few addresses. Instead of having to maintain a
 2694 duplicate configuration for each node, which will inevitably diverge, it is
 2695 possible to include environment variables in the configuration. Thus multiple
 2696 configurations may share the exact same file with only a few different
 2697 system-wide environment variables. This started in version 1.5 where only addresses
 2698 were allowed to include environment variables, and 1.6 goes further by
 2699 supporting environment variables everywhere. The syntax is the same as in the
 2700 UNIX shell, a variable starts with a dollar sign ('$'), followed by an opening
 2701 curly brace ('{'), then the variable name followed by the closing brace ('}').
 2702 Except for addresses, environment variables are only interpreted in arguments
 2703 surrounded with double quotes (this was necessary not to break existing setups
 2704 using regular expressions involving the dollar symbol).
 2706 Environment variables also make it convenient to write configurations which are
 2707 expected to work on various sites where only the address changes. It can also
 2708 help remove passwords from some configs. Example below, where the file
 2709 "site1.env" is sourced by the init script upon startup :
 2711   $ cat site1.env
 2712   LISTEN=
 2713   CACHE_PFX=192.168.11
 2714   SERVER_PFX=192.168.22
 2715   LOGGER=
 2716   STATSLP=admin:pa$$w0rd
 2717   ABUSERS=/etc/haproxy/abuse.lst
 2718   TIMEOUT=10s
 2720   $ cat haproxy.cfg
 2721   global
 2722       log "${LOGGER}:514" local0
 2724   defaults
 2725       mode http
 2726       timeout client "${TIMEOUT}"
 2727       timeout server "${TIMEOUT}"
 2728       timeout connect 5s
 2730   frontend public
 2731       bind "${LISTEN}:80"
 2732       http-request reject if { src -f "${ABUSERS}" }
 2733       stats uri /stats
 2734       stats auth "${STATSLP}"
 2735       use_backend cache if { path_end .jpg .css .ico }
 2736       default_backend server
 2738   backend cache
 2739       server cache1 "${CACHE_PFX}.1:18080" check
 2740       server cache2 "${CACHE_PFX}.2:18080" check
 2742   backend server
 2743       server cache1 "${SERVER_PFX}.1:8080" check
 2744       server cache2 "${SERVER_PFX}.2:8080" check
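The expansion principle can be illustrated with Python's os.path.expandvars,
which performs a similar "${NAME}" substitution (HAProxy's own parser is more
restrictive; this is only an illustration of the mechanism):

```python
import os

# Value taken from the site1.env example above; a different site only needs
# a different environment file, not a different configuration.
os.environ["CACHE_PFX"] = "192.168.11"

line = 'server cache1 "${CACHE_PFX}.1:18080" check'
expanded = os.path.expandvars(line)
print(expanded)
```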
 2747 11. Well-known traps to avoid
 2748 -----------------------------
 2750 Once in a while, someone reports that after a system reboot, the haproxy
 2751 service wasn't started, and that once they start it by hand it works. Most
 2752 often, these people are running a clustered IP address mechanism such as
 2753 keepalived, to assign the service IP address to the master node only, and while
 2754 it used to work when they used to bind haproxy to address 0.0.0.0, it stopped
 2755 working after they bound it to the virtual IP address. What happens here is
 2756 that when the service starts, the virtual IP address is not yet owned by the
 2757 local node, so when HAProxy wants to bind to it, the system rejects this
 2758 because it is not a local IP address. The fix doesn't consist in delaying the
 2759 haproxy service startup (since it wouldn't stand a restart), but instead to
 2760 properly configure the system to allow binding to non-local addresses. This is
 2761 easily done on Linux by setting the net.ipv4.ip_nonlocal_bind sysctl to 1. This
 2762 is also needed in order to transparently intercept the IP traffic that passes
 2763 through HAProxy for a specific target address.
 2765 Multi-process configurations involving source port ranges may apparently seem
 2766 to work but they will cause some random failures under high loads because more
 2767 than one process may try to use the same source port to connect to the same
 2768 server, which is not possible. The system will report an error and a retry will
 2769 happen, picking another port. A high value in the "retries" parameter may hide
 2770 the effect to a certain extent but this also comes with increased CPU usage and
 2771 processing time. Logs will also report a certain number of retries. For this
 2772 reason, port ranges should be avoided in multi-process configurations.
 2774 Since HAProxy uses SO_REUSEPORT and supports having multiple independent
 2775 processes bound to the same IP:port, during troubleshooting it can happen that
 2776 an old process was not stopped before a new one was started. This provides
 2777 absurd test results which tend to indicate that any change to the configuration
 2778 is ignored. The reason is that even if the new process was started with a
 2779 new configuration, the old one still gets some incoming connections and
 2780 processes them, returning unexpected results. When in doubt, just stop the new
 2781 process and try again. If it still works, it very likely means that an old
 2782 process remains alive and has to be stopped. Linux's "netstat -lntp" is of good
 2783 help here.
 2785 When adding entries to an ACL from the command line (eg: when blacklisting a
 2786 source address), it is important to keep in mind that these entries are not
 2787 synchronized to the file and that if someone reloads the configuration, these
 2788 updates will be lost. While this is often the desired effect (for blacklisting)
 2789 it may not necessarily match expectations when the change was made as a fix for
 2790 a problem. See the "add acl" action of the CLI interface.
 2793 12. Debugging and performance issues
 2794 ------------------------------------
 2796 When HAProxy is started with the "-d" option, it will stay in the foreground
 2797 and will print one line per event, such as an incoming connection, the end of a
 2798 connection, and for each request or response header line seen. This debug
 2799 output is emitted before the contents are processed, so it does not reflect
 2800 local modifications. The main use is to show the request and response without
 2801 having to run a network sniffer. The output is less readable when multiple
 2802 connections are handled in parallel, though the "debug2ansi" and "debug2html"
 2803 scripts found in the examples/ directory definitely help here by coloring the
 2804 output.

If a request or response is rejected because HAProxy finds it is malformed, the
best thing to do is to connect to the CLI and issue "show errors", which will
report the last captured faulty request and response for each frontend and
backend, with all the necessary information to indicate precisely the first
character of the input stream that was rejected. This is sometimes needed to
prove to customers or to developers that a bug is present in their code. In
this case it is often possible to relax the checks (but still keep the
captures) using "option accept-invalid-http-request" or its equivalent for
responses coming from the server, "option accept-invalid-http-response". Please
see the configuration manual for more details.

Example :

  > show errors
  Total events captured on [13/Oct/2015:13:43:47.169] : 1

  [13/Oct/2015:13:43:40.918] frontend HAProxyLocalStats (#2): invalid request
    backend <NONE> (#-1), server <NONE> (#-1), event #0
    src, session #0, session flags 0x00000080
    HTTP msg state 26, msg flags 0x00000000, tx flags 0x00000000
    HTTP chunk len 0 bytes, HTTP body len 0 bytes
    buffer flags 0x00808002, out 0 bytes, total 31 bytes
    pending 31 bytes, wrapping at 8040, error at position 13:

    00000  GET /invalid request HTTP/1.1\r\n

The output of "show info" on the CLI provides a lot of useful information
regarding the maximum connection rate ever reached, the maximum SSL key rate
ever reached, and in general all information which can help to explain
temporary issues regarding CPU or memory usage. Example :

  > show info
  Name: HAProxy
  Version: 1.6-dev7-e32d18-17
  Release_date: 2015/10/12
  Nbproc: 1
  Process_num: 1
  Pid: 7949
  Uptime: 0d 0h02m39s
  Uptime_sec: 159
  Memmax_MB: 0
  Ulimit-n: 120032
  Maxsock: 120032
  Maxconn: 60000
  Hard_maxconn: 60000
  CurrConns: 0
  CumConns: 3
  CumReq: 3
  MaxSslConns: 0
  CurrSslConns: 0
  CumSslConns: 0
  Maxpipes: 0
  PipesUsed: 0
  PipesFree: 0
  ConnRate: 0
  ConnRateLimit: 0
  MaxConnRate: 1
  SessRate: 0
  SessRateLimit: 0
  MaxSessRate: 1
  SslRate: 0
  SslRateLimit: 0
  MaxSslRate: 0
  SslFrontendKeyRate: 0
  SslFrontendMaxKeyRate: 0
  SslFrontendSessionReuse_pct: 0
  SslBackendKeyRate: 0
  SslBackendMaxKeyRate: 0
  SslCacheLookups: 0
  SslCacheMisses: 0
  CompressBpsIn: 0
  CompressBpsOut: 0
  CompressBpsRateLim: 0
  ZlibMemUsage: 0
  MaxZlibMemUsage: 0
  Tasks: 5
  Run_queue: 1
  Idle_pct: 100
  node: wtap
  description:
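Since the output is a simple list of "name: value" pairs, it is easy to
consume from scripts. A minimal sketch with awk (in a live setup the input
would come from the stats socket, eg: echo "show info" | socat stdio
/var/run/haproxy.stat; the sample below is hard-coded for illustration) :

```shell
# Extract a single counter from a captured "show info" dump.
awk -F': ' '$1 == "MaxConnRate" { print $2 }' <<'EOF'
ConnRate: 0
MaxConnRate: 1
SessRate: 0
EOF
```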

When an issue seems to randomly appear on a new version of HAProxy (eg: every
second request is aborted, occasional crash, etc), it is worth trying to enable
memory poisoning so that each call to malloc() is immediately followed by the
filling of the memory area with a configurable byte. By default this byte is
0x50 (ASCII for 'P'), but any other byte can be used, including zero (which
will have the same effect as a calloc() and which may make issues disappear).
Memory poisoning is enabled on the command line using the "-dM" option. It
slightly hurts performance and is not recommended for use in production. If
an issue happens all the time with it or never happens when poisoning uses
byte zero, it clearly means you've found a bug and you definitely need to
report it. Otherwise if there's no clear change, the problem is not related to
memory management.
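For example (the configuration path is only illustrative; the optional byte
value directly follows "-dM") :

```
  $ haproxy -f /etc/haproxy/haproxy.cfg -dM        # poison with 0x50 ('P')
  $ haproxy -f /etc/haproxy/haproxy.cfg -dM0       # poison with byte zero
```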

When debugging some latency issues, it is important to use both strace and
tcpdump on the local machine, and another tcpdump on the remote system. The
reason for this is that there are delays everywhere in the processing chain and
it is important to know which one is causing latency to know where to act. In
practice, the local tcpdump will indicate when the input data come in. Strace
will indicate when haproxy receives these data (using recv/recvfrom). Warning,
openssl uses read()/write() syscalls instead of recv()/send(). Strace will also
show when haproxy sends the data, and tcpdump will show when the system sends
these data to the interface. Then the external tcpdump will show when the data
sent are really received (since the local one only shows when the packets are
queued). The benefit of sniffing on the local system is that strace and tcpdump
will use the same reference clock. Strace should be used with "-tts200" to get
complete timestamps and report large enough chunks of data to read them.
Tcpdump should be used with "-nvvttSs0" to report full packets, real sequence
numbers and complete timestamps.
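In practice the two captures may look like this (the interface, port filter,
output files and the single-process assumption behind pidof are only
examples) :

```
  $ strace -tts200 -p $(pidof haproxy) -o haproxy.strace
  $ tcpdump -nvvttSs0 -i eth0 -w local.pcap tcp port 80
```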

In practice, received data are almost always immediately received by haproxy
(unless the machine has a saturated CPU or these data are invalid and not
delivered). If these data are received but not sent, it generally is because
the output buffer is saturated (ie: the recipient doesn't consume the data
fast enough). This can be confirmed by seeing that the polling doesn't notify
of the ability to write on the output file descriptor for some time (it's
often easier to spot in the strace output when the data finally leave and then
roll back to see when the write event was notified). It generally matches an
ACK received from the recipient, and detected by tcpdump. Once the data are
sent, they may spend some time in the system doing nothing. Here again, the
TCP congestion window may be limited and not allow these data to leave,
waiting for an ACK to open the window. If the traffic is idle and the data
take 40 ms or 200 ms to leave, it's a different phenomenon (which is not a
problem): the Nagle algorithm prevents incomplete packets from leaving
immediately, in the hope that they will be merged with subsequent data.
HAProxy automatically disables Nagle in pure TCP mode and in tunnels. However
it definitely remains enabled when forwarding an HTTP body (and this
contributes to the performance improvement there by reducing the number of
packets). Some non-compliant HTTP applications may be sensitive to the latency
when delivering incomplete HTTP response messages. In this case you will have
to enable "option http-no-delay" to disable Nagle in order to work around
their design, keeping in mind that any other proxy in the chain may similarly
be impacted. If tcpdump reports that data leave immediately but the other end
doesn't see them quickly, it can mean there is a congested WAN link, a
congested LAN with flow control enabled and preventing the data from leaving,
or more commonly that HAProxy is in fact running in a virtual machine and that
for whatever reason the hypervisor has decided that the data didn't need to be
sent immediately. In virtualized environments, latency issues are almost
always caused by the virtualization layer, so in order to save time, it's
worth first comparing tcpdump in the VM and on the external components. Any
difference has to be credited to the hypervisor and its accompanying drivers.
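The option can be set in the defaults, frontend, listen or backend section of
the affected proxy; for example (the backend name is only illustrative) :

```
  backend latency_sensitive_app
      option http-no-delay
```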

When some TCP SACK segments are seen in tcpdump traces (using -vv), it always
means that the side sending them has got proof of a lost packet. While not
seeing them doesn't mean there are no losses, seeing them definitely means the
network is lossy. Losses are normal on a network, but usually at a rate where
SACKs are not noticeable to the naked eye. If they appear a lot in the traces,
it is worth investigating exactly what happens and where the packets are lost.
HTTP doesn't cope well with TCP losses, which introduce huge latencies.

The "netstat -i" command will report statistics per interface. An interface
where the Rx-Ovr counter grows indicates that the system doesn't have enough
resources to receive all incoming packets and that they're lost before being
processed by the network driver. Rx-Drp indicates that some received packets
were lost in the network stack because the application doesn't process them
fast enough. This can happen during some attacks as well. Tx-Drp means that
the output queues were full and packets had to be dropped. When using TCP it
should be very rare, but will possibly indicate a saturated outgoing link.


13. Security considerations
---------------------------

HAProxy is designed to run with very limited privileges. The standard way to
use it is to isolate it into a chroot jail and to drop its privileges to a
non-root user without any permissions inside this jail, so that if any future
vulnerability were to be discovered, its compromise would not affect the rest
of the system.

In order to perform a chroot, it first needs to be started as a root user. It
is pointless to build hand-made chroots to start the process in: they are
painful to build, are never properly maintained and always contain way more
bugs than the main file-system. And in case of compromise, the intruder can
make use of such a purposely populated file-system. Unfortunately many
administrators confuse "start as root" and "run as root", resulting in the uid
change being done prior to starting haproxy, thus reducing the effective
security restrictions.

HAProxy will need to be started as root in order to :
  - adjust the file descriptor limits
  - bind to privileged port numbers
  - bind to a specific network interface
  - transparently listen to a foreign address
  - isolate itself inside the chroot jail
  - drop to another non-privileged UID

HAProxy may require to be run as root in order to :
  - bind to an interface for outgoing connections
  - bind to privileged source ports for outgoing connections
  - transparently bind to a foreign address for outgoing connections

Most users will never need the "run as root" case, but the "start as root"
case covers most usages.

A safe configuration will have :

  - a chroot statement pointing to an empty location without any access
    permissions. This can be prepared this way on the UNIX command line :

      # mkdir /var/empty && chmod 0 /var/empty || echo "Failed"

    and referenced like this in the HAProxy configuration's global section :

      chroot /var/empty

  - both a uid/user and a gid/group statement in the global section :

      user haproxy
      group haproxy

  - a stats socket whose mode, uid and gid are set to match the user and/or
    group allowed to access the CLI so that nobody else may access it :

      stats socket /var/run/haproxy.stat uid hatop gid hatop mode 600
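Putting these pieces together, the relevant part of the global section of a
hardened configuration may look like this (the user, group, uid, gid and
socket path are only examples to adapt to the local setup) :

```
  global
      chroot /var/empty
      user haproxy
      group haproxy
      stats socket /var/run/haproxy.stat uid hatop gid hatop mode 600
```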