"Fossies" - the Fresh Open Source Software Archive

heaplayers-351/allocators/ptmalloc/ptmalloc.c  (6 Nov 2003, 146043 bytes)  from package /linux/misc/old/heaplayers_3_5_1.tar.gz

    1 /* Malloc implementation for multiple threads without lock contention.
    2    Copyright (C) 1996, 1997, 1998, 1999 Free Software Foundation, Inc.
    3    This file is part of the GNU C Library.
    4    Contributed by Wolfram Gloger <wmglo@dent.med.uni-muenchen.de>
    5    and Doug Lea <dl@cs.oswego.edu>, 1996.
    6 
    7    The GNU C Library is free software; you can redistribute it and/or
    8    modify it under the terms of the GNU Library General Public License as
    9    published by the Free Software Foundation; either version 2 of the
   10    License, or (at your option) any later version.
   11 
   12    The GNU C Library is distributed in the hope that it will be useful,
   13    but WITHOUT ANY WARRANTY; without even the implied warranty of
   14    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
   15    Library General Public License for more details.
   16 
   17    You should have received a copy of the GNU Library General Public
   18    License along with the GNU C Library; see the file COPYING.LIB.  If not,
   19    write to the Free Software Foundation, Inc., 59 Temple Place - Suite 330,
   20    Boston, MA 02111-1307, USA.  */
   21 
   22 /* $Id: ptmalloc.c,v 1.3 2003/11/06 20:05:13 yifeng Exp $
   23 
   24   This work is mainly derived from malloc-2.6.4 by Doug Lea
   25   <dl@cs.oswego.edu>, which is available from:
   26 
   27                  ftp://g.oswego.edu/pub/misc/malloc.c
   28 
   29   Most of the original comments are reproduced in the code below.
   30 
   31 * Why use this malloc?
   32 
   33   This is not the fastest, most space-conserving, most portable, or
   34   most tunable malloc ever written. However it is among the fastest
   35   while also being among the most space-conserving, portable and tunable.
   36   Consistent balance across these factors results in a good general-purpose
   37   allocator. For a high-level description, see
   38      http://g.oswego.edu/dl/html/malloc.html
   39 
   40   On many systems, the standard malloc implementation is by itself not
   41   thread-safe, and therefore wrapped with a single global lock around
   42   all malloc-related functions.  In some applications, especially with
   43   multiple available processors, this can lead to contention problems
   44   and bad performance.  This malloc version was designed with the goal
   45   to avoid waiting for locks as much as possible.  Statistics indicate
   46   that this goal is achieved in many cases.
   47 
   48 * Synopsis of public routines
   49 
   50   (Much fuller descriptions are contained in the program documentation below.)
   51 
   52   ptmalloc_init();
   53      Initialize global configuration.  When compiled for multiple threads,
   54      this function must be called once before any other function in the
   55      package.  It is not required otherwise.  It is called automatically
   56      in the Linux/GNU C library or when compiling with MALLOC_HOOKS.
   57   malloc(size_t n);
   58      Return a pointer to a newly allocated chunk of at least n bytes, or null
   59      if no space is available.
   60   free(Void_t* p);
   61      Release the chunk of memory pointed to by p, or no effect if p is null.
   62   realloc(Void_t* p, size_t n);
   63      Return a pointer to a chunk of size n that contains the same data
   64      as does chunk p up to the minimum of (n, p's size) bytes, or null
   65      if no space is available. The returned pointer may or may not be
   66      the same as p. If p is null, equivalent to malloc.  Unless the
   67      #define REALLOC_ZERO_BYTES_FREES below is set, realloc with a
   68      size argument of zero (re)allocates a minimum-sized chunk.
   69   memalign(size_t alignment, size_t n);
   70      Return a pointer to a newly allocated chunk of n bytes, aligned
   71      in accord with the alignment argument, which must be a power of
   72      two.
   73   valloc(size_t n);
   74      Equivalent to memalign(pagesize, n), where pagesize is the page
   75      size of the system (or as near to this as can be figured out from
   76      all the includes/defines below.)
   77   pvalloc(size_t n);
   78      Equivalent to valloc(minimum-page-that-holds(n)), that is,
   79      round up n to nearest pagesize.
   80   calloc(size_t unit, size_t quantity);
   81      Returns a pointer to quantity * unit bytes, with all locations
   82      set to zero.
   83   cfree(Void_t* p);
   84      Equivalent to free(p).
   85   malloc_trim(size_t pad);
   86      Release all but pad bytes of freed top-most memory back
   87      to the system. Return 1 if successful, else 0.
   88   malloc_usable_size(Void_t* p);
   89      Report the number of usable allocated bytes associated with allocated
   90      chunk p. This may or may not report more bytes than were requested,
   91      due to alignment and minimum size constraints.
   92   malloc_stats();
   93      Prints brief summary statistics on stderr.
   94   mallinfo()
   95      Returns (by copy) a struct containing various summary statistics.
   96   mallopt(int parameter_number, int parameter_value)
   97      Changes one of the tunable parameters described below. Returns
   98      1 if successful in changing the parameter, else 0.
   99 
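  As a quick illustration (a sketch only; depending on how the package is
  built, the exported names may instead be ptmalloc, ptfree, etc., per the
  name mappings further below):

      #include "ptmalloc.h"

      void example(void)
      {
        char* p = (char*) malloc(100);       // at least 100 usable bytes
        char* q = (char*) realloc(p, 250);   // may move the block
        if (q != 0) p = q;
        free(p);
        malloc_stats();                      // brief summary on stderr
      }

  When compiled for multiple threads (and not as part of GNU libc or with
  MALLOC_HOOKS), ptmalloc_init() must be called once before any of these.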
  100 * Vital statistics:
  101 
  102   Alignment:                            8-byte
  103        8 byte alignment is currently hardwired into the design.  This
  104        seems to suffice for all current machines and C compilers.
  105 
  106   Assumed pointer representation:       4 or 8 bytes
  107        Code for 8-byte pointers is untested by me but has worked
  108        reliably for Wolfram Gloger, who contributed most of the
  109        changes supporting this.
  110 
  111   Assumed size_t  representation:       4 or 8 bytes
  112        Note that size_t is allowed to be 4 bytes even if pointers are 8.
  113 
  114   Minimum overhead per allocated chunk: 4 or 8 bytes
  115        Each malloced chunk has a hidden overhead of 4 bytes holding size
  116        and status information.
  117 
  118   Minimum allocated size: 4-byte ptrs:  16 bytes    (including 4 overhead)
  119                           8-byte ptrs:  24/32 bytes (including 4/8 overhead)
  120 
  121        When a chunk is freed, 12 (for 4-byte ptrs) or 20 (for 8-byte
  122        ptrs but 4-byte size) or 24 (for 8/8) additional bytes are
  123        needed; 4 (8) for a trailing size field
  124        and 8 (16) bytes for free list pointers. Thus, the minimum
  125        allocatable size is 16/24/32 bytes.
  126 
  127        Even a request for zero bytes (i.e., malloc(0)) returns a
  128        pointer to something of the minimum allocatable size.
  129 
  130   Maximum allocated size: 4-byte size_t: 2^31 -  8 bytes
  131                           8-byte size_t: 2^63 - 16 bytes
  132 
  133        It is assumed that (possibly signed) size_t bit values suffice to
  134        represent chunk sizes. `Possibly signed' is due to the fact
  135        that `size_t' may be defined on a system as either a signed or
  136        an unsigned type. To be conservative, values that would appear
  137        as negative numbers are avoided.
  138        Requests for sizes with a negative sign bit will return a
  139        minimum-sized chunk.
  140 
  141   Maximum overhead wastage per allocated chunk: normally 15 bytes
  142 
  143        Alignment demands, plus the minimum allocatable size restriction
  144        make the normal worst-case wastage 15 bytes (i.e., up to 15
  145        more bytes will be allocated than were requested in malloc), with
  146        two exceptions:
  147          1. Because requests for zero bytes allocate non-zero space,
  148             the worst case wastage for a request of zero bytes is 24 bytes.
  149          2. For requests >= mmap_threshold that are serviced via
  150             mmap(), the worst case wastage is 8 bytes plus the remainder
  151             from a system page (the minimal mmap unit); typically 4096 bytes.
  152 
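  As a concrete illustration (a sketch derived from the request2size macro
  further below, assuming 4-byte pointers and 4-byte size_t, so the minimum
  chunk is 16 bytes and the alignment is 8):

      request (bytes)    chunk size    usable bytes
          0 .. 12            16             12
         13 .. 20            24             20
         21 .. 28            32             28

  i.e., 4 bytes of overhead are added and the result is rounded up to a
  multiple of 8, with a floor of the 16-byte minimum.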
  153 * Limitations
  154 
  155     Here are some features that are NOT currently supported
  156 
  157     * No automated mechanism for fully checking that all accesses
  158       to malloced memory stay within their bounds.
  159     * No support for compaction.
  160 
  161 * Synopsis of compile-time options:
  162 
  163     People have reported using previous versions of this malloc on all
  164     versions of Unix, sometimes by tweaking some of the defines
  165     below. It has been tested most extensively on Solaris and
  166     Linux. People have also reported adapting this malloc for use in
  167     stand-alone embedded systems.
  168 
  169     The implementation is in straight, hand-tuned ANSI C.  Among other
  170     consequences, it uses a lot of macros.  Because of this, to be at
  171     all usable, this code should be compiled using an optimizing compiler
  172     (for example gcc -O2) that can simplify expressions and control
  173     paths.
  174 
  175   __STD_C                  (default: derived from C compiler defines)
  176      Nonzero if using ANSI-standard C compiler, a C++ compiler, or
  177      a C compiler sufficiently close to ANSI to get away with it.
  178   MALLOC_DEBUG             (default: NOT defined)
  179      Define to enable debugging. Adds fairly extensive assertion-based
  180      checking to help track down memory errors, but noticeably slows down
  181      execution.
  182   MALLOC_HOOKS             (default: NOT defined)
  183      Define to enable support for run-time replacement of the allocation
  184      functions through user-defined `hooks'.
  185   REALLOC_ZERO_BYTES_FREES (default: defined)
  186      Define this if you think that realloc(p, 0) should be equivalent
  187      to free(p).  (The C standard requires this behaviour, therefore
  188      it is the default.)  Otherwise, since malloc returns a unique
  189      pointer for malloc(0), so does realloc(p, 0).
  190   HAVE_MEMCPY               (default: defined)
  191      Define if you are not otherwise using ANSI STD C, but still
  192      have memcpy and memset in your C library and want to use them.
  193      Otherwise, simple internal versions are supplied.
  194   USE_MEMCPY               (default: 1 if HAVE_MEMCPY is defined, 0 otherwise)
  195      Define as 1 if you want the C library versions of memset and
  196      memcpy called in realloc and calloc (otherwise macro versions are used).
  197      At least on some platforms, the simple macro versions usually
  198      outperform libc versions.
  199   HAVE_MMAP                 (default: defined as 1)
  200      Define to non-zero to optionally make malloc() use mmap() to
  201      allocate very large blocks.
  202   HAVE_MREMAP                 (default: defined as 0 unless Linux libc set)
  203      Define to non-zero to optionally make realloc() use mremap() to
  204      reallocate very large blocks.
  205   malloc_getpagesize        (default: derived from system #includes)
  206      Either a constant or routine call returning the system page size.
  207   HAVE_USR_INCLUDE_MALLOC_H (default: NOT defined)
  208      Optionally define if you are on a system with a /usr/include/malloc.h
  209      that declares struct mallinfo. It is not at all necessary to
  210      define this even if you do, but will ensure consistency.
  211   INTERNAL_SIZE_T           (default: size_t)
  212      Define to a 32-bit type (probably `unsigned int') if you are on a
  213      64-bit machine, yet do not want or need to allow malloc requests of
  214      greater than 2^31 to be handled. This saves space, especially for
  215      very small chunks.
  216   _LIBC                     (default: NOT defined)
  217      Defined only when compiled as part of the Linux libc/glibc.
  218      Also note that there is some odd internal name-mangling via defines
  219      (for example, internally, `malloc' is named `mALLOc') needed
  220      when compiling in this case. These look funny but don't otherwise
  221      affect anything.
  222   LACKS_UNISTD_H            (default: undefined)
  223      Define this if your system does not have a <unistd.h>.
  224   MORECORE                  (default: sbrk)
  225      The name of the routine to call to obtain more memory from the system.
  226   MORECORE_FAILURE          (default: -1)
  227      The value returned upon failure of MORECORE.
  228   MORECORE_CLEARS           (default 1)
  229      True (1) if the routine mapped to MORECORE zeroes out memory (which
  230      holds for sbrk).
  231   DEFAULT_TRIM_THRESHOLD
  232   DEFAULT_TOP_PAD
  233   DEFAULT_MMAP_THRESHOLD
  234   DEFAULT_MMAP_MAX
  235      Default values of tunable parameters (described in detail below)
  236      controlling interaction with host system routines (sbrk, mmap, etc).
  237      These values may also be changed dynamically via mallopt(). The
  238      preset defaults are those that give best performance for typical
  239      programs/systems.
  240   DEFAULT_CHECK_ACTION
  241      When the standard debugging hooks are in place, and a pointer is
  242      detected as corrupt, do nothing (0), print an error message (1),
  243      or call abort() (2).
  244 
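  As a purely illustrative example (exact flags, and the thread options
  described in the next comment block, depend on your system), a 64-bit
  build that never needs single requests above 2^31 bytes and wants the
  debugging checks enabled might use:

      gcc -O2 -DINTERNAL_SIZE_T='unsigned int' -DMALLOC_DEBUG=1 -c ptmalloc.c

  Any of these can equally be set by editing the corresponding defaults
  further below in this file.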
  245 
  246 */
  247 
  248 /*
  249 
  250 * Compile-time options for multiple threads:
  251 
  252   USE_PTHREADS, USE_THR, USE_SPROC
  253      Define one of these as 1 to select the thread interface:
  254      POSIX threads, Solaris threads or SGI sproc's, respectively.
  255      If none of these is defined as non-zero, you get a `normal'
  256      malloc implementation which is not thread-safe.  Support for
  257      multiple threads requires HAVE_MMAP=1.  As an exception, when
  258      compiling for GNU libc, i.e. when _LIBC is defined, then none of
  259      the USE_... symbols have to be defined.
  260 
  261   HEAP_MIN_SIZE
  262   HEAP_MAX_SIZE
  263      When thread support is enabled, additional `heap's are created
  264      with mmap calls.  These are limited in size; HEAP_MIN_SIZE should
  265      be a multiple of the page size, while HEAP_MAX_SIZE must be a power
  266      of two for alignment reasons.  HEAP_MAX_SIZE should be at least
  267      twice as large as the mmap threshold.
  268   THREAD_STATS
  269      When this is defined as non-zero, some statistics on mutex locking
  270      are computed.
  271 
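  For example (an illustrative sketch; exact flags vary by platform), a
  POSIX-threads build on a typical Unix system might look like:

      gcc -O2 -DUSE_PTHREADS=1 -DHAVE_MMAP=1 -c ptmalloc.c
      gcc ... ptmalloc.o -lpthread

  leaving USE_THR and USE_SPROC undefined.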
  272 */
  273 
  274 
  275 
  276 
  277 /* Preliminaries */
  278 
  279 #ifndef __STD_C
  280 #if defined (__STDC__)
  281 #define __STD_C     1
  282 #else
  283 #if __cplusplus
  284 #define __STD_C     1
  285 #else
  286 #define __STD_C     0
  287 #endif /*__cplusplus*/
  288 #endif /*__STDC__*/
  289 #endif /*__STD_C*/
  290 
  291 #ifndef Void_t
  292 #if __STD_C
  293 #define Void_t      void
  294 #else
  295 #define Void_t      char
  296 #endif
  297 #endif /*Void_t*/
  298 
  299 #if __STD_C
  300 # include <stddef.h>   /* for size_t */
  301 # if defined _LIBC || defined MALLOC_HOOKS
  302 #  include <stdlib.h>  /* for getenv(), abort() */
  303 # endif
  304 #else
  305 # include <sys/types.h>
  306 # if defined _LIBC || defined MALLOC_HOOKS
  307 extern char* getenv();
  308 # endif
  309 #endif
  310 
  311 /* Macros for handling mutexes and thread-specific data.  This is
  312    included early, because some thread-related header files (such as
  313    pthread.h) should be included before any others. */
  314 #include "thread-m.h"
  315 
  316 #ifdef __cplusplus
  317 extern "C" {
  318 #endif
  319 
  320 #include <stdio.h>    /* needed for malloc_stats */
  321 
  322 
  323 /*
  324   Compile-time options
  325 */
  326 
  327 
  328 /*
  329     Debugging:
  330 
  331     Because freed chunks may be overwritten with link fields, this
  332     malloc will often die when freed memory is overwritten by user
  333     programs.  This can be very effective (albeit in an annoying way)
  334     in helping track down dangling pointers.
  335 
  336     If you compile with -DMALLOC_DEBUG, a number of assertion checks are
  337     enabled that will catch more memory errors. You probably won't be
  338     able to make much sense of the actual assertion errors, but they
  339     should help you locate incorrectly overwritten memory.  The
  340     checking is fairly extensive, and will slow down execution
  341     noticeably. Calling malloc_stats or mallinfo with MALLOC_DEBUG set will
  342     attempt to check every non-mmapped allocated and free chunk in the
  343     course of computing the summaries. (By nature, mmapped regions
  344     cannot be checked very much automatically.)
  345 
  346     Setting MALLOC_DEBUG may also be helpful if you are trying to modify
  347     this code. The assertions in the check routines spell out in more
  348     detail the assumptions and invariants underlying the algorithms.
  349 
  350 */
  351 
  352 #if MALLOC_DEBUG
  353 #include <assert.h>
  354 #else
  355 #define assert(x) ((void)0)
  356 #endif
  357 
  358 
  359 /*
  360   INTERNAL_SIZE_T is the word-size used for internal bookkeeping
  361   of chunk sizes. On a 64-bit machine, you can reduce malloc
  362   overhead by defining INTERNAL_SIZE_T to be a 32 bit `unsigned int'
  363   at the expense of not being able to handle requests greater than
  364   2^31. This limitation is hardly ever a concern; you are encouraged
  365   to set this. However, the default version is the same as size_t.
  366 */
  367 
  368 #ifndef INTERNAL_SIZE_T
  369 #define INTERNAL_SIZE_T size_t
  370 #endif
  371 
  372 /*
  373   REALLOC_ZERO_BYTES_FREES should be set if a call to realloc with
  374   zero bytes should be the same as a call to free.  The C standard
  375   requires this. Otherwise, since this malloc returns a unique pointer
  376   for malloc(0), so does realloc(p, 0).
  377 */
  378 
  379 
  380 #define REALLOC_ZERO_BYTES_FREES
  381 
  382 
  383 /*
  384   HAVE_MEMCPY should be defined if you are not otherwise using
  385   ANSI STD C, but still have memcpy and memset in your C library
  386   and want to use them in calloc and realloc. Otherwise simple
  387   macro versions are defined here.
  388 
  389   USE_MEMCPY should be defined as 1 if you actually want to
  390   have memset and memcpy called. People report that the macro
  391   versions are often enough faster than libc versions on many
  392   systems that it is better to use them.
  393 
  394 */
  395 
  396 #define HAVE_MEMCPY 1
  397 
  398 #ifndef USE_MEMCPY
  399 #ifdef HAVE_MEMCPY
  400 #define USE_MEMCPY 1
  401 #else
  402 #define USE_MEMCPY 0
  403 #endif
  404 #endif
  405 
  406 #if (__STD_C || defined(HAVE_MEMCPY))
  407 
  408 #if __STD_C
  409 void* memset(void*, int, size_t);
  410 void* memcpy(void*, const void*, size_t);
  411 #else
  412 Void_t* memset();
  413 Void_t* memcpy();
  414 #endif
  415 #endif
  416 
  417 #if USE_MEMCPY
  418 
  419 /* The following macros are only invoked with (2n+1)-multiples of
  420    INTERNAL_SIZE_T units, with a positive integer n. This is exploited
  421    for fast inline execution when n is small. */
  422 
  423 #define MALLOC_ZERO(charp, nbytes)                                            \
  424 do {                                                                          \
  425   INTERNAL_SIZE_T mzsz = (nbytes);                                            \
  426   if(mzsz <= 9*sizeof(mzsz)) {                                                \
  427     INTERNAL_SIZE_T* mz = (INTERNAL_SIZE_T*) (charp);                         \
  428     if(mzsz >= 5*sizeof(mzsz)) {     *mz++ = 0;                               \
  429                                      *mz++ = 0;                               \
  430       if(mzsz >= 7*sizeof(mzsz)) {   *mz++ = 0;                               \
  431                                      *mz++ = 0;                               \
  432         if(mzsz >= 9*sizeof(mzsz)) { *mz++ = 0;                               \
  433                                      *mz++ = 0; }}}                           \
  434                                      *mz++ = 0;                               \
  435                                      *mz++ = 0;                               \
  436                                      *mz   = 0;                               \
  437   } else memset((charp), 0, mzsz);                                            \
  438 } while(0)
  439 
  440 #define MALLOC_COPY(dest,src,nbytes)                                          \
  441 do {                                                                          \
  442   INTERNAL_SIZE_T mcsz = (nbytes);                                            \
  443   if(mcsz <= 9*sizeof(mcsz)) {                                                \
  444     INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) (src);                        \
  445     INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) (dest);                       \
  446     if(mcsz >= 5*sizeof(mcsz)) {     *mcdst++ = *mcsrc++;                     \
  447                                      *mcdst++ = *mcsrc++;                     \
  448       if(mcsz >= 7*sizeof(mcsz)) {   *mcdst++ = *mcsrc++;                     \
  449                                      *mcdst++ = *mcsrc++;                     \
  450         if(mcsz >= 9*sizeof(mcsz)) { *mcdst++ = *mcsrc++;                     \
  451                                      *mcdst++ = *mcsrc++; }}}                 \
  452                                      *mcdst++ = *mcsrc++;                     \
  453                                      *mcdst++ = *mcsrc++;                     \
  454                                      *mcdst   = *mcsrc  ;                     \
  455   } else memcpy(dest, src, mcsz);                                             \
  456 } while(0)
  457 
  458 #else /* !USE_MEMCPY */
  459 
  460 /* Use Duff's device for good zeroing/copying performance. */
  461 
  462 #define MALLOC_ZERO(charp, nbytes)                                            \
  463 do {                                                                          \
  464   INTERNAL_SIZE_T* mzp = (INTERNAL_SIZE_T*)(charp);                           \
  465   long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn;                         \
  466   if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; }             \
  467   switch (mctmp) {                                                            \
  468     case 0: for(;;) { *mzp++ = 0;                                             \
  469     case 7:           *mzp++ = 0;                                             \
  470     case 6:           *mzp++ = 0;                                             \
  471     case 5:           *mzp++ = 0;                                             \
  472     case 4:           *mzp++ = 0;                                             \
  473     case 3:           *mzp++ = 0;                                             \
  474     case 2:           *mzp++ = 0;                                             \
  475     case 1:           *mzp++ = 0; if(mcn <= 0) break; mcn--; }                \
  476   }                                                                           \
  477 } while(0)
  478 
  479 #define MALLOC_COPY(dest,src,nbytes)                                          \
  480 do {                                                                          \
  481   INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) src;                            \
  482   INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) dest;                           \
  483   long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn;                         \
  484   if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; }             \
  485   switch (mctmp) {                                                            \
  486     case 0: for(;;) { *mcdst++ = *mcsrc++;                                    \
  487     case 7:           *mcdst++ = *mcsrc++;                                    \
  488     case 6:           *mcdst++ = *mcsrc++;                                    \
  489     case 5:           *mcdst++ = *mcsrc++;                                    \
  490     case 4:           *mcdst++ = *mcsrc++;                                    \
  491     case 3:           *mcdst++ = *mcsrc++;                                    \
  492     case 2:           *mcdst++ = *mcsrc++;                                    \
  493     case 1:           *mcdst++ = *mcsrc++; if(mcn <= 0) break; mcn--; }       \
  494   }                                                                           \
  495 } while(0)
  496 
  497 #endif
  498 
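/* Illustrative usage sketch (not part of the original code): realloc and
   calloc apply these macros to whole chunks, roughly as in

       MALLOC_ZERO(mem, csz - SIZE_SZ);                 clear a calloc'ed chunk
       MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);  move user data on realloc

   where the byte counts are odd multiples of sizeof(INTERNAL_SIZE_T), as
   assumed by the USE_MEMCPY versions above. */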
  499 
  500 /*
  501   Define HAVE_MMAP to optionally make malloc() use mmap() to
  502   allocate very large blocks.  These will be returned to the
  503   operating system immediately after a free().
  504 */
  505 
  506 #ifndef HAVE_MMAP
  507 #define HAVE_MMAP 1
  508 #endif
  509 
  510 /*
  511   Define HAVE_MREMAP to make realloc() use mremap() to re-allocate
  512   large blocks.  This is currently only possible on Linux with
  513   kernel versions newer than 1.3.77.
  514 */
  515 
  516 #if 0
  517 #ifndef HAVE_MREMAP
  518 #define HAVE_MREMAP defined(__linux__) && !defined(__arm__)
  519 #endif
  520 #endif
  521 
  522 #if HAVE_MMAP
  523 
  524 #include <unistd.h>
  525 #include <fcntl.h>
  526 #include <sys/mman.h>
  527 
  528 #if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
  529 #define MAP_ANONYMOUS MAP_ANON
  530 #endif
  531 #if !defined(MAP_FAILED)
  532 #define MAP_FAILED ((char*)-1)
  533 #endif
  534 
  535 #ifndef MAP_NORESERVE
  536 # ifdef MAP_AUTORESRV
  537 #  define MAP_NORESERVE MAP_AUTORESRV
  538 # else
  539 #  define MAP_NORESERVE 0
  540 # endif
  541 #endif
  542 
  543 #endif /* HAVE_MMAP */
  544 
  545 /*
  546   Access to system page size. To the extent possible, this malloc
  547   manages memory from the system in page-size units.
  548 
  549   The following mechanics for getpagesize were adapted from
  550   bsd/gnu getpagesize.h
  551 */
  552 
  553 #ifndef LACKS_UNISTD_H
  554 #  include <unistd.h>
  555 #endif
  556 
  557 #ifndef malloc_getpagesize
  558 #  ifdef _SC_PAGESIZE         /* some SVR4 systems omit an underscore */
  559 #    ifndef _SC_PAGE_SIZE
  560 #      define _SC_PAGE_SIZE _SC_PAGESIZE
  561 #    endif
  562 #  endif
  563 #  ifdef _SC_PAGE_SIZE
  564 #    define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
  565 #  else
  566 #    if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
  567        extern size_t getpagesize();
  568 #      define malloc_getpagesize getpagesize()
  569 #    else
  570 #      include <sys/param.h>
  571 #      ifdef EXEC_PAGESIZE
  572 #        define malloc_getpagesize EXEC_PAGESIZE
  573 #      else
  574 #        ifdef NBPG
  575 #          ifndef CLSIZE
  576 #            define malloc_getpagesize NBPG
  577 #          else
  578 #            define malloc_getpagesize (NBPG * CLSIZE)
  579 #          endif
  580 #        else
  581 #          ifdef NBPC
  582 #            define malloc_getpagesize NBPC
  583 #          else
  584 #            ifdef PAGESIZE
  585 #              define malloc_getpagesize PAGESIZE
  586 #            else
  587 #              define malloc_getpagesize (4096) /* just guess */
  588 #            endif
  589 #          endif
  590 #        endif
  591 #      endif
  592 #    endif
  593 #  endif
  594 #endif
  595 
  596 
  597 
  598 /*
  599 
  600   This version of malloc supports the standard SVID/XPG mallinfo
  601   routine that returns a struct containing the same kind of
  602   information you can get from malloc_stats. It should work on
  603   any SVID/XPG compliant system that has a /usr/include/malloc.h
  604   defining struct mallinfo. (If you'd like to install such a thing
  605   yourself, cut out the preliminary declarations as described above
  606   and below and save them in a malloc.h file. But there's no
  607   compelling reason to bother to do this.)
  608 
  609   The main declaration needed is the mallinfo struct that is returned
  610   (by-copy) by mallinfo().  The SVID/XPG mallinfo struct contains a
  611   bunch of fields, most of which are not even meaningful in this
  612   version of malloc. Some of these fields are instead filled by
  613   mallinfo() with other numbers that might possibly be of interest.
  614 
  615   HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
  616   /usr/include/malloc.h file that includes a declaration of struct
  617   mallinfo.  If so, it is included; else an SVID2/XPG2 compliant
  618   version is declared below.  These must be precisely the same for
  619   mallinfo() to work.
  620 
  621 */
  622 
  623 /* #define HAVE_USR_INCLUDE_MALLOC_H */
  624 
  625 #if HAVE_USR_INCLUDE_MALLOC_H
  626 # include "/usr/include/malloc.h"
  627 #else
  628 # ifdef _LIBC
  629 #  include "malloc.h"
  630 # else
  631 #  include "ptmalloc.h"
  632 # endif
  633 #endif
  634 
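/* Illustrative sketch (assuming the conventional SVID field names in the
   struct mallinfo declared by the header just included):

       struct mallinfo mi = mallinfo();
       fprintf(stderr, "in use: %d, free: %d, mmapped: %d bytes\n",
               mi.uordblks, mi.fordblks, mi.hblkhd);
*/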
  635 
  636 
  637 #ifndef DEFAULT_TRIM_THRESHOLD
  638 #define DEFAULT_TRIM_THRESHOLD (128 * 1024)
  639 #endif
  640 
  641 /*
  642     M_TRIM_THRESHOLD is the maximum amount of unused top-most memory
  643       to keep before releasing via malloc_trim in free().
  644 
  645       Automatic trimming is mainly useful in long-lived programs.
  646       Because trimming via sbrk can be slow on some systems, and can
  647       sometimes be wasteful (in cases where programs immediately
  648       afterward allocate more large chunks) the value should be high
  649       enough so that your overall system performance would improve by
  650       releasing.
  651 
  652       The trim threshold and the mmap control parameters (see below)
  653       can be traded off with one another. Trimming and mmapping are
  654       two different ways of releasing unused memory back to the
  655       system. Between these two, it is often possible to keep
  656       system-level demands of a long-lived program down to a bare
  657       minimum. For example, in one test suite of sessions measuring
  658       the XF86 X server on Linux, using a trim threshold of 128K and a
  659       mmap threshold of 192K led to near-minimal long term resource
  660       consumption.
  661 
  662       If you are using this malloc in a long-lived program, it should
  663       pay to experiment with these values.  As a rough guide, you
  664       might set it to a value close to the average size of a process
  665       (program) running on your system.  Releasing this much memory
  666       would allow such a process to run in memory.  Generally, it's
  667       worth it to tune for trimming rather than memory mapping when a
  668       program undergoes phases where several large chunks are
  669       allocated and released in ways that can reuse each other's
  670       storage, perhaps mixed with phases where there are no such
  671       chunks at all.  And in well-behaved long-lived programs,
  672       controlling release of large blocks via trimming versus mapping
  673       is usually faster.
  674 
  675       However, in most programs, these parameters serve mainly as
  676       protection against the system-level effects of carrying around
  677       massive amounts of unneeded memory. Since frequent calls to
  678       sbrk, mmap, and munmap otherwise degrade performance, the default
  679       parameters are set to relatively high values that serve only as
  680       safeguards.
  681 
  682       The default trim value is high enough to cause trimming only in
  683       fairly extreme (by current memory consumption standards) cases.
  684       It must be greater than page size to have any useful effect.  To
  685       disable trimming completely, you can set to (unsigned long)(-1);
  686 
  687 
  688 */
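/* For example (illustrative sketch): a long-lived program that wants memory
   returned to the system more eagerly might lower the threshold, and
   trimming can be disabled altogether with -1:

       mallopt(M_TRIM_THRESHOLD, 64*1024);
       mallopt(M_TRIM_THRESHOLD, -1);        disable automatic trimming
*/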
  689 
  690 
  691 #ifndef DEFAULT_TOP_PAD
  692 #define DEFAULT_TOP_PAD        (0)
  693 #endif
  694 
  695 /*
  696     M_TOP_PAD is the amount of extra `padding' space to allocate or
  697       retain whenever sbrk is called. It is used in two ways internally:
  698 
  699       * When sbrk is called to extend the top of the arena to satisfy
  700         a new malloc request, this much padding is added to the sbrk
  701         request.
  702 
  703       * When malloc_trim is called automatically from free(),
  704         it is used as the `pad' argument.
  705 
  706       In both cases, the actual amount of padding is rounded
  707       so that the end of the arena is always a system page boundary.
  708 
  709       The main reason for using padding is to avoid calling sbrk so
  710       often. Having even a small pad greatly reduces the likelihood
  711       that nearly every malloc request during program start-up (or
  712       after trimming) will invoke sbrk, which needlessly wastes
  713       time.
  714 
  715       Automatic rounding-up to page-size units is normally sufficient
  716       to avoid measurable overhead, so the default is 0.  However, in
  717       systems where sbrk is relatively slow, it can pay to increase
  718       this value, at the expense of carrying around more memory than
  719       the program needs.
  720 
  721 */
  722 
  723 
  724 #ifndef DEFAULT_MMAP_THRESHOLD
  725 #define DEFAULT_MMAP_THRESHOLD (128 * 1024)
  726 #endif
  727 
  728 /*
  729 
  730     M_MMAP_THRESHOLD is the request size threshold for using mmap()
  731       to service a request. Requests of at least this size that cannot
  732       be allocated using already-existing space will be serviced via mmap.
  733       (If enough normal freed space already exists it is used instead.)
  734 
  735       Using mmap segregates relatively large chunks of memory so that
  736       they can be individually obtained and released from the host
  737       system. A request serviced through mmap is never reused by any
  738       other request (at least not directly; the system may just so
  739       happen to remap successive requests to the same locations).
  740 
  741       Segregating space in this way has the benefit that mmapped space
  742       can ALWAYS be individually released back to the system, which
  743       helps keep the system level memory demands of a long-lived
  744       program low. Mapped memory can never become `locked' between
  745       other chunks, as can happen with normally allocated chunks, which
  746       means that even trimming via malloc_trim would not release them.
  747 
  748       However, it has the disadvantages that:
  749 
  750          1. The space cannot be reclaimed, consolidated, and then
  751             used to service later requests, as happens with normal chunks.
  752          2. It can lead to more wastage because of mmap page alignment
  753             requirements
  754          3. It causes malloc performance to be more dependent on host
  755             system memory management support routines which may vary in
  756             implementation quality and may impose arbitrary
  757             limitations. Generally, servicing a request via normal
  758             malloc steps is faster than going through a system's mmap.
  759 
  760       All together, these considerations should lead you to use mmap
  761       only for relatively large requests.
  762 
  763 
  764 */
  765 
  766 
  767 
  768 #ifndef DEFAULT_MMAP_MAX
  769 #if HAVE_MMAP
  770 #define DEFAULT_MMAP_MAX       (1024)
  771 #else
  772 #define DEFAULT_MMAP_MAX       (0)
  773 #endif
  774 #endif
  775 
  776 /*
  777     M_MMAP_MAX is the maximum number of requests to simultaneously
  778       service using mmap. This parameter exists because:
  779 
  780          1. Some systems have a limited number of internal tables for
  781             use by mmap.
  782          2. In most systems, overreliance on mmap can degrade overall
  783             performance.
  784          3. If a program allocates many large regions, it is probably
  785             better off using normal sbrk-based allocation routines that
  786             can reclaim and reallocate normal heap memory. Using a
  787             small value allows transition into this mode after the
  788             first few allocations.
  789 
  790       Setting to 0 disables all use of mmap.  If HAVE_MMAP is not set,
  791       the default value is 0, and attempts to set it to non-zero values
  792       in mallopt will fail.
  793 */
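/* Illustrative sketch: the mmap-related tunables can likewise be adjusted at
   run time, e.g.

       mallopt(M_MMAP_THRESHOLD, 256*1024);  mmap only requests of 256K and up
       mallopt(M_MMAP_MAX, 64);              at most 64 simultaneous mmapped chunks
       mallopt(M_TOP_PAD, 128*1024);         pad each sbrk request by 128K

   (the M_* parameter numbers are normally declared alongside struct mallinfo
   in the included malloc.h/ptmalloc.h). */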
  794 
  795 
  796 
  797 #ifndef DEFAULT_CHECK_ACTION
  798 #define DEFAULT_CHECK_ACTION 1
  799 #endif
  800 
  801 /* What to do if the standard debugging hooks are in place and a
  802    corrupt pointer is detected: do nothing (0), print an error message
  803    (1), or call abort() (2). */
  804 
  805 
  806 
  807 #define HEAP_MIN_SIZE (32*1024)
  808 #define HEAP_MAX_SIZE (1024*1024) /* must be a power of two */
  809 
  810 /* HEAP_MIN_SIZE and HEAP_MAX_SIZE limit the size of mmap()ed heaps
  811       that are dynamically created for multi-threaded programs.  The
  812       maximum size must be a power of two, for fast determination of
  813       which heap belongs to a chunk.  It should be much larger than
  814       the mmap threshold, so that requests with a size just below that
  815       threshold can be fulfilled without creating too many heaps.
  816 */
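/* Because each heap starts at an address aligned to HEAP_MAX_SIZE, the heap
   (and hence the arena) owning a chunk can be found by simply masking the
   chunk's address, along the lines of this illustrative sketch:

       (heap_info*)((unsigned long)(ptr) & ~(HEAP_MAX_SIZE - 1))
*/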
  817 
  818 
  819 
  820 #ifndef THREAD_STATS
  821 #define THREAD_STATS 0
  822 #endif
  823 
  824 /* If THREAD_STATS is non-zero, some statistics on mutex locking are
  825    computed. */
  826 
  827 
  828 
  829 /* On some platforms we can compile internal, not exported functions better.
  830    Let the environment provide a macro and define it to be empty if it
  831    is not available.  */
  832 #ifndef internal_function
  833 # define internal_function
  834 #endif
  835 
  836 
  837 /*
  838 
  839   Special defines for the Linux/GNU C library.
  840 
  841 */
  842 
  843 
  844 #ifdef _LIBC
  845 
  846 #if __STD_C
  847 
  848 Void_t * __default_morecore (ptrdiff_t);
  849 Void_t *(*__morecore)(ptrdiff_t) = __default_morecore;
  850 
  851 #else
  852 
  853 Void_t * __default_morecore ();
  854 Void_t *(*__morecore)() = __default_morecore;
  855 
  856 #endif
  857 
  858 #define MORECORE (*__morecore)
  859 #define MORECORE_FAILURE 0
  860 #define MORECORE_CLEARS 1
  861 #define mmap    __mmap
  862 #define munmap  __munmap
  863 #define mremap  __mremap
  864 #define mprotect __mprotect
  865 #undef malloc_getpagesize
  866 #define malloc_getpagesize __getpagesize()
  867 
  868 #else /* _LIBC */
  869 
  870 #if __STD_C
  871 extern Void_t*     sbrk(ptrdiff_t);
  872 #else
  873 extern Void_t*     sbrk();
  874 #endif
  875 
  876 #ifndef MORECORE
  877 #define MORECORE sbrk
  878 #endif
  879 
  880 #ifndef MORECORE_FAILURE
  881 #define MORECORE_FAILURE -1
  882 #endif
  883 
  884 #ifndef MORECORE_CLEARS
  885 #define MORECORE_CLEARS 1
  886 #endif
  887 
  888 #endif /* _LIBC */
  889 
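/* Illustrative sketch (hypothetical names): for a stand-alone or embedded
   build without sbrk, a custom core allocator could be plugged in by
   defining, before this point,

       #define MORECORE          my_morecore
       #define MORECORE_FAILURE  0
       #define MORECORE_CLEARS   0

   where my_morecore(ptrdiff_t increment) returns a pointer to `increment'
   additional bytes extending the heap, or MORECORE_FAILURE when no more
   space is available (negative increments release space, as sbrk does). */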
  890 #ifdef _LIBC
  891 
  892 #define cALLOc          __libc_calloc
  893 #define fREe            __libc_free
  894 #define mALLOc          __libc_malloc
  895 #define mEMALIGn        __libc_memalign
  896 #define rEALLOc         __libc_realloc
  897 #define vALLOc          __libc_valloc
  898 #define pvALLOc         __libc_pvalloc
  899 #define mALLINFo        __libc_mallinfo
  900 #define mALLOPt         __libc_mallopt
  901 #define mALLOC_STATs    __malloc_stats
  902 #define mALLOC_USABLE_SIZe __malloc_usable_size
  903 #define mALLOC_TRIm     __malloc_trim
  904 #define mALLOC_GET_STATe __malloc_get_state
  905 #define mALLOC_SET_STATe __malloc_set_state
  906 
  907 #else
  908 
  909 #define cALLOc          ptcalloc
  910 #define fREe            ptfree
  911 #define mALLOc          ptmalloc
  912 #define mEMALIGn        ptmemalign
  913 #define rEALLOc         ptrealloc
  914 #define vALLOc          ptvalloc
  915 #define pvALLOc         ptpvalloc
  916 #define mALLINFo        ptmallinfo
  917 #define mALLOPt         ptmallopt
  918 #define mALLOC_STATs    ptmalloc_stats
  919 #define mALLOC_USABLE_SIZe ptmalloc_usable_size
  920 #define mALLOC_TRIm     ptmalloc_trim
  921 #define mALLOC_GET_STATe ptmalloc_get_state
  922 #define mALLOC_SET_STATe ptmalloc_set_state
  923 
  924 #endif
  925 
  926 /* Public routines */
  927 
  928 #if __STD_C
  929 
  930 #ifndef _LIBC
  931 void    ptmalloc_init(void);
  932 #endif
  933 Void_t* mALLOc(size_t);
  934 void    fREe(Void_t*);
  935 Void_t* rEALLOc(Void_t*, size_t);
  936 Void_t* mEMALIGn(size_t, size_t);
  937 Void_t* vALLOc(size_t);
  938 Void_t* pvALLOc(size_t);
  939 Void_t* cALLOc(size_t, size_t);
  940 void    cfree(Void_t*);
  941 int     mALLOC_TRIm(size_t);
  942 size_t  mALLOC_USABLE_SIZe(Void_t*);
  943 void    mALLOC_STATs(void);
  944 int     mALLOPt(int, int);
  945 struct mallinfo mALLINFo(void);
  946 Void_t* mALLOC_GET_STATe(void);
  947 int     mALLOC_SET_STATe(Void_t*);
  948 
  949 #else /* !__STD_C */
  950 
  951 #ifndef _LIBC
  952 void    ptmalloc_init();
  953 #endif
  954 Void_t* mALLOc();
  955 void    fREe();
  956 Void_t* rEALLOc();
  957 Void_t* mEMALIGn();
  958 Void_t* vALLOc();
  959 Void_t* pvALLOc();
  960 Void_t* cALLOc();
  961 void    cfree();
  962 int     mALLOC_TRIm();
  963 size_t  mALLOC_USABLE_SIZe();
  964 void    mALLOC_STATs();
  965 int     mALLOPt();
  966 struct mallinfo mALLINFo();
  967 Void_t* mALLOC_GET_STATe();
  968 int     mALLOC_SET_STATe();
  969 
  970 #endif /* __STD_C */
  971 
  972 
  973 #ifdef __cplusplus
  974 };  /* end of extern "C" */
  975 #endif
  976 
  977 #if !defined(NO_THREADS) && !HAVE_MMAP
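/* The bare string literal below is intentionally invalid at file scope; it
   forces a compile-time error for this unsupported configuration. */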
  978 "Can't have threads support without mmap"
  979 #endif
  980 
  981 
  982 /*
  983   Type declarations
  984 */
  985 
  986 
  987 struct malloc_chunk
  988 {
  989   INTERNAL_SIZE_T prev_size; /* Size of previous chunk (if free). */
  990   INTERNAL_SIZE_T size;      /* Size in bytes, including overhead. */
  991   struct malloc_chunk* fd;   /* double links -- used only if free. */
  992   struct malloc_chunk* bk;
  993 };
  994 
  995 typedef struct malloc_chunk* mchunkptr;
  996 
  997 /*
  998 
  999    malloc_chunk details:
 1000 
 1001     (The following includes lightly edited explanations by Colin Plumb.)
 1002 
 1003     Chunks of memory are maintained using a `boundary tag' method as
 1004     described in e.g., Knuth or Standish.  (See the paper by Paul
 1005     Wilson ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a
 1006     survey of such techniques.)  Sizes of free chunks are stored both
 1007     in the front of each chunk and at the end.  This makes
 1008     consolidating fragmented chunks into bigger chunks very fast.  The
 1009     size fields also hold bits representing whether chunks are free or
 1010     in use.
 1011 
 1012     An allocated chunk looks like this:
 1013 
 1014 
 1015     chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 1016             |             Size of previous chunk, if allocated            | |
 1017             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 1018             |             Size of chunk, in bytes                         |P|
 1019       mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 1020             |             User data starts here...                          .
 1021             .                                                               .
 1022             .             (malloc_usable_space() bytes)                     .
 1023             .                                                               |
 1024 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 1025             |             Size of chunk                                     |
 1026             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 1027 
 1028 
 1029     Where "chunk" is the front of the chunk for the purpose of most of
 1030     the malloc code, but "mem" is the pointer that is returned to the
 1031     user.  "Nextchunk" is the beginning of the next contiguous chunk.
 1032 
 1033     Chunks always begin on even word boundaries, so the mem portion
 1034     (which is returned to the user) is also on an even word boundary, and
 1035     thus double-word aligned.
 1036 
 1037     Free chunks are stored in circular doubly-linked lists, and look like this:
 1038 
 1039     chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 1040             |             Size of previous chunk                            |
 1041             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 1042     `head:' |             Size of chunk, in bytes                         |P|
 1043       mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 1044             |             Forward pointer to next chunk in list             |
 1045             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 1046             |             Back pointer to previous chunk in list            |
 1047             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 1048             |             Unused space (may be 0 bytes long)                .
 1049             .                                                               .
 1050             .                                                               |
 1051 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 1052     `foot:' |             Size of chunk, in bytes                           |
 1053             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 1054 
 1055     The P (PREV_INUSE) bit, stored in the unused low-order bit of the
 1056     chunk size (which is always a multiple of two words), is an in-use
 1057     bit for the *previous* chunk.  If that bit is *clear*, then the
 1058     word before the current chunk size contains the previous chunk
 1059     size, and can be used to find the front of the previous chunk.
 1060     (The very first chunk allocated always has this bit set,
 1061     preventing access to non-existent (or non-owned) memory.)
 1062 
 1063     Note that the `foot' of the current chunk is actually represented
 1064     as the prev_size of the NEXT chunk. (This makes it easier to
 1065     deal with alignments etc).
 1066 
 1067     The two exceptions to all this are
 1068 
 1069      1. The special chunk `top', which doesn't bother using the
 1070         trailing size field since there is no
 1071         next contiguous chunk that would have to index off it. (After
 1072         initialization, `top' is forced to always exist.  If it would
 1073         become less than MINSIZE bytes long, it is replenished via
 1074         malloc_extend_top.)
 1075 
 1076      2. Chunks allocated via mmap, which have the second-lowest-order
 1077         bit (IS_MMAPPED) set in their size fields.  Because they are
 1078         never merged or traversed from any other chunk, they have no
 1079         foot size or inuse information.
 1080 
 1081     Available chunks are kept in any of several places (all declared below):
 1082 
 1083     * `av': An array of chunks serving as bin headers for consolidated
 1084        chunks. Each bin is doubly linked.  The bins are approximately
 1085        proportionally (log) spaced.  There are a lot of these bins
 1086        (128). This may look excessive, but works very well in
 1087        practice.  All procedures maintain the invariant that no
 1088        consolidated chunk physically borders another one. Chunks in
 1089        bins are kept in size order, with ties going to the
 1090        approximately least recently used chunk.
 1091 
 1092        The chunks in each bin are maintained in decreasing sorted order by
 1093        size.  This is irrelevant for the small bins, which all contain
 1094        the same-sized chunks, but facilitates best-fit allocation for
 1095        larger chunks. (These lists are just sequential. Keeping them in
 1096        order almost never requires enough traversal to warrant using
 1097        fancier ordered data structures.)  Chunks of the same size are
 1098        linked with the most recently freed at the front, and allocations
 1099        are taken from the back.  This results in LRU or FIFO allocation
 1100        order, which tends to give each chunk an equal opportunity to be
 1101        consolidated with adjacent freed chunks, resulting in larger free
 1102        chunks and less fragmentation.
 1103 
 1104     * `top': The top-most available chunk (i.e., the one bordering the
 1105        end of available memory) is treated specially. It is never
 1106        included in any bin, is used only if no other chunk is
 1107        available, and is released back to the system if it is very
 1108        large (see M_TRIM_THRESHOLD).
 1109 
 1110     * `last_remainder': A bin holding only the remainder of the
 1111        most recently split (non-top) chunk. This bin is checked
 1112        before other non-fitting chunks, so as to provide better
 1113        locality for runs of sequentially allocated chunks.
 1114 
 1115     *  Implicitly, through the host system's memory mapping tables.
 1116        If supported, requests greater than a threshold are usually
 1117        serviced via calls to mmap, and then later released via munmap.
 1118 
 1119 */
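/*
  Illustrative sketch (not part of the original code) of how the macros
  defined below tie this together.  Given the `mem' pointer handed out to
  the user:

      mchunkptr p           = mem2chunk(mem);          header starts 2*SIZE_SZ before mem
      INTERNAL_SIZE_T sz    = chunksize(p);            size with the status bits masked off
      mchunkptr next        = chunk_at_offset(p, sz);  next physical chunk
      int prev_used         = prev_inuse(p);           PREV_INUSE bit of p

  None of this is visible through the public interface; user code only ever
  sees `mem'.
*/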
 1120 
 1121 /*
 1122    Bins
 1123 
 1124     The bins are an array of pairs of pointers serving as the
 1125     heads of (initially empty) doubly-linked lists of chunks, laid out
 1126     in a way so that each pair can be treated as if it were in a
 1127     malloc_chunk. (This way, the fd/bk offsets for linking bin heads
 1128     and chunks are the same).
 1129 
 1130     Bins for sizes < 512 bytes contain chunks of all the same size, spaced
 1131     8 bytes apart. Larger bins are approximately logarithmically
 1132     spaced. (See the table below.)
 1133 
 1134     Bin layout:
 1135 
 1136     64 bins of size       8
 1137     32 bins of size      64
 1138     16 bins of size     512
 1139      8 bins of size    4096
 1140      4 bins of size   32768
 1141      2 bins of size  262144
 1142      1 bin  of size what's left
 1143 
 1144     There is actually a little bit of slop in the numbers in bin_index
 1145     for the sake of speed. This makes no difference elsewhere.
 1146 
 1147     The special chunks `top' and `last_remainder' get their own bins,
 1148     (this is implemented via yet more trickery with the av array),
 1149     although `top' is never properly linked to its bin since it is
 1150     always handled specially.
 1151 
 1152 */
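/* Worked examples (illustrative only, derived from bin_index further below):
   a 24-byte chunk lands in bin 24>>3 = 3, a 512-byte chunk in bin
   56 + (512>>6) = 64 (the first of the 64-byte-spaced bins), and a 4096-byte
   chunk in bin 91 + (4096>>9) = 99. */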
 1153 
 1154 #define NAV             128   /* number of bins */
 1155 
 1156 typedef struct malloc_chunk* mbinptr;
 1157 
 1158 /* An arena is a configuration of malloc_chunks together with an array
 1159    of bins.  With multiple threads, it must be locked via a mutex
 1160    before changing its data structures.  One or more `heaps' are
 1161    associated with each arena, except for the main_arena, which is
 1162    associated only with the `main heap', i.e.  the conventional free
 1163    store obtained with calls to MORECORE() (usually sbrk).  The `av'
 1164    array is never mentioned directly in the code, but instead used via
 1165    bin access macros. */
 1166 
 1167 typedef struct _arena {
 1168   mbinptr av[2*NAV + 2];
 1169   struct _arena *next;
 1170   size_t size;
 1171 #if THREAD_STATS
 1172   long stat_lock_direct, stat_lock_loop, stat_lock_wait;
 1173 #endif
 1174   mutex_t mutex;
 1175 } arena;
 1176 
 1177 
 1178 /* A heap is a single contiguous memory region holding (coalesceable)
 1179    malloc_chunks.  It is allocated with mmap() and always starts at an
 1180    address aligned to HEAP_MAX_SIZE.  Not used unless compiling for
 1181    multiple threads. */
 1182 
 1183 typedef struct _heap_info {
 1184   arena *ar_ptr; /* Arena for this heap. */
 1185   struct _heap_info *prev; /* Previous heap. */
 1186   size_t size;   /* Current size in bytes. */
 1187   size_t pad;    /* Make sure the following data is properly aligned. */
 1188 } heap_info;
 1189 
 1190 
 1191 /*
 1192   Static functions (forward declarations)
 1193 */
 1194 
 1195 #if __STD_C
 1196 
 1197 static void      chunk_free(arena *ar_ptr, mchunkptr p) internal_function;
 1198 static mchunkptr chunk_alloc(arena *ar_ptr, INTERNAL_SIZE_T size)
 1199      internal_function;
 1200 static mchunkptr chunk_realloc(arena *ar_ptr, mchunkptr oldp,
 1201                                INTERNAL_SIZE_T oldsize, INTERNAL_SIZE_T nb)
 1202      internal_function;
 1203 static mchunkptr chunk_align(arena *ar_ptr, INTERNAL_SIZE_T nb,
 1204                              size_t alignment) internal_function;
 1205 static int       main_trim(size_t pad) internal_function;
 1206 #ifndef NO_THREADS
 1207 static int       heap_trim(heap_info *heap, size_t pad) internal_function;
 1208 #endif
 1209 #if defined _LIBC || defined MALLOC_HOOKS
 1210 static Void_t*   malloc_check(size_t sz);
 1211 static void      free_check(Void_t* mem);
 1212 static Void_t*   realloc_check(Void_t* oldmem, size_t bytes);
 1213 static Void_t*   memalign_check(size_t alignment, size_t bytes);
 1214 #ifndef NO_THREADS
 1215 static Void_t*   malloc_starter(size_t sz);
 1216 static void      free_starter(Void_t* mem);
 1217 static Void_t*   malloc_atfork(size_t sz);
 1218 static void      free_atfork(Void_t* mem);
 1219 #endif
 1220 #endif
 1221 
 1222 #else
 1223 
 1224 static void      chunk_free();
 1225 static mchunkptr chunk_alloc();
 1226 static mchunkptr chunk_realloc();
 1227 static mchunkptr chunk_align();
 1228 static int       main_trim();
 1229 #ifndef NO_THREADS
 1230 static int       heap_trim();
 1231 #endif
 1232 #if defined _LIBC || defined MALLOC_HOOKS
 1233 static Void_t*   malloc_check();
 1234 static void      free_check();
 1235 static Void_t*   realloc_check();
 1236 static Void_t*   memalign_check();
 1237 #ifndef NO_THREADS
 1238 static Void_t*   malloc_starter();
 1239 static void      free_starter();
 1240 static Void_t*   malloc_atfork();
 1241 static void      free_atfork();
 1242 #endif
 1243 #endif
 1244 
 1245 #endif
 1246 
 1247 
 1248 
 1249 /* sizes, alignments */
 1250 
 1251 #define SIZE_SZ                (sizeof(INTERNAL_SIZE_T))
 1252 #define MALLOC_ALIGNMENT       (SIZE_SZ + SIZE_SZ)
 1253 #define MALLOC_ALIGN_MASK      (MALLOC_ALIGNMENT - 1)
 1254 #define MINSIZE                (sizeof(struct malloc_chunk))
 1255 
 1256 /* conversion from malloc headers to user pointers, and back */
 1257 
 1258 #define chunk2mem(p)   ((Void_t*)((char*)(p) + 2*SIZE_SZ))
 1259 #define mem2chunk(mem) ((mchunkptr)((char*)(mem) - 2*SIZE_SZ))
 1260 
 1261 /* pad request bytes into a usable size */
 1262 
 1263 #define request2size(req) \
 1264  (((long)((req) + (SIZE_SZ + MALLOC_ALIGN_MASK)) < \
 1265   (long)(MINSIZE + MALLOC_ALIGN_MASK)) ? MINSIZE : \
 1266    (((req) + (SIZE_SZ + MALLOC_ALIGN_MASK)) & ~(MALLOC_ALIGN_MASK)))
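/* For instance (illustrative, assuming 4-byte INTERNAL_SIZE_T and pointers,
   so SIZE_SZ == 4, MALLOC_ALIGNMENT == 8 and MINSIZE == 16):

       request2size(0)  == 16        request2size(13) == 24
       request2size(12) == 16        request2size(21) == 32
*/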
 1267 
 1268 /* Check if m has acceptable alignment */
 1269 
 1270 #define aligned_OK(m)    (((unsigned long)((m)) & (MALLOC_ALIGN_MASK)) == 0)
 1271 
 1272 
 1273 
 1274 
 1275 /*
 1276   Physical chunk operations
 1277 */
 1278 
 1279 
 1280 /* size field is or'ed with PREV_INUSE when previous adjacent chunk in use */
 1281 
 1282 #define PREV_INUSE 0x1
 1283 
 1284 /* size field is or'ed with IS_MMAPPED if the chunk was obtained with mmap() */
 1285 
 1286 #define IS_MMAPPED 0x2
 1287 
 1288 /* Bits to mask off when extracting size */
 1289 
 1290 #define SIZE_BITS (PREV_INUSE|IS_MMAPPED)
 1291 
 1292 
 1293 /* Ptr to next physical malloc_chunk. */
 1294 
 1295 #define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->size & ~PREV_INUSE) ))
 1296 
 1297 /* Ptr to previous physical malloc_chunk */
 1298 
 1299 #define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_size) ))
 1300 
 1301 
 1302 /* Treat space at ptr + offset as a chunk */
 1303 
 1304 #define chunk_at_offset(p, s)  ((mchunkptr)(((char*)(p)) + (s)))
 1305 
 1306 
 1307 
 1308 
 1309 /*
 1310   Dealing with use bits
 1311 */
 1312 
 1313 /* extract p's inuse bit */
 1314 
 1315 #define inuse(p) \
 1316  ((((mchunkptr)(((char*)(p))+((p)->size & ~PREV_INUSE)))->size) & PREV_INUSE)
 1317 
 1318 /* extract inuse bit of previous chunk */
 1319 
 1320 #define prev_inuse(p)  ((p)->size & PREV_INUSE)
 1321 
 1322 /* check for mmap()'ed chunk */
 1323 
 1324 #define chunk_is_mmapped(p) ((p)->size & IS_MMAPPED)
 1325 
 1326 /* set/clear chunk as in use without otherwise disturbing */
 1327 
 1328 #define set_inuse(p) \
 1329  ((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size |= PREV_INUSE
 1330 
 1331 #define clear_inuse(p) \
 1332  ((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size &= ~(PREV_INUSE)
 1333 
 1334 /* check/set/clear inuse bits in known places */
 1335 
 1336 #define inuse_bit_at_offset(p, s)\
 1337  (((mchunkptr)(((char*)(p)) + (s)))->size & PREV_INUSE)
 1338 
 1339 #define set_inuse_bit_at_offset(p, s)\
 1340  (((mchunkptr)(((char*)(p)) + (s)))->size |= PREV_INUSE)
 1341 
 1342 #define clear_inuse_bit_at_offset(p, s)\
 1343  (((mchunkptr)(((char*)(p)) + (s)))->size &= ~(PREV_INUSE))
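
/* Illustrative sketch of the use-bit convention: a chunk's own in-use status
   is kept in the PREV_INUSE bit of the chunk that physically follows it
   (assumes p is a valid, non-mmapped chunk). */
#if 0
static void example_use_bits(mchunkptr p)
{
  INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;  /* distance to the next chunk */

  set_inuse(p);                        /* sets PREV_INUSE in the next chunk's header */
  assert(inuse(p));
  assert(inuse_bit_at_offset(p, sz));  /* the same bit, addressed by offset */

  clear_inuse(p);
  assert(!inuse(p));
}
#endif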
 1344 
 1345 
 1346 
 1347 
 1348 /*
 1349   Dealing with size fields
 1350 */
 1351 
 1352 /* Get size, ignoring use bits */
 1353 
 1354 #define chunksize(p)          ((p)->size & ~(SIZE_BITS))
 1355 
 1356 /* Set size at head, without disturbing its use bit */
 1357 
 1358 #define set_head_size(p, s)   ((p)->size = (((p)->size & PREV_INUSE) | (s)))
 1359 
 1360 /* Set size/use ignoring previous bits in header */
 1361 
 1362 #define set_head(p, s)        ((p)->size = (s))
 1363 
 1364 /* Set size at footer (only when chunk is not in use) */
 1365 
 1366 #define set_foot(p, s)   (((mchunkptr)((char*)(p) + (s)))->prev_size = (s))
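
/* Illustrative sketch of the boundary tags on a free chunk: head and foot
   carry the same size, so the chunk can be found from either end (assumes p
   is free and non-mmapped, sz is a properly aligned chunk size, and the
   physically preceding chunk is in use). */
#if 0
static void example_boundary_tags(mchunkptr p, INTERNAL_SIZE_T sz)
{
  set_head(p, sz | PREV_INUSE);  /* header: size plus "previous chunk in use"   */
  set_foot(p, sz);               /* footer: stored in the next chunk's prev_size */

  assert(chunksize(p) == sz);
  assert(prev_chunk(chunk_at_offset(p, sz)) == p);
}
#endif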
 1367 
 1368 
 1369 
 1370 
 1371 
 1372 /* access macros */
 1373 
 1374 #define bin_at(a, i)   ((mbinptr)((char*)&(((a)->av)[2*(i) + 2]) - 2*SIZE_SZ))
 1375 #define init_bin(a, i) ((a)->av[2*i+2] = (a)->av[2*i+3] = bin_at((a), i))
 1376 #define next_bin(b)    ((mbinptr)((char*)(b) + 2 * sizeof(mbinptr)))
 1377 #define prev_bin(b)    ((mbinptr)((char*)(b) - 2 * sizeof(mbinptr)))
 1378 
 1379 /*
 1380    The first 2 bins are never indexed. The corresponding av cells are instead
 1381    used for bookkeeping. This is not to save space, but to simplify
 1382    indexing, maintain locality, and avoid some initialization tests.
 1383 */
 1384 
 1385 #define binblocks(a)      (bin_at(a,0)->size)/* bitvector of nonempty blocks */
 1386 #define top(a)            (bin_at(a,0)->fd)  /* The topmost chunk */
 1387 #define last_remainder(a) (bin_at(a,1))      /* remainder from last split */
 1388 
 1389 /*
 1390    Because top initially points to its own bin with initial
 1391    zero size, thus forcing extension on the first malloc request,
 1392    we avoid having any special code in malloc to check whether
 1393    it even exists yet. But we still need to do so in malloc_extend_top.
 1394 */
 1395 
 1396 #define initial_top(a)    ((mchunkptr)bin_at(a, 0))
 1397 
 1398 
 1399 
 1400 /* field-extraction macros */
 1401 
 1402 #define first(b) ((b)->fd)
 1403 #define last(b)  ((b)->bk)
 1404 
 1405 /*
 1406   Indexing into bins
 1407 */
 1408 
 1409 #define bin_index(sz)                                                         \
 1410 (((((unsigned long)(sz)) >> 9) ==    0) ?       (((unsigned long)(sz)) >>  3):\
 1411  ((((unsigned long)(sz)) >> 9) <=    4) ?  56 + (((unsigned long)(sz)) >>  6):\
 1412  ((((unsigned long)(sz)) >> 9) <=   20) ?  91 + (((unsigned long)(sz)) >>  9):\
 1413  ((((unsigned long)(sz)) >> 9) <=   84) ? 110 + (((unsigned long)(sz)) >> 12):\
 1414  ((((unsigned long)(sz)) >> 9) <=  340) ? 119 + (((unsigned long)(sz)) >> 15):\
 1415  ((((unsigned long)(sz)) >> 9) <= 1364) ? 124 + (((unsigned long)(sz)) >> 18):\
 1416                                           126)
 1417 /*
 1418   bins for chunks < 512 are all spaced 8 bytes apart, and hold
 1419   identically sized chunks. This is exploited in malloc.
 1420 */
 1421 
 1422 #define MAX_SMALLBIN         63
 1423 #define MAX_SMALLBIN_SIZE   512
 1424 #define SMALLBIN_WIDTH        8
 1425 
 1426 #define smallbin_index(sz)  (((unsigned long)(sz)) >> 3)
 1427 
 1428 /*
 1429    Requests are `small' if both the corresponding and the next bin are small
 1430 */
 1431 
 1432 #define is_small_request(nb) ((nb) < MAX_SMALLBIN_SIZE - SMALLBIN_WIDTH)
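
/* Worked example of the bin spacing (illustrative; the indices follow
   directly from bin_index() and smallbin_index() above): */
#if 0
static void example_bin_index(void)
{
  assert(smallbin_index(24) == 3);      /* small bins: one bin per 8 bytes      */
  assert(bin_index(24)   ==  3);        /* identical result below 512 bytes     */
  assert(bin_index(512)  == 64);        /* 56 + 512/64                          */
  assert(bin_index(4096) == 99);        /* 91 + 4096/512                        */
  assert(bin_index(1UL << 22) == 126);  /* very large chunks share the last bin */
}
#endif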
 1433 
 1434 
 1435 
 1436 /*
 1437     To help compensate for the large number of bins, a one-level index
 1438     structure is used for bin-by-bin searching.  `binblocks' is a
 1439     one-word bitvector recording whether groups of BINBLOCKWIDTH bins
 1440     have any (possibly) non-empty bins, so they can be skipped over
 1441     all at once during traversals. The bits are NOT always
 1442     cleared as soon as all bins in a block are empty, but instead only
 1443     when all are noticed to be empty during traversal in malloc.
 1444 */
 1445 
 1446 #define BINBLOCKWIDTH     4   /* bins per block */
 1447 
 1448 /* bin<->block macros */
 1449 
 1450 #define idx2binblock(ix)      ((unsigned)1 << ((ix) / BINBLOCKWIDTH))
 1451 #define mark_binblock(a, ii)  (binblocks(a) |= idx2binblock(ii))
 1452 #define clear_binblock(a, ii) (binblocks(a) &= ~(idx2binblock(ii)))
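
/* Illustrative sketch: one bit in binblocks covers BINBLOCKWIDTH (4)
   consecutive bins, so marking any bin of a block sets the bit that a
   later traversal tests for the whole block (a is a hypothetical arena). */
#if 0
static void example_binblocks(arena *a)
{
  assert(idx2binblock(3)  == 1);           /* bins 0..3   -> bit 0  */
  assert(idx2binblock(64) == (1U << 16));  /* bins 64..67 -> bit 16 */

  mark_binblock(a, 64);
  assert(binblocks(a) & idx2binblock(65)); /* bin 65 lives in the same block */
  clear_binblock(a, 64);
}
#endif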
 1453 
 1454 
 1455 
 1456 
 1457 /* Static bookkeeping data */
 1458 
 1459 /* Helper macro to initialize bins */
 1460 #define IAV(i) bin_at(&main_arena, i), bin_at(&main_arena, i)
 1461 
 1462 static arena main_arena = {
 1463     {
 1464  0, 0,
 1465  IAV(0),   IAV(1),   IAV(2),   IAV(3),   IAV(4),   IAV(5),   IAV(6),   IAV(7),
 1466  IAV(8),   IAV(9),   IAV(10),  IAV(11),  IAV(12),  IAV(13),  IAV(14),  IAV(15),
 1467  IAV(16),  IAV(17),  IAV(18),  IAV(19),  IAV(20),  IAV(21),  IAV(22),  IAV(23),
 1468  IAV(24),  IAV(25),  IAV(26),  IAV(27),  IAV(28),  IAV(29),  IAV(30),  IAV(31),
 1469  IAV(32),  IAV(33),  IAV(34),  IAV(35),  IAV(36),  IAV(37),  IAV(38),  IAV(39),
 1470  IAV(40),  IAV(41),  IAV(42),  IAV(43),  IAV(44),  IAV(45),  IAV(46),  IAV(47),
 1471  IAV(48),  IAV(49),  IAV(50),  IAV(51),  IAV(52),  IAV(53),  IAV(54),  IAV(55),
 1472  IAV(56),  IAV(57),  IAV(58),  IAV(59),  IAV(60),  IAV(61),  IAV(62),  IAV(63),
 1473  IAV(64),  IAV(65),  IAV(66),  IAV(67),  IAV(68),  IAV(69),  IAV(70),  IAV(71),
 1474  IAV(72),  IAV(73),  IAV(74),  IAV(75),  IAV(76),  IAV(77),  IAV(78),  IAV(79),
 1475  IAV(80),  IAV(81),  IAV(82),  IAV(83),  IAV(84),  IAV(85),  IAV(86),  IAV(87),
 1476  IAV(88),  IAV(89),  IAV(90),  IAV(91),  IAV(92),  IAV(93),  IAV(94),  IAV(95),
 1477  IAV(96),  IAV(97),  IAV(98),  IAV(99),  IAV(100), IAV(101), IAV(102), IAV(103),
 1478  IAV(104), IAV(105), IAV(106), IAV(107), IAV(108), IAV(109), IAV(110), IAV(111),
 1479  IAV(112), IAV(113), IAV(114), IAV(115), IAV(116), IAV(117), IAV(118), IAV(119),
 1480  IAV(120), IAV(121), IAV(122), IAV(123), IAV(124), IAV(125), IAV(126), IAV(127)
 1481     },
 1482     &main_arena, /* next */
 1483     0, /* size */
 1484 #if THREAD_STATS
 1485     0, 0, 0, /* stat_lock_direct, stat_lock_loop, stat_lock_wait */
 1486 #endif
 1487     MUTEX_INITIALIZER /* mutex */
 1488 };
 1489 
 1490 #undef IAV
 1491 
 1492 /* Thread specific data */
 1493 
 1494 #ifndef NO_THREADS
 1495 static tsd_key_t arena_key;
 1496 static mutex_t list_lock = MUTEX_INITIALIZER;
 1497 #endif
 1498 
 1499 #if THREAD_STATS
 1500 static int stat_n_heaps = 0;
 1501 #define THREAD_STAT(x) x
 1502 #else
 1503 #define THREAD_STAT(x) do ; while(0)
 1504 #endif
 1505 
 1506 /* variables holding tunable values */
 1507 
 1508 static unsigned long trim_threshold   = DEFAULT_TRIM_THRESHOLD;
 1509 static unsigned long top_pad          = DEFAULT_TOP_PAD;
 1510 static unsigned int  n_mmaps_max      = DEFAULT_MMAP_MAX;
 1511 static unsigned long mmap_threshold   = DEFAULT_MMAP_THRESHOLD;
 1512 static int           check_action     = DEFAULT_CHECK_ACTION;
 1513 
 1514 /* The first value returned from sbrk */
 1515 static char* sbrk_base = (char*)(-1);
 1516 
 1517 /* The maximum memory obtained from system via sbrk */
 1518 static unsigned long max_sbrked_mem = 0;
 1519 
 1520 /* The maximum via either sbrk or mmap (too difficult to track with threads) */
 1521 #ifdef NO_THREADS
 1522 static unsigned long max_total_mem = 0;
 1523 #endif
 1524 
 1525 /* The total memory obtained from system via sbrk */
 1526 #define sbrked_mem (main_arena.size)
 1527 
 1528 /* Tracking mmaps */
 1529 
 1530 static unsigned int n_mmaps = 0;
 1531 static unsigned int max_n_mmaps = 0;
 1532 static unsigned long mmapped_mem = 0;
 1533 static unsigned long max_mmapped_mem = 0;
 1534 
 1535 
 1536 
 1537 #ifndef _LIBC
 1538 #define weak_variable
 1539 #else
 1540 /* In GNU libc we want the hook variables to be weak definitions to
 1541    avoid a problem with Emacs.  */
 1542 #define weak_variable weak_function
 1543 #endif
 1544 
 1545 /* Already initialized? */
 1546 int __malloc_initialized = -1;
 1547 
 1548 
 1549 #ifndef NO_THREADS
 1550 
 1551 /* The following two functions are registered via thread_atfork() to
 1552    make sure that the mutexes remain in a consistent state in the
 1553    fork()ed version of a thread.  Also adapt the malloc and free hooks
 1554    temporarily, because the `atfork' handler mechanism may use
 1555    malloc/free internally (e.g. in LinuxThreads). */
 1556 
 1557 #if defined _LIBC || defined MALLOC_HOOKS
 1558 static __malloc_ptr_t (*save_malloc_hook) __MALLOC_P ((size_t __size));
 1559 static void           (*save_free_hook) __MALLOC_P ((__malloc_ptr_t __ptr));
 1560 static Void_t*        save_arena;
 1561 #endif
 1562 
 1563 static void
 1564 ptmalloc_lock_all __MALLOC_P((void))
 1565 {
 1566   arena *ar_ptr;
 1567 
 1568   (void)mutex_lock(&list_lock);
 1569   for(ar_ptr = &main_arena;;) {
 1570     (void)mutex_lock(&ar_ptr->mutex);
 1571     ar_ptr = ar_ptr->next;
 1572     if(ar_ptr == &main_arena) break;
 1573   }
 1574 #if defined _LIBC || defined MALLOC_HOOKS
 1575   save_malloc_hook = __malloc_hook;
 1576   save_free_hook = __free_hook;
 1577   __malloc_hook = malloc_atfork;
 1578   __free_hook = free_atfork;
 1579   /* Only the current thread may perform malloc/free calls now. */
 1580   tsd_getspecific(arena_key, save_arena);
 1581   tsd_setspecific(arena_key, (Void_t*)0);
 1582 #endif
 1583 }
 1584 
 1585 static void
 1586 ptmalloc_unlock_all __MALLOC_P((void))
 1587 {
 1588   arena *ar_ptr;
 1589 
 1590 #if defined _LIBC || defined MALLOC_HOOKS
 1591   tsd_setspecific(arena_key, save_arena);
 1592   __malloc_hook = save_malloc_hook;
 1593   __free_hook = save_free_hook;
 1594 #endif
 1595   for(ar_ptr = &main_arena;;) {
 1596     (void)mutex_unlock(&ar_ptr->mutex);
 1597     ar_ptr = ar_ptr->next;
 1598     if(ar_ptr == &main_arena) break;
 1599   }
 1600   (void)mutex_unlock(&list_lock);
 1601 }
 1602 
 1603 #ifdef __linux__
 1604 
 1605 /* In LinuxThreads, unlocking a mutex in the child process after a
 1606    fork() is currently unsafe, whereas re-initializing it is safe and
 1607    does not leak resources.  Therefore, a special atfork handler is
 1608    installed for the child. */
 1609 
 1610 static void
 1611 ptmalloc_unlock_all2 __MALLOC_P((void))
 1612 {
 1613   arena *ar_ptr;
 1614 
 1615 #if defined _LIBC || defined MALLOC_HOOKS
 1616   tsd_setspecific(arena_key, save_arena);
 1617   __malloc_hook = save_malloc_hook;
 1618   __free_hook = save_free_hook;
 1619 #endif
 1620   for(ar_ptr = &main_arena;;) {
 1621     (void)mutex_init(&ar_ptr->mutex);
 1622     ar_ptr = ar_ptr->next;
 1623     if(ar_ptr == &main_arena) break;
 1624   }
 1625   (void)mutex_init(&list_lock);
 1626 }
 1627 
 1628 #else
 1629 
 1630 #define ptmalloc_unlock_all2 ptmalloc_unlock_all
 1631 
 1632 #endif
 1633 
 1634 #endif /* !defined NO_THREADS */
 1635 
 1636 /* Initialization routine. */
 1637 #if defined(_LIBC)
 1638 #if 0
 1639 static void ptmalloc_init __MALLOC_P ((void)) __attribute__ ((constructor));
 1640 #endif
 1641 
 1642 static void
 1643 ptmalloc_init __MALLOC_P((void))
 1644 #else
 1645 void
 1646 ptmalloc_init __MALLOC_P((void))
 1647 #endif
 1648 {
 1649 #if defined _LIBC || defined MALLOC_HOOKS
 1650 # if __STD_C
 1651   const char* s;
 1652 # else
 1653   char* s;
 1654 # endif
 1655 #endif
 1656 
 1657   if(__malloc_initialized >= 0) return;
 1658   __malloc_initialized = 0;
 1659 #ifndef NO_THREADS
 1660 #if defined _LIBC || defined MALLOC_HOOKS
 1661   /* With some thread implementations, creating thread-specific data
 1662      or initializing a mutex may call malloc() itself.  Provide a
 1663      simple starter version (realloc() won't work). */
 1664   save_malloc_hook = __malloc_hook;
 1665   save_free_hook = __free_hook;
 1666   __malloc_hook = malloc_starter;
 1667   __free_hook = free_starter;
 1668 #endif
 1669 #ifdef _LIBC
 1670   /* Initialize the pthreads interface. */
 1671   if (__pthread_initialize != NULL)
 1672     __pthread_initialize();
 1673 #endif
 1674   mutex_init(&main_arena.mutex);
 1675   mutex_init(&list_lock);
 1676   tsd_key_create(&arena_key, NULL);
 1677   tsd_setspecific(arena_key, (Void_t *)&main_arena);
 1678   thread_atfork(ptmalloc_lock_all, ptmalloc_unlock_all, ptmalloc_unlock_all2);
 1679 #endif /* !defined NO_THREADS */
 1680 #if defined _LIBC || defined MALLOC_HOOKS
 1681   if((s = getenv("MALLOC_TRIM_THRESHOLD_")))
 1682     mALLOPt(M_TRIM_THRESHOLD, atoi(s));
 1683   if((s = getenv("MALLOC_TOP_PAD_")))
 1684     mALLOPt(M_TOP_PAD, atoi(s));
 1685   if((s = getenv("MALLOC_MMAP_THRESHOLD_")))
 1686     mALLOPt(M_MMAP_THRESHOLD, atoi(s));
 1687   if((s = getenv("MALLOC_MMAP_MAX_")))
 1688     mALLOPt(M_MMAP_MAX, atoi(s));
 1689   s = getenv("MALLOC_CHECK_");
 1690 #ifndef NO_THREADS
 1691   __malloc_hook = save_malloc_hook;
 1692   __free_hook = save_free_hook;
 1693 #endif
 1694   if(s) {
 1695     if(s[0]) mALLOPt(M_CHECK_ACTION, (int)(s[0] - '0'));
 1696     __malloc_check_init();
 1697   }
 1698   if(__malloc_initialize_hook != NULL)
 1699     (*__malloc_initialize_hook)();
 1700 #endif
 1701   __malloc_initialized = 1;
 1702 }
 1703 
 1704 /* There are platforms (e.g. Hurd) with a link-time hook mechanism. */
 1705 #ifdef thread_atfork_static
 1706 thread_atfork_static(ptmalloc_lock_all, ptmalloc_unlock_all, \
 1707                      ptmalloc_unlock_all2)
 1708 #endif
 1709 
 1710 #if defined _LIBC || defined MALLOC_HOOKS
 1711 
 1712 /* Hooks for debugging versions.  The initial hooks just call the
 1713    initialization routine, then do the normal work. */
 1714 
 1715 static Void_t*
 1716 #if __STD_C
 1717 malloc_hook_ini(size_t sz)
 1718 #else
 1719 malloc_hook_ini(sz) size_t sz;
 1720 #endif
 1721 {
 1722   __malloc_hook = NULL;
 1723   ptmalloc_init();
 1724   return mALLOc(sz);
 1725 }
 1726 
 1727 static Void_t*
 1728 #if __STD_C
 1729 realloc_hook_ini(Void_t* ptr, size_t sz)
 1730 #else
 1731 realloc_hook_ini(ptr, sz) Void_t* ptr; size_t sz;
 1732 #endif
 1733 {
 1734   __malloc_hook = NULL;
 1735   __realloc_hook = NULL;
 1736   ptmalloc_init();
 1737   return rEALLOc(ptr, sz);
 1738 }
 1739 
 1740 static Void_t*
 1741 #if __STD_C
 1742 memalign_hook_ini(size_t sz, size_t alignment)
 1743 #else
 1744 memalign_hook_ini(sz, alignment) size_t sz; size_t alignment;
 1745 #endif
 1746 {
 1747   __memalign_hook = NULL;
 1748   ptmalloc_init();
 1749   return mEMALIGn(sz, alignment);
 1750 }
 1751 
 1752 void weak_variable (*__malloc_initialize_hook) __MALLOC_P ((void)) = NULL;
 1753 void weak_variable (*__free_hook) __MALLOC_P ((__malloc_ptr_t __ptr)) = NULL;
 1754 __malloc_ptr_t weak_variable (*__malloc_hook)
 1755  __MALLOC_P ((size_t __size)) = malloc_hook_ini;
 1756 __malloc_ptr_t weak_variable (*__realloc_hook)
 1757  __MALLOC_P ((__malloc_ptr_t __ptr, size_t __size)) = realloc_hook_ini;
 1758 __malloc_ptr_t weak_variable (*__memalign_hook)
 1759  __MALLOC_P ((size_t __size, size_t __alignment)) = memalign_hook_ini;
 1760 void weak_variable (*__after_morecore_hook) __MALLOC_P ((void)) = NULL;
 1761 
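/* Illustrative sketch of how an application can interpose on allocation via
   these hooks when they are compiled in (my_malloc_hook is a hypothetical
   user-supplied function, not part of this allocator): */
#if 0
static Void_t*
my_malloc_hook(size_t sz)
{
  Void_t* result;

  __malloc_hook = NULL;              /* avoid recursing through the hook */
  result = mALLOc(sz);
  fprintf(stderr, "malloc(%lu) returned %p\n", (unsigned long)sz, result);
  __malloc_hook = my_malloc_hook;    /* re-install for the next call */
  return result;
}
#endif
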
 1762 /* Whether we are using malloc checking.  */
 1763 static int using_malloc_checking;
 1764 
 1765 /* A flag that is set by malloc_set_state, to signal that malloc checking
 1766    must not be enabled on the request from the user (via the MALLOC_CHECK_
 1767    environment variable).  It is reset by __malloc_check_init to tell
 1768    malloc_set_state that the user has requested malloc checking.
 1769 
 1770    The purpose of this flag is to make sure that malloc checking is not
 1771    enabled when the heap to be restored was constructed without malloc
 1772    checking, and thus does not contain the required magic bytes.
 1773    Otherwise the heap would be corrupted by calls to free and realloc.  If
 1774    it turns out that the heap was created with malloc checking and the
 1775    user has requested it, malloc_set_state just calls __malloc_check_init
 1776    again to enable it.  On the other hand, reusing such a heap without
 1777    further malloc checking is safe.  */
 1778 static int disallow_malloc_check;
 1779 
 1780 /* Activate a standard set of debugging hooks. */
 1781 void
 1782 __malloc_check_init()
 1783 {
 1784   if (disallow_malloc_check) {
 1785     disallow_malloc_check = 0;
 1786     return;
 1787   }
 1788   using_malloc_checking = 1;
 1789   __malloc_hook = malloc_check;
 1790   __free_hook = free_check;
 1791   __realloc_hook = realloc_check;
 1792   __memalign_hook = memalign_check;
 1793   if(check_action == 1)
 1794     fprintf(stderr, "malloc: using debugging hooks\n");
 1795 }
 1796 
 1797 #endif
 1798 
 1799 
 1800 
 1801 
 1802 
 1803 /* Routines dealing with mmap(). */
 1804 
 1805 #if HAVE_MMAP
 1806 
 1807 #ifndef MAP_ANONYMOUS
 1808 
 1809 static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
 1810 
 1811 #define MMAP(addr, size, prot, flags) ((dev_zero_fd < 0) ? \
 1812  (dev_zero_fd = open("/dev/zero", O_RDWR), \
 1813   mmap((addr), (size), (prot), (flags), dev_zero_fd, 0)) : \
 1814    mmap((addr), (size), (prot), (flags), dev_zero_fd, 0))
 1815 
 1816 #else
 1817 
 1818 #define MMAP(addr, size, prot, flags) \
 1819  (mmap((addr), (size), (prot), (flags)|MAP_ANONYMOUS, -1, 0))
 1820 
 1821 #endif
 1822 
 1823 #if defined __GNUC__ && __GNUC__ >= 2
 1824 /* This function is only called from one place, inline it.  */
 1825 inline
 1826 #endif
 1827 static mchunkptr
 1828 internal_function
 1829 #if __STD_C
 1830 mmap_chunk(size_t size)
 1831 #else
 1832 mmap_chunk(size) size_t size;
 1833 #endif
 1834 {
 1835   size_t page_mask = malloc_getpagesize - 1;
 1836   mchunkptr p;
 1837 
 1838   if(n_mmaps >= n_mmaps_max) return 0; /* too many regions */
 1839 
 1840   /* For mmapped chunks, the overhead is one SIZE_SZ unit larger, because
 1841    * there is no following chunk whose prev_size field could be used.
 1842    */
 1843   size = (size + SIZE_SZ + page_mask) & ~page_mask;
 1844 
 1845   p = (mchunkptr)MMAP(0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE);
 1846   if(p == (mchunkptr) MAP_FAILED) return 0;
 1847 
 1848   n_mmaps++;
 1849   if (n_mmaps > max_n_mmaps) max_n_mmaps = n_mmaps;
 1850 
 1851   /* We demand that the address eight bytes into a page be 8-byte aligned. */
 1852   assert(aligned_OK(chunk2mem(p)));
 1853 
 1854   /* The offset to the start of the mmapped region is stored
 1855    * in the prev_size field of the chunk; normally it is zero,
 1856    * but that can be changed in memalign().
 1857    */
 1858   p->prev_size = 0;
 1859   set_head(p, size|IS_MMAPPED);
 1860 
 1861   mmapped_mem += size;
 1862   if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem)
 1863     max_mmapped_mem = mmapped_mem;
 1864 #ifdef NO_THREADS
 1865   if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
 1866     max_total_mem = mmapped_mem + sbrked_mem;
 1867 #endif
 1868   return p;
 1869 }
 1870 
 1871 static void
 1872 internal_function
 1873 #if __STD_C
 1874 munmap_chunk(mchunkptr p)
 1875 #else
 1876 munmap_chunk(p) mchunkptr p;
 1877 #endif
 1878 {
 1879   INTERNAL_SIZE_T size = chunksize(p);
 1880   int ret;
 1881 
 1882   assert (chunk_is_mmapped(p));
 1883   assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
 1884   assert((n_mmaps > 0));
 1885   assert(((p->prev_size + size) & (malloc_getpagesize-1)) == 0);
 1886 
 1887   n_mmaps--;
 1888   mmapped_mem -= (size + p->prev_size);
 1889 
 1890   ret = munmap((char *)p - p->prev_size, size + p->prev_size);
 1891 
 1892   /* munmap returns non-zero on failure */
 1893   assert(ret == 0);
 1894 }
 1895 
 1896 #if HAVE_MREMAP
 1897 
 1898 static mchunkptr
 1899 internal_function
 1900 #if __STD_C
 1901 mremap_chunk(mchunkptr p, size_t new_size)
 1902 #else
 1903 mremap_chunk(p, new_size) mchunkptr p; size_t new_size;
 1904 #endif
 1905 {
 1906   size_t page_mask = malloc_getpagesize - 1;
 1907   INTERNAL_SIZE_T offset = p->prev_size;
 1908   INTERNAL_SIZE_T size = chunksize(p);
 1909   char *cp;
 1910 
 1911   assert (chunk_is_mmapped(p));
 1912   assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
 1913   assert((n_mmaps > 0));
 1914   assert(((size + offset) & (malloc_getpagesize-1)) == 0);
 1915 
 1916   /* Note the extra SIZE_SZ overhead as in mmap_chunk(). */
 1917   new_size = (new_size + offset + SIZE_SZ + page_mask) & ~page_mask;
 1918 
 1919   cp = (char *)mremap((char *)p - offset, size + offset, new_size,
 1920                       MREMAP_MAYMOVE);
 1921 
 1922   if (cp == MAP_FAILED) return 0;
 1923 
 1924   p = (mchunkptr)(cp + offset);
 1925 
 1926   assert(aligned_OK(chunk2mem(p)));
 1927 
 1928   assert((p->prev_size == offset));
 1929   set_head(p, (new_size - offset)|IS_MMAPPED);
 1930 
 1931   mmapped_mem -= size + offset;
 1932   mmapped_mem += new_size;
 1933   if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem)
 1934     max_mmapped_mem = mmapped_mem;
 1935 #ifdef NO_THREADS
 1936   if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
 1937     max_total_mem = mmapped_mem + sbrked_mem;
 1938 #endif
 1939   return p;
 1940 }
 1941 
 1942 #endif /* HAVE_MREMAP */
 1943 
 1944 #endif /* HAVE_MMAP */
 1945 
 1946 
 1947 
 1948 /* Managing heaps and arenas (for concurrent threads) */
 1949 
 1950 #ifndef NO_THREADS
 1951 
 1952 /* Create a new heap.  size is automatically rounded up to a multiple
 1953    of the page size. */
 1954 
 1955 static heap_info *
 1956 internal_function
 1957 #if __STD_C
 1958 new_heap(size_t size)
 1959 #else
 1960 new_heap(size) size_t size;
 1961 #endif
 1962 {
 1963   size_t page_mask = malloc_getpagesize - 1;
 1964   char *p1, *p2;
 1965   unsigned long ul;
 1966   heap_info *h;
 1967 
 1968   if(size+top_pad < HEAP_MIN_SIZE)
 1969     size = HEAP_MIN_SIZE;
 1970   else if(size+top_pad <= HEAP_MAX_SIZE)
 1971     size += top_pad;
 1972   else if(size > HEAP_MAX_SIZE)
 1973     return 0;
 1974   else
 1975     size = HEAP_MAX_SIZE;
 1976   size = (size + page_mask) & ~page_mask;
 1977 
 1978   /* A memory region aligned to a multiple of HEAP_MAX_SIZE is needed.
 1979      No swap space needs to be reserved for the following large
 1980      mapping (on Linux, this is the case for all non-writable mappings
 1981      anyway). */
 1982   p1 = (char *)MMAP(0, HEAP_MAX_SIZE<<1, PROT_NONE, MAP_PRIVATE|MAP_NORESERVE);
 1983   if(p1 == MAP_FAILED)
 1984     return 0;
 1985   p2 = (char *)(((unsigned long)p1 + HEAP_MAX_SIZE) & ~(HEAP_MAX_SIZE-1));
 1986   ul = p2 - p1;
 1987   munmap(p1, ul);
 1988   munmap(p2 + HEAP_MAX_SIZE, HEAP_MAX_SIZE - ul);
 1989   if(mprotect(p2, size, PROT_READ|PROT_WRITE) != 0) {
 1990     munmap(p2, HEAP_MAX_SIZE);
 1991     return 0;
 1992   }
 1993   h = (heap_info *)p2;
 1994   h->size = size;
 1995   THREAD_STAT(stat_n_heaps++);
 1996   return h;
 1997 }
 1998 
 1999 /* Grow or shrink a heap by diff bytes.  If diff is positive, it is
 2000    automatically rounded up to a multiple of the page size. */
 2001 
 2002 static int
 2003 #if __STD_C
 2004 grow_heap(heap_info *h, long diff)
 2005 #else
 2006 grow_heap(h, diff) heap_info *h; long diff;
 2007 #endif
 2008 {
 2009   size_t page_mask = malloc_getpagesize - 1;
 2010   long new_size;
 2011 
 2012   if(diff >= 0) {
 2013     diff = (diff + page_mask) & ~page_mask;
 2014     new_size = (long)h->size + diff;
 2015     if(new_size > HEAP_MAX_SIZE)
 2016       return -1;
 2017     if(mprotect((char *)h + h->size, diff, PROT_READ|PROT_WRITE) != 0)
 2018       return -2;
 2019   } else {
 2020     new_size = (long)h->size + diff;
 2021     if(new_size < (long)sizeof(*h))
 2022       return -1;
 2023     /* Try to re-map the extra heap space freshly to save memory, and
 2024        make it inaccessible. */
 2025     if((char *)MMAP((char *)h + new_size, -diff, PROT_NONE,
 2026                     MAP_PRIVATE|MAP_FIXED) == (char *) MAP_FAILED)
 2027       return -2;
 2028   }
 2029   h->size = new_size;
 2030   return 0;
 2031 }
 2032 
 2033 /* Delete a heap. */
 2034 
 2035 #define delete_heap(heap) munmap((char*)(heap), HEAP_MAX_SIZE)
 2036 
 2037 /* arena_get() acquires an arena and locks the corresponding mutex.
 2038    First, try the one last locked successfully by this thread.  (This
 2039    is the common case and handled with a macro for speed.)  Then, loop
 2040    once over the circularly linked list of arenas.  If no arena is
 2041    readily available, create a new one. */
 2042 
 2043 #define arena_get(ptr, size) do { \
 2044   Void_t *vptr = NULL; \
 2045   ptr = (arena *)tsd_getspecific(arena_key, vptr); \
 2046   if(ptr && !mutex_trylock(&ptr->mutex)) { \
 2047     THREAD_STAT(++(ptr->stat_lock_direct)); \
 2048   } else \
 2049     ptr = arena_get2(ptr, (size)); \
 2050 } while(0)
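
/* Illustrative sketch of the contract: after a successful arena_get() the
   calling thread holds ar_ptr->mutex and must release it when done
   (compare mALLOc() and fREe() below). */
#if 0
static void example_arena_get(size_t nb)
{
  arena *ar_ptr;

  arena_get(ar_ptr, nb);
  if(ar_ptr) {
    /* ... allocate from ar_ptr while holding its mutex ... */
    (void)mutex_unlock(&ar_ptr->mutex);
  }
}
#endif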
 2051 
 2052 static arena *
 2053 internal_function
 2054 #if __STD_C
 2055 arena_get2(arena *a_tsd, size_t size)
 2056 #else
 2057 arena_get2(a_tsd, size) arena *a_tsd; size_t size;
 2058 #endif
 2059 {
 2060   arena *a;
 2061   heap_info *h;
 2062   char *ptr;
 2063   int i;
 2064   unsigned long misalign;
 2065 
 2066   if(!a_tsd)
 2067     a = a_tsd = &main_arena;
 2068   else {
 2069     a = a_tsd->next;
 2070     if(!a) {
 2071       /* This can only happen while initializing the new arena. */
 2072       (void)mutex_lock(&main_arena.mutex);
 2073       THREAD_STAT(++(main_arena.stat_lock_wait));
 2074       return &main_arena;
 2075     }
 2076   }
 2077 
 2078   /* Check the global, circularly linked list for available arenas. */
 2079  repeat:
 2080   do {
 2081     if(!mutex_trylock(&a->mutex)) {
 2082       THREAD_STAT(++(a->stat_lock_loop));
 2083       tsd_setspecific(arena_key, (Void_t *)a);
 2084       return a;
 2085     }
 2086     a = a->next;
 2087   } while(a != a_tsd);
 2088 
 2089   /* If not even the list_lock can be obtained, try again.  This can
 2090      happen during `atfork', or for example on systems where thread
 2091      creation makes it temporarily impossible to obtain _any_
 2092      locks. */
 2093   if(mutex_trylock(&list_lock)) {
 2094     a = a_tsd;
 2095     goto repeat;
 2096   }
 2097   (void)mutex_unlock(&list_lock);
 2098 
 2099   /* Nothing immediately available, so generate a new arena. */
 2100   h = new_heap(size + (sizeof(*h) + sizeof(*a) + MALLOC_ALIGNMENT));
 2101   if(!h)
 2102     return 0;
 2103   a = h->ar_ptr = (arena *)(h+1);
 2104   for(i=0; i<NAV; i++)
 2105     init_bin(a, i);
 2106   a->next = NULL;
 2107   a->size = h->size;
 2108   tsd_setspecific(arena_key, (Void_t *)a);
 2109   mutex_init(&a->mutex);
 2110   i = mutex_lock(&a->mutex); /* remember result */
 2111 
 2112   /* Set up the top chunk, with proper alignment. */
 2113   ptr = (char *)(a + 1);
 2114   misalign = (unsigned long)chunk2mem(ptr) & MALLOC_ALIGN_MASK;
 2115   if (misalign > 0)
 2116     ptr += MALLOC_ALIGNMENT - misalign;
 2117   top(a) = (mchunkptr)ptr;
 2118   set_head(top(a), (((char*)h + h->size) - ptr) | PREV_INUSE);
 2119 
 2120   /* Add the new arena to the list. */
 2121   (void)mutex_lock(&list_lock);
 2122   a->next = main_arena.next;
 2123   main_arena.next = a;
 2124   (void)mutex_unlock(&list_lock);
 2125 
 2126   if(i) /* locking failed; keep arena for further attempts later */
 2127     return 0;
 2128 
 2129   THREAD_STAT(++(a->stat_lock_loop));
 2130   return a;
 2131 }
 2132 
 2133 /* find the heap and corresponding arena for a given ptr */
 2134 
 2135 #define heap_for_ptr(ptr) \
 2136  ((heap_info *)((unsigned long)(ptr) & ~(HEAP_MAX_SIZE-1)))
 2137 #define arena_for_ptr(ptr) \
 2138  (((mchunkptr)(ptr) < top(&main_arena) && (char *)(ptr) >= sbrk_base) ? \
 2139   &main_arena : heap_for_ptr(ptr)->ar_ptr)
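
/* Illustrative sketch: heap_for_ptr() works because new_heap() above aligns
   every non-main heap to a multiple of HEAP_MAX_SIZE and keeps it at most
   HEAP_MAX_SIZE long, so masking the low bits of any address inside the
   heap yields its heap_info header. */
#if 0
static void example_heap_for_ptr(heap_info *h)
{
  char *addr = (char *)h + h->size - 1;   /* any address within the heap */

  assert(heap_for_ptr(addr) == h);
  assert(heap_for_ptr(addr)->ar_ptr == h->ar_ptr);
}
#endif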
 2140 
 2141 #else /* defined(NO_THREADS) */
 2142 
 2143 /* Without concurrent threads, there is only one arena. */
 2144 
 2145 #define arena_get(ptr, sz) (ptr = &main_arena)
 2146 #define arena_for_ptr(ptr) (&main_arena)
 2147 
 2148 #endif /* !defined(NO_THREADS) */
 2149 
 2150 
 2151 
 2152 /*
 2153   Debugging support
 2154 */
 2155 
 2156 #if MALLOC_DEBUG
 2157 
 2158 
 2159 /*
 2160   These routines make a number of assertions about the states
 2161   of data structures that should be true at all times. If any
 2162   are not true, it's very likely that a user program has somehow
 2163   trashed memory. (It's also possible that there is a coding error
 2164   in malloc. In which case, please report it!)
 2165   in malloc, in which case please report it!)
 2166 
 2167 #if __STD_C
 2168 static void do_check_chunk(arena *ar_ptr, mchunkptr p)
 2169 #else
 2170 static void do_check_chunk(ar_ptr, p) arena *ar_ptr; mchunkptr p;
 2171 #endif
 2172 {
 2173   INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
 2174 
 2175   /* No checkable chunk is mmapped */
 2176   assert(!chunk_is_mmapped(p));
 2177 
 2178 #ifndef NO_THREADS
 2179   if(ar_ptr != &main_arena) {
 2180     heap_info *heap = heap_for_ptr(p);
 2181     assert(heap->ar_ptr == ar_ptr);
 2182     if(p != top(ar_ptr))
 2183       assert((char *)p + sz <= (char *)heap + heap->size);
 2184     else
 2185       assert((char *)p + sz == (char *)heap + heap->size);
 2186     return;
 2187   }
 2188 #endif
 2189 
 2190   /* Check for legal address ... */
 2191   assert((char*)p >= sbrk_base);
 2192   if (p != top(ar_ptr))
 2193     assert((char*)p + sz <= (char*)top(ar_ptr));
 2194   else
 2195     assert((char*)p + sz <= sbrk_base + sbrked_mem);
 2196 
 2197 }
 2198 
 2199 
 2200 #if __STD_C
 2201 static void do_check_free_chunk(arena *ar_ptr, mchunkptr p)
 2202 #else
 2203 static void do_check_free_chunk(ar_ptr, p) arena *ar_ptr; mchunkptr p;
 2204 #endif
 2205 {
 2206   INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
 2207   mchunkptr next = chunk_at_offset(p, sz);
 2208 
 2209   do_check_chunk(ar_ptr, p);
 2210 
 2211   /* Check whether it claims to be free ... */
 2212   assert(!inuse(p));
 2213 
 2214   /* Must have OK size and fields */
 2215   assert((long)sz >= (long)MINSIZE);
 2216   assert((sz & MALLOC_ALIGN_MASK) == 0);
 2217   assert(aligned_OK(chunk2mem(p)));
 2218   /* ... matching footer field */
 2219   assert(next->prev_size == sz);
 2220   /* ... and is fully consolidated */
 2221   assert(prev_inuse(p));
 2222   assert (next == top(ar_ptr) || inuse(next));
 2223 
 2224   /* ... and has minimally sane links */
 2225   assert(p->fd->bk == p);
 2226   assert(p->bk->fd == p);
 2227 }
 2228 
 2229 #if __STD_C
 2230 static void do_check_inuse_chunk(arena *ar_ptr, mchunkptr p)
 2231 #else
 2232 static void do_check_inuse_chunk(ar_ptr, p) arena *ar_ptr; mchunkptr p;
 2233 #endif
 2234 {
 2235   mchunkptr next = next_chunk(p);
 2236   do_check_chunk(ar_ptr, p);
 2237 
 2238   /* Check whether it claims to be in use ... */
 2239   assert(inuse(p));
 2240 
 2241   /* ... whether its size is OK (it might be a fencepost) ... */
 2242   assert(chunksize(p) >= MINSIZE || next->size == (0|PREV_INUSE));
 2243 
 2244   /* ... and is surrounded by OK chunks.
 2245     Since more things can be checked with free chunks than inuse ones,
 2246     if an inuse chunk borders them and debugging is on, it's worth checking them.
 2247   */
 2248   if (!prev_inuse(p))
 2249   {
 2250     mchunkptr prv = prev_chunk(p);
 2251     assert(next_chunk(prv) == p);
 2252     do_check_free_chunk(ar_ptr, prv);
 2253   }
 2254   if (next == top(ar_ptr))
 2255   {
 2256     assert(prev_inuse(next));
 2257     assert(chunksize(next) >= MINSIZE);
 2258   }
 2259   else if (!inuse(next))
 2260     do_check_free_chunk(ar_ptr, next);
 2261 
 2262 }
 2263 
 2264 #if __STD_C
 2265 static void do_check_malloced_chunk(arena *ar_ptr,
 2266                                     mchunkptr p, INTERNAL_SIZE_T s)
 2267 #else
 2268 static void do_check_malloced_chunk(ar_ptr, p, s)
 2269 arena *ar_ptr; mchunkptr p; INTERNAL_SIZE_T s;
 2270 #endif
 2271 {
 2272   INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
 2273   long room = sz - s;
 2274 
 2275   do_check_inuse_chunk(ar_ptr, p);
 2276 
 2277   /* Legal size ... */
 2278   assert((long)sz >= (long)MINSIZE);
 2279   assert((sz & MALLOC_ALIGN_MASK) == 0);
 2280   assert(room >= 0);
 2281   assert(room < (long)MINSIZE);
 2282 
 2283   /* ... and alignment */
 2284   assert(aligned_OK(chunk2mem(p)));
 2285 
 2286 
 2287   /* ... and was allocated at front of an available chunk */
 2288   assert(prev_inuse(p));
 2289 
 2290 }
 2291 
 2292 
 2293 #define check_free_chunk(A,P) do_check_free_chunk(A,P)
 2294 #define check_inuse_chunk(A,P) do_check_inuse_chunk(A,P)
 2295 #define check_chunk(A,P) do_check_chunk(A,P)
 2296 #define check_malloced_chunk(A,P,N) do_check_malloced_chunk(A,P,N)
 2297 #else
 2298 #define check_free_chunk(A,P)
 2299 #define check_inuse_chunk(A,P)
 2300 #define check_chunk(A,P)
 2301 #define check_malloced_chunk(A,P,N)
 2302 #endif
 2303 
 2304 
 2305 
 2306 /*
 2307   Macro-based internal utilities
 2308 */
 2309 
 2310 
 2311 /*
 2312   Linking chunks in bin lists.
 2313   Call these only with variables, not arbitrary expressions, as arguments.
 2314 */
 2315 
 2316 /*
 2317   Place chunk p of size s in its bin, in size order,
 2318   putting it ahead of others of same size.
 2319 */
 2320 
 2321 
 2322 #define frontlink(A, P, S, IDX, BK, FD)                                       \
 2323 {                                                                             \
 2324   if (S < MAX_SMALLBIN_SIZE)                                                  \
 2325   {                                                                           \
 2326     IDX = smallbin_index(S);                                                  \
 2327     mark_binblock(A, IDX);                                                    \
 2328     BK = bin_at(A, IDX);                                                      \
 2329     FD = BK->fd;                                                              \
 2330     P->bk = BK;                                                               \
 2331     P->fd = FD;                                                               \
 2332     FD->bk = BK->fd = P;                                                      \
 2333   }                                                                           \
 2334   else                                                                        \
 2335   {                                                                           \
 2336     IDX = bin_index(S);                                                       \
 2337     BK = bin_at(A, IDX);                                                      \
 2338     FD = BK->fd;                                                              \
 2339     if (FD == BK) mark_binblock(A, IDX);                                      \
 2340     else                                                                      \
 2341     {                                                                         \
 2342       while (FD != BK && S < chunksize(FD)) FD = FD->fd;                      \
 2343       BK = FD->bk;                                                            \
 2344     }                                                                         \
 2345     P->bk = BK;                                                               \
 2346     P->fd = FD;                                                               \
 2347     FD->bk = BK->fd = P;                                                      \
 2348   }                                                                           \
 2349 }
 2350 
 2351 
 2352 /* take a chunk off a list */
 2353 
 2354 #define unlink(P, BK, FD)                                                     \
 2355 {                                                                             \
 2356   BK = P->bk;                                                                 \
 2357   FD = P->fd;                                                                 \
 2358   FD->bk = BK;                                                                \
 2359   BK->fd = FD;                                                                \
 2360 }                                                                             \
 2361 
 2362 /* Place p as the last remainder */
 2363 
 2364 #define link_last_remainder(A, P)                                             \
 2365 {                                                                             \
 2366   last_remainder(A)->fd = last_remainder(A)->bk = P;                          \
 2367   P->fd = P->bk = last_remainder(A);                                          \
 2368 }
 2369 
 2370 /* Clear the last_remainder bin */
 2371 
 2372 #define clear_last_remainder(A) \
 2373   (last_remainder(A)->fd = last_remainder(A)->bk = last_remainder(A))
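
/* Illustrative sketch of the list macros (plain variables only, as required
   above): a free chunk is inserted into its size-ordered bin with frontlink
   and removed again with unlink. */
#if 0
static void example_bin_links(arena *ar_ptr, mchunkptr p)
{
  INTERNAL_SIZE_T sz = chunksize(p);
  int idx;
  mchunkptr bck, fwd;

  frontlink(ar_ptr, p, sz, idx, bck, fwd);   /* insert and mark its bin block */
  assert(p->fd->bk == p && p->bk->fd == p);  /* now doubly linked into a bin  */
  unlink(p, bck, fwd);                       /* take it off the list again    */
}
#endif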
 2374 
 2375 
 2376 
 2377 
 2378 
 2379 /*
 2380   Extend the top-most chunk by obtaining memory from system.
 2381   Main interface to sbrk (but see also malloc_trim).
 2382 */
 2383 
 2384 #if defined __GNUC__ && __GNUC__ >= 2
 2385 /* This function is called only from one place, inline it.  */
 2386 inline
 2387 #endif
 2388 static void
 2389 internal_function
 2390 #if __STD_C
 2391 malloc_extend_top(arena *ar_ptr, INTERNAL_SIZE_T nb)
 2392 #else
 2393 malloc_extend_top(ar_ptr, nb) arena *ar_ptr; INTERNAL_SIZE_T nb;
 2394 #endif
 2395 {
 2396   unsigned long pagesz   = malloc_getpagesize;
 2397   mchunkptr old_top      = top(ar_ptr);        /* Record state of old top */
 2398   INTERNAL_SIZE_T old_top_size = chunksize(old_top);
 2399   INTERNAL_SIZE_T top_size;                    /* new size of top chunk */
 2400 
 2401 #ifndef NO_THREADS
 2402   if(ar_ptr == &main_arena) {
 2403 #endif
 2404 
 2405     char*     brk;                  /* return value from sbrk */
 2406     INTERNAL_SIZE_T front_misalign; /* unusable bytes at front of sbrked space */
 2407     INTERNAL_SIZE_T correction;     /* bytes for 2nd sbrk call */
 2408     char*     new_brk;              /* return of 2nd sbrk call */
 2409     char*     old_end = (char*)(chunk_at_offset(old_top, old_top_size));
 2410 
 2411     /* Pad request with top_pad plus minimal overhead */
 2412     INTERNAL_SIZE_T sbrk_size = nb + top_pad + MINSIZE;
 2413 
 2414     /* If not the first time through, round to preserve page boundary */
 2415     /* Otherwise, we need to correct to a page size below anyway. */
 2416     /* (We also correct below if there was an intervening foreign sbrk call.) */
 2417 
 2418     if (sbrk_base != (char*)(-1))
 2419       sbrk_size = (sbrk_size + (pagesz - 1)) & ~(pagesz - 1);
 2420 
 2421     brk = (char*)(MORECORE (sbrk_size));
 2422 
 2423     /* Fail if sbrk failed or if a foreign sbrk call killed our space */
 2424     if (brk == (char*)(MORECORE_FAILURE) ||
 2425         (brk < old_end && old_top != initial_top(&main_arena)))
 2426       return;
 2427 
 2428 #if defined _LIBC || defined MALLOC_HOOKS
 2429     /* Call the `morecore' hook if necessary.  */
 2430     if (__after_morecore_hook)
 2431       (*__after_morecore_hook) ();
 2432 #endif
 2433 
 2434     sbrked_mem += sbrk_size;
 2435 
 2436     if (brk == old_end) { /* can just add bytes to current top */
 2437       top_size = sbrk_size + old_top_size;
 2438       set_head(old_top, top_size | PREV_INUSE);
 2439       old_top = 0; /* don't free below */
 2440     } else {
 2441       if (sbrk_base == (char*)(-1)) /* First time through. Record base */
 2442         sbrk_base = brk;
 2443       else
 2444         /* Someone else called sbrk().  Count those bytes as sbrked_mem. */
 2445         sbrked_mem += brk - (char*)old_end;
 2446 
 2447       /* Guarantee alignment of first new chunk made from this space */
 2448       front_misalign = (unsigned long)chunk2mem(brk) & MALLOC_ALIGN_MASK;
 2449       if (front_misalign > 0) {
 2450         correction = (MALLOC_ALIGNMENT) - front_misalign;
 2451         brk += correction;
 2452       } else
 2453         correction = 0;
 2454 
 2455       /* Guarantee the next brk will be at a page boundary */
 2456       correction += pagesz - ((unsigned long)(brk + sbrk_size) & (pagesz - 1));
 2457 
 2458       /* Allocate correction */
 2459       new_brk = (char*)(MORECORE (correction));
 2460       if (new_brk == (char*)(MORECORE_FAILURE)) return;
 2461 
 2462 #if defined _LIBC || defined MALLOC_HOOKS
 2463       /* Call the `morecore' hook if necessary.  */
 2464       if (__after_morecore_hook)
 2465         (*__after_morecore_hook) ();
 2466 #endif
 2467 
 2468       sbrked_mem += correction;
 2469 
 2470       top(&main_arena) = (mchunkptr)brk;
 2471       top_size = new_brk - brk + correction;
 2472       set_head(top(&main_arena), top_size | PREV_INUSE);
 2473 
 2474       if (old_top == initial_top(&main_arena))
 2475         old_top = 0; /* don't free below */
 2476     }
 2477 
 2478     if ((unsigned long)sbrked_mem > (unsigned long)max_sbrked_mem)
 2479       max_sbrked_mem = sbrked_mem;
 2480 #ifdef NO_THREADS
 2481     if ((unsigned long)(mmapped_mem + sbrked_mem) >
 2482         (unsigned long)max_total_mem)
 2483       max_total_mem = mmapped_mem + sbrked_mem;
 2484 #endif
 2485 
 2486 #ifndef NO_THREADS
 2487   } else { /* ar_ptr != &main_arena */
 2488     heap_info *old_heap, *heap;
 2489     size_t old_heap_size;
 2490 
 2491     if(old_top_size < MINSIZE) /* this should never happen */
 2492       return;
 2493 
 2494     /* First try to extend the current heap. */
 2495     if(MINSIZE + nb <= old_top_size)
 2496       return;
 2497     old_heap = heap_for_ptr(old_top);
 2498     old_heap_size = old_heap->size;
 2499     if(grow_heap(old_heap, MINSIZE + nb - old_top_size) == 0) {
 2500       ar_ptr->size += old_heap->size - old_heap_size;
 2501       top_size = ((char *)old_heap + old_heap->size) - (char *)old_top;
 2502       set_head(old_top, top_size | PREV_INUSE);
 2503       return;
 2504     }
 2505 
 2506     /* A new heap must be created. */
 2507     heap = new_heap(nb + (MINSIZE + sizeof(*heap)));
 2508     if(!heap)
 2509       return;
 2510     heap->ar_ptr = ar_ptr;
 2511     heap->prev = old_heap;
 2512     ar_ptr->size += heap->size;
 2513 
 2514     /* Set up the new top, so we can safely use chunk_free() below. */
 2515     top(ar_ptr) = chunk_at_offset(heap, sizeof(*heap));
 2516     top_size = heap->size - sizeof(*heap);
 2517     set_head(top(ar_ptr), top_size | PREV_INUSE);
 2518   }
 2519 #endif /* !defined(NO_THREADS) */
 2520 
 2521   /* We always land on a page boundary */
 2522   assert(((unsigned long)((char*)top(ar_ptr) + top_size) & (pagesz-1)) == 0);
 2523 
 2524   /* Setup fencepost and free the old top chunk. */
 2525   if(old_top) {
 2526     /* The fencepost takes at least MINSIZE bytes, because it might
 2527        become the top chunk again later.  Note that a footer is set
 2528        up, too, although the chunk is marked in use. */
 2529     old_top_size -= MINSIZE;
 2530     set_head(chunk_at_offset(old_top, old_top_size + 2*SIZE_SZ), 0|PREV_INUSE);
 2531     if(old_top_size >= MINSIZE) {
 2532       set_head(chunk_at_offset(old_top, old_top_size), (2*SIZE_SZ)|PREV_INUSE);
 2533       set_foot(chunk_at_offset(old_top, old_top_size), (2*SIZE_SZ));
 2534       set_head_size(old_top, old_top_size);
 2535       chunk_free(ar_ptr, old_top);
 2536     } else {
 2537       set_head(old_top, (old_top_size + 2*SIZE_SZ)|PREV_INUSE);
 2538       set_foot(old_top, (old_top_size + 2*SIZE_SZ));
 2539     }
 2540   }
 2541 }
 2542 
 2543 
 2544 
 2545 
 2546 /* Main public routines */
 2547 
 2548 
 2549 /*
 2550   Malloc Algorithm:
 2551 
 2552     The requested size is first converted into a usable form, `nb'.
 2553     This currently means to add SIZE_SZ bytes of overhead plus possibly
 2554     more to obtain the required alignment and/or a size of at least
 2555     MINSIZE (currently 16, 24, or 32 bytes), the smallest allocatable
 2556     size.  (All fits are considered `exact' if they are within MINSIZE
 2557     bytes.)
 2558 
 2559     From there, the first of the following steps that succeeds is taken:
 2560 
 2561       1. The bin corresponding to the request size is scanned, and if
 2562          a chunk of exactly the right size is found, it is taken.
 2563 
 2564       2. The most recently remaindered chunk is used if it is big
 2565          enough.  This is a form of (roving) first fit, used only in
 2566          the absence of exact fits. Runs of consecutive requests use
 2567          the remainder of the chunk used for the previous such request
 2568          whenever possible. This limited use of a first-fit style
 2569          allocation strategy tends to give contiguous chunks
 2570          coextensive lifetimes, which improves locality and can reduce
 2571          fragmentation in the long run.
 2572 
 2573       3. Other bins are scanned in increasing size order, using a
 2574          chunk big enough to fulfill the request, and splitting off
 2575          any remainder.  This search is strictly by best-fit; i.e.,
 2576          the smallest (with ties going to approximately the least
 2577          recently used) chunk that fits is selected.
 2578 
 2579       4. If large enough, the chunk bordering the end of memory
 2580          (`top') is split off. (This use of `top' is in accord with
 2581          the best-fit search rule.  In effect, `top' is treated as
 2582          larger (and thus less well fitting) than any other available
 2583          chunk since it can be extended to be as large as necessary
 2584          (up to system limitations).)
 2585 
 2586       5. If the request size meets the mmap threshold and the
 2587          system supports mmap, and there are few enough currently
 2588          allocated mmapped regions, and a call to mmap succeeds,
 2589          the request is allocated via direct memory mapping.
 2590 
 2591       6. Otherwise, the top of memory is extended by
 2592          obtaining more space from the system (normally using sbrk,
 2593          but definable to anything else via the MORECORE macro).
 2594          Memory is gathered from the system (in system page-sized
 2595          units) in a way that allows chunks obtained across different
 2596          sbrk calls to be consolidated, but does not require
 2597          contiguous memory. Thus, it should be safe to intersperse
 2598          mallocs with other sbrk calls.
 2599 
 2600 
 2601       All allocations are made from the `lowest' part of any found
 2602       chunk. (The implementation invariant is that prev_inuse is
 2603       always true of any allocated chunk; i.e., that each allocated
 2604       chunk borders either a previously allocated and still in-use chunk,
 2605       or the base of its memory arena.)
 2606 
 2607 */
 2608 
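/* Condensed sketch of step 4 above in isolation: carving the request off
   `top' and leaving the remainder as the new top.  The real chunk_alloc()
   below also implements steps 1-3, 5 and 6, including the bin-block scan. */
#if 0
static mchunkptr sketch_alloc_from_top(arena *ar_ptr, INTERNAL_SIZE_T nb)
{
  long remainder_size = chunksize(top(ar_ptr)) - nb;
  mchunkptr victim;

  if(remainder_size < (long)MINSIZE)
    return 0;                                 /* would have to extend or mmap    */
  victim = top(ar_ptr);
  set_head(victim, nb | PREV_INUSE);          /* low part becomes the allocation */
  top(ar_ptr) = chunk_at_offset(victim, nb);  /* the rest remains as top         */
  set_head(top(ar_ptr), remainder_size | PREV_INUSE);
  return victim;
}
#endif
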
 2609 #if __STD_C
 2610 Void_t* mALLOc(size_t bytes)
 2611 #else
 2612 Void_t* mALLOc(bytes) size_t bytes;
 2613 #endif
 2614 {
 2615   arena *ar_ptr;
 2616   INTERNAL_SIZE_T nb; /* padded request size */
 2617   mchunkptr victim;
 2618 
 2619 #if defined _LIBC || defined MALLOC_HOOKS
 2620   if (__malloc_hook != NULL) {
 2621     Void_t* result;
 2622 
 2623     result = (*__malloc_hook)(bytes);
 2624     return result;
 2625   }
 2626 #endif
 2627 
 2628   nb = request2size(bytes);
 2629   arena_get(ar_ptr, nb);
 2630   if(!ar_ptr)
 2631     return 0;
 2632   victim = chunk_alloc(ar_ptr, nb);
 2633   (void)mutex_unlock(&ar_ptr->mutex);
 2634   if(!victim) {
 2635     /* Maybe the failure is due to running out of mmapped areas. */
 2636     if(ar_ptr != &main_arena) {
 2637       (void)mutex_lock(&main_arena.mutex);
 2638       victim = chunk_alloc(&main_arena, nb);
 2639       (void)mutex_unlock(&main_arena.mutex);
 2640     }
 2641     if(!victim) return 0;
 2642   }
 2643   return chunk2mem(victim);
 2644 }
 2645 
 2646 static mchunkptr
 2647 internal_function
 2648 #if __STD_C
 2649 chunk_alloc(arena *ar_ptr, INTERNAL_SIZE_T nb)
 2650 #else
 2651 chunk_alloc(ar_ptr, nb) arena *ar_ptr; INTERNAL_SIZE_T nb;
 2652 #endif
 2653 {
 2654   mchunkptr victim;                  /* inspected/selected chunk */
 2655   INTERNAL_SIZE_T victim_size;       /* its size */
 2656   int       idx;                     /* index for bin traversal */
 2657   mbinptr   bin;                     /* associated bin */
 2658   mchunkptr remainder;               /* remainder from a split */
 2659   long      remainder_size;          /* its size */
 2660   int       remainder_index;         /* its bin index */
 2661   unsigned long block;               /* block traverser bit */
 2662   int       startidx;                /* first bin of a traversed block */
 2663   mchunkptr fwd;                     /* misc temp for linking */
 2664   mchunkptr bck;                     /* misc temp for linking */
 2665   mbinptr q;                         /* misc temp */
 2666 
 2667 
 2668   /* Check for exact match in a bin */
 2669 
 2670   if (is_small_request(nb))  /* Faster version for small requests */
 2671   {
 2672     idx = smallbin_index(nb);
 2673 
 2674     /* No traversal or size check necessary for small bins.  */
 2675 
 2676     q = bin_at(ar_ptr, idx);
 2677     victim = last(q);
 2678 
 2679     /* Also scan the next bin: any chunk there would leave a remainder < MINSIZE */
 2680     if (victim == q)
 2681     {
 2682       q = next_bin(q);
 2683       victim = last(q);
 2684     }
 2685     if (victim != q)
 2686     {
 2687       victim_size = chunksize(victim);
 2688       unlink(victim, bck, fwd);
 2689       set_inuse_bit_at_offset(victim, victim_size);
 2690       check_malloced_chunk(ar_ptr, victim, nb);
 2691       return victim;
 2692     }
 2693 
 2694     idx += 2; /* Set for bin scan below. We've already scanned 2 bins. */
 2695 
 2696   }
 2697   else
 2698   {
 2699     idx = bin_index(nb);
 2700     bin = bin_at(ar_ptr, idx);
 2701 
 2702     for (victim = last(bin); victim != bin; victim = victim->bk)
 2703     {
 2704       victim_size = chunksize(victim);
 2705       remainder_size = victim_size - nb;
 2706 
 2707       if (remainder_size >= (long)MINSIZE) /* too big */
 2708       {
 2709         --idx; /* adjust to rescan below after checking last remainder */
 2710         break;
 2711       }
 2712 
 2713       else if (remainder_size >= 0) /* exact fit */
 2714       {
 2715         unlink(victim, bck, fwd);
 2716         set_inuse_bit_at_offset(victim, victim_size);
 2717         check_malloced_chunk(ar_ptr, victim, nb);
 2718         return victim;
 2719       }
 2720     }
 2721 
 2722     ++idx;
 2723 
 2724   }
 2725 
 2726   /* Try to use the last split-off remainder */
 2727 
 2728   if ( (victim = last_remainder(ar_ptr)->fd) != last_remainder(ar_ptr))
 2729   {
 2730     victim_size = chunksize(victim);
 2731     remainder_size = victim_size - nb;
 2732 
 2733     if (remainder_size >= (long)MINSIZE) /* re-split */
 2734     {
 2735       remainder = chunk_at_offset(victim, nb);
 2736       set_head(victim, nb | PREV_INUSE);
 2737       link_last_remainder(ar_ptr, remainder);
 2738       set_head(remainder, remainder_size | PREV_INUSE);
 2739       set_foot(remainder, remainder_size);
 2740       check_malloced_chunk(ar_ptr, victim, nb);
 2741       return victim;
 2742     }
 2743 
 2744     clear_last_remainder(ar_ptr);
 2745 
 2746     if (remainder_size >= 0)  /* exhaust */
 2747     {
 2748       set_inuse_bit_at_offset(victim, victim_size);
 2749       check_malloced_chunk(ar_ptr, victim, nb);
 2750       return victim;
 2751     }
 2752 
 2753     /* Else place in bin */
 2754 
 2755     frontlink(ar_ptr, victim, victim_size, remainder_index, bck, fwd);
 2756   }
 2757 
 2758   /*
 2759      If there are any possibly nonempty big-enough blocks,
 2760      search for best fitting chunk by scanning bins in blockwidth units.
 2761   */
 2762 
 2763   if ( (block = idx2binblock(idx)) <= binblocks(ar_ptr))
 2764   {
 2765 
 2766     /* Get to the first marked block */
 2767 
 2768     if ( (block & binblocks(ar_ptr)) == 0)
 2769     {
 2770       /* force to an even block boundary */
 2771       idx = (idx & ~(BINBLOCKWIDTH - 1)) + BINBLOCKWIDTH;
 2772       block <<= 1;
 2773       while ((block & binblocks(ar_ptr)) == 0)
 2774       {
 2775         idx += BINBLOCKWIDTH;
 2776         block <<= 1;
 2777       }
 2778     }
 2779 
 2780     /* For each possibly nonempty block ... */
 2781     for (;;)
 2782     {
 2783       startidx = idx;          /* (track incomplete blocks) */
 2784       q = bin = bin_at(ar_ptr, idx);
 2785 
 2786       /* For each bin in this block ... */
 2787       do
 2788       {
 2789         /* Find and use first big enough chunk ... */
 2790 
 2791         for (victim = last(bin); victim != bin; victim = victim->bk)
 2792         {
 2793           victim_size = chunksize(victim);
 2794           remainder_size = victim_size - nb;
 2795 
 2796           if (remainder_size >= (long)MINSIZE) /* split */
 2797           {
 2798             remainder = chunk_at_offset(victim, nb);
 2799             set_head(victim, nb | PREV_INUSE);
 2800             unlink(victim, bck, fwd);
 2801             link_last_remainder(ar_ptr, remainder);
 2802             set_head(remainder, remainder_size | PREV_INUSE);
 2803             set_foot(remainder, remainder_size);
 2804             check_malloced_chunk(ar_ptr, victim, nb);
 2805             return victim;
 2806           }
 2807 
 2808           else if (remainder_size >= 0)  /* take */
 2809           {
 2810             set_inuse_bit_at_offset(victim, victim_size);
 2811             unlink(victim, bck, fwd);
 2812             check_malloced_chunk(ar_ptr, victim, nb);
 2813             return victim;
 2814           }
 2815 
 2816         }
 2817 
 2818        bin = next_bin(bin);
 2819 
 2820       } while ((++idx & (BINBLOCKWIDTH - 1)) != 0);
 2821 
 2822       /* Clear out the block bit. */
 2823 
 2824       do   /* Possibly backtrack to try to clear a partial block */
 2825       {
 2826         if ((startidx & (BINBLOCKWIDTH - 1)) == 0)
 2827         {
 2828           binblocks(ar_ptr) &= ~block;
 2829           break;
 2830         }
 2831         --startidx;
 2832         q = prev_bin(q);
 2833       } while (first(q) == q);
 2834 
 2835       /* Get to the next possibly nonempty block */
 2836 
 2837       if ( (block <<= 1) <= binblocks(ar_ptr) && (block != 0) )
 2838       {
 2839         while ((block & binblocks(ar_ptr)) == 0)
 2840         {
 2841           idx += BINBLOCKWIDTH;
 2842           block <<= 1;
 2843         }
 2844       }
 2845       else
 2846         break;
 2847     }
 2848   }
 2849 
 2850 
 2851   /* Try to use top chunk */
 2852 
 2853   /* Require that there be a remainder, ensuring top always exists  */
 2854   if ( (remainder_size = chunksize(top(ar_ptr)) - nb) < (long)MINSIZE)
 2855   {
 2856 
 2857 #if HAVE_MMAP
 2858     /* If big and would otherwise need to extend, try to use mmap instead */
 2859     if ((unsigned long)nb >= (unsigned long)mmap_threshold &&
 2860         (victim = mmap_chunk(nb)) != 0)
 2861       return victim;
 2862 #endif
 2863 
 2864     /* Try to extend */
 2865     malloc_extend_top(ar_ptr, nb);
 2866     if ((remainder_size = chunksize(top(ar_ptr)) - nb) < (long)MINSIZE)
 2867       return 0; /* propagate failure */
 2868   }
 2869 
 2870   victim = top(ar_ptr);
 2871   set_head(victim, nb | PREV_INUSE);
 2872   top(ar_ptr) = chunk_at_offset(victim, nb);
 2873   set_head(top(ar_ptr), remainder_size | PREV_INUSE);
 2874   check_malloced_chunk(ar_ptr, victim, nb);
 2875   return victim;
 2876 
 2877 }
 2878 
 2879 
 2880 
 2881 
 2882 /*
 2883 
 2884   free() algorithm :
 2885 
 2886     cases:
 2887 
 2888        1. free(0) has no effect.
 2889 
 2890        2. If the chunk was allocated via mmap, it is released via munmap().
 2891 
 2892        3. If a returned chunk borders the current high end of memory,
 2893           it is consolidated into the top, and if the total unused
 2894           topmost memory exceeds the trim threshold, malloc_trim is
 2895           called.
 2896 
 2897        4. Other chunks are consolidated as they arrive, and
 2898           placed in corresponding bins. (This includes the case of
 2899           consolidating with the current `last_remainder').
 2900 
 2901 */
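/* [Editorial illustration -- not part of the original source.]  The cases
   above as seen from calling code.  A minimal sketch assuming the public
   malloc/free entry points that wrap these routines, and a default
   mmap_threshold of 128 kB (both build-dependent). */
#if 0
#include <stdlib.h>

static void free_cases_example(void)
{
  void *small = malloc(64);          /* served from an arena */
  void *big   = malloc(512 * 1024);  /* above mmap_threshold: likely mmap()ed */

  free(NULL);     /* case 1: no effect */
  free(big);      /* case 2: released via munmap() */
  free(small);    /* case 3/4: consolidated with free neighbours and binned,
                     or merged into top (possibly triggering a trim) */
}
#endif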
 2902 
 2903 
 2904 #if __STD_C
 2905 void fREe(Void_t* mem)
 2906 #else
 2907 void fREe(mem) Void_t* mem;
 2908 #endif
 2909 {
 2910   arena *ar_ptr;
 2911   mchunkptr p;                          /* chunk corresponding to mem */
 2912 
 2913 #if defined _LIBC || defined MALLOC_HOOKS
 2914   if (__free_hook != NULL) {
 2915     (*__free_hook)(mem);
 2916     return;
 2917   }
 2918 #endif
 2919 
 2920   if (mem == 0)                              /* free(0) has no effect */
 2921     return;
 2922 
 2923   p = mem2chunk(mem);
 2924 
 2925 #if HAVE_MMAP
 2926   if (chunk_is_mmapped(p))                       /* release mmapped memory. */
 2927   {
 2928     munmap_chunk(p);
 2929     return;
 2930   }
 2931 #endif
 2932 
 2933   ar_ptr = arena_for_ptr(p);
 2934 #if THREAD_STATS
 2935   if(!mutex_trylock(&ar_ptr->mutex))
 2936     ++(ar_ptr->stat_lock_direct);
 2937   else {
 2938     (void)mutex_lock(&ar_ptr->mutex);
 2939     ++(ar_ptr->stat_lock_wait);
 2940   }
 2941 #else
 2942   (void)mutex_lock(&ar_ptr->mutex);
 2943 #endif
 2944   chunk_free(ar_ptr, p);
 2945   (void)mutex_unlock(&ar_ptr->mutex);
 2946 }
 2947 
 2948 static void
 2949 internal_function
 2950 #if __STD_C
 2951 chunk_free(arena *ar_ptr, mchunkptr p)
 2952 #else
 2953 chunk_free(ar_ptr, p) arena *ar_ptr; mchunkptr p;
 2954 #endif
 2955 {
 2956   INTERNAL_SIZE_T hd = p->size; /* its head field */
 2957   INTERNAL_SIZE_T sz;  /* its size */
 2958   int       idx;       /* its bin index */
 2959   mchunkptr next;      /* next contiguous chunk */
 2960   INTERNAL_SIZE_T nextsz; /* its size */
 2961   INTERNAL_SIZE_T prevsz; /* size of previous contiguous chunk */
 2962   mchunkptr bck;       /* misc temp for linking */
 2963   mchunkptr fwd;       /* misc temp for linking */
 2964   int       islr;      /* track whether merging with last_remainder */
 2965 
 2966   check_inuse_chunk(ar_ptr, p);
 2967 
 2968   sz = hd & ~PREV_INUSE;
 2969   next = chunk_at_offset(p, sz);
 2970   nextsz = chunksize(next);
 2971 
 2972   if (next == top(ar_ptr))                         /* merge with top */
 2973   {
 2974     sz += nextsz;
 2975 
 2976     if (!(hd & PREV_INUSE))                    /* consolidate backward */
 2977     {
 2978       prevsz = p->prev_size;
 2979       p = chunk_at_offset(p, -prevsz);
 2980       sz += prevsz;
 2981       unlink(p, bck, fwd);
 2982     }
 2983 
 2984     set_head(p, sz | PREV_INUSE);
 2985     top(ar_ptr) = p;
 2986 
 2987 #ifndef NO_THREADS
 2988     if(ar_ptr == &main_arena) {
 2989 #endif
 2990       if ((unsigned long)(sz) >= (unsigned long)trim_threshold)
 2991         main_trim(top_pad);
 2992 #ifndef NO_THREADS
 2993     } else {
 2994       heap_info *heap = heap_for_ptr(p);
 2995 
 2996       assert(heap->ar_ptr == ar_ptr);
 2997 
 2998       /* Try to get rid of completely empty heaps, if possible. */
 2999       if((unsigned long)(sz) >= (unsigned long)trim_threshold ||
 3000          p == chunk_at_offset(heap, sizeof(*heap)))
 3001         heap_trim(heap, top_pad);
 3002     }
 3003 #endif
 3004     return;
 3005   }
 3006 
 3007   islr = 0;
 3008 
 3009   if (!(hd & PREV_INUSE))                    /* consolidate backward */
 3010   {
 3011     prevsz = p->prev_size;
 3012     p = chunk_at_offset(p, -prevsz);
 3013     sz += prevsz;
 3014 
 3015     if (p->fd == last_remainder(ar_ptr))     /* keep as last_remainder */
 3016       islr = 1;
 3017     else
 3018       unlink(p, bck, fwd);
 3019   }
 3020 
 3021   if (!(inuse_bit_at_offset(next, nextsz)))   /* consolidate forward */
 3022   {
 3023     sz += nextsz;
 3024 
 3025     if (!islr && next->fd == last_remainder(ar_ptr))
 3026                                               /* re-insert last_remainder */
 3027     {
 3028       islr = 1;
 3029       link_last_remainder(ar_ptr, p);
 3030     }
 3031     else
 3032       unlink(next, bck, fwd);
 3033 
 3034     next = chunk_at_offset(p, sz);
 3035   }
 3036   else
 3037     set_head(next, nextsz);                  /* clear inuse bit */
 3038 
 3039   set_head(p, sz | PREV_INUSE);
 3040   next->prev_size = sz;
 3041   if (!islr)
 3042     frontlink(ar_ptr, p, sz, idx, bck, fwd);
 3043 
 3044 #ifndef NO_THREADS
 3045   /* Check whether the heap containing top can go away now. */
 3046   if(next->size < MINSIZE &&
 3047      (unsigned long)sz > trim_threshold &&
 3048      ar_ptr != &main_arena) {                /* fencepost */
 3049     heap_info *heap = heap_for_ptr(top(ar_ptr));
 3050 
 3051     if(top(ar_ptr) == chunk_at_offset(heap, sizeof(*heap)) &&
 3052        heap->prev == heap_for_ptr(p))
 3053       heap_trim(heap, top_pad);
 3054   }
 3055 #endif
 3056 }
 3057 
 3058 
 3059 
 3060 
 3061 
 3062 /*
 3063 
 3064   Realloc algorithm:
 3065 
 3066     Chunks that were obtained via mmap cannot be extended or shrunk
 3067     unless HAVE_MREMAP is defined, in which case mremap is used.
 3068     Otherwise, if their reallocation is for additional space, they are
 3069     copied.  If for less, they are just left alone.
 3070 
 3071     Otherwise, if the reallocation is for additional space, and the
 3072     chunk can be extended, it is, else a malloc-copy-free sequence is
 3073     taken.  There are several different ways that a chunk could be
 3074     extended. All are tried:
 3075 
 3076        * Extending forward into following adjacent free chunk.
 3077        * Shifting backwards, joining preceding adjacent space
 3078        * Both shifting backwards and extending forward.
 3079        * Extending into newly sbrked space
 3080 
 3081     Unless the #define REALLOC_ZERO_BYTES_FREES is set, realloc with a
 3082     size argument of zero (re)allocates a minimum-sized chunk.
 3083 
 3084     If the reallocation is for less space, and the new request is for
 3085     a `small' (<512 bytes) size, then the newly unused space is lopped
 3086     off and freed.
 3087 
 3088     The old unix realloc convention of allowing the last-free'd chunk
 3089     to be used as an argument to realloc is no longer supported.
 3090     I don't know of any programs still relying on this feature,
 3091     and allowing it would also allow too many other incorrect
 3092     usages of realloc to be sensible.
 3093 
 3094 
 3095 */
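/* [Editorial illustration -- not part of the original source.]  A minimal
   usage sketch of the cases above, assuming the public realloc/malloc/free
   entry points that wrap chunk_realloc below. */
#if 0
#include <stdlib.h>

static void realloc_cases_example(void)
{
  char *buf = realloc(NULL, 100);      /* same as malloc(100) */
  char *tmp;
  if (buf == NULL)
    return;

  tmp = realloc(buf, 100000);          /* grow: extended in place when the
                                          neighbouring space allows, else
                                          allocate-copy-free */
  if (tmp == NULL) {
    free(buf);                         /* failure leaves the old block valid */
    return;
  }
  buf = tmp;

  tmp = realloc(buf, 64);              /* shrink: the unused tail may be
                                          split off and freed */
  if (tmp != NULL)
    buf = tmp;

  free(buf);
}
#endif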
 3096 
 3097 
 3098 #if __STD_C
 3099 Void_t* rEALLOc(Void_t* oldmem, size_t bytes)
 3100 #else
 3101 Void_t* rEALLOc(oldmem, bytes) Void_t* oldmem; size_t bytes;
 3102 #endif
 3103 {
 3104   arena *ar_ptr;
 3105   INTERNAL_SIZE_T    nb;      /* padded request size */
 3106 
 3107   mchunkptr oldp;             /* chunk corresponding to oldmem */
 3108   INTERNAL_SIZE_T    oldsize; /* its size */
 3109 
 3110   mchunkptr newp;             /* chunk to return */
 3111 
 3112 #if defined _LIBC || defined MALLOC_HOOKS
 3113   if (__realloc_hook != NULL) {
 3114     Void_t* result;
 3115 
 3116     result = (*__realloc_hook)(oldmem, bytes);
 3117     return result;
 3118   }
 3119 #endif
 3120 
 3121 #ifdef REALLOC_ZERO_BYTES_FREES
 3122   if (bytes == 0 && oldmem != NULL) { fREe(oldmem); return 0; }
 3123 #endif
 3124 
 3125   /* realloc of null is supposed to be same as malloc */
 3126   if (oldmem == 0) return mALLOc(bytes);
 3127 
 3128   oldp    = mem2chunk(oldmem);
 3129   oldsize = chunksize(oldp);
 3130 
 3131   nb = request2size(bytes);
 3132 
 3133 #if HAVE_MMAP
 3134   if (chunk_is_mmapped(oldp))
 3135   {
 3136     Void_t* newmem;
 3137 
 3138 #if HAVE_MREMAP
 3139     newp = mremap_chunk(oldp, nb);
 3140     if(newp) return chunk2mem(newp);
 3141 #endif
 3142     /* Note the extra SIZE_SZ overhead. */
 3143     if(oldsize - SIZE_SZ >= nb) return oldmem; /* do nothing */
 3144     /* Must alloc, copy, free. */
 3145     newmem = mALLOc(bytes);
 3146     if (newmem == 0) return 0; /* propagate failure */
 3147     MALLOC_COPY(newmem, oldmem, oldsize - 2*SIZE_SZ);
 3148     munmap_chunk(oldp);
 3149     return newmem;
 3150   }
 3151 #endif
 3152 
 3153   ar_ptr = arena_for_ptr(oldp);
 3154 #if THREAD_STATS
 3155   if(!mutex_trylock(&ar_ptr->mutex))
 3156     ++(ar_ptr->stat_lock_direct);
 3157   else {
 3158     (void)mutex_lock(&ar_ptr->mutex);
 3159     ++(ar_ptr->stat_lock_wait);
 3160   }
 3161 #else
 3162   (void)mutex_lock(&ar_ptr->mutex);
 3163 #endif
 3164 
 3165 #ifndef NO_THREADS
 3166   /* As in malloc(), remember this arena for the next allocation. */
 3167   tsd_setspecific(arena_key, (Void_t *)ar_ptr);
 3168 #endif
 3169 
 3170   newp = chunk_realloc(ar_ptr, oldp, oldsize, nb);
 3171 
 3172   (void)mutex_unlock(&ar_ptr->mutex);
 3173   return newp ? chunk2mem(newp) : NULL;
 3174 }
 3175 
 3176 static mchunkptr
 3177 internal_function
 3178 #if __STD_C
 3179 chunk_realloc(arena* ar_ptr, mchunkptr oldp, INTERNAL_SIZE_T oldsize,
 3180               INTERNAL_SIZE_T nb)
 3181 #else
 3182 chunk_realloc(ar_ptr, oldp, oldsize, nb)
 3183 arena* ar_ptr; mchunkptr oldp; INTERNAL_SIZE_T oldsize, nb;
 3184 #endif
 3185 {
 3186   mchunkptr newp = oldp;      /* chunk to return */
 3187   INTERNAL_SIZE_T newsize = oldsize; /* its size */
 3188 
 3189   mchunkptr next;             /* next contiguous chunk after oldp */
 3190   INTERNAL_SIZE_T  nextsize;  /* its size */
 3191 
 3192   mchunkptr prev;             /* previous contiguous chunk before oldp */
 3193   INTERNAL_SIZE_T  prevsize;  /* its size */
 3194 
 3195   mchunkptr remainder;        /* holds split off extra space from newp */
 3196   INTERNAL_SIZE_T  remainder_size;   /* its size */
 3197 
 3198   mchunkptr bck;              /* misc temp for linking */
 3199   mchunkptr fwd;              /* misc temp for linking */
 3200 
 3201   check_inuse_chunk(ar_ptr, oldp);
 3202 
 3203   if ((long)(oldsize) < (long)(nb))
 3204   {
 3205 
 3206     /* Try expanding forward */
 3207 
 3208     next = chunk_at_offset(oldp, oldsize);
 3209     if (next == top(ar_ptr) || !inuse(next))
 3210     {
 3211       nextsize = chunksize(next);
 3212 
 3213       /* Forward into top only if a remainder */
 3214       if (next == top(ar_ptr))
 3215       {
 3216         if ((long)(nextsize + newsize) >= (long)(nb + MINSIZE))
 3217         {
 3218           newsize += nextsize;
 3219           top(ar_ptr) = chunk_at_offset(oldp, nb);
 3220           set_head(top(ar_ptr), (newsize - nb) | PREV_INUSE);
 3221           set_head_size(oldp, nb);
 3222           return oldp;
 3223         }
 3224       }
 3225 
 3226       /* Forward into next chunk */
 3227       else if (((long)(nextsize + newsize) >= (long)(nb)))
 3228       {
 3229         unlink(next, bck, fwd);
 3230         newsize  += nextsize;
 3231         goto split;
 3232       }
 3233     }
 3234     else
 3235     {
 3236       next = 0;
 3237       nextsize = 0;
 3238     }
 3239 
 3240     /* Try shifting backwards. */
 3241 
 3242     if (!prev_inuse(oldp))
 3243     {
 3244       prev = prev_chunk(oldp);
 3245       prevsize = chunksize(prev);
 3246 
 3247       /* try forward + backward first to save a later consolidation */
 3248 
 3249       if (next != 0)
 3250       {
 3251         /* into top */
 3252         if (next == top(ar_ptr))
 3253         {
 3254           if ((long)(nextsize + prevsize + newsize) >= (long)(nb + MINSIZE))
 3255           {
 3256             unlink(prev, bck, fwd);
 3257             newp = prev;
 3258             newsize += prevsize + nextsize;
 3259             MALLOC_COPY(chunk2mem(newp), chunk2mem(oldp), oldsize - SIZE_SZ);
 3260             top(ar_ptr) = chunk_at_offset(newp, nb);
 3261             set_head(top(ar_ptr), (newsize - nb) | PREV_INUSE);
 3262             set_head_size(newp, nb);
 3263             return newp;
 3264           }
 3265         }
 3266 
 3267         /* into next chunk */
 3268         else if (((long)(nextsize + prevsize + newsize) >= (long)(nb)))
 3269         {
 3270           unlink(next, bck, fwd);
 3271           unlink(prev, bck, fwd);
 3272           newp = prev;
 3273           newsize += nextsize + prevsize;
 3274           MALLOC_COPY(chunk2mem(newp), chunk2mem(oldp), oldsize - SIZE_SZ);
 3275           goto split;
 3276         }
 3277       }
 3278 
 3279       /* backward only */
 3280       if (prev != 0 && (long)(prevsize + newsize) >= (long)nb)
 3281       {
 3282         unlink(prev, bck, fwd);
 3283         newp = prev;
 3284         newsize += prevsize;
 3285         MALLOC_COPY(chunk2mem(newp), chunk2mem(oldp), oldsize - SIZE_SZ);
 3286         goto split;
 3287       }
 3288     }
 3289 
 3290     /* Must allocate */
 3291 
 3292     newp = chunk_alloc (ar_ptr, nb);
 3293 
 3294     if (newp == 0) {
 3295       /* Maybe the failure is due to running out of mmapped areas. */
 3296       if (ar_ptr != &main_arena) {
 3297         (void)mutex_lock(&main_arena.mutex);
 3298         newp = chunk_alloc(&main_arena, nb);
 3299         (void)mutex_unlock(&main_arena.mutex);
 3300       }
 3301       if (newp == 0) /* propagate failure */
 3302         return 0;
 3303     }
 3304 
 3305     /* Avoid copy if newp is next chunk after oldp. */
 3306     /* (This can only happen when new chunk is sbrk'ed.) */
 3307 
 3308     if ( newp == next_chunk(oldp))
 3309     {
 3310       newsize += chunksize(newp);
 3311       newp = oldp;
 3312       goto split;
 3313     }
 3314 
 3315     /* Otherwise copy, free, and exit */
 3316     MALLOC_COPY(chunk2mem(newp), chunk2mem(oldp), oldsize - SIZE_SZ);
 3317     chunk_free(ar_ptr, oldp);
 3318     return newp;
 3319   }
 3320 
 3321 
 3322  split:  /* split off extra room in old or expanded chunk */
 3323 
 3324   if (newsize - nb >= MINSIZE) /* split off remainder */
 3325   {
 3326     remainder = chunk_at_offset(newp, nb);
 3327     remainder_size = newsize - nb;
 3328     set_head_size(newp, nb);
 3329     set_head(remainder, remainder_size | PREV_INUSE);
 3330     set_inuse_bit_at_offset(remainder, remainder_size);
 3331     chunk_free(ar_ptr, remainder);
 3332   }
 3333   else
 3334   {
 3335     set_head_size(newp, newsize);
 3336     set_inuse_bit_at_offset(newp, newsize);
 3337   }
 3338 
 3339   check_inuse_chunk(ar_ptr, newp);
 3340   return newp;
 3341 }
 3342 
 3343 
 3344 
 3345 
 3346 /*
 3347 
 3348   memalign algorithm:
 3349 
 3350     memalign requests more than enough space from malloc, finds a spot
 3351     within that chunk that meets the alignment request, and then
 3352     possibly frees the leading and trailing space.
 3353 
 3354     The alignment argument must be a power of two. This property is not
 3355     checked by memalign, so misuse may result in random runtime errors.
 3356 
 3357     8-byte alignment is guaranteed by normal malloc calls, so don't
 3358     bother calling memalign with an argument of 8 or less.
 3359 
 3360     Overreliance on memalign is a sure way to fragment space.
 3361 
 3362 */
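/* [Editorial illustration -- not part of the original source.]  A minimal
   sketch assuming the public memalign entry point wrapping chunk_align
   below; the prototype shown is an assumption -- prefer the declaration
   from your system's malloc.h. */
#if 0
#include <stdlib.h>

extern void *memalign(size_t alignment, size_t size);

static void memalign_example(void)
{
  /* Alignment must be a power of two; the code does not check this. */
  void *p = memalign(4096, 1000);
  if (p != NULL) {
    /* ((unsigned long)p % 4096) == 0 holds here */
    free(p);
  }
  /* Alignments of MALLOC_ALIGNMENT (normally 8) or less simply relay to
     malloc, so there is no point in calling memalign for them. */
}
#endif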
 3363 
 3364 
 3365 #if __STD_C
 3366 Void_t* mEMALIGn(size_t alignment, size_t bytes)
 3367 #else
 3368 Void_t* mEMALIGn(alignment, bytes) size_t alignment; size_t bytes;
 3369 #endif
 3370 {
 3371   arena *ar_ptr;
 3372   INTERNAL_SIZE_T    nb;      /* padded  request size */
 3373   mchunkptr p;
 3374 
 3375 #if defined _LIBC || defined MALLOC_HOOKS
 3376   if (__memalign_hook != NULL) {
 3377     Void_t* result;
 3378 
 3379     result = (*__memalign_hook)(alignment, bytes);
 3380     return result;
 3381   }
 3382 #endif
 3383 
 3384   /* If need less alignment than we give anyway, just relay to malloc */
 3385 
 3386   if (alignment <= MALLOC_ALIGNMENT) return mALLOc(bytes);
 3387 
 3388   /* Otherwise, ensure that it is at least a minimum chunk size */
 3389 
 3390   if (alignment <  MINSIZE) alignment = MINSIZE;
 3391 
 3392   nb = request2size(bytes);
 3393   arena_get(ar_ptr, nb + alignment + MINSIZE);
 3394   if(!ar_ptr)
 3395     return 0;
 3396   p = chunk_align(ar_ptr, nb, alignment);
 3397   (void)mutex_unlock(&ar_ptr->mutex);
 3398   if(!p) {
 3399     /* Maybe the failure is due to running out of mmapped areas. */
 3400     if(ar_ptr != &main_arena) {
 3401       (void)mutex_lock(&main_arena.mutex);
 3402       p = chunk_align(&main_arena, nb, alignment);
 3403       (void)mutex_unlock(&main_arena.mutex);
 3404     }
 3405     if(!p) return 0;
 3406   }
 3407   return chunk2mem(p);
 3408 }
 3409 
 3410 static mchunkptr
 3411 internal_function
 3412 #if __STD_C
 3413 chunk_align(arena* ar_ptr, INTERNAL_SIZE_T nb, size_t alignment)
 3414 #else
 3415 chunk_align(ar_ptr, nb, alignment)
 3416 arena* ar_ptr; INTERNAL_SIZE_T nb; size_t alignment;
 3417 #endif
 3418 {
 3419   char*     m;                /* memory returned by malloc call */
 3420   mchunkptr p;                /* corresponding chunk */
 3421   char*     brk;              /* alignment point within p */
 3422   mchunkptr newp;             /* chunk to return */
 3423   INTERNAL_SIZE_T  newsize;   /* its size */
 3424   INTERNAL_SIZE_T  leadsize;  /* leading space before alignment point */
 3425   mchunkptr remainder;        /* spare room at end to split off */
 3426   long      remainder_size;   /* its size */
 3427 
 3428   /* Call chunk_alloc with worst case padding to hit alignment. */
 3429   p = chunk_alloc(ar_ptr, nb + alignment + MINSIZE);
 3430   if (p == 0)
 3431     return 0; /* propagate failure */
 3432 
 3433   m = (char*)chunk2mem(p);
 3434 
 3435   if ((((unsigned long)(m)) % alignment) == 0) /* aligned */
 3436   {
 3437 #if HAVE_MMAP
 3438     if(chunk_is_mmapped(p)) {
 3439       return p; /* nothing more to do */
 3440     }
 3441 #endif
 3442   }
 3443   else /* misaligned */
 3444   {
 3445     /*
 3446       Find an aligned spot inside chunk.
 3447       Since we need to give back leading space in a chunk of at
 3448       least MINSIZE, if the first calculation places us at
 3449       a spot with less than MINSIZE leader, we can move to the
 3450       next aligned spot -- we've allocated enough total room so that
 3451       this is always possible.
 3452     */
 3453 
 3454     brk = (char*)mem2chunk(((unsigned long)(m + alignment - 1)) & -alignment);
 3455     if ((long)(brk - (char*)(p)) < (long)MINSIZE) brk += alignment;
 3456 
 3457     newp = (mchunkptr)brk;
 3458     leadsize = brk - (char*)(p);
 3459     newsize = chunksize(p) - leadsize;
 3460 
 3461 #if HAVE_MMAP
 3462     if(chunk_is_mmapped(p))
 3463     {
 3464       newp->prev_size = p->prev_size + leadsize;
 3465       set_head(newp, newsize|IS_MMAPPED);
 3466       return newp;
 3467     }
 3468 #endif
 3469 
 3470     /* give back leader, use the rest */
 3471 
 3472     set_head(newp, newsize | PREV_INUSE);
 3473     set_inuse_bit_at_offset(newp, newsize);
 3474     set_head_size(p, leadsize);
 3475     chunk_free(ar_ptr, p);
 3476     p = newp;
 3477 
 3478     assert (newsize>=nb && (((unsigned long)(chunk2mem(p))) % alignment) == 0);
 3479   }
 3480 
 3481   /* Also give back spare room at the end */
 3482 
 3483   remainder_size = chunksize(p) - nb;
 3484 
 3485   if (remainder_size >= (long)MINSIZE)
 3486   {
 3487     remainder = chunk_at_offset(p, nb);
 3488     set_head(remainder, remainder_size | PREV_INUSE);
 3489     set_head_size(p, nb);
 3490     chunk_free(ar_ptr, remainder);
 3491   }
 3492 
 3493   check_inuse_chunk(ar_ptr, p);
 3494   return p;
 3495 }
 3496 
 3497 
 3498 
 3499 
 3500 /*
 3501     valloc just invokes memalign with alignment argument equal
 3502     to the page size of the system (or as near to this as can
 3503     be figured out from all the includes/defines above.)
 3504 */
 3505 
 3506 #if __STD_C
 3507 Void_t* vALLOc(size_t bytes)
 3508 #else
 3509 Void_t* vALLOc(bytes) size_t bytes;
 3510 #endif
 3511 {
 3512   return mEMALIGn (malloc_getpagesize, bytes);
 3513 }
 3514 
 3515 /*
 3516   pvalloc behaves like valloc, but first rounds the request up to the
 3517   nearest multiple of the page size (see the worked example below)
 3518 */
 3519 
 3520 
 3521 #if __STD_C
 3522 Void_t* pvALLOc(size_t bytes)
 3523 #else
 3524 Void_t* pvALLOc(bytes) size_t bytes;
 3525 #endif
 3526 {
 3527   size_t pagesize = malloc_getpagesize;
 3528   return mEMALIGn (pagesize, (bytes + pagesize - 1) & ~(pagesize - 1));
 3529 }
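/* [Editorial note -- not part of the original source.]  Worked example of
   the rounding above, assuming a 4096-byte page (system-dependent):
   pvalloc(5000) computes (5000 + 4095) & ~4095 = 9095 & ~4095 = 8192, so
   the request becomes two full pages, page-aligned.  valloc(5000) would
   instead return a page-aligned block for just the 5000 bytes requested. */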
 3530 
 3531 /*
 3532 
 3533   calloc calls chunk_alloc, then zeroes out the allocated chunk.
 3534 
 3535 */
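/* [Editorial illustration -- not part of the original source.]  Minimal
   usage sketch via the public calloc/free names.  Note that the product
   n * elem_size below is not checked for overflow, so callers must make
   sure it fits in a size_t. */
#if 0
#include <stdlib.h>

static void calloc_example(void)
{
  int *v = calloc(100, sizeof(int));   /* returned zero-filled; the explicit
                                          clearing is skipped when the chunk
                                          is freshly mmap()ed or, with
                                          MORECORE_CLEARS, freshly sbrk()ed */
  if (v != NULL) {
    /* v[0] .. v[99] are all 0 */
    free(v);
  }
}
#endif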
 3536 
 3537 #if __STD_C
 3538 Void_t* cALLOc(size_t n, size_t elem_size)
 3539 #else
 3540 Void_t* cALLOc(n, elem_size) size_t n; size_t elem_size;
 3541 #endif
 3542 {
 3543   arena *ar_ptr;
 3544   mchunkptr p, oldtop;
 3545   INTERNAL_SIZE_T sz, csz, oldtopsize;
 3546   Void_t* mem;
 3547 
 3548 #if defined _LIBC || defined MALLOC_HOOKS
 3549   if (__malloc_hook != NULL) {
 3550     sz = n * elem_size;
 3551     mem = (*__malloc_hook)(sz);
 3552     if(mem == 0)
 3553       return 0;
 3554 #ifdef HAVE_MEMCPY
 3555     return memset(mem, 0, sz);
 3556 #else
 3557     while(sz > 0) ((char*)mem)[--sz] = 0; /* rather inefficient */
 3558     return mem;
 3559 #endif
 3560   }
 3561 #endif
 3562 
 3563   sz = request2size(n * elem_size);
 3564   arena_get(ar_ptr, sz);
 3565   if(!ar_ptr)
 3566     return 0;
 3567 
 3568   /* check if malloc_extend_top got called; fresh sbrk'ed memory is already clear */
 3569 #if MORECORE_CLEARS
 3570   oldtop = top(ar_ptr);
 3571   oldtopsize = chunksize(top(ar_ptr));
 3572 #endif
 3573   p = chunk_alloc (ar_ptr, sz);
 3574 
 3575   /* Only clearing follows, so we can unlock early. */
 3576   (void)mutex_unlock(&ar_ptr->mutex);
 3577 
 3578   if (p == 0) {
 3579     /* Maybe the failure is due to running out of mmapped areas. */
 3580     if(ar_ptr != &main_arena) {
 3581       (void)mutex_lock(&main_arena.mutex);
 3582       p = chunk_alloc(&main_arena, sz);
 3583       (void)mutex_unlock(&main_arena.mutex);
 3584     }
 3585     if (p == 0) return 0;
 3586   }
 3587   mem = chunk2mem(p);
 3588 
 3589   /* Two optional cases in which clearing not necessary */
 3590 
 3591 #if HAVE_MMAP
 3592   if (chunk_is_mmapped(p)) return mem;
 3593 #endif
 3594 
 3595   csz = chunksize(p);
 3596 
 3597 #if MORECORE_CLEARS
 3598   if (p == oldtop && csz > oldtopsize) {
 3599     /* clear only the bytes from non-freshly-sbrked memory */
 3600     csz = oldtopsize;
 3601   }
 3602 #endif
 3603 
 3604   MALLOC_ZERO(mem, csz - SIZE_SZ);
 3605   return mem;
 3606 }
 3607 
 3608 /*
 3609 
 3610   cfree just calls free. It is needed/defined on some systems
 3611   that pair it with calloc, presumably for odd historical reasons.
 3612 
 3613 */
 3614 
 3615 #if !defined(_LIBC)
 3616 #if __STD_C
 3617 void cfree(Void_t *mem)
 3618 #else
 3619 void cfree(mem) Void_t *mem;
 3620 #endif
 3621 {
 3622   free(mem);
 3623 }
 3624 #endif
 3625 
 3626 
 3627 
 3628 /*
 3629 
 3630     Malloc_trim gives memory back to the system (via negative
 3631     arguments to sbrk) if there is unused memory at the `high' end of
 3632     the malloc pool. You can call this after freeing large blocks of
 3633     memory to potentially reduce the system-level memory requirements
 3634     of a program. However, it cannot guarantee to reduce memory. Under
 3635     some allocation patterns, some large free blocks of memory will be
 3636     locked between two used chunks, so they cannot be given back to
 3637     the system.
 3638 
 3639     The `pad' argument to malloc_trim represents the amount of free
 3640     trailing space to leave untrimmed. If this argument is zero,
 3641     only the minimum amount of memory to maintain internal data
 3642     structures will be left (one page or less). Non-zero arguments
 3643     can be supplied to maintain enough trailing space to service
 3644     future expected allocations without having to re-obtain memory
 3645     from the system.
 3646 
 3647     Malloc_trim returns 1 if it actually released any memory, else 0.
 3648 
 3649 */
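/* [Editorial illustration -- not part of the original source.]  A minimal
   sketch; the malloc_trim prototype shown is an assumption -- use the one
   from your system's malloc.h if provided. */
#if 0
#include <stdlib.h>

extern int malloc_trim(size_t pad);

static void trim_example(void)
{
  void *big = malloc(1024 * 1024);
  /* ... use it ... */
  free(big);

  /* Give trailing free memory back to the system, but keep 64 kB of
     headroom for upcoming allocations.  Returns 1 only if something
     was actually released. */
  if (malloc_trim(64 * 1024)) {
    /* memory was returned via a negative sbrk() */
  }
}
#endif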
 3650 
 3651 #if __STD_C
 3652 int mALLOC_TRIm(size_t pad)
 3653 #else
 3654 int mALLOC_TRIm(pad) size_t pad;
 3655 #endif
 3656 {
 3657   int res;
 3658 
 3659   (void)mutex_lock(&main_arena.mutex);
 3660   res = main_trim(pad);
 3661   (void)mutex_unlock(&main_arena.mutex);
 3662   return res;
 3663 }
 3664 
 3665 /* Trim the main arena. */
 3666 
 3667 static int
 3668 internal_function
 3669 #if __STD_C
 3670 main_trim(size_t pad)
 3671 #else
 3672 main_trim(pad) size_t pad;
 3673 #endif
 3674 {
 3675   mchunkptr top_chunk;   /* The current top chunk */
 3676   long  top_size;        /* Amount of top-most memory */
 3677   long  extra;           /* Amount to release */
 3678   char* current_brk;     /* address returned by pre-check sbrk call */
 3679   char* new_brk;         /* address returned by negative sbrk call */
 3680 
 3681   unsigned long pagesz = malloc_getpagesize;
 3682 
 3683   top_chunk = top(&main_arena);
 3684   top_size = chunksize(top_chunk);
 3685   extra = ((top_size - pad - MINSIZE + (pagesz-1)) / pagesz - 1) * pagesz;
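  /* [Editorial note -- not part of the original source.]  Worked example,
     assuming pagesz == 4096 and MINSIZE == 16 (both build-dependent): with
     top_size == 20000 and pad == 0,
        extra = ((20000 - 0 - 16 + 4095) / 4096 - 1) * 4096
              = (24079 / 4096 - 1) * 4096 = (5 - 1) * 4096 = 16384,
     i.e. four whole pages are released and 20000 - 16384 = 3616 bytes stay
     in top -- always more than pad + MINSIZE, and at most one page more. */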
 3686 
 3687   if (extra < (long)pagesz) /* Not enough memory to release */
 3688     return 0;
 3689 
 3690   /* Test to make sure no one else called sbrk */
 3691   current_brk = (char*)(MORECORE (0));
 3692   if (current_brk != (char*)(top_chunk) + top_size)
 3693     return 0;     /* Apparently we don't own memory; must fail */
 3694 
 3695   new_brk = (char*)(MORECORE (-extra));
 3696 
 3697 #if defined _LIBC || defined MALLOC_HOOKS
 3698   /* Call the `morecore' hook if necessary.  */
 3699   if (__after_morecore_hook)
 3700     (*__after_morecore_hook) ();
 3701 #endif
 3702 
 3703   if (new_brk == (char*)(MORECORE_FAILURE)) { /* sbrk failed? */
 3704     /* Try to figure out what we have */
 3705     current_brk = (char*)(MORECORE (0));
 3706     top_size = current_brk - (char*)top_chunk;
 3707     if (top_size >= (long)MINSIZE) /* if not, we are very very dead! */
 3708     {
 3709       sbrked_mem = current_brk - sbrk_base;
 3710       set_head(top_chunk, top_size | PREV_INUSE);
 3711     }
 3712     check_chunk(&main_arena, top_chunk);
 3713     return 0;
 3714   }
 3715   sbrked_mem -= extra;
 3716 
 3717   /* Success. Adjust top accordingly. */
 3718   set_head(top_chunk, (top_size - extra) | PREV_INUSE);
 3719   check_chunk(&main_arena, top_chunk);
 3720   return 1;
 3721 }
 3722 
 3723 #ifndef NO_THREADS
 3724 
 3725 static int
 3726 internal_function
 3727 #if __STD_C
 3728 heap_trim(heap_info *heap, size_t pad)
 3729 #else
 3730 heap_trim(heap, pad) heap_info *heap; size_t pad;
 3731 #endif
 3732 {
 3733   unsigned long pagesz = malloc_getpagesize;
 3734   arena *ar_ptr = heap->ar_ptr;
 3735   mchunkptr top_chunk = top(ar_ptr), p, bck, fwd;
 3736   heap_info *prev_heap;
 3737   long new_size, top_size, extra;
 3738 
 3739   /* Can this heap go away completely ? */
 3740   while(top_chunk == chunk_at_offset(heap, sizeof(*heap))) {
 3741     prev_heap = heap->prev;
 3742     p = chunk_at_offset(prev_heap, prev_heap->size - (MINSIZE-2*SIZE_SZ));
 3743     assert(p->size == (0|PREV_INUSE)); /* must be fencepost */
 3744     p = prev_chunk(p);
 3745     new_size = chunksize(p) + (MINSIZE-2*SIZE_SZ);
 3746     assert(new_size>0 && new_size<(long)(2*MINSIZE));
 3747     if(!prev_inuse(p))
 3748       new_size += p->prev_size;
 3749     assert(new_size>0 && new_size<HEAP_MAX_SIZE);
 3750     if(new_size + (HEAP_MAX_SIZE - prev_heap->size) < pad + MINSIZE + pagesz)
 3751       break;
 3752     ar_ptr->size -= heap->size;
 3753     delete_heap(heap);
 3754     heap = prev_heap;
 3755     if(!prev_inuse(p)) { /* consolidate backward */
 3756       p = prev_chunk(p);
 3757       unlink(p, bck, fwd);
 3758     }
 3759     assert(((unsigned long)((char*)p + new_size) & (pagesz-1)) == 0);
 3760     assert( ((char*)p + new_size) == ((char*)heap + heap->size) );
 3761     top(ar_ptr) = top_chunk = p;
 3762     set_head(top_chunk, new_size | PREV_INUSE);
 3763     check_chunk(ar_ptr, top_chunk);
 3764   }
 3765   top_size = chunksize(top_chunk);
 3766   extra = ((top_size - pad - MINSIZE + (pagesz-1))/pagesz - 1) * pagesz;
 3767   if(extra < (long)pagesz)
 3768     return 0;
 3769   /* Try to shrink. */
 3770   if(grow_heap(heap, -extra) != 0)
 3771     return 0;
 3772   ar_ptr->size -= extra;
 3773 
 3774   /* Success. Adjust top accordingly. */
 3775   set_head(top_chunk, (top_size - extra) | PREV_INUSE);
 3776   check_chunk(ar_ptr, top_chunk);
 3777   return 1;
 3778 }
 3779 
 3780 #endif
 3781 
 3782 
 3783 
 3784 /*
 3785   malloc_usable_size:
 3786 
 3787     This routine tells you how many bytes you can actually use in an
 3788     allocated chunk, which may be more than you requested (although
 3789     often not). You can use this many bytes without worrying about
 3790     overwriting other allocated objects. Not a particularly great
 3791     programming practice, but still sometimes useful.
 3792 
 3793 */
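/* [Editorial illustration -- not part of the original source.]  A minimal
   sketch; the prototype shown is an assumption -- prefer the declaration
   from your system's malloc.h. */
#if 0
#include <stdlib.h>

extern size_t malloc_usable_size(void *mem);

static void usable_size_example(void)
{
  char *p = malloc(10);
  if (p != NULL) {
    size_t n = malloc_usable_size(p);  /* >= 10: includes padding/rounding */
    p[n - 1] = 0;                      /* safe: still within this chunk */
    free(p);
  }
}
#endif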
 3794 
 3795 #if __STD_C
 3796 size_t mALLOC_USABLE_SIZe(Void_t* mem)
 3797 #else
 3798 size_t mALLOC_USABLE_SIZe(mem) Void_t* mem;
 3799 #endif
 3800 {
 3801   mchunkptr p;
 3802 
 3803   if (mem == 0)
 3804     return 0;
 3805   else
 3806   {
 3807     p = mem2chunk(mem);
 3808     if(!chunk_is_mmapped(p))
 3809     {
 3810       if (!inuse(p)) return 0;
 3811       check_inuse_chunk(arena_for_ptr(mem), p);
 3812       return chunksize(p) - SIZE_SZ;
 3813     }
 3814     return chunksize(p) - 2*SIZE_SZ;
 3815   }
 3816 }
 3817 
 3818 
 3819 
 3820 
 3821 /* Utility to update mallinfo for malloc_stats() and mallinfo() */
 3822 
 3823 static void
 3824 #if __STD_C
 3825 malloc_update_mallinfo(arena *ar_ptr, struct mallinfo *mi)
 3826 #else
 3827 malloc_update_mallinfo(ar_ptr, mi) arena *ar_ptr; struct mallinfo *mi;
 3828 #endif
 3829 {
 3830   int i, navail;
 3831   mbinptr b;
 3832   mchunkptr p;
 3833 #if MALLOC_DEBUG
 3834   mchunkptr q;
 3835 #endif
 3836   INTERNAL_SIZE_T avail;
 3837 
 3838   (void)mutex_lock(&ar_ptr->mutex);
 3839   avail = chunksize(top(ar_ptr));
 3840   navail = ((long)(avail) >= (long)MINSIZE)? 1 : 0;
 3841 
 3842   for (i = 1; i < NAV; ++i)
 3843   {
 3844     b = bin_at(ar_ptr, i);
 3845     for (p = last(b); p != b; p = p->bk)
 3846     {
 3847 #if MALLOC_DEBUG
 3848       check_free_chunk(ar_ptr, p);
 3849       for (q = next_chunk(p);
 3850            q != top(ar_ptr) && inuse(q) && (long)chunksize(q) > 0;
 3851            q = next_chunk(q))
 3852         check_inuse_chunk(ar_ptr, q);
 3853 #endif
 3854       avail += chunksize(p);
 3855       navail++;
 3856     }
 3857   }
 3858 
 3859   mi->arena = ar_ptr->size;
 3860   mi->ordblks = navail;
 3861   mi->smblks = mi->usmblks = mi->fsmblks = 0; /* clear unused fields */
 3862   mi->uordblks = ar_ptr->size - avail;
 3863   mi->fordblks = avail;
 3864   mi->hblks = n_mmaps;
 3865   mi->hblkhd = mmapped_mem;
 3866   mi->keepcost = chunksize(top(ar_ptr));
 3867 
 3868   (void)mutex_unlock(&ar_ptr->mutex);
 3869 }
 3870 
 3871 #if !defined(NO_THREADS) && MALLOC_DEBUG > 1
 3872 
 3873 /* Print the complete contents of a single heap to stderr. */
 3874 
 3875 static void
 3876 #if __STD_C
 3877 dump_heap(heap_info *heap)
 3878 #else
 3879 dump_heap(heap) heap_info *heap;
 3880 #endif
 3881 {
 3882   char *ptr;
 3883   mchunkptr p;
 3884 
 3885   fprintf(stderr, "Heap %p, size %10lx:\n", heap, (long)heap->size);
 3886   ptr = (heap->ar_ptr != (arena*)(heap+1)) ?
 3887     (char*)(heap + 1) : (char*)(heap + 1) + sizeof(arena);
 3888   p = (mchunkptr)(((unsigned long)ptr + MALLOC_ALIGN_MASK) &
 3889                   ~MALLOC_ALIGN_MASK);
 3890   for(;;) {
 3891     fprintf(stderr, "chunk %p size %10lx", p, (long)p->size);
 3892     if(p == top(heap->ar_ptr)) {
 3893       fprintf(stderr, " (top)\n");
 3894       break;
 3895     } else if(p->size == (0|PREV_INUSE)) {
 3896       fprintf(stderr, " (fence)\n");
 3897       break;
 3898     }
 3899     fprintf(stderr, "\n");
 3900     p = next_chunk(p);
 3901   }
 3902 }
 3903 
 3904 #endif
 3905 
 3906 
 3907 
 3908 /*
 3909 
 3910   malloc_stats:
 3911 
 3912     For all arenas separately and in total, prints on stderr the
 3913     amount of space obtained from the system, and the current number
 3914     of bytes allocated via malloc (or realloc, etc) but not yet
 3915     freed. (Note that this is the number of bytes allocated, not the
 3916     number requested. It will be larger than the number requested
 3917     because of alignment and bookkeeping overhead.)  When not compiled
 3918     for multiple threads, the maximum amount of allocated memory
 3919     (which may be more than current if malloc_trim and/or munmap got
 3920     called) is also reported.  When using mmap(), prints the maximum
 3921     number of simultaneous mmap regions used, too.
 3922 
 3923 */
 3924 
 3925 void mALLOC_STATs()
 3926 {
 3927   int i;
 3928   arena *ar_ptr;
 3929   struct mallinfo mi;
 3930   unsigned int in_use_b = mmapped_mem, system_b = in_use_b;
 3931 #if THREAD_STATS
 3932   long stat_lock_direct = 0, stat_lock_loop = 0, stat_lock_wait = 0;
 3933 #endif
 3934 
 3935   for(i=0, ar_ptr = &main_arena;; i++) {
 3936     malloc_update_mallinfo(ar_ptr, &mi);
 3937     fprintf(stderr, "Arena %d:\n", i);
 3938     fprintf(stderr, "system bytes     = %10u\n", (unsigned int)mi.arena);
 3939     fprintf(stderr, "in use bytes     = %10u\n", (unsigned int)mi.uordblks);
 3940     system_b += mi.arena;
 3941     in_use_b += mi.uordblks;
 3942 #if THREAD_STATS
 3943     stat_lock_direct += ar_ptr->stat_lock_direct;
 3944     stat_lock_loop += ar_ptr->stat_lock_loop;
 3945     stat_lock_wait += ar_ptr->stat_lock_wait;
 3946 #endif
 3947 #if !defined(NO_THREADS) && MALLOC_DEBUG > 1
 3948     if(ar_ptr != &main_arena) {
 3949       heap_info *heap;
 3950       (void)mutex_lock(&ar_ptr->mutex);
 3951       heap = heap_for_ptr(top(ar_ptr));
 3952       while(heap) { dump_heap(heap); heap = heap->prev; }
 3953       (void)mutex_unlock(&ar_ptr->mutex);
 3954     }
 3955 #endif
 3956     ar_ptr = ar_ptr->next;
 3957     if(ar_ptr == &main_arena) break;
 3958   }
 3959 #if HAVE_MMAP
 3960   fprintf(stderr, "Total (incl. mmap):\n");
 3961 #else
 3962   fprintf(stderr, "Total:\n");
 3963 #endif
 3964   fprintf(stderr, "system bytes     = %10u\n", system_b);
 3965   fprintf(stderr, "in use bytes     = %10u\n", in_use_b);
 3966 #ifdef NO_THREADS
 3967   fprintf(stderr, "max system bytes = %10u\n", (unsigned int)max_total_mem);
 3968 #endif
 3969 #if HAVE_MMAP
 3970   fprintf(stderr, "max mmap regions = %10u\n", (unsigned int)max_n_mmaps);
 3971   fprintf(stderr, "max mmap bytes   = %10lu\n", max_mmapped_mem);
 3972 #endif
 3973 #if THREAD_STATS
 3974   fprintf(stderr, "heaps created    = %10d\n",  stat_n_heaps);
 3975   fprintf(stderr, "locked directly  = %10ld\n", stat_lock_direct);
 3976   fprintf(stderr, "locked in loop   = %10ld\n", stat_lock_loop);
 3977   fprintf(stderr, "locked waiting   = %10ld\n", stat_lock_wait);
 3978   fprintf(stderr, "locked total     = %10ld\n",
 3979           stat_lock_direct + stat_lock_loop + stat_lock_wait);
 3980 #endif
 3981 }
 3982 
 3983 /*
 3984   mallinfo returns a copy of updated current mallinfo.
 3985   The information reported is for the arena last used by the thread.
 3986 */
 3987 
 3988 struct mallinfo mALLINFo()
 3989 {
 3990   struct mallinfo mi;
 3991   Void_t *vptr = NULL;
 3992 
 3993 #ifndef NO_THREADS
 3994   tsd_getspecific(arena_key, vptr);
 3995 #endif
 3996   malloc_update_mallinfo((vptr ? (arena*)vptr : &main_arena), &mi);
 3997   return mi;
 3998 }
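/* [Editorial illustration -- not part of the original source.]  A minimal
   sketch; assumes <malloc.h> provides struct mallinfo and the mallinfo()
   and malloc_stats() prototypes for this allocator. */
#if 0
#include <stdio.h>
#include <malloc.h>

static void mallinfo_example(void)
{
  struct mallinfo mi = mallinfo();  /* arena last used by this thread */

  printf("obtained from system : %d bytes\n", mi.arena);
  printf("in use               : %d bytes\n", mi.uordblks);
  printf("free in arena        : %d bytes\n", mi.fordblks);
  printf("mmapped regions      : %d (%d bytes)\n", mi.hblks, mi.hblkhd);

  malloc_stats();                   /* similar numbers, for all arenas,
                                       printed to stderr */
}
#endif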
 3999 
 4000 
 4001 
 4002 
 4003 /*
 4004   mallopt:
 4005 
 4006     mallopt is the general SVID/XPG interface to tunable parameters.
 4007     The format is to provide a (parameter-number, parameter-value) pair.
 4008     mallopt then sets the corresponding parameter to the argument
 4009     value if it can (i.e., so long as the value is meaningful),
 4010     and returns 1 if successful else 0.
 4011 
 4012     See descriptions of tunable parameters above.
 4013 
 4014 */
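/* [Editorial illustration -- not part of the original source.]  A minimal
   sketch; assumes <malloc.h> defines the M_* parameter numbers handled in
   the switch below. */
#if 0
#include <malloc.h>

static void mallopt_example(void)
{
  mallopt(M_TRIM_THRESHOLD, 1024 * 1024); /* keep up to 1 MB of trailing
                                             free space before trimming */
  mallopt(M_TOP_PAD, 128 * 1024);         /* pad each sbrk request with
                                             128 kB of extra headroom */
  mallopt(M_MMAP_THRESHOLD, 256 * 1024);  /* serve requests >= 256 kB
                                             with mmap() */
  mallopt(M_MMAP_MAX, 64);                /* at most 64 mmapped regions */
  mallopt(M_CHECK_ACTION, 1);             /* on detected corruption, print
                                             a message (2 would abort) */
}
#endif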
 4015 
 4016 #if __STD_C
 4017 int mALLOPt(int param_number, int value)
 4018 #else
 4019 int mALLOPt(param_number, value) int param_number; int value;
 4020 #endif
 4021 {
 4022   switch(param_number)
 4023   {
 4024     case M_TRIM_THRESHOLD:
 4025       trim_threshold = value; return 1;
 4026     case M_TOP_PAD:
 4027       top_pad = value; return 1;
 4028     case M_MMAP_THRESHOLD:
 4029 #ifndef NO_THREADS
 4030       /* Forbid setting the threshold too high. */
 4031       if((unsigned long)value > HEAP_MAX_SIZE/2) return 0;
 4032 #endif
 4033       mmap_threshold = value; return 1;
 4034     case M_MMAP_MAX:
 4035 #if HAVE_MMAP
 4036       n_mmaps_max = value; return 1;
 4037 #else
 4038       if (value != 0) return 0; else  n_mmaps_max = value; return 1;
 4039 #endif
 4040     case M_CHECK_ACTION:
 4041       check_action = value; return 1;
 4042 
 4043     default:
 4044       return 0;
 4045   }
 4046 }
 4047 
 4048 
 4049 
 4050 /* Get/set state: malloc_get_state() records the current state of all
 4051    malloc variables (_except_ for the actual heap contents and `hook'
 4052    function pointers) in a system-dependent, opaque data structure.
 4053    This data structure is dynamically allocated and can be free()d
 4054    after use.  malloc_set_state() restores the state of all malloc
 4055    variables to the previously obtained state.  This is especially
 4056    useful when using this malloc as part of a shared library, and when
 4057    the heap contents are saved/restored via some other method.  The
 4058    primary example for this is GNU Emacs with its `dumping' procedure.
 4059    `Hook' function pointers are never saved or restored by these
 4060    functions. */
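/* [Editorial illustration -- not part of the original source.]  A minimal
   sketch of the Emacs-style dump/restore use described above; the
   prototypes mirror the definitions below. */
#if 0
#include <stdlib.h>

extern void *malloc_get_state(void);
extern int   malloc_set_state(void *msptr);

static void *saved_state;

static void before_dump(void)       /* heap contents saved elsewhere */
{
  saved_state = malloc_get_state(); /* snapshot of all malloc variables */
}

static void after_restore(void)     /* heap contents already restored */
{
  if (saved_state != NULL && malloc_set_state(saved_state) == 0)
    free(saved_state);              /* the snapshot itself was malloc()ed */
  /* Hook function pointers are never saved or restored. */
}
#endif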
 4061 
 4062 #define MALLOC_STATE_MAGIC   0x444c4541l
 4063 #define MALLOC_STATE_VERSION (0*0x100l + 1l) /* major*0x100 + minor */
 4064 
 4065 struct malloc_state {
 4066   long          magic;
 4067   long          version;
 4068   mbinptr       av[NAV * 2 + 2];
 4069   char*         sbrk_base;
 4070   int           sbrked_mem_bytes;
 4071   unsigned long trim_threshold;
 4072   unsigned long top_pad;
 4073   unsigned int  n_mmaps_max;
 4074   unsigned long mmap_threshold;
 4075   int           check_action;
 4076   unsigned long max_sbrked_mem;
 4077   unsigned long max_total_mem;
 4078   unsigned int  n_mmaps;
 4079   unsigned int  max_n_mmaps;
 4080   unsigned long mmapped_mem;
 4081   unsigned long max_mmapped_mem;
 4082   int           using_malloc_checking;
 4083 };
 4084 
 4085 Void_t*
 4086 mALLOC_GET_STATe()
 4087 {
 4088   struct malloc_state* ms;
 4089   int i;
 4090   mbinptr b;
 4091 
 4092   ms = (struct malloc_state*)mALLOc(sizeof(*ms));
 4093   if (!ms)
 4094     return 0;
 4095   (void)mutex_lock(&main_arena.mutex);
 4096   ms->magic = MALLOC_STATE_MAGIC;
 4097   ms->version = MALLOC_STATE_VERSION;
 4098   ms->av[0] = main_arena.av[0];
 4099   ms->av[1] = main_arena.av[1];
 4100   for(i=0; i<NAV; i++) {
 4101     b = bin_at(&main_arena, i);
 4102     if(first(b) == b)
 4103       ms->av[2*i+2] = ms->av[2*i+3] = 0; /* empty bin (or initial top) */
 4104     else {
 4105       ms->av[2*i+2] = first(b);
 4106       ms->av[2*i+3] = last(b);
 4107     }
 4108   }
 4109   ms->sbrk_base = sbrk_base;
 4110   ms->sbrked_mem_bytes = sbrked_mem;
 4111   ms->trim_threshold = trim_threshold;
 4112   ms->top_pad = top_pad;
 4113   ms->n_mmaps_max = n_mmaps_max;
 4114   ms->mmap_threshold = mmap_threshold;
 4115   ms->check_action = check_action;
 4116   ms->max_sbrked_mem = max_sbrked_mem;
 4117 #ifdef NO_THREADS
 4118   ms->max_total_mem = max_total_mem;
 4119 #else
 4120   ms->max_total_mem = 0;
 4121 #endif
 4122   ms->n_mmaps = n_mmaps;
 4123   ms->max_n_mmaps = max_n_mmaps;
 4124   ms->mmapped_mem = mmapped_mem;
 4125   ms->max_mmapped_mem = max_mmapped_mem;
 4126 #if defined _LIBC || defined MALLOC_HOOKS
 4127   ms->using_malloc_checking = using_malloc_checking;
 4128 #else
 4129   ms->using_malloc_checking = 0;
 4130 #endif
 4131   (void)mutex_unlock(&main_arena.mutex);
 4132   return (Void_t*)ms;
 4133 }
 4134 
 4135 int
 4136 #if __STD_C
 4137 mALLOC_SET_STATe(Void_t* msptr)
 4138 #else
 4139 mALLOC_SET_STATe(msptr) Void_t* msptr;
 4140 #endif
 4141 {
 4142   struct malloc_state* ms = (struct malloc_state*)msptr;
 4143   int i;
 4144   mbinptr b;
 4145 
 4146 #if defined _LIBC || defined MALLOC_HOOKS
 4147   disallow_malloc_check = 1;
 4148 #endif
 4149   ptmalloc_init();
 4150   if(ms->magic != MALLOC_STATE_MAGIC) return -1;
 4151   /* Must fail if the major version is too high. */
 4152   if((ms->version & ~0xffl) > (MALLOC_STATE_VERSION & ~0xffl)) return -2;
 4153   (void)mutex_lock(&main_arena.mutex);
 4154   main_arena.av[0] = ms->av[0];
 4155   main_arena.av[1] = ms->av[1];
 4156   for(i=0; i<NAV; i++) {
 4157     b = bin_at(&main_arena, i);
 4158     if(ms->av[2*i+2] == 0)
 4159       first(b) = last(b) = b;
 4160     else {
 4161       first(b) = ms->av[2*i+2];
 4162       last(b) = ms->av[2*i+3];
 4163       if(i > 0) {
 4164         /* Make sure the links to the `av'-bins in the heap are correct. */
 4165         first(b)->bk = b;
 4166         last(b)->fd = b;
 4167       }
 4168     }
 4169   }
 4170   sbrk_base = ms->sbrk_base;
 4171   sbrked_mem = ms->sbrked_mem_bytes;
 4172   trim_threshold = ms->trim_threshold;
 4173   top_pad = ms->top_pad;
 4174   n_mmaps_max = ms->n_mmaps_max;
 4175   mmap_threshold = ms->mmap_threshold;
 4176   check_action = ms->check_action;
 4177   max_sbrked_mem = ms->max_sbrked_mem;
 4178 #ifdef NO_THREADS
 4179   max_total_mem = ms->max_total_mem;
 4180 #endif
 4181   n_mmaps = ms->n_mmaps;
 4182   max_n_mmaps = ms->max_n_mmaps;
 4183   mmapped_mem = ms->mmapped_mem;
 4184   max_mmapped_mem = ms->max_mmapped_mem;
 4185   /* add version-dependent code here */
 4186   if (ms->version >= 1) {
 4187 #if defined _LIBC || defined MALLOC_HOOKS
 4188     /* Check whether it is safe to enable malloc checking.  */
 4189     if (ms->using_malloc_checking && !using_malloc_checking &&
 4190         !disallow_malloc_check)
 4191       __malloc_check_init ();
 4192 #endif
 4193   }
 4194 
 4195   (void)mutex_unlock(&main_arena.mutex);
 4196   return 0;
 4197 }
 4198 
 4199 
 4200 
 4201 #if defined _LIBC || defined MALLOC_HOOKS
 4202 
 4203 /* A simple, standard set of debugging hooks.  Overhead is `only' one
 4204    byte per chunk; still this will catch most cases of double frees or
 4205    overruns.  The goal here is to avoid obscure crashes due to invalid
 4206    usage, unlike in the MALLOC_DEBUG code. */
 4207 
 4208 #define MAGICBYTE(p) ( ( ((size_t)p >> 3) ^ ((size_t)p >> 11)) & 0xFF )
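/* [Editorial illustration -- not part of the original source.]  What the
   checking wrappers below typically catch once they are installed as the
   malloc hooks (the installation itself happens elsewhere).  On a detected
   error the behaviour follows check_action: 1 prints a message, 2 aborts. */
#if 0
#include <stdlib.h>
#include <string.h>

static void overrun_is_caught(void)
{
  char *p = malloc(8);
  if (p == NULL) return;
  memset(p, 'x', 9);  /* one-byte overrun clobbers the MAGICBYTE trailer */
  free(p);            /* rejected: "free(): invalid pointer ..." */
}

static void double_free_is_caught(void)
{
  char *q = malloc(8);
  if (q == NULL) return;
  free(q);            /* ok; the trailer byte is invalidated here */
  free(q);            /* second call is rejected */
}
#endif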
 4209 
 4210 /* Instrument a chunk with overrun detector byte(s) and convert it
 4211    into a user pointer with requested size sz. */
 4212 
 4213 static Void_t*
 4214 internal_function
 4215 #if __STD_C
 4216 chunk2mem_check(mchunkptr p, size_t sz)
 4217 #else
 4218 chunk2mem_check(p, sz) mchunkptr p; size_t sz;
 4219 #endif
 4220 {
 4221   unsigned char* m_ptr = (unsigned char*)chunk2mem(p);
 4222   size_t i;
 4223 
 4224   for(i = chunksize(p) - (chunk_is_mmapped(p) ? 2*SIZE_SZ+1 : SIZE_SZ+1);
 4225       i > sz;
 4226       i -= 0xFF) {
 4227     if(i-sz < 0x100) {
 4228       m_ptr[i] = (unsigned char)(i-sz);
 4229       break;
 4230     }
 4231     m_ptr[i] = 0xFF;
 4232   }
 4233   m_ptr[sz] = MAGICBYTE(p);
 4234   return (Void_t*)m_ptr;
 4235 }
 4236 
 4237 /* Convert a pointer to be free()d or realloc()ed to a valid chunk
 4238    pointer.  If the provided pointer is not valid, return NULL. */
 4239 
 4240 static mchunkptr
 4241 internal_function
 4242 #if __STD_C
 4243 mem2chunk_check(Void_t* mem)
 4244 #else
 4245 mem2chunk_check(mem) Void_t* mem;
 4246 #endif
 4247 {
 4248   mchunkptr p;
 4249   INTERNAL_SIZE_T sz, c;
 4250   unsigned char magic;
 4251 
 4252   p = mem2chunk(mem);
 4253   if(!aligned_OK(p)) return NULL;
 4254   if( (char*)p>=sbrk_base && (char*)p<(sbrk_base+sbrked_mem) ) {
 4255     /* Must be a chunk in conventional heap memory. */
 4256     if(chunk_is_mmapped(p) ||
 4257        ( (sz = chunksize(p)), ((char*)p + sz)>=(sbrk_base+sbrked_mem) ) ||
 4258        sz<MINSIZE || sz&MALLOC_ALIGN_MASK || !inuse(p) ||
 4259        ( !prev_inuse(p) && (p->prev_size&MALLOC_ALIGN_MASK ||
 4260                             (long)prev_chunk(p)<(long)sbrk_base ||
 4261                             next_chunk(prev_chunk(p))!=p) ))
 4262       return NULL;
 4263     magic = MAGICBYTE(p);
 4264     for(sz += SIZE_SZ-1; (c = ((unsigned char*)p)[sz]) != magic; sz -= c) {
 4265       if(c<=0 || sz<(c+2*SIZE_SZ)) return NULL;
 4266     }
 4267     ((unsigned char*)p)[sz] ^= 0xFF;
 4268   } else {
 4269     unsigned long offset, page_mask = malloc_getpagesize-1;
 4270 
 4271     /* mmap()ed chunks have MALLOC_ALIGNMENT or higher power-of-two
 4272        alignment relative to the beginning of a page.  Check this
 4273        first. */
 4274     offset = (unsigned long)mem & page_mask;
 4275     if((offset!=MALLOC_ALIGNMENT && offset!=0 && offset!=0x10 &&
 4276         offset!=0x20 && offset!=0x40 && offset!=0x80 && offset!=0x100 &&
 4277         offset!=0x200 && offset!=0x400 && offset!=0x800 && offset!=0x1000 &&
 4278         offset<0x2000) ||
 4279        !chunk_is_mmapped(p) || (p->size & PREV_INUSE) ||
 4280        ( (((unsigned long)p - p->prev_size) & page_mask) != 0 ) ||
 4281        ( (sz = chunksize(p)), ((p->prev_size + sz) & page_mask) != 0 ) )
 4282       return NULL;
 4283     magic = MAGICBYTE(p);
 4284     for(sz -= 1; (c = ((unsigned char*)p)[sz]) != magic; sz -= c) {
 4285       if(c<=0 || sz<(c+2*SIZE_SZ)) return NULL;
 4286     }
 4287     ((unsigned char*)p)[sz] ^= 0xFF;
 4288   }
 4289   return p;
 4290 }
 4291 
 4292 /* Check for corruption of the top chunk, and try to recover if
 4293    necessary. */
 4294 
 4295 static int
 4296 internal_function
 4297 top_check __MALLOC_P((void))
 4298 {
 4299   mchunkptr t = top(&main_arena);
 4300   char* brk, * new_brk;
 4301   INTERNAL_SIZE_T front_misalign, sbrk_size;
 4302   unsigned long pagesz = malloc_getpagesize;
 4303 
 4304   if((char*)t + chunksize(t) == sbrk_base + sbrked_mem ||
 4305      t == initial_top(&main_arena)) return 0;
 4306 
 4307   switch(check_action) {
 4308   case 1:
 4309     fprintf(stderr, "malloc: top chunk is corrupt\n");
 4310     break;
 4311   case 2:
 4312     abort();
 4313   }
 4314   /* Try to set up a new top chunk. */
 4315   brk = MORECORE(0);
 4316   front_misalign = (unsigned long)chunk2mem(brk) & MALLOC_ALIGN_MASK;
 4317   if (front_misalign > 0)
 4318     front_misalign = MALLOC_ALIGNMENT - front_misalign;
 4319   sbrk_size = front_misalign + top_pad + MINSIZE;
 4320   sbrk_size += pagesz - ((unsigned long)(brk + sbrk_size) & (pagesz - 1));
 4321   new_brk = (char*)(MORECORE (sbrk_size));
 4322   if (new_brk == (char*)(MORECORE_FAILURE)) return -1;
 4323   sbrked_mem = (new_brk - sbrk_base) + sbrk_size;
 4324 
 4325   top(&main_arena) = (mchunkptr)(brk + front_misalign);
 4326   set_head(top(&main_arena), (sbrk_size - front_misalign) | PREV_INUSE);
 4327 
 4328   return 0;
 4329 }
 4330 
 4331 static Void_t*
 4332 #if __STD_C
 4333 malloc_check(size_t sz)
 4334 #else
 4335 malloc_check(sz) size_t sz;
 4336 #endif
 4337 {
 4338   mchunkptr victim;
 4339   INTERNAL_SIZE_T nb = request2size(sz + 1);
 4340 
 4341   (void)mutex_lock(&main_arena.mutex);
 4342   victim = (top_check() >= 0) ? chunk_alloc(&main_arena, nb) : NULL;
 4343   (void)mutex_unlock(&main_arena.mutex);
 4344   if(!victim) return NULL;
 4345   return chunk2mem_check(victim, sz);
 4346 }
 4347 
 4348 static void
 4349 #if __STD_C
 4350 free_check(Void_t* mem)
 4351 #else
 4352 free_check(mem) Void_t* mem;
 4353 #endif
 4354 {
 4355   mchunkptr p;
 4356 
 4357   if(!mem) return;
 4358   (void)mutex_lock(&main_arena.mutex);
 4359   p = mem2chunk_check(mem);
 4360   if(!p) {
 4361     (void)mutex_unlock(&main_arena.mutex);
 4362     switch(check_action) {
 4363     case 1:
 4364       fprintf(stderr, "free(): invalid pointer %p!\n", mem);
 4365       break;
 4366     case 2:
 4367       abort();
 4368     }
 4369     return;
 4370   }
 4371 #if HAVE_MMAP
 4372   if (chunk_is_mmapped(p)) {
 4373     (void)mutex_unlock(&main_arena.mutex);
 4374     munmap_chunk(p);
 4375     return;
 4376   }
 4377 #endif
 4378 #if 0 /* Erase freed memory. */
 4379   memset(mem, 0, chunksize(p) - (SIZE_SZ+1));
 4380 #endif
 4381   chunk_free(&main_arena, p);
 4382   (void)mutex_unlock(&main_arena.mutex);
 4383 }
 4384 
 4385 static Void_t*
 4386 #if __STD_C
 4387 realloc_check(Void_t* oldmem, size_t bytes)
 4388 #else
 4389 realloc_check(oldmem, bytes) Void_t* oldmem; size_t bytes;
 4390 #endif
 4391 {
 4392   mchunkptr oldp, newp;
 4393   INTERNAL_SIZE_T nb, oldsize;
 4394 
 4395   if (oldmem == 0) return malloc_check(bytes);
 4396   (void)mutex_lock(&main_arena.mutex);
 4397   oldp = mem2chunk_check(oldmem);
 4398   if(!oldp) {
 4399     (void)mutex_unlock(&main_arena.mutex);
 4400     switch(check_action) {
 4401     case 1:
 4402       fprintf(stderr, "realloc(): invalid pointer %p!\n", oldmem);
 4403       break;
 4404     case 2:
 4405       abort();
 4406     }
 4407     return malloc_check(bytes);
 4408   }
 4409   oldsize = chunksize(oldp);
 4410 
 4411   nb = request2size(bytes+1);
 4412 
 4413 #if HAVE_MMAP
 4414   if (chunk_is_mmapped(oldp)) {
 4415 #if HAVE_MREMAP
 4416     newp = mremap_chunk(oldp, nb);
 4417     if(!newp) {
 4418 #endif
 4419       /* Note the extra SIZE_SZ overhead. */
 4420       if(oldsize - SIZE_SZ >= nb) newp = oldp; /* do nothing */
 4421       else {
 4422         /* Must alloc, copy, free. */
 4423         newp = (top_check() >= 0) ? chunk_alloc(&main_arena, nb) : NULL;
 4424         if (newp) {
 4425           MALLOC_COPY(chunk2mem(newp), oldmem, oldsize - 2*SIZE_SZ);
 4426           munmap_chunk(oldp);
 4427         }
 4428       }
 4429 #if HAVE_MREMAP
 4430     }
 4431 #endif
 4432   } else {
 4433 #endif /* HAVE_MMAP */
 4434     newp = (top_check() >= 0) ?
 4435       chunk_realloc(&main_arena, oldp, oldsize, nb) : NULL;
 4436 #if 0 /* Erase freed memory. */
 4437     nb = chunksize(newp);
 4438     if(oldp<newp || oldp>=chunk_at_offset(newp, nb)) {
 4439       memset((char*)oldmem + 2*sizeof(mbinptr), 0,
 4440              oldsize - (2*sizeof(mbinptr)+2*SIZE_SZ+1));
 4441     } else if(nb > oldsize+SIZE_SZ) {
 4442       memset((char*)chunk2mem(newp) + oldsize, 0, nb - (oldsize+SIZE_SZ));
 4443     }
 4444 #endif
 4445 #if HAVE_MMAP
 4446   }
 4447 #endif
 4448   (void)mutex_unlock(&main_arena.mutex);
 4449 
 4450   if(!newp) return NULL;
 4451   return chunk2mem_check(newp, bytes);
 4452 }
 4453 
 4454 static Void_t*
 4455 #if __STD_C
 4456 memalign_check(size_t alignment, size_t bytes)
 4457 #else
 4458 memalign_check(alignment, bytes) size_t alignment; size_t bytes;
 4459 #endif
 4460 {
 4461   INTERNAL_SIZE_T nb;
 4462   mchunkptr p;
 4463 
 4464   if (alignment <= MALLOC_ALIGNMENT) return malloc_check(bytes);
 4465   if (alignment <  MINSIZE) alignment = MINSIZE;
 4466 
 4467   nb = request2size(bytes+1);
 4468   (void)mutex_lock(&main_arena.mutex);
 4469   p = (top_check() >= 0) ? chunk_align(&main_arena, nb, alignment) : NULL;
 4470   (void)mutex_unlock(&main_arena.mutex);
 4471   if(!p) return NULL;
 4472   return chunk2mem_check(p, bytes);
 4473 }
 4474 
 4475 #ifndef NO_THREADS
 4476 
 4477 /* The following hooks are used when the global initialization in
 4478    ptmalloc_init() hasn't completed yet. */
 4479 
 4480 static Void_t*
 4481 #if __STD_C
 4482 malloc_starter(size_t sz)
 4483 #else
 4484 malloc_starter(sz) size_t sz;
 4485 #endif
 4486 {
 4487   mchunkptr victim = chunk_alloc(&main_arena, request2size(sz));
 4488 
 4489   return victim ? chunk2mem(victim) : 0;
 4490 }
 4491 
 4492 static void
 4493 #if __STD_C
 4494 free_starter(Void_t* mem)
 4495 #else
 4496 free_starter(mem) Void_t* mem;
 4497 #endif
 4498 {
 4499   mchunkptr p;
 4500 
 4501   if(!mem) return;
 4502   p = mem2chunk(mem);
 4503 #if HAVE_MMAP
 4504   if (chunk_is_mmapped(p)) {
 4505     munmap_chunk(p);
 4506     return;
 4507   }
 4508 #endif
 4509   chunk_free(&main_arena, p);
 4510 }
 4511 
 4512 /* The following hooks are used while the `atfork' handling mechanism
 4513    is active. */
 4514 
 4515 static Void_t*
 4516 #if __STD_C
 4517 malloc_atfork(size_t sz)
 4518 #else
 4519 malloc_atfork(sz) size_t sz;
 4520 #endif
 4521 {
 4522   Void_t *vptr = NULL;
 4523   mchunkptr victim;
 4524 
 4525   tsd_getspecific(arena_key, vptr);
 4526   if(!vptr) {
 4527     if(save_malloc_hook != malloc_check) {
 4528       victim = chunk_alloc(&main_arena, request2size(sz));
 4529       return victim ? chunk2mem(victim) : 0;
 4530     } else {
 4531       if(top_check() < 0) return 0;
 4532       victim = chunk_alloc(&main_arena, request2size(sz+1));
 4533       return victim ? chunk2mem_check(victim, sz) : 0;
 4534     }
 4535   } else {
 4536     /* Suspend the thread until the `atfork' handlers have completed.
 4537        By that time, the hooks will have been reset as well, so that
 4538        mALLOc() can be used again. */
 4539     (void)mutex_lock(&list_lock);
 4540     (void)mutex_unlock(&list_lock);
 4541     return mALLOc(sz);
 4542   }
 4543 }
 4544 
 4545 static void
 4546 #if __STD_C
 4547 free_atfork(Void_t* mem)
 4548 #else
 4549 free_atfork(mem) Void_t* mem;
 4550 #endif
 4551 {
 4552   Void_t *vptr = NULL;
 4553   arena *ar_ptr;
 4554   mchunkptr p;                          /* chunk corresponding to mem */
 4555 
 4556   if (mem == 0)                              /* free(0) has no effect */
 4557     return;
 4558 
 4559   p = mem2chunk(mem);         /* do not bother to replicate free_check here */
 4560 
 4561 #if HAVE_MMAP
 4562   if (chunk_is_mmapped(p))                       /* release mmapped memory. */
 4563   {
 4564     munmap_chunk(p);
 4565     return;
 4566   }
 4567 #endif
 4568 
 4569   ar_ptr = arena_for_ptr(p);
 4570   tsd_getspecific(arena_key, vptr);
 4571   if(vptr)
 4572     (void)mutex_lock(&ar_ptr->mutex);
 4573   chunk_free(ar_ptr, p);
 4574   if(vptr)
 4575     (void)mutex_unlock(&ar_ptr->mutex);
 4576 }
 4577 
 4578 #endif /* !defined NO_THREADS */
 4579 
 4580 #endif /* defined _LIBC || defined MALLOC_HOOKS */
 4581 
 4582 
 4583 
 4584 #ifdef _LIBC
 4585 weak_alias (__libc_calloc, __calloc) weak_alias (__libc_calloc, calloc)
 4586 weak_alias (__libc_free, __cfree) weak_alias (__libc_free, cfree)
 4587 weak_alias (__libc_free, __free) weak_alias (__libc_free, free)
 4588 weak_alias (__libc_malloc, __malloc) weak_alias (__libc_malloc, malloc)
 4589 weak_alias (__libc_memalign, __memalign) weak_alias (__libc_memalign, memalign)
 4590 weak_alias (__libc_realloc, __realloc) weak_alias (__libc_realloc, realloc)
 4591 weak_alias (__libc_valloc, __valloc) weak_alias (__libc_valloc, valloc)
 4592 weak_alias (__libc_pvalloc, __pvalloc) weak_alias (__libc_pvalloc, pvalloc)
 4593 weak_alias (__libc_mallinfo, __mallinfo) weak_alias (__libc_mallinfo, mallinfo)
 4594 weak_alias (__libc_mallopt, __mallopt) weak_alias (__libc_mallopt, mallopt)
 4595 
 4596 weak_alias (__malloc_stats, malloc_stats)
 4597 weak_alias (__malloc_usable_size, malloc_usable_size)
 4598 weak_alias (__malloc_trim, malloc_trim)
 4599 weak_alias (__malloc_get_state, malloc_get_state)
 4600 weak_alias (__malloc_set_state, malloc_set_state)
 4601 #endif
 4602 
 4603 /*
 4604 
 4605 History:
 4606 
 4607     V2.6.4-pt3 Thu Feb 20 1997 Wolfram Gloger (wmglo@dent.med.uni-muenchen.de)
 4608       * Added malloc_get/set_state() (mainly for use in GNU emacs),
 4609         using interface from Marcus Daniels
 4610       * All parameters are now adjustable via environment variables
 4611 
 4612     V2.6.4-pt2 Sat Dec 14 1996 Wolfram Gloger (wmglo@dent.med.uni-muenchen.de)
 4613       * Added debugging hooks
 4614       * Fixed possible deadlock in realloc() when out of memory
 4615       * Don't pollute namespace in glibc: use __getpagesize, __mmap, etc.
 4616 
 4617     V2.6.4-pt Wed Dec  4 1996 Wolfram Gloger (wmglo@dent.med.uni-muenchen.de)
 4618       * Very minor updates from the released 2.6.4 version.
 4619       * Trimmed include file down to exported data structures.
 4620       * Changes from H.J. Lu for glibc-2.0.
 4621 
 4622     V2.6.3i-pt Sep 16 1996  Wolfram Gloger (wmglo@dent.med.uni-muenchen.de)
 4623       * Many changes for multiple threads
 4624       * Introduced arenas and heaps
 4625 
 4626     V2.6.3 Sun May 19 08:17:58 1996  Doug Lea  (dl at gee)
 4627       * Added pvalloc, as recommended by H.J. Liu
 4628       * Added 64bit pointer support mainly from Wolfram Gloger
 4629       * Added anonymously donated WIN32 sbrk emulation
 4630       * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
 4631       * malloc_extend_top: fix mask error that caused wastage after
 4632         foreign sbrks
 4633       * Add linux mremap support code from HJ Liu
 4634 
 4635     V2.6.2 Tue Dec  5 06:52:55 1995  Doug Lea  (dl at gee)
 4636       * Integrated most documentation with the code.
 4637       * Add support for mmap, with help from
 4638         Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
 4639       * Use last_remainder in more cases.
 4640       * Pack bins using an idea from colin@nyx10.cs.du.edu
 4641       * Use ordered bins instead of best-fit threshold
 4642       * Eliminate block-local decls to simplify tracing and debugging.
 4643       * Support another case of realloc via move into top
 4644       * Fix error occurring when initial sbrk_base not word-aligned.
 4645       * Rely on page size for units instead of SBRK_UNIT to
 4646         avoid surprises about sbrk alignment conventions.
 4647       * Add mallinfo, mallopt. Thanks to Raymond Nijssen
 4648         (raymond@es.ele.tue.nl) for the suggestion.
 4649       * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
 4650       * More precautions for cases where other routines call sbrk,
 4651         courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
 4652       * Added macros etc., allowing use in linux libc from
 4653         H.J. Lu (hjl@gnu.ai.mit.edu)
 4654       * Inverted this history list
 4655 
 4656     V2.6.1 Sat Dec  2 14:10:57 1995  Doug Lea  (dl at gee)
 4657       * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
 4658       * Removed all preallocation code since under current scheme
 4659         the work required to undo bad preallocations exceeds
 4660         the work saved in good cases for most test programs.
 4661       * No longer use return list or unconsolidated bins since
 4662         no scheme using them consistently outperforms those that don't
 4663         given above changes.
 4664       * Use best fit for very large chunks to prevent some worst-cases.
 4665       * Added some support for debugging
 4666 
 4667     V2.6.0 Sat Nov  4 07:05:23 1995  Doug Lea  (dl at gee)
 4668       * Removed footers when chunks are in use. Thanks to
 4669         Paul Wilson (wilson@cs.texas.edu) for the suggestion.
 4670 
 4671     V2.5.4 Wed Nov  1 07:54:51 1995  Doug Lea  (dl at gee)
 4672       * Added malloc_trim, with help from Wolfram Gloger
 4673         (wmglo@Dent.MED.Uni-Muenchen.DE).
 4674 
 4675     V2.5.3 Tue Apr 26 10:16:01 1994  Doug Lea  (dl at g)
 4676 
 4677     V2.5.2 Tue Apr  5 16:20:40 1994  Doug Lea  (dl at g)
 4678       * realloc: try to expand in both directions
 4679       * malloc: swap order of clean-bin strategy;
 4680       * realloc: only conditionally expand backwards
 4681       * Try not to scavenge used bins
 4682       * Use bin counts as a guide to preallocation
 4683       * Occasionally bin return list chunks in first scan
 4684       * Add a few optimizations from colin@nyx10.cs.du.edu
 4685 
 4686     V2.5.1 Sat Aug 14 15:40:43 1993  Doug Lea  (dl at g)
 4687       * faster bin computation & slightly different binning
 4688       * merged all consolidations to one part of malloc proper
 4689          (eliminating old malloc_find_space & malloc_clean_bin)
 4690       * Scan 2 returns chunks (not just 1)
 4691       * Propagate failure in realloc if malloc returns 0
 4692       * Add stuff to allow compilation on non-ANSI compilers
 4693           from kpv@research.att.com
 4694 
 4695     V2.5 Sat Aug  7 07:41:59 1993  Doug Lea  (dl at g.oswego.edu)
 4696       * removed potential for odd address access in prev_chunk
 4697       * removed dependency on getpagesize.h
 4698       * misc cosmetics and a bit more internal documentation
 4699       * anticosmetics: mangled names in macros to evade debugger strangeness
 4700       * tested on sparc, hp-700, dec-mips, rs6000
 4701           with gcc & native cc (hp, dec only) allowing
 4702           Detlefs & Zorn comparison study (in SIGPLAN Notices).
 4703 
 4704     Trial version Fri Aug 28 13:14:29 1992  Doug Lea  (dl at g.oswego.edu)
 4705       * Based loosely on libg++-1.2X malloc. (It retains some of the overall
 4706          structure of the old version, but most details differ.)
 4707 
 4708 */