"Fossies" - the Fresh Open Source Software Archive

Member "mpr-2.8/malloc.c" (18 Dec 2003, 183958 Bytes) of package /linux/misc/old/mpr-2.8.tar.gz:


As a special service "Fossies" has tried to format the requested source page into HTML format using (guessed) C and C++ source code syntax highlighting (style: standard) with prefixed line numbers and code folding option. Alternatively you can here view or download the uninterpreted source code file.

    1 /*
    2   This is a version (aka dlmalloc) of malloc/free/realloc written by
    3   Doug Lea and released to the public domain.  Use, modify, and
    4   redistribute this code without permission or acknowledgement in any
    5   way you wish.  Send questions, comments, complaints, performance
    6   data, etc to dl@cs.oswego.edu
    7 
    8 * VERSION 2.7.2 Sat Aug 17 09:07:30 2002  Doug Lea  (dl at gee)
    9 
   10    Note: There may be an updated version of this malloc obtainable at
   11            ftp://gee.cs.oswego.edu/pub/misc/malloc.c
   12          Check before installing!
   13 
   14 * Quickstart
   15 
   16   This library is all in one file to simplify the most common usage:
   17   ftp it, compile it (-O), and link it into another program. All
   18   of the compile-time options default to reasonable values for use on
   19   most unix platforms. Compile -DWIN32 for reasonable defaults on windows.
   20   You might later want to step through various compile-time and dynamic
   21   tuning options.
   22 
   23   For convenience, an include file for code using this malloc is at:
   24      ftp://gee.cs.oswego.edu/pub/misc/malloc-2.7.1.h
   25   You don't really need this .h file unless you call functions not
   26   defined in your system include files.  The .h file contains only the
   27   excerpts from this file needed for using this malloc on ANSI C/C++
   28   systems, so long as you haven't changed compile-time options about
   29   naming and tuning parameters.  If you do, then you can create your
   30   own malloc.h that does include all settings by cutting at the point
   31   indicated below.
   32 
   33 * Why use this malloc?
   34 
   35   This is not the fastest, most space-conserving, most portable, or
   36   most tunable malloc ever written. However it is among the fastest
   37   while also being among the most space-conserving, portable and tunable.
   38   Consistent balance across these factors results in a good general-purpose
   39   allocator for malloc-intensive programs.
   40 
   41   The main properties of the algorithms are:
   42   * For large (>= 512 bytes) requests, it is a pure best-fit allocator,
   43     with ties normally decided via FIFO (i.e. least recently used).
   44   * For small (<= 64 bytes by default) requests, it is a caching
   45     allocator, that maintains pools of quickly recycled chunks.
   46   * In between, and for combinations of large and small requests, it does
   47     the best it can trying to meet both goals at once.
   48   * For very large requests (>= 128KB by default), it relies on system
   49     memory mapping facilities, if supported.
   50 
   51   For a longer but slightly out of date high-level description, see
   52      http://gee.cs.oswego.edu/dl/html/malloc.html
   53 
   54   You may already by default be using a C library containing a malloc
   55   that is  based on some version of this malloc (for example in
   56   linux). You might still want to use the one in this file in order to
   57   customize settings or to avoid overheads associated with library
   58   versions.
   59 
   60 * Contents, described in more detail in "description of public routines" below.
   61 
   62   Standard (ANSI/SVID/...)  functions:
   63     malloc(size_t n);
   64     calloc(size_t n_elements, size_t element_size);
   65     free(Void_t* p);
   66     realloc(Void_t* p, size_t n);
   67     memalign(size_t alignment, size_t n);
   68     valloc(size_t n);
   69     mallinfo()
   70     mallopt(int parameter_number, int parameter_value)
   71 
   72   Additional functions:
   73     independent_calloc(size_t n_elements, size_t size, Void_t* chunks[]);
   74     independent_comalloc(size_t n_elements, size_t sizes[], Void_t* chunks[]);
   75     pvalloc(size_t n);
   76     cfree(Void_t* p);
   77     malloc_trim(size_t pad);
   78     malloc_usable_size(Void_t* p);
   79     malloc_stats();
   80 
   81 * Vital statistics:
   82 
   83   Supported pointer representation:       4 or 8 bytes
   84   Supported size_t  representation:       4 or 8 bytes 
   85        Note that size_t is allowed to be 4 bytes even if pointers are 8.
   86        You can adjust this by defining INTERNAL_SIZE_T
   87 
   88   Alignment:                              2 * sizeof(size_t) (default)
   89        (i.e., 8 byte alignment with 4byte size_t). This suffices for
   90        nearly all current machines and C compilers. However, you can
   91        define MALLOC_ALIGNMENT to be wider than this if necessary.
   92 
   93   Minimum overhead per allocated chunk:   4 or 8 bytes
   94        Each malloced chunk has a hidden word of overhead holding size
   95        and status information.
   96 
   97   Minimum allocated size: 4-byte ptrs:  16 bytes    (including 4 overhead)
   98                           8-byte ptrs:  24/32 bytes (including, 4/8 overhead)
   99 
  100        When a chunk is freed, 12 (for 4byte ptrs) or 20 (for 8 byte
  101        ptrs but 4 byte size) or 24 (for 8/8) additional bytes are
  102        needed; 4 (8) for a trailing size field and 8 (16) bytes for
  103        free list pointers. Thus, the minimum allocatable size is
  104        16/24/32 bytes.
  105 
  106        Even a request for zero bytes (i.e., malloc(0)) returns a
  107        pointer to something of the minimum allocatable size.
  108 
  109        The maximum overhead wastage (i.e., number of extra bytes
  110        allocated than were requested in malloc) is less than or equal
  111        to the minimum size, except for requests >= mmap_threshold that
  112        are serviced via mmap(), where the worst case wastage is 2 *
  113        sizeof(size_t) bytes plus the remainder from a system page (the
  114        minimal mmap unit); typically 4096 or 8192 bytes.
  115 
  116   Maximum allocated size:  4-byte size_t: 2^32 minus about two pages 
  117                            8-byte size_t: 2^64 minus about two pages
  118 
  119        It is assumed that (possibly signed) size_t values suffice to
  120        represent chunk sizes. `Possibly signed' is due to the fact
  121        that `size_t' may be defined on a system as either a signed or
  122        an unsigned type. The ISO C standard says that it must be
  123        unsigned, but a few systems are known not to adhere to this.
  124        Additionally, even when size_t is unsigned, sbrk (which is by
  125        default used to obtain memory from system) accepts signed
  126        arguments, and may not be able to handle size_t-wide arguments
  127        with negative sign bit.  Generally, values that would
  128        appear as negative after accounting for overhead and alignment
  129        are supported only via mmap(), which does not have this
  130        limitation.
  131 
  132        Requests for sizes outside the allowed range will perform an optional
  133        failure action and then return null. (Requests may also
  134        also fail because a system is out of memory.)
  135 
  136   Thread-safety: NOT thread-safe unless USE_MALLOC_LOCK defined
  137 
  138        When USE_MALLOC_LOCK is defined, wrappers are created to
  139        surround every public call with either a pthread mutex or
  140        a win32 spinlock (depending on WIN32). This is not
  141        especially fast, and can be a major bottleneck.
  142        It is designed only to provide minimal protection
  143        in concurrent environments, and to provide a basis for
  144        extensions.  If you are using malloc in a concurrent program,
  145        you would be far better off obtaining ptmalloc, which is
  146        derived from a version of this malloc, and is well-tuned for
  147        concurrent programs. (See http://www.malloc.de) Note that
  148        even when USE_MALLOC_LOCK is defined, you can can guarantee
  149        full thread-safety only if no threads acquire memory through 
  150        direct calls to MORECORE or other system-level allocators.
  151 
  152   Compliance: I believe it is compliant with the 1997 Single Unix Specification
  153        (See http://www.opennc.org). Also SVID/XPG, ANSI C, and probably 
  154        others as well.
  155 
  156 * Synopsis of compile-time options:
  157 
  158     People have reported using previous versions of this malloc on all
  159     versions of Unix, sometimes by tweaking some of the defines
  160     below. It has been tested most extensively on Solaris and
  161     Linux. It is also reported to work on WIN32 platforms.
  162     People also report using it in stand-alone embedded systems.
  163 
  164     The implementation is in straight, hand-tuned ANSI C.  It is not
  165     at all modular. (Sorry!)  It uses a lot of macros.  To be at all
  166     usable, this code should be compiled using an optimizing compiler
  167     (for example gcc -O3) that can simplify expressions and control
  168     paths. (FAQ: some macros import variables as arguments rather than
  169     declare locals because people reported that some debuggers
  170     otherwise get confused.)
  171 
  172     OPTION                     DEFAULT VALUE
  173 
  174     Compilation Environment options:
  175 
  176     __STD_C                    derived from C compiler defines
  177     WIN32                      NOT defined
  178     HAVE_MEMCPY                defined
  179     USE_MEMCPY                 1 if HAVE_MEMCPY is defined
  180     HAVE_MMAP                  defined as 1 
  181     MMAP_CLEARS                1
  182     HAVE_MREMAP                0 unless linux defined
  183     malloc_getpagesize         derived from system #includes, or 4096 if not
  184     HAVE_USR_INCLUDE_MALLOC_H  NOT defined
  185     LACKS_UNISTD_H             NOT defined unless WIN32
  186     LACKS_SYS_PARAM_H          NOT defined unless WIN32
  187     LACKS_SYS_MMAN_H           NOT defined unless WIN32
  188     LACKS_FCNTL_H              NOT defined
  189 
  190     Changing default word sizes:
  191 
  192     INTERNAL_SIZE_T            size_t
  193     MALLOC_ALIGNMENT           2 * sizeof(INTERNAL_SIZE_T)
  194     PTR_UINT                   unsigned long
  195     CHUNK_SIZE_T               unsigned long
  196 
  197     Configuration and functionality options:
  198 
  199     USE_DL_PREFIX              NOT defined
  200     USE_PUBLIC_MALLOC_WRAPPERS NOT defined
  201     USE_MALLOC_LOCK            NOT defined
  202     DEBUG                      NOT defined
  203     REALLOC_ZERO_BYTES_FREES   NOT defined
  204     MALLOC_FAILURE_ACTION      errno = ENOMEM, if __STD_C defined, else no-op
  205     TRIM_FASTBINS              0
  206     FIRST_SORTED_BIN_SIZE      512
  207 
  208     Options for customizing MORECORE:
  209 
  210     MORECORE                   sbrk
  211     MORECORE_CONTIGUOUS        1 
  212     MORECORE_CANNOT_TRIM       NOT defined
  213     MMAP_AS_MORECORE_SIZE      (1024 * 1024) 
  214 
  215     Tuning options that are also dynamically changeable via mallopt:
  216 
  217     DEFAULT_MXFAST             64
  218     DEFAULT_TRIM_THRESHOLD     256 * 1024
  219     DEFAULT_TOP_PAD            0
  220     DEFAULT_MMAP_THRESHOLD     256 * 1024
  221     DEFAULT_MMAP_MAX           65536
  222 
  223     There are several other #defined constants and macros that you
  224     probably don't want to touch unless you are extending or adapting malloc.
  225 */
  226 
  227 /*
  228   WIN32 sets up defaults for MS environment and compilers.
  229   Otherwise defaults are for unix.
  230 */
  231 
  232 /* #define WIN32 */
  233 
  234 #ifdef WIN32
  235 
  236 #define WIN32_LEAN_AND_MEAN
  237 #include <windows.h>
  238 
  239 /* Win32 doesn't supply or need the following headers */
  240 #define LACKS_UNISTD_H
  241 #define LACKS_SYS_PARAM_H
  242 #define LACKS_SYS_MMAN_H
  243 
  244 /* Use the supplied emulation of sbrk */
  245 #define MORECORE sbrk
  246 #define MORECORE_CONTIGUOUS 1
  247 #define MORECORE_FAILURE    ((void*)(-1))
  248 
  249 /* Use the supplied emulation of mmap and munmap */
  250 #define HAVE_MMAP 1
  251 #define MUNMAP_FAILURE  (-1)
  252 #define MMAP_CLEARS 1
  253 
  254 /* These values don't really matter in windows mmap emulation */
  255 #define MAP_PRIVATE 1
  256 #define MAP_ANONYMOUS 2
  257 #define PROT_READ 1
  258 #define PROT_WRITE 2
  259 
  260 /* Emulation functions defined at the end of this file */
  261 
  262 /* If USE_MALLOC_LOCK, use supplied critical-section-based lock functions */
  263 #ifdef USE_MALLOC_LOCK
  264 static int slwait(int *sl);
  265 static int slrelease(int *sl);
  266 #endif
  267 
  268 static long getpagesize(void);
  269 static long getregionsize(void);
  270 static void *sbrk(long size);
  271 static void *mmap(void *ptr, long size, long prot, long type, long handle, long arg);
  272 static long munmap(void *ptr, long size);
  273 
  274 static void vminfo (unsigned long*free, unsigned long*reserved, unsigned long*committed);
  275 static int cpuinfo (int whole, unsigned long*kernel, unsigned long*user);
  276 
  277 #endif
  278 
  279 /*
  280   __STD_C should be nonzero if using ANSI-standard C compiler, a C++
  281   compiler, or a C compiler sufficiently close to ANSI to get away
  282   with it.
  283 */
  284 
  285 #ifndef __STD_C
  286 #if defined(__STDC__) || defined(_cplusplus)
  287 #define __STD_C     1
  288 #else
  289 #define __STD_C     0
  290 #endif 
  291 #endif /*__STD_C*/
  292 
  293 
  294 /*
  295   Void_t* is the pointer type that malloc should say it returns
  296 */
  297 
  298 #ifndef Void_t
  299 #if (__STD_C || defined(WIN32))
  300 #define Void_t      void
  301 #else
  302 #define Void_t      char
  303 #endif
  304 #endif /*Void_t*/
  305 
  306 #if __STD_C
  307 #include <stddef.h>   /* for size_t */
  308 #else
  309 #include <sys/types.h>
  310 #endif
  311 
  312 #ifdef __cplusplus
  313 extern "C" {
  314 #endif
  315 
  316 /* define LACKS_UNISTD_H if your system does not have a <unistd.h>. */
  317 
  318 /* #define  LACKS_UNISTD_H */
  319 
  320 #ifndef LACKS_UNISTD_H
  321 #include <unistd.h>
  322 #endif
  323 
  324 /* define LACKS_SYS_PARAM_H if your system does not have a <sys/param.h>. */
  325 
  326 /* #define  LACKS_SYS_PARAM_H */
  327 
  328 
  329 #include <stdio.h>    /* needed for malloc_stats */
  330 #include <errno.h>    /* needed for optional MALLOC_FAILURE_ACTION */
  331 
  332 
  333 /*
  334   Debugging:
  335 
  336   Because freed chunks may be overwritten with bookkeeping fields, this
  337   malloc will often die when freed memory is overwritten by user
  338   programs.  This can be very effective (albeit in an annoying way)
  339   in helping track down dangling pointers.
  340 
  341   If you compile with -DDEBUG, a number of assertion checks are
  342   enabled that will catch more memory errors. You probably won't be
  343   able to make much sense of the actual assertion errors, but they
  344   should help you locate incorrectly overwritten memory.  The
  345   checking is fairly extensive, and will slow down execution
  346   noticeably. Calling malloc_stats or mallinfo with DEBUG set will
  347   attempt to check every non-mmapped allocated and free chunk in the
  348   course of computing the summmaries. (By nature, mmapped regions
  349   cannot be checked very much automatically.)
  350 
  351   Setting DEBUG may also be helpful if you are trying to modify
  352   this code. The assertions in the check routines spell out in more
  353   detail the assumptions and invariants underlying the algorithms.
  354 
  355   Setting DEBUG does NOT provide an automated mechanism for checking
  356   that all accesses to malloced memory stay within their
  357   bounds. However, there are several add-ons and adaptations of this
  358   or other mallocs available that do this.
  359 */
  360 
  361 #if DEBUG
  362 #include <assert.h>
  363 #else
  364 #define assert(x) ((void)0)
  365 #endif
  366 
  367 /*
  368   The unsigned integer type used for comparing any two chunk sizes.
  369   This should be at least as wide as size_t, but should not be signed.
  370 */
  371 
  372 #ifndef CHUNK_SIZE_T
  373 #define CHUNK_SIZE_T unsigned long
  374 #endif
  375 
  376 /* 
  377   The unsigned integer type used to hold addresses when they are are
  378   manipulated as integers. Except that it is not defined on all
  379   systems, intptr_t would suffice.
  380 */
  381 #ifndef PTR_UINT
  382 #define PTR_UINT unsigned long
  383 #endif
  384 
  385 
  386 /*
  387   INTERNAL_SIZE_T is the word-size used for internal bookkeeping
  388   of chunk sizes.
  389 
  390   The default version is the same as size_t.
  391 
  392   While not strictly necessary, it is best to define this as an
  393   unsigned type, even if size_t is a signed type. This may avoid some
  394   artificial size limitations on some systems.
  395 
  396   On a 64-bit machine, you may be able to reduce malloc overhead by
  397   defining INTERNAL_SIZE_T to be a 32 bit `unsigned int' at the
  398   expense of not being able to handle more than 2^32 of malloced
  399   space. If this limitation is acceptable, you are encouraged to set
  400   this unless you are on a platform requiring 16byte alignments. In
  401   this case the alignment requirements turn out to negate any
  402   potential advantages of decreasing size_t word size.
  403 
  404   Implementors: Beware of the possible combinations of:
  405      - INTERNAL_SIZE_T might be signed or unsigned, might be 32 or 64 bits,
  406        and might be the same width as int or as long
  407      - size_t might have different width and signedness as INTERNAL_SIZE_T
  408      - int and long might be 32 or 64 bits, and might be the same width
  409   To deal with this, most comparisons and difference computations
  410   among INTERNAL_SIZE_Ts should cast them to CHUNK_SIZE_T, being
  411   aware of the fact that casting an unsigned int to a wider long does
  412   not sign-extend. (This also makes checking for negative numbers
  413   awkward.) Some of these casts result in harmless compiler warnings
  414   on some systems.
  415 */
  416 
  417 #ifndef INTERNAL_SIZE_T
  418 #define INTERNAL_SIZE_T size_t
  419 #endif
  420 
  421 /* The corresponding word size */
  422 #define SIZE_SZ                (sizeof(INTERNAL_SIZE_T))
  423 
  424 
  425 
  426 /*
  427   MALLOC_ALIGNMENT is the minimum alignment for malloc'ed chunks.
  428   It must be a power of two at least 2 * SIZE_SZ, even on machines
  429   for which smaller alignments would suffice. It may be defined as
  430   larger than this though. Note however that code and data structures
  431   are optimized for the case of 8-byte alignment.
  432 */
  433 
  434 
  435 #ifndef MALLOC_ALIGNMENT
  436 #define MALLOC_ALIGNMENT       (2 * SIZE_SZ)
  437 #endif
  438 
  439 /* The corresponding bit mask value */
  440 #define MALLOC_ALIGN_MASK      (MALLOC_ALIGNMENT - 1)
  441 
  442 
  443 
  444 /*
  445   REALLOC_ZERO_BYTES_FREES should be set if a call to
  446   realloc with zero bytes should be the same as a call to free.
  447   Some people think it should. Otherwise, since this malloc
  448   returns a unique pointer for malloc(0), so does realloc(p, 0).
  449 */
  450 
  451 /*   #define REALLOC_ZERO_BYTES_FREES */
  452 
  453 /*
  454   TRIM_FASTBINS controls whether free() of a very small chunk can
  455   immediately lead to trimming. Setting to true (1) can reduce memory
  456   footprint, but will almost always slow down programs that use a lot
  457   of small chunks.
  458 
  459   Define this only if you are willing to give up some speed to more
  460   aggressively reduce system-level memory footprint when releasing
  461   memory in programs that use many small chunks.  You can get
  462   essentially the same effect by setting MXFAST to 0, but this can
  463   lead to even greater slowdowns in programs using many small chunks.
  464   TRIM_FASTBINS is an in-between compile-time option, that disables
  465   only those chunks bordering topmost memory from being placed in
  466   fastbins.
  467 */
  468 
  469 #ifndef TRIM_FASTBINS
  470 #define TRIM_FASTBINS  0
  471 #endif
  472 
  473 
  474 /*
  475   USE_DL_PREFIX will prefix all public routines with the string 'dl'.
  476   This is necessary when you only want to use this malloc in one part 
  477   of a program, using your regular system malloc elsewhere.
  478 */
  479 
  480 /* #define USE_DL_PREFIX */
  481 
  482 
  483 /*
  484   USE_MALLOC_LOCK causes wrapper functions to surround each
  485   callable routine with pthread mutex lock/unlock.
  486 
  487   USE_MALLOC_LOCK forces USE_PUBLIC_MALLOC_WRAPPERS to be defined
  488 */
  489 
  490 
  491 /* #define USE_MALLOC_LOCK */
  492 
  493 
  494 /*
  495   If USE_PUBLIC_MALLOC_WRAPPERS is defined, every public routine is
  496   actually a wrapper function that first calls MALLOC_PREACTION, then
  497   calls the internal routine, and follows it with
  498   MALLOC_POSTACTION. This is needed for locking, but you can also use
  499   this, without USE_MALLOC_LOCK, for purposes of interception,
  500   instrumentation, etc. It is a sad fact that using wrappers often
  501   noticeably degrades performance of malloc-intensive programs.
  502 */
  503 
  504 #ifdef USE_MALLOC_LOCK
  505 #define USE_PUBLIC_MALLOC_WRAPPERS
  506 #else
  507 /* #define USE_PUBLIC_MALLOC_WRAPPERS */
  508 #endif
  509 
  510 
  511 /* 
  512    Two-phase name translation.
  513    All of the actual routines are given mangled names.
  514    When wrappers are used, they become the public callable versions.
  515    When DL_PREFIX is used, the callable names are prefixed.
  516 */
  517 
  518 #ifndef USE_PUBLIC_MALLOC_WRAPPERS
  519 #define cALLOc      public_cALLOc
  520 #define fREe        public_fREe
  521 #define cFREe       public_cFREe
  522 #define mALLOc      public_mALLOc
  523 #define mEMALIGn    public_mEMALIGn
  524 #define rEALLOc     public_rEALLOc
  525 #define vALLOc      public_vALLOc
  526 #define pVALLOc     public_pVALLOc
  527 #define mALLINFo    public_mALLINFo
  528 #define mALLOPt     public_mALLOPt
  529 #define mTRIm       public_mTRIm
  530 #define mSTATs      public_mSTATs
  531 #define mUSABLe     public_mUSABLe
  532 #define iCALLOc     public_iCALLOc
  533 #define iCOMALLOc   public_iCOMALLOc
  534 #endif
  535 
  536 #ifdef USE_DL_PREFIX
  537 #define public_cALLOc    dlcalloc
  538 #define public_fREe      dlfree
  539 #define public_cFREe     dlcfree
  540 #define public_mALLOc    dlmalloc
  541 #define public_mEMALIGn  dlmemalign
  542 #define public_rEALLOc   dlrealloc
  543 #define public_vALLOc    dlvalloc
  544 #define public_pVALLOc   dlpvalloc
  545 #define public_mALLINFo  dlmallinfo
  546 #define public_mALLOPt   dlmallopt
  547 #define public_mTRIm     dlmalloc_trim
  548 #define public_mSTATs    dlmalloc_stats
  549 #define public_mUSABLe   dlmalloc_usable_size
  550 #define public_iCALLOc   dlindependent_calloc
  551 #define public_iCOMALLOc dlindependent_comalloc
  552 #else /* USE_DL_PREFIX */
  553 #define public_cALLOc    calloc
  554 #define public_fREe      free
  555 #define public_cFREe     cfree
  556 #define public_mALLOc    malloc
  557 #define public_mEMALIGn  memalign
  558 #define public_rEALLOc   realloc
  559 #define public_vALLOc    valloc
  560 #define public_pVALLOc   pvalloc
  561 #define public_mALLINFo  mallinfo
  562 #define public_mALLOPt   mallopt
  563 #define public_mTRIm     malloc_trim
  564 #define public_mSTATs    malloc_stats
  565 #define public_mUSABLe   malloc_usable_size
  566 #define public_iCALLOc   independent_calloc
  567 #define public_iCOMALLOc independent_comalloc
  568 #endif /* USE_DL_PREFIX */
  569 
  570 
  571 /*
  572   HAVE_MEMCPY should be defined if you are not otherwise using
  573   ANSI STD C, but still have memcpy and memset in your C library
  574   and want to use them in calloc and realloc. Otherwise simple
  575   macro versions are defined below.
  576 
  577   USE_MEMCPY should be defined as 1 if you actually want to
  578   have memset and memcpy called. People report that the macro
  579   versions are faster than libc versions on some systems.
  580   
  581   Even if USE_MEMCPY is set to 1, loops to copy/clear small chunks
  582   (of <= 36 bytes) are manually unrolled in realloc and calloc.
  583 */
  584 
  585 #define HAVE_MEMCPY
  586 
  587 #ifndef USE_MEMCPY
  588 #ifdef HAVE_MEMCPY
  589 #define USE_MEMCPY 1
  590 #else
  591 #define USE_MEMCPY 0
  592 #endif
  593 #endif
  594 
  595 
  596 #if (__STD_C || defined(HAVE_MEMCPY))
  597 
  598 #ifdef WIN32
  599 /* On Win32 memset and memcpy are already declared in windows.h */
  600 #else
  601 #if __STD_C
  602 void* memset(void*, int, size_t);
  603 void* memcpy(void*, const void*, size_t);
  604 #else
  605 Void_t* memset();
  606 Void_t* memcpy();
  607 #endif
  608 #endif
  609 #endif
  610 
  611 /*
  612   MALLOC_FAILURE_ACTION is the action to take before "return 0" when
  613   malloc fails to be able to return memory, either because memory is
  614   exhausted or because of illegal arguments.
  615   
  616   By default, sets errno if running on STD_C platform, else does nothing.  
  617 */
  618 
  619 #ifndef MALLOC_FAILURE_ACTION
  620 #if __STD_C
  621 #define MALLOC_FAILURE_ACTION \
  622    errno = ENOMEM;
  623 
  624 #else
  625 #define MALLOC_FAILURE_ACTION
  626 #endif
  627 #endif
  628 
  629 /*
  630   MORECORE-related declarations. By default, rely on sbrk
  631 */
  632 
  633 
  634 #ifdef LACKS_UNISTD_H
  635 #if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
  636 #if __STD_C
  637 extern Void_t*     sbrk(ptrdiff_t);
  638 #else
  639 extern Void_t*     sbrk();
  640 #endif
  641 #endif
  642 #endif
  643 
  644 /*
  645   MORECORE is the name of the routine to call to obtain more memory
  646   from the system.  See below for general guidance on writing
  647   alternative MORECORE functions, as well as a version for WIN32 and a
  648   sample version for pre-OSX macos.
  649 */
  650 
  651 #ifndef MORECORE
  652 #define MORECORE sbrk
  653 #endif
  654 
  655 /*
  656   MORECORE_FAILURE is the value returned upon failure of MORECORE
  657   as well as mmap. Since it cannot be an otherwise valid memory address,
  658   and must reflect values of standard sys calls, you probably ought not
  659   try to redefine it.
  660 */
  661 
  662 #ifndef MORECORE_FAILURE
  663 #define MORECORE_FAILURE (-1)
  664 #endif
  665 
  666 /*
  667   If MORECORE_CONTIGUOUS is true, take advantage of fact that
  668   consecutive calls to MORECORE with positive arguments always return
  669   contiguous increasing addresses.  This is true of unix sbrk.  Even
  670   if not defined, when regions happen to be contiguous, malloc will
  671   permit allocations spanning regions obtained from different
  672   calls. But defining this when applicable enables some stronger
  673   consistency checks and space efficiencies. 
  674 */
  675 
  676 #ifndef MORECORE_CONTIGUOUS
  677 #define MORECORE_CONTIGUOUS 1
  678 #endif
  679 
  680 /*
  681   Define MORECORE_CANNOT_TRIM if your version of MORECORE
  682   cannot release space back to the system when given negative
  683   arguments. This is generally necessary only if you are using
  684   a hand-crafted MORECORE function that cannot handle negative arguments.
  685 */
  686 
  687 /* #define MORECORE_CANNOT_TRIM */
  688 
  689 
  690 /*
  691   Define HAVE_MMAP as true to optionally make malloc() use mmap() to
  692   allocate very large blocks.  These will be returned to the
  693   operating system immediately after a free(). Also, if mmap
  694   is available, it is used as a backup strategy in cases where
  695   MORECORE fails to provide space from system.
  696 
  697   This malloc is best tuned to work with mmap for large requests.
  698   If you do not have mmap, operations involving very large chunks (1MB
  699   or so) may be slower than you'd like.
  700 */
  701 
  702 #ifndef HAVE_MMAP
  703 #define HAVE_MMAP 1
  704 #endif
  705 
  706 #if HAVE_MMAP
  707 /* 
  708    Standard unix mmap using /dev/zero clears memory so calloc doesn't
  709    need to.
  710 */
  711 
  712 #ifndef MMAP_CLEARS
  713 #define MMAP_CLEARS 1
  714 #endif
  715 
  716 #else /* no mmap */
  717 #ifndef MMAP_CLEARS
  718 #define MMAP_CLEARS 0
  719 #endif
  720 #endif
  721 
  722 
  723 /* 
  724    MMAP_AS_MORECORE_SIZE is the minimum mmap size argument to use if
  725    sbrk fails, and mmap is used as a backup (which is done only if
  726    HAVE_MMAP).  The value must be a multiple of page size.  This
  727    backup strategy generally applies only when systems have "holes" in
  728    address space, so sbrk cannot perform contiguous expansion, but
  729    there is still space available on system.  On systems for which
  730    this is known to be useful (i.e. most linux kernels), this occurs
  731    only when programs allocate huge amounts of memory.  Between this,
  732    and the fact that mmap regions tend to be limited, the size should
  733    be large, to avoid too many mmap calls and thus avoid running out
  734    of kernel resources.
  735 */
  736 
  737 #ifndef MMAP_AS_MORECORE_SIZE
  738 #define MMAP_AS_MORECORE_SIZE (1024 * 1024)
  739 #endif
  740 
  741 /*
  742   Define HAVE_MREMAP to make realloc() use mremap() to re-allocate
  743   large blocks.  This is currently only possible on Linux with
  744   kernel versions newer than 1.3.77.
  745 */
  746 
  747 #ifndef HAVE_MREMAP
  748 #ifdef linux
  749 #define HAVE_MREMAP 1
  750 #else
  751 #define HAVE_MREMAP 0
  752 #endif
  753 
  754 #endif /* HAVE_MMAP */
  755 
  756 
  757 /*
  758   The system page size. To the extent possible, this malloc manages
  759   memory from the system in page-size units.  Note that this value is
  760   cached during initialization into a field of malloc_state. So even
  761   if malloc_getpagesize is a function, it is only called once.
  762 
  763   The following mechanics for getpagesize were adapted from bsd/gnu
  764   getpagesize.h. If none of the system-probes here apply, a value of
  765   4096 is used, which should be OK: If they don't apply, then using
  766   the actual value probably doesn't impact performance.
  767 */
  768 
  769 
  770 #ifndef malloc_getpagesize
  771 
  772 #ifndef LACKS_UNISTD_H
  773 #  include <unistd.h>
  774 #endif
  775 
  776 #  ifdef _SC_PAGESIZE         /* some SVR4 systems omit an underscore */
  777 #    ifndef _SC_PAGE_SIZE
  778 #      define _SC_PAGE_SIZE _SC_PAGESIZE
  779 #    endif
  780 #  endif
  781 
  782 #  ifdef _SC_PAGE_SIZE
  783 #    define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
  784 #  else
  785 #    if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
  786        extern size_t getpagesize();
  787 #      define malloc_getpagesize getpagesize()
  788 #    else
  789 #      ifdef WIN32 /* use supplied emulation of getpagesize */
  790 #        define malloc_getpagesize getpagesize() 
  791 #      else
  792 #        ifndef LACKS_SYS_PARAM_H
  793 #          include <sys/param.h>
  794 #        endif
  795 #        ifdef EXEC_PAGESIZE
  796 #          define malloc_getpagesize EXEC_PAGESIZE
  797 #        else
  798 #          ifdef NBPG
  799 #            ifndef CLSIZE
  800 #              define malloc_getpagesize NBPG
  801 #            else
  802 #              define malloc_getpagesize (NBPG * CLSIZE)
  803 #            endif
  804 #          else
  805 #            ifdef NBPC
  806 #              define malloc_getpagesize NBPC
  807 #            else
  808 #              ifdef PAGESIZE
  809 #                define malloc_getpagesize PAGESIZE
  810 #              else /* just guess */
  811 #                define malloc_getpagesize (4096) 
  812 #              endif
  813 #            endif
  814 #          endif
  815 #        endif
  816 #      endif
  817 #    endif
  818 #  endif
  819 #endif
  820 
  821 /*
  822   This version of malloc supports the standard SVID/XPG mallinfo
  823   routine that returns a struct containing usage properties and
  824   statistics. It should work on any SVID/XPG compliant system that has
  825   a /usr/include/malloc.h defining struct mallinfo. (If you'd like to
  826   install such a thing yourself, cut out the preliminary declarations
  827   as described above and below and save them in a malloc.h file. But
  828   there's no compelling reason to bother to do this.)
  829 
  830   The main declaration needed is the mallinfo struct that is returned
  831   (by-copy) by mallinfo().  The SVID/XPG malloinfo struct contains a
  832   bunch of fields that are not even meaningful in this version of
  833   malloc.  These fields are are instead filled by mallinfo() with
  834   other numbers that might be of interest.
  835 
  836   HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
  837   /usr/include/malloc.h file that includes a declaration of struct
  838   mallinfo.  If so, it is included; else an SVID2/XPG2 compliant
  839   version is declared below.  These must be precisely the same for
  840   mallinfo() to work.  The original SVID version of this struct,
  841   defined on most systems with mallinfo, declares all fields as
  842   ints. But some others define as unsigned long. If your system
  843   defines the fields using a type of different width than listed here,
  844   you must #include your system version and #define
  845   HAVE_USR_INCLUDE_MALLOC_H.
  846 */
  847 
  848 /* #define HAVE_USR_INCLUDE_MALLOC_H */
  849 
  850 #ifdef HAVE_USR_INCLUDE_MALLOC_H
  851 #include "/usr/include/malloc.h"
  852 #else
  853 
  854 /* SVID2/XPG mallinfo structure */
  855 
  856 struct mallinfo {
  857   int arena;    /* non-mmapped space allocated from system */
  858   int ordblks;  /* number of free chunks */
  859   int smblks;   /* number of fastbin blocks */
  860   int hblks;    /* number of mmapped regions */
  861   int hblkhd;   /* space in mmapped regions */
  862   int usmblks;  /* maximum total allocated space */
  863   int fsmblks;  /* space available in freed fastbin blocks */
  864   int uordblks; /* total allocated space */
  865   int fordblks; /* total free space */
  866   int keepcost; /* top-most, releasable (via malloc_trim) space */
  867 };
  868 
  869 /*
  870   SVID/XPG defines four standard parameter numbers for mallopt,
  871   normally defined in malloc.h.  Only one of these (M_MXFAST) is used
  872   in this malloc. The others (M_NLBLKS, M_GRAIN, M_KEEP) don't apply,
  873   so setting them has no effect. But this malloc also supports other
  874   options in mallopt described below.
  875 */
  876 #endif
  877 
  878 
  879 /* ---------- description of public routines ------------ */
  880 
  881 /*
  882   malloc(size_t n)
  883   Returns a pointer to a newly allocated chunk of at least n bytes, or null
  884   if no space is available. Additionally, on failure, errno is
  885   set to ENOMEM on ANSI C systems.
  886 
  887   If n is zero, malloc returns a minumum-sized chunk. (The minimum
  888   size is 16 bytes on most 32bit systems, and 24 or 32 bytes on 64bit
  889   systems.)  On most systems, size_t is an unsigned type, so calls
  890   with negative arguments are interpreted as requests for huge amounts
  891   of space, which will often fail. The maximum supported value of n
  892   differs across systems, but is in all cases less than the maximum
  893   representable value of a size_t.
  894 */
  895 #if __STD_C
  896 Void_t*  public_mALLOc(size_t);
  897 #else
  898 Void_t*  public_mALLOc();
  899 #endif
  900 
  901 /*
  902   free(Void_t* p)
  903   Releases the chunk of memory pointed to by p, that had been previously
  904   allocated using malloc or a related routine such as realloc.
  905   It has no effect if p is null. It can have arbitrary (i.e., bad!)
  906   effects if p has already been freed.
  907 
  908   Unless disabled (using mallopt), freeing very large spaces will
  909   when possible, automatically trigger operations that give
  910   back unused memory to the system, thus reducing program footprint.
  911 */
  912 #if __STD_C
  913 void     public_fREe(Void_t*);
  914 #else
  915 void     public_fREe();
  916 #endif
  917 
  918 /*
  919   calloc(size_t n_elements, size_t element_size);
  920   Returns a pointer to n_elements * element_size bytes, with all locations
  921   set to zero.
  922 */
  923 #if __STD_C
  924 Void_t*  public_cALLOc(size_t, size_t);
  925 #else
  926 Void_t*  public_cALLOc();
  927 #endif
  928 
  929 /*
  930   realloc(Void_t* p, size_t n)
  931   Returns a pointer to a chunk of size n that contains the same data
  932   as does chunk p up to the minimum of (n, p's size) bytes, or null
  933   if no space is available. 
  934 
  935   The returned pointer may or may not be the same as p. The algorithm
  936   prefers extending p when possible, otherwise it employs the
  937   equivalent of a malloc-copy-free sequence.
  938 
  939   If p is null, realloc is equivalent to malloc.  
  940 
  941   If space is not available, realloc returns null, errno is set (if on
  942   ANSI) and p is NOT freed.
  943 
  944   if n is for fewer bytes than already held by p, the newly unused
  945   space is lopped off and freed if possible.  Unless the #define
  946   REALLOC_ZERO_BYTES_FREES is set, realloc with a size argument of
  947   zero (re)allocates a minimum-sized chunk.
  948 
  949   Large chunks that were internally obtained via mmap will always
  950   be reallocated using malloc-copy-free sequences unless
  951   the system supports MREMAP (currently only linux).
  952 
  953   The old unix realloc convention of allowing the last-free'd chunk
  954   to be used as an argument to realloc is not supported.
  955 */
  956 #if __STD_C
  957 Void_t*  public_rEALLOc(Void_t*, size_t);
  958 #else
  959 Void_t*  public_rEALLOc();
  960 #endif
  961 
  962 /*
  963   memalign(size_t alignment, size_t n);
  964   Returns a pointer to a newly allocated chunk of n bytes, aligned
  965   in accord with the alignment argument.
  966 
  967   The alignment argument should be a power of two. If the argument is
  968   not a power of two, the nearest greater power is used.
  969   8-byte alignment is guaranteed by normal malloc calls, so don't
  970   bother calling memalign with an argument of 8 or less.
  971 
  972   Overreliance on memalign is a sure way to fragment space.
  973 */
  974 #if __STD_C
  975 Void_t*  public_mEMALIGn(size_t, size_t);
  976 #else
  977 Void_t*  public_mEMALIGn();
  978 #endif
  979 
  980 /*
  981   valloc(size_t n);
  982   Equivalent to memalign(pagesize, n), where pagesize is the page
  983   size of the system. If the pagesize is unknown, 4096 is used.
  984 */
  985 #if __STD_C
  986 Void_t*  public_vALLOc(size_t);
  987 #else
  988 Void_t*  public_vALLOc();
  989 #endif
  990 
  991 
  992 
  993 /*
  994   mallopt(int parameter_number, int parameter_value)
  995   Sets tunable parameters The format is to provide a
  996   (parameter-number, parameter-value) pair.  mallopt then sets the
  997   corresponding parameter to the argument value if it can (i.e., so
  998   long as the value is meaningful), and returns 1 if successful else
  999   0.  SVID/XPG/ANSI defines four standard param numbers for mallopt,
 1000   normally defined in malloc.h.  Only one of these (M_MXFAST) is used
 1001   in this malloc. The others (M_NLBLKS, M_GRAIN, M_KEEP) don't apply,
 1002   so setting them has no effect. But this malloc also supports four
 1003   other options in mallopt. See below for details.  Briefly, supported
 1004   parameters are as follows (listed defaults are for "typical"
 1005   configurations).
 1006 
 1007   Symbol            param #   default    allowed param values
 1008   M_MXFAST          1         64         0-80  (0 disables fastbins)
 1009   M_TRIM_THRESHOLD -1         256*1024   any   (-1U disables trimming)
 1010   M_TOP_PAD        -2         0          any  
 1011   M_MMAP_THRESHOLD -3         256*1024   any   (or 0 if no MMAP support)
 1012   M_MMAP_MAX       -4         65536      any   (0 disables use of mmap)
 1013 */
 1014 #if __STD_C
 1015 int      public_mALLOPt(int, int);
 1016 #else
 1017 int      public_mALLOPt();
 1018 #endif
 1019 
 1020 
 1021 /*
 1022   mallinfo()
 1023   Returns (by copy) a struct containing various summary statistics:
 1024 
 1025   arena:     current total non-mmapped bytes allocated from system 
 1026   ordblks:   the number of free chunks 
 1027   smblks:    the number of fastbin blocks (i.e., small chunks that
 1028                have been freed but not use resused or consolidated)
 1029   hblks:     current number of mmapped regions 
 1030   hblkhd:    total bytes held in mmapped regions 
 1031   usmblks:   the maximum total allocated space. This will be greater
 1032                 than current total if trimming has occurred.
 1033   fsmblks:   total bytes held in fastbin blocks 
 1034   uordblks:  current total allocated space (normal or mmapped)
 1035   fordblks:  total free space 
 1036   keepcost:  the maximum number of bytes that could ideally be released
 1037                back to system via malloc_trim. ("ideally" means that
 1038                it ignores page restrictions etc.)
 1039 
 1040   Because these fields are ints, but internal bookkeeping may
 1041   be kept as longs, the reported values may wrap around zero and 
 1042   thus be inaccurate.
 1043 */
 1044 #if __STD_C
 1045 struct mallinfo public_mALLINFo(void);
 1046 #else
 1047 struct mallinfo public_mALLINFo();
 1048 #endif
 1049 
 1050 /*
 1051   independent_calloc(size_t n_elements, size_t element_size, Void_t* chunks[]);
 1052 
 1053   independent_calloc is similar to calloc, but instead of returning a
 1054   single cleared space, it returns an array of pointers to n_elements
 1055   independent elements that can hold contents of size elem_size, each
 1056   of which starts out cleared, and can be independently freed,
 1057   realloc'ed etc. The elements are guaranteed to be adjacently
 1058   allocated (this is not guaranteed to occur with multiple callocs or
 1059   mallocs), which may also improve cache locality in some
 1060   applications.
 1061 
 1062   The "chunks" argument is optional (i.e., may be null, which is
 1063   probably the most typical usage). If it is null, the returned array
 1064   is itself dynamically allocated and should also be freed when it is
 1065   no longer needed. Otherwise, the chunks array must be of at least
 1066   n_elements in length. It is filled in with the pointers to the
 1067   chunks.
 1068 
 1069   In either case, independent_calloc returns this pointer array, or
 1070   null if the allocation failed.  If n_elements is zero and "chunks"
 1071   is null, it returns a chunk representing an array with zero elements
 1072   (which should be freed if not wanted).
 1073 
 1074   Each element must be individually freed when it is no longer
 1075   needed. If you'd like to instead be able to free all at once, you
 1076   should instead use regular calloc and assign pointers into this
 1077   space to represent elements.  (In this case though, you cannot
 1078   independently free elements.)
 1079   
 1080   independent_calloc simplifies and speeds up implementations of many
 1081   kinds of pools.  It may also be useful when constructing large data
 1082   structures that initially have a fixed number of fixed-sized nodes,
 1083   but the number is not known at compile time, and some of the nodes
 1084   may later need to be freed. For example:
 1085 
 1086   struct Node { int item; struct Node* next; };
 1087   
 1088   struct Node* build_list() {
 1089     struct Node** pool;
 1090     int n = read_number_of_nodes_needed();
 1091     if (n <= 0) return 0;
 1092     pool = (struct Node**)(independent_calloc(n, sizeof(struct Node), 0);
 1093     if (pool == 0) die(); 
 1094     // organize into a linked list... 
 1095     struct Node* first = pool[0];
 1096     for (i = 0; i < n-1; ++i) 
 1097       pool[i]->next = pool[i+1];
 1098     free(pool);     // Can now free the array (or not, if it is needed later)
 1099     return first;
 1100   }
 1101 */
 1102 #if __STD_C
 1103 Void_t** public_iCALLOc(size_t, size_t, Void_t**);
 1104 #else
 1105 Void_t** public_iCALLOc();
 1106 #endif
 1107 
 1108 /*
 1109   independent_comalloc(size_t n_elements, size_t sizes[], Void_t* chunks[]);
 1110 
 1111   independent_comalloc allocates, all at once, a set of n_elements
 1112   chunks with sizes indicated in the "sizes" array.    It returns
 1113   an array of pointers to these elements, each of which can be
 1114   independently freed, realloc'ed etc. The elements are guaranteed to
 1115   be adjacently allocated (this is not guaranteed to occur with
 1116   multiple callocs or mallocs), which may also improve cache locality
 1117   in some applications.
 1118 
 1119   The "chunks" argument is optional (i.e., may be null). If it is null
 1120   the returned array is itself dynamically allocated and should also
 1121   be freed when it is no longer needed. Otherwise, the chunks array
 1122   must be of at least n_elements in length. It is filled in with the
 1123   pointers to the chunks.
 1124 
 1125   In either case, independent_comalloc returns this pointer array, or
 1126   null if the allocation failed.  If n_elements is zero and chunks is
 1127   null, it returns a chunk representing an array with zero elements
 1128   (which should be freed if not wanted).
 1129   
 1130   Each element must be individually freed when it is no longer
 1131   needed. If you'd like to instead be able to free all at once, you
 1132   should instead use a single regular malloc, and assign pointers at
 1133   particular offsets in the aggregate space. (In this case though, you 
 1134   cannot independently free elements.)
 1135 
 1136   independent_comallac differs from independent_calloc in that each
 1137   element may have a different size, and also that it does not
 1138   automatically clear elements.
 1139 
 1140   independent_comalloc can be used to speed up allocation in cases
 1141   where several structs or objects must always be allocated at the
 1142   same time.  For example:
 1143 
 1144   struct Head { ... }
 1145   struct Foot { ... }
 1146 
 1147   void send_message(char* msg) {
 1148     int msglen = strlen(msg);
 1149     size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
 1150     void* chunks[3];
 1151     if (independent_comalloc(3, sizes, chunks) == 0)
 1152       die();
 1153     struct Head* head = (struct Head*)(chunks[0]);
 1154     char*        body = (char*)(chunks[1]);
 1155     struct Foot* foot = (struct Foot*)(chunks[2]);
 1156     // ...
 1157   }
 1158 
 1159   In general though, independent_comalloc is worth using only for
 1160   larger values of n_elements. For small values, you probably won't
 1161   detect enough difference from series of malloc calls to bother.
 1162 
 1163   Overuse of independent_comalloc can increase overall memory usage,
 1164   since it cannot reuse existing noncontiguous small chunks that
 1165   might be available for some of the elements.
 1166 */
 1167 #if __STD_C
 1168 Void_t** public_iCOMALLOc(size_t, size_t*, Void_t**);
 1169 #else
 1170 Void_t** public_iCOMALLOc();
 1171 #endif
 1172 
 1173 
 1174 /*
 1175   pvalloc(size_t n);
 1176   Equivalent to valloc(minimum-page-that-holds(n)), that is,
 1177   round up n to nearest pagesize.
 1178  */
 1179 #if __STD_C
 1180 Void_t*  public_pVALLOc(size_t);
 1181 #else
 1182 Void_t*  public_pVALLOc();
 1183 #endif
 1184 
 1185 /*
 1186   cfree(Void_t* p);
 1187   Equivalent to free(p).
 1188 
 1189   cfree is needed/defined on some systems that pair it with calloc,
 1190   for odd historical reasons (such as: cfree is used in example 
 1191   code in the first edition of K&R).
 1192 */
 1193 #if __STD_C
 1194 void     public_cFREe(Void_t*);
 1195 #else
 1196 void     public_cFREe();
 1197 #endif
 1198 
 1199 /*
 1200   malloc_trim(size_t pad);
 1201 
 1202   If possible, gives memory back to the system (via negative
 1203   arguments to sbrk) if there is unused memory at the `high' end of
 1204   the malloc pool. You can call this after freeing large blocks of
 1205   memory to potentially reduce the system-level memory requirements
 1206   of a program. However, it cannot guarantee to reduce memory. Under
 1207   some allocation patterns, some large free blocks of memory will be
 1208   locked between two used chunks, so they cannot be given back to
 1209   the system.
 1210   
 1211   The `pad' argument to malloc_trim represents the amount of free
 1212   trailing space to leave untrimmed. If this argument is zero,
 1213   only the minimum amount of memory to maintain internal data
 1214   structures will be left (one page or less). Non-zero arguments
 1215   can be supplied to maintain enough trailing space to service
 1216   future expected allocations without having to re-obtain memory
 1217   from the system.
 1218   
 1219   Malloc_trim returns 1 if it actually released any memory, else 0.
 1220   On systems that do not support "negative sbrks", it will always
 1221   rreturn 0.
 1222 */
 1223 #if __STD_C
 1224 int      public_mTRIm(size_t);
 1225 #else
 1226 int      public_mTRIm();
 1227 #endif
 1228 
 1229 /*
 1230   malloc_usable_size(Void_t* p);
 1231 
 1232   Returns the number of bytes you can actually use in
 1233   an allocated chunk, which may be more than you requested (although
 1234   often not) due to alignment and minimum size constraints.
 1235   You can use this many bytes without worrying about
 1236   overwriting other allocated objects. This is not a particularly great
 1237   programming practice. malloc_usable_size can be more useful in
 1238   debugging and assertions, for example:
 1239 
 1240   p = malloc(n);
 1241   assert(malloc_usable_size(p) >= 256);
 1242 
 1243 */
 1244 #if __STD_C
 1245 size_t   public_mUSABLe(Void_t*);
 1246 #else
 1247 size_t   public_mUSABLe();
 1248 #endif
 1249 
 1250 /*
 1251   malloc_stats();
 1252   Prints on stderr the amount of space obtained from the system (both
 1253   via sbrk and mmap), the maximum amount (which may be more than
 1254   current if malloc_trim and/or munmap got called), and the current
 1255   number of bytes allocated via malloc (or realloc, etc) but not yet
 1256   freed. Note that this is the number of bytes allocated, not the
 1257   number requested. It will be larger than the number requested
 1258   because of alignment and bookkeeping overhead. Because it includes
 1259   alignment wastage as being in use, this figure may be greater than
 1260   zero even when no user-level chunks are allocated.
 1261 
 1262   The reported current and maximum system memory can be inaccurate if
 1263   a program makes other calls to system memory allocation functions
 1264   (normally sbrk) outside of malloc.
 1265 
 1266   malloc_stats prints only the most commonly interesting statistics.
 1267   More information can be obtained by calling mallinfo.
 1268 
 1269 */
 1270 #if __STD_C
 1271 void     public_mSTATs();
 1272 #else
 1273 void     public_mSTATs();
 1274 #endif
 1275 
 1276 /* mallopt tuning options */
 1277 
 1278 /*
 1279   M_MXFAST is the maximum request size used for "fastbins", special bins
 1280   that hold returned chunks without consolidating their spaces. This
 1281   enables future requests for chunks of the same size to be handled
 1282   very quickly, but can increase fragmentation, and thus increase the
 1283   overall memory footprint of a program.
 1284 
 1285   This malloc manages fastbins very conservatively yet still
 1286   efficiently, so fragmentation is rarely a problem for values less
 1287   than or equal to the default.  The maximum supported value of MXFAST
 1288   is 80. You wouldn't want it any higher than this anyway.  Fastbins
 1289   are designed especially for use with many small structs, objects or
 1290   strings -- the default handles structs/objects/arrays with sizes up
 1291   to 16 4byte fields, or small strings representing words, tokens,
 1292   etc. Using fastbins for larger objects normally worsens
 1293   fragmentation without improving speed.
 1294 
 1295   M_MXFAST is set in REQUEST size units. It is internally used in
 1296   chunksize units, which adds padding and alignment.  You can reduce
 1297   M_MXFAST to 0 to disable all use of fastbins.  This causes the malloc
 1298   algorithm to be a closer approximation of fifo-best-fit in all cases,
 1299   not just for larger requests, but will generally cause it to be
 1300   slower.
 1301 */
 1302 
 1303 
 1304 /* M_MXFAST is a standard SVID/XPG tuning option, usually listed in malloc.h */
 1305 #ifndef M_MXFAST
 1306 #define M_MXFAST            1    
 1307 #endif
 1308 
 1309 #ifndef DEFAULT_MXFAST
 1310 #define DEFAULT_MXFAST     64
 1311 #endif
 1312 
 1313 
 1314 /*
 1315   M_TRIM_THRESHOLD is the maximum amount of unused top-most memory
 1316   to keep before releasing via malloc_trim in free().
 1317 
 1318   Automatic trimming is mainly useful in long-lived programs.
 1319   Because trimming via sbrk can be slow on some systems, and can
 1320   sometimes be wasteful (in cases where programs immediately
 1321   afterward allocate more large chunks) the value should be high
 1322   enough so that your overall system performance would improve by
 1323   releasing this much memory.
 1324 
 1325   The trim threshold and the mmap control parameters (see below)
 1326   can be traded off with one another. Trimming and mmapping are
 1327   two different ways of releasing unused memory back to the
 1328   system. Between these two, it is often possible to keep
 1329   system-level demands of a long-lived program down to a bare
 1330   minimum. For example, in one test suite of sessions measuring
 1331   the XF86 X server on Linux, using a trim threshold of 128K and a
 1332   mmap threshold of 192K led to near-minimal long term resource
 1333   consumption.
 1334 
 1335   If you are using this malloc in a long-lived program, it should
 1336   pay to experiment with these values.  As a rough guide, you
 1337   might set to a value close to the average size of a process
 1338   (program) running on your system.  Releasing this much memory
 1339   would allow such a process to run in memory.  Generally, it's
 1340   worth it to tune for trimming rather tham memory mapping when a
 1341   program undergoes phases where several large chunks are
 1342   allocated and released in ways that can reuse each other's
 1343   storage, perhaps mixed with phases where there are no such
 1344   chunks at all.  And in well-behaved long-lived programs,
 1345   controlling release of large blocks via trimming versus mapping
 1346   is usually faster.
 1347 
 1348   However, in most programs, these parameters serve mainly as
 1349   protection against the system-level effects of carrying around
 1350   massive amounts of unneeded memory. Since frequent calls to
 1351   sbrk, mmap, and munmap otherwise degrade performance, the default
 1352   parameters are set to relatively high values that serve only as
 1353   safeguards.
 1354 
 1355   The trim value must be greater than page size to have any useful
 1356   effect.  To disable trimming completely, you can set to 
 1357   (unsigned long)(-1)
 1358 
 1359   Trim settings interact with fastbin (MXFAST) settings: Unless
 1360   TRIM_FASTBINS is defined, automatic trimming never takes place upon
 1361   freeing a chunk with size less than or equal to MXFAST. Trimming is
 1362   instead delayed until subsequent freeing of larger chunks. However,
 1363   you can still force an attempted trim by calling malloc_trim.
 1364 
 1365   Also, trimming is not generally possible in cases where
 1366   the main arena is obtained via mmap.
 1367 
 1368   Note that the trick some people use of mallocing a huge space and
 1369   then freeing it at program startup, in an attempt to reserve system
 1370   memory, doesn't have the intended effect under automatic trimming,
 1371   since that memory will immediately be returned to the system.
 1372 */
 1373 
 1374 #define M_TRIM_THRESHOLD       -1
 1375 
 1376 #ifndef DEFAULT_TRIM_THRESHOLD
 1377 #define DEFAULT_TRIM_THRESHOLD (256 * 1024)
 1378 #endif
 1379 
 1380 /*
 1381   M_TOP_PAD is the amount of extra `padding' space to allocate or
 1382   retain whenever sbrk is called. It is used in two ways internally:
 1383 
 1384   * When sbrk is called to extend the top of the arena to satisfy
 1385   a new malloc request, this much padding is added to the sbrk
 1386   request.
 1387 
 1388   * When malloc_trim is called automatically from free(),
 1389   it is used as the `pad' argument.
 1390 
 1391   In both cases, the actual amount of padding is rounded
 1392   so that the end of the arena is always a system page boundary.
 1393 
 1394   The main reason for using padding is to avoid calling sbrk so
 1395   often. Having even a small pad greatly reduces the likelihood
 1396   that nearly every malloc request during program start-up (or
 1397   after trimming) will invoke sbrk, which needlessly wastes
 1398   time.
 1399 
 1400   Automatic rounding-up to page-size units is normally sufficient
 1401   to avoid measurable overhead, so the default is 0.  However, in
 1402   systems where sbrk is relatively slow, it can pay to increase
 1403   this value, at the expense of carrying around more memory than
 1404   the program needs.
 1405 */
 1406 
 1407 #define M_TOP_PAD              -2
 1408 
 1409 #ifndef DEFAULT_TOP_PAD
 1410 #define DEFAULT_TOP_PAD        (0)
 1411 #endif
 1412 
 1413 /*
 1414   M_MMAP_THRESHOLD is the request size threshold for using mmap()
 1415   to service a request. Requests of at least this size that cannot
 1416   be allocated using already-existing space will be serviced via mmap.
 1417   (If enough normal freed space already exists it is used instead.)
 1418 
 1419   Using mmap segregates relatively large chunks of memory so that
 1420   they can be individually obtained and released from the host
 1421   system. A request serviced through mmap is never reused by any
 1422   other request (at least not directly; the system may just so
 1423   happen to remap successive requests to the same locations).
 1424 
 1425   Segregating space in this way has the benefits that:
 1426 
 1427    1. Mmapped space can ALWAYS be individually released back 
 1428       to the system, which helps keep the system level memory 
 1429       demands of a long-lived program low. 
 1430    2. Mapped memory can never become `locked' between
 1431       other chunks, as can happen with normally allocated chunks, which
 1432       means that even trimming via malloc_trim would not release them.
 1433    3. On some systems with "holes" in address spaces, mmap can obtain
 1434       memory that sbrk cannot.
 1435 
 1436   However, it has the disadvantages that:
 1437 
 1438    1. The space cannot be reclaimed, consolidated, and then
 1439       used to service later requests, as happens with normal chunks.
 1440    2. It can lead to more wastage because of mmap page alignment
 1441       requirements
 1442    3. It causes malloc performance to be more dependent on host
 1443       system memory management support routines which may vary in
 1444       implementation quality and may impose arbitrary
 1445       limitations. Generally, servicing a request via normal
 1446       malloc steps is faster than going through a system's mmap.
 1447 
 1448   The advantages of mmap nearly always outweigh disadvantages for
 1449   "large" chunks, but the value of "large" varies across systems.  The
 1450   default is an empirically derived value that works well in most
 1451   systems.
 1452 */
 1453 
 1454 #define M_MMAP_THRESHOLD      -3
 1455 
 1456 #ifndef DEFAULT_MMAP_THRESHOLD
 1457 #define DEFAULT_MMAP_THRESHOLD (256 * 1024)
 1458 #endif
 1459 
 1460 /*
 1461   M_MMAP_MAX is the maximum number of requests to simultaneously
 1462   service using mmap. This parameter exists because
 1463 . Some systems have a limited number of internal tables for
 1464   use by mmap, and using more than a few of them may degrade
 1465   performance.
 1466 
 1467   The default is set to a value that serves only as a safeguard.
 1468   Setting to 0 disables use of mmap for servicing large requests.  If
 1469   HAVE_MMAP is not set, the default value is 0, and attempts to set it
 1470   to non-zero values in mallopt will fail.
 1471 */
 1472 
 1473 #define M_MMAP_MAX             -4
 1474 
 1475 #ifndef DEFAULT_MMAP_MAX
 1476 #if HAVE_MMAP
 1477 #define DEFAULT_MMAP_MAX       (65536)
 1478 #else
 1479 #define DEFAULT_MMAP_MAX       (0)
 1480 #endif
 1481 #endif
 1482 
 1483 #ifdef __cplusplus
 1484 };  /* end of extern "C" */
 1485 #endif
 1486 
 1487 /* 
 1488   ========================================================================
 1489   To make a fully customizable malloc.h header file, cut everything
 1490   above this line, put into file malloc.h, edit to suit, and #include it 
 1491   on the next line, as well as in programs that use this malloc.
 1492   ========================================================================
 1493 */
 1494 
 1495 /* #include "malloc.h" */
 1496 
 1497 /* --------------------- public wrappers ---------------------- */
 1498 
 1499 #ifdef USE_PUBLIC_MALLOC_WRAPPERS
 1500 
 1501 /* Declare all routines as internal */
 1502 #if __STD_C
 1503 static Void_t*  mALLOc(size_t);
 1504 static void     fREe(Void_t*);
 1505 static Void_t*  rEALLOc(Void_t*, size_t);
 1506 static Void_t*  mEMALIGn(size_t, size_t);
 1507 static Void_t*  vALLOc(size_t);
 1508 static Void_t*  pVALLOc(size_t);
 1509 static Void_t*  cALLOc(size_t, size_t);
 1510 static Void_t** iCALLOc(size_t, size_t, Void_t**);
 1511 static Void_t** iCOMALLOc(size_t, size_t*, Void_t**);
 1512 static void     cFREe(Void_t*);
 1513 static int      mTRIm(size_t);
 1514 static size_t   mUSABLe(Void_t*);
 1515 static void     mSTATs();
 1516 static int      mALLOPt(int, int);
 1517 static struct mallinfo mALLINFo(void);
 1518 #else
 1519 static Void_t*  mALLOc();
 1520 static void     fREe();
 1521 static Void_t*  rEALLOc();
 1522 static Void_t*  mEMALIGn();
 1523 static Void_t*  vALLOc();
 1524 static Void_t*  pVALLOc();
 1525 static Void_t*  cALLOc();
 1526 static Void_t** iCALLOc();
 1527 static Void_t** iCOMALLOc();
 1528 static void     cFREe();
 1529 static int      mTRIm();
 1530 static size_t   mUSABLe();
 1531 static void     mSTATs();
 1532 static int      mALLOPt();
 1533 static struct mallinfo mALLINFo();
 1534 #endif
 1535 
 1536 /*
 1537   MALLOC_PREACTION and MALLOC_POSTACTION should be
 1538   defined to return 0 on success, and nonzero on failure.
 1539   The return value of MALLOC_POSTACTION is currently ignored
 1540   in wrapper functions since there is no reasonable default
 1541   action to take on failure.
 1542 */
 1543 
 1544 
 1545 #ifdef USE_MALLOC_LOCK
 1546 
 1547 #ifdef WIN32
 1548 
 1549 static int mALLOC_MUTEx;
 1550 #define MALLOC_PREACTION   slwait(&mALLOC_MUTEx)
 1551 #define MALLOC_POSTACTION  slrelease(&mALLOC_MUTEx)
 1552 
 1553 #else
 1554 
 1555 #include <pthread.h>
 1556 
 1557 static pthread_mutex_t mALLOC_MUTEx = PTHREAD_MUTEX_INITIALIZER;
 1558 
 1559 #define MALLOC_PREACTION   pthread_mutex_lock(&mALLOC_MUTEx)
 1560 #define MALLOC_POSTACTION  pthread_mutex_unlock(&mALLOC_MUTEx)
 1561 
 1562 #endif /* USE_MALLOC_LOCK */
 1563 
 1564 #else
 1565 
 1566 /* Substitute anything you like for these */
 1567 
 1568 #define MALLOC_PREACTION   (0)
 1569 #define MALLOC_POSTACTION  (0)
 1570 
 1571 #endif
 1572 
 1573 Void_t* public_mALLOc(size_t bytes) {
 1574   Void_t* m;
 1575   if (MALLOC_PREACTION != 0) {
 1576     return 0;
 1577   }
 1578   m = mALLOc(bytes);
 1579   if (MALLOC_POSTACTION != 0) {
 1580   }
 1581   return m;
 1582 }
 1583 
 1584 void public_fREe(Void_t* m) {
 1585   if (MALLOC_PREACTION != 0) {
 1586     return;
 1587   }
 1588   fREe(m);
 1589   if (MALLOC_POSTACTION != 0) {
 1590   }
 1591 }
 1592 
 1593 Void_t* public_rEALLOc(Void_t* m, size_t bytes) {
 1594   if (MALLOC_PREACTION != 0) {
 1595     return 0;
 1596   }
 1597   m = rEALLOc(m, bytes);
 1598   if (MALLOC_POSTACTION != 0) {
 1599   }
 1600   return m;
 1601 }
 1602 
 1603 Void_t* public_mEMALIGn(size_t alignment, size_t bytes) {
 1604   Void_t* m;
 1605   if (MALLOC_PREACTION != 0) {
 1606     return 0;
 1607   }
 1608   m = mEMALIGn(alignment, bytes);
 1609   if (MALLOC_POSTACTION != 0) {
 1610   }
 1611   return m;
 1612 }
 1613 
 1614 Void_t* public_vALLOc(size_t bytes) {
 1615   Void_t* m;
 1616   if (MALLOC_PREACTION != 0) {
 1617     return 0;
 1618   }
 1619   m = vALLOc(bytes);
 1620   if (MALLOC_POSTACTION != 0) {
 1621   }
 1622   return m;
 1623 }
 1624 
 1625 Void_t* public_pVALLOc(size_t bytes) {
 1626   Void_t* m;
 1627   if (MALLOC_PREACTION != 0) {
 1628     return 0;
 1629   }
 1630   m = pVALLOc(bytes);
 1631   if (MALLOC_POSTACTION != 0) {
 1632   }
 1633   return m;
 1634 }
 1635 
 1636 Void_t* public_cALLOc(size_t n, size_t elem_size) {
 1637   Void_t* m;
 1638   if (MALLOC_PREACTION != 0) {
 1639     return 0;
 1640   }
 1641   m = cALLOc(n, elem_size);
 1642   if (MALLOC_POSTACTION != 0) {
 1643   }
 1644   return m;
 1645 }
 1646 
 1647 
 1648 Void_t** public_iCALLOc(size_t n, size_t elem_size, Void_t** chunks) {
 1649   Void_t** m;
 1650   if (MALLOC_PREACTION != 0) {
 1651     return 0;
 1652   }
 1653   m = iCALLOc(n, elem_size, chunks);
 1654   if (MALLOC_POSTACTION != 0) {
 1655   }
 1656   return m;
 1657 }
 1658 
 1659 Void_t** public_iCOMALLOc(size_t n, size_t sizes[], Void_t** chunks) {
 1660   Void_t** m;
 1661   if (MALLOC_PREACTION != 0) {
 1662     return 0;
 1663   }
 1664   m = iCOMALLOc(n, sizes, chunks);
 1665   if (MALLOC_POSTACTION != 0) {
 1666   }
 1667   return m;
 1668 }
 1669 
 1670 void public_cFREe(Void_t* m) {
 1671   if (MALLOC_PREACTION != 0) {
 1672     return;
 1673   }
 1674   cFREe(m);
 1675   if (MALLOC_POSTACTION != 0) {
 1676   }
 1677 }
 1678 
 1679 int public_mTRIm(size_t s) {
 1680   int result;
 1681   if (MALLOC_PREACTION != 0) {
 1682     return 0;
 1683   }
 1684   result = mTRIm(s);
 1685   if (MALLOC_POSTACTION != 0) {
 1686   }
 1687   return result;
 1688 }
 1689 
 1690 size_t public_mUSABLe(Void_t* m) {
 1691   size_t result;
 1692   if (MALLOC_PREACTION != 0) {
 1693     return 0;
 1694   }
 1695   result = mUSABLe(m);
 1696   if (MALLOC_POSTACTION != 0) {
 1697   }
 1698   return result;
 1699 }
 1700 
 1701 void public_mSTATs() {
 1702   if (MALLOC_PREACTION != 0) {
 1703     return;
 1704   }
 1705   mSTATs();
 1706   if (MALLOC_POSTACTION != 0) {
 1707   }
 1708 }
 1709 
 1710 struct mallinfo public_mALLINFo() {
 1711   struct mallinfo m;
 1712   if (MALLOC_PREACTION != 0) {
 1713     struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
 1714     return nm;
 1715   }
 1716   m = mALLINFo();
 1717   if (MALLOC_POSTACTION != 0) {
 1718   }
 1719   return m;
 1720 }
 1721 
 1722 int public_mALLOPt(int p, int v) {
 1723   int result;
 1724   if (MALLOC_PREACTION != 0) {
 1725     return 0;
 1726   }
 1727   result = mALLOPt(p, v);
 1728   if (MALLOC_POSTACTION != 0) {
 1729   }
 1730   return result;
 1731 }
 1732 
 1733 #endif
 1734 
 1735 
 1736 
 1737 /* ------------- Optional versions of memcopy ---------------- */
 1738 
 1739 
 1740 #if USE_MEMCPY
 1741 
 1742 /* 
 1743   Note: memcpy is ONLY invoked with non-overlapping regions,
 1744   so the (usually slower) memmove is not needed.
 1745 */
 1746 
 1747 #define MALLOC_COPY(dest, src, nbytes)  memcpy(dest, src, nbytes)
 1748 #define MALLOC_ZERO(dest, nbytes)       memset(dest, 0,   nbytes)
 1749 
 1750 #else /* !USE_MEMCPY */
 1751 
 1752 /* Use Duff's device for good zeroing/copying performance. */
 1753 
 1754 #define MALLOC_ZERO(charp, nbytes)                                            \
 1755 do {                                                                          \
 1756   INTERNAL_SIZE_T* mzp = (INTERNAL_SIZE_T*)(charp);                           \
 1757   CHUNK_SIZE_T  mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T);                     \
 1758   long mcn;                                                                   \
 1759   if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; }             \
 1760   switch (mctmp) {                                                            \
 1761     case 0: for(;;) { *mzp++ = 0;                                             \
 1762     case 7:           *mzp++ = 0;                                             \
 1763     case 6:           *mzp++ = 0;                                             \
 1764     case 5:           *mzp++ = 0;                                             \
 1765     case 4:           *mzp++ = 0;                                             \
 1766     case 3:           *mzp++ = 0;                                             \
 1767     case 2:           *mzp++ = 0;                                             \
 1768     case 1:           *mzp++ = 0; if(mcn <= 0) break; mcn--; }                \
 1769   }                                                                           \
 1770 } while(0)
 1771 
 1772 #define MALLOC_COPY(dest,src,nbytes)                                          \
 1773 do {                                                                          \
 1774   INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) src;                            \
 1775   INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) dest;                           \
 1776   CHUNK_SIZE_T  mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T);                     \
 1777   long mcn;                                                                   \
 1778   if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; }             \
 1779   switch (mctmp) {                                                            \
 1780     case 0: for(;;) { *mcdst++ = *mcsrc++;                                    \
 1781     case 7:           *mcdst++ = *mcsrc++;                                    \
 1782     case 6:           *mcdst++ = *mcsrc++;                                    \
 1783     case 5:           *mcdst++ = *mcsrc++;                                    \
 1784     case 4:           *mcdst++ = *mcsrc++;                                    \
 1785     case 3:           *mcdst++ = *mcsrc++;                                    \
 1786     case 2:           *mcdst++ = *mcsrc++;                                    \
 1787     case 1:           *mcdst++ = *mcsrc++; if(mcn <= 0) break; mcn--; }       \
 1788   }                                                                           \
 1789 } while(0)
 1790 
 1791 #endif
 1792 
 1793 /* ------------------ MMAP support ------------------  */
 1794 
 1795 
 1796 #if HAVE_MMAP
 1797 
 1798 #ifndef LACKS_FCNTL_H
 1799 #include <fcntl.h>
 1800 #endif
 1801 
 1802 #ifndef LACKS_SYS_MMAN_H
 1803 #include <sys/mman.h>
 1804 #endif
 1805 
 1806 #if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
 1807 #define MAP_ANONYMOUS MAP_ANON
 1808 #endif
 1809 
 1810 /* 
 1811    Nearly all versions of mmap support MAP_ANONYMOUS, 
 1812    so the following is unlikely to be needed, but is
 1813    supplied just in case.
 1814 */
 1815 
 1816 #ifndef MAP_ANONYMOUS
 1817 
 1818 static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
 1819 
 1820 #define MMAP(addr, size, prot, flags) ((dev_zero_fd < 0) ? \
 1821  (dev_zero_fd = open("/dev/zero", O_RDWR), \
 1822   mmap((addr), (size), (prot), (flags), dev_zero_fd, 0)) : \
 1823    mmap((addr), (size), (prot), (flags), dev_zero_fd, 0))
 1824 
 1825 #else
 1826 
 1827 #define MMAP(addr, size, prot, flags) \
 1828  (mmap((addr), (size), (prot), (flags)|MAP_ANONYMOUS, -1, 0))
 1829 
 1830 #endif
 1831 
 1832 
 1833 #endif /* HAVE_MMAP */
 1834 
 1835 
 1836 /*
 1837   -----------------------  Chunk representations -----------------------
 1838 */
 1839 
 1840 
 1841 /*
 1842   This struct declaration is misleading (but accurate and necessary).
 1843   It declares a "view" into memory allowing access to necessary
 1844   fields at known offsets from a given base. See explanation below.
 1845 */
 1846 
 1847 struct malloc_chunk {
 1848 
 1849   INTERNAL_SIZE_T      prev_size;  /* Size of previous chunk (if free).  */
 1850   INTERNAL_SIZE_T      size;       /* Size in bytes, including overhead. */
 1851 
 1852   struct malloc_chunk* fd;         /* double links -- used only if free. */
 1853   struct malloc_chunk* bk;
 1854 };
 1855 
 1856 
 1857 typedef struct malloc_chunk* mchunkptr;
 1858 
 1859 /*
 1860    malloc_chunk details:
 1861 
 1862     (The following includes lightly edited explanations by Colin Plumb.)
 1863 
 1864     Chunks of memory are maintained using a `boundary tag' method as
 1865     described in e.g., Knuth or Standish.  (See the paper by Paul
 1866     Wilson ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a
 1867     survey of such techniques.)  Sizes of free chunks are stored both
 1868     in the front of each chunk and at the end.  This makes
 1869     consolidating fragmented chunks into bigger chunks very fast.  The
 1870     size fields also hold bits representing whether chunks are free or
 1871     in use.
 1872 
 1873     An allocated chunk looks like this:
 1874 
 1875 
 1876     chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 1877             |             Size of previous chunk, if allocated            | |
 1878             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 1879             |             Size of chunk, in bytes                         |P|
 1880       mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 1881             |             User data starts here...                          .
 1882             .                                                               .
 1883             .             (malloc_usable_space() bytes)                     .
 1884             .                                                               |
 1885 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 1886             |             Size of chunk                                     |
 1887             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 1888 
 1889 
 1890     Where "chunk" is the front of the chunk for the purpose of most of
 1891     the malloc code, but "mem" is the pointer that is returned to the
 1892     user.  "Nextchunk" is the beginning of the next contiguous chunk.
 1893 
 1894     Chunks always begin on even word boundries, so the mem portion
 1895     (which is returned to the user) is also on an even word boundary, and
 1896     thus at least double-word aligned.
 1897 
 1898     Free chunks are stored in circular doubly-linked lists, and look like this:
 1899 
 1900     chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 1901             |             Size of previous chunk                            |
 1902             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 1903     `head:' |             Size of chunk, in bytes                         |P|
 1904       mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 1905             |             Forward pointer to next chunk in list             |
 1906             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 1907             |             Back pointer to previous chunk in list            |
 1908             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 1909             |             Unused space (may be 0 bytes long)                .
 1910             .                                                               .
 1911             .                                                               |
 1912 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 1913     `foot:' |             Size of chunk, in bytes                           |
 1914             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 1915 
 1916     The P (PREV_INUSE) bit, stored in the unused low-order bit of the
 1917     chunk size (which is always a multiple of two words), is an in-use
 1918     bit for the *previous* chunk.  If that bit is *clear*, then the
 1919     word before the current chunk size contains the previous chunk
 1920     size, and can be used to find the front of the previous chunk.
 1921     The very first chunk allocated always has this bit set,
 1922     preventing access to non-existent (or non-owned) memory. If
 1923     prev_inuse is set for any given chunk, then you CANNOT determine
 1924     the size of the previous chunk, and might even get a memory
 1925     addressing fault when trying to do so.
 1926 
 1927     Note that the `foot' of the current chunk is actually represented
 1928     as the prev_size of the NEXT chunk. This makes it easier to
 1929     deal with alignments etc but can be very confusing when trying
 1930     to extend or adapt this code.
 1931 
 1932     The two exceptions to all this are
 1933 
 1934      1. The special chunk `top' doesn't bother using the
 1935         trailing size field since there is no next contiguous chunk
 1936         that would have to index off it. After initialization, `top'
 1937         is forced to always exist.  If it would become less than
 1938         MINSIZE bytes long, it is replenished.
 1939 
 1940      2. Chunks allocated via mmap, which have the second-lowest-order
 1941         bit (IS_MMAPPED) set in their size fields.  Because they are
 1942         allocated one-by-one, each must contain its own trailing size field.
 1943 
 1944 */
 1945 
 1946 /*
 1947   ---------- Size and alignment checks and conversions ----------
 1948 */
 1949 
 1950 /* conversion from malloc headers to user pointers, and back */
 1951 
 1952 #define chunk2mem(p)   ((Void_t*)((char*)(p) + 2*SIZE_SZ))
 1953 #define mem2chunk(mem) ((mchunkptr)((char*)(mem) - 2*SIZE_SZ))
 1954 
 1955 /* The smallest possible chunk */
 1956 #define MIN_CHUNK_SIZE        (sizeof(struct malloc_chunk))
 1957 
 1958 /* The smallest size we can malloc is an aligned minimal chunk */
 1959 
 1960 #define MINSIZE  \
 1961   (CHUNK_SIZE_T)(((MIN_CHUNK_SIZE+MALLOC_ALIGN_MASK) & ~MALLOC_ALIGN_MASK))
 1962 
 1963 /* Check if m has acceptable alignment */
 1964 
 1965 #define aligned_OK(m)  (((PTR_UINT)((m)) & (MALLOC_ALIGN_MASK)) == 0)
 1966 
 1967 
 1968 /* 
 1969    Check if a request is so large that it would wrap around zero when
 1970    padded and aligned. To simplify some other code, the bound is made
 1971    low enough so that adding MINSIZE will also not wrap around sero.
 1972 */
 1973 
 1974 #define REQUEST_OUT_OF_RANGE(req)                                 \
 1975   ((CHUNK_SIZE_T)(req) >=                                        \
 1976    (CHUNK_SIZE_T)(INTERNAL_SIZE_T)(-2 * MINSIZE))    
 1977 
 1978 /* pad request bytes into a usable size -- internal version */
 1979 
 1980 #define request2size(req)                                         \
 1981   (((req) + SIZE_SZ + MALLOC_ALIGN_MASK < MINSIZE)  ?             \
 1982    MINSIZE :                                                      \
 1983    ((req) + SIZE_SZ + MALLOC_ALIGN_MASK) & ~MALLOC_ALIGN_MASK)
 1984 
 1985 /*  Same, except also perform argument check */
 1986 
 1987 #define checked_request2size(req, sz)                             \
 1988   if (REQUEST_OUT_OF_RANGE(req)) {                                \
 1989     MALLOC_FAILURE_ACTION;                                        \
 1990     return 0;                                                     \
 1991   }                                                               \
 1992   (sz) = request2size(req);                                              
 1993 
 1994 /*
 1995   --------------- Physical chunk operations ---------------
 1996 */
 1997 
 1998 
 1999 /* size field is or'ed with PREV_INUSE when previous adjacent chunk in use */
 2000 #define PREV_INUSE 0x1
 2001 
 2002 /* extract inuse bit of previous chunk */
 2003 #define prev_inuse(p)       ((p)->size & PREV_INUSE)
 2004 
 2005 
 2006 /* size field is or'ed with IS_MMAPPED if the chunk was obtained with mmap() */
 2007 #define IS_MMAPPED 0x2
 2008 
 2009 /* check for mmap()'ed chunk */
 2010 #define chunk_is_mmapped(p) ((p)->size & IS_MMAPPED)
 2011 
 2012 /* 
 2013   Bits to mask off when extracting size 
 2014 
 2015   Note: IS_MMAPPED is intentionally not masked off from size field in
 2016   macros for which mmapped chunks should never be seen. This should
 2017   cause helpful core dumps to occur if it is tried by accident by
 2018   people extending or adapting this malloc.
 2019 */
 2020 #define SIZE_BITS (PREV_INUSE|IS_MMAPPED)
 2021 
 2022 /* Get size, ignoring use bits */
 2023 #define chunksize(p)         ((p)->size & ~(SIZE_BITS))
 2024 
 2025 
 2026 /* Ptr to next physical malloc_chunk. */
 2027 #define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->size & ~PREV_INUSE) ))
 2028 
 2029 /* Ptr to previous physical malloc_chunk */
 2030 #define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_size) ))
 2031 
 2032 /* Treat space at ptr + offset as a chunk */
 2033 #define chunk_at_offset(p, s)  ((mchunkptr)(((char*)(p)) + (s)))
 2034 
 2035 /* extract p's inuse bit */
 2036 #define inuse(p)\
 2037 ((((mchunkptr)(((char*)(p))+((p)->size & ~PREV_INUSE)))->size) & PREV_INUSE)
 2038 
 2039 /* set/clear chunk as being inuse without otherwise disturbing */
 2040 #define set_inuse(p)\
 2041 ((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size |= PREV_INUSE
 2042 
 2043 #define clear_inuse(p)\
 2044 ((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size &= ~(PREV_INUSE)
 2045 
 2046 
 2047 /* check/set/clear inuse bits in known places */
 2048 #define inuse_bit_at_offset(p, s)\
 2049  (((mchunkptr)(((char*)(p)) + (s)))->size & PREV_INUSE)
 2050 
 2051 #define set_inuse_bit_at_offset(p, s)\
 2052  (((mchunkptr)(((char*)(p)) + (s)))->size |= PREV_INUSE)
 2053 
 2054 #define clear_inuse_bit_at_offset(p, s)\
 2055  (((mchunkptr)(((char*)(p)) + (s)))->size &= ~(PREV_INUSE))
 2056 
 2057 
 2058 /* Set size at head, without disturbing its use bit */
 2059 #define set_head_size(p, s)  ((p)->size = (((p)->size & PREV_INUSE) | (s)))
 2060 
 2061 /* Set size/use field */
 2062 #define set_head(p, s)       ((p)->size = (s))
 2063 
 2064 /* Set size at footer (only when chunk is not in use) */
 2065 #define set_foot(p, s)       (((mchunkptr)((char*)(p) + (s)))->prev_size = (s))
 2066 
 2067 
 2068 /*
 2069   -------------------- Internal data structures --------------------
 2070 
 2071    All internal state is held in an instance of malloc_state defined
 2072    below. There are no other static variables, except in two optional
 2073    cases: 
 2074    * If USE_MALLOC_LOCK is defined, the mALLOC_MUTEx declared above. 
 2075    * If HAVE_MMAP is true, but mmap doesn't support
 2076      MAP_ANONYMOUS, a dummy file descriptor for mmap.
 2077 
 2078    Beware of lots of tricks that minimize the total bookkeeping space
 2079    requirements. The result is a little over 1K bytes (for 4byte
 2080    pointers and size_t.)
 2081 */
 2082 
 2083 /*
 2084   Bins
 2085 
 2086     An array of bin headers for free chunks. Each bin is doubly
 2087     linked.  The bins are approximately proportionally (log) spaced.
 2088     There are a lot of these bins (128). This may look excessive, but
 2089     works very well in practice.  Most bins hold sizes that are
 2090     unusual as malloc request sizes, but are more usual for fragments
 2091     and consolidated sets of chunks, which is what these bins hold, so
 2092     they can be found quickly.  All procedures maintain the invariant
 2093     that no consolidated chunk physically borders another one, so each
 2094     chunk in a list is known to be preceeded and followed by either
 2095     inuse chunks or the ends of memory.
 2096 
 2097     Chunks in bins are kept in size order, with ties going to the
 2098     approximately least recently used chunk. Ordering isn't needed
 2099     for the small bins, which all contain the same-sized chunks, but
 2100     facilitates best-fit allocation for larger chunks. These lists
 2101     are just sequential. Keeping them in order almost never requires
 2102     enough traversal to warrant using fancier ordered data
 2103     structures.  
 2104 
 2105     Chunks of the same size are linked with the most
 2106     recently freed at the front, and allocations are taken from the
 2107     back.  This results in LRU (FIFO) allocation order, which tends
 2108     to give each chunk an equal opportunity to be consolidated with
 2109     adjacent freed chunks, resulting in larger free chunks and less
 2110     fragmentation.
 2111 
 2112     To simplify use in double-linked lists, each bin header acts
 2113     as a malloc_chunk. This avoids special-casing for headers.
 2114     But to conserve space and improve locality, we allocate
 2115     only the fd/bk pointers of bins, and then use repositioning tricks
 2116     to treat these as the fields of a malloc_chunk*.  
 2117 */
 2118 
 2119 typedef struct malloc_chunk* mbinptr;
 2120 
 2121 /* addressing -- note that bin_at(0) does not exist */
 2122 #define bin_at(m, i) ((mbinptr)((char*)&((m)->bins[(i)<<1]) - (SIZE_SZ<<1)))
 2123 
 2124 /* analog of ++bin */
 2125 #define next_bin(b)  ((mbinptr)((char*)(b) + (sizeof(mchunkptr)<<1)))
 2126 
 2127 /* Reminders about list directionality within bins */
 2128 #define first(b)     ((b)->fd)
 2129 #define last(b)      ((b)->bk)
 2130 
 2131 /* Take a chunk off a bin list */
 2132 #define unlink(P, BK, FD) {                                            \
 2133   FD = P->fd;                                                          \
 2134   BK = P->bk;                                                          \
 2135   FD->bk = BK;                                                         \
 2136   BK->fd = FD;                                                         \
 2137 }
 2138 
 2139 /*
 2140   Indexing
 2141 
 2142     Bins for sizes < 512 bytes contain chunks of all the same size, spaced
 2143     8 bytes apart. Larger bins are approximately logarithmically spaced:
 2144 
 2145     64 bins of size       8
 2146     32 bins of size      64
 2147     16 bins of size     512
 2148      8 bins of size    4096
 2149      4 bins of size   32768
 2150      2 bins of size  262144
 2151      1 bin  of size what's left
 2152 
 2153     The bins top out around 1MB because we expect to service large
 2154     requests via mmap.
 2155 */
 2156 
 2157 #define NBINS              96
 2158 #define NSMALLBINS         32
 2159 #define SMALLBIN_WIDTH      8
 2160 #define MIN_LARGE_SIZE    256
 2161 
 2162 #define in_smallbin_range(sz)  \
 2163   ((CHUNK_SIZE_T)(sz) < (CHUNK_SIZE_T)MIN_LARGE_SIZE)
 2164 
 2165 #define smallbin_index(sz)     (((unsigned)(sz)) >> 3)
 2166 
 2167 /*
 2168   Compute index for size. We expect this to be inlined when
 2169   compiled with optimization, else not, which works out well.
 2170 */
 2171 static int largebin_index(unsigned int sz) {
 2172   unsigned int  x = sz >> SMALLBIN_WIDTH; 
 2173   unsigned int m;            /* bit position of highest set bit of m */
 2174 
 2175   if (x >= 0x10000) return NBINS-1;
 2176 
 2177   /* On intel, use BSRL instruction to find highest bit */
 2178 #if defined(__GNUC__) && defined(i386)
 2179 
 2180   __asm__("bsrl %1,%0\n\t"
 2181           : "=r" (m) 
 2182           : "g"  (x));
 2183 
 2184 #else
 2185   {
 2186     /*
 2187       Based on branch-free nlz algorithm in chapter 5 of Henry
 2188       S. Warren Jr's book "Hacker's Delight".
 2189     */
 2190 
 2191     unsigned int n = ((x - 0x100) >> 16) & 8;
 2192     x <<= n; 
 2193     m = ((x - 0x1000) >> 16) & 4;
 2194     n += m; 
 2195     x <<= m; 
 2196     m = ((x - 0x4000) >> 16) & 2;
 2197     n += m; 
 2198     x = (x << m) >> 14;
 2199     m = 13 - n + (x & ~(x>>1));
 2200   }
 2201 #endif
 2202 
 2203   /* Use next 2 bits to create finer-granularity bins */
 2204   return NSMALLBINS + (m << 2) + ((sz >> (m + 6)) & 3);
 2205 }
 2206 
 2207 #define bin_index(sz) \
 2208  ((in_smallbin_range(sz)) ? smallbin_index(sz) : largebin_index(sz))
 2209 
 2210 /*
 2211   FIRST_SORTED_BIN_SIZE is the chunk size corresponding to the
 2212   first bin that is maintained in sorted order. This must
 2213   be the smallest size corresponding to a given bin.
 2214 
 2215   Normally, this should be MIN_LARGE_SIZE. But you can weaken
 2216   best fit guarantees to sometimes speed up malloc by increasing value.
 2217   Doing this means that malloc may choose a chunk that is 
 2218   non-best-fitting by up to the width of the bin.
 2219 
 2220   Some useful cutoff values:
 2221       512 - all bins sorted
 2222      2560 - leaves bins <=     64 bytes wide unsorted  
 2223     12288 - leaves bins <=    512 bytes wide unsorted
 2224     65536 - leaves bins <=   4096 bytes wide unsorted
 2225    262144 - leaves bins <=  32768 bytes wide unsorted
 2226        -1 - no bins sorted (not recommended!)
 2227 */
 2228 
 2229 #define FIRST_SORTED_BIN_SIZE MIN_LARGE_SIZE 
 2230 /* #define FIRST_SORTED_BIN_SIZE 65536 */
 2231 
 2232 /*
 2233   Unsorted chunks
 2234 
 2235     All remainders from chunk splits, as well as all returned chunks,
 2236     are first placed in the "unsorted" bin. They are then placed
 2237     in regular bins after malloc gives them ONE chance to be used before
 2238     binning. So, basically, the unsorted_chunks list acts as a queue,
 2239     with chunks being placed on it in free (and malloc_consolidate),
 2240     and taken off (to be either used or placed in bins) in malloc.
 2241 */
 2242 
 2243 /* The otherwise unindexable 1-bin is used to hold unsorted chunks. */
 2244 #define unsorted_chunks(M)          (bin_at(M, 1))
 2245 
 2246 /*
 2247   Top
 2248 
 2249     The top-most available chunk (i.e., the one bordering the end of
 2250     available memory) is treated specially. It is never included in
 2251     any bin, is used only if no other chunk is available, and is
 2252     released back to the system if it is very large (see
 2253     M_TRIM_THRESHOLD).  Because top initially
 2254     points to its own bin with initial zero size, thus forcing
 2255     extension on the first malloc request, we avoid having any special
 2256     code in malloc to check whether it even exists yet. But we still
 2257     need to do so when getting memory from system, so we make
 2258     initial_top treat the bin as a legal but unusable chunk during the
 2259     interval between initialization and the first call to
 2260     sYSMALLOc. (This is somewhat delicate, since it relies on
 2261     the 2 preceding words to be zero during this interval as well.)
 2262 */
 2263 
 2264 /* Conveniently, the unsorted bin can be used as dummy top on first call */
 2265 #define initial_top(M)              (unsorted_chunks(M))
 2266 
 2267 /*
 2268   Binmap
 2269 
 2270     To help compensate for the large number of bins, a one-level index
 2271     structure is used for bin-by-bin searching.  `binmap' is a
 2272     bitvector recording whether bins are definitely empty so they can
 2273     be skipped over during during traversals.  The bits are NOT always
 2274     cleared as soon as bins are empty, but instead only
 2275     when they are noticed to be empty during traversal in malloc.
 2276 */
 2277 
 2278 /* Conservatively use 32 bits per map word, even if on 64bit system */
 2279 #define BINMAPSHIFT      5
 2280 #define BITSPERMAP       (1U << BINMAPSHIFT)
 2281 #define BINMAPSIZE       (NBINS / BITSPERMAP)
 2282 
 2283 #define idx2block(i)     ((i) >> BINMAPSHIFT)
 2284 #define idx2bit(i)       ((1U << ((i) & ((1U << BINMAPSHIFT)-1))))
 2285 
 2286 #define mark_bin(m,i)    ((m)->binmap[idx2block(i)] |=  idx2bit(i))
 2287 #define unmark_bin(m,i)  ((m)->binmap[idx2block(i)] &= ~(idx2bit(i)))
 2288 #define get_binmap(m,i)  ((m)->binmap[idx2block(i)] &   idx2bit(i))
 2289 
 2290 /*
 2291   Fastbins
 2292 
 2293     An array of lists holding recently freed small chunks.  Fastbins
 2294     are not doubly linked.  It is faster to single-link them, and
 2295     since chunks are never removed from the middles of these lists,
 2296     double linking is not necessary. Also, unlike regular bins, they
 2297     are not even processed in FIFO order (they use faster LIFO) since
 2298     ordering doesn't much matter in the transient contexts in which
 2299     fastbins are normally used.
 2300 
 2301     Chunks in fastbins keep their inuse bit set, so they cannot
 2302     be consolidated with other free chunks. malloc_consolidate
 2303     releases all chunks in fastbins and consolidates them with
 2304     other free chunks. 
 2305 */
 2306 
 2307 typedef struct malloc_chunk* mfastbinptr;
 2308 
 2309 /* offset 2 to use otherwise unindexable first 2 bins */
 2310 #define fastbin_index(sz)        ((((unsigned int)(sz)) >> 3) - 2)
 2311 
 2312 /* The maximum fastbin request size we support */
 2313 #define MAX_FAST_SIZE     80
 2314 
 2315 #define NFASTBINS  (fastbin_index(request2size(MAX_FAST_SIZE))+1)
 2316 
 2317 /*
 2318   FASTBIN_CONSOLIDATION_THRESHOLD is the size of a chunk in free()
 2319   that triggers automatic consolidation of possibly-surrounding
 2320   fastbin chunks. This is a heuristic, so the exact value should not
 2321   matter too much. It is defined at half the default trim threshold as a
 2322   compromise heuristic to only attempt consolidation if it is likely
 2323   to lead to trimming. However, it is not dynamically tunable, since
 2324   consolidation reduces fragmentation surrounding loarge chunks even 
 2325   if trimming is not used.
 2326 */
 2327 
 2328 #define FASTBIN_CONSOLIDATION_THRESHOLD  \
 2329   ((unsigned long)(DEFAULT_TRIM_THRESHOLD) >> 1)
 2330 
 2331 /*
 2332   Since the lowest 2 bits in max_fast don't matter in size comparisons, 
 2333   they are used as flags.
 2334 */
 2335 
 2336 /*
 2337   ANYCHUNKS_BIT held in max_fast indicates that there may be any
 2338   freed chunks at all. It is set true when entering a chunk into any
 2339   bin.
 2340 */
 2341 
 2342 #define ANYCHUNKS_BIT        (1U)
 2343 
 2344 #define have_anychunks(M)     (((M)->max_fast &  ANYCHUNKS_BIT))
 2345 #define set_anychunks(M)      ((M)->max_fast |=  ANYCHUNKS_BIT)
 2346 #define clear_anychunks(M)    ((M)->max_fast &= ~ANYCHUNKS_BIT)
 2347 
 2348 /*
 2349   FASTCHUNKS_BIT held in max_fast indicates that there are probably
 2350   some fastbin chunks. It is set true on entering a chunk into any
 2351   fastbin, and cleared only in malloc_consolidate.
 2352 */
 2353 
 2354 #define FASTCHUNKS_BIT        (2U)
 2355 
 2356 #define have_fastchunks(M)   (((M)->max_fast &  FASTCHUNKS_BIT))
 2357 #define set_fastchunks(M)    ((M)->max_fast |=  (FASTCHUNKS_BIT|ANYCHUNKS_BIT))
 2358 #define clear_fastchunks(M)  ((M)->max_fast &= ~(FASTCHUNKS_BIT))
 2359 
 2360 /* 
 2361    Set value of max_fast. 
 2362    Use impossibly small value if 0.
 2363 */
 2364 
 2365 #define set_max_fast(M, s) \
 2366   (M)->max_fast = (((s) == 0)? SMALLBIN_WIDTH: request2size(s)) | \
 2367   ((M)->max_fast &  (FASTCHUNKS_BIT|ANYCHUNKS_BIT))
 2368 
 2369 #define get_max_fast(M) \
 2370   ((M)->max_fast & ~(FASTCHUNKS_BIT | ANYCHUNKS_BIT))
 2371 
 2372 
 2373 /*
 2374   morecore_properties is a status word holding dynamically discovered
 2375   or controlled properties of the morecore function
 2376 */
 2377 
 2378 #define MORECORE_CONTIGUOUS_BIT  (1U)
 2379 
 2380 #define contiguous(M) \
 2381         (((M)->morecore_properties &  MORECORE_CONTIGUOUS_BIT))
 2382 #define noncontiguous(M) \
 2383         (((M)->morecore_properties &  MORECORE_CONTIGUOUS_BIT) == 0)
 2384 #define set_contiguous(M) \
 2385         ((M)->morecore_properties |=  MORECORE_CONTIGUOUS_BIT)
 2386 #define set_noncontiguous(M) \
 2387         ((M)->morecore_properties &= ~MORECORE_CONTIGUOUS_BIT)
 2388 
 2389 
 2390 /*
 2391    ----------- Internal state representation and initialization -----------
 2392 */
 2393 
 2394 struct malloc_state {
 2395 
 2396   /* The maximum chunk size to be eligible for fastbin */
 2397   INTERNAL_SIZE_T  max_fast;   /* low 2 bits used as flags */
 2398 
 2399   /* Fastbins */
 2400   mfastbinptr      fastbins[NFASTBINS];
 2401 
 2402   /* Base of the topmost chunk -- not otherwise kept in a bin */
 2403   mchunkptr        top;
 2404 
 2405   /* The remainder from the most recent split of a small request */
 2406   mchunkptr        last_remainder;
 2407 
 2408   /* Normal bins packed as described above */
 2409   mchunkptr        bins[NBINS * 2];
 2410 
 2411   /* Bitmap of bins. Trailing zero map handles cases of largest binned size */
 2412   unsigned int     binmap[BINMAPSIZE+1];
 2413 
 2414   /* Tunable parameters */
 2415   CHUNK_SIZE_T     trim_threshold;
 2416   INTERNAL_SIZE_T  top_pad;
 2417   INTERNAL_SIZE_T  mmap_threshold;
 2418 
 2419   /* Memory map support */
 2420   int              n_mmaps;
 2421   int              n_mmaps_max;
 2422   int              max_n_mmaps;
 2423 
 2424   /* Cache malloc_getpagesize */
 2425   unsigned int     pagesize;    
 2426 
 2427   /* Track properties of MORECORE */
 2428   unsigned int     morecore_properties;
 2429 
 2430   /* Statistics */
 2431   INTERNAL_SIZE_T  mmapped_mem;
 2432   INTERNAL_SIZE_T  sbrked_mem;
 2433   INTERNAL_SIZE_T  max_sbrked_mem;
 2434   INTERNAL_SIZE_T  max_mmapped_mem;
 2435   INTERNAL_SIZE_T  max_total_mem;
 2436 };
 2437 
 2438 typedef struct malloc_state *mstate;
 2439 
 2440 /* 
 2441    There is exactly one instance of this struct in this malloc.
 2442    If you are adapting this malloc in a way that does NOT use a static
 2443    malloc_state, you MUST explicitly zero-fill it before using. This
 2444    malloc relies on the property that malloc_state is initialized to
 2445    all zeroes (as is true of C statics).
 2446 */
 2447 
 2448 static struct malloc_state av_;  /* never directly referenced */
 2449 
 2450 /*
 2451    All uses of av_ are via get_malloc_state().
 2452    At most one "call" to get_malloc_state is made per invocation of
 2453    the public versions of malloc and free, but other routines
 2454    that in turn invoke malloc and/or free may call more then once. 
 2455    Also, it is called in check* routines if DEBUG is set.
 2456 */
 2457 
 2458 #define get_malloc_state() (&(av_))
 2459 
 2460 /*
 2461   Initialize a malloc_state struct.
 2462 
 2463   This is called only from within malloc_consolidate, which needs
 2464   be called in the same contexts anyway.  It is never called directly
 2465   outside of malloc_consolidate because some optimizing compilers try
 2466   to inline it at all call points, which turns out not to be an
 2467   optimization at all. (Inlining it in malloc_consolidate is fine though.)
 2468 */
 2469 
 2470 #if __STD_C
 2471 static void malloc_init_state(mstate av)
 2472 #else
 2473 static void malloc_init_state(av) mstate av;
 2474 #endif
 2475 {
 2476   int     i;
 2477   mbinptr bin;
 2478   
 2479   /* Establish circular links for normal bins */
 2480   for (i = 1; i < NBINS; ++i) { 
 2481     bin = bin_at(av,i);
 2482     bin->fd = bin->bk = bin;
 2483   }
 2484 
 2485   av->top_pad        = DEFAULT_TOP_PAD;
 2486   av->n_mmaps_max    = DEFAULT_MMAP_MAX;
 2487   av->mmap_threshold = DEFAULT_MMAP_THRESHOLD;
 2488   av->trim_threshold = DEFAULT_TRIM_THRESHOLD;
 2489 
 2490 #if MORECORE_CONTIGUOUS
 2491   set_contiguous(av);
 2492 #else
 2493   set_noncontiguous(av);
 2494 #endif
 2495 
 2496 
 2497   set_max_fast(av, DEFAULT_MXFAST);
 2498 
 2499   av->top            = initial_top(av);
 2500   av->pagesize       = malloc_getpagesize;
 2501 }
 2502 
 2503 /* 
 2504    Other internal utilities operating on mstates
 2505 */
 2506 
 2507 #if __STD_C
 2508 static Void_t*  sYSMALLOc(INTERNAL_SIZE_T, mstate);
 2509 static int      sYSTRIm(size_t, mstate);
 2510 static void     malloc_consolidate(mstate);
 2511 static Void_t** iALLOc(size_t, size_t*, int, Void_t**);
 2512 #else
 2513 static Void_t*  sYSMALLOc();
 2514 static int      sYSTRIm();
 2515 static void     malloc_consolidate();
 2516 static Void_t** iALLOc();
 2517 #endif
 2518 
 2519 /*
 2520   Debugging support
 2521 
 2522   These routines make a number of assertions about the states
 2523   of data structures that should be true at all times. If any
 2524   are not true, it's very likely that a user program has somehow
 2525   trashed memory. (It's also possible that there is a coding error
 2526   in malloc. In which case, please report it!)
 2527 */
 2528 
 2529 #if ! DEBUG
 2530 
 2531 #define check_chunk(P)
 2532 #define check_free_chunk(P)
 2533 #define check_inuse_chunk(P)
 2534 #define check_remalloced_chunk(P,N)
 2535 #define check_malloced_chunk(P,N)
 2536 #define check_malloc_state()
 2537 
 2538 #else
 2539 #define check_chunk(P)              do_check_chunk(P)
 2540 #define check_free_chunk(P)         do_check_free_chunk(P)
 2541 #define check_inuse_chunk(P)        do_check_inuse_chunk(P)
 2542 #define check_remalloced_chunk(P,N) do_check_remalloced_chunk(P,N)
 2543 #define check_malloced_chunk(P,N)   do_check_malloced_chunk(P,N)
 2544 #define check_malloc_state()        do_check_malloc_state()
 2545 
 2546 /*
 2547   Properties of all chunks
 2548 */
 2549 
 2550 #if __STD_C
 2551 static void do_check_chunk(mchunkptr p)
 2552 #else
 2553 static void do_check_chunk(p) mchunkptr p;
 2554 #endif
 2555 {
 2556   mstate av = get_malloc_state();
 2557   CHUNK_SIZE_T  sz = chunksize(p);
 2558   /* min and max possible addresses assuming contiguous allocation */
 2559   char* max_address = (char*)(av->top) + chunksize(av->top);
 2560   char* min_address = max_address - av->sbrked_mem;
 2561 
 2562   if (!chunk_is_mmapped(p)) {
 2563     
 2564     /* Has legal address ... */
 2565     if (p != av->top) {
 2566       if (contiguous(av)) {
 2567         assert(((char*)p) >= min_address);
 2568         assert(((char*)p + sz) <= ((char*)(av->top)));
 2569       }
 2570     }
 2571     else {
 2572       /* top size is always at least MINSIZE */
 2573       assert((CHUNK_SIZE_T)(sz) >= MINSIZE);
 2574       /* top predecessor always marked inuse */
 2575       assert(prev_inuse(p));
 2576     }
 2577       
 2578   }
 2579   else {
 2580 #if HAVE_MMAP
 2581     /* address is outside main heap  */
 2582     if (contiguous(av) && av->top != initial_top(av)) {
 2583       assert(((char*)p) < min_address || ((char*)p) > max_address);
 2584     }
 2585     /* chunk is page-aligned */
 2586     assert(((p->prev_size + sz) & (av->pagesize-1)) == 0);
 2587     /* mem is aligned */
 2588     assert(aligned_OK(chunk2mem(p)));
 2589 #else
 2590     /* force an appropriate assert violation if debug set */
 2591     assert(!chunk_is_mmapped(p));
 2592 #endif
 2593   }
 2594 }
 2595 
 2596 /*
 2597   Properties of free chunks
 2598 */
 2599 
 2600 #if __STD_C
 2601 static void do_check_free_chunk(mchunkptr p)
 2602 #else
 2603 static void do_check_free_chunk(p) mchunkptr p;
 2604 #endif
 2605 {
 2606   mstate av = get_malloc_state();
 2607 
 2608   INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
 2609   mchunkptr next = chunk_at_offset(p, sz);
 2610 
 2611   do_check_chunk(p);
 2612 
 2613   /* Chunk must claim to be free ... */
 2614   assert(!inuse(p));
 2615   assert (!chunk_is_mmapped(p));
 2616 
 2617   /* Unless a special marker, must have OK fields */
 2618   if ((CHUNK_SIZE_T)(sz) >= MINSIZE)
 2619   {
 2620     assert((sz & MALLOC_ALIGN_MASK) == 0);
 2621     assert(aligned_OK(chunk2mem(p)));
 2622     /* ... matching footer field */
 2623     assert(next->prev_size == sz);
 2624     /* ... and is fully consolidated */
 2625     assert(prev_inuse(p));
 2626     assert (next == av->top || inuse(next));
 2627 
 2628     /* ... and has minimally sane links */
 2629     assert(p->fd->bk == p);
 2630     assert(p->bk->fd == p);
 2631   }
 2632   else /* markers are always of size SIZE_SZ */
 2633     assert(sz == SIZE_SZ);
 2634 }
 2635 
 2636 /*
 2637   Properties of inuse chunks
 2638 */
 2639 
 2640 #if __STD_C
 2641 static void do_check_inuse_chunk(mchunkptr p)
 2642 #else
 2643 static void do_check_inuse_chunk(p) mchunkptr p;
 2644 #endif
 2645 {
 2646   mstate av = get_malloc_state();
 2647   mchunkptr next;
 2648   do_check_chunk(p);
 2649 
 2650   if (chunk_is_mmapped(p))
 2651     return; /* mmapped chunks have no next/prev */
 2652 
 2653   /* Check whether it claims to be in use ... */
 2654   assert(inuse(p));
 2655 
 2656   next = next_chunk(p);
 2657 
 2658   /* ... and is surrounded by OK chunks.
 2659     Since more things can be checked with free chunks than inuse ones,
 2660     if an inuse chunk borders them and debug is on, it's worth doing them.
 2661   */
 2662   if (!prev_inuse(p))  {
 2663     /* Note that we cannot even look at prev unless it is not inuse */
 2664     mchunkptr prv = prev_chunk(p);
 2665     assert(next_chunk(prv) == p);
 2666     do_check_free_chunk(prv);
 2667   }
 2668 
 2669   if (next == av->top) {
 2670     assert(prev_inuse(next));
 2671     assert(chunksize(next) >= MINSIZE);
 2672   }
 2673   else if (!inuse(next))
 2674     do_check_free_chunk(next);
 2675 }
 2676 
 2677 /*
 2678   Properties of chunks recycled from fastbins
 2679 */
 2680 
 2681 #if __STD_C
 2682 static void do_check_remalloced_chunk(mchunkptr p, INTERNAL_SIZE_T s)
 2683 #else
 2684 static void do_check_remalloced_chunk(p, s) mchunkptr p; INTERNAL_SIZE_T s;
 2685 #endif
 2686 {
 2687   INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
 2688 
 2689   do_check_inuse_chunk(p);
 2690 
 2691   /* Legal size ... */
 2692   assert((sz & MALLOC_ALIGN_MASK) == 0);
 2693   assert((CHUNK_SIZE_T)(sz) >= MINSIZE);
 2694   /* ... and alignment */
 2695   assert(aligned_OK(chunk2mem(p)));
 2696   /* chunk is less than MINSIZE more than request */
 2697   assert((long)(sz) - (long)(s) >= 0);
 2698   assert((long)(sz) - (long)(s + MINSIZE) < 0);
 2699 }
 2700 
 2701 /*
 2702   Properties of nonrecycled chunks at the point they are malloced
 2703 */
 2704 
 2705 #if __STD_C
 2706 static void do_check_malloced_chunk(mchunkptr p, INTERNAL_SIZE_T s)
 2707 #else
 2708 static void do_check_malloced_chunk(p, s) mchunkptr p; INTERNAL_SIZE_T s;
 2709 #endif
 2710 {
 2711   /* same as recycled case ... */
 2712   do_check_remalloced_chunk(p, s);
 2713 
 2714   /*
 2715     ... plus,  must obey implementation invariant that prev_inuse is
 2716     always true of any allocated chunk; i.e., that each allocated
 2717     chunk borders either a previously allocated and still in-use
 2718     chunk, or the base of its memory arena. This is ensured
 2719     by making all allocations from the the `lowest' part of any found
 2720     chunk.  This does not necessarily hold however for chunks
 2721     recycled via fastbins.
 2722   */
 2723 
 2724   assert(prev_inuse(p));
 2725 }
 2726 
 2727 
 2728 /*
 2729   Properties of malloc_state.
 2730 
 2731   This may be useful for debugging malloc, as well as detecting user
 2732   programmer errors that somehow write into malloc_state.
 2733 
 2734   If you are extending or experimenting with this malloc, you can
 2735   probably figure out how to hack this routine to print out or
 2736   display chunk addresses, sizes, bins, and other instrumentation.
 2737 */
 2738 
 2739 static void do_check_malloc_state()
 2740 {
 2741   mstate av = get_malloc_state();
 2742   int i;
 2743   mchunkptr p;
 2744   mchunkptr q;
 2745   mbinptr b;
 2746   unsigned int binbit;
 2747   int empty;
 2748   unsigned int idx;
 2749   INTERNAL_SIZE_T size;
 2750   CHUNK_SIZE_T  total = 0;
 2751   int max_fast_bin;
 2752 
 2753   /* internal size_t must be no wider than pointer type */
 2754   assert(sizeof(INTERNAL_SIZE_T) <= sizeof(char*));
 2755 
 2756   /* alignment is a power of 2 */
 2757   assert((MALLOC_ALIGNMENT & (MALLOC_ALIGNMENT-1)) == 0);
 2758 
 2759   /* cannot run remaining checks until fully initialized */
 2760   if (av->top == 0 || av->top == initial_top(av))
 2761     return;
 2762 
 2763   /* pagesize is a power of 2 */
 2764   assert((av->pagesize & (av->pagesize-1)) == 0);
 2765 
 2766   /* properties of fastbins */
 2767 
 2768   /* max_fast is in allowed range */
 2769   assert(get_max_fast(av) <= request2size(MAX_FAST_SIZE));
 2770 
 2771   max_fast_bin = fastbin_index(av->max_fast);
 2772 
 2773   for (i = 0; i < NFASTBINS; ++i) {
 2774     p = av->fastbins[i];
 2775 
 2776     /* all bins past max_fast are empty */
 2777     if (i > max_fast_bin)
 2778       assert(p == 0);
 2779 
 2780     while (p != 0) {
 2781       /* each chunk claims to be inuse */
 2782       do_check_inuse_chunk(p);
 2783       total += chunksize(p);
 2784       /* chunk belongs in this bin */
 2785       assert(fastbin_index(chunksize(p)) == i);
 2786       p = p->fd;
 2787     }
 2788   }
 2789 
 2790   if (total != 0)
 2791     assert(have_fastchunks(av));
 2792   else if (!have_fastchunks(av))
 2793     assert(total == 0);
 2794 
 2795   /* check normal bins */
 2796   for (i = 1; i < NBINS; ++i) {
 2797     b = bin_at(av,i);
 2798 
 2799     /* binmap is accurate (except for bin 1 == unsorted_chunks) */
 2800     if (i >= 2) {
 2801       binbit = get_binmap(av,i);
 2802       empty = last(b) == b;
 2803       if (!binbit)
 2804         assert(empty);
 2805       else if (!empty)
 2806         assert(binbit);
 2807     }
 2808 
 2809     for (p = last(b); p != b; p = p->bk) {
 2810       /* each chunk claims to be free */
 2811       do_check_free_chunk(p);
 2812       size = chunksize(p);
 2813       total += size;
 2814       if (i >= 2) {
 2815         /* chunk belongs in bin */
 2816         idx = bin_index(size);
 2817         assert(idx == i);
 2818         /* lists are sorted */
 2819         if ((CHUNK_SIZE_T) size >= (CHUNK_SIZE_T)(FIRST_SORTED_BIN_SIZE)) {
 2820           assert(p->bk == b || 
 2821                  (CHUNK_SIZE_T)chunksize(p->bk) >= 
 2822                  (CHUNK_SIZE_T)chunksize(p));
 2823         }
 2824       }
 2825       /* chunk is followed by a legal chain of inuse chunks */
 2826       for (q = next_chunk(p);
 2827            (q != av->top && inuse(q) && 
 2828              (CHUNK_SIZE_T)(chunksize(q)) >= MINSIZE);
 2829            q = next_chunk(q))
 2830         do_check_inuse_chunk(q);
 2831     }
 2832   }
 2833 
 2834   /* top chunk is OK */
 2835   check_chunk(av->top);
 2836 
 2837   /* sanity checks for statistics */
 2838 
 2839   assert(total <= (CHUNK_SIZE_T)(av->max_total_mem));
 2840   assert(av->n_mmaps >= 0);
 2841   assert(av->n_mmaps <= av->max_n_mmaps);
 2842 
 2843   assert((CHUNK_SIZE_T)(av->sbrked_mem) <=
 2844          (CHUNK_SIZE_T)(av->max_sbrked_mem));
 2845 
 2846   assert((CHUNK_SIZE_T)(av->mmapped_mem) <=
 2847          (CHUNK_SIZE_T)(av->max_mmapped_mem));
 2848 
 2849   assert((CHUNK_SIZE_T)(av->max_total_mem) >=
 2850          (CHUNK_SIZE_T)(av->mmapped_mem) + (CHUNK_SIZE_T)(av->sbrked_mem));
 2851 }
 2852 #endif
 2853 
 2854 
 2855 /* ----------- Routines dealing with system allocation -------------- */
 2856 
 2857 /*
 2858   sysmalloc handles malloc cases requiring more memory from the system.
 2859   On entry, it is assumed that av->top does not have enough
 2860   space to service request for nb bytes, thus requiring that av->top
 2861   be extended or replaced.
 2862 */
 2863 
 2864 #if __STD_C
 2865 static Void_t* sYSMALLOc(INTERNAL_SIZE_T nb, mstate av)
 2866 #else
 2867 static Void_t* sYSMALLOc(nb, av) INTERNAL_SIZE_T nb; mstate av;
 2868 #endif
 2869 {
 2870   mchunkptr       old_top;        /* incoming value of av->top */
 2871   INTERNAL_SIZE_T old_size;       /* its size */
 2872   char*           old_end;        /* its end address */
 2873 
 2874   long            size;           /* arg to first MORECORE or mmap call */
 2875   char*           brk;            /* return value from MORECORE */
 2876 
 2877   long            correction;     /* arg to 2nd MORECORE call */
 2878   char*           snd_brk;        /* 2nd return val */
 2879 
 2880   INTERNAL_SIZE_T front_misalign; /* unusable bytes at front of new space */
 2881   INTERNAL_SIZE_T end_misalign;   /* partial page left at end of new space */
 2882   char*           aligned_brk;    /* aligned offset into brk */
 2883 
 2884   mchunkptr       p;              /* the allocated/returned chunk */
 2885   mchunkptr       remainder;      /* remainder from allocation */
 2886   CHUNK_SIZE_T    remainder_size; /* its size */
 2887 
 2888   CHUNK_SIZE_T    sum;            /* for updating stats */
 2889 
 2890   size_t          pagemask  = av->pagesize - 1;
 2891 
 2892   /*
 2893     If there is space available in fastbins, consolidate and retry
 2894     malloc from scratch rather than getting memory from system.  This
 2895     can occur only if nb is in smallbin range so we didn't consolidate
 2896     upon entry to malloc. It is much easier to handle this case here
 2897     than in malloc proper.
 2898   */
 2899 
 2900   if (have_fastchunks(av)) {
 2901     assert(in_smallbin_range(nb));
 2902     malloc_consolidate(av);
 2903     return mALLOc(nb - MALLOC_ALIGN_MASK);
 2904   }
 2905 
 2906 
 2907 #if HAVE_MMAP
 2908 
 2909   /*
 2910     If have mmap, and the request size meets the mmap threshold, and
 2911     the system supports mmap, and there are few enough currently
 2912     allocated mmapped regions, try to directly map this request
 2913     rather than expanding top.
 2914   */
 2915 
 2916   if ((CHUNK_SIZE_T)(nb) >= (CHUNK_SIZE_T)(av->mmap_threshold) &&
 2917       (av->n_mmaps < av->n_mmaps_max)) {
 2918 
 2919     char* mm;             /* return value from mmap call*/
 2920 
 2921     /*
 2922       Round up size to nearest page.  For mmapped chunks, the overhead
 2923       is one SIZE_SZ unit larger than for normal chunks, because there
 2924       is no following chunk whose prev_size field could be used.
 2925     */
 2926     size = (nb + SIZE_SZ + MALLOC_ALIGN_MASK + pagemask) & ~pagemask;
 2927 
 2928     /* Don't try if size wraps around 0 */
 2929     if ((CHUNK_SIZE_T)(size) > (CHUNK_SIZE_T)(nb)) {
 2930 
 2931       mm = (char*)(MMAP(0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE));
 2932       
 2933       if (mm != (char*)(MORECORE_FAILURE)) {
 2934         
 2935         /*
 2936           The offset to the start of the mmapped region is stored
 2937           in the prev_size field of the chunk. This allows us to adjust
 2938           returned start address to meet alignment requirements here 
 2939           and in memalign(), and still be able to compute proper
 2940           address argument for later munmap in free() and realloc().
 2941         */
 2942         
 2943         front_misalign = (INTERNAL_SIZE_T)chunk2mem(mm) & MALLOC_ALIGN_MASK;
 2944         if (front_misalign > 0) {
 2945           correction = MALLOC_ALIGNMENT - front_misalign;
 2946           p = (mchunkptr)(mm + correction);
 2947           p->prev_size = correction;
 2948           set_head(p, (size - correction) |IS_MMAPPED);
 2949         }
 2950         else {
 2951           p = (mchunkptr)mm;
 2952           p->prev_size = 0;
 2953           set_head(p, size|IS_MMAPPED);
 2954         }
 2955         
 2956         /* update statistics */
 2957         
 2958         if (++av->n_mmaps > av->max_n_mmaps) 
 2959           av->max_n_mmaps = av->n_mmaps;
 2960         
 2961         sum = av->mmapped_mem += size;
 2962         if (sum > (CHUNK_SIZE_T)(av->max_mmapped_mem)) 
 2963           av->max_mmapped_mem = sum;
 2964         sum += av->sbrked_mem;
 2965         if (sum > (CHUNK_SIZE_T)(av->max_total_mem)) 
 2966           av->max_total_mem = sum;
 2967 
 2968         check_chunk(p);
 2969         
 2970         return chunk2mem(p);
 2971       }
 2972     }
 2973   }
 2974 #endif
 2975 
 2976   /* Record incoming configuration of top */
 2977 
 2978   old_top  = av->top;
 2979   old_size = chunksize(old_top);
 2980   old_end  = (char*)(chunk_at_offset(old_top, old_size));
 2981 
 2982   brk = snd_brk = (char*)(MORECORE_FAILURE); 
 2983 
 2984   /* 
 2985      If not the first time through, we require old_size to be
 2986      at least MINSIZE and to have prev_inuse set.
 2987   */
 2988 
 2989   assert((old_top == initial_top(av) && old_size == 0) || 
 2990          ((CHUNK_SIZE_T) (old_size) >= MINSIZE &&
 2991           prev_inuse(old_top)));
 2992 
 2993   /* Precondition: not enough current space to satisfy nb request */
 2994   assert((CHUNK_SIZE_T)(old_size) < (CHUNK_SIZE_T)(nb + MINSIZE));
 2995 
 2996   /* Precondition: all fastbins are consolidated */
 2997   assert(!have_fastchunks(av));
 2998 
 2999 
 3000   /* Request enough space for nb + pad + overhead */
 3001 
 3002   size = nb + av->top_pad + MINSIZE;
 3003 
 3004   /*
 3005     If contiguous, we can subtract out existing space that we hope to
 3006     combine with new space. We add it back later only if
 3007     we don't actually get contiguous space.
 3008   */
 3009 
 3010   if (contiguous(av))
 3011     size -= old_size;
 3012 
 3013   /*
 3014     Round to a multiple of page size.
 3015     If MORECORE is not contiguous, this ensures that we only call it
 3016     with whole-page arguments.  And if MORECORE is contiguous and
 3017     this is not first time through, this preserves page-alignment of
 3018     previous calls. Otherwise, we correct to page-align below.
 3019   */
 3020 
 3021   size = (size + pagemask) & ~pagemask;
 3022 
 3023   /*
 3024     Don't try to call MORECORE if argument is so big as to appear
 3025     negative. Note that since mmap takes size_t arg, it may succeed
 3026     below even if we cannot call MORECORE.
 3027   */
 3028 
 3029   if (size > 0) 
 3030     brk = (char*)(MORECORE(size));
 3031 
 3032   /*
 3033     If have mmap, try using it as a backup when MORECORE fails or
 3034     cannot be used. This is worth doing on systems that have "holes" in
 3035     address space, so sbrk cannot extend to give contiguous space, but
 3036     space is available elsewhere.  Note that we ignore mmap max count
 3037     and threshold limits, since the space will not be used as a
 3038     segregated mmap region.
 3039   */
 3040 
 3041 #if HAVE_MMAP
 3042   if (brk == (char*)(MORECORE_FAILURE)) {
 3043 
 3044     /* Cannot merge with old top, so add its size back in */
 3045     if (contiguous(av))
 3046       size = (size + old_size + pagemask) & ~pagemask;
 3047 
 3048     /* If we are relying on mmap as backup, then use larger units */
 3049     if ((CHUNK_SIZE_T)(size) < (CHUNK_SIZE_T)(MMAP_AS_MORECORE_SIZE))
 3050       size = MMAP_AS_MORECORE_SIZE;
 3051 
 3052     /* Don't try if size wraps around 0 */
 3053     if ((CHUNK_SIZE_T)(size) > (CHUNK_SIZE_T)(nb)) {
 3054 
 3055       brk = (char*)(MMAP(0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE));
 3056       
 3057       if (brk != (char*)(MORECORE_FAILURE)) {
 3058         
 3059         /* We do not need, and cannot use, another sbrk call to find end */
 3060         snd_brk = brk + size;
 3061         
 3062         /* 
 3063            Record that we no longer have a contiguous sbrk region. 
 3064            After the first time mmap is used as backup, we do not
 3065            ever rely on contiguous space since this could incorrectly
 3066            bridge regions.
 3067         */
 3068         set_noncontiguous(av);
 3069       }
 3070     }
 3071   }
 3072 #endif
 3073 
 3074   if (brk != (char*)(MORECORE_FAILURE)) {
 3075     av->sbrked_mem += size;
 3076 
 3077     /*
 3078       If MORECORE extends previous space, we can likewise extend top size.
 3079     */
 3080     
 3081     if (brk == old_end && snd_brk == (char*)(MORECORE_FAILURE)) {
 3082       set_head(old_top, (size + old_size) | PREV_INUSE);
 3083     }
 3084 
 3085     /*
 3086       Otherwise, make adjustments:
 3087       
 3088       * If the first time through or noncontiguous, we need to call sbrk
 3089         just to find out where the end of memory lies.
 3090 
 3091       * We need to ensure that all returned chunks from malloc will meet
 3092         MALLOC_ALIGNMENT
 3093 
 3094       * If there was an intervening foreign sbrk, we need to adjust sbrk
 3095         request size to account for fact that we will not be able to
 3096         combine new space with existing space in old_top.
 3097 
 3098       * Almost all systems internally allocate whole pages at a time, in
 3099         which case we might as well use the whole last page of request.
 3100         So we allocate enough more memory to hit a page boundary now,
 3101         which in turn causes future contiguous calls to page-align.
 3102     */
 3103     
 3104     else {
 3105       front_misalign = 0;
 3106       end_misalign = 0;
 3107       correction = 0;
 3108       aligned_brk = brk;
 3109 
 3110       /*
 3111         If MORECORE returns an address lower than we have seen before,
 3112         we know it isn't really contiguous.  This and some subsequent
 3113         checks help cope with non-conforming MORECORE functions and
 3114         the presence of "foreign" calls to MORECORE from outside of
 3115         malloc or by other threads.  We cannot guarantee to detect
 3116         these in all cases, but cope with the ones we do detect.
 3117       */
 3118       if (contiguous(av) && old_size != 0 && brk < old_end) {
 3119         set_noncontiguous(av);
 3120       }
 3121       
 3122       /* handle contiguous cases */
 3123       if (contiguous(av)) { 
 3124 
 3125         /* 
 3126            We can tolerate forward non-contiguities here (usually due
 3127            to foreign calls) but treat them as part of our space for
 3128            stats reporting.
 3129         */
 3130         if (old_size != 0) 
 3131           av->sbrked_mem += brk - old_end;
 3132         
 3133         /* Guarantee alignment of first new chunk made from this space */
 3134 
 3135         front_misalign = (INTERNAL_SIZE_T)chunk2mem(brk) & MALLOC_ALIGN_MASK;
 3136         if (front_misalign > 0) {
 3137 
 3138           /*
 3139             Skip over some bytes to arrive at an aligned position.
 3140             We don't need to specially mark these wasted front bytes.
 3141             They will never be accessed anyway because
 3142             prev_inuse of av->top (and any chunk created from its start)
 3143             is always true after initialization.
 3144           */
 3145 
 3146           correction = MALLOC_ALIGNMENT - front_misalign;
 3147           aligned_brk += correction;
 3148         }
 3149         
 3150         /*
 3151           If this isn't adjacent to existing space, then we will not
 3152           be able to merge with old_top space, so must add to 2nd request.
 3153         */
 3154         
 3155         correction += old_size;
 3156         
 3157         /* Extend the end address to hit a page boundary */
 3158         end_misalign = (INTERNAL_SIZE_T)(brk + size + correction);
 3159         correction += ((end_misalign + pagemask) & ~pagemask) - end_misalign;
 3160         
 3161         assert(correction >= 0);
 3162         snd_brk = (char*)(MORECORE(correction));
 3163         
 3164         if (snd_brk == (char*)(MORECORE_FAILURE)) {
 3165           /*
 3166             If can't allocate correction, try to at least find out current
 3167             brk.  It might be enough to proceed without failing.
 3168           */
 3169           correction = 0;
 3170           snd_brk = (char*)(MORECORE(0));
 3171         }
 3172         else if (snd_brk < brk) {
 3173           /*
 3174             If the second call gives noncontiguous space even though
 3175             it says it won't, the only course of action is to ignore
 3176             results of second call, and conservatively estimate where
 3177             the first call left us. Also set noncontiguous, so this
 3178             won't happen again, leaving at most one hole.
 3179             
 3180             Note that this check is intrinsically incomplete.  Because
 3181             MORECORE is allowed to give more space than we ask for,
 3182             there is no reliable way to detect a noncontiguity
 3183             producing a forward gap for the second call.
 3184           */
 3185           snd_brk = brk + size;
 3186           correction = 0;
 3187           set_noncontiguous(av);
 3188         }
 3189 
 3190       }
 3191       
 3192       /* handle non-contiguous cases */
 3193       else { 
 3194         /* MORECORE/mmap must correctly align */
 3195         assert(aligned_OK(chunk2mem(brk)));
 3196         
 3197         /* Find out current end of memory */
 3198         if (snd_brk == (char*)(MORECORE_FAILURE)) {
 3199           snd_brk = (char*)(MORECORE(0));
 3200           av->sbrked_mem += snd_brk - brk - size;
 3201         }
 3202       }
 3203       
 3204       /* Adjust top based on results of second sbrk */
 3205       if (snd_brk != (char*)(MORECORE_FAILURE)) {
 3206         av->top = (mchunkptr)aligned_brk;
 3207         set_head(av->top, (snd_brk - aligned_brk + correction) | PREV_INUSE);
 3208         av->sbrked_mem += correction;
 3209      
 3210         /*
 3211           If not the first time through, we either have a
 3212           gap due to foreign sbrk or a non-contiguous region.  Insert a
 3213           double fencepost at old_top to prevent consolidation with space
 3214           we don't own. These fenceposts are artificial chunks that are
 3215           marked as inuse and are in any case too small to use.  We need
 3216           two to make sizes and alignments work out.
 3217         */
 3218    
 3219         if (old_size != 0) {
 3220           /* 
 3221              Shrink old_top to insert fenceposts, keeping size a
 3222              multiple of MALLOC_ALIGNMENT. We know there is at least
 3223              enough space in old_top to do this.
 3224           */
 3225           old_size = (old_size - 3*SIZE_SZ) & ~MALLOC_ALIGN_MASK;
 3226           set_head(old_top, old_size | PREV_INUSE);
 3227           
 3228           /*
 3229             Note that the following assignments completely overwrite
 3230             old_top when old_size was previously MINSIZE.  This is
 3231             intentional. We need the fencepost, even if old_top otherwise gets
 3232             lost.
 3233           */
 3234           chunk_at_offset(old_top, old_size          )->size =
 3235             SIZE_SZ|PREV_INUSE;
 3236 
 3237           chunk_at_offset(old_top, old_size + SIZE_SZ)->size =
 3238             SIZE_SZ|PREV_INUSE;
 3239 
 3240           /* 
 3241              If possible, release the rest, suppressing trimming.
 3242           */
 3243           if (old_size >= MINSIZE) {
 3244             INTERNAL_SIZE_T tt = av->trim_threshold;
 3245             av->trim_threshold = (INTERNAL_SIZE_T)(-1);
 3246             fREe(chunk2mem(old_top));
 3247             av->trim_threshold = tt;
 3248           }
 3249         }
 3250       }
 3251     }
 3252     
 3253     /* Update statistics */
 3254     sum = av->sbrked_mem;
 3255     if (sum > (CHUNK_SIZE_T)(av->max_sbrked_mem))
 3256       av->max_sbrked_mem = sum;
 3257     
 3258     sum += av->mmapped_mem;
 3259     if (sum > (CHUNK_SIZE_T)(av->max_total_mem))
 3260       av->max_total_mem = sum;
 3261 
 3262     check_malloc_state();
 3263     
 3264     /* finally, do the allocation */
 3265 
 3266     p = av->top;
 3267     size = chunksize(p);
 3268     
 3269     /* check that one of the above allocation paths succeeded */
 3270     if ((CHUNK_SIZE_T)(size) >= (CHUNK_SIZE_T)(nb + MINSIZE)) {
 3271       remainder_size = size - nb;
 3272       remainder = chunk_at_offset(p, nb);
 3273       av->top = remainder;
 3274       set_head(p, nb | PREV_INUSE);
 3275       set_head(remainder, remainder_size | PREV_INUSE);
 3276       check_malloced_chunk(p, nb);
 3277       return chunk2mem(p);
 3278     }
 3279 
 3280   }
 3281 
 3282   /* catch all failure paths */
 3283   MALLOC_FAILURE_ACTION;
 3284   return 0;
 3285 }
 3286 
 3287 
 3288 
 3289 
 3290 /*
 3291   sYSTRIm is an inverse of sorts to sYSMALLOc.  It gives memory back
 3292   to the system (via negative arguments to sbrk) if there is unused
 3293   memory at the `high' end of the malloc pool. It is called
 3294   automatically by free() when top space exceeds the trim
 3295   threshold. It is also called by the public malloc_trim routine.  It
 3296   returns 1 if it actually released any memory, else 0.
 3297 */
 3298 
 3299 #if __STD_C
 3300 static int sYSTRIm(size_t pad, mstate av)
 3301 #else
 3302 static int sYSTRIm(pad, av) size_t pad; mstate av;
 3303 #endif
 3304 {
 3305   long  top_size;        /* Amount of top-most memory */
 3306   long  extra;           /* Amount to release */
 3307   long  released;        /* Amount actually released */
 3308   char* current_brk;     /* address returned by pre-check sbrk call */
 3309   char* new_brk;         /* address returned by post-check sbrk call */
 3310   size_t pagesz;
 3311 
 3312   pagesz = av->pagesize;
 3313   top_size = chunksize(av->top);
 3314   
 3315   /* Release in pagesize units, keeping at least one page */
 3316   extra = ((top_size - pad - MINSIZE + (pagesz-1)) / pagesz - 1) * pagesz;
 3317   
 3318   if (extra > 0) {
 3319     
 3320     /*
 3321       Only proceed if end of memory is where we last set it.
 3322       This avoids problems if there were foreign sbrk calls.
 3323     */
 3324     current_brk = (char*)(MORECORE(0));
 3325     if (current_brk == (char*)(av->top) + top_size) {
 3326       
 3327       /*
 3328         Attempt to release memory. We ignore MORECORE return value,
 3329         and instead call again to find out where new end of memory is.
 3330         This avoids problems if first call releases less than we asked,
 3331         of if failure somehow altered brk value. (We could still
 3332         encounter problems if it altered brk in some very bad way,
 3333         but the only thing we can do is adjust anyway, which will cause
 3334         some downstream failure.)
 3335       */
 3336       
 3337       MORECORE(-extra);
 3338       new_brk = (char*)(MORECORE(0));
 3339       
 3340       if (new_brk != (char*)MORECORE_FAILURE) {
 3341         released = (long)(current_brk - new_brk);
 3342         
 3343         if (released != 0) {
 3344           /* Success. Adjust top. */
 3345           av->sbrked_mem -= released;
 3346           set_head(av->top, (top_size - released) | PREV_INUSE);
 3347           check_malloc_state();
 3348           return 1;
 3349         }
 3350       }
 3351     }
 3352   }
 3353   return 0;
 3354 }
 3355 
 3356 /* ================ MPR ================== */
 3357 
 3358 static int __mpr_malloc_initialized = 0;
 3359 Void_t* (*__mpr_malloc_hook)(size_t) = 0;
 3360 void    (*__mpr_free_hook)(Void_t*) = 0;
 3361 Void_t* (*__mpr_realloc_hook)(Void_t*, size_t) = 0;
 3362 
 3363 static void
 3364 __mpr_malloc_initialize()
 3365 {
 3366     extern int mprhook();
 3367     mprhook();
 3368     __mpr_malloc_initialized = 1;
 3369 }
 3370 
 3371 /*
 3372   ------------------------------ malloc ------------------------------
 3373 */
 3374 
 3375 
 3376 #if __STD_C
 3377 Void_t* mALLOc(size_t bytes)
 3378 #else
 3379   Void_t* mALLOc(bytes) size_t bytes;
 3380 #endif
 3381 {
 3382   mstate av = get_malloc_state();
 3383 
 3384   INTERNAL_SIZE_T nb;               /* normalized request size */
 3385   unsigned int    idx;              /* associated bin index */
 3386   mbinptr         bin;              /* associated bin */
 3387   mfastbinptr*    fb;               /* associated fastbin */
 3388 
 3389   mchunkptr       victim;           /* inspected/selected chunk */
 3390   INTERNAL_SIZE_T size;             /* its size */
 3391   int             victim_index;     /* its bin index */
 3392 
 3393   mchunkptr       remainder;        /* remainder from a split */
 3394   CHUNK_SIZE_T    remainder_size;   /* its size */
 3395 
 3396   unsigned int    block;            /* bit map traverser */
 3397   unsigned int    bit;              /* bit map traverser */
 3398   unsigned int    map;              /* current word of binmap */
 3399 
 3400   mchunkptr       fwd;              /* misc temp for linking */
 3401   mchunkptr       bck;              /* misc temp for linking */
 3402 
 3403   if (!__mpr_malloc_initialized)
 3404     __mpr_malloc_initialize();
 3405   if (__mpr_malloc_hook != NULL)
 3406     return (*__mpr_malloc_hook)(bytes);
 3407 
 3408   /*
 3409     Convert request size to internal form by adding SIZE_SZ bytes
 3410     overhead plus possibly more to obtain necessary alignment and/or
 3411     to obtain a size of at least MINSIZE, the smallest allocatable
 3412     size. Also, checked_request2size traps (returning 0) request sizes
 3413     that are so large that they wrap around zero when padded and
 3414     aligned.
 3415   */
 3416 
 3417   checked_request2size(bytes, nb);
 3418 
 3419   /*
 3420     Bypass search if no frees yet
 3421    */
 3422   if (!have_anychunks(av)) {
 3423     if (av->max_fast == 0) /* initialization check */
 3424       malloc_consolidate(av);
 3425     goto use_top;
 3426   }
 3427 
 3428   /*
 3429     If the size qualifies as a fastbin, first check corresponding bin.
 3430   */
 3431 
 3432   if ((CHUNK_SIZE_T)(nb) <= (CHUNK_SIZE_T)(av->max_fast)) { 
 3433     fb = &(av->fastbins[(fastbin_index(nb))]);
 3434     if ( (victim = *fb) != 0) {
 3435       *fb = victim->fd;
 3436       check_remalloced_chunk(victim, nb);
 3437       return chunk2mem(victim);
 3438     }
 3439   }
 3440 
 3441   /*
 3442     If a small request, check regular bin.  Since these "smallbins"
 3443     hold one size each, no searching within bins is necessary.
 3444     (For a large request, we need to wait until unsorted chunks are
 3445     processed to find best fit. But for small ones, fits are exact
 3446     anyway, so we can check now, which is faster.)
 3447   */
 3448 
 3449   if (in_smallbin_range(nb)) {
 3450     idx = smallbin_index(nb);
 3451     bin = bin_at(av,idx);
 3452 
 3453     if ( (victim = last(bin)) != bin) {
 3454       bck = victim->bk;
 3455       set_inuse_bit_at_offset(victim, nb);
 3456       bin->bk = bck;
 3457       bck->fd = bin;
 3458       
 3459       check_malloced_chunk(victim, nb);
 3460       return chunk2mem(victim);
 3461     }
 3462   }
 3463 
 3464   /* 
 3465      If this is a large request, consolidate fastbins before continuing.
 3466      While it might look excessive to kill all fastbins before
 3467      even seeing if there is space available, this avoids
 3468      fragmentation problems normally associated with fastbins.
 3469      Also, in practice, programs tend to have runs of either small or
 3470      large requests, but less often mixtures, so consolidation is not 
 3471      invoked all that often in most programs. And the programs that
 3472      it is called frequently in otherwise tend to fragment.
 3473   */
 3474 
 3475   else {
 3476     idx = largebin_index(nb);
 3477     if (have_fastchunks(av)) 
 3478       malloc_consolidate(av);
 3479   }
 3480 
 3481   /*
 3482     Process recently freed or remaindered chunks, taking one only if
 3483     it is exact fit, or, if this a small request, the chunk is remainder from
 3484     the most recent non-exact fit.  Place other traversed chunks in
 3485     bins.  Note that this step is the only place in any routine where
 3486     chunks are placed in bins.
 3487   */
 3488     
 3489   while ( (victim = unsorted_chunks(av)->bk) != unsorted_chunks(av)) {
 3490     bck = victim->bk;
 3491     size = chunksize(victim);
 3492     
 3493     /* 
 3494        If a small request, try to use last remainder if it is the
 3495        only chunk in unsorted bin.  This helps promote locality for
 3496        runs of consecutive small requests. This is the only
 3497        exception to best-fit, and applies only when there is
 3498        no exact fit for a small chunk.
 3499     */
 3500     
 3501     if (in_smallbin_range(nb) && 
 3502         bck == unsorted_chunks(av) &&
 3503         victim == av->last_remainder &&
 3504         (CHUNK_SIZE_T)(size) > (CHUNK_SIZE_T)(nb + MINSIZE)) {
 3505       
 3506       /* split and reattach remainder */
 3507       remainder_size = size - nb;
 3508       remainder = chunk_at_offset(victim, nb);
 3509       unsorted_chunks(av)->bk = unsorted_chunks(av)->fd = remainder;
 3510       av->last_remainder = remainder; 
 3511       remainder->bk = remainder->fd = unsorted_chunks(av);
 3512       
 3513       set_head(victim, nb | PREV_INUSE);
 3514       set_head(remainder, remainder_size | PREV_INUSE);
 3515       set_foot(remainder, remainder_size);
 3516       
 3517       check_malloced_chunk(victim, nb);
 3518       return chunk2mem(victim);
 3519     }
 3520     
 3521     /* remove from unsorted list */
 3522     unsorted_chunks(av)->bk = bck;
 3523     bck->fd = unsorted_chunks(av);
 3524     
 3525     /* Take now instead of binning if exact fit */
 3526     
 3527     if (size == nb) {
 3528       set_inuse_bit_at_offset(victim, size);
 3529       check_malloced_chunk(victim, nb);
 3530       return chunk2mem(victim);
 3531     }
 3532     
 3533     /* place chunk in bin */
 3534     
 3535     if (in_smallbin_range(size)) {
 3536       victim_index = smallbin_index(size);
 3537       bck = bin_at(av, victim_index);
 3538       fwd = bck->fd;
 3539     }
 3540     else {
 3541       victim_index = largebin_index(size);
 3542       bck = bin_at(av, victim_index);
 3543       fwd = bck->fd;
 3544       
 3545       if (fwd != bck) {
 3546         /* if smaller than smallest, place first */
 3547         if ((CHUNK_SIZE_T)(size) < (CHUNK_SIZE_T)(bck->bk->size)) {
 3548           fwd = bck;
 3549           bck = bck->bk;
 3550         }
 3551         else if ((CHUNK_SIZE_T)(size) >= 
 3552                  (CHUNK_SIZE_T)(FIRST_SORTED_BIN_SIZE)) {
 3553           
 3554           /* maintain large bins in sorted order */
 3555           size |= PREV_INUSE; /* Or with inuse bit to speed comparisons */
 3556           while ((CHUNK_SIZE_T)(size) < (CHUNK_SIZE_T)(fwd->size)) 
 3557             fwd = fwd->fd;
 3558           bck = fwd->bk;
 3559         }
 3560       }
 3561     }
 3562       
 3563     mark_bin(av, victim_index);
 3564     victim->bk = bck;
 3565     victim->fd = fwd;
 3566     fwd->bk = victim;
 3567     bck->fd = victim;
 3568   }
 3569   
 3570   /*
 3571     If a large request, scan through the chunks of current bin to
 3572     find one that fits.  (This will be the smallest that fits unless
 3573     FIRST_SORTED_BIN_SIZE has been changed from default.)  This is
 3574     the only step where an unbounded number of chunks might be
 3575     scanned without doing anything useful with them. However the
 3576     lists tend to be short.
 3577   */
 3578   
 3579   if (!in_smallbin_range(nb)) {
 3580     bin = bin_at(av, idx);
 3581     
 3582     for (victim = last(bin); victim != bin; victim = victim->bk) {
 3583       size = chunksize(victim);
 3584       
 3585       if ((CHUNK_SIZE_T)(size) >= (CHUNK_SIZE_T)(nb)) {
 3586         remainder_size = size - nb;
 3587         unlink(victim, bck, fwd);
 3588         
 3589         /* Exhaust */
 3590         if (remainder_size < MINSIZE)  {
 3591           set_inuse_bit_at_offset(victim, size);
 3592           check_malloced_chunk(victim, nb);
 3593           return chunk2mem(victim);
 3594         }
 3595         /* Split */
 3596         else {
 3597           remainder = chunk_at_offset(victim, nb);
 3598           unsorted_chunks(av)->bk = unsorted_chunks(av)->fd = remainder;
 3599           remainder->bk = remainder->fd = unsorted_chunks(av);
 3600           set_head(victim, nb | PREV_INUSE);
 3601           set_head(remainder, remainder_size | PREV_INUSE);
 3602           set_foot(remainder, remainder_size);
 3603           check_malloced_chunk(victim, nb);
 3604           return chunk2mem(victim);
 3605         } 
 3606       }
 3607     }    
 3608   }
 3609 
 3610   /*
 3611     Search for a chunk by scanning bins, starting with next largest
 3612     bin. This search is strictly by best-fit; i.e., the smallest
 3613     (with ties going to approximately the least recently used) chunk
 3614     that fits is selected.
 3615     
 3616     The bitmap avoids needing to check that most blocks are nonempty.
 3617   */
 3618     
 3619   ++idx;
 3620   bin = bin_at(av,idx);
 3621   block = idx2block(idx);
 3622   map = av->binmap[block];
 3623   bit = idx2bit(idx);
 3624   
 3625   for (;;) {
 3626     
 3627     /* Skip rest of block if there are no more set bits in this block.  */
 3628     if (bit > map || bit == 0) {
 3629       do {
 3630         if (++block >= BINMAPSIZE)  /* out of bins */
 3631           goto use_top;
 3632       } while ( (map = av->binmap[block]) == 0);
 3633       
 3634       bin = bin_at(av, (block << BINMAPSHIFT));
 3635       bit = 1;
 3636     }
 3637     
 3638     /* Advance to bin with set bit. There must be one. */
 3639     while ((bit & map) == 0) {
 3640       bin = next_bin(bin);
 3641       bit <<= 1;
 3642       assert(bit != 0);
 3643     }
 3644     
 3645     /* Inspect the bin. It is likely to be non-empty */
 3646     victim = last(bin);
 3647     
 3648     /*  If a false alarm (empty bin), clear the bit. */
 3649     if (victim == bin) {
 3650       av->binmap[block] = map &= ~bit; /* Write through */
 3651       bin = next_bin(bin);
 3652       bit <<= 1;
 3653     }
 3654     
 3655     else {
 3656       size = chunksize(victim);
 3657       
 3658       /*  We know the first chunk in this bin is big enough to use. */
 3659       assert((CHUNK_SIZE_T)(size) >= (CHUNK_SIZE_T)(nb));
 3660       
 3661       remainder_size = size - nb;
 3662       
 3663       /* unlink */
 3664       bck = victim->bk;
 3665       bin->bk = bck;
 3666       bck->fd = bin;
 3667       
 3668       /* Exhaust */
 3669       if (remainder_size < MINSIZE) {
 3670         set_inuse_bit_at_offset(victim, size);
 3671         check_malloced_chunk(victim, nb);
 3672         return chunk2mem(victim);
 3673       }
 3674       
 3675       /* Split */
 3676       else {
 3677         remainder = chunk_at_offset(victim, nb);
 3678         
 3679         unsorted_chunks(av)->bk = unsorted_chunks(av)->fd = remainder;
 3680         remainder->bk = remainder->fd = unsorted_chunks(av);
 3681         /* advertise as last remainder */
 3682         if (in_smallbin_range(nb)) 
 3683           av->last_remainder = remainder; 
 3684         
 3685         set_head(victim, nb | PREV_INUSE);
 3686         set_head(remainder, remainder_size | PREV_INUSE);
 3687         set_foot(remainder, remainder_size);
 3688         check_malloced_chunk(victim, nb);
 3689         return chunk2mem(victim);
 3690       }
 3691     }
 3692   }
 3693 
 3694   use_top:    
 3695   /*
 3696     If large enough, split off the chunk bordering the end of memory
 3697     (held in av->top). Note that this is in accord with the best-fit
 3698     search rule.  In effect, av->top is treated as larger (and thus
 3699     less well fitting) than any other available chunk since it can
 3700     be extended to be as large as necessary (up to system
 3701     limitations).
 3702     
 3703     We require that av->top always exists (i.e., has size >=
 3704     MINSIZE) after initialization, so if it would otherwise be
 3705     exhuasted by current request, it is replenished. (The main
 3706     reason for ensuring it exists is that we may need MINSIZE space
 3707     to put in fenceposts in sysmalloc.)
 3708   */
 3709   
 3710   victim = av->top;
 3711   size = chunksize(victim);
 3712   
 3713   if ((CHUNK_SIZE_T)(size) >= (CHUNK_SIZE_T)(nb + MINSIZE)) {
 3714     remainder_size = size - nb;
 3715     remainder = chunk_at_offset(victim, nb);
 3716     av->top = remainder;
 3717     set_head(victim, nb | PREV_INUSE);
 3718     set_head(remainder, remainder_size | PREV_INUSE);
 3719     
 3720     check_malloced_chunk(victim, nb);
 3721     return chunk2mem(victim);
 3722   }
 3723   
 3724   /* 
 3725      If no space in top, relay to handle system-dependent cases 
 3726   */
 3727   return sYSMALLOc(nb, av);    
 3728 }
 3729 
 3730 /*
 3731   ------------------------------ free ------------------------------
 3732 */
 3733 
 3734 #if __STD_C
 3735 void fREe(Void_t* mem)
 3736 #else
 3737 void fREe(mem) Void_t* mem;
 3738 #endif
 3739 {
 3740   mstate av = get_malloc_state();
 3741 
 3742   mchunkptr       p;           /* chunk corresponding to mem */
 3743   INTERNAL_SIZE_T size;        /* its size */
 3744   mfastbinptr*    fb;          /* associated fastbin */
 3745   mchunkptr       nextchunk;   /* next contiguous chunk */
 3746   INTERNAL_SIZE_T nextsize;    /* its size */
 3747   int             nextinuse;   /* true if nextchunk is used */
 3748   INTERNAL_SIZE_T prevsize;    /* size of previous contiguous chunk */
 3749   mchunkptr       bck;         /* misc temp for linking */
 3750   mchunkptr       fwd;         /* misc temp for linking */
 3751 
 3752   if (!__mpr_malloc_initialized)
 3753     __mpr_malloc_initialize();
 3754   if (__mpr_free_hook != NULL) {
 3755     (*__mpr_free_hook)(mem);
 3756     return;
 3757   }
 3758 
 3759   /* free(0) has no effect */
 3760   if (mem != 0) {
 3761     p = mem2chunk(mem);
 3762     size = chunksize(p);
 3763 
 3764     check_inuse_chunk(p);
 3765 
 3766     /*
 3767       If eligible, place chunk on a fastbin so it can be found
 3768       and used quickly in malloc.
 3769     */
 3770 
 3771     if ((CHUNK_SIZE_T)(size) <= (CHUNK_SIZE_T)(av->max_fast)
 3772 
 3773 #if TRIM_FASTBINS
 3774         /* 
 3775            If TRIM_FASTBINS set, don't place chunks
 3776            bordering top into fastbins
 3777         */
 3778         && (chunk_at_offset(p, size) != av->top)
 3779 #endif
 3780         ) {
 3781 
 3782       set_fastchunks(av);
 3783       fb = &(av->fastbins[fastbin_index(size)]);
 3784       p->fd = *fb;
 3785       *fb = p;
 3786     }
 3787 
 3788     /*
 3789        Consolidate other non-mmapped chunks as they arrive.
 3790     */
 3791 
 3792     else if (!chunk_is_mmapped(p)) {
 3793       set_anychunks(av);
 3794 
 3795       nextchunk = chunk_at_offset(p, size);
 3796       nextsize = chunksize(nextchunk);
 3797 
 3798       /* consolidate backward */
 3799       if (!prev_inuse(p)) {
 3800         prevsize = p->prev_size;
 3801         size += prevsize;
 3802         p = chunk_at_offset(p, -((long) prevsize));
 3803         unlink(p, bck, fwd);
 3804       }
 3805 
 3806       if (nextchunk != av->top) {
 3807         /* get and clear inuse bit */
 3808         nextinuse = inuse_bit_at_offset(nextchunk, nextsize);
 3809         set_head(nextchunk, nextsize);
 3810 
 3811         /* consolidate forward */
 3812         if (!nextinuse) {
 3813           unlink(nextchunk, bck, fwd);
 3814           size += nextsize;
 3815         }
 3816 
 3817         /*
 3818           Place the chunk in unsorted chunk list. Chunks are
 3819           not placed into regular bins until after they have
 3820           been given one chance to be used in malloc.
 3821         */
 3822 
 3823         bck = unsorted_chunks(av);
 3824         fwd = bck->fd;
 3825         p->bk = bck;
 3826         p->fd = fwd;
 3827         bck->fd = p;
 3828         fwd->bk = p;
 3829 
 3830         set_head(p, size | PREV_INUSE);
 3831         set_foot(p, size);
 3832         
 3833         check_free_chunk(p);
 3834       }
 3835 
 3836       /*
 3837          If the chunk borders the current high end of memory,
 3838          consolidate into top
 3839       */
 3840 
 3841       else {
 3842         size += nextsize;
 3843         set_head(p, size | PREV_INUSE);
 3844         av->top = p;
 3845         check_chunk(p);
 3846       }
 3847 
 3848       /*
 3849         If freeing a large space, consolidate possibly-surrounding
 3850         chunks. Then, if the total unused topmost memory exceeds trim
 3851         threshold, ask malloc_trim to reduce top.
 3852 
 3853         Unless max_fast is 0, we don't know if there are fastbins
 3854         bordering top, so we cannot tell for sure whether threshold
 3855         has been reached unless fastbins are consolidated.  But we
 3856         don't want to consolidate on each free.  As a compromise,
 3857         consolidation is performed if FASTBIN_CONSOLIDATION_THRESHOLD
 3858         is reached.
 3859       */
 3860 
 3861       if ((CHUNK_SIZE_T)(size) >= FASTBIN_CONSOLIDATION_THRESHOLD) { 
 3862         if (have_fastchunks(av)) 
 3863           malloc_consolidate(av);
 3864 
 3865 #ifndef MORECORE_CANNOT_TRIM        
 3866         if ((CHUNK_SIZE_T)(chunksize(av->top)) >= 
 3867             (CHUNK_SIZE_T)(av->trim_threshold))
 3868           sYSTRIm(av->top_pad, av);
 3869 #endif
 3870       }
 3871 
 3872     }
 3873     /*
 3874       If the chunk was allocated via mmap, release via munmap()
 3875       Note that if HAVE_MMAP is false but chunk_is_mmapped is
 3876       true, then user must have overwritten memory. There's nothing
 3877       we can do to catch this error unless DEBUG is set, in which case
 3878       check_inuse_chunk (above) will have triggered error.
 3879     */
 3880 
 3881     else {
 3882 #if HAVE_MMAP
 3883       int ret;
 3884       INTERNAL_SIZE_T offset = p->prev_size;
 3885       av->n_mmaps--;
 3886       av->mmapped_mem -= (size + offset);
 3887       ret = munmap((char*)p - offset, size + offset);
 3888       /* munmap returns non-zero on failure */
 3889       assert(ret == 0);
 3890 #endif
 3891     }
 3892   }
 3893 }
 3894 
 3895 /*
 3896   ------------------------- malloc_consolidate -------------------------
 3897 
 3898   malloc_consolidate is a specialized version of free() that tears
 3899   down chunks held in fastbins.  Free itself cannot be used for this
 3900   purpose since, among other things, it might place chunks back onto
 3901   fastbins.  So, instead, we need to use a minor variant of the same
 3902   code.
 3903   
 3904   Also, because this routine needs to be called the first time through
 3905   malloc anyway, it turns out to be the perfect place to trigger
 3906   initialization code.
 3907 */
 3908 
 3909 #if __STD_C
 3910 static void malloc_consolidate(mstate av)
 3911 #else
 3912 static void malloc_consolidate(av) mstate av;
 3913 #endif
 3914 {
 3915   mfastbinptr*    fb;                 /* current fastbin being consolidated */
 3916   mfastbinptr*    maxfb;              /* last fastbin (for loop control) */
 3917   mchunkptr       p;                  /* current chunk being consolidated */
 3918   mchunkptr       nextp;              /* next chunk to consolidate */
 3919   mchunkptr       unsorted_bin;       /* bin header */
 3920   mchunkptr       first_unsorted;     /* chunk to link to */
 3921 
 3922   /* These have same use as in free() */
 3923   mchunkptr       nextchunk;
 3924   INTERNAL_SIZE_T size;
 3925   INTERNAL_SIZE_T nextsize;
 3926   INTERNAL_SIZE_T prevsize;
 3927   int             nextinuse;
 3928   mchunkptr       bck;
 3929   mchunkptr       fwd;
 3930 
 3931   /*
 3932     If max_fast is 0, we know that av hasn't
 3933     yet been initialized, in which case do so below
 3934   */
 3935 
 3936   if (av->max_fast != 0) {
 3937     clear_fastchunks(av);
 3938 
 3939     unsorted_bin = unsorted_chunks(av);
 3940 
 3941     /*
 3942       Remove each chunk from fast bin and consolidate it, placing it
 3943       then in unsorted bin. Among other reasons for doing this,
 3944       placing in unsorted bin avoids needing to calculate actual bins
 3945       until malloc is sure that chunks aren't immediately going to be
 3946       reused anyway.
 3947     */
 3948     
 3949     maxfb = &(av->fastbins[fastbin_index(av->max_fast)]);
 3950     fb = &(av->fastbins[0]);
 3951     do {
 3952       if ( (p = *fb) != 0) {
 3953         *fb = 0;
 3954         
 3955         do {
 3956           check_inuse_chunk(p);
 3957           nextp = p->fd;
 3958           
 3959           /* Slightly streamlined version of consolidation code in free() */
 3960           size = p->size & ~PREV_INUSE;
 3961           nextchunk = chunk_at_offset(p, size);
 3962           nextsize = chunksize(nextchunk);
 3963           
 3964           if (!prev_inuse(p)) {
 3965             prevsize = p->prev_size;
 3966             size += prevsize;
 3967             p = chunk_at_offset(p, -((long) prevsize));
 3968             unlink(p, bck, fwd);
 3969           }
 3970           
 3971           if (nextchunk != av->top) {
 3972             nextinuse = inuse_bit_at_offset(nextchunk, nextsize);
 3973             set_head(nextchunk, nextsize);
 3974             
 3975             if (!nextinuse) {
 3976               size += nextsize;
 3977               unlink(nextchunk, bck, fwd);
 3978             }
 3979             
 3980             first_unsorted = unsorted_bin->fd;
 3981             unsorted_bin->fd = p;
 3982             first_unsorted->bk = p;
 3983             
 3984             set_head(p, size | PREV_INUSE);
 3985             p->bk = unsorted_bin;
 3986             p->fd = first_unsorted;
 3987             set_foot(p, size);
 3988           }
 3989           
 3990           else {
 3991             size += nextsize;
 3992             set_head(p, size | PREV_INUSE);
 3993             av->top = p;
 3994           }
 3995           
 3996         } while ( (p = nextp) != 0);
 3997         
 3998       }
 3999     } while (fb++ != maxfb);
 4000   }
 4001   else {
 4002     malloc_init_state(av);
 4003     check_malloc_state();
 4004   }
 4005 }
 4006 
 4007 /*
 4008   ------------------------------ realloc ------------------------------
 4009 */
 4010 
 4011 
 4012 #if __STD_C
 4013 Void_t* rEALLOc(Void_t* oldmem, size_t bytes)
 4014 #else
 4015 Void_t* rEALLOc(oldmem, bytes) Void_t* oldmem; size_t bytes;
 4016 #endif
 4017 {
 4018   mstate av = get_malloc_state();
 4019 
 4020   INTERNAL_SIZE_T  nb;              /* padded request size */
 4021 
 4022   mchunkptr        oldp;            /* chunk corresponding to oldmem */
 4023   INTERNAL_SIZE_T  oldsize;         /* its size */
 4024 
 4025   mchunkptr        newp;            /* chunk to return */
 4026   INTERNAL_SIZE_T  newsize;         /* its size */
 4027   Void_t*          newmem;          /* corresponding user mem */
 4028 
 4029   mchunkptr        next;            /* next contiguous chunk after oldp */
 4030 
 4031   mchunkptr        remainder;       /* extra space at end of newp */
 4032   CHUNK_SIZE_T     remainder_size;  /* its size */
 4033 
 4034   mchunkptr        bck;             /* misc temp for linking */
 4035   mchunkptr        fwd;             /* misc temp for linking */
 4036 
 4037   CHUNK_SIZE_T     copysize;        /* bytes to copy */
 4038   unsigned int     ncopies;         /* INTERNAL_SIZE_T words to copy */
 4039   INTERNAL_SIZE_T* s;               /* copy source */ 
 4040   INTERNAL_SIZE_T* d;               /* copy destination */
 4041 
 4042   if (!__mpr_malloc_initialized)
 4043     __mpr_malloc_initialize();
 4044   if (__mpr_realloc_hook != NULL)
 4045     return (*__mpr_realloc_hook)(oldmem, bytes);
 4046 
 4047 #ifdef REALLOC_ZERO_BYTES_FREES
 4048   if (bytes == 0) {
 4049     fREe(oldmem);
 4050     return 0;
 4051   }
 4052 #endif
 4053 
 4054   /* realloc of null is supposed to be same as malloc */
 4055   if (oldmem == 0) return mALLOc(bytes);
 4056 
 4057   checked_request2size(bytes, nb);
 4058 
 4059   oldp    = mem2chunk(oldmem);
 4060   oldsize = chunksize(oldp);
 4061 
 4062   check_inuse_chunk(oldp);
 4063 
 4064   if (!chunk_is_mmapped(oldp)) {
 4065 
 4066     if ((CHUNK_SIZE_T)(oldsize) >= (CHUNK_SIZE_T)(nb)) {
 4067       /* already big enough; split below */
 4068       newp = oldp;
 4069       newsize = oldsize;
 4070     }
 4071 
 4072     else {
 4073       next = chunk_at_offset(oldp, oldsize);
 4074 
 4075       /* Try to expand forward into top */
 4076       if (next == av->top &&
 4077           (CHUNK_SIZE_T)(newsize = oldsize + chunksize(next)) >=
 4078           (CHUNK_SIZE_T)(nb + MINSIZE)) {
 4079         set_head_size(oldp, nb);
 4080         av->top = chunk_at_offset(oldp, nb);
 4081         set_head(av->top, (newsize - nb) | PREV_INUSE);
 4082         return chunk2mem(oldp);
 4083       }
 4084       
 4085       /* Try to expand forward into next chunk;  split off remainder below */
 4086       else if (next != av->top && 
 4087                !inuse(next) &&
 4088                (CHUNK_SIZE_T)(newsize = oldsize + chunksize(next)) >=
 4089                (CHUNK_SIZE_T)(nb)) {
 4090         newp = oldp;
 4091         unlink(next, bck, fwd);
 4092       }
 4093 
 4094       /* allocate, copy, free */
 4095       else {
 4096         newmem = mALLOc(nb - MALLOC_ALIGN_MASK);
 4097         if (newmem == 0)
 4098           return 0; /* propagate failure */
 4099       
 4100         newp = mem2chunk(newmem);
 4101         newsize = chunksize(newp);
 4102         
 4103         /*
 4104           Avoid copy if newp is next chunk after oldp.
 4105         */
 4106         if (newp == next) {
 4107           newsize += oldsize;
 4108           newp = oldp;
 4109         }
 4110         else {
 4111           /*
 4112             Unroll copy of <= 36 bytes (72 if 8byte sizes)
 4113             We know that contents have an odd number of
 4114             INTERNAL_SIZE_T-sized words; minimally 3.
 4115           */
 4116           
 4117           copysize = oldsize - SIZE_SZ;
 4118           s = (INTERNAL_SIZE_T*)(oldmem);
 4119           d = (INTERNAL_SIZE_T*)(newmem);
 4120           ncopies = copysize / sizeof(INTERNAL_SIZE_T);
 4121           assert(ncopies >= 3);
 4122           
 4123           if (ncopies > 9)
 4124             MALLOC_COPY(d, s, copysize);
 4125           
 4126           else {
 4127             *(d+0) = *(s+0);
 4128             *(d+1) = *(s+1);
 4129             *(d+2) = *(s+2);
 4130             if (ncopies > 4) {
 4131               *(d+3) = *(s+3);
 4132               *(d+4) = *(s+4);
 4133               if (ncopies > 6) {
 4134                 *(d+5) = *(s+5);
 4135                 *(d+6) = *(s+6);
 4136                 if (ncopies > 8) {
 4137                   *(d+7) = *(s+7);
 4138                   *(d+8) = *(s+8);
 4139                 }
 4140               }
 4141             }
 4142           }
 4143           
 4144           fREe(oldmem);
 4145           check_inuse_chunk(newp);
 4146           return chunk2mem(newp);
 4147         }
 4148       }
 4149     }
 4150 
 4151     /* If possible, free extra space in old or extended chunk */
 4152 
 4153     assert((CHUNK_SIZE_T)(newsize) >= (CHUNK_SIZE_T)(nb));
 4154 
 4155     remainder_size = newsize - nb;
 4156 
 4157     if (remainder_size < MINSIZE) { /* not enough extra to split off */
 4158       set_head_size(newp, newsize);
 4159       set_inuse_bit_at_offset(newp, newsize);
 4160     }
 4161     else { /* split remainder */
 4162       remainder = chunk_at_offset(newp, nb);
 4163       set_head_size(newp, nb);
 4164       set_head(remainder, remainder_size | PREV_INUSE);
 4165       /* Mark remainder as inuse so free() won't complain */
 4166       set_inuse_bit_at_offset(remainder, remainder_size);
 4167       fREe(chunk2mem(remainder)); 
 4168     }
 4169 
 4170     check_inuse_chunk(newp);
 4171     return chunk2mem(newp);
 4172   }
 4173 
 4174   /*
 4175     Handle mmap cases
 4176   */
 4177 
 4178   else {
 4179 #if HAVE_MMAP
 4180 
 4181 #if HAVE_MREMAP
 4182     INTERNAL_SIZE_T offset = oldp->prev_size;
 4183     size_t pagemask = av->pagesize - 1;
 4184     char *cp;
 4185     CHUNK_SIZE_T  sum;
 4186     
 4187     /* Note the extra SIZE_SZ overhead */
 4188     newsize = (nb + offset + SIZE_SZ + pagemask) & ~pagemask;
 4189 
 4190     /* don't need to remap if still within same page */
 4191     if (oldsize == newsize - offset) 
 4192       return oldmem;
 4193 
 4194     cp = (char*)mremap((char*)oldp - offset, oldsize + offset, newsize, 1);
 4195     
 4196     if (cp != (char*)MORECORE_FAILURE) {
 4197 
 4198       newp = (mchunkptr)(cp + offset);
 4199       set_head(newp, (newsize - offset)|IS_MMAPPED);
 4200       
 4201       assert(aligned_OK(chunk2mem(newp)));
 4202       assert((newp->prev_size == offset));
 4203       
 4204       /* update statistics */
 4205       sum = av->mmapped_mem += newsize - oldsize;
 4206       if (sum > (CHUNK_SIZE_T)(av->max_mmapped_mem)) 
 4207         av->max_mmapped_mem = sum;
 4208       sum += av->sbrked_mem;
 4209       if (sum > (CHUNK_SIZE_T)(av->max_total_mem)) 
 4210         av->max_total_mem = sum;
 4211       
 4212       return chunk2mem(newp);
 4213     }
 4214 #endif
 4215 
 4216     /* Note the extra SIZE_SZ overhead. */
 4217     if ((CHUNK_SIZE_T)(oldsize) >= (CHUNK_SIZE_T)(nb + SIZE_SZ)) 
 4218       newmem = oldmem; /* do nothing */
 4219     else {
 4220       /* Must alloc, copy, free. */
 4221       newmem = mALLOc(nb - MALLOC_ALIGN_MASK);
 4222       if (newmem != 0) {
 4223         MALLOC_COPY(newmem, oldmem, oldsize - 2*SIZE_SZ);
 4224         fREe(oldmem);
 4225       }
 4226     }
 4227     return newmem;
 4228 
 4229 #else 
 4230     /* If !HAVE_MMAP, but chunk_is_mmapped, user must have overwritten mem */
 4231     check_malloc_state();
 4232     MALLOC_FAILURE_ACTION;
 4233     return 0;
 4234 #endif
 4235   }
 4236 }
 4237 
 4238 /*
 4239   ------------------------------ memalign ------------------------------
 4240 */
 4241 
 4242 #if __STD_C
 4243 Void_t* mEMALIGn(size_t alignment, size_t bytes)
 4244 #else
 4245 Void_t* mEMALIGn(alignment, bytes) size_t alignment; size_t bytes;
 4246 #endif
 4247 {
 4248   INTERNAL_SIZE_T nb;             /* padded  request size */
 4249   char*           m;              /* memory returned by malloc call */
 4250   mchunkptr       p;              /* corresponding chunk */
 4251   char*           brk;            /* alignment point within p */
 4252   mchunkptr       newp;           /* chunk to return */
 4253   INTERNAL_SIZE_T newsize;        /* its size */
 4254   INTERNAL_SIZE_T leadsize;       /* leading space before alignment point */
 4255   mchunkptr       remainder;      /* spare room at end to split off */
 4256   CHUNK_SIZE_T    remainder_size; /* its size */
 4257   INTERNAL_SIZE_T size;
 4258 
 4259   /* If need less alignment than we give anyway, just relay to malloc */
 4260 
 4261   if (alignment <= MALLOC_ALIGNMENT) return mALLOc(bytes);
 4262 
 4263   /* Otherwise, ensure that it is at least a minimum chunk size */
 4264 
 4265   if (alignment <  MINSIZE) alignment = MINSIZE;
 4266 
 4267   /* Make sure alignment is power of 2 (in case MINSIZE is not).  */
 4268   if ((alignment & (alignment - 1)) != 0) {
 4269     size_t a = MALLOC_ALIGNMENT * 2;
 4270     while ((CHUNK_SIZE_T)a < (CHUNK_SIZE_T)alignment) a <<= 1;
 4271     alignment = a;
 4272   }
 4273 
 4274   checked_request2size(bytes, nb);
 4275 
 4276   /*
 4277     Strategy: find a spot within that chunk that meets the alignment
 4278     request, and then possibly free the leading and trailing space.
 4279   */
 4280 
 4281 
 4282   /* Call malloc with worst case padding to hit alignment. */
 4283 
 4284   m  = (char*)(mALLOc(nb + alignment + MINSIZE));
 4285 
 4286   if (m == 0) return 0; /* propagate failure */
 4287 
 4288   p = mem2chunk(m);
 4289 
 4290   if ((((PTR_UINT)(m)) % alignment) != 0) { /* misaligned */
 4291 
 4292     /*
 4293       Find an aligned spot inside chunk.  Since we need to give back
 4294       leading space in a chunk of at least MINSIZE, if the first
 4295       calculation places us at a spot with less than MINSIZE leader,
 4296       we can move to the next aligned spot -- we've allocated enough
 4297       total room so that this is always possible.
 4298     */
 4299 
 4300     brk = (char*)mem2chunk((PTR_UINT)(((PTR_UINT)(m + alignment - 1)) &
 4301                            -((signed long) alignment)));
 4302     if ((CHUNK_SIZE_T)(brk - (char*)(p)) < MINSIZE)
 4303       brk += alignment;
 4304 
 4305     newp = (mchunkptr)brk;
 4306     leadsize = brk - (char*)(p);
 4307     newsize = chunksize(p) - leadsize;
 4308 
 4309     /* For mmapped chunks, just adjust offset */
 4310     if (chunk_is_mmapped(p)) {
 4311       newp->prev_size = p->prev_size + leadsize;
 4312       set_head(newp, newsize|IS_MMAPPED);
 4313       return chunk2mem(newp);
 4314     }
 4315 
 4316     /* Otherwise, give back leader, use the rest */
 4317     set_head(newp, newsize | PREV_INUSE);
 4318     set_inuse_bit_at_offset(newp, newsize);
 4319     set_head_size(p, leadsize);
 4320     fREe(chunk2mem(p));
 4321     p = newp;
 4322 
 4323     assert (newsize >= nb &&
 4324             (((PTR_UINT)(chunk2mem(p))) % alignment) == 0);
 4325   }
 4326 
 4327   /* Also give back spare room at the end */
 4328   if (!chunk_is_mmapped(p)) {
 4329     size = chunksize(p);
 4330     if ((CHUNK_SIZE_T)(size) > (CHUNK_SIZE_T)(nb + MINSIZE)) {
 4331       remainder_size = size - nb;
 4332       remainder = chunk_at_offset(p, nb);
 4333       set_head(remainder, remainder_size | PREV_INUSE);
 4334       set_head_size(p, nb);
 4335       fREe(chunk2mem(remainder));
 4336     }
 4337   }
 4338 
 4339   check_inuse_chunk(p);
 4340   return chunk2mem(p);
 4341 }
 4342 
 4343 /*
 4344   ------------------------------ calloc ------------------------------
 4345 */
 4346 
 4347 #if __STD_C
 4348 Void_t* cALLOc(size_t n_elements, size_t elem_size)
 4349 #else
 4350 Void_t* cALLOc(n_elements, elem_size) size_t n_elements; size_t elem_size;
 4351 #endif
 4352 {
 4353   mchunkptr p;
 4354   CHUNK_SIZE_T  clearsize;
 4355   CHUNK_SIZE_T  nclears;
 4356   INTERNAL_SIZE_T* d;
 4357 
 4358   Void_t* mem = mALLOc(n_elements * elem_size);
 4359 
 4360   if (mem != 0) {
 4361     p = mem2chunk(mem);
 4362 
 4363     if (!chunk_is_mmapped(p))
 4364     {  
 4365       /*
 4366         Unroll clear of <= 36 bytes (72 if 8byte sizes)
 4367         We know that contents have an odd number of
 4368         INTERNAL_SIZE_T-sized words; minimally 3.
 4369       */
 4370 
 4371       d = (INTERNAL_SIZE_T*)mem;
 4372       clearsize = chunksize(p) - SIZE_SZ;
 4373       nclears = clearsize / sizeof(INTERNAL_SIZE_T);
 4374       assert(nclears >= 3);
 4375 
 4376       if (nclears > 9)
 4377         MALLOC_ZERO(d, clearsize);
 4378 
 4379       else {
 4380         *(d+0) = 0;
 4381         *(d+1) = 0;
 4382         *(d+2) = 0;
 4383         if (nclears > 4) {
 4384           *(d+3) = 0;
 4385           *(d+4) = 0;
 4386           if (nclears > 6) {
 4387             *(d+5) = 0;
 4388             *(d+6) = 0;
 4389             if (nclears > 8) {
 4390               *(d+7) = 0;
 4391               *(d+8) = 0;
 4392             }
 4393           }
 4394         }
 4395       }
 4396     }
 4397 #if ! MMAP_CLEARS
 4398     else
 4399     {
 4400       d = (INTERNAL_SIZE_T*)mem;
 4401       /*
 4402         Note the additional SIZE_SZ
 4403       */
 4404       clearsize = chunksize(p) - 2*SIZE_SZ;
 4405       MALLOC_ZERO(d, clearsize);
 4406     }
 4407 #endif
 4408   }
 4409   return mem;
 4410 }
 4411 
 4412 /*
 4413   ------------------------------ cfree ------------------------------
 4414 */
 4415 
 4416 #if __STD_C
 4417 void cFREe(Void_t *mem)
 4418 #else
 4419 void cFREe(mem) Void_t *mem;
 4420 #endif
 4421 {
 4422   fREe(mem);
 4423 }
 4424 
 4425 /*
 4426   ------------------------- independent_calloc -------------------------
 4427 */
 4428 
 4429 #if __STD_C
 4430 Void_t** iCALLOc(size_t n_elements, size_t elem_size, Void_t* chunks[])
 4431 #else
 4432 Void_t** iCALLOc(n_elements, elem_size, chunks) size_t n_elements; size_t elem_size; Void_t* chunks[];
 4433 #endif
 4434 {
 4435   size_t sz = elem_size; /* serves as 1-element array */
 4436   /* opts arg of 3 means all elements are same size, and should be cleared */
 4437   return iALLOc(n_elements, &sz, 3, chunks);
 4438 }
 4439 
 4440 /*
 4441   ------------------------- independent_comalloc -------------------------
 4442 */
 4443 
 4444 #if __STD_C
 4445 Void_t** iCOMALLOc(size_t n_elements, size_t sizes[], Void_t* chunks[])
 4446 #else
 4447 Void_t** iCOMALLOc(n_elements, sizes, chunks) size_t n_elements; size_t sizes[]; Void_t* chunks[];
 4448 #endif
 4449 {
 4450   return iALLOc(n_elements, sizes, 0, chunks);
 4451 }
 4452 
 4453 
 4454 /*
 4455   ------------------------------ ialloc ------------------------------
 4456   ialloc provides common support for independent_X routines, handling all of
 4457   the combinations that can result.
 4458 
 4459   The opts arg has:
 4460     bit 0 set if all elements are same size (using sizes[0])
 4461     bit 1 set if elements should be zeroed
 4462 */
 4463 
 4464 
 4465 #if __STD_C
 4466 static Void_t** iALLOc(size_t n_elements, 
 4467                        size_t* sizes,  
 4468                        int opts,
 4469                        Void_t* chunks[])
 4470 #else
 4471 static Void_t** iALLOc(n_elements, sizes, opts, chunks) size_t n_elements; size_t* sizes; int opts; Void_t* chunks[];
 4472 #endif
 4473 {
 4474   mstate av = get_malloc_state();
 4475   INTERNAL_SIZE_T element_size;   /* chunksize of each element, if all same */
 4476   INTERNAL_SIZE_T contents_size;  /* total size of elements */
 4477   INTERNAL_SIZE_T array_size;     /* request size of pointer array */
 4478   Void_t*         mem;            /* malloced aggregate space */
 4479   mchunkptr       p;              /* corresponding chunk */
 4480   INTERNAL_SIZE_T remainder_size; /* remaining bytes while splitting */
 4481   Void_t**        marray;         /* either "chunks" or malloced ptr array */
 4482   mchunkptr       array_chunk;    /* chunk for malloced ptr array */
 4483   int             mmx;            /* to disable mmap */
 4484   INTERNAL_SIZE_T size;           
 4485   size_t          i;
 4486 
 4487   /* Ensure initialization */
 4488   if (av->max_fast == 0) malloc_consolidate(av);
 4489 
 4490   /* compute array length, if needed */
 4491   if (chunks != 0) {
 4492     if (n_elements == 0)
 4493       return chunks; /* nothing to do */
 4494     marray = chunks;
 4495     array_size = 0;
 4496   }
 4497   else {
 4498     /* if empty req, must still return chunk representing empty array */
 4499     if (n_elements == 0) 
 4500       return (Void_t**) mALLOc(0);
 4501     marray = 0;
 4502     array_size = request2size(n_elements * (sizeof(Void_t*)));
 4503   }
 4504 
 4505   /* compute total element size */
 4506   if (opts & 0x1) { /* all-same-size */
 4507     element_size = request2size(*sizes);
 4508     contents_size = n_elements * element_size;
 4509   }
 4510   else { /* add up all the sizes */
 4511     element_size = 0;
 4512     contents_size = 0;
 4513     for (i = 0; i != n_elements; ++i) 
 4514       contents_size += request2size(sizes[i]);     
 4515   }
 4516 
 4517   /* subtract out alignment bytes from total to minimize overallocation */
 4518   size = contents_size + array_size - MALLOC_ALIGN_MASK;
 4519   
 4520   /* 
 4521      Allocate the aggregate chunk.
 4522      But first disable mmap so malloc won't use it, since
 4523      we would not be able to later free/realloc space internal
 4524      to a segregated mmap region.
 4525  */
 4526   mmx = av->n_mmaps_max;   /* disable mmap */
 4527   av->n_mmaps_max = 0;
 4528   mem = mALLOc(size);
 4529   av->n_mmaps_max = mmx;   /* reset mmap */
 4530   if (mem == 0) 
 4531     return 0;
 4532 
 4533   p = mem2chunk(mem);
 4534   assert(!chunk_is_mmapped(p)); 
 4535   remainder_size = chunksize(p);
 4536 
 4537   if (opts & 0x2) {       /* optionally clear the elements */
 4538     MALLOC_ZERO(mem, remainder_size - SIZE_SZ - array_size);
 4539   }
 4540 
 4541   /* If not provided, allocate the pointer array as final part of chunk */
 4542   if (marray == 0) {
 4543     array_chunk = chunk_at_offset(p, contents_size);
 4544     marray = (Void_t**) (chunk2mem(array_chunk));
 4545     set_head(array_chunk, (remainder_size - contents_size) | PREV_INUSE);
 4546     remainder_size = contents_size;
 4547   }
 4548 
 4549   /* split out elements */
 4550   for (i = 0; ; ++i) {
 4551     marray[i] = chunk2mem(p);
 4552     if (i != n_elements-1) {
 4553       if (element_size != 0) 
 4554         size = element_size;
 4555       else
 4556         size = request2size(sizes[i]);          
 4557       remainder_size -= size;
 4558       set_head(p, size | PREV_INUSE);
 4559       p = chunk_at_offset(p, size);
 4560     }
 4561     else { /* the final element absorbs any overallocation slop */
 4562       set_head(p, remainder_size | PREV_INUSE);
 4563       break;
 4564     }
 4565   }
 4566 
 4567 #if DEBUG
 4568   if (marray != chunks) {
 4569     /* final element must have exactly exhausted chunk */
 4570     if (element_size != 0) 
 4571       assert(remainder_size == element_size);
 4572     else
 4573       assert(remainder_size == request2size(sizes[i]));
 4574     check_inuse_chunk(mem2chunk(marray));
 4575   }
 4576 
 4577   for (i = 0; i != n_elements; ++i)
 4578     check_inuse_chunk(mem2chunk(marray[i]));
 4579 #endif
 4580 
 4581   return marray;
 4582 }
 4583 
 4584 
 4585 /*
 4586   ------------------------------ valloc ------------------------------
 4587 */
 4588 
 4589 #if __STD_C
 4590 Void_t* vALLOc(size_t bytes)
 4591 #else
 4592 Void_t* vALLOc(bytes) size_t bytes;
 4593 #endif
 4594 {
 4595   /* Ensure initialization */
 4596   mstate av = get_malloc_state();
 4597   if (av->max_fast == 0) malloc_consolidate(av);
 4598   return mEMALIGn(av->pagesize, bytes);
 4599 }
 4600 
 4601 /*
 4602   ------------------------------ pvalloc ------------------------------
 4603 */
 4604 
 4605 
 4606 #if __STD_C
 4607 Void_t* pVALLOc(size_t bytes)
 4608 #else
 4609 Void_t* pVALLOc(bytes) size_t bytes;
 4610 #endif
 4611 {
 4612   mstate av = get_malloc_state();
 4613   size_t pagesz;
 4614 
 4615   /* Ensure initialization */
 4616   if (av->max_fast == 0) malloc_consolidate(av);
 4617   pagesz = av->pagesize;
 4618   return mEMALIGn(pagesz, (bytes + pagesz - 1) & ~(pagesz - 1));
 4619 }
 4620    
 4621 
 4622 /*
 4623   ------------------------------ malloc_trim ------------------------------
 4624 */
 4625 
 4626 #if __STD_C
 4627 int mTRIm(size_t pad)
 4628 #else
 4629 int mTRIm(pad) size_t pad;
 4630 #endif
 4631 {
 4632   mstate av = get_malloc_state();
 4633   /* Ensure initialization/consolidation */
 4634   malloc_consolidate(av);
 4635 
 4636 #ifndef MORECORE_CANNOT_TRIM        
 4637   return sYSTRIm(pad, av);
 4638 #else
 4639   return 0;
 4640 #endif
 4641 }
 4642 
 4643 
 4644 /*
 4645   ------------------------- malloc_usable_size -------------------------
 4646 */
 4647 
 4648 #if __STD_C
 4649 size_t mUSABLe(Void_t* mem)
 4650 #else
 4651 size_t mUSABLe(mem) Void_t* mem;
 4652 #endif
 4653 {
 4654   mchunkptr p;
 4655   if (mem != 0) {
 4656     p = mem2chunk(mem);
 4657     if (chunk_is_mmapped(p))
 4658       return chunksize(p) - 2*SIZE_SZ;
 4659     else if (inuse(p))
 4660       return chunksize(p) - SIZE_SZ;
 4661   }
 4662   return 0;
 4663 }
 4664 
 4665 /*
 4666   ------------------------------ mallinfo ------------------------------
 4667 */
 4668 
 4669 struct mallinfo mALLINFo()
 4670 {
 4671   mstate av = get_malloc_state();
 4672   struct mallinfo mi;
 4673   int i;
 4674   mbinptr b;
 4675   mchunkptr p;
 4676   INTERNAL_SIZE_T avail;
 4677   INTERNAL_SIZE_T fastavail;
 4678   int nblocks;
 4679   int nfastblocks;
 4680 
 4681   /* Ensure initialization */
 4682   if (av->top == 0)  malloc_consolidate(av);
 4683 
 4684   check_malloc_state();
 4685 
 4686   /* Account for top */
 4687   avail = chunksize(av->top);
 4688   nblocks = 1;  /* top always exists */
 4689 
 4690   /* traverse fastbins */
 4691   nfastblocks = 0;
 4692   fastavail = 0;
 4693 
 4694   for (i = 0; i < NFASTBINS; ++i) {
 4695     for (p = av->fastbins[i]; p != 0; p = p->fd) {
 4696       ++nfastblocks;
 4697       fastavail += chunksize(p);
 4698     }
 4699   }
 4700 
 4701   avail += fastavail;
 4702 
 4703   /* traverse regular bins */
 4704   for (i = 1; i < NBINS; ++i) {
 4705     b = bin_at(av, i);
 4706     for (p = last(b); p != b; p = p->bk) {
 4707       ++nblocks;
 4708       avail += chunksize(p);
 4709     }
 4710   }
 4711 
 4712   mi.smblks = nfastblocks;
 4713   mi.ordblks = nblocks;
 4714   mi.fordblks = avail;
 4715   mi.uordblks = av->sbrked_mem - avail;
 4716   mi.arena = av->sbrked_mem;
 4717   mi.hblks = av->n_mmaps;
 4718   mi.hblkhd = av->mmapped_mem;
 4719   mi.fsmblks = fastavail;
 4720   mi.keepcost = chunksize(av->top);
 4721   mi.usmblks = av->max_total_mem;
 4722   return mi;
 4723 }
 4724 
 4725 /*
 4726   ------------------------------ malloc_stats ------------------------------
 4727 */
 4728 
 4729 void mSTATs()
 4730 {
 4731   struct mallinfo mi = mALLINFo();
 4732 
 4733 #ifdef WIN32
 4734   {
 4735     CHUNK_SIZE_T  free, reserved, committed;
 4736     vminfo (&free, &reserved, &committed);
 4737     fprintf(stderr, "free bytes       = %10lu\n", 
 4738             free);
 4739     fprintf(stderr, "reserved bytes   = %10lu\n", 
 4740             reserved);
 4741     fprintf(stderr, "committed bytes  = %10lu\n", 
 4742             committed);
 4743   }
 4744 #endif
 4745 
 4746 
 4747   fprintf(stderr, "max system bytes = %10lu\n",
 4748           (CHUNK_SIZE_T)(mi.usmblks));
 4749   fprintf(stderr, "system bytes     = %10lu\n",
 4750           (CHUNK_SIZE_T)(mi.arena + mi.hblkhd));
 4751   fprintf(stderr, "in use bytes     = %10lu\n",
 4752           (CHUNK_SIZE_T)(mi.uordblks + mi.hblkhd));
 4753 
 4754 #ifdef WIN32 
 4755   {
 4756     CHUNK_SIZE_T  kernel, user;
 4757     if (cpuinfo (TRUE, &kernel, &user)) {
 4758       fprintf(stderr, "kernel ms        = %10lu\n", 
 4759               kernel);
 4760       fprintf(stderr, "user ms          = %10lu\n", 
 4761               user);
 4762     }
 4763   }
 4764 #endif
 4765 }
 4766 
 4767 
 4768 /*
 4769   ------------------------------ mallopt ------------------------------
 4770 */
 4771 
 4772 #if __STD_C
 4773 int mALLOPt(int param_number, int value)
 4774 #else
 4775 int mALLOPt(param_number, value) int param_number; int value;
 4776 #endif
 4777 {
 4778   mstate av = get_malloc_state();
 4779   /* Ensure initialization/consolidation */
 4780   malloc_consolidate(av);
 4781 
 4782   switch(param_number) {
 4783   case M_MXFAST:
 4784     if (value >= 0 && value <= MAX_FAST_SIZE) {
 4785       set_max_fast(av, value);
 4786       return 1;
 4787     }
 4788     else
 4789       return 0;
 4790 
 4791   case M_TRIM_THRESHOLD:
 4792     av->trim_threshold = value;
 4793     return 1;
 4794 
 4795   case M_TOP_PAD:
 4796     av->top_pad = value;
 4797     return 1;
 4798 
 4799   case M_MMAP_THRESHOLD:
 4800     av->mmap_threshold = value;
 4801     return 1;
 4802 
 4803   case M_MMAP_MAX:
 4804 #if !HAVE_MMAP
 4805     if (value != 0)
 4806       return 0;
 4807 #endif
 4808     av->n_mmaps_max = value;
 4809     return 1;
 4810 
 4811   default:
 4812     return 0;
 4813   }
 4814 }
 4815 
 4816 
 4817 /* 
 4818   -------------------- Alternative MORECORE functions --------------------
 4819 */
 4820 
 4821 
 4822 /*
 4823   General Requirements for MORECORE.
 4824 
 4825   The MORECORE function must have the following properties:
 4826 
 4827   If MORECORE_CONTIGUOUS is false:
 4828 
 4829     * MORECORE must allocate in multiples of pagesize. It will
 4830       only be called with arguments that are multiples of pagesize.
 4831 
 4832     * MORECORE(0) must return an address that is at least 
 4833       MALLOC_ALIGNMENT aligned. (Page-aligning always suffices.)
 4834 
 4835   else (i.e. If MORECORE_CONTIGUOUS is true):
 4836 
 4837     * Consecutive calls to MORECORE with positive arguments
 4838       return increasing addresses, indicating that space has been
 4839       contiguously extended. 
 4840 
 4841     * MORECORE need not allocate in multiples of pagesize.
 4842       Calls to MORECORE need not have args of multiples of pagesize.
 4843 
 4844     * MORECORE need not page-align.
 4845 
 4846   In either case:
 4847 
 4848     * MORECORE may allocate more memory than requested. (Or even less,
 4849       but this will generally result in a malloc failure.)
 4850 
 4851     * MORECORE must not allocate memory when given argument zero, but
 4852       instead return one past the end address of memory from previous
 4853       nonzero call. This malloc does NOT call MORECORE(0)
 4854       until at least one call with positive arguments is made, so
 4855       the initial value returned is not important.
 4856 
 4857     * Even though consecutive calls to MORECORE need not return contiguous
 4858       addresses, it must be OK for malloc'ed chunks to span multiple
 4859       regions in those cases where they do happen to be contiguous.
 4860 
 4861     * MORECORE need not handle negative arguments -- it may instead
 4862       just return MORECORE_FAILURE when given negative arguments.
 4863       Negative arguments are always multiples of pagesize. MORECORE
 4864       must not misinterpret negative args as large positive unsigned
 4865       args. You can suppress all such calls from even occurring by defining
 4866       MORECORE_CANNOT_TRIM,
 4867 
 4868   There is some variation across systems about the type of the
 4869   argument to sbrk/MORECORE. If size_t is unsigned, then it cannot
 4870   actually be size_t, because sbrk supports negative args, so it is
 4871   normally the signed type of the same width as size_t (sometimes
 4872   declared as "intptr_t", and sometimes "ptrdiff_t").  It doesn't much
 4873   matter though. Internally, we use "long" as arguments, which should
 4874   work across all reasonable possibilities.
 4875 
 4876   Additionally, if MORECORE ever returns failure for a positive
 4877   request, and HAVE_MMAP is true, then mmap is used as a noncontiguous
 4878   system allocator. This is a useful backup strategy for systems with
 4879   holes in address spaces -- in this case sbrk cannot contiguously
 4880   expand the heap, but mmap may be able to map noncontiguous space.
 4881 
 4882   If you'd like mmap to ALWAYS be used, you can define MORECORE to be
 4883   a function that always returns MORECORE_FAILURE.
 4884 
 4885   Malloc only has limited ability to detect failures of MORECORE
 4886   to supply contiguous space when it says it can. In particular,
 4887   multithreaded programs that do not use locks may result in
 4888   rece conditions across calls to MORECORE that result in gaps
 4889   that cannot be detected as such, and subsequent corruption.
 4890 
 4891   If you are using this malloc with something other than sbrk (or its
 4892   emulation) to supply memory regions, you probably want to set
 4893   MORECORE_CONTIGUOUS as false.  As an example, here is a custom
 4894   allocator kindly contributed for pre-OSX macOS.  It uses virtually
 4895   but not necessarily physically contiguous non-paged memory (locked
 4896   in, present and won't get swapped out).  You can use it by
 4897   uncommenting this section, adding some #includes, and setting up the
 4898   appropriate defines above:
 4899 
 4900       #define MORECORE osMoreCore
 4901       #define MORECORE_CONTIGUOUS 0
 4902 
 4903   There is also a shutdown routine that should somehow be called for
 4904   cleanup upon program exit.
 4905 
 4906   #define MAX_POOL_ENTRIES 100
 4907   #define MINIMUM_MORECORE_SIZE  (64 * 1024)
 4908   static int next_os_pool;
 4909   void *our_os_pools[MAX_POOL_ENTRIES];
 4910 
 4911   void *osMoreCore(int size)
 4912   {
 4913     void *ptr = 0;
 4914     static void *sbrk_top = 0;
 4915 
 4916     if (size > 0)
 4917     {
 4918       if (size < MINIMUM_MORECORE_SIZE)
 4919          size = MINIMUM_MORECORE_SIZE;
 4920       if (CurrentExecutionLevel() == kTaskLevel)
 4921          ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
 4922       if (ptr == 0)
 4923       {
 4924         return (void *) MORECORE_FAILURE;
 4925       }
 4926       // save ptrs so they can be freed during cleanup
 4927       our_os_pools[next_os_pool] = ptr;
 4928       next_os_pool++;
 4929       ptr = (void *) ((((CHUNK_SIZE_T) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
 4930       sbrk_top = (char *) ptr + size;
 4931       return ptr;
 4932     }
 4933     else if (size < 0)
 4934     {
 4935       // we don't currently support shrink behavior
 4936       return (void *) MORECORE_FAILURE;
 4937     }
 4938     else
 4939     {
 4940       return sbrk_top;
 4941     }
 4942   }
 4943 
 4944   // cleanup any allocated memory pools
 4945   // called as last thing before shutting down driver
 4946 
 4947   void osCleanupMem(void)
 4948   {
 4949     void **ptr;
 4950 
 4951     for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
 4952       if (*ptr)
 4953       {
 4954          PoolDeallocate(*ptr);
 4955          *ptr = 0;
 4956       }
 4957   }
 4958 
 4959 */
 4960 
 4961 
 4962 /* 
 4963   -------------------------------------------------------------- 
 4964 
 4965   Emulation of sbrk for win32. 
 4966   Donated by J. Walter <Walter@GeNeSys-e.de>.
 4967   For additional information about this code, and malloc on Win32, see 
 4968      http://www.genesys-e.de/jwalter/
 4969 */
 4970 
 4971 
 4972 #ifdef WIN32
 4973 
 4974 #ifdef _DEBUG
 4975 /* #define TRACE */
 4976 #endif
 4977 
 4978 /* Support for USE_MALLOC_LOCK */
 4979 #ifdef USE_MALLOC_LOCK
 4980 
 4981 /* Wait for spin lock */
 4982 static int slwait (int *sl) {
 4983     while (InterlockedCompareExchange ((void **) sl, (void *) 1, (void *) 0) != 0) 
 4984         Sleep (0);
 4985     return 0;
 4986 }
 4987 
 4988 /* Release spin lock */
 4989 static int slrelease (int *sl) {
 4990     InterlockedExchange (sl, 0);
 4991     return 0;
 4992 }
 4993 
 4994 #ifdef NEEDED
 4995 /* Spin lock for emulation code */
 4996 static int g_sl;
 4997 #endif
 4998 
 4999 #endif /* USE_MALLOC_LOCK */
 5000 
 5001 /* getpagesize for windows */
 5002 static long getpagesize (void) {
 5003     static long g_pagesize = 0;
 5004     if (! g_pagesize) {
 5005         SYSTEM_INFO system_info;
 5006         GetSystemInfo (&system_info);
 5007         g_pagesize = system_info.dwPageSize;
 5008     }
 5009     return g_pagesize;
 5010 }
 5011 static long getregionsize (void) {
 5012     static long g_regionsize = 0;
 5013     if (! g_regionsize) {
 5014         SYSTEM_INFO system_info;
 5015         GetSystemInfo (&system_info);
 5016         g_regionsize = system_info.dwAllocationGranularity;
 5017     }
 5018     return g_regionsize;
 5019 }
 5020 
 5021 /* A region list entry */
 5022 typedef struct _region_list_entry {
 5023     void *top_allocated;
 5024     void *top_committed;
 5025     void *top_reserved;
 5026     long reserve_size;
 5027     struct _region_list_entry *previous;
 5028 } region_list_entry;
 5029 
 5030 /* Allocate and link a region entry in the region list */
 5031 static int region_list_append (region_list_entry **last, void *base_reserved, long reserve_size) {
 5032     region_list_entry *next = HeapAlloc (GetProcessHeap (), 0, sizeof (region_list_entry));
 5033     if (! next)
 5034         return FALSE;
 5035     next->top_allocated = (char *) base_reserved;
 5036     next->top_committed = (char *) base_reserved;
 5037     next->top_reserved = (char *) base_reserved + reserve_size;
 5038     next->reserve_size = reserve_size;
 5039     next->previous = *last;
 5040     *last = next;
 5041     return TRUE;
 5042 }
 5043 /* Free and unlink the last region entry from the region list */
 5044 static int region_list_remove (region_list_entry **last) {
 5045     region_list_entry *previous = (*last)->previous;
 5046     if (! HeapFree (GetProcessHeap (), sizeof (region_list_entry), *last))
 5047         return FALSE;
 5048     *last = previous;
 5049     return TRUE;
 5050 }
 5051 
 5052 #define CEIL(size,to)   (((size)+(to)-1)&~((to)-1))
 5053 #define FLOOR(size,to)  ((size)&~((to)-1))
 5054 
 5055 #define SBRK_SCALE  0
 5056 /* #define SBRK_SCALE  1 */
 5057 /* #define SBRK_SCALE  2 */
 5058 /* #define SBRK_SCALE  4  */
 5059 
 5060 /* sbrk for windows */
 5061 static void *sbrk (long size) {
 5062     static long g_pagesize, g_my_pagesize;
 5063     static long g_regionsize, g_my_regionsize;
 5064     static region_list_entry *g_last;
 5065     void *result = (void *) MORECORE_FAILURE;
 5066 #ifdef TRACE
 5067     printf ("sbrk %d\n", size);
 5068 #endif
 5069 #if defined (USE_MALLOC_LOCK) && defined (NEEDED)
 5070     /* Wait for spin lock */
 5071     slwait (&g_sl);
 5072 #endif
 5073     /* First time initialization */
 5074     if (! g_pagesize) {
 5075         g_pagesize = getpagesize ();
 5076         g_my_pagesize = g_pagesize << SBRK_SCALE;
 5077     }
 5078     if (! g_regionsize) {
 5079         g_regionsize = getregionsize ();
 5080         g_my_regionsize = g_regionsize << SBRK_SCALE;
 5081     }
 5082     if (! g_last) {
 5083         if (! region_list_append (&g_last, 0, 0)) 
 5084            goto sbrk_exit;
 5085     }
 5086     /* Assert invariants */
 5087     assert (g_last);
 5088     assert ((char *) g_last->top_reserved - g_last->reserve_size <= (char *) g_last->top_allocated &&
 5089             g_last->top_allocated <= g_last->top_committed);
 5090     assert ((char *) g_last->top_reserved - g_last->reserve_size <= (char *) g_last->top_committed &&
 5091             g_last->top_committed <= g_last->top_reserved &&
 5092             (unsigned) g_last->top_committed % g_pagesize == 0);
 5093     assert ((unsigned) g_last->top_reserved % g_regionsize == 0);
 5094     assert ((unsigned) g_last->reserve_size % g_regionsize == 0);
 5095     /* Allocation requested? */
 5096     if (size >= 0) {
 5097         /* Allocation size is the requested size */
 5098         long allocate_size = size;
 5099         /* Compute the size to commit */
 5100         long to_commit = (char *) g_last->top_allocated + allocate_size - (char *) g_last->top_committed;
 5101         /* Do we reach the commit limit? */
 5102         if (to_commit > 0) {
 5103             /* Round size to commit */
 5104             long commit_size = CEIL (to_commit, g_my_pagesize);
 5105             /* Compute the size to reserve */
 5106             long to_reserve = (char *) g_last->top_committed + commit_size - (char *) g_last->top_reserved;
 5107             /* Do we reach the reserve limit? */
 5108             if (to_reserve > 0) {
 5109                 /* Compute the remaining size to commit in the current region */
 5110                 long remaining_commit_size = (char *) g_last->top_reserved - (char *) g_last->top_committed;
 5111                 if (remaining_commit_size > 0) {
 5112                     /* Assert preconditions */
 5113                     assert ((unsigned) g_last->top_committed % g_pagesize == 0);
 5114                     assert (0 < remaining_commit_size && remaining_commit_size % g_pagesize == 0); {
 5115                         /* Commit this */
 5116                         void *base_committed = VirtualAlloc (g_last->top_committed, remaining_commit_size,
 5117                                                              MEM_COMMIT, PAGE_READWRITE);
 5118                         /* Check returned pointer for consistency */
 5119                         if (base_committed != g_last->top_committed)
 5120                             goto sbrk_exit;
 5121                         /* Assert postconditions */
 5122                         assert ((unsigned) base_committed % g_pagesize == 0);
 5123 #ifdef TRACE
 5124                         printf ("Commit %p %d\n", base_committed, remaining_commit_size);
 5125 #endif
 5126                         /* Adjust the regions commit top */
 5127                         g_last->top_committed = (char *) base_committed + remaining_commit_size;
 5128                     }
 5129                 } {
 5130                     /* Now we are going to search and reserve. */
 5131                     int contiguous = -1;
 5132                     int found = FALSE;
 5133                     MEMORY_BASIC_INFORMATION memory_info;
 5134                     void *base_reserved;
 5135                     long reserve_size;
 5136                     do {
 5137                         /* Assume contiguous memory */
 5138                         contiguous = TRUE;
 5139                         /* Round size to reserve */
 5140                         reserve_size = CEIL (to_reserve, g_my_regionsize);
 5141                         /* Start with the current region's top */
 5142                         memory_info.BaseAddress = g_last->top_reserved;
 5143                         /* Assert preconditions */
 5144                         assert ((unsigned) memory_info.BaseAddress % g_pagesize == 0);
 5145                         assert (0 < reserve_size && reserve_size % g_regionsize == 0);
 5146                         while (VirtualQuery (memory_info.BaseAddress, &memory_info, sizeof (memory_info))) {
 5147                             /* Assert postconditions */
 5148                             assert ((unsigned) memory_info.BaseAddress % g_pagesize == 0);
 5149 #ifdef TRACE
 5150                             printf ("Query %p %d %s\n", memory_info.BaseAddress, memory_info.RegionSize, 
 5151                                     memory_info.State == MEM_FREE ? "FREE": 
 5152                                     (memory_info.State == MEM_RESERVE ? "RESERVED":
 5153                                      (memory_info.State == MEM_COMMIT ? "COMMITTED": "?")));
 5154 #endif
 5155                             /* Region is free, well aligned and big enough: we are done */
 5156                             if (memory_info.State == MEM_FREE &&
 5157                                 (unsigned) memory_info.BaseAddress % g_regionsize == 0 &&
 5158                                 memory_info.RegionSize >= (unsigned) reserve_size) {
 5159                                 found = TRUE;
 5160                                 break;
 5161                             }
 5162                             /* From now on we can't get contiguous memory! */
 5163                             contiguous = FALSE;
 5164                             /* Recompute size to reserve */
 5165                             reserve_size = CEIL (allocate_size, g_my_regionsize);
 5166                             memory_info.BaseAddress = (char *) memory_info.BaseAddress + memory_info.RegionSize;
 5167                             /* Assert preconditions */
 5168                             assert ((unsigned) memory_info.BaseAddress % g_pagesize == 0);
 5169                             assert (0 < reserve_size && reserve_size % g_regionsize == 0);
 5170                         }
 5171                         /* Search failed? */
 5172                         if (! found) 
 5173                             goto sbrk_exit;
 5174                         /* Assert preconditions */
 5175                         assert ((unsigned) memory_info.BaseAddress % g_regionsize == 0);
 5176                         assert (0 < reserve_size && reserve_size % g_regionsize == 0);
 5177                         /* Try to reserve this */
 5178                         base_reserved = VirtualAlloc (memory_info.BaseAddress, reserve_size, 
 5179                                                       MEM_RESERVE, PAGE_NOACCESS);
 5180                         if (! base_reserved) {
 5181                             int rc = GetLastError ();
 5182                             if (rc != ERROR_INVALID_ADDRESS) 
 5183                                 goto sbrk_exit;
 5184                         }
 5185                         /* A null pointer signals (hopefully) a race condition with another thread. */
 5186                         /* In this case, we try again. */
 5187                     } while (! base_reserved);
 5188                     /* Check returned pointer for consistency */
 5189                     if (memory_info.BaseAddress && base_reserved != memory_info.BaseAddress)
 5190                         goto sbrk_exit;
 5191                     /* Assert postconditions */
 5192                     assert ((unsigned) base_reserved % g_regionsize == 0);
 5193 #ifdef TRACE
 5194                     printf ("Reserve %p %d\n", base_reserved, reserve_size);
 5195 #endif
 5196                     /* Did we get contiguous memory? */
 5197                     if (contiguous) {
 5198                         long start_size = (char *) g_last->top_committed - (char *) g_last->top_allocated;
 5199                         /* Adjust allocation size */
 5200                         allocate_size -= start_size;
 5201                         /* Adjust the regions allocation top */
 5202                         g_last->top_allocated = g_last->top_committed;
 5203                         /* Recompute the size to commit */
 5204                         to_commit = (char *) g_last->top_allocated + allocate_size - (char *) g_last->top_committed;
 5205                         /* Round size to commit */
 5206                         commit_size = CEIL (to_commit, g_my_pagesize);
 5207                     } 
 5208                     /* Append the new region to the list */
 5209                     if (! region_list_append (&g_last, base_reserved, reserve_size))
 5210                         goto sbrk_exit;
 5211                     /* Didn't we get contiguous memory? */
 5212                     if (! contiguous) {
 5213                         /* Recompute the size to commit */
 5214                         to_commit = (char *) g_last->top_allocated + allocate_size - (char *) g_last->top_committed;
 5215                         /* Round size to commit */
 5216                         commit_size = CEIL (to_commit, g_my_pagesize);
 5217                     }
 5218                 }
 5219             } 
 5220             /* Assert preconditions */
 5221             assert ((unsigned) g_last->top_committed % g_pagesize == 0);
 5222             assert (0 < commit_size && commit_size % g_pagesize == 0); {
 5223                 /* Commit this */
 5224                 void *base_committed = VirtualAlloc (g_last->top_committed, commit_size, 
 5225                                                      MEM_COMMIT, PAGE_READWRITE);
 5226                 /* Check returned pointer for consistency */
 5227                 if (base_committed != g_last->top_committed)
 5228                     goto sbrk_exit;
 5229                 /* Assert postconditions */
 5230                 assert ((unsigned) base_committed % g_pagesize == 0);
 5231 #ifdef TRACE
 5232                 printf ("Commit %p %d\n", base_committed, commit_size);
 5233 #endif
 5234                 /* Adjust the regions commit top */
 5235                 g_last->top_committed = (char *) base_committed + commit_size;
 5236             }
 5237         } 
 5238         /* Adjust the regions allocation top */
 5239         g_last->top_allocated = (char *) g_last->top_allocated + allocate_size;
 5240         result = (char *) g_last->top_allocated - size;
 5241     /* Deallocation requested? */
 5242     } else if (size < 0) {
 5243         long deallocate_size = - size;
 5244         /* As long as we have a region to release */
 5245         while ((char *) g_last->top_allocated - deallocate_size < (char *) g_last->top_reserved - g_last->reserve_size) {
 5246             /* Get the size to release */
 5247             long release_size = g_last->reserve_size;
 5248             /* Get the base address */
 5249             void *base_reserved = (char *) g_last->top_reserved - release_size;
 5250             /* Assert preconditions */
 5251             assert ((unsigned) base_reserved % g_regionsize == 0); 
 5252             assert (0 < release_size && release_size % g_regionsize == 0); {
 5253                 /* Release this */
 5254                 int rc = VirtualFree (base_reserved, 0, 
 5255                                       MEM_RELEASE);
 5256                 /* Check returned code for consistency */
 5257                 if (! rc)
 5258                     goto sbrk_exit;
 5259 #ifdef TRACE
 5260                 printf ("Release %p %d\n", base_reserved, release_size);
 5261 #endif
 5262             }
 5263             /* Adjust deallocation size */
 5264             deallocate_size -= (char *) g_last->top_allocated - (char *) base_reserved;
 5265             /* Remove the old region from the list */
 5266             if (! region_list_remove (&g_last))
 5267                 goto sbrk_exit;
 5268         } {
 5269             /* Compute the size to decommit */
 5270             long to_decommit = (char *) g_last->top_committed - ((char *) g_last->top_allocated - deallocate_size);
 5271             if (to_decommit >= g_my_pagesize) {
 5272                 /* Compute the size to decommit */
 5273                 long decommit_size = FLOOR (to_decommit, g_my_pagesize);
 5274                 /*  Compute the base address */
 5275                 void *base_committed = (char *) g_last->top_committed - decommit_size;
 5276                 /* Assert preconditions */
 5277                 assert ((unsigned) base_committed % g_pagesize == 0);
 5278                 assert (0 < decommit_size && decommit_size % g_pagesize == 0); {
 5279                     /* Decommit this */
 5280                     int rc = VirtualFree ((char *) base_committed, decommit_size, 
 5281                                           MEM_DECOMMIT);
 5282                     /* Check returned code for consistency */
 5283                     if (! rc)
 5284                         goto sbrk_exit;
 5285 #ifdef TRACE
 5286                     printf ("Decommit %p %d\n", base_committed, decommit_size);
 5287 #endif
 5288                 }
 5289                 /* Adjust deallocation size and regions commit and allocate top */
 5290                 deallocate_size -= (char *) g_last->top_allocated - (char *) base_committed;
 5291                 g_last->top_committed = base_committed;
 5292                 g_last->top_allocated = base_committed;
 5293             }
 5294         }
 5295         /* Adjust regions allocate top */
 5296         g_last->top_allocated = (char *) g_last->top_allocated - deallocate_size;
 5297         /* Check for underflow */
 5298         if ((char *) g_last->top_reserved - g_last->reserve_size > (char *) g_last->top_allocated ||
 5299             g_last->top_allocated > g_last->top_committed) {
 5300             /* Adjust regions allocate top */
 5301             g_last->top_allocated = (char *) g_last->top_reserved - g_last->reserve_size;
 5302             goto sbrk_exit;
 5303         }
 5304         result = g_last->top_allocated;
 5305     }
 5306     /* Assert invariants */
 5307     assert (g_last);
 5308     assert ((char *) g_last->top_reserved - g_last->reserve_size <= (char *) g_last->top_allocated &&
 5309             g_last->top_allocated <= g_last->top_committed);
 5310     assert ((char *) g_last->top_reserved - g_last->reserve_size <= (char *) g_last->top_committed &&
 5311             g_last->top_committed <= g_last->top_reserved &&
 5312             (unsigned) g_last->top_committed % g_pagesize == 0);
 5313     assert ((unsigned) g_last->top_reserved % g_regionsize == 0);
 5314     assert ((unsigned) g_last->reserve_size % g_regionsize == 0);
 5315 
 5316 sbrk_exit:
 5317 #if defined (USE_MALLOC_LOCK) && defined (NEEDED)
 5318     /* Release spin lock */
 5319     slrelease (&g_sl);
 5320 #endif
 5321     return result;
 5322 }
 5323 
 5324 /* mmap for windows */
 5325 static void *mmap (void *ptr, long size, long prot, long type, long handle, long arg) {
 5326     static long g_pagesize;
 5327     static long g_regionsize;
 5328 #ifdef TRACE
 5329     printf ("mmap %d\n", size);
 5330 #endif
 5331 #if defined (USE_MALLOC_LOCK) && defined (NEEDED)
 5332     /* Wait for spin lock */
 5333     slwait (&g_sl);
 5334 #endif
 5335     /* First time initialization */
 5336     if (! g_pagesize) 
 5337         g_pagesize = getpagesize ();
 5338     if (! g_regionsize) 
 5339         g_regionsize = getregionsize ();
 5340     /* Assert preconditions */
 5341     assert ((unsigned) ptr % g_regionsize == 0);
 5342     assert (size % g_pagesize == 0);
 5343     /* Allocate this */
 5344     ptr = VirtualAlloc (ptr, size,
 5345                         MEM_RESERVE | MEM_COMMIT | MEM_TOP_DOWN, PAGE_READWRITE);
 5346     if (! ptr) {
 5347         ptr = (void *) MORECORE_FAILURE;
 5348         goto mmap_exit;
 5349     }
 5350     /* Assert postconditions */
 5351     assert ((unsigned) ptr % g_regionsize == 0);
 5352 #ifdef TRACE
 5353     printf ("Commit %p %d\n", ptr, size);
 5354 #endif
 5355 mmap_exit:
 5356 #if defined (USE_MALLOC_LOCK) && defined (NEEDED)
 5357     /* Release spin lock */
 5358     slrelease (&g_sl);
 5359 #endif
 5360     return ptr;
 5361 }
 5362 
 5363 /* munmap for windows */
 5364 static long munmap (void *ptr, long size) {
 5365     static long g_pagesize;
 5366     static long g_regionsize;
 5367     int rc = MUNMAP_FAILURE;
 5368 #ifdef TRACE
 5369     printf ("munmap %p %d\n", ptr, size);
 5370 #endif
 5371 #if defined (USE_MALLOC_LOCK) && defined (NEEDED)
 5372     /* Wait for spin lock */
 5373     slwait (&g_sl);
 5374 #endif
 5375     /* First time initialization */
 5376     if (! g_pagesize) 
 5377         g_pagesize = getpagesize ();
 5378     if (! g_regionsize) 
 5379         g_regionsize = getregionsize ();
 5380     /* Assert preconditions */
 5381     assert ((unsigned) ptr % g_regionsize == 0);
 5382     assert (size % g_pagesize == 0);
 5383     /* Free this */
 5384     if (! VirtualFree (ptr, 0, 
 5385                        MEM_RELEASE))
 5386         goto munmap_exit;
 5387     rc = 0;
 5388 #ifdef TRACE
 5389     printf ("Release %p %d\n", ptr, size);
 5390 #endif
 5391 munmap_exit:
 5392 #if defined (USE_MALLOC_LOCK) && defined (NEEDED)
 5393     /* Release spin lock */
 5394     slrelease (&g_sl);
 5395 #endif
 5396     return rc;
 5397 }
 5398 
 5399 static void vminfo (CHUNK_SIZE_T  *free, CHUNK_SIZE_T  *reserved, CHUNK_SIZE_T  *committed) {
 5400     MEMORY_BASIC_INFORMATION memory_info;
 5401     memory_info.BaseAddress = 0;
 5402     *free = *reserved = *committed = 0;
 5403     while (VirtualQuery (memory_info.BaseAddress, &memory_info, sizeof (memory_info))) {
 5404         switch (memory_info.State) {
 5405         case MEM_FREE:
 5406             *free += memory_info.RegionSize;
 5407             break;
 5408         case MEM_RESERVE:
 5409             *reserved += memory_info.RegionSize;
 5410             break;
 5411         case MEM_COMMIT:
 5412             *committed += memory_info.RegionSize;
 5413             break;
 5414         }
 5415         memory_info.BaseAddress = (char *) memory_info.BaseAddress + memory_info.RegionSize;
 5416     }
 5417 }
 5418 
 5419 static int cpuinfo (int whole, CHUNK_SIZE_T  *kernel, CHUNK_SIZE_T  *user) {
 5420     if (whole) {
 5421         __int64 creation64, exit64, kernel64, user64;
 5422         int rc = GetProcessTimes (GetCurrentProcess (), 
 5423                                   (FILETIME *) &creation64,  
 5424                                   (FILETIME *) &exit64, 
 5425                                   (FILETIME *) &kernel64, 
 5426                                   (FILETIME *) &user64);
 5427         if (! rc) {
 5428             *kernel = 0;
 5429             *user = 0;
 5430             return FALSE;
 5431         } 
 5432         *kernel = (CHUNK_SIZE_T) (kernel64 / 10000);
 5433         *user = (CHUNK_SIZE_T) (user64 / 10000);
 5434         return TRUE;
 5435     } else {
 5436         __int64 creation64, exit64, kernel64, user64;
 5437         int rc = GetThreadTimes (GetCurrentThread (), 
 5438                                  (FILETIME *) &creation64,  
 5439                                  (FILETIME *) &exit64, 
 5440                                  (FILETIME *) &kernel64, 
 5441                                  (FILETIME *) &user64);
 5442         if (! rc) {
 5443             *kernel = 0;
 5444             *user = 0;
 5445             return FALSE;
 5446         } 
 5447         *kernel = (CHUNK_SIZE_T) (kernel64 / 10000);
 5448         *user = (CHUNK_SIZE_T) (user64 / 10000);
 5449         return TRUE;
 5450     }
 5451 }
 5452 
 5453 #endif /* WIN32 */
 5454 
 5455 /* ------------------------------------------------------------
 5456 History:
 5457     V2.7.2 Sat Aug 17 09:07:30 2002  Doug Lea  (dl at gee)
 5458       * Fix malloc_state bitmap array misdeclaration
 5459 
 5460     V2.7.1 Thu Jul 25 10:58:03 2002  Doug Lea  (dl at gee)
 5461       * Allow tuning of FIRST_SORTED_BIN_SIZE
 5462       * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
 5463       * Better detection and support for non-contiguousness of MORECORE. 
 5464         Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
 5465       * Bypass most of malloc if no frees. Thanks To Emery Berger.
 5466       * Fix freeing of old top non-contiguous chunk im sysmalloc.
 5467       * Raised default trim and map thresholds to 256K.
 5468       * Fix mmap-related #defines. Thanks to Lubos Lunak.
 5469       * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
 5470       * Branch-free bin calculation
 5471       * Default trim and mmap thresholds now 256K.
 5472 
 5473     V2.7.0 Sun Mar 11 14:14:06 2001  Doug Lea  (dl at gee)
 5474       * Introduce independent_comalloc and independent_calloc.
 5475         Thanks to Michael Pachos for motivation and help.
 5476       * Make optional .h file available
 5477       * Allow > 2GB requests on 32bit systems.
 5478       * new WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>.
 5479         Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
 5480         and Anonymous.
 5481       * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for 
 5482         helping test this.)
 5483       * memalign: check alignment arg
 5484       * realloc: don't try to shift chunks backwards, since this
 5485         leads to  more fragmentation in some programs and doesn't
 5486         seem to help in any others.
 5487       * Collect all cases in malloc requiring system memory into sYSMALLOc
 5488       * Use mmap as backup to sbrk
 5489       * Place all internal state in malloc_state
 5490       * Introduce fastbins (although similar to 2.5.1)
 5491       * Many minor tunings and cosmetic improvements
 5492       * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK 
 5493       * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
 5494         Thanks to Tony E. Bennett <tbennett@nvidia.com> and others.
 5495       * Include errno.h to support default failure action.
 5496 
 5497     V2.6.6 Sun Dec  5 07:42:19 1999  Doug Lea  (dl at gee)
 5498       * return null for negative arguments
 5499       * Added Several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
 5500          * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
 5501           (e.g. WIN32 platforms)
 5502          * Cleanup header file inclusion for WIN32 platforms
 5503          * Cleanup code to avoid Microsoft Visual C++ compiler complaints
 5504          * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
 5505            memory allocation routines
 5506          * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
 5507          * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
 5508            usage of 'assert' in non-WIN32 code
 5509          * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
 5510            avoid infinite loop
 5511       * Always call 'fREe()' rather than 'free()'
 5512 
 5513     V2.6.5 Wed Jun 17 15:57:31 1998  Doug Lea  (dl at gee)
 5514       * Fixed ordering problem with boundary-stamping
 5515 
 5516     V2.6.3 Sun May 19 08:17:58 1996  Doug Lea  (dl at gee)
 5517       * Added pvalloc, as recommended by H.J. Liu
 5518       * Added 64bit pointer support mainly from Wolfram Gloger
 5519       * Added anonymously donated WIN32 sbrk emulation
 5520       * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
 5521       * malloc_extend_top: fix mask error that caused wastage after
 5522         foreign sbrks
 5523       * Add linux mremap support code from HJ Liu
 5524 
 5525     V2.6.2 Tue Dec  5 06:52:55 1995  Doug Lea  (dl at gee)
 5526       * Integrated most documentation with the code.
 5527       * Add support for mmap, with help from
 5528         Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
 5529       * Use last_remainder in more cases.
 5530       * Pack bins using idea from  colin@nyx10.cs.du.edu
 5531       * Use ordered bins instead of best-fit threshhold
 5532       * Eliminate block-local decls to simplify tracing and debugging.
 5533       * Support another case of realloc via move into top
 5534       * Fix error occuring when initial sbrk_base not word-aligned.
 5535       * Rely on page size for units instead of SBRK_UNIT to
 5536         avoid surprises about sbrk alignment conventions.
 5537       * Add mallinfo, mallopt. Thanks to Raymond Nijssen
 5538         (raymond@es.ele.tue.nl) for the suggestion.
 5539       * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
 5540       * More precautions for cases where other routines call sbrk,
 5541         courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
 5542       * Added macros etc., allowing use in linux libc from
 5543         H.J. Lu (hjl@gnu.ai.mit.edu)
 5544       * Inverted this history list
 5545 
 5546     V2.6.1 Sat Dec  2 14:10:57 1995  Doug Lea  (dl at gee)
 5547       * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
 5548       * Removed all preallocation code since under current scheme
 5549         the work required to undo bad preallocations exceeds
 5550         the work saved in good cases for most test programs.
 5551       * No longer use return list or unconsolidated bins since
 5552         no scheme using them consistently outperforms those that don't
 5553         given above changes.
 5554       * Use best fit for very large chunks to prevent some worst-cases.
 5555       * Added some support for debugging
 5556 
 5557     V2.6.0 Sat Nov  4 07:05:23 1995  Doug Lea  (dl at gee)
 5558       * Removed footers when chunks are in use. Thanks to
 5559         Paul Wilson (wilson@cs.texas.edu) for the suggestion.
 5560 
 5561     V2.5.4 Wed Nov  1 07:54:51 1995  Doug Lea  (dl at gee)
 5562       * Added malloc_trim, with help from Wolfram Gloger
 5563         (wmglo@Dent.MED.Uni-Muenchen.DE).
 5564 
 5565     V2.5.3 Tue Apr 26 10:16:01 1994  Doug Lea  (dl at g)
 5566 
 5567     V2.5.2 Tue Apr  5 16:20:40 1994  Doug Lea  (dl at g)
 5568       * realloc: try to expand in both directions
 5569       * malloc: swap order of clean-bin strategy;
 5570       * realloc: only conditionally expand backwards
 5571       * Try not to scavenge used bins
 5572       * Use bin counts as a guide to preallocation
 5573       * Occasionally bin return list chunks in first scan
 5574       * Add a few optimizations from colin@nyx10.cs.du.edu
 5575 
 5576     V2.5.1 Sat Aug 14 15:40:43 1993  Doug Lea  (dl at g)
 5577       * faster bin computation & slightly different binning
 5578       * merged all consolidations to one part of malloc proper
 5579          (eliminating old malloc_find_space & malloc_clean_bin)
 5580       * Scan 2 returns chunks (not just 1)
 5581       * Propagate failure in realloc if malloc returns 0
 5582       * Add stuff to allow compilation on non-ANSI compilers
 5583           from kpv@research.att.com
 5584 
 5585     V2.5 Sat Aug  7 07:41:59 1993  Doug Lea  (dl at g.oswego.edu)
 5586       * removed potential for odd address access in prev_chunk
 5587       * removed dependency on getpagesize.h
 5588       * misc cosmetics and a bit more internal documentation
 5589       * anticosmetics: mangled names in macros to evade debugger strangeness
 5590       * tested on sparc, hp-700, dec-mips, rs6000
 5591           with gcc & native cc (hp, dec only) allowing
 5592           Detlefs & Zorn comparison study (in SIGPLAN Notices.)
 5593 
 5594     Trial version Fri Aug 28 13:14:29 1992  Doug Lea  (dl at g.oswego.edu)
 5595       * Based loosely on libg++-1.2X malloc. (It retains some of the overall
 5596          structure of old version,  but most details differ.)
 5597 
 5598 */