2 This is a version (aka dlmalloc) of malloc/free/realloc written by
3 Doug Lea and released to the public domain, as explained at
4 http://creativecommons.org/publicdomain/zero/1.0/ Send questions,
5 comments, complaints, performance data, etc to dl@cs.oswego.edu
7 * Version 2.8.5 Sun May 22 10:26:02 2011 Doug Lea (dl at gee)
9 Note: There may be an updated version of this malloc obtainable at
10 ftp://gee.cs.oswego.edu/pub/misc/malloc.c
11 Check before installing!
15 This library is all in one file to simplify the most common usage:
16 ftp it, compile it (-O3), and link it into another program. All of
17 the compile-time options default to reasonable values for use on
18 most platforms. You might later want to step through various
19 compile-time and dynamic tuning options.
21 For convenience, an include file for code using this malloc is at:
22 ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.5.h
23 You don't really need this .h file unless you call functions not
24 defined in your system include files. The .h file contains only the
25 excerpts from this file needed for using this malloc on ANSI C/C++
26 systems, so long as you haven't changed compile-time options about
27 naming and tuning parameters. If you do, then you can create your
28 own malloc.h that does include all settings by cutting at the point
29 indicated below. Note that you may already by default be using a C
30 library containing a malloc that is based on some version of this
31 malloc (for example in linux). You might still want to use the one
32 in this file to customize settings or to avoid overheads associated
33 with library versions.
37 Supported pointer/size_t representation: 4 or 8 bytes
38 size_t MUST be an unsigned type of the same width as
39 pointers. (If you are using an ancient system that declares
40 size_t as a signed type, or need it to be a different width
41 than pointers, you can use a previous release of this malloc
42 (e.g. 2.7.2) supporting these.)
44 Alignment: 8 bytes (default)
45 This suffices for nearly all current machines and C compilers.
46 However, you can define MALLOC_ALIGNMENT to be wider than this
47 if necessary (up to 128bytes), at the expense of using more space.
49 Minimum overhead per allocated chunk: 4 or 8 bytes (if 4byte sizes)
50 8 or 16 bytes (if 8byte sizes)
51 Each malloced chunk has a hidden word of overhead holding size
52 and status information, and additional cross-check word
53 if FOOTERS is defined.
55 Minimum allocated size: 4-byte ptrs: 16 bytes (including overhead)
56 8-byte ptrs: 32 bytes (including overhead)
58 Even a request for zero bytes (i.e., malloc(0)) returns a
59 pointer to something of the minimum allocatable size.
60 The maximum overhead wastage (i.e., number of extra bytes
61 allocated than were requested in malloc) is less than or equal
62 to the minimum size, except for requests >= mmap_threshold that
63 are serviced via mmap(), where the worst case wastage is about
64 32 bytes plus the remainder from a system page (the minimal
65 mmap unit); typically 4096 or 8192 bytes.
67 Security: static-safe; optionally more or less
68 The "security" of malloc refers to the ability of malicious
69 code to accentuate the effects of errors (for example, freeing
70 space that is not currently malloc'ed or overwriting past the
71 ends of chunks) in code that calls malloc. This malloc
72 guarantees not to modify any memory locations below the base of
73 heap, i.e., static variables, even in the presence of usage
74 errors. The routines additionally detect most improper frees
75 and reallocs. All this holds as long as the static bookkeeping
76 for malloc itself is not corrupted by some other means. This
77 is only one aspect of security -- these checks do not, and
78 cannot, detect all possible programming errors.
80 If FOOTERS is defined nonzero, then each allocated chunk
81 carries an additional check word to verify that it was malloced
82 from its space. These check words are the same within each
83 execution of a program using malloc, but differ across
84 executions, so externally crafted fake chunks cannot be
85 freed. This improves security by rejecting frees/reallocs that
86 could corrupt heap memory, in addition to the checks preventing
87 writes to statics that are always on. This may further improve
88 security at the expense of time and space overhead. (Note that
89 FOOTERS may also be worth using with MSPACES.)
91 By default detected errors cause the program to abort (calling
92 "abort()"). You can override this to instead proceed past
93 errors by defining PROCEED_ON_ERROR. In this case, a bad free
94 has no effect, and a malloc that encounters a bad address
95 caused by user overwrites will ignore the bad address by
96 dropping pointers and indices to all known memory. This may
97 be appropriate for programs that should continue if at all
98 possible in the face of programming errors, although they may
99 run out of memory because dropped memory is never reclaimed.
101 If you don't like either of these options, you can define
102 CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
103 else. And if if you are sure that your program using malloc has
104 no errors or vulnerabilities, you can define INSECURE to 1,
105 which might (or might not) provide a small performance improvement.
107 It is also possible to limit the maximum total allocatable
108 space, using malloc_set_footprint_limit. This is not
109 designed as a security feature in itself (calls to set limits
110 are not screened or privileged), but may be useful as one
111 aspect of a secure implementation.
113 Thread-safety: NOT thread-safe unless USE_LOCKS defined non-zero
114 When USE_LOCKS is defined, each public call to malloc, free,
115 etc is surrounded with a lock. By default, this uses a plain
116 pthread mutex, win32 critical section, or a spin-lock if if
117 available for the platform and not disabled by setting
118 USE_SPIN_LOCKS=0. However, if USE_RECURSIVE_LOCKS is defined,
119 recursive versions are used instead (which are not required for
120 base functionality but may be needed in layered extensions).
121 Using a global lock is not especially fast, and can be a major
122 bottleneck. It is designed only to provide minimal protection
123 in concurrent environments, and to provide a basis for
124 extensions. If you are using malloc in a concurrent program,
125 consider instead using nedmalloc
126 (http://www.nedprod.com/programs/portable/nedmalloc/) or
127 ptmalloc (See http://www.malloc.de), which are derived from
128 versions of this malloc.
130 System requirements: Any combination of MORECORE and/or MMAP/MUNMAP
131 This malloc can use unix sbrk or any emulation (invoked using
132 the CALL_MORECORE macro) and/or mmap/munmap or any emulation
133 (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system
134 memory. On most unix systems, it tends to work best if both
135 MORECORE and MMAP are enabled. On Win32, it uses emulations
136 based on VirtualAlloc. It also uses common C library functions
139 Compliance: I believe it is compliant with the Single Unix Specification
140 (See http://www.unix.org). Also SVID/XPG, ANSI C, and probably
143 * Overview of algorithms
145 This is not the fastest, most space-conserving, most portable, or
146 most tunable malloc ever written. However it is among the fastest
147 while also being among the most space-conserving, portable and
148 tunable. Consistent balance across these factors results in a good
149 general-purpose allocator for malloc-intensive programs.
151 In most ways, this malloc is a best-fit allocator. Generally, it
152 chooses the best-fitting existing chunk for a request, with ties
153 broken in approximately least-recently-used order. (This strategy
154 normally maintains low fragmentation.) However, for requests less
155 than 256bytes, it deviates from best-fit when there is not an
156 exactly fitting available chunk by preferring to use space adjacent
157 to that used for the previous small request, as well as by breaking
158 ties in approximately most-recently-used order. (These enhance
159 locality of series of small allocations.) And for very large requests
160 (>= 256Kb by default), it relies on system memory mapping
161 facilities, if supported. (This helps avoid carrying around and
162 possibly fragmenting memory used only for large chunks.)
164 All operations (except malloc_stats and mallinfo) have execution
165 times that are bounded by a constant factor of the number of bits in
166 a size_t, not counting any clearing in calloc or copying in realloc,
167 or actions surrounding MORECORE and MMAP that have times
168 proportional to the number of non-contiguous regions returned by
169 system allocation routines, which is often just 1. In real-time
170 applications, you can optionally suppress segment traversals using
171 NO_SEGMENT_TRAVERSAL, which assures bounded execution even when
172 system allocators return non-contiguous spaces, at the typical
173 expense of carrying around more memory and increased fragmentation.
175 The implementation is not very modular and seriously overuses
176 macros. Perhaps someday all C compilers will do as good a job
177 inlining modular code as can now be done by brute-force expansion,
178 but now, enough of them seem not to.
180 Some compilers issue a lot of warnings about code that is
181 dead/unreachable only on some platforms, and also about intentional
182 uses of negation on unsigned types. All known cases of each can be
185 For a longer but out of date high-level description, see
186 http://gee.cs.oswego.edu/dl/html/malloc.html
189 If MSPACES is defined, then in addition to malloc, free, etc.,
190 this file also defines mspace_malloc, mspace_free, etc. These
191 are versions of malloc routines that take an "mspace" argument
192 obtained using create_mspace, to control all internal bookkeeping.
193 If ONLY_MSPACES is defined, only these versions are compiled.
194 So if you would like to use this allocator for only some allocations,
195 and your system malloc for others, you can compile with
196 ONLY_MSPACES and then do something like...
197 static mspace mymspace = create_mspace(0,0); // for example
198 #define mymalloc(bytes) mspace_malloc(mymspace, bytes)
200 (Note: If you only need one instance of an mspace, you can instead
201 use "USE_DL_PREFIX" to relabel the global malloc.)
203 You can similarly create thread-local allocators by storing
204 mspaces as thread-locals. For example:
205 static __thread mspace tlms = 0;
206 void* tlmalloc(size_t bytes) {
207 if (tlms == 0) tlms = create_mspace(0, 0);
208 return mspace_malloc(tlms, bytes);
210 void tlfree(void* mem) { mspace_free(tlms, mem); }
212 Unless FOOTERS is defined, each mspace is completely independent.
213 You cannot allocate from one and free to another (although
214 conformance is only weakly checked, so usage errors are not always
215 caught). If FOOTERS is defined, then each chunk carries around a tag
216 indicating its originating mspace, and frees are directed to their
217 originating spaces. Normally, this requires use of locks.
219 ------------------------- Compile-time options ---------------------------
221 Be careful in setting #define values for numerical constants of type
222 size_t. On some systems, literal values are not automatically extended
223 to size_t precision unless they are explicitly casted. You can also
224 use the symbolic values MAX_SIZE_T, SIZE_T_ONE, etc below.
226 WIN32 default: defined if _WIN32 defined
227 Defining WIN32 sets up defaults for MS environment and compilers.
228 Otherwise defaults are for unix. Beware that there seem to be some
229 cases where this malloc might not be a pure drop-in replacement for
230 Win32 malloc: Random-looking failures from Win32 GDI API's (eg;
231 SetDIBits()) may be due to bugs in some video driver implementations
232 when pixel buffers are malloc()ed, and the region spans more than
233 one VirtualAlloc()ed region. Because dlmalloc uses a small (64Kb)
234 default granularity, pixel buffers may straddle virtual allocation
235 regions more often than when using the Microsoft allocator. You can
236 avoid this by using VirtualAlloc() and VirtualFree() for all pixel
237 buffers rather than using malloc(). If this is not possible,
238 recompile this malloc with a larger DEFAULT_GRANULARITY. Note:
239 in cases where MSC and gcc (cygwin) are known to differ on WIN32,
240 conditions use _MSC_VER to distinguish them.
242 DLMALLOC_EXPORT default: extern
243 Defines how public APIs are declared. If you want to export via a
244 Windows DLL, you might define this as
245 #define DLMALLOC_EXPORT extern __declspace(dllexport)
246 If you want a POSIX ELF shared object, you might use
247 #define DLMALLOC_EXPORT extern __attribute__((visibility("default")))
249 MALLOC_ALIGNMENT default: (size_t)8
250 Controls the minimum alignment for malloc'ed chunks. It must be a
251 power of two and at least 8, even on machines for which smaller
252 alignments would suffice. It may be defined as larger than this
253 though. Note however that code and data structures are optimized for
254 the case of 8-byte alignment.
256 MSPACES default: 0 (false)
257 If true, compile in support for independent allocation spaces.
258 This is only supported if HAVE_MMAP is true.
260 ONLY_MSPACES default: 0 (false)
261 If true, only compile in mspace versions, not regular versions.
263 USE_LOCKS default: 0 (false)
264 Causes each call to each public routine to be surrounded with
265 pthread or WIN32 mutex lock/unlock. (If set true, this can be
266 overridden on a per-mspace basis for mspace versions.) If set to a
267 non-zero value other than 1, locks are used, but their
268 implementation is left out, so lock functions must be supplied manually,
271 USE_SPIN_LOCKS default: 1 iff USE_LOCKS and spin locks available
272 If true, uses custom spin locks for locking. This is currently
273 supported only gcc >= 4.1, older gccs on x86 platforms, and recent
274 MS compilers. Otherwise, posix locks or win32 critical sections are
277 USE_RECURSIVE_LOCKS default: not defined
278 If defined nonzero, uses recursive (aka reentrant) locks, otherwise
279 uses plain mutexes. This is not required for malloc proper, but may
280 be needed for layered allocators such as nedmalloc.
283 If true, provide extra checking and dispatching by placing
284 information in the footers of allocated chunks. This adds
285 space and time overhead.
288 If true, omit checks for usage errors and heap space overwrites.
290 USE_DL_PREFIX default: NOT defined
291 Causes compiler to prefix all public routines with the string 'dl'.
292 This can be useful when you only want to use this malloc in one part
293 of a program, using your regular system malloc elsewhere.
295 MALLOC_INSPECT_ALL default: NOT defined
296 If defined, compiles malloc_inspect_all and mspace_inspect_all, that
297 perform traversal of all heap space. Unless access to these
298 functions is otherwise restricted, you probably do not want to
299 include them in secure implementations.
301 ABORT default: defined as abort()
302 Defines how to abort on failed checks. On most systems, a failed
303 check cannot die with an "assert" or even print an informative
304 message, because the underlying print routines in turn call malloc,
305 which will fail again. Generally, the best policy is to simply call
306 abort(). It's not very useful to do more than this because many
307 errors due to overwriting will show up as address faults (null, odd
308 addresses etc) rather than malloc-triggered checks, so will also
309 abort. Also, most compilers know that abort() does not return, so
310 can better optimize code conditionally calling it.
312 PROCEED_ON_ERROR default: defined as 0 (false)
313 Controls whether detected bad addresses cause them to bypassed
314 rather than aborting. If set, detected bad arguments to free and
315 realloc are ignored. And all bookkeeping information is zeroed out
316 upon a detected overwrite of freed heap space, thus losing the
317 ability to ever return it from malloc again, but enabling the
318 application to proceed. If PROCEED_ON_ERROR is defined, the
319 static variable malloc_corruption_error_count is compiled in
320 and can be examined to see if errors have occurred. This option
321 generates slower code than the default abort policy.
323 DEBUG default: NOT defined
324 The DEBUG setting is mainly intended for people trying to modify
325 this code or diagnose problems when porting to new platforms.
326 However, it may also be able to better isolate user errors than just
327 using runtime checks. The assertions in the check routines spell
328 out in more detail the assumptions and invariants underlying the
329 algorithms. The checking is fairly extensive, and will slow down
330 execution noticeably. Calling malloc_stats or mallinfo with DEBUG
331 set will attempt to check every non-mmapped allocated and free chunk
332 in the course of computing the summaries.
334 ABORT_ON_ASSERT_FAILURE default: defined as 1 (true)
335 Debugging assertion failures can be nearly impossible if your
336 version of the assert macro causes malloc to be called, which will
337 lead to a cascade of further failures, blowing the runtime stack.
338 ABORT_ON_ASSERT_FAILURE cause assertions failures to call abort(),
339 which will usually make debugging easier.
341 MALLOC_FAILURE_ACTION default: sets errno to ENOMEM, or no-op on win32
342 The action to take before "return 0" when malloc fails to be able to
343 return memory because there is none available.
345 HAVE_MORECORE default: 1 (true) unless win32 or ONLY_MSPACES
346 True if this system supports sbrk or an emulation of it.
348 MORECORE default: sbrk
349 The name of the sbrk-style system routine to call to obtain more
350 memory. See below for guidance on writing custom MORECORE
351 functions. The type of the argument to sbrk/MORECORE varies across
352 systems. It cannot be size_t, because it supports negative
353 arguments, so it is normally the signed type of the same width as
354 size_t (sometimes declared as "intptr_t"). It doesn't much matter
355 though. Internally, we only call it with arguments less than half
356 the max value of a size_t, which should work across all reasonable
357 possibilities, although sometimes generating compiler warnings.
359 MORECORE_CONTIGUOUS default: 1 (true) if HAVE_MORECORE
360 If true, take advantage of fact that consecutive calls to MORECORE
361 with positive arguments always return contiguous increasing
362 addresses. This is true of unix sbrk. It does not hurt too much to
363 set it true anyway, since malloc copes with non-contiguities.
364 Setting it false when definitely non-contiguous saves time
365 and possibly wasted space it would take to discover this though.
367 MORECORE_CANNOT_TRIM default: NOT defined
368 True if MORECORE cannot release space back to the system when given
369 negative arguments. This is generally necessary only if you are
370 using a hand-crafted MORECORE function that cannot handle negative
373 NO_SEGMENT_TRAVERSAL default: 0
374 If non-zero, suppresses traversals of memory segments
375 returned by either MORECORE or CALL_MMAP. This disables
376 merging of segments that are contiguous, and selectively
377 releasing them to the OS if unused, but bounds execution times.
379 HAVE_MMAP default: 1 (true)
380 True if this system supports mmap or an emulation of it. If so, and
381 HAVE_MORECORE is not true, MMAP is used for all system
382 allocation. If set and HAVE_MORECORE is true as well, MMAP is
383 primarily used to directly allocate very large blocks. It is also
384 used as a backup strategy in cases where MORECORE fails to provide
385 space from system. Note: A single call to MUNMAP is assumed to be
386 able to unmap memory that may have be allocated using multiple calls
387 to MMAP, so long as they are adjacent.
389 HAVE_MREMAP default: 1 on linux, else 0
390 If true realloc() uses mremap() to re-allocate large blocks and
391 extend or shrink allocation spaces.
393 MMAP_CLEARS default: 1 except on WINCE.
394 True if mmap clears memory so calloc doesn't need to. This is true
395 for standard unix mmap using /dev/zero and on WIN32 except for WINCE.
397 USE_BUILTIN_FFS default: 0 (i.e., not used)
398 Causes malloc to use the builtin ffs() function to compute indices.
399 Some compilers may recognize and intrinsify ffs to be faster than the
400 supplied C version. Also, the case of x86 using gcc is special-cased
401 to an asm instruction, so is already as fast as it can be, and so
402 this setting has no effect. Similarly for Win32 under recent MS compilers.
403 (On most x86s, the asm version is only slightly faster than the C version.)
405 malloc_getpagesize default: derive from system includes, or 4096.
406 The system page size. To the extent possible, this malloc manages
407 memory from the system in page-size units. This may be (and
408 usually is) a function rather than a constant. This is ignored
409 if WIN32, where page size is determined using getSystemInfo during
412 USE_DEV_RANDOM default: 0 (i.e., not used)
413 Causes malloc to use /dev/random to initialize secure magic seed for
414 stamping footers. Otherwise, the current time is used.
416 NO_MALLINFO default: 0
417 If defined, don't compile "mallinfo". This can be a simple way
418 of dealing with mismatches between system declarations and
421 MALLINFO_FIELD_TYPE default: size_t
422 The type of the fields in the mallinfo struct. This was originally
423 defined as "int" in SVID etc, but is more usefully defined as
424 size_t. The value is used only if HAVE_USR_INCLUDE_MALLOC_H is not set
426 NO_MALLOC_STATS default: 0
427 If defined, don't compile "malloc_stats". This avoids calls to
428 fprintf and bringing in stdio dependencies you might not want.
430 REALLOC_ZERO_BYTES_FREES default: not defined
431 This should be set if a call to realloc with zero bytes should
432 be the same as a call to free. Some people think it should. Otherwise,
433 since this malloc returns a unique pointer for malloc(0), so does
436 LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H
437 LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H, LACKS_ERRNO_H
438 LACKS_STDLIB_H LACKS_SCHED_H LACKS_TIME_H default: NOT defined unless on WIN32
439 Define these if your system does not have these header files.
440 You might need to manually insert some of the declarations they provide.
442 DEFAULT_GRANULARITY default: page size if MORECORE_CONTIGUOUS,
443 system_info.dwAllocationGranularity in WIN32,
445 Also settable using mallopt(M_GRANULARITY, x)
446 The unit for allocating and deallocating memory from the system. On
447 most systems with contiguous MORECORE, there is no reason to
448 make this more than a page. However, systems with MMAP tend to
449 either require or encourage larger granularities. You can increase
450 this value to prevent system allocation functions to be called so
451 often, especially if they are slow. The value must be at least one
452 page and must be a power of two. Setting to 0 causes initialization
453 to either page size or win32 region size. (Note: In previous
454 versions of malloc, the equivalent of this option was called
457 DEFAULT_TRIM_THRESHOLD default: 2MB
458 Also settable using mallopt(M_TRIM_THRESHOLD, x)
459 The maximum amount of unused top-most memory to keep before
460 releasing via malloc_trim in free(). Automatic trimming is mainly
461 useful in long-lived programs using contiguous MORECORE. Because
462 trimming via sbrk can be slow on some systems, and can sometimes be
463 wasteful (in cases where programs immediately afterward allocate
464 more large chunks) the value should be high enough so that your
465 overall system performance would improve by releasing this much
466 memory. As a rough guide, you might set to a value close to the
467 average size of a process (program) running on your system.
468 Releasing this much memory would allow such a process to run in
469 memory. Generally, it is worth tuning trim thresholds when a
470 program undergoes phases where several large chunks are allocated
471 and released in ways that can reuse each other's storage, perhaps
472 mixed with phases where there are no such chunks at all. The trim
473 value must be greater than page size to have any useful effect. To
474 disable trimming completely, you can set to MAX_SIZE_T. Note that the trick
475 some people use of mallocing a huge space and then freeing it at
476 program startup, in an attempt to reserve system memory, doesn't
477 have the intended effect under automatic trimming, since that memory
478 will immediately be returned to the system.
480 DEFAULT_MMAP_THRESHOLD default: 256K
481 Also settable using mallopt(M_MMAP_THRESHOLD, x)
482 The request size threshold for using MMAP to directly service a
483 request. Requests of at least this size that cannot be allocated
484 using already-existing space will be serviced via mmap. (If enough
485 normal freed space already exists it is used instead.) Using mmap
486 segregates relatively large chunks of memory so that they can be
487 individually obtained and released from the host system. A request
488 serviced through mmap is never reused by any other request (at least
489 not directly; the system may just so happen to remap successive
490 requests to the same locations). Segregating space in this way has
491 the benefits that: Mmapped space can always be individually released
492 back to the system, which helps keep the system level memory demands
493 of a long-lived program low. Also, mapped memory doesn't become
494 `locked' between other chunks, as can happen with normally allocated
495 chunks, which means that even trimming via malloc_trim would not
496 release them. However, it has the disadvantage that the space
497 cannot be reclaimed, consolidated, and then used to service later
498 requests, as happens with normal chunks. The advantages of mmap
499 nearly always outweigh disadvantages for "large" chunks, but the
500 value of "large" may vary across systems. The default is an
501 empirically derived value that works well in most systems. You can
502 disable mmap by setting to MAX_SIZE_T.
504 MAX_RELEASE_CHECK_RATE default: 4095 unless not HAVE_MMAP
505 The number of consolidated frees between checks to release
506 unused segments when freeing. When using non-contiguous segments,
507 especially with multiple mspaces, checking only for topmost space
508 doesn't always suffice to trigger trimming. To compensate for this,
509 free() will, with a period of MAX_RELEASE_CHECK_RATE (or the
510 current number of segments, if greater) try to release unused
511 segments to the OS when freeing chunks that result in
512 consolidation. The best value for this parameter is a compromise
513 between slowing down frees with relatively costly checks that
514 rarely trigger versus holding on to unused memory. To effectively
515 disable, set to MAX_SIZE_T. This may lead to a very slight speed
516 improvement at the expense of carrying around more memory.
520 #include "dlmalloc.h"
522 /* Version identifier to allow people to support multiple versions */
523 #ifndef DLMALLOC_VERSION
524 #define DLMALLOC_VERSION 20805
525 #endif /* DLMALLOC_VERSION */
527 #ifndef DLMALLOC_EXPORT
528 #define DLMALLOC_EXPORT extern
536 #define LACKS_FCNTL_H
538 #endif /* _WIN32_WCE */
541 #define WIN32_LEAN_AND_MEAN
545 #define HAVE_MORECORE 0
546 #define LACKS_UNISTD_H
547 #define LACKS_SYS_PARAM_H
548 #define LACKS_SYS_MMAN_H
549 #define LACKS_STRING_H
550 #define LACKS_STRINGS_H
551 #define LACKS_SYS_TYPES_H
552 #define LACKS_ERRNO_H
553 #define LACKS_SCHED_H
554 #ifndef MALLOC_FAILURE_ACTION
555 #define MALLOC_FAILURE_ACTION
556 #endif /* MALLOC_FAILURE_ACTION */
558 #ifdef _WIN32_WCE /* WINCE reportedly does not clear */
559 #define MMAP_CLEARS 0
561 #define MMAP_CLEARS 1
562 #endif /* _WIN32_WCE */
563 #endif /*MMAP_CLEARS */
566 #if defined(DARWIN) || defined(_DARWIN)
567 /* Mac OSX docs advise not to use sbrk; it seems better to use mmap */
568 #ifndef HAVE_MORECORE
569 #define HAVE_MORECORE 0
571 /* OSX allocators provide 16 byte alignment */
572 #ifndef MALLOC_ALIGNMENT
573 #define MALLOC_ALIGNMENT ((size_t)16U)
575 #endif /* HAVE_MORECORE */
578 #ifndef LACKS_SYS_TYPES_H
579 #include <sys/types.h> /* For size_t */
580 #endif /* LACKS_SYS_TYPES_H */
582 /* The maximum possible size_t value has all bits set */
583 #define MAX_SIZE_T (~(size_t)0)
585 #ifndef USE_LOCKS /* ensure true if spin or recursive locks set */
586 #define USE_LOCKS ((defined(USE_SPIN_LOCKS) && USE_SPIN_LOCKS != 0) || \
587 (defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0))
588 #endif /* USE_LOCKS */
590 #if USE_LOCKS /* Spin locks for gcc >= 4.1, older gcc on x86, MSC >= 1310 */
591 #if ((defined(__GNUC__) && \
592 ((__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 1)) || \
593 defined(__i386__) || defined(__x86_64__))) || \
594 (defined(_MSC_VER) && _MSC_VER>=1310))
595 #ifndef USE_SPIN_LOCKS
596 #define USE_SPIN_LOCKS 1
597 #endif /* USE_SPIN_LOCKS */
599 #error "USE_SPIN_LOCKS defined without implementation"
600 #endif /* ... locks available... */
601 #elif !defined(USE_SPIN_LOCKS)
602 #define USE_SPIN_LOCKS 0
603 #endif /* USE_LOCKS */
606 #define ONLY_MSPACES 0
607 #endif /* ONLY_MSPACES */
611 #else /* ONLY_MSPACES */
613 #endif /* ONLY_MSPACES */
615 #ifndef MALLOC_ALIGNMENT
616 #define MALLOC_ALIGNMENT ((size_t)8U)
617 #endif /* MALLOC_ALIGNMENT */
622 #define ABORT abort()
624 #ifndef ABORT_ON_ASSERT_FAILURE
625 #define ABORT_ON_ASSERT_FAILURE 1
626 #endif /* ABORT_ON_ASSERT_FAILURE */
627 #ifndef PROCEED_ON_ERROR
628 #define PROCEED_ON_ERROR 0
629 #endif /* PROCEED_ON_ERROR */
633 #endif /* INSECURE */
634 #ifndef MALLOC_INSPECT_ALL
635 #define MALLOC_INSPECT_ALL 0
636 #endif /* MALLOC_INSPECT_ALL */
639 #endif /* HAVE_MMAP */
641 #define MMAP_CLEARS 1
642 #endif /* MMAP_CLEARS */
645 #define HAVE_MREMAP 1
646 #define _GNU_SOURCE /* Turns on mremap() definition */
648 #define HAVE_MREMAP 0
650 #endif /* HAVE_MREMAP */
651 #ifndef MALLOC_FAILURE_ACTION
652 #define MALLOC_FAILURE_ACTION errno = ENOMEM;
653 #endif /* MALLOC_FAILURE_ACTION */
654 #ifndef HAVE_MORECORE
656 #define HAVE_MORECORE 0
657 #else /* ONLY_MSPACES */
658 #define HAVE_MORECORE 1
659 #endif /* ONLY_MSPACES */
660 #endif /* HAVE_MORECORE */
662 #define MORECORE_CONTIGUOUS 0
663 #else /* !HAVE_MORECORE */
664 #define MORECORE_DEFAULT sbrk
665 #ifndef MORECORE_CONTIGUOUS
666 #define MORECORE_CONTIGUOUS 1
667 #endif /* MORECORE_CONTIGUOUS */
668 #endif /* HAVE_MORECORE */
669 #ifndef DEFAULT_GRANULARITY
670 #if (MORECORE_CONTIGUOUS || defined(WIN32))
671 #define DEFAULT_GRANULARITY (0) /* 0 means to compute in init_mparams */
672 #else /* MORECORE_CONTIGUOUS */
673 #define DEFAULT_GRANULARITY ((size_t)64U * (size_t)1024U)
674 #endif /* MORECORE_CONTIGUOUS */
675 #endif /* DEFAULT_GRANULARITY */
676 #ifndef DEFAULT_TRIM_THRESHOLD
677 #ifndef MORECORE_CANNOT_TRIM
678 #define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)
679 #else /* MORECORE_CANNOT_TRIM */
680 #define DEFAULT_TRIM_THRESHOLD MAX_SIZE_T
681 #endif /* MORECORE_CANNOT_TRIM */
682 #endif /* DEFAULT_TRIM_THRESHOLD */
683 #ifndef DEFAULT_MMAP_THRESHOLD
685 #define DEFAULT_MMAP_THRESHOLD ((size_t)256U * (size_t)1024U)
686 #else /* HAVE_MMAP */
687 #define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T
688 #endif /* HAVE_MMAP */
689 #endif /* DEFAULT_MMAP_THRESHOLD */
690 #ifndef MAX_RELEASE_CHECK_RATE
692 #define MAX_RELEASE_CHECK_RATE 4095
694 #define MAX_RELEASE_CHECK_RATE MAX_SIZE_T
695 #endif /* HAVE_MMAP */
696 #endif /* MAX_RELEASE_CHECK_RATE */
697 #ifndef USE_BUILTIN_FFS
698 #define USE_BUILTIN_FFS 0
699 #endif /* USE_BUILTIN_FFS */
700 #ifndef USE_DEV_RANDOM
701 #define USE_DEV_RANDOM 0
702 #endif /* USE_DEV_RANDOM */
704 #define NO_MALLINFO 0
705 #endif /* NO_MALLINFO */
706 #ifndef MALLINFO_FIELD_TYPE
707 #define MALLINFO_FIELD_TYPE size_t
708 #endif /* MALLINFO_FIELD_TYPE */
709 #ifndef NO_MALLOC_STATS
710 #define NO_MALLOC_STATS 0
711 #endif /* NO_MALLOC_STATS */
712 #ifndef NO_SEGMENT_TRAVERSAL
713 #define NO_SEGMENT_TRAVERSAL 0
714 #endif /* NO_SEGMENT_TRAVERSAL */
717 mallopt tuning options. SVID/XPG defines four standard parameter
718 numbers for mallopt, normally defined in malloc.h. None of these
719 are used in this malloc, so setting them has no effect. But this
720 malloc does support the following options.
723 #define M_TRIM_THRESHOLD (-1)
724 #define M_GRANULARITY (-2)
725 #define M_MMAP_THRESHOLD (-3)
727 /* ------------------------ Mallinfo declarations ------------------------ */
731 This version of malloc supports the standard SVID/XPG mallinfo
732 routine that returns a struct containing usage properties and
733 statistics. It should work on any system that has a
734 /usr/include/malloc.h defining struct mallinfo. The main
735 declaration needed is the mallinfo struct that is returned (by-copy)
736 by mallinfo(). The malloinfo struct contains a bunch of fields that
737 are not even meaningful in this version of malloc. These fields are
738 are instead filled by mallinfo() with other numbers that might be of
741 HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
742 /usr/include/malloc.h file that includes a declaration of struct
743 mallinfo. If so, it is included; else a compliant version is
744 declared below. These must be precisely the same for mallinfo() to
745 work. The original SVID version of this struct, defined on most
746 systems with mallinfo, declares all fields as ints. But some others
747 define as unsigned long. If your system defines the fields using a
748 type of different width than listed here, you MUST #include your
749 system version and #define HAVE_USR_INCLUDE_MALLOC_H.
752 /* #define HAVE_USR_INCLUDE_MALLOC_H */
754 #ifdef HAVE_USR_INCLUDE_MALLOC_H
755 #include "/usr/include/malloc.h"
756 #else /* HAVE_USR_INCLUDE_MALLOC_H */
757 #ifndef STRUCT_MALLINFO_DECLARED
758 /* HP-UX (and others?) redefines mallinfo unless _STRUCT_MALLINFO is defined */
759 #define _STRUCT_MALLINFO
760 #define STRUCT_MALLINFO_DECLARED 1
762 MALLINFO_FIELD_TYPE arena; /* non-mmapped space allocated from system */
763 MALLINFO_FIELD_TYPE ordblks; /* number of free chunks */
764 MALLINFO_FIELD_TYPE smblks; /* always 0 */
765 MALLINFO_FIELD_TYPE hblks; /* always 0 */
766 MALLINFO_FIELD_TYPE hblkhd; /* space in mmapped regions */
767 MALLINFO_FIELD_TYPE usmblks; /* maximum total allocated space */
768 MALLINFO_FIELD_TYPE fsmblks; /* always 0 */
769 MALLINFO_FIELD_TYPE uordblks; /* total allocated space */
770 MALLINFO_FIELD_TYPE fordblks; /* total free space */
771 MALLINFO_FIELD_TYPE keepcost; /* releasable (via malloc_trim) space */
773 #endif /* STRUCT_MALLINFO_DECLARED */
774 #endif /* HAVE_USR_INCLUDE_MALLOC_H */
775 #endif /* NO_MALLINFO */
778 Try to persuade compilers to inline. The most critical functions for
779 inlining are defined as macros, so these aren't used for them.
783 #if defined(__GNUC__)
784 #define FORCEINLINE __inline __attribute__ ((always_inline))
785 #elif defined(_MSC_VER)
786 #define FORCEINLINE __forceinline
790 #if defined(__GNUC__)
791 #define NOINLINE __attribute__ ((noinline))
792 #elif defined(_MSC_VER)
793 #define NOINLINE __declspec(noinline)
802 #define FORCEINLINE inline
804 #endif /* __cplusplus */
811 /* ------------------- Declarations of public routines ------------------- */
813 #ifndef USE_DL_PREFIX
814 #define dlcalloc calloc
816 #define dlmalloc malloc
817 #define dlmemalign aligned_alloc
818 #define dlposix_memalign posix_memalign
819 #define dlrealloc realloc
820 #define dlrealloc_in_place realloc_in_place
821 #define dlvalloc valloc
822 #define dlpvalloc pvalloc
823 #define dlmallinfo mallinfo
824 #define dlmallopt mallopt
825 #define dlmalloc_trim malloc_trim
826 #define dlmalloc_stats malloc_stats
827 #define dlmalloc_usable_size malloc_usable_size
828 #define dlmalloc_footprint malloc_footprint
829 #define dlmalloc_max_footprint malloc_max_footprint
830 #define dlmalloc_footprint_limit malloc_footprint_limit
831 #define dlmalloc_set_footprint_limit malloc_set_footprint_limit
832 #define dlmalloc_inspect_all malloc_inspect_all
833 #define dlindependent_calloc independent_calloc
834 #define dlindependent_comalloc independent_comalloc
835 #define dlbulk_free bulk_free
836 #endif /* USE_DL_PREFIX */
838 #if 0 // Redeclaration warnings as PDCLib already declares these in <stdio.h>
842 Returns a pointer to a newly allocated chunk of at least n bytes, or
843 null if no space is available, in which case errno is set to ENOMEM
846 If n is zero, malloc returns a minimum-sized chunk. (The minimum
847 size is 16 bytes on most 32bit systems, and 32 bytes on 64bit
848 systems.) Note that size_t is an unsigned type, so calls with
849 arguments that would be negative if signed are interpreted as
850 requests for huge amounts of space, which will often fail. The
851 maximum supported value of n differs across systems, but is in all
852 cases less than the maximum representable value of a size_t.
854 DLMALLOC_EXPORT void* dlmalloc(size_t);
858 Releases the chunk of memory pointed to by p, that had been previously
859 allocated using malloc or a related routine such as realloc.
860 It has no effect if p is null. If p was not malloced or already
861 freed, free(p) will by default cause the current program to abort.
863 DLMALLOC_EXPORT void dlfree(void*);
866 calloc(size_t n_elements, size_t element_size);
867 Returns a pointer to n_elements * element_size bytes, with all locations
870 DLMALLOC_EXPORT void* dlcalloc(size_t, size_t);
873 realloc(void* p, size_t n)
874 Returns a pointer to a chunk of size n that contains the same data
875 as does chunk p up to the minimum of (n, p's size) bytes, or null
876 if no space is available.
878 The returned pointer may or may not be the same as p. The algorithm
879 prefers extending p in most cases when possible, otherwise it
880 employs the equivalent of a malloc-copy-free sequence.
882 If p is null, realloc is equivalent to malloc.
884 If space is not available, realloc returns null, errno is set (if on
885 ANSI) and p is NOT freed.
887 if n is for fewer bytes than already held by p, the newly unused
888 space is lopped off and freed if possible. realloc with a size
889 argument of zero (re)allocates a minimum-sized chunk.
891 The old unix realloc convention of allowing the last-free'd chunk
892 to be used as an argument to realloc is not supported.
894 DLMALLOC_EXPORT void* dlrealloc(void*, size_t);
899 realloc_in_place(void* p, size_t n)
900 Resizes the space allocated for p to size n, only if this can be
901 done without moving p (i.e., only if there is adjacent space
902 available if n is greater than p's current allocated size, or n is
903 less than or equal to p's size). This may be used instead of plain
904 realloc if an alternative allocation strategy is needed upon failure
905 to expand space; for example, reallocation of a buffer that must be
906 memory-aligned or cleared. You can use realloc_in_place to trigger
907 these alternatives only when needed.
909 Returns p if successful; otherwise null.
911 DLMALLOC_EXPORT void* dlrealloc_in_place(void*, size_t);
913 #if 0 // Redeclaration warnings as PDCLib already declares these in <stdio.h>
916 memalign(size_t alignment, size_t n);
917 Returns a pointer to a newly allocated chunk of n bytes, aligned
918 in accord with the alignment argument.
920 The alignment argument should be a power of two. If the argument is
921 not a power of two, the nearest greater power is used.
922 8-byte alignment is guaranteed by normal malloc calls, so don't
923 bother calling memalign with an argument of 8 or less.
925 Overreliance on memalign is a sure way to fragment space.
927 DLMALLOC_EXPORT void* dlmemalign(size_t, size_t);
932 int posix_memalign(void** pp, size_t alignment, size_t n);
933 Allocates a chunk of n bytes, aligned in accord with the alignment
934 argument. Differs from memalign only in that it (1) assigns the
935 allocated memory to *pp rather than returning it, (2) fails and
936 returns EINVAL if the alignment is not a power of two (3) fails and
937 returns ENOMEM if memory cannot be allocated.
939 DLMALLOC_EXPORT int dlposix_memalign(void**, size_t, size_t);
943 Equivalent to memalign(pagesize, n), where pagesize is the page
944 size of the system. If the pagesize is unknown, 4096 is used.
946 DLMALLOC_EXPORT void* dlvalloc(size_t);
949 mallopt(int parameter_number, int parameter_value)
950 Sets tunable parameters The format is to provide a
951 (parameter-number, parameter-value) pair. mallopt then sets the
952 corresponding parameter to the argument value if it can (i.e., so
953 long as the value is meaningful), and returns 1 if successful else
954 0. To workaround the fact that mallopt is specified to use int,
955 not size_t parameters, the value -1 is specially treated as the
956 maximum unsigned size_t value.
958 SVID/XPG/ANSI defines four standard param numbers for mallopt,
959 normally defined in malloc.h. None of these are use in this malloc,
960 so setting them has no effect. But this malloc also supports other
961 options in mallopt. See below for details. Briefly, supported
962 parameters are as follows (listed defaults are for "typical"
965 Symbol param # default allowed param values
966 M_TRIM_THRESHOLD -1 2*1024*1024 any (-1 disables)
967 M_GRANULARITY -2 page size any power of 2 >= page size
968 M_MMAP_THRESHOLD -3 256*1024 any (or 0 if no MMAP support)
970 DLMALLOC_EXPORT int dlmallopt(int, int);
974 Returns the number of bytes obtained from the system. The total
975 number of bytes allocated by malloc, realloc etc., is less than this
976 value. Unlike mallinfo, this function returns only a precomputed
977 result, so can be called frequently to monitor memory consumption.
978 Even if locks are otherwise defined, this function does not use them,
979 so results might not be up to date.
981 DLMALLOC_EXPORT size_t dlmalloc_footprint(void);
984 malloc_max_footprint();
985 Returns the maximum number of bytes obtained from the system. This
986 value will be greater than current footprint if deallocated space
987 has been reclaimed by the system. The peak number of bytes allocated
988 by malloc, realloc etc., is less than this value. Unlike mallinfo,
989 this function returns only a precomputed result, so can be called
990 frequently to monitor memory consumption. Even if locks are
991 otherwise defined, this function does not use them, so results might
994 DLMALLOC_EXPORT size_t dlmalloc_max_footprint(void);
997 malloc_footprint_limit();
998 Returns the number of bytes that the heap is allowed to obtain from
999 the system, returning the last value returned by
1000 malloc_set_footprint_limit, or the maximum size_t value if
1001 never set. The returned value reflects a permission. There is no
1002 guarantee that this number of bytes can actually be obtained from
1005 DLMALLOC_EXPORT size_t dlmalloc_footprint_limit(void);
1008 malloc_set_footprint_limit();
1009 Sets the maximum number of bytes to obtain from the system, causing
1010 failure returns from malloc and related functions upon attempts to
1011 exceed this value. The argument value may be subject to page
1012 rounding to an enforceable limit; this actual value is returned.
1013 Using an argument of the maximum possible size_t effectively
1014 disables checks. If the argument is less than or equal to the
1015 current malloc_footprint, then all future allocations that require
1016 additional system memory will fail. However, invocation cannot
1017 retroactively deallocate existing used memory.
1019 DLMALLOC_EXPORT size_t dlmalloc_set_footprint_limit(size_t bytes);
1021 #if MALLOC_INSPECT_ALL
1023 malloc_inspect_all(void(*handler)(void *start,
1026 void* callback_arg),
1028 Traverses the heap and calls the given handler for each managed
1029 region, skipping all bytes that are (or may be) used for bookkeeping
1030 purposes. Traversal does not include include chunks that have been
1031 directly memory mapped. Each reported region begins at the start
1032 address, and continues up to but not including the end address. The
1033 first used_bytes of the region contain allocated data. If
1034 used_bytes is zero, the region is unallocated. The handler is
1035 invoked with the given callback argument. If locks are defined, they
1036 are held during the entire traversal. It is a bad idea to invoke
1037 other malloc functions from within the handler.
1039 For example, to count the number of in-use chunks with size greater
1040 than 1000, you could write:
1041 static int count = 0;
1042 void count_chunks(void* start, void* end, size_t used, void* arg) {
1043 if (used >= 1000) ++count;
1046 malloc_inspect_all(count_chunks, NULL);
1048 malloc_inspect_all is compiled only if MALLOC_INSPECT_ALL is defined.
1050 DLMALLOC_EXPORT void dlmalloc_inspect_all(void(*handler)(void*, void *, size_t, void*),
1053 #endif /* MALLOC_INSPECT_ALL */
1058 Returns (by copy) a struct containing various summary statistics:
1060 arena: current total non-mmapped bytes allocated from system
1061 ordblks: the number of free chunks
1062 smblks: always zero.
1063 hblks: current number of mmapped regions
1064 hblkhd: total bytes held in mmapped regions
1065 usmblks: the maximum total allocated space. This will be greater
1066 than current total if trimming has occurred.
1067 fsmblks: always zero
1068 uordblks: current total allocated space (normal or mmapped)
1069 fordblks: total free space
1070 keepcost: the maximum number of bytes that could ideally be released
1071 back to system via malloc_trim. ("ideally" means that
1072 it ignores page restrictions etc.)
1074 Because these fields are ints, but internal bookkeeping may
1075 be kept as longs, the reported values may wrap around zero and
1078 DLMALLOC_EXPORT struct mallinfo dlmallinfo(void);
1079 #endif /* NO_MALLINFO */
1082 independent_calloc(size_t n_elements, size_t element_size, void* chunks[]);
1084 independent_calloc is similar to calloc, but instead of returning a
1085 single cleared space, it returns an array of pointers to n_elements
1086 independent elements that can hold contents of size elem_size, each
1087 of which starts out cleared, and can be independently freed,
1088 realloc'ed etc. The elements are guaranteed to be adjacently
1089 allocated (this is not guaranteed to occur with multiple callocs or
1090 mallocs), which may also improve cache locality in some
1093 The "chunks" argument is optional (i.e., may be null, which is
1094 probably the most typical usage). If it is null, the returned array
1095 is itself dynamically allocated and should also be freed when it is
1096 no longer needed. Otherwise, the chunks array must be of at least
1097 n_elements in length. It is filled in with the pointers to the
1100 In either case, independent_calloc returns this pointer array, or
1101 null if the allocation failed. If n_elements is zero and "chunks"
1102 is null, it returns a chunk representing an array with zero elements
1103 (which should be freed if not wanted).
1105 Each element must be freed when it is no longer needed. This can be
1106 done all at once using bulk_free.
1108 independent_calloc simplifies and speeds up implementations of many
1109 kinds of pools. It may also be useful when constructing large data
1110 structures that initially have a fixed number of fixed-sized nodes,
1111 but the number is not known at compile time, and some of the nodes
1112 may later need to be freed. For example:
1114 struct Node { int item; struct Node* next; };
1116 struct Node* build_list() {
1118 int n = read_number_of_nodes_needed();
1119 if (n <= 0) return 0;
1120 pool = (struct Node**)(independent_calloc(n, sizeof(struct Node), 0);
1121 if (pool == 0) die();
1122 // organize into a linked list...
1123 struct Node* first = pool[0];
1124 for (i = 0; i < n-1; ++i)
1125 pool[i]->next = pool[i+1];
1126 free(pool); // Can now free the array (or not, if it is needed later)
1130 DLMALLOC_EXPORT void** dlindependent_calloc(size_t, size_t, void**);
1133 independent_comalloc(size_t n_elements, size_t sizes[], void* chunks[]);
1135 independent_comalloc allocates, all at once, a set of n_elements
1136 chunks with sizes indicated in the "sizes" array. It returns
1137 an array of pointers to these elements, each of which can be
1138 independently freed, realloc'ed etc. The elements are guaranteed to
1139 be adjacently allocated (this is not guaranteed to occur with
1140 multiple callocs or mallocs), which may also improve cache locality
1141 in some applications.
1143 The "chunks" argument is optional (i.e., may be null). If it is null
1144 the returned array is itself dynamically allocated and should also
1145 be freed when it is no longer needed. Otherwise, the chunks array
1146 must be of at least n_elements in length. It is filled in with the
1147 pointers to the chunks.
1149 In either case, independent_comalloc returns this pointer array, or
1150 null if the allocation failed. If n_elements is zero and chunks is
1151 null, it returns a chunk representing an array with zero elements
1152 (which should be freed if not wanted).
1154 Each element must be freed when it is no longer needed. This can be
1155 done all at once using bulk_free.
1157 independent_comallac differs from independent_calloc in that each
1158 element may have a different size, and also that it does not
1159 automatically clear elements.
1161 independent_comalloc can be used to speed up allocation in cases
1162 where several structs or objects must always be allocated at the
1163 same time. For example:
1168 void send_message(char* msg) {
1169 int msglen = strlen(msg);
1170 size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
1172 if (independent_comalloc(3, sizes, chunks) == 0)
1174 struct Head* head = (struct Head*)(chunks[0]);
1175 char* body = (char*)(chunks[1]);
1176 struct Foot* foot = (struct Foot*)(chunks[2]);
1180 In general though, independent_comalloc is worth using only for
1181 larger values of n_elements. For small values, you probably won't
1182 detect enough difference from series of malloc calls to bother.
1184 Overuse of independent_comalloc can increase overall memory usage,
1185 since it cannot reuse existing noncontiguous small chunks that
1186 might be available for some of the elements.
1188 DLMALLOC_EXPORT void** dlindependent_comalloc(size_t, size_t*, void**);
1191 bulk_free(void* array[], size_t n_elements)
1192 Frees and clears (sets to null) each non-null pointer in the given
1193 array. This is likely to be faster than freeing them one-by-one.
1194 If footers are used, pointers that have been allocated in different
1195 mspaces are not freed or cleared, and the count of all such pointers
1196 is returned. For large arrays of pointers with poor locality, it
1197 may be worthwhile to sort this array before calling bulk_free.
1199 DLMALLOC_EXPORT size_t dlbulk_free(void**, size_t n_elements);
1203 Equivalent to valloc(minimum-page-that-holds(n)), that is,
1204 round up n to nearest pagesize.
1206 DLMALLOC_EXPORT void* dlpvalloc(size_t);
1209 malloc_trim(size_t pad);
1211 If possible, gives memory back to the system (via negative arguments
1212 to sbrk) if there is unused memory at the `high' end of the malloc
1213 pool or in unused MMAP segments. You can call this after freeing
1214 large blocks of memory to potentially reduce the system-level memory
1215 requirements of a program. However, it cannot guarantee to reduce
1216 memory. Under some allocation patterns, some large free blocks of
1217 memory will be locked between two used chunks, so they cannot be
1218 given back to the system.
1220 The `pad' argument to malloc_trim represents the amount of free
1221 trailing space to leave untrimmed. If this argument is zero, only
1222 the minimum amount of memory to maintain internal data structures
1223 will be left. Non-zero arguments can be supplied to maintain enough
1224 trailing space to service future expected allocations without having
1225 to re-obtain memory from the system.
1227 Malloc_trim returns 1 if it actually released any memory, else 0.
1229 DLMALLOC_EXPORT int dlmalloc_trim(size_t);
1233 Prints on stderr the amount of space obtained from the system (both
1234 via sbrk and mmap), the maximum amount (which may be more than
1235 current if malloc_trim and/or munmap got called), and the current
1236 number of bytes allocated via malloc (or realloc, etc) but not yet
1237 freed. Note that this is the number of bytes allocated, not the
1238 number requested. It will be larger than the number requested
1239 because of alignment and bookkeeping overhead. Because it includes
1240 alignment wastage as being in use, this figure may be greater than
1241 zero even when no user-level chunks are allocated.
1243 The reported current and maximum system memory can be inaccurate if
1244 a program makes other calls to system memory allocation functions
1245 (normally sbrk) outside of malloc.
1247 malloc_stats prints only the most commonly interesting statistics.
1248 More information can be obtained by calling mallinfo.
1250 DLMALLOC_EXPORT void dlmalloc_stats(void);
1252 #endif /* ONLY_MSPACES */
1255 malloc_usable_size(void* p);
1257 Returns the number of bytes you can actually use in
1258 an allocated chunk, which may be more than you requested (although
1259 often not) due to alignment and minimum size constraints.
1260 You can use this many bytes without worrying about
1261 overwriting other allocated objects. This is not a particularly great
1262 programming practice. malloc_usable_size can be more useful in
1263 debugging and assertions, for example:
1266 assert(malloc_usable_size(p) >= 256);
1268 size_t dlmalloc_usable_size(void*);
1273 mspace is an opaque type representing an independent
1274 region of space that supports mspace_malloc, etc.
1276 typedef void* mspace;
1279 create_mspace creates and returns a new independent space with the
1280 given initial capacity, or, if 0, the default granularity size. It
1281 returns null if there is no system memory available to create the
1282 space. If argument locked is non-zero, the space uses a separate
1283 lock to control access. The capacity of the space will grow
1284 dynamically as needed to service mspace_malloc requests. You can
1285 control the sizes of incremental increases of this space by
1286 compiling with a different DEFAULT_GRANULARITY or dynamically
1287 setting with mallopt(M_GRANULARITY, value).
1289 DLMALLOC_EXPORT mspace create_mspace(size_t capacity, int locked);
1292 destroy_mspace destroys the given space, and attempts to return all
1293 of its memory back to the system, returning the total number of
1294 bytes freed. After destruction, the results of access to all memory
1295 used by the space become undefined.
1297 DLMALLOC_EXPORT size_t destroy_mspace(mspace msp);
1300 create_mspace_with_base uses the memory supplied as the initial base
1301 of a new mspace. Part (less than 128*sizeof(size_t) bytes) of this
1302 space is used for bookkeeping, so the capacity must be at least this
1303 large. (Otherwise 0 is returned.) When this initial space is
1304 exhausted, additional memory will be obtained from the system.
1305 Destroying this space will deallocate all additionally allocated
1306 space (if possible) but not the initial base.
1308 DLMALLOC_EXPORT mspace create_mspace_with_base(void* base, size_t capacity, int locked);
1311 mspace_track_large_chunks controls whether requests for large chunks
1312 are allocated in their own untracked mmapped regions, separate from
1313 others in this mspace. By default large chunks are not tracked,
1314 which reduces fragmentation. However, such chunks are not
1315 necessarily released to the system upon destroy_mspace. Enabling
1316 tracking by setting to true may increase fragmentation, but avoids
1317 leakage when relying on destroy_mspace to release all memory
1318 allocated using this space. The function returns the previous
1321 DLMALLOC_EXPORT int mspace_track_large_chunks(mspace msp, int enable);
1325 mspace_malloc behaves as malloc, but operates within
1328 DLMALLOC_EXPORT void* mspace_malloc(mspace msp, size_t bytes);
1331 mspace_free behaves as free, but operates within
1334 If compiled with FOOTERS==1, mspace_free is not actually needed.
1335 free may be called instead of mspace_free because freed chunks from
1336 any space are handled by their originating spaces.
1338 DLMALLOC_EXPORT void mspace_free(mspace msp, void* mem);
1341 mspace_realloc behaves as realloc, but operates within
1344 If compiled with FOOTERS==1, mspace_realloc is not actually
1345 needed. realloc may be called instead of mspace_realloc because
1346 realloced chunks from any space are handled by their originating
1349 DLMALLOC_EXPORT void* mspace_realloc(mspace msp, void* mem, size_t newsize);
1352 mspace_calloc behaves as calloc, but operates within
1355 DLMALLOC_EXPORT void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size);
1358 mspace_memalign behaves as memalign, but operates within
1361 DLMALLOC_EXPORT void* mspace_memalign(mspace msp, size_t alignment, size_t bytes);
1364 mspace_independent_calloc behaves as independent_calloc, but
1365 operates within the given space.
1367 DLMALLOC_EXPORT void** mspace_independent_calloc(mspace msp, size_t n_elements,
1368 size_t elem_size, void* chunks[]);
1371 mspace_independent_comalloc behaves as independent_comalloc, but
1372 operates within the given space.
1374 DLMALLOC_EXPORT void** mspace_independent_comalloc(mspace msp, size_t n_elements,
1375 size_t sizes[], void* chunks[]);
1378 mspace_footprint() returns the number of bytes obtained from the
1379 system for this space.
1381 DLMALLOC_EXPORT size_t mspace_footprint(mspace msp);
1384 mspace_max_footprint() returns the peak number of bytes obtained from the
1385 system for this space.
1387 DLMALLOC_EXPORT size_t mspace_max_footprint(mspace msp);
1392 mspace_mallinfo behaves as mallinfo, but reports properties of
1395 DLMALLOC_EXPORT struct mallinfo mspace_mallinfo(mspace msp);
1396 #endif /* NO_MALLINFO */
1399 malloc_usable_size(void* p) behaves the same as malloc_usable_size;
1401 DLMALLOC_EXPORT size_t mspace_usable_size(void* mem);
1404 mspace_malloc_stats behaves as malloc_stats, but reports
1405 properties of the given space.
1407 DLMALLOC_EXPORT void mspace_malloc_stats(mspace msp);
1410 mspace_trim behaves as malloc_trim, but
1411 operates within the given space.
1413 DLMALLOC_EXPORT int mspace_trim(mspace msp, size_t pad);
1416 An alias for mallopt.
1418 DLMALLOC_EXPORT int mspace_mallopt(int, int);
1420 #endif /* MSPACES */
1423 } /* end of extern "C" */
1424 #endif /* __cplusplus */
1427 ========================================================================
1428 To make a fully customizable malloc.h header file, cut everything
1429 above this line, put into file malloc.h, edit to suit, and #include it
1430 on the next line, as well as in programs that use this malloc.
1431 ========================================================================
1434 /* #include "malloc.h" */
1436 /*------------------------------ internal #includes ---------------------- */
1439 #pragma warning( disable : 4146 ) /* no "unsigned" warnings */
1440 #endif /* _MSC_VER */
1441 #if !NO_MALLOC_STATS
1442 #include <stdio.h> /* for printing in malloc_stats */
1443 #endif /* NO_MALLOC_STATS */
1444 #ifndef LACKS_ERRNO_H
1445 #include <errno.h> /* for MALLOC_FAILURE_ACTION */
1446 #endif /* LACKS_ERRNO_H */
1448 #if ABORT_ON_ASSERT_FAILURE
1450 #define assert(x) if(!(x)) ABORT
1451 #else /* ABORT_ON_ASSERT_FAILURE */
1453 #endif /* ABORT_ON_ASSERT_FAILURE */
1460 #if !defined(WIN32) && !defined(LACKS_TIME_H)
1461 #include <time.h> /* for magic initialization */
1463 #ifndef LACKS_STDLIB_H
1464 #include <stdlib.h> /* for abort() */
1465 #endif /* LACKS_STDLIB_H */
1466 #ifndef LACKS_STRING_H
1467 #include <string.h> /* for memset etc */
1468 #endif /* LACKS_STRING_H */
1470 #ifndef LACKS_STRINGS_H
1471 #include <strings.h> /* for ffs */
1472 #endif /* LACKS_STRINGS_H */
1473 #endif /* USE_BUILTIN_FFS */
1475 #ifndef LACKS_SYS_MMAN_H
1476 /* On some versions of linux, mremap decl in mman.h needs __USE_GNU set */
1477 #if (defined(linux) && !defined(__USE_GNU))
1479 #include <sys/mman.h> /* for mmap */
1482 #include <sys/mman.h> /* for mmap */
1484 #endif /* LACKS_SYS_MMAN_H */
1485 #ifndef LACKS_FCNTL_H
1487 #endif /* LACKS_FCNTL_H */
1488 #endif /* HAVE_MMAP */
1489 #ifndef LACKS_UNISTD_H
1490 #include <unistd.h> /* for sbrk, sysconf */
1491 #else /* LACKS_UNISTD_H */
1492 #if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
1493 /*extern void* sbrk(ptrdiff_t);*/
1494 #endif /* FreeBSD etc */
1495 #endif /* LACKS_UNISTD_H */
1497 /* Declarations for locking */
1500 #if defined (__SVR4) && defined (__sun) /* solaris */
1502 #elif !defined(LACKS_SCHED_H)
1504 #endif /* solaris or LACKS_SCHED_H */
1505 #if (defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0) || !USE_SPIN_LOCKS
1506 /*#include <pthread.h>*/
1507 #endif /* USE_RECURSIVE_LOCKS ... */
1508 #elif defined(_MSC_VER)
1510 /* These are already defined on AMD64 builds */
1513 #endif /* __cplusplus */
1514 LONG __cdecl _InterlockedCompareExchange(LONG volatile *Dest, LONG Exchange, LONG Comp);
1515 LONG __cdecl _InterlockedExchange(LONG volatile *Target, LONG Value);
1518 #endif /* __cplusplus */
1519 #endif /* _M_AMD64 */
1520 #pragma intrinsic (_InterlockedCompareExchange)
1521 #pragma intrinsic (_InterlockedExchange)
1522 #define interlockedcompareexchange _InterlockedCompareExchange
1523 #define interlockedexchange _InterlockedExchange
1524 #elif defined(WIN32) && defined(__GNUC__)
1525 #define interlockedcompareexchange(a, b, c) __sync_val_compare_and_swap(a, c, b)
1526 #define interlockedexchange __sync_lock_test_and_set
1528 #endif /* USE_LOCKS */
1530 /* Declarations for bit scanning on win32 */
1531 #if defined(_MSC_VER) && _MSC_VER>=1300
1532 #ifndef BitScanForward /* Try to avoid pulling in WinNT.h */
1535 #endif /* __cplusplus */
1536 unsigned char _BitScanForward(unsigned long *index, unsigned long mask);
1537 unsigned char _BitScanReverse(unsigned long *index, unsigned long mask);
1540 #endif /* __cplusplus */
1542 #define BitScanForward _BitScanForward
1543 #define BitScanReverse _BitScanReverse
1544 #pragma intrinsic(_BitScanForward)
1545 #pragma intrinsic(_BitScanReverse)
1546 #endif /* BitScanForward */
1547 #endif /* defined(_MSC_VER) && _MSC_VER>=1300 */
1550 #ifndef malloc_getpagesize
1551 # ifdef _SC_PAGESIZE /* some SVR4 systems omit an underscore */
1552 # ifndef _SC_PAGE_SIZE
1553 # define _SC_PAGE_SIZE _SC_PAGESIZE
1556 # ifdef _SC_PAGE_SIZE
1557 # define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
1559 # if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
1560 extern size_t getpagesize();
1561 # define malloc_getpagesize getpagesize()
1563 # ifdef WIN32 /* use supplied emulation of getpagesize */
1564 # define malloc_getpagesize getpagesize()
1566 # ifndef LACKS_SYS_PARAM_H
1567 # include <sys/param.h>
1569 # ifdef EXEC_PAGESIZE
1570 # define malloc_getpagesize EXEC_PAGESIZE
1574 # define malloc_getpagesize NBPG
1576 # define malloc_getpagesize (NBPG * CLSIZE)
1580 # define malloc_getpagesize NBPC
1583 # define malloc_getpagesize PAGESIZE
1584 # else /* just guess */
1585 # define malloc_getpagesize ((size_t)4096U)
1596 /* ------------------- size_t and alignment properties -------------------- */
1598 /* The byte and bit size of a size_t */
1599 #define SIZE_T_SIZE (sizeof(size_t))
1600 #define SIZE_T_BITSIZE (sizeof(size_t) << 3)
1602 /* Some constants coerced to size_t */
1603 /* Annoying but necessary to avoid errors on some platforms */
1604 #define SIZE_T_ZERO ((size_t)0)
1605 #define SIZE_T_ONE ((size_t)1)
1606 #define SIZE_T_TWO ((size_t)2)
1607 #define SIZE_T_FOUR ((size_t)4)
1608 #define TWO_SIZE_T_SIZES (SIZE_T_SIZE<<1)
1609 #define FOUR_SIZE_T_SIZES (SIZE_T_SIZE<<2)
1610 #define SIX_SIZE_T_SIZES (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES)
1611 #define HALF_MAX_SIZE_T (MAX_SIZE_T / 2U)
1613 /* The bit mask value corresponding to MALLOC_ALIGNMENT */
1614 #define CHUNK_ALIGN_MASK (MALLOC_ALIGNMENT - SIZE_T_ONE)
1616 /* True if address a has acceptable alignment */
1617 #define is_aligned(A) (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0)
1619 /* the number of bytes to offset an address to align it */
1620 #define align_offset(A)\
1621 ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\
1622 ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK))
1624 /* -------------------------- MMAP preliminaries ------------------------- */
1627 If HAVE_MORECORE or HAVE_MMAP are false, we just define calls and
1628 checks to fail so compiler optimizer can delete code rather than
1629 using so many "#if"s.
1633 /* MORECORE and MMAP must return MFAIL on failure */
1634 #define MFAIL ((void*)(MAX_SIZE_T))
1635 #define CMFAIL ((char*)(MFAIL)) /* defined for convenience */
1640 #elif !defined(WIN32)
1641 #define MUNMAP_DEFAULT(a, s) munmap((a), (s))
1642 #define MMAP_PROT (PROT_READ|PROT_WRITE)
1643 #if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
1644 #define MAP_ANONYMOUS MAP_ANON
1645 #endif /* MAP_ANON */
1646 #ifdef MAP_ANONYMOUS
1647 #define MMAP_FLAGS (MAP_PRIVATE|MAP_ANONYMOUS)
1648 #define MMAP_DEFAULT(s) mmap(0, (s), MMAP_PROT, MMAP_FLAGS, -1, 0)
1649 #else /* MAP_ANONYMOUS */
1651 Nearly all versions of mmap support MAP_ANONYMOUS, so the following
1652 is unlikely to be needed, but is supplied just in case.
1654 #define MMAP_FLAGS (MAP_PRIVATE)
1655 #define MMAP_DEFAULT(s) ((dev_zero_fd < 0) ? \
1656 (dev_zero_fd = open("/dev/zero", O_RDWR), \
1657 mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) : \
1658 mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0))
1659 #endif /* MAP_ANONYMOUS */
1661 #define DIRECT_MMAP_DEFAULT(s) MMAP_DEFAULT(s)
1665 /* Win32 MMAP via VirtualAlloc */
1666 static FORCEINLINE void* win32mmap(size_t size) {
1667 void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT, PAGE_READWRITE);
1668 return (ptr != 0)? ptr: MFAIL;
1671 /* For direct MMAP, use MEM_TOP_DOWN to minimize interference */
1672 static FORCEINLINE void* win32direct_mmap(size_t size) {
1673 void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT|MEM_TOP_DOWN,
1675 return (ptr != 0)? ptr: MFAIL;
1678 /* This function supports releasing coalesed segments */
1679 static FORCEINLINE int win32munmap(void* ptr, size_t size) {
1680 MEMORY_BASIC_INFORMATION minfo;
1681 char* cptr = (char*)ptr;
1683 if (VirtualQuery(cptr, &minfo, sizeof(minfo)) == 0)
1685 if (minfo.BaseAddress != cptr || minfo.AllocationBase != cptr ||
1686 minfo.State != MEM_COMMIT || minfo.RegionSize > size)
1688 if (VirtualFree(cptr, 0, MEM_RELEASE) == 0)
1690 cptr += minfo.RegionSize;
1691 size -= minfo.RegionSize;
1696 #define MMAP_DEFAULT(s) win32mmap(s)
1697 #define MUNMAP_DEFAULT(a, s) win32munmap((a), (s))
1698 #define DIRECT_MMAP_DEFAULT(s) win32direct_mmap(s)
1700 #endif /* HAVE_MMAP */
1702 #if HAVE_MREMAP && !defined(MREMAP_DEFAULT)
1704 #define MREMAP_DEFAULT(addr, osz, nsz, mv) mremap((addr), (osz), (nsz), (mv))
1706 #endif /* HAVE_MREMAP */
1709 * Define CALL_MORECORE
1713 #define CALL_MORECORE(S) MORECORE(S)
1714 #else /* MORECORE */
1715 #define CALL_MORECORE(S) MORECORE_DEFAULT(S)
1716 #endif /* MORECORE */
1717 #else /* HAVE_MORECORE */
1718 #define CALL_MORECORE(S) MFAIL
1719 #endif /* HAVE_MORECORE */
1722 * Define CALL_MMAP/CALL_MUNMAP/CALL_DIRECT_MMAP
1725 #define USE_MMAP_BIT (SIZE_T_ONE)
1728 #define CALL_MMAP(s) MMAP(s)
1730 #define CALL_MMAP(s) MMAP_DEFAULT(s)
1733 #define CALL_MUNMAP(a, s) MUNMAP((a), (s))
1735 #define CALL_MUNMAP(a, s) MUNMAP_DEFAULT((a), (s))
1738 #define CALL_DIRECT_MMAP(s) DIRECT_MMAP(s)
1739 #else /* DIRECT_MMAP */
1740 #define CALL_DIRECT_MMAP(s) DIRECT_MMAP_DEFAULT(s)
1741 #endif /* DIRECT_MMAP */
1742 #else /* HAVE_MMAP */
1743 #define USE_MMAP_BIT (SIZE_T_ZERO)
1745 #define MMAP(s) MFAIL
1746 #define MUNMAP(a, s) (-1)
1747 #define DIRECT_MMAP(s) MFAIL
1748 #define CALL_DIRECT_MMAP(s) DIRECT_MMAP(s)
1749 #define CALL_MMAP(s) MMAP(s)
1750 #define CALL_MUNMAP(a, s) MUNMAP((a), (s))
1751 #endif /* HAVE_MMAP */
1754 * Define CALL_MREMAP
1756 #if HAVE_MMAP && HAVE_MREMAP
1758 #define CALL_MREMAP(addr, osz, nsz, mv) MREMAP((addr), (osz), (nsz), (mv))
1760 #define CALL_MREMAP(addr, osz, nsz, mv) MREMAP_DEFAULT((addr), (osz), (nsz), (mv))
1762 #else /* HAVE_MMAP && HAVE_MREMAP */
1763 #define CALL_MREMAP(addr, osz, nsz, mv) MFAIL
1764 #endif /* HAVE_MMAP && HAVE_MREMAP */
1766 /* mstate bit set if continguous morecore disabled or failed */
1767 #define USE_NONCONTIGUOUS_BIT (4U)
1769 /* segment bit set in create_mspace_with_base */
1770 #define EXTERN_BIT (8U)
1773 /* --------------------------- Lock preliminaries ------------------------ */
1776 When locks are defined, there is one global lock, plus
1777 one per-mspace lock.
1779 The global lock_ensures that mparams.magic and other unique
1780 mparams values are initialized only once. It also protects
1781 sequences of calls to MORECORE. In many cases sys_alloc requires
1782 two calls, that should not be interleaved with calls by other
1783 threads. This does not protect against direct calls to MORECORE
1784 by other threads not using this lock, so there is still code to
1785 cope the best we can on interference.
1787 Per-mspace locks surround calls to malloc, free, etc.
1788 By default, locks are simple non-reentrant mutexes.
1790 Because lock-protected regions generally have bounded times, it is
1791 OK to use the supplied simple spinlocks. Spinlocks are likely to
1792 improve performance for lightly contended applications, but worsen
1793 performance under heavy contention.
1795 If USE_LOCKS is > 1, the definitions of lock routines here are
1796 bypassed, in which case you will need to define the type MLOCK_T,
1797 and at least INITIAL_LOCK, DESTROY_LOCK, ACQUIRE_LOCK, RELEASE_LOCK
1798 and TRY_LOCK. You must also declare a
1799 static MLOCK_T malloc_global_mutex = { initialization values };.
1804 #define USE_LOCK_BIT (0U)
1805 #define INITIAL_LOCK(l) (0)
1806 #define DESTROY_LOCK(l) (0)
1807 #define ACQUIRE_MALLOC_GLOBAL_LOCK()
1808 #define RELEASE_MALLOC_GLOBAL_LOCK()
1812 /* ----------------------- User-defined locks ------------------------ */
1813 /* Define your own lock implementation here */
1814 /* #define INITIAL_LOCK(lk) ... */
1815 /* #define DESTROY_LOCK(lk) ... */
1816 /* #define ACQUIRE_LOCK(lk) ... */
1817 /* #define RELEASE_LOCK(lk) ... */
1818 /* #define TRY_LOCK(lk) ... */
1819 /* static MLOCK_T malloc_global_mutex = ... */
1821 #elif USE_SPIN_LOCKS
1823 /* First, define CAS_LOCK and CLEAR_LOCK on ints */
1824 /* Note CAS_LOCK defined to return 0 on success */
1826 #if defined(__GNUC__)&& (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 1))
1827 #define CAS_LOCK(sl) __sync_lock_test_and_set(sl, 1)
1828 #define CLEAR_LOCK(sl) __sync_lock_release(sl)
1830 #elif (defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__)))
1831 /* Custom spin locks for older gcc on x86 */
1832 static FORCEINLINE int x86_cas_lock(int *sl) {
1836 __asm__ __volatile__ ("lock; cmpxchgl %1, %2"
1838 : "r" (val), "m" (*(sl)), "0"(cmp)
1843 static FORCEINLINE void x86_clear_lock(int* sl) {
1847 __asm__ __volatile__ ("lock; xchgl %0, %1"
1849 : "m" (*(sl)), "0"(prev)
1853 #define CAS_LOCK(sl) x86_cas_lock(sl)
1854 #define CLEAR_LOCK(sl) x86_clear_lock(sl)
1856 #else /* Win32 MSC */
1857 #define CAS_LOCK(sl) interlockedexchange(sl, 1)
1858 #define CLEAR_LOCK(sl) interlockedexchange (sl, 0)
1860 #endif /* ... gcc spins locks ... */
1862 /* How to yield for a spin lock */
1863 #define SPINS_PER_YIELD 63
1864 #if defined(_MSC_VER)
1865 #define SLEEP_EX_DURATION 50 /* delay for yield/sleep */
1866 #define SPIN_LOCK_YIELD SleepEx(SLEEP_EX_DURATION, FALSE)
1867 #elif defined (__SVR4) && defined (__sun) /* solaris */
1868 #define SPIN_LOCK_YIELD thr_yield();
1869 #elif !defined(LACKS_SCHED_H)
1870 #define SPIN_LOCK_YIELD sched_yield();
1872 #define SPIN_LOCK_YIELD
1873 #endif /* ... yield ... */
1875 #if !defined(USE_RECURSIVE_LOCKS) || USE_RECURSIVE_LOCKS == 0
1876 /* Plain spin locks use single word (embedded in malloc_states) */
1877 static int spin_acquire_lock(int *sl) {
1879 while (*(volatile int *)sl != 0 || CAS_LOCK(sl)) {
1880 if ((++spins & SPINS_PER_YIELD) == 0) {
1888 #define TRY_LOCK(sl) !CAS_LOCK(sl)
1889 #define RELEASE_LOCK(sl) CLEAR_LOCK(sl)
1890 #define ACQUIRE_LOCK(sl) (CAS_LOCK(sl)? spin_acquire_lock(sl) : 0)
1891 #define INITIAL_LOCK(sl) (*sl = 0)
1892 #define DESTROY_LOCK(sl) (0)
1893 static MLOCK_T malloc_global_mutex = 0;
1895 #else /* USE_RECURSIVE_LOCKS */
1896 /* types for lock owners */
1898 #define THREAD_ID_T DWORD
1899 #define CURRENT_THREAD GetCurrentThreadId()
1900 #define EQ_OWNER(X,Y) ((X) == (Y))
1903 Note: the following assume that pthread_t is a type that can be
1904 initialized to (casted) zero. If this is not the case, you will need to
1905 somehow redefine these or not use spin locks.
1907 #define THREAD_ID_T pthread_t
1908 #define CURRENT_THREAD pthread_self()
1909 #define EQ_OWNER(X,Y) pthread_equal(X, Y)
1912 struct malloc_recursive_lock {
1915 THREAD_ID_T threadid;
1918 #define MLOCK_T struct malloc_recursive_lock
1919 static MLOCK_T malloc_global_mutex = { 0, 0, (THREAD_ID_T)0};
1921 static FORCEINLINE void recursive_release_lock(MLOCK_T *lk) {
1922 assert(lk->sl != 0);
1924 CLEAR_LOCK(&lk->sl);
1928 static FORCEINLINE int recursive_acquire_lock(MLOCK_T *lk) {
1929 THREAD_ID_T mythreadid = CURRENT_THREAD;
1932 if (*((volatile int *)(&lk->sl)) == 0) {
1933 if (!CAS_LOCK(&lk->sl)) {
1934 lk->threadid = mythreadid;
1939 else if (EQ_OWNER(lk->threadid, mythreadid)) {
1943 if ((++spins & SPINS_PER_YIELD) == 0) {
1949 static FORCEINLINE int recursive_try_lock(MLOCK_T *lk) {
1950 THREAD_ID_T mythreadid = CURRENT_THREAD;
1951 if (*((volatile int *)(&lk->sl)) == 0) {
1952 if (!CAS_LOCK(&lk->sl)) {
1953 lk->threadid = mythreadid;
1958 else if (EQ_OWNER(lk->threadid, mythreadid)) {
1965 #define RELEASE_LOCK(lk) recursive_release_lock(lk)
1966 #define TRY_LOCK(lk) recursive_try_lock(lk)
1967 #define ACQUIRE_LOCK(lk) recursive_acquire_lock(lk)
1968 #define INITIAL_LOCK(lk) ((lk)->threadid = (THREAD_ID_T)0, (lk)->sl = 0, (lk)->c = 0)
1969 #define DESTROY_LOCK(lk) (0)
1970 #endif /* USE_RECURSIVE_LOCKS */
1972 #elif defined(WIN32) /* Win32 critical sections */
1973 #define MLOCK_T CRITICAL_SECTION
1974 #define ACQUIRE_LOCK(lk) (EnterCriticalSection(lk), 0)
1975 #define RELEASE_LOCK(lk) LeaveCriticalSection(lk)
1976 #define TRY_LOCK(lk) TryEnterCriticalSection(lk)
1977 #define INITIAL_LOCK(lk) (!InitializeCriticalSectionAndSpinCount((lk), 0x80000000|4000))
1978 #define DESTROY_LOCK(lk) (DeleteCriticalSection(lk), 0)
1979 #define NEED_GLOBAL_LOCK_INIT
1981 static MLOCK_T malloc_global_mutex;
1982 static volatile long malloc_global_mutex_status;
1984 /* Use spin loop to initialize global lock */
1985 static void init_malloc_global_mutex() {
1987 long stat = malloc_global_mutex_status;
1990 /* transition to < 0 while initializing, then to > 0) */
1992 interlockedcompareexchange(&malloc_global_mutex_status, -1, 0) == 0) {
1993 InitializeCriticalSection(&malloc_global_mutex);
1994 interlockedexchange(&malloc_global_mutex_status,1);
2001 #else /* pthreads-based locks */
2002 #define MLOCK_T pthread_mutex_t
2003 #define ACQUIRE_LOCK(lk) pthread_mutex_lock(lk)
2004 #define RELEASE_LOCK(lk) pthread_mutex_unlock(lk)
2005 #define TRY_LOCK(lk) (!pthread_mutex_trylock(lk))
2006 #define INITIAL_LOCK(lk) pthread_init_lock(lk)
2007 #define DESTROY_LOCK(lk) pthread_mutex_destroy(lk)
2009 #if defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0 && defined(linux) && !defined(PTHREAD_MUTEX_RECURSIVE)
2010 /* Cope with old-style linux recursive lock initialization by adding */
2011 /* skipped internal declaration from pthread.h */
2012 extern int pthread_mutexattr_setkind_np __P ((pthread_mutexattr_t *__attr,
2014 #define PTHREAD_MUTEX_RECURSIVE PTHREAD_MUTEX_RECURSIVE_NP
2015 #define pthread_mutexattr_settype(x,y) pthread_mutexattr_setkind_np(x,y)
2016 #endif /* USE_RECURSIVE_LOCKS ... */
2018 static MLOCK_T malloc_global_mutex = PTHREAD_MUTEX_INITIALIZER;
2020 static int pthread_init_lock (MLOCK_T *lk) {
2021 pthread_mutexattr_t attr;
2022 if (pthread_mutexattr_init(&attr)) return 1;
2023 #if defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0
2024 if (pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE)) return 1;
2026 if (pthread_mutex_init(lk, &attr)) return 1;
2027 if (pthread_mutexattr_destroy(&attr)) return 1;
2031 #endif /* ... lock types ... */
2033 /* Common code for all lock types */
2034 #define USE_LOCK_BIT (2U)
2036 #ifndef ACQUIRE_MALLOC_GLOBAL_LOCK
2037 #define ACQUIRE_MALLOC_GLOBAL_LOCK() ACQUIRE_LOCK(&malloc_global_mutex);
2040 #ifndef RELEASE_MALLOC_GLOBAL_LOCK
2041 #define RELEASE_MALLOC_GLOBAL_LOCK() RELEASE_LOCK(&malloc_global_mutex);
2044 #endif /* USE_LOCKS */
2046 /* ----------------------- Chunk representations ------------------------ */
2049 (The following includes lightly edited explanations by Colin Plumb.)
2051 The malloc_chunk declaration below is misleading (but accurate and
2052 necessary). It declares a "view" into memory allowing access to
2053 necessary fields at known offsets from a given base.
2055 Chunks of memory are maintained using a `boundary tag' method as
2056 originally described by Knuth. (See the paper by Paul Wilson
2057 ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such
2058 techniques.) Sizes of free chunks are stored both in the front of
2059 each chunk and at the end. This makes consolidating fragmented
2060 chunks into bigger chunks fast. The head fields also hold bits
2061 representing whether chunks are free or in use.
2063 Here are some pictures to make it clearer. They are "exploded" to
2064 show that the state of a chunk can be thought of as extending from
2065 the high 31 bits of the head field of its header through the
2066 prev_foot and PINUSE_BIT bit of the following chunk header.
2068 A chunk that's in use looks like:
2070 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2071 | Size of previous chunk (if P = 0) |
2072 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2073 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
2074 | Size of this chunk 1| +-+
2075 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2081 +- size - sizeof(size_t) available payload bytes -+
2085 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2086 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1|
2087 | Size of next chunk (may or may not be in use) | +-+
2088 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2090 And if it's free, it looks like this:
2093 | User payload (must be in use, or we would have merged!) |
2094 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2095 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
2096 | Size of this chunk 0| +-+
2097 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2099 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2101 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2103 +- size - sizeof(struct chunk) unused bytes -+
2105 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2106 | Size of this chunk |
2107 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2108 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0|
2109 | Size of next chunk (must be in use, or we would have merged)| +-+
2110 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2114 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2117 Note that since we always merge adjacent free chunks, the chunks
2118 adjacent to a free chunk must be in use.
2120 Given a pointer to a chunk (which can be derived trivially from the
2121 payload pointer) we can, in O(1) time, find out whether the adjacent
2122 chunks are free, and if so, unlink them from the lists that they
2123 are on and merge them with the current chunk.
2125 Chunks always begin on even word boundaries, so the mem portion
2126 (which is returned to the user) is also on an even word boundary, and
2127 thus at least double-word aligned.
2129 The P (PINUSE_BIT) bit, stored in the unused low-order bit of the
2130 chunk size (which is always a multiple of two words), is an in-use
2131 bit for the *previous* chunk. If that bit is *clear*, then the
2132 word before the current chunk size contains the previous chunk
2133 size, and can be used to find the front of the previous chunk.
2134 The very first chunk allocated always has this bit set, preventing
2135 access to non-existent (or non-owned) memory. If pinuse is set for
2136 any given chunk, then you CANNOT determine the size of the
2137 previous chunk, and might even get a memory addressing fault when
2140 The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of
2141 the chunk size redundantly records whether the current chunk is
2142 inuse (unless the chunk is mmapped). This redundancy enables usage
2143 checks within free and realloc, and reduces indirection when freeing
2144 and consolidating chunks.
2146 Each freshly allocated chunk must have both cinuse and pinuse set.
2147 That is, each allocated chunk borders either a previously allocated
2148 and still in-use chunk, or the base of its memory arena. This is
2149 ensured by making all allocations from the `lowest' part of any
2150 found chunk. Further, no free chunk physically borders another one,
2151 so each free chunk is known to be preceded and followed by either
2152 inuse chunks or the ends of memory.
2154 Note that the `foot' of the current chunk is actually represented
2155 as the prev_foot of the NEXT chunk. This makes it easier to
2156 deal with alignments etc but can be very confusing when trying
2157 to extend or adapt this code.
2159 The exceptions to all this are
2161 1. The special chunk `top' is the top-most available chunk (i.e.,
2162 the one bordering the end of available memory). It is treated
2163 specially. Top is never included in any bin, is used only if
2164 no other chunk is available, and is released back to the
2165 system if it is very large (see M_TRIM_THRESHOLD). In effect,
2166 the top chunk is treated as larger (and thus less well
2167 fitting) than any other available chunk. The top chunk
2168 doesn't update its trailing size field since there is no next
2169 contiguous chunk that would have to index off it. However,
2170 space is still allocated for it (TOP_FOOT_SIZE) to enable
2171 separation or merging when space is extended.
2173 3. Chunks allocated via mmap, have both cinuse and pinuse bits
2174 cleared in their head fields. Because they are allocated
2175 one-by-one, each must carry its own prev_foot field, which is
2176 also used to hold the offset this chunk has within its mmapped
2177 region, which is needed to preserve alignment. Each mmapped
2178 chunk is trailed by the first two fields of a fake next-chunk
2179 for sake of usage checks.
2183 struct malloc_chunk {
2184 size_t prev_foot; /* Size of previous chunk (if free). */
2185 size_t head; /* Size and inuse bits. */
2186 struct malloc_chunk* fd; /* double links -- used only if free. */
2187 struct malloc_chunk* bk;
2190 typedef struct malloc_chunk mchunk;
2191 typedef struct malloc_chunk* mchunkptr;
2192 typedef struct malloc_chunk* sbinptr; /* The type of bins of chunks */
2193 typedef unsigned int bindex_t; /* Described below */
2194 typedef unsigned int binmap_t; /* Described below */
2195 typedef unsigned int flag_t; /* The type of various bit flag sets */
2197 /* ------------------- Chunks sizes and alignments ----------------------- */
2199 #define MCHUNK_SIZE (sizeof(mchunk))
2202 #define CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
2204 #define CHUNK_OVERHEAD (SIZE_T_SIZE)
2205 #endif /* FOOTERS */
2207 /* MMapped chunks need a second word of overhead ... */
2208 #define MMAP_CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
2209 /* ... and additional padding for fake next-chunk at foot */
2210 #define MMAP_FOOT_PAD (FOUR_SIZE_T_SIZES)
2212 /* The smallest size we can malloc is an aligned minimal chunk */
2213 #define MIN_CHUNK_SIZE\
2214 ((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
2216 /* conversion from malloc headers to user pointers, and back */
2217 #define chunk2mem(p) ((void*)((char*)(p) + TWO_SIZE_T_SIZES))
2218 #define mem2chunk(mem) ((mchunkptr)((char*)(mem) - TWO_SIZE_T_SIZES))
2219 /* chunk associated with aligned address A */
2220 #define align_as_chunk(A) (mchunkptr)((A) + align_offset(chunk2mem(A)))
2222 /* Bounds on request (not chunk) sizes. */
2223 #define MAX_REQUEST ((-MIN_CHUNK_SIZE) << 2)
2224 #define MIN_REQUEST (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - SIZE_T_ONE)
2226 /* pad request bytes into a usable size */
2227 #define pad_request(req) \
2228 (((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
2230 /* pad request, checking for minimum (but not maximum) */
2231 #define request2size(req) \
2232 (((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req))
2235 /* ------------------ Operations on head and foot fields ----------------- */
2238 The head field of a chunk is or'ed with PINUSE_BIT when previous
2239 adjacent chunk in use, and or'ed with CINUSE_BIT if this chunk is in
2240 use, unless mmapped, in which case both bits are cleared.
2242 FLAG4_BIT is not used by this malloc, but might be useful in extensions.
2245 #define PINUSE_BIT (SIZE_T_ONE)
2246 #define CINUSE_BIT (SIZE_T_TWO)
2247 #define FLAG4_BIT (SIZE_T_FOUR)
2248 #define INUSE_BITS (PINUSE_BIT|CINUSE_BIT)
2249 #define FLAG_BITS (PINUSE_BIT|CINUSE_BIT|FLAG4_BIT)
2251 /* Head value for fenceposts */
2252 #define FENCEPOST_HEAD (INUSE_BITS|SIZE_T_SIZE)
2254 /* extraction of fields from head words */
2255 #define cinuse(p) ((p)->head & CINUSE_BIT)
2256 #define pinuse(p) ((p)->head & PINUSE_BIT)
2257 #define flag4inuse(p) ((p)->head & FLAG4_BIT)
2258 #define is_inuse(p) (((p)->head & INUSE_BITS) != PINUSE_BIT)
2259 #define is_mmapped(p) (((p)->head & INUSE_BITS) == 0)
2261 #define chunksize(p) ((p)->head & ~(FLAG_BITS))
2263 #define clear_pinuse(p) ((p)->head &= ~PINUSE_BIT)
2264 #define set_flag4(p) ((p)->head |= FLAG4_BIT)
2265 #define clear_flag4(p) ((p)->head &= ~FLAG4_BIT)
2267 /* Treat space at ptr +/- offset as a chunk */
2268 #define chunk_plus_offset(p, s) ((mchunkptr)(((char*)(p)) + (s)))
2269 #define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s)))
2271 /* Ptr to next or previous physical malloc_chunk. */
2272 #define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~FLAG_BITS)))
2273 #define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) ))
2275 /* extract next chunk's pinuse bit */
2276 #define next_pinuse(p) ((next_chunk(p)->head) & PINUSE_BIT)
2278 /* Get/set size at footer */
2279 #define get_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_foot)
2280 #define set_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s))
2282 /* Set size, pinuse bit, and foot */
2283 #define set_size_and_pinuse_of_free_chunk(p, s)\
2284 ((p)->head = (s|PINUSE_BIT), set_foot(p, s))
2286 /* Set size, pinuse bit, foot, and clear next pinuse */
2287 #define set_free_with_pinuse(p, s, n)\
2288 (clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s))
2290 /* Get the internal overhead associated with chunk p */
2291 #define overhead_for(p)\
2292 (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD)
2294 /* Return true if malloced space is not necessarily cleared */
2296 #define calloc_must_clear(p) (!is_mmapped(p))
2297 #else /* MMAP_CLEARS */
2298 #define calloc_must_clear(p) (1)
2299 #endif /* MMAP_CLEARS */
2301 /* ---------------------- Overlaid data structures ----------------------- */
2304 When chunks are not in use, they are treated as nodes of either
2307 "Small" chunks are stored in circular doubly-linked lists, and look
2310 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2311 | Size of previous chunk |
2312 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2313 `head:' | Size of chunk, in bytes |P|
2314 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2315 | Forward pointer to next chunk in list |
2316 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2317 | Back pointer to previous chunk in list |
2318 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2319 | Unused space (may be 0 bytes long) .
2322 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2323 `foot:' | Size of chunk, in bytes |
2324 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2326 Larger chunks are kept in a form of bitwise digital trees (aka
2327 tries) keyed on chunksizes. Because malloc_tree_chunks are only for
2328 free chunks greater than 256 bytes, their size doesn't impose any
2329 constraints on user chunk sizes. Each node looks like:
2331 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2332 | Size of previous chunk |
2333 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2334 `head:' | Size of chunk, in bytes |P|
2335 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2336 | Forward pointer to next chunk of same size |
2337 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2338 | Back pointer to previous chunk of same size |
2339 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2340 | Pointer to left child (child[0]) |
2341 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2342 | Pointer to right child (child[1]) |
2343 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2344 | Pointer to parent |
2345 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2346 | bin index of this chunk |
2347 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2350 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2351 `foot:' | Size of chunk, in bytes |
2352 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2354 Each tree holding treenodes is a tree of unique chunk sizes. Chunks
2355 of the same size are arranged in a circularly-linked list, with only
2356 the oldest chunk (the next to be used, in our FIFO ordering)
2357 actually in the tree. (Tree members are distinguished by a non-null
2358 parent pointer.) If a chunk with the same size an an existing node
2359 is inserted, it is linked off the existing node using pointers that
2360 work in the same way as fd/bk pointers of small chunks.
2362 Each tree contains a power of 2 sized range of chunk sizes (the
2363 smallest is 0x100 <= x < 0x180), which is is divided in half at each
2364 tree level, with the chunks in the smaller half of the range (0x100
2365 <= x < 0x140 for the top nose) in the left subtree and the larger
2366 half (0x140 <= x < 0x180) in the right subtree. This is, of course,
2367 done by inspecting individual bits.
2369 Using these rules, each node's left subtree contains all smaller
2370 sizes than its right subtree. However, the node at the root of each
2371 subtree has no particular ordering relationship to either. (The
2372 dividing line between the subtree sizes is based on trie relation.)
2373 If we remove the last chunk of a given size from the interior of the
2374 tree, we need to replace it with a leaf node. The tree ordering
2375 rules permit a node to be replaced by any leaf below it.
2377 The smallest chunk in a tree (a common operation in a best-fit
2378 allocator) can be found by walking a path to the leftmost leaf in
2379 the tree. Unlike a usual binary tree, where we follow left child
2380 pointers until we reach a null, here we follow the right child
2381 pointer any time the left one is null, until we reach a leaf with
2382 both child pointers null. The smallest chunk in the tree will be
2383 somewhere along that path.
2385 The worst case number of steps to add, find, or remove a node is
2386 bounded by the number of bits differentiating chunks within
2387 bins. Under current bin calculations, this ranges from 6 up to 21
2388 (for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case
2389 is of course much better.
2392 struct malloc_tree_chunk {
2393 /* The first four fields must be compatible with malloc_chunk */
2396 struct malloc_tree_chunk* fd;
2397 struct malloc_tree_chunk* bk;
2399 struct malloc_tree_chunk* child[2];
2400 struct malloc_tree_chunk* parent;
2404 typedef struct malloc_tree_chunk tchunk;
2405 typedef struct malloc_tree_chunk* tchunkptr;
2406 typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */
2408 /* A little helper macro for trees */
2409 #define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1])
2411 /* ----------------------------- Segments -------------------------------- */
2414 Each malloc space may include non-contiguous segments, held in a
2415 list headed by an embedded malloc_segment record representing the
2416 top-most space. Segments also include flags holding properties of
2417 the space. Large chunks that are directly allocated by mmap are not
2418 included in this list. They are instead independently created and
2419 destroyed without otherwise keeping track of them.
2421 Segment management mainly comes into play for spaces allocated by
2422 MMAP. Any call to MMAP might or might not return memory that is
2423 adjacent to an existing segment. MORECORE normally contiguously
2424 extends the current space, so this space is almost always adjacent,
2425 which is simpler and faster to deal with. (This is why MORECORE is
2426 used preferentially to MMAP when both are available -- see
2427 sys_alloc.) When allocating using MMAP, we don't use any of the
2428 hinting mechanisms (inconsistently) supported in various
2429 implementations of unix mmap, or distinguish reserving from
2430 committing memory. Instead, we just ask for space, and exploit
2431 contiguity when we get it. It is probably possible to do
2432 better than this on some systems, but no general scheme seems
2433 to be significantly better.
2435 Management entails a simpler variant of the consolidation scheme
2436 used for chunks to reduce fragmentation -- new adjacent memory is
2437 normally prepended or appended to an existing segment. However,
2438 there are limitations compared to chunk consolidation that mostly
2439 reflect the fact that segment processing is relatively infrequent
2440 (occurring only when getting memory from system) and that we
2441 don't expect to have huge numbers of segments:
2443 * Segments are not indexed, so traversal requires linear scans. (It
2444 would be possible to index these, but is not worth the extra
2445 overhead and complexity for most programs on most platforms.)
2446 * New segments are only appended to old ones when holding top-most
2447 memory; if they cannot be prepended to others, they are held in
2450 Except for the top-most segment of an mstate, each segment record
2451 is kept at the tail of its segment. Segments are added by pushing
2452 segment records onto the list headed by &mstate.seg for the
2455 Segment flags control allocation/merge/deallocation policies:
2456 * If EXTERN_BIT set, then we did not allocate this segment,
2457 and so should not try to deallocate or merge with others.
2458 (This currently holds only for the initial segment passed
2459 into create_mspace_with_base.)
2460 * If USE_MMAP_BIT set, the segment may be merged with
2461 other surrounding mmapped segments and trimmed/de-allocated
2463 * If neither bit is set, then the segment was obtained using
2464 MORECORE so can be merged with surrounding MORECORE'd segments
2465 and deallocated/trimmed using MORECORE with negative arguments.
2468 struct malloc_segment {
2469 char* base; /* base address */
2470 size_t size; /* allocated size */
2471 struct malloc_segment* next; /* ptr to next segment */
2472 flag_t sflags; /* mmap and extern flag */
2475 #define is_mmapped_segment(S) ((S)->sflags & USE_MMAP_BIT)
2476 #define is_extern_segment(S) ((S)->sflags & EXTERN_BIT)
2478 typedef struct malloc_segment msegment;
2479 typedef struct malloc_segment* msegmentptr;
2481 /* ---------------------------- malloc_state ----------------------------- */
2484 A malloc_state holds all of the bookkeeping for a space.
2485 The main fields are:
2488 The topmost chunk of the currently active segment. Its size is
2489 cached in topsize. The actual size of topmost space is
2490 topsize+TOP_FOOT_SIZE, which includes space reserved for adding
2491 fenceposts and segment records if necessary when getting more
2492 space from the system. The size at which to autotrim top is
2493 cached from mparams in trim_check, except that it is disabled if
2496 Designated victim (dv)
2497 This is the preferred chunk for servicing small requests that
2498 don't have exact fits. It is normally the chunk split off most
2499 recently to service another small request. Its size is cached in
2500 dvsize. The link fields of this chunk are not maintained since it
2501 is not kept in a bin.
2504 An array of bin headers for free chunks. These bins hold chunks
2505 with sizes less than MIN_LARGE_SIZE bytes. Each bin contains
2506 chunks of all the same size, spaced 8 bytes apart. To simplify
2507 use in double-linked lists, each bin header acts as a malloc_chunk
2508 pointing to the real first node, if it exists (else pointing to
2509 itself). This avoids special-casing for headers. But to avoid
2510 waste, we allocate only the fd/bk pointers of bins, and then use
2511 repositioning tricks to treat these as the fields of a chunk.
2514 Treebins are pointers to the roots of trees holding a range of
2515 sizes. There are 2 equally spaced treebins for each power of two
2516 from TREE_SHIFT to TREE_SHIFT+16. The last bin holds anything
2520 There is one bit map for small bins ("smallmap") and one for
2521 treebins ("treemap). Each bin sets its bit when non-empty, and
2522 clears the bit when empty. Bit operations are then used to avoid
2523 bin-by-bin searching -- nearly all "search" is done without ever
2524 looking at bins that won't be selected. The bit maps
2525 conservatively use 32 bits per map word, even if on 64bit system.
2526 For a good description of some of the bit-based techniques used
2527 here, see Henry S. Warren Jr's book "Hacker's Delight" (and
2528 supplement at http://hackersdelight.org/). Many of these are
2529 intended to reduce the branchiness of paths through malloc etc, as
2530 well as to reduce the number of memory locations read or written.
2533 A list of segments headed by an embedded malloc_segment record
2534 representing the initial space.
2536 Address check support
2537 The least_addr field is the least address ever obtained from
2538 MORECORE or MMAP. Attempted frees and reallocs of any address less
2539 than this are trapped (unless INSECURE is defined).
2542 A cross-check field that should always hold same value as mparams.magic.
2544 Max allowed footprint
2545 The maximum allowed bytes to allocate from system (zero means no limit)
2548 Bits recording whether to use MMAP, locks, or contiguous MORECORE
2551 Each space keeps track of current and maximum system memory
2552 obtained via MORECORE or MMAP.
2555 Fields holding the amount of unused topmost memory that should trigger
2556 trimming, and a counter to force periodic scanning to release unused
2557 non-topmost segments.
2560 If USE_LOCKS is defined, the "mutex" lock is acquired and released
2561 around every public call using this mspace.
2564 A void* pointer and a size_t field that can be used to help implement
2565 extensions to this malloc.
2568 /* Bin types, widths and sizes */
2569 #define NSMALLBINS (32U)
2570 #define NTREEBINS (32U)
2571 #define SMALLBIN_SHIFT (3U)
2572 #define SMALLBIN_WIDTH (SIZE_T_ONE << SMALLBIN_SHIFT)
2573 #define TREEBIN_SHIFT (8U)
2574 #define MIN_LARGE_SIZE (SIZE_T_ONE << TREEBIN_SHIFT)
2575 #define MAX_SMALL_SIZE (MIN_LARGE_SIZE - SIZE_T_ONE)
2576 #define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD)
2578 struct malloc_state {
2587 size_t release_checks;
2589 mchunkptr smallbins[(NSMALLBINS+1)*2];
2590 tbinptr treebins[NTREEBINS];
2592 size_t max_footprint;
2593 size_t footprint_limit; /* zero means no limit */
2596 MLOCK_T mutex; /* locate lock among fields that rarely change */
2597 #endif /* USE_LOCKS */
2599 void* extp; /* Unused but available for extensions */
2603 typedef struct malloc_state* mstate;
2605 /* ------------- Global malloc_state and malloc_params ------------------- */
2608 malloc_params holds global properties, including those that can be
2609 dynamically set using mallopt. There is a single instance, mparams,
2610 initialized in init_mparams. Note that the non-zeroness of "magic"
2611 also serves as an initialization flag.
2614 struct malloc_params {
2618 size_t mmap_threshold;
2619 size_t trim_threshold;
2620 flag_t default_mflags;
2623 static struct malloc_params mparams;
2625 /* Ensure mparams initialized */
2626 #define ensure_initialization() (void)(mparams.magic != 0 || init_mparams())
2630 /* The global malloc_state used for all non-"mspace" calls */
2631 static struct malloc_state _gm_;
2633 #define is_global(M) ((M) == &_gm_)
2635 #endif /* !ONLY_MSPACES */
2637 #define is_initialized(M) ((M)->top != 0)
2639 /* -------------------------- system alloc setup ------------------------- */
2641 /* Operations on mflags */
2643 #define use_lock(M) ((M)->mflags & USE_LOCK_BIT)
2644 #define enable_lock(M) ((M)->mflags |= USE_LOCK_BIT)
2646 #define disable_lock(M) ((M)->mflags &= ~USE_LOCK_BIT)
2648 #define disable_lock(M)
2651 #define use_mmap(M) ((M)->mflags & USE_MMAP_BIT)
2652 #define enable_mmap(M) ((M)->mflags |= USE_MMAP_BIT)
2654 #define disable_mmap(M) ((M)->mflags &= ~USE_MMAP_BIT)
2656 #define disable_mmap(M)
2659 #define use_noncontiguous(M) ((M)->mflags & USE_NONCONTIGUOUS_BIT)
2660 #define disable_contiguous(M) ((M)->mflags |= USE_NONCONTIGUOUS_BIT)
2662 #define set_lock(M,L)\
2663 ((M)->mflags = (L)?\
2664 ((M)->mflags | USE_LOCK_BIT) :\
2665 ((M)->mflags & ~USE_LOCK_BIT))
2667 /* page-align a size */
2668 #define page_align(S)\
2669 (((S) + (mparams.page_size - SIZE_T_ONE)) & ~(mparams.page_size - SIZE_T_ONE))
2671 /* granularity-align a size */
2672 #define granularity_align(S)\
2673 (((S) + (mparams.granularity - SIZE_T_ONE))\
2674 & ~(mparams.granularity - SIZE_T_ONE))
2677 /* For mmap, use granularity alignment on windows, else page-align */
2679 #define mmap_align(S) granularity_align(S)
2681 #define mmap_align(S) page_align(S)
2684 /* For sys_alloc, enough padding to ensure can malloc request on success */
2685 #define SYS_ALLOC_PADDING (TOP_FOOT_SIZE + MALLOC_ALIGNMENT)
2687 #define is_page_aligned(S)\
2688 (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0)
2689 #define is_granularity_aligned(S)\
2690 (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0)
2692 /* True if segment S holds address A */
2693 #define segment_holds(S, A)\
2694 ((char*)(A) >= S->base && (char*)(A) < S->base + S->size)
2696 /* Return segment holding given address */
2697 static msegmentptr segment_holding(mstate m, char* addr) {
2698 msegmentptr sp = &m->seg;
2700 if (addr >= sp->base && addr < sp->base + sp->size)
2702 if ((sp = sp->next) == 0)
2707 /* Return true if segment contains a segment link */
2708 static int has_segment_link(mstate m, msegmentptr ss) {
2709 msegmentptr sp = &m->seg;
2711 if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size)
2713 if ((sp = sp->next) == 0)
2718 #ifndef MORECORE_CANNOT_TRIM
2719 #define should_trim(M,s) ((s) > (M)->trim_check)
2720 #else /* MORECORE_CANNOT_TRIM */
2721 #define should_trim(M,s) (0)
2722 #endif /* MORECORE_CANNOT_TRIM */
2725 TOP_FOOT_SIZE is padding at the end of a segment, including space
2726 that may be needed to place segment records and fenceposts when new
2727 noncontiguous segments are added.
2729 #define TOP_FOOT_SIZE\
2730 (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE)
2733 /* ------------------------------- Hooks -------------------------------- */
2736 PREACTION should be defined to return 0 on success, and nonzero on
2737 failure. If you are not using locking, you can redefine these to do
2742 #define PREACTION(M) ((use_lock(M))? ACQUIRE_LOCK(&(M)->mutex) : 0)
2743 #define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); }
2744 #else /* USE_LOCKS */
2747 #define PREACTION(M) (0)
2748 #endif /* PREACTION */
2751 #define POSTACTION(M)
2752 #endif /* POSTACTION */
2754 #endif /* USE_LOCKS */
2757 CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses.
2758 USAGE_ERROR_ACTION is triggered on detected bad frees and
2759 reallocs. The argument p is an address that might have triggered the
2760 fault. It is ignored by the two predefined actions, but might be
2761 useful in custom actions that try to help diagnose errors.
2764 #if PROCEED_ON_ERROR
2766 /* A count of the number of corruption errors causing resets */
2767 int malloc_corruption_error_count;
2769 /* default corruption action */
2770 static void reset_on_error(mstate m);
2772 #define CORRUPTION_ERROR_ACTION(m) reset_on_error(m)
2773 #define USAGE_ERROR_ACTION(m, p)
2775 #else /* PROCEED_ON_ERROR */
2777 #ifndef CORRUPTION_ERROR_ACTION
2778 #define CORRUPTION_ERROR_ACTION(m) ABORT
2779 #endif /* CORRUPTION_ERROR_ACTION */
2781 #ifndef USAGE_ERROR_ACTION
2782 #define USAGE_ERROR_ACTION(m,p) ABORT
2783 #endif /* USAGE_ERROR_ACTION */
2785 #endif /* PROCEED_ON_ERROR */
2788 /* -------------------------- Debugging setup ---------------------------- */
2792 #define check_free_chunk(M,P)
2793 #define check_inuse_chunk(M,P)
2794 #define check_malloced_chunk(M,P,N)
2795 #define check_mmapped_chunk(M,P)
2796 #define check_malloc_state(M)
2797 #define check_top_chunk(M,P)
2800 #define check_free_chunk(M,P) do_check_free_chunk(M,P)
2801 #define check_inuse_chunk(M,P) do_check_inuse_chunk(M,P)
2802 #define check_top_chunk(M,P) do_check_top_chunk(M,P)
2803 #define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N)
2804 #define check_mmapped_chunk(M,P) do_check_mmapped_chunk(M,P)
2805 #define check_malloc_state(M) do_check_malloc_state(M)
2807 static void do_check_any_chunk(mstate m, mchunkptr p);
2808 static void do_check_top_chunk(mstate m, mchunkptr p);
2809 static void do_check_mmapped_chunk(mstate m, mchunkptr p);
2810 static void do_check_inuse_chunk(mstate m, mchunkptr p);
2811 static void do_check_free_chunk(mstate m, mchunkptr p);
2812 static void do_check_malloced_chunk(mstate m, void* mem, size_t s);
2813 static void do_check_tree(mstate m, tchunkptr t);
2814 static void do_check_treebin(mstate m, bindex_t i);
2815 static void do_check_smallbin(mstate m, bindex_t i);
2816 static void do_check_malloc_state(mstate m);
2817 static int bin_find(mstate m, mchunkptr x);
2818 static size_t traverse_and_check(mstate m);
2821 /* ---------------------------- Indexing Bins ---------------------------- */
2823 #define is_small(s) (((s) >> SMALLBIN_SHIFT) < NSMALLBINS)
2824 #define small_index(s) (bindex_t)((s) >> SMALLBIN_SHIFT)
2825 #define small_index2size(i) ((i) << SMALLBIN_SHIFT)
2826 #define MIN_SMALL_INDEX (small_index(MIN_CHUNK_SIZE))
2828 /* addressing by index. See above about smallbin repositioning */
2829 #define smallbin_at(M, i) ((sbinptr)((char*)&((M)->smallbins[(i)<<1])))
2830 #define treebin_at(M,i) (&((M)->treebins[i]))
2832 /* assign tree index for size S to variable I. Use x86 asm if possible */
2833 #if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
2834 #define compute_tree_index(S, I)\
2836 unsigned int X = S >> TREEBIN_SHIFT;\
2839 else if (X > 0xFFFF)\
2842 unsigned int K = (unsigned) sizeof(X)*__CHAR_BIT__ - 1 - (unsigned) __builtin_clz(X); \
2843 I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
2847 #elif defined (__INTEL_COMPILER)
2848 #define compute_tree_index(S, I)\
2850 size_t X = S >> TREEBIN_SHIFT;\
2853 else if (X > 0xFFFF)\
2856 unsigned int K = _bit_scan_reverse (X); \
2857 I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
2861 #elif defined(_MSC_VER) && _MSC_VER>=1300
2862 #define compute_tree_index(S, I)\
2864 size_t X = S >> TREEBIN_SHIFT;\
2867 else if (X > 0xFFFF)\
2871 _BitScanReverse((DWORD *) &K, (DWORD) X);\
2872 I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
2877 #define compute_tree_index(S, I)\
2879 size_t X = S >> TREEBIN_SHIFT;\
2882 else if (X > 0xFFFF)\
2885 unsigned int Y = (unsigned int)X;\
2886 unsigned int N = ((Y - 0x100) >> 16) & 8;\
2887 unsigned int K = (((Y <<= N) - 0x1000) >> 16) & 4;\
2889 N += K = (((Y <<= K) - 0x4000) >> 16) & 2;\
2890 K = 14 - N + ((Y <<= K) >> 15);\
2891 I = (K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1));\
2896 /* Bit representing maximum resolved size in a treebin at i */
2897 #define bit_for_tree_index(i) \
2898 (i == NTREEBINS-1)? (SIZE_T_BITSIZE-1) : (((i) >> 1) + TREEBIN_SHIFT - 2)
2900 /* Shift placing maximum resolved bit in a treebin at i as sign bit */
2901 #define leftshift_for_tree_index(i) \
2902 ((i == NTREEBINS-1)? 0 : \
2903 ((SIZE_T_BITSIZE-SIZE_T_ONE) - (((i) >> 1) + TREEBIN_SHIFT - 2)))
2905 /* The size of the smallest chunk held in bin with index i */
2906 #define minsize_for_tree_index(i) \
2907 ((SIZE_T_ONE << (((i) >> 1) + TREEBIN_SHIFT)) | \
2908 (((size_t)((i) & SIZE_T_ONE)) << (((i) >> 1) + TREEBIN_SHIFT - 1)))
2911 /* ------------------------ Operations on bin maps ----------------------- */
2913 /* bit corresponding to given index */
2914 #define idx2bit(i) ((binmap_t)(1) << (i))
2916 /* Mark/Clear bits with given index */
2917 #define mark_smallmap(M,i) ((M)->smallmap |= idx2bit(i))
2918 #define clear_smallmap(M,i) ((M)->smallmap &= ~idx2bit(i))
2919 #define smallmap_is_marked(M,i) ((M)->smallmap & idx2bit(i))
2921 #define mark_treemap(M,i) ((M)->treemap |= idx2bit(i))
2922 #define clear_treemap(M,i) ((M)->treemap &= ~idx2bit(i))
2923 #define treemap_is_marked(M,i) ((M)->treemap & idx2bit(i))
2925 /* isolate the least set bit of a bitmap */
2926 #define least_bit(x) ((x) & -(x))
2928 /* mask with all bits to left of least bit of x on */
2929 #define left_bits(x) ((x<<1) | -(x<<1))
2931 /* mask with all bits to left of or equal to least bit of x on */
2932 #define same_or_left_bits(x) ((x) | -(x))
2934 /* index corresponding to given bit. Use x86 asm if possible */
2936 #if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
2937 #define compute_bit2idx(X, I)\
2940 J = __builtin_ctz(X); \
2944 #elif defined (__INTEL_COMPILER)
2945 #define compute_bit2idx(X, I)\
2948 J = _bit_scan_forward (X); \
2952 #elif defined(_MSC_VER) && _MSC_VER>=1300
2953 #define compute_bit2idx(X, I)\
2956 _BitScanForward((DWORD *) &J, X);\
2960 #elif USE_BUILTIN_FFS
2961 #define compute_bit2idx(X, I) I = ffs(X)-1
2964 #define compute_bit2idx(X, I)\
2966 unsigned int Y = X - 1;\
2967 unsigned int K = Y >> (16-4) & 16;\
2968 unsigned int N = K; Y >>= K;\
2969 N += K = Y >> (8-3) & 8; Y >>= K;\
2970 N += K = Y >> (4-2) & 4; Y >>= K;\
2971 N += K = Y >> (2-1) & 2; Y >>= K;\
2972 N += K = Y >> (1-0) & 1; Y >>= K;\
2973 I = (bindex_t)(N + Y);\
2978 /* ----------------------- Runtime Check Support ------------------------- */
2981 For security, the main invariant is that malloc/free/etc never
2982 writes to a static address other than malloc_state, unless static
2983 malloc_state itself has been corrupted, which cannot occur via
2984 malloc (because of these checks). In essence this means that we
2985 believe all pointers, sizes, maps etc held in malloc_state, but
2986 check all of those linked or offsetted from other embedded data
2987 structures. These checks are interspersed with main code in a way
2988 that tends to minimize their run-time cost.
2990 When FOOTERS is defined, in addition to range checking, we also
2991 verify footer fields of inuse chunks, which can be used guarantee
2992 that the mstate controlling malloc/free is intact. This is a
2993 streamlined version of the approach described by William Robertson
2994 et al in "Run-time Detection of Heap-based Overflows" LISA'03
2995 http://www.usenix.org/events/lisa03/tech/robertson.html The footer
2996 of an inuse chunk holds the xor of its mstate and a random seed,
2997 that is checked upon calls to free() and realloc(). This is
2998 (probabalistically) unguessable from outside the program, but can be
2999 computed by any code successfully malloc'ing any chunk, so does not
3000 itself provide protection against code that has already broken
3001 security through some other means. Unlike Robertson et al, we
3002 always dynamically check addresses of all offset chunks (previous,
3003 next, etc). This turns out to be cheaper than relying on hashes.
3007 /* Check if address a is at least as high as any from MORECORE or MMAP */
3008 #define ok_address(M, a) ((char*)(a) >= (M)->least_addr)
3009 /* Check if address of next chunk n is higher than base chunk p */
3010 #define ok_next(p, n) ((char*)(p) < (char*)(n))
3011 /* Check if p has inuse status */
3012 #define ok_inuse(p) is_inuse(p)
3013 /* Check if p has its pinuse bit on */
3014 #define ok_pinuse(p) pinuse(p)
3016 #else /* !INSECURE */
3017 #define ok_address(M, a) (1)
3018 #define ok_next(b, n) (1)
3019 #define ok_inuse(p) (1)
3020 #define ok_pinuse(p) (1)
3021 #endif /* !INSECURE */
3023 #if (FOOTERS && !INSECURE)
3024 /* Check if (alleged) mstate m has expected magic field */
3025 #define ok_magic(M) ((M)->magic == mparams.magic)
3026 #else /* (FOOTERS && !INSECURE) */
3027 #define ok_magic(M) (1)
3028 #endif /* (FOOTERS && !INSECURE) */
3030 /* In gcc, use __builtin_expect to minimize impact of checks */
3032 #if defined(__GNUC__) && __GNUC__ >= 3
3033 #define RTCHECK(e) __builtin_expect(e, 1)
3035 #define RTCHECK(e) (e)
3037 #else /* !INSECURE */
3038 #define RTCHECK(e) (1)
3039 #endif /* !INSECURE */
3041 /* macros to set up inuse chunks with or without footers */
3045 #define mark_inuse_foot(M,p,s)
3047 /* Macros for setting head/foot of non-mmapped chunks */
3049 /* Set cinuse bit and pinuse bit of next chunk */
3050 #define set_inuse(M,p,s)\
3051 ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
3052 ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
3054 /* Set cinuse and pinuse of this chunk and pinuse of next chunk */
3055 #define set_inuse_and_pinuse(M,p,s)\
3056 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
3057 ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
3059 /* Set size, cinuse and pinuse bit of this chunk */
3060 #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
3061 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT))
3065 /* Set foot of inuse chunk to be xor of mstate and seed */
3066 #define mark_inuse_foot(M,p,s)\
3067 (((mchunkptr)((char*)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic))
3069 #define get_mstate_for(p)\
3070 ((mstate)(((mchunkptr)((char*)(p) +\
3071 (chunksize(p))))->prev_foot ^ mparams.magic))
3073 #define set_inuse(M,p,s)\
3074 ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
3075 (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT), \
3076 mark_inuse_foot(M,p,s))
3078 #define set_inuse_and_pinuse(M,p,s)\
3079 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
3080 (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT),\
3081 mark_inuse_foot(M,p,s))
3083 #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
3084 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
3085 mark_inuse_foot(M, p, s))
3087 #endif /* !FOOTERS */
3089 /* ---------------------------- setting mparams -------------------------- */
3091 /* Initialize mparams */
3092 static int init_mparams(void) {
3093 #ifdef NEED_GLOBAL_LOCK_INIT
3094 call_once(&malloc_global_mutex_init_once, init_malloc_global_mutex);
3097 ACQUIRE_MALLOC_GLOBAL_LOCK();
3098 if (mparams.magic == 0) {
3104 psize = malloc_getpagesize;
3105 gsize = ((DEFAULT_GRANULARITY != 0)? DEFAULT_GRANULARITY : psize);
3108 SYSTEM_INFO system_info;
3109 GetSystemInfo(&system_info);
3110 psize = system_info.dwPageSize;
3111 gsize = ((DEFAULT_GRANULARITY != 0)?
3112 DEFAULT_GRANULARITY : system_info.dwAllocationGranularity);
3116 /* Sanity-check configuration:
3117 size_t must be unsigned and as wide as pointer type.
3118 ints must be at least 4 bytes.
3119 alignment must be at least 8.
3120 Alignment, min chunk size, and page size must all be powers of 2.
3122 if ((sizeof(size_t) != sizeof(char*)) ||
3123 (MAX_SIZE_T < MIN_CHUNK_SIZE) ||
3124 (sizeof(int) < 4) ||
3125 (MALLOC_ALIGNMENT < (size_t)8U) ||
3126 ((MALLOC_ALIGNMENT & (MALLOC_ALIGNMENT-SIZE_T_ONE)) != 0) ||
3127 ((MCHUNK_SIZE & (MCHUNK_SIZE-SIZE_T_ONE)) != 0) ||
3128 ((gsize & (gsize-SIZE_T_ONE)) != 0) ||
3129 ((psize & (psize-SIZE_T_ONE)) != 0))
3132 mparams.granularity = gsize;
3133 mparams.page_size = psize;
3134 mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD;
3135 mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD;
3136 #if MORECORE_CONTIGUOUS
3137 mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT;
3138 #else /* MORECORE_CONTIGUOUS */
3139 mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT;
3140 #endif /* MORECORE_CONTIGUOUS */
3143 /* Set up lock for main malloc area */
3144 gm->mflags = mparams.default_mflags;
3145 (void)INITIAL_LOCK(&gm->mutex);
3151 unsigned char buf[sizeof(size_t)];
3152 /* Try to use /dev/urandom, else fall back on using time */
3153 if ((fd = open("/dev/urandom", O_RDONLY)) >= 0 &&
3154 read(fd, buf, sizeof(buf)) == sizeof(buf)) {
3155 magic = *((size_t *) buf);
3159 #endif /* USE_DEV_RANDOM */
3161 magic = (size_t)(GetTickCount() ^ (size_t)0x55555555U);
3162 #elif defined(LACKS_TIME_H)
3163 magic = (size_t)&magic ^ (size_t)0x55555555U;
3165 magic = (size_t)(time(0) ^ (size_t)0x55555555U);
3167 magic |= (size_t)8U; /* ensure nonzero */
3168 magic &= ~(size_t)7U; /* improve chances of fault for bad values */
3169 /* Until memory modes commonly available, use volatile-write */
3170 (*(volatile size_t *)(&(mparams.magic))) = magic;
3174 RELEASE_MALLOC_GLOBAL_LOCK();
3178 /* support for mallopt */
3179 static int change_mparam(int param_number, int value) {
3181 ensure_initialization();
3182 val = (value == -1)? MAX_SIZE_T : (size_t)value;
3183 switch(param_number) {
3184 case M_TRIM_THRESHOLD:
3185 mparams.trim_threshold = val;
3188 if (val >= mparams.page_size && ((val & (val-1)) == 0)) {
3189 mparams.granularity = val;
3194 case M_MMAP_THRESHOLD:
3195 mparams.mmap_threshold = val;
3203 /* ------------------------- Debugging Support --------------------------- */
3205 /* Check properties of any chunk, whether free, inuse, mmapped etc */
3206 static void do_check_any_chunk(mstate m, mchunkptr p) {
3207 assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
3208 assert(ok_address(m, p));
3211 /* Check properties of top chunk */
3212 static void do_check_top_chunk(mstate m, mchunkptr p) {
3213 msegmentptr sp = segment_holding(m, (char*)p);
3214 size_t sz = p->head & ~INUSE_BITS; /* third-lowest bit can be set! */
3216 assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
3217 assert(ok_address(m, p));
3218 assert(sz == m->topsize);
3220 assert(sz == ((sp->base + sp->size) - (char*)p) - TOP_FOOT_SIZE);
3222 assert(!pinuse(chunk_plus_offset(p, sz)));
3225 /* Check properties of (inuse) mmapped chunks */
3226 static void do_check_mmapped_chunk(mstate m, mchunkptr p) {
3227 size_t sz = chunksize(p);
3228 size_t len = (sz + (p->prev_foot) + MMAP_FOOT_PAD);
3229 assert(is_mmapped(p));
3230 assert(use_mmap(m));
3231 assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
3232 assert(ok_address(m, p));
3233 assert(!is_small(sz));
3234 assert((len & (mparams.page_size-SIZE_T_ONE)) == 0);
3235 assert(chunk_plus_offset(p, sz)->head == FENCEPOST_HEAD);
3236 assert(chunk_plus_offset(p, sz+SIZE_T_SIZE)->head == 0);
3239 /* Check properties of inuse chunks */
3240 static void do_check_inuse_chunk(mstate m, mchunkptr p) {
3241 do_check_any_chunk(m, p);
3242 assert(is_inuse(p));
3243 assert(next_pinuse(p));
3244 /* If not pinuse and not mmapped, previous chunk has OK offset */
3245 assert(is_mmapped(p) || pinuse(p) || next_chunk(prev_chunk(p)) == p);
3247 do_check_mmapped_chunk(m, p);
3250 /* Check properties of free chunks */
3251 static void do_check_free_chunk(mstate m, mchunkptr p) {
3252 size_t sz = chunksize(p);
3253 mchunkptr next = chunk_plus_offset(p, sz);
3254 do_check_any_chunk(m, p);
3255 assert(!is_inuse(p));
3256 assert(!next_pinuse(p));
3257 assert (!is_mmapped(p));
3258 if (p != m->dv && p != m->top) {
3259 if (sz >= MIN_CHUNK_SIZE) {
3260 assert((sz & CHUNK_ALIGN_MASK) == 0);
3261 assert(is_aligned(chunk2mem(p)));
3262 assert(next->prev_foot == sz);
3264 assert (next == m->top || is_inuse(next));
3265 assert(p->fd->bk == p);
3266 assert(p->bk->fd == p);
3268 else /* markers are always of size SIZE_T_SIZE */
3269 assert(sz == SIZE_T_SIZE);
3273 /* Check properties of malloced chunks at the point they are malloced */
3274 static void do_check_malloced_chunk(mstate m, void* mem, size_t s) {
3276 mchunkptr p = mem2chunk(mem);
3277 size_t sz = p->head & ~INUSE_BITS;
3278 do_check_inuse_chunk(m, p);
3279 assert((sz & CHUNK_ALIGN_MASK) == 0);
3280 assert(sz >= MIN_CHUNK_SIZE);
3282 /* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */
3283 assert(is_mmapped(p) || sz < (s + MIN_CHUNK_SIZE));
3287 /* Check a tree and its subtrees. */
3288 static void do_check_tree(mstate m, tchunkptr t) {
3291 bindex_t tindex = t->index;
3292 size_t tsize = chunksize(t);
3294 compute_tree_index(tsize, idx);
3295 assert(tindex == idx);
3296 assert(tsize >= MIN_LARGE_SIZE);
3297 assert(tsize >= minsize_for_tree_index(idx));
3298 assert((idx == NTREEBINS-1) || (tsize < minsize_for_tree_index((idx+1))));
3300 do { /* traverse through chain of same-sized nodes */
3301 do_check_any_chunk(m, ((mchunkptr)u));
3302 assert(u->index == tindex);
3303 assert(chunksize(u) == tsize);
3304 assert(!is_inuse(u));
3305 assert(!next_pinuse(u));
3306 assert(u->fd->bk == u);
3307 assert(u->bk->fd == u);
3308 if (u->parent == 0) {
3309 assert(u->child[0] == 0);
3310 assert(u->child[1] == 0);
3313 assert(head == 0); /* only one node on chain has parent */
3315 assert(u->parent != u);
3316 assert (u->parent->child[0] == u ||
3317 u->parent->child[1] == u ||
3318 *((tbinptr*)(u->parent)) == u);
3319 if (u->child[0] != 0) {
3320 assert(u->child[0]->parent == u);
3321 assert(u->child[0] != u);
3322 do_check_tree(m, u->child[0]);
3324 if (u->child[1] != 0) {
3325 assert(u->child[1]->parent == u);
3326 assert(u->child[1] != u);
3327 do_check_tree(m, u->child[1]);
3329 if (u->child[0] != 0 && u->child[1] != 0) {
3330 assert(chunksize(u->child[0]) < chunksize(u->child[1]));
3338 /* Check all the chunks in a treebin. */
3339 static void do_check_treebin(mstate m, bindex_t i) {
3340 tbinptr* tb = treebin_at(m, i);
3342 int empty = (m->treemap & (1U << i)) == 0;
3346 do_check_tree(m, t);
3349 /* Check all the chunks in a smallbin. */
3350 static void do_check_smallbin(mstate m, bindex_t i) {
3351 sbinptr b = smallbin_at(m, i);
3352 mchunkptr p = b->bk;
3353 unsigned int empty = (m->smallmap & (1U << i)) == 0;
3357 for (; p != b; p = p->bk) {
3358 size_t size = chunksize(p);
3360 /* each chunk claims to be free */
3361 do_check_free_chunk(m, p);
3362 /* chunk belongs in bin */
3363 assert(small_index(size) == i);
3364 assert(p->bk == b || chunksize(p->bk) == chunksize(p));
3365 /* chunk is followed by an inuse chunk */
3367 if (q->head != FENCEPOST_HEAD)
3368 do_check_inuse_chunk(m, q);
3373 /* Find x in a bin. Used in other check functions. */
3374 static int bin_find(mstate m, mchunkptr x) {
3375 size_t size = chunksize(x);
3376 if (is_small(size)) {
3377 bindex_t sidx = small_index(size);
3378 sbinptr b = smallbin_at(m, sidx);
3379 if (smallmap_is_marked(m, sidx)) {
3384 } while ((p = p->fd) != b);
3389 compute_tree_index(size, tidx);
3390 if (treemap_is_marked(m, tidx)) {
3391 tchunkptr t = *treebin_at(m, tidx);
3392 size_t sizebits = size << leftshift_for_tree_index(tidx);
3393 while (t != 0 && chunksize(t) != size) {
3394 t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
3400 if (u == (tchunkptr)x)
3402 } while ((u = u->fd) != t);
3409 /* Traverse each chunk and check it; return total */
3410 static size_t traverse_and_check(mstate m) {
3412 if (is_initialized(m)) {
3413 msegmentptr s = &m->seg;
3414 sum += m->topsize + TOP_FOOT_SIZE;
3416 mchunkptr q = align_as_chunk(s->base);
3417 mchunkptr lastq = 0;
3419 while (segment_holds(s, q) &&
3420 q != m->top && q->head != FENCEPOST_HEAD) {
3421 sum += chunksize(q);
3423 assert(!bin_find(m, q));
3424 do_check_inuse_chunk(m, q);
3427 assert(q == m->dv || bin_find(m, q));
3428 assert(lastq == 0 || is_inuse(lastq)); /* Not 2 consecutive free */
3429 do_check_free_chunk(m, q);
3441 /* Check all properties of malloc_state. */
3442 static void do_check_malloc_state(mstate m) {
3446 for (i = 0; i < NSMALLBINS; ++i)
3447 do_check_smallbin(m, i);
3448 for (i = 0; i < NTREEBINS; ++i)
3449 do_check_treebin(m, i);
3451 if (m->dvsize != 0) { /* check dv chunk */
3452 do_check_any_chunk(m, m->dv);
3453 assert(m->dvsize == chunksize(m->dv));
3454 assert(m->dvsize >= MIN_CHUNK_SIZE);
3455 assert(bin_find(m, m->dv) == 0);
3458 if (m->top != 0) { /* check top chunk */
3459 do_check_top_chunk(m, m->top);
3460 /*assert(m->topsize == chunksize(m->top)); redundant */
3461 assert(m->topsize > 0);
3462 assert(bin_find(m, m->top) == 0);
3465 total = traverse_and_check(m);
3466 assert(total <= m->footprint);
3467 assert(m->footprint <= m->max_footprint);
3471 /* ----------------------------- statistics ------------------------------ */
3474 static struct mallinfo internal_mallinfo(mstate m) {
3475 struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
3476 ensure_initialization();
3477 if (!PREACTION(m)) {
3478 check_malloc_state(m);
3479 if (is_initialized(m)) {
3480 size_t nfree = SIZE_T_ONE; /* top always free */
3481 size_t mfree = m->topsize + TOP_FOOT_SIZE;
3483 msegmentptr s = &m->seg;
3485 mchunkptr q = align_as_chunk(s->base);
3486 while (segment_holds(s, q) &&
3487 q != m->top && q->head != FENCEPOST_HEAD) {
3488 size_t sz = chunksize(q);
3501 nm.hblkhd = m->footprint - sum;
3502 nm.usmblks = m->max_footprint;
3503 nm.uordblks = m->footprint - mfree;
3504 nm.fordblks = mfree;
3505 nm.keepcost = m->topsize;
3512 #endif /* !NO_MALLINFO */
3514 #if !NO_MALLOC_STATS
3515 static void internal_malloc_stats(mstate m) {
3516 ensure_initialization();
3517 if (!PREACTION(m)) {
3521 check_malloc_state(m);
3522 if (is_initialized(m)) {
3523 msegmentptr s = &m->seg;
3524 maxfp = m->max_footprint;
3526 used = fp - (m->topsize + TOP_FOOT_SIZE);
3529 mchunkptr q = align_as_chunk(s->base);
3530 while (segment_holds(s, q) &&
3531 q != m->top && q->head != FENCEPOST_HEAD) {
3533 used -= chunksize(q);
3539 POSTACTION(m); /* drop lock */
3540 fprintf(stderr, "max system bytes = %10lu\n", (unsigned long)(maxfp));
3541 fprintf(stderr, "system bytes = %10lu\n", (unsigned long)(fp));
3542 fprintf(stderr, "in use bytes = %10lu\n", (unsigned long)(used));
3545 #endif /* NO_MALLOC_STATS */
3547 /* ----------------------- Operations on smallbins ----------------------- */
3550 Various forms of linking and unlinking are defined as macros. Even
3551 the ones for trees, which are very long but have very short typical
3552 paths. This is ugly but reduces reliance on inlining support of
3556 /* Link a free chunk into a smallbin */
3557 #define insert_small_chunk(M, P, S) {\
3558 bindex_t I = small_index(S);\
3559 mchunkptr B = smallbin_at(M, I);\
3561 assert(S >= MIN_CHUNK_SIZE);\
3562 if (!smallmap_is_marked(M, I))\
3563 mark_smallmap(M, I);\
3564 else if (RTCHECK(ok_address(M, B->fd)))\
3567 CORRUPTION_ERROR_ACTION(M);\
3575 /* Unlink a chunk from a smallbin */
3576 #define unlink_small_chunk(M, P, S) {\
3577 mchunkptr F = P->fd;\
3578 mchunkptr B = P->bk;\
3579 bindex_t I = small_index(S);\
3582 assert(chunksize(P) == small_index2size(I));\
3583 if (RTCHECK(F == smallbin_at(M,I) || (ok_address(M, F) && F->bk == P))) { \
3585 clear_smallmap(M, I);\
3587 else if (RTCHECK(B == smallbin_at(M,I) ||\
3588 (ok_address(M, B) && B->fd == P))) {\
3593 CORRUPTION_ERROR_ACTION(M);\
3597 CORRUPTION_ERROR_ACTION(M);\
3601 /* Unlink the first chunk from a smallbin */
3602 #define unlink_first_small_chunk(M, B, P, I) {\
3603 mchunkptr F = P->fd;\
3606 assert(chunksize(P) == small_index2size(I));\
3608 clear_smallmap(M, I);\
3610 else if (RTCHECK(ok_address(M, F) && F->bk == P)) {\
3615 CORRUPTION_ERROR_ACTION(M);\
3619 /* Replace dv node, binning the old one */
3620 /* Used only when dvsize known to be small */
3621 #define replace_dv(M, P, S) {\
3622 size_t DVS = M->dvsize;\
3623 assert(is_small(DVS));\
3625 mchunkptr DV = M->dv;\
3626 insert_small_chunk(M, DV, DVS);\
3632 /* ------------------------- Operations on trees ------------------------- */
3634 /* Insert chunk into tree */
3635 #define insert_large_chunk(M, X, S) {\
3638 compute_tree_index(S, I);\
3639 H = treebin_at(M, I);\
3641 X->child[0] = X->child[1] = 0;\
3642 if (!treemap_is_marked(M, I)) {\
3643 mark_treemap(M, I);\
3645 X->parent = (tchunkptr)H;\
3650 size_t K = S << leftshift_for_tree_index(I);\
3652 if (chunksize(T) != S) {\
3653 tchunkptr* C = &(T->child[(K >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]);\
3657 else if (RTCHECK(ok_address(M, C))) {\
3664 CORRUPTION_ERROR_ACTION(M);\
3669 tchunkptr F = T->fd;\
3670 if (RTCHECK(ok_address(M, T) && ok_address(M, F))) {\
3678 CORRUPTION_ERROR_ACTION(M);\
3689 1. If x is a chained node, unlink it from its same-sized fd/bk links
3690 and choose its bk node as its replacement.
3691 2. If x was the last node of its size, but not a leaf node, it must
3692 be replaced with a leaf node (not merely one with an open left or
3693 right), to make sure that lefts and rights of descendents
3694 correspond properly to bit masks. We use the rightmost descendent
3695 of x. We could use any other leaf, but this is easy to locate and
3696 tends to counteract removal of leftmosts elsewhere, and so keeps
3697 paths shorter than minimally guaranteed. This doesn't loop much
3698 because on average a node in a tree is near the bottom.
3699 3. If x is the base of a chain (i.e., has parent links) relink
3700 x's parent and children to x's replacement (or null if none).
3703 #define unlink_large_chunk(M, X) {\
3704 tchunkptr XP = X->parent;\
3707 tchunkptr F = X->fd;\
3709 if (RTCHECK(ok_address(M, F) && F->bk == X && R->fd == X)) {\
3714 CORRUPTION_ERROR_ACTION(M);\
3719 if (((R = *(RP = &(X->child[1]))) != 0) ||\
3720 ((R = *(RP = &(X->child[0]))) != 0)) {\
3722 while ((*(CP = &(R->child[1])) != 0) ||\
3723 (*(CP = &(R->child[0])) != 0)) {\
3726 if (RTCHECK(ok_address(M, RP)))\
3729 CORRUPTION_ERROR_ACTION(M);\
3734 tbinptr* H = treebin_at(M, X->index);\
3736 if ((*H = R) == 0) \
3737 clear_treemap(M, X->index);\
3739 else if (RTCHECK(ok_address(M, XP))) {\
3740 if (XP->child[0] == X) \
3746 CORRUPTION_ERROR_ACTION(M);\
3748 if (RTCHECK(ok_address(M, R))) {\
3751 if ((C0 = X->child[0]) != 0) {\
3752 if (RTCHECK(ok_address(M, C0))) {\
3757 CORRUPTION_ERROR_ACTION(M);\
3759 if ((C1 = X->child[1]) != 0) {\
3760 if (RTCHECK(ok_address(M, C1))) {\
3765 CORRUPTION_ERROR_ACTION(M);\
3769 CORRUPTION_ERROR_ACTION(M);\
3774 /* Relays to large vs small bin operations */
3776 #define insert_chunk(M, P, S)\
3777 if (is_small(S)) insert_small_chunk(M, P, S)\
3778 else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); }
3780 #define unlink_chunk(M, P, S)\
3781 if (is_small(S)) unlink_small_chunk(M, P, S)\
3782 else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); }
3785 /* Relays to internal calls to malloc/free from realloc, memalign etc */
3788 #define internal_malloc(m, b) mspace_malloc(m, b)
3789 #define internal_free(m, mem) mspace_free(m,mem);
3790 #else /* ONLY_MSPACES */
3792 #define internal_malloc(m, b)\
3793 ((m == gm)? dlmalloc(b) : mspace_malloc(m, b))
3794 #define internal_free(m, mem)\
3795 if (m == gm) dlfree(mem); else mspace_free(m,mem);
3797 #define internal_malloc(m, b) dlmalloc(b)
3798 #define internal_free(m, mem) dlfree(mem)
3799 #endif /* MSPACES */
3800 #endif /* ONLY_MSPACES */
3802 /* ----------------------- Direct-mmapping chunks ----------------------- */
3805 Directly mmapped chunks are set up with an offset to the start of
3806 the mmapped region stored in the prev_foot field of the chunk. This
3807 allows reconstruction of the required argument to MUNMAP when freed,
3808 and also allows adjustment of the returned chunk to meet alignment
3809 requirements (especially in memalign).
3812 /* Malloc using mmap */
3813 static void* mmap_alloc(mstate m, size_t nb) {
3814 size_t mmsize = mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3815 if (m->footprint_limit != 0) {
3816 size_t fp = m->footprint + mmsize;
3817 if (fp <= m->footprint || fp > m->footprint_limit)
3820 if (mmsize > nb) { /* Check for wrap around 0 */
3821 char* mm = (char*)(CALL_DIRECT_MMAP(mmsize));
3823 size_t offset = align_offset(chunk2mem(mm));
3824 size_t psize = mmsize - offset - MMAP_FOOT_PAD;
3825 mchunkptr p = (mchunkptr)(mm + offset);
3826 p->prev_foot = offset;
3828 mark_inuse_foot(m, p, psize);
3829 chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD;
3830 chunk_plus_offset(p, psize+SIZE_T_SIZE)->head = 0;
3832 if (m->least_addr == 0 || mm < m->least_addr)
3834 if ((m->footprint += mmsize) > m->max_footprint)
3835 m->max_footprint = m->footprint;
3836 assert(is_aligned(chunk2mem(p)));
3837 check_mmapped_chunk(m, p);
3838 return chunk2mem(p);
3844 /* Realloc using mmap */
3845 static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb, int flags) {
3846 size_t oldsize = chunksize(oldp);
3848 if (is_small(nb)) /* Can't shrink mmap regions below small size */
3850 /* Keep old chunk if big enough but not too big */
3851 if (oldsize >= nb + SIZE_T_SIZE &&
3852 (oldsize - nb) <= (mparams.granularity << 1))
3855 size_t offset = oldp->prev_foot;
3856 size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD;
3857 size_t newmmsize = mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3858 char* cp = (char*)CALL_MREMAP((char*)oldp - offset,
3859 oldmmsize, newmmsize, flags);
3861 mchunkptr newp = (mchunkptr)(cp + offset);
3862 size_t psize = newmmsize - offset - MMAP_FOOT_PAD;
3864 mark_inuse_foot(m, newp, psize);
3865 chunk_plus_offset(newp, psize)->head = FENCEPOST_HEAD;
3866 chunk_plus_offset(newp, psize+SIZE_T_SIZE)->head = 0;
3868 if (cp < m->least_addr)
3870 if ((m->footprint += newmmsize - oldmmsize) > m->max_footprint)
3871 m->max_footprint = m->footprint;
3872 check_mmapped_chunk(m, newp);
3880 /* -------------------------- mspace management -------------------------- */
3882 /* Initialize top chunk and its size */
3883 static void init_top(mstate m, mchunkptr p, size_t psize) {
3884 /* Ensure alignment */
3885 size_t offset = align_offset(chunk2mem(p));
3886 p = (mchunkptr)((char*)p + offset);
3891 p->head = psize | PINUSE_BIT;
3892 /* set size of fake trailing chunk holding overhead space only once */
3893 chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE;
3894 m->trim_check = mparams.trim_threshold; /* reset on each update */
3897 /* Initialize bins for a new mstate that is otherwise zeroed out */
3898 static void init_bins(mstate m) {
3899 /* Establish circular links for smallbins */
3901 for (i = 0; i < NSMALLBINS; ++i) {
3902 sbinptr bin = smallbin_at(m,i);
3903 bin->fd = bin->bk = bin;
3907 #if PROCEED_ON_ERROR
3909 /* default corruption action */
3910 static void reset_on_error(mstate m) {
3912 ++malloc_corruption_error_count;
3913 /* Reinitialize fields to forget about all memory */
3914 m->smallmap = m->treemap = 0;
3915 m->dvsize = m->topsize = 0;
3920 for (i = 0; i < NTREEBINS; ++i)
3921 *treebin_at(m, i) = 0;
3924 #endif /* PROCEED_ON_ERROR */
3926 /* Allocate chunk and prepend remainder with chunk in successor base. */
3927 static void* prepend_alloc(mstate m, char* newbase, char* oldbase,
3929 mchunkptr p = align_as_chunk(newbase);
3930 mchunkptr oldfirst = align_as_chunk(oldbase);
3931 size_t psize = (char*)oldfirst - (char*)p;
3932 mchunkptr q = chunk_plus_offset(p, nb);
3933 size_t qsize = psize - nb;
3934 set_size_and_pinuse_of_inuse_chunk(m, p, nb);
3936 assert((char*)oldfirst > (char*)q);
3937 assert(pinuse(oldfirst));
3938 assert(qsize >= MIN_CHUNK_SIZE);
3940 /* consolidate remainder with first chunk of old base */
3941 if (oldfirst == m->top) {
3942 size_t tsize = m->topsize += qsize;
3944 q->head = tsize | PINUSE_BIT;
3945 check_top_chunk(m, q);
3947 else if (oldfirst == m->dv) {
3948 size_t dsize = m->dvsize += qsize;
3950 set_size_and_pinuse_of_free_chunk(q, dsize);
3953 if (!is_inuse(oldfirst)) {
3954 size_t nsize = chunksize(oldfirst);
3955 unlink_chunk(m, oldfirst, nsize);
3956 oldfirst = chunk_plus_offset(oldfirst, nsize);
3959 set_free_with_pinuse(q, qsize, oldfirst);
3960 insert_chunk(m, q, qsize);
3961 check_free_chunk(m, q);
3964 check_malloced_chunk(m, chunk2mem(p), nb);
3965 return chunk2mem(p);
3968 /* Add a segment to hold a new noncontiguous region */
3969 static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) {
3970 /* Determine locations and sizes of segment, fenceposts, old top */
3971 char* old_top = (char*)m->top;
3972 msegmentptr oldsp = segment_holding(m, old_top);
3973 char* old_end = oldsp->base + oldsp->size;
3974 size_t ssize = pad_request(sizeof(struct malloc_segment));
3975 char* rawsp = old_end - (ssize + FOUR_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3976 size_t offset = align_offset(chunk2mem(rawsp));
3977 char* asp = rawsp + offset;
3978 char* csp = (asp < (old_top + MIN_CHUNK_SIZE))? old_top : asp;
3979 mchunkptr sp = (mchunkptr)csp;
3980 msegmentptr ss = (msegmentptr)(chunk2mem(sp));
3981 mchunkptr tnext = chunk_plus_offset(sp, ssize);
3982 mchunkptr p = tnext;
3985 /* reset top to new space */
3986 init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
3988 /* Set up segment record */
3989 assert(is_aligned(ss));
3990 set_size_and_pinuse_of_inuse_chunk(m, sp, ssize);
3991 *ss = m->seg; /* Push current record */
3992 m->seg.base = tbase;
3993 m->seg.size = tsize;
3994 m->seg.sflags = mmapped;
3997 /* Insert trailing fenceposts */
3999 mchunkptr nextp = chunk_plus_offset(p, SIZE_T_SIZE);
4000 p->head = FENCEPOST_HEAD;
4002 if ((char*)(&(nextp->head)) < old_end)
4007 assert(nfences >= 2);
4009 /* Insert the rest of old top into a bin as an ordinary free chunk */
4010 if (csp != old_top) {
4011 mchunkptr q = (mchunkptr)old_top;
4012 size_t psize = csp - old_top;
4013 mchunkptr tn = chunk_plus_offset(q, psize);
4014 set_free_with_pinuse(q, psize, tn);
4015 insert_chunk(m, q, psize);
4018 check_top_chunk(m, m->top);
4021 /* -------------------------- System allocation -------------------------- */
4023 /* Get memory from system using MORECORE or MMAP */
4024 static void* sys_alloc(mstate m, size_t nb) {
4025 char* tbase = CMFAIL;
4027 flag_t mmap_flag = 0;
4028 size_t asize; /* allocation size */
4030 ensure_initialization();
4032 /* Directly map large chunks, but only if already initialized */
4033 if (use_mmap(m) && nb >= mparams.mmap_threshold && m->topsize != 0) {
4034 void* mem = mmap_alloc(m, nb);
4039 asize = granularity_align(nb + SYS_ALLOC_PADDING);
4041 return 0; /* wraparound */
4042 if (m->footprint_limit != 0) {
4043 size_t fp = m->footprint + asize;
4044 if (fp <= m->footprint || fp > m->footprint_limit)
4049 Try getting memory in any of three ways (in most-preferred to
4050 least-preferred order):
4051 1. A call to MORECORE that can normally contiguously extend memory.
4052 (disabled if not MORECORE_CONTIGUOUS or not HAVE_MORECORE or
4053 or main space is mmapped or a previous contiguous call failed)
4054 2. A call to MMAP new space (disabled if not HAVE_MMAP).
4055 Note that under the default settings, if MORECORE is unable to
4056 fulfill a request, and HAVE_MMAP is true, then mmap is
4057 used as a noncontiguous system allocator. This is a useful backup
4058 strategy for systems with holes in address spaces -- in this case
4059 sbrk cannot contiguously expand the heap, but mmap may be able to
4061 3. A call to MORECORE that cannot usually contiguously extend memory.
4062 (disabled if not HAVE_MORECORE)
4064 In all cases, we need to request enough bytes from system to ensure
4065 we can malloc nb bytes upon success, so pad with enough space for
4066 top_foot, plus alignment-pad to make sure we don't lose bytes if
4067 not on boundary, and round this up to a granularity unit.
4070 if (MORECORE_CONTIGUOUS && !use_noncontiguous(m)) {
4072 msegmentptr ss = (m->top == 0)? 0 : segment_holding(m, (char*)m->top);
4073 ACQUIRE_MALLOC_GLOBAL_LOCK();
4075 if (ss == 0) { /* First time through or recovery */
4076 char* base = (char*)CALL_MORECORE(0);
4077 if (base != CMFAIL) {
4079 /* Adjust to end on a page boundary */
4080 if (!is_page_aligned(base))
4081 asize += (page_align((size_t)base) - (size_t)base);
4082 fp = m->footprint + asize; /* recheck limits */
4083 if (asize > nb && asize < HALF_MAX_SIZE_T &&
4084 (m->footprint_limit == 0 ||
4085 (fp > m->footprint && fp <= m->footprint_limit)) &&
4086 (br = (char*)(CALL_MORECORE(asize))) == base) {
4093 /* Subtract out existing available top space from MORECORE request. */
4094 asize = granularity_align(nb - m->topsize + SYS_ALLOC_PADDING);
4095 /* Use mem here only if it did continuously extend old space */
4096 if (asize < HALF_MAX_SIZE_T &&
4097 (br = (char*)(CALL_MORECORE(asize))) == ss->base+ss->size) {
4103 if (tbase == CMFAIL) { /* Cope with partial failure */
4104 if (br != CMFAIL) { /* Try to use/extend the space we did get */
4105 if (asize < HALF_MAX_SIZE_T &&
4106 asize < nb + SYS_ALLOC_PADDING) {
4107 size_t esize = granularity_align(nb + SYS_ALLOC_PADDING - asize);
4108 if (esize < HALF_MAX_SIZE_T) {
4109 char* end = (char*)CALL_MORECORE(esize);
4112 else { /* Can't use; try to release */
4113 (void) CALL_MORECORE(-asize);
4119 if (br != CMFAIL) { /* Use the space we did get */
4124 disable_contiguous(m); /* Don't try contiguous path in the future */
4127 RELEASE_MALLOC_GLOBAL_LOCK();
4130 if (HAVE_MMAP && tbase == CMFAIL) { /* Try MMAP */
4131 char* mp = (char*)(CALL_MMAP(asize));
4135 mmap_flag = USE_MMAP_BIT;
4139 if (HAVE_MORECORE && tbase == CMFAIL) { /* Try noncontiguous MORECORE */
4140 if (asize < HALF_MAX_SIZE_T) {
4143 ACQUIRE_MALLOC_GLOBAL_LOCK();
4144 br = (char*)(CALL_MORECORE(asize));
4145 end = (char*)(CALL_MORECORE(0));
4146 RELEASE_MALLOC_GLOBAL_LOCK();
4147 if (br != CMFAIL && end != CMFAIL && br < end) {
4148 size_t ssize = end - br;
4149 if (ssize > nb + TOP_FOOT_SIZE) {
4157 if (tbase != CMFAIL) {
4159 if ((m->footprint += tsize) > m->max_footprint)
4160 m->max_footprint = m->footprint;
4162 if (!is_initialized(m)) { /* first-time initialization */
4163 if (m->least_addr == 0 || tbase < m->least_addr)
4164 m->least_addr = tbase;
4165 m->seg.base = tbase;
4166 m->seg.size = tsize;
4167 m->seg.sflags = mmap_flag;
4168 m->magic = mparams.magic;
4169 m->release_checks = MAX_RELEASE_CHECK_RATE;
4173 init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
4177 /* Offset top by embedded malloc_state */
4178 mchunkptr mn = next_chunk(mem2chunk(m));
4179 init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) -TOP_FOOT_SIZE);
4184 /* Try to merge with an existing segment */
4185 msegmentptr sp = &m->seg;
4186 /* Only consider most recent segment if traversal suppressed */
4187 while (sp != 0 && tbase != sp->base + sp->size)
4188 sp = (NO_SEGMENT_TRAVERSAL) ? 0 : sp->next;
4190 !is_extern_segment(sp) &&
4191 (sp->sflags & USE_MMAP_BIT) == mmap_flag &&
4192 segment_holds(sp, m->top)) { /* append */
4194 init_top(m, m->top, m->topsize + tsize);
4197 if (tbase < m->least_addr)
4198 m->least_addr = tbase;
4200 while (sp != 0 && sp->base != tbase + tsize)
4201 sp = (NO_SEGMENT_TRAVERSAL) ? 0 : sp->next;
4203 !is_extern_segment(sp) &&
4204 (sp->sflags & USE_MMAP_BIT) == mmap_flag) {
4205 char* oldbase = sp->base;
4208 return prepend_alloc(m, tbase, oldbase, nb);
4211 add_segment(m, tbase, tsize, mmap_flag);
4215 if (nb < m->topsize) { /* Allocate from new or extended top space */
4216 size_t rsize = m->topsize -= nb;
4217 mchunkptr p = m->top;
4218 mchunkptr r = m->top = chunk_plus_offset(p, nb);
4219 r->head = rsize | PINUSE_BIT;
4220 set_size_and_pinuse_of_inuse_chunk(m, p, nb);
4221 check_top_chunk(m, m->top);
4222 check_malloced_chunk(m, chunk2mem(p), nb);
4223 return chunk2mem(p);
4227 MALLOC_FAILURE_ACTION;
4231 /* ----------------------- system deallocation -------------------------- */
4233 /* Unmap and unlink any mmapped segments that don't contain used chunks */
4234 static size_t release_unused_segments(mstate m) {
4235 size_t released = 0;
4237 msegmentptr pred = &m->seg;
4238 msegmentptr sp = pred->next;
4240 char* base = sp->base;
4241 size_t size = sp->size;
4242 msegmentptr next = sp->next;
4244 if (is_mmapped_segment(sp) && !is_extern_segment(sp)) {
4245 mchunkptr p = align_as_chunk(base);
4246 size_t psize = chunksize(p);
4247 /* Can unmap if first chunk holds entire segment and not pinned */
4248 if (!is_inuse(p) && (char*)p + psize >= base + size - TOP_FOOT_SIZE) {
4249 tchunkptr tp = (tchunkptr)p;
4250 assert(segment_holds(sp, (char*)sp));
4256 unlink_large_chunk(m, tp);
4258 if (CALL_MUNMAP(base, size) == 0) {
4260 m->footprint -= size;
4261 /* unlink obsoleted record */
4265 else { /* back out if cannot unmap */
4266 insert_large_chunk(m, tp, psize);
4270 if (NO_SEGMENT_TRAVERSAL) /* scan only first segment */
4275 /* Reset check counter */
4276 m->release_checks = ((nsegs > MAX_RELEASE_CHECK_RATE)?
4277 nsegs : MAX_RELEASE_CHECK_RATE);
4281 static int sys_trim(mstate m, size_t pad) {
4282 size_t released = 0;
4283 ensure_initialization();
4284 if (pad < MAX_REQUEST && is_initialized(m)) {
4285 pad += TOP_FOOT_SIZE; /* ensure enough room for segment overhead */
4287 if (m->topsize > pad) {
4288 /* Shrink top space in granularity-size units, keeping at least one */
4289 size_t unit = mparams.granularity;
4290 size_t extra = ((m->topsize - pad + (unit - SIZE_T_ONE)) / unit -
4292 msegmentptr sp = segment_holding(m, (char*)m->top);
4294 if (!is_extern_segment(sp)) {
4295 if (is_mmapped_segment(sp)) {
4297 sp->size >= extra &&
4298 !has_segment_link(m, sp)) { /* can't shrink if pinned */
4299 size_t newsize = sp->size - extra;
4300 /* Prefer mremap, fall back to munmap */
4301 if ((CALL_MREMAP(sp->base, sp->size, newsize, 0) != MFAIL) ||
4302 (CALL_MUNMAP(sp->base + newsize, extra) == 0)) {
4307 else if (HAVE_MORECORE) {
4308 if (extra >= HALF_MAX_SIZE_T) /* Avoid wrapping negative */
4309 extra = (HALF_MAX_SIZE_T) + SIZE_T_ONE - unit;
4310 ACQUIRE_MALLOC_GLOBAL_LOCK();
4312 /* Make sure end of memory is where we last set it. */
4313 char* old_br = (char*)(CALL_MORECORE(0));
4314 if (old_br == sp->base + sp->size) {
4315 char* rel_br = (char*)(CALL_MORECORE(-extra));
4316 char* new_br = (char*)(CALL_MORECORE(0));
4317 if (rel_br != CMFAIL && new_br < old_br)
4318 released = old_br - new_br;
4321 RELEASE_MALLOC_GLOBAL_LOCK();
4325 if (released != 0) {
4326 sp->size -= released;
4327 m->footprint -= released;
4328 init_top(m, m->top, m->topsize - released);
4329 check_top_chunk(m, m->top);
4333 /* Unmap any unused mmapped segments */
4335 released += release_unused_segments(m);
4337 /* On failure, disable autotrim to avoid repeated failed future calls */
4338 if (released == 0 && m->topsize > m->trim_check)
4339 m->trim_check = MAX_SIZE_T;
4342 return (released != 0)? 1 : 0;
4345 /* Consolidate and bin a chunk. Differs from exported versions
4346 of free mainly in that the chunk need not be marked as inuse.
4348 static void dispose_chunk(mstate m, mchunkptr p, size_t psize) {
4349 mchunkptr next = chunk_plus_offset(p, psize);
4352 size_t prevsize = p->prev_foot;
4353 if (is_mmapped(p)) {
4354 psize += prevsize + MMAP_FOOT_PAD;
4355 if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
4356 m->footprint -= psize;
4359 prev = chunk_minus_offset(p, prevsize);
4362 if (RTCHECK(ok_address(m, prev))) { /* consolidate backward */
4364 unlink_chunk(m, p, prevsize);
4366 else if ((next->head & INUSE_BITS) == INUSE_BITS) {
4368 set_free_with_pinuse(p, psize, next);
4373 CORRUPTION_ERROR_ACTION(m);
4377 if (RTCHECK(ok_address(m, next))) {
4378 if (!cinuse(next)) { /* consolidate forward */
4379 if (next == m->top) {
4380 size_t tsize = m->topsize += psize;
4382 p->head = tsize | PINUSE_BIT;
4389 else if (next == m->dv) {
4390 size_t dsize = m->dvsize += psize;
4392 set_size_and_pinuse_of_free_chunk(p, dsize);
4396 size_t nsize = chunksize(next);
4398 unlink_chunk(m, next, nsize);
4399 set_size_and_pinuse_of_free_chunk(p, psize);
4407 set_free_with_pinuse(p, psize, next);
4409 insert_chunk(m, p, psize);
4412 CORRUPTION_ERROR_ACTION(m);
4416 /* ---------------------------- malloc --------------------------- */
4418 /* allocate a large request from the best fitting chunk in a treebin */
4419 static void* tmalloc_large(mstate m, size_t nb) {
4421 size_t rsize = -nb; /* Unsigned negation */
4424 compute_tree_index(nb, idx);
4425 if ((t = *treebin_at(m, idx)) != 0) {
4426 /* Traverse tree for this bin looking for node with size == nb */
4427 size_t sizebits = nb << leftshift_for_tree_index(idx);
4428 tchunkptr rst = 0; /* The deepest untaken right subtree */
4431 size_t trem = chunksize(t) - nb;
4434 if ((rsize = trem) == 0)
4438 t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
4439 if (rt != 0 && rt != t)
4442 t = rst; /* set t to least subtree holding sizes > nb */
4448 if (t == 0 && v == 0) { /* set t to root of next non-empty treebin */
4449 binmap_t leftbits = left_bits(idx2bit(idx)) & m->treemap;
4450 if (leftbits != 0) {
4452 binmap_t leastbit = least_bit(leftbits);
4453 compute_bit2idx(leastbit, i);
4454 t = *treebin_at(m, i);
4458 while (t != 0) { /* find smallest of tree or subtree */
4459 size_t trem = chunksize(t) - nb;
4464 t = leftmost_child(t);
4467 /* If dv is a better fit, return 0 so malloc will use it */
4468 if (v != 0 && rsize < (size_t)(m->dvsize - nb)) {
4469 if (RTCHECK(ok_address(m, v))) { /* split */
4470 mchunkptr r = chunk_plus_offset(v, nb);
4471 assert(chunksize(v) == rsize + nb);
4472 if (RTCHECK(ok_next(v, r))) {
4473 unlink_large_chunk(m, v);
4474 if (rsize < MIN_CHUNK_SIZE)
4475 set_inuse_and_pinuse(m, v, (rsize + nb));
4477 set_size_and_pinuse_of_inuse_chunk(m, v, nb);
4478 set_size_and_pinuse_of_free_chunk(r, rsize);
4479 insert_chunk(m, r, rsize);
4481 return chunk2mem(v);
4484 CORRUPTION_ERROR_ACTION(m);
4489 /* allocate a small request from the best fitting chunk in a treebin */
4490 static void* tmalloc_small(mstate m, size_t nb) {
4494 binmap_t leastbit = least_bit(m->treemap);
4495 compute_bit2idx(leastbit, i);
4496 v = t = *treebin_at(m, i);
4497 rsize = chunksize(t) - nb;
4499 while ((t = leftmost_child(t)) != 0) {
4500 size_t trem = chunksize(t) - nb;
4507 if (RTCHECK(ok_address(m, v))) {
4508 mchunkptr r = chunk_plus_offset(v, nb);
4509 assert(chunksize(v) == rsize + nb);
4510 if (RTCHECK(ok_next(v, r))) {
4511 unlink_large_chunk(m, v);
4512 if (rsize < MIN_CHUNK_SIZE)
4513 set_inuse_and_pinuse(m, v, (rsize + nb));
4515 set_size_and_pinuse_of_inuse_chunk(m, v, nb);
4516 set_size_and_pinuse_of_free_chunk(r, rsize);
4517 replace_dv(m, r, rsize);
4519 return chunk2mem(v);
4523 CORRUPTION_ERROR_ACTION(m);
4529 void* dlmalloc(size_t bytes) {
4532 If a small request (< 256 bytes minus per-chunk overhead):
4533 1. If one exists, use a remainderless chunk in associated smallbin.
4534 (Remainderless means that there are too few excess bytes to
4535 represent as a chunk.)
4536 2. If it is big enough, use the dv chunk, which is normally the
4537 chunk adjacent to the one used for the most recent small request.
4538 3. If one exists, split the smallest available chunk in a bin,
4539 saving remainder in dv.
4540 4. If it is big enough, use the top chunk.
4541 5. If available, get memory from system and use it
4542 Otherwise, for a large request:
4543 1. Find the smallest available binned chunk that fits, and use it
4544 if it is better fitting than dv chunk, splitting if necessary.
4545 2. If better fitting than any binned chunk, use the dv chunk.
4546 3. If it is big enough, use the top chunk.
4547 4. If request size >= mmap threshold, try to directly mmap this chunk.
4548 5. If available, get memory from system and use it
4550 The ugly goto's here ensure that postaction occurs along all paths.
4554 ensure_initialization(); /* initialize in sys_alloc if not using locks */
4557 if (!PREACTION(gm)) {
4560 if (bytes <= MAX_SMALL_REQUEST) {
4563 nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
4564 idx = small_index(nb);
4565 smallbits = gm->smallmap >> idx;
4567 if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
4569 idx += ~smallbits & 1; /* Uses next bin if idx empty */
4570 b = smallbin_at(gm, idx);
4572 assert(chunksize(p) == small_index2size(idx));
4573 unlink_first_small_chunk(gm, b, p, idx);
4574 set_inuse_and_pinuse(gm, p, small_index2size(idx));
4576 check_malloced_chunk(gm, mem, nb);
4580 else if (nb > gm->dvsize) {
4581 if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
4585 binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
4586 binmap_t leastbit = least_bit(leftbits);
4587 compute_bit2idx(leastbit, i);
4588 b = smallbin_at(gm, i);
4590 assert(chunksize(p) == small_index2size(i));
4591 unlink_first_small_chunk(gm, b, p, i);
4592 rsize = small_index2size(i) - nb;
4593 /* Fit here cannot be remainderless if 4byte sizes */
4594 if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
4595 set_inuse_and_pinuse(gm, p, small_index2size(i));
4597 set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4598 r = chunk_plus_offset(p, nb);
4599 set_size_and_pinuse_of_free_chunk(r, rsize);
4600 replace_dv(gm, r, rsize);
4603 check_malloced_chunk(gm, mem, nb);
4607 else if (gm->treemap != 0 && (mem = tmalloc_small(gm, nb)) != 0) {
4608 check_malloced_chunk(gm, mem, nb);
4613 else if (bytes >= MAX_REQUEST)
4614 nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
4616 nb = pad_request(bytes);
4617 if (gm->treemap != 0 && (mem = tmalloc_large(gm, nb)) != 0) {
4618 check_malloced_chunk(gm, mem, nb);
4623 if (nb <= gm->dvsize) {
4624 size_t rsize = gm->dvsize - nb;
4625 mchunkptr p = gm->dv;
4626 if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
4627 mchunkptr r = gm->dv = chunk_plus_offset(p, nb);
4629 set_size_and_pinuse_of_free_chunk(r, rsize);
4630 set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4632 else { /* exhaust dv */
4633 size_t dvs = gm->dvsize;
4636 set_inuse_and_pinuse(gm, p, dvs);
4639 check_malloced_chunk(gm, mem, nb);
4643 else if (nb < gm->topsize) { /* Split top */
4644 size_t rsize = gm->topsize -= nb;
4645 mchunkptr p = gm->top;
4646 mchunkptr r = gm->top = chunk_plus_offset(p, nb);
4647 r->head = rsize | PINUSE_BIT;
4648 set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4650 check_top_chunk(gm, gm->top);
4651 check_malloced_chunk(gm, mem, nb);
4655 mem = sys_alloc(gm, nb);
4665 /* ---------------------------- free --------------------------- */
4667 void dlfree(void* mem) {
4669 Consolidate freed chunks with preceeding or succeeding bordering
4670 free chunks, if they exist, and then place in a bin. Intermixed
4671 with special cases for top, dv, mmapped chunks, and usage errors.
4675 mchunkptr p = mem2chunk(mem);
4677 mstate fm = get_mstate_for(p);
4678 if (!ok_magic(fm)) {
4679 USAGE_ERROR_ACTION(fm, p);
4684 #endif /* FOOTERS */
4685 if (!PREACTION(fm)) {
4686 check_inuse_chunk(fm, p);
4687 if (RTCHECK(ok_address(fm, p) && ok_inuse(p))) {
4688 size_t psize = chunksize(p);
4689 mchunkptr next = chunk_plus_offset(p, psize);
4691 size_t prevsize = p->prev_foot;
4692 if (is_mmapped(p)) {
4693 psize += prevsize + MMAP_FOOT_PAD;
4694 if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
4695 fm->footprint -= psize;
4699 mchunkptr prev = chunk_minus_offset(p, prevsize);
4702 if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
4704 unlink_chunk(fm, p, prevsize);
4706 else if ((next->head & INUSE_BITS) == INUSE_BITS) {
4708 set_free_with_pinuse(p, psize, next);
4717 if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
4718 if (!cinuse(next)) { /* consolidate forward */
4719 if (next == fm->top) {
4720 size_t tsize = fm->topsize += psize;
4722 p->head = tsize | PINUSE_BIT;
4727 if (should_trim(fm, tsize))
4731 else if (next == fm->dv) {
4732 size_t dsize = fm->dvsize += psize;
4734 set_size_and_pinuse_of_free_chunk(p, dsize);
4738 size_t nsize = chunksize(next);
4740 unlink_chunk(fm, next, nsize);
4741 set_size_and_pinuse_of_free_chunk(p, psize);
4749 set_free_with_pinuse(p, psize, next);
4751 if (is_small(psize)) {
4752 insert_small_chunk(fm, p, psize);
4753 check_free_chunk(fm, p);
4756 tchunkptr tp = (tchunkptr)p;
4757 insert_large_chunk(fm, tp, psize);
4758 check_free_chunk(fm, p);
4759 if (--fm->release_checks == 0)
4760 release_unused_segments(fm);
4766 USAGE_ERROR_ACTION(fm, p);
4773 #endif /* FOOTERS */
4776 void* dlcalloc(size_t n_elements, size_t elem_size) {
4779 if (n_elements != 0) {
4780 req = n_elements * elem_size;
4781 if (((n_elements | elem_size) & ~(size_t)0xffff) &&
4782 (req / n_elements != elem_size))
4783 req = MAX_SIZE_T; /* force downstream failure on overflow */
4785 mem = dlmalloc(req);
4786 if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
4787 memset(mem, 0, req);
4791 #endif /* !ONLY_MSPACES */
4793 /* ------------ Internal support for realloc, memalign, etc -------------- */
4795 /* Try to realloc; only in-place unless can_move true */
4796 static mchunkptr try_realloc_chunk(mstate m, mchunkptr p, size_t nb,
4799 size_t oldsize = chunksize(p);
4800 mchunkptr next = chunk_plus_offset(p, oldsize);
4801 if (RTCHECK(ok_address(m, p) && ok_inuse(p) &&
4802 ok_next(p, next) && ok_pinuse(next))) {
4803 if (is_mmapped(p)) {
4804 newp = mmap_resize(m, p, nb, can_move);
4806 else if (oldsize >= nb) { /* already big enough */
4807 size_t rsize = oldsize - nb;
4808 if (rsize >= MIN_CHUNK_SIZE) { /* split off remainder */
4809 mchunkptr r = chunk_plus_offset(p, nb);
4810 set_inuse(m, p, nb);
4811 set_inuse(m, r, rsize);
4812 dispose_chunk(m, r, rsize);
4816 else if (next == m->top) { /* extend into top */
4817 if (oldsize + m->topsize > nb) {
4818 size_t newsize = oldsize + m->topsize;
4819 size_t newtopsize = newsize - nb;
4820 mchunkptr newtop = chunk_plus_offset(p, nb);
4821 set_inuse(m, p, nb);
4822 newtop->head = newtopsize |PINUSE_BIT;
4824 m->topsize = newtopsize;
4828 else if (next == m->dv) { /* extend into dv */
4829 size_t dvs = m->dvsize;
4830 if (oldsize + dvs >= nb) {
4831 size_t dsize = oldsize + dvs - nb;
4832 if (dsize >= MIN_CHUNK_SIZE) {
4833 mchunkptr r = chunk_plus_offset(p, nb);
4834 mchunkptr n = chunk_plus_offset(r, dsize);
4835 set_inuse(m, p, nb);
4836 set_size_and_pinuse_of_free_chunk(r, dsize);
4841 else { /* exhaust dv */
4842 size_t newsize = oldsize + dvs;
4843 set_inuse(m, p, newsize);
4850 else if (!cinuse(next)) { /* extend into next free chunk */
4851 size_t nextsize = chunksize(next);
4852 if (oldsize + nextsize >= nb) {
4853 size_t rsize = oldsize + nextsize - nb;
4854 unlink_chunk(m, next, nextsize);
4855 if (rsize < MIN_CHUNK_SIZE) {
4856 size_t newsize = oldsize + nextsize;
4857 set_inuse(m, p, newsize);
4860 mchunkptr r = chunk_plus_offset(p, nb);
4861 set_inuse(m, p, nb);
4862 set_inuse(m, r, rsize);
4863 dispose_chunk(m, r, rsize);
4870 USAGE_ERROR_ACTION(m, oldmem);
4875 static void* internal_memalign(mstate m, size_t alignment, size_t bytes) {
4877 if (alignment < MIN_CHUNK_SIZE) /* must be at least a minimum chunk size */
4878 alignment = MIN_CHUNK_SIZE;
4879 if ((alignment & (alignment-SIZE_T_ONE)) != 0) {/* Ensure a power of 2 */
4880 size_t a = MALLOC_ALIGNMENT << 1;
4881 while (a < alignment) a <<= 1;
4884 if (bytes >= MAX_REQUEST - alignment) {
4885 if (m != 0) { /* Test isn't needed but avoids compiler warning */
4886 MALLOC_FAILURE_ACTION;
4890 size_t nb = request2size(bytes);
4891 size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD;
4892 mem = internal_malloc(m, req);
4894 mchunkptr p = mem2chunk(mem);
4897 if ((((size_t)(mem)) & (alignment - 1)) != 0) { /* misaligned */
4899 Find an aligned spot inside chunk. Since we need to give
4900 back leading space in a chunk of at least MIN_CHUNK_SIZE, if
4901 the first calculation places us at a spot with less than
4902 MIN_CHUNK_SIZE leader, we can move to the next aligned spot.
4903 We've allocated enough total room so that this is always
4906 char* br = (char*)mem2chunk((size_t)(((size_t)((char*)mem + alignment -
4909 char* pos = ((size_t)(br - (char*)(p)) >= MIN_CHUNK_SIZE)?
4911 mchunkptr newp = (mchunkptr)pos;
4912 size_t leadsize = pos - (char*)(p);
4913 size_t newsize = chunksize(p) - leadsize;
4915 if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */
4916 newp->prev_foot = p->prev_foot + leadsize;
4917 newp->head = newsize;
4919 else { /* Otherwise, give back leader, use the rest */
4920 set_inuse(m, newp, newsize);
4921 set_inuse(m, p, leadsize);
4922 dispose_chunk(m, p, leadsize);
4927 /* Give back spare room at the end */
4928 if (!is_mmapped(p)) {
4929 size_t size = chunksize(p);
4930 if (size > nb + MIN_CHUNK_SIZE) {
4931 size_t remainder_size = size - nb;
4932 mchunkptr remainder = chunk_plus_offset(p, nb);
4933 set_inuse(m, p, nb);
4934 set_inuse(m, remainder, remainder_size);
4935 dispose_chunk(m, remainder, remainder_size);
4940 assert (chunksize(p) >= nb);
4941 assert(((size_t)mem & (alignment - 1)) == 0);
4942 check_inuse_chunk(m, p);
4950 Common support for independent_X routines, handling
4951 all of the combinations that can result.
4953 bit 0 set if all elements are same size (using sizes[0])
4954 bit 1 set if elements should be zeroed
4956 static void** ialloc(mstate m,
4962 size_t element_size; /* chunksize of each element, if all same */
4963 size_t contents_size; /* total size of elements */
4964 size_t array_size; /* request size of pointer array */
4965 void* mem; /* malloced aggregate space */
4966 mchunkptr p; /* corresponding chunk */
4967 size_t remainder_size; /* remaining bytes while splitting */
4968 void** marray; /* either "chunks" or malloced ptr array */
4969 mchunkptr array_chunk; /* chunk for malloced ptr array */
4970 flag_t was_enabled; /* to disable mmap */
4974 ensure_initialization();
4975 /* compute array length, if needed */
4977 if (n_elements == 0)
4978 return chunks; /* nothing to do */
4983 /* if empty req, must still return chunk representing empty array */
4984 if (n_elements == 0)
4985 return (void**)internal_malloc(m, 0);
4987 array_size = request2size(n_elements * (sizeof(void*)));
4990 /* compute total element size */
4991 if (opts & 0x1) { /* all-same-size */
4992 element_size = request2size(*sizes);
4993 contents_size = n_elements * element_size;
4995 else { /* add up all the sizes */
4998 for (i = 0; i != n_elements; ++i)
4999 contents_size += request2size(sizes[i]);
5002 size = contents_size + array_size;
5005 Allocate the aggregate chunk. First disable direct-mmapping so
5006 malloc won't use it, since we would not be able to later
5007 free/realloc space internal to a segregated mmap region.
5009 was_enabled = use_mmap(m);
5011 mem = internal_malloc(m, size - CHUNK_OVERHEAD);
5017 if (PREACTION(m)) return 0;
5019 remainder_size = chunksize(p);
5021 assert(!is_mmapped(p));
5023 if (opts & 0x2) { /* optionally clear the elements */
5024 memset((size_t*)mem, 0, remainder_size - SIZE_T_SIZE - array_size);
5027 /* If not provided, allocate the pointer array as final part of chunk */
5029 size_t array_chunk_size;
5030 array_chunk = chunk_plus_offset(p, contents_size);
5031 array_chunk_size = remainder_size - contents_size;
5032 marray = (void**) (chunk2mem(array_chunk));
5033 set_size_and_pinuse_of_inuse_chunk(m, array_chunk, array_chunk_size);
5034 remainder_size = contents_size;
5037 /* split out elements */
5038 for (i = 0; ; ++i) {
5039 marray[i] = chunk2mem(p);
5040 if (i != n_elements-1) {
5041 if (element_size != 0)
5042 size = element_size;
5044 size = request2size(sizes[i]);
5045 remainder_size -= size;
5046 set_size_and_pinuse_of_inuse_chunk(m, p, size);
5047 p = chunk_plus_offset(p, size);
5049 else { /* the final element absorbs any overallocation slop */
5050 set_size_and_pinuse_of_inuse_chunk(m, p, remainder_size);
5056 if (marray != chunks) {
5057 /* final element must have exactly exhausted chunk */
5058 if (element_size != 0) {
5059 assert(remainder_size == element_size);
5062 assert(remainder_size == request2size(sizes[i]));
5064 check_inuse_chunk(m, mem2chunk(marray));
5066 for (i = 0; i != n_elements; ++i)
5067 check_inuse_chunk(m, mem2chunk(marray[i]));
5075 /* Try to free all pointers in the given array.
5076 Note: this could be made faster, by delaying consolidation,
5077 at the price of disabling some user integrity checks, We
5078 still optimize some consolidations by combining adjacent
5079 chunks before freeing, which will occur often if allocated
5080 with ialloc or the array is sorted.
5082 static size_t internal_bulk_free(mstate m, void* array[], size_t nelem) {
5084 if (!PREACTION(m)) {
5086 void** fence = &(array[nelem]);
5087 for (a = array; a != fence; ++a) {
5090 mchunkptr p = mem2chunk(mem);
5091 size_t psize = chunksize(p);
5093 if (get_mstate_for(p) != m) {
5098 check_inuse_chunk(m, p);
5100 if (RTCHECK(ok_address(m, p) && ok_inuse(p))) {
5101 void ** b = a + 1; /* try to merge with next chunk */
5102 mchunkptr next = next_chunk(p);
5103 if (b != fence && *b == chunk2mem(next)) {
5104 size_t newsize = chunksize(next) + psize;
5105 set_inuse(m, p, newsize);
5109 dispose_chunk(m, p, psize);
5112 CORRUPTION_ERROR_ACTION(m);
5117 if (should_trim(m, m->topsize))
5125 #if MALLOC_INSPECT_ALL
5126 static void internal_inspect_all(mstate m,
5127 void(*handler)(void *start,
5130 void* callback_arg),
5132 if (is_initialized(m)) {
5133 mchunkptr top = m->top;
5135 for (s = &m->seg; s != 0; s = s->next) {
5136 mchunkptr q = align_as_chunk(s->base);
5137 while (segment_holds(s, q) && q->head != FENCEPOST_HEAD) {
5138 mchunkptr next = next_chunk(q);
5139 size_t sz = chunksize(q);
5143 used = sz - CHUNK_OVERHEAD; /* must not be mmapped */
5144 start = chunk2mem(q);
5148 if (is_small(sz)) { /* offset by possible bookkeeping */
5149 start = (void*)((char*)q + sizeof(malloc_chunk));
5152 start = (void*)((char*)q + sizeof(malloc_tree_chunk));
5155 if (start < (void*)next) /* skip if all space is bookkeeping */
5156 handler(start, next, used, arg);
5164 #endif /* MALLOC_INSPECT_ALL */
5166 /* ------------------ Exported realloc, memalign, etc -------------------- */
5170 void* dlrealloc(void* oldmem, size_t bytes) {
5173 mem = dlmalloc(bytes);
5175 else if (bytes >= MAX_REQUEST) {
5176 MALLOC_FAILURE_ACTION;
5178 #ifdef REALLOC_ZERO_BYTES_FREES
5179 else if (bytes == 0) {
5182 #endif /* REALLOC_ZERO_BYTES_FREES */
5184 size_t nb = request2size(bytes);
5185 mchunkptr oldp = mem2chunk(oldmem);
5189 mstate m = get_mstate_for(oldp);
5191 USAGE_ERROR_ACTION(m, oldmem);
5194 #endif /* FOOTERS */
5195 if (!PREACTION(m)) {
5196 mchunkptr newp = try_realloc_chunk(m, oldp, nb, 1);
5199 check_inuse_chunk(m, newp);
5200 mem = chunk2mem(newp);
5203 mem = internal_malloc(m, bytes);
5205 size_t oc = chunksize(oldp) - overhead_for(oldp);
5206 memcpy(mem, oldmem, (oc < bytes)? oc : bytes);
5207 internal_free(m, oldmem);
5215 void* dlrealloc_in_place(void* oldmem, size_t bytes) {
5218 if (bytes >= MAX_REQUEST) {
5219 MALLOC_FAILURE_ACTION;
5222 size_t nb = request2size(bytes);
5223 mchunkptr oldp = mem2chunk(oldmem);
5227 mstate m = get_mstate_for(oldp);
5229 USAGE_ERROR_ACTION(m, oldmem);
5232 #endif /* FOOTERS */
5233 if (!PREACTION(m)) {
5234 mchunkptr newp = try_realloc_chunk(m, oldp, nb, 0);
5237 check_inuse_chunk(m, newp);
5246 void* dlmemalign(size_t alignment, size_t bytes) {
5247 if (alignment <= MALLOC_ALIGNMENT) {
5248 return dlmalloc(bytes);
5250 return internal_memalign(gm, alignment, bytes);
5253 int dlposix_memalign(void** pp, size_t alignment, size_t bytes) {
5255 if (alignment == MALLOC_ALIGNMENT)
5256 mem = dlmalloc(bytes);
5258 size_t d = alignment / sizeof(void*);
5259 size_t r = alignment % sizeof(void*);
5260 if (r != 0 || d == 0 || (d & (d-SIZE_T_ONE)) != 0)
5262 else if (bytes >= MAX_REQUEST - alignment) {
5263 if (alignment < MIN_CHUNK_SIZE)
5264 alignment = MIN_CHUNK_SIZE;
5265 mem = internal_memalign(gm, alignment, bytes);
5276 void* dlvalloc(size_t bytes) {
5278 ensure_initialization();
5279 pagesz = mparams.page_size;
5280 return dlmemalign(pagesz, bytes);
5283 void* dlpvalloc(size_t bytes) {
5285 ensure_initialization();
5286 pagesz = mparams.page_size;
5287 return dlmemalign(pagesz, (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE));
5290 void** dlindependent_calloc(size_t n_elements, size_t elem_size,
5292 size_t sz = elem_size; /* serves as 1-element array */
5293 return ialloc(gm, n_elements, &sz, 3, chunks);
5296 void** dlindependent_comalloc(size_t n_elements, size_t sizes[],
5298 return ialloc(gm, n_elements, sizes, 0, chunks);
5301 size_t dlbulk_free(void* array[], size_t nelem) {
5302 return internal_bulk_free(gm, array, nelem);
5305 #if MALLOC_INSPECT_ALL
5306 void dlmalloc_inspect_all(void(*handler)(void *start,
5309 void* callback_arg),
5311 ensure_initialization();
5312 if (!PREACTION(gm)) {
5313 internal_inspect_all(gm, handler, arg);
5317 #endif /* MALLOC_INSPECT_ALL */
5319 int dlmalloc_trim(size_t pad) {
5321 ensure_initialization();
5322 if (!PREACTION(gm)) {
5323 result = sys_trim(gm, pad);
5329 size_t dlmalloc_footprint(void) {
5330 return gm->footprint;
5333 size_t dlmalloc_max_footprint(void) {
5334 return gm->max_footprint;
5337 size_t dlmalloc_footprint_limit(void) {
5338 size_t maf = gm->footprint_limit;
5339 return maf == 0 ? MAX_SIZE_T : maf;
5342 size_t dlmalloc_set_footprint_limit(size_t bytes) {
5343 size_t result; /* invert sense of 0 */
5345 result = granularity_align(1); /* Use minimal size */
5346 if (bytes == MAX_SIZE_T)
5347 result = 0; /* disable */
5349 result = granularity_align(bytes);
5350 return gm->footprint_limit = result;
5354 struct mallinfo dlmallinfo(void) {
5355 return internal_mallinfo(gm);
5357 #endif /* NO_MALLINFO */
5359 #if !NO_MALLOC_STATS
5360 void dlmalloc_stats() {
5361 internal_malloc_stats(gm);
5363 #endif /* NO_MALLOC_STATS */
5365 int dlmallopt(int param_number, int value) {
5366 return change_mparam(param_number, value);
5369 size_t dlmalloc_usable_size(void* mem) {
5371 mchunkptr p = mem2chunk(mem);
5373 return chunksize(p) - overhead_for(p);
5378 #endif /* !ONLY_MSPACES */
5380 /* ----------------------------- user mspaces ---------------------------- */
5384 static mstate init_user_mstate(char* tbase, size_t tsize) {
5385 size_t msize = pad_request(sizeof(struct malloc_state));
5387 mchunkptr msp = align_as_chunk(tbase);
5388 mstate m = (mstate)(chunk2mem(msp));
5389 memset(m, 0, msize);
5390 (void)INITIAL_LOCK(&m->mutex);
5391 msp->head = (msize|INUSE_BITS);
5392 m->seg.base = m->least_addr = tbase;
5393 m->seg.size = m->footprint = m->max_footprint = tsize;
5394 m->magic = mparams.magic;
5395 m->release_checks = MAX_RELEASE_CHECK_RATE;
5396 m->mflags = mparams.default_mflags;
5399 disable_contiguous(m);
5401 mn = next_chunk(mem2chunk(m));
5402 init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE);
5403 check_top_chunk(m, m->top);
5407 mspace create_mspace(size_t capacity, int locked) {
5410 ensure_initialization();
5411 msize = pad_request(sizeof(struct malloc_state));
5412 if (capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
5413 size_t rs = ((capacity == 0)? mparams.granularity :
5414 (capacity + TOP_FOOT_SIZE + msize));
5415 size_t tsize = granularity_align(rs);
5416 char* tbase = (char*)(CALL_MMAP(tsize));
5417 if (tbase != CMFAIL) {
5418 m = init_user_mstate(tbase, tsize);
5419 m->seg.sflags = USE_MMAP_BIT;
5420 set_lock(m, locked);
5426 mspace create_mspace_with_base(void* base, size_t capacity, int locked) {
5429 ensure_initialization();
5430 msize = pad_request(sizeof(struct malloc_state));
5431 if (capacity > msize + TOP_FOOT_SIZE &&
5432 capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
5433 m = init_user_mstate((char*)base, capacity);
5434 m->seg.sflags = EXTERN_BIT;
5435 set_lock(m, locked);
5440 int mspace_track_large_chunks(mspace msp, int enable) {
5442 mstate ms = (mstate)msp;
5443 if (!PREACTION(ms)) {
5455 size_t destroy_mspace(mspace msp) {
5457 mstate ms = (mstate)msp;
5459 msegmentptr sp = &ms->seg;
5460 (void)DESTROY_LOCK(&ms->mutex); /* destroy before unmapped */
5462 char* base = sp->base;
5463 size_t size = sp->size;
5464 flag_t flag = sp->sflags;
5466 if ((flag & USE_MMAP_BIT) && !(flag & EXTERN_BIT) &&
5467 CALL_MUNMAP(base, size) == 0)
5472 USAGE_ERROR_ACTION(ms,ms);
5478 mspace versions of routines are near-clones of the global
5479 versions. This is not so nice but better than the alternatives.
5482 void* mspace_malloc(mspace msp, size_t bytes) {
5483 mstate ms = (mstate)msp;
5484 if (!ok_magic(ms)) {
5485 USAGE_ERROR_ACTION(ms,ms);
5488 if (!PREACTION(ms)) {
5491 if (bytes <= MAX_SMALL_REQUEST) {
5494 nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
5495 idx = small_index(nb);
5496 smallbits = ms->smallmap >> idx;
5498 if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
5500 idx += ~smallbits & 1; /* Uses next bin if idx empty */
5501 b = smallbin_at(ms, idx);
5503 assert(chunksize(p) == small_index2size(idx));
5504 unlink_first_small_chunk(ms, b, p, idx);
5505 set_inuse_and_pinuse(ms, p, small_index2size(idx));
5507 check_malloced_chunk(ms, mem, nb);
5511 else if (nb > ms->dvsize) {
5512 if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
5516 binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
5517 binmap_t leastbit = least_bit(leftbits);
5518 compute_bit2idx(leastbit, i);
5519 b = smallbin_at(ms, i);
5521 assert(chunksize(p) == small_index2size(i));
5522 unlink_first_small_chunk(ms, b, p, i);
5523 rsize = small_index2size(i) - nb;
5524 /* Fit here cannot be remainderless if 4byte sizes */
5525 if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
5526 set_inuse_and_pinuse(ms, p, small_index2size(i));
5528 set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
5529 r = chunk_plus_offset(p, nb);
5530 set_size_and_pinuse_of_free_chunk(r, rsize);
5531 replace_dv(ms, r, rsize);
5534 check_malloced_chunk(ms, mem, nb);
5538 else if (ms->treemap != 0 && (mem = tmalloc_small(ms, nb)) != 0) {
5539 check_malloced_chunk(ms, mem, nb);
5544 else if (bytes >= MAX_REQUEST)
5545 nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
5547 nb = pad_request(bytes);
5548 if (ms->treemap != 0 && (mem = tmalloc_large(ms, nb)) != 0) {
5549 check_malloced_chunk(ms, mem, nb);
5554 if (nb <= ms->dvsize) {
5555 size_t rsize = ms->dvsize - nb;
5556 mchunkptr p = ms->dv;
5557 if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
5558 mchunkptr r = ms->dv = chunk_plus_offset(p, nb);
5560 set_size_and_pinuse_of_free_chunk(r, rsize);
5561 set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
5563 else { /* exhaust dv */
5564 size_t dvs = ms->dvsize;
5567 set_inuse_and_pinuse(ms, p, dvs);
5570 check_malloced_chunk(ms, mem, nb);
5574 else if (nb < ms->topsize) { /* Split top */
5575 size_t rsize = ms->topsize -= nb;
5576 mchunkptr p = ms->top;
5577 mchunkptr r = ms->top = chunk_plus_offset(p, nb);
5578 r->head = rsize | PINUSE_BIT;
5579 set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
5581 check_top_chunk(ms, ms->top);
5582 check_malloced_chunk(ms, mem, nb);
5586 mem = sys_alloc(ms, nb);
5596 void mspace_free(mspace msp, void* mem) {
5598 mchunkptr p = mem2chunk(mem);
5600 mstate fm = get_mstate_for(p);
5601 msp = msp; /* placate people compiling -Wunused */
5603 mstate fm = (mstate)msp;
5604 #endif /* FOOTERS */
5605 if (!ok_magic(fm)) {
5606 USAGE_ERROR_ACTION(fm, p);
5609 if (!PREACTION(fm)) {
5610 check_inuse_chunk(fm, p);
5611 if (RTCHECK(ok_address(fm, p) && ok_inuse(p))) {
5612 size_t psize = chunksize(p);
5613 mchunkptr next = chunk_plus_offset(p, psize);
5615 size_t prevsize = p->prev_foot;
5616 if (is_mmapped(p)) {
5617 psize += prevsize + MMAP_FOOT_PAD;
5618 if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
5619 fm->footprint -= psize;
5623 mchunkptr prev = chunk_minus_offset(p, prevsize);
5626 if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
5628 unlink_chunk(fm, p, prevsize);
5630 else if ((next->head & INUSE_BITS) == INUSE_BITS) {
5632 set_free_with_pinuse(p, psize, next);
5641 if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
5642 if (!cinuse(next)) { /* consolidate forward */
5643 if (next == fm->top) {
5644 size_t tsize = fm->topsize += psize;
5646 p->head = tsize | PINUSE_BIT;
5651 if (should_trim(fm, tsize))
5655 else if (next == fm->dv) {
5656 size_t dsize = fm->dvsize += psize;
5658 set_size_and_pinuse_of_free_chunk(p, dsize);
5662 size_t nsize = chunksize(next);
5664 unlink_chunk(fm, next, nsize);
5665 set_size_and_pinuse_of_free_chunk(p, psize);
5673 set_free_with_pinuse(p, psize, next);
5675 if (is_small(psize)) {
5676 insert_small_chunk(fm, p, psize);
5677 check_free_chunk(fm, p);
5680 tchunkptr tp = (tchunkptr)p;
5681 insert_large_chunk(fm, tp, psize);
5682 check_free_chunk(fm, p);
5683 if (--fm->release_checks == 0)
5684 release_unused_segments(fm);
5690 USAGE_ERROR_ACTION(fm, p);
5697 void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size) {
5700 mstate ms = (mstate)msp;
5701 if (!ok_magic(ms)) {
5702 USAGE_ERROR_ACTION(ms,ms);
5705 if (n_elements != 0) {
5706 req = n_elements * elem_size;
5707 if (((n_elements | elem_size) & ~(size_t)0xffff) &&
5708 (req / n_elements != elem_size))
5709 req = MAX_SIZE_T; /* force downstream failure on overflow */
5711 mem = internal_malloc(ms, req);
5712 if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
5713 memset(mem, 0, req);
5717 void* mspace_realloc(mspace msp, void* oldmem, size_t bytes) {
5720 mem = mspace_malloc(msp, bytes);
5722 else if (bytes >= MAX_REQUEST) {
5723 MALLOC_FAILURE_ACTION;
5725 #ifdef REALLOC_ZERO_BYTES_FREES
5726 else if (bytes == 0) {
5727 mspace_free(msp, oldmem);
5729 #endif /* REALLOC_ZERO_BYTES_FREES */
5731 size_t nb = request2size(bytes);
5732 mchunkptr oldp = mem2chunk(oldmem);
5734 mstate m = (mstate)msp;
5736 mstate m = get_mstate_for(oldp);
5738 USAGE_ERROR_ACTION(m, oldmem);
5741 #endif /* FOOTERS */
5742 if (!PREACTION(m)) {
5743 mchunkptr newp = try_realloc_chunk(m, oldp, nb, 1);
5746 check_inuse_chunk(m, newp);
5747 mem = chunk2mem(newp);
5750 mem = mspace_malloc(m, bytes);
5752 size_t oc = chunksize(oldp) - overhead_for(oldp);
5753 memcpy(mem, oldmem, (oc < bytes)? oc : bytes);
5754 mspace_free(m, oldmem);
5762 void* mspace_realloc_in_place(mspace msp, void* oldmem, size_t bytes) {
5765 if (bytes >= MAX_REQUEST) {
5766 MALLOC_FAILURE_ACTION;
5769 size_t nb = request2size(bytes);
5770 mchunkptr oldp = mem2chunk(oldmem);
5772 mstate m = (mstate)msp;
5774 mstate m = get_mstate_for(oldp);
5775 msp = msp; /* placate people compiling -Wunused */
5777 USAGE_ERROR_ACTION(m, oldmem);
5780 #endif /* FOOTERS */
5781 if (!PREACTION(m)) {
5782 mchunkptr newp = try_realloc_chunk(m, oldp, nb, 0);
5785 check_inuse_chunk(m, newp);
5794 void* mspace_memalign(mspace msp, size_t alignment, size_t bytes) {
5795 mstate ms = (mstate)msp;
5796 if (!ok_magic(ms)) {
5797 USAGE_ERROR_ACTION(ms,ms);
5800 if (alignment <= MALLOC_ALIGNMENT)
5801 return mspace_malloc(msp, bytes);
5802 return internal_memalign(ms, alignment, bytes);
5805 void** mspace_independent_calloc(mspace msp, size_t n_elements,
5806 size_t elem_size, void* chunks[]) {
5807 size_t sz = elem_size; /* serves as 1-element array */
5808 mstate ms = (mstate)msp;
5809 if (!ok_magic(ms)) {
5810 USAGE_ERROR_ACTION(ms,ms);
5813 return ialloc(ms, n_elements, &sz, 3, chunks);
5816 void** mspace_independent_comalloc(mspace msp, size_t n_elements,
5817 size_t sizes[], void* chunks[]) {
5818 mstate ms = (mstate)msp;
5819 if (!ok_magic(ms)) {
5820 USAGE_ERROR_ACTION(ms,ms);
5823 return ialloc(ms, n_elements, sizes, 0, chunks);
5826 size_t mspace_bulk_free(mspace msp, void* array[], size_t nelem) {
5827 return internal_bulk_free((mstate)msp, array, nelem);
5830 #if MALLOC_INSPECT_ALL
5831 void mspace_inspect_all(mspace msp,
5832 void(*handler)(void *start,
5835 void* callback_arg),
5837 mstate ms = (mstate)msp;
5839 if (!PREACTION(ms)) {
5840 internal_inspect_all(ms, handler, arg);
5845 USAGE_ERROR_ACTION(ms,ms);
5848 #endif /* MALLOC_INSPECT_ALL */
5850 int mspace_trim(mspace msp, size_t pad) {
5852 mstate ms = (mstate)msp;
5854 if (!PREACTION(ms)) {
5855 result = sys_trim(ms, pad);
5860 USAGE_ERROR_ACTION(ms,ms);
5865 #if !NO_MALLOC_STATS
5866 void mspace_malloc_stats(mspace msp) {
5867 mstate ms = (mstate)msp;
5869 internal_malloc_stats(ms);
5872 USAGE_ERROR_ACTION(ms,ms);
5875 #endif /* NO_MALLOC_STATS */
5877 size_t mspace_footprint(mspace msp) {
5879 mstate ms = (mstate)msp;
5881 result = ms->footprint;
5884 USAGE_ERROR_ACTION(ms,ms);
5889 size_t mspace_max_footprint(mspace msp) {
5891 mstate ms = (mstate)msp;
5893 result = ms->max_footprint;
5896 USAGE_ERROR_ACTION(ms,ms);
5901 size_t mspace_footprint_limit(mspace msp) {
5903 mstate ms = (mstate)msp;
5905 size_t maf = ms->footprint_limit;
5906 result = (maf == 0) ? MAX_SIZE_T : maf;
5909 USAGE_ERROR_ACTION(ms,ms);
5914 size_t mspace_set_footprint_limit(mspace msp, size_t bytes) {
5916 mstate ms = (mstate)msp;
5919 result = granularity_align(1); /* Use minimal size */
5920 if (bytes == MAX_SIZE_T)
5921 result = 0; /* disable */
5923 result = granularity_align(bytes);
5924 ms->footprint_limit = result;
5927 USAGE_ERROR_ACTION(ms,ms);
5933 struct mallinfo mspace_mallinfo(mspace msp) {
5934 mstate ms = (mstate)msp;
5935 if (!ok_magic(ms)) {
5936 USAGE_ERROR_ACTION(ms,ms);
5938 return internal_mallinfo(ms);
5940 #endif /* NO_MALLINFO */
5942 size_t mspace_usable_size(void* mem) {
5944 mchunkptr p = mem2chunk(mem);
5946 return chunksize(p) - overhead_for(p);
5951 int mspace_mallopt(int param_number, int value) {
5952 return change_mparam(param_number, value);
5955 #endif /* MSPACES */
5958 /* -------------------- Alternative MORECORE functions ------------------- */
5961 Guidelines for creating a custom version of MORECORE:
5963 * For best performance, MORECORE should allocate in multiples of pagesize.
5964 * MORECORE may allocate more memory than requested. (Or even less,
5965 but this will usually result in a malloc failure.)
5966 * MORECORE must not allocate memory when given argument zero, but
5967 instead return one past the end address of memory from previous
5969 * For best performance, consecutive calls to MORECORE with positive
5970 arguments should return increasing addresses, indicating that
5971 space has been contiguously extended.
5972 * Even though consecutive calls to MORECORE need not return contiguous
5973 addresses, it must be OK for malloc'ed chunks to span multiple
5974 regions in those cases where they do happen to be contiguous.
5975 * MORECORE need not handle negative arguments -- it may instead
5976 just return MFAIL when given negative arguments.
5977 Negative arguments are always multiples of pagesize. MORECORE
5978 must not misinterpret negative args as large positive unsigned
5979 args. You can suppress all such calls from even occurring by defining
5980 MORECORE_CANNOT_TRIM,
5982 As an example alternative MORECORE, here is a custom allocator
5983 kindly contributed for pre-OSX macOS. It uses virtually but not
5984 necessarily physically contiguous non-paged memory (locked in,
5985 present and won't get swapped out). You can use it by uncommenting
5986 this section, adding some #includes, and setting up the appropriate
5989 #define MORECORE osMoreCore
5991 There is also a shutdown routine that should somehow be called for
5992 cleanup upon program exit.
5994 #define MAX_POOL_ENTRIES 100
5995 #define MINIMUM_MORECORE_SIZE (64 * 1024U)
5996 static int next_os_pool;
5997 void *our_os_pools[MAX_POOL_ENTRIES];
5999 void *osMoreCore(int size)
6002 static void *sbrk_top = 0;
6006 if (size < MINIMUM_MORECORE_SIZE)
6007 size = MINIMUM_MORECORE_SIZE;
6008 if (CurrentExecutionLevel() == kTaskLevel)
6009 ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
6012 return (void *) MFAIL;
6014 // save ptrs so they can be freed during cleanup
6015 our_os_pools[next_os_pool] = ptr;
6017 ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
6018 sbrk_top = (char *) ptr + size;
6023 // we don't currently support shrink behavior
6024 return (void *) MFAIL;
6032 // cleanup any allocated memory pools
6033 // called as last thing before shutting down driver
6035 void osCleanupMem(void)
6039 for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
6042 PoolDeallocate(*ptr);
6050 /* -----------------------------------------------------------------------
6052 v2.8.5 Sun May 22 10:26:02 2011 Doug Lea (dl at gee)
6053 * Always perform unlink checks unless INSECURE
6054 * Add posix_memalign.
6055 * Improve realloc to expand in more cases; expose realloc_in_place.
6056 Thanks to Peter Buhr for the suggestion.
6057 * Add footprint_limit, inspect_all, bulk_free. Thanks
6058 to Barry Hayes and others for the suggestions.
6059 * Internal refactorings to avoid calls while holding locks
6060 * Use non-reentrant locks by default. Thanks to Roland McGrath
6062 * Small fixes to mspace_destroy, reset_on_error.
6063 * Various configuration extensions/changes. Thanks
6064 to all who contributed these.
6066 V2.8.4a Thu Apr 28 14:39:43 2011 (dl at gee.cs.oswego.edu)
6067 * Update Creative Commons URL
6069 V2.8.4 Wed May 27 09:56:23 2009 Doug Lea (dl at gee)
6070 * Use zeros instead of prev foot for is_mmapped
6071 * Add mspace_track_large_chunks; thanks to Jean Brouwers
6072 * Fix set_inuse in internal_realloc; thanks to Jean Brouwers
6073 * Fix insufficient sys_alloc padding when using 16byte alignment
6074 * Fix bad error check in mspace_footprint
6075 * Adaptations for ptmalloc; thanks to Wolfram Gloger.
6076 * Reentrant spin locks; thanks to Earl Chew and others
6077 * Win32 improvements; thanks to Niall Douglas and Earl Chew
6078 * Add NO_SEGMENT_TRAVERSAL and MAX_RELEASE_CHECK_RATE options
6079 * Extension hook in malloc_state
6080 * Various small adjustments to reduce warnings on some compilers
6081 * Various configuration extensions/changes for more platforms. Thanks
6082 to all who contributed these.
6084 V2.8.3 Thu Sep 22 11:16:32 2005 Doug Lea (dl at gee)
6085 * Add max_footprint functions
6086 * Ensure all appropriate literals are size_t
6087 * Fix conditional compilation problem for some #define settings
6088 * Avoid concatenating segments with the one provided
6089 in create_mspace_with_base
6090 * Rename some variables to avoid compiler shadowing warnings
6091 * Use explicit lock initialization.
6092 * Better handling of sbrk interference.
6093 * Simplify and fix segment insertion, trimming and mspace_destroy
6094 * Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x
6095 * Thanks especially to Dennis Flanagan for help on these.
6097 V2.8.2 Sun Jun 12 16:01:10 2005 Doug Lea (dl at gee)
6098 * Fix memalign brace error.
6100 V2.8.1 Wed Jun 8 16:11:46 2005 Doug Lea (dl at gee)
6101 * Fix improper #endif nesting in C++
6102 * Add explicit casts needed for C++
6104 V2.8.0 Mon May 30 14:09:02 2005 Doug Lea (dl at gee)
6105 * Use trees for large bins
6107 * Use segments to unify sbrk-based and mmap-based system allocation,
6108 removing need for emulation on most platforms without sbrk.
6109 * Default safety checks
6110 * Optional footer checks. Thanks to William Robertson for the idea.
6111 * Internal code refactoring
6112 * Incorporate suggestions and platform-specific changes.
6113 Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas,
6114 Aaron Bachmann, Emery Berger, and others.
6115 * Speed up non-fastbin processing enough to remove fastbins.
6116 * Remove useless cfree() to avoid conflicts with other apps.
6117 * Remove internal memcpy, memset. Compilers handle builtins better.
6118 * Remove some options that no one ever used and rename others.
6120 V2.7.2 Sat Aug 17 09:07:30 2002 Doug Lea (dl at gee)
6121 * Fix malloc_state bitmap array misdeclaration
6123 V2.7.1 Thu Jul 25 10:58:03 2002 Doug Lea (dl at gee)
6124 * Allow tuning of FIRST_SORTED_BIN_SIZE
6125 * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
6126 * Better detection and support for non-contiguousness of MORECORE.
6127 Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
6128 * Bypass most of malloc if no frees. Thanks To Emery Berger.
6129 * Fix freeing of old top non-contiguous chunk im sysmalloc.
6130 * Raised default trim and map thresholds to 256K.
6131 * Fix mmap-related #defines. Thanks to Lubos Lunak.
6132 * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
6133 * Branch-free bin calculation
6134 * Default trim and mmap thresholds now 256K.
6136 V2.7.0 Sun Mar 11 14:14:06 2001 Doug Lea (dl at gee)
6137 * Introduce independent_comalloc and independent_calloc.
6138 Thanks to Michael Pachos for motivation and help.
6139 * Make optional .h file available
6140 * Allow > 2GB requests on 32bit systems.
6141 * new WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>.
6142 Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
6144 * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
6146 * memalign: check alignment arg
6147 * realloc: don't try to shift chunks backwards, since this
6148 leads to more fragmentation in some programs and doesn't
6149 seem to help in any others.
6150 * Collect all cases in malloc requiring system memory into sysmalloc
6151 * Use mmap as backup to sbrk
6152 * Place all internal state in malloc_state
6153 * Introduce fastbins (although similar to 2.5.1)
6154 * Many minor tunings and cosmetic improvements
6155 * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
6156 * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
6157 Thanks to Tony E. Bennett <tbennett@nvidia.com> and others.
6158 * Include errno.h to support default failure action.
6160 V2.6.6 Sun Dec 5 07:42:19 1999 Doug Lea (dl at gee)
6161 * return null for negative arguments
6162 * Added Several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
6163 * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
6164 (e.g. WIN32 platforms)
6165 * Cleanup header file inclusion for WIN32 platforms
6166 * Cleanup code to avoid Microsoft Visual C++ compiler complaints
6167 * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
6168 memory allocation routines
6169 * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
6170 * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
6171 usage of 'assert' in non-WIN32 code
6172 * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
6174 * Always call 'fREe()' rather than 'free()'
6176 V2.6.5 Wed Jun 17 15:57:31 1998 Doug Lea (dl at gee)
6177 * Fixed ordering problem with boundary-stamping
6179 V2.6.3 Sun May 19 08:17:58 1996 Doug Lea (dl at gee)
6180 * Added pvalloc, as recommended by H.J. Liu
6181 * Added 64bit pointer support mainly from Wolfram Gloger
6182 * Added anonymously donated WIN32 sbrk emulation
6183 * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
6184 * malloc_extend_top: fix mask error that caused wastage after
6186 * Add linux mremap support code from HJ Liu
6188 V2.6.2 Tue Dec 5 06:52:55 1995 Doug Lea (dl at gee)
6189 * Integrated most documentation with the code.
6190 * Add support for mmap, with help from
6191 Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
6192 * Use last_remainder in more cases.
6193 * Pack bins using idea from colin@nyx10.cs.du.edu
6194 * Use ordered bins instead of best-fit threshhold
6195 * Eliminate block-local decls to simplify tracing and debugging.
6196 * Support another case of realloc via move into top
6197 * Fix error occuring when initial sbrk_base not word-aligned.
6198 * Rely on page size for units instead of SBRK_UNIT to
6199 avoid surprises about sbrk alignment conventions.
6200 * Add mallinfo, mallopt. Thanks to Raymond Nijssen
6201 (raymond@es.ele.tue.nl) for the suggestion.
6202 * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
6203 * More precautions for cases where other routines call sbrk,
6204 courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
6205 * Added macros etc., allowing use in linux libc from
6206 H.J. Lu (hjl@gnu.ai.mit.edu)
6207 * Inverted this history list
6209 V2.6.1 Sat Dec 2 14:10:57 1995 Doug Lea (dl at gee)
6210 * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
6211 * Removed all preallocation code since under current scheme
6212 the work required to undo bad preallocations exceeds
6213 the work saved in good cases for most test programs.
6214 * No longer use return list or unconsolidated bins since
6215 no scheme using them consistently outperforms those that don't
6216 given above changes.
6217 * Use best fit for very large chunks to prevent some worst-cases.
6218 * Added some support for debugging
6220 V2.6.0 Sat Nov 4 07:05:23 1995 Doug Lea (dl at gee)
6221 * Removed footers when chunks are in use. Thanks to
6222 Paul Wilson (wilson@cs.texas.edu) for the suggestion.
6224 V2.5.4 Wed Nov 1 07:54:51 1995 Doug Lea (dl at gee)
6225 * Added malloc_trim, with help from Wolfram Gloger
6226 (wmglo@Dent.MED.Uni-Muenchen.DE).
6228 V2.5.3 Tue Apr 26 10:16:01 1994 Doug Lea (dl at g)
6230 V2.5.2 Tue Apr 5 16:20:40 1994 Doug Lea (dl at g)
6231 * realloc: try to expand in both directions
6232 * malloc: swap order of clean-bin strategy;
6233 * realloc: only conditionally expand backwards
6234 * Try not to scavenge used bins
6235 * Use bin counts as a guide to preallocation
6236 * Occasionally bin return list chunks in first scan
6237 * Add a few optimizations from colin@nyx10.cs.du.edu
6239 V2.5.1 Sat Aug 14 15:40:43 1993 Doug Lea (dl at g)
6240 * faster bin computation & slightly different binning
6241 * merged all consolidations to one part of malloc proper
6242 (eliminating old malloc_find_space & malloc_clean_bin)
6243 * Scan 2 returns chunks (not just 1)
6244 * Propagate failure in realloc if malloc returns 0
6245 * Add stuff to allow compilation on non-ANSI compilers
6246 from kpv@research.att.com
6248 V2.5 Sat Aug 7 07:41:59 1993 Doug Lea (dl at g.oswego.edu)
6249 * removed potential for odd address access in prev_chunk
6250 * removed dependency on getpagesize.h
6251 * misc cosmetics and a bit more internal documentation
6252 * anticosmetics: mangled names in macros to evade debugger strangeness
6253 * tested on sparc, hp-700, dec-mips, rs6000
6254 with gcc & native cc (hp, dec only) allowing
6255 Detlefs & Zorn comparison study (in SIGPLAN Notices.)
6257 Trial version Fri Aug 28 13:14:29 1992 Doug Lea (dl at g.oswego.edu)
6258 * Based loosely on libg++-1.2X malloc. (It retains some of the overall
6259 structure of old version, but most details differ.)
6265 #include "_PDCLIB_test.h"
6270 return TEST_RESULTS;