2 This is a version (aka dlmalloc) of malloc/free/realloc written by
\r
3 Doug Lea and released to the public domain, as explained at
\r
4 http://creativecommons.org/publicdomain/zero/1.0/ Send questions,
\r
5 comments, complaints, performance data, etc to dl@cs.oswego.edu
\r
7 * Version 2.8.5 Sun May 22 10:26:02 2011 Doug Lea (dl at gee)
\r
9 Note: There may be an updated version of this malloc obtainable at
\r
10 ftp://gee.cs.oswego.edu/pub/misc/malloc.c
\r
11 Check before installing!
\r
15 This library is all in one file to simplify the most common usage:
\r
16 ftp it, compile it (-O3), and link it into another program. All of
\r
17 the compile-time options default to reasonable values for use on
\r
18 most platforms. You might later want to step through various
\r
19 compile-time and dynamic tuning options.
\r
21 For convenience, an include file for code using this malloc is at:
\r
22 ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.5.h
\r
23 You don't really need this .h file unless you call functions not
\r
24 defined in your system include files. The .h file contains only the
\r
25 excerpts from this file needed for using this malloc on ANSI C/C++
\r
26 systems, so long as you haven't changed compile-time options about
\r
27 naming and tuning parameters. If you do, then you can create your
\r
28 own malloc.h that does include all settings by cutting at the point
\r
29 indicated below. Note that you may already by default be using a C
\r
30 library containing a malloc that is based on some version of this
\r
31 malloc (for example in linux). You might still want to use the one
\r
32 in this file to customize settings or to avoid overheads associated
\r
33 with library versions.
\r
37 Supported pointer/size_t representation: 4 or 8 bytes
\r
38 size_t MUST be an unsigned type of the same width as
\r
39 pointers. (If you are using an ancient system that declares
\r
40 size_t as a signed type, or need it to be a different width
\r
41 than pointers, you can use a previous release of this malloc
\r
42 (e.g. 2.7.2) supporting these.)
\r
44 Alignment: 8 bytes (default)
\r
45 This suffices for nearly all current machines and C compilers.
\r
46 However, you can define MALLOC_ALIGNMENT to be wider than this
\r
if necessary (up to 128 bytes), at the expense of using more space.
\r
49 Minimum overhead per allocated chunk: 4 or 8 bytes (if 4byte sizes)
\r
50 8 or 16 bytes (if 8byte sizes)
\r
51 Each malloced chunk has a hidden word of overhead holding size
\r
52 and status information, and additional cross-check word
\r
53 if FOOTERS is defined.
\r
55 Minimum allocated size: 4-byte ptrs: 16 bytes (including overhead)
\r
56 8-byte ptrs: 32 bytes (including overhead)
\r
58 Even a request for zero bytes (i.e., malloc(0)) returns a
\r
59 pointer to something of the minimum allocatable size.
\r
The maximum overhead wastage (i.e., the number of extra bytes allocated
beyond what was requested in malloc) is less than or equal
\r
62 to the minimum size, except for requests >= mmap_threshold that
\r
63 are serviced via mmap(), where the worst case wastage is about
\r
64 32 bytes plus the remainder from a system page (the minimal
\r
65 mmap unit); typically 4096 or 8192 bytes.
\r
67 Security: static-safe; optionally more or less
\r
68 The "security" of malloc refers to the ability of malicious
\r
69 code to accentuate the effects of errors (for example, freeing
\r
70 space that is not currently malloc'ed or overwriting past the
\r
71 ends of chunks) in code that calls malloc. This malloc
\r
72 guarantees not to modify any memory locations below the base of
\r
73 heap, i.e., static variables, even in the presence of usage
\r
74 errors. The routines additionally detect most improper frees
\r
75 and reallocs. All this holds as long as the static bookkeeping
\r
76 for malloc itself is not corrupted by some other means. This
\r
77 is only one aspect of security -- these checks do not, and
\r
78 cannot, detect all possible programming errors.
\r
80 If FOOTERS is defined nonzero, then each allocated chunk
\r
81 carries an additional check word to verify that it was malloced
\r
82 from its space. These check words are the same within each
\r
83 execution of a program using malloc, but differ across
\r
84 executions, so externally crafted fake chunks cannot be
\r
85 freed. This improves security by rejecting frees/reallocs that
\r
86 could corrupt heap memory, in addition to the checks preventing
\r
87 writes to statics that are always on. This may further improve
\r
88 security at the expense of time and space overhead. (Note that
\r
89 FOOTERS may also be worth using with MSPACES.)
\r
91 By default detected errors cause the program to abort (calling
\r
92 "abort()"). You can override this to instead proceed past
\r
93 errors by defining PROCEED_ON_ERROR. In this case, a bad free
\r
94 has no effect, and a malloc that encounters a bad address
\r
95 caused by user overwrites will ignore the bad address by
\r
96 dropping pointers and indices to all known memory. This may
\r
97 be appropriate for programs that should continue if at all
\r
98 possible in the face of programming errors, although they may
\r
99 run out of memory because dropped memory is never reclaimed.
\r
101 If you don't like either of these options, you can define
\r
102 CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
\r
else. And if you are sure that your program using malloc has
\r
104 no errors or vulnerabilities, you can define INSECURE to 1,
\r
105 which might (or might not) provide a small performance improvement.
\r
107 It is also possible to limit the maximum total allocatable
\r
108 space, using malloc_set_footprint_limit. This is not
\r
109 designed as a security feature in itself (calls to set limits
\r
110 are not screened or privileged), but may be useful as one
\r
111 aspect of a secure implementation.
\r
113 Thread-safety: NOT thread-safe unless USE_LOCKS defined non-zero
\r
114 When USE_LOCKS is defined, each public call to malloc, free,
\r
115 etc is surrounded with a lock. By default, this uses a plain
\r
pthread mutex, win32 critical section, or a spin-lock if
\r
117 available for the platform and not disabled by setting
\r
118 USE_SPIN_LOCKS=0. However, if USE_RECURSIVE_LOCKS is defined,
\r
119 recursive versions are used instead (which are not required for
\r
120 base functionality but may be needed in layered extensions).
\r
121 Using a global lock is not especially fast, and can be a major
\r
122 bottleneck. It is designed only to provide minimal protection
\r
123 in concurrent environments, and to provide a basis for
\r
124 extensions. If you are using malloc in a concurrent program,
\r
125 consider instead using nedmalloc
\r
126 (http://www.nedprod.com/programs/portable/nedmalloc/) or
\r
127 ptmalloc (See http://www.malloc.de), which are derived from
\r
128 versions of this malloc.
\r
130 System requirements: Any combination of MORECORE and/or MMAP/MUNMAP
\r
131 This malloc can use unix sbrk or any emulation (invoked using
\r
132 the CALL_MORECORE macro) and/or mmap/munmap or any emulation
\r
133 (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system
\r
134 memory. On most unix systems, it tends to work best if both
\r
135 MORECORE and MMAP are enabled. On Win32, it uses emulations
\r
136 based on VirtualAlloc. It also uses common C library functions
\r
139 Compliance: I believe it is compliant with the Single Unix Specification
\r
(See http://www.unix.org). Also SVID/XPG, ANSI C, and probably others as well.
\r
143 * Overview of algorithms
\r
145 This is not the fastest, most space-conserving, most portable, or
\r
146 most tunable malloc ever written. However it is among the fastest
\r
147 while also being among the most space-conserving, portable and
\r
148 tunable. Consistent balance across these factors results in a good
\r
149 general-purpose allocator for malloc-intensive programs.
\r
151 In most ways, this malloc is a best-fit allocator. Generally, it
\r
152 chooses the best-fitting existing chunk for a request, with ties
\r
153 broken in approximately least-recently-used order. (This strategy
\r
154 normally maintains low fragmentation.) However, for requests less
\r
than 256 bytes, it deviates from best-fit when there is not an
\r
156 exactly fitting available chunk by preferring to use space adjacent
\r
157 to that used for the previous small request, as well as by breaking
\r
158 ties in approximately most-recently-used order. (These enhance
\r
159 locality of series of small allocations.) And for very large requests
\r
160 (>= 256Kb by default), it relies on system memory mapping
\r
161 facilities, if supported. (This helps avoid carrying around and
\r
162 possibly fragmenting memory used only for large chunks.)
\r
164 All operations (except malloc_stats and mallinfo) have execution
\r
165 times that are bounded by a constant factor of the number of bits in
\r
166 a size_t, not counting any clearing in calloc or copying in realloc,
\r
167 or actions surrounding MORECORE and MMAP that have times
\r
168 proportional to the number of non-contiguous regions returned by
\r
169 system allocation routines, which is often just 1. In real-time
\r
170 applications, you can optionally suppress segment traversals using
\r
171 NO_SEGMENT_TRAVERSAL, which assures bounded execution even when
\r
172 system allocators return non-contiguous spaces, at the typical
\r
173 expense of carrying around more memory and increased fragmentation.
\r
175 The implementation is not very modular and seriously overuses
\r
176 macros. Perhaps someday all C compilers will do as good a job
\r
177 inlining modular code as can now be done by brute-force expansion,
\r
but for now, enough of them seem not to.
\r
180 Some compilers issue a lot of warnings about code that is
\r
181 dead/unreachable only on some platforms, and also about intentional
\r
uses of negation on unsigned types. All known cases of each can be ignored.
\r
185 For a longer but out of date high-level description, see
\r
186 http://gee.cs.oswego.edu/dl/html/malloc.html
\r
189 If MSPACES is defined, then in addition to malloc, free, etc.,
\r
190 this file also defines mspace_malloc, mspace_free, etc. These
\r
191 are versions of malloc routines that take an "mspace" argument
\r
192 obtained using create_mspace, to control all internal bookkeeping.
\r
193 If ONLY_MSPACES is defined, only these versions are compiled.
\r
194 So if you would like to use this allocator for only some allocations,
\r
195 and your system malloc for others, you can compile with
\r
196 ONLY_MSPACES and then do something like...
\r
197 static mspace mymspace = create_mspace(0,0); // for example
\r
198 #define mymalloc(bytes) mspace_malloc(mymspace, bytes)
\r
200 (Note: If you only need one instance of an mspace, you can instead
\r
201 use "USE_DL_PREFIX" to relabel the global malloc.)
\r
203 You can similarly create thread-local allocators by storing
\r
204 mspaces as thread-locals. For example:
\r
205 static __thread mspace tlms = 0;
\r
206 void* tlmalloc(size_t bytes) {
\r
207 if (tlms == 0) tlms = create_mspace(0, 0);
\r
208 return mspace_malloc(tlms, bytes);
\r
210 void tlfree(void* mem) { mspace_free(tlms, mem); }
\r
212 Unless FOOTERS is defined, each mspace is completely independent.
\r
213 You cannot allocate from one and free to another (although
\r
214 conformance is only weakly checked, so usage errors are not always
\r
215 caught). If FOOTERS is defined, then each chunk carries around a tag
\r
216 indicating its originating mspace, and frees are directed to their
\r
217 originating spaces. Normally, this requires use of locks.
\r
219 ------------------------- Compile-time options ---------------------------
\r
221 Be careful in setting #define values for numerical constants of type
\r
222 size_t. On some systems, literal values are not automatically extended
\r
223 to size_t precision unless they are explicitly casted. You can also
\r
224 use the symbolic values MAX_SIZE_T, SIZE_T_ONE, etc below.
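For example (MY_LIMIT is a hypothetical constant, used only to illustrate
the point): on a 64-bit system where int is 32 bits,
    #define MY_LIMIT (4 * 1024 * 1024 * 1024)   // overflows int before widening
should instead be written as
    #define MY_LIMIT ((size_t)4U * (size_t)1024U * (size_t)1024U * (size_t)1024U)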
\r
226 WIN32 default: defined if _WIN32 defined
\r
227 Defining WIN32 sets up defaults for MS environment and compilers.
\r
228 Otherwise defaults are for unix. Beware that there seem to be some
\r
229 cases where this malloc might not be a pure drop-in replacement for
\r
Win32 malloc: Random-looking failures from Win32 GDI API's (e.g.,
\r
231 SetDIBits()) may be due to bugs in some video driver implementations
\r
232 when pixel buffers are malloc()ed, and the region spans more than
\r
233 one VirtualAlloc()ed region. Because dlmalloc uses a small (64Kb)
\r
234 default granularity, pixel buffers may straddle virtual allocation
\r
235 regions more often than when using the Microsoft allocator. You can
\r
236 avoid this by using VirtualAlloc() and VirtualFree() for all pixel
\r
237 buffers rather than using malloc(). If this is not possible,
\r
238 recompile this malloc with a larger DEFAULT_GRANULARITY. Note:
\r
239 in cases where MSC and gcc (cygwin) are known to differ on WIN32,
\r
240 conditions use _MSC_VER to distinguish them.
\r
242 DLMALLOC_EXPORT default: extern
\r
243 Defines how public APIs are declared. If you want to export via a
\r
244 Windows DLL, you might define this as
\r
#define DLMALLOC_EXPORT extern __declspec(dllexport)
\r
246 If you want a POSIX ELF shared object, you might use
\r
247 #define DLMALLOC_EXPORT extern __attribute__((visibility("default")))
\r
249 MALLOC_ALIGNMENT default: (size_t)8
\r
250 Controls the minimum alignment for malloc'ed chunks. It must be a
\r
251 power of two and at least 8, even on machines for which smaller
\r
252 alignments would suffice. It may be defined as larger than this
\r
253 though. Note however that code and data structures are optimized for
\r
254 the case of 8-byte alignment.
\r
256 MSPACES default: 0 (false)
\r
257 If true, compile in support for independent allocation spaces.
\r
258 This is only supported if HAVE_MMAP is true.
\r
260 ONLY_MSPACES default: 0 (false)
\r
261 If true, only compile in mspace versions, not regular versions.
\r
263 USE_LOCKS default: 0 (false)
\r
264 Causes each call to each public routine to be surrounded with
\r
265 pthread or WIN32 mutex lock/unlock. (If set true, this can be
\r
266 overridden on a per-mspace basis for mspace versions.) If set to a
\r
267 non-zero value other than 1, locks are used, but their
\r
268 implementation is left out, so lock functions must be supplied manually,
\r
269 as described below.
\r
271 USE_SPIN_LOCKS default: 1 iff USE_LOCKS and spin locks available
\r
272 If true, uses custom spin locks for locking. This is currently
\r
supported only for gcc >= 4.1, older gccs on x86 platforms, and recent
MS compilers. Otherwise, posix locks or win32 critical sections are
used.
\r
277 USE_RECURSIVE_LOCKS default: not defined
\r
278 If defined nonzero, uses recursive (aka reentrant) locks, otherwise
\r
279 uses plain mutexes. This is not required for malloc proper, but may
\r
280 be needed for layered allocators such as nedmalloc.
\r
FOOTERS default: 0
If true, provide extra checking and dispatching by placing
\r
284 information in the footers of allocated chunks. This adds
\r
285 space and time overhead.
\r
287 INSECURE default: 0
\r
288 If true, omit checks for usage errors and heap space overwrites.
\r
290 USE_DL_PREFIX default: NOT defined
\r
291 Causes compiler to prefix all public routines with the string 'dl'.
\r
292 This can be useful when you only want to use this malloc in one part
\r
293 of a program, using your regular system malloc elsewhere.
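For example (a sketch; the calling code is illustrative), after compiling
this file with -DUSE_DL_PREFIX you can mix the two allocators:
    void* p = dlmalloc(100);   // served by this allocator
    void* q = malloc(100);     // served by the system allocator
    dlfree(p);
    free(q);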
\r
295 MALLOC_INSPECT_ALL default: NOT defined
\r
296 If defined, compiles malloc_inspect_all and mspace_inspect_all, that
\r
297 perform traversal of all heap space. Unless access to these
\r
298 functions is otherwise restricted, you probably do not want to
\r
299 include them in secure implementations.
\r
301 ABORT default: defined as abort()
\r
302 Defines how to abort on failed checks. On most systems, a failed
\r
303 check cannot die with an "assert" or even print an informative
\r
304 message, because the underlying print routines in turn call malloc,
\r
305 which will fail again. Generally, the best policy is to simply call
\r
306 abort(). It's not very useful to do more than this because many
\r
307 errors due to overwriting will show up as address faults (null, odd
\r
308 addresses etc) rather than malloc-triggered checks, so will also
\r
309 abort. Also, most compilers know that abort() does not return, so
\r
310 can better optimize code conditionally calling it.
\r
312 PROCEED_ON_ERROR default: defined as 0 (false)
\r
Controls whether detected bad addresses cause them to be bypassed
\r
314 rather than aborting. If set, detected bad arguments to free and
\r
315 realloc are ignored. And all bookkeeping information is zeroed out
\r
316 upon a detected overwrite of freed heap space, thus losing the
\r
317 ability to ever return it from malloc again, but enabling the
\r
318 application to proceed. If PROCEED_ON_ERROR is defined, the
\r
319 static variable malloc_corruption_error_count is compiled in
\r
320 and can be examined to see if errors have occurred. This option
\r
321 generates slower code than the default abort policy.
\r
323 DEBUG default: NOT defined
\r
324 The DEBUG setting is mainly intended for people trying to modify
\r
325 this code or diagnose problems when porting to new platforms.
\r
326 However, it may also be able to better isolate user errors than just
\r
327 using runtime checks. The assertions in the check routines spell
\r
328 out in more detail the assumptions and invariants underlying the
\r
329 algorithms. The checking is fairly extensive, and will slow down
\r
330 execution noticeably. Calling malloc_stats or mallinfo with DEBUG
\r
331 set will attempt to check every non-mmapped allocated and free chunk
\r
332 in the course of computing the summaries.
\r
334 ABORT_ON_ASSERT_FAILURE default: defined as 1 (true)
\r
335 Debugging assertion failures can be nearly impossible if your
\r
336 version of the assert macro causes malloc to be called, which will
\r
337 lead to a cascade of further failures, blowing the runtime stack.
\r
ABORT_ON_ASSERT_FAILURE causes assertion failures to call abort(),
\r
339 which will usually make debugging easier.
\r
341 MALLOC_FAILURE_ACTION default: sets errno to ENOMEM, or no-op on win32
\r
The action to take before "return 0" when malloc fails because no
memory is available.
\r
345 HAVE_MORECORE default: 1 (true) unless win32 or ONLY_MSPACES
\r
346 True if this system supports sbrk or an emulation of it.
\r
348 MORECORE default: sbrk
\r
349 The name of the sbrk-style system routine to call to obtain more
\r
350 memory. See below for guidance on writing custom MORECORE
\r
351 functions. The type of the argument to sbrk/MORECORE varies across
\r
352 systems. It cannot be size_t, because it supports negative
\r
353 arguments, so it is normally the signed type of the same width as
\r
354 size_t (sometimes declared as "intptr_t"). It doesn't much matter
\r
355 though. Internally, we only call it with arguments less than half
\r
356 the max value of a size_t, which should work across all reasonable
\r
357 possibilities, although sometimes generating compiler warnings.
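As a rough sketch only (the arena, its size, and the function name are
assumptions for illustration; see the guidance below for the full rules),
an sbrk-style MORECORE emulation over a static arena might look like:
    static char my_arena[4 * 1024 * 1024];
    static size_t my_brk = 0;
    void* my_morecore(intptr_t increment) {
      if (increment >= 0 && my_brk + (size_t)increment <= sizeof(my_arena)) {
        void* old = my_arena + my_brk;    // previous break, as sbrk returns
        my_brk += (size_t)increment;
        return old;
      }
      return (void*)-1;                   // sbrk-style failure value
    }
compiled with -DMORECORE=my_morecore. Since this sketch never releases
memory on negative arguments, MORECORE_CANNOT_TRIM should also be defined.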
\r
359 MORECORE_CONTIGUOUS default: 1 (true) if HAVE_MORECORE
\r
360 If true, take advantage of fact that consecutive calls to MORECORE
\r
361 with positive arguments always return contiguous increasing
\r
362 addresses. This is true of unix sbrk. It does not hurt too much to
\r
363 set it true anyway, since malloc copes with non-contiguities.
\r
Setting it false when MORECORE is known to be non-contiguous, though, saves
the time and possibly wasted space it would otherwise take to discover this.
\r
367 MORECORE_CANNOT_TRIM default: NOT defined
\r
368 True if MORECORE cannot release space back to the system when given
\r
369 negative arguments. This is generally necessary only if you are
\r
using a hand-crafted MORECORE function that cannot handle negative arguments.
\r
373 NO_SEGMENT_TRAVERSAL default: 0
\r
374 If non-zero, suppresses traversals of memory segments
\r
375 returned by either MORECORE or CALL_MMAP. This disables
\r
376 merging of segments that are contiguous, and selectively
\r
377 releasing them to the OS if unused, but bounds execution times.
\r
379 HAVE_MMAP default: 1 (true)
\r
380 True if this system supports mmap or an emulation of it. If so, and
\r
381 HAVE_MORECORE is not true, MMAP is used for all system
\r
382 allocation. If set and HAVE_MORECORE is true as well, MMAP is
\r
383 primarily used to directly allocate very large blocks. It is also
\r
384 used as a backup strategy in cases where MORECORE fails to provide
\r
385 space from system. Note: A single call to MUNMAP is assumed to be
\r
able to unmap memory that may have been allocated using multiple calls
\r
387 to MMAP, so long as they are adjacent.
\r
389 HAVE_MREMAP default: 1 on linux, else 0
\r
If true, realloc() uses mremap() to re-allocate large blocks and
\r
391 extend or shrink allocation spaces.
\r
393 MMAP_CLEARS default: 1 except on WINCE.
\r
394 True if mmap clears memory so calloc doesn't need to. This is true
\r
395 for standard unix mmap using /dev/zero and on WIN32 except for WINCE.
\r
397 USE_BUILTIN_FFS default: 0 (i.e., not used)
\r
398 Causes malloc to use the builtin ffs() function to compute indices.
\r
399 Some compilers may recognize and intrinsify ffs to be faster than the
\r
400 supplied C version. Also, the case of x86 using gcc is special-cased
\r
401 to an asm instruction, so is already as fast as it can be, and so
\r
402 this setting has no effect. Similarly for Win32 under recent MS compilers.
\r
403 (On most x86s, the asm version is only slightly faster than the C version.)
\r
405 malloc_getpagesize default: derive from system includes, or 4096.
\r
406 The system page size. To the extent possible, this malloc manages
\r
407 memory from the system in page-size units. This may be (and
\r
408 usually is) a function rather than a constant. This is ignored
\r
if WIN32, where page size is determined using GetSystemInfo during initialization.
\r
412 USE_DEV_RANDOM default: 0 (i.e., not used)
\r
413 Causes malloc to use /dev/random to initialize secure magic seed for
\r
414 stamping footers. Otherwise, the current time is used.
\r
416 NO_MALLINFO default: 0
\r
417 If defined, don't compile "mallinfo". This can be a simple way
\r
418 of dealing with mismatches between system declarations and
\r
419 those in this file.
\r
421 MALLINFO_FIELD_TYPE default: size_t
\r
422 The type of the fields in the mallinfo struct. This was originally
\r
423 defined as "int" in SVID etc, but is more usefully defined as
\r
424 size_t. The value is used only if HAVE_USR_INCLUDE_MALLOC_H is not set
\r
426 NO_MALLOC_STATS default: 0
\r
427 If defined, don't compile "malloc_stats". This avoids calls to
\r
428 fprintf and bringing in stdio dependencies you might not want.
\r
430 REALLOC_ZERO_BYTES_FREES default: not defined
\r
431 This should be set if a call to realloc with zero bytes should
\r
432 be the same as a call to free. Some people think it should. Otherwise,
\r
since this malloc returns a unique pointer for malloc(0), so does realloc(p, 0).
\r
436 LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H
\r
437 LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H, LACKS_ERRNO_H
\r
438 LACKS_STDLIB_H LACKS_SCHED_H LACKS_TIME_H default: NOT defined unless on WIN32
\r
439 Define these if your system does not have these header files.
\r
440 You might need to manually insert some of the declarations they provide.
\r
442 DEFAULT_GRANULARITY default: page size if MORECORE_CONTIGUOUS,
\r
443 system_info.dwAllocationGranularity in WIN32,
\r
445 Also settable using mallopt(M_GRANULARITY, x)
\r
446 The unit for allocating and deallocating memory from the system. On
\r
447 most systems with contiguous MORECORE, there is no reason to
\r
448 make this more than a page. However, systems with MMAP tend to
\r
449 either require or encourage larger granularities. You can increase
\r
this value to prevent system allocation functions from being called so
\r
451 often, especially if they are slow. The value must be at least one
\r
452 page and must be a power of two. Setting to 0 causes initialization
\r
453 to either page size or win32 region size. (Note: In previous
\r
454 versions of malloc, the equivalent of this option was called
\r
457 DEFAULT_TRIM_THRESHOLD default: 2MB
\r
458 Also settable using mallopt(M_TRIM_THRESHOLD, x)
\r
459 The maximum amount of unused top-most memory to keep before
\r
460 releasing via malloc_trim in free(). Automatic trimming is mainly
\r
461 useful in long-lived programs using contiguous MORECORE. Because
\r
462 trimming via sbrk can be slow on some systems, and can sometimes be
\r
463 wasteful (in cases where programs immediately afterward allocate
\r
464 more large chunks) the value should be high enough so that your
\r
465 overall system performance would improve by releasing this much
\r
466 memory. As a rough guide, you might set to a value close to the
\r
467 average size of a process (program) running on your system.
\r
468 Releasing this much memory would allow such a process to run in
\r
469 memory. Generally, it is worth tuning trim thresholds when a
\r
470 program undergoes phases where several large chunks are allocated
\r
471 and released in ways that can reuse each other's storage, perhaps
\r
472 mixed with phases where there are no such chunks at all. The trim
\r
473 value must be greater than page size to have any useful effect. To
\r
474 disable trimming completely, you can set to MAX_SIZE_T. Note that the trick
\r
475 some people use of mallocing a huge space and then freeing it at
\r
476 program startup, in an attempt to reserve system memory, doesn't
\r
477 have the intended effect under automatic trimming, since that memory
\r
478 will immediately be returned to the system.
\r
480 DEFAULT_MMAP_THRESHOLD default: 256K
\r
481 Also settable using mallopt(M_MMAP_THRESHOLD, x)
\r
482 The request size threshold for using MMAP to directly service a
\r
483 request. Requests of at least this size that cannot be allocated
\r
484 using already-existing space will be serviced via mmap. (If enough
\r
485 normal freed space already exists it is used instead.) Using mmap
\r
486 segregates relatively large chunks of memory so that they can be
\r
487 individually obtained and released from the host system. A request
\r
488 serviced through mmap is never reused by any other request (at least
\r
489 not directly; the system may just so happen to remap successive
\r
490 requests to the same locations). Segregating space in this way has
\r
491 the benefits that: Mmapped space can always be individually released
\r
492 back to the system, which helps keep the system level memory demands
\r
493 of a long-lived program low. Also, mapped memory doesn't become
\r
494 `locked' between other chunks, as can happen with normally allocated
\r
495 chunks, which means that even trimming via malloc_trim would not
\r
496 release them. However, it has the disadvantage that the space
\r
497 cannot be reclaimed, consolidated, and then used to service later
\r
498 requests, as happens with normal chunks. The advantages of mmap
\r
499 nearly always outweigh disadvantages for "large" chunks, but the
\r
500 value of "large" may vary across systems. The default is an
\r
501 empirically derived value that works well in most systems. You can
\r
502 disable mmap by setting to MAX_SIZE_T.
\r
504 MAX_RELEASE_CHECK_RATE default: 4095 unless not HAVE_MMAP
\r
505 The number of consolidated frees between checks to release
\r
506 unused segments when freeing. When using non-contiguous segments,
\r
507 especially with multiple mspaces, checking only for topmost space
\r
508 doesn't always suffice to trigger trimming. To compensate for this,
\r
509 free() will, with a period of MAX_RELEASE_CHECK_RATE (or the
\r
510 current number of segments, if greater) try to release unused
\r
511 segments to the OS when freeing chunks that result in
\r
512 consolidation. The best value for this parameter is a compromise
\r
513 between slowing down frees with relatively costly checks that
\r
514 rarely trigger versus holding on to unused memory. To effectively
\r
515 disable, set to MAX_SIZE_T. This may lead to a very slight speed
\r
516 improvement at the expense of carrying around more memory.
\r
520 #include "dlmalloc.h"
\r
522 /* Version identifier to allow people to support multiple versions */
\r
523 #ifndef DLMALLOC_VERSION
\r
524 #define DLMALLOC_VERSION 20805
\r
525 #endif /* DLMALLOC_VERSION */
\r
527 #ifndef DLMALLOC_EXPORT
\r
528 #define DLMALLOC_EXPORT extern
\r
534 #endif /* _WIN32 */
\r
536 #define LACKS_FCNTL_H
\r
538 #endif /* _WIN32_WCE */
\r
541 #define WIN32_LEAN_AND_MEAN
\r
542 #include <windows.h>
\r
544 #define HAVE_MMAP 1
\r
545 #define HAVE_MORECORE 0
\r
546 #define LACKS_UNISTD_H
\r
547 #define LACKS_SYS_PARAM_H
\r
548 #define LACKS_SYS_MMAN_H
\r
549 #define LACKS_STRING_H
\r
550 #define LACKS_STRINGS_H
\r
551 #define LACKS_SYS_TYPES_H
\r
552 #define LACKS_ERRNO_H
\r
553 #define LACKS_SCHED_H
\r
554 #ifndef MALLOC_FAILURE_ACTION
\r
555 #define MALLOC_FAILURE_ACTION
\r
556 #endif /* MALLOC_FAILURE_ACTION */
\r
557 #ifndef MMAP_CLEARS
\r
558 #ifdef _WIN32_WCE /* WINCE reportedly does not clear */
\r
559 #define MMAP_CLEARS 0
\r
561 #define MMAP_CLEARS 1
\r
562 #endif /* _WIN32_WCE */
\r
563 #endif /*MMAP_CLEARS */
\r
566 #if defined(DARWIN) || defined(_DARWIN)
\r
567 /* Mac OSX docs advise not to use sbrk; it seems better to use mmap */
\r
568 #ifndef HAVE_MORECORE
\r
569 #define HAVE_MORECORE 0
\r
570 #define HAVE_MMAP 1
\r
571 /* OSX allocators provide 16 byte alignment */
\r
572 #ifndef MALLOC_ALIGNMENT
\r
573 #define MALLOC_ALIGNMENT ((size_t)16U)
\r
575 #endif /* HAVE_MORECORE */
\r
576 #endif /* DARWIN */
\r
578 #ifndef LACKS_SYS_TYPES_H
\r
579 #include <sys/types.h> /* For size_t */
\r
580 #endif /* LACKS_SYS_TYPES_H */
\r
582 /* The maximum possible size_t value has all bits set */
\r
583 #define MAX_SIZE_T (~(size_t)0)
\r
585 #ifndef USE_LOCKS /* ensure true if spin or recursive locks set */
\r
586 #define USE_LOCKS ((defined(USE_SPIN_LOCKS) && USE_SPIN_LOCKS != 0) || \
\r
587 (defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0))
\r
588 #endif /* USE_LOCKS */
\r
590 #if USE_LOCKS /* Spin locks for gcc >= 4.1, older gcc on x86, MSC >= 1310 */
\r
591 #if ((defined(__GNUC__) && \
\r
592 ((__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 1)) || \
\r
593 defined(__i386__) || defined(__x86_64__))) || \
\r
594 (defined(_MSC_VER) && _MSC_VER>=1310))
\r
595 #ifndef USE_SPIN_LOCKS
\r
596 #define USE_SPIN_LOCKS 1
\r
597 #endif /* USE_SPIN_LOCKS */
\r
598 #elif USE_SPIN_LOCKS
\r
599 #error "USE_SPIN_LOCKS defined without implementation"
\r
600 #endif /* ... locks available... */
\r
601 #elif !defined(USE_SPIN_LOCKS)
\r
602 #define USE_SPIN_LOCKS 0
\r
603 #endif /* USE_LOCKS */
\r
605 #ifndef ONLY_MSPACES
\r
606 #define ONLY_MSPACES 0
\r
607 #endif /* ONLY_MSPACES */
\r
611 #else /* ONLY_MSPACES */
\r
613 #endif /* ONLY_MSPACES */
\r
614 #endif /* MSPACES */
\r
615 #ifndef MALLOC_ALIGNMENT
\r
616 #define MALLOC_ALIGNMENT ((size_t)8U)
\r
617 #endif /* MALLOC_ALIGNMENT */
\r
620 #endif /* FOOTERS */
\r
622 #define ABORT abort()
\r
624 #ifndef ABORT_ON_ASSERT_FAILURE
\r
625 #define ABORT_ON_ASSERT_FAILURE 1
\r
626 #endif /* ABORT_ON_ASSERT_FAILURE */
\r
627 #ifndef PROCEED_ON_ERROR
\r
628 #define PROCEED_ON_ERROR 0
\r
629 #endif /* PROCEED_ON_ERROR */
\r
633 #endif /* INSECURE */
\r
634 #ifndef MALLOC_INSPECT_ALL
\r
635 #define MALLOC_INSPECT_ALL 0
\r
636 #endif /* MALLOC_INSPECT_ALL */
\r
638 #define HAVE_MMAP 1
\r
639 #endif /* HAVE_MMAP */
\r
640 #ifndef MMAP_CLEARS
\r
641 #define MMAP_CLEARS 1
\r
642 #endif /* MMAP_CLEARS */
\r
643 #ifndef HAVE_MREMAP
\r
645 #define HAVE_MREMAP 1
\r
646 #define _GNU_SOURCE /* Turns on mremap() definition */
\r
648 #define HAVE_MREMAP 0
\r
650 #endif /* HAVE_MREMAP */
\r
651 #ifndef MALLOC_FAILURE_ACTION
\r
652 #define MALLOC_FAILURE_ACTION errno = ENOMEM;
\r
653 #endif /* MALLOC_FAILURE_ACTION */
\r
654 #ifndef HAVE_MORECORE
\r
656 #define HAVE_MORECORE 0
\r
657 #else /* ONLY_MSPACES */
\r
658 #define HAVE_MORECORE 1
\r
659 #endif /* ONLY_MSPACES */
\r
660 #endif /* HAVE_MORECORE */
\r
662 #define MORECORE_CONTIGUOUS 0
\r
663 #else /* !HAVE_MORECORE */
\r
664 #define MORECORE_DEFAULT sbrk
\r
665 #ifndef MORECORE_CONTIGUOUS
\r
666 #define MORECORE_CONTIGUOUS 1
\r
667 #endif /* MORECORE_CONTIGUOUS */
\r
668 #endif /* HAVE_MORECORE */
\r
669 #ifndef DEFAULT_GRANULARITY
\r
670 #if (MORECORE_CONTIGUOUS || defined(WIN32))
\r
671 #define DEFAULT_GRANULARITY (0) /* 0 means to compute in init_mparams */
\r
672 #else /* MORECORE_CONTIGUOUS */
\r
673 #define DEFAULT_GRANULARITY ((size_t)64U * (size_t)1024U)
\r
674 #endif /* MORECORE_CONTIGUOUS */
\r
675 #endif /* DEFAULT_GRANULARITY */
\r
676 #ifndef DEFAULT_TRIM_THRESHOLD
\r
677 #ifndef MORECORE_CANNOT_TRIM
\r
678 #define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)
\r
679 #else /* MORECORE_CANNOT_TRIM */
\r
680 #define DEFAULT_TRIM_THRESHOLD MAX_SIZE_T
\r
681 #endif /* MORECORE_CANNOT_TRIM */
\r
682 #endif /* DEFAULT_TRIM_THRESHOLD */
\r
683 #ifndef DEFAULT_MMAP_THRESHOLD
\r
685 #define DEFAULT_MMAP_THRESHOLD ((size_t)256U * (size_t)1024U)
\r
686 #else /* HAVE_MMAP */
\r
687 #define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T
\r
688 #endif /* HAVE_MMAP */
\r
689 #endif /* DEFAULT_MMAP_THRESHOLD */
\r
690 #ifndef MAX_RELEASE_CHECK_RATE
\r
692 #define MAX_RELEASE_CHECK_RATE 4095
\r
694 #define MAX_RELEASE_CHECK_RATE MAX_SIZE_T
\r
695 #endif /* HAVE_MMAP */
\r
696 #endif /* MAX_RELEASE_CHECK_RATE */
\r
697 #ifndef USE_BUILTIN_FFS
\r
698 #define USE_BUILTIN_FFS 0
\r
699 #endif /* USE_BUILTIN_FFS */
\r
700 #ifndef USE_DEV_RANDOM
\r
701 #define USE_DEV_RANDOM 0
\r
702 #endif /* USE_DEV_RANDOM */
\r
703 #ifndef NO_MALLINFO
\r
704 #define NO_MALLINFO 0
\r
705 #endif /* NO_MALLINFO */
\r
706 #ifndef MALLINFO_FIELD_TYPE
\r
707 #define MALLINFO_FIELD_TYPE size_t
\r
708 #endif /* MALLINFO_FIELD_TYPE */
\r
709 #ifndef NO_MALLOC_STATS
\r
710 #define NO_MALLOC_STATS 0
\r
711 #endif /* NO_MALLOC_STATS */
\r
712 #ifndef NO_SEGMENT_TRAVERSAL
\r
713 #define NO_SEGMENT_TRAVERSAL 0
\r
714 #endif /* NO_SEGMENT_TRAVERSAL */
\r
717 mallopt tuning options. SVID/XPG defines four standard parameter
\r
718 numbers for mallopt, normally defined in malloc.h. None of these
\r
719 are used in this malloc, so setting them has no effect. But this
\r
720 malloc does support the following options.
\r
723 #define M_TRIM_THRESHOLD (-1)
\r
724 #define M_GRANULARITY (-2)
\r
725 #define M_MMAP_THRESHOLD (-3)
\r
727 /* ------------------------ Mallinfo declarations ------------------------ */
\r
731 This version of malloc supports the standard SVID/XPG mallinfo
\r
732 routine that returns a struct containing usage properties and
\r
733 statistics. It should work on any system that has a
\r
734 /usr/include/malloc.h defining struct mallinfo. The main
\r
735 declaration needed is the mallinfo struct that is returned (by-copy)
\r
by mallinfo(). The mallinfo struct contains a bunch of fields that
are not even meaningful in this version of malloc. These fields are
instead filled by mallinfo() with other numbers that might be of
interest.
\r
741 HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
\r
742 /usr/include/malloc.h file that includes a declaration of struct
\r
743 mallinfo. If so, it is included; else a compliant version is
\r
744 declared below. These must be precisely the same for mallinfo() to
\r
745 work. The original SVID version of this struct, defined on most
\r
746 systems with mallinfo, declares all fields as ints. But some others
\r
define them as unsigned long. If your system defines the fields using a
\r
748 type of different width than listed here, you MUST #include your
\r
749 system version and #define HAVE_USR_INCLUDE_MALLOC_H.
\r
752 /* #define HAVE_USR_INCLUDE_MALLOC_H */
\r
754 #ifdef HAVE_USR_INCLUDE_MALLOC_H
\r
755 #include "/usr/include/malloc.h"
\r
756 #else /* HAVE_USR_INCLUDE_MALLOC_H */
\r
757 #ifndef STRUCT_MALLINFO_DECLARED
\r
758 /* HP-UX (and others?) redefines mallinfo unless _STRUCT_MALLINFO is defined */
\r
759 #define _STRUCT_MALLINFO
\r
760 #define STRUCT_MALLINFO_DECLARED 1
\r
struct mallinfo {
MALLINFO_FIELD_TYPE arena; /* non-mmapped space allocated from system */
\r
763 MALLINFO_FIELD_TYPE ordblks; /* number of free chunks */
\r
764 MALLINFO_FIELD_TYPE smblks; /* always 0 */
\r
765 MALLINFO_FIELD_TYPE hblks; /* always 0 */
\r
766 MALLINFO_FIELD_TYPE hblkhd; /* space in mmapped regions */
\r
767 MALLINFO_FIELD_TYPE usmblks; /* maximum total allocated space */
\r
768 MALLINFO_FIELD_TYPE fsmblks; /* always 0 */
\r
769 MALLINFO_FIELD_TYPE uordblks; /* total allocated space */
\r
770 MALLINFO_FIELD_TYPE fordblks; /* total free space */
\r
MALLINFO_FIELD_TYPE keepcost; /* releasable (via malloc_trim) space */
};
\r
773 #endif /* STRUCT_MALLINFO_DECLARED */
\r
774 #endif /* HAVE_USR_INCLUDE_MALLOC_H */
\r
775 #endif /* NO_MALLINFO */
\r
778 Try to persuade compilers to inline. The most critical functions for
\r
779 inlining are defined as macros, so these aren't used for them.
\r
782 #ifndef FORCEINLINE
\r
783 #if defined(__GNUC__)
\r
784 #define FORCEINLINE __inline __attribute__ ((always_inline))
\r
785 #elif defined(_MSC_VER)
\r
786 #define FORCEINLINE __forceinline
\r
790 #if defined(__GNUC__)
\r
791 #define NOINLINE __attribute__ ((noinline))
\r
792 #elif defined(_MSC_VER)
\r
793 #define NOINLINE __declspec(noinline)
\r
801 #ifndef FORCEINLINE
\r
802 #define FORCEINLINE inline
\r
804 #endif /* __cplusplus */
\r
805 #ifndef FORCEINLINE
\r
806 #define FORCEINLINE
\r
811 /* ------------------- Declarations of public routines ------------------- */
\r
813 #ifndef USE_DL_PREFIX
\r
814 #define dlcalloc calloc
\r
815 #define dlfree free
\r
816 #define dlmalloc malloc
\r
817 #define dlmemalign memalign
\r
818 #define dlposix_memalign posix_memalign
\r
819 #define dlrealloc realloc
\r
820 #define dlrealloc_in_place realloc_in_place
\r
821 #define dlvalloc valloc
\r
822 #define dlpvalloc pvalloc
\r
823 #define dlmallinfo mallinfo
\r
824 #define dlmallopt mallopt
\r
825 #define dlmalloc_trim malloc_trim
\r
826 #define dlmalloc_stats malloc_stats
\r
827 #define dlmalloc_usable_size malloc_usable_size
\r
828 #define dlmalloc_footprint malloc_footprint
\r
829 #define dlmalloc_max_footprint malloc_max_footprint
\r
830 #define dlmalloc_footprint_limit malloc_footprint_limit
\r
831 #define dlmalloc_set_footprint_limit malloc_set_footprint_limit
\r
832 #define dlmalloc_inspect_all malloc_inspect_all
\r
833 #define dlindependent_calloc independent_calloc
\r
834 #define dlindependent_comalloc independent_comalloc
\r
835 #define dlbulk_free bulk_free
\r
836 #endif /* USE_DL_PREFIX */
\r
#if 0 // Redeclaration warnings as PDCLib already declares these in <stdlib.h>
\r
842 Returns a pointer to a newly allocated chunk of at least n bytes, or
\r
843 null if no space is available, in which case errno is set to ENOMEM
\r
846 If n is zero, malloc returns a minimum-sized chunk. (The minimum
\r
847 size is 16 bytes on most 32bit systems, and 32 bytes on 64bit
\r
848 systems.) Note that size_t is an unsigned type, so calls with
\r
849 arguments that would be negative if signed are interpreted as
\r
850 requests for huge amounts of space, which will often fail. The
\r
851 maximum supported value of n differs across systems, but is in all
\r
852 cases less than the maximum representable value of a size_t.
\r
854 DLMALLOC_EXPORT void* dlmalloc(size_t);
\r
858 Releases the chunk of memory pointed to by p, that had been previously
\r
859 allocated using malloc or a related routine such as realloc.
\r
860 It has no effect if p is null. If p was not malloced or already
\r
861 freed, free(p) will by default cause the current program to abort.
\r
863 DLMALLOC_EXPORT void dlfree(void*);
\r
866 calloc(size_t n_elements, size_t element_size);
\r
Returns a pointer to n_elements * element_size bytes, with all locations
set to zero.
\r
870 DLMALLOC_EXPORT void* dlcalloc(size_t, size_t);
\r
873 realloc(void* p, size_t n)
\r
874 Returns a pointer to a chunk of size n that contains the same data
\r
875 as does chunk p up to the minimum of (n, p's size) bytes, or null
\r
876 if no space is available.
\r
878 The returned pointer may or may not be the same as p. The algorithm
\r
879 prefers extending p in most cases when possible, otherwise it
\r
880 employs the equivalent of a malloc-copy-free sequence.
\r
882 If p is null, realloc is equivalent to malloc.
\r
884 If space is not available, realloc returns null, errno is set (if on
\r
885 ANSI) and p is NOT freed.
\r
If n is for fewer bytes than already held by p, the newly unused
\r
888 space is lopped off and freed if possible. realloc with a size
\r
889 argument of zero (re)allocates a minimum-sized chunk.
\r
891 The old unix realloc convention of allowing the last-free'd chunk
\r
892 to be used as an argument to realloc is not supported.
\r
894 DLMALLOC_EXPORT void* dlrealloc(void*, size_t);
\r
899 realloc_in_place(void* p, size_t n)
\r
900 Resizes the space allocated for p to size n, only if this can be
\r
901 done without moving p (i.e., only if there is adjacent space
\r
902 available if n is greater than p's current allocated size, or n is
\r
903 less than or equal to p's size). This may be used instead of plain
\r
904 realloc if an alternative allocation strategy is needed upon failure
\r
905 to expand space; for example, reallocation of a buffer that must be
\r
906 memory-aligned or cleared. You can use realloc_in_place to trigger
\r
907 these alternatives only when needed.
\r
909 Returns p if successful; otherwise null.
\r
911 DLMALLOC_EXPORT void* dlrealloc_in_place(void*, size_t);
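/*
  A sketch of the pattern described above (grow_aligned and its parameters
  are hypothetical, not part of this API): try to resize in place first and
  fall back to an aligned allocate-copy-free only when that fails.

    void* grow_aligned(void* p, size_t old_size, size_t new_size) {
      if (realloc_in_place(p, new_size) != 0)
        return p;                        // resized without moving
      void* q = memalign(64, new_size);  // alternative strategy on failure
      if (q != 0) {
        memcpy(q, p, old_size);          // memcpy from <string.h>
        free(p);
      }
      return q;
    }
*/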
\r
914 memalign(size_t alignment, size_t n);
\r
915 Returns a pointer to a newly allocated chunk of n bytes, aligned
\r
916 in accord with the alignment argument.
\r
918 The alignment argument should be a power of two. If the argument is
\r
919 not a power of two, the nearest greater power is used.
\r
920 8-byte alignment is guaranteed by normal malloc calls, so don't
\r
921 bother calling memalign with an argument of 8 or less.
\r
923 Overreliance on memalign is a sure way to fragment space.
\r
925 DLMALLOC_EXPORT void* dlmemalign(size_t, size_t);
\r
928 int posix_memalign(void** pp, size_t alignment, size_t n);
\r
929 Allocates a chunk of n bytes, aligned in accord with the alignment
\r
930 argument. Differs from memalign only in that it (1) assigns the
\r
931 allocated memory to *pp rather than returning it, (2) fails and
\r
returns EINVAL if the alignment is not a power of two, and (3) fails and
\r
933 returns ENOMEM if memory cannot be allocated.
\r
935 DLMALLOC_EXPORT int dlposix_memalign(void**, size_t, size_t);
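/*
  A brief usage sketch (the page-sized alignment and the sizes are only
  examples):

    void* mem = 0;
    int rc = posix_memalign(&mem, 4096, 8192);
    if (rc == 0) {
      // mem is 4096-byte aligned; release it with free() as usual
      free(mem);
    } else {
      // rc is EINVAL (alignment not a power of two) or ENOMEM (no memory)
    }
*/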
\r
939 Equivalent to memalign(pagesize, n), where pagesize is the page
\r
940 size of the system. If the pagesize is unknown, 4096 is used.
\r
942 DLMALLOC_EXPORT void* dlvalloc(size_t);
\r
945 mallopt(int parameter_number, int parameter_value)
\r
Sets tunable parameters. The format is to provide a
\r
947 (parameter-number, parameter-value) pair. mallopt then sets the
\r
948 corresponding parameter to the argument value if it can (i.e., so
\r
949 long as the value is meaningful), and returns 1 if successful else
\r
0. To work around the fact that mallopt is specified to use int,
\r
951 not size_t parameters, the value -1 is specially treated as the
\r
952 maximum unsigned size_t value.
\r
954 SVID/XPG/ANSI defines four standard param numbers for mallopt,
\r
normally defined in malloc.h. None of these are used in this malloc,
\r
956 so setting them has no effect. But this malloc also supports other
\r
957 options in mallopt. See below for details. Briefly, supported
\r
parameters are as follows (listed defaults are for "typical" configurations):
\r
Symbol             param #   default       allowed param values
M_TRIM_THRESHOLD     -1      2*1024*1024   any   (-1 disables)
M_GRANULARITY        -2      page size     any power of 2 >= page size
M_MMAP_THRESHOLD     -3      256*1024      any   (or 0 if no MMAP support)
\r
966 DLMALLOC_EXPORT int dlmallopt(int, int);
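/*
  For example, to raise the mmap threshold to 1MB and disable automatic
  trimming (the values are illustrative):

    mallopt(M_MMAP_THRESHOLD, 1024 * 1024);
    mallopt(M_TRIM_THRESHOLD, -1);     // -1 is treated as MAX_SIZE_T

  Each call returns 1 if the value was accepted, else 0.
*/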
\r
969 malloc_footprint();
\r
970 Returns the number of bytes obtained from the system. The total
\r
971 number of bytes allocated by malloc, realloc etc., is less than this
\r
972 value. Unlike mallinfo, this function returns only a precomputed
\r
973 result, so can be called frequently to monitor memory consumption.
\r
974 Even if locks are otherwise defined, this function does not use them,
\r
975 so results might not be up to date.
\r
977 DLMALLOC_EXPORT size_t dlmalloc_footprint(void);
\r
980 malloc_max_footprint();
\r
981 Returns the maximum number of bytes obtained from the system. This
\r
982 value will be greater than current footprint if deallocated space
\r
983 has been reclaimed by the system. The peak number of bytes allocated
\r
984 by malloc, realloc etc., is less than this value. Unlike mallinfo,
\r
985 this function returns only a precomputed result, so can be called
\r
986 frequently to monitor memory consumption. Even if locks are
\r
otherwise defined, this function does not use them, so results might
not be up to date.
\r
990 DLMALLOC_EXPORT size_t dlmalloc_max_footprint(void);
\r
993 malloc_footprint_limit();
\r
994 Returns the number of bytes that the heap is allowed to obtain from
\r
995 the system, returning the last value returned by
\r
996 malloc_set_footprint_limit, or the maximum size_t value if
\r
997 never set. The returned value reflects a permission. There is no
\r
guarantee that this number of bytes can actually be obtained from
the system.
\r
1001 DLMALLOC_EXPORT size_t dlmalloc_footprint_limit(void);
\r
1004 malloc_set_footprint_limit();
\r
1005 Sets the maximum number of bytes to obtain from the system, causing
\r
1006 failure returns from malloc and related functions upon attempts to
\r
1007 exceed this value. The argument value may be subject to page
\r
1008 rounding to an enforceable limit; this actual value is returned.
\r
1009 Using an argument of the maximum possible size_t effectively
\r
1010 disables checks. If the argument is less than or equal to the
\r
1011 current malloc_footprint, then all future allocations that require
\r
1012 additional system memory will fail. However, invocation cannot
\r
1013 retroactively deallocate existing used memory.
\r
1015 DLMALLOC_EXPORT size_t dlmalloc_set_footprint_limit(size_t bytes);
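/*
  A small sketch (the 64MB cap is illustrative): limit the heap and check
  what limit was actually installed.

    size_t granted = malloc_set_footprint_limit((size_t)64 * 1024 * 1024);
    // granted may differ from the request due to page rounding
    if (malloc_footprint() >= granted) {
      // already at or above the cap: further system allocations will fail,
      // but existing memory is not deallocated
    }
*/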
\r
1017 #if MALLOC_INSPECT_ALL
\r
malloc_inspect_all(void(*handler)(void *start,
                                  void *end,
                                  size_t used_bytes,
                                  void* callback_arg),
                   void* arg);
\r
1024 Traverses the heap and calls the given handler for each managed
\r
1025 region, skipping all bytes that are (or may be) used for bookkeeping
\r
purposes. Traversal does not include chunks that have been
\r
1027 directly memory mapped. Each reported region begins at the start
\r
1028 address, and continues up to but not including the end address. The
\r
1029 first used_bytes of the region contain allocated data. If
\r
1030 used_bytes is zero, the region is unallocated. The handler is
\r
1031 invoked with the given callback argument. If locks are defined, they
\r
1032 are held during the entire traversal. It is a bad idea to invoke
\r
1033 other malloc functions from within the handler.
\r
1035 For example, to count the number of in-use chunks with size greater
\r
1036 than 1000, you could write:
\r
1037 static int count = 0;
\r
1038 void count_chunks(void* start, void* end, size_t used, void* arg) {
\r
if (used >= 1000) ++count;
}
then call:
\r
1042 malloc_inspect_all(count_chunks, NULL);
\r
1044 malloc_inspect_all is compiled only if MALLOC_INSPECT_ALL is defined.
\r
1046 DLMALLOC_EXPORT void dlmalloc_inspect_all(void(*handler)(void*, void *, size_t, void*),
\r
1049 #endif /* MALLOC_INSPECT_ALL */
\r
1054 Returns (by copy) a struct containing various summary statistics:
\r
1056 arena: current total non-mmapped bytes allocated from system
\r
1057 ordblks: the number of free chunks
\r
1058 smblks: always zero.
\r
1059 hblks: current number of mmapped regions
\r
1060 hblkhd: total bytes held in mmapped regions
\r
1061 usmblks: the maximum total allocated space. This will be greater
\r
1062 than current total if trimming has occurred.
\r
1063 fsmblks: always zero
\r
1064 uordblks: current total allocated space (normal or mmapped)
\r
1065 fordblks: total free space
\r
1066 keepcost: the maximum number of bytes that could ideally be released
\r
1067 back to system via malloc_trim. ("ideally" means that
\r
1068 it ignores page restrictions etc.)
\r
1070 Because these fields are ints, but internal bookkeeping may
\r
1071 be kept as longs, the reported values may wrap around zero and
\r
1072 thus be inaccurate.
\r
1074 DLMALLOC_EXPORT struct mallinfo dlmallinfo(void);
\r
1075 #endif /* NO_MALLINFO */
\r
1078 independent_calloc(size_t n_elements, size_t element_size, void* chunks[]);
\r
1080 independent_calloc is similar to calloc, but instead of returning a
\r
1081 single cleared space, it returns an array of pointers to n_elements
\r
1082 independent elements that can hold contents of size elem_size, each
\r
1083 of which starts out cleared, and can be independently freed,
\r
1084 realloc'ed etc. The elements are guaranteed to be adjacently
\r
1085 allocated (this is not guaranteed to occur with multiple callocs or
\r
mallocs), which may also improve cache locality in some applications.
\r
1089 The "chunks" argument is optional (i.e., may be null, which is
\r
1090 probably the most typical usage). If it is null, the returned array
\r
1091 is itself dynamically allocated and should also be freed when it is
\r
1092 no longer needed. Otherwise, the chunks array must be of at least
\r
1093 n_elements in length. It is filled in with the pointers to the
\r
1096 In either case, independent_calloc returns this pointer array, or
\r
1097 null if the allocation failed. If n_elements is zero and "chunks"
\r
1098 is null, it returns a chunk representing an array with zero elements
\r
1099 (which should be freed if not wanted).
\r
1101 Each element must be freed when it is no longer needed. This can be
\r
1102 done all at once using bulk_free.
\r
1104 independent_calloc simplifies and speeds up implementations of many
\r
1105 kinds of pools. It may also be useful when constructing large data
\r
1106 structures that initially have a fixed number of fixed-sized nodes,
\r
1107 but the number is not known at compile time, and some of the nodes
\r
1108 may later need to be freed. For example:
\r
1110 struct Node { int item; struct Node* next; };
\r
1112 struct Node* build_list() {
\r
1113 struct Node** pool;
\r
1114 int n = read_number_of_nodes_needed();
\r
1115 if (n <= 0) return 0;
\r
pool = (struct Node**) independent_calloc(n, sizeof(struct Node), 0);
\r
1117 if (pool == 0) die();
\r
1118 // organize into a linked list...
\r
1119 struct Node* first = pool[0];
\r
for (int i = 0; i < n-1; ++i)
\r
1121 pool[i]->next = pool[i+1];
\r
free(pool); // Can now free the array (or not, if it is needed later)
return first;
}
\r
1126 DLMALLOC_EXPORT void** dlindependent_calloc(size_t, size_t, void**);
\r
1129 independent_comalloc(size_t n_elements, size_t sizes[], void* chunks[]);
\r
1131 independent_comalloc allocates, all at once, a set of n_elements
\r
1132 chunks with sizes indicated in the "sizes" array. It returns
\r
1133 an array of pointers to these elements, each of which can be
\r
1134 independently freed, realloc'ed etc. The elements are guaranteed to
\r
1135 be adjacently allocated (this is not guaranteed to occur with
\r
1136 multiple callocs or mallocs), which may also improve cache locality
\r
1137 in some applications.
\r
1139 The "chunks" argument is optional (i.e., may be null). If it is null
\r
1140 the returned array is itself dynamically allocated and should also
\r
1141 be freed when it is no longer needed. Otherwise, the chunks array
\r
1142 must be of at least n_elements in length. It is filled in with the
\r
1143 pointers to the chunks.
\r
1145 In either case, independent_comalloc returns this pointer array, or
\r
1146 null if the allocation failed. If n_elements is zero and chunks is
\r
1147 null, it returns a chunk representing an array with zero elements
\r
1148 (which should be freed if not wanted).
\r
1150 Each element must be freed when it is no longer needed. This can be
\r
1151 done all at once using bulk_free.
\r
independent_comalloc differs from independent_calloc in that each
\r
1154 element may have a different size, and also that it does not
\r
1155 automatically clear elements.
\r
1157 independent_comalloc can be used to speed up allocation in cases
\r
1158 where several structs or objects must always be allocated at the
\r
1159 same time. For example:
\r
struct Head { ... };
struct Foot { ... };
\r
1164 void send_message(char* msg) {
\r
1165 int msglen = strlen(msg);
\r
1166 size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
\r
void* chunks[3];
if (independent_comalloc(3, sizes, chunks) == 0)
  die();
\r
1170 struct Head* head = (struct Head*)(chunks[0]);
\r
1171 char* body = (char*)(chunks[1]);
\r
struct Foot* foot = (struct Foot*)(chunks[2]);
// ... operate on head, body, and foot ...
}
\r
1176 In general though, independent_comalloc is worth using only for
\r
1177 larger values of n_elements. For small values, you probably won't
\r
1178 detect enough difference from series of malloc calls to bother.
\r
1180 Overuse of independent_comalloc can increase overall memory usage,
\r
1181 since it cannot reuse existing noncontiguous small chunks that
\r
1182 might be available for some of the elements.
\r
1184 DLMALLOC_EXPORT void** dlindependent_comalloc(size_t, size_t*, void**);
\r
1187 bulk_free(void* array[], size_t n_elements)
\r
1188 Frees and clears (sets to null) each non-null pointer in the given
\r
1189 array. This is likely to be faster than freeing them one-by-one.
\r
1190 If footers are used, pointers that have been allocated in different
\r
1191 mspaces are not freed or cleared, and the count of all such pointers
\r
1192 is returned. For large arrays of pointers with poor locality, it
\r
1193 may be worthwhile to sort this array before calling bulk_free.
\r
1195 DLMALLOC_EXPORT size_t dlbulk_free(void**, size_t n_elements);
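/*
  A usage sketch, paired here with independent_calloc (struct Node is the
  example type used earlier in this file):

    void* nodes[100];
    if (independent_calloc(100, sizeof(struct Node), nodes) != 0) {
      // ... use the 100 cleared elements ...
      size_t unfreed = bulk_free(nodes, 100);   // frees and nulls each entry
      // unfreed is nonzero only if footers are in use and some pointers
      // were allocated from a different mspace
    }
*/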
\r
1198 pvalloc(size_t n);
\r
1199 Equivalent to valloc(minimum-page-that-holds(n)), that is,
\r
1200 round up n to nearest pagesize.
\r
1202 DLMALLOC_EXPORT void* dlpvalloc(size_t);
\r
1205 malloc_trim(size_t pad);
\r
1207 If possible, gives memory back to the system (via negative arguments
\r
1208 to sbrk) if there is unused memory at the `high' end of the malloc
\r
1209 pool or in unused MMAP segments. You can call this after freeing
\r
1210 large blocks of memory to potentially reduce the system-level memory
\r
1211 requirements of a program. However, it cannot guarantee to reduce
\r
1212 memory. Under some allocation patterns, some large free blocks of
\r
1213 memory will be locked between two used chunks, so they cannot be
\r
1214 given back to the system.
\r
1216 The `pad' argument to malloc_trim represents the amount of free
\r
1217 trailing space to leave untrimmed. If this argument is zero, only
\r
1218 the minimum amount of memory to maintain internal data structures
\r
1219 will be left. Non-zero arguments can be supplied to maintain enough
\r
1220 trailing space to service future expected allocations without having
\r
1221 to re-obtain memory from the system.
\r
1223 Malloc_trim returns 1 if it actually released any memory, else 0.
\r
1225 DLMALLOC_EXPORT int dlmalloc_trim(size_t);
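/*
  For example, after releasing a batch of large buffers a long-running
  program might do (the 64K pad is illustrative):

    int released = malloc_trim(64 * 1024);  // keep ~64K of trailing space
    // released is 1 if memory was returned to the system, else 0
*/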
\r
1229 Prints on stderr the amount of space obtained from the system (both
\r
1230 via sbrk and mmap), the maximum amount (which may be more than
\r
1231 current if malloc_trim and/or munmap got called), and the current
\r
1232 number of bytes allocated via malloc (or realloc, etc) but not yet
\r
1233 freed. Note that this is the number of bytes allocated, not the
\r
1234 number requested. It will be larger than the number requested
\r
1235 because of alignment and bookkeeping overhead. Because it includes
\r
1236 alignment wastage as being in use, this figure may be greater than
\r
1237 zero even when no user-level chunks are allocated.
\r
1239 The reported current and maximum system memory can be inaccurate if
\r
1240 a program makes other calls to system memory allocation functions
\r
1241 (normally sbrk) outside of malloc.
\r
1243 malloc_stats prints only the most commonly interesting statistics.
\r
1244 More information can be obtained by calling mallinfo.
\r
1246 DLMALLOC_EXPORT void dlmalloc_stats(void);
\r
1248 #endif /* ONLY_MSPACES */
\r
1251 malloc_usable_size(void* p);
\r
1253 Returns the number of bytes you can actually use in
\r
1254 an allocated chunk, which may be more than you requested (although
\r
1255 often not) due to alignment and minimum size constraints.
\r
1256 You can use this many bytes without worrying about
\r
1257 overwriting other allocated objects. This is not a particularly great
\r
1258 programming practice. malloc_usable_size can be more useful in
\r
1259 debugging and assertions, for example:
\r
1262 assert(malloc_usable_size(p) >= 256);
\r
1264 size_t dlmalloc_usable_size(void*);
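/*
  A small sketch (default naming assumed): the usable size is at least
  the requested size, so it can back assertions or let code use the
  alignment padding it already paid for.

    char* p = (char*)malloc(100);
    if (p != 0) {
      size_t cap = malloc_usable_size(p);   // cap >= 100
      memset(p, 0, cap);                    // all cap bytes are safely writable
      free(p);
    }
*/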
\r
1269 mspace is an opaque type representing an independent
\r
1270 region of space that supports mspace_malloc, etc.
\r
1272 typedef void* mspace;
\r
1275 create_mspace creates and returns a new independent space with the
\r
1276 given initial capacity, or, if 0, the default granularity size. It
\r
1277 returns null if there is no system memory available to create the
\r
1278 space. If argument locked is non-zero, the space uses a separate
\r
1279 lock to control access. The capacity of the space will grow
\r
1280 dynamically as needed to service mspace_malloc requests. You can
\r
1281 control the sizes of incremental increases of this space by
\r
1282 compiling with a different DEFAULT_GRANULARITY or dynamically
\r
1283 setting with mallopt(M_GRANULARITY, value).
\r
1285 DLMALLOC_EXPORT mspace create_mspace(size_t capacity, int locked);
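/*
  A minimal sketch (assumes compiling with MSPACES; default names used):
  create a private space, allocate from it, then release everything at
  once by destroying the space instead of freeing each object.

    mspace arena = create_mspace(0, 0);         // default capacity, no locking
    if (arena != 0) {
      void* a = mspace_malloc(arena, 128);
      void* b = mspace_calloc(arena, 16, sizeof(double));
      // ... use a and b ...
      destroy_mspace(arena);                    // reclaims a, b, and the space
    }
*/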
\r
1288 destroy_mspace destroys the given space, and attempts to return all
\r
1289 of its memory back to the system, returning the total number of
\r
1290 bytes freed. After destruction, the results of access to all memory
\r
1291 used by the space become undefined.
\r
1293 DLMALLOC_EXPORT size_t destroy_mspace(mspace msp);
\r
1296 create_mspace_with_base uses the memory supplied as the initial base
\r
1297 of a new mspace. Part (less than 128*sizeof(size_t) bytes) of this
\r
1298 space is used for bookkeeping, so the capacity must be at least this
\r
1299 large. (Otherwise 0 is returned.) When this initial space is
\r
1300 exhausted, additional memory will be obtained from the system.
\r
1301 Destroying this space will deallocate all additionally allocated
\r
1302 space (if possible) but not the initial base.
\r
1304 DLMALLOC_EXPORT mspace create_mspace_with_base(void* base, size_t capacity, int locked);
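/*
  A minimal sketch (MSPACES assumed): carve an mspace out of a buffer
  supplied by the caller. Bookkeeping consumes part of the buffer, so
  it must be comfortably larger than 128*sizeof(size_t).

    static char backing[1 << 20];               // hypothetical 1MB region
    mspace arena = create_mspace_with_base(backing, sizeof(backing), 0);
    if (arena != 0) {
      void* p = mspace_malloc(arena, 256);
      mspace_free(arena, p);
      destroy_mspace(arena);   // releases extra segments, but never `backing'
    }
*/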
\r
1307 mspace_track_large_chunks controls whether requests for large chunks
\r
1308 are allocated in their own untracked mmapped regions, separate from
\r
1309 others in this mspace. By default large chunks are not tracked,
\r
1310 which reduces fragmentation. However, such chunks are not
\r
1311 necessarily released to the system upon destroy_mspace. Enabling
\r
1312 tracking by setting to true may increase fragmentation, but avoids
\r
1313 leakage when relying on destroy_mspace to release all memory
\r
1314 allocated using this space. The function returns the previous setting.
\r
1317 DLMALLOC_EXPORT int mspace_track_large_chunks(mspace msp, int enable);
\r
1321 mspace_malloc behaves as malloc, but operates within the given space.
\r
1324 DLMALLOC_EXPORT void* mspace_malloc(mspace msp, size_t bytes);
\r
1327 mspace_free behaves as free, but operates within the given space.
\r
1330 If compiled with FOOTERS==1, mspace_free is not actually needed.
\r
1331 free may be called instead of mspace_free because freed chunks from
\r
1332 any space are handled by their originating spaces.
\r
1334 DLMALLOC_EXPORT void mspace_free(mspace msp, void* mem);
\r
1337 mspace_realloc behaves as realloc, but operates within the given space.
\r
1340 If compiled with FOOTERS==1, mspace_realloc is not actually
\r
1341 needed. realloc may be called instead of mspace_realloc because
\r
1342 realloced chunks from any space are handled by their originating spaces.
\r
1345 DLMALLOC_EXPORT void* mspace_realloc(mspace msp, void* mem, size_t newsize);
\r
1348 mspace_calloc behaves as calloc, but operates within the given space.
\r
1351 DLMALLOC_EXPORT void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size);
\r
1354 mspace_memalign behaves as memalign, but operates within the given space.
\r
1357 DLMALLOC_EXPORT void* mspace_memalign(mspace msp, size_t alignment, size_t bytes);
\r
1360 mspace_independent_calloc behaves as independent_calloc, but
\r
1361 operates within the given space.
\r
1363 DLMALLOC_EXPORT void** mspace_independent_calloc(mspace msp, size_t n_elements,
\r
1364 size_t elem_size, void* chunks[]);
\r
1367 mspace_independent_comalloc behaves as independent_comalloc, but
\r
1368 operates within the given space.
\r
1370 DLMALLOC_EXPORT void** mspace_independent_comalloc(mspace msp, size_t n_elements,
\r
1371 size_t sizes[], void* chunks[]);
\r
1374 mspace_footprint() returns the number of bytes obtained from the
\r
1375 system for this space.
\r
1377 DLMALLOC_EXPORT size_t mspace_footprint(mspace msp);
\r
1380 mspace_max_footprint() returns the peak number of bytes obtained from the
\r
1381 system for this space.
\r
1383 DLMALLOC_EXPORT size_t mspace_max_footprint(mspace msp);
\r
1388 mspace_mallinfo behaves as mallinfo, but reports properties of the given space.
\r
1391 DLMALLOC_EXPORT struct mallinfo mspace_mallinfo(mspace msp);
\r
1392 #endif /* NO_MALLINFO */
\r
1395 mspace_usable_size(void* p) behaves the same as malloc_usable_size;
\r
1397 DLMALLOC_EXPORT size_t mspace_usable_size(void* mem);
\r
1400 mspace_malloc_stats behaves as malloc_stats, but reports
\r
1401 properties of the given space.
\r
1403 DLMALLOC_EXPORT void mspace_malloc_stats(mspace msp);
\r
1406 mspace_trim behaves as malloc_trim, but
\r
1407 operates within the given space.
\r
1409 DLMALLOC_EXPORT int mspace_trim(mspace msp, size_t pad);
\r
1412 An alias for mallopt.
\r
1414 DLMALLOC_EXPORT int mspace_mallopt(int, int);
\r
1416 #endif /* MSPACES */
\r
1418 #ifdef __cplusplus
\r
1419 } /* end of extern "C" */
\r
1420 #endif /* __cplusplus */
\r
1423 ========================================================================
\r
1424 To make a fully customizable malloc.h header file, cut everything
\r
1425 above this line, put into file malloc.h, edit to suit, and #include it
\r
1426 on the next line, as well as in programs that use this malloc.
\r
1427 ========================================================================
\r
1430 /* #include "malloc.h" */
\r
1432 /*------------------------------ internal #includes ---------------------- */
\r
1435 #pragma warning( disable : 4146 ) /* no "unsigned" warnings */
\r
1436 #endif /* _MSC_VER */
\r
1437 #if !NO_MALLOC_STATS
\r
1438 #include <stdio.h> /* for printing in malloc_stats */
\r
1439 #endif /* NO_MALLOC_STATS */
\r
1440 #ifndef LACKS_ERRNO_H
\r
1441 #include <errno.h> /* for MALLOC_FAILURE_ACTION */
\r
1442 #endif /* LACKS_ERRNO_H */
\r
1444 #if ABORT_ON_ASSERT_FAILURE
\r
1446 #define assert(x) if(!(x)) ABORT
\r
1447 #else /* ABORT_ON_ASSERT_FAILURE */
\r
1448 #include <assert.h>
\r
1449 #endif /* ABORT_ON_ASSERT_FAILURE */
\r
1455 #endif /* DEBUG */
\r
1456 #if !defined(WIN32) && !defined(LACKS_TIME_H)
\r
1457 #include <time.h> /* for magic initialization */
\r
1458 #endif /* WIN32 */
\r
1459 #ifndef LACKS_STDLIB_H
\r
1460 #include <stdlib.h> /* for abort() */
\r
1461 #endif /* LACKS_STDLIB_H */
\r
1462 #ifndef LACKS_STRING_H
\r
1463 #include <string.h> /* for memset etc */
\r
1464 #endif /* LACKS_STRING_H */
\r
1465 #if USE_BUILTIN_FFS
\r
1466 #ifndef LACKS_STRINGS_H
\r
1467 #include <strings.h> /* for ffs */
\r
1468 #endif /* LACKS_STRINGS_H */
\r
1469 #endif /* USE_BUILTIN_FFS */
\r
1471 #ifndef LACKS_SYS_MMAN_H
\r
1472 /* On some versions of linux, mremap decl in mman.h needs __USE_GNU set */
\r
1473 #if (defined(linux) && !defined(__USE_GNU))
\r
1474 #define __USE_GNU 1
\r
1475 #include <sys/mman.h> /* for mmap */
\r
1478 #include <sys/mman.h> /* for mmap */
\r
1479 #endif /* linux */
\r
1480 #endif /* LACKS_SYS_MMAN_H */
\r
1481 #ifndef LACKS_FCNTL_H
\r
1482 #include <fcntl.h>
\r
1483 #endif /* LACKS_FCNTL_H */
\r
1484 #endif /* HAVE_MMAP */
\r
1485 #ifndef LACKS_UNISTD_H
\r
1486 #include <unistd.h> /* for sbrk, sysconf */
\r
1487 #else /* LACKS_UNISTD_H */
\r
1488 #if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
\r
1489 /*extern void* sbrk(ptrdiff_t);*/
\r
1490 #endif /* FreeBSD etc */
\r
1491 #endif /* LACKS_UNISTD_H */
\r
1493 /* Declarations for locking */
\r
1496 #if defined (__SVR4) && defined (__sun) /* solaris */
\r
1497 #include <thread.h>
\r
1498 #elif !defined(LACKS_SCHED_H)
\r
1499 #include <sched.h>
\r
1500 #endif /* solaris or LACKS_SCHED_H */
\r
1501 #if (defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0) || !USE_SPIN_LOCKS
\r
1502 /*#include <pthread.h>*/
\r
1503 #endif /* USE_RECURSIVE_LOCKS ... */
\r
1504 #elif defined(_MSC_VER)
\r
1506 /* These are already defined on AMD64 builds */
\r
1507 #ifdef __cplusplus
\r
1509 #endif /* __cplusplus */
\r
1510 LONG __cdecl _InterlockedCompareExchange(LONG volatile *Dest, LONG Exchange, LONG Comp);
\r
1511 LONG __cdecl _InterlockedExchange(LONG volatile *Target, LONG Value);
\r
1512 #ifdef __cplusplus
\r
1514 #endif /* __cplusplus */
\r
1515 #endif /* _M_AMD64 */
\r
1516 #pragma intrinsic (_InterlockedCompareExchange)
\r
1517 #pragma intrinsic (_InterlockedExchange)
\r
1518 #define interlockedcompareexchange _InterlockedCompareExchange
\r
1519 #define interlockedexchange _InterlockedExchange
\r
1520 #elif defined(WIN32) && defined(__GNUC__)
\r
1521 #define interlockedcompareexchange(a, b, c) __sync_val_compare_and_swap(a, c, b)
\r
1522 #define interlockedexchange __sync_lock_test_and_set
\r
1523 #endif /* Win32 */
\r
1524 #endif /* USE_LOCKS */
\r
1526 /* Declarations for bit scanning on win32 */
\r
1527 #if defined(_MSC_VER) && _MSC_VER>=1300
\r
1528 #ifndef BitScanForward /* Try to avoid pulling in WinNT.h */
\r
1529 #ifdef __cplusplus
\r
1531 #endif /* __cplusplus */
\r
1532 unsigned char _BitScanForward(unsigned long *index, unsigned long mask);
\r
1533 unsigned char _BitScanReverse(unsigned long *index, unsigned long mask);
\r
1534 #ifdef __cplusplus
\r
1536 #endif /* __cplusplus */
\r
1538 #define BitScanForward _BitScanForward
\r
1539 #define BitScanReverse _BitScanReverse
\r
1540 #pragma intrinsic(_BitScanForward)
\r
1541 #pragma intrinsic(_BitScanReverse)
\r
1542 #endif /* BitScanForward */
\r
1543 #endif /* defined(_MSC_VER) && _MSC_VER>=1300 */
\r
1546 #ifndef malloc_getpagesize
\r
1547 # ifdef _SC_PAGESIZE /* some SVR4 systems omit an underscore */
\r
1548 # ifndef _SC_PAGE_SIZE
\r
1549 # define _SC_PAGE_SIZE _SC_PAGESIZE
\r
1552 # ifdef _SC_PAGE_SIZE
\r
1553 # define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
\r
1555 # if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
\r
1556 extern size_t getpagesize();
\r
1557 # define malloc_getpagesize getpagesize()
\r
1559 # ifdef WIN32 /* use supplied emulation of getpagesize */
\r
1560 # define malloc_getpagesize getpagesize()
\r
1562 # ifndef LACKS_SYS_PARAM_H
\r
1563 # include <sys/param.h>
\r
1565 # ifdef EXEC_PAGESIZE
\r
1566 # define malloc_getpagesize EXEC_PAGESIZE
\r
1570 # define malloc_getpagesize NBPG
\r
1572 # define malloc_getpagesize (NBPG * CLSIZE)
\r
1576 # define malloc_getpagesize NBPC
\r
1579 # define malloc_getpagesize PAGESIZE
\r
1580 # else /* just guess */
\r
1581 # define malloc_getpagesize ((size_t)4096U)
\r
1592 /* ------------------- size_t and alignment properties -------------------- */
\r
1594 /* The byte and bit size of a size_t */
\r
1595 #define SIZE_T_SIZE (sizeof(size_t))
\r
1596 #define SIZE_T_BITSIZE (sizeof(size_t) << 3)
\r
1598 /* Some constants coerced to size_t */
\r
1599 /* Annoying but necessary to avoid errors on some platforms */
\r
1600 #define SIZE_T_ZERO ((size_t)0)
\r
1601 #define SIZE_T_ONE ((size_t)1)
\r
1602 #define SIZE_T_TWO ((size_t)2)
\r
1603 #define SIZE_T_FOUR ((size_t)4)
\r
1604 #define TWO_SIZE_T_SIZES (SIZE_T_SIZE<<1)
\r
1605 #define FOUR_SIZE_T_SIZES (SIZE_T_SIZE<<2)
\r
1606 #define SIX_SIZE_T_SIZES (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES)
\r
1607 #define HALF_MAX_SIZE_T (MAX_SIZE_T / 2U)
\r
1609 /* The bit mask value corresponding to MALLOC_ALIGNMENT */
\r
1610 #define CHUNK_ALIGN_MASK (MALLOC_ALIGNMENT - SIZE_T_ONE)
\r
1612 /* True if address a has acceptable alignment */
\r
1613 #define is_aligned(A) (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0)
\r
1615 /* the number of bytes to offset an address to align it */
\r
1616 #define align_offset(A)\
\r
1617 ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\
\r
1618 ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK))
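/*
  A worked example, assuming the default MALLOC_ALIGNMENT of 8 so that
  CHUNK_ALIGN_MASK is 7:

    align_offset((void*)0x1005)     // (8 - (0x1005 & 7)) & 7 == 3
    align_offset((void*)0x1008)     // already aligned, so 0
    is_aligned((char*)0x1005 + 3)   // holds after applying the offset
*/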
\r
1620 /* -------------------------- MMAP preliminaries ------------------------- */
\r
1623 If HAVE_MORECORE or HAVE_MMAP is false, we just define calls and
\r
1624 checks to fail so the compiler optimizer can delete code rather than
\r
1625 using so many "#if"s.
\r
1629 /* MORECORE and MMAP must return MFAIL on failure */
\r
1630 #define MFAIL ((void*)(MAX_SIZE_T))
\r
1631 #define CMFAIL ((char*)(MFAIL)) /* defined for convenience */
\r
1635 #ifdef MMAP_DEFAULT
\r
1636 #elif !defined(WIN32)
\r
1637 #define MUNMAP_DEFAULT(a, s) munmap((a), (s))
\r
1638 #define MMAP_PROT (PROT_READ|PROT_WRITE)
\r
1639 #if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
\r
1640 #define MAP_ANONYMOUS MAP_ANON
\r
1641 #endif /* MAP_ANON */
\r
1642 #ifdef MAP_ANONYMOUS
\r
1643 #define MMAP_FLAGS (MAP_PRIVATE|MAP_ANONYMOUS)
\r
1644 #define MMAP_DEFAULT(s) mmap(0, (s), MMAP_PROT, MMAP_FLAGS, -1, 0)
\r
1645 #else /* MAP_ANONYMOUS */
\r
1647 Nearly all versions of mmap support MAP_ANONYMOUS, so the following
\r
1648 is unlikely to be needed, but is supplied just in case.
\r
1650 #define MMAP_FLAGS (MAP_PRIVATE)
\r
1651 #define MMAP_DEFAULT(s) ((dev_zero_fd < 0) ? \
\r
1652 (dev_zero_fd = open("/dev/zero", O_RDWR), \
\r
1653 mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) : \
\r
1654 mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0))
\r
1655 #endif /* MAP_ANONYMOUS */
\r
1657 #define DIRECT_MMAP_DEFAULT(s) MMAP_DEFAULT(s)
\r
1661 /* Win32 MMAP via VirtualAlloc */
\r
1662 static FORCEINLINE void* win32mmap(size_t size) {
\r
1663 void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT, PAGE_READWRITE);
\r
1664 return (ptr != 0)? ptr: MFAIL;
\r
1667 /* For direct MMAP, use MEM_TOP_DOWN to minimize interference */
\r
1668 static FORCEINLINE void* win32direct_mmap(size_t size) {
\r
1669 void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT|MEM_TOP_DOWN,
\r
1671 return (ptr != 0)? ptr: MFAIL;
\r
1674 /* This function supports releasing coalesced segments */
\r
1675 static FORCEINLINE int win32munmap(void* ptr, size_t size) {
\r
1676 MEMORY_BASIC_INFORMATION minfo;
\r
1677 char* cptr = (char*)ptr;
\r
1679 if (VirtualQuery(cptr, &minfo, sizeof(minfo)) == 0)
\r
1681 if (minfo.BaseAddress != cptr || minfo.AllocationBase != cptr ||
\r
1682 minfo.State != MEM_COMMIT || minfo.RegionSize > size)
\r
1684 if (VirtualFree(cptr, 0, MEM_RELEASE) == 0)
\r
1686 cptr += minfo.RegionSize;
\r
1687 size -= minfo.RegionSize;
\r
1692 #define MMAP_DEFAULT(s) win32mmap(s)
\r
1693 #define MUNMAP_DEFAULT(a, s) win32munmap((a), (s))
\r
1694 #define DIRECT_MMAP_DEFAULT(s) win32direct_mmap(s)
\r
1695 #endif /* WIN32 */
\r
1696 #endif /* HAVE_MMAP */
\r
1698 #if HAVE_MREMAP && !defined(MREMAP_DEFAULT)
\r
1700 #define MREMAP_DEFAULT(addr, osz, nsz, mv) mremap((addr), (osz), (nsz), (mv))
\r
1701 #endif /* WIN32 */
\r
1702 #endif /* HAVE_MREMAP */
\r
1705 * Define CALL_MORECORE
\r
1709 #define CALL_MORECORE(S) MORECORE(S)
\r
1710 #else /* MORECORE */
\r
1711 #define CALL_MORECORE(S) MORECORE_DEFAULT(S)
\r
1712 #endif /* MORECORE */
\r
1713 #else /* HAVE_MORECORE */
\r
1714 #define CALL_MORECORE(S) MFAIL
\r
1715 #endif /* HAVE_MORECORE */
\r
1718 * Define CALL_MMAP/CALL_MUNMAP/CALL_DIRECT_MMAP
\r
1721 #define USE_MMAP_BIT (SIZE_T_ONE)
\r
1724 #define CALL_MMAP(s) MMAP(s)
\r
1726 #define CALL_MMAP(s) MMAP_DEFAULT(s)
\r
1729 #define CALL_MUNMAP(a, s) MUNMAP((a), (s))
\r
1730 #else /* MUNMAP */
\r
1731 #define CALL_MUNMAP(a, s) MUNMAP_DEFAULT((a), (s))
\r
1732 #endif /* MUNMAP */
\r
1733 #ifdef DIRECT_MMAP
\r
1734 #define CALL_DIRECT_MMAP(s) DIRECT_MMAP(s)
\r
1735 #else /* DIRECT_MMAP */
\r
1736 #define CALL_DIRECT_MMAP(s) DIRECT_MMAP_DEFAULT(s)
\r
1737 #endif /* DIRECT_MMAP */
\r
1738 #else /* HAVE_MMAP */
\r
1739 #define USE_MMAP_BIT (SIZE_T_ZERO)
\r
1741 #define MMAP(s) MFAIL
\r
1742 #define MUNMAP(a, s) (-1)
\r
1743 #define DIRECT_MMAP(s) MFAIL
\r
1744 #define CALL_DIRECT_MMAP(s) DIRECT_MMAP(s)
\r
1745 #define CALL_MMAP(s) MMAP(s)
\r
1746 #define CALL_MUNMAP(a, s) MUNMAP((a), (s))
\r
1747 #endif /* HAVE_MMAP */
\r
1750 * Define CALL_MREMAP
\r
1752 #if HAVE_MMAP && HAVE_MREMAP
\r
1754 #define CALL_MREMAP(addr, osz, nsz, mv) MREMAP((addr), (osz), (nsz), (mv))
\r
1755 #else /* MREMAP */
\r
1756 #define CALL_MREMAP(addr, osz, nsz, mv) MREMAP_DEFAULT((addr), (osz), (nsz), (mv))
\r
1757 #endif /* MREMAP */
\r
1758 #else /* HAVE_MMAP && HAVE_MREMAP */
\r
1759 #define CALL_MREMAP(addr, osz, nsz, mv) MFAIL
\r
1760 #endif /* HAVE_MMAP && HAVE_MREMAP */
\r
1762 /* mstate bit set if contiguous morecore disabled or failed */
\r
1763 #define USE_NONCONTIGUOUS_BIT (4U)
\r
1765 /* segment bit set in create_mspace_with_base */
\r
1766 #define EXTERN_BIT (8U)
\r
1769 /* --------------------------- Lock preliminaries ------------------------ */
\r
1772 When locks are defined, there is one global lock, plus
\r
1773 one per-mspace lock.
\r
1775 The global lock ensures that mparams.magic and other unique
\r
1776 mparams values are initialized only once. It also protects
\r
1777 sequences of calls to MORECORE. In many cases sys_alloc requires
\r
1778 two calls that should not be interleaved with calls by other
\r
1779 threads. This does not protect against direct calls to MORECORE
\r
1780 by other threads not using this lock, so there is still code to
\r
1781 cope as best we can with interference.
\r
1783 Per-mspace locks surround calls to malloc, free, etc.
\r
1784 By default, locks are simple non-reentrant mutexes.
\r
1786 Because lock-protected regions generally have bounded times, it is
\r
1787 OK to use the supplied simple spinlocks. Spinlocks are likely to
\r
1788 improve performance for lightly contended applications, but worsen
\r
1789 performance under heavy contention.
\r
1791 If USE_LOCKS is > 1, the definitions of lock routines here are
\r
1792 bypassed, in which case you will need to define the type MLOCK_T,
\r
1793 and at least INITIAL_LOCK, DESTROY_LOCK, ACQUIRE_LOCK, RELEASE_LOCK
\r
1794 and TRY_LOCK. You must also declare a
\r
1795 static MLOCK_T malloc_global_mutex = { initialization values };.
\r
1800 #define USE_LOCK_BIT (0U)
\r
1801 #define INITIAL_LOCK(l) (0)
\r
1802 #define DESTROY_LOCK(l) (0)
\r
1803 #define ACQUIRE_MALLOC_GLOBAL_LOCK()
\r
1804 #define RELEASE_MALLOC_GLOBAL_LOCK()
\r
1808 /* ----------------------- User-defined locks ------------------------ */
\r
1809 /* Define your own lock implementation here */
\r
1810 /* #define INITIAL_LOCK(lk) ... */
\r
1811 /* #define DESTROY_LOCK(lk) ... */
\r
1812 /* #define ACQUIRE_LOCK(lk) ... */
\r
1813 /* #define RELEASE_LOCK(lk) ... */
\r
1814 /* #define TRY_LOCK(lk) ... */
\r
1815 /* static MLOCK_T malloc_global_mutex = ... */
\r
1817 #elif USE_SPIN_LOCKS
\r
1819 /* First, define CAS_LOCK and CLEAR_LOCK on ints */
\r
1820 /* Note CAS_LOCK defined to return 0 on success */
\r
1822 #if defined(__GNUC__)&& (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 1))
\r
1823 #define CAS_LOCK(sl) __sync_lock_test_and_set(sl, 1)
\r
1824 #define CLEAR_LOCK(sl) __sync_lock_release(sl)
\r
1826 #elif (defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__)))
\r
1827 /* Custom spin locks for older gcc on x86 */
\r
1828 static FORCEINLINE int x86_cas_lock(int *sl) {
\r
1832 __asm__ __volatile__ ("lock; cmpxchgl %1, %2"
\r
1834 : "r" (val), "m" (*(sl)), "0"(cmp)
\r
1835 : "memory", "cc");
\r
1839 static FORCEINLINE void x86_clear_lock(int* sl) {
\r
1843 __asm__ __volatile__ ("lock; xchgl %0, %1"
\r
1845 : "m" (*(sl)), "0"(prev)
\r
1849 #define CAS_LOCK(sl) x86_cas_lock(sl)
\r
1850 #define CLEAR_LOCK(sl) x86_clear_lock(sl)
\r
1852 #else /* Win32 MSC */
\r
1853 #define CAS_LOCK(sl) interlockedexchange(sl, 1)
\r
1854 #define CLEAR_LOCK(sl) interlockedexchange (sl, 0)
\r
1856 #endif /* ... gcc spin locks ... */
\r
1858 /* How to yield for a spin lock */
\r
1859 #define SPINS_PER_YIELD 63
\r
1860 #if defined(_MSC_VER)
\r
1861 #define SLEEP_EX_DURATION 50 /* delay for yield/sleep */
\r
1862 #define SPIN_LOCK_YIELD SleepEx(SLEEP_EX_DURATION, FALSE)
\r
1863 #elif defined (__SVR4) && defined (__sun) /* solaris */
\r
1864 #define SPIN_LOCK_YIELD thr_yield();
\r
1865 #elif !defined(LACKS_SCHED_H)
\r
1866 #define SPIN_LOCK_YIELD sched_yield();
\r
1868 #define SPIN_LOCK_YIELD
\r
1869 #endif /* ... yield ... */
\r
1871 #if !defined(USE_RECURSIVE_LOCKS) || USE_RECURSIVE_LOCKS == 0
\r
1872 /* Plain spin locks use single word (embedded in malloc_states) */
\r
1873 static int spin_acquire_lock(int *sl) {
\r
1875 while (*(volatile int *)sl != 0 || CAS_LOCK(sl)) {
\r
1876 if ((++spins & SPINS_PER_YIELD) == 0) {
\r
1883 #define MLOCK_T int
\r
1884 #define TRY_LOCK(sl) !CAS_LOCK(sl)
\r
1885 #define RELEASE_LOCK(sl) CLEAR_LOCK(sl)
\r
1886 #define ACQUIRE_LOCK(sl) (CAS_LOCK(sl)? spin_acquire_lock(sl) : 0)
\r
1887 #define INITIAL_LOCK(sl) (*sl = 0)
\r
1888 #define DESTROY_LOCK(sl) (0)
\r
1889 static MLOCK_T malloc_global_mutex = 0;
\r
1891 #else /* USE_RECURSIVE_LOCKS */
\r
1892 /* types for lock owners */
\r
1894 #define THREAD_ID_T DWORD
\r
1895 #define CURRENT_THREAD GetCurrentThreadId()
\r
1896 #define EQ_OWNER(X,Y) ((X) == (Y))
\r
1899 Note: the following assume that pthread_t is a type that can be
\r
1900 initialized to (casted) zero. If this is not the case, you will need to
\r
1901 somehow redefine these or not use spin locks.
\r
1903 #define THREAD_ID_T pthread_t
\r
1904 #define CURRENT_THREAD pthread_self()
\r
1905 #define EQ_OWNER(X,Y) pthread_equal(X, Y)
\r
1908 struct malloc_recursive_lock {
\r
1911 THREAD_ID_T threadid;
\r
1914 #define MLOCK_T struct malloc_recursive_lock
\r
1915 static MLOCK_T malloc_global_mutex = { 0, 0, (THREAD_ID_T)0};
\r
1917 static FORCEINLINE void recursive_release_lock(MLOCK_T *lk) {
\r
1918 assert(lk->sl != 0);
\r
1919 if (--lk->c == 0) {
\r
1920 CLEAR_LOCK(&lk->sl);
\r
1924 static FORCEINLINE int recursive_acquire_lock(MLOCK_T *lk) {
\r
1925 THREAD_ID_T mythreadid = CURRENT_THREAD;
\r
1928 if (*((volatile int *)(&lk->sl)) == 0) {
\r
1929 if (!CAS_LOCK(&lk->sl)) {
\r
1930 lk->threadid = mythreadid;
\r
1935 else if (EQ_OWNER(lk->threadid, mythreadid)) {
\r
1939 if ((++spins & SPINS_PER_YIELD) == 0) {
\r
1945 static FORCEINLINE int recursive_try_lock(MLOCK_T *lk) {
\r
1946 THREAD_ID_T mythreadid = CURRENT_THREAD;
\r
1947 if (*((volatile int *)(&lk->sl)) == 0) {
\r
1948 if (!CAS_LOCK(&lk->sl)) {
\r
1949 lk->threadid = mythreadid;
\r
1954 else if (EQ_OWNER(lk->threadid, mythreadid)) {
\r
1961 #define RELEASE_LOCK(lk) recursive_release_lock(lk)
\r
1962 #define TRY_LOCK(lk) recursive_try_lock(lk)
\r
1963 #define ACQUIRE_LOCK(lk) recursive_acquire_lock(lk)
\r
1964 #define INITIAL_LOCK(lk) ((lk)->threadid = (THREAD_ID_T)0, (lk)->sl = 0, (lk)->c = 0)
\r
1965 #define DESTROY_LOCK(lk) (0)
\r
1966 #endif /* USE_RECURSIVE_LOCKS */
\r
1968 #elif defined(WIN32) /* Win32 critical sections */
\r
1969 #define MLOCK_T CRITICAL_SECTION
\r
1970 #define ACQUIRE_LOCK(lk) (EnterCriticalSection(lk), 0)
\r
1971 #define RELEASE_LOCK(lk) LeaveCriticalSection(lk)
\r
1972 #define TRY_LOCK(lk) TryEnterCriticalSection(lk)
\r
1973 #define INITIAL_LOCK(lk) (!InitializeCriticalSectionAndSpinCount((lk), 0x80000000|4000))
\r
1974 #define DESTROY_LOCK(lk) (DeleteCriticalSection(lk), 0)
\r
1975 #define NEED_GLOBAL_LOCK_INIT
\r
1977 static MLOCK_T malloc_global_mutex;
\r
1978 static volatile long malloc_global_mutex_status;
\r
1980 /* Use spin loop to initialize global lock */
\r
1981 static void init_malloc_global_mutex() {
\r
1983 long stat = malloc_global_mutex_status;
\r
1986 /* transition to < 0 while initializing, then to > 0 */
\r
1988 interlockedcompareexchange(&malloc_global_mutex_status, -1, 0) == 0) {
\r
1989 InitializeCriticalSection(&malloc_global_mutex);
\r
1990 interlockedexchange(&malloc_global_mutex_status,1);
\r
1993 SleepEx(0, FALSE);
\r
1997 #else /* pthreads-based locks */
\r
1998 #define MLOCK_T pthread_mutex_t
\r
1999 #define ACQUIRE_LOCK(lk) pthread_mutex_lock(lk)
\r
2000 #define RELEASE_LOCK(lk) pthread_mutex_unlock(lk)
\r
2001 #define TRY_LOCK(lk) (!pthread_mutex_trylock(lk))
\r
2002 #define INITIAL_LOCK(lk) pthread_init_lock(lk)
\r
2003 #define DESTROY_LOCK(lk) pthread_mutex_destroy(lk)
\r
2005 #if defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0 && defined(linux) && !defined(PTHREAD_MUTEX_RECURSIVE)
\r
2006 /* Cope with old-style linux recursive lock initialization by adding */
\r
2007 /* skipped internal declaration from pthread.h */
\r
2008 extern int pthread_mutexattr_setkind_np __P ((pthread_mutexattr_t *__attr,
\r
2010 #define PTHREAD_MUTEX_RECURSIVE PTHREAD_MUTEX_RECURSIVE_NP
\r
2011 #define pthread_mutexattr_settype(x,y) pthread_mutexattr_setkind_np(x,y)
\r
2012 #endif /* USE_RECURSIVE_LOCKS ... */
\r
2014 static MLOCK_T malloc_global_mutex = PTHREAD_MUTEX_INITIALIZER;
\r
2016 static int pthread_init_lock (MLOCK_T *lk) {
\r
2017 pthread_mutexattr_t attr;
\r
2018 if (pthread_mutexattr_init(&attr)) return 1;
\r
2019 #if defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0
\r
2020 if (pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE)) return 1;
\r
2022 if (pthread_mutex_init(lk, &attr)) return 1;
\r
2023 if (pthread_mutexattr_destroy(&attr)) return 1;
\r
2027 #endif /* ... lock types ... */
\r
2029 /* Common code for all lock types */
\r
2030 #define USE_LOCK_BIT (2U)
\r
2032 #ifndef ACQUIRE_MALLOC_GLOBAL_LOCK
\r
2033 #define ACQUIRE_MALLOC_GLOBAL_LOCK() ACQUIRE_LOCK(&malloc_global_mutex);
\r
2036 #ifndef RELEASE_MALLOC_GLOBAL_LOCK
\r
2037 #define RELEASE_MALLOC_GLOBAL_LOCK() RELEASE_LOCK(&malloc_global_mutex);
\r
2040 #endif /* USE_LOCKS */
\r
2042 /* ----------------------- Chunk representations ------------------------ */
\r
2045 (The following includes lightly edited explanations by Colin Plumb.)
\r
2047 The malloc_chunk declaration below is misleading (but accurate and
\r
2048 necessary). It declares a "view" into memory allowing access to
\r
2049 necessary fields at known offsets from a given base.
\r
2051 Chunks of memory are maintained using a `boundary tag' method as
\r
2052 originally described by Knuth. (See the paper by Paul Wilson
\r
2053 ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such
\r
2054 techniques.) Sizes of free chunks are stored both in the front of
\r
2055 each chunk and at the end. This makes consolidating fragmented
\r
2056 chunks into bigger chunks fast. The head fields also hold bits
\r
2057 representing whether chunks are free or in use.
\r
2059 Here are some pictures to make it clearer. They are "exploded" to
\r
2060 show that the state of a chunk can be thought of as extending from
\r
2061 the high 31 bits of the head field of its header through the
\r
2062 prev_foot and PINUSE_BIT bit of the following chunk header.
\r
2064 A chunk that's in use looks like:
\r
2066 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
\r
2067 | Size of previous chunk (if P = 0) |
\r
2068 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
\r
2069 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
\r
2070 | Size of this chunk 1| +-+
\r
2071 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
\r
2077 +- size - sizeof(size_t) available payload bytes -+
\r
2081 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
\r
2082 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1|
\r
2083 | Size of next chunk (may or may not be in use) | +-+
\r
2084 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
\r
2086 And if it's free, it looks like this:
\r
2089 | User payload (must be in use, or we would have merged!) |
\r
2090 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
\r
2091 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
\r
2092 | Size of this chunk 0| +-+
\r
2093 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
\r
2095 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
\r
2097 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
\r
2099 +- size - sizeof(struct chunk) unused bytes -+
\r
2101 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
\r
2102 | Size of this chunk |
\r
2103 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
\r
2104 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0|
\r
2105 | Size of next chunk (must be in use, or we would have merged)| +-+
\r
2106 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
\r
2108 +- User payload -+
\r
2110 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
\r
2113 Note that since we always merge adjacent free chunks, the chunks
\r
2114 adjacent to a free chunk must be in use.
\r
2116 Given a pointer to a chunk (which can be derived trivially from the
\r
2117 payload pointer) we can, in O(1) time, find out whether the adjacent
\r
2118 chunks are free, and if so, unlink them from the lists that they
\r
2119 are on and merge them with the current chunk.
\r
2121 Chunks always begin on even word boundaries, so the mem portion
\r
2122 (which is returned to the user) is also on an even word boundary, and
\r
2123 thus at least double-word aligned.
\r
2125 The P (PINUSE_BIT) bit, stored in the unused low-order bit of the
\r
2126 chunk size (which is always a multiple of two words), is an in-use
\r
2127 bit for the *previous* chunk. If that bit is *clear*, then the
\r
2128 word before the current chunk size contains the previous chunk
\r
2129 size, and can be used to find the front of the previous chunk.
\r
2130 The very first chunk allocated always has this bit set, preventing
\r
2131 access to non-existent (or non-owned) memory. If pinuse is set for
\r
2132 any given chunk, then you CANNOT determine the size of the
\r
2133 previous chunk, and might even get a memory addressing fault when trying to do so.
\r
2136 The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of
\r
2137 the chunk size redundantly records whether the current chunk is
\r
2138 inuse (unless the chunk is mmapped). This redundancy enables usage
\r
2139 checks within free and realloc, and reduces indirection when freeing
\r
2140 and consolidating chunks.
\r
2142 Each freshly allocated chunk must have both cinuse and pinuse set.
\r
2143 That is, each allocated chunk borders either a previously allocated
\r
2144 and still in-use chunk, or the base of its memory arena. This is
\r
2145 ensured by making all allocations from the `lowest' part of any
\r
2146 found chunk. Further, no free chunk physically borders another one,
\r
2147 so each free chunk is known to be preceded and followed by either
\r
2148 inuse chunks or the ends of memory.
\r
2150 Note that the `foot' of the current chunk is actually represented
\r
2151 as the prev_foot of the NEXT chunk. This makes it easier to
\r
2152 deal with alignments etc but can be very confusing when trying
\r
2153 to extend or adapt this code.
\r
2155 The exceptions to all this are
\r
2157 1. The special chunk `top' is the top-most available chunk (i.e.,
\r
2158 the one bordering the end of available memory). It is treated
\r
2159 specially. Top is never included in any bin, is used only if
\r
2160 no other chunk is available, and is released back to the
\r
2161 system if it is very large (see M_TRIM_THRESHOLD). In effect,
\r
2162 the top chunk is treated as larger (and thus less well
\r
2163 fitting) than any other available chunk. The top chunk
\r
2164 doesn't update its trailing size field since there is no next
\r
2165 contiguous chunk that would have to index off it. However,
\r
2166 space is still allocated for it (TOP_FOOT_SIZE) to enable
\r
2167 separation or merging when space is extended.
\r
2169 2. Chunks allocated via mmap have both cinuse and pinuse bits
\r
2170 cleared in their head fields. Because they are allocated
\r
2171 one-by-one, each must carry its own prev_foot field, which is
\r
2172 also used to hold the offset this chunk has within its mmapped
\r
2173 region, which is needed to preserve alignment. Each mmapped
\r
2174 chunk is trailed by the first two fields of a fake next-chunk
\r
2175 for sake of usage checks.
\r
2179 struct malloc_chunk {
\r
2180 size_t prev_foot; /* Size of previous chunk (if free). */
\r
2181 size_t head; /* Size and inuse bits. */
\r
2182 struct malloc_chunk* fd; /* double links -- used only if free. */
\r
2183 struct malloc_chunk* bk;
\r
2186 typedef struct malloc_chunk mchunk;
\r
2187 typedef struct malloc_chunk* mchunkptr;
\r
2188 typedef struct malloc_chunk* sbinptr; /* The type of bins of chunks */
\r
2189 typedef unsigned int bindex_t; /* Described below */
\r
2190 typedef unsigned int binmap_t; /* Described below */
\r
2191 typedef unsigned int flag_t; /* The type of various bit flag sets */
\r
2193 /* ------------------- Chunks sizes and alignments ----------------------- */
\r
2195 #define MCHUNK_SIZE (sizeof(mchunk))
\r
2198 #define CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
\r
2199 #else /* FOOTERS */
\r
2200 #define CHUNK_OVERHEAD (SIZE_T_SIZE)
\r
2201 #endif /* FOOTERS */
\r
2203 /* MMapped chunks need a second word of overhead ... */
\r
2204 #define MMAP_CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
\r
2205 /* ... and additional padding for fake next-chunk at foot */
\r
2206 #define MMAP_FOOT_PAD (FOUR_SIZE_T_SIZES)
\r
2208 /* The smallest size we can malloc is an aligned minimal chunk */
\r
2209 #define MIN_CHUNK_SIZE\
\r
2210 ((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
\r
2212 /* conversion from malloc headers to user pointers, and back */
\r
2213 #define chunk2mem(p) ((void*)((char*)(p) + TWO_SIZE_T_SIZES))
\r
2214 #define mem2chunk(mem) ((mchunkptr)((char*)(mem) - TWO_SIZE_T_SIZES))
\r
2215 /* chunk associated with aligned address A */
\r
2216 #define align_as_chunk(A) (mchunkptr)((A) + align_offset(chunk2mem(A)))
\r
2218 /* Bounds on request (not chunk) sizes. */
\r
2219 #define MAX_REQUEST ((-MIN_CHUNK_SIZE) << 2)
\r
2220 #define MIN_REQUEST (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - SIZE_T_ONE)
\r
2222 /* pad request bytes into a usable size */
\r
2223 #define pad_request(req) \
\r
2224 (((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
\r
2226 /* pad request, checking for minimum (but not maximum) */
\r
2227 #define request2size(req) \
\r
2228 (((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req))
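/*
  A worked example, assuming 8-byte size_t, FOOTERS disabled (so
  CHUNK_OVERHEAD == 8) and the default 8-byte alignment:

    pad_request(100)    // (100 + 8 + 7) & ~7 == 112
    request2size(100)   // 112, since 100 >= MIN_REQUEST
    request2size(1)     // MIN_CHUNK_SIZE == 32, the smallest legal chunk
*/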
\r
2231 /* ------------------ Operations on head and foot fields ----------------- */
\r
2234 The head field of a chunk is or'ed with PINUSE_BIT when previous
\r
2235 adjacent chunk in use, and or'ed with CINUSE_BIT if this chunk is in
\r
2236 use, unless mmapped, in which case both bits are cleared.
\r
2238 FLAG4_BIT is not used by this malloc, but might be useful in extensions.
\r
2241 #define PINUSE_BIT (SIZE_T_ONE)
\r
2242 #define CINUSE_BIT (SIZE_T_TWO)
\r
2243 #define FLAG4_BIT (SIZE_T_FOUR)
\r
2244 #define INUSE_BITS (PINUSE_BIT|CINUSE_BIT)
\r
2245 #define FLAG_BITS (PINUSE_BIT|CINUSE_BIT|FLAG4_BIT)
\r
2247 /* Head value for fenceposts */
\r
2248 #define FENCEPOST_HEAD (INUSE_BITS|SIZE_T_SIZE)
\r
2250 /* extraction of fields from head words */
\r
2251 #define cinuse(p) ((p)->head & CINUSE_BIT)
\r
2252 #define pinuse(p) ((p)->head & PINUSE_BIT)
\r
2253 #define flag4inuse(p) ((p)->head & FLAG4_BIT)
\r
2254 #define is_inuse(p) (((p)->head & INUSE_BITS) != PINUSE_BIT)
\r
2255 #define is_mmapped(p) (((p)->head & INUSE_BITS) == 0)
\r
2257 #define chunksize(p) ((p)->head & ~(FLAG_BITS))
\r
2259 #define clear_pinuse(p) ((p)->head &= ~PINUSE_BIT)
\r
2260 #define set_flag4(p) ((p)->head |= FLAG4_BIT)
\r
2261 #define clear_flag4(p) ((p)->head &= ~FLAG4_BIT)
\r
2263 /* Treat space at ptr +/- offset as a chunk */
\r
2264 #define chunk_plus_offset(p, s) ((mchunkptr)(((char*)(p)) + (s)))
\r
2265 #define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s)))
\r
2267 /* Ptr to next or previous physical malloc_chunk. */
\r
2268 #define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~FLAG_BITS)))
\r
2269 #define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) ))
\r
2271 /* extract next chunk's pinuse bit */
\r
2272 #define next_pinuse(p) ((next_chunk(p)->head) & PINUSE_BIT)
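/*
  A small illustration of the O(1) boundary-tag checks these macros
  enable when deciding whether to consolidate around a freed chunk (a
  sketch only; the real free/realloc paths below do this inline, with
  unlinking and merging):

    mchunkptr q = mem2chunk(mem);     // mem: a pointer previously given to free
    if (!pinuse(q))
      q = prev_chunk(q);              // prev_foot is valid: the previous neighbor is free
    if (!cinuse(next_chunk(q)))
      ;                               // the following neighbor is free as well
*/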
\r
2274 /* Get/set size at footer */
\r
2275 #define get_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_foot)
\r
2276 #define set_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s))
\r
2278 /* Set size, pinuse bit, and foot */
\r
2279 #define set_size_and_pinuse_of_free_chunk(p, s)\
\r
2280 ((p)->head = (s|PINUSE_BIT), set_foot(p, s))
\r
2282 /* Set size, pinuse bit, foot, and clear next pinuse */
\r
2283 #define set_free_with_pinuse(p, s, n)\
\r
2284 (clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s))
\r
2286 /* Get the internal overhead associated with chunk p */
\r
2287 #define overhead_for(p)\
\r
2288 (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD)
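/*
  A worked example of the head encoding: a 64-byte in-use chunk whose
  previous neighbor is also in use has head == 64|CINUSE_BIT|PINUSE_BIT
  == 0x43, so chunksize(p) == 64, cinuse(p) and pinuse(p) are nonzero,
  is_inuse(p) holds, and is_mmapped(p) does not.
*/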
\r
2290 /* Return true if malloced space is not necessarily cleared */
\r
2292 #define calloc_must_clear(p) (!is_mmapped(p))
\r
2293 #else /* MMAP_CLEARS */
\r
2294 #define calloc_must_clear(p) (1)
\r
2295 #endif /* MMAP_CLEARS */
\r
2297 /* ---------------------- Overlaid data structures ----------------------- */
\r
2300 When chunks are not in use, they are treated as nodes of either
\r
2303 "Small" chunks are stored in circular doubly-linked lists, and look
\r
2306 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
\r
2307 | Size of previous chunk |
\r
2308 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
\r
2309 `head:' | Size of chunk, in bytes |P|
\r
2310 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
\r
2311 | Forward pointer to next chunk in list |
\r
2312 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
\r
2313 | Back pointer to previous chunk in list |
\r
2314 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
\r
2315 | Unused space (may be 0 bytes long) .
\r
2318 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
\r
2319 `foot:' | Size of chunk, in bytes |
\r
2320 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
\r
2322 Larger chunks are kept in a form of bitwise digital trees (aka
\r
2323 tries) keyed on chunksizes. Because malloc_tree_chunks are only for
\r
2324 free chunks greater than 256 bytes, their size doesn't impose any
\r
2325 constraints on user chunk sizes. Each node looks like:
\r
2327 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
\r
2328 | Size of previous chunk |
\r
2329 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
\r
2330 `head:' | Size of chunk, in bytes |P|
\r
2331 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
\r
2332 | Forward pointer to next chunk of same size |
\r
2333 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
\r
2334 | Back pointer to previous chunk of same size |
\r
2335 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
\r
2336 | Pointer to left child (child[0]) |
\r
2337 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
\r
2338 | Pointer to right child (child[1]) |
\r
2339 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
\r
2340 | Pointer to parent |
\r
2341 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
\r
2342 | bin index of this chunk |
\r
2343 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
\r
2346 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
\r
2347 `foot:' | Size of chunk, in bytes |
\r
2348 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
\r
2350 Each tree holding treenodes is a tree of unique chunk sizes. Chunks
\r
2351 of the same size are arranged in a circularly-linked list, with only
\r
2352 the oldest chunk (the next to be used, in our FIFO ordering)
\r
2353 actually in the tree. (Tree members are distinguished by a non-null
\r
2354 parent pointer.) If a chunk with the same size as an existing node
\r
2355 is inserted, it is linked off the existing node using pointers that
\r
2356 work in the same way as fd/bk pointers of small chunks.
\r
2358 Each tree contains a power of 2 sized range of chunk sizes (the
\r
2359 smallest is 0x100 <= x < 0x180), which is divided in half at each
\r
2360 tree level, with the chunks in the smaller half of the range (0x100
\r
2361 <= x < 0x140 for the top node) in the left subtree and the larger
\r
2362 half (0x140 <= x < 0x180) in the right subtree. This is, of course,
\r
2363 done by inspecting individual bits.
\r
2365 Using these rules, each node's left subtree contains all smaller
\r
2366 sizes than its right subtree. However, the node at the root of each
\r
2367 subtree has no particular ordering relationship to either. (The
\r
2368 dividing line between the subtree sizes is based on trie relation.)
\r
2369 If we remove the last chunk of a given size from the interior of the
\r
2370 tree, we need to replace it with a leaf node. The tree ordering
\r
2371 rules permit a node to be replaced by any leaf below it.
\r
2373 The smallest chunk in a tree (a common operation in a best-fit
\r
2374 allocator) can be found by walking a path to the leftmost leaf in
\r
2375 the tree. Unlike a usual binary tree, where we follow left child
\r
2376 pointers until we reach a null, here we follow the right child
\r
2377 pointer any time the left one is null, until we reach a leaf with
\r
2378 both child pointers null. The smallest chunk in the tree will be
\r
2379 somewhere along that path.
\r
2381 The worst case number of steps to add, find, or remove a node is
\r
2382 bounded by the number of bits differentiating chunks within
\r
2383 bins. Under current bin calculations, this ranges from 6 up to 21
\r
2384 (for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case
\r
2385 is of course much better.
\r
2388 struct malloc_tree_chunk {
\r
2389 /* The first four fields must be compatible with malloc_chunk */
\r
2392 struct malloc_tree_chunk* fd;
\r
2393 struct malloc_tree_chunk* bk;
\r
2395 struct malloc_tree_chunk* child[2];
\r
2396 struct malloc_tree_chunk* parent;
\r
2400 typedef struct malloc_tree_chunk tchunk;
\r
2401 typedef struct malloc_tree_chunk* tchunkptr;
\r
2402 typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */
\r
2404 /* A little helper macro for trees */
\r
2405 #define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1])
\r
2407 /* ----------------------------- Segments -------------------------------- */
\r
2410 Each malloc space may include non-contiguous segments, held in a
\r
2411 list headed by an embedded malloc_segment record representing the
\r
2412 top-most space. Segments also include flags holding properties of
\r
2413 the space. Large chunks that are directly allocated by mmap are not
\r
2414 included in this list. They are instead independently created and
\r
2415 destroyed without otherwise keeping track of them.
\r
2417 Segment management mainly comes into play for spaces allocated by
\r
2418 MMAP. Any call to MMAP might or might not return memory that is
\r
2419 adjacent to an existing segment. MORECORE normally contiguously
\r
2420 extends the current space, so this space is almost always adjacent,
\r
2421 which is simpler and faster to deal with. (This is why MORECORE is
\r
2422 used preferentially to MMAP when both are available -- see
\r
2423 sys_alloc.) When allocating using MMAP, we don't use any of the
\r
2424 hinting mechanisms (inconsistently) supported in various
\r
2425 implementations of unix mmap, or distinguish reserving from
\r
2426 committing memory. Instead, we just ask for space, and exploit
\r
2427 contiguity when we get it. It is probably possible to do
\r
2428 better than this on some systems, but no general scheme seems
\r
2429 to be significantly better.
\r
2431 Management entails a simpler variant of the consolidation scheme
\r
2432 used for chunks to reduce fragmentation -- new adjacent memory is
\r
2433 normally prepended or appended to an existing segment. However,
\r
2434 there are limitations compared to chunk consolidation that mostly
\r
2435 reflect the fact that segment processing is relatively infrequent
\r
2436 (occurring only when getting memory from system) and that we
\r
2437 don't expect to have huge numbers of segments:
\r
2439 * Segments are not indexed, so traversal requires linear scans. (It
\r
2440 would be possible to index these, but is not worth the extra
\r
2441 overhead and complexity for most programs on most platforms.)
\r
2442 * New segments are only appended to old ones when holding top-most
\r
2443 memory; if they cannot be prepended to others, they are held in
\r
2444 different segments.
\r
2446 Except for the top-most segment of an mstate, each segment record
\r
2447 is kept at the tail of its segment. Segments are added by pushing
\r
2448 segment records onto the list headed by &mstate.seg for the
\r
2449 containing mstate.
\r
2451 Segment flags control allocation/merge/deallocation policies:
\r
2452 * If EXTERN_BIT set, then we did not allocate this segment,
\r
2453 and so should not try to deallocate or merge with others.
\r
2454 (This currently holds only for the initial segment passed
\r
2455 into create_mspace_with_base.)
\r
2456 * If USE_MMAP_BIT set, the segment may be merged with
\r
2457 other surrounding mmapped segments and trimmed/de-allocated using munmap.
\r
2459 * If neither bit is set, then the segment was obtained using
\r
2460 MORECORE so can be merged with surrounding MORECORE'd segments
\r
2461 and deallocated/trimmed using MORECORE with negative arguments.
\r
2464 struct malloc_segment {
\r
2465 char* base; /* base address */
\r
2466 size_t size; /* allocated size */
\r
2467 struct malloc_segment* next; /* ptr to next segment */
\r
2468 flag_t sflags; /* mmap and extern flag */
\r
2471 #define is_mmapped_segment(S) ((S)->sflags & USE_MMAP_BIT)
\r
2472 #define is_extern_segment(S) ((S)->sflags & EXTERN_BIT)
\r
2474 typedef struct malloc_segment msegment;
\r
2475 typedef struct malloc_segment* msegmentptr;
\r
2477 /* ---------------------------- malloc_state ----------------------------- */
\r
2480 A malloc_state holds all of the bookkeeping for a space.
\r
2481 The main fields are:
\r
2484 The topmost chunk of the currently active segment. Its size is
\r
2485 cached in topsize. The actual size of topmost space is
\r
2486 topsize+TOP_FOOT_SIZE, which includes space reserved for adding
\r
2487 fenceposts and segment records if necessary when getting more
\r
2488 space from the system. The size at which to autotrim top is
\r
2489 cached from mparams in trim_check, except that it is disabled if
\r
2490 an autotrim fails.
\r
2492 Designated victim (dv)
\r
2493 This is the preferred chunk for servicing small requests that
\r
2494 don't have exact fits. It is normally the chunk split off most
\r
2495 recently to service another small request. Its size is cached in
\r
2496 dvsize. The link fields of this chunk are not maintained since it
\r
2497 is not kept in a bin.
\r
2500 An array of bin headers for free chunks. These bins hold chunks
\r
2501 with sizes less than MIN_LARGE_SIZE bytes. Each bin contains
\r
2502 chunks of all the same size, spaced 8 bytes apart. To simplify
\r
2503 use in double-linked lists, each bin header acts as a malloc_chunk
\r
2504 pointing to the real first node, if it exists (else pointing to
\r
2505 itself). This avoids special-casing for headers. But to avoid
\r
2506 waste, we allocate only the fd/bk pointers of bins, and then use
\r
2507 repositioning tricks to treat these as the fields of a chunk.
\r
2510 Treebins are pointers to the roots of trees holding a range of
\r
2511 sizes. There are 2 equally spaced treebins for each power of two
\r
2512 from TREEBIN_SHIFT to TREEBIN_SHIFT+16. The last bin holds anything
\r
2516 There is one bit map for small bins ("smallmap") and one for
\r
2517 treebins ("treemap). Each bin sets its bit when non-empty, and
\r
2518 clears the bit when empty. Bit operations are then used to avoid
\r
2519 bin-by-bin searching -- nearly all "search" is done without ever
\r
2520 looking at bins that won't be selected. The bit maps
\r
2521 conservatively use 32 bits per map word, even on a 64-bit system.
\r
2522 For a good description of some of the bit-based techniques used
\r
2523 here, see Henry S. Warren Jr's book "Hacker's Delight" (and
\r
2524 supplement at http://hackersdelight.org/). Many of these are
\r
2525 intended to reduce the branchiness of paths through malloc etc, as
\r
2526 well as to reduce the number of memory locations read or written.
\r
2529 A list of segments headed by an embedded malloc_segment record
\r
2530 representing the initial space.
\r
2532 Address check support
\r
2533 The least_addr field is the least address ever obtained from
\r
2534 MORECORE or MMAP. Attempted frees and reallocs of any address less
\r
2535 than this are trapped (unless INSECURE is defined).
\r
2538 A cross-check field that should always hold the same value as mparams.magic.
\r
2540 Max allowed footprint
\r
2541 The maximum allowed bytes to allocate from system (zero means no limit)
\r
2544 Bits recording whether to use MMAP, locks, or contiguous MORECORE
\r
2547 Each space keeps track of current and maximum system memory
\r
2548 obtained via MORECORE or MMAP.
\r
2551 Fields holding the amount of unused topmost memory that should trigger
\r
2552 trimming, and a counter to force periodic scanning to release unused
\r
2553 non-topmost segments.
\r
2556 If USE_LOCKS is defined, the "mutex" lock is acquired and released
\r
2557 around every public call using this mspace.
\r
2560 A void* pointer and a size_t field that can be used to help implement
\r
2561 extensions to this malloc.
\r
2564 /* Bin types, widths and sizes */
\r
2565 #define NSMALLBINS (32U)
\r
2566 #define NTREEBINS (32U)
\r
2567 #define SMALLBIN_SHIFT (3U)
\r
2568 #define SMALLBIN_WIDTH (SIZE_T_ONE << SMALLBIN_SHIFT)
\r
2569 #define TREEBIN_SHIFT (8U)
\r
2570 #define MIN_LARGE_SIZE (SIZE_T_ONE << TREEBIN_SHIFT)
\r
2571 #define MAX_SMALL_SIZE (MIN_LARGE_SIZE - SIZE_T_ONE)
\r
2572 #define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD)
\r
2574 struct malloc_state {
\r
2575 binmap_t smallmap;
\r
2582 size_t trim_check;
\r
2583 size_t release_checks;
\r
2585 mchunkptr smallbins[(NSMALLBINS+1)*2];
\r
2586 tbinptr treebins[NTREEBINS];
\r
2588 size_t max_footprint;
\r
2589 size_t footprint_limit; /* zero means no limit */
\r
2592 MLOCK_T mutex; /* locate lock among fields that rarely change */
\r
2593 #endif /* USE_LOCKS */
\r
2595 void* extp; /* Unused but available for extensions */
\r
2599 typedef struct malloc_state* mstate;
\r
2601 /* ------------- Global malloc_state and malloc_params ------------------- */
\r
2604 malloc_params holds global properties, including those that can be
\r
2605 dynamically set using mallopt. There is a single instance, mparams,
\r
2606 initialized in init_mparams. Note that the non-zeroness of "magic"
\r
2607 also serves as an initialization flag.
\r
2610 struct malloc_params {
\r
2613 size_t granularity;
\r
2614 size_t mmap_threshold;
\r
2615 size_t trim_threshold;
\r
2616 flag_t default_mflags;
\r
2619 static struct malloc_params mparams;
\r
2621 /* Ensure mparams initialized */
\r
2622 #define ensure_initialization() (void)(mparams.magic != 0 || init_mparams())
\r
2626 /* The global malloc_state used for all non-"mspace" calls */
\r
2627 static struct malloc_state _gm_;
\r
2628 #define gm (&_gm_)
\r
2629 #define is_global(M) ((M) == &_gm_)
\r
2631 #endif /* !ONLY_MSPACES */
\r
2633 #define is_initialized(M) ((M)->top != 0)
\r
2635 /* -------------------------- system alloc setup ------------------------- */
\r
2637 /* Operations on mflags */
\r
2639 #define use_lock(M) ((M)->mflags & USE_LOCK_BIT)
\r
2640 #define enable_lock(M) ((M)->mflags |= USE_LOCK_BIT)
\r
2642 #define disable_lock(M) ((M)->mflags &= ~USE_LOCK_BIT)
\r
2644 #define disable_lock(M)
\r
2647 #define use_mmap(M) ((M)->mflags & USE_MMAP_BIT)
\r
2648 #define enable_mmap(M) ((M)->mflags |= USE_MMAP_BIT)
\r
2650 #define disable_mmap(M) ((M)->mflags &= ~USE_MMAP_BIT)
\r
2652 #define disable_mmap(M)
\r
2655 #define use_noncontiguous(M) ((M)->mflags & USE_NONCONTIGUOUS_BIT)
\r
2656 #define disable_contiguous(M) ((M)->mflags |= USE_NONCONTIGUOUS_BIT)
\r
2658 #define set_lock(M,L)\
\r
2659 ((M)->mflags = (L)?\
\r
2660 ((M)->mflags | USE_LOCK_BIT) :\
\r
2661 ((M)->mflags & ~USE_LOCK_BIT))
\r
2663 /* page-align a size */
\r
2664 #define page_align(S)\
\r
2665 (((S) + (mparams.page_size - SIZE_T_ONE)) & ~(mparams.page_size - SIZE_T_ONE))
\r
2667 /* granularity-align a size */
\r
2668 #define granularity_align(S)\
\r
2669 (((S) + (mparams.granularity - SIZE_T_ONE))\
\r
2670 & ~(mparams.granularity - SIZE_T_ONE))
\r
2673 /* For mmap, use granularity alignment on windows, else page-align */
\r
2675 #define mmap_align(S) granularity_align(S)
\r
2677 #define mmap_align(S) page_align(S)
\r
2680 /* For sys_alloc, enough padding to ensure we can malloc the request on success */
\r
2681 #define SYS_ALLOC_PADDING (TOP_FOOT_SIZE + MALLOC_ALIGNMENT)
\r
2683 #define is_page_aligned(S)\
\r
2684 (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0)
\r
2685 #define is_granularity_aligned(S)\
\r
2686 (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0)
\r
2688 /* True if segment S holds address A */
\r
2689 #define segment_holds(S, A)\
\r
2690 ((char*)(A) >= S->base && (char*)(A) < S->base + S->size)
\r
2692 /* Return segment holding given address */
\r
2693 static msegmentptr segment_holding(mstate m, char* addr) {
\r
2694 msegmentptr sp = &m->seg;
\r
2696 if (addr >= sp->base && addr < sp->base + sp->size)
\r
2698 if ((sp = sp->next) == 0)
\r
2703 /* Return true if segment contains a segment link */
\r
2704 static int has_segment_link(mstate m, msegmentptr ss) {
\r
2705 msegmentptr sp = &m->seg;
\r
2707 if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size)
\r
2709 if ((sp = sp->next) == 0)
\r
2714 #ifndef MORECORE_CANNOT_TRIM
\r
2715 #define should_trim(M,s) ((s) > (M)->trim_check)
\r
2716 #else /* MORECORE_CANNOT_TRIM */
\r
2717 #define should_trim(M,s) (0)
\r
2718 #endif /* MORECORE_CANNOT_TRIM */
\r
2721 TOP_FOOT_SIZE is padding at the end of a segment, including space
\r
2722 that may be needed to place segment records and fenceposts when new
\r
2723 noncontiguous segments are added.
\r
2725 #define TOP_FOOT_SIZE\
\r
2726 (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE)
\r
2729 /* ------------------------------- Hooks -------------------------------- */
\r
2732 PREACTION should be defined to return 0 on success, and nonzero on
\r
2733 failure. If you are not using locking, you can redefine these to do
\r
2734 anything you like.
\r
2738 #define PREACTION(M) ((use_lock(M))? ACQUIRE_LOCK(&(M)->mutex) : 0)
\r
2739 #define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); }
\r
2740 #else /* USE_LOCKS */
\r
2743 #define PREACTION(M) (0)
\r
2744 #endif /* PREACTION */
\r
2746 #ifndef POSTACTION
\r
2747 #define POSTACTION(M)
\r
2748 #endif /* POSTACTION */
\r
2750 #endif /* USE_LOCKS */
\r
2753 CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses.
\r
2754 USAGE_ERROR_ACTION is triggered on detected bad frees and
\r
2755 reallocs. The argument p is an address that might have triggered the
\r
2756 fault. It is ignored by the two predefined actions, but might be
\r
2757 useful in custom actions that try to help diagnose errors.
\r
2760 #if PROCEED_ON_ERROR
\r
2762 /* A count of the number of corruption errors causing resets */
\r
2763 int malloc_corruption_error_count;
\r
2765 /* default corruption action */
\r
2766 static void reset_on_error(mstate m);
\r
2768 #define CORRUPTION_ERROR_ACTION(m) reset_on_error(m)
\r
2769 #define USAGE_ERROR_ACTION(m, p)
\r
2771 #else /* PROCEED_ON_ERROR */
\r
2773 #ifndef CORRUPTION_ERROR_ACTION
\r
2774 #define CORRUPTION_ERROR_ACTION(m) ABORT
\r
2775 #endif /* CORRUPTION_ERROR_ACTION */
\r
2777 #ifndef USAGE_ERROR_ACTION
\r
2778 #define USAGE_ERROR_ACTION(m,p) ABORT
\r
2779 #endif /* USAGE_ERROR_ACTION */
\r
2781 #endif /* PROCEED_ON_ERROR */
\r
2784 /* -------------------------- Debugging setup ---------------------------- */
\r
2788 #define check_free_chunk(M,P)
\r
2789 #define check_inuse_chunk(M,P)
\r
2790 #define check_malloced_chunk(M,P,N)
\r
2791 #define check_mmapped_chunk(M,P)
\r
2792 #define check_malloc_state(M)
\r
2793 #define check_top_chunk(M,P)
\r
2796 #define check_free_chunk(M,P) do_check_free_chunk(M,P)
\r
2797 #define check_inuse_chunk(M,P) do_check_inuse_chunk(M,P)
\r
2798 #define check_top_chunk(M,P) do_check_top_chunk(M,P)
\r
2799 #define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N)
\r
2800 #define check_mmapped_chunk(M,P) do_check_mmapped_chunk(M,P)
\r
2801 #define check_malloc_state(M) do_check_malloc_state(M)
\r
2803 static void do_check_any_chunk(mstate m, mchunkptr p);
\r
2804 static void do_check_top_chunk(mstate m, mchunkptr p);
\r
2805 static void do_check_mmapped_chunk(mstate m, mchunkptr p);
\r
2806 static void do_check_inuse_chunk(mstate m, mchunkptr p);
\r
2807 static void do_check_free_chunk(mstate m, mchunkptr p);
\r
2808 static void do_check_malloced_chunk(mstate m, void* mem, size_t s);
\r
2809 static void do_check_tree(mstate m, tchunkptr t);
\r
2810 static void do_check_treebin(mstate m, bindex_t i);
\r
2811 static void do_check_smallbin(mstate m, bindex_t i);
\r
2812 static void do_check_malloc_state(mstate m);
\r
2813 static int bin_find(mstate m, mchunkptr x);
\r
2814 static size_t traverse_and_check(mstate m);
\r
2815 #endif /* DEBUG */
\r
2817 /* ---------------------------- Indexing Bins ---------------------------- */
\r
2819 #define is_small(s) (((s) >> SMALLBIN_SHIFT) < NSMALLBINS)
\r
2820 #define small_index(s) (bindex_t)((s) >> SMALLBIN_SHIFT)
\r
2821 #define small_index2size(i) ((i) << SMALLBIN_SHIFT)
\r
2822 #define MIN_SMALL_INDEX (small_index(MIN_CHUNK_SIZE))
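/*
  Worked example (assuming the default SMALLBIN_SHIFT of 3, i.e. bins
  spaced 8 bytes apart): is_small(s) holds for any s < 256, a 40-byte
  chunk maps to small_index(40) == 5, and small_index2size(5) == 40.
*/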
\r
2824 /* addressing by index. See above about smallbin repositioning */
\r
2825 #define smallbin_at(M, i) ((sbinptr)((char*)&((M)->smallbins[(i)<<1])))
\r
2826 #define treebin_at(M,i) (&((M)->treebins[i]))
\r
2828 /* assign tree index for size S to variable I. Use x86 asm if possible */
\r
2829 #if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
\r
2830 #define compute_tree_index(S, I)\
\r
2832 unsigned int X = S >> TREEBIN_SHIFT;\
\r
2835 else if (X > 0xFFFF)\
\r
2838 unsigned int K = (unsigned) sizeof(X)*__CHAR_BIT__ - 1 - (unsigned) __builtin_clz(X); \
\r
2839 I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
\r
2843 #elif defined (__INTEL_COMPILER)
\r
2844 #define compute_tree_index(S, I)\
\r
2846 size_t X = S >> TREEBIN_SHIFT;\
\r
2849 else if (X > 0xFFFF)\
\r
2852 unsigned int K = _bit_scan_reverse (X); \
\r
2853 I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
\r
2857 #elif defined(_MSC_VER) && _MSC_VER>=1300
\r
2858 #define compute_tree_index(S, I)\
\r
2860 size_t X = S >> TREEBIN_SHIFT;\
\r
2863 else if (X > 0xFFFF)\
\r
2867 _BitScanReverse((DWORD *) &K, (DWORD) X);\
\r
2868 I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
\r
2873 #define compute_tree_index(S, I)\
\r
2875 size_t X = S >> TREEBIN_SHIFT;\
\r
2878 else if (X > 0xFFFF)\
\r
2881 unsigned int Y = (unsigned int)X;\
\r
2882 unsigned int N = ((Y - 0x100) >> 16) & 8;\
\r
2883 unsigned int K = (((Y <<= N) - 0x1000) >> 16) & 4;\
\r
2885 N += K = (((Y <<= K) - 0x4000) >> 16) & 2;\
\r
2886 K = 14 - N + ((Y <<= K) >> 15);\
\r
2887 I = (K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1));\
\r
2892 /* Bit representing maximum resolved size in a treebin at i */
\r
2893 #define bit_for_tree_index(i) \
\r
2894 (i == NTREEBINS-1)? (SIZE_T_BITSIZE-1) : (((i) >> 1) + TREEBIN_SHIFT - 2)
\r
2896 /* Shift placing maximum resolved bit in a treebin at i as sign bit */
\r
2897 #define leftshift_for_tree_index(i) \
\r
2898 ((i == NTREEBINS-1)? 0 : \
\r
2899 ((SIZE_T_BITSIZE-SIZE_T_ONE) - (((i) >> 1) + TREEBIN_SHIFT - 2)))
\r
2901 /* The size of the smallest chunk held in bin with index i */
\r
2902 #define minsize_for_tree_index(i) \
\r
2903 ((SIZE_T_ONE << (((i) >> 1) + TREEBIN_SHIFT)) | \
\r
2904 (((size_t)((i) & SIZE_T_ONE)) << (((i) >> 1) + TREEBIN_SHIFT - 1)))
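/*
  Worked example (assuming the default TREEBIN_SHIFT of 8): for a chunk
  of size S == 768, X = S >> 8 == 3, whose highest set bit is bit 1, so
  K == 1 and I == (K << 1) + ((S >> (K + 7)) & 1) == 2 + 1 == 3.
  Consistently, minsize_for_tree_index(3) == 768 and
  minsize_for_tree_index(4) == 1024, so sizes 768..1023 share treebin 3.
*/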
\r
2907 /* ------------------------ Operations on bin maps ----------------------- */
\r
2909 /* bit corresponding to given index */
\r
2910 #define idx2bit(i) ((binmap_t)(1) << (i))
\r
2912 /* Mark/Clear bits with given index */
\r
2913 #define mark_smallmap(M,i) ((M)->smallmap |= idx2bit(i))
\r
2914 #define clear_smallmap(M,i) ((M)->smallmap &= ~idx2bit(i))
\r
2915 #define smallmap_is_marked(M,i) ((M)->smallmap & idx2bit(i))
\r
2917 #define mark_treemap(M,i) ((M)->treemap |= idx2bit(i))
\r
2918 #define clear_treemap(M,i) ((M)->treemap &= ~idx2bit(i))
\r
2919 #define treemap_is_marked(M,i) ((M)->treemap & idx2bit(i))
\r
2921 /* isolate the least set bit of a bitmap */
\r
2922 #define least_bit(x) ((x) & -(x))
\r
2924 /* mask with all bits to left of least bit of x on */
\r
2925 #define left_bits(x) ((x<<1) | -(x<<1))
\r
2927 /* mask with all bits to left of or equal to least bit of x on */
\r
2928 #define same_or_left_bits(x) ((x) | -(x))
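/*
  Worked example: if smallmap == 0x14 (bins 2 and 4 nonempty), then
  least_bit(0x14) == 0x04 isolates the lowest marked bin,
  compute_bit2idx (defined just below) turns that bit back into index 2,
  and left_bits(idx2bit(2)) switches on every bit above bit 2, which is
  how the allocator locates the next larger nonempty bin.
*/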
\r
2930 /* index corresponding to given bit. Use x86 asm if possible */
\r
2932 #if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
\r
2933 #define compute_bit2idx(X, I)\
\r
2936 J = __builtin_ctz(X); \
\r
2940 #elif defined (__INTEL_COMPILER)
\r
2941 #define compute_bit2idx(X, I)\
\r
2944 J = _bit_scan_forward (X); \
\r
2948 #elif defined(_MSC_VER) && _MSC_VER>=1300
\r
2949 #define compute_bit2idx(X, I)\
\r
2952 _BitScanForward((DWORD *) &J, X);\
\r
2956 #elif USE_BUILTIN_FFS
\r
2957 #define compute_bit2idx(X, I) I = ffs(X)-1
\r
2960 #define compute_bit2idx(X, I)\
\r
2962 unsigned int Y = X - 1;\
\r
2963 unsigned int K = Y >> (16-4) & 16;\
\r
2964 unsigned int N = K; Y >>= K;\
\r
2965 N += K = Y >> (8-3) & 8; Y >>= K;\
\r
2966 N += K = Y >> (4-2) & 4; Y >>= K;\
\r
2967 N += K = Y >> (2-1) & 2; Y >>= K;\
\r
2968 N += K = Y >> (1-0) & 1; Y >>= K;\
\r
2969 I = (bindex_t)(N + Y);\
\r
2974 /* ----------------------- Runtime Check Support ------------------------- */
\r
2977 For security, the main invariant is that malloc/free/etc never
\r
2978 writes to a static address other than malloc_state, unless static
\r
2979 malloc_state itself has been corrupted, which cannot occur via
\r
2980 malloc (because of these checks). In essence this means that we
\r
2981 believe all pointers, sizes, maps etc held in malloc_state, but
\r
2982 check all of those linked or offsetted from other embedded data
\r
2983 structures. These checks are interspersed with main code in a way
\r
2984 that tends to minimize their run-time cost.
\r
2986 When FOOTERS is defined, in addition to range checking, we also
\r
2987 verify footer fields of inuse chunks, which can be used to guarantee
\r
2988 that the mstate controlling malloc/free is intact. This is a
\r
2989 streamlined version of the approach described by William Robertson
\r
2990 et al in "Run-time Detection of Heap-based Overflows" LISA'03
\r
2991 http://www.usenix.org/events/lisa03/tech/robertson.html The footer
\r
2992 of an inuse chunk holds the xor of its mstate and a random seed,
\r
2993 which is checked upon calls to free() and realloc(). This is
\r
2994 (probabilistically) unguessable from outside the program, but can be
\r
2995 computed by any code successfully malloc'ing any chunk, so does not
\r
2996 itself provide protection against code that has already broken
\r
2997 security through some other means. Unlike Robertson et al, we
\r
2998 always dynamically check addresses of all offset chunks (previous,
\r
2999 next, etc). This turns out to be cheaper than relying on hashes.
\r
3003 /* Check if address a is at least as high as any from MORECORE or MMAP */
\r
3004 #define ok_address(M, a) ((char*)(a) >= (M)->least_addr)
\r
3005 /* Check if address of next chunk n is higher than base chunk p */
\r
3006 #define ok_next(p, n) ((char*)(p) < (char*)(n))
\r
3007 /* Check if p has inuse status */
\r
3008 #define ok_inuse(p) is_inuse(p)
\r
3009 /* Check if p has its pinuse bit on */
\r
3010 #define ok_pinuse(p) pinuse(p)
\r
3012 #else /* !INSECURE */
\r
3013 #define ok_address(M, a) (1)
\r
3014 #define ok_next(b, n) (1)
\r
3015 #define ok_inuse(p) (1)
\r
3016 #define ok_pinuse(p) (1)
\r
3017 #endif /* !INSECURE */
\r
3019 #if (FOOTERS && !INSECURE)
\r
3020 /* Check if (alleged) mstate m has expected magic field */
\r
3021 #define ok_magic(M) ((M)->magic == mparams.magic)
\r
3022 #else /* (FOOTERS && !INSECURE) */
\r
3023 #define ok_magic(M) (1)
\r
3024 #endif /* (FOOTERS && !INSECURE) */
\r
3026 /* In gcc, use __builtin_expect to minimize impact of checks */
\r
3028 #if defined(__GNUC__) && __GNUC__ >= 3
\r
3029 #define RTCHECK(e) __builtin_expect(e, 1)
\r
3031 #define RTCHECK(e) (e)
\r
3033 #else /* !INSECURE */
\r
3034 #define RTCHECK(e) (1)
\r
3035 #endif /* !INSECURE */
\r
3037 /* macros to set up inuse chunks with or without footers */
\r
3041 #define mark_inuse_foot(M,p,s)
\r
3043 /* Macros for setting head/foot of non-mmapped chunks */
\r
3045 /* Set cinuse bit and pinuse bit of next chunk */
\r
3046 #define set_inuse(M,p,s)\
\r
3047 ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
\r
3048 ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
\r
3050 /* Set cinuse and pinuse of this chunk and pinuse of next chunk */
\r
3051 #define set_inuse_and_pinuse(M,p,s)\
\r
3052 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
\r
3053 ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
\r
3055 /* Set size, cinuse and pinuse bit of this chunk */
\r
3056 #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
\r
3057 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT))
\r
3059 #else /* FOOTERS */
\r
3061 /* Set foot of inuse chunk to be xor of mstate and seed */
\r
3062 #define mark_inuse_foot(M,p,s)\
\r
3063 (((mchunkptr)((char*)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic))
\r
3065 #define get_mstate_for(p)\
\r
3066 ((mstate)(((mchunkptr)((char*)(p) +\
\r
3067 (chunksize(p))))->prev_foot ^ mparams.magic))
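/*
  Illustrative use (this mirrors the check performed by free and realloc
  when FOOTERS is on): recover the owning mstate from the footer and
  reject the chunk if its magic does not match:
    mstate fm = get_mstate_for(p);
    if (!ok_magic(fm))
      USAGE_ERROR_ACTION(fm, p);
*/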
\r
3069 #define set_inuse(M,p,s)\
\r
3070 ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
\r
3071 (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT), \
\r
3072 mark_inuse_foot(M,p,s))
\r
3074 #define set_inuse_and_pinuse(M,p,s)\
\r
3075 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
\r
3076 (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT),\
\r
3077 mark_inuse_foot(M,p,s))
\r
3079 #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
\r
3080 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
\r
3081 mark_inuse_foot(M, p, s))
\r
3083 #endif /* !FOOTERS */
\r
3085 /* ---------------------------- setting mparams -------------------------- */
\r
3087 /* Initialize mparams */
\r
3088 static int init_mparams(void) {
\r
3089 #ifdef NEED_GLOBAL_LOCK_INIT
\r
3090 call_once(&malloc_global_mutex_init_once, init_malloc_global_mutex);
\r
3093 ACQUIRE_MALLOC_GLOBAL_LOCK();
\r
3094 if (mparams.magic == 0) {
\r
3100 psize = malloc_getpagesize;
\r
3101 gsize = ((DEFAULT_GRANULARITY != 0)? DEFAULT_GRANULARITY : psize);
\r
3104 SYSTEM_INFO system_info;
\r
3105 GetSystemInfo(&system_info);
\r
3106 psize = system_info.dwPageSize;
\r
3107 gsize = ((DEFAULT_GRANULARITY != 0)?
\r
3108 DEFAULT_GRANULARITY : system_info.dwAllocationGranularity);
\r
3110 #endif /* WIN32 */
\r
3112 /* Sanity-check configuration:
\r
3113 size_t must be unsigned and as wide as pointer type.
\r
3114 ints must be at least 4 bytes.
\r
3115 alignment must be at least 8.
\r
3116 Alignment, min chunk size, and page size must all be powers of 2.
\r
3118 if ((sizeof(size_t) != sizeof(char*)) ||
\r
3119 (MAX_SIZE_T < MIN_CHUNK_SIZE) ||
\r
3120 (sizeof(int) < 4) ||
\r
3121 (MALLOC_ALIGNMENT < (size_t)8U) ||
\r
3122 ((MALLOC_ALIGNMENT & (MALLOC_ALIGNMENT-SIZE_T_ONE)) != 0) ||
\r
3123 ((MCHUNK_SIZE & (MCHUNK_SIZE-SIZE_T_ONE)) != 0) ||
\r
3124 ((gsize & (gsize-SIZE_T_ONE)) != 0) ||
\r
3125 ((psize & (psize-SIZE_T_ONE)) != 0))
\r
3128 mparams.granularity = gsize;
\r
3129 mparams.page_size = psize;
\r
3130 mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD;
\r
3131 mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD;
\r
3132 #if MORECORE_CONTIGUOUS
\r
3133 mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT;
\r
3134 #else /* MORECORE_CONTIGUOUS */
\r
3135 mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT;
\r
3136 #endif /* MORECORE_CONTIGUOUS */
\r
3139 /* Set up lock for main malloc area */
\r
3140 gm->mflags = mparams.default_mflags;
\r
3141 (void)INITIAL_LOCK(&gm->mutex);
\r
3145 #if USE_DEV_RANDOM
\r
3147 unsigned char buf[sizeof(size_t)];
\r
3148 /* Try to use /dev/urandom, else fall back on using time */
\r
3149 if ((fd = open("/dev/urandom", O_RDONLY)) >= 0 &&
\r
3150 read(fd, buf, sizeof(buf)) == sizeof(buf)) {
\r
3151 magic = *((size_t *) buf);
\r
3155 #endif /* USE_DEV_RANDOM */
\r
3157 magic = (size_t)(GetTickCount() ^ (size_t)0x55555555U);
\r
3158 #elif defined(LACKS_TIME_H)
\r
3159 magic = (size_t)&magic ^ (size_t)0x55555555U;
\r
3161 magic = (size_t)(time(0) ^ (size_t)0x55555555U);
\r
3163 magic |= (size_t)8U; /* ensure nonzero */
\r
3164 magic &= ~(size_t)7U; /* improve chances of fault for bad values */
\r
3165 /* Until memory modes commonly available, use volatile-write */
\r
3166 (*(volatile size_t *)(&(mparams.magic))) = magic;
\r
3170 RELEASE_MALLOC_GLOBAL_LOCK();
\r
3174 /* support for mallopt */
\r
3175 static int change_mparam(int param_number, int value) {
\r
3177 ensure_initialization();
\r
3178 val = (value == -1)? MAX_SIZE_T : (size_t)value;
\r
3179 switch(param_number) {
\r
3180 case M_TRIM_THRESHOLD:
\r
3181 mparams.trim_threshold = val;
\r
3183 case M_GRANULARITY:
\r
3184 if (val >= mparams.page_size && ((val & (val-1)) == 0)) {
\r
3185 mparams.granularity = val;
\r
3190 case M_MMAP_THRESHOLD:
\r
3191 mparams.mmap_threshold = val;
\r
3199 /* ------------------------- Debugging Support --------------------------- */
\r
3201 /* Check properties of any chunk, whether free, inuse, mmapped etc */
\r
3202 static void do_check_any_chunk(mstate m, mchunkptr p) {
\r
3203 assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
\r
3204 assert(ok_address(m, p));
\r
3207 /* Check properties of top chunk */
\r
3208 static void do_check_top_chunk(mstate m, mchunkptr p) {
\r
3209 msegmentptr sp = segment_holding(m, (char*)p);
\r
3210 size_t sz = p->head & ~INUSE_BITS; /* third-lowest bit can be set! */
\r
3212 assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
\r
3213 assert(ok_address(m, p));
\r
3214 assert(sz == m->topsize);
\r
3216 assert(sz == ((sp->base + sp->size) - (char*)p) - TOP_FOOT_SIZE);
\r
3217 assert(pinuse(p));
\r
3218 assert(!pinuse(chunk_plus_offset(p, sz)));
\r
3221 /* Check properties of (inuse) mmapped chunks */
\r
3222 static void do_check_mmapped_chunk(mstate m, mchunkptr p) {
\r
3223 size_t sz = chunksize(p);
\r
3224 size_t len = (sz + (p->prev_foot) + MMAP_FOOT_PAD);
\r
3225 assert(is_mmapped(p));
\r
3226 assert(use_mmap(m));
\r
3227 assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
\r
3228 assert(ok_address(m, p));
\r
3229 assert(!is_small(sz));
\r
3230 assert((len & (mparams.page_size-SIZE_T_ONE)) == 0);
\r
3231 assert(chunk_plus_offset(p, sz)->head == FENCEPOST_HEAD);
\r
3232 assert(chunk_plus_offset(p, sz+SIZE_T_SIZE)->head == 0);
\r
3235 /* Check properties of inuse chunks */
\r
3236 static void do_check_inuse_chunk(mstate m, mchunkptr p) {
\r
3237 do_check_any_chunk(m, p);
\r
3238 assert(is_inuse(p));
\r
3239 assert(next_pinuse(p));
\r
3240 /* If not pinuse and not mmapped, previous chunk has OK offset */
\r
3241 assert(is_mmapped(p) || pinuse(p) || next_chunk(prev_chunk(p)) == p);
\r
3242 if (is_mmapped(p))
\r
3243 do_check_mmapped_chunk(m, p);
\r
3246 /* Check properties of free chunks */
\r
3247 static void do_check_free_chunk(mstate m, mchunkptr p) {
\r
3248 size_t sz = chunksize(p);
\r
3249 mchunkptr next = chunk_plus_offset(p, sz);
\r
3250 do_check_any_chunk(m, p);
\r
3251 assert(!is_inuse(p));
\r
3252 assert(!next_pinuse(p));
\r
3253 assert (!is_mmapped(p));
\r
3254 if (p != m->dv && p != m->top) {
\r
3255 if (sz >= MIN_CHUNK_SIZE) {
\r
3256 assert((sz & CHUNK_ALIGN_MASK) == 0);
\r
3257 assert(is_aligned(chunk2mem(p)));
\r
3258 assert(next->prev_foot == sz);
\r
3259 assert(pinuse(p));
\r
3260 assert (next == m->top || is_inuse(next));
\r
3261 assert(p->fd->bk == p);
\r
3262 assert(p->bk->fd == p);
\r
3264 else /* markers are always of size SIZE_T_SIZE */
\r
3265 assert(sz == SIZE_T_SIZE);
\r
3269 /* Check properties of malloced chunks at the point they are malloced */
\r
3270 static void do_check_malloced_chunk(mstate m, void* mem, size_t s) {
\r
3272 mchunkptr p = mem2chunk(mem);
\r
3273 size_t sz = p->head & ~INUSE_BITS;
\r
3274 do_check_inuse_chunk(m, p);
\r
3275 assert((sz & CHUNK_ALIGN_MASK) == 0);
\r
3276 assert(sz >= MIN_CHUNK_SIZE);
\r
3278 /* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */
\r
3279 assert(is_mmapped(p) || sz < (s + MIN_CHUNK_SIZE));
\r
3283 /* Check a tree and its subtrees. */
\r
3284 static void do_check_tree(mstate m, tchunkptr t) {
\r
3285 tchunkptr head = 0;
\r
3287 bindex_t tindex = t->index;
\r
3288 size_t tsize = chunksize(t);
\r
3290 compute_tree_index(tsize, idx);
\r
3291 assert(tindex == idx);
\r
3292 assert(tsize >= MIN_LARGE_SIZE);
\r
3293 assert(tsize >= minsize_for_tree_index(idx));
\r
3294 assert((idx == NTREEBINS-1) || (tsize < minsize_for_tree_index((idx+1))));
\r
3296 do { /* traverse through chain of same-sized nodes */
\r
3297 do_check_any_chunk(m, ((mchunkptr)u));
\r
3298 assert(u->index == tindex);
\r
3299 assert(chunksize(u) == tsize);
\r
3300 assert(!is_inuse(u));
\r
3301 assert(!next_pinuse(u));
\r
3302 assert(u->fd->bk == u);
\r
3303 assert(u->bk->fd == u);
\r
3304 if (u->parent == 0) {
\r
3305 assert(u->child[0] == 0);
\r
3306 assert(u->child[1] == 0);
\r
3309 assert(head == 0); /* only one node on chain has parent */
\r
3311 assert(u->parent != u);
\r
3312 assert (u->parent->child[0] == u ||
\r
3313 u->parent->child[1] == u ||
\r
3314 *((tbinptr*)(u->parent)) == u);
\r
3315 if (u->child[0] != 0) {
\r
3316 assert(u->child[0]->parent == u);
\r
3317 assert(u->child[0] != u);
\r
3318 do_check_tree(m, u->child[0]);
\r
3320 if (u->child[1] != 0) {
\r
3321 assert(u->child[1]->parent == u);
\r
3322 assert(u->child[1] != u);
\r
3323 do_check_tree(m, u->child[1]);
\r
3325 if (u->child[0] != 0 && u->child[1] != 0) {
\r
3326 assert(chunksize(u->child[0]) < chunksize(u->child[1]));
\r
3331 assert(head != 0);
\r
3334 /* Check all the chunks in a treebin. */
\r
3335 static void do_check_treebin(mstate m, bindex_t i) {
\r
3336 tbinptr* tb = treebin_at(m, i);
\r
3337 tchunkptr t = *tb;
\r
3338 int empty = (m->treemap & (1U << i)) == 0;
\r
3342 do_check_tree(m, t);
\r
3345 /* Check all the chunks in a smallbin. */
\r
3346 static void do_check_smallbin(mstate m, bindex_t i) {
\r
3347 sbinptr b = smallbin_at(m, i);
\r
3348 mchunkptr p = b->bk;
\r
3349 unsigned int empty = (m->smallmap & (1U << i)) == 0;
\r
3353 for (; p != b; p = p->bk) {
\r
3354 size_t size = chunksize(p);
\r
3356 /* each chunk claims to be free */
\r
3357 do_check_free_chunk(m, p);
\r
3358 /* chunk belongs in bin */
\r
3359 assert(small_index(size) == i);
\r
3360 assert(p->bk == b || chunksize(p->bk) == chunksize(p));
\r
3361 /* chunk is followed by an inuse chunk */
\r
3362 q = next_chunk(p);
\r
3363 if (q->head != FENCEPOST_HEAD)
\r
3364 do_check_inuse_chunk(m, q);
\r
3369 /* Find x in a bin. Used in other check functions. */
\r
3370 static int bin_find(mstate m, mchunkptr x) {
\r
3371 size_t size = chunksize(x);
\r
3372 if (is_small(size)) {
\r
3373 bindex_t sidx = small_index(size);
\r
3374 sbinptr b = smallbin_at(m, sidx);
\r
3375 if (smallmap_is_marked(m, sidx)) {
\r
3380 } while ((p = p->fd) != b);
\r
3385 compute_tree_index(size, tidx);
\r
3386 if (treemap_is_marked(m, tidx)) {
\r
3387 tchunkptr t = *treebin_at(m, tidx);
\r
3388 size_t sizebits = size << leftshift_for_tree_index(tidx);
\r
3389 while (t != 0 && chunksize(t) != size) {
\r
3390 t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
\r
3396 if (u == (tchunkptr)x)
\r
3398 } while ((u = u->fd) != t);
\r
3405 /* Traverse each chunk and check it; return total */
\r
3406 static size_t traverse_and_check(mstate m) {
\r
3408 if (is_initialized(m)) {
\r
3409 msegmentptr s = &m->seg;
\r
3410 sum += m->topsize + TOP_FOOT_SIZE;
\r
3412 mchunkptr q = align_as_chunk(s->base);
\r
3413 mchunkptr lastq = 0;
\r
3414 assert(pinuse(q));
\r
3415 while (segment_holds(s, q) &&
\r
3416 q != m->top && q->head != FENCEPOST_HEAD) {
\r
3417 sum += chunksize(q);
\r
3418 if (is_inuse(q)) {
\r
3419 assert(!bin_find(m, q));
\r
3420 do_check_inuse_chunk(m, q);
\r
3423 assert(q == m->dv || bin_find(m, q));
\r
3424 assert(lastq == 0 || is_inuse(lastq)); /* Not 2 consecutive free */
\r
3425 do_check_free_chunk(m, q);
\r
3428 q = next_chunk(q);
\r
3437 /* Check all properties of malloc_state. */
\r
3438 static void do_check_malloc_state(mstate m) {
\r
3442 for (i = 0; i < NSMALLBINS; ++i)
\r
3443 do_check_smallbin(m, i);
\r
3444 for (i = 0; i < NTREEBINS; ++i)
\r
3445 do_check_treebin(m, i);
\r
3447 if (m->dvsize != 0) { /* check dv chunk */
\r
3448 do_check_any_chunk(m, m->dv);
\r
3449 assert(m->dvsize == chunksize(m->dv));
\r
3450 assert(m->dvsize >= MIN_CHUNK_SIZE);
\r
3451 assert(bin_find(m, m->dv) == 0);
\r
3454 if (m->top != 0) { /* check top chunk */
\r
3455 do_check_top_chunk(m, m->top);
\r
3456 /*assert(m->topsize == chunksize(m->top)); redundant */
\r
3457 assert(m->topsize > 0);
\r
3458 assert(bin_find(m, m->top) == 0);
\r
3461 total = traverse_and_check(m);
\r
3462 assert(total <= m->footprint);
\r
3463 assert(m->footprint <= m->max_footprint);
\r
3465 #endif /* DEBUG */
\r
3467 /* ----------------------------- statistics ------------------------------ */
\r
3470 static struct mallinfo internal_mallinfo(mstate m) {
\r
3471 struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
\r
3472 ensure_initialization();
\r
3473 if (!PREACTION(m)) {
\r
3474 check_malloc_state(m);
\r
3475 if (is_initialized(m)) {
\r
3476 size_t nfree = SIZE_T_ONE; /* top always free */
\r
3477 size_t mfree = m->topsize + TOP_FOOT_SIZE;
\r
3478 size_t sum = mfree;
\r
3479 msegmentptr s = &m->seg;
\r
3481 mchunkptr q = align_as_chunk(s->base);
\r
3482 while (segment_holds(s, q) &&
\r
3483 q != m->top && q->head != FENCEPOST_HEAD) {
\r
3484 size_t sz = chunksize(q);
\r
3486 if (!is_inuse(q)) {
\r
3490 q = next_chunk(q);
\r
3496 nm.ordblks = nfree;
\r
3497 nm.hblkhd = m->footprint - sum;
\r
3498 nm.usmblks = m->max_footprint;
\r
3499 nm.uordblks = m->footprint - mfree;
\r
3500 nm.fordblks = mfree;
\r
3501 nm.keepcost = m->topsize;
\r
3508 #endif /* !NO_MALLINFO */
\r
3510 #if !NO_MALLOC_STATS
\r
3511 static void internal_malloc_stats(mstate m) {
\r
3512 ensure_initialization();
\r
3513 if (!PREACTION(m)) {
\r
3517 check_malloc_state(m);
\r
3518 if (is_initialized(m)) {
\r
3519 msegmentptr s = &m->seg;
\r
3520 maxfp = m->max_footprint;
\r
3521 fp = m->footprint;
\r
3522 used = fp - (m->topsize + TOP_FOOT_SIZE);
\r
3525 mchunkptr q = align_as_chunk(s->base);
\r
3526 while (segment_holds(s, q) &&
\r
3527 q != m->top && q->head != FENCEPOST_HEAD) {
\r
3529 used -= chunksize(q);
\r
3530 q = next_chunk(q);
\r
3535 POSTACTION(m); /* drop lock */
\r
3536 fprintf(stderr, "max system bytes = %10lu\n", (unsigned long)(maxfp));
\r
3537 fprintf(stderr, "system bytes = %10lu\n", (unsigned long)(fp));
\r
3538 fprintf(stderr, "in use bytes = %10lu\n", (unsigned long)(used));
\r
3541 #endif /* NO_MALLOC_STATS */
\r
3543 /* ----------------------- Operations on smallbins ----------------------- */
\r
3546 Various forms of linking and unlinking are defined as macros. Even
\r
3547 the ones for trees, which are very long but have very short typical
\r
3548 paths. This is ugly but reduces reliance on inlining support of compilers.
\r
3552 /* Link a free chunk into a smallbin */
\r
3553 #define insert_small_chunk(M, P, S) {\
\r
3554 bindex_t I = small_index(S);\
\r
3555 mchunkptr B = smallbin_at(M, I);\
\r
3557 assert(S >= MIN_CHUNK_SIZE);\
\r
3558 if (!smallmap_is_marked(M, I))\
\r
3559 mark_smallmap(M, I);\
\r
3560 else if (RTCHECK(ok_address(M, B->fd)))\
\r
3563 CORRUPTION_ERROR_ACTION(M);\
\r
3571 /* Unlink a chunk from a smallbin */
\r
3572 #define unlink_small_chunk(M, P, S) {\
\r
3573 mchunkptr F = P->fd;\
\r
3574 mchunkptr B = P->bk;\
\r
3575 bindex_t I = small_index(S);\
\r
3578 assert(chunksize(P) == small_index2size(I));\
\r
3579 if (RTCHECK(F == smallbin_at(M,I) || (ok_address(M, F) && F->bk == P))) { \
\r
3581 clear_smallmap(M, I);\
\r
3583 else if (RTCHECK(B == smallbin_at(M,I) ||\
\r
3584 (ok_address(M, B) && B->fd == P))) {\
\r
3589 CORRUPTION_ERROR_ACTION(M);\
\r
3593 CORRUPTION_ERROR_ACTION(M);\
\r
3597 /* Unlink the first chunk from a smallbin */
\r
3598 #define unlink_first_small_chunk(M, B, P, I) {\
\r
3599 mchunkptr F = P->fd;\
\r
3602 assert(chunksize(P) == small_index2size(I));\
\r
3604 clear_smallmap(M, I);\
\r
3606 else if (RTCHECK(ok_address(M, F) && F->bk == P)) {\
\r
3611 CORRUPTION_ERROR_ACTION(M);\
\r
3615 /* Replace dv node, binning the old one */
\r
3616 /* Used only when dvsize known to be small */
\r
3617 #define replace_dv(M, P, S) {\
\r
3618 size_t DVS = M->dvsize;\
\r
3619 assert(is_small(DVS));\
\r
3621 mchunkptr DV = M->dv;\
\r
3622 insert_small_chunk(M, DV, DVS);\
\r
3628 /* ------------------------- Operations on trees ------------------------- */
\r
3630 /* Insert chunk into tree */
\r
3631 #define insert_large_chunk(M, X, S) {\
\r
3634 compute_tree_index(S, I);\
\r
3635 H = treebin_at(M, I);\
\r
3637 X->child[0] = X->child[1] = 0;\
\r
3638 if (!treemap_is_marked(M, I)) {\
\r
3639 mark_treemap(M, I);\
\r
3641 X->parent = (tchunkptr)H;\
\r
3642 X->fd = X->bk = X;\
\r
3645 tchunkptr T = *H;\
\r
3646 size_t K = S << leftshift_for_tree_index(I);\
\r
3648 if (chunksize(T) != S) {\
\r
3649 tchunkptr* C = &(T->child[(K >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]);\
\r
3653 else if (RTCHECK(ok_address(M, C))) {\
\r
3656 X->fd = X->bk = X;\
\r
3660 CORRUPTION_ERROR_ACTION(M);\
\r
3665 tchunkptr F = T->fd;\
\r
3666 if (RTCHECK(ok_address(M, T) && ok_address(M, F))) {\
\r
3667 T->fd = F->bk = X;\
\r
3674 CORRUPTION_ERROR_ACTION(M);\
\r
3685 1. If x is a chained node, unlink it from its same-sized fd/bk links
\r
3686 and choose its bk node as its replacement.
\r
3687 2. If x was the last node of its size, but not a leaf node, it must
\r
3688 be replaced with a leaf node (not merely one with an open left or
\r
3689 right), to make sure that lefts and rights of descendants
\r
3690 correspond properly to bit masks. We use the rightmost descendant
\r
3691 of x. We could use any other leaf, but this is easy to locate and
\r
3692 tends to counteract removal of leftmosts elsewhere, and so keeps
\r
3693 paths shorter than minimally guaranteed. This doesn't loop much
\r
3694 because on average a node in a tree is near the bottom.
\r
3695 3. If x is the base of a chain (i.e., has parent links) relink
\r
3696 x's parent and children to x's replacement (or null if none).
\r
3699 #define unlink_large_chunk(M, X) {\
\r
3700 tchunkptr XP = X->parent;\
\r
3702 if (X->bk != X) {\
\r
3703 tchunkptr F = X->fd;\
\r
3705 if (RTCHECK(ok_address(M, F) && F->bk == X && R->fd == X)) {\
\r
3710 CORRUPTION_ERROR_ACTION(M);\
\r
3715 if (((R = *(RP = &(X->child[1]))) != 0) ||\
\r
3716 ((R = *(RP = &(X->child[0]))) != 0)) {\
\r
3718 while ((*(CP = &(R->child[1])) != 0) ||\
\r
3719 (*(CP = &(R->child[0])) != 0)) {\
\r
3722 if (RTCHECK(ok_address(M, RP)))\
\r
3725 CORRUPTION_ERROR_ACTION(M);\
\r
3730 tbinptr* H = treebin_at(M, X->index);\
\r
3732 if ((*H = R) == 0) \
\r
3733 clear_treemap(M, X->index);\
\r
3735 else if (RTCHECK(ok_address(M, XP))) {\
\r
3736 if (XP->child[0] == X) \
\r
3737 XP->child[0] = R;\
\r
3739 XP->child[1] = R;\
\r
3742 CORRUPTION_ERROR_ACTION(M);\
\r
3744 if (RTCHECK(ok_address(M, R))) {\
\r
3745 tchunkptr C0, C1;\
\r
3747 if ((C0 = X->child[0]) != 0) {\
\r
3748 if (RTCHECK(ok_address(M, C0))) {\
\r
3749 R->child[0] = C0;\
\r
3753 CORRUPTION_ERROR_ACTION(M);\
\r
3755 if ((C1 = X->child[1]) != 0) {\
\r
3756 if (RTCHECK(ok_address(M, C1))) {\
\r
3757 R->child[1] = C1;\
\r
3761 CORRUPTION_ERROR_ACTION(M);\
\r
3765 CORRUPTION_ERROR_ACTION(M);\
\r
3770 /* Relays to large vs small bin operations */
\r
3772 #define insert_chunk(M, P, S)\
\r
3773 if (is_small(S)) insert_small_chunk(M, P, S)\
\r
3774 else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); }
\r
3776 #define unlink_chunk(M, P, S)\
\r
3777 if (is_small(S)) unlink_small_chunk(M, P, S)\
\r
3778 else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); }
\r
3781 /* Relays to internal calls to malloc/free from realloc, memalign etc */
\r
3784 #define internal_malloc(m, b) mspace_malloc(m, b)
\r
3785 #define internal_free(m, mem) mspace_free(m,mem);
\r
3786 #else /* ONLY_MSPACES */
\r
3788 #define internal_malloc(m, b)\
\r
3789 ((m == gm)? dlmalloc(b) : mspace_malloc(m, b))
\r
3790 #define internal_free(m, mem)\
\r
3791 if (m == gm) dlfree(mem); else mspace_free(m,mem);
\r
3792 #else /* MSPACES */
\r
3793 #define internal_malloc(m, b) dlmalloc(b)
\r
3794 #define internal_free(m, mem) dlfree(mem)
\r
3795 #endif /* MSPACES */
\r
3796 #endif /* ONLY_MSPACES */
\r
3798 /* ----------------------- Direct-mmapping chunks ----------------------- */
\r
3801 Directly mmapped chunks are set up with an offset to the start of
\r
3802 the mmapped region stored in the prev_foot field of the chunk. This
\r
3803 allows reconstruction of the required argument to MUNMAP when freed,
\r
3804 and also allows adjustment of the returned chunk to meet alignment
\r
3805 requirements (especially in memalign).
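/*
  For example (sketch of the arithmetic used by free and dispose_chunk
  below): given a directly mmapped chunk p, the original mapping is
  recovered as
    size_t offset = p->prev_foot;
    size_t length = chunksize(p) + offset + MMAP_FOOT_PAD;
    CALL_MUNMAP((char*)p - offset, length);
*/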
\r
3808 /* Malloc using mmap */
\r
3809 static void* mmap_alloc(mstate m, size_t nb) {
\r
3810 size_t mmsize = mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
\r
3811 if (m->footprint_limit != 0) {
\r
3812 size_t fp = m->footprint + mmsize;
\r
3813 if (fp <= m->footprint || fp > m->footprint_limit)
\r
3816 if (mmsize > nb) { /* Check for wrap around 0 */
\r
3817 char* mm = (char*)(CALL_DIRECT_MMAP(mmsize));
\r
3818 if (mm != CMFAIL) {
\r
3819 size_t offset = align_offset(chunk2mem(mm));
\r
3820 size_t psize = mmsize - offset - MMAP_FOOT_PAD;
\r
3821 mchunkptr p = (mchunkptr)(mm + offset);
\r
3822 p->prev_foot = offset;
\r
3824 mark_inuse_foot(m, p, psize);
\r
3825 chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD;
\r
3826 chunk_plus_offset(p, psize+SIZE_T_SIZE)->head = 0;
\r
3828 if (m->least_addr == 0 || mm < m->least_addr)
\r
3829 m->least_addr = mm;
\r
3830 if ((m->footprint += mmsize) > m->max_footprint)
\r
3831 m->max_footprint = m->footprint;
\r
3832 assert(is_aligned(chunk2mem(p)));
\r
3833 check_mmapped_chunk(m, p);
\r
3834 return chunk2mem(p);
\r
3840 /* Realloc using mmap */
\r
3841 static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb, int flags) {
\r
3842 size_t oldsize = chunksize(oldp);
\r
3844 if (is_small(nb)) /* Can't shrink mmap regions below small size */
\r
3846 /* Keep old chunk if big enough but not too big */
\r
3847 if (oldsize >= nb + SIZE_T_SIZE &&
\r
3848 (oldsize - nb) <= (mparams.granularity << 1))
\r
3851 size_t offset = oldp->prev_foot;
\r
3852 size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD;
\r
3853 size_t newmmsize = mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
\r
3854 char* cp = (char*)CALL_MREMAP((char*)oldp - offset,
\r
3855 oldmmsize, newmmsize, flags);
\r
3856 if (cp != CMFAIL) {
\r
3857 mchunkptr newp = (mchunkptr)(cp + offset);
\r
3858 size_t psize = newmmsize - offset - MMAP_FOOT_PAD;
\r
3859 newp->head = psize;
\r
3860 mark_inuse_foot(m, newp, psize);
\r
3861 chunk_plus_offset(newp, psize)->head = FENCEPOST_HEAD;
\r
3862 chunk_plus_offset(newp, psize+SIZE_T_SIZE)->head = 0;
\r
3864 if (cp < m->least_addr)
\r
3865 m->least_addr = cp;
\r
3866 if ((m->footprint += newmmsize - oldmmsize) > m->max_footprint)
\r
3867 m->max_footprint = m->footprint;
\r
3868 check_mmapped_chunk(m, newp);
\r
3876 /* -------------------------- mspace management -------------------------- */
\r
3878 /* Initialize top chunk and its size */
\r
3879 static void init_top(mstate m, mchunkptr p, size_t psize) {
\r
3880 /* Ensure alignment */
\r
3881 size_t offset = align_offset(chunk2mem(p));
\r
3882 p = (mchunkptr)((char*)p + offset);
\r
3886 m->topsize = psize;
\r
3887 p->head = psize | PINUSE_BIT;
\r
3888 /* set size of fake trailing chunk holding overhead space only once */
\r
3889 chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE;
\r
3890 m->trim_check = mparams.trim_threshold; /* reset on each update */
\r
3893 /* Initialize bins for a new mstate that is otherwise zeroed out */
\r
3894 static void init_bins(mstate m) {
\r
3895 /* Establish circular links for smallbins */
\r
3897 for (i = 0; i < NSMALLBINS; ++i) {
\r
3898 sbinptr bin = smallbin_at(m,i);
\r
3899 bin->fd = bin->bk = bin;
\r
3903 #if PROCEED_ON_ERROR
\r
3905 /* default corruption action */
\r
3906 static void reset_on_error(mstate m) {
\r
3908 ++malloc_corruption_error_count;
\r
3909 /* Reinitialize fields to forget about all memory */
\r
3910 m->smallmap = m->treemap = 0;
\r
3911 m->dvsize = m->topsize = 0;
\r
3915 m->top = m->dv = 0;
\r
3916 for (i = 0; i < NTREEBINS; ++i)
\r
3917 *treebin_at(m, i) = 0;
\r
3920 #endif /* PROCEED_ON_ERROR */
\r
3922 /* Allocate chunk and prepend remainder with chunk in successor base. */
\r
3923 static void* prepend_alloc(mstate m, char* newbase, char* oldbase,
\r
3925 mchunkptr p = align_as_chunk(newbase);
\r
3926 mchunkptr oldfirst = align_as_chunk(oldbase);
\r
3927 size_t psize = (char*)oldfirst - (char*)p;
\r
3928 mchunkptr q = chunk_plus_offset(p, nb);
\r
3929 size_t qsize = psize - nb;
\r
3930 set_size_and_pinuse_of_inuse_chunk(m, p, nb);
\r
3932 assert((char*)oldfirst > (char*)q);
\r
3933 assert(pinuse(oldfirst));
\r
3934 assert(qsize >= MIN_CHUNK_SIZE);
\r
3936 /* consolidate remainder with first chunk of old base */
\r
3937 if (oldfirst == m->top) {
\r
3938 size_t tsize = m->topsize += qsize;
\r
3940 q->head = tsize | PINUSE_BIT;
\r
3941 check_top_chunk(m, q);
\r
3943 else if (oldfirst == m->dv) {
\r
3944 size_t dsize = m->dvsize += qsize;
\r
3946 set_size_and_pinuse_of_free_chunk(q, dsize);
\r
3949 if (!is_inuse(oldfirst)) {
\r
3950 size_t nsize = chunksize(oldfirst);
\r
3951 unlink_chunk(m, oldfirst, nsize);
\r
3952 oldfirst = chunk_plus_offset(oldfirst, nsize);
\r
3955 set_free_with_pinuse(q, qsize, oldfirst);
\r
3956 insert_chunk(m, q, qsize);
\r
3957 check_free_chunk(m, q);
\r
3960 check_malloced_chunk(m, chunk2mem(p), nb);
\r
3961 return chunk2mem(p);
\r
3964 /* Add a segment to hold a new noncontiguous region */
\r
3965 static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) {
\r
3966 /* Determine locations and sizes of segment, fenceposts, old top */
\r
3967 char* old_top = (char*)m->top;
\r
3968 msegmentptr oldsp = segment_holding(m, old_top);
\r
3969 char* old_end = oldsp->base + oldsp->size;
\r
3970 size_t ssize = pad_request(sizeof(struct malloc_segment));
\r
3971 char* rawsp = old_end - (ssize + FOUR_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
\r
3972 size_t offset = align_offset(chunk2mem(rawsp));
\r
3973 char* asp = rawsp + offset;
\r
3974 char* csp = (asp < (old_top + MIN_CHUNK_SIZE))? old_top : asp;
\r
3975 mchunkptr sp = (mchunkptr)csp;
\r
3976 msegmentptr ss = (msegmentptr)(chunk2mem(sp));
\r
3977 mchunkptr tnext = chunk_plus_offset(sp, ssize);
\r
3978 mchunkptr p = tnext;
\r
3981 /* reset top to new space */
\r
3982 init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
\r
3984 /* Set up segment record */
\r
3985 assert(is_aligned(ss));
\r
3986 set_size_and_pinuse_of_inuse_chunk(m, sp, ssize);
\r
3987 *ss = m->seg; /* Push current record */
\r
3988 m->seg.base = tbase;
\r
3989 m->seg.size = tsize;
\r
3990 m->seg.sflags = mmapped;
\r
3993 /* Insert trailing fenceposts */
\r
3995 mchunkptr nextp = chunk_plus_offset(p, SIZE_T_SIZE);
\r
3996 p->head = FENCEPOST_HEAD;
\r
3998 if ((char*)(&(nextp->head)) < old_end)
\r
4003 assert(nfences >= 2);
\r
4005 /* Insert the rest of old top into a bin as an ordinary free chunk */
\r
4006 if (csp != old_top) {
\r
4007 mchunkptr q = (mchunkptr)old_top;
\r
4008 size_t psize = csp - old_top;
\r
4009 mchunkptr tn = chunk_plus_offset(q, psize);
\r
4010 set_free_with_pinuse(q, psize, tn);
\r
4011 insert_chunk(m, q, psize);
\r
4014 check_top_chunk(m, m->top);
\r
4017 /* -------------------------- System allocation -------------------------- */
\r
4019 /* Get memory from system using MORECORE or MMAP */
\r
4020 static void* sys_alloc(mstate m, size_t nb) {
\r
4021 char* tbase = CMFAIL;
\r
4023 flag_t mmap_flag = 0;
\r
4024 size_t asize; /* allocation size */
\r
4026 ensure_initialization();
\r
4028 /* Directly map large chunks, but only if already initialized */
\r
4029 if (use_mmap(m) && nb >= mparams.mmap_threshold && m->topsize != 0) {
\r
4030 void* mem = mmap_alloc(m, nb);
\r
4035 asize = granularity_align(nb + SYS_ALLOC_PADDING);
\r
4037 return 0; /* wraparound */
\r
4038 if (m->footprint_limit != 0) {
\r
4039 size_t fp = m->footprint + asize;
\r
4040 if (fp <= m->footprint || fp > m->footprint_limit)
\r
4045 Try getting memory in any of three ways (in most-preferred to
\r
4046 least-preferred order):
\r
4047 1. A call to MORECORE that can normally contiguously extend memory.
\r
4048 (disabled if not MORECORE_CONTIGUOUS or not HAVE_MORECORE or
\r
4049 main space is mmapped or a previous contiguous call failed)
\r
4050 2. A call to MMAP new space (disabled if not HAVE_MMAP).
\r
4051 Note that under the default settings, if MORECORE is unable to
\r
4052 fulfill a request, and HAVE_MMAP is true, then mmap is
\r
4053 used as a noncontiguous system allocator. This is a useful backup
\r
4054 strategy for systems with holes in address spaces -- in this case
\r
4055 sbrk cannot contiguously expand the heap, but mmap may be able to
\r
4057 3. A call to MORECORE that cannot usually contiguously extend memory.
\r
4058 (disabled if not HAVE_MORECORE)
\r
4060 In all cases, we need to request enough bytes from system to ensure
\r
4061 we can malloc nb bytes upon success, so pad with enough space for
\r
4062 top_foot, plus alignment-pad to make sure we don't lose bytes if
\r
4063 not on boundary, and round this up to a granularity unit.
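  /*
    Worked example (assuming a 4096-byte granularity): a request with
    nb == 10000 asks the system for asize == granularity_align(10000 +
    SYS_ALLOC_PADDING), which rounds up to 12288 as long as
    SYS_ALLOC_PADDING stays under 2288 bytes of footer and alignment slack.
  */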
\r
4066 if (MORECORE_CONTIGUOUS && !use_noncontiguous(m)) {
\r
4067 char* br = CMFAIL;
\r
4068 msegmentptr ss = (m->top == 0)? 0 : segment_holding(m, (char*)m->top);
\r
4069 ACQUIRE_MALLOC_GLOBAL_LOCK();
\r
4071 if (ss == 0) { /* First time through or recovery */
\r
4072 char* base = (char*)CALL_MORECORE(0);
\r
4073 if (base != CMFAIL) {
\r
4075 /* Adjust to end on a page boundary */
\r
4076 if (!is_page_aligned(base))
\r
4077 asize += (page_align((size_t)base) - (size_t)base);
\r
4078 fp = m->footprint + asize; /* recheck limits */
\r
4079 if (asize > nb && asize < HALF_MAX_SIZE_T &&
\r
4080 (m->footprint_limit == 0 ||
\r
4081 (fp > m->footprint && fp <= m->footprint_limit)) &&
\r
4082 (br = (char*)(CALL_MORECORE(asize))) == base) {
\r
4089 /* Subtract out existing available top space from MORECORE request. */
\r
4090 asize = granularity_align(nb - m->topsize + SYS_ALLOC_PADDING);
\r
4091 /* Use mem here only if it did continuously extend old space */
\r
4092 if (asize < HALF_MAX_SIZE_T &&
\r
4093 (br = (char*)(CALL_MORECORE(asize))) == ss->base+ss->size) {
\r
4099 if (tbase == CMFAIL) { /* Cope with partial failure */
\r
4100 if (br != CMFAIL) { /* Try to use/extend the space we did get */
\r
4101 if (asize < HALF_MAX_SIZE_T &&
\r
4102 asize < nb + SYS_ALLOC_PADDING) {
\r
4103 size_t esize = granularity_align(nb + SYS_ALLOC_PADDING - asize);
\r
4104 if (esize < HALF_MAX_SIZE_T) {
\r
4105 char* end = (char*)CALL_MORECORE(esize);
\r
4106 if (end != CMFAIL)
\r
4108 else { /* Can't use; try to release */
\r
4109 (void) CALL_MORECORE(-asize);
\r
4115 if (br != CMFAIL) { /* Use the space we did get */
\r
4120 disable_contiguous(m); /* Don't try contiguous path in the future */
\r
4123 RELEASE_MALLOC_GLOBAL_LOCK();
\r
4126 if (HAVE_MMAP && tbase == CMFAIL) { /* Try MMAP */
\r
4127 char* mp = (char*)(CALL_MMAP(asize));
\r
4128 if (mp != CMFAIL) {
\r
4131 mmap_flag = USE_MMAP_BIT;
\r
4135 if (HAVE_MORECORE && tbase == CMFAIL) { /* Try noncontiguous MORECORE */
\r
4136 if (asize < HALF_MAX_SIZE_T) {
\r
4137 char* br = CMFAIL;
\r
4138 char* end = CMFAIL;
\r
4139 ACQUIRE_MALLOC_GLOBAL_LOCK();
\r
4140 br = (char*)(CALL_MORECORE(asize));
\r
4141 end = (char*)(CALL_MORECORE(0));
\r
4142 RELEASE_MALLOC_GLOBAL_LOCK();
\r
4143 if (br != CMFAIL && end != CMFAIL && br < end) {
\r
4144 size_t ssize = end - br;
\r
4145 if (ssize > nb + TOP_FOOT_SIZE) {
\r
4153 if (tbase != CMFAIL) {
\r
4155 if ((m->footprint += tsize) > m->max_footprint)
\r
4156 m->max_footprint = m->footprint;
\r
4158 if (!is_initialized(m)) { /* first-time initialization */
\r
4159 if (m->least_addr == 0 || tbase < m->least_addr)
\r
4160 m->least_addr = tbase;
\r
4161 m->seg.base = tbase;
\r
4162 m->seg.size = tsize;
\r
4163 m->seg.sflags = mmap_flag;
\r
4164 m->magic = mparams.magic;
\r
4165 m->release_checks = MAX_RELEASE_CHECK_RATE;
\r
4169 init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
\r
4173 /* Offset top by embedded malloc_state */
\r
4174 mchunkptr mn = next_chunk(mem2chunk(m));
\r
4175 init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) -TOP_FOOT_SIZE);
\r
4180 /* Try to merge with an existing segment */
\r
4181 msegmentptr sp = &m->seg;
\r
4182 /* Only consider most recent segment if traversal suppressed */
\r
4183 while (sp != 0 && tbase != sp->base + sp->size)
\r
4184 sp = (NO_SEGMENT_TRAVERSAL) ? 0 : sp->next;
\r
4186 !is_extern_segment(sp) &&
\r
4187 (sp->sflags & USE_MMAP_BIT) == mmap_flag &&
\r
4188 segment_holds(sp, m->top)) { /* append */
\r
4189 sp->size += tsize;
\r
4190 init_top(m, m->top, m->topsize + tsize);
\r
4193 if (tbase < m->least_addr)
\r
4194 m->least_addr = tbase;
\r
4196 while (sp != 0 && sp->base != tbase + tsize)
\r
4197 sp = (NO_SEGMENT_TRAVERSAL) ? 0 : sp->next;
\r
4199 !is_extern_segment(sp) &&
\r
4200 (sp->sflags & USE_MMAP_BIT) == mmap_flag) {
\r
4201 char* oldbase = sp->base;
\r
4203 sp->size += tsize;
\r
4204 return prepend_alloc(m, tbase, oldbase, nb);
\r
4207 add_segment(m, tbase, tsize, mmap_flag);
\r
4211 if (nb < m->topsize) { /* Allocate from new or extended top space */
\r
4212 size_t rsize = m->topsize -= nb;
\r
4213 mchunkptr p = m->top;
\r
4214 mchunkptr r = m->top = chunk_plus_offset(p, nb);
\r
4215 r->head = rsize | PINUSE_BIT;
\r
4216 set_size_and_pinuse_of_inuse_chunk(m, p, nb);
\r
4217 check_top_chunk(m, m->top);
\r
4218 check_malloced_chunk(m, chunk2mem(p), nb);
\r
4219 return chunk2mem(p);
\r
4223 MALLOC_FAILURE_ACTION;
\r
4227 /* ----------------------- system deallocation -------------------------- */
\r
4229 /* Unmap and unlink any mmapped segments that don't contain used chunks */
\r
4230 static size_t release_unused_segments(mstate m) {
\r
4231 size_t released = 0;
\r
4233 msegmentptr pred = &m->seg;
\r
4234 msegmentptr sp = pred->next;
\r
4236 char* base = sp->base;
\r
4237 size_t size = sp->size;
\r
4238 msegmentptr next = sp->next;
\r
4240 if (is_mmapped_segment(sp) && !is_extern_segment(sp)) {
\r
4241 mchunkptr p = align_as_chunk(base);
\r
4242 size_t psize = chunksize(p);
\r
4243 /* Can unmap if first chunk holds entire segment and not pinned */
\r
4244 if (!is_inuse(p) && (char*)p + psize >= base + size - TOP_FOOT_SIZE) {
\r
4245 tchunkptr tp = (tchunkptr)p;
\r
4246 assert(segment_holds(sp, (char*)sp));
\r
4252 unlink_large_chunk(m, tp);
\r
4254 if (CALL_MUNMAP(base, size) == 0) {
\r
4256 m->footprint -= size;
\r
4257 /* unlink obsoleted record */
\r
4261 else { /* back out if cannot unmap */
\r
4262 insert_large_chunk(m, tp, psize);
\r
4266 if (NO_SEGMENT_TRAVERSAL) /* scan only first segment */
\r
4271 /* Reset check counter */
\r
4272 m->release_checks = ((nsegs > MAX_RELEASE_CHECK_RATE)?
\r
4273 nsegs : MAX_RELEASE_CHECK_RATE);
\r
4277 static int sys_trim(mstate m, size_t pad) {
\r
4278 size_t released = 0;
\r
4279 ensure_initialization();
\r
4280 if (pad < MAX_REQUEST && is_initialized(m)) {
\r
4281 pad += TOP_FOOT_SIZE; /* ensure enough room for segment overhead */
\r
4283 if (m->topsize > pad) {
\r
4284 /* Shrink top space in granularity-size units, keeping at least one */
\r
4285 size_t unit = mparams.granularity;
\r
4286 size_t extra = ((m->topsize - pad + (unit - SIZE_T_ONE)) / unit -
\r
4287 SIZE_T_ONE) * unit;
\r
4288 msegmentptr sp = segment_holding(m, (char*)m->top);
\r
4290 if (!is_extern_segment(sp)) {
\r
4291 if (is_mmapped_segment(sp)) {
\r
4293 sp->size >= extra &&
\r
4294 !has_segment_link(m, sp)) { /* can't shrink if pinned */
\r
4295 size_t newsize = sp->size - extra;
\r
4296 /* Prefer mremap, fall back to munmap */
\r
4297 if ((CALL_MREMAP(sp->base, sp->size, newsize, 0) != MFAIL) ||
\r
4298 (CALL_MUNMAP(sp->base + newsize, extra) == 0)) {
\r
4303 else if (HAVE_MORECORE) {
\r
4304 if (extra >= HALF_MAX_SIZE_T) /* Avoid wrapping negative */
\r
4305 extra = (HALF_MAX_SIZE_T) + SIZE_T_ONE - unit;
\r
4306 ACQUIRE_MALLOC_GLOBAL_LOCK();
\r
4308 /* Make sure end of memory is where we last set it. */
\r
4309 char* old_br = (char*)(CALL_MORECORE(0));
\r
4310 if (old_br == sp->base + sp->size) {
\r
4311 char* rel_br = (char*)(CALL_MORECORE(-extra));
\r
4312 char* new_br = (char*)(CALL_MORECORE(0));
\r
4313 if (rel_br != CMFAIL && new_br < old_br)
\r
4314 released = old_br - new_br;
\r
4317 RELEASE_MALLOC_GLOBAL_LOCK();
\r
4321 if (released != 0) {
\r
4322 sp->size -= released;
\r
4323 m->footprint -= released;
\r
4324 init_top(m, m->top, m->topsize - released);
\r
4325 check_top_chunk(m, m->top);
\r
4329 /* Unmap any unused mmapped segments */
\r
4331 released += release_unused_segments(m);
\r
4333 /* On failure, disable autotrim to avoid repeated failed future calls */
\r
4334 if (released == 0 && m->topsize > m->trim_check)
\r
4335 m->trim_check = MAX_SIZE_T;
\r
4338 return (released != 0)? 1 : 0;
\r
4341 /* Consolidate and bin a chunk. Differs from exported versions
\r
4342 of free mainly in that the chunk need not be marked as inuse.
\r
4344 static void dispose_chunk(mstate m, mchunkptr p, size_t psize) {
\r
4345 mchunkptr next = chunk_plus_offset(p, psize);
\r
4348 size_t prevsize = p->prev_foot;
\r
4349 if (is_mmapped(p)) {
\r
4350 psize += prevsize + MMAP_FOOT_PAD;
\r
4351 if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
\r
4352 m->footprint -= psize;
\r
4355 prev = chunk_minus_offset(p, prevsize);
\r
4356 psize += prevsize;
\r
4358 if (RTCHECK(ok_address(m, prev))) { /* consolidate backward */
\r
4360 unlink_chunk(m, p, prevsize);
\r
4362 else if ((next->head & INUSE_BITS) == INUSE_BITS) {
\r
4363 m->dvsize = psize;
\r
4364 set_free_with_pinuse(p, psize, next);
\r
4369 CORRUPTION_ERROR_ACTION(m);
\r
4373 if (RTCHECK(ok_address(m, next))) {
\r
4374 if (!cinuse(next)) { /* consolidate forward */
\r
4375 if (next == m->top) {
\r
4376 size_t tsize = m->topsize += psize;
\r
4378 p->head = tsize | PINUSE_BIT;
\r
4385 else if (next == m->dv) {
\r
4386 size_t dsize = m->dvsize += psize;
\r
4388 set_size_and_pinuse_of_free_chunk(p, dsize);
\r
4392 size_t nsize = chunksize(next);
\r
4394 unlink_chunk(m, next, nsize);
\r
4395 set_size_and_pinuse_of_free_chunk(p, psize);
\r
4397 m->dvsize = psize;
\r
4403 set_free_with_pinuse(p, psize, next);
\r
4405 insert_chunk(m, p, psize);
\r
4408 CORRUPTION_ERROR_ACTION(m);
\r
4412 /* ---------------------------- malloc --------------------------- */
\r
4414 /* allocate a large request from the best fitting chunk in a treebin */
\r
4415 static void* tmalloc_large(mstate m, size_t nb) {
\r
4417 size_t rsize = -nb; /* Unsigned negation */
\r
4420 compute_tree_index(nb, idx);
\r
4421 if ((t = *treebin_at(m, idx)) != 0) {
\r
4422 /* Traverse tree for this bin looking for node with size == nb */
\r
4423 size_t sizebits = nb << leftshift_for_tree_index(idx);
\r
4424 tchunkptr rst = 0; /* The deepest untaken right subtree */
\r
4427 size_t trem = chunksize(t) - nb;
\r
4428 if (trem < rsize) {
\r
4430 if ((rsize = trem) == 0)
\r
4434 t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
\r
4435 if (rt != 0 && rt != t)
\r
4438 t = rst; /* set t to least subtree holding sizes > nb */
\r
4444 if (t == 0 && v == 0) { /* set t to root of next non-empty treebin */
\r
4445 binmap_t leftbits = left_bits(idx2bit(idx)) & m->treemap;
\r
4446 if (leftbits != 0) {
\r
4448 binmap_t leastbit = least_bit(leftbits);
\r
4449 compute_bit2idx(leastbit, i);
\r
4450 t = *treebin_at(m, i);
\r
4454 while (t != 0) { /* find smallest of tree or subtree */
\r
4455 size_t trem = chunksize(t) - nb;
\r
4456 if (trem < rsize) {
\r
4460 t = leftmost_child(t);
\r
4463 /* If dv is a better fit, return 0 so malloc will use it */
\r
4464 if (v != 0 && rsize < (size_t)(m->dvsize - nb)) {
\r
4465 if (RTCHECK(ok_address(m, v))) { /* split */
\r
4466 mchunkptr r = chunk_plus_offset(v, nb);
\r
4467 assert(chunksize(v) == rsize + nb);
\r
4468 if (RTCHECK(ok_next(v, r))) {
\r
4469 unlink_large_chunk(m, v);
\r
4470 if (rsize < MIN_CHUNK_SIZE)
\r
4471 set_inuse_and_pinuse(m, v, (rsize + nb));
\r
4473 set_size_and_pinuse_of_inuse_chunk(m, v, nb);
\r
4474 set_size_and_pinuse_of_free_chunk(r, rsize);
\r
4475 insert_chunk(m, r, rsize);
\r
4477 return chunk2mem(v);
\r
4480 CORRUPTION_ERROR_ACTION(m);
\r
4485 /* allocate a small request from the best fitting chunk in a treebin */
\r
4486 static void* tmalloc_small(mstate m, size_t nb) {
\r
4490 binmap_t leastbit = least_bit(m->treemap);
\r
4491 compute_bit2idx(leastbit, i);
\r
4492 v = t = *treebin_at(m, i);
\r
4493 rsize = chunksize(t) - nb;
\r
4495 while ((t = leftmost_child(t)) != 0) {
\r
4496 size_t trem = chunksize(t) - nb;
\r
4497 if (trem < rsize) {
\r
4503 if (RTCHECK(ok_address(m, v))) {
\r
4504 mchunkptr r = chunk_plus_offset(v, nb);
\r
4505 assert(chunksize(v) == rsize + nb);
\r
4506 if (RTCHECK(ok_next(v, r))) {
\r
4507 unlink_large_chunk(m, v);
\r
4508 if (rsize < MIN_CHUNK_SIZE)
\r
4509 set_inuse_and_pinuse(m, v, (rsize + nb));
\r
4511 set_size_and_pinuse_of_inuse_chunk(m, v, nb);
\r
4512 set_size_and_pinuse_of_free_chunk(r, rsize);
\r
4513 replace_dv(m, r, rsize);
\r
4515 return chunk2mem(v);
\r
4519 CORRUPTION_ERROR_ACTION(m);
\r
4525 void* dlmalloc(size_t bytes) {
\r
4528 If a small request (< 256 bytes minus per-chunk overhead):
\r
4529 1. If one exists, use a remainderless chunk in associated smallbin.
\r
4530 (Remainderless means that there are too few excess bytes to
\r
4531 represent as a chunk.)
\r
4532 2. If it is big enough, use the dv chunk, which is normally the
\r
4533 chunk adjacent to the one used for the most recent small request.
\r
4534 3. If one exists, split the smallest available chunk in a bin,
\r
4535 saving remainder in dv.
\r
4536 4. If it is big enough, use the top chunk.
\r
4537 5. If available, get memory from system and use it
\r
4538 Otherwise, for a large request:
\r
4539 1. Find the smallest available binned chunk that fits, and use it
\r
4540 if it is better fitting than dv chunk, splitting if necessary.
\r
4541 2. If better fitting than any binned chunk, use the dv chunk.
\r
4542 3. If it is big enough, use the top chunk.
\r
4543 4. If request size >= mmap threshold, try to directly mmap this chunk.
\r
4544 5. If available, get memory from system and use it
\r
4546 The ugly goto's here ensure that postaction occurs along all paths.
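  /*
    Worked example (assuming 8-byte size_t, 8-byte alignment, no FOOTERS):
    malloc(24) computes nb = pad_request(24) == 32 and idx =
    small_index(32) == 4; if smallbin 4 or 5 holds a chunk the fit is
    remainderless, otherwise the dv chunk, a split of a larger bin, the
    top chunk, and finally the system are tried in the order listed above.
  */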
\r
4550 ensure_initialization(); /* initialize in sys_alloc if not using locks */
\r
4553 if (!PREACTION(gm)) {
\r
4556 if (bytes <= MAX_SMALL_REQUEST) {
\r
4558 binmap_t smallbits;
\r
4559 nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
\r
4560 idx = small_index(nb);
\r
4561 smallbits = gm->smallmap >> idx;
\r
4563 if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
\r
4565 idx += ~smallbits & 1; /* Uses next bin if idx empty */
\r
4566 b = smallbin_at(gm, idx);
\r
4568 assert(chunksize(p) == small_index2size(idx));
\r
4569 unlink_first_small_chunk(gm, b, p, idx);
\r
4570 set_inuse_and_pinuse(gm, p, small_index2size(idx));
\r
4571 mem = chunk2mem(p);
\r
4572 check_malloced_chunk(gm, mem, nb);
\r
4576 else if (nb > gm->dvsize) {
\r
4577 if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
\r
4578 mchunkptr b, p, r;
\r
4581 binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
\r
4582 binmap_t leastbit = least_bit(leftbits);
\r
4583 compute_bit2idx(leastbit, i);
\r
4584 b = smallbin_at(gm, i);
\r
4586 assert(chunksize(p) == small_index2size(i));
\r
4587 unlink_first_small_chunk(gm, b, p, i);
\r
4588 rsize = small_index2size(i) - nb;
\r
4589 /* Fit here cannot be remainderless if 4byte sizes */
\r
4590 if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
\r
4591 set_inuse_and_pinuse(gm, p, small_index2size(i));
\r
4593 set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
\r
4594 r = chunk_plus_offset(p, nb);
\r
4595 set_size_and_pinuse_of_free_chunk(r, rsize);
\r
4596 replace_dv(gm, r, rsize);
\r
4598 mem = chunk2mem(p);
\r
4599 check_malloced_chunk(gm, mem, nb);
\r
4603 else if (gm->treemap != 0 && (mem = tmalloc_small(gm, nb)) != 0) {
\r
4604 check_malloced_chunk(gm, mem, nb);
\r
4609 else if (bytes >= MAX_REQUEST)
\r
4610 nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
\r
4612 nb = pad_request(bytes);
\r
4613 if (gm->treemap != 0 && (mem = tmalloc_large(gm, nb)) != 0) {
\r
4614 check_malloced_chunk(gm, mem, nb);
\r
4619 if (nb <= gm->dvsize) {
\r
4620 size_t rsize = gm->dvsize - nb;
\r
4621 mchunkptr p = gm->dv;
\r
4622 if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
\r
4623 mchunkptr r = gm->dv = chunk_plus_offset(p, nb);
\r
4624 gm->dvsize = rsize;
\r
4625 set_size_and_pinuse_of_free_chunk(r, rsize);
\r
4626 set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
\r
4628 else { /* exhaust dv */
\r
4629 size_t dvs = gm->dvsize;
\r
4632 set_inuse_and_pinuse(gm, p, dvs);
\r
4634 mem = chunk2mem(p);
\r
4635 check_malloced_chunk(gm, mem, nb);
\r
4639 else if (nb < gm->topsize) { /* Split top */
\r
4640 size_t rsize = gm->topsize -= nb;
\r
4641 mchunkptr p = gm->top;
\r
4642 mchunkptr r = gm->top = chunk_plus_offset(p, nb);
\r
4643 r->head = rsize | PINUSE_BIT;
\r
4644 set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
\r
4645 mem = chunk2mem(p);
\r
4646 check_top_chunk(gm, gm->top);
\r
4647 check_malloced_chunk(gm, mem, nb);
\r
4651 mem = sys_alloc(gm, nb);
\r
4661 /* ---------------------------- free --------------------------- */
\r
4663 void dlfree(void* mem) {
\r
4665 Consolidate freed chunks with preceding or succeeding bordering
\r
4666 free chunks, if they exist, and then place in a bin. Intermixed
\r
4667 with special cases for top, dv, mmapped chunks, and usage errors.
\r
4671 mchunkptr p = mem2chunk(mem);
\r
4673 mstate fm = get_mstate_for(p);
\r
4674 if (!ok_magic(fm)) {
\r
4675 USAGE_ERROR_ACTION(fm, p);
\r
4678 #else /* FOOTERS */
\r
4680 #endif /* FOOTERS */
\r
4681 if (!PREACTION(fm)) {
\r
4682 check_inuse_chunk(fm, p);
\r
4683 if (RTCHECK(ok_address(fm, p) && ok_inuse(p))) {
\r
4684 size_t psize = chunksize(p);
\r
4685 mchunkptr next = chunk_plus_offset(p, psize);
\r
4687 size_t prevsize = p->prev_foot;
\r
4688 if (is_mmapped(p)) {
\r
4689 psize += prevsize + MMAP_FOOT_PAD;
\r
4690 if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
\r
4691 fm->footprint -= psize;
\r
4695 mchunkptr prev = chunk_minus_offset(p, prevsize);
\r
4696 psize += prevsize;
\r
4698 if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
\r
4699 if (p != fm->dv) {
\r
4700 unlink_chunk(fm, p, prevsize);
\r
4702 else if ((next->head & INUSE_BITS) == INUSE_BITS) {
\r
4703 fm->dvsize = psize;
\r
4704 set_free_with_pinuse(p, psize, next);
\r
4713 if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
\r
4714 if (!cinuse(next)) { /* consolidate forward */
\r
4715 if (next == fm->top) {
\r
4716 size_t tsize = fm->topsize += psize;
\r
4718 p->head = tsize | PINUSE_BIT;
\r
4719 if (p == fm->dv) {
\r
4723 if (should_trim(fm, tsize))
\r
4727 else if (next == fm->dv) {
\r
4728 size_t dsize = fm->dvsize += psize;
\r
4730 set_size_and_pinuse_of_free_chunk(p, dsize);
\r
4734 size_t nsize = chunksize(next);
\r
4736 unlink_chunk(fm, next, nsize);
\r
4737 set_size_and_pinuse_of_free_chunk(p, psize);
\r
4738 if (p == fm->dv) {
\r
4739 fm->dvsize = psize;
\r
4745 set_free_with_pinuse(p, psize, next);
\r
4747 if (is_small(psize)) {
\r
4748 insert_small_chunk(fm, p, psize);
\r
4749 check_free_chunk(fm, p);
\r
4752 tchunkptr tp = (tchunkptr)p;
\r
4753 insert_large_chunk(fm, tp, psize);
\r
4754 check_free_chunk(fm, p);
\r
4755 if (--fm->release_checks == 0)
\r
4756 release_unused_segments(fm);
\r
4762 USAGE_ERROR_ACTION(fm, p);
\r
4769 #endif /* FOOTERS */
\r
4772 void* dlcalloc(size_t n_elements, size_t elem_size) {
\r
4775 if (n_elements != 0) {
\r
4776 req = n_elements * elem_size;
\r
4777 if (((n_elements | elem_size) & ~(size_t)0xffff) &&
\r
4778 (req / n_elements != elem_size))
\r
4779 req = MAX_SIZE_T; /* force downstream failure on overflow */
\r
4781 mem = dlmalloc(req);
\r
4782 if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
\r
4783 memset(mem, 0, req);
\r
4787 #endif /* !ONLY_MSPACES */
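/*
  Worked example of the overflow guard in calloc above (assuming 32-bit
  size_t): n_elements == 0x10000 and elem_size == 0x10000 wrap to
  req == 0; because an operand exceeds 16 bits the division check runs,
  0 / 0x10000 != 0x10000, and req is forced to MAX_SIZE_T, so the
  allocation fails rather than returning an undersized block.
*/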
\r
4789 /* ------------ Internal support for realloc, memalign, etc -------------- */
\r
4791 /* Try to realloc; only in-place unless can_move true */
\r
4792 static mchunkptr try_realloc_chunk(mstate m, mchunkptr p, size_t nb,
\r
4794 mchunkptr newp = 0;
\r
4795 size_t oldsize = chunksize(p);
\r
4796 mchunkptr next = chunk_plus_offset(p, oldsize);
\r
4797 if (RTCHECK(ok_address(m, p) && ok_inuse(p) &&
\r
4798 ok_next(p, next) && ok_pinuse(next))) {
\r
4799 if (is_mmapped(p)) {
\r
4800 newp = mmap_resize(m, p, nb, can_move);
\r
4802 else if (oldsize >= nb) { /* already big enough */
\r
4803 size_t rsize = oldsize - nb;
\r
4804 if (rsize >= MIN_CHUNK_SIZE) { /* split off remainder */
\r
4805 mchunkptr r = chunk_plus_offset(p, nb);
\r
4806 set_inuse(m, p, nb);
\r
4807 set_inuse(m, r, rsize);
\r
4808 dispose_chunk(m, r, rsize);
\r
4812 else if (next == m->top) { /* extend into top */
\r
4813 if (oldsize + m->topsize > nb) {
\r
4814 size_t newsize = oldsize + m->topsize;
\r
4815 size_t newtopsize = newsize - nb;
\r
4816 mchunkptr newtop = chunk_plus_offset(p, nb);
\r
4817 set_inuse(m, p, nb);
\r
4818 newtop->head = newtopsize |PINUSE_BIT;
\r
4820 m->topsize = newtopsize;
\r
4824 else if (next == m->dv) { /* extend into dv */
\r
4825 size_t dvs = m->dvsize;
\r
4826 if (oldsize + dvs >= nb) {
\r
4827 size_t dsize = oldsize + dvs - nb;
\r
4828 if (dsize >= MIN_CHUNK_SIZE) {
\r
4829 mchunkptr r = chunk_plus_offset(p, nb);
\r
4830 mchunkptr n = chunk_plus_offset(r, dsize);
\r
4831 set_inuse(m, p, nb);
\r
4832 set_size_and_pinuse_of_free_chunk(r, dsize);
\r
4834 m->dvsize = dsize;
\r
4837 else { /* exhaust dv */
\r
4838 size_t newsize = oldsize + dvs;
\r
4839 set_inuse(m, p, newsize);
\r
4846 else if (!cinuse(next)) { /* extend into next free chunk */
\r
4847 size_t nextsize = chunksize(next);
\r
4848 if (oldsize + nextsize >= nb) {
\r
4849 size_t rsize = oldsize + nextsize - nb;
\r
4850 unlink_chunk(m, next, nextsize);
\r
4851 if (rsize < MIN_CHUNK_SIZE) {
\r
4852 size_t newsize = oldsize + nextsize;
\r
4853 set_inuse(m, p, newsize);
\r
4856 mchunkptr r = chunk_plus_offset(p, nb);
\r
4857 set_inuse(m, p, nb);
\r
4858 set_inuse(m, r, rsize);
\r
4859 dispose_chunk(m, r, rsize);
\r
4866 USAGE_ERROR_ACTION(m, oldmem);
\r
4871 static void* internal_memalign(mstate m, size_t alignment, size_t bytes) {
\r
4873 if (alignment < MIN_CHUNK_SIZE) /* must be at least a minimum chunk size */
\r
4874 alignment = MIN_CHUNK_SIZE;
\r
4875 if ((alignment & (alignment-SIZE_T_ONE)) != 0) {/* Ensure a power of 2 */
\r
4876 size_t a = MALLOC_ALIGNMENT << 1;
\r
4877 while (a < alignment) a <<= 1;
\r
4880 if (bytes >= MAX_REQUEST - alignment) {
\r
4881 if (m != 0) { /* Test isn't needed but avoids compiler warning */
\r
4882 MALLOC_FAILURE_ACTION;
\r
4886 size_t nb = request2size(bytes);
\r
4887 size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD;
\r
4888 mem = internal_malloc(m, req);
\r
4890 mchunkptr p = mem2chunk(mem);
\r
4893 if ((((size_t)(mem)) & (alignment - 1)) != 0) { /* misaligned */
\r
4895 Find an aligned spot inside chunk. Since we need to give
\r
4896 back leading space in a chunk of at least MIN_CHUNK_SIZE, if
\r
4897 the first calculation places us at a spot with less than
\r
4898 MIN_CHUNK_SIZE leader, we can move to the next aligned spot.
\r
4899 We've allocated enough total room so that this is always
\r
4902 char* br = (char*)mem2chunk((size_t)(((size_t)((char*)mem + alignment -
\r
4905 char* pos = ((size_t)(br - (char*)(p)) >= MIN_CHUNK_SIZE)?
\r
4906 br : br+alignment;
\r
4907 mchunkptr newp = (mchunkptr)pos;
\r
4908 size_t leadsize = pos - (char*)(p);
\r
4909 size_t newsize = chunksize(p) - leadsize;
\r
4911 if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */
\r
4912 newp->prev_foot = p->prev_foot + leadsize;
\r
4913 newp->head = newsize;
\r
4915 else { /* Otherwise, give back leader, use the rest */
\r
4916 set_inuse(m, newp, newsize);
\r
4917 set_inuse(m, p, leadsize);
\r
4918 dispose_chunk(m, p, leadsize);
\r
4923 /* Give back spare room at the end */
\r
4924 if (!is_mmapped(p)) {
\r
4925 size_t size = chunksize(p);
\r
4926 if (size > nb + MIN_CHUNK_SIZE) {
\r
4927 size_t remainder_size = size - nb;
\r
4928 mchunkptr remainder = chunk_plus_offset(p, nb);
\r
4929 set_inuse(m, p, nb);
\r
4930 set_inuse(m, remainder, remainder_size);
\r
4931 dispose_chunk(m, remainder, remainder_size);
\r
4935 mem = chunk2mem(p);
\r
4936 assert (chunksize(p) >= nb);
\r
4937 assert(((size_t)mem & (alignment - 1)) == 0);
\r
4938 check_inuse_chunk(m, p);
\r
4946 Common support for independent_X routines, handling
\r
4947 all of the combinations that can result.
\r
4949 bit 0 set if all elements are same size (using sizes[0])
\r
4950 bit 1 set if elements should be zeroed
\r
4952 static void** ialloc(mstate m,
\r
4953 size_t n_elements,
\r
4958 size_t element_size; /* chunksize of each element, if all same */
\r
4959 size_t contents_size; /* total size of elements */
\r
4960 size_t array_size; /* request size of pointer array */
\r
4961 void* mem; /* malloced aggregate space */
\r
4962 mchunkptr p; /* corresponding chunk */
\r
4963 size_t remainder_size; /* remaining bytes while splitting */
\r
4964 void** marray; /* either "chunks" or malloced ptr array */
\r
4965 mchunkptr array_chunk; /* chunk for malloced ptr array */
\r
4966 flag_t was_enabled; /* to disable mmap */
\r
4970 ensure_initialization();
\r
4971 /* compute array length, if needed */
\r
4972 if (chunks != 0) {
\r
4973 if (n_elements == 0)
\r
4974 return chunks; /* nothing to do */
\r
4979 /* if empty req, must still return chunk representing empty array */
\r
4980 if (n_elements == 0)
\r
4981 return (void**)internal_malloc(m, 0);
\r
4983 array_size = request2size(n_elements * (sizeof(void*)));
\r
4986 /* compute total element size */
\r
4987 if (opts & 0x1) { /* all-same-size */
\r
4988 element_size = request2size(*sizes);
\r
4989 contents_size = n_elements * element_size;
\r
4991 else { /* add up all the sizes */
\r
4993 contents_size = 0;
\r
4994 for (i = 0; i != n_elements; ++i)
\r
4995 contents_size += request2size(sizes[i]);
\r
4998 size = contents_size + array_size;
\r
5001 Allocate the aggregate chunk. First disable direct-mmapping so
\r
5002 malloc won't use it, since we would not be able to later
\r
5003 free/realloc space internal to a segregated mmap region.
\r
5005 was_enabled = use_mmap(m);
\r
5007 mem = internal_malloc(m, size - CHUNK_OVERHEAD);
\r
5013 if (PREACTION(m)) return 0;
\r
5014 p = mem2chunk(mem);
\r
5015 remainder_size = chunksize(p);
\r
5017 assert(!is_mmapped(p));
\r
5019 if (opts & 0x2) { /* optionally clear the elements */
\r
5020 memset((size_t*)mem, 0, remainder_size - SIZE_T_SIZE - array_size);
\r
5023 /* If not provided, allocate the pointer array as final part of chunk */
\r
5024 if (marray == 0) {
\r
5025 size_t array_chunk_size;
\r
5026 array_chunk = chunk_plus_offset(p, contents_size);
\r
5027 array_chunk_size = remainder_size - contents_size;
\r
5028 marray = (void**) (chunk2mem(array_chunk));
\r
5029 set_size_and_pinuse_of_inuse_chunk(m, array_chunk, array_chunk_size);
\r
5030 remainder_size = contents_size;
\r
5033 /* split out elements */
\r
5034 for (i = 0; ; ++i) {
\r
5035 marray[i] = chunk2mem(p);
\r
5036 if (i != n_elements-1) {
\r
5037 if (element_size != 0)
\r
5038 size = element_size;
\r
5040 size = request2size(sizes[i]);
\r
5041 remainder_size -= size;
\r
5042 set_size_and_pinuse_of_inuse_chunk(m, p, size);
\r
5043 p = chunk_plus_offset(p, size);
\r
5045 else { /* the final element absorbs any overallocation slop */
\r
5046 set_size_and_pinuse_of_inuse_chunk(m, p, remainder_size);
\r
5052 if (marray != chunks) {
\r
5053 /* final element must have exactly exhausted chunk */
\r
5054 if (element_size != 0) {
\r
5055 assert(remainder_size == element_size);
\r
5058 assert(remainder_size == request2size(sizes[i]));
\r
5060 check_inuse_chunk(m, mem2chunk(marray));
\r
5062 for (i = 0; i != n_elements; ++i)
\r
5063 check_inuse_chunk(m, mem2chunk(marray[i]));
\r
5065 #endif /* DEBUG */
\r
/* Try to free all pointers in the given array.
   Note: this could be made faster by delaying consolidation,
   at the price of disabling some user integrity checks. We
   still optimize some consolidations by combining adjacent
   chunks before freeing, which will occur often if allocated
   with ialloc or the array is sorted.
\r
5078 static size_t internal_bulk_free(mstate m, void* array[], size_t nelem) {
\r
5079 size_t unfreed = 0;
\r
5080 if (!PREACTION(m)) {
\r
5082 void** fence = &(array[nelem]);
\r
5083 for (a = array; a != fence; ++a) {
\r
5086 mchunkptr p = mem2chunk(mem);
\r
5087 size_t psize = chunksize(p);
\r
5089 if (get_mstate_for(p) != m) {
\r
5094 check_inuse_chunk(m, p);
\r
5096 if (RTCHECK(ok_address(m, p) && ok_inuse(p))) {
\r
5097 void ** b = a + 1; /* try to merge with next chunk */
\r
5098 mchunkptr next = next_chunk(p);
\r
5099 if (b != fence && *b == chunk2mem(next)) {
\r
5100 size_t newsize = chunksize(next) + psize;
\r
5101 set_inuse(m, p, newsize);
\r
5102 *b = chunk2mem(p);
\r
5105 dispose_chunk(m, p, psize);
\r
5108 CORRUPTION_ERROR_ACTION(m);
\r
5113 if (should_trim(m, m->topsize))
\r
5121 #if MALLOC_INSPECT_ALL
\r
5122 static void internal_inspect_all(mstate m,
\r
5123 void(*handler)(void *start,
\r
5125 size_t used_bytes,
\r
5126 void* callback_arg),
\r
5128 if (is_initialized(m)) {
\r
5129 mchunkptr top = m->top;
\r
5131 for (s = &m->seg; s != 0; s = s->next) {
\r
5132 mchunkptr q = align_as_chunk(s->base);
\r
5133 while (segment_holds(s, q) && q->head != FENCEPOST_HEAD) {
\r
5134 mchunkptr next = next_chunk(q);
\r
5135 size_t sz = chunksize(q);
\r
5138 if (is_inuse(q)) {
\r
5139 used = sz - CHUNK_OVERHEAD; /* must not be mmapped */
\r
5140 start = chunk2mem(q);
\r
5144 if (is_small(sz)) { /* offset by possible bookkeeping */
\r
5145 start = (void*)((char*)q + sizeof(malloc_chunk));
\r
5148 start = (void*)((char*)q + sizeof(malloc_tree_chunk));
\r
5151 if (start < (void*)next) /* skip if all space is bookkeeping */
\r
5152 handler(start, next, used, arg);
\r
5160 #endif /* MALLOC_INSPECT_ALL */
\r
5162 /* ------------------ Exported realloc, memalign, etc -------------------- */
\r
5166 void* dlrealloc(void* oldmem, size_t bytes) {
\r
5168 if (oldmem == 0) {
\r
5169 mem = dlmalloc(bytes);
\r
5171 else if (bytes >= MAX_REQUEST) {
\r
5172 MALLOC_FAILURE_ACTION;
\r
5174 #ifdef REALLOC_ZERO_BYTES_FREES
\r
5175 else if (bytes == 0) {
\r
5178 #endif /* REALLOC_ZERO_BYTES_FREES */
\r
5180 size_t nb = request2size(bytes);
\r
5181 mchunkptr oldp = mem2chunk(oldmem);
\r
5184 #else /* FOOTERS */
\r
5185 mstate m = get_mstate_for(oldp);
\r
5186 if (!ok_magic(m)) {
\r
5187 USAGE_ERROR_ACTION(m, oldmem);
\r
5190 #endif /* FOOTERS */
\r
5191 if (!PREACTION(m)) {
\r
5192 mchunkptr newp = try_realloc_chunk(m, oldp, nb, 1);
\r
5195 check_inuse_chunk(m, newp);
\r
5196 mem = chunk2mem(newp);
\r
5199 mem = internal_malloc(m, bytes);
\r
5201 size_t oc = chunksize(oldp) - overhead_for(oldp);
\r
5202 memcpy(mem, oldmem, (oc < bytes)? oc : bytes);
\r
5203 internal_free(m, oldmem);
\r
5211 void* dlrealloc_in_place(void* oldmem, size_t bytes) {
\r
5213 if (oldmem != 0) {
\r
5214 if (bytes >= MAX_REQUEST) {
\r
5215 MALLOC_FAILURE_ACTION;
\r
5218 size_t nb = request2size(bytes);
\r
5219 mchunkptr oldp = mem2chunk(oldmem);
\r
5222 #else /* FOOTERS */
\r
5223 mstate m = get_mstate_for(oldp);
\r
5224 if (!ok_magic(m)) {
\r
5225 USAGE_ERROR_ACTION(m, oldmem);
\r
5228 #endif /* FOOTERS */
\r
5229 if (!PREACTION(m)) {
\r
5230 mchunkptr newp = try_realloc_chunk(m, oldp, nb, 0);
\r
5232 if (newp == oldp) {
\r
5233 check_inuse_chunk(m, newp);
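/*
  Usage sketch for dlrealloc_in_place (illustrative only; grow_without_moving
  is a hypothetical caller, not part of this allocator). On success the same
  pointer comes back; a null return means the chunk could not be resized
  without moving, and the original block is left untouched:

    void grow_without_moving(void) {
      char* buf = (char*)dlmalloc(100);
      if (buf != 0) {
        if (dlrealloc_in_place(buf, 200) != 0) {
          // buf now offers at least 200 usable bytes at the same address
        }
        dlfree(buf);
      }
    }
*/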
\r
5242 void* dlmemalign(size_t alignment, size_t bytes) {
\r
5243 if (alignment <= MALLOC_ALIGNMENT) {
\r
5244 return dlmalloc(bytes);
\r
5246 return internal_memalign(gm, alignment, bytes);
\r
5249 int dlposix_memalign(void** pp, size_t alignment, size_t bytes) {
\r
5251 if (alignment == MALLOC_ALIGNMENT)
\r
5252 mem = dlmalloc(bytes);
\r
5254 size_t d = alignment / sizeof(void*);
\r
5255 size_t r = alignment % sizeof(void*);
\r
5256 if (r != 0 || d == 0 || (d & (d-SIZE_T_ONE)) != 0)
\r
5258 else if (bytes >= MAX_REQUEST - alignment) {
\r
5259 if (alignment < MIN_CHUNK_SIZE)
\r
5260 alignment = MIN_CHUNK_SIZE;
\r
5261 mem = internal_memalign(gm, alignment, bytes);
\r
5272 void* dlvalloc(size_t bytes) {
\r
5274 ensure_initialization();
\r
5275 pagesz = mparams.page_size;
\r
5276 return dlmemalign(pagesz, bytes);
\r
5279 void* dlpvalloc(size_t bytes) {
\r
5281 ensure_initialization();
\r
5282 pagesz = mparams.page_size;
\r
5283 return dlmemalign(pagesz, (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE));
\r
5286 void** dlindependent_calloc(size_t n_elements, size_t elem_size,
\r
5288 size_t sz = elem_size; /* serves as 1-element array */
\r
5289 return ialloc(gm, n_elements, &sz, 3, chunks);
\r
5292 void** dlindependent_comalloc(size_t n_elements, size_t sizes[],
\r
5294 return ialloc(gm, n_elements, sizes, 0, chunks);
\r
5297 size_t dlbulk_free(void* array[], size_t nelem) {
\r
5298 return internal_bulk_free(gm, array, nelem);
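/*
  Usage sketch for the wrappers above (illustrative only; comalloc_demo is a
  hypothetical caller). dlindependent_comalloc carves all elements out of one
  aggregate chunk, so a later dlbulk_free of the same pointers can merge
  adjacent chunks cheaply, as noted in the internal_bulk_free comment:

    #include <string.h>

    void comalloc_demo(void) {
      size_t sizes[3] = { 16, 48, 80 };
      void*  ptrs[3];
      if (dlindependent_comalloc(3, sizes, ptrs) != 0) {
        memset(ptrs[1], 0, 48);                 // each ptrs[i] has at least sizes[i] usable bytes
        size_t unfreed = dlbulk_free(ptrs, 3);  // returns the count of pointers not freed
        (void)unfreed;
      }
    }
*/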
\r
5301 #if MALLOC_INSPECT_ALL
\r
5302 void dlmalloc_inspect_all(void(*handler)(void *start,
\r
5304 size_t used_bytes,
\r
5305 void* callback_arg),
\r
5307 ensure_initialization();
\r
5308 if (!PREACTION(gm)) {
\r
5309 internal_inspect_all(gm, handler, arg);
\r
5313 #endif /* MALLOC_INSPECT_ALL */
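/*
  Sketch of an inspect_all callback (only available when MALLOC_INSPECT_ALL is
  defined; count_used and report_heap_usage are hypothetical names). This
  assumes the elided second handler parameter is the region end pointer, as
  suggested by the handler(start, next, used, arg) call in
  internal_inspect_all, and that free regions report used_bytes of 0:

    #include <stdio.h>

    static void count_used(void* start, void* end, size_t used_bytes, void* arg) {
      size_t* total = (size_t*)arg;       // accumulate in-use payload bytes
      *total += used_bytes;
      (void)start; (void)end;
    }

    void report_heap_usage(void) {
      size_t total = 0;
      dlmalloc_inspect_all(count_used, &total);
      printf("in-use payload: %zu bytes\n", total);
    }
*/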
\r
5315 int dlmalloc_trim(size_t pad) {
\r
5317 ensure_initialization();
\r
5318 if (!PREACTION(gm)) {
\r
5319 result = sys_trim(gm, pad);
\r
5325 size_t dlmalloc_footprint(void) {
\r
5326 return gm->footprint;
\r
5329 size_t dlmalloc_max_footprint(void) {
\r
5330 return gm->max_footprint;
\r
5333 size_t dlmalloc_footprint_limit(void) {
\r
5334 size_t maf = gm->footprint_limit;
\r
5335 return maf == 0 ? MAX_SIZE_T : maf;
\r
5338 size_t dlmalloc_set_footprint_limit(size_t bytes) {
\r
5339 size_t result; /* invert sense of 0 */
\r
5341 result = granularity_align(1); /* Use minimal size */
\r
5342 if (bytes == MAX_SIZE_T)
\r
5343 result = 0; /* disable */
\r
5345 result = granularity_align(bytes);
\r
5346 return gm->footprint_limit = result;
\r
5350 struct mallinfo dlmallinfo(void) {
\r
5351 return internal_mallinfo(gm);
\r
5353 #endif /* NO_MALLINFO */
\r
5355 #if !NO_MALLOC_STATS
\r
5356 void dlmalloc_stats() {
\r
5357 internal_malloc_stats(gm);
\r
5359 #endif /* NO_MALLOC_STATS */
\r
5361 int dlmallopt(int param_number, int value) {
\r
5362 return change_mparam(param_number, value);
\r
5365 size_t dlmalloc_usable_size(void* mem) {
\r
5367 mchunkptr p = mem2chunk(mem);
\r
5369 return chunksize(p) - overhead_for(p);
\r
5374 #endif /* !ONLY_MSPACES */
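/*
  Footprint-limit sketch (illustrative only; cap_heap_at_4mb is a hypothetical
  caller). As implemented above, dlmalloc_set_footprint_limit rounds the
  request up to a granularity multiple and stores it; passing 0 selects the
  minimal granularity, MAX_SIZE_T disables the limit, and the getter reports
  MAX_SIZE_T when no limit is set:

    #include <stdio.h>

    void cap_heap_at_4mb(void) {
      size_t granted = dlmalloc_set_footprint_limit((size_t)4 * 1024 * 1024);
      printf("limit granted: %zu, current footprint: %zu\n",
             granted, dlmalloc_footprint());
    }
*/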
\r
5376 /* ----------------------------- user mspaces ---------------------------- */
\r
5380 static mstate init_user_mstate(char* tbase, size_t tsize) {
\r
5381 size_t msize = pad_request(sizeof(struct malloc_state));
\r
5383 mchunkptr msp = align_as_chunk(tbase);
\r
5384 mstate m = (mstate)(chunk2mem(msp));
\r
5385 memset(m, 0, msize);
\r
5386 (void)INITIAL_LOCK(&m->mutex);
\r
5387 msp->head = (msize|INUSE_BITS);
\r
5388 m->seg.base = m->least_addr = tbase;
\r
5389 m->seg.size = m->footprint = m->max_footprint = tsize;
\r
5390 m->magic = mparams.magic;
\r
5391 m->release_checks = MAX_RELEASE_CHECK_RATE;
\r
5392 m->mflags = mparams.default_mflags;
\r
5395 disable_contiguous(m);
\r
5397 mn = next_chunk(mem2chunk(m));
\r
5398 init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE);
\r
5399 check_top_chunk(m, m->top);
\r
5403 mspace create_mspace(size_t capacity, int locked) {
\r
5406 ensure_initialization();
\r
5407 msize = pad_request(sizeof(struct malloc_state));
\r
5408 if (capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
\r
5409 size_t rs = ((capacity == 0)? mparams.granularity :
\r
5410 (capacity + TOP_FOOT_SIZE + msize));
\r
5411 size_t tsize = granularity_align(rs);
\r
5412 char* tbase = (char*)(CALL_MMAP(tsize));
\r
5413 if (tbase != CMFAIL) {
\r
5414 m = init_user_mstate(tbase, tsize);
\r
5415 m->seg.sflags = USE_MMAP_BIT;
\r
5416 set_lock(m, locked);
\r
5422 mspace create_mspace_with_base(void* base, size_t capacity, int locked) {
\r
5425 ensure_initialization();
\r
5426 msize = pad_request(sizeof(struct malloc_state));
\r
5427 if (capacity > msize + TOP_FOOT_SIZE &&
\r
5428 capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
\r
5429 m = init_user_mstate((char*)base, capacity);
\r
5430 m->seg.sflags = EXTERN_BIT;
\r
5431 set_lock(m, locked);
\r
5436 int mspace_track_large_chunks(mspace msp, int enable) {
\r
5438 mstate ms = (mstate)msp;
\r
5439 if (!PREACTION(ms)) {
\r
5440 if (!use_mmap(ms))
\r
5451 size_t destroy_mspace(mspace msp) {
\r
5453 mstate ms = (mstate)msp;
\r
5454 if (ok_magic(ms)) {
\r
5455 msegmentptr sp = &ms->seg;
\r
5456 (void)DESTROY_LOCK(&ms->mutex); /* destroy before unmapped */
\r
5458 char* base = sp->base;
\r
5459 size_t size = sp->size;
\r
5460 flag_t flag = sp->sflags;
\r
5462 if ((flag & USE_MMAP_BIT) && !(flag & EXTERN_BIT) &&
\r
5463 CALL_MUNMAP(base, size) == 0)
\r
5468 USAGE_ERROR_ACTION(ms,ms);
\r
5474 mspace versions of routines are near-clones of the global
\r
5475 versions. This is not so nice but better than the alternatives.
\r
5478 void* mspace_malloc(mspace msp, size_t bytes) {
\r
5479 mstate ms = (mstate)msp;
\r
5480 if (!ok_magic(ms)) {
\r
5481 USAGE_ERROR_ACTION(ms,ms);
\r
5484 if (!PREACTION(ms)) {
\r
5487 if (bytes <= MAX_SMALL_REQUEST) {
\r
5489 binmap_t smallbits;
\r
5490 nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
\r
5491 idx = small_index(nb);
\r
5492 smallbits = ms->smallmap >> idx;
\r
5494 if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
\r
5496 idx += ~smallbits & 1; /* Uses next bin if idx empty */
\r
5497 b = smallbin_at(ms, idx);
\r
5499 assert(chunksize(p) == small_index2size(idx));
\r
5500 unlink_first_small_chunk(ms, b, p, idx);
\r
5501 set_inuse_and_pinuse(ms, p, small_index2size(idx));
\r
5502 mem = chunk2mem(p);
\r
5503 check_malloced_chunk(ms, mem, nb);
\r
5507 else if (nb > ms->dvsize) {
\r
5508 if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
\r
5509 mchunkptr b, p, r;
\r
5512 binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
\r
5513 binmap_t leastbit = least_bit(leftbits);
\r
5514 compute_bit2idx(leastbit, i);
\r
5515 b = smallbin_at(ms, i);
\r
5517 assert(chunksize(p) == small_index2size(i));
\r
5518 unlink_first_small_chunk(ms, b, p, i);
\r
5519 rsize = small_index2size(i) - nb;
\r
5520 /* Fit here cannot be remainderless if 4byte sizes */
\r
5521 if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
\r
5522 set_inuse_and_pinuse(ms, p, small_index2size(i));
\r
5524 set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
\r
5525 r = chunk_plus_offset(p, nb);
\r
5526 set_size_and_pinuse_of_free_chunk(r, rsize);
\r
5527 replace_dv(ms, r, rsize);
\r
5529 mem = chunk2mem(p);
\r
5530 check_malloced_chunk(ms, mem, nb);
\r
5534 else if (ms->treemap != 0 && (mem = tmalloc_small(ms, nb)) != 0) {
\r
5535 check_malloced_chunk(ms, mem, nb);
\r
5540 else if (bytes >= MAX_REQUEST)
\r
5541 nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
\r
5543 nb = pad_request(bytes);
\r
5544 if (ms->treemap != 0 && (mem = tmalloc_large(ms, nb)) != 0) {
\r
5545 check_malloced_chunk(ms, mem, nb);
\r
5550 if (nb <= ms->dvsize) {
\r
5551 size_t rsize = ms->dvsize - nb;
\r
5552 mchunkptr p = ms->dv;
\r
5553 if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
\r
5554 mchunkptr r = ms->dv = chunk_plus_offset(p, nb);
\r
5555 ms->dvsize = rsize;
\r
5556 set_size_and_pinuse_of_free_chunk(r, rsize);
\r
5557 set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
\r
5559 else { /* exhaust dv */
\r
5560 size_t dvs = ms->dvsize;
\r
5563 set_inuse_and_pinuse(ms, p, dvs);
\r
5565 mem = chunk2mem(p);
\r
5566 check_malloced_chunk(ms, mem, nb);
\r
5570 else if (nb < ms->topsize) { /* Split top */
\r
5571 size_t rsize = ms->topsize -= nb;
\r
5572 mchunkptr p = ms->top;
\r
5573 mchunkptr r = ms->top = chunk_plus_offset(p, nb);
\r
5574 r->head = rsize | PINUSE_BIT;
\r
5575 set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
\r
5576 mem = chunk2mem(p);
\r
5577 check_top_chunk(ms, ms->top);
\r
5578 check_malloced_chunk(ms, mem, nb);
\r
5582 mem = sys_alloc(ms, nb);
\r
5592 void mspace_free(mspace msp, void* mem) {
\r
5594 mchunkptr p = mem2chunk(mem);
\r
5596 mstate fm = get_mstate_for(p);
\r
5597 msp = msp; /* placate people compiling -Wunused */
\r
5598 #else /* FOOTERS */
\r
5599 mstate fm = (mstate)msp;
\r
5600 #endif /* FOOTERS */
\r
5601 if (!ok_magic(fm)) {
\r
5602 USAGE_ERROR_ACTION(fm, p);
\r
5605 if (!PREACTION(fm)) {
\r
5606 check_inuse_chunk(fm, p);
\r
5607 if (RTCHECK(ok_address(fm, p) && ok_inuse(p))) {
\r
5608 size_t psize = chunksize(p);
\r
5609 mchunkptr next = chunk_plus_offset(p, psize);
\r
5611 size_t prevsize = p->prev_foot;
\r
5612 if (is_mmapped(p)) {
\r
5613 psize += prevsize + MMAP_FOOT_PAD;
\r
5614 if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
\r
5615 fm->footprint -= psize;
\r
5619 mchunkptr prev = chunk_minus_offset(p, prevsize);
\r
5620 psize += prevsize;
\r
5622 if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
\r
5623 if (p != fm->dv) {
\r
5624 unlink_chunk(fm, p, prevsize);
\r
5626 else if ((next->head & INUSE_BITS) == INUSE_BITS) {
\r
5627 fm->dvsize = psize;
\r
5628 set_free_with_pinuse(p, psize, next);
\r
5637 if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
\r
5638 if (!cinuse(next)) { /* consolidate forward */
\r
5639 if (next == fm->top) {
\r
5640 size_t tsize = fm->topsize += psize;
\r
5642 p->head = tsize | PINUSE_BIT;
\r
5643 if (p == fm->dv) {
\r
5647 if (should_trim(fm, tsize))
\r
5651 else if (next == fm->dv) {
\r
5652 size_t dsize = fm->dvsize += psize;
\r
5654 set_size_and_pinuse_of_free_chunk(p, dsize);
\r
5658 size_t nsize = chunksize(next);
\r
5660 unlink_chunk(fm, next, nsize);
\r
5661 set_size_and_pinuse_of_free_chunk(p, psize);
\r
5662 if (p == fm->dv) {
\r
5663 fm->dvsize = psize;
\r
5669 set_free_with_pinuse(p, psize, next);
\r
5671 if (is_small(psize)) {
\r
5672 insert_small_chunk(fm, p, psize);
\r
5673 check_free_chunk(fm, p);
\r
5676 tchunkptr tp = (tchunkptr)p;
\r
5677 insert_large_chunk(fm, tp, psize);
\r
5678 check_free_chunk(fm, p);
\r
5679 if (--fm->release_checks == 0)
\r
5680 release_unused_segments(fm);
\r
5686 USAGE_ERROR_ACTION(fm, p);
\r
5693 void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size) {
\r
5696 mstate ms = (mstate)msp;
\r
5697 if (!ok_magic(ms)) {
\r
5698 USAGE_ERROR_ACTION(ms,ms);
\r
5701 if (n_elements != 0) {
\r
5702 req = n_elements * elem_size;
\r
5703 if (((n_elements | elem_size) & ~(size_t)0xffff) &&
\r
5704 (req / n_elements != elem_size))
\r
5705 req = MAX_SIZE_T; /* force downstream failure on overflow */
\r
5707 mem = internal_malloc(ms, req);
\r
5708 if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
\r
5709 memset(mem, 0, req);
\r
5713 void* mspace_realloc(mspace msp, void* oldmem, size_t bytes) {
\r
5715 if (oldmem == 0) {
\r
5716 mem = mspace_malloc(msp, bytes);
\r
5718 else if (bytes >= MAX_REQUEST) {
\r
5719 MALLOC_FAILURE_ACTION;
\r
5721 #ifdef REALLOC_ZERO_BYTES_FREES
\r
5722 else if (bytes == 0) {
\r
5723 mspace_free(msp, oldmem);
\r
5725 #endif /* REALLOC_ZERO_BYTES_FREES */
\r
5727 size_t nb = request2size(bytes);
\r
5728 mchunkptr oldp = mem2chunk(oldmem);
\r
5730 mstate m = (mstate)msp;
\r
5731 #else /* FOOTERS */
\r
5732 mstate m = get_mstate_for(oldp);
\r
5733 if (!ok_magic(m)) {
\r
5734 USAGE_ERROR_ACTION(m, oldmem);
\r
5737 #endif /* FOOTERS */
\r
5738 if (!PREACTION(m)) {
\r
5739 mchunkptr newp = try_realloc_chunk(m, oldp, nb, 1);
\r
5742 check_inuse_chunk(m, newp);
\r
5743 mem = chunk2mem(newp);
\r
5746 mem = mspace_malloc(m, bytes);
\r
5748 size_t oc = chunksize(oldp) - overhead_for(oldp);
\r
5749 memcpy(mem, oldmem, (oc < bytes)? oc : bytes);
\r
5750 mspace_free(m, oldmem);
\r
5758 void* mspace_realloc_in_place(mspace msp, void* oldmem, size_t bytes) {
\r
5760 if (oldmem != 0) {
\r
5761 if (bytes >= MAX_REQUEST) {
\r
5762 MALLOC_FAILURE_ACTION;
\r
5765 size_t nb = request2size(bytes);
\r
5766 mchunkptr oldp = mem2chunk(oldmem);
\r
5768 mstate m = (mstate)msp;
\r
5769 #else /* FOOTERS */
\r
5770 mstate m = get_mstate_for(oldp);
\r
5771 msp = msp; /* placate people compiling -Wunused */
\r
5772 if (!ok_magic(m)) {
\r
5773 USAGE_ERROR_ACTION(m, oldmem);
\r
5776 #endif /* FOOTERS */
\r
5777 if (!PREACTION(m)) {
\r
5778 mchunkptr newp = try_realloc_chunk(m, oldp, nb, 0);
\r
5780 if (newp == oldp) {
\r
5781 check_inuse_chunk(m, newp);
\r
5790 void* mspace_memalign(mspace msp, size_t alignment, size_t bytes) {
\r
5791 mstate ms = (mstate)msp;
\r
5792 if (!ok_magic(ms)) {
\r
5793 USAGE_ERROR_ACTION(ms,ms);
\r
5796 if (alignment <= MALLOC_ALIGNMENT)
\r
5797 return mspace_malloc(msp, bytes);
\r
5798 return internal_memalign(ms, alignment, bytes);
\r
5801 void** mspace_independent_calloc(mspace msp, size_t n_elements,
\r
5802 size_t elem_size, void* chunks[]) {
\r
5803 size_t sz = elem_size; /* serves as 1-element array */
\r
5804 mstate ms = (mstate)msp;
\r
5805 if (!ok_magic(ms)) {
\r
5806 USAGE_ERROR_ACTION(ms,ms);
\r
5809 return ialloc(ms, n_elements, &sz, 3, chunks);
\r
5812 void** mspace_independent_comalloc(mspace msp, size_t n_elements,
\r
5813 size_t sizes[], void* chunks[]) {
\r
5814 mstate ms = (mstate)msp;
\r
5815 if (!ok_magic(ms)) {
\r
5816 USAGE_ERROR_ACTION(ms,ms);
\r
5819 return ialloc(ms, n_elements, sizes, 0, chunks);
\r
5822 size_t mspace_bulk_free(mspace msp, void* array[], size_t nelem) {
\r
5823 return internal_bulk_free((mstate)msp, array, nelem);
\r
5826 #if MALLOC_INSPECT_ALL
\r
5827 void mspace_inspect_all(mspace msp,
\r
5828 void(*handler)(void *start,
\r
5830 size_t used_bytes,
\r
5831 void* callback_arg),
\r
5833 mstate ms = (mstate)msp;
\r
5834 if (ok_magic(ms)) {
\r
5835 if (!PREACTION(ms)) {
\r
5836 internal_inspect_all(ms, handler, arg);
\r
5841 USAGE_ERROR_ACTION(ms,ms);
\r
5844 #endif /* MALLOC_INSPECT_ALL */
\r
5846 int mspace_trim(mspace msp, size_t pad) {
\r
5848 mstate ms = (mstate)msp;
\r
5849 if (ok_magic(ms)) {
\r
5850 if (!PREACTION(ms)) {
\r
5851 result = sys_trim(ms, pad);
\r
5856 USAGE_ERROR_ACTION(ms,ms);
\r
5861 #if !NO_MALLOC_STATS
\r
5862 void mspace_malloc_stats(mspace msp) {
\r
5863 mstate ms = (mstate)msp;
\r
5864 if (ok_magic(ms)) {
\r
5865 internal_malloc_stats(ms);
\r
5868 USAGE_ERROR_ACTION(ms,ms);
\r
5871 #endif /* NO_MALLOC_STATS */
\r
5873 size_t mspace_footprint(mspace msp) {
\r
5874 size_t result = 0;
\r
5875 mstate ms = (mstate)msp;
\r
5876 if (ok_magic(ms)) {
\r
5877 result = ms->footprint;
\r
5880 USAGE_ERROR_ACTION(ms,ms);
\r
5885 size_t mspace_max_footprint(mspace msp) {
\r
5886 size_t result = 0;
\r
5887 mstate ms = (mstate)msp;
\r
5888 if (ok_magic(ms)) {
\r
5889 result = ms->max_footprint;
\r
5892 USAGE_ERROR_ACTION(ms,ms);
\r
5897 size_t mspace_footprint_limit(mspace msp) {
\r
5898 size_t result = 0;
\r
5899 mstate ms = (mstate)msp;
\r
5900 if (ok_magic(ms)) {
\r
5901 size_t maf = ms->footprint_limit;
\r
5902 result = (maf == 0) ? MAX_SIZE_T : maf;
\r
5905 USAGE_ERROR_ACTION(ms,ms);
\r
5910 size_t mspace_set_footprint_limit(mspace msp, size_t bytes) {
\r
5911 size_t result = 0;
\r
5912 mstate ms = (mstate)msp;
\r
5913 if (ok_magic(ms)) {
\r
5915 result = granularity_align(1); /* Use minimal size */
\r
5916 if (bytes == MAX_SIZE_T)
\r
5917 result = 0; /* disable */
\r
5919 result = granularity_align(bytes);
\r
5920 ms->footprint_limit = result;
\r
5923 USAGE_ERROR_ACTION(ms,ms);
\r
5929 struct mallinfo mspace_mallinfo(mspace msp) {
\r
5930 mstate ms = (mstate)msp;
\r
5931 if (!ok_magic(ms)) {
\r
5932 USAGE_ERROR_ACTION(ms,ms);
\r
5934 return internal_mallinfo(ms);
\r
5936 #endif /* NO_MALLINFO */
\r
5938 size_t mspace_usable_size(void* mem) {
\r
5940 mchunkptr p = mem2chunk(mem);
\r
5942 return chunksize(p) - overhead_for(p);
\r
5947 int mspace_mallopt(int param_number, int value) {
\r
5948 return change_mparam(param_number, value);
\r
5951 #endif /* MSPACES */
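/*
  Usage sketch for the mspace API above (compiled only when MSPACES is
  nonzero; mspace_demo is a hypothetical caller and error handling is kept
  minimal). Each mspace is an independent heap, so destroying it releases
  everything still allocated from it:

    void mspace_demo(void) {
      mspace msp = create_mspace(0, 0);   // capacity 0 => default granularity, unlocked
      if (msp != 0) {
        void* a = mspace_malloc(msp, 128);
        void* b = mspace_calloc(msp, 10, sizeof(int));
        mspace_free(msp, a);
        (void)b;                          // b is reclaimed below with the whole space
        size_t released = destroy_mspace(msp);
        (void)released;                   // bytes returned to the system
      }
    }
*/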
\r
5954 /* -------------------- Alternative MORECORE functions ------------------- */
\r
5957 Guidelines for creating a custom version of MORECORE:
\r
5959 * For best performance, MORECORE should allocate in multiples of pagesize.
\r
5960 * MORECORE may allocate more memory than requested. (Or even less,
\r
5961 but this will usually result in a malloc failure.)
\r
5962 * MORECORE must not allocate memory when given argument zero, but
\r
5963 instead return one past the end address of memory from previous
\r
5965 * For best performance, consecutive calls to MORECORE with positive
\r
5966 arguments should return increasing addresses, indicating that
\r
5967 space has been contiguously extended.
\r
5968 * Even though consecutive calls to MORECORE need not return contiguous
\r
5969 addresses, it must be OK for malloc'ed chunks to span multiple
\r
5970 regions in those cases where they do happen to be contiguous.
\r
5971 * MORECORE need not handle negative arguments -- it may instead
\r
5972 just return MFAIL when given negative arguments.
\r
5973 Negative arguments are always multiples of pagesize. MORECORE
\r
5974 must not misinterpret negative args as large positive unsigned
\r
args. You can suppress all such calls from even occurring by defining
MORECORE_CANNOT_TRIM.
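  Before the contributed allocator below, here is a minimal static-arena
  sketch of the sbrk-style calling convention described above (illustrative
  only; my_morecore, my_arena, and ARENA_BYTES are hypothetical names, MFAIL
  is the failure value defined earlier in this file, and the int argument
  type mirrors the osMoreCore example):

  #define ARENA_BYTES (1024 * 1024)
  static char my_arena[ARENA_BYTES];
  static size_t my_arena_used = 0;

  void* my_morecore(int increment)
  {
    if (increment == 0)
      return my_arena + my_arena_used;     // one past end of previously returned memory
    if (increment < 0)
      return (void*) MFAIL;                // no shrink support (see MORECORE_CANNOT_TRIM)
    if ((size_t) increment > ARENA_BYTES - my_arena_used)
      return (void*) MFAIL;                // arena exhausted
    {
      void* prev_end = my_arena + my_arena_used;
      my_arena_used += (size_t) increment;
      return prev_end;                     // start of the newly extended region
    }
  }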
\r
5978 As an example alternative MORECORE, here is a custom allocator
\r
5979 kindly contributed for pre-OSX macOS. It uses virtually but not
\r
5980 necessarily physically contiguous non-paged memory (locked in,
\r
5981 present and won't get swapped out). You can use it by uncommenting
\r
5982 this section, adding some #includes, and setting up the appropriate
\r
5985 #define MORECORE osMoreCore
\r
5987 There is also a shutdown routine that should somehow be called for
\r
5988 cleanup upon program exit.
\r
5990 #define MAX_POOL_ENTRIES 100
\r
5991 #define MINIMUM_MORECORE_SIZE (64 * 1024U)
\r
5992 static int next_os_pool;
\r
5993 void *our_os_pools[MAX_POOL_ENTRIES];
\r
5995 void *osMoreCore(int size)
\r
5998 static void *sbrk_top = 0;
\r
6002 if (size < MINIMUM_MORECORE_SIZE)
\r
6003 size = MINIMUM_MORECORE_SIZE;
\r
6004 if (CurrentExecutionLevel() == kTaskLevel)
\r
6005 ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
\r
6008 return (void *) MFAIL;
\r
6010 // save ptrs so they can be freed during cleanup
\r
6011 our_os_pools[next_os_pool] = ptr;
\r
6013 ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
\r
6014 sbrk_top = (char *) ptr + size;
\r
6017 else if (size < 0)
\r
6019 // we don't currently support shrink behavior
\r
6020 return (void *) MFAIL;
\r
6028 // cleanup any allocated memory pools
\r
6029 // called as last thing before shutting down driver
\r
6031 void osCleanupMem(void)
\r
6035 for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
\r
6038 PoolDeallocate(*ptr);
\r
6046 /* -----------------------------------------------------------------------
\r
6048 v2.8.5 Sun May 22 10:26:02 2011 Doug Lea (dl at gee)
\r
6049 * Always perform unlink checks unless INSECURE
\r
6050 * Add posix_memalign.
\r
6051 * Improve realloc to expand in more cases; expose realloc_in_place.
\r
6052 Thanks to Peter Buhr for the suggestion.
\r
6053 * Add footprint_limit, inspect_all, bulk_free. Thanks
\r
6054 to Barry Hayes and others for the suggestions.
\r
6055 * Internal refactorings to avoid calls while holding locks
\r
6056 * Use non-reentrant locks by default. Thanks to Roland McGrath
\r
6057 for the suggestion.
\r
6058 * Small fixes to mspace_destroy, reset_on_error.
\r
6059 * Various configuration extensions/changes. Thanks
\r
6060 to all who contributed these.
\r
6062 V2.8.4a Thu Apr 28 14:39:43 2011 (dl at gee.cs.oswego.edu)
\r
6063 * Update Creative Commons URL
\r
6065 V2.8.4 Wed May 27 09:56:23 2009 Doug Lea (dl at gee)
\r
6066 * Use zeros instead of prev foot for is_mmapped
\r
6067 * Add mspace_track_large_chunks; thanks to Jean Brouwers
\r
6068 * Fix set_inuse in internal_realloc; thanks to Jean Brouwers
\r
6069 * Fix insufficient sys_alloc padding when using 16byte alignment
\r
6070 * Fix bad error check in mspace_footprint
\r
6071 * Adaptations for ptmalloc; thanks to Wolfram Gloger.
\r
6072 * Reentrant spin locks; thanks to Earl Chew and others
\r
6073 * Win32 improvements; thanks to Niall Douglas and Earl Chew
\r
6074 * Add NO_SEGMENT_TRAVERSAL and MAX_RELEASE_CHECK_RATE options
\r
6075 * Extension hook in malloc_state
\r
6076 * Various small adjustments to reduce warnings on some compilers
\r
6077 * Various configuration extensions/changes for more platforms. Thanks
\r
6078 to all who contributed these.
\r
6080 V2.8.3 Thu Sep 22 11:16:32 2005 Doug Lea (dl at gee)
\r
6081 * Add max_footprint functions
\r
6082 * Ensure all appropriate literals are size_t
\r
6083 * Fix conditional compilation problem for some #define settings
\r
6084 * Avoid concatenating segments with the one provided
\r
6085 in create_mspace_with_base
\r
6086 * Rename some variables to avoid compiler shadowing warnings
\r
6087 * Use explicit lock initialization.
\r
6088 * Better handling of sbrk interference.
\r
6089 * Simplify and fix segment insertion, trimming and mspace_destroy
\r
6090 * Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x
\r
6091 * Thanks especially to Dennis Flanagan for help on these.
\r
6093 V2.8.2 Sun Jun 12 16:01:10 2005 Doug Lea (dl at gee)
\r
6094 * Fix memalign brace error.
\r
6096 V2.8.1 Wed Jun 8 16:11:46 2005 Doug Lea (dl at gee)
\r
6097 * Fix improper #endif nesting in C++
\r
6098 * Add explicit casts needed for C++
\r
6100 V2.8.0 Mon May 30 14:09:02 2005 Doug Lea (dl at gee)
\r
6101 * Use trees for large bins
\r
6103 * Use segments to unify sbrk-based and mmap-based system allocation,
\r
6104 removing need for emulation on most platforms without sbrk.
\r
6105 * Default safety checks
\r
6106 * Optional footer checks. Thanks to William Robertson for the idea.
\r
6107 * Internal code refactoring
\r
6108 * Incorporate suggestions and platform-specific changes.
\r
6109 Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas,
\r
6110 Aaron Bachmann, Emery Berger, and others.
\r
6111 * Speed up non-fastbin processing enough to remove fastbins.
\r
6112 * Remove useless cfree() to avoid conflicts with other apps.
\r
6113 * Remove internal memcpy, memset. Compilers handle builtins better.
\r
6114 * Remove some options that no one ever used and rename others.
\r
6116 V2.7.2 Sat Aug 17 09:07:30 2002 Doug Lea (dl at gee)
\r
6117 * Fix malloc_state bitmap array misdeclaration
\r
6119 V2.7.1 Thu Jul 25 10:58:03 2002 Doug Lea (dl at gee)
\r
6120 * Allow tuning of FIRST_SORTED_BIN_SIZE
\r
6121 * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
\r
6122 * Better detection and support for non-contiguousness of MORECORE.
\r
6123 Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
\r
6124 * Bypass most of malloc if no frees. Thanks To Emery Berger.
\r
* Fix freeing of old top non-contiguous chunk in sysmalloc.
\r
6126 * Raised default trim and map thresholds to 256K.
\r
6127 * Fix mmap-related #defines. Thanks to Lubos Lunak.
\r
6128 * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
\r
6129 * Branch-free bin calculation
\r
6130 * Default trim and mmap thresholds now 256K.
\r
6132 V2.7.0 Sun Mar 11 14:14:06 2001 Doug Lea (dl at gee)
\r
6133 * Introduce independent_comalloc and independent_calloc.
\r
6134 Thanks to Michael Pachos for motivation and help.
\r
6135 * Make optional .h file available
\r
6136 * Allow > 2GB requests on 32bit systems.
\r
6137 * new WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>.
\r
6138 Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
\r
6140 * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
\r
6141 helping test this.)
\r
6142 * memalign: check alignment arg
\r
6143 * realloc: don't try to shift chunks backwards, since this
\r
6144 leads to more fragmentation in some programs and doesn't
\r
6145 seem to help in any others.
\r
6146 * Collect all cases in malloc requiring system memory into sysmalloc
\r
6147 * Use mmap as backup to sbrk
\r
6148 * Place all internal state in malloc_state
\r
6149 * Introduce fastbins (although similar to 2.5.1)
\r
6150 * Many minor tunings and cosmetic improvements
\r
6151 * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
\r
6152 * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
\r
6153 Thanks to Tony E. Bennett <tbennett@nvidia.com> and others.
\r
6154 * Include errno.h to support default failure action.
\r
6156 V2.6.6 Sun Dec 5 07:42:19 1999 Doug Lea (dl at gee)
\r
6157 * return null for negative arguments
\r
6158 * Added Several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
\r
6159 * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
\r
6160 (e.g. WIN32 platforms)
\r
6161 * Cleanup header file inclusion for WIN32 platforms
\r
6162 * Cleanup code to avoid Microsoft Visual C++ compiler complaints
\r
6163 * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
\r
6164 memory allocation routines
\r
6165 * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
\r
6166 * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
\r
6167 usage of 'assert' in non-WIN32 code
\r
6168 * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
\r
6169 avoid infinite loop
\r
6170 * Always call 'fREe()' rather than 'free()'
\r
6172 V2.6.5 Wed Jun 17 15:57:31 1998 Doug Lea (dl at gee)
\r
6173 * Fixed ordering problem with boundary-stamping
\r
6175 V2.6.3 Sun May 19 08:17:58 1996 Doug Lea (dl at gee)
\r
6176 * Added pvalloc, as recommended by H.J. Liu
\r
6177 * Added 64bit pointer support mainly from Wolfram Gloger
\r
6178 * Added anonymously donated WIN32 sbrk emulation
\r
6179 * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
\r
6180 * malloc_extend_top: fix mask error that caused wastage after
\r
6182 * Add linux mremap support code from HJ Liu
\r
6184 V2.6.2 Tue Dec 5 06:52:55 1995 Doug Lea (dl at gee)
\r
6185 * Integrated most documentation with the code.
\r
6186 * Add support for mmap, with help from
\r
6187 Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
\r
6188 * Use last_remainder in more cases.
\r
6189 * Pack bins using idea from colin@nyx10.cs.du.edu
\r
* Use ordered bins instead of best-fit threshold
\r
6191 * Eliminate block-local decls to simplify tracing and debugging.
\r
6192 * Support another case of realloc via move into top
\r
* Fix error occurring when initial sbrk_base not word-aligned.
\r
6194 * Rely on page size for units instead of SBRK_UNIT to
\r
6195 avoid surprises about sbrk alignment conventions.
\r
6196 * Add mallinfo, mallopt. Thanks to Raymond Nijssen
\r
6197 (raymond@es.ele.tue.nl) for the suggestion.
\r
6198 * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
\r
6199 * More precautions for cases where other routines call sbrk,
\r
6200 courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
\r
6201 * Added macros etc., allowing use in linux libc from
\r
6202 H.J. Lu (hjl@gnu.ai.mit.edu)
\r
6203 * Inverted this history list
\r
6205 V2.6.1 Sat Dec 2 14:10:57 1995 Doug Lea (dl at gee)
\r
6206 * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
\r
6207 * Removed all preallocation code since under current scheme
\r
6208 the work required to undo bad preallocations exceeds
\r
6209 the work saved in good cases for most test programs.
\r
6210 * No longer use return list or unconsolidated bins since
\r
6211 no scheme using them consistently outperforms those that don't
\r
6212 given above changes.
\r
6213 * Use best fit for very large chunks to prevent some worst-cases.
\r
6214 * Added some support for debugging
\r
6216 V2.6.0 Sat Nov 4 07:05:23 1995 Doug Lea (dl at gee)
\r
6217 * Removed footers when chunks are in use. Thanks to
\r
6218 Paul Wilson (wilson@cs.texas.edu) for the suggestion.
\r
6220 V2.5.4 Wed Nov 1 07:54:51 1995 Doug Lea (dl at gee)
\r
6221 * Added malloc_trim, with help from Wolfram Gloger
\r
6222 (wmglo@Dent.MED.Uni-Muenchen.DE).
\r
6224 V2.5.3 Tue Apr 26 10:16:01 1994 Doug Lea (dl at g)
\r
6226 V2.5.2 Tue Apr 5 16:20:40 1994 Doug Lea (dl at g)
\r
6227 * realloc: try to expand in both directions
\r
6228 * malloc: swap order of clean-bin strategy;
\r
6229 * realloc: only conditionally expand backwards
\r
6230 * Try not to scavenge used bins
\r
6231 * Use bin counts as a guide to preallocation
\r
6232 * Occasionally bin return list chunks in first scan
\r
6233 * Add a few optimizations from colin@nyx10.cs.du.edu
\r
6235 V2.5.1 Sat Aug 14 15:40:43 1993 Doug Lea (dl at g)
\r
6236 * faster bin computation & slightly different binning
\r
6237 * merged all consolidations to one part of malloc proper
\r
6238 (eliminating old malloc_find_space & malloc_clean_bin)
\r
6239 * Scan 2 returns chunks (not just 1)
\r
6240 * Propagate failure in realloc if malloc returns 0
\r
6241 * Add stuff to allow compilation on non-ANSI compilers
\r
6242 from kpv@research.att.com
\r
6244 V2.5 Sat Aug 7 07:41:59 1993 Doug Lea (dl at g.oswego.edu)
\r
6245 * removed potential for odd address access in prev_chunk
\r
6246 * removed dependency on getpagesize.h
\r
6247 * misc cosmetics and a bit more internal documentation
\r
6248 * anticosmetics: mangled names in macros to evade debugger strangeness
\r
6249 * tested on sparc, hp-700, dec-mips, rs6000
\r
6250 with gcc & native cc (hp, dec only) allowing
\r
6251 Detlefs & Zorn comparison study (in SIGPLAN Notices.)
\r
6253 Trial version Fri Aug 28 13:14:29 1992 Doug Lea (dl at g.oswego.edu)
\r
6254 * Based loosely on libg++-1.2X malloc. (It retains some of the overall
\r
6255 structure of old version, but most details differ.)
\r
6261 #include <_PDCLIB_test.h>
\r
6263 /* TODO: TEST ME */
\r
6266 return TEST_RESULTS;
\r