/*
  This is a version (aka dlmalloc) of malloc/free/realloc written by
  Doug Lea and released to the public domain, as explained at
  http://creativecommons.org/publicdomain/zero/1.0/ Send questions,
  comments, complaints, performance data, etc to dl@cs.oswego.edu

* Version 2.8.5 Sun May 22 10:26:02 2011  Doug Lea  (dl at gee)

   Note: There may be an updated version of this malloc obtainable at
           ftp://gee.cs.oswego.edu/pub/misc/malloc.c
         Check before installing!

* Quickstart
  This library is all in one file to simplify the most common usage:
  ftp it, compile it (-O3), and link it into another program. All of
  the compile-time options default to reasonable values for use on
  most platforms.  You might later want to step through various
  compile-time and dynamic tuning options.

  For convenience, an include file for code using this malloc is at:
     ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.5.h
  You don't really need this .h file unless you call functions not
  defined in your system include files.  The .h file contains only the
  excerpts from this file needed for using this malloc on ANSI C/C++
  systems, so long as you haven't changed compile-time options about
  naming and tuning parameters.  If you do, then you can create your
  own malloc.h that does include all settings by cutting at the point
  indicated below.  Note that you may already by default be using a C
  library containing a malloc that is based on some version of this
  malloc (for example in linux).  You might still want to use the one
  in this file to customize settings or to avoid overheads associated
  with library versions.

* Vital statistics:

  Supported pointer/size_t representation:       4 or 8 bytes
       size_t MUST be an unsigned type of the same width as
       pointers. (If you are using an ancient system that declares
       size_t as a signed type, or need it to be a different width
       than pointers, you can use a previous release of this malloc
       (e.g. 2.7.2) supporting these.)

  Alignment:                                     8 bytes (default)
       This suffices for nearly all current machines and C compilers.
       However, you can define MALLOC_ALIGNMENT to be wider than this
       if necessary (up to 128 bytes), at the expense of using more space.

  Minimum overhead per allocated chunk:   4 or  8 bytes (if 4byte sizes)
                                          8 or 16 bytes (if 8byte sizes)
       Each malloced chunk has a hidden word of overhead holding size
       and status information, and an additional cross-check word
       if FOOTERS is defined.

  Minimum allocated size: 4-byte ptrs:  16 bytes    (including overhead)
                          8-byte ptrs:  32 bytes    (including overhead)

       Even a request for zero bytes (i.e., malloc(0)) returns a
       pointer to something of the minimum allocatable size.
       The maximum overhead wastage (i.e., number of extra bytes
       allocated beyond those requested in malloc) is less than or equal
       to the minimum size, except for requests >= mmap_threshold that
       are serviced via mmap(), where the worst case wastage is about
       32 bytes plus the remainder from a system page (the minimal
       mmap unit); typically 4096 or 8192 bytes.

  Security: static-safe; optionally more or less
       The "security" of malloc refers to the ability of malicious
       code to accentuate the effects of errors (for example, freeing
       space that is not currently malloc'ed or overwriting past the
       ends of chunks) in code that calls malloc.  This malloc
       guarantees not to modify any memory locations below the base of
       the heap, i.e., static variables, even in the presence of usage
       errors.  The routines additionally detect most improper frees
       and reallocs.  All this holds as long as the static bookkeeping
       for malloc itself is not corrupted by some other means.  This
       is only one aspect of security -- these checks do not, and
       cannot, detect all possible programming errors.

       If FOOTERS is defined nonzero, then each allocated chunk
       carries an additional check word to verify that it was malloced
       from its space.  These check words are the same within each
       execution of a program using malloc, but differ across
       executions, so externally crafted fake chunks cannot be
       freed.  This improves security by rejecting frees/reallocs that
       could corrupt heap memory, in addition to the always-on checks
       preventing writes to statics.  This may further improve
       security at the expense of time and space overhead.  (Note that
       FOOTERS may also be worth using with MSPACES.)

       By default, detected errors cause the program to abort (calling
       "abort()").  You can override this to instead proceed past
       errors by defining PROCEED_ON_ERROR.  In this case, a bad free
       has no effect, and a malloc that encounters a bad address
       caused by user overwrites will ignore the bad address by
       dropping pointers and indices to all known memory.  This may
       be appropriate for programs that should continue if at all
       possible in the face of programming errors, although they may
       run out of memory because dropped memory is never reclaimed.

       If you don't like either of these options, you can define
       CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
       else.  And if you are sure that your program using malloc has
       no errors or vulnerabilities, you can define INSECURE to 1,
       which might (or might not) provide a small performance improvement.

       It is also possible to limit the maximum total allocatable
       space, using malloc_set_footprint_limit.  This is not
       designed as a security feature in itself (calls to set limits
       are not screened or privileged), but may be useful as one
       aspect of a secure implementation.

  Thread-safety: NOT thread-safe unless USE_LOCKS defined non-zero
       When USE_LOCKS is defined, each public call to malloc, free,
       etc is surrounded with a lock.  By default, this uses a plain
       pthread mutex, win32 critical section, or a spin-lock if
       available for the platform and not disabled by setting
       USE_SPIN_LOCKS=0.  However, if USE_RECURSIVE_LOCKS is defined,
       recursive versions are used instead (which are not required for
       base functionality but may be needed in layered extensions).
       Using a global lock is not especially fast, and can be a major
       bottleneck.  It is designed only to provide minimal protection
       in concurrent environments, and to provide a basis for
       extensions.  If you are using malloc in a concurrent program,
       consider instead using nedmalloc
       (http://www.nedprod.com/programs/portable/nedmalloc/) or
       ptmalloc (see http://www.malloc.de), which are derived from
       versions of this malloc.

  System requirements: Any combination of MORECORE and/or MMAP/MUNMAP
       This malloc can use unix sbrk or any emulation (invoked using
       the CALL_MORECORE macro) and/or mmap/munmap or any emulation
       (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system
       memory.  On most unix systems, it tends to work best if both
       MORECORE and MMAP are enabled.  On Win32, it uses emulations
       based on VirtualAlloc.  It also uses common C library functions
       like memset.

  Compliance: I believe it is compliant with the Single Unix Specification
       (see http://www.unix.org).  Also SVID/XPG, ANSI C, and probably
       others as well.

* Overview of algorithms

  This is not the fastest, most space-conserving, most portable, or
  most tunable malloc ever written.  However it is among the fastest
  while also being among the most space-conserving, portable and
  tunable.  Consistent balance across these factors results in a good
  general-purpose allocator for malloc-intensive programs.

  In most ways, this malloc is a best-fit allocator.  Generally, it
  chooses the best-fitting existing chunk for a request, with ties
  broken in approximately least-recently-used order.  (This strategy
  normally maintains low fragmentation.)  However, for requests less
  than 256 bytes, it deviates from best-fit when there is not an
  exactly fitting available chunk by preferring to use space adjacent
  to that used for the previous small request, as well as by breaking
  ties in approximately most-recently-used order.  (These enhance
  locality of series of small allocations.)  And for very large requests
  (>= 256Kb by default), it relies on system memory mapping
  facilities, if supported.  (This helps avoid carrying around and
  possibly fragmenting memory used only for large chunks.)

  All operations (except malloc_stats and mallinfo) have execution
  times that are bounded by a constant factor of the number of bits in
  a size_t, not counting any clearing in calloc or copying in realloc,
  or actions surrounding MORECORE and MMAP that have times
  proportional to the number of non-contiguous regions returned by
  system allocation routines, which is often just 1.  In real-time
  applications, you can optionally suppress segment traversals using
  NO_SEGMENT_TRAVERSAL, which assures bounded execution even when
  system allocators return non-contiguous spaces, at the typical
  expense of carrying around more memory and increased fragmentation.

  The implementation is not very modular and seriously overuses
  macros.  Perhaps someday all C compilers will do as good a job
  inlining modular code as can now be done by brute-force expansion,
  but for now, enough of them seem not to.

  Some compilers issue a lot of warnings about code that is
  dead/unreachable only on some platforms, and also about intentional
  uses of negation on unsigned types.  All known cases of each can be
  ignored.

  For a longer but out-of-date high-level description, see
     http://gee.cs.oswego.edu/dl/html/malloc.html

* MSPACES
  If MSPACES is defined, then in addition to malloc, free, etc.,
  this file also defines mspace_malloc, mspace_free, etc. These
  are versions of malloc routines that take an "mspace" argument
  obtained using create_mspace, to control all internal bookkeeping.
  If ONLY_MSPACES is defined, only these versions are compiled.
  So if you would like to use this allocator for only some allocations,
  and your system malloc for others, you can compile with
  ONLY_MSPACES and then do something like...
    static mspace mymspace = create_mspace(0,0); // for example
    #define mymalloc(bytes)  mspace_malloc(mymspace, bytes)

  (Note: If you only need one instance of an mspace, you can instead
  use "USE_DL_PREFIX" to relabel the global malloc.)
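
  As a minimal sketch of that alternative (assuming this file is
  compiled with USE_DL_PREFIX defined, so its routines are exposed as
  dlmalloc, dlfree, etc., alongside the system versions):
    void* p = dlmalloc(64);   // served by this allocator
    void* q = malloc(64);     // still served by the system malloc
    dlfree(p);
    free(q);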

  You can similarly create thread-local allocators by storing
  mspaces as thread-locals. For example:
    static __thread mspace tlms = 0;
    void*  tlmalloc(size_t bytes) {
      if (tlms == 0) tlms = create_mspace(0, 0);
      return mspace_malloc(tlms, bytes);
    }
    void  tlfree(void* mem) { mspace_free(tlms, mem); }

  Unless FOOTERS is defined, each mspace is completely independent.
  You cannot allocate from one and free to another (although
  conformance is only weakly checked, so usage errors are not always
  caught). If FOOTERS is defined, then each chunk carries around a tag
  indicating its originating mspace, and frees are directed to their
  originating spaces.  Normally, this requires use of locks.

 -------------------------  Compile-time options  ---------------------------

Be careful in setting #define values for numerical constants of type
size_t. On some systems, literal values are not automatically extended
to size_t precision unless they are explicitly cast. You can also
use the symbolic values MAX_SIZE_T, SIZE_T_ONE, etc below.
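
  For example, on a platform with 32-bit int and 64-bit size_t, a
  literal product such as 16*1024*1024*1024 overflows int arithmetic
  before it is ever widened, whereas
    ((size_t)16U * (size_t)1024U * (size_t)1024U * (size_t)1024U)
  is computed entirely in size_t, matching the style used for the
  defaults defined below.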

WIN32                    default: defined if _WIN32 defined
  Defining WIN32 sets up defaults for MS environment and compilers.
  Otherwise defaults are for unix.  Beware that there seem to be some
  cases where this malloc might not be a pure drop-in replacement for
  Win32 malloc: Random-looking failures from Win32 GDI APIs (e.g.,
  SetDIBits()) may be due to bugs in some video driver implementations
  when pixel buffers are malloc()ed, and the region spans more than
  one VirtualAlloc()ed region.  Because dlmalloc uses a small (64Kb)
  default granularity, pixel buffers may straddle virtual allocation
  regions more often than when using the Microsoft allocator.  You can
  avoid this by using VirtualAlloc() and VirtualFree() for all pixel
  buffers rather than using malloc().  If this is not possible,
  recompile this malloc with a larger DEFAULT_GRANULARITY.  Note:
  in cases where MSC and gcc (cygwin) are known to differ on WIN32,
  conditions use _MSC_VER to distinguish them.

DLMALLOC_EXPORT          default: extern
  Defines how public APIs are declared. If you want to export via a
  Windows DLL, you might define this as
    #define DLMALLOC_EXPORT extern __declspec(dllexport)
  If you want a POSIX ELF shared object, you might use
    #define DLMALLOC_EXPORT extern __attribute__((visibility("default")))

MALLOC_ALIGNMENT         default: (size_t)8
  Controls the minimum alignment for malloc'ed chunks.  It must be a
  power of two and at least 8, even on machines for which smaller
  alignments would suffice. It may be defined as larger than this
  though. Note however that code and data structures are optimized for
  the case of 8-byte alignment.

MSPACES                  default: 0 (false)
  If true, compile in support for independent allocation spaces.
  This is only supported if HAVE_MMAP is true.

ONLY_MSPACES             default: 0 (false)
  If true, only compile in mspace versions, not regular versions.

USE_LOCKS                default: 0 (false)
  Causes each call to each public routine to be surrounded with
  pthread or WIN32 mutex lock/unlock. (If set true, this can be
  overridden on a per-mspace basis for mspace versions.) If set to a
  non-zero value other than 1, locks are used, but their
  implementation is left out, so lock functions must be supplied manually,
  as described below.

USE_SPIN_LOCKS           default: 1 iff USE_LOCKS and spin locks available
  If true, uses custom spin locks for locking. This is currently
  supported only for gcc >= 4.1, older gccs on x86 platforms, and recent
  MS compilers.  Otherwise, posix locks or win32 critical sections are
  used.

USE_RECURSIVE_LOCKS      default: not defined
  If defined nonzero, uses recursive (aka reentrant) locks, otherwise
  uses plain mutexes. This is not required for malloc proper, but may
  be needed for layered allocators such as nedmalloc.

FOOTERS                  default: 0
  If true, provide extra checking and dispatching by placing
  information in the footers of allocated chunks. This adds
  space and time overhead.

INSECURE                 default: 0
  If true, omit checks for usage errors and heap space overwrites.

USE_DL_PREFIX            default: NOT defined
  Causes compiler to prefix all public routines with the string 'dl'.
  This can be useful when you only want to use this malloc in one part
  of a program, using your regular system malloc elsewhere.

MALLOC_INSPECT_ALL       default: NOT defined
  If defined, compiles malloc_inspect_all and mspace_inspect_all, that
  perform traversal of all heap space.  Unless access to these
  functions is otherwise restricted, you probably do not want to
  include them in secure implementations.

ABORT                    default: defined as abort()
  Defines how to abort on failed checks.  On most systems, a failed
  check cannot die with an "assert" or even print an informative
  message, because the underlying print routines in turn call malloc,
  which will fail again.  Generally, the best policy is to simply call
  abort(). It's not very useful to do more than this because many
  errors due to overwriting will show up as address faults (null, odd
  addresses etc) rather than malloc-triggered checks, so will also
  abort.  Also, most compilers know that abort() does not return, so
  can better optimize code conditionally calling it.

PROCEED_ON_ERROR         default: defined as 0 (false)
  Controls whether detected bad addresses cause them to be bypassed
  rather than aborting. If set, detected bad arguments to free and
  realloc are ignored. And all bookkeeping information is zeroed out
  upon a detected overwrite of freed heap space, thus losing the
  ability to ever return it from malloc again, but enabling the
  application to proceed. If PROCEED_ON_ERROR is defined, the
  static variable malloc_corruption_error_count is compiled in
  and can be examined to see if errors have occurred. This option
  generates slower code than the default abort policy.

DEBUG                    default: NOT defined
  The DEBUG setting is mainly intended for people trying to modify
  this code or diagnose problems when porting to new platforms.
  However, it may also be able to better isolate user errors than just
  using runtime checks.  The assertions in the check routines spell
  out in more detail the assumptions and invariants underlying the
  algorithms.  The checking is fairly extensive, and will slow down
  execution noticeably. Calling malloc_stats or mallinfo with DEBUG
  set will attempt to check every non-mmapped allocated and free chunk
  in the course of computing the summaries.

ABORT_ON_ASSERT_FAILURE  default: defined as 1 (true)
  Debugging assertion failures can be nearly impossible if your
  version of the assert macro causes malloc to be called, which will
  lead to a cascade of further failures, blowing the runtime stack.
  ABORT_ON_ASSERT_FAILURE causes assertion failures to call abort(),
  which will usually make debugging easier.

MALLOC_FAILURE_ACTION    default: sets errno to ENOMEM, or no-op on win32
  The action to take before "return 0" when malloc fails to
  return memory because none is available.

HAVE_MORECORE            default: 1 (true) unless win32 or ONLY_MSPACES
  True if this system supports sbrk or an emulation of it.

MORECORE                 default: sbrk
  The name of the sbrk-style system routine to call to obtain more
  memory.  See below for guidance on writing custom MORECORE
  functions. The type of the argument to sbrk/MORECORE varies across
  systems.  It cannot be size_t, because it supports negative
  arguments, so it is normally the signed type of the same width as
  size_t (sometimes declared as "intptr_t").  It doesn't much matter
  though. Internally, we only call it with arguments less than half
  the max value of a size_t, which should work across all reasonable
  possibilities, although sometimes generating compiler warnings.

MORECORE_CONTIGUOUS      default: 1 (true) if HAVE_MORECORE
  If true, take advantage of the fact that consecutive calls to MORECORE
  with positive arguments always return contiguous increasing
  addresses.  This is true of unix sbrk. It does not hurt too much to
  set it true anyway, since malloc copes with non-contiguities.
  Setting it false when MORECORE is definitely non-contiguous saves
  the time and possibly wasted space it would otherwise take to
  discover this.

MORECORE_CANNOT_TRIM     default: NOT defined
  True if MORECORE cannot release space back to the system when given
  negative arguments. This is generally necessary only if you are
  using a hand-crafted MORECORE function that cannot handle negative
  arguments.

NO_SEGMENT_TRAVERSAL     default: 0
  If non-zero, suppresses traversals of memory segments
  returned by either MORECORE or CALL_MMAP. This disables
  merging of contiguous segments, as well as selectively
  releasing them to the OS if unused, but bounds execution times.

HAVE_MMAP                default: 1 (true)
  True if this system supports mmap or an emulation of it.  If so, and
  HAVE_MORECORE is not true, MMAP is used for all system
  allocation.  If set and HAVE_MORECORE is true as well, MMAP is
  primarily used to directly allocate very large blocks. It is also
  used as a backup strategy in cases where MORECORE fails to provide
  space from system.  Note: A single call to MUNMAP is assumed to be
  able to unmap memory that may have been allocated using multiple calls
  to MMAP, so long as they are adjacent.

HAVE_MREMAP              default: 1 on linux, else 0
  If true, realloc() uses mremap() to re-allocate large blocks and
  extend or shrink allocation spaces.

MMAP_CLEARS              default: 1 except on WINCE.
  True if mmap clears memory so calloc doesn't need to. This is true
  for standard unix mmap using /dev/zero and on WIN32 except for WINCE.

USE_BUILTIN_FFS          default: 0 (i.e., not used)
  Causes malloc to use the builtin ffs() function to compute indices.
  Some compilers may recognize and intrinsify ffs to be faster than the
  supplied C version. Also, the case of x86 using gcc is special-cased
  to an asm instruction, so is already as fast as it can be, and so
  this setting has no effect. Similarly for Win32 under recent MS compilers.
  (On most x86s, the asm version is only slightly faster than the C version.)

malloc_getpagesize       default: derive from system includes, or 4096.
  The system page size. To the extent possible, this malloc manages
  memory from the system in page-size units.  This may be (and
  usually is) a function rather than a constant. This is ignored
  if WIN32, where page size is determined using getSystemInfo during
  initialization.

USE_DEV_RANDOM           default: 0 (i.e., not used)
  Causes malloc to use /dev/random to initialize secure magic seed for
  stamping footers. Otherwise, the current time is used.

NO_MALLINFO              default: 0
  If defined, don't compile "mallinfo". This can be a simple way
  of dealing with mismatches between system declarations and
  those in this file.

MALLINFO_FIELD_TYPE      default: size_t
  The type of the fields in the mallinfo struct. This was originally
  defined as "int" in SVID etc, but is more usefully defined as
  size_t. The value is used only if HAVE_USR_INCLUDE_MALLOC_H is not set.

NO_MALLOC_STATS          default: 0
  If defined, don't compile "malloc_stats". This avoids calls to
  fprintf and bringing in stdio dependencies you might not want.

REALLOC_ZERO_BYTES_FREES default: not defined
  This should be set if a call to realloc with zero bytes should
  be the same as a call to free. Some people think it should. Otherwise,
  since this malloc returns a unique pointer for malloc(0), so does
  realloc with zero bytes.

LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H,
LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H, LACKS_ERRNO_H,
LACKS_STDLIB_H, LACKS_SCHED_H, LACKS_TIME_H
                         default: NOT defined unless on WIN32
  Define these if your system does not have these header files.
  You might need to manually insert some of the declarations they provide.

DEFAULT_GRANULARITY      default: page size if MORECORE_CONTIGUOUS,
                                  system_info.dwAllocationGranularity in WIN32,
                                  otherwise 64K.
      Also settable using mallopt(M_GRANULARITY, x)
  The unit for allocating and deallocating memory from the system.  On
  most systems with contiguous MORECORE, there is no reason to
  make this more than a page. However, systems with MMAP tend to
  either require or encourage larger granularities.  You can increase
  this value to prevent system allocation functions from being called
  so often, especially if they are slow.  The value must be at least one
  page and must be a power of two.  Setting to 0 causes initialization
  to either page size or win32 region size.  (Note: In previous
  versions of malloc, the equivalent of this option was called
  "TOP_PAD")

DEFAULT_TRIM_THRESHOLD   default: 2MB
      Also settable using mallopt(M_TRIM_THRESHOLD, x)
  The maximum amount of unused top-most memory to keep before
  releasing via malloc_trim in free().  Automatic trimming is mainly
  useful in long-lived programs using contiguous MORECORE.  Because
  trimming via sbrk can be slow on some systems, and can sometimes be
  wasteful (in cases where programs immediately afterward allocate
  more large chunks) the value should be high enough so that your
  overall system performance would improve by releasing this much
  memory.  As a rough guide, you might set to a value close to the
  average size of a process (program) running on your system.
  Releasing this much memory would allow such a process to run in
  memory.  Generally, it is worth tuning trim thresholds when a
  program undergoes phases where several large chunks are allocated
  and released in ways that can reuse each other's storage, perhaps
  mixed with phases where there are no such chunks at all.  The trim
  value must be greater than page size to have any useful effect.  To
  disable trimming completely, you can set to MAX_SIZE_T.  Note that the trick
  some people use of mallocing a huge space and then freeing it at
  program startup, in an attempt to reserve system memory, doesn't
  have the intended effect under automatic trimming, since that memory
  will immediately be returned to the system.

DEFAULT_MMAP_THRESHOLD   default: 256K
      Also settable using mallopt(M_MMAP_THRESHOLD, x)
  The request size threshold for using MMAP to directly service a
  request. Requests of at least this size that cannot be allocated
  using already-existing space will be serviced via mmap.  (If enough
  normal freed space already exists it is used instead.)  Using mmap
  segregates relatively large chunks of memory so that they can be
  individually obtained and released from the host system. A request
  serviced through mmap is never reused by any other request (at least
  not directly; the system may just so happen to remap successive
  requests to the same locations).  Segregating space in this way has
  the benefits that: Mmapped space can always be individually released
  back to the system, which helps keep the system level memory demands
  of a long-lived program low.  Also, mapped memory doesn't become
  `locked' between other chunks, as can happen with normally allocated
  chunks, which means that even trimming via malloc_trim would not
  release them.  However, it has the disadvantage that the space
  cannot be reclaimed, consolidated, and then used to service later
  requests, as happens with normal chunks.  The advantages of mmap
  nearly always outweigh disadvantages for "large" chunks, but the
  value of "large" may vary across systems.  The default is an
  empirically derived value that works well in most systems.  You can
  disable mmap by setting to MAX_SIZE_T.

MAX_RELEASE_CHECK_RATE   default: 4095 unless not HAVE_MMAP
  The number of consolidated frees between checks to release
  unused segments when freeing. When using non-contiguous segments,
  especially with multiple mspaces, checking only for topmost space
  doesn't always suffice to trigger trimming. To compensate for this,
  free() will, with a period of MAX_RELEASE_CHECK_RATE (or the
  current number of segments, if greater) try to release unused
  segments to the OS when freeing chunks that result in
  consolidation.  The best value for this parameter is a compromise
  between slowing down frees with relatively costly checks that
  rarely trigger versus holding on to unused memory.  To effectively
  disable, set to MAX_SIZE_T. This may lead to a very slight speed
  improvement at the expense of carrying around more memory.
*/

#include "dlmalloc.h"

/* Version identifier to allow people to support multiple versions */
#ifndef DLMALLOC_VERSION
#define DLMALLOC_VERSION 20805
#endif /* DLMALLOC_VERSION */

#ifndef DLMALLOC_EXPORT
#define DLMALLOC_EXPORT extern
#endif

#ifndef WIN32
#ifdef _WIN32
#define WIN32 1
#endif  /* _WIN32 */
#ifdef _WIN32_WCE
#define LACKS_FCNTL_H
#define WIN32 1
#endif /* _WIN32_WCE */
#endif  /* WIN32 */
#ifdef WIN32
#define WIN32_LEAN_AND_MEAN
#include <windows.h>
#include <tchar.h>
#define HAVE_MMAP 1
#define HAVE_MORECORE 0
#define LACKS_UNISTD_H
#define LACKS_SYS_PARAM_H
#define LACKS_SYS_MMAN_H
#define LACKS_STRING_H
#define LACKS_STRINGS_H
#define LACKS_SYS_TYPES_H
#define LACKS_ERRNO_H
#define LACKS_SCHED_H
#ifndef MALLOC_FAILURE_ACTION
#define MALLOC_FAILURE_ACTION
#endif /* MALLOC_FAILURE_ACTION */
#ifndef MMAP_CLEARS
#ifdef _WIN32_WCE /* WINCE reportedly does not clear */
#define MMAP_CLEARS 0
#else
#define MMAP_CLEARS 1
#endif /* _WIN32_WCE */
#endif /* MMAP_CLEARS */
#endif /* WIN32 */

#if defined(DARWIN) || defined(_DARWIN)
/* Mac OSX docs advise not to use sbrk; it seems better to use mmap */
#ifndef HAVE_MORECORE
#define HAVE_MORECORE 0
#define HAVE_MMAP 1
/* OSX allocators provide 16 byte alignment */
#ifndef MALLOC_ALIGNMENT
#define MALLOC_ALIGNMENT ((size_t)16U)
#endif /* MALLOC_ALIGNMENT */
#endif /* HAVE_MORECORE */
#endif /* DARWIN */

#ifndef LACKS_SYS_TYPES_H
#include <sys/types.h>  /* For size_t */
#endif  /* LACKS_SYS_TYPES_H */

/* The maximum possible size_t value has all bits set */
#define MAX_SIZE_T           (~(size_t)0)

#ifndef USE_LOCKS /* ensure true if spin or recursive locks set */
#define USE_LOCKS ((defined(USE_SPIN_LOCKS) && USE_SPIN_LOCKS != 0) || \
                   (defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0))
#endif /* USE_LOCKS */

#if USE_LOCKS /* Spin locks for gcc >= 4.1, older gcc on x86, MSC >= 1310 */
#if ((defined(__GNUC__) &&                                          \
      ((__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 1)) ||  \
       defined(__i386__) || defined(__x86_64__))) ||                \
     (defined(_MSC_VER) && _MSC_VER >= 1310))
#ifndef USE_SPIN_LOCKS
#define USE_SPIN_LOCKS 1
#endif /* USE_SPIN_LOCKS */
#elif USE_SPIN_LOCKS
#error "USE_SPIN_LOCKS defined without implementation"
#endif /* ... locks available... */
#elif !defined(USE_SPIN_LOCKS)
#define USE_SPIN_LOCKS 0
#endif /* USE_LOCKS */

#ifndef ONLY_MSPACES
#define ONLY_MSPACES 0
#endif /* ONLY_MSPACES */
#ifndef MSPACES
#if ONLY_MSPACES
#define MSPACES 1
#else /* ONLY_MSPACES */
#define MSPACES 0
#endif /* ONLY_MSPACES */
#endif /* MSPACES */
#ifndef MALLOC_ALIGNMENT
#define MALLOC_ALIGNMENT ((size_t)8U)
#endif /* MALLOC_ALIGNMENT */
#ifndef FOOTERS
#define FOOTERS 0
#endif /* FOOTERS */
#ifndef ABORT
#define ABORT abort()
#endif /* ABORT */
#ifndef ABORT_ON_ASSERT_FAILURE
#define ABORT_ON_ASSERT_FAILURE 1
#endif /* ABORT_ON_ASSERT_FAILURE */
#ifndef PROCEED_ON_ERROR
#define PROCEED_ON_ERROR 0
#endif /* PROCEED_ON_ERROR */
#ifndef INSECURE
#define INSECURE 0
#endif /* INSECURE */
#ifndef MALLOC_INSPECT_ALL
#define MALLOC_INSPECT_ALL 0
#endif /* MALLOC_INSPECT_ALL */
#ifndef HAVE_MMAP
#define HAVE_MMAP 1
#endif /* HAVE_MMAP */
#ifndef MMAP_CLEARS
#define MMAP_CLEARS 1
#endif /* MMAP_CLEARS */
#ifndef HAVE_MREMAP
#ifdef linux
#define HAVE_MREMAP 1
#define _GNU_SOURCE /* Turns on mremap() definition */
#else   /* linux */
#define HAVE_MREMAP 0
#endif  /* linux */
#endif  /* HAVE_MREMAP */
#ifndef MALLOC_FAILURE_ACTION
#define MALLOC_FAILURE_ACTION  errno = ENOMEM;
#endif  /* MALLOC_FAILURE_ACTION */
#ifndef HAVE_MORECORE
#if ONLY_MSPACES
#define HAVE_MORECORE 0
#else   /* ONLY_MSPACES */
#define HAVE_MORECORE 1
#endif  /* ONLY_MSPACES */
#endif  /* HAVE_MORECORE */
#if !HAVE_MORECORE
#define MORECORE_CONTIGUOUS 0
#else   /* !HAVE_MORECORE */
#define MORECORE_DEFAULT sbrk
#ifndef MORECORE_CONTIGUOUS
#define MORECORE_CONTIGUOUS 1
#endif  /* MORECORE_CONTIGUOUS */
#endif  /* HAVE_MORECORE */
#ifndef DEFAULT_GRANULARITY
#if (MORECORE_CONTIGUOUS || defined(WIN32))
#define DEFAULT_GRANULARITY (0)  /* 0 means to compute in init_mparams */
#else   /* MORECORE_CONTIGUOUS */
#define DEFAULT_GRANULARITY ((size_t)64U * (size_t)1024U)
#endif  /* MORECORE_CONTIGUOUS */
#endif  /* DEFAULT_GRANULARITY */
#ifndef DEFAULT_TRIM_THRESHOLD
#ifndef MORECORE_CANNOT_TRIM
#define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)
#else   /* MORECORE_CANNOT_TRIM */
#define DEFAULT_TRIM_THRESHOLD MAX_SIZE_T
#endif  /* MORECORE_CANNOT_TRIM */
#endif  /* DEFAULT_TRIM_THRESHOLD */
#ifndef DEFAULT_MMAP_THRESHOLD
#if HAVE_MMAP
#define DEFAULT_MMAP_THRESHOLD ((size_t)256U * (size_t)1024U)
#else   /* HAVE_MMAP */
#define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T
#endif  /* HAVE_MMAP */
#endif  /* DEFAULT_MMAP_THRESHOLD */
#ifndef MAX_RELEASE_CHECK_RATE
#if HAVE_MMAP
#define MAX_RELEASE_CHECK_RATE 4095
#else   /* HAVE_MMAP */
#define MAX_RELEASE_CHECK_RATE MAX_SIZE_T
#endif  /* HAVE_MMAP */
#endif  /* MAX_RELEASE_CHECK_RATE */
#ifndef USE_BUILTIN_FFS
#define USE_BUILTIN_FFS 0
#endif  /* USE_BUILTIN_FFS */
#ifndef USE_DEV_RANDOM
#define USE_DEV_RANDOM 0
#endif  /* USE_DEV_RANDOM */
#ifndef NO_MALLINFO
#define NO_MALLINFO 0
#endif  /* NO_MALLINFO */
#ifndef MALLINFO_FIELD_TYPE
#define MALLINFO_FIELD_TYPE size_t
#endif  /* MALLINFO_FIELD_TYPE */
#ifndef NO_MALLOC_STATS
#define NO_MALLOC_STATS 0
#endif  /* NO_MALLOC_STATS */
#ifndef NO_SEGMENT_TRAVERSAL
#define NO_SEGMENT_TRAVERSAL 0
#endif  /* NO_SEGMENT_TRAVERSAL */

/*
  mallopt tuning options.  SVID/XPG defines four standard parameter
  numbers for mallopt, normally defined in malloc.h.  None of these
  are used in this malloc, so setting them has no effect. But this
  malloc does support the following options.
*/

#define M_TRIM_THRESHOLD     (-1)
#define M_GRANULARITY        (-2)
#define M_MMAP_THRESHOLD     (-3)

/* ------------------------ Mallinfo declarations ------------------------ */

#if !NO_MALLINFO
/*
  This version of malloc supports the standard SVID/XPG mallinfo
  routine that returns a struct containing usage properties and
  statistics. It should work on any system that has a
  /usr/include/malloc.h defining struct mallinfo.  The main
  declaration needed is the mallinfo struct that is returned (by-copy)
  by mallinfo().  The mallinfo struct contains a bunch of fields that
  are not even meaningful in this version of malloc.  These fields
  are instead filled by mallinfo() with other numbers that might be of
  interest.

  HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
  /usr/include/malloc.h file that includes a declaration of struct
  mallinfo.  If so, it is included; else a compliant version is
  declared below.  These must be precisely the same for mallinfo() to
  work.  The original SVID version of this struct, defined on most
  systems with mallinfo, declares all fields as ints. But some others
  define as unsigned long.  If your system defines the fields using a
  type of different width than listed here, you MUST #include your
  system version and #define HAVE_USR_INCLUDE_MALLOC_H.
*/

/* #define HAVE_USR_INCLUDE_MALLOC_H */

#ifdef HAVE_USR_INCLUDE_MALLOC_H
#include "/usr/include/malloc.h"
#else /* HAVE_USR_INCLUDE_MALLOC_H */
#ifndef STRUCT_MALLINFO_DECLARED
/* HP-UX (and others?) redefines mallinfo unless _STRUCT_MALLINFO is defined */
#define _STRUCT_MALLINFO
#define STRUCT_MALLINFO_DECLARED 1
struct mallinfo {
  MALLINFO_FIELD_TYPE arena;    /* non-mmapped space allocated from system */
  MALLINFO_FIELD_TYPE ordblks;  /* number of free chunks */
  MALLINFO_FIELD_TYPE smblks;   /* always 0 */
  MALLINFO_FIELD_TYPE hblks;    /* always 0 */
  MALLINFO_FIELD_TYPE hblkhd;   /* space in mmapped regions */
  MALLINFO_FIELD_TYPE usmblks;  /* maximum total allocated space */
  MALLINFO_FIELD_TYPE fsmblks;  /* always 0 */
  MALLINFO_FIELD_TYPE uordblks; /* total allocated space */
  MALLINFO_FIELD_TYPE fordblks; /* total free space */
  MALLINFO_FIELD_TYPE keepcost; /* releasable (via malloc_trim) space */
};
#endif /* STRUCT_MALLINFO_DECLARED */
#endif /* HAVE_USR_INCLUDE_MALLOC_H */
#endif /* NO_MALLINFO */

/*
  Try to persuade compilers to inline. The most critical functions for
  inlining are defined as macros, so these aren't used for them.
*/

#ifndef FORCEINLINE
  #if defined(__GNUC__)
    #define FORCEINLINE __inline __attribute__ ((always_inline))
  #elif defined(_MSC_VER)
    #define FORCEINLINE __forceinline
  #endif
#endif /* FORCEINLINE */
#ifndef NOINLINE
  #if defined(__GNUC__)
    #define NOINLINE __attribute__ ((noinline))
  #elif defined(_MSC_VER)
    #define NOINLINE __declspec(noinline)
  #else
    #define NOINLINE
  #endif
#endif /* NOINLINE */

#ifdef __cplusplus
extern "C" {
#ifndef FORCEINLINE
 #define FORCEINLINE inline
#endif
#endif /* __cplusplus */
#ifndef FORCEINLINE
 #define FORCEINLINE
#endif

#if !ONLY_MSPACES

/* ------------------- Declarations of public routines ------------------- */

#ifndef USE_DL_PREFIX
#define dlcalloc               calloc
#define dlfree                 free
#define dlmalloc               malloc
#define dlmemalign             memalign
#define dlposix_memalign       posix_memalign
#define dlrealloc              realloc
#define dlrealloc_in_place     realloc_in_place
#define dlvalloc               valloc
#define dlpvalloc              pvalloc
#define dlmallinfo             mallinfo
#define dlmallopt              mallopt
#define dlmalloc_trim          malloc_trim
#define dlmalloc_stats         malloc_stats
#define dlmalloc_usable_size   malloc_usable_size
#define dlmalloc_footprint     malloc_footprint
#define dlmalloc_max_footprint malloc_max_footprint
#define dlmalloc_footprint_limit     malloc_footprint_limit
#define dlmalloc_set_footprint_limit malloc_set_footprint_limit
#define dlmalloc_inspect_all   malloc_inspect_all
#define dlindependent_calloc   independent_calloc
#define dlindependent_comalloc independent_comalloc
#define dlbulk_free            bulk_free
#endif /* USE_DL_PREFIX */

/*
  malloc(size_t n)
  Returns a pointer to a newly allocated chunk of at least n bytes, or
  null if no space is available, in which case errno is set to ENOMEM
  on ANSI C systems.

  If n is zero, malloc returns a minimum-sized chunk. (The minimum
  size is 16 bytes on most 32bit systems, and 32 bytes on 64bit
  systems.)  Note that size_t is an unsigned type, so calls with
  arguments that would be negative if signed are interpreted as
  requests for huge amounts of space, which will often fail. The
  maximum supported value of n differs across systems, but is in all
  cases less than the maximum representable value of a size_t.
*/
DLMALLOC_EXPORT void* dlmalloc(size_t);

/*
  free(void* p)
  Releases the chunk of memory pointed to by p, that had been previously
  allocated using malloc or a related routine such as realloc.
  It has no effect if p is null. If p was not malloced or already
  freed, free(p) will by default cause the current program to abort.
*/
DLMALLOC_EXPORT void dlfree(void*);

/*
  calloc(size_t n_elements, size_t element_size);
  Returns a pointer to n_elements * element_size bytes, with all locations
  set to zero.
*/
DLMALLOC_EXPORT void* dlcalloc(size_t, size_t);

/*
  realloc(void* p, size_t n)
  Returns a pointer to a chunk of size n that contains the same data
  as does chunk p up to the minimum of (n, p's size) bytes, or null
  if no space is available.

  The returned pointer may or may not be the same as p. The algorithm
  prefers extending p in most cases when possible, otherwise it
  employs the equivalent of a malloc-copy-free sequence.

  If p is null, realloc is equivalent to malloc.

  If space is not available, realloc returns null, errno is set (if on
  ANSI) and p is NOT freed.

  If n is for fewer bytes than already held by p, the newly unused
  space is lopped off and freed if possible.  realloc with a size
  argument of zero (re)allocates a minimum-sized chunk.

  The old unix realloc convention of allowing the last-free'd chunk
  to be used as an argument to realloc is not supported.
*/
DLMALLOC_EXPORT void* dlrealloc(void*, size_t);

/*
  realloc_in_place(void* p, size_t n)
  Resizes the space allocated for p to size n, only if this can be
  done without moving p (i.e., only if there is adjacent space
  available if n is greater than p's current allocated size, or n is
  less than or equal to p's size). This may be used instead of plain
  realloc if an alternative allocation strategy is needed upon failure
  to expand space; for example, reallocation of a buffer that must be
  memory-aligned or cleared. You can use realloc_in_place to trigger
  these alternatives only when needed.

  Returns p if successful; otherwise null.
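
  For instance, a sketch of the fallback pattern described above
  (grow_buffer is hypothetical and assumes newsize > oldsize):
    void* grow_buffer(void* p, size_t oldsize, size_t newsize) {
      if (realloc_in_place(p, newsize) != 0)
        return p;                   // extended in place; p unchanged
      void* q = calloc(1, newsize); // alternative strategy: cleared space
      if (q != 0) {
        memcpy(q, p, oldsize);
        free(p);
      }
      return q;
    }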
*/
DLMALLOC_EXPORT void* dlrealloc_in_place(void*, size_t);

/*
  memalign(size_t alignment, size_t n);
  Returns a pointer to a newly allocated chunk of n bytes, aligned
  in accord with the alignment argument.

  The alignment argument should be a power of two. If the argument is
  not a power of two, the nearest greater power is used.
  8-byte alignment is guaranteed by normal malloc calls, so don't
  bother calling memalign with an argument of 8 or less.

  Overreliance on memalign is a sure way to fragment space.
*/
DLMALLOC_EXPORT void* dlmemalign(size_t, size_t);

/*
  int posix_memalign(void** pp, size_t alignment, size_t n);
  Allocates a chunk of n bytes, aligned in accord with the alignment
  argument. Differs from memalign only in that it (1) assigns the
  allocated memory to *pp rather than returning it, (2) fails and
  returns EINVAL if the alignment is not a power of two, (3) fails and
  returns ENOMEM if memory cannot be allocated.
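
  For example (a sketch):
    void* p;
    int rc = posix_memalign(&p, 64, 1024); // a 64-byte-aligned kilobyte
    if (rc == 0) {
      // ... use p ...
      free(p);
    }                  // otherwise rc is EINVAL or ENOMEM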
*/
DLMALLOC_EXPORT int dlposix_memalign(void**, size_t, size_t);

/*
  valloc(size_t n);
  Equivalent to memalign(pagesize, n), where pagesize is the page
  size of the system. If the pagesize is unknown, 4096 is used.
*/
DLMALLOC_EXPORT void* dlvalloc(size_t);

/*
  mallopt(int parameter_number, int parameter_value)
  Sets tunable parameters. The format is to provide a
  (parameter-number, parameter-value) pair.  mallopt then sets the
  corresponding parameter to the argument value if it can (i.e., so
  long as the value is meaningful), and returns 1 if successful else
  0.  To work around the fact that mallopt is specified to use int,
  not size_t parameters, the value -1 is specially treated as the
  maximum unsigned size_t value.

  SVID/XPG/ANSI defines four standard param numbers for mallopt,
  normally defined in malloc.h.  None of these are used in this malloc,
  so setting them has no effect. But this malloc also supports other
  options in mallopt. See below for details.  Briefly, supported
  parameters are as follows (listed defaults are for "typical"
  configurations).

  Symbol            param #  default    allowed param values
  M_TRIM_THRESHOLD     -1   2*1024*1024   any   (-1 disables)
  M_GRANULARITY        -2     page size   any power of 2 >= page size
  M_MMAP_THRESHOLD     -3      256*1024   any   (or 0 if no MMAP support)
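
  For example, a sketch using the parameters above:
    mallopt(M_TRIM_THRESHOLD, 128*1024);  // trim once 128K is unused
    mallopt(M_GRANULARITY,    64*1024);   // take system memory in 64K units
    mallopt(M_MMAP_THRESHOLD, -1);        // -1 maps to maximum size_t,
                                          // effectively disabling mmap use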
*/
DLMALLOC_EXPORT int dlmallopt(int, int);

/*
  malloc_footprint();
  Returns the number of bytes obtained from the system.  The total
  number of bytes allocated by malloc, realloc etc., is less than this
  value. Unlike mallinfo, this function returns only a precomputed
  result, so can be called frequently to monitor memory consumption.
  Even if locks are otherwise defined, this function does not use them,
  so results might not be up to date.
*/
DLMALLOC_EXPORT size_t dlmalloc_footprint(void);

/*
  malloc_max_footprint();
  Returns the maximum number of bytes obtained from the system. This
  value will be greater than current footprint if deallocated space
  has been reclaimed by the system. The peak number of bytes allocated
  by malloc, realloc etc., is less than this value. Unlike mallinfo,
  this function returns only a precomputed result, so can be called
  frequently to monitor memory consumption.  Even if locks are
  otherwise defined, this function does not use them, so results might
  not be up to date.
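
  For example, a sketch of occasional monitoring (assumes stdio and
  these default, unprefixed names are available in your build):
    fprintf(stderr, "footprint: %zu bytes (peak %zu)\n",
            malloc_footprint(), malloc_max_footprint());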
*/
DLMALLOC_EXPORT size_t dlmalloc_max_footprint(void);

/*
  malloc_footprint_limit();
  Returns the number of bytes that the heap is allowed to obtain from
  the system, returning the last value returned by
  malloc_set_footprint_limit, or the maximum size_t value if
  never set. The returned value reflects a permission. There is no
  guarantee that this number of bytes can actually be obtained from
  the system.
*/
DLMALLOC_EXPORT size_t dlmalloc_footprint_limit(void);

/*
  malloc_set_footprint_limit();
  Sets the maximum number of bytes to obtain from the system, causing
  failure returns from malloc and related functions upon attempts to
  exceed this value. The argument value may be subject to page
  rounding to an enforceable limit; this actual value is returned.
  Using an argument of the maximum possible size_t effectively
  disables checks. If the argument is less than or equal to the
  current malloc_footprint, then all future allocations that require
  additional system memory will fail. However, invocation cannot
  retroactively deallocate existing used memory.
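
  For example, a sketch capping the heap at 64MB (the enforced value
  may be page-rounded, so the return value is the one to trust):
    size_t limit = malloc_set_footprint_limit((size_t)64 * 1024 * 1024);
    // allocations needing system memory beyond 'limit' will now fail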
*/
DLMALLOC_EXPORT size_t dlmalloc_set_footprint_limit(size_t bytes);

#if MALLOC_INSPECT_ALL
/*
  malloc_inspect_all(void(*handler)(void *start,
                                    void *end,
                                    size_t used_bytes,
                                    void* callback_arg),
                     void* arg);
  Traverses the heap and calls the given handler for each managed
  region, skipping all bytes that are (or may be) used for bookkeeping
  purposes.  Traversal does not include chunks that have been
  directly memory mapped. Each reported region begins at the start
  address, and continues up to but not including the end address.  The
  first used_bytes of the region contain allocated data. If
  used_bytes is zero, the region is unallocated. The handler is
  invoked with the given callback argument. If locks are defined, they
  are held during the entire traversal. It is a bad idea to invoke
  other malloc functions from within the handler.

  For example, to count the number of in-use chunks of at least 1000
  bytes, you could write:
    static int count = 0;
    void count_chunks(void* start, void* end, size_t used, void* arg) {
      if (used >= 1000) ++count;
    }
  then:
    malloc_inspect_all(count_chunks, NULL);

  malloc_inspect_all is compiled only if MALLOC_INSPECT_ALL is defined.
*/
DLMALLOC_EXPORT void dlmalloc_inspect_all(void(*handler)(void*, void *, size_t, void*),
                                          void* arg);
#endif /* MALLOC_INSPECT_ALL */

#if !NO_MALLINFO
/*
  mallinfo()
  Returns (by copy) a struct containing various summary statistics:

  arena:     current total non-mmapped bytes allocated from system
  ordblks:   the number of free chunks
  smblks:    always zero.
  hblks:     current number of mmapped regions
  hblkhd:    total bytes held in mmapped regions
  usmblks:   the maximum total allocated space. This will be greater
               than current total if trimming has occurred.
  fsmblks:   always zero
  uordblks:  current total allocated space (normal or mmapped)
  fordblks:  total free space
  keepcost:  the maximum number of bytes that could ideally be released
               back to system via malloc_trim. ("ideally" means that
               it ignores page restrictions etc.)

  Because these fields are ints, but internal bookkeeping may
  be kept as longs, the reported values may wrap around zero and
  thus be inaccurate.
*/
DLMALLOC_EXPORT struct mallinfo dlmallinfo(void);
#endif /* NO_MALLINFO */

/*
  independent_calloc(size_t n_elements, size_t element_size, void* chunks[]);

  independent_calloc is similar to calloc, but instead of returning a
  single cleared space, it returns an array of pointers to n_elements
  independent elements that can hold contents of size elem_size, each
  of which starts out cleared, and can be independently freed,
  realloc'ed etc. The elements are guaranteed to be adjacently
  allocated (this is not guaranteed to occur with multiple callocs or
  mallocs), which may also improve cache locality in some
  applications.

  The "chunks" argument is optional (i.e., may be null, which is
  probably the most typical usage). If it is null, the returned array
  is itself dynamically allocated and should also be freed when it is
  no longer needed. Otherwise, the chunks array must be of at least
  n_elements in length. It is filled in with the pointers to the
  chunks.

  In either case, independent_calloc returns this pointer array, or
  null if the allocation failed.  If n_elements is zero and "chunks"
  is null, it returns a chunk representing an array with zero elements
  (which should be freed if not wanted).

  Each element must be freed when it is no longer needed. This can be
  done all at once using bulk_free.

  independent_calloc simplifies and speeds up implementations of many
  kinds of pools.  It may also be useful when constructing large data
  structures that initially have a fixed number of fixed-sized nodes,
  but the number is not known at compile time, and some of the nodes
  may later need to be freed. For example:

  struct Node { int item; struct Node* next; };

  struct Node* build_list() {
    struct Node** pool;
    int n = read_number_of_nodes_needed();
    int i;
    if (n <= 0) return 0;
    pool = (struct Node**)independent_calloc(n, sizeof(struct Node), 0);
    if (pool == 0) die();
    // organize into a linked list...
    struct Node* first = pool[0];
    for (i = 0; i < n-1; ++i)
      pool[i]->next = pool[i+1];
    free(pool);   // Can now free the array (or not, if it is needed later)
    return first;
  }
*/
DLMALLOC_EXPORT void** dlindependent_calloc(size_t, size_t, void**);

/*
  independent_comalloc(size_t n_elements, size_t sizes[], void* chunks[]);

  independent_comalloc allocates, all at once, a set of n_elements
  chunks with sizes indicated in the "sizes" array.  It returns
  an array of pointers to these elements, each of which can be
  independently freed, realloc'ed etc. The elements are guaranteed to
  be adjacently allocated (this is not guaranteed to occur with
  multiple callocs or mallocs), which may also improve cache locality
  in some applications.

  The "chunks" argument is optional (i.e., may be null). If it is null
  the returned array is itself dynamically allocated and should also
  be freed when it is no longer needed. Otherwise, the chunks array
  must be of at least n_elements in length. It is filled in with the
  pointers to the chunks.

  In either case, independent_comalloc returns this pointer array, or
  null if the allocation failed.  If n_elements is zero and chunks is
  null, it returns a chunk representing an array with zero elements
  (which should be freed if not wanted).

  Each element must be freed when it is no longer needed. This can be
  done all at once using bulk_free.

  independent_comalloc differs from independent_calloc in that each
  element may have a different size, and also that it does not
  automatically clear elements.

  independent_comalloc can be used to speed up allocation in cases
  where several structs or objects must always be allocated at the
  same time.  For example:

  struct Head { ... }
  struct Foot { ... }

  void send_message(char* msg) {
    int msglen = strlen(msg);
    size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
    void* chunks[3];
    if (independent_comalloc(3, sizes, chunks) == 0)
      die();
    struct Head* head = (struct Head*)(chunks[0]);
    char*        body = (char*)(chunks[1]);
    struct Foot* foot = (struct Foot*)(chunks[2]);
    // ...
  }

  In general though, independent_comalloc is worth using only for
  larger values of n_elements. For small values, you probably won't
  detect enough difference from series of malloc calls to bother.

  Overuse of independent_comalloc can increase overall memory usage,
  since it cannot reuse existing noncontiguous small chunks that
  might be available for some of the elements.
*/
DLMALLOC_EXPORT void** dlindependent_comalloc(size_t, size_t*, void**);

/*
  bulk_free(void* array[], size_t n_elements)
  Frees and clears (sets to null) each non-null pointer in the given
  array.  This is likely to be faster than freeing them one-by-one.
  If footers are used, pointers that have been allocated in different
  mspaces are not freed or cleared, and the count of all such pointers
  is returned.  For large arrays of pointers with poor locality, it
  may be worthwhile to sort this array before calling bulk_free.
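
  For example, a sketch reusing the struct Node pool from the
  independent_calloc example above:
    void** pool = independent_calloc(n, sizeof(struct Node), 0);
    // ... use the n elements ...
    size_t unfreed = bulk_free(pool, n); // frees elements, nulls entries
    free(pool);                          // the pointer array itself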
*/
DLMALLOC_EXPORT size_t dlbulk_free(void**, size_t n_elements);

/*
  pvalloc(size_t n);
  Equivalent to valloc(minimum-page-that-holds(n)), that is,
  round up n to nearest pagesize.
*/
DLMALLOC_EXPORT void* dlpvalloc(size_t);

/*
  malloc_trim(size_t pad);

  If possible, gives memory back to the system (via negative arguments
  to sbrk) if there is unused memory at the `high' end of the malloc
  pool or in unused MMAP segments. You can call this after freeing
  large blocks of memory to potentially reduce the system-level memory
  requirements of a program. However, it cannot guarantee to reduce
  memory. Under some allocation patterns, some large free blocks of
  memory will be locked between two used chunks, so they cannot be
  given back to the system.

  The `pad' argument to malloc_trim represents the amount of free
  trailing space to leave untrimmed. If this argument is zero, only
  the minimum amount of memory to maintain internal data structures
  will be left. Non-zero arguments can be supplied to maintain enough
  trailing space to service future expected allocations without having
  to re-obtain memory from the system.

  Malloc_trim returns 1 if it actually released any memory, else 0.
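
  For example, a sketch run after releasing a large working set
  (big_buffer is hypothetical):
    free(big_buffer);
    if (malloc_trim(64 * 1024))  // keep 64K of slack for upcoming requests
      ; // some memory was returned to the system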
*/
DLMALLOC_EXPORT int dlmalloc_trim(size_t);

/*
  malloc_stats();
  Prints on stderr the amount of space obtained from the system (both
  via sbrk and mmap), the maximum amount (which may be more than
  current if malloc_trim and/or munmap got called), and the current
  number of bytes allocated via malloc (or realloc, etc) but not yet
  freed. Note that this is the number of bytes allocated, not the
  number requested. It will be larger than the number requested
  because of alignment and bookkeeping overhead. Because it includes
  alignment wastage as being in use, this figure may be greater than
  zero even when no user-level chunks are allocated.

  The reported current and maximum system memory can be inaccurate if
  a program makes other calls to system memory allocation functions
  (normally sbrk) outside of malloc.

  malloc_stats prints only the most commonly interesting statistics.
  More information can be obtained by calling mallinfo.
*/
DLMALLOC_EXPORT void dlmalloc_stats(void);

#endif /* ONLY_MSPACES */

/*
  malloc_usable_size(void* p);

  Returns the number of bytes you can actually use in
  an allocated chunk, which may be more than you requested (although
  often not) due to alignment and minimum size constraints.
  You can use this many bytes without worrying about
  overwriting other allocated objects. This is not a particularly great
  programming practice. malloc_usable_size can be more useful in
  debugging and assertions, for example:

    p = malloc(n);
    assert(malloc_usable_size(p) >= 256);
*/
size_t dlmalloc_usable_size(void*);
1265 mspace is an opaque type representing an independent
\r
1266 region of space that supports mspace_malloc, etc.
\r
1268 typedef void* mspace;
\r
1271 create_mspace creates and returns a new independent space with the
\r
1272 given initial capacity, or, if 0, the default granularity size. It
\r
1273 returns null if there is no system memory available to create the
\r
1274 space. If argument locked is non-zero, the space uses a separate
\r
1275 lock to control access. The capacity of the space will grow
\r
1276 dynamically as needed to service mspace_malloc requests. You can
\r
1277 control the sizes of incremental increases of this space by
\r
1278 compiling with a different DEFAULT_GRANULARITY or dynamically
\r
1279 setting with mallopt(M_GRANULARITY, value).
\r
1281 DLMALLOC_EXPORT mspace create_mspace(size_t capacity, int locked);
\r
1284 destroy_mspace destroys the given space, and attempts to return all
\r
1285 of its memory back to the system, returning the total number of
\r
1286 bytes freed. After destruction, the results of access to all memory
\r
1287 used by the space become undefined.
\r
1289 DLMALLOC_EXPORT size_t destroy_mspace(mspace msp);
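/*
  A sketch of the intended lifecycle (capacity and request sizes are
  illustrative):

    mspace msp = create_mspace(0, 1);        // default capacity, locked
    if (msp != 0) {
      void* p = mspace_malloc(msp, 128);
      // ... use p ...
      size_t released = destroy_mspace(msp); // releases p and all the rest
    }
*/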
\r
1292 create_mspace_with_base uses the memory supplied as the initial base
\r
1293 of a new mspace. Part (less than 128*sizeof(size_t) bytes) of this
\r
1294 space is used for bookkeeping, so the capacity must be at least this
\r
1295 large. (Otherwise 0 is returned.) When this initial space is
\r
1296 exhausted, additional memory will be obtained from the system.
\r
1297 Destroying this space will deallocate all additionally allocated
\r
1298 space (if possible) but not the initial base.
\r
1300 DLMALLOC_EXPORT mspace create_mspace_with_base(void* base, size_t capacity, int locked);
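/*
  For example, to carve an mspace out of caller-supplied storage (the
  buffer and its size are illustrative; the capacity must exceed the
  bookkeeping overhead noted above):

    static char arena[1 << 20];
    mspace msp = create_mspace_with_base(arena, sizeof(arena), 0);
    void* p = (msp != 0)? mspace_malloc(msp, 256) : 0;
*/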
\r
1303 mspace_track_large_chunks controls whether requests for large chunks
\r
1304 are allocated in their own untracked mmapped regions, separate from
\r
1305 others in this mspace. By default large chunks are not tracked,
\r
1306 which reduces fragmentation. However, such chunks are not
\r
1307 necessarily released to the system upon destroy_mspace. Enabling
\r
1308 tracking by setting to true may increase fragmentation, but avoids
\r
1309 leakage when relying on destroy_mspace to release all memory
\r
1310 allocated using this space. The function returns the previous setting.
\r
1313 DLMALLOC_EXPORT int mspace_track_large_chunks(mspace msp, int enable);
\r
1317 mspace_malloc behaves as malloc, but operates within the given space.
\r
1320 DLMALLOC_EXPORT void* mspace_malloc(mspace msp, size_t bytes);
\r
1323 mspace_free behaves as free, but operates within the given space.
\r
1326 If compiled with FOOTERS==1, mspace_free is not actually needed.
\r
1327 free may be called instead of mspace_free because freed chunks from
\r
1328 any space are handled by their originating spaces.
\r
1330 DLMALLOC_EXPORT void mspace_free(mspace msp, void* mem);
\r
1333 mspace_realloc behaves as realloc, but operates within the given space.
\r
1336 If compiled with FOOTERS==1, mspace_realloc is not actually
\r
1337 needed. realloc may be called instead of mspace_realloc because
\r
1338 realloced chunks from any space are handled by their originating spaces.
\r
1341 DLMALLOC_EXPORT void* mspace_realloc(mspace msp, void* mem, size_t newsize);
\r
1344 mspace_calloc behaves as calloc, but operates within the given space.
\r
1347 DLMALLOC_EXPORT void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size);
\r
1350 mspace_memalign behaves as memalign, but operates within the given space.
\r
1353 DLMALLOC_EXPORT void* mspace_memalign(mspace msp, size_t alignment, size_t bytes);
\r
1356 mspace_independent_calloc behaves as independent_calloc, but
\r
1357 operates within the given space.
\r
1359 DLMALLOC_EXPORT void** mspace_independent_calloc(mspace msp, size_t n_elements,
\r
1360 size_t elem_size, void* chunks[]);
\r
1363 mspace_independent_comalloc behaves as independent_comalloc, but
\r
1364 operates within the given space.
\r
1366 DLMALLOC_EXPORT void** mspace_independent_comalloc(mspace msp, size_t n_elements,
\r
1367 size_t sizes[], void* chunks[]);
\r
1370 mspace_footprint() returns the number of bytes obtained from the
\r
1371 system for this space.
\r
1373 DLMALLOC_EXPORT size_t mspace_footprint(mspace msp);
\r
1376 mspace_max_footprint() returns the peak number of bytes obtained from the
\r
1377 system for this space.
\r
1379 DLMALLOC_EXPORT size_t mspace_max_footprint(mspace msp);
\r
1384 mspace_mallinfo behaves as mallinfo, but reports properties of the given space.
\r
1387 DLMALLOC_EXPORT struct mallinfo mspace_mallinfo(mspace msp);
\r
1388 #endif /* NO_MALLINFO */
\r
1391 mspace_usable_size(void* p) behaves the same as malloc_usable_size.
\r
1393 DLMALLOC_EXPORT size_t mspace_usable_size(void* mem);
\r
1396 mspace_malloc_stats behaves as malloc_stats, but reports
\r
1397 properties of the given space.
\r
1399 DLMALLOC_EXPORT void mspace_malloc_stats(mspace msp);
\r
1402 mspace_trim behaves as malloc_trim, but
\r
1403 operates within the given space.
\r
1405 DLMALLOC_EXPORT int mspace_trim(mspace msp, size_t pad);
\r
1408 An alias for mallopt.
\r
1410 DLMALLOC_EXPORT int mspace_mallopt(int, int);
\r
1412 #endif /* MSPACES */
\r
1414 #ifdef __cplusplus
\r
1415 } /* end of extern "C" */
\r
1416 #endif /* __cplusplus */
\r
1419 ========================================================================
\r
1420 To make a fully customizable malloc.h header file, cut everything
\r
1421 above this line, put into file malloc.h, edit to suit, and #include it
\r
1422 on the next line, as well as in programs that use this malloc.
\r
1423 ========================================================================
\r
1426 /* #include "malloc.h" */
\r
1428 /*------------------------------ internal #includes ---------------------- */
\r
1430 #ifdef _MSC_VER
1431 #pragma warning( disable : 4146 ) /* no "unsigned" warnings */
\r
1432 #endif /* _MSC_VER */
\r
1433 #if !NO_MALLOC_STATS
\r
1434 #include <stdio.h> /* for printing in malloc_stats */
\r
1435 #endif /* NO_MALLOC_STATS */
\r
1436 #ifndef LACKS_ERRNO_H
\r
1437 #include <errno.h> /* for MALLOC_FAILURE_ACTION */
\r
1438 #endif /* LACKS_ERRNO_H */
\r
1440 #if ABORT_ON_ASSERT_FAILURE
\r
1441 #undef assert
1442 #define assert(x) if(!(x)) ABORT
\r
1443 #else /* ABORT_ON_ASSERT_FAILURE */
\r
1444 #include <assert.h>
\r
1445 #endif /* ABORT_ON_ASSERT_FAILURE */
\r
1451 #endif /* DEBUG */
\r
1452 #if !defined(WIN32) && !defined(LACKS_TIME_H)
\r
1453 #include <time.h> /* for magic initialization */
\r
1454 #endif /* WIN32 */
\r
1455 #ifndef LACKS_STDLIB_H
\r
1456 #include <stdlib.h> /* for abort() */
\r
1457 #endif /* LACKS_STDLIB_H */
\r
1458 #ifndef LACKS_STRING_H
\r
1459 #include <string.h> /* for memset etc */
\r
1460 #endif /* LACKS_STRING_H */
\r
1461 #if USE_BUILTIN_FFS
\r
1462 #ifndef LACKS_STRINGS_H
\r
1463 #include <strings.h> /* for ffs */
\r
1464 #endif /* LACKS_STRINGS_H */
\r
1465 #endif /* USE_BUILTIN_FFS */
\r
1467 #ifndef LACKS_SYS_MMAN_H
\r
1468 /* On some versions of linux, mremap decl in mman.h needs __USE_GNU set */
\r
1469 #if (defined(linux) && !defined(__USE_GNU))
\r
1470 #define __USE_GNU 1
1471 #include <sys/mman.h>    /* for mmap */
1472 #undef __USE_GNU
1473 #else /* linux */
1474 #include <sys/mman.h>    /* for mmap */
\r
1475 #endif /* linux */
\r
1476 #endif /* LACKS_SYS_MMAN_H */
\r
1477 #ifndef LACKS_FCNTL_H
\r
1478 #include <fcntl.h>
\r
1479 #endif /* LACKS_FCNTL_H */
\r
1480 #endif /* HAVE_MMAP */
\r
1481 #ifndef LACKS_UNISTD_H
\r
1482 #include <unistd.h> /* for sbrk, sysconf */
\r
1483 #else /* LACKS_UNISTD_H */
\r
1484 #if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
\r
1485 /*extern void* sbrk(ptrdiff_t);*/
\r
1486 #endif /* FreeBSD etc */
\r
1487 #endif /* LACKS_UNISTD_H */
\r
1489 /* Declarations for locking */
\r
1490 #if USE_LOCKS
1491 #ifndef WIN32
1492 #if defined (__SVR4) && defined (__sun) /* solaris */
\r
1493 #include <thread.h>
\r
1494 #elif !defined(LACKS_SCHED_H)
\r
1495 #include <sched.h>
\r
1496 #endif /* solaris or LACKS_SCHED_H */
\r
1497 #if (defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0) || !USE_SPIN_LOCKS
\r
1498 /*#include <pthread.h>*/
\r
1499 #endif /* USE_RECURSIVE_LOCKS ... */
\r
1500 #elif defined(_MSC_VER)
\r
1501 #ifndef _M_AMD64
1502 /* These are already defined on AMD64 builds */
1503 #ifdef __cplusplus
1504 extern "C" {
1505 #endif /* __cplusplus */
\r
1506 LONG __cdecl _InterlockedCompareExchange(LONG volatile *Dest, LONG Exchange, LONG Comp);
\r
1507 LONG __cdecl _InterlockedExchange(LONG volatile *Target, LONG Value);
\r
1508 #ifdef __cplusplus
1509 }
1510 #endif /* __cplusplus */
\r
1511 #endif /* _M_AMD64 */
\r
1512 #pragma intrinsic (_InterlockedCompareExchange)
\r
1513 #pragma intrinsic (_InterlockedExchange)
\r
1514 #define interlockedcompareexchange _InterlockedCompareExchange
\r
1515 #define interlockedexchange _InterlockedExchange
\r
1516 #elif defined(WIN32) && defined(__GNUC__)
\r
1517 #define interlockedcompareexchange(a, b, c) __sync_val_compare_and_swap(a, c, b)
\r
1518 #define interlockedexchange __sync_lock_test_and_set
\r
1519 #endif /* Win32 */
\r
1520 #endif /* USE_LOCKS */
\r
1522 /* Declarations for bit scanning on win32 */
\r
1523 #if defined(_MSC_VER) && _MSC_VER>=1300
\r
1524 #ifndef BitScanForward /* Try to avoid pulling in WinNT.h */
\r
1525 #ifdef __cplusplus
1526 extern "C" {
1527 #endif /* __cplusplus */
\r
1528 unsigned char _BitScanForward(unsigned long *index, unsigned long mask);
\r
1529 unsigned char _BitScanReverse(unsigned long *index, unsigned long mask);
\r
1530 #ifdef __cplusplus
1531 }
1532 #endif /* __cplusplus */
\r
1534 #define BitScanForward _BitScanForward
\r
1535 #define BitScanReverse _BitScanReverse
\r
1536 #pragma intrinsic(_BitScanForward)
\r
1537 #pragma intrinsic(_BitScanReverse)
\r
1538 #endif /* BitScanForward */
\r
1539 #endif /* defined(_MSC_VER) && _MSC_VER>=1300 */
\r
1542 #ifndef malloc_getpagesize
\r
1543 # ifdef _SC_PAGESIZE /* some SVR4 systems omit an underscore */
\r
1544 # ifndef _SC_PAGE_SIZE
\r
1545 # define _SC_PAGE_SIZE _SC_PAGESIZE
\r
1548 # ifdef _SC_PAGE_SIZE
\r
1549 # define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
\r
1551 # if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
\r
1552 extern size_t getpagesize();
\r
1553 # define malloc_getpagesize getpagesize()
\r
1555 # ifdef WIN32 /* use supplied emulation of getpagesize */
\r
1556 # define malloc_getpagesize getpagesize()
\r
1558 # ifndef LACKS_SYS_PARAM_H
\r
1559 # include <sys/param.h>
\r
1561 # ifdef EXEC_PAGESIZE
\r
1562 # define malloc_getpagesize EXEC_PAGESIZE
\r
1566 # define malloc_getpagesize NBPG
\r
1568 # define malloc_getpagesize (NBPG * CLSIZE)
\r
1572 # define malloc_getpagesize NBPC
\r
1575 # define malloc_getpagesize PAGESIZE
\r
1576 # else /* just guess */
\r
1577 # define malloc_getpagesize ((size_t)4096U)
\r
1588 /* ------------------- size_t and alignment properties -------------------- */
\r
1590 /* The byte and bit size of a size_t */
\r
1591 #define SIZE_T_SIZE (sizeof(size_t))
\r
1592 #define SIZE_T_BITSIZE (sizeof(size_t) << 3)
\r
1594 /* Some constants coerced to size_t */
\r
1595 /* Annoying but necessary to avoid errors on some platforms */
\r
1596 #define SIZE_T_ZERO ((size_t)0)
\r
1597 #define SIZE_T_ONE ((size_t)1)
\r
1598 #define SIZE_T_TWO ((size_t)2)
\r
1599 #define SIZE_T_FOUR ((size_t)4)
\r
1600 #define TWO_SIZE_T_SIZES (SIZE_T_SIZE<<1)
\r
1601 #define FOUR_SIZE_T_SIZES (SIZE_T_SIZE<<2)
\r
1602 #define SIX_SIZE_T_SIZES (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES)
\r
1603 #define HALF_MAX_SIZE_T (MAX_SIZE_T / 2U)
\r
1605 /* The bit mask value corresponding to MALLOC_ALIGNMENT */
\r
1606 #define CHUNK_ALIGN_MASK (MALLOC_ALIGNMENT - SIZE_T_ONE)
\r
1608 /* True if address a has acceptable alignment */
\r
1609 #define is_aligned(A) (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0)
\r
1611 /* the number of bytes to offset an address to align it */
\r
1612 #define align_offset(A)\
\r
1613 ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\
\r
1614 ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK))
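/* Worked example (assuming the default MALLOC_ALIGNMENT of 8):
   align_offset(13) == 3, since 13 + 3 == 16 is 8-byte aligned, while
   align_offset(16) == 0. */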
\r
1616 /* -------------------------- MMAP preliminaries ------------------------- */
\r
1619 If HAVE_MORECORE or HAVE_MMAP are false, we just define calls and
\r
1620 checks to fail so compiler optimizer can delete code rather than
\r
1621 using so many "#if"s.
\r
1625 /* MORECORE and MMAP must return MFAIL on failure */
\r
1626 #define MFAIL ((void*)(MAX_SIZE_T))
\r
1627 #define CMFAIL ((char*)(MFAIL)) /* defined for convenience */
\r
1630 #if HAVE_MMAP
1631 #ifdef MMAP_DEFAULT
\r
1632 #elif !defined(WIN32)
\r
1633 #define MUNMAP_DEFAULT(a, s) munmap((a), (s))
\r
1634 #define MMAP_PROT (PROT_READ|PROT_WRITE)
\r
1635 #if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
\r
1636 #define MAP_ANONYMOUS MAP_ANON
\r
1637 #endif /* MAP_ANON */
\r
1638 #ifdef MAP_ANONYMOUS
\r
1639 #define MMAP_FLAGS (MAP_PRIVATE|MAP_ANONYMOUS)
\r
1640 #define MMAP_DEFAULT(s) mmap(0, (s), MMAP_PROT, MMAP_FLAGS, -1, 0)
\r
1641 #else /* MAP_ANONYMOUS */
\r
1643 Nearly all versions of mmap support MAP_ANONYMOUS, so the following
\r
1644 is unlikely to be needed, but is supplied just in case.
\r
1646 #define MMAP_FLAGS (MAP_PRIVATE)
\r
1647 static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
\r
1648 #define MMAP_DEFAULT(s) ((dev_zero_fd < 0) ? \
\r
1649 (dev_zero_fd = open("/dev/zero", O_RDWR), \
\r
1650 mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) : \
\r
1651 mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0))
\r
1652 #endif /* MAP_ANONYMOUS */
\r
1654 #define DIRECT_MMAP_DEFAULT(s) MMAP_DEFAULT(s)
\r
1658 /* Win32 MMAP via VirtualAlloc */
\r
1659 static FORCEINLINE void* win32mmap(size_t size) {
\r
1660 void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT, PAGE_READWRITE);
\r
1661   return (ptr != 0)? ptr: MFAIL;
1662 }
\r
1664 /* For direct MMAP, use MEM_TOP_DOWN to minimize interference */
\r
1665 static FORCEINLINE void* win32direct_mmap(size_t size) {
\r
1666   void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT|MEM_TOP_DOWN,
1667                            PAGE_READWRITE);
1668   return (ptr != 0)? ptr: MFAIL;
1669 }
\r
1671 /* This function supports releasing coalesced segments */
\r
1672 static FORCEINLINE int win32munmap(void* ptr, size_t size) {
1673   MEMORY_BASIC_INFORMATION minfo;
1674   char* cptr = (char*)ptr;
1675   while (size) {
1676     if (VirtualQuery(cptr, &minfo, sizeof(minfo)) == 0)
1677       return -1;
1678     if (minfo.BaseAddress != cptr || minfo.AllocationBase != cptr ||
1679         minfo.State != MEM_COMMIT || minfo.RegionSize > size)
1680       return -1;
1681     if (VirtualFree(cptr, 0, MEM_RELEASE) == 0)
1682       return -1;
1683     cptr += minfo.RegionSize;
1684     size -= minfo.RegionSize;
1685   }
1686   return 0;
1687 }
\r
1689 #define MMAP_DEFAULT(s) win32mmap(s)
\r
1690 #define MUNMAP_DEFAULT(a, s) win32munmap((a), (s))
\r
1691 #define DIRECT_MMAP_DEFAULT(s) win32direct_mmap(s)
\r
1692 #endif /* WIN32 */
\r
1693 #endif /* HAVE_MMAP */
\r
1695 #if HAVE_MREMAP && !defined(MREMAP_DEFAULT)
\r
1696 #ifndef WIN32
1697 #define MREMAP_DEFAULT(addr, osz, nsz, mv) mremap((addr), (osz), (nsz), (mv))
\r
1698 #endif /* WIN32 */
\r
1699 #endif /* HAVE_MREMAP */
\r
1701 /**
1702  * Define CALL_MORECORE
1703  */
1704 #if HAVE_MORECORE
1705 #ifdef MORECORE
\r
1706 #define CALL_MORECORE(S) MORECORE(S)
\r
1707 #else /* MORECORE */
\r
1708 #define CALL_MORECORE(S) MORECORE_DEFAULT(S)
\r
1709 #endif /* MORECORE */
\r
1710 #else /* HAVE_MORECORE */
\r
1711 #define CALL_MORECORE(S) MFAIL
\r
1712 #endif /* HAVE_MORECORE */
\r
1714 /**
1715  * Define CALL_MMAP/CALL_MUNMAP/CALL_DIRECT_MMAP
1716  */
\r
1717 #if HAVE_MMAP
1718 #define USE_MMAP_BIT (SIZE_T_ONE)
1720 #ifdef MMAP
1721 #define CALL_MMAP(s) MMAP(s)
1722 #else /* MMAP */
1723 #define CALL_MMAP(s) MMAP_DEFAULT(s)
1724 #endif /* MMAP */
1725 #ifdef MUNMAP
1726 #define CALL_MUNMAP(a, s) MUNMAP((a), (s))
\r
1727 #else /* MUNMAP */
\r
1728 #define CALL_MUNMAP(a, s) MUNMAP_DEFAULT((a), (s))
\r
1729 #endif /* MUNMAP */
\r
1730 #ifdef DIRECT_MMAP
\r
1731 #define CALL_DIRECT_MMAP(s) DIRECT_MMAP(s)
\r
1732 #else /* DIRECT_MMAP */
\r
1733 #define CALL_DIRECT_MMAP(s) DIRECT_MMAP_DEFAULT(s)
\r
1734 #endif /* DIRECT_MMAP */
\r
1735 #else /* HAVE_MMAP */
\r
1736 #define USE_MMAP_BIT (SIZE_T_ZERO)
\r
1738 #define MMAP(s) MFAIL
\r
1739 #define MUNMAP(a, s) (-1)
\r
1740 #define DIRECT_MMAP(s) MFAIL
\r
1741 #define CALL_DIRECT_MMAP(s) DIRECT_MMAP(s)
\r
1742 #define CALL_MMAP(s) MMAP(s)
\r
1743 #define CALL_MUNMAP(a, s) MUNMAP((a), (s))
\r
1744 #endif /* HAVE_MMAP */
\r
1746 /**
1747  * Define CALL_MREMAP
1748  */
\r
1749 #if HAVE_MMAP && HAVE_MREMAP
\r
1750 #ifdef MREMAP
1751 #define CALL_MREMAP(addr, osz, nsz, mv) MREMAP((addr), (osz), (nsz), (mv))
\r
1752 #else /* MREMAP */
\r
1753 #define CALL_MREMAP(addr, osz, nsz, mv) MREMAP_DEFAULT((addr), (osz), (nsz), (mv))
\r
1754 #endif /* MREMAP */
\r
1755 #else /* HAVE_MMAP && HAVE_MREMAP */
\r
1756 #define CALL_MREMAP(addr, osz, nsz, mv) MFAIL
\r
1757 #endif /* HAVE_MMAP && HAVE_MREMAP */
\r
1759 /* mstate bit set if contiguous morecore disabled or failed */
\r
1760 #define USE_NONCONTIGUOUS_BIT (4U)
\r
1762 /* segment bit set in create_mspace_with_base */
\r
1763 #define EXTERN_BIT (8U)
\r
1766 /* --------------------------- Lock preliminaries ------------------------ */
\r
1769 When locks are defined, there is one global lock, plus
\r
1770 one per-mspace lock.
\r
1772 The global lock ensures that mparams.magic and other unique
\r
1773 mparams values are initialized only once. It also protects
\r
1774 sequences of calls to MORECORE. In many cases sys_alloc requires
\r
1775 two calls, that should not be interleaved with calls by other
\r
1776 threads. This does not protect against direct calls to MORECORE
\r
1777 by other threads not using this lock, so there is still code to
\r
1778 cope as best we can with interference.
\r
1780 Per-mspace locks surround calls to malloc, free, etc.
\r
1781 By default, locks are simple non-reentrant mutexes.
\r
1783 Because lock-protected regions generally have bounded times, it is
\r
1784 OK to use the supplied simple spinlocks. Spinlocks are likely to
\r
1785 improve performance for lightly contended applications, but worsen
\r
1786 performance under heavy contention.
\r
1788 If USE_LOCKS is > 1, the definitions of lock routines here are
\r
1789 bypassed, in which case you will need to define the type MLOCK_T,
\r
1790 and at least INITIAL_LOCK, DESTROY_LOCK, ACQUIRE_LOCK, RELEASE_LOCK
\r
1791 and TRY_LOCK. You must also declare a
\r
1792 static MLOCK_T malloc_global_mutex = { initialization values };.
\r
1797 #define USE_LOCK_BIT (0U)
\r
1798 #define INITIAL_LOCK(l) (0)
\r
1799 #define DESTROY_LOCK(l) (0)
\r
1800 #define ACQUIRE_MALLOC_GLOBAL_LOCK()
\r
1801 #define RELEASE_MALLOC_GLOBAL_LOCK()
\r
1805 /* ----------------------- User-defined locks ------------------------ */
\r
1806 /* Define your own lock implementation here */
\r
1807 /* #define INITIAL_LOCK(lk) ... */
\r
1808 /* #define DESTROY_LOCK(lk) ... */
\r
1809 /* #define ACQUIRE_LOCK(lk) ... */
\r
1810 /* #define RELEASE_LOCK(lk) ... */
\r
1811 /* #define TRY_LOCK(lk) ... */
\r
1812 /* static MLOCK_T malloc_global_mutex = ... */
\r
1814 #elif USE_SPIN_LOCKS
\r
1816 /* First, define CAS_LOCK and CLEAR_LOCK on ints */
\r
1817 /* Note CAS_LOCK defined to return 0 on success */
\r
1819 #if defined(__GNUC__)&& (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 1))
\r
1820 #define CAS_LOCK(sl) __sync_lock_test_and_set(sl, 1)
\r
1821 #define CLEAR_LOCK(sl) __sync_lock_release(sl)
\r
1823 #elif (defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__)))
\r
1824 /* Custom spin locks for older gcc on x86 */
\r
1825 static FORCEINLINE int x86_cas_lock(int *sl) {
1826   int ret;
1827   int val = 1;
1828   int cmp = 0;
1829   __asm__ __volatile__ ("lock; cmpxchgl %1, %2"
1830                         : "=a" (ret)
1831                         : "r" (val), "m" (*(sl)), "0"(cmp)
1832                         : "memory", "cc");
1833   return ret;
1834 }
\r
1836 static FORCEINLINE void x86_clear_lock(int* sl) {
1837   assert(*sl != 0);
1838   int prev = 0;
1839   int ret;
1840   __asm__ __volatile__ ("lock; xchgl %0, %1"
1841                         : "=r" (ret)
1842                         : "m" (*(sl)), "0"(prev)
1843                         : "memory");
1844 }
\r
1846 #define CAS_LOCK(sl) x86_cas_lock(sl)
\r
1847 #define CLEAR_LOCK(sl) x86_clear_lock(sl)
\r
1849 #else /* Win32 MSC */
\r
1850 #define CAS_LOCK(sl) interlockedexchange(sl, 1)
\r
1851 #define CLEAR_LOCK(sl) interlockedexchange (sl, 0)
\r
1853 #endif /* ... gcc spin locks ... */
\r
1855 /* How to yield for a spin lock */
\r
1856 #define SPINS_PER_YIELD 63
\r
1857 #if defined(_MSC_VER)
\r
1858 #define SLEEP_EX_DURATION 50 /* delay for yield/sleep */
\r
1859 #define SPIN_LOCK_YIELD SleepEx(SLEEP_EX_DURATION, FALSE)
\r
1860 #elif defined (__SVR4) && defined (__sun) /* solaris */
\r
1861 #define SPIN_LOCK_YIELD thr_yield();
\r
1862 #elif !defined(LACKS_SCHED_H)
\r
1863 #define SPIN_LOCK_YIELD sched_yield();
\r
1865 #define SPIN_LOCK_YIELD
\r
1866 #endif /* ... yield ... */
\r
1868 #if !defined(USE_RECURSIVE_LOCKS) || USE_RECURSIVE_LOCKS == 0
\r
1869 /* Plain spin locks use single word (embedded in malloc_states) */
\r
1870 static int spin_acquire_lock(int *sl) {
1871   int spins = 0;
1872   while (*(volatile int *)sl != 0 || CAS_LOCK(sl)) {
1873     if ((++spins & SPINS_PER_YIELD) == 0) {
1874       SPIN_LOCK_YIELD;
1875     }
1876   }
1877   return 0;
1878 }
\r
1880 #define MLOCK_T int
\r
1881 #define TRY_LOCK(sl) !CAS_LOCK(sl)
\r
1882 #define RELEASE_LOCK(sl) CLEAR_LOCK(sl)
\r
1883 #define ACQUIRE_LOCK(sl) (CAS_LOCK(sl)? spin_acquire_lock(sl) : 0)
\r
1884 #define INITIAL_LOCK(sl) (*sl = 0)
\r
1885 #define DESTROY_LOCK(sl) (0)
\r
1886 static MLOCK_T malloc_global_mutex = 0;
\r
1888 #else /* USE_RECURSIVE_LOCKS */
\r
1889 /* types for lock owners */
\r
1891 #define THREAD_ID_T DWORD
\r
1892 #define CURRENT_THREAD GetCurrentThreadId()
\r
1893 #define EQ_OWNER(X,Y) ((X) == (Y))
\r
1896 Note: the following assume that pthread_t is a type that can be
\r
1897 initialized to (casted) zero. If this is not the case, you will need to
\r
1898 somehow redefine these or not use spin locks.
\r
1900 #define THREAD_ID_T pthread_t
\r
1901 #define CURRENT_THREAD pthread_self()
\r
1902 #define EQ_OWNER(X,Y) pthread_equal(X, Y)
\r
1905 struct malloc_recursive_lock {
1906   int sl;
1907   unsigned int c;
1908   THREAD_ID_T threadid;
1909 };
1911 #define MLOCK_T struct malloc_recursive_lock
1912 static MLOCK_T malloc_global_mutex = { 0, 0, (THREAD_ID_T)0};
\r
1914 static FORCEINLINE void recursive_release_lock(MLOCK_T *lk) {
\r
1915 assert(lk->sl != 0);
\r
1916   if (--lk->c == 0) {
1917     CLEAR_LOCK(&lk->sl);
1918   }
1919 }
\r
1921 static FORCEINLINE int recursive_acquire_lock(MLOCK_T *lk) {
1922   THREAD_ID_T mythreadid = CURRENT_THREAD;
1923   int spins = 0;
1924   for (;;) {
1925     if (*((volatile int *)(&lk->sl)) == 0) {
1926       if (!CAS_LOCK(&lk->sl)) {
1927         lk->threadid = mythreadid;
1928         lk->c = 1;
1929         return 0;
1930       }
1931     }
1932     else if (EQ_OWNER(lk->threadid, mythreadid)) {
1933       ++lk->c;
1934       return 0;
1935     }
1936     if ((++spins & SPINS_PER_YIELD) == 0) {
1937       SPIN_LOCK_YIELD;
1938     }
1939   }
1940 }
\r
1942 static FORCEINLINE int recursive_try_lock(MLOCK_T *lk) {
1943   THREAD_ID_T mythreadid = CURRENT_THREAD;
1944   if (*((volatile int *)(&lk->sl)) == 0) {
1945     if (!CAS_LOCK(&lk->sl)) {
1946       lk->threadid = mythreadid;
1947       lk->c = 1;
1948       return 1;
1949     }
1950   }
1951   else if (EQ_OWNER(lk->threadid, mythreadid)) {
1952     ++lk->c;
1953     return 1;
1954   }
1955   return 0;
1956 }
\r
1958 #define RELEASE_LOCK(lk) recursive_release_lock(lk)
\r
1959 #define TRY_LOCK(lk) recursive_try_lock(lk)
\r
1960 #define ACQUIRE_LOCK(lk) recursive_acquire_lock(lk)
\r
1961 #define INITIAL_LOCK(lk) ((lk)->threadid = (THREAD_ID_T)0, (lk)->sl = 0, (lk)->c = 0)
\r
1962 #define DESTROY_LOCK(lk) (0)
\r
1963 #endif /* USE_RECURSIVE_LOCKS */
\r
1965 #elif defined(WIN32) /* Win32 critical sections */
\r
1966 #define MLOCK_T CRITICAL_SECTION
\r
1967 #define ACQUIRE_LOCK(lk) (EnterCriticalSection(lk), 0)
\r
1968 #define RELEASE_LOCK(lk) LeaveCriticalSection(lk)
\r
1969 #define TRY_LOCK(lk) TryEnterCriticalSection(lk)
\r
1970 #define INITIAL_LOCK(lk) (!InitializeCriticalSectionAndSpinCount((lk), 0x80000000|4000))
\r
1971 #define DESTROY_LOCK(lk) (DeleteCriticalSection(lk), 0)
\r
1972 #define NEED_GLOBAL_LOCK_INIT
\r
1974 static MLOCK_T malloc_global_mutex;
\r
1975 static volatile long malloc_global_mutex_status;
\r
1977 /* Use spin loop to initialize global lock */
\r
1978 static void init_malloc_global_mutex() {
1979   for (;;) {
1980     long stat = malloc_global_mutex_status;
1981     if (stat > 0)
1982       return;
1983     /* transition to < 0 while initializing, then to > 0 */
1984     if (stat == 0 &&
1985         interlockedcompareexchange(&malloc_global_mutex_status, -1, 0) == 0) {
1986       InitializeCriticalSection(&malloc_global_mutex);
1987       interlockedexchange(&malloc_global_mutex_status,1);
1988       return;
1989     }
1990     SleepEx(0, FALSE);
1991   }
1992 }
\r
1994 #else /* pthreads-based locks */
\r
1995 #define MLOCK_T pthread_mutex_t
\r
1996 #define ACQUIRE_LOCK(lk) pthread_mutex_lock(lk)
\r
1997 #define RELEASE_LOCK(lk) pthread_mutex_unlock(lk)
\r
1998 #define TRY_LOCK(lk) (!pthread_mutex_trylock(lk))
\r
1999 #define INITIAL_LOCK(lk) pthread_init_lock(lk)
\r
2000 #define DESTROY_LOCK(lk) pthread_mutex_destroy(lk)
\r
2002 #if defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0 && defined(linux) && !defined(PTHREAD_MUTEX_RECURSIVE)
\r
2003 /* Cope with old-style linux recursive lock initialization by adding */
\r
2004 /* skipped internal declaration from pthread.h */
\r
2005 extern int pthread_mutexattr_setkind_np __P ((pthread_mutexattr_t *__attr,
2006                                               int __kind));
\r
2007 #define PTHREAD_MUTEX_RECURSIVE PTHREAD_MUTEX_RECURSIVE_NP
\r
2008 #define pthread_mutexattr_settype(x,y) pthread_mutexattr_setkind_np(x,y)
\r
2009 #endif /* USE_RECURSIVE_LOCKS ... */
\r
2011 static MLOCK_T malloc_global_mutex = PTHREAD_MUTEX_INITIALIZER;
\r
2013 static int pthread_init_lock (MLOCK_T *lk) {
2014   pthread_mutexattr_t attr;
2015   if (pthread_mutexattr_init(&attr)) return 1;
2016 #if defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0
2017   if (pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE)) return 1;
2018 #endif
2019   if (pthread_mutex_init(lk, &attr)) return 1;
2020   if (pthread_mutexattr_destroy(&attr)) return 1;
2021   return 0;
2022 }
\r
2024 #endif /* ... lock types ... */
\r
2026 /* Common code for all lock types */
\r
2027 #define USE_LOCK_BIT (2U)
\r
2029 #ifndef ACQUIRE_MALLOC_GLOBAL_LOCK
\r
2030 #define ACQUIRE_MALLOC_GLOBAL_LOCK() ACQUIRE_LOCK(&malloc_global_mutex);
2031 #endif
\r
2033 #ifndef RELEASE_MALLOC_GLOBAL_LOCK
\r
2034 #define RELEASE_MALLOC_GLOBAL_LOCK() RELEASE_LOCK(&malloc_global_mutex);
2035 #endif
\r
2037 #endif /* USE_LOCKS */
\r
2039 /* ----------------------- Chunk representations ------------------------ */
\r
2042 (The following includes lightly edited explanations by Colin Plumb.)
\r
2044 The malloc_chunk declaration below is misleading (but accurate and
\r
2045 necessary). It declares a "view" into memory allowing access to
\r
2046 necessary fields at known offsets from a given base.
\r
2048 Chunks of memory are maintained using a `boundary tag' method as
\r
2049 originally described by Knuth. (See the paper by Paul Wilson
\r
2050 ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such
\r
2051 techniques.) Sizes of free chunks are stored both in the front of
\r
2052 each chunk and at the end. This makes consolidating fragmented
\r
2053 chunks into bigger chunks fast. The head fields also hold bits
\r
2054 representing whether chunks are free or in use.
\r
2056 Here are some pictures to make it clearer. They are "exploded" to
\r
2057 show that the state of a chunk can be thought of as extending from
\r
2058 the high 31 bits of the head field of its header through the
\r
2059 prev_foot and PINUSE_BIT bit of the following chunk header.
\r
2061 A chunk that's in use looks like:
\r
2063 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
\r
2064 | Size of previous chunk (if P = 0) |
\r
2065 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
\r
2066 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
\r
2067 | Size of this chunk 1| +-+
\r
2068 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
\r
2074 +- size - sizeof(size_t) available payload bytes -+
\r
2078 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
\r
2079 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1|
\r
2080 | Size of next chunk (may or may not be in use) | +-+
\r
2081 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
\r
2083 And if it's free, it looks like this:
\r
2086 | User payload (must be in use, or we would have merged!) |
\r
2087 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
\r
2088 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
\r
2089 | Size of this chunk 0| +-+
\r
2090 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
\r
2092 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
\r
2094 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
\r
2096 +- size - sizeof(struct chunk) unused bytes -+
\r
2098 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
\r
2099 | Size of this chunk |
\r
2100 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
\r
2101 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0|
\r
2102 | Size of next chunk (must be in use, or we would have merged)| +-+
\r
2103 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
\r
2105 +- User payload -+
\r
2107 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
\r
2110 Note that since we always merge adjacent free chunks, the chunks
\r
2111 adjacent to a free chunk must be in use.
\r
2113 Given a pointer to a chunk (which can be derived trivially from the
\r
2114 payload pointer) we can, in O(1) time, find out whether the adjacent
\r
2115 chunks are free, and if so, unlink them from the lists that they
\r
2116 are on and merge them with the current chunk.
\r
2118 Chunks always begin on even word boundaries, so the mem portion
\r
2119 (which is returned to the user) is also on an even word boundary, and
\r
2120 thus at least double-word aligned.
\r
2122 The P (PINUSE_BIT) bit, stored in the unused low-order bit of the
\r
2123 chunk size (which is always a multiple of two words), is an in-use
\r
2124 bit for the *previous* chunk. If that bit is *clear*, then the
\r
2125 word before the current chunk size contains the previous chunk
\r
2126 size, and can be used to find the front of the previous chunk.
\r
2127 The very first chunk allocated always has this bit set, preventing
\r
2128 access to non-existent (or non-owned) memory. If pinuse is set for
\r
2129 any given chunk, then you CANNOT determine the size of the
\r
2130 previous chunk, and might even get a memory addressing fault when trying to.
\r
2133 The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of
\r
2134 the chunk size redundantly records whether the current chunk is
\r
2135 inuse (unless the chunk is mmapped). This redundancy enables usage
\r
2136 checks within free and realloc, and reduces indirection when freeing
\r
2137 and consolidating chunks.
\r
2139 Each freshly allocated chunk must have both cinuse and pinuse set.
\r
2140 That is, each allocated chunk borders either a previously allocated
\r
2141 and still in-use chunk, or the base of its memory arena. This is
\r
2142 ensured by making all allocations from the `lowest' part of any
\r
2143 found chunk. Further, no free chunk physically borders another one,
\r
2144 so each free chunk is known to be preceded and followed by either
\r
2145 inuse chunks or the ends of memory.
\r
2147 Note that the `foot' of the current chunk is actually represented
\r
2148 as the prev_foot of the NEXT chunk. This makes it easier to
\r
2149 deal with alignments etc but can be very confusing when trying
\r
2150 to extend or adapt this code.
\r
2152 The exceptions to all this are
\r
2154 1. The special chunk `top' is the top-most available chunk (i.e.,
\r
2155 the one bordering the end of available memory). It is treated
\r
2156 specially. Top is never included in any bin, is used only if
\r
2157 no other chunk is available, and is released back to the
\r
2158 system if it is very large (see M_TRIM_THRESHOLD). In effect,
\r
2159 the top chunk is treated as larger (and thus less well
\r
2160 fitting) than any other available chunk. The top chunk
\r
2161 doesn't update its trailing size field since there is no next
\r
2162 contiguous chunk that would have to index off it. However,
\r
2163 space is still allocated for it (TOP_FOOT_SIZE) to enable
\r
2164 separation or merging when space is extended.
\r
2166 2. Chunks allocated via mmap have both cinuse and pinuse bits
\r
2167 cleared in their head fields. Because they are allocated
\r
2168 one-by-one, each must carry its own prev_foot field, which is
\r
2169 also used to hold the offset this chunk has within its mmapped
\r
2170 region, which is needed to preserve alignment. Each mmapped
\r
2171 chunk is trailed by the first two fields of a fake next-chunk
\r
2172 for sake of usage checks.
\r
2176 struct malloc_chunk {
\r
2177 size_t prev_foot; /* Size of previous chunk (if free). */
\r
2178 size_t head; /* Size and inuse bits. */
\r
2179 struct malloc_chunk* fd; /* double links -- used only if free. */
\r
2180   struct malloc_chunk* bk;
2181 };
\r
2183 typedef struct malloc_chunk mchunk;
\r
2184 typedef struct malloc_chunk* mchunkptr;
\r
2185 typedef struct malloc_chunk* sbinptr; /* The type of bins of chunks */
\r
2186 typedef unsigned int bindex_t; /* Described below */
\r
2187 typedef unsigned int binmap_t; /* Described below */
\r
2188 typedef unsigned int flag_t; /* The type of various bit flag sets */
\r
2190 /* ------------------- Chunks sizes and alignments ----------------------- */
\r
2192 #define MCHUNK_SIZE (sizeof(mchunk))
\r
2194 #if FOOTERS
2195 #define CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
\r
2196 #else /* FOOTERS */
\r
2197 #define CHUNK_OVERHEAD (SIZE_T_SIZE)
\r
2198 #endif /* FOOTERS */
\r
2200 /* MMapped chunks need a second word of overhead ... */
\r
2201 #define MMAP_CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
\r
2202 /* ... and additional padding for fake next-chunk at foot */
\r
2203 #define MMAP_FOOT_PAD (FOUR_SIZE_T_SIZES)
\r
2205 /* The smallest size we can malloc is an aligned minimal chunk */
\r
2206 #define MIN_CHUNK_SIZE\
\r
2207 ((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
\r
2209 /* conversion from malloc headers to user pointers, and back */
\r
2210 #define chunk2mem(p) ((void*)((char*)(p) + TWO_SIZE_T_SIZES))
\r
2211 #define mem2chunk(mem) ((mchunkptr)((char*)(mem) - TWO_SIZE_T_SIZES))
\r
2212 /* chunk associated with aligned address A */
\r
2213 #define align_as_chunk(A) (mchunkptr)((A) + align_offset(chunk2mem(A)))
\r
2215 /* Bounds on request (not chunk) sizes. */
\r
2216 #define MAX_REQUEST ((-MIN_CHUNK_SIZE) << 2)
\r
2217 #define MIN_REQUEST (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - SIZE_T_ONE)
\r
2219 /* pad request bytes into a usable size */
\r
2220 #define pad_request(req) \
\r
2221 (((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
\r
2223 /* pad request, checking for minimum (but not maximum) */
\r
2224 #define request2size(req) \
\r
2225 (((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req))
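/* Worked example, assuming 4-byte size_t, 8-byte alignment, and
   FOOTERS off (so CHUNK_OVERHEAD == 4): pad_request(13) ==
   (13 + 4 + 7) & ~7 == 24, and request2size(1) rounds all the way up
   to MIN_CHUNK_SIZE == 16. With 8-byte size_t, CHUNK_OVERHEAD and
   MIN_CHUNK_SIZE become 8 and 32. */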
\r
2228 /* ------------------ Operations on head and foot fields ----------------- */
\r
2231 The head field of a chunk is or'ed with PINUSE_BIT when previous
\r
2232 adjacent chunk in use, and or'ed with CINUSE_BIT if this chunk is in
\r
2233 use, unless mmapped, in which case both bits are cleared.
\r
2235 FLAG4_BIT is not used by this malloc, but might be useful in extensions.
\r
2238 #define PINUSE_BIT (SIZE_T_ONE)
\r
2239 #define CINUSE_BIT (SIZE_T_TWO)
\r
2240 #define FLAG4_BIT (SIZE_T_FOUR)
\r
2241 #define INUSE_BITS (PINUSE_BIT|CINUSE_BIT)
\r
2242 #define FLAG_BITS (PINUSE_BIT|CINUSE_BIT|FLAG4_BIT)
\r
2244 /* Head value for fenceposts */
\r
2245 #define FENCEPOST_HEAD (INUSE_BITS|SIZE_T_SIZE)
\r
2247 /* extraction of fields from head words */
\r
2248 #define cinuse(p) ((p)->head & CINUSE_BIT)
\r
2249 #define pinuse(p) ((p)->head & PINUSE_BIT)
\r
2250 #define flag4inuse(p) ((p)->head & FLAG4_BIT)
\r
2251 #define is_inuse(p) (((p)->head & INUSE_BITS) != PINUSE_BIT)
\r
2252 #define is_mmapped(p) (((p)->head & INUSE_BITS) == 0)
\r
2254 #define chunksize(p) ((p)->head & ~(FLAG_BITS))
\r
2256 #define clear_pinuse(p) ((p)->head &= ~PINUSE_BIT)
\r
2257 #define set_flag4(p) ((p)->head |= FLAG4_BIT)
\r
2258 #define clear_flag4(p) ((p)->head &= ~FLAG4_BIT)
\r
2260 /* Treat space at ptr +/- offset as a chunk */
\r
2261 #define chunk_plus_offset(p, s) ((mchunkptr)(((char*)(p)) + (s)))
\r
2262 #define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s)))
\r
2264 /* Ptr to next or previous physical malloc_chunk. */
\r
2265 #define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~FLAG_BITS)))
\r
2266 #define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) ))
\r
2268 /* extract next chunk's pinuse bit */
\r
2269 #define next_pinuse(p) ((next_chunk(p)->head) & PINUSE_BIT)
\r
2271 /* Get/set size at footer */
\r
2272 #define get_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_foot)
\r
2273 #define set_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s))
\r
2275 /* Set size, pinuse bit, and foot */
\r
2276 #define set_size_and_pinuse_of_free_chunk(p, s)\
\r
2277 ((p)->head = (s|PINUSE_BIT), set_foot(p, s))
\r
2279 /* Set size, pinuse bit, foot, and clear next pinuse */
\r
2280 #define set_free_with_pinuse(p, s, n)\
\r
2281 (clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s))
\r
2283 /* Get the internal overhead associated with chunk p */
\r
2284 #define overhead_for(p)\
\r
2285 (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD)
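/* Illustrative only: for an ordinary (non-mmapped, non-top) chunk p,
   the macros above satisfy these identities:

     mchunkptr q = next_chunk(p);                 // physical neighbor
     assert((char*)q == (char*)p + chunksize(p));
     if (!pinuse(q))                              // p is free, so...
       assert(prev_chunk(q) == p);                // ...q's prev_foot is valid
*/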
\r
2287 /* Return true if malloced space is not necessarily cleared */
\r
2288 #if MMAP_CLEARS
2289 #define calloc_must_clear(p) (!is_mmapped(p))
\r
2290 #else /* MMAP_CLEARS */
\r
2291 #define calloc_must_clear(p) (1)
\r
2292 #endif /* MMAP_CLEARS */
\r
2294 /* ---------------------- Overlaid data structures ----------------------- */
\r
2297 When chunks are not in use, they are treated as nodes of either
2298 lists or trees.
2300 "Small" chunks are stored in circular doubly-linked lists, and look
2301 like this:
\r
2303 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
\r
2304 | Size of previous chunk |
\r
2305 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
\r
2306 `head:' | Size of chunk, in bytes |P|
\r
2307 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
\r
2308 | Forward pointer to next chunk in list |
\r
2309 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
\r
2310 | Back pointer to previous chunk in list |
\r
2311 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
\r
2312 | Unused space (may be 0 bytes long) .
\r
2315 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
\r
2316 `foot:' | Size of chunk, in bytes |
\r
2317 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
\r
2319 Larger chunks are kept in a form of bitwise digital trees (aka
\r
2320 tries) keyed on chunksizes. Because malloc_tree_chunks are only for
\r
2321 free chunks greater than 256 bytes, their size doesn't impose any
\r
2322 constraints on user chunk sizes. Each node looks like:
\r
2324 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
\r
2325 | Size of previous chunk |
\r
2326 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
\r
2327 `head:' | Size of chunk, in bytes |P|
\r
2328 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
\r
2329 | Forward pointer to next chunk of same size |
\r
2330 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
\r
2331 | Back pointer to previous chunk of same size |
\r
2332 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
\r
2333 | Pointer to left child (child[0]) |
\r
2334 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
\r
2335 | Pointer to right child (child[1]) |
\r
2336 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
\r
2337 | Pointer to parent |
\r
2338 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
\r
2339 | bin index of this chunk |
\r
2340 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
\r
2343 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
\r
2344 `foot:' | Size of chunk, in bytes |
\r
2345 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
\r
2347 Each tree holding treenodes is a tree of unique chunk sizes. Chunks
\r
2348 of the same size are arranged in a circularly-linked list, with only
\r
2349 the oldest chunk (the next to be used, in our FIFO ordering)
\r
2350 actually in the tree. (Tree members are distinguished by a non-null
\r
2351 parent pointer.) If a chunk with the same size as an existing node
\r
2352 is inserted, it is linked off the existing node using pointers that
\r
2353 work in the same way as fd/bk pointers of small chunks.
\r
2355 Each tree contains a power of 2 sized range of chunk sizes (the
\r
2356 smallest is 0x100 <= x < 0x180), which is divided in half at each
\r
2357 tree level, with the chunks in the smaller half of the range (0x100
\r
2358 <= x < 0x140 for the top node) in the left subtree and the larger
\r
2359 half (0x140 <= x < 0x180) in the right subtree. This is, of course,
\r
2360 done by inspecting individual bits.
\r
2362 Using these rules, each node's left subtree contains all smaller
\r
2363 sizes than its right subtree. However, the node at the root of each
\r
2364 subtree has no particular ordering relationship to either. (The
\r
2365 dividing line between the subtree sizes is based on trie relation.)
\r
2366 If we remove the last chunk of a given size from the interior of the
\r
2367 tree, we need to replace it with a leaf node. The tree ordering
\r
2368 rules permit a node to be replaced by any leaf below it.
\r
2370 The smallest chunk in a tree (a common operation in a best-fit
\r
2371 allocator) can be found by walking a path to the leftmost leaf in
\r
2372 the tree. Unlike a usual binary tree, where we follow left child
\r
2373 pointers until we reach a null, here we follow the right child
\r
2374 pointer any time the left one is null, until we reach a leaf with
\r
2375 both child pointers null. The smallest chunk in the tree will be
\r
2376 somewhere along that path.
\r
2378 The worst case number of steps to add, find, or remove a node is
\r
2379 bounded by the number of bits differentiating chunks within
\r
2380 bins. Under current bin calculations, this ranges from 6 up to 21
\r
2381 (for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case
\r
2382 is of course much better.
\r
2385 struct malloc_tree_chunk {
2386   /* The first four fields must be compatible with malloc_chunk */
2387   size_t                    prev_foot;
2388   size_t                    head;
2389   struct malloc_tree_chunk* fd;
2390   struct malloc_tree_chunk* bk;
2392   struct malloc_tree_chunk* child[2];
2393   struct malloc_tree_chunk* parent;
2394   bindex_t                  index;
2395 };
\r
2397 typedef struct malloc_tree_chunk tchunk;
\r
2398 typedef struct malloc_tree_chunk* tchunkptr;
\r
2399 typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */
\r
2401 /* A little helper macro for trees */
\r
2402 #define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1])
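/* A sketch of the "leftmost path" walk described above (an
   illustrative helper, not part of this malloc; the real best-fit
   search is more involved):

     static tchunkptr tree_path_smallest(tchunkptr t) {
       tchunkptr best = t;
       while ((t = leftmost_child(t)) != 0) {
         if (chunksize(t) < chunksize(best))
           best = t;
       }
       return best; // the smallest chunk lies somewhere along this path
     }
*/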
\r
2404 /* ----------------------------- Segments -------------------------------- */
\r
2407 Each malloc space may include non-contiguous segments, held in a
\r
2408 list headed by an embedded malloc_segment record representing the
\r
2409 top-most space. Segments also include flags holding properties of
\r
2410 the space. Large chunks that are directly allocated by mmap are not
\r
2411 included in this list. They are instead independently created and
\r
2412 destroyed without otherwise keeping track of them.
\r
2414 Segment management mainly comes into play for spaces allocated by
\r
2415 MMAP. Any call to MMAP might or might not return memory that is
\r
2416 adjacent to an existing segment. MORECORE normally contiguously
\r
2417 extends the current space, so this space is almost always adjacent,
\r
2418 which is simpler and faster to deal with. (This is why MORECORE is
\r
2419 used preferentially to MMAP when both are available -- see
\r
2420 sys_alloc.) When allocating using MMAP, we don't use any of the
\r
2421 hinting mechanisms (inconsistently) supported in various
\r
2422 implementations of unix mmap, or distinguish reserving from
\r
2423 committing memory. Instead, we just ask for space, and exploit
\r
2424 contiguity when we get it. It is probably possible to do
\r
2425 better than this on some systems, but no general scheme seems
\r
2426 to be significantly better.
\r
2428 Management entails a simpler variant of the consolidation scheme
\r
2429 used for chunks to reduce fragmentation -- new adjacent memory is
\r
2430 normally prepended or appended to an existing segment. However,
\r
2431 there are limitations compared to chunk consolidation that mostly
\r
2432 reflect the fact that segment processing is relatively infrequent
\r
2433 (occurring only when getting memory from system) and that we
\r
2434 don't expect to have huge numbers of segments:
\r
2436 * Segments are not indexed, so traversal requires linear scans. (It
\r
2437 would be possible to index these, but is not worth the extra
\r
2438 overhead and complexity for most programs on most platforms.)
\r
2439 * New segments are only appended to old ones when holding top-most
\r
2440 memory; if they cannot be prepended to others, they are held in
\r
2441 different segments.
\r
2443 Except for the top-most segment of an mstate, each segment record
\r
2444 is kept at the tail of its segment. Segments are added by pushing
\r
2445 segment records onto the list headed by &mstate.seg for the
\r
2446 containing mstate.
\r
2448 Segment flags control allocation/merge/deallocation policies:
\r
2449 * If EXTERN_BIT set, then we did not allocate this segment,
\r
2450 and so should not try to deallocate or merge with others.
\r
2451 (This currently holds only for the initial segment passed
\r
2452 into create_mspace_with_base.)
\r
2453 * If USE_MMAP_BIT set, the segment may be merged with
\r
2454 other surrounding mmapped segments and trimmed/de-allocated
2455 using munmap.
\r
2456 * If neither bit is set, then the segment was obtained using
\r
2457 MORECORE so can be merged with surrounding MORECORE'd segments
\r
2458 and deallocated/trimmed using MORECORE with negative arguments.
\r
2461 struct malloc_segment {
\r
2462 char* base; /* base address */
\r
2463 size_t size; /* allocated size */
\r
2464 struct malloc_segment* next; /* ptr to next segment */
\r
2465   flag_t sflags; /* mmap and extern flag */
2466 };
\r
2468 #define is_mmapped_segment(S) ((S)->sflags & USE_MMAP_BIT)
\r
2469 #define is_extern_segment(S) ((S)->sflags & EXTERN_BIT)
\r
2471 typedef struct malloc_segment msegment;
\r
2472 typedef struct malloc_segment* msegmentptr;
\r
2474 /* ---------------------------- malloc_state ----------------------------- */
\r
2477 A malloc_state holds all of the bookkeeping for a space.
\r
2478 The main fields are:
\r
2480 Top
2481 The topmost chunk of the currently active segment. Its size is
\r
2482 cached in topsize. The actual size of topmost space is
\r
2483 topsize+TOP_FOOT_SIZE, which includes space reserved for adding
\r
2484 fenceposts and segment records if necessary when getting more
\r
2485 space from the system. The size at which to autotrim top is
\r
2486 cached from mparams in trim_check, except that it is disabled if
\r
2487 an autotrim fails.
\r
2489 Designated victim (dv)
\r
2490 This is the preferred chunk for servicing small requests that
\r
2491 don't have exact fits. It is normally the chunk split off most
\r
2492 recently to service another small request. Its size is cached in
\r
2493 dvsize. The link fields of this chunk are not maintained since it
\r
2494 is not kept in a bin.
\r
2496 SmallBins
2497 An array of bin headers for free chunks. These bins hold chunks
\r
2498 with sizes less than MIN_LARGE_SIZE bytes. Each bin contains
\r
2499 chunks of all the same size, spaced 8 bytes apart. To simplify
\r
2500 use in double-linked lists, each bin header acts as a malloc_chunk
\r
2501 pointing to the real first node, if it exists (else pointing to
\r
2502 itself). This avoids special-casing for headers. But to avoid
\r
2503 waste, we allocate only the fd/bk pointers of bins, and then use
\r
2504 repositioning tricks to treat these as the fields of a chunk.
\r
2506 TreeBins
2507 Treebins are pointers to the roots of trees holding a range of
\r
2508 sizes. There are 2 equally spaced treebins for each power of two
\r
2509 from TREE_SHIFT to TREE_SHIFT+16. The last bin holds anything larger.
\r
2512 BinMaps
2513 There is one bit map for small bins ("smallmap") and one for
\r
2514 treebins ("treemap"). Each bin sets its bit when non-empty, and
\r
2515 clears the bit when empty. Bit operations are then used to avoid
\r
2516 bin-by-bin searching -- nearly all "search" is done without ever
\r
2517 looking at bins that won't be selected. The bit maps
\r
2518 conservatively use 32 bits per map word, even if on 64bit system.
\r
2519 For a good description of some of the bit-based techniques used
\r
2520 here, see Henry S. Warren Jr's book "Hacker's Delight" (and
\r
2521 supplement at http://hackersdelight.org/). Many of these are
\r
2522 intended to reduce the branchiness of paths through malloc etc, as
\r
2523 well as to reduce the number of memory locations read or written.
\r
2525 Segments
2526 A list of segments headed by an embedded malloc_segment record
\r
2527 representing the initial space.
\r
2529 Address check support
\r
2530 The least_addr field is the least address ever obtained from
\r
2531 MORECORE or MMAP. Attempted frees and reallocs of any address less
\r
2532 than this are trapped (unless INSECURE is defined).
\r
2534 Magic tag
2535 A cross-check field that should always hold the same value as mparams.magic.
\r
2537 Max allowed footprint
\r
2538 The maximum allowed bytes to allocate from system (zero means no limit)
\r
2540 Flags
2541 Bits recording whether to use MMAP, locks, or contiguous MORECORE
\r
2543 Statistics
2544 Each space keeps track of current and maximum system memory
\r
2545 obtained via MORECORE or MMAP.
\r
2547 Trim support
2548 Fields holding the amount of unused topmost memory that should trigger
\r
2549 trimming, and a counter to force periodic scanning to release unused
\r
2550 non-topmost segments.
\r
2552 Locking
2553 If USE_LOCKS is defined, the "mutex" lock is acquired and released
\r
2554 around every public call using this mspace.
\r
2556 Extension support
2557 A void* pointer and a size_t field that can be used to help implement
\r
2558 extensions to this malloc.
\r
2561 /* Bin types, widths and sizes */
\r
2562 #define NSMALLBINS (32U)
\r
2563 #define NTREEBINS (32U)
\r
2564 #define SMALLBIN_SHIFT (3U)
\r
2565 #define SMALLBIN_WIDTH (SIZE_T_ONE << SMALLBIN_SHIFT)
\r
2566 #define TREEBIN_SHIFT (8U)
\r
2567 #define MIN_LARGE_SIZE (SIZE_T_ONE << TREEBIN_SHIFT)
\r
2568 #define MAX_SMALL_SIZE (MIN_LARGE_SIZE - SIZE_T_ONE)
\r
2569 #define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD)
\r
2571 struct malloc_state {
2572   binmap_t   smallmap;
2573   binmap_t   treemap;
2574   size_t     dvsize;
2575   size_t     topsize;
2576   char*      least_addr;
2577   mchunkptr  dv;
2578   mchunkptr  top;
2579   size_t     trim_check;
2580   size_t     release_checks;
2581   size_t     magic;
2582   mchunkptr  smallbins[(NSMALLBINS+1)*2];
2583   tbinptr    treebins[NTREEBINS];
2584   size_t     footprint;
2585   size_t     max_footprint;
2586   size_t     footprint_limit; /* zero means no limit */
2587   flag_t     mflags;
2588 #if USE_LOCKS
2589   MLOCK_T    mutex; /* locate lock among fields that rarely change */
2590 #endif /* USE_LOCKS */
2591   msegment   seg;
2592   void*      extp; /* Unused but available for extensions */
2593   size_t     exts;
2594 };
\r
2596 typedef struct malloc_state* mstate;
\r
2598 /* ------------- Global malloc_state and malloc_params ------------------- */
\r
2601 malloc_params holds global properties, including those that can be
\r
2602 dynamically set using mallopt. There is a single instance, mparams,
\r
2603 initialized in init_mparams. Note that the non-zeroness of "magic"
\r
2604 also serves as an initialization flag.
\r
2607 struct malloc_params {
2608   size_t magic;
2609   size_t page_size;
2610   size_t granularity;
2611   size_t mmap_threshold;
2612   size_t trim_threshold;
2613   flag_t default_mflags;
2614 };
\r
2616 static struct malloc_params mparams;
\r
2618 /* Ensure mparams initialized */
\r
2619 #define ensure_initialization() (void)(mparams.magic != 0 || init_mparams())
\r
2622 #if !ONLY_MSPACES
2623 /* The global malloc_state used for all non-"mspace" calls */
\r
2624 static struct malloc_state _gm_;
\r
2625 #define gm (&_gm_)
\r
2626 #define is_global(M) ((M) == &_gm_)
\r
2628 #endif /* !ONLY_MSPACES */
\r
2630 #define is_initialized(M) ((M)->top != 0)
\r
2632 /* -------------------------- system alloc setup ------------------------- */
\r
2634 /* Operations on mflags */
\r
2636 #define use_lock(M) ((M)->mflags & USE_LOCK_BIT)
\r
2637 #define enable_lock(M) ((M)->mflags |= USE_LOCK_BIT)
\r
2638 #if USE_LOCKS
2639 #define disable_lock(M) ((M)->mflags &= ~USE_LOCK_BIT)
2640 #else
2641 #define disable_lock(M)
2642 #endif
\r
2644 #define use_mmap(M) ((M)->mflags & USE_MMAP_BIT)
\r
2645 #define enable_mmap(M) ((M)->mflags |= USE_MMAP_BIT)
\r
2646 #if HAVE_MMAP
2647 #define disable_mmap(M) ((M)->mflags &= ~USE_MMAP_BIT)
2648 #else
2649 #define disable_mmap(M)
2650 #endif
\r
2652 #define use_noncontiguous(M) ((M)->mflags & USE_NONCONTIGUOUS_BIT)
\r
2653 #define disable_contiguous(M) ((M)->mflags |= USE_NONCONTIGUOUS_BIT)
\r
2655 #define set_lock(M,L)\
\r
2656 ((M)->mflags = (L)?\
\r
2657 ((M)->mflags | USE_LOCK_BIT) :\
\r
2658 ((M)->mflags & ~USE_LOCK_BIT))
\r
2660 /* page-align a size */
\r
2661 #define page_align(S)\
\r
2662 (((S) + (mparams.page_size - SIZE_T_ONE)) & ~(mparams.page_size - SIZE_T_ONE))
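/* E.g., with a 4096-byte page size, page_align(1) == 4096,
   page_align(4096) == 4096, and page_align(4097) == 8192. */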
\r
2664 /* granularity-align a size */
\r
2665 #define granularity_align(S)\
\r
2666 (((S) + (mparams.granularity - SIZE_T_ONE))\
\r
2667 & ~(mparams.granularity - SIZE_T_ONE))
\r
2670 /* For mmap, use granularity alignment on windows, else page-align */
2671 #ifdef WIN32
2672 #define mmap_align(S) granularity_align(S)
2673 #else
2674 #define mmap_align(S) page_align(S)
2675 #endif
\r
2677 /* For sys_alloc, enough padding to ensure can malloc request on success */
\r
2678 #define SYS_ALLOC_PADDING (TOP_FOOT_SIZE + MALLOC_ALIGNMENT)
\r
2680 #define is_page_aligned(S)\
\r
2681 (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0)
\r
2682 #define is_granularity_aligned(S)\
\r
2683 (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0)
\r
2685 /* True if segment S holds address A */
\r
2686 #define segment_holds(S, A)\
\r
2687 ((char*)(A) >= S->base && (char*)(A) < S->base + S->size)
\r
2689 /* Return segment holding given address */
\r
2690 static msegmentptr segment_holding(mstate m, char* addr) {
2691   msegmentptr sp = &m->seg;
2692   for (;;) {
2693     if (addr >= sp->base && addr < sp->base + sp->size)
2694       return sp;
2695     if ((sp = sp->next) == 0)
2696       return 0;
2697   }
2698 }
\r
2700 /* Return true if segment contains a segment link */
\r
2701 static int has_segment_link(mstate m, msegmentptr ss) {
2702   msegmentptr sp = &m->seg;
2703   for (;;) {
2704     if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size)
2705       return 1;
2706     if ((sp = sp->next) == 0)
2707       return 0;
2708   }
2709 }
\r
2711 #ifndef MORECORE_CANNOT_TRIM
\r
2712 #define should_trim(M,s) ((s) > (M)->trim_check)
\r
2713 #else /* MORECORE_CANNOT_TRIM */
\r
2714 #define should_trim(M,s) (0)
\r
2715 #endif /* MORECORE_CANNOT_TRIM */
\r
2718 TOP_FOOT_SIZE is padding at the end of a segment, including space
\r
2719 that may be needed to place segment records and fenceposts when new
\r
2720 noncontiguous segments are added.
\r
2722 #define TOP_FOOT_SIZE\
\r
2723 (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE)
\r
2726 /* ------------------------------- Hooks -------------------------------- */
\r
2729 PREACTION should be defined to return 0 on success, and nonzero on
\r
2730 failure. If you are not using locking, you can redefine these to do
\r
2731 anything you like.
\r
2734 #if USE_LOCKS
2735 #define PREACTION(M) ((use_lock(M))? ACQUIRE_LOCK(&(M)->mutex) : 0)
\r
2736 #define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); }
\r
2737 #else /* USE_LOCKS */
\r
2739 #ifndef PREACTION
2740 #define PREACTION(M) (0)
2741 #endif /* PREACTION */
\r
2743 #ifndef POSTACTION
\r
2744 #define POSTACTION(M)
\r
2745 #endif /* POSTACTION */
\r
2747 #endif /* USE_LOCKS */
\r
2750 CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses.
\r
2751 USAGE_ERROR_ACTION is triggered on detected bad frees and
\r
2752 reallocs. The argument p is an address that might have triggered the
\r
2753 fault. It is ignored by the two predefined actions, but might be
\r
2754 useful in custom actions that try to help diagnose errors.
\r
2757 #if PROCEED_ON_ERROR
\r
2759 /* A count of the number of corruption errors causing resets */
\r
2760 int malloc_corruption_error_count;
\r
2762 /* default corruption action */
\r
2763 static void reset_on_error(mstate m);
\r
2765 #define CORRUPTION_ERROR_ACTION(m) reset_on_error(m)
\r
2766 #define USAGE_ERROR_ACTION(m, p)
\r
2768 #else /* PROCEED_ON_ERROR */
\r
2770 #ifndef CORRUPTION_ERROR_ACTION
\r
2771 #define CORRUPTION_ERROR_ACTION(m) ABORT
\r
2772 #endif /* CORRUPTION_ERROR_ACTION */
\r
2774 #ifndef USAGE_ERROR_ACTION
\r
2775 #define USAGE_ERROR_ACTION(m,p) ABORT
\r
2776 #endif /* USAGE_ERROR_ACTION */
\r
2778 #endif /* PROCEED_ON_ERROR */
\r
2781 /* -------------------------- Debugging setup ---------------------------- */
\r
2784 #if ! DEBUG
2785 #define check_free_chunk(M,P)
\r
2786 #define check_inuse_chunk(M,P)
\r
2787 #define check_malloced_chunk(M,P,N)
\r
2788 #define check_mmapped_chunk(M,P)
\r
2789 #define check_malloc_state(M)
\r
2790 #define check_top_chunk(M,P)
\r
2792 #else /* DEBUG */
2793 #define check_free_chunk(M,P) do_check_free_chunk(M,P)
\r
2794 #define check_inuse_chunk(M,P) do_check_inuse_chunk(M,P)
\r
2795 #define check_top_chunk(M,P) do_check_top_chunk(M,P)
\r
2796 #define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N)
\r
2797 #define check_mmapped_chunk(M,P) do_check_mmapped_chunk(M,P)
\r
2798 #define check_malloc_state(M) do_check_malloc_state(M)
\r
2800 static void do_check_any_chunk(mstate m, mchunkptr p);
\r
2801 static void do_check_top_chunk(mstate m, mchunkptr p);
\r
2802 static void do_check_mmapped_chunk(mstate m, mchunkptr p);
\r
2803 static void do_check_inuse_chunk(mstate m, mchunkptr p);
\r
2804 static void do_check_free_chunk(mstate m, mchunkptr p);
\r
2805 static void do_check_malloced_chunk(mstate m, void* mem, size_t s);
\r
2806 static void do_check_tree(mstate m, tchunkptr t);
\r
2807 static void do_check_treebin(mstate m, bindex_t i);
\r
2808 static void do_check_smallbin(mstate m, bindex_t i);
\r
2809 static void do_check_malloc_state(mstate m);
\r
2810 static int bin_find(mstate m, mchunkptr x);
\r
2811 static size_t traverse_and_check(mstate m);
\r
2812 #endif /* DEBUG */
\r
/* ---------------------------- Indexing Bins ---------------------------- */

#define is_small(s)         (((s) >> SMALLBIN_SHIFT) < NSMALLBINS)
#define small_index(s)      (bindex_t)((s)  >> SMALLBIN_SHIFT)
#define small_index2size(i) ((i)  << SMALLBIN_SHIFT)
#define MIN_SMALL_INDEX     (small_index(MIN_CHUNK_SIZE))

/* addressing by index. See above about smallbin repositioning */
#define smallbin_at(M, i)   ((sbinptr)((char*)&((M)->smallbins[(i)<<1])))
#define treebin_at(M,i)     (&((M)->treebins[i]))
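/*
  Worked example, a sketch assuming the default SMALLBIN_SHIFT of 3
  and that assert() is available: each smallbin holds chunks of
  exactly one size, so the index mapping round-trips.
*/
#if 0
static void smallbin_index_demo(void) {
  size_t s = 40;                     /* an already-padded chunk size */
  bindex_t i = small_index(s);       /* 40 >> 3 == 5 */
  assert(small_index2size(i) == s);  /* one size per bin */
}
#endif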
\r
/* assign tree index for size S to variable I. Use x86 asm if possible  */
#if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
#define compute_tree_index(S, I)\
{\
  unsigned int X = S >> TREEBIN_SHIFT;\
  if (X == 0)\
    I = 0;\
  else if (X > 0xFFFF)\
    I = NTREEBINS-1;\
  else {\
    unsigned int K = (unsigned) sizeof(X)*__CHAR_BIT__ - 1 - (unsigned) __builtin_clz(X); \
    I =  (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
  }\
}

#elif defined (__INTEL_COMPILER)
#define compute_tree_index(S, I)\
{\
  size_t X = S >> TREEBIN_SHIFT;\
  if (X == 0)\
    I = 0;\
  else if (X > 0xFFFF)\
    I = NTREEBINS-1;\
  else {\
    unsigned int K = _bit_scan_reverse (X); \
    I =  (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
  }\
}

#elif defined(_MSC_VER) && _MSC_VER>=1300
#define compute_tree_index(S, I)\
{\
  size_t X = S >> TREEBIN_SHIFT;\
  if (X == 0)\
    I = 0;\
  else if (X > 0xFFFF)\
    I = NTREEBINS-1;\
  else {\
    unsigned int K;\
    _BitScanReverse((DWORD *) &K, (DWORD) X);\
    I =  (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
  }\
}

#else /* GNUC */
#define compute_tree_index(S, I)\
{\
  size_t X = S >> TREEBIN_SHIFT;\
  if (X == 0)\
    I = 0;\
  else if (X > 0xFFFF)\
    I = NTREEBINS-1;\
  else {\
    unsigned int Y = (unsigned int)X;\
    unsigned int N = ((Y - 0x100) >> 16) & 8;\
    unsigned int K = (((Y <<= N) - 0x1000) >> 16) & 4;\
    N += K;\
    N += K = (((Y <<= K) - 0x4000) >> 16) & 2;\
    K = 14 - N + ((Y <<= K) >> 15);\
    I = (K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1));\
  }\
}
#endif /* GNUC */
\r
/* Bit representing maximum resolved size in a treebin at i */
#define bit_for_tree_index(i) \
   (i == NTREEBINS-1)? (SIZE_T_BITSIZE-1) : (((i) >> 1) + TREEBIN_SHIFT - 2)

/* Shift placing maximum resolved bit in a treebin at i as sign bit */
#define leftshift_for_tree_index(i) \
   ((i == NTREEBINS-1)? 0 : \
    ((SIZE_T_BITSIZE-SIZE_T_ONE) - (((i) >> 1) + TREEBIN_SHIFT - 2)))

/* The size of the smallest chunk held in bin with index i */
#define minsize_for_tree_index(i) \
   ((SIZE_T_ONE << (((i) >> 1) + TREEBIN_SHIFT)) |  \
   (((size_t)((i) & SIZE_T_ONE)) << (((i) >> 1) + TREEBIN_SHIFT - 1)))
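/*
  Worked example, a sketch assuming the default TREEBIN_SHIFT of 8:
  for S == 0x600 (1536), X == S >> 8 == 6, whose highest set bit is
  K == 2, so I == (K << 1) + ((S >> (K + 7)) & 1) == 4 + 1 == 5, and
  consistently minsize_for_tree_index(5) == (1 << 10) | (1 << 9) == 1536.
  Each pair of treebins splits one power-of-two size range in half.
*/
#if 0
static void tree_index_demo(void) {
  bindex_t i;
  compute_tree_index((size_t)0x600, i);
  assert(i == 5);
  assert(minsize_for_tree_index(5) == (size_t)0x600);
}
#endif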
\r
/* ------------------------ Operations on bin maps ----------------------- */

/* bit corresponding to given index */
#define idx2bit(i)              ((binmap_t)(1) << (i))

/* Mark/Clear bits with given index */
#define mark_smallmap(M,i)      ((M)->smallmap |=  idx2bit(i))
#define clear_smallmap(M,i)     ((M)->smallmap &= ~idx2bit(i))
#define smallmap_is_marked(M,i) ((M)->smallmap &   idx2bit(i))

#define mark_treemap(M,i)       ((M)->treemap  |=  idx2bit(i))
#define clear_treemap(M,i)      ((M)->treemap  &= ~idx2bit(i))
#define treemap_is_marked(M,i)  ((M)->treemap  &   idx2bit(i))

/* isolate the least set bit of a bitmap */
#define least_bit(x)         ((x) & -(x))

/* mask with all bits to left of least bit of x on */
#define left_bits(x)         ((x<<1) | -(x<<1))

/* mask with all bits to left of or equal to least bit of x on */
#define same_or_left_bits(x) ((x) | -(x))
\r
/* index corresponding to given bit. Use x86 asm if possible */

#if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
#define compute_bit2idx(X, I)\
{\
  unsigned int J;\
  J = __builtin_ctz(X); \
  I = (bindex_t)J;\
}

#elif defined (__INTEL_COMPILER)
#define compute_bit2idx(X, I)\
{\
  unsigned int J;\
  J = _bit_scan_forward (X); \
  I = (bindex_t)J;\
}

#elif defined(_MSC_VER) && _MSC_VER>=1300
#define compute_bit2idx(X, I)\
{\
  unsigned int J;\
  _BitScanForward((DWORD *) &J, X);\
  I = (bindex_t)J;\
}

#elif USE_BUILTIN_FFS
#define compute_bit2idx(X, I) I = ffs(X)-1

#else
#define compute_bit2idx(X, I)\
{\
  unsigned int Y = X - 1;\
  unsigned int K = Y >> (16-4) & 16;\
  unsigned int N = K;        Y >>= K;\
  N += K = Y >> (8-3) &  8;  Y >>= K;\
  N += K = Y >> (4-2) &  4;  Y >>= K;\
  N += K = Y >> (2-1) &  2;  Y >>= K;\
  N += K = Y >> (1-0) &  1;  Y >>= K;\
  I = (bindex_t)(N + Y);\
}
#endif /* GNUC */
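/*
  Worked example, a sketch (assumes a 32-bit binmap_t and assert()):
  on a map with bits 3 and 6 set, least_bit isolates bit 3 and
  compute_bit2idx recovers the index; left_bits turns "bin 3" into
  "all bins above 3", which is how malloc scans for the next
  nonempty bin.
*/
#if 0
static void binmap_demo(void) {
  binmap_t map = (binmap_t)0x48;       /* bits 3 and 6 */
  bindex_t i;
  assert(least_bit(map) == (binmap_t)0x08);
  compute_bit2idx(least_bit(map), i);
  assert(i == 3);
  assert(left_bits(idx2bit(3)) == (binmap_t)~(binmap_t)0x0F);
}
#endif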
\r
/* ----------------------- Runtime Check Support ------------------------- */

/*
  For security, the main invariant is that malloc/free/etc never
  writes to a static address other than malloc_state, unless static
  malloc_state itself has been corrupted, which cannot occur via
  malloc (because of these checks). In essence this means that we
  believe all pointers, sizes, maps etc held in malloc_state, but
  check all of those linked or offsetted from other embedded data
  structures.  These checks are interspersed with main code in a way
  that tends to minimize their run-time cost.

  When FOOTERS is defined, in addition to range checking, we also
  verify footer fields of inuse chunks, which can be used to guarantee
  that the mstate controlling malloc/free is intact.  This is a
  streamlined version of the approach described by William Robertson
  et al in "Run-time Detection of Heap-based Overflows" LISA'03
  http://www.usenix.org/events/lisa03/tech/robertson.html The footer
  of an inuse chunk holds the xor of its mstate and a random seed,
  that is checked upon calls to free() and realloc().  This is
  (probabilistically) unguessable from outside the program, but can be
  computed by any code successfully malloc'ing any chunk, so does not
  itself provide protection against code that has already broken
  security through some other means.  Unlike Robertson et al, we
  always dynamically check addresses of all offset chunks (previous,
  next, etc). This turns out to be cheaper than relying on hashes.
*/
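/*
  Sketch of the footer scheme just described (the real macros are
  mark_inuse_foot and get_mstate_for below): the footer of an inuse
  chunk of size s stores (mstate ^ mparams.magic), so free() can
  recover and validate the owning mstate without trusting its caller.
*/
#if 0
static mstate footer_roundtrip_demo(mstate m, mchunkptr p, size_t s) {
  ((mchunkptr)((char*)(p) + (s)))->prev_foot = (size_t)m ^ mparams.magic;
  /* result equals m unless the footer has been overwritten */
  return (mstate)(((mchunkptr)((char*)(p) + (s)))->prev_foot ^ mparams.magic);
}
#endif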
\r
#if !INSECURE
/* Check if address a is at least as high as any from MORECORE or MMAP */
#define ok_address(M, a) ((char*)(a) >= (M)->least_addr)
/* Check if address of next chunk n is higher than base chunk p */
#define ok_next(p, n)    ((char*)(p) < (char*)(n))
/* Check if p has inuse status */
#define ok_inuse(p)     is_inuse(p)
/* Check if p has its pinuse bit on */
#define ok_pinuse(p)     pinuse(p)

#else /* !INSECURE */
#define ok_address(M, a) (1)
#define ok_next(b, n)    (1)
#define ok_inuse(p)      (1)
#define ok_pinuse(p)     (1)
#endif /* !INSECURE */

#if (FOOTERS && !INSECURE)
/* Check if (alleged) mstate m has expected magic field */
#define ok_magic(M)      ((M)->magic == mparams.magic)
#else  /* (FOOTERS && !INSECURE) */
#define ok_magic(M)      (1)
#endif /* (FOOTERS && !INSECURE) */

/* In gcc, use __builtin_expect to minimize impact of checks */
#if !INSECURE
#if defined(__GNUC__) && __GNUC__ >= 3
#define RTCHECK(e)  __builtin_expect(e, 1)
#else /* GNUC */
#define RTCHECK(e)  (e)
#endif /* GNUC */
#else /* !INSECURE */
#define RTCHECK(e)  (1)
#endif /* !INSECURE */
\r
/* macros to set up inuse chunks with or without footers */

#if !FOOTERS

#define mark_inuse_foot(M,p,s)

/* Macros for setting head/foot of non-mmapped chunks */

/* Set cinuse bit and pinuse bit of next chunk */
#define set_inuse(M,p,s)\
  ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
  ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)

/* Set cinuse and pinuse of this chunk and pinuse of next chunk */
#define set_inuse_and_pinuse(M,p,s)\
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
  ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)

/* Set size, cinuse and pinuse bit of this chunk */
#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT))

#else /* FOOTERS */

/* Set foot of inuse chunk to be xor of mstate and seed */
#define mark_inuse_foot(M,p,s)\
  (((mchunkptr)((char*)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic))

#define get_mstate_for(p)\
  ((mstate)(((mchunkptr)((char*)(p) +\
    (chunksize(p))))->prev_foot ^ mparams.magic))

#define set_inuse(M,p,s)\
  ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
  (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT), \
  mark_inuse_foot(M,p,s))

#define set_inuse_and_pinuse(M,p,s)\
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
  (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT),\
  mark_inuse_foot(M,p,s))

#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
  mark_inuse_foot(M, p, s))

#endif /* !FOOTERS */
\r
/* ---------------------------- setting mparams -------------------------- */

/* Initialize mparams */
static int init_mparams(void) {
#ifdef NEED_GLOBAL_LOCK_INIT
  call_once(&malloc_global_mutex_init_once, init_malloc_global_mutex);
#endif

  ACQUIRE_MALLOC_GLOBAL_LOCK();
  if (mparams.magic == 0) {
    size_t magic;
    size_t psize;
    size_t gsize;

#ifndef WIN32
    psize = malloc_getpagesize;
    gsize = ((DEFAULT_GRANULARITY != 0)? DEFAULT_GRANULARITY : psize);
#else /* WIN32 */
    {
      SYSTEM_INFO system_info;
      GetSystemInfo(&system_info);
      psize = system_info.dwPageSize;
      gsize = ((DEFAULT_GRANULARITY != 0)?
               DEFAULT_GRANULARITY : system_info.dwAllocationGranularity);
    }
#endif /* WIN32 */

    /* Sanity-check configuration:
       size_t must be unsigned and as wide as pointer type.
       ints must be at least 4 bytes.
       alignment must be at least 8.
       Alignment, min chunk size, and page size must all be powers of 2.
    */
    if ((sizeof(size_t) != sizeof(char*)) ||
        (MAX_SIZE_T < MIN_CHUNK_SIZE)  ||
        (sizeof(int) < 4)  ||
        (MALLOC_ALIGNMENT < (size_t)8U) ||
        ((MALLOC_ALIGNMENT & (MALLOC_ALIGNMENT-SIZE_T_ONE)) != 0) ||
        ((MCHUNK_SIZE      & (MCHUNK_SIZE-SIZE_T_ONE))      != 0) ||
        ((gsize            & (gsize-SIZE_T_ONE))            != 0) ||
        ((psize            & (psize-SIZE_T_ONE))            != 0))
      ABORT;
    mparams.granularity = gsize;
    mparams.page_size = psize;
    mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD;
    mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD;
#if MORECORE_CONTIGUOUS
    mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT;
#else  /* MORECORE_CONTIGUOUS */
    mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT;
#endif /* MORECORE_CONTIGUOUS */

#if !ONLY_MSPACES
    /* Set up lock for main malloc area */
    gm->mflags = mparams.default_mflags;
    (void)INITIAL_LOCK(&gm->mutex);
#endif

    {
#if USE_DEV_RANDOM
      int fd;
      unsigned char buf[sizeof(size_t)];
      /* Try to use /dev/urandom, else fall back on using time */
      if ((fd = open("/dev/urandom", O_RDONLY)) >= 0 &&
          read(fd, buf, sizeof(buf)) == sizeof(buf)) {
        magic = *((size_t *) buf);
        close(fd);
      }
      else
#endif /* USE_DEV_RANDOM */
#ifdef WIN32
      magic = (size_t)(GetTickCount() ^ (size_t)0x55555555U);
#elif defined(LACKS_TIME_H)
      magic = (size_t)&magic ^ (size_t)0x55555555U;
#else
      magic = (size_t)(time(0) ^ (size_t)0x55555555U);
#endif
      magic |= (size_t)8U;    /* ensure nonzero */
      magic &= ~(size_t)7U;   /* improve chances of fault for bad values */
      /* Until memory modes commonly available, use volatile-write */
      (*(volatile size_t *)(&(mparams.magic))) = magic;
    }
  }

  RELEASE_MALLOC_GLOBAL_LOCK();
  return 1;
}
\r
/* support for mallopt */
static int change_mparam(int param_number, int value) {
  size_t val;
  ensure_initialization();
  val = (value == -1)? MAX_SIZE_T : (size_t)value;
  switch(param_number) {
  case M_TRIM_THRESHOLD:
    mparams.trim_threshold = val;
    return 1;
  case M_GRANULARITY:
    if (val >= mparams.page_size && ((val & (val-1)) == 0)) {
      mparams.granularity = val;
      return 1;
    }
    else
      return 0;
  case M_MMAP_THRESHOLD:
    mparams.mmap_threshold = val;
    return 1;
  default:
    return 0;
  }
}
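/*
  Usage sketch: the public dlmallopt relays here, so e.g. raising the
  trim threshold makes free() return memory to the system less
  eagerly (a value of -1 maps to MAX_SIZE_T, i.e. never trim).
*/
#if 0
static void mallopt_demo(void) {
  assert(change_mparam(M_TRIM_THRESHOLD, 2*1024*1024) == 1);
  assert(change_mparam(M_GRANULARITY, 3) == 0); /* not a power of 2 */
}
#endif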
\r
/* ------------------------- Debugging Support --------------------------- */

#if DEBUG

/* Check properties of any chunk, whether free, inuse, mmapped etc  */
static void do_check_any_chunk(mstate m, mchunkptr p) {
  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
  assert(ok_address(m, p));
}

/* Check properties of top chunk */
static void do_check_top_chunk(mstate m, mchunkptr p) {
  msegmentptr sp = segment_holding(m, (char*)p);
  size_t  sz = p->head & ~INUSE_BITS; /* third-lowest bit can be set! */
  assert(sp != 0);
  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
  assert(ok_address(m, p));
  assert(sz == m->topsize);
  assert(sz > 0);
  assert(sz == ((sp->base + sp->size) - (char*)p) - TOP_FOOT_SIZE);
  assert(pinuse(p));
  assert(!pinuse(chunk_plus_offset(p, sz)));
}

/* Check properties of (inuse) mmapped chunks */
static void do_check_mmapped_chunk(mstate m, mchunkptr p) {
  size_t  sz = chunksize(p);
  size_t len = (sz + (p->prev_foot) + MMAP_FOOT_PAD);
  assert(is_mmapped(p));
  assert(use_mmap(m));
  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
  assert(ok_address(m, p));
  assert(!is_small(sz));
  assert((len & (mparams.page_size-SIZE_T_ONE)) == 0);
  assert(chunk_plus_offset(p, sz)->head == FENCEPOST_HEAD);
  assert(chunk_plus_offset(p, sz+SIZE_T_SIZE)->head == 0);
}

/* Check properties of inuse chunks */
static void do_check_inuse_chunk(mstate m, mchunkptr p) {
  do_check_any_chunk(m, p);
  assert(is_inuse(p));
  assert(next_pinuse(p));
  /* If not pinuse and not mmapped, previous chunk has OK offset */
  assert(is_mmapped(p) || pinuse(p) || next_chunk(prev_chunk(p)) == p);
  if (is_mmapped(p))
    do_check_mmapped_chunk(m, p);
}
\r
/* Check properties of free chunks */
static void do_check_free_chunk(mstate m, mchunkptr p) {
  size_t sz = chunksize(p);
  mchunkptr next = chunk_plus_offset(p, sz);
  do_check_any_chunk(m, p);
  assert(!is_inuse(p));
  assert(!next_pinuse(p));
  assert(!is_mmapped(p));
  if (p != m->dv && p != m->top) {
    if (sz >= MIN_CHUNK_SIZE) {
      assert((sz & CHUNK_ALIGN_MASK) == 0);
      assert(is_aligned(chunk2mem(p)));
      assert(next->prev_foot == sz);
      assert(pinuse(p));
      assert(next == m->top || is_inuse(next));
      assert(p->fd->bk == p);
      assert(p->bk->fd == p);
    }
    else  /* markers are always of size SIZE_T_SIZE */
      assert(sz == SIZE_T_SIZE);
  }
}

/* Check properties of malloced chunks at the point they are malloced */
static void do_check_malloced_chunk(mstate m, void* mem, size_t s) {
  if (mem != 0) {
    mchunkptr p = mem2chunk(mem);
    size_t sz = p->head & ~INUSE_BITS;
    do_check_inuse_chunk(m, p);
    assert((sz & CHUNK_ALIGN_MASK) == 0);
    assert(sz >= MIN_CHUNK_SIZE);
    assert(sz >= s);
    /* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */
    assert(is_mmapped(p) || sz < (s + MIN_CHUNK_SIZE));
  }
}
\r
/* Check a tree and its subtrees.  */
static void do_check_tree(mstate m, tchunkptr t) {
  tchunkptr head = 0;
  tchunkptr u = t;
  bindex_t tindex = t->index;
  size_t tsize = chunksize(t);
  bindex_t idx;
  compute_tree_index(tsize, idx);
  assert(tindex == idx);
  assert(tsize >= MIN_LARGE_SIZE);
  assert(tsize >= minsize_for_tree_index(idx));
  assert((idx == NTREEBINS-1) || (tsize < minsize_for_tree_index((idx+1))));

  do { /* traverse through chain of same-sized nodes */
    do_check_any_chunk(m, ((mchunkptr)u));
    assert(u->index == tindex);
    assert(chunksize(u) == tsize);
    assert(!is_inuse(u));
    assert(!next_pinuse(u));
    assert(u->fd->bk == u);
    assert(u->bk->fd == u);
    if (u->parent == 0) {
      assert(u->child[0] == 0);
      assert(u->child[1] == 0);
    }
    else {
      assert(head == 0); /* only one node on chain has parent */
      head = u;
      assert(u->parent != u);
      assert(u->parent->child[0] == u ||
             u->parent->child[1] == u ||
             *((tbinptr*)(u->parent)) == u);
      if (u->child[0] != 0) {
        assert(u->child[0]->parent == u);
        assert(u->child[0] != u);
        do_check_tree(m, u->child[0]);
      }
      if (u->child[1] != 0) {
        assert(u->child[1]->parent == u);
        assert(u->child[1] != u);
        do_check_tree(m, u->child[1]);
      }
      if (u->child[0] != 0 && u->child[1] != 0) {
        assert(chunksize(u->child[0]) < chunksize(u->child[1]));
      }
    }
    u = u->fd;
  } while (u != t);
  assert(head != 0);
}

/* Check all the chunks in a treebin. */
static void do_check_treebin(mstate m, bindex_t i) {
  tbinptr* tb = treebin_at(m, i);
  tchunkptr t = *tb;
  int empty = (m->treemap & (1U << i)) == 0;
  if (t == 0)
    assert(empty);
  if (!empty)
    do_check_tree(m, t);
}

/* Check all the chunks in a smallbin. */
static void do_check_smallbin(mstate m, bindex_t i) {
  sbinptr b = smallbin_at(m, i);
  mchunkptr p = b->bk;
  unsigned int empty = (m->smallmap & (1U << i)) == 0;
  if (p == b)
    assert(empty);
  if (!empty) {
    for (; p != b; p = p->bk) {
      size_t size = chunksize(p);
      mchunkptr q;
      /* each chunk claims to be free */
      do_check_free_chunk(m, p);
      /* chunk belongs in bin */
      assert(small_index(size) == i);
      assert(p->bk == b || chunksize(p->bk) == chunksize(p));
      /* chunk is followed by an inuse chunk */
      q = next_chunk(p);
      if (q->head != FENCEPOST_HEAD)
        do_check_inuse_chunk(m, q);
    }
  }
}
\r
/* Find x in a bin. Used in other check functions. */
static int bin_find(mstate m, mchunkptr x) {
  size_t size = chunksize(x);
  if (is_small(size)) {
    bindex_t sidx = small_index(size);
    sbinptr b = smallbin_at(m, sidx);
    if (smallmap_is_marked(m, sidx)) {
      mchunkptr p = b;
      do {
        if (p == x)
          return 1;
      } while ((p = p->fd) != b);
    }
  }
  else {
    bindex_t tidx;
    compute_tree_index(size, tidx);
    if (treemap_is_marked(m, tidx)) {
      tchunkptr t = *treebin_at(m, tidx);
      size_t sizebits = size << leftshift_for_tree_index(tidx);
      while (t != 0 && chunksize(t) != size) {
        t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
        sizebits <<= 1;
      }
      if (t != 0) {
        tchunkptr u = t;
        do {
          if (u == (tchunkptr)x)
            return 1;
        } while ((u = u->fd) != t);
      }
    }
  }
  return 0;
}

/* Traverse each chunk and check it; return total */
static size_t traverse_and_check(mstate m) {
  size_t sum = 0;
  if (is_initialized(m)) {
    msegmentptr s = &m->seg;
    sum += m->topsize + TOP_FOOT_SIZE;
    while (s != 0) {
      mchunkptr q = align_as_chunk(s->base);
      mchunkptr lastq = 0;
      assert(pinuse(q));
      while (segment_holds(s, q) &&
             q != m->top && q->head != FENCEPOST_HEAD) {
        sum += chunksize(q);
        if (is_inuse(q)) {
          assert(!bin_find(m, q));
          do_check_inuse_chunk(m, q);
        }
        else {
          assert(q == m->dv || bin_find(m, q));
          assert(lastq == 0 || is_inuse(lastq)); /* Not 2 consecutive free */
          do_check_free_chunk(m, q);
        }
        lastq = q;
        q = next_chunk(q);
      }
      s = s->next;
    }
  }
  return sum;
}
\r
/* Check all properties of malloc_state. */
static void do_check_malloc_state(mstate m) {
  bindex_t i;
  size_t total;
  /* check bins */
  for (i = 0; i < NSMALLBINS; ++i)
    do_check_smallbin(m, i);
  for (i = 0; i < NTREEBINS; ++i)
    do_check_treebin(m, i);

  if (m->dvsize != 0) { /* check dv chunk */
    do_check_any_chunk(m, m->dv);
    assert(m->dvsize == chunksize(m->dv));
    assert(m->dvsize >= MIN_CHUNK_SIZE);
    assert(bin_find(m, m->dv) == 0);
  }

  if (m->top != 0) {   /* check top chunk */
    do_check_top_chunk(m, m->top);
    /*assert(m->topsize == chunksize(m->top)); redundant */
    assert(m->topsize > 0);
    assert(bin_find(m, m->top) == 0);
  }

  total = traverse_and_check(m);
  assert(total <= m->footprint);
  assert(m->footprint <= m->max_footprint);
}
#endif /* DEBUG */
\r
/* ----------------------------- statistics ------------------------------ */

#if !NO_MALLINFO
static struct mallinfo internal_mallinfo(mstate m) {
  struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
  ensure_initialization();
  if (!PREACTION(m)) {
    check_malloc_state(m);
    if (is_initialized(m)) {
      size_t nfree = SIZE_T_ONE; /* top always free */
      size_t mfree = m->topsize + TOP_FOOT_SIZE;
      size_t sum = mfree;
      msegmentptr s = &m->seg;
      while (s != 0) {
        mchunkptr q = align_as_chunk(s->base);
        while (segment_holds(s, q) &&
               q != m->top && q->head != FENCEPOST_HEAD) {
          size_t sz = chunksize(q);
          sum += sz;
          if (!is_inuse(q)) {
            mfree += sz;
            ++nfree;
          }
          q = next_chunk(q);
        }
        s = s->next;
      }

      nm.arena    = sum;
      nm.ordblks  = nfree;
      nm.hblkhd   = m->footprint - sum;
      nm.usmblks  = m->max_footprint;
      nm.uordblks = m->footprint - mfree;
      nm.fordblks = mfree;
      nm.keepcost = m->topsize;
    }

    POSTACTION(m);
  }
  return nm;
}
#endif /* !NO_MALLINFO */
\r
#if !NO_MALLOC_STATS
static void internal_malloc_stats(mstate m) {
  ensure_initialization();
  if (!PREACTION(m)) {
    size_t maxfp = 0;
    size_t fp = 0;
    size_t used = 0;
    check_malloc_state(m);
    if (is_initialized(m)) {
      msegmentptr s = &m->seg;
      maxfp = m->max_footprint;
      fp = m->footprint;
      used = fp - (m->topsize + TOP_FOOT_SIZE);

      while (s != 0) {
        mchunkptr q = align_as_chunk(s->base);
        while (segment_holds(s, q) &&
               q != m->top && q->head != FENCEPOST_HEAD) {
          if (!is_inuse(q))
            used -= chunksize(q);
          q = next_chunk(q);
        }
        s = s->next;
      }
    }
    POSTACTION(m); /* drop lock */
    fprintf(stderr, "max system bytes = %10lu\n", (unsigned long)(maxfp));
    fprintf(stderr, "system bytes     = %10lu\n", (unsigned long)(fp));
    fprintf(stderr, "in use bytes     = %10lu\n", (unsigned long)(used));
  }
}
#endif /* NO_MALLOC_STATS */
\r
/* ----------------------- Operations on smallbins ----------------------- */

/*
  Various forms of linking and unlinking are defined as macros.  Even
  the ones for trees, which are very long but have very short typical
  paths.  This is ugly but reduces reliance on inlining support of
  compilers.
*/

/* Link a free chunk into a smallbin  */
#define insert_small_chunk(M, P, S) {\
  bindex_t I  = small_index(S);\
  mchunkptr B = smallbin_at(M, I);\
  mchunkptr F = B;\
  assert(S >= MIN_CHUNK_SIZE);\
  if (!smallmap_is_marked(M, I))\
    mark_smallmap(M, I);\
  else if (RTCHECK(ok_address(M, B->fd)))\
    F = B->fd;\
  else {\
    CORRUPTION_ERROR_ACTION(M);\
  }\
  B->fd = P;\
  F->bk = P;\
  P->fd = F;\
  P->bk = B;\
}

/* Unlink a chunk from a smallbin  */
#define unlink_small_chunk(M, P, S) {\
  mchunkptr F = P->fd;\
  mchunkptr B = P->bk;\
  bindex_t I = small_index(S);\
  assert(P != B);\
  assert(P != F);\
  assert(chunksize(P) == small_index2size(I));\
  if (RTCHECK(F == smallbin_at(M,I) || (ok_address(M, F) && F->bk == P))) { \
    if (B == F) {\
      clear_smallmap(M, I);\
    }\
    else if (RTCHECK(B == smallbin_at(M,I) ||\
                     (ok_address(M, B) && B->fd == P))) {\
      F->bk = B;\
      B->fd = F;\
    }\
    else {\
      CORRUPTION_ERROR_ACTION(M);\
    }\
  }\
  else {\
    CORRUPTION_ERROR_ACTION(M);\
  }\
}

/* Unlink the first chunk from a smallbin */
#define unlink_first_small_chunk(M, B, P, I) {\
  mchunkptr F = P->fd;\
  assert(P != B);\
  assert(P != F);\
  assert(chunksize(P) == small_index2size(I));\
  if (B == F) {\
    clear_smallmap(M, I);\
  }\
  else if (RTCHECK(ok_address(M, F) && F->bk == P)) {\
    F->bk = B;\
    B->fd = F;\
  }\
  else {\
    CORRUPTION_ERROR_ACTION(M);\
  }\
}

/* Replace dv node, binning the old one */
/* Used only when dvsize known to be small */
#define replace_dv(M, P, S) {\
  size_t DVS = M->dvsize;\
  assert(is_small(DVS));\
  if (DVS != 0) {\
    mchunkptr DV = M->dv;\
    insert_small_chunk(M, DV, DVS);\
  }\
  M->dvsize = S;\
  M->dv = P;\
}
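/*
  Sketch of the smallbin list invariant (assumes a free chunk p that
  belongs in a smallbin): bins are circular doubly-linked lists headed
  by the bin itself, so after insert_small_chunk the invariant checked
  by DEBUG builds holds, and unlink_small_chunk restores the list.
*/
#if 0
static void smallbin_invariant_demo(mstate m, mchunkptr p) {
  size_t s = chunksize(p);
  insert_small_chunk(m, p, s);
  assert(p->fd->bk == p && p->bk->fd == p);
  unlink_small_chunk(m, p, s);
}
#endif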
\r
/* ------------------------- Operations on trees ------------------------- */

/* Insert chunk into tree */
#define insert_large_chunk(M, X, S) {\
  tbinptr* H;\
  bindex_t I;\
  compute_tree_index(S, I);\
  H = treebin_at(M, I);\
  X->index = I;\
  X->child[0] = X->child[1] = 0;\
  if (!treemap_is_marked(M, I)) {\
    mark_treemap(M, I);\
    *H = X;\
    X->parent = (tchunkptr)H;\
    X->fd = X->bk = X;\
  }\
  else {\
    tchunkptr T = *H;\
    size_t K = S << leftshift_for_tree_index(I);\
    for (;;) {\
      if (chunksize(T) != S) {\
        tchunkptr* C = &(T->child[(K >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]);\
        K <<= 1;\
        if (*C != 0)\
          T = *C;\
        else if (RTCHECK(ok_address(M, C))) {\
          *C = X;\
          X->parent = T;\
          X->fd = X->bk = X;\
          break;\
        }\
        else {\
          CORRUPTION_ERROR_ACTION(M);\
          break;\
        }\
      }\
      else {\
        tchunkptr F = T->fd;\
        if (RTCHECK(ok_address(M, T) && ok_address(M, F))) {\
          T->fd = F->bk = X;\
          X->fd = F;\
          X->bk = T;\
          X->parent = 0;\
          break;\
        }\
        else {\
          CORRUPTION_ERROR_ACTION(M);\
          break;\
        }\
      }\
    }\
  }\
}
\r
/*
  Unlink steps:

  1. If x is a chained node, unlink it from its same-sized fd/bk links
     and choose its bk node as its replacement.
  2. If x was the last node of its size, but not a leaf node, it must
     be replaced with a leaf node (not merely one with an open left or
     right), to make sure that lefts and rights of descendants
     correspond properly to bit masks.  We use the rightmost descendant
     of x.  We could use any other leaf, but this is easy to locate and
     tends to counteract removal of leftmosts elsewhere, and so keeps
     paths shorter than minimally guaranteed.  This doesn't loop much
     because on average a node in a tree is near the bottom.
  3. If x is the base of a chain (i.e., has parent links) relink
     x's parent and children to x's replacement (or null if none).
*/

#define unlink_large_chunk(M, X) {\
  tchunkptr XP = X->parent;\
  tchunkptr R;\
  if (X->bk != X) {\
    tchunkptr F = X->fd;\
    R = X->bk;\
    if (RTCHECK(ok_address(M, F) && F->bk == X && R->fd == X)) {\
      F->bk = R;\
      R->fd = F;\
    }\
    else {\
      CORRUPTION_ERROR_ACTION(M);\
    }\
  }\
  else {\
    tchunkptr* RP;\
    if (((R = *(RP = &(X->child[1]))) != 0) ||\
        ((R = *(RP = &(X->child[0]))) != 0)) {\
      tchunkptr* CP;\
      while ((*(CP = &(R->child[1])) != 0) ||\
             (*(CP = &(R->child[0])) != 0)) {\
        R = *(RP = CP);\
      }\
      if (RTCHECK(ok_address(M, RP)))\
        *RP = 0;\
      else {\
        CORRUPTION_ERROR_ACTION(M);\
      }\
    }\
  }\
  if (XP != 0) {\
    tbinptr* H = treebin_at(M, X->index);\
    if (X == *H) {\
      if ((*H = R) == 0) \
        clear_treemap(M, X->index);\
    }\
    else if (RTCHECK(ok_address(M, XP))) {\
      if (XP->child[0] == X) \
        XP->child[0] = R;\
      else \
        XP->child[1] = R;\
    }\
    else\
      CORRUPTION_ERROR_ACTION(M);\
    if (R != 0) {\
      if (RTCHECK(ok_address(M, R))) {\
        tchunkptr C0, C1;\
        R->parent = XP;\
        if ((C0 = X->child[0]) != 0) {\
          if (RTCHECK(ok_address(M, C0))) {\
            R->child[0] = C0;\
            C0->parent = R;\
          }\
          else\
            CORRUPTION_ERROR_ACTION(M);\
        }\
        if ((C1 = X->child[1]) != 0) {\
          if (RTCHECK(ok_address(M, C1))) {\
            R->child[1] = C1;\
            C1->parent = R;\
          }\
          else\
            CORRUPTION_ERROR_ACTION(M);\
        }\
      }\
      else\
        CORRUPTION_ERROR_ACTION(M);\
    }\
  }\
}
\r
/* Relays to large vs small bin operations */

#define insert_chunk(M, P, S)\
  if (is_small(S)) insert_small_chunk(M, P, S)\
  else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); }

#define unlink_chunk(M, P, S)\
  if (is_small(S)) unlink_small_chunk(M, P, S)\
  else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); }


/* Relays to internal calls to malloc/free from realloc, memalign etc */

#if ONLY_MSPACES
#define internal_malloc(m, b) mspace_malloc(m, b)
#define internal_free(m, mem) mspace_free(m,mem);
#else /* ONLY_MSPACES */
#if MSPACES
#define internal_malloc(m, b)\
  ((m == gm)? dlmalloc(b) : mspace_malloc(m, b))
#define internal_free(m, mem)\
  if (m == gm) dlfree(mem); else mspace_free(m,mem);
#else /* MSPACES */
#define internal_malloc(m, b) dlmalloc(b)
#define internal_free(m, mem) dlfree(mem)
#endif /* MSPACES */
#endif /* ONLY_MSPACES */
\r
/* -----------------------  Direct-mmapping chunks ----------------------- */

/*
  Directly mmapped chunks are set up with an offset to the start of
  the mmapped region stored in the prev_foot field of the chunk. This
  allows reconstruction of the required argument to MUNMAP when freed,
  and also allows adjustment of the returned chunk to meet alignment
  requirements (especially in memalign).
*/
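/*
  Sketch of the layout just described: for a directly mmapped chunk p,
  prev_foot holds the distance back to the start of the mapped region
  and MMAP_FOOT_PAD accounts for the trailing fenceposts, so the
  original mapping can be recovered for munmap (this mirrors what
  mmap_alloc below sets up and what free() later undoes).
*/
#if 0
static void mmap_region_of(mchunkptr p, char** base, size_t* len) {
  *base = (char*)p - p->prev_foot;
  *len  = chunksize(p) + p->prev_foot + MMAP_FOOT_PAD;
}
#endif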
\r
/* Malloc using mmap */
static void* mmap_alloc(mstate m, size_t nb) {
  size_t mmsize = mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
  if (m->footprint_limit != 0) {
    size_t fp = m->footprint + mmsize;
    if (fp <= m->footprint || fp > m->footprint_limit)
      return 0;
  }
  if (mmsize > nb) {     /* Check for wrap around 0 */
    char* mm = (char*)(CALL_DIRECT_MMAP(mmsize));
    if (mm != CMFAIL) {
      size_t offset = align_offset(chunk2mem(mm));
      size_t psize = mmsize - offset - MMAP_FOOT_PAD;
      mchunkptr p = (mchunkptr)(mm + offset);
      p->prev_foot = offset;
      p->head = psize;
      mark_inuse_foot(m, p, psize);
      chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD;
      chunk_plus_offset(p, psize+SIZE_T_SIZE)->head = 0;

      if (m->least_addr == 0 || mm < m->least_addr)
        m->least_addr = mm;
      if ((m->footprint += mmsize) > m->max_footprint)
        m->max_footprint = m->footprint;
      assert(is_aligned(chunk2mem(p)));
      check_mmapped_chunk(m, p);
      return chunk2mem(p);
    }
  }
  return 0;
}
\r
/* Realloc using mmap */
static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb, int flags) {
  size_t oldsize = chunksize(oldp);
  (void)flags;      /* placate people compiling -Wunused */
  if (is_small(nb)) /* Can't shrink mmap regions below small size */
    return 0;
  /* Keep old chunk if big enough but not too big */
  if (oldsize >= nb + SIZE_T_SIZE &&
      (oldsize - nb) <= (mparams.granularity << 1))
    return oldp;
  else {
    size_t offset = oldp->prev_foot;
    size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD;
    size_t newmmsize = mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
    char* cp = (char*)CALL_MREMAP((char*)oldp - offset,
                                  oldmmsize, newmmsize, flags);
    if (cp != CMFAIL) {
      mchunkptr newp = (mchunkptr)(cp + offset);
      size_t psize = newmmsize - offset - MMAP_FOOT_PAD;
      newp->head = psize;
      mark_inuse_foot(m, newp, psize);
      chunk_plus_offset(newp, psize)->head = FENCEPOST_HEAD;
      chunk_plus_offset(newp, psize+SIZE_T_SIZE)->head = 0;

      if (cp < m->least_addr)
        m->least_addr = cp;
      if ((m->footprint += newmmsize - oldmmsize) > m->max_footprint)
        m->max_footprint = m->footprint;
      check_mmapped_chunk(m, newp);
      return newp;
    }
  }
  return 0;
}
\r
/* -------------------------- mspace management -------------------------- */

/* Initialize top chunk and its size */
static void init_top(mstate m, mchunkptr p, size_t psize) {
  /* Ensure alignment */
  size_t offset = align_offset(chunk2mem(p));
  p = (mchunkptr)((char*)p + offset);
  psize -= offset;

  m->top = p;
  m->topsize = psize;
  p->head = psize | PINUSE_BIT;
  /* set size of fake trailing chunk holding overhead space only once */
  chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE;
  m->trim_check = mparams.trim_threshold; /* reset on each update */
}
\r
/* Initialize bins for a new mstate that is otherwise zeroed out */
static void init_bins(mstate m) {
  /* Establish circular links for smallbins */
  bindex_t i;
  for (i = 0; i < NSMALLBINS; ++i) {
    sbinptr bin = smallbin_at(m,i);
    bin->fd = bin->bk = bin;
  }
}

#if PROCEED_ON_ERROR

/* default corruption action */
static void reset_on_error(mstate m) {
  int i;
  ++malloc_corruption_error_count;
  /* Reinitialize fields to forget about all memory */
  m->smallmap = m->treemap = 0;
  m->dvsize = m->topsize = 0;
  m->seg.base = 0;
  m->seg.size = 0;
  m->seg.next = 0;
  m->top = m->dv = 0;
  for (i = 0; i < NTREEBINS; ++i)
    *treebin_at(m, i) = 0;
  init_bins(m);
}
#endif /* PROCEED_ON_ERROR */
\r
/* Allocate chunk and prepend remainder with chunk in successor base. */
static void* prepend_alloc(mstate m, char* newbase, char* oldbase,
                           size_t nb) {
  mchunkptr p = align_as_chunk(newbase);
  mchunkptr oldfirst = align_as_chunk(oldbase);
  size_t psize = (char*)oldfirst - (char*)p;
  mchunkptr q = chunk_plus_offset(p, nb);
  size_t qsize = psize - nb;
  set_size_and_pinuse_of_inuse_chunk(m, p, nb);

  assert((char*)oldfirst > (char*)q);
  assert(pinuse(oldfirst));
  assert(qsize >= MIN_CHUNK_SIZE);

  /* consolidate remainder with first chunk of old base */
  if (oldfirst == m->top) {
    size_t tsize = m->topsize += qsize;
    m->top = q;
    q->head = tsize | PINUSE_BIT;
    check_top_chunk(m, q);
  }
  else if (oldfirst == m->dv) {
    size_t dsize = m->dvsize += qsize;
    m->dv = q;
    set_size_and_pinuse_of_free_chunk(q, dsize);
  }
  else {
    if (!is_inuse(oldfirst)) {
      size_t nsize = chunksize(oldfirst);
      unlink_chunk(m, oldfirst, nsize);
      oldfirst = chunk_plus_offset(oldfirst, nsize);
      qsize += nsize;
    }
    set_free_with_pinuse(q, qsize, oldfirst);
    insert_chunk(m, q, qsize);
    check_free_chunk(m, q);
  }

  check_malloced_chunk(m, chunk2mem(p), nb);
  return chunk2mem(p);
}
\r
/* Add a segment to hold a new noncontiguous region */
static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) {
  /* Determine locations and sizes of segment, fenceposts, old top */
  char* old_top = (char*)m->top;
  msegmentptr oldsp = segment_holding(m, old_top);
  char* old_end = oldsp->base + oldsp->size;
  size_t ssize = pad_request(sizeof(struct malloc_segment));
  char* rawsp = old_end - (ssize + FOUR_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
  size_t offset = align_offset(chunk2mem(rawsp));
  char* asp = rawsp + offset;
  char* csp = (asp < (old_top + MIN_CHUNK_SIZE))? old_top : asp;
  mchunkptr sp = (mchunkptr)csp;
  msegmentptr ss = (msegmentptr)(chunk2mem(sp));
  mchunkptr tnext = chunk_plus_offset(sp, ssize);
  mchunkptr p = tnext;
  int nfences = 0;

  /* reset top to new space */
  init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);

  /* Set up segment record */
  assert(is_aligned(ss));
  set_size_and_pinuse_of_inuse_chunk(m, sp, ssize);
  *ss = m->seg; /* Push current record */
  m->seg.base = tbase;
  m->seg.size = tsize;
  m->seg.sflags = mmapped;
  m->seg.next = ss;

  /* Insert trailing fenceposts */
  for (;;) {
    mchunkptr nextp = chunk_plus_offset(p, SIZE_T_SIZE);
    p->head = FENCEPOST_HEAD;
    ++nfences;
    if ((char*)(&(nextp->head)) < old_end)
      p = nextp;
    else
      break;
  }
  assert(nfences >= 2);

  /* Insert the rest of old top into a bin as an ordinary free chunk */
  if (csp != old_top) {
    mchunkptr q = (mchunkptr)old_top;
    size_t psize = csp - old_top;
    mchunkptr tn = chunk_plus_offset(q, psize);
    set_free_with_pinuse(q, psize, tn);
    insert_chunk(m, q, psize);
  }

  check_top_chunk(m, m->top);
}
\r
/* -------------------------- System allocation -------------------------- */

/* Get memory from system using MORECORE or MMAP */
static void* sys_alloc(mstate m, size_t nb) {
  char* tbase = CMFAIL;
  size_t tsize = 0;
  flag_t mmap_flag = 0;
  size_t asize; /* allocation size */

  ensure_initialization();

  /* Directly map large chunks, but only if already initialized */
  if (use_mmap(m) && nb >= mparams.mmap_threshold && m->topsize != 0) {
    void* mem = mmap_alloc(m, nb);
    if (mem != 0)
      return mem;
  }

  asize = granularity_align(nb + SYS_ALLOC_PADDING);
  if (asize <= nb)
    return 0; /* wraparound */
  if (m->footprint_limit != 0) {
    size_t fp = m->footprint + asize;
    if (fp <= m->footprint || fp > m->footprint_limit)
      return 0;
  }

  /*
    Try getting memory in any of three ways (in most-preferred to
    least-preferred order):
    1. A call to MORECORE that can normally contiguously extend memory.
       (disabled if not MORECORE_CONTIGUOUS or not HAVE_MORECORE or
       main space is mmapped or a previous contiguous call failed)
    2. A call to MMAP new space (disabled if not HAVE_MMAP).
       Note that under the default settings, if MORECORE is unable to
       fulfill a request, and HAVE_MMAP is true, then mmap is
       used as a noncontiguous system allocator. This is a useful backup
       strategy for systems with holes in address spaces -- in this case
       sbrk cannot contiguously expand the heap, but mmap may be able to
       map noncontiguous space.
    3. A call to MORECORE that cannot usually contiguously extend memory.
       (disabled if not HAVE_MORECORE)

   In all cases, we need to request enough bytes from system to ensure
   we can malloc nb bytes upon success, so pad with enough space for
   top_foot, plus alignment-pad to make sure we don't lose bytes if
   not on boundary, and round this up to a granularity unit.
  */

  if (MORECORE_CONTIGUOUS && !use_noncontiguous(m)) {
    char* br = CMFAIL;
    msegmentptr ss = (m->top == 0)? 0 : segment_holding(m, (char*)m->top);
    ACQUIRE_MALLOC_GLOBAL_LOCK();

    if (ss == 0) {  /* First time through or recovery */
      char* base = (char*)CALL_MORECORE(0);
      if (base != CMFAIL) {
        size_t fp;
        /* Adjust to end on a page boundary */
        if (!is_page_aligned(base))
          asize += (page_align((size_t)base) - (size_t)base);
        fp = m->footprint + asize; /* recheck limits */
        if (asize > nb && asize < HALF_MAX_SIZE_T &&
            (m->footprint_limit == 0 ||
             (fp > m->footprint && fp <= m->footprint_limit)) &&
            (br = (char*)(CALL_MORECORE(asize))) == base) {
          tbase = base;
          tsize = asize;
        }
      }
    }
    else {
      /* Subtract out existing available top space from MORECORE request. */
      asize = granularity_align(nb - m->topsize + SYS_ALLOC_PADDING);
      /* Use mem here only if it did continuously extend old space */
      if (asize < HALF_MAX_SIZE_T &&
          (br = (char*)(CALL_MORECORE(asize))) == ss->base+ss->size) {
        tbase = br;
        tsize = asize;
      }
    }

    if (tbase == CMFAIL) {    /* Cope with partial failure */
      if (br != CMFAIL) {     /* Try to use/extend the space we did get */
        if (asize < HALF_MAX_SIZE_T &&
            asize < nb + SYS_ALLOC_PADDING) {
          size_t esize = granularity_align(nb + SYS_ALLOC_PADDING - asize);
          if (esize < HALF_MAX_SIZE_T) {
            char* end = (char*)CALL_MORECORE(esize);
            if (end != CMFAIL)
              asize += esize;
            else {            /* Can't use; try to release */
              (void) CALL_MORECORE(-asize);
              br = CMFAIL;
            }
          }
        }
      }
      if (br != CMFAIL) {    /* Use the space we did get */
        tbase = br;
        tsize = asize;
      }
      else
        disable_contiguous(m); /* Don't try contiguous path in the future */
    }

    RELEASE_MALLOC_GLOBAL_LOCK();
  }
\r
  if (HAVE_MMAP && tbase == CMFAIL) {  /* Try MMAP */
    char* mp = (char*)(CALL_MMAP(asize));
    if (mp != CMFAIL) {
      tbase = mp;
      tsize = asize;
      mmap_flag = USE_MMAP_BIT;
    }
  }

  if (HAVE_MORECORE && tbase == CMFAIL) { /* Try noncontiguous MORECORE */
    if (asize < HALF_MAX_SIZE_T) {
      char* br = CMFAIL;
      char* end = CMFAIL;
      ACQUIRE_MALLOC_GLOBAL_LOCK();
      br = (char*)(CALL_MORECORE(asize));
      end = (char*)(CALL_MORECORE(0));
      RELEASE_MALLOC_GLOBAL_LOCK();
      if (br != CMFAIL && end != CMFAIL && br < end) {
        size_t ssize = end - br;
        if (ssize > nb + TOP_FOOT_SIZE) {
          tbase = br;
          tsize = ssize;
        }
      }
    }
  }

  if (tbase != CMFAIL) {

    if ((m->footprint += tsize) > m->max_footprint)
      m->max_footprint = m->footprint;

    if (!is_initialized(m)) { /* first-time initialization */
      if (m->least_addr == 0 || tbase < m->least_addr)
        m->least_addr = tbase;
      m->seg.base = tbase;
      m->seg.size = tsize;
      m->seg.sflags = mmap_flag;
      m->magic = mparams.magic;
      m->release_checks = MAX_RELEASE_CHECK_RATE;
      init_bins(m);
#if !ONLY_MSPACES
      if (is_global(m))
        init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
      else
#endif
      {
        /* Offset top by embedded malloc_state */
        mchunkptr mn = next_chunk(mem2chunk(m));
        init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) -TOP_FOOT_SIZE);
      }
    }

    else {
      /* Try to merge with an existing segment */
      msegmentptr sp = &m->seg;
      /* Only consider most recent segment if traversal suppressed */
      while (sp != 0 && tbase != sp->base + sp->size)
        sp = (NO_SEGMENT_TRAVERSAL) ? 0 : sp->next;
      if (sp != 0 &&
          !is_extern_segment(sp) &&
          (sp->sflags & USE_MMAP_BIT) == mmap_flag &&
          segment_holds(sp, m->top)) { /* append */
        sp->size += tsize;
        init_top(m, m->top, m->topsize + tsize);
      }
      else {
        if (tbase < m->least_addr)
          m->least_addr = tbase;
        sp = &m->seg;
        while (sp != 0 && sp->base != tbase + tsize)
          sp = (NO_SEGMENT_TRAVERSAL) ? 0 : sp->next;
        if (sp != 0 &&
            !is_extern_segment(sp) &&
            (sp->sflags & USE_MMAP_BIT) == mmap_flag) {
          char* oldbase = sp->base;
          sp->base = tbase;
          sp->size += tsize;
          return prepend_alloc(m, tbase, oldbase, nb);
        }
        else
          add_segment(m, tbase, tsize, mmap_flag);
      }
    }

    if (nb < m->topsize) { /* Allocate from new or extended top space */
      size_t rsize = m->topsize -= nb;
      mchunkptr p = m->top;
      mchunkptr r = m->top = chunk_plus_offset(p, nb);
      r->head = rsize | PINUSE_BIT;
      set_size_and_pinuse_of_inuse_chunk(m, p, nb);
      check_top_chunk(m, m->top);
      check_malloced_chunk(m, chunk2mem(p), nb);
      return chunk2mem(p);
    }
  }

  MALLOC_FAILURE_ACTION;
  return 0;
}
\r
/* -----------------------  system deallocation -------------------------- */

/* Unmap and unlink any mmapped segments that don't contain used chunks */
static size_t release_unused_segments(mstate m) {
  size_t released = 0;
  int nsegs = 0;
  msegmentptr pred = &m->seg;
  msegmentptr sp = pred->next;
  while (sp != 0) {
    char* base = sp->base;
    size_t size = sp->size;
    msegmentptr next = sp->next;
    ++nsegs;
    if (is_mmapped_segment(sp) && !is_extern_segment(sp)) {
      mchunkptr p = align_as_chunk(base);
      size_t psize = chunksize(p);
      /* Can unmap if first chunk holds entire segment and not pinned */
      if (!is_inuse(p) && (char*)p + psize >= base + size - TOP_FOOT_SIZE) {
        tchunkptr tp = (tchunkptr)p;
        assert(segment_holds(sp, (char*)sp));
        if (p == m->dv) {
          m->dv = 0;
          m->dvsize = 0;
        }
        else {
          unlink_large_chunk(m, tp);
        }
        if (CALL_MUNMAP(base, size) == 0) {
          released += size;
          m->footprint -= size;
          /* unlink obsoleted record */
          sp = pred;
          sp->next = next;
        }
        else { /* back out if cannot unmap */
          insert_large_chunk(m, tp, psize);
        }
      }
    }
    if (NO_SEGMENT_TRAVERSAL) /* scan only first segment */
      break;
    pred = sp;
    sp = next;
  }
  /* Reset check counter */
  m->release_checks = ((nsegs > MAX_RELEASE_CHECK_RATE)?
                       nsegs : MAX_RELEASE_CHECK_RATE);
  return released;
}
\r
static int sys_trim(mstate m, size_t pad) {
  size_t released = 0;
  ensure_initialization();
  if (pad < MAX_REQUEST && is_initialized(m)) {
    pad += TOP_FOOT_SIZE; /* ensure enough room for segment overhead */

    if (m->topsize > pad) {
      /* Shrink top space in granularity-size units, keeping at least one */
      size_t unit = mparams.granularity;
      size_t extra = ((m->topsize - pad + (unit - SIZE_T_ONE)) / unit -
                      SIZE_T_ONE) * unit;
      msegmentptr sp = segment_holding(m, (char*)m->top);

      if (!is_extern_segment(sp)) {
        if (is_mmapped_segment(sp)) {
          if (HAVE_MMAP &&
              sp->size >= extra &&
              !has_segment_link(m, sp)) { /* can't shrink if pinned */
            size_t newsize = sp->size - extra;
            /* Prefer mremap, fall back to munmap */
            if ((CALL_MREMAP(sp->base, sp->size, newsize, 0) != MFAIL) ||
                (CALL_MUNMAP(sp->base + newsize, extra) == 0)) {
              released = extra;
            }
          }
        }
        else if (HAVE_MORECORE) {
          if (extra >= HALF_MAX_SIZE_T) /* Avoid wrapping negative */
            extra = (HALF_MAX_SIZE_T) + SIZE_T_ONE - unit;
          ACQUIRE_MALLOC_GLOBAL_LOCK();
          {
            /* Make sure end of memory is where we last set it. */
            char* old_br = (char*)(CALL_MORECORE(0));
            if (old_br == sp->base + sp->size) {
              char* rel_br = (char*)(CALL_MORECORE(-extra));
              char* new_br = (char*)(CALL_MORECORE(0));
              if (rel_br != CMFAIL && new_br < old_br)
                released = old_br - new_br;
            }
          }
          RELEASE_MALLOC_GLOBAL_LOCK();
        }
      }

      if (released != 0) {
        sp->size -= released;
        m->footprint -= released;
        init_top(m, m->top, m->topsize - released);
        check_top_chunk(m, m->top);
      }
    }

    /* Unmap any unused mmapped segments */
    if (HAVE_MMAP)
      released += release_unused_segments(m);

    /* On failure, disable autotrim to avoid repeated failed future calls */
    if (released == 0 && m->topsize > m->trim_check)
      m->trim_check = MAX_SIZE_T;
  }

  return (released != 0)? 1 : 0;
}
\r
/* Consolidate and bin a chunk. Differs from exported versions
   of free mainly in that the chunk need not be marked as inuse.
*/
static void dispose_chunk(mstate m, mchunkptr p, size_t psize) {
  mchunkptr next = chunk_plus_offset(p, psize);
  if (!pinuse(p)) {
    mchunkptr prev;
    size_t prevsize = p->prev_foot;
    if (is_mmapped(p)) {
      psize += prevsize + MMAP_FOOT_PAD;
      if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
        m->footprint -= psize;
      return;
    }
    prev = chunk_minus_offset(p, prevsize);
    psize += prevsize;
    p = prev;
    if (RTCHECK(ok_address(m, prev))) { /* consolidate backward */
      if (p != m->dv) {
        unlink_chunk(m, p, prevsize);
      }
      else if ((next->head & INUSE_BITS) == INUSE_BITS) {
        m->dvsize = psize;
        set_free_with_pinuse(p, psize, next);
        return;
      }
    }
    else {
      CORRUPTION_ERROR_ACTION(m);
      return;
    }
  }
  if (RTCHECK(ok_address(m, next))) {
    if (!cinuse(next)) {  /* consolidate forward */
      if (next == m->top) {
        size_t tsize = m->topsize += psize;
        m->top = p;
        p->head = tsize | PINUSE_BIT;
        if (p == m->dv) {
          m->dv = 0;
          m->dvsize = 0;
        }
        return;
      }
      else if (next == m->dv) {
        size_t dsize = m->dvsize += psize;
        m->dv = p;
        set_size_and_pinuse_of_free_chunk(p, dsize);
        return;
      }
      else {
        size_t nsize = chunksize(next);
        psize += nsize;
        unlink_chunk(m, next, nsize);
        set_size_and_pinuse_of_free_chunk(p, psize);
        if (p == m->dv) {
          m->dvsize = psize;
          return;
        }
      }
    }
    else {
      set_free_with_pinuse(p, psize, next);
    }
    insert_chunk(m, p, psize);
  }
  else {
    CORRUPTION_ERROR_ACTION(m);
  }
}
\r
/* ---------------------------- malloc --------------------------- */

/* allocate a large request from the best fitting chunk in a treebin */
static void* tmalloc_large(mstate m, size_t nb) {
  tchunkptr v = 0;
  size_t rsize = -nb; /* Unsigned negation */
  tchunkptr t;
  bindex_t idx;
  compute_tree_index(nb, idx);
  if ((t = *treebin_at(m, idx)) != 0) {
    /* Traverse tree for this bin looking for node with size == nb */
    size_t sizebits = nb << leftshift_for_tree_index(idx);
    tchunkptr rst = 0;  /* The deepest untaken right subtree */
    for (;;) {
      tchunkptr rt;
      size_t trem = chunksize(t) - nb;
      if (trem < rsize) {
        v = t;
        if ((rsize = trem) == 0)
          break;
      }
      rt = t->child[1];
      t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
      if (rt != 0 && rt != t)
        rst = rt;
      if (t == 0) {
        t = rst; /* set t to least subtree holding sizes > nb */
        break;
      }
      sizebits <<= 1;
    }
  }
  if (t == 0 && v == 0) { /* set t to root of next non-empty treebin */
    binmap_t leftbits = left_bits(idx2bit(idx)) & m->treemap;
    if (leftbits != 0) {
      bindex_t i;
      binmap_t leastbit = least_bit(leftbits);
      compute_bit2idx(leastbit, i);
      t = *treebin_at(m, i);
    }
  }

  while (t != 0) { /* find smallest of tree or subtree */
    size_t trem = chunksize(t) - nb;
    if (trem < rsize) {
      rsize = trem;
      v = t;
    }
    t = leftmost_child(t);
  }

  /*  If dv is a better fit, return 0 so malloc will use it */
  if (v != 0 && rsize < (size_t)(m->dvsize - nb)) {
    if (RTCHECK(ok_address(m, v))) { /* split */
      mchunkptr r = chunk_plus_offset(v, nb);
      assert(chunksize(v) == rsize + nb);
      if (RTCHECK(ok_next(v, r))) {
        unlink_large_chunk(m, v);
        if (rsize < MIN_CHUNK_SIZE)
          set_inuse_and_pinuse(m, v, (rsize + nb));
        else {
          set_size_and_pinuse_of_inuse_chunk(m, v, nb);
          set_size_and_pinuse_of_free_chunk(r, rsize);
          insert_chunk(m, r, rsize);
        }
        return chunk2mem(v);
      }
    }
    CORRUPTION_ERROR_ACTION(m);
  }
  return 0;
}
\r
/* allocate a small request from the best fitting chunk in a treebin */
static void* tmalloc_small(mstate m, size_t nb) {
  tchunkptr t, v;
  size_t rsize;
  bindex_t i;
  binmap_t leastbit = least_bit(m->treemap);
  compute_bit2idx(leastbit, i);
  v = t = *treebin_at(m, i);
  rsize = chunksize(t) - nb;

  while ((t = leftmost_child(t)) != 0) {
    size_t trem = chunksize(t) - nb;
    if (trem < rsize) {
      rsize = trem;
      v = t;
    }
  }

  if (RTCHECK(ok_address(m, v))) {
    mchunkptr r = chunk_plus_offset(v, nb);
    assert(chunksize(v) == rsize + nb);
    if (RTCHECK(ok_next(v, r))) {
      unlink_large_chunk(m, v);
      if (rsize < MIN_CHUNK_SIZE)
        set_inuse_and_pinuse(m, v, (rsize + nb));
      else {
        set_size_and_pinuse_of_inuse_chunk(m, v, nb);
        set_size_and_pinuse_of_free_chunk(r, rsize);
        replace_dv(m, r, rsize);
      }
      return chunk2mem(v);
    }
  }

  CORRUPTION_ERROR_ACTION(m);
  return 0;
}
\r
#if !ONLY_MSPACES

void* dlmalloc(size_t bytes) {
  /*
     Basic algorithm:
     If a small request (< 256 bytes minus per-chunk overhead):
       1. If one exists, use a remainderless chunk in associated smallbin.
          (Remainderless means that there are too few excess bytes to
          represent as a chunk.)
       2. If it is big enough, use the dv chunk, which is normally the
          chunk adjacent to the one used for the most recent small request.
       3. If one exists, split the smallest available chunk in a bin,
          saving remainder in dv.
       4. If it is big enough, use the top chunk.
       5. If available, get memory from system and use it
     Otherwise, for a large request:
       1. Find the smallest available binned chunk that fits, and use it
          if it is better fitting than dv chunk, splitting if necessary.
       2. If better fitting than any binned chunk, use the dv chunk.
       3. If it is big enough, use the top chunk.
       4. If request size >= mmap threshold, try to directly mmap this chunk.
       5. If available, get memory from system and use it

     The ugly goto's here ensure that postaction occurs along all paths.
  */
\r
#if USE_LOCKS
  ensure_initialization(); /* initialize in sys_alloc if not using locks */
#endif

  if (!PREACTION(gm)) {
    void* mem;
    size_t nb;
    if (bytes <= MAX_SMALL_REQUEST) {
      bindex_t idx;
      binmap_t smallbits;
      nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
      idx = small_index(nb);
      smallbits = gm->smallmap >> idx;

      if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
        mchunkptr b, p;
        idx += ~smallbits & 1;       /* Uses next bin if idx empty */
        b = smallbin_at(gm, idx);
        p = b->fd;
        assert(chunksize(p) == small_index2size(idx));
        unlink_first_small_chunk(gm, b, p, idx);
        set_inuse_and_pinuse(gm, p, small_index2size(idx));
        mem = chunk2mem(p);
        check_malloced_chunk(gm, mem, nb);
        goto postaction;
      }

      else if (nb > gm->dvsize) {
        if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
          mchunkptr b, p, r;
          size_t rsize;
          bindex_t i;
          binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
          binmap_t leastbit = least_bit(leftbits);
          compute_bit2idx(leastbit, i);
          b = smallbin_at(gm, i);
          p = b->fd;
          assert(chunksize(p) == small_index2size(i));
          unlink_first_small_chunk(gm, b, p, i);
          rsize = small_index2size(i) - nb;
          /* Fit here cannot be remainderless if 4byte sizes */
          if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
            set_inuse_and_pinuse(gm, p, small_index2size(i));
          else {
            set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
            r = chunk_plus_offset(p, nb);
            set_size_and_pinuse_of_free_chunk(r, rsize);
            replace_dv(gm, r, rsize);
          }
          mem = chunk2mem(p);
          check_malloced_chunk(gm, mem, nb);
          goto postaction;
        }

        else if (gm->treemap != 0 && (mem = tmalloc_small(gm, nb)) != 0) {
          check_malloced_chunk(gm, mem, nb);
          goto postaction;
        }
      }
    }
    else if (bytes >= MAX_REQUEST)
      nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
    else {
      nb = pad_request(bytes);
      if (gm->treemap != 0 && (mem = tmalloc_large(gm, nb)) != 0) {
        check_malloced_chunk(gm, mem, nb);
        goto postaction;
      }
    }

    if (nb <= gm->dvsize) {
      size_t rsize = gm->dvsize - nb;
      mchunkptr p = gm->dv;
      if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
        mchunkptr r = gm->dv = chunk_plus_offset(p, nb);
        gm->dvsize = rsize;
        set_size_and_pinuse_of_free_chunk(r, rsize);
        set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
      }
      else { /* exhaust dv */
        size_t dvs = gm->dvsize;
        gm->dvsize = 0;
        gm->dv = 0;
        set_inuse_and_pinuse(gm, p, dvs);
      }
      mem = chunk2mem(p);
      check_malloced_chunk(gm, mem, nb);
      goto postaction;
    }

    else if (nb < gm->topsize) { /* Split top */
      size_t rsize = gm->topsize -= nb;
      mchunkptr p = gm->top;
      mchunkptr r = gm->top = chunk_plus_offset(p, nb);
      r->head = rsize | PINUSE_BIT;
      set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
      mem = chunk2mem(p);
      check_top_chunk(gm, gm->top);
      check_malloced_chunk(gm, mem, nb);
      goto postaction;
    }

    mem = sys_alloc(gm, nb);

  postaction:
    POSTACTION(gm);
    return mem;
  }

  return 0;
}
\r
/* ---------------------------- free --------------------------- */

void dlfree(void* mem) {
  /*
     Consolidate freed chunks with preceding or succeeding bordering
     free chunks, if they exist, and then place in a bin.  Intermixed
     with special cases for top, dv, mmapped chunks, and usage errors.
  */

  if (mem != 0) {
    mchunkptr p  = mem2chunk(mem);
#if FOOTERS
    mstate fm = get_mstate_for(p);
    if (!ok_magic(fm)) {
      USAGE_ERROR_ACTION(fm, p);
      return;
    }
#else /* FOOTERS */
#define fm gm
#endif /* FOOTERS */
    if (!PREACTION(fm)) {
      check_inuse_chunk(fm, p);
      if (RTCHECK(ok_address(fm, p) && ok_inuse(p))) {
        size_t psize = chunksize(p);
        mchunkptr next = chunk_plus_offset(p, psize);
        if (!pinuse(p)) {
          size_t prevsize = p->prev_foot;
          if (is_mmapped(p)) {
            psize += prevsize + MMAP_FOOT_PAD;
            if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
              fm->footprint -= psize;
            goto postaction;
          }
          else {
            mchunkptr prev = chunk_minus_offset(p, prevsize);
            psize += prevsize;
            p = prev;
            if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
              if (p != fm->dv) {
                unlink_chunk(fm, p, prevsize);
              }
              else if ((next->head & INUSE_BITS) == INUSE_BITS) {
                fm->dvsize = psize;
                set_free_with_pinuse(p, psize, next);
                goto postaction;
              }
            }
            else
              goto erroraction;
          }
        }

        if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
          if (!cinuse(next)) {  /* consolidate forward */
            if (next == fm->top) {
              size_t tsize = fm->topsize += psize;
              fm->top = p;
              p->head = tsize | PINUSE_BIT;
              if (p == fm->dv) {
                fm->dv = 0;
                fm->dvsize = 0;
              }
              if (should_trim(fm, tsize))
                sys_trim(fm, 0);
              goto postaction;
            }
            else if (next == fm->dv) {
              size_t dsize = fm->dvsize += psize;
              fm->dv = p;
              set_size_and_pinuse_of_free_chunk(p, dsize);
              goto postaction;
            }
            else {
              size_t nsize = chunksize(next);
              psize += nsize;
              unlink_chunk(fm, next, nsize);
              set_size_and_pinuse_of_free_chunk(p, psize);
              if (p == fm->dv) {
                fm->dvsize = psize;
                goto postaction;
              }
            }
          }
          else
            set_free_with_pinuse(p, psize, next);

          if (is_small(psize)) {
            insert_small_chunk(fm, p, psize);
            check_free_chunk(fm, p);
          }
          else {
            tchunkptr tp = (tchunkptr)p;
            insert_large_chunk(fm, tp, psize);
            check_free_chunk(fm, p);
            if (--fm->release_checks == 0)
              release_unused_segments(fm);
          }
          goto postaction;
        }
      }
    erroraction:
      USAGE_ERROR_ACTION(fm, p);
    postaction:
      POSTACTION(fm);
    }
  }
#if !FOOTERS
#undef fm
#endif /* FOOTERS */
}
\r
void* dlcalloc(size_t n_elements, size_t elem_size) {
  void* mem;
  size_t req = 0;
  if (n_elements != 0) {
    req = n_elements * elem_size;
    if (((n_elements | elem_size) & ~(size_t)0xffff) &&
        (req / n_elements != elem_size))
      req = MAX_SIZE_T; /* force downstream failure on overflow */
  }
  mem = dlmalloc(req);
  if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
    memset(mem, 0, req);
  return mem;
}

#endif /* !ONLY_MSPACES */
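/*
  Sketch of the overflow guard in dlcalloc above: the cheap test
  (n | size) & ~0xffff skips the division whenever both operands fit
  in 16 bits (their product then cannot overflow a 32-bit or wider
  size_t), so the divide only runs when overflow is possible.
*/
#if 0
static int mul_would_overflow(size_t n, size_t size) {
  return n != 0 &&
         ((n | size) & ~(size_t)0xffff) &&
         (n * size) / n != size;  /* unsigned wraparound check */
}
#endif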
\r
/* ------------ Internal support for realloc, memalign, etc -------------- */
/* Try to realloc; only in-place unless can_move true */
static mchunkptr try_realloc_chunk(mstate m, mchunkptr p, size_t nb,
                                   int can_move) {
  mchunkptr newp = 0;
  size_t oldsize = chunksize(p);
  mchunkptr next = chunk_plus_offset(p, oldsize);
  if (RTCHECK(ok_address(m, p) && ok_inuse(p) &&
              ok_next(p, next) && ok_pinuse(next))) {
    if (is_mmapped(p)) {
      newp = mmap_resize(m, p, nb, can_move);
    }
    else if (oldsize >= nb) {             /* already big enough */
      size_t rsize = oldsize - nb;
      if (rsize >= MIN_CHUNK_SIZE) {      /* split off remainder */
        mchunkptr r = chunk_plus_offset(p, nb);
        set_inuse(m, p, nb);
        set_inuse(m, r, rsize);
        dispose_chunk(m, r, rsize);
      }
      newp = p;
    }
    else if (next == m->top) {  /* extend into top */
      if (oldsize + m->topsize > nb) {
        size_t newsize = oldsize + m->topsize;
        size_t newtopsize = newsize - nb;
        mchunkptr newtop = chunk_plus_offset(p, nb);
        set_inuse(m, p, nb);
        newtop->head = newtopsize |PINUSE_BIT;
        m->top = newtop;
        m->topsize = newtopsize;
        newp = p;
      }
    }
    else if (next == m->dv) { /* extend into dv */
      size_t dvs = m->dvsize;
      if (oldsize + dvs >= nb) {
        size_t dsize = oldsize + dvs - nb;
        if (dsize >= MIN_CHUNK_SIZE) {
          mchunkptr r = chunk_plus_offset(p, nb);
          mchunkptr n = chunk_plus_offset(r, dsize);
          set_inuse(m, p, nb);
          set_size_and_pinuse_of_free_chunk(r, dsize);
          clear_pinuse(n);
          m->dvsize = dsize;
          m->dv = r;
        }
        else { /* exhaust dv */
          size_t newsize = oldsize + dvs;
          set_inuse(m, p, newsize);
          m->dvsize = 0;
          m->dv = 0;
        }
        newp = p;
      }
    }
    else if (!cinuse(next)) { /* extend into next free chunk */
      size_t nextsize = chunksize(next);
      if (oldsize + nextsize >= nb) {
        size_t rsize = oldsize + nextsize - nb;
        unlink_chunk(m, next, nextsize);
        if (rsize < MIN_CHUNK_SIZE) {
          size_t newsize = oldsize + nextsize;
          set_inuse(m, p, newsize);
        }
        else {
          mchunkptr r = chunk_plus_offset(p, nb);
          set_inuse(m, p, nb);
          set_inuse(m, r, rsize);
          dispose_chunk(m, r, rsize);
        }
        newp = p;
      }
    }
  }
  else {
    USAGE_ERROR_ACTION(m, chunk2mem(p));
  }
  return newp;
}
static void* internal_memalign(mstate m, size_t alignment, size_t bytes) {
  void* mem = 0;
  if (alignment < MIN_CHUNK_SIZE) /* must be at least a minimum chunk size */
    alignment = MIN_CHUNK_SIZE;
  if ((alignment & (alignment-SIZE_T_ONE)) != 0) {/* Ensure a power of 2 */
    size_t a = MALLOC_ALIGNMENT << 1;
    while (a < alignment) a <<= 1;
    alignment = a;
  }
  if (bytes >= MAX_REQUEST - alignment) {
    if (m != 0)  { /* Test isn't needed but avoids compiler warning */
      MALLOC_FAILURE_ACTION;
    }
  }
  else {
    size_t nb = request2size(bytes);
    size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD;
    mem = internal_malloc(m, req);
    if (mem != 0) {
      mchunkptr p = mem2chunk(mem);
      if (PREACTION(m))
        return 0;
      if ((((size_t)(mem)) & (alignment - 1)) != 0) { /* misaligned */
        /*
          Find an aligned spot inside chunk.  Since we need to give
          back leading space in a chunk of at least MIN_CHUNK_SIZE, if
          the first calculation places us at a spot with less than
          MIN_CHUNK_SIZE leader, we can move to the next aligned spot.
          We've allocated enough total room so that this is always
          possible.
        */
        char* br = (char*)mem2chunk((size_t)(((size_t)((char*)mem + alignment -
                                                       SIZE_T_ONE)) &
                                             -alignment));
        char* pos = ((size_t)(br - (char*)(p)) >= MIN_CHUNK_SIZE)?
          br : br+alignment;
        mchunkptr newp = (mchunkptr)pos;
        size_t leadsize = pos - (char*)(p);
        size_t newsize = chunksize(p) - leadsize;

        if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */
          newp->prev_foot = p->prev_foot + leadsize;
          newp->head = newsize;
        }
        else { /* Otherwise, give back leader, use the rest */
          set_inuse(m, newp, newsize);
          set_inuse(m, p, leadsize);
          dispose_chunk(m, p, leadsize);
        }
        p = newp;
      }

      /* Give back spare room at the end */
      if (!is_mmapped(p)) {
        size_t size = chunksize(p);
        if (size > nb + MIN_CHUNK_SIZE) {
          size_t remainder_size = size - nb;
          mchunkptr remainder = chunk_plus_offset(p, nb);
          set_inuse(m, p, nb);
          set_inuse(m, remainder, remainder_size);
          dispose_chunk(m, remainder, remainder_size);
        }
      }

      mem = chunk2mem(p);
      assert (chunksize(p) >= nb);
      assert(((size_t)mem & (alignment - 1)) == 0);
      check_inuse_chunk(m, p);
      POSTACTION(m);
    }
  }
  return mem;
}
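/*
  A worked illustration of the over-allocation arithmetic in
  internal_memalign above (numbers chosen for illustration). Suppose
  alignment = 64 and the padded request nb = 48. Requesting
  nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD guarantees that
  rounding mem up to the next 64-byte boundary -- moving one further
  64-byte step if the leader would be smaller than MIN_CHUNK_SIZE --
  still leaves at least nb usable bytes, with the leader and any
  oversized tail returned to the pool via dispose_chunk.
*/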
/*
  Common support for independent_X routines, handling
    all of the combinations that can result.
  The opts arg has:
    bit 0 set if all elements are same size (using sizes[0])
    bit 1 set if elements should be zeroed
*/
static void** ialloc(mstate m,
                     size_t n_elements,
                     size_t* sizes,
                     int opts,
                     void* chunks[]) {

  size_t    element_size;   /* chunksize of each element, if all same */
  size_t    contents_size;  /* total size of elements */
  size_t    array_size;     /* request size of pointer array */
  void*     mem;            /* malloced aggregate space */
  mchunkptr p;              /* corresponding chunk */
  size_t    remainder_size; /* remaining bytes while splitting */
  void**    marray;         /* either "chunks" or malloced ptr array */
  mchunkptr array_chunk;    /* chunk for malloced ptr array */
  flag_t    was_enabled;    /* to disable mmap */
  size_t    size;
  size_t    i;

  ensure_initialization();
  /* compute array length, if needed */
  if (chunks != 0) {
    if (n_elements == 0)
      return chunks; /* nothing to do */
    marray = chunks;
    array_size = 0;
  }
  else {
    /* if empty req, must still return chunk representing empty array */
    if (n_elements == 0)
      return (void**)internal_malloc(m, 0);
    marray = 0;
    array_size = request2size(n_elements * (sizeof(void*)));
  }

  /* compute total element size */
  if (opts & 0x1) { /* all-same-size */
    element_size = request2size(*sizes);
    contents_size = n_elements * element_size;
  }
  else { /* add up all the sizes */
    element_size = 0;
    contents_size = 0;
    for (i = 0; i != n_elements; ++i)
      contents_size += request2size(sizes[i]);
  }

  size = contents_size + array_size;

  /*
     Allocate the aggregate chunk.  First disable direct-mmapping so
     malloc won't use it, since we would not be able to later
     free/realloc space internal to a segregated mmap region.
  */
  was_enabled = use_mmap(m);
  disable_mmap(m);
  mem = internal_malloc(m, size - CHUNK_OVERHEAD);
  if (was_enabled)
    enable_mmap(m);
  if (mem == 0)
    return 0;

  if (PREACTION(m)) return 0;
  p = mem2chunk(mem);
  remainder_size = chunksize(p);

  assert(!is_mmapped(p));

  if (opts & 0x2) {       /* optionally clear the elements */
    memset((size_t*)mem, 0, remainder_size - SIZE_T_SIZE - array_size);
  }

  /* If not provided, allocate the pointer array as final part of chunk */
  if (marray == 0) {
    size_t  array_chunk_size;
    array_chunk = chunk_plus_offset(p, contents_size);
    array_chunk_size = remainder_size - contents_size;
    marray = (void**) (chunk2mem(array_chunk));
    set_size_and_pinuse_of_inuse_chunk(m, array_chunk, array_chunk_size);
    remainder_size = contents_size;
  }

  /* split out elements */
  for (i = 0; ; ++i) {
    marray[i] = chunk2mem(p);
    if (i != n_elements-1) {
      if (element_size != 0)
        size = element_size;
      else
        size = request2size(sizes[i]);
      remainder_size -= size;
      set_size_and_pinuse_of_inuse_chunk(m, p, size);
      p = chunk_plus_offset(p, size);
    }
    else { /* the final element absorbs any overallocation slop */
      set_size_and_pinuse_of_inuse_chunk(m, p, remainder_size);
      break;
    }
  }

#if DEBUG
  if (marray != chunks) {
    /* final element must have exactly exhausted chunk */
    if (element_size != 0) {
      assert(remainder_size == element_size);
    }
    else {
      assert(remainder_size == request2size(sizes[i]));
    }
    check_inuse_chunk(m, mem2chunk(marray));
  }
  for (i = 0; i != n_elements; ++i)
    check_inuse_chunk(m, mem2chunk(marray[i]));

#endif /* DEBUG */

  POSTACTION(m);
  return marray;
}
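/*
  How the opts bits map onto the exported entry points defined below:
  dlindependent_calloc passes opts == 3 (bit 0: all elements the same
  size, with elem_size serving as a one-element sizes array; bit 1:
  zero the elements), while dlindependent_comalloc passes opts == 0
  (per-element sizes, contents not cleared). A minimal sketch, assuming
  default dl-prefixed naming:

    void* chunks[8];
    size_t sz = 32;
    dlindependent_calloc(8, 32, chunks);  // == ialloc(gm, 8, &sz, 3, chunks)
*/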
/* Try to free all pointers in the given array.
   Note: this could be made faster, by delaying consolidation,
   at the price of disabling some user integrity checks. We
   still optimize some consolidations by combining adjacent
   chunks before freeing, which will occur often if allocated
   with ialloc or the array is sorted.
*/
static size_t internal_bulk_free(mstate m, void* array[], size_t nelem) {
  size_t unfreed = 0;
  if (!PREACTION(m)) {
    void** a;
    void** fence = &(array[nelem]);
    for (a = array; a != fence; ++a) {
      void* mem = *a;
      if (mem != 0) {
        mchunkptr p = mem2chunk(mem);
        size_t psize = chunksize(p);
#if FOOTERS
        if (get_mstate_for(p) != m) {
          ++unfreed;
          continue;
        }
#endif
        check_inuse_chunk(m, p);
        *a = 0;
        if (RTCHECK(ok_address(m, p) && ok_inuse(p))) {
          void ** b = a + 1; /* try to merge with next chunk */
          mchunkptr next = next_chunk(p);
          if (b != fence && *b == chunk2mem(next)) {
            size_t newsize = chunksize(next) + psize;
            set_inuse(m, p, newsize);
            *b = chunk2mem(p);
          }
          else
            dispose_chunk(m, p, psize);
        }
        else {
          CORRUPTION_ERROR_ACTION(m);
          break;
        }
      }
    }
    if (should_trim(m, m->topsize))
      sys_trim(m, 0);
    POSTACTION(m);
  }
  return unfreed;
}
#if MALLOC_INSPECT_ALL
static void internal_inspect_all(mstate m,
                                 void(*handler)(void *start,
                                                void *end,
                                                size_t used_bytes,
                                                void* callback_arg),
                                 void* arg) {
  if (is_initialized(m)) {
    mchunkptr top = m->top;
    msegmentptr s;
    for (s = &m->seg; s != 0; s = s->next) {
      mchunkptr q = align_as_chunk(s->base);
      while (segment_holds(s, q) && q->head != FENCEPOST_HEAD) {
        mchunkptr next = next_chunk(q);
        size_t sz = chunksize(q);
        size_t used;
        void* start;
        if (is_inuse(q)) {
          used = sz - CHUNK_OVERHEAD; /* must not be mmapped */
          start = chunk2mem(q);
        }
        else {
          used = 0;
          if (is_small(sz)) {     /* offset by possible bookkeeping */
            start = (void*)((char*)q + sizeof(malloc_chunk));
          }
          else {
            start = (void*)((char*)q + sizeof(malloc_tree_chunk));
          }
        }
        if (start < (void*)next)  /* skip if all space is bookkeeping */
          handler(start, next, used, arg);
        if (q == top)
          break;
        q = next;
      }
    }
  }
}
#endif /* MALLOC_INSPECT_ALL */
/* ------------------ Exported realloc, memalign, etc -------------------- */
#if !ONLY_MSPACES

void* dlrealloc(void* oldmem, size_t bytes) {
  void* mem = 0;
  if (oldmem == 0) {
    mem = dlmalloc(bytes);
  }
  else if (bytes >= MAX_REQUEST) {
    MALLOC_FAILURE_ACTION;
  }
#ifdef REALLOC_ZERO_BYTES_FREES
  else if (bytes == 0) {
    dlfree(oldmem);
  }
#endif /* REALLOC_ZERO_BYTES_FREES */
  else {
    size_t nb = request2size(bytes);
    mchunkptr oldp = mem2chunk(oldmem);
#if ! FOOTERS
    mstate m = gm;
#else /* FOOTERS */
    mstate m = get_mstate_for(oldp);
    if (!ok_magic(m)) {
      USAGE_ERROR_ACTION(m, oldmem);
      return 0;
    }
#endif /* FOOTERS */
    if (!PREACTION(m)) {
      mchunkptr newp = try_realloc_chunk(m, oldp, nb, 1);
      POSTACTION(m);
      if (newp != 0) {
        check_inuse_chunk(m, newp);
        mem = chunk2mem(newp);
      }
      else {
        mem = internal_malloc(m, bytes);
        if (mem != 0) {
          size_t oc = chunksize(oldp) - overhead_for(oldp);
          memcpy(mem, oldmem, (oc < bytes)? oc : bytes);
          internal_free(m, oldmem);
        }
      }
    }
  }
  return mem;
}
void* dlrealloc_in_place(void* oldmem, size_t bytes) {
  void* mem = 0;
  if (oldmem != 0) {
    if (bytes >= MAX_REQUEST) {
      MALLOC_FAILURE_ACTION;
    }
    else {
      size_t nb = request2size(bytes);
      mchunkptr oldp = mem2chunk(oldmem);
#if ! FOOTERS
      mstate m = gm;
#else /* FOOTERS */
      mstate m = get_mstate_for(oldp);
      if (!ok_magic(m)) {
        USAGE_ERROR_ACTION(m, oldmem);
        return 0;
      }
#endif /* FOOTERS */
      if (!PREACTION(m)) {
        mchunkptr newp = try_realloc_chunk(m, oldp, nb, 0);
        POSTACTION(m);
        if (newp == oldp) {
          check_inuse_chunk(m, newp);
          mem = oldmem;
        }
      }
    }
  }
  return mem;
}
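/*
  A usage sketch contrasting the two entry points above (illustrative
  only): dlrealloc may move the block and copy its contents, while
  dlrealloc_in_place succeeds only when the chunk can be resized where
  it sits, returning 0 otherwise and leaving oldmem intact.

    char* p = (char*)dlmalloc(100);
    if (p != 0 && dlrealloc_in_place(p, 200) == 0) {
      // could not grow in place; fall back, possibly moving the block
      char* np = (char*)dlrealloc(p, 200);
      if (np != 0)
        p = np;
    }
*/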
void* dlmemalign(size_t alignment, size_t bytes) {
  if (alignment <= MALLOC_ALIGNMENT) {
    return dlmalloc(bytes);
  }
  return internal_memalign(gm, alignment, bytes);
}
int dlposix_memalign(void** pp, size_t alignment, size_t bytes) {
  void* mem = 0;
  if (alignment == MALLOC_ALIGNMENT)
    mem = dlmalloc(bytes);
  else {
    size_t d = alignment / sizeof(void*);
    size_t r = alignment % sizeof(void*);
    if (r != 0 || d == 0 || (d & (d-SIZE_T_ONE)) != 0)
      return EINVAL;
    else if (bytes <= MAX_REQUEST - alignment) {
      if (alignment < MIN_CHUNK_SIZE)
        alignment = MIN_CHUNK_SIZE;
      mem = internal_memalign(gm, alignment, bytes);
    }
  }
  if (mem == 0)
    return ENOMEM;
  else {
    *pp = mem;
    return 0;
  }
}
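/*
  A usage sketch for dlposix_memalign (illustrative only): alignment
  must be a power of two and a multiple of sizeof(void*), and errors
  are reported through the return value rather than via
  MALLOC_FAILURE_ACTION.

    void* mem;
    int rc = dlposix_memalign(&mem, 64, 1000);
    if (rc == 0) {
      // mem is 64-byte aligned
      dlfree(mem);
    }
    // rc is EINVAL for a bad alignment, ENOMEM on exhaustion
*/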
void* dlvalloc(size_t bytes) {
  size_t pagesz;
  ensure_initialization();
  pagesz = mparams.page_size;
  return dlmemalign(pagesz, bytes);
}

void* dlpvalloc(size_t bytes) {
  size_t pagesz;
  ensure_initialization();
  pagesz = mparams.page_size;
  return dlmemalign(pagesz,
                    (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE));
}
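/*
  The rounding expression in dlpvalloc, worked through with sample
  numbers: with pagesz = 4096, a request for 5000 bytes becomes
  (5000 + 4095) & ~4095 = 8192, so dlpvalloc returns a page-aligned
  block covering two whole pages, whereas dlvalloc would request
  exactly 5000 bytes at page alignment.
*/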
void** dlindependent_calloc(size_t n_elements, size_t elem_size,
                            void* chunks[]) {
  size_t sz = elem_size; /* serves as 1-element array */
  return ialloc(gm, n_elements, &sz, 3, chunks);
}

void** dlindependent_comalloc(size_t n_elements, size_t sizes[],
                              void* chunks[]) {
  return ialloc(gm, n_elements, sizes, 0, chunks);
}

size_t dlbulk_free(void* array[], size_t nelem) {
  return internal_bulk_free(gm, array, nelem);
}
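/*
  A usage sketch for the routines above (illustrative; struct hdr is a
  hypothetical type): allocate a header and a body in one shot, then
  release both with a single call. Because comalloc carves both chunks
  out of one aggregate region, dlbulk_free can usually coalesce them as
  it frees.

    size_t sizes[2] = { sizeof(struct hdr), 4096 };
    void*  ptrs[2];
    if (dlindependent_comalloc(2, sizes, ptrs) != 0) {
      // ... use ptrs[0] and ptrs[1] as independently freeable blocks ...
      size_t unfreed = dlbulk_free(ptrs, 2);  // 0 if all were freed
    }
*/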
#if MALLOC_INSPECT_ALL
void dlmalloc_inspect_all(void(*handler)(void *start,
                                         void *end,
                                         size_t used_bytes,
                                         void* callback_arg),
                          void* arg) {
  ensure_initialization();
  if (!PREACTION(gm)) {
    internal_inspect_all(gm, handler, arg);
    POSTACTION(gm);
  }
}
#endif /* MALLOC_INSPECT_ALL */
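/*
  An example handler for dlmalloc_inspect_all (illustrative only): the
  callback receives each traversed region's start and end addresses and
  the number of bytes in use (0 for free chunks). Since the malloc lock
  is held during the traversal, the handler should not itself call
  malloc or free.

    static void count_used(void* start, void* end, size_t used, void* arg) {
      *(size_t*)arg += used;  // tally bytes in in-use chunks
    }

    size_t total = 0;
    dlmalloc_inspect_all(count_used, &total);
*/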
int dlmalloc_trim(size_t pad) {
  int result = 0;
  ensure_initialization();
  if (!PREACTION(gm)) {
    result = sys_trim(gm, pad);
    POSTACTION(gm);
  }
  return result;
}
size_t dlmalloc_footprint(void) {
  return gm->footprint;
}

size_t dlmalloc_max_footprint(void) {
  return gm->max_footprint;
}

size_t dlmalloc_footprint_limit(void) {
  size_t maf = gm->footprint_limit;
  return maf == 0 ? MAX_SIZE_T : maf;
}

size_t dlmalloc_set_footprint_limit(size_t bytes) {
  size_t result;  /* invert sense of 0 */
  if (bytes == 0)
    result = granularity_align(1); /* Use minimal size */
  if (bytes == MAX_SIZE_T)
    result = 0;                    /* disable */
  else
    result = granularity_align(bytes);
  return gm->footprint_limit = result;
}
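/*
  A usage sketch for the footprint limit (illustrative only): requests
  are rounded up to the system granularity, 0 asks for the minimal
  limit, and MAX_SIZE_T disables limiting, which the getter reports
  back as MAX_SIZE_T.

    size_t granted = dlmalloc_set_footprint_limit((size_t)8 * 1024 * 1024);
    // system allocations that would push the footprint past 'granted'
    // now fail instead of growing the heap
    dlmalloc_set_footprint_limit(MAX_SIZE_T);  // remove the cap again
*/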
#if !NO_MALLINFO
struct mallinfo dlmallinfo(void) {
  return internal_mallinfo(gm);
}
#endif /* NO_MALLINFO */
#if !NO_MALLOC_STATS
void dlmalloc_stats() {
  internal_malloc_stats(gm);
}
#endif /* NO_MALLOC_STATS */
int dlmallopt(int param_number, int value) {
  return change_mparam(param_number, value);
}
size_t dlmalloc_usable_size(void* mem) {
  if (mem != 0) {
    mchunkptr p = mem2chunk(mem);
    if (is_inuse(p))
      return chunksize(p) - overhead_for(p);
  }
  return 0;
}

#endif /* !ONLY_MSPACES */
/* ----------------------------- user mspaces ---------------------------- */
#if MSPACES

static mstate init_user_mstate(char* tbase, size_t tsize) {
  size_t msize = pad_request(sizeof(struct malloc_state));
  mchunkptr mn;
  mchunkptr msp = align_as_chunk(tbase);
  mstate m = (mstate)(chunk2mem(msp));
  memset(m, 0, msize);
  (void)INITIAL_LOCK(&m->mutex);
  msp->head = (msize|INUSE_BITS);
  m->seg.base = m->least_addr = tbase;
  m->seg.size = m->footprint = m->max_footprint = tsize;
  m->magic = mparams.magic;
  m->release_checks = MAX_RELEASE_CHECK_RATE;
  m->mflags = mparams.default_mflags;
  m->extp = 0;
  m->exts = 0;
  disable_contiguous(m);
  init_bins(m);
  mn = next_chunk(mem2chunk(m));
  init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE);
  check_top_chunk(m, m->top);
  return m;
}
mspace create_mspace(size_t capacity, int locked) {
  mstate m = 0;
  size_t msize;
  ensure_initialization();
  msize = pad_request(sizeof(struct malloc_state));
  if (capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
    size_t rs = ((capacity == 0)? mparams.granularity :
                 (capacity + TOP_FOOT_SIZE + msize));
    size_t tsize = granularity_align(rs);
    char* tbase = (char*)(CALL_MMAP(tsize));
    if (tbase != CMFAIL) {
      m = init_user_mstate(tbase, tsize);
      m->seg.sflags = USE_MMAP_BIT;
      set_lock(m, locked);
    }
  }
  return (mspace)m;
}
mspace create_mspace_with_base(void* base, size_t capacity, int locked) {
  mstate m = 0;
  size_t msize;
  ensure_initialization();
  msize = pad_request(sizeof(struct malloc_state));
  if (capacity > msize + TOP_FOOT_SIZE &&
      capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
    m = init_user_mstate((char*)base, capacity);
    m->seg.sflags = EXTERN_BIT;
    set_lock(m, locked);
  }
  return (mspace)m;
}
int mspace_track_large_chunks(mspace msp, int enable) {
  int ret = 0;
  mstate ms = (mstate)msp;
  if (!PREACTION(ms)) {
    if (!use_mmap(ms))
      ret = 1;
    if (!enable)
      enable_mmap(ms);
    else
      disable_mmap(ms);
    POSTACTION(ms);
  }
  return ret;
}
size_t destroy_mspace(mspace msp) {
  size_t freed = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    msegmentptr sp = &ms->seg;
    (void)DESTROY_LOCK(&ms->mutex); /* destroy before unmapped */
    while (sp != 0) {
      char* base = sp->base;
      size_t size = sp->size;
      flag_t flag = sp->sflags;
      sp = sp->next;
      if ((flag & USE_MMAP_BIT) && !(flag & EXTERN_BIT) &&
          CALL_MUNMAP(base, size) == 0)
        freed += size;
    }
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return freed;
}
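/*
  A lifecycle sketch for user mspaces (illustrative only): create a
  self-contained heap, allocate from it, and tear the whole thing down
  in one call instead of freeing piecemeal.

    mspace msp = create_mspace(0, 0);   // default capacity, no locking
    if (msp != 0) {
      void* p = mspace_malloc(msp, 128);
      void* q = mspace_calloc(msp, 16, 32);
      // ... use p and q ...
      size_t released = destroy_mspace(msp);  // unmaps all its segments
    }
*/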
/*
  mspace versions of routines are near-clones of the global
  versions. This is not so nice but better than the alternatives.
*/
void* mspace_malloc(mspace msp, size_t bytes) {
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
    return 0;
  }
  if (!PREACTION(ms)) {
    void* mem;
    size_t nb;
    if (bytes <= MAX_SMALL_REQUEST) {
      bindex_t idx;
      binmap_t smallbits;
      nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
      idx = small_index(nb);
      smallbits = ms->smallmap >> idx;

      if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
        mchunkptr b, p;
        idx += ~smallbits & 1;       /* Uses next bin if idx empty */
        b = smallbin_at(ms, idx);
        p = b->fd;
        assert(chunksize(p) == small_index2size(idx));
        unlink_first_small_chunk(ms, b, p, idx);
        set_inuse_and_pinuse(ms, p, small_index2size(idx));
        mem = chunk2mem(p);
        check_malloced_chunk(ms, mem, nb);
        goto postaction;
      }

      else if (nb > ms->dvsize) {
        if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
          mchunkptr b, p, r;
          size_t rsize;
          bindex_t i;
          binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
          binmap_t leastbit = least_bit(leftbits);
          compute_bit2idx(leastbit, i);
          b = smallbin_at(ms, i);
          p = b->fd;
          assert(chunksize(p) == small_index2size(i));
          unlink_first_small_chunk(ms, b, p, i);
          rsize = small_index2size(i) - nb;
          /* Fit here cannot be remainderless if 4byte sizes */
          if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
            set_inuse_and_pinuse(ms, p, small_index2size(i));
          else {
            set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
            r = chunk_plus_offset(p, nb);
            set_size_and_pinuse_of_free_chunk(r, rsize);
            replace_dv(ms, r, rsize);
          }
          mem = chunk2mem(p);
          check_malloced_chunk(ms, mem, nb);
          goto postaction;
        }

        else if (ms->treemap != 0 && (mem = tmalloc_small(ms, nb)) != 0) {
          check_malloced_chunk(ms, mem, nb);
          goto postaction;
        }
      }
    }
    else if (bytes >= MAX_REQUEST)
      nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
    else {
      nb = pad_request(bytes);
      if (ms->treemap != 0 && (mem = tmalloc_large(ms, nb)) != 0) {
        check_malloced_chunk(ms, mem, nb);
        goto postaction;
      }
    }

    if (nb <= ms->dvsize) {
      size_t rsize = ms->dvsize - nb;
      mchunkptr p = ms->dv;
      if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
        mchunkptr r = ms->dv = chunk_plus_offset(p, nb);
        ms->dvsize = rsize;
        set_size_and_pinuse_of_free_chunk(r, rsize);
        set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
      }
      else { /* exhaust dv */
        size_t dvs = ms->dvsize;
        ms->dvsize = 0;
        ms->dv = 0;
        set_inuse_and_pinuse(ms, p, dvs);
      }
      mem = chunk2mem(p);
      check_malloced_chunk(ms, mem, nb);
      goto postaction;
    }

    else if (nb < ms->topsize) { /* Split top */
      size_t rsize = ms->topsize -= nb;
      mchunkptr p = ms->top;
      mchunkptr r = ms->top = chunk_plus_offset(p, nb);
      r->head = rsize | PINUSE_BIT;
      set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
      mem = chunk2mem(p);
      check_top_chunk(ms, ms->top);
      check_malloced_chunk(ms, mem, nb);
      goto postaction;
    }

    mem = sys_alloc(ms, nb);

  postaction:
    POSTACTION(ms);
    return mem;
  }

  return 0;
}
void mspace_free(mspace msp, void* mem) {
  if (mem != 0) {
    mchunkptr p  = mem2chunk(mem);
#if FOOTERS
    mstate fm = get_mstate_for(p);
    msp = msp; /* placate people compiling -Wunused */
#else /* FOOTERS */
    mstate fm = (mstate)msp;
#endif /* FOOTERS */
    if (!ok_magic(fm)) {
      USAGE_ERROR_ACTION(fm, p);
      return;
    }
    if (!PREACTION(fm)) {
      check_inuse_chunk(fm, p);
      if (RTCHECK(ok_address(fm, p) && ok_inuse(p))) {
        size_t psize = chunksize(p);
        mchunkptr next = chunk_plus_offset(p, psize);
        if (!pinuse(p)) {
          size_t prevsize = p->prev_foot;
          if (is_mmapped(p)) {
            psize += prevsize + MMAP_FOOT_PAD;
            if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
              fm->footprint -= psize;
            goto postaction;
          }
          else {
            mchunkptr prev = chunk_minus_offset(p, prevsize);
            psize += prevsize;
            p = prev;
            if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
              if (p != fm->dv) {
                unlink_chunk(fm, p, prevsize);
              }
              else if ((next->head & INUSE_BITS) == INUSE_BITS) {
                fm->dvsize = psize;
                set_free_with_pinuse(p, psize, next);
                goto postaction;
              }
            }
            else
              goto erroraction;
          }
        }

        if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
          if (!cinuse(next)) {  /* consolidate forward */
            if (next == fm->top) {
              size_t tsize = fm->topsize += psize;
              fm->top = p;
              p->head = tsize | PINUSE_BIT;
              if (p == fm->dv) {
                fm->dv = 0;
                fm->dvsize = 0;
              }
              if (should_trim(fm, tsize))
                sys_trim(fm, 0);
              goto postaction;
            }
            else if (next == fm->dv) {
              size_t dsize = fm->dvsize += psize;
              fm->dv = p;
              set_size_and_pinuse_of_free_chunk(p, dsize);
              goto postaction;
            }
            else {
              size_t nsize = chunksize(next);
              psize += nsize;
              unlink_chunk(fm, next, nsize);
              set_size_and_pinuse_of_free_chunk(p, psize);
              if (p == fm->dv) {
                fm->dvsize = psize;
                goto postaction;
              }
            }
          }
          else
            set_free_with_pinuse(p, psize, next);

          if (is_small(psize)) {
            insert_small_chunk(fm, p, psize);
            check_free_chunk(fm, p);
          }
          else {
            tchunkptr tp = (tchunkptr)p;
            insert_large_chunk(fm, tp, psize);
            check_free_chunk(fm, p);
            if (--fm->release_checks == 0)
              release_unused_segments(fm);
          }
          goto postaction;
        }
      }
    erroraction:
      USAGE_ERROR_ACTION(fm, p);
    postaction:
      POSTACTION(fm);
    }
  }
}
void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size) {
  void* mem;
  size_t req = 0;
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
    return 0;
  }
  if (n_elements != 0) {
    req = n_elements * elem_size;
    if (((n_elements | elem_size) & ~(size_t)0xffff) &&
        (req / n_elements != elem_size))
      req = MAX_SIZE_T; /* force downstream failure on overflow */
  }
  mem = internal_malloc(ms, req);
  if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
    memset(mem, 0, req);
  return mem;
}
void* mspace_realloc(mspace msp, void* oldmem, size_t bytes) {
  void* mem = 0;
  if (oldmem == 0) {
    mem = mspace_malloc(msp, bytes);
  }
  else if (bytes >= MAX_REQUEST) {
    MALLOC_FAILURE_ACTION;
  }
#ifdef REALLOC_ZERO_BYTES_FREES
  else if (bytes == 0) {
    mspace_free(msp, oldmem);
  }
#endif /* REALLOC_ZERO_BYTES_FREES */
  else {
    size_t nb = request2size(bytes);
    mchunkptr oldp = mem2chunk(oldmem);
#if ! FOOTERS
    mstate m = (mstate)msp;
#else /* FOOTERS */
    mstate m = get_mstate_for(oldp);
    if (!ok_magic(m)) {
      USAGE_ERROR_ACTION(m, oldmem);
      return 0;
    }
#endif /* FOOTERS */
    if (!PREACTION(m)) {
      mchunkptr newp = try_realloc_chunk(m, oldp, nb, 1);
      POSTACTION(m);
      if (newp != 0) {
        check_inuse_chunk(m, newp);
        mem = chunk2mem(newp);
      }
      else {
        mem = mspace_malloc(m, bytes);
        if (mem != 0) {
          size_t oc = chunksize(oldp) - overhead_for(oldp);
          memcpy(mem, oldmem, (oc < bytes)? oc : bytes);
          mspace_free(m, oldmem);
        }
      }
    }
  }
  return mem;
}
void* mspace_realloc_in_place(mspace msp, void* oldmem, size_t bytes) {
  void* mem = 0;
  if (oldmem != 0) {
    if (bytes >= MAX_REQUEST) {
      MALLOC_FAILURE_ACTION;
    }
    else {
      size_t nb = request2size(bytes);
      mchunkptr oldp = mem2chunk(oldmem);
#if ! FOOTERS
      mstate m = (mstate)msp;
#else /* FOOTERS */
      mstate m = get_mstate_for(oldp);
      msp = msp; /* placate people compiling -Wunused */
      if (!ok_magic(m)) {
        USAGE_ERROR_ACTION(m, oldmem);
        return 0;
      }
#endif /* FOOTERS */
      if (!PREACTION(m)) {
        mchunkptr newp = try_realloc_chunk(m, oldp, nb, 0);
        POSTACTION(m);
        if (newp == oldp) {
          check_inuse_chunk(m, newp);
          mem = oldmem;
        }
      }
    }
  }
  return mem;
}
void* mspace_memalign(mspace msp, size_t alignment, size_t bytes) {
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
    return 0;
  }
  if (alignment <= MALLOC_ALIGNMENT)
    return mspace_malloc(msp, bytes);
  return internal_memalign(ms, alignment, bytes);
}
void** mspace_independent_calloc(mspace msp, size_t n_elements,
                                 size_t elem_size, void* chunks[]) {
  size_t sz = elem_size; /* serves as 1-element array */
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
    return 0;
  }
  return ialloc(ms, n_elements, &sz, 3, chunks);
}

void** mspace_independent_comalloc(mspace msp, size_t n_elements,
                                   size_t sizes[], void* chunks[]) {
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
    return 0;
  }
  return ialloc(ms, n_elements, sizes, 0, chunks);
}
size_t mspace_bulk_free(mspace msp, void* array[], size_t nelem) {
  return internal_bulk_free((mstate)msp, array, nelem);
}
#if MALLOC_INSPECT_ALL
void mspace_inspect_all(mspace msp,
                        void(*handler)(void *start,
                                       void *end,
                                       size_t used_bytes,
                                       void* callback_arg),
                        void* arg) {
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    if (!PREACTION(ms)) {
      internal_inspect_all(ms, handler, arg);
      POSTACTION(ms);
    }
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
}
#endif /* MALLOC_INSPECT_ALL */
int mspace_trim(mspace msp, size_t pad) {
  int result = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    if (!PREACTION(ms)) {
      result = sys_trim(ms, pad);
      POSTACTION(ms);
    }
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return result;
}
#if !NO_MALLOC_STATS
void mspace_malloc_stats(mspace msp) {
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    internal_malloc_stats(ms);
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
}
#endif /* NO_MALLOC_STATS */
size_t mspace_footprint(mspace msp) {
  size_t result = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    result = ms->footprint;
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return result;
}
size_t mspace_max_footprint(mspace msp) {
  size_t result = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    result = ms->max_footprint;
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return result;
}
size_t mspace_footprint_limit(mspace msp) {
  size_t result = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    size_t maf = ms->footprint_limit;
    result = (maf == 0) ? MAX_SIZE_T : maf;
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return result;
}
size_t mspace_set_footprint_limit(mspace msp, size_t bytes) {
  size_t result = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    if (bytes == 0)
      result = granularity_align(1); /* Use minimal size */
    if (bytes == MAX_SIZE_T)
      result = 0;                    /* disable */
    else
      result = granularity_align(bytes);
    ms->footprint_limit = result;
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return result;
}
#if !NO_MALLINFO
struct mallinfo mspace_mallinfo(mspace msp) {
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return internal_mallinfo(ms);
}
#endif /* NO_MALLINFO */
size_t mspace_usable_size(void* mem) {
  if (mem != 0) {
    mchunkptr p = mem2chunk(mem);
    if (is_inuse(p))
      return chunksize(p) - overhead_for(p);
  }
  return 0;
}
int mspace_mallopt(int param_number, int value) {
  return change_mparam(param_number, value);
}

#endif /* MSPACES */
/* -------------------- Alternative MORECORE functions ------------------- */
/*
  Guidelines for creating a custom version of MORECORE:

  * For best performance, MORECORE should allocate in multiples of pagesize.
  * MORECORE may allocate more memory than requested. (Or even less,
      but this will usually result in a malloc failure.)
  * MORECORE must not allocate memory when given argument zero, but
      instead return one past the end address of memory from previous
      nonzero call.
  * For best performance, consecutive calls to MORECORE with positive
      arguments should return increasing addresses, indicating that
      space has been contiguously extended.
  * Even though consecutive calls to MORECORE need not return contiguous
      addresses, it must be OK for malloc'ed chunks to span multiple
      regions in those cases where they do happen to be contiguous.
  * MORECORE need not handle negative arguments -- it may instead
      just return MFAIL when given negative arguments.
      Negative arguments are always multiples of pagesize. MORECORE
      must not misinterpret negative args as large positive unsigned
      args. You can suppress all such calls from even occurring by defining
      MORECORE_CANNOT_TRIM.

  As an example alternative MORECORE, here is a custom allocator
  kindly contributed for pre-OSX macOS.  It uses virtually but not
  necessarily physically contiguous non-paged memory (locked in,
  present and won't get swapped out).  You can use it by uncommenting
  this section, adding some #includes, and setting up the appropriate
  defines:

  #define MORECORE osMoreCore

  There is also a shutdown routine that should somehow be called for
  cleanup upon program exit.

  #define MAX_POOL_ENTRIES 100
  #define MINIMUM_MORECORE_SIZE  (64 * 1024U)
  static int next_os_pool;
  void *our_os_pools[MAX_POOL_ENTRIES];

  void *osMoreCore(int size)
  {
    void *ptr = 0;
    static void *sbrk_top = 0;

    if (size > 0)
    {
      if (size < MINIMUM_MORECORE_SIZE)
         size = MINIMUM_MORECORE_SIZE;
      if (CurrentExecutionLevel() == kTaskLevel)
         ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
      if (ptr == 0)
      {
        return (void *) MFAIL;
      }
      // save ptrs so they can be freed during cleanup
      our_os_pools[next_os_pool] = ptr;
      next_os_pool++;
      ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
      sbrk_top = (char *) ptr + size;
      return ptr;
    }
    else if (size < 0)
    {
      // we don't currently support shrink behavior
      return (void *) MFAIL;
    }
    else
    {
      return sbrk_top;
    }
  }

  // cleanup any allocated memory pools
  // called as last thing before shutting down driver

  void osCleanupMem(void)
  {
    void **ptr;

    for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
      if (*ptr)
      {
         PoolDeallocate(*ptr);
         *ptr = 0;
      }
  }

*/
/* -----------------------------------------------------------------------
History:
    v2.8.5 Sun May 22 10:26:02 2011  Doug Lea  (dl at gee)
      * Always perform unlink checks unless INSECURE
      * Add posix_memalign.
      * Improve realloc to expand in more cases; expose realloc_in_place.
        Thanks to Peter Buhr for the suggestion.
      * Add footprint_limit, inspect_all, bulk_free. Thanks
        to Barry Hayes and others for the suggestions.
      * Internal refactorings to avoid calls while holding locks
      * Use non-reentrant locks by default. Thanks to Roland McGrath
        for the suggestion.
      * Small fixes to mspace_destroy, reset_on_error.
      * Various configuration extensions/changes. Thanks
        to all who contributed these.

    V2.8.4a Thu Apr 28 14:39:43 2011 (dl at gee.cs.oswego.edu)
      * Update Creative Commons URL

    V2.8.4 Wed May 27 09:56:23 2009  Doug Lea  (dl at gee)
      * Use zeros instead of prev foot for is_mmapped
      * Add mspace_track_large_chunks; thanks to Jean Brouwers
      * Fix set_inuse in internal_realloc; thanks to Jean Brouwers
      * Fix insufficient sys_alloc padding when using 16byte alignment
      * Fix bad error check in mspace_footprint
      * Adaptations for ptmalloc; thanks to Wolfram Gloger.
      * Reentrant spin locks; thanks to Earl Chew and others
      * Win32 improvements; thanks to Niall Douglas and Earl Chew
      * Add NO_SEGMENT_TRAVERSAL and MAX_RELEASE_CHECK_RATE options
      * Extension hook in malloc_state
      * Various small adjustments to reduce warnings on some compilers
      * Various configuration extensions/changes for more platforms. Thanks
        to all who contributed these.

    V2.8.3 Thu Sep 22 11:16:32 2005  Doug Lea  (dl at gee)
      * Add max_footprint functions
      * Ensure all appropriate literals are size_t
      * Fix conditional compilation problem for some #define settings
      * Avoid concatenating segments with the one provided
        in create_mspace_with_base
      * Rename some variables to avoid compiler shadowing warnings
      * Use explicit lock initialization.
      * Better handling of sbrk interference.
      * Simplify and fix segment insertion, trimming and mspace_destroy
      * Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x
      * Thanks especially to Dennis Flanagan for help on these.

    V2.8.2 Sun Jun 12 16:01:10 2005  Doug Lea  (dl at gee)
      * Fix memalign brace error.

    V2.8.1 Wed Jun  8 16:11:46 2005  Doug Lea  (dl at gee)
      * Fix improper #endif nesting in C++
      * Add explicit casts needed for C++

    V2.8.0 Mon May 30 14:09:02 2005  Doug Lea  (dl at gee)
      * Use trees for large bins
      * Add mspaces
      * Use segments to unify sbrk-based and mmap-based system allocation,
        removing need for emulation on most platforms without sbrk.
      * Default safety checks
      * Optional footer checks. Thanks to William Robertson for the idea.
      * Internal code refactoring
      * Incorporate suggestions and platform-specific changes.
        Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas,
        Aaron Bachmann, Emery Berger, and others.
      * Speed up non-fastbin processing enough to remove fastbins.
      * Remove useless cfree() to avoid conflicts with other apps.
      * Remove internal memcpy, memset. Compilers handle builtins better.
      * Remove some options that no one ever used and rename others.

    V2.7.2 Sat Aug 17 09:07:30 2002  Doug Lea  (dl at gee)
      * Fix malloc_state bitmap array misdeclaration

    V2.7.1 Thu Jul 25 10:58:03 2002  Doug Lea  (dl at gee)
      * Allow tuning of FIRST_SORTED_BIN_SIZE
      * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
      * Better detection and support for non-contiguousness of MORECORE.
        Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
      * Bypass most of malloc if no frees. Thanks to Emery Berger.
      * Fix freeing of old top non-contiguous chunk in sysmalloc.
      * Raised default trim and map thresholds to 256K.
      * Fix mmap-related #defines. Thanks to Lubos Lunak.
      * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
      * Branch-free bin calculation
      * Default trim and mmap thresholds now 256K.

    V2.7.0 Sun Mar 11 14:14:06 2001  Doug Lea  (dl at gee)
      * Introduce independent_comalloc and independent_calloc.
        Thanks to Michael Pachos for motivation and help.
      * Make optional .h file available
      * Allow > 2GB requests on 32bit systems.
      * new WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>.
        Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
        and Anonymous.
      * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
        helping test this.)
      * memalign: check alignment arg
      * realloc: don't try to shift chunks backwards, since this
        leads to more fragmentation in some programs and doesn't
        seem to help in any others.
      * Collect all cases in malloc requiring system memory into sysmalloc
      * Use mmap as backup to sbrk
      * Place all internal state in malloc_state
      * Introduce fastbins (although similar to 2.5.1)
      * Many minor tunings and cosmetic improvements
      * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
      * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
        Thanks to Tony E. Bennett <tbennett@nvidia.com> and others.
      * Include errno.h to support default failure action.

    V2.6.6 Sun Dec  5 07:42:19 1999  Doug Lea  (dl at gee)
      * return null for negative arguments
      * Added Several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
      * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
        (e.g. WIN32 platforms)
      * Cleanup header file inclusion for WIN32 platforms
      * Cleanup code to avoid Microsoft Visual C++ compiler complaints
      * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
        memory allocation routines
      * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
      * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
        usage of 'assert' in non-WIN32 code
      * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
        avoid infinite loop
      * Always call 'fREe()' rather than 'free()'

    V2.6.5 Wed Jun 17 15:57:31 1998  Doug Lea  (dl at gee)
      * Fixed ordering problem with boundary-stamping

    V2.6.3 Sun May 19 08:17:58 1996  Doug Lea  (dl at gee)
      * Added pvalloc, as recommended by H.J. Liu
      * Added 64bit pointer support mainly from Wolfram Gloger
      * Added anonymously donated WIN32 sbrk emulation
      * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
      * malloc_extend_top: fix mask error that caused wastage after
        foreign sbrks
      * Add linux mremap support code from HJ Liu

    V2.6.2 Tue Dec  5 06:52:55 1995  Doug Lea  (dl at gee)
      * Integrated most documentation with the code.
      * Add support for mmap, with help from
        Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
      * Use last_remainder in more cases.
      * Pack bins using idea from colin@nyx10.cs.du.edu
      * Use ordered bins instead of best-fit threshold
      * Eliminate block-local decls to simplify tracing and debugging.
      * Support another case of realloc via move into top
      * Fix error occurring when initial sbrk_base not word-aligned.
      * Rely on page size for units instead of SBRK_UNIT to
        avoid surprises about sbrk alignment conventions.
      * Add mallinfo, mallopt. Thanks to Raymond Nijssen
        (raymond@es.ele.tue.nl) for the suggestion.
      * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
      * More precautions for cases where other routines call sbrk,
        courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
      * Added macros etc., allowing use in linux libc from
        H.J. Lu (hjl@gnu.ai.mit.edu)
      * Inverted this history list

    V2.6.1 Sat Dec  2 14:10:57 1995  Doug Lea  (dl at gee)
      * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
      * Removed all preallocation code since under current scheme
        the work required to undo bad preallocations exceeds
        the work saved in good cases for most test programs.
      * No longer use return list or unconsolidated bins since
        no scheme using them consistently outperforms those that don't
        given above changes.
      * Use best fit for very large chunks to prevent some worst-cases.
      * Added some support for debugging

    V2.6.0 Sat Nov  4 07:05:23 1995  Doug Lea  (dl at gee)
      * Removed footers when chunks are in use. Thanks to
        Paul Wilson (wilson@cs.texas.edu) for the suggestion.

    V2.5.4 Wed Nov  1 07:54:51 1995  Doug Lea  (dl at gee)
      * Added malloc_trim, with help from Wolfram Gloger
        (wmglo@Dent.MED.Uni-Muenchen.DE).

    V2.5.3 Tue Apr 26 10:16:01 1994  Doug Lea  (dl at g)

    V2.5.2 Tue Apr  5 16:20:40 1994  Doug Lea  (dl at g)
      * realloc: try to expand in both directions
      * malloc: swap order of clean-bin strategy;
      * realloc: only conditionally expand backwards
      * Try not to scavenge used bins
      * Use bin counts as a guide to preallocation
      * Occasionally bin return list chunks in first scan
      * Add a few optimizations from colin@nyx10.cs.du.edu

    V2.5.1 Sat Aug 14 15:40:43 1993  Doug Lea  (dl at g)
      * faster bin computation & slightly different binning
      * merged all consolidations to one part of malloc proper
        (eliminating old malloc_find_space & malloc_clean_bin)
      * Scan 2 returns chunks (not just 1)
      * Propagate failure in realloc if malloc returns 0
      * Add stuff to allow compilation on non-ANSI compilers
        from kpv@research.att.com

    V2.5 Sat Aug  7 07:41:59 1993  Doug Lea  (dl at g.oswego.edu)
      * removed potential for odd address access in prev_chunk
      * removed dependency on getpagesize.h
      * misc cosmetics and a bit more internal documentation
      * anticosmetics: mangled names in macros to evade debugger strangeness
      * tested on sparc, hp-700, dec-mips, rs6000
        with gcc & native cc (hp, dec only) allowing
        Detlefs & Zorn comparison study (in SIGPLAN Notices.)

    Trial version Fri Aug 28 13:14:29 1992  Doug Lea  (dl at g.oswego.edu)
      * Based loosely on libg++-1.2X malloc. (It retains some of the overall
        structure of old version, but most details differ.)

*/
#ifdef TEST

#include <_PDCLIB_test.h>

int main( void )
{
    /* TODO: TEST ME */
    return TEST_RESULTS;
}

#endif