- Newsgroups: comp.windows.x
- Path: sparky!uunet!orca!orca!pmartz
- From: pmartz@dsd.es.com (Paul Martz)
- Subject: Re: FLAME, FLAME ON X!!!
- Message-ID: <1992Nov20.224644.10664@dsd.es.com>
- Sender: usenet@dsd.es.com
- Nntp-Posting-Host: bambam-gw
- Reply-To: pmartz@dsd.es.com (Paul Martz)
- Organization: Evans & Sutherland Computer Corp., Salt Lake City, UT
- References: <RJC.92Nov13144522@daiches.cogsci.ed.ac.uk> <1992Nov16.162457.2923@osf.org> <1ebs7rINNp9k@armory.centerline.com> <1992Nov19.040434.12181@osf.org> <1ejeh8INN1jq@armory.centerline.com>
- Date: Fri, 20 Nov 92 22:46:44 GMT
- Lines: 29
-
- In article <1ejeh8INN1jq@armory.centerline.com>, jimf@centerline.com (Jim Frost) writes:
- > Look at it this way -- if you do all allocations in a power of two
- > then to have a list for every possible different sized allocation you
- > need only thirty-two lists (actually 28 or 29 depending on the minimum
- > allocation size, or even fewer if you consider that you can't really
- > allocate a 4 gig area). That's real convenient -- you can basically
- > find an empty space (if there is one) immediately, making your
- > allocator REALLY fast. It's my assumption that that's what the GNU
- > malloc does, although I've never looked at the code.
- >
- > The downside is that it's wasteful for allocations that don't fit
- > well. A 65-byte allocation will return 128 bytes. No big deal, at
- > least as long as you're not doing a lot of those allocations. As the
- > sizes get larger, however, it becomes a really big deal -- a 1.1
- > megabyte allocation will return 2 megabytes, for instance. Once you
- > have a few of those you're starting to really waste memory.
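-
- For the record, here's roughly the shape I take that power-of-two
- scheme to have (a sketch of my own, not GNU malloc's actual code,
- and the names are invented):
-
-     /* One free list per power-of-two size class.  Illustrative only. */
-     #include <stddef.h>
-     #include <stdlib.h>
-
-     #define MIN_SHIFT 4                  /* smallest chunk: 16 bytes */
-     #define NBINS     28                 /* 2^4 through 2^31, per the
-                                           * count above */
-
-     struct chunk { struct chunk *next; };
-
-     static struct chunk *bins[NBINS];    /* one list per size class */
-
-     /* Round the request up to the next power of two, pick its bin. */
-     static int bin_index(size_t size)
-     {
-         int shift = MIN_SHIFT;
-         while (((size_t)1 << shift) < size)
-             shift++;
-         return shift - MIN_SHIFT;
-     }
-
-     static void *pow2_alloc(size_t size)
-     {
-         int i = bin_index(size);
-         if (bins[i]) {                   /* hit: O(1), no searching */
-             struct chunk *c = bins[i];
-             bins[i] = c->next;
-             return c;
-         }
-         /* Miss: carve a fresh chunk of the full bin size (malloc
-          * stands in for sbrk here).  A 65-byte request gets 128. */
-         return malloc((size_t)1 << (i + MIN_SHIFT));
-     }
-
-     static void pow2_free(void *p, size_t size)
-     {
-         int i = bin_index(size);         /* caller tracks the size */
-         struct chunk *c = p;
-         c->next = bins[i];               /* push onto its bin's list */
-         bins[i] = c;
-     }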
-
- Everything's a trade-off. Why get a 2meg chunk for a 1.1meg request?
- Go get 1.1meg instead, and stick it in a list for chunks sized between
- 1 and 2 megs. Then finding a free chunk of the correct size becomes a
- matter of searching through the list of similarly sized chunks to find
- one big enough to fulfill the request. Slightly less time efficient
- than the algorithm you outline above, but it licks your memory
- efficiency issue. Maybe not quite as good at handling fragmentation.
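-
- In code, something like this (again just a sketch with made-up names;
- a real allocator would also split and coalesce chunks):
-
-     /* Exact-size chunks, binned by size range: bin i holds free
-      * chunks with 2^i <= size < 2^(i+1).  Illustrative only. */
-     #include <stddef.h>
-     #include <stdlib.h>
-
-     #define NBINS 32
-
-     struct chunk {
-         size_t size;                 /* exact size as requested */
-         struct chunk *next;
-     };
-
-     static struct chunk *bins[NBINS];
-
-     static int range_index(size_t size)
-     {
-         int i = 0;
-         while (i < NBINS - 1 && ((size_t)1 << (i + 1)) <= size)
-             i++;
-         return i;                    /* 1.1 meg lands in the 1-2 meg bin */
-     }
-
-     static void *range_alloc(size_t size)
-     {
-         int i = range_index(size);
-         struct chunk **pp;
-         struct chunk *c;
-
-         /* Search only the one list of similarly sized chunks,
-          * first fit. */
-         for (pp = &bins[i]; *pp; pp = &(*pp)->next) {
-             if ((*pp)->size >= size) {
-                 c = *pp;
-                 *pp = c->next;       /* unlink it */
-                 return c + 1;        /* payload follows the header */
-             }
-         }
-         /* Nothing big enough on the list: take exactly what was
-          * asked for, so 1.1 megs costs 1.1 megs plus a header,
-          * not 2 megs. */
-         c = malloc(sizeof(struct chunk) + size);
-         if (!c)
-             return NULL;
-         c->size = size;
-         return c + 1;
-     }
-
-     static void range_free(void *p)
-     {
-         struct chunk *c = (struct chunk *)p - 1;
-         int i = range_index(c->size);
-         c->next = bins[i];           /* file under its size range */
-         bins[i] = c;
-     }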
- --
-
- -paul pmartz@dsd.es.com
- Evans & Sutherland
-