- Newsgroups: comp.sys.sgi
- Path: sparky!uunet!stanford.edu!CSD-NewsHost.Stanford.EDU!news
- From: philip@ziggy.stanford.edu (Philip Machanick)
- Subject: Re: finding page size?
- Message-ID: <1992Aug18.213842.6074@CSD-NewsHost.Stanford.EDU>
- Sender: news@CSD-NewsHost.Stanford.EDU
- Reply-To: philip@ziggy.stanford.edu (Philip Machanick)
- Organization: CS Department, Stanford University, California, USA
- References: <l92h9dINNeqj@spim.mips.com>
- Date: Tue, 18 Aug 1992 21:38:42 GMT
- Lines: 54
-
- In article <l92h9dINNeqj@spim.mips.com> mash@mips.com (John Mashey) writes:
- > In article <onughl0@zuni.esd.sgi.com> olson@anchor.esd.sgi.com (Dave Olson) writes:
- > To amplify this a little more, and perhaps stir up some discussion
- > about what people should be doing later:
- > a) As Dave says, R4000s support multiple page sizes, and you certainly
- > don't want to build 4K into your programs.
- > b) This means, not only might different programs have different
- > sized pages ... but different parts of the same program might
- > have different-sized pages ... and in fact, there are ways of
- > using the R4000 TLB where the page size of a given chunk of
- > virtual memory changes around during execution....
- >
- > Hence, this leads to the question: if you are using the result of getpagesize,
- > what are you expecting it to tell you, and how are you using it?
- > This might provide some guidance about what getpagesize ought to do
- > in an OS that really uses multiple sizes at the same time.
- > Note that getpagesize explicitly says its return is not necessarily
- > the size of underlying hardware pages....
- >
- > In a multiple-page size environment, there are at least the following
- > potential values that might make sense:
- > a) 4K: the minimum possible size of allocation.
- > b) 8K: 2 of those, since the R4000's TLB conveniently pairs pages,
- > and there might be some advantage under some schemes in allocating 8K chunks.
- > c) XK, where X is the *largest* size that the OS is willing to put in the TLB,
- > i.e., so that asking for this much space could be a good hint to the OS to
- > allocate this much physical memory in one chunk and be able to map it
- > with a single TLB entry.
-
- My problem is with object-oriented (C++) code where objects are allocated
- randomly. For a reasonably long time the same objects are in some sense close
- together (locality), but because they are randomly allocated they end up
- spread over a large number of pages (probably one page per object), which
- probably results in a high number of TLB misses. I have implemented a
- temporary workaround by allocating related objects out of the same page
- (which I carve up into a free list). When an object moves to a different part
- of the data structures, I reallocate it out of the pages its new "neighbours"
- are allocated out of.
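[The carve-a-page-into-a-free-list workaround can be sketched roughly as below. Names like PagePool are mine, for illustration only; a real allocator would handle multiple pages, page alignment, and per-neighbourhood pools, and slots must be at least pointer-sized:]

```cpp
#include <unistd.h>   // sysconf, _SC_PAGESIZE
#include <cstdlib>
#include <cstddef>

// Fixed-size pool: carves one page-sized chunk into a free list of
// equal slots, so related objects land close together in memory.
struct PagePool {
    char* base;
    void* free_list;   // singly linked list threaded through free slots

    explicit PagePool(size_t slot_size) {
        size_t page = (size_t)sysconf(_SC_PAGESIZE);
        // malloc for simplicity; a page-aligned allocation would be
        // needed to guarantee the chunk spans exactly one page.
        base = (char*)std::malloc(page);
        free_list = nullptr;
        // Thread every slot onto the free list.
        for (size_t off = 0; off + slot_size <= page; off += slot_size) {
            void* slot = base + off;
            *(void**)slot = free_list;
            free_list = slot;
        }
    }
    ~PagePool() { std::free(base); }

    void* alloc() {                 // pop a slot, or null if exhausted
        void* slot = free_list;
        if (slot) free_list = *(void**)slot;
        return slot;
    }
    void release(void* slot) {      // push the slot back onto the list
        *(void**)slot = free_list;
        free_list = slot;
    }
};
```

[When an object migrates, it is released here and re-allocated from the pool its new neighbours live in.]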
-
- If future architectures had a much lower TLB miss cost, this problem would go
- away. Much larger page sizes could help by reducing the probability that data
- allocated at different times ends up on different pages, and virtually
- addressed caches would also help. Since the data fits in RAM, it may even
- make sense for the page size to equal the total RAM requirement. I haven't
- finished measurement and testing, but the performance impact of TLB misses in
- this case appears to be about 20%.
-
- [Not really relevant but the program is a particle-based wind tunnel
- simulation.]
- --
- Philip Machanick
- philip@pescadero.stanford.edu
-