Newsgroups: comp.arch
Path: sparky!uunet!iWarp.intel.com!ichips!ichips!glew
From: glew@pdx007.intel.com (Andy Glew)
Subject: Re: Scheduling in Shared Memory Multiprocessor Systems
In-Reply-To: david@elroy.jpl.nasa.gov's message of Fri, 24 Jul 1992 18:32:13 GMT
Message-ID: <GLEW.92Jul26201316@pdx007.intel.com>
Sender: news@ichips.intel.com (News Account)
Organization: Intel Corp., Hillsboro, Oregon
References: <1992Jul15.040528.16289@access.usask.ca> <GLEW.92Jul23215649@pdx007.intel.com>
	<1992Jul24.183213.9699@elroy.jpl.nasa.gov>
Date: Mon, 27 Jul 1992 04:13:16 GMT
Lines: 26

>>Gould PN machines used a simple affinity algorithm:
>>    The run queue was divided into 32 priorities (each a single chain)
>>    - the classic BSD run queue.
>Isn't this an artifact of the VAX architecture having 32 hardware
>queues and instructions to manipulate them? Have most BSD vendors
>kept this structure or gone with something radically different?

I wasn't aware of any limitation on VAX queues.

What was convenient, however, was keeping a bit per queue indicating
occupancy: 32 queues fit exactly in one 32-bit word. That way a simple
test against 0 and an FFS (find first set) instruction could be used to
query the run queue.
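
To make the trick concrete, here is a minimal sketch in C of such a
bitmap-indexed run queue. It assumes a GCC-style __builtin_ctz for the
FFS step; the names (struct proc, runq, whichqs, setrunqueue,
chooseproc) are illustrative rather than taken from any real kernel
source, though whichqs echoes the historical BSD name.

    /* Sketch of an occupancy-bitmap run queue, 32 priority chains. */
    #include <stdint.h>
    #include <stddef.h>

    #define NQUEUES 32

    struct proc {
        struct proc *p_next;        /* next process on the same chain */
        int          p_pri;         /* priority, 0 (best) .. 31 (worst) */
    };

    static struct proc *runq[NQUEUES];  /* one chain per priority */
    static uint32_t     whichqs;        /* bit i set <=> runq[i] non-empty */

    /* Put a process on its priority chain and mark the queue occupied. */
    static void setrunqueue(struct proc *p)
    {
        p->p_next = runq[p->p_pri];
        runq[p->p_pri] = p;
        whichqs |= (uint32_t)1 << p->p_pri;
    }

    /* Pick the best runnable process, or NULL if none.  One compare
     * against 0 answers "anything runnable at all?", and one FFS
     * finds the best non-empty queue without scanning all 32 chains. */
    static struct proc *chooseproc(void)
    {
        struct proc *p;
        int q;

        if (whichqs == 0)
            return NULL;
        q = __builtin_ctz(whichqs);     /* FFS: lowest set bit = best queue */
        p = runq[q];
        runq[q] = p->p_next;
        if (runq[q] == NULL)
            whichqs &= ~((uint32_t)1 << q); /* chain drained: clear its bit */
        return p;
    }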

On one occasion we did try more than 32 queues, but the overhead wasn't
worth it.

Once 64-bit integers are common, 64 run queues are likely.
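The sketch above would carry over almost unchanged: widen whichqs to a
uint64_t, bump NQUEUES to 64, and use a 64-bit find-first-set (e.g.
GCC's __builtin_ctzll) in place of the 32-bit one.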
--

Andy Glew, glew@ichips.intel.com
Intel Corp., M/S JF1-19, 5200 NE Elam Young Pkwy,
Hillsboro, Oregon 97124-6497

This is a private posting; it does not indicate opinions or positions
of Intel Corp.

Intel Inside (tm)