Newsgroups: comp.arch
Path: sparky!uunet!elroy.jpl.nasa.gov!sdd.hp.com!cs.utexas.edu!torn!utzoo!henry
From: henry@zoo.toronto.edu (Henry Spencer)
Subject: Re: CISC Microcode (was Re: RISC Mainframe)
Message-ID: <Brsx7o.G69@zoo.toronto.edu>
Date: Wed, 22 Jul 1992 17:42:59 GMT
References: <13v85hINN2og@rodan.UU.NET> <GLEW.92Jul14234349@pdx007.intel.com> <141o6mINN10h@rodan.UU.NET> <id.Z4JR.B6I@ferranti.com> <BrM8Gv.E3r@zoo.toronto.edu> <ADAMS.92Jul21011202@PDV2.pdv2.fmr.maschinenbau.th-darmstadt.de>
Organization: U of Toronto Zoology
Lines: 41

In article <ADAMS.92Jul21011202@PDV2.pdv2.fmr.maschinenbau.th-darmstadt.de> adams@pdv2.fmr.maschinenbau.th-darmstadt.de (Adams) writes:
>> The theory behind things like bcopy boxes is that CPU cycles are slow
>> and costly compared to memory cycles. Not any more, they aren't.
>
>... when were CPU cycles slower than memory cycles?

Basically, in the old days. For example, microcode made sense when the
CPU was slow enough that using a memory to control it wouldn't slow it down.
Not any more.

>Most literature I know of claims CPUs to be ten times faster than
>memory...

*Today*, yes.

>This makes dedicated DMA-engines valuable, as they work
>parallel to CPU and issue transfers only during bus idle states,
>just grabbing the rest of memory bandwidth, but not more.

Uh, where do you find idle states? If the CPU is that much faster than
the memory, it wants all the memory bandwidth it can get, and the memory
designers sweat and strain to give it more. You get idle states only
when the CPU is *slow*, so it can't use the full memory bandwidth.
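
A back-of-the-envelope sketch, with made-up round numbers rather than
figures from this thread, of why a fast CPU leaves no idle bus cycles
for anyone else to scavenge:

    /* Illustrative arithmetic only; clock rate, reference rate, and
       DRAM cycle time below are assumptions, not measurements. */
    #include <stdio.h>

    int main(void)
    {
        double cpu_mhz        = 50.0; /* assumed CPU clock (MHz)       */
        double refs_per_cycle = 0.5;  /* assumed memory refs per cycle */
        double bytes_per_ref  = 4.0;  /* 32-bit data bus               */
        double mem_cycle_ns   = 80.0; /* assumed DRAM cycle time (ns)  */

        double demand = cpu_mhz * 1e6 * refs_per_cycle * bytes_per_ref;
        double supply = (1e9 / mem_cycle_ns) * bytes_per_ref;

        /* Prints: CPU wants 100 MB/s, memory delivers 50 MB/s */
        printf("CPU wants %.0f MB/s, memory delivers %.0f MB/s\n",
               demand / 1e6, supply / 1e6);
        return 0;
    }

With numbers like these the CPU alone can oversubscribe the memory, so
there is no leftover bandwidth for a DMA engine to quietly grab.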

Dedicated DMA engines make sense if

1. the CPU can't move data around at full memory speed
2. normal execution doesn't use the full memory bandwidth
3. interrupt overhead is too high for timely device handling
4. bad hardware design cripples CPU data movement
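
To make item 3 concrete, here is a minimal sketch in C of handing a block
move to a memory-mapped DMA engine; the register layout is entirely made
up for illustration and does not describe any real device:

    #include <stdint.h>

    struct dma_regs {                 /* hypothetical register block  */
        volatile uint32_t src;        /* source address               */
        volatile uint32_t dst;        /* destination address          */
        volatile uint32_t count;      /* bytes to transfer            */
        volatile uint32_t ctrl;       /* bit 0 = go, bit 1 = irq done */
    };

    #define DMA_GO       0x1u
    #define DMA_IRQ_DONE 0x2u

    static void dma_start_copy(struct dma_regs *dma, uint32_t src,
                               uint32_t dst, uint32_t nbytes)
    {
        dma->src   = src;
        dma->dst   = dst;
        dma->count = nbytes;
        /* One completion interrupt replaces per-word service by the CPU. */
        dma->ctrl  = DMA_GO | DMA_IRQ_DONE;
    }

The win, when there is one, is in interrupt and per-word overhead; it does
not create bandwidth the CPU could not have used itself.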

It should be easy to see that as the CPU gets faster, it *can* move data
around at full speed, and it wants all the bandwidth it can find (adding
caches helps some, but doesn't solve the problem). Item 3 can still be
an issue, particularly on old architectures with clunky interrupt handling,
or in applications with really ferocious data-movement requirements.
Item 4 is constantly with us, its latest instantiation being caches that
can't be bypassed (or can't be bypassed without a major performance hit).
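
For a sense of what "moving data at full speed" looks like from the CPU
side, a plain word-at-a-time copy loop in C, not specific to any machine:

    #include <stddef.h>

    /* On a CPU much faster than its memory, each iteration waits on the
       load and the store, so the loop runs at memory speed; a DMA engine
       moving the same words over the same bus can go no faster. */
    void copy_words(unsigned long *dst, const unsigned long *src, size_t n)
    {
        size_t i;
        for (i = 0; i < n; i++)
            dst[i] = src[i];
    }

The item-4 complaint is about what happens around such a loop: if the
cache can't be bypassed, a big copy drags every word through the cache
and evicts data the program actually wanted to keep there.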
--
There is nothing wrong with making | Henry Spencer @ U of Toronto Zoology
mistakes, but... make *new* ones. -D.Sim| henry@zoo.toronto.edu utzoo!henry