: >To make a long story short "if you get a caching
: > disk controller YOU LOSE THE OPTION of where the memory gets used."
:
: I disagree. I have an IDE caching controller on my new PC (486DX-33). At first I was dubious about the benefits of a hardware cache over the software option, but not now.....
: The main advantage is that your main CPU is not burdened with the mundane cache control functions and can get on with pushing Windows (or whatever) along.
: It is particularly noticeable to me when I'm compiling: I save and then compile my source code, and with the IDE cache the write goes on in parallel with the compile; software caching could not do this.
: A further advantage is gained if you use "write-back caching" (disk writes are cached as well as reads). If you use software caching and your machine crashes (not so unusual even under Win 3.1), bang goes your new data... This does not happen with hardware caching.
This is not necessarily true! Certainly there are cases where a software crash
can occur and the hardware controller will save you. BUT a power glitch, for
example (unless you have a UPS), could just as easily lose data on the card
as in main memory. A non-volatile hardware cache would be super.
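To make the write-back point concrete, here is a minimal sketch of a one-slot
write-back cache -- my own, in C, with a small in-memory array standing in for
the drive, not any controller's actual firmware. The key detail is the dirty
flag: until flush() runs, the only up-to-date copy of the data is the one in
the cache buffer, wherever that buffer happens to live.

/* Minimal write-back cache sketch: one cache slot, and an in-memory
 * "disk" so it can stand alone.  Callers are assumed to pass lba
 * values in the range 0..NUM_SECTORS-1. */

#include <string.h>

#define SECTOR_SIZE 512
#define NUM_SECTORS 64

static char disk[NUM_SECTORS][SECTOR_SIZE];  /* stands in for the drive */

static char cache_buf[SECTOR_SIZE];
static long cache_lba   = -1;   /* which sector the buffer holds       */
static int  cache_dirty = 0;    /* 1 = buffer is newer than disk copy  */

/* Push the buffered sector out to the "drive" if it holds unwritten data. */
static void flush(void)
{
    if (cache_dirty) {
        memcpy(disk[cache_lba], cache_buf, SECTOR_SIZE);
        cache_dirty = 0;
    }
}

/* A write lands in the cache only; the disk copy is updated later. */
void cached_write(long lba, const char *data)
{
    if (lba != cache_lba) {
        flush();                 /* evict whatever was there before */
        cache_lba = lba;
    }
    memcpy(cache_buf, data, SECTOR_SIZE);
    cache_dirty = 1;             /* disk copy is now stale */
}

void cached_read(long lba, char *data)
{
    if (lba != cache_lba) {
        flush();
        memcpy(cache_buf, disk[lba], SECTOR_SIZE);
        cache_lba = lba;
    }
    memcpy(data, cache_buf, SECTOR_SIZE);
}

If the power dies between cached_write() and flush(), that sector is simply
gone -- and that is true whether the buffer sits in main memory or on the
controller card. A battery-backed cache is what closes that hole.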
Don't confuse adding more CPU power (i.e. the CPU on the controller)
with a direct hardware-caching "advantage". If you spend X dollars
more for a caching controller, what extra CPU power could you have
gotten instead? Buying the faster CPU instead gives you power to use across a
broad range of areas, not just for moving bytes onto the disk. This is also
why I am a fan of SCSI with bus-mastering. Here the CPU doesn't (or may
not) have to move the bytes at all, since the SCSI card does all the
work. Moreover, one can have several seeks/writes/reads all in progress
at once.
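For what it's worth, here is the shape of the bus-mastering argument as code:
a hypothetical controller_submit() hands a request to the card, the card DMAs
the bytes itself, and the CPU goes back to real work until it actually needs
the data. Both routines are stubbed here purely so the sketch compiles; a real
driver would sleep on an interrupt rather than spin.

/* Conceptual sketch of overlapped, bus-mastered I/O.  The controller
 * and "other work" routines are hypothetical stubs. */

struct io_request {
    long  lba;            /* sector to transfer                    */
    char *buffer;         /* memory the controller DMAs to or from */
    int   is_write;       /* 1 = write, 0 = read                   */
    volatile int done;    /* set by the controller when finished   */
};

/* Stub: a real card would queue the request and complete it later. */
static void controller_submit(struct io_request *req) { req->done = 1; }

/* Stub: e.g. keep compiling while the transfers run. */
static void do_other_work(void) { }

int main(void)
{
    static char buf_a[512], buf_b[512];
    struct io_request a = { 100L,  buf_a, 0, 0 };
    struct io_request b = { 5000L, buf_b, 0, 0 };

    /* Hand both requests to the card; it can overlap the two seeks. */
    controller_submit(&a);
    controller_submit(&b);

    /* The CPU is free here instead of shuffling bytes itself,
     * as it would have to with a plain IDE transfer. */
    do_other_work();

    /* Only wait when the data is actually needed. */
    while (!a.done || !b.done)
        ;

    return 0;
}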
In short, probably the most cost-effective approach is to add memory and CPU
speed with straight IDE, and then go SCSI if you need more performance.