- Path: sparky!uunet!europa.asd.contel.com!darwin.sura.net!wupost!zaphod.mps.ohio-state.edu!moe.ksu.ksu.edu!kuhub.cc.ukans.edu!husc-news.harvard.edu!husc10.harvard.edu!joltes
- Newsgroups: vmsnet.internals
- Subject: Re: VMS tuning for a vaxcluster (LAVC)
- Message-ID: <1992Aug31.105416.15245@husc3.harvard.edu>
- From: joltes@husc10.harvard.edu (Richard Joltes)
- Date: 31 Aug 92 10:54:14 EDT
- References: <9208281916.AA02286@sndsu1.sinet.slb.com>
- Organization: Harvard University Science Center
- Nntp-Posting-Host: husc10.harvard.edu
- Lines: 57
-
- <brydon@dsny25.sinet.slb.com> writes:
-
- [much useful stuff deleted]
-
- >Okay, but what about the situation of one satellite vaxstation on a lightly
- >loaded network? I think page read I/O's should be considered separately from
- >page write I/O's. Does this satellite system do page read I/O's any faster
- >over the ethernet versus to local disk? [I would imagine that page write
- >I/O's would be dependent entirely on the relative disk speeds of the local
- >versus boot node disk speed.]
-
- Nope. The local disk's still faster. When a friend and I did VMS tuning
- studies for <a former employer> we found that even on an unloaded net the
- local disk was faster. Mind you, this was using VAXstation IIs with between
- 5 and 16MB of memory and RD54 disks. Mileage on newer systems may vary, but
- I'd bet that the local disk still beats Ethernet traffic.
-
- Using a VAXserver 3600 (with RA81 system/application disk) we saw around a
- 15% loss in performance using the same VS II when we dropped the local page
- disk. The systems had been AUTOGENed and adjusted to reflect the changes.
-
- >About the bandwidth issue: Indeed the bandwidth of ethernet versus disk
- >accesses are about the same but on a disk access you have some relatively
- >expensive things going on before the data transfer: seek time, latency,
- >controller and CPU issues. This kind of overhead of course goes on with
- >ethernet but at microsecond speeds rather than millisecond.
-
- Sure, but what about all the processing that Ethernet requires? Your I/O
- request goes through multiple layers of s/w during its trip from the
- workstation to the server, then back through the same layers on the return.
- How does that compare with a local disk I/O? What happens if the Ethernet
- burps (fragged packet, collision, whatever)? There are more variables here,
- and more potential for error. The studies that we conducted all pointed to
- higher efficiency with a local disk, no matter what the Ethernet load. This
- may indicate internal problems with Ethernet transfers or with DEC's LAVC
- code, or it may just be that local work is simply faster.
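To see why the local disk tends to win even on a quiet wire, here's a back-of-envelope sketch. Every number below is an assumed, era-plausible figure for illustration only (not a measurement from our studies): the key point is that a remote page read still pays for a disk I/O on the server, *plus* the protocol stack and the wire transfer on top of it.

```python
# Back-of-envelope: one 8 KB page read, local disk vs. over 10 Mbit Ethernet.
# All constants are assumed illustrative values, not measured figures.

DISK_SEEK_MS = 25.0        # assumed average seek, RD54-class drive
DISK_LATENCY_MS = 8.3      # assumed rotational latency (half turn at 3600 rpm)
DISK_XFER_MBPS = 0.6       # assumed sustained disk transfer rate, MB/s

NET_STACK_MS = 2.0         # assumed round-trip driver/protocol overhead
NET_XFER_MBPS = 10.0 / 8   # 10 Mbit Ethernet = 1.25 MB/s raw wire rate

PAGE_KB = 8.0

def xfer_ms(kb, mbps):
    """Milliseconds to move `kb` kilobytes at `mbps` megabytes/second."""
    return kb / 1024.0 / mbps * 1000.0

# Local read: seek + rotational latency + transfer off the platter.
local_ms = DISK_SEEK_MS + DISK_LATENCY_MS + xfer_ms(PAGE_KB, DISK_XFER_MBPS)

# Remote read: the server's disk does the same work, and you add the
# network stack overhead plus the time on the wire.
remote_ms = local_ms + NET_STACK_MS + xfer_ms(PAGE_KB, NET_XFER_MBPS)

print(f"local page read : {local_ms:.1f} ms")
print(f"remote page read: {remote_ms:.1f} ms")
```

Under these assumptions the remote read comes out roughly 15-20% slower, which is at least consistent in magnitude with the loss we measured; the exact figures depend entirely on the drives and the stack involved.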
-
- My tuning philosophy has always been to spread out the I/O channels and
- balance the load across them. Adding the page/swap load to the ethernet
- violates that rule, so I don't do it. If you're doing X on your workstations
- you're already loading up the 10Mbit ethernet bandwidth (admittedly you'd most
- likely have to run many apps across the net to peak things out too
- badly), and I'd rather not add any unnecessary disk accesses to that traffic.
-
- Plan for performance down the road. Any LAVC will probably grow over time, and
- it's easier to configure a small cluster correctly at the outset than to go
- back and install local disks after growth degrades performance. RZ23s are
- cheap, and make reasonable local disks. RD53s are practically free nowadays,
- too. The gain in performance is worth the initial cost.
-
- --------------------------------------------------------------------------------
- Dick Joltes joltes@husc.harvard.edu
- Hardware & Networking Manager, Computer Services joltes@husc.bitnet
- Harvard University Science Center
-
- "Mind you, not as bad as the night Archie Pettigrew ate some
- sheep's testicles for a bet...God, that bloody sheep kicked him..."
-