Path: sparky!uunet!zaphod.mps.ohio-state.edu!darwin.sura.net!spool.mu.edu!uwm.edu!psuvax1!psuvm!cunyvm!afecu
Organization: City University of New York/ University Computer Center
Date: Tuesday, 29 Dec 1992 16:56:29 EST
From: Aron Eisenpress <AFECU@CUNYVM.BITNET>
Message-ID: <92364.165629AFECU@CUNYVM.BITNET>
Newsgroups: bit.listserv.ibm-main
Subject: Re: Logical Partitions/Physical Partitions
References: <9212260553.AA11648@brazos.is.rice.edu>
Lines: 29

In article <9212260553.AA11648@brazos.is.rice.edu>, David E Boyes
<dboyes@IS.RICE.EDU> says:
>
> [...]
>
>If you're going to run serious production guests under VM, you do
>it with dedicated devices, not full pack minidisks. By doing so,
>you retain the advantages of both systems in error recovery and
>system management. You also do it with VM/XA or VM/ESA -- not HPO
>or SP. Most of the things that the MVSers have pointed out were
>true under HPO or SP, but have changed radically since VM/XA, and
>particularly since the introduction of VM/ESA.

Ah, but if you dedicate the disks then you can't also get to them from a
test MVS guest! (Well, you can if you have an additional path and define
that as a different device, but that doesn't save you any hardware over the
LPAR setup.) And if you don't dedicate the disks you will see a performance
degradation, even with full-pack minidisks.

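To make the tradeoff concrete, here is a rough sketch of the two directory
approaches. The guest names, device numbers, and volser are made up, and
the exact statement syntax varies by VM release, so take it as an
illustration rather than a working directory:

  Dedicated device -- the real 3380 at 0150 belongs to the production
  guest alone, so a test guest can't reach it:

      USER MVSPROD XXXXXXXX 64M 128M G
        DEDICATE 0150 0150

  Full-pack minidisk -- cylinder 0 through END of the same volume, owned
  by the production guest and linked by the test guest (MWV for virtual
  reserve/release):

      USER MVSPROD XXXXXXXX 64M 128M G
        MDISK 0150 3380 000 END MVS001 MWV

      USER MVSTEST XXXXXXXX 32M 64M G
        LINK MVSPROD 0150 0150 MW

The second form is the one that lets the test guest at the packs, and it
is also the non-dedicated case where the performance degradation shows up.
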
I agree with Leonard and Michael on this issue. We ran an MVS production
guest under VM/XA and have since partitioned that machine to run MVS native.
Aside from the disk sharing issues, the additional level of complexity for
the operations folks really hurt us. And we already had the necessary control
unit paths, 3088, etc., since we've had multiple MVS mainframes for a long
time (and we needed them back when we ran MVS as a PMA guest under VM/HPO).
-------

-- Aron Eisenpress, City U of NY
(AFECU@CUNYVM.BITNET, or AFECU@CUNYVM.CUNY.EDU on the Internet)
