Newsgroups: comp.arch
Path: sparky!uunet!UB.com!pacbell.com!ames!agate!spool.mu.edu!yale.edu!ira.uka.de!math.fu-berlin.de!mailgzrz.TU-Berlin.DE!news.netmbx.de!Germany.EU.net!mcsun!news.funet.fi!cs.joensuu.fi!jahonen
From: jahonen@cs.joensuu.fi (Jarmo Ahonen)
Subject: Re: How many PC's make an Amdahl mainframe
Message-ID: <1993Jan27.064625.1647@cs.joensuu.fi>
Organization: University of Joensuu
References: <1k46ioINNijv@fido.asd.sgi.com> <1993Jan26.215541.9957@adobe.com> <1993Jan27.024351.17902@news.arc.nasa.gov>
Date: Wed, 27 Jan 1993 06:46:25 GMT
Lines: 55

lamaster@pioneer.arc.nasa.gov (Hugh LaMaster) writes:

>In article <1993Jan26.215541.9957@adobe.com>, zstern@adobe.com (Zalman Stern) writes:
>|> In article <1k46ioINNijv@fido.asd.sgi.com> gints@prophet.esd.sgi.com (Gints
>|> Comments like this indicate that the big iron boys are pretty desperate.
>|> They are fighting the dynamics of the market. There are *relatively* few
>|> problems that require "mainframe power." Over time mainframes must either
>|> become cheaper or more powerful. More powerful is limited because there are
>|> fewer and fewer problems that require that kind of power and it becomes
>|> harder and harder to make the machine more powerful. Cheaper is a problem
>|> because the margins go down and a lot of a mainframe's costs are not
>|> hardware oriented. (I.e. service, paying for the space and other central
>|> facilities required.)

[stuff deleted]

>The reason I am posting this is that I think it is a big mistake
>to underestimate the importance of big iron systems. IMHO, most large
>data centers will continue to use large systems to provide the cheapest
>deliverable "bulk" CPU cycles, to manage large and growing data
>requirements, and to provide the highest possible transaction rates.
>As well as large engineering and scientific simulations.
>What has already disappeared is the use of "mainframes" for relatively
>mundane word processing, etc. But, the data management requirements
>are still there. Today, big iron systems have memory bandwidths of
>1-200 GBytes/sec, raw IO bandwidths of 1000 MBytes/sec., I/O
>throughputs of around 100-200 MBytes/sec and up to 2000 integer-scalar
>MIPS. While this is maybe only ~200 486 systems, there is an enormous
>difference in the capability between such systems. A 200:1 difference in
>power totally changes the capability of a system, as was pointed out
>recently at Supercomputing '92. I could go on, but, let's not confuse
>the financial troubles of a few big iron manufacturers with the
>disappearance of their market.

That is true. I have run some simulations on CONVEX machines, workstations,
and PC's. I have an interesting simulation which would take about 3 months
on my PC, one month on the fastest workstation, and two weeks on a CONVEX.
(The code vectorizes pretty well and uses big datasets, as you may guess.
The PC and workstation times are estimates from partial simulations.)
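
Just to put those numbers side by side: taking the run times above at face
value (remembering that the PC and workstation figures are only estimates
from partial runs), the relative speedups work out roughly as this little
C sketch computes:

/* Back-of-the-envelope speedups from the run times quoted above.
 * All three figures are approximate; the PC and workstation times
 * are themselves estimates from partial simulations.
 */
#include <stdio.h>

int main(void)
{
    double pc_weeks          = 13.0;  /* ~3 months   */
    double workstation_weeks =  4.3;  /* ~1 month    */
    double convex_weeks      =  2.0;  /* a fortnight */

    printf("workstation vs. PC:     %.1fx\n", pc_weeks / workstation_weeks);
    printf("CONVEX vs. PC:          %.1fx\n", pc_weeks / convex_weeks);
    printf("CONVEX vs. workstation: %.1fx\n", workstation_weeks / convex_weeks);
    return 0;
}

That puts the CONVEX at roughly 6.5 times the PC, which is where the
"6+ times faster" figure below comes from.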

The PC time is completely unusable; even if I divided the problem among one
hundred PC's, most of the time would go into communication traffic between
those PC's. The workstation time is almost usable, and the fortnight on the
CONVEX is clearly useful in practice. Not every problem can be distributed
across a number of PC's, because the dependencies would require so much
communication that most of the cycles would be wasted on it.
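
To see why, here is a toy model (my own illustration; the constants are
made up, not measured) of what happens when a tightly coupled simulation
is spread over N PC's that have to exchange boundary data every step:

/* Toy model: per-step time when a simulation is split across n PC's.
 * t_comp is the compute time per step on a single PC; t_comm is the
 * extra communication time each additional PC adds per step.
 * The constants are invented purely for illustration.
 */
#include <stdio.h>

int main(void)
{
    double t_comp = 100.0;  /* seconds of computation per step on one PC */
    double t_comm =   2.0;  /* seconds of communication per step, per extra PC */
    int n;

    for (n = 1; n <= 100; n *= 10) {
        double t_step  = t_comp / n + t_comm * (n - 1);
        double speedup = t_comp / t_step;
        printf("%3d PC's: %6.1f s/step, speedup %.1fx\n", n, t_step, speedup);
    }
    return 0;
}

With those (invented) numbers, ten PC's give less than a 4x speedup and a
hundred PC's are actually slower than one, because the communication term
keeps growing while each machine's share of the computation shrinks.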

I'm sure that there will always be a need for machines that are 6+ times
faster than the fastest PC. Well, 100+ is always better :-).

----------------------------------------------------------------------
Jarmo J. Ahonen
Computing Centre, Lappeenranta University of Technology, P.O.Box 20,
SF-53851 Lappeenranta, Finland. email: Jarmo.Ahonen@lut.fi