- Newsgroups: comp.os.os2.advocacy
- Path: sparky!uunet!grebyn!daily!mfraioli
- From: mfraioli@grebyn.com (Marc Fraioli)
- Subject: Re: OS/2 bigot meets NT....
- Message-ID: <1992Dec27.172250.4801@grebyn.com>
- Organization: Grebyn Timesharing
- References: <1992Dec25.232450.19632@actrix.gen.nz> <1992Dec27.011721.23160@unvax.union.edu> <1992Dec26.224701.5914@ais.com>
- Date: Sun, 27 Dec 1992 17:22:50 GMT
- Lines: 57
-
- In article <1992Dec26.224701.5914@ais.com> bruce@ais.com (Bruce C. Wright) writes:
- >In article <1992Dec27.011721.23160@unvax.union.edu>, pallantj@unvax.union.edu (Joseph C. Pallante) writes:
- >> I have a question....
- >>
- >> All this debate is over: Many, Many crashes because a PC has only
- >> 8 megs of RAM.
- >>
- >> My question: Why does NT crash (or OS/2 for that matter) because of a
- >> lack of RAM? I would expect it to be slow because the OS would have
- >> to manipulate the memory, do some swapping, etc... But, if it follows
- >> all the rules, it should, theoretically, not crash. It should
- >> just take longer to do its job, due to the overhead of running on
- >> a machine with little memory.
- >
- >Not true.
- >
- >You are correct that the OS should not have a `hard' crash (as in a
- >memory protect violation or the like) in a low-memory situation, and
- >that if it does it indicates a bug in the OS.
- >
- >But a `soft' crash, where the system just seems to `go away' indefinitely,
- >is still possible. The problem is that when one or another resource is
- >overcommitted, you can often reach a state where, in order for the OS
- >to proceed, _every_ _process_ on the system requires something that is
- >already owned by another process. This is known as a deadlock or a
- >deadly embrace. In general, the only ways to avoid this in all cases
- >either require a large excess of resources (with the system enforcing
- >that no program can exceed its quota of resources and that the sum of
- >all the quotas isn't more than the total on the system) or greatly
- >increase the complexity (and CPU requirements) of using the API. These
- >are usually not considered practical solutions
- >because they greatly increase the amount of resources required on the
- >system to avoid a problem that's pretty rare anyway except in very
- >overcommitted situations; the problem rarely happens in `slightly'
- >overcommitted systems.
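- [Editor's note: the deadly embrace described above can be sketched in a few
- lines of Python -- a hypothetical illustration, not from the original post.
- Two threads each hold one resource and then wait for the other's, so
- neither can ever proceed; timeouts and barriers keep the demo from hanging.]

```python
import threading

lock_a = threading.Lock()      # stands in for one overcommitted resource
lock_b = threading.Lock()      # stands in for another

start = threading.Barrier(2)   # both threads hold their first lock here
done = threading.Barrier(2)    # neither releases until both have tried
results = {}

def worker(name, first, second):
    with first:
        start.wait()                        # guarantee the embrace is formed
        got = second.acquire(timeout=0.5)   # try to take the other resource
        if got:
            second.release()
        results[name] = got
        done.wait()                         # hold our lock until both attempts end

t1 = threading.Thread(target=worker, args=("p1", lock_a, lock_b))
t2 = threading.Thread(target=worker, args=("p2", lock_b, lock_a))
t1.start(); t2.start()
t1.join(); t2.join()

# Neither thread could take its second lock: each is waiting on the other.
print(results)
```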
- >
- >Even on a virtual memory system, the system still needs a certain amount
- >of real memory in order to run itself, map I/O buffers, and keep track of
- >virtual memory, etc. If the system is sufficiently overcommitted on memory,
- >it may not be able to do all these things with available memory and in
- >fact may encounter situations where every process within the system is
- >waiting for memory currently in use by another process to be freed.
- >
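- [Editor's note: the quota-based avoidance Bruce mentions amounts to a
- Banker's-algorithm-style safety check -- grant resources only if some order
- still exists in which every process can finish. A rough sketch, hypothetical
- and not from the original post:]

```python
def is_safe(available, allocation, need):
    """Return True if every process can finish in some order.

    available:  free units of each resource type
    allocation: units each process currently holds
    need:       further units each process may still request
    """
    work = list(available)
    finished = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i, (alloc, nd) in enumerate(zip(allocation, need)):
            if not finished[i] and all(n <= w for n, w in zip(nd, work)):
                # Process i can run to completion and return its resources.
                for j, a in enumerate(alloc):
                    work[j] += a
                finished[i] = True
                progress = True
    return all(finished)

# A safe state: some completion order exists.
safe = is_safe([3, 3, 2],
               [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]],
               [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]])

# An unsafe state: nothing free, both processes still need more.
unsafe = is_safe([0], [[1], [1]], [[1], [1]])
print(safe, unsafe)
```

- [This is exactly why the check is considered impractical for general APIs:
- every process must declare its maximum needs up front, and the kernel must
- re-run the check on every request.]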
- I thought that for this reason, the kernel of the OS is not made to be
- swappable, and runs at the highest possible priority, a priority that no
- other process can have. This way, the system should always be able to
- recover. So if the OS itself requires 4,000k on a 4MB system, you
- wouldn't be able to run many programs, but the OS itself should stay up.
- This doesn't really require increasing the complexity of the API-- it is
- in fact transparent to the programmer. I just remember IBM making a big
- deal out of the fact that its AIX 3.1 (I think that's the right version
- #) kernel was actually pageable-- the implication being that most
- aren't.
-
- --
- Marc Fraioli
- mfraioli@grebyn.com (So I'm a minimalist...)
-