Newsgroups: comp.os.linux
Path: sparky!uunet!mcsun!Germany.EU.net!ira.uka.de!rz.uni-karlsruhe.de!usenet
From: S_TITZ@iravcl.ira.uka.de (Olaf Titz)
Subject: RISC approach to OS - Re: GNU kids on the block?
In-Reply-To: davidsen@ariel.crd.GE.COM's message of 27 Aug 92 13:57:03 GMT
Message-ID: <1992Aug28.171744.6460@rz.uni-karlsruhe.de>
Sender: usenet@rz.uni-karlsruhe.de (USENET News System)
Organization: Fachschaft Informatik, Uni Karlsruhe
References: <ROLAND.92Aug24194541@churchy.gnu.ai.mit.edu> <1992Aug25.123854.26792@uwm.edu> <1992Aug25.195316.9174@kithrup.COM> <1992Aug27.135703.9312@crd.ge.com>
Date: Fri, 28 Aug 1992 17:17:44 GMT
X-News-Reader: VMS NEWS 1.23
Lines: 59

In <1992Aug27.135703.9312@crd.ge.com> davidsen@ariel.crd.GE.COM writes:

> Here comes that idea again... The first o/s I helped write ran about
> 2/3 of the kernel in user mode, with user programs mapped into the
> addressing space. Multics was using rings to get some of the same
> things you get with setuid(), namely a limited set of privileges. GCOS
> used a multi-threaded kernel (sort of) with multiple processors all
> scampering around inside waving flags at one another. It even had
> almost lightweight processes to handle i/o interrupts in user space.
> That was mid 60's and it's interesting that the idea of monolithic
> kernel is once again drifting out of vogue. Unfortunately I don't think

While the Linux kernel does its job well, its being monolithic is a
problem, since all of the parts are interdependent: to understand the
work of any one of them, you have to know the whole system. This may
work for Linux but is unacceptable for bigger systems.

> the multiserver is the right direction, since everything else in computers
> is headed for less complexity rather than more. Multi-server is the CISC
> of software, a sort of hypercube of processes rather than processors.
>
> I'm all in favor of modularity (look at some of my net code), but I am
> not convinced that this is the best way to get there.

Depends on what you want to achieve. A true distributed system could
transparently allocate resources from different machines to a job.
Whether you want this is another question.

> I like the Linux RISC-like approach, do only a few things, but very
> well and very fast. Build the complex functions out of sequences of
> simple operations. To me this means simple kernel calls and the library
> providing the complex stuff.

I completely agree, and I wonder why this issue is only now being raised
again by Linux, when it was the design principle of the original UNIX 20
years ago. And now UNIX is a huge giant that needs a lot of memory
donuts :-) to feed it.

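To make that concrete, here is a minimal sketch (not from the original
post; the tiny_putc()/tiny_flush() names are made up): the kernel only
has to export a primitive write() call, and the "complex stuff" - here,
output buffering - lives entirely in a user-space library, roughly the
way stdio does it.

#include <unistd.h>   /* write() is the only kernel service used here */

static char buf[4096];
static size_t used = 0;

static void tiny_flush(void)
{
    size_t off = 0;
    while (off < used) {
        ssize_t n = write(STDOUT_FILENO, buf + off, used - off);
        if (n <= 0)            /* real code would retry on EINTR etc. */
            break;
        off += (size_t)n;
    }
    used = 0;
}

static void tiny_putc(char c)
{
    buf[used++] = c;                      /* buffered in user space... */
    if (c == '\n' || used == sizeof buf)  /* ...one syscall per line or block */
        tiny_flush();
}

int main(void)
{
    const char *p = "hello from a user-space library layer\n";
    while (*p)
        tiny_putc(*p++);
    tiny_flush();
    return 0;
}
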
But for this approach you need at least shared libraries to avoid
large-scale memory waste. Taken to its conclusion, you end up with the
Amiga, which has NO real OS kernel and where EVERYTHING is shared
libraries. It seems to me one of the best-designed OSs that ever
existed, but it has been undervalued, mostly because of hardware and
marketing problems :-(
(one of them being hardware dependence...)

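For what it's worth, a minimal sketch of the run-time side of shared
libraries, assuming a POSIX-style dlopen() interface (the name "libm.so"
is system-dependent). The memory saving itself comes from all processes
mapping the same library image; this only shows the loading interface.

#include <dlfcn.h>    /* dlopen()/dlsym(): dynamic loading */
#include <stdio.h>

int main(void)
{
    /* Open the shared math library; the exact file name is an assumption. */
    void *handle = dlopen("libm.so", RTLD_LAZY);
    if (handle == NULL) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return 1;
    }

    /* Look up cos() inside the shared image mapped into this process. */
    double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
    if (cosine != NULL)
        printf("cos(0.0) = %f\n", cosine(0.0));

    dlclose(handle);
    return 0;
}

(Link with -ldl on most systems; ordinary compile-time linking against a
shared library gives the same sharing effect without any explicit code.)
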
> Don't take this as a rejection of multi-server by me, I'm unconvinced
> rather than convinced against. Sort of a software agnostic.

Again: computers are NOT the right place to practice religion. :-)
Better an agnostic than someone who determinedly believes in something
that could well be proven wrong. :-)

MfG,
Olaf
--
Olaf Titz - comp.sc.student - Univ of Karlsruhe - s_titz@iravcl.ira.uka.de -
uknf@dkauni2.bitnet - praetorius@irc - +49-721-60439 - did i forget something?
The Green Dot is about as useful as a TÜV sticker
on a scrap heap. - Thomas Volkmar Worm