Newsgroups: comp.unix.ultrix
Path: sparky!uunet!sun-barr!cs.utexas.edu!zaphod.mps.ohio-state.edu!darwin.sura.net!haven.umd.edu!decuac!hussar.dco.dec.com!mjr
From: mjr@hussar.dco.dec.com (Marcus J. Ranum)
Subject: Re: Is local caching configurable?
Message-ID: <1992Nov10.143148.18071@decuac.dec.com>
Sender: news@decuac.dec.com (USENET News System)
Nntp-Posting-Host: hussar.dco.dec.com
Organization: Digital Equipment Corporation, Washington ULTRIX Resource Center
References: <776@sandia.UUCP> <1992Nov10.071109.19304@arb-phys.uni-dortmund.de>
Date: Tue, 10 Nov 1992 14:31:48 GMT
Lines: 32

wb@arb-phys.uni-dortmund.de (Wilhelm B. Kloke) writes:
>This is quite normal, the data are cached three times when writing on NFS;
>first normal Unix write cache; second the NFS server process and third
>the write cache on the server.

On the client, writes are cached only until the block is full, to
prevent abysmal performance on small writes. Once the block is full,
the client sends the NFS write request to the server and waits for the
response. In true, non-protocol-violating NFS the server does not ACK
the write until it has been committed to stable storage. Your remark
implies that the server can cache the write like normal UNIX filesystem
asynchronous I/O, which is not the case. Some versions of NFS do this,
but they are not adhering strictly to the protocol, though they gain a
great deal in performance that way. The "NFS server process" (are you
referring to nfsd?) is simply a holder process that is used as a kernel
device to block until the I/O completes.
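To make that write path concrete, here is a rough sketch in C of what
the client side does. This is not the actual ULTRIX code; NFS_BLKSIZE
and nfs_write_rpc() are made-up stand-ins for the mount's write size
and the real WRITE RPC:

#include <stddef.h>
#include <string.h>

#define NFS_BLKSIZE 8192                /* assumed wsize for the mount */

static char   blkbuf[NFS_BLKSIZE];      /* one block of pending data */
static size_t blklen;                   /* bytes buffered so far */

/* Stand-in for the real WRITE RPC; a conforming client blocks here
 * until the server acknowledges the data as being on stable storage. */
static int
nfs_write_rpc(const char *buf, size_t len)
{
        (void)buf; (void)len;
        return 0;
}

int
client_write(const char *data, size_t len)
{
        while (len > 0) {
                size_t n = NFS_BLKSIZE - blklen;

                if (n > len)
                        n = len;
                memcpy(blkbuf + blklen, data, n);
                blklen += n;
                data += n;
                len -= n;

                /* Small writes just accumulate; only a full block
                 * goes over the wire, and then we wait for the ACK. */
                if (blklen == NFS_BLKSIZE) {
                        if (nfs_write_rpc(blkbuf, blklen) < 0)
                                return -1;
                        blklen = 0;
                }
        }
        return 0;
}
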
>Of course you pay for the better write performance via NFS by loading
>the server and using up its memory. You may speed up your local write
>access by adding more memory (bigger buffer pool) to your machine.

Again, in a true NFS configuration, you're going to be bounded more by
the performance of the I/O subsystem on the server than by the server's
memory. If you start using asynchronous NFS (in ULTRIX you have to
patch your kernel with unsupported code to do this) you then start
using the buffer cache on the server as a *write* cache instead of just
a read cache. Since ULTRIX has fixed-size buffers, you don't "use up
the server's memory" - but you will affect the cache access patterns.

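On the server the difference boils down to one fsync(). Roughly - and
this is only a sketch, not the ULTRIX implementation; the async flag
and server_handle_write() are invented for illustration:

#include <sys/types.h>
#include <unistd.h>

/*
 * Handle one WRITE request against the exported file's descriptor.
 * Strict NFS commits the data before replying; the asynchronous
 * variant skips the fsync(), so the reply can go back while the data
 * still sits in the server's buffer cache - a write cache, in effect.
 */
int
server_handle_write(int fd, const void *buf, size_t len, off_t off, int async)
{
        if (lseek(fd, off, SEEK_SET) == (off_t)-1)
                return -1;
        if (write(fd, buf, len) != (ssize_t)len)
                return -1;
        if (!async && fsync(fd) < 0)    /* commit to stable storage before ACK */
                return -1;
        return 0;                       /* now it is safe to reply to the client */
}
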
mjr.