- Newsgroups: comp.sys.sgi
- Path: sparky!uunet!europa.asd.contel.com!darwin.sura.net!spool.mu.edu!decwrl!sun-barr!sh.wide!wnoc-tyo-news!ccut!crow!trich
- From: trich@crow.omni.co.jp (Timothy Richards)
- Subject: Re: nullrecv question
- Message-ID: <1992Nov4.101627.10106@crow.omni.co.jp>
- Sender: trich@crow.omni.co.jp
- Reply-To: trich@crow.omni.co.jp
- Organization: Omnibus Japan, Inc.
- Date: Wed, 4 Nov 92 10:16:27 GMT
- Lines: 85
-
- "Douglas E. James" <rz81134%executioner.d50.lilly.com@BRL.MIL> writes:
-
- > We are currently implementing a SGI PowerFile 100 fileserver. I was
- > looking through the NFS manuals and ran across the nfsstat command. The output
- > follows: (the machine is a 340S with 64MB memory running IRIX 4.0.4 using its
- > FDDI interface as the primary interface)
- >
- > Server rpc:
- > calls badcalls nullrecv badlen xdrcall
- > 2244071 0 1377407 0 0
- >
- > Should I be concerned at the nullrecv number..? It appears to be a
- > bit high. I checked other machines that were exporting NFS directories, and
- > their counts were much lower for this nullrecv value.
- >
- > I also noticed the nfsd were accumulating some time ......
-
- I noticed these things too and raised the issue with support; I also
- noticed that these situations only occur on multi-CPU machines.
- Anyway, here are the responses I got from NSG/SGI support
- (which seemed quite unsatisfactory, so I decided to just drop
- the matter).
-
- > There is a slight difference between irix3.3+ and irix4.0+.
- > Under 3.3 all network processes ran on the same processor.
- > Under 4.0+ the networking processes can run on different processors
- > to distribute the load (this improves performance).
- >
- > That was the only change I found.
- >
- > To lock all network processes to one processor on a 4.0+ multi processor
- > machine you would have to change /etc/init.d/network, like this:
- >
- > # if test -x /usr/etc/rtnetd; then
- > # # Always start on multiprocessors for better throughput
- > # if $IS_ON rtnetd || test `mpadmin -u | wc -l` -gt 1; then
- > # /usr/etc/rtnetd `cat $CONFIG/rtnetd.options 2> /dev/null`
- > # $ECHO " rtnetd\c"
- > # fi
- > # fi
- >
- > You might try that as an experiment; but, it would reduce network
- > performance.
-
- Well, this isn't what the man page for rtnetd says at all!
- The man page says that rtnetd halts network packet processing whenever
- the load gets too high, so this seems to be very misleading advice.
-
-
- Again also NSG/SGI support writes....
-
- > On our main source tree system we get:
- >
- > calls badcalls nullrecv badlen xdrcall
- > 32317149 0 16653080 0 0
- >
- > It is running 4 nfsd processes and has been up 10 days.
- >
- > From "Managing NFS and NIS" by O'Reilly (a great book that we use):
- >
- > The nullrecv field is incremented whenever an nfsd daemon is
- > scheduled to run but finds that there is no packet on the
- > NFS service socket queue. If the server is running an
- > excessive number of nfsd daemons, it is possible that there
- > will be more runnable daemons than requests to drain from
- > the NFS socket, so some daemons wake up but do not receive
- > any data.
- >
- > nullrecv > 0: NFS requests are not arriving fast enough to keep
- > all of the nfsd daemons busy. Reduce the number of NFS server
- > daemons until nullrecv is not incremented.
- >
- > So, I don't think there is any problem. He might want to run
- > fewer nfsd daemons.
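- Incidentally, if you want to eyeball the ratio the book is describing,
- here is a quick awk sketch (my own, not anything from SGI) fed the
- numbers from the original post; on a live server you would pipe in the
- "Server rpc" section of nfsstat output instead:

```shell
# Print null receives per successful RPC call.  Field order is assumed
# to match the nfsstat header shown above: calls badcalls nullrecv
# badlen xdrcall.  The sample numbers are those from the original post.
awk '/^calls/ { getline; printf "nullrecv/calls = %.2f\n", $3/$1 }' <<'EOF'
calls    badcalls nullrecv badlen  xdrcall
2244071  0        1377407  0       0
EOF
```

- A ratio well above zero, as here, just means a large share of nfsd
- wakeups found nothing on the socket queue, which is consistent with
- the book's advice to run fewer daemons.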
-
- As you can see, the explanation from NSG/SGI seems highly contradictory.
- It's my opinion that the nullrecv output on multi-CPU machines is
- broken and meaningless. As for the accumulating CPU time of the nfsd
- processes, I don't know; I guess you can just take their word for it
- that it's not a problem.
- --
- -----------------------------------------------------------------------------
- Timothy Richards, Omnibus Japan. work: +81 (3) 5706-8357
- [Uucp] ccut.cc.u-tokyo.ac.jp!crow!trich home: +81 (3) 3720-4088
- [Internet] trich@omni.co.jp
-