"Linux Gazette...making Linux just a little more fun!"


(?) The Answer Guy (!)


By James T. Dennis, answerguy@ssc.com
Starshine Technical Services, http://www.starshine.org/


Contents:

(!)Greetings from Jim Dennis

(?)Version-a-go-go and the Tragedy of being "Left Behind"
(?)Removing Lilo from a multi-boot machine
(?)Question on sendmail... --or--
'sendmail' FEATURE creatures for virtual domain and generic re-write tables
(?)Kernel crashes
(?)Winmodems --or--
More on 'WinModems': How to "lose" Gracefully - Just say No!
(?)Mail on a LAN Linux to NT --or--
Basic e-mail Setup for Linux?
(?)Remote Tape Backups
(?)adduser
(?)Letter to Dell - Linux on Dell Hardware
(?)Hello --or--
Connecting a Dumb Terminal to your Linux System
(?)Why Linux?
(?)Redhat telnet
(?)Network Cards
(?)A little note about "good times" or emailed viruses --or--
"Good Times" are Spread to the "Great Unwashed"
(?)The Answer Guy --or--
Regarding the Column's New Look
(?)TACACS+ client for Linux --or--
TACACS and RADIUS Authentication Models for Linux and/or PAM
(?)Sendmail jam --or--
'sendmail' Log Jams and Capacity Problems: running extra 'sendmail -q' processes
(?)PPP connection and diald --or--
Co-ordinating diald and Manual PPP
(?)getting ppp-2.3.3 to work
(?)Mail access --or--
Getting at MS-Mail from within Linux: The Myriad Ways to Co-exist with MS Windows
(?)Program for Mailer Daemons --or--
Automated Handling for MAILER-DAEMON Messages: Read The Sources, Luke.


Linux Gazette: The Answer Guy for June, 1998

The theme for this month seems to be "vendor support for Linux." From the responses to my open letter to Dell, through the common problems with "winmodems" and "winprinters" and even to the impossible dream of running MS Windows applications and accessing Microsoft proprietary formats from native Linux applications --- we continue to fight uphill battles with so many vendors.

This isn't new in the broader Unix world. Readers of A Quarter Century of Unix by Peter H. Salus should recognize this as an attitude that has dominated hardware vendors for almost thirty years. They've been predicting the "death" of Unix (and the "death" of the Internet) almost from the beginning.

There is some hope on the horizon. As some of you may have heard or read, Corel Computing (the hardware division of the famous software company) is basing its NC (network computer) on a StrongARM version of Linux. Within a week or two after that, Corel Software announced their intention of porting the rest of their applications suite to Linux (their WordPerfect 7 and 8 have been available in Linux versions for some time).

A little further afield it appears that Apple Inc. is starting to make some sense with their future OS strategy --- by "thinking different", or "outside of the box" in a manner of speaking. Specifically they've apparently decided to skip the planned version of Rhapsody with its "blue" and "yellow" boxes that separated the MacOS and the Mach/NeXTStep (Unix) personalities. Apparently buried in their announcement for MacOS X ("ten") is the rumor that your "NeXT" (Rhapsody) native applications will co-exist on the same desktop with your MacOS programs --- and that the MacOS API's will be seamlessly supported with all the multi-threaded support that the Mach microkernel can provide. Of course you have to hear that as rumors, or read between the lines with a considerable background in the Macintosh architecture, since it is not apparent from their own press releases, or from the San Jose Mercury News articles on the subject. The San Francisco Examiner sings a similarly hollow tune. However, I'm not alone in my opinion, as we see in David K. Every's article.

I suspect he knows way more than I do on the subject.

Oddly the MacOS Rumors web site seems to have no mention of MacOS X.

What does this have to do with Linux? Well, I can only continue to speculate that mkLinux binaries will eventually run under MacOS X (Rhapsody). I can also still hope that, with the progress in the G3's, the plans for the G4 generations of the PowerPC platform, and hopefully the continued availability and development of the DEC (Compaq) Alpha processor, we'll see some real choices and competition in the marketplace. Linux is the one OS that crosses all of these (and Sun SPARC's and SGI MIPS and others). Some form of Unix is available on just about every platform, whether or not it supports Linux.

As we look beyond the world of PC clones we see that there is some vendor support. There is some hope that Microsoft's legacy will be the separation of hardware vendors from their "control" hegemony. Before Microsoft it was the norm for computer manufacturers to almost completely control the availability of software for their platforms --- Unix has undermined that control for over two decades. The popular backlash from Microsoft's own unique form of control --- over the collective Wintel platform --- may finally completely sever the puppet's strings. The trickle of vendor support that you're seeing now is largely a survival strategy. So not only will these vendors give up the efforts to control their customers' range of software choices, they'll be glad they did it, considering the alternative.


Jim Dennis


(?)Version-a-go-go and the Tragedy of being "Left Behind"

From Richard Storey on 20 May 1998

To a "newbie" on the edge of installing Linux some of what I read leaves me concerned that some form of minor shakeout is building up in the Linux versions arena. It has me confused about which direction to turn because I'm not really interested in installing a lot of stuff, configuring it and then finding that 6 mos. later I am out on a limb due to some standards shift.

(!) I can understand your concern. This is a problem that IT managers face all the time when selecting hardware and software for their companies. It affects the home user, too, but no one gets fired over those little disasters (well, it might cause the occasional divorce but ....).

That's why the rule in MIS/IT used to be "no one ever got fired for buying IBM" (and why we see such "devotion" to Microsoft's products today).

However, I can lay your fears to rest on a couple of grounds. This is not "the market" --- it is the free software world. (In this particular case I'm referring to GNU and Linux free software and not merely to a broader world of "open software").

(?) What is this issue about regarding GNU gcc libraries and some versions shifting to a new standard? I've seen bits of info. on it and did somewhat understand what it meant, but I'm not a programmer, therefore, I don't get the big picture here.

(!) The debate about glibc2 (Linux libc 6) and libc5 is mostly of concern to programmers. Even as a sysadmin I'm fairly oblivious to it. It's really a bear for package and distribution maintainers (which is probably where the messages you're thinking about are coming from).

There is probably quite a bit of traffic about the pros and cons of each. I won't get into that, mostly because I'm simply not technically qualified to do so. (I'm not a programmer either).

The high elevation overview is that glibc and libc5 can co-exist roughly to the same degree that libc5 co-exists with libc4 and a.out co-exists with ELF. Nobody is being left "high and dry." In this respect it is completely different than the shift from DOS and Windows 3.x to Windows '95 and/or from either of those to NT. It's also a far cry from the shameful way that Microsoft and IBM have treated their OS/2 adopters.

Zooming in a little bit I can say that the next major release of most Linux distributions will be glibc based. Most will probably ship with optional libc5 libraries for those who want or need to run programs that are linked against them.

glibc is the reference implementation of the 86Open standard. This should be supported by almost all x86 Unix vendors within the next couple of years. (Hopefully most of us will have moved to PPC, Alpha, Merced [though its release schedule has been stretched], or whatever by then -- but I'm the one with the 10 year old server that handles all the mail into and out of my domain --- so don't bet on it).

The hope is that we'll finally have true binary compatibility across the PC Unix flavors. SCO and Sun have traditionally bolluxed this up in the interest of their market rivalry --- but the increasing acceptance of Linux and other GNU software makes it their only reasonable option. Neither of them can force the market to adopt their standards (iBCS and the x86 ABI) and the consumer shrink wrap software market is rapidly shifting to Linux.

It should also be much easier for Linux to keep pace with the rest of GNU development as we adopt glibc. There should be less duplicated effort in porting new library features from glibc/gcc to Linux than there was under all of the previous Linux libc's.

Right now we are in a transition between them, just as we were a couple of years ago when we shifted from a.out to ELF. 'a.out' and ELF are "linking formats" --- different ways of representing the same machine language instructions. They require different loading methods by the kernel, in order to execute properly. It is possible (trivial, in fact) to support a.out and ELF on the same system concurrently. In fact I can compile a.out support as a "loadable module" and configure my system to automatically and transparently load that --- which saves a bit of kernel memory when I'm not running any older apps --- but allows me to do so without any concern.
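(A minimal sketch of what that looks like, assuming a kernel built with a.out support as a module --- the config symbol and module name here are from the stock 2.0 kernel sources:)

# When configuring the kernel, build a.out support as a module:
#     CONFIG_BINFMT_AOUT=m
# Then kerneld can load it on demand, or you can load it by hand
# before running an old a.out binary:
modprobe binfmt_aout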

Although shared libraries are completely different from (and independent of) executable formats the similarity is that we (as users and admins) can mostly just let the programmers, distribution and package maintainers take care of all that.

Let me try and give some background on this:

Most programs under Linux (and most other modern forms of Unix) are "dynamically linked" against "shared libraries." Windows "DLL's" (dynamically linked libraries) are an example of Microsoft's attempt to implement similar features for their OS. (I believe that Unix shared libraries pre-date OS/2 and MS Windows by a considerable margin).

Almost all dynamically linked programs are linked against some version of "libc" (which provides the functions that you get when you use a #include <stdio.h> directive in a C program). Obviously your distribution includes the libc that most of its programs are linked against.

It can also include other versions of the shared libraries. A binary specifies the major and minimum minor version of the library that it requires. So a program linked against libc5 might specify that it needs libc5 or libc5.4.

If you only have libc5.3 and the program requires libc5.4 you'll get an error message. If you have libc5.3 and libc5.4 then the program should run fine. Any program that only requires libc5 (which fails to specify the minor version) will get the last version in the libc5.x series (assuming your ldconfig is correct).
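You can inspect this for yourself with the 'ldd' command. Here's a sketch --- the binary and the version numbers shown are just examples, and your output will differ:

$ ldd /usr/bin/less
        libncurses.so.3.0 => /lib/libncurses.so.3.0
        libc.so.5 => /lib/libc.so.5.4.33
$ ldconfig      # rebuild the dynamic linker's cache after adding libraries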

This is not to say that the system is perfect. Occasionally you'll find a program like Netscape Navigator or StarOffice that specifies a less specific library than it should (or sometimes it might just have the wrong version specified). When this happens the bugs might be pretty subtle. This is especially true when a program "depends upon a bug in the libraries" (so the fix to the library breaks the programs that are linked to it).

In the worst cases you can just put copies of the necessary (working) libraries into a directory and start the affected program through a small shell script wrapper. That wrapper just exports environment variable(s): LD_PRELOAD and/or LD_LIBRARY_PATH to point to these libraries or this directory (respectively). These magic environment variables will force the dynamic linker to over-ride its normal linking conventions according to your needs.
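A minimal sketch of such a wrapper (the program name and library directory here are hypothetical --- substitute your own):

#!/bin/sh
# Force the dynamic linker to look in /opt/oldlibs before the
# normal library directories, then hand control to the real binary:
LD_LIBRARY_PATH=/opt/oldlibs
export LD_LIBRARY_PATH
exec /usr/local/someapp/bin/someapp.real "$@"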

(This is obviously only a problem when the sources to the affected application are unavailable since re-compiling and re-linking solves the real problems).

In truly desperate cases you could possibly get a statically linked binary. This would contain all the code it requires and no dynamic linking would be necessary (or possible).
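(If you do have the sources, producing such a binary is a one-liner --- a sketch, with a hypothetical program name:)

gcc -static -o myapp myapp.c
ldd myapp       # should report: "not a dynamic executable"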

Note that the problem that I've just described relates to shared libraries already. This is not new with the introduction of glibc --- the actual cases where I've had to use LD_PRELOAD were all with libc5.

(?) Some defections at Debian have me wondering about using that version.

(!) I'm not sure I understand this. First I'm not sure which defections you're referring to. I presume you've read some messages to the effect that some developers or package maintainers refuse to make this migration (to glibc).

More importantly I'm not sure which 'version' you are referring to.

The current version of Debian (1.3) uses libc5. The next version (currently in feature freeze --- slated to be 2.0) uses glibc.

(?) I've read about some major problems with RedHat 5.0, but 4.x isn't compatible with the new GNU gcc libs, (right?).

(!) I wouldn't call the problems with Red Hat 5.0 to be "major." When I was in tech support and QA we used a system of bug categories ranging from "cat 1" to "cat 4." The categories were roughly:
  1. causes data loss or crashes the whole system
  2. dies and is unusable
  3. major function fails
  4. cosmetic
... and the bugs that I've seen from RH5 have all been at cat 3 or lower. (Granted people might argue about the severity level of various security bugs --- but let's not get into that).

However I agree that there have been many bugs in that release --- and that many of these have been glaring and very ugly.

One of the two times that I've had dinner with Erik Troan (he's a senior developer and architect for Red Hat Inc) I asked him why they forced out glibc support so soon and for such a major release.

He gave a refreshingly forthright response by asking:

"How many glibc users were there a month ago?"

(essentially none --- just a few developers)... and:

"How many are out there now?"

Basically it sounds like Red Hat Inc knew that there were going to be problems with glibc --- and made the strategic decision to ship them out and force them to be found and fixed. This had to hurt but was probably viewed as the only way to move the whole Linux community forward.

I think they could have worked a little bit longer before release (since I really get a bad taste in my mouth when 'vipw' segfaults right after fresh installation -- 'vipw' is the "vi for your passwd file" that sysadmins use to safely, manually, add new accounts or make other changes). I was also hoping they'd wait until the 2.2 kernel was ready so they could wrap both into one new release.

(However, I guess that would have put them about 6 to 8 months behind their own schedule. They seem to put out a new minor version every four to six months --- or not quite quarterly).

At the same time I have to recognize their approach as a service to the community. They have fixed the bugs quickly (and newer pressings of the same CD's contain many of these fixes). Like all shrink wrap software companies Red Hat Inc. is forced (by the distribution channel) to do "silent inlining" (incorporation of bug fixes into production runs without incrementing the version number). This is a sad fact of the industry --- and one that costs users and software companies millions of hours of troubleshooting time and confusion.

(My suggestion was that RH cut monthly CD's of their bug fixes and 'contrib' directory and offer subscriptions and direct orders of those. I don't know if they've ever taken me up on that. My concern is to save some of that bandwidth. I occasionally burn CD's of this sort and hand them out at users groups meetings so they can be shared freely).

(?) Could you explain some of these shifts on the Linux Versions field?

(!) Well, I hope this has helped. There have been many sorts of shifts and transitions in Linux over the years. Obviously there were shifts from libc4 to libc5, shifts from a.out to ELF, and between various kernel versions: 0.99 --> 1.0 --> 1.1 --> 1.2 --> 2.0 and the current effort to get 2.2 "out the door."

I think that all of these shifts have been progressive.

The worst one we suffered through was the change in the /proc filesystem structures between 1.2 and 2.0. This was the only time that a newly compiled kernel caused vital "user space" programs to just keel over and die (things like the 'ps' and 'top' commands would segfault).

That was ugly!

There was no easy way to switch between the kernel versions on the same root fs. The best solution at the time seemed to be to keep one root fs with the old "procps" suite on it and another with the new one. You'd then have whichever of these you were using mount all your other filesystems. For those of us that habitually create small root filesystems and create primary and alternate/emergency versions of those --- it wasn't too much of a problem.

(I usually use about 63Mb --- sometimes as much as 127Mb and mount /usr, /var, and others --- or at least mount /usr and /usr/local and make things like /var, /opt, and /tmp into appropriate symlinks. I just isolated the "bad" binaries by moving them from under /usr to /root/.usr/ and replaced them with symlinks. Then the 'ps' that I called was automatically resolved to the one under my root fs --- which was different depending on which kernel I booted from).

I see no evidence that glibc (or the 2.2 kernel) will pose these sorts of problems. The worst problem I foresee with glibc is that it is so big. This is a problem for creating boot diskettes. I've heard that it compresses down to almost the same size as libc5 --- which means that glibc based diskettes might be possible using compressed initrd (initialization RAM disks). At the same time it is probably unnecessary. The main new features of glibc seem to be in the support for NIS (network resolution of things like user account and group information --- things normally done by accessing local files like /etc/passwd and /etc/group). Many of these new features are probably unnecessary on rescue diskettes.

One final note about all these transitions and changes:

You aren't forced to go along for the ride. You can sit back and run Red Hat 4.2 or Debian 1.3 for as long as you like. You can install them and install glibc with them. You aren't forced to upgrade or change everything at once.

These "old" versions are never really "unsupported" --- there was someone who just released an updated Linux-Lite (based on the 1.0.9 kernel). This is one of the smallest, most stable kernels in the history of the OS. It has been updated to support ELF and have a few bug fixes applied. It can boot in about 2Mb of RAM (which is just enough for an internal router or print server) and handy for embedded applications that need TCP/IP.

Since we have the sources, and we're licensed to modify and redistribute them in perpetuity (the heart of the GPL), we can continue to maintain them as long as we like. Obviously there are some people out there who still do.

(?) Thanks.
RS


(?)Removing Lilo from a multi-boot machine

From Samuel Posten on 20 May 1998

could you please point me to some references regarding removal of LILO from a machine that has been set up to run both win 95 and Linux, preferably without losing any of the Win 95 partitions.

(!) Boot up a copy of DOS (from a floppy). Any copy of DOS later than 5.0 will do. Type FDISK /MBR.

That's the short form. There are some special situations which might require special handling --- but they are increasingly rare (special boot sectors used to be used for large (greater than 32Mb!) drives to replace the INT 13H calls that are normally handled by the BIOS for all (real mode) disk handling).

(You might boot from your original Win '95 setup diskette and exit out of the installation program --- I think that trick still works). You should definitely create a bootable DOS diskette (that would be MS-DOS 7.0 --- the real OS that's hidden under Win '95's interface/GUI).

It used to simply be a matter of running the command: FORMAT A: /S and copying COMMAND.COM onto a floppy --- but MS has probably made it much more complicated these days. I honestly haven't used '95 enough to know.
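(For what it's worth, the old recipe looked something like the following --- I'm assuming FDISK still lives in C:\WINDOWS\COMMAND under '95, so treat this as a sketch rather than gospel:)

FORMAT A: /S
COPY C:\WINDOWS\COMMAND\FDISK.EXE A:\
REM ... then boot from the floppy and restore a standard MBR with:
FDISK /MBR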

(?) If this can't be done, I'll just have to stick with running LILO, no big deal, but its a pain to have to tell it to boot Win 95 each time, as it defaults to the Linux system that no longer exists (I wiped those partitions and had to make them Win 95 devices.)

(!) Sorry to lose another Linux user. However, if MS keeps on their current course --- you may be back before you know it.

(?) Any help would be appreciated!

(!) This is an alarmingly common question. I've copied Werner Almesberger to ask him to consider adding this as a note in the lilo 'man page' and to the authors of the Installation-HOWTO (although, Custom LILO Configuration is worth a look -- it's possible to have LILO point at another OS' partition) and the "Linux Installation and Getting Started Guide" (the key part of the LDP -- the Linux Documentation Project).

I can't promise anything --- but I think (after the countless times that I've answered this and seen it answered in the newsgroups) that we should include a little section --- one paragraph in most cases --- about uninstalling LILO (and a whole section in LIGS about uninstalling all of Linux). We're not trying to trap people into being "stuck" with our software, and it's merely a bit of documentation.

The problem is that it's the sort of thing that most of us old DOS hacks take for granted (I spent years doing tech support and repairing MBR's from PC viruses and rebuilding partition tables with Norton's DiskEdit).

So, let's hope that Matt, Eric, and Werner will consider adding this little tidbit to their docs --- and let's hope even more fervently that a few of the users out there will look at the docs for LILO, and will actually read some HOWTO's and guides as they consider installing (or uninstalling) Linux.

(I suppose we could also contact Red Hat, Debian, S.u.S.E., Caldera and the others to suggest that they all add an "Un-install" (Remove) option to their boot/setup tools).

(?) Sam Posten


(?)Another reason for Removing Lilo...

From Sam Posten on 26 May 1998


>> It depends on the nature of the BS virus. Some of them encrypte the logical boot record or cross-link the FAT's against their code, or play other games. In those cases just blowing away the virus locks you out of your system.
Hmm, it's been a while since I've researched virii. Having Viruscan running full time has made me lazy I guess!

(!) I used to work for McAfee, and for Norton before them. So I have considerable professional experience with the critters. Running a good scanner is a good idea. (VirusScan is good, FProt and Thunderbyte used to be pretty good, as well. The latter two were sometimes on the leading edge and sometimes just neck-and-neck with McAfee).

Since I've left McAfee (now called Network Associates) I have no idea how their anti-virus products stack up. When I left I'd been their Unix sysadmin for over a year so I was a little out of the loop by then. Now I use Linux/Unix exclusively and haven't dealt with any real virus infections for a few years.

Actually there was the issue of the "Bliss" virus for Linux. This was apparently a "lab strain" that "got out" into "the wild." (Yes, these terms are all used by computer virus researchers, as rather obvious analogues to their biological counterparts).

In the case of "Bliss" there were a few people who did catch this virus. Naturally they were running this new program as 'root' (breaking the cardinal rule of systems administration) and the program went and modified some other programs.

At the time one of my buddies from McAfee was staying with me (he lives down in L.A. and stays up here during part of most weeks). He's the head of their AV research department. So he and I chatted about it for a couple of minutes and concluded that McAfee's existing virus scanner for Linux could be updated to detect "Bliss" and he assigned one of his AV researchers (also a former housemate of ours) to the job and they updated their signature file (.DAT). I don't recall that any changes were needed for the engine (the .EXE).

This was heralded by McAfee's marketing team as the "first live, wild virus incident under Unix." There ensued the usual flamefest on USENet (comp.virus) which argued that this wasn't "really a virus" and that McAfee Associates was hyping it up and taking advantage of the situation, etc.

"Bliss" did have a command line option to uninstall itself. It did, however, modify other programs to link its own code into them (which is the definition of a computer virus). McAfee did take advantage of the opportunity to tout its own horn. The people who caught "Bliss" did display gross ignorance of proper system administration practice (or, in at least one case, foolhardy disregard for it).

The bottom line is that a properly administered Linux system is a very poor host for virus transmission.
(?)
>> In short, stick with Unix/Linux. Using these with any modicum of proper system administration practices will very likely be the end of your virus hunting days.

Gotta use the best tools for each job, and right now that means I gotta do windoze at home, at least part of the time. Thanks for the insight.

Sam

(!) Naturally it does come down to requirements analysis and the availability of packages that meet those requirements. StarOffice, Applixware, and Corel all seem to be producing personal productivity suites for Linux that either rival MS Office, or soon will.

However, you define the criteria for "best" when it comes to your jobs. It sounds like you're going to keep your eyes on the Linux market, and you may find at some point in the near future that you do have a choice for your applications.

It's also important for you to supply your software vendor with feedback. If the primary reason you're running Windows is to support QuickBooks, call Intuit and let them know. If you need access to some reference CD's (Grolier's Encyclopedia, some electronic dictionary, whatever) let the publishers know that you need cross-OS support.

(Those CD books are one of the first places in the retail, shrinkwrapped software market where I hope to see Java take over).


(?)'sendmail' FEATURE creatures for virtual domain and generic re-write tables

From Benjamin Peikes on 18 May 1998

Jim,

I have a quick sendmail question for you. I have set up virtual hosting where I add an account for each user and then map the incoming and outgoing address for each account. The problem is that I add an account, i.e. bendtg and then map outgoing mail to be ben@dtgroup.com and mail incoming mail for ben@dtgroup.com to go to the bendtg account. The problem is that I will also get mail going to bendtg@anyotherhostI.receive.mail.for. I was wondering if you knew what I need to set so that it only accepts mail for a particular list of addresses that I specify. Thanks a lot.

Ben

(!) I thought I answered this for you earlier this month. Is this a resend --- or a refinement to an earlier question?

[Nope, he's only got the one, but we do have a lot of sendmail questions this month. -- Heather]

In any event the FEATURE's that you might want to enable and use in your 'sendmail' "mc" (M4 configuration file) are the "virtusertable" and the "genericstable".

These can allow you to support the re-writing of addresses in outgoing mail, and to support things like matching a whole domain to a single mbox (mailbox folder) file. You can also create entries in the virtusertable that look just like aliases --- except that they include host/domain portions of addresses (parts to the right of the "@" (at) sign).

Unfortunately I don't have working samples of these files but the M4/mc file would look something like:
OSTYPE(`linux')
VERSIONID(`@(#)YOURDOMAINHERE.mc	.1 (BP) 8/11/95')
FEATURE(`genericstable')dnl
FEATURE(`mailertable')dnl
FEATURE(`virtusertable')dnl
FEATURE(`domaintable')dnl
And you'll have to create these tables (usually as dbm files). You can read more about these advanced sendmail features in the famous "Bat Book" (Sendmail, 2nd Edition, by Bryan Costales, from O'Reilly & Associates).

After you've merged some of these features into your mc file you'll build a sendmail.cf file by running m4 on it (I'd usually do an RCS check-in ('ci -l') of my old /etc/sendmail.cf file before overwriting it with the new one --- and I keep my mc files under version control as well).

Once you've created the sendmail.cf file (and tested that it hasn't broken any of the features you were already using) you need to create one or more tables (depending on which combination of 'genericstable', 'virtusertable', 'domaintable', and other features/tables you choose to use). These are created with a text editor and must be "compiled" or "made" into a suitable format (usually some dbm variant) using the 'makemap' command.

The Costales book goes into that in some detail -- but the thing is 800 pages long and it's easy to get lost in that tome. So you might want to just read the 'makemap' man page.

Basically all this 'makemap' stuff is just like running 'newaliases' after you change the /etc/aliases file. It's even possible to force sendmail to use a straight text file for a table if you want to (but that's hackish and definitely more trouble than it's worth).
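As a sketch, a virtusertable source file and its compilation might look like this (the path and map type below are common defaults, not universal --- check the K lines in your generated sendmail.cf):

# /etc/mail/virtusertable (plain text source):
ben@dtgroup.com         bendtg
info@dtgroup.com        bendtg
@otherclient.com        someotherlocaluser

# "compile" it into the hash/dbm form that sendmail actually reads:
makemap hash /etc/mail/virtusertable < /etc/mail/virtusertable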

I have another problem with this whole approach. If you are mapping all of the mail to a given domain or for a given host into a single "drop file" (mbox folder) which some user is getting (say, via POP or IMAP --- perhaps using Eric Raymond's 'fetchmail' package) your customers still have a problem if they then need to split the mail into multiple addresses at their site.

I've been told by one of Netcom's senior techs that they resolve this with a custom re-writing rule that takes the envelope addressee(s) (the address or list of addresses as it was passed to the receiving sendmail daemon) and adds those as Bcc: header lines before putting the message in the drop file.

(The effect of this is that if a piece of mail was addressed to you, and copied to some partner at your site --- the receiving 'fetchmail' process should process those Bcc: lines --- as appropriate to your domain).

I haven't confirmed this, nor have I concocted a custom FEATURE macro (m4) or rewriting ruleset to do this --- though I'd like to see one and play with it.

My personal opinion is that all this virtual mail domain to "drop file" stuff is ugly and hackish --- so I still use a uucp feed to get mail from my ISP to my domain and back (and I use it to get my netnews, too).

As always the best sources of sendmail support on the 'net are:
NetNews: comp.mail.sendmail
The FAQ (web version):
comp.mail.sendmail Frequently Asked Questions (Part 1 of 2)
http://www.cis.ohio-state.edu/hypertext/faq/usenet/mail/sendmail-faq/part1/faq.html -- You want Q3.7: How do I manage several (virtual) domains?
Other Web Resources:
http://www.sendmail.org/
http://www.sendmail.com/
Harker Systems:
http://www.harker.com/
(Offers wonderful and very detailed seminars on 'sendmail' and DNS. I've taken it --- and only wish that I'd had the time to apply even a fraction of that in my consulting over the last several months. I've Bcc'd him on this message as a courtesy).


(?)Kernel crashes

From David W. MacDougall on 18 May 1998

Hello Answer Guy,

Since December, I have tried posting this problem to newsgroups and the responses I received were not helpful.

I am running Red Hat 5.0 on a Pentium 233 processor with 128mb of EDO RAM, Award BIOS, Award PnP BIOS, an Adaptec 2940 Host Adapter, and a Quantum SCSI 3.2gig hard drive. One of the boot up messages I get from the BIOS is "Checking DMI pool data."

I have tried running Slackware and Red Hat, both with kernel version 2.0.31. No matter what I do, it seems the kernel will "see" only about 14 mb of RAM (see meminfo file below). The first advice I got was to use an append command in LILO "mem=128mb" and several variations thereof. I tried entering that command every possible way, and though it did increase the amount of RAM available, the system would crash almost immediately, or as soon as I tried to access a disk drive. I would get a kernel panic message, then some messages about the SCSI hosts and then a complete freeze-up.

(!) First that append directive should be:
append="mem=128M"
note that the quotes are required to assign this value to the append directive because of the "=" sign that is contained within the value. Also note that this value should be written with a capital letter "M" and not as "mb" (I'm hoping that your question has a typo that's not reflected in your actual lilo.conf file). You could also specify the size in hexadecimal (precede it with "0x") and I suspect that you could specify it as something like mem=134217728 (which is 128M in decimal). However I've never seen anyone do that.
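In context, a lilo.conf stanza using this directive might look like the following sketch (the image path and label are examples, not anything from your system --- and remember to re-run /sbin/lilo after editing the file):

image=/boot/vmlinuz-2.0.33
    label=linux
    root=/dev/sda1
    read-only
    append="mem=128M"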

Note also: You can pass a parameter such as mem=128M to your kernel by simply typing it on the lilo command line during boot. In other words, when you pause your system at the 'lilo' prompt during the boot cycle you can manually "append" kernel parameters as you select which kernel/lilo image to load.

For example most of my lilo configurations have org ("original"), old, cur ("current"), and new stanzas (images). So I could temporarily limit this machine (canopus) to use only 32 of its 64Mb of RAM by typing something like:
new mem=32M
... at the lilo boot prompt. A line like:
old root=/dev/sdb1 single mem=8M
... would run my "old" kernel, mounting the first partition on my second SCSI disk as the root filesystem, and limiting the kernel to only use eight megabytes of RAM (and I'd better hope that the rc scripts on /dev/sdb1 give me a bit of swap space or those first few minutes will be painful with only eight MB). You can also pass these parameters to your kernel via LOADLIN.EXE.

You can learn all about the other boot prompt parameters in the BootPrompt-HOWTO at any LDP mirror:
http://www.linuxresources.com/LDP/HOWTO/BootPrompt-HOWTO.html

If your actual attempts have been in the correct syntax (and the error is just in your question) then it sounds suspiciously like a memory hole or a chunk of address space being used as a framebuffer (like a weird video card).

In the early 386 days (many years and three or four generations of processor ago) there were some systems that had 'top memory' or a 'memory hole' --- a chunk of address space used by the motherboard's chipset at about the 16Mb line (since nobody would have more than 16Mb of RAM <g>). Back in those days I was on the Quarterdeck tech support team (Linux didn't exist, yet) and we used to use the QEMM "notop" switch (if I remember it correctly) to work around the problems when this oddity wasn't automatically detected by QEMM.

However I don't think any Pentium or PII motherboard would have such a problem (it would definitely be considered a design flaw these days).

Are you saying that the kernel seems to be stable when you run it with only 14 or 15Mb of RAM --- but crashes soon after you force your kernel to use the rest? Have you tried setting the append line to 64Mb (or just manually passing a mem=64M parameter to your kernel)?

You basically want to narrow down exactly where the crash is occurring. Once you think you have it --- it's worth taking out your RAM DIMM's (SIMM's or whatever they are in your case) and swapping them (put all the DIMM's from one bank into the other bank's slots and vice versa). If the crash "moves" with the chips then replace the RAM modules (the things might even work in another box --- since you can sometimes see some inexplicable "timing" glitches that amount to: "this board doesn't 'like' those chips").

(Sounds scientific, doesn't it!).

Other than that you might consider removing all the cards in the system --- putting in the cheapest, plainest video card and IDE/multi-function card and drive you can find and testing it with that configuration. (Then you re-introduce your preferred adapters until the problems re-occur. In this way you isolate the problem to a specific hardware or software component.)

(?) My question is this: Where is my problem? Is there something inherent in the kernel that won't let it see any more than 14mb of RAM on this system without my having to add append statements to LILO? And why does the kernel panic once it does see more memory? I have tried adjusting the amount of memory stipulated in the command line to lower levels, tried entering it in HEX, and in bytes. The results are always the same.

(!) There is definitely nothing that limits Linux to 14Mb of RAM. My oldest system ('antares' --- the old 33Mhz 386 that handles my uucp mail, INN netnews, fax, dial-in and dial-out modem, is the household web and POP server, and is the backup masquerading router when my ISDN goes out) has used 32Mb of RAM (auto-detected) for several years --- from Linux 0.99p10 through my current 2.0.33. That machine is about a decade old now. 'Betelgeuse' and 'Canopus' each have 64Mb. I've managed and configured systems with 128Mb and more (although most PC hardware tops out at 128 or 256 Mb).

Most systems will need the mem= parameter if they have more than 64Mb since there is no standard way to detect more than that. The 2.1/2.2 kernels may auto-detect large memory configurations on some systems, but I've heard that some of the methods for probing for this sort of RAM can lock up some other systems (something that prevents users from installing NT on some of those "unsupported" systems on the Microsoft "bad boy" list --- or so I've heard).

(?) Or is my problem more with my Adaptec host adapter? Why will it work just fine with only 14mb of RAM but fall to pieces with more? (I also have a SCSI CD-ROM drive and a second 3.2 gig SCSI Quantum hard drive, which I use for Windows 95 in this dual-boot system.)

(!) As I've said, I don't know. It could be any component of the system. However I've used Adaptec 2940's in 128Mb systems with Linux and FreeBSD. So that, by itself, is not likely to be the problem.

It's also almost inconceivable that the model of hard drive would affect this situation. Basically any SCSI hard drive will usually work on any supported SCSI controller without crashing the kernel. (A couple of SCSI devices can fight with one another and crash the SCSI bus --- particularly if you mix "differential" devices with others. However, you'd get kernel messages that would clearly indicate the subsystem involved --- and it would be independent of the memory layout, in every case I can think of.)

(?) I am reluctant to keep fiddling with the append parameters because every time the system crashes, it eats a hole or two in the filesystem. A Linux-knowledgeable friend suggested I might want to try upgrading to the development kernel (2.1.??) to see if that would cure the problem. He also suggested I write to Linus Torvalds, but I thought I should try you first.

(!) I would definitely not bother Linus Torvalds directly with a problem of this sort. If I could isolate it to a specific module or block of code I might --- but I'd probably just post it to the Linux Kernel mailing list in any event. Linus is quite active on that list --- and it would simply be rude to request his personal attention to something that any kernel developer might be able to handle.

(?) I feel sad sitting here with the world's greatest OS and all this RAM I can't use. Any guidance or direction you could give me would be greatly appreciated!

(!) Again, double and triple check the syntax of your mem= parameter. Then try different values --- 64Mb, 96Mb, etc to isolate the specific limit. Read the BootPrompt-HOWTO. Try taking out that Matrox Mystique and using just a plain VGA card for testing. Disable all "Plug-n-Pray" features on your motherboard.

The first steps in all troubleshooting are to precisely describe and isolate the problem. In the worst case you might have a bad motherboard or some bad memory chips.

(?) Thank you,
Dave

--------------
/proc/meminfo
--------------

        total:    used:    free:  shared: buffers:  cached:
Mem:  14004224 13795328   208896  8200192   180224  4214784
Swap: 41119744 17133568 23986176
MemTotal:     13676 kB
MemFree:        204 kB
MemShared:     8008 kB
Buffers:        176 kB
Cached:        4116 kB
SwapTotal:    40156 kB
SwapFree:     23424 kB


--------------------
/proc/cpuinfo
--------------------

processor       : 0
cpu             : 586
model           : 4
vendor_id       : GenuineIntel
stepping        : 3
fdiv_bug        : no
hlt_bug         : no
fpu             : yes
fpu_exception   : yes
cpuid           : yes
wp              : yes
flags           : fpu vme de pse tsc msr mce cx8 mmx
bogomips        : 348.16


---------
/proc/scsi
----------

Attached devices: 
Host: scsi0 Channel: 00 Id: 00 Lun: 00
  Vendor: QUANTUM  Model: FIREBALL_TM3200S Rev: 300X
  Type:   Direct-Access                    ANSI SCSI revision: 02
Host: scsi0 Channel: 00 Id: 01 Lun: 00
  Vendor: QUANTUM  Model: FIREBALL ST3.2S  Rev: 0F0C
  Type:   Direct-Access                    ANSI SCSI revision: 02
Host: scsi0 Channel: 00 Id: 06 Lun: 00
  Vendor: MATSHITA Model: CD-ROM CR-508    Rev: XS03
  Type:   CD-ROM                           ANSI SCSI revision: 02


---------------
/proc/pci
---------------

PCI devices found:
  Bus  0, device  19, function  0:
    VGA compatible controller: Matrox Mystique (rev 3).
      Medium devsel.  Fast back-to-back capable.  IRQ 10.  Master Capable.  Latency=32.
      Prefetchable 32 bit memory at 0xe0000000.
      Non-prefetchable 32 bit memory at 0xe1000000.
      Non-prefetchable 32 bit memory at 0xe0800000.
  Bus  0, device  18, function  0:
    SCSI storage controller: Adaptec AIC-7881U (rev 0).
      Medium devsel.  Fast back-to-back capable.  IRQ 11.  Master Capable.  Latency=32.  Min Gnt=8.  Max Lat=8.
      I/O at 0x6000.
      Non-prefetchable 32 bit memory at 0xe1004000.
  Bus  0, device   7, function  1:
    IDE interface: Intel 82371SB Natoma/Triton II PIIX3 (rev 0).
      Medium devsel.  Fast back-to-back capable.  Master Capable.  Latency=32.
      I/O at 0xf000.
  Bus  0, device   7, function  0:
    ISA bridge: Intel 82371SB Natoma/Triton II PIIX3 (rev 1).
      Medium devsel.  Fast back-to-back capable.  Master Capable.  No bursts.
  Bus  0, device   0, function  0:
    Host bridge: Intel 82437VX Triton II (rev 2).
      Medium devsel.  Master Capable.  Latency=32.


(?)WinModem and WinPrinters --- Just Say "No!"

From Richard Storey on 15 May 1998

I am in the process of gearing up to install Linux and so forth on a new HD. I read a few days ago that Win modems, which I have in the form of a US Robotics 56k X2 voice modem, bla, bla, leave off some of the on-board chips which normally carry on some functions of the modem. The article said that the modem drivers take over these functions and pass the load over to the CPU. That's not good, but my thoughts are that if this is true, and Windows is supposed to be running for these drivers, will this keep me from being able to run the modem while in Linux?

(!) I'm afraid you're stuck. Last I heard these companies won't release the specs. Don't buy any more of these modems and printers.

My fear is that a certain software company (no names but the initials are "Microsoft") will see the WinModems and WinPrinters as the amazing boon and raging success that they are (for MS OS') and encourage the development of more lobotomized, cut rate peripherals, components and whole systems.

Reading between the lines about Microsoft/Intel's PC98 "recommendations" (http://www.microsoft.com/hwdev/pc98.htm) I see plenty of opportunity for proprietary, non-disclosed software drivers to be required for system operation.

If I was a "mad scientist" and I wanted to control the PC marketplace and keep it safe from free software (open source or otherwise) I'd run the scam like this:

Not that any of this is happening.... or is it?

(?) Hoping I'm not stuck with buying another modem right now.
RS

(!) Call USR and give them a piece of your mind. Let them know that you might be willing to let your CPU take the load IF you had some choice in what OS your system was running.

However, there is nothing in the hardware market today that is quite so odious as having no choice about the software that drives your peripherals. This is especially true of modems and printers, which had been pretty reasonably standardized for almost 20 years (before the PC was even marketed by IBM we had the Hayes AT command set for modems, and printers that could take simple parallel text output).

(?) More on 'WinModems': How to "Lose" Gracefully

Score:
Winmodems: 1
Free Software Users: 0

From Richard Storey on 18 May 1998

[Sorry for sending the whole thing back to you but I figured you must get a ton of mail and don't always know whose is what!]

(!) Yes, I do. However, a small excerpt is usually sufficient to jog the ol' noggin.

(?) Thanks for the essay. I never expected such a thoughtful and extensive response. This choir says Amen! The Winmodem was in small print, but I still, at that time, didn't know that some of the functions were dumped off to the software.

The funny thing is that IBM wrote the drivers for it and they are not working properly. I updated the modem's firmware to V.90 and now the comm/port gets hung up, requiring a reboot to reinstall on a regular basis. I spent 6 hours on the phone today with IBM tech support doing brain surgery on my system to solve it, to no avail. My next strategy is to get them to take it back.

(!) Hmm. Typical. The firmware upgrade probably doesn't work with the software drivers.

Obviously you should do your best to get them to take it back. It isn't fulfilling your requirements. When enough of us as consumers can communicate our needs to enough vendors --- they will meet them, or other, new vendors will take over the niche.

(?) Anyway, my next step is to plan out my software uses, make a full effort to dump windoze entirely, and start a support group for former windows users (the abused) and those who would like to get out of abusive relationships with PC operating systems. ;-))

Cheers --RS

(!) Sounds like a veteran of some 12 step program.

Hmmm....
  1. Admitted we were powerless over proprietary OS'
  2. Came to believe that access to source code could restore our systems to usability and stability.
  3. Made a conscious decision to turn our systems into workstations. (*)
. . .

... we could work on that list --- at the risk of giving offense....

There are some excellent Linux Users Groups forming across the world. There are also at least three actively maintained lists of LUGS:
LUGR (LUG Registry)
http://www.linux.org/users/index.html
GLUE (Groups of Linux Users Everywhere)
http://www.linuxresources.com/glue/
LUG List
http://www.nllgg.nl/lugww/
There are also HOW-TO's on forming users groups (http://www.linuxresources.com/LDP/HOWTO/User-Group-HOWTO.html) and on "Advocacy" (http://www.linuxresources.com/LDP/HOWTO/mini/Advocacy.html). Finally there is a fairly new and relatively quiet mailing list for LUG organizers at: lug_support@ntlug.org (subscribe with an appropriate message to majordomo@ntlug.org and read the North Texas Linux Users Group web pages, http://www.ntlug.org/ for details).
(*) This is an inside reference to the various 12 step programs such as Alcoholics Anonymous. No, I'm not a member (and it would fly in the face of their traditions to announce it "at the level of press, TV, or radio"). However, I am close to a number of recovering addicts and alcoholics. You can find out more at:
Alcoholics Anonymous Web Site
http://www.alcoholics-anonymous.org/
... and at an unofficial and more heartwarming site:
http://webhome.idirect.com/~avroarow/P6.HTM


(?)Basic e-mail Setup for Linux?

From Greg on 15 May 1998

Hi again Answer Guy!

I E-Mailed you about a week ago regarding getting a network card to work under Redhat 5. I found that the problem was the motherboard I was using (Intel al440lx) which, after a bit of examination, I found didn't work under NT with a network card either. Now with just a standard 166 I can reach my NT network fine and they can reach me too.

(!) Glad you tracked it down.

(?) My question for you this time is how I would set up email to be routed between my Linux box and an NT network? I have read through a few of your tutorials (mini-procmail and other letters) but I am still left a little confused.

I attempted to send mail between the linux box and an NT box and while the mail sent eventually, it was very slow (took 2 mins). I am sure this is a fairly simple thing to fix so your help would be much appreciated.

(!) Two minutes is not slow for e-mail. Also 'procmail' is a mail processing language --- it's geared for handling mail as it's delivered to the local machine. To get your mail to other machines you need an MTA, such as 'sendmail' (the most widely used: http://www.sendmail.org/ or http://www.sendmail.com/), 'qmail' (http://www.qmail.org/), 'vmail' (http://wzv.win.tue.nl/vmail/), 'exim' (http://www.exim.org/), or 'smail' (http://www.sbay.org/smail-faq.html).

E-mail is a complicated subject. However, there are a few LDP HOWTO documents to look at:
The first and most obvious to read would be:
http://sunsite.unc.edu/LDP/HOWTO/Mail-HOWTO.html
You can browse around the other titles as you like.

It should be noted that there is nothing unusual about handling e-mail under Linux. The MTA's (transport agents, like 'sendmail'), MDA's (delivery agents, like 'procmail') and MUA's (user agents like 'pine', 'elm', and 'MH') are all used on a variety of Unix implementations.

(?) I have procmail on my system but I don't seem to have the configuration files for it. Do I have to make my own, or have I screwed up somewhere? I am sure you have covered this on other sections of your site so if you can, links would be great just to point me in the right direction.

(!) If your system is configured to use 'procmail' as the local delivery agent then you can simply create a ~/.procmailrc script (that is a file named .procmailrc in your home directory) to use it. If not, you can create a .forward file (as described in my introduction to procmail article) or you can reconfigure your MTA (probably sendmail) to change to it.

The easiest way to tell is to simply create a .procmailrc file, send yourself a slice of mail and wait until it's delivered. It should be pretty obvious if the script works (you refile the message to a folder, or you "split off" a backup copy of it with one of the recipes I described).

However, you shouldn't have to create a 'procmail' script for simple delivery into your "inbox" --- it's used to create auto-response scripts, and to do automated filtering, forwarding and filing.
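For the record, a bare-bones ~/.procmailrc might look like this sketch (the folder and list address are made up --- the procmailrc and procmailex man pages have the real details):

PATH=/usr/local/bin:/usr/bin:/bin
MAILDIR=$HOME/Mail              # this directory must already exist
LOGFILE=$MAILDIR/procmail.log

# file anything addressed to a (hypothetical) mailing list into its own folder:
:0:
* ^TOsomelist@example.com
somelist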

I'd like to say a lot more about e-mail as it is one of the most important aspects of the Internet. However, you may just have to wait for my book --- since that's how much there is to say on the subject. In any event it sounds like your e-mail is working, so you'll definitely want to be more specific if you have other questions.

The best place to post questions on this topic is under the comp.mail.* netnews hierarchy. Naturally you should look for the associated and appropriate FAQ's and just "lurk" on these lists for a little bit, to get some idea of what they are all talking about.
The international repository of FAQ's on the web is at:
http://www.cis.ohio-state.edu/hypertext/faq/usenet/top.html
... and for e-mail you'd want to look in:
http://www.cis.ohio-state.edu/hypertext/faq/bngusenet/comp/mail/misc/top.html
Ohio State also maintains a wonderful repository of RFC's
http://www.cis.ohio-state.edu/hypertext/information/rfc.html
Via FTP the magic FAQ site is still:
ftp://rtfm.mit.edu/pub/
... and for questions about e-mail you'd want to look in:
ftp://rtfm.mit.edu/pub/faqs/mail/
As I say --- there's nothing that's Linux specific about any of this.

There's also an FAQ wannabe site (http://www.faq.org/) that has a miserable excuse for a search engine and seems to point a lot of queries off to the CIS department at Ohio. (Tsk, tsk, they could at least mirror the docs if they're going to take that domain name!) [Actually, they point at infoseek, and it points off to Ohio. About the same, really.]

(?) Thanks a lot
Greg!


(?)Remote Tape Backups

From Bryan Andregg on 10 May 1998

>> On Thu, 19 Jun 1997 13:26:49 -0500, Gary Vinson wrote:
Hello,

We are running Redhat 4.1 (kernel of 2.0.37). We recently added a tape drive where we would like to do remote backups. Everything works fine as long as we are not root, ie, the remote backup using "tar -cvf remotehost:/dev/st0" works fine for non-root. But for root, we get a "Permission denied" error. I understand that hosts.equiv does not control remote access for root but tried adding entry for /root/.rhosts on the "remotehost", where the tape drive resides, without success. How are others handling this problem?
You need to allow root to login to tty's not listed in /etc/securetty. The proper way to fix this is to edit your /etc/pam.conf or /etc/pam.d/login (whichever is appropriate) and remove the line referring to securetty.

(!) I'm sorry to bring up such an old thread but this answer is still pretty bad and there is a much better solution (or several).

First you can use the following syntax:
tar -cvf operator@remotehost:/dev/st0 ...
... by creating an "operator" pseudo-user with appropriate permissions to the tape device on the remote host, and creating an ~operator/.rhosts with entries like:
myhost.mydomain.org root
For each of the hosts to be allowed to perform backups.

This doesn't expose the remotehost (tape server) to nearly as much risk as the approaches suggested in these responses (although the normal concerns about user r* access and host spoofing apply).

To alleviate that concern we can use the --rsh-command= switch to tar to force it to run over ssh like so:
tar -cvf operator@remotehost:/dev/st0 --rsh-command=/usr/local/bin/ssh ...
'cpio' and 'dump' also support this "user@remote.tapeserver...:/path" syntax --- though I don't see any option regarding the --rsh-command over-ride on those.

In most cases (at least with 'dump') you'll need to ensure that there is a copy of the 'rmt' command on the "operator's" $PATH (on tape server, of course).
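(For instance, a remote 'dump' invocation using that same pseudo-user might look like this sketch --- the dump level, device, and filesystem are examples:)

/sbin/dump 0uf operator@remotehost:/dev/nst0 /home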

(!)Bryan Clarifies...

From Bryan C. Andregg on 11 May 1998

Their question was not how to backup, but how to allow root to do backups.

Bryan C. Andregg

(!) My answer was about how to allow root (on the client) to do backups on a remote machine without exposing that remote machine (the tapehost) to the risks of allowing rsh root access. The point is that you don't have to change securetty or allow remote root access to provide tape service to your clients.

Getting off of the security issues, there is another important note that's worth pointing out. If you just run these commands as I've described them, you'll probably find that the tape drive doesn't "stream" very well. That is to say that the flow of data to the drive will probably be "bursty" enough that you'll see the drive stopping, rewinding, and restarting frequently (several times per minute).

If your tape drive isn't streaming, your backups can take ten times as long as they should --- and it will put even more wear and tear on the drive than that. So you want to avoid this non-streaming wherever possible.

The solution to this problem is to use Lee McLoughlin's buffer program. I'm pretty sure that this is the same Lee McLoughlin that wrote the popular FTP mirror Perl script.

This program reads input from one data stream (often a network socket), reblocks it, and writes it (usually to stdout, which would usually be redirected to the tape drive). You'd also use this if you're going to compress or encrypt the data as you write it to the tape drive (e.g. using gzip as a filter, or using the 'z' flag on GNU tar).

Here's an example of one of the earlier commands using 'buffer':
tar -czvf - ... | rsh -l operator otherhost "buffer > /dev/st0"


I didn't go into that detail in my other message since it related to the mechanics of the backup process rather than the specific security issues at hand. However, anyone else reading this message might put their tape drive and tapes through unnecessary stress and get unsatisfactory performance and results by trying to follow these examples without using a copy of 'buffer'.

I've noticed that the S.u.S.E. distribution ships with a copy of 'buffer' and Debian has a package for it (which will presumably be included in their "Official CD Sets" as the 2.0 distribution is finalized and "shipped"). I'd like to see this included with Red Hat, and I'd like to see GNU tar use it, if available, by default when it is called with the 'z' (gzip/compression) flag or with a remote file specification. Likewise for the appropriate options in the GNU cpio and dump packages.

This should not be a bit of hacker lore that must be passed down from one sysadmin to another. It should be documented in the man and info pages for all of the programs that conventionally write to tape drives, particularly if they support syntax to directly do so over network sockets, and through compression and/or encryption packages.

One final note about network and remote tape backups:

Most sysadmins seem to spend entirely too much of their time reinventing the backup wheel. I haven't looked closely at the slick commercial packages like BRU; it and Taper seem to be mostly user interfaces over the same underlying mechanisms.

Recently, however, I've been playing with Amanda, the Advanced Maryland Automatic Network Disk Archiver from the University of Maryland. This seems to be well suited to enterprise data backups. It has no UI to speak of --- you add and configure the appropriate pseudo accounts and groups (providing network access over rsh or ssh), install the client and server components on your machines, add entries to your /etc/services and /etc/inetd.conf, and add some cron jobs. It manages its own library of tapes and its own "holding space" on one of the server's filesystems.

Basically you just feed it tapes. One cron entry does an amcheck and mails the operator(s) if the wrong tape is in the drive. That's normally done during the day, when you expect an operator to be around to fix the problem. Another entry writes the backups from the holding disk(s) to tape, which would normally be done in the middle of the night. Amanda supports a variety of tape changers (and has an extensible design, so any tape changer mechanism with a decent command-line control program can be used with it).

Many of the users on the Amanda mailing list (see their web site) are using it to maintain archives of hundreds of filesystems --- some of them measure their Amanda capacities in Terabytes!

The biggest problem with Amanda at this point is the lack of documentation for new users. It has plenty of features (the underlying backup processes use standard dump or GNU tar commands, so the system is very portable, and some even use it to back up their NT systems).

Another problem is that Amanda is a complex system. I'd suggest that an initial backup of the tape server be created using some traditional Unix/Linux command like cpio or tar, and that the resulting tape be write-protected and permanently stored. (A removable medium, such as a CD-RW, CD-R, LS-120, or whatever, would also work.) The point is that this should have the Amanda installation on it, so you can bootstrap from a tape server failure to do a full recovery.
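Such a bootstrap backup might be as simple as this sketch (the Amanda paths are assumptions --- they depend on the configure options used to build it):

tar -cvf /dev/st0 /etc /usr/local/etc/amanda /usr/local/sbin /usr/local/libexec
mt -f /dev/st0 offline        # eject; then flip the write-protect tab and store it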

Amanda deserves much more coverage than this; and perhaps, when I understand it well enough, I'll write an LG article on it. I think that every professional Unix and Linux sysadmin should take a look at it.


(?)adduser

From Tethys on 10 May 1998

>> I've switched from Slackware to Redhat Linux. The former has an "adduser" command that's interactive (sort of). With the latter, I have to manually create the subdirectory (e.g. /home/<group>/<username>) for each user and fix the entry in /etc/passwd. Does anybody know a good utility for this?
Yep. It's called adduser :-)

It's present in all versions of RedHat I've used. It's not very configurable (although you can change defaults by editing the script itself), but it gets the job done.

It's in /usr/sbin, which may not be in your default path, although for root it really should be.

Tet

(!) The shadow suite comes with a much more powerful set of commands including: useradd, usermod, userdel, groupadd, groupmod, and groupdel. These take switches to specify the full name, home directory, shell, primary group, a list of other groups, and other information (you can even specify which UID should be used as the "base" or force it to "overlay" the new account's UID with an existing one --- if you absolutely must have multiple accounts share the same ID).
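For example, one 'useradd' invocation can do everything that our Slackware-to-Red-Hat migrant above was doing by hand (the names here are purely hypothetical):

useradd -m -c "Jane Doe" -d /home/staff/jdoe -g staff -G uucp,operator -s /bin/bash jdoe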

It appears (from my experience with Sun/Solaris systems) to be completely compatible with the equivalently named commands on those systems --- so creating scripts and even CGI forms to process new accounts en masse is pretty easy.
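Such a script might be as simple as a shell loop --- a sketch, assuming a file with one login name and full name per line:

while read login name; do
        useradd -m -c "$name" "$login"
done < newaccounts.txt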

It does seem to require that you use "shadow" passwords --- but basically any system should do that in any event (and it should be the default for all distributions --- blast it!).

(Unfortunately that still isn't the case. Grrr!)


The original Open Letter to Dell was posted to comp.os.linux.advocacy and copied to Linux Weekly News and Dell Computer Corporation.


(?)Letter to Dell - Linux on Dell Hardware

From Rafael on 10 May 1998

Thanks, Dennis, for your letter to DELL.

I bought some Dell stock this year. Not much, but that puts me in an awkward situation. Dell is doing well but it really bothered me that they officially issued that statement about Linux.

(!) As a shareholder, even of only a few shares, your message will probably get far more attention than mine.

I would love to see your letter to them, expressing your concerns both as a customer and as a stockholder.

(?) I wonder if we could find other investors, Linux users, admins perhaps and put a little pressure from that position. That would be even more effective if we do it openly on the web.

(!) I agree. You want to be even more "positive" in your tone since you actually have a financial stake in their future.

You want to emphasize how big a market you believe the Linux community to be --- point out that the first major company to offer Linux will probably remain at the top of that market for a long time, etc.

(?) I know that Dell signed a letter with other CEO's in support of MS. What a bummer.

(!) I personally can understand their official statements of support. From an official standpoint they claim that MS places no contractual restrictions on their choice of software bundles.

We can presume that there are veiled, subtle "issues" which suggest that any support for alternatives might result in unusual delays and backlogs in Dell's order fulfillment, and possibly delays in the negotiation of new contracts and terms for future versions of MS products. It doesn't take much of this from a key supplier or customer (and MS probably does buy a large number of Dell workstations) to have a chilling effect.

Unfortunately none of those assertions are likely to be revealed in court --- and there's simply too much "plausible deniability" for them to have any effect in any event.

I'm not sure I can characterize it as a "bummer" --- since it is so utterly predictable.

What we want to do is to recast Linux as an "opportunity" for a "win-win" situation for Dell and MS. I think we can do this by pointing out that Dell offering "fine print" alternatives (No OS included, and Linux) will give the appearance of greater competition in the marketplace.

My plan is to outline this strategy to MS execs (I have a mole). Convince them that purely cosmetic notes in the marketing materials from Compaq will get the DoJ off their back and give them the ammo, in the arena of public opinion, to say: "Look! People have choices, and they still pick us almost all the time. The free market is working."

(This is bound to be more effective for them than that pathetic attempt to fabricate a "ground swell of grassroots support" --- as was reported by the L.A. Times recently).

(?)Dell & Linux

From Khimenko Victor on 10 May 1998

You must know this already, but just in case: on the linux-kernel list there are quite a few questions like this:

(!) I do read linux-kernel --- but I queue up the digests for a week or so at a time and binge on them --- so I hadn't noticed these yet.

(?)

~~~
I installed Linux kernel 2.1.89 on a Dell with 2 Pentium Pros. But now "ps" fails?! It says "No processes available". During boot, I get error messages about various daemons [sendmail, syslog...] trying to start, and this message gets printed for each daemon. I think the daemons are actually running, because [eg] if I try to start syslog, it tells me that an instance is running.

I looked in /proc, and things seem ok. Eg "cat /proc/cpuinfo" works, and shows both cpus.

Before this, I had already successfully installed 2.1.89 on an identically configured machine. ps worked there. And I'd also installed 2.1.89 on 3 Dells, containing 4 cpus each.

Any suggestions?
~~~

or this

~~~
I tried your io_apic.c fix on an Intel Alder 4x PPro system (same motherboard, etc. as the DELL PE-6100) and it had no effect: the boot dies after the first line, about "Uncompressing... OK, booting..."

I posted the log of a boot of 2.1.88 on the Alder earlier. Is there anything else I can tell you? By the way, 2.1.88 + the aic7xxx 5.0.8 patch is VERY stable on these systems, and nothing else is right now. Also, I compiled 2.1.96 for a DELL 4200 (2x Pentium II) and it runs fine.
~~~

or this

~~~

Hi,

there's been some talk recently about patching 2.1.9x to support dual PCI busses. Does this mean that the 2.0.x series doesn't support it? If so, I might be in trouble... I've just ordered a dual PII-333 Dell server which has dual PCI 2.1 busses on it. This machine is to become the core mail, news, and dns server for my network, and so I want to run a relatively stable server (it's replacing an aging Sun Sparc 5 which hasn't crashed for over a year).

Am I going to have to risk the development series on a core server? Or will I be lucky....!

In the worst case, could I install a 2.0.x series and run it with a single bus (cards on the second not recognised) ??
Dual PCI busses should run fine with 2.0.x kernels as they locate PCI devices using the PCI BIOS which of course should handle the dual bus case. Since 2.1.9x we try to access the hardware directly in order to circumvent PCI BIOS bugs. Anyway, in 2.0.x /proc/pci will show only the first bus.
~~~



At least this means that some of Dell's customers are using Linux :-)) And not just for fun (if you only want to play some games with Linux, you're not going to buy a dual Pentium II or 4x Pentium Pro server, right?). They just don't try to bother Dell with their problems, since they are sure that Linux is not supported by Dell anyway...

(!) That is the point of my message. We are somewhat self-sufficient and that is great for our userbase and developers. However, as consumers we must communicate our requirements back to the vendors --- and we must do so proactively.

In other words, every Unix, Linux, FreeBSD, NetBSD, OpenBSD, Solaris x86, and SCO user must tell Dell, and Compaq (and Apple and UMAX), and every other vendor that refuses to recognize our market:

We demand recognition and support

... and we must back that up with action by shopping with vendors that meet this requirement.

If we fail to do so, and we scramble about to reverse engineer every new wrinkle then we are failing as consumers (no matter how we shine as engineers). "They" won't (and shouldn't?) care about the "silent minority."

The risk and cost of this is that we may not always benefit from the same economy of scale that's enjoyed by the mass market. We may have to pay a bit more (though not as big a premium as we used to see between PC's and Macs, nor nearly the discrepancy that still exists between micros and workstations).


(?) Connecting a Dumb Terminal to your Linux System

From Mark Cohen on 07 May 1998

Jim,

My name is Mark; I met you at the BALUG meeting this week. I just wanted to shoot off a note to you about getting getty to work on my Linux box (RH5.0). I'm trying to connect my dumb terminal (a PalmPilot) to it.

Any help would be greatly appreciated!

-Mark

(!) The simplest method that I know of is to add a line like this to your /etc/inittab file:
t1:23:respawn:/sbin/agetty -L 38400,19200,9600,2400,1200 ttyS1 vt100
... assuming you have 'agetty', that you want to use a null modem on COM2 (use ttyS0 for COM1, etc), and that your communications package on the PalmPilot will do vt100 emulation.

If you don't have a copy of 'agetty' you can use a line like:
t1:2345:respawn:/sbin/getty ttyS2 DT38400 vt100
... assuming you have a reasonable /etc/gettydefs (like the default one that used to come with Red Hat 4.x --- and is probably unchanged in newer releases). I won't go into the details of how the gettydefs file is constructed; suffice it to say that the syntax is "baroque".
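For reference, an entry in that file looks something like this (a sketch of the format: a label, the initial tty flags, the final flags, the login prompt, and the next label to fall back to):

DT38400# B38400 CS8 CLOCAL # B38400 SANE -ISTRIP CLOCAL #@S login: #DT38400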

It's undoubtedly possible to use uugetty and getty_ps for this as well --- though I haven't ever bothered with those packages. It should also be possible to use mgetty (which I use for modem dial-in lines and incoming fax support); however, that doesn't seem to work for me, even when I use the -r switch as specified in the manual.

Definitely don't try this with mingetty --- that is designed purely for use with virtual consoles.

In any event, these examples use "t1" as the inittab entry "id" and I have them enabled at different runlevels (since these examples are from different machines on my network). Read the inittab(5) man page for details about what the fields mean.

After you've edited this file simply issue the command:

'telinit q'

... to "tell init" to re-read it's configuration file and implement your changes. In a few seconds you should be able to login on that line (you might have to hit [Enter] a couple of times to get a login prompt).

If you don't get a login prompt, or you see a console message like "respawning too fast.... disabled for five minutes" (check your /var/log/messages file for this and similar errors from init and/or from whichever 'getty' you're using), you should double-check the syntax of your entry, double-check which serial port you're plugged into (remember that Linux' numbering of serial and printer ports doesn't always correspond to the DOS/BIOS conventions --- some crufty hardware may cause confusion), and check for IRQ conflicts and cabling errors.

If you still have problems after you've double- and triple-checked every detail, then you have some troubleshooting choices: you can play with a wide variety of 'setserial' and 'stty' commands to try to get the serial port to respond and/or behave properly. Before you spend too long with those, however, I have to say that the cases where I've resorted to them in my troubleshooting have consistently been fixed by untangling an IRQ conflict or replacing a bad serial port (usually a whole multi-function controller, actually).
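For example (hypothetical settings --- substitute the port, I/O address, and IRQ your card actually uses):

setserial /dev/ttyS1 port 0x2f8 irq 3 uart 16550A
stty 38400 sane < /dev/ttyS1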

I personally have given up on the cheap $15 IDE/floppy/serial cards and I pay a bit extra for the QuickPath "FlexPort" cards (which usually come in at close to $100 US). Serial ports are hard enough to deal with without having flaky hardware underneath them. (Luckily most of the modern motherboards that have built-in serial ports have stopped using really cheap ones --- but it used to be that they were often junk and sometimes could not be disabled --- even if they had jumpers that purported to do so).

Anyway, good luck.

Personally I usually configure a "dumb terminal" port for all of my Linux boxes (eventually). This serves two important purposes: It is the most convenient way for me to get files to and from my laptop (for which I don't have a supported ethernet card). More importantly it gives me an extra troubleshooting option if my system "seems" hung. I can just plug in the old null modem and give it a go.

It can mean the difference between a clean shutdown and a game of "red-switch" roulette.

(It hasn't happened on any of my systems in so long that I've almost given up on seeing it at all ;) --- but it used to be possible for the Linux console driver to be completely unresponsive, and even for the network subsystem to be dead, while the serial lines were still accessible. However, if you don't configure the terminal line in advance you don't have the option when you want it.)


(?)Why Linux?

From Grey on 07 May 1998

Answer Guy,

I am trying to understand why there is so much interest in Linux. What does it offer, in this world of Macs and Win95 PCs, that makes it ....attractive and useful?

(!) My first stunned impression on reading this question was:
Is this a shill? How did this guy manage to find my column in LG without knowing a variety of answers to this question?

But that was a quick uncharitable moment.

(?) Are there any good 'What is Linux?' type articles I can look at? I am always tempted to purchase one of the Linux packages to try to determine what it is, but I would not mind knowing beforehand.

Thanks,
RL

(!) The fact that you are "tempted" (curious) is why you should play with Linux. It's your computer, and you should be able to "play" with it --- and you should have choices about how it operates. Your computer should work in a way that suits your preferences and style --- you shouldn't have to adopt the style that's dictated by the trade press, the mass media, Bill Gates, Steve Jobs, Xerox PARC, or anyone else.

(I'm presuming you are a home user in this case --- but my argument applies equally to whole institutions --- they should have the choice to use and run software that suits their needs and preferences --- even such preferences as they dictate to their employees or userbase).

So, what is great about Linux? Choice.

You asked for some URL's to read testimonials about this: here's one that I'm reading right now:
John Kirch's "NT 4.0 vs. Unix"
http://www.kirch.net/unix-nt.html

(This isn't Linux specific --- but it does go into great detail and mentions Linux frequently in its analysis).

I found the link to that site from one of the LDP (Linux Documentation Project) mirrors. These LDP mirrors are the definitive place to get info about Linux. The "master" site is at:
Sunsite (U. of North Carolina):
http://sunsite.unc.edu/LDP/

... which is also the master repository for Linux software (just as ftp://prep.ai.mit.edu/ is the master repository for FSF GNUware). The LDP mirror I usually visit is at:
SSC's Linux Resources Pages:
http://www.linuxresources.com/LDP/

A couple of other great sources of Linux information are:
Linux Weekly News:
http://www.lwn.net/
Slashdot (Daily) News for Nerds:
http://www.slashdot.org/

... and, of course Linux Gazette (http://www.linuxgazette.com/) which was the first "webazine" to cover the topic and is still 100% volunteer.

Now, before I babble a bit about some of the other advantages to Linux let me digress to make two observations:
I can talk about features of Linux, Win '95, NT, MacOS, BeOS and many other operating systems and packages until my fingers are worn to nubs --- and you'd justifiably have no reason to care what I've said.

In order to discuss the possible benefits of Linux to you I'd have to know more about you --- your requirements, preferences, and constraints. I'd have to engage in a process of requirements analysis --- and the first step of that is to identify the involved parties (particularly the customer).

Modern mass marketing and advertising does not meet this need. It focuses on features rather than benefits because features can be touted with no understanding of a specific user's needs. For any given feature it may be of benefit to a given user, or it may be irrelevant or even detrimental to them.
That said, the other observation is that Linux is not quite yet appropriate for just anyone. To paraphrase a popular signature from USENET: "Linux is 'user-friendly'; it's just particular about who its friends are"

At the moment Linux is not the system I would provide to my mother for her first computer. She was interested in two things --- playing Mah Jongg and surfing the 'net. I got her a Mac Performa.

By the end of this year I might have a different view --- the KDE, Gnome, and GNUStep projects, among others, along with incremental improvements in the packaging and system management tools of distributions like Debian, Red Hat, and S.u.S.E. (among many others), may get us (the Linux community) to the point where shipping Linux systems to complete novices will make good business sense.
(Note: a number one priority advance that would help with this would be a multi-media "Welcome to Linux" interactive video system --- that would be run off a CD or (if they're supported by then) DVD disc).
I think it is already to the point that "normal" users can productively use Linux. Customers can go to VAResearch, Telenet, Apache Digital, PromoX, SWT, and other hardware vendors to get a system with Linux pre-installed. They can use these systems as easily as they could a similarly configured Win '95 box (and somewhat more easily than using an NT system).

We are now past due for Dell, Gateway, HP, Compaq, Zeos, IBM or some other upper tier hardware vendor to offer Linux pre-installed on their "BTO" (Build to Order) price lists. Soon I also hope to see Apple and UMAX offer mkLinux and LinuxPPC options on their PowerMac clones. I think this will happen before the end of this year (for at least one of them).

I hope that either this will happen, or one of the Linux hardware vendors will move into the same volume of sales and production currently enjoyed by one of these. Every reader of Linux Gazette, Linux Weekly News, Slashdot, Linux Journal, and all of the comp.os.linux.* and linux-* newsgroups and mailing lists can help make that happen by calling their vendor and just saying: "NO! I will not pay for a copy of Win '95 or NT that I plan to immediately and permanently replace with Linux!" (and taking their business elsewhere).

Now, "Grey", back to your question.

What is "great" about Linux?

The first thing I like about Linux is that I don't have to use any GUI. I don't like graphical screens. I often spend twelve hours at a stretch in front of my monitor, and my eyes just can't take any GUI for that long. By supporting a full range of applications from its multiple text consoles, Linux allows me to focus on one task at a time, giving it my full screen. At the same time I can be logged into a dozen or more sessions, as one or more users, to have all the benefits of multi-tasking. In addition I have options to use keyboard- or mouse-driven "cut and paste" between my applications.

I can use this same suite of applications on my old 386, on my Pentium 166, and on my Pentium II. I can use any application on any system on my network regardless of which machine is sitting in front of me (using telnet, ssh, or rlogin for text mode apps, and the communications protocols that are native to the X Window System when I need a GUI).

I can sit in a coffee house a few miles away, dial into one of my machines (the 386 is the one with the modem) and use everything from my Ricochet equipped laptop that I could use if I was sitting at home in front of the machine myself.

That same modem (the one on the 386) is used to get all my mail and netnews (uucp) and was used as the dial-on-demand PPP link for my entire LAN for months (before I got the ISDN router that I currently use). When the ISDN goes out, I can switch back to using the 386 gateway in a couple of minutes.

That same modem is also used for dial out BBS and shell mode access by any system on my LAN (given that the user has the appropriate level of access).

That same modem is also used as the outgoing and incoming fax gateway.

So, I can use one modem for dial-in and dial-out shell access, networking, and fax for an entire network of systems --- and none of these functions "trips" over or conflicts with any of the others.

Meanwhile one of my house guests might be using that same 386 to read mail or news, from a serial terminal line I keep in the living room, and my wife might be at the console (as she is now).

That 386, Antares, is over ten years old now. It has 32Mb of RAM, a 2Mb video card (yes, it can run X --- though it is a bit slow --- almost as slow as MS Windows used to be on it) about 6 Gig of disk space, a tape drive, a magneto optical drive and a few other toys. It ran Linux just fine with 16Mb of RAM and a 200Mb IDE disk drive (and still would, though I'd never fit my personal mail archives on that tiny drive).

(Incidentally, the Caviar 200Mb drive in question is sitting in Canopus --- where it's not even in use. I have some purely archival files on it).

While MS Windows users were essentially forced to upgrade their systems to 486's and Pentiums in order to keep their OS and major, critical software upgraded, I've been able to continue using my old system.

It wasn't until the middle of last year that I finally moved my home directory over to one of the Pentiums (Gnus, a newsreader for Emacs, just got to be too slow when I wanted to read a few thousand messages in a mailing list archive --- it would take two hours threading through them in the background before I could read them --- that same process takes about 2 minutes on Canopus, the P166).

So, one advantage of Linux is its support for older equipment and unfashionable modes of use. Text mode is still widely used --- but every time I hear an "old-timer" say so it's amazing the looks it generates among "hip, savvy, modern users."

A byproduct of this support is that Linux is very friendly to blind and other physically challenged users. A friend of mine was hit with a stroke a couple of years ago. He has yet to regain significant use of one of his arms. Linux and MacOS are the easiest environments for him to use. It is trivial to enable "sticky" shift (Ctrl, Alt, and Shift) keys --- so that the user never has to co-ordinate the operation of two keys simultaneously (an action which the vast majority of us take completely for granted).

Once you reconfigure your keyboard under Linux, all of the console applications use the new bindings. I've never seen a conflict. You can also configure similar features in X Windows (XFree86). Thus you can, with changes to the configuration of two subsystems, make every application on the system behave in a way that's compatible with a user's needs.

(It is also simple to associate these changes with a particular user -- so that other users of that system will not normally be affected by them).

I could go on and on. However, it would make sense for you to look at some of the other sites on the web that talk about Linux. Obviously you'll be completely overwhelmed if you do a Yahoo! search on just "Linux" (they are up to 13 categories and almost 600 sites --- compared to 17/1900 for "Unix" and about 19/660 for "Microsoft Windows" and 2/16 for MacOS)
My point is that there are too many of these to explore in a reasonable amount of time (I suppose you could surf the Yahoo!-listed Linux sites in about 10 hours if you averaged only one minute per page --- and didn't follow any of them to anywhere else).

Obviously the Linux Gazette is one place to find out more, and the Linux Weekly News (http://www.lwn.net/) (formerly at http://www.eklektix.com/lwn/) is pretty good too (and comes out four times as often). If you start at SSC's Linux Resources Page (http://www.linuxresources.com/) and follow all the links there you should get your fill of unabashed Linux advocacy.


(?)Redhat telnet

From Sam Erkulwater on 07 May 1998

I have a question that nobody seems to be able to answer for me. How do you set up a Redhat 5.0 machine so that it allows users to telnet to it? The machine cannot even telnet to itself, but it can telnet to other machines. Other users can ping the Redhat machine without problems. The machine will give the user a login prompt, but when the correct password is entered, the machine replies "login incorrect" even though I know for certain that the username and password are correct. This is the same for any user who tries to telnet to the Redhat 5.0 machine. Any suggestions?

Sam

(!) It sounds like you have a misconfigured PAM. Have you downloaded and applied the PAM and glibc fixes from Red Hat's site?
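A quick way to check what you have installed and to apply the errata --- a sketch (the package names here are from memory, and the exact update file names will vary):

rpm -q pam pamconfig glibc
rpm -Uvh pam-*.rpm pamconfig-*.rpm glibc-*.rpm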


(?)Network Cards

From Greg on 04 May 1998

Hey there,

I recently purchased a copy of Redhat 5 and installed it on a system. My intention for this system was for it to act as a proxy server and a mail server. I used a spare machine I had and installed it. All of the devices were detected, but now I can't seem to get my network card to act. I checked the network settings in netcfg in X Windows and it said that the card was active. The card isn't responding (according to the lights) though. I have now tried two cards (a 3Com 10/100Mb and a Kingston 10Mb) and both have returned the same error on pinging an address ("Network is unreachable").

(!) Here's a synopsis of configuring a system for ethernet networking under Linux (a concrete example follows this synopsis):
load modules (if necessary)
They may be built directly into your kernel.
ifconfig $IF $IPADDR $NETMASK $BCAST
If you get an error like:
SIOCSIFADDR: No such device
go back and load the correct driver or compile it into your kernel. If you still get that error it's probably an unsupported card --- or you're trying to use the wrong driver.
route add -net $NET $IF
Add an entry to the kernel's routing table to associate your LAN with that interface.
route add default gw $ROUTER (if necessary)
Add a route to point to your LAN's router to the "rest" of the world. Note, in some cases you might not define a default route. For example if your box is the router between your LAN and an ISP that you dial up with PPP or diald. (In that case the default route would be set by your pppd and diald packages whenever the link to your ISP became active).
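Putting the synopsis together with concrete (and purely hypothetical) values for a small class C network, with a 3Com card:

insmod 3c59x                 # only if the driver isn't compiled into the kernel
ifconfig eth0 192.168.1.2 netmask 255.255.255.0 broadcast 192.168.1.255
route add -net 192.168.1.0 netmask 255.255.255.0 eth0
route add default gw 192.168.1.1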

(?) Is there something else I have to initialise to get my cards to work or do you think it could be a conflict of sorts?

(!) When trying to troubleshoot networking problems the commands you want to beat on are:
  • ifconfig -a
list all interfaces
  • route -n
list all defined routes
  • ping
try to reach various hosts
  • tcpdump -i $IF -vvv
dump all activity occurring on an interface
  • traceroute
watch how packets "try" to reach their destinations


In this case you want to post the output of your 'ifconfig -a' and 'route -n' commands as well as the IP address and network/mask that you're attempting to use (for each interface).

Note: since you mention that you're trying to configure this system as a proxy server, it's important that you get each of its interfaces working properly before attempting to use any routing, masquerading or proxying through it.

What proxy package(s) are you trying to use?

(?) Thanks
Greg!

(!) You're welcome.


The original message referred to was sent by Steve Wornham.


(?)"Good Times" are Spread to the "Great Unwashed"

From Tim Gray on 03 May 1998

Thank you for your section on hoaxes. I printed it out and posted it on the bulletin board at work to educate the masses there... but there is an interesting twist there.....

(!) That's exactly what I'd hoped for.

(?) My company (the one I work for, not my own) uses Microsoft Outlook for email, which wants to default to using Microsoft Word for email reading! Another perfect example of a big company not thinking! Read the email --- get a Word macro virus! It is almost comical (from a developer's point of view) but a nightmare for commercial users.

(!) I thought I covered that. There is supposed to be some way to disable that "autoexec" macro in Word. Does that not work in the way that Outlook calls Word (using OLE methods)?

Meanwhile it's just another way to keep up the sales of McAfee (now Network Associates, Inc) Anti-Virus and WebShield products. [disclaimer: this is handy for me since I still own some stock in them --- and handy for all my readers since my sales of that stock have allowed me to be very lackadaisical about charging for my services]. BTW: Linux versions of some NAI AV products are available (for environments where you want to use Linux server packages to help protect your Windows and MS/PC-DOS clients).

There are many other AV products available to help prevent the propagation of PC viruses --- including the propagation of Word, Excel, and Access macro viruses. Ultimately the issue may become so prevalent that it pushes more users to the relative security of Linux and other Unix based client solutions.

KDE, Gnome, GNUStep, and LyX (among the free products) and Applixware, Star Office, Corel WordPerfect for Linux, etc. should make that a much easier pill to swallow (for typical PC users).

(?) Hmm, can we patch pine to su itself and run binary attachments so that Linux can be as "good" as Microsoft products? <-- very bad joke I admit... :-)

(!) As I've pointed out before --- Unix has had these sorts of problems too (with the autostart macros for 'vi' and 'emacs' that were supported in the distant past).

(?) Just a little bit of info, you probably already know it ..

(!) Most of it. It never ceases to amaze me how many organizations are willing to force their employees to use Outlook and Exchange when Eudora and Pegasus are freely available and practically any kind of server can support a decent POP service.

These simple methods (SMTP/POP on the servers and let users use the client of their choice) have a proven scalability and robustness that is unmatched. (As for choices of features --- that's a matter for client software).

(?) Thanks for a great column in the Gazette!

(!) I'm glad you like it.

(?)Timothy Replied...

From Timothy D. Gray on 4 May 1998

Thanks for your email; it was very informative. Again, thanks for what you do for the Linux community --- without it, it would be a darker place.


(?)Regarding the Column's New Look

From David Jao on 02 May 1998

(?) >> I've completely changed the look of this column.

I don't like this new look one bit. It's much harder to follow everything when every time you finish one question you have to go back to the main page and then pursue another link. I much prefer the all in one big page layout.

Just casting my vote for less page fragmentation, that's all.

-David

(!) David,

We'll be adding a few more navigation links to the bottom of each question, so you can continue onto the next one without going "back up" and "down" again.

Part of the impetus to "fragment" this column is to be more search engine friendly. When someone is looking for information relating to two or more concepts it is quite frustrating to keep hitting a few "eclectic" pages that cover diverse, unrelated material on large single HTML pages.

Also, while I appreciate that many of the regular readers of my column read the whole thing, straight through (apparently as much for my occasional rants and tirades as for the technical references), I have to be considerate for the "browsers" --- those readers who look over the table of contents and the list of question/titles and just want to load selected ones. I hate to waste their bandwidth by shipping them 100K when they only want to see a couple of 5 to 8 kilobyte blocks.

For those who prefer to fetch "the whole thing" (to mirror it onto their localhost pages or go through it using a file:/ URL) I definitely suggest following the link to the .tar.gz file as referenced by the main home page at:
http://www.linuxgazette.com/

(Yes, we get a whole domain, now. Though I have to wonder why SSC made it a .com rather than a .org)

The other motivation is that I simply want to make live links of the references to the source material that I use to answer your questions. It has always bothered my sense of craftsmanship that these weren't "hot."

Finally it gives me the chance to proofread and comment on the collection of questions and answers before they "go to press." By that time the person who originally posed the question has long since received the answer via e-mail (and usually solved their problem).

However, the point of putting it in the Gazette is to make the same answer available to a broader audience and to let the search engines pick up on them. Which leads us right back to my first reason for wanting to have them in separate pages.

If you see that "+LDAP +Linux +NT" comes up on an Alta Vista search, you want to know that this is an article about the co-existence of "light-weight directory access protocol" (LDAP) and Linux and NT. You'll be irritated and disappointed if it's a large page that has disparate references to each. (Ooops. Now someone's going to hit those references in this. Oh well, at least it will be a small page and quick to scan for relevance).

(!)Heather comments...

I really do apologize for not giving you internal navigation within the Answer Guy column. That's fixed this time, and I hope you find it useful. I was in a rush to make everything beautiful for release...

You could also read "The Whole Damn Thing" version of the Linux Gazette, in which the footers are stripped between Answer Guy letters, but you'd also be downloading and paging through the other Gazette articles.

If that was a bit more than you hoped, perhaps you understand what visitors coming in via a search engine might feel like. Since I personally use search engines a lot -- more than 70% of my web visits start with a search -- I sure notice.

Heather


(?)TACACS and RADIUS Authentication Models for Linux and/or PAM

From Alexander Belov on 01 May 1998

Hello!

I'm looking for TACACS+ client software for Linux. I mean software like portslave (RADIUS) which is able to send Authentication-Authorization-Accounting requests to a TACACS+ server. Is there such software?

(!) The first place I would look for a TACACS, XTACACS, or TACACS+ daemon is:

http://www.easynet.de/tacacs-faq/tacacs-faq-32.html

It should point to some reasonably portable code. (TACACS is an authentication service supported by Cisco; RADIUS is a similar "remote authentication for dial-in user services" protocol, or something like that. I've heard of both being supported under Linux. These protocols are principally used by ISP's and in the remote access systems of large businesses. They are typically used as a protocol between a terminal server and the hosts which handle the accounting and authentication for those devices).

Another place I'd look for any Linux authentication services would be:

http://www.kernel.org/pub/linux/libs/pam/modules.html

I see some RADIUS modules there -- but no mention of a TACACS for PAM. I saw a reference to one copy that was working --- but following that URL now leads to a terse message that the PAM modules that used to be there are now "out of date" and that they would re-appear when the author had time to update them. No joy there.

Meanwhile there are a couple of good links to be had from DejaNews (that were either not at Yahoo! or were buried too deep for me to find):

On Steve Frampton's "Linux Administration Made Easy :-p" pages at: http://qlink.queensu.ca/~3srf/linux-admin/ he has a page on Authentication with TACACS (http://qlink.queensu.ca/~3srf/linux-admin/linux-admin-made-easy-6.html)

There he mentions (links to) an ftp site with the "Vikas" version of the xtacacsd. I dug around a bit (guessing HTTP URL's from the given FTP link) and found the:
Netplex Technologies Inc. Home Page
http://www.navya.com/

(?) Thank You Very Much
Alexander Belov

(!) I hope that helps. Somewhere in that morass I'm sure there's a way to get it all working.


(?)'sendmail' Log Jams and Capacity Problems

running extra 'sendmail -q' processes

From Robert Cotran on 28 Apr 1998

Hi there. You seem to know your stuff about Linux. I've been running it for two years. I have a little server at home. I was at first running Slackware; I'm now running RedHat 5.0. Here's my problem. After a while of running, my sendmail jams up. It doesn't let in any mail and it doesn't send out any queued mail. It's the strangest thing. And I had the same problem running Slackware. I figured that RedHat (having more updated programs) would be OK, but it's not. After a while, any queued messages just stay stuck in the /var/spool/mqueue dir, and incoming connections on port 25 are either REAL slow, or not possible. After rebooting however, all the queued mail zips out, and incoming connections are fine again. Sorry to bug you, but I just figured you might have an idea. Thanks for any help you can give me!

Rob

(!) You don't say what sort of volume you're trying to bump through there or what sort of connection you have to the net. So I'll have to assume the worst.

When I was postmaster/sysadmin of a very high volume site I found that I needed to occasionally run extra copies of the command 'sendmail -q' concurrently to help reduce the backlog in the queue. This command simply makes 'sendmail' do one sweep through its queue (mostly outgoing), looking for items that are ready for a retry.

'sendmail' has locking mechanisms (those x* and l* files) to prevent concurrency issues. So this is a safe procedure.
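On a chronically backlogged system you can schedule those extra queue runs from cron --- for instance, a root crontab entry like this (the frequency and path are just examples):

0,15,30,45 * * * * /usr/sbin/sendmail -q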

Typically I find that it's the DNS requests that bog down the processing, so running a caching nameserver on the localhost and making it the primary entry in your /etc/resolv.conf can be a big win.
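That just means running 'named' in caching-only mode on the mail host and making /etc/resolv.conf look something like this (the second entry is a hypothetical upstream server as a fallback):

nameserver 127.0.0.1
nameserver 192.168.1.1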

A trick I've used when a system gets temporarily really overloaded is to tar/gzip up a bunch of qf* and df* file pairs (queue control and data) --- ftp that to another system and extract them into a new directory. Then I have that system run several queue-run processes in parallel with the other. This is particularly handy if the other system has a separate pipe (connection to the Internet).
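A sketch of that trick (the queue ID pattern and the alternate directory are hypothetical):

cd /var/spool/mqueue
tar -czf /tmp/overflow.tar.gz qfWAA* dfWAA*
# ... move the tarball to the helper machine, unpack into /var/spool/mqueue2, then:
sendmail -oQ/var/spool/mqueue2 -q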

In Robert Harker's 'sendmail' seminar (which I took a few months ago) he described a "requeue" mailer. This is an advanced technique that he would use to requeue some mail into a separate, low priority, queue directory. The idea is to prevent a relatively small number of messages to sites that are dead or down, and messages to bogus addresses (from your users' typos, etc) from clogging things up for the other 70 or 80 percent that could be expedited.

Another thing to seriously consider and investigate is whether your site is being used as a spammer's relay. If your SMTP processing load has suddenly and drastically increased then you might have had some of your systems "hijacked" by spammers.

To understand this, a little bit of explanation is in order. The Internet was founded in a spirit of co-operation, so 'sendmail' was designed and configured (for the last 20 years) to accept mail from anyone, to anyone. It would gladly relay mail at the request of any MTA (mail transport agent).

This was practical and reasonable --- since mail that needs multiple hops to get somewhere (say from department to hub/gateway, to other site's hub, to recipient's department and thence to recipient's home host) shouldn't need to carry "authentication" information for every leg of its journey.

However 'spammers' use this openness to send a few copies of a message with hundreds or thousands of recipients listed for each. This allows someone with a 28.8 modem to chew up thousands of gigabit hours of bandwidth over the entire 'net.

In any event you can find some instructions on how to use the recent configuration options in 'sendmail' to prevent some of this abuse. Look at http://www.sendmail.com/ for details on the new 8.9 release, which supports a number of anti-spam features. (As you might expect this site also has lots of other sendmail information, as does the older and now slightly out-of-date http://www.sendmail.org/). Sendmail 8.9 includes support for a FEATURE (m4 macro set) that links you into the "Real-Time Black Hole List" (RBL) that is maintained by Paul Vixie (author of DNS BIND/named and of the Vixie 'cron' among other things).

Like all sendmail FEATURE's this is optional.
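Enabling it is a one-line addition to your m4 master configuration file --- a sketch for 8.9 (regenerate sendmail.cf afterward, from the cf/cf directory of the source distribution):

FEATURE(rbl)dnl

... and then rebuild:

m4 ../m4/cf.m4 sendmail.mc > sendmail.cf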

For more information about the RBL look at http://maps.vix.com/rbl/

For more information about the fight against spam look at: http://www.cauce.org/

For the best free support on the net for sendmail related issues subscribe to the comp.mail.sendmail newsgroup.


(?)Co-ordinating diald and Manual PPP

From Keith Weisz on 30 Apr 1998

Mr. Dennis,
I set my Linux box up using the diald daemon for such things as fetchmail, multi-machine internet access, etc. Works good.

However I want to set up a standard PPP connection without demand dialing. pppd fails unless I kill diald.

Do I:

  1. Write a script to kill diald during the initiation of PPP, then write a script to reinitialize diald on PPP shutdown?
  2. Figure out how to make PPP and diald work together?
  3. Find a trick that I haven't had the patience to track down?

Any suggestions?
Keith Weisz

(!) Keith,

I use option number 1 --- just write a script that does the kill for you. Something like:
kill $(cat /var/run/diald.pid)
... should kill it for you (where $() is the same as `` backquotes in bash or the Korn shell).
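A slightly fuller wrapper could stop diald, run the manual connection in the foreground, and bring diald back when you're done --- a sketch (your pppd options and device will certainly differ):

#!/bin/sh
kill $(cat /var/run/diald.pid)                 # stop the demand dialer
pppd -detach /dev/ttyS1 115200 defaultroute    # your usual manual PPP options here
/usr/sbin/diald                                # restart diald when pppd exits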

Another alternative might be to define separate runlevels for diald mode and "manual" pppd mode. That would only be appropriate if you were using init to respawn diald (adding an appropriate line to the /etc/inittab file).
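In that scheme the inittab line would have the same general shape as a getty entry --- a sketch, with diald's own options elided:

dd:3:respawn:/usr/sbin/diald ...

... and you'd switch to a diald-less runlevel (with 'telinit') before running pppd manually.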


(?)getting ppp-2.3.3 to work

From tng on 29 Apr 1998

Wasn't exactly sure where to send comments, questions....

Anyway, I finally decided to migrate to Linux kernel 2.1.94, mainly because the .94 indicates that they are almost ready for the next stable release...

(!) Now is indeed the time for a broader audience to do more testing of the new kernels. However it is still a beta, and it should be used on non-critical systems, personal workstations, testbeds, home servers etc.

Obviously you may need the new features of 2.1.x for some production work --- but you have to understand the risks you're taking in the process.

(?) The problem I have is ppp 2.3.3. I downloaded it, read the README, compiled the required parts, and it installed flawlessly... Now I CANNOT connect to my ISP. They are running a Linux network with Red Hat 5 for web hosting and Slackware controlling the RAID and passwords. I'm running Slackware. (Red Hat would crash every couple of days, wiping out my hard disk... got tired of rebuilding my system... got real good at backups : ) )

(!) Try at least 2.1.98. I did read about a variety of problems with PPP and the serial drivers in the 2.1.94 time frame.

It's also a good idea to double-check your IRQ settings and view the results of 'setserial' and/or the system's autodetection of your serial settings. I also saw traffic suggesting that there were cases where 2.1.x was more sensitive to situations that 2.0 would ignore.

Also try the most recent stable kernel (2.0.33 or .34) with this new pppd.

(?) With the ppp-2.2 I was using, I had to use the +ua <file> switch, where the file contained the username and password for upap authentication. After upgrading, this switch was no longer available, so I simply added the information to my /etc/ppp/pap-secrets file:

username    *    password


this didn't work. Tried the following
localhost    *    username:password
*            *    username:password

(!) I've never used PAP/CHAP authentication with Linux, so I don't know if these are right. I would double-check the PPP-HOWTO and I might even contact the author/maintainer of the 2.3.3 package to ask for a pointer. If you do so, please consider copying the maintainer of the PPP-HOWTO so that the docs can be updated as necessary (this may help reduce the number of times the pppd maintainer has to answer this question --- and it may make for a quicker answer for the people who don't bother to look for a newer version of the HOWTO).

(?) My ISP hangs up on me. I changed the order of the fields every which way I could think of but nothing worked. I would like to get my Linux box back on the net because of better transfer times and a more stable environment. (Linux connected at 33.6 and windoz connects at 24.# with the same serial settings, modem init, etc.) Please help... I hate to downgrade after hours of work upgrading.

(!) As with all PPP configurations I'd suggest using minicom to connect to the ISP manually, then quitting out of minicom without disconnecting (something that C-Kermit won't do). Then you should be able to start pppd on that line so that it won't attempt authentication.

If that works then you know that the serial line, modems, and ISP's settings are all right --- and you can focus on the chat script and the authentication options (which are often the hard and confusing parts of PPP configuration, since they don't happen interactively).

Also, consider setting the kdebug option, running the tests and including excerpts from the resulting log files in your messages to me, to the L.U.S.T. list (Linux User's Support Team is a tech support mailing list, reasonably low traffic and high S/N ratio), to the newsgroups (comp.os.linux.networking or c.o.l.setup might be most useful).

(don't forget to blot out any password and user ID info)


(?)Getting at MS-Mail from within Linux

The Myriad Ways to Co-exist with MS Windows

From Aubrey Pic on 28 Apr 1998

I connect my Linux box, at work, to a Netware network. I also have Win95 on my box, ONLY to get my LAN mail. Is there any way (or is there a commercial pkg available) to get my MS-MAIL messages w/out resorting to Win???? I have ncpfs running, so I can login & access my Netware account/directory with no problems. I cringe every time I have to do this.

Aubrey Pic

(!) I really feel for you there. There are five techniques that come to mind:
  1. Get an extra cheap box to run Win '95 on. Put the two boxes on a KVM (keyboard, video, mouse) switch box.

    A bit expensive by some accounts -- but much less than the time you waste rebooting if you amortize it over a year or two.
  2. MS-Exchange (presumably the mail server that your site is using) can be configured to serve e-mail over POP. Essentially your postmaster would set you up with an account that had a local address but was routed as though it were to the Internet. You'd just use 'fetchmail' to grab your mail (see the sketch after this list).

    This is probably the easiest and cheapest. However I've never administered an Exchange system --- so I have no idea how difficult this is or what sort of glitches you'll run into (particularly with various "rich text" embellishments that MS-Mail supports and that your associates might therefore use on mail that they consider to be "internal").
  3. Install an emulator such as Bochs, WINE, Wabi or DOSEmu (which allegedly can run Windows 3.x in standard mode).

    This is probably the most difficult approach (Wabi isn't too bad -- but it's only 3.x, too). The nice thing is the hack value: you can show your associates that Linux can run Windows apps.

    Bochs might be the most impressive, though undoubtedly the slowest. It can apparently run Win '95 under its emulation of a whole system (processor, chipset, video register set, disk controllers, the works -- all virtual).
    (Bochs is shareware, in source form, for $25; WINE and DOSEmu are free; Wabi is commercial, roughly $100-200.)

  4. Get a G3 PowerMac and run Connectix Virtual PC under MacOS. (Apple has a special offer to include it free, until June 26.)

    Well, this is the most expensive and it does sell another copy of Win '95 for Microsoft. But that G3 is fast and should make a great platform for mkLinux and/or LinuxPPC in the next year or so.
  5. Run a copy of VNC --- with the server on some associate's system and the client on your Linux box. You won't get to read your mail concurrently with them. Or get a Citrix WinFrame server (runs over NT 3.51) and the Java "ICA" client so you can remotely run NT apps from your X desktop.
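For option 2, the client side amounts to a couple of lines in your ~/.fetchmailrc --- a sketch with hypothetical host and account names:

poll exchange.mycompany.com protocol pop3
        username "apic" password "secret"

(And 'chmod 600 ~/.fetchmailrc' --- fetchmail will complain about a world-readable file that contains passwords.)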


(?)Automated Handling for MAILER-DAEMON Messages

Read the Sources, Luke.

From Adrian Lattke on 28 Apr 1998

Mr.Dennis:

I must congratulate you. I found your page on "Procmail Mini-Tutorial: Automated Mail Handling" extremely useful. You see, I was browsing the web in search of a program that will filter Mailer Daemon bouncebacks. Would you happen to know if there is any way, from a perl script, to determine if an email address is valid? Or, how exactly should I configure procmail and its files to filter daemons into a directory? What I want to say is, do you know of any program with a set of rules for identifying a bounceback, and extracting the address that it bounced from, appending it to a file?

Thank you very much for your help,
Adrian Lattke

(!) There is no way to determine if an address is "valid" in the sense that it really leads to someone's inbox, other than sending them mail and getting a response. Anything else is only a guess. (Technically that's "verification" rather than "validation" -- but the terms are often interchanged and misused).

In the more precise sense it is possible to validate a string as complying with RFC 822. That's the IETF document that defines the proper formatting and structure of Internet e-mail headers and addresses. However this is not a trivial task. (I think Tom Christiansen mentioned that it took him a hundred ugly lines of perl code to do it). You might look at CPAN (the Comprehensive Perl Archive Network --- a set of co-operating mirror sites that forms the canonical repository of publicly available perl sources, libraries, and modules). Look at http://www.perl.org/ for starters.

As I said, procmail does handle many of these details for you --- which is why I use it. I figure Stephen R. van den Berg (procmail's author) knows a lot more about RFC 822 parsing than I want to.
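For the simpler half of your request --- filing MAILER-DAEMON responses into their own folder --- a minimal recipe might look like this (a sketch using procmail's built-in ^FROM_MAILER macro; the folder name is arbitrary):

:0:
* ^FROM_MAILER
mailer-daemon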

Regarding your desire to automatically extract and record addresses that result in bounce messages (responses from MAILER-DAEMON):

I'd suggest that you grab a copy of SmartList (the automated mailing list management package that's built over procmail, and is by the same author). That has the best "bounce handling" features that are available among the Majordomo, ListServ, ListProc set (from what I hear on the list-managers mailing list at GreatCircle.com). So, you could grab it and look through its procmail sources to find out how it handles the automated removal of "dead" addresses from its subscriber lists. That's got to be pretty close to what you've described.


Copyright © 1998, James T. Dennis
Published in Linux Gazette Issue 29 June 1998


[ Table Of Contents ] [ Front Page ] [ Back ] [ Next ]