This is Info file textutils.info, produced by Makeinfo-1.64 from the
input file /ade-src/fsf/textutils/doc/textutils.texi.

START-INFO-DIR-ENTRY
* Text utilities: (textutils).            GNU text utilities.
* cat: (textutils)cat invocation.         Concatenate and write files.
* cksum: (textutils)cksum invocation.     Print POSIX CRC checksum.
* comm: (textutils)comm invocation.       Compare sorted files by line.
* csplit: (textutils)csplit invocation.   Split by context.
* cut: (textutils)cut invocation.         Print selected parts of lines.
* expand: (textutils)expand invocation.   Convert tabs to spaces.
* fmt: (textutils)fmt invocation.         Reformat paragraph text.
* fold: (textutils)fold invocation.       Wrap long input lines.
* head: (textutils)head invocation.       Output the first part of files.
* join: (textutils)join invocation.       Join lines on a common field.
* md5sum: (textutils)md5sum invocation.   Print or check message-digests.
* nl: (textutils)nl invocation.           Number lines and write files.
* od: (textutils)od invocation.           Dump files in octal, etc.
* paste: (textutils)paste invocation.     Merge lines of files.
* pr: (textutils)pr invocation.           Paginate or columnate files.
* sort: (textutils)sort invocation.       Sort text files.
* split: (textutils)split invocation.     Split into fixed-size pieces.
* sum: (textutils)sum invocation.         Print traditional checksum.
* tac: (textutils)tac invocation.         Reverse files.
* tail: (textutils)tail invocation.       Output the last part of files.
* tr: (textutils)tr invocation.           Translate characters.
* unexpand: (textutils)unexpand invocation.  Convert spaces to tabs.
* uniq: (textutils)uniq invocation.       Uniqify files.
* wc: (textutils)wc invocation.           Byte, word, and line counts.
END-INFO-DIR-ENTRY

   This file documents the GNU text utilities.

   Copyright (C) 1994, 95, 96 Free Software Foundation, Inc.

   Permission is granted to make and distribute verbatim copies of this
manual provided the copyright notice and this permission notice are
preserved on all copies.

   Permission is granted to copy and distribute modified versions of
this manual under the conditions for verbatim copying, provided that
the entire resulting derived work is distributed under the terms of a
permission notice identical to this one.

   Permission is granted to copy and distribute translations of this
manual into another language, under the above conditions for modified
versions, except that this permission notice may be stated in a
translation approved by the Foundation.

File: textutils.info,  Node: Putting the tools together,  Prev: The `uniq' command,  Up: Opening the software toolbox

Putting the tools together
==========================

   Now, let's suppose this is a large BBS system with dozens of users
logged in.  The management wants the SysOp to write a program that will
generate a sorted list of logged in users.  Furthermore, even if a user
is logged in multiple times, his or her name should only show up in the
output once.

   The SysOp could sit down with the system documentation and write a C
program that did this.  It would take perhaps a couple of hundred lines
of code and about two hours to write it, test it, and debug it.
However, knowing the software toolbox, the SysOp can instead start out
by generating just a list of logged on users:

     $ who | cut -c1-8
     arnold
     miriam
     bill
     arnold

   Next, sort the list:

     $ who | cut -c1-8 | sort
     arnold
     arnold
     bill
     miriam

   Finally, run the sorted list through `uniq', to weed out duplicates:

     $ who | cut -c1-8 | sort | uniq
     arnold
     bill
     miriam

   The `sort' command actually has a `-u' option that does what `uniq'
does.
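For example, here is a minimal sketch of the same pipeline shortened
with `sort -u'; it should produce the same three names as before:

     $ who | cut -c1-8 | sort -u
     arnold
     bill
     miriam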
However, `uniq' has other uses for which one cannot substitute
`sort -u'.  The SysOp puts this pipeline into a shell script, and makes
it available for all the users on the system:

     # cat > /usr/local/bin/listusers
     who | cut -c1-8 | sort | uniq
     ^D
     # chmod +x /usr/local/bin/listusers

   There are four major points to note here.

   First, with just four programs, on one command line, the SysOp was
able to save about two hours worth of work.  Furthermore, the shell
pipeline is just about as efficient as the C program would be, and it
is much more efficient in terms of programmer time.  People time is
much more expensive than computer time, and in our modern "there's
never enough time to do everything" society, saving two hours of
programmer time is no mean feat.

   Second, it is also important to emphasize that with the
*combination* of the tools, it is possible to do a special purpose job
never imagined by the authors of the individual programs.

   Third, it is also valuable to build up your pipeline in stages, as
we did here.  This allows you to view the data at each stage in the
pipeline, which helps you acquire the confidence that you are indeed
using these tools correctly.

   Finally, by bundling the pipeline in a shell script, other users can
use your command, without having to remember the fancy plumbing you set
up for them.  In terms of how you run them, shell scripts and compiled
programs are indistinguishable.

   After the previous warm-up exercise, we'll look at two additional,
more complicated pipelines.  For them, we need to introduce two more
tools.

   The first is the `tr' command, which stands for "transliterate."
The `tr' command works on a character-by-character basis, changing
characters.  Normally it is used for things like mapping upper case to
lower case:

     $ echo ThIs ExAmPlE HaS MIXED case! | tr '[A-Z]' '[a-z]'
     this example has mixed case!

   There are several options of interest:

`-c'
     work on the complement of the listed characters, i.e., operations
     apply to characters not in the given set

`-d'
     delete characters in the first set from the output

`-s'
     squeeze repeated characters in the output into just one character.

   We will be using all three options in a moment.

   The other command we'll look at is `comm'.  The `comm' command takes
two sorted input files as input data, and prints out the files' lines
in three columns.  The output columns are the data lines unique to the
first file, the data lines unique to the second file, and the data
lines that are common to both.  The `-1', `-2', and `-3' command line
options omit the respective columns.  (This is non-intuitive and takes
a little getting used to.)  For example:

     $ cat f1
     11111
     22222
     33333
     44444
     $ cat f2
     00000
     22222
     33333
     55555
     $ comm f1 f2
             00000
     11111
                     22222
                     33333
     44444
             55555

   The single dash as a filename tells `comm' to read standard input
instead of a regular file.

   Now we're ready to build a fancy pipeline.  The first application is
a word frequency counter.  This helps an author determine if he or she
is over-using certain words.

   The first step is to change the case of all the letters in our input
file to one case.  "The" and "the" are the same word when doing
counting.

     $ tr '[A-Z]' '[a-z]' < whats.gnu | ...

   The next step is to get rid of punctuation.  Quoted words and
unquoted words should be treated identically; it's easiest to just get
the punctuation out of the way.

     $ tr '[A-Z]' '[a-z]' < whats.gnu | tr -cd '[A-Za-z0-9_ \012]' | ...

   The second `tr' command operates on the complement of the listed
characters, which are all the letters, the digits, the underscore, and
the blank.  The `\012' represents the newline character; it has to be
left alone.  (The ASCII TAB character should also be included for good
measure in a production script.)
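As a sketch of that suggestion, the octal escape `\011' (TAB) could be
listed alongside `\012':

     $ tr '[A-Z]' '[a-z]' < whats.gnu | tr -cd '[A-Za-z0-9_ \011\012]' | ...

   A production version would likewise want to list TAB in the later
step that turns blanks into newlines.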
   At this point, we have data consisting of words separated by blank
space.  The words only contain alphanumeric characters (and the
underscore).  The next step is to break the data apart so that we have
one word per line.  This makes the counting operation much easier, as
we will see shortly.

     $ tr '[A-Z]' '[a-z]' < whats.gnu | tr -cd '[A-Za-z0-9_ \012]' |
     > tr -s '[ ]' '\012' | ...

   This command turns blanks into newlines.  The `-s' option squeezes
multiple newline characters in the output into just one.  This helps us
avoid blank lines.  (The `>' is the shell's "secondary prompt."  This
is what the shell prints when it notices you haven't finished typing in
all of a command.)

   We now have data consisting of one word per line, no punctuation,
all one case.  We're ready to count each word:

     $ tr '[A-Z]' '[a-z]' < whats.gnu | tr -cd '[A-Za-z0-9_ \012]' |
     > tr -s '[ ]' '\012' | sort | uniq -c | ...

   At this point, the data might look something like this:

         60 a
          2 able
          6 about
          1 above
          2 accomplish
          1 acquire
          1 actually
          2 additional

   The output is sorted by word, not by count!  What we want is the
most frequently used words first.  Fortunately, this is easy to
accomplish, with the help of two more `sort' options:

`-n'
     do a numeric sort, not an ASCII one

`-r'
     reverse the order of the sort

   The final pipeline looks like this:

     $ tr '[A-Z]' '[a-z]' < whats.gnu | tr -cd '[A-Za-z0-9_ \012]' |
     > tr -s '[ ]' '\012' | sort | uniq -c | sort -nr
        156 the
         60 a
         58 to
         51 of
         51 and
     ...

   Whew!  That's a lot to digest.  Yet, the same principles apply.  With
six commands, on two lines (really one long one split for convenience),
we've created a program that does something interesting and useful, in
much less time than we could have written a C program to do the same
thing.

   A minor modification to the above pipeline can give us a simple
spelling checker!  To determine if you've spelled a word correctly, all
you have to do is look it up in a dictionary.  If it is not there, then
chances are that your spelling is incorrect.  So, we need a dictionary.
If you have the Slackware Linux distribution, you have the file
`/usr/lib/ispell/ispell.words', which is a sorted, 38,400 word
dictionary.

   Now, how to compare our file with the dictionary?  As before, we
generate a sorted list of words, one per line:

     $ tr '[A-Z]' '[a-z]' < whats.gnu | tr -cd '[A-Za-z0-9_ \012]' |
     > tr -s '[ ]' '\012' | sort -u | ...

   Now, all we need is a list of words that are *not* in the
dictionary.  Here is where the `comm' command comes in.

     $ tr '[A-Z]' '[a-z]' < whats.gnu | tr -cd '[A-Za-z0-9_ \012]' |
     > tr -s '[ ]' '\012' | sort -u |
     > comm -23 - /usr/lib/ispell/ispell.words

   The `-2' and `-3' options eliminate lines that are only in the
dictionary (the second file), and lines that are in both files.  Lines
only in the first file (standard input, our stream of words), are words
that are not in the dictionary.  These are likely candidates for
spelling errors.  This pipeline was the first cut at a production
spelling checker on Unix.
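   Following the `listusers' example from earlier, this pipeline could
likewise be bundled into a script.  Here is a minimal sketch; the name
`spellcheck' and the use of `$1' for the file to check are illustrative
choices, and the dictionary path is the Slackware one mentioned above:

     # cat > /usr/local/bin/spellcheck
     tr '[A-Z]' '[a-z]' < "$1" | tr -cd '[A-Za-z0-9_ \012]' |
         tr -s '[ ]' '\012' | sort -u |
         comm -23 - /usr/lib/ispell/ispell.words
     ^D
     # chmod +x /usr/local/bin/spellcheck

   It could then be run as, say, `spellcheck whats.gnu'.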
   There are some other tools that deserve brief mention.

`grep'
     search files for text that matches a regular expression

`egrep'
     like `grep', but with more powerful regular expressions

`wc'
     count lines, words, characters

`tee'
     a T-fitting for data pipes, copies data to files and to standard
     output

`sed'
     the stream editor, an advanced tool

`awk'
     a data manipulation language, another advanced tool

   The software tools philosophy also espoused the following bit of
advice: "Let someone else do the hard part."  This means, take
something that gives you most of what you need, and then massage it the
rest of the way until it's in the form that you want.

   To summarize:

  1. Each program should do one thing well.  No more, no less.

  2. Combining programs with appropriate plumbing leads to results
     where the whole is greater than the sum of the parts.  It also
     leads to novel uses of programs that the authors might never have
     imagined.

  3. Programs should never print extraneous header or trailer data,
     since these could get sent on down a pipeline.  (A point we didn't
     mention earlier.)

  4. Let someone else do the hard part.

  5. Know your toolbox!  Use each program appropriately.  If you don't
     have an appropriate tool, build one.

   As of this writing, all the programs we've discussed are available
via anonymous `ftp' from `prep.ai.mit.edu' as the file
`/pub/gnu/textutils-1.9.tar.gz'.(1)

   None of what I have presented in this column is new.  The Software
Tools philosophy was first introduced in the book `Software Tools', by
Brian Kernighan and P.J. Plauger (Addison-Wesley, ISBN 0-201-03669-X).
This book showed how to write and use software tools.  It was written
in 1976, using a preprocessor for FORTRAN named `ratfor' (RATional
FORtran).  At the time, C was not as ubiquitous as it is now; FORTRAN
was.  The last chapter presented a `ratfor' to FORTRAN processor,
written in `ratfor'.  `ratfor' looks an awful lot like C; if you know
C, you won't have any problem following the code.

   In 1981, the book was updated and made available as `Software Tools
in Pascal' (Addison-Wesley, ISBN 0-201-10342-7).  Both books remain in
print, and are well worth reading if you're a programmer.  They
certainly made a major change in how I view programming.

   Initially, the programs in both books were available (on 9-track
tape) from Addison-Wesley.  Unfortunately, this is no longer the case,
although you might be able to find copies floating around the Internet.
For a number of years, there was an active Software Tools Users Group,
whose members had ported the original `ratfor' programs to essentially
every computer system with a FORTRAN compiler.  The popularity of the
group waned in the middle '80s as Unix began to spread beyond
universities.

   With the current proliferation of GNU code and other clones of Unix
programs, these programs now receive little attention; modern C
versions are much more efficient and do more than these programs do.
Nevertheless, as exposition of good programming style, and evangelism
for a still-valuable philosophy, these books are unparalleled, and I
recommend them highly.

   Acknowledgment: I would like to express my gratitude to Brian
Kernighan of Bell Labs, the original Software Toolsmith, for reviewing
this column.

   ---------- Footnotes ----------

   (1)  Version 1.9 was current when this column was written.  Check
the nearest GNU archive for the current version.

File: textutils.info,  Node: Index,  Prev: Opening the software toolbox,  Up: Top

Index
*****

* Menu:

* +COUNT: tail invocation.
* +N: uniq invocation.
* -address-radix: od invocation.
* -all: unexpand invocation.
* -before: tac invocation. * -binary: md5sum invocation. * -body-numbering: nl invocation. * -bytes <1>: split invocation. * -bytes <2>: head invocation. * -bytes <3>: fold invocation. * -bytes <4>: tail invocation. * -bytes <5>: wc invocation. * -bytes: cut invocation. * -characters: cut invocation. * -chars: wc invocation. * -check-chars: uniq invocation. * -count: uniq invocation. * -crown-margin: fmt invocation. * -delimiter: cut invocation. * -delimiters: paste invocation. * -digits: csplit invocation. * -elide-empty-files: csplit invocation. * -fields: cut invocation. * -follow: tail invocation. * -footer-numbering: nl invocation. * -format: od invocation. * -header-numbering: nl invocation. * -help: Common options. * -ignore-case <1>: uniq invocation. * -ignore-case: join invocation. * -initial: expand invocation. * -join-blank-lines: nl invocation. * -keep-files: csplit invocation. * -line-bytes: split invocation. * -lines <1>: split invocation. * -lines <2>: head invocation. * -lines <3>: tail invocation. * -lines: wc invocation. * -no-renumber: nl invocation. * -number: cat invocation. * -number-format: nl invocation. * -number-nonblank: cat invocation. * -number-separator: nl invocation. * -number-width: nl invocation. * -only-delimited: cut invocation. * -output-duplicates: od invocation. * -page-increment: nl invocation. * -prefix: csplit invocation. * -quiet <1>: tail invocation. * -quiet <2>: head invocation. * -quiet: csplit invocation. * -read-bytes: od invocation. * -regex: tac invocation. * -repeated: uniq invocation. * -section-delimiter: nl invocation. * -separator: tac invocation. * -serial: paste invocation. * -show-all: cat invocation. * -show-ends: cat invocation. * -show-nonprinting: cat invocation. * -show-tabs: cat invocation. * -silent <1>: tail invocation. * -silent <2>: head invocation. * -silent: csplit invocation. * -skip-bytes: od invocation. * -skip-chars: uniq invocation. * -skip-fields: uniq invocation. * -spaces: fold invocation. * -split-only: fmt invocation. * -squeeze-blank: cat invocation. * -starting-line-number: nl invocation. * -status: md5sum invocation. * -string: md5sum invocation. * -strings: od invocation. * -suffix: csplit invocation. * -sysv: sum invocation. * -tabs <1>: unexpand invocation. * -tabs: expand invocation. * -tagged-paragraph: fmt invocation. * -text: md5sum invocation. * -traditional: od invocation. * -uniform-spacing: fmt invocation. * -unique: uniq invocation. * -verbose <1>: tail invocation. * -verbose <2>: head invocation. * -verbose: split invocation. * -version: Common options. * -warn: md5sum invocation. * -width <1>: od invocation. * -width <2>: fmt invocation. * -width: fold invocation. * -words: wc invocation. * -1 <1>: join invocation. * -1: comm invocation. * -2 <1>: comm invocation. * -2: join invocation. * -3: comm invocation. * -COLUMN: pr invocation. * -COUNT <1>: head invocation. * -COUNT: tail invocation. * -N: uniq invocation. * -TAB <1>: unexpand invocation. * -TAB: expand invocation. * -WIDTH: fmt invocation. * -a <1>: unexpand invocation. * -a: join invocation. * -A <1>: cat invocation. * -A: od invocation. * -a <1>: pr invocation. * -a: od invocation. * -b <1>: csplit invocation. * -b <2>: split invocation. * -b <3>: cat invocation. * -b <4>: fold invocation. * -b <5>: cut invocation. * -b <6>: pr invocation. * -b <7>: nl invocation. * -b <8>: md5sum invocation. * -b <9>: tac invocation. * -b <10>: sort invocation. * -b: od invocation. * -c <1>: fmt invocation. * -c <2>: cut invocation. 
* -c <3>: od invocation. * -c <4>: pr invocation. * -c <5>: wc invocation. * -c <6>: tail invocation. * -c: sort invocation. * -C: split invocation. * -c <1>: head invocation. * -c: uniq invocation. * -d <1>: nl invocation. * -d <2>: sort invocation. * -d <3>: uniq invocation. * -d <4>: cut invocation. * -d <5>: pr invocation. * -d <6>: paste invocation. * -d: od invocation. * -e <1>: join invocation. * -e <2>: cat invocation. * -e: pr invocation. * -E: cat invocation. * -f <1>: cut invocation. * -f <2>: csplit invocation. * -f <3>: pr invocation. * -f: uniq invocation. * -F: pr invocation. * -f <1>: nl invocation. * -f <2>: tail invocation. * -f <3>: sort invocation. * -f: od invocation. * -g: sort invocation. * -h <1>: od invocation. * -h <2>: nl invocation. * -h: pr invocation. * -i <1>: od invocation. * -i <2>: nl invocation. * -i <3>: sort invocation. * -i <4>: pr invocation. * -i <5>: join invocation. * -i <6>: expand invocation. * -i: uniq invocation. * -j: od invocation. * -j1: join invocation. * -j2: join invocation. * -k <1>: csplit invocation. * -k: sort invocation. * -l <1>: od invocation. * -l <2>: split invocation. * -l <3>: nl invocation. * -l <4>: pr invocation. * -l: wc invocation. * -m <1>: sort invocation. * -m: pr invocation. * -n <1>: sort invocation. * -n <2>: pr invocation. * -n <3>: tail invocation. * -n <4>: head invocation. * -n: nl invocation. * -N: od invocation. * -n <1>: csplit invocation. * -n <2>: cat invocation. * -n: cut invocation. * -o <1>: pr invocation. * -o <2>: sort invocation. * -o: od invocation. * -p: nl invocation. * -q <1>: head invocation. * -q <2>: tail invocation. * -q: csplit invocation. * -r <1>: sort invocation. * -r <2>: tac invocation. * -r <3>: sum invocation. * -r: pr invocation. * -s <1>: cut invocation. * -s <2>: uniq invocation. * -s <3>: paste invocation. * -s <4>: cat invocation. * -s <5>: csplit invocation. * -s <6>: od invocation. * -s <7>: pr invocation. * -s <8>: fmt invocation. * -s <9>: fold invocation. * -s <10>: sum invocation. * -s <11>: tac invocation. * -s: nl invocation. * -T: cat invocation. * -t <1>: pr invocation. * -t <2>: cat invocation. * -t <3>: sort invocation. * -t <4>: md5sum invocation. * -t <5>: expand invocation. * -t <6>: fmt invocation. * -t <7>: od invocation. * -t: unexpand invocation. * -u <1>: cat invocation. * -u <2>: fmt invocation. * -u <3>: sort invocation. * -u: uniq invocation. * -v <1>: od invocation. * -v <2>: head invocation. * -v <3>: tail invocation. * -v <4>: cat invocation. * -v <5>: pr invocation. * -v: nl invocation. * -w <1>: uniq invocation. * -w <2>: wc invocation. * -w <3>: md5sum invocation. * -w <4>: pr invocation. * -w <5>: od invocation. * -w <6>: nl invocation. * -w <7>: fold invocation. * -w: fmt invocation. * -x: od invocation. * -z <1>: sort invocation. * -z: csplit invocation. * 128-bit checksum: md5sum invocation. * 16-bit checksum: sum invocation. * across columns: pr invocation. * alnum: Character sets. * alpha: Character sets. * ASCII dump of files: od invocation. * backslash escapes: Character sets. * balancing columns: pr invocation. * binary input files: md5sum invocation. * blank: Character sets. * blank lines, numbering: nl invocation. * blanks, ignoring leading: sort invocation. * body, numbering: nl invocation. * BSD sum: sum invocation. * BSD tail: tail invocation. * bugs, reporting: Introduction. * byte count: wc invocation. * case folding: sort invocation. * cat: cat invocation. * characters classes: Character sets. * checking for sortedness: sort invocation. 
* checksum, 128-bit: md5sum invocation. * checksum, 16-bit: sum invocation. * cksum: cksum invocation. * cntrl: Character sets. * comm: comm invocation. * common field, joining on: join invocation. * common lines: comm invocation. * common options: Common options. * comparing sorted files: comm invocation. * concatenate and write files: cat invocation. * context splitting: csplit invocation. * converting tabs to spaces: expand invocation. * copying files: cat invocation. * CRC checksum: cksum invocation. * crown margin: fmt invocation. * csplit: csplit invocation. * cut: cut invocation. * cyclic redundancy check: cksum invocation. * deleting characters: Squeezing. * differing lines: comm invocation. * digit: Character sets. * double spacing: pr invocation. * duplicate lines, outputting: uniq invocation. * empty lines, numbering: nl invocation. * entire files, output of: Output of entire files. * equivalence classes: Character sets. * expand: expand invocation. * field separator character: sort invocation. * file contents, dumping unambiguously: od invocation. * file offset radix: od invocation. * fingerprint, 128-bit: md5sum invocation. * first part of files, outputting: head invocation. * fmt: fmt invocation. * fold: fold invocation. * folding long input lines: fold invocation. * footers, numbering: nl invocation. * formatting file contents: Formatting file contents. * general numeric sort: sort invocation. * graph: Character sets. * growing files: tail invocation. * head: head invocation. * headers, numbering: nl invocation. * help, online: Common options. * hex dump of files: od invocation. * indenting lines: pr invocation. * initial part of files, outputting: head invocation. * initial tabs, converting: expand invocation. * input tabs: pr invocation. * introduction: Introduction. * join: join invocation. * Knuth, Donald E.: fmt invocation. * last part of files, outputting: tail invocation. * left margin: pr invocation. * line count: wc invocation. * line numbering: nl invocation. * line-breaking: fmt invocation. * line-by-line comparison: comm invocation. * ln format for nl: nl invocation. * logical pages, numbering on: nl invocation. * lower: Character sets. * md5sum: md5sum invocation. * merging files: paste invocation. * merging sorted files: sort invocation. * message-digest, 128-bit: md5sum invocation. * months, sorting by: sort invocation. * multicolumn output, generating: pr invocation. * nl: nl invocation. * numbering lines: nl invocation. * numeric sort: sort invocation. * octal dump of files: od invocation. * od: od invocation. * operating on characters: Operating on characters. * operating on sorted files: Operating on sorted files. * output file name prefix <1>: split invocation. * output file name prefix: csplit invocation. * output file name suffix: csplit invocation. * output of entire files: Output of entire files. * output of parts of files: Output of parts of files. * output tabs: pr invocation. * overwriting of input, allowed: sort invocation. * paragraphs, reformatting: fmt invocation. * parts of files, output of: Output of parts of files. * paste: paste invocation. * phone directory order: sort invocation. * pieces, splitting a file into: split invocation. * Plass, Michael F.: fmt invocation. * POSIX.2: Introduction. * POSIXLY_CORRECT: Warnings in tr. * pr: pr invocation. * print: Character sets. * printing, preparing files for: pr invocation. * punct: Character sets. * radix for file offsets: od invocation. * ranges: Character sets. 
* reformatting paragraph text: fmt invocation. * repeated characters: Character sets. * reverse sorting: sort invocation. * reversing files: tac invocation. * rn format for nl: nl invocation. * rz format for nl: nl invocation. * screen columns: fold invocation. * section delimiters of pages: nl invocation. * sentences and line-breaking: fmt invocation. * sort: sort invocation. * sort field: sort invocation. * sort zero-terminated lines: sort invocation. * sorted files, operations on: Operating on sorted files. * sorting files: sort invocation. * space: Character sets. * specifying sets of characters: Character sets. * split: split invocation. * splitting a file into pieces: split invocation. * splitting a file into pieces by context: csplit invocation. * squeezing blank lines: cat invocation. * squeezing repeat characters: Squeezing. * string constants, outputting: od invocation. * sum: sum invocation. * summarizing files: Summarizing files. * System V sum: sum invocation. * tabs to spaces, converting: expand invocation. * tabstops, setting: expand invocation. * tac: tac invocation. * tagged paragraphs: fmt invocation. * tail: tail invocation. * telephone directory order: sort invocation. * text input files: md5sum invocation. * text utilities: Top. * text, reformatting: fmt invocation. * TMPDIR: sort invocation. * total counts: wc invocation. * tr: tr invocation. * translating characters: Translating. * type size: od invocation. * unexpand: unexpand invocation. * uniq: uniq invocation. * uniqify files: uniq invocation. * uniqifying output: sort invocation. * unique lines, outputting: uniq invocation. * unprintable characters, ignoring: sort invocation. * upper: Character sets. * utilities for text handling: Top. * verifying MD5 checksums: md5sum invocation. * version number, finding: Common options. * wc: wc invocation. * word count: wc invocation. * wrapping long input lines: fold invocation. * xdigit: Character sets.