Note: I've noticed that a few university courses' web pages link to this one. I'm glad that people are finding it useful. Just don't ask me to do your homework.
Why bother to learn shell programming, when you could be out there rollerblading or trying to get a date?
Because, in a word, it's useful.
Many standard utilities (rdist, make, cron, etc.) allow you to specify a command to run at a certain time. Usually, this command is simply passed to the Bourne shell, which means that you can execute whole scripts, should you choose to do so.
Lastly, Unix runs Bourne shell scripts when it boots. If you want to modify the boot-time behavior of a system, you need to learn to write and modify Bourne shell scripts.
First of all, what's a shell? Under Unix, a shell is a command interpreter. That is, it reads commands from the keyboard and executes them.
Furthermore, and this is what this tutorial is all about, you can put commands in a file and execute them all at once. This is known as a script. Here's a simple one:
#!/bin/sh
# Rotate procmail log files
cd /homes/arensb/Mail
rm procmail.log.6            # This is redundant
mv procmail.log.5 procmail.log.6
mv procmail.log.4 procmail.log.5
mv procmail.log.3 procmail.log.4
mv procmail.log.2 procmail.log.3
mv procmail.log.1 procmail.log.2
mv procmail.log.0 procmail.log.1
mv procmail.log   procmail.log.0
There are several things to note here: first of all, comments begin with a hash (#) and continue to the end of the line (the first line is special, and we'll cover that in just a moment).
Secondly, the script itself is just a series of commands. I use this script to rotate log files, as it says. I could just as easily have typed these commands in by hand, but I'm lazy, and I don't feel like it. Plus, if I did, I might make a typo at the wrong moment and really make a mess.
The first line of any script must begin with #!, followed by the name of the interpreter.
A script, like any file that can be run as a command, needs to be executable: save this script as rotatelog and run
chmod +x rotatelog
to make it executable. You can now run it by running
./rotatelog
Unlike some other operating systems, Unix allows any program to be used as a script interpreter. This is why people talk about ``a Bourne shell script'' or ``an awk script.'' One might even write a more script, or an ls script (though the latter wouldn't be terribly useful). Hence, it is important to let Unix know which program will be interpreting the script.
When Unix tries to execute the script, it sees the first two characters (#!) and knows that it is a script. It then reads the rest of the line to find out which program is to execute the script. For a Bourne shell script, this will be /bin/sh. Hence, the first line of our script must be
#!/bin/sh
After the command interpreter, you can have one, and sometimes more, options. Some flavors of Unix only allow one, though, so don't assume that you can have more.
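For example, to run an entire script with execution tracing turned on (the -x option is described under debugging, below), you could start it with

#!/bin/sh -x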
sh allows you to have variables, just like any programming language. Variables do not need to be declared. To set a sh variable, use

VAR=value

(note that there must be no whitespace around the equals sign), and to use the value of the variable later, use

$VAR

or

${VAR}

The latter syntax is useful if the variable name is immediately followed by other text:
#!/bin/sh
COLOR=yellow
echo This looks $COLORish
echo This seems ${COLOR}ish
prints
This looks
This seems yellowish
There is only one type of variable in sh: strings. This is somewhat limited, but is sufficient for most purposes.
A sh variable can be either a local variable or an environment variable. They both work the same way; the only difference lies in what happens when the script runs another program (which, as we saw earlier, it does all the time).
Environment variables are passed to subprocesses. Local variables are not.
By default, variables are local. To turn a local variable into an environment variable, use

export VARNAME
Here's a simple wrapper for a program:
#!/bin/sh
NETSCAPE_HOME=/usr/imports/libdata
CLASSPATH=$NETSCAPE_HOME/classes
export CLASSPATH
$NETSCAPE_HOME/bin/netscape.bin
Here, NETSCAPE_HOME is a local variable; CLASSPATH is an environment variable. CLASSPATH will be passed to netscape.bin (netscape.bin uses the value of this variable to find Java class files); NETSCAPE_HOME is a convenience variable that is only used by the wrapper script; netscape.bin doesn't need to know about it, so it is kept local.
The only way to unexport a variable is to unset it:

unset VARNAME
This removes the variable from the shell's symbol table, effectively making it as if it had never existed; as a side effect, the variable is also unexported.
Since you may want to use the variable's value later in the script, it is usually better not to export it in the first place than to unset it afterward.
Also, note that if a variable was passed in as part of the environment, it is already an environment variable when your script starts running. If there is a variable that you really don't want to pass to any subprocesses, you should unset it near the top of your script. This is rare, but it might conceivably happen.
If you refer to a variable that hasn't been defined, sh substitutes the empty string.
#!/bin/sh
echo aaa $FOO bbb
echo xxx${FOO}yyy
prints
aaa bbb
xxxyyy
sh treats certain variables specially: some are set for you when your script runs, and some affect the way commands are interpreted.
The most useful of these variables are the ones referring to the command-line arguments. $1 refers to the first command-line argument (after the name of the script), $2 refers to the second one, and so forth, up to $9.
If you have more than nine command-line arguments, you can use the shift command: this discards the first command-line argument, and bumps the remaining ones up by one position: $2 becomes $1, $8 becomes $7, and so forth.
The variable $0 (zero) contains the name of the script (argv[0] in C programs).
Often, it is useful to just list all of the command-line arguments. For this, sh provides the variables $* (star) and $@ (at). Each of these expands to a string containing all of the command-line arguments, as if you had used $1 $2 $3...
The difference between $* and $@ lies in the way they behave when they occur inside double quotes: $* behaves in the normal way, whereas $@ creates a separate double-quoted string for each command-line argument. That is, "$*" behaves as if you had written "$1 $2 $3", whereas "$@" behaves as if you had written "$1" "$2" "$3".
Finally, $# contains the number of command-line arguments that were given.
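Here is a minimal sketch tying these variables together (the script name showargs and the messages are made up for illustration):

#!/bin/sh
# showargs - demonstrate the command-line argument variables
echo "Script name:    $0"
echo "Number of args: $#"
echo "First argument: $1"
shift
echo "After shift, \$1 is: $1"
for arg in "$@"; do
    echo "arg: $arg"
done

Running ./showargs one two three prints the script name, the count 3, the word one, then (after the shift) two, and finally each remaining argument on its own line.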
$? gives the exit status of the last command that was executed. This should be zero if the command exited normally.
$- lists all of the options with which sh was invoked. See sh(1) for details.
$$ holds the PID of the current process.
$! holds the PID of the last command that was executed in the background.
$IFS (Input Field Separator) determines how sh splits strings into words.
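A short sketch illustrating a few of these special variables (the file names and strings are arbitrary):

#!/bin/sh
echo "This script's PID: $$"
sleep 10 &
echo "The sleep job's PID: $!"
grep no-such-user /etc/passwd > /dev/null
echo "grep's exit status: $?"    # non-zero: nothing matched
str=a:b:c
IFS=:
set $str                         # with IFS=:, sh splits $str on colons
echo "$# fields: $1, $2, $3"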
The ${VAR} construct is actually a special case of a more general class of constructs:

${VAR:-word}    If VAR is set and non-null, substitute its value; otherwise, substitute word.
${VAR:=word}    If VAR is set and non-null, substitute its value; otherwise, set VAR to word and substitute that.
${VAR:?word}    If VAR is set and non-null, substitute its value; otherwise, print word and exit.
${VAR:+word}    If VAR is set and non-null, substitute word; otherwise, substitute nothing.

The above patterns test whether VAR is set and non-null. Without the colon, they only test whether VAR is set.
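For example, here is a minimal sketch using the most common of these, the default-value construct (TMPDIR is just a conventional example):

#!/bin/sh
# Use $TMPDIR if the user has set it; otherwise fall back to /tmp.
tmpdir=${TMPDIR:-/tmp}
echo "Scratch files will go in $tmpdir"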
sh supports a limited form of pattern-matching. The operators are

*       Matches any string, including the empty string.
?       Matches any single character.
[...]   Matches any one of the characters between the brackets; ranges such as [a-z] are allowed.
When an expression containing these characters occurs in the middle of a command, sh substitutes the list of all files whose name matches the pattern. This is known as ``globbing.'' Otherwise, these are used mainly in the case construct.
As a special case, when a glob begins with * or ?, it does not match files that begin with a dot. To match these, you need to specify the dot explicitly (e.g., .*, /tmp/.*).
Note to MS-DOS users: under MS-DOS, the pattern *.* matches every file. In sh, it matches every file that contains a dot.
If you say something like
echo * MAKE $$$ FAST *
it won't do what you want: first of all, sh will expand the *s and replace them with a list of all the files in the current directory. Then, since any number of tabs or blanks can separate words, it will compress the three spaces into one. Finally, it will replace the first instance of $$ with the PID of the shell. This is where quoting comes in.
sh supports several types of quotes. Which one you use depends on what you want to do.
Just as in C strings, a backslash (``\'') removes any special meaning from the character that follows. If the character after the backslash isn't special to begin with, the backslash has no effect.
The backslash is itself special, so to escape it, just double it: \\.
Single quotes, such as
'foo'
work pretty much the way you'd expect: anything inside them (except a single quote) is quoted. You can say
echo '* MAKE $$$ FAST *'
and it'll come out the way you want it to.
Note that a backslash inside single quotes also loses its special meaning, so you don't need to double it. There is no way to have a single quote inside single quotes.
Double quotes, such as
"foo"
preserve spaces and most special characters. However, variables and backquoted expressions are expanded and replaced with their value.
If you have an expression within backquotes (also known as backticks), e.g.,
`cmd`
the expression is evaluated as a command, and replaced with whatever the expression prints to its standard output. Thus,
echo You are `whoami`
prints
You are arensb
(if you happen to be me, which I do).
sh understands several built-in commands, i.e., commands that do not correspond to any program. These commands include:
( commands ) executes commands in a subshell. That is, it runs them as if they were a single command. This is useful when I/O redirection is involved, since you can pipe data to or from a mini-script inside a pipeline.
The { commands; } variant is somewhat more efficient, since it doesn't spawn a true subshell. This also means that if you set variables inside of it, the changes will be visible in the rest of the script.
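Here is a small sketch of the difference (the variable and values are arbitrary):

#!/bin/sh
x=before
( x=subshell )
echo "after ( ): x=$x"    # still "before": the change was lost
{ x=braces; }
echo "after { }: x=$x"    # now "braces": no subshell was spawned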
set, with no arguments, prints the values of all variables.
set -x turns on the x option to sh; set +x turns it off.
set args... sets the command-line arguments to args.
Bear in mind that the list of builtins varies from one implementation to another, so don't take this list as authoritative.
sh supports several flow-control constructs, which add power and flexibility to your scripts.
The if statement is a simple conditional. You've seen it in every programming language. Its syntax is
if condition ; then
commands
[elif condition ; then
commands]...
[else
commands]
fi
That is, an if-block, optionally followed by one or more elif-blocks (elif is short for ``else if''), optionally followed by an else-block, and terminated by fi.
The if statement pretty much does what you'd expect: if condition is true, it executes the if-block. Otherwise, it executes the else-block, if there is one. The elif construct is just syntactic sugar, to let you avoid nesting multiple if statements.
#!/bin/sh
myname=`whoami`
if [ $myname = root ]; then
    echo "Welcome to FooSoft 3.0"
else
    echo "You must be root to run this script"
    exit 1
fi
The more observant among you (or those who are math majors) are thinking, ``Hey! You forgot to include the square brackets in the syntax definition!''
Actually, I didn't: [ is actually a command, /bin/[, and is another name for the test command. See below for details.
The condition can actually be any command. If it returns a zero exit status, the condition is true; otherwise, it is false. Thus, you can write things like
#!/bin/sh
user=arnie
if grep $user /etc/passwd; then
    echo "$user has an account"
else
    echo "$user doesn't have an account"
fi
The while statement should also be familiar to you from any number of other programming languages. Its syntax in sh is

while condition ; do
    commands
done
As you might expect, the while loop executes commands as long as condition is true. Again, condition can be any command, and is true if the command exits with a zero exit status.
A while loop may contain two special commands: break and continue.
break exits the while loop immediately, jumping to the next statement after the done.
continue skips the rest of the body of the loop, and jumps back to the top, to where condition is evaluated.
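Here is a minimal sketch using all three (it counts with expr, since sh itself has no arithmetic):

#!/bin/sh
i=0
while [ $i -lt 10 ]; do
    i=`expr $i + 1`
    if [ $i -eq 3 ]; then
        continue        # skip 3
    fi
    if [ $i -gt 5 ]; then
        break           # give up after 5
    fi
    echo $i
done

This prints 1, 2, 4, and 5, each on its own line.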
The for loop iterates over all of the elements in a list. Its syntax is

for var in list ; do
    commands
done

list is zero or more words. The for construct will assign the variable var to each word in turn, then execute commands. For example:
#!/bin/sh
for i in foo bar baz "do be do"; do
    echo "$i"
done
will print
foo
bar
baz
do be do
A for loop may also contain break and continue statements. They work the same way as in the while loop.
The case construct works like C's switch statement, except that it matches patterns instead of numerical values. Its syntax is
case expression in
pattern)
commands
;;
...
esac
expression is a string; this is generally either a variable or a backquoted command.
pattern is a glob pattern (see globbing).
The patterns are evaluated in the order in which they are seen, and only the first pattern that matches will be executed. Often, you'll want to include a ``none of the above'' clause; to do this, use * as your last pattern.
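For example, here is a minimal sketch of a common use of case, dispatching on a command-line argument (the start/stop messages are placeholders):

#!/bin/sh
case "$1" in
start)
    echo "Starting the daemon"
    ;;
stop)
    echo "Stopping the daemon"
    ;;
*)
    echo "Usage: $0 {start|stop}" 1>&2
    exit 1
    ;;
esac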
A command's input and/or output may be redirected to another command or to a file. By default, every process has three file descriptors: standard input (0), standard output (1) and standard error (2). By default, each of these is connected to the user's terminal.
However, one can do many interesting things by redirecting one or more file descriptors:
command <<word

This construct isn't used nearly as often as it could be. It causes command's standard input to come from... standard input, but only until word appears on a line by itself. Note that there is no space between << and word.
This can be used as a mini-file within a script, e.g.,
cat > foo.c <<EOT
#include <stdio.h>
main()
{
    printf("Hello, world!\n");
}
EOT
It is also useful for printing multiline messages, e.g.:
line=13
cat <<EOT
An error occurred on line $line.
See page 98 of the manual for details.
EOT
As this example shows, by default, << acts like double quotes (i.e., variables are expanded). If, however, word is quoted, then << acts like single quotes.
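For instance, quoting word suppresses variable expansion in the body:

cat <<'EOT'
The variable $HOME is not expanded here.
EOT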
command1 | command2

Creates a pipeline: the standard output of command1 is connected to the standard input of command2. This is functionally identical to

command1 > /tmp/outfile
command2 < /tmp/outfile

except that no temporary file is created, and both commands can run at the same time.
Any number of commands can be pipelined together.
If any of the redirection constructs is preceded by a digit, then it applies to the file descriptor with that number, rather than the default (0 or 1, as the case may be). For instance,

command > filename 2>&1

redirects command's standard output to filename, then associates file descriptor 2 (standard error) with the same file as file descriptor 1 (standard output), so both end up in filename.
This is also useful for printing error messages:

echo "Danger! Danger!" 1>&2
Note that I/O redirections are parsed in the order they are encountered, from left to right. This allows you to do fairly tricky things, including throwing out standard output, and piping standard output to a command.
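For example, here is one way to throw away a command's standard output while piping its standard error to another command (somecommand is a stand-in for whatever you are running):

somecommand 2>&1 > /dev/null | grep -i error

Reading left to right: 2>&1 points standard error at the pipe (where standard output currently goes), then > /dev/null redirects only standard output, leaving standard error on the pipe.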
When a group of commands occurs several times in a script, it is useful to define a function. Defining a function is a lot like creating a mini-script within a script.
A function is defined using

funcname () {
    commands
}

and is invoked like any other command:

funcname args...
You can redirect a function's I/O, embed it in backquotes, etc., just like any other command.
One way in which functions differ from external scripts is that the shell does not spawn a subshell to execute them. This means that if you set a variable inside a function, the new value will be visible outside of the function.
A function can use return n to terminate with an exit status of n. Obviously, it can also exit n, but that would terminate the entire script.
A function can take command-line arguments, just like any script. Intuitively enough, these are available through $1, $2... $9 just like the main script.
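As a sketch, here is a small logging function (the name log and the message format are arbitrary):

#!/bin/sh
# log - print a timestamped message to standard error
log () {
    echo "`date`: $1" 1>&2
    return 0
}

log "starting up"
log "all done"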
There are a number of commands that aren't part of sh, but are often used inside sh scripts. These include:
basename pathname prints the last component of pathname:
basename /foo/bar/baz
prints
baz
The complement of basename: dirname pathname prints all but the last component of pathname, that is, the directory part:
dirname /foo/bar/baz
prints
/foo/bar
/bin/[ is another name for /bin/test. It evaluates its arguments as a boolean expression, and exits with an exit code of 0 if it is true, or 1 if it is false.
If test is invoked as [, then it requires a closing bracket ] as its last argument. Otherwise, there must be no closing bracket.
test understands the following expressions, among others:

-f filename      True if filename exists and is a regular file.
-d filename      True if filename exists and is a directory.
-r filename      True if filename exists and is readable.
-w filename      True if filename exists and is writable.
s1 = s2          True if the strings s1 and s2 are identical.
s1 != s2         True if the strings s1 and s2 are not identical.
n1 -eq n2        True if the numbers n1 and n2 are equal (-ne, -lt, -le, -gt, -ge work analogously).
! expr           True if expr is false.
expr1 -a expr2   True if both expr1 and expr2 are true.
expr1 -o expr2   True if either expr1 or expr2 is true.
Note that lazy evaluation does not apply, since all of the arguments to test are evaluated by sh before being passed to test. If you stand to benefit from lazy evaluation, use nested ifs.
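For instance, suppose $config (a hypothetical variable) names a file that may not exist. In

if [ -f "$config" -a `grep -c root "$config"` -gt 0 ]; then

the backquoted grep runs even when the file is missing, because sh evaluates all of the arguments before test is ever invoked. Nested ifs avoid this:

if [ -f "$config" ]; then
    count=`grep -c root "$config"`
    if [ "$count" -gt 0 ]; then
        echo "$config mentions root"
    fi
fi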
echo is a built-in in most implementations of sh, but it also exists as a standalone command.
echo simply prints its arguments to standard output. It can also be told not to append a newline at the end: under BSD-like flavors of Unix, use

echo -n "string"
Under SystemV-ish flavors of Unix, use

echo "string\c"
Awk (and its derivatives, nawk and gawk) is a full-fledged scripting language. Inside sh scripts, it is generally used for its ability to split input lines into fields and print one or more fields. For instance, the following reads /etc/passwd and prints out the name and uid of each user:
awk -F : '{print $1, $3 }' /etc/passwd
The -F : option says that the input records are separated by colons. By default, awk uses whitespace as the field separator.
Sed (stream editor) is also a full-fledged scripting language, albeit a less powerful and more convoluted one than awk. In sh scripts, sed is mainly used to do string substitution: the following script reads standard input, replaces all instances of ``foo'' with ``bar'', and writes the result to standard output:
sed -e 's/foo/bar/g'
The trailing g says to replace all instances of ``foo'' with ``bar'' on a line. Without it, only the first instance would be replaced.
tee [-a] filename reads standard input, copies it to standard output, and saves a copy in the file filename.
By default, tee empties filename before it begins. With the -a option, it appends to filename.
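For example, tee is handy when you want to watch a command's output and keep a copy of it at the same time:

make 2>&1 | tee make.log

This shows the build output on the terminal while saving a copy in make.log.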
Unfortunately, there are no symbolic debuggers such as gdb for sh scripts. When you're debugging a script, you'll have to rely on the tried and true method of inserting trace statements, and on some useful options to sh:
The -n option causes sh to read the script but not execute any commands. This is useful for checking syntax.
The -x option causes sh to print each command to standard error before executing it. Since this can generate a lot of output, you may want to turn tracing on just before the section that you want to trace, and turn it off immediately afterward:
set -x
# XXX - What's wrong with this code?
grep $user /etc/passwd 1>&2 > /dev/null
set +x
Here follow a few tips on style, as well as one or two tricks that you may find useful.
The advantages of sh are that it is portable (it is found on every flavor of Unix, and is reasonably standard from one implementation to the next), and can do most things that you may want to do with it. However, it is not particularly fast, and there are no good debugging tools for sh scripts.
Therefore, it is best to keep things simple and linear: do A, then do B, then do C, and exit. If you find yourself writing many nested loops, or building awk scripts on the fly, you're probably better off rewriting it in Perl or C.
If there's any chance that your script will need to be modified in a predictable way, then put a customization variable near the top of the script. For instance, if you need to run gmake (installed, say, in /usr/local/bin), you might be tempted to write

/usr/local/bin/gmake args

However, someone else might have gmake installed somewhere else, so it is better to write

GMAKE=/usr/local/bin/gmake

near the top of the script, and invoke it as $GMAKE args elsewhere.
Functions are neat, but sh is not Pascal or C. In particular, don't try to encapsulate everything inside a function, and avoid having functions call each other. I once had to debug a script where the function calls were six deep at times. It wasn't pretty.
Remember that you can put newlines in single- or double-quoted strings. Feel free to use this fact if you need to print out a multi-line error message.
Let's say that your script allows the user to edit a file. It might be tempting to include the line
vi $filename
in your script. But let's say that the user prefers to use Emacs as his editor. In this case, he can set $VISUAL to indicate his preference.
However,
$VISUAL $filename
is no good either, because $VISUAL might not be set.
So use
: ${VISUAL:=vi}
$VISUAL $filename
to set $VISUAL to a reasonable default, if the user hasn't set it.
As with any programming language, it is very easy to write sh scripts that don't do what you want, so a healthy dose of paranoia is a good thing. In particular, scripts that take input from the user must be able to handle any kind of input. CGI-bin scripts will almost certainly be given not only incorrect, but malicious input. Errors in scripts that run as root or bin can cause untold damage as well.
As we saw above, the way scripts work, Unix opens the file to find out which program will be the file's interpreter. It then invokes the interpreter, and passes it the script's pathname as a command-line argument. The interpreter then opens the file, reads it, and executes it.
From the above, you can see that there is a delay between when the OS opens the script, and when the interpreter opens it. This means that there is a race condition that an attacker can exploit: create a symlink that points to the setuid script; then, after the OS has determined the interpreter, but before the interpreter opens the file, replace that symlink with some other script of your choice. Presto! Instant root shell!
This problem is inherent to the way scripts are processed, and therefore cannot easily be fixed.
Compiled programs do not suffer from this problem, since a.out (compiled executable) files are not closed then reopened, but directly loaded into memory. Hence, if you have an application that needs to be setuid, but is most easily written as a script, you can write a wrapper in C that simply exec()s the script. You still need to watch out for the usual problems that involve writing setuid programs, and you have to be paranoid when writing your script, but all of these problems are surmountable. The double-open problem is not.
The very first statement in your script should be
unset IFS

which restores the input field separator to its default behavior (splitting on blanks, tabs, and newlines). Otherwise, you inherit $IFS from the user, who may have set it to some bizarre value in order to make sh parse strings differently from the way you expect, and induce weird behavior.
Right after you set $IFS, make sure you set the execution path. Otherwise, you inherit it from the user, who may not have it set to the same value as you do.
In particular, the user might have ``.'' (dot) as the first element of his path, and put a program called ls or grep in the current directory, with disastrous results.
In general, never put ``.'' or any other relative directory on your path.
I like to begin by putting the line
PATH=
at the top of a new script, then add directories to it as necessary (and only add those directories that are necessary).
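For example, a script that only needs a few standard utilities might begin with something like this (the exact directories depend on your flavor of Unix):

PATH=/bin:/usr/bin
export PATH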
Remember that the expansion of a variable might include whitespace or other special characters, whether accidentally or on purpose. To guard against this, make sure you double-quote any variable that should be interpreted as a single word, or which might contain unusual characters (i.e., any user input, and anything derived from that).
I once had a script fail because a user had put a square bracket in his GCOS field in /etc/passwd. You're best off just quoting everything, unless you know for sure that you shouldn't.
Remember that variables may not be set, or may be set to the null string. For instance, you may be tempted to write
if [ $answer = yes ]; then
However, $answer might be set to the empty string, so sh would see if [ = yes ]; then, which would cause an error. Better to write
if [ "$answer" = yes ]; then
The danger here is that $answer might be set to -f, so sh would see if [ -f = yes ]; then, which would also cause an error.
Therefore, write
if [ x"$answer" = xyes ]; then
which avoids both of these problems.
The C shell, csh, and its variant tcsh, are fine interactive shells (I use tcsh), but csh is a lousy shell for writing scripts. See Tom Christiansen's article, Csh Programming Considered Harmful, for the gory details.