Philosophy of UNIX Development

Jeff Foster
Published in Ingeniously Simple
5 min read · Jul 31, 2019


Unix is a fascinating operating system. Originally conceived at Bell Labs in the late 1960s, it was born out of frustration with the OS known as “Multics” (multiplexed information and computing service). Unix is now over 50 years old (!) and the Linux implementation powers huge swathes of the Internet.

So — why is Unix so popular?

In my mind, Unix’s success comes from its philosophical approach to development. The UNIX philosophy was documented by Doug McIlroy in the Bell System Technical Journal in 1978:

1. Make each program do one thing well. To do a new job, build afresh rather than complicate old programs by adding new “features”.

2. Expect the output of every program to become the input to another, as yet unknown, program. Don’t clutter output with extraneous information. Avoid stringently columnar or binary input formats. Don’t insist on interactive input.

3. Design and build software, even operating systems, to be tried early, ideally within weeks. Don’t hesitate to throw away the clumsy parts and rebuild them.

4. Use tools in preference to unskilled help to lighten a programming task, even if you have to detour to build the tools and expect to throw some of them out after you’ve finished using them.

This was over 40 years ago, and captures SOLID (single responsibility principle, open/closed), microservices, functional pipelines, agile and the spirit of DevOps!

For far more detail about the Unix philosophy, read this book (freely available here but buy a copy to support the author!).

Let’s look at some examples of the Unix philosophy in action.

Do one thing well

Make each program do one thing well. To do a new job, build afresh rather than complicate old programs by adding new “features”.

cat does exactly one thing. It concatenates files and displays them on standard output. That’s all it does. It doesn’t do pagination. It doesn’t offer search functionality. It just does exactly what it says on the tin and no more.
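A minimal illustration of that single purpose (the file names here are made up for the example):

```shell
# Create two small files, then concatenate them to standard output
printf 'first\n'  > a.txt
printf 'second\n' > b.txt
cat a.txt b.txt   # prints "first" then "second"
```

That really is the whole job: any paging, searching or formatting is left to other programs further down the pipeline.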

tr is similar. It translates (or deletes) characters, reading from standard input and writing the result to standard output.

tr -d aeiouAEIOU < file # Display file without vowels
tr eao 340 < file # Partially leet speak file

true and false are perhaps the best examples of doing one thing well. true does nothing, successfully (it exits with status 0)! false does nothing, unsuccessfully (it exits with a non-zero status).

false && echo Hi    # Prints nothing (false fails, so echo never runs)
true && echo Hi     # Prints Hi
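Their only observable behaviour is the exit status, which the shell exposes through the special parameter $?:

```shell
true;  echo $?   # prints 0 (success)
false; echo $?   # prints 1 (failure)
```

That single bit of output is exactly what makes them useful as building blocks in conditionals and loops.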

Composition

“Expect the output of every program to become the input to another”

In Unix, most programs read from standard input and write to standard output in a well-understood textual format. With a few shell operators, such as |, > and <, we can feed the output of one program into another. Let’s look at some examples:

In this example, we use cat to output the contents of a file and feed the output into wc, which counts the number of lines.

cat foo.txt | wc -l
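As an aside, wc can read standard input directly via redirection, so the same count needs no extra cat process:

```shell
# Equivalent to the pipeline above, but wc reads the file itself
wc -l < foo.txt
```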

In this example, we use history to find our most frequently used commands by combining it with cut, sort, uniq and head.

history | cut -f5 -d" " | sort -rn | uniq -c | sort -rn | head

xargs is the ultimate swiss-army knife allowing you to build up commands from standard output. Let’s use it to delete all “.tmp” files in the current directory after using find to locate them.

find . -type f -name '*.tmp' | xargs rm
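One caveat: file names containing spaces or newlines will confuse xargs’ default whitespace-based parsing. find’s -print0 and xargs -0 (supported by GNU and BSD implementations) sidestep this by separating names with NUL bytes:

```shell
# -print0 emits NUL-separated names; xargs -0 reads them back,
# so names with spaces survive intact. "--" guards against names
# that start with a dash being treated as options to rm.
find . -type f -name '*.tmp' -print0 | xargs -0 rm --
```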

Everything is a file

In UNIX everything is a file (or more precisely, everything is a stream of bytes). This means that the same APIs/commands can be used for reading from a CD-ROM drive, writing to a network socket or finding out CPU info.

For example, the entire /proc file system on Linux isn’t really backed by files on disk — it’s a dynamic view of kernel information exposed through the ordinary file API.

Some examples:

cat /proc/cpuinfo               # Displays your CPU info exposed as a file
foo > /dev/null                 # Redirect output into a file called
                                # null (which discards everything)
od -vAn -N1 -td1 < /dev/urandom # Display a random 1 byte number
                                # (via https://unix.stackexchange.com/a/268960)

Automation

Long before “automate all the things” became a slogan, Unix was there, errr, automating all the things.


cron has been automating all the things for the last 40+ years. Cron jobs are scheduled scripts that can run at fixed times or fixed intervals.

Each user on a Unix system has a set of scheduled tasks, visible using the crontab command. The file has a very simple format that gives the schedule and the command to run.
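Concretely, each crontab entry is five time fields (minute, hour, day of month, month, day of week) followed by the command to run. A sketch of one entry, with a made-up script path for illustration:

```shell
# min  hour  dom  mon  dow   command
30     2     *    *    1     /home/alice/backup.sh   # 02:30 every Monday (hypothetical path)
```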

The at command is a friendlier alternative for one-off jobs; here’s an example of firing a command at 11:45 on Jan 31.

echo "cc -o foo foo.c" | at 1145 jan 31

Puppet, Chef, CFEngine, Ansible: all of these DevOps tools were born and bred on Unix-based systems.

If you are on Windows now, you can use a Linux terminal (thanks to Windows Subsystem for Linux).

Even if you aren’t going to actively use Unix for development, it’s definitely worth understanding the basics of how Unix software is written and the philosophy that underpins it.

I’ll end with a quote from Brian Kernighan and Rob Pike. I think if you replace the word UNIX with whatever system you’re building, it’s a great philosophy for software design.

Even though the UNIX system introduces a number of innovative programs and techniques, no single program or idea makes it work well. Instead, what makes it effective is the approach to programming, a philosophy of using the computer. Although that philosophy can’t be written down in a single sentence, at its heart is the idea that the power of a system comes more from the relationships among programs than from the programs themselves. Many UNIX programs do quite trivial things in isolation, but, combined with other programs, become general and useful tools.


Jeff Foster is Director of Technology and Innovation at Redgate.