====== Paul T =====
For me, understanding Unix was the first step to understanding FBP.
Unix is *very* much closer to FBP than Windows, CMS, VAX/VMS, etc. are. In fact, one could strip out a bunch of noise from Unix and be left with an implementation of FBP. I’ve been toying with doing just that with Linux...
I’m trying to remember. I *think* that the best overview of a Unix-like system is
http://www.amazon.com/Concurrent-Euclid-Addison-Wesley-computer-science/dp/0201106949/ref=sr_1_1?ie=UTF8&qid=1449861664&sr=8-1&keywords=concurrent+euclid+tunis
It was written for people who already understand computing and don’t need hand-holding “now press the left-arrow key” namby-pamby. 297 pages of discussion about Unix-like operating systems, including an assembler listing for a bare O/S. Holt’s claim to fame is his PhD thesis, which brought an understanding of deadlocks to the world.
The next best book is
http://www.amazon.com/Software-Tools-Brian-W-Kernighan/dp/020103669X/ref=sr_1_1?ie=UTF8&qid=1449861830&sr=8-1&keywords=software+tools
which shows how to turn FORTRAN(!) into FBP. I used it (RATFOR) to get Unix-like behaviour on VAX/VMS at Mitel. (There’s also a version of the book that uses Pascal.)
These days, one can run multiple operating systems on the same computer using VirtualBox.
pt
====== Paul M =====
This whole Unix topic is fascinating though, and I'm starting to see why FBP had such appeal to Unix-trained people - and presumably vice versa!
What I don't understand is why so few people picked up on the strong similarities - unless it was just those people who were using Unix in an FBP-like style... In other words, they (you) had already made the paradigm shift, but in the Unix world!
====== Paul T =====
The Unix story is varied. When it started to become popular I was in University. There was no open-source at the time and the source licence for Unix was like $50k. The only way you could see the source, or even get to use Unix, was to be in a university that had a source licence (cheaper (free?) for universities).
At the same time, I did the O/S course and had to read “Concurrent Euclid, the UNIX System, and Tunis” by Ric Holt. So, I got to understand intimately what was under the hood.
Then Tanenbaum came out with his book and it didn’t really take off.
Then Torvalds open-sourced Linux, and that is what started to gain popularity.
At the time, Linux was ridiculously hard to install - you had to build it from source code, tweaking parameters to match your own system. I think I gave it a try and gave up.
At the same time, the competing O/S’es were VAX/VMS, VM/CMS, and Windows 3.11 - well, and OS/2 (which was ridiculous to install/use, and was promoted by IBM, which people hated as much as Microsoft is hated nowadays).
The simplicity of bolting components together via Unix pipes was essentially unknown except to the small number of grad students who were fortunate enough to have used Unix in grad school.
I lucked out, without being a grad, because my buddy was a grad student and Small C had just been published in Dr. Dobb’s. It was written in C. The only C compilers available at the time were on Unix V6 and V7. My buddy and I went into his grad lab and typed the whole compiler in by hand into a PDP11 running Unix V7. At 2am that morning I got to see the first output from the compiler and instantly understood what a compiler “does”. I decided to devote my 4th year EE thesis to writing a C compiler myself. My thesis prof was a member of CSRI (Computer Systems Research Institute) and I got to use Unix for that whole year.
(At the end of the year, my prof called me in and grinned at me - he told me that just about every other student in 4th year couldn’t write one page of code without it containing a fatal error, yet I had produced a whole compiler. Then, he offered me a job in Halifax with the military, and I turned it down. Sigh.)
I got 7 job offers that year. I turned most of them down because they didn’t involve Unix. I accepted Mitel’s offer, even though they used VAX/VMS (1 minute, literally, to spawn a process!!!).
I ported the RATFOR tools to VAX/VMS, so that I could have a more productive environment (still 1 minute to start each process in a pipeline, sigh).
When Linux began to gain traction, it was taken over by various competing distributions (distros). They competed for market share based on feature sets and windowing, and completely lost the original simplicity and light weight of Unix. So, again, most people never got to see what Unix was about.
What *you* did with FBP was to discover the simplicity of Unix - bolted onto a clunky operating system.
Note that many of the progenitors of Unix came from CSRI at UofT. Kernighan did his undergrad at UofT. I rubbed shoulders with Rob Pike at CSRI while I was doing my C compiler (I doubt that he remembers me). Holt taught me compilers and O/S’es. CSRI was populated with minds who relished simplicity and worked hard to achieve it. Henry Spencer (most famous for regex) and several others whose names don’t come to mind at the moment. I believe that Berkeley 4.1 Unix was mostly developed at UofT, then released, without attribution, by Berkeley. [Another floor of the CSRI building housed Marshall McLuhan’s office - where he tore a strip off of John Lennon for being a tool of the establishment]. The UofT Comp Sci department (different from CSRI) most famously developed 3D realism (Alain Fournier) and lots of Lisp A.I. (“Planner”, I think).
I think that I backed into FBP because I already knew the beauty of Unix and that I studied the Software Tools book and because I became one of the few experts in porting the Concurrent Euclid language (which caused me to become expert in multi-processing and concurrency, because it was built-in to the Concurrent Euclid spec). [Euclid was designed to compete for the US Military standard language, but Ada won out].
So, I knew the beauty of bolting components together because of Unix and because of multi-processing. I wanted to use processes for everything, but, I couldn’t because Concurrent Euclid’s Hoare Monitors weren’t fast enough to use for building RS232 drivers. It could only handle 1200 baud, whereas I needed 9600 baud (on a MC6809).
And *that* set me on a quest to figure out how to make processes “cheaper”, so that I could use them to build device drivers. [Unix device drivers could not use processes, either, and resorted to very hoary ad-hoc C code - what a mess]. I wanted to bolt components together to make device drivers, where processes could *not* be used.
Interesting! So only you (and your co-workers) and early Unix users understood how simple things could be. Linux has actually destroyed Unix. Steam Engine Time came and went in a short number of years. It’s coming back now because of multiple core programming.
Wow, one could even use a full-screen editor (Vi) on Unix over 300 baud. [My C compiler was mostly written at home in the evenings over 300 baud]. I remember seeing the first revision of Berkeley 4.2 - a grad student was editing in 2D, typing many commands well ahead of the system - what he typed would appear on a blank screen, then seconds later the rest of the screen would fill in, edits already accomplished.
I would love to see a project which strips Linux back to its Unix roots, then bolts visual FBP onto it instead of the Bourne shell…
What does this say about popularizing O/S’es and languages??? Something we need to think about.
The current mentality is that “functional programming” will solve all of the problems in the world - eventually (just like OOP was supposed to :-).
Hmm, looking at Functional Jobs reveals interesting things
https://functionaljobs.com
The “top” functional languages today are Clojure, Haskell (risky, hard), Erlang, (Scala, OCaml). Clojure is functional Lisp on the JVM. Erlang has its own VM (BEAM) which claims to easily support 60,000 simultaneous processes.
Go was developed at Google and supports the concept of “ports” and pipes. (Rob Pike was one of the developers of Go).
Hmmmm
====== Paul T =====
I was going to mention Unix “cat” but forgot.
Wildly simple - it takes at most one argument.
In Unix shell, everything is a “file descriptor” (think port). Every program gets an array of 16 standard file descriptors (ports). #0 is called stdin, #1 stdout, #2 stderr.
The shell syntax allows these file descriptors to be plumbed together, e.g. the stdout of one program can be joined to the stdin of the next program (think wires, but no feedback allowed).
Every program also gets an array of strings representing the parsed-out command line (the shell does the parsing, not the program). The 0th argument is always the name of the program (in Unix you can link the same program under different names, and the program will change its behaviour depending on its invocation name).
If there’s an argument on the command line, cat opens that file and pours its contents into the “stdout” file descriptor.
If there are no arguments, cat accepts input from stdin and pours it to stdout.
You can imagine how small the code is for this, even in C. Off the top of my head:
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    FILE *fd;
    char buff[1024];
    size_t n;
    if (argc == 1) { // no arguments, except program name
        fd = stdin;
    } else {
        fd = fopen(argv[1], "r");
    }
    if (fd == NULL) { // file not found
        exit(1);
    }
    n = fread(buff, 1, sizeof(buff), fd);
    while (n > 0) {
        fwrite(buff, 1, n, stdout);
        n = fread(buff, 1, sizeof(buff), fd);
    }
    exit(0);
}
(give or take some error checking and sending an error message to stderr).
This is small and simple and wildly useful. Unix doesn’t need a “type” or “print” command. If you want to display a file, just “cat” it. If you want to paginate the file, “cat” it to the “more” command. If you want to print it, “cat” it to the “lpr” (line printer) command.
> cat file.txt
> cat file.txt | more
> cat file.txt | lpr
GNU and Linux have corrupted this simplicity.
In Unix V7, you could print off the man page for *every* command in the system on fan-fold paper, and keep it in your briefcase with enough room left over to hold your lunch. Most commands required only one page to describe their function and all parameters. Of course, to print a man page on the printer, one would say something like (where troff is the command to text-format a troff text file):
man 0 cat | troff | lpr
And, with the standard library, any program could act like the shell by calling fork(), dup2() and exec(), so you could plumb programs together in C (or in the shell). My light-weight example “grash” implements an interpreter in 200 lines of C that plumbs Unix (FBP) components together at run time - only 8 instructions for the interpreter.
https://github.com/guitarvydas/vsh/blob/master/grash/grash.c
Woah! The source for v7 is browsable on the web…
http://minnie.tuhs.org/cgi-bin/utree.pl?file=V7/usr/src/cmd/cat.c
http://minnie.tuhs.org/cgi-bin/utree.pl?file=V7
pt