Still, I don't think init systems are a wicked problem (and so they don't need advanced complexity-management techniques). The wickedness comes from systemd's decision to do everything.
I like OpenRC for a laptop or workstation. Writing a service is as easy as writing a systemd unit file (with fewer options, of course, but I never really wanted those).
> The inconsistency comes from the author thinking "All this init stuff is ancient, and filled with absurd workarounds, hacks, and inconsistencies. I'll fix that!". Then as time passes discovering that "Oh wait, I should add a hack for this special case, and this one, and this one, guess these were really needed!" as bug reports come in over the years.
Don't forget the best one: "We don't support that uncommon use case, we will not accept nor maintain patches to support it, you shouldn't do it that way anyway, and we are going to make it impossible in the future" -- to something that's worked well for decades.
I would like to subscribe to your newsletter... no, but really, if you ever do get around to writing that, I want to read it. Ping me somehow, my Gmail username is the same as my HN username. Happy writing!
Why? systemd really fits the Unix-Haters Handbook. It is as anti-Unix as it can be (one command to rule them all, binary logs, etc.).
In the end it really seems that the mantra "GNU's Not Unix" is true. Just look at GNU/Linux: PulseAudio, systemd, polkit, Wayland, the big, fat Linux kernel.
For a brief period of time, binary configs were a thing. In the mobile world only, but still. It wasn't that people generally wanted them, but because random-seek I/O latency on early mobile devices (and especially on their eMMC storage) was atrocious.
Opening up tens or hundreds of XML config files for resync was disgustingly slow. I've developed software on Maemo and Scratchbox; the I/O wait for on-device config changes was a real problem. So of course someone came up with a modified concept of the Windows registry: a single, binary-format config store with a suitably "easy" API. As a result you'd sacrifice write/update latency for the cases where you wanted to modify configurations and gain a much improved read/refresh latency when reading them.
Of course that all broke down when reading a single config block required reading the entire freaking binary dump, and the config store itself was bigger than the block-device cache. Turns out that if you give app developers a supposedly easy and low-friction mechanism to store app configs, their respective PMs will go wild and demand that everything be configurable. Multiply by tens, even low hundreds of apps, each registering an idle-loop callback to re-read their configs to guarantee they would always have the correct settings ready. A system intended to improve config load/read times ended up generating increased demand for already constrained read I/O.
The EMACS Haters Handbook. Under the GFDL, of course.
No multithreading, I/O locks under Gnus/eww, glacially slow email header parsing under Gnus, a huge badass file for RMAIL if you don't like Gnus (instead of parsing a Maildir), and so on.
I have no real experience with mbox and POP3 (Maildir is what I’ve always used). But I still think you would need to partition mbox files, because that’s what you would do with physical mail (which is the basis of the protocol and everything around it). I kinda like Rmail.
The good news is that we now have alternative UIs in web/mobile, microkernel-based systems, and unikernels in high-level languages... all in production use.
They certainly came up with a lot of good one-liners for this book.
I wonder why Dennis Ritchie was so infuriated, though. He criticizes them for wanting simple functionality, but it's not because language is a powerful tool for solving problems; it's because it limits the potential of the platform to its functionality (which has been simplified and is in and of itself limited).
So this is confusing to me. Using language to solve problems is the advantage that Unix offers. But neither the authors nor Dennis care about this? Or they do care, in limited ways, but ultimately it's about something else?
I was thinking exactly this the other day while running and seeing an old, rusted lamp post on a rural street: "this was probably put there over 50 years ago, in the early seventies", and then I thought of things from "over 50 years ago" when I was a child, and well, WW2 was in the making. I don't know why I thought that, but it's probably also a sign of our age: WW2 was the biggest thing "from the past" that our families lived through or were touched by more or less directly.
But WW2 is also this black-and-white thing from our history books. As we get older we get to know more about how recent and relevant it is; we met people who lived it and told us about their actual experience. But it still feels like something that belongs to history rather than a recent event. That's kind of the "anything that happened before my birth I don't care about" attitude of today's teenagers.
I'm a Brit born in 66, and growing up I felt that WW2 was recent history. War films were a dominant genre in my early life, we visited the German defences on the French coast while camping as a teenager along with my Grandfather, who served there and visited some locations he remembered. Some buildings still had war damage. In many ways the world of the 70s felt closer to the war era than to nowadays. It was still the cold war, and that was just an extension of the post-war stalemate.
As a German born in '87, it didn't feel _that_ recent any more. But it was definitely close: both my granddads served in the war and were scarred for life by it, mentally and physically. Family history was a mess of war-torn biographies. I found a big, rusted old munition in the forest as a kid. Old bunkers and flak towers can still be seen in the cities, and many of the local kids in my hometown and age cohort adventured into the old mining shaft used as an air-raid shelter and saw the gas masks that were still there. And then there was the GDR (or DDR, Deutsche Demokratische Republik, in German) and the whole reunification, which happened when I was already alive (although I was a child at the time).
Thinking about my childhood visit to Ost-Berlin still makes me shiver with thoughts about all the suffering. Many of the buildings still had bullet holes and it felt like you could touch history.
That's interesting. I was also born in 1966, but in the US. WW2 didn't seem/feel all that recent to me, probably because it had mostly happened far away. I was interested in learning about it and read lots of books, and watched movies. The drive to visit relatives did go by an aircraft carrier (USS Essex) at the scrapyard, but other than that physical artifacts of the war were rare. And the only relative I had who fought in the war was a great-uncle, but he passed away when I was very young.
I have a hard copy of this from back in the day. It’s a great read and a mixture of historical artefact and still relevant criticism.
e.g. It’s really interesting reading about LISP machines but no-one’s building a new one. Equally, all the criticism of sendmail and csh is valid but no-one uses them anymore either.
Most of the reliability criticisms have been addressed over the years, but people are still trying to address the design of C, usually by replacing it. Equally, sh remains a problematic scripting language, but at least it’s reliably there, unlike its many alternatives.
Pretty much all of the benefits provided by the hardware of Lisp machines are provided by modern CPUs, just in more general ways.
Most of the benefit was pushing their interpreter into microcode, leaving more of the data bus free for actual data. Now we have ubiquitous I-caches, which give you a pseudo-Harvard architecture when it comes to the core's external bandwidth.
Some of the benefit was having a separate core with its own microcode doing some of the garbage-collection work. Now we have ubiquitous general multicore systems.
Etc.
> Equally, sh remains a problematic scripting language but at least it’s reliably there
I too still have a hard copy of this from way back. This book was my introduction to Unix, as I shifted from programming for DOS/Windows/NT to SunOS, and later, Linux. Despite the many issues (humorously) exposed by this book, the one thing that hooked me is what that quote above implies: It was accessible, durable, and thus worth taking the time to learn, warts and all.
Yeah, I learned an enormous amount from it when I encountered it (in hard copy of course) in 01996, and some of what I learned is now no longer relevant.
There are some people building new Lisp machines: https://opencores.org/projects/igor https://github.com/lisper/cpus-caddr https://interlisp.org/ http://pt.withington.org/publications/LispM.html http://pt.withington.org/publications/VLM.html https://github.com/dseagrav/ld http://www.aviduratas.de/lisp/lispmfpga/ https://groups.google.com/g/comp.lang.lisp/c/36_qKNErHAg https://frank-buss.de/lispcpu/
Also, Morello includes some Lisp-machine-like features. In my view knowing about the history of hardware architectures is far more important for designing new ones than for reproducing old ones.
>It’s really interesting reading about LISP machines but no-one’s building a new one
There have been two open-source Lisp machine OSes created in the last 10 or 15 years.
However, a big part of the power of the Symbolics/LMI machines was in the software itself (applications), and this is still proprietary code.
Reimplementing the Lisp machine applications would take quite a big effort.
I love Unix.
It's my favorite OS.
And I like it for its fundamental process model.
That combined with stdin/out and pipes.
All stitched together with a process aware shell.
Lots (most) OSes had a process concept. But in Unix, they not only existed, they were everywhere, they were dynamic, and they were "cheap". They were user accessible. A process with its ubiquitous stdin/out interface gave us great composability. We can click the processes together like Legos.
For example, VMS had processes. But after 4 years of using it, I never tossed processes around using it like I did on Unix. I never "shelled out" of an editor. I never & something into the background. Just never came up. One terminal, one process.
On Unix, however, oh yeah. The pipe construct on the command line, bang out of the editor, :r! in vi. And the ecosystem that was created out of this simple concept. The "Unix Way(tm)".
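A couple of those idioms, for anyone who hasn't lived them (a minimal sketch, assuming vi and a Bourne-style shell; the file names are made up):

    $ sort access.log | uniq -c | sort -rn | head   # the click-together pipeline
    $ long_build.sh &                               # "&" something into the background
    $ vi notes.txt                                  # then, inside vi:
    :r !date                                        # read a command's output into the buffer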
And anything was a process. A C program. A shell script. At this level, everything was a "system language".
Then they (those Unix wizard folks) made networking a well-behaved citizen in this process stdin/out world. `inetd` could turn ANYTHING (because everything had stdin/out) into a network server. This command is magic: `ls | cpio -ov | rsh otherhost cat > /dev/tape`
Does `ls` know anything about file archives? No. Does `cpio` know anything about networking, or tape drives? No. Heck, `cat` doesn't know anything about tape drives.
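For flavor, here's roughly how the inetd trick was wired up; a minimal sketch, assuming a classic inetd.conf and a hypothetical "myecho" service name that /etc/services maps to some TCP port:

    # /etc/inetd.conf: service, socket type, protocol, wait flag, user, program, args
    # inetd listens, accepts, and dups the socket onto the program's stdin/stdout,
    # so plain `cat` becomes a TCP echo server with zero networking code of its own
    myecho  stream  tcp  nowait  nobody  /bin/cat  cat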
You just could not, back in the day, plumb processes and computers together trivially like you could with Unix. Humans could do this. They didn't need to be wizard status, "admins", guys in white coats locked in the raised floor rooms, huffing Halon on the side. Assuming you could grok the arcane syntaxes (which, absolutely, were legion), you could make Unix do amazing things.
This flexibility allowed me to make all sorts of Rube-Goldbergian constructs of data flows and processes. Unix has always been empowering, and not constraining, once you accept it for what it is.
I've always liked the end of the anti-foreword:
> Here is my metaphor: your book is a pudding stuffed with apposite observations, many well-conceived. Like excrement, it contains enough undigested nuggets of nutrition to sustain life for some. But it is not a tasty pie: it reeks too much of contempt and of envy.
You forgot the last bit: "Bon appetite!"
Definitely the politest way anyone has ever been told to eat shit in human history.
Props for the inclusion of that in the book. "Mighty white of them", as they used to say in Bechuanaland.
My first instinct was that this phrase is racist, especially with you saying it was used in an ex-British southern African colony.
This was the best I could find as to its origins:
https://boards.straightdope.com/t/where-did-thats-mighty-whi...
As a side point, I believe David Cutler, the venerable OS engineer who programmed and designed three OSes, did not like Unix very much back in the 90s. I wonder what the reason was, and did he change his mind later?
It was because adding one to each of the letters in UNIX yields gibberish, but adding one to each of the letters in VMS gets WNT.
Era-appropriate joking aside: There's no actual evidence that Cutler held the views on Unix, or even on DEC's Eunice, that have been ascribed to xem from anecdotes by Armando Stettner and edits to Wikipedia and writing by G. Pascal Zachary. I and others went into more detail on this years ago: https://news.ycombinator.com/item?id=22814012
Really? VOJY could easily be a modern company, couldn't it? The .dot was parked though ... :)
Didn’t Cutler design Mach?
No.
* https://cs.cmu.edu/afs/cs/project/mach/public/www/people-for...
The only thing I remember is from the book Showstopper
https://retrocomputing.stackexchange.com/questions/14150/how...
[Cutler] expressed his low opinion of the Unix process input/output model by reciting "Get a byte, get a byte, get a byte byte byte" to the tune of the finale of Rossini's William Tell Overture.
Kind of crazy that what is possibly a throwaway remark by Cutler has become the basis for this whole "lore" of Dave hating Unix so much he went and made VMS (or variations on that theme). I guess it's a good story...
Almost all the major ways in which NT deviates from UNIX have strong cases as being architectural wins:
- A well-defined HAL for portability
- An object manager for unified resource lifecycle and governance
- Asynchronous I/O by default
- User-facing APIs bundled into independent “personalities” and decoupled from the kernel
The only real black mark I’m aware of is the move of the graphics subsystem into the kernel for performance, which I don’t think was Cutler’s idea.
As someone in the midst of transitioning to Linux for the first time ever, the thing is: I still kinda hate Unix, but my AI friends (Claude Code / Codex) are very good at Unix/Linux and the everything is a file nature of it is amenable to AI helping me make my OS do what I want in a way that Windows definitely isn't.
On UNIX the "everything is a file" idea quickly breaks down when networking, or features added post-UNIX System V, get used, but the meme still holds, apparently.
If you want everything really to be a file, that was fixed by the UNIX authors in Plan 9 and Inferno.
Well, it depends on what "file" means. The Linux interpretation would be that a file is something you can get a file descriptor for. And then the "everything is a file" mantra holds up better.
Windows is actually much closer to this limited, meaningless, form of the "everything is a file" meme. In Windows literally every kernel object is a Handle. A file, a thread, a mutex, a socket - all Handles. In Linux, some of these are file descriptors, some are completely different things.
Of course, this is meaningless, as you can't actually do any common operation, except maybe Close*, on all of them. So them being the same type is actually a hindrance, not a help - it makes it easier to accidentally pass a socket to a function that expects a file, and will fail badly when trying to, for example, seek() in it.
* to be fair, Windows actually has WaitForSingleObject / WaitForMultipleObjects as well, which I think does do something meaningful for any Handle. I don't think Linux has anything similar.
> Of course, this is meaningless, as you can't actually do any common operation, except maybe Close*, on all of them.
You can write and read on anything on Unix that "is a file". You can't open or close all of them.
Annoyingly, files come in 2 flavors, and you are supposed to optimize your reads and writes differently.
You can call write() and read() on any file descriptor, but it won't necessarily do something meaningful. For example, calling them on a socket in listen mode won't do anything meaningful. And many special files don't implement at least one of read or write - for example, reading or writing to many of the special files in /proc/fs doesn't do anything.
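A quick way to see that from the shell (a sketch, assuming Linux procfs semantics): /proc/self/mem supports read() in principle, but offset 0 is normally unmapped, so the call exists yet fails:

    $ head -c 16 /proc/self/mem
    head: error reading '/proc/self/mem': Input/output error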
Although many people nowadays mistake Linux for UNIX, it still isn't the same.
I was recently thinking that object orientation is kind of "everything is a file" 2.0, in the form of "everything is an object". I mean, of course, it didn't pan out that well. Haven't googled yet what people have already said about that. P.S. Big fan of your comments.
> object orientation is kind of everything is a file 2.0 in the form everything is an object
That is why I love Plan 9. 9P serves you a tree of named objects that can be byte addressed. Those objects are on the other end of an RPC server that can run anywhere, on any machine, thanks to 9p being architecture agnostic. Those named objects could be memory, hardware devices, actual on-disk files, etc. Very flexible and simple architecture.
I'd rather pick Inferno, as it improved on Plan 9's learnings, like the safe userspace in the form of Limbo, after the conclusion that throwing away Alef wasn't so great in the end.
Inferno was a commercial attempt at competing with Sun's Java. The Plan 9 folks had to shift gears, so they took Plan 9 and built a smaller, portable version of it in about a year. The Plan 9 and Inferno kernels share a lot of code and build system, so moving code between them is pretty simple.
The real interesting magic behind Plan 9 is 9P and its VFS design, so that leaves Inferno with one thing going for it: Dis, its userspace VM. However, Dis does not protect memory, as it was developed for MMU-less embedded use. It implicitly trusts the programmer not to clobber other programs' memory. It is also hopelessly stuck in 32-bit land.
These days Inferno is not actively maintained by anyone. There are a few forks in various states and a few attempts to make Inferno 64-bit, but so far no one has succeeded. You can check: https://github.com/henesy/awesome-inferno
Alef was abandoned because they needed to build a compiler for each arch and they already had a full C compiler suite. So they took the ideas from Alef and made the thread(2) C library. If you're curious about the history of Alef and how it influenced thread(2), Limbo and Go: https://seh.dev/go-legacy/
These days Plan 9 is still alive and well in the form of 9front, an actively developed fork. I know a lot of the devs, and some of them daily-drive their work via 9front running on actual hardware. I also daily-drive 9front via drawterm to a physical CPU server that also serves DNS and DHCP, so my network is managed via ndb. Super simple to set up vs other clunky operating systems.
And lastly, I would like to see a better Inferno but it would be a lot of work. 64 bit support and memory protection would be key along with other languages. It would make a better drawterm and a good platform for web applications.
> I would like to see a better Inferno but it would be a lot of work. 64 bit support and memory protection would be key along with other languages. It would make a better drawterm and a good platform for web applications.
Doesn't Wasm/WASI provide these same features already? That doesn't seem like "a lot of work", it's basically there already. Does Dis add anything compelling when compared to that existing technology stack?
Inferno was initially released in 1996, 21 years before WASM existed.
An Inferno built using WASM would be interesting. Though WASI would likely be supplanted by a Plan 9/Inferno interface, possibly with WASI compatibility. Instead of a hacked-up hypertext viewer, you start with a real portable virtual OS that can run hosted or native. Then you build whatever you'd like on top, like HTML renderers, JS interpreters, media players/codecs, etc. Your profile is a user account, so you get security for free using the OS mechanisms. Would make a very interesting platform.
I actually read a decent paper on that a while back
Unix, Plan 9 and the Lurking Smalltalk
https://www.humprog.org/~stephen/research/papers/kell19unix-...
Late binding is a bit out of fashion these days but it really brings a lot of cool benefits for composition.
There is also an interesting one from Xerox PARC,
"UNIX Needs A True Integrated Environment: CASE Closed"
http://www.bitsavers.org/pdf/xerox/parc/techReports/CSL-89-4...
For the TL;DR crowd:
"We 've painted a dim picture of what it takes to bring IPEs to UNIX. The problems of locating. user interfaces. system seamlessness. and incrementality are hard to solve for current UNIXes--but not impossible. One of the reasons so little attention has been paid to the needs of IPEs in UNIX is that UNIX had not had good examples of IPEs for inspiration. This is changing: for instance. one of this article's authors has helped to develop the Small talk IPE for UNIX (see the adjacent story). and two others of us are working to make the Cedar IPE available on UNIX.
What's more. new UNIX facilities. such as shared memory and lightweight processes (threads). go a long way toward enabling seamless integration. Of course. these features don't themselves deliver integration: that takes UNIX programmers shaping UNIX as they always have--in the context of a friendly and cooperative community. As more UNIX programmers come to know IPEs and their power. UNIX itself will inevitably evolve toward being a full IPE. And then UNIX programmers can have what Lisp and Small talk and Cedar programmers have had for many years: a truly comfortable place to program."
Some GOSIP (remember that?) implementations on some Unices did have files for network connections, but that was very much in the minority. Since BSD was the home of the first widely usable socket() implementation for TCP/IP, it became the norm: sockets are files, just not linked to any filesystem, and control is via connect()/accept() and setsockopt(), the networking equivalent of the Unix system-call dumping ground, ioctl().
I don't remember any of the ones I used having it, or else I missed it.
Kind of, sockets don't do seek().
Psst - don't tell().
Yeah, I was really confused when I learned that every device was simply a file in /dev, except the network interfaces. I never understood why there is no /dev/eth0 ...
That was back in the mid-90s, but even today I still don't understand why network interfaces are treated differently than other devices.
It's probably because Ethernet and early versions of what became TCP/IP were not originally developed on Unix, and weren't tied to its paradigms; they were ported to it.
Plan 9 does exactly this but all networking protocols live in /net - ethernet, tcp, udp, tls, icmp, etc. The dial string in the form of "net!address!service" abstracts the protocol from the application. A program can dial tcp!1.2.3.4!7788 or maybe udp!1.2.3.4!7788. How about raw Ethernet? /net/ether1!aabbccddeeff!12345. The dial(2) routine takes a dial string and returns an fd you read() and write(). Very simple networking API.
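Under the covers, dial(2) is just file operations on /net. A sketch of one TCP conversation's directory, per the ip(3) convention (the conversation number is whatever the stack hands out; "4" here is made up):

    % ls /net/tcp/4
    /net/tcp/4/ctl       # write "connect 1.2.3.4!7788" here to set up the call
    /net/tcp/4/data      # then plain read()/write() on this file is the byte stream
    /net/tcp/4/err
    /net/tcp/4/listen
    /net/tcp/4/local
    /net/tcp/4/remote
    /net/tcp/4/status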
What would it mean to write to a network interface? Blast everyone as multicast? Not that useful. But Plan9 had connections as files, though I’ve never tried.
That's a bad argument. What does it mean to write to a mouse device? To the audio mixer? To the i2c bus device? To a raw SCSI device (scanner or whatever)? Those are all not very useful either.
Especially since there actually is a very useful thing that writing to /dev/eth0 would do: Put a raw frame on the wire, and reading from it would read raw frames.
The problems of commercial Unix in 1993 are totally different from those of Linux in 2025.
Having observed my fair share of beginners transition from win to linux, the most common source of pain I've seen is getting used to the file permissions, and playing fast and loose with sudo.
I have the edition that came with the ‘Unix Barf Bag’. All very unfair really as those humble beginnings have kept many of us gainfully employed!!
> Unix was not designed for the Mac. What kind of challenge is there when you have that much RAM?
I want to write a systemd haters handbook.
Like:
1. You start and stop services with 'systemctl start/stop nginx'. But logs for that service can be read through an easy-to-remember 'journalctl -xeu nginx.service'. Why not 'systemctl logs nginx'? Nobody knows.
2. If you look at the built-in help for systemctl, the top-level options list things like `--firmware-setup` and `--image-policy`.
3. systemd unifies devices, mounts, and services into unit files with consistent syntax. Except where it doesn't. For example, there's a way to specify a retry policy for a regular service, but not for mount units (see the sketch below). Why? Nobody knows.
(To be clear, I _like_ systemd. But it definitely follows the true Unix philosophy of being wildly internally inconsistent.)
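To make point 3 concrete, a hedged sketch (the unit names are made up; Restart= and RestartSec= are real [Service] directives, and .mount units have no counterpart for them):

    # nginx.service (fragment): retrying on failure is one line
    [Service]
    ExecStart=/usr/sbin/nginx -g 'daemon off;'
    Restart=on-failure
    RestartSec=2s

    # data.mount (fragment): no Restart= exists here; a failed mount just stays failed
    [Mount]
    What=/dev/disk/by-label/data
    Where=/data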
I like systemd too. After working with it for a long time, a lot of the "wtf" moments eventually are made clear as having at least some semblance of a good reason behind the decision.
1. systemctl is the controller. Its job is to change and report on the state of units. journalctl is the query engine. Merging the query engine into the systemctl controller would make the controller bloated and complex, so a dedicated tool is the cleaner approach. I think you can also rip out the journal and use other tools if you so decide, making building logs into systemctl a bad idea.
2. systemd is a system manager, not just a service manager. It replaced not only the old init system but also a collection of other tools that managed the machine's core state.
3. A service runs a process, which can fail for many transient reasons. Trying again is a sensible and effective recovery strategy. A mount defines a state in the kernel. If it fails, it's almost always for a "hard" reason that an immediate retry won't fix. Retrying a failed mount would just waste time and spam logs during boot.
These are fine points, and there are rough edges, but:
1. `systemctl status nginx.service` suffices in many cases. journalctl is for when you need to dig deeper, and it demands many more options. You would have complained about "too noisy CLI arguments" if these were unified.
2. I am not sure how I should parse this. Do you mean there are too many arguments in total (2a), or that the man page or the help message is not ordered correctly (2b)?
(2a). If you just care about services, you already know [well] a handful of subcommands (start, stop, enable, etc.) and just use those, the other args don't get in your way. For example your everyday commands have safe, sane default options that you will not have to override 99% of the time.
Furthermore, this is much better than the alternative of having a dozen different utilities with non-trivial inter-utility interactions that have to be solved externally. Sometimes an application that does (just) one thing won't do well.
(2b). This is subjective (?). I have experienced a few week-long total internet outages (in Iran). I had to study the man pages and my offline resources in those contingencies, and have generally been (extremely) satisfied with the breadth, depth, and organization of the systemd docs. In the age of LLMs this is much less of a problem anyway. I think reading the man page of a well-known utility is not an everyday task, and for a one-off case you will grep the man page anyway.
3. Your point is ~valid. But automount exists for ephemeral resources. By default, we won't touch a failing drive without some precautions, at least. So fail-fast and no retry is not always wrong. Perhaps it is virtue signaling ... On my PC I don't want to retry anything if a mount fails. In fact I might even want it to fail to boot so that the failure doesn't go undetected.
Also, for something as critical as mounting, I would probably want other "smart" behavior as well (exponential backoff for network, email, alert, DB fail-over, etc.) and these require specific application knowledge.
So ... they are trying to prevent a foot gun.
> 1. `systemctl status nginx.service` suffices in many cases. journalctl is for when you need to dig deeper, and it demands many more options. You would have complained about "too noisy CLI arguments" if these were unified.
I'm not at all a systemd hater (I think it was needed and it's nowadays a very solid piece of software) but the logs thing should be totally tweakable when viewing it from `systemctl status` and it is n.... [goes to check the man page]
    -n, --lines=
        When used with status, controls the number of journal lines to show, counting from the most recent ones. Takes a positive integer argument, or 0 to disable journal output. Defaults to 10.

Oooh, so TIL.

OMG, TIL, too. This made my morning.
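In practice, that knob looks like this (a sketch, assuming a unit named nginx.service on a current systemd):

    $ systemctl status -n 50 nginx.service   # inline the last 50 journal lines
    $ systemctl status -n 0 nginx.service    # suppress journal output entirely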
I parsed (2) in the obvious way of: A manual should start with the common stuff 99% of people need and not with something obscure that you will only need once you are at the level that you know the tool you're using inside out.
That is like opening the manual for your dishwasher and reading a section about how you may check the control board's conformal coating after the warranty has expired. Useful when you need it and have the repair skills, but a bad way to start a manual.
That’s a tutorial or a getting started guide. The manual is a memory helper, like a tiny encyclopedia, not a teaching material.
tutorial vs reference https://diataxis.fr/
How does that change my point about an order by frequency of use being superior? If it is a memory helper, then the stuff people tend to use more often is certainly the stuff that needs to be looked up more.
That’s a variable order. I prefer a more consistent order like a default section structure (which a lot of man pages adopt) and an alphabetical order for flags (which a lot of man pages also adopt).
When I open a manual it's usually for flags and argument ordering, or argument format (for things like string formats or globbing). Some manuals are short enough that they can serve as a guide, but most assume domain knowledge.
What you want is a cheatsheet. There are a lot on the internet, and even some tools that collect them. But most practitioners write shell aliases and functions.
1. systemctl already supports logging output, and it's already overloaded.
2. The options are not filtered, so useful options like '--lines' get lost. E.g., what other options apply to "systemctl status"? The systemd documentation, in general, is a mess. It's good _reference_ documentation (like 'man') but not a good guide.
3. Network filesystems exist. And they can become unavailable for a time.
Systemd got better with time and I got better with it over time, which makes it acceptable for me now. I still miss SMF from Solaris years later, though. I'm sure there are better systems out there, but when the ubiquity is not there it's really hard to adopt them, especially in corporate environments. And then you have to learn two things if you want to use something else at home, which is already too much for me...
I liked SMF as well, but I do admit I “cheated” by using a website to make the XML service manifests.
+1 I think such writing would find its audience.
What I would like to see is something that is to systemd what PipeWire is to PulseAudio.
Before PulseAudio, getting audio to work properly was a struggle. PA introduced useful abstractions, but when it was rolled out it was a buggy mess. Eventually it got good over time. Then PipeWire came in, and it does more with less. The transition was so smooth I did not even realize I had been running it for a while; one day I just noticed it in the update logs.
systemd now works well enough, but it would be nice to get rid of that accumulated cruft.
systemd and pulseaudio are by the same guy (avahi too). He just writes shit software that sort of works.
Also, he has no regard for breaking userspace, to the point of getting scolded by Linus. But some of the ideas are good, and there is a lot of pioneering work that moves the needle. The trajectories of PulseAudio and systemd are similar; they just need cleaning up. PulseAudio got fixed up by PipeWire, whereas systemd is at a point in its lifecycle where that stage hasn't arrived yet.
Afaik one of the main problems with his software is that it tends to sacrifice ergonomics in the 99% common case for some obscure theoretical consideration.
This is of course about tradeoffs and about the complexity of the problems you're solving, but his software is full of choices that only make sense if you prioritize elegant code over elegant software, only to then grow into something that is neither.
Lennart worked at Red Hat when he was developing systemd. Red Hat's largest customers often have wacky, weird requirements that you would have never thought of unless you were in that specific customer's situation.
Good point.
It doesn't mean the requirements and solutions aren't wacky, weird, or inscrutable though.
Yeah sure, I didn't intend to paint it as if these problems were easy to solve. They are not.
Didn't take it that way, I was trying not to minimize the opposite point of view. systemd is a riddle wrapped up in a big ball of wtf.
There's a podcast [1] which features him as a guest talking about Linux in general. The main impressions I got from it: he is very confused about what UNIX is, and he apparently despises it.
I think he's well suited for his new employer (Microsoft).
[1] (in German) https://cre.fm/cre209-das-linux-system
The inconsistency comes from the author thinking "All this init stuff is ancient, and filled with absurd work arounds, hacks, and inconsistencies. I'll fix that!". Then as time passes discovering that "Oh wait, I should add a hack for this special case, and this one, and this one, guess these were really needed!" as bug reports come in over the years.
To be fair, this could happen to any of us, especially early in a career. But the real hubris is presuming that things are as they are without cause or reason, along with never really learning how things actually worked, or why.
I envision a layperson (which is sort of the understanding the author had of modern init systems when starting on systemd). Said person walks up to a complex series of gears, decides a peg is just there for no reason and looks unused, and pulls it out, only to have the whole mess go bananas. You can follow this logic through all of the half-baked, partially implemented services like timekeeping, DNS, and others that barely work correctly and go sideways if looked at funny.
I think if the author took their current knowledge, and this time wrote it from scratch, it could be far better.
However, there still seems to be a chip on their shoulder, with the idea of "I'll fix Linux!" persisting, when in reality these fixes just create immense complication with very minimal upside. So any rewrite would likely still be an over-complicated contraption.
When a complex system cannot be meaningfully reduced, another approach might be trying to reduce scope.
Current areas include managing services on a server, managing a single-user laptop, and enterprise features for fleets of devices/users.
There is some overlap at the core where sharing code is useful, but it feels like way more complexity than needed gets shipped to my laptop. I wonder how much could be shaved off by focusing on a single scenario.
Yet another approach is exposing internal state.
That way you turn one very complex system into a set of much simpler artificial systems whose interactions you can control.
In your example, that would mean having different kinds of configuration options for each of those scenarios, but still all in the same software.
One can argue that systemd tries this (for example, there are many kinds of services; see the sketch below). But in many cases it does the complete opposite, of both this and of reducing scope.
Still, I don't think init systems are a wicked problem (so they don't need advanced complexity-management techniques). The wickedness comes from systemd's decision to do everything.
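As a rough sketch of what "exposing internal state" already looks like in systemd (the unit name is hypothetical, and the property list is far from complete):

    systemctl show foo.service -p ActiveState,SubState,MainPID,NRestarts
    systemctl list-dependencies foo.service      # the computed dependency graph
    systemd-analyze critical-chain foo.service   # where startup time actually went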
I like OpenRC for a laptop or workstation. Writing a service is as easy as writing a systemd unit file (with fewer options, of course, but I never really wanted those); see the sketch below.
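Something like this, as a minimal OpenRC sketch; the daemon, its path, and its flags are hypothetical:

    #!/sbin/openrc-run
    # /etc/init.d/mydaemon -- minimal OpenRC service script

    description="Example daemon (hypothetical)"
    command="/usr/local/bin/mydaemon"
    command_args="--config /etc/mydaemon.conf"
    command_background=true        # let start-stop-daemon background it
    pidfile="/run/mydaemon.pid"

    depend() {
        need net                   # start after networking is up
    }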
> The inconsistency comes from the author thinking "All this init stuff is ancient, and filled with absurd work arounds, hacks, and inconsistencies. I'll fix that!". Then as time passes discovering that "Oh wait, I should add a hack for this special case, and this one, and this one, guess these were really needed!" as bug reports come in over the years.
Don't forget the best one: "We don't support that uncommon use case, we will not accept nor maintain patches to support it, you shouldn't do it that way anyway, and we are going to make it impossible in the future" -- to something that's worked well for decades.
“Those who do not understand Unix are condemned to reinvent it, poorly.” — Henry Spencer, 1987
I disagree. As much as I dislike a lot of stuff in systemd, it was the _first_ init system that actually cared about reliability.
It evolved organically, so it's a bit of a mess as a result, but that's the fate of most long-term projects (including Linux).
All those points could be fixed with a wrapper "systemd2" but I definitely see your points.
I like thinking about the minimum set of changes required to fix a problem, and this could help; you could probably LLM most of it in under 30 minutes.
I would like to subscribe to your newsletter... no but really if you ever do get around to writing that I want to read it. Ping me somehow, my Gmail username is the same as my HN username. Happy writing!
> I want to write a systemd haters handbook.
Why? Systemd really fits the Unix-Haters Handbook: it is as anti-Unix as it can be (one command to rule them all, binary logs, etc.; more on the logs below).
In the end it really seems that the mantra "GNU is Not Unix" is true. Just look at GNU/Linux: pulseaudio, systemd, polkit, wayland, the big, fat Linux kernel.
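On the binary-logs point: the journal's binary store is readable only through journalctl, though in fairness it can re-serialize everything for classic Unix pipelines (jq is assumed to be installed; the unit name is illustrative):

    journalctl -u sshd -o short                           # traditional syslog-style text
    journalctl -o json --since today | jq -r '.MESSAGE'   # one JSON object per entry
    journalctl -o export > journal.dump                   # lossless export stream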
For a brief period of time, binary configs[0] were a thing, in the mobile world only, but still. It wasn't that people generally wanted them, but that random-seek I/O latency on early mobile devices (and especially on their eMMC storage) was atrocious.
Opening tens or hundreds of XML config files for a resync was disgustingly slow. I developed software on Maemo and Scratchbox; the I/O wait for on-device config changes was a real problem. So of course someone came up with a modified concept of the Windows registry: a single, binary-format config store with a suitably "easy" API. You'd sacrifice write/update latency in the cases where you wanted to modify configuration, and gain much improved read/refresh latency when reading it back.
Of course that all broke down when reading a single config block required reading the entire freaking binary dump, and the config store itself was bigger than the block-device cache. Turns out that if you give app developers a supposedly easy, low-friction mechanism to store app configs, their respective PMs go wild and demand that everything be configurable. Multiply by tens, even low hundreds of apps, each registering an idle-loop callback to re-read its configs to guarantee it always has the correct settings ready, and a system intended to improve config load/read times ended up generating increased demand for already-constrained read I/O.
0: https://wiki.gnome.org/Projects/dconf
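For anyone who hasn't met dconf [0]: all keys live in a single binary database (e.g. ~/.config/dconf/user) that apps read via the GSettings API, and the CLI mirrors that model. The key paths below are standard GNOME ones; the value is illustrative:

    dconf read /org/gnome/desktop/interface/gtk-theme
    dconf write /org/gnome/desktop/interface/gtk-theme "'Adwaita'"   # GVariant string, hence the nested quotes
    dconf dump /org/gnome/desktop/interface/                         # re-serialize a subtree as text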
GNU promotes Shepherd instead of systemd.
We need, OTOH, the other side of the coin:
The EMACS Haters Handbook. Under a GFDL license, of course.
No multithreading, I/O locks under Gnus/eww, glacially slow email header parsing under Gnus, one huge badass file for RMAIL if you don't like Gnus (instead of parsing a Maildir), and so on.
> No multithreading, I/O locks under Gnus/eww, glacially slow
All this would not happen if RMS had chosen Common Lisp to implement it...
RMS hates Common Lisp because it's a bit bloated (tons of stuff), and the closest thing to GNU Emacs written in CL is Lem, which feels far slower than Emacs.
True, we don't need a vi(m) haters handbook. That's just natural.
I have no real experience with mbox and POP3 (Maildir is what I've always used). But I still think you would need to partition mbox files, because that's what you would do with physical mail (which is the basis for the protocol and everything around it). I kinda like RMAIL.
Always a good one for internet point farming.
That Ken Pier quote in the preface is still nasty work.
Huh. I forgot Don Norman wrote the foreword.
And "Worse is Better" was published here too? I don't know how anyone decided it was a good idea to make this, but I'm glad they did.
The only book I have that came with a barf bag. More books should do this.
The good news is that we now have alternative UIs in web/mobile, microkernel-based systems, and unikernels in high-level languages... all in production use.
They certainly came up with a lot of good one-liners for this book.
I wonder why Dennis Ritchie was so infuriated, though. He criticizes them for wanting simple functionality, but it's not because language is a powerful tool for solving problems; it's because it limits the potential of the platform to its functionality (which has been simplified and is in and of itself limited).
So this is confusing to me. Using language to solve problems is the advantage that Unix offers. But neither the authors nor Dennis care about this? Or do they care in limited ways, but ultimately it's about something else?
dmr wasn't infuriated. He would not have written a funny anti-foreword for them if he were.
Thou shalt not write criticisms of a demigod!
Discussed a little, previously...
The Unix-Haters Handbook (1994) [pdf] - https://news.ycombinator.com/item?id=40110729 - April 2024 (87 comments)
The Unix-Haters Handbook (1994) [pdf] - https://news.ycombinator.com/item?id=38464715 - Nov 2023 (139 comments)
The Unix-Haters Handbook (1994) [pdf] - https://news.ycombinator.com/item?id=31417690 - May 2022 (86 comments)
The Unix-Haters Handbook (1994) [pdf] - https://news.ycombinator.com/item?id=19416485 - March 2019 (157 comments)
The Unix-Haters Handbook (1994) [pdf] - https://news.ycombinator.com/item?id=13781815 - March 2017 (307 comments)
The Unix-Haters Handbook (1994) [pdf] - https://news.ycombinator.com/item?id=9976694 - July 2015 (5 comments)
The Unix Haters Handbook (1994) [pdf] - https://news.ycombinator.com/item?id=7726115 - May 2014 (50 comments)
Anti-foreword to the Unix haters handbook by dmr - https://news.ycombinator.com/item?id=3106271 - Oct 2011 (31 comments)
The Unix Haters Handbook - https://news.ycombinator.com/item?id=1272975 - April 2010 (28 comments)
The Unix Hater’s Handbook, Reconsidered - https://news.ycombinator.com/item?id=319773 - Sept 2008 (5 comments)
As an aside: Hacker News is getting old; the 2008 discussion is closer to the book's year (1994) than it is to now.
I just realized that my most famous comment on HN is the same age as I was when I won the Putnam.
"A sobering thought is that, when Mozart was my age, he had been dead for two years." -- Tom Lehrer
And for the millennials: compare the distance between your birth and WW2 vs. between your birth and now!
I was thinking exactly this the other day while running, when I saw an old, rusted lamp post on a rural street: "this was probably put there over 50 years ago, in the early seventies". Then I thought about what "over 50 years ago" meant when I was a child, and well, WW2 was in the making. I don't know why I thought that, but it's probably also a sign of our age: WW2 was the biggest thing "from the past" that our families lived through, or were touched by more or less directly.
But also, WW2 is this black-and-white thing from our history books. As we get older we learn more about how recent and relevant it is; we have met people who lived through it and heard their actual experiences. But it still feels like something that belongs to history rather than a recent event. That's kind of the "anything that happened before my birth, I don't care" attitude of today's teenagers.
I'm a Brit born in '66, and growing up I felt that WW2 was recent history. War films were a dominant genre in my early life, and as a teenager I visited the German defences on the French coast while camping with my Grandfather, who had served there and revisited some locations he remembered. Some buildings still had war damage. In many ways the world of the 70s felt closer to the war era than to nowadays. It was still the Cold War, and that was just an extension of the post-war stalemate.
As a German born in '87, it didn't feel _that_ recent any more. But it was definitely close: both my granddads served in the war and were scarred for life by it, mentally and physically. Family history was a mess of war-torn biographies. I found some big, rusted old munitions in the forest as a kid. Old bunkers and flak towers can still be seen in the cities, and many of the local kids in my hometown and age cohort ventured into the old mining shaft used as an air-raid shelter and saw the gas masks that were still there. And then there was the GDR (or DDR, Deutsche Demokratische Republik, in German) and the whole reunification, which happened when I was already alive (though a child at the time). Thinking about my childhood visit to East Berlin still makes me shiver at the thought of all the suffering. Many of the buildings still had bullet holes, and it felt like you could touch history.
That's interesting. I was also born in 1966, but in the US. WW2 didn't seem/feel all that recent to me, probably because it had mostly happened far away. I was interested in learning about it and read lots of books, and watched movies. The drive to visit relatives did go by an aircraft carrier (USS Essex) at the scrapyard, but other than that physical artifacts of the war were rare. And the only relative I had who fought in the war was a great-uncle, but he passed away when I was very young.
Thanks, I hate it.
Anyone who thinks that that is a lot should see how much it, and of course the mailing list, were brought up on Usenet in the 1990s. (-: