While these Unix reforms sound great in theory, they somewhat disregard Unix’s practical, evolved nature. It’s not just about clean ideas; it’s about what works in the real world, shaped by decades of use. Implementing such sweeping changes could take way longer than one would expect.
Err, I meant the entire point of this to be that we can do nice things incrementally. E.g. I am linking LWN articles and whatnot to demonstrate that kernel devs actually considered some of this stuff feasible.
If anything, the heavy lift here is not even changing kernels, but reforming programming languages' standard/popular libraries to make this stuff accessible and ergonomic: most software today isn't written in C directly against some kernel-developer-maintained headers. If we can make the OS changes relatively quickly, but it takes a bunch more slow steps for the new interfaces to percolate out to "regular" developers, that adds uncertainty. "Latency" is a bigger enemy than "feasibility".
You can find me jabbering away in many corners of the internet about trying to make standard libraries easier to evolve for precisely this reason.
The author explicitly mentions Fuchsia as going too far, i.e. advocates for "let's try improving things little by little".
> While these Unix reforms sound great in theory, they somewhat disregard Unix’s practical, evolved nature.
To me that's polite language for: it's a wild west of "oh crap, we forgot X, guess it's time for ungodly hacks in Y and Z".
I do somewhat agree with you though. Let's see what our area's emergent behavior and hacks actually achieved, and try to distill them into a cleaner future API.