Hacker News

Plan 9 is actually a great little OS; it's where Linux could be if it wanted it badly enough.

Instead we're stuck with early 70's technology. Which is good enough, but it could be so much better if not for a bunch of NIH and ego.

No disrespect to the kernel devs, they're doing a great job. But longer term I wish there was some more real innovation instead of just a slightly better (and free) mousetrap.

For another interesting take on OSes, look at QNX.



See also Rob Pike's talk "Systems Software Research is Irrelevant" (http://herpolhode.com/rob/utah2000.pdf).

BeOS was pretty interesting, too, but its story is too far off topic.


Perhaps a group could make a Linux distro with the goal of forking, and Plan9ifying, the source of every package they ship. No one would have to use the resulting distro (other than to demo the neat way that the Plan9ified packages integrate so easily with one another) but the original authors/maintainers of the upstream packages could adopt the Plan9ification patches a la carte, gradually increasing Plan9ification across the board. The patches of one package would always be guaranteed to work with the patches from any other, as the resulting combination has to hang together in the demo distro—and, since this would only be an eventual goal of the group, they would also have to make sure that all their patches didn't expect any Plan9ification on any other package's part.


That's not a trivial exercise. The philosophy behind plan 9 is radically different from some of the ways things are done in Linux, and it would take a great effort to get this even close to production grade.

Plan 9 is structurally very different from Unix under the hood. It is in many ways a better Unix, but backwards compatibility was not what they had in mind when they designed it.

Unix is said to have 'everything is a file' as its mantra; Plan 9 shows what 'everything is a file' really means.
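Linux's /proc already gives a small taste of the idea; Plan 9 takes it much further, exposing the network stack, the window system, and even remote resources the same way. A minimal, Linux-specific sketch (assumes /proc is mounted; the helper name is mine, not any standard API):

```python
# "Everything is a file" in miniature: query process state with nothing
# but open() and read() on /proc -- no special syscalls or bindings.
def proc_field(field, pid="self"):
    """Return one field (e.g. "Name", "Pid") from /proc/<pid>/status."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith(field + ":"):
                return line.split(":", 1)[1].strip()
    return None

print(proc_field("Name"))  # this process's name
print(proc_field("Pid"))   # its pid, as a string
```

On Plan 9 the same open/read/write idiom also covers things Linux still does through sockets and ioctls.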


> The philosophy behind plan 9 is radically different from some of the ways things are done in Linux, and it would take a great effort to get this even close to production grade.

So it's not just "NIH and ego", it's the fact that people are loath to abandon working software for benefits that probably seem somewhat abstract?

The "crossing the chasm" approach is to find a niche where your product can win and then go from there. Taking on Linux head-on is just not going to be a winning proposition.


It was NIH and ego that stopped the adoption of some of the more advanced knowledge available at the time the relevant portions of Linux were developed.

There was no 'working software' to abandon at the time, there is now.


Plan 9 came much later than Linux, if I'm not mistaken. Things like microkernels were around then, sure, but is Plan 9 interesting because it's a microkernel or because of how you interact with it and how it's architected (everything is really a file)? Edit: it appears that Plan 9 existed internally at Bell Labs at about the same time Linux was being developed, but was only released to the public, under a commercial license, in 1995. It was finally open sourced only a few years back.

Also, there certainly was plenty of existing Unix software out there when Linux (and BSD) came out. Not nearly as much as now, but nothing to laugh at either.


Don't forget minix and the Tanenbaum / Linus exchanges.

That's a pretty well documented era, and guess what, Tanenbaum was right.

But Linus was riding high on the momentum he'd generated, and Tanenbaum lost the popularity contest. That didn't make him wrong, though, and over time the few disadvantages that were trotted out as the reasons he was wrong have all been put to rest.

That's partly hindsight, but Tanenbaum had a huge amount of experience and was already ahead of 'conventional' Unix. Some people think he wasn't ahead enough, but it was certainly a step forward from where Linux is, even today.


> Tanenbaum was right

Citation needed. Where do you get that? Performance-wise, the biggest problem of OS X is still the part that is based on the "many servers are the OS" idea.

That's the reason why this exists:

http://www.ertos.nicta.com.au/software/darbat/

Also, NT tried moving in that direction, but its critical parts are not "many servers."

Also, there's a reason why L4 was developed (note: post-Tanenbaum): the "real" microkernels were very problematic performance-wise, and they're actually harder to build (example: GNU Hurd).

http://en.wikipedia.org/wiki/L4_microkernel_family

http://en.wikipedia.org/wiki/GNU_Hurd


On the other hand, you can view the current fad of virtualized everything and hypervisors as a kind of exokernel.


> you can view the current fad of virtualized everything and hypervisors as a kind of exokernel.

No, I can't, as long as the underlying OS is still a monolithic one, selected because it's the fastest and the most convenient to maintain.


Not all hypervisors have an underlying OS. I should have expressed myself more clearly.

I was talking about bare metal hypervisors, not hosted hypervisors. (See http://en.wikipedia.org/wiki/Hypervisor#Classification)

We have some experiments with very stripped-down domains (i.e. virtual machines) that make full use of paravirtualization. They are quite close to being processes in an exokernel OS. And they boot up really fast, like processes should.


Thanks, my wording was indeed clumsy, but it doesn't change the fact: the hypervisor is simply not doing the stuff that the classical OS kernel does. It's not replacing anything; it's just a layer with some specific new functionality. When you change the meaning of the terms being discussed, of course it can appear that you're winning the argument, when really you're just performing tricks. No matter from which direction you look at these systems, the overall functionality of monolithic kernels still hasn't been substituted with something better.


> the hypervisor is simply not doing the stuff that the classical OS kernel does

And that makes the comparison with exokernels apt. Exokernels are not supposed to do what normal kernels do.

(Though if you run a normal kernel on top of a hypervisor or exokernel, in a sense you haven't reached the true potential of the system and your critique is more than valid.)


> Don't forget minix and the Tanenbaum / Linus exchanges.

That's what my 'microkernels' comment was referencing.

> Tanenbaum was right.

That is not clear to me. http://en.wikipedia.org/wiki/Tanenbaum%E2%80%93Torvalds_deba...

Microkernels haven't exactly swept the world before them. Apparently, Mac OS X and Windows NT-based systems have some microkernel in their DNA, but it is my understanding that you couldn't really call them microkernels at this point.


Tanenbaum was right that Linux was a giant step back into the '70s.

Microkernels have swept the world before them in a way you can't even imagine: in the embedded-systems world, where 'failure is not an option', microkernels rule supreme without any threat from larger stuff.

When you want deterministic hard real time with very tight upper bounds on latency, a microkernel will help tremendously.

Everything else is subordinate to the scheduler, even things that in 'monoliths' are part of the kernel, such as I/O drivers, which run as user processes.


Are we really that sure Tanenbaum was right?

http://kernel.org/doc/ols/2007/ols2007v1-pages-251-262.pdf


Plan 9 is not a microkernel.


I've gone through an 'alternative operating system' phase at some point and played around with all kinds of them.

I don't even think Minix is a microkernel (Tanenbaum thinks it is), but QNX definitely is the real deal.

There are some very interesting concepts in Plan 9 that linux could have taken advantage of though.


> There are some very interesting concepts in Plan 9 that linux could have taken advantage of though.

Given what I posted above, it's not clear that Plan 9 was really on Linus' radar when he started or in the formative years of Linux. When Plan 9 came out, radically changing Linux might have been a significant departure from a working system with a growing amount of software available for it.


Oops, sorry, good point.


Disclaimer: I'm the author of the Glendix project.

As someone already pointed out, Glendix tries to bring binaries over to make Linux feel more Plan 9-ish. Unfortunately, I no longer have the luxury of being a grad student, so my time these days is very limited and I've had to move on to other things. But if there's someone motivated enough to push Glendix further, I'd be more than happy to help!

As for the question of 'Why Plan 9', to put it simply: reading the source code makes me feel like a hacker again. A long time ago, if you didn't understand how something worked, you could just peek at the source and everything would be clear. Plan 9 maintains that: the source code /is/ the documentation. Alas, I wish I could say the same of 'modern' free/open source software (Linux/BSD/GNU/what have you).


How did you have time to work on it when you were a grad student?


By convincing the right people that it was relevant to my course :) Also, Glendix was the culmination of my Bachelor's "major project".

Yes, it is sometimes possible to do useful work in academia. Gasp!


I'm finishing my bachelor's next spring, and I would kill to work on Glendix or Plan 9. Suggestions?


Porting one of the fine Plan 9 based filesystems to Linux seems like an excellent operating systems project. This would directly benefit Glendix as many applications require these synthetic filesystems provided by the Plan 9 kernel.

/net is a good example of how sockets are done away with, and /dev/draw provides a useful graphics API. You could argue in your project thesis that filesystems sometimes provide a better abstraction than traditional programming-language-based APIs; and prove it by porting one such filesystem over to Linux.
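For a concrete feel of /net: on Plan 9 you dial a TCP connection by reading and writing ordinary files, with no socket call in sight. A hedged Python sketch of that protocol (the /net paths only exist on a system with Plan 9's /net mounted, so the dial function won't run on stock Linux; only the small message-formatting helper is portable):

```python
def dial_string(host, port):
    """Format the control message Plan 9's TCP stack expects: 'connect host!port'."""
    return f"connect {host}!{port}"

def dial_tcp(host, port):
    # 1. Opening /net/tcp/clone allocates a new connection directory;
    #    reading it yields the directory's number N.
    ctl = open("/net/tcp/clone", "r+")
    n = ctl.read().strip()
    # 2. Writing the connect message to the same ctl file starts the dial.
    ctl.write(dial_string(host, port))
    ctl.flush()
    # 3. /net/tcp/N/data is now the byte stream of the connection.
    return open(f"/net/tcp/{n}/data", "r+b")
```

Because it's all files, the same idiom works transparently over a network: import another machine's /net and you are dialing through its TCP stack.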


> Project thesis

Unfortunately, I've already done my senior project. Though, I'd be happy to hack on Glendix stuff in my free time.

> /net

I've not done any network or kernel programming, and I'm only slightly familiar with /net. If I wanted to hack on this, where would you recommend I start?

It seems like a Glendix /net could be implemented in user space. Yes?


A less ambitious goal would be to patch some of the widely used higher-level applications so that they would run on Plan 9. E.g., if I could get a virtual Plan 9 machine with Python/Django, PostgreSQL, and lighttpd, I would have a starting point where I could do something practical, and then I could explore how to use the specific features of Plan 9 to make those tools more productive.

It looks like even that would be a big undertaking: http://www.mail-archive.com/plan9-gsoc@googlegroups.com/msg0...


http://www.glendix.org/ is going in that kind of direction, albeit without explicitly doing the bit about porting existing packages. Instead it looks like they're building enough support in Linux to run Plan 9 binaries, and then bringing over the Plan 9 equivalents.

The other existing option would be Plan 9 from User Space (http://swtch.com/plan9port/) which includes Rob Pike's innovative editors, sam and acme.

Of course, the biggest problem these days is sourcing a true three-button mouse to really get the feel of the user interface. Mouse chords in acme don't really feel right with a mouse wheel.


Yes. Though not all mouse wheels are created equal. Some of them are much more suited to third-button use than others.



