The Rise of "Worse is Better" (jwz.org)
82 points by wslh on July 4, 2011 | hide | past | favorite | 32 comments


A complex system that works is invariably found to have evolved from a simple system that worked. The inverse proposition also appears to be true: A complex system designed from scratch never works and cannot be made to work.

-- John Gall


You know, this almost reminds me of the ML vs. Haskell debate. I remember fondly hearing the 15-312 kids talk about how ML is right and everything else is wrong. The CMU PL dept was kind of adorable that way.

One of the classic examples is Haskell's typeclasses. Haskell's typeclasses are kinda kludgy because there's no way to provide more than one instance of a typeclass for a given type. ML's functors are way better.

But as it turns out, most of the time, we just need one instance. It's much simpler to invoke, and much easier to understand. If you need more than one instance, then use newtype.
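The newtype escape hatch mentioned above can be sketched in a few lines of Haskell (this mirrors the standard library's Sum/Product wrappers from Data.Monoid; the primed names here are just made up to avoid clashing with them):

```haskell
-- Int could be a monoid under (+) or under (*), but Haskell permits only
-- one Monoid instance per type -- so we wrap Int in newtypes and attach
-- one instance to each wrapper.
newtype Sum'  = Sum'  { getSum'  :: Int } deriving (Eq, Show)
newtype Prod' = Prod' { getProd' :: Int } deriving (Eq, Show)

instance Semigroup Sum'  where Sum' a  <> Sum' b  = Sum' (a + b)
instance Monoid    Sum'  where mempty = Sum' 0

instance Semigroup Prod' where Prod' a <> Prod' b = Prod' (a * b)
instance Monoid    Prod' where mempty = Prod' 1

main :: IO ()
main = do
  print (getSum'  (mconcat (map Sum'  [1, 2, 3, 4])))  -- 10
  print (getProd' (mconcat (map Prod' [1, 2, 3, 4])))  -- 24
```

An ML functor would instead have you pass either monoid structure in explicitly; the newtype trick recovers most of that, with less machinery, for the common one-instance case.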

In the end, we wouldn't be where we are in PL without the crazy folks at CMU and Bell labs. So I feel no small amount of sadness that ML didn't win. I concede the notion that ML may have made better design choices. But what matters more to me is that elements of FP get into the mainstream. Here, Haskell has done a better job of showing what is good with FP.


You have to concede it's a bit funny that you're pointing to Haskell as an example of the 'worse-is-better' approach that leads to viral adoption of dirty solutions over the endless pursuit of perfection.


If anything you could turn it around and say that ML's approach to control and data effects exemplifies worse-is-better on the purity spectrum. It took a long time for Haskell to solve these problems a la Haskell 98. It's only recently that Haskell has "caught on" relative to ML.


Haskell is much more "MIT approach" than "New Jersey approach". For a better example, consider how the quick & dirty hack that is Javascript is now far, far more commercially important than the entire FP language family combined (although, it's worth noting, somewhat inspired by FP).


As far as importance goes, I'd boldly compare JS to the plague of the Middle Ages.

It is very important, but not in a way that helps.


So one of the first languages with closures to get huge mainstream popularity hasn't helped? Right...


Closures are too small an addition compared to the huge step back in semantics. Side effects in JS are irrepressible.


It depends on what perspective you take, I guess. From one view, ML is the worse language, but from other views it would be Haskell. Suppose you require a fully formal specification which has been verified in a mechanized way. You have that for Standard ML in Twelf; Haskell has nothing of the sort.

If you look at the concept of being a purely functional language however, it is the opposite with Standard ML being the "worse" animal.

Module system: SML got it right and Haskell got it wrong.

Laziness/Strictness: This is a duality. There are advantages and disadvantages to both approaches so there is no worse/right choice IMO. If you look at the recent stuff on polarity in proof theory it becomes clear that when you latch onto a specific evaluation order, you make some things simple and other things hard.


Dude, you just DID NOT insult Haskell!

[Sort of relevant]

Apparently SPJ was once asked by a customs official in America if an American citizen could do what he did. Someone with him announced that his work as the creator of ML made him irreplaceable.

So you see, ML's got fanboys in high places. Even close to SPJ apparently.

Source for the SPJ koan: http://www.youtube.com/watch?v=NWSZ4c9yqW8 (around the 4 min mark).


FWIW you can link to a specific time in a YouTube video like this: http://www.youtube.com/watch?v=NWSZ4c9yqW8#t=3m0s


Perhaps Haskell compromised on first-class modules. But ML compromised on:

* Type-classes (which overlap with modules but aren't really the same thing, and are probably more important)

* Typed effects (a.k.a. purity): This is a big one to lose.

* Laziness/Strictness control

I think ML is more of a compromise than Haskell.


Actually, I was originally of the same opinion, but then I read some lecture slides from Simon PJ. He has suggested that laziness was perhaps not the best default. It is hard to reason about space leaks with laziness. Likewise for typeclasses being inferior to ML functors.

http://www.cs.nott.ac.uk/%7Egmh/appsem-slides/peytonjones.pp...

I personally disagree. I'm pretty sure laziness has made it much easier for me to try out an idea. If it works, then I'll test that I didn't introduce a space leak. The notion of it almost makes me feel dirty. ;-) :-P


> He has suggested that laziness was perhaps not the best default. It is hard to reason about space leaks with laziness.

Aha, but it's perhaps not the best default for practical reasons, and it is the best default for theoretical reasons. Does that not make ML's eagerness an example of worse is better, because the worse theoretical solution is simpler when the rubber meets the road and you just want to find out where you're leaking memory?
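For what it's worth, the space-leak pain being alluded to can be shown with the textbook example (foldl' is the strict fold from Data.List; nothing here is specific to the slides):

```haskell
import Data.List (foldl')

-- Lazy foldl builds a nested thunk ((0 + 1) + 2) + ... instead of a
-- number, which can exhaust the heap on a large list; foldl' forces the
-- accumulator at every step and runs in constant space.
leakySum, strictSum :: [Int] -> Int
leakySum  = foldl  (+) 0  -- lazy accumulator: thunks pile up
strictSum = foldl' (+) 0  -- forced accumulator: constant space

main :: IO ()
main = print (strictSum [1 .. 1000000])  -- 500000500000
```

Both produce the same answer; the difference only shows up in memory profiles, which is exactly why such leaks are hard to reason about up front.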


> Laziness/Strictness control

I don't think there was any compromise here, just a different choice of default. Both offer a great deal of control over what evaluation method to use.
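To illustrate, here is roughly what that control looks like from the Haskell side: bang patterns (or seq) force evaluation early, while an explicit () -> a thunk is the same idiom a strict ML would use (fn () => e) to delay it:

```haskell
{-# LANGUAGE BangPatterns #-}

-- Opting into strictness in a lazy language: the bangs force both
-- squares when the pair is built, rather than when it is consumed.
strictPair :: Int -> Int -> (Int, Int)
strictPair x y =
  let !sx = x * x
      !sy = y * y
  in (sx, sy)

-- Opting into laziness the way a strict language would: wrap the
-- computation in an explicit thunk and force it by applying ().
delayed :: () -> Int
delayed () = sum [1 .. 100]

main :: IO ()
main = do
  print (strictPair 3 4)  -- (9,16)
  print (delayed ())      -- 5050
```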

Haskell, for what it's worth, compromised on:

* Formal definition - this is an often-overlooked win for SML, and the sort of thing that lots of languages could use.

* Module system - type classes complicate this problem, but the lack of a decent module system hurts Haskell when building large systems.


Well, I already mentioned the Module system part.

Good point on the Formal definition compromise.

But I think laziness-by-default has some fundamental advantages that SML pretty much loses: http://augustss.blogspot.com/2011/05/more-points-for-lazy-ev...
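Two of the points in that vein can be sketched quickly: with laziness, producers may be infinite, and consumers built from ordinary combinators still stop early (the helper names below are made up for illustration):

```haskell
-- A naive infinite list of primes; under lazy evaluation only the
-- demanded prefix is ever computed.
primes :: [Int]
primes = sieve [2 ..]
  where sieve (p : xs) = p : sieve [x | x <- xs, x `mod` p /= 0]

-- Built by composing or/map, yet it short-circuits at the first hit,
-- so it terminates even on an infinite input list.
anyDivisibleBy :: Int -> [Int] -> Bool
anyDivisibleBy d = or . map (\x -> x `mod` d == 0)

main :: IO ()
main = do
  print (take 5 primes)            -- [2,3,5,7,11]
  print (anyDivisibleBy 7 [1 ..])  -- True, despite the infinite list
```

A strict language can recover each of these with explicit thunks or streams, but then the composition of off-the-shelf functions no longer comes for free.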


Richard Gabriel's thoughts on the whole thing: http://www.dreamsongs.com/WorseIsBetter.html


I think these are Richard Gabriel's thoughts; it is signed rpg@lucid.com at the bottom... which certainly isn't Jamie.


Well, yes, but what I linked is more recent.


This precise issue in Unix/POSIX causes a huge headache. I wonder how many thousands or millions of "xread/xwrite" loops have been written in user code to retry failed read and write calls, and how many of those loops have bugs?

Look at gnulib to see how hard it is to get these loops right:

http://git.savannah.gnu.org/gitweb/?p=gnulib.git;a=blob;f=li... http://git.savannah.gnu.org/gitweb/?p=gnulib.git;a=blob;f=li...

In the case of a partial read/write followed by an error, I don't believe it is possible to recover at all using just POSIX-defined calls.


Regarding how "the right thing" stacks up to the worse-is-better solution, the author writes:

> How does the right thing stack up? There are two basic scenarios: the ``big complex system scenario'' and the ``diamond-like jewel'' scenario.

> The ``diamond-like jewel'' scenario [Scheme] goes like this:

> The right thing takes forever to design, but it is quite small at every point along the way. To implement it to run fast is either impossible or beyond the capabilities of most implementors.

This doesn't sound like it's actually the case, does it? There are, of course, many different Scheme implementations; they seem to get implemented in reasonable amounts of time, and some execute code very quickly indeed.


Scheme is over thirty years old; over those years, a lot of research and experimentation has been devoted to making Scheme more efficient.

When Scheme first came out, its use of lexical scope was controversial, because everyone knew that dynamic scope was more efficient.


Cute. This is the same screed that was delivered in Structure and Interpretation of Computer Programs at MIT back when it was still taught (in Scheme, and this was in 1996!). I guess it is a balm for all the Scheme-heads that perhaps wondered why their language lost, and is still out there.


This is one of the first papers gone over in Computer Systems (6.033) so it's not totally gone.


I took this class 4 years ago at UMass Amherst. In Scheme. I loved it.


The funny thing is that Common Lisp can be used to deliver working systems just fine. In fact, there's nothing "perfect" about it. It has warts. But it is actually quite practical.

"Worse is better" is thankfully a philosophy that is pretty unique (afaik) to software development. It is only really applicable to a subset of software development as well. I cannot imagine how "worse is better" would work in mission-critical systems where human lives are at stake. Or lab equipment, embedded systems, and other scenarios where failures can result in huge headaches.

I don't think there was a battle that was won by any side really. Either approach could be considered for any given problem. Practicality is important and zealotry should be avoided at all costs.


If you substitute "practicality" wherever you see "worse is better", you'll have a much clearer idea of what the "philosophy" really is, and realize it in no way conflicts with critical systems.

All software design and development is done to a set of requirements. The more disastrous a failure would be, the stricter the requirements will be, and the more rigorous the processes will be.

Like almost all other developers, my software does not run nuclear reactors, therefore I am not going to expend the resources to achieve the safety necessary for a nuclear reactor. It would be both unnecessary and economically infeasible.

Do I care if a client makes a call to my IPTV webservice and gets back corrupt data? Well, yeah, but I'm not going to step through the entire software stack to mathematically prove it's impossible, nor am I going to encase the servers in a lightyear of lead to protect them from cosmic rays. I'm going to follow general best practices, let some testers play with it for a few days (while thousands of other lines of code in the client and server get exercised at the exact same time) and ship it.

Why? Because the small risk of a user not being able to watch their favorite movie in some weird corner-case has less economic significance than me spending days making 100% sure everything is perfect in my new API call.

At the same time, I recognize components of my systems that can have serious economic consequences, and accordingly invest more time in getting them closer to "right" (or avoiding them entirely -- e.g. not so long ago I ripped out all the C-style string/buffer handling from what was ostensibly a C++ library (not mine, originally) and replaced it with std::string and std::vector, which I knew were right, because people smarter than me had made sure they were, because everyone knew they had to be, and time was invested accordingly).

This is a continuum, not a pigeonhole.


There's an interesting analogy in network protocols: OSI (the right thing) versus IP (worse is better).


I've always felt that calling this "Worse is Better" is misleading.

It's more that "worse leads to more mind-share" (a kind of economic argument) and "worse tools can lead to better design of systems built using them".

Neither of which actually mean that worse is necessarily better.


that totally left me hanging -- I want to hear more about the diamond!


Best is the enemy of good.


MIT == Academia, New Jersey == Real World. That's all this debate has ever truly been about.

"The right thing" in any particular situation is whatever results in a working system. Economics dictate that perfection rarely wins. There's a reason Unix came out of Bell Labs.



