The more I progress in our domain of expertise, the more I observe we're being incredibly wasteful† all over the place. For all the expressive power of our platforms and languages, it sounds somehow insane that time (ruby -e '100_000_000.times {}') takes four solid seconds on my 3.4GHz machine††. I know, bogoMIPS are no benchmark; this is just to exemplify that layers of abstraction, while useful (necessary, even), are also harmful. The underlying question being: how many layers is too many?
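For comparison, a minimal native sketch of the same measurement (illustrative only, not a rigorous benchmark; the function name is invented):

```cpp
#include <chrono>

// A rough native counterpart of `ruby -e '100_000_000.times {}'`: time an
// empty counted loop. `volatile` keeps the optimizer from deleting the
// loop outright, so real iterations are measured. A serious benchmark
// would need warmup, repetition, and statistics.
double time_empty_loop(long n) {
    auto start = std::chrono::steady_clock::now();
    for (volatile long i = 0; i < n; ++i) {}
    auto end = std::chrono::steady_clock::now();
    return std::chrono::duration<double>(end - start).count();
}
```

On comparable hardware the native loop typically finishes orders of magnitude faster than the interpreted one, which is exactly the gap being lamented above.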
I dream of a system redesigned from the ground up, where hardware and software components, while conceptually isolated, cooperate instead of segregating each other into layers. See how ZFS made previously segregated layers cooperate to offer a robust system, see how TRIM operates at the lowest hardware levels by notifying the drive of filesystem events, see how OSI layers get pierced through for QoS and reliability concerns. Notice how the increase in layers, and thus in holistic complexity, rampantly leads to more bugs, more vulnerabilities, more energy wasted. We all know the fastest code is the code that does not execute, the most robust code is the code that doesn't get written, the most secure code is the code that doesn't exist. Why do I still see redraws and repaints and flashes in 2014? Why does a determined adversary have such a statistical advantage that he is almost guaranteed to get a foothold into my system? This is completely unacceptable. For as much as we love playing with it, the whole web stack, while a significant civilizational milestone, is, as a whole, a massive technological failure (the native stack barely fares better).
† I consider wasteful and bloated subtly distinct
†† not at all an attack on Ruby, just what I happen to have at hand right now
I think this overabstraction is largely a result of abstraction being excessively glorified (mostly) by academics and formal CS curricula. In some ways it's similar to the OOP overuse that has thankfully decreased somewhat recently but was extremely prevalent throughout the 90s. In software engineering, we're constantly subjected to messages like: Abstraction is good. Abstraction is powerful. Abstraction is the way to solve problems. More abstraction is better. Even in the famously acclaimed SICP lecture series [1] there is this quote:
"So in that sense computer science is like an abstract form of engineering. It's the kind of engineering where you ignore the constraints that are imposed by reality."
There is an implication that we should be building more complex software just because we can, since that is somehow "better". Efficiency is only thought of in strictly algorithmic terms, constants are ignored, and we're almost taught that thinking about efficiency should be discouraged unless absolutely necessary because it's "premature optimisation". The (rapidly coming to an end) exponential growth of hardware power made this attitude acceptable, and lower-level knowledge of hardware (or just simple things like binary/bit fields) is undervalued "because we have these layers of abstraction" - often leading to adding another layer on top just to reinvent things that could be easily accomplished at a lower level.
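As a tiny example of the "simple things like binary/bit fields" being referred to, here is a hedged sketch (names invented for illustration) of packing flags into a single byte, the kind of thing often reinvented a layer up:

```cpp
#include <cstdint>

// Three permission flags packed into one byte with shifts and masks.
enum Perm : std::uint8_t { READ = 1 << 0, WRITE = 1 << 1, EXEC = 1 << 2 };

// True iff every bit of `mask` is set in `perms`.
inline bool has(std::uint8_t perms, std::uint8_t mask) {
    return (perms & mask) == mask;
}

// Returns `perms` with the bits of `mask` cleared.
inline std::uint8_t clear_bits(std::uint8_t perms, std::uint8_t mask) {
    return static_cast<std::uint8_t>(perms & ~mask);
}
```

One byte holds what a naive design might spread across three boxed booleans behind an accessor layer.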
The fact that many of those in the demoscene who produce amazing results have never formally studied computer science leads me to believe that there's a certain amount of indoctrination happening, and I think to reverse this there will need to be some very massive changes within CS education. Demoscene is all about creative, pragmatic ways to solve problems by making the most of available resources, and that often leads to very simple and elegant solutions, which is something that should definitely be encouraged more in mainstream software engineering. Instead, the latter seems more interested in building large, absurdly complex, baroque architectures to solve simple problems.
The "every byte and clock cycle counts" attitude might not be ideal for every problem either, but not thinking at all about the resources actually needed to do something is worse.
> how many layers is too many?
Any more than is strictly necessary to perform the given task.
"Demoscene is all about creative, pragmatic ways to solve problems by making the most of available resources"
It probably doesn't hurt that nobody expects a demoscene production to adapt to radical changes in requirements, or to interoperate with other things that are changing as well - for that matter, to even conform to any specific requirements other than "being epic".
For instance, the linked 8088 demo encodes video in a format that's tightly coupled to both available CPU cycles and available memory bandwidth. Its goal is "display something at 24fps".
Not that I'm a fan of abstraction-for-its-own-sake, but putting scare-quotes around real problems like premature optimization is an excessive counter-reaction.
The era up to the ~'60s gave us a vast theoretical foundation, and from then on we toyed with it, endlessly rediscovering it (worst case) or slightly prodding it forward (best case), trying to turn this body of knowledge into something useful while accreting it into platforms of code, copper and silicon. My hope is that the next step will eventually be for some of us to stop our prototyping, think about what matters, and build stuff this time, not as a hyperactive yet legacy-addicted child, but as a grown-up, forward-thinking body that understands it's not just about a funny toy or a monolithic throwaway tool that will end up lasting decades, but a field that has a purpose and a responsibility.
To correct the quote:
Computer science is not an abstract form of engineering. Software engineering (and hardware engineering, when the hardware is made to run software) is leveraging CS within the constraints imposed by reality.
> Any more than is strictly necessary to perform the given task.
Easy to say, but hard to define up front when 'task' is an OS + applications + browser + the hardware that supports it ;-)
This[0] is the typical scenario I'm hoping we would build a habit of doing.
> abstraction being excessively glorified (mostly) by academics and formal CS curricula.
It's not just academics, it's many developers, too.
We're in an old-school thread. We like what's really going on. Hang out in the Web Starter Kit thread from last night, though, and you'll find tons of people who glorify abstraction.
The reality is that competing forces spread out the batter in different directions: the abstractionists write Java-like stuff. The old-schoolers exploit subtle non-linearities.
Actual commercial shipments rely on a complex "sandwich" of these opposed practices.
> Demoscene is all about creative, pragmatic ways to solve problems
Yes and I grew up with the demoscene (c64 and amiga 500) and it's also about magic, misdirection, being isolated for long winters and celebrating a peculiar set of values. Focus is shifted toward things that technologists know are possible, such as tight loops running a single algorithm that connects audio or video with pre-rendered data, not on what people want or need, such as CAD software or running mailing lists. Flexibility, integration and portability are eschewed in favor of performance.
Don't get me wrong, I LOVE the demoscene - it's the path that got me to love music. And I have near-total apathy for functional programming. I only code in Javascript when weapons are pointed at my heart, but with the proper balance, there are some very real reasons to make use of abstraction. It's not just academics, it's people solving real problems. The trick is to act strategically with respect to the question: which parts will you optimize and which parts will you offload to inefficient frameworks?
> I think to reverse this there will need to be some very massive changes within CS education.
For instance, starting it in elementary school. A surprisingly large part of the mathematical portion of CS has very little in the way of prerequisites.
Having been in the demoscene (Imphobia) for a long time, and having also worked on more abstract stuff (quad-tree construction optimizations), I can say that writing a demo is not the same as computing theory. Writing a demo is most often exploiting a very narrow area of a given technology to produce a seductive effect (more often than not, to fake something thought impossible so that it looks possible). So you're basically constraining the problem to fit your solution.
On the other hand, designing pure algorithms is about figuring out a solution to a given, canonical and often unforgiving problem (quicksort, graph colouring?). To me, this is much harder. It involves about the same amount of creativity, but somehow it's harder on your brain: no, you can't cheat; no, you can't linearize n² that easily :-)
To take an example: you can make "convincing" 3D on a C64 in a demo because you can cheat, precalculate, and optimize in various ways for a given 3D scene. Now, if you want the same level of 3D in a video game, where the user can look at your scene from unplanned points of view, then you need more flexible algorithms such as BSP trees. So you end up working at the algorithm/abstract level...
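To make that contrast concrete, here is a deliberately minimal 2D BSP sketch (names invented; straddling segments are not split, as a real builder would do). The point is that a back-to-front drawing order for *any* viewpoint falls out of the structure, which is exactly the flexibility a per-scene precalculated demo effect lacks:

```cpp
#include <memory>
#include <vector>

struct Seg { double x1, y1, x2, y2; };

// Signed side of point (px,py) relative to the line through segment s.
static double side(const Seg& s, double px, double py) {
    return (s.x2 - s.x1) * (py - s.y1) - (s.y2 - s.y1) * (px - s.x1);
}

struct Node {
    Seg splitter;
    std::unique_ptr<Node> front, back;
};

// Naive build: first segment splits, the rest go front or back by midpoint.
std::unique_ptr<Node> build(std::vector<Seg> segs) {
    if (segs.empty()) return nullptr;
    auto node = std::make_unique<Node>();
    node->splitter = segs[0];
    std::vector<Seg> f, b;
    for (size_t i = 1; i < segs.size(); ++i) {
        double mx = (segs[i].x1 + segs[i].x2) / 2;
        double my = (segs[i].y1 + segs[i].y2) / 2;
        (side(node->splitter, mx, my) >= 0 ? f : b).push_back(segs[i]);
    }
    node->front = build(std::move(f));
    node->back  = build(std::move(b));
    return node;
}

// Painter's-algorithm order for an arbitrary eye position:
// far subtree first, then the splitter, then the near subtree.
void backToFront(const Node* n, double ex, double ey, std::vector<Seg>& out) {
    if (!n) return;
    if (side(n->splitter, ex, ey) >= 0) {
        backToFront(n->back.get(), ex, ey, out);
        out.push_back(n->splitter);
        backToFront(n->front.get(), ex, ey, out);
    } else {
        backToFront(n->front.get(), ex, ey, out);
        out.push_back(n->splitter);
        backToFront(n->back.get(), ex, ey, out);
    }
}
```

Build once, then query from any viewpoint: the demo-style trick of hardcoding one camera path simply disappears.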
A very good middle ground here was Quake's 3D engine. It used a BSP tree and optimized it with regular techniques (including the very smart idea of potentially visible sets), but it also used techniques found in demos (M. Abrash's work on optimizing texture mapping is a nice "cheat" -- and super clever).
Now don't get me wrong, academia is not more impressive than the demoscene (though certainly a bit more "useful" for society as a whole). These are just two different problems, and there are bright minds making super impressive stuff in both of them...
> I think to reverse this there will need to be some very massive changes within CS education.
Well, I mean, that is most definitely true regardless. But, with my experience getting my BS in CS a few years ago, it had nothing to do with "mainstream software engineering" either. I had classes on formal logic and automata, algorithms (using CLRS), programming language principles (where we compared the paradigms in Java, Lisp, Prolog, and others), microprocessor design (ASM, Verilog, VHDL), compilers, linear algebra, and so on. Very little in the way of architecting and implementing large, abstracted, real-world business applications or anything remotely web-related. In my experience I did not meet anyone interested in glorifying heaps of whiz-bang abstraction, they seemed to be more in line with the stereotypical "stubbornly resisting all change and new development" camp of academics.
I sense the frustration around this subject is building. What I'm afraid of is that once it boils over into action, it will lead to a repetition of the same moves. That's the hard part: getting a 'fresh start' going is ridiculously easy, which is one of the reasons we have this mess in the first place.
Very hard to avoid the 'now you have two problems' trap.
Indeed. The problem with starting over is that anything you start over with is going to be simpler, at first. Thus potentially faster, easier, etc, etc.
Rewrites are hard and costly, which is rarely taken into account. Even just maintaining a competent fork is hard enough.
I think it's probably worth the effort, but I'm not quite sure how you get from A to B without just having some super competent eccentric multi-billionaire finance a series of massive development projects.
> I think it's probably worth the effort, but I'm not quite sure how you get from A to B without just having some super competent eccentric multi-billionaire finance a series of massive development projects.
And Elon Musk is busy doing rockets and electric cars!
I think it hasn't happened because the people who feel this way are precisely the ones positioned to understand how vast and hard an undertaking it is, not only to attempt, but to succeed at.
Few have attempted a reboot, yet the zeitgeist is definitely there: ZFS, Wayland, Metal, A7, even TempleOS (or whatever its name is these days). Folks are starting to say to themselves: 'hey, we built things, we learned a ton, we do feel the result, while useful, is a mess, but we now genuinely understand that we need to start afresh, and how'. It's as if everyone were using LISP on x86 and suddenly realised they might as well use LISP machines.
I too fear we'll just loop again, yet my hope is that in doing that looping, our field iteratively improves.
I'd answer in two ways. One, it is already happening. The 10M problem (10 million concurrent open network connections) is solved by getting the Linux kernel out of the way and managing your own network stack: http://highscalability.com/blog/2013/5/13/the-secret-to-10-m... - The beauty of their approach is that they still keep a running Linux on the side to manage the non-network hardware, so you have a stable base to build and debug upon.
Two, I am not sure we are that much smarter now than we were then. Since you quoted a language problem, I'll use one myself as an example. See this SO question: https://stackoverflow.com/questions/24015710/for-loop-over-t... . I wanted a "simple" loop over some code instantiating several templates. I say simple because I had first written the same code in Python, found it too slow for my purposes, and thus rewrote it in C++. In Python this loop is dead simple to implement: just use a standard for loop over a list of factory functions. In C++ I pay for the high efficiency by turning the same problem into an advanced case of template metaprogramming, which in the end didn't even work out for me because one of the arguments was actually a "template template". And on the other hand, making the C++ metaprogramming environment more powerful has its own set of problems: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n361...
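For the record, the C++11-era workaround for such a "loop over templates" looks roughly like this (a sketch with invented names; note it does not address the template-template case that actually caused the trouble above):

```cpp
#include <vector>

std::vector<int> sizes;  // records each instantiation's result, for illustration

// The "body" of the loop: anything parameterized by a type.
template <typename T>
void run_one() { sizes.push_back(static_cast<int>(sizeof(T))); }

// C++11 pack-expansion trick: the braced initializer calls run_one<T>()
// once per type, left to right -- the closest C++11 gets to Python's
// `for factory in factories:` over a list of instantiations.
template <typename... Ts>
void run_all() {
    int dummy[] = { (run_one<Ts>(), 0)... };
    (void)dummy;  // silence unused-variable warnings
}
```

Calling `run_all<char, int, double>()` runs the body once per type, but the contrast with the one-line Python loop makes the commenter's point about the cost of this efficiency.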
I'm finding that an inherent psychological part of software development is to accept that nothing will be perfect. Everything is fucked up at some level, and there's no practical way around it. You just bite the bullet.