In my experience, what's wrong with object-oriented programming is not the object orientation itself, but the over-application of it.
- Sometimes more "procedural" programming provides better clarity. At other times, object-orientation is a cleaner approach. Neither should be a replacement for the other at all times.
- The idea that objects should reflect how we think of real objects in the physical world probably needs to die, or at least not be treated as a standard for object orientation, since objects in reality often do not fit into neat categories and hierarchies. Because reality is messy (and virtual reality no less so), unanticipated exceptions to the rules end up pushing OO zealots into hack solutions that are rigid and less obvious to other programmers. The simplest example that comes to mind is the Fat Model, Skinny Controller paradigm in MVC programming: since a model is often viewed as a representation of a real-world object, a programmer thinking in OO is likely to stick every conceivable property and behavior of the imagined object into the model code, which sometimes results in files thousands of lines long, full of code that has nothing to do with database abstraction. In such cases, a lot of the code hung on the model concept would be better handled by helper functions or other classes.
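To make that concrete, here's a minimal sketch (hypothetical names, no particular framework) of moving presentation logic out of a fat model into a helper, leaving the model to its persistence job:

    // Fat-model version: display formatting buried in the "database" class.
    class Invoice {
        long id;
        java.time.LocalDate issuedOn;
        java.math.BigDecimal total;

        // ...persistence-related methods would live here...

        // This method has nothing to do with database abstraction.
        String asDisplayLine() {
            return "#" + id + " " + issuedOn + " " + total;
        }
    }

    // Slimmer alternative: the formatting concern moves to a helper class,
    // and the model file stays focused on the data it maps.
    final class InvoiceFormatting {
        private InvoiceFormatting() {}

        static String displayLine(Invoice inv) {
            return "#" + inv.id + " " + inv.issuedOn + " " + inv.total;
        }
    }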
> In my experience, what's wrong with object-oriented programming is not the object orientation itself, but the over-application of it.
Or the misapplication of it. If an app is really about dataflow, but programmers have tried to shoehorn in a pedantic OO model, you often end up with lots of nouns whose names talk about How (exposing too much detail) and lots of code that sticks data into this cubby, with other code taking that data out and sticking it into another cubby, ad nauseam. In that case, the relationships between the cubbyholes, the data, and the dataflow are maintained only through naming conventions, which can in turn be misapplied or neglected.
This turns following the dataflow into detective work around missed naming conventions, when it should really just be following implementers/senders/references in the standard way provided by the language.
> since a model is often viewed as a representation of a real-world object, a programmer thinking in OO is likely to stick every conceivable property and behavior of the imagined object into the model code
This is where YAGNI (You Ain't Gonna Need It) comes into play. Never implement something unless you have empirical data to back up the need. If you implement according to imagination... well, the imagination of creative tech people is often practically unbounded. So just imagine how badly the result could scale.
Well, duh. We've been sold object-oriented platforms for decades with the insistence that everything is an object, hiding its internals from the world. There was tepid support for representing plain data. At the heyday of the madness, people were busy building object-over-network abominations like CORBA. Even today, the most popular enterprise platform, Java, does not support value types. In the free world, the widely used Python platform added dataclasses only as recently as 3.7, the current version. No wonder the OO style is widely overused.
> At the heyday of the madness, people were busy building object-over-network abominations like CORBA.
There was even an unpleasant period of time in the 90's when people tried to make some of the Linux OS distro end-user apps run off of CORBA to bring about the interconnected "sea of objects" vision, slowing things down a lot.
> No wonder the OO style is widely overused.
It's overused in part because it's easily sold. It's easy to make sound-bites out of the doctrine. At the same time, it's a bit nebulous, and at first glance it's easy to describe all of reality this way. (That said, in the right hands, it can make for some dandy, clean software. The tricky part is the "right hands.")
Basically, software is organizational politics. Software methodologies are dogma, with nothing to back them up, except maybe some savvy design and at times some mathematics. However, these aren't sufficient to deal with the complexities brought in by business needs, the real world, and the complications of developer communities.
We're back in the alchemy days of the programming discipline. This is perhaps why minimalism often rules the day. Do as little as possible. Make do with as little as possible. Make apps and modules as small as possible. As much as possible, do no harm.
> There was even an unpleasant period of time in the 90's when people tried to make some of the Linux OS distro end-user apps run off of CORBA to bring about the interconnected "sea of objects" vision, slowing things down a lot.
That was a thing on the GNOME desktop (GNOME being short for GNU Networked Object Model Environment). The CORBA framework was called bonobo, and it was basically a very early version of what's now achieved via dbus. KDE had its own equivalent, known as DCOP. AIUI, both systems were replaced altogether when dbus became a thing.
Software itself is about the flow of control and data; trouble often follows where dogma obscures this fundamental.
Trouble always follows where dogma conflicts with first principles and empirical data.
More correctly, a significant part of software development is organizational politics. The "flow of control and data" inevitably gets you embroiled in that.
No! Microservices are value-over-network, hopefully with idempotent semantics. CORBA is object-over-network, with clients holding references to server objects and a protocol that passes object references around over the network. It is true that one can do value-over-network within a CORBA environment by throwing away 95% of the features, but that is not idiomatic CORBA and was sneered at as not object-oriented enough. This is a case of too many misguided features misleading users down unproductive paths, possibly even worse than selling [a-z0-9_ ]* as the ultimate programming language on the grounds that there are subsets of this language that are actually useful.
Quite right; microservices are the millennials rediscovering Sun RPC and XML-RPC, while writing spaghetti-structured code with a network in the middle.
No. Microservices pass data over the network. Distributed objects get handles to remote objects and then make calls on them. Sometimes there could be very many of them, and it all got very complex and performed poorly. Microservices are still object-oriented, because each service is an object and the data being passed is a message. They are big objects. Alan Kay said that one of the mistakes with OO was not making objects bigger.
CORBA actually supports both "object handle over network" and "ship some actual local object as part of a request/response". It's basically a way of addressing typical FFI/IPC concerns in a way that's actually network- and location-transparent, and that thus also addresses the concerns of distributed programming (for example, making something network-transparent introduces network faults as a possibility, and your system needs to deal with them appropriately!)
Indeed, objects should have been larger. As large as a module. Perhaps with interfaces implemented by different concrete modules. Like non-OO programming languages were doing circa the late '80s [Modula-3, SML].
While we're at it, "message passing" [aka goto with arguments] is too fine-grained a concept; what people need 99% of the time is a structured function call.
It could be, if you also send the entire object with all its dependencies along with a remote procedure call. Kinda expensive and misses the point, though.
Point 2 rings true to me. The idea that you could model reality in a layman's manner is borderline madness for engineering. You want to encode invariants and properties that are far from the superficial taxonomies you get fed in school.
Object orientation can provide a cute encoding of abstraction layers à la SICP, to avoid dealing with a bunch of dangling functions:
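(The snippet that presumably followed didn't survive; here's my own rough Java sketch of the idea - each layer talks only to the constructors/selectors of the layer below, instead of a pile of free-floating functions:)

    // Lower layer: representation, constructor, and selectors.
    final class Interval {
        private final double lo, hi;

        private Interval(double lo, double hi) { this.lo = lo; this.hi = hi; }

        static Interval of(double lo, double hi) { return new Interval(lo, hi); }

        double lower() { return lo; }
        double upper() { return hi; }

        // Higher layer: operations written purely against the selectors,
        // never against the raw representation.
        Interval plus(Interval other) {
            return Interval.of(lower() + other.lower(), upper() + other.upper());
        }
    }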
One of the seminal books people read about object modeling, "Domain-Driven Design" by Eric Evans, writes about "transaction scripts". Not everything needs to be modeled.
The thing about Smalltalk, is that it adheres very closely to "everything is an object." This means that even low level mechanisms in Smalltalk have to allow a very nimble use of objects. This means that there can be serious cost/benefit mismatches when translating libraries/patterns/designs from languages like Smalltalk to others. Creating a lambda in Smalltalk is trivial business as usual. Other environments require more thought, have more limitations, have higher costs of their use. Factories are how you do normal operations in Smalltalk. In other languages, they become infrastructure that needs to be maintained.
Not everything needs to be modeled in every language. Basically, you model to the point where the cost/benefit works out, but go no further. In Smalltalk, the cost of modeling things is designed to be so low, almost everything is an object. So of course, the cost/benefit in different environments can be drastically different, and in many, modeling everything can be a waste.
But what do you think of the specific points that were made in the video (not the title)? Are they not worthy of discussion, since, ostensibly, that is why we are here?
The problem for me with OOP is the complexity of managing state. OOP encourages mutability, and understanding the state of an object as its methods are called can be confusing when additional internal (and often private) methods are called. In functional programming, state is something acted upon by functions. It is as simple as f(x) = y. Reasoning in functional programming is much closer to the mathematics I've been trained in since grade school. In functional programming languages, mutating an object is an explicit operation. It's easy to tell from the syntax alone when state is being mutated. Mental overhead is greatly reduced when you can assume that your data structures are immutable.
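A tiny Java sketch of the contrast (my illustration, not from the video): with a mutable class the change happens out of sight; with an immutable record, every transition is an explicit f(x) = y step in the data flow:

    // Mutable style: state changes are hidden inside method calls.
    class MutableCounter {
        private int count;
        void increment() { count++; }      // mutation happens out of sight
        int value() { return count; }
    }

    // Immutable style: the "mutation" is an explicit new value.
    record Counter(int count) {
        Counter increment() { return new Counter(count + 1); }
    }

    class Demo {
        public static void main(String[] args) {
            Counter c1 = new Counter(0);
            Counter c2 = c1.increment();   // c1 is untouched; c2 is the new state
            System.out.println(c1.count() + " -> " + c2.count());  // prints: 0 -> 1
        }
    }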
In theory this sounds great but in reality every major application, game, operating system and even website is written in an OOP style. Why is that? People should first answer that question.
Yeah, real existing software has flaws, like everything else in reality, but at least it exists, unlike any significant piece of FP-style software. I like FP and I think it has a lot to teach, but I think it's a bit presumptuous to advocate against a major paradigm that has actually proven itself, in favor of one that has failed to do so for decades.
And if OOP is so horrible why do so many people write OOP style PHP and javascript where they really don't have to?
Personally, I don't think OOP is bad per se, I just think the type systems of the main OOP languages are too limited and inflexible. Even so, you can accomplish amazing stuff in them. Just look at the Spring Framework and what it helps you do with so little code.
A fair bit of that can be easily explained as cargo cult - a fair number of developers simply believe, because it's how they've been taught and what's been stated as fact over and over and over, that OOP is just 'how you write code'.
Ok but objects have been successful for a very long time. And this is in a field where there are more potential early adopters than most. Programmers like new stuff.
I think they have been successful because in the end a lot of the world we deal with behaves like objects. Devices, processes, remote machines all accept messages and those messages have 'side effects' in that they change the future behaviour of that thing.
My current understanding (sure to change!) is this:
Objects and messaging are best at the boundaries of system components. The internals end up being far simpler to state in functional terms, where the side effects can be pushed to the edges with far less inherent complexity.
By assuming that objects are successful everywhere and always, it's really ignoring the regions where objects are more or less useful, painting over the picture with broad strokes.
By learning multiple paradigms, you're learning broad principles that map well across all languages, as well as principles that are stronger or weaker in some paradigms than others. This enables you to be much more versatile when creating systems than devoting yourself to the 'One Way To Code' mentality.
>In theory this sounds great but in reality every major application, game, operating system and even website is written in an OOP style. Why is that? People should first answer that question.
Ok, try this on for size: OOP in general has one characteristic that makes it fantastic for large-team development, and that's extending over replacing. OOP makes it easy to never remove or change any existing code, and also "easy" to make changes or fixes by only adding more code. I wouldn't call it a good thing, but it does enable enormous numbers of people to all work on a codebase productively at the same time. I think this, more than anything else, is the power of OOP.
> And if OOP is so horrible why do so many people write OOP style PHP and javascript where they really don't have to?
Numbers. OOP supports big teams, so there are lots of OOP programmers in the wild. I genuinely think Javascript is sucking up to the Java developers of the world, and PHP is genuinely better for adopting OOP bits and pieces, but I'm not entirely sure why OOP devs and Java devs in particular seem to be immune to the industry mantra of "stay current and learn new things".
But Lisp's object system is kind of famous. Where are the people beating the averages in pure fp languages? There should be a lot of really amazing software if fp is as good as it's made out to be and employed exclusively by elite non-blub programmers. There really isn't though, except maybe in finance where F# is doing quite well. And that's great but even F# is multi paradigm.
My home-grown accounting system, written in a language with great support for functional programming, is based on objects.
Accounts, transactions, transaction deltas, and ledgers are all objects. Starting from a blank slate, a file is loaded which defines the ledger entries. These definitions are processed one by one, using mutable updates to the object framework. What is left is the state of the ledger with all those entries, which can then be subjected to various queries and reports.
Fun read -- the last sentence included! Plenty to say about that article..
I get the sense that many commenters here, YNews in 2019, very much fit the description of a "Blub" programmer.. with the added twist that it is -giant-cloud-thing- platform and/or exciting web trends are my thing, to deepen the "Blub" point of view..
.. can't help but chuckle at the repeated and fairly clever jabs at ASM, considering it took M$FT ten years to make a windowing GUI that performed well enough on a PC, after the Mac OS was written in a lot of ASM.
Yes, but I/O is impure and sequentially dependent, unlike mathematics.
Maybe the "impedance mismatch" between pure-fp and I/O is too great, with the IO monad being the "leaky abstraction" of the FP world.
Then OO is not a bad solution to an irrelevant problem, but a pretty-good solution to a bad, relevant problem.
EDIT:
To clarify: The "problem" is change.
Imperative programming relates sequential device change over time to sequential program lines. OO programming is structured imperative programming.
Pure FP languages model sequential change with (roughly) lists of actions-to-be-performed that are snaked through your program.
(Also: maybe OO's failure has more to do with its limited static analysis, type systems, than its model of change?)
Backus: Well, because the fundamental paradigm did not include a way of dealing with real time. It was a way of saying how to transform this thing into that thing, but there was no element of time involved, and that was where it got hung up.
Booch: That’s a problem you wrestled with for literally years.
Most of your application logic should not have to care about I/O. That is a concern which should stay at the edges. This way it is easier to reason about the program, and test it.
“Mental overhead is greatly reduced when you can assume that your data structures are immutable”.
I've been reading lots of articles claiming that, but I still haven't understood how that's so. If you have fifty functions all acting on the same piece of state, why would returning new objects instead of mutating greatly reduce mental overhead? Isn't there still mental overhead in tracking which of the fifty functions made some transformation?
Maybe this is something I would understand better with more practice in functional programming (which I don't have yet), but if anyone could provide an example of this in practice I'd really appreciate it.
Simply this: when any function returns a result, you never need to worry about anything else in the universe changing too. It's a self-contained operation with totally predictable results and no side effects.
Additionally, you break the implementation into two smaller tasks: first, how to transform object A into object B; second, how to wire everything up to pass object B to the other functions needing it. In OOP we essentially do the two at once. That can turn into quite an overhead in a complex system.
On the other hand, if the "everything else in the universe changes" model is established and understood, that can be an excellent way to streamline and decouple things.
I could see it working well with a React-style component model-- update the state, and anything derived from it automatically changes to match.
But this is (largely) a property of the functions you write rather than the data structures. You can get virtually all of the benefit in the procedural world by just writing pure functions.
When you have immutable data structures by default, you get the additional assurance that other places in the code that previously used an input to a function can continue to work with their data without worrying that it might have been changed by that function, since the function cannot modify the data, only return a new 'version' (through structural sharing).
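For example (a sketch; the JDK's standard collections don't do structural sharing, so this one copies, but the caller-visible guarantee is the same):

    import java.util.ArrayList;
    import java.util.List;

    final class Baskets {
        private Baskets() {}

        // Returns a new 'version' of the list; the input is never modified.
        static List<String> withItem(List<String> items, String item) {
            List<String> next = new ArrayList<>(items);
            next.add(item);
            return List.copyOf(next);   // hand back an unmodifiable snapshot
        }
    }

    // Callers holding the old list are unaffected:
    //   List<String> before = List.of("apples");
    //   List<String> after  = Baskets.withItem(before, "bread");
    //   // 'before' is guaranteed to still be ["apples"]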
Yeah, but when you do need to mutate state, why is returning a new object any clearer than modifying an object and returning it? Especially when the new object you return is going to be used to update a state tree later. Isn't this a side effect too, even if a more indirect one?
You don't have to worry about state mutating in the body of your functions, so when tracing or debugging the action of the function you can be very single minded and deterministic.
It's especially beneficial when you're doing multithreaded work. I've debugged multithreaded C++ and multithreaded Elixir/Erlang, and fixing errors and identifying race conditions is night and day.
We wrote a front end in highly disciplined functional JavaScript on React, with a single source-of-truth object in the center. I hadn't touched JavaScript in a decade, and I could debug parts and add features with 100% confidence that I wasn't messing anything up. (We needed the frontend in a pinch, so we hadn't done unit testing yet - I know, I know, it's fixed now.)
If you are curious, I recommend the video "boundaries" by Gary Bernhardt, of "wat" fame.
I've seen this posted here before. I take issue with anyone who gets onto a soapbox, announces "this might be the most important video you've ever watched," and then proceeds to espouse their opinion about something. It's egotistical, and it immediately sets up any dissenting opinions as Obviously Wrong. I call BS on all of it. OOP can be written well. I've done it. I've seen it done by others. When you need a strong domain layer and strong data integrity, it can really pay dividends. I think what people are really trying to avoid when they say "OOP is bad" is poor design and entangled concerns. In general, favor composition over inheritance, strongly defined data contracts, and a clean separation of concerns (business logic from presentation, for instance), and you will find yourself with more maintainable and easier-to-read code. But that's just, like, my opinion, man. :)
I think one of the reasons anti-OOP opinions are so strong is that OOP was promoted as the end all savior of programming. At my University it was dogmatically emphasized.
"OOP" (e.g. subtype polymorphism) is useful for some problems, but worthless and kludgy for others; it's a hammer pounding lagbolts. A generation of programmers (mine) missed out on the richness of Lisp/ML flavored languages.
I didn't really come to Lisp/OCaml until I'd already been programming for a decade, and after embracing it, I can't imagine going back.
Thankfully we're in the multi-paradigm era, where modern languages are adopting the best of all worlds.
The biggest problem with OOP is arguably inheritance, specifically implementation inheritance. The basic issue is that "protected" methods may be extended arbitrarily by a child class, with zero indication of what properties might in fact be relied upon by methods in the parent; this inevitably leads to a rather intractable version of the Fragile Base Class problem.
This pitfall of open-ended implementation inheritance (a.k.a. "open recursion"), which is precisely what "inheritance" per se provides over the well-known (and often used) combination of "object-based" composition and "interfaces"/traits/type classes, is pretty damning for OOP itself.
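The canonical demonstration (adapted from the InstrumentedHashSet example in Effective Java): the parent's addAll happens to be built on self-calls to add, an implementation detail stated nowhere in the interface, so a child that extends add silently double-counts:

    import java.util.Collection;
    import java.util.HashSet;
    import java.util.List;

    class InstrumentedSet extends HashSet<String> {
        int addCount = 0;

        @Override public boolean add(String s) {
            addCount++;
            return super.add(s);
        }

        @Override public boolean addAll(Collection<? extends String> c) {
            addCount += c.size();     // counts once here...
            return super.addAll(c);   // ...and again, because HashSet's addAll
        }                             // is implemented via self-calls to add()
    }

    // new InstrumentedSet().addAll(List.of("a", "b", "c"))
    // leaves addCount at 6, not 3 - the fragile base class in action.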
Interestingly, Lisp/OCaml/Clojure rank low in popularity; Clojure shows it's not because of libraries.
That has to mean something other than libraries or unenlightened people. I don't know; maybe there is too much complexity in those languages and their philosophy.
Learning new languages is considered to be one of the highest costs a developer can pay. To be fair, there is a cost to picking up a new language, but nowhere near as high as is often believed, especially after one has learned two or three significantly different ones.
I'd argue that those languages don't see as much use in great part because they are not the first languages people are exposed to, and therefore most decide it's not worth the effort to learn them.
This advice/reflection could apply equally to any style of programming, except for one particular snippet. You haven't made or refuted any OOP-specific claims. Separation of concerns? Egotistical soapbox speaker? Strong domain layer? What do they have to do with OOP, as opposed to just P?
Except for this:
> favor composition over inheritance
Yes! Inheritance is OOP-specific, and I agree that you should avoid it.
Inheritance is a powerful tool. And like anything powerful, you need to think very hard about its use and assess whether another approach might be better suited to what you are trying to accomplish. In my experience, the reason people use inheritance is often simply to share common functionality. There are a lot of ways to get that done that don't require inheritance.
If done improperly, one ends up having to maintain backward compatibility in the parent object, not just on the actual interface but also in the internal state. It basically makes it too easy to violate encapsulation.
One need not avoid it completely, but one does need to be aware of the pitfalls.
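A sketch of the composition alternative (hypothetical names): the shared functionality is a plain collaborator held as a field, not a base class:

    import java.util.function.Supplier;

    // The reusable behavior stands alone...
    class Retrier {
        <T> T retry(Supplier<T> op, int attempts) {
            RuntimeException last = null;
            for (int i = 0; i < attempts; i++) {
                try { return op.get(); }
                catch (RuntimeException e) { last = e; }
            }
            throw last;   // assumes attempts >= 1
        }
    }

    // ...and is composed in, rather than inherited from.
    class PaymentClient {
        private final Retrier retrier = new Retrier();

        String charge(String orderId) {
            return retrier.retry(() -> post("/charge/" + orderId), 3);
        }

        private String post(String path) { /* talk to the network */ return "ok"; }
    }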
Good on you to call BS a bit, but others have mentioned management of state specifically.. so "nouns" are ABCs, "verbs" are defined, but after running for a while, getting hammered or hitting other difficult conditions, what is the state like?
Good OOP can be really useful, without being a religion.. personally I would build general-purpose objects here and there to save some tedious decomposition.. and the resulting separation of code and concerns was fine.. specific things get specific code, while one or two item constructs somewhere off the beaten path might get lumped into a collection of "util" or similar.. it's OK - it takes some practice and thinking..
> "this might be the most important video you've ever watched"
This is standard practice on YouTube, for better or for worse - same with the usual appeal "hey, you can help me by liking, commenting and subscribing!" at the end. (I have not clicked on this video, so I have no idea if anything like that is in there too.) Gotta drive those "engagement" numbers!
Brian's video is one of the most influential bits of wisdom I've received in my career. Up until this video I did not have a language for explaining the problems I ran into trying to build OO systems. It also helped me understand why writing procedural-functional code felt so natural and easy.
I was a die hard OO/UML guy. I even contributed to various open source software projects for a model-driven-architecture framework, a UML editor, and integrations with Rational Rose. I was an evangelist for good OO practices, SOLID, etc... As a development coach I was quite fervent.
Looking back on that I feel like a fool. When I read the code refactoring examples from Uncle Bob and others I see all the problems Brian talks about in his video (and more). OO was a nice idea but we got the granularity and implementation wrong.
A 44 minute powerpoint on why OOP is bad? Please refactor into separate self-contained and neatly presented topics each with their own methods for breaking down this argument.
From the post; "The problem, however, is that, while organizing program state into a hierarchy is easy, OOP demands that we then organize program state manipulation into the same hierarchy, which is extremely difficult when we have a non-trivial amount of state. Anytime we have a cross-cutting concern, an operation that involves multiple objects that aren't immediately related, that operation should reside in the common ancestor of those objects"
No, OO does not make any demands like that. You are totally free to make objects and classes out of processes and algorithms, and that is a good tool for exactly the kind of cross-cutting concerns the article talks about.
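For instance (a sketch with hypothetical domain types): the cross-cutting operation becomes its own class, and no common ancestor of the participating objects is needed:

    interface AuditLog {
        void record(String what, String fromId, String toId, long cents);
    }

    class Account {
        private final String id;
        private long balanceCents;

        Account(String id, long balanceCents) { this.id = id; this.balanceCents = balanceCents; }

        String id() { return id; }
        void withdraw(long cents) { balanceCents -= cents; }
        void deposit(long cents) { balanceCents += cents; }
    }

    // The process itself is the object.
    class TransferFunds {
        private final AuditLog log;

        TransferFunds(AuditLog log) { this.log = log; }

        void run(Account from, Account to, long cents) {
            from.withdraw(cents);
            to.deposit(cents);
            log.record("transfer", from.id(), to.id(), cents);
        }
    }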
So the OP somehow inferred that OOP requires operations to be carried out only by 'ancestor' objects, and the argument fails miserably as a result. What they call 'ancestor' objects also has nothing to do with OOP hierarchy, as it is not an is-a relationship but rather the ownership/has-a relationship that is common in many styles, including composition over inheritance. Basically, they don't know what they're talking about.
This is someone who is confronting basic limits of representation and information flow and blaming OOP. There is no one best way to organize code just like there is no one best sentence to describe a rose.
"There is no one best way" summarizes what I think is a major unheralded realization in software engineering in the past 10 years.
Before that it was the search for the One Way. OOP and functional programming were probably the two biggest opposing schools during that era, but now the trend is languages that easily allow both plus plain vanilla procedural programming. It's the programmer's job to pick a paradigm for the problem being solved.
"there is no single development, in either technology or management technique, which by itself promises even one order of magnitude improvement within a decade in productivity, in reliability, in simplicity"
The explosion of variety has actually pushed the other way. It was expensive, but MVS shops could push out highly available apps with a consistent UI very quickly.
It seems we've really only made progress on unit cost and ubiquity.
I think this is roughly why cloud is popular. It's not cheap, but if you're using their standard pieces (RDS, lambda, ECS, ELB, CloudWatch etc), you're working in a defined box with fewer choices.
I'm curious if this is the way older and more established engineering progressed. I would be interested in how, for example, designing and building a bridge to suit a specific use case has evolved over time.
There are two major concerns in software, performance and legibility. These concerns are somewhat orthogonal but neither can be abandoned, and the challenge is balancing them. By contrast in bridge-building (etc.) the two main concerns (performance and aesthetics) have been largely separated by discipline, into engineering and architecture, the former a purely mechanical discipline, the latter almost entirely artistic.
The main result of Modernism has been the subservience of the latter to the former and, in general, the almost wholesale abandonment of aesthetic concerns. This is starting to come apart a little bit as people realize that this separation is unhealthy, and that maybe we shouldn't build brutal, ugly flyovers through our cities, because they have measurable effects on happiness, and thus property values, etc. (i.e., aesthetic concerns have functional consequences).
Similarly I think many software "engineers" take a long time to realize that one of their main functions is writing code that others can read; this is an aesthetic, artistic discipline that I think many developers reject as outside their domain. This, too, is starting to come apart as we slowly figure out that code's maintainability is a key determinant of its long-term success - that is, aesthetic concerns have functional consequences.
Software not being bound by physical constraints like other engineering disciplines is a huge distinction. So much so that I think software development is closer to writing a novel or movie script than it is to building a bridge. However, I would say the closer the software is to bare metal the closer it is to "real" engineering.
Could we please stop with this kind of useless statement?
Aren't we developers supposed to be scientific-minded?
There is no such thing as "bad" or "good" in the absolute. There are paradigms that work better in certain contexts - and I use the word "context" in a very general way here - and some that work worse.
If my consulting experience has taught me anything, it is that "it depends" is a valid comment most of the time. By the way, I also learned that if you stop at that comment, you're not a good consultant.
I really liked the first half - he provides some insightful thoughts on both why OOP doesn't quite deliver on the promise, and why it continues to be popular anyway.
I am also becoming increasingly enamored of procedural programming as a default approach. Functional is also good - it's one of my first loves - but I find that it can be similarly prone to encouraging premature abstraction. Like OOP, that problem isn't going to become so apparent until you start using it in a large, long-lived business system with ever-shifting business requirements.
That said, I think that I've got to part ways with the video around about the part where it switches to talking about how the speaker thinks good code should be structured. Especially the bit about not keeping functions small - the problem with that is, it makes it way too easy for spaghetti code to sneak in. In the same way that, in the heat of the moment, a developer in an OO language is going to start tangling together the connections among objects a bit too liberally, a developer working in a 300-line procedural function is going to start fiddling with any variable that happens to be in scope a bit too liberally. Factoring out your functions does mean you have to name all those functions (which, I realize takes effort, but I also can't agree that it's a waste of effort), but it also serves as a way of making sure everyone stays honest about shared mutable state. Maybe it wouldn't be so bad in a language that has that "nested functions that aren't closures" feature, but, like he says, such a language does not currently exist.
The small-versus-large-functions discussion is not about spaghetti code. It's about the question whether you should chop one large block of first-do-this-then-do-that code into smaller independent blocks and then call them in that very sequence from a superordinate function.
In general I totally agree that sequential code should just be sequential, and it should not be chopped into more functions. Because each time one looks at one of those additional functions, in your standard programming language, the first thing one must ask oneself is: "What is the context in which this function must work? What are all the callers of this function?" In simple, sequential code, the answer is often clearer. At least, it's clear that the block of code is only ever "called" from one place.
The disadvantage is that the block can see unrelated variables that were defined higher up in the same function - or one must add a level of indentation everywhere to protect those.
My concern there is not that a monolithic block of sequential code is inherently spaghetti code. It's that their natural tendency is to spaghettify over time.
Clean 300-line functions form an unstable equilibrium point; it requires constant effort by someone who cares about code hygiene to keep them clean. More so than factored code.
Please excuse my sibling post. I realize I put not enough effort into actually reading your comment all the way through. I agree with what you say and want to add as an aside that I think LLVM has the inline-function feature that you mention.
OOP is a way of thinking about and structuring your program that works well for some problem domains and not for others. Like other paradigms it can become dogmatic and be overused. Also like other paradigms it became a huge fad for a while and now there is a backlash.
The claim that primitive types are better than interfaces (inside modules) reveals that he is comfortable with golang style. But this point misses the tradeoff: you can do _more_ things when you know the full concrete type of something, while programming generically, depending only on the minimal requirements of what you need, opens up flexibility for the caller (constraints liberate).
Of course, one doesn't think too hard about this in golang, lacking generics.
It is quite obvious to me that the author is ignorant of the history of computing.
The popularity of OOP goes back far longer than Java: Xerox invented the graphical desktop and ALSO OOP.
When Steve Jobs was ousted from Apple, he created NeXT because he believed OOP was the next big thing. It was, and it became the foundation of Mac OS X. Microsoft copied Steve Jobs in Windows with MFC. Java copied all of them.
In the words of Steve Jobs, developing in OOP is not faster than not using it. The big difference was that once something had been created, it could be reused easily. That was essential for complex systems. It became obvious to companies.
There is this religion today of functional programming; people come to me and say: look how great this is, without state, parallel programming is so easy, now you can use 32 processors at the same time. Great!! Until we measure performance and the thing goes 50 times slower. So now you can use 32 processors for what used to take one.
Don't get me wrong, we use functional programming when it is the best tool for the job (it removes lots of bugs), but it is not a panacea.
OOP is slower than functional in many cases. The highly optimized Java stack isn't that fast and still uses oodles of memory. And then there are languages like Ruby, which are practically slower than shell scripts.
There are a few papers about Haskell's GC by Simon Marlow where this is discussed. The absence of mutable references is a big win for writing a concurrent collector.
I don't recall any of Marlow's papers having a table comparing performance of Haskell tracing GC implementations versus the tracing GC implementations available across several JVM implementations.
You never defined what you meant by a "good concurrent GC", only now are you asking for "performance" numbers.
My point is that writing a GC for Java has in fact proved very difficult. It took Sun and Oracle many man years and different GC designs to get where they are today. So it seems Java also needs a "good" concurrent GC. With both functional and OOP languages generating a lot of garbage and sharing language features, I don't see a big difference in requirements in practice. And sure enough, Clojure/Scala seems to work well using the JVM GC and F# seems to work well with the .NET GC too.
I think the burden of proof rests with you and your statement that the concurrent GC situation is somehow more challenging for an FP language.
Not shipping a working product before your deadline is bad. How you get there is up to you. OO isn't necessary but it can get you there. In a perfect world where deadlines didn't exist and everyone writing code was doing so for academic purposes, maybe OO wouldn't be necessary.
The irony of the anti-OOP bashers is that all successful GUI frameworks are OOP based, even those written in non-OOP languages like C.
Likewise, all the functional programming languages that get used as examples are also not pure FP, but rather multi-paradigm, also supporting concepts from OOP.
GUI frameworks have always been OOP-based because everybody was copying Xerox PARC. But even the most successful were not always easy to use (composition and control-flow/concurrency being particularly awkward). Modern GUIs are all based on HTML/CSS these days anyway, and certainly not always OOP, as demonstrated by frameworks such as React.
I think that is a bit like saying Xerox PARC GUIs were all pixels, which certainly isn't OOP. Modern GUIs are mostly OO code that targets the DOM or generates HTML on the server.
I'm not a web developer, but I don't think the OOP features of JavaScript are really used much. I once saw some jQuery code and it looked pretty functional to me.
Where does Domain-Driven Design fall? DDD typically has a service class that operates on multiple business objects in a pipeline-like fashion. Being business programming, there are always side effects, but they're usually handled via repository or DB-manager classes. I.e., no Active Record.
It is perfectly sensible, easier even, to apply DDD concepts to small functional services. Define a bounded context and aggregates. Under the hood (inside the context) there are a number of strategies for handling side effects, data structures, and state transitions.
I'm finding DDD + functional languages + serverless architectures work quite well together, so far.
One issue with OOP in practice that I've seen is the entanglement of the domain representation (member variables of a class) and the varied operations on that data (methods). Classic OOP encourages you to manipulate objects via methods rather than free functions, which combines potentially unrelated functionality in the same object.
My rule of thumb with objects is to keep the methods to a minimum, to the extent that all classes are either interfaces, implementations of interfaces, or pure data classes. Obviously this approach will be natural to ML programmers.
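In Java terms, that rule of thumb might look like this (my sketch):

    import java.util.List;

    // 1. An interface...
    interface PriceSource {
        long priceCents(String sku);
    }

    // 2. ...an implementation of that interface...
    class FlatPriceSource implements PriceSource {
        public long priceCents(String sku) { return 199; }
    }

    // 3. ...and pure data classes with no behavior beyond their fields.
    record LineItem(String sku, int quantity) {}

    // Operations live outside the data, as plain functions.
    final class Pricing {
        private Pricing() {}

        static long totalCents(PriceSource prices, List<LineItem> items) {
            return items.stream()
                        .mapToLong(i -> prices.priceCents(i.sku()) * i.quantity())
                        .sum();
        }
    }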
What lengths people will go to - videos, articles - just to avoid learning a little bit of Java!
Stop that laziness and learn OOP properly, you'll find a good use for it!
> Stop that laziness and learn OOP properly, you'll find a good use for it!
This is my biggest problem with OOP: proponents keep shifting the goalposts to avoid criticisms. If someone follows OOP practice, and it doesn't work out perfectly, then they must not have been doing it "properly". Hence "OOP" becomes a nebulous term, encompassing a whole bunch of approaches (encapsulation, inheritance, subtype polymorphism, dynamic dispatch, SOLID, MVC, etc.) but if any of those don't work in some situation then they mysteriously don't count as 'proper OOP' in that case.
I used to be very deep down the OOP rabbit hole (oh, the joys of meta-object protocols!), but these days I tend to stick to functional programming. I wouldn't claim it's the best way to program, and there are different tools for different jobs, etc., but one thing I've taken to heart is that we should try to make the easy thing be the correct thing.
An example of this is static type systems: it's possible to write correct code without types, but it's much easier to get things wrong. Type checking rules out a lot of those wrong things, which makes it more likely we'll do the correct thing (note: I'm not saying static types make things easier, I'm saying that the easiest thing to do in the presence of static types is usually more correct than the easiest thing to do when there are no types). Automated testing is another example, as is purity, effect systems, capability models, etc. Even the fact that Java forces us to write a class in a correctly-named file just for "hello world" is an example of making the path of least resistance more "correct" (although I disagree with Java's notion of what's "correct").
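One small illustration of "make the easy thing the correct thing" (my own example, not from the comment above): wrap primitives in distinct types, and transposed arguments stop compiling instead of failing at runtime:

    record UserId(String value) {}
    record OrderId(String value) {}

    final class Orders {
        private Orders() {}

        static void assign(OrderId order, UserId user) { /* ... */ }
    }

    // Orders.assign(new UserId("u1"), new OrderId("o9"));  // rejected at compile time
    // Orders.assign(new OrderId("o9"), new UserId("u1"));  // the easy call is the correct one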
Even if we concede that these complainers aren't doing OOP "properly", that just means OOP is fraught with gotchas, misaligned incentives and lacks objectively checkable criteria. I wouldn't want to pursue any practice where earnest, researched attempts to follow it not only lead to the very problems that it claimed to avoid, but is met with advice to follow the practice "properly". Just look at how much of softwareengineering.stackexchange.com is bogged down in philosophical pontificating about the nature of OOP!
As for your specific claim, after doing OOP in several languages for many years, professionally, academically and recreationally, I count Java among the worst (PHP is slightly worse). I could say that if you want to do OOP "properly" you should try Smalltalk, but that would be yet more philosophical snobbery (you should instead try Smalltalk because it's a great language ;) )
> > OOP is fraught with gotchas, misaligned incentives and lacks objectively checkable criteria
> Like, almost everything in programming these days (especially the frontend part)?
Everything has problems; that doesn't mean everything is equally problematic. My point is that we should favour practices which are unambiguous and encourage objectively good things, and that OOP isn't either of those.
W.r.t. functional programming, I wasn't trying to argue in favour of it specifically here. I write a lot of procedural code, and keep looking for opportunities to learn/apply logic programming ;)
There's been a lot of mention of the functional programming versus OOP comparison in this thread. Here's my 2 cents on the topic:
1. OOP is not necessarily bad.
2. The original OOP meant Smalltalk-like languages (messaging), but the term OOP as we use it today basically refers to Simula-like languages (inheritance, polymorphism).
3. OOP and FP are not exclusive. In a language like Scala you can have a 'case class Stack[A]' with a method 'def push(a: A): Stack[A]' that returns a new stack. First, it's a class, able to have methods, inheritance, etc. Second, it's an ADT (Algebraic Data Type) without any side-effecting functions. So both OOP and FP still make perfect sense.
4. Mainstream OOP language designs are mostly terrible. (Basically the 'Blub language' according to Paul Graham; I'll call those languages Objective-Blub, because I want to mean a mainstream OO language without hurting anyone.)
5. A lot of the arguments against Objective-Blub, or OOP itself, are related to state, side effects, etc. These are partially true. But I can't say stateless and pure is totally better than other approaches; I just feel better when I'm tackling a complex domain, since the code does not affect other code in crazy ways. If the whole program is easy to make sense of, the FP approach has no obvious advantage over Objective-Blub. In that case, choose the one with higher velocity.
6. The core problem with Objective-Blub, to me, is not simply state or side effects. It's the wrong/implicit modeling: if you are not modeling a problem, 9 times out of 10 you cannot address the problem, and it will bite you again and again. For small applications this is fine; for large applications the omitted modeling can be a real problem. So few people mention this explicitly, so please allow me to list some examples here:
a. Quick example - NullPointerException: not modeling the nullable concept makes you handle null everywhere. Explicitly modeling it as Optional/Maybe/Some resolves it.
b. In Objective-Blub, objects are reference types by default, which assumes you only ever use the object on one specific machine and thread. Objective-Blub programmers always complain about the object-relational impedance mismatch, and then they blame SQL. But the real problem is Objective-Blub itself: to serialize, you have to throw away all the methods and references. It's not only SQL; JSON and other serialization formats are also a big deal. In FP languages you just map tables to an ADT, which is no big deal. There's no assumption of identity in an FP language at all; if you want an entity, just give it an id field. That also explains why FP languages are more concurrency-friendly: the same data on different machines is still the same data.
c. Sub-type polymorphism does not let the programmer do the correct modeling easily: ADTs have sum and product types. That's how you describe domain types; it's straightforward. Done. Languages like Objective-Blub have classes and enums, respectively, but no parametric enums. You have to write less obvious code to model a concept as simple as 'Payment = CreditCard(no, cvv) | Cash(amount) | FreeCoupon' (see the sketch after this list).
d. Sub-type polymorphism means strong assumptions: this is so-called inheritance, which encourages you to make strong assumptions, like a Person extending a Head. That works, but it implicitly says a Person is a Head. It sounds ridiculous, yet if you translate Person and Head into business types and services, you'll find too many code bases with this problem.
e. Sub-type polymorphism is under-powered compared to other polymorphisms: less power is good when it's enough, but it's a huge problem when it's not, because you then have to hack your way around it - like in old-days Java, where looping through a collection meant casting every element. That is insane for a verbose statically typed language. Languages should focus on making parametric polymorphism simple, as OCaml and Haskell do, to guide programmers toward the better way by default. Statically typed FP languages usually model effects with parametric/ad-hoc polymorphism, which can go a long way toward separating dirty-world concerns - like in Scala, where your domain service can accept F[_] as an effect type parameter without knowing what the effect will be; it could be database IO, a random generator, or something totally different.
f. Methods are everywhere: in Objective-Blub you have to write all of a type's methods in the same class, so a class with 100 methods is considered a code smell, and someone refactors it - moving some methods to other classes and extracting others into a new class. That's one reason large code bases are so hard to read. In FP languages it's no big deal to have hundreds of functions operating on the same type; you just split them across several files if you want. Those extracted classes and moved methods blur what the domain types and logic should be, because you have to invent new concepts, or move concepts elsewhere, just because the old concept got too big.
g. Less expressiveness also blurs things: the first time I looked at a code base with design patterns everywhere, I was so confused. Most patterns exist because the language is so constrained that it cannot express a simple idea. If constructors could take something like TypeScript's Partial<T>, most builders would be eliminated. Singletons and static properties are addressed by Scala's object. When a code base is filled with BarFactoryBuilder and the like, the language itself has failed to model these common patterns, and people (both readers and writers) end up fighting the language instead of just expressing the domain business.
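For point (c) specifically, newer Java (sealed interfaces since 17, pattern switches since 21) can finally sketch that Payment sum type directly - illustrative only:

    // 'Payment = CreditCard(no, cvv) | Cash(amount) | FreeCoupon' as a sealed hierarchy.
    sealed interface Payment permits CreditCard, Cash, FreeCoupon {}
    record CreditCard(String no, String cvv) implements Payment {}
    record Cash(long amountCents) implements Payment {}
    record FreeCoupon() implements Payment {}

    final class Checkout {
        private Checkout() {}

        // The switch is exhaustive: forgetting a case is a compile error.
        static String describe(Payment p) {
            return switch (p) {
                case CreditCard c -> "card ending " + c.no().substring(c.no().length() - 4);
                case Cash c       -> c.amountCents() + " cents in cash";
                case FreeCoupon f -> "free coupon";
            };
        }
    }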
Yeah, that was pretty good. I guess OOP is not the end-all be-all. It probably would have been more useful if this had been produced 10 years ago instead of 3 years ago. Is anyone still using OOP?
Except it's not unfashionable. It's used by millions of programmers who just want to get stuff done. It's used by thousands of companies who don't really care how the native instructions are eventually generated. It's used by individuals and teams, application developers and library authors.
The premise of this video is flawed from the beginning, and the whole thing is basically a gigantic straw man.
Not sure if nathanaldensr means that, but that's what I would say is the straw man.
Also, it's been my experience that a lot of developers have a lot to learn about OOP principles. The most commonly lost OOP idea is asking an object to reason about its own current state. If I had a nickel for every time I see code like the below...
String fooJson = new Gson().toJson(foo, Foo.class); or
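The "tell, don't ask" alternative being alluded to would look something like this (a sketch; the point is to hide the serializer behind the object so only one place knows how a Foo turns into JSON):

    class Foo {
        private final String name;

        Foo(String name) { this.name = name; }

        // The object answers questions about its own state;
        // call sites no longer construct a Gson for every conversion.
        String toJson() {
            return new com.google.gson.Gson().toJson(this);
        }
    }

    // String fooJson = foo.toJson();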
It seems the author has not learned how to use OOP appropriately. His main argument can be summed up as “it’s hard to name things and hard to compose things” and that seems to apply to all programming.
When he talks about the "kingdom of nouns," he's not complaining that the problem with "Manager" classes is that they're hard to name. Instead, the argument is that these manager classes are hard to name because you've reached a point in encapsulation where the association between behavior and state is inherently unclear and overly abstracted.
I think the author would agree that all programing deals with the difficult problem of composition, however, the argument in this case is that OOP introduces unrealistic constraints on composition. The argument in this video is that encapsulation and the single responsibility principle requires you to choose between a difficult to maintain tree-based object hierarchy, or to abandon encapsulation, and that both of these are sub-optimal choices.
Finally, I think the author would agree that he "has not learned how to use OOP appropriately" but would go one step further and say that "it's impossible to 'use OOP appropriately.'"
You can make any number of arguments why X, Y or Z is bad (or, "considered harmful"). The truth is, if you want to get stuff done, objects often fit the domain well enough and put state in predictable places. A bandaid here or there, "friend" classes and so on may be required. But, you'll get stuff done even if it's not in some perfect stateless beauty.