The He-Man Pattern Haters Club

Richard Mansfield has a bone to pick with object oriented programming:

This is a companion discussion topic for the original blog entry at:

Good stuff, but the “OOP is Bad” site doesn’t belong to the author, it belongs to a megalomaniac nut named Bryce Jacobs. Just search for “Topmind” in Google groups to see what I mean.

Havin’ fun with it…

Thanks for pointing that out. Some reasonable responses to an overly religious article. This response in particular stood out to me:

But one thing I have to admit: badly written OO code is IMHO worse than badly written procedural code. With “bad” I mean OO programs that are bloated because of excessive usage of patterns and ignoring the principle of simplicity. I’m actually trying to fix such a beast and it’s so hard because the whole thing is absurdly complex with unnecessary abstractions, factories, proxies, managers, visitors and so on, just the whole catalogue of the GOF-Book. Bad procedural code often comes around in the form of long files of spaghetti code which can be quite easily wrapped up in a class and be locked away forever.

Enjoyed your post. I too read Richard’s article last week and thought it was interesting. I myself prefer to KISS whenever possible and not build a submarine when a rowboat will do.

You may enjoy this one as well:


I’m not that impressed with the author’s credentials. Looks to me like he’s spent most of his time writing about programming - not doing it.

Most professional programmers know that OOP is just one tool in the toolbox. The skilled ones know when to use it - and when to use something else.

Most professional programmers know that OOP is just one tool in the toolbox. The skilled ones know when to use it - and when to use something else

I agree. But the question is, which of these systems (OOP or procedural) produces better results in the hands of a naive, inexperienced developer? Because that’s invariably what you’re going to get.

Well-designed methodologies and systems are built to accommodate human error and inexperience. I’m not convinced the fancy patterns do this particularly well. They seem to require a high level of skill to (a) determine when they apply and (b) implement them correctly without overcomplicating the code.

That is a great quote Jeff. There was a comment up there too about how too many abstractions are like chasing down way too many goto statements. That one had me bustin’ up.

Stuart Halloway takes the interesting view in his “Design Patterns Revisited” presentation that, in some sense, design patterns are missing elements of the language. He provides techniques using things like reflection and aspects to add them in. Some languages (Objective-C) allow you to add elements in as base components of the language, which reduces the complexity of the patterns to the point that they just become the tools that they are intended to be.
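For illustration, here’s a rough sketch of that idea in Python, where first-class functions let the Decorator pattern’s intent (stacking extra behavior onto a component at runtime) nearly dissolve into the language. The beverage names and prices are invented for the example.

```python
# In a language with first-class functions, Decorator's intent --
# wrap a component to layer on behavior at runtime -- collapses into
# plain function composition. Names and prices here are invented.

def espresso():
    return 1.99  # base price

def with_milk(beverage):
    # "Decorate" the beverage: same interface, extra behavior.
    return lambda: beverage() + 0.10

def with_mocha(beverage):
    return lambda: beverage() + 0.20

drink = with_mocha(with_milk(espresso))  # stack wrappers freely
print(f"{drink():.2f}")  # 2.29
```

No wrapper class hierarchy is needed; the “pattern” is just closures composing.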

Jeff, I think that’s a valid concern and as always there is a tradeoff between understanding the pattern versus getting lost in a complex, say, payroll or graphics application just to see where it might fit. We do fit decorator into the context of Java I/O after explaining the simple pattern. We also address the “novice user seeing patterns everywhere” issue in the last chapter (as well as elsewhere).

Keep the comments coming! It can only improve future editions.


Head First Design Patterns was written specifically NOT to be Design Patterns for Dummies. Our readers are not dummies, they are beginners, who haven’t yet had the opportunity to learn about design patterns.

Stuart Halloway takes the interesting view in his “Design Patterns Revisited” presentation that, in some sense, design patterns are missing elements of the language

Agreed. I remember Paul Graham commenting that over half the patterns exist as language elements in Lisp. The best patterns are probably invisible because they’re part of the language.

Would the coffee pricing example stick in your head so that when you’re actually working on a REAL implementation you’d have the essence of the pattern in your head? That’s the whole point with Head First - learning, and in a way that makes people feel like they can go out and use the patterns

My concern is that beginners aren’t learning when a pattern is appropriate in a real world situation. This leads to inappropriate use of patterns in simple situations-- exactly as shown in the Head First example-- which leads to overly complex code (see quote above). That’s a classic beginner mistake. Simple situations deserve simple solutions.

Head First Design Patterns was written specifically NOT to be Design Patterns for Dummies

I didn’t mean it in a pejorative sense… I prefer books like Head First. I don’t enjoy dry academic books, and I’m sure four out of five Amazon readers would agree!

Hey Jeff, totally agreed patterns are great shared vocabularies, not implementations. But, with the Head First decorator comment, think about it from the perspective of someone learning the patterns (not an OO expert like yourself). Would the coffee pricing example stick in your head so that when you’re actually working on a REAL implementation you’d have the essence of the pattern in your head? That’s the whole point with Head First - learning, and in a way that makes people feel like they can go out and use the patterns, not in a way that makes them feel like a dummy. If you read the Amazon reviews you’ll see people really do benefit from this approach.

Anyway, hope you’re enjoying the book.



Any textbook explanation of a complex technique will almost inevitably use bad examples. In a textbook, we want a simple example because the point is to demonstrate the technique, not to get bogged down in the specific application. If I’m trying to explain, say, how inheritance works in OOP, I’d probably do it with some trivial operation like the parent adding two numbers together and the child overriding this with a function that multiplies. I wouldn’t use a completely developed model of mitosis in mitochondrial DNA, subtyped to handle meiosis, because the reader would then have to spend hours understanding the biology rather than learning the programming.
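That trivial textbook example might look like this in Python (the class names are made up for the sketch):

```python
# The textbook-trivial inheritance example described above:
# a parent that adds two numbers, a child that overrides to multiply.

class Combiner:
    def combine(self, a, b):
        return a + b

class Multiplier(Combiner):
    def combine(self, a, b):  # override the parent's operation
        return a * b

print(Combiner().combine(2, 3))    # 5
print(Multiplier().combine(2, 3))  # 6
```

The domain is deliberately vacuous, so all the reader has to absorb is the overriding mechanism itself.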

Thus, many textbook examples use a complex technique where no programmer in his right mind would use such a complex technique. If I really wanted to find out what 2+3 is, I wouldn’t write an object-oriented program with a hierarchy six deep, a web page, and a back-end database. Even “System.out.println(2+3);” would be far more complexity than called for. I would not use a computer at all. I would simply count on my fingers.

Surely no one would suggest that all textbook examples should be sufficiently complex to merit the use of the technique being explained.

But there is the danger that if this point is not carefully explained to the student, he might get the very wrong idea that such complex techniques should or must be used for such simple problems.

I’ve seen plenty of examples of code that is far more complex than necessary or appropriate to the problem. I’ve often wondered if the programmer used this technique because he really felt it was useful here or just because he remembered seeing it somewhere else and blindly applied it to the existing problem. Maybe this is how some of that happens.

Jeff, what’s the deal with your repeated bashing of OOP, when your statements are a clear indicator that you’ve never worked in a purely OOP environment?

Would I really design a point of sale system that used the Decorator pattern to represent coffee pricing? I think I’d use a simple relational database table and some procedural code. If I needed to add a topping, I’d simply add a record to the table-- no complex objects or inheritance models required.
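A minimal sketch of that table-driven approach, with a dict standing in for the relational table (the toppings and prices are invented for illustration):

```python
# Table-driven pricing: the data, not a class hierarchy, carries the
# variation. A dict stands in for the relational table here; adding a
# topping means adding a "row", not writing a new decorator class.

TOPPING_PRICES = {"milk": 0.10, "mocha": 0.20, "whip": 0.15}

def price(base, toppings):
    return base + sum(TOPPING_PRICES[t] for t in toppings)

total = price(1.99, ["milk", "mocha"])
print(f"{total:.2f}")  # 2.29
```

In a real system the dict would be a database table, so new toppings need no code change at all.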

Design patterns are for people who’ve read every OOP book (most of which are poorly written, with crap examples), learned every facet of OOP, and got bored. Your statement above about the Decorator pattern is like saying you’d use an H-bomb to hammer in a nail.

If you were to use OOP to connect to a DB in C#, the easiest route is a LINQ-to-SQL ORM, which automatically maps a table row into an object (a.k.a. an entity). Then you run a LINQ-to-SQL query (either pure LINQ or a SPROC, depending on performance or preference) to collect a list (an intelligent array) of relevant entities, and enumerate the list with a foreach statement to handle each row.

Creating a class that represents a table row (which is what an ORM does) is actually a hell of a lot more intuitive than storing the data in loose variables or arrays. In a dynamically typed language like PHP or JavaScript it might make sense to store a table row in an array. In a statically typed language, the columns can each be a different type, and therefore each needs its own typed field.
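LINQ-to-SQL itself is C#-specific, but the row-as-object idea sketches easily enough; the table, columns, and values below are invented for illustration:

```python
# Rough stand-in for the ORM idea above (LINQ-to-SQL itself is C#):
# each row becomes a typed object, so every column keeps its own type.

from dataclasses import dataclass

@dataclass
class Order:              # one row of a hypothetical "orders" table
    id: int
    customer: str
    total: float

raw_rows = [(1, "Ada", 9.50), (2, "Bob", 3.25)]   # raw query result
orders = [Order(*row) for row in raw_rows]        # row -> entity

for o in orders:                                  # enumerate entities
    print(o.customer, o.total)
```

Compared with a bare tuple or array, the entity gives each column a name and a type, which is the intuition the comment above is pointing at.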

Do yourself a favor… quit reading trivial books that take a moon’s-eye view of OOP and learn what OOP really is. Otherwise, don’t bother using JavaScript: every time you call new you’re instantiating a class, and every time you write object.method() you’re calling a method on a class defined in JavaScript’s DOM hierarchy (yep, that’s right, OOP).

Truth is, people who really use OOP (and know what they’re doing) almost never use inheritance. Plus, you can use regular procedural code in OOP: it’s called static classes, static functions, and static variables. Classes and objects are only necessary when it makes sense to group some variables or behaviors; the higher-level constructs (inheritance, polymorphism, encapsulation) are only to be used when they make sense, and design patterns are for crazy people.

There is very little code-based evidence or demonstration; instead, it is a battle of anecdotes. We should expect more. This is the age of science, no? Also, most of the evidence examples seem to come from systems software. Is it perhaps the case that OOP does not work so well for business software? Interfaces are more stable in systems software thanks to de-facto standards, so swappability of implementation, and thus polymorphism, makes more sense there. Maybe OOP’s shine is just not domain-universal. That would explain why the best OOP examples are for systems software.

Evan, many OO proponents keep saying that if you use OOP long enough and do it right, then you’ll just see how it’s better. This is a self-fulfilling prophecy, for people who don’t like OOP will eventually leave it or reduce their use of it. A large part of it is personal preference. OOP maps to the way some developers think, but not all. I’ve been trying to figure out what is so great about OOP for years, but never get a consistent answer from OO proponents. This is likely because there is NO RIGHT ANSWER and people use what fits their way of thinking (if given a choice). Thus, heavy OO proponents tend to end up together. There’s also a trend of using OO for internal components (computer-centric stuff), but not for domain modeling. This fits my observation: OO is lousy at domain modeling (business modeling), but okay-to-good for things like GUIs, sockets, report writers, etc.

So, I just read my previous comment and… I apologise for the harsh tone.

Tom, I think the reason most OOP proponents can’t come up with a decent answer to “what is so great about OOP” is that most people are introduced to OOP in an academic setting, where their heads are drilled with everything OOP, from abstracts to patterns. So when they finally get to using it, they think they need to use every tool in their repertoire, if for no better reason than to prove that they can. The other backlash of that academic mindset: OOP is inherently more complex than procedural programming. Having a post-grad CS major try to teach the concept of OOP to a procedural programmer is like a schizophrenic bashing his head against the wall to get the voices out. It may get through, but it’s guaranteed to be a painful experience.

Now, back to the point. Why is OOP useful? Look at procedural programming: in its simplest form it’s a pretty easy concept to grasp. Execution starts at one point, does some stuff, and either exits or continues in an infinite loop. When code gets sufficiently complex, you abstract certain processes away into functions. Now you can call a specific action, give it the data it needs, and it spits out a result, with no need to know about its internals as long as you have a good idea what the returned result is supposed to be. It has been proven time and again that procedural programming can model or describe anything. Almost all of your variables are scalar, so their behavior is pretty obvious. Easy peasy.

Enter OOP. OOP in its most basic form is just a way to group common variables and functions. If you were modeling a real-time simulation of aircraft engines, procedurally it would make sense to write a set of functions that do what an engine does and then call that set in order, once for each engine. In OOP you model the engine in a class, instantiate an object for each engine, and tell them all to run; all variables and functionality are internal to the engine, so you don’t have to worry about how it works.
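That engine example, sketched minimally (the class, attribute names, and RPM value are invented for the sketch):

```python
# The engine example above: the class groups an engine's state and
# behavior, and each physical engine becomes one instance.

class Engine:
    def __init__(self, name):
        self.name = name
        self.rpm = 0

    def run(self):
        self.rpm = 2400   # internal detail; callers never see how

engines = [Engine(f"engine-{i}") for i in range(4)]
for e in engines:
    e.run()               # same message sent to every engine
print(all(e.rpm == 2400 for e in engines))  # True
```

Each engine carries its own state, so running four of them is just four instances, not four copies of a set of global variables.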

Procedural programmers and OOP programmers see the world through different lenses. Procedural programmers see bottom-up: what needs to take place to make something happen. OOP programmers see top-down: here is a model of everything that makes up our little world; now, how the hell do we make it all work together?

The best part about OOP is that it’s OOP. It’s all about modeling objects and their interactions; it introduces the noun (object) to programming, whereas procedural is just verbs (operations, decision logic) and state (variables). The worst part about OOP is that it’s OOP. For every level of abstraction there are more rules (and who likes rules?).

Remember what it was like to learn how variable scope works with functions? I’ll call that the first level of abstraction. OOP adds multiple levels on top of that. Next is class member scope (public, private, internal, readonly, const for variables and properties; public, private, and internal for methods). Then class-type scope (static vs. instance). And finally inheritance scope (virtual, sealed, override, and abstract). Not only that, but how the hell do you know what order everything is executed in?

I agree: OOP sucks… to learn. Everybody who is learning it will write a lot of crappy, hackish code trying to find ways around the limitations of what they don’t know about OOP yet. And the only way to become proficient is by writing a lot of crappy, hackish code. In fact, the day a person becomes proficient in OOP is the day they can sit back and ask, “what’s the best way to make this code as clean as possible,” instead of “how the hell am I going to get this damn thing to work.” There’s a long series of “oh s***” moments one has to experience to get there, and it’s a long process, especially if you already have a clear alternative for tackling the same problem, like extensive experience writing procedural code. Basically, if you’re worried about design patterns and you don’t know the difference between public, private, internal, static, override, virtual, sealed, abstract, etc., you’re doing it wrong.

Sorry about the length of my post. I figured I’d make an attempt at explaining a realistic perspective on OOP from someone who doesn’t masterdebate to class diagrams of design (anti?) patterns. I wish I could reference a book that takes a similar approach, but AFAIK it doesn’t exist yet. The books about OOP today suck at explaining how to program in OOP for the same reasons that professors suck at producing goods and services: although they’re very interesting, they’re not very useful. Theory != application, and extensive theory without any application is just gibberish. Sorry I didn’t include an example along the lines of business modeling; I haven’t done any yet, so I wouldn’t know where to begin.

First, people keep using physical-noun analogies, such as your jet and jet parts. Yes, OOP may do well for physical modeling; OOP was actually born to assist with physical simulation. However, in my actual work I mostly don’t do physical modeling, and a nested-component view doesn’t work well when modeling the business domain. Things interweave more. A given association is only a temporary association of convenience for a given task or section, not a physical one. Please stop using physical examples; they are worn out and can’t help any further. My domain doesn’t match them.

Second, I use relational design to assist my procedural designs, and thus it’s not entirely just “procedural” doing all the work and modeling. They work together. (Some say that OOP is for those who don’t “get” set theory and SQL.)

Third, most programmers tend to move out of direct coding within roughly ten years, on average. There’s a lot of burnout, wrist injury, and age discrimination for pure coders, and offshore outsourcing has made it even more competitive. So if it takes seven years of experience to finally get OOP right (you said it has a long learning curve), then only three years of “good” code are left. By that logic, 70% of all OOP code has to suck. That’s not a very economical stance. If OOP takes that long to get right, there’s something wrong with it.