Your Code: OOP or POO?

I'm not a fan of object orientation for the sake of object orientation. Often the proper OO way of doing things ends up being a productivity tax. Sure, objects are the backbone of any modern programming language, but sometimes I can't help feeling that slavish adherence to objects is making my life a lot more difficult. I've always found inheritance hierarchies to be brittle and unstable, and then there's the massive object-relational divide to contend with. OO seems to bring at least as many problems to the table as it solves.


This is a companion discussion topic for the original blog entry at: http://www.codinghorror.com/blog/2007/03/your-code-oop-or-poo.html

One of the most infuriating comments I get from people reviewing code for the CSK or SubSonic is “this [one obscure part of your program] isn’t OO mmmm-kay”. It seems anything else needs to ride in the back of the architectural bus.

All arguments aside - the ability to think and solve creatively without religious boundaries is what makes developing fun. The OO police are probably gonna pile on here, Jeff, but well done speaking up and telling us “Neo”-phytes to “free our minds”.

Good onya!

I like old VB, but the new OOP VB.NET pisses me off. If I want OOP, I go with C#. I use VB for the spaghetti code, and damn it, that’s how I like it. Seriously, I love VB because I don’t have to code it the OOP way - but how do you reuse your code if you don’t use OOP?

This is one reason I’m such a big fan of Test Driven Development (TDD), because it forces you to be that other programmer for a moment.

Why stop at considering the poor sap who has to maintain your code and use your APIs? Why not become that person for a brief moment?

I agree, the main goal in practice is to get your project done. You just have to keep in mind that the project might include more than the first release, so you want to plan for that growth. But in the end, OO isn’t the only way to satisfy principles of DRY (for maintainability), modularization and consistency (for concurrent development).

OOP was supposed to produce a tree of reusable components deriving from base classes (which in turn derive from Object, making it a tree). But almost every waste of time in OOP happens because people rewrite things from scratch. Most fields of programming would benefit greatly if people just got together and used one framework (for 3D games, say) instead of a hodgepodge.

Just imagine how much useless information programmers have to learn just so they can use many programming languages, many frameworks, etc. In C, you close statements with ; and use {} for blocks. In VB, statements end with line breaks unless you put underscores at the end, and blocks are written differently. A person making a website has to use PHP, HTML, JavaScript and CSS effectively. Frameworks were supposed to get rid of all that.

Of course, as I take on another programming project and notice 100 different frameworks to do the same thing, I have a secret plan. To start my own OS, write everything from scratch, and just have ONE version of everything, so people wouldn’t reinvent the wheel 1000 times and try to be better than the others in 2 or 3 aspects.

Yes I know, I’m sure this plan shares some things with very evil plans that other people had. But I really know that dragging and dropping a chatroom onto my page and configuring its properties is much easier than slavishly writing JavaScript every time. That’s visual OO. :)

My first major OOP application has only three interfaces and two design patterns - Factory and Template Method. So, instead of worrying that it is TOO simple, I should be happy that it WAS simple?

I totally agree with your point, as I see OO being useful for larger and long term projects. My own hatred of OO overkill is when the objects are so badly designed they can’t possibly be extended.

I’d disagree with the viewpoint of VB being for spaghetti code. Even back in VB5, “objects” were introduced, and they generally did make life easier. It wasn’t proper OO, but it was a good start.

My first major OOP application has only three interfaces and two design patterns […] I should be happy that it WAS simple?

Gene, some might ask what defects in the language cause those two design patterns to be necessary:

http://blog.plover.com/prog/design-patterns.html

But in general, yes!

Well, there’s objects and then there’s Objects. I work in Smalltalk - real objects everywhere and it feels pretty natural.

OTOH, Java’s Objects™ are characterized by cargo cult engineering: lots of form without function. Factory is just one pattern that is horrifically overused in that world, usually for no good reason.

You have to know when to use sense - something very rare in software. Sometimes a script is just a script.

Making something ridiculously complex for the sake of making it simple is like trying to put out a fire with gasoline.

Some programmers just need to take a deep breath and write code that is a delicious salami sandwich, and not an extravagantly prepared four course meal that tastes like shit.

“It has been said that democracy is the worst form of government except all the others that have been tried.” - Churchill

Erm, I guess people do go object-crazy. The problem, as I’m sure is documented elsewhere, is the crappy teaching that drives home that “OO is all about inheritance” when it’s not. Inheritance is a powerful tool that is sorely abused. Most of my object hierarchies are flat, I mark all classes as sealed unless I do intend for someone to derive from them, and I don’t create interfaces until I really need them (and usually it’s only for testing, so I can swap in a test implementation).

OO to me just provides a better way to hide implementation and abstract ideas away so I can create more complex, but logically simpler programs because I don’t have to hold onto all the nuances of everything at once. It’s no panacea, but it is nicer to work with when done right.

Anyways, don’t throw the baby out with the bathwater. Just because some cars suck, do you stop driving altogether? So, until something better comes along …

I second Mr. Haack’s thoughts. I was very fortunate in both high school and college in having teachers who taught both the thinking structure for OOP and why it works. We consistently had to work in groups and be able to read each others’ code at a glance and understand what it did, how, and why.

I didn’t understand just how important that was until many years later. It has shaped every program I’ve touched since.

Just to second Phil Deneka’s point, it took me a few years of programming VB before the point of OO really sank in. I learnt Eiffel before I even touched VB, but without any real world experience of learning from my mistakes.

I semi-agree with your comment about design patterns. Some are just silly, like the Factory pattern, and truly are about working around an issue in the language (for example, the lack of classes as first-class objects in C++, where you could otherwise pass a reference to the Class object instead of a function pointer to the factory). Still, patterns help establish a bit of shared language (saying “a subroutine” instead of “you know, put some code here that we can jump to from a bunch of places, putting the return address over here”), and I’d disagree with Mark: it eventually leads to languages supporting the concepts, because the idea gels in people’s minds as one and the same thing, instead of people getting stuck on the tiny differences that don’t matter (one guy puts the return address in a register, another in a global variable, etc.). It helps you see the big picture.

After a while of “functors” being used in C++, I’m hoping people will see that they’d really like simple anonymous closures (Perl-style closures, for example) as a first-class construct you can whip up right in the parameters of a function call. And it might eventually be possible, because we can tell people “what if we made functors even easier and nicer to use?” - they’ll know what functors are, see all those they already use, and the prospect of getting a better one could catch on.

Other patterns are really more abstract, like the Facade. I mean, “making a function that calls a bunch of functions for you” is kind of a weird feature one would want to have in a language, and one could argue that, well, we already have it!

But I’ll have to agree that patterns have been really abused. What I particularly despise is the people who seem to insist there can be only one particular way to implement a pattern - someone looks at my code and asks why I didn’t use the Foo pattern, and, well, uh, I did, I just did it differently than you. That’s why there’s no concrete “Foo function”: it’s a pattern!

As for the main article, here’s a true-life example: a co-worker once made a “Convert” class that didn’t have any non-static members, just a few conversion methods. To top it off, he instantiated it with “new” rather than just on the stack (this is in C++) and bloody LEAKED IT. Wow.

There’s also the opposite trend: the coders who just won’t use the language they’re using. The most common is the C coders who keep using char* and such in C++, complete with fixed sized buffers, overflows and other fun bits (or when using higher-level languages, can’t wrap their heads around a closure). That said, they tend to have to type a lot more to do as much damage as the overzealous OO programmer, so I tend to prefer the C coder (and I can fix their code with small local patches instead of ripping apart some grandiose framework).

programs where IFoos talk to IBars but there is only one implementation of each interface!

Yeah… a good rule of thumb IME is not to try abstracting until you’ve got *two* examples:

  • Because the payoff isn’t there, and

  • Because you’ll probably not get the division right between what ought to be in the general interface and what is in fact just a characteristic of a specific implementation.

It’s easy enough to insert a base interface/ABC later when it becomes worthwhile.

Jeff,

Thanks for your response. I get a lot from your blog and the responses I read there. RSS is cool.

I can’t go into too much detail, but the app was written to handle 100-200 different types of flat files, some with vanilla schemas and dozens with rocky-road-and-sprinkles schemas (indeterminate number of fields, multivalued fields, odd delimiters, positional files, etc.).

Since I was new at this, and since I had read The Pragmatic Programmer and was all about orthogonality and DRY (to the extent I could adhere to them), I kept inheritance down to a minimum, and a much more experienced programmer refactored my original code and made the inheritance structure much better. I learned a lot from him and from another programmer I worked with.

Anyway, the app was written in VB.NET on Framework 2.0.

The factory was there to fetch the proper parser for a particular file: pass in a string, get a parser object back through an interface.

The template method was used because I was going to do the same list of steps for each translation (validate, parse, etc.).

One of the benefits (unknown to me when I wrote it originally) was that other programmers were able to use my object library to do some back-end validation and fixing.

I’m sure there were many more elegant ways of solving this problem (and I am sure I will read them later in this post). But the code works, it is relatively easy to maintain, and I have even written a code generator to make new objects easy to create.

The reason for the original question was: is it too simple a design? Did I miss something? Judging by your answer, I should feel better about it.

Thanks again.

Ed wrote:
“The problem, as I’m sure is documented elsewhere, is the crappy teaching that drives home that ‘OO is all about inheritance’ when it’s not. Inheritance is a powerful tool that is sorely abused.”

Agreed. I personally think OOP is far more about information hiding than inheritance. This problem is also due to a language weakness.

I used to program a lot in Ada, and I rarely needed the language’s inheritance mechanism, because it gave me all the tools I needed for good information hiding and modularization. C++, like many other OO languages, allows information hiding only through classes. The language is missing a real module-oriented way of programming, so developers use classes to emulate it. This leads directly to derivation and inheritance hierarchies, because we are taught that classes exist to be derived.

Looking back at programming language history, we can identify three trends, each building on top of the last: procedural programming, module-oriented programming and object-oriented programming.

Procedural programming is what you are taught when learning algorithms. Programs are built up of simple procedures calling other procedures; each procedure does one thing and does it well. C is typically procedural.

Module-oriented programming is the evolution of procedural programming: you assemble a set of related procedures and type definitions into a module. This provides information hiding and ensures a strong structure for the program. A module, in OOP terms, is like an interface, differing only in how you achieve the derivation of that interface: in module-oriented programming, you replace the source code of the module’s implementation. Modula-2 is the best example of module-oriented programming, followed closely by Ada-83.

Object-oriented programming is the evolution of module-oriented programming: you have the same tools as in module-oriented programming, but types and the operations on a type are more closely assembled, and you have a programmatic way to replace the implementation of a module (now called a class) with another one behaving differently. C++ and Java are object-oriented.

(Of course, you can build software in any of the three “orientations” in a language of almost any other “orientation”; it is just a difficult, often manual, and error-prone process. C can go object-oriented if you really want it to…)

Generally, people need module-oriented programming, but the lack of support in most mainstream languages of the day makes them rely on object-oriented programming instead, inviting all possible abuses.

I read somewhere: “The difference between a terrorist and an Object-Oriented Methodologist is that you can try to negotiate with the terrorist.” A lot of people who behave the way you describe in this post make it ring true. :o)

Excellent post, keep it coming :o)

Encapsulation and polymorphism are both far more powerful ideas than inheritance (interface inheritance excepted in strongly-typed languages).

When I worked as a programmer in a medium-sized IT shop, I saw far more people who didn’t have good OO skills, despite programming in an OO language like Java, than people who overused inheritance. We had people who simply made everything static, for example. Our hiring process was admittedly terrible.

“The principles of object oriented programming are far more important than mindlessly, robotically instantiating objects everywhere”

As has been said for millennia: Do not follow in the footsteps of the Sages. Seek what they sought.