Your Code: OOP or POO?

I don’t know why people make such a fuss over objects, I really don’t.

Normally objects work better with composition than with inheritance. Not always, but mostly.
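To illustrate (a minimal Java sketch; the stack classes here are invented for the example, not from anyone’s real code): with inheritance the subclass drags in the whole parent interface, while with composition only the operations you choose are exposed.

import java.util.ArrayList;
import java.util.List;

// Inheritance version: Stack inherits add(int, E), remove(int), etc.,
// so callers can corrupt the stack by inserting into the middle.
class InheritedStack<E> extends ArrayList<E> {
    public void push(E e) { add(e); }
    public E pop() { return remove(size() - 1); }
}

// Composition version: the list is a private detail; only push/pop
// are exposed, so the stack invariant can't be violated from outside.
class ComposedStack<E> {
    private final List<E> items = new ArrayList<>();
    public void push(E e) { items.add(e); }
    public E pop() { return items.remove(items.size() - 1); }
    public boolean isEmpty() { return items.isEmpty(); }
}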

A lot of the problems described here are similar to what I see when people use UML: a stick figure, a few boxes and lines, and hey presto, a full design that will describe the interactions of 3,000 lines of code. I don’t think so; a lot more detail, please. Those three letters, be they UML or OOP, do not confer any special aura on the code. You still need to do a good job.

If people overuse features, then they are using them inappropriately; presumably they don’t know any better, or don’t have the courage of their convictions. Or they want to learn some new stuff and add it to their CV/resume.

When push comes to shove, to me (and I seem to be in the minority here), there is very little conceptual difference between OOP code and non-OOP code, certainly nothing revolutionary. It’s basically conceptual integrity through high cohesion and low coupling, i.e. encapsulation and private methods and state. Polymorphism is icing on the cake.
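A tiny Java sketch of that claim, with invented names: the state is private, so the only way in is through the public methods, and that is most of the encapsulation ordinary code ever needs.

class DownloadCounter {
    private int count = 0;                // private state - low coupling
    public void record() { count++; }     // the only way to change it
    public int total() { return count; }  // high cohesion: all counter logic lives here
}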

Oh, and yes, the “object-oriented” bit of OOP does not refer to modeling real-world objects in code. You are modeling conceptual objects in code. When was the last time you could examine the metallurgy of your car object? You can’t; it doesn’t have any. It’s not a real object, it’s a conceptual object.

Clarity is king.

If you can look at any piece of your code and it holds together conceptually, fits in your head, and presumably does the right thing, you’re home free.

So in conclusion: OOP is just stuff, like anything else. Stop getting your knickers in a twist.

When I studied computer science at the University of Oslo, we had a lecturer called Kristen Nygaard, who actually invented object-oriented programming. He invented OOP for a language called Simula as a technique for modeling real-world objects and behaviours. Unlike in Java, you do not have to program OOP in Simula.

Once, years ago, I got called in to salvage a C++/C project. The original developer had written a web of 300 classes (for an inventory-management system) with a mean of about twenty lines of code each: massive inheritance, aggregation, slam-bang, you name it. He had a jerry-built class browser which he used to “manage” it all. Quite a feat of solo engineering, if it had worked worth a damn. When he left (willingly or otherwise, depending on who’s talking), we discovered that about 50 tiny classes didn’t even show up in his browser because, as the comments said, “you should know this stuff is there already”.

Blech.

Two months later, a system with the required function, half the lines of code and ~15% of the class count went into production. Last I heard (about 3 years ago), that’s still the base for their production system. I never heard of the original coder again; I keep expecting to trip over a micro black hole somewhere formed when his navel-gazing recursed catastrophically. Occupational hazard for OO newbies, I believe.

A good expansion for “POO” would be useful.

Poorly Organized Objects ?
Procedural Organization Obfuscated ?

There should be a term for abusing OOP in particularly horrid ways, and this is as good a candidate as any.

As one of the commenters said, using Test-Driven Development is one of the best ways of developing code.
First think of the way you’ll use it, and then code it.
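A minimal sketch of that order of operations (Java with JUnit 4; PriceCalculator and discountedPrice are hypothetical names, invented for the example): write the test against the API you wish you had, then write the code to make it pass.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class PriceCalculatorTest {
    @Test
    public void appliesTenPercentDiscountOverOneHundred() {
        // written first, before PriceCalculator exists: this is the
        // "think of the way you'll use it" step
        PriceCalculator calc = new PriceCalculator();
        assertEquals(108.00, calc.discountedPrice(120.00), 0.001);
    }
}

// the minimal class written afterwards to make the test pass
class PriceCalculator {
    double discountedPrice(double price) {
        return price > 100 ? price * 0.9 : price;  // 10% off over 100
    }
}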

Agree with the post, but PFO seems a better acronym.

Great post Jeff.

Developers who write clear, simple, maintainable code satisfy these criteria:

  1. They can program (as you pointed out last week, many can’t)

  2. They must directly support in production the code they write

  3. They are very close to their users – physically, organizationally, empathetically, whatever

  4. They expect their code to be used for the long term (not a throwaway work)

Of these, my peeve is #2. It’s amazing how unrecognizable your own code can be six months, a year later. If you have to support that code, if users can walk over to your desk a year after you wrote it and ream you out because your program doesn’t work – then you’re MUCH more inclined to focus at the start on writing it well, so that it can be easily read, understood, and maintained.

Life in OO land would be far less stressful, and far more productive, if folks had listened at the beginning: Do Not Build to the Bean Paradigm. But that paradigm has taken hold, and much of the Design Pattern stuff is an effort to deal with it. If the profession had listened to Holub/Stroustrup/Arnold/Meyer and implemented OO as objects having capabilities, rather than simplistically as data plus methods, we would be far better off. But that didn’t happen.

I like one of the quotes in Damian Conway’s Perl Best Practices:

Always write your code as though it will have to be maintained by an angry axe murderer who knows where you live.

I agree with most of what you say here, but I believe one of the principles you list is the root cause of all OOP evil: code re-use.

I have seen many projects collapse under the weight of OOP simply because the developers were trying to make everything re-usable. Making code truly re-usable takes a lot of thought and effort, and there simply isn’t enough time for that on most projects.

If we stop advocating code re-use (for general developers), more OOP projects will succeed.

I like to think of strict OOP as being extremely important from the outside looking in, and much less important from the inside looking out. A good OO API is important for those who use your code. Well-designed division of responsibility within the private code of your assembly is important too, to reduce complexity in code maintenance and increase reliability. It’s very similar to views and stored procedures in databases: you provide a rational, straightforward interface for others to access what they need, and it doesn’t matter that behind that interface is a completely different storage paradigm.

Captcha: orange. Again, and forever more, amen.

My thoughts:

  1. Your programming language doesn’t need to enforce public/private/protected members of objects. Just make everything public, then yell at your coworker and file a bug report if they use the interface wrong.

  2. In general, proving total program correctness is an impossible task (the Halting Problem), so don’t worry about whether you can prove a program is correct. If your test cases are well written, you’ll have pretty good confidence that the program works correctly under a wide range of circumstances.

  3. There’s no reason not to mix OOP and procedural programming (see the sketch below). Similarly, there’s no reason not to mix high-level languages and low-level languages. Use high-level where it makes sense (user-interface code, et cetera). Use low-level where that makes sense (speed-critical procedures).
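Point 3 in miniature (a Java sketch with invented names): an object where state and behavior belong together, next to a plain static procedure where a stateless function is all you need. Nothing stops you from mixing the two in one program.

class Invoice {
    private final double amount;          // OO: state plus behavior
    Invoice(double amount) { this.amount = amount; }
    double amount() { return amount; }
}

final class Totals {
    private Totals() {}                   // no instances, just procedures
    // procedural: no state, no object, just a computation
    static double sum(Invoice[] invoices) {
        double total = 0;
        for (Invoice i : invoices) total += i.amount();
        return total;
    }
}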

But (and word to Paul Graham): you can prove anything by oversimplifying the issue. I’d like to see the example problem where “something a Lisp hacker might handle by pushing a symbol onto a list becomes a whole file of classes and methods.”

On the whole, I’d also be a lot happier if the Lisp hackers and Perl goons would keep their worldviews to themselves. What they do isn’t like what I do, and vice versa. I don’t pee in their pool, so they can damned well stay out of mine.

[rant]
Tell me, tell me, tell me: how does one determine program correctness in Perl? Answer: one can’t. Perl does what Larry Wall says it does, however he says it does it, and Larry doesn’t give a flying damn in a circus tent about correctness. Perl’s almost an anti-language. What relationship does it have to actual computer languages at all? (And the notion of Perl “best practices” can be summed up in one sentence: DON’T WRITE IN PERL.)

Likewise LISP–the business of business IS business–and LISP isn’t. Example:

;; from Paul Graham's "A Plan for Spam": a word's spam probability,
;; given its occurrence counts in the good and bad corpora
(let ((g (* 2 (or (gethash word good) 0)))
      (b (or (gethash word bad) 0)))
  (unless (< (+ g b) 5)
    (max .01
         (min .99 (float (/ (min 1 (/ b nbad))
                            (+ (min 1 (/ g ngood))
                               (min 1 (/ b nbad)))))))))

That’s not real programming, that’s BASIC written by ee cummings! (Yes, I realize he’s dead, I’m making a rhetorical point.) How the Hell is that maintainable–much less readable?

[/rant]

Hey, you can prove correctness for a large class of programs. The “halting problem” only says that no single algorithm can decide halting for “any” arbitrary program. While you can’t prove EVERY program is “correct” or “will halt”, you can certainly build a particular program in such a way as to PROVE that it is correct. And test-driven development with OOP is very good for that.

I think it’s important to remember that OO is just a tool, and one that sits on top of other programming “paradigms”: procedural, structured, etc. From procedural programming we got things like subroutines and functions. From structured programming we got stricter scoping of variables (in if statements, loops, methods, and so on). In OOP we have things like classes, properties, inheritance, and interfaces. These add-ons to what came before are there to solve specific problems at specific times, but we should NOT forget the basics. Many times I see spaghetti code that ignores good, old-fashioned, basic procedural and structured programming practices. That isn’t a lack of OO-ness, just bad procedural code.

I too have seen a few projects where the developer blindly made an object for every possible thing, but this is rare. Your comment about overuse of interfaces is well noted. I think of OO features as sharp tools: they are more complicated to use, but they are meant to solve complicated problems. Developers need to understand that there are trade-offs to using these tools and that you should only use them where they’re a good fit. If you have a simple problem to solve, use a simple solution. I’ve also heard of a developer team making an object for every stored procedure in their database, which seems like overkill to me (wouldn’t a method more closely match what a stored proc does?).
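For what it’s worth, here is what “a method per stored procedure” might look like (a Java/JDBC sketch; the GetCustomer and ArchiveOrder procedure names are hypothetical) instead of a class per procedure:

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;

class OrderProcedures {
    private final Connection conn;
    OrderProcedures(Connection conn) { this.conn = conn; }

    // one method per stored procedure, grouped into one class -
    // rather than one class per stored procedure
    ResultSet getCustomer(int id) throws SQLException {
        CallableStatement cs = conn.prepareCall("{call GetCustomer(?)}");
        cs.setInt(1, id);
        return cs.executeQuery();
    }

    void archiveOrder(int orderId) throws SQLException {
        CallableStatement cs = conn.prepareCall("{call ArchiveOrder(?)}");
        cs.setInt(1, orderId);
        cs.execute();
    }
}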

On my first OOP project, I went crazy with inheritance and it cost me. I was the stereotypical handyman who only had a hammer and saw every problem as a nail. As I’ve matured, I find that I use inheritance much less, and only in situations that truly seem to need it.

That being said, I love the OO mindset, and I love the tools that OOP languages bring to the table. I would say let’s not discount the tools just because they are abused by people who don’t know better.

Finally, a word on the impedance-mismatch thing: yes, if you don’t have a decent O/R mapper and have to do that work yourself, the impedance mismatch is a productivity killer. But is that a fault of OO, or a fault of the relational technology that has dominated the database market for years (just as OO has dominated the programming-language market)? I think it’s time we merged these two paradigms somehow, with OO databases or something halfway between object and relational. I know it’s been tried before and failed, but is that due to a failing in the concept or just bad marketing? Shouldn’t a database have a concept of inheritance? Shouldn’t I be able to have one table reference another without having to define primary and foreign keys? This is repetitive work that I do over and over, and I think it’s time the database software did it for me. Note, I’m not saying that you shouldn’t declare primary and foreign keys; I’m saying the software should do it for you, automating it somehow.
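Some of that wish is already served by O/R mappers. A sketch using JPA annotations (the annotations are real; the entity and field names are invented for the example): the mapper derives the primary key and the foreign-key column from the annotations, so you never write that DDL by hand.

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.ManyToOne;

@Entity
class Customer {
    @Id @GeneratedValue
    Long id;            // primary key, generated for you
    String name;
}

@Entity
class PurchaseOrder {
    @Id @GeneratedValue
    Long id;
    @ManyToOne
    Customer customer;  // the mapper creates and manages the FK column
}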

“i like old VB but the new OOP VB.net piss me off.”

People coming from a VB background would be more likely to make these sorts of comments.

Jeff, THANKS for writing about this topic.

I call this the “Huh?!” factor. Code should make you smile with enlightenment, not scratch your head in befuddlement. I’m sick of developers trying to prove how smart they (think they) are by making things needlessly abstract and complicated.

Once they can code their way out of a paper bag, the next thing they need to do is learn to find JUST THE RIGHT LEVEL of abstraction for the situation at hand.

http://bobondevelopment.com/2007/03/05/absuing-oop-over-abstracting-and-the-huh-factor/

mrprogguy,

I love the ee cummings comment. That was funny.

As for coding in Perl, if you look at things from a purist point of view (strongly typed languages that are compiled rather than interpreted, etc.), then Perl is a hard language to witness. For example (my Perl is rusty, so forgive):

sub foo {
    my $arg = shift;
    # What, pray tell, does shift do? (It removes and returns the first
    # element of @_, the argument list. I know that, but do most programmers?)

    # or this
    my $integer = 1;    # holds the value 1
    $integer = "moo";   # now it holds the string "moo" - weakly typed

    # or even worse, evaluating the following for true or false
    my $test = 0;       # false
    $test = 1;          # true
    $test = "zero";     # a string, therefore true
    $test = "0";        # the string "0" is actually FALSE in Perl - a classic gotcha
    $test = "NULL";     # a string, therefore true
    $test = undef;      # always false
    $test = "";         # empty string, therefore false
    $test = "false";    # a string, therefore true
}

Perl allows all behavior, and due to the concept of entropy, or at least cynicism, most behavior is bad. Worst of all, Perl allows for very deep obfuscation, giving rise to the rite of passage for a Perl hacker, the JAPH (Just Another Perl Hacker). http://en.wikipedia.org/wiki/JAPH

But as naughty as Perl is, if the coder has self discipline, he or she can create orthogonal libraries, etc., and make the code sane and easy to maintain. And as I have been taught, when you need a shovel, grab a shovel, not a backhoe. Perl is great for scripting and such. It also has a rich set of libraries obtainable from CPAN.

And, you can even do object-oriented programming in Perl. :)

This whole thing entirely depends on the application you are writing.

If it’s a small-to-medium-sized app with limited scope and scale, then sure, breaking some style consistency is OK (again, depending on where that is, too).

For a larger application or framework, breaking a consistent style is very bad imho. It throws off other developers who are trying to use/read your code and turns it slowly into a big spaghetti mess.

Yes, creating interfaces for every object is overkill, but not creating them doesn’t mean you aren’t writing proper OO code, right?

You know, honestly I thought POO stood for “Programming-Oriented Objects”. Here’s how I broke it down in my head:

Object-Oriented Programming is when you program for the objects themselves. All your code is centered around creating objects that inter-co-mingle (I love that non-word) correctly and fluidly. Oftentimes speed and efficiency are sacrificed for the sake of a pleasing source code base. Basically, it’s when you use C# like a mantra.

Programming-Oriented Objects are what you get when you take an OOP language and hack at it like you’re programming in something like C. We’re talking about using the low-level stuff that has almost no business being used in a high-level language. It’s like when you wrap C libraries in something like C#, Python, or Ruby. You can do it… but why the hell would you want to, aside from adding functionality you wouldn’t otherwise have had (oftentimes defeating the purpose of the language)?

Anyhow, that’s just what I thought off the top of my head. Your explanation works, too. :)