Falling Into The Pit of Success

Eric Lippert notes the perils of programming in C++:


This is a companion discussion topic for the original blog entry at: http://www.codinghorror.com/blog/2007/08/falling-into-the-pit-of-success.html

“Lots of C++ developers bash the .NET GC simply because they don’t understand it. In short - you should NEVER pepper your code with gc.collect() simply because you THINK your app is using too much memory. The CLR allocates lots of memory when it’s not needed by other apps to minimize the number of iterations of the GC algorithm - but releases it when memory is tight.”

In theory, you’re correct. In practice, in certain cases the GC will happily allow your program to run out of memory. Most of the time I’ve seen it when calling “System.Collections.Generic.Dictionary`2.Resize”. This is with a process consuming between 1 and 1.5 GB of memory on a box with 4 GB of RAM, which also has the /3GB option turned on and the app marked “LARGEADDRESSAWARE” (just to avoid as many OutOfMemory exceptions as possible). There is absolutely no reason for it to be throwing OutOfMemory exceptions, but unless we pepper the code with GC.Collect()s, it does. It’s not that WE think the program is taking up too much memory – the program thinks it’s taking too much memory.

Now, this is probably a bug in C# and not a failing of the GC model. My guess is that the “System.Collections.Generic.Dictionary`2.Resize” method doesn’t try to reclaim memory before throwing an OutOfMemory exception, for some reason. But when you come across problems like that in C#, what can you do? You’re pretty much hosed.
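To make the “peppering” concrete, it ends up looking roughly like the sketch below – a simplified illustration with invented names (the real code obviously isn’t this tidy), showing the coping strategy rather than recommending it:

```csharp
using System;
using System.Collections.Generic;

class GcWorkaroundSketch
{
    // Hypothetical helper illustrating the "pepper the code with GC.Collect()"
    // workaround described above: force a full collection and retry once when
    // growing a large Dictionary throws. A coping strategy, not general advice.
    static void GrowUnderMemoryPressure(Dictionary<long, byte[]> cache, long key, byte[] value)
    {
        try
        {
            cache[key] = value; // may trigger the dictionary's internal Resize
        }
        catch (OutOfMemoryException)
        {
            // Ask the GC to reclaim memory, let finalizers run, then retry.
            GC.Collect();
            GC.WaitForPendingFinalizers();
            GC.Collect();
            cache[key] = value; // if this still throws, we really are out of memory
        }
    }
}
```

None of this should be necessary if the collector reclaimed memory before letting Resize fail – which is exactly the complaint.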

Thanks for the link Jeff.

Incidentally, I am very amused by the commenters above who aver that anyone who has these kinds of problems with C++ probably is not a very experienced or capable C++ developer.

It’s totally true. As I said four years ago, I consider myself a six-out-of-ten C++ programmer: http://blogs.msdn.com/ericlippert/archive/2003/12/01/53412.aspx

I have never written a C++ compiler, so how could I possibly be higher than six out of ten? I totally admit it: my understanding of the subtleties of the language is pretty weak.

And as I have only had twelve years full-time experience writing production compilers shipped to hundreds of millions of users in C++, I probably lack the impressive depth of experience that your commenters have.

:)

Reading the last two paragraphs reminded me of a book we were required to read during university, “The Design of Everyday Things”. It’s a nice book and really brings the onus back onto the designer.

This sentiment is something I’ve tried to drill into my parents too. Whenever they use an interface that has been poorly designed (be it a website or a DVD remote control), they get frustrated with themselves because they think it’s their fault they’re having trouble. I keep telling them: it’s not your fault, it’s just poorly designed.

Whoo Jeff! I wholeheartedly agree. I’ve been wanting such tools for years. About 7 years ago, before .NET came out, I designed a language named Q (but never implemented it, though I wrote up a spec). It was supposed to sit between C++ and Java, giving you the power and speed of C++ (almost totally) while at the same time being much faster and giving you more control than Java. How did it do this? It allowed you to code using good practices and it would keep you from shooting yourself in the foot, but every time you wanted to do something potentially dangerous you’d have to be explicit about it. Basically, it fit the development paradigm that I imagine most developers would like to have. That is – code what you mean, and when bugs come up, be able to track them down quickly, because with that language you know what had to cause them. I know it sounds far-fetched or lofty, but I actually thought it through a lot at the time :)

Anyway, it’s still sitting on the back shelf.

But two of the first things I wrote in the spec had to do with just typing in the language, and they were:

Block comments (among other things) can be nested. I can’t understand why compiler developers don’t simply handle /* /* */ */. What if I want to comment out a block of code, and then comment out another block around it? I currently can’t do that, and it’s such a simple fix – I don’t see why not to put it in.
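As a rough illustration (in C#, since Q was never implemented – all names here are made up), supporting nesting is little more than a depth counter in the scanner:

```csharp
using System.Text;

static class CommentStripper
{
    // A depth counter is all it takes to support nested /* ... */ comments --
    // the feature described above. Purely illustrative; string literals are
    // ignored for brevity.
    static string StripNestedComments(string source)
    {
        var result = new StringBuilder();
        int depth = 0;
        for (int i = 0; i < source.Length; i++)
        {
            if (i + 1 < source.Length && source[i] == '/' && source[i + 1] == '*')
            {
                depth++;   // entering a (possibly nested) comment
                i++;
            }
            else if (depth > 0 && i + 1 < source.Length && source[i] == '*' && source[i + 1] == '/')
            {
                depth--;   // closing the innermost open comment
                i++;
            }
            else if (depth == 0)
            {
                result.Append(source[i]);  // only text outside comments survives
            }
        }
        return result.ToString();
    }
}
```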

Later down the spec was:

Case statements break by default. To continue to the next case, type continue. If you want multiple cases together, you would write case a, b, c:

What’s more common when a new case statement begins – continuing into it, or breaking? I don’t see why K&R had to make fall-through the default, but it’s been that way ever since. Forgetting to put that “break” can give you nasty bugs just because you FORGOT. If you want to continue to the next statement, you’ll actually think that thought explicitly, and specify it.
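Amusingly, C# ended up making almost exactly this choice: implicit fall-through between non-empty cases is a compile error, stacked labels are still allowed, and falling through has to be spelled out with goto case. A small sketch (invented names, nothing from any real codebase):

```csharp
using System;

static class SwitchSketch
{
    // C# makes the safer behavior the default: a non-empty case section must
    // end in break/return/goto/throw, so silent fall-through can't be written
    // by accident. Explicit fall-through uses "goto case".
    static string Describe(int signal)
    {
        switch (signal)
        {
            case 1:
            case 2:                  // stacked empty labels are allowed
                return "low";
            case 3:
                Console.WriteLine("three seen");
                goto case 4;         // fall-through must be spelled out
            case 4:
                return "high";
            default:
                return "unknown";
        }
    }
}
```

Stacking empty labels keeps the grouping convenience, but the dangerous case – accidentally falling out of a non-empty section – simply won’t compile.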

The principle I went by when designing this was “put what developers usually mean as the default, because they know what they want as an exception.”

Finally, I’ll put one more: semicolons. Forget one and it’s like an atomic bomb went off “somewhere over here” in your code. Cryptic errors start popping up. In Q, a line break by default ends a statement – again, because by far, MOST LINE BREAKS DO END STATEMENTS in code. So this is once again what most programmers mean when they press return after typing a statement. And if they DO want to continue to the next line, they explicitly think this thought, because it is an exception. However, semicolons still end statements; so you can put multiple statements on one line, or continue them onto the next.

By the way, if someone wants to find out more about this language, or implement it with me, email me :) gregory @@ gregory.net [i am not 'fraid of spam, hee hee]

Greg

Good point – but such a pity that the original piece quoted is really about the perils of programming in ‘C’. Even more of a pity that most programmers will indeed write ‘C’ in any language (well, it is a slight improvement over FORTRAN, I suppose).

The problem, to use the vernacular of the slightly sleazier parts of the 'net, is “Bitches don’t know about my RAII.” – or, in general, proper separation of responsibilities, so that objects manage themselves and are not beholden to their consumers to Do the Right Thing. Alas, this is not surprising, given the way that C++ is usually introduced: as C with knobs on.

Dynamic memory allocation may be the most common resource needing to be managed, but I always feel slightly uncomfortable about the way that managed languages (Java, Python, Ruby, everything .Net, …) make it more obtrusive to the consumer (be it using(), try…finally, ResourceReleasesResource, …) to manage everything else (file handles, sockets, graphics contexts, the many and various Win32 handles, …). And even with garbage collection, memory leaks via strong references that ought to be weak, or large graphs held in an inaccessible part of a necessary object, are still not beyond the wit of man to concoct.
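To put the discomfort in concrete terms, here is a small C# sketch (invented names, purely illustrative): the memory is managed for you, but the file handle still depends on the consumer writing the using block, and a strong reference held in the wrong place is still a leak the collector cannot fix:

```csharp
using System;
using System.IO;

class ManagedResourceSketch
{
    // Memory is collected automatically, but the OS file handle still relies
    // on the caller remembering using()/try-finally.
    static string ReadFirstLine(string path)
    {
        using (var reader = new StreamReader(path))   // Dispose() releases the handle
        {
            return reader.ReadLine();
        }
        // Without the using block, the handle lingers until finalization --
        // the GC does nothing to hurry that along.
    }

    // And GC does not prevent "leaks" of reachable objects: a cache holding
    // strong references keeps everything alive. WeakReference is one escape hatch.
    static WeakReference CacheWeakly(object bigGraph)
    {
        return new WeakReference(bigGraph);  // collectible once no strong refs remain
    }
}
```

Memory comes for free; everything else still depends on the consumer doing the right thing – which is exactly the asymmetry described above.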

Much of the contribution to the pit of success from the managed languages comes from the extensive standard APIs that they provide, which then has the beneficial effect that there is a reduced temptation for well-intentioned (and we know which Pit that leads to) roll-your-own replacements. Otherwise there is a fine line between automation of repetitive processes (good) and pretending that the complexity really isn’t there (bad).

Jeff Atwood wrote:
“a well-designed system makes it easy to do the right things and annoying (but not impossible) to do the wrong things.”

That was the goal when designing the Ada language, 25 years ago. And unfortunately, that’s also what made it fall slowly into the darkness of computer history.

People prefer a badly designed, inexpressive, macro-assembler, “hacker” language like C because otherwise they feel their algorithmic creativity restrained. The real reason is that they never learnt to write good code, and were only taught the “hacker” part of computing.

People use the right tool for the right job, but when their definition of good is bad, they end up using the bad tool.

“Much of the contribution to the pit of success from the managed languages comes from the extensive standard APIs that they provide” – except for Python, whose libraries are in general terrible :P

So the idea is to let the language hold our hand?

As a one-time C++ programmer, I can honestly say that the ease of falling into the “pit of despair” was astounding. C# has come a long way, but I don’t believe that you can fall into a “pit of success” simply by using a language that makes it difficult to walk off the end of an array. If the code is buggy, it’s buggy! If the logic is bad, it’s bad! C# and similar languages can’t save you from that. So while you might avoid the pit of despair, nothing lets you fall into success but yourself (no matter what language you use)!

I must say, I program in C++, and if you know how to use it then it is a VERY powerful language. Agreed, you can easily f*** up, but then again, you can easily create a fast, efficient app too.

What happened to the other name for this metaphor, the “falling into the pit of success” thing? It’s called a “silver bullet”, and it solves all your problems for you.

What you just said is pure short-sightedness! I agree the pits are deep, both the success pit and the failure pit. As always, C++ programmers love living on the edge. Quoting: “If you’re not living on the edge, you’re wasting space” ;)

I don’t want anyone’s applications ‘on the edge’. I want to use solid, stable, secure and preferably easy-to-use and intuitive applications (like, I must say, I’ve found many a Mac app to be).

I’m not a programmer: I’m just a user. That’s what I want.

In exchange for this, Programming Friends, I will give you my money!

Thanks,
:)

I completely agree where C# is concerned. Normal code is pretty, small and most of the time even fast. But bad code, like late-bound calls, gets terribly ugly because you have to use Reflection or Reflection.Emit. I actually like that the bad code has to look bad, because every time I see Reflection code I know: “uh oh, something might easily break here, so better watch out”.
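For example (a contrived sketch with invented names), compare the early-bound call, which the compiler checks, with its reflection-based late-bound twin, where everything is strings and casts:

```csharp
using System;
using System.Reflection;

class LateBoundSketch
{
    // Direct, early-bound call: the compiler checks the method name and arguments.
    static int EarlyBound(string text)
    {
        return text.IndexOf('x');
    }

    // The late-bound equivalent via Reflection: the method name and argument
    // types are just data, so typos only surface at run time -- which is
    // exactly the "watch out" signal described above.
    static int LateBound(string text)
    {
        MethodInfo indexOf = typeof(string).GetMethod("IndexOf", new[] { typeof(char) });
        return (int)indexOf.Invoke(text, new object[] { 'x' });
    }
}
```

The string “IndexOf” is invisible to the compiler, so the ugliness is doing useful work as a warning sign.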

But isn’t it easy to fall into big bad holes with Ruby and Python BECAUSE they allow too much? Because they don’t have a compiler that helps you find your bugs BEFORE running your app? That’s the main issue that has kept me away from those languages so far.

Jan hit the nail right on the head.

Every language for its purpose. If you need the speed/memory, use a language close to the hardware. With all its pitfalls of self-managed resources and cleaning them up.
If not, you can use other, more managed languages. But don’t make it a rule that they are better. And also not the other way around. Yes, you can get memory management in C++, but it costs you, and you might want to use another hammer, erm, tool.

Personally I like C++ and think many of these pitfalls are overrated once you know what you’re doing. Things like missing break statements in switch statements are so rare an occurrence, and so simple to spot, that I don’t think disallowing them outweighs the benefit of allowing them to fall through when required. The nice thing that Java and I think C# have is reflection, allowing simple JUnit-style testing. I don’t rate memory management that highly; once you’re working with C++ properly, in a well-structured framework for your app, there is little advantage to garbage collection over deterministic destruction of objects. Plus templates are very nice to use to avoid casting all over the place.

Perhaps you should look into D (http://en.wikipedia.org/wiki/D_programming_language), which seems quite nice. I’m also looking forward to the C++0x release, which looks like it’ll be adding some nice features like initializer_list.

When programming in C++ I often spent more time worrying about the code than the solution itself.

Comparing C++ to C#, Java et al. and saying it’s a terrible language is like wondering why someone invented black-and-white TV, since color TV is obviously so much better.

To be fair to C++, one should point out that C++ is a huge step towards better programs when coming from its C ancestor. For example, the language comes with destructors: a very nice technique for avoiding memory leaks and double-frees (and delete accepts a null pointer, which is a bonus protection against those double-frees).

Of course GC is better. Then again, so is color TV.

I agree with Paul’s comments in that I didn’t really have a hard time with C or C++ with buffer overruns or bad pointers. Sure, they did come up once in a while, but I didn’t have too many issues with the ‘language’. Now, there were always run-time (functional) bugs, but that isn’t necessarily indicative of the language (to an extent).

I think the main problem was the lack of good library support. That is what really makes .NET super-useful. The API design is very friendly. Now, you could do something similar with C++ – there is nothing really preventing that.

I don’t know if Microsoft just had sucky engineers, but most of the C++ libraries/Win32 API functions seemed to be designed by someone who writes device drivers.

There’s a certain ‘state machine’-like pattern that many of the API/class libraries seem to follow – a requirement for device drivers, but not for sending an email. Take a look at MAPI if you want to see what I am talking about – or at any of the zillion #define’d flags you need to pass into methods.

I think everyone who has coded in C on platforms other than Windows can agree that the Win16/Win32 API was very poorly designed and “was just plain screwy” compared to other offerings of the time. But Windows became the dominant OS, and that was that…

I really don’t miss C++ as most of the work I do today doesn’t require it, but looking back, it was fun to code in–we were solving different problems back then.

Great article, with one quibble – I’m a Massachusetts native, and while I see where you were trying to go, the Big Dig isn’t something on which you want to model your successful API or project. It has massive (cost) overruns, (water) leaks, and (ceiling) collapses…