Die, You Gravy Sucking Pig Dog!

Jeff, you still have to alloc the bl variable:

Double bl = new Double();

@mq
I added that using as an afterthought. Please pretend the first two lines read:

var c = new Class()
using (c)

My point then still stands.
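
For what it’s worth, here is a fleshed-out version of that fragment (Class is a hypothetical stand-in for whatever the post was newing up). Dispose still runs at the end of the block either way, whether the variable is declared before the using or in its header:

using System;

class Class : IDisposable
{
    public void Dispose() { Console.WriteLine("disposed"); }
}

class Program
{
    static void Main()
    {
        var c = new Class();
        using (c)                     // using on an already-declared variable
        {
            // ... use c ...
        }                             // c.Dispose() runs here

        using (var d = new Class())   // the more common form: declare it in the header
        {
            // ... use d ...
        }                             // d.Dispose() runs here
    }
}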

I don’t like the oh let the computer deal with it attitude. This doesn’t mean I like messing around with pointers either, but there you go.

I like the C++ RAII model – allocate containers on the stack if you like, knowing full well that their constructor/destructor semantics will keep things tidy.

The finest GC experience I’ve ever had is with the Qt framework. I wouldn’t want to develop a graphical application with anything else. You can write statements such as

(new SomeQtDerivedWindow())->foo();
SomeQtDerivedWindow bar;

and trust that both will delete themselves when it’s time.

I strongly dislike GC being part of the language a priori; C++ has a no-overhead policy for unused features, and that’s the way it should be. The geniuses who wrote Qt designed a memory management system THAT WORKS IN THAT CONTEXT.

Other types of program might require different memory ownership rules. I despise Java’s lack of a predictable out-of-scope destructor – do people not realise that the lifetime of a variable has meaning in and of itself?

The database example is a good RAII example. Allocate the object on the stack and let its destructor close the connection. The object should therefore go out of scope as soon as possible to be efficient, but ISN’T THAT WHAT YOU WANT TO DO ANYWAY? Isn’t it good design to not have extraneous variables remaining defined beyond their use?
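
In C#, the nearest analogue to that stack-allocated database object is a using block around the connection. A minimal sketch (the connection string and query are placeholders):

using System.Data.SqlClient;

static class DatabaseExample
{
    static void RunQuery(string connectionString)
    {
        // Both objects are disposed at the closing brace, the same place
        // a C++ destructor would run, so the connection is closed as soon
        // as it goes out of scope.
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("SELECT 1", conn))
        {
            conn.Open();
            cmd.ExecuteScalar();
        }
    }
}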

We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. - Knuth

Mehrdad: The garbage collector is male?! How come it’s not female?

Because then the garbage collector would get to keep half your stuff.

The art of mallocing and freeing is still pretty important for a lot of applications.

Agree, which is why I said it can be a pretty important optimization – depending on the situation, obviously.

The point of this post is the strange tension between auto-GC, which works very well 99% of the time, and the 1% of the time when you actually need to care about when and how resources are released.

Oh, and figuring out when you’re in that 1%.

The problem is that we keep repeating our resource handling code all over the place and mixing it with business logic, queries, and just about everything under the sun.

As a programmer, sometimes the very best medicine is a few passionate methods or messages in code. I wrote a TCP and UDP based service that was terminated by sending a MurderDeathKill string as a request. It always felt good.

b1 = (double *)malloc(m*sizeof(double));

Casting the return value of malloc (in C) is not required and hides faults.

It’s good practice to clean up after yourself. Anything less is a sure mark of a jackass.

I actually like the extension method. =)

I agree, however, that people generally don’t know how to dispose properly of resources that need disposing. I find it to be one of a number of misconceptions around disposing of resources that many programmers hold.

Add to that the fact that the guideline for implementing IDisposable is wrong in some ways (IMO; I lay out my case here: http://www.caspershouse.com/post/A-Better-Implementation-Pattern-for-IDisposable.aspx) and it makes for a total crap-fest.

Basically, it comes down to this (all in relation to .NET; a rough sketch follows the list):

  • Always call Dispose on IDisposable implementations when done (even if you know the implementation doesn’t require it, it’s an implementation detail, so yes, MemoryStreams have Dispose called on them).
  • Don’t bother setting references to null; in release mode (in .NET), you actually increase the lifespan of the reference by doing so.
  • Call Close when you want to reopen a resource, Dispose when the resource is no longer needed.
  • Throw an exception in the finalizer of objects that implement IDisposable (see the blog entry for more details).
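
This isn’t the official framework guidance, just a rough sketch of what those bullets add up to, using a hypothetical ResourceHolder class. Callers are expected to call Dispose; the finalizer exists only to flag a missed Dispose (Debug.Fail here is a softer stand-in for the “throw in the finalizer” idea from the linked post), not to do real cleanup:

using System;
using System.Diagnostics;

public sealed class ResourceHolder : IDisposable
{
    private bool _disposed;

    public void Dispose()
    {
        if (_disposed) return;
        _disposed = true;
        // ... release the underlying resource here ...
        GC.SuppressFinalize(this);   // the finalizer is no longer needed
    }

    ~ResourceHolder()
    {
        // Reaching the finalizer means somebody forgot to call Dispose.
        Debug.Fail("ResourceHolder was not disposed.");
    }
}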

Oh, and your analogy between the managed code and its unmanaged equivalent is incorrect, as you can easily declare a double on the stack in C/C++.

Letting the computer deal with it for you might work sometimes, but it makes programmers ‘soft.’ If you don’t understand the intricacies of memory management, pointer manipulation and cleaning up after yourself, you will NEVER BE A STELLAR PROGRAMMER.

I’m sure people can provide a counter-example to the above. Like, oh, I don’t know how memory allocation works but I can write a Perl module that proves that creationism is false, but I say bollocks to that – and I’m not even British. You might be able to do that, but think of how much better you’d be at it if you REALLY understood what was going on.

You can’t be a really good programmer without having a solid grasp of what’s really going on under the covers. I’m not suggesting that you understand how the electrons flow, but I am suggesting that you form a really solid understanding of how memory works and of things like bitwise operations.

Garbage collection has coddled us. We’ve gotten very hand-wavey about things. Oh, let the GC handle it is not good enough. Sure, you might be able to write some decent software and never worry about it, but you SHOULD know about it. You should understand your craft. You should understand that for certain applications, a garbage collected language will NEVER DO. You should be flexible enough to read parts of the Linux kernel (as an example) and grasp what’s going on.

Understanding malloc and free – and understanding what happens when you call those functions – will never hurt you. Knowing a language that won’t patronize or coddle you will NEVER hurt you. And now I’m going to end with a sweeping generalization that will scandalize the masses and begin an epic flamewar:

If you grok C, you’ll rock harder than anyone else that doesn’t - no matter what language you’re currently working in.

Oh, and figuring out when you’re in that 1%.
That’s why I like doing it manually: you never really know what the garbage collector is doing, and what you don’t know can be dangerous.

About setting pointers to null after disposing/freeing them: it can be a debugging technique.
If some code touches that pointer later, you will get a nice clean null pointer exception. If you just leave it as it is, it’s still technically pointing at valid data and the code might work, until more memory is allocated on top of it and you get corruption.

But it’s practically useless, because it won’t cross functions unless you use globals. I recommend valgrind instead.

Jeff,

You quoted Aristotle on this blog – you know, the quote about success being a habit that has to be built. Cleaning up after yourself builds a habit of not only writing rock-solid code, but also of not using more memory than needed for any given task.

GC is definitely useful, but it is non-deterministic. Just think of all the pointless discussions about Dispose and Finalize in .NET. What is Finalize good for? Introducing a delay before cleaning up an object that is no longer needed?

Coding without knowing how to manage memory (and other resources) is exactly what lies at the root of the 25 programming mistakes you wrote about the other day.

@Oliver: You’re right, it would do something then. However, the intention of the using statement is to create blocks like your first example that handle disposal automatically via scope. If c needs to be disposed when you’re done with it, there’s no reason not to declare it in the using block header.

anonymous: malloc() returns void*, which in C converts implicitly to any object pointer type, so the cast is unnecessary (and can mask faults such as a missing stdlib.h prototype). In C++, assigning a void* to another pointer type without a cast is an error.

I think I’d like sqlConn.TakeOutBackAndShoot() better than sqlConn.DieYouGravySuckingPigDog()

But maybe that’s just me…

Brad Abrams at Microsoft once answered this question for me, via Scott Guthrie:

Yes, it is a bit confusing…. Dispose() is implemented because SqlConnection implements IDisposable such that it works correctly in C#’s using statement. We also added Close because there is such a strong prior art to using that name. But these methods are supposed to do the exact same thing.

See the guidelines here:

http://msdn2.microsoft.com/en-us/library/b1yfkh5e.aspx
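
Put differently (a rough sketch; the connection string is a placeholder): Dispose is what a using block calls for you when the object’s life is over, while Close is the name to reach for if you intend to reopen the same object:

using System.Data.SqlClient;

static class CloseVersusDispose
{
    static void Demo(string connectionString)
    {
        var conn = new SqlConnection(connectionString);

        conn.Open();
        // ... work ...
        conn.Close();    // connection released, but the object can be reopened

        conn.Open();     // reopening after Close is fine
        // ... more work ...

        conn.Dispose();  // done with it for good; a using block would have done this for us
    }
}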

Jeff - you’re wrong on this one. GC is indeed the One True Way, but garbage collectors manage memory, not resources. Resources still need to be disposed of.

A correct implementation of garbage collection on a machine with infinite memory would be to never free anything. That approach does not apply to resources: there are only so many TCP ports, files on disk need to be unlocked so the user can reopen them (possibly in a different program), etc.

Moral of the story: if you’ve got objects that implement IDisposable, call Dispose() on them when you’re done. Pretend finalizers don’t exist: the garbage collector doesn’t know the weight of your resource, and is only invoked when it’s under memory pressure, not arbitrary resource pressure, for any particular value of resource.
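
A small illustration of that moral (the file path is a placeholder). The handle in the first method stays open until some future collection happens to finalize the stream, which may be a long time if memory pressure stays low; the second releases it at a known point:

using System.IO;

static class FileExample
{
    static void LeakyWrite(string path)
    {
        // No Dispose: the OS file handle stays open until the GC eventually
        // runs the stream's finalizer, and the GC runs under memory pressure,
        // not because someone else needs the file.
        var stream = new FileStream(path, FileMode.Create);
        stream.WriteByte(42);
    }

    static void TidyWrite(string path)
    {
        // Deterministic: the handle is released at the closing brace.
        using (var stream = new FileStream(path, FileMode.Create))
        {
            stream.WriteByte(42);
        }
    }
}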

I’d wager the majority of programmers alive today have never once worried about malloc().

That is a double-edged sword. On the one hand, it is good because of all that bull you just said.

On the other hand, it leads to a complete unawareness of what you are actually doing when you create some system-hogging resource. Many young programmers today write programs as if memory were infinite, and I think the primary reason they do that is that they don’t understand that creating an object actually costs anything. It may be fine to do that in business software, where there’s the possibility of throwing more hardware at the problem, but it is a true coding horror in consumer applications, where that may be impossible. Lest we forget: Word 6.