Respecting Abstraction

In a recent post, Scott Koon proposes that to be a really good .NET programmer, you also need to be a really good C++ programmer:

This is a companion discussion topic for the original blog entry at:

I come from a C++ and CS background, and I feel that both taught me good skills for dealing with software development in the real world. I see so many (and have interviewed so many) wannabe developers who know nothing about good programming paradigms, and only get away with working in the software industry because the MS .NET Framework takes the brunt of the hard coding work.

I’m not saying that .NET programmers are not gifted (I am one), but it is a sad fact that the market is awash with people who really cannot code well. Back in the C++ days, when there was very little to help you when your application blew up because of a stray null pointer dereference, programmers really had to know their trade.

Now that the .NET Framework is here my job is much easier - I can create an application in a day that used to take weeks in C++. Nevertheless, my C++ days taught me the skills needed for writing good, efficient code, regardless of what language or framework I am developing on.

If I could count the number of developers who write code like:

string a, b, c, d;
// Initialize a, b, c, and d.

string result = a + b + c + d;

Now, see how often you see this in a C++ app.

Isn’t “all abstractions leak” something of a corollary to Gödel’s incompleteness theorems? I don’t have the math skills to prove it, but it sounds right anyway. They leak not because they need to be fixed, but because they are abstractions - it’s an unavoidable aspect of their nature.

I love and respect abstractions - they’re the very foundation of human reason and knowledge. But there are certain things that can’t be done within even the best abstraction - like picking which abstraction to use for a given context.

Knowledge of the abstraction underlying the one you are using, and the one below that, and so on (turtles all the way down), is very valuable not only for troubleshooting, but for choosing which abstractions to use when, or for making different abstractions work together.

There’s a balance between time spent learning your chosen abstractions, and time spent learning underlying, less abstract abstractions - determined by what you need to get to your goal. No one can possibly know all of it, but dogmatic reliance on either only the top layers or only the bottom layers is irrational.

–Kyle Bennett

I agree that experience is a blanket positive. A developer with 5-6 years in C++ before moving to C# will, all other things being equal, be superior to a developer with only 3 years of C#. I’d never propose that having a solid CS background is a BAD thing.

However, there are some negative aspects of “peeking under the covers”.

  1. It takes a lot of time. I feel a developer has to dedicate two solid years to .NET to simply become competent in the framework as a whole. It’s massive. And getting 50% larger in 2.0!

  2. It can teach you bad habits. As Jon Galloway says, “The point here is that modern programming is moving towards Domain Specific Languages (DSL’s) which efficiently communicate programmer intent to CPU cycles. C is not a good prototype for any of these DSL’s.”

Smart developers will know what to un-learn (COM, anyone?) to make room for the improvements of .NET.

Rob - I’m not sure I completely understand the point you were trying to make with your code fragment above. “string result = a + b + c + d” is a perfectly valid statement in C++ assuming you are using std::string.

As a .NET construct, it may or may not be reasonable to do such a thing, given the existence of the StringBuilder class and its superior performance for repeated concatenation.
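To make both readings concrete, here is a small C++ sketch of my own (not code from the thread): `operator+` on `std::string` makes Rob’s fragment perfectly legal, while reserving one buffer and appending in place is roughly the trick StringBuilder performs on the .NET side.

```cpp
#include <string>

// Rob's fragment, wrapped in a function: std::string overloads operator+,
// so concatenating with '+' is valid C++ (each '+' builds a temporary).
std::string concat_plus(const std::string& a, const std::string& b,
                        const std::string& c, const std::string& d) {
    return a + b + c + d;
}

// StringBuilder-style alternative: grow a single buffer in place
// instead of allocating a temporary string for every '+'.
std::string concat_append(const std::string& a, const std::string& b,
                          const std::string& c, const std::string& d) {
    std::string out;
    out.reserve(a.size() + b.size() + c.size() + d.size());
    out += a;
    out += b;
    out += c;
    out += d;
    return out;
}
```

For four short strings the difference is negligible; the single-buffer form only starts to matter when concatenation happens repeatedly, e.g. in a loop.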

I’m currently a full-time .NET developer, and I find having a background in C to be rather helpful. I recently went back and “relearned” C since I never really “got” it the first time around, and noticed that the concepts have helped me out while writing C#. Specifically, I have a much better understanding of when to pass objects by value or by reference, and of delegates. It’s probably not absolutely mandatory to understand C, but all of the best C# developers I know do.
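The value-versus-reference point has a direct illustration in C++, where the distinction is spelled out in the signature. This is my own minimal sketch (function names invented), not code from the thread:

```cpp
#include <string>

// Pass by value: the function receives a copy, so the caller's
// string is left untouched no matter what the function does.
void shout_by_value(std::string s) {
    s += "!";  // modifies only the local copy
}

// Pass by reference: the function sees the caller's object
// and any mutation is visible after the call returns.
void shout_by_ref(std::string& s) {
    s += "!";  // modifies the caller's string
}
```

C# draws the same line with value types versus reference types (plus `ref`/`out`), which is why a C background makes the rule easier to internalize.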

Of course, I very much prefer a GC’ed environment to one without, and I would rather poke myself with a hot iron than create a full application in just C.

Well, my point originally was that learning C/C++ as your first language makes it easier to transition to a GC-enabled language like Java or a .NET language. Not that you can’t be a competent programmer unless you learn C/C++, but that you end up being a better programmer by learning more about how the abstraction works under the hood. I stand by that.

My reasoning for that point is that none of the current frameworks are perfect. At some point, if you are programming in C#/VB.NET or Java, you will run into a situation where you have a memory leak due to some bug in the framework. Unless you understand how the framework is doing the heavy lifting, you can’t really know whether it is failing, or how to refactor your code to work around the framework’s failures.

Look at some of the .NET code being written out there. Look at UrlBuddy on the Coding4Fun site. It uses PInvoke calls to watch the clipboard. That means you not only have to have some understanding of C/C++ so you can figure out how to pass the variables in, but you also have to know how to get around in the Win32 API. Same with RSS Bandit. It’s not that they want to do things the hard way; it’s that the only way they can achieve the functionality they want is by reaching into the API and finding a way. You can write around these limitations in the framework, but you lose some functionality.

Scott Koon needs a history lesson. Let’s look at the purposes of C++ and C#.

C++ was invented with what I call a theory mindset. By this I mean it gives developers all the power under the sun - the power and beauty of controlling a machine. You could say that you ought not to use C++ unless you are an expert, for you can destroy your own foot and everyone else’s who uses your code.

C# and .NET were invented with what I call a money mindset. The driving force is to get things up fast and well, and to forget about everything that does not help in meeting those goals. Things like pointers and memory management are of less concern compared to the business need of getting things done.

Given those mindsets, I do not think it a must to learn C++’s intricacies if you really want to understand C#. The purposes of the languages are different, so they are designed differently despite how similar they appear. C++ can really help you appreciate the sweetness of garbage collection. However, C# users are paid to get things done, and that is what C# is all about.

Memory management is an optimization, not a requirement.

Sadly, no. I’ve just spent a few days on a C# application with a few cyclic references and forgotten event unsubscribes that ground it down to a crawl over the course of a few hours, without ever quite halting. The cause? Somebody didn’t get it and just assumed the garbage collector “would get it”. Well… it didn’t. Not to mention what happens when you leak too many native resources: at some point you have leaked the full set before a GC runs, and your app comes crashing down - in a completely unrelated operation.

.NET is no alternative to resource management - you need to learn it all the better now that your language somewhat tolerates sloppy work.
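For what it’s worth, .NET’s tracing collector does reclaim unreachable cycles on its own; the leaks described above survive because an event publisher still holds a live reference to its subscribers. The closest compile-and-run analog I can offer is a C++ `shared_ptr` cycle, where reference counting genuinely does leak - a hypothetical sketch of my own, not code from the thread:

```cpp
#include <memory>

// Each Node holds a strong reference to another Node. If two nodes
// point at each other, their reference counts never drop to zero and
// neither destructor runs - the reference-counted analog of a
// forgotten event unsubscribe keeping an object alive.
struct Node {
    std::shared_ptr<Node> other;
    static int alive;              // count of live Node objects
    Node()  { ++alive; }
    ~Node() { --alive; }
};
int Node::alive = 0;

// Builds a two-node cycle in an inner scope and reports how many
// Nodes survive after both local handles have gone out of scope.
int leak_demo() {
    {
        auto a = std::make_shared<Node>();
        auto b = std::make_shared<Node>();
        a->other = b;
        b->other = a;              // cycle: a <-> b
    }
    return Node::alive;            // both objects are still alive: leaked
}
```

Breaking the cycle - with `std::weak_ptr` for the back-reference, or an explicit `reset()` before the handles go away - plays the same role as unsubscribing from the event in C#.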

I’ve been saying for a while that C# is basically C++ minus the template metaprogramming, but with a few much more complex things thrown in (think yield return, IL Emit, reflection - stuff like that). C# is not for people who consider C++ too complex; it’s for people who’d like just a bit more mess that is less debuggable.