The recent release of IronPython for .NET, and Microsoft's subsequent hiring of its creator, got me thinking about typing. There's a really interesting, albeit old, post on the dubious benefits of strong typing at Bruce Eckel's blog. It reminds me how much I hate constantly casting objects via CType() and DirectCast(). It's a waste of my time-- a productivity tax. You can disregard my opinion, sure, but Eckel is the author of Thinking in Java. And he's not the only one who feels this way:
This is a companion discussion topic for the original blog entry at: http://www.codinghorror.com/blog/2004/09/loose-typing-sinks-ships.html
The only errors that matter are runtime errors: until you have eliminated those, you don’t have a functional app. And those errors tend to be a hell of a lot more subtle than “Oops, I called the .Bark method on a Cat!”
Basically, the value of compile time checking isn’t that great compared to the overwhelming value of real world testing. That’s what all these hardcore Java figures are saying: they used to feel that way, too… until experience taught them otherwise. The fact that your program compiles means basically nothing.
Once you factor in the cost (both mental overhead and simple keyboard typing) of all that “checking” in terms of programmer productivity (forced inheritance model to get a .Bark method, cast cast cast) … it’s pretty clear that dynamic typing is superior.
Python, Ruby and especially Perl rock. If I could find a job doing any of them I would; unfortunately, not many jobs are hiring Python people.
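The ".Bark method on a Cat" complaint above comes down to duck typing. Here is a minimal Python sketch (the Dog, RobotDog, and bark() names are hypothetical, just echoing the thread's example) of calling a method with no shared base class and no casts:

```python
# Duck typing: any object with a bark() method works; no forced
# inheritance model and no CType()/DirectCast()-style casts needed.
class Dog:
    def bark(self):
        return "Woof!"

class RobotDog:  # unrelated class; does NOT inherit from Dog
    def bark(self):
        return "Beep-woof!"

def make_it_bark(animal):
    # No declared type on the parameter: anything with bark() works,
    # and anything without it fails loudly at runtime instead.
    return animal.bark()

print(make_it_bark(Dog()))       # Woof!
print(make_it_bark(RobotDog()))  # Beep-woof!
```

The flip side, of course, is that passing an object without a bark() method raises an AttributeError only when the line actually executes.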
I would point out, however, that all three of the above mentioned languages support real inheritance. Loose typing isn’t a shortcut for inheritance, it’s just the byproduct of a non-compiled language. And while it’s great to use to make something work now, IMO, for larger projects you just have to ask the question of loose vs. strong types: “do I want runtime errors or compile-time errors?”
Maybe I’m jumping in where I don’t belong; I came across this article by accident and it has me concerned. I am a C# developer with some history in VB, and I can say with some confidence that I now spend at least 50% less of my time debugging at runtime than I used to, especially in any type of web development, where you might not see a bug until much later. With VB (OK, VBScript in ASP) it was way too easy to fat-finger some code and get neither a compile time error nor a runtime error until a week or two after the product shipped. The reason you don’t see the error sooner is some weird data type mismatch scenario 15 steps into an administration task that only two people in the company run every third month of every odd year (exaggeration, to emphasize the frustration that ensues).

Strong typing is better for many reasons beyond this one. For example, ease of readability: yes, casting is a pain, but there is never ambiguity when reading the code, and that alone can be a time saver. Sure, I can feel the pain of just wanting to build some classes without being forced into an inheritance hierarchy, but if you code using some simple patterns a lot of this pain can be alleviated. Moreover, VS.NET makes all of this pretty easy with auto-complete and IntelliSense. However, I cannot argue with the productivity: loose typing is faster. But those mistakes that aren’t caught soon cost almost double later. Code Complete has great graphs demonstrating this very point.
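To make this commenter's scenario concrete, here is a hedged Python sketch (apply_discount and its arguments are made-up names, not from the original post) of exactly the kind of data type mismatch that no compile step catches and that surfaces only when a rarely used path finally runs:

```python
# A latent type mismatch in a dynamic language: nothing flags it
# until this specific code path actually executes.
def apply_discount(price, percent):
    # Implicitly assumes both arguments are numbers...
    return price - price * percent / 100

# The common path works fine and passes casual testing:
print(apply_discount(100.0, 10))  # 90.0

# ...but a value read from a web form arrives as a string, and the
# failure only shows up when this rare path runs weeks later.
try:
    apply_discount("100.0", 10)
except TypeError as exc:
    print(f"runtime failure: {exc}")
```

Note the extra trap: `"100.0" * 10` is legal Python (string repetition), so the error surfaces one operation later than where the bad value entered.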
You’re implying that you would write code with casting mistakes so obvious that you would only have to F5 (run) the code to catch them.
Since when are you writing code that you do not run to test? Do you trust the compiler to catch every mistake you make and just ship the compiled EXE without ever running it yourself?
Compile time checking is nice, but it is absolutely no substitute for running your code to test it. In fact it may give you a false sense of security. Here’s a section from Bruce’s entry about that:
“When I wrote Thinking in C++, 1st edition, I incorporated a very crude form of testing: I wrote a program that would automatically extract all the code from the book (using comment markers placed in the code to find the beginning and ending of each listing), and then build makefiles that would compile all the code. This way I could guarantee that all the code in my books compiled and so, I reasoned, I could say “if it’s in the book, it’s correct.” I ignored the nagging voice that said “compiling doesn’t mean it executes properly,” because it was a big step to automate the code verification in the first place (as anyone who looks at programming books knows, most authors still don’t put much effort into verifying code correctness). But naturally, some of the examples didn’t run right, and when enough of these were reported over the years I began to realize I could no longer ignore the issue of testing.”
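Eckel's point, that compiling proves very little, is easy to demonstrate. A minimal Python sketch (the average function is a hypothetical example, not from his book): code can satisfy every static check and still fail the first time it actually executes:

```python
def average(values: list[float]) -> float:
    # The annotations are consistent and a static checker would pass
    # this cleanly, but an empty list still crashes at runtime:
    # ZeroDivisionError is invisible to any compiler or type checker.
    return sum(values) / len(values)

print(average([2.0, 4.0]))  # 3.0

try:
    average([])
except ZeroDivisionError:
    print("type-checked fine, still crashed at runtime")
```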
I think the example in my blog entry speaks for itself: the dynamically typed sample is simpler and far easier to read. Casting is “noise” that is required for compile time checking-- and as you yourself point out, along with Bruce Eckel, compile time testing CANNOT BE TRUSTED and therefore is of dubious value at best.
Whatever gets you to the unit testing (and real world testing) faster wins, and dynamic typing will unquestionably get you there faster.
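As a sketch of "whatever gets you to the unit testing faster": a minimal Python unittest example (describe, Dog, and Cat are hypothetical names echoing the thread's running example) where the test suite, rather than a compiler, catches the ".Bark on a Cat" mistake:

```python
import unittest

class Dog:
    def bark(self):
        return "Woof"

class Cat:
    def meow(self):
        return "Meow"

def describe(animal):
    # Relies on duck typing; the unit tests, not a compile step,
    # verify that only suitable objects are passed in.
    return f"It says {animal.bark()}"

class DescribeTests(unittest.TestCase):
    def test_dog(self):
        self.assertEqual(describe(Dog()), "It says Woof")

    def test_cat_fails_loudly(self):
        # The ".Bark on a Cat" mistake surfaces immediately as a
        # test failure instead of a compile-time error.
        with self.assertRaises(AttributeError):
            describe(Cat())

# run with: python -m unittest <this_file>
```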
“You’re implying that you would write code with casting mistakes so obvious that you would only have to F5 (run) the code to catch them.”
You might infer that from what I wrote, but it is far from the truth. Personally, I am a unit testing junkie. But testing in a loosely typed language is (in my opinion) a lot easier to do, which gives you, in your words, a “false sense of security”. And to be quite honest, I have become so productive with VS.NET, thanks to IntelliSense, that I haven’t made a typo or data type mismatch error in years; I can’t remember the last time I wrote code freehand. But I can recall the times when I didn’t have a compiler and released code that I did test, only to have it come back and bite me in the ass. Like I said, runtime data type mismatches are a pretty compelling reason to decide against loosely typed languages.
And again, readability is such a big thing for my coding and my productivity. Right now I work in a company with roughly 100+ developers, and it is a daily occurrence to have to read someone else’s code or integrate with someone else’s code. It pains me when I run into code that is less “self-describing” than it could be. Even in C# or C++ you can get pretty loose, but with scripts or Python it becomes hell. Code readability is a big productivity booster. Testing is a required part of development, but when those tests reveal bugs, I would rather be in a strongly typed language than a loosely typed one, able to quickly track down the errors because the code is more readable. Testing will catch bugs in both.
And to prove I’m not a zealot, I will add that Perl is a language I feel has powers above most other languages. I use it in prototypes to prove or disprove my C# code; it is a part of my testing toolbox. But I can’t imagine using it in my production code.
I know this is late, but…
To be fair, most of the clarity of the example you provide has to do with the ease of calling system libraries and the lack of main() requirements. In fact, I don’t see any explicit casting going on at all (typing, yes, but not casting). To me - and I should preface this by saying that all of my experience is in 500K+ LOC systems - the real key is the ability to do more enforced, contract-based development. In other words, I can assign a component to be developed, and part of the signature will be the interfaces guaranteed to be supported on its method arguments. Without that, it’s so easy to get into situations where you have an object being passed in from /somewhere/ that fails a condition far down the chain, causing many different teams to get involved, etc.
I will agree that explicit casting is a PITA and, in a language with typed collections, should be pretty much completely unnecessary.
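The contract-based development described above can be sketched even in a dynamic language. A hedged Python example using abstract base classes (Barker, Dog, and kennel_greeting are hypothetical names, not from the post): the signature declares the guaranteed interface, so a bad argument fails at the component boundary instead of far down the chain:

```python
from abc import ABC, abstractmethod

class Barker(ABC):
    """The 'contract' a component promises its arguments will honor."""

    @abstractmethod
    def bark(self) -> str: ...

class Dog(Barker):
    def bark(self) -> str:
        return "Woof!"

def kennel_greeting(animal: Barker) -> str:
    # Enforce the contract at the boundary, so the failure points at
    # the caller rather than at some condition many calls deeper.
    if not isinstance(animal, Barker):
        raise TypeError("kennel_greeting requires a Barker")
    return animal.bark()

print(kennel_greeting(Dog()))  # Woof!
```

The isinstance check is the runtime stand-in for what a strongly typed compiler would verify for free.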
You should read Bruce Eckel’s blog. He’s the author of the original quote and he has a lot more to say on this subject:
He still thinks loose typing is more productive, but there are situations where strong typing is helpful.
Personally, I think strong typing is something that should be optionally enforced in the IDE but NOT in the compiler. That’s the best mixture of productivity, correctness, and performance. You can either do it, or not do it. Something that may be possible in Whidbey and VB.NET via this mechanism:
I have an entire theory about the direction of future languages in which the IDE drives most of the visible “features” and the compiler is an implementation detail:
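For what it's worth, optional typing checked by tooling rather than enforced by the runtime is roughly the model Python's (much later) annotation syntax arrived at, which makes it a reasonable sketch of this idea (greet is a made-up example):

```python
def greet(name: str) -> str:
    # The annotations are metadata: an IDE or a checker such as mypy
    # can flag greet(42), but the interpreter itself never enforces
    # them, so they cost nothing at runtime.
    return "Hello, " + name

print(greet("world"))         # Hello, world
print(greet.__annotations__)  # the hints survive as plain data
```

The type information lives where the tools can see it, while the compiled (or interpreted) program stays dynamically typed, which is exactly the IDE-versus-compiler split described above.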