Not trying to prove anything, simply stating facts. In my opinion var is less readable, but readability is subjective by nature. And I would suggest that the more explicit the code is, the easier it is to maintain, but again that's just my opinion. If anyone has a measure of readability or maintainability that is not subjective, I'd love to see it.
Seems like this is going in circles. I remember back when VB6 was what most business apps were written in, and the fact that the compiler didn't make you specify when you were allocating a variable was seen as some great evil and one of the reasons VB was a "toy language".
I remember when the weak typing of Perl was considered a great win. And then it wasn’t anymore, and Java’s inheritance-based type system was considered the salvation of development. Then Ruby’s dynamic typing was the key to its productivity. Round and round…
@Jeff Atwood
But why can't the 1 simply grow/size to fit? Wouldn't that be preferable to the overspecific data typing and the inevitable integer-overflow weirdness, errors, and even exploits that result from it?
Yes, for an entire class of apps, you would be better off using a simple, unbounded Number type, calculated and rounded using decimal rules instead of binary, so that math behaves like people expect it to. This is available in some languages, though such types are "much slower" than native numeric types; whether that's "too slow" is another question entirely.
But sometimes specifying the width/representation is a requirement or a distinct advantage (usually in size/speed).
The new C++ standard (C++0x) seems to have moved towards this with “auto”, so instead of this:
vector<string>::const_iterator it = names.begin();
you will be able do this:
auto it = names.begin();
Assuming someone actually implements it. Although I do wonder how this plays with inheritance, e.g. would var be the most generic type available, or always exactly what you initialize it with? (I guess the latter, since it makes the most sense.)
Ah, I hope "<" and ">" aren't eaten by the post demon. Otherwise: the vector was of strings there.
I agree that var works in the examples above, but when it comes to variables being assigned through a call to a function (something like var x = GetSomeCrazyStuff(x, y)), the data type in front of the variable makes it easier to figure out what GetSomeCrazyStuff is supposed to return.
As one of the few remaining C/C++ developers - I think this is the worst idea in the world. Why? It's completely unreadable and makes maintenance a bitch. Being lazy is not a good thing! Don't be a lazy programmer!!
“Anything that removes redundancy from our code should be aggressively pursued”
Actually, that's a pretty strong claim that would need justification. Many people argue that redundancy in code is a good thing because it helps catch logical errors - provided the compiler checks all the redundant information; otherwise the redundancy is obviously harmful (because the copies might drift apart).
That being said, I agree with you in this case. Redundancy in variable declaration is useless code bloat. I actually prefer the new notation for another reason: because it prefixes declarations with a keyword, analogously to VB’s ‘Dim’ statement. I’ve always hated C-like languages for their implied declarations which make the code much harder to parse (even for a human reader).