I think stuff like this is really useful. It lets you fill in functions that should be there, and that cuts down on the amount of code in many other places. Imagine not having a Left function and having to write those 6 lines of code 100 times throughout your project in various places. The .NET API now contains the String.Contains function, which is very useful for finding out if one string is contained within another. In .NET 1.1, of course, there's always .IndexOf, but that doesn't really make it completely explicit what's going on; IndexOf was only used because there wasn't something more correct to use. Allowing developers to extend the core functions provided by the API can be extremely useful. Just make sure you don't go overboard and start adding stuff like String.To1337Speak().
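For reference, the kind of Left helper being discussed is only a few lines; here's a sketch in Python (the name and the clamping behaviour are assumptions based on the description above, not anyone's actual implementation):

```python
def left(s, n):
    """Return the first n characters of s, or s itself if n exceeds
    its length; n <= 0 yields the empty string.
    (Hypothetical helper mirroring the Left function discussed above.)"""
    if n <= 0:
        return ""
    return s[:n]  # slicing already clamps when n > len(s)
```

Having it once, as `left("foobar", 3)`, beats scattering the same bounds-checking slice logic through a hundred call sites.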
That code doesn’t actually exist; it’s only used as an example. I do agree with all the amendments / suggestions people have made, but critiquing dummy code is a little beside the point, don’t you agree?
Just write real abstract dummy code then instead of C#.
Patching built-in or widely used classes definitely seems evil, but modifying specific instances of a class can be extremely useful. This is a good time to remind ourselves that 99% of the time we’re actually doing class-oriented, not object-oriented, programming. What you really want to do is modify the behavior of objects and not the behavior of classes - otherwise they’d be a different class, wouldn’t they?
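The distinction is easy to see in Python, where you can attach behaviour to a single instance without touching its class (the class and method names here are invented for illustration):

```python
import types

class Duck:
    def speak(self):
        return "quack"

def whisper(self):
    return "(quack)"

d1, d2 = Duck(), Duck()

# Patch only d1: bind the function as a method on this one instance.
d1.speak = types.MethodType(whisper, d1)

print(d1.speak())  # (quack)  -- only this object changed
print(d2.speak())  # quack    -- the class, and every other instance, is untouched
```

Patching `Duck.speak` instead would change every instance at once, which is exactly the class-oriented move the comment is warning about.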
Alright, to make up for all the {} flying around, I’ll repost the final code from my post a few comments up (s/_/ /g):
(defmethod left ((sequence string) (length (eql 0)))
  "")
(defmethod left (sequence (length (eql 0)))
  nil)
(defmethod left (sequence length)
  (cond
    ((= length (length sequence)) sequence)
    (t (subseq sequence 0 length))))
CL-USER(1) (left "" 3)
CL-USER(1) (left '() 3)
NIL
CL-USER(1) (left "foobar" 3)
"foo"
CL-USER(1) (left '(foo bar baz quux) 3)
(FOO BAR BAZ)
There, now we have a sane model where classes don’t have special privileges, so methods can specialize on /any/ argument. Much better!
(Of course, the example code itself is rather silly, but it’s just a direct translation of the original code.)
And oh, you should define the interface for the methods – generic function in Lisp – as well, in order to be a good citizen.
(defgeneric left (sequence length)
  (:documentation
   "Return length items of sequence, starting from left,
or the original sequence if length is longer than the sequence.
Zero length yields the empty sequence."))
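For anyone following along outside Lisp: Python doesn’t have CLOS-style multimethods in the standard library — functools.singledispatch can only specialize on the type of the first argument, not on any argument or on specific values like (eql 0) — but a rough analog of the left generic above looks like this:

```python
from functools import singledispatch

@singledispatch
def left(sequence, length):
    # Default method: slicing clamps, so an over-long length
    # returns the original sequence, as the docstring above specifies.
    return sequence[:length]

@left.register(type(None))
def _(sequence, length):
    # Mirrors the Lisp method that returns NIL for an empty list.
    return None
```

The value-based (eql 0) specializer has no direct equivalent here, which is exactly the point about classes having special privileges in most languages.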
I prefer to call it duck punching, so that you get the appropriate violence invoked.
Note the problem with duck punching is that you immediately lose stability, unless you happen to know every other instance of duck punching being done anywhere else in your entire application.
Classic example:
http://weblog.raganwald.com/2008/07/ive-seen-things-you-people-wouldnt.html
And that’s from a smart person who surrounds himself with smart people. Us lowly knuckle-draggers who don’t regularly delve into the bowels of Rails are really in trouble.
There’s also the problem of tracking changes, even if you manage to get a stable build. I discuss that scenario at length over at my blog.
http://enfranchisedmind.com/blog/2008/04/14/useful-things-about-static-typing/#comment-33124
Limiting duck punching to otherwise undefined functions is a good idea, because it gives you a limit on the ways you can shoot yourself in the foot, and a stop when you’re about to.
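That restriction is easy to enforce by hand. A minimal sketch in Python (the helper and class names are invented for the example):

```python
def safe_patch(cls, name, func):
    """Attach func to cls as a method, but refuse to overwrite anything
    that already exists -- duck punching limited to undefined names."""
    if hasattr(cls, name):
        raise AttributeError(
            f"{cls.__name__}.{name} already exists; refusing to patch")
    setattr(cls, name, func)

class Account:
    pass

safe_patch(Account, "describe", lambda self: "an account")  # fine: new name
# safe_patch(Account, "__init__", lambda self: None)  # would raise: name taken
```

The guard is the "stop when you're about to": redefining an existing (even inherited) name fails loudly instead of silently changing behaviour somewhere else.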
I’d like some clarification about monkeypatching being a security hole (@Andy). OK, overriding a private method defeats the whole purpose of code visibility and leads to some mess, but that’s an issue of code design; as far as I know, private methods were never meant as a security measure against external exploits. Anyway, if anyone gets as far as being able to override a private method through monkeypatching, you’re already as screwed as if monkeypatching weren’t available in the first place. It’s as much a security hole as reflection is.
And as for the XSS attack, it’s on browser side, and I don’t see the point either. Security doesn’t belong client-side, and as far as an XSS attack is possible it’s already too late, the same nasty stuff could be done with or without monkeypatching.
@Bobby:
I agree with Sub, it’s a shame you can’t re-use a static name on an extension method - the extension actually compiles, but in some situations you can’t use it (it’s really odd, almost seems broken).
Aside from that point, I agree there’s value in keeping things the C# way, but I also think there’s much to be gained by expanding on that as well. If we only followed the C# culture, we wouldn’t have had unit tests until 2008, would still be using datasets instead of NHibernate, and would use XML for everything. Some of the extensions that I added were heavily influenced by Ruby, which has a far more open concept and far more elegant syntax. Writing a view in Rails is easily an order of magnitude cleaner than in ASP.NET MVC with C# or VB.NET. For this reason, I think it’s good to challenge the status quo, so to speak.
Keeping the C# culture the way it is for the sake of making things easy is highly questionable. The alpha dogs are pushing hard towards two extremes - specification-based languages (Spec#) and dynamic languages (Boo, IronPython and IronRuby). I’m not saying C# will die out, but I do think you’ll see more and more people take up specialized tools for the job (O/R mapping for your DAL, Spec# for your domain layer, and IronRuby for your presentation - heck, most of us are already doing this in some form or another).
C# is by no means a holy grail, and the C# culture - by its very strong ties to the .NET culture - is far from perfect. To quote a former Microsoft developer, there are too many people these days calling themselves programmers who have no inclination to test the limits of their languages.
@Andy and Vincent,
In ActionScript 3 the compiler won’t allow you to override a private function, and the core features that are considered essential are defined as internal or final, which sets a common ground for future versions of the API.
@Kibee, well, it’s actually not as bad to have a toL33tSp3aK, since you can be sure that the language will, most likely, never have a native method with that name.
I honestly think that all of this has already been lived through by the older generation of programmers with Forth and Lisp, but somehow there are a lot of new programmers who are pushed by the corporate world to just code as fast as possible and disregard whatever consequences the future might bring to their code.
@Karl I appreciate your response and surely didn’t mean to imply that you shouldn’t push the boundaries of the language.
It has nothing to do with the technology you use, and everything to do with how you read code (code clarity).
I don’t want C# code that reads like ruby. I want to borrow the concepts, but not necessarily names, especially redefining existing names and concepts. That just makes code harder to read and maintain by someone else.
To me, the point of coding using the accepted idioms is that you accept that language, and you want to read the code - not read a Java person writing Ruby, or a Ruby person writing Java, etc. I write C# to look like C#, JavaScript like other JavaScript, and Python to look like Python. If you want to write Python, why are you writing Lisp?
There is truly a staggering amount of thought that goes into language design, despite what most of the punters on this blog go on about.
By all means make it better, just be careful you don’t invent an unneeded wheel.
What are you on about regarding Lisp and Forth? Yeah, those old people don’t know any better.
Mind clarifying?
Old versions of Fortran let you redefine numeric constants in certain cases. The ultimate in Monkeypatching.
http://coding.derkeiler.com/Archive/Fortran/comp.lang.fortran/2005-01/0487.html
@Mikael
I meant that the older generation of programmers (I’m 27 and have been coding since 11) have been through this before, and they managed to handle it (and that’s a good thing) and provide us with insight into how to work with overloading, overriding and so on… Recent languages are usually stricter because, I believe, the people who designed them have had these problems in the past.
In a perfect world we’d all be coding away in Lisp (a language that allows you to redefine its own syntax) and be awesome programmers, but we’re not.
So to cut to the point, I feel the biggest problem is that the new generation, and even mine, have somehow forgotten to look to the past for answers, and keep forgetting that ‘monkeypatching’ isn’t a new problem and has indeed been tackled by plenty of programmers before us. Throw that together with the ever-growing demand for shorter development cycles and programmers who don’t work hard at understanding, and you have the mess we have now.
What you really want to do is modify the behavior of objects and not the behavior of classes - otherwise they’d be a different class, wouldn’t they?
Well, sometimes you really do want to modify the behaviour of a whole class - like when the objects of that class are instantiated by a library or framework.
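In that situation patching the class is the only lever you have, because the framework hands you instances you never construct yourself. A sketch in Python (the framework, class, and method names here are all stand-ins):

```python
class Request:
    """Imagine this class lives deep inside a framework."""
    def __init__(self, path):
        self.path = path

def make_request():
    # Stand-in for framework code that instantiates Request internally;
    # we never get a chance to subclass or wrap it at creation time.
    return Request("/index")

# Since we never call Request() ourselves, patching the class is the only
# way to give every framework-created instance the new behaviour.
Request.is_root = lambda self: self.path == "/"

req = make_request()
print(req.is_root())  # False
```

Instance patching is useless here: by the time you hold the object, the framework may have already handed copies to other code.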
@all_the_bondage__discipline_programmers:
As usual, the problem is not with the feature, but with how it’s effectively used.
@KG on July 14, 2008 10:08 AM
Pretty much anything. Even a lighter shade of red would be better.
The problem isn’t monkey punching. The problem is programmers doing stuff that doesn’t work.
Several people have mentioned Forth allowing this. Doesn’t C allow similar things with macros and the preprocessor?
Forth lets you redefine absolutely anything you want to. For example:
5 constant 4
After this is compiled, every place your source code has the number 4, except inside strings, it will compile a 5. I can’t think of any situation where this is useful, but it’s available. Every word can be redefined.
: DUP DROP ;
: : BYE ;
The only place this gets used is for compiling incompatible code together. Code written for different Forth versions can be run together; you set up different meanings for the same words in different scopes.
Forth79 definitions
: NOT 0= ;
Forth83 definitions
: NOT invert ;
The same word got defined to do two different things: one was a logical NOT that returns a true flag if the flag it receives is zero. The other is a bitwise invert that flips every bit. If you have ancient Forth code from two different sources and you want to use them both without rewriting, you probably can.
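The scoped-redefinition trick maps onto namespaces in other languages. A rough Python analog of those two vocabularies (classes here stand in for Forth wordlists; the semantics follow the description above):

```python
class Forth79:
    # Logical NOT: a true flag (-1 in Forth convention) if the flag is zero.
    @staticmethod
    def NOT(flag):
        return -1 if flag == 0 else 0

class Forth83:
    # Bitwise NOT: flip every bit.
    @staticmethod
    def NOT(bits):
        return ~bits

# The same word means two different things depending on which
# "vocabulary" (namespace) is current.
print(Forth79.NOT(0))  # -1
print(Forth83.NOT(0))  # -1  (agrees on zero...)
print(Forth79.NOT(5))  # 0
print(Forth83.NOT(5))  # -6  (...but not on anything else)
```

Which is exactly why the two definitions could coexist: each chunk of legacy code only ever looks the word up in its own scope.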
All this power that hardly ever gets used. Why not? Because you usually don’t have a real use for it. But there’s no need or reason to add code to the compiler to prevent it, either.
If you make a new function that does something new, usually you do better to give it a new name and not the same name as something else. Less confusing that way. You say what you want and the compiler does it. Simple and easy. You have to know what you want, but when you do know what you want you can get it a lot easier.
Yeah, the scope thing scares me too. I think the fundamental problem is that without monkeypatching, I can always tell where the code I’m calling is defined – find the definition of Class X, find the function, bam. But there’s no obvious browser to show me where the cobbled-on function is defined, and if I don’t extensively segregate my namespaces (I routinely have entire 20-class projects where all files are in the same namespace), I could really shoot myself in the foot. It’s an interesting idea, but I think we need more experience with the idiom.
You’re evil. Seriously.
Monkeypatching can be incredibly dangerous in the wrong hands
So can a sharp knife, yet even amateur chefs need one.
Can you imagine trying to cook a meal, or change your oil, or tune a violin, or climb a rock wall, using only Fisher-Price tools which are so blunt that they cannot possibly be dangerous in the wrong hands?
And all of those things are done every day, by amateurs. We’re supposed to be professionals. Whatever happened to learning how to use your tools properly?
Monkeypatching in the wrong hands sounds much less scary than having to use a language explicitly designed to prevent me from doing anything which other people wouldn’t be able to do without hurting themselves. There’s a reason nobody uses Pascal and Ada today, not even the big risk-averse companies.
Even if C# allowed monkeypatching, unless you were somehow defining your patches in a system namespace, it would be easy to find the offending overwriter by checking which using directives you had.
Extension methods in C# roll like that. You don’t get them at all unless you opt in for them. You can’t universally add .Left to string in a way that works in code you don’t control unless said code subscribes to it.