Monkeypatching For Humans

@Daniel Jalkut

The left and right conventions no doubt come from Microsoft BASIC's LEFT$ and RIGHT$. You are blessed, perhaps, not to have programmed on a platform where Microsoft BASIC was the default language. :wink: I'm learning Cocoa, and when I see code like yours I am the one who gets puzzled, much as you are by the above convention. So I think your suggestion is probably not a good one for the typical C# programmer, despite what you may think, considering that many of them are familiar with some form of Microsoft BASIC and not with Objective-C and Cocoa.

Hey Now Jeff,
Monkeypatching, I didn’t know of that term till after I read the post. You gotta love learning from reading Coding Horror.

Coding Horror Fan,
Catto

Also called duck punching in languages with duck typing.

The idea being that if it walks like a duck and talks like a duck, it’s a duck, right? So if this duck is not giving you the noise that you want, you’ve got to just punch that duck until it returns what you expect.
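In Python terms, punching one particular duck might look something like this (a sketch with invented Duck names, not anyone's real code):

```python
import types

class Duck:
    def quack(self):
        return "quack"

# One duck isn't giving us the noise we want...
angry_duck = Duck()

def loud_quack(self):
    return "QUACK!!!"

# ...so we punch that one instance until it returns what we expect.
angry_duck.quack = types.MethodType(loud_quack, angry_duck)

print(angry_duck.quack())  # the punched duck: QUACK!!!
print(Duck().quack())      # other ducks are unaffected: quack
```

Note that only the single instance is patched here; every other duck keeps quacking normally.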

Indeed, you can monkey-patch in Python, including monkey-patching individual instances. As far as I know, all of the metaprogramming techniques in Ruby have a nearly direct parallel in Python (except for the aforementioned restrictions on builtin types).
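For instance, patching at the class level in Python affects every instance at once, while the builtin types mentioned above refuse to be patched. A quick sketch (the Greeter class is invented for illustration):

```python
class Greeter:
    def greet(self):
        return "hello"

g = Greeter()

# Patch the class itself: every instance, even pre-existing ones, sees it.
Greeter.greet = lambda self: "howdy"
print(g.greet())  # howdy

# The builtin-type restriction: this raises TypeError in CPython.
try:
    str.left = lambda self, n: self[:n]
except TypeError as e:
    print("can't patch builtins:", e)
```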

That monkey-patching is a more pejorative term to Pythonistas than to Rubyists has a lot to do with the culture surrounding the languages, obviously. But I've noticed an interesting coincidence.

Black-magic metaprogramming in Python produces code littered with __dict__s and __class__es, dunders and quoted identifiers. Its appearance just screams "I am rewiring my microwave and voiding the warranty," and is generally quite visually distinct.
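A small illustrative sketch of that look, building and rewiring a class at runtime (all names invented):

```python
# Building a class dynamically: dunders and quoted identifiers everywhere.
def make_method(name):
    def method(self):
        return f"called {name}"
    method.__name__ = name
    return method

# type() with three arguments creates a class on the fly.
Patched = type("Patched", (object,), {"__doc__": "built at runtime"})
setattr(Patched, "shout", make_method("shout"))

p = Patched()
print(p.shout())                    # called shout
print(Patched.__name__)             # Patched
print("shout" in Patched.__dict__)  # True
```

Even this modest example is visibly "rewired": the class name and method name both appear as quoted strings, and the dunders pile up fast.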

On the other hand, Ruby metaprogramming code just looks like any other Ruby code, unless you’re paying particular attention to what methods it’s calling. It doesn’t look out of the ordinary.

I wouldn't be surprised if the one had something to do with the other :slight_smile:

I could see how this could be really, really bad- confusing the hell out of the other people you work with.

Nobody enjoys it when what you have worked with for the last 10 years changes, even the slightest. (See: Vista.)

Yeah, I’m no language designer. Think I’ll take heed, and will stay away from thinking about doing stuff like this.

This monkeypatching is old hat to Lisp and Smalltalk programmers too. All those other languages you mentioned (JavaScript, Perl, Ruby, Python) are flawed in some way compared to Lisp and Smalltalk (slower VMs, disabled lambdas, ugly syntax, string-based eval/no macros, etc.). They have a few bright spots, but for the most part it is far better to use Lisp and Smalltalk as examples whenever metaprogramming is mentioned.

To fully explore the possibilities of monkeypatching, you need to use a Smalltalk system like Squeak, or maybe Self, or maybe Io.

GvR is a respected computer scientist?

In C# you can only override inherited class members. That really doesn’t screw up anything unless you are very creative. I suppose same goes for Java.

Meh.

Summarized: powerful languages give you more ways to shoot yourself in the foot - be careful.

C++ lets you redefine operators; misused, this can result in apparently simple code with complex, hard-to-see, hard-to-debug consequences.

The C preprocessor allowed, via plain ol’ macros, the construction of code that looked nothing like C.

Java was designed as a C-like language which would avoid these pitfalls by removing the capabilities entirely - programmers responded by building preprocessors and auxiliary languages that effectively added them back in.

Let’s not forget the most obvious way of becoming a language designer: actually writing an interpreter or compiler for a new language (usually a domain-specific language). This happens all the time.

And it happens because it’s useful. Really useful. If you have a lot of code that pulls off the leftmost characters from a string, then that code becomes much more clear once your Left() method is introduced.

Don’t run scared from your language. Learn to use it.

Well… at first, I think the debugging madness is mostly a hint that our debuggers are not strong enough. In my opinion, if you run a program with full debug symbols, you should be able to say something like inspect monkey_patched_string in your debugger and see something like:
Attributes: [list]
Methods: [list]

  • Left added in monkeycorp/monkeyserver/monkey_jeff/monkey_file.monkeyextension
  • Inherit modified in evil_joe/evil_plugin/evil_file.monkeyextension
    [more list]

That way, at least finding the problem is not that hard. :slight_smile:
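A rough Python sketch of that kind of bookkeeping (the monkey_patch and inspect_patches helpers here are invented for illustration; this is not a real debugger feature):

```python
import inspect

# (class name, attribute) -> file that applied the patch
_patch_log = {}

def monkey_patch(cls, name, func):
    """Apply a patch and remember which file asked for it."""
    caller = inspect.stack()[1].filename
    _patch_log[(cls.__name__, name)] = caller
    setattr(cls, name, func)

def inspect_patches(cls):
    """Report every patched attribute on cls, debugger-style."""
    return [f"{name} added in {where}"
            for (cname, name), where in _patch_log.items()
            if cname == cls.__name__]

class MonkeyString:
    pass

monkey_patch(MonkeyString, "left", lambda self, n: str(self)[:n])
for line in inspect_patches(MonkeyString):
    print(line)
```

With provenance recorded at patch time, "who added Left?" becomes a lookup rather than an archaeology expedition.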

On the other hand, I agree that monkey patching in a global scope is bad, precisely because it is in the global scope. Anyone who has worked on large C applications knows the pain of not having namespaces (usually, one creates one's own namespaces by using fun names like SNetUtlDebugIterCurrentDefined, for "SNet Utilities List Iterator Function").
If one applied this to the monkey-patched methods over there, you would rather have something like MonkeyServerLeft and EvilPluginInherit, and the problems are gone.

Thus, I think, monkey patching needs to be tied to the scope you are in or needs extra syntax to import monkey patches in order to really work for larger things.
The first version would require some nested module system: a monkeypatch applies to all modules inside the current module, but the runtime system removes it as soon as a monkeypatched object is passed to a sibling or a parent. That way, EvilPlugin's inherit would not interfere with MonkeyServer, unless some other really kinky stuff is going on.

The other way is a bit more general, but requires more work. With it, MonkeyServer could say "keep monkeypatched inherits from EvilPlugin". If he did that, he would get the patched inherits, but it would be documented. If he did not, the patches would be stripped.
The drawback of this more general method is that you need to add code for each method added (though I am still undecided whether added methods need to be annotated in this way).
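Something close to the strip-on-scope-exit idea can already be approximated in Python with a context manager. This sketch (with invented Server and scoped_patch names) restores the original method as soon as the scope ends:

```python
from contextlib import contextmanager

@contextmanager
def scoped_patch(cls, name, func):
    """Apply a patch only inside the with-block; restore it on exit."""
    missing = object()
    original = getattr(cls, name, missing)
    setattr(cls, name, func)
    try:
        yield
    finally:
        if original is missing:
            delattr(cls, name)
        else:
            setattr(cls, name, original)

class Server:
    def inherit(self):
        return "normal"

with scoped_patch(Server, "inherit", lambda self: "evil"):
    print(Server().inherit())  # evil, inside the scope

print(Server().inherit())      # normal again outside
```

It isn't a real nested module system, but it shows the same principle: the patch's reach is bounded, and code outside the scope never sees it.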

Greetings

Welcome to the Monkeyhouse. It looks like modern languages have invented new ways of keeping old challenges unresolved.

I absolutely agree that problems caused by monkey patching are the result of cultural problems, not technical problems. Objective-C has had categories for over 20 years, and there hasn’t been an apocalypse yet. Of course, the use of categories for overriding existing methods is strongly discouraged (http://developer.apple.com/documentation/Cocoa/Conceptual/ObjectiveC/Articles/chapter_4_section_3.html#//apple_ref/doc/uid/TP30001163-CH20-TPXREF141), so there’s the cultural factor at work.

I find it discouraging that the hard-won lessons of older languages like Lisp, Smalltalk, and Objective-C seem to get lost in the buzz around newer dynamic languages. From an engineering standpoint, trendy != good. I think Ruby, Python, et al. are giving older dynamic languages a bad reputation.

Luckily I program only VB

Good examples of the evils of monkey-patching can, I think, be seen in Javascript – probably the most-widespread programming language in the world today, since you have at least as many JS interpreters on your computer as you have web browsers.

Popular, widespread frameworks like Prototype change (or, used to – don’t know if they managed to fix it by now, but I don’t think so) the innards of built-ins to the point of breaking OTHER frameworks (and much other JS code written for JS proper, without knowledge of the exact details of monkey-patching provided by this or that framework).

Just as theory predicts, it is thus shown in practice that independent frameworks (and even modest snippets of code one is trying to reuse) cannot live together if one or more of them uses monkey-patching, because monkey-patching's effects are covert, global, and pervasive. Even the modest amount of monkey-patching allowed in Python (e.g. changing builtins, as in Andrew Dalke's example) would be deleterious if popular modules used it. And a feature that's not to be used should not be in the language: I've long maintained that builtins should be locked against runtime patching, that it should be possible to similarly lock other modules, and ideally any other object too. Beyond ensuring against accidents, that would ease Python optimizations.
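Python does lock the methods of builtin types, but module-level builtins are still patchable, which is enough to demonstrate the covert, global breakage described above (a deliberately bad-idea sketch with invented names):

```python
import builtins

def library_code(items):
    # Unrelated code somewhere else, quietly depending on len().
    return len(items)

original_len = builtins.len

# A "framework" decides len should never report empty...
builtins.len = lambda obj: max(1, original_len(obj))

print(library_code([]))  # now 1, not 0: breakage at a distance

builtins.len = original_len  # undo the damage
print(library_code([]))      # 0 again
```

Nothing in library_code changed, yet its behavior did; that action-at-a-distance is exactly the coexistence problem.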

Alex

Is there anything more pure and godlike than programming your own programming language?

I wouldn't say you're creating your own language, rather your own framework.

But both cases are equally dangerous :smiley:

and btw, please give a lifetime ban to all lamers who write first-comments and similar crap.

Agreed. I think the love affair programmers have with dynamic languages (granted, monkey patching isn't a symptom of all dynamic languages) is way overblown. You often end up with an overly clever solution that is hard to debug and harder to maintain. These crappy solutions are attractive to people who are more interested in hacking together some code than in taking a disciplined software engineering approach.

public static string Left(string s, int len)
{
    if (s.Length == 0 || len == 0)
        return string.Empty;

    return s.Length <= len ? s : s.Substring(0, len);
}

Even if C# allowed monkey patching, unless you were somehow defining your patches in a system namespace, it would be easy to track down the offending overwriter by checking which using directives you had.

Extension methods in C# roll like that. You don’t get them at all
unless you opt in for them. You can’t universally add .Left to string
that works in code you don’t control unless said code subscribes to
it.

This still poses a problem. What if C# 4.0 added a ‘Left’ method to the String class – and one with a different interface or one that worked slightly differently? Then the extension method, and all code that relies on the extension method, breaks. OK, this is unlikely to happen for ‘Left’ and the String class, but it could very well happen elsewhere. I agree with what Jeff said in the article – extension methods make me nervous.

It’s worth mentioning that monkeypatching is an incredibly valuable
technique for writing unit tests.
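For example, Python ships this capability in its standard library as unittest.mock.patch, which applies a patch for the duration of a test and undoes it afterwards (the cache_key function here is an invented example):

```python
from unittest import mock
import time

def cache_key(prefix):
    # Production code that depends on the wall clock.
    return f"{prefix}-{int(time.time())}"

# In a test, patch time.time so the result is deterministic.
with mock.patch("time.time", return_value=1234567890):
    assert cache_key("user") == "user-1234567890"
```

The patch is scoped to the with-block, so the rest of the test suite sees the real clock.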

Ah, you’re right – dynamically altering methods would be very
useful for testing.

Yes, dynamically altering methods would be very useful for logging, testing, and debugging. But how do you allow this without allowing Monkeypatching? The only thing I can think of is to allow one to call a function (or object method) prior to entering and after exiting the core function – much like aspect-oriented programming. And those functions must not throw or otherwise alter the behavior of the original function. THAT would allow some useful runtime behavior (like dynamic logging) while preventing the bad behavior seen in Monkeypatching.
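A rough Python sketch of that before/after idea, as a decorator whose hooks can observe but never alter the wrapped function's behavior (all names invented):

```python
import functools

def with_hooks(before, after):
    """Run hooks around func; hook failures never alter the result."""
    def decorate(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            try:
                before(func.__name__, args, kwargs)
            except Exception:
                pass  # hooks must not break the core function
            result = func(*args, **kwargs)
            try:
                after(func.__name__, result)
            except Exception:
                pass
            return result
        return wrapper
    return decorate

log = []

@with_hooks(before=lambda name, a, kw: log.append(f"enter {name}"),
            after=lambda name, r: log.append(f"exit {name} -> {r}"))
def add(a, b):
    return a + b

print(add(2, 3))  # 5, with logging as a side effect
print(log)        # ['enter add', 'exit add -> 5']
```

Because the hooks' exceptions are swallowed and their return values ignored, they can log or trace but cannot change what add returns, which is the aspect-oriented constraint described above.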

Here’s to hoping some language writer and/or compiler writer can figure out how to accomplish this.

Remember that recursive FindControl method you wrote? Wouldn’t that be a good example of a partial method?

I guess this stuff isn’t for breaking programming languages, but more as a tool to augment closed libraries with useful functionality - useful for the User of these libraries.

Use it sparingly.

Ah, and yes: Monkey patching is kinda bad - but sometimes you want to be able to hack into your language. Very seldom. But being able to really does make you trust it just that bit more.

I’m an old developer. I’ve been at it for over 35 years. I’ve never been very impressed with the C syntax all over the place. When I was a young fellow in the 70’s I thought that the language FORTH was where everything would go. Extensible languages were just so cool. In hindsight, though, Jeff, I now can see the apocalypse of which you speak. Still and all, it was cool being involved at a time when languages were still evolving.