Revisiting Edit and Continue

Steve,

I agree that EC is a useful tool for folks building prototypes who may not have TDD skills, but I build prototypes without EC. I have yet to find a single justification for EC that design-for-testability and TDD haven’t simply made irrelevant.

I build the same kind of apps that most EC users build. The need for TDD isn’t driven by the kind of application under construction.

It’s not that TDD is a replacement for EC, it’s that TDD changes the fundamental aspects of design so that EC isn’t needed. And the result is software that is clean and grokkable, much more of a pleasure and much less of a pain to build on and maintain.

The productivity comes from the software designs that TDD practice produces, designs that don’t carry the obstructions to progress inherent in the opaque software that traditional practices produce.

The entire reason and justification for things like EC is wrapped up in a need to surmount obstacles that are themselves inherent in traditional design and implementation approaches.

It might sound crazy to hear a group of developers suggest that in TDD we’ve found a new route to productivity and that it’s a greater store of productivity than we’ve ever found before, but that is precisely what Agilists and TDD’ers are saying.

There’s no shortage of TDD practitioners to sing the praises of TDD; what’s required now are ears that haven’t been desensitized by the numbing drone of traditional development.

The need for EC is a symptom of certain styles of software design that have become prevalent during an age when Microsoft opened the flood gates to a segment of developers who were needed during a coder staffing crisis in the boom years.

EC is like putting a Band-Aid on a shotgun wound. The real problem is the designs that necessitate EC. Even worse are the people who negligently continue to insist on these designs and who are too lethargic to learn a few new tricks.

The greatest shame rests with Microsoft for encouraging Mort to continue to engage in negligent software practices, but there’s little hope that Microsoft will take a responsible stance and desist from its own negligence, since the resulting deflation in productivity causes a perceived need for throngs of Mort programmers who in turn drive millions of dollars in revenue for expensive Microsoft development tools.

Teaching some people how to use DEBUG mode to begin with is something of a challenge.

The concept that the debugger itself can tell you the exact state of your application at any given time seems to simply elude them. Run, crash, recompile, they say. Of course, many of them take HOURS to track down the simplest of glitches induced by a moment of distraction or a simple typo.

Indeed, Edit and Continue is perfectly suited to the simple typo fix, and allows you to test the change in place without having to waste time recompiling, rerunning, and returning to the exact same code path. Edit and Continue actually DEFIES serious code changes, and as such, serious modifications require that you shut down your app, sit back, and think a bit.

It is NOT about Edit and Continue at all, but rather about a poor experience with an early, ineffective debugger that permanently scarred them, or about people who are paid by the hour looking for ways to explain why it takes them 8x as long to fix problems as their compatriots who use the debugger.

In perhaps the most perverse use of the word, these people are Luddites, plain and simple!

There are those out there that don’t use debugging simply because they have never been taught how. Take some time, show them how it is done. My experience says that it should take no more than a year or two of concerted effort before they finally clue in, faster in many cases. Very few people prefer to do things the hard way, but there are the occasional purists who will resist as long as possible.

Scott,

See, I “get” where TDD is useful for biz objects, transactions, and other objects which encapsulate application logic. But I don’t get TDD for UI development, which is where 90% of my pain during development is. 90% of my code is plumbing/data-pump code: put data in, get data out, and display it. And that’s the easiest, and most tedious, code to have to write. But the remaining 10% is where most of my errors occur.

Have you practiced the Model-View-Controller pattern for UI code testability? It’s an Inversion of Control pattern for UI that gets the testable code out of the opaque containers that Microsoft’s tools create and reshapes it into testable patterns.

There’s usually some percentage of UI code that is better tested with human eyes and hands. The goal is to reduce that percentage to as low a number as possible and to automate the rest of the testing using tools like NUnit. If you’ve got some high-end tool to automate part of the remaining code, then you can drive toward an even smaller percentage of human-tested code. However, a significant number of UI test tools are more trouble than they’re worth.
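To make the separation concrete, here’s a minimal sketch of the idea (the ILoginView, IAuthService, and LoginPresenter names are made up for illustration, not from any particular framework): the form becomes a thin shell, and the logic lands in a plain class that NUnit can exercise with fake view and service objects.

```csharp
// Hypothetical names for illustration -- not from any particular framework.
// The form implements ILoginView and forwards clicks to the presenter;
// the presenter holds the logic and never touches a control directly.
public interface ILoginView
{
    string UserName { get; }
    string Password { get; }
    void ShowError(string message);
}

public interface IAuthService
{
    bool Authenticate(string userName, string password);
}

public class LoginPresenter
{
    private readonly ILoginView view;
    private readonly IAuthService auth;  // injected, so a test can pass a fake

    public LoginPresenter(ILoginView view, IAuthService auth)
    {
        this.view = view;
        this.auth = auth;
    }

    // All of this runs in an NUnit test with a fake view and a fake service:
    // no form, no designer, no debugger session required.
    public bool LogIn()
    {
        if (!auth.Authenticate(view.UserName, view.Password))
        {
            view.ShowError("Invalid user name or password.");
            return false;
        }
        return true;
    }
}
```

What’s left on the form is little more than wiring a button click to presenter.LogIn(), which is the sliver that stays in the human-tested percentage.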

How do I “test” that an event fired correctly?

Are you asking how to test the wire-up between an event and a handler, whether the handler did the right thing, or whether the event was actually raised by some UI interaction?
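For the first two cases, a short NUnit test covers it. As a rough sketch (the OrderModel class and its TotalChanged event are invented for the example, not anyone’s production code):

```csharp
using NUnit.Framework;

// Invented example class: raises TotalChanged whenever Total is set.
public class OrderModel
{
    public event System.EventHandler TotalChanged;

    private decimal total;
    public decimal Total
    {
        get { return total; }
        set
        {
            total = value;
            if (TotalChanged != null)
                TotalChanged(this, System.EventArgs.Empty);
        }
    }
}

[TestFixture]
public class OrderModelTests
{
    [Test]
    public void SettingTotalRaisesTotalChanged()
    {
        OrderModel model = new OrderModel();
        bool raised = false;
        model.TotalChanged += delegate { raised = true; };  // record that the event fired

        model.Total = 42m;

        Assert.IsTrue(raised, "TotalChanged should be raised when Total changes.");
    }
}
```

Checking that an actual mouse click in a running form raises the event is the part that usually stays with human eyes and hands.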

Correction:

Meant to say Model-View-Presenter… Freudian slip.

What, you want to save games anywhere? YOU SPOILED LIMP-WRISTED WIMP!

But I agree that debugging is good, and Edit and Continue is better. Knowing how to use this stuff is essential, sure, but when is that not true?

This reminds me of an episode on the Adobe forums. You see, the venerable word-processing/DTP application FrameMaker only recently got multi-level undo/redo. Previously, you could undo only the last change, and that was that.

Now one user (a well-informed and generally very helpful FrameMaker veteran) came up and said that multiple undo levels were bad because they would seduce people to type without thinking since they could simply undo everything!

Clearly, this line of thought is indefensible since you’d really have to demand the abolishment of even a single undo level, or of all computer word processing, or of using paper instead of etching words into stone, if you followed it to the end.

But for many people, the belief that convenient tools create bad attitudes has always been very attractive… I think it was Plato who complained about the invention of writing, since people wouldn’t learn anything by heart anymore!

Funny how this has degenerated into a conversation about the merits of TDD. I really think that deserves another post.

I think the real issue is: was adding EC to C# in VS.NET the BEST use of time, considering all the other potential features that could be added instead?

If not, what would you put in front of EC that isn’t being added?

I think I’ll be taking the middle ground on this one, uncharacteristically… I think EC is an interesting, useful tool. Considering that Smalltalk and Microsoft QBasic had this functionality over ten years ago, it’s about time. (Even VisualAge Java had this feature five years ago.)

There are times when I’ve traced through a complicated call stack, found a silly mistake, and wanted to fix it right then and there. EC allows me to do so without having to start over and step through the code again.

That said… if you’re using the debugger much at all, you’re behind the times. Techniques like TDD (http://www.jamesshore.com/Blog/Red-Green-Refactor.html) and Fail Fast (http://www.martinfowler.com/ieeeSoftware/failFast.pdf), combined with good simple design and refactoring, mean that you shouldn’t need to use the debugger. If you’re debugging more than, oh, once a day, you have some skills to learn. The experienced programmers I know crack open the debugger less than once a week.

So EC is interesting and useful from a technical point of view. But I doubt I’ll be using it much.

Cheers,
Jim

I think Scott and Jim have already covered all the next points I would have made and all I really want to say further on this. I have been attempting to “pull up” this community so that Morts and everyone else learn about these ways of working, but Microsoft comes along and keeps dumbing it down and out with their tools (MSF, VS Team “test”, EC, etc.). If you want to look at ways to dramatically improve the way you work, you have to think outside the (tight) box Microsoft paints you into. That’s all I think I can say, as I have no religion or anything to sell. I have tried all the other ways for the last 22 years and they didn’t work for me, so I stand by what I say, because I live it every day with myself and my team and it works. Thanks for the discussion.

I think they are just afraid that some bad scripting kiddies might be able to produce better software. :)

I agree, in a bizarre way, with Scott Bellware. He, and his ilk, obviously don’t know how to use EC and, therefore, shouldn’t use it. I wish they would spare me their condescending concern for us Morts. I do what I can to evangelize the use of unit tests, TDD, and other Agile practices among my fellow Morts. There is absolutely no conflict between using EC wisely and using TDD. You folks are doing nothing more than hurting the adoption of Agile practices with your uninformed criticisms of EC. It might be a good idea to just shut up for a minute and listen to the folks who have differing opinions. You just might learn something.

sigh

John,

I spent some time with a Mort who thought if he could only teach me how to use EC that I would see the light. He taught, I learned, it was just debugging in live mode on code that wouldn’t need to be debugged through layers of opacity if the layering wasn’t built to be opaque.

EC is a trouble-shooting crutch for folks who have simply copped out on learning about software layer opacity, IoC, and DI.

It’s never been a matter of knowing how to use a tool like EC, but of how to use a tool effectively under the guise of an effective methodology. It’s the methodology that precipitates the EC tool that I take issue with. The tool itself is largely benign, except that it perpetuates lackluster and less effective practices relative to TDD.

To wit, I don’t think we’re hurting Agile’s adoption among Morts. I have yet to meet a Mort who was interested in Agile. I don’t really think it’s much of a target audience for Agile.

I’m with Jack Greenfield on the notion that a small percentage of Microsoft developers are likely to ever be able to adopt Agile. But I would posit that if the non-Mort developers adopt Agile, that we’ll likely need a lot fewer Morts.

I look forward to your writings on TDD for Mort.

This is one of those cases of protesting a tool because it has the potential to create bad practices. The reality is that Edit and Continue does have a useful role to play in fixing small “oops”-type bugs without throwing out the state and restarting.

However, I agree strongly that using it frequently may mean that you failed to put proper unit tests in your code: you shouldn’t often be at the point of editing and continuing, as your test cases should point out the flaw. Additionally, making an edit “on the fly” invalidates any unit tests that have already run, which means another pass will need to be done anyway to be sure that everything is clean, and good unit tests will recreate the state you need on that next pass.
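A minimal illustration of that last point, using NUnit’s [SetUp] (the cart contents here are made up, the mechanism is the point):

```csharp
using NUnit.Framework;
using System.Collections;

[TestFixture]
public class ShoppingCartTests
{
    private ArrayList cart;

    // NUnit runs this before every test, so the state you would otherwise
    // rebuild by hand after restarting the debugger is recreated automatically.
    [SetUp]
    public void CreateCartWithTwoItems()
    {
        cart = new ArrayList();
        cart.Add("book");
        cart.Add("pen");
    }

    [Test]
    public void RemovingAnItemShrinksTheCart()
    {
        cart.Remove("pen");
        Assert.AreEqual(1, cart.Count);
    }
}
```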

Totally agreeing here again. It’s a burden when you can’t save anywhere in a game, even though “genius” gamers don’t need it. Plus, it’s simply stupid not to be able to save anywhere when you’re playing and another event occurs and you need to leave the PC to somebody else, or shut it down, etc…
Or the next save point was 2 min later, and you’ve been playing for 30, and you HAVE to stop the game.

What I find horrible is that some people can make such definitive assertions, that “edit and continue” IS bad, forever and always…
Pffff.
Be honest! Who among us developers has never forgotten an “i++ / i=i+1…” line of code, ending up in an endless loop which, because of an overflow, gives an error you don’t understand?
What’s simpler: reading and understanding ‘error 80521a1d line 425’, or just debugging it?
Those people pretending to own the truth and claiming that “edit and continue” is bad are like the ones saying computers are destroying kids’ intelligence, or such stupid stuff… In the same way that computers can co-exist with their “opponents” (books/ebooks for reading, handwriting, music players, etc…), “edit and continue” can coexist with “edit and compile”…
Thanks for pointing this out. Your blog is great!
Tom

I think the comparison to gaming is a little flawed. The point of a game is to provide entertainment, but things aren’t as entertaining if they’re totally unchallenging. Thus, one contrived way to increase the difficulty of a game is the inclusion of save points rather than being able to save anywhere (as a side bonus, it also cuts down on the amount of information you need to store).

By comparison, writing software is supposed to be easier. Like a game, it might be entertaining if you really like to do it. But anything that makes it harder is downright silly.

Therefore, the most convincing argument against EnC is that it does, in fact, make writing software harder. For instance, if you could show that code productivity decreased because people relied too much on EnC (at the expense of more sensible things like good design, etc.) and caused an overall drop in software quality, then indeed EnC would be bad. The other arguments (“don’t be in the debugger unless you have to”, etc.) tend to be spurious or based on opinions rather than a process of reasoning.

Don’t get me wrong: I’m squarely in the camp that EnC is one of the greatest inventions since sliced bread (or maybe that’s milk-dunked Oreos… I haven’t decided yet). But I see where the opposite side is coming from.

I appreciate your thinking on this matter and quoting me. With all due respect, I think your analogy with hardcore gamers doesn’t make much sense here. I am arguing against being in the debugger at all. It’s a large waste of effort for little result. Don’t get me wrong (and please don’t quote me as being against the debugger), because it’s not a black-and-white thing. When you need a debugger, you need one real bad. But those times should be few and far between. Debuggers are for intractable problems that you can’t find. Unit tests are a far better use of time because 1) they are quick to write and 2) they produce better value for you and the company because the “test” is now repeatable and automated. Also, they serve as API documentation. I think EC comes out of the sloppy, just-hit-F5 world of VB, where people don’t think before they run. I think it’s not just dangerous but a big waste of time. I just wish people could see that they can write 2-4 line NUnit tests in about the same time as debugging, and it’s a lot more rigorous in thinking, value added, etc.
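To put a number on it, a test of that size might be nothing more than this rough sketch, with a made-up Invoice class standing in for whatever you would otherwise have stepped through in the debugger:

```csharp
using NUnit.Framework;

// Made-up class, only here to give the test something to check.
public class Invoice
{
    private readonly decimal amount;
    private readonly decimal taxRate;

    public Invoice(decimal amount, decimal taxRate)
    {
        this.amount = amount;
        this.taxRate = taxRate;
    }

    public decimal Total()
    {
        return amount + (amount * taxRate);
    }
}

[TestFixture]
public class InvoiceTests
{
    // The test itself is the "2-4 line" part; unlike a debugging session,
    // it runs again on every build and documents the API as it goes.
    [Test]
    public void TotalIncludesSalesTax()
    {
        Invoice invoice = new Invoice(100m, 0.08m);
        Assert.AreEqual(108m, invoice.Total());
    }
}
```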

Respectfully, I fully stand by my position. Thanks for listening. Cheers, Sam

Edit and Continue is a tool. It should be up to the programmer to use whatever tools are available for his particular situation, methodology aside. One of my favorite debugging techniques is changing the instruction pointer while I’m debugging. I’m not designing when I do it; I’m trying to save TIME. EC is great for investigative work. I often use the debugger to trace through legacy code to understand how it works (absent good documentation). I use a debugger just to view the data being passed back to gain insight into how a black-box library call works. EC and changing the instruction pointer can save tons of time in these scenarios.

correct stupid off by one errors and associative array references.

Scott, this is probably exactly the case where it would benefit you to write some unit tests before fixing an off-by-one error. Write tests to expose the error. Write tests to make sure correcting the off-by-one doesn’t cause other problems.

Simple inline fixes to off-by-one errors are notorious for introducing unforeseen bugs. For example, I had an off-by-one error in VB back in the day. A simple inline fix introduced another off-by-one error. It turned out that what I thought was a 0-indexed array was a 1-indexed array. The “obvious” fix introduced another bug.

Also, you never know when another programmer comes later, is working in that code, and changes it for some reason (perhaps indirectly by changing a function that supplies the index. Who knows?). It’d be nice to have a unit test to catch that.
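As a sketch of the kind of tests being suggested (the ReportPager class and its numbers are hypothetical, chosen only to give the boundary somewhere to live):

```csharp
using NUnit.Framework;

// Hypothetical paging helper, just to have an off-by-one boundary to pin down.
public class ReportPager
{
    private readonly int totalRows;
    private readonly int rowsPerPage;

    public ReportPager(int totalRows, int rowsPerPage)
    {
        this.totalRows = totalRows;
        this.rowsPerPage = rowsPerPage;
    }

    public int PageCount()
    {
        return (totalRows + rowsPerPage - 1) / rowsPerPage;  // ceiling division
    }

    public int RowsOnPage(int page)  // pages are 1-based
    {
        int remaining = totalRows - (page - 1) * rowsPerPage;
        if (remaining < 0) return 0;
        return remaining > rowsPerPage ? rowsPerPage : remaining;
    }
}

[TestFixture]
public class ReportPagerTests
{
    // Boundary tests like these expose the off-by-one before the fix
    // and keep guarding it after the next person edits the code.
    [Test]
    public void LastPartialPageHoldsOnlyTheLeftoverRows()
    {
        ReportPager pager = new ReportPager(25, 10);

        Assert.AreEqual(3, pager.PageCount());
        Assert.AreEqual(5, pager.RowsOnPage(3));  // rows 21..25, the classic off-by-one spot
    }
}
```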

That said, I consider myself a TDD’er, but I’m not exactly a crusader. I don’t think TDD is necessarily incompatible with EC, and I don’t think TDD necessarily invalidates EC.

I just think TDD is more valuable than EC. But if the majority of MS developers disagree, by all means, use EC to your heart’s content.

But also take some time to look at TDD and learn how to minimize reliance on EC.

Well John, my contention is that once a Mort becomes a TDD’er, he is no longer interested in being a Mort - he has largely moved on from Mort practices. Mort practices and Agile practices are largely relative opposites.

I’ve introduced quite a few Morts to TDD through the workshops I’ve been doing in the US and Canada in conjunction with .NET user groups, as well as through consulting gigs. I’ve never been able to get a Mort to adopt TDD and still remain a Mort. By the time a Mort becomes interested in practicing TDD, he is already on the way out of Mort territory. So I guess it’s more appropriate to say that I’ve tried to encourage recovering Morts through the transition, and I’ve helped turn on the TDD lights for a few open-minded folks who hadn’t yet considered the possibility of a non-flat Earth.

The times where I have been asked to teach non-receptive, die-hard Morts to do TDD, the practices and disciplines often slid down Mort’s back like water off a duck - TDD often doesn’t stick to a die-hard Mort. Sometimes the material opens Mort’s eyes to another dimension of software development, but those aren’t the majority of situations. In these cases, the best I can hope for is to introduce them to mere automated testing, and usually not even the mere test-first practices stick - practices which would ultimately clear the path to understanding testability.

Many Morts don’t naturally think about testability. Many Morts are only now thinking about testing, and only because Microsoft is telling them to, with VSTS’ arrival on the scene. Testing and testability aren’t necessarily the same dimension of concern. Not recognizing this is often the cause of failures in new developer-testing efforts. Once a Mort adds testability concerns to developer-testing concerns, I would venture that he has started down the path to Mort recovery.

In my experience, the die-hard Mort is a Mort because continuous study and practice aren’t a part of his work method. Die-hard Morts typically learn the latest way to create opacity through Visual Studio designers, and that’s where it ends. Mort is often a Visual Studio user, period. And that alone doesn’t make for a good developer, in much the same way that owning a license for Microsoft Word doesn’t make someone a good writer.

I’m not worried about offending Morts with recriminations about their low-sustainability practices - especially when they hold tight to these practices out of fear or lethargy because those are recriminable qualities for a software developer, and low-sustainability is a recriminable quality for software. When Mort finally learns that testability offers much greater sustainability over Visual Studio’s RAD tooling, then Mort will be a thing of the past.

Oh, and John… “Mort” is a generalization. So I don’t feel that it’s inappropriate to speak generally about the persona.

the real question is (as Phil said): should Microsoft have spent time on EC over other features?

and i think Lisa Simpson said it best:

“You’ll never go broke appealing to the lowest common denominator”

i don’t mean this in an elitist way: i think it’s a true and healthy point of view.

if the language features you choose to include will widen the base of users, then it will ultimately be a good thing. It’s the old distinction between low floor and low ceiling.

In some earlier thread on this topic, someone said “why shouldn’t a debugger allow you to actually debug?”

An excellent use of a straw man argument, Jeff!

cheers
lb